
Theory of

Knowledge
Structures and Processes



World Scientific Series in Information Studies
(ISSN: 1793-7876)

Series Editor: Mark Burgin (University of California, Los Angeles, USA)

International Advisory Board:


Søren Brier (Copenhagen Business School, Copenhagen, Denmark)
Tony Bryant (Leeds Metropolitan University, Leeds, United Kingdom)
Gordana Dodig-Crnkovic (Mälardalen University, Eskilstuna, Sweden)
Wolfgang Hofkirchner (ICT&S Center, University of Salzburg, Salzburg, Austria)
William R King (University of Pittsburgh, Pittsburgh, USA)

Vol. 1 Theory of Information — Fundamentality, Diversity and Unification


by Mark Burgin

Vol. 2 Information and Computation — Essays on Scientific and Philosophical


Understanding of Foundations of Information and Computation
edited by Gordana Dodig-Crnkovic & Mark Burgin

Vol. 3 Emergent Information — A Unified Theory of Information Framework


by Wolfgang Hofkirchner

Vol. 4 An Information Approach to Mitochondrial Dysfunction:


Extending Swerdlow’s Hypothesis
by Rodrick Wallace

Vol. 5 Theory of Knowledge: Structures and Processes


by Mark Burgin



World Scientific Series in Information Studies — Vol. 5

Theory of
Knowledge
Structures and Processes

Mark Burgin
University of California, Los Angeles, USA

World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TAIPEI • CHENNAI • TOKYO



Published by
World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

Library of Congress Cataloging-in-Publication Data


Names: Burgin, M. S. (Mark Semenovich), author.
Title: Theory of knowledge : structures and processes / Mark Burgin.
Description: New Jersey : World Scientific, 2016. | Series: World Scientific series in information
studies ; Volume 5 | Includes bibliographical references and index.
Identifiers: LCCN 2015049963 | ISBN 9789814522670 (hc : alk. paper)
Subjects: LCSH: Knowledge, Theory of.
Classification: LCC BD161 .B865 2016 | DDC 121--dc23
LC record available at http://lccn.loc.gov/2015049963

British Library Cataloguing-in-Publication Data


A catalogue record for this book is available from the British Library.

Copyright © 2017 by World Scientific Publishing Co. Pte. Ltd.


All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means,
electronic or mechanical, including photocopying, recording or any information storage and retrieval
system now known or to be invented, without written permission from the publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance
Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy
is not required from the publisher.

Desk Editors: Dr. Sree Meenakshi Sajani/Tan Rok Ting

Typeset by Stallion Press


Email: enquiries@stallionpress.com

Printed in Singapore




Contents

Preface ix
Acknowledgments xiii
About the Author xv

1. Introduction 1
1.1. The role of knowledge in the contemporary
society . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2. A brief history of knowledge studies . . . . . . . . . . 9
1.3. Structure of the book . . . . . . . . . . . . . . . . . . 39

2. Knowledge Characteristics and Typology 45


2.1. The differentiation and classification of knowledge . . 45
2.2. Existential characteristics of knowledge . . . . . . . . 77
2.3. Descriptive properties of knowledge
and corresponding typology . . . . . . . . . . . . . . . 91
2.3.1. Dimensions and other characteristics
of knowledge . . . . . . . . . . . . . . . . . . . 94
2.3.2. Correctness, relevance, and consistency
of knowledge . . . . . . . . . . . . . . . . . . . 96
2.3.3. Confidence in and certainty of knowledge . . . 119
2.3.4. Complexity and clarity of knowledge . . . . . . 122
2.3.5. Significance of knowledge . . . . . . . . . . . . 131
2.3.6. Efficiency of knowledge . . . . . . . . . . . . . 134
2.3.7. Reliability of knowledge . . . . . . . . . . . . . 136


2.3.8. Abstractness and generality of knowledge . . . 137


2.3.9. Completeness of knowledge versus precision
of knowledge . . . . . . . . . . . . . . . . . . . 139
2.3.10. Meaning of knowledge . . . . . . . . . . . . . . 140
2.3.11. Other descriptive properties of knowledge . . . 149
2.4. Metaknowledge and metadata . . . . . . . . . . . . . . 151

3. Knowledge Evaluation and Validation in the Context of Epistemic Structures 169
3.1. Knowledge in the context of epistemic structures
and knowledge scales . . . . . . . . . . . . . . . . . . . 170
3.2. Knowledge evaluation, justification, and testing . . . . 215
3.2.1. Knowledge evaluation . . . . . . . . . . . . . . 215
3.2.2. Knowledge validation, justification,
and testing . . . . . . . . . . . . . . . . . . . . 240
3.3. Local consistency versus global consistency
in knowledge representation . . . . . . . . . . . . . . . 263

4. Knowledge Structure and Functioning: Microlevel or Quantum Theory of Knowledge 307
4.1. Basic structures of knowledge units on the quantum
level — knowledge quanta and semantic links . . . . . 309
4.1.1. Quantum theory of knowledge (QTK) . . . . . 310
4.1.2. Semantic link network theory (SLNT) and
Semantic link theory of knowledge (SLTK) . . 329
4.1.3. QTK–SLTK connection . . . . . . . . . . . . . 340
4.2. Signs and symbols as quantum units of knowledge . . 343
4.3. Operations with and relations between quantum
knowledge units . . . . . . . . . . . . . . . . . . . . . . 358
4.3.1. Properties of and relations between nodes
and links in SLN and knowledge quanta
in QTK . . . . . . . . . . . . . . . . . . . . . . 360
4.3.2. Operations with extended knowledge
quanta . . . . . . . . . . . . . . . . . . . . . . 369
4.3.3. Operations with symbolic knowledge quanta
and complete semantic links . . . . . . . . . . 380

5. Knowledge Structure and Functioning: Macrolevel or Theory of Average Knowledge 395
5.1. Language as a universal tool for knowledge
representation . . . . . . . . . . . . . . . . . . . . . . . 402
5.1.1. Natural languages . . . . . . . . . . . . . . . . 403
5.1.2. Languages of science and mathematics . . . . . 411
5.1.3. Algorithmic and programming languages . . . 423
5.2. Logic as a tool for knowledge representation
and production . . . . . . . . . . . . . . . . . . . . . . 428
5.2.1. Concepts, names, terms, and objects . . . . . . 446
5.2.2. Statements, queries, and instructions . . . . . 481
5.2.3. Logical systems of inference . . . . . . . . . . . 491
5.3. Theory of abstract properties . . . . . . . . . . . . . . 500
5.4. Semantic networks and ontology . . . . . . . . . . . . 518
5.5. Scripts and productions . . . . . . . . . . . . . . . . . 527
5.6. Frames and Schemas . . . . . . . . . . . . . . . . . . . 536

6. Knowledge Structure and Functioning: Megalevel or Global Theory of Knowledge 593
6.1. A typology of structures and scientific knowledge . . . 595
6.2. Nuclear and comprehensive knowledge systems . . . . 603
6.3. Logic-linguistic knowledge system and descriptive
knowledge . . . . . . . . . . . . . . . . . . . . . . . . . 612
6.4. Model-representation knowledge system and
representational knowledge . . . . . . . . . . . . . . . 617
6.5. Procedural, axiological and instrumental
knowledge systems, and operational knowledge . . . . 622
6.6. Relations between and operations with global
knowledge systems . . . . . . . . . . . . . . . . . . . . 631
6.7. Hierarchies of knowledge systems . . . . . . . . . . . . 636

7. Knowledge Production, Acquisition, Engineering, and Application 643
7.1. Knowledge production, learning, and acquisition
as basic cognitive processes . . . . . . . . . . . . . . . 644

7.1.1. Scientific cognition . . . . . . . . . . . . . . . . 658


7.1.2. Intuition as a cognitive instrument . . . . . . . 669
7.1.3. Computers and networks as cognitive tools . . 688
7.1.4. Learning . . . . . . . . . . . . . . . . . . . . . 696
7.1.5. Knowledge creation in organizations . . . . . . 705
7.2. Knowledge organization and engineering . . . . . . . . 711
7.3. Knowledge management and application . . . . . . . . 714

8. Knowledge, Data, and Information 721


8.1. Epistemic structures and cognitive information . . . . 722
8.2. Structural aspects of knowledge–information
duality . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
8.3. Information as a source of knowledge . . . . . . . . . . 760
8.4. Dynamic aspects of knowledge, data, and
information interaction . . . . . . . . . . . . . . . . . . 766
8.5. Knowledge as a measure of information . . . . . . . . 791

9. Conclusion 803

Appendix 809
A. Set theoretical foundations . . . . . . . . . . . . . . . 809
B. Elements of the theory of algorithms . . . . . . . . . . 819
C. Elements of algebra and category theory . . . . . . . . 825
D. Numbers and numerical functions . . . . . . . . . . . . 831
E. Topological, metric and normed spaces . . . . . . . . . 833

Bibliography 837

Subject Index 927



Preface

If the extent of . . . knowledge is the hallmark of our civilization,
the use to be made of it may be its crisis.
S. Dillon Ripley

An investment in knowledge pays the best interest.


Benjamin Franklin

Knowledge has always been important in society, and all educated people have always understood the importance of knowledge. That is why Western philosophers have studied knowledge as an important phenomenon from the time of Plato and Aristotle. Thinkers from other countries, such as China and India, also tried to understand the essence of knowledge from ancient times.

In contemporary society, the importance of knowledge is much higher and continues to grow very fast. Researchers have concluded that knowledge has become the key strategic asset for the 21st century and for every organization. Consequently, the necessity of developing the best strategy for identifying, developing, and applying knowledge assets has become critical. Every organization needs to invest in creating and implementing the best knowledge networks, processes, methods, tools, and technologies.
The growing need for knowledge and for efficient knowledge organization has intensified studies of knowledge. There are three main directions in these studies:

— The philosophical and methodological direction, which comprises epistemology and the methodology of science and mathematics.
— The area of artificial intelligence (AI), in which knowledge is perceived as the base of intelligence.
— The field of knowledge management, where knowledge is treated as the main asset of companies and organizations.

AI is typically directed at knowledge representation and processing.
Epistemology is largely interested in knowledge definition and
acquisition (cognition).
Knowledge management is mostly concerned with knowledge
organization and utilization.
In addition, knowledge is also explored in psychology, sociology,
and linguistics.
The intensification of studies in the area of knowledge has brought forth a multitude of books on a variety of issues and problems of knowledge. So, why is this book different? It is different because its main goal is to present, organize, and synthesize the basic ideas, results, and concepts from these three directions, which are loosely related now, into a unified theory of knowledge and knowledge processes. It is called the synthetic theory of knowledge. It is multidisciplinary and transdisciplinary at the same time. The approach presented in this book provides a new explanation of the important relations between knowledge and information, demonstrating the new possibilities that the synthetic theory of knowledge makes available for knowledge management, information technology, data mining, information sciences, computer science, knowledge engineering, psychology, social sciences, genetics, and education.
This book explains the essence, structure, and functioning of knowledge and gives answers to the following questions:

— How is knowledge related to information and data?
— How is knowledge modeled by mathematical and logical structures?
— How are these models used to better understand and utilize computers and the Internet, cognition and education, communication and computation?

Knowledge is inseparable from information. People acquire knowledge by receiving cognitive information. At the same time, knowledge, by its essence, contains information, and this is the main feature of knowledge. This intrinsic unity of knowledge and information forms the base of the synthetic theory of knowledge.



Acknowledgments

Many wonderful people have made contributions to my efforts with


this work. I am especially grateful to the staff at World Scientific
and especially, Ms. Tan Rok Ting, for their encouragement and help
in bringing about this publication, as well as to Ms. Raghavarshini
for diligent preparation of this text for publication. I would like
to thank the teachers and especially, my thesis advisor, Alexan-
der Gennadievich Kurosh, who helped shape my scientific viewpoint
and research style. In developing ideas in knowledge theory, I have
benefited from conversations and discussions with many friends and
colleagues. Thus, I am grateful for the interest and helpful discus-
sions with those who have communicated with me on these prob-
lems. I greatly appreciate advice and help of Andrei Nikolayevich
Kolmogorov from Moscow State University in the development of
the holistic view on mathematics and its connections with the phys-
ical world. I have also benefited from the discussions I had with
Michael Arbib from USC on schema theory and with Frank Land
from the London School of Economics and Political Science on knowl-
edge management. Collaboration with Kees de Vey Mestdagh from
the University of Groningen gave much to the development of the
theory of logical varieties as a tool for representing and reasoning
with inconsistent knowledge. Collaboration with Victor Gladun from
Gorsystemotechnika (Kiev) gave much to the development of math-
ematical modeling of semantic networks. Collaboration with Dmitri
Gorsky from the Moscow Institute of Philosophy contributed to the further development of the mathematical theory of concepts. Collaboration with Vladimir Kuznetsov from the Kiev Institute of Philosophy
in the methodology of science contributed to the further development
of mathematical models of scientific theories and global knowledge.
Collaboration with Paul Zellweger from ArborWay Labs and Rex
Gantenbein from the University of Wyoming contributed to better
understanding of knowledge discovery and representation. Credit for
my desire to write this book must go to my academic colleagues.
Their questions and queries made significant contribution to my
understanding of knowledge and information. I would particularly
like to thank many fine participants of the Jacob Marschak Interdis-
ciplinary Colloquium on Mathematics in the Behavioral Sciences at
UCLA and especially, Colloquium Director, Michael Intriligator, for
extensive and helpful discussions on problems of knowledge and infor-
mation that gave me much encouragement for further work in this
direction. Comments and observations of participants of the Applied
Mathematics Colloquium of the Department of Mathematics, Semi-
nar of Theoretical Computer Science of the Department of Computer
Science at UCLA, various conferences where I presented these mate-
rials and the Internet discussion group on Foundations of Information
Science (FIS) were useful in the development of my views on knowl-
edge. I would also like to thank the Departments of Mathematics
and Computer Science in the School of Engineering at UCLA for
providing space, equipment, and helpful discussions.

About the Author

Dr. Mark Burgin received his M.A. and Ph.D. in mathematics from
Moscow State University, which was one of the best universities in
the world at that time, and Doctor of Science in logic and philos-
ophy from the National Academy of Sciences of Ukraine. He was
a Professor at the Institute of Education, Kiev; at International
Solomon University, Kiev; at Kiev State University, Ukraine; and
Head of the Assessment Laboratory in the Research Center of Sci-
ence at the National Academy of Sciences of Ukraine. Currently he
is working at UCLA, USA. Dr. Burgin is a member of the New York
Academy of Sciences and an Honorary Professor of the Aerospace
Academy of Ukraine. Dr. Burgin is a member of the Science Advi-
sory Committee at Science of Information Institute, Washington.
He was the Editor-in-Chief of the international journals Integra-
tion and Information, as well as an Editor and Member of Editorial
Boards of various journals. Dr. Burgin does research, has publications, and has taught courses in various areas of mathematics, artificial
intelligence, information sciences, system theory, computer science,
epistemology, logic, psychology, social sciences, and methodology of
science. He originated theories such as the general theory of informa-
tion, theory of named sets, mathematical theory of schemas, theory
of oracles, hyperprobability theory, system theory of time, theory
of non-Diophantine arithmetics and neoclassical analysis (in mathe-
matics) and made essential contributions to fields such as founda-
tions of mathematics, theory of algorithms and computation, theory
of knowledge, theory of intellectual activity, and complexity studies.


He was the first to discover Non-Diophantine arithmetics, the first to


axiomatize and build mathematical foundations for negative proba-
bility used in physics, finance and economics, and the first to explic-
itly overcome the barrier posed by the Church-Turing Thesis. Dr.
Burgin has authored and co-authored more than 500 papers and
21 books, including “Structural Reality” (2012), “Hypernumbers and
Extrafunctions” (2012), “Theory of Named Sets” (2011), “Theory
of Information” (2010), “Neoclassical Analysis: Calculus Closer to
the Real World” (2008), “Super-recursive Algorithms” (2005), “On
the Nature and Essence of Mathematics” (1998), “Intellectual Com-
ponents of Creativity” (1998), “Fundamental Structures of Knowl-
edge and Information” (1997), “The World of Theories and Power
of Mind” (1992), and “Axiological Aspects of Scientific Theories”
(1991). Dr. Burgin was also the Editor of 8 books.

Chapter 1

Introduction

All men by nature desire knowledge.


Aristotle

There is an abundance of different books and papers treating various


problems and studying different issues of knowledge (cf., for example,
(Aune, 1967; Polanyi, 1974; Cleveland, 1985; Chisholm, 1989; Bloor,
1991; Burgin, 1997; Boisot, 1998; Choo, 1998; Rao, 1998; Pollock
and Cruz, 1999; Bernecker and Dretske, 2000; Bean and Green,
2001; Popper, 2002; Goldman, 2004; Dalkir, 2005; Leydesdorff, 2006;
Magnani, 2007; Nguen, 2008; Fantl and McGrath, 2009; Zhuge,
2012)). Many ideas, models, and theories have been suggested in this area. The whole area of knowledge-related activities consists of three parts:

1. Knowledge studies (theoretical and experimental).


2. Knowledge engineering.
3. Knowledge utilization and management.

The two latter parts belong to knowledge technology: knowledge engineering deals with the technology of knowledge production, organization, transformation, management, preservation, capture, and acquisition, while knowledge utilization studies how people and organizations use knowledge, developing new techniques and approaches for this purpose.


There are three types of knowledge theories:

1. Philosophical theories, which constitute the philosophical discipline called epistemology, are interested in three fundamental problems:
(1) knowledge definition, i.e., trying to find what knowledge is
and how to separate knowledge from beliefs; (2) limits of knowl-
edge acquisition, i.e., what it is possible to know; and (3) ways
of knowledge creation and acquisition, i.e., how knowledge
is obtained.
2. Mathematical theories include mathematical logic, which provides
means for formal knowledge representation and formation; theory
of algorithms, which provides means for knowledge transformation
and preservation dealing mostly with procedural or operational
knowledge (cf., Chapter 6); and mathematical linguistics, which
studies informal knowledge representation and formation.
3. Empirical theories are oriented at the practice of knowledge func-
tioning, including theories of many disciplines, such as artificial
intelligence, knowledge management, knowledge bases, cognitol-
ogy, knowledge acquisition, cognitive psychology, cognitive neu-
roscience, cognitive anthropology, cognitive sociology, education,
and the sociology of knowledge.

Experimental exploration of knowledge emerged in ancient times. A brilliant example of such experimentation is presented in Plato's dialogue Theaetetus, which describes how Socrates and Theaetetus discuss and investigate the essence and nature of knowledge. For a long time, people used mental experiments for knowledge studies. With the advance of computers, computer experiments have become crucial in AI and knowledge management. Besides, various experi-
ments have been conducted with physical carriers of knowledge. For
instance, psychologists, educators and sociologists organized various
experiments examining how people acquire, store and disseminate
knowledge.
All research in the area of knowledge can be divided into three
directions:

• Structural analysis of knowledge strives to understand how knowledge is built and what properties it has.

• Axiological analysis of knowledge aims at the explanation of those features that are primary for knowledge as a social and technological phenomenon.
• Functional analysis of knowledge tries to find how knowledge func-
tions, how it is produced and acquired.

Structural analysis of knowledge is the main tool for the system theory of knowledge, knowledge bases, and artificial intelligence (AI).
Axiological analysis of knowledge is the core instrument for the
philosophy of knowledge, psychology, and social sciences, including
the sociology of knowledge, which is the study of the relationship
between human creativity and the social context within which it
arises, of the effects knowledge has on individuals, organizations and
societies dealing with broad fundamental questions, of the extent and
limits of social influences on cognition, and of the social and cultural
foundations of knowledge about the world.
Functional analysis of knowledge is the key device for epistemol-
ogy, knowledge engineering, and cognitology.

1.1. The role of knowledge in the contemporary society

Knowledge is power.
Francis Bacon

To survive and to prosper, people have always needed knowledge. Through the ages, philosophers have contemplated problems of knowledge and cognition. The importance of knowledge has grown all the time, and now active knowledge assets have become crucial. This is true for
all levels of society. Simply to function in the contemporary society,
any individual needs some basic knowledge. Many organizations feel
obliged to run their business based on efficient knowledge manage-
ment just to keep up. More and more people and organizations are
coming to the understanding that the optimal generation, acquisi-
tion, and application of knowledge is the key to success.
Although the role of knowledge in the economy is not new,
in recent years, knowledge has gained increased importance, both
quantitatively and qualitatively, due to the development and utilization of information processing and communication technologies
(Foray, 2004). The main roles of knowledge are (Tuomi, 1999): a
resource, a product, and a restriction. Indeed, knowledge is clearly
the primary resource in the technologically advanced industries, such
as the computer, communication and software industries, and other
knowledge-intensive industries, such as pharmaceuticals, but it is fast
becoming the primary source of wealth in more traditional sectors of
the economy as well (Stata, 1989). It is also estimated that knowledge
now accounts for approximately three-fourths of the value increase
in the manufacturing sector (Stewart, 1997).
At the same time, in contrast to many other resources, people
can produce knowledge, which now plays the role of a product. As a
result, the importance of knowledge production and creation grows very fast. Governments and other organizations invest more and more in knowledge production.
Knowledge has become an intellectual property, attached to
a name or group of names and certified by copyright, or some
other form of social recognition, e.g., publication or awarding prizes
(Granstrand, 1999). As an economic commodity, knowledge and knowledge production are paid for in the research, communication, and educational areas. As a result, knowledge has moved to the
social overhead investment of society in the form presented in books,
articles, patents or computer programs, written down, printed or
recorded at some point for transmission and utilization (Bell, 1973).
Our civilization is based on knowledge and information process-
ing. In the contemporary knowledge-driven economy, organizations ulti-
mately gain their value from intellectual and knowledge-based assets
rather than material commodities. That is why it is so important to
know the properties of knowledge and how to work with it. For instance,
the principal problem for computer science as well as for computer
technology is to process not only data but also knowledge. Knowl-
edge processing and management make problem solving much more
efficient and are crucial (if not vital) for big companies and insti-
tutions (Ueno, 1987; Osuga, 1989; Dalkir, 2005). To achieve this
goal, it is necessary to make a distinction between knowledge and
knowledge representation, to know the regularities of knowledge structure, functioning, and representation, and to develop software (and in some cases, hardware) that is based on these regularities. Many
intelligent systems search knowledge spaces, which are explicitly or
implicitly predefined by the choice of knowledge representation. In
effect, the knowledge representation serves as a strong bias.
People increasingly rely on AI processing systems, which, in turn,
depend on their software, while information is processed in the
search of knowledge. Sophisticated safety-critical software is embed-
ded in a diversity of systems across most industry sectors, rang-
ing from automotive and aerospace to energy and maritime (Kandel
and Dick, 2005). This situation once more demonstrates the importance
of knowledge because software is a form of operational knowledge
representation.
At the same time, the National Institute of Standards and Tech-
nology (NIST) reported that low-quality software costs the U.S. econ-
omy almost $60 billion per year (Tassey, 2002; Thibodeau, 2002).
Besides, only one quarter of software projects are judged a success
(Standish Group). Software defects are accepted as inevitable by both
the software industry and the long-suffering user community. In any
other engineering discipline, this defect rate would be unacceptable.
Moreover, when safety and security are at stake, the extent of cur-
rent software vulnerability also becomes unsustainable (Croxford and
Chapman, 2005). Therefore, validation of operational knowledge in
the form of software has become an urgent task for contemporary
society.
In our time, the importance of knowledge has grown very fast with the
advancement of society. Thus, in the 20th century, with the advent of
computers, knowledge has become a concern of science. As a result,
now knowledge is studied in such areas as AI, computer science, data-
and knowledge bases, global networks (e.g., the Internet), informa-
tion science, knowledge engineering, and knowledge management.
Philosophers also continue their studies of knowledge (Chisholm,
1989).
However, knowledge is not an easy concept to understand. As
Land et al. (2007) write, knowledge is understood to be a slippery
concept, which has many definitions. This is apparent in the many


questions philosophers and other thinkers ask themselves about the
essence, distinctive characteristics, functions and roles of knowledge
in society. These questions can vary from theoretical considerations
to practical applications.
For instance, relations between knowledge and information are
blurred in contemporary society. Some comprehend knowledge as
a kind of information (cf., for example, (Osuga and Saeki, 1990;
Davenport, 1997; Probst et al., 1999; Gundry, 2001; Stenmark, 2002;
Dalkir, 2005)), while others claim that information is a kind of knowl-
edge (cf., for example, (Kogut and Zander, 1992; Tuomi, 1999)).
In addition, there are opinions that information and knowledge
are essentially different essences (cf., for example, (Davenport and
Prusak, 1998; Lenski, 2004; Burgin, 2010)).
All basic questions about knowledge are related to the way in
which we organize and direct the development and application of
knowledge on different levels, from individuals through companies and organizations to the whole society. For instance, in many
organizations, knowledge management has come to occupy a central
place in their functioning. It is a role that makes great demands
on an organization’s strategic insight, problem solving ability, and
successful development.
As Kalfoglou et al. (2004) write, managing knowledge is a dif-
ficult and tricky enterprise. A wide variety of technologies has to be invoked in providing support for knowledge requirements, ranging from the acquisition, modeling, maintenance, and transmission of knowledge to its dissemination, retrieval, reuse, and publishing. Knowledge is
a valuable asset and resource. So, any toolset capable of providing
support for operating with knowledge would be valuable as its effects
can percolate down to all the application domains structured around
the domain representation.
To reflect the importance of knowledge, the term knowledge society was coined as a description of the contemporary society by its pivotal characteristic. Some researchers suggest that the knowledge society is the next stage of the information society. In essence, every soci-
ety has its own knowledge assets. However, in our times, knowledge
together with information is becoming the key tool not only for fur-
ther development but also for present survival in conditions of the
knowledge economy.
To describe the role of knowledge in contemporary society, Fritz
Machlup (1902–1983) introduced the concept of the knowledge economy in his book (Machlup, 1962). The knowledge economy is a particular knowledge-driven stage of economic development, based on knowl-
edge, succeeding a phase based on physical assets such as workforce,
energy, and matter. Knowledge is in the process of taking the place
of the workforce and other resources making possible getting better
results with less workforce and other resources. Knowledge is substitutable for substance and money, meaning that knowledge can replace,
to some extent, capital, labor, or physical materials. Namely, knowl-
edge allows one to use less money, labor, or physical materials than
it is possible to do without this knowledge. As a result, the cre-
ated wealth is measured less by the output of work itself and more
and more by the general level of scientific and technological develop-
ment (Jaffe and Trajtenberg, 2002). Amidon explained that knowl-
edge about how to produce different products and provide services
as well as their embedded knowledge is often more valuable than
the products and services themselves or the materials they contain
(Amidon, 1997).
That is why Machlup (1962) defined knowledge as a commod-
ity, developing techniques for measuring the magnitude of its pro-
duction and distribution within a modern economy. He correctly
assumed that all devices involved in knowledge production, dissem-
ination, and utilization have to be taken into account in these mea-
surements.
A diversity of activities linked to research, education, and services tends to assume increasing importance in the knowledge econ-
omy. Besides, the importance of knowledge in economic activity is
not confined to the high-tech sectors but also pervades modes of orga-
nization of production and commerce in apparently low-tech sectors,
which have also been essentially transformed. Toffler explains that
knowledge is a wealth and force multiplier, in that it augments what
is available or reduces the amount of resources needed to achieve a
given purpose (Toffler, 1990). Stewart calls knowledge the intellectual capital (Stewart, 2002).
Many researchers, economists, authors, governments, policy-
makers, international organizations, and think tanks declare that
people now live in a knowledge-based economy as knowledge is the
basis for various decisions in different areas, as well as a priceless asset
to individuals and organizations. Moreover, few concepts introduced
by economists have been more successful than that of a knowledge-
based economy reflecting a qualitative transition in economic condi-
tions (Foray and Lundvall, 1996; Leydesdorff, 2006a).
To represent and study this new situation, the economic triple
helix of university–industry–government relations was introduced
(Etzkowitz and Leydesdorff, 1995; 1997; 1998; Leydesdorff, 2006;
2006a). Governance is treated as the force that instantiates and
organizes systems in the socio-geographical dimension of the model.
Industry is the main mover of material production and exchange,
while academe plays the leading role in the organization of the knowl-
edge production function. As a result, knowledge production and
exchange becomes an economy in itself (Foray, 2004) and the devel-
opment of a knowledge base turns out to be essentially dependent
on the condition that knowledge production is socially organized and
regulated.
Naturally, the global economy now places much greater value on
knowledge production and dissemination activities, such as design, with an emphasis on Research and Development (including patenting), on education, and on information efforts such as marketing, network-
ing, computation, and communication. Information is a source for
knowledge, while knowledge is a base for producing and retrieving
information.
Naturally, the importance of knowledge grows very rapidly as society
becomes more and more advanced. As a result, in the 20th century,
with the advent of computers, knowledge has become a concern of
science and now knowledge is studied in such areas as AI, computer
science, data and knowledge bases, global networks (e.g., Inter-
net), information science, knowledge engineering, and knowledge
management. Philosophers also continue their studies of knowledge (Chisholm, 1989).

1.2. A brief history of knowledge studies

Some people drink deeply from the fountain of knowledge.
Others just gargle.
Grant M. Bright

Knowledge has always been important in society. That is why the best minds have been concerned with the problem of knowledge from ancient times. Studies of knowledge formed one of the pivotal philosophical disciplines, which is called epistemology from the Greek words episteme, which means knowledge, and logos, which means cognition,
study or reason. In other words, epistemology is the philosophical
theory of knowledge and cognition.
In this section, we give a very brief exposition of the epistemo-
logical research, presenting approaches of some leading philosophers in the history of human civilization and starting with the most
ancient explorations and ideas.
In the Upanishads, which are among the principal classical texts in Indian culture, written from the end of the second millennium B.C.E. to the middle of the second millennium C.E., two kinds of knowledge, higher knowledge and lower knowledge, were discerned. Later, the Nyaya school of Hindu philosophy considered four types of knowledge acquisition: perception, when the senses make contact with an object; inference; analogy; and the verbal testimony of reliable persons. Inference was used in three forms: a priori inference, a posteriori inference, and inference by common sense.
In general, the theory of knowledge has a long-standing tradition in Indian philosophy, with many achievements and interesting insights. Let us take a few glimpses of this vast field of knowledge developed in ancient India.
In his book “Theories of Knowledge”, Rao presents eight directions
in the philosophical and methodological studies of knowledge in India
(Rao, 1998):

— Samkhya (Yoga) theory of knowledge


— Vedantins’ theories of knowledge
— Visistadvaita theory of knowledge
— Madhva theory of knowledge
— Mimansaka theories of knowledge
— Jaina theory of knowledge
— Buddhist theories of knowledge
— Logician’s (Nyaya) theory of knowledge

The Samkhya (Yoga) theory of knowledge

Samkhya, also Sankhya, Sāṃkhya, or Sāṅkhya, is one of the most prominent and one of the oldest directions in Indian philosophy. It belongs to the six basic schools of classical Indian philosophy. The Bhagavad Gita identifies Samkhya with the understanding of knowledge. The word Samkhya is based upon the Sanskrit word samkhya, which means ‘number’ or ‘perfect knowledge’. The eminent sage Kapila (who lived between the 8th and 6th centuries B.C.E.) was the founder of the Samkhya philosophy.
Samkhya may be characterized as a dualistic realism. It is dualistic because it advocates two ultimate realities: Prakriti (matter) and Purusha (self, spirit, or consciousness). At the same time, Samkhya is a kind of realism, as it considers that both matter and spirit are equally real. In addition, Samkhya is pluralistic because it teaches that Purusha is not one but many.
Samkhya has a developed theory of knowledge discerning three
sources of valid knowledge: perception, inference based on the Sankhya syllogism, and valid testimony. The procedure of knowledge acqui-
sition starts when the sense-organs come in contact with an object
causing sensations and impressions to come to the manas (mind). The
manas processes these impressions into proper forms and converts
them into definite percepts. These percepts are carried to the Mahat
(intellect) inducing changes in Mahat, and Mahat takes the form of
the object, from which these sensations come. This transformation
of Mahat is known as vritti or modification of buddhi. As Mahat is a physical entity, the process of knowledge formation is not yet complete. Thus, the consciousness of the Purusha (self) transforms Mahat, pro-
ducing in it consciousness of the form of the object, from which these
sensations come. To better explain this, the following analogy is used.
A mirror cannot produce an image by itself. It needs light to reflect
and produce the image and thereby reveal the object. In a similar
way, Mahat needs the “light” of the consciousness of the Purusha to
produce knowledge.
Besides, Samkhya discerns two types of perceptions: inde-
terminate (nirvikalpa) perceptions and determinate (savikalpa)
perceptions.
Indeterminate perceptions are like pure sensations or crude
impressions containing no knowledge of the form or the name of the
object. There is only vague awareness about an object.
Determinate perceptions are the mature form of perceptions
obtained from sensations, which have been processed, categorized
and interpreted properly. In turn, determinate perceptions generate
knowledge by inference based on analogy.
Samkhya is related to Yoga, which is a specific religious system
within Hinduism emerging from the older Samkhya system. The the-
oretical part of Yoga, i.e., its philosophy, was derived almost entirely
from Samkhya.

The Vedanta theory of knowledge

Vedanta is one of the most prominent and philosophically advanced of the six basic schools of classical Indian philosophy. According to Bala-
subramanian (2000), the Vedantic philosophy is as old as the Vedas,
since the basic ideas of the Vedanta systems are derived from the
Vedas during the Vedic period (1500–600 B.C.E.). The term veda
means “knowledge” and the term anta means “end”. Thus, Vedanta
means complete knowledge of the Veda. Originally, Vedanta denoted
the Upanishads, a collection of foundational texts in Hinduism con-
sidered as the final layer of the Vedic canon. By the 8th century,
the meaning of Vedanta changed to stand for all philosophical
traditions concerned with interpreting the three basic texts of Hin-


duist philosophy, namely, the Upanishads, the Brahma Sutras, and
the Bhagavad Gita. There are at least 10 schools of Vedanta, understood as the system of philosophy that further develops the implication in the Upanishads that all reality is a single principle, Brahman, and that teaches that the believer's goal is to transcend the limitations of self-identity and achieve unity with Brahman.
According to the Vedanta theory of knowledge, Brahman is self-
indulgent and knowledge is not different from Brahman. Therefore,
knowledge is eternal and without beginning. However, ignorance
also exists until it is destroyed by knowledge. Although knowl-
edge is without beginning, the state of knowing is produced by
mental modification (Vrtti) of the internal organ (Abhivyanjaka).
The Vrtti is four-fold consisting of doubt, definite knowledge,
egoism, and recollection. Knowledge is produced with the help of
two causes, the material cause (Upadana) and the efficient cause
(Nimitta).
The Vedanta theory discerned two types of knowledge: mediate knowledge (Paroksa) and immediate knowledge (Aparoksa). An example of mediate knowledge is the statement “Brahman is”, while an example of immediate knowledge is the statement “I am Brahman” (cf., (Rao, 1998)). Here is another example. The state-
ment “I see fire” is immediate knowledge, while “I see smoke, so there
is fire” is mediate knowledge.
It might be interesting to compare this knowledge classification
with a similar classification of Kant who considered knowledge of two
kinds: intuitions as immediate knowledge and concepts as mediate
knowledge.

The Visistadvaita theory of knowledge

Visistadvaita is a philosophy of religion in which the central idea is the integration and harmonization of all knowledge, while knowledge,
jnaana, is obtained through sense perception, inference, and revela-
tion. According to the Upanisads, knowledge comes from Brahman as
“he who knows the Brahman attains the highest”. This asserts unity
of the threefold system of Vedantic wisdom known as tattva, hita, and purusartha.
Answering the basic question of epistemology about the origin
and possibility of knowledge, Visistadvaita affirms the possibility of getting knowledge about reality, stating that people can know things as
they are. Knowledge essentially presupposes a knowing self and an
object of thought and is obtained in the process of ascent from the
corresponding sensation to the self. Namely, this process starts with
sensations, which form the raw material of knowledge and become
percepts by action of the “a priori” form prescribed by the mind. The
perceived objects are conceived and arranged by the synthetic mind
or understanding which brings together the perceived objects pro-
ducing judgments. Then reason unifies these judgments, forming a conception in the self as the synthetic unity of knowledge. This shows
that knowledge is not a plain synthetic construction, but originates in
a process by which things are revealed. The objects in nature exist
by themselves and are not created by thought, which only reveals
them. Thus, knowledge is the self-revelation of a real object as a
holistic system, while the object is not the copy of the idea, nor is
the idea the archetype of the object, and neither is deduced from the other.
The Visistadvaita theory of knowledge assumes the integrity of
experience on all its levels and forms, which constitute pratyaksa
(perception), anumana (inference), and sastra (scripture). As a
result, Visistadvaita is a dualistic philosophy assuming independent
existence of the perceiving self, and of the external world that is
perceived.

The Madhva theory of knowledge

The Dvaita or “dualist” school of Hindu Vedanta philosophy was originated by Sri Madhvacarya, or Madhva (ca. 1238–1317), who
considered himself an avatara of the wind-god Vayu and taught the
fundamental difference between the individual self or Atman and
the ultimate reality, Brahman. Thus, according to Madhva, there are
three orders of reality: (1) the independent ultimate reality, Brahman;
and the dependent reality, paratantra, which consists of (2) souls (jivas), and (3) lifeless objects (jada).
Madhva’s pluralistic ontology is founded on his realistic epistemol-
ogy. He argues that God and the human soul are separate because
our daily experience of separateness from God and of plurality in
general is given to people as an undeniable fact, fundamental to our
knowledge of all things. Madhva considered two means of valid knowl-
edge (Pramana): valid knowledge itself (Kevala Pramana), and the
instrument of knowledge (Anupramana). In turn, Anupramana con-
sists of three sources of knowledge: sense perception (Pratyaksha),
inference (Anumāna), and testimony of Vedic literature (Aagama)
(Sharma, 1994). Further, the existence of invalid knowledge acquired by sense perception demands permanent questioning of the content of knowledge.

The Mimansaka theories of knowledge

Mīmāṃsā is a Sanskrit word meaning “revered thought”. It is also the name of one of the six astika (orthodox) schools of Hindu philosophy based on the Vedas. Its core tenets are ritualism, anti-asceticism, and anti-mysticism. The central aim of the school is the explanation of the nature of dharma in order to maintain the harmony of the universe and provide for the personal well-being of the person who follows ritual obligations and prerogatives.
The Mimamsa school traces the source of the knowledge of
dharma neither to sense-experience nor inference, but to verbal cogni-
tion (knowledge of words and meanings). In order to understand the
correct dharma for specific situations, it is necessary to rely on exam-
ples of explicit or implicit commands in the Vedic texts. An implicit
command must be understood by studying parallels in other, similar
passages. If one text does not provide details for how a priest should
proceed with a particular action, the details must be sought in other,
related Vedic texts. This preoccupation with precision and accuracy
required meticulous examination of the structures of sentences con-
veying commands, and led to an extensive exegesis of the Vedas and
a detailed analysis of semantics.
The Mimamsa made notable contributions to Indian thought in the fields of logic and epistemology. The Mimamsa doctrine
of knowledge affirms that the world is real. Mimamsa introduced
two additional means of valid knowledge in addition to the four
traditional means of perception, inference, comparison and testi-
mony, recognized by other schools of Hinduism. They are arthapatti
(pre-conception or postulation) and abhava (absence, negation, non-
existence). Mimamsa advanced the unique epistemological theory
that all cognition is valid. All knowledge is true, until it is super-
seded by further cognition. What is to be proved is not the truth
of a cognition, but its falsity. Mimamsakas drew on this theory of
validity to establish the unchallengeable validity of the Vedas.

The Jaina theory of knowledge

The concept of soul is central in Jaina philosophy. Knowledge (Jnana), according to Jainas, is the soul's intrinsic, inherent, inseparable, and inalienable attribute, without which no soul can exist. Knowledge plays an important part in the conception of soul and its emancipation. As a result, Jain epistemology, or the Jain theory of knowledge, becomes vital in Jaina philosophy, which includes, along with the theory of knowledge, various topics such as psychology, the teaching about feelings, emotions, and passions, the theory of causation, logic, the philosophy of non-absolutism, and the conditional mode of predication (Shah, 1990).
Consciousness (Cetana), according to Jainas, is the soul's power of knowledge and operates through understanding (Upyoga). It gets
experience in three ways: (1) some experience is the fruit of karma;
(2) other experience comes from activity of the soul; and (3) one
more kind of experience is knowledge itself (Shah, 1990). According
to Jaina thinkers, Cetana (consciousness) culminates in pure and per-
fect knowledge and knowledge itself has grades and modes. In turn,
understanding (Upyoga) is divided into two: sensation (Darsana) and
Cognition (Jnana). Uma Svati says: “Understanding is the distin-
guishing characteristic of the soul. It is of two sets — Jnana and
Darsana. The first is of eight kinds and the second, of four” (Shah,
1990). Namely, sensation (Darsana) is of four kinds:
• Visual (Cakshusa)
• Non-visual (Acakshusa)
• Clairvoyant (Avadhi Darsana)
• Pure (Kevala)
Each piece of knowledge is experienced with reference to its
characteristic (Dharma) and its substratum (Dharmin). In addition,
Jainas discerned two kinds of knowledge: direct knowledge and indi-
rect knowledge. Direct knowledge does not demand the medium of
another knowledge in contrast to indirect knowledge.
According to Jainas, it is possible to obtain indirect knowledge
by five techniques: recollection, recognition, Reductio ad Absurdum
(Tarka), inference, and syllogism.

The Buddhist theories of knowledge

Being a strict empiricist, Siddhartha Gautama Sakyamuni (the “Buddha” or “awakened one”) believed that people can have
knowledge of only those things that can be directly experienced. It is
impossible to achieve ultimate knowledge until the follies and weak-
ness of human life bring one to despair. That is why Buddha famously
refused to answer ultimate questions such as “Does the world have
a beginning or not?”, “Does God exist?”, and “Does the soul per-
ish after death or not?”. Later, Buddhists developed a technique of
denying all the logically alternative answers to such questions. For
instance, the answer to the first question has to be: “No, the world
does not have a beginning, it does not fail to have a beginning, it does
not have and not have a beginning, nor does it neither have nor not
have a beginning”.
Knowledge in the Buddhist understanding is of prime importance
to people. One of the principles of Buddhist philosophy instructs
that the pleasure of advancing knowledge becomes a duty. The theory of knowledge in Buddhism is not treated as relative but is presumed to be perfectly true and absolute.

With respect to their ontological assumptions, Buddhist religious directions are separated into four classes (Rao, 1998):

— Madhyamika presupposes that the entire world is void, everything is fleeting, and all activity goes on in a dream state.
— Yogacaras hold that there are no external objects in the world, asserting that the object cognized and the cognizing person are the same.
— Sautrantikas admit the existence of the objective world, which cannot be perceived by the senses but is only inferred.
— Vaibhasikas admit the existence of the objective world but reject the existence of objects of inference, claiming that only indeterminate knowledge is valid.

As reasoning is an important procedure in knowledge acquisition, three features of reason are explicated and utilized:

— Existence only in the subject (Paksa).


— Existence in the homologue (Sapaksa).
— Non-existence only in the heterologue (Vipaksa).

In addition, reason in the Buddhist theory of knowledge has three types:

— Non-cognition (Anupalabdhi).
— Cause in itself (Svabhava).
— Effect (Karya).

Besides, the Buddhist theory of knowledge uses four forms of predication:

1. S is P, e.g., “a square is a rectangle” or “there is a world of ideas”.
2. S is not P, e.g., “a square is not a circle” or “there is no world of ideas”.
3. S is and is not P, e.g., “a ball that is partially green and partially yellow is green and is not green” or “there is and is no world of ideas”.
4. S neither is nor is not P, e.g., “a ball neither is green nor is it not green” or “the world of ideas neither is real nor is it not real”.

The Buddhists assume that at least one of these alternatives is always true in any meaningful situation and use this assumption for logical classification. However, when a question is considered meaningless, all four alternatives are rejected. At the same time, when the answer is ‘yes’ to each of the alternatives, the question is treated as misleading and all four alternatives are also excluded.
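For readers who prefer to see this fourfold predication (sometimes called the tetralemma) laid out formally, the following minimal Python sketch models the four alternatives as an enumeration together with the two special cases just mentioned. The sketch is only an illustration added here; the class name, function, and truth assignments are assumptions, not part of the Buddhist theory or of this book's formal apparatus.

```python
from enum import Enum

class Catuskoti(Enum):
    """The four Buddhist forms of predication for a subject S and a predicate P."""
    IS = "S is P"
    IS_NOT = "S is not P"
    BOTH = "S is and is not P"
    NEITHER = "S neither is nor is not P"

def classify(answers: dict) -> str:
    """Illustrative classification of a question by which of the four forms hold.

    Following the text: if at least one (but not every) form is affirmed, the
    question is meaningful; if all four are rejected, it is meaningless; if all
    four are affirmed, it is treated as misleading.
    """
    affirmed = [form for form, holds in answers.items() if holds]
    if not affirmed:
        return "meaningless: all four alternatives rejected"
    if len(affirmed) == len(Catuskoti):
        return "misleading: all four alternatives affirmed"
    return "meaningful: holds under " + ", ".join(form.name for form in affirmed)

# Hypothetical truth assignment mirroring the example "a square is a rectangle"
square_is_rectangle = {
    Catuskoti.IS: True,
    Catuskoti.IS_NOT: False,
    Catuskoti.BOTH: False,
    Catuskoti.NEITHER: False,
}
print(classify(square_is_rectangle))  # meaningful: holds under IS
```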

The Logician’s (Nyaya) theory of knowledge

In the Logician’s theory of knowledge, knowledge (Buddhi or Jñāna)


is a special property of the soul, while mind (Manas) is a separate
substance (Rao, 1998). Knowledge is obtained by experience (Anub-
hava) and recollection (Smrti). In turn, experience is twofold, comprising
valid knowledge (Yatharthanubhava or Prama) and invalid
knowledge (Ayatharthanubhava or Bhrama). There are four ways for
getting valid knowledge (Yatharthanubhava or Prama):

— Perception gives perceptual knowledge (Pratyksa).


— Inference (Anumana) gives inferential knowledge (Anumiti).
— Analogy (Upamana) gives analogical knowledge (Upamiti).
— Utilization of language (verbal testimony) gives verbal knowledge
(Sabda).

According to Gautama, there are four factors involved in direct


perception (Pratyksa):

— the senses (indriyas).


— the sensual objects (artha).
— the contact of the senses and the objects (sannikarsa).
— the cognition produced by this contact (jnana).

In addition, the Nyaya believed that the five sense organs — eye,
ear, nose, tongue, and skin — have the five elements — light, ether,
earth, water, and air — as their field, with corresponding qualities
of color, sound, smell, taste, and touch.
According to logicians, there are also three ways for getting invalid
knowledge (Ayatharthanubhava or Bhrama):

— Doubt gives uncertain knowledge (Samsaya).


— Wrong reasoning gives invalid knowledge (Viparyaya).
— Reductio ad absurdum gives invalid knowledge (Tarka).

Tarka includes:

— Faults of self-dependence (Atmasraya).


— Faults of mutual dependence (Anyonyasraya).
— Faults of dependence on a cycle (Cakrakasraya).
— Faults of infinite regress (Anavastha).
— Statements of undesirable effects (Anistaprasanga).

Inference (Anumana) is knowledge from the perceived about the


unperceived and this relation may be of three sorts:

— the inferred constituent may be the cause of the element


perceived.
— the inferred constituent may be the effect of the element
perceived.
— both may be the joint effects of something else.

In addition, inference (and the results of inference) has two types:

— Inference for one’s own sake (Svartha).


— Inference for another’s sake (Parartha).

Verbal testimony (and its results) has two types:

— Scriptural testimony (Vaidika).


— Non-scriptural testimony (Laukika).

Perception (and the results of perception) has two types:

— Indeterminate perception (Nirvikalpaka).


— Determinate perception (Savikalpaka).
In addition, there are three kinds of transcendental perception


(Alaukika):

— Perception in the Samanyalakşana supernormal contact.


— Perception in the Jnanalakşana supernormal contact.
— Perception in the Yogaja (Lakşana) supernormal contact.

In the process of cognition, mind (Manas) mediates between the


self and the senses. When the mind is in contact with one sense organ,
it cannot be so with another. It is therefore said to be atomic in
dimension. The Nyaya assumed that, due to the nature of the mind,
the experiences of people are discrete and linear, although a quick
succession of impressions may give the appearance of simultaneity.
It is possible to read more about Indian theories of knowledge in
the book (Rao, 1998).
In other countries, philosophers also paid considerable attention
to the problems of knowledge and cognition. In China, Confucius
(551–479 B.C.E.) thoroughly considered knowledge and its sources.
He discerned two kinds of knowledge: one was innate, while the
other came from learning. According to him, knowledge consisted of
two components: knowledge of facts (statics) and skills of reasoning
(dynamics). The contemporary methodology of science classifies the
first type as a part of the logic-linguistic subsystem, which contains
declarative knowledge, while the second type is a part of the proce-
dural subsystem of a developed knowledge system, which contains
procedural knowledge (Burgin and Kuznetsov, 1994). For Confucius,
to know was to know people. He was not interested in knowledge
about nature, studied by modern science. The philosophy of Confucius
exerted the dominant influence on Chinese society for many centuries.
Besides, Chinese philosophers paid much attention to names as
carriers (bearers) of knowledge reflecting intrinsic aspects of reality.
In this respect, Confucius writing about names and their rectifica-
tion, asserted (Confucius, 1979):
“If names be not correct, language is not in accordance with the truth of
things. If language be not in accordance with the truth of things, affairs
cannot be carried on to success.
When affairs cannot be carried on to success, proprieties and music


do not flourish. When proprieties and music do not flourish, punish-
ments will not be properly awarded. When punishments are not properly
awarded, the people do not know how to move hand or foot”.

One of the basic aims of name rectification was to create a con-


sistent knowledge representation in language that would allow each
word to have a consistent and universal meaning, providing accurate
knowledge of things and actions, while avoiding confusion of multiple
Ways (Dao).
Later Xun Zi, also called Hsün Tzu, (ca. 312–230 B.C.E.) contin-
ued exploration of names as knowledge representations. Xun Zi wrote
a tract on the rectification of names, arguing that names should be
rectified so that a ruler could adequately control his people in accor-
dance with Dao (the Way) without being misunderstood. Indeed,
when misapprehension became easy, then Dao would not effectively
be put into action. Xun Zi explained (cf., (Watson, 2003)): “When
the ruler’s accomplishments are long lasting and his undertakings
are brought to completion, this is the height of a good government.
All of this is the result of being careful to see that men stick to the
names which have been agreed upon”.
Necessity for rectifying names is both political and epistemolog-
ical. On one hand, there is a need to distinguish the higher from
the lower in terms of the social rank, while on the other hand, it is
necessary to discriminate the different states and qualities of things.
“When the distinctions between the noble and the humble are clear
and similarities and differences [of things] are discriminated, there
will be no danger of ideas being misunderstood and work encounter-
ing difficulties or being neglected” (cf., (Ding, 2008)).
Besides, explaining that understanding right and wrong causes
morality to be more unbiased, Xun Zi argued that without univer-
sally accepted interpretations of names, knowledge of right and wrong
would become hazy. According to Xun Zi, the ancient knowledgeable
kings chose names that gave correct knowledge of actualities, but
later generations confused terminology, coined new names, and thus
could no longer differentiate right from wrong.
Xun Zi assumed that utilization of senses through seeing, hearing,


smelling, tasting, and touching is the key source for getting knowl-
edge of distinctions between things, thus allowing people to give
names based on the sameness or difference between various things.
Consequently, this was the way of producing true knowledge of the
world, i.e., true knowledge was achieved through naming.
Xun Zi also wrote about “things which share the same form but
occupy different places, and things which have different forms but
occupy the same place”. The former, e.g., two identical flutes, should
be distinguished as two separate things, although they have the same
form and other properties, because they occupy different places. At
the same time, as one of these identical things, e.g., flutes, is used
and becomes damaged or broken over time, it appears to change
into something else. But even though it seems to become something
different, it is still the same thing, e.g., the same flute, and should be regarded
as such.
Another representative of the School of Names Gongsun Long (ca.
325–250 B.C.E.) asserted in his work “On Names and Actualities”
that because all things in the world come into sight in particular
shapes and substances, they are given different names. To know if
the meaning of a word correctly corresponds to the essence of the
thing named by it or not, it is necessary to know the conditions
which give rise to it. Gongsun Long writes (cf., (Ding, 2008)): “A
name is to designate an actuality. If we know that this is not this
and know that this is not here, we shall not call it [‘this’]. If we
know that is not that and know that is not there, we shall not call
it [‘there’]”.
In ancient Greece, Plato (427–347 B.C.E.) performed even more
profound analysis of the problem of knowledge. For instance, in one
of Plato’s dialogues, the Theaetetus, Socrates and Theaetetus discuss the
nature of knowledge and Socrates asks the question that permanently
puzzles him: “What is knowledge?”.
To answer this question, three approaches are suggested. At first,
the conjecture “knowledge and perception are the same” is proposed.
Socrates refutes this idea by explaining that it is possible to perceive
without knowing and it is possible to know without perceiving. For
instance, it is possible to see a text in a foreign language without
knowing what it says.
The second hypothesis is that true belief is knowledge. Socrates
invalidates this idea by giving the following example. When a jury
believes a defendant is guilty by listening to the prosecutor instead
of looking at solid evidence, it cannot be said that jurors know that
the accused is guilty even if, in fact, he is.
The third proposition is that true belief with a rational valida-
tion is knowledge. However, Socrates also challenges this approach
because all interpretations of this definition look inadequate. Thus,
Socrates demonstrates that all three definitions of knowledge: knowl-
edge as nothing but perception, knowledge as true judgment, and,
finally, knowledge as a true judgment with justification, are unsatis-
factory.
In spite of this, according to Cornford (2003), in many of
his works, e.g., Meno, Phaedo, Symposium, Republic, and
Timaeus, Plato treated knowledge as a justified true belief, and this
approach prevailed becoming a stable tradition in philosophy. Much
later Bertrand Russell in (Russell, 1912; 1948), Edmund Gettier in
(Gettier, 1963), Elliot Sober in (Sober, 1991) and some other thinkers
gave persuasive examples demonstrating that the definition of knowl-
edge as a justified true belief is not adequate.
Let us consider an example demonstrating deficiencies of this def-
inition (Russell, 1912; 1948; Scheffler, 1965). A woman looks at a
clock at 3 p.m. The clock shows 3 p.m. So, the woman thinks that it
is 3 p.m. Thus, she has a belief, which is true and justified by obser-
vation of the clock. Now suppose that the clock is not working, although
the woman thinks it is. Then it seems wrong to hold that she knows
that it is 3 p.m.
Plato was also interested in the problem of knowledge acquisition.
His idea was that people learn in this life by remembering knowledge
originally acquired in a previous life. In essence, the soul has all
knowledge and knowledge acquisition is recollection of what the soul
already knows.
Plato conceived it is possible to achieve correct knowledge only
through the knowledge of the forms, or ideas (eidos), because what
came through our senses is not knowledge of the thing itself but only
knowledge of the imperfect changing copy of the form. Thus, the only
possible way to acquire correct knowledge of the forms was through
reasoning as senses could provide only opinion.
For a long time, philosophers were not able to clearly and consis-
tently explain what Plato’s forms, or ideas (eidos), are. Only at the
end of the 20th century was it discovered that the concept of structure
provides the scientific representation of Plato’s forms, while the exis-
tence of the world of structures was postulated and proved (Burgin,
1997; 2010; 2012).
Another great philosopher Aristotle (384–322 B.C.E.) studied
problems of knowledge categorizing knowledge with respect to knowl-
edge domains (objects) and the relative certainty with which one
could know those domains (objects). He assumed that certain
domains (such as in mathematics or logic) permit one to have abso-
lute knowledge that is true all the time. However, his examples of
absolute knowledge, such as two plus two is always equal to four or all
swans are white, failed when new discoveries were made. For instance,
the statement two plus two always equals four was disproved when
non-Diophantine arithmetics were discovered (Burgin, 1977; 1997c;
2007; 2010c). The statement “all swans are white” was invalidated
when Europeans came to Australia and found black swans.
According to Aristotle, absolute knowledge, e.g., mathematical
knowledge, is characterized by certainty and precise explanations.
However, unlike Plato and Socrates, Aristotle did not demand cer-
tainty in everything. Some domains, such as human behavior, do
not permit precise knowledge. The corresponding vague knowledge
involves expectations, chances, and imprecise explanations. Knowl-
edge that falls into this category is related to ethics, psychology, or
politics. One cannot expect the same level of certainty in politics or
ethics that one can demand in geometry or logic. In his work Ethics,
Aristotle defines the difference between knowledge in different areas
in the following way:

“we must be satisfied to indicate the truth with a rough and general
sketch: when the subject and the basis of a discussion consist of matters
which hold good only as a general rule, but not always, the conclusions
reached must be of the same order ... For a well-schooled man is one
who searches for that degree of precision in each kind of study which
the nature of the subject at hand admits: it is obviously just as foolish
to accept arguments of probability from a mathematician as to demand
strict demonstrations from an orator”.
(Aristotle, 1984)

Aristotle was deeply interested in how people got knowledge. He


identified three sources of knowledge: sensation as the passive capac-
ity for the soul to be changed through the contact of the associ-
ated body with external objects, thought as the more active process
of engaging in the manipulation of forms without any contact with
external objects at all, and desire as the origin of movement towards
some goal.
Developing logic as a tool for knowledge acquisition, Aristotle con-
structed rules of logical inference. The basic rule is called the syllogism.
It has the following form:

All A are B.
C is A.
Therefore, C is B.

Here is the famous example of a syllogism:

All men are mortal.


Socrates is a man.
Therefore, Socrates is mortal.
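
For readers who prefer a symbolic rendering, this syllogistic form can also be written in modern first-order logic; the predicate and constant names below are chosen only for illustration and are not part of Aristotle's own notation:

\[
\forall x\,\bigl(A(x) \rightarrow B(x)\bigr),\; A(c) \;\vdash\; B(c),
\]

and, for the example above,

\[
\forall x\,\bigl(\mathit{Man}(x) \rightarrow \mathit{Mortal}(x)\bigr),\; \mathit{Man}(\mathrm{Socrates}) \;\vdash\; \mathit{Mortal}(\mathrm{Socrates}).
\]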

Treating syllogism as the main tool of knowledge acquisition, Aris-


totle conceives of knowledge as hierarchically structured by inference.
He puts this claim forward in the Posterior Analytics (Aristotle,
1984). To have knowledge of a fact, it is not enough simply to be
able to repeat the fact, while in many cases, for example, in history,
it is impossible to repeat the fact. Thus, to have knowledge, it is also
necessary to be able to give the reasons why that fact is true. Aris-
totle calls this process demonstration, which is essentially a matter
of showing that the fact in question is the conclusion to a valid syllo-
gism. Thus, knowledge that is premises for obtaining other knowledge
is logically prior to the knowledge that follows from it. Eventually,
there must be one or several “first principles”, from which all other
knowledge follows and which themselves do not follow from anything.
However, if these first principles do not follow from anything, then,
according to Aristotle, they cannot count as knowledge because there are no
reasons or premises we can give to prove that they are true. Aristotle
suggests that these first principles are a kind of intuition of the facts
and ideas we recognize in experience.
Aristotle believes that knowledge domains or objects are struc-
tured hierarchically. Consequently, he treats definition as a process
of division and specification. For instance, when defining the whale, we observe
that whales are animals, which is the genus to which they belong.
Then we search for various conditions, which distinguish whales from
other animals such as: whales live in water, unlike tigers, and they
are very big, unlike mice.
While true knowledge is derived from knowledge of first principles,
actual argument and debate are much less tidy. When two
people argue, they do not go back to first principles to ground every
claim but simply suggest premises to which they both acquiesce. The essence
of debate is to find premises your opponent can agree with and
then show that conclusions different from your opponent’s position
follow necessarily from these premises. In the Topics, Aristotle
classifies the kinds of conclusions that can be drawn from different
kinds of premises, while in the Sophistical Refutations, he explores
various logical ploys used to trick people into accepting a faulty line
of reasoning.
Thus, we can see that Aristotle strives to organize knowledge in
the manner of a well-structured, architectural construction with a
firm foundation of unshakable first principles and an upper struc-
ture of propositions firmly attached to the foundation by steadfast
inference. In such a way, Euclid’s geometry and virtually any other
axiomatic mathematical system are built. Such a system has a foundation of defi-
nitions, postulates, and axioms or common notions as first principles
and an upper structure of deduced propositions — theorems and
lemmas.
In the first millennium, the distinguished philosopher Abū Naṣr
al-Farabi (870–950) also studied knowledge and its sources. He
defined the highest level of knowledge as theoretical or genuine


knowledge, which is the excellence of the theoretical part of the soul
and comes from (or is) science (‘ilm). As al-Farabi wrote, for genuine
knowledge, certainty is achieved within the soul. For the entities that
do not depend on human production, their existence and determina-
tion of what each one of them is and how it is can be accomplished
by demonstration of true, necessary, universal and primary premises,
securely grasped and naturally known by reason (Fusul, p. 51). As
a result, genuine knowledge is indispensable, unchangeable, and uni-
versal. The highest type of this theoretical knowledge, for al-Farabi,
is wisdom (hikmah), which is the knowledge of the ultimate causes
of all existing entities (metaphysics) as well as the proximate causes
of everything caused (physics).
According to al-Farabi, certain knowledge is threefold:

— certain knowledge that the thing exists, which is called the knowl-
edge of existence;
— certain knowledge of the cause of the thing, which is called knowl-
edge why;
— certain knowledge of both together.

The syllogisms used in attaining this threefold epistemic certainty


are also three:

— syllogisms used to prove only existence of the thing;


— syllogisms used to prove only its cause;
— syllogisms used to prove the two together.

Later many outstanding philosophers, such as Thomas Aquinas


(1224–1274), René Descartes (1596–1650), Baruch Spinoza (1632 –
1677), John Locke (1634–1704), George Berkeley (1685–1753), David
Hume (1711–1776), Immanuel Kant (1724–1804), Georg Wilhelm
Friedrich Hegel (1770–1831), Bertrand Russell (1872–1970), Lud-
wig Josef Johann Wittgenstein (1889–1951), Michael Polanyi (1891–
1976) and Karl Raimund Popper (1902–1993) studied problems of
knowledge.
The great medieval philosopher Thomas Aquinas assumed that


all knowledge of people comes from sense perception, writing:

“. . . it is natural to man to attain to intellectual truths through sensible


objects, because all of our knowledge originates from the sense.”

In its turn, sense perception comes from the actual things them-
selves, while the human mind does not have inborn ideas. At the same
time, people possess a natural ability to abstract knowledge. When
people see an object such as a tree, the actual tree is what the person
observes, perceiving its reflection through the senses. The mind knows that
what it is seeing corresponds to reality and as a result, an individual
attains knowledge about the tree. The form of the real object, e.g.,
a tree, is not generated by the senses, or the mind of the perceiver,
but is impressed by the object itself. All external knowledge obtained
through sense is combined by the common sense, which causes the
unifying process of the senses into a single perception, which is then
presented to the mind. The mind forms a representation sent to the
intellect, which generates the universal idea from it by abstraction
and names it by a word.
The great French philosopher René Descartes evaluates knowl-
edge in terms of doubt and certainty, distinguishing certain rigorous
knowledge (scientia) and knowledge with lesser grades of certainty
(persuasio). Descartes posits that doubt and certainty are comple-
mentary feelings — when certainty increases, doubt decreases, and
vice versa. Consequently, according to Descartes, knowledge is con-
viction based on a reason so strong that it could never be shaken by
any stronger reason. As a result, knowledge becomes absolute and
utterly indefeasible. Descartes writes:

“. . . we reject all such merely probable knowledge and make it a rule to


trust only what is completely known and incapable of being doubted”.
(Descartes, 1984)

That is why Cartesian methodology of cognition starts with


assessing convictions or beliefs by doubting and reasoning in the
process of discovering innate truths and obtaining knowledge. This
method demands applying doubt not merely to individual candidates for
knowledge but to all of these candidates collectively. Descartes
describes this process in the following way (Descartes, 1984):
“. . . those who have never philosophized correctly have various opinions
in their minds which they have begun to store up since childhood, and
which they therefore have reason to believe may in many cases be false.
They then attempt to separate the false beliefs from the others, so as to
prevent their contaminating the rest and making the whole lot uncertain.
Now the best way they can accomplish this is to reject all their beliefs
together in one go, as if they were all uncertain and false. They can then
go over each belief in turn and re-adopt only those which they recognize
to be true and indubitable”.

Descartes promotes skeptical arguments precisely in acknowledge-


ment that there is a definite reason for the overall doubt, while it is
necessary to have valid arguments for truth recognition. Note that
although Descartes suggests applying doubt universally to all candi-
dates for knowledge, he does not recommend doing this with the tools
for founding knowledge.
Besides, understanding of cognitive processes by Descartes is sim-
ilar to Plato’s doctrine of recollection as Descartes writes that cog-
nition seems not so much learning something new as remembering
what was known before.
Descartes also evokes that there are three possible options for the
kind of external essences causing sensations:

(1) God
(2) Material/corporeal substance
(3) Other created substance.

However, Descartes discards options (1) and (3), leaving only the
second possibility for sensations.
Descartes’ basic principle of doubting any knowledge claim,
as well as every attempt at justification of knowledge claims, gained
much support in traditional epistemology. It has been assumed
that it is vital to find a bedrock of certain knowledge immune to
all possible doubt. However, this search did not bring conclusive
solutions.
The great European philosopher Baruch Spinoza, born Benedito de
Espinosa and later known as Benedict de Spinoza, also studied
problems of knowledge, elaborating a triadic typology of knowledge:

✧ The first kind of knowledge is obtained in two ways — from opinion


or random experience and from imagination.
✧ The second kind of knowledge arises from the intellect, which
employs common notions and elaborates adequate ideas of the
properties of things.
✧ The third kind of knowledge comes from intuition allowing people
to have adequate knowledge, and therefore, to get absolute truth
about things.

Treating problems of knowledge, the outstanding British philoso-


pher John Locke first explains the origin of the ideas that people
have and the use of words to signify them. He assumes, being in good
agreement with the Chinese philosophers, that the making of names of
substances is a kind of discovery through an abstract general idea,
which is named and then introduced into language. According to Locke, names
of substances are supposed to copy the properties of the substances
they refer to.
After this, Locke gives a simple definition of knowledge writing:
“Knowledge then seems to me to be nothing but the perception of the
connection and agreement, or disagreement and repugnancy of any of
our Ideas. In this alone it consists” (Locke, 1975).

Thus, genuine knowledge occurs only when people actually are


perceiving. Locke also considered habitual knowledge, which is related
to what was known in the past but is not perceived now. Besides,
he rejected innate knowledge, arguing that otherwise children (and
mental defectives) would be the purest and most reliable guides to
logical truth. Observing the development of knowledge in individual
cases, it is possible to see gradual acquisition of the requisite ideas,
perception of agreement or disagreement of which forms knowing,
although there is self-evident knowledge.
Locke’s definition of knowledge as perception of agreement or dis-


agreement of ideas involves two criteria for knowledge acquisition:
first, it is necessary to have the requisite ideas and then to perceive
the connection, i.e., agreement or disagreement, between them. As a
result, knowledge has to be relational in structure and propositional
in form.
Locke recognizes four types of knowledge:

— Knowledge of identity and diversity, which rests upon recognition


of the difference of each idea from any other.
— Knowledge of relation, which reflects positive non-identical con-
nections among ideas.
— Knowledge of co-existence, which is based on coincident appear-
ance of qualities.
— Knowledge of real existence, which presumes some connection
between an idea and the real thing it represents.

In addition, Locke supposes that these types of knowledge can


occur in any of three forms:

— Intuitive knowledge is a certain and unquestionable perception of


identity and relation of any two ideas without the mediation of
any other. It is the clearest and the most certain of all degrees
of human knowledge. It accounts for self-evident truths serving
as the foundation upon which all other genuine knowledge is
built.
— Demonstrative knowledge is obtained through a series of con-
nections between intermediate ideas by means of reasoning. The
standard area of demonstrative human knowledge is mathemat-
ics, where our possession of distinct ideas of particular quantities
yields the requisite clarity, while disciplined reasoning helps to
uncover the intermediate links that establish knowledge of iden-
tity and relation. However, Locke thinks that it is possible to have
demonstrative knowledge of moral relations.
— Sensitive knowledge provides some evidence of the existence of
particular objects outside ourselves, although it is not always
true that there must exist an external object corresponding
to each idea of sensation. Locke makes serious reservations


about the reliability of our sensitive knowledge of the natural
world.

Another outstanding British philosopher George Berkeley


studied problems of knowledge in his Treatise Concerning the Prin-
ciples of Human Knowledge (1710). The goal was to make an
inquiry into the first principles of human knowledge for discover-
ing what had led to doubt, uncertainty, absurdity, and contradiction
in philosophy.
Berkeley claimed that the mind cannot conceive abstract ideas
and declared that words, such as names, do not signify abstract ideas.
On the contrary, he stressed that people could only think of particular
things that they had perceived. Thus, names denoted general ideas, not
abstract ideas. General ideas represent any one of several particular
ideas, while existence of an idea of a thing was actually the state
of perception of a perceiver. Based on this approach, Berkeley came
to the conclusion that all motion is relative, which perfectly correlates
with contemporary physics.
Human minds know ideas, not objects. Ideas, which constitute
knowledge, are brought forth by sensation, thought and imagination.
When several ideas are associated together, they are comprehended
as ideas of one distinct thing, which is then signified by one name.
Even more, according to Berkeley, the outside world is composed
only of ideas because “ideas can only resemble ideas”. However, the
world possesses logic and regularity given by God.
Berkeley argued that even if some things exist outside the
human mind, we cannot know this. Indeed, knowledge through our
senses only gives us knowledge of our senses but not of any of the
unperceived things. Knowledge through reason does not guarantee
that there are, necessarily, unperceived objects while imagination,
the third source of knowledge, has proved to produce mostly non-
existing (imaginary) entities. For instance, in dreams, people have
ideas that do not correspond to external objects.
Another outstanding British philosopher David Hume also tried
to solve the enigma of knowledge. He claimed that all knowledge
stemmed from sense experience and was justified in terms of what was
in people’s minds. Thus, all knowledge consists of impressions and
ideas. The former are vivid and clear perceptions, while the latter
are less vivid and clear copies of impressions.
Hume contended that it is possible to have knowledge of two
kinds — the relations between ideas and matters of fact. Relations
between ideas can be known with absolute certainty, and can be
known by the “mere operations of thought”. In his treatise, Hume
cites only mathematics as an example of relations of ideas.
At the same time, “matters of fact” can never be known with the
same degree of certainty, and cannot be known by the mere opera-
tions of thought. Knowledge of matters of fact is always a posteriori
and synthetic as people obtain it by using observation and employ-
ing induction and reasoning about what is probable. The foundation
of this knowledge is what people experience in the present or can
remember from the past. Knowledge that goes beyond testimony of
the senses or the records of our memory rests on causal inference.
Discussing inference, Hume questions the validity of induction, arguing
that our belief that the future will resemble the past is not based on
reason at all.
Hume was especially interested in different ways used to justify
that some belief we had was, in essence, knowledge, maintaining that
all knowledge comes from and must be justified by experience. For
instance, matters of fact are justified by probable arguments and not
by deductive reasoning.
Immanuel Kant is one of the most influential philosophers in the
history of Western thought. His ideas in metaphysics, epistemology,
ethics, and aesthetics have made a profound impact on almost every
philosophical movement that followed his work.
A substantial part of Kant’s philosophy addresses the question
“What can we know?” In answering this question, he discerned three
parts of theoretical knowledge:

— logic, which, according to Kant, gives absolute knowledge and


has not changed since the time of Aristotle.
— arithmetic and geometry, which give the most reliable knowledge.
— the fundamental principles of natural sciences, which are changing


with time giving relative knowledge.

Besides, Kant defined items of human knowledge as representa-


tions, dividing representations into two classes:

— Intuitions, which are “immediate” representations.


— Concepts, which are “mediate” representations.

These representations could be pure, without any relation to expe-
rience, or empirical, coming from experience. Pure intuitions gave
perceptions of basic forms, e.g., intuitions of space and time, which
turn unorganized sensations into perceptions. Pure concepts give
their basic forms of conceptual knowledge facilitating understand-
ing and comprising the 12 categories described by Kant in accor-
dance with the Aristotelian logic. Being necessary for experience of
physical objects, their causal behavior and structural properties, the
conceptual categories cannot be circumvented to achieve a mind-
independent world. Reason, according to Kant, is structured by forms
of experience and categories, giving practical and logical arrangement
to people’s everyday experience.
In addition, Kant brought in two kinds of knowledge:

— analytic knowledge (analytic representations), which is expressed


by self-justifying judgments about properties of objects that exist
in these objects by definition, e.g., propositions the predicate con-
cept of which is contained in its subject concept.
— synthetic knowledge (synthetic representations), which
is expressed by judgments about properties of objects that are
added to these objects, e.g., propositions, the predicate concept
of which is not contained in its subject concept.

These two kinds were related to two classes of knowledge:

— a priori knowledge (representations) known before and indepen-


dent of experience.
— a posteriori knowledge (representations) obtained from
experience.
As a result, Kant discriminated three kinds of knowledge:

• analytical a priori knowledge, which is exact and certain but


mostly uninformative as it expounds only what is contained in
definitions;
• synthetic a posteriori knowledge, which conveys information about
what is learned from experience, but it is subject to the errors of
the senses;
• synthetic a priori knowledge, which is uncovered by pure intuition
and is both exact and certain, for it expresses the necessary con-
ditions that the mind imposes on all objects of experience.

According to Kant, mathematics and philosophy give synthetic a


priori knowledge.
Kant explains that analytic knowledge is a priori knowledge, while
synthetic knowledge is sometimes a posteriori knowledge and some-
times a priori knowledge. For instance, the statement “Any natural
number is larger than or equal to one” is analytic because this prop-
erty is contained in the definition of natural numbers, which start
with one and are built by consecutive addition of one. At the same
time, the statement “Any natural number greater than one is either prime or
composite” is synthetic.
Like the majority of philosophers, Kant assumed that knowl-
edge was characterized by propositions or statements. Analytic
propositions are true by nature of the meaning of the words involved
in the sentence, while synthetic statements only tell people something
about the world. Thus, the truth or falsehood of synthetic statements
comes from something outside of their linguistic content.
However, Kant does not demand coincidence of analytic and
a priori knowledge explaining that elementary mathematics, e.g.,
arithmetic, is synthetic and a priori because its statements provide
new knowledge, but knowledge that is not derived from experience.
This becomes part of his main argument for transcendental idealism,
in which the possibility of experience depends on certain necessary
conditions called a priori forms, which organize comprehension of
the world of experience.
It is possible to suggest that knowledge of basic arithmetic does


not demand any empirical experience to know that 2 + 2 = 4, which
is essentially analytic. However, Kant disputes this, explaining that
if the number 2 in this calculation is examined, there is nothing to
be found in it by which the number 4 can be inferred. Thus, it is self-
evident, and undeniably a priori, but at the same time, it is synthetic.
It is interesting that this might be true for arithmetic as an empir-
ical science. When axioms of arithmetic, e.g., Peano axioms (cf., for
example, (Shoenfield, 2001)), were constructed, arithmetic proposi-
tions that were not axioms became analytic and a posteriori. This
became even more transparent with the discovery of non-Diophantine
arithmetics (Burgin, 1977; 1997c; 2007; 2010c).
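To make the point about derivability concrete, the following standard derivation, given here only as an illustration for the ordinary (Diophantine) case and not taken from the sources cited above, obtains 2 + 2 = 4 from the Peano recursion equations for addition, x + 0 = x and x + s(y) = s(x + y), writing 2 = s(s(0)) and 4 = s(s(s(s(0)))):

\[
s(s(0)) + s(s(0)) \;=\; s\bigl(s(s(0)) + s(0)\bigr) \;=\; s\bigl(s\bigl(s(s(0)) + 0\bigr)\bigr) \;=\; s\bigl(s\bigl(s(s(0))\bigr)\bigr) \;=\; 4.
\]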
Moreover, the whole of mathematics is, in a definite sense, an empir-
ical science where experiments, mostly mental experiments, play the
leading role (Burgin, 1998). Consequently, the majority of mathe-
matical propositions, even many axioms, are a posteriori by their
nature.
In the 20th century, Bertrand Russell also studied problems of
knowledge. His views are exposed in the article (Russell, 1926). He
writes, “the question how knowledge should be defined is perhaps the
most important and difficult” of all problems related to knowledge.
“This may seem surprising,” he continues, “at first sight, it might
be thought that knowledge might be defined as belief which is in
agreement with the facts. The trouble is that no one knows what a
belief is, no one knows what a fact is, and no one knows what sort
of agreement would make a belief true”.
According to Russell, theory of knowledge is partly logical,
partly psychological, and, we can add, partly algorithmic. The connection
between these parts is not very pronounced. Taking precision and
certainty as the basic characteristics of knowledge, Russell assumes
that they have different degrees, or in modern terms, precision and
certainty are fuzzy properties. In essence, there is no absolutely pre-
cise knowledge or knowledge with absolute certainty — all knowl-
edge is more or less uncertain and more or less vague. Often vague
knowledge seems more reliable than precise knowledge, but is less
useful. Russell believes that one of the aims of science is to increase
precision without diminishing certainty although it is incorrect to


restrict understanding of knowledge to what has the highest degree
of both precision and certainty.
It is interesting that Russell treats data as a kind of knowledge,
namely, immediate knowledge. Indeed, he writes:

“the separation into data and inferences belongs to a well-developed


stage of knowledge, and is absent in its beginnings” (Russell, 1926).

At the same time, he assumes that all knowledge is represented


by propositions and is obtained by observations (data) and inference
(inferred knowledge). Traditionally, two sorts of data are considered:
one physical, derived from the senses, the other mental, derived from
introspection. Russell suggests that the difference between the phys-
ical and the mental belongs to inferences and constructions and not
to data.
Russell also distinguishes two kinds of inference, deduction and
induction. Deduction, as he maintains, is obviously of great practical
importance, since it embraces the whole of mathematics. Inductive
inferences are essential to the conduct of life. Russell implies that “we
have to accept merely probable knowledge in daily life, and theory
of knowledge must help us to decide when it really is probable, and
not mere animal prejudice”. In addition, Russell also writes about
analogy as a way of inference.
Interestingly, parallel to conventional inference, Russell
acknowledges animal inference, explaining why there are grounds for
doing this.
Studying knowledge, the outstanding Austrian philosopher Lud-
wig Wittgenstein asserts that some statements, such as “here is a
hand” or “the world has existed for more than five minutes”, look
like empirical propositions saying something factual about the world
and open to doubt. However, in essence, they are similar to logical
propositions because they function in language, which makes it possi-
ble for empirical propositions to make sense. This compels us to take
such propositions for granted to allow us to speak about things in
the external world. According to Wittgenstein, a proposition has no
meaning unless it is placed within a particular context.
It is not the goal of Wittgenstein to refute skeptical doubts about


the existence of an external world. Instead, he tries to circumvent
them by explaining that the doubts, as they are understood in phi-
losophy, do not do what they are meant to do. By ascribing logical
nature to certain fundamental propositions, Wittgenstein explicates
their structural role in communication and behavior of people. For
instance, the statement “Here is a hand” is an implicit definition of
the word hand by showing an example. In addition, this statement
indicates how the word hand is used rather than making an empirical
claim about the presence of a hand. Doubts aimed at such proposi-
tions destroy language and its utilization. Communication and ratio-
nal thought are only possible, provided there is some sort of common
ground. Although skeptical doubts are sensible in rational debate,
doubting too much undermines rationality and, as a consequence,
the very foundation for doubt.
In addition, similar to many Chinese philosophers mentioned
above, Wittgenstein explores how words acquire meaning. However,
in contrast to them, Wittgenstein derives meaning from usage and
not vice versa. In doing this, he asserts that one should look to real
language to answer questions about the meaning of words. Wittgen-
stein demonstrates that many philosophical problems arose from
philosophers’ redefining words and then applying their own defini-
tions to promote their ideas and to defeat their opponents.
Wittgenstein does not try to define knowledge but suggests look-
ing at the way the word knowledge is used in natural languages.
He apprehends knowledge as an instance of a family resemblance
reconstructing the concept of knowledge as a cluster conception that
comprises relevant features but cannot be adequately captured by
any precise definition.
Wittgenstein also discusses the distinction between sense-data
and reality, indicating that people learn what a tree is by being shown
trees and not by being given tree sense-data. According to him, the
tree sense-data are irrelevant in this case and do not bear on any-
thing useful for people. The possibility of illusion is there, of course,
but there are criteria for deciding what constitutes an illusion. These
criteria work for most people. Those for whom they fail are called
“mad” and are widely disregarded.
However, according to contemporary psychological and neuro-
physiological theories, an individual sees a tree only if she receives
tree sense-data through her senses. Yet the reception of sense-
data is not enough. To see a tree as a tree, the brain has to correctly
process sense-data, building a relevant image and assigning the cor-
rect name “tree” to this image. Besides, it is possible to know what
a tree is by observing not trees but their images, e.g., pictures or
movies with trees. After images of trees are stored in the memory,
an individual can see a tree in her dreams. In this case, the brain
simulates acceptance of sense-data from a physical tree or from its
picture.

1.3. Structure of the book

There’s only one solution: look at the map.

Umberto Eco, Foucault’s Pendulum

The map is not the territory,


and the name is not the thing named.

Alfred Korzybski

The main goal of this book is to achieve a synthesized understand-


ing of the complex, multifaceted phenomenon called knowledge by
building a synthetic theory of knowledge, which allows systematizing
and binding together existing approaches to knowledge in one unified
theoretical system.
However, we do not try to represent all approaches and directions
of knowledge studies in a complete form or even to give all important
results of this area. Our goal is to give an introduction to the main
approaches and directions, explaining their basics and demonstrating
how they can be comprehended in the context of the general theory
of knowledge. Besides, references are given to sources where an inter-
ested reader can find more information about these approaches and
directions of knowledge studies. The goal is to present a broad pic-


ture of contemporary knowledge studies, provide a unifying theory of
knowledge and synthesize all existing approaches in an amalgamated
structure of ideas, constructions, methods, and applications.
That is why in Chapter 1, we explain the leading role of knowledge
in the contemporary society and describe a brief history of knowl-
edge studies in different countries and cultures exhibiting not only
the development of knowledge studies in Western countries but also
achievements of the Eastern civilizations in the fields of logic and
epistemology.
Chapter 2 studies properties of knowledge and its classifications.
Usually, knowledge is studied in the context of beliefs (cf., for exam-
ple, (Gettier, 1963; Pollock, 1974; Pollock and Cruz, 1999; Dretske,
2000)). In this book, we treat knowledge in the more general setting,
namely, in the context of epistemic structures. Knowledge items are
epistemic structures. Beliefs are epistemic structures associated with
descriptive knowledge. However, beliefs are related only to declar-
ative or descriptive knowledge while there are also other epistemic
structures, to which knowledge is intrinsically attached. In particu-
lar, there is operational knowledge and representational knowledge.
To understand knowledge, it is important to know that there are
various types, sorts, and kinds of knowledge. That is why we start
Chapter 2 (Section 2.1) with exposition and exploration of diverse
classifications, taxonomies and typologies of knowledge.
In Section 2.2, different approaches to knowledge characterization
are discussed and analyzed from the perspective of the existential
characteristics of knowledge. In Section 2.3, dimensions of knowledge
are described and investigated. Section 2.4 contains knowledge about
metaknowledge and metadata, where metaknowledge is knowledge
about knowledge, while metadata provide information about data.
However, it is necessary not only to know properties of knowl-
edge but also to be able to evaluate and justify these properties. This
is the main topic of Chapter 3, where Section 3.1 tells the reader
about knowledge evaluation and Section 3.2 discusses knowledge jus-
tification issues. It is especially significant to appraise and justify
consistency of knowledge. That is why in Section 3.3, we explain
how to work with knowledge that has been traditionally considered


inconsistent, giving an overview of existing approaches to this prob-
lem and an exposition of some parts of the theory of logical varieties.
This section is based on the papers (Burgin and de Vey Mestdagh,
2011; 2015).
In the next part of the book, we separate and study three key
levels of knowledge: the microlevel, macrolevel, and megalevel (Bur-
gin, 1997). Chapter 4 describes the microlevel, or the quantum level
of knowledge, its structures, properties, and processes. This level
contains “bricks” and “blocks” of knowledge that are used for con-
struction of other knowledge systems. We call such minimal “bricks”
knowledge quanta and study them in Section 4.1. Two fundamen-
tal theories of the knowledge quanta are presented — the Quantum
Theory of Knowledge (QTK) created by Burgin (1995a; 1997; 2004)
and the Semantic Link Network Theory (SLNT) developed by Zhuge
(2002; 2004; 2010; 2012). Relations between these two theories are
established.
“Blocks” of knowledge are identified with structured quantum
knowledge items and we consider such quantum knowledge items
as signs and symbols discussing different approaches and models
in Section 4.2. Operations with and relations between knowledge
quanta and other quantum knowledge units representing dynamics
and structural organization of the quantum level of knowledge are
constructed and explored in Section 4.3.
On the macrolevel, or the level of average knowledge, con-
sidered in Chapter 5, researchers study knowledge representation
used by people and artificial systems for practical purposes. Sec-
tion 5.1 explains utilization of languages, such as natural, math-
ematical, programming, and scientific languages, for knowledge
representation, preservation, and processing. Section 5.2 presents
means of logics, which are used for knowledge representation, val-
idation, preservation, and processing, while Section 5.3 describes ele-
ments of the theory of abstract properties, which is a synthesis of
logic and qualitative physics providing even more powerful means
for knowledge representation, validation, acquisition, preservation
and processing. Next three sections of Chapter 5 are dedicated to
knowledge representation in AI. Semantic networks and ontology are


the topics of Section 5.4. Scripts and productions are exposed in
Section 5.5. Frames and schemas are studied in Section 5.6 with
the emphasis on the new direction in this area called mathematical
schema theory.
On the megalevel, or the global level of knowledge, researchers con-
sider immense knowledge systems such as mathematics, physics,
biology, and advanced mathematical and physical theories. Chapter 6
contains an exposition of the global level of knowledge describing
structure and organization of such knowledge systems.
Knowledge production, acquisition, engineering and application
are studied in Chapter 7. Section 7.1 analyzes knowledge production
and acquisition as basic cognitive processes. Section 7.2 is concerned
with problems of knowledge organization and engineering. Section 7.3
treats issues of knowledge application and management.
Relations between information and knowledge are stud-
ied in Chapter 8. Section 8.1 presents structural aspects of
knowledge–information duality exploring different opinions about the
triad Data–Information–Knowledge. Section 8.2 considers relations
between epistemic structures and cognitive information. Dynamic
aspects of knowledge, data, and information interaction are the main
concern of Section 8.3. Section 8.4 analyzes information as a source
of knowledge, while Section 8.5 investigates knowledge as a measure
of information in the context of mathematical stratum of the general
theory of information.
The last Chapter 9 contains some conclusions and directions for
future research.
Exposition of material is aimed at different groups of readers.
Those who want to know more about history of knowledge studies
and get a general perspective of the current situation in this area
can skip proofs and even many theoretical results given in the strict
mathematical form. At the same time, those who have a sufficient
mathematical training and are interested in formalized knowledge
theories can skip preliminary deliberations and go directly to the
sections that contain mathematical exposition. Thus, a variety of
readers will be able to find interesting and useful issues in this book
if each reader chooses those topics that are of interest to her or


to him.
It is necessary to remark that the research in the area of knowledge
studies and application is extremely active, while knowledge is related
to almost everything. Consequently, it is impossible to include all
ideas, issues, directions, and references to materials that exist in this
area, for which we ask the reader’s forbearance.
Chapter 2

Knowledge Characteristics
and Typology

In questions of science, the authority of a thousand is not worth


the humble reasoning of a single individual.
Galileo Galilei

2.1. The differentiation and classification of knowledge

There is a strong tendency to reduce the many to the few,


the complex to the simple, the various to the uniform.
Richard Pring

For millennia, philosophers, who were the first to study the prob-
lems of knowledge, have asserted that knowledge is a kind of belief,
reducing all knowledge to declarative or descriptive knowledge and
actively imposing this opinion on all others. Even now, the major-
ity of philosophers believe in this declaration. For instance, such
experts in contemporary philosophical theories of knowledge as John
Pollock and Joseph Cruz write: “Epistemology might better be called
“doxastology”, which means the study of beliefs” (Pollock and Cruz,
1999).
However, this understanding was challenged. At first, physi-
cists discovered operational knowledge. It was Nobel laureate Percy
Williams Bridgman (1882–1961), who insisted that conceptual
knowledge is, in essence, operational. He wrote that “any concept

[is] nothing more than a set of operations; the concept is synony-


mous with the corresponding set of operations” (Bridgman, 1927).
Many scientists, especially physicists and psychologists, became
enthusiasts of this methodological approach, bringing into being oper-
ationalism, also called operationism, as a direction in methodology
of science based on the idea that to know the meaning of a concept
is to have a method of measurement for it (Bridgman, 1936; 1959;
Boring et al., 1945; Chang, 2009).
Behaviorist psychologists, such as Edwin Boring (1886–1968),
Stanley Smith Stevens (1906–1973), and Edward Chace Tolman
(1886–1959), became ardent adherents of operationalism. They used
operationalism as a weapon in their fight against more traditional
psychologists (Feest, 2005).
Nevertheless, despite the initial popularity of Bridgman’s
approach, by the middle of the 20th century, the common attitude
among philosophers and philosophically-minded scientists towards
operationalism was strongly critical (Chang, 2009), although opera-
tional knowledge was explicitly used by logical positivism in its veri-
fication theory of meaning (Frank, 1956).
Another kind of knowledge — representational knowledge — was
elucidated in methodology of science. Namely, the structuralist direc-
tion represented scientific knowledge in the form of a scientific theory
as a system of models (Sneed, 1971; Stegmüller, 1976; 1979; Balzer
et al., 1987).
The computational approach treats scientific knowledge in the
form of a scientific theory as complex data structures in com-
putational systems, which contain organized packages of rules
(operational knowledge), concepts (representational knowledge), and
problem solutions (operational and descriptive knowledge) (Thagard,
1988).
Lobovikov included questions and problems (erotetic knowledge)
into scientific knowledge in the form of a scientific theory (Lobovikov,
1984).
Pearce and Rantala combined representational and descriptive
knowledge in their model of a scientific theory (Pearce and Rantala,
1981).
Later the structure-nominative direction in methodology of
science included descriptive, representational knowledge and opera-
tional knowledge as specific components of general knowledge sys-
tems in general and scientific theories in particular (Burgin and
Kuznetsov, 1988; 1989; 1991; 1992; 1993; 1994; Balzer et al., 1991;
Burgin, 2011). Namely, in the structure-nominative model of sci-
entific knowledge, operational knowledge constitutes the pragmatic-
procedural subsystem, while representational knowledge constitutes
the model-representing subsystem (Burgin and Kuznetsov, 1988;
1989; 1991; 1992a; 1992b; 1994).
In addition, operational knowledge also called procedural knowl-
edge, has become popular in knowledge management (Valente and
Rigallo, 2002; 2003).
Some philosophers also made a distinction between “know-that”
as descriptive/declarative knowledge and “know-how” as opera-
tional/procedural knowledge. In general, they interpreted opera-
tional/procedural knowledge as knowledge that is manifested in the
use of a skill, whereas descriptive/declarative knowledge as explicit
knowledge of a fact (Fantl, 2012).
Although there is a discussion whether ancient Greeks consid-
ered “know-how” as a specific kind of knowledge, it is assumed that
Gilbert Ryle was the first philosopher to treat “know-how” as knowl-
edge distinguishing it from propositional knowledge or “know-that.”
He identified “know-how” with a disposition whose “exercises are
observances of rules or canons or the application of criteria” (Ryle,
1949). His main argument was that it was possible to have lots of
knowledge-that, without possessing any knowledge-how. In addition,
he insisted that “knowledge-how is a concept logically prior to the
concept of knowledge-that” (1971/1946).
Later in her analysis of behavior from the epistemological perspec-
tive, Katherine Hawley came to the conclusion that “know-how” was
a matter of successful actions plus warrant (Hawley, 2003).
In a similar way, Thorkelson (2008) writes:

“What is knowledge? The time-worn and widely criticized philosophi-
cal definition is “justified true belief ” (Gettier, 1966; Goldman, 1967;
Lewis, 1996); for anthropological purposes it suffers from three major
problems centered especially around the term “belief.” First, the defini-
tion reduces knowledge to propositional knowledge, “knowing-that,” thus
occluding other knowledge types like practical “know-how” (knowledge
embodied in routinized dispositions), affective states (knowledge embod-
ied in emotion and sentiment), and phenomenological acquaintance (con-
ferred, for instance, by sensory experience or artistic representation).”

Although many philosophers started to understand that


knowledge-how and knowledge-that are distinct kinds of knowledge
and consequently, procedural knowledge is non-propositional, others
tried to reduce procedural knowledge to propositions. For instance,
Bruce Kogut and Udo Zander write:
“Procedural knowledge consists of statements that describe a pro-
cess, such as the method by which inventory is minimized” (Kogut
and Zander, 1992).
Some educators also rejected reduction of knowledge to descrip-
tive knowledge and especially, to propositions or beliefs. For instance,
Scheffler (1965) discerns “know-that” as descriptive knowledge and
“know-how” as operational knowledge.
There are also extreme comprehensions of operational knowl-
edge. For instance, Nonaka and Takeuchi (1995) assume that all
knowledge is about action as any knowledge must be used to
some end.
Thus, methodological and sociological analysis shows that there
are three basic categories of knowledge:

∗ Representational knowledge about an object is representations of


this object by knowledge structures, such as models and images,
e.g., when Bob has an image of his friend Ann, it is representational
knowledge about Ann.
∗ Descriptive knowledge also called declarative knowledge or some-
times propositional knowledge is knowledge about properties and
relations of the objects of knowledge, e.g., “a swan is white”, “a lion
is an animal” or “three is larger than two”. However, objectively,
declarative knowledge is only a kind of descriptive knowledge,
while propositional knowledge is a kind of declarative knowledge.
∗ Operational knowledge also called procedural knowledge consists of


rules, procedures, algorithms, etc., and serves for organization of
behavior of people and animals, for control of system functioning
and for performing actions. A more exact categorization shows
that procedural knowledge is a kind of operational knowledge.

Note that in the literature, there is no agreement as to the defi-


nition of operational (procedural) knowledge. For instance, Lewicki
et al. (1987) equate procedural knowledge either with cognitive skills
or with processing rules. At the same time, Kogut and Zander claim
that operational (procedural) knowledge consists of statements that
describe the process (Kogut and Zander, 1992).
In addition, each basic category consists of three subcategories.

∗ Representational knowledge can represent statics and/or dynamics,
while dynamic knowledge, in turn, is divided into two more exact
categories — representation of functions or of processes.
∗ Descriptive knowledge can be either informal, e.g., linguistic, i.e.,
represented by texts in natural languages; or semiformal such as
the conventional mathematical language; or formal, e.g., logical,
i.e., represented by logical expressions (formulas).
∗ Operational knowledge can be either procedural, e.g., in the form of
instructions, algorithms, programs, plans and scenarios; or instru-
mental, e.g., descriptions of tools of operations, operators and per-
formers; or axiological, e.g., in the form of goals, tasks, values,
estimates, norms or judgments.

Moreover, the basic categories of knowledge contain other sub-


categories. For instance, existential knowledge, i.e., knowledge about
existence of the object of knowledge, is a kind of descriptive knowl-
edge because existence and its characteristics are properties of the
object.
Representational knowledge also comprises structural knowledge,
which is basic to problem solving in creation of plans and strate-
gies, setting conditions for different procedures, and in determining
structures of different systems.
Differentiation of knowledge into three types makes it possible to solve the
problem of relations between art and knowledge. Different thinkers
have been seriously puzzled by the following situation (cf., for exam-
ple, (Pring, 1976; Reid, 1985; Bender, 1993; John, 1998)). On one
hand, art definitely gives knowledge. On the other hand, if we assume
that knowledge is only descriptive, e.g., in the form of logical proposi-
tions, then we all know that art goes beyond descriptions, not to
mention logical propositions. For instance, each proposition in logic
has its negation but, as Pring writes, what conceivably could be the
negation of the Mona Lisa? (Pring, 1976). As a result of this confu-
sion, philosophers even suggested “to reconceive knowledge in such
a way that we may eventually come to understand how it can be
gained non-propositionally” (Worth, 2010).
In contrast to this, art can convey representational and opera-
tional knowledge. Indeed, on the one hand, art is representation of
different things. It can imitate (represent or reflect) states of the
external world — nature, people, society, etc., as well as the inner
state of the artist. “Art as a representation of outer existence (admit-
tedly “seen through a temperament”) has been replaced by art as
an expression of humans’ inner life” (Worth, 2010). In such a way,
art gives representational knowledge. On the other hand, art can teach
people providing models of different actions, behavior, and attitudes.
In such a way, art gives operational knowledge.
It is interesting that descriptive and representational knowledge
have operational representations, descriptive and operational knowl-
edge have representational (model) representations and representa-
tional and operational knowledge have descriptive representations.
For instance, productions (cf., Section 5.5) give operational repre-
sentation of descriptive (declarative) knowledge in the form of con-
ditional propositions. Algorithms serve as models of processes and
actions giving operational representation of representational (model)
knowledge. Programs in declarative programming languages (cf., Sec-
tion 5.1.3) give descriptive representation of algorithms as a form
of operational knowledge. Propositions and predicates as forms of
descriptive knowledge have model representations in the structural-
ist model (reconstruction) of a scientific theory (cf., Chapter 6), as
well as in the possible-world semantics, also called Kripke semantics
(Kripke, 1963).
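To make these mutual representations more tangible, here is a minimal sketch in Python, with the fact base, the rule, and the function name invented purely for illustration (they are not taken from the production systems discussed in Section 5.5): the declarative proposition "every swan is white" is re-expressed as a production rule that, when applied, adds the corresponding conclusions to a small fact base.

```python
# A minimal sketch (illustrative names and facts): the same knowledge unit
# in descriptive (declarative) and operational (production rule) form.

# Descriptive form: a stored proposition about swans.
declarative_fact = ("swan", "is", "white")

# Operational form: the production "IF x is a swan THEN assert x is white".
def swan_color_rule(facts):
    """Apply the production once, adding its conclusions to the fact base."""
    derived = set(facts)
    for subject, relation, obj in facts:
        if relation == "is-a" and obj == "swan":
            derived.add((subject, "is", "white"))
    return derived

facts = {("Bird7", "is-a", "swan")}
print(swan_color_rule(facts))
# contains ('Bird7', 'is', 'white') in addition to the original fact
```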
Processing of different types of knowledge in the human brain
involves corresponding types of memory. Namely, declarative memory
and procedural memory are two major parts of the long-term
memory. Experimental evidence for a distinction between declara-
tive memory and procedural memory was demonstrated by Milner
(1962).
Declarative memory (descriptive memory) is the memory that
stores declarative knowledge, such as knowledge of facts and events.
It is also called explicit memory because knowledge it accumulates
is explicitly stored and retrieved.
Procedural memory (operational memory) is the memory that
stores procedural (operational) knowledge in the form of skills and
knowledge how to do things, such as the utilization of things or move-
ments of the body. Procedural memory is also called implicit memory,
because knowledge it accumulates is typically acquired through rep-
etition and practice and used without explicit and conscious aware-
ness. Cohen and Squire (1980) coined the term procedural memory
for storing and using skills and procedures.
These parts of long-term memory involve different regions of the
brain and function in a different manner. Declarative memory uses
the hippocampus, entorhinal cortex and perirhinal cortex as the cod-
ing system and is mostly situated in the temporal cortex. Procedu-
ral memory uses the cerebellum, putamen, caudate nucleus and the
motor cortex as the coding system and is situated in different parts
of the brain. For instance, learned skills such as driving are stored in
the putamen, while instinctive procedures such as sleeping are stored
in the caudate nucleus and the cerebellum is involved with timing
and coordination of body skills.
In addition, researchers demonstrated that declarative memory
can be further sub-divided into episodic memory and semantic mem-
ory (Tulving, 1972).
Episodic memory is the memory of experiences and specific events
in time in a serial form, from which we can reconstruct the actual
events that took place at any given point in our lives.
Semantic memory is a structured representation of general factual
knowledge, such as facts, meanings, concepts and knowledge about
the external world. Knowledge in semantic memory often is abstract
and relative.
It is possible to suggest that representational knowledge is related
to eidetic memory.
As we know, there is a big variety of properties and characteristics
of knowledge, as well as many different types and kinds of knowledge.
To organize this huge diversity into a system, it is worthwhile to
classify knowledge with respect to various criteria. These criteria are
based on five types of attributes:

— Characteristics of knowledge and its representations.


— Features of processes that are related to knowledge.
— Parameters of systems that produce or/and use knowledge.
— Properties of the knowledge domain.
— Traits of the knowledge carriers.

Here are some examples.


The most popular feature of knowledge is truthfulness, which
takes two values True and False in the classical interpretation. In
fuzzy logics, truthfulness takes values in the interval [0, 1], where the
value 0 means absolutely false and the value 1 means absolutely true.
Truthfulness of knowledge depends both on characteristics of knowl-
edge and its representations and on the properties of the knowledge
domain.
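As a small illustration of graded truthfulness, the following Python sketch assigns truth values in [0, 1] to a few knowledge items and combines them with the common min/max connectives of fuzzy logic; the items, the values, and the choice of connectives are assumptions made only for this example.

```python
# Hedged sketch: truthfulness as a value in [0, 1], combined with
# the common min/max fuzzy connectives (one of several possible choices).

truth = {
    "a swan is white": 0.8,      # mostly true, but black swans exist
    "three is larger than two": 1.0,
    "the weather is warm": 0.6,  # inherently vague
}

def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)

def fuzzy_not(a: float) -> float:
    return 1.0 - a

print(fuzzy_and(truth["a swan is white"], truth["the weather is warm"]))  # 0.6
print(fuzzy_not(truth["three is larger than two"]))                       # 0.0
```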
It is necessary to remark that from ancient times, many resear-
chers have thought that knowledge and information cannot be false
assuming that being true is an inherent characteristic of knowledge.
They believe that if a belief is false, then it is not knowledge as knowl-
edge is a justified true belief. Other thinkers admit that knowledge
can be true and can be false. For instance, in his description of the
World 3, Popper (1972; 1979) asserted that this world contains all
the knowledge and theories of the world, both true and false. Thus,
Popper assumed existence of false knowledge. Burgin (2010) gives
a detailed explanation why information and hence knowledge can
be false.
An important feature of knowledge from the perspective of related
processes is complexity, for example, complexity of acquisition,
complexity of integration or complexity of learning. We see that
complexity of knowledge depends on such processes as knowledge
acquisition, knowledge integration or learning.
There are different types of professional knowledge, for exam-
ple, professional knowledge of a lawyer, professional knowledge of
an architect or professional knowledge of a physicist.
There is knowledge about specific domains, for example, mathe-
matical, physical, biological, or sociological knowledge.
It is possible to consider four kinds of knowledge based on infor-
mation characteristics introduced by Collins (1993):

1. Symbol-type knowledge.
2. Embodied knowledge.
3. Embrained knowledge.
4. Encultured knowledge.

Symbol-type knowledge is represented by symbols and can be


transferred without loss on flashcards, hard drives, CD-ROM drives,
floppy disks, and so forth.
Embodied knowledge depends on properties and functioning of the
human body. For instance, embodied knowledge of the notion chair
depends on the ability to put the body in the sitting position on a
chair.
Embodied knowledge is a kind of embedded knowledge. In this
context, a general understanding treats embedded knowledge as the
knowledge that is set in processes, products, culture, routines, arti-
facts, or structures (Horvath, 2000; Gamble and Blackwell, 2001).
Knowledge is embedded either formally, for example, through a man-
agement initiative to formalize a certain useful technique, or infor-
mally through organizational practices and people's behavior.
Embrained knowledge depends on the physical characteristics of
the brain. For instance, Collins (1993) explains, there are kinds
of knowledge dependent on the way neurons are interconnected or
on the chemistry of the brain or on the formation of solid shapes in
the brain.
Encultured knowledge depends on the social environment. For
instance, natural languages are the model example of encultured
knowledge. Thus, the right way to use a language, e.g., to speak,
is the sanction of a social group but not of a separate individual and
those who do not remain in contact with the social group will soon
cease to know how to speak properly according to the rules of the
group.
Researchers also consider three kinds of knowledge, which form a
representational classification:

1. Symbolic knowledge is represented by symbols.


2. Subsymbolic knowledge is constructed from knowledge elements
that are not symbols.
3. Wired knowledge is a part of a physical system.

Let us consider some examples.

Example 2.1.1. Images on the screen of a computer or TV are units


of representational knowledge. These images are formed from pixels
(points of different colors and brightness) on the screen. Thus, these
images are units of subsymbolic representational knowledge.

Example 2.1.2. An algorithm in the form of a computer program is


a unit of symbolic operational knowledge, while an algorithm imple-
mented in the hardware of a computer is a unit of wired operational
knowledge.

Application of the representational knowledge classification allows


researchers to solve some methodological problems. For instance, in
the theory of computations, many think that computations of neural
networks are not algorithmic because they assume that algorithms
can be only symbolic. However, algorithms as a kind of operational
knowledge can be not only symbolic but also wired, and a neural
network is just a wired algorithm. This understanding is supported
by the fact that artificial neural networks are modeled by conven-
tional computers where these networks are represented by conven-
tional symbolic algorithms in the form of computer programs.
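The following sketch illustrates the point: a single artificial neuron, which could be realized directly in hardware as wired operational knowledge, is written here as an ordinary symbolic program; the weights, bias, and inputs are arbitrary illustrative values.

```python
# A hedged sketch: a wired unit of operational knowledge (one artificial neuron)
# written as an equivalent symbolic algorithm. Weights and inputs are invented.

def neuron(inputs, weights, bias):
    """Compute a threshold unit: fire (1) if the weighted sum exceeds 0."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# The same input-output behavior could be realized by fixed circuitry
# ("wired knowledge") instead of by this symbolic description.
print(neuron([1.0, 0.0, 1.0], [0.5, -0.3, 0.2], bias=-0.4))  # 1
```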
It is possible to consider three important types of knowledge
related to information characteristics suggested by Banathy (1995):

1. Referential knowledge has meaning in the system R.


2. Non-referential knowledge has meaning outside the system R, e.g.,
knowledge that reflects mere observation of R.
3. State-referential knowledge reflects an external model of the sys-
tem R, e.g., knowledge that represents R as a state transition
system.

Philosophers usually consider two kinds of knowledge, which form


a cognitive classification:

• A priori knowledge, which is known independently of experience.


For instance, Kant interpreted a priori knowledge as a “transcen-
dental” form of knowledge coming from “rational insight”.
• A posteriori knowledge is knowledge that people get from experi-
ence that can be of two types:
◦ Empirical experience, which is accumulated from practical activ-
ity, e.g., experimentation, giving empirical knowledge.
◦ Theoretical experience, which is accumulated from mental activ-
ity, e.g., thinking, giving theoretical knowledge.

Separating knowledge with respect to the knower, i.e., to the sys-


tem that has knowledge, we come to the system classification:

— Personal knowledge.
— Communal knowledge.
— Network knowledge.

Michael Polanyi (1891–1976) explicated two sorts of personal knowl-


edge (Polanyi, 1966; 1974):

 Explicit knowledge is codified knowledge, such as knowledge found


in documents.
 Tacit knowledge is intuitive, hard to define knowledge that is
mostly experience based.
This accessibility classification is extremely popular within busi-
ness and knowledge management where the following descriptions of
these sorts of knowledge were elaborated.
Explicit knowledge can be articulated in formal language and
records, communicated by people or through information technology
and stored.
Tacit (or implicit) knowledge is personal knowledge embedded in
individuals based on their experience and involving such intangi-
ble factors as personal emotions, beliefs, procedures, perspectives,
goals and values. People can know about something but be unable
to explain the process that led to their knowledge and even explain
what they know. Tacit knowledge is difficult to articulate, commu-
nicate and store, although it can be communicated through face-to-
face contact and storytelling. According to Polanyi, tacit knowledge
underlies explicit knowledge and is more fundamental in that all
knowledge is either tacit or initially rooted in tacit knowledge,
which cannot be objective.
Tacit knowledge, located exclusively in the human mind, consti-
tutes the invisible part of organizational knowledge, including organi-
zational culture, experience, feelings, confidence, and relationships,
and is the principal driving force of the organization. Thus, it is
possible to compare the organizational knowledge base to an iceberg:
the explicit knowledge is the visible part of this iceberg, that is, codified
and classified knowledge integrated into documents, procedures and
business processes, and stored in informational systems.
However, Botha et al. (2008) pointed out that tacit and explicit
knowledge should be seen as a spectrum (the accessibility spectrum)
rather than as two separate points. Tacit and explicit knowledge
are the endpoints of the accessibility spectrum. Thus, knowledge is
mostly a mixture of tacit and explicit elements rather than being one
or the other. Taking into account this issue, it is possible to formalize
the accessibility spectrum defining a measure of knowledge explicit-
ness with the scale from 0 to 1. In this scale, the measure of tacit
knowledge will be 0, while the measure of explicit knowledge will be 1.
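A minimal sketch of such a measure is given below, under the simplifying assumption that each knowledge item is just assigned a number between 0 (fully tacit) and 1 (fully explicit); the items, scores, and thresholds are invented for the illustration.

```python
# Hedged sketch: knowledge items placed on the accessibility spectrum
# by an explicitness score in [0, 1]; all values are illustrative.

explicitness = {
    "written safety procedure": 0.95,
    "rule of thumb shared in the team": 0.5,
    "craftsman's feel for the material": 0.05,
}

def describe(score: float) -> str:
    if score >= 0.8:
        return "mostly explicit"
    if score <= 0.2:
        return "mostly tacit"
    return "mixture of tacit and explicit elements"

for item, score in explicitness.items():
    print(f"{item}: {describe(score)}")
```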
Logical tools for adequate description, valid transformation and
effective generation of explicit and implicit knowledge are developed
in (Burgin and Rybalov, 2003). Explicit and implicit knowl-
edge contain propositions that can be inconsistent for two reasons:
(1) due to utilization of different systems of knowledge generation,
e.g., strong entailment in one case and weak entailment in the other,
and (2) because of the application of these mechanisms to different parts
of new knowledge, depending on the level of conflict the latter generates.
Thus, there is no consistent classical calculus that can generate both
explicit and implicit knowledge. Therefore, the logical representation
of explicit and implicit knowledge together forms a non-trivial logical
variety (Burgin, 1997).
Here we slightly extend the accessibility classification considering
three following sorts of personal knowledge:
— Externally explicit or articulated (codified) knowledge.
— Internally explicit (unarticulated) knowledge.
— Tacit (incommunicable) knowledge.
Two latter categories form implicit knowledge in contrast to
explicit knowledge.
In this extended accessibility classification, internally explicit
knowledge is situated in the accessibility spectrum between exter-
nally explicit knowledge and tacit knowledge.
People very often have internally explicit but unarticulated knowl-
edge. This is well known to teachers, who habitually encounter situ-
ations when their students can apply definite rules to solve problems
but cannot explain and sometimes even repeat these rules.
There are also three gradations of implicit knowledge:
— Instincts are a form of operational knowledge.
— Unconscious knowledge belongs to the knower but the knower is
not aware of it.
— Conscious but explicitly inexpressible knowledge — the knower
knows that she/he has this knowledge but cannot explicitly
express it.
The following situations apparently demonstrate the differ-
ence between unconscious knowledge and conscious but explic-
itly inexpressible knowledge. Imagine students in a class who are
performing arithmetical operations with numbers. There are always
students who correctly use the commutative law in their calcula-
tions. However, some of them do this without remembering this law.
They have unconscious knowledge of this law. Other students do this
remembering this law, but if the teacher asks them to formulate the
law, they would be unable to do this. They have conscious but explic-
itly inexpressible knowledge of this law. Finally, there are students
who have externally explicit knowledge of the commutative law.
There are three gradations of explicit knowledge:
— Personal knowledge.
— Shared knowledge.
— Personalized (internalized) knowledge.
When an engineer invents some new device, her knowledge about
this device is personal. If she writes a paper or tells a colleague about
it, her knowledge about this device becomes shared. For the colleague
who hears about this new device and understands it, this knowledge becomes
personalized (internalized).
In addition, there is a carrier-based classification suggested in
(Nonaka and Takeuchi, 1995):
— Individual knowledge.
— Group knowledge.
— Organizational knowledge.
— Knowledge of a group of organizations or super-organizational
knowledge
Amalgamating three latter classes into one class, we obtain the
following classification:
— Personal knowledge.
— Public knowledge.
One more knowledge classification is considered in (Ekinge and
Lennartsson, 2000):
— Individual knowledge.
— Shared knowledge.
— Objectified knowledge.
As sometimes happens in science, the same word tacit is used
for denoting different concepts because it is also used in the following
classification, which was also suggested by Polanyi.
— Focal knowledge is about the object or phenomenon that is in
focus.
— Tacit knowledge is the general background knowledge used as a
tool to handle or improve what is in focus.
The focal and tacit dimensions are complementary. Tacit knowl-
edge varies from one situation to another. It functions as a back-
ground knowledge assisting in accomplishing a task which is in focus.
By the level of externalization, knowledge is broken up into three
classes:
1. Personal knowledge is knowledge that belongs to an individual.
2. Shared knowledge is knowledge that is shared by a group of
individuals.
3. External knowledge is knowledge that does not belong to an
individual.
For instance, knowledge of what a person is going to do during
the day is personal. Knowledge of mathematics is shared knowledge.
For a non-mathematician, knowledge of category theory is external
knowledge.
By the level of internalization, knowledge is divided into three
categories:
1. Subconscious knowledge is knowledge of an individual such that
the individual is not aware of its existence.
2. Implicit knowledge is knowledge of an individual such that the
individual is aware of its existence but cannot express it, e.g.,
verbalize it.
3. Explicit knowledge is knowledge of an individual such that the
individual can express it, e.g., verbalize it.
Aristotle’s very influential three-fold classification of disciplines
as theoretical, productive, or practical is used as the base for classi-
fication of forms of knowledge in (Smith, 1999).
— Theoretical knowledge pursues truth for its own sake.
— Productive knowledge is knowledge for making things.
— Practical knowledge displays in ability of making judgments and
decisions.

Theoretical knowledge is related to the form of thinking appro-


priate to theoretical activities, which according to Aristotle, is con-
templative involving meditating over facts and ideas that the person
already possesses (knows).
Productive knowledge and enquiry is used in productive disci-
plines for performing an action or operation. Aristotle associated
this form of thinking and doing, with the work of craftspeople or
artisans.
Practical knowledge was originally associated with ethical and
political life. Its purpose was the cultivation of wisdom and knowl-
edge involving decision-making and human interaction.
For instance pure mathematics is theoretical knowledge, tool-
making procedures are productive knowledge, and social work train-
ing methods are practical knowledge.
Piaget (1967/1971) identified three kinds of knowledge:

1. Physical knowledge consists of facts about the features of some-


thing such as “the window is transparent,” “the crayon is white,”
“the cat is grey” and “the air was cold and dry yesterday.” Phys-
ical knowledge directly reflects the objects and can be obtained
by exploring objects and noticing their qualities.
2. Social knowledge consists of names and conventions made up by
people. Here are some examples: “The name of this dog is Bounty,”
“New Year is on January 1,” or “It is polite to say thank you for a
gift.” Social knowledge may be arbitrary and is knowable by being
told or demonstrated by other people, found in books, journals,
and on the Internet.
3. Logico-mathematical knowledge, according to Piaget, consists of
relations and structures. Logico-mathematical knowledge is con-
structed by each individual, inside his or her own head or learned
from people, books, journals, and the Internet.
There is also a problem-oriented knowledge classification.
• Know-what is the fundamental form of knowledge, e.g., people/
group/organizations know what they know (perhaps through their
formal education) but do not necessarily know when and how to
apply the knowledge to solve problems.
• Know-how is the ability to solve various problems.
• Know-why is explanatory knowledge enabling individuals to move
a step beyond know-how and create opportunities to deal with
unknown interactions and unseen situations.
Sveiby (1997) analyzes two types of knowledge:
• Agentive knowledge is mostly oriented towards using the body as
a tool.
• Intellective knowledge is oriented towards using the mind as a tool.
In the domain of religion and mysticism, usually two types of knowl-
edge are considered:
• Esoteric knowledge is preserved and/or understood by a small
group of those specially initiated, or of rare or unusual interest.
• Exoteric knowledge is, to the contrary, open to everybody, although
this does not mean that anybody can understand it.
By its representation of the domain (object), knowledge has three
types:
— Exact knowledge.
— Fuzzy or vague knowledge.
— Indeterminate knowledge including probabilistic knowledge.
To understand the difference between these types of knowledge,
let us consider the following examples. Imagine we take an urn with
ten balls. If all balls in the urn are definitely blue and we know this,
then we have exact knowledge that if we take one ball from the urn
at random, then it will be a blue ball. If the color of the balls is
between blue and green, then because there is no strict boundary
between blue and green, we have fuzzy knowledge that if we take one
ball from the urn at random, then it will be a blue ball. If five balls
in the urn are definitely blue, five balls in the urn are definitely red
and we know this, then we have probabilistic knowledge that if we
take one ball from the urn at random, then it will be a blue ball,
namely, we assume that the probability will be 1/2.
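The probabilistic case of the urn example can be checked with a small simulation; the sketch below merely estimates the probability of drawing a blue ball from an urn with five blue and five red balls by repeated random draws (the number of trials is an arbitrary choice).

```python
# Hedged sketch: estimating the probability of drawing a blue ball
# from an urn with five blue and five red balls.
import random

urn = ["blue"] * 5 + ["red"] * 5
trials = 100_000
blue_draws = sum(1 for _ in range(trials) if random.choice(urn) == "blue")

print(blue_draws / trials)  # close to 0.5, matching the exact probability 1/2
```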
Probabilistic knowledge is knowledge for which only the probabil-
ity of being correct (true) is given. On one hand, this is contrary to
the approach to knowledge established through millennia, according
to which knowledge has to be always true. On the other hand,
probability theory was created
by Blaise Pascal (1623–1662) and Pierre de Fermat (1601–1665) in
an attempt to describe uncertain knowledge in mathematical terms.
However, the crucial incursion of probabilistic knowledge hap-
pened with the advent of quantum mechanics, which persuasively
demonstrated that in many situations, people could have only prob-
abilistic knowledge about nature.
This changed understanding of the role of probabilistic knowledge.
The chief proponent of the new approach was Hans Reichenbach
(1891–1953). He assumed that probability is the pillar of knowledge
systems and without this understanding, the structure of the world
cannot be correctly represented and interpreted because, according
to Reichenbach, knowledge about the future cannot be as accurate
as knowledge about past events (Reichenbach, 1949). Consequently,
knowledge about the future is inevitably probabilistic. Moreover,
descriptions of the past events also are not completely accurate and
thus, they demand probabilistic representation. In essence, all knowl-
edge can be only probabilistic in such a way that for each knowledge
unit, there is the probability of being true or correct.
It is interesting that mathematics, which is traditionally treated
as the most exact discipline because, as many think, mathematical
proofs establish, in some sense, absolute knowledge, is also coming to
probabilistic knowledge. Namely, some mathematicians suggest using
probabilistic proofs of mathematical results. In this case, a theorem is
asserted as true only with some high probability p (cf., for example,
(Bass and Burdzy, 1989; Alon and Spencer, 2000)).
Traditionally probability is considered as a function that takes val-
ues in the interval [0, 1] although each value of this function is also
called the probability of an event. All conventional interpretations of
probability support this assumption about the range of probability,
while all popular formal descriptions, e.g., axioms for probability,
such as Kolmogorov's axioms, canonize this premise. However, sci-
entific research and practical problems brought researchers to the
necessity to use more wide-ranging concepts of probability in general
and negative probabilities, in particular. Negative probabilities have
been extensively used in physics (Dirac, 1930; 1942; 1974; Heisenberg,
1931; Wigner, 1932; Pauli 1943; 1956; Feynman, 1950) and mathe-
matical finance (Jarrow and Turnbull, 1995; Duffie and Singleton,
1999; Forsyth et al., 2001; Haug, 2004; Burgin and Meissner, 2011;
2012), and their mathematical theory is developed in (Bartlett, 1945;
1986; Allen, 1976; Burgin, 2009; 2010; Khrennikov, 2009). Appli-
cations of negative probability show that it has been useful for
knowledge evaluation in physics and mathematical finance. How-
ever, negative probability could be a useful tool for representation,
exploration and utilization of probabilistic declarative and represen-
tational knowledge in general and not only in these areas. This pos-
sibility is based on the existence of opposite knowledge, namely, if a
statement r contains some knowledge, then it is natural to assume
that the statement ¬r (not r) contains opposite knowledge.
Let us take a statement r and assume that it is true with the
probability p(r). In classical logic, the Law of Excluded Middle tells
us that when r is not true, then the negation ¬r of r is true. This
implies the equality p(¬r) = 1 − p(r). However, in real life, there
is often a possibility that different options exist in the case when r
is not true. For instance, it is possible that r is not defined for some
objects. Thus, the probability p(¬r) is not uniquely defined by the
probability p(r). In this situation, it is beneficial to use probabilities
that can take both positive and negative values, treating ¬r as the
statement opposite to r. Then a negative value of the probability
p(r) can be interpreted as a positive probability for the opposite state-
ment ¬r. It is possible to read more about interpretation of negative
probability in (Burgin, 2010e).
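The reasoning above can be written out as a short calculation; the quantity u below, denoting the probability that r is undefined so that neither r nor its negation holds, is introduced only for this illustration and is not taken from the cited sources.

```latex
% Classical case: the Law of Excluded Middle makes r and \neg r exhaustive.
%   p(r) + p(\neg r) = 1  =>  p(\neg r) = 1 - p(r).
% If a third option is admitted (r undefined with probability u), then
%   p(r) + p(\neg r) + u = 1  =>  p(\neg r) = 1 - p(r) - u,
% so p(\neg r) is no longer determined by p(r) alone.
\begin{align*}
  \text{with excluded middle:} \quad & p(\neg r) = 1 - p(r),\\
  \text{with an undefined case } u: \quad & p(\neg r) = 1 - p(r) - u.
\end{align*}
```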
Probabilistic operational knowledge is represented by proba-
bilistic algorithms and automata, while more general indetermi-
nate operational knowledge is represented by non-deterministic
algorithms and automata.
A non-deterministic algorithm is an algorithm where the result
and/or the way the result is obtained depend on chance. Examples of
non-deterministic algorithms are non-deterministic finite automata
or non-deterministic Turing machines (Burgin, 2005). In turn,
non-deterministic algorithms are examples of non-deterministic oper-
ational knowledge.
A probabilistic algorithm also called randomized algorithm is an
algorithm where the result and/or the way the result is obtained
depend on chance with the known probability. Examples of proba-
bilistic algorithms are probabilistic finite automata or probabilistic
Turing machines (Burgin, 2005). Probabilistic algorithms are exam-
ples of probabilistic operational knowledge.
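As an illustration of probabilistic operational knowledge, here is a short randomized algorithm in Python, a Monte Carlo estimate of the number pi, in which both the course of the computation and the result depend on chance; the sample size is an arbitrary choice.

```python
# Hedged sketch: a randomized (probabilistic) algorithm as a unit of
# probabilistic operational knowledge - a Monte Carlo estimate of pi.
import random

def estimate_pi(samples: int = 100_000) -> float:
    """Estimate pi from the fraction of random points falling in a quarter circle."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples

print(estimate_pi())  # approximately 3.14, varying from run to run
```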
There is also a domain-oriented classification:

— General knowledge.
— Domain-specific knowledge.

Note that general and domain-specific knowledge are the end-


points of the knowledge domain spectrum.
Pollock and Cruz (1999) divide knowledge into several areas:

— Perceptual knowledge is knowledge from perception.


— A priori knowledge is what is known independently of experience.
— Moral knowledge is knowledge of ethical principles.
— Memorized knowledge is knowledge from the memory.
— Inductive knowledge is knowledge of inductive generalizations.
— Knowledge of other minds.

Here are some more classifications of knowledge.

Reif (1997) uses the following classification of knowledge:

— General knowledge.
— Procedural interpretation knowledge.
— Declarative knowledge.
— Procedural knowledge.
— Special knowledge.
— Compiled knowledge.
— Coherent knowledge.
Reif and Allen (1992) use the following classification of knowledge:
— General knowledge.
— Main interpretation knowledge.
— Definitional knowledge.
— Ancillary knowledge.
— Supplementary knowledge.
— Case-specific knowledge.
— Entailed knowledge.
— Concept knowledge.
To categorize knowledge, Wiig (1993) constructs a three-
dimensional classification. The first is the possession dimension with
three categories:
— Public knowledge.
— Shared knowledge.
— Personal knowledge.
The second is the dynamic dimension with two categories:
— Active knowledge.
— Passive knowledge.
The third is the typological dimension with four categories:
— Factual knowledge.
— Conceptual knowledge.
— Expectational knowledge.
— Methodological knowledge.
To categorize knowledge, Boisot (1998) constructs a three-
dimensional classification. The first is the codification dimension with
two categories:
— Codified knowledge.
— Uncodified knowledge.
The second is the abstraction dimension with two categories:
— Abstract knowledge.
— Concrete knowledge.
The third, the diffusion dimension, has two categories:
— Diffused knowledge.
— Undiffused knowledge.
Ueno et al. (1987) consider two types of knowledge:
— Factual knowledge (facts).
— Knowledge for decision-making (rules).
van Dijk (2004) introduces several classifications of knowledge.
The social classification:
— Personal knowledge.
— Interpersonal knowledge.
— Social (group) knowledge.
— Cultural knowledge.

The hierarchical classification:


— Specific/particular knowledge.
— General knowledge.

The ontological classification:


— Real knowledge.
— Concrete knowledge.
— Abstract knowledge.
— Fictitious knowledge.
— Historical knowledge.
— Future knowledge.

The confidence classification:


— Absolutely certain knowledge.
— More or less certain knowledge.
Tuomi (1999) suggested an eight-fold bidirectional classification
of knowledge, which is presented in Table 2.1 and has eight classes
of knowledge.
Table 2.1. The eight-fold bidirectional classification of knowledge.

                                  Dynamic typology
Acquisition typology              Self-referential, i.e.,       Stockpiled (or sediment),
                                  active, knowledge             i.e., passive, knowledge
Ontogenetic, i.e., learned,       Cognitive knowledge           Habitual knowledge
knowledge
Phylogenetic, i.e., trans-        Socio-cultural knowledge      Instinctive knowledge
generational, knowledge

Tuomi (1999) remarks that habits, i.e., habitual knowledge and


its expression in behavior, bridge the mind and body by imbedding
meaning into the body. Besides, in contrast to active knowledge, pas-
sive (sediment) knowledge does not change or changes very slowly. On
the individual level, passive (sediment) knowledge is wired into the
structure of the personality, while on the organizational (social) level,
it is embedded in the organizational (respectively, social) practice.
De Jong and Ferguson-Hessler (1996) use the following classifica-
tion of knowledge:
— Situational knowledge is knowledge about situations as they typ-
ically appear in a particular domain.
— Procedural knowledge contains actions and operations that are
valid within a domain helping problem solver to make transitions
from one problem state to another.
— Conceptual knowledge is static knowledge about facts, principles,
and concepts that apply within a domain.
— Strategic knowledge helps organizing problem-solving processes
providing a general plan of solution activities.
In addition, De Jong and Ferguson-Hessler (1996) define levels of
knowledge introducing the hierarchical classification:
— Surface or superficial knowledge.
— Deep or deep-level knowledge.
There are different approaches to discern surface and deep-level
knowledge. The accessibility hardship differentiates these types in the
following way:
— Surface or superficial knowledge is easily accessible knowledge.
— Deep or deep-level knowledge is knowledge that is hard to obtain.
For instance, according to the accessibility hardship, knowledge
that the Sun gives light is surface knowledge, while knowledge that
the Sun radiates its energy due to thermonuclear processes is deep
knowledge.
The representability trait sets these two types apart on the differ-
ent foundation:
— Surface or superficial knowledge is knowledge about outward
attributes of the knowledge object (domain).
— Deep or deep-level knowledge is knowledge about imperative prop-
erties of the knowledge object (domain).
For instance, according to the representability trait, knowledge
that the Earth is big is surface knowledge, while knowledge that the
Earth is a planet is deep knowledge.
One of the criteria for knowledge classification is the nature of the
carrier of knowledge. According to this criterion, we have the follow-
ing types: digital knowledge, printed knowledge, written knowledge,
symbolic knowledge, molecular knowledge, quantum knowledge, and
so on. For instance, digital knowledge is represented by digits, printed
knowledge is contained in printed texts, and quantum knowledge is
contained in quantum systems.
Another criterion is the type of the system that acquires infor-
mation used in knowledge formation. According to this criterion,
we have the following types: visual knowledge, auditory knowledge,
olfactory knowledge, cognitive knowledge, genetic knowledge, and so
on. For instance, according to neuropsychological data, 80% of all
information that people get through their senses is visual informa-
tion, 10% is auditory information, and only 10% of information
comes through the other senses.
One more criterion for knowledge classification is the specific
domain this knowledge is about. According to this criterion, we have
the following types: physical knowledge, biological knowledge, genetic
knowledge, social knowledge, physiological knowledge, ethical knowl-
edge, esthetic knowledge, weather knowledge, car knowledge, emo-
tional knowledge (in the sense of (Sanders, 1985; Keane and Parrish,
1992; George and McIllhagga, 2000; Bosmans and Baumgartner,
2005; Knapska et al., 2006)), author knowledge, political knowledge,
health care knowledge, quality knowledge, geological knowledge, eco-
nomical knowledge, stock market knowledge, and so on.
One more criterion is the area to which knowledge is applied.
This criterion determines orientations of knowledge. It is possible to
discern the following orientations of knowledge:
∗ Cognitive knowledge provides information about different objects
and domains.
∗ Procedural knowledge serves for organization of behavior of peo-
ple and animals, functioning of various systems and performing
actions.
∗ Educational knowledge helps learning and becoming educated.
∗ Pragmatic knowledge serves for gaining something.
∗ Economic knowledge serves for getting profit.
Machlup (1992) introduced five types of knowledge:
◦ practical knowledge;
◦ intellectual knowledge, which includes knowledge related to general
culture and knowledge that satisfies intellectual curiosity;
◦ pastime knowledge, i.e., knowledge that satisfies non-intellectual
curiosity or the desire for light entertainment and emotional
stimulation;
◦ spiritual and religious knowledge;
◦ unwanted knowledge, which is accidentally acquired and aimlessly
retained.
Kesh and Ratnasingam (2007) use the following knowledge clas-
sification:
• Declarative knowledge as know-about;
• Procedural knowledge as know-how;
• Individual knowledge as knowledge created and inherent in the
individual;
• Social knowledge as knowledge created and inherent in the collec-
tive actions of the group;
• Conditional knowledge as know-when;
• Relational knowledge as know-with;
• Pragmatic knowledge as useful knowledge of an organization.

Knowledge usually has gradations in its orientation. Let us consider


some of them.
Cognitive levels of knowledge about an object A reflect the intent
of knowledge. These levels, or grades, are ordered from the lowest to
highest:

1. Knowledge about existence of the object A, which includes naming


when this knowledge is explicit.
2. Knowledge as a description of the object A and/or of what is
related to the object A, which includes knowledge of properties of
A and/or of what is related to A.
3. Knowledge as understanding (of a description) of properties of the
object A and/or of what is related to the object A.
4. Knowledge as holistic understanding of the object A.
5. Knowledge as an ability to explain properties of the object A
and/or of what is related to the object A.
6. Knowledge as an ability to explain the structure of the object A.
7. Knowledge as an ability to explain the object A as a system with
its internal and external connections.

As an example of levels of knowledge, we can take knowledge


about such an object as the Earth. At first, people only knew that
there was something where they lived, that is, the Earth (the first
level). Then they found some properties of the Earth by observation
(the second level). For instance, they found that different plants grow
on the Earth and different animals live on the Earth. Later people
began to understand some properties of the Earth (the third level).
For instance, they understood how to use soil to grow useful plants
and how seasons are changing. However, holistic understanding of the
Earth was achieved only when it was demonstrated that the Earth
is a planet, which rotates around the Sun (the fourth level). Later
scientists explained some properties of the planet Earth (the fifth
level). For instance, it was explained why the Earth rotates around
the Sun. Finding the configuration of the Solar System brought the
knowledge about the external structure of the Earth, while geological
studies explained (to some extent) the inner structure of the Earth
(the sixth level). However, the seventh level is not yet achieved as
science still does not have a good explanation of the Earth as a mul-
tifaceted system, which includes ecological, geological, and physical
(not only mechanical) explanations.
It is possible to compress cognitive levels of knowledge into three
epistemological gradations of knowledge:

— Xerox knowledge is knowledge without understanding.


— Understandable knowledge is knowledge that is understood by its
owner.
— Explainable knowledge is knowledge that its owner can explain to
others.

Relation to the knowledge domain, i.e., the domain this knowledge


is about, gives us one more classification:

• Complete knowledge completely describes its domain.


• Partial knowledge only partially describes its domain.
• Irrelevant knowledge does not at all describe its domain.

Operational levels of knowledge about an object A reflect the


potency of knowledge. These levels are also ordered from the lowest
to highest:

1. Knowledge as an ability to perceive the object A, which usually


includes naming.
2. Knowledge as an ability to do something with the object A.
3. Knowledge as an ability to use the object A for some purpose.

For instance, at first, people were able to perceive electricity in the


form of lightning (the first level). Then they invented the lightning
rod to divert lightning from people and buildings (the second level).
Later they learned how to use electricity (the third level) and as
we know, now electricity is one of the material pillars of the human
civilization.
Relations between knowledge and a given system Q determine
three important types of knowledge:

— knowledge K is accessible for Q if Q has access to K,


— knowledge K is available for Q if Q can get (obtain) K,
— knowledge K is acceptable for Q if given K, the system Q can
accept K.

There is an essential difference between these classes.


For instance, even if a person has access to some knowledge, it
does not mean that this person can get this knowledge. Imagine
you come to a library that has one million books. You know that
knowledge you need is in some of these books but you do not know
in which one. If you do not have contemporary search tools to get
this knowledge and can only read books to find it, then it will not
be available to you. You cannot read all million books.
Here is one more example. Knowledge about Lebesgue integration,
which is a high-level mathematical concept with a developed theory,
is accessible to anybody who has a book on Lebesgue integration
but it is available only to those who know mathematics. Some laws
of physics, e.g., Heisenberg’s uncertainty principle, state that there
is knowledge about physical reality unavailable to anybody. Some
mathematical results, e.g., Gödel’s incompleteness theorems, claim
that there is knowledge about mathematical structures unavailable to
anybody. In computer science, it is proved (cf., for example, (Sipser,
1997; Burgin, 2005)) that for a universal Turing machine, knowledge
whether this machine halts given arbitrary input is unavailable.
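The last claim can be made vivid with the standard diagonalization sketch below; the function halts is a purely hypothetical oracle introduced only for this illustration, and the point of the sketch is precisely that no correct implementation of it can exist.

```python
# Hedged sketch of the classical diagonal argument. The oracle "halts" is
# hypothetical: we assume it always answers correctly and derive a contradiction.

def halts(program, data) -> bool:
    """Hypothetical oracle: True if program(data) eventually halts."""
    raise NotImplementedError("no correct implementation can exist")

def troublemaker(program):
    # Do the opposite of what the oracle predicts for the program run on itself.
    if halts(program, program):
        while True:
            pass          # loop forever
    return "halted"

# If halts were correct, halts(troublemaker, troublemaker) could be neither
# True nor False, so this piece of knowledge is unavailable in principle.
```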
As we know, when a person can get some knowledge, it does
not mean that this person accepts this knowledge. Imagine you read
about some unusual event in a newspaper, but you do not believe
that it is possible. Then knowledge about this event is available to
you, but you cannot accept it because you do not believe that it is
possible. There are many historical examples of such situations.
For millennia, mathematicians tried to directly prove that it is
possible to deduce the fifth postulate of the Euclidean geometry
from the first four postulates. Being unable to achieve this, math-
ematicians were becoming frustrated and tried some indirect meth-
ods. Girolamo Saccheri (1667–1733) tried to prove a contradiction by
assuming that the first four postulates were valid, while the fifth pos-
tulate was not true (Burton, 1997). To do this, he developed an essen-
tial part of what is now called a non-Euclidean geometry. Thus, he
could have become the creator of the first non-Euclidean geometry.
However, Saccheri was so sure that the only possible geometry is the
Euclidean geometry that at some point, he claimed a contradiction
and stopped further reasoning. Actually, his contradiction was only
applicable in Euclidean geometry. Of course, Saccheri did not realize
this at the time and he died thinking he had proved Euclid’s fifth
postulate from the first four. Thus, knowledge about non-Euclidean
geometries was available but not acceptable to Saccheri. As a result,
he missed an opportunity to obtain one of the most outstanding
results in the whole mathematics.
A more tragic situation due to biased comprehension involved
such outstanding mathematicians as Niels Henrik Abel (1802–1829)
and Carl Friedrich Gauss (1777–1855). As history tells us (Bell,
1965), there was a famous long-standing problem of solvability in
radicals of an arbitrary fifth-degree algebraic equation. Abel solved
this problem proving impossibility of solving that problem in a gen-
eral case. In spite of being very poor, Abel himself paid for printing
a memoir with his solution. This was an outstanding mathematical
achievement. That is why Abel sent his memoir to Gauss, the best
mathematician of his time. Gauss duly received the work of Abel
and without deigning to read it he tossed it aside with the disgusted
exclamation “Here is another of those monstrosities!”
Moreover, people often do not want to hear truth because truth is
unacceptable to them. For instance, the Catholic Church suppressed
knowledge that the Earth rotates around the Sun because people who
were in control (the Pope and others) believed that this knowledge
contradicted what was written in the Bible.
Relations between these three types of knowledge show that any
available knowledge is also accessible. However, not any accessible
knowledge is available and not any acceptable knowledge is available
or accessible. For instance, there are many statements that a per-
son can accept but they are inaccessible for this person. A simple
example gives theory of algorithms. It is known that given a word x
and a Turing machine T , it is impossible, in general, to find whether
T accepts x or not using only recursive algorithms (cf., for exam-
ple, (Burgin, 2005)). Thus, knowledge about acceptance of x by T is
acceptable to any computer scientist because this knowledge is neu-
tral. At the same time, this knowledge is in principle inaccessible by
recursive algorithms for infinitely many words.
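For readers who think in programs, the asymmetry behind this impossibility can be pictured with a short Python sketch; modeling a machine as an ordinary Python function and the names used below are illustrative assumptions, not notation from this book.

    # A hedged sketch: acceptance of a word by a machine is only semi-decidable.
    # A "machine" is modeled here simply as a Python function that may not halt.

    def semi_decide_acceptance(machine, word):
        """Return True if the machine halts and accepts the word;
        if the machine never halts on the word, this call never returns."""
        return machine(word) is True

    def prefix_machine(word):
        return word.startswith("ab")   # a machine that always halts

    def looping_machine(word):
        while True:                    # a machine that never halts on any input
            pass

    print(semi_decide_acceptance(prefix_machine, "abc"))   # prints True
    # semi_decide_acceptance(looping_machine, "abc")       # would run forever

The loop-forever case is exactly the one that no recursive algorithm can detect in general.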
Exploring accessibility, we find that it is possible to have access
to some knowledge to a different extent. For instance, imagine that
you need two books A and B. The first one, A, is in your library at
your home. You can go to the shelf where the book is and take it
any time you want. At the same time, the second book, B, is only
in the library of another city. You can get it, but only through
interlibrary loan. Thus, both books are accessible, but the first one
is much easier to retrieve. This shows that accessibility is a property
of knowledge, which can be estimated (measured) and used in the
knowledge quality assessment.
As knowledge may be available to a different extent, availability is
a graded property of knowledge, which can be estimated (measured)
and used in the knowledge quality assessment.
There are different levels at which knowledge may be acceptable.
For instance, knowledge about yesterday’s temperature is accept-
able as knowledge, while knowledge about tomorrow’s temperature
is acceptable only as a hypothesis. Thus, acceptability is a graded,
fuzzy, or linguistic property (Zadeh, 1973) of knowledge, which can
be estimated (measured) and used in the knowledge quality assess-
ment (cf., Section 6.2).
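In programming terms, such a graded property can be modeled as a function from knowledge items to the interval [0, 1]; in the Python sketch below the particular grades and the 0.5 threshold are illustrative assumptions.

    # Acceptability as a graded (fuzzy) property with values in [0, 1].
    acceptability = {
        "Yesterday the temperature was 15 degrees": 0.9,    # accepted as knowledge
        "Tomorrow the temperature will be 25 degrees": 0.4, # accepted only as a hypothesis
    }

    def accepted_at_level(items, threshold):
        """Select the items whose acceptability grade reaches a chosen threshold."""
        return [item for item, grade in items.items() if grade >= threshold]

    print(accepted_at_level(acceptability, 0.5))   # only the statement about yesterday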
In addition, it is possible to distinguish conditional counterparts
of accessible, available, and acceptable knowledge.

— knowledge K is conditionally accessible for Q if Q has access to
a carrier of K;
— knowledge K is conditionally available for Q if Q can get (obtain)
a carrier of K;
— knowledge K is conditionally acceptable for Q if, given K, the
system Q can accept a carrier of K.

To see the difference between accessible knowledge and condition-
ally accessible knowledge, imagine a book in English. For a person
E who knows English and has the book, knowledge in this book is
accessible. At the same time, for a person D who does not know
English and has the book, knowledge in this book is only condition-
ally accessible. There are different conditions for accessibility. One
condition is that D learns English. Another condition is that D finds
an interpreter. One more condition is that the book is
translated from English into a language that D knows.
To see the difference between available knowledge and condition-
ally available knowledge, imagine a book in English on Lebesgue
integration. For a person D who knows English, basic calculus and
can get this book, knowledge in it is available. At the same time, for
a person C whose knowledge of mathematics is very low but she can
buy this book, knowledge in it is only conditionally available.
Location characteristics generate the following classification of
knowledge:

— Individual knowledge is knowledge to which only one person has
access.
— Group knowledge is knowledge to which only people from a certain
group have access.
— Public knowledge is knowledge accessible to everybody.

Modalities reflect definite aspects of knowledge.

There are three existential modalities of knowledge:

— Existential knowledge reflects what is, i.e., the existing state of
the knowledge domain.
— Potential knowledge reflects what can be, i.e., a possible state of
the knowledge domain.
— Compulsory knowledge reflects what must be, i.e., the necessary
state of the knowledge domain.
There are three confidential modalities of knowledge:

— Assertoric knowledge affirms the content of an epistemic structure
meaning that the epistemic structure is knowledge, which asserts
something about its domain (object).
— Hypothetic knowledge conjectures the content of an epistemic
structure suggesting that an epistemic structure may be knowl-
edge.
— Erotetic knowledge inquires about the content of an epistemic
structure expressing lack of knowledge and having the form of a
question, problem or puzzle.

For instance, an assertoric proposition asserts that something is
(or is not) the case, in contrast to a hypothetic proposition, which
asserts the possibility of something being (or not being) the case,
or to an apodeictic proposition, which asserts that something is (or
is not) necessarily the case, e.g., something is necessarily or self-
evidently true or false. Note that the division of propositions into
these three classes is rather subjective, depending on one’s views.
For instance, for some people, e.g., for the majority of philosophers
and mathematicians, the proposition “two plus two always equals
four” is apodeictic. For other people, e.g., for the majority of physi-
cists, this proposition is assertoric, while for those who know about
non-Diophantine arithmetics studied in (Burgin, 1977; 1997g; 2007;
2010c) this proposition is hypothetic.
Researchers have also used other names for these types of knowledge.
For instance, LaDuke referred to erotetic knowledge as anti-
knowledge (LaDuke, 2002).
There are three temporal modalities of knowledge, which reflect
directions in knowledge time-orientation:

— Knowledge about the future has anticipation modality.
— Knowledge about the present has current modality.
— Knowledge about the past has bygone modality.

It is easy to comprehend these modalities for descriptive knowl-
edge. For instance, propositions stating something about the past,
e.g., “Yesterday was cold”, have bygone modality. Propositions
stating something about the present, e.g., “Today is warm”, have cur-
rent modality. Propositions stating something about the future, e.g.,
“According to the weather prognosis, tomorrow will be hot”, have
anticipation modality. Note that knowledge with any modality can
be true or false, although knowledge with anticipation modality is
false more often than knowledge with the other two modalities, as
people rarely have the ability to predict the future.
However, operational and representational knowledge also have
temporal modalities. For instance, taking operational knowledge, we
see that now the majority of quantum algorithms (Deutsch, 1985)
have anticipation modality because there are no quantum computers
that can perform these algorithms. At the same time, algorithms
for counting with an abacus have bygone modality, while algorithms
used in contemporary operating systems have current modality.
Taking representational knowledge, we see that, for example, the
Ptolemaic model of the Solar system has bygone modality, while the
Copernican model of the Solar system has current modality.

2.2. Existential characteristics of knowledge

Knowledge is an unending adventure at the edge of uncertainty.


Jacob Bronowski

Existence is an important property of anything. To properly inquire
whether an object A exists, it is necessary to define or, at least, to
describe this object. Thus, discussing the existence of knowledge, we need
to explain what knowledge is and here we come to a big problem.
From ancient times, as we have seen in the previous chapter, philoso-
phers and other researchers have tried to build a comprehensive def-
inition of knowledge, and still different opinions exist, causing a lot of
controversy in this area. There have been many suggestions but, in spite
of this, the diversity of essences called knowledge evades any exact
and comprehensive definition.
In spite of many unsuccessful efforts to figure out a unique def-
inition of knowledge, there are various descriptions of knowledge,
some of which are essentially disparate.
For instance, Froehlich writes, “for some philosophers, validated,
true information is that which coheres with other truths (coherence
theory of truth). For others, what corresponds to reality (correspon-
dence theory of truth). For others, it is what works or is functional
(pragmatic theory of truth). At any event it is always contextual”
(cf., (Zins, 2007)).
An even larger diversity of understandings and interpretations is
reflected in dictionaries and encyclopedias. For instance, in the Web-
ster’s Revised Unabridged Dictionary (1998), knowledge is defined as:
1. A dynamic process:
(a) the act or state of knowing;
(b) clear perception of fact, truth, or duty;
(c) certain apprehension;
(d) familiar cognizance;
(e) learning;
(f) cognition.
2. An object:
(a) that which is or may be known;
(b) the object of an act of knowing;
(c) a cognition.
3. An object:
(a) that which is gained and preserved by knowing;
(b) instruction;
(c) acquaintance;
(d) enlightenment;
(e) scholarship;
(f) erudition.
4. A property:
(a) that familiarity which is gained by actual experience;
(b) practical skill; as, a knowledge of life.
5. As a domain:
(a) scope of information;
(b) cognizance;
(c) notice; as, it has not come to my knowledge.
In the Oxford English Dictionary, knowledge is defined as:

(i) expertise and skills acquired by a person through experience
or education; the theoretical or practical understanding of a
subject,
(ii) what is known in a particular field or in total; facts and
information,
(iii) awareness or familiarity gained by experience of a fact or
situation.

In monographs on knowledge engineering (Osuga et al., 1990), we
find the following definitions:

1. Knowledge is a result of cognition.
2. Knowledge is formalized information, to which references are
made or which is utilized in logical inference.

The most popular approach in artificial intelligence is to define
knowledge functionally as it was suggested by Allen Newell (1927–
1992). It means that if an external observer O can ascribe to some
system A, e.g., to an agent, definite goals and if O witnesses that
A is behaving so as to achieve these goals in a systematic, rational
mode, i.e., according to the principle of rationality, then the observer O
assumes that A has knowledge (Newell, 1982). One of the problems
with this methodology is that, as demonstrated in (Burgin and
Krymsky, 1985), there is no unique concept of rationality —
different people and different systems interpret rationality in their
own way. This implies that rationality is relative and what seems ratio-
nal to one person can be completely irrational to another one. This
makes the functional definition of knowledge essentially dependent
on the observer and instead of unification, it generates a multiplicity
of concepts of knowledge.
Herbert Simon (1916–2001) suggested that the development of
information technology changes the meaning of the term “to know.”
While the traditional meaning is to have knowledge in one’s memory, now
it is understood as having knowledge of where to find the necessary
knowledge (Simon, 1971).
All approaches to knowledge discussed above give some gen-
eral ideas about knowledge, but these definitions, or rather
descriptions, are not sufficiently constructive even to unmis-
takably distinguish knowledge from knowledge representation and
from information. The following example demonstrates differences
between knowledge and knowledge representation. Some event may
be described in several articles written in different languages, for
example, in English, Spanish, and Chinese, but by the same author.
These articles convey the same semantic information and contain
the same knowledge about the event. However, representation of this
knowledge is different. As in the case with information, there is also
a distinction between knowledge representation and knowledge car-
rier. For instance, an individual can be a carrier of knowledge but
representation of this knowledge is only in his brain.
That is why here we do not strive to obtain an encompassing
precise definition of knowledge. There are many books and papers,
in which this goal is pursued. As an example, we can take such an
interesting and well-written book as (Pollock and Cruz, 1999).
In contrast to this, we follow the steps of scientists, who
build models of the studied phenomena instead of explaining what
these phenomena are in layman’s terms. Thus, our goal is to build
efficient and flexible models of knowledge on different levels of its
existence.
To achieve this goal, we implement the pragmatic approach to
knowledge, which is adopted in artificial intelligence (AI) where there
is often no “attempt to define knowledge in the philosophical or even
the popular view” (Fayyad et al., 1996). That is, we do not try to give
a complete characterization of knowledge precisely discerning it from
other epistemological structures. Our goal is to describe knowledge
structure, acquisition, behavior, relations, and utilization.
Many researchers assume that in contrast to data and informa-
tion, knowledge exists only in some knowledge system, such as an
individual, society, or a knowledge base. Consequently, we apply
the observer-oriented approach to knowledge. Namely, we do not
try to exactly define knowledge in general or to describe it in an
absolute manner. In contrast to this, we presume that an observer
(user or knower) characterizes and utilizes some epistemic struc-
tures as knowledge. That is, dealing with epistemic structures, an
observer makes use of an assembly K of knowledge properties as a
knowledge criterion labeling epistemic structures that satisfy crite-
ria from the assembly K by the name knowledge (Chisholm, 1989;
Pollock and Cruz, 1999). Criteria (properties) from the assembly
K are called existential characteristics of knowledge because to find
whether knowledge with respect to K exists, we need to find an
object that satisfies these criteria. For instance, Fayyad et al. (1996)
write “we can consider a pattern to be knowledge if it exceeds some
interestingness threshold . . .”
However, in (Burgin, 1989a, 2010), it is assumed that these crite-
ria are subjective and can be different for different individuals and
different societies. For instance, what was treated as knowledge three
millennia ago, e.g., that the Sun rotates around the Earth, now is
often considered as being a misconception.
In a similar way, Fayyad et al. (1996) write “knowledge . . . is
purely user oriented and domain specific and is determined by what-
ever functions and thresholds the user chooses.”
Let us look at some examples of knowledge criteria (existential
characteristics of knowledge).
It is possible to consider the following assembly K of knowledge
properties as an objective knowledge criterion:
1. To be a belief.
2. To be true.
3. To be justified.
This criterion corresponds to the time-worn and widely criti-
cized philosophical definition of knowledge as a “justified true belief”
(Russell, 1912; 1948; Gettier, 1966; Goldman, 1967; Lewis, 1996).
As Thorkelson shows, this definition suffers from three major prob-
lems centered especially around the term “belief” (Thorkelson, 2008).
First, this definition reduces knowledge to propositional knowledge,
“knowing-that.” As a result, other types of knowledge such as oper-
ational knowledge in the form of practical “know-how” (knowl-
edge embodied in actions, behaviors, and procedures), as well as
representation knowledge in the form of affective states (knowledge
embodied in emotion and sentiment) and phenomenological acquain-
tance (conferred, for instance, by sensory experience or artistic rep-
resentation) are excluded. Second, insofar as “belief” is considered
as a mental state of the individual, the definition directs towards
an egocentric rather than socio-centric theory of knowledge (Silver-
stein, 2004). Third, since a belief is an isolated, singular entity, it
is possible to think of knowledge as an unordered aggregate of iso-
lated epistemic items, e.g., propositions, instead of as a coordinated,
though not necessarily total epistemic system.
Scheffler (1965) suggests the following criteria of descriptive/
declarative knowledge:
1. To be a belief.
2. To be supported by adequate evidence.
To efficiently treat descriptive/declarative knowledge, it is also
possible to use the following assembly K of knowledge properties as
a subjective knowledge criterion:
1. To be a belief.
2. To be believed (assumed) as being true.
3. To be believed (assumed) as being justified.
Here is one more similar assembly K of knowledge properties,
which can be used as a knowledge criterion:
1. To be a belief.
2. To give correct information about the knowledge object (domain).
3. To give exact information about the knowledge object (domain).
However, knowledge criteria do not need to include the condition
of being a belief. For instance, operational knowledge, as a rule, has
the form of instructions and not of beliefs. Consequently, it is possible
to use other knowledge criteria such as “to be useful,” “to be correct”
or “to be constructive.”
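The observer-oriented use of knowledge criteria can be sketched in a few lines of Python; the particular predicates, dictionary keys, and the sample epistemic item below are illustrative placeholders for whatever assembly K an observer actually adopts.

    # An assembly K of knowledge properties represented as predicates; an observer
    # labels an epistemic structure as knowledge if it satisfies every criterion in K.

    def is_knowledge(item, assembly):
        return all(criterion(item) for criterion in assembly)

    # A subjective criterion assembly: to be a belief, believed true, believed justified.
    K = [
        lambda item: item.get("is_belief", False),
        lambda item: item.get("believed_true", False),
        lambda item: item.get("believed_justified", False),
    ]

    item = {"content": "Water boils at 100 degrees Celsius at sea level",
            "is_belief": True, "believed_true": True, "believed_justified": True}
    print(is_knowledge(item, K))   # True for this observer's assembly K

Another observer, for example one dealing with operational knowledge, would simply supply a different assembly, such as a single criterion of usefulness.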
The observer-oriented approach makes it possible to solve some
paradoxes related to knowledge. For instance, let us imagine that
two millennia ago somebody was asked what the Sun was doing.
Historical sources allow us to presume that he or she would
say:

The Sun is giving light and is rotating around the Earth. (2.1)

Contemporary knowledge supports the first statement but rejects
the second one because, according to present-day celestial mechan-
ics, the Earth rotates around the Sun. This means that what people
considered as knowledge two millennia ago is now treated as a mis-
conception. This example and many others lead people to the skep-
tical view of knowledge, which claims that it is impossible to have
knowledge or to correctly discern knowledge from other epistemic
structures.
However, the observer-oriented approach to knowledge gives a
different answer to this riddle. It says that the statement (2.1) was
temporal subjective knowledge about the Sun.
In our approach, we distinguish three basic types of observers:
internal observers and two kinds of external observers — real external
observers and abstract external observers.
An external observer can be either a real system (real external
observer), such as a scientist, or an abstract system (abstract external
observer), such as a scientific theory.
An internal observer is a knowing system (knower), i.e., the sys-
tem such as a reader of a book or a computer that stores knowledge.
Usually, an observer is treated as a physical system that in the
same way, i.e., physically, interacts with the observed object. Very
often, an observer is interpreted as a human being. However, here we
use a broader perspective, allowing an abstract system also to be an
observer because in our case the observed object is knowledge, i.e., it
is an abstract system itself. In addition, it is necessary to understand
that interaction between abstract systems involves representations of
these systems and a performing system, which performs the interaction
and is usually physical.
An interesting approach to knowledge posits it as a process of
knowing. For instance, Polanyi (1974) regards knowledge as both
static “knowledge structure” or “knowledge item” and dynamic
“knowing”.
When people discuss something, they usually either suppose that
what they discuss exists or try to show that it does not exist. It means
that existence is treated as a dualistic possibility — either something
exists or it does not exist. However, this is very inexact and in many
cases even wrong because there are different kinds, types, and modes
of existence (Burgin, 2012). For instance, does a fictional hero of some
novel exist? Many think that he does not exist but a more correct
answer is that this fictional hero exists as a mental entity but does
not exist as a material object.
This kind of existence was considered by Alexius Meinong (1853–
1920) in his analysis of language. He assumed that language, when
properly understood, is a guide to ontology and this ontology permits
two kinds of existence: genuine existence of physical objects and gen-
eralized existence of objects in imagination, e.g., fictional characters,
square circles, and golden mountains.
To have a complete picture of reality, we come to the conclusion
that forms of existence are determined by the world stratification and
structuration (Burgin, 2012). Taking the structuration determined
by the Existential Triad of the world (Burgin, 1997; 2010), which
stems from the long-standing tradition in philosophy and is presented
in Figure 2.1, we come to the three existential forms — material/
physical existence, mental existence, and structural existence.
In this stratification, the Physical (material) World is interpreted
as the physical reality studied by natural sciences (cf., for exam-
ple, (Born, 1953)), the Mental World encompasses different levels of
mentality, and the World of Structures consists of various forms and
types of structures.

Figure 2.1. The existential triad of the world (the World of Structures, the Physical World, and the Mental World).


Usually people comprehend the Mental World as individual men-
tality. Science has extended this picture by exploring the Mental World on
three levels, all of which are included in the Mental World of the
Existential Triad:

• The first level treats mentality of separate individuals and is the
subject of psychological studies. As in the case of physical reality,
now psychology knows a lot about mentality/psyche of people. It is
necessary also to remark that in the same way as physics does not
study the physical reality as a whole but explores different parts,
levels and aspects of it, psychology also separates and investigates
different parts and aspects of the individual mental reality, such as
intelligence, emotions, conscience, or unconscious. However, there
are components of individual mentality that still lie beyond the
studies of contemporary psychology.
• The second level deals with group mentality of various groups
of people and is the subject of social psychology, which bridges
sociology and conventional psychology. In particular, this level
includes group conscience, which incorporates collective memory
(Durkheim, 1984), collective intelligence (Brown and Lauder, 2000;
Nguen, 2008a) and is projected on the collective unconscious in the
sense of Jung (see (Jung, 1969)) by the process of internalization.
• The third level encompasses mental issues of society as a whole.
Social mentality includes social memory, social intelligence, and
social conscience. Social psychology also studies some features of
this level.

However, these three levels do not exhaust the whole Mental
World. In fact, the Mental World from the Existential Triad com-
prises levels of mentality higher than the third, although they are
not yet studied by science (Burgin, 1997; 2010). It is possible to
relate higher levels of the Mental World to the spiritual mystical
worlds described in many religious esoteric teachings.
Some thinkers, following Descartes, consider the individual mental
world as independent of the physical world. Others assume that indi-
vidual mentality is completely generated by physical systems of the
organism, such as the nervous system and brain as its part. However,
in any case, the mental world is different from the physical world and
constitutes an important part of our reality.
Psychological experiments and theoretical considerations show
that the Mental World is stratified into three spheres: cognitive or
intellectual sphere, affective or emotional sphere and effective or reg-
ulative sphere. This stratification is based on the extended theory
of triune brain and the concept of the triadic mental information
(Burgin, 2010).
The Mental World has elements and components, which are sim-
ilar to elements and components of the Physical World. In a natural
way, the Mental World has its mental space, mental objects (struc-
tures), and mental representations (Burgin, 1998a).
It is also necessary to explain that the World of Structures directly
corresponds to Plato’s World of Ideas/Forms because ideas or forms
might be associated with structures. Indeed, on the level of ideas,
it is possible to link ideas or forms to structures in the same way
as atoms of modern physics may be related to atoms of Democritus
and Leucippus. Only recently, modern science came to a new under-
standing of Plato ideas, representing the global world structure as the
Existential Triad of the world, in which the World of Structures is
much more comprehensible, exact, and explored in comparison with
the World of Ideas/Forms. When Plato and other adherents of the
World of Ideas/Forms were asked what idea or form was, they did
not have a satisfactory answer. In contrast to this, many researchers
have been analyzing and developing the concept of a structure (Ore,
1935; 1936; Bourbaki, 1948; 1957; 1960; Bucur and Deleanu, 1968;
Corry, 1996; Burgin, 1997; 2010; 2011; 2012; Landry, 1999; 2006). It
is possible to find the most thorough analysis and the most advanced
concept of a structure in (Burgin, 2012). As a result, in contrast to
Plato, mathematics and science have been able to elaborate a suffi-
ciently exact definition of a structure and to prove existence of the
world of structures, demonstrating by means of observations and
experiments, that this world constitutes the structural level of the
world as a whole. Informally, a structure is a collection of symbolic
(abstract) objects and relations between these objects. Each system,
phenomenon or process in nature, technology or society has some
structure. These structures exist like material things, such as tables,
chairs, or buildings do, and form the structural level of the world.
When it is necessary to learn or to create a system or to start a pro-
cess, it is done, as a rule, by means of knowledge of the corresponding
structure. Structures mould things in their being and comprehension.
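A small Python sketch can make this informal definition tangible; the family example and the field names below are illustrative assumptions.

    # A structure as a collection of abstract objects together with relations on them.
    from dataclasses import dataclass, field

    @dataclass
    class Structure:
        objects: set = field(default_factory=set)
        relations: dict = field(default_factory=dict)   # relation name -> set of tuples

    family = Structure(
        objects={"Anna", "Boris", "Clara"},
        relations={"parent_of": {("Anna", "Clara"), ("Boris", "Clara")}},
    )

    # The same pattern of objects and relations could be embodied in many material
    # carriers; the structure itself is captured independently of any of them.
    print(("Anna", "Clara") in family.relations["parent_of"])   # True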
If we say that structures exist only embodied in material things,
then we have to admit that material things exist only in a structured
form, i.e., matter (physical entities) cannot exist independently of
structures. For instance, atomic structure influences how the atoms
are bonded together, which in turn helps one to categorize materi-
als as metals, ceramics, and polymers and permits us to draw some
general conclusions concerning the mechanical properties and phys-
ical behavior of these three classes of materials. Even chaos has its
structure and not a unique one.
The three worlds from the Existential Triad are not separate
realities: they interact and intersect. Individual mentality is based
on the brain, which is a material thing, while in the opinion of
many physicists mentality influences the physical world (see, for example,
(Herbert, 1987)). At the same time, our knowledge of the physical
world essentially depends on interaction between mental and mate-
rial worlds (see, for example, (von Bayer, 2004)).
Moreover, our mentality influences the physical world and can
change it. We can see how ideas change our planet, create many
new things and destroy existing ones. Even physicists, who research
the very foundation of the physical world, developed the so-called
observer-created reality interpretation of quantum phenomena.
A prominent physicist, Wheeler, suggests that in such a way it is
possible to change even the past. He stresses (Wheeler et al., 1983)
that elementary phenomena are unreal until observed.
In addition, there is a projection of the Mental World into the
Physical World in the form of creations of human mentality (cre-
ativity), such as books, movies, magazines, newspapers, cars, planes,
computers, and computer networks. This projection determines the
Extended Mental World, which consists of the Mental World and its
projection. The Extended Mental World correlates with the World 3
from the General Popper Triad of the world.
Structural and material worlds are even more intertwined. Actu-
ally, no material thing exists without a structure. Even chaos has
its chaotic structure. Structures make things what they are. For
instance, it is possible to make a table from different materials: wood,
plastics, iron, aluminum, etc. What all these tables have in com-
mon is not their material; it is specific peculiarities of their struc-
ture. In a similar way, according to Poincaré (1908), space is in
reality amorphous, and it is only the things in space that give it
a structure (form). As some physicists argue, physics studies not
physical systems as they are but structures of these systems, or
physical structures. In some sciences, such as chemistry, and areas
of practical activity, such as engineering, structures play the lead-
ing role. For instance, the spatial structure of atoms, chemical ele-
ments, and molecules determines many properties of these chemical
systems.
Contemporary physics treats the physical world as a net of inter-
acting components (systems), where there is no physical meaning to
the state of an isolated object. A physical system (or, more precisely,
its contingent state) is represented by the net of relations with the
surrounding objects it retains. As a result, the physical structure of
the world is identified with such a global net of system relationships.
As North (2009) writes, physics is supposed to be telling us
about the nature of the world, while physical theories are formulated
in a mathematical language, using mathematical structures, which
implies that mathematics is somehow telling us about the physical
make-up of the world.
Plato postulated independent existence of his world of ideas, and
it was demonstrated that it is possible to consider structures as sci-
entific counterparts of Plato ideas (Burgin, 2011). Thus, it is natural
to ask the question whether structures exist without matter. Here we
are not going into detailed consideration of this fundamental prob-
lem. It is important that as a coin has two sides, material things
always have two aspects — substance and structure.
Just as the atoms studied by contemporary physics were prefigured by
ancient thinkers, such as Democritus from Abdera (460–370 B.C.E.)
and Leucippus of Miletus (ca. 480 – ca. 420 B.C.E.), the Existential
Triad also has several precursors suggested by various thinkers such
as Plato, Aristotle, Popper, or Gödel. John Locke also suggested a
similar triadic stratification of the world and knowledge about the
world. He wrote:
“All that can fall within the compass of human understanding, being
either, first, the nature of things, as they are in themselves, their rela-
tions, and their manner of operation: or, secondly, that which man him-
self ought to do, as a rational and voluntary agent, for the attainment of
any end, especially happiness: or, thirdly, the ways and means whereby
the knowledge of both the one and the other of these is attained and
communicated; I think science may be divided properly into these three
sorts.” (Locke, 1690)

This gives us the structures presented in Figures 2.2 and 2.3.
Naturally, the structure of science, according to Locke, is struc-
turally isomorphic to his structure of the World.
Note that the Locke triads are similar (but not identical) to the Exis-
tential Triad. The difference is that: (1) nature is only a part of the
physical world, e.g., people and machines are elements of the physi-
cal world but do not belong to nature; (2) signs are only one kind of
structures; and (3) although mentality is a pivotal characteristic of
human beings, not only human beings have mentality, while a human
being cannot be reduced to her/his mentality.

Figure 2.2. The Locke triad of the world (the World of Signs, together with Nature and Human beings).

Figure 2.3. The Locke triad of science (the Doctrine of Signs, together with the Natural sciences and the Social sciences and Humanities).


According to the existential stratification of reality, knowledge
exists as structure in the world of structures but has many repre-
sentations in two other worlds. Namely, there are different physical
representations of knowledge, e.g., printed texts in books and jour-
nals, written algorithms, software of computers and networks, written
manuscripts or states of the computer memory in knowledge bases.
There are also mental representations of knowledge in the mentality
(mind) of people and in the mentality of society. However, even hav-
ing correct knowledge representation, the knowing system (knower)
does not always possess knowledge. Indeed, imagine a situation when
you have a book written in a language you do not know, e.g., in
Hindi or Japanese. This book can contain a lot of knowledge but you
do not possess this knowledge because you cannot read the book.
Thus, knowledge exists but is not accessible to you.
This and other examples show that there are many modes, modal-
ities, kinds, types, gradations, and dimensions of existence (Burgin,
2012). However, when existence is treated as a property, it usually
means that some object exists, at least, as a mental entity, i.e., it has
a name (sometimes several names) and different ascribed properties.
Still, we can ask whether there is a material object
that has these properties. For instance, after Dirac in the 1930s sug-
gested a new particle — positron, the majority of physicists believed
that such a particle did not exist. However, several years later the exis-
tence of the positron was proved by experiments.
Knowing is frequently related to existence. Very often people
assume that what they do not know does not exist. This exhibits the
subjective form of existence, that is, existence in the mentality of people.
For instance, for a long time, people did not know that the Solar sys-
tem has such a planet as Neptune. Consequently, this planet did not
exist for them although it existed as a celestial body, i.e., it had phys-
ical existence. Moreover, existence is a property not only of material
things but also of theoretical constructs. For a long time, mathemati-
cians did not know about negative numbers. Consequently, negative
numbers did not exist for them. Even after negative numbers had
been discovered in the East, namely, in China and India, and then
brought to Europe, many European mathematicians tried to argue
that negative numbers did not exist (Martinez, 2006). In a similar
way, some notable mathematicians of the 19th century insisted that
irrational numbers did not exist (Burton, 1997).
For a long time, it was assumed that knowledge is something
that exists only in the mentality of people. Some researchers believe
that this is the crucial difference between knowledge and informa-
tion, which exists in anything. However, the technological develop-
ment changed the situation. Indeed, because knowledge is vital to the
whole existence of people, various artificial tools have been invented
for knowledge acquisition, storage, transmission, and transformation.
Among other things, people invented and developed oral and written
languages, papyrus, clay tablets, paper, printing, books, and comput-
ers. This brought an understanding that knowledge also exists not
only in people’s mentality but also in various physical things, though,
in contrast to information, not in all of them. As a result, researchers started to
explore knowledge in artificial systems only after computers came
into being and the research area called AI emerged.
Later when knowledge became crucial for organizations,
researchers began studying knowledge not only on the level of individ-
ual mentality but also in the group (collective) mentality, performing
research in social epistemology and knowledge management. One of
the important results of this research was explication of distinctions
between tacit knowledge, which exists in individual mentality, and
explicit knowledge, which usually belongs both to individual mental-
ity and to collective mentality.

2.3. Descriptive properties of knowledge and corresponding typology

Beware of false knowledge; it is more dangerous than ignorance.


Bernard Shaw

Descriptive properties (characteristics) of knowledge can be for-
mally described by a function assigning values from the property
scale to knowledge items or by an abstract property (cf., Chapter 5
and (Burgin, 1985)). For instance, truthfulness is a descriptive prop-
erty with the scale {True, False}. Note that it is possible to represent
relations between knowledge items as abstract properties (Burgin,


2010).
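In programming terms, such a property is simply a function from knowledge items to a property scale; the Python sketch below uses the truthfulness scale {True, False} mentioned above together with an illustrative hard-coded fact base.

    # A descriptive property as a function assigning a value from its scale to an item.
    TRUTH_SCALE = {True, False}

    KNOWN_TRUE = {"A bear is an animal", "7 >= 3"}   # an illustrative fact base

    def truthfulness(knowledge_item):
        """Map a knowledge item to a value on the scale {True, False}."""
        return knowledge_item in KNOWN_TRUE

    value = truthfulness("A bear is an animal")
    assert value in TRUTH_SCALE
    print(value)   # True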
Relational properties (characteristics) of knowledge characterize
relations of knowledge items, e.g., represented by texts or computer
files, to physical or mental objects (systems). For instance, the deno-
tation theory of meaning is based on relations between knowledge
items and objects they denote (Russell, 1905; Ryle, 1957; Parkin-
son, 1968). In other words, relational characteristics of knowledge
are molded by relations between knowledge and systems that are
related to knowledge.
Contextual properties (characteristics) of knowledge are proper-
ties of objects related to knowledge. It is possible also to formally
describe them by a function assigning values from the property scale
to these objects or by an abstract property (cf., Chapter 5 and
(Burgin, 1985)).
Internal characteristics of knowledge are innate to knowledge as
a structural phenomenon. We start with the structure of knowledge,
which is the basic integral characteristic of knowledge. Note that we
make a distinction between a knowledge structure and the structure
of knowledge. A knowledge structure is one or several knowledge
items in their structural form, while the structure of knowledge is a
structural description of knowledge organization displaying its inter-
nal and external relations in a general form, that is, relations that
are innate to knowledge in general.
To find the structure of knowledge, we observe that the indis-
pensable trait of knowledge indicates that each element or system of
knowledge refers to some object domain because knowledge is always
knowledge about something. It means that for any knowledge system
(element) K, there is a domain D of real or abstract objects and K
describes the whole D or its part. Note that it is possible to treat
a domain as one object. In such a way, we come to the following
diagram, which is a specific case of named sets (cf., Appendix).

representation
Knowledge Item K ⇄ Object (Domain) D. (2.2)
reflection
Diagram (2.2) is also related to Diagram (2.3) because knowledge
is a cognitive representation of knowledge objects and knowledge
domains, which is intrinsically related to their symbolic representa-
tions. Besides, cognitive representation is often symbolic.

Object (Domain) D ⇄ Cognitive (symbolic) representation of D. (2.3)
For instance, a symbolic representation of an object A may be a
sentence in a natural language (English, Spanish, or French), a log-
ical formula, a mathematical expression and so on. Thus, according
to this new approach, the statement “People live on the Earth” is not
knowledge per se. It is a cognitive structure, which becomes knowl-
edge only when it is connected to the object (system) that consists
of people and the celestial body called the Earth.
However, Diagram (2.2) does not give a complete structure of
knowledge because knowledge does not exist by itself but belongs to a
knowledgeable system or a knower. This gives us Diagram (2.4).

Knower (knowledgeable system)
(knowing)    (possession)
Knowledge item K —— representation —— Domain D    (2.4)

The structures of knowledge presented by Diagrams (2.2)–(2.4)
reflect the surface level of knowledge organization. An exposition of
further details about knowledge organization and structure is given
in Chapters 4–6.
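The surface-level structure expressed by Diagrams (2.2)–(2.4) can also be written down as a small data type; the Python sketch below uses illustrative field names rather than the formal named-set notation of the Appendix.

    # A knowledge item that represents a domain and belongs to a knower.
    from dataclasses import dataclass

    @dataclass
    class KnowledgeItem:
        content: str    # e.g., a statement or another epistemic structure
        domain: str     # the object (domain) that the item represents
        knower: str     # the knowledgeable system that possesses the item

    k = KnowledgeItem(
        content="People live on the Earth",
        domain="people and the celestial body called the Earth",
        knower="a reader of this book",
    )
    print(k.content, "| about:", k.domain, "| possessed by:", k.knower)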
We discern the following kinds of knowledge systems:

— Knowledge item is a knowledge system that is contemplated
separately from other knowledge systems.
— Knowledge unit is a knowledge item that is used for constructing
other knowledge systems and treated as a unified entity.
— Knowledge quantum is a minimal, in some sense, knowledge unit.
— Knowledge element is an element of a knowledge system
(structure).
By definition, any knowledge unit is also a knowledge item but the
converse is not always true. To understand the difference between a
knowledge unit and a knowledge item, let us look at some examples.
Knowledge in one story (novel) is a knowledge unit.
Knowledge in several unrelated stories (novels) is a knowledge item.
Knowledge about one person is a knowledge unit.
Knowledge about several unrelated persons is a knowledge item.
However, knowledge about members of one family is a knowledge
unit.
All of the above are knowledge structures or knowledge systems.
Here we utilize two meanings of the term knowledge system:
1. Knowledge system as a concise representation of knowledge.
2. Knowledge system as a structure consisting of knowledge elements
and relations between them.
An example of a knowledge item:
“This book is about knowledge. Its title is “Theory of Knowledge:
Structures and Processes.” It has many pages. It has nine chapters.”

An example of a knowledge unit:
“This book is about knowledge.”
This statement is also a knowledge quantum.
An example of a knowledge element:
“a book”
At the same time, there are also composite knowledge elements,
e.g., “an interesting book.”

2.3.1. Dimensions and other characteristics of knowledge
Dimensions are basic descriptive characteristics of epistemic struc-
tures in general and knowledge in particular. Each dimension has
different gradations or/and modalities. Some of these gradations are
discrete, while others can be considered as continuous.
It is possible to discern the following dimensions of epistemic
structures (of knowledge):
1. The correctness dimension reflects the adequacy of the representation
of its object or domain by the knowledge item.
2. The confidence dimension reflects confidence of the knower
(knowledge user) in the knowledge item property estimation,
including certainty of the knower (knowledge user) in knowledge
item correctness as its component.
3. The validation dimension reflects confirmation of confidence of
the knower (knowledge user) in the knowledge item property
estimation, including justification of the correctness estimation
of a knowledge item.
4. The complexity dimension reflects complexity of a knowledge
item including several components such as clarity.
5. The significance dimension reflects value and significance of a
knowledge item.
6. The efficiency dimension reflects the role of a knowledge item in
achieving some goals.
7. The reliability dimension reflects reliability of a knowledge item.
8. The abstractness/generality dimension reflects the level of
abstraction of a knowledge item, as well as the degree of gen-
eralization achieved by a knowledge item.
9. The completeness/exactness dimension reflects completeness of
a knowledge item with respect to its object (domain), as well
as exactness with which a knowledge item reflects/represents its
object (domain).
10. The meaning dimension reflects meaning of a knowledge item.
The first three dimensions are the separation dimensions as these
traits have been traditionally used to separate knowledge from other
epistemic structures, e.g., from beliefs (cf., Section 2.2).
The next six dimensions are the feature dimensions.
The 10th dimension is the integration dimension as it can include
any other dimension and is the primary feature for knowledge
utilization.
Each dimension integrates specific knowledge properties usually
containing these properties as its components.
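As a rough illustration, the ten dimensions can be gathered into a single record per knowledge item; the 0–1 grades and the sample values in the Python sketch below are illustrative assumptions, since no particular scale is fixed here.

    # Graded values along the dimensions of one knowledge item (illustrative scales).
    from dataclasses import dataclass

    @dataclass
    class KnowledgeDimensions:
        correctness: float
        confidence: float
        validation: float
        complexity: float
        significance: float
        efficiency: float
        reliability: float
        abstractness_generality: float
        completeness_exactness: float
        meaning: str    # the integration dimension, given here as a description

    weather_item = KnowledgeDimensions(
        correctness=0.8, confidence=0.7, validation=0.6, complexity=0.2,
        significance=0.5, efficiency=0.5, reliability=0.7,
        abstractness_generality=0.1, completeness_exactness=0.4,
        meaning="yesterday's temperature in one city")
    print(weather_item.correctness)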
There are also other characteristics of knowledge.
Novelty shows the extent to which the content is original, or its
relation to other works, e.g., whether it repeats, duplicates, adds to,
or contradicts the previous work.
Knowledge domain is what this knowledge is about. It is expressed
as the subject matter as well as named or implied persons, places,
institutions, devices, etc.
Specificity or depth refers to depth of coverage or degree of detail
of the knowledge in a message.
Amount of knowledge has many different meanings and mea-
sures. One such measure is the number of characters, pages, or
other physical characteristics of a text, which is a carrier of knowl-
edge. Another measure of knowledge amount is the Hartley–Shannon
entropy (Burgin, 2010). One more estimate for knowledge amount is
the recipient’s sense of number of known facts or ideas although
there is not yet a formal measure for this. Algorithmic complexity
makes it possible to measure the amount of knowledge about construc-
tive objects (Burgin, 2010).

2.3.2. Correctness, relevance, and consistency of knowledge
Here we give the most general definition of correctness treating it
as a relational property. In essence, correctness reflects relation of
knowledge to some system C of correctness conditions.

Definition 2.3.1. A knowledge system (unit) K is correct with
respect to a system C of conditions or simply C-correct if it sat-
isfies all conditions from C .

Definition 2.3.2. Conditions from the system C are called compo-
nents of C-correctness.

Let us consider some examples.

Example 2.3.1. Let us look at such procedural/operational knowl-
edge as programs (BK). This knowledge informs a computer what to
do or how to perform computations. Correctness becomes a critical
issue in software and hardware production and utilization as a result
of the increased social and individual reliance upon computers and
computer networks. A study by the National Institute of Standards
found that software defects alone resulted in $59.5 billion in annual losses
(Tassey, 2002; Thibodeau, 2002). It is possible to find a detailed anal-
ysis of the concept software correctness and the most comprehensive
development of this concept in (Burgin and Debnath, 2006; 2007).
There are different forms of software correctness, such as functional,
descriptive, procedural, temporal, and resource correctness.

Example 2.3.2. Let us contemplate propositional knowledge, i.e.,
knowledge expressed by propositions in some logical language. This
may be the most popular form of knowledge representation (cf.,
(Bar-Hillel and Carnap, 1952; 1958; Halpern and Moses, 1985)). Then
a knowledge system is usually represented by a propositional calcu-
lus. Traditionally, it is assumed that such knowledge is correct if this
calculus is consistent.

Software correctness introduced in (Burgin and Debnath, 2006)
is one more example of knowledge correctness as a software system
is a representation of operational knowledge.
Consistency of descriptive knowledge in the sense of (Nuseibeh
et al., 2001) is also an example of knowledge correctness.
Existence of various correctness conditions results in a variety of
correctness types.
Types of correctness:
— Truth
— Correlation
— Consistency
In addition, correctness of knowledge may be higher or lower. For
instance, the system K can satisfy only a part of the conditions from C or some
conditions can be satisfied only partially. As a result, correctness is, in
essence, a gradual, often fuzzy, property because in many cases, con-
ditions from the set C can be satisfied only partially. This shows that
it is possible to introduce different measures of correctness, which are
functions on knowledge items that can take numerical values, vector
values, or even values in a partially ordered set. This brings us to
degrees of correctness, which turn correctness into a fuzzy property
or more generally, into an abstract property (cf., (Burgin, 1985) and
Chapter 6).
Let us take a system of conditions C and consider a measure m
of condition satisfaction by knowledge items.

Definition 2.3.3. A knowledge system (unit) K is correct to the
degree n with respect to a system C of conditions if the measure m
of satisfaction of conditions from C is equal to n.

Note that n may be not only a number but also a vector (when
we separately consider correctness components) or an element of a
partially ordered set. Let us consider some examples.

Example 2.3.3. Let us consider propositional knowledge, i.e.,
knowledge represented by propositions. Defining correctness of
propositions, it is possible to take into consideration three aspects
of propositions — syntactic, semantic, and pragmatic aspects. For
each of these aspects, we take one criterion of knowledge correctness.

The syntactic criterion of correctness c1:
The proposition has the form of a syntactically correct English sentence.

The semantic criterion of correctness c2:
The proposition is true, e.g., true in the sense of classical logic.

The pragmatic criterion of correctness c3:
The proposition has a model in the real world.

This allows us to formally define knowledge correctness taking
C = {c1, c2, c3} as the system of correctness conditions and evaluate
some propositions. We denote the weight of a proposition p relative
to the correctness criterion ci by wi(p), i = 1, 2, 3.
Proposition p1:
7 ≥ 3.
Correctness weights:
w1(p1) = 0 because it is not a syntactically correct English sentence.
w2(p1) = 1 because it is true that 7 ≥ 3.
w3(p1) = 1 because the conventional (Diophantine) arithmetic is a model for p1.

Proposition p2:
A bear is an animal.
Correctness weights:
w1(p2) = 1 because it is a syntactically correct English sentence.
w2(p2) = 1 because it is true that a bear is an animal.
w3(p2) = 1 because the set of all animals is a model for p2.

Proposition p3:
In 2000, the King of France was blue.
w1(p3) = 1 because it is a syntactically correct English sentence.
w2(p3) = 0 because it is not true that in 2000, the King of France was blue.
w3(p3) = 0 because there is no model for p3 in the real world.

Thus, we have three elements of a weighted knowledge space (cf., Section 3.1):
(p1; 0, 1, 1)
(p2; 1, 1, 1)
(p3; 1, 0, 0)

Note that Example 2.3.3 shows that the most popular attribute
of knowledge — truth — is only one kind of knowledge correctness.
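Example 2.3.3 can be reproduced mechanically in Python; in the sketch below the three criteria are encoded as placeholder checks that merely restate the example's verdicts, so the point is the construction of the weighted triples, not a general test of English syntax or truth.

    # Elements (p; w1, w2, w3) of a weighted knowledge space for C = {c1, c2, c3}.
    P1, P2, P3 = "7 >= 3", "A bear is an animal.", "In 2000, the King of France was blue."

    def c1_syntactic(p):   # is p a syntactically correct English sentence?
        return p in {P2, P3}

    def c2_semantic(p):    # is p true in the sense of classical logic?
        return p in {P1, P2}

    def c3_pragmatic(p):   # does p have a model in the real world?
        return p in {P1, P2}

    def weighted_element(p):
        return (p, int(c1_syntactic(p)), int(c2_semantic(p)), int(c3_pragmatic(p)))

    for p in (P1, P2, P3):
        print(weighted_element(p))
    # yields the triples (p1; 0, 1, 1), (p2; 1, 1, 1), (p3; 1, 0, 0) as in the text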

Example 2.3.4. Let us consider operational knowledge, i.e., knowl-
edge represented by automata or algorithms.
The model criterion of correctness c1:
The automaton A is a Turing machine (Burgin, 2005; or Section 2.3).
The termination criterion of correctness c2:
The automaton A is defined for all words in its alphabet.
This allows us to formally define knowledge correctness taking
C = {c1, c2} as the system of correctness conditions and evaluate some
automata. We denote the weight of an automaton A as an oper-
ational knowledge relative to the correctness criterion ci by wi(A),
i = 1, 2.
Automaton A1 is a deterministic finite automaton (Burgin, 2005).
Correctness weights:
w1(A1) = 0 because it is not a Turing machine.
w2(A1) = 1 because any deterministic finite automaton is defined
for all words in its alphabet (Burgin, 2005).
Automaton A2 is a universal Turing machine (Burgin, 2005).
Correctness weights:
w1(A2) = 1 because it is a Turing machine.
w2(A2) = 0 because a universal Turing machine is not defined for all
words in its alphabet.
Thus, we have two elements of a weighted knowledge space (cf.,
Section 3.1):
(A1; 0, 1)
(A2; 1, 0)
Definition 2.3.1 allows us to introduce three modalities of knowl-
edge correctness:
— Description modality of knowledge correctness reflects how well
this knowledge represents or describes its domain.
— Attribution modality of knowledge correctness reflects how well
this knowledge is connected to the domain that this knowledge is
attributed to.
— Logical modality of knowledge correctness reflects how well this
knowledge satisfies some logical rules, such as, for example,
absence of contradictions.

Description and attribution modalities are definitely relational
properties of knowledge because they depend on relations between
knowledge and some domains. Logical modality can be a relational
property of knowledge in some cases and an internal property
of knowledge in others. For instance, such a component of logical modality
as (inner) consistency is an internal property of knowledge sys-
tems, while consistency of one knowledge system with respect to
another knowledge system is a relational property of knowledge
systems.
Each of knowledge correctness modalities has its components.
Accuracy, exactness and precision are components of the descrip-
tion modality of knowledge correctness.

Definition 2.3.4. Accuracy of descriptive knowledge reflects to what
extent the given description is sufficient and does not include unnec-
essary issues.
For instance, if a man is tall and thin, then the proposition “He
is a big man” is less accurate than the proposition “He is a tall
man.” Usually informal descriptions are less accurate than formal
descriptions. For instance, the proposition “His height is seven feet”
is more accurate than the proposition “He is a tall man.”

Definition 2.3.5. Accuracy of representational knowledge shows to
what extent the given representation is sufficient and does not include
unnecessary features.
For instance, the Copernican model of the Solar system is more
accurate than the Ptolemaic model of the Solar system.

Definition 2.3.6. Accuracy of operational knowledge shows to what extent the given procedures, tools, goals, norms are sufficient and necessary for reaching the desired goal.
For instance, a car mechanic has more accurate operational knowl-
edge about cars than an average person.
Note that knowledge represented by a statement can be clear but not accurate, as in the case of the statement “The weight of a dog is between one and one thousand pounds.”

Definition 2.3.7. Precision of descriptive and representational knowledge reflects whether it is possible to give more details or if a knowledge item could be more specific.
For instance, representation of real numbers has different levels
of exactness or precision. For floating point numbers, there are three
commonly used levels of precision: single precision, double precision
and long double, or extended, precision (Sauer, 2006). In the single
precision representation, the exponent of a number has 8 bits and
the mantissa of a number has 23 bits. In the double precision rep-
resentation, the exponent of a number has 11 bits and the mantissa
of a number has 52 bits. In the long double precision representation,
the exponent of a number has 15 bits and the mantissa of a number
has 64 bits.
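As an illustration of these precision levels, the following sketch (using only the Python standard library) stores pi in single and double precision and compares the results; the code is only a demonstration of the formats described above.

```python
import math
import struct

# Round-trip math.pi through an IEEE 754 single-precision (32-bit) value.
pi_single = struct.unpack("f", struct.pack("f", math.pi))[0]
pi_double = math.pi  # Python floats are IEEE 754 double precision (64-bit)

print(f"single precision: {pi_single:.20f}")  # roughly 7 correct decimal digits
print(f"double precision: {pi_double:.20f}")  # roughly 15-16 correct decimal digits
print("difference:", abs(pi_single - pi_double))
```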

Definition 2.3.8. Precision of operational knowledge shows how closely it allows one to approach the desired goal.
For instance, two ways of number truncation are usually used —
chopping and rounding (Sauer, 2006). By construction, rounding
gives more precise results than chopping.
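A small sketch of the two truncation methods follows; the helper names are ours, and the error comparison merely illustrates why rounding is, by construction, at least as precise as chopping.

```python
import math

def chop(x, digits):
    """Chopping: discard everything after the given number of decimal digits."""
    factor = 10 ** digits
    return math.trunc(x * factor) / factor

def round_to(x, digits):
    """Rounding: keep the closest value with the given number of decimal digits."""
    return round(x, digits)

x = math.pi
for digits in (2, 4, 6):
    chopped, rounded = chop(x, digits), round_to(x, digits)
    # The rounding error is never larger than the chopping error here.
    print(digits, abs(x - chopped), abs(x - rounded))
```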
Note that knowledge represented by a statement can be both clear
and accurate, but not precise, as in the case “Bob is overweight”
because we do not know how overweight Bob is — one pound or 500
pounds.

Definition 2.3.9. Exactness of descriptive and representational knowledge shows how a knowledge item matches the knowledge domain, i.e., whether a descriptive knowledge item describes, and a representational knowledge item represents, a larger or a smaller domain in comparison with its assigned domain, and the extent of the existing difference.

For a long time, philosophers assumed that knowledge is always absolutely true and completely exact. However, little by little, new generations of researchers have begun to understand that knowledge can be only partially true and moderately exact. The first ideas about a mathematical treatment of partially true and moderately exact knowledge date back at least to the middle of the 19th century, when George Boole aimed to reconcile classical logic, which tends to express complete knowledge or complete ignorance, with probability theory, treated as an extension of classical logic in that it tends to express partial and/or imprecise knowledge or ignorance (Boole, 1854). His approach represented a subjective interpretation of probabilities because people often do not have enough information to assign
definite numbers to probabilities of given events. Keynes formulated
and applied an explicit interval estimate approach to probability, fur-
ther developing the theory of imprecise probability and describing its
applications (Keynes, 1921).
However, probability represents only one aspect of knowledge vagueness and inexactness. To reflect these properties of knowledge in a better way, researchers developed fuzzy set theory, which has become one of the most popular mathematical approaches to problems of uncertainty and imprecision. Fuzzy sets were introduced by Lotfi Asker Zadeh in 1965 and approximately at the same
time, Salii (1965) defined a more general kind of structures called
L-relations, which were studied by him in an abstract algebraic con-
text. Fuzzy relations, which are used now in different areas, such as
decision-making (Kuzmin, 1982) and clustering (Bezdek, 1978), are
special cases of L-relations when L is the unit interval [0, 1].
The aim of Zadeh was to get better mathematical models for real-
life systems and processes, as well as better techniques for human rea-
soning and decision-making, than the conventional set theory allowed
by constructing a more realistic set theory. To achieve this goal,
Zadeh considered generalizations of sets that allow graded member-
ship of their elements. Thus, he assumed that elements can have
different grades of membership in a set. His main argument was that
“classes of objects encountered in real physical world do not have pre-
cisely defined criteria of membership” (Zadeh, 1965). This approach
also reflects situations in which our knowledge about membership
is incomplete. Fuzzy set theory replaces the two-valued membership
function used for sets with a real-valued membership function. As a result, membership may be treated as a probability, or as a degree
of truthfulness. In a similar way, it is possible to assign a real value
to assertions as an indication of their degree of truthfulness.
To represent imprecise, vague, inexact or fuzzy knowledge, Lotfi Zadeh also suggested using linguistic variables as a second-level struc-
ture based on fuzzy sets (Zadeh, 1973).
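For illustration, here is a toy membership function for the linguistic value "tall" in the spirit of Zadeh's fuzzy sets; the breakpoints of 160 cm and 190 cm are arbitrary choices for this sketch, not values from the literature.

```python
def tall(height_cm: float) -> float:
    """Graded membership of a person in the fuzzy set 'tall'.

    Classical set membership would return only 0 or 1; here membership
    grows linearly between two arbitrarily chosen breakpoints.
    """
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30

for h in (150, 170, 185, 195):
    print(h, "cm -> membership", round(tall(h), 2))
```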
To understand knowledge correctness, let us consider its compo-
nents and facets.
Relevance, domain interpretability and domain describability are
components of the attribution modality of knowledge correctness.
Relevance of knowledge is a pivotal component of knowledge cor-
rectness, which has three basic types:

• Knowledge K is domain relevant if it is related to the domain (object) it is attributed to.
• Knowledge K is problem relevant if it is related to the problem
under consideration.
• Knowledge K is goal relevant if it is useful in achieving a definite
goal.

Definition 2.3.10. The domain relevance of a knowledge item shows the extent to which this knowledge item is related to some issue of a considered domain, or how it bears on the issue.

As relations can be stronger or weaker, domain relevance may be higher or lower. For instance, if the domain is a forest in the U.S., then
knowledge about the river near this forest is more relevant to this
domain than knowledge about some river in Australia. To represent
these distinctions in the quantitative form, it is possible to introduce
different measures of domain relevance, the scale of which can be
either the two-element set {0, 1} or the interval [0, 1] or the set of
all non-negative real numbers.
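A toy domain-relevance measure with the scale [0, 1] could look as follows; the term-overlap idea and the sample sets are only illustrative, not a measure proposed in the text.

```python
def domain_relevance(item_terms: set, domain_terms: set) -> float:
    """A toy relevance measure on the scale [0, 1]: the fraction of the
    knowledge item's terms that also occur in the domain description."""
    if not item_terms:
        return 0.0
    return len(item_terms & domain_terms) / len(item_terms)

us_forest = {"forest", "trees", "river", "wildlife", "us"}
river_nearby = {"river", "forest", "fish"}
river_in_australia = {"river", "australia", "outback"}

print(domain_relevance(river_nearby, us_forest))        # higher relevance
print(domain_relevance(river_in_australia, us_forest))  # lower relevance
```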
True knowledge about a domain A can be irrelevant to a domain
B, which is not related to A. For instance, knowledge about elemen-
tary particles is irrelevant to music or art. Knowledge of geometry is
irrelevant to moral issues.
Definition 2.3.11. The problem relevance of a knowledge item shows the extent to which this knowledge item is related to some problem.
As relations can be stronger or weaker, problem relevance may be
higher or lower. To represent these distinctions in the quantitative
form, it is possible to introduce different measures of problem rele-
vance, the scale of which can be either the two-element set {0, 1} or
the interval [0, 1] or the set of all non-negative real numbers.

Definition 2.3.12. The goal relevance of a knowledge item shows the extent to which this knowledge item is helpful (useful) in achieving some goal.

As in the case of other types of relevance, it is possible to introduce different measures of goal relevance, the scale of which can be the
two-element set {0, 1} or the interval [0, 1] or the set of all non-
negative real numbers. For instance, if the goal is to know weather in
Los Angeles, then knowledge about weather in Santa Monica is more
relevant to this goal than knowledge about weather in New York.
A knowledge item represented by a statement can be clear, accu-
rate, and precise, but not relevant to the question at issue. For
instance, students often think that the amount of effort they put
into a course should be used in raising their grade in a course. Often,
however, the “effort” does not measure the quality of student learn-
ing, and when this is so, effort is irrelevant to their appropriate grade
(Elder and Paul, 2009). It means that if the goal of a student is to
get good knowledge or a high grade, then the operational knowledge
of how to apply student’s effort can be weakly goal relevant or even
goal irrelevant.
Relevance of knowledge influences knowledge correctness in general because if a knowledge unit K is irrelevant to some issue Q of the domain D of its attribution, then this knowledge cannot be treated as correct with respect to the issue Q.
It is possible to consider relevance as a binary property with only
two values — relevant and irrelevant. However, a more exact repre-
sentation of this property treats relevance as a fuzzy property allow-
ing different degrees of relevance.
Let us consider two other components of the attribution modality of knowledge correctness.

Definition 2.3.13. Domain interpretability of knowledge reflects how an item of knowledge can be interpreted in the domain this knowledge is attributed to.
The third component of the attribution modality of knowledge
correctness is not the same for different types of knowledge.

Definition 2.3.14. Domain descriptability of descriptive knowledge reflects how well this knowledge can describe the domain (object) it is attributed to.
For instance, the statement “Mars has two satellites” has higher domain descriptability than the statement “Mars has satellites.”

Definition 2.3.15. Domain representability of representational knowledge reflects how well this knowledge can represent the domain (object) it is attributed to.
For instance, when the domain (object) of knowledge is number
π, then the number 3.14159 has higher domain representability (i.e.,
gives a better approximation) than the number 3.14.

Definition 2.3.16. Domain applicability of operational knowledge reflects how well this knowledge can be applied to the domain (object) it is attributed to.

For instance, when the domain of knowledge consists of computations, then knowledge in the form of Turing machines has higher
domain applicability than knowledge in the form of finite automata,
while knowledge in the form of inductive Turing machines has higher
domain applicability than knowledge in the form of Turing machines
(Burgin, 2005).
The logical modality of knowledge correctness also has three com-
ponents. Namely, consistency, provability, and testability are compo-
nents of the logical modality of knowledge correctness.
Consistency is an important relational characteristic of a knowl-
edge system. The traditional approach to knowledge consistency
implies separation (and elimination) of elements of knowledge that are called contradictions. For instance, a standard example of a logical contradiction is the expression p ∧ ¬p, where p is a proposition, e.g.,
the expression “It is a table and it is not a table” is contradictory by
the rules of classical logic. However, as we have seen in Chapter 1 and
will see in Chapter 6, Indian logic accepts statements of the form “S
is and is not P ,” which are unacceptable, for example, in Aristotle’s
syllogistics.
It is interesting to know that in the 20th century, fuzzy logic also
made such statements logically correct (Bandemer and Gottwald,
1996). For instance, a ball that is partially green and partially yellow
is green and is not green.
Studying system consistency in logic and beyond, researchers
came to the conclusion that consistency is not an absolute quality, as in classical logic, but a relative property of various
systems, including logical systems. The most general definition of
consistency is given in (Nuseibeh et al., 2001).
Namely, at first, a system C of consistency conditions is deter-
mined in a class of systems K. Then we have the following concept.
Definition 2.3.17. A system R from K is consistent (inconsistent) with respect to C if it satisfies (does not satisfy) all conditions from C.
The most popular example of consistency is logical consistency: a system of propositions or predicates is consistent when it does not allow inference (deduction) of expressions A and ¬A. A weaker kind of consistency is weak logical consistency: a system of propositions or predicates is weakly consistent when it does not contain expressions A and ¬A. In logical calculi and in logics, weak logical consistency coincides with logical consistency.
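The weaker notion can be checked mechanically. Below is a minimal sketch that tests weak logical consistency of a finite set of propositions, encoding negation simply by the prefix "not "; this string-based encoding is a simplification for illustration only.

```python
def is_weakly_consistent(propositions: set) -> bool:
    """Weak logical consistency: the system must not contain both a
    proposition A and its negation ('not A' in this toy encoding)."""
    return not any(("not " + p) in propositions for p in propositions)

K1 = {"it is a table", "it is wooden"}
K2 = {"it is a table", "not it is a table"}

print(is_weakly_consistent(K1))  # True
print(is_weakly_consistent(K2))  # False
```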
Here we are mostly interested in consistency of knowledge sys-
tems. When we are dealing with propositional knowledge, the most
popular is the conventional consistency, the basic condition for which
is absence of contradictions. Another reason for exclusion of con-
tradictions is that, in classical logics, any false statement (contradiction) implies any other statement. As a result, people
always considered contradictions as abhorrent irregularities of thinking, which have to be eliminated from correct thinking and valid logic.
However, as we have seen before, what is a contradiction by clas-
sical criteria can provide useful knowledge. As a result, a new kind
of logical consistency — paraconsistency — was introduced in the
20th century by weakening conditions of logical consistency (Priest
et al., 1989).
Namely, a logic is paraconsistent if and only if its logical conse-
quence relation is not explosive. Here explosiveness means that using
axioms and consequence relations of the logic, it is possible to deduce
any formula, e.g., proposition, in the language of this logic. Paracon-
sistent logics accommodate inconsistency in a manner that treats
inconsistent information and inconsistent knowledge as informative.
There are different systems of paraconsistent logics, e.g., discussive
logics, non-adjunctive systems, preservationism, adaptive logics, log-
ics of formal inconsistency, many-valued logics, and relevant logics.
As it is possible to have several consistency conditions and/or
some of the conditions can be satisfied only partially, in general,
consistency and inconsistency are fuzzy properties.
In general, consistency is a particular case of correctness. Namely,
comparing Definitions 2.3.1 and 2.3.17, we see that correctness
becomes consistency when correctness conditions actually are con-
sistency conditions.
However, consistency in general and logical consistency, in partic-
ular, is only one type of correctness. There are also other types such
as provability and testability.
Provability as a component of the logical modality of knowledge
correctness reflects how and to what extent it is possible to prove,
e.g., support by evidence or infer, correctness of a given knowledge
item.
For instance, we can assume that a system of statements, e.g., a
formal theory, is correct only when it is consistent and it is possi-
ble to prove its consistency. From this point of view, any sufficiently
powerful mathematical theory U , i.e., a theory that includes the for-
mal arithmetic, is not correct by itself because, by Gödel's second
incompleteness theorem, it is impossible to prove consistency of U
using only means from the theory U . Nevertheless, the theory U may
be internally incorrect but externally correct if there are other means
to prove its consistency.
Testability as a component of the logical modality of knowledge
correctness reflects how and to what extent it is possible to estimate,
e.g., support by evidence or infer correctness of a given knowledge
item.
Testability is essentially important for operational knowledge. For
instance, it is possible to treat a computer program as a potentially
correct operational knowledge item if it is possible to test it finding
and correcting all deficiencies. In this case, the possibility of defi-
ciency correction is also a correctness condition.
An important type of knowledge correctness is truthfulness. There
are different ways to introduce truthfulness of knowledge. All of
them involve three types of truthfulness functions: domain-oriented, reference-oriented, and attitude-oriented. According to the first
approach, we have the following model of truthfulness.
A system T , or, as it is now fashionable to call it, an agent A, that has knowledge K about the domain (object) D is considered. Then the truthfulness of K means that (as a condition from C) the description that K gives of D is true. Thus, the truthfulness tr(K, D) of
the knowledge K about the domain D is a function of two variables
that takes two values — true and false.
In addition, the function tr(T , D) gives conditions for differenti-
ating knowledge from similar structures, such as beliefs, descriptions
or fantasies.
Knowledge truthfulness, or domain related correctness, shows
absence of distortions in knowledge representation of its domain.
Thus, truthfulness is closely related to accuracy of knowledge, which
reflects how close the given knowledge is to absolutely exact knowledge. However, truthfulness and accuracy of knowledge are different
properties. For instance, statements “π is approximately equal to
3.14” and “π is approximately equal to 3.14159” are both true, i.e.,
their truthfulness is equal to 1. At the same time, their accuracy is
different. The second statement is more accurate than the first one.
We see that conventional truthfulness can indicate only two possibil-
ities: complete truth and complete falsehood.
To measure truthfulness in a better way, we utilize a measure tr of truthfulness of knowledge in the system T. Such a measure allows one to develop a comprehensive approach to true knowledge.
However, in many situations, it is impossible to verify (or vali-
date) truthfulness or falsehood and we come to three basic types of
knowledge:

— correct knowledge,
— incorrect knowledge,
— unverified knowledge.

To formalize these concepts, let us take a number k from the interval [0, 1].

Definition 2.3.18. (a) A portion of knowledge I is called true or genuine knowledge about D with respect to a measure cor if cor(I, TA) > 0.
(b) A portion of knowledge I is called true or genuine to the degree k knowledge about D with respect to a measure cor if cor(I, TA) > k.

This definition looks natural and adequately works in many situations. However, there are some problems with it. Imagine that information that gives correct knowledge about some domain (object) D comes to A but it does not change the knowledge system TA because correct knowledge about D already exists in TA. In this case, cor(I, TA) = cor(I(TB), TA) − cor(TB, TA) = 0.
This implies that it is necessary to distinguish relative, i.e., relative to a knowledge system TA, knowledge correctness and absolute correctness. To define absolute correctness, we take a knowledge system T0D that has no a priori knowledge about the domain (object) D.

Definition 2.3.19. A portion of knowledge I is called purely true knowledge about D with respect to a measure cor if cor(I, T0D) > 0.

It is necessary to understand that it is not a simple task to find such a knowledge system T0D that has no a priori knowledge about the domain (object) D. Besides, truthfulness depends on other properties of the knowledge system T0D, e.g., on algorithms that
are used for conversion of received information into knowledge. It is possible to get true information, which is eventually transformed into
incorrect knowledge. For instance, during the Cold War, witty people
told the following joke.
“Russia challenged the United States to a foot race. Each country
sent their fastest athlete to a neutral track for the race. The American
athlete won. The next day, the major Soviet newspaper “Pravda”,
which means truth in Russian, published an article with the following
title:

Russia takes second in international track event, while the United States comes in next to last.”

We see that, literally, the article of “Pravda” is true, but people's a priori knowledge makes them assume a big race with many participants, and in such a way, they get a wrong impression and false knowledge if they do not know how many participants were in the race.
It is possible to treat truthfulness and correctness as linguis-
tic variables in the sense of (Zadeh, 1973). For instance, we can
separate such classes as highly correct/true knowledge, sufficiently
correct/true knowledge, weakly correct/true knowledge, weakly false
knowledge, and highly false knowledge.
From this point of view, we come to three fuzzy types of know-
ledge:

— true or genuine knowledge,
— partially true knowledge,
— false knowledge.

However, it is possible to separate these types of knowledge in a more general situation, utilizing the concept of knowledge measure.
There are different ways to do this. Analyzing different publications,
we separate two classes of approaches: relativistic definitions and
universal definitions. The latter approach is subdivided into object-
dependent, reference-dependent, and attitude-dependent classes. At
first, we consider the relativistic approach to this problem.
To have genuine knowledge in the conventional sense, we take as the measure m such a property as correctness of knowledge or validity of
knowledge or of knowledge acquisition. Let us specify the relativistic
approach in the case of cognitive knowledge, taking a measure m
that reflects such a property as truthfulness. Thus, to get a more exact representation of the conventional meaning of the term true knowledge, we consider only cognitive knowledge and assume that true cognitive knowledge gives true knowledge or, more exactly, makes knowledge truer than before.
However, it is necessary to understand that the truth of knowl-
edge and the validity of its acquisition are not always the same. For
instance, the truth of knowledge represented by propositions and the
validity of reasoning are distinct properties, while there are relations
between them (cf., for example, (Suber, 2000)). This relationship is
not entirely straightforward. It is not true that truth and validity, in
this sense, are utterly independent because the impossibility of “case
zero” (a valid argument with true premises and false conclusion)
shows that one combination of truth-values is an absolute bar to
validity. According to the classical logic, when an argument has true
premises and a false conclusion, it must be invalid. In fact, this is
how we define invalidity. However, in real life, people are able to take
a true statement and to infer something false. The Cold War joke considered earlier in this section gives an example of such a situation.
To formalize the concept of knowledge truthfulness, we use the
model developed in (Burgin, 2004) and described in Chapter 4.
According to this model, general knowledge K about an object F
has the structure represented by Diagram (2.5) and a high level of validation.
              g
      W ------------> L
      ^               ^
      |               |
    t |               | p            (2.5)
      |               |
      U ------------> C
              f
This diagram has the following components:


(1) some class U containing an object F ;
(2) an intrinsic property that is represented by an abstract property
T = (U, t, W ) with the scale W , which is defined for objects from
U (cf., Chapter 5);
(3) some class C of names, which includes a name “F ” of the
object F ;
(4) an ascribed property that is represented by an abstract property
P = (C, p, L) with the scale L, which is defined for names from
C (cf., Chapter 5);
(5) the correspondence f assigns names from C to objects from U
where, in the general case, an object has a system of names or, more generally, a conceptual image (Burgin and Gorsky, 1991) assigned
to it;
(6) the correspondence g assigns values of the property P to values
of the property T . In other words, the correspondence g relates
values of the intrinsic property to values of the ascribed property.
For instance, when we consider a property of people such as
height (the intrinsic property), in measuring the height, we can
get only an approximate value of the real height, or height with
some precision (the ascribed property).
In more detail, the basic structure of knowledge is discussed and
described in Chapter 4.
According to the attitude-dependent approach, we have the follow-
ing definitions.

Definition 2.3.20. General knowledge T about an object F for a system R is the entity that has the structure represented by Diagram (2.5) and is estimated (believed) by the system R to represent true relations with a high degree of confidence.
Consequently, we come to three main types of knowledge about
some object (domain):
— objectively true knowledge,
— objectively neutral knowledge,
— objectively false knowledge.
Taking some object domain D and tr as the measure m in Definition 2.3.3, we obtain unconditional concepts of true and false knowledge.

Definition 2.3.21. A portion of knowledge I is called objectively true knowledge about D if tr(I, D) > 0.
To adequately discuss a possibility of false knowledge existence
from a methodological point of view, it is necessary to take into
account three important issues: multifaceted approach to reality, his-
torical context, and personal context. Thus, we come to the following
conclusion.

First, there is a structural issue in this problem. Namely, the dichotomous approach, which is based on classical two-valued logic, rigidly divides any set into two parts, in our case, true and false knowledge. As a result, the dichotomous approach gives a very approximate image of reality. A much better approximation is achieved
through the multifaceted approach based on multivalued logics, fuzzy
reasoning, and linguistic variables.
Second, there is a temporal issue in this problem. Namely, the
problem of false knowledge has to be treated in the historical or,
more exactly, temporal context, i.e., we must consider time as an
essential parameter of the concept. Indeed, what is treated as true
in one period of time can be discarded as false knowledge in another
period of time.
Third, there is a personal issue in this problem, i.e., distinction
between genuine and false knowledge often depends on the per-
son who estimates this knowledge. For instance, for those who do
not know about non-Diophantine arithmetics (Burgin, 1997d; 2001b;
2010c), 2 + 2 is always equal to 4. At the same time, for those who
know about non-Diophantine arithmetics, it becomes possible that 2
+ 2 is not equal to 4.
In light of the first issue of our discussion about false knowledge,
we can see that in cognitive processes, the dichotomous approach,
which separates all objects into two groups, A and not A, is not
efficient. Thus, if we take the term “false knowledge”, then given
a statement, it is not always possible to tell if it contains genuine or false knowledge. To show this, let us consider the following statements:

1. “π is equal to 3.”
2. “π is equal to 3.1.”
3. “π is equal to 3.14.”
4. “π is equal to 3.1415926535.”
5. “π is equal to (4/3)2 .”

According to the definition of pi and our contemporary knowledge, which states that pi is a transcendental number, all these statements
contain false knowledge. In practice, they are all true but with dif-
ferent exactness. For example, the statement (4) is truer than the
statement (1). Nevertheless, in the ancient Orient, the value of pi
was frequently taken as 3 and people were satisfied with this value
(Eves, 1983). Archimedes found that pi is approximately equal to 3.14. For centuries, students and engineers have used 3.14 as the value for pi and
had good practical results. Now calculators and computers allow us
to operate with much better approximations of pi, but nobody can
give the exact decimal value of this number.
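The difference in exactness among the numeric approximations above can be made explicit by computing their absolute errors, as in the following small sketch (which covers statements (1) through (4)).

```python
import math

# Absolute errors of the numeric approximations of pi from statements (1)-(4).
approximations = {
    "(1) 3": 3.0,
    "(2) 3.1": 3.1,
    "(3) 3.14": 3.14,
    "(4) 3.1415926535": 3.1415926535,
}

for label, value in approximations.items():
    print(f"{label:20s} absolute error = {abs(math.pi - value):.12f}")
```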
Importance of the temporal issue is demonstrated by the following
example from the history of science that helps to better understand
the situation with false knowledge. Famous Greek philosophers Leu-
cippus (fl. 445 B.C.E.) and Democritus (460–360 B.C.E.) suggested
that all material bodies consist of small particles, which were called
atoms. “In reality,” said Democritus, “there are only atoms and the
void.”
We can ask the question whether this idea about atoms contains
genuine or false knowledge. From the point of view of those scientists
who lived after Democritus but before the 15th century, it contained
false knowledge. This was grounded by the fact that those scien-
tists were not able to look sufficiently deep into the matter to find
atoms.
However, the development of scientific instruments and exper-
imental methods made it possible to discover microparticles such
that have been and are called atoms. Consequently, now it is a fact,
which is accepted by everybody, that all material bodies consist of atoms. As a result, now people assume that the idea of Leucippus
and Democritus contains genuine or true knowledge.
This shows how people’s comprehension of what is genuine knowl-
edge and what is understood as false knowledge changes with time.
Lakatos (1976) and Kline (1980) give interesting examples of simi-
lar situations in the history of mathematics, while Cartwright (1983)
discusses analogous situations in the history of physics.
All these examples demonstrate that it is necessary to treat false knowledge the way we use negative numbers, and not to discard false knowledge, just as we do not reject the utility of such a number as 0.
History of mathematics demonstrates that understanding that 0 is
a number and a very important number demanded a lot of hard
intellectual efforts from European mathematicians when Arab math-
ematicians brought to them knowledge about 0 from India.
Going to the third point of the discussion about false knowledge
related to the personal issue, let us consider other examples from
the history of science as here we are studying knowledge by scientific
methods.
In his lectures on optics, Isaac Newton (1642–1727) developed a
corpuscular theory of light. According to this theory, light consists of
small moving particles. Approximately at the same time, Christian
Huygens (1629–1695) and Robert Hooke (1635–1703) built a wave theory of light. According to their theory, light is a wave phenomenon. Thus, it was possible to ask who among them, i.e., Newton or Huygens and Hooke, gave genuine knowledge and who gave false knowledge. For a long time, both theories were competing. As a result, the answer to our question depended on whether the respondent knew physics and was an adherent of Newton's theory or of the theory of Huygens and Hooke. However, for the majority of people who lived at that time, both theories did not provide knowledge because those people did not understand physics.
A modern physicist believes that both theories contain genuine
knowledge. So, distinction between genuine and false knowledge in
some bulk of knowledge depends on the person who estimates this
knowledge.
Existence of false knowledge is recognized by the vast majority of people, but some theoreticians insist that false knowledge is
not knowledge. There is persuasive evidence supporting the opin-
ion that false knowledge exists. For instance, readers find a lot of
false or inaccurate knowledge in newspapers, books, and magazines.
Recent studies found a considerable amount of inaccurate knowledge
on the Internet (Hernon, 1995; Connell and Triple, 1999; Bruce, 2000;
Berland et al., 2001).
The new and truly wonderful medium, the Internet, unfortunately
has one glaring downside. Namely, along with all the valid knowledge
it provides, the Internet also contains much misleading knowledge,
false knowledge, and outright hype. This is also true for the fields of
science and science criticism. Certainly many so-called “discussion
groups” and informal “book review” sites are good examples of the
blind leading the blind (Fallis, 2004).
On an Internet blog, the author of this book once encountered an
assertion of one of the bloggers that there was a critique of a book A
in a paper D. Finding the paper D, the author did not come across
any such critique in it and was very surprised by the very thought of such a possibility because the paper D had been published in 2003,
while the book A had been published only in 2005.
As another example of a situation when “the blind lead the blind,” we can take the critique by a professor P aimed at a book B. Indeed, this critique was irrelevant and contained essential logical and factual mistakes. However, when asked whether he had read the book he criticized, P answered that he did not need to do that because he had seen an article by the same author and understood nothing. Naturally, in this situation, his critique was an example not only of incompetent but also of indecent behavior because, using the Internet and other contemporary means of communication, professor P transmitted this false
knowledge to those who read his writings on this topic.
Thus, we see that the problem of false knowledge is an impor-
tant part of knowledge studies and we need more developed scientific
methods to treat these problems in an adequate manner.
To have genuine knowledge relevant to the usual understanding, we take as the measure either correctness of knowledge or validity of knowledge. Thus, we can call cognitive knowledge false when it decreases the validity of the knowledge it gives. Definition 2.3.8
shows that we have false knowledge when its acceptance makes our
knowledge less correct. For instance, let us consider people who lived
in ancient Greece and accepted ideas of Leucippus and Democritus
that all material bodies consist of atoms. Then they read Aristotle’s
physics that eliminated the idea of atoms. Because we now know that
the idea of atoms is true, Aristotle’s physics decreased true knowl-
edge about such an aspect of the world as the existence of atoms and
thus, gave false knowledge about atoms.
We see that false knowledge is also knowledge because it has
a definite impact on the infological system. Only this impact is
negative.
It is necessary to understand that the concept of false knowledge
is relative, depending on the chosen measure. Let us consider the fol-
lowing situation. A message M comes, telling something completely
incorrect. Thus, it will be wrong knowledge with respect to a seman-
tic measure of knowledge (cf., Chapter 4). At the same time, if all
letters in the message M were transmitted correctly, it will contain
genuine knowledge with respect to the (statistical) Shannon’s mea-
sure of information (cf., Chapter 3).
It is interesting that there is no direct correlation between false
knowledge and meaningless knowledge. Bloch in his book “Apology of
History” (1949) gives examples when false knowledge was meaningful
for people, while genuine knowledge was meaningless for them.
Moreover, a knowledge unit can be true knowledge with respect
to another measure and false knowledge with respect to the third
measure. For instance, let us consider some statement X made by a
person A. It can be true with respect to what A thinks. Thus, knowl-
edge in X is genuine with respect to what A thinks (i.e., according
to the measure m2 that estimates correlation between the statement
X and beliefs of A). The statement X can be false with respect to the real situation. Thus, knowledge in X is false with respect to the
real situation (i.e., according to the measure m3 that estimates cor-
relation between the statement X and reality). At the same time,
knowledge in X can be pseudo knowledge from the point of view
of the person D who does not understand it (i.e., according to the
measure m1 that estimates correlation between the statement X and
knowledge of D).
The opposite property to correctness is incorrectness.

Definition 2.3.22. (a) A knowledge system K is incorrect with respect to a system C of conditions if it violates at least one condition from C.
(b) A knowledge system K is strongly incorrect with respect to a system C of conditions if it violates all conditions from C.

For instance, taking the following system of conditions for formal logics C = {(1) a logic L is not trivial, i.e., it is not empty and does not contain all formulas from the logical language; (2) a logic L does not contain expressions of the form A & ¬A}, we see that any classical logic is incorrect with respect to C if and only if it is strongly incorrect with respect to C (Shoenfield, 2001). However, there are paraconsistent logics that are incorrect with respect to C, violating the second condition from C, but not strongly incorrect with respect to C because they still may satisfy the first condition from C (Priest et al., 1989).
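Definition 2.3.22 can also be phrased operationally, as in the following sketch, where correctness conditions are modeled as Boolean predicates over a knowledge system; the encoding of the example conditions is ours and only illustrative.

```python
def violated_conditions(conditions: dict, knowledge) -> list:
    """Names of the conditions (predicates) that the knowledge system violates."""
    return [name for name, holds in conditions.items() if not holds(knowledge)]

def is_incorrect(conditions: dict, knowledge) -> bool:
    # Incorrect: violates at least one condition from C.
    return len(violated_conditions(conditions, knowledge)) >= 1

def is_strongly_incorrect(conditions: dict, knowledge) -> bool:
    # Strongly incorrect: violates all conditions from C.
    return len(violated_conditions(conditions, knowledge)) == len(conditions)

# Toy conditions in the spirit of the example with formal logics.
C = {
    "non-trivial": lambda K: 0 < len(K["formulas"]) < K["language_size"],
    "no contradiction": lambda K: not K["contains_contradiction"],
}

paraconsistent_like = {"formulas": {"A", "not A"}, "language_size": 100,
                       "contains_contradiction": True}
print(is_incorrect(C, paraconsistent_like))           # True
print(is_strongly_incorrect(C, paraconsistent_like))  # False
```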

2.3.3. Confidence in and certainty of knowledge


Confidence is an essentially psychological characteristic of knowl-
edge, which shows the extent to which an individual or a group
strongly believes (is convinced) that some epistemic structures are
knowledge. In a more general interpretation, confidence, as a knowl-
edge characteristic, reflects the knower’s (knowledge user) mental
state of being without doubt about estimation of the epistemic struc-
ture, e.g., knowledge item, property. For instance, a person can be
confident that her belief, e.g., the belief that Venus is a planet of
the Solar system or that Venus is a goddess, is knowledge, i.e., it is true.
We can see that these two beliefs are essentially different. To reflect these differences, it is possible to call conventional confidence psychological confidence and to consider another type of
confidence called epistemic confidence, which reflects epistemological
status of knowledge items, i.e., whether they are indeed knowledge
items and not other epistemic structures.
It is useful to distinguish three kinds of psychological
confidence — internal, external, and exterior confidence.
Internal confidence is confidence of an individual or a group in
epistemic structures of the same individual or group. For instance,
self-confidence is having confidence in oneself or more exactly, con-
fidence in own abilities and qualities, which are certain epistemic
structures.
External confidence is confidence of an individual or a group in
epistemic structures of another individual or group.
Exterior confidence is confidence of an individual or a group in
epistemic structures stored in some knowledge carrier, such as a book,
journal, or knowledge base.
Certainty is a more restricted characteristic of knowledge, reflect-
ing higher levels of confidence. Thus, it is a psychological character-
istic of the knower (knowledge user). For instance, it can be certainty
of the knower (knowledge user) in knowledge item correctness, e.g.,
knowledge item is certain for the knower (knowledge user) when this
knower is supremely convinced of its truth.
However, there are other kinds, or more exactly, interpretations of certainty. Another kind is epistemic certainty, which is not a psy-
chological but an epistemological characteristic estimating that an
epistemic structure (knowledge item) has the highest possible epis-
temic status. This status has to be validated in epistemology as the
theory of knowledge.
Epistemic certainty often but not always correlates with psycho-
logical certainty. For instance, it is possible that a knower (knowledge
user) has epistemically certain knowledge, e.g., a belief that enjoys
the highest possible epistemic status, but does not have psychological
certainty either being unaware of epistemic certainty or having doubt about its validity. An opposite situation is also possible when a
knower (knowledge user) has psychological certainty about certain
knowledge item strongly believing in its truthfulness, but in spite
of this, the knowledge item does not have epistemic certainty. For
instance, Aristotle was (psychologically) certain that all swans were
white but as it was discovered later, this belief was not epistemically
certain.
A concept more general than epistemic certainty is source certainty, which is a knowledge characteristic reflecting the highest possible status of
the knowledge item coming from some source. For instance, epistemic
certainty gets the status from epistemology. Moral certainty discussed
by some philosophers gets the status from God or from tradition.
External certainty, as a highest degree of external confidence, gets the
status from an authoritative group or individual, e.g., a knowledge
item can be certain because Aristotle or Kant said so.
Psychological confidence and certainty can be based on different
reasons — on assurance, validation, groundedness, or even on per-
suasion. For instance, groundedness by evidence reflects the extent to which the evaluation of knowledge is supported by evidence. When confidence is ungrounded, it is called arrogance or hubris.
It is natural to consider degrees of confidence and certainty. For
instance, Carnap treated epistemic certainty as having some degree, which could be objectively measured. Definite techniques for mea-
suring confidence and consequently, certainty have been developed in
statistics where such concepts as confidence level, confidence interval,
confidence coefficient and confidence bounds have been introduced for
this purpose.
A confidence interval is an interval estimate of the confidence
that a sample characteristic or parameter gives confident (reliable)
knowledge of the same characteristic (parameter) for the whole popu-
lation (Fisher, 1956). How frequently the calculated confidence inter-
val contains the parameter is determined by the confidence level or
confidence coefficient, which is a numerical estimate of the confi-
dence. For instance, a 90% confidence level means that, for repeated samples, about 90% of the correspondingly constructed confidence intervals can be expected to contain the population characteristic (parameter). While two-sided confidence limits form a confidence
interval, their one-sided counterparts are called lower or upper con-
fidence bounds and are also numerical estimates of the confidence
(Fisher, 1956).
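As a small numerical illustration (not taken from the cited sources), the following sketch computes an approximate two-sided 90% confidence interval for a population mean from a sample, using the normal approximation with z = 1.645; for small samples a t-based interval would be preferable.

```python
import statistics
from math import sqrt

def confidence_interval_90(sample):
    """Approximate two-sided 90% confidence interval for the population mean."""
    n = len(sample)
    mean = statistics.mean(sample)
    standard_error = statistics.stdev(sample) / sqrt(n)
    z = 1.645  # normal quantile for a two-sided 90% confidence level
    return mean - z * standard_error, mean + z * standard_error

heights_cm = [172, 168, 181, 175, 169, 177, 174, 170, 179, 173]
print(confidence_interval_90(heights_cm))
```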
There are also methods of discriminating degrees of psychologi-
cal confidence and certainty. For instance, in legal practice, the
following degrees of certainty are used:

• no credible evidence,
• some credible evidence,
• a preponderance of evidence,
• clear and convincing evidence,
• beyond reasonable doubt,
• beyond any shadow of a doubt, which is usually recognized as an impossible standard to meet.

Degrees of certainty are related to degrees of confidence but they are not the same. For instance, a reasonable degree of confidence can correspond to a low degree of certainty.
It also happens that the degree of psychological confidence is dif-
ferent from the degree of epistemological confidence. When the degree
of psychological confidence is essentially larger than the degree of
epistemological confidence, people speak about overconfidence or pre-
sumptuousness. For instance, overconfidence is an excessive belief in
someone or something, e.g., a plan, succeeding, without any regard
for possible failure.

2.3.4. Complexity and clarity of knowledge


Complexity has become a buzzword in contemporary science. This term is utilized in a variety of scientific fields, and from them, it has entered popular usage on a new level of credibility. Trying to explain why complexity is so important and why it is more important now than it was before, we come to the following three issues.
First, people have to deal with more and more complex systems.
On one hand, the development of science is bringing cognition to
more and more complex systems. On the other hand, the develop-
ment of engineering and social organization resulted in building more
and more complex technical systems and developing more and more
complex social systems.
All this is directly related to knowledge. Studying complex sys-
tems in nature, society, and technology, scientists, as a rule, need ade-
quately complex knowledge systems to represent and model systems
they study. To create and invent complex systems, engineers, includ-
ing software and social engineers, need sufficiently complex opera-
tional knowledge.
Second, complexity serves as a measure of needed resources. In
turn, needed resources correlate with system efficiency. Indeed, when two systems give the same results but the first one demands fewer resources than the second system, then the first system is more efficient than the second. Thus, complexity becomes a measure of efficiency. For instance, knowledge that demands less time or less effort
for understanding is more efficient for learning. At the same time,
usually simple knowledge demands less time and effort for understanding than complex knowledge. For instance, it is easier to under-
stand that 2 + 2 = 4 than the statement that there are infinitely
many prime numbers.
Pager (1970) defines efficiency of computation as the value that
is inversely proportional to complexity of the same computation. In
the same way, it is possible to define efficiency of any process as the
value that is inversely proportional to complexity of this process.
Efficiency is a key problem and a pivotal characteristic of any activity. Inefficient systems are ousted by more efficient systems. Consequently, problems of efficiency are vital to any society and any individual. Many great societies, the Roman Empire, the British Empire, and others, perished because they had become inefficient. However, there are many different criteria of efficiency, and to understand this important and, at the same time, complex phenomenon, it is necessary
to use mathematical methods. Such methods are provided by the
mathematical theory of complexity.
Moreover, many other properties of systems are related to com-
plexity. For example, Carlson and Doyle (2002) investigate relations
between complexity and robustness in biological, social, economic, and engineering systems. They show a non-trivial interplay of these
important properties.
Third, complexity helps to comprehend what is practically possi-
ble to achieve and what is not. Many things that are possible to build
or compute in theory are not constructible in reality because there
are no means to do this. That is why, in particular, in the theory of
algorithms, the field oriented at operational knowledge, difference is
made between computable and tractable problems.
Manin (1991) suggests that the development of mathematical
knowledge, and we would like to add, also of scientific knowledge
is directed by complexity issues. The reason is that simpler systems
are more feasible for cognition. Therefore, cognition goes from sim-
ple to more and more complex systems of knowledge. In the past,
mankind has learned to understand reality mostly through simplifi-
cation and analysis, ignoring a huge number of factors and details.
That is why, for example, physics is more developed than biology:
biological systems are much more complex than physical systems.
However, in spite of all its importance, there is no generally
accepted, formalized, and unique definition of complexity in gen-
eral and knowledge complexity, in particular. Complexity has proved
to be an elusive concept. Different researchers in different fields are
bringing new philosophical and theoretical tools to deal with com-
plex phenomena in complex systems. “What is complexity?” is a
basic question of Gell-Mann (1995). However, after many elaborate
considerations and creative insights, he comes to the conclusion that
“a variety of different measures would be required to capture all our
intuitive ideas about what is meant by complexity and by its opposite,
simplicity.”
Going back to the origin of the word complexity, we find the Latin
word “complexus”, which means “entwined” or “twisted together”.
That is why, in mathematics (more exactly, in topology), a topological complex is a structure built from simplexes (Spanier, 1966). This also
reflects the situation when a system that consists of many parts is
considered complex. However, this is not always true. For instance,
the sequence 11 . . . 1 that consists of a thousand symbols 1 is not complex. At the same time, the sequence that consists of the first thousand digits of the number pi is
complex. In general, as Heylighen (1996) writes, complexity can be
characterized by lack of some symmetry or “symmetry breaking”,
that is, by the fact that no part or aspect of a complex entity
can provide sufficient information to actually or statistically predict the properties of the other parts. In other words, complexity
is connected to the difficulty of modeling associated with complex
systems.
If we analyze what it means when we say that some system or
process is complex, we come to a conclusion that it is complex to
do something with this system or process: to study it, to describe it,
to build it, to control it, and so on. Here are two examples of types
of complexity taken from the world of business and industry (Paul,
2002). The first one is complexity of functioning that reflects a high
number of operations to be performed. The second one is complexity
of integration that reflects a high number of problems encountered
in integration processes.
The same is true for complexity of knowledge, which is esti-
mated from the perspective of related processes. For instance,
complexity of knowledge depends on such processes as knowledge
acquisition, knowledge transmission, knowledge integration, teach-
ing, and learning.
Thus, complexity is always complexity of doing something. Being
related to activity and functioning, complexity allows one to repre-
sent efficiency in a natural way: when a process has high efficiency,
it is simple from the point of view of demanded resources, and when
a process has low efficiency, it is complex from the point of view of
demanded resources. For example, we can take time as a measure
of efficiency: what is possible to do in one hour is efficient, while
what is impossible to do even in 1,000 years is inefficient. To esti-
mate temporal efficiency of processes and procedures, such measure
as computational complexity is utilized. It estimates time of compu-
tation or any other algorithmic process.
At the same time, as there are many resources, there are many
corresponding measures of complexity. There are various relations
between different measures of complexity. In particular, an important and interesting relation is the trade-off between different kinds of complexity. For example, in computation, it is possible in some cases to use more memory for program execution, decreasing the time of execution, or to use less memory, paying for it with more time. Highly optimized tolerance (HOT) is one recent attempt, in a long history of efforts, to develop a general framework for studying complexity
in such fields as biology and engineering (Carlson and Doyle, 2002).
The main idea of HOT is that higher structural complexity of a sys-
tem (more complex for construction, modeling or understanding) is
aimed at decreasing behavioral/functioning complexity of a system
(simpler maintenance, less changes under external influence, etc.).
This shows how a trade-off between structural and behavioral com-
plexity can inspire the development of systems.
Here we use the informal definition of complexity from the book
(Burgin, 2005).

Definition 2.3.23. Complexity of a system R with respect to a process (or a group of processes) P is the quantitative or qualitative characteristic (measure) of resources necessary for (used by) the process P involving R.

There are different kinds of involvement.


P may be a process in the system R. For instance, R is a scientific
domain, e.g., physics, as a dynamic knowledge system, P is a process
of the development of a scientific theory in R, and the resource is
researchers who work in this area.
P may be a process that is realized by the system R. For instance,
R is a computer, P is a computational process in R, and the resource
is memory.
P may be a process controlled by the system R. For instance, R is
operational knowledge in the form of a program, P is a computational
process controlled by R, and the resource is time.
P may be a process that builds the system R. For instance, R
is operational knowledge in the form of a software system, P is the
process of its design, and the resource is programmers.
P may be a process that operates with R, e.g., transforms, utilizes, models, and/or predicts behavior of the system R. For instance, R
is operational knowledge in the form of a program, P is the process
of writing the program R, and the resource is programmers who are
writing this program.

Definition 2.3.24. Complexity of a system R with respect to a process (or a group of processes) P is the quantitative or qualitative characteristic (measure) of resources necessary for (used by) the process P involving R.
Definition 2.3.25. A complexity measure on a set of systems U is a
(partial) numerical function that assigns higher numbers to systems
with higher complexity.
In cognitive processes, complexity is closely related to informa-
tion and knowledge, representing specific kinds of information and
knowledge measures.
In turn, processes use different kinds of resources:
Natural resources consumed by a process P : time, space, information,
energy, power, minerals, and so on.
Social resources consumed by a process P : people involved, their
time, efforts, expertise, knowledge, and so on.
Artificial resources consumed by a process P : system time, system
space, data, knowledge, memory, system units, system actions,
computers, experimental devices, e.g., telescopes or microscopes,
and so on.
The general definition of system complexity gives us a definition of
knowledge complexity.
Definition 2.3.26. Complexity of a knowledge system K with
respect to a process (or a group of processes) P is the quantita-
tive or qualitative characteristic (measure) of resources necessary for
(used by) the process (the processes from) P involving K.
Different kinds of processes determine different kinds of comp-
lexity:
— If processes from P are processes of utilization of K, then we have
utilization complexity of K.
— If processes from P are processes of transformation of K, then
we have transformation complexity of K.
— If processes from P are using K for solving some problems, then
we have problem complexity of K.
— If processes from P are processes for obtaining, e.g., acquisition
of, K, then we have cognitive complexity of K.
Note that in the first three cases, knowledge K plays the role of
the used resource and its complexity is a significant characteristic of
this resource.
Problem complexity is very important because problems represent
a pivotal form of erotetic knowledge. People solve problems all the
time and solution of some of these problems is vital for individuals,
organizations, and communities. Thus, complexity of some problems
is essentially important for people. If it is impossible to solve a prob-
lem with given resources, we assume that it has infinite complexity.
The halting problem for Turing machines is an example of a problem
with infinite complexity for operational knowledge in the form of Tur-
ing machines since we know that it has no solution in the class of all
Turing machines. However, for operational knowledge in the form of
inductive Turing machines, this problem has finite complexity. This
shows that, in general, problem complexity is a relative property,
which essentially depends on knowledge used for solving the problem.
Definitions 2.3.25 and 2.3.27 imply that complexity is always com-
plexity of doing something and although complexity is attributed
to a system, it is a principal characteristic of a process and of the
operational knowledge in the form of an algorithm if the process is
determined by an algorithm. However, it is possible to extend the
constructions of such measures to complexity of arbitrary processes
and through processes to arbitrary systems. For instance, if we take
some non-algorithmic process, such as cognition, then it is possible
to measure its complexity by the amount of resources this process
needs.
To make Definitions 2.3.25 and 2.3.27 constructive, it is neces-
sary to build mathematical models of efficiency. One of such models
is complexity of algorithms. Complexity is a mirror reflection of effi-
ciency: the more efficient an algorithm (system of algorithms) A is
for a problem P (problems from some class K), the less complex
P (problems from K) is for the algorithm (system of algorithms)
A. Mathematical models of complexity allow researchers to measure
efficiency of various algorithms.
It is necessary to have different complexity measures to estimate
complexity of knowledge from different perspectives. For instance,
complexity of working with such a representational knowledge as a
model depends on the coarse graining (level of detail) of the descrip-
tion of the entity, on the previous knowledge and understanding of
the world that is assumed, on the language employed, on the cod-
ing method used for conversion from that language into a string
of bits, and on the particular ideal computer chosen as a standard
(Gell-Mann, 1995). It is possible to consider these characteristics sep-
arately assigning to each a specific complexity measure or to build an
integral complexity measure for estimation of the overall complexity.
Complexity of a system, e.g., of a knowledge system, or of a pro-
cess, e.g., of cognition, depends on the system making the estimation.
This system may be an external observer, a user, or a designer. Relativity of
complexity in general and of knowledge complexity, in particular, is
perfectly demonstrated by the following joke.
A Mathematician (M) and an Engineer (E) attend a lecture by a
Physicist. The topic concerns Kaluza–Klein theories involving knowl-
edge about spaces with dimensions of 11, 12 and even higher. M is
sitting, clearly enjoying the lecture, while E is frowning and look-
ing generally confused and puzzled. By the end, E has a terrible
headache. After the lecture ends, M comments about the wonderful
lecture.
E says, “How do you understand this stuff?”
M: “I just visualize the process.”
E: “How can you POSSIBLY visualize something that occurs in 11-
dimensional space?”
M: “Easy, you first visualize it in an n-dimensional space, then let
n go to 11.”
Mathematics makes subjective complexity objective by introducing
various criteria for complexity. For instance, a problem A is com-
plex because its solution demands a huge amount of memory, while
a problem B is complex because its solution involves performance
of a huge amount of operations. Consequently, the problem A is
complex for a computer with small memory, but it is simple for a
computer with big memory. At the same time, the problem B is sim-
ple for a high performance computer, but is complex for an ordinary
computer.
Clarity and comprehensibility show how easy understanding is and
often vary with the individual user of knowledge, e.g., the reader of
a book. Clarity is a very important property because if a knowledge item
is unclear, it is hard to determine other properties of this item, e.g.,
whether it is accurate or relevant. In fact, it is impossible to tell any-
thing about it without understanding what information it conveys.
There are several practical criteria of clarity:
— It is possible to elaborate further on that issue.
— It is possible to express that issue in another way.
— It is possible to give an illustration for that issue.
— It is possible to give an example for that issue.
Accessibility of knowledge is a kind of knowledge complexity.
There are different measures of knowledge accessibility. Here we
reflect on two of them.
In developed knowledge systems, there are different levels
of knowledge storage, which have different complexity, e.g., time,
of knowledge access. This situation is modeled by the measure of
actuality — the easier the access, the higher the measure of
actuality.
On the other hand, complexity, e.g., time, of extraction and/or
production of potential knowledge from actual knowledge can be
rather different for different knowledge items. This situation is mod-
eled by the measure of potentiality — the easier the extraction/
production, the lower the measure of potentiality.
The size of knowledge representation is a kind of complexity. For
instance, an algorithm or a program is an operational knowledge
item, while the length of an algorithm or a program is a popular
measure of complexity in the theory of algorithms and in algorithmic
information theory (Burgin, 2005). Other examples of operational
knowledge complexity measures are:
— Kolmogorov or algorithmic complexity;
— Time complexity;
— Space complexity;
— Average time complexity;
— Average space complexity;
— Static complexity measures;
— Dynamic complexity measures;
— Direct complexity measures;
— Dual complexity measures.
These and other complexity measures are used in various areas.
An axiomatic approach to complexity of operational knowledge in the
form of algorithms and abstract automata is developed in (Blum,
1967; Burgin, 2005; 2010d; Câmpeanu, 2012). An axiomatic approach
to complexity of operational knowledge in the form of computer
software is developed in (Prather, 1984; Bollmann and Zuse, 1987;
Burgin and Debnath, 2003).
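As a small illustration of the difference between static and dynamic measures (our own sketch, not a formalization from the cited works), the following Python fragment treats the length of an algorithm's description as a static complexity measure and the number of elementary steps it performs on a given input as a dynamic one.

    import inspect

    def linear_search(items, target):
        steps = 0
        for item in items:
            steps += 1                      # count comparisons as elementary steps
            if item == target:
                return True, steps
        return False, steps

    # Static complexity: the size of the operational knowledge item (its source text).
    static_complexity = len(inspect.getsource(linear_search))

    # Dynamic complexity: the resources (steps) used when the knowledge is applied.
    _, dynamic_complexity = linear_search(list(range(1000)), 999)

    print("static (description length):", static_complexity)
    print("dynamic (steps on this input):", dynamic_complexity)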
2.3.5. Significance of knowledge
Significance of knowledge is a relative characteristic, which depends
on the person or system that evaluates this knowledge. What is sig-
nificant for one person can be insignificant for another. For instance,
a scientist makes observation, which is not important or even sig-
nificant for him but is very important for his colleague. In a similar
way, social knowledge is, as a rule, not important for a physicist but
rather essential for a sociologist.
It is possible to find an interesting example of knowledge sig-
nificance in the areas of real number representations and computer
arithmetic. In the standard decimal or binary representation, a real
number is represented by a signed sequence of digits. These digits
give knowledge about numbers. For instance, if the last digit of a
number in the standard decimal representation is 4, then it is an
even number. In a similar way, if a number has three digits in the
standard decimal representation, then this number is larger than 99.
Note that by tradition, the positive sign + is not usually displayed.
For instance, 246.159 is the standard decimal representation of a
real number. It is also possible to represent the same number by the
sequence 00246.159000. However, zeroes in this sequence are insignif-
icant because they do not change the number and they are omitted,
as a rule, in the standard representation.
Besides, what is significant in one type of representations can
become insignificant in another type of representations. For instance,
there is the scientific notation or scientific representation of real num-
bers, which consists of three parts: the sign of the number, the man-
tissa of the number, and the exponent of the number. The mantissa of
the number is a real number, which is less than 10 and larger or equal
to one, and the exponent is an arbitrary integer number. For instance,
3.159 × 10¹¹ is a number written in scientific notation. In science, very large and
very small numbers frequently occur in many fields. So, to represent
these numbers in a much shorter form, scientists invented scien-
tific notation. For instance, the mass of a proton is approximately
0.00000000000000000000000165 gram. This is the standard represen-
tation of a decimal number. In scientific notation, this number has
a much shorter representation. Namely, it is equal to 1.65 × 10⁻²⁴
gram. Here is one more example. France’s national debt at the
end of 2012 was $5,200,000,000,000. In scientific notation, we have
$5.2 × 10¹². We see that only the digits in the mantissa are significant
with respect to scientific notation, while in the standard notation all
digits are significant.
Later scientific notation was used as the floating point representa-
tion of real numbers in computers. This technique allows computers
to operate in a much larger range of numbers than the fixed point,
i.e., standard representation. However, the floating point represen-
tation has its pitfalls. One of the significant problems that arise in
different situations is the loss of significant digits. For instance, num-
bers 246.1593 and 246.1592 have seven significant digits and thus,
seven-digit accuracy, while their difference 246.1593 − 246.1592 =
0.0001 = 1 × 10⁻⁴ has only one significant digit and thus, only one-
digit accuracy.
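The effect is easy to reproduce; the following Python sketch (using the numbers from the example above) shows how the subtraction of two nearly equal floating point numbers cancels the leading significant digits.

    a = 246.1593
    b = 246.1592

    difference = a - b                 # mathematically 0.0001, i.e., 1 × 10⁻⁴
    print(difference)                  # prints a value close to, but not exactly, 0.0001

    # The operands carried seven significant digits, but only about one digit
    # of the computed difference is trustworthy: the relative error has grown.
    relative_error = abs(difference - 1e-4) / 1e-4
    print("relative error of the difference:", relative_error)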
Being relative, significance, nevertheless, is an important char-
acteristic of knowledge because for an individual to successfully
function, this individual needs knowledge that is significant with
respect to her or his functioning. Sometimes absence of the neces-
sary knowledge can cause very bad consequences and even lead to
disaster.
At the same time, the whole amount of knowledge accumulated by
society and its members is so huge that one individual cannot acquire
all of it and it is necessary to make the right selection. Significance
is one of the most important criteria for this selection.
It is possible to treat significance as a binary property with two
values — significant and insignificant. However, a more accurate
approach regards significance as a gradual property of knowledge.
It is possible to represent graduality in different scales. The most
exact are numerical scales, in which it is possible to have significance
of order 5 or of order 3. There are also ordered scales. For instance, for
a student it is more significant what she will get as a final grade than
what will be her grade at an intermediate test. There are also nom-
inal scales. For instance, it is possible to use such a scale {extremely
significant, very significant, moderately significant, sufficiently signifi-
cant, slightly significant, almost insignificant, insignificant, completely
insignificant}.
High orders or levels of significance are called importance. It is
possible to use a threshold for importance in the scale of signifi-
cance — knowledge whose significance is larger than this thresh-
old is important. Otherwise, it is unimportant. In
this case, importance of knowledge can be a binary property as sig-
nificance can be. However, it is more natural to consider different
grades of importance treating it as a gradual property. Importance
of knowledge as a category of knowledge significance can be repre-
sented in numerical scales, ordered scales, and nominal scales.
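As a minimal sketch of these scales (the numerical values, labels, and threshold below are our own illustrative choices, not prescriptions), a numerical significance can be translated into a nominal grade and into a binary importance judgment by thresholding.

    NOMINAL_SCALE = [          # lower bound of each grade on a 0..1 significance scale
        (0.9, "extremely significant"),
        (0.7, "very significant"),
        (0.5, "moderately significant"),
        (0.3, "slightly significant"),
        (0.1, "almost insignificant"),
        (0.0, "insignificant"),
    ]

    IMPORTANCE_THRESHOLD = 0.7     # illustrative threshold separating important knowledge

    def grade(significance: float) -> str:
        # Map a numerical significance to a nominal grade.
        for lower_bound, label in NOMINAL_SCALE:
            if significance >= lower_bound:
                return label
        return "completely insignificant"

    def is_important(significance: float) -> bool:
        # Binary importance obtained by thresholding the significance scale.
        return significance >= IMPORTANCE_THRESHOLD

    print(grade(0.85), is_important(0.85))    # very significant True
    print(grade(0.2), is_important(0.2))      # almost insignificant False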
Value of knowledge also reflects its significance. Starting with
Plato, philosophers have discussed what it is about knowledge (if
anything) that makes it especially valuable for people. With the devel-
opment of human civilization, the value of knowledge has continu-
ously grown. However, different thinkers expressed diverse opinions
on this issue. For instance, in 1775, Samuel Johnson (1709–1784)
wrote, “All knowledge is, of itself of some value,” while Samuel Taylor
Coleridge (1772–1834) stated, “The worth and value of knowledge is
in proportion to the worth and value of its object” connecting value
to the knowledge object or domain. The pragmatic approach, which is
already present in Plato's dialogues, asserts that the value of knowl-
edge depends on how this knowledge helps people in their activity aimed
at achieving definite goals, e.g., Francis Bacon (1561–1626) declared
“Knowledge Itself Is Power” (ipsa scientia potestas est), while Alvin
Toffler (1990) proposed that knowledge is a wealth and force multi-
plier, in that it augments what is available or reduces the amount
needed to achieve a given purpose.
2.3.6. Efficiency of knowledge
The efficiency dimension reflects the role of a knowledge item in
achieving certain goals or solving particular problems. It means that
efficiency of knowledge is a relative property — the same knowledge
item can be highly efficient for one goal and have low efficiency for
another goal. For instance, knowledge in a textbook in mathematics
is efficient for learning mathematics and is not efficient for learning
music.
In addition, efficiency also depends on the knowledge user. One
individual can use the same knowledge more efficiently than another
one. For instance, knowledge in a monograph on category theory will
be useless, and thus, inefficient, for a non-professional but it may be
very efficient for a mathematician who works in this area.
Let us consider efficiency of operational knowledge. One kind of
efficiency is related to the number of problems that can be solved
using this operational knowledge.
Definition 2.3.27. The more problems can be solved using an oper-
ational knowledge item K, the more potentially efficient K is.
However, it is important not only to know that it is possible to
solve some problem P in principle, but also to be able to find a
relevant solution in practice. Such problems for which the latter is
possible are called tractable. Tractability of a problem is a relative
property, being dependent on the operational knowledge that is used
for solution. This gives us a definition for pragmatic efficiency of
operational knowledge.
Definition 2.3.28. The more problems are tractable with respect
to an operational knowledge item K, the more pragmatically or func-
tionally efficient K is.
Thus, pragmatic efficiency of operational knowledge depends on
two parameters: the power of the means provided by the operational
knowledge for solving problems and the resources that are used in the pro-
cess of solution. If it is impossible to get the necessary resources, it
is not feasible to solve the problem under consideration. Thus, we
come to the concept of resource efficiency of algorithms.
Definition 2.3.29. The fewer resources are used for solution of
a problem (of problems from some class) by means provided by
operational knowledge, the more resource efficient this operational
knowledge is with respect to this problem (class of problems).
One more kind of efficiency is related to the quality of solution.
Definition 2.3.30. The better solution for problems is provided by
operational knowledge, the more mission efficient this operational
knowledge is.
Reliability, exactness, and relevance are examples of mission effi-
ciency demonstrating that different dimensions of knowledge may
have common components as, for example, relevance is a component
of both correctness and efficiency.
This analysis shows that knowledge efficiency is a function E(K,
G, U) where K is a knowledge item, G is a goal and U is a knowledge
user. Note that not only an individual but also a software system or
a robot can be a knowledge user.
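A toy Python sketch of such a function E(K, G, U) follows (the data structures and the scoring rule are our own simplifications): efficiency is taken here as the fraction of the goal's topics that the knowledge item covers and that the user can actually exploit.

    from dataclasses import dataclass

    @dataclass
    class KnowledgeItem:
        name: str
        topics: frozenset          # what the knowledge item covers

    @dataclass
    class User:
        name: str
        expertise: frozenset       # topics the user is able to work with

    def efficiency(k: KnowledgeItem, goal_topics: frozenset, user: User) -> float:
        # A toy E(K, G, U): 0.0 means useless for the goal, 1.0 fully efficient.
        if not goal_topics:
            return 0.0
        usable = k.topics & goal_topics & user.expertise
        return len(usable) / len(goal_topics)

    textbook = KnowledgeItem("calculus textbook", frozenset({"derivatives", "integrals"}))
    student = User("student", frozenset({"derivatives"}))
    print(efficiency(textbook, frozenset({"derivatives", "integrals"}), student))   # 0.5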
2.3.7. Reliability of knowledge
What Ketelaar wrote about information (Ketelaar, 1997) applies even
more to knowledge, especially because information is the
source of knowledge:
Why do we demand more of the quality of food or a car than we demand
of that other essential resource — knowledge? Reliability and authen-
ticity determine the credibility and the usefulness of knowledge. These
concepts, developed in different cultures and at different times, are essen-
tial for our information society in its dependence on trust in knowledge
and information. When new knowledge is created and distributed, con-
ditions should be met to ensure the reliability and authenticity of this
knowledge.
Reliability of knowledge shows to what extent it is possible to rely
on some knowledge item. For instance, a person may remember the
telephone number of her friend, but when she dials this number, she
finds that the number is incorrect. This shows that her knowledge of
the telephone number is not very reliable. There are different reasons
for this unreliability — she may simply forget the right number, her
friend can change her number or she can dial a wrong number by
mistake.
Reliability of knowledge has three dimensions:
Reliability of knowledge content depends on several attributes such
as accuracy, veracity, credibility, correctness, and validity.
Reliability of knowledge source is obtained when the attributes of
content reliability are applied to the origin of knowledge, e.g., the
author or corporate source of knowledge in the same way as to its
content. This may be, for example, a rating of the previous content
reliability of knowledge from this source, or of the circumstances
under which a particular message originated.
Reliability of knowledge transmission or/and production is obtained
when the way (technique or process) of knowledge transmission
or/and production is estimated.
All these three dimensions are important for knowledge. First,
people often judge reliability of knowledge they receive from some
source by the reliability of this source. For instance, people are more
inclined to believe authoritative individuals.
Second, reliability of knowledge transmission may be crucial
because invalid transmission can convert true knowledge into false
knowledge. This sometimes happens in newspapers when truth is
intentionally or unintentionally distorted. For instance, all newspa-
pers in the Soviet Union provided false knowledge about Western
countries to their readers.
Third, humankind has strived to develop reliable methods of knowl-
edge production. One of the most (if not the most) reliable in this
field is science. Unfortunately, even in science, some researchers have faked
their results, demonstrating that it is possible to corrupt even reliable
knowledge production by an unreliable source.
2.3.8. Abstractness and generality of knowledge
Abstractness and generality as properties of knowledge stem from
two cognitive operations — abstraction and generalization.
In the abstractness/generality dimension, abstractness reflects the
level of abstraction of a knowledge item. It is possible to compare
knowledge items with respect to their abstractness. It is possible to
measure the level of abstraction of a knowledge item by the quantity
of features used in describing/representing the knowledge object or knowl-
edge domain. When fewer features are taken into account, the
level of abstraction increases.
Analyzing abstraction, it is possible to introduce levels of abstract-
ness or levels of abstraction for epistemological structures in general
and knowledge, in particular.
Definition 2.3.31. (a) Knowledge (an epistemological structure)
the domain of which consists of material objects has the first level of
abstractness.
(b) Knowledge (an epistemological structure) the domain of which
consists of knowledge (epistemological) structures of the level n of
abstractness has the (n + 1)th level of abstractness.
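Definition 2.3.31 can be read as a recursion over domains. Here is a minimal Python sketch of this reading (our own illustration; for a mixed domain it simply takes the highest level found among the knowledge items in the domain).

    from dataclasses import dataclass
    from typing import List, Union

    @dataclass
    class MaterialObject:
        name: str

    @dataclass
    class Knowledge:
        name: str
        domain: List[Union["Knowledge", MaterialObject]]

    def abstractness_level(k: Knowledge) -> int:
        # Level 1 if the domain consists of material objects; otherwise one more
        # than the highest level of abstractness found among items in the domain.
        levels = [
            abstractness_level(item) if isinstance(item, Knowledge) else 0
            for item in k.domain
        ]
        return 1 + max(levels, default=0)

    rock = MaterialObject("rock")
    geology = Knowledge("knowledge about rocks", [rock])
    methodology = Knowledge("knowledge about geological knowledge", [geology])
    print(abstractness_level(geology), abstractness_level(methodology))   # 1 2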
Note that the level of abstractness is not an absolute characteristic
of a knowledge item but reflects the angle of contemplation.
In mathematics, levels of abstraction are formalized, for example,
in type theory developed by Whitehead and Russell (1910–1913).
In this theory, a set may contain only sets that have types lower
than this set as its elements. Therefore, each type determines the
corresponding level of abstraction.
Another attempt to describe levels of abstraction was done by
Hayakawa (1949), who introduced the concept Abstraction Ladder
based on the approach from (Korzybski, 1933). In the Abstraction
Ladder, individual (proper) names of material things form the first
(verbal) level of abstraction, while common names, such as a dog
or a plane, form the second level of abstraction. Higher levels are
constructed by taking concepts with fewer and fewer defining properties.
As a characteristic related to abstractness, generality reflects the
degree of generalization achieved by a knowledge item. The degree of
generalization of a knowledge item reflects the scope of the described/
represented knowledge domain. The degree of generalization
increases when a broader domain is described/represented or more
aspects of the domain are mirrored.
These two options are revealed in two components of generality —
depth and breadth.
Depth of a knowledge item reflects how many and to what extent
aspects of an issue related to the knowledge item domain are taken
into account. This shows that depth is an aspect of generality.
Breadth of a knowledge item reflects the scope of the knowledge
item domain, i.e., whether the knowledge content is applicable to a
broad domain or to a highly specific one.
Note that a line of reasoning may be clear, accurate, precise, rele-
vant, and deep, but lack breadth as in an argument from either the
conservative or liberal standpoint, which gets deeply into an issue,
but only recognizes the insights of one side of the question (Elder
and Paul, 2009).
It is possible to compare knowledge items with respect to their
generality. The more abstract knowledge is usually more general.
Definition 2.3.32. A knowledge item (an epistemological structure)
K the domain of which contains the domain of a knowledge item (an
epistemological structure) H is more general than H.
Analyzing generality, it is useful to introduce degrees of generality
for epistemological structures in general and knowledge, in particular.
Let us consider two knowledge items (epistemological structures)
K and H.
Definition 2.3.33. (a) If the domain of the knowledge item (the
epistemological structure) K contains the domain of the knowledge
item (the epistemological structure) H and there is no knowledge
item (the epistemological structure) G such that the domain of K
contains the domain of G, which, in turn, contains the domain of H,
then K has the first degree of generality over H.
(b) If the knowledge item (the epistemological structure) K has
the first degree of generality over a knowledge item (the epistemo-
logical structure) G and G has the degree n of generality over the
knowledge item (the epistemological structure) H, then K has the
degree n + 1 of generality over H.
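Treating domains as sets, Definitions 2.3.32 and 2.3.33 can be sketched in Python as follows (our own illustration; the degree is computed here as the length of the longest chain of domain containments available within a given collection of knowledge items).

    from dataclasses import dataclass
    from typing import FrozenSet, List, Optional

    @dataclass(frozen=True)
    class KnowledgeItem:
        name: str
        domain: FrozenSet[str]      # the entities the knowledge item is about

    def more_general(k: KnowledgeItem, h: KnowledgeItem) -> bool:
        # Definition 2.3.32: K is more general than H if its domain contains H's domain.
        return k.domain > h.domain  # proper superset

    def generality_degree(k, h, universe: List[KnowledgeItem]) -> Optional[int]:
        # Longest chain of containment steps from h up to k within the collection;
        # None if k is not more general than h at all.
        if not more_general(k, h):
            return None
        between = [g for g in universe if more_general(k, g) and more_general(g, h)]
        if not between:
            return 1
        return 1 + max(generality_degree(k, g, universe) for g in between)

    swans = KnowledgeItem("knowledge about swans", frozenset({"swan"}))
    birds = KnowledgeItem("knowledge about birds", frozenset({"swan", "penguin"}))
    animals = KnowledgeItem("knowledge about animals", frozenset({"swan", "penguin", "dog"}))
    items = [swans, birds, animals]
    print(generality_degree(birds, swans, items))     # 1
    print(generality_degree(animals, swans, items))   # 2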
As we can see, the level of generality is not an absolute charac-
teristic of a knowledge item but reflects the angle of contemplation.
Mathematical formalization of knowledge generality and abstrac-
tion are elaborated in Section 4.3.1.
2.3.9. Completeness of knowledge versus precision of knowledge
Completeness of a knowledge item (system) K with respect to a
domain D characterizes to what extent all essential aspects of the
domain D are represented by K.
For instance, knowledge that most birds fly but there are birds,
e.g., penguins, that do not fly is more complete with respect to birds
than knowledge that all birds fly. Knowledge that there are white and
black swans is more complete with respect to birds than knowledge
that all swans are white.
There is a direction in epistemology that focuses on partial knowl-
edge. In most cases, it is impossible to have complete knowledge and
what we have is, as a rule, incomplete or partial knowledge. It is possible
to find complete knowledge about the state of affairs only in math-
ematical problems from a textbook. In real life, knowledge is only
more or less complete. Thus, people use partial knowledge to solve
real-life problems and epistemology studies partial knowledge. One of
the consequences of this situation is that only bounded rationality is
possible as complete knowledge and comprehensive information are
inaccessible when people make decisions in real life situations.
Precision of a knowledge item (system) K with respect to an
issue (aspect) A characterizes the difference (or ratio) between (of)
the issue (aspect) A and (to) its representation by K.
For instance, correct knowledge of time in minutes is more precise
than correct knowledge of time in hours. Knowledge that number π
is equal to 3.14 is more precise than knowledge that number π is
equal to 3.
There is an ongoing conflict between completeness of knowledge
versus precision of knowledge in information retrieval in databases
and in search engines on the Internet.
In information retrieval, precision (also called positive predictive
value) is the ratio of the number of retrieved relevant instances, e.g.,
documents, to the number of all retrieved instances (documents).
Precision ratios are often used in evaluation of the search engine
quality.
Completeness (also called recall) is the ratio of the number of
retrieved relevant instances, e.g., documents, to the number of all
relevant instances in the system. While it is possible to estimate this
characteristic in database information retrieval, it is unrealistic even
to try to calculate this value in the Internet search because search
engines are unable to index or retrieve all the potentially available
information. However, it is possible to make estimates of the precision
of a given search engine for some area of knowledge.
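A short Python sketch makes these two ratios concrete (the document identifiers are invented for the example):

    def precision_and_recall(retrieved: set, relevant: set) -> tuple:
        # Precision: share of the retrieved documents that are relevant.
        # Recall (completeness): share of all relevant documents that were retrieved.
        hits = retrieved & relevant
        precision = len(hits) / len(retrieved) if retrieved else 0.0
        recall = len(hits) / len(relevant) if relevant else 0.0
        return precision, recall

    retrieved = {"doc1", "doc2", "doc3", "doc4"}    # what the search engine returned
    relevant = {"doc2", "doc3", "doc7", "doc9"}     # all relevant documents in the system
    print(precision_and_recall(retrieved, relevant))   # (0.5, 0.5)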
2.3.10. Meaning of knowledge
Knowledge exists in different forms and shapes. The most popular
form is symbolic expressions, which are carriers of this knowledge.
However, as we discussed in Section 2.1, practical knowledge or
“know-how” often is embodied in customized dispositions, affec-
tive states such as emotions and sentiments, and phenomenologi-
cal acquaintances conferred, for example, by sensory experience or
artistic representation. Various images can be carriers of knowledge.
Therefore, speaking about the meaning of knowledge, it is necessary to
take into account knowledge carriers and representations.
We start with the meaning of symbolic expressions for three
reasons. First, it is the most popular representation of knowledge.
Second, the theory of meaning is mostly developed for expressions
in natural languages, logical languages, and programming languages.
Third, it is much easier to assign meaning to expressions than to other
forms of knowledge representations, e.g., for emotions or feelings.
When knowledge exists in the form of an expression, it is possible
to derive its meaning by finding meaning of this expression. To do
this, it is possible to use a special discipline called semantics, which
has been developed in semiotics, linguistics, computer science and
logic and studies meaning of expressions (in linguistics and logic)
and signs (in semiotics).
Often researchers do not discern semantics and theory of meaning.
However, a more exact approach separates semantics as an opera-
tional theory of meaning that assigns semantic contents to signs and
expressions of a language and the foundational theory of meaning,
which explores the reason in virtue of which signs and expressions
have the semantic contents that they have (Speaks, 2014).
Here we consider only the operational theory of meaning but in
a much broader sense, namely, as a theory the goal of which is to
assign meaning to arbitrary objects. We begin with semantics in the
classical sense.
Semanticists generally recognize two sorts of meaning that an
expression (such as the sentence “Knowledge is power”) or a sign
(such as the sign ∫ ) may have: extensional meaning and intentional
meaning.
Extensional meaning is the relation that the expression (the sign)
has to things and situations in the real world, as well as in possible
worlds.
Intentional meaning is the relation the expression (the sign) has
to other expressions (signs).
In semiotics, extensional meaning is presented by the relation
between a sign and its object, which is traditionally called denotat or
denotation. Intentional meaning presented by the relation between
a given sign and the signs that serve in practical interpretations of
the given sign is traditionally called sign connotation. However, many
theorists prefer to restrict the application of semantics to the denota-
tive aspect, using other terms or completely ignoring the connotative
aspect.
As a result, semiotic semantics related to symbols (signs), e.g.,
letters, words and texts, consists of two components. One of them,
relational semantics, expresses intentional meaning and represents
relations between symbols (signs). The other one, denotational
semantics, expresses extensional meaning and represents relations
between symbols and objects these symbols (signs) represent.
In knowledge theory, extensional meaning of a knowledge item is
presented by the relation between the knowledge item and its domain,
which we call denotation of the knowledge item. Intentional mean-
ing of a knowledge item is presented by the relations between
the knowledge item and other knowledge items, which we call con-
notation or content of the knowledge item. As a result, we get the
Epistemological Triad presented in Figure 2.4.
For instance, the domain of procedural knowledge consists of pro-
cedures and experiences in a field of work or behavior. At the same
time, the corresponding content of procedural knowledge contains
relations between descriptions of such procedures and experiences,
as well as the way they can be applied, e.g., protocols in the medical
sector, acceptation rules in the insurance branch, and methods of
portfolio analysis in the business world.
[Figure 2.4. The Epistemological Triad: a knowledge item related to its denotation/domain and to its connotation/content.]
The domain of descriptive knowledge consists of different objects,
their properties and relations between these objects. At the same
time, the corresponding content of descriptive knowledge contains
properties of and relations between knowledge items that describe
such properties and relations of objects.
The domain of representational knowledge consists of different
objects with their properties and relations, while the correspond-
ing content of representational knowledge contains relations between
knowledge items that describe such objects with their properties and
relations.
Practical experience of people in the knowledge domain shows
that there are gradations of meaning of knowledge items, e.g.,
concepts. In particular, Langacker (1991a) considers two levels of
meaning — profile and base. The profile of a knowledge item K is
the direct, e.g., literal, interpretation of K. For instance, a defini-
tion of a concept is its profile, while the base is the encyclopedic
knowledge that the concept presupposes. In a general case, it is pos-
sible to understand the meaning of a knowledge item K as a knowl-
edge system associated with K, e.g., a semantic network of a concept
is treated as the concept meaning. Then the larger the associated
knowledge system, the deeper the meaning it reveals.
Types of knowledge induce forms of meaning in semantics as a
whole. This gives us three basic forms of meaning:
∗ Descriptive meaning of knowledge reflects how this knowledge
describes its domain. For instance, the statement “Aristotle was a
philosopher” means that Aristotle had high intelligence.
∗ Operational meaning of knowledge reflects processes, actions, rules,
procedures, and algorithms related to this knowledge. For instance,
the statement “Aristotle was a philosopher” means that Aristotle
developed philosophical theories and ideas.
∗ Representational meaning of knowledge reflects what this knowl-
edge represents. For instance, the statement “Aristotle was a
philosopher” means that there was a philosopher with the name
Aristotle.
In addition, there are various kinds of semantics with their specific
conception of meaning.
Logical semantics is related to propositions and involves truth
values of these propositions.
There is much philosophical discussion about the nature of “truth-
bearers” — the kinds of things that can be true or false. Various
writers have suggested such things as propositions, statements, asser-
tions, utterances, sentence-types, sentence-tokens, beliefs, opinions,
theories, doctrines, facts, etc.
Linguistic semantics is related to words and texts and is expressed
by relations between them. There are different directions in linguistic
semantics. Let us consider some of them.
Formal semantics studies the logical aspects of meaning, such as
sense, reference, implication, and logical form.
Lexical semantics is a subfield of linguistic semantics and studies
word meanings and word relations. According to this methodology,
words either denote things in the world or concepts, depending on
the particular approach to lexical semantics.
Conceptual semantics studies the cognitive structure of meaning.
Cognitive semantics is part of the cognitive linguistics movement
and is based on the following assumptions. First, grammar is a con-
ceptualization of meaning. Second, conceptual structure is embodied
in and motivated by the usage of words in a language. Third, the
ability to use language draws upon general cognitive resources and
not on a special language module. However, there are psychologists
and neurophysiologists who claim that linguistic abilities are based
on very specific structures in the brain.
Structural semantics, as logical positivists maintain, is the study
of relationships between the meanings of terms within a sentence,
the meanings of sentences within a text, and how meaning of larger
systems can be composed from meanings of smaller systems.
Frame semantics developed by Charles J. Fillmore (1929–2014)
attempts to explain the meaning of words and texts in terms of their
relation to general understanding, asserting that it is impossible to understand the
meaning of a word or a text without access to all the essential
knowledge that relates to that word or text. This essential knowledge
is called the semantic frame of the corresponding word or text. In
other words, a semantic frame of a word or text can be defined as a
coherent structure of concepts that are related to this word or text
such that without knowledge of all of them, one does not have com-
plete knowledge of this word or text. Frames are based on recurring
experiences (Fillmore, 1976; 1982). For instance, the writing frame
is based on recurring experiences of writing.
Semantics for computer applications falls into three categories
(Nielson and Nielson, 1995):
— Operational semantics is the field where the meaning of a con-
struct is specified by the computation it induces when it is exe-
cuted on a machine. In particular, it is of interest how the effect
of a computation is produced.
— Denotational semantics is the field where meanings are modeled
by mathematical objects that represent the effect of executing the
constructs. Thus, only the result is of interest but not how it is
obtained.
— Axiomatic semantics is the field where specific properties of the
effect of executing the constructs represent meaning and are
expressed as assertions. Thus, there are always aspects of the
executions that are ignored.
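The contrast between the first two categories can be illustrated on a toy language of numerals and additions (a minimal Python sketch of our own, not taken from Nielson and Nielson): the denotational function maps a construct directly to the number it denotes, while the operational function specifies how a single computation step transforms the construct; the final assertion plays the role of an axiomatic-style statement about the effect of execution.

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Num:
        value: int

    @dataclass
    class Add:
        left: "Expr"
        right: "Expr"

    Expr = Union[Num, Add]

    def denote(e: Expr) -> int:
        # Denotational style: the construct is mapped to the mathematical object it denotes.
        if isinstance(e, Num):
            return e.value
        return denote(e.left) + denote(e.right)

    def step(e: Expr) -> Expr:
        # Operational style (small-step): how one computation step rewrites the construct.
        if isinstance(e, Add):
            if isinstance(e.left, Num) and isinstance(e.right, Num):
                return Num(e.left.value + e.right.value)
            if isinstance(e.left, Num):
                return Add(e.left, step(e.right))
            return Add(step(e.left), e.right)
        return e

    def run(e: Expr) -> int:
        while not isinstance(e, Num):
            e = step(e)
        return e.value

    program = Add(Num(1), Add(Num(2), Num(3)))
    assert denote(program) == run(program) == 6    # an axiomatic-style assertion about the result
    print(denote(program), run(program))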
Another dimension of meaning is studied by pragmatics, which is
the study of the ability of natural language users (e.g., speakers or
writers) to communicate not only the general meaning but also their
intentions, goals, and feelings, what the users mean and not the text.
To distinguish the semantic meaning from the pragmatic meaning of
a message (or sentence), communication researchers use the term the
informative intent, also called the sentence meaning, and the term the
communicative intent, also called the sender meaning or speaker mean-
ing when it is an oral communication (Sperber and Wilson, 1986).
In semiotics, pragmatics represents relations of signs to their
impacts on those who use them, e.g., relations of signs to interpreters.
This impact exists when a sign makes sense to the interpreter. Sense
of a message (information) is defined by Vygotskii (1956) as a system
of psychological facts emerging in the brain caused by the received
message (information).
The ability to understand another sender intended meaning is
called pragmatic competence. A statement about pragmatic functions
belongs to metapragmatics. While pragmatics deals with the ways
people reach their goals in communication, metapragmatics explains
how it is possible to reach the same goal with different syntax and
semantics. Suppose, a person wants to ask someone else to stop smok-
ing. This can be achieved by using several utterances. It is possible to
say directly, “Stop smoking, please!” This utterance has straightfor-
ward and clear semantic and pragmatic meaning. Alternatively, one
could say, “Oh, this room needs more air conditioners”, or “We need
more fresh air here.” In the given context, these utterances imply
a similar meaning but are indirect, requiring pragmatic inference to
derive the intended meaning.
A popular assumption in the philosophy of mind and cognitive sci-
ence is that the propositional attitudes of subjects are underwritten
by an internal language of thought and comprised of mental repre-
sentations. In other words, linguistic meaning is explained directly in
terms of the contents of mental representations. Here are two theories
of meaning based on these conjectures.
The picture theory of meaning is a theory of linguistic reference
and meaning verbalized by Ludwig Wittgenstein (1889–1951). In his
Notebooks 1914–16 Wittgenstein wrote that language is first and fore-
most a representational system, with which people make (logical or
mental) pictures of facts and these pictures are models of reality.
Elements of the picture are combined with one another in a definite
way by relations and connections.
According to Wittgenstein, a sentence is meaningful if and only if
it is a fact, which corresponds to a possible fact in the world. To
be a picture, a fact must have something in common with what it
pictures. Thus, a meaningful proposition pictures a state of affairs
or atomic fact in the world. In other words, the picture theory of
meaning asserts that statements are meaningful if they can be defined
or pictured in the real world. Wittgenstein compared the concept of
logical/mental pictures (German Bild) with spatial pictures.
A similar approach is the image theory of meaning, which is a
theory in which the meaning of linguistic expressions is derived from mental
images associated with these expressions. There are different kinds
and types of mental images. Conceptual networks and diagrammatic
schemas are mental images of structural knowledge. These images
increase efficiency of knowledge processing and utilization.
Specific mental images form the foundation of image theory, which
is a descriptive theory of decision-making based on the assump-
tion that decision makers represent meaning of knowledge as images
(Beach and Mitchell, 1987). One image consists of principles that
recommend pursuit of specific goals. A second image represents the
future state of events that would result from attainment of those
goals. A third image consists of the plans that are being implemented
in the attempt to attain the goals. A fourth image represents the
anticipated results of the plans. Decisions consist of (1) adopting or
rejecting potential candidates to be new principles, goals, or plans,
and (2) determining whether progress toward goals is being made,
i.e., whether the aspired-to future and the anticipated results of plan
implementation correspond. Decisions are made using either (1) the
compatibility between candidates and existing principles, goals and
plans, as well as the compatibility between the images of the aspired-
to and the anticipated states of events; or (2) the potential gains and
losses offered by a goal or plan.
The denotation meaning of a knowledge item is the domain
described or represented by this item.
The relation meaning of a knowledge item is the structure (net-
work of relations) with this item in the knowledge space.
The estimate/significance meaning of a knowledge item is the sig-
nificance of this item to the knower or to the knowledge observer.
The relation meaning of a knowledge item includes the implica-
tional meaning of this knowledge item, which consists of the knowl-
edge implied by this item.
The relation meaning of a knowledge item also includes the con-
textual meaning of this knowledge item, which consists of all contexts
in which this item appears.
Usually, the concept of a context is applied only to languages.
Namely, a context of a word w (a text t) is a text T that includes w
(respectively, t). Then the contextual meaning of a word w (a text t)
is the set of all contexts in which this word (text) appears.
However, it is possible to define context for knowledge.
Definition 2.3.34. A context of a knowledge item k (a knowledge
system K) is a knowledge system N that includes k (respectively, K).
For instance, if a knowledge item is a mathematical theorem th
from a textbook, then one context of th is the theory to which this
theorem belongs, e.g., if th states that the derivative of the sum of
two functions is the sum of the derivatives of these two functions,
then the context is the Calculus. Another context to this knowledge
item is the content of the textbook that contains th.
Many mentalist theories of meaning have in common that they
analyze one sort of representation — linguistic representation — in
terms of another sort of representation — mental representation. It
means a reduction of structural entities to mental entities.
Grice developed an analysis of meaning based on two assumptions
(Grice, 1989):
(1) facts about what expressions mean are to be explained, or ana-
lyzed, in terms of facts about what speakers mean by utterances
of them; and
(2) facts about what speakers mean by their utterances can be
explained in terms of their intentions.
These two theses allow one to reduce the meaning of expressions
(utterances) to the contents of the intentions of speakers.
Another approach to analysis of meaning is based on the concept
of belief, i.e., beliefs are taken into account rather than intentions of
speakers.
An interesting approach to meaning was elaborated by Osgood
et al. (1978), who introduced a measure of meaning and constructed a
technology for the measurement of meaning.
2.3.11. Other descriptive properties of knowledge
External characteristics of knowledge are molded by systems that are
related to knowledge.
Different authors explicated and discussed various external char-
acteristics of knowledge. Such characteristics (properties) as avail-
ability, accessibility, being personal, being implicit, being explicit,
being operational, being representational, being descriptive, and
some others are considered in Section 2.1. Let us depict additional
characteristics.
The location of a knowledge item describes the place where the
knowledge carriers of this item are situated. For instance, within
the company or organization, carriers of a knowledge item can be
situated in the front or the back-office, but also on the other side
of the world. There are different types of knowledge carriers: people,
documents on paper in the form of books, newspapers or reports,
documents in computer files, web sites, pictures, paintings, etc.
The form of a knowledge item describes the form of the represen-
tation of this knowledge item.
For instance, it is possible to represent one knowledge item by a
text and another knowledge item by speech.
The material form of a knowledge item describes the carrier of
this knowledge item.
For instance, the material form of one knowledge item is a book
and the material form of another knowledge item is a computer.
The content of a knowledge item describes what aspects of the
domain (object) of this knowledge item it reflects.
There are several temporal characteristics (properties) used in
databases and knowledge bases.
The generation time of a knowledge item describes the date when
the knowledge item was generated.
The acquisition time of a knowledge item describes the date when
the knowledge item was obtained.
Acquisition time is very important for temporal databases (Snod-
grass and Jensen, 1999; Burgin, 2008a).
The transformation time of a knowledge item describes the last
date when the knowledge item was transformed.
The accessibility time of a knowledge item describes what time it
takes to access this knowledge item.
The availability time of a knowledge item notifies at what periods
of time this knowledge item is available.
For instance, people can call 24 hours a day to get information
from one organization or can obtain information from another orga-
nization only during its working hours.
Such a property as novelty of knowledge has three gradations:
• New knowledge is knowledge that the knower (knowledge system)
did not have before.
• Old knowledge has two meanings — it is either knowledge that the
knower (knowledge system) obtained long ago or knowledge that
is not new.
• Contemporary knowledge is knowledge the knower (knowledge sys-
tem) has at the considered period.
All novelty gradations are relative. For instance, knowledge can
be new for one individual but not new for another one or for a group.
Knowledge about non-Euclidean geometries was new in the middle
of the 19th century but it is old at the beginning of the 21st century.
The contemporary knowledge at the end of the 20th century is very
different from the contemporary knowledge at the end of the 10th
century.
Let us consider some more knowledge properties.
A knowledge item (knowledge) is outdated if a more recent knowl-
edge item gives a better representation of the knowledge domain
(object).
Knowledge is safe when it cannot be distorted by some process.
Knowledge is shareable because it does not decrease when it is
given to others. This shows that knowledge differs from material
resources to a great extent.
Knowledge is private when only the person to whom it belongs
has access to this knowledge.
Confidentiality of a knowledge item means that access to it is
restricted only to authorized people or systems. Note that privacy
implies confidentiality.
Integrity of knowledge involves maintaining the consistency, accu-
racy, and trustworthiness of a knowledge item over its entire
life cycle. It means that integrity is a compositional property, which
includes such components as the consistency, accuracy, and trustwor-
thiness of a knowledge item.
Depth of knowledge characterizes to what degree of details knowl-
edge describes or represents its domain.
Knowledge scope is the property, the value of which is the knowl-
edge domain. Such property as knowledge generality considered
above reduces to the relation between the scope of different knowl-
edge items.
Knowledge can be interesting or not. For instance, Alfred North
Whitehead (1861–1947) wrote, “It is more important that a propo-
sition (i.e., propositional knowledge) be interesting than that it be
true.”
Thus, we see that an extremely active research in the knowledge
studies domain has allowed researchers to find many properties of
knowledge and knowledge processes and to use these properties in
artificial intelligence, education, and psychology.
2.4. Metaknowledge and metadata
A library may be very large; but if it is in disorder,
it is not so useful as one that is small but well arranged.
In the same way, a man may have a great mass of knowledge,
but if he has not worked it up by thinking it over for himself,
it has much less value than a far smaller amount
which he has thoroughly pondered.
Arthur Schopenhauer
The Greek word meta means “beside”, “after”, “later than” or “in
succession to”. Often people understand that something with the
name “metaX” occurs later on the timeline than X. However, a more
popular meaning in contemporary languages is “beside” or “after.”
For instance, carpus is the wrist, while metacarpus is the part of the
human hand between the wrist and the fingers or we may say, after
the wrist and before the fingers. In a similar way, metatarsus is the
part of the human foot after the tarsus and before the toes.
An important part of philosophy is metaphysics. It came to phi-
losophy through the common name for several of Aristotle’s works.
However, Aristotle himself did not call the subject of these works
by the name metaphysics but referred to it as “first philosophy”.
The name metaphysics comes from the editor of Aristotle’s works,
Andronicus of Rhodes. He placed the books on first philosophy right
after the work called Physics, and called them Metaphysics meaning
“the books that come after the [books on] physics”. Due to this, later
generations of philosophers called the subject metaphysics thinking
it meant “the science of what is beyond the physical”.
However, now metaphysics is one of the pivotal branches of phi-
losophy concerned with explaining the fundamental nature of being
and the world as the manifestation of being and clarifying the fun-
damental notions by which people understand the world. Due to
fewer restrictions in philosophical exploration in comparison with
research in physics, metaphysics often goes beyond physics in many
aspects.
In contemporary understanding, the word “metaX” means
above X. However, the word above means after in a hierarchy when
you go in the direction bottom-up. This exactly relates to metaknowl-
edge and metadata.
Metaknowledge or meta-knowledge is knowledge about knowledge.
For instance, it may be a cluster of definitions and methods aiming to
guide you in gathering the pertinent knowledge with regard to your
activity. Metaknowledge is often used to guide the functioning of a
system, including goal formation and future planning. Metaknowledge
is intrinsically connected to metadata.
Metadata (also called metacontent) are data that provide infor-
mation about one or more aspects of the data.
For instance, data with the standard file information such as file
size, type, location, and date of creation are metadata for data in
the file. If data are organized as a text, then their metadata usually
contain information about:
— the language of the text,
— the number of words in the text,
— the number of pages in the text,
— the number of symbols in the text,
— the number of lines in the text,
— who the author is,
— when the text was written,

and in some cases,

— a short summary of the text.

However, a summary of a text actually is metaknowledge as it
contains knowledge about the text. Another example of metaknowl-
edge is an annotation of a book.
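To make this distinction tangible, the following small Python sketch builds such a metadata record for a text; the function and field names are illustrative only and do not follow any particular metadata standard. Note that the counts describe the data, while the summary field already carries metaknowledge.

# A minimal sketch (hypothetical names) of how the descriptive metadata
# listed above might be computed for a text.
from datetime import date

def text_metadata(text: str, language: str, author: str,
                  written: date, summary: str = "") -> dict:
    """Return a small metadata record for the given text."""
    return {
        "language": language,              # metadata about the data
        "words": len(text.split()),
        "lines": len(text.splitlines()),
        "symbols": len(text),
        "author": author,
        "written": written.isoformat(),
        "summary": summary,                # this field is metaknowledge
    }

print(text_metadata("Ten is larger than five.", "English",
                    "Unknown", date(2016, 1, 1)))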
An important form of metadata is represented by named sets
(Burgin, 1990; 1995; 2011). Indeed, it is demonstrated that all main
data structures and models are efficiently represented in a form of
named sets or chains of named sets (Burgin, 1992a). In addition,
named sets and their chains also give a unifying data model for
data structures used in programming languages, operating systems,
and computer hardware: vectors, lists, arrays, trees, strings, tables,
records, streams and the like. The same is true for such forms of data
on the Web as digital imagery (in the form of frames or pictures) and
audio data. In such a way, we come to a unified data meta-model
that is suitable for data models on all levels: from high-level or concep-
tual to representational or implementation to low-level or physical
models. In turn, named sets and their chains form efficient high-level
metadata (cf., (Siegel and Madnick, 1991; Tannenbaum, 2002)) for
different purposes, in particular, for XML documents and schemas
(Nocedal et al., 2011).
Such general representation allows us to introduce various
operations with named sets and their chains oriented to data prepara-
tion, processing, search, and acquisition. Some operations are similar
to those found in relational algebras (Codd, 1990). Other operations,
such as sequential composition of named sets and their chains, are
different from operations in relational databases. All these operations
provide means for working with data models that are different from
the relational ones. For instance, sequential composition of named
set chains represents extensions of hierarchies.
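The following is a deliberately simplified Python sketch of this idea: a named set is modeled as a triad of a support, a naming correspondence, and a set of names, and two named sets are composed sequentially into a chain. The class, method, and example names are illustrative and do not reproduce the formal constructions of the cited works.

# A simplified sketch of a named set as a triad (X, f, N): a support X,
# a set of names N, and a naming correspondence f from X to N.
class NamedSet:
    def __init__(self, support, naming, names):
        self.support = set(support)      # X
        self.naming = dict(naming)       # f : X -> N
        self.names = set(names)          # N

    def compose(self, other: "NamedSet") -> "NamedSet":
        """Sequential composition: usable when our names lie in the support
        of the other named set; the result maps X to the other's names."""
        naming = {x: other.naming[n] for x, n in self.naming.items()
                  if n in other.support}
        return NamedSet(naming.keys(), naming, naming.values())

# A two-link chain of named sets: files -> types -> handlers.
files = NamedSet({"a.txt", "b.jpg"}, {"a.txt": "text", "b.jpg": "image"},
                 {"text", "image"})
handlers = NamedSet({"text", "image"}, {"text": "editor", "image": "viewer"},
                    {"editor", "viewer"})
print(files.compose(handlers).naming)   # {'a.txt': 'editor', 'b.jpg': 'viewer'}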
Although metadata have been used from the beginning of the com-
puter era, the term metadata was introduced in (Bagley, 1968). Since
then it became popular in many areas such as information manage-
ment, information science, information technology, librarianship, and
databases. For instance, library catalogs are metadata in libraries,
which are created in the process of cataloging resources, such as
books, journals, newspapers, magazines, manuscripts, DVDs, web
pages, or digital images.
Here are more examples of information contained in metadata for
some data:

• Means of creation of the data;


• Purpose of data creation and utilization;
• Time and date of creation;
• Source, creator or author of data;
• Location where the data was created;
• Location where the data are stored;
• Standards used;
• Data model for representing data structure.

Metadata research emerged as a discipline crosscutting many
areas and domains. It has been directed at the provision of struc-
tural descriptions (often called annotations) to Web resources or
applications. Descriptions in the form of metadata function as a
basis for advanced services in many application areas, including
search and location, personalization, federation of repositories, and
automated delivery of information. For instance, the HTML for-
mat for defining web pages has means for inclusion of various
metadata, from basic annotations, dates and keywords to further
advanced metadata schemas (Nocedal et al., 2011). In addition, meta-
data are used in database servers, data virtualization servers, and
application servers. Metadata in these servers are used for describ-
ing business objects in various enterprise systems and applications.
Structural metadata commonality is also important to support data
virtualization.
In information systems, metadata are often attached to data in the
form of labels and tags. For instance, a digital tag is a simple system
of keywords or terms assigned to a compound data (knowledge) item
such as a computer file, digital image, or Internet bookmark. The
main goal of tagging is identification of a data (knowledge) item
in a system for finding this item by browsing or searching. Usually
creators or users of data (knowledge) items informally choose tags
for these items.
Tagging and labeling on the Internet have achieved wide popularity
due to the growth of social networks, blogging, photography sharing
and bookmarking sites. These sites allow users to create and manage
labels or tags in the form of keywords.
Often labels and tags provide semantic information about data
(knowledge) items to which they are attached. For instance, triple
tags have three parts — a namespace, a predicate, and a value — for
the purpose of their meaningful interpretation by computer programs.
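The machine-readable structure of a triple tag can be shown by a tiny Python sketch; the namespace:predicate=value form is a common convention, while the function name and the example tag are illustrative only.

# A minimal sketch of parsing a triple tag of the form namespace:predicate=value.
def parse_triple_tag(tag: str) -> dict:
    namespace, rest = tag.split(":", 1)
    predicate, value = rest.split("=", 1)
    return {"namespace": namespace, "predicate": predicate, "value": value}

print(parse_triple_tag("geo:lat=51.5074"))
# {'namespace': 'geo', 'predicate': 'lat', 'value': '51.5074'}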
Metadata have become even more important for the Semantic Web
with its technological framework for ontology-based metadata. The
central idea of the Semantic Web is to extend the existing Web by
adding semantic means to the web management resources allowing
better search, processing, integration, and presentation of the Web
information in a meaningful, intelligent manner. Different new means
have been developed for the Semantic Web. An example is the dis-
tributed intelligent managed element (DIME) network architecture
described in (Burgin and Mikkilineni, 2014).
Although it is possible to write books about metadata (cf., for
example, (Siegel and Madnick, 1991; Tannenbaum, 2002)), our main
concern here is metaknowledge. Thus, the first step in this direction is
understanding that the main feature of metadata is that they contain
knowledge about data. Some of this knowledge is related only to the
corresponding data, while other parts describe knowledge contained in the
corresponding data. For instance, taking data in the form of a text,
we see that metadata that inform us about the size of the text, e.g.,
the number of pages, give knowledge about data, while metadata in
the form of annotation give knowledge about knowledge in the text
and this knowledge is metaknowledge.
Let us consider other kinds of metaknowledge that have existed
from the times long before the first computers were created.
A kind of operational metaknowledge is represented by metarules.
There are many metarules in logic, i.e., rules about how to use other
rules. Metarules instruct how to manipulate expressions or formulas
that are well formed or explain how to use deduction rules or how to
perform deductions. For instance, “the pair A → B, A implies B” is
a metarule because it represents a variety of deduction rules such as:
“If it is raining, the trees are wet” and “It is raining” imply
“The trees are wet”.
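This metarule can be illustrated by a minimal Python sketch in which an implication is applied to a concrete statement; the representation of implications as pairs is an illustrative choice, not a standard formalization.

# A sketch of the metarule "from A -> B and A, infer B" applied to
# concrete statements.
def apply_modus_ponens(implication: tuple, fact: str):
    """implication is a pair (antecedent, consequent)."""
    antecedent, consequent = implication
    return consequent if fact == antecedent else None

rule = ("It is raining", "The trees are wet")     # A -> B
print(apply_modus_ponens(rule, "It is raining"))  # -> "The trees are wet"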
Models of systems and processes contain representational knowl-
edge about these systems and processes. In the same way as models
contain knowledge about their domains, metamodels contain knowl-
edge about models. It means that metamodels contain metaknowl-
edge. Mathematics gives an abundance of examples of metamodels.
Let us consider some of these examples.
Differential equations in a general form, e.g.,

∂^m u_i(t, x)/∂t^m = ∑_{|α|+k ≤ m, k < m} P_α(a^α_{ji}(t, x), u(t, x), D^α_x D^k_t u_i(t, x))
                     + Q(b_{li}(t, x), u_i(t, x)),                      (2.6)

are representations of particular differential equations used in
physics, chemistry, biology, and economics. Thus, differential equa-
tions in a general form are metamodels containing representational
metaknowledge.
For instance, Equation (2.6) is a metamodel for Equation (2.7),
which is a discrete velocity model of the Boltzmann equation in the
kinetic theory of gases.
∂u/∂t + Λ ∂u/∂x = Q(u).                      (2.7)
Here u = (u_1, . . . , u_n), Λ is a constant diagonal matrix, and Q is a
quadratic form.
In turn, Equation (2.7) is a metamodel for Equations (2.8)
and (2.9), which constitute the Carleman system, as well as for
Equations (2.10)–(2.12), which constitute the Broadwell model.
∂u/∂t + ∂u/∂x = v² − u²,                     (2.8)
∂v/∂t − ∂v/∂x = u² − v²,                     (2.9)
and
∂u/∂t + ∂u/∂x = w² − uv,                     (2.10)
∂v/∂t − µ ∂v/∂x = w² − uv,                   (2.11)
∂w/∂t = −2(w² − uv).                         (2.12)
Thus, Equation (2.6) becomes a meta-metamodel, i.e., a model
for a metamodel.
There are whole disciplines, which deal with metaknowledge. For
instance, metamathematics is a part of mathematics that studies
mathematics as its domain. Consequently, knowledge in metamath-
ematics is mathematical metaknowledge.
There are three basic types of mathematical objects:
(1) Mathematical structures.
(2) Properties of and relations between mathematical structures.
(3) Algorithms and processes that are applied to mathematical
structures, their properties and relations.
Thus, metamathematics studies how mathematics studies and
uses mathematical structures, their properties and relations, as well
as algorithms and processes in mathematics. Note that properties,
relations, algorithms and processes are also structures, which can be
represented by mathematical structures, and these structures contain
metaknowledge. For instance, in some cases, it is possible to repre-
sent algorithms by abstract finite automata, while in other cases,
algorithms can be represented only by Turing machines or inductive
Turing machines (Burgin, 2005; Hopcroft et al., 2007). This implies
that abstract finite automata, Turing machines and inductive Turing
machines contain metaknowledge about various algorithms.
Philosophy and methodology of mathematics contain metaknowl-
edge about mathematical knowledge. For instance, methodology of
mathematics studies the essence and source of mathematical knowl-
edge. For millennia, it was believed that mathematics is an abstract
discipline and all knowledge in mathematics is obtained by reason-
ing. However, in the 20th century, it was discovered in methodology of
mathematics that mathematics is also an experimental science and
a lot of it comes from mental and computer experiments (Burgin,
1997).
Philosophy and methodology of science contain metaknowledge
about scientific knowledge. The main problems in these disciplines
are:

— relation between knowledge and its domain, e.g., nature for nat-
ural sciences or society for social sciences;
— development of scientific knowledge;
— structure of scientific knowledge.

There is metaknowledge in computer science in general and in the
theory of automata, algorithms and computation, in particular. As
an example, let us consider second-level algorithms, which represent
operational knowledge about operational knowledge, i.e., they con-
tain metaknowledge (Burgin and Debnath, 2010; Burgin and Gupta,
2012).
Algorithms are built of simple data-transformation and other
functioning rules, usually in the form of instructions (cf., for example,
(Rogers, 1987; Sipser, 1997)). More exactly, algorithms are structures
in which simple transformation rules are their construction elements
(Burgin, 2005). For instance, Turing machines have the following
functioning rules:

qh ai → aj qk ,
qh ai → qk R,
qh ai → qk L.

Here, qh and qk are states of the Turing machine, ai and aj are
symbols from the working alphabet, R indicates a move of the Turing
machine head to the right cell of the tape, and L indicates a move of
the Turing machine head to the left cell of the tape.
Each rule directs a separate step of computation of the corre-
sponding Turing machine. These rules are data transformation rules.
In addition, any algorithm has execution rules, also called metarules.
These rules instruct how to apply data transformation rules to data,
i.e., to words in the case of Turing machines.
For instance, the transition function (or relation) δA of a deter-
ministic finite automaton A is the system of data transformation
rules. However, to apply these rules and to organize a computational
process, it is necessary to have metarules for data transformation
rule application. In addition, it is necessary to have metarules that
specify how input is given to the automaton and how its output is
obtained (Burgin, 2005). Usually such metarules are described but
not formalized (cf., for example, (Hopcroft et al., 2007)). Here we
describe metarules for deterministic finite automata in an explicit
form.
For an accepting deterministic finite automaton A, we have the
following metarules:

1. The automaton A starts functioning in its start state, usually
denoted by q0 .
2. If the automaton A is in the state q and the input symbol is a,
then the transition function δA is applied and the automaton A
comes to the state δA (q, a).
3. Inputs are given in the form of finite words in some finite alphabet.
4. When a word w = a1 a2 . . . an is given as input to A, the automaton
A starts processing this word with the first symbol a1, going from
ai to ai+1 (i = 1, 2, . . . , n − 1) and finishing with an.

Metarules of an accepting deterministic finite automaton tell us
that:

1. The automaton A starts functioning in the start state q0.
2. Words are given as input and they are consumed by the automaton
letter by letter, starting from the beginning of the word.
3. Consuming one letter, the automaton performs a transition from
one state to another specified by the transition function.
4. If, after consuming the whole word, the automaton stops in a final
state, then the input word is accepted; when the automaton stops
in a non-final state (either because there is no rule to continue or
after consuming the whole input), the input word is rejected.
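These metarules can be illustrated by a short Python sketch of an accepting deterministic finite automaton, in which the transition table plays the role of the data transformation rules and the run loop embodies the metarules listed above; all names in the sketch are illustrative.

# A sketch of an accepting deterministic finite automaton: delta holds the
# data-transformation rules, while run_dfa() embodies metarules 1-4.
def run_dfa(delta: dict, start: str, finals: set, word: str) -> bool:
    state = start                        # metarule 1: start in the start state
    for symbol in word:                  # metarule 2: consume letter by letter
        if (state, symbol) not in delta:
            return False                 # metarule 4: no rule to continue, reject
        state = delta[(state, symbol)]   # metarule 3: apply the transition function
    return state in finals               # metarule 4: accept iff a final state

# Automaton accepting binary words with an even number of 1s.
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}
print(run_dfa(delta, "even", {"even"}, "1011"))  # False
print(run_dfa(delta, "even", {"even"}, "1001"))  # True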
Note that data-transformation rules are individual for each algo-
rithm in general and each finite automaton in particular. At the same
time, metarules are given for classes of algorithms in general and the
class of accepting deterministic finite automaton, in particular.
Construction rules are used to form algorithmic representation for
data transformation rules. For instance, finite automata data trans-
formation rules may be in the form of a table, list of triples or a
graph called the transition graph of the automaton. For instance,
construction rules of Turing machines tell that all instructions are
written in a list. Construction rules of state-transition machines
tell that the descriptions of machine transitions form either a list
or a table (for a transition function) or a graph (for a transition
graph).
In essence, construction rules build algorithm representations
from simple instructions as, for example, it is done in operational pro-
gramming languages or Turing machines, or from simple operations.
In a similar way, second-level algorithms are built of algorithms
from another class as their construction elements. Analysis of con-
ventional (first-level) algorithms allows us to give a formal definition
of second-level algorithms.
Let us take a class K of algorithms, a set of construction rules C,
and a set E of execution rules (metarules).
Definition 2.4.1. A second-level algorithm A over the class K is
a structure for organizing and controlling processes, which is repre-
sented by the following triad:
(B^A, C^A, E^A),
where B^A is a subset of K, C^A is a subset of C, and E^A is a subset
of E.
Informally, the algorithm A is built (constructed) from the algorithms
from B^A by the construction rules from C^A, while the performance
of A is guided by the execution rules (metarules) from E^A.
As a description of an algorithm has to be finite (cf., for example,
(Burgin, 2005; Knuth, 1973)), all sets B^A, C^A, and E^A are finite.
There are many examples of algorithms of the second order.
For instance, the whole concept and realization of cloud computing
(Miller, 2008) is based on the second-level algorithms.

Definition 2.4.2. Algorithms from B^A are called blocks of the
second-level algorithm A.
For instance, cellular automata are built of finite automata as
blocks (Codd, 1968), while Internet machines are built of Turing
machines as blocks (Van Leeuwen and Wiedermann, 2000). Arbi-
trary abstract automata are blocks of grid automata, which give the
most general representation form for algorithms of the second level
(Burgin, 2005, Ch. 4).

The set E of execution rules (metarules) consists of three parts,
namely,
E = E^I ∪ E^W ∪ E^O,
where E^I are input rules, which determine how input is given to
second-level algorithms; E^W are working rules, which show how algo-
rithms from K are applied to data when they are blocks of second-
level algorithms; and E^O are output rules, which determine how
output of second-level algorithms is obtained.
This stratification of metarules implies a corresponding stratifi-
cation of metarules for each second-level algorithm A:
E^A = E^{AI} ∪ E^{AW} ∪ E^{AO}.
Here E^{AI} are input rules, which determine how input is given to the
second-level algorithm A; E^{AW} are working rules for A, which show
how blocks from A are applied; and E^{AO} are output rules, which
determine how the output of A is obtained.
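The triad of Definition 2.4.1 can be illustrated by the following schematic Python sketch, in which blocks are ordinary functions, the construction rule assembles them into a sequential pipeline, and the execution rules feed the input through the pipeline; the concrete choices made in the sketch are illustrative and are not part of the definition.

# A schematic sketch of a second-level algorithm as a triad
# (B^A, C^A, E^A) of blocks, construction rules, and execution metarules.
class SecondLevelAlgorithm:
    def __init__(self, blocks, construct, execute):
        self.blocks = blocks          # B^A: first-level algorithms (functions)
        self.construct = construct    # C^A: how the blocks are assembled
        self.execute = execute        # E^A: input/working/output metarules

    def run(self, data):
        pipeline = self.construct(self.blocks)
        return self.execute(pipeline, data)

# B^A: two simple first-level algorithms.
blocks = [lambda x: x + 1, lambda x: x * 2]
# C^A: assemble the blocks into a sequential pipeline (here, their given order).
construct = lambda bs: list(bs)
# E^A: give the input to the first block, pass intermediate results along
# (working rules), and return the last result as output.
def execute(pipeline, data):
    for block in pipeline:
        data = block(data)
    return data

A = SecondLevelAlgorithm(blocks, construct, execute)
print(A.run(3))   # (3 + 1) * 2 = 8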
We have similar types of rules for classes of second-level
algorithms.
Taking a class H of second-level algorithms, we have several
systems of rules related to this class:

— The set B_H = ⋃_{A∈H} B^A consists of blocks of algorithms from H.
— The set C_H = ⋃_{A∈H} C^A consists of construction rules for algorithms from H.
— The set E_H = ⋃_{A∈H} E^A consists of execution rules for algorithms from H.

By definition, B_H ⊆ K, C_H ⊆ C, and E_H ⊆ E.
In turn, the set E_H of execution rules (metarules) consists of three
parts, i.e.,
E_H = E_H^I ∪ E_H^W ∪ E_H^O.
Here E_H^I are input rules, which determine how input is given to
algorithms from H; E_H^W are working rules, which show how blocks
of algorithms from H are applied to data; and E_H^O are output rules,
which determine how output is obtained by algorithms from H.
Modifying functioning rules, it is possible to obtain more powerful
algorithms even without changing the structure of the system that
performs these algorithms. For instance, a slight change of output
rules transforms Turing machines into the much more powerful inductive
Turing machines of the first order (Burgin, 2005).
An important type of metaknowledge in computers and computer
networks comprises knowledge tags, which, for example, describe
or define knowledge items (information resources) such as docu-
ments, digital images, relational tables, or web pages. Knowledge
tags add additional value, context, and meaning to the corresponding
knowledge items. Usually knowledge tags contain descriptive meta-
knowledge helping users to capture insights, expertise, attributes,
dependencies, or relationships associated with information resources.
Knowledge contained in knowledge tags has many different forms.
For instance, it may be factual knowledge found in books and
data depositories, conceptual knowledge shaped as ideas or con-
cepts, evaluative knowledge needed to make judgments and form
hypotheses, and methodological knowledge integrated by reasoning
and experimentation.
One more type of operational metaknowledge is represented by
metaheuristics. A metaheuristic is a higher-level procedure or heuris-
tic designed to find, generate, or select a lower-level procedure or
heuristic (partial search algorithm) that may provide a sufficiently
good solution to an optimization problem, especially with incomplete
or imperfect information or limited computation capacity. Thus, in
essence, metaheuristics embody strategies that guide the search or
construction process for appropriate heuristics.
For instance, metaheuristics sample a subset of solutions from a set which is
too large to be completely explored. Metaheuristics may make few
assumptions about the optimization problem being solved, and so
they may be usable for a variety of problems.
Compared to optimization algorithms and iterative methods,
metaheuristics do not guarantee that a globally optimal solution
can be found on some class of problems. Many metaheuristics
implement some form of stochastic optimization, so that the solu-
tion found is dependent on the set of random variables generated.
By searching over a large set of feasible solutions, metaheuristics can
often find good solutions with less computational effort than algo-
rithms, iterative methods, or simple heuristics. As such, they are
useful approaches for optimization problems. Most metaheuristics
are experimental in nature.
One approach to classifying metaheuristics is to characterize the type of search
strategy. One type of search strategy is an improvement on simple
local search algorithms. Metaheuristics of this type include simulated
annealing, tabu search, iterated local search, and variable neighbor-
hood search (Blum and Roli, 2003). Another type of search strat-
egy utilizes learning algorithms in the search. Metaheuristics of this
type include ant colony optimization, evolutionary computation, and
genetic algorithms (Blum and Roli, 2003).
In addition to metaheuristics for the serial algorithm develop-
ment, there are hybrid and parallel metaheuristics (Talbi, 2009).
Hybrid metaheuristics combine metaheuristics with other optimiza-
tion approaches, such as algorithms from mathematical program-
ming, constraint programming, or machine learning. All compo-
nents of hybrid metaheuristics may run concurrently, exchanging
information to guide the search. Parallel metaheuristics utilize mul-
tiple metaheuristic searches in the parallel mode.
Metaheuristics for combinatorial optimization, such as the trav-
eling salesman problem, use discrete search-spaces. Popular meta-
heuristics for combinatorial problems include simulated annealing
(Kirkpatrick et al., 1983), evolutionary automata and genetic algo-
rithms (Fogel et al., 1966; Burgin and Eberbach, 2009; 2013), scatter
search and tabu search (Glover and Kochenberger, 2003).
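As an illustration of one of the metaheuristics named above, the following compact Python sketch applies simulated annealing to a toy combinatorial problem; the cooling schedule and the parameters are arbitrary illustrative choices, not recommendations.

# A compact sketch of simulated annealing applied to a toy combinatorial
# problem: minimizing the number of 1s in a bit vector.
import math
import random

def simulated_annealing(cost, neighbor, state, t=1.0, cooling=0.95, steps=500):
    best = state
    for _ in range(steps):
        candidate = neighbor(state)
        delta = cost(candidate) - cost(state)
        # Accept improvements always, and worse moves with a temperature-
        # dependent probability -- the strategy guiding the local search.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            state = candidate
        if cost(state) < cost(best):
            best = state
        t *= cooling
    return best

cost = lambda bits: sum(bits)
def neighbor(bits):
    i = random.randrange(len(bits))
    return bits[:i] + (1 - bits[i],) + bits[i + 1:]

start = tuple(random.randint(0, 1) for _ in range(20))
print(simulated_annealing(cost, neighbor, start))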
There are other disciplines, the primary goal of which is obtaining
metaknowledge.
Epistemology is the theory of knowledge, which primarily stud-
ies the nature and structure of knowledge and how knowledge is or
should be acquired, validated, preserved, revised, updated, and
retrieved. It means that epistemology contains metaknowledge.
Meta-epistemology is concerned with investigation of the subject
matter, methods and aims of epistemology, as well as of approaches
to understanding and structuring of knowledge about epistemological
knowledge. Thus, meta-epistemology contains metaknowledge of the
second degree.
Meta-ethics is the branch of ethics that explores the nature of
moral values, statements, attitudes, and judgments (Garner and
Rosen, 1967). For instance, meta-ethics addresses questions such as
“What is decency?”, “How to support moral judgments?” or “How
can we tell what is good from what is bad?”, trying to understand
the nature of ethical properties and evaluations.
Metalogic is the study of the metatheory of logic (Hunter, 1971).
Whereas logic studies how logical systems can be used to
construct valid and sound arguments, metalogic studies
the attributes of logical systems. The basic objects of metalogic
are logical languages and formal systems, as well as properties,
transformations and interpretations of these systems and languages.
In particular, metalogic explores inference processes in logic (Burgin,
2007a).
Metaphilosophy, also called philosophy of philosophy, is a study
of the nature, functions, and structure of philosophy including the
goals of philosophical investigations, the boundaries of philosophy,
and its methods and approaches. In the analytical tradition, the term
metaphilosophy is mostly used to name commentaries and research on
previous works as opposed to original contributions towards solving
philosophical problems. The growing research in the field of metaphi-
losophy in general and its directions, in particular, led to the estab-
lishment of the special journal Metaphilosophy in 1970.
Finally, here is one more example. The knowledge presented in
this book is metaknowledge because it is knowledge about knowledge.
In studies of metaknowledge, it is useful to introduce orders of
metaknowledge.
Knowledge, the domain of which is not knowledge, e.g., a physical
system, has the zero order as metaknowledge.
Knowledge the domain of which is metaknowledge of the zero
order is metaknowledge of the first order.
Knowledge the domain of which is metaknowledge of the first
order is metaknowledge of the second order.
Knowledge the domain of which is metaknowledge of the order n
is metaknowledge of the order n + 1.
We see that orders of metaknowledge depend on knowledge inter-
pretation. Because knowledge and especially metaknowledge have
different interpretations, it is possible that the same knowledge item
has different orders as metaknowledge. For instance, it is possible to
interpret the differential equation (2.6) as model knowledge about
some physical system. In this case, it has the zero order as meta-
knowledge. When Equation (2.6) is treated as a metamodel for the
differential equation (2.7), which is model knowledge about gases,
Equation (2.6) has the first order as metaknowledge. When Equation
(2.7) is treated as a metamodel for Equations (2.8) and (2.9), which
constitute the Carleman system or for Equations (2.10)–(2.12), which
constitute the Broadwell model, Equation (2.6) has the second order
as metaknowledge. For instance, languages of metamathematics are
also metalanguages of the second and higher orders because many
mathematical languages are metalanguages of the first and higher
orders.
As classical logic contains metaknowledge about declarative
knowledge in the form of propositions and predicates, metalogic
contains metaknowledge of the first and higher orders. In a similar
way, as philosophy contains metaknowledge of the first order, e.g.,
in epistemology, metaphilosophy in general and metamethodology,
in particular, contains metaknowledge of the second and higher
orders.
Note that in general, the order of metaknowledge is not uniquely
defined — the same metaknowledge item can have, for example, first
and second order as the considered examples of differential equa-
tions demonstrate. However, it is possible to define the strict order
of a metaknowledge item as the least order that this metaknowledge
item has.
In addition, it is possible to introduce relative orders of meta-
knowledge.
A knowledge item K the domain of which is a knowledge item H
is metaknowledge of the first order over H.
A knowledge item L the domain of which is metaknowledge of
the first order over the knowledge item H is metaknowledge of the
second order over H.
A knowledge item L the domain of which is metaknowledge of the
order n over the knowledge item H is metaknowledge of the order n+1
over H.
Definitions imply the following result.

Proposition 2.3.1. If a knowledge item L is metaknowledge of the
order n over a knowledge item K and the knowledge item K is meta-
knowledge of the order m over a knowledge item H, then the knowl-
edge item L is metaknowledge of the order n + m over a knowledge
item H.
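The notion of relative order, and the additivity stated in the proposition, can be illustrated by a small Python sketch in which every knowledge item points to its domain and the order is counted along this chain; the representation is illustrative only.

# A sketch of relative orders of metaknowledge: each knowledge item points
# to its domain, and the order of L over H counts the "is about" steps.
class KnowledgeItem:
    def __init__(self, name, domain=None):
        self.name = name
        self.domain = domain   # another KnowledgeItem, or None for a non-knowledge domain

def order_over(L, H):
    """Return n such that L is metaknowledge of order n over H, else None."""
    n, current = 0, L
    while current is not None:
        if current is H:
            return n
        current, n = current.domain, n + 1
    return None

H = KnowledgeItem("model of a physical system")
K = KnowledgeItem("metamodel", domain=H)
L = KnowledgeItem("meta-metamodel", domain=K)
# order(L over K) + order(K over H) == order(L over H)
print(order_over(L, K), order_over(K, H), order_over(L, H))   # 1 1 2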

Languages used to represent metaknowledge are called metalan-
guages. According to Sowa (2010a), metalanguage consists of signs
that signify something about other signs, but what they signify
depends on what relationships those signs have to each other, to
the entities they represent, and to the agents who use those signs to
communicate with other agents.
Researchers elaborated different metalanguages. As we have seen,
mathematics provides the most popular and powerful metalanguage.
Another well-known metalanguage is the language of logic. However,
there are other metalanguages. For instance, in (Burgin, 1973; 1976),
a programming metalanguage is constructed to represent operational
metaknowledge. A metalanguage for abstract automata is described
in (Trahtenbrot and Barzdin, 1970).
The order of a metalanguage L is the highest order of the meta-
knowledge that L can represent.
However, in metalogic, formal languages are usually called object
languages, even when they are logical languages. At the same time,
the language used to represent knowledge about an object language
is called a metalanguage. This feature is a key difference between
logic and metalogic. While logic is concerned with formal systems
and proofs in these systems, which are expressed in some formal
language, metalogic deals with formal systems and proofs in these
systems, which are expressed in a metalanguage about some object
language in general and a logical language, in particular. To distin-
guish metalanguages in general from metalanguages in metalogic, we
call the latter by the name metalinguistic languages.
It is possible to introduce levels of metalinguistic languages.
A language used to represent knowledge about the domain, which
is not a language, e.g., a physical system, is an object language and
belongs to the zero level as a metalinguistic language.
A language used to represent knowledge about the domain, which
is a language, is a metalinguistic language of the first level.
A language used to represent knowledge about the domain, which
is a metalinguistic language of the first level, is a metalinguistic lan-
guage of the second level.
A language used to represent knowledge about the domain, which
is a metalinguistic language of the level n, is a metalinguistic lan-
guage of the level n + 1.
Thus, in our terminology, metalanguages used in metalogic are
metalinguistic languages of the second and higher levels.
To conclude, we show how metaknowledge induces an important
classification of the knowledge of a knower. This classification consists
of four classes:
1. Conscious knowledge is knowledge K such that the knower has
and knows that she/he/it has K, i.e., the knower has assertoric
metaknowledge about K.
2. Unconscious knowledge is knowledge K such that the knower has
but does not know that she/he/it has K, i.e., the knower does not
have metaknowledge about K.
3. Explicitly absent (unknown) knowledge is knowledge K such that
the knower knows that she/he/it does not have K, i.e., the knower
has erotetic metaknowledge about K.
4. Concealed knowledge is knowledge K such that the knower does
not know that she/he/it does not have K.

Chapter 3

Knowledge Evaluation
and Validation in the Context
of Epistemic Structures

It is not the quantity but the quality of knowledge
which determines the mind’s dignity.
William Ellery Channing

Traditionally, separation of knowledge from beliefs is treated as one of the
foremost problems in epistemology. This separation is performed by
truth-evaluation of beliefs — those beliefs that were considered true
and for which this evaluation was justified were called knowledge.
Although contemporary theories of knowledge have elaborated more
sophisticated criteria, all of them include some kind of evaluation
(Pollock and Cruz, 1999).
Here we consider evaluation in a much broader context. First,
instead of belief evaluation, we consider evaluation of epistemic struc-
tures, which include beliefs, models, ideas, concepts, and all kinds of
knowledge. Second, not only truth evaluation is studied but evalua-
tion of other properties of knowledge such as consistency, exactness
or fuzziness. Third, evaluation is treated in the context of diverse
scales in which evaluated properties take values. This is different
from truth evaluation, which traditionally takes into account only
two values — True and False.

3.1. Knowledge in the context of epistemic structures and knowledge scales

The only good is knowledge and the only evil is ignorance.
Socrates

To understand why epistemic structures and epistemic spaces form
a natural context for knowledge studies, it is necessary to start with
the origin and definition of the concept epistemic structure.
There are different conceptual systems that reflect the place of
knowledge. One of these systems places knowledge in the context of
data and information. This conceptual field reflects informational
aspect of knowledge, treating knowledge and data as carriers of
information, and is analyzed in Chapter 8. Here we are concerned
with the epistemological aspect of knowledge, which reflects simi-
larities and distinctions between knowledge and beliefs. This is the
traditional problem in the philosophy of cognition — epistemology
(Chisholm, 1989; Pollock and Cruz, 1999) and methodology of science
(Burgin and Kuznetsov, 1993; 1994). It originated with Plato and
Aristotle.
Both pivotal cognitive concepts — beliefs and knowledge — are
basic epistemic structures. Other examples of epistemic structures
employed by cognitive processes are concepts, notions, statements,
ideas, images, opinions, texts, values, measures, problems, models,
schemas, procedures, tasks, goals, algorithms, etc. Epistemic struc-
tures form epistemic spaces.
The necessity to consider epistemic structures comes from infor-
mation theory. According to the main principles of the general theory
of information (Burgin, 2010), information causes changes in info-
logical systems and information is modeled by epistemic information
operators that change mathematical representation of infological sys-
tems. As our main concern here is knowledge, we restrict ourselves
with the cognitive information, which acts on cognitive infological
systems, which contain knowledge as the basic structure and are
modeled by epistemic information spaces. They comprise symbolic
representations of epistemic structures, which are indispensable units
of cognition. Examples of epistemic structures employed by cognitive
processes are concepts, notions, statements, ideas, images, opinions,
texts, beliefs, knowledge, values, measures, problems, schemas, pro-
cedures, tasks, goals, etc. That is why we treat knowledge in the con-
text of epistemic structures, which form pure and weighted epistemic
spaces (Burgin, 2014). Weighted epistemic spaces represent, not only
epistemic structures, but also their characteristics. The mathemati-
cal structure used for representing weighted epistemic spaces in the
formal context is called a generalized vector bundle (Burgin, 2014; Le
Potier, 1997). Informally, it consists of epistemic elements connected
by relations (the base of the vector bundle) and a vector space
attached to each of these elements. Note that an epistemic space is
a set of epistemic structures or their representations with relations
and operations. Consequently, this epistemic structure encompasses
many other mathematical structures, such as lattices, groups, or par-
tially ordered sets.
Epistemic spaces provide means for representing information in a
formal way as epistemic information operators, which are transforma-
tions and mappings of epistemic spaces. A special case of epistemic
spaces — knowledge spaces — and a special case of epistemic infor-
mation operators — knowledge information operators — are studied
in (Mizzaro, 2001; Burgin, 2010; 2011a). Here, we consider knowl-
edge spaces and information operators in a more general context
of epistemic structures, epistemic spaces, and epistemic information
operators. Note that there are information operators that are not
epistemic. Examples of such operators are emotional and instruc-
tional information operators (Burgin, 2010).
We start with epistemic structures defining them, at first, in an
informal way.
Definition 3.1.1. An epistemic structure is a structure that repre-
sents or reflects, i.e., contains, information about some domain.
In essence, an epistemic structure is a basic structure of cognition.
It is possible to find different definitions of structure in general
and a unifying approach to this concept in (Burgin, 2012). A formal
definition of structure is given in Section 5.1.2.
Note that although according to the conventional understand-
ing, a domain usually comprises several objects, one object can also
be treated as the domain of an epistemic structure. In essence, a
domain is any part of reality, while reality as a whole includes
all three types — physical reality, mental reality and structural
reality (Burgin, 2012). Actually, it is possible to consider any
object, e.g., a system or a process, as a domain of an epistemic
structure.
Definition 3.1.1 is essentially informal. A formalized definition of
epistemic structures is recursive as it is constructed using recursion.
At first, we select a class of basic epistemic structures as elements
for the base of recursion. We select structures that are undeniably
containers of information about some domain as basic epistemic ele-
ments and call them epistemic structures of the first level. It is pos-
sible to take knowledge items as such structures. Then epistemic
structures of the second level are selected from cognitive objects, e.g.,
beliefs or images, which are used to obtain epistemic structures of
the first level, e.g., knowledge. We continue this process utilizing
the general step of recursion: epistemic structures of the level n are
selected from objects used to obtain epistemic structures of the level
n − 1. As a result, this recursive definition determines a system of
epistemic structures, which plays the role of a cognitive infological
system.
Note that the constructed system depends on three parameters:
(1) selection of basic objects (elements), (2) procedures (algorithms)
used for obtaining, e.g., building, epistemic structures on each level,
and (3) selection rules used in the level construction.
The essence and external structure of epistemic structures are
represented by the diagrams in Figures 3.1 and 3.2.
Both pivotal cognitive and behavioral concepts — beliefs and
knowledge — are basic epistemic structures. At the same time,

Figure 3.1. A reflection epistemic triad (direct epistemic unit U): the epistemic structure ES and the domain D connected by the relations representation and reflection.


Figure 3.2. A substantiation epistemic triad (inverse epistemic unit V): the epistemic structure ES and the domain D connected by the relations interpretation and substantiation.

there are many other epistemic structures, such as concepts, notions,
statements, questions, problems, ideas, beliefs, images, algorithms,
tasks, procedures, problems, values, measures, opinions, and goals.
Epistemic triads (cf., Figures 3.1 and 3.2) describe and represent
extended epistemic units, each of which reflects a definite domain or
an aspect of definite domain.

Definition 3.1.2. (a) A reflection epistemic triad (cf., Figure 3.1)
describes a direct extended epistemic unit, while a substantiation epis-
temic triad (cf., Figure 3.2) describes an inverse extended epistemic
unit.
(b) The epistemic structure ES of an epistemic triad U (or V ) is
called the cognitive part of the epistemic unit U (unit V ) or simply,
an epistemic unit.
(c) The domain D of the epistemic triad U (or V ) is called the
substantial part of the extended epistemic unit U (unit V ).
Usually epistemic structures are represented by systems of sym-
bols, which are called symbolic epistemic structures. Symbolic knowl-
edge units are examples of symbolic epistemic units. For instance, the
sentence “Ten is larger than five” is a symbolic knowledge unit, as
well as a symbolic epistemic unit. The logical expressions, such as
ϕ ∧ ψ, ϕ ∨ ψ and ϕ → ψ, are also symbolic knowledge units, as well
as symbolic epistemic units.
In general, epistemic structures are components of extended epis-
temic units.
In what follows, we consider only symbolic epistemic units and
symbolic knowledge units. That is why for simplicity, symbolic
epistemic units are called epistemic units and symbolic knowledge
units are called knowledge units. In this, we follow the longstand-
ing tradition where knowledge means only symbolic representation
of knowledge.
It is natural to divide the domain of all epistemic structures into
two classes (types) — static and dynamic epistemic structures.

Definition 3.1.3. Static epistemic structures describe domains that
are not changing.

Definition 3.1.4. Dynamic epistemic structures either describe
domains that are changing or have changes as their domain.
For instance, if a system is described as a collection of elements
and relations between these elements, this description is a static epis-
temic structure. We obtain a dynamic epistemic structure when the
processes going on in the system are also included in the descrip-
tion. An algorithm or a procedure is another example of a dynamic
epistemic structure, the domain of which is a process or a system of
processes.
We also divide dynamic epistemic structures into two classes
(types) — functional epistemic structures and process epistemic
structures.

Definition 3.1.5. Functional epistemic structures represent changes
as transitions from the initial state to the final state.
For instance, operations, relations and goals are functional epis-
temic structures.

Definition 3.1.6. Process epistemic structures represent changes as
processes.
For instance, algorithms, flowcharts, scenarios, inferences, and
stories are process epistemic structures. In these examples, algo-
rithms are compressed epistemic structures; inferences and stories
are expanded epistemic structures, while flowcharts and scenarios
can be either compressed epistemic structures or expanded epistemic
structures.
Epistemic item is a more general concept than epistemic unit
because an epistemic item can be a part of an epistemic unit or
consists of several epistemic units. In particular, knowledge item is a
more general concept than knowledge unit because a knowledge item
can be a part of a knowledge unit or consists of several knowledge
units. For instance, taking a knowledge base, it is possible to con-
sider all knowledge from this knowledge base as one knowledge item
or as an organized system of knowledge items (units). However, as
a rule, this knowledge consists of many knowledge items (units). All
knowledge from a textbook is a knowledge item.
Epistemic structures form the base for the traditional interpreta-
tion of information as the essence that gives or changes knowledge.
According to the general theory of information, symbolic epistemic
items (structures) constitute cognitive infological systems (Burgin,
2010).
This understanding is formalized in the general theory of infor-
mation based on the basic ontological principle O2c (Burgin, 2010).
Ontological Principle O2c (the Cognitive Transformation
Principle). Cognitive information for a system R is a (potential)
capacity to cause changes in the cognitive infological system CIF(R)
of the system R.
In this context, a CIF(R) contains, acquires, stores, and pro-
cesses epistemic structures, such as knowledge, data, ideas, beliefs,
images, algorithms, tasks, procedures, problems, schemas, scenar-
ios, values, measures, opinions, goals, ideals, fantasies, abstractions,
etc. Cognitive infological systems are very important, especially, for
intelligent systems. Indeed, the majority of researchers believe that
information in general is intrinsically connected to cognition, while
cognitive information is one of the three basic types of anthropic
information studied in (Burgin, 2011a). Moreover, some researchers
believe that people’s knowledge about physical reality is the result
of information they obtain from external sources (von Weizsäcker,
1958; von Weizsäcker et al., 1958; Wheeler, 1990; Frieden, 1998).
Understanding that physicists study physical systems not directly,
but only through information they get from these systems has cre-
ated a school of thought about the role of information processing in
physical processes and its influence on physical theories. According
to one of the leading physicists of the 20th century John Archibald
Wheeler (1911–2008), it means that every physical quantity derives
its ultimate significance from information. He called this idea “It
from Bit” where “It” stands for things, while “Bit” impersonates
information as the most popular information unit (Wheeler, 1990).
For Wheeler and his followers, space–time itself must be understood
and described in terms of a more fundamental pregeometry with-
out dimensions and classical causality. These features of the physical
world only appear as emergent properties in the ideal modeling, the
physical reality based on information about complex interactions of
very simple basic elements, such as subatomic particles.
As the cognitive infological system contains system knowledge,
cognitive information is the main source of knowledge changes. When
a system (it may be a person, a group of people, a community, soci-
ety as whole, or an intelligent agent) receives cognitive information,
this system may convert it to knowledge or miss this information.
For instance, a teacher is not giving knowledge to his students but
only provides cognitive information and students themselves have to
convert this information into knowledge (Burgin, 2001).
It is important to discern epistemic structures and their repre-
sentations, as well as knowledge and its representations. Epistemic
structures are usually represented by symbolic systems in general,
and symbols in particular. As a rule, one epistemic structure (knowl-
edge unit) has several representations. For instance, knowledge that
a person A is seven feet tall can be represented by:

— the statement/sentence/proposition “A is seven feet tall”;


— the statement/sentence/proposition “the height of A is seven
feet”;
— the equality H(A) = 7 ft where H(X) is the property height of a
person;
— the truth of the predicate H(A, 7) where H(X, h) is the predicate
with X being a person and h being a number of feet;
— the element (A; 7) of a relational database.
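These alternative representations can be illustrated by a small Python sketch, in which the same knowledge unit appears as a sentence, an equality, a predicate, and an element of a tiny relational table; all names in the sketch are illustrative.

# A sketch of several symbolic representations of the same knowledge unit
# "A is seven feet tall".
person, height_ft = "A", 7

sentence = f"{person} is {height_ft} feet tall"          # natural-language statement
equality = ("H", person, height_ft)                      # H(A) = 7 ft as a triple
predicate = lambda x, h: x == person and h == height_ft  # truth of H(A, 7)
relational_row = {"person": person, "height_ft": height_ft}  # element (A; 7)

print(sentence, equality, predicate("A", 7), relational_row)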

At the same time, the same symbolic system can represent dif-
ferent epistemic structures. For instance, a statement can represent
knowledge, a belief or an idea.
A symbolic representation of an epistemic structure (cognitive
epistemic unit) is called a symbolic epistemic structure (symbolic
epistemic unit). Thus, it is possible to discern structural cognitive
infological systems, which consist of epistemic structures, and sym-
bolic infological systems, which consist of symbolic representations
of epistemic structures.
Note that not all epistemic structure (knowledge) representations
are symbolic. For instance, algorithms are units of operational knowl-
edge. In computers, many algorithms are embodied in computer
hardware, usually, in chips. Such hardware representation of oper-
ational knowledge is not symbolic.
In a similar way, domains are usually represented by their models.
This brings us to the extended direct epistemic unit characterized
by Diagram (3.1) and extended inverse epistemic unit described by
Diagram (3.2).
Diagram (3.1): the extended direct epistemic unit, in which the epistemic structure, the domain, the representation, and the model are connected by the relations reflecting, representing, modeling, and reflecting.

Diagram (3.2): the extended inverse epistemic unit, in which the epistemic structure, the domain, the representation, and the model are connected by the relations referring, interpreting, substantiating, and attributing.

Very often researchers do not make distinctions between epistemic
structures and their representations. For instance, many assume that
the statement “The Earth rotates around the Sun” is knowledge.
However, it is only some representation of knowledge about the Earth
and the Sun. It is possible to represent the same knowledge by the
statement “The Earth moves about the Sun” or by the statement
“The Earth travels about the Sun”. This shows that one epistemic
structure can have and usually has several symbolic, e.g., linguistic,
representations.
Cognitive systems, e.g., intelligent agents or cognitive actors, store
and employ a variety of epistemic structures in different forms and
shapes. The memory of a cognitive system contains epistemic struc-
tures, which are organized in the system called an epistemic space.
Informally, an epistemic space is a set of epistemic structures or
their representations with relations and operations. Epistemic struc-
tures (cognitive epistemic units) form abstract epistemic spaces, while
their representations (symbolic epistemic units) create symbolic epis-
temic spaces. However, in what follows, we consider only symbolic
epistemic spaces.
In modeling epistemic systems in general and knowledge systems
in particular, and studying information processes in these systems,
we consider two basic mathematical structures — sets and multisets
(cf., Appendix). Sets of epistemic structures enhanced by relations
between these structures become epistemic spaces, while multisets of
epistemic structures enhanced by relations between these structures
become epistemic multispaces. In essence, using the classical math-
ematical setting, it is possible to consider only sets, which make
the model simpler serving as the first approximation to real epis-
temic/knowledge systems and information processes. However, many
real cognitive systems contain several copies of the same element. For
instance, the same element of knowledge can be stored in different
parts of the computer memory or of the brain. This makes utiliza-
tion of multisets necessary. We remind that a multiset is a collection
that is like a set but can include identical or indistinguishable ele-
ments. For instance, M = {a, a, b, b, b} is a multiset that contains
two elements a and three elements b. It is usually assumed that in a
multiset, elements are indistinguishable if and only if they have the
same type.
If a is an element from a multiset M , then the number of copies
of an element a is called the multiplicity of a in M and is denoted by
mM (a). In the considered example, mM (a) = 2 and mM (b) = 3.
A multiset M is a multisubset of a multiset N if mM (a) ≤ mN (a)
for all elements a from M .
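For readers who prefer a computational illustration, the following Python sketch expresses these multiset notions with the standard Counter type; the multiplicity of an element and the multisubset test correspond directly to the definitions above.

# A sketch of the multiset notions above using collections.Counter:
# multiplicities m_M(a) and the multisubset test m_M(a) <= m_N(a).
from collections import Counter

M = Counter({"a": 2, "b": 3})          # the multiset M = {a, a, b, b, b}
N = Counter({"a": 2, "b": 5, "c": 1})

print(M["a"], M["b"])                  # multiplicities: 2 3

def is_multisubset(M: Counter, N: Counter) -> bool:
    return all(M[x] <= N[x] for x in M)

print(is_multisubset(M, N))            # True
print(is_multisubset(N, M))            # False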
However, since for building epistemic spaces we use not only
sets but structures, i.e., sets with relations, it is possible to include
multisets in this schema because multisets can be treated as sets with
the indistinguishability relation. So, sets are also structures and we use
sets as the base for structures in the process of forming epistemic
spaces from epistemic structures.
To give an exact definition of an abstract epistemic space, we
consider a set Wes of epistemic structures, while for an exact def-
inition of a symbolic epistemic space, we consider a set Wses of
symbolic epistemic units, e.g., of symbolic knowledge units or knowl-
edge items, taking it as the state base of an epistemic space. For
instance, it is possible to regard a set WK of symbolic parts of
elementary knowledge units as a set Wes . This allows us to obtain
an efficient formalization of the concept of a knowledge space. The
set WL of propositions and/or predicates in a logical language L
gives an example of Wses . Propositions and predicates are sym-
bolic knowledge units in the logical approach to information the-
ory developed in works of Bar-Hillel and Carnap (1958), Hintikka
(1968; 1970) and some other authors. Shreider (1965) interpreted
symbolic knowledge units as texts in a thesaurus. Many researchers
employ mental schemas as cognitive/symbolic knowledge units in the
brain (cf., for example, (Anderson, 1977; Arbib, 1992; Armbruster,
1996)). One more possibility for Wses is the set, or more exactly, a
multiset, WSit of logical representations of situations possible in a
world U . Situations themselves form the substantial parts of knowl-
edge units, while their logical descriptions, e.g., in the form of propo-
sitions, are the cognitive/symbolic parts of knowledge units. These
logical descriptions are also called situations playing the role of
knowledge items or knowledge units (cf., for example, (Barwise and
Perry, 1983)).
Some readers may be confused by variability of epistemic struc-
tures. However, mathematics provides means to decrease non-
uniformity in the diversity of epistemic structures. Let us assume
that we study such diversity as a collection B of epistemic structures
by means of epistemic spaces. One way to reduce complexity is to
use homogeneous approximations and, at first, to study only uni-
form epistemic spaces, which model uniform collections of epistemic
structures. For instance, a knowledge space is a uniform collection of
knowledge items, while an algorithmic space is a uniform collection of
algorithms; both are uniform epistemic spaces.
On the second step, it is possible to introduce and study epis-
temic spaces with a low degree of uniformity. A formal definition
of degrees of uniformity based on measures of general system uni-
formity is given in (Burgin and Bratalskii, 1986; Burgin et al.,
1979). For instance, the degree of uniformity of a knowledge space
is lower than the degree of uniformity of a space of knowledge,
beliefs, and ideas because the latter space contains elements of differ-
ent types. At the same time, assertoric logical knowledge space has
higher degree of uniformity than the assertoric predicate knowledge
space because the former space contains not only predicates but also
propositions.
Another way to deal with extremely non-uniform systems is unifi-
cation. Here, we describe unification of collections of epistemic struc-
tures. To do this, we observe that it is possible to represent each
epistemic structure by a concept. For instance, taking a knowledge
item “Now it is 5 p.m.”, we express the content of this statement by
the equality relation between two temporal points (intervals) 5 p.m.
and “now”. This relation is a concept. The meaning of this concept
is expressed by (consists of) its relations to other concepts, such as
“time”, “equality”, “identity”, “point”, etc. In such a way, unification
converts any collection of epistemic structures into a uniform system,
which consists only of concepts. It is possible to model this system
by a semantic network or by a more advanced epistemic space.
In the theory of epistemic spaces, often it is possible not to distin-
guish sets Wes and Wses because they have many common properties
and only these properties are important in many theoretical con-
structions. When this is the case, we denote Wes or Wses by the same
letter W without causing confusion.
Some sets W can reflect (represent) more objects (large domains),
while others reflect (represent) fewer objects (small domains).

Definition 3.1.7. We call the set W universal for a collection CIF,


e.g., for systems of knowledge, when the following axiom is true.
UCIF 1 (the Internal Cognitive Representation Axiom).
For any infological system R from CIF, any state of R is a subset of
the set W .
For instance, we can take a group G of intelligent agents and
define the collection CIF as the set of their knowledge systems play-
ing the role of cognitive infological systems from CIF. Then the
Internal Representation Axiom states that any possible state KAi of
the knowledge system KA of an agent A from G is a subset of the
set W . In this case, it is possible to interpret W as the base of all
knowledge that agents are able to have about their environment.
Another aspect of universality of the set W is expressed by the
possibility to describe all possible (existing) worlds or/and situa-
tions utilizing epistemic elements (symbolic epistemic elements), e.g.,
knowledge (symbolic knowledge units), only from W . For instance,
when W is the set WL of propositions and/or predicates from some
logical language L, then universality implies that it is possible to
build all descriptions of all possible worlds by combining elements
from WL . This possibility is reflected in the following concept.
Let us consider a domain, D. This domain may be a part of the
real world, the set of all (possible) situations in a part of the real
world, the set of all possible (existing) worlds in the sense of logical
semantics or the set of all possible (existing) states of the environ-
ment in some area.

Definition 3.1.8. We call the set W universal for the domain D


when the following axiom is true.
UCEF 2 (the External Cognitive Representation Axiom).
For any environment (situation, world, or state) R of the domain
D, there is a subset WR of the set W that contains all epistemic
structures that reflect R.
In particular, it means that if Wes consists only of knowledge
structures, then for any environment (situation, world, or state) R
of the domain D, there is a subset WR of the set Wes that contains
all (accessible or representable) knowledge about R.
Taking axioms UCIF 1 and UCEF 2 as the foundation, we develop
a theory of cognitive systems (cognitive agents) called the theory
of E-spaces. At first, we define free epistemic spaces, utilizing the
set Wes for building abstract epistemic spaces and the set Wses for
building symbolic epistemic spaces.
Usually an epistemic space V is a dynamic system, which is per-
manently changing its content. This content taken at some moment
of time is called the state of the epistemic space V .

Definition 3.1.9. (a) An abstract (symbolic) epistemic space, also


called an abstract (symbolic) E-space, V with the base Wes (Wses ) is
a subset of the set Wes (Wses ).
(b) Some subsets of V are called states of the epistemic space V.
Usually there are definite rules that define states of an epistemic
space. For instance, if epistemic structures are knowledge units that
can be stored in a knowledge-base B, then all these knowledge units
form a symbolic epistemic space V and a subset U of V is its state if
it is possible to store all knowledge units from U in the knowledge-
base B.
(c) An abstract (symbolic) epistemic space V with a collection
StV ⊆ 2V of its states is called an abstract (symbolic) epistemic
system.
For instance, we can take the set Id of all ideas, about which
Professor Angstrem can think, as an epistemic space. However, at
any moment of time, he can think only about one or two ideas. Thus,
modeling his thinking, only one-element and two-element subsets of
Id will be appropriate as states of this epistemic space. These states
will represent the projection of Professor Angstrem’s mentality on
the space of ideas.
Note that any state of an epistemic space is itself an epistemic
space.
It is natural to consider epistemic spaces that have elements of
the same type. For instance, free epistemic spaces that consist of
symbolic knowledge units are called knowledge spaces or more exactly,
unstructured knowledge spaces. They are studied in (Mizzaro, 2001;
Burgin, 2010).

Definition 3.1.10. (a) A type of structures is a system of conditions


(axioms) that all these structures, i.e., sets with relations, satisfy.
(b) An epistemic space V is called uniform if all its elements have
the same type.
(c) An epistemic system (V , StV ) in which all states are uniform
is called uniform.
For instance, the epistemic space of all logical propositions is uni-
form. Algorithms are knowledge items explaining how to solve a prob-
lem or how a system is functioning. Finite automata form a uniform
class of algorithms, which is a uniform epistemic space. However,
a class of algorithms that consists of finite automata and Turing
machines is an epistemic space that is not uniform.
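To illustrate the uniformity condition, here is a minimal Python sketch (not taken from the text) that checks whether all elements of a toy epistemic space have the same type of structure; the class EpistemicElement and the structure_type tag are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class EpistemicElement:
    content: str           # e.g., a proposition or an algorithm description
    structure_type: str    # e.g., "finite automaton", "Turing machine"

def is_uniform(space) -> bool:
    """A space is uniform if all its elements share one structure type."""
    return len({e.structure_type for e in space}) <= 1

automata = {EpistemicElement("parser", "finite automaton"),
            EpistemicElement("lexer", "finite automaton")}
mixed = automata | {EpistemicElement("universal machine", "Turing machine")}

print(is_uniform(automata))  # True: a uniform epistemic space
print(is_uniform(mixed))     # False: not uniform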
In general, epistemic spaces (multispaces) have different relations
between their elements and states, as well as operations on them. Thus, let us
assume that W is not simply a set or a multiset but is a structure,
i.e., a set with relations and operations (Burgin, 2012).

Definition 3.1.11. (a) A structured epistemic space, also called a


structured E-space, V with the base W is a substructure of the
structure W .
(b) A substructure of V is called a state of the structured epistemic
space (multispace) V.
For instance, stratified M-spaces (Burgin, 2011a) give an example
of structured epistemic spaces.
Let us consider other examples of epistemic spaces. Usually these
epistemic spaces are structured.

Example 3.1.1. A propositional epistemic space EPS, in which


propositions are epistemic elements, is a knowledge space because
propositions constitute one of the basic forms of knowledge represen-
tation. There are many relations in this space, i.e., it is a structured
epistemic space. One of the main relations is implication denoted by
the symbol →, where p → q means that whenever the proposition p
is true, the proposition q is also true. Other important relations in
the propositional epistemic space EPS:
The deducibility relation means “a proposition p is deduced from
a proposition q”, for example, the proposition q is deducible from the
proposition q & p. This is the most popular logical relation between
propositions from a propositional language. The deducibility relation
is usually denoted by the symbol ⊢, e.g., A ⊢ ϕ.
The generality relation means that one proposition is more general
than another one, for example, the proposition “stars give light”
is more general than the proposition “stars from our galaxy give
light”.
Another important logical relation between propositions or state-
ments is “a proposition (statement) p directly implies a propo-
sition (statement) q”, i.e., there is an inference rule, e.g., modus
ponens or modus tollens, such that p is the argument and q is the
conclusion.
One more important logical relation between propositions or
statements is “a proposition (statement) p entails a proposition
(statement) q (in a theory T )”, meaning that in every model (of
T ) where p is true, q is also true. The entailment relation is usually
denoted by the symbol |=, e.g., p |= q.
There are also operations in the propositional epistemic space
EPS, for example, classical logical operations — conjunction, dis-
junction, and negation.
It is also possible to induce a topology in the propositional epistemic
space EPS.
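The following minimal Python sketch (illustrative only) models a tiny structured propositional epistemic space: propositions are elements, the implication relation is a set of directed pairs, and conjunction is one of the operations; the helper names are assumptions, not the book's notation.

propositions = {"p", "q", "p & q"}
implies = {("p & q", "p"), ("p & q", "q")}   # p & q -> p and p & q -> q

def conjunction(a: str, b: str) -> str:
    """An operation on the space: form the conjunction of two propositions."""
    return f"({a}) & ({b})"

def consequences(p: str) -> set:
    """All propositions reachable from p along the implication relation."""
    found, frontier = set(), {p}
    while frontier:
        current = frontier.pop()
        new = {b for (a, b) in implies if a == current and b not in found}
        found |= new
        frontier |= new
    return found

print(consequences("p & q"))   # {'p', 'q'}
print(conjunction("p", "q"))   # '(p) & (q)'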

Example 3.1.2. We can build a conceptual epistemic space based


on formal concept analysis (Ganter and Wille, 1999). A conceptual
epistemic space consists of formal contexts, concepts, concept intents,
and concept extents, which are defined as rigorous mathematical
structures.
A formal context C is a triad (named set) C = (G, I, M ), where

• G is a set of objects,
• M is a set of attributes,
• I is a relation between G and M , in which (g, m) ∈ I
means that the object g has the attribute m, i.e., I is the connection
between objects and attributes.

Thus, we can see that formal contexts are named sets (cf.,
Appendix A) and it is possible to apply to them different named
set operations constructed in (Burgin, 2011). There are also vari-
ous relations between formal contexts that come from the named set
theory. For instance, a formal context C = (G, I, M ) is a subcontext
of a formal context D = (H, J, L) if G ⊆ H, M ⊆ L and I is the
restriction of J on G and M , i.e., I = J|(G,M ) .
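As a rough illustration, the sketch below encodes a formal context C = (G, I, M) and the subcontext relation just described in Python; the class FormalContext and its intent/extent helpers are assumed names, and the derivation operators follow standard formal concept analysis rather than any construction specific to this chapter.

from dataclasses import dataclass

@dataclass
class FormalContext:
    G: set          # objects
    M: set          # attributes
    I: set          # incidence relation: pairs (g, m) meaning "g has attribute m"

    def intent(self, objects: set) -> set:
        """Attributes shared by all given objects (a concept intent)."""
        return {m for m in self.M if all((g, m) in self.I for g in objects)}

    def extent(self, attributes: set) -> set:
        """Objects possessing all given attributes (a concept extent)."""
        return {g for g in self.G if all((g, m) in self.I for m in attributes)}

def is_subcontext(C: FormalContext, D: FormalContext) -> bool:
    """C = (G, I, M) is a subcontext of D = (H, J, L) if G ⊆ H, M ⊆ L and
    I is the restriction of J to G and M."""
    restriction = {(g, m) for (g, m) in D.I if g in C.G and m in C.M}
    return C.G <= D.G and C.M <= D.M and C.I == restriction

D = FormalContext(G={"dog", "cat", "sparrow"},
                  M={"mammal", "can fly"},
                  I={("dog", "mammal"), ("cat", "mammal"), ("sparrow", "can fly")})
C = FormalContext(G={"dog", "cat"}, M={"mammal"},
                  I={("dog", "mammal"), ("cat", "mammal")})
print(is_subcontext(C, D))          # True
print(D.intent({"dog", "cat"}))     # {'mammal'}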
It is also possible to build an epistemic space from informal con-
cepts. Taking concepts used in some society as symbolic epistemic
elements, we obtain an epistemic space with different relations, i.e.,
a structured epistemic space. For instance, it is natural to use the
relation “to be a subconcept”, e.g., the concept dog is a subconcept
of the concept animal. Another relation between informal concepts
is the foundational relation, which shows when one concept is used
in a definition of another concept.
Semantic networks are examples of structured epistemic spaces
built from informal concepts. A semantic network, or semantic net,
is a knowledge representation formalism (graphic notation) that
describes objects and their relationships in the form of a network
consisting of labeled (named) nodes and (usually directed) links in
the form of arcs or arrows. The nodes represent objects or concepts
by their names, while the links represent relations between nodes also
by their names.
Note that in a general case, epistemic spaces are graphs. So, con-
ceptual epistemic spaces of Ganter and Wille (1999) are special cases
of general epistemic spaces.

Example 3.1.3. The epistemic scenarios together constitute an


epistemic space in the sense of Chalmers (2010). The most natural
way to interpret scenarios, at least initially, is as possible
worlds. More exactly, an epistemic scenario describes possible (in
some sense) ways a world might be. Defining this, Chalmers uses the
notion of possibility that is different from the notion of possibility
usually associated with possible worlds. Here, possibility is a sort
of epistemic possibility, whereas possible worlds are usually under-
stood to be associated with a sort of “metaphysical” possibility. If
an individual or a group did not know anything, all scenarios would
be epistemically possible for the individual (group). When a subject
(a group) knows something, some scenarios might be excluded from
the epistemic space of this individual (group).
Another way to describe epistemic scenarios is to identify them
with equivalence classes of thoughts or with maximal classes of
thoughts with equivalence defined as mutual implication. In this
context, it is possible to assume that thoughts are composed of
concepts.
Having defined a space of scenarios for each subject at a time,
it is possible to form a common space of scenarios for all subjects.
To do this, Chalmers uses the principle of translation between the
maximal thoughts (or the complexes) of one subject, and those of
another. This principle is based on the basic notion of translatabil-
ity represented by the translation relation, which is an equivalence
relation on the set of thoughts. As a result, we obtain a structured
epistemic space (cf., Definition 3.1.11).
Jago (2009) changed Chalmers’ definition of epistemic space by
including in an epistemic space only those epistemic scenarios that
are not obviously impossible. Naturally, epistemic spaces in the sense
of Jago are structured epistemic spaces defined above.

Example 3.1.4. Bjerring (2014) builds an epistemic space as a space


of worlds that allows modeling what is epistemically possible for non-
ideally rational agents.

Example 3.1.5. A knowledge information space M is an important


kind of epistemic information spaces. Knowledge information spaces
are studied in (Burgin, 2011a).

Example 3.1.6. It is possible to represent a logical variety or a pre-


variety M (Burgin, 2004) as a structured epistemic space V . In it,
the components of M are treated as states of the space V , while the
mappings fi : Ai → L and gi : Ti → L (i ∈ I), which form con-
nections between components of the variety (prevariety), constitute
relations between elements in this space. Thus, M is a structured
epistemic space.

Example 3.1.7. Taking words from a natural language, such as


English, Spanish, or German, as symbolic epistemic elements, it
is natural to treat two words as linked when they express similar
concepts. In such a way, Motter et al. (2002) built the topology of the
conceptual network of a language. This network gives one more exam-
ple of structured epistemic spaces.

Example 3.1.8. Doignon and Falmagne (1999) introduced the con-


cept of knowledge space as a combinatorial structure with a prece-
dence relation describing the possible states of knowledge of a human
learner. Let us describe their constructions.
A formal knowledge structure is a pair (Q, K) where Q is a non-
empty set and K is a set of its subsets, which contains the empty set
and Q.
The system (Q, K) is called a formal knowledge structure because
in the approach of (Doignon and Falmagne, 1985; 1999), elements of
the set Q are interpreted as knowledge items. There are three educa-
tional interpretations of formal knowledge structures and their ele-
ments (Doignon and Falmagne, 1985; 1999; Albert and Lukas, 1999;
Schrepp, 1999): conceptual, erotetic, and operational.
In the conceptual interpretation, elements of the set Q are
regarded as concepts, which belong to some studied discipline, such as
arithmetic or geometry, while sets from K are called feasible because
they are sets of concepts from Q that students can learn.
In the erotetic interpretation, elements of the set Q are viewed as
problems, which belong to some studied discipline, while sets from
K are called feasible because they are sets of problems from Q that
a student or a group (class) of students can solve.
In the operational interpretation, an element of the set Q is treated
as an arrangement of techniques and methods (skills) necessary for
solution of some problem, which belongs to some studied discipline,
while sets from K are called feasible because they are sets of arrange-
ments of techniques and methods (skills) from Q that a student or a
group (class) of students can use for solving problems from a chosen
discipline.
A formal knowledge space is a knowledge structure (Q, K), in
which K is closed under union, i.e., if A, B ∈ K, then A ∪ B ∈ K.
Being a specific knowledge structure, a knowledge space has the
same interpretations. The additional condition means that if two
subsets of Q are feasible (learnable), then their union is also feasible
(learnable). For instance, if it is possible for someone to learn all
concepts from a set A from K and someone else can learn all concepts
from a set B from K, then it is supposed there is a third person
who can learn all concepts learnable by both people separately, i.e.,
every concept that either the first person can learn or the second person
can learn.
A formal knowledge space (Q, K) is called quasi-ordinal if K is
closed under intersection, i.e., if A, B ∈ K, then A ∩ B ∈ K.
The additional condition means that if two subsets of Q are fea-
sible (learnable), then their intersection is also feasible (learnable).
For instance, if it is possible for someone to learn all concepts from
a set A from K and someone else can learn all concepts from a set B
from K, then it is supposed there is a third person who can learn all
concepts learnable by both people, i.e., every concept that both the first person
and the second person can learn.
A formal knowledge space (Q, K) is called well-graded if for any
set A from K, there is an element a from A such that A\{a} also
belongs to K.
In educational terms, it means any feasible state of concept knowl-
edge can be reached by learning one concept at a time, for a finite
set of concepts to be learned.
A well-graded formal knowledge space with educational interpre-
tation is called a learning space.
We can see that knowledge structures and knowledge spaces in
the sense of (Doignon and Falmagne, 1985; 1999) are special kinds
of epistemic spaces and knowledge spaces with additional structures,
i.e., structured epistemic spaces and knowledge spaces in the sense
of (Burgin, 2010; 2011a; 2014).
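A minimal Python sketch, with hypothetical helper names, of how the closure conditions above can be checked for a small formal knowledge structure (Q, K): closure under union, closure under intersection, and well-gradedness.

from itertools import combinations

def is_knowledge_structure(Q: frozenset, K: set) -> bool:
    """(Q, K) contains the empty set and Q, and every state is a subset of Q."""
    return frozenset() in K and Q in K and all(A <= Q for A in K)

def is_knowledge_space(Q: frozenset, K: set) -> bool:
    """K is, in addition, closed under union."""
    return is_knowledge_structure(Q, K) and all(A | B in K for A, B in combinations(K, 2))

def is_quasi_ordinal(Q: frozenset, K: set) -> bool:
    """K is, in addition, closed under intersection."""
    return is_knowledge_space(Q, K) and all(A & B in K for A, B in combinations(K, 2))

def is_well_graded(K: set) -> bool:
    """Every non-empty feasible set can lose one item and stay feasible."""
    return all(any(A - {a} in K for a in A) for A in K if A)

Q = frozenset({"counting", "addition", "multiplication"})
K = {frozenset(), frozenset({"counting"}), frozenset({"counting", "addition"}), Q}
print(is_knowledge_space(Q, K), is_quasi_ordinal(Q, K), is_well_graded(K))  # True True True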
Given a set X with a binary relation R, it is possible to introduce
a metric in this space (Kuratowski, 1966).
An R-path in X between elements x and y is a sequence p(x, y) =
(x1 , x2 , x3 , . . . , xn ) of elements from X such that all pairs (xi−1 , xi )
belong to R, x1 = x and xn = y. The number n is called the length
of the path p(x, y), i.e., l(p(x, y)) = n. Then we define the distance
by the following rule:

dR (x, y) = min{l(p(x, y)) : p(x, y) is a path in X between elements x and y}
                    if at least one such path exists;
dR (x, y) = ∞ if there are no paths in X between elements x and y;
dR (x, y) = 0 when x = y.

Proposition 3.1.1. The distance dR (x, y) defines a metric in the


space X.

Indeed, by definition, dR (x, y) = 0 if and only if x = y. The


relation dR is symmetric, i.e., dR (x, y) = dR (y, x). In addition,
if (x1 , x2 , x3 , . . . , xn ) is a path in X between elements x and z and
(y1 , y2 , y3 , . . . , ym ) is a path in X between elements z and y, then
(x1 , x2 , x3 , . . . , xn , y2 , y3 , . . . , ym ) is a path in X between elements
x and y. Thus, dR (x, y) ≤ dR (x, z) + dR (z, y).
We call this metric the relational metric on the set X.
Note that in the relational metric, X is a discrete topological
space.
Proposition 3.1.1 shows that Wes and Wses are metric spaces.
Structures in the spaces Wes and Wses are inherited by epistemic
spaces and their states. Thus, as any subspace of a metric space is
a metric space (Kuratowski, 1966), an epistemic space E and all its
states are metric spaces.
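The relational metric can be computed by breadth-first search over the relation R. The sketch below is only an illustration and follows the convention above that the length of a path is the number of its elements; the function name is an assumption.

from collections import deque
from math import inf

def relational_distance(X: set, R: set, x, y) -> float:
    """Computes dR(x, y): the length (number of elements) of a shortest
    R-path from x to y, infinity if no path exists, and 0 when x = y."""
    if x == y:
        return 0
    # breadth-first search; 'length' counts the elements of the path built so far
    queue, seen = deque([(x, 1)]), {x}
    while queue:
        current, length = queue.popleft()
        for (a, b) in R:
            if a == current and b not in seen:
                if b == y:
                    return length + 1
                seen.add(b)
                queue.append((b, length + 1))
    return inf

X = {"p", "q", "r", "s"}
R = {("p", "q"), ("q", "r")}
print(relational_distance(X, R, "p", "r"))   # 3: the path (p, q, r) has three elements
print(relational_distance(X, R, "p", "s"))   # inf: no R-path from p to s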
Now we specify our constructions for the case of knowledge
spaces considering a universal set or multiset W of knowledge units
(knowledge items). It is possible to take the set W C of elementary
knowledge units mathematically modeled in Chapter 4 as a univer-
sal set (multiset) W , obtaining a reasonable formalization of the
concept of a knowledge state. Another possibility for W is real-
ized by the set (multiset) W L of propositions and/or predicates
from a logical language L. Propositions and predicates are sym-
bolic knowledge units in the logical approach developed in works
of Bar-Hillel and Carnap (1958), Hintikka (1968; 1970) and some
other authors. Shreider (1965) interpreted symbolic knowledge units
as texts in a thesaurus. Many researchers employ mental schemas
as cognitive/symbolic knowledge units in the brain (cf., for exam-
ple, (Anderson, 1977; Arbib, 1992; Armbruster, 1996; Burgin, 2006)).
One more possibility for W is the set, or more exactly, a multiset,
W S of situations possible in a world U . These situations are taken
as knowledge items or knowledge units (cf., for example, (Barwise
and Perry, 1983)). Situations themselves form the substantial parts
of knowledge units, while their logical descriptions in the form of
propositions signify the cognitive/symbolic parts of knowledge units
(cf., Chapter 2).
The set W is called universal because we assume that the follow-
ing axiom is true.

UKEF 1 (the Internal Representation Axiom). For any cogni-


tive system (agent) A, possible knowledge states KAi of A are subsets
(submultisets) of the set (multiset) W .
UKEF 1 is a particular case of the Internal Cognitive Represen-
tation Axiom (UCIF 1).
It is possible to interpret W as the base of all knowledge that
agents are able to have about their environment.
Another aspect of universality of the set (multiset) W may be
in the possibility to describe all possible (existing) worlds utilizing
knowledge only from W . For instance, when W is the set (multiset)
W L of propositions and/or predicates from some logical language L,
then it is possible to build all descriptions of all possible worlds by
combining elements from W L . This possibility is reflected in the
following axiom.

UKEF 2 (the External Representation Axiom). For any envi-


ronment (situation) D, there is a subset (submultiset) W D of the
set (multiset) W that contains all accessible knowledge about D.
UKEF 2 is a particular case of the External Cognitive Represen-
tation Axiom (UCEF 2).
Taking these two axioms as the foundation, we describe the theory
of cognitive systems/agents called the theory of M -spaces devel-
oped in (Burgin, 2011a), which includes the theory of Mizzaro spaces
(Burgin, 2010a).
Definition 3.1.12. (a) Subsets of W are called Mizzaro spaces.
(b) Submultisets of W are called Mizzaro multispaces.
In some cases, only specific subsets (submultisets) of W are used
in the theory. For instance, if elements of W are propositions and
the model satisfies conditions of classical logical calculi, then only
consistent subsets of propositions are acceptable as Mizzaro spaces.
When, in addition, all deducible propositions are also included in
such a logical Mizzaro space, then Mizzaro spaces are components of
a logical variety (cf., Section 3.3).
A Mizzaro space without an additional structure is an unstruc-
tured knowledge space. A Mizzaro multispace is a typological knowl-
edge space because in a multiset, elements are determined by their types
(Burgin, 2011).
Taking a knowledge system K , we model the states of K by
Mizzaro spaces (Mizzaro multispaces), i.e., each knowledge state is
represented by a Mizzaro space (Mizzaro multispace) KK i . The whole
knowledge system K is modeled by an M-space.
In modeling knowledge systems and information processes, we
consider two structures — sets and multisets — because using the
classical background it is possible to consider only sets, which makes
the model simpler. However, many real cognitive systems contain
several copies of the same element. For instance, the same element
of knowledge can be stored in different parts of a computer memory
or of the brain. This makes utilization of multisets necessary.

Definition 3.1.13. A type of structures is a system of conditions


(axioms) that all these structures, i.e., sets with relations, satisfy.
To define an M-space, we consider a type θ of structures in Miz-
zaro spaces (Mizzaro multispaces).

Definition 3.1.14. An M-space (M-multispace) M is a structure of


the form
M = {KSM ; OSM },
where KSM consists of Mizzaro spaces (Mizzaro multispaces) K with
the structure of the type θ, and OSM is a system of information
operators acting on these Mizzaro spaces (Mizzaro multispaces).
Thus, KSM = {Ki |i ∈ I} and OSM = {At |t ∈ T }.


Example 3.1.7. It is possible to represent a logical variety or a
prevariety V (cf., Section 3.3) as an M-space where KSM consists
of the components of V and operators from OSM apply mappings
fi : Ai → L and gi : Ti → L(i ∈ I), which form connections between
components of the variety (prevariety).
Definition 3.1.15. (a) The set KSM is called the state space of the
M-space M .

(b) The set UM = ∪i∈I Ki , where Ki ∈ KSM , is called the universe of the
M-space M .
(c) The system OSM is called the operating system of the
M-space M .
The simplest structure of Mizzaro spaces Ki of the type θ is the
structure of a set and the simplest structure of Mizzaro multispaces
Ki of the type θ is the structure of a multiset. However, Mizzaro
spaces Ki can be logical calculi, linear spaces or groups.
In the algebraic context, each M-space M is a universal alge-
bra (Kurosh, 1963) with the support KSM and system of operations
OSM . Here, we consider only unary M-spaces (unary
M-multispaces), in which each information operator maps one
Mizzaro space (Mizzaro multispace) Ki into another (may be the
same) Mizzaro space (Mizzaro multispace) Kj . Information opera-
tors with higher arity, e.g., binary information operators, not only
transform knowledge (epistemic) spaces but also merge these spaces
together.
Information operators from OSM are global epistemic informa-
tion operators in KSM . When an operator acts only on one Mizzaro
space (Mizzaro multispace), then it is a local epistemic information
operator. Local epistemic information operators in non-structured
Mizzaro spaces are studied in (Mizzaro, 1996; 1998; 2001; Burgin,
2010).
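The following minimal Python sketch (with assumed names) shows an M-space in the simplest setting: the state space KS is a family of Mizzaro spaces taken as plain sets of knowledge items, and OS is a system of unary, global information operators mapping one Mizzaro space into another.

KS = [frozenset({"it is raining"}),
      frozenset({"it is raining", "the street is wet"})]

def add_item(item: str):
    """A global unary information operator: adds one knowledge item."""
    return lambda K: K | {item}

def delete_item(item: str):
    """A global unary information operator: removes one knowledge item."""
    return lambda K: K - {item}

OS = {"ADD_wet": add_item("the street is wet"),
      "DEL_rain": delete_item("it is raining")}

# Applying an operator maps one Mizzaro space into another (possibly the same) one.
K0 = KS[0]
K1 = OS["ADD_wet"](K0)
print(K1 == KS[1])          # True: the operator moved K0 to KS[1]
print(OS["DEL_rain"](K1))   # frozenset({'the street is wet'})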
Note that it is possible to consider any system that contains a
knowledge system as a knowledge system itself. Thus, any intelligent
agent is a knowledge system.
Here we are mostly interested in content epistemic information
operators, which we simply call epistemic information operators.
To correctly model a stratified knowledge system, the modeling
M-space also has to be stratified in a similar fashion.

Definition 3.1.16. (a) An M-space M = {KSM ; OSM } is stratified


if there is a set J and each Mizzaro space (Mizzaro multispace) Ki
from KSM has the form

Ki = ∪j∈J Kij .

(b) A stratification of an M-space M = {KSM ; OSM } is strict if
for each Mizzaro space (Mizzaro multispace) Ki from KSM , Kij ∩
Kik = Ø when j ≠ k ∈ J.
(c) An M-space M = {KSM ; OSM } is linearly stratified if each
Mizzaro space (Mizzaro multispace) Ki from KSM has the form

Ki = ∪n=1,2,3,... Kin

in the case when the stratification is infinite and the form

Ki = ∪n=1,...,m Kin

in the case when the stratification is finite (cf., Figure 3.3).


Linear stratification means that the set of stratum indices J is
finite or countable and linearly ordered.
There are M-space stratifications with a nonlinear topology. For
instance, the stratification in Figure 3.4 has a cyclic topology. Impor-
tant cases of M-space stratifications have structures of a tree or a
forest.
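As a small illustration, the Python sketch below represents one strictly and linearly stratified Mizzaro space as a dictionary of strata and checks that the strata are pairwise disjoint; the stratum contents are invented for the example.

from itertools import combinations

strata = {                                  # stratum index -> stratum content
    1: frozenset({"raw sensory reading"}),
    2: frozenset({"working note"}),
    3: frozenset({"long-term fact", "long-term rule"}),
}

def whole_space(strata: dict) -> frozenset:
    """Ki is the union of its strata Kij."""
    return frozenset().union(*strata.values())

def is_strict(strata: dict) -> bool:
    """A stratification is strict when distinct strata are disjoint."""
    return all(not (strata[j] & strata[k]) for j, k in combinations(strata, 2))

print(whole_space(strata))
print(is_strict(strata))    # True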

Example 3.1.8. People, as well as computers, have many kinds


of memory. It is even supposed that each part of the brain has
several types of memory agencies that work in somewhat different
ways, to suit particular purposes (Minsky, 1986). It is possible to
consider each of these memory agencies as a separate system and
Figure 3.3. A finite M-space stratification with the linear topology (strata K1 , K2 , K3 , . . . , Kn−1 , Kn )

Figure 3.4. A finite M-space stratification with the cyclic topology (strata K1 , K2 , K3 )

to study differences between information that changes each type of


memory. This might help to understand the interplay between sta-
bility and flexibility of the mind, in general, and the memory, in
particular.
Psychologists differentiate three types of the human memory:
the sensory memory, short-term memory, and long-term memory.
This is the most important memory stratification and the one best
documented by scientific research. However, memory researchers do not employ
uniform terminology. Sensory memory is also known as the sensory
register, sensory store, sensory information storage, eidetic mem-
ory, and echoic memory. Short- and long-term memories are also
referred to as the primary memory and the secondary memory, cor-
respondingly. Each component of the memory differs with respect
to its function, the form of information held, the length of time
information is retained, and the amount of information-handling
capacity.
Thus, the human memory has three basic strata: the sensory mem-
ory, the short-term memory, and the long-term memory. As a result,
all knowledge in the memory of an individual is also stratified into
three components: knowledge/data in the sensory memory, knowl-
edge in the short-term memory, and knowledge in the long-term
memory of the individual (cf., Figure 3.5). Additional stratification

Figure 3.5. The human memory hierarchy (sensory memory, short-term memory, long-term memory)


of the human memory induced by its representation as a knowledge
space generates additional stratification of knowledge.
Sensory memory acts as a buffer for stimuli received through the
senses, which are then filtered and passed from sensory memory into
short-term memory by attention.
In turn, sensory memory is also stratified by different sensory
channels. There are iconic memory for visual stimuli, echoic memory
for aural stimuli, and haptic memory for touch.
Long-term memory is naturally stratified. The most popular strat-
ification divides it into two parts: episodic memory and semantic
memory. Episodic memory stores knowledge of events and experi-
ences in a serial form. In contrast to this, semantic memory is a struc-
tured record of facts, concepts, and skills that people have acquired.
The information in semantic memory is derived from that in the
episodic memory of the same individual.
Neuroscientists distinguish three main activities related to long
term memory: storage, deletion, and retrieval. These operations are
modeled by epistemic information operators. Storage is modeled
by the epistemic information operator REPL. In information stor-
age, information from sensory memory at first goes to short-term
memory and then is stored in long-term memory, usually by the
process called rehearsal. Rehearsal is the repeated exposure to a
stimulus of a knowledge/data portion, which transfers it into long-
term memory. Deletion of a knowledge/data portion is modeled by
the epistemic information operator DEL and is mainly caused by
decay and interference. Emotional factors essentially affect the long-
term memory functioning. According to the contemporary approach,
there are two types of information retrieval: recall and recogni-
tion. In knowledge/data recall, the information is reproduced from
memory. Recall is modeled by the epistemic information operator
COPY. Knowledge/data recognition is based on information that
this knowledge/data portion has been seen before and is assisted
by the provision of retrieval cues to enable better access in the long-
term memory. Recognition is modeled by the epistemic information
operator GEN.
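A minimal sketch of these memory processes as epistemic information operators acting on a stratified state (sensory, short-term, long-term memory). The operator names REPL, DEL, COPY, and GEN come from the text above, but their concrete behavior in this Python sketch is only an illustrative assumption.

state = {"sensory": {"flash of light"},
         "short_term": {"phone number"},
         "long_term": {"own name"}}

def REPL(state, item):
    """Storage: move an item from short-term into long-term memory."""
    state["short_term"].discard(item)
    state["long_term"].add(item)
    return state

def DEL(state, item):
    """Deletion: decay or interference removes an item from long-term memory."""
    state["long_term"].discard(item)
    return state

def COPY(state, item):
    """Recall: reproduce an item from long-term memory in short-term memory."""
    if item in state["long_term"]:
        state["short_term"].add(item)
    return state

def GEN(state, item) -> bool:
    """Recognition: report whether the item has been stored before."""
    return item in state["long_term"]

REPL(state, "phone number")
print(GEN(state, "phone number"))   # True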
Scientists also use another stratification of the human memory:
personal memory, semantic memory, perceptual memory, and skill
memory, which includes motor skill memory, cognitive skill memory,
and rote linguistic skill memory.

Example 3.1.9. The computer memory is also a complex system of


diverse components and processes. The memory of a computer includes
three basic components: the random access memory (RAM),
read-only memory (ROM), and secondary storage. While RAM for-
gets everything whenever the computer is turned off and ROM
cannot learn anything new, secondary storage devices allow the com-
puter to record information for as long a period of time as we want and
change it whenever we want. Now the following devices are utilized
for long-term computer memory: magnetic tapes and correspond-
ing drives, magnetic disks and corresponding drives, flash memory
storage devices and corresponding drives, and optical disks and cor-
responding drives.
Computer memory is intrinsically stratified by the hierarchy in
which levels are distinguished by the response time, complexity, and
capacity. The overall goal of using a memory stratification is to obtain
the highest possible average access performance, while minimizing the
cost of the entire memory system.
Often four major memory levels are separated:

1. Internal memory — Processor registers and cache.


2. Main memory — the system RAM and controller cards.
3. On-line mass storage — Secondary storage.
4. Off-line bulk storage — Tertiary and Off-line storage.

There are more detailed stratifications of computer memory. One


of them has the following form:

1. Processor registers are usually a few thousand bytes in size and
provide the fastest access to the memory, which is usually 1 CPU
cycle.
2. Cache
2a. Micro operations cache forms the level 0 and can be several
KiB in contemporary computers, where one KiB (kibibyte) is
2^10 = 1024 bytes.
2b. Instruction cache is a part of the level 1 and can be 128 KiB
in contemporary computers.
2c. Data cache is another part of the level 1 and can be 128
KiB in contemporary computers with the access speed about
700 GiB/second, where one GiB (gibibyte) is equal to 2^10 =
1024 MiB.
2d. Instruction and data (shared) cache forms the level 2 and can
be around 1 MiB in contemporary computers with the access
speed around 200 GiB/second, where one MiB (mebibyte) is
equal to 1024 KiB = 1,048,576 bytes.
2e. Lesser shared cache forms the level 3 and can be several
MiB in contemporary computers with the access speed about
100 GiB/second.
2f. Larger shared cache forms the level 4 and can be 128 MiB
in contemporary computers with the access speed about
40 GiB/second.
3. Main memory (or primary storage) can be a number of gibibytes
with an access speed of around 10 GiB/second.
4. Disk storage (or secondary storage) can be a number of tebibytes
(TiB) with an access speed (from a solid state drive) of about
600 MiB/second, where one tebibyte is equal to 1024 GiB.
5. Nearline storage (or tertiary storage) can be a number of exbibytes
(EiB) with an access speed of about 160 MiB/second, where one
exbibyte is equal to 2^20 TiB.
6. Offline storage.
Such memory stratifications are linear, reflecting the access time
with the fast CPU registers at the top and the slow hard drive at the
bottom.
Another, although related, stratification is induced by different
electronic devices, which include CPU registers, on-die SRAM caches,
external caches, DRAM, paging systems, and virtual memory or
swap space on a hard drive. These devices are called RAM by many
developers, even though the various subsystems can have very differ-
ent access times, violating the original concept behind the random
access term in RAM. RAM consists of two strata: dynamic RAM,
which requires the stored information to be periodically re-read and
re-written, or refreshed in order not to lose it, and static memory, which
never needs to be refreshed as long as power is applied, although it
can lose its content if power is removed.
Usually each stratum in RAM is also stratified. For instance,
in DRAM, strata are defined by the row, column, bank, rank, and
channel.
In addition, computer memory is also stratified by data stor-
age technologies, such as semiconductor, magnetic, and optical tech-
nologies. In modern computers, primary storage almost exclusively
consists of dynamic volatile semiconductor memory, which uses
semiconductor-based integrated circuits to store information. Since
the turn of the century, a type of non-volatile semiconductor memory
known as flash memory has steadily gained share as off-line storage
in various advanced electronic devices and computers.
Magnetic storage, which is non-volatile, uses different types of
magnetization on a magnetically coated surface to store informa-
tion. The information is accessed using one or more read/write heads
which may contain one or more recording transducers. A read/write
head only covers a part of the surface so that the head or medium,
or both must be moved relative to one another in order to access data.
In modern computers, there are the following kinds of magnetic storage
devices:

• Magnetic disk, such as floppy disks, used for off-line storage, and
the hard disk drive, used for secondary storage.
• Magnetic tape data storage, used for tertiary and off-line storage.

At the beginning of the computer era, magnetic storage was also
used for primary storage in the form of a magnetic drum, core
memory, core rope memory, thin-film memory, twistor memory or
bubble memory, while magnetic tapes were often used for secondary
storage.
Another popular type of storage is the optical disc, which stores
information in deformities on the surface of a circular disc, reading
this information by illuminating the surface with a laser diode and
observing the reflection. In modern computers, there are the following
kinds of optical storage devices:

• CD, CD-ROM, DVD, BD-ROM: Read only storage, used for


mass distribution of digital information (music, video, computer
programs);
• CD-R, DVD-R, DVD+R, BD-R: Write once storage, used for ter-
tiary and off-line storage;
• CD-RW, DVD-RW, DVD+RW, DVD-RAM, BD-RE: Slow write,
fast read storage, used for tertiary and off-line storage;
• Ultra Density Optical or UDO is similar in capacity to BD-R or
BD-RE and is slow write, fast read storage used for tertiary and
off-line storage.

In magneto-optical disc storage, the information is read optically


and written into the magnetic state on a ferromagnetic surface by
combining magnetic and optical methods. It is usually used for ter-
tiary and off-line storage.
Paper data storage, typically in the form of paper tape or punched
cards, has long been used to store information for automatic process-
ing, particularly before electronic computers were invented.
There are also such memory devices as the vacuum tube mem-
ory, electro-acoustic memory, optical tapes, phase-change memory,
holographic data storage, and molecular memory, which stores infor-
mation optically inside crystals or photopolymers.
All these memory devices and components determine the cor-
responding stratification of knowledge stored in computers, e.g., of
knowledge bases.

Example 3.1.10. Stratification is a popular technique in knowl-


edge base theory and practice. For instance, Hunter and Liu (2010)
introduce knowledge base stratification to solve the problem of merg-
ing multiple knowledge bases. Benferhat and Baida (2004) use strat-
ified first order logic for access control in knowledge bases. Benferhat
and Garcia (2002) employ stratification for handling inconsistent
knowledge bases. Lassez et al. (1989) show how stratification can
be used as a tool in the interactive model-building process. Namely,
it is possible to reduce the computational complexity of the process
by the use of stratification, which limits consistency checking to min-
imal strata.

Definition 3.1.17. (a) An M-space M is a subspace of an M-space


H if the state space KSM is a substructure of the state space KSH
and the operational system OSM is a substructure of the operational
system OSH .
(b) If an M-space M is a subspace of an M-space H , then H is
called a superspace of M .
In particular, the stratification of the knowledge space KSM is
induced by the stratification of the knowledge space KSH .

Example 3.1.11. If a structured M-space H models the group mem-


ory of a group G of several people, then a structured M-space M that
models the memory of one individual from this group G is a subspace
of H.
Subspaces of M-spaces and M-multispaces represent subsystems
of knowledge systems. For instance, in large knowledge systems, such
as a scientific theory, it is possible to separate the subsystem of deno-
tational knowledge and the subsystem of operational knowledge.

Definition 3.1.18. If X is a structure, i.e., a set/multiset with rela-
tions, then the underlying set of X is the set of all elements from X and
the underlying multiset of X is the multiset of all elements from X,
while RelX is the set of all relations from X.
In such a way, ignoring the M-space stratification, it is possible
to represent structured knowledge systems by uniform M-spaces, in
which all knowledge states are sets or multisets. In this setting, con-
tent epistemic information operators act on elements of the underlying set
or multiset of a state K, while bond epistemic information operators act on
elements from RelK.
Definition 3.1.19. (a) A knowledge system (agent) A is called


locally finite if any knowledge state of A is finite.
(b) A knowledge system (agent) A is called finite if it has only a
finite number of knowledge states and any knowledge state of A is
finite.
(c) An M-space M is called locally finite if each K from KSM
contains only a finite number of knowledge items.
(d) An M-space M is called finite if it has only a finite number of
Mizzaro spaces (Mizzaro multispaces) K each of which is also finite.
It looks like it might be sufficient to consider only finite or at
least, locally finite agents. However, if knowledge is represented by
logical statements and it is assumed (as, for example, in the theory
of Bar-Hillel and Carnap, 1958) that any knowledge system con-
tains all logical consequences of all its elements, then an agent with
such knowledge system is infinite. In information algebras, portions
of information are represented by closed subsets of sentences from a
logical language L (Kohlas and Stärk, 2007).
However, in conventional logics, sets closed with respect to such infor-
mation operators as deduction are infinite because any sentence
p implies p ∨ q for any sentence q from L, which is, as a rule, infinite
(cf., for example, (Shoenfield, 2001)). Thus, in the context of clas-
sical logic and information algebras any portion of information has
infinitely many representations. Consequently, such a portion gener-
ates a system with an infinite number of knowledge items.
Proposition 3.1.2. An M-space M is finite if and only if it has a
finite universe.
Epistemic structures and epistemic units have many properties.
Basic characteristics called dimensions of epistemic structures (epis-
temic units) in general and of knowledge items and units, in par-
ticular, are studied in Chapter 2. Usually dimensions explored in
Section 2.3 are gradual compound properties or attributes used for
building weighted epistemic spaces.
As we have seen, each dimension is a composite attribute com-
prising several basic epistemic attributes. For instance, the efficiency
dimension of an epistemic structure e includes the efficiency of e for
reaching some goal, e.g., for reaching Mars, the efficiency of e
for understanding e, the efficiency of e for understanding people, the
efficiency of e for building some object A and the efficiency of e for
obtaining knowledge about some object D.
Another example of efficiency is given by such an epistemic struc-
ture int as knowledge of mathematical integration and its efficiency
for a student C. In this case, efficiency of int for getting a high grade
in the class is rather high, while efficiency of int for getting from
home to the college is rather low (usually it is zero).
Complexity comprises such properties as compression, under-
standability, and hardness.
In addition to attributes that constitute dimensions, there are
other properties/attributes of epistemic structures and epistemic
units. For instance, an important epistemic attribute is novelty with
respect to the infological system of an intelligent agent (cognitive sys-
tem). This attribute is a (fuzzy) function of another attribute that
shows the time of attribution of epistemic structure to the given
infological system.
Epistemic structures (knowledge units) taken without their prop-
erties are pure. Properties/attributes of structures (knowledge units)
and characteristics of objects they reflect (represent) induce weights
of these epistemic units. For instance, taking such an epistemic unit
as a statement P , we can consider its properties (attributes): (1) time
when this statement P was made; (2) person(s) who made this state-
ment; (3) people who supported this statement; (4) time needed to
prove validity (truthfulness) of P ; (5) truth value of P ; and so on.
The value of the first attribute is the first weight w1 of P . It is
a numerical value. The value of the second attribute is the second
weight w2 of P . It is a nominal value, i.e., it is a name or names of
people who made statement P , or a functional value, i.e., it is the
indicator function of the names of people who made statement P .
The value of the third attribute is the third weight w3 of P . It is
similar to the second weight. The value of the fourth attribute is the
fourth weight w4 of P . It is numerical and similar to the first weight.
The value of the fifth attribute is the fifth weight w5 of P and it
takes two values from the two-element set {true, false} if we utilize
the classical logic. If we evaluate P by means of fuzzy logic, the fifth


weight w5 takes values in the interval [0, 1].
An algorithm is a unit of operational knowledge instructing how
to solve a problem or a class of problems. Thus, taking an algorithm
A as a symbolic structure, we have such properties as (cf., (Burgin,
2005)):

(1) the length lA (weight w1 );


(2) its time complexity TA (weight w2 );
(3) its space complexity SA (weight w3 ).

The values of the first weight are positive numbers, while the
values of the second and third weights are functions (cf., Burgin,
2005).
Dimensions, which are basic complex properties, also add weights
to knowledge units as a specific kind of epistemic structures. For
instance, the knowledge unit A represented by the sentence “Now it
is ten o’clock in the morning” is the symbolic part of pure knowl-
edge. However, it can be true or false depending on the current time.
This estimate defines the weight of the knowledge unit in the relevance
dimension. Namely, if the estimate “true” is represented by 1 and
the estimate “false” is represented by 0, then the weight of A is 1
when it is really ten o’clock in the morning and the weight of A is 0
when this is wrong.
As a result, dimensions and other properties/attributes bring us
from pure epistemic structures (knowledge units) to weighted epis-
temic structures (weighted knowledge units). To determine weights,
we fix a vector of attributes (A1 , . . . , Ak ). Then we change a pure epis-
temic structure (pure knowledge unit) e to the weighted epistemic
structure (weighted knowledge unit) B = (e; w1 , . . . , wk ), where wi
is the weight of e with respect to the attribute Ai (the dimension i).
The value of the weight wi of the epistemic structure e with respect
to the attribute Ai reflects to what extent e has the attribute Ai .
When the attribute Ai is an abstract property in the sense of (Bur-
gin, 1985; cf., also Section 5.2), then wi is the value of this property
for the epistemic structure e.
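The passage above can be illustrated by a short Python sketch (with invented attribute names) that attaches a weight vector to a pure knowledge unit for a fixed vector of attributes; recovering the content field plays the role of the projection onto pure epistemic structures.

from dataclasses import dataclass

ATTRIBUTES = ("time_made", "author", "truth_value")   # the fixed vector (A1, ..., Ak)

@dataclass(frozen=True)
class WeightedKnowledgeUnit:
    content: str        # the pure epistemic structure e
    weights: tuple      # (w1, ..., wk), one weight per attribute Ai

def weigh(content: str, **attribute_values) -> WeightedKnowledgeUnit:
    """Attach weights in the fixed attribute order, using None for unknown ones."""
    return WeightedKnowledgeUnit(
        content=content,
        weights=tuple(attribute_values.get(a) for a in ATTRIBUTES))

P = weigh("Now it is 5 p.m.", time_made="17:00", author="observer A", truth_value=1.0)
print(P.weights)        # ('17:00', 'observer A', 1.0)
print(P.content)        # the pure knowledge unit, recovered by projection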
It is possible to consider the system of weights w1 , . . . , wk of a
weighted epistemic structure (knowledge unit) B = (e; w1 , . . . , wk )
as the state of e.
It is interesting to know that weights can describe arbitrary struc-
tures in epistemic spaces. In particular, weights in an epistemic space
can turn it into an epistemic multispace by inducing an indistin-
guishability relation between elements of this space. Indeed, in a
multiset (cf., Appendix), elements are indistinguishable if and only
if they have the same type. Thus, we can introduce the weight wt
the values of which are types of elements (epistemic structures) from
an epistemic space. Making elements that have the same value of the
weight wt indistinguishable, we obtain an epistemic multispace.
An epistemic space E is called weighted if all its elements are
weighted epistemic structures. By construction, there is a natural
projection π of a weighted epistemic space E onto a pure epis-
temic space E0 where π(e; w1 , . . . , wk ) = e for any pure epistemic
structure e.

Example 3.1.12. Gärdenfors offers his theory of conceptual rep-


resentations as a bridge between the symbolic and connectionist
approaches (Gärdenfors, 2000; 2004). Symbolic representation is par-
ticularly weak at modeling concept learning, which is paramount
for understanding many cognitive phenomena. Concept learning is
closely tied to the notion of similarity, which is also poorly served
by the symbolic approach. Gardenfors’ theory of conceptual spaces
presents a framework for representing information on the concep-
tual level. A conceptual space is built up from geometrical structures
based on a number of quality dimensions. The main applications
of the theory are on the constructive side of cognitive science: as a
constructive model the theory can be applied to the development of
artificial systems capable of solving cognitive tasks. Gardenfors also
shows how conceptual spaces can serve as an explanatory framework
for a number of empirical theories, in particular those concerning
concept formation, induction, and semantics. His aim is to present
a coherent research program that can be used as a basis for more
detailed investigations.
Example 3.1.13. Osgood, Suci, and Tannenbaum (1978) use


semantic spaces for building their theory of meaning and its mea-
surement. A semantic space is a set of concepts with their mean-
ing. The meaning of a concept to an individual subject is defined
as the set of the factor scores based on the data from this indi-
vidual. The meaning of a concept in the culture is defined as the
set of the averaged factor scores (Osgood et al., 1978). This shows
that a semantic space is a special kind of weighted epistemic space.
Although very often the factor scores are integers, it is possible to
conjecture that using real numbers as the factor scores allows better
evaluation of meaning, turning the meaning of a concept into a real
vector.
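As a rough illustration of this construction, the Python sketch below computes the cultural meaning of a concept as the component-wise average of individual factor-score vectors; the concept names and scores are invented.

individual_scores = {                 # concept -> factor scores of individual subjects
    "home": {"Ann": (2.5, 1.0, 3.0), "Bob": (3.0, 0.0, 2.0)},
}

def cultural_meaning(concept: str) -> tuple:
    """Average the factor-score vectors of all subjects component-wise."""
    vectors = list(individual_scores[concept].values())
    return tuple(sum(v[i] for v in vectors) / len(vectors)
                 for i in range(len(vectors[0])))

print(cultural_meaning("home"))       # (2.75, 0.5, 2.5)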
The mathematical structure used for representing weighted epis-
temic spaces in the formal context is called a generalized vector
bundle (Burgin, 2011). Informally, it consists of epistemic elements
connected by relations and a vector space attached to each of these
elements. Below we give some examples of weighted conceptual epis-
temic spaces (cf., Figures 3.6 and 3.8).

Example 3.1.14. We can also build a weighted conceptual epistemic


space WECS in which concepts are epistemic elements. Below we give
a sample of weighted conceptual epistemic spaces, which consists of
four concepts, to each of which a three-dimensional weight space is
attached. For instance, it is possible to construct each of these weight
spaces using such properties as the level of abstraction, fuzziness,
and connectedness, i.e., the number of other concepts to which this
concept is connected.

Figure 3.6. A simple graphical example of a weighted conceptual epistemic
space WECS (the concepts Lion, Tiger, and Cat linked to the concept Animal)
Example 3.1.15. A weighted propositional epistemic space WEPS,
in which propositions are epistemic elements, is a knowledge space as
propositions constitute one of the basic forms of knowledge represen-
tation. There are many relations in this space, i.e., it is a structured
epistemic space. One of the main relations is implication denoted by
the symbol →, where r, p → q means that whenever the propositions
p and r are true, the proposition q is also true.
Given a geometrical shape (cf., Figure 3.7), we can build the
weighted conceptual epistemic space (cf., Figure 3.8), which consists
of three propositions:

A: ABCD is a square.
B: ABCD is a rectangle.
C: ABCD is a rhombus.

To each of these propositions, a three-dimensional weight space is


attached. For instance, it is possible to construct each of these weight
spaces using such properties as the level of abstraction, fuzziness,
and connectedness, i.e., the number of other concepts to which this
concept is connected.

Figure 3.7. The square ABCD

Figure 3.8. A simple graphical example of a weighted conceptual epistemic
space WECS
In what follows, we assume that the set of all weighted epistemic elements
of the form (a; w1 , . . . , wk ) with a fixed a is a real vector space —
the space of weights, which is denoted by La , or even a topological
vector space (Bourbaki, 1987). Note that in general, not all weights
are numbers. For instance, there are functional weights. However, it is
possible to immerse any domain of weight values into an appropriate
vector space and assume that the whole weight space is the vector
space equal to the Cartesian product of weight spaces of individual
weights w.
In addition, it is possible to assume that all vector spaces La of
weights have the same dimension. If it is not so, when dimensions of
all La are bounded, we can take the space La0 of weights with the
maximal dimension as the common space for all weights denoting
it by Le . In the case when dimensions of all La are unbounded, we
come to the necessity to use an infinite-dimensional vector space
as the common space of all weights Le . That is why, exploring the
general situation, we acknowledge that the common vector spaces Le
of weights is not necessarily finite dimensional.
In this context, the space Wesw of weighted epistemic structures
from Wes has the structure of a vector bundle with the base Wes , i.e.,
W esw = (Wesw , πes , Wes ) where πes : Wesw → Wes is a projection,
while the space W sesw of weighted symbolic epistemic structures
from Wses has the structure of a vector bundle with the base Wses ,
i.e., W sesw = (Wsesw , πses , Wses ) where πses : Wsesw → Wses is a
projection.
We remind the reader (Le Potier, 1997) that a vector bundle E is a triad (named set) E = (E, p, B), where the topological space E is called the total space or, simply, the space of the vector bundle E; the topological space B is called the base space or, simply, the base of the vector bundle E; and p is the topological projection of E onto B such that there is a vector space F, called the fiber of the vector bundle E, for which p−1(b) = Fb ≅ F for all points b from B, and every point in the base space has a neighborhood U for which the space p−1(U) is homeomorphic to the direct product U × F. In the case of the epistemic spaces Wesw and Wsesw, F is a vector space isomorphic to the common vector space Le of weights.
Consequently, taking a weighted epistemic space E ⊆ Wesw, we obtain the vector bundle E = (E, pE, Ee) in which pE is the restriction of πes to E and Ee = πes(E).
In general, we have the set Ww of weighted epistemic structures
and the vector bundle E = (E, pE , Ee ) where E ⊆ Ww and Ee ⊆ W .
Assuming that Ee and the fiber F of the vector bundle E are
metric spaces with distances (metrics) d and dv , correspondingly, we
are able to define a distance d between elements (e; w1 , . . . , wn ) and
(l; u1 , . . . , um ) from the space E in the following way:

d((e; w1 , . . . , wn ), (l; u1 , . . . , um ))
= dv ((w1 , . . . , wn ), (u1 , . . . , um )) + d(e, l) (3.3)

when n = m;

d((e; w1 , . . . , wn ), (l; u1 , . . . , um ))
= dv ((w1 , . . . , wn ), (u1 , . . . , um , 0, . . . , 0)) + d(e, l) (3.4)

when n > m; and

d((e; w1 , . . . , wn ), (l; u1 , . . . , um ))
= dv ((w1 , . . . , wn , 0, . . . , 0), (u1 , . . . , um )) + d(e, l) (3.5)

when n < m.
In finite-dimensional vector spaces, we can take the Euclidean metric as dv, defining the distance dv((x1, . . . , xn), (y1, . . . , yn)) = \sqrt{(x_1 - y_1)^2 + \cdots + (x_n - y_n)^2}.
However, as we discussed before, it is natural to assume that the
fiber F is an infinite-dimensional vector space. In this case, we simply
postulate existence of a metric in it. Usually, metrics in vector spaces
are defined by norms (Rudin, 1991; Burgin, 2013). Note that in this
case, we use only formula (3.3) because all fibers Fa have the same
dimension.
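For illustration, the following Python sketch computes the distance given by formulas (3.3)–(3.5), assuming the Euclidean metric for dv, zero-padding for weight vectors of unequal length, and an arbitrary user-supplied base metric; the sample concepts and their weights are invented for the example.

from math import dist  # Euclidean distance between equal-length sequences


def weight_distance(w, u):
    """d_v between weight vectors; the shorter vector is padded with zeros,
    as in formulas (3.4) and (3.5)."""
    k = max(len(w), len(u))
    return dist(list(w) + [0.0] * (k - len(w)), list(u) + [0.0] * (k - len(u)))


def element_distance(x, y, base_metric):
    """d((e; w1,...,wn), (l; u1,...,um)) = d_v(weights) + d(e, l), cf. (3.3)."""
    (e, w), (l, u) = x, y
    return weight_distance(w, u) + base_metric(e, l)


def discrete_metric(a, b):
    """A toy base metric on epistemic elements: 0 if equal, 1 otherwise."""
    return 0.0 if a == b else 1.0


# Concepts weighted by (level of abstraction, fuzziness, connectedness).
lion = ("lion", (0.2, 0.1, 3.0))
animal = ("animal", (0.9, 0.3, 12.0))
print(element_distance(lion, animal, discrete_metric))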

Proposition 3.1.3. The distance d((e; w1 , . . . , wn ), (l; u1 , . . . , um ))


defines a metric in the space Ww .

Proof . By definition, d((e; w1 , . . . , wn ), (e; w1 , . . . , wn )) = 0. When


d((e; w1 , . . . , wn ), (l; u1 , . . . , um )) = 0, then d(e, l) = 0 and, thus,
e = l because d is a metric in W . Besides, dv ((w1 , . . . , wn ),


(u1 , . . . , um )) = 0 and thus, (w1 , . . . , wn ) = (u1 , . . . , um ) because
dv is a metric in a vector space. Consequently, d((e; w1 , . . . , wn ), (l;
u1 , . . . , um )) = 0 if and only if (e; w1 , . . . , wn ) = (l; u1 , . . . , um ).
The function d is symmetric because the function d is symmetric
in W , while the function dv is symmetric in the vector space F .
In addition, let us take arbitrary weighted (symbolic) epistemic
structures (e; w1 , . . . , wn ), (l; u1 , . . . , um ) and (h; v1 , . . . , vp ) from E
and denote d(e, l) = a, d(l, h) = b, d(e, h) = c, dv ((w1 , . . . , wn ),
(u1 , . . . , um )) = d, dv ((u1 , . . . , um ), (v1 , . . . , vp )) = k and
dv ((w1 , . . . , wn ), (v1 , . . . , vp )) = r. Then we have:

c ≤ a + b because d is a metric in W ,
r ≤ d + k because dv is a metric in the vector space F .

Consequently,

d((e; w1 , . . . , wn ), (h; v1 , . . . , vp )) = r + c
≤ (d + a) + (k + b) = d((e; w1 , . . . , wn ), (l; u1 , . . . , um ))
+ d((l; u1 , . . . , um ), (h; v1 , . . . , vp ))

i.e., the third axiom of metric spaces is true.


Proposition is proved.

Corollary 3.1.1. d((e; w1 , . . . , wn ), (h; v1 , . . . , vn )) ≥ d(e, h) and


d((e; w1 , . . . , wn ), (h; v1 , . . . , vn )) ≥ dv ((w1 , . . . , wn ), (v1 , . . . , vn ))

Corollary 3.1.2. If d((e; w1, . . . , wn), (h; v1, . . . , vn)) < k, then d(e, h) < k.

There are other ways to define metrics in the spaces Wesw and
Wsesw based on metrics in the base and fiber of the correspond-
ing vector bundle. For instance, it is possible to use the following
formulas:

d((e; w1, . . . , wn), (l; u1, . . . , um))
= \sqrt{dv((w1, . . . , wn), (u1, . . . , um))^2 + d(e, l)^2}    (3.6)
when n = m;

d((e; w1, . . . , wn), (l; u1, . . . , um))
= \sqrt{dv((w1, . . . , wn), (u1, . . . , um, 0, . . . , 0))^2 + d(e, l)^2}    (3.7)

when n > m; and

d((e; w1, . . . , wn), (l; u1, . . . , um))
= \sqrt{dv((w1, . . . , wn, 0, . . . , 0), (u1, . . . , um))^2 + d(e, l)^2}    (3.8)

when n < m.
Structures in the spaces Wes , Wses , Wesw , and Wsesw are inherited
by epistemic spaces and their states. In particular, a weighted epis-
temic space E and each of its states is a vector bundle E = (E, pE , Ee )
with the metric d in the base E.
We remind that a set X in a metric space E with a metric d is
called bounded if there is a number k such that for any points a and
b from X, d(a, b) < k.
To study bounded sets in metric spaces that are spaces of vector
bundles, we need additional concepts.

Example 3.1.16. Osgood, Suci and Tannenbaum (1978) define dis-


tance in semantic spaces (cf., Example 3.1.8) by the formula from
the m-dimensional Euclidean spaces:


d(e, l) = \sqrt{\sum_{j=1}^{m} d_{elj}^2}.

In this formula, m is the number of factors and delj is the difference


between the coordinates of the elements e and l with respect to the
same factor (dimension) j. In the most refined models, the number
m is equal to 3 (Osgood et al., 1967).
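A minimal computational illustration of this formula (the factor scores below are invented) is:

from math import sqrt


def semantic_distance(e, l):
    """Euclidean distance between two points of a semantic space given as
    equal-length sequences of factor scores (m = 3 in the refined models)."""
    return sqrt(sum((ej - lj) ** 2 for ej, lj in zip(e, l)))


# Hypothetical scores of two concepts on three factors, each on a -3..+3 scale.
print(semantic_distance((2.1, 0.5, -1.0), (1.0, -0.5, 0.5)))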
Let us consider a vector bundle E = (E, pE , Ee ) with the fiber F .

Definition 3.1.20. A set X ⊆ E is called rectangular in E if X =


{(b, u)|b ∈ Xe = pE (X), u ∈ F and for any a ∈ Xe and v ∈ F ((b, v) ∈
X ⇒ (a, v) ∈ X)}.
Example 3.1.17. Let us consider a trivial vector bundle H =


(H = {a, b, c} × R, ph , He = {a, b, c}). Then the set X =
{(a, 1), (a, 2), (b, 1), (b, 2), (c, 1), (c, 2)} is rectangular in H, while the
set Z = {(a, 1), (a, 3), (b, 1), (b, 5), (c, 1), (c, 3)} is not rectangular
in H.

Definition 3.1.21. If X ⊆ E, then the minimal rectangular in E


set R(X) that contains X is called the rectangular closure of X in E.

Proposition 3.1.4. The rectangular closure of a set in E always


exists and is unique.
Indeed, we can put
R(X) = {(a, u)|a ∈ Xe = pE (X), u ∈ F
and there is b ∈ Xe ((b, u) ∈ X)}.

Proposition 3.1.5. The operation of taking the rectangular closure


of a set in E is a closure operation in the sense of (Kuratowski, 1966)
on sets in metric spaces.
In particular, the operation of taking the rectangular closure of a
set in E is idempotent, i.e., R(R(X)) = R(X) for any X ⊆ E.

Proposition 3.1.6. A set X in E is rectangular if and only if X =


R(X).

Proof is left as an exercise.

Definition 3.1.22. If X ⊆ E, then the fiber projection σ(X) of X


is defined as follows:
σ(X) = {u | ∃b ∈ Xe ((b, u) ∈ X)}.
For instance, taking sets X and Z from Example 3.1.17, we see that:
σ(X) = {1, 2},
and
σ(Z) = {1, 3, 5}.
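These operations are straightforward to compute on finite sets; the following Python sketch (an illustration only, using the pair representation (base point, fiber value)) reproduces Example 3.1.17 and tests rectangularity via the characterization of Proposition 3.1.7.

def base_projection(X):
    """X_e = p_E(X): the base points occurring in X."""
    return {b for (b, _) in X}


def fiber_projection(X):
    """sigma(X) from Definition 3.1.22: the fiber values occurring in X."""
    return {u for (_, u) in X}


def rectangular_closure(X):
    """R(X) = X_e x sigma(X), the minimal rectangular set containing X."""
    return {(b, u) for b in base_projection(X) for u in fiber_projection(X)}


def is_rectangular(X):
    """By Proposition 3.1.7, X is rectangular iff X = X_e x sigma(X) = R(X)."""
    return set(X) == rectangular_closure(X)


X = {("a", 1), ("a", 2), ("b", 1), ("b", 2), ("c", 1), ("c", 2)}
Z = {("a", 1), ("a", 3), ("b", 1), ("b", 5), ("c", 1), ("c", 3)}
print(is_rectangular(X), sorted(fiber_projection(X)))  # True [1, 2]
print(is_rectangular(Z), sorted(fiber_projection(Z)))  # False [1, 3, 5]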
Definitions imply the following result.
Proposition 3.1.7. A set X in E is rectangular if and only if X =


Xe × σ(X).
Some properties of sets and their rectangular closures are the
same.

Proposition 3.1.8. A subset X of the space E of the vector bundle


E = (E, pE , Ee ) is bounded if and only if its rectangular closure is
bounded.

Proof . Sufficiency. By definition, any subset of a bounded set is


bounded.
Necessity. Let us assume that X is bounded. It means that there is a
positive number k such that d(x, z) < k for any two points x = (a, u)
and z = (b, v) from X where a, b ∈ Xe .
Let us take two points p and q from R(X). Then p = (c, w) and
q = (d, y) with c, d ∈ Xe . By the definition of the rectangular closure
R(X), there are points x and z from X such that x = (a, w) and z = (b, y)
with a, b ∈ Xe . By the properties of metric,
d(p, q) ≤ d(p, x) + d(x, z) + d(z, q).
By initial conditions, d(x, z) < k. At the same time, by the defi-
nition of the metric d and Corollary 3.1.2, we have:
d(p, x) = d((c, w), (a, w)) = d(c, a) < k,
and
d(z, q) = d((b, y), (d, y)) = d(b, d) < k.
Consequently,
d(p, q) < 3k.
Proposition is proved because p and q are arbitrary points from
R(X).
Having reduced the problem of boundedness to rectangular sets, we now find conditions of boundedness for rectangular sets.

Proposition 3.1.9. A rectangular subset X of the space E of the


vector bundle E = (E, pE , Ee ) is bounded if and only if the projection
Xe = pE (X) of X and the fiber projection σ(X) of X are uniformly


bounded.
Proof . Necessity. Let us assume that the projection Xe = pE (X)
of X is unbounded. It means that for any positive number k, there
are two points a and b in Xe such that d(a, b) > k. As Xe is the
projection of X, there are two points x and z in X such that a =
pE (x) and b = pE (z). By the definition of the metric in the space E,
d(x, z) ≥ d(a, b) > k. Consequently, X is also unbounded.
Now, let us suppose that the fiber projection σ(X) of X is not
uniformly bounded. It means that for any positive number k, there
are two points u and v in σ(X) such that dv (u, v) > k. As σ(X) is a
projection of X, there are points x = (a, u) and z = (b, v) from the
space X. By Corollary 3.1.1, d(x, z) ≥ k as by choice of the points u
and v, dv (u, v) > k. Thus, the space X is not bounded.
Then by the Law of Contraposition, if the space X is bounded,
then the projection Xe = pE (X) of X and the fiber projection σ(X)
of X are bounded.
Sufficiency. Let us suppose that the projection Xe = pE (X) of X
and the fiber projection σ(X) of X are bounded. It means that there
is a positive number k, such that for any two points a and b in Xe ,
we have d(a, b) < k and there is a positive number h, such that for
any two points u and v from σ(X), we have dv (u, v) < h.
Let us take two points x and z from X. Then x = (a, u) and
z = (b, v) where a, b ∈ Xe , while u and v belong to the fiber F of the
bundle E. As X is a rectangular set, points (a, u) and (b, v) belong
to X and by definition,
d(x, z) = d(a, b) + dv (u, v) < k + h.
Consequently, the set X is bounded.
Propositions 3.1.7–3.1.9 imply the following result.

Corollary 3.1.3. A subset X of the space E of the vector bundle


E = (E, pE , Ee ) is bounded if and only if the projection Xe = pE (X)
of X and the fiber projection σ(X) of X are uniformly bounded.
Note that here we study weighted epistemic structures with real
number weights and weighted epistemic space in which weights form
real vector spaces. However, using the same technique, it is possible to


obtain similar results for weighted epistemic structures with complex
number weights or vector weights and for weighted epistemic space
in which weights form complex vector spaces.

3.2. Knowledge evaluation, justification, and testing

It is better to learn late than never.


Publilius Syrus

In the context of epistemic structures in general and knowledge, in


particular, evaluation means finding properties or values of properties
of these structures (knowledge). In a more strict sense, knowledge
evaluation also means evaluation of epistemic structures properties
related to knowledge, e.g., properties that allow discerning knowledge
from other epistemic structures.
In the context of epistemology, (knowledge) justification means
giving sound grounds for holding a belief (Pollock, 1974). Epistemo-
logical justification is a normative notion.
Thus, we see that justification is evaluation of the property “to be justified”, i.e., justification is a kind of evaluation. In spite of this, we
separately consider justification because of the long-standing philo-
sophical tradition related to the notion of justification.
Testing an epistemic structure (knowledge) is a process, or a
procedure that controls such a process, aimed at answering some
question about properties of this structure (knowledge) (Burgin and
Debnath, 2007). Thus, testing represents a class of processes or
procedures for evaluation.

3.2.1. Knowledge evaluation


As knowledge evaluation is evaluation of knowledge properties in the
context of epistemic structures, it means finding whether a knowledge
item has some properties (in the classical (set-theoretical) interpreta-
tion of properties) or what is the value of a property (in the sense of
the theory of abstract properties, cf., Chapter 5 and (Burgin, 1985;
1986)) for this knowledge item.
There are different types and kinds of evaluation in general and


knowledge evaluation, in particular. Taking into account processes of
(knowledge) evaluation, we get the procedural typology of evaluation:

• Testing or direct evaluation (of knowledge properties) is a proce-


dure performed with the object of evaluation (with a knowledge
item/system) aimed at evaluation of properties of this object.
• Representational evaluation (of knowledge properties) is finding
properties of the object of evaluation (of a knowledge item/system)
by performing operations with a representation of this object.
• Relational evaluation (of knowledge properties) is finding proper-
ties of the object of evaluation (of a knowledge item/system) by
comparing this object (knowledge item) to another object of the
same type (knowledge item/system).

An example of knowledge testing is finding whether a given scien-


tific theory or knowledge base is consistent by comparing postulates
and theorems of this theory (units of this knowledge base).
An example of representational knowledge evaluation is finding
whether a given scientific theory or knowledge base is consistent by
representing this theory (knowledge base) as a logical calculus (logi-
cal variety) and testing its consistency.
An example of relational knowledge evaluation is finding whether
a given scientific theory or knowledge base is consistent by comparing
this theory (knowledge base) to a consistent scientific theory (knowl-
edge base).
Social and individual practice of dealing with epistemic structures
in general and knowledge, in particular, shows that there are three
operational types of testing (direct evaluation):

➢ Testing by computation;
➢ Testing by inference;
➢ Testing by application.

When a computer or a network is used to find knowledge about


a given object, e.g., some person, in a knowledge base, it is testing
by computation.
Figure 3.9. The Representational Tetrad: a knowledge system related to its model (metaknowledge) by modeling, to its knowledge domain by interpretation, and to another knowledge system by similarity

When a scientist tries to find out whether an already known law of nature follows from his theory, it is testing by inference.
When an engineer tries to find out whether he can use some physical theory
for engineering problems, he applies this theory to his problems and
it is testing by application.
Similar to testing, representational knowledge evaluation also has
three types because it is possible to consider three types of represen-
tations for a knowledge system — a model of this knowledge system
(metaknowledge), the domain of reference of this knowledge system, i.e., the domain in which this knowledge system is interpreted, and representation by a similar knowledge system (cf., Figure 3.9). It
gives us the representational classification:

• Representational evaluation by a model;


• Representational evaluation by the domain of reference;
• Representational evaluation by a similar knowledge system.

When a scientist formalizes his theory as a logical calculus (logical


variety) to find its formal properties, it is evaluation by a model.
When a physicist checks his theory against experimental data, it
is evaluation by the domain of reference.
When a scientist wants to show that his new theory is better than
the existing theory, he compares both theories and this is evaluation
by a similar knowledge system.
In addition to procedural types, there are also three target types
of (knowledge) evaluation:

➢ General evaluation is finding values of definite properties of a


given object (of a knowledge item in our case);
➢ Validation is finding whether a given object (a knowledge item in
our case) has definite properties;
➢ Cause evaluation is finding causes for definite properties of a given


object (of a knowledge item in our case).

An example of general evaluation is finding computational com-


plexity of an algorithmic problem as an algorithmic problem is a kind
of operational knowledge.
An example of validation is finding whether a scientific theory
or knowledge base is complete with respect to a given domain. To
do this, scientists perform various experiments with the domain and
explore if a given theory explains all encountered phenomena.
An example of cause evaluation is finding why a scientific theory
or knowledge base is inconsistent.
There are also three aspect types of (knowledge) evaluation:

➢ Meaning evaluation is evaluation of meaning of a given object


(of a knowledge system in our case);
➢ Representation evaluation is evaluation of a representation of a
given object (of a knowledge system in our case);
➢ Structure evaluation is evaluation of the structure of a given
object (of a knowledge system in our case).

An example of meaning evaluation is finding meaning of theoreti-


cal terms, which was one of the basic tasks for logical positivism (Carnap,
1928).
An example of representation evaluation is finding why a scientific
theory or knowledge base is inconsistent.
An example of structure evaluation is finding whether a model of
a scientific theory or knowledge base is adequate.
As the general theory of evaluation shows, the process of eval-
uation has three main stages: preparation, realization, and analysis
(Burgin and Kavunenko, 1994).
Preparation demands the following operations to achieve correct
and sufficiently exact evaluation:

1. Choosing evaluation criteria.


2. Corresponding characteristics (indices) to each of the chosen
criteria.
3. Representing characteristics by indicators (estimates).
Figure 3.10. The Attributive Triad: Criterion → Index → Indicator

This shows that a complete process of evaluation preparation has


the structure of the Attributive Triad (cf., Figure 3.10), in which
nomological tools for evaluation — evaluation criteria, indices, and
indicators — are prepared.
A specific realization of this process is the GQM (Goal-Question-Metric) approach to software measurement (Basili and Rombach, 1988), which is a case of operational knowledge evalua-
tion and in which software metrics play the role of evaluation indices.
According to GQM, creation of software indicators (software metrics)
for software evaluation has to include the following three stages:

1. Setting goals specific to needs in terms of purpose, perspective,


and environment.
2. Refinement of goals into quantifiable tractable questions.
3. Deducing the metrics and data to be collected (as well as the means
for their collection) to answer the questions.
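The three stages can be recorded as a simple nested structure; in the following sketch, the goal, questions, and metric names are invented for the illustration.

# Stage 1: a goal stated with purpose, perspective, and environment.
# Stage 2: quantifiable questions refining the goal.
# Stage 3: metrics (with the data to collect) that answer each question.
gqm_plan = {
    "goal": "Assess reliability of the release build "
            "(purpose: assess; perspective: maintainer; environment: web service)",
    "questions": [
        {"question": "How often does the service fail in production?",
         "metrics": ["failures per week", "mean time between failures"]},
        {"question": "How quickly are failures repaired?",
         "metrics": ["mean time to repair", "open defect count"]},
    ],
}

for q in gqm_plan["questions"]:
    print(q["question"], "->", ", ".join(q["metrics"]))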

Thus, the first stage in evaluation demands to determine a specific


criterion for evaluation. This criterion gives the goal of evaluation.
For instance, criteria of good software include such properties as reli-
ability, adequacy, exactness, completeness, convenience, user friend-
liness, etc. However, such properties are also directly immeasurable
and to estimate them, it is necessary to use corresponding indices and
indicators. However, a criterion can be too general for direct estima-
tion. This causes necessity to introduce more specific properties of
the evaluated object. To get these properties, quantifiable tractable
questions are formulated. Such properties play the role of indices for
this criterion. Thus, the second stage of evaluation consists of index
selection that reflects criteria. Sometimes an index can coincide with
the corresponding criterion, or a criterion can be one of its indices.
However, in many cases, it is impossible to obtain exact values for the
chosen indices. For instance, we cannot do measurement with abso-
lute precision. What is possible to do is only to get some estimates of
indices. Consequently, the third stage includes obtaining estimates or


indicators for selected indices. In the case of software, these indica-
tors have form of software measures. With respect to software quality
such indicators are called software metrics.
A similar approach was suggested by Belchior, Xexéo, and da
Rocha (1996) in their hierarchical software quality evaluation model
(SQEM). This model is based on four main concepts: objectives or
goals, factors, criteria, and evaluation processes. Quality objectives or
goals form criteria and represent important properties that a prod-
uct should possess. Each goal is decomposed into definite factors,
which are sometimes further decomposed into more detailed sub-
factors. Factors and subfactors play the role of indices and define
different users’ perspectives about the quality of a software product.
After this, it is necessary to convert obtained indices into correspond-
ing indicators.
However, a software metric (indicator) is useful only when there
are corresponding procedures/algorithms of measurements. Thus,
we need to add further stages to metric development, which then includes six stages — the three listed above and three new ones.

4. Designing procedures/algorithms for data collection.


5. Designing procedures/algorithms for computing metrics values.
6. Designing procedures/algorithms for analysis of measurement
results.

It is necessary to remark that measuring algorithms are usually


recursive, while operational knowledge, e.g., computer programs, is a
system of dynamic objects, which are usually changing during their
life cycle. For instance, to facilitate the necessary changes for com-
puter software, many software update services are suggested on the
Web. That is why super-recursive algorithms would be more efficient
and flexible for evaluation (measurements) of operational knowledge
in software engineering (Burgin and Debnath, 2009).
The dynamics of knowledge evaluation is reflected by the Evalu-
ation Triad (cf., Figure 3.11).
Let us give an example of this triad. Assume that a ball is taken
from an urn with green and blue balls. Then it is possible to consider
Figure 3.11. The Evaluation Triad: Property → Measure → Measurement

such a property of a ball as “to be blue.” Then the corresponding


measure is the probability that a randomly taken ball is blue. In
this case, the measurement procedure is extraction of a ball and
observation to determine its color.
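A small simulation (the urn composition is invented for the illustration) shows the triad in action: the property is “to be blue”, the measure is the probability of drawing a blue ball, and the measurement is repeated drawing and observation.

import random

random.seed(1)
urn = ["blue"] * 30 + ["green"] * 70      # illustrative composition
draws = [random.choice(urn) for _ in range(10_000)]
estimate = draws.count("blue") / len(draws)
print(f"estimated P(blue) = {estimate:.3f} (exact value 0.3)")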
The Evaluation Triad goes after the Attributive Triad because
the property with which the Evaluation Triad starts is in essence the
indicator from the Attributive Triad.
Studying epistemic structures and their important case —
knowledge, researchers have evaluated them from different perspectives.
The most popular one has been the representation perspective
because epistemic structures in general and knowledge in particu-
lar always represent or reflect some domain. Thus, it is possible to
estimate relevance of this representation (reflection) to the repre-
sented (reflected) domain, e.g., to physical reality. This brings us to
the cognitive scale, in which epistemic structures form the representa-
tional continuum situated in the relevance dimension and presented
in Figure 3.12. All elements from the representational continuum are
called impressions. Informally an impression is an epistemic structure
with an estimate, such as true, or false, relevant or irrelevant, cor-
rect or incorrect, grounded or ungrounded. Note that impressions are
not only statements or propositions, but they may be, for example,
images or procedures.
In the representational continuum, impressions are ordered by
the degree of relevance so that impressions with the higher relevance
in them lie to the right of impressions with the lower relevance in
them. When the degree of relevance becomes sufficiently high, the
corresponding structures from the representational continuum are

Figure 3.12. The representational continuum: knowledge, impressions, and misconceptions/delusions ordered between complete relevance and complete irrelevance


called knowledge. When the degree of relevance becomes very low,


the corresponding structures from the representational continuum
are called misconceptions and delusions.
This shows that it is possible to discern these groups of impres-
sions either based on the factual situation or on the evaluation that
exists in the corresponding knowledge system. For instance, a person
can think that he knows something while by objective estimates it
will be only a bunch of misconceptions. When a man in a desert,
deprived of water and thirsty, has a mirage of a well, very often this
man thinks that he knows that very soon he will be able to drink,
while in reality, it is a misconception that he is seeing water.
Examples of truth-valued impressions are propositions, state-
ments, assertions, utterances, sentence-types, sentence-tokens,
beliefs, opinions, theories, doctrines, facts, etc.
Examples of beneath-truth-valued impressions are words, naming
expressions (such as big mountain, three monkeys, or pretty dolls),
exclamations (“Ouch”, “Oops”), and numbers (seventy-seven, three hundred and five).
There are different evaluation criteria for impressions:

— Correctness, which is traditionally called truth, reflects the relation of knowledge or, more generally, of an impression to the domain this impression (knowledge) reflects.
— Confidence by assurance reflects to what extent the agent is certain of the validity of an impression.
— Groundedness by evidence reflects the extent to which the impression is supported by evidence.

For instance, we have an impression that the weather will be good tomorrow. This prognosis is justified by the weather forecast, but only tomorrow will we know whether this impression is correct and to what extent. If we strongly believe in the weather forecast, our confidence by assurance is high. If we know the probability of correct weather fore-
casting, then the groundedness by evidence is equal to this probabil-
ity, e.g., if this probability is 70%, then the groundedness by evidence
is equal to 70%.
There are external impressions formed under the impact of infor-


mation coming from external sources and there are internal impres-
sions formed under the impact of information coming from internal
sources. For instance, according to contemporary science, dreams give
internal impressions, while vision gives external impressions. It is nec-
essary to remark that there are other opinions that assume existence
of external dreams, i.e., dreams that convey information from exter-
nal sources.
Functional typology and other classifications of impressions are
considered in Chapter 2. Here we describe only epistemological tax-
onomy, which is related to the epistemological scale.
Another approach to epistemic structures is represented by the
confidence perspective because epistemic structures in general and
knowledge in particular are evaluated by cognitive systems, such as
people, by believing in these epistemic structures. Thus, it is possible
to estimate the degree of believing building the confidence/certainty
scale, in which epistemic structures form the confidence/certainty
continuum presented in Figure 3.13.
All elements from the confidence/certainty continuum are called
beliefs. They are ordered by the certainty (confidence) that they are
true, correct or relevant as a degree of believing. In this ordering,
beliefs with the higher certainty (confidence) lie to the right of beliefs
with the lower certainty (confidence). When the degree of believ-
ing (confidence or certainty) that the belief is true, correct or rel-
evant becomes sufficiently high, the corresponding structures from
the confidence/certainty continuum are called knowledge. The max-
imal uncertainty is achieved in the middle of the scale where cer-
tainty of truthfulness, correctness or relevance reaches zero. When
the degree of believing (confidence or certainty) that the belief is

Figure 3.13. The confidence/certainty continuum: knowledge, beliefs, and fantasies ordered from complete believing, i.e., complete certainty of (confidence in) truthfulness, correctness or relevance, through maximal uncertainty, to complete disbelieving, i.e., complete certainty of (confidence in) falseness, incorrectness or irrelevance


untrue, incorrect or irrelevant becomes sufficiently high, the cor-


responding structures from the confidence/certainty continuum are
called misconceptions or delusions.
Note that it is also possible to treat certainty of (confidence
in) falseness, incorrectness, or irrelevance as negative certainty of
truthfulness, correctness, or relevance. When certainty is represented
by subjective probability, we come to negative probabilities (Dirac,
1930; 1942; 1974; Heisenberg, 1931; Wigner, 1932; Pauli 1943; 1956;
Feynman, 1950; 1987; Burgin and Meissner, 2010; 2012).
As elements in the confidence continuum are also impressions, it
is possible to classify them according to this scale. Namely, we have
the following descriptions:
Beliefs are impressions that are grounded (justified) to some extent.
Knowledge consists of impressions that are (or are considered) highly
grounded (justified).
Fantasies are impressions that are (or are considered) almost or com-
pletely ungrounded (baseless).
It is possible to define these concepts in a more exact
and detailed way discerning three approaches — the object-
dependent, reference-dependent, and attitude-dependent approaches.
The object-dependent approach differentiates beliefs according to
their correspondence to the domain (object).
Definition 3.2.1. A unit B of individual validated knowledge (of
collective validated knowledge) about a domain D (objects from the
domain D) is the epistemic structure from Figure 3.1 or 3.2 that is
highly justified and true.
This is the traditional perspective on knowledge as a specific kind
of beliefs.
Definition 3.2.2. A unit B of specific validated belief (of general
validated belief) about a domain D (objects from the domain D) is
the epistemic structure from Figure 3.1 or 3.2 that is sufficiently
justified.
As Bem (1970) writes, “beliefs and attitudes play an important
role in human affairs. And when public policy is being formulated,
beliefs about beliefs and attitudes play an even more crucial role.”
As a result, beliefs are thoroughly studied in psychology and logic.
Belief systems are formalized by logical structures that introduce
structures in belief spaces and calculi, as well as by belief measures
that evaluate attitudes to cognitive structures and are built in the
context of fuzzy set theory. There are well-developed methods of logics of
beliefs (cf., for example, (Munindar and Nicholas, 1993) or (Baldoni
et al., 1998)) and belief functions (Shafer, 1976). Logical methods,
theory of possibility, fuzzy set theory, and probabilistic technique
form a good base for building CIF systems in computers.

Definition 3.2.3. A unit M of specific validated fantasy (of general


validated fantasy) about a domain D (objects from the domain D)
is the epistemic structure from Figure 3.1 or 3.2 that is not justified or for which it is even justified that it is not true.
For instance, looking at the Moon in the sky, we know that we
see the Moon. We can believe that we will be able to see the Moon
tomorrow at the same time and we can fantasize how we will walk
on the Moon next year.
According to the attitude-dependent approach, we have the fol-
lowing definition.

Definition 3.2.4. A unit B of specific assumed knowledge (of general


assumed knowledge) for a system R about a domain D (objects from
the domain D) is the epistemic structure from Figure 3.1 or 3.2 that
is estimated (believed) by the system R to represent D (objects from
D) with the high extent of confidence.
We see that in contrast to the object-dependent approach, in the
attitude-dependent approach, knowledge is relative depending on the
system it belongs to. What is knowledge for one system may be
fantasy to another one. For instance, what was knowledge about
atoms for Democritus is a pure fantasy for a contemporary physicist.
In a similar way, we define basic structures for belief units and
fantasy units.

Definition 3.2.5. A unit B of specific assumed belief (of general


assumed belief) for a system R about a domain D (objects from the
domain D) is the epistemic structure from Figure 3.1 or 3.2 that is


estimated (believed) by the system R to represent D (objects from
D) with the moderate extent of confidence.
We see that what is assumed as belief by a system does not nec-
essarily coincide with a validated belief and vice versa.

Definition 3.2.6. A unit B of specific assumed fantasy (of general


assumed fantasy) for a system R about a domain D (objects from
the domain D) is the epistemic structure from Figure 3.1 or 3.2 that
is estimated (believed) by the system R to represent D (objects from
D) with the low extent of confidence or without any confidence.
Note that assumed fantasy for some people may be validated
knowledge for other people. For instance, knowledge of contempo-
rary people about radio and television would be an assumed fantasy
for people who lived in the 17th century.
If confidence depends on some validation system, then there is a
correlation between the first and the second stratification of epistemic
structures into three groups — knowledge, beliefs, and fantasy.
According to the reference-dependent approach, distinctions
between knowledge, beliefs and fantasies depend not on the object
domain but are estimated by comparison to another epistemic, e.g., knowledge, system. For instance, to show the validity of their theories,
physicists often compare new theories to existing theories validity of
which has been justified by a diversity of experiments.
One more perspective on epistemic structures is represented by
the activity perspective because epistemic structures in general and
knowledge in particular are assessed by their role in achieving some
goals. Thus, it is possible to estimate the measure of efficiency build-
ing the operational scale, in which epistemic structures form the prag-
matic continuum presented in Figure 3.14.

Figure 3.14. The pragmatic continuum: knowledge, schemas, and wrong schemas ordered between complete efficiency and complete inefficiency


All elements from the pragmatic continuum are called schemas.


They are ordered by the degree of efficiency so that schemas with
the higher efficiency lie to the right of schemas with the
lower efficiency. When the degree of efficiency becomes sufficiently
high, the corresponding structures from the pragmatic continuum
are called knowledge. When the degree of efficiency becomes very
low, the corresponding structures from the pragmatic continuum are
called wrong schemas.
Organization of knowledge evaluation is based on three knowledge
perspectives:

— The objective knowledge perspective is defined by the correlation


with (closeness to) the real (true) situation.
— The subjective knowledge perspective is defined by the strength of
the belief in truthfulness.
— The verification knowledge perspective is defined by the extent of
the supportive evidence.

Each perspective can be estimated by a corresponding measure


or by a system of measures. For instance, it is possible to estimate
knowledge from the objective knowledge perspective by the mea-
sure of domain related truthfulness considered later in this chap-
ter. For measures reflecting the subjective knowledge perspective, it
is viable using various psychological techniques for measuring the
strength of beliefs. Supportive evidence in the verification knowl-
edge perspective can be measured by its strength, groundedness, and
extent.
There are different systems of evaluation, validation, and justi-
fication:

— science, for which the main validation technique is experiment;


— mathematics, which is based on logic with its deduction and
induction;
— religion with its postulates and creeds;
— history, which is based on historical documents and archeological
discoveries.
Knowledge has been always connected to truth. Namely, the con-


dition, that a person knows something, P , always implied that P is
true (cf., for example, (Pollock and Cruz, 1999)). However, the his-
tory of the humankind shows that knowledge is temporal, that is,
what is known at some time can be disproved later. For instance,
for a long time people believed that the Earth was flat. However,
several centuries ago it was demonstrated that this is not true. In a
similar way, for a long time, philosophers and scientists knew that
the universe always existed. However, in the 20th century it was
demonstrated that this is not true. Modern cosmology assumes that
the universe erupted from an enormously energetic, singular event
metaphorically called “big bang”, which spewed forth all the space
and all of matter. As time passed, the universe expanded and cooled.
A more recent example tells us that before 1994, physicists
knew that the gravitational constant GN had a value between
6.6709 · 10−11 and 6.6743 · 10−11 m3 kg−1 s−2 (Montanet et al., 1994). Since then, three new measurements of GN have been performed
(Kiernan, 1995) and we now have four numbers as tentative val-
ues of the gravitational constant GN , which do not agree with each
other.
These examples show that knowledge has the temporal
modality — what is true at some time can become false later, while
what is considered false can become valid knowledge in the future.
We see that it is possible to speak about false knowledge and knowl-
edge of some time, say knowledge of the 19th century. It means that
there is subjective knowledge, which can be defined as beliefs with the
high estimation that they are true, and objective knowledge, which
can be defined as beliefs that correspond to what is really true.
Different approaches to epistemic structure differentiation, e.g.,
finding the difference between knowledge, beliefs, and fantasies, demand
different criteria, measures, indicators, procedures, and techniques
for evaluation. For instance, in the reference-dependent approach,
we measure truthfulness of knowledge not with respect to an object
domain, but with respect to another knowledge system, e.g., the-
saurus. For instance, let us consider a situation when one person A
gives some information I to another person B. Then it is viable to
measure truthfulness of I with respect to A or more exactly, to the


system of knowledge TA of A.
To achieve this goal, we use some measure cr(T, D) of correlation
or consistency between knowledge systems T and D. Let TA be the
system of knowledge of A and TB be the system of knowledge of B.

Definition 3.2.7. The function cr(I, TA ) = cr(I(TB ), TA ) −


cr (TB , TA ) is called a measure of relative truthfulness of knowledge.
Object-related truthfulness works when the real situation is
known. However, it is impossible to compare knowledge directly with
a real system. So, in reality, we always compare different systems
of knowledge. It means that the object-dependent approach can be
reduced to the reference-dependent approach. However, one system of
knowledge can be closer to reality than another system of knowledge.
In this case, we can assume that the corresponding truthfulness is
object related (at least, to some extent). For instance, in the object-
dependent approach, it is possible to compare theoretical knowledge
to experimental data.
This assumption is used as the main principle of science: it is pre-
supposed that correct experiment gives true knowledge about reality
and to find correctness of a model or theory, we need to compare the
model or theory with experimental data.
Such an approach to knowledge is developed in the externalist
theories of knowledge (Pollock and Cruz, 1999).

Example 3.2.1. Let us consider the system TA of knowledge of a


person A. The system TSA is generated from all (some) statements
of A. Then the value of the function tr(TSA , TA ) reflects sincerity
of A, while the value of the function tr(TSA , TA ) reflects sincerity of
information I given by A.

Example 3.2.2. System related truthfulness is useful for estimating


statements of witnesses. In this case, the measure of inconsistency
incons(TA , TB ) between TA and TB is equal to the largest number
of contradicting pairs (pA , pB ) of simple statements pA from TA and
pB from TB when pA and pB are related to the same object or event.
The measure of inconsistency incons(TA , TB ) between TA and TB
determines several measures of consistency cons(TA, TB) between TA and TB. One of these measures of consistency cons(TA, TB) between TA and TB is defined by the formula

cons(TA, TB) = 1/(1 + incons(TA, TB)).    (3.9)

It is possible to normalize the measure of inconsistency incons(TA, TB), defining inconsN(TA, TB) as the largest ratio of the number of contradicting pairs (pA, pB) to the number of all pairs (pA, pB) of simple statements pA from TA and pB from TB when pA and pB are related to the same object or event. This measure generates the corresponding normalized consistency measure consN(TA, TB) by the formula (3.9). Another normalized consistency measure consN(TA, TB) is defined by the formula (3.10).

consN(TA, TB) = 1 − inconsN(TA, TB).    (3.10)
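A simplified sketch of these measures, which merely counts contradicting pairs of simple statements about the same object (the witness statements and the contradiction test below are invented for the illustration):

def inconsistency(T_A, T_B, contradicts):
    """Count of contradicting pairs (p_A, p_B) of simple statements."""
    return sum(1 for p in T_A for q in T_B if contradicts(p, q))


def consistency(T_A, T_B, contradicts):
    """Formula (3.9): cons = 1 / (1 + incons)."""
    return 1.0 / (1.0 + inconsistency(T_A, T_B, contradicts))


def normalized_consistency(T_A, T_B, contradicts):
    """Formula (3.10) with the normalized inconsistency: the ratio of
    contradicting pairs to all compared pairs."""
    total = len(T_A) * len(T_B)
    incons_n = inconsistency(T_A, T_B, contradicts) / total if total else 0.0
    return 1.0 - incons_n


def same_object_different_claim(p, q):
    """Statements are (object, claimed value); two statements contradict when
    they concern the same object but claim different values."""
    return p[0] == q[0] and p[1] != q[1]


T_A = {("car", "red"), ("driver", "male")}
T_B = {("car", "blue"), ("driver", "male")}
print(consistency(T_A, T_B, same_object_different_claim))             # 0.5
print(normalized_consistency(T_A, T_B, same_object_different_claim))  # 0.75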
Relations of a portion of knowledge I to its object OI reflect
important properties of this knowledge. One of the most important
relations of this kind is truth, i.e., correct representation of OI by
I. We can say that knowledge is true, or factual, if it allows one to
build true knowledge. However, it is necessary to understand that in
the context of the general theory of information, truth is only one of
many properties of knowledge. Thus, we consider here a more general
situation and distinguish true knowledge as a specific kind of genuine
knowledge (cf., Definition 3.2.2).
It is reasonable to call cognitive knowledge true when it gives
knowledge with a high level of validity. However, genuine knowledge
is not always true knowledge. For instance, the Hartley measure of
uncertainty (entropy) of an experiment E that can have k outcomes
is equal to log2 k. Then it is possible to measure knowledge I in a
message M that tells us that the experiment E can have h outcomes
as m(I) = log2 k − log2 h. The message and consequently, knowledge
I can be false, i.e., it is not true that the experiment E can have h
outcomes, but if h is less than k, knowledge I is genuine with respect
to the chosen measure.
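A worked instance of this measure with invented numbers of outcomes:

from math import log2

k, h = 16, 4                 # outcomes before and after the message
m_I = log2(k) - log2(h)      # m(I) = log2 k - log2 h
print(m_I)                   # 2.0: genuine with respect to this measure,
                             # even if the message later proves to be false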
Treating knowledge as the quality of a message that is sent from
the sender to the receiver, we can see that knowledge does not have
to be accurate. It may be conveying a truth or a lie, or just be


something intangible. Even a disruptive noise used to inhibit the
flow of communication and create misunderstanding would in this
view be a form of knowledge. That is why, the problem of measuring
knowledge accuracy is so important for knowledge and library science
(cf., for example, (Fallis, 2004)).
The measure tr(T , D) of correctness (validity) allows us to define
truthfulness for knowledge. If we have a thesaurus T and a unit of
knowledge K, then the truthfulness of K about an object domain D
is given by the formula
tr (K, D) = tr (K(T ), D) − tr (T , D).
Here K(T ) is the thesaurus T after it receives/processes knowl-
edge K. The difference shows the impact of K on T .

Definition 3.2.8. The function tr(K, D) is called a measure


of domain related truthfulness or domain related correctness of
knowledge.
Domain related truthfulness or correctness of a knowledge item K
reflects changes K makes in a thesaurus. This property is related to
accuracy of knowledge, which measures the changes in the distance
of the initial knowledge to the absolutely exact knowledge.
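The following Python sketch illustrates Definition 3.2.8 under two hypothetical simplifications: tr(T, D) is taken to be the fraction of the domain’s facts represented in the thesaurus, and K(T) is modeled as the thesaurus enlarged by the facts carried by K.

def tr(thesaurus, domain_facts):
    """Stand-in validation function tr(T, D): fraction of the domain's
    facts that the thesaurus represents (illustrative only)."""
    return len(thesaurus & domain_facts) / len(domain_facts)


def domain_truthfulness(K, T, domain_facts):
    """tr(K, D) = tr(K(T), D) - tr(T, D); here K(T) is modeled as T | K."""
    return tr(T | K, domain_facts) - tr(T, domain_facts)


domain_facts = {"f1", "f2", "f3", "f4"}
T = {"f1", "f2"}             # thesaurus before receiving K
K_true = {"f3"}              # knowledge item adding a correct fact
K_noise = {"not-a-fact"}     # knowledge item adding nothing about D
print(domain_truthfulness(K_true, T, domain_facts))   # 0.25
print(domain_truthfulness(K_noise, T, domain_facts))  # 0.0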
To measure domain related truthfulness, we explicate the struc-
ture of a knowledge item using Diagrams (3.1) and (3.2), which give
the following diagram for knowledge structure.
              e
     QA --------- PA
      |            |
      q            p          (3.11)
      |            |
      A ---------- NA
              n

Here NA is a name of the object A and QA is a feature (an intrinsic


property) of A, e.g., if A is a book, then NA is usually the title of
A, the intrinsic property QA may be the year of its publication or
the author, while the attribute (ascribed property) PA is the cognitive
representation of QA . In our case, when QA is the year of publication,
then PA is the number that represents this year, e.g., 2012, or if QA


is the author, PA is the first and the last names of the author.
The essential part of the knowledge structure is the cognitive or
symbolic component of knowledge. It is the ascribed property of the
object A or in general, of objects from the knowledge domain U .
The cognitive component of knowledge renders information about
the object A or the knowledge domain U . It is reflected by Diagram
(3.12).
         p
   NA ------- PA.          (3.12)
In addition, any knowledge system has two parts: informational and substantial. The component (A, q, QA) is its substantial part, while all other elements, that is, the component (NA, p, PA) and the two relations e and n, form its informational part. Cognitive parts of knowledge form knowledge systems per se as abstract structures, while adding substantial parts forms extended knowledge systems.
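As an illustrative (not canonical) rendering, the components of Diagram (3.11) for a single object can be collected in a simple container:

from dataclasses import dataclass
from typing import Any


@dataclass
class KnowledgeItem:
    """The components of Diagram (3.11) for a single object."""
    obj: Any         # A: the object itself (substantial part)
    name: str        # N_A: a name of the object
    intrinsic: Any   # Q_A: an intrinsic property, e.g., the year of publication
    ascribed: Any    # P_A: the ascribed property representing Q_A, e.g., 2012


item = KnowledgeItem(obj="a particular book", name="Some Title",
                     intrinsic="year of publication", ascribed=2012)
print(item.name, item.ascribed)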
To evaluate the degree to which a given knowledge is true, we
use Diagram (3.11), validating correspondences e, q, and p. There
are different systems of knowledge validation. Science, for which the
main validation technique is experiment, is a mechanism developed
for cognition of nature and validation of obtained knowledge. Math-
ematics, which is based on logic with its deduction and induction,
is a formal device constructed for empowering cognition and knowl-
edge validation. Religion with its postulates and creeds is a specific
mechanism used to validate knowledge by its compliance with reli-
gious postulates and creeds. History, which is based on historical
documents and archeological discoveries, is also used to obtain and
validate knowledge about the past of the society. Each system of
validation induces a corresponding validation function tr(T, D) that
gives a quantitative or a qualitative estimate of the truthfulness of
knowledge T about some domain D. Note that the domain D can
consist of a single object F .
If a validation function tr(T, F ) is defined for separate objects,
then it is possible to take some unification of all values tr(T, F ) rang-
ing over all objects F in the domain D to which knowledge T can be
related. This allows us to obtain a truthfulness function tr(T, D) for
systems of knowledge. In general, such unification of values tr(T, F )


is performed by some integral operation in the sense of (Burgin and
Karasik, 1976; Burgin, 1982).

Definition 3.2.9. (a) An integral operation W on the set R of real numbers is a mapping that assigns a number from R to a subset of R and satisfies, for any x ∈ R, W({x}) = x.
(b) A finite integral operation W on the set R of real numbers is a mapping that assigns a number from R to a finite subset of R and satisfies, for any x ∈ R, W({x}) = x.
As a rule, integral operations are partial. That is, they attach
numbers only to some subsets of R. At the same time, it is possible
to define integral operations in arbitrary sets.
Examples of integral operation are: summation, multiplication,
taking the minimum or maximum, determining the infimum or supre-
mum, evaluating integrals, taking the first element from a given sub-
set, taking the sum of the first and second elements from a given
subset, and so on.
Examples of finite integral operation are: summation, multiplica-
tion, taking minimum, determining maximum, calculating the aver-
age or finite weighted average for finite sets, taking the first element
from a given finite subset, and so on.
The following integral operations are the most relevant to the
problem of knowledge truthfulness estimation:

(1) taking the average value;


(2) taking the minimal value;
(3) taking the maximal value.
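For a finite domain, these operations reduce to elementary aggregations of the per-object values tr(T, F); a tiny sketch with invented values:

tr_values = [0.9, 0.7, 0.95, 0.8]   # hypothetical tr(T, F) for four objects F

print(sum(tr_values) / len(tr_values))  # (1) the average value
print(min(tr_values))                   # (2) the minimal value
print(max(tr_values))                   # (3) the maximal value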

Let us consider some measures of correctness.

Example 3.2.3. One of the most popular measures tr(T, D) of truthfulness is the correlation between the experimental data related to an object domain D and the data related to D that are stored in the knowledge system T.
Correlation r is a bivariate measure of association (strength) of
the relationship between two sets of corresponded numerical data.
It varies either from 0, which indicates no relationship or a random relationship, to 1, which indicates a strong relationship, or from −1, which indicates a strong negative relationship, to 1, which indicates a strong positive relationship. Correlation r is usually presented in terms
of its square (r 2 ), interpreted as percent of variance explained. For
instance, if r 2 is 0.1, then the independent variable is said to explain
10% of the variance in the dependent data. However, as experts think,
such criteria are in some ways arbitrary and must not be observed
too strictly.
The correlation coefficient, often called the Pearson product–moment correlation coefficient, is a measure of linear correlation and is given by the formula:

r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}.

In the context of knowledge validation, the numbers xi are numerical data about objects from the domain D obtained by measurements and experiments, while the numbers yi are numerical data about objects from the domain D obtained from the knowledge K.
In general, the correlation coefficient is related to two random
variables X and Y and is defined in the context of mathematical
statistics as
r_{X,Y} = \frac{\mathrm{Cov}(X, Y)}{\sigma_X \sigma_Y} = \frac{E((X - \mu_X)(Y - \mu_Y))}{\sigma_X \sigma_Y}.
Here µX (µY ) is the mean and σX (σY ) is the standard deviation of
the random variable X(Y ).
Beside correlation coefficient r, which is the most common type
of correlation measure, other types of correlation measures are used
to handle different characteristics of data. For instance, measures of
association are used for nominal and ordinal data.
One more measure of correlation is given by the formula
\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}).
This formula has the following interpretation:

\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) > 0 means positive correlation;

\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) < 0 means negative correlation;

\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) = 0 indicates absence of correlation.
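The correlation coefficient can be computed directly from paired data; in the following sketch, the measured and predicted values are invented for the illustration.

from math import sqrt


def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two data sets."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs)) * sqrt(sum((y - my) ** 2 for y in ys))
    return num / den


measured = [1.0, 2.1, 2.9, 4.2, 5.1]    # experimental data about the domain D
predicted = [1.1, 2.0, 3.0, 4.0, 5.0]   # values obtained from the knowledge K
print(pearson_r(measured, predicted))   # close to 1: strong agreement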

Example 3.2.4. It is possible to take the validity of the knowledge stored in the system T about the object domain D as a measure tr(T, D).
There are different types of validity. For instance, researchers have
introduced four types of validity for experimental knowledge: con-
clusion, internal, construct, and external validity. They build on one
another, and each type addresses a specific methodological question.
For instance, Bertrand Russell (1926) uses an attitude-dependent
approach in his definition of knowledge. Consideration of beliefs
forms the starting point for Russell’s definition of knowledge.
Belief relations in a system are estimated by belief and plausi-
bility measures (cf., (Klir and Wang, 1993)) or by extended belief
measures that are described below. This makes it possible to take some
belief or plausibility measure as a measure tr(T, D) of truthfulness
of knowledge in the system T .
Belief and plausibility measures are kinds of fuzzy measures, which were introduced by Sugeno (1974).
Let P(X) be the set of all subsets of a set X and A be a Borel
field of sets from P(X), specifically, A ⊆ P(X).

Definition 3.2.10. A fuzzy measure on A in X (in the sense of


Sugeno) is a function g : A → [0, 1] that assigns a number in the unit
interval [0, 1] to each set from A so that the following conditions are
valid:

(FM1) X ∈ A, g(Ø) = 0, and g(X) = 1, i.e., the function g is


normed.
(FM2) the function g is monotone, i.e., for any A and B from A,


the inclusion A ⊆ B implies g(A) ≤ g(B).
(FM3) For any non-decreasing sequence A1 ⊆ A2 ⊆ · · · ⊆ An ⊆
An+1 ⊆ · · · of sets from A, the following equality is valid


g\left(\bigcup_{n=1}^{\infty} A_n\right) = \lim_{n\to\infty} g(A_n).

(FM4) For any non-increasing sequence A1 ⊇ A2 ⊇ · · · ⊇ An ⊇


An+1 ⊇ · · · of sets from A, the following equality is valid


g\left(\bigcap_{n=1}^{\infty} A_n\right) = \lim_{n\to\infty} g(A_n).

If A ∈ A, then the value g(A) is called the fuzzy measure of the


set A.
Then this definition was improved further through elimination of
the condition (FM4) (Sugeno, 1977; Zimmermann, 2001).
Many measures with an infinite universe studied by different researchers, such as probability measures, belief functions, plausibility measures, and so on, are fuzzy measures in the sense of Sugeno.
A more general definition of a fuzzy measure is used in (Klir and
Wang, 1993), as well as in (Burgin, 2005c) where these measures are
applied to exploration of fuzzy dynamical systems.
Let P(X) be the set of all subsets of a set X and B be an algebra of sets from P(X), in particular, B ⊆ P(X).

Definition 3.2.11. A fuzzy measure on B in X is a function g : B → R+ that assigns a non-negative real number to each set from B and satisfies the following conditions:

(FM1) g(Ø) = 0.
(FM2) For any A and B from B, the inclusion A ⊆ B implies g(A) ≤ g(B), i.e., the function g is monotone.

In what follows, we call g simply a fuzzy measure and call B the algebra of fuzzy measurable sets with respect to the fuzzy measure g.
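As an illustrative addition (not part of the original exposition), the following Python sketch checks conditions (FM1) and (FM2) of Definition 3.2.11 for a candidate set function defined on all subsets of a small finite universe; the example measure g, the normalized cardinality, is a hypothetical choice.

```python
from itertools import combinations

def powerset(universe):
    """All subsets of a finite universe, as frozensets."""
    items = list(universe)
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            yield frozenset(combo)

def is_fuzzy_measure(g, universe):
    """Check (FM1) g(empty set) = 0 and (FM2) monotonicity on P(universe);
    g maps frozensets to non-negative real numbers."""
    subsets = list(powerset(universe))
    if g[frozenset()] != 0:                      # (FM1)
        return False
    for a in subsets:                            # (FM2): A ⊆ B implies g(A) <= g(B)
        for b in subsets:
            if a <= b and g[a] > g[b]:
                return False
    return True

universe = {"p", "q", "r"}
# A hypothetical measure: the normalized size of a subset.
g = {s: len(s) / len(universe) for s in powerset(universe)}
print(is_fuzzy_measure(g, universe))  # True
```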
Popular examples of fuzzy measures are possibility, belief and plausibility measures.
Possibility theory is based on possibility measures. Let us consider some set X and its power set P(X).

Definition 3.2.12. (Zadeh, 1978; Zimmermann, 2001). A possibility measure in X is a partial function Pos : P(X) → [0, 1] that is defined on a subset A from P(X) and satisfies the following axioms:

(Po1) Ø, X ∈ A, Pos(Ø) = 0, and Pos(X) = 1.
(Po2) For any A and B from A, the inclusion A ⊆ B implies Pos(A) ≤ Pos(B).
(Po3) For any system {Ai; i ∈ I} of sets from A,
$$Pos\Big(\bigcup_{i\in I} A_i\Big) = \sup_{i\in I} Pos(A_i).$$
Possibility can also be described by a more general class of measures (cf., (Oussalah, 2000; Zadeh, 1978)).
Definition 3.2.13. A quantitative possibility measure in X is a func-
tion P : P(X) → [0, 1] that satisfies the following axioms:
(Po1) Ø, X ∈ A, P (Ø) = 0, and P (X) = 1;
(Po2a) For any A and B from A, P (A ∪ B) = max{P (A), P (B)}.
A quantitative possibility measure is a fuzzy measure.
Dual to a quantitative possibility measure is a necessity measure
(Oussalah, 2000).
Definition 3.2.14. A quantitative necessity measure in X is a func-
tion N : P(X) → [0, 1] that satisfies the following axioms:
(Ne1) N (Ø) = 0, and N (X) = 1.
(Ne2) For any A and B from P(X), N (A ∩ B) = min{N (A), N (B)}.
A quantitative necessity measure is a fuzzy measure.
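To make Definitions 3.2.13 and 3.2.14 concrete, here is a small Python sketch (an illustration added here, with a hypothetical possibility distribution): Pos(A) is obtained as the supremum of the distribution over A, and the dual necessity measure as N(A) = 1 − Pos(Ā); the axioms (Po2a) and (Ne2) can then be checked directly.

```python
# Hypothetical possibility distribution on a finite universe:
# degree of possibility of each elementary outcome.
pi = {"rain": 1.0, "clouds": 0.7, "sun": 0.3}
UNIVERSE = set(pi)

def possibility(event):
    """Pos(A) = supremum of the possibility distribution over A (0 for the empty set)."""
    return max((pi[x] for x in event), default=0.0)

def necessity(event):
    """N(A) = 1 - Pos(complement of A): the dual necessity measure."""
    return 1.0 - possibility(UNIVERSE - set(event))

A = {"rain", "clouds"}
B = {"sun"}
# (Po2a): Pos(A ∪ B) = max{Pos(A), Pos(B)}
assert possibility(A | B) == max(possibility(A), possibility(B))
# (Ne2): N(A ∩ B) = min{N(A), N(B)}
assert necessity(A & B) == min(necessity(A), necessity(B))
print(possibility(A), necessity(A))  # 1.0 and 0.7 (up to floating-point rounding)
```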
Possibility and necessity measures are important in support logic
programming (Baldwin, 1986), which uses fuzzy measures for reason-
ing under uncertainty and approximate reasoning in expert systems,
based on the logic programming style.
In addition to ordinary measures, fuzzy measures encompass many kinds of measures introduced and studied by different researchers. For instance, beliefs play an important role in people's behavior. To study beliefs by methods of fuzzy set theory, the concept of a belief measure was introduced and studied (Shafer, 1976).
Definition 3.2.15. A belief measure in X is a partial function Bel : P(X) → [0, 1] that is defined on a subset A from P(X) and satisfies the following axioms:

(Be1) Ø, X ∈ A, Bel(Ø) = 0, and Bel(X) = 1.
(Be2) For any system {Ai; i = 1, 2, . . . , n} of sets from A and any n from N,
$$Bel(A_1 \cup \cdots \cup A_n) \geq \sum_{i=1}^{n} Bel(A_i) - \sum_{i<j} Bel(A_i \cap A_j) + \cdots + (-1)^{n+1} Bel(A_1 \cap \cdots \cap A_n).$$
Axiom (Be2) implies that a belief measure is a super-additive fuzzy measure since, for n = 2 and arbitrary subsets A and B of X, we have
$$Bel(A \cup B) \geq Bel(A) + Bel(B) - Bel(A \cap B).$$
Axiom (Be2) also implies the following property
$$0 \leq Bel(A) + Bel(\bar{A}) \leq 1,$$
where Ā is the complement of A.
For each set A ∈ P(X), the number Bel(A) is interpreted as the
degree of belief (based on available evidence) that a given element x
of X belongs to the set A. Another interpretation treats subsets of
X as answers to a particular question. It is assumed that some of the
answers are correct, but we do not know with full certainty which
ones they are. Then the number Bel(A) estimates our belief that the
answer A is correct.
One more class is plausibility measures, which are related to belief
measures.
Definition 3.2.16. A plausibility measure in X is a partial function Pl : P(X) → [0, 1] that is defined on a subset A from P(X) and satisfies the following axioms (Shafer, 1976):

(Pl1) Ø, X ∈ A, Pl(Ø) = 0, and Pl(X) = 1.
(Pl2) For any system {Ai; i = 1, 2, . . . , n} of sets from A and any n from N,
$$Pl(A_1 \cap \cdots \cap A_n) \leq \sum_{i=1}^{n} Pl(A_i) - \sum_{i<j} Pl(A_i \cup A_j) + \cdots + (-1)^{n+1} Pl(A_1 \cup \cdots \cup A_n).$$
Belief measure and plausibility are dual measures as for any belief
measure Bel(A), Pl(A) = 1 − Bel (Ā) is a plausibility measure and
for any plausibility measure Pl(A), Bel(A) = 1 − Pl(Ā) is a belief
measure.
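The following Python sketch (an addition for illustration, using a hypothetical mass function in the spirit of the Dempster–Shafer approach cited above) constructs a belief measure and a plausibility measure and checks the duality Pl(A) = 1 − Bel(Ā).

```python
# Hypothetical basic probability assignment (mass function) on subsets
# of the universe {a, b, c}; the masses sum to 1.
UNIVERSE = frozenset({"a", "b", "c"})
mass = {
    frozenset({"a"}): 0.4,
    frozenset({"a", "b"}): 0.3,
    UNIVERSE: 0.3,
}

def belief(event):
    """Bel(A): total mass of focal sets contained in A."""
    event = frozenset(event)
    return sum(m for s, m in mass.items() if s <= event)

def plausibility(event):
    """Pl(A): total mass of focal sets intersecting A."""
    event = frozenset(event)
    return sum(m for s, m in mass.items() if s & event)

A = {"a", "b"}
complement = UNIVERSE - frozenset(A)
print(belief(A), plausibility(A))                              # approximately 0.7 and 1.0
print(abs(plausibility(A) - (1 - belief(complement))) < 1e-9)  # duality holds
```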
When Axiom (Be2) for belief measures is replaced with a stronger
axiom
Bel(A ∪ B) = Bel(A) + Bel(B) whenever A ∩ B = Ø,
we obtain a special type of belief measures, the classical probability
measures (sometimes also referred to as Bayesian belief measures).
In a similar way, some special kinds of probability measures are
constructed (Dempster, 1967).
It is possible to consider dynamical systems in spaces with a belief,
plausibility or possibility measure. These systems allow one to model
mental processes, cognition, and knowledge processing in intelligent
systems. For example, it is possible to consider data- and knowledge
bases as dynamical systems with a belief measure and study their
behavior. Such a belief measure can reflect user beliefs in correctness
and validity of data, as well as user beliefs in truth and groundedness
of knowledge systems.
A belief system, either of an individual or of a community, con-
tains not only beliefs, but also disbeliefs, i.e., beliefs that something
is not true. To represent this peculiarity, extended belief measures
are introduced in (Burgin, 2010).
Definition 3.2.17. An extended belief measure is a function Bel : P(X) → [−1, 1] that is defined on a subset A from P(X) and satisfies the following axioms:

(EBe0) X = X+ ∪ X−.
(EBe1) Ø, X+, X ∈ A, Bel(Ø) = 0, Bel(X+) = 1 and Bel(X) = 0.
(EBe2) For any system {Ai; i = 1, 2, . . . , n} of sets from A ∩ X+ and any n from N, we have
$$Bel(A_1 \cup \cdots \cup A_n) \geq \sum_{i=1}^{n} Bel(A_i) - \sum_{i<j} Bel(A_i \cap A_j) + \cdots + (-1)^{n+1} Bel(A_1 \cap \cdots \cap A_n).$$
Remark 3.2.1. It is possible to represent an extended belief measure by an intuitionistic fuzzy set in the sense of (Atanasov, 1999).
The attitude-dependent approach is based on a function
at : K → L,
where K is a collection of knowledge items or knowledge systems,
and L is a partially ordered set. This function evaluates truthfulness
of knowledge items (systems) based on attitudes that reflect confi-
dence that a knowledge item (system) is true. When an individual or
a group evaluates knowledge, their estimates are assigned to differ-
ent knowledge items (systems). For instance, a knowledge item k1 is
presumed to be absolutely true, a knowledge item k2 is estimated as
true only in some situations, while a knowledge item k3 is treated as
rarely true. It is possible to give numerical values of truthfulness. For
instance, it is 30% true that it will be raining tomorrow (item k1 )
and 70% true that it will not be raining tomorrow (item k2 ). These
estimates reflect individual or group confidence, which may be based
on some grounded procedures, e.g., on measurement, or represent
only dispositions, preferences, and beliefs, e.g., gut feeling.
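A minimal Python sketch of such an attitude function (an illustrative addition; the knowledge items, the scale L and the numerical grades are hypothetical) could look as follows.

```python
# Hypothetical partially ordered scale L of truthfulness values in [0, 1]
# and a hypothetical attitude function at: K -> L for three knowledge items.
attitude = {
    "k1: it will be raining tomorrow": 0.3,      # estimated 30% true
    "k2: it will not be raining tomorrow": 0.7,  # estimated 70% true
    "k3: all swans are white": 0.1,              # treated as rarely true
}

def at(knowledge_item):
    """The attitude function: the truthfulness estimate of a knowledge item."""
    return attitude[knowledge_item]

def more_trustworthy(item_a, item_b):
    """Compare two knowledge items by their attitude-based estimates."""
    return at(item_a) >= at(item_b)

print(more_trustworthy("k2: it will not be raining tomorrow",
                       "k1: it will be raining tomorrow"))  # True
```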
3.2.2. Knowledge validation, justification, and testing
The validation dimension reflects confirmation of confidence of the
knower (knowledge user) in the knowledge item property estimation
and includes such properties as is justified, is validated, is grounded, is tested and is checked. In more detail, these properties are treated in Section 3.2.
Knowledge validation is the process of finding or establishing
validity of definite knowledge properties. Here validity of a prop-
erty P means not only that a knowledge item (knowledge system)
has this property but also that a property P has the necessary
(intended) property Q. For instance, we may need to check if the
values of the property P belong to some interval. As an example
of such a property P , we can take truth as a probabilistic prop-
erty. Then it is natural to consider some probabilistic knowledge
valid when its probability is larger than 0.95. This is the property Q,
which means that the value of the property P belongs to the interval
(0.95, 1).
An example of popular validation of operational knowledge is software validation, which is the process of checking whether the software does what the user actually wanted (Burgin and Debnath, 2008). Namely, in validation, the main explored question is — Have we done the right job?
In the context of epistemic structures, validation also means eval-
uation of epistemic structures in order to discern knowledge from
other epistemic structures.
In general, knowledge verification means checking whether a
knowledge item (knowledge system) has a definite property or its
property has a definite value. For instance, it is useful to find whether
a knowledge item (knowledge system) is relevant to its domain.
An important case of knowledge verification is criteria verifica-
tion, which is evaluation of knowledge for conformance and consis-
tency with respect to a system of criteria or conditions. For instance,
consistency is defined by systems of conditions (cf., Section 3.3) and
to validate consistency of a knowledge system, it is necessary to verify
these conditions.
An example of popular verification of operational knowledge is
software verification, which is the checking of or testing of software
for conformance and consistency with respect to the developed spec-
ification (Burgin and Debnath, 2008). Namely, in verification, the
main explored question is — Are we doing the job right?
Reliability of knowledge, considered in Section 2.3.8, is a kind of knowledge validation and verification measure.
Traditionally, knowledge justification has been concerned with
demonstration of truthfulness of beliefs. We call it the traditional
knowledge justification.
Importance of knowledge justification is supported not only by
empirical data and history of epistemological research, but also by
results of the general theory of evaluation (Burgin and Kavunenko,
1994) as the process of evaluation has three main stages —
preparation, realization, and analysis — while justification is a par-
ticular case of evaluation.
Here we consider three goal-oriented types of knowledge justification, which include the traditional approach:

— The relevance knowledge justification is a demonstration of knowledge relevance to its domains.
— The truthfulness knowledge justification is a demonstration that a knowledge item gives a highly adequate representation of its domains.
— The generalized knowledge justification is a demonstration of correctness of knowledge property estimation.

If we assume that knowledge is always true and relevant, or it is
not knowledge, then the first two types of knowledge justification are
special cases of the third type.
Millennia of exploration in this area brought forth a variety of
techniques and methods of knowledge justification. Contemporary
theory of knowledge discerns several operational types of knowledge
justification (Pollock and Cruz, 1999):

— the doxatic justification;
— the foundational justification;
— the coherence justification;
— the internal justification;
— the process justification;
— the probabilistic justification;
— the external justification.
The doxatic approach assumes that the justifiability of a belief is a function exclusively of what beliefs one holds. All beliefs held by a person are called a doxatic state of this person. There are two basic tactics in the doxatic justification — foundational justification and coherence justification.
In the foundational justification, basic beliefs are determined and
justification of other beliefs is based on their relation to the basic
beliefs. Contemporary doxatic justification is based on the assump-
tion that all knowledge comes to us through our senses and thus,
simple beliefs resulting directly from sense perception form an epis-
temological foundation. For instance, if Alex believes that now is
five o’clock, this belief is justified if he sees correct clocks displaying
five o’clock when he thinks that it is five o’clock.
In the coherence justification, basic beliefs are determined and jus-
tification of other beliefs is based on their relation to all other beliefs
of the knower. For instance, if Alex believes that now is five o’clock,
this belief is justified if this belief correlates (“coheres”) with his
other beliefs.
The internal justification takes into account only internal states of
an individual. For instance, if Alex believes that now is five o’clock,
this belief is internally justified when Alex is sure that it is really
five o’clock.
The external justification takes into account more conditions than
just the internal states of the believer, e.g., external justification can
include both internal states of an individual and what is going on
in the objective reality. These additional conditions determine sev-
eral types of external justification: process justification, rehabilistic
justification, and probabilistic justification (Pollock and Cruz, 1999).
The process justification takes for granted that a belief is true if
the cognitive process that brought this belief forth is correct. For
instance, if Alex believes that now is five o’clock, this belief is jus-
tified by the process (action) of seeing correct clocks that display
five o’clock when he thinks that it is five o’clock.
The rehabilistic justification presupposes that a belief is true if what is believed is really true. For instance, if Alex believes that now is five o'clock, this belief is justified if it is really five o'clock when Alex thinks so.
The probabilistic justification assesses a belief in terms of its probability of being true. For instance, if Alex believes that now
is five o’clock, this belief is justified if there is high probability that
it is five o’clock when Alex thinks so.
Here we develop a more detailed operational typology of justifi-
cation.
Taking the justification dimension, we see that there are differ-
ent ways and strategies of epistemic structure (knowledge) justifica-
tion. It is possible to treat each kind of justification as an attribute
component of the justification dimension. At the same time, there
are three basic approaches to justification, which are similar to the
approaches to knowledge acquisition: by practice/experience, by rea-
soning/thinking and by authority/opinion.

— Justification by practice/experience means that an epistemic structure (knowledge) is justified if it works well in practice, e.g., it allows one to achieve some goals better, and our experience gives evidence for this.
— Justification by reasoning/thinking is performed in the mentality of the justifier and is explicit justification. However, the brain has three basic components — the System of Rational Intelligence (also called System of Reasoning), the System of Emotions (Affective States) and the System of Will and Instinct (Burgin, 2010). Consequently, there are two other kinds of mental justification — by emotions and by instructions/assertions.
— Justification by authority/opinion means that an epistemic structure (knowledge) is justified if there is a corresponding opinion, which is usually held as an authoritative one. Note that it may be an opinion of an individual or of a social group, taken from some source such as a book, a magazine, or the Internet.

The operational typology is complementary to the epistemic classification, which includes three basic types of justification for epistemic structures in a general form:

— The existential justification means that something is justified because it is so.
— The faith-based justification means that there is a belief, e.g., a person believes, in what is justified.
— The procedural justification means that there are procedures that allow one to justify epistemic structures.
Here we are mostly interested in evaluation and justification of properties. So, we transform the general typology obtaining basic types of justification for properties of epistemic structures in an attributive form:

— The existential justification means that a property estimate is justified when evidence is given that this estimate is correct.
— The faith-based justification of a property estimate means that
there is a belief, e.g., a person believes, that this estimate is
correct.
— The procedural justification of a property estimate means that there are procedures that allow one to test that this estimate is correct.

We can see that the existential justification demands some external observer (observing system) who (that) can see the real situation. Thus, the existential justification includes the external justification in the sense of (Pollock and Cruz, 1999).
Taking examples from mathematics, we see that justification by
reasoning as a form of the existential justification is performed by
proving mathematical statements. It is interesting to observe how
evolution of the construction of a proof has been changing the con-
cept of justification in mathematics. Now a mathematical proof is
contemplated as a sequence of formalized statements, starting with
initially assumed statements about supposedly known terms, and
proceeding by steps permitted in the utilized logical calculus. In the
classical mathematics, first-order predicate calculus is usually used.
In the intuitionistic or constructive mathematics, a more restricted
calculus is employed. As a result, intuitionists do not accept all clas-
sical proofs and even refute some classical theorems.
Besides, for millennia, mathematics authorized only precise proofs. In contrast to this, probabilistic proofs penetrated into
mathematics a couple of decades ago (cf., for example, (Bass and Burdzy, 1989; Alon and Spencer, 2000; Fulman, 2001)). This implies probabilistic justification when a mathematical statement can be asserted as true only with some high probability p.
Note that in wider context, a proof is argumentation that justifies
some statement as true, useful or acceptable. As Hersh writes, a
proof in mathematics is “an argument accepted as conclusive by the
present-day mathematical community” (Hersh, 1997).
Mathematics also exploits justification by authority or authorita-
tive opinion. Usually this is done in the form of references to works of
other mathematicians. These references are included in books, papers
and rigorous lectures.
Procedural justification is also one of the cornerstones of mathematics. It has the form of checking the proofs. Mathematicians fre-
quently check their own proofs, as well as proofs of their colleagues.
When a mistake or a gap in a proof is found, this mistake is corrected,
the gap is filled or the result is not accepted.
Basic types of justification in general are projected onto three causal types of existential justification:

— Justification by experience: it is so because it has been always so.
— Justification by theory: it is so because the theory tells that it has to be so.
— Justification by authority: it is so because the authority tells that it is so.
Here are examples of different types of existential justification of the following statement about swans:
All swans are white because:

1. Whenever you see a swan, it is white (justification by experience).
2. All swans were created white (justification by theory).
3. Aristotle said so (justification by authority).

However, as we know, all these justifications were invalidated when Europeans came to Australia and found black swans.
In spite of various examples demonstrating absence of com-
plete reliability of justification by experience, it is basic for the
experimental methodology used in all sciences. Even in mathematics, justification by experience plays a pivotal role. Indeed, by the second Gödel theorem, it is impossible to prove consistency of some basic mathematical systems such as arithmetic or set theory. So, mathematicians justify utilization of these systems only by the experience that they have not encountered contradictions while doing mathematics with these systems.
The base of the faith-based justification is produced by faith as
an affective (emotional) state (Burgin, 2010). Thus, the faith-based
justification is the internal justification in the sense of (Pollock and
Cruz, 1999). As people have different reasons to believe, there are different types of the faith-based justification, which are similar to the
causal types of the existential justification. It gives us three causal
types of faith-based justification:
— Justification by tradition: people believe because their fathers and
grandfathers believed.
— Justification by reasoning: people believe because there are essen-
tial reasons to believe.
— Justification by emotions: people believe because they feel good
(e.g., moral or satisfying) to do so.
Coming to the procedural justification, we find four basic types:
— The external experimental justification means that the knowledge
system performs experiments to find evidence that supports valid-
ity of an epistemic structure.
— The external observational justification signifies that the knowl-
edge system performs observations to find evidence that supports
validity of an epistemic structure.
— The internal logical justification connotes that the knowledge sys-
tem does inference to get evidence that supports validity of an
epistemic structure.
— The internal affective justification stands for justification by
transformation of inner states. For instance, an individual can
justify some impression by her feelings and emotions.
An example of internal logical justification is a grounded selec-
tion (formation) of a system of basic beliefs and correct inference of
all other beliefs from basic beliefs. When beliefs are represented by
statements in a conventional logical system, then inference is done
using deduction rules of this system.
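The following Python sketch (an illustrative addition with hypothetical beliefs and rules) mimics this kind of internal logical justification: starting from a set of basic beliefs, further beliefs are derived by simple deduction rules, and a belief counts as justified if it is basic or derivable.

```python
# Hypothetical basic beliefs and deduction rules of the form
# (premises, conclusion): if all premises are believed, conclude the conclusion.
basic_beliefs = {"the clock shows five", "the clock is correct"}
rules = [
    ({"the clock shows five", "the clock is correct"}, "it is five o'clock"),
    ({"it is five o'clock"}, "the shops are still open"),
]

def derive(beliefs, rules):
    """Forward chaining: repeatedly apply rules until no new beliefs appear."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def is_justified(belief):
    """A belief is (logically) justified here if it is basic or derivable."""
    return belief in derive(basic_beliefs, rules)

print(is_justified("it is five o'clock"))  # True
print(is_justified("it is raining"))       # False
```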
Pollock and Cruz (1999) argue that ontologically it is impossible
to have basic beliefs. However, in spite of this, people psychologically
tend to form a system of basic beliefs in some area deducing other
beliefs from the basic beliefs. Examples of basic beliefs are: axioms in mathematics or theoretical physics (cf., for example, (Gödel, 1940; Fraenkel and Bar-Hillel, 1958; Burgin, 2011) for mathematics and (Dirac, 1930a; von Neumann, 1932; Mackey, 1963; Atiyah, 1990; Perez Bergliaffa et al., 1998; Streater and Wightman, 2000; Hardy, 2001; Schottenloher, 2008) for physics), axioms in computer science
where there is such a discipline as the axiomatic theory of algorithms
(Burgin, 2010d), religious credos, laws in social systems, ethical prin-
ciples, etc.
Note that even in mathematics, axioms do not represent absolute
truth and absolute knowledge. For instance, it is possible to build
consistent set theory with the continuum hypothesis as an axiom
(Gödel, 1940) and it is also possible to establish non-contradictory
set theory where the negation of the continuum hypothesis is an
axiom (Cohen, 1966).
Another example of internal logical justification is a grounded
selection (formation) of a system of basic beliefs and comparison of
all other beliefs with basic beliefs. A belief is considered justified if
there is consistency with the system of basic beliefs. When beliefs
are represented by statements in a logical system, then consistency
means logical consistency, in which addition of the compared belief
does not cause contradictions.
Now operational knowledge is mostly represented by computer
software and hardware. Consequently, correctness and reliability have
become critical issues in software and hardware production and uti-
lization as a result of the increased social and individual reliance
upon computers and computer networks. A study of the National Institute of Standards and Technology found that software defects alone resulted in $59.5 billion in annual losses. In turn, problems caused by software
faults often translate into loss of potential customers, lower sales,
higher warranty repair costs, and losses due to legal actions from
customers.
Thus, we see that production of highly reliable software, on time
and within budget, is a constant challenge for the software industry.
Just as we know how disruptive failures can be, we all know how often
schedules slip. To overcome these inconveniences, software assurance
(SwA) is employed.
According to NASA (NASA-STD-2201-93), software assurance is
a “planned and systematic set of activities that ensures that soft-
ware processes and products conform to requirements, standards, and
procedures. It includes the disciplines of Quality Assurance, Quality
Engineering, Verification and Validation, Non-conformance Report-
ing and Corrective Action, Safety Assurance, and Security Assurance
and their application during a software life cycle.”
It is useful to distinguish three types of software assurance:

— A priori assurance is a set of activities performed before software utilization aimed at achieving a required level of confidence in the software system.
— Performance assurance is a set of activities in the process of software exploitation aimed at achieving a required level of confidence in the software system when it is functioning.
— A posteriori assurance is a set of activities aimed at achieving a required level of confidence in the software system after its functioning ends.

There are three types of a posteriori software assurance:

— Final assurance oriented at the level of confidence in the software system after its termination.
— Cyclic or periodic assurance oriented at the level of confidence in the software system after it completes a cycle.
— Phase assurance oriented at the level of confidence in the software system after it completes some phase.

Software Assurance addresses such issues as:

— Trustworthiness, which affirms that no exploitable vulnerabilities exist, either maliciously or unintentionally inserted.
— Predictable Execution reflects justifiable confidence that software, when executed, functions as intended.
— Conformance reflects relevance of software to requirements, standards and specifications.
The reason software assurance matters is that so many business activities and critical functions — from national defense to banking to healthcare to telecommunications to aviation to control of hazardous materials — depend on the correct, predictable operation of software. It is safe to say that in today's world, these and myriad other activities and functions would become hopelessly crippled, if not completely impossible, were the software-intensive systems that they rely on to fail (Goertzel and Winograd, 2008).
Denotational meaning of the software quality assurance is the
level of confidence that software is free from vulnerabilities, either
intentionally designed into the software or accidentally inserted at
anytime during its lifecycle, and that the software functions in the
intended manner.
Operational meaning of the software quality assurance is the
planned and systematic set of multi-disciplinary activities ensuring
that a given software system has such properties as trustworthi-
ness, which means that no exploitable vulnerabilities exist, either
maliciously or unintentionally inserted, predictable execution, which
means justifiable confidence that software, when executed, func-
tions as intended, and conformance to requirements, standards, and
procedures.
Here we use the broad understanding of testing elaborated in
(Burgin and Debnath, 2007).
Definition 3.2.18. Testing an object A is a process, or a proce-
dure that controls such a process, aimed at answering some questions
about properties of A.
Testing is mostly developed in the area of software engineering.
That is why we consider here software testing as an informative
example of operational knowledge testing. Theoretical results and
practical considerations show that software testing cannot be com-
pletely finished before software goes to users. Besides, it is impossible
to complete software testing in one or even 10 testing sessions. Software systems become more and more complex, while their concurrency and interactivity put new challenges for software testing. These new characteristics bring us to the concept of life-long (total) continuous testing (Burgin and Debnath, 2007). The goal of life-long continuous testing is to provide necessary information for updating and upgrading software quality.
As it is impossible to find enough qualified people to do life-long
continuous testing, it has to be formalized and automated and employ
AI to provide means for developing sufficiently correct software and
maintaining reliable software functioning. For instance, a knowledge
base, which would evolve and capture the history of the software
life cycle and support automated reasoning about the tested soft-
ware system becomes an important component of life-long continuous
testing.
For instance, one of the most important properties of software
is correctness. Software correctness is a function (property) on the
set of all possible programs. If P is a program, then we denote this
function by Cor(P ). Taking some system C of conditions, we have
the relative correctness function CorC (P ). This conditional definition
makes the concept of correctness flexible, efficient, and adaptive to
changes.
Definition 3.2.18 is very broad and encompasses not only what is
conventionally understood as software testing, but also other proce-
dures that are traditionally discriminated from testing. For instance,
proving program correctness, a method that is usually opposed to
testing, can be treated as testing the program with logical means.
We call this process logical software testing.
In a more restricted and traditional sense, software/program test-
ing is a process that checks behavior of the software/program by
executing it with some input data. We call this process performance
(exploitation or simulation) software testing. It is simulation testing
when the testing process simulates functioning of software as it is
done in the pre-deployment stage. Experts estimate that one of the
largest time and resource drains in a development lifecycle is simula-
tion testing, often comprising as much as 50% of a project’s life cycle.
Another kind of performance testing is exploitation testing, when the software functions performing the tasks of the user.
This brings us to three instrumental types of software testing:

➢ Performance testing;
➢ Logical testing;
➢ Inspection or auditing testing.
Although the first two types are more popular, software inspection has been proven to be a very effective means of removing software
costly simulation testing begins, and often uncovers defects that test-
ing might completely miss because of shortcomings in the testing
plan. Inspections can be done manually, having staff review all lines
of code and identify problem areas, or they can be performed by
automated software running on relatively inexpensive workstations,
which is faster, more systematic, and less expensive.
Testing is a cyclical process that leads to incremental improve-
ments and one cycle of the testing process is called a test.
When a property P is tested, we call the process P -testing. Thus,
we have correctness testing, completeness testing, security testing,
reliability testing, safety testing, stability testing, portability test-
ing, maintainability testing, usability testing, etc.
There are three object types of software testing:
➢ Structural (static) testing;
➢ Functional testing;
➢ Processual testing.
The two latter kinds are also called dynamic testing. Note that
dynamic testing is not always simulation testing. It is possible, for
example, to test how a program performs with logical means that do
not demand program execution.
Structural (static) testing is aimed at finding/checking properties
of the structure of the tested software system. The working material of structural (static) testing consists of the software texts — specifications, codes, and instruction manuals. For instance, structural
(static) testing analyzes relations between data that determine data
flows in the software system programs.
It is generally not a detailed testing, looking mainly for the sanity


of the code, algorithm, or document. It is primarily syntax checking
of the code or manual reading of the code or document to find errors.
From the black box testing point of view, static testing involves
review of requirements or specifications. This is done with an eye
toward completeness or appropriateness for the task at hand. This
is the verification portion of Verification and Validation. This type
of testing can be used by the developer who wrote the code in isola-
tion. Code reviews, inspections, and walkthroughs are usually used
as static testing procedures (Kaner et al., 1998).
A walkthrough is a term describing the consideration of a process
at an abstract level. The term is often employed in the software indus-
try (see software walkthrough) to describe the process of inspecting
algorithms and source code by following paths through the algo-
rithms or code as determined by input conditions and choices made
along the way. The purpose of such code walkthroughs is generally to
provide assurance of the fitness for purpose of the algorithm or code;
and occasionally to assess the competence or output of an individual
or team.
Functional testing checks functions that are computed by the
tested software system, i.e., the software is tested for the functional
requirements. The working material of functional testing is texts of
the software system programs and sets of input and output data.
Usually functional testing looks into the correspondence between pre-
conditions and post conditions.
Processual testing investigates processes generated by the tested
software system. For instance, processual testing examines temporal
conditions of the process or its coherence.
These three approaches (structural, functional, and processual)
correspond to three types of complexity measures and software met-
rics considered in Burgin and Debnath (2003). It is possible to use
these measures to estimate complexity of testing.
There are three orientation types of software testing:

➢ Validation testing;
➢ Diagnostic testing;
➢ Action/decision testing.
Validating testing is aimed at checking (validating) a property P for a software system A.
For instance, if we take such property P as software correctness,
then validating testing tries to find whether the software system
A has any faults, i.e., if A is correct. Another type of validating
testing tries to find whether the software system A has faults of
given type.
Diagnostic testing is aimed at finding what influences (causes) the
property P in the software system A. For instance, if we take such
property P as software correctness, then diagnostic testing tries to
find what faults the software system A has. Another type of diagnos-
tic testing tries to find what causes faults in the software system A.
Software developers know that a very important stage of error
detection is diagnostics. It helps in predicting the potential occur-
rence of problems in software projects. There is evidence that soft-
ware metrics are highly effective for software diagnostics.
Action testing is aimed at finding what is necessary (sufficient) to
do to increase (decrease) the property P in the software system A.
For instance, if we take such property P as software security and
A is an operating system, then action testing tries to find how to
eliminate or, at least, to decrease vulnerability of A.
There are other kinds of testing considered in literature and used
in practice.
Black box testing does not take into account the inner structure
of the tested software system W .
White box testing takes into account the inner structure of the
tested software system W , testing the whole system W so that all
critical components are exercised.
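As a toy Python illustration of this distinction (added here; the function under test and the test cases are hypothetical), a black box test checks only the specified input–output behavior, while a white box test uses knowledge of the inner structure, for example to make sure that every branch is exercised.

```python
def absolute_value(x):
    """The (hypothetical) system under test, with two branches."""
    if x < 0:
        return -x
    return x

def black_box_test():
    """Black box testing: only the specified input/output behavior is checked."""
    return (absolute_value(-3) == 3 and absolute_value(4) == 4
            and absolute_value(0) == 0)

def white_box_test():
    """White box testing: the test inputs are chosen so that both branches
    of the implementation (x < 0 and x >= 0) are exercised."""
    covered_negative = absolute_value(-1) == 1      # exercises the x < 0 branch
    covered_non_negative = absolute_value(2) == 2   # exercises the x >= 0 branch
    return covered_negative and covered_non_negative

print(black_box_test(), white_box_test())  # True True
```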
In stress testing, the application is tested against heavy load such
as complex numerical values, large number of inputs, large number
of queries etc., which checks for the stress/load the applications can
withstand.
In load testing, the application is tested against heavy loads or
inputs such as testing of web sites in order to find out at what
point the web-site/application fails or at what point its performance
degrades.
Ad hoc testing is done without any formal Test Plan or Test Case
creation. Ad hoc testing helps in deciding the scope and duration
of the various other kinds of testing, and it also helps testers in learning the application prior to starting any other testing.
Exploratory testing is similar to the ad hoc testing and is done in
order to learn/explore the application.
Usability testing is done if user interface is important and needs
to be specific for the specific type of user.
Smoke testing, also called sanity testing, is performed to check if the application is ready for further major testing and is working properly without failing up to the least expected level.
Recovery testing is chiefly done in order to check how fast and how well the application can recover from any type of crash or hardware failure, etc. The type or extent of recovery is specified in the requirement specifications.
Volume testing is done to check the efficiency of the application. A huge amount of data is processed through the application (which is being tested) in order to check the extreme limitations of the system.
In user acceptance testing, the software is handed over to the user in order to find out if the software meets the user expectations and works as it is expected to.
In alpha testing, the users are invited at the development center
where they use the application and the developers note every partic-
ular input or action carried out by the user. Any type of abnormal
behavior of the system is noted and rectified by the developers.
In beta testing, the software is distributed as a beta version to
the users, and the users test the application at their sites. As the users explore the software, any exception or defect that occurs is reported to the developers.
All considered kinds and types of operational knowledge testing
have their counterparts in the area of descriptive and representational
knowledge testing.
Testing in science is called an experiment. Some time ago, a new form of scientific testing was created. It is called computer simulation. In natural science in general and in physics in particular,
experiment provides the evidence that tests and grounds scientific
knowledge in the form of statements (scientific laws), properties and scientific theories. Moreover, scientists test not only theoretical knowledge but also natural phenomena to get new knowledge.
A classical example of experimental testing is given by the experiments of Galileo Galilei (1564–1642). History tells us that the great Aristotle wrote that heavy objects fall faster than lighter ones. Therefore, in the 1500s, everyone knew this. However, Galileo decided to test this knowledge by dropping two different weights from the Leaning Tower in Pisa. He found that they landed at the same time and proved that the assertion of Aristotle was incorrect. This was the beginning of experimental physics.
However, experiments existed in physics even before Galileo, but
they were only thought experiments. Now scientists use three forms
of experiments:
• material experiments;
• thought experiments;
• computer experiments.
The most popular form of computer experiments is computer simulation, which reflects a relatively recent development in science. In
some fields, such as high-energy physics, computer simulations form
the core of all experiments because in this area, many material exper-
iments are either impossible or too expensive.
Computers also allow performing experiments by computation
when theoretical knowledge is used for computing some values and
comparing them with experimental data.
In mathematics, testing is done by calculations and . . . by proving
(Burgin, 1998). In essence, proving is the crucial testing in mathemat-
ics — only those statements that are proved, i.e., passed the proving
test, belong to the mathematical knowledge. Mathematics, as any
natural science, has the theoretical, experimental, and applied parts.
However, if in science, experiments form the base for making decision
about validity of scientific knowledge, in mathematics, experiments
in the form of calculations or constructions only help to obtain new
knowledge, while the final decision about validity of mathematical
knowledge is based on proofs.
In turn, to make proofs reliable, mathematicians also test them. It is the validating testing considered above for operational knowledge in
the form of computer programs. Sometimes mathematicians apply
diagnostic testing trying to find mistakes and gaps in the proof.
Often diagnostic testing is performed by reviewers of mathematical
works.
In the sphere of education, knowledge of students is periodically
tested by various quizzes, exams, and tests. Usually, these tests are
aimed at evaluation (estimation) of all three types of knowledge —
descriptive, operational, and representational knowledge.
An important issue in correctness estimation is that correctness
often cannot be considered as an absolute property. For instance, soft-
ware that is correct for some applications can be completely incorrect
for other applications. In addition, the concept of correctness changes
with time. Software that is now considered correct can be treated as
essentially incorrect in a couple of years. Thus, taking software as
an example of operational knowledge, the most relevant and flexi-
ble way to define software correctness is conditional, which provides
a sound base for the development of efficient correctness evaluation
techniques and is presented by the following definition.
Let us consider some system C of conditions.

Definition 3.2.19. A software system R is correct relative to C if R satisfies all conditions from C. Conditions from C are called correctness conditions.

Note that satisfaction of conditions can be complete or can be partial. In the latter case, we have degrees of software correctness.
In this context, software correctness is a function (property) on
the set of all possible programs. If P is a program, then we denote
this function by Cor(P ). Taking some system C of conditions, we
have the relative correctness function CorC (P ).
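A minimal Python sketch of this conditional view of correctness (an illustration added here; the program and the correctness conditions are hypothetical) represents the system C as a list of checkable conditions and computes, as one simple option, the fraction of conditions satisfied by P.

```python
def program(x):
    """A hypothetical program P: integer division by two."""
    return x // 2

# A hypothetical system C of correctness conditions: each condition is a
# predicate on the program; full correctness means all conditions hold.
conditions = [
    lambda p: p(10) == 5,    # a functional requirement
    lambda p: p(0) == 0,     # behavior on a boundary value
    lambda p: p(7) == 3.5,   # a condition this program does NOT satisfy
]

def relative_correctness(p, conds):
    """Cor_C(P): here taken as the fraction of conditions from C satisfied by P
    (1.0 means P is correct relative to C; smaller values give a degree)."""
    satisfied = sum(1 for cond in conds if cond(p))
    return satisfied / len(conds)

print(relative_correctness(program, conditions))  # 2/3: partial correctness
```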
There is often confusion about the relationship between state-
ments about software failure rates and about software correctness,
and about which evidence can support either kind of statement. Soft-
ware fails mainly for two reasons: logic errors in the software and
exception failures. Exception failures can account for up to two-thirds
of all system crashes (Maxion and Olszewski, 1998), hence, are wor-
thy of serious attention. In general, an exception is any unexpected
condition or event, usually environment- or data-driven, which would
cause an otherwise operational program to fail. Many different types
of conditions can cause exceptions including an empty data file, insuf-
ficient memory, type mismatch, wrong command-line argument, pro-
tection violation, and bad data returned from another program.
Conditional correctness encompasses all known types of correct-
ness and verification techniques. For instance, model-determined cor-
rectness means that conditions from C determine properties of a
relevant (usually, fixed) model of the system R. Building a relevant
formal model of R provides for using computers for correctness verifi-
cation. Different approaches exist for doing this. For instance, model
checking is an algorithmic verification technique in which efficient
programs are used to check, in an automatic way, whether a desired
property holds for a finite model of a software system. By the classi-
fication developed in (Burgin and Debnath, 2006), it is a kind of soft-
ware descriptive correctness. With respect to finite automata models,
this kind of software correctness is considered in (Burgin and Tandon,
2003).
This conditional definition makes the concept of correctness flex-
ible, efficient, and adaptive to changes. The type of conditions in C
determines the type of software correctness.
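As a tiny Python illustration of the model checking idea mentioned above (an addition to the text; the transition system and the property are hypothetical), the following sketch checks over a finite model whether a "bad" state is reachable from the initial state, i.e., whether a simple safety property holds.

```python
from collections import deque

# A hypothetical finite model of a software system: states and transitions.
transitions = {
    "init": ["running"],
    "running": ["running", "done", "error_handled"],
    "error_handled": ["running"],
    "done": [],
    "crashed": [],   # the bad state; not reachable in this model
}

def violates_safety(initial, bad_state):
    """Explicit-state check: is the bad state reachable from the initial state?"""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if state == bad_state:
            return True
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# The safety property "the system never crashes" holds for this model.
print(not violates_safety("init", "crashed"))  # True
```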
A software system is traditionally called functionally correct, if it
realizes functions prescribed by its specification. In this case, func-
tional specification gives conditions on software’s behavior. This def-
inition of correctness assumes that a specification of the system is
available and that it is possible to determine unambiguously whether
or not the software meets the specification.
A software system is traditionally called textually correct if its
text/code does not have errors. In this case, the condition is that the
software text complies with the syntactic rules of the corresponding
programming language. This definition of correctness is the simplest.
A software system is descriptively correct if it corresponds to its
specification. In this case, specification describes conditions that the
program must satisfy. Note that descriptive correctness includes (is
stronger than) functional correctness, in particular, a program is operationally correct if its functioning is faultless (Burgin and Deb-
nath, 2006). In this case, a user explicitly or implicitly gives condi-
tions that the program must satisfy. Users are mostly interested in
operational correctness. However, diversity of users makes the oper-
ational estimate fuzzy, and it is necessary to represent this fuzziness
in measures of operational correctness.
As correctness conditions can be set by different groups, we have several types of software correctness:

— User-oriented software correctness.
— Consumer-oriented software correctness.
— Programmer-oriented software correctness.
— Designer-oriented software correctness.
Here we take into account communities of users who utilize the system, consumers who buy the system, programmers who write programs for the system, and designers who design the system. Members of each of these communities construct (sometimes, implicitly) correctness conditions and estimate correspondence (relevance) between these conditions and system properties. For instance, user-oriented operational correctness of a program means that users consider it correct in utilization.
Consumers often are also users, but not always. For instance, companies buy different software to be used by their personnel. However, estimates of the company do not always coincide with estimates of its personnel who use this software.
User-oriented software correctness is based on correctness condi-
tions elaborated by users. It is prevalent when users (individuals and
companies) update and upgrade software they use. Designer-oriented
software correctness is based on correctness conditions elaborated by
designers. It is prevalent when individuals and companies update
and upgrade software they produce. As a result, it is possible that
designer/programmer-oriented correctness is increased, while user-
oriented correctness declines. This situation results in waste of money
and effort. To avoid such losses, it is necessary to monitor users’
opinions and to correlate designer/programmer-oriented correctness with user-oriented correctness.
Most programmers use testing to establish that their program per-
forms to specifications. However, in spite of repeated testing, errors
and bugs still find their way into applications, and testing can rarely
check all possible scenarios. There are always more test cases to be
tried, so testing is never finished, but only abandoned. It is for this
reason that many organizations, rather than test to exhaustion, are
enhancing their test efforts through software inspection and logical
verification.
Attracting attention to the shortcoming of performance testing
and necessity of logical verification, Dijkstra said, “Program testing
can be used to show the presence of bugs, but never to show their
absence!” Performance testing shows the effect of running the pro-
gram on a particular set of inputs. As a result, testing is only as good
as the test input. Debugging, as we know, is essentially testing.
A proof of program correctness, or logical verification, establishes
the correctness of the program on any input. However, like mathe-
matical proofs, a proof of program correctness is only as good as the
person writing the proof.
However, logical verification alone cannot provide complete soft-
ware correctness. As Knuth said, “Beware of bugs in the above code.
I have only proved it correct, not tried it.”
The idea of the logical approach to program correctness verifi-
cation is simple. As Hoare (1969) writes, one of the most impor-
tant properties of a program is whether or not it carries out its
intended function. This corresponds to the functional correctness.
The intended function of a program, as well as of its parts, is speci-
fied by making general assertions about the values of input and out-
put variables. These assertions are presented as formal expressions
of the form R{P }Q called a Hoare triple where the precondition R
and post condition Q are some logical formulas and P is a program
(an instruction or command). The meaning of such an expression is:
If the assertion R is true before initiation of the program P, then the assertion Q is true on its termination.
A system of Hoare triples forms a logical model L(P) of the program P.
Another interpretation of the Hoare triple R{P}Q is given by Milner (1989):
If P is executed in a state satisfying R and it terminates, then the terminating state will satisfy Q.
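A small Python sketch (added for illustration; the program and the assertions are hypothetical) shows how a Hoare triple R{P}Q can be checked by testing on particular states: evaluate the precondition R, execute P, and evaluate the postcondition Q on the resulting state.

```python
def check_hoare_triple(precondition, program, postcondition, state):
    """Test the triple R{P}Q on one concrete state: if R holds before
    executing P, then Q must hold on the resulting state."""
    if not precondition(state):
        return True                     # the triple says nothing about states violating R
    result = program(dict(state))       # run P on a copy of the state
    return postcondition(result)

# Hypothetical example: P doubles x; R: x >= 0; Q: x is even and non-negative.
R = lambda s: s["x"] >= 0
Q = lambda s: s["x"] >= 0 and s["x"] % 2 == 0

def P(s):
    s["x"] = 2 * s["x"]
    return s

for x in range(-3, 4):                  # a few test states
    assert check_hoare_triple(R, P, Q, {"x": x})
print("triple R{P}Q holds on all tested states")
```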
Logical methods are used as a complementary quality control
measure that can reveal inconsistencies, ambiguities, and incomplete-
ness and several other shortcomings of system designs early on in the
development process. This reduces significantly the accidental intro-
duction of design errors in the development of the software lead-
ing to higher quality software and cost reduction in the testing and
maintenance phases of system development. Formal methods and
their related software tools have been used extensively and success-
fully in a variety of areas including protocol development, hardware
and software verification, embedded/real-time/dependable systems
and human safety, demonstrating that great improvements in sys-
tem behavior can be realized when system requirements and design
have a formal basis.
Here we consider a more general approach to logical verifica-
tion/testing of software systems. We represent testing conditions by
logical expressions.
Conditions for structural (static) testing are represented by state-
ments about the software structure and often in the form of logical
formulas in which the structure of a software system is represented
by a variable.
Conditions for functional testing are represented by two compu-
tational forms of functional triads, all three components of which are
also triads. The substantial form is a complex:
$$\big(\{D_I, W_I, E_I\},\ \{D_P, W_P, E_P\},\ \{D_O, W_O, E_O\}\big),$$
where elements DI, WI, EI from the first triad are input data, program (software system) and environment, respectively; DP, WP, EP are data, program (software system) and environment during the computational process; and DO, WO, EO are output data, program
(software system) and environment, respectively. In this schema, the


environment can include: devices used, users, load on the system,
etc. There are two parts of the environment: the performing environment and the interacting environment.
The conditional form of a functional triad is a complex:
$$\big(\{CD_I, CW_I, CE_I\},\ \{CD_P, CW_P, CE_P\},\ \{CD_O, CW_O, CE_O\}\big),$$
where CDI , CWI , CEI are input conditions on data, program (soft-
ware system) and environment, respectively; CDP , CWP , CEP are
conditions on data, program (software system) and environment dur-
ing the computational process; and CDO , CWO , CEO are output
conditions on data, program (software system) and environment,
respectively.
A functional triad has three correctness interpretations:

1. If the first part is true, then the second and the third parts are also true.
2. If the first and the second parts are true, then the third part is also true.
3. If the first part and a fragment of the second part are true, then the complementary fragment of the second part and the third part are also true.
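The following Python sketch (an illustrative addition; all conditions, data and the computation are hypothetical) evaluates the conditional form of a functional triad under the first interpretation: if the input conditions on data, program and environment hold, then the process conditions and the output conditions are checked.

```python
def evaluate_triad(input_conds, process_conds, output_conds, run, input_state):
    """First correctness interpretation of a conditional functional triad:
    if all input conditions (on data, program, environment) hold for the
    input state, then the process and output conditions must hold as well."""
    if not all(cond(input_state) for cond in input_conds):
        return True                 # input conditions not satisfied: nothing to check
    process_state, output_state = run(input_state)
    return (all(cond(process_state) for cond in process_conds)
            and all(cond(output_state) for cond in output_conds))

def run(state):
    """A hypothetical computation: sorting the input data."""
    process_state = {"steps": len(state["data"]), "environment": state["environment"]}
    output_state = {"data": sorted(state["data"]), "environment": state["environment"]}
    return process_state, output_state

# Hypothetical input state and condition triads (conditions on data,
# program and environment at the input, process and output stages).
input_state = {"data": [3, 1, 2], "program": "sort", "environment": {"memory_mb": 64}}
input_conds = [lambda s: len(s["data"]) > 0,
               lambda s: s["environment"]["memory_mb"] >= 32]
process_conds = [lambda s: s["steps"] <= 100]
output_conds = [lambda s: s["data"] == sorted(s["data"])]

print(evaluate_triad(input_conds, process_conds, output_conds, run, input_state))  # True
```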
Hoare triples with the conventional meaning, as well as with Milner's meaning, are special cases of functional triads. A more general form of Hoare triples is necessary due to the following circumstances. First, now there are software systems, e.g., operating systems, which give results without termination. Second, even in the case of terminating programs, it is often necessary to check intermediate results. Third, it is necessary to specify conditions on the environment.
Conditions for processual testing are represented by two compu-
tational forms of process triads, which are similar to functional triads
with processes as their elements. For instance, instead of input and
output data, input and output are described.
3.3. Local consistency versus global consistency in knowledge representation

A foolish consistency is the hobgoblin of little minds.


Ralph Waldo Emerson

As we know, consistency is an important component of knowledge


correctness. However, even before the concept of consistency was
elaborated, people found that their thinking and knowledge are, in
some way, inconsistent. Philosophers, especially, Zeno of Elea (ca.
490–430 BCE), explicated various inconsistencies and contradictions
in understanding of natural processes.
Although inconsistencies bothered logicians from the time of Aris-
totle, the first logical systems treating contradictions appeared only
in the 20th century. At first, such systems had the form of multi-
valued logics initially developed by Vasil’év and Łukasiewicz. Then
the first relevant logics were built by Orlov. However, their work did
not make any impact at the time. Later a student of Łukasiewicz,
Jaśkowski was the first logician who developed formal paraconsistent
logic (Jaśkowski, 1948; 1949). Starting from this time, a diversity
of different paraconsistent logics, multivalued logics, including fuzzy
logics, and relevant logics, has been elaborated (cf., for example,
(Routley et al., 1982)).
Minsky was one of the first researchers in AI who attracted atten-
tion to the problem of inconsistent knowledge (Minsky, 1974). He
wrote that consistency is a delicate concept that assumes the absence
of contradictions in systems of axioms. Minsky also suggested that in
AI systems this assumption was superfluous because there were no
completely consistent AI systems. In his opinion, it is important to
understand how people solve paradoxes, find a way out of a critical
situation, learn from their own or others’ mistakes or how they recog-
nize and exclude different inconsistencies. Minsky (1991) suggested
that consistency and effectiveness may well be incompatible. He also
maintained:
“An entire generation of logical philosophers has thus wrongly tried to
force their theories of mind to fit the rigid frames of formal logic. In
doing that, they cut themselves off from the powerful new discoveries of
computer science. Yes, it is true that we can describe the operation of
a computer’s hardware in terms of simple logical expressions. But no,
we cannot use the same expressions to describe the meanings of that
computer’s output — because that would require us to formalize those
descriptions inside the same logical system. And this, I claim, is some-
thing we cannot do without violating that assumption of consistency.”
(Minsky, 1991a).

Then Minsky continues, “In summary, there is no basis for assuming
that humans are consistent — nor is there any basic obstacle
to making machines use inconsistent forms of reasoning” (Minsky,
1991a). Moreover, it has been discovered that not only human knowl-
edge but also representations/models of human knowledge (e.g., large
knowledge bases) are inherently inconsistent (Delgrande et al., 1986).
Later more and more researchers started to understand that
inconsistent knowledge is not an exclusion from a general pattern but
contrary to this, is the most widespread form of knowledge (cf., for
example, (Schwanke and Kaiser, 1988; Balzer, 1991; Burgin, 1991d;
Gabbay and Hunter, 1991; 1993; Easterbrook, 1996)). For instance,
Gabbay and Hunter wrote:
“We claim there is a fundamental difference between the way
humans handle inconsistency and the way it is currently handled in
formal logical systems: To a human, resolving inconsistencies is not
necessarily done by “restoring” consistency but by supplying rules
telling one how to act when the inconsistency arises. For artificial
intelligence there is an urgent need to revise the view that inconsis-
tency is a “bad” thing, and instead view it as mostly a “good” thing.
Inconsistencies can be read as signals to take external action, such
as “ask the user”, or invoke a “truth maintenance system”, or as sig-
nals for internal actions that activate some rules and deactivate other
rules. There is a need to develop a framework in which inconsistency
can be viewed according to context, as a vital trigger for actions, for
learning, and as an important source of direction in argumentation.”
(Gabbay and Hunter, 1991).
The perspective-bound character of information and informa-
tion processing often results in natural inconsistency coming from
different perspectives or from a faulty perception or from faulty
information processing, such as processing on the basis of incom-


plete knowledge from a single perspective. As a result, now many
understand that contradiction handling is one of the central prob-
lems in AI and an active research goes in the area of inconsistent
knowledge, comprising various directions and approaches. As Ben-
ferhat and Garcia (2002) write, the problem of handling conflicting
information is important in many areas of AI, being particularly
present in such areas as distributed knowledge bases and databases,
default/defeasible reasoning, data fusion, diagnosis, decision-making,
dynamic expert systems, merging ontologies, ontology evolution,
knowledge transition from one formalism to another, belief revi-
sion, and more generally, in managing the dynamics of databases
and knowledge bases, e.g., in merging knowledge bases.
Some researchers think (cf., for example, (Thagard, 1988)) that
AI liberates them from the narrow constraints of standard logic, for
example, from the necessity of consistency, by enforcing rigor in a dif-
ferent way, namely, by the condition of computational realizability.
However, it is possible to realize inconsistent systems on computers,
and this happens in computational practice; yet without means of work-
ing with inconsistent knowledge, such situations bring practitioners
to negative results and may be very dangerous. That is why more
and more research is aimed at inconsistent knowledge.
There are three basic approaches to dealing with inconsistency.

1. The first approach, which we call the restoration approach, is


aimed at restoring consistency of an inconsistent knowledge sys-
tem, e.g., a database (Rescher and Manor, 1970), by transforming
this system.
2. The second approach, which we call the tolerance approach, tries
to tolerate inconsistency by using systems with weaker rules of
inference, such as non-classical logics, and including an inconsis-
tent knowledge system into such a system.
3. The third approach, which we call the structural approach, is based
on restructuring of an inconsistent knowledge system to eliminate
a possibility of mutually inconsistent parts, components or ele-
ments interacting with one another.
The main technique in the restoration approach is exclusion


of those formulas (statements) that cause contradictions. This
is achieved by utilizing logics that allow dynamical changes in
the process of their functioning. Examples of such strategies are
non-monotonic logics (McDermott and Doyle, 1980; Marek and
Truszczynski, 1993; Makinson, 2005), default logics (Brewka, 1991;
Reiter, 1980), and belief revision (Dalal, 1988; Satoh, 1988; Friedman
and Halpern, 1994; Gärdenfors and Rott, 1995; Pollock and Gillies,
2000; Wassermann, 2000).
A non-monotonic logic is a formal logic in which learning a new piece
of knowledge can transform the set of what is known, with the aim of
eliminating contradictions. A non-monotonic logic can handle various rea-
soning tasks, such as default reasoning when consequences may be
derived only because of lack of evidence of the contrary, abductive
reasoning when consequences are deduced as most plausible expla-
nations, and belief revision when new knowledge may contradict
old beliefs and some beliefs that cause contradictions are excluded
(Marek and Truszczynski, 1993; Makinson, 2005).
A default logic is a formal argument-based system that uses rea-
soning with defeasible rules. Such rules give a justification for believ-
ing the statement, which is the consequent of the rule, whenever there
is a justification for believing the statement, which is the antecedent
of this rule (Reiter, 1980; Roos, 2000). Default rules have the follow-
ing form:

α(x) : β(x)
─────────────
γ(x).

This rule says that if some statement α is true of x and it is
reasonable to believe the statement β(x), then conclude the state-
ment γ(x). For instance, if α(x), the statement “x is a car,” is a
fact and β(x), the statement “x has all parts in working order,” is
justified by the recent inspection of the car (a valid argument), then
γ(x) is the conclusion “x can be driven from Los Angeles to San
Diego.” However, if you later discover that the battery is dead, then
the conclusion will be blocked due to this new argument.
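
The following sketch is a simplified illustration assuming string-encoded statements and a single default rule; it shows how the car example above can be treated computationally: the consequent is concluded while the justification is undefeated and becomes blocked when new evidence (the dead battery) defeats it. Real default logics, such as Reiter's, are considerably richer.

# Sketch of default-rule application with blocking (illustrative only).

facts = {"car(c1)"}                    # valid statements about the domain
defeated = set()                       # justifications defeated by new evidence

def apply_default(alpha, beta, gamma):
    """alpha(x) : beta(x) / gamma(x) -- conclude gamma if alpha is a fact
    and the justification beta has not been defeated."""
    if alpha in facts and beta not in defeated:
        return gamma
    return None

rule = ("car(c1)", "in_working_order(c1)", "can_drive_LA_to_San_Diego(c1)")
print(apply_default(*rule))            # 'can_drive_LA_to_San_Diego(c1)'

# New evidence: the battery is dead, so the justification is defeated
# and the conclusion is blocked -- a non-monotonic effect.
defeated.add("in_working_order(c1)")
print(apply_default(*rule))            # None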
A justification for believing the antecedent usually consists of


valid statements (facts) about the knowledge domain, which are
called premises or evidence, and of statements that have their own
justification by defeasible rules. As a result, a justification for a state-
ment has the form of a tree and is called the argument for this state-
ment.
Since the rules used for constructing arguments are defeasible new
evidence can change arguments causing, in turn, transformation of
knowledge represented as a set of believed statements. By allowing
new information to block a conclusion, default rules provide a non-
monotonic effect.
It is natural to use super-recursive algorithms (Burgin, 2005)
for inference in non-monotonic logics and argument-based systems
because in general, there are no clear boundaries where the changes
caused by new information stop in non-monotonic logics or where
argumentation comes to an end.
The aim of the second approach called the tolerance approach is
to tolerate inconsistency (Bertossi et al., 2005) by using non-classical
logics and including an inconsistent knowledge system into such a
logic. As a result, inconsistency is treated at a higher level (Toul-
min, 1956). Examples of such methods are paraconsistent logics (Da
Costa, 1963; Priest et al., 1989; Ross, 1994; Besnard and Hunter,
1995; Damasio and Pereira, 1997) and argumentation (Dung, 1995;
Amgoud and Cayrol, 1998).
Paraconsistency has the meaning of besides or beyond consistency,
and the main idea of paraconsistent logics is to build and study
theories, which are contradictory from the classical point of view
but are not trivial, i.e., in contrast to classical logics, paraconsistent
logics do not have the property that any formula can be deduced
from every set of hypotheses that contains contradictory formulas.
Researchers built a number of paraconsistent logics, which belong
to one of the following directions:

— the three-valued approach, in which the classical logic is trans-


formed into a logic with three truth values: true, false and both-
true-and-false;
— the relevance approach, in which the classical logic is adapted to


the idea that the antecedent of an implication must be relevant
to its consequent;
— the non-truth-functional approach, in which the classical logic is
turned into a logic based on a non-truth-functional version of
negation;
— the non-adjunctive approach, in which the classical logic is
adapted to the idea that the inference of the formula (statement)
A&B from formulas (statements) A and B can fail;
— the annotation approach, in which the classical logic is changed
into a logic where atomic formulas are marked with believed truth
values.
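
As a hedged illustration of the three-valued approach, the following sketch implements one common choice of tables (essentially those of Priest's logic of paradox LP, with the value B read as both-true-and-false and the designated values T and B); other paraconsistent systems use different tables.

# Three truth values: T (true), F (false) and B (both-true-and-false).
# Tables as in Priest's LP: negation fixes B, conjunction is the minimum
# with respect to the order F < B < T, and T, B are the designated values.

T, B, F = "T", "B", "F"
ORDER = {F: 0, B: 1, T: 2}

def neg(a):
    return {T: F, B: B, F: T}[a]

def conj(a, b):
    return a if ORDER[a] <= ORDER[b] else b

def designated(a):                     # values on which formulas are accepted
    return a in (T, B)

# A contradiction A & not-A, with A valued B, is still designated,
# so it does not make every formula derivable (no trivialization).
A = B
print(conj(A, neg(A)), designated(conj(A, neg(A))))       # B True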

Tahara and Nobesawa (2006) consider two approaches to reason-


ing in an inconsistent knowledge base tolerating inconsistency of the
whole base: the consistency-based method and the argument-based
method.
The consistency-based method performs deductive reasoning with
consistent subsets selected from an inconsistent knowledge base.
Zhang et al. (2010) also studied the argument-based approach to
tolerating inconsistency.
The argument-based method takes a consistent knowledge (data)
item as an argument and selects a consistent knowledge (data) system
in the given inconsistent knowledge base, which allows deriving the
chosen argument. In such a way, various consistent knowledge (data)
systems are constructed based on different arguments (Roos, 2000).
A form of the consistency-based method called the coherence-
based approach is developed in (Nebel, 1991; 1994; Pinkas and Loui,
1992) and studied in (Koriche, 2001). It uses a consolidation oper-
ation selecting several consistent subsets from the knowledge base
and an entailment relation, which is employed for applying inference
procedures to these subsets. An important advantage of coherence-
based approach is its flexibility based on a possibility to use differ-
ent classes of consolidation operations depending on importance or
relevance of knowledge (data) stored in the knowledge (data) base
(Nebel, 1991).
Examples of systems that select one consistent subsystem are pos-


sibilistic logic, linear ordering (Nebel, 1994), Adjustment and Maxi-
adjustment (Williams, 1994; 1996).
Examples of techniques that select several consistent subsys-
tems are acceptable sub-bases (Rescher, 1976), preferred sub-bases
(Brewka, 1989), Papini’s revision function (Papini, 1992) and lexico-
graphical approach (Lehmann, 1995).
An efficient approach to eliminating inconsistency by changing the
set of permissible truth-values is developed in multivalued logics and
especially in fuzzy logic, which is a very popular direction in logic
and its applications (Zadeh, 1975c; McNeill and Freiberger, 1993;
Nguyen and Walker, 1996; Bandemer and Gottwald, 1996).
In classical logics, the statement can only be either true or false,
1 or 0, right or wrong. This way of thinking proved itself as a highly
valuable intellectual technique. However, this type of logic becomes
inefficient when it is necessary to reason about properties and vari-
ables with more than two values, or in situations where multiple
incompatible variables or orthogonal properties are involved. In these
cases, we need multivalued logics, i.e., logics in which there are more
than two truth values. The most popular of them are fuzzy logics.
Fuzzy logics were developed to handle the concept of partial truth
(Novák et al., 1999). It means that the truth value may range between
completely true denoted by 1 and completely false denoted by 0.
Namely, in fuzzy logics, statements take truth values in the interval
[0, 1].
There are different types of fuzzy logics: predicate fuzzy log-
ics and propositional fuzzy logics, which include the basic propo-
sitional fuzzy logic BL, monoidal t-norm-based propositional fuzzy
logic MTL, Łukasiewicz fuzzy logic (Łukasiewicz, 1920), Łukasiewicz–
Tarski fuzzy logic (Łukasiewicz and Tarski, 1930), Gödel fuzzy logic,
and product fuzzy logic.
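
To illustrate how these logics compute with partial truth, the following sketch implements the three basic t-norms that serve as conjunction in Łukasiewicz, Gödel and product fuzzy logic; the function names are, of course, only illustrative.

# The three basic t-norms used as conjunction in the fuzzy logics above;
# truth values are real numbers in the interval [0, 1].

def lukasiewicz_and(a, b):
    return max(0.0, a + b - 1.0)

def goedel_and(a, b):
    return min(a, b)

def product_and(a, b):
    return a * b

a, b = 0.7, 0.6
for name, t_norm in (("Lukasiewicz", lukasiewicz_and),
                     ("Goedel", goedel_and),
                     ("Product", product_and)):
    print(name, round(t_norm(a, b), 2))
# Lukasiewicz 0.3, Goedel 0.6, Product 0.42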
The third direction called the structuring approach is based on
implicit or explicit utilization of logical varieties, quasi-varieties and
prevarieties (Burgin, 1991d; 1997d; 2004a; 2008a).
Although conventional logical systems based on logical calculi
have been successfully used in mathematics and beyond, they have
definite limitations that restrict their applications in many cases. For


instance, the principal condition for any logical calculus is its con-
sistency. At the same time, knowledge about large object domains
(in science or in practice) is essentially inconsistent (Burgin, 1991d;
Mestdagh et al., 1991; Nguen, 2008). From this perspective, Partridge
and Wilks (1990) write, “because of privacy and discretionary con-
cerns, different knowledge bases will contain different perspectives
and conflicting beliefs. Thus, all the knowledge bases of a distributed
AI system taken together will be perpetually inconsistent.” Conse-
quently, when conventional logic is used for formalization of such
knowledge, it is possible to represent only small fragments of the
object domain. Otherwise, contradictions appear.
Paraconsistent logics, which form the base for the second
approach, are inferentially weaker than classical logic, that is, they
deem fewer inferences valid and have other limitations. For instance,
Weinzierl (2010) explains why paraconsistent reasoning is not accept-
able for many real-life scenarios and other approaches are necessary.
In addition, paraconsistent logics attempt to deal with contradictions
in a discriminating way.
To eliminate all these limitations and to work with inconsis-
tent knowledge in a logically correct way, logical prevarieties and
varieties were introduced (Burgin, 1991d). Here we introduce two
new structures — logical quasi-varieties and quasi-prevarieties, using
them together with logical prevarieties and varieties as a flexible and
constructive solution for the problem of inconsistency. Informally, a
logical variety is a formal structure built from logical calculi as its
components. Construction of a logical variety is performed by gluing
together well-formed parts of the variety components. The main idea
is separation of contradictory expressions by placing them in dif-
ferent calculi and restricting inference so that it is locally confined
to a single calculus from the given logical variety. This approach
is similar to building topological manifolds from open subsets of
vector spaces, which have a much better structure than the whole
manifold. Logical prevarieties and quasi-varieties weaken conditions
on the gluing structure making the whole system more flexible in
applications.
Logical varieties, prevarieties and quasi-varieties represent the


natural development of logical calculi, being more advanced systems
of logic, and thus, they show the direction in which mathematical
logic will inevitably go. Including logical calculi as the simplest case,
logical varieties and related systems offer several benefits in compar-
ison with the conventional logic:

1. Logical varieties, prevarieties and quasi-varieties give an exact


and rigorous structure to deal with all kinds of inconsistencies.
2. Logical varieties, prevarieties, and quasi-varieties allow model-
ing/realization of all other approaches to inconsistent knowledge.
For instance, it is possible to use any kind of paraconsistent
logics as components of logical varieties. In (Burgin, 1991d),
it is demonstrated how logical varieties realize non-monotonic
inference.
3. Theoretical results on logical varieties, prevarieties and quasi-
varieties provide means for more efficient application of logical
methods to problems in different areas. Examples of such applica-
tions are: knowledge/information integration and program inter-
operability (Burgin, 2004a), the Logic of Reasonable Inferences,
which is a logical variety constructed to represent and process
contradicting opinions in the legal domain (de Vey Mestdagh,
1991; 1998; Burgin and de Vey Mestdagh, 2011; 2015; de Vey
Mestdagh and Burgin, 2015) and modeling psychological phe-
nomena of thinking, emotions, and will (Burgin and Rybalov,
2003).
4. Logical varieties, prevarieties and quasi-varieties allow partition-
ing of an inconsistent knowledge system into consistent parts
without loss of information, while, at the same time, preserving
a possibility of using powerful tools of classical logic for reason-
ing and justification. Taking an inconsistent logical system, this
technique restructures it into a logical system of locally consis-
tent components.
5. Logical varieties, prevarieties, and quasi-varieties allow utiliza-
tion of different kinds of logics in the same knowledge system,
possessing multifunctionality in applications. For instance, it is
possible to use a combination of the classical predicate calculus


and non-monotonic calculus to represent two perspectives one of
which is based on complete knowledge and the other on incom-
plete knowledge.
6. Logical varieties, prevarieties and quasi-varieties allow sepa-
ration of different parts in a knowledge system and working
with them independently when these parts either demand dif-
ferent logics or satisfy dissimilar (sometimes contradictory) con-
ditions or employ distinct rules for transformation and/or inter-
pretation.
7. Logical varieties, prevarieties, and quasi-varieties provide means
to reflect change of beliefs, knowledge and opinions without loss
of previously existed beliefs, knowledge and opinions even in the
case when new beliefs, knowledge and opinions contradict
what was before (Burgin and Rybalov, 2003). For instance, in
(Burgin, 2004a), varieties and prevarieties are applied to knowl-
edge integration, while in (Burgin, 2008a), they are applied to
modeling processes in temporal databases.
8. Logical varieties, prevarieties, and quasi-varieties provide means
for efficient knowledge representation in multi-agent environ-
ment. Indeed, it is natural to represent knowledge of each agent
by a logical calculus or logical variety, while the combined knowl-
edge system naturally becomes a logical variety, prevariety or
quasi-variety. Note that even when knowledge systems of differ-
ent agents are consistent with one another, it is beneficial to
represent the combined knowledge system by a logical variety,
prevariety or quasi-variety because such a representation allows
more efficient operation with knowledge in comparison with uti-
lization of a single calculus.
9. Logical varieties, prevarieties and quasi-varieties provide means
for efficient modeling and management of distributed knowledge
and databases. Indeed, it is natural to represent knowledge (data)
in each local component by a logical calculus or logical variety,
while the combined knowledge (data) system of the whole knowl-
edge base (database) naturally becomes a logical variety, preva-
riety or quasi-variety.
10. Logical varieties, prevarieties and quasi-varieties provide means


for efficient modeling and management of temporal knowledge
and databases (Burgin, 2008a). Indeed, it is natural to represent
knowledge (data) of each temporal section by a logical calculus
or logical variety, while the combined knowledge (data) system of
the whole knowledge base (database) naturally becomes a logical
variety, prevariety or quasi-variety.

In comparison with paraconsistent logics, logical varieties, quasi-


varieties, and prevarieties allow utilization of sufficiently powerful
means of logical inference, for example, deductive rules of the classi-
cal predicate calculus. In addition, paraconsistent logics attempt to
deal with contradictions in a discriminating way, while logical vari-
eties, quasi-varieties, and prevarieties treat contradictions and other
inconsistencies by a separation technique.
In comparison with non-monotonic logics, which form the base for
the first approach, logical varieties, quasi-varieties, and prevarieties
provide tools for preserving all points of view, approaches and posi-
tions even when some of them taken together lead to contradiction.
Due to their flexibility, logical varieties, quasi-varieties, and prevari-
eties allow treating any form of logical contradictions in a rigorous
and consistent way.
For systems dealing with inconsistent knowledge, Gärdenfors
(1988) suggested an important principle of minimal change: keep
as much of the initial information as possible. Logical
varieties allow preserving all initial information.
These qualities of logical varieties are especially important for
normative, in particular, legal, knowledge because this knowledge
consists of a collection of formalized systems, a collection of adopted
laws, a collection of existing traditions and precedents, and a collec-
tion of people’s opinions. In addition, in the process of functioning,
normative (legal) knowledge involves a variety of situational knowl-
edge, beliefs, and opinions. To analyze and use this diversity, it is
necessary to have a flexible system that allows one to make sense of
all different approaches without discarding them in an attempt to
build a unique consistent system. To formalize these characteristics
of normative knowledge, a form of a logical variety, the Logic of Rea-


sonable Inferences (LRI) was developed (de Vey Mestdagh et al.,
1991) and further extended in (de Vey Mestdagh and Burgin, 2015).
The LRI was subsequently used as specification for the implementa-
tion of a knowledge based system shell called Argumentator (de Vey
Mestdagh, 1998). This shell has consequently been used to acquire
and represent legal knowledge. The resulting legal knowledge based
system has been successfully utilized to test the empirical validity
of the theory about legal reasoning and decision making modeled by
the LRI (de Vey Mestdagh, 1998).
It is interesting that several other systems used for inconsis-
tency resolution, e.g., Multi-Context Systems (Weinzierl, 2010) are
also logical varieties, quasi-varieties, and prevarieties. For instance,
it is true for Multi-Context Systems because bridge rules used in
Multi-Context Systems for non-monotonic information exchange are
functions that glue together components of a logical quasivariety,
prevariety or variety.
In addition, when in the argument-based approach, various consis-
tent knowledge systems are constructed based on different arguments
(Roos, 2000), these systems together also form a logical quasivariety,
prevariety or variety because some of these arguments may be incon-
sistent. Similar techniques of selecting several consistent subsystems
are used in acceptable subbases (Rescher, 1976), preferred subbases
(Brewka, 1989), Papini’s revision function (Papini, 1992) and lexico-
graphical approach (Lehmann, 1995).
One more example of logical varieties and prevarieties implicit uti-
lization is the approach of Benferhat and Garcia (2002) to handling
inconsistent knowledge bases by local stratification. The authors
investigate the idea of reasoning in prioritized and possibly inconsis-
tent knowledge bases using a localized contextual technique. Priori-
ties, such as a preference relation or reliability relation, are assigned
to knowledge (data) items, making it possible to provide more meaningful and
reliable information to the user.
Priorities stratify the database or its part forming a logical vari-
ety, in which a component is determined by a fixed level of priority.
As priorities are not supposed to be given globally between all the


beliefs in the knowledge base, but locally inside sets of pieces of infor-
mation responsible for inconsistencies, prioritization forms a subva-
riety of the variety that represents the whole database. This local
stratification offers more flexibility for representing priorities between
beliefs.
In addition, distributive reasoning is receiving increasing atten-
tion due to the distributed nature of knowledge on the Web (Binas
and McIlraith, 2008). However, conventional logical systems do not
support distributive reasoning. Only logical varieties, prevarieties
and quasivarieties provide rigorous mathematical tools for distribu-
tive reasoning.
While different approaches suggest different techniques for han-
dling inconsistency, this is only one stage of the more general process,
which is called managing inconsistency and includes four stages:

1. Monitoring and detection of inconsistencies.


2. Diagnostics of the causes of inconsistencies.
3. Handling inconsistencies.
4. Monitoring and handling results of previously performed actions.

In some cases, the fourth stage can cause new inconsistencies. This
demands going back to the second stage and repeating the whole
cycle.
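
A skeleton of this four-stage cycle might look as follows; the detection, diagnosis and handling functions are placeholders supplied by the user, and the toy example treats a literal occurring together with its negation as an inconsistency.

# Skeleton of the four-stage inconsistency-management cycle; detect,
# diagnose and handle are placeholders supplied by the user.

def manage_inconsistency(knowledge, detect, diagnose, handle, max_rounds=10):
    for _ in range(max_rounds):
        found = detect(knowledge)                 # stage 1: monitoring and detection
        if not found:
            break
        causes = diagnose(knowledge, found)       # stage 2: diagnostics
        knowledge = handle(knowledge, causes)     # stage 3: handling
        # stage 4: the next pass monitors the results of the handling actions
    return knowledge

# Toy usage: an "inconsistency" is a literal present together with its
# negation; handling simply removes the negated counterpart.
detect = lambda kb: {f for f in kb if ("not " + f) in kb}
diagnose = lambda kb, found: found
handle = lambda kb, causes: kb - {"not " + c for c in causes}

print(sorted(manage_inconsistency({"A", "not A", "B"}, detect, diagnose, handle)))
# ['A', 'B']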
The most general operational framework for managing inconsis-
tency was developed by Nuseibeh et al. (1991). Its modification is
presented in Figure 3.15.
Nuseibeh et al. (1994) use the term inconsistency to denote any
situation in which two descriptions do not obey some relationship that
is prescribed to hold between them. A precondition for inconsistency
is that the descriptions in question have some area of overlap. A rela-
tionship between descriptions can be expressed as a consistency con-
dition, against which descriptions can be checked. In current practice,
some such consistency rules are captured in various project docu-
ments, others are embedded in tools, and some are not captured
anywhere.
[Figure 3.15 shows a diagram in which consistency conditions (rules) are applied and refined while inconsistencies are monitored and detected (locate, identify, classify), diagnosed, and handled (ignore, defer, tolerate, circumvent, ameliorate, resolve), after which the consequences of the handling actions are monitored and handled, supported by measurement of inconsistency and analysis of impact and risk.]

Figure 3.15. A framework for managing inconsistency

Going in this direction, we define inconsistency in a very general


way using consistency conditions, i.e., to define consistency, a system
C of consistency conditions is determined.

Definition 3.3.1 (Nuseibeh et al., 2001). A system R is consistent
(inconsistent) if it satisfies (does not satisfy) all conditions
from C.
It is natural to consider three types of consistency.

Definition 3.3.2. A knowledge system K is self-consistent with


respect to a system C of conditions if the conditions from C do not
involve any other system and K satisfies all conditions from C.
For instance, a set K of formulas is self-consistent if it does not
contain contradictions, e.g., A & ¬A.
There is also relational consistency, i.e., consistency or compati-
bility of a knowledge system with another system.

Definition 3.3.3. A knowledge system K is epistemically consistent


with respect to a system C of conditions relative to a knowledge
system H if after adding H to K, their union H ∪ K satisfies all


conditions from C.
For instance, a set K of formulas is epistemically consistent in con-
nection with a knowledge system H if K does not contain a formula
that contradicts some formula from H, e.g., when K contains a formula A,
while H contains the formula ¬A.

Definition 3.3.4. A knowledge system K is inferentially consistent


with respect to a system C of conditions in relation to an inference
system R if application of R to K gives a system that satisfies all
conditions from C.
For instance, a set K of formulas is inferentially inconsistent in
relation to the deduction rules R if it is possible to deduce a contradic-
tion from K, e.g., A & ¬A, using rules from R. Otherwise, the set K
is inferentially consistent.
Another type of consistency conditions C gives us another type
of consistency studied in mathematical logic.

Definition 3.3.5. A system of logical formulas K in a logical lan-


guage L is inflationally consistent with respect to a system C of
conditions apropos an inference system R if application of R to K
does not give all formulas in the language L.
However, when defining consistency and inconsistency, it is natural to
take knowledge domains into account. This brings us to the following
concept.

Definition 3.3.6. Two knowledge systems K and H are orthogonal


if their domains do not intersect.
For instance, let us consider two sets of propositions K = {A is
blue, B is green, C is red, E is yellow} and H = {X is true, Y is
false, Z is true, V is true}. Then systems K and H are orthogonal.
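
As a small illustration of Definitions 3.3.2, 3.3.3 and 3.3.6, the following sketch takes the consistency condition C to be "no literal occurs together with its negation" and represents knowledge systems as finite sets of signed literals; all names and encodings here are assumptions made for the example.

# The consistency condition C: no literal occurs together with its negation.
# Knowledge systems are finite sets of signed literals such as "A" and "~A".

def negate(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def self_consistent(K):                          # Definition 3.3.2
    return all(negate(f) not in K for f in K)

def epistemically_consistent(K, H):              # Definition 3.3.3
    return self_consistent(K | H)                # the union must satisfy C

def domain(K):
    return {literal.lstrip("~") for literal in K}

def orthogonal(K, H):                            # Definition 3.3.6
    return domain(K).isdisjoint(domain(H))

K, H = {"A", "B"}, {"~A", "C"}
print(self_consistent(K))                        # True
print(epistemically_consistent(K, H))            # False: K has A, H has ~A
print(orthogonal({"A", "B"}, {"X", "Y"}))        # True: the domains do not intersect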
To deal with inconsistency in a logically correct way, the concepts
of a logical quasi-variety, prevariety, and variety were introduced
and studied (Burgin, 1991d; 1997d; 2004a; 2008a). They general-
ize in a natural way the concept of a logical calculus and are more
advanced systems of logic. Logical varieties include logical calculi
as the simplest case, as well as towers of calculi introduced by
Maslov (1987) to represent dynamic aspects of formal theories. There


are different types and kinds of logical varieties, prevarieties, and
quasi-varieties:
Deductive or syntactic logical varieties, prevarieties and quasi-
varieties.
Functional or semantic logical varieties, prevarieties, and quasi-
varieties.
Model or pragmatic logical varieties, prevarieties, and quasi-varieties.
Syntactic logical varieties, quasi-varieties, and prevarieties are sets
of logical formulas (expressions), which are structured in a definite
way as systems of logical calculi and their mappings. Semantic (func-
tional) logical varieties and prevarieties are sets of logical formulas
(expressions), which are formed by separating those parts that rep-
resent definite semantic units. In contrast to syntactic and semantic
varieties, model logical varieties are essentially formal mathematical
structures, which are built by gluing models (in a logical sense) of
components in some syntactic or semantic variety. In what follows,
we consider only syntactic logical varieties, quasi-varieties, and pre-
varieties. Semantic and model logical varieties, and prevarieties are
studied in (Burgin, 1997d). The most popular kinds of model logi-
cal varieties are topological manifolds (Gauld, 1974; Lee, 2000) and
supermanifolds (Bartocci et al., 1991). Chains of multi-agent clus-
ters or Kripke frames studied in (Muyeba and Rybakov, 2014) give
another example of model logical varieties.
Syntactic logical varieties, quasi-varieties, and prevarieties are
built from logical calculi as buildings are built from blocks. The main
idea of syntactic logical varieties, prevarieties, and quasi-varieties is
in presenting sets of formulas using logical calculi as local coordi-
nate systems. Semantically, it allows one to describe a domain of
interest, e.g., a database, knowledge of an individual or the text of
a novel, by a collection of logical formulas (expressions) and then to
divide the domain into parts, each of which allows representation by
an adequate logical calculus, building in such a way a syntactic logical
variety, prevariety or quasivariety. That is why we first give a rigorous
mathematical representation of the concept of a logical calculus.
Let us consider a formal, e.g., logical, language L, and an infer-


ence language R. Typically L consists of well-formed logical formulas
(expressions) and R comprises rules (algorithms) of inference.

Definition 3.3.7. A syntactic or deductive logical calculus, usu-


ally called a logical calculus, is a triad (a named set) of the form
C = (A, H, T ) where H ⊆ R and A, T ⊆ L, A is the set of axioms,
H consists of inference rules (e.g., rules of deduction), by which the
theorems of the calculus are deduced from axioms, and the set of the-
orems T is obtained by applying algorithms/procedures/rules from
H to elements from A.
Two types of logical calculi are considered in the literature:

• A pure logical calculus uses only a logical language and has only
logical axioms.
• An applied logical calculus or formal theory uses some formal lan-
guage and has non-logical axioms.

The classical propositional calculus and predicate calculus are


pure logical calculi.
Formal arithmetic and axiomatic set theory are applied logical
calculi.
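
A minimal computational reading of Definition 3.3.7, assuming formulas are strings and inference rules are functions from sets of formulas to newly derived formulas, is sketched below; the toy modus ponens rule and the bound on iterations are illustrative choices, not part of the definition.

# A calculus C = (A, H, T): formulas are strings, each rule in H maps a set
# of formulas to newly derived formulas, and T is the closure of A under H.

def theorems(axioms, rules, max_steps=100):
    derived = set(axioms)
    for _ in range(max_steps):
        new = set()
        for rule in rules:
            new |= rule(derived)
        if new <= derived:                       # closure reached
            break
        derived |= new
    return derived

def modus_ponens(formulas):
    """From X and X->Y derive Y (a toy rule over string formulas)."""
    out = set()
    for f in formulas:
        if "->" in f:
            antecedent, consequent = f.split("->", 1)
            if antecedent in formulas:
                out.add(consequent)
    return out

A = {"p", "p->q", "q->r"}
print(sorted(theorems(A, [modus_ponens])))       # ['p', 'p->q', 'q', 'q->r', 'r']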
Gabbay introduced labeled deductive systems as a new kind of
logics (cf., (Gabbay, 1996)). The difference between traditional log-
ics and labeled deductive systems is in formulas that are manipulated
by them, i.e., traditional logics and labeled deductive systems have
different logical languages L of well-formed logical formulas (expres-
sions) and rules (algorithms) of inference R. While traditional logics
manipulate conventional logical formulas, labeled deductive systems
extend the concept of a logical formula manipulating pairs of conven-
tional logical formulas and labels (names). The labels may be terms
of an algebra, formulas of another logic, numbers, names of resources,
or names of databases, representing information about the formula it
labels. Consequently, deduction algorithms (rules) are also changed
as they take into account labels of logical formulas. As a result, any
labeled deductive system is a syntactic or deductive logical calculus
in the sense of Definition 3.3.7.
It is interesting that according to Gabbay, to better characterize


a deductive calculus, it is not enough to define the deduction relation ⊢ as
a binary relation; it is also necessary to identify the algorithmic ways this relation
is established (Gabbay, 1996). This well correlates with Definitions
3.3.7–3.3.11, according to which not only rules but more general algo-
rithms of inference are included in the inference languages R used
for building logical calculi, prevarieties, and varieties.
Classical logical calculi represented declarative or descriptive
knowledge. Later new calculi were elaborated for representation of
other types of knowledge. For instance, the lambda calculus (also
written as λ-calculus) was originally constructed by Alonzo Church
(1903–1995) to establish a logical theory of functions for further
extension to a functional foundation for mathematics (Church, 1932).
However, later researchers discovered that the lambda calculus pro-
vided a resourceful characterization of computability. At first, Kleene
(1936) proved that lambda-definability is equivalent to Gödel–
Herbrand recursive functions. Then Turing (1937) demonstrated that
the computational model later called a Turing machine is equivalent
to the lambda calculus. These results demonstrated that the lambda
calculus represented operational knowledge.
Note that although Church's original system was inconsistent
due to the Kleene–Rosser paradox (Barendregt, 1984), it has been
very useful both in theoretical and practical domains demonstrating
practicality of inconsistent systems. For instance, when computers
became more advanced, lambda calculus influenced the design of
several programming languages including Algol 60, Algol 68, Pascal,
and LISP. Lambda calculus also provides an operational semantics
for programming languages.
One more example of operational knowledge representation is the
π-calculus (also written as pi-calculus), which is a process calculus.
The π-calculus elaborated by Milner allows channel names to be
communicated along the channels themselves, and in this way it is
able to describe concurrent computations that go on in a network
that may change during the computation (Milner, 1999). In addition
to the original application to concurrent systems, the π-calculus has also
been used to reason about business processes and molecular biology.
Utilizing the concept of a calculus, we build logical quasi-


prevarieties, quasi-varieties, prevarieties, and varieties. The goal is
the development of special tools for dealing with inconsistency by
separating contradictory logical expressions (formulas) and achiev-
ing local logical consistency without loss of information.
Let us take some class K of syntactic logical calculi, which use a
formal (in particular, logical) language L and an inference language
R, and a class F of partial mappings from L to L.

Definition 3.3.8. A triad M = (A, H, M ), where A and M are sets


of expressions that belong to L, namely, the set A = A(M) consists of
the axioms of M and the set M = T (M) consists of the theorems of
M, and H is a set of inference rules, which belong to the set R, is
called:

(1) a projective syntactic logical (K, F)-quasi-prevariety if there


exists a set C(M), which is called the cover of M, of logical calculi
C i = (Ai , Hi , Ti ) from K and a system of mappings fi : Ai → L
and gi : Mi → L(i ∈ I) from F, while each Ai consists of all
axioms and each Mi consists of some (not necessarily all) theo-
rems of the logical calculus C i, i.e., Mi ⊆ Ti, and for which the
equalities A = ∪i∈I fi(Ai), H = ∪i∈I Hi and M = ∪i∈I gi(Mi)
are valid (it is possible that C i = C j for some i ≠ j).
(2) a syntactic logical K-quasi-prevariety if it is a projective syntactic
(K, F)-quasi-prevariety where all mappings fi and gi that define

M are inclusions (monomorphisms), i.e., A = ∪i∈I Ai and M = ∪i∈I Mi.

Note that in principle, different logical calculi from K may use


different formal (logical) languages but it is possible to take L as the
union of these languages. The same is true for the inference language
R, which may be the union of inference languages used in different
logical calculi from K.
The main idea of projective syntactic logical quasi-prevarieties
and syntactic logical quasi-prevarieties is to correspond a given set
M of logical formulas to the union of components, which are sets of
theorems (deductible formulas) from calculi of a definite type. This
allows separation of formulas that are contradictory to one another


by placing them in different components of (projective) syntactic
quasi-prevarieties.
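
The gluing in Definition 3.3.8 can be illustrated, for the special case where all mappings fi and gi are identity inclusions, by the following sketch: the quasi-prevariety is obtained as the union of the axioms, the inference rules and the selected theorems of its components, while contradictory formulas remain in different components. The representation of components as Python triples is an assumption made only for this example.

# Gluing with identity inclusions: the quasi-prevariety (A, H, M) is the
# union of the axioms, inference rules and selected theorems of its components.

def glue(components):
    """components: iterable of (axioms, rules, selected_theorems) triads."""
    A, H, M = set(), set(), set()
    for axioms, rules, selected in components:
        A |= axioms
        H |= rules
        M |= selected
    return A, H, M

# Two components holding mutually contradictory formulas; each component is
# consistent on its own, and inference stays inside a single component.
C1 = ({"p"}, {"modus ponens"}, {"p", "q"})
C2 = ({"~q"}, {"modus ponens"}, {"~q"})

A, H, M = glue([C1, C2])
print(sorted(M))          # ['p', 'q', '~q'] -- q and ~q lie in different components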
Note that there are projective syntactic quasi-prevarieties and
syntactic quasi-prevarieties that are not logical. This happens when
not all calculi from K used for building these quasi-prevarieties are
logical calculi. For instance, we have operational calculus (Erdelyi,
2013) and calculus of variations (Sagan, 1992) in mathematics, which
are not logical calculi.
Distributed databases with the logical representation of informa-
tion and inference tools give a natural example of projective syntac-
tic logical quasi-prevarieties and syntactic logical quasi-prevarieties
assuming that it is possible to assign a logical calculus to each compo-
nent of these databases. This is possible when, for example, all data
in each database component are consistent. Note that data in rela-
tional distributed databases form syntactic quasi-prevarieties, which
are not logical.
Projective syntactic (K, F)-quasi-prevarieties are used for con-
structing syntactic (K, F)-quasivarieties as more logically organized
structures.

Definition 3.3.9. A projective syntactic (K, F)-quasi-prevariety


M = (A, H, M ) is called:
(1) a projective syntactic (K, F)-quasi-variety with the depth k if
for any indices i1, i2, i3, . . . , ik ∈ I either the intersections
F = fi1(Ai1) ∩ … ∩ fik(Aik) and G = gi1(Ti1) ∩ … ∩ gik(Tik) are empty
or for the pair (F, G), there exists a calculus C = (D, G, P) from K and
projections f : D → fi1(Ai1) ∩ … ∩ fik(Aik) and g : N → gi1(Mi1) ∩ … ∩ gik(Mik) from
F where N ⊆ P.
(2) A syntactic K-quasivariety with the depth k if it is a projective
syntactic (K, F)-quasivariety with depth k in which all mappings
fi and gi that define M are bijections on the sets Ai and Mi , cor-
respondingly, and for all intersections (cf., Definition 3.3.8(1)),
mappings f and g are bijections.
(3) A (full) projective syntactic (K, F)-quasi-variety if for any k > 0,
it is a projective syntactic (K, F)-quasi-variety with the depth k.
(4) A (full) syntactic K-quasi-variety if for any k > 0, it is a


K-quasi-variety with the depth k.
The main idea of projective syntactic quasi-varieties and syntac-
tic logical quasi-varieties is to represent knowledge in the form of a
given set of logical formulas (expressions) as the union of compo-
nents, which are sets of theorems (deductible formulas) from calculi
of a definite type, corresponding intersections of these sets also to
sets of theorems (deductible formulas) from calculi of the same type.
Note that there are projective syntactic quasi-varieties and syntactic quasi-
varieties that are not logical. This happens when not all calculi
from K used for building these quasi-varieties are logical calculi. For
instance, we have calculus of variations (Sagan, 1992) and calculus
of variations in the large, also called Morse theory (Milnor, 1963), in
mathematics, which are not logical calculi.
Projective syntactic (K, F)-quasi-prevarieties are also used for
building syntactic (K, F)-prevarieties as logical structures with
higher organization.

Definition 3.3.10. A projective syntactic (K, F)-quasi-prevariety


M = (A, H, M ) is called:
(1) A projective syntactic (K, F)-prevariety if Mi = Ti for all i ∈ I;
(2) A syntactic K-prevariety if it is a syntactic K-quasi-prevariety
in which Mi = Ti for all i ∈ I.
We see that the collection of mappings fi and gi forms a uni-
fied system of logical formulas called a prevariety or quasi-prevariety
using separate logical calculi C i . This structure can be useful in many
situations. For instance, mappings fi and gi allow one to establish
a correspondence between norms/laws that were used in one coun-
try during different periods or between norms/laws used in different
countries.
In the case of projective syntactic (K, F)-prevarieties, the set M of
logical formulas from a logical language L is represented by selecting
a system of calculi C i from K and mapping theorems of these calculi
into L so that all their images cover M . These calculi C i may have
different languages Li , different axioms (assumptions for reasoning)
Ai and/or different rules of inference Hi . However, all languages Li


are amalgamated in L and all rules of inference Hi are fused in R and
represented in H. For instance, it is possible that L = Lc ∪ LT ∪ LN
where Lc is the language of the classical predicate calculus, LT is the
language of the tense logic, LN is the language of the logic of norms.
In addition, axioms (assumptions for reasoning) Ai of the calculi C i
represent the generating base (assumptions for reasoning) A of the
syntactic variety M in a similar way.
Note that there are projective syntactic prevarieties and syntactic
prevarieties that are not logical. This happens when not all calculi
from K used for building these prevarieties are logical calculi. For
instance, we have differential and integral calculi in mathematics,
which are not logical calculi.
In comparison with syntactic K-quasi-prevarieties, syntactic
K-prevarieties have a better representation by calculi C i from K
than syntactic K-quasi-varieties. Namely, they are unions (in the
sense of named set theory (Burgin, 2011)) of these calculi C i from K.
A fragmentation (stratification) of a set of formulas in order to
make a logical prevariety or quasi-prevariety allows separation of
contradictory formulas making possible representation of each com-
ponent by a consistent calculus and restricting interference of con-
tradictory formulas. Note that in syntactic prevarieties and varieties
components (strata) contain all theorems from the corresponding cal-
culus, while in syntactic quasi-prevarieties and quasi-varieties com-
ponents (strata) can contain only a part of all theorems from the
corresponding calculus.
Definition 3.3.11. A projective syntactic (K, F)-quasi-prevariety
M = (A, H, M ) is called:

(1) A projective syntactic (K, F)-variety with the depth k if it is a pro-


jective syntactic (K, F)-quasivariety with the depth k in which
Mi = Ti for all i ∈ I.
(2) A syntactic K-variety with the depth k if it is a syntactic
K-quasivariety with depth k in which Mi = Ti for all i ∈ I,
and for all intersections (cf., Definition 3.3.8(1)), N = P , i.e., all
these intersections are projections of calculi from K.
(3) A (full) projective syntactic (K, F)-variety if for any k > 0, it is


a projective syntactic (K, F)-variety with the depth k.
(4) A (full) syntactic K-variety if for any k > 0, it is a K-variety
with the depth k.

We see that the collection of the intersections fi1(Ai1) ∩ … ∩ fik(Aik) and
gi1(Ti1) ∩ … ∩ gik(Tik) makes a unified system called a variety out of sepa-
rate logical calculi C i . For instance, these intersections can contain
norms/laws that were the same in one country during different peri-
ods of time or norms/laws common for different countries.
Projective syntactic (K, F)-varieties add one important feature
to properties of projective syntactic (K, F)-prevarieties. Namely, not
only components C i of the cover {C i ; i ∈ I} of M are calculi from K
but also all intersections of the component images in L are presented
by calculi C i from K.
Syntactic K-varieties combine properties of projective syntactic (K, F)-
varieties and syntactic K-prevarieties. Namely, they are unions of
these calculi C i from K and intersections of these calculi C i are also
calculi from K.
The main goal of syntactic logical varieties, quasi-varieties and
prevarieties is in presenting knowledge in the form of sets of logical
formulas as a structured logical system using logical calculi, which
have means for inference and other logical operations. Semantically, it
allows one to describe the domain of interest, e.g., a database, knowl-
edge of an individual or the text of a novel, by a syntactic logical vari-
ety dividing the domain in parts that allow representation by calculi.
A fragmentation of a set of formulas in the process of logical
variety formation allows separation of contradictory formulas making
each calculus consistent and restricting interference of contradictory
formulas.
Note that there are projective syntactic varieties and syntactic
varieties that are not logical. This happens when not all calculi from
K used for building these varieties are logical calculi. For instance, we
have differential and integral calculi in mathematics, which are not
logical calculi. The calculus as a mathematical discipline is a variety
with two components — differential calculus and integral calculus.
Each of these components is subdivided into two subcomponents —


calculus of a single variable and multivariable calculus.
The extended calculus as a mathematical discipline contains one
more component — time-scale calculus, which is a unification of dif-
ference equations with differential equations, combining integral and
differential calculi with the calculus of finite differences and offer-
ing a formalism for studying hybrid discrete–continuous dynamical
systems (Agarwal et al., 2002).
It is possible to use different properties of logical systems for strat-
ification of systems of formulas and building logical varieties, preva-
rieties and quasi-varieties. The most popular property is consistency,
which is used as a guideline for structuring a set of logical formu-
las into a logical variety, prevariety or quasi-variety with consistent
logical calculi as its components. However, it is also possible to use
other properties of logical calculi, such as completeness, axiom inde-
pendence or safety for dynamic logics, as guidelines for structuring.
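
As an illustration of such consistency-guided stratification, the following sketch greedily distributes a set of signed literals into components so that no component contains a literal together with its negation; the resulting components could then serve as (the theorem sets of) the calculi of a logical variety. The greedy strategy is only one possible assumption, not a prescribed construction.

# Greedy stratification: distribute signed literals into components so that
# no component contains a literal together with its negation.

def negate(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def stratify(formulas):
    components = []
    for f in formulas:
        for component in components:
            if negate(f) not in component:       # f keeps this component consistent
                component.add(f)
                break
        else:
            components.append({f})               # open a new component
    return components

print(stratify(["p", "q", "~p", "~q", "r"]))
# [{'p', 'q', 'r'}, {'~p', '~q'}]  (element order inside the sets may vary)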
Now let us reflect on components and levels of logical varieties,
prevarieties, and quasi-varieties.
Definition 3.3.12. The set A is called the lower level or the axioms
A(M) of the prevariety (quasi-variety or variety) M = (A, H, M ).
Indeed, formulas from A are used as the basis for inference in M,
i.e., they play the same role as axioms in logical calculi or formal
theories.
Definition 3.3.13. The set M is called the upper level or the
theorems T (M) of the prevariety (quasi-variety or variety) M =
(A, H, M ).
Indeed, formulas from M are deduced in M as theorems are
deduced in logical calculi or formal theories.
Definition 3.3.14. The set H is called the intermediate or infer-
ential level D(M) of the prevariety (quasi-variety or variety) M =
(A, H, M ).
Lemma 3.3.1. For any logical prevarieties (quasi-varieties or
varieties) M = (A, H, M ) and N = (B, G, N ), inclusions A ⊆ B
and H ⊆ G imply the inclusion M ⊆ N .
Proof is left as an exercise.


In comparison with varieties and prevarieties, logical quasivari-
eties and quasi-prevarieties are not necessarily globally closed under
logical inference. This trait provides higher flexibility in knowledge
representation and management.
An example of a logical variety is a distributed database or knowl-
edge base, each component of which consists of consistent knowledge/
data. In this case, the components of this knowledge/database are
naturally represented by components of a logical variety. Besides,
in one knowledge base different object domains may be represented.
In these domains some object may have properties that contradict
properties of an object from another domain. As an example let
us consider a knowledge base containing mathematical information.
Suppose that this information concerns some large mathematical field
like algebra or even its part — theory of groups. Mathematical logics
are frequently considered as the constructive basis of mathematics,
while logical calculi are viewed as precise models and formalizations
of real mathematical theories. However, the theory of groups does not
coincide with elementary (logical) theory of groups that is a deduc-
tive calculus. The field that is called in mathematics “the Theory of
Groups” contains various sub-theories (Hall, 1959).
In the theory of groups, such mathematical objects as finite and
torsion-free groups are studied. In any finite group, the formula
∀x ∃n (xⁿ = e) is valid, where e is the identity element. At the same
time, in torsion-free groups another formula, ∀x ∀n (xⁿ ≠ e), is true
(for all elements x ≠ e and all integers n ≥ 1).
Thus, if the theory of groups with its sub-theories, such as the theory
of finite groups and the theory of torsion-free groups, is represented as
a single calculus, then both these formulae produce a contradiction.
At the same time, a relevant logical variety in which sub-theories are
represented by its components provides means for consistent repre-
sentation of the theory of groups.
Another example of a mathematical field that cannot be repre-
sented by a consistent logical calculus but is naturally described by
a consistent logical variety is geometry. Indeed, geometry has many
subtheories — the Euclidean geometry, Riemannian geometry, a vari-
ety of non-Euclidean geometries, projective geometry, differential
geometry, analytic geometry, affine geometry, and metric geometry.
Impossibility of their immersion in a consistent logical calculus is
caused by the situation that axioms of some of these geometries con-
tradict axioms of others. For instance, the fifth postulate is basic
in the Euclidean geometry, while in non-Euclidean geometries, this
postulate is false.
Inference in a logical variety M is restricted to inference in its
components because at each step of inference, it is permissible to
use only rules from one set Hi applying these rules only to elements
from the set Ti . This allows one to better model non-monotonicity
of human thinking.
Indeed, the main difference between monotonic and non-
monotonic reasoning arises from the different kinds of knowledge
used in the process of inference. For instance, in the case of non-
monotonic reasoning, an inference rule of the following type can be
used: “A is true if B cannot be proved”, i.e., to prove A the sys-
tem relies on its ignorance of B. The statement B is not included in
the system of initial axioms. That is why by the above given rule of
inference, the statement A becomes true in the intellectual system.
However, it is possible that B becomes proved at some stage of the
inference. So, in this situation, the intellectual system must invali-
date A and even more — to revise each piece of knowledge depending
on A. In this way, the monotonic property of the consequence rela-
tion is violated. Usually, the statement A is excluded and the knowl-
edge/belief revision takes place. Logical varieties allow database users
and other intelligent systems not to eliminate knowledge/beliefs in
the process of revision but to build a new component from which all
knowledge/beliefs that contradict B are eliminated. In such a way,
all previously obtained knowledge/beliefs are preserved.
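The revision strategy just described can be rendered in a small sketch (plain Python, hypothetical names): when B becomes proved, the beliefs that rested on the ignorance of B are not destroyed; instead a new component is built without them, while the original component is preserved.

# Component-based belief revision in the spirit of logical varieties.
# Beliefs are strings; depends_on records the assumptions a belief rests on.

components = {"initial": {"A", "C"}}    # A was inferred because B could not be proved
depends_on = {"A": {"not-provable(B)"}}

def revise(components, proved_fact):
    """Build a new component without the beliefs that relied on the
    ignorance of proved_fact; keep the old component untouched."""
    broken = "not-provable(" + proved_fact + ")"
    invalidated = {belief for belief, assumptions in depends_on.items()
                   if broken in assumptions}
    components["revised after " + proved_fact] = components["initial"] - invalidated
    return components

revise(components, "B")
print(components["initial"])             # still contains both "A" and "C"
print(components["revised after B"])     # contains only "C"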

Definition 3.3.15. If a logical quasi-variety (quasi-prevariety, pre-


variety, or variety) M is built from the calculi C i , then these calculi
used in the formation of M are called components of M.
For instance, when a logical quasi-variety (prevariety or variety)
is used for knowledge representation in multi-agent environment, it is
natural to represent knowledge of each agent by a component of this
logical quasi-variety (prevariety or variety). In a similar way, when a
logical quasi-variety (prevariety or variety) is used for modeling and
management of distributed knowledge and databases, it is natural to
represent knowledge (data) in each local component by a component
of this logical quasi-variety (prevariety or variety).
We consider two types of logical varieties, prevarieties and quasi-
varieties.

Definition 3.3.16. (a) If all components of a logical quasi-variety


(quasi-prevariety, prevariety, or variety) M are pure logical calculi,
then M is a pure logical quasi-variety (quasi-prevariety, prevariety,
or variety).
(b) If at least one component of a logical quasi-variety (quasi-
prevariety, prevariety, or variety) M is an applied logical calculus,
then M is an applied logical quasi-variety (quasi-prevariety, prevariety,
or variety).

Lemma 3.3.2. Being a pure or applied logical quasi-variety (quasi-


prevariety, prevariety, or variety) does not depend on its cover.

Indeed, if a cover of a logical quasi-variety (quasi-prevariety, pre-


variety, or variety) M consists only of pure logical calculi, then the
language used in M is purely logical and, consequently, there are no
non-logical axioms in M. At the same time, if a cover of a logical
quasi-variety (quasi-prevariety, prevariety, or variety) M has at least
one applied logical calculus, then there is, at least, one non-logical
axiom in M and it has to belong to, at least, one component of any
cover of M and this component will be an applied logical calculus.
Although any logical calculus is a logical variety, this particu-
lar case does not give anything new in logic because logical cal-
culi already exist in logic. A non-trivial example of logical vari-
eties is given by many-sorted logics (Turner, 1984; Manzano, 1993;
Meinke and Tucker, 1993; Abadi et al., 2010). In these logics, the
variables range over different domains. Consequently, logical vari-
ables are “typed” as variables in many computer programming lan-
guages. Many-sorted logics allow one not to work with the domain
of discourse as a homogeneous collection of objects, but to partition
this domain into several parts with various functions and relations
connecting them. In this case, these parts being formalized form a
model variety, while the system of logics that describe these parts
forms a syntactic variety.
For instance, semantics of computer languages employ different
types (domains) of data, such as the integers and the real numbers.
Each domain has its own equality, relations, identities, and arith-
metical operations. The logical language that describes the union of
these domains will have two sorts of variables, real variables, and
integer variables. The meaning of a quantifier would be determined
by the type of the variable it binds. The corresponding logic will be
a logical variety built of two calculi. Intersection of these calculi will
include such formulas as the commutative law
x + y = y + x,
and the associative law
x + (y + z) = (x + y) + z.
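As a rough illustration of this situation (a sketch only; the formulas are plain strings and the component names are invented), the shared laws can be viewed as the intersection of the two components of the variety:

# Two components of a logical variety over a two-sorted language:
# one for integer variables and one for real variables.

integer_component = {
    "x + y = y + x",                     # commutative law
    "x + (y + z) = (x + y) + z",         # associative law
    "between x and x + 1 there is no further integer",
}
real_component = {
    "x + y = y + x",
    "x + (y + z) = (x + y) + z",
    "between any two distinct reals there is a third one",
}

# The commutative and associative laws lie in the intersection of the components.
print(sorted(integer_component & real_component))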
Towers of calculi introduced by Maslov (1983) for representation
of dynamic aspects of formal theories are an example of logical vari-
eties.
One more example of naturally formed logical varieties is the tech-
nique Chunk and Permeate built by Brown and Priest (2004). This
technique suggests that reasoning from inconsistent premises should
proceed by separating the assumptions into consistent theories (called
chunks by the authors). These chunks are components of the log-
ical variety shaped by them. After this, appropriate consequences
are derived in one component (chunk). Then those consequences are
transferred to a different component (chunk) for further consequences
to be derived. This is exactly the way how logical varieties are used to
realize and model non-monotonic reasoning (Burgin, 1991d). Brown
and Priest suggest that Newton’s original reasoning in taking deriva-
tives in the calculus was of this form.
Concepts of logical varieties and prevarieties provide further for-
malization for local logics of Barwise and Seligman (1997), many-
worlds model of quantum reality of Everett (Everett, 1957; 1957a;
1973; DeWitt, 1971; Davies, 1980; Herbert, 1987), and pluralistic
quantum field theory of Smolin related to the many-worlds theory
(Smolin, 1995).
As the history of physics tells us, to avoid some contradictions of
quantum theory, Everett suggested that the indeterminism of quan-
tum systems generates and is a consequence of a multifoliate reality.
In it, the universe is continually branching into myriads of ‘parallel
universes,’ which are physically disconnected but equally real assum-
ing that every branch actually occurred. However, later on,
Everett suggested that many of these worlds were not actual
but represented different views of the same world. In any case, it is possi-
ble to interpret each component of a logical variety as a description
of a separate world or a definite view (perspective) of such a world.
The logical variety approach correlates well with and complements the
general methodology of labeled deductive systems used for generalization
of modal logics to enable reasoning about structures of actual worlds,
where each world has an arbitrary associated modal theory and the
whole system has a logical variety as its theory.
Let us consider a syntactic quasi-prevariety (quasi-variety, preva-
riety, or variety) M = (A, H, M ) with the cover {C i = (Ai , Hi , Ti );
i ∈ I} and the set N of formulas, i.e., N is a subset of a logical
language L.

Definition 3.3.17. (a) The set V = {Ti ; i ∈ I} is called the flat map
of the syntactic quasi-prevariety (quasi-variety, prevariety, or variety)
M and a quasi-prevariety (quasi-variety, prevariety, or variety) flat
map of the set N .
(b) If N = M , then the set V = {C i ; i ∈ I} is called a quasi-
prevariety (quasi-variety, prevariety, or variety) map of the set N .
Note that one set N of formulas can have different maps.

Proposition 3.3.1. If N ⊆ P, then any quasi-prevariety (quasi-


variety) (flat) map of the set P is a quasi-prevariety (quasi-variety)
(flat) map of the set N .

Proof is left as an exercise.


For prevariety and variety maps, this result is not always true.
Proposition 3.3.2. (a) Any quasi-variety (flat) map of an arbitrary
set of formulas N is a quasi-prevariety (flat) map of N .
(b) Any prevariety (flat) map of an arbitrary set of formulas N is
a quasi-variety (flat) map of N .
(c) Any variety (flat) map of an arbitrary set of formulas N is a
prevariety (flat) map of N .

Proof is left as an exercise.


Let us consider a class of syntactic quasi-prevarieties (quasi-
varieties, prevarieties or varieties) H.

Definition 3.3.18. (a) The set W of all flat maps of a set of formulas
N corresponding to quasi-prevarieties (quasi-varieties, prevarieties,
or varieties) from H is called the flat H-atlas of N .
(b) The set W of all quasi-prevariety (quasi-variety, prevariety, or
variety) maps of set of formulas N corresponding to quasi-prevarieties
(quasi-varieties, prevarieties, or varieties) from H is called the quasi-
prevariety (quasi-variety, prevariety, or variety) H-atlas of N .

Proposition 3.3.3. If N ⊆ P, then any quasi-prevariety (quasi-


variety) (flat) H-atlas of the set P is a subset of the quasi-prevariety
(quasi-variety) (flat) H-atlas of the set N .

Proof is left as an exercise.


For prevariety and variety maps, this result is not always true.

Proposition 3.3.4. (a) Any quasi-variety (flat) H-atlas of an arbi-


trary set of formulas N is a subset of the quasi-prevariety (flat)
H-atlas of N .
(b) Any prevariety (flat) H-atlas of an arbitrary set of formulas
N is a subset of the quasi-variety (flat) H-atlas of N .
(c) Any variety (flat) H-atlas of an arbitrary set of formulas N
is a subset of the prevariety (flat) H-atlas of N .

Proof is left as an exercise.

Definition 3.3.19. If {C i ; i ∈ I} is the cover of the syntactic quasi-


prevariety (quasi-variety, prevariety, or variety) M, then the cardi-
nality |I| of the set I is called the weight of M.
Let us consider a class of syntactic quasi-prevarieties (quasi-
varieties, prevarieties, or varieties) H.
Definition 3.3.20. The cardinality min{|V|; V is an H-map of N }
is called the H-weight of the set of formulas N .
Proposition 3.3.5. If H is a class of syntactic quasi-prevarieties
(quasi-varieties) and N ⊆ P, then the H-weight of N is less than or
equal to the H-weight of P .
Proof is left as an exercise.
Let us consider two logical calculi C = (A, H, T ) and B =
(B, G, P ).
Definition 3.3.21. (a) The logical calculus C is a subcalculus of the
logical calculus B if A ⊆ B and T ⊆ P .
(b) The logical calculus C is a strict subcalculus of the logical
calculus B if A ⊆ B and H ⊆ G.
For instance, the applied calculus (formal theory) of the Zermelo–
Fraenkel set theory ZF without the Axiom of Choice is a strict
subcalculus of the Zermelo–Fraenkel set theory ZF with the Axiom
of Choice (Fraenkel and Bar-Hillel, 1958).
Proposition 3.3.6. Any strict subcalculus of a logical calculus B is
a subcalculus of B.
Indeed, if a logical calculus C = (A, H, T ) is a strict subcalculus
of the logical calculus B = (B, G, P ), then whatever is deducible in
C is also deducible in B, and thus, T ⊆ P .
Proposition 3.3.7. If a logical calculus A is a (strict) subcalculus
of a logical calculus B and B is a (strict) subcalculus of a logical
calculus C, then A is a (strict) subcalculus of C.
Indeed, if A = (A, F, Q), C = (C, H, T ), B = (B, G, P ), A ⊆ B,
B ⊆ C, Q ⊆ P and P ⊆ T , then by properties of sets, A ⊆ C and
Q ⊆ T . In a similar way, if A ⊆ B, B ⊆ C, F ⊆ G and G ⊆ H, then
by properties of sets, A ⊆ C and F ⊆ H.
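For calculi represented by finite sets of axioms, rules, and theorems (a simplification, since theorem sets are in general infinite), the subcalculus relations of Definition 3.3.21 and the transitivity argument above reduce to subset checks, as in the following sketch with invented axiom and rule names.

# Subcalculus relations for calculi given as finite triples (A, H, T).

def is_subcalculus(c, b):
    (a1, _, t1), (a2, _, t2) = c, b
    return a1 <= a2 and t1 <= t2         # Definition 3.3.21(a)

def is_strict_subcalculus(c, b):
    (a1, h1, _), (a2, h2, _) = c, b
    return a1 <= a2 and h1 <= h2         # Definition 3.3.21(b)

C = ({"ax1"}, {"modus ponens"}, {"ax1", "th1"})
B = ({"ax1", "ax2"}, {"modus ponens", "generalization"}, {"ax1", "ax2", "th1", "th2"})

print(is_subcalculus(C, B), is_strict_subcalculus(C, B))   # True True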
Let us consider two syntactic logical varieties M = (A, H, M ) and
Q = (B, G, N ).
Definition 3.3.22. (a) The logical variety M is a subvariety of the
logical variety Q if all components of M are subcalculi of the com-
ponents of Q.
(b) The logical variety M is a strict subvariety of the logical vari-
ety Q if all components of M are strict subcalculi of the components
of Q.
(c) The logical variety M is a direct subvariety of the logical vari-
ety Q if all components of M are also components of Q.
Note that any component of Q is also a direct subvariety of the
logical variety Q.
Definitions imply the following result.

Proposition 3.3.8. (a) Any direct subvariety M of the logical vari-


ety Q is a strict subvariety of Q.
(b) Any strict subvariety M of the logical variety Q is a subvariety
of Q.

Proposition 3.3.7 implies the following result.

Proposition 3.3.9. If a logical variety M is a (direct or strict) sub-


variety of a logical variety N and N is a (direct or strict) subvariety
of a logical variety Q, then M is a (direct or strict) subvariety of Q.

Proposition 3.3.10. If a logical variety M is a direct subvariety of


a logical variety N, then the flat map of M is a subset of the flat
map of N.

Proof is left as an exercise.

Proposition 3.3.11. If a logical variety M is a subvariety of a logi-


cal variety N, then the weight of M is less than or equal to the weight
of N.

Proof is left as an exercise.

Corollary 3.3.1. If a logical variety M is a direct (strict) subvariety


of a logical variety N, then the weight of M is less than or equal to
the weight of N.
Formation of a unified logical system from given logics is an impor-
tant problem of logic as a discipline. In particular, it is useful to be
able to include a system of calculi into one calculus. Gabbay writes
that “the problem of combining logics and systems is central for mod-
ern logic, both pure and applied. The need to combine logics starts
both from applications and from within logic itself as a discipline.
As logic is being used more and more to formalize field problems in
philosophy, language, artificial intelligence, logic programming, and
computer science, the kind of logics required becomes more and more
complex.” (Gabbay, 1999).
Logical varieties give a relevant context for solving this problem.
Let K be a class of logical calculi and M = {Ci ; i ∈ I} be a
deductive variety (K-variety).

Definition 3.3.23. A logical variety M is called:

(a) Discrete if its components are disjoint;


(b) Classical if all its components are classical deductive calculi;
(c) Connected if any two of its components have a non-void inter-
section;
(d) Compatible if it is a subset of a consistent calculus;
(e) K-compatible if it is a subset of a calculus from K;
(f) Provably compatible if it is possible to prove by classical methods
that it is a subset of a consistent calculus;
(g) Consistent if all its components are consistent calculi.

Example 3.3.1. Let us take the axiom system AG of group theory,


axioms AM for metric spaces and postulates PE of the Euclidean
geometry. Each of these systems gives birth to a logical calculus
when we apply classical deduction rules. These calculi form a dis-
crete classical logical variety.

Example 3.3.2. Taking the calculus C(AG) generated by the group


axioms AG, calculus C(AAG) generated by the abelian group axioms
AAG, and calculus C(AS) generated by the semigroup axioms AS, we
obtain a connected classical logical variety. It is provably compatible
because C(AS) ⊆ C(AG) ⊆ C(AAG).
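For a finite variety whose components are given as sets of formulas, discreteness and connectedness in the sense of Definition 3.3.23 can be tested by pairwise intersections. The following sketch (with placeholder formula names) mirrors Example 3.3.2.

# Checking discreteness and connectedness of a finite family of components.
from itertools import combinations

def is_discrete(components):
    return all(a.isdisjoint(b) for a, b in combinations(components, 2))

def is_connected(components):
    return all(a & b for a, b in combinations(components, 2))

C_AS  = {"associativity"}                                             # semigroups
C_AG  = {"associativity", "identity", "inverses"}                     # groups
C_AAG = {"associativity", "identity", "inverses", "commutativity"}    # abelian groups

print(is_discrete([C_AS, C_AG, C_AAG]))      # False: the components overlap
print(is_connected([C_AS, C_AG, C_AAG]))     # True: any two share "associativity"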
Definition 3.3.24. Components {Ci ; i ∈ J} of M are called:

(1) (Provably) compatible if the subvariety of M generated by these


components is (provably) compatible.
(2) K-compatible if the subvariety of M generated by these compo-
nents is K-compatible.
(3) Connected if the subvariety of M generated by these components
is connected.

Lemma 3.3.3. A compatible deductive variety M is consistent.

Indeed, a subcalculus of a consistent calculus is also consistent.

Lemma 3.3.4. For any deductive variety M, there is a discrete


deductive variety DM such that their upper levels are isomorphic,
i.e., T (M) ≈ T (DM).

Proof . To build DM, we change variables in all components of M


so that any two new calculi built from the components of M do
not have common variables. In addition, we prohibit using the sub-
stitution rule for inference and eliminate all connecting mappings
that exist in M. The union of the obtained calculi forms the dis-
crete deductive variety DM. As deduction goes separately in each
component, T (M) ≈ T (DM) where the isomorphism is obtained by
renaming the variables in formulas.
Lemma is proved.

Lemma 3.3.4 directly implies the following result.

Proposition 3.3.12. The discrete counterpart DM of a variety M


preserves consistency, i.e., if M is a consistent variety, then DM
also is a consistent variety.

Transition to the discrete counterpart of a logical variety preserves


compatibility.

Proposition 3.3.13. If M is compatible (K-compatible), then DM


is compatible (K-compatible).
Proof is based on the construction of DM described in the proof
of Lemma 3.3.4.
Theorem 3.3.1. For any number n > 1 there is a classical consis-
tent connected deductive logical variety M with n components such
that any n − 1 components of M are compatible, but M is not com-
patible.
Proof. Let us consider a consistent set A1 , A2 , A3 , . . . , An−1 of inde-
pendent well-formed formulas from the classical propositional (or
first-order predicate) calculus. We recall that formulas are inde-
pendent if none of them can be deduced from all the others in this set.
Then there is a consistent deductive logical variety M of the form
M = {Ci ; Ci = (Ai , d, Ti ) and i = 1, 2, 3, . . . , n},
where An = ¬A1 ∨ ¬A2 ∨ ¬A3 ∨ · · · ∨ ¬An−1, d is the set of all deduction
rules of the classical first-order predicate calculus, and Ti is the set of
all formulas deducible from Ai by rules from d.
Note that as all formulas A1 , A2 , A3 , . . . , An−1 belong to the
classical propositional (or first-order predicate) calculus, each of
them is self-consistent (non-contradictory), implying that each
C1 , C2 , C3 , . . . , Cn−1 is a consistent classical calculus.
In addition, the calculus Cn is also consistent because otherwise
formulas A1 , A2 , A3 , . . . , An−1 would not be independent. Thus, all
Ci are classical calculi (i = 1, 2, 3, . . . , n) and by the definition of
compatibility, it is possible to include any n − 1 components of M in
a consistent classical calculus.
At the same time, the set of all formulas A1 , A2 , A3 , . . . , An−1 , An
is inconsistent because the formula D = A1 ∧ A2 ∧ A3 ∧ · · · ∧ An−1 is
deduced from the formulas A1, A2, A3, . . . , An−1, while An = ¬D.
Theorem is proved.
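The construction used in this proof can be checked by brute force in the propositional case. The sketch below takes n = 3 with independent atoms A1 = p, A2 = q and A3 = ¬p ∨ ¬q, and uses joint satisfiability as a stand-in for compatibility (cf. Proposition 3.3.14 below); the encoding of formulas as Boolean functions is, of course, only illustrative.

# Brute-force propositional illustration of Theorem 3.3.1 for n = 3.
from itertools import product

A1 = lambda p, q: p
A2 = lambda p, q: q
A3 = lambda p, q: (not p) or (not q)             # A3 = "not A1 or not A2"

def jointly_satisfiable(formulas):
    return any(all(f(p, q) for f in formulas)
               for p, q in product([True, False], repeat=2))

print(jointly_satisfiable([A1, A2]))             # True
print(jointly_satisfiable([A1, A3]))             # True
print(jointly_satisfiable([A2, A3]))             # True
print(jointly_satisfiable([A1, A2, A3]))         # False: the whole variety is incompatible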
Remark 3.3.1. The condition that the variety is classical is
essential.
Now let us explore provable compatibility.
Theorem 3.3.2. For any number n > 1 there is a classical connected
deductive logical variety M with n components such that any n − 1
components of M are provably compatible, M is compatible, but it is
not provably compatible.

Proof . Let us consider a classical logical calculus C such that it has


a finite number of axioms (axiom schemas) and it is consistent but
it is impossible to prove its consistency. Formal arithmetic is an
example of such a calculus since, by Gödel's incompleteness results, it is impossible to
prove its consistency by classical methods (Gödel, 1931; 1932), while
consistency of arithmetic was proved using more powerful methods
(Gentzen, 1936; Ackermann, 1940; Schutte, 1960).
It is possible to presume that each axiom (axiom schema) from C
generates a provably consistent classical logical calculus. If L is the
language of calculus C, then there is a consistent but not provably
consistent calculus D in the language L such that it has the least
number of axioms each of which is provably non-contradictory. Let
us assume that D has m axioms A1 , A2 , A3 , . . . , Am and prove that
for any number n ≥ 2 there is a classical connected deductive logical
variety M with n components such that any n − 1 components of
M are provably compatible, M is compatible, but it is not provably
compatible.
At first, we do this for n ≤ m. In this case, we take n disjoint
groups G1 , G2 , G3 , . . . , Gn of axioms A1 , A2 , A3 , . . . , Am and con-
sider the deductive variety M = {Ci ; i = 1, 2, 3, . . . , n} where Ci is
the classical logical calculus with axioms from the group Gi . When
n = m, each group Gi consists of one axiom Ai . In a general case, this
variety M has the following properties. It is compatible because it is
a subset of the consistent calculus D. It is not provably compatible
because D is not a provably consistent calculus and as M contains
all axioms of D, there is no other provably consistent calculus that
contains M.
At the same time, any n − 1 components of M are provably com-
patible because D is consistent but not provably consistent calculus
with the least number of axioms. So, for n ≤ m, Theorem 3.3.2 is
proved.
Now let us consider the case n > m. Each classical logi-
cal calculus Ci with one axiom Ai has infinitely many formulas
B1 , B2 , B3 , . . . , Bn , . . . . For instance, it has all conjunctions of this
axiom with itself. Thus, it is possible to consider the deductive vari-
ety M = {Ci; i = 1, 2, 3, . . . , n}, where each of the first m classical logical
calculi Ci is generated by the single axiom Ai, while the next n − m
classical logical calculi Ci are generated consecutively by the for-
mulas Bi. Note that it is the same variety as in the previous case, only
with a different map.
The variety M is compatible because it is a subset of the con-
sistent calculus D. It is not provably compatible because D is not
a provably consistent calculus and as M contains all axioms of D,
there is no other provably consistent calculus that contains M.
At the same time, any n − 1 components of M are provably com-
patible because it is possible to include any n − 1 components of M
into a consistent calculus that has not more than n − 1 axioms from
the set A1 , A2 , A3 , . . . , Am and D is a consistent but not provably
consistent calculus with the least number of axioms.
Thus, Theorem 3.3.2 is proved.

Remark 3.3.2. The condition that the variety is classical is


essential.
Remark 3.3.3. For limit ordinals, similar results are not valid as
the following result demonstrates.
Theorem 3.3.3. For any classical deductive logical variety M, if any
finite subset of components of M is compatible, then M is compatible.
Proof. Let us consider a classical deductive logical variety M in which
any finite subset of components of M is compatible. When M has
a finite weight, then all its components are compatible, i.e., M is
compatible.
Now let us assume that the weight of the variety M is infinite
and consider a finite set X of formulas from M. Then there is a
finite number of components {Ci; i = 1, 2, 3, . . . , n} of M such that X
is a subset of the union ∪ni=1 Ci . As the set {Ci ; i = 1, 2, 3, . . . , n} is
compatible, then it is possible to include all formulas from X into a
consistent logical calculus, i.e., the set X is consistent. As this is true
for any finite set of formulas from M, by the compactness theorem
(cf., (Shoenfield, 2001)), the whole set is consistent, i.e., it is possible
to include M into a consistent logical calculus.
Theorem is proved.

Corollary 3.3.2. For any classical deductive logical variety M with


a countable number of components, if any finite subset of components
of M is compatible, then M is compatible.

A set of formulas in a classical logical system is consistent if and


only if it has a model (Shoenfield, 2001). This gives us the following
result.

Proposition 3.3.14. A deductive logical variety M is compatible if


and only if all its components have a common model.
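For finite propositional components this criterion is effective: one can simply search the finitely many valuations for a common model. The following sketch (formulas encoded as Boolean functions of a valuation; all names illustrative) does exactly that.

# Searching for a common model of all components of a finite propositional variety.
from itertools import product

def has_common_model(components, atoms):
    for values in product([True, False], repeat=len(atoms)):
        valuation = dict(zip(atoms, values))
        if all(f(valuation) for component in components for f in component):
            return True
    return False

component1 = {lambda v: v["p"] or v["q"]}
component2 = {lambda v: not v["p"]}

print(has_common_model([component1, component2], ["p", "q"]))   # True: p = False, q = True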

This result is not constructive in a general case because it is not


always possible to find a model for a set of formulas. However, it is
possible to derive a constructive criterion for compatibility of logical
varieties.

Theorem 3.3.4. A classical consistent connected deductive logical


variety M = {Ci ; i = 1, 2, 3, . . . , n} is compatible if and only if for
each component Ci , the following condition is valid:
(* ) There are no formulas A1 , A2 , A3 , . . . , Ai−1 , Ai+1 , . . . , An from
the corresponding calculi C1 , . . . , Ci−1 , Ci+1 , . . . , Cn , i.e., Aj ∈ Cj ,
such that the formula ¬A1 ∨ ¬A2 ∨ ¬A3 ∨ · · · ∨ ¬Ai−1 ∨ ¬Ai+1 ∨ · · · ∨ ¬An
belongs to the calculus Ci.

Proof. We prove this result for i = 1. For all other values of i, proof
is the same.

1. Let us assume that there are formulas A2 , A3 , . . . , An from the cor-


responding calculi C2, C3, . . . , Cn such that the formula ¬A2 ∨ ¬A3 ∨
· · · ∨ ¬An belongs to the calculus C1. Then the set of formulas
A2, A3, . . . , An, ¬A2 ∨ ¬A3 ∨ · · · ∨ ¬An is inconsistent in a classical
calculus and this set belongs to any classical deductive calculus
that contains the variety M. As such a calculus will be inconsistent,
the variety M is incompatible.
2. Let us consider an incompatible classical deductive logical variety
M = {Ci ; i = 1, 2, 3, . . . , n}. Then any classical calculus C that
contains T(M) also contains the classical calculus C1 [C2 , . . . , Cn ],
which is obtained by adding all formulas from C2 , . . . , Cn to the
calculus C1 and taking the deductive closure. In such a way, we
obtain classical calculus that contains T(M). As M is incompat-
ible, the calculus C1 [C2 , . . . , Cn ] is inconsistent. By the reduction
theorem for consistency (cf., (Shoenfield, 2001; Section 4.1)), if
the calculus C1 [C2 , . . . , Cn ] is inconsistent, then there are formu-
las A2 , A3 , . . . , An from the corresponding calculi C2 , C3 , . . . , Cn
such that the formula ¬A2 ∨ ¬A3 ∨ · · · ∨ ¬An is deducible in C1. As
C1 is a calculus, this formula belongs to the calculus C1.

Theorem is proved.
The compatibility of a logical variety means that it is possible to
immerse all components of this variety into one consistent calculus,
while K-compatibility requires a calculus from the class K. Thus, the obtained results show that the
possibility of logic system immersion into one calculus is undecidable
for a finite number of logics (Theorem 3.3.2), while for an infinite
number of logics, the decidability problem is reducible to the finite
case (Theorem 3.3.3) and thus, it is undecidable in general.
It is necessary to explain that logical varieties, prevarieties, and
quasi-varieties implicitly perform various functions in knowledge
organization and management. One of these functions is stratification
of knowledge systems.
Stratification is a popular technique in knowledge base theory
and practice. For instance, Hunter and Liu (2009) introduce knowl-
edge base stratification to solve the problem of merging multiple
knowledge bases. Benferhat and Baida (2004) use stratified first
order logic for access control in knowledge bases. Benferhat and Gar-
cia (2002) employ stratification for handling inconsistent knowledge
bases. Lassez et al. (1989) show how stratification can be useful as a
tool in the interactive model-building process, demonstrating that it
is possible to reduce the computational complexity of the process by
the use of stratification that limits consistency checking to minimal
strata.
There are different kinds of knowledge stratification.


In physical stratification, each stratum is a separate or at least,
closed physical system. For instance, any distributed database is
physically stratified.
In analytical stratification, each stratum is determined by a spe-
cific name (label) and all elements from this stratum have this label
(name). Knowledge base stratification used for handling inconsis-
tent knowledge bases (Benferhat and Garcia, 2002; Brewka, 1989;
Cholewinski, 1994) for constructing models of a knowledge base
(Lassez et al., 1989), representation of information in temporal
databases (Burgin, 2008a) and for merging multiple knowledge bases
and information integration (Benferhat et al., 2004; Burgin, 2004a;
Hunter and Liu, 2010) is analytical. Different classes of knowledge
form corresponding strata of knowledge systems.
There are different principles of knowledge classification, which
allow us to build several types of knowledge system stratifications.
Time is an important characteristic of knowledge, giving different
stratifications.
The temporal stratification.

1. The past stratum of knowledge consists of knowledge obtained/


accepted in the past.
2. The current stratum of knowledge consists of the actual (used
now) knowledge.
3. The future stratum of knowledge consists of knowledge that is anticipated or expected to be obtained/accepted in the future.

For instance, the knowledge “the Earth is flat” is past knowledge,


while the knowledge “the Earth is round” is current knowledge.
The past stratum of knowledge consists of three substrata: the
forgotten past knowledge, outdated but preserved past knowledge and
still actual past knowledge.
The current stratum of knowledge consists of three substrata: the
disappearing current knowledge, consolidated current knowledge, and
emergent current knowledge.
The future stratum of knowledge consists of three substrata: the
tentative/ potential future knowledge, realizable future knowledge, and
emergent future knowledge.
More precise temporal stratifications are used in temporal knowl-
edge and databases. A temporal knowledge/database is a database
with built-in time aspects. In particular, it supports a temporal
knowledge/data model and has a temporal version of the query
language (Snodgrass and Jensen, 1999; Date et al., 2002). Tempo-
ral knowledge/data stored in a temporal knowledge/database are
different from the knowledge/data stored in non-temporal knowl-
edge/database in that a time coordinate is attached to the knowl-
edge/data. This is different from the conventional knowledge/data,
which are usually considered to be valid now. Past and future knowl-
edge/data are not stored. Usually past knowledge/data are modi-
fied, overwritten with new (updated) knowledge/data or deleted to
achieve their temporal relevancy. Future knowledge/data are not con-
sidered because it is assumed that we do not receive information
about the future.
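A minimal sketch of such a temporal knowledge/data item, assuming a simple validity interval as the time coordinate; the field names and the dates are purely illustrative.

# A temporal knowledge/data record: a time coordinate is attached to each item,
# so past knowledge is kept instead of being overwritten or deleted.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TemporalItem:
    statement: str
    valid_from: int                  # e.g., a year
    valid_to: Optional[int]          # None means "still current"

knowledge = [
    TemporalItem("the Earth is flat", valid_from=-3000, valid_to=-300),
    TemporalItem("the Earth is round", valid_from=-300, valid_to=None),
]

def current(items, now):
    return [i.statement for i in items
            if i.valid_from <= now and (i.valid_to is None or now <= i.valid_to)]

print(current(knowledge, 2016))      # ['the Earth is round']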
There are many complexity measures of algorithms, methods, and
procedures. Taking a complexity measure C, it is possible to partition
all algorithms (methods and/or procedures) into separate classes that
have different complexity measures. Each such partition induces a
corresponding stratification of knowledge with respect to such knowl-
edge characteristics as accessibility, inference, and generation, which
are specific forms of knowledge acquisition. Here are some examples
of such stratifications.
The accessibility stratification.

1. Directly or one-step accessible knowledge.


2. Two-step accessible knowledge.
..........
n. n-step accessible knowledge.

Another stratification is based on complexity of knowledge inference.


The inference stratification.

1. Directly implied knowledge.


2. Two-step inferable knowledge.
....... .
n. n-step inferable knowledge.
One more stratification is based on complexity of knowledge genera-
tion.
The generation stratification.
1. Directly generable/computable knowledge.
2. Two-step generable/computable knowledge.
....... .
n. n-step generable/computable knowledge.
Steps in generation, inference, and access may be determined by:

— Time slicing when each step is assigned some period of time for
realization.
— Elementary operations.

For instance, it is possible to assume that knowledge acquisition


is direct if it demands less than 3 seconds. The first step of knowl-
edge acquisition can be estimated as an interval from 3 seconds to
30 seconds. The second step of knowledge acquisition can be esti-
mated as an interval from 30 seconds to 1 minute. The third step of
knowledge acquisition can be estimated as an interval from 1 minute
to 3 minutes and so on.
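Under this time-slicing convention, assigning a step to a measured acquisition time is a simple threshold computation; the sketch below uses the illustrative thresholds of 3 seconds, 30 seconds, 1 minute, and 3 minutes mentioned above.

# Mapping an acquisition time (in seconds) to a step of the accessibility stratification.

THRESHOLDS = [3, 30, 60, 180]        # the pattern can be continued further

def acquisition_step(seconds):
    """Return 0 for direct acquisition, 1 for the first step, and so on."""
    for step, limit in enumerate(THRESHOLDS):
        if seconds < limit:
            return step
    return len(THRESHOLDS)           # beyond the last listed threshold

print(acquisition_step(2))           # 0  (direct)
print(acquisition_step(10))          # 1
print(acquisition_step(45))          # 2
print(acquisition_step(120))         # 3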
It is also possible to measure complexity, e.g., effort in generation,
by the power of algorithms (Burgin, 2010d). In this case, we have
an algorithmic ladder, which consists of classes of algorithms with
increasing computing power.
The traditional algorithmic ladders have one of the following
forms:
(1) Finite automata, deterministic pushdown automata, pushdown
automata, and Turing machines.
(2) Regular, or linear grammars, context-free grammars, context-
sensitive grammars, and unrestricted, or phrase-structure
grammars.
New achievements of the theory of algorithms and computation
extend these ladders:
(1) Finite automata, deterministic pushdown automata, pushdown
automata, Turing machines, inductive Turing machines (Burgin,
2005), and infinite-time Turing machines (Hamkins and Lewis,
2000).
(2) Regular, or linear grammars, context-free grammars, context-
sensitive grammars, unrestricted, or phrase-structure grammars,
grammars with prohibition (Burgin, 2005b), and Boolean gram-
mars (Okhotin, 2003).
Inductive Turing machines give an example of an algorithmic
ladder. Namely, the n-th stratum of the inductive algorithmic ladder
consists of inductive Turing machines with structured memory
that have order n but do not have order n + 1 (Burgin, 2005). It is
also possible to build an algorithmic ladder using inductive or limit
Turing machines that have a structured program (rules for computation) or
structured operating devices (heads) (Burgin, 2005).
Stratified Boolean grammars (Wrona, 2005) and grammars with
exclusion (Burgin, 2015) give more examples of stratified operational
knowledge.
Different hierarchies studied in the theory of recursive functions,
such as arithmetical hierarchy or analytical hierarchy, represent strat-
ified declarative and operational knowledge about sets and relations
(Rogers, 1987; Burgin, 2005).
When we have a stratified system of knowledge represented in a
logical form, it is natural to treat this system as a logical variety in
which each stratum is a component of the variety.
Chapter 4

Knowledge Structure
and Functioning: Microlevel
or Quantum Theory
of Knowledge

Real knowledge is to know the extent of one’s ignorance.


Confucius

Here we consider the microlevel or quantum level of knowledge with


three main goals:
• construction of an adequate mathematical model of knowledge
units;
• explication of elementary knowledge units;
• exploration and description of elementary knowledge unit integra-
tion into complex knowledge systems.
This study provides means for discerning data and knowledge
and finding links between them, as well as for improving efficiency of
information processing by computers.
The quantum level of knowledge, or more exactly, of the knowl-
edge universe, contains “quantum bricks” and “quantum blocks” of
knowledge that are used for construction of other knowledge sys-
tems. For instance, knowledge macrosystems, such as logical calculi
and varieties, as well as formal theories in logic and mathematics,
are constructed using knowledge microsystems or quantum elements,

Table 4.1. A portion of a relational database where information about students
is stored. Here N/T means that the student did not take this course.

   Student     Math 180   Math 211   Cs 100   Hist 200
   Alex K      A          A          A        C
   Ben X       A          B          A        B
   Costas H    B          N/T        A        A
   Dan R       C          D          N/T      N/T
   Eddi T      C          C          D        A
   Frank S     D          D          B        F

such as propositions and predicates. Primitive propositions and pred-


icates, such as “Knowledge is power”, 2 + 2 = 4 or “Information can
give knowledge”, serve as “bricks” for composite propositions, logical
functions and predicates, such as “(2 + 2 = 4) ∨ (2 + 2 = 2)” or “If
A is a Frenchman, then A is a European” or “If X is a metric space,
then X is a topological space”, while composite propositions, logical
functions, and predicates, are “blocks” or compound logical units.
All these “bricks” and “blocks” are used for logical inference, as well
as for building logical calculi, logical varieties, and formal theories in
logic and mathematics.
Units of quantum knowledge or knowledge quanta find another
application in relational databases where data are represented by
a collection of relations often in the form of tables (flat relations),
in which object data form rows — one row for one object — and
attributes form columns — one column for one attribute. Rows in the
database relations are usually called tuples. When people interpret
relational data, they form knowledge. Therefore, with interpretation,
the rows are symbolic knowledge items, or more exactly, symbolic
knowledge quanta, which are symbolic components of extended quan-
tum knowledge units.
As a simple example of a relational database, let us consider such
a table with information about students and their grades.
Table 4.1 is composed from knowledge quanta such as:

   Student Alex K ──has──→ grade A in Math 180,        (4.1)

   Student Frank S ──has──→ grade D in Math 180,       (4.2)

   Student Alex K ──has──→ grade A in Cs 100,          (4.3)

   Student Ben X ──has──→ grade B in Math 211.         (4.4)

It is demonstrated that a normalized datum in the relational


data model has a complementary nature to its structure that can
be expressed by semantic primitives associated with its storage and
retrieval operations. These semantic primitives are symbolic knowl-
edge quanta, which form the foundation for understanding much
larger retrieval issues, such as how data relations form logical naviga-
tion paths and how to express navigation paths in a tangible manner
as a list of nested lists. Symbolic knowledge quanta have the struc-
ture of named sets providing a powerful insight into how navigation
paths represent the natural intersection of data and the structure
that organizes it. This shows the importance of knowledge quantiza-
tion for efficient knowledge and data storage and retrieval.
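A small sketch (hypothetical field names, plain Python) of how a row of a relational table such as Table 4.1 can be unfolded, once interpreted, into symbolic knowledge quanta written as triples of the named-set kind (name of the object, connection, ascribed property).

# Turning one row of Table 4.1 into symbolic knowledge quanta.

row = {"student": "Alex K", "Math 180": "A", "Math 211": "A",
       "Cs 100": "A", "Hist 200": "C"}

def row_to_quanta(row, key="student"):
    name = row[key]
    return [(name, "has", "grade %s in %s" % (grade, course))
            for course, grade in row.items() if course != key]

for quantum in row_to_quanta(row):
    print(quantum)
# ('Alex K', 'has', 'grade A in Math 180'), ('Alex K', 'has', 'grade A in Math 211'), ...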
In this chapter, we describe three approaches to knowledge on the
quantum level:
• Quantum theory of knowledge (QTK), which studies knowledge
quanta of different types;
• Semantic link theory of knowledge (SLTK), which studies semantic
links of different types;
• Semiotics, which studies signs and symbols as quantum units of
knowledge.

4.1. Basic structures of knowledge units on the


quantum level — knowledge quanta
and semantic links

Mathematics as an expression of the human mind reflects the


active will, the contemplative reason, and the desire for aesthetic
perfection. Its basic elements are logic and intuition, analysis and
construction, generality and individuality.
Richard Courant
We start with QTK, which studies knowledge quanta in two forms —


symbolic and substantial. While symbolic knowledge quanta are
symbolic expressions, substantial knowledge quanta also include
knowledge objects or domains. This complete representation is essen-
tial because knowledge is always about something, i.e., about the
knowledge object or knowledge domain.

4.1.1. Quantum theory of knowledge (QTK)


The superior man understands what is right;
the inferior man understands what will sell.
Confucius

To study knowledge as an essence, we have to begin with an impor-


tant observation that as we have already discussed, there is no knowl-
edge per se but we always have knowledge about something — about
weather, about the Sun, about the size of a parameter, about occur-
rence of an event, and so forth. In other words, knowledge is always
related to some real or abstract object or domain. Plato was perhaps
the first to formulate this explicitly in his dialogue Republic. The
object to which knowledge is related may be a galaxy, planet, point,
person, bird, letter, novel, action, operation, event, process, love, sur-
prise, picture, song, sound, light, square, triangle, etc. We call such
an object knowledge domain or knowledge object.
However, to discern a knowledge object, as well as to be able
to speak and to write about it, we have to name it. A name may
be a label, number, idea, text, process, and even a physical object
of a relevant nature. For instance, a name may be a state of a cell
in computer memory or a sound or a sequence of sounds when you
pronounce somebody’s name. Note that a name can be very complex.
For instance, a long text can be used as a name of the object described
by this text. In particular, this book may be used as a name of the
object called knowledge. In the context of named set theory (cf.,
Appendix and (Burgin, 2011)), any object can be used as a name of
another object.
Knowledge by its essence is knowledge of something, namely,
about the knowledge object or knowledge domain, which forms the
external structure of knowledge per se. Consequently, any knowledge
system has two parts: cognitive, that is, knowledge per se, and sub-
stantial, which consists of the knowledge object (knowledge domain)
with its structure (internal or external). These two parts are con-
nected by a relation (correspondence), which is conveyed by the word
“about” in English.
Cognitive part of a knowledge system is also called symbolic
because, as a rule, it is represented by a system of symbols. Cog-
nitive/symbolic parts of knowledge form knowledge systems per se
as abstract structures, while addition of substantial parts to them
forms extended knowledge systems.
At first, let us consider descriptive knowledge as the most typ-
ical category of knowledge (cf., Chapter 2). In this case, the sim-
plest knowledge about an object gives some property of this object.
As Aristotle wrote, we can know about things nothing but their
properties. The simplest property is existence of the object in ques-
tion. However, speaking about properties, we have to discern intrinsic
and ascribed properties of objects. In this, we are following the long-
standing tradition of attributive realism, in which it is assumed that
objects have intrinsic properties. Taking an object A and its feature
(intrinsic property) QA , we come to an inherent descriptive quantum
(IKQ) of knowledge K = (A, q, QA ), the graphical form of which is
represented by Diagram (4.5).
      A ──q──→ QA.                                 (4.5)
Note that it is possible to treat the property QA as a traditional
property of an object represented by a value of an attribute or as
the attribute itself represented by a predicate in the conventional
description of properties or by an abstract or natural property in the
advanced portrayal of properties (Burgin, 1985; 1986; 2010).
For example, taking a physical body B, we know that it can have
such an intrinsic property as 10 kg of mass. At the same time, it
can have such an intrinsic property as “being a rigid object” (an
attribute), as well as intrinsic property mass (a natural property).
Definition 4.1.1. When the object A and the property QA are inde-
composable, the inherent quantum of knowledge (4.1) is called an
elementary inherent descriptive knowledge unit (EIKU).
According to the contemporary understanding of reality, people


do not have direct access to intrinsic properties of natural objects
(Frieden, 1998; Burgin, 2010; 2012). It is only possible to receive
information about intrinsic properties (Burgin, 2010). Consequently,
intrinsic properties (features) of natural, e.g., physical, objects are
reflected by ascribed properties (attributes). Ascribed properties of
natural objects are obtained by observation, experiments, measure-
ment, calculation or inference. Intrinsic properties of abstract, e.g.,
mathematical, objects are given in the form of assumptions, axioms,
postulates, etc. It is possible to consider any known intrinsic prop-
erty of an abstract object also as an ascribed property of this object.
There are also other ascribed properties of abstract objects. In par-
ticular, approximations of intrinsic properties (features) of abstract
objects are ascribed properties (attributes) of these objects. For
instance, when we say that the length of a circle with the radius 1 ft
is 6.28 ft, we are speaking about an ascribed property (attribute) of
this circle because the length is equal to 2π ft and this is an irrational
number.
Intrinsic and ascribed properties are similar to primary and sec-
ondary qualities discerned by the prominent English philosopher
John Locke (1632–1704). He assumed that primary qualities are
objective features of the world, such as shape and size. By contrast,
secondary qualities depend on mind. Examples of secondary qualities
are color, taste, sound, and smell.
At the same time, unlike primary properties, intrinsic properties
of physical objects are inaccessible in a direct way. As we have men-
tioned above, to get knowledge about intrinsic properties of physical
objects, people obtain information that makes it possible to represent
intrinsic properties by ascribed properties.
Besides, people and other intelligent systems, as a rule, use names
when they deal with various objects, either natural or abstract. As a
result, when names are separated from the objects themselves,
the cognitive representation of A splits into two components — a
name NA of A and an attribute PA , as a value of a property P of A,
while the object A is paired with its feature (property) QA .
These observations allow us to build the next level of knowl-
edge structure by formalizing the concept of descriptive knowledge
in the form of an individual descriptive extended knowledge quantum
(IDEKQ) or simply knowledge ide-quantum DK = [(A, q, QA ), (n,
e), (NA , p, PA )], the graphical form of which is displayed in Dia-
gram (4.6).
e
QA PA

q p . (4.6)
A NA
n
Here NA is a name of the object A and QA is a feature (an intrinsic
property) of A, e.g., if A is a book, then NA is usually the title of
A, the intrinsic property QA may be the year of its publication or
the author, while the attribute (ascribed property) PA is the cognitive
representation of QA . In our case, when QA is the year of publication,
then PA is the number that represents this year, e.g., 2012, or if QA
is the author, PA is the first and the last names of the author.
For instance, it is possible to understand PA in Diagram (4.2) as
a classical property such as being white, as a fuzzy property such as
being 50% white, as a physical property such as weight or height, and
as a value of a physical property, e.g., having weight 100 lb.
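For illustration only, the knowledge ide-quantum DK = [(A, q, QA), (n, e), (NA, p, PA)] can be encoded as a pair of named-set-like triples connected by a naming and an estimate morphism; the following sketch (Python, with field values taken from the book example above) is one possible encoding, not a canonical implementation.

# The ide-quantum as two named-set-like triples plus the connecting morphism (n, e).
from dataclasses import dataclass

@dataclass
class NamedSet:
    support: str          # A  or NA
    connection: str       # q  or p
    reflector: str        # QA or PA

@dataclass
class IdeQuantum:
    nuclear: NamedSet     # substantial part (A, q, QA)
    symbolic: NamedSet    # cognitive part (NA, p, PA)
    naming: str           # n: object -> name
    estimate: str         # e: intrinsic feature -> ascribed attribute

book_quantum = IdeQuantum(
    nuclear=NamedSet("a book", "has feature", "year of publication"),
    symbolic=NamedSet("title of the book", "has attribute", "2012"),
    naming="is titled",
    estimate="is recorded as",
)
print(book_quantum.symbolic.reflector)    # '2012'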
Table 4.1 represents a system of descriptive extended knowledge
quanta. Let us consider their structural portrayal.
   knowledge of Math 180 ──content──→ A
   person Alex K ──with the name──→ Alex K,                 (4.7)

   knowledge of Hist 200 ──content──→ B
   person Ben X ──with the name──→ Ben X,                   (4.8)

   knowledge of taken courses ──content──→ (C, C, D, A)
   person Eddi T ──with the name──→ Eddi T.                 (4.9)

Note that a compound object can be represented by a single name


and several intrinsic properties can be described by one ascribed
property. For instance, under definite conditions, a multitude of stars
is called by the name a galaxy. In the considered example, an anno-
tation of the book or the title and the name of the author(s) may be
also used as the book names.
From the perspective of the general theory of structures (Burgin,
2012), the knowledge ide-quantum DK represents the external struc-
ture of individual quantum units of descriptive knowledge.
Let us perform structural analysis of the knowledge ide-quantum
DK = [(A, q, QA ), (n, e), (NA , p, PA )] and its graphical form in
Diagram (4.6).
The knowledge ide-quantum DK has specific components. The
first one is the attributive or estimate component of the knowledge
ide-quantum DK. It reflects relation of the intrinsic property QA to
the ascribed property PA and is represented by Diagram (4.10).
      QA ──e──→ PA.                                (4.10)

For instance, it is possible to understand e in Diagram (4.10) as


being observable meaning that the intrinsic property QA is observable
and observation gives us the ascribed property PA .
The second one is the naming component of the knowledge ide-
quantum DK. It reflects the process of naming of the object A when
this object is separated from other objects, discovered or constructed
in the knowledge domain. The naming component is represented by
Diagram (4.11).
      A ──n──→ NA.                                 (4.11)
The third one is the substantial component of the knowledge ide-
quantum DK. It is the intrinsic property of the object A from the
knowledge domain U . The substantial component is reflected by Dia-
gram (4.12).
      A ──q──→ QA.                                 (4.12)

The substantial component may be material, for example, U con-


sists of all elementary particles or of computers, or may be ideal. For
instance, A is a text or a real number, while the domain U consists
of texts or of real numbers.
The fourth one is the cognitive or symbolic component of the
knowledge ide-quantum DK. It is the ascribed property of the object
A from the knowledge domain U . The symbolic component of indi-
vidual knowledge renders information about the object A from the
knowledge domain U . Diagram (4.13) reflects the cognitive (sym-
bolic) component of DK.
      NA ──p──→ PA.                                (4.13)

From the perspective of the general theory of structures (Burgin,


2012), the cognitive (symbolic) component of the knowledge ide-
quantum DK represents the inner structure of individual quantum
units of descriptive knowledge.
The theory of named sets (Burgin, 2011) provides efficient means
for structural analysis in general and for structural analysis of knowl-
edge in particular. According to the theory of named sets, this struc-
ture is a second order named set M (Burgin, 2011), which is built
of two named sets. The first one is described by Diagram (4.12),
which represents the objective or substantial part of Diagram (4.6)
playing the role of the support in the named set M , and the second
one is portrayed by Diagram (4.13), which represents the subjective
or cognitive/symbolic part of Diagram (4.6) playing the role of the
reflector in the named set M . The morphism (n, e) between Diagram
(4.12) and Diagram (4.13) forms the component called reflection of
this named set M of the second order, the graphical form of which
is represented by Diagram (4.6) (Burgin, 2011).
At the same time, it is possible to consider Diagram (4.6) as a
named set morphism (n, e) from the named set (A, q, QA ) into the
named set (NA , p, PA ) (Burgin, 2011). Each interpretation has its
advantages, for example, when we build operations with knowledge
quanta — in some cases one interpretation is better, while in other
cases another interpretation is more useful.
When all nodes and arrows — the object A, its property QA ,
the name NA and the value PA in Diagram (4.6) — are elementary,
then the IDEKQ is also elementary. However, the name can consist
of several words or the value can include several numbers. In this
case, the IDEKQ is compound and not elementary.
Definition 4.1.2. An individual descriptive extended knowledge
quantum (IDEKQ) K is elementary if its structure is not decom-
posable into two or more knowledge quanta.
Taking Diagram (4.6), we see that a quantum of descriptive
knowledge IDEKQ resembles the structure of an atom, where Dia-
gram (4.7) plays the role of the nucleus of this “atom” and Dia-
gram (4.8) forms its symbolic shell similar to the electronic shell
of an atom. Utilizing this analogy, we call the knowledge quantum
K = (A, q, QA ), the graphical form of which is presented by Diagram
(4.7), by the name a nuclear (or intensional) knowledge quantum,
while the knowledge quantum H = (NA , p, PA ), the graphical form
of which is presented by Diagram (4.8), we identify by the name a
symbolic knowledge quantum.
In logic, descriptive symbolic knowledge quanta are represented
by propositions in the form of declarative sentences from natural
languages or logical formulas, while in natural languages, descrip-
tive symbolic knowledge quanta are represented by declarative sen-
tences from these languages, e.g., “Everest is a high mountain,” or
by expressions such as “a blue ball” or “a high mountain,” which
represent knowledge quanta (ball, is, blue) and (mountain, is, high),
correspondingly.
Diagram (4.7) represents the structure of the base of an individ-
ual extended descriptive knowledge quantum (IEDKQ) or objective
descriptive knowledge quantum (ODKQ). In other words, the base is
the object of knowledge and its intrinsic property, or the value of an
intrinsic property of this object.
Diagram (4.8) represents the structure of an individual abstract
descriptive quantum (IADKQ) of knowledge or subjective descriptive
knowledge quantum (SDKQ). In other words, IADKQ is a nominal
(symbolic) representation of the object of knowledge and the ascribed
property that represents the intrinsic property of this object, or the
value of such a property.
It is possible that the connections (relations) in Diagram (4.6)
are specified and named accordingly. This gives
us an individual valued descriptive extended knowledge quantum
(IVDEKQ) or simply knowledge ide-quantum.
For instance, we can specify Diagram (4.6) in the following way:
the object A is a car, its name NA is Cadillac, its property QA is “the
number of doors” and its value PA is 4. This gives us the following
diagram.
                             value
       number of doors ---------------> 4
              ^                         ^
           nd |                         | number
              |                         |
             car ---------------------> Cadillac               (4.14)
                      vehicle make
Another possible specification of Diagram (4.6) preserves the
naming component of Diagram (4.14), i.e., the object A is a car,
its name NA is Cadillac, but changes the symbolic component of
Diagram (4.14), i.e., the considered property QA is “color” and its
value PA is “white”. This gives us the following diagram.
                         value
           color ------------------> white
              ^                        ^
           nd |                        | number
              |                        |
             car --------------------> Cadillac                (4.15)
                      vehicle make
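To make the structure of such specified quanta concrete, the following is a minimal sketch in Python of how the quantum of Diagram (4.15) could be encoded as a second-order named set built from two triads. The class names Triad and DescriptiveQuantum and the field names are illustrative assumptions, not part of the theory.

from dataclasses import dataclass
from typing import Any

# A fundamental triad (named set): support -- connection --> reflector.
@dataclass
class Triad:
    support: Any      # e.g., the object A or the property Q_A
    connection: Any   # the correspondence (q, p, n or e)
    reflector: Any    # e.g., the property Q_A or the value P_A

# An individual descriptive extended knowledge quantum, cf. Diagram (4.6):
# two triads joined by the morphism (n, e).
@dataclass
class DescriptiveQuantum:
    nuclear: Triad    # substantial part (A, q, Q_A)
    symbolic: Triad   # cognitive part (N_A, p, P_A)
    naming: Any       # n: A -> N_A
    estimate: Any     # e: Q_A -> P_A

# The specification of Diagram (4.15): a car named "Cadillac" whose
# intrinsic property "color" has the ascribed value "white".
quantum_4_15 = DescriptiveQuantum(
    nuclear=Triad(support="car", connection="has property", reflector="color"),
    symbolic=Triad(support="Cadillac", connection="is", reflector="white"),
    naming="make",       # the name "Cadillac" is assigned to the car
    estimate="painted",  # the value "white" is assigned to the property "color"
)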
As a result, we have a nuclear valued knowledge quantum repre-
sented by Diagram (4.16) and a symbolic valued knowledge quantum
portrayed by Diagram (4.17).
                 painted
         car --------------> color.                            (4.16)
In this case, we interpret the morphism e in Diagram (4.9) as
painted.
                    painted
         Cadillac ------------> white.                         (4.17)
For instance, it is possible to understand PA in Diagram (4.9) as a
classical property such as being white and p as its fuzzy counterpart,
e.g., p = 0.5 or p = 50% meaning being 50% white.
In a similar way, we find that individual quanta of representa-
tional knowledge, which depicts the knowledge object (domain) by
models (images), and of operational knowledge, which represents the
knowledge object (domain) by procedures, algorithms, instructions
or processes, have a similar structure, which is also a second order
named set constructed of two named sets and a morphism between
them (Burgin, 2011). Namely, we have the following diagram, which
represents an individual extended representational knowledge quan-
tum (IREKQ) or simply knowledge ire-quantum RK = [(A, q, SA ),
(n, a), (NA , p, MA )].
                  a
         SA ------------> MA
          ^                ^
        q |                | p
          |                |
          A ------------> NA                                   (4.18)
                  n
Here NA is a name of the object A and SA is an intrinsic structure
of A, while MA is a representing structure (model) of A. For instance,
if A is an atom, then NA is usually the type of A, e.g., an atom of
hydrogen. According to contemporary physics, people do not have
access to the intrinsic structure SA but there are various models
(ascribed structures) MA of the atom A. One of them, called the Solar
model, represents the structure of an atom as consisting of the nucleus
and the electronic shell, which consists of electrons rotating around the
nucleus.
For instance, we can specify Diagram (4.18) in the following way:
the object A is the Solar System, its name NA is the linguistic expres-
sion “the Solar System”, its structure SA is the intrinsic structure of
the Solar System, i.e., the Sun and planets with relations between
them, and its model MA is a Copernican or Ptolemaic model of the
Solar System. This gives us the following diagram.
                                      modeling
     the intrinsic structure --------------------> a model of the Solar System
               ^                                               ^
   structuring |                                               | representation
               |                                               |
       the Solar System ---------------------------> "the Solar System"        (4.19)
                                       naming
From the perspective of the general theory of structures
(Burgin, 2012), the knowledge ire-quantum RK represents the
external structure of individual quantum units of representational
knowledge.
In a similar way, we have the following diagram, which represents
an individual extended operational knowledge quantum (IEOKQ) or
simply knowledge ioe-quantum OK = [(A, q, SA ), (f , g), (NA , p,
TA )].
                  g
         SA ------------> TA
          ^                ^
        q |                | p
          |                |
          A ------------> NA                                   (4.20)
                  f
Here the object A is an action, operation or process, such as
behavior or functioning; NA is its name and SA is an intrinsic struc-
ture of A; while TA is a procedural (operational) structure (model) of
A. Here TA can be a system of instructions, an algorithm, a program,
a method, a technique, or a procedure. For instance, A is a process
of searching information on the Internet; its name NA is “search on
the Internet”, “Yahoo search” or “Google search”; SA is the type of
processes of search on the Internet; and TA is the program of the search
engine that performs the search A.
From the perspective of the general theory of structures (Burgin,
2012), the knowledge ioe-quantum OK represents the external struc-
ture of individual quantum units of operational knowledge.
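As an illustration of this structure, here is a minimal Python sketch of an operational knowledge quantum in which the procedural model TA is an executable function; the class and function names (OperationalQuantum, web_search) are hypothetical.

from dataclasses import dataclass
from typing import Any, Callable

# Sketch of an individual extended operational knowledge quantum
# OK = [(A, q, S_A), (f, g), (N_A, p, T_A)], cf. Diagram (4.20).
@dataclass
class OperationalQuantum:
    process: Any              # A: an action, operation or process
    intrinsic_structure: Any  # S_A: the intrinsic structure (type) of A
    name: str                 # N_A: the name of the process
    procedure: Callable       # T_A: an algorithm/program modeling A

def web_search(query: str) -> str:
    # hypothetical stand-in for the program of a search engine
    return f"results for {query!r}"

ok_search = OperationalQuantum(
    process="searching information on the Internet",
    intrinsic_structure="the type of Internet search processes",
    name="search on the Internet",
    procedure=web_search,
)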
We also have an individual extended valued operational knowledge
quantum (IVOEKQ), the graphical form of which is represented by
Diagram (4.21).
                                     description
     the intrinsic structure --------------------> a program (algorithm) for computation of π
               ^                                                  ^
   structuring |                                                  | description
               |                                                  |
     computational process ----------------------------> computation of π      (4.21)
                                       naming
As in the case of descriptive knowledge, we split extended rep-
resentational knowledge quantum RK = [(A, q, SA ), (n, a), (NA , p,
MA )] and operational knowledge quantum OK = [(A, q, SA ),
(f, g), (NA , p, TA )] into two parts. In the first case, it gives us
the individual substantial (nuclear) representational knowledge quan-
tum K = (A, q, SA ), the graphical form of which is presented by
Diagram (4.22), and the individual symbolic (cognitive) representa-
tional knowledge quantum H = (NA , p, MA ), the graphical form of
which is presented by Diagram (4.23).
                  q
          A ------------> SA ,                                 (4.22)

                  p
         NA ------------> MA .                                 (4.23)
Here NA is a name of the object A and SA is an intrinsic structure
of A, while MA is the ascribed structure (assigned model) representing A.
Note that the substantial component can also be symbolic when
A is a symbolic object, e.g., a mathematical expression (formula) or
a scientific concept, and its intrinsic property SA is also represented
in a symbolic form, e.g., if A is the concept car, then SA can be its
weight.
In addition, there are two more components. The first one is the
attributive or estimate component of the knowledge ire-quantum RK.
It reflects relation of the intrinsic structure SA to the ascribed model
MA and is represented by Diagram (4.24).
                  f
         SA ------------> MA .                                 (4.24)
The second is the naming component of the knowledge ire-
quantum RK. It reflects the process of naming of the object A when
this object is separated from other objects, discovered or constructed
in the knowledge domain. It is represented by the Diagram (4.25).
                  g
          A ------------> NA .                                 (4.25)
Splitting of the extended operational knowledge quantum OK
gives us the individual nuclear operational knowledge quantum K =
(A, q, SA ), the graphical form of which is presented by Diagram
(4.26), and the individual symbolic operational knowledge quantum
H = (NA , p, TA ), the graphical form of which is presented by Dia-
gram (4.27).
                  q
          A ------------> SA                                   (4.26)
and
                  p
         NA ------------> TA .                                 (4.27)
Here the object A is an action, operation or process, such as
behavior or functioning; NA is its name and SA is an intrinsic
structure of A; while TA is a procedural (operational) structure
(model) of A.
In addition, there are two more components. The first one is the
attributive or estimate component of the knowledge ioe-quantum OK.
It reflects relation of the intrinsic structure SA to the ascribed model
TA and is represented by Diagram (4.28).
                  f
         SA ------------> TA .                                 (4.28)
The second is the naming component of the knowledge ioe-
quantum OK. It reflects the process of naming of the object A when
this object is separated from other objects, discovered or constructed
in the knowledge domain. It is represented by the Diagram (4.29).
                  g
          A ------------> NA .                                 (4.29)
In our study, we discern two types of knowledge quanta — indi-
vidual and collective. Diagram (4.6) represents the structure of
individual descriptive knowledge or knowledge about an individual
object, e.g., one person or a star. However, a similar structure char-
acterizes descriptive knowledge about concepts or about classes of
individual objects. We call such knowledge collective or general.
To model collective knowledge quanta explicating their structure,
let us consider a domain U , which consists of a class of objects,
e.g., all objects that are unified by some concept (for models of con-
cepts, see Section 5.2), for example, a class of cats or a class of dogs,
relations between these objects and operations with these objects.
Objects can be people, animals, systems, processes, actions, sym-
bols, elementary entities, etc. In essence, an object is anything that
can be considered (maybe only in an abstract/ideal way) as dis-
tinct from anything else, e.g., from other things or beings (either
real or abstract or ideal). We build a general definition of an object
in Section 5.2 classifying objects relative to the Existential Triad
considered in Section 2.2.
Note that it is possible that the set U consists of a single
object. We call this set U the collective knowledge domain (collective
knowledge universe). Then knowledge about objects from U involves:
(1) The domain U .
(2) An intrinsic property Q0 of objects from U .
(3) A class C of names for objects from U .
(4) An ascribed property P0 of objects from U .
In the formalized representation, the intrinsic property Q0 is rep-
resented by an abstract property Q = (U, q, W ) with the scale W
(elements of the theory of abstract properties are considered in Sec-
tion 5.2), while the ascribed property P0 is represented by an abstract
property P = (C, p, L) with the scale L and the domain C, i.e., P
is defined for names from C. In many cases, it is possible to assume
that P = P0 although in contrast to P , P0 is a natural property in
a general case. In this context, the property P is ascribed to objects
from U , although not directly, but through their names. Thus, we
come to the following definition.
Definition 4.1.3. A collective extended descriptive knowledge quan-
tum (CEDKQ) or simply knowledge cde-quantum about objects from
the class U is the structure, CDK = [(U , q, W ), (f , g), (C, p, L)],
the graphical form of which is represented by Diagram (4.30).
                  g
          W ------------> L
          ^                ^
        q |                | p
          |                |
          U ------------> C                                    (4.30)
                  f
In Diagram (4.30), the correspondence f relates each object H
from U to its name «H » = f (H) from C (or to its system of names or,
more generally, to its conceptual representative or conceptual image
in the sense of (Burgin and Gorsky, 1991)) and the correspondence
g assigns values of the property Q to values of the property P . In
other words, g relates values of the intrinsic property to values of
the ascribed property. For instance, when we consider such property
of material things as weight, it is an intrinsic property. In weighing
any thing, we can get only an approximate value of the real weight,
or weight with some precision. It is the ascribed property. That is,
when we measure an intrinsic property, we obtain values of the cor-
responding ascribed property.
Relation f in Diagram (4.30) can have the form of some algo-
rithms/procedures of object recognition, construction, or acquisition.
Relation g can have the form of some algorithms/procedures of mea-
surement, evaluation, or prediction.
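The weighing example can be turned into a small Python sketch of a collective descriptive knowledge quantum, with f acting as a naming procedure and g as a measurement procedure; the class name, the toy data and the helper lambdas below are assumptions made only for illustration.

from dataclasses import dataclass
from typing import Any, Callable, Sequence
import random

# Sketch of a collective extended descriptive knowledge quantum
# CDK = [(U, q, W), (f, g), (C, p, L)], cf. Diagram (4.30).
@dataclass
class CollectiveDescriptiveQuantum:
    domain: Sequence[Any]               # U, the collective knowledge domain
    intrinsic: Callable[[Any], float]   # q: U -> W, the intrinsic property
    naming: Callable[[Any], str]        # f: U -> C, object recognition/naming
    measure: Callable[[float], float]   # g: W -> L, measurement/evaluation

true_weights = {"apple": 0.182, "brick": 2.314}   # hypothetical intrinsic weights

cdk_weight = CollectiveDescriptiveQuantum(
    domain=list(true_weights),
    intrinsic=lambda thing: true_weights[thing],                  # the real (inaccessible) weight
    naming=lambda thing: thing.upper(),                           # assigns a name from C
    measure=lambda w: round(w + random.uniform(-0.01, 0.01), 2),  # weighing with limited precision
)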
Note that in general, any object from U can be a big system that
consists of other objects. For instance, it can be a galaxy or the whole
physical universe. In this case, the knowledge quantum K about U
is not elementary.
Remark 4.1.1. It is possible that objects from U are character-
ized by a system of properties. However, this does not require us to
change our representation of a collective descriptive knowledge quan-
tum because the system of properties is equivalent to one property
(Burgin, 2010). Only in this case, the knowledge K about U is not
elementary.
Similar to the knowledge ide-quantum, the knowledge cde-
quantum CDK has specific components. The first one is the attribu-
tive or estimate component of the knowledge cde-quantum CDK. It
reflects relation of the intrinsic property Q to the ascribed property
P and is represented by Diagram (4.31).
                  g
          W ------------> L.                                   (4.31)
For instance, it is possible to understand g in Diagram (4.31) as
being observable.
The second one is the naming component of the knowledge
cde-quantum CDK. It reflects the process of naming of the objects
from the domain U . The naming component is represented by
Diagram (4.32).
                  f
          U ------------> C.                                   (4.32)
The third one is the substantial component of the knowledge cde-
quantum CDK. It is the intrinsic property of the objects from the
knowledge domain U . The substantial component is reflected by
Diagram (4.33).
                  q
          U ------------> W.                                   (4.33)
The substantial component may be material, for example, when
U consists of all elementary particles or of computers, or may be
ideal. For instance, U is a collection of texts or a set of real numbers.
The fourth one is the cognitive or symbolic component of the
knowledge cde-quantum CDK. It is the ascribed property of the
objects from the knowledge domain U . The symbolic component
of collective knowledge renders information about the knowledge
domain U . Diagram (4.34) reflects the cognitive (symbolic) compo-
nent of CDK.
                  p
          C ------------> L.                                   (4.34)
From the perspective of the general theory of structures
(Burgin, 2012), the cognitive (symbolic) component of the knowledge
cde-quantum CDK represents the inner structure of collective quan-
tum units of descriptive knowledge.
On the quantum level, collective extended representational knowl-
edge is similar to individual extended descriptive knowledge and nat-
urally has similar components.
Definition 4.1.4. A collective extended representational knowledge
quantum (CERKQ) or simply knowledge cre-quantum about objects
from the class U is the structure CRK = [(U , q, S), (f , g), (C, p,
M )], the graphical form of which is represented by Diagram (4.35).
                  g
          S ------------> M
          ^                ^
        q |                | p
          |                |
          U ------------> C                                    (4.35)
                  f
In Diagram (4.35), U is the knowledge domain, S is the class of
structures of objects from the domain U , the symbol C denotes the
class of names of objects from U , and M is the class of models of
objects from U .
On the quantum level, collective extended procedural knowledge
is similar to individual extended representational knowledge and has
similar components.
Definition 4.1.5. A collective extended operational knowledge quan-
tum (CEPKQ) or simply knowledge coe-quantum about objects from
the class U is the structure COK = [(U , q, S), (f , g), (C, p, P )], the
graphical form of which is represented by Diagram (4.36).
                  g
          S ------------> P
          ^                ^
        q |                | p
          |                |
          U ------------> C                                    (4.36)
                  f
In Diagram (4.36), U is the knowledge domain, which consists
of actions, operations or processes, S is the class of structures of
actions, operations, and processes from the domain U , the symbol
C denotes the class of names of these objects from U , and P is the
class of symbolic representations of objects from U , such as systems
of instructions, algorithms, or procedures.
Diagrams (4.6), (4.18), (4.20), (4.30), (4.35), and (4.36) explicate
the structure of different types of extended knowledge quanta. Using
knowledge quanta of these types and their components, it is possible
to aggregate all knowledge systems from such units.
Some suggest that knowledge does not exist outside some knowl-
edge system. Quantum units of knowledge form such minimal knowl-
edge systems where knowledge dwells.
Knowledge may range from general to specific (Grant, 1996). Gen-
eral knowledge is broad, often publicly available, and independent of
particular events. Specific knowledge, in contrast, is context-specific.
General knowledge, its context commonly shared, can be more easily
and meaningfully codified and exchanged, especially among different
knowledge or practice communities. Codifying specific knowledge so
as to be meaningful across an organization or group requires its con-
text to be described along with the focal knowledge. This, in turn,
requires explicitly defining contextual categories and relationships
that are meaningful across knowledge communities. Three types of
collective knowledge determine three types of focal knowledge.
Taking Definition 4.1.3 as a base, we define specific knowledge in
the following way.
Definition 4.1.6. A focal descriptive knowledge quantum (SQ) K
about an object F is represented by Diagram (4.37).
                  gW
         DW ------------> BL
          ^                ^
       tU |                | pC
          |                |
         DU ------------> BC                                   (4.37)
                  fU
Here DU is a subset of U that contains F ; DW is the set of values
of the property T on objects from DU , i.e., DW = {t(u); u ∈ DU }; BC
is a subset of C that consists of the names of objects from DU , i.e.,
BC = {f (u); u ∈ DU }; BL is the set of values of the property P on
the names of objects from DU ; and fU , tU , pC , and gW are the
corresponding restrictions of the relations f , t, p, and g.
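The restrictions in Definition 4.1.6 can be computed mechanically. The Python sketch below does this for a toy domain; the function name focal_sets and the toy properties are hypothetical.

# Sketch: forming the focal sets of Definition 4.1.6 by restricting a
# collective quantum to a sub-domain DU that contains the focal object F.
def focal_sets(DU, t, f, p):
    """Return (DW, BC, BL) for the sub-domain DU."""
    DW = {t(u) for u in DU}         # DW = {t(u); u in DU}
    BC = {f(u) for u in DU}         # BC = {f(u); u in DU}
    BL = {p(name) for name in BC}   # values of the property P on the names
    return DW, BC, BL

# A toy example:
DU = {"Everest", "K2"}
DW, BC, BL = focal_sets(
    DU,
    t=lambda u: ("height of", u),      # the intrinsic property T restricted to DU
    f=lambda u: u.lower(),             # the naming correspondence f
    p=lambda name: f"{name} is high",  # the ascribed property P on names
)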
Focal knowledge resembles individual knowledge. The difference is
in the perspective: individual knowledge is taken by itself, while focal
knowledge is treated in the system of collective knowledge. Such an
approach to focal knowledge results in the commutative cube (4.38),
in which all mappings rW , rL , rU , and rC are inclusions.
In the cube (4.38), the front face is the square of Diagram (4.37) with
the nodes DW , BL , DU , and BC ; the back face is the square with the
nodes W , L, U , and C and the mappings t, f , p, and g; and the inclusions
rW , rL , rU , and rC map DW into W , BL into L, DU into U , and BC
into C, respectively.
Any system of knowledge is built from such elementary units by
means of relations, which glue these “knowledge bricks” together.
However, it is possible that some larger blocks are constructed from
elementary units and then systems of knowledge are built from
such blocks.
Often elementary units of knowledge are expressed by logical
propositions of the type:
An object F has the property P
or
The value of a property P for an object F is equal to a.
These propositions correspond to the forms of elementary data in
the operator information theory of Chechkin (1991).
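A simple Python sketch shows how propositions of these two forms can be mapped onto knowledge-quantum triples; the regular expressions are illustrative only and assume the exact wording above.

import re

# Sketch: mapping the two elementary proposition forms onto triples.
HAS_PROPERTY = re.compile(r"An object (?P<obj>\w+) has the property (?P<prop>\w+)")
HAS_VALUE = re.compile(
    r"The value of a property (?P<prop>\w+) for an object (?P<obj>\w+) is equal to (?P<val>\w+)"
)

def proposition_to_triad(sentence: str):
    m = HAS_PROPERTY.fullmatch(sentence)
    if m:
        return (m["obj"], "has", m["prop"])
    m = HAS_VALUE.fullmatch(sentence)
    if m:
        return (m["obj"], m["prop"], m["val"])
    return None

print(proposition_to_triad("An object F has the property P"))   # ('F', 'has', 'P')
print(proposition_to_triad("The value of a property P for an object F is equal to a"))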
Being transparent, the structure presented by Diagram (4.6) is
also the basic structure of descriptive knowledge on any level — from
the quantum level through the macrolevel to the megalevel. Indeed,
any knowledge system K is about some domain D of objects, which
is called the knowledge domain. When the domain consists of one
object, it is also called the knowledge object. Names of objects from
the knowledge domain D form the system ND . Besides, the domain
D is structured. This structure SD is represented by a knowledge
structure RD . This knowledge structure RD can be a logical system,
differential equation, difference equation, system of equalities and/or
inequalities, text in a natural language, binary code, and so on. As a
result, we come to the following diagram:
                  g
         SD ------------> RD
          ^                ^
        t |                | p
          |                |
          D ------------> ND                                   (4.39)
                  f
Besides, data and knowledge can themselves be objects, which
have names, as well as intrinsic and ascribed properties. In this case,
Diagrams (4.6), (4.18), (4.20), (4.30), (4.35), and (4.36) describe
the structure of metaknowledge. However, using specific names for
data and knowledge provides new opportunities in different areas.
An interesting example of this application is given by named data,
which are now very popular in Internet technology and are shaping its
future development (Jacobson et al., 2012; Ntuli and Han, 2012).
4.1.2. Semantic link network theory (SLNT) and semantic link theory of knowledge (SLTK)
Knowledge is of two kinds. We know a subject ourselves,
or we know where we can find information on it.
Samuel Johnson
Another representation of quantum knowledge is developed in the
semantic link theory of knowledge (SLTK) based on semantic link
network theory (SLNT) elaborated by Hai Zhuge and his collabora-
tors (Zhuge, 2004; 2010; 2012; Zhuge and Shi, 2003; 2004; Zhuge and
Sun, 2010; Zhuge and Xu, 2011; Zhuge and Zhang, 2010).
The goal of SLNT is to create a semantic map of the Web, rep-
resenting complex systems as semantic networks. As we will see
in Chapter 5, semantic networks form one of the basic classes of
knowledge representation. The advantage of SLNT is that an ele-
mentary unit of semantic networks is delineated and a system of
operations with these units is developed, allowing the construction of
complex networks.
The SLNT elementary unit is called a semantic link, which is a
triad α = (X, α, Y ) where X and Y are called semantic nodes and can
be any objects, e.g., texts, people, computers, semantic links, etc.,
while α is the connection (link) between X and Y , which indicates a
relation between these semantic nodes.
The graphical representation of the semantic link α has the fol-
lowing form:
                  α
          X ------------> Y.
Besides, Zhuge also calls the labeled arrow X --α--> Y , as well as
the inner component α of the semantic link α = (X, α, Y ), by the
name semantic link (Zhuge, 2012). In addition, α is called the seman-
tic indicator of the semantic link α (Zhuge, 2012). To discern these
three entities, we employ the following terminology:
— the triad α = (X, α, Y ) is called a complete semantic link,
— the relation α is called an inner semantic link, or semantic indi-
cator, and
— the corresponding labeled arrow is called an arrow semantic link.
Note that a semantic link is, like many other basic structures, a
kind of fundamental triads (named sets) (cf., Appendix). The gen-
eral nature of nodes in a semantic link implies that it is possible to
use semantic links for building not only semantic networks but also
networks in which physical objects are connected by semantic links.
A semantic link network is a triad formed by a set N of nodes,
a set L of semantic links, and a semantic space, which consists of
a concept hierarchy ℘ and a set of rules. The extended
representation of a semantic link network also includes a mapping
from nodes and links into the semantic space.
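A minimal Python sketch of these two notions follows; the class names SemanticLink and SemanticLinkNetwork and the reduced semantic space are illustrative assumptions and do not reproduce Zhuge's own formalism.

from dataclasses import dataclass, field
from typing import Any, Set, Tuple

# A complete semantic link is the triad (X, alpha, Y).
@dataclass(frozen=True)
class SemanticLink:
    left: Any        # semantic node X
    indicator: str   # the inner semantic link (semantic indicator) alpha
    right: Any       # semantic node Y

# A semantic link network: nodes, links, and a semantic space
# (here reduced to a concept hierarchy and a set of rules).
@dataclass
class SemanticLinkNetwork:
    nodes: Set[Any] = field(default_factory=set)
    links: Set[SemanticLink] = field(default_factory=set)
    concept_hierarchy: Set[Tuple[str, str]] = field(default_factory=set)
    rules: Set[str] = field(default_factory=set)

    def add(self, link: SemanticLink) -> None:
        self.nodes.update({link.left, link.right})
        self.links.add(link)

sln = SemanticLinkNetwork()
sln.add(SemanticLink("mountain", "is", "high"))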
As we can see, semantic links in the sense of Zhuge can con-
nect physical objects. Here we are interested in knowledge, which
is a structural essence. That is why we consider here only symbolic
semantic links to build the SLTK.
The SLTK elementary unit of knowledge is called a knowledge
link and is a symbolic triad α = (X, α, Y ) where X and Y are called
knowledge nodes and can be names of any symbolic objects, e.g.,
texts, words, symbols, pictures, semantic links, etc., while α is a
connection (link) between the knowledge nodes X and Y , which rep-
resents a semantic relation between the objects with the names X
and Y . Thus, a knowledge link is a kind of complete semantic links
in which nodes are arbitrary symbolic objects.
We will discern individual knowledge/semantic links and type
knowledge/semantic links. In an individual knowledge/semantic link,
α is an individual name of a certain relation. For instance, the knowl-
edge/semantic link ({1, 2, 3}, {(1, 3)}, {1, 2, 3}), where {(1, 3)} is
a binary relation in the set {1, 2, 3}, is individual.
In a type knowledge/semantic link, α is a name of a type of
relations. For instance, if ord denotes an order relation in
the set {1, 2, 3}, then the knowledge/semantic link ({1, 2, 3}, ord,
{1, 2, 3}) is a type knowledge/semantic link. At the same time, if
nord denotes the natural order relation in the set {1, 2, 3}, i.e.,
nord = {(1, 2), (2, 3), (1, 3)}, then the knowledge/semantic link
({1, 2, 3}, nord, {1, 2, 3}) is an individual knowledge/semantic
link. Note that an individual semantic relation can connect only
two objects, while a type semantic relation can connect many dif-
ferent objects. For instance, the order relation ord can be defined
in many distinctive sets, and even in one set, it can be defined
differently.
Zhuge describes 12 general types of complete, arrow, and inner
semantic links, which are also types of knowledge links when they
involve only symbolic objects (Zhuge, 2012):
1. The cause–effect link, in which the inner semantic link is denoted
by ce indicating that the left node is the cause of the right node,
i.e., the complete semantic link (X, ce, Y ) means the node X is
the cause of the node Y .
2. The implication link, in which the inner semantic link is denoted
by imp indicating that the left node implies the right node,
i.e., the complete semantic link (X, imp, Y ) means the node
X implies the node Y .
3. The subtype link, in which the inner semantic link is denoted
by stOf indicating that the features of the left node include all
features of the right node, i.e., the complete semantic link (X,
stOf, Y ) means the features of the node X include all features
of the node Y .
4. The similar link, or similarity link, in which the inner semantic
link is denoted by sim indicating that the semantics of the right
node is similar to the semantics of the left node, i.e., the complete
semantic link (X, sim, Y ) means the semantics of the node X is
similar to the semantics of the node Y .
5. The instance link, in which the inner semantic link is denoted
by insOf indicating that the left node is an instance of the right
node, i.e., the complete semantic link (X, insOf, Y ) means the
node X is an instance of the node Y .
6. The sequential link, in which the inner semantic link is denoted
by seq indicating that the right node follows the left node, i.e.,
the complete semantic link (X, seq, Y ) means the node Y follows
the node X.
7. The reference link, in which the inner semantic link is denoted
by ref indicating that the right node is a further explanation of
the left node, i.e., the complete semantic link (X, ref, Y ) means
the node Y is further explanation of the node X.
8. The equal link, or equality link, in which the inner semantic link
is denoted by e indicating that the meaning of the right node
is the same as the meaning of the left node, i.e., the complete
semantic link (X, e, Y ) shows the meaning of the node X is same
as the meaning of the node Y .
9. The empty link, in which the inner semantic link is denoted by
Ø indicating that the right and the left nodes are completely
irrelevant to one another, i.e., the complete semantic link (X, Ø,
Y ) means the nodes Y and X are completely irrelevant to one
another.
10. The null or unknown link, in which the inner semantic link is
denoted by Null or by N indicating that the relation between the
two nodes is unknown or uncertain, i.e., the complete semantic
link (X, N , Y ) shows the relation between X and Y is unknown
or uncertain.
11. The semantic equivalence link, in which the inner semantic link
is denoted by equiv, indicating that the connected nodes can
substitute for one another wherever they occur, i.e., the complete
semantic link (X, equiv, Y ) shows X and Y can substitute for
one another wherever they occur.
The last of general semantic links considered by Zhuge (2012) rep-
resents a unary operation with semantic links and will be described
in Section 4.3.2.
12. The non-α relation link, in which the inner semantic link is
denoted by Non (α) or by αN indicating that there is no relation
α between the two nodes, i.e., the complete semantic link (X,
αN , Y ) shows there is no relation α between X and Y .
One more operation considered by Zhuge (2012), reversion, is
described in Section 4.3.2.
Here we describe other general types of knowledge/semantic links.
13. The property link, in which the inner semantic link is denoted by
prOf indicating that the left node is a property (feature) of the
right node, i.e., the complete semantic link (X, prOf, Y ) means
the node X is a property (feature) of the node Y . For instance,
the complete semantic link (blue, prOf, blue ball) means the color
blue is a property (feature) of a blue ball.
14. The part link, in which the inner semantic link is denoted by ptOf
indicating that the left node is a part of the right node, i.e., the
complete semantic link (X, ptOf, Y ) means the node X is a part
of the node Y . For instance, the complete semantic link (an arm,
ptOf, a woman) means an arm is a part of a woman.
15. The element link, in which the inner semantic link is denoted by
elOf indicating that the left node is an element of the right node,
i.e., the complete semantic link (X, elOf, Y ) means the node X
is an element of the node Y . For instance, the complete semantic
link (the Earth, elOf, the Solar System) means the Earth is an
element of the Solar System.
16. The name link, in which the inner semantic link is denoted by
nmOf indicating that the left node is a name of the right node,
i.e., the complete semantic link (X, nmOf, Y ) means the node X
is a name of the node Y . For instance, the complete
semantic link (Michael, nmOf, the man) means Michael is the
name of the man.
17. The before link, in which the inner semantic link is denoted by be
indicating that on the time scale, the left node is before the right
node, i.e., the complete semantic link (X, be, Y ) means the node
X is before the node Y . For instance, the complete semantic link
(Winter, be, Spring) means Winter is before the Spring.
18. The after link, in which the inner semantic link is denoted by
af indicating that on the time scale, the left node is after the
right node, i.e., the complete semantic link (X, af, Y ) means the
node X is after the node Y . For instance, the complete seman-
tic link (Summer, af, Spring) means that Summer is after the
Spring.
19. The function link, in which the inner semantic link is denoted by
fnOf indicating that the left node is a function of the right node,
i.e., the complete semantic link (X, fnOf, Y ) means the node X
is a function of the node Y .
For instance, the complete semantic link (moving people, fnOf,
a car) means that a function of a car is moving people.
20. The relation link, in which the inner semantic link is denoted by
rn indicating that the left node and the right node are in some
relation, i.e., the complete semantic link (X, rn, Y ) means the
node X and the node Y are in some relation. For
instance, the complete semantic link (10, rn, 5) means numbers
5 and 10 are in some relation, in particular, in the relation of
divisibility, i.e., 10 is divisible by 5.
21. The in link, in which the inner semantic link is denoted by in
indicating that the left node is in the right node, i.e., the complete
semantic link (X, in, Y ) means the node X is in the node Y .
For instance, the complete semantic link (Michael, in, the house)
means Michael is in the house.
22. The better link, in which the inner semantic link is denoted by
bt indicating that on some scale, the left node is better than the
right node, i.e., the complete semantic link (X, bt, Y ) means the
node X is better than the node Y . For instance, the complete
semantic link (honesty, bt, deception) means honesty is better
than deception.
23. The bigger link, in which the inner semantic link is denoted by
bg indicating that for some scale, the left node is bigger than the
right node, i.e., the complete semantic link (X, bg, Y ) means the
node X is bigger than the node Y . For instance, the complete
semantic link (the Sun, bg, the Earth) means the Sun is bigger
than the Earth.
24. The subclass link, in which the inner semantic link is denoted by
scOf indicating that the left node is a subclass of the right node,
i.e., the complete semantic link (X, scOf, Y ) means the node X
is a subclass of the node Y . For instance, the complete semantic
link (all dogs, scOf, all animals) means the class of all dogs is a
subclass of the class of all animals.
As Zhuge writes (Zhuge, 2012), it is impossible to list all possible
semantic relations and corresponding semantic links. Moreover, even
taking a sufficiently complete set of link (relation) operations, it is
usually impossible to build a semantic link base. The same is true
for knowledge links. However, it is possible to develop such a base
for semantic networks representing many specific domains.
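For a fixed application domain, the inner-link indicators introduced above can be collected into a small vocabulary. The Python sketch below gathers the abbreviations used in this section into an enumeration; the enumeration itself is an illustration, not part of Zhuge's theory.

from enum import Enum

# The semantic indicators used in the list of general link types above.
class Indicator(str, Enum):
    CAUSE_EFFECT = "ce"
    IMPLICATION = "imp"
    SUBTYPE = "stOf"
    SIMILAR = "sim"
    INSTANCE = "insOf"
    SEQUENTIAL = "seq"
    REFERENCE = "ref"
    EQUAL = "e"
    EMPTY = "Ø"
    NULL = "N"
    EQUIVALENCE = "equiv"
    PROPERTY = "prOf"
    PART = "ptOf"
    ELEMENT = "elOf"
    NAME = "nmOf"
    BEFORE = "be"
    AFTER = "af"
    FUNCTION = "fnOf"
    RELATION = "rn"
    IN = "in"
    BETTER = "bt"
    BIGGER = "bg"
    SUBCLASS = "scOf"

# e.g., the complete semantic link (the Earth, elOf, the Solar System):
link = ("the Earth", Indicator.ELEMENT, "the Solar System")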
Theory and practice of semantic networks shows that very often
knowledge/semantic links indicate connections not absolutely but
only to some degree. Thus, it is useful to build and utilize graded
counterparts for all general knowledge/semantic links:
1. The graded cause–effect link, in which the inner semantic link
is denoted by (ce, cdg) indicating that the left node is a par-
tial cause of the right node, i.e., the complete semantic link (X,
(ce, cdg), Y ) means the node X is the cause of the node Y to
the degree cdg. For instance, the complete semantic link (genetic
inheritance, (ce, 80%), length of life) means the length of life of
an individual depends on her genetic inheritance by 80%.
2. The probabilistic cause–effect link, in which the inner semantic
link is denoted by (ce, pr) indicating that the left node is the
cause of the right node with the probability pr, i.e., the complete
semantic link (X, (ce, pr), Y ) means the node X is the cause of
the node Y with the probability pr.
3. The graded implication link, in which the inner semantic link is
denoted by (imp, dg) indicating that the left node implies the
right node to the degree dg, i.e., the complete semantic link (X,
(imp, dg), Y ) means the node X implies the node Y to the degree dg.
4. The graded subtype link, in which the inner semantic link is
denoted by (stOf, ext) indicating that the features of the left
node include all features of the right node to the extent ext, i.e.,
the complete semantic link (X, (stOf, ext), Y ) means the features
of the node X include all features of the node Y to the extent ext.
5. The graded similar link, or graded similarity, link, in which the
inner semantic link is denoted by (sim, sd) indicating that the
semantics of the right node is similar to the semantics of the left
node to the degree sd, i.e., the complete semantic link (X, (sim, sd),
Y ) means the semantics of the node X is similar to the semantics
of the node Y to the degree sd.
6. The graded instance link, in which the inner semantic link is
denoted by (insOf, id) indicating that the left node is an instance
of the right node to the degree id, i.e., the complete semantic link
(X, (insOf, id), Y ) means the node X is an instance of the node
Y to the degree id.
7. The graded sequential link, in which the inner semantic link is
denoted by (seq, pr) indicating that the right node follows the
left node with the probability pr, i.e., the complete semantic link
(X, (seq, pr), Y ) means the node Y follows the node X with the
probability pr.
8. The graded reference link, in which the inner semantic link is
denoted by (ref, rext) indicating that the right node is a further
partial explanation of the left node, i.e., explanation to the extent
ext. So, the complete semantic link (X, (ref, rext), Y ) means the
node Y is an explanation of the node X to the extent rext.
9. The graded equal link, or graded equality, link, in which the inner
semantic link is denoted by (e, eext) indicating that the meaning
of the right node is almost the same as the meaning of the left
node, i.e., the complete semantic link (X, (e, eext), Y ) shows
the meaning of the node X is same to the extent eext as the
meaning of the node Y .
10. The graded empty link, in which the inner semantic link is
denoted by (Ø, ed) indicating that the right and the left nodes
are partially irrelevant to one another to the degree ed, i.e., the
complete semantic link (X, (Ø, ed), Y ) means the nodes Y and
X are irrelevant to one another to the degree ed.
11. The graded null or graded unknown link, in which the inner
semantic link is denoted by (Null, nd) or by (N , nd) indicating
that the relation between the two nodes is unknown or uncertain
to the degree nd, i.e., the complete semantic link (X, (N , nd),
Y ) shows the relation between X and Y is unknown or uncertain
to the degree nd.
12. The graded semantic equivalence link, in which the inner seman-
tic link is denoted by (equiv, pt) indicating that the connected
nodes can substitute for one another in the pt percent of situa-
tions, i.e., the complete semantic link (X, (equiv, 25%), Y ) shows
X and Y can substitute for one another in 25% of the situations.
13. The graded non-α relation link is denoted by (Non (α), pr) or by
(αN , pr) indicating that there is the probability pr that there is
no relation α between the two nodes, i.e., the complete semantic
link (X, (αN , 0.3) Y ) shows there is the probability 0.3 that
there is no relation α between X and Y .
14. The graded property link, in which the inner semantic link is
denoted by (prOf, prext) indicating that the left node
is a property (feature) of the right node to the extent prext, i.e.,
the complete semantic link (X, (prOf, prext), Y ) means the node
X is a property (feature) of the node Y to the extent prext. For
instance, the complete semantic link (blue, (prOf, 0.5), ball B)
means the color blue is a property (feature) of the ball B to
the extent 0.5.
15. The graded part link, in which the inner semantic link is denoted
by (ptOf, ptext) indicating that the left node is a part of the
right node to the extent ptext, i.e., the complete semantic link
(X, (ptOf, ptext), Y ) means the node X is a part of the node
Y to the extent ptext. For instance, the complete semantic link
(woman Y , (ptOf, 0.7), family X) means the woman Y is a part
of the family X to the extent 0.7.
16. The graded element link, in which the inner semantic link is
denoted by (elOf, elext) indicating that the left node is an element
of the right node to the extent elext, i.e., the complete semantic
link (X, (elOf, elext), Y ) means the node X is an element of the
node Y to the extent elext. For instance, the complete semantic
link (the Earth, (elOf, 1), the Solar System) means the Earth is
an element of the Solar System to the extent 1.
17. The graded name link, in which the inner semantic link is denoted
by (nmOf, next) indicating what part of a name of the right node
is the left node, i.e., the complete semantic link (X, (nmOf, next),
Y ) means the node X forms the part next of a name of the node
Y . For instance, if the name of a bridge is the Golden Bridge,
then the complete semantic link (Golden, (nmOf, 1/2), bridge)
means that Golden is one half of the name of this bridge.
18. The graded before link, in which the inner semantic link is
denoted by (be, bdg) indicating to what degree on the time scale,
the left node is before the right node, i.e., the complete semantic
link (X, (be, bdg), Y ) means the node X was bdg years before
the node Y . For instance, the complete semantic link (Columbus,
(be, bdg), Washington) means Columbus lived bdg years before
Washington.
19. The graded after link, in which the inner semantic link is denoted
by (af, adg) indicating to what degree on the time scale, the left
node is after the right node, i.e., the complete semantic link (X,
(af, adg), Y ) means the node X is (was) adg years after the node
Y . For instance, the complete semantic link (Washington, (af, adg),
Columbus) means Washington lived adg years after Columbus.
20. The graded function link, in which the inner semantic link is
denoted by (fnOf, fgext) indicating that the left node is a partial
function of the right node, i.e., the complete semantic link
(X, (fnOf, fgext), Y ) means the node X is a function of the node Y
to the extent fgext. For instance, the semantic
link (moving things, (fnOf, 30%), a car) means that 30% of car
functions is moving things.
21. The graded relation link, in which the inner semantic link is
denoted by rn indicating that the left node and the right node
are in some relation, i.e., the complete semantic link (X, rn,
Y ) means the features of the node X and the node Y are in
some relation. For instance, the complete semantic link (10, rn,
5) means numbers 5 and 10 are in some relation, in particular,
10 is divisible by 5.
22. The graded in link, in which the inner semantic link is denoted
by in indicating that the left node is in the right node, i.e., the
September 27, 2016 19:40 Theory of Knowledge: Structures and Processes - 9in x 6in b2334-ch04 page 339

Knowledge Structure and Functioning: Microlevel or Quantum Theory 339

complete semantic link (X, in, Y ) means the node X is in the

the house) means Michael is in the house.
23. The graded better link, in which the inner semantic link is denoted
by (bt, btext) indicating that on some scale, the left node is better
than the right node to the extent btext, i.e., the complete semantic
link (X, (bt, btext), Y ) means the node X is better than the node
Y to the extent btext. For instance, the complete semantic link
(honesty, (bt, much), deception) means honesty is much better
than deception.
24. The graded bigger link, in which the inner semantic link is
denoted by (bg, bgext) indicating that for some scale, the left
node is bigger than the right node to the extent bgext, i.e., the
complete semantic link (X, (bg, bgext), Y ) means the node X
is bigger than the node Y to the extent bgext. For instance, the
complete semantic link (the Sun, (bg, much), the Earth) means
the Sun is much bigger than the Earth.
25. The graded subclass link, in which the inner semantic link is
denoted by (scOf, sext) indicating that the left node is a subclass
of the right node to the extent sext, i.e., the complete semantic
link (X, (scOf, sext), Y ) means the node X is a subclass of the
node Y to the extent sext. For instance, the complete semantic
link (penguins, (scOf, 0.7), birds) means the class of all penguins
is a subclass of the class of all birds to the extent 0.7.
When the grades of a knowledge/semantic link take values in the
interval [0, 1], as it is, for example, for the probability pr, we have
fuzzy general knowledge/semantic links.
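A graded link only adds a grade to the inner link, as in the following Python sketch; the class name GradedLink and the predicate is_fuzzy are illustrative assumptions.

from dataclasses import dataclass
from typing import Any

# A graded knowledge/semantic link: the inner link is the pair (indicator, grade),
# where the grade is a degree, an extent or a probability.
@dataclass(frozen=True)
class GradedLink:
    left: Any
    indicator: str
    grade: float     # degree / extent / probability
    right: Any

    def is_fuzzy(self) -> bool:
        # fuzzy general links take grades in the interval [0, 1]
        return 0.0 <= self.grade <= 1.0

# The example of the graded cause-effect link:
# (genetic inheritance, (ce, 80%), length of life)
g = GradedLink("genetic inheritance", "ce", 0.80, "length of life")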
Semantic links are used for construction of semantic link networks.
Definition 4.1.7 (Zhuge, 2012). A semantic link network (SLN)
is a relational network that consists of the following parts: a set of
semantic nodes, a set of (arrows) semantic links between the nodes,
and a semantic space.
In contrast to their name, semantic nodes can be any objects.
Semantic links between nodes are regulated by attributes of nodes or
generated by interactions between nodes. A semantic space includes
a classification hierarchy of concepts and a set of rules for reasoning
and inferring semantic links, for networking and transformations of
the network.
Knowledge flows in semantic link networks as well as in physical
networks with people and computers as nodes. According to the the-
ory of knowledge flow developed in (Zhuge, 2012), the Knowledge Grid
environment has three flows:
1. knowledge flow
2. information flow
3. service flow
Parallel to these flows, the cyber-physical society has three more
flows:
4. material flow
5. energy flow
6. money and other symbolic goods, e.g., stocks or bonds, flow
In the resource-mediated mode, knowledge flows through four
types of links:
• Question answering links
• Citation links
• Hyperlinks
• Semantic links
4.1.3. QTK–SLTK connection
He who learns but does not think, is lost!
He who thinks but does not learn is in great danger.
Confucius
Here we consider relations between the knowledge representation in
semantic link network theory (SLNT) described in Section 4.1.2 and
the knowledge representation in the theory of quantum knowledge
(QTK) described in Section 4.1.1.
Definition 4.1.8. We call two knowledge representations R1 and R2
equivalent if all knowledge representable (expressible) in R1 is also
representable (expressible) in R2 and vice versa.
In computer science, there are many theorems in which equiva-
lence of different representations of operational knowledge is proved.
For instance, it is proved that the representation of functions (or more
exactly, of operational knowledge about functions) given by Turing
machines is equivalent to the representation of functions given by
partial recursive functions or that the representation of formal lan-
guages (or more exactly, of operational knowledge about formal lan-
guages) given by deterministic finite automata is equivalent to the
representation of formal languages given by non-deterministic finite
automata (Sipser, 1997).
However, not all representations are equivalent. For instance, it is
proved that the representation of formal languages given by Turing
machines is not equivalent to the representation of formal languages
given by finite automata (Sipser, 1997).
Representation Equivalence Theorem. Knowledge represen-
tations in the cognitive (symbolic or information) component of the
quantum theory of knowledge and in SLTK are equivalent.
Proof. (a) From QTK to SLTK:
We have to show that any knowledge that can be represented in
QTK can be also represented in SLTK. In QTK, the general repre-
sentations of knowledge quanta and their components are given in
Diagrams (4.6), (4.18), (4.20), (4.30), (4.35) and (4.36). Thus, we
need to demonstrate that it is possible to build all these diagrams using
complete semantic/knowledge links.
At first, we take Diagram (4.6). As the SLTK element of knowl-
edge — a complete semantic/knowledge link (X, α, Y ) — is a fun-
damental triad with arbitrary symbolic support X and reflector Y ,
it can naturally represent the knowledge quantum (A, q, QA ). Going
to Diagram (4.6), we see that it is possible to represent all its com-
ponents — the attributive (estimate) component (QA , e, PA ), the
naming component (A, n, NA ), substantial component (A, q, QA )
and cognitive (symbolic) component (NA , p, PA ) — as complete
semantic/knowledge links in a similar way to Diagram (4.5). Conse-
quently, it is possible to build Diagram (4.6) using complete seman-
tic/knowledge links.
As all other Diagrams (4.18), (4.20), (4.30), (4.35), and (4.36)
have the same structure, we can apply to them the same con-
structions, building these knowledge quanta from complete seman-
tic/knowledge links.
Thus, we see that whatever can be represented in QTK can be
also represented in SLTK.
(b) From SLTK to QTK:
Let us consider a knowledge link in a general form
                  α
          X ------------> Y

or

     (X, α, Y ).
The QTK representation of an arbitrary semantic or knowledge link has
the form of an abstract property:

                             property
     (X, α, Y ) -----------------------> {0, 1}.
   object/object name                      scale
In this representation, objects (object names) are knowledge links
from SLTK, while properties (property values) are relations that
reflect membership of the triad (X, α, Y ) in a semantic network.
It is a classical membership relation with the scale {0, 1}, which
describes membership of the triad (X, α, Y ) in a semantic network.
This membership relation may be fuzzy, having the form
     (X, α, Y ) -----------------------> [0, 1].
There is another QTK representation of an arbitrary semantic
or knowledge link in the following form of abstract property (cf.,
Section 5.3)
     (X, Y ) ------------> α.
In this representation, objects (object names) are pairs of objects
(object names) from SLTK, while properties (property values) are
connections/links (connection/link names).
Thus, we see that whatever can be represented in SLTK can be
also represented in QTK. Consequently, both systems, SLTK and
QTK, are equivalent.
Theorem is proved.
Because general descriptive knowledge units from QTK can
include arbitrary objects, a similar proof gives us the following result.
General Equivalence Theorem. Object representations in
QTK and in semantic link network theory are equivalent.
The obtained results show the equivalence of quantum units of
knowledge in different theories implicating uniqueness of the inner
structure of such units.
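The two directions of the proof can be written down directly. The Python sketch below decomposes a descriptive knowledge quantum into its four complete semantic/knowledge links and reads a complete link as an abstract property on a pair of objects; the function names are hypothetical.

from typing import Any, Dict, List, Tuple

def quantum_to_links(A: Any, NA: Any, QA: Any, PA: Any) -> List[Tuple[Any, str, Any]]:
    """Decompose Diagram (4.6) into its four complete semantic/knowledge links."""
    return [
        (A, "q", QA),    # substantial component
        (NA, "p", PA),   # cognitive (symbolic) component
        (A, "n", NA),    # naming component
        (QA, "e", PA),   # attributive (estimate) component
    ]

def link_to_property(link: Tuple[Any, str, Any]) -> Dict[Tuple[Any, Any], Any]:
    """Read a complete link (X, alpha, Y) as an abstract property on the pair (X, Y)."""
    X, alpha, Y = link
    return {(X, Y): alpha}

links = quantum_to_links("car", "Cadillac", "color", "white")
prop = link_to_property(("mountain", "is", "high"))   # {('mountain', 'high'): 'is'}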
4.2. Signs and symbols as quantum units of knowledge
In life, particularly in public life, psychology is more powerful
than logic.
Ludwig Quidde
In natural languages, concepts are treated as quantum units of knowl-
edge. At the same time, concepts are kinds of symbols, while symbols
are types of signs. Thus, all of them, i.e., signs, symbols and concepts,
are quantum units of knowledge.
Two main interpretations of the word “sign” are used by peo-
ple: (1) sign as a physical object with some meaning, and (2) sign
as a conceptual (theoretical) structure. We will call a sign by the
name material sign when we have in mind the first interpretation
and by the name conceptual sign when we bear in mind the second
interpretation.
Examples of material signs are digits 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
used in everyday calculations or letters, such as a, b, and c, from
alphabets of natural languages. Icons that we can see on the screen
of a computer are also material signs. Here we are mostly interested
in conceptual signs, treating material signs as names of conceptual
signs. Thus, in what follows, the word sign denotes a conceptual sign,
while a name of a sign usually means a material sign.
In a similar way, there are three different, however, connected,
meanings of the word “symbol”. In a broad sense, symbol is the
same as sign. For example, the terms “symbolic system” and “sign
system” are considered as synonyms, although the first term is used
much more often. Another understanding identifies symbol with a
physical sign.
Theoretical models of the structure of conceptual signs have been
constructed in the discipline, which is called semiotics and is a gen-
eral theory of signs. Semiotics studies structures and functions of
signs and their communicative operation, including sign processes
(semiosis), indication, designation, signification, likeness, analogy,
metaphor, symbolism, and communication.
The term semiotics comes from the Greek word σηµει̃oν (meaning
a sign or a mark) and it was first used in English by Henry Stubbes
(1670) in the form semeiotics denoting the branch of medical science
related to the interpretation of signs and by John Locke (1690) in
the form semeiotike as “the doctrine of signs”.
The importance of signs and signification has been recognized
throughout much of the history of philosophy, and in psychology as
well. For instance, Umberto Eco (1986) argues that semiotic theo-
ries are implicit in the work of most, perhaps all, major thinkers.
Plato and Aristotle both explored the relationship between words
(as signs) and the real world. Much later, Augustine of Hippo (354–
430) (Saint Augustine) considered the nature of the sign in society
(St. Augustine, 1974). The general study of signs was popular in
scholastic philosophy and logic. For instance, Peter Abelard (1079–
1142) noted that linguistic signification does not cover the whole
range of sign processes and instructed that arbitrary things might func-
tion as signs, too, if they were connected to each other in such a way
that the perception of one led to the cognition of the other (Abelard,
1927; 1956). The unknown author, now commonly named Ps.-Robert
Kilwardby, in his work written somewhere between 1250 and 1280,
strengthens Augustine’s renowned dictum that “all instruction is
either about things or about signs” stating that “every science is
Figure 4.1. The Bacon/Augustine Sign Triad: a triadic relation connecting
the Material Sign, the Sign Object, and the Sign Observer
about signs or things signified.” In the context of named set theory


where signs are kinds of names (Burgin, 2011), this statement tells
us that every science is about names or things named. William of
Ockham (ca. 1285–1347/49) brought the concepts of sign and sig-
nification to logic, restricting the general concept of sign to what
later became a propositional sign. Roger Bacon (ca. 1214–1293) was
probably the most important medieval philosopher of sign, being the
author of the most extensive medieval treatise on signs known so far
(Bacon, ca. 1267). He developed a general conception of signification,
as well as a detailed theory of the linguistic sign integrated into a
broader theory of sign in general. According to Bacon, a sign, as it
was already pointed out by Augustine, is a triadic relation, such that
it is — in principle — a sign of something to someone. This gives us
the following model of sign (see Figure 4.1).
In addition, Bacon elaborated a detailed classification of signs
by adapting, combining, and modifying elements of several prior
sign typologies. According to Bacon, all signs belong to the following
classes:
1. Natural signs
1.1. Signs signifying by inference, concomitance, consequence
1.1.1. Signs signifying necessarily
1.1.1.1. Signs signifying something in present
1.1.1.2. Signs signifying something in the past
1.1.1.3. Signs signifying something in the future
1.1.2. Signs signifying with probability
1.1.2.1. Signs signifying something in present
1.1.2.2. Signs signifying something in the past
1.1.2.3. Signs signifying something in the future
1.2. Signs signifying by configuration and likeness, e.g., images or
pictures
1.3. Signs signifying by causality
2. Signs given and directed by a soul
2.1. Signs signifying instinctively without deliberation
2.2. Signs signifying with deliberation, e.g., words
2.3. Interjections

This classification was complemented in the following way:

1. Natural signs signifying unintentionally by their essence


1.1. Inferential signs based on a more or less constant concomi-
tance of sign and what it signifies
1.2. Iconic signs, based on similarity in appearance
1.3. Signs based on a causal relation between the sign and the
signified thing.
2. Signs of inference
2.1. Necessary signs
2.2. Probable signs

It is necessary to remark that the division of all signs into two


main classes of natural and given signs was taken from Augustine, the
distinction between necessary and probable signs was adopted from
Aristotle and their subdivision according to their temporal reference
was a traditional element in the theories of the sacramental sign.
The next major step in semiotics was made when John Poinsot
published his Tractatus de Signis in 1632. It was, perhaps, the earliest
fully systematized treatise in semiotics.
Contemporary semiotics was independently originated by the
American logician and philosopher Charles Sanders Peirce (1839–
1914), who originally called it semeiotic, and by the Swiss linguist
Ferdinand de Saussure (1857–1913), who originally called it semiology.
Saussure (1916) defined semiology as a discipline that includes
linguistics as a special case. At the same time, Peirce included in
semeiotic both language studies and logic defining its three branches:
1. Syntax as a discipline that studies how signs interact with one


another.
2. Semantics as a discipline that studies how signs are related to
things in the world.
3. Pragmatics as a discipline that studies how signs are employed by
the agents who use them.

As signs exist in a huge diversity of situations, the founder of


semiotics, Charles Sanders Peirce, and his follower Charles William
Morris (1901–1979) defined semiotics very broadly, predicting that it
would influence a variety of disciplines. For instance, Morris wrote
(1938):
“The sciences must look to semiotic for the concepts and general prin-
ciples relevant to their own problems of sign analysis. Semiotic is not
merely a science among other sciences but an organon or instrument to
all sciences.”

Indeed, semiotics has become an important tool in communication


research, information theory, linguistics and the fine arts, as well as
in psychology, sociology, and aesthetics. At the same time, although
many other disciplines recognize the potential importance of semiotic
paradigms for their fields, they have not yet found a satisfying way
of integrating them into their domains.
The first theoretical approach to the concept of sign can be found
in the works of Ferdinand de Saussure, who is sometimes called the
father of theoretical linguistics. Saussure studied linguistic signs, and
according to him, the basic property of a sign is that it points to
something different from itself, transcendent to it (Saussure, 1916).
To represent this property, Saussure introduced a structural model
of sign in the form of the dyadic sign triad (see Figure 4.2).
As in other cases, this triad is a kind of a fundamental triad
(named set) described in Appendix.

Figure 4.2. The dyadic sign triad of de Saussure: the signifier connected to the signified by signification.


Related to a word, a signifier, also called signal or signifiant, may
be understood as the sound or written pattern of the word or as its
actual physical realization as part of a speech act. At the same time,
a signified, also called signifié, may be treated as the conception or
meaning of the signifier. According to contemporary views, human
reality is a construction and the product of signifying activities which
are culturally specific, culturally determined and often unconscious.
According to Saussure, signs can exist only in relation to other
signs. In this context, a linguistic sign is not a link between a thing
and a word, but between a concept and a sound/written pattern,
where the pattern is the hearer/reader’s psychological impression
conveyed by the evidence of his/her senses. This was rather different
from previous approaches focused on the relationship between words
and the things they designate.
A dyadic model of sign was also supported by Eco (1976), who
defined a sign as anything which may be interpreted as standing
for something else. Note that it is necessary to make a distinction
between a sign as a structure (a conceptual sign) and a material sign
as a component (the name) of this structure, e.g., letters or digits are such
material signs.
This understanding is represented by Figure 4.3, which is a struc-
tural model of sign:
Figure 4.3. The dyadic sign triad of Eco: the material sign (sign name) connected to the object.

However, in a more detailed model adopted from Hjelmslev (1963),


Eco explained that any given (material) sign is required to be an
element of the expression-plane, and must therefore be conven-
tionally correlated to one or more elements of the content-plane
(cf., Figure 4.4).
Developing this approach, Eco also assumed that it was possible
for the expression of a sign to have more than one content (cf.,
Figure 4.5), while the content of a sign may have more than one
expression (cf., Figure 4.6). For instance, a symbol (material sign)
such as 1 can denote a digit of the decimal numerical system
(Content 1), a digit of the binary numerical system (Content 2) or
the number one (Content 3).

Figure 4.4. The Hjelmslev–Eco model of sign with one expression and one content (one expression-plane element correlated with one content-plane element).

Figure 4.5. The Hjelmslev–Eco model of sign with one expression and several contents (one expression-plane element correlated with several content-plane elements).

Figure 4.6. The Hjelmslev–Eco model of sign with several expressions and one content (several expression-plane elements correlated with one content-plane element).
At the same time, the number one can be denoted by the symbol
1 (Expression 1), by the letter α (Expression 2), by the letter ℵ
(Expression 3), by the word “one” (Expression 4) or by the word
“ein” (Expression 5).
Figure 4.7. The Hjelmslev–Eco model of sign with one expression plane and two content planes (A and B).

Figure 4.8. The Balanced Sign Triad of Peirce: Representamen or Sign Vehicle (Sign Name) at the top, connected to Denotat (Sign Object) and Interpretant (Sign Meaning).

Besides, in some cases, a sign may have expressions in more than


one expression plane, and may have contents in more than one con-
tent plane (cf., Figure 4.7). For instance, such symbol (material sign)
as 1 can denote a digit of a decimal numerical system (Content 1),
a digit of a binary numerical system (Content 2) or the number one
(Content 3), while its expressions can be such material symbols as
“1”, “1”, “1” or “1”.
Peirce extended the dyadic model by further splitting the sig-
nified (or content) into essentially different parts: the sign’s object
and interpretant (the meaning of the sign), and thus, coming to the
triadic model of a sign, the balanced sign triad (cf., Figure 4.8).
As Alp (2010) writes, historically, dyadic and triadic sign models
form different cultures, and they are applied to semiotic targets inde-
pendently. In the Peirce’s model (cf., Figure 4.8), a sign is understood
as a structure that consists of three elements: the Name (Peirce called
it Sign Vehicle), Object and Meaning of the sign (Peirce, 1931–1935).
Sign Vehicle is conceived as a physical representation of a sign, that
is, a material sign, which is called sign in everyday life. In other words,
a sign is often comprehended as some elementary image inscribed on
paper, clay tablet, piece of wood or stone, presented on the screen of
a computer monitor, and so on. This material representation plays
the role of a name of the sign.
Peirce implied that signs establish meaning through recursive rela-
tionships that arise in sets of three main semiotic elements:
• Representamen, also called Sign or Sign Vehicle by Peirce or Sign
Name in a general context, is the component of the sign that rep-
resents the denoted object or objects and is similar to Saussure’s
signifier. Note that in this context, the sign name is not necessarily
a single word. It can be quite an elaborate object.
• Object, also called extent, is what the sign represents or encodes.
• Interpretant is the meaning formed into a further sign by inter-
preting or decoding a sign.
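For readers who think in data structures, the dyadic and triadic sign models can be sketched as follows; this is only an illustrative toy encoding in Python (the class names and example values are ours, not part of any of the cited sign theories):

from dataclasses import dataclass

@dataclass(frozen=True)
class DyadicSign:
    # Saussure's dyad: a signifier coupled with a signified
    signifier: str
    signified: str

@dataclass(frozen=True)
class TriadicSign:
    # Peirce's balanced sign triad: name, object and meaning
    representamen: str   # sign vehicle / sign name
    obj: str             # the denoted object
    interpretant: str    # the meaning formed by interpretation

# A toy example: the material sign "smoke" taken as an index of fire.
smoke = TriadicSign(representamen="smoke", obj="fire",
                    interpretant="a fire is burning")
print(smoke)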
The object of a sign can be anything thinkable, for example, a law,
fact, possibility, or idea. Peirce considered two kinds of sign objects:
• The immediate object is represented in the sign name (representa-
men).
• The dynamic object is the object as it really is.
In addition, Peirce considered three kinds of the interpretants of
a sign:
◦ The immediate interpretant is the meaning that is already in the
sign, or more exactly, the meaning that is without delay ascribed
to the sign by the interpreter when she/he receives the sign.
◦ The dynamic interpretant is the meaning as formed in a process of
sign comprehension by the interpreter.
◦ The final interpretant is the meaning that would be reached if
formation process were to be pushed far enough, namely, it is a
kind of an ideal meaning, with which an actual, that is, dynamic,
interpretant may, at most, coincide.
Note that it is also possible to consider the final object, which is
an ideal collection of the object with all its possible changes.
Besides, as the sign name, representamen or sign vehicle is also an


object, it is possible to consider the immediate sign name, dynamic
sign name, and final sign name.
It is necessary to remark that each of the components of the
structure of a sign, that is, the name (sign vehicle), the referent and
the interpretant/sense, is also an object and has a name, the name of
this object. In addition, these objects also have an interpretant, i.e.,
sense and meaning, associated with their names. As a name is itself
an object, it has a name and interpretant, i.e., sense and meaning,
as any other object. In an intensional context, the names that occur
denote the meaning or sense of the objects for the reader or listener.
It means that each component of a sign can acquire the role of
another component. As a result, the structure of a sign has the prop-
erty called fractality, which tells that the structure of the whole is
repeated/reflected in the structure of its parts.
This property was envisioned by Peirce, who constructed combi-
nations of Balanced Sign Triads (triangles) connecting them together
in different ways by attaching a vertex of one to a vertex of
another. These combinations determine operations with signs. In
such a way, Peirce explicated structures of higher-level (second-
level in Figure 4.9) and higher-order (third-order in Figure 4.10)
signs, that is, signs of signs, or metasigns. An example of vertical
escalation of Balanced Sign Triads is given in Figure 4.9, while an
example of horizontal expansion of Balanced Sign Triads is given in
Figure 4.10.

Figure 4.9. The two-level Balanced Sign Triad of Peirce (vertices: the object, the name (symbol) of the object, the Meaning (Interpretant) of the object, the name (symbol) of the concept, and the Meaning (Interpretant) of the representation).

Figure 4.10. The third-level Balanced Sign Triad of Peirce (three triads joined side by side: the object with its name (symbol) and meaning (Interpretant), the name with its name (symbol) and meaning (Interpretant), and the string of letters with its meaning (Interpretant)).

Note that if the meaning (Interpretant) of the object is knowledge


of the first level, then the meaning (Interpretant) of the representa-
tion is knowledge of the second level, i.e., it is metaknowledge (cf.,
Section 2.3).
Another operation with signs introduced by Peirce makes it possi-
ble to connect Balanced Sign Triads side by side in order to represent
signs of signs of signs as we see in Figure 4.10.
Note that if the name of the object contains knowledge of the first
level, then the name of the name contains knowledge of the second
level, while the corresponding string of letters that serves as the name
of the name of the name contains knowledge of the third level (cf.,
Section 2.3).
Peirce explained that signs mediate the relationship between their
objects and their interpretants in a triadic mental or mind-like pro-
cess, which goes through three stages: firstness, secondness, and
thirdness. Firstness is a universal category of phenomena and is
associated with a vague state of mind in which there is awareness
of the environment, a prevailing emotion, and a sense of the possi-
bilities when the mind is in neutral, waiting to formulate thought.
Secondness is a category associated with moving from possibility to
greater certainty expressed by actions, reactions, causality, or real-
ity when the mind identifies what message is to be communicated.
Thirdness is the category of signs and is associated with generality,
representation, continuity, and purpose.
This process is reversed in the receiver. At first, the active mind
receives the sign name as a material sign. Then the mind acquires
from the memory the mental object associated with the sign and in
such a way produces the interpretant of the sign, which is the result
of signification. Even when a sign represents by a resemblance or
factual connection independent of interpretation, the sign is a sign


only insofar as it is, at least, potentially interpretable. Peirce also
refers to the “ground” of a sign, which is the pure abstraction of the
sign quality.
It is interesting that if we analyze what Peirce wrote about his
model of sign, we see that he described the structure not of a tri-
angular triad but the structure of a fundamental triad defined in
Appendix. Indeed, Peirce (1903) wrote:
“A Sign, or Representamen, is a First which stands in such a genuine
triadic relation to a Second, called its Object, as to be capable of deter-
mining a Third, called its Interpretant . . .”

This gives us the following diagram, which is a fundamental triad
because the Object connects the Representamen with the Interpretant:

   Representamen ---- Object ----> Interpretant

Being a thorough taxonomist, Peirce offered a taxonomic scheme


of signs demonstrating there could be 59,049 types of signs. However,
at the top level of this classification, according to Peirce, there were
three basic types of signs listed here in decreasing order of conven-
tionality:
— Symbols are highly conventional,
— Icons (iconic signs) always involve some degree of conventionality,
— Indices (indexical signs) directly bring our attention to their
objects.
These properties are implied by the following definitions.
Definition 2.4.1. An icon looks like what it signifies.
An example of icons is a photograph.
Definition 2.4.2. An index has a causal and/or sequential relation-
ship to its signified.
For instance, indices are directly perceivable events that can act
as a reference to events that are not directly perceivable. You may
not see a fire, but you do see the smoke and that indicates to you that
a fire is burning. Words this, that, these, and those are also examples
of indices.

Definition 2.4.3. A symbol represents something in a completely


arbitrary relationship.

Thus, in the majority of cases, symbols are subjective. Their rela-


tion to the signified object can be dictated either by social and cul-
tural conventions or by habit or by creative thinking. Words are a
prime example of symbols.
This shows that each of the terms — icon, index, and symbol —
has two interpretations: (1) it is a physical object with some meaning,
and (2) it is a conceptual (theoretical) structure. We are interested
here in the second meaning of these terms.
Besides, each of these concepts — icon, index and symbol — has
the same structure as the concept sign.
One more triadic sign model was built by Morris. In contrast to
de Saussure and Peirce, Morris (1938) defines sign in a dynamic way
relative to some interpreter. He writes that S is a sign (M.B. more
exactly, the sign name) of an object or objects D for an interpreter
I to the degree that I takes account of D in virtue of the pres-
ence of S. Thus, the object S becomes a sign only if somebody (an
interpreter) interprets S as a sign (M.B. the sign name). This gives
us the diagram in Figure 4.11.
In addition to these basic components of sign, Morris also includes
relations of the sign with other signs. This adds one more dimension,
transforming the triad into a tetrahedron (Figure 4.12).
Figure 4.11. The Dynamic Sign Triad of Morris: Interpreter (I) at the top, connected to Sign/Sign Name (S) and Object (D).

Figure 4.12. The Dynamic Sign Tetrahedron of Morris (vertices: Sign/Sign Name, Interpreter, Sign System, Object).

Figure 4.13. The Semiotic Triangle: Sign vehicle at the top, connected to Sense and Referent.

According to Morris, the sign name is what supports the triadic
relation of the sign with other signs, with designated objects and with
the subjects using the sign. These relations are represented by the
corresponding fields of semiotics. Syntactics deals with sign relations


to other signs in the Sign System, semantics is concerned with the
study of designated objects, and pragmatics is oriented toward the subjects
(interpreters) who use signs, whereas semiotics or semiology as a
scientific discipline is occupied with the general study of sign.
Note also that Morris considers fuzzy relations because he takes
into account the degree to which the interpreter I takes account of
the object(s) D in virtue of the presence of the sign name S.
This brings us to a variant of the Balanced Sign Triad of Peirce
called the Semiotic Triangle (Figure 4.13), which has the following com-
ponents (Nöth, 1990):

• Sign vehicle is the form of the sign.


• Sense is the sense made of the sign by the interpreter of the sign.
• Referent is what the sign “stands for.”

In this context, sense denotes the concept meaning for the interpreter
of the sign. Eco (1976) discerns three kinds of the sign vehicles, which
are material signs:

1. Signs for which there may be any number of tokens (replicas) of


the same type, for example, a printed word.
2. Signs whose tokens are different but similar, for example, a word
which someone speaks or which is handwritten.
3. Signs whose token is their type, or signs in which type and
token are identical, for example, a unique original oil-painting or
sculpture.
Another semiotic triangle (Figure 4.14) was suggested by Ogden
and Richards (1953). Note that not only this but also other versions
of the Balanced Sign Triad of Peirce have been called by the name
semiotic triangle.
Sowa introduced another model of sign, which is similar to the
Dynamic Sign Triad of Morris. In his model, a sign has three aspects:
(1) an entity that represents (2) another entity to (3) an agent (Sowa,
2000a). It is represented in Figure 4.15.
In (Vetrov, 1968), another model of a sign is considered (cf.,
Figure 4.16).
There is one more model of a sign, in which the name and the
object are connected by the triadic relation.

Figure 4.14. The Semiotic Triangle of Ogden and Richards: Symbol (Material Symbol) at the top, connected to Thought (reference) and Referent.

Figure 4.15. The sign model of Sowa: an agent at the top, connected to the two entities.

Figure 4.16. The Functional Semiotic Triangle (vertices: Material Sign (Word or Text), Object, Mental image; links: represents, actualizes in the sender, actualizes in the receiver, reflects).

Figure 4.17. The sign model of Foucault: the Name and the Object connected by Origin, Type and Reliability.

Namely, according to Foucault (1966), signs are determined by three
parameters of the


classical Port-Royal logic. These parameters characterize the con-
nection between the name of the sign and object denoted by the
sign:
— Origin of the connection shows whether the sign is natural or
conventional.
— Type or form of the connection explains whether the sign belongs
to the object it denotes or does not belong.
— Reliability of the connection illustrates whether the connection
between the sign and its object is certain, i.e., without any
doubt, or only plausible.
It gives us the model in Figure 4.17.
The variety of sign models shows that signs, as units of quantum
knowledge, have a sophisticated inherent structure.

4.3. Operations with and relations between quantum


knowledge units

“To attain knowledge, add things every day.
To attain wisdom, remove things every day.”
Lao Tzu

In this section, we describe some operations with and relations


between knowledge quanta. The goal is to provide constructive tools
for building new knowledge quanta from given knowledge quanta, as
well as to illustrate relations that exist in knowledge systems.
Taking these operations, we can build a knowledge algebra
(Kset, oK) where:
(1) Kset — is a set of knowledge items,
(2) oK — is a set of epistemic operations with knowledge items.
Note that (Kset, oK) may be a conventional universal algebra


or a multibase universal algebra.
Besides, taking operations with and relations between knowledge
items (units), we can construct a knowledge algebraic system
(Kset, oK, rK) where:
(1) Kset — is a set of distinguishable classes/types of knowledge
items,
(2) oK — is a set of available operations, which has to be specialized
according to the assumed Ic and recognized/assumed specific
knowledge structures (classes),
(3) rK — is a set of relations between knowledge items.
We may notice that, at the present time, existing knowledge concep-
tualizations and knowledge meta-ontologies do not yet make it possible
to develop a formal full-size algebra of knowledge, although alge-
braic models in the Semantic Link Network theory are constructed
in (Zhuge, 2012), while general operations with knowledge systems
are studied in (Burgin, 2011a; 2011b; 2014).
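To give a rough feeling for what such a partial knowledge algebra (Kset, oK) could look like operationally, here is a minimal Python sketch under our own simplifying assumption that knowledge items are finite sets of (object, attribute, value) triples; it is an illustration, not a formalization of the cited theories:

# A minimal sketch of a knowledge algebra (Kset, oK): a carrier set of
# knowledge items plus a family of named operations on them.
# Knowledge items are represented here, purely for illustration, as
# frozensets of (object, attribute, value) triples.

Kset = set()  # the carrier: a set of knowledge items

def union_items(k1, k2):
    # a simple binary operation: merge two knowledge items
    return k1 | k2

def restrict_to_object(k, obj):
    # a unary operation: keep only the triples about one object
    return frozenset(t for t in k if t[0] == obj)

oK = {"union": union_items, "restrict": restrict_to_object}

k1 = frozenset({("car 1", "color", "white")})
k2 = frozenset({("car 2", "color", "gray")})
Kset.update({k1, k2})

k3 = oK["union"](k1, k2)
print(oK["restrict"](k3, "car 1"))  # the part of k3 about car 1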
Before starting our exposition of relations and operations, we
recall some definitions in this area.
A unary relation on a set of objects is a property of these objects.
For instance, being white is a unary relation (property) in the set of
tables, and being interesting is a unary relation (property) in the set
of books.
A binary relation on a set of objects indicates pairs of these
objects. For instance, to be taller is a binary relation on the set of
people, for example, we can say that Alex is taller than Bob.
A ternary relation on a set of objects indicates triples of these
objects. For instance, to be a student of a professor P who works in
a university A is a ternary relation on the set of people, for example,
we can say that Alex is a student of the professor Pascal, who is
working at UCLA.
An n-ary relation on a set of objects indicates groups of n objects.
An integral relation on a set of objects indicates some groups of
these objects. For instance, company as a group of people who work
in this company is an integral relation on the set of people because
different companies usually have different numbers of employees.
A unary operation in a set of objects assigns one object to another


object. For instance, any function can be treated as a unary opera-
tion.
A binary operation in a set of objects assigns one object to pairs
of objects. For instance, addition and multiplication are binary oper-
ations in the set of numbers.
A ternary operation in a set of objects assigns one object to triples
of objects.
An n-ary operation in a set of objects assigns one object to groups
of n objects.
An integral operation in a set of objects assigns one object to sets
of objects. For instance, summation in mathematics is an integral
operation in the set of numbers (Burgin, 2008).
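These notions translate directly into executable form. The following small sketch (our own toy illustration with invented data) renders a unary relation as a predicate, a binary relation and operation as two-argument functions, and an integral operation as a function on a whole collection:

# Unary, binary and integral relations/operations on a set of objects,
# rendered as ordinary Python predicates and functions.

people = {"Alex": 180, "Bob": 170}  # name -> height in cm (toy data)

def is_tall(x):                 # unary relation (a property)
    return people[x] > 175

def taller(x, y):               # binary relation
    return people[x] > people[y]

def add(x, y):                  # binary operation on numbers
    return x + y

def summation(numbers):         # integral operation: one value per collection
    return sum(numbers)

print(is_tall("Alex"), taller("Alex", "Bob"), add(2, 3), summation([1, 2, 3]))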

4.3.1. Properties of and relations between nodes and


links in SLN and knowledge quanta in QTK
The following properties (unary relations) are considered in SLN.

Definition 4.3.1. If {(X, α, Y ) and (Y , α, Z)} imply (X, α, Z),


then the link type α, or the α-type link, is called transitive.
For instance, the sequential link, equal link, reference link, cause–
effect link, implication link, and subtype link are transitive.

Definition 4.3.2. If the relation α in the semantic link α = (X, α,


Y) is symmetric, then the link α is called symmetric.
For instance, the equal link, empty link, unknown link, semantic
equivalence link, and similarity link are symmetric.
There are also binary relations between semantic links. The reach-
ability relation is defined in the following way.

Definition 4.3.3. (a) If there is a semantic link (X, α, Y ), then the


node Y is called directly semantically reachable from the node X by
the link α.
(b) The node Y is called directly semantically reachable from the
node X if it is directly semantically reachable from X by some semantic link.
(c) The node Y is called semantically reachable from the node X
if there is a sequence of nodes X0 , X1 , X2 , X3 , . . ., Xn such that
X0 = X, Xn = Y , and each Xi is directly semantically reachable


from the node Xi−1 (i = 1, 2, 3, . . ., n).

Let us consider some examples.

Example 4.3.1. By Cantor’s theorem (cf., (Fraenkel and Bar-Hillel,


1958)), the set Q of all rational numbers is countable, i.e., it is equiv-
alent to the set N of all natural numbers. It means that the set Q
is semantically reachable from the set N by the equivalence link.

Example 4.3.2. As it is proved in the theory of algorithms,


automata and computation, the class of all non-deterministic finite
automata is semantically reachable from the class of all determinis-
tic finite automata by the equivalence (or more exactly, by linguistic
equivalence) link (Sipser, 1997; Burgin, 2010d).

Example 4.3.3. As it is proved in the theory of algorithms,


automata and computation, the class of all Turing machines with one
tape is semantically reachable from the class of all Turing machines
with n tapes by the equivalence (or more exactly, by functional equiv-
alence) link (Sipser, 1997; Burgin, 2010d).

Definition 4.3.3.c implies the following result.

Proposition 4.3.1. If a node Y is semantically reachable from a


node X and a node Z is semantically reachable from the node Y ,
then the node Z is semantically reachable from the node X.
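Semantic reachability in the sense of Definition 4.3.3 can be computed like ordinary reachability in a directed graph. The sketch below is our own illustration (the node names and link types are invented); it performs a breadth-first search over a list of semantic links:

from collections import deque

# Semantic links as triples (X, link_type, Y); reachability ignores the type,
# exactly as in part (c) of Definition 4.3.3.
links = [
    ("N", "equivalence", "Q"),
    ("Q", "subset", "R"),
    ("R", "subset", "C"),
]

def semantically_reachable(links, start, goal):
    # Breadth-first search over the directed graph of semantic links.
    successors = {}
    for x, _, y in links:
        successors.setdefault(x, []).append(y)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in successors.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(semantically_reachable(links, "N", "C"))  # True: N -> Q -> R -> C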

An important binary relation is implication of links.

Definition 4.3.4. A semantic link α = (X, α, Y ) implies a semantic


link β = (X, β, Y ), which is also expressed by saying that α implies β and is denoted by
α ⊆ β or (X, α, Y ) ⇒ (X, β, Y ) or α ⇒ β, if correctness (validity)
of the semantic link α = (X, α, Y ) implies correctness (validity) of
the semantic link β = (X, β, Y ).

Definitions 4.3.1 and 4.3.4 imply the following result.

Proposition 4.3.2. Implication of semantic links is a transitive


relation.
Let us consider some examples.


Example 4.3.4. The equal link implies the similarity link because if
two objects are equal (in some sense), then they are naturally similar.
Example 4.3.5. According to the conventional understanding, the
element link implies the part link because elements are usually treated
as parts. However, in mereology, the concept of an element is not used
and thus, such an implication is not true (Slupecki, 1958).
Example 4.3.6. If a set is treated as a kind of a class, then the
subset link implies the subclass link.
An important concept in the operation of a semantic network
is the inheritance of relations, which determines relations between
semantic links.
The following properties and relations are studied in the quantum
theory of knowledge. At first, we consider binary relations “more
abstract” and “more general”.
Let us consider two descriptive collective knowledge quanta
(units): KG described by Diagram (4.40) and HG described by
Diagram (4.41):
          l
    L --------> M
    ^           ^
    |p          |q                    (4.40)
    C --------> D
          f

          k
    W --------> M
    ^           ^
    |t          |q                    (4.41)
    U --------> D
          h

Definition 4.3.5. The knowledge unit HG is more abstract than
the knowledge unit KG if both Diagrams (4.40) and (4.41) can be
included into the commutative combined diagram (4.42), in which g
and d are projections. The combined diagram (4.42) consists of the
projections g: W → L and d: U → C together with all the arrows of
Diagrams (4.40) and (4.41), that is, l: L → M, f: C → D, p: C → L,
q: D → M, k: W → M, h: U → D and t: U → W.
For example, average knowledge about a group of people, e.g.,
their average salary, is more abstract than knowledge about some
individual from this group, e.g., her salary. Average salary of a person
for a year is more abstract than the salary of this person for some
month.
As the correct combination of commutative diagrams gives a com-
mutative diagram, we have the following result.
Proposition 4.3.3. If a knowledge unit HG is more abstract than a
knowledge unit KG and a knowledge unit FG is more abstract than
the knowledge unit HG, then the knowledge unit FG is more abstract
than the knowledge unit KG.
Having the combined diagram for FG and HG and the combined
diagram for HG and KG, we can build the combined diagram for FG
and KG because composition (product) of projections is a projection.
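Commutativity of a named-set square such as Diagram (4.40) can be checked mechanically when all the arrows are finite maps. The following sketch uses an invented toy instance (dictionaries standing for the arrows p, l, f and q) and tests whether l∘p = q∘f:

# A named-set square like Diagram (4.40): p: C -> L, l: L -> M,
# f: C -> D, q: D -> M.  The square commutes when l(p(c)) = q(f(c))
# for every c in C.  All maps are finite dictionaries here.

C = {"car 1"}
p = {"car 1": "color"}          # intrinsic property
f = {"car 1": "BMW1"}           # naming
l = {"color": "white"}          # property-level arrow
q = {"BMW1": "white"}           # attribute

def commutes(C, p, l, f, q):
    return all(l[p[c]] == q[f[c]] for c in C)

print(commutes(C, p, l, f, q))  # True for this toy instance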
Let us consider two descriptive collective knowledge quanta
(units): KG described by Diagram (4.40) and PG described by
Diagram (4.43):
          n
    Z --------> N
    ^           ^
    |r          |l                    (4.43)
    V --------> E
          m

Definition 4.3.6. The knowledge unit PG is more general than the
knowledge unit KG if both Diagrams (4.40) and (4.43) are included
into the commutative joined Diagram (4.44), in which u and c are
injections.

(4.44) [the joined commutative diagram combining Diagrams (4.40) and (4.43) by means of the injections u and c]
For example, knowledge about properties of birds is more general


than the corresponding knowledge about eagles.
As correct combination of commutative diagrams gives a commu-
tative diagram, we have the following result.

Proposition 4.3.4. If a knowledge unit HG is more general than a


knowledge unit KG and a knowledge unit FG is more general than
the knowledge unit HG, then the knowledge unit FG is more general
than the knowledge unit KG.

Having the joined diagram for FG and HG and the joined diagram
for HG and KG, we can build the joined diagram for FG and KG
because composition (product) of injections is an injection.
As we can see from Chapter 2, another important relation between
knowledge quanta (knowledge units) is consistency. For instance, the
knowledge quanta (A, is, a man) and (A, is, a student) are consistent,
while the knowledge quanta (A, is, a man) and (A, is, a building) are
inconsistent.
Computability and measurability are useful properties of knowl-
edge quanta (knowledge units).

Definition 4.3.7. The knowledge unit HG is computable if the


attribute q in Diagram (4.40) is computable.
In general, computability is a relative property, which depends on
the class of algorithms or automata that are used for computation
(Burgin, 2005). Consequently, computability of knowledge quanta


(units) is also a relative property.

Definition 4.3.8. The knowledge unit HG is measurable if the


intrinsic property p in Diagram (4.40) is measurable.
Similar to computability, measurability is a relative property,
which depends on the class of measuring devices. For instance, now
we can measure much more than scientists were able in the 19th cen-
tury. Consequently, measurability of knowledge quanta (units) is also
a relative property.
To define the inclusion relation, we consider two descriptive col-
lective knowledge quanta (units): KG given by Diagram (4.45) and
HG given by Diagram (4.46):

          l
    P --------> L
    ^           ^
    |p          |q                    (4.45)
    V --------> D
          f

          k
    W --------> M
    ^           ^
    |t          |r                    (4.46)
    U --------> C
          h

Definition 4.3.9. The extended knowledge unit KG is included in


the extended knowledge unit HG if the domain V is a subdomain of
the domain U, the set D is a subset of the set C, while the properties
(V, p, P) and (D, q, L) are components of the properties (U, t, W)
and (C, r, M), correspondingly.
For instance, if we have U = {car 1, car 2}, V = {car 1}, D =
{BMW1}, C = {BMW1, BMW2}, W = {color}, L = {white},
M = {white, gray}, r(BMW1) = q(BMW1) = white and r(BMW2) =
gray (cf., Diagrams (4.47) and (4.48)), then the knowledge unit KG
is included in the knowledge unit HG.

                  l
    {color} -----------> {white}
       ^                    ^
       |p                   |q                    (4.47)
    {car 1} -----------> {BMW1}
                  f

                      k
      {color} ---------------> {white, gray}
         ^                          ^
         |t                         |r            (4.48)
 {car 1, car 2} ---------> {BMW1, BMW2}
                      h

As inclusion of sets is a transitive relation, we have the following


result.

Proposition 4.3.5. If a knowledge unit HG is included in a knowl-


edge unit KG and a knowledge unit FG is included in the knowledge
unit HG, then the knowledge unit FG is included in the knowledge
unit KG, i.e., inclusion of extended descriptive (representational or
operational) knowledge units is a transitive relation.
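The inclusion relation of Definition 4.3.9 can likewise be tested mechanically for finite knowledge units. The sketch below re-encodes the car example above in a simple dictionary form of our own choosing and checks the subdomain, subset and restriction conditions:

# A finite extended knowledge unit: an object domain, a name domain and
# an attribute map from names to values (toy encoding for illustration).

KG = {"objects": {"car 1"}, "names": {"BMW1"},
      "attr": {"BMW1": "white"}}
HG = {"objects": {"car 1", "car 2"}, "names": {"BMW1", "BMW2"},
      "attr": {"BMW1": "white", "BMW2": "gray"}}

def included(kg, hg):
    # KG is included in HG when its domains are subdomains and its
    # attribute map is the restriction of HG's attribute map.
    return (kg["objects"] <= hg["objects"]
            and kg["names"] <= hg["names"]
            and all(hg["attr"].get(n) == v for n, v in kg["attr"].items()))

print(included(KG, HG))  # True: KG is included in HG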

It is also possible to define other relations between descriptive


individual and collective knowledge quanta (units), such as infer-
entiability, reducibility, and equivalence, as well as relations between
representational and operational, individual and collective knowledge
quanta (units), such as simulation relation, representability, reducibil-
ity, and equivalence.
Now we construct some relations between symbolic knowledge units.
To define the inclusion relation, we consider two symbolic
descriptive collective knowledge quanta (units): SKG given by
Diagram (4.49) and SHG given by Diagram (4.50):


    C ----l----> L                    (4.49)

    D ----f----> M                    (4.50)

Definition 4.3.10. The symbolic knowledge unit SKG is included


in the symbolic knowledge unit SHG if the domain C is a subdomain
of the domain D and the property (C, l, L) is a component of the
property (D, f , M).
Computability and measurability are also useful properties of
symbolic descriptive knowledge quanta (symbolic knowledge units).
However, in this case, measurability is interpreted not as in physics
or in any natural science where characteristics of physical systems
are measured by specific devices but as it is comprehended in math-
ematics where there is a special discipline — the theory of measure
and integration (cf., for example, (Konig, 2009)).

Definition 4.3.11. The symbolic descriptive knowledge unit (C, l,


L) is computable if it is computable as an abstract property.
As in the case of extended knowledge quanta (units), computabil-
ity of symbolic knowledge quanta (units) is a relative property.

Definition 4.3.12. The symbolic descriptive knowledge unit (C, l,


L) is measurable if it is a measurable function in the sense of the
theory of measure and integration.
Similar to computability, measurability of symbolic knowledge
units is a relative property, which depends on the utilized measure.
A useful binary relation is reducibility of symbolic knowledge
units, which is based on the concept of reduction studied in the the-
ory of abstract properties (cf., Chapter 5).

Definition 4.3.13. The symbolic knowledge unit SKG is reducible


to the symbolic knowledge unit SHG if the abstract property (C, l,
L) can be reduced to the abstract property (D, f , M).
When we consider relative reducibility of abstract properties, i.e.,
reducibility that depends on the admissible functions used for reduction,
then reducibility of symbolic knowledge units also becomes relative.
Reducibility of symbolic knowledge units gives rise to an equivalence rela-


tion between symbolic knowledge units.

Definition 4.3.14. The symbolic knowledge unit SKG is equivalent


to the symbolic knowledge unit SHG if SKG is reducible to SHG and
SHG is reducible to SKG.
Another useful binary relation between symbolic knowledge units
is “to be an extension of”. There are two kinds of this relation — “to
be a right extension of” “to be a left extension of”.

Definition 4.3.15. The individual symbolic knowledge unit (C, l,


L) is a right extension of the individual symbolic knowledge unit (C,
f , M) if L = M ∪ K for some set K.
For instance, if C is the word car and functions f and l describe
properties of this car, namely, M = {white} and L = {white, four
doors, automatic, 2010} where 2010 is the year of its production, then
the individual symbolic knowledge unit (C, l, L) is a right extension
of the individual symbolic knowledge unit (C, f , M).

Definition 4.3.16. The collective symbolic knowledge unit (C, l, L)


is a left extension of the collective symbolic knowledge unit (D, f , L)
if C = D ∪ E for some set E.
For instance, if C = {car1, car2, car3}, D = {car1} and the functions
f and l describe the color of these cars, namely, L = {white}, then the
collective symbolic knowledge unit (C, l, L) is a left extension of the
collective symbolic knowledge unit (D, f , L).

Proposition 4.3.7. Relations "to be a right extension of" and "to be
a left extension of" are transitive.

Projections are dual relations to extensions.

Definition 4.3.17. The individual symbolic knowledge unit (C, l,
L) is a right projection of the individual symbolic knowledge unit
(C, f , M) if (C, f , M) is a right extension of the individual symbolic
knowledge unit (C, l, L).
By this definition, right extension is the inverse relation to right
projection. This is an example of a relation between relations, which
are also included in the structure of a system (cf., Section 5.1).

Definition 4.3.18. The individual symbolic knowledge unit (C, l,
L) is a left projection of the individual symbolic knowledge unit (C, f ,
M) if (C, f , M) is a left extension of the individual symbolic knowledge
unit (C, l, L).
By this definition, left extension is the inverse relation to left
projection.

Proposition 4.3.8. Relations "to be a right projection of" and "to be a
left projection of" are transitive.

Proof is left as an exercise.
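For finite symbolic knowledge units written as triples (C, l, L), the extension and projection relations of Definitions 4.3.15–4.3.18 reduce to set comparisons. The following sketch uses a toy encoding of our own (sets of names, pairs and values) purely for illustration:

# A symbolic knowledge unit as a triple (C, l, L): a set of names C,
# a relation l given as a set of (name, value) pairs, and a value set L.

def right_extension(u, v):
    # (C, l, L) is a right extension of (C, f, M) when C is unchanged
    # and L contains M (L = M ∪ K for some K).
    (C1, _, L1), (C2, _, L2) = u, v
    return C1 == C2 and L2 <= L1

def left_extension(u, v):
    # (C, l, L) is a left extension of (D, f, L) when L is unchanged
    # and C contains D (C = D ∪ E for some E).
    (C1, _, L1), (C2, _, L2) = u, v
    return L1 == L2 and C2 <= C1

def right_projection(u, v):
    # right projection is the inverse relation to right extension
    return right_extension(v, u)

car = ({"car"}, {("car", "white")}, {"white"})
car_more = ({"car"}, {("car", "white"), ("car", "2010")}, {"white", "2010"})
print(right_extension(car_more, car), right_projection(car, car_more))  # True True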


It is also possible to define other relations between symbolic
descriptive individual and collective knowledge quanta (units), such
as inferentiability, “to be a renaming” and “to be a reinterpretation”,
as well as relations between symbolic representational and opera-
tional, individual and collective knowledge quanta (units), such as
simulation relation, representability, reducibility, and equivalence.

4.3.2. Operations with extended knowledge quanta


Extended knowledge quanta with operations form epistemic quantum
algebras. Here we consider only some of these operations and study
their properties.

4.3.2.1. Unary operations with extended


knowledge quanta

Operation 4.3.1. The operation renaming of an individual knowl-


edge quantum K is represented by the following transformation:
Diagram (4.51) is changed to Diagram (4.52)
          e
   QA --------> PA
    ^           ^
    |q          |p                    (4.51)
    A --------> NA
          n

          e
   QA --------> PA
    ^           ^
    |q          |p°                   (4.52)
    A --------> MA
          n°

Informally, renaming means changing the name of the object A.


For instance, when number 1 is considered as an element of the set
{1, 2, 3, . . . }, it is called a natural number. However, when the same
number is considered as an element of the set {0, 1, 2, 3, . . . }, it is
called a whole number. In such a way, moving from natural numbers
to whole numbers, we rename number 1.
As extended knowledge quanta are built of named sets, we can
apply results from the theory of named sets (Burgin, 2011) obtaining
the following result.

Proposition 4.3.6. A renaming of a renaming of an extended


knowledge quantum K is a renaming of K.

Renaming can be pure, when only the name is changed, and com-
bined, when, for example, the ascribed property (attribute) is also
changed.
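A minimal computational sketch of these unary operations, under an assumed toy encoding of an individual extended knowledge quantum as a triple of object, name and ascribed attribute, might look as follows; renaming replaces the name while re-evaluation replaces the object:

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Quantum:
    # a toy individual extended knowledge quantum
    obj: str        # the object A
    name: str       # its name NA
    attribute: str  # the ascribed property PA

def rename(k, new_name):
    # pure renaming: only the name of the object changes
    return replace(k, name=new_name)

def reevaluate(k, new_obj):
    # re-evaluation: the same name is given to another object
    return replace(k, obj=new_obj)

one = Quantum(obj="1", name="natural number", attribute="positive integer")
print(rename(one, "whole number"))
print(reevaluate(one, "3"))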

Operation 4.3.2. The operation re-evaluation of an individual


knowledge quantum is represented by the following transformation:
Diagram (4.53) is changed to Diagram (4.54).

          e
   QA --------> PA
    ^           ^
    |q          |p                    (4.53)
    A --------> NA
          n

               e
   QA = QB --------> PA
      ^              ^
      |q°            |p°              (4.54)
      B --------> NB = NA
             n°

Informally, reevaluation means giving the same name to another


object B. For instance, taking the number 1 with the name a natural
number, we can give the same name to the number 3.
As extended knowledge quanta are built of named sets, we can
apply results from the theory of named sets (Burgin, 2011) obtaining
the following result.

Proposition 4.3.7. A re-evaluation of a re-evaluation of an


extended knowledge quantum K is a reevaluation of K.
Let us consider combined operations.

Operation 4.3.3. Re-evaluative renaming of an individual knowl-


edge quantum is represented by the following transformation:
Diagram (4.55) is changed to Diagram (4.56)

          e
   QA --------> PA
    ^           ^
    |q          |p                    (4.55)
    A --------> NA
          n

          e
   QB --------> PB
    ^           ^
    |q°         |p°                   (4.56)
    B --------> NB
          n°
Re-evaluative renaming is a kind of combined renaming.


Note that re-evaluation, renaming, and re-evaluative renaming are
multivalued operations.
There are operations with extended knowledge quanta induced by
binary relations between extended knowledge quanta. Let us consider
two of these operations — abstraction and generalization, which are
very popular in science and mathematics.

Operation 4.3.4. Abstraction transforms a knowledge unit into a
more abstract knowledge unit.

Proposition 4.3.3 implies the following result.

Proposition 4.3.8. Abstraction of an abstraction of an extended


knowledge quantum K is an abstraction of K.

Operation 4.3.5. Generalization transforms a knowledge unit into


a more general knowledge unit.
Proposition 4.3.4 implies the following result.

Proposition 4.3.9. Generalization of generalization of an extended


knowledge quantum K is a generalization of K.

4.3.2.2. Binary operations with extended knowledge


quanta
There are different kinds of unions of extended quantum knowledge
units. Here we define some of them for descriptive collective knowl-
edge quanta.

Operation 4.3.6. In the disjunctive union of collective extended
quantum knowledge units, which is denoted by the symbol ⊔,
objects and their names are combined together with their proper-
ties as separate objects. Namely, let us take two collective extended
quantum knowledge units K and H and build their disjunctive
union K ⊔ H.

K:
          g
    W --------> L
    ^           ^
    |q          |p                    (4.57)
    U --------> C
          f

H:
          h
    V --------> M
    ^           ^
    |r          |n                    (4.58)
    T --------> D
          t

K ⊔ H:
              g∪h
   {W, V} ----------> {L, M}
      ^                  ^
      |q∪r               |p∪n         (4.59)
   {U, T} ----------> {C, D}
              f∪t

As the union of sets is a commutative operation (Fraenkel and


Bar-Hillel, 1958), we have the following result.

Proposition 4.3.10. The disjunctive union of collective extended


quantum knowledge units is a commutative operation.

As the union of sets is an associative operation (Fraenkel and


Bar-Hillel, 1958), we have the following result.

Proposition 4.3.11. The disjunctive union of collective extended


quantum knowledge units is an associative operation.

As the union of sets is an idempotent operation (Fraenkel and


Bar-Hillel, 1958), we have the following result.
Proposition 4.3.12. The disjunctive union of collective extended


quantum knowledge units is an idempotent operation, i.e., K ⊔ K = K
for any collective extended quantum K.
The rich structure of collective extended knowledge quanta allows
defining other types of unions for them.
Operation 4.3.5. In the union with name amalgamation of collec-
tive extended quantum knowledge units, which is denoted by the
symbol ∪n , objects and their names are also combined together with
their properties. However, while objects and their properties are com-
bined together as separate objects forming multisets in a general case,
names are combined by the operation of union of sets. Namely, let
us take two collective extended quantum knowledge units K and H
and build their union with name amalgamation K ∪n H.
K:
          g
    W --------> L
    ^           ^
    |q          |p                    (4.60)
    U --------> C
          f

H:
          h
    V --------> M
    ^           ^
    |r          |n                    (4.61)
    T --------> D
          t

K ∪n H:
              g∪h
   {W, V} ----------> {L, M}
      ^                  ^
      |q∪r               |u           (4.62)
   {U, T} ----------> C ∪ D
              w
For instance, the domain U has one object with the name ball and
the domain T has one object with the name ball. Then, due to the
name amalgamation, the domain {U, T} has two objects with the
same name ball. That is, C = D = {ball} and C ∪ D = {ball}.
As the union of sets is a commutative operation (Fraenkel and
Bar-Hillel, 1958), we have the following result.

Proposition 4.3.13. The union with name amalgamation of collec-


tive extended quantum knowledge items is a commutative operation.

As the union of sets is an associative operation (Fraenkel and


Bar-Hillel, 1958), we have the following result.

Proposition 4.3.14. The union with name amalgamation of collec-


tive extended quantum knowledge items is an associative operation.

As the union of sets is an idempotent operation (Fraenkel and


Bar-Hillel, 1958), we have the following result.

Proposition 4.3.15. The union with name amalgamation of collec-


tive extended quantum knowledge units is an idempotent operation,
i.e., K ∪n K = K for any collective extended quantum K.

One more operation is feature union.

Operation 4.3.6. In the feature union, or union with feature merger,


denoted by the symbol ⊕, objects and their names are combined
together by the set-theoretical union and stay the same in this oper-
ation but their intrinsic properties (features) are merged, namely,
when the same object has different properties, these properties are
combined (merged) into one property.

For instance, let us consider two knowledge units K and H. In


K, the (abstract) property is color, e.g., U has an object b, which
is called a ball and is white. At the same time, in H, the (abstract)
property is weight, e.g., H has the same object b, which is called a
ball and has weight 10 kg. If K ⊕ H is the feature union of K and H,
then the (abstract) property is color and weight, e.g., the object b,
which is called a ball has the property {white, 10 kg}.
Merging of relational database schemas involves feature union of


quantum knowledge items.
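In the spirit of the ball example above, feature union can be sketched as merging per-object feature dictionaries; the encoding below is our own toy illustration rather than a definition from the theory:

# Feature union: objects and names stay the same, while the intrinsic
# properties (features) of an object occurring in both units are merged.

K = {"ball": {"color": "white"}}
H = {"ball": {"weight": "10 kg"}}

def feature_union(k, h):
    result = {}
    for obj in set(k) | set(h):
        merged = {}
        merged.update(k.get(obj, {}))
        merged.update(h.get(obj, {}))
        result[obj] = merged
    return result

print(feature_union(K, H))  # {'ball': {'color': 'white', 'weight': '10 kg'}}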
As the union of sets is a commutative operation (Fraenkel and
Bar-Hillel, 1958), we have the following result.
Proposition 4.3.16. The feature union of collective extended quan-
tum knowledge items is a commutative operation.
As the union of sets is an associative operation (Fraenkel and
Bar-Hillel, 1958), we have the following result.
Proposition 4.3.17. The feature union of collective extended quan-
tum knowledge items is an associative operation.
As the union of sets is an idempotent operation (Fraenkel and
Bar-Hillel, 1958), we have the following result.
Proposition 4.3.18. The feature union of collective extended quan-
tum knowledge units is an idempotent operation, i.e., K ⊕ K = K
for any collective extended quantum K.
One more operation is attributive union.
Operation 4.3.7. In the attributive union, or union with attribute
merger, denoted by the symbol ⊗, objects and their names are com-
bined together by the set-theoretical union and stay the same in
this operation but their assigned properties (attributes) are merged,
namely, when the same object has different assigned properties, these
properties (attributes) are combined (merged) into one property.
For instance, let us consider two knowledge units K and H, ele-
ments of which are cities. In K, the attribute (assigned property) is
population, e.g., a city a with 10 million population, while in H, the
attribute (assigned property) is the place on the Earth, e.g., the city
a situated in Europe. If K ⊗ H is the attributive union of K and H,
then the property is population and the place on the Earth, e.g., the
attribute of the city a is {10 million, Europe}.
Merging of relational database schemas involves attributive union of quan-
tum knowledge items.
As the union of sets is a commutative operation (Fraenkel and
Bar-Hillel, 1958), we have the following result.
Proposition 4.3.19. The attributive union of collective extended


quantum knowledge items is a commutative operation.

As the union of sets is an associative operation (Fraenkel and


Bar-Hillel, 1958), we have the following result.

Proposition 4.3.20. The attributive union of collective extended


quantum knowledge items is an associative operation.

As the union of sets is an idempotent operation (Fraenkel and


Bar-Hillel, 1958), we have the following result.

Proposition 4.3.21. The attributive union of collective extended


quantum knowledge units is an idempotent operation, i.e., K ⊗ K =
K for any collective extended quantum K.

One more operation is union with object amalgamation.

Operation 4.3.8. The union with object amalgamation of collective


extended quantum knowledge units, which is denoted by the symbol
∪O , is applied when an object has several names. In this case, these
names and the object properties are merged into one set and these
names and the object properties are assigned to their common object.
For instance, a geometrical object I has two names — a segment
and an interval and the property “the length of I is 25 inches.” It
is possible to consider two quantum items of knowledge K1 and K2
about the object I, which are represented by Diagrams (4.63) and
(4.64), respectively. Then when the union with object amalgamation
of these two quantum items of knowledge is performed, Diagrams
(4.63) and (4.64) that represent this quantum knowledge are merged
into Diagram (4.65). The result of the operation represented by Dia-
gram (4.65) is denoted by K = K1 ∪O K2.
            g
   length --------> 25 in
      ^               ^
      |q              |p              (4.63)
      I ---------> segment

            g
   length --------> 25 in
      ^               ^
      |q              |r              (4.64)
      I ---------> interval

            g
   length --------> 25 in
      ^               ^
      |q              |{p, r}         (4.65)
      I ------> {segment, interval}

Here is one more example. A person A has two names — Barbara


and Barbie, as well as two heights — 5 ft at the age 12 when she was
called Barbie and 6 ft at the age 30 when she was called Barbara.
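Union with object amalgamation admits a similar toy sketch (our own encoding of the segment/interval example): for one shared object, the names and the corresponding attribute values are gathered into sets:

# Union with object amalgamation: one object, several names; the names
# and the corresponding attribute values are gathered into sets.

K1 = {"object": "I", "names": {"segment"},  "attrs": {"length": {"25 in"}}}
K2 = {"object": "I", "names": {"interval"}, "attrs": {"length": {"25 in"}}}

def object_amalgamation(k1, k2):
    assert k1["object"] == k2["object"], "both units must describe one object"
    attrs = {}
    for key in set(k1["attrs"]) | set(k2["attrs"]):
        attrs[key] = k1["attrs"].get(key, set()) | k2["attrs"].get(key, set())
    return {"object": k1["object"],
            "names": k1["names"] | k2["names"],
            "attrs": attrs}

print(object_amalgamation(K1, K2))
# {'object': 'I', 'names': {'segment', 'interval'}, 'attrs': {'length': {'25 in'}}}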
As the union of sets is a commutative operation (Fraenkel and
Bar-Hillel, 1958), we have the following result.
Proposition 4.3.22. The disjunctive union with object amalgama-
tion of quantum knowledge items is a commutative operation.
As the union of sets is an associative operation (Fraenkel and
Bar-Hillel, 1958), we have the following result.
Proposition 4.3.23. The disjunctive union with object amalgama-
tion of quantum knowledge items is an associative operation.
As the union of sets is an idempotent operation (Fraenkel and
Bar-Hillel, 1958), we have the following result.
Proposition 4.3.24. The union with object amalgamation of collective
extended quantum knowledge units is an idempotent operation, i.e.,
K ∪O K = K for any collective extended quantum K.
One more operation is Cartesian product.
Operation 4.3.9. In the Cartesian product of collective extended
quantum knowledge units, which is denoted by the symbol ×, the
Cartesian products of object domains and of name domains are
formed together with their properties. Namely, let us take two
quantum knowledge units K and H and build their Cartesian prod-


uct K × H.
K:
          g
    W --------> L
    ^           ^
    |q          |p                    (4.66)
    U --------> C
          f

H:
          h
    V --------> M
    ^           ^
    |r          |n                    (4.67)
    T --------> D
          t

K × H:
              g×h
    W×V ----------> L×M
      ^                ^
      |q×r             |p×n           (4.68)
    U×T ----------> C×D
              f×t
As the Cartesian product of sets is a commutative operation
(Fraenkel and Bar-Hillel, 1958), we have the following result.
Proposition 4.3.25. The Cartesian product of collective extended
quantum knowledge units is a commutative operation.
As the Cartesian product of sets is an associative operation
(Fraenkel and Bar-Hillel, 1958), we have the following result.
Proposition 4.3.26. The Cartesian product of collective extended
quantum knowledge units is an associative operation.
There are also operations of arity higher than two, as well
as integral operations with extended knowledge units (knowledge
quanta), but they are studied elsewhere.
4.3.3. Operations with symbolic knowledge quanta and


complete semantic links
Operations with symbolic knowledge quanta and complete semantic
links are induced by the union of named sets studied in (Burgin,
2011) because these knowledge items are special cases of named sets.
Here we consider only some of these operations in the domains of
descriptive knowledge quanta and of complete semantic links.

4.3.3.1. Unary operations with symbolic


knowledge quanta
Having a symbolic knowledge quantum (N , c, P ), it is possible to
invert its relation c obtaining the symbolic knowledge quantum (P ,
c−1 , N ). Logically, such an operation is called inversion.

Operation 4.3.10. If K = (N, c, P ) is a symbolic knowledge quan-


tum, then the symbolic knowledge quantum invK = (P, c−1 , N )
where c−1 is the inverse of the relation c is the inversion of K, which
is denoted by inv K.
Inversion is a very natural operation for representational knowl-
edge. For instance, let us consider a (formal) theory T and its
model M . They are included in the symbolic knowledge quantum
U = (T , int, M ), in which M represents T . At the same time, it is
possible to consider T as a representation of M obtaining the inverse
symbolic knowledge quantum invU = (M , model, T ).
Inversion also works for descriptive knowledge because in the the-
ory of abstract properties, values of properties may be arbitrary
objects (Burgin, 1985). Then, as the result of inversion, what was the object becomes a property and what was the property becomes an object.
The same is true for operational knowledge where inversion is an
innate operation. For instance, in the theory of algorithms, automata,
and computation, a Turing machine is represented by a collection
T = (A, Q, q0 , F , R) where A is its alphabet, Q is its set of states,
F is the set of the final states, q0 is the start state and R is its sys-
tem of rules. This gives the operational symbolic knowledge quantum
W = (Turing machine, r, (A, Q, q0 , F , R)). Inverting this knowledge
quantum, we obtain the inverse knowledge quantum invW = ((A, Q, q0, F, R), t, Turing machine), which gives knowledge that collection T = (A, Q, q0, F, R) is a Turing machine.

Proposition 4.3.27. The inversion of symbolic quantum knowledge units is an involution, i.e., inv(invU) = U for any symbolic knowledge quantum U.

Note that it is possible to apply inversion both to individual and


collective symbolic knowledge quanta.

Operation 4.3.11. The renaming operation rn changes the right


component in a symbolic knowledge quantum, i.e., if K = (N, c, P )
is a symbolic knowledge quantum, then rnK = (N, b, T ) for some
name T .
For instance, taking the operational symbolic knowledge quan-
tum V = ((A, Q, q0 , F, R), t, Turing machine) where (A, Q, q0 ,
F , R) is the mathematical description of some Turing machine
(Hopcroft et al., 2007) and applying the renaming operation, which
changes the name “Turing machine” to the name “abstract automa-
ton”, we can obtain the operational symbolic knowledge quantum
rnV = ((A, Q, q0 , F, R), l, abstract automaton).
As symbolic knowledge units are a special kind of named sets,
we can apply results from the theory of named sets (Burgin, 2011)
obtaining the following result.

Proposition 4.3.28. A renaming of a renaming of a knowledge


quantum K is a renaming of K, i.e., the sequential composition of
renamings is a renaming.

Attributes in symbolic knowledge quanta also have names. For


instance, symbolic knowledge quantum K = (N, c, P ) has the
attribute with the name “c”. Therefore, we have one more opera-
tion of renaming.

Operation 4.3.11. The attribute renaming operation arn changes the attribute name (the middle component) in a symbolic knowledge quantum, i.e., if K = (N, c, P) is a symbolic knowledge quantum, then arnK = (N, b, P) for some attribute name b.
For instance, it is possible to call some plant by the name “a tree”


or by the name “an oak”.
As symbolic knowledge units are a special kind of named sets,
we can apply results from the theory of named sets (Burgin, 2011)
obtaining the following result.
Proposition 4.3.28. An attribute renaming of an attribute renam-
ing of a knowledge quantum K is an attribute renaming of K, i.e.,
the sequential composition of attribute renamings is an attribute
renaming.
Attribute renaming is a widespread unary operation in relational
databases where it is called rename and denoted by the letter ρ
(Elmasri and Navathe, 2000). It is used to change names of attributes
in a relation.
Operation dual to renaming is called reinterpreting.
Operation 4.3.12. The reinterpreting operation rt changes the left
component in a symbolic knowledge quantum, i.e., if K = (N, c, P )
is a symbolic knowledge quantum, then rtK = (M, b, P ) for some
object M .
For instance, taking the operational symbolic knowledge quan-
tum V = ((A, Q, q0 , F, R), t, Turing machine) where (A, Q, q0 , F, R)
is the mathematical description of a universal Turing machine
(Hopcroft et al., 2007) and applying the reinterpreting operation,
which changes the description (A, Q, q0 , F, R) to the description
(X, P, q0 , H, D) of a Turing machine that computes the identity func-
tion, we can obtain the operational symbolic knowledge quantum
rtV = ((X, P, q0 , H, D), h, Turing machine).
As symbolic knowledge units are a special kind of named sets,
we can apply results from the theory of named sets (Burgin, 2011)
obtaining the following result.
Proposition 4.3.29. Reinterpreting of reinterpreting of a knowledge
quantum K is reinterpreting of K, i.e., composition of reinterpretings
is a reinterpreting.
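The three unary operations just described can be illustrated by a short sketch, in which a symbolic knowledge quantum is again a triple (N, c, P); the function names and the example values are illustrative assumptions, not the book's formalism:

```python
def rename(quantum, new_name, new_relation):
    """rn: replace the name (right component) and the connecting relation."""
    N, _c, _P = quantum
    return (N, new_relation, new_name)

def rename_attribute(quantum, new_attribute_name):
    """arn: replace only the attribute name (the middle component)."""
    N, _c, P = quantum
    return (N, new_attribute_name, P)

def reinterpret(quantum, new_object, new_relation):
    """rt: replace the object (left component) and the connecting relation."""
    _N, _c, P = quantum
    return (new_object, new_relation, P)

V = ("(A, Q, q0, F, R)", "t", "Turing machine")
print(rename(V, "abstract automaton", "l"))
print(rename_attribute(V, "describes"))
print(reinterpret(V, "(X, P, q0, H, D)", "h"))
```

Composing two operations of any one of these kinds clearly gives an operation of the same kind, which is the content of the propositions above.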
Some unary operations with symbolic knowledge units are
induced by binary relations between symbolic knowledge units. For
instance, the relations “to be a right extension of” and “to be a


left extension of” give us two operations — right extension and left
extension.
Operation 4.3.13. A right extension of an individual symbolic
knowledge unit (A, l, L) is an individual symbolic knowledge unit
(A, f , M) for which the equality M = L ∪ K where K consists of
properties of A is true.
If an individual symbolic knowledge unit (C, l, L) describes some properties of an object with the name C, then its right extension gives more properties of the same object. For instance, if in the individual symbolic knowledge unit (C, l, L), C is the word car and the function l describes the color of this car, namely, L = {white}, then extending this knowledge unit from the right, we can get the individual symbolic knowledge unit (C, f, M), in which M = {white, four doors, automatic, 2010} where 2010 is the year of its production.
Proposition 4.3.29. Right extension of a right extension of an indi-
vidual symbolic knowledge quantum K is a right extension of K, i.e.,
composition of right extensions is a right extension.
It is also possible to apply right extension to collective symbolic
knowledge units.
The operation left extension acts in the realm of collective sym-
bolic knowledge units.
Operation 4.3.14. A left extension of a collective symbolic knowl-
edge unit (C, l, L) is a collective symbolic knowledge unit (D, f , L)
for which the equality D = C ∪ E is true.
Left extension combines different knowledge domains of objects with the same property. For instance, if C = {car1, car2, car3}, D = {car1} and the functions f and l describe the color of these cars, namely, L = {white}, then the collective symbolic knowledge unit (C, l, L) is a left extension of the symbolic knowledge unit (D, f, L).
Proposition 4.3.29. Left extension of a left extension of a collec-
tive symbolic knowledge quantum K is a left extension of K, i.e.,
composition of left extensions is a left extension.
Note that both extensions are multivalued operations. However, it


is possible to make right extensions a regular operation by assigning
parameters to extensions, which can determine what attributes have
to be added. In such a way, we obtain a parametric family of right
extensions.
Adding parameters indicating objects to be added, we can also
make left extensions a regular operation.
Operations dual to extensions are called projections.
Operation 4.3.15. A right projection of an individual symbolic knowledge unit (A, l, M) is an individual symbolic knowledge unit (A, f, L) for which the equality M = L ∪ H, where H consists of properties of A, is true.
By this definition, right extension is the inverse operation to right
projection.
Proposition 4.3.29. Right projection of a right projection of an
individual symbolic knowledge quantum K is a right projection of K,
i.e., composition of right projections is a right projection.
In a similar way, we define left projection.
Operation 4.3.14. A left projection of a collective symbolic knowl-
edge unit (C, l, L) is a collective symbolic knowledge unit (D, f , L)
for which the equality C = D ∪ E is true.
By this definition, left extension is the inverse operation to left
projection.
Proposition 4.3.29. Left projection of a left projection of a collec-
tive symbolic knowledge quantum K is a left projection of K, i.e.,
composition of left projections is a left projection.
Note that both projections are multivalued operations. However,
it is possible to make right projections a regular operation by assign-
ing parameters to projections, which can determine what attributes
have to be eliminated (truncated). In such a way, we obtain a para-
metric family of right projections.
Adding parameters indicating objects to be eliminated, we can
also make left projections a regular operation.
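A minimal sketch of the parametric extensions and projections (illustrative names; an individual unit is written as a triple (A, l, L) whose property set L is a Python set, and the connecting function is carried along unchanged):

```python
def right_extension(unit, added_properties):
    """Right extension: the same object A, with its property set enlarged to L ∪ K."""
    A, l, L = unit
    return (A, l, set(L) | set(added_properties))

def right_projection(unit, removed_properties):
    """Right projection: the same object A, with the listed properties eliminated (truncated)."""
    A, l, M = unit
    return (A, l, set(M) - set(removed_properties))

def left_extension(unit, added_objects):
    """Left extension: more objects sharing the same property set."""
    C, l, L = unit
    return (set(C) | set(added_objects), l, L)

car = ("car", "l", {"white"})
extended = right_extension(car, {"four doors", "automatic", 2010})
print(extended)                                                        # the extended unit from the example
print(right_projection(extended, {"four doors", "automatic", 2010}))   # back to the color alone
print(left_extension(({"car1"}, "f", {"white"}), {"car2", "car3"}))    # more cars with the same color
```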
4.3.3.2. Unary operations with semantic links


Zhuge introduces the following operation with semantic links (Zhuge,
2012).

Operation 4.3.13. The reverse relation operation, or simply inversion, R changes α to α^R and a semantic link (X, α, Y) to the semantic link (Y, α^R, X).
As Zhuge writes, a semantic link (semantic relation) and its
reverse declare the same thing, but one of them may be useful in
some situations, while the other may be useful in other situations
(Zhuge, 2012).

Example 4.3.7. The before link is the reverse of the after link.

Example 4.3.8. The worse link is the reverse of the better link.

Example 4.3.9. The smaller link is the reverse of the bigger link.

The reverse relation operation of semantic links is the involution


operation of named sets (Burgin, 2011). This allows us to use named
set theory to prove the following propositions describing properties of the reverse relation operation.

Proposition 4.3.30 (Zhuge, 2012). (Idempotent Law) (α^R)^R = α for any semantic link α.

Proposition 4.3.31. (Symmetry Law) α^R ≡ α if and only if α is a symmetric semantic link.

For instance, for symmetric relations, the relation link coincides


with its reverse.

Corollary 4.5.3 (Zhuge, 2012).

(1) sim^R = sim
(2) Ø^R = Ø
(3) N^R = N
(4) equiv^R = equiv
Proposition 4.3.32. (Monotone Law) A semantic link α implies the semantic link β if and only if the semantic link α^R implies the semantic link β^R.
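A complete semantic link can be modeled as a triple (X, α, Y) with the inner link α given as a string indicator. The following sketch (assumed names and an assumed, partial table of reverse indicators) illustrates the reverse relation operation together with the Idempotent and Symmetry Laws:

```python
SYMMETRIC = {"sim", "equiv", "N", "Ø"}                 # reverses coincide with the links themselves
REVERSE_PAIRS = {"before": "after", "after": "before",
                 "worse": "better", "better": "worse",
                 "smaller": "bigger", "bigger": "smaller"}

def reverse_indicator(alpha):
    """alpha^R for an inner semantic link (only a small illustrative table)."""
    if alpha in SYMMETRIC:
        return alpha
    return REVERSE_PAIRS.get(alpha, alpha)             # unknown links are left unchanged here

def reverse(link):
    """Reverse relation operation: (X, alpha, Y) becomes (Y, alpha^R, X)."""
    X, alpha, Y = link
    return (Y, reverse_indicator(alpha), X)

link = ("X", "before", "Y")
assert reverse(reverse(link)) == link                  # Idempotent Law: (alpha^R)^R = alpha
assert reverse(("X", "sim", "Y")) == ("Y", "sim", "X") # Symmetry Law for the sim link
```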

4.3.3.3. Binary operations with symbolic knowledge quanta

First, we describe some operations with knowledge units.

Operation 4.3.6. The union of symbolic collective quantum knowledge units K = (C, l, L) and H = (D, h, M) combines names and properties as separate objects and is defined as K ∪ H = ({C, D}, l ∪ h, {L, M}) where (l ∪ h)|C = l and (l ∪ h)|D = h.
Note that {C, D} and {L, M} are multisets in the general case.
As the union of sets is a commutative operation (Fraenkel and
Bar-Hillel, 1958), we have the following result.

Proposition 4.3.10. The union of symbolic collective quantum


knowledge units is a commutative operation.

As the union of sets is an associative operation (Fraenkel and


Bar-Hillel, 1958), we have the following result.

Proposition 4.3.11. The union of symbolic collective quantum


knowledge units is an associative operation.

As the union of sets is an idempotent operation (Fraenkel and


Bar-Hillel, 1958), we have the following result.

Proposition 4.3.12. The union of symbolic collective quantum


knowledge units is an idempotent operation, i.e., K ∪ K = K for
any symbolic collective quantum K.
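A sketch of this union (an illustrative representation only: the connecting function is a set of pairs, and the combined domains are kept as lists so that repeated members survive, as in a multiset):

```python
def union(K, H):
    """Union of symbolic collective quantum knowledge units K = (C, l, L) and
    H = (D, h, M): the name and property domains are kept as separate members,
    and the connecting relations, given as sets of pairs, are merged."""
    C, l, L = K
    D, h, M = H
    return ([C, D], set(l) | set(h), [L, M])

K = ({"car1", "car2"}, {("car1", "white"), ("car2", "black")}, {"white", "black"})
H = ({"car3"}, {("car3", "red")}, {"red"})
print(union(K, H))
```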

It is also possible to define the union with properties amalgama-


tion of symbolic collective quantum knowledge units and the union
with name amalgamation of symbolic collective quantum knowledge
units in a similar way as it is done for collective extended quantum
knowledge units (cf., Section 4.3.2.2).
One more operation with symbolic collective quantum knowledge units is the Cartesian product.
Operation 4.3.9. The Cartesian product of symbolic collective quantum knowledge units K = (C, l, L) and H = (D, h, M) forms the Cartesian products of the name domains C × D and of the property scales L × M and is defined as K × H = (C × D, l × h, L × M).
As the Cartesian product of sets is a commutative operation
(Fraenkel and Bar-Hillel, 1958), we have the following result.
Proposition 4.3.25. The Cartesian product of symbolic collective
quantum knowledge units is a commutative operation.
As the Cartesian product of sets is an associative operation
(Fraenkel and Bar-Hillel, 1958), we have the following result.
Proposition 4.3.26. The Cartesian product of symbolic collective
quantum knowledge units is an associative operation.
There are also operations with arity higher than two, as well as integral operations with symbolic collective knowledge units (knowledge quanta), but they are studied elsewhere.

4.3.3.4. Binary operations with semantic links


Operation 4.3.7. The semantic addition + of two complete seman-
tic links α = (X, α, Y ) and β = (X, β, Y ) is performed by merging
the inner semantic links α and β (Zhuge, 2012). It means that
α + β = (X, α & β, Y ).
The inner semantic link (semantic indicator) α&β is also denoted
by α + β.
Example 4.3.10. If α = (X, sim, Y ) and β = (X, ptOf, Y ), which
means that X is a part of Y and X is similar to Y , then α + β =
(X, sim&ptOf, Y ), which means that X is a part of Y and is similar
to Y . This inner semantic link sim & ptOf is pivotal for the theory of
fractals when some parts are similar to the whole (cf., for example,
(Flake, 1998)).
Example 4.3.11. If α = (X, ce, Y ) and β = (X, elOf, Y ), which
means that X is an element of Y and X causes Y , then α + β =
(X, ce&elOf , Y ), which means that X is an element of Y and
causes Y .
Note that semantic addition is a partial operation on the class


of all complete semantic links. There are two cases when semantic
addition is not defined. First, we cannot add complete semantic links
if their left nodes are different and/or their right nodes are different.
Second, some semantic links are incompatible and it is impossible
to consistently add them. For instance, inner semantic links Ø and e
are incompatible because it is impossible that two objects are equal
and at the same time, completely irrelevant to one another. Thus, we
cannot add complete semantic links, in which inner semantic links
are Ø and e.
However, even when some semantic links are incompatible, they
can be partially compatible and it might be possible to consistently
add graded counterparts of these semantic links.
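The partiality of semantic addition can be made explicit in a short sketch; the incompatibility table and the function name are illustrative assumptions:

```python
INCOMPATIBLE = {frozenset({"Ø", "e"})}     # e.g., "completely irrelevant" and "equal" cannot be added

def semantic_addition(link1, link2):
    """Semantic addition of complete semantic links (X, alpha, Y) and (X, beta, Y):
    returns None when the operation is undefined (different end nodes or
    incompatible inner links); otherwise merges the inner links."""
    X1, alpha, Y1 = link1
    X2, beta, Y2 = link2
    if (X1, Y1) != (X2, Y2):
        return None                        # different left or right nodes
    if frozenset({alpha, beta}) in INCOMPATIBLE:
        return None                        # incompatible inner links
    merged = alpha if alpha == beta else alpha + " & " + beta
    return (X1, merged, Y1)

print(semantic_addition(("X", "sim", "Y"), ("X", "ptOf", "Y")))   # ('X', 'sim & ptOf', 'Y')
print(semantic_addition(("X", "Ø", "Y"), ("X", "e", "Y")))        # None: undefined
```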
Let us consider some properties of semantic addition.

Proposition 5.5.4 (Zhuge, 2012). (Commutative Law) If the


semantic addition α + β is defined, then α + β = β + α for any
complete semantic links α and β.

Proposition 5.5.5 (Zhuge, 2012). (Associative Law) If the


semantic addition α+β, α+γ and β+γ are defined, then (α+β)+γ =
α + (β + γ) for any complete semantic links α, β, and γ.

Proposition 5.5.6 (Zhuge, 2012). (The Law of Zero) α + N =


N + α = α for any complete semantic link α and the corresponding
complete semantic link N with the inner semantic link N .
Note that semantic addition α + N and N + α are always defined.

Proposition 5.5.4 (Zhuge, 2012). (Implicative Law) If a com-


plete semantic link α implies a complete semantic link β, then
α + β = α.

Corollary 5.5.3 (Zhuge, 2012). (Idempotent Law) α+α = α for


any complete semantic link α.

Proposition 5.5.4 (Zhuge, 2012). (Distributive Law for Reversion) If the semantic addition α + β is defined, then (α + β)^R = α^R + β^R for any complete semantic links α and β.
Proposition 5.5.4 (Zhuge, 2012). (Distributive Law for Implica-


tion) If semantic addition β + γ is defined and a complete semantic
link α implies a complete semantic link β and a complete semantic
link γ, then α implies β + γ.

Proposition 5.5.4 (Zhuge, 2012). (Conjunction Law) If the


semantic addition α + β is defined, then the complete semantic link
α+β implies the complete semantic link β and the complete semantic
link α.

Generalizing the concept of semantic addition, we can build a


parametric family of semantic operations.
Let us consider a partial binary operation ◦ on the totality of all
inner semantic links. If this operation is defined for some pairs of com-
plete semantic links in which left nodes are the same and their right
nodes are the same, then we can build the parallel ◦–composition of
complete semantic links.

Operation 4.3.7. The parallel ◦–composition ◦ of two complete


semantic links α = (X, α, Y ) and β = (X, β, Y ) is performed by
performing the operation ◦ with the inner semantic links α and β, i.e.,

α ◦ β = (X, α ◦ β, Y ).

Example 4.3.12. If ◦ = &, then α ◦ β = α + β.


Note that parallel ◦–composition of complete semantic links is a
special case of parallel composition of named sets (Burgin, 2011).
Parallel ◦–composition of complete semantic links inherits prop-
erties of the operation ◦.
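In code, parallel ◦–composition is naturally a higher-order operation that takes the partial operation on inner links as a parameter; the sketch below uses assumed names, and None stands for "undefined":

```python
def parallel_composition(op, link1, link2):
    """Parallel composition of (X, alpha, Y) and (X, beta, Y) driven by a partial
    operation `op` on inner links; returns None when undefined."""
    X1, alpha, Y1 = link1
    X2, beta, Y2 = link2
    if (X1, Y1) != (X2, Y2):
        return None
    inner = op(alpha, beta)
    return None if inner is None else (X1, inner, Y1)

conjunction = lambda a, b: a if a == b else a + " & " + b      # yields semantic addition
disjunction = lambda a, b: a if a == b else a + " ∨ " + b      # yields semantic disjunction (see below)
print(parallel_composition(conjunction, ("X", "sim", "Y"), ("X", "ptOf", "Y")))
print(parallel_composition(disjunction, ("X", "elOf", "Y"), ("X", "ptOf", "Y")))
```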

Proposition 5.5.4. (Commutative Law) If the operation ◦ is com-


mutative and α ◦ β is defined, then α ◦ β = β ◦ α for any complete
semantic links α and β.

Proposition 5.5.5. (Associative Law) If the operation ◦ is asso-


ciative and α ◦ β, α ◦ γ and β ◦ γ are defined, then (α ◦ β) ◦ γ =
α ◦ (β ◦ γ) for any complete semantic links α, β and γ.
Semantic addition of complete semantic links is not the only spe-


cial case of parallel ◦–composition of complete semantic links.

Operation 4.3.7. The semantic disjunction ∨ of two complete


semantic links α = (X, α, Y ) and β = (X, β, Y ) is performed by
taking the disjunction of the inner semantic links α and β. It means
that

α ∨ β = (X, α ∨ β, Y ).

Example 4.3.13. If α = (X, elOf , Y ) and β = (X, ptOf , Y ), then


α ∨ β = (X, elOf ∨ ptOf , Y ), which means that X is either a part of
Y or an element of Y .
Let us consider some properties of semantic disjunction.

Proposition 5.5.4. (Commutative Law) If semantic disjunction α∨


β is defined, then α ∨ β = β ∨ α for any complete semantic links α
and β.
Proof is left as an exercise.

Proposition 5.5.5. (Associative Law) If the semantic disjunction


α ∨ β, α ∨ γ and β ∨ γ are defined, then (α ∨ β) ∨ γ = α ∨ (β ∨ γ) for
any complete semantic links α, β and γ.
Proof is left as an exercise.

Proposition 5.5.4. (Implicative Law) If a complete semantic link


α implies a complete semantic link β, then α ∨ β = α.
Proof is left as an exercise.

Corollary 5.5.3. (Idempotent Law) α ∨ α = α for any complete


semantic link α.

Proposition 5.5.4. (Distributive Law for Reversion) If the semantic disjunction α ∨ β is defined, then (α ∨ β)^R = α^R ∨ β^R for any complete semantic links α and β.
Proof is left as an exercise.

Proposition 5.5.4. (Distributive Law for Implication) If the seman-


tic disjunction β∨γ is defined and a complete semantic link α implies
a complete semantic link β and a complete semantic link γ, then α


implies β ∨ γ.
Proof is left as an exercise.

Proposition 5.5.4. (Conjunction Law) If the semantic disjunction


α ∨ β is defined, then the complete semantic link α implies the com-
plete semantic link α ∨ β and the complete semantic link β implies
the complete semantic link α ∨ β.
Proof is left as an exercise.

Corollary 5.5.3. If semantic disjunction α ∨ β and the semantic


addition α + β are defined, then α + β implies α ∨ β.

Taking a partial binary operation ◦ on the totality of all inner


semantic links, which is defined for some pairs of complete semantic
links in which the right node of the first complete semantic link is
equal to the left node of the second complete semantic link, we can
build the sequential ◦–composition of complete semantic links.

Operation 4.3.7. The sequential ◦–composition ◦ of two complete


semantic links α = (X, α, Y ) and β = (Y, β, Z) is performed by
performing the operation ◦ with the inner semantic links α and β,
i.e.,
α ◦ β = (X, α ◦ β, Z).
A special case of sequential ◦–composition of complete semantic
links is semantic multiplication × of complete semantic links studied
in (Zhuge, 2012) and defined in the following way.

Operation 4.3.7 (Zhuge, 2012). Given two complete semantic


links α = (X, α, Y ) and β = (Y, β, Z), if it is possible to find seman-
tic indicators γ1 , γ2 , γ3 , . . . , γk that connect X and Z by reasoning,
then the reasoning process is called semantic multiplication, which
is denoted by α × β = γ where γ = γ1 + γ2 + γ3 + · · · + γk .
In turn, this defines the semantic multiplication × of two complete semantic links α = (X, α, Y) and β = (Y, β, Z), which is equal to α × β = (X, α × β, Z).
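A sketch of sequential ◦–composition follows (assumed names; the small composition table stands in for the reasoning step of semantic multiplication and is purely illustrative):

```python
def sequential_composition(op, link1, link2):
    """Sequential composition of (X, alpha, Y) and (Y, beta, Z): defined only when
    the right node of the first link coincides with the left node of the second."""
    X, alpha, Y1 = link1
    Y2, beta, Z = link2
    if Y1 != Y2:
        return None
    inner = op(alpha, beta)
    return None if inner is None else (X, inner, Z)

COMPOSITION_TABLE = {("ptOf", "ptOf"): "ptOf"}        # a part of a part is a part
multiply = lambda a, b: COMPOSITION_TABLE.get((a, b))

print(sequential_composition(multiply, ("X", "ptOf", "Y"), ("Y", "ptOf", "Z")))  # ('X', 'ptOf', 'Z')
```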
Zhuge (2012) also formulates laws of semantic multiplication:

(1) α × e = α = e × α
(2) α × N = N = N × α
(3) α × Ø = N = Ø × α and Ø × Ø = N
(4) (α + β) × γ = α × γ + β × γ and α × (β + γ) = α × β + α × γ
(5) (α × β)^R = α^R × β^R

There are also operations with higher arity than two and integral
operations with semantic links but they are studied elsewhere.
Operations with semantic links and symbolic knowledge quanta
play an important role in construction and functioning of semantic
networks.
Although people are more accustomed to operations with a fixed
arity, e.g., to binary operations, integral operations with knowledge
quanta find various applications. For instance, as it is explained at
the beginning of this chapter, a relation in a relational database is
an individual knowledge quantum if such a relation describes one
object and is a collective knowledge quantum when it describes sev-
eral objects. Then the basic operations in relational databases —
projection and selection — are examples of integral operations with
knowledge quanta.
Projection is a basic operation in relational databases (Codd,
1970). It is applied to rows (tuples) in a relation R from a rela-
tional database and has a set of attribute names {a1 , a2 , a3 , . . ., an }
as its parameters or arguments. Projection transforms the relation R
in such a way that all rows in the result are restricted to the set {a1 ,
a2 , a3 , . . ., an }. As projection can be applied to any number of rows
(tuples) in a relation and rows are symbolic individual knowledge
quanta, it is an integral operation on symbolic individual knowledge
quanta. Being applied to one row, database projection coincides with
the right projection of symbolic knowledge quanta.
Selection, sometimes called restriction, is another basic operation
in relational databases (Codd, 1970). It is applied to rows (tuples)
in a relation R from a relational database and has a relation Q
between two of the attributes as its parameter or argument. Selec-
tion selects all those rows in R for which the relation Q holds for the
values of the chosen attributes. As selection can be applied to any


number of rows (tuples) in a relation and rows are symbolic indi-
vidual knowledge quanta, it is an integral operation with symbolic
individual knowledge quanta.
One more integral operation with symbolic individual knowledge quanta is used in relational databases. It is generalized selection, which, like selection, is a parametric operation, but it allows one to use an arbitrary propositional formula for selecting rows.
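For illustration, here is a sketch (toy data and assumed function names) of projection and selection acting row by row on a relation, with each row treated as a symbolic individual knowledge quantum mapping attribute names to values:

```python
relation = [
    {"make": "Ford", "doors": 4, "seats": 5, "year": 2010},
    {"make": "Audi", "doors": 2, "seats": 2, "year": 2015},
]

def projection(rows, attribute_names):
    """Restrict every row (tuple) to the given set of attribute names."""
    return [{a: row[a] for a in attribute_names if a in row} for row in rows]

def selection(rows, condition):
    """Keep only the rows for which the condition holds, for example a relation
    between the values of two attributes."""
    return [row for row in rows if condition(row)]

print(projection(relation, {"make", "year"}))
print(selection(relation, lambda r: r["seats"] > r["doors"]))   # keeps only the first row
```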
To conclude, it is necessary to remark that, on the one hand, binary relations, or more exactly, set-theoretical binary relations are symbolic knowledge quanta. In addition, it is possible to represent arbitrary relations as a set or a list of symbolic knowledge quanta. On the
other hand, symbolic knowledge quanta and their systems are very
convenient objects for mathematical modeling and manipulation as
they form regular structures and allow many operations.
However, there are some problems with their utilization as a
basis structure for data mining theory. First, relations from relational
databases are not exactly set-theoretical relations. Actually, they are
named-set-theoretical relations because rows and columns in these
relations always have some names. Usually, columns are named by
attributes, while rows are named by the objects, information about which is stored in the database. Named relations are some kinds of named sets. So, the relational approach to databases implicitly uses named sets.
Second, if we want to be able to apply theoretical techniques and
methods to search on the Internet, we have to take into account
how data on the Internet are organized. Looking at web pages, we
see that those data scarcely have the structure of a relation (a set-
theoretical one or even, a named-set-theoretical relation). Those data
are texts in a very general sense, which are often multimedia texts
with the hierarchical structure. The structure of the Internet objects
is so sophisticated that they often are called unstructured objects.
As a result, the relational model for these data appears inadequate.
In contrast to this, named sets give very powerful means for mod-
eling such structures. Named sets allow one to represent arbitrary
multimedia texts, as well as hierarchical structures (Burgin, 2008a;
Nocedal et al., 2011).
Chapter 5

Knowledge Structure and Functioning: Macrolevel or Theory of Average Knowledge

Knowledge is that which, next to virtue,
truly raises one person above another.
Joseph Addison

Although philosophers paid a lot of attention to the knowledge of people, developing logic as a tool for knowledge acquisition (cognition) and justification, the structures used in logic for these objectives were rather limited. So, the real interest in problems of knowledge structures and functioning emerged when people tried to make their computers intelligent. When researchers started to teach computers to solve problems people can solve, they found that while computers can solve some problems (mostly calculations with numbers) even better than people, many problems that were easy for people were not doable for computers. To overcome these limitations of
computers, researchers created a scientific direction called artificial
intelligence (AI).
In the Handbook of Artificial Intelligence, it is possible to find
the following definition of AI (Barr and Feigenbaum, 1981):
“Artificial Intelligence is the part of computer science concerned with
designing intelligent computer systems, that is, systems that exhibit

the characteristics we associate with intelligence in human behav-


ior — understanding language, learning, reasoning, solving problems,
and so on.”

This definition describes AI as a research area. At the same time,


AI has another meaning. Many people understand AI as interactive
intelligence realized by artificial means, having in mind computers as
the most appropriate means for this purpose. We call it technological
AI. For instance, Alan Turing (1912–1954) suggested the following
test for AI (Turing, 1956). If people who communicate with a com-
puter in one room and a human being in another room are not able
to tell in which room there is a computer and in which a person is
situated, then it would be possible to say that that computer with
its software embodies AI.
Consequently, the aim of researchers in AI has been the creation of computer programs that would allow computers to demonstrate intelligent behavior. However, in the scientific community in general and in AI in particular, there is considerable controversy about definitions of intelligence and about what behavior can be called intelligent. Here we are not going to discuss these questions. We only want to understand how, using mathematical techniques, researchers have been trying to build AI.
There are three main approaches to this problem. The first one is empirical, based on technological development. Its essence is verbalized as follows.
Let us build more and more sophisticated computers and their software, observe their functioning, and analyze the results of our observations. If we persist, someday computers will become intelligent.

This approach is represented by the famous Turing’s test for AI


(Turing, 1956), as well as by knowledge engineering aimed at the
development of expert systems (Giarratano and Riley, 1998).
The second approach is philosophical and methodological. It is
based on speculative reasoning about computers, human intelli-
gence, and their interrelations. As an example of a problem con-
sidered in this approach, we can take the problem of whether the mind (or brain) is some kind of computer or something much
more complex. Different aspects of this approach were presented by


Dreyfus (1973).
The third approach is theoretical with essential utilization of
mathematics. Its target is the construction of mathematical mod-
els for computers and their software. This provides for explication
of AI capabilities through a study of these models and evaluation of
those aspects that can be considered intelligent. In modern mathe-
matics, computers and their software are modeled by different types
of abstract automata and algorithms. Consequently, the power of
algorithms has been used as a measure for AI capabilities. It gives
a framework within which fairly sophisticated systems of AI may be
specified, designed, analyzed, and verified in a systematic rather than
ad hoc manner. A coherent application of this approach would result
in the most promising payoffs for practitioners.
It is essential to remark that although the third approach is based on rigorous methods of mathematics, its adherents, as a rule, speak and write as if about computers themselves, while in reality they are dealing with mathematical models or even with informal images of actual computers. Consequently, they substitute models (finite automata, Turing machines, RAM, etc.) for computers and estimate the power of computers by the capabilities of one or another mathematical model. However, any model gives only some approximation to a real computer. That is why, before using a model as a measurement device, it is necessary to determine the adequacy of the model. This adequacy is a measure of the precision of the estimations that are made by means of this model.
Another remark serves to attract the attention of AI researchers to the fact that evaluation of computers has to take into account their hardware, software, and utilization. As far as hardware and software are concerned, these components are now included in the evaluation, while their utilization is often neglected. As Wing (1998) states, formal methods are used to describe only properties of hardware and software. At the same time, there is a great difference between using a computer as a calculator and simulating complex processes in the atmosphere with the help of the same computer. Consequently, if a new way of utilization of computers is elaborated, it can change
drastically their capabilities. However, we still do not have techno-


logical AI, e.g., in the form of intelligent computers, and research in
the area of AI continues.
Studies in this area attracted the attention of many researchers to knowledge representation and modeling because it was reasonably assumed that knowledge is the basis of intelligence. The main attention was given to the knowledge used by people in science and everyday life. This average knowledge constitutes the macrolevel of the knowledge universe. Different models of knowledge on the macrolevel have been introduced. The best-known models are relational and logical
structures (cf., for example, (Thayse et al., 1988)), semantic net-
works (cf., for example, (Minsky, 1968; Findler, 1979; Ueno et al.,
1987; Burgin and Gladun, 1989; 1990; 1990a; Sowa, 1987; 1991;
Lehmann, 1992)), systems of frames (Minsky, 1974), informal and
formal schemas (Anderson, 1977; Arbib, 1989; 1992; 1994; Nebel,
1994; Armbruster, 1996; Brewer, 1999; Burgin, 1973; 2005a; 2006;
2010a; Rohrer, 2006; Zhuge and Sun, 2010), scripts or formal sce-
narios (cf., (Schank and Abelson, 1977)), and productions (cf., for
example, (Ueno et al., 1987)). Theoretical studies of these structures
form the theory of average knowledge.
As a result, a special area in AI called knowledge representation
(KR) emerged, research in which is aimed at representing knowledge
in symbolic structures to facilitate inference and construction of new
knowledge items from given knowledge systems. To achieve these
goals, KR research involves construction of efficient symbolic struc-
tures that represent knowledge, operations with knowledge, such as
inference and knowledge integration, and relations between knowl-
edge items. For instance, in 1980s, the predominant representational
paradigms were semantic networks, frames, predicate logic, and pro-
duction systems. Researchers in KR analyze how to accurately and
effectively reason and process knowledge using different systems of
knowledge representations. Knowledge representation involves three
kinds of semantics: formal semantics, which explains correct trans-
formation of knowledge structures; content semantics, which enables
knowledge interpretation; and operational semantics, which assigns
meaning to the processes, actions, rules, procedures, and algorithms


related to this knowledge.
Knowledge representation is the primary concern for AI as the
goal of AI is to create a machine that is ‘truly’ intelligent, i.e.,
the behavior of which is similar to the behavior of an intelligent
human being. Developing AI, most researchers assume that intel-
ligent behavior is based on knowledge. Therefore, it is necessary to
represent knowledge in a form acceptable for a machine and then to
teach the machine to use this knowledge.
Similar to the concept of AI, knowledge representation also has
another interpretation. Namely, it is comprehended as a definite sys-
tem, e.g., logic or schemas, used for representing definite knowledge
about some domain.
Researchers found key characteristics of knowledge representa-
tions:

• Expressivity or expressiveness of a given knowledge representa-


tion means how much knowledge it can represent. However, more
expressive representations are likely to require more complex
means for construction and operation.
• Understandability of a given knowledge representation, e.g., of
logic, means how well humans can understand knowledge in this
form. Such properties as modularity and hierarchies of classes allow
achieving higher understandability.
• Efficiency in representing and processing knowledge means how
well humans (or computers) can build knowledge items and oper-
ate with them. For instance, a given knowledge representation can
provide better or worse means for knowledge acquisition or for
elimination of redundant or conflicting knowledge. It is usually
assumed that efficient knowledge representation supports intellec-
tual activity and creativity.
• Hardship of a given knowledge representation characterizes how difficult it is to modify and update knowledge when this representation is used.

It is possible to classify different knowledge representations.


There are three substantial classes of knowledge representations,


which correspond to the Existential Triad of the world considered in
Chapter 2:

— Abstract (structural) representations, such as formal logic or


semantic networks
— Material representations, such as the human brain or computer
memory
— Mental representations, such as concepts, ideas, or mental
schemas

For instance, representation of operational knowledge by


algorithms and representation of representational knowledge by dif-
ferential equations are abstract representations. Representation of
knowledge in a book or in a database is material representation,
while knowledge in the heads of people is mentally represented.
There are also three stylistic classes of knowledge representations:

— Formal representations
— Informal representation
— Semiformal representations

The most widespread formal approach to knowledge representa-


tion on the macrolevel is based on utilization of logical languages and
calculi, such as the propositional calculus or first-order predicate cal-
culus. For instance, Halpern and Moses (1985) consider knowledge
of an agent (a person or an artificial system) as a set of propositions.
The conceptual world of such an agent consists of formulas of some
logical language concerning the real world and agent’s knowledge.
Systems of propositions (propositional calculi) describe actual and
possible worlds in theories of Bar-Hillel and Carnap (1952; 1958),
Hintikka (1970; 1971; 1973; 1973a) and many other researchers.
The semi-formal methods include Minsky’s frames (1975), Schank
and Abelson’s scripts (1977), and related methods for showing typi-
cal, default, or expected information. The main weakness of the semi-
formal methods is that they are not defined precisely. For instance,
Minsky and Schank presented a variety of stimulating examples in
their papers, but they never formulated exact definitions that com-
pletely characterized them. On the other hand, without exact definitions, semi-formal methods are more flexible and adaptive.
Informal methods are very flexible and adaptive but not efficient
for many purposes. For instance, natural languages do not allow pre-
cise representation of scientific knowledge. Informal methods are also
very hard for computer processing. Natural languages form the most
popular informal knowledge representations.
It is interesting that similar to knowledge, knowledge representa-
tions also have the same three forms:

• Descriptive, e.g., declarative, representations


• Operational, e.g., instructional, representations
• Representational representations

Each of these types can epitomize any type of knowledge. For


instance, programs written in functional and logic-based program-
ming languages are descriptive/declarative representations of oper-
ational knowledge (cf., Section 5.1.3). Objects in object-oriented
programming languages are operational representations of descrip-
tive knowledge (cf., Section 5.1.3). Formulas that describe dynamic
processes are representational representations of operational knowl-
edge (cf., Section 5.1.2). Models of theories are representational
representations of descriptive knowledge. At the same time, for-
mal theories are descriptive representations of representational
knowledge.
Specification of tools used for knowledge representation determines
instrumental classes of knowledge representations:

— Mathematical representations,
— Logical representations,
— Scientific representations,
— Metaphoric representations,
— Linguistic representations,
— Digital representations,
— Schematic representations,
— Iconic representations,
— Symbolic representations,
— Algorithmic representations.

There are also perceptional classes of knowledge representations


such as:

— Visual representations,
— Vocal representations,
— Tactile representations, e.g., the tactile writing system Braille.

Writing about representation of descriptive knowledge and defin-


ing its task as construction of computable models for some domain,
Sowa (2000) suggests it is a multidisciplinary subject, which applies
theories and techniques from three other fields:

1. Logic provides formal structures and rules of inference.


2. Ontology defines the kinds of things that exist in the application
domain.
3. Computation supports the applications that distinguish knowl-
edge representation from pure philosophy.

For instance, without logic, knowledge representation is vague and


uncertain lacking criteria for determining whether statements are
redundant or contradictory. Without ontology, the terms and sym-
bols are ill-defined, confused, and confusing, and only computation
provides means for implementing knowledge representation in com-
putable models.

5.1. Language as a universal tool for knowledge representation

Language shapes the way we think, and
determines what we can think about.
Benjamin Lee Whorf

Language is a highly complex phenomenon. People made many


efforts to study existing languages and to create new languages for
different purposes.
There are three structural classes of languages:


— Informal languages, which include natural languages, such as
English, Spanish, or Chinese.
— Semiformal languages, which include languages of mathematics,
physics, biology, and other sciences.
— Formal languages, which include logical languages and program-
ming languages.
Formal languages have formalized syntax (generating algorithms),
semantics (interpretation), and utilization.
Note that the concept of a formal language in linguistics, mathe-
matics, and logic is essentially different from the concept of a formal
language in the theory of algorithms, automata, and computation
where a formal language is simply any subset of the set of all strings
in some alphabet (cf., for example, (Hopcroft et al., 2007)).
There are two basic classes of informal languages — natural lan-
guages and artificial languages, such as Esperanto.
There are also three referential classes of languages:
— All-purpose languages such as natural languages.
— Wide-ranging languages such as languages of mathematics or phi-
losophy.
— Specialized languages such as logical languages and programming
languages.
Here we consider languages as a tool for knowledge representa-
tion and processing analyzing three classes of languages: natural lan-
guages, languages of mathematics and science, and algorithmic and
programming languages.

5.1.1. Natural languages


Do not say a little in many words but a great deal in a few.
Pythagoras

The term language has two basic meanings: an abstract concept stud-
ied by linguistics and a specific linguistic system, e.g., “English”.
There are various interpretations of language as a linguistic system.
It may be understood as a formal system of signs governed by grammatical rules of combination and aimed at communicating meaning. Another approach treats language as the mental faculty that allows humans to undertake linguistic behavior by learning languages and utilizing them to produce and understand utterances. A third characterization is based on the social functions of language, representing it as a system of communication that enables humans to exchange verbal or symbolic utterances.
Natural languages evolved naturally as a means of communication
among people becoming a powerful tool for information transmission
as well as for storage of information and knowledge. According to
Renzl (2007), (natural) languages and meaning play a crucial role in knowledge construction, with language serving as a vehicle of knowing aimed at improving the efficiency of interaction between people. When people
came to the necessity of information transmission, they developed
different natural languages. Various linguistic texts became symbolic
units of knowledge. For instance, the sentence “This is a book about
knowledge” is in English and represents knowledge of the content of
this book.
To be an efficient tool for communication and discourse, as well as
for informing, modeling, rejection, influence, formation, and expres-
sion of ideas, languages in general and natural languages in partic-
ular, have to reflect definite reality containing enough information
about its domain. Natural languages function as all-purpose tools
of communication. Consequently, they are aimed at reflection of the
whole world known to people. As Wittgenstein (1922) wrote, the
internal structure of reality shows itself in language. On the one hand,
language reflects the real world in which people live. On the other
hand, it is formed under the influence of nature and social forces in
the mental space of people as individuals and society as a whole. As a
consequence, the structure of language reflects basic features of soci-
ety and nature. For instance, the lexicon of language contains only
such words that are used in society. Natural languages are linear as
texts in natural languages are formed as linear sequences of words.
The cause of this peculiarity lies in certain characteristics of the nervous system and the mentality of a person. A similarity between language
and the world is explicated in the structural similarity (isomorphism)


between a natural language and the real world.
As we know (cf., Chapter 2), the world is structured as the Exis-
tential Triad. This structure is exactly reflected in the large-scale
structure of language. Indeed, in the opinion of experts in the field of
linguistics and philosophy of language, the following regularity can be explicated in the history of poetics and philosophy of language: language imperceptibly directs theoretical ideas and poetic impulses along one of the axes of its three-dimensional space, in which the first direction taken goes along semantics, then it turns to syntax, and at last, to pragmatics (Stepanov, 1985). This directly
implies that the global structure of the world (the Existential Triad)
is reflected in the triadic structure of language (syntax, semantics,
and pragmatics), where reflection is a structural similarity in the
sense of (Burgin, 2010; 2012). In this similarity, the World of Struc-
tures corresponds to syntax, which covers purely linguistic structures.
The Physical World is embodied in semantics, which connects lan-
guage structures and the real objects. The Mental World (as the
complex essence) gives birth to the pragmatics of language as prag-
matics refers to goals and intentions of people.
Being a tool for communication and discourse, any language is
used for information and knowledge transmission as the primary goal
of language utilization since communication is information exchange,
e.g., information transmission that goes from one person to another
and back repeating this cycle many times. Thus, specific aspects of
information transmission influence many features of language. The
three-dimensional structure of language (cf., Figure 5.1) — syntax,
semantics, and pragmatics — is a manifestation of this influence
(Burgin, 2010).

Figure 5.1. The Language Triad: Syntax, Semantics, and Pragmatics.


From the pragmatics point of view, there are four types of sen-
tences in languages that have the same structure as English, French,
or Spanish:

— A declarative sentence makes a statement.


— An interrogative sentence asks a question.
— An imperative sentence makes a command.
— An exclamatory sentence (exclamation) expresses emotions.

For instance, the sentence “It is an apple” is declarative, the sen-


tence “What is this?” is interrogative, the sentence “Read this book”
is imperative and "Wow!" is an exclamation.
This classification reflects how sentences of natural languages rep-
resent knowledge.
Declarative sentences play the main role in representing descrip-
tive assertoric knowledge. For instance, the sentence “Tomorrow it
will be warm” gives knowledge about weather.
Interrogative sentences principally contain descriptive erotetic
knowledge. For instance, the sentence “What time is now?” expresses
lack of knowledge about time.
Imperative sentences convey operational knowledge. For instance,
the sentence “Get up!” instructs what is necessary to do.
Exclamatory sentences (exclamations) carry representational
knowledge. For instance, the exclamation “Great!” reflects emotions
of the speaker.
Forms of linguistic expressions are correlated with the types of
sentences:

1. Statements or propositions describe objects or situations and are


expressed by declarative sentences.
2. Questions ask for information and are expressed by interrogative
sentences.
3. Instructions or tasks describe or demand some actions and are
expressed by imperative sentences.
4. Exclamations describe emotions and are expressed by exclamatory
sentences.
Besides, exploring properties of information transmission,


researchers found six factors (components) of this process:

• the addresser (sender or source);


• the addressee (recipient or receiver or receptor or target);
• the message (carrier);
• the context (environment);
• the code used in the message;
• contact (interaction).

These factors (components) of information transmission form


elaborated structures called information triads, which are analyzed
in the context of fundamental triads (Burgin, 1993b; 2010). As lan-
guage is a tool for information transmission, these factors also influ-
ence functions of language.
Extending Karl Bühler’s Organon-Model, Roman Jakobson
(1896–1982) defined six communication functions of language,
according to which an effective act of verbal communication can be
described (Jakobson, 1960; 1971):

1. The Referential Function describes (conveys information about)


some real phenomenon, e.g., a situation, object or mental state
and is usually expressed by declarative sentences, e.g., “This
book is about knowledge”, performing representation of descrip-
tive assertoric knowledge.
2. The Expressive (alternatively called Emotive or Affective) Func-
tion is usually expressed by exclamatory sentences, interjections,
and other sound changes that do not alter the denotative mean-
ing of an utterance but provide information about the Addresser’s
(Speaker’s or Writer’s) internal state (feelings or emotions), e.g.,
“Wow, what a book!”
3. The Conative Function engages the Addressee (Recipient or
receiver or receptor) directly trying to elicit some behavior
(action) from the Addressee (Recipient or receiver or receptor)
and is usually expressed by imperative sentences, e.g., “Bill! Please
give me this book!”, performing representation of operational
knowledge.
4. The Poetic Function focuses on “the message for its own sake”
(the code itself, and how it is used) performing representation of
descriptive assertoric knowledge about the text (code) and is the
operative function in poetry as well as in slogans.
5. The Phatic Function is utilization of language for the sake of
interaction, e.g., building a relationship between both parties in a
conversation or dialogue. We can observe the Phatic Function in
greetings and casual discussions of the weather, particularly with
strangers. It also provides the keys to start, maintain, verify, or
finish the communication process with such words as “Hello?”,
“Ok?”, “Hummm”, “Goodbye”, etc.
6. The Metalingual (also called Metalinguistic or Reflexive) Func-
tion is the use of language (or of code by Jakobson) to discuss or
describe itself, i.e., it involves self-reference. For instance, the sen-
tence “The previous sentence is declarative” performs metalingual
function presenting knowledge about the text.

Each of these six functions has a pair of associated factors, in


which the first one is the source factor and the second one is the
target factor:

1. The referential function is associated with the pair (message, con-


text), e.g., as in the message “Water is a liquid”;
2. The emotive function is oriented toward the pair (message, the
addresser), e.g., as in the interjections and exclamations such as
“Bah!” and “Oh!”;
3. The conative function points toward the pair (message, the
addressee) and is usually expressed in the form of imperatives
and apostrophes;
4. The poetic function restricts its focus on the message for its own
sake being associated with the pair (message, message);
5. The phatic function is associated with the pair (message, contact)
assisting in establishment, prolongation or discontinuation of the
contact (communication);
6. The metalingual function is associated with the pair (message,
code) to establish mutual agreement on the code (for example, to
provide a definition);
All language functions are related to information transmission and


reception and thus, are typical features of information flows. Namely,
we have:

• The referential function of an information flow conveys cognitive


information about some real phenomenon.
• The expressive function of an information flow conveys affec-
tive information and describes (or provides cognitive information
about) feelings of the information sender.
• The conative function of an information flow attempts to elicit
some behavior from the addressee by conveying effective informa-
tion.
• The poetic function of an information flow focuses on information
independent of reference.
• The phatic function of an information flow builds a relationship
between both parties, the sender and receiver, in a communication
by conveying cognitive and affective information between both par-
ties.
• The metalingual function of an information flow reflects self-
reference establishing connections of the information item inside
the information flow.

These parallel traits of natural languages and information flow


show that information, in general, realizes the same functions as
language, while information studies reflect all three dimensions of
information — statistical information theory is based on information
syntax, semantic information theory portrays information semantics,
and pragmatic information theory portrays information pragmatics
(Burgin, 2010).

It is possible to discern different types of knowledge representation
in languages:

1. Direct representation by text and linguistic expressions.
2. Relational representation by references to texts and linguistic
expressions.
3. Procedural representation by derivation of texts and linguistic
expressions.

For instance, the sentence “It is an apple” is a direct representa-
tion, the sentence “It is possible to acquire knowledge about people
from their biographies” is a relational representation, while the sen-
tence “If you want to know about knowledge, read this book” is a
procedural representation.
To represent different types of knowledge, natural languages
developed various means. For instance, a metaphor is a figure of
speech in which an implicit comparison is made between two unlike
things that actually have something in common.
At the same time, another interpretation treats a metaphor as an
analogy between two objects or ideas, conveyed by the use of one word
instead of another.
Both interpretations are unified when we understand that a
metaphor gives representational knowledge and it is possible to build
a named set representation (Burgin, 2011) of a metaphor, which is
specified in Diagram (5.1) in the graphical form

                     analogy
        object 1 ------------> object 2                    (5.1)

or in Diagram (5.2) in the analytical form

(object 1, analogy, object 2). (5.2)

Metaphors are closely related to allegory.


Allegory (from Greek: ἄλλος (allos), which means other, and
ἀγορεύειν (agoreuein), which means to speak) is a figurative mode
of knowledge representation conveying a meaning that is different
from the literal meaning.
Allegory is represented by the following diagram:

        expression A                expression A
             ↓              ⇒            ↓                 (5.3)
         meaning 1                  meaning 2
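
As an illustration of this named set view, the following minimal Python sketch (our own illustration; the class and field names are hypothetical and not taken from the text) encodes a metaphor as the triple (object 1, analogy, object 2) of Diagram (5.2) and an allegory as a re-mapping of an expression from its literal meaning to a figurative one, as in Diagram (5.3).

from dataclasses import dataclass

@dataclass
class NamedSet:
    # A named set as in Diagram (5.2): (support, connection, name).
    support: str      # object 1
    connection: str   # the connecting relation, here an analogy
    name: str         # object 2

# The stock metaphor "time is money" as the triple (object 1, analogy, object 2).
metaphor = NamedSet(support="time", connection="analogy", name="money")

# An allegory as in Diagram (5.3): the same expression is re-mapped
# from its literal meaning to a different, figurative meaning.
def allegory(expression: str, literal: str, figurative: str) -> dict:
    return {"expression": expression,
            "literal meaning": literal,
            "figurative meaning": figurative}

print(metaphor)
print(allegory("the fox", "an animal", "a cunning person"))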

Metaphors and allegories are frequently used in literature. They
are also used in linguistics and semiotics, for example, in analyzing
myths (Griffin, 2006; 2008; 2009).

5.1.2. Languages of science and mathematics


The book [of nature] is written in mathematical language . . .
Galileo Galilei

Languages of science and mathematics have been mostly used for
portrayal of operational and representational knowledge. Mathemati-
cians and scientists have elaborated a lot of mathematical structures
and theoretical scientific models for modeling and studying a variety
of natural phenomena. As mathematical language is an essential part
of the majority of scientific languages, here we focus our attention on
the language of mathematics as a tool for knowledge representation.
Mathematics was created as a tool for solving practical problems
such as counting, measuring, business transactions, building calen-
dars, and buildings. Thus, at the beginning, mathematics was the
discipline that studied numbers and geometrical shapes/forms, con-
sisting of two parts — arithmetic and geometry (Burton, 1997).
Then mathematicians created many new mathematical fields,
which expanded far beyond numbers and geometrical shapes. Such
an enormous expansion brought a new vision of mathematics as a
discipline. Little by little, mathematicians started to understand
that mathematics is the formalized (abstract) science of structures
(Lautman, 1938; Bourbaki, 1957; 1960; Burgin, 1998; 2012). For
instance, Bourbaki from the first volume of the Elements in 1939
hunted for a general theory of all the ways a set can be structured
in mathematics, discerning between operations on the set, finitary
relations among its members, and infinitary relations on the set, such
as those that define convergence or a topology. Bourbaki asserted that mathe-
matics is a discipline that studies mathematical structures and only
such structures (Bourbaki, 1957; 1960). As a result, Bourbaki is
chiefly identified with the idea of a mathematical structure being
the cornerstone of mathematics. To proceed in this direction, the
leaders of the initial Bourbaki group, Dieudonné and Weil, led the
group in codifying ways that a few kinds of structures recur through-
out mathematics and explicating three basic classes of mathemati-
cal structures: order structures, algebraic structures, and topological
structures.
Some philosophers and other thinkers also came to the same
conclusion (Piaget, 1971; Resnick, 1997; Shapiro, 1997). Numbers
and geometrical shapes/forms are still mathematical objects actively
studied by mathematicians but they are treated as only special kinds
of structures among many others. Reflecting this peculiarity, the lan-
guage of mathematics has become the primary language of formal-
ized structures. Therefore, to understand how this language is used
for knowledge representation, we need to know what a structure is.
As North (2009) writes, the idea of a mathematical structure is
relatively straightforward. There are number structures, geometric
structures, topological structures, algebraic structures, and so forth.
Mathematical structure tells us how simpler mathematical objects
are organized to form more complex mathematical objects. That is
why, Dieudonné (1970) wrote: “I do not say it [M.B., structure] was
an original idea of Bourbaki — there is no question of Bourbaki’s
containing anything original.”
For instance, natural numbers are related by means of different
operations, such as addition or multiplication, e.g., 1 + 2 = 3 or
7×5 = 35. These operations form an algebraic structure on the set N
of all natural numbers. One more structure on the set N determines
order between natural numbers. For instance, 1 is less than 2, while
7 is larger than 5.
The traditional approach to a mathematical structure is repre-
sented by the following definition.

Definition 5.1.1. A mathematical structure is a set with relations
between its elements.

However, this definition is incomplete as it defines only structures of
the first order.
In their book on set theory, the group of mathematicians who used
the pseudonym Bourbaki (1960) constructed a much more sophisti-
cated definition of a structure, which is extremely formalized involv-
ing many other mathematical constructions and concepts. Being too
intricate, it was not used even by Bourbaki themselves in their other
works. Moreover, the Bourbaki’s theory of structures was not general
enough to apply to all the mathematical objects they needed, and
it was too complicated to use when it did apply. In particular, after
working with Grothendieck, the leader of the initial Bourbaki group
Dieudonné acknowledged that their theory of structures “has since
been superseded by that of category and functor, which includes it
under a more general and convenient form” (1970).
However, categories and functors are only some special kinds of
structures according to the most comprehensive definition of a struc-
ture, which is elaborated in the general theory of structures (Burgin, 2012)
and formulated below.

Definition 5.1.2. A mathematical structure Q consists of
parts/elements of Q and relations of Q, which form three groups:
(1) relations between parts/elements of Q; (2) relations between
parts/elements of Q and relations of Q; and (3) relations between
relations of Q.

Note that relations between relations of Q are also relations of Q
and thus, relations between relations between relations of Q, relations
between relations between relations between relations of Q and so
on belong to the structure Q. As a result, we have a potentially
infinite hierarchy of relations instead of only relations of the first level
taken into account in the conventional definition of a mathematical
structure.
The new concept of structure with all its forms allowed solving
the long-standing problem of understanding ideas of Plato and forms
of Aristotle. Structure gives a scientific explication of both concepts
(Burgin, 2012). In addition, it provides other insights. For instance, it
is possible to consider Aristotle as the forefather of structural realism.
Indeed, in his theory of thinking, Aristotle asserts that the realization
of a form of an object in one’s mind is as real as the instantiation of
the corresponding form in external reality and both forms exhibit
the same powers and properties and the same necessary relations to
other forms. As forms are special kinds of structures (Burgin, 2012),
this means that mental structures correctly represent structures in
nature and society.
Definition 5.1.2 comprises the concepts of a mathematical struc-
ture elaborated by Bourbaki and other mathematicians.
The order of a structure Q is determined by its relations.

Definition 5.1.3. If there are no relations in Q, then Q is a structure
of the zero order.
In this context, any set is a structure of the zero order with respect
to its inner structure (cf., Chapter 6).

Definition 5.1.4. (a) If Q has only relations between parts/elements
of Q, then Q is a structure of the first order.
(b) All relations between parts/elements of Q have the first order.
With respect to its internal structure (cf., Chapter 6), any set is a
structure of the first order. Any conventional structure is a structure
of the first order.

Definition 5.1.5. (a) Relations that involve relations of order n
have order n + 1.
(b) If Q has relations of order n, then Q is a structure of order n.


Note that a structure of order n can also have order n − 1.
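
To make Definitions 5.1.2–5.1.5 concrete, here is a minimal Python sketch (an illustration only; the class and function names are hypothetical) that stores a structure as its elements together with relations of arbitrary order and computes the order of the structure as the highest order among its relations.

from dataclasses import dataclass, field
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class Relation:
    # Definition 5.1.4(b)/5.1.5(a): relations between elements have order 1;
    # relations that involve relations of order n have order n + 1.
    name: str
    order: int

@dataclass
class Structure:
    elements: FrozenSet[str] = field(default_factory=frozenset)
    relations: Tuple[Relation, ...] = ()

    def order(self) -> int:
        # Definition 5.1.3: no relations means a structure of the zero order;
        # otherwise the order is the highest order among the relations.
        return max((r.order for r in self.relations), default=0)

# The set N of natural numbers with addition (a first-order relation) and,
# viewed as a relation on relations, the commutativity of addition (order 2).
N = Structure(
    elements=frozenset({"1", "2", "3", "..."}),
    relations=(Relation("addition", order=1),
               Relation("commutativity of addition", order=2)),
)
print(N.order())  # 2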

An emphasis on mathematical structures also had a marked influ-
ence outside mathematics. For instance, Bourbaki introduced three
basic types of mathematical structures (algebraic structures, order
structures, and topological structures) and Piaget (1964) found a
close correspondence between these types and the first operations
through which a child interacts with the world. Besides, in many
disciplines such as linguistics, sociology, anthropology, esthetics, eco-
nomics, and psychology, the structural approach became very pop-
ular shaping the direction of research called structuralism (Piaget,
1971; Burgin, 2012).
However, the traditional approach to mathematical structures has
been treated as too primitive and different researchers criticized it.
For instance, based on this understanding, Carter (2008) argues that
a set taken by itself may not be regarded as a structure. In addition,
she demonstrates that the relation that exists between the struc-
ture and the set on which this structure is defined (as a system
of relations) is important for mathematical practice. In his critique
of the structuralist view of mathematical objects, Parsons (1990)
demonstrates that mathematical objects include relations not only
to objects from the same set but also relations to other objects,
which Parsons calls external relations. Thus, the more general con-
cept of structure elaborated in (Burgin, 2010; 2011; 2012) eliminates
all objections to structural understanding of mathematical objects.
In particular, the general theory of structures (Burgin, 2012) discerns
five types of structures:

— Internal structures
— Inner structures
— External structures
— Intermediate structures
— Outer structures

Moreover, a limited understanding of the concept of structure
had existed in all fields of science, mathematics, and humanities.
To improve this understanding, the general theory of structures was
created in (Burgin, 2012). This theory demonstrates that any system
does not have a single structure, as was believed before, but several
structures of three types — inner structures, intermediate structures,
and outer structures. In addition, the new theory introduced the new
complete concept of structure and its mathematical formalization. As
a result, the theory of mathematical structures developed by Bour-
baki (1960) became a special formalized subtheory of the general
theory of structures.
Moreover, it is interesting to know that in mathematics there are
situations when objects with different structures are identified, that
is, treated as one and the same object. For instance, the natural num-
ber 3 is considered the same as the rational number 3. However, the
outer structure of the natural number 3 is the structure of the object
N of all natural numbers, while the outer structure of the rational
number 3 is the structure of the object Q of all rational numbers.
The inner structure of the natural number 3 is the named set (X, f ,
3) where X consists of (all) sets that have three elements and f con-
nects all these sets to the symbol 3, while the inner structure of the
rational number 3 is the named set (Z, r, 3) where Z consists of (all)
fractions 3/1, 6/2, 9/3, . . . and r connects all these fractions to the
symbol 3. This shows that mathematical objects are not structures,
at least, in that oversimplified sense that is traditionally assigned to
the concept of structure.
It is natural to ask why, in many cases, different struc-
tures are treated in mathematics as the same object. We can answer
this question by explaining that mathematics is a tool (mechanism)
for simplification through a rigorous generalization and unification.
Thus, when the natural number 3, the whole number 3, the inte-
ger number 3, the rational number 3, and the real number 3 are
comprehended as the same object, it provides a useful unification,
simplifying many cases of mathematical reasoning.
In 1970 in his thesis written at Moscow State University, Burgin
added a new dimension to this understanding demonstrating that
mathematics is a science of abstract structures in the same sense as
physics is a science of material structures because mathematics is
similar to any natural science having the theoretical, experimental,
and applied parts (cf., (Burgin, 1998)). Namely, while the domain of
physics is a part of the physical world, the domain of mathematics is a
part of the world of structures, which consists of structures described
and formalized in the mathematical language.
However, recent research shows that this is only an approximate
understanding. Achieving the next level of understanding, it is natu-
ral to conclude that mathematics is a science of systems of structures
although systems of structures are also structures.
To grasp the essence of the new understanding, we have to uncover
the difference between a (pure or abstract) structure and a sys-
tem. In a structure, elements/parts do not have other properties
and relations except those that belong to this structure. In a system,
elements/parts may have other properties and relations. For instance,
an algebraic (abstract) category is a (pure) structure, while the cat-
egory of groups is a mathematical system (of structures) because
its elements — groups — have inner structure. An abstract group
is a (pure) structure, while the group of n × n orthogonal matrices
or n × n unitary matrices is a mathematical system (of structures)
because its elements — matrices — have inner structure.
It is possible to read more about mathematical structures in
(Bourbaki, 1948; 1957; 1960; Shapiro, 1997; Resnick, 1997; 1999; Bur-
gin, 1998; 2011; 2012). Besides, any mathematical book introduces
and studies many specific mathematical structures such as numbers,
groups, spaces, triangles, graphs, formulas, operations, fields, and
measures.
Mathematics is intrinsically connected to logic. On one hand, the
main tool of formal mathematics is reasoning, which is formalized
and studied by logic. On the other hand, logic at its higher level
is formalized and developed as a mathematical discipline, which
is called mathematical logic (Russell, 1908; Church, 1956; Kleene,
2002). Thus, there are many structures in logic in general and in
mathematical logic, in particular. Examples of logical structures are
propositions, predicates, deduction rules, well-formed formulas and
calculi. Mathematical logic is mostly used for working with descrip-
tive knowledge although the area in mathematical logic called model
theory deals with representational knowledge, while another area
in mathematical logic called recursion theory studies operational
knowledge.
In comparison with mathematical logic, mathematics provides
extensive tools for representational knowledge in the form of math-
ematical models, which give knowledge about their domains. Maybe
the most well-known, popular, and useful mathematical mod-
els are differential equations, which represent knowledge about a
host of diverse phenomena in nature, technology, economy, and soci-
ety. Many fundamental laws of physics and chemistry are presented
in the form of differential equations. Biologists, physiologists, and
economists make use of differential equations modeling behavior of
complex systems. Let us consider some examples.
Physicists use the following models having the form of differential
equations:

• Maxwell equations form the foundation of classical electrodynam-
ics, classical optics, and electric circuits underlying modern elec-
trical and communications technologies (Maxwell, 1865).
• The Boltzmann equation describes the behavior of gases.
• The Poisson equation has many applications in electrostatics,
mechanical engineering, and theoretical physics.
• The Poisson–Boltzmann equation finds diverse applications in
physiology, polymer science, theory of biomolecular systems, and
semiconductor technology, describing the distribution of the elec-
tric potential in solution or electron interactions in a semiconduc-
tor (Fogolari et al., 1999; Gruziel et al., 2008).
• The Einstein field equations (also known as Einstein equations)
form the base of general theory of relativity that describes the
gravitational interactions as a result of spacetime being curved by
matter and energy (Einstein, 1915).
• The Schrödinger equation describes how the quantum state of a
physical system changes with time.
• Equations of motion, which include the famous Newton’s second
law of motion, describe how the position of a physical system
changes with time.
• The Korteweg–de Vries equation, first introduced by Boussinesq
(1877) and rediscovered by Diederik Korteweg and Gustav de Vries
(1895), is a mathematical model of waves on shallow water sur-
faces describing the string in the Fermi–Pasta–Ulam problem in
the continuum limit, shallow-water waves with weakly nonlinear
restoring forces, long internal waves in a density-stratified ocean,
ion acoustic waves in a plasma, and acoustic waves on a crystal
lattice.
• Hamiltonian mechanics is based on Hamiltonian equations.
• Lagrangian mechanics is based on Lagrangian equations.
Chemists use the following models having the form of differential
equations:
• The rate equation relates the reaction rate to the concentrations
or pressures of reactants and to constant parameters, normally rate
coefficients and partial reaction orders (Connors, 1991).
Biologists and physiologists use the following models having the
form of differential equations:
• The Lotka–Volterra equations describe the dynamics of biological
systems in which two species interact, one as a predator and the
other as prey.
• The Verhulst equation describes biological population growth.
• The Bertalanffy growth equation describes individual growth.
• The Hodgkin–Huxley equations give a mathematical model that
describes how action potentials in neurons are initiated and prop-
agated.
Economists use the following models having the form of differen-
tial equations:
• The Black–Scholes equation (also known as Black–Scholes–Merton
equation) is a mathematical model of a financial market with cer-
tain derivative investment instruments.
• The Solow–Swan equation is an exogenous growth model, an eco-
nomic model of long-run economic growth.
• The Sethi equation is an advertising model describing sales-
advertising dynamics (Sethi, 1983).
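
To illustrate how such a differential equation carries representational knowledge about its domain, here is a minimal Python sketch (our own toy example with hypothetical parameter values, not taken from the cited sources) that integrates the Verhulst (logistic) equation dP/dt = rP(1 − P/K) with a simple Euler step.

def logistic_growth(p0, r, k, t_end, dt=0.01):
    """Euler integration of the Verhulst equation dP/dt = r*P*(1 - P/K)."""
    p, t = p0, 0.0
    trajectory = [(t, p)]
    while t < t_end:
        p += dt * r * p * (1.0 - p / k)   # one Euler step of the model
        t += dt
        trajectory.append((t, p))
    return trajectory

# Hypothetical parameters: initial population 10, growth rate 0.3 per unit
# time, carrying capacity 1000; print every 500th point of the trajectory.
for t, p in logistic_growth(p0=10.0, r=0.3, k=1000.0, t_end=30.0)[::500]:
    print(f"t = {t:5.1f}   P(t) = {p:8.2f}")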

Such mathematical structures as groups represent important
knowledge in quantum physics (cf., Appendix for a definition of a
group). Physicists use the subtheory of group theory known as the
theory of group representations, which allows them, for example, to
classify all the observed spectroscopic states of atoms and molecules
(cf., for example, (Wigner, 1959; Schonland, 1965; Schensted, 1967)).
Another mathematical structure called a Lie algebra is also used
as a practical model in particle physics (Georgi, 1999).
One more mathematical structure called a manifold provides
means for building different kinds of representational knowledge in
physics (Choquet-Bruhat and DeWitt-Morette, 1982).
Such mathematical constructions as abstract automata and algo-
rithms represent operational knowledge in computer science and net-
work technology (Burgin, 2005).
Operational knowledge is also represented by such mathematical
structures as process algebras and process calculi. In computer sci-
ence, the algebraic approach is a popular and effective way to study
processes in multicomponent systems, such as the Internet, multi-
core computers and clusters of multi-core computers. To deal effi-
ciently with such problems, computer science developed concurrency
theory, which extensively and successfully utilizes algebraic struc-
tures and techniques. Researchers built many powerful models of
concurrency, which, to mention but a few, include:

— Petri nets (Petri, 1962),
— communicating sequential processes (CSP) (Hoare, 1985),
— calculus of communicating systems (CCS) (Milner, 1989),
— event-signal-process model (ESP) (Lee and Sangiovanni-
Vincentelli, 1996),
— view-centric reasoning (VCR) model (Smith, 2000),
— extended view-centric reasoning (EVCR) (Burgin and Smith,
2006),
— event-action-process model (EAP) (Burgin and Smith, 2006),
— process networks (Kahn, 1974),
— process algebras (Baeten, 1990; Baeten and Bergstra, 1991),


— dataflow process networks (Lee and Parks, 1995),
— discrete event simulators (Fishman, 1978),
— synchronization trees (Winskel, 1985),
— labeled transition systems (Sassone et al., 1996),
— grid automata (Burgin, 2003).
Looking at science, we see a big variety of scientific languages.
Although some hold that science is an applied mathematics (cf.,
(Grene, 1974)), this ideal model for science is not adequate and
each scientific discipline has its own language, which often includes
a mathematical language. While science has been developing with
its scientific languages, philosophers of science tried to explore and
explain these languages. Usually the beginning of this process is
ascribed to the beginning of the 20th century when logical posi-
tivists such as Friedrich Albert Moritz Schlick (1882–1936), Otto
Neurath (1882–1945), Herbert Feigl (1902–1988) and Rudolf Carnap
(1891–1970) started their exploration of science with the emphasis
on scientific languages. In the picture of science developed by logi-
cal positivists, the language of a scientific theory consists of a set of
symbols, which form the alphabet and include terms of the theory,
and of the rules to generate formulas, which are correct with respect
to syntax and called well-formed formulas (wff).
Each scientific theory has two types of terms — logical/
mathematical and non-logical. It is assumed that the set of logi-
cal/mathematical terms of a scientific theory includes logical sym-
bols, e.g., connectives and quantifiers, and mathematical expressions
and objects, e.g., numbers, derivatives, and integrals. At the same
time, non-logical terms of a scientific theory belong to two classes —
observational and theoretical terms. They are used for denoting
(naming) physical objects, their properties and relations, such as
white, warm, longer than, electron, electromagnetic field, or quark.
Formulas and expressions of a scientific theory, as well as the corre-
sponding statements, are divided into five classes:
(i) Logical and mathematical expressions (statements), which do
not contain non-logical and non-mathematical terms.
(ii) Purely observational expressions (statements), which contain
observational terms but not theoretical terms.
(iii) Purely theoretical expressions (statements), which contain the-
oretical terms but not observational terms.
(iv) Expressions (statements) that contain both theoretical and
observational terms.
(v) Descriptions or rules of correspondence between observational
and theoretical terms, which contain both observational and
theoretical terms.
However, this picture of scientific languages is oversimplified. For
instance, science often uses metaphors as representational knowledge
because, as MacCormac writes, without them it would be impossible
to pose a new hypothesis intelligibly (MacCormac, 1976). Some sci-
entific metaphors are reflected in scientific terms. For instance, only
vectors grow in vector fields, which are abstract mathematical struc-
tures useful for physics. Groups in mathematics are essentially dif-
ferent from groups of people and abstract rings in algebra have noth-
ing in common with rings, which people are wearing. We can take
the planetary model of an atom introduced by Ernest Rutherford
(1871–1937) as an example of a more complex metaphor in science.
In this model, the Solar System is used as a metaphor for explain-
ing the structure of an atom giving a picture of a few planet-like
electrons rotating around the nucleus of the atom. Although it was
later demonstrated that this model cannot be adequate, this metaphor
suggested by Rutherford and developed by Bohr is still utilized for
educational purposes and as a symbol for atoms and atomic energy.
However, scientific metaphors play an important role not only in
explanation of scientific results but also in obtaining these results,
that is, in scientific cognition (Leatherdale, 1974; Rothbart, 1997).
Metaphors give creative insights for understanding unknown phe-
nomena, as well as generate the basis for concept formation. Exam-
ples are black holes and white dwarfs, which are popular objects in
cosmology.
A formal theory of metaphor based on the theory of abstract prop-
erties presented in Section 5.3 is developed in (Burgin and Rothbart,
1998).

To conclude, we consider the role of mathematics in human cog-
nition. As many know, physicists are trying to create a theory of
everything. However, this is impossible because physics studies only
physical, that is, material systems. Therefore, in the best case, physi-
cists will be able to create a theory of everything on the level of
physics. However, for example, people are studied not only on this
level but primarily on the biological, psychological, and social lev-
els. This shows that a physical theory of everything can describe the
world only from its material side.
At the same time, we already know that everything has a struc-
ture and, as a rule, not just one. Thus, the general theory of struc-
tures is a theory of everything as it studies everything. Mathematics,
for example, is such a general theory of formal structures. That
is why mathematics studies everything by formalized methods and
techniques, being a theory of everything on its structural level and
providing structural models (as units of representational knowl-
edge) for a diversity of various phenomena in nature, society, and
technology.

5.1.3. Algorithmic and programming languages


By relieving the brain of all unnecessary work,
a good notation sets it free to concentrate on more advanced problems,
and, in effect, increases the mental power of the race.
Alfred North Whitehead

Creation of electronic computers instigated creation of algorithmic
and programming languages for representation of operational knowl-
edge. If for a long time operational knowledge had been expressed
informally by means of natural languages and semi-formally by
means of mathematical languages, computers demanded more exact
description of actions and operations. Besides, mathematics also
required exact sufficiently general representation of operational
knowledge. To satisfy these needs, the general concept of algorithm
was introduced and formalized.
There are different approaches to defining algorithm. As an infor-
mal notion, algorithm has a variety of interpretations and definitions.
For instance, a point of view on algorithms popular in mathematics
is presented by Rogers (1987):

Algorithm is a clerical (i.e., deterministic, bookkeeping) procedure which
can be applied to any of a certain class of symbolic inputs and which will
eventually yield, for each such input, a corresponding output.

In this definition, “procedure” is interpreted as a system of rules
that are arranged in a logical order, and each rule describes a specific
action, while an algorithm is treated as a kind of procedure. “Cleri-
cal” or “bookkeeping” means that it is possible to perform according
to these rules in a mechanical way, so that a device is actually able
to carry out these actions.
Unfortunately, this definition is incomplete because algorithms
used by people do not always produce an output for each relevant input,
and it has even been proved that there is no algorithm discerning when it
is possible to obtain such an output.
Donald Knuth (1971), a well-known computer scientist, defines
algorithm as follows:

An algorithm is a finite, definite, effective procedure, with some output.

In this definition, “finite” means that it has a finite description
and there must be an end to the work of an algorithm within a
reasonable time. “Definite” means that it is precisely definable in
clearly understood terms, no “pinch of salt” type vagaries, or possible
ambiguities. “Effective” means that a device is actually able to carry
out actions prescribed by an algorithm.
The concluding words “with some output” allow different inter-
pretations. Some understand the condition to give output assuming
that an algorithm always gives a result. As we have seen, this is not
true for all practical algorithms. It is also possible to comprehend
these words as the condition that algorithm in some cases gives the
output. This better correlates with computing practice and theory,
which came to a broader understanding, according to which algo-
rithms are aimed at producing results, but in some cases cannot do
this. This shows that Knuth’s definition is too imprecise.

In the Free Online Dictionary of Computing (http://foldoc.
doc.ic.ac.uk/) algorithm is defined as a detailed sequence of actions
to perform to accomplish some task.
A more exact definition, consistent with computational prac-
tice, is given in (Burgin, 2005), where it is written:

An algorithm is an unambiguous (definite) and adequately simple to fol-
low (effective) prescription (organized set of instructions or rules) for
deriving necessary results from given inputs (initial conditions).
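
As a small illustration of such a prescription, here is Euclid's algorithm for the greatest common divisor written as a short Python function (our own example, not taken from the cited sources): it is a finite, definite, and effective set of rules that derives a necessary result from given inputs.

def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: an unambiguous, effective prescription that
    derives the greatest common divisor from the given inputs."""
    while b != 0:
        a, b = b, a % b   # each step is a simple, mechanically executable rule
    return a

print(gcd(252, 198))  # 18

In contrast, a prescription that, say, searches for a counterexample to an unproved conjecture may run forever on some inputs, which is exactly the situation noted above in which an algorithm does not yield an output.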

Programmers, mathematicians, and computer scientists have
developed a variety of special languages for representing various algo-
rithms.
An algorithmic language is a formal language oriented at describ-
ing algorithms in a formal way. It is possible to separate two
classes of algorithmic languages: abstract or theoretical algorith-
mic languages and programming languages. Languages from the first
class are used in the theory of algorithms, automata, and com-
putation for mathematical representation of algorithms (cf., for
example, (Hopcroft et al., 2007; Burgin, 2005)). Examples of such
languages are:

— The language of Turing machines;
— The language of neural networks;
— The language of finite automata;
— The language of formal grammars;
— The language of automata with structured memory;
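
As a tiny illustration of how one of these theoretical languages, that of finite automata, expresses operational knowledge, the following Python sketch (our own toy example) encodes a deterministic finite automaton that accepts binary strings containing an even number of 1s.

# A deterministic finite automaton as a piece of operational knowledge:
# states, a start state, accepting states, and a transition table.
STATES = {"even", "odd"}
START = "even"
ACCEPTING = {"even"}
TRANSITIONS = {("even", "0"): "even", ("even", "1"): "odd",
               ("odd", "0"): "odd", ("odd", "1"): "even"}

def accepts(word):
    state = START
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPTING

print(accepts("10110"))  # False: three 1s
print(accepts("1001"))   # True: two 1s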

A programming language is a formal language designed for describ-
ing computational processes in a compressed, exact form or, equiva-
lently, for writing down algorithms to be executed by computers.
To this day, hundreds of programming languages have been
created but the majority of them are not used for practical
programming.
There are two basic classes of programming languages: high-level
programming languages, which are not related to any specific com-
puter, and low-level or machine-oriented programming languages,
which take the specific features of a given type of computers, such as


instruction set or addressing modes, into account.
In turn, high-level programming languages are usually divided
into two classes: problem-oriented and universal programming lan-
guages. Universal programming languages also include several sub-
classes.
There are several main types of programming languages:

— Array programming languages, also called vector or multidimen-
sional programming languages, utilize operations with vectors,
matrices, and higher-dimensional arrays.
— Assembly programming languages use machine instructions coded
in a form understandable by people.
— Message-passing programming languages are designed for pro-
gramming concurrent processes.
— Dataflow programming languages organize computations with
respect to the flow of data, often specifying the process in a visual
form.
— Decision tables as programming languages express the logic of
computation in the form of a decision table.
— Declarative programming languages, which include functional and
logic-based programming languages, describe a problem rather
than defining the process of its solution.
— Imperative programming languages construct programs as serial
orders (imperatives) given to a computer.
— Functional programming languages define programs and subrou-
tines as mathematical functions.
— Semantic programming languages represent all programs and data
as one distributed interconnected graph with the ‘node-edge’ as the
basic building block.
— Logic-based programming languages specify a set of attributes
that a solution must have, rather than prescribing the process
to obtain a solution.
— Procedural programming languages describe the procedure of com-
putation with programs composed of one or more units or mod-
ules that encode procedures.
— Object-oriented programming languages combine data, and meth-
ods of manipulating the data in a single unit called an object.
— Symbolic programming languages describe programs that are able
to manipulate formulas and program components as data.

Similar to natural languages, a programming language is con-
structed over an alphabet of basic symbols, in which programs are
written down in the form of a hierarchical system of grammatical
elements, between which relations are given (similarly to the words,
phrases, and sentences in a natural language, whose connections are
given by syntactic rules). The lowest level elements, formed by chains
of basic symbols, are called lexemes, or lexical units. For the lexemes
occurring in a program the class to which they belong is defined, and
for certain classes of lexemes (e.g., identifiers) also their scope —
some uniquely identifiable part of the program to which all occur-
rences of a given lexeme belong (a block). Exactly one occurrence
of such a lexeme is said to be defining; the other occurrences of the
lexeme in its scope are called applied.
The further levels of elements of an algorithmic language are
formed by notions, or non-terminals. The relation that may hold
between the notions of an algorithmic language is that of being a
(direct) constituent of (i.e., an immediate constituting part), while
the individual constituents of a given notion are related to each other
by concatenation (textual sequence). The transitive closure of the
constituent relation uniquely assigns to each notion some subword of
the text of the program, which is said to be the (terminal) production
of this notion. There is one initial notion, the production of which
is the entire program text. A tree whose root is the initial notion,
whose terminal vertices (leaves) are lexemes and basic symbols, whose
internal vertices are notions and whose branches are constituent
relations, is called a production or syntax tree of a program. The
construction of such a tree is known as the syntactic analysis or
parsing of a program.
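
As a minimal illustration of these notions (our own toy example, not a description of any real compiler), the following Python sketch parses arithmetic expressions over single digits, '+' and '*' into a syntax tree whose leaves are lexemes and whose internal vertices are notions such as "expression" and "term".

# Toy grammar:  expression -> term ('+' term)*     term -> digit ('*' digit)*
def parse_expression(tokens, pos=0):
    node, pos = parse_term(tokens, pos)
    children = [node]
    while pos < len(tokens) and tokens[pos] == "+":
        children.append(("lexeme", "+"))
        right, pos = parse_term(tokens, pos + 1)
        children.append(right)
    return ("expression", children), pos

def parse_term(tokens, pos):
    children = [("lexeme", tokens[pos])]        # a digit lexeme
    pos += 1
    while pos < len(tokens) and tokens[pos] == "*":
        children.append(("lexeme", "*"))
        children.append(("lexeme", tokens[pos + 1]))
        pos += 2
    return ("term", children), pos

tree, _ = parse_expression(list("2*3+4"))
print(tree)
# The root is the notion "expression"; its constituents are two "term"
# notions and a "+" lexeme, and the leaves of the tree are the lexemes.
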
Sometimes, practitioners do not make a specific distinction
between the concepts of “programming language” and “algorithmic
language”. For instance, one of the first programming languages was
called Algorithmic Language (Algol).
To conclude, it is important to understand that now, due to the
proliferation of computers, the most popular ways of operational
(procedural) knowledge representation are computer programs and


algorithms.
It is possible to read more about the classical approach to algo-
rithms and computer programs, for example, in (Machtey and Young,
1978; Shoenfield, 2001; Knuth, 1997; Berlinski, 2001; Manin, 1991)
and about a more advanced approach to algorithms in (Burgin,
2005).

5.2. Logic as a tool for knowledge representation and production

The logic of the world is prior to all truth and falsehood.


Ludwig Wittgenstein

As a separate discipline, logic has origins in philosophy and goes
back, at least, to Aristotle (384–322 B.C.E.). Therefore, it is natural
that the word logic originates from the Greek word logos, which has
acquired three meanings in the process of its development. According
to one of them, logic is a specific research (cognitive) field, while the
other interpretation exposes logic as a definite structure, which can
be formal or informal. The third (mundane) understanding regards
logic as a way of reasoning (cf., for example, (Agre, 2000; McCarthy
et al., 2004; Smehov, 1987)). As a field, logic exists in philosophy and
mathematics having the form of the discipline of formal principles of
reasoning and correct inference. As per modern understanding, logic
as a discipline is the study of correct reasoning by structural (often
formal) methods. Some understand logic as an analysis of laws of
thought.
The term logic as a formal (mathematical) structure has two
meanings. On one hand, a logic L consists of a logical language
L together with a deductive system (logical calculus) and/or truth
semantics. Sometimes a model-theoretic semantics (interpretation)
is also included. The language corresponds to a part of a natural
language like English or Spanish. The deductive system (logical cal-
culus) is developed to record, capture, and codify the inferences that
are correct for the given language, and the truth semantics
is built to reflect, capture, and codify the meanings, in the form of
truth-conditions, or possible truth conditions, for at least part of the
language L. It is also called a logical semantics.
On the other hand, there is an opinion that a logic L consists
only of inference/deduction rules and rules of interpretation. What
is done according to these rules is called logical, while what is in
dissonance with these rules got the name illogical. However, there
are different systems of logical rules and what is called logical with
respect to one system may be illogical with respect to another system
of logical rules.
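
To make this description of a logic as a language together with a deductive system concrete, here is a minimal Python sketch (a toy of our own, not a formalization taken from the text) of a propositional fragment whose only inference rule is modus ponens: from A and A → B, derive B.

# A tiny deductive system: a set of accepted propositions and a list of
# implications, closed under the single inference rule modus ponens.
def deductive_closure(facts, implications):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in implications:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)     # one application of modus ponens
                changed = True
    return derived

facts = {"Aristotle lived in Athens"}
implications = [("Aristotle lived in Athens", "Aristotle spoke Greek")]
print(deductive_closure(facts, implications))
# prints a set containing both the premise and the derived conclusion
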
Logic as a way of reasoning emerged millennia ago. It was the first
stage of logic as a discipline. We can see argumentation and inference
in such ancient texts as the Bible and Indian Vedas. The second stage
of logic as a discipline is characterized by explicit formulation of rules
for logical reasoning. The third stage was formation of logic as a
separate discipline, which happened in ancient Greece and ancient
India.
Although the roots of contemporary logic are in the logic of
ancient Greeks and especially in the logic developed by Aristotle and
called syllogistics, even before Aristotle, the development of logics
started in ancient India. The first known Indian logician Medhatithi
Gautama (ca. 6th century B.C.E.) founded the anviksiki school of
logic. The classical text Mahabharata, originated around the 5th cen-
tury B.C.E., refers to the anviksiki and tarka schools of logic. Another
Indian logician Pāṇini, who lived around 5th century B.C.E., devel-
oped a form of logic in the process of formalization of Sanskrit gram-
mar. In his work Arthashastra, philosopher Chanakya (ca. 350–283
B.C.E.) described logic as an independent field of inquiry anviksiki
(Ganeri, 2001). However, some authors conceive that in India, logic
was never developed as a distinct discipline but was always embedded
in epistemology (Matilal, 1998).
Jain logic developed and flourished from 6th century B.C.E. to
17th century C.E. making its own unique contribution to Indian
logic and taking logic as the foundation for the theory of knowledge.
According to Jain theory of knowledge, ultimate principles should
always be logical and no principle can be devoid of logic or reason.

In essence, the development of logic in India continued from
ancient times to our days. It is possible to mention the analysis of
logical inference by Gotama (ca. 2nd century C.E.), who founded
the Nyaya school of Hindu classical philosophy, and the tetralemma
of Nagarjuna (ca. 2nd century C.E.).
Indian Buddhist logic (called Pramana) flourished from about 5th
century C.E. up to 13th century C.E. The three main authors of
Buddhist logic are Vasubandhu (between 4th and 8th centuries C.E.),
Dignāga (480–540 C.E.), and Dharmakīrti (600–660 C.E.).
Indian logic had many results that surpassed European logic not
only at that time but even much later. For instance, Buddhist logic
used four forms of predication:

1. S is P, e.g., “a square is a rectangle” or “English is a language”.
2. S is not P, e.g., “a square is not a circle” or “Poland is not a
language”.
3. S is and is not P , e.g., if a ball is partially green and partially
yellow, we can say “the ball is green and is not green” or “there
is and is no mathematical straight line”.
4. S neither is nor is not P , e.g., if a ball is partially green and
partially yellow, “the ball neither is green nor is not green” or
“the world of ideas neither is real nor it is not real”.

One more example is the syllogism developed in Indian logic, or,
more exactly, in the Nyaya logic, and called Prayoga. It has five parts (Rao,
1998):

1. Pratijñā (proposition or hypothesis);
2. Hetu (reason);
3. Udāharana or Drstanta (example);
4. Upanaya (application);
5. Nigamana (conclusion).

Here is an example of application of the Nyaya syllogism.

1. The proposition (hypothesis): the house is on fire;
2. The reason: the smoke is around the house;
3. The example: fire is accompanied by smoke, as in the kitchen;
4. The application: as in kitchen, so for the house;


5. The conclusion: therefore, the house is on fire.

At the same time, Aristotle’s syllogism, which was later trans-
formed into the deduction rule modus ponens, has only three parts:

1. Major premise;
2. Minor premise;
3. Conclusion.

Here is an example of application of Aristotle’s syllogism.

1. Major premise: People who lived in Athens spoke Greek.
2. Minor premise: Aristotle lived in Athens.
3. Conclusion: Aristotle spoke Greek.

In China, Mo Di (ca. 470–ca. 391 B.C.E.), also called Mozi or Mo
Tzu, who lived shortly after Confucius during the Hundred Schools
of Thought period, founded the Mohist school of philosophy, which,
in particular, studied problems of valid inference and rules for correct
conclusions. Mo Di introduced the “three-prong method” for testing
the truth or falsehood of statements. His followers later expanded
on this approach founding the School of Names, representatives of
which are often called logicians. As its name shows, philosophers
from this classical direction in Chinese philosophy were interested in
language, disputation, and metaphysics, focusing on names (ming in
Chinese, which also means words) and their relation to “stuff” (shi,
objects, events, situations). These thinkers lived mostly in the 2nd
century B.C.E. So, in essence, the Chinese, like many other cultures, used
various logical forms in their reasoning and studied some problems
of logic, e.g., related to names, without developing explicit logical
systems (Weimin, 2009).
In Europe, the development of logic continued after Aristotle cre-
ated syllogistics. For instance, Theophrastus of Eresus (ca. 371–ca.
286 B.C.E.), who was Aristotle’s successor as the head of his school
at Athens, studied hypothetical syllogisms. Stoic philosophers made
an essential contribution to logic starting with Euclid of Megara (ca
435–ca 365 B.C.E.), a pupil of Socrates and slightly older contempo-


rary of Plato. The ancient Stoics developed modality in logic, built
a theory of the material conditionals, studied relations between time
and truth, introducing tense operators (cf., (Mates, 1953)). Besides, the
first employment in the Western philosophical tradition of the notion
of proposition, in roughly modern sense, is also found in the writings
of the Stoics (McGrath, 2014). In the 3rd century B.C.E., Zeno and
his followers distinguished the material aspects of texts from their
content, which was called lekta. In turn, lekta included axiomata, or
the meanings of declarative sentences. For the Stoics, only axiomata,
and not the texts used to represent them, were able to be true or
false.
In spite of its bright beginning, the European logic stopped its
development for many centuries, while the mainstream of logic as a
discipline shifted to the East, where an interesting development of
logic could be attributed to Islamic thinkers. At first, some forms of
analogical reasoning, inductive reasoning and categorical syllogism
were introduced in Islamic jurisprudence (law) and theology. This
process started in the 7th century C.E. before the Arabic transla-
tions of Aristotle. Later Arabs incorporated mathematics, logic and
philosophy of ancient Greeks translating their works into Arabic and
started developing their own logic. The great Islamic polymath Abū
Alī al-Husayn ibn AbdAllāh ibn Al-Hasan ibn Ali ibn Sīnā (980–
1037), who was known as Avicenna (Latinate form of Ibn-Sīnā) in
Europe, created his own logic known as “Avicennian logic” as an
alternative to Aristotelian logic. In addition, he developed a the-
ory of definition and classification, elaborated an original theory on
“temporal-modal” syllogism with temporal modifiers such as “at
all times”, “at most times”, and “at some time” and introduced
quantification of the predicates in categorical propositions. By the
12th century, Avicennian logic was predominantly used in the Islamic
world replacing Aristotelian syllogistics. It is interesting to know that
Ibn Sīnā was a prolific writer: by historical evidence, he wrote
around 450 works, of which 240 have survived, including 150 works
in philosophy and 40 works in medicine.

Other Islamic thinkers also developed logic. For instance, Ibn
Hazm (994–1064) wrote the Scope of Logic, in which he discussed
sense perception as a source of knowledge. Al-Ghazali (1058–1111)
applied Avicennian logic to Islamic theology Kalam. Fakhr al-Din
al-Razi (b. 1149) developed a form of inductive logic. Ibn Taymiyyah
(1263–1328) argued that inductive reasoning was more useful than
the syllogism.
Judaic logic as a form of grounded reasoning is already present
in The Bible. Later, advanced hermeneutic and inference rules called
midot were developed and extensively used by the authors of the Tal-
mud (2nd–3rd century C.E.). These rules went much further than the
syllogistics of Aristotle but were informal.
Historians discern three basic sets of rules: the seven rules of Hil-
lel, the thirteen rules of Rabbi Ishmael b. Elisha and the thirty two
rules of Rabbi Eliezer ben Jose HaGelili (Sion, 2010; Klein, 2013).
Rabbi Akiva also introduced some rules of inference (exegesis).
Hillel, also called Hillel HaGadol, or Hillel HaZaken, or Hillel
HaBavli, (ca. 60 B.C.E.–20 C.E.) was a famous Jewish religious
leader and sage, whose views were recorded in the Talmud and many
other books. According to some sources, he lived approximately from
110 B.C.E. to 10 C.E., while other sources give another period —
approximately from 60 B.C.E. to 20 C.E.
Jewish scholars used the Seven Rules of Hillel long before Hillel
was born but he was the first to write them down. That is why they
have his name. Here is the contemporary understanding of the Seven
Rules of Hillel:

1. Kal wa-homer (which means Light and heavy in English):
It is possible to apply a law stated for one case to another case
when the second case gives even more reasons for application of
this law. This situation is often indicated by the phrase “how much
more. . .”
The Rabbinical sources describe two forms of this rule:
• Kal wa-homer meforash when the argument appears explicitly.
• Kal wa-homer satum when the argument is only implied.
2. Gezerah shawah (which means Equivalence of expressions in
English):
It is possible to apply the same rules (considerations) to two
separate texts when they are analogical based on a similar phrase,
word or root, e.g., when the same words describe two separate
cases.
3. Binyan ab mi-katub ehad (which is translated as Building up a
“family” from a single text):
If several passages have a common characteristic, then it is pos-
sible to apply a law stated for one of them to all of them.
4. Binyan ab mi-shene ketubim (which is translated as Building up
a “family” from two or more texts):
When a law is deduced by comparing two passages, then it is
possible to apply this law to other passages that have character-
istics common with the two initial passages.
5. Kelal u-Perat (which is translated as The general and the partic-
ular):
A general principle may be restricted by a particularization of
it in another passage — or, conversely, a particular rule may be
extended into a general principle.
6. Ka-yoze bo mi-makom aher (which means Analogy made from
another passage):
Application of laws may seem to conflict in two passages and
to resolve the contradiction, these passages are compared to a third
one, which has general though not necessarily verbal similarity to
them.
7. Davar ha-lamed me-’inyano (which means Explanation obtained
from context):
The total context, not just the isolated statement must be con-
sidered for an accurate inference (exegesis).

Rabbi Ishmael “Ba’al HaBaraita” or Ishmael ben Elisha (90-135)
was a rabbinic sage, whose views were recorded in the Talmud and
who extended the seven rules of Hillel obtaining what is now called
the thirteen rules of Rabbi Ishmael:

1. It is possible to apply a law stated for one case to another case
when the second case gives even more reasons for application of
this law. This rule is actually the same as the first rule of Hillel
and is also called Kal wa-homer in Hebrew.
2. If the same words and expressions are used in two parts of the
text, then what applies to the situation described in one part
also applies to the situation described in the other part. This rule
is actually the same as the second rule
of Hillel and is called Gezerah shawah in Hebrew.
3. Rules deduced from a single passage of the text or from two
passages. This rule is a combination of the third and fourth rules
of Hillel and is called Binyan av in Hebrew.
4. Application of a general law is clarified (e.g., explained and sup-
ported) by its particular case. This rule is similar to the fifth rule
of Hillel and is called Kelal u-Perat in Hebrew.
However, there is another rendering of this rule:
If at first, a law is given in a general form and after this, in its
particular form, then this law is applied only in the particular
form.
5. It is possible to extend a particular law to a general law if the
general case goes after the particular case. This rule is also similar
to the fifth rule of Hillel and is called u-Perat u-Kelal in Hebrew.
6. When a law is applied to a general case, then applied to a partic-
ular case, and then again applied to the general case, it follows
that it is possible to apply this law only for a part of a general
case that is similar to the particular case. This rule is called Kelal
u-Perat u-Kelal in Hebrew.
7. Elucidation of a general application of a law is performed by a
particular application and clarification of the particular applica-
tion is performed by the general application. Utilization of this
rule excludes employment of the three previous rules. This rule is
called Kelal she-hu tzarik le-Perat u-Perat she-hutzarik le-Kelal
in Hebrew, which means the general requires the particular and
the particular requires the general.
8. When a particular instruction implied by a general law is consid-
ered separately based on a special regulation, then the general
law has to be applied according to the application of the par-
ticular instruction. This rule is called cal davar shehayah bechlal
v’yatza in Hebrew.
9. When a general law is applied to a special case without some
conditions used in this law, then in this application, the omitted
conditions are not taken into account. This rule is called v’yatza
leton . . . ch’enyon in Hebrew.
10. When a general law is applied to a special case with additional
conditions but without some conditions used in this law, then in
this application, the new conditions are used, while the omitted
conditions are not taken into account. This rule is called v’yatza
leton . . . shelo ch’enyon in Hebrew.
11. When a particular case is excluded from a general law, then in
future, the general law is applied to this case only when it is
directly prescribed. This rule is called v’yatza ledon in Hebrew.
12. Inference (exegesis) is made either from the context or from
a later reference in the passage containing this law. This
rule is an extension of the seventh rule of Hillel and is
called Davar ha-lamed me-’inyano ve davar ha-lamed mi-sofo in
Hebrew.
13. When application of laws may seem to conflict in two passages,
then this contradiction must be solved by reference to a third
passage, which has general though not necessarily verbal simi-
larity to the first two. This rule is the same as the sixth rule of
Hillel and is called Ka-yoze bo mi-makom aher in Hebrew.

Eliezer ben Jose, also called Eliezer ben Yose HaGelili, was a
renowned Jewish rabbi, who lived in Judea in the 2nd century, was
a student of the famous Rabbi Akiva (ca. 50–135) and whose views
were recorded in the Talmud. In particular, Rabbi Eliezer ben Jose
elaborated the thirty-two rules of Rabbi Eliezer intended for inference
of haggadic interpretation.

As we can see, inference rules of Jewish sages include deduction
and abduction, e.g., analogy, shaping a semiformal operational logical
calculus, and implying non-monotonicity of inference in the case of
inconsistent knowledge.
This shows that parallel to the logic of propositions, operational
logic has been developing in hermeneutics and legal area.
It is necessary to remark that in addition to non-conventional
reasoning, Jewish logic essentially employed not only logically con-
sistent calculi but also logical varieties (cf., Section 3.1) representing
opinions of different sages. As these opinions often contradicted one
another, a logical formalization of such a fundamental text as Tal-
mud, which is the focal treatise of Rabbinic Judaism, was possible
exclusively in the form of a logical variety. From the perspective of
traditional logic, Talmud is a collection of contradictions, and only
elaboration of new logical constructions — logical varieties, prevari-
eties and quasi-varieties — makes it possible to represent Talmud in
a logically coherent way.
In spite of the highly developed informal logic, which existed in
the Jewish tradition, the first work specifically on logic in the classical
understanding authored by a Jew appeared in the 12th century. It
was the “Makalah fi Sana’at al-Mantik” written by the great Jewish
sage Moshe ben Maimon (1135–1204), also known as Maimonides or
Rambam or Mūsā ibn Maymūn.
Later (in 1319) another great Jewish philosopher and mathemati-
cian Levi ben Gerson (1288-1344), mostly known as Gersonides or
the Ralbag, wrote a logical tract Sefer Ha-heqesh Ha-yashar (On
Valid Syllogisms), in which he examined problems associated with
Aristotle’s modal logic and which was translated into Latin with-
out Gersonides’ name attached to it (Manekin,1992; Simonson, 2000;
Rudavsky, 2015).
In Europe, the study and then the further development of logic
resumed after the Dark Ages. The first known philosopher of this
period who contributed to logic was Peter Abelard (1079–1142). In his works,
he discussed problems of conversion, opposition, quantity, quality,
tense logic, a reduction of de dicto to de re modality, and some oth-
ers. Abelard also clearly formulated several semantic principles such
as the bi-conditional for the theory of truth, which was introduced by
Tarski in the 20th century. However, Abelard rejected this approach.
Philosophers think that the most important contribution to logic
made by Abelard was the clear formulation of a pair of relevant cri-
teria for logical consequences.
Later such philosophers as William of Ockham (ca. 1287–1347),
Jean Buridan (ca. 1295–after 1358), and Albert of Saxony (ca. 1320–
1390) developed a supposition theory, which studied how predicates
(e.g., “being a cat”) range over a domain of individuals (e.g., the
set of all cats). Buridan also elaborated a theory of consequences
based on a synthesis of entailments and inference rules. In his inves-
tigation of syllogistics, Buridan suggested some kind of completeness
proofs.
Besides, Jean Buridan, William of Ockham and Albert of Saxony
built the theory of consequences, which employed hypothetical,
conditional propositions, i.e., two propositions connected by the
term ‘if . . . , then’. In particular, Ockham made a clear distinc-
tion between material consequences similar to the modern material
implication and formal consequences similar to the modern logical
implication.
Albert of Saxony also examined various sentences that were dif-
ficult for interpretation due to the presence of words (terms) that,
according to medieval logicians, did not have a proper and deter-
minate signification but only modified the signification of the other
words (terms) in the propositions where they appeared. In his analy-
sis of epistemic verbs and infinity, Albert explained that a proposition
had its own signification signifying a “mode of a thing” distinctive
from signification of its terms. Based on this approach, he analyzed
paradoxes of self-reference demonstrating that since every proposi-
tion, by its very form, signified that it was true, an “insoluble” propo-
sition would turn out to be false because it would signify at once both
that it was true and that it was false.
After this, logic was successfully developed in Europe. Philoso-
phers wrote treatises and published books on logic. Traditional logic
started with Antoine Arnauld’s and Pierre Nicole’s Logic, or the Art
of Thinking, better known as the Port-Royal Logic (Arnauld and
Nicole, 1683). It was probably the most influential work on logic
in England until the 19th century. In essence, almost all outstand-
ing European philosophers contributed to logic. At approximately
the same time, John Poinsot (1589–1644) published his book Logic,
Francisco Suarez (1548–1617) published his book Metaphysical Dis-
putations and Giovanni Girolamo Saccheri (1667–1733) published his
book Logica Demonstrativa.
Gottfried Wilhelm Leibniz (1646–1716) had many revolutionary
ideas in logic developed mostly between 1670 and 1690 although the
majority of them were not published during his life. The central aim
of Leibniz in logic was to extend the traditional syllogistic to a Uni-
versal Calculus, in which “all mistakes in reasoning will at once show
up in a wrong combination of characters, and therefore the appli-
cation of the characteristic script provides a means to discover the
mistake in a disputed point like in every other calculation”. Although
researchers found several drafts of candidates for such a calculus,
none of these writings was eventually sent to press.
Philosophers divide logical works of Leibniz into four directions:

1. Studies in the theory of the syllogism.


2. Works on the Universal Calculus.
3. Development of propositional logic.
4. Advancement of modal logic.

When separate writings and fragments of Leibniz were joined
together, researchers found an impressive system of four calculi:

• The algebra of concepts LC, which is deductively equivalent to the
Boolean algebra of sets;
• The quantificational system LQ with “indefinite concept” functions
playing the role of quantifiers ranging over concepts;
• A propositional calculus of strict implication obtained from LC
by the strict analogy between the containment relation between
concepts and the deduction relation between propositions;
• The so-called “Plus-Minus-Calculus” as abstract system of “real
addition” and “subtraction” in the sense of Leibniz.

Thus, we can see that although logic was developed in different
countries, the mainstream of contemporary logic has been formed
under the influence of the classical Greek logic.
Traditionally, contemporary logic as a discipline is divided into
two big areas: informal logic and formal logic. Informal logic stud-
ies natural language argumentation, identifying errors in reasoning,
which are called fallacies. Different types of fallacies have been found
and studied. For instance, a red herring is a fallacy in which an
irrelevant topic is presented in order to divert attention from the
original issue. Here are some examples of informal logics:

— Popper’s logic of scientific discovery (Popper, 1972);


— scholastic logic (cf., (Perrier, 1909));
— dialectic or dialectical logic (Hegel, 1813);
— universal logic (Beziau, 2012);
— Logic in Reality (Lupasco, 1951; Brenner, 2008);
— logic of science or logic of physics (Bridgman, 1927);
— logic of epistemology (Hintikka and Hintikka, 1988);
— logic of diagnosis (Tarasov et al., 1989);
— market logic (Agre, 2000);
— logic of distributed systems (Barwise and Seligman, 1997).

Philosophers also treat formal (symbolic) logic and dialectical
logic as separate logics independent of each other (Routley, 1979).
Formal logic studies inference with purely formal content and
consists of two subdisciplines: symbolic logic and non-symbolic
logic. Aristotle’s syllogistics is an example of formal non-symbolic
logic.
Symbolic logic as a discipline takes its origin in the works of George
Boole (1815–1864), Augustus De Morgan (1806–1871), Friedrich
Ludwig Gottlob Frege (1848–1925), Giuseppe Peano (1858–1932),
and Charles Sanders Peirce (1839–1914). The core systems studied
by symbolic logic are symbolic structures called logics and calculi.
Typically, a logic, as a symbolic system, consists of a specific for-
mal, or sometimes informal, language together with an inference sys-
tem, which generates (forms) a calculus (logic’s syntax), and/or a
model-theoretic semantics. The language is, or corresponds to, an
enhanced and often formalized part of a natural language, such as
English, Spanish, or French. The inference system is to capture, cod-
ify, or simply record, what reasoning is correct for the given language.
Often, inference is, or is called, deduction although there are other
kinds of inference — induction, abduction, and analogy. The goal of
the logical semantics is to represent meaning, or truth-conditions, or
possible truth conditions, for the logical language.
An important sub-discipline of symbolic logic is mathematical
logic. There are three meanings of the term mathematical logic:

— Logic as a discipline that uses mathematical techniques;


— Symbolic logic as a discipline applied to mathematics;
— Logic as a manner of reasoning used in mathematics.

Mathematical logic as a discipline that uses mathematical tech-
niques consists of three parts:

— The part called formal (usually, axiomatic) theories studies for-
mal symbolic systems such as axiomatic set theories or formal
arithmetic;
— proof theory investigates deduction and deductive systems explor-
ing dynamical processes in the syntax of mathematical logic;
— model theory studies interpretations of formal systems and rep-
resents the semantics of mathematical logic.

Besides, such a discipline as recursion theory, which explores con-
structive representations of functions and processes, is sometimes
considered as a part of logic, while in other cases, it is treated as a
part of the theory of algorithms.
There is a big diversity of various logics (as structures) introduced
and studied by different authors. Some of them are given in the
following list:

— syllogistics (Aristotle);
— classical logic, which includes the classical propositional logic and
classical predicate logic (Boole; De Morgan; Peirce);
— algebraic logic (cf., (Halmos, 1962; Plotkin, 1991));
— algebraic polymodal logic (Goldblatt, 2000);
— autoepistemic logic (Moore, 1985);


— belief logic (Levesque, 1984);
— business logic (cf., (Bohrer, 1997));
— categorical logic (cf., (Goldblatt, 1984; Lambek and Scott, 1988));
— combinatory logic (cf., (Fitch, 1974; Hindley et al., 1972));
— complex logic (Zinoviev, 1973);
— computability logic (Japaridze, 2003; 2006);
— conclusion logic (Shoesmith and Smiley, 1978);
— conditional logic (cf., (Nute, 1980));
— conservative logic (Fredkin and Toffoli, 1982);
— constructive logic (Bridges and Richman, 1987);
— continuous logic (cf., (Chang and Keisler, 1966));
— cumulative logic (cf., (Degen, 1984));
— cumulative default logic (Brewka, 1991);
— default logic (cf., (Reiter, 1980; Konolige, 1988; Brewka, 1989));
— deontic logic (also called logic of norms) (cf., (Hilpinen, 1971;
von Wright, 1951; 1963; 1968));
— dependence logic (Väänänen, 2007);
— deviant logic (Haack, 1974);
— discussive
(also called discursive) logic (Jaśkowski, 1948/1999;1949/1999;
Ciuciura, 2008);
— dividing and conquering logic (Amir, 2002);
— dynamic logic (cf., (Harrel, 1979; Harel et al., 2000));
— epistemic logic (cf., (Hintikka and Hintikka, 1988; Schlesinger,
1985));
— equational logic (Clouston, 2009);
— erotetic logic (also called logic of questions and answers) (cf.,
(Prior and Prior, 1955; Harrah, 2002; Belnap and Steel, 1976));
— fibring logics (Gabbay, 1999);
— first-order logics (cf., (Shoenfield, 2001; Kleene, 2002));
— free logics (Leonard, 1956; Lambert, 2003);
— fuzzy logic (cf., (Zadeh, 1975; McNeill and Freiberger, 1993;
Nguyen and Walker, 1996; Bandemer and Gottwald, 1996));
— higher-order logics (cf., (Lambek and Scott, 1988));
— hybrid logic (Areces et al., 2001);
— hypermodal logics (Gabbay, 2002);


— illocutionary logic (Searle and Vanderveken, 1985);
— inclusive logic (Mostowski, 1951; Quine, 1954);
— independence friendly logic (IF logic) (Hintikka and Sandu,
1989);
— inductive logic (cf., (Carnap, 1952; Kyburg, 1970));
— infinitary logic (Barwise, 1969);
— information-theoretic logic (Corcoran, 1998);
— intensional logic (cf., (Anderson, 1984));
— intermediate or superintuitionistic logics (Umezawa, 1959);
— interpretability logics (cf., (Japaridze and de Jongh, 1998));
— intuitionistic logic (cf., (Dummett, 1973));
— labeled logics (Gabbay, 1996);
— linear logic (cf., (Girard, 1987; 1998));
— local logics (Barwise and Seligman, 1997);
— logic of decision (Jeffrey, 1966);
— logic of discovery (Hájek and Havranek, 1978);
— logics of formal inconsistency (Carnielli et al., 2007);
— logics of names (Simons, 2001; Cocchiarella, 2005);
— many-sorted logic (cf., (Turner, 1984: Manzano, 1993; Meinke
and Tucker, 1993));
— many-valued logic (cf., (Lukasiewicz, 1920; Post, 1921; Acker-
mann, 1967));
— metalogic (cf., (Hunter, 1971; Burgin, 2007a));
— monadic predicate logic (cf., (Tharp, 1973));
— modal logics (cf., (Chellas, 1980; Hughes and Cresswell, 1968));
— neutrosophic logic (Smarandache, 2002);
— nominal logic (Pitts, 2003);
— non-monotonic logics (cf., (Brewka, 1991; Burgin, 1991d; Turner,
1984));
— operational logic (cf., (Luchi and Montagna, 1999));
— operational quantum logic (cf., (Coecke et al., 2000));
— paraconsistent logic (cf., (Jaśkowski, 1948; 1949; Arruda, 1980;
Priest, 1986));
— polar logic (Birnbaum, 1980);
— polyadic logic (Banerji, 1988);
— polymodal logic (Japaridze, 1988);


— possibilistic logic (Benferhat et al., 1992);
— predicate functor logic (cf., (Quine, 1976; Kuhn, 1983));
— probabilistic logics (cf., (Boole, 1854; Reichenbach, 1932; 1935;
Hailperin, 1984; Russell, 2014));
— process logic (Harel et al., 1982);
— prohairetic logic (Moutafakis, 1987; Wright, 1963a);
— provability logics (Solovay, 1976; Boolos, 1993);
— quantum logic (cf., (Mittelstaedt, 1978));
— relevant logic (also called relevance logic) (cf., (Anderson and
Belnap, 1975; Read, 1988; Restall, 2000));
— resource logics (Gabbay and Queiroz, 1992);
— restrictive access logics (Gabbay, 1993);
— second-order logics (cf., (Väänänen, 2001; Rossberg, 2004));
— slash logic (Hodges, 1997; 1997a);
— spatial logics (cf., (Aiello et al., 2007));
— stationary logic (Barwise et al., 1978);
— substructural logics (Restall, 2000);
— temporal logic (also called tense logic) (cf., (Allen, 1984; van
Benthem, 1991; 1995));
— triadic logic (cf., (Fisch and Turquette, 1966));
— type-free logic (cf., (Menzel, 1986)).

We see a great diversity of logics created by researchers. However,
all logics listed here as the most important and/or popular form
only a small part of all possible logics. For instance, there exists a
continuum of different intermediate logics (Umezawa, 1959).
According to Kant, logic studies three main essential structures of
thinking: concepts, judgments, and conclusions. This division of logi-
cal description of mental activity and knowledge corresponds to three
traditional basic levels of logic as a tool of knowledge representation,
grounding and acquisition with their specific basic structures:

— On the first (conceptual) level, reality (the knowledge domain) is
reflected by a system of concepts, terms, and names as the basic
structures of this level.
— On the second (representational) level, concepts are used in state-
ments (judgments) about the knowledge domain and its con-
stituents, while reality (the knowledge domain) is reflected by a
system of statements such as propositions and predicates, which
are the basic structures of this level. This is the level of the logic
language.
— On the third (inferential) level, statements (judgments) are used
in reasoning and knowledge transformation, e.g., in inference,
aimed at obtaining new knowledge and grounding existing knowl-
edge. As a result, the system of statements becomes dynamic, and
the knowledge domain (reality) is reflected by logical calculi as the
basic structures of this level.

Each level has its own basic structures and rules of their
composition.
However, the development of logic and extension of its domain
brought forth a new (organizational) level of knowledge where real-
ity (the knowledge domain or knowledge object) is described by
logical varieties, prevarieties and quasi-varieties (Burgin, 1991d;
1995b; 1997d; 2004a; 2008a; Burgin and de Vey Mestdagh, 2011;
2015; de Vey Mestdagh and Burgin, 2015). On this level, all sys-
tems from the previous levels, such as concepts, statements, lan-
guages, rules of inference, and logical calculi, are organized as
the higher-order structures called logical varieties, prevarieties and
quasi-varieties.
However, this description of four logical levels reflects the classical
approach to logic representing only descriptive or assertoric knowl-
edge, which asserts something about its domain (object). At the same
time, people use not only statements for reasoning, but also ques-
tions, queries, and conjectures. To represent these forms of reasoning
in a formal way, probabilistic, hypothetic, and erotetic logics have
been created in the 20th century (Boole, 1854; Reichenbach, 1932;
1935; Hailperin, 1984; Halpern, 1999; Russell, 2014; Kleiner, 1970;
Belnap and Steel, 1976; Harrah, 2002).
In addition, dynamic and operational logics have been elaborated
for operational knowledge representation (cf., for example, (Allen,
1984; van Benthem, 1991; Luchi and Montagna, 1999; Harrel, 1979;
Harel et al., 2000)).
In non-assertoric logics, the first level is similar to conceptual
level of the classical logic employing names, terms, and concepts.
Only the basic names, terms, and concepts from these logics rep-
resent other epistemological objects forming foundation for the sec-
ond level, which is essentially different. For instance, on the second
level, erotetic logics employ questions, queries and problems instead
of statements; probabilistic logics assign probabilities to statements;
dynamic logics use instructions and other names of processes, actions,
and events; while hypothetic logics use conjectures and hypotheses
instead of statements. Consequently, the third level of these logics
consists of hypothetic, dynamic and erotetic logical calculi, while the
fourth level encompasses hypothetic, dynamic, and erotetic logical
varieties.
Below we consider the first three levels in more detail, while the
fourth level is in depth described in Section 3.3.

5.2.1. Concepts, names, terms, and objects


By “object” is meant some element in the complex whole that is
defined in abstraction from the whole of which it is a distinction.
John Dewey

Although the mainstream literature in cognitive science regards the
concept as a kind of mental particular, we treat concepts as struc-
tures, which in many cases have mental representation. In addition,
concepts can have physical representation, for example, by texts in
textbooks and monographs.
Concepts used in logic, mathematics and science, as well as
notions utilized in commonplace reasoning are important types of
symbols. Indeed, a concept, or notion, is defined as a general idea
derived from specific instances, i.e., a concept is a symbolic (usually,
linguistic) representation of these instances. This implies that
concepts/notions are signs/symbols of ideas and have struc-
tures similar to the general structure of a symbol. Often a concept is
treated as a unit of knowledge and considered as a constituent of a
proposition.
The first known model of a concept belongs to Friedrich Ludwig
Gottlob Frege (1848–1925). According to him, concepts are ways
of thinking of objects, properties and relations (Frege, 1891; 1892;
1892a). To explicate the structure of a concept, Frege assumed that
any concept has a name and this name is related to an object or col-
lections of objects, which he called the denotation. Besides,
Frege suggested that in addition to the denotation, names or descrip-
tions of a concept also express a sense, which accounts for the cog-
nitive significance of the concept, and people develop the denotation
of a concept through its sense. This gives us the following diagram
as a model of concept.
This model is similar to the Semiotic Triangle of Nöth, which is
a model of sign analyzed in Chapter 4.
In addition, Frege defined concept also as a function, the value of
which is always a truth-value (Frege, 1891).
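This functional view admits a simple computational illustration. The following minimal sketch (the predicate and the sample objects are invented for illustration and are not taken from Frege or from this book) treats a concept as a truth-valued function and recovers its denotation as the collection of objects mapped to true:

    # A minimal sketch of Frege's view of a concept as a function whose value
    # is always a truth-value; the concept and the sample objects are invented.

    def concept_even(n: int) -> bool:
        """The concept 'even number' as a truth-valued function."""
        return n % 2 == 0

    # The denotation of the concept is the collection of objects
    # that the function maps to True.
    denotation = [n for n in range(10) if concept_even(n)]
    print(denotation)  # [0, 2, 4, 6, 8]
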
Another model of concept was suggested by Bertrand Russell
(1872–1970). He regarded concepts as constituents of propositions,
while his model of concept is similar to the model of Frege. Consider-
ing concepts, Russell explained that some words (for example, proper
names) indicate particular objects. This indication is represented by

Figure 5.2. The Concept Triangle of Frege (a triangle with vertices Concept name, Denotation, and Sense)

Figure 5.3. The Concept Triangle of Russell (a triangle with vertices Name, Denotation, and Meaning)


the following structure:

Proper name --indicates--> Meaning of the proper name (object).    (5.4)
Thus, for Russell a concept has a name and two more constituents.
On one hand, concepts symbolize objects that are their exemplifica-
tions. Russell calls the relation between concepts and their particular
exemplifications denotation (Russell, 1905). This relation is objective
or, as Russell also says, logical. On the other hand, concepts have a
part formed by meanings of corresponding linguistic expressions that
mean objects denoted by the concept. This gives us the following
structure of a concept.
Each component of the structure of a concept — the name, deno-
tation and meaning in the Concept Triangle of Russell or the name,
denotation and sense in the Concept Triangle of Frege — is also an
object and has a name, the name of this object. In addition, these
objects also have a denotation and sense (or meaning), associated
with their names. As a name is itself an object, it has a name and a
denotation and sense (or meaning), as any other object. In an inten-
sional context, the names that occur denote the meaning or sense of
the objects for the reader or listener. It means that each component
of a concept can acquire the role of another component. As a result, the structure of a concept has the
property called fractality, which tells that the structure of the whole
is repeated/reflected in the structure of its parts.
It is interesting that explicating the structure of concepts,
proper names and propositions in the theories of Russell and Frege,
Tatievskaia (1999) demonstrates that they have the form of a fun-
damental triad, which is called a named set, or named set chain
(cf., Appendix). She portrays these structures as the diagrams in
Figure 5.4. This portrayal allows us to see that in the first diagram
in Figure 5.4, Sense connects Sentence with Reference, while in the
second diagram, Sense connects Proper Name with Reference. At the
same time, the third diagram is a composition of two fundamental

Sentence --Sense of the sentence (thought)--> Reference of the sentence (truth-value)

Proper name --Sense of the proper name (thought)--> Reference of the proper name (object)

Concept-word --Sense of the concept-word (thought)--> Reference of the concept-word --> Referent of the concept-word (object)

Figure 5.4. Conceptual structures of Frege

triads: in the first one, Sense connects Concept-word with Reference,
while in the second one, Reference connects Sense with Referent.
Cohen and Murphy (1984) consider several types of concept models:
extensional, fuzzy set-theoretical, prototypical, and semantic.
The extensional or set-theoretical model is based on set theory and
represents a concept as a set of objects that belong to the concept,
rather than as some form of mental representation. It is a classical
model of concepts (cf., Diagram (5.5)).

Concept name --t--> Set of objects.    (5.5)

The fuzzy extensional or fuzzy set-theoretical model is based on
fuzzy set theory and represents a concept as a fuzzy set of objects
that to some extent belong to the concept (cf., Diagram (5.6)). The
structure of this model is similar to the structure of the extensional
model but in contrast to the relation t, which simply assigns each
object to the concept name, the relation r shows to what extent each
object belongs to the concept with the given name.
Concept name --r--> Set of objects.    (5.6)
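The difference between the crisp relation t of Diagram (5.5) and the fuzzy relation r of Diagram (5.6) can be sketched as follows (a minimal illustration; the objects and membership degrees are invented and are not taken from the source):

    # A minimal sketch (invented objects and membership degrees) contrasting the
    # extensional model, where the relation t simply assigns objects to the concept,
    # with the fuzzy set-theoretical model, where the relation r assigns degrees.

    bird_extension = {"robin", "sparrow", "penguin", "ostrich"}       # crisp set

    bird_membership = {"robin": 1.0, "sparrow": 0.95,                 # fuzzy set
                       "penguin": 0.6, "ostrich": 0.55, "bat": 0.1}

    def belongs(obj: str) -> bool:
        """Crisp membership: an object either belongs to the concept or it does not."""
        return obj in bird_extension

    def belongs_to_degree(obj: str) -> float:
        """Fuzzy membership: an object belongs to the concept to some degree in [0, 1]."""
        return bird_membership.get(obj, 0.0)

    print(belongs("penguin"))            # True
    print(belongs_to_degree("penguin"))  # 0.6
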

The semantic or knowledge-representation model is based on com-
ponential semantics and usually represents a concept as a system of
attributes in feature-based models and by conceptual networks in net-
work models (cf., Diagram (5.7)).
Concept name --p--> System of attributes.    (5.7)

These models of concepts are considered classical. There are also
non-classical models of concepts.
The prototypical or prototype model is based on prototype theory
and has two forms — extensional and attributive. The principal idea
of prototype theory is graded categorization of objects, according to
which some members of a category are more central or typical than
others. In prototype theory, the prototype is defined as the most
central member or members of a category. It is assumed that graded
categorization is likely to be present in all cultures.
Psychologists found experimental evidence that some members
of a category are more privileged than others, forming a gradation of
categories:
1. Queries involving prototypical members elicit faster response
times than queries involving non-prototypical members.
2. In priming with objects from the same category, prototypical
members come to the top of the list.
3. Exemplars named by people more frequently include prototypical
items.
In the attributive prototype model, a concept is represented by
a system of the concept role-values as its most common attributes.
Such a system is treated as a prototype base (cf., Diagram (5.8)).
These role-values of a concept are not assigned by definition because
the concept does not specify necessary and sufficient criteria for mem-
bership. Objects belong to the concept if they sufficiently agree with
the concept role-values (Rosch and Mervis, 1975).
Concept name --u--> System of role-values.    (5.8)

In the extensional prototype model, a concept is represented by
an object called a prototype, which plays the role of the most com-
mon concept representative (cf., Diagram (5.9)). Objects that have
the most typical role-values are considered “good” prototypes, while
objects that have atypical or unacceptable values are considered
“borderline” or “poor” prototypes. For instance, for the concept a
bird, robin is a “good” prototype, while penguin is a “poor” prototype.
Other objects belong to the concept if they are sufficiently similar to
the prototypes (Rosch and Mervis, 1975).

Concept name --w--> Prototype.    (5.9)
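How the extensional prototype model grades membership by similarity to the prototype can be shown by a small sketch (the features, their values, and the similarity measure are invented for illustration and are not taken from Rosch and Mervis or from this book):

    # A minimal sketch (invented features and similarity measure) of the
    # extensional prototype model: objects belong to a concept to the degree
    # that they are similar to the prototype.

    PROTOTYPE_BIRD = {"flies": 1.0, "has_feathers": 1.0, "sings": 1.0, "small": 1.0}

    def similarity(obj_features: dict, prototype: dict) -> float:
        """Average agreement between an object's features and the prototype's features."""
        keys = prototype.keys()
        return sum(1.0 - abs(prototype[k] - obj_features.get(k, 0.0)) for k in keys) / len(keys)

    robin = {"flies": 1.0, "has_feathers": 1.0, "sings": 1.0, "small": 1.0}
    penguin = {"flies": 0.0, "has_feathers": 1.0, "sings": 0.0, "small": 0.0}

    print(similarity(robin, PROTOTYPE_BIRD))    # 1.0  -> a "good" example of the concept
    print(similarity(penguin, PROTOTYPE_BIRD))  # 0.25 -> a "borderline" or "poor" example
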

According to (Hampton, 2011), researchers have proposed five
broad classes of concept models, which are described below.
The classical model assumes that each concept is clearly and
entirely defined by a system of necessary and sufficient
common attributes (Armstrong et al., 1983; Osherson and Smith,
1981). Diagram (5.10) is a representation of the structure of this
model.
Concept name --l--> Set of attributes.    (5.10)

Later the classical view was extended by dividing representation
attributes into two groups: defining features, which form the core
definition of the concept extension, and characteristic features, which
are true only for typical category members and which may form the
basis of a recognition procedure for quick categorization.
Thus, the classical concept model generates the structure of the
Attributive Concept Triangle (cf., Figure 5.5).
The second model is the prototype model, two forms of which are
considered above.
The third model is the theory-based model rooted in the cogni-
tive development tradition (Murphy and Medin, 1985). Concepts

Figure 5.5. The attributive concept triangle (a triangle with vertices Concept name, Defining features, and Characteristic features)


are framed by the theoretical understanding of the world. While a
prototype representation of a concept is a system of unconnected
attributes, the theory-based representation has the form of a struc-
tured frame or schema (cf., Diagram (5.11)), comprising theoretical
knowledge about the relations between these attributes, as well as
their causal and explanatory links.
Concept name --k--> Concept schema.    (5.11)

The fourth model belongs to the psychological essentialism (Medin
and Ortony, 1989). It is constructed as the development of the clas-
sical and theory-based models aimed at synthesis of psychological
models with the philosophical intuitions. The model employs a classi-
cal “core” definition for concepts, but one in which the core definition
may frequently contain empty “place holders”, for example, for addi-
tional still unknown attributes (cf., Diagram (5.12)). In such a way,
these “place holders” allow adding additional knowledge to concept
definitions.
Concept name --f--> Concept schema with variables.    (5.12)

The fifth one, the exemplar model, is similar to the prototype
model. In it, the concept is based not on one prototype, but on a
number of different exemplar representations (Medin and Shoben,
1988). This system of exemplar representations is regarded as a pro-
totype base (cf., Diagram (5.13)).
Concept name --h--> System of prototypes.    (5.13)

The exemplar model of concept is similar to a part of the edu-
cational model of concept developed in education (Merrill and Ten-
nyson, 1977). Its structure is portrayed by the following triangle (cf.,
Figure 5.6).
Figure 5.6. The educational concept triangle (a triangle with vertices Name, Examples, and Attributes)


However, there are essential differences between the exemplar and
educational models of concept. In the exemplar model, only examples
of objects comprised by the concept are included. In contrast to this
(cf., Figure 5.7), when people in general and children, in particular,
learn concepts, they are given both examples of objects comprised
by the concept (positive examples) and examples of objects that do
not belong to the concept (negative examples).
The situation with concept attributes is the same — when peo-
ple in general and children, in particular, learn concepts, they learn
attributes of objects comprised by the concept (positive attributes)
and attributes that objects from the concept do not have (negative
attributes).
This peculiarity of concept learning shows that it would be rea-
sonable to use negative probabilities (Dirac, 1942; Feynman, 1950;
Forsyth et al., 2001; Burgin, 2009; Burgin and Meissner, 2010; 2012)
in probabilistic models of learning and teaching.
Another educational model of concept was elaborated by Tarábek
(2006; 2007), who called it the Triangular Model of concept, which
is presented in Figure 5.8.

Figure 5.7. The extended educational concept model (Name, with Examples divided into positive and negative examples, and Attributes divided into positive and negative attributes)

Figure 5.8. The triangular concept model (a triangle with vertices Core, Meaning, and Sense)


This model has the following components (Tarábek, 2006; 2007).


The core of a concept has three appearances:

• Linguistic in a form of a word or expression.


• Symbolic in a form of a physical symbol.
• Visual in a form of an image.

The meaning of a concept has three levels:

1. More concrete concepts as particular cases of this concept.


2. Very concrete concepts as particular cases of this concept.
3. Objects, phenomena, actions that belong to this concept.

The sense of a concept is a list of meaningfully related concepts
and corresponding links.
The Triangular Model of concept also employs the following con-
cept architecture (cf., Figure 5.9).

Figure 5.9. The triangular concept architecture (a triangle with vertices core, periphery, and semantic frame)

In this model, the periphery of a concept plays the role of the
meaning, while the semantic frame of the concept plays the role of
the sense.
One more model of concept has been developed in formal concept
analysis (FCA) (Ganter and Wille, 1999; Stumme and Wille, 2000).
This model is basic for FCA and is formed from three components:
objects and attributes related by an incidence relation (cf., Diagram
(5.14)), which are defined as rigorous mathematical structures. To
build this model, a formal context K is defined as a triad (named
set) K = (G, I, M ), where

• G is a set of objects,
• M is a set of attributes,

Figure 5.10. The FCA model of a concept (Concept extent and Concept intent)

• I is a relation between G and M, in which the relation (g, m) ∈ I
means that the object g has the attribute m, i.e., I is the connection
between objects and attributes.

objects --I--> attributes.    (5.14)
Thus, we can see that formal contexts are named sets and it is
possible to apply to them different named set operations (Burgin,
2011).
Definition 5.2.1. A formal concept C in a context K = (G, I, M )
is a pair C = (A, B), where (cf., Figure 5.10).
• A is a set of objects called the extent of the concept and A ⊆ G;
• B is a set of attributes called the intent of the concept and B ⊆ M ;
and the following condition (M) is satisfied:
A × B is a maximal rectangle in the binary relation I, i.e., sets A
and B are maximal with A × B ⊆ I.
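Definition 5.2.1 can be illustrated by a minimal computational sketch. The toy context and the function names below are invented; the check uses the standard derivation operators of FCA, whose fixed points are exactly the pairs satisfying condition (M):

    # A minimal sketch (toy context, invented names) of a formal context
    # K = (G, I, M) and of checking whether a pair (A, B) is a formal concept.

    G = {"lion", "sparrow", "penguin"}                       # objects
    M = {"animal", "bird", "flies"}                          # attributes
    I = {("lion", "animal"),
         ("sparrow", "animal"), ("sparrow", "bird"), ("sparrow", "flies"),
         ("penguin", "animal"), ("penguin", "bird")}         # incidence relation

    def common_attributes(A):
        """Attributes shared by all objects in A."""
        return {m for m in M if all((g, m) in I for g in A)}

    def common_objects(B):
        """Objects possessing all attributes in B."""
        return {g for g in G if all((g, m) in I for m in B)}

    def is_formal_concept(A, B):
        """(A, B) is a formal concept iff A x B is a maximal rectangle in I,
        i.e., B is exactly the set of common attributes of A and vice versa."""
        return common_attributes(A) == set(B) and common_objects(B) == set(A)

    print(is_formal_concept({"sparrow", "penguin"}, {"animal", "bird"}))  # True
    print(is_formal_concept({"sparrow"}, {"animal", "bird"}))             # False: A is not maximal
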
There are also various relations between formal contexts that
come from the named set theory. For instance, a formal context
K = (G, I, M ) is a subcontext of a formal context D = (H, J, L)
if G ⊆ H, M ⊆ L and I is the restriction of J on G and M ,
i.e., I = J|(G,M ) . Relations between formal contexts imply relations
between formal concepts. For instance, it is possible to define two
kinds of subconcepts.
Definition 5.2.2. A formal concept (A, B) in a context K is an
attributive subconcept of a formal concept (D, E) in the context K
if B ⊆ E.
For instance, the concept of a computer is an attributive subcon-
cept of the concept of a notebook.
Definition 5.2.3. A formal concept (A, B) in a context K is an
extensive subconcept of a formal concept (D, E) in the context K if
A ⊆ D.
For instance, the concept of a lion is an extensive subconcept of
the concept of an animal.

Proposition 5.2.1. A formal concept (A, B) in a context K is an
attributive subconcept of a formal concept (D, E) in the context K if
and only if the formal concept (D, E) in the context K is an extensive
subconcept of the formal concept (A, B) in the context K.

Proof. Let us assume that a formal concept (A, B) in a context
K = (G, I, M ) is an attributive subconcept of a formal concept (D,
E) in the context K. It means that B ⊆ E, i.e., if b ∈ B, then b ∈ E.
If B = E, then A = D because by Condition (M), A is maximal with
respect to the inclusion A × B ⊆ I and D is maximal with respect to
the inclusion D × E ⊆ I. If B ≠ E, then D × B ⊆ D × E ⊆ I because
B ⊆ E. As A is maximal with respect to the inclusion A × B ⊆ I, we
have D ⊆ A. It means that the formal concept (D, E) in the context
K is an extensive subconcept of the formal concept (A, B) in the
context K.

Now let us assume that a formal concept (A, B) in a context
K = (G, I, M ) is an extensive subconcept of a formal concept (D,
E) in the context K. It means that A ⊆ D, i.e., if a ∈ A, then a ∈ D.
If A = D, then B = E because by Condition (M), B is maximal with
respect to the inclusion A × B ⊆ I and E is maximal with respect to
the inclusion D × E ⊆ I. If A ≠ D, then A × E ⊆ D × E ⊆ I because
A ⊆ D. As B is maximal with respect to the inclusion A × B ⊆ I, we
have E ⊆ B. It means that the formal concept (D, E) in the context
K is an attributive subconcept of the formal concept (A, B) in the
context K.
The later development of FCA by Lehmann and Wille (1995) was
based on the pragmatic philosophy of Charles S. Peirce with his three
universal categories and the general triadic approach. The new idea
was to use the category of conditions in addition to the basic categories
of objects and attributes.
The basic concept is a triadic context, which is defined as a
quadruple (G, M, B; Y) where G is a set of objects, M is a set of
attributes, and B is a set of conditions, while Y is a ternary relation

Figure 5.11. The first specification rank of the representational model of a concept (Concept name --> Conceptual representative)

between G, M, and B, i.e., Y ⊆ G × M × B. Then the formula (g,
m, b) ∈ Y means that the object g has the attribute m under (or
according to) the condition b. A triadic concept of a triadic context
(G, M, B; Y) is defined as a triple (A1 , A2 , A3 ) denoting the relation
A1 × A2 × A3 ⊆ Y, which is maximal with respect to component-
wise inclusion. The triadic concepts are structured by three quasi-
orders given by the inclusion order within each of the three
components.
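A minimal sketch of a triadic context (the toy objects, attributes, and conditions below are invented for illustration and are not taken from Lehmann and Wille):

    # A minimal sketch (toy data) of a triadic context (G, M, B; Y): the ternary
    # relation Y records that an object has an attribute under a condition.

    G = {"water"}                               # objects
    M = {"liquid", "solid"}                     # attributes
    B = {"room temperature", "below zero"}      # conditions
    Y = {("water", "liquid", "room temperature"),
         ("water", "solid", "below zero")}

    def has_attribute_under(g: str, m: str, b: str) -> bool:
        """(g, m, b) in Y means that the object g has the attribute m under the condition b."""
        return (g, m, b) in Y

    print(has_attribute_under("water", "solid", "below zero"))         # True
    print(has_attribute_under("water", "solid", "room temperature"))   # False
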
Burgin and Gorsky (1991) introduced a new model called the
representational model of a concept. Its surface structure is presented
in Figure 5.11, in which concept name can be one word, e.g., “cat”,
an expression, e.g., “an infinitely small number”, or a text, and which
is a specific kind of fundamental triad.
We see that the representational model of a concept comprises
all other models of a concept as a structure having a higher level of
abstraction. For instance, in the extensional prototype model, the
prototype is the conceptual representative, while in the theory-based
model, the concept schema is the conceptual representative. It may
seem that triadic models of concept are different. However, in the
Concept Triangle of Frege, it is possible to consider the components
Denotation and Sense as the conceptual representative of the con-
cept. In a similar way, it is possible to consider the components Deno-
tation and Meaning as the conceptual representative of the concept
in the Concept Triangle of Russell.
This model is further developed in (Burgin, 2012) by explicating
several structural levels, which we call the specification ranks of the
concept. The first specification rank of a concept encloses the sur-
face structure of the representational model, which is portrayed in
Figure 5.11. Going to the second stratum, the component Concep-
tual Representative, which constitutes the first stratum of the model
is divided into three parts — Interpretant, Connotation (also called
Connotat), and Denotation (also called Denotat), which form the
second stratum of the model.
Combination of the first and second strata generates the concept
structure of the second specification rank, which is similar to the
Russell model of a concept.
In turn, each of the components Interpretant and Denotation/
Denotat is divided into three parts, forming the third stratum. The
component Interpretant is split into Intention, Sense, and Meaning,
while the component Denotation/Denotat is split into Scope/Extent,
Exemplars, and Prototype. At the same time, Connotation/Connotat
is split into Associative/indirect (e.g., Metaphorical) Interpretant and
Associative/indirect Denotat. Combination of three described strata
forms the concept structure of the third specification rank.
To understand the representational model, let us look at what
all these terms mean.
In conventional models, the Denotation/Denotat of a concept,
which is also called the Referent by some authors, is the collection
of entities that are unified by this concept and denoted by the con-
cept name. For instance, the denotation of a concept cat consists of
all cats. In the representational model, the component Denotation/
Denotat of a concept consists of three parts — the Prototype, Exem-
plars and Scope (also called Extent) — and only the Scope/Extent
is the collection of entities that are unified by this concept. The
Prototype is the most typical representative of the Scope/Extent,
while the Exemplars include representatives from the Scope
(Extent) that give a sufficiently complete description of the
Extent.
The term Interpretant goes back to Peirce's model of a sign
(cf., Chapter 4). In the sign model, Interpretant of a sign is the mean-
ing of this sign and because a concept is a kind of sign, we assume
that in classical models, such as the Concept Triangle of Russell,
Interpretant coincides with the meaning of the concept. In the rep-
resentational model, the Interpretant of a concept consists of three
components — Intention, Sense, and Meaning. As the component of
the representational model, the term Meaning has the conventional
meaning, which is analyzed in Section 2.3.10. The component Sense
of a concept accounts for its cognitive significance, by which people
conceive the denotation of this concept. The component Intention
indicates the internal content of a concept, which is expressed in its
explicit definition.
The component Connotation/Connotat of a concept is a com-
monly understood cultural or emotional association that the concept
carries, in addition to its explicit or literal meaning. A connotation
is frequently described as either positive or negative, taking into
consideration the emotional response. For instance, it is possible
to characterize a stubborn person by two concepts — strong-willed
or pig-headed. While these concepts have the same literal meaning
(stubborn), the concept strong-willed connotes respect and admira-
tion for person’s will (a positive connotation), while the concept pig-
headed connotes frustration in dealing with this person (a negative
connotation).
As an example of Metaphorical Denotat, we can take the mytho-
logical connotation of Barthes, which shows how examples can be
exploited to communicate more than just the one allusion to the
concept in a certain context (Barthes, 1977). Thus, a picture or a
diagram used as an example of a concept in a discourse on what
it denotes will not only be subject to certain representational con-
ventions. In addition, it can also contain clues that it should be
understood as describing the concept in a certain way — or con-
versely contain strategies to avoid such a description. We can con-
sider Diagrams (5.9)–(5.14) as examples of a Metaphorical Denotat.
In the representational model, the component Connotation/
Connotat of a concept includes two components — the Associative/
Indirect Interpretant (e.g., Metaphorical Interpretant) and Associa-
tive/indirect Denotat (e.g., Metaphorical Denotat). The Associative
Interpretant is the meaning indirectly associated with the concept,
while the Associative Denotat consists of images and emotions caused
by the concept. In the example considered above, the Associative
Interpretant of the concept strong-willed can include such proper-
ties as strong, persistent, successful, and/or reliable. The Associative
Denotat of the same concept can include images of George Washington,
Napoleon, or Alexander the Great, as well as emotions of admiration,
respect and/or appreciation.
We come to the concept structure of the fourth specification
rank when a distinction is made between literal components of
the third stratum — literal Sense/Intention, literal Meaning, literal
Extent, and literal Prototype (the first part of the fourth stratum) —
and metaphorical components of the third stratum — metaphori-
cal Sense/Intention, metaphorical Meaning, metaphorical Extent, and
metaphorical Prototype (the second part of the fourth stratum). For
instance, taking the concept lion, we have a literal meaning “a mem-
ber of the Felidae family of carnivorous mammals” and a metaphor-
ical meaning “the king of animals”.
With respect to its literal components — the literal meaning,
literal extent, literal sense, and literal prototype, the concept plays
the role of an index.
With respect to its metaphorical components — the metaphorical
meaning, metaphorical extent, metaphorical sense, and metaphorical
prototype, the concept plays the role of a symbol.
Continuing this procedure of the concept structure explication,
we come to the concept structure of the fifth rank when a distinc-
tion is made between strict components of the fifth stratum — strict
Sense, strict Intention and strict Meaning — and fuzzy components
of the fifth stratum — fuzzy Sense, fuzzy Intention and fuzzy Mean-
ing; as well as between exact components of the fifth stratum — exact
Extent, exact Exemplars and exact Prototype; graded components of
the fifth stratum — graded Extent, graded Exemplars and graded Pro-
totype; and approximate components of the fifth stratum — approx-
imate Extent, approximate Exemplars and approximate Prototype.
For instance, the concept “the Sun” has the exact Extent, which
consists of the definite star. At the same time, the concept “a star”
has the approximate Extent because first, we do not know all stars
and second, new stars are formed and some existing stars explode
and stop being stars. In a similar way, the meaning of the concept
“the Earth” is strict, while the meaning of the concept “a planet” is
fuzzy.

Figure 5.12. The multilevel representational model of a concept (the Conceptual Representative is divided into the Interpretant, the Connotat/Connotation, and the Denotat/Denotation; the Interpretant comprises Intention, Sense, and Meaning, the Denotat comprises Prototype, Exemplars, and Scope/Extent, and the Connotat comprises the Associative/indirect Interpretant and the Associative/indirect Denotat; components are further specified as attributive, operational, or relational, literal/direct or metaphorical/indirect, strict or fuzzy, and exact, graded, or approximate)

The concept structure of the fifth rank, which is composed of all
strata from the first to the fifth, is presented in Figure 5.12.
Note that graded categorization of the concept extent is a central
notion in many models of cognitive science and cognitive semantics
(cf., for example, (Langacker, 1987; 1991)).
It is possible to continue the procedure of the concept structure
explication clarifying that there are three forms of meaning:

— Attributive meaning has the form of a system or schema of
attributes.
— Network or relational meaning has the form of a conceptual net-
work or schema.
— Operational meaning has the form of a system or schema of oper-
ations/procedures/algorithms.

In an attributive system, all attributes are given. In contrast
to this, an attributive schema may contain slots for yet unknown
attributes. Thus, an attributive system provides an exact attribu-
tive meaning, while an attributive schema provides an approximate
attributive meaning.

In a conceptual network, all nodes (i.e., concepts) and links are
fixed. In contrast to this, a conceptual schema may contain node
(concept) variables and link variables (Burgin, 2006). Thus, a con-
ceptual network provides an exact network/relational meaning, while
a conceptual schema provides an approximate network/relational
meaning.
In a system of operations, procedures or algorithms, all of these
elements are fixed. In contrast to this, an operational schema may
contain node (i.e., operation, procedure, and algorithm) variables and
link variables (Burgin, 2006). Thus, a system of operations, proce-
dures, or algorithms provides an exact operational meaning, while an
operational schema provides an approximate operational meaning.
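The contrast between an attributive system, which fixes all attributes, and an attributive schema, which keeps open slots for yet unknown attributes, can be sketched as follows (the concept and its attributes are invented for illustration and are not taken from the source):

    # A minimal sketch (invented attributes) of an attributive system versus an
    # attributive schema: the system gives all attributes (exact attributive
    # meaning), while the schema keeps open slots, i.e., place holders, for
    # attributes that are not yet known (approximate attributive meaning).

    from typing import Dict, Optional

    gold_system = {"color": "yellow", "metal": True, "atomic_number": 79}

    gold_schema: Dict[str, Optional[object]] = {
        "color": "yellow",
        "metal": True,
        "hidden_property": None,   # open slot for a still unknown attribute
    }

    def fill_slot(schema: Dict[str, Optional[object]], slot: str, value: object) -> None:
        """Add new knowledge to the concept by filling an open slot."""
        if schema.get(slot) is None:
            schema[slot] = value

    fill_slot(gold_schema, "hidden_property", "element with atomic number 79")
    print(gold_schema["hidden_property"])
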
We see that the representational model of a concept comprises all
other models of a concept.
Metaphorical Denotation (metaphorical Denotat) and metaphori-
cal Interpretant belong to the Concept Connotation (Connotat).
With respect to its literal components — the literal meaning,
literal extent, literal sense, and literal prototype - the concept plays
the role of an index.
With respect to its metaphorical components — the metaphorical
meaning, metaphorical extent, metaphorical sense, and metaphorical
prototype — the concept plays the role of a symbol.
Besides, there are three forms of meaning:
— Attributive meaning has the form of a system of attributes.
— Network or relational meaning has the form of a conceptual
network.
— Operational meaning has the form of a system of operations/
procedures/algorithms.
It is necessary to remark that each component of the structure
of a concept — the name, interpretant, connotat, and denotat —
is also an object and has a name, i.e., the name of this object. In
addition, these objects play the role of denotat and themselves have
a connotat and an interpretant, i.e., meaning, sense and intention,
associated with their names. As a name is itself an object, it has
a name and interpretant, i.e., meaning, and connotat, i.e., sense,
as any other object. In the intensional context, the names that
occur denote the meaning or sense of the objects for the reader or
listener.
It means that each component of a concept can acquire the role
of another component. As a result, the structure of a concept has the
property called fractality, which tells that the structure of the whole
is repeated/reflected in the structure of its parts.
The epistemological category concept is intrinsically connected
to two other epistemological categories — name and object. For
instance, we can see that there is no concept without a name — con-
cepts do not exist without names. Exact names of concepts are called
terms. This reflects the situation that logic, like any other discipline,
extensively uses names (cf., for example (Kripke, 1972; McCulloch,
1989; Grove and Halpern, 1993; Haas, 1995; Grove, 1995)). Alonzo
Church (1903–1995) describes the importance of names for mathematical
logic, treating the concept name as one of the basic terms used
in logic: at the beginning of his book Introduction to Mathemat-
ical Logic, he clarifies the main concepts of logic starting with the
concept name and makes clear the prevalent importance of names
for logic, although he takes into account only proper names (Church,
1956). As another logician Stephen Cole Kleene (1909–1994) writes,
it is better to think about variables in logic not as some placehold-
ers for appropriate objects but as names from a warehouse used for
denoting different objects (Kleene, 2002).
In his book, one more logician Shoenfield also describes how
names for individuals and expressions are constructed and used
(Shoenfield, 2001).
Moreover, the great mathematician Jules Henri Poincaré (1854–
1912) wrote in his methodological studies that without a name, no
object exists in science or mathematics (Poincaré, 1908).
The word name stems from the Greek word ὄνομα (onoma) and
its successors — the Latin word nomen, Old English word nama, and
Old High German word namo. In a social context, a name is defined
as a word or words by which an entity is designated (American Her-
itage Dictionary, 2009). A more general definition of name is given
in (Gorsky et al., 1991). Namely, a name is defined as an expression
of a natural language that denotes a separate object, collection of


similar objects, properties, relations, etc.
A name usually is treated as a label for an object, such as a human
being, animal, thing, place, service, and even an idea or theory. It is
used to identify and distinguish one or several objects from anything
different. Names can single out a class or category of objects, or a
particular object, either uniquely, or within a given context. In the
theory of named sets (Burgin, 2011), the term name has a much
broader meaning and scope.
People even developed names for names, or more exactly, names
for classes of names. Here are some of them.
A name of a person is called an anthroponym. For instance, names
Andrew, Bill, and Ann are anthroponyms.
A name of a place is called a toponym. For instance, names
America, Sahara, and Malta are toponyms.
A name of a body of water is called a hydronym. For instance,
names Amazon, Nile, and Black Sea are hydronyms.
A name of an ethnic group is called an ethnonym. For instance,
names Indians, Spanish, and Jews are ethnonyms.
An assumed name of a person is called a pseudonym. For instance,
names Mark Twain, Lewis Carroll, and Napoleon are pseudonyms.
An assumed name of a writer or journalist is called a pen name.
An assumed name of an actor is called a stage name.
Psychologists also studied how infants perform naming and learn
names of things (cf., for example (Baldwin and Markman, 1989;
Markman, 1989; Hall, 1999; Waxman, 1998; Baillargeon, 2000; Wax-
man, 2002; 2003)). As Waxman (2002) writes, “Even before they
can tie their own shoes, human infants spontaneously form con-
cepts to capture various relations among the objects and events they
encounter, and they learn words to express them. Name learning,
more than any other development achievement, stands at the very
center of the crossroad of human cognition and language.”
However, usage of names in such a rigorous discipline as logic is
not always rigorous. For instance, as Grove (1995) writes, modal epis-
temic logics for many agents sometimes ignore or simplify the distinc-
tion between the agents themselves, and the names these agents use
when reasoning about each other. At the same time, problems moti-
vated by practical computer science applications show that utilized
theories of naming are often inadequate. For instance, their main
concern is proper names while other names, i.e., common names for
many objects, are also extremely important. Thus, practical appli-
cations demand logic to pay more attention to names and naming.
Logicians have developed special tools for working with names in
logic, and all of them involve building new or transforming existing
named sets. For instance, Gabbay and Malod (2002) extend predicate
modal and temporal logics introducing a special predicate W (x),
which names the world under consideration. Such a naming allows
one to compare the different states the world (universe or individual)
can be in after a given period of time, depending on the alternatives
taken on the way. As Gabbay and Malod (2002) remark, the idea of
naming the worlds and/or time points goes back to Prior (1967).
Labeling is a kind of naming, and labeled logics and labeled deduc-
tive systems form a new and actively expanding direction in logic (cf.,
(Basin et al., 2000; Chau, 1993; Gabbay, 1994; 1996; Gabbay and
Malod, 2002; Viganò and Volpe, 2008)). Labeled logics use labeled
signed formulas where labels (names of the formulas) are taken from
information frames. As a result, the set of formulas in a labeled
logic becomes an explicit named set, the support of which consists of
logical formulas, while the set of names is an information frame, i.e.,
the system of labels. The derivation rules act on the labels as well
as on the formulae, according to certain fixed rules of propagation.
It means that derivation rules are morphisms (mappings) of the cor-
responding named sets. In default logics, there is even an algorithm
for grounded naming (labeling) (Roos, 2000).
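The idea of rules that act on labels as well as on formulas can be illustrated with a minimal Python sketch. It is not a reconstruction of any particular labeled deductive system from the cited literature; the label algebra (union of sets) and the string representation of formulas are assumptions chosen only for brevity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Labeled:
    label: frozenset   # the label (name) of the formula, taken from an information frame
    formula: str       # the formula itself, kept as a string for simplicity

def modus_ponens(p: Labeled, imp: Labeled) -> Labeled:
    """From  l1 : A  and  l2 : A -> B  derive  (l1 u l2) : B.
    The rule acts on the formulas and propagates the labels."""
    antecedent, _, consequent = imp.formula.partition(" -> ")
    if p.formula != antecedent:
        raise ValueError("premise does not match the antecedent")
    return Labeled(label=p.label | imp.label, formula=consequent)

a = Labeled(frozenset({"w1"}), "A")
a_implies_b = Labeled(frozenset({"w2"}), "A -> B")
print(modus_ponens(a, a_implies_b))   # derives B with label {'w1', 'w2'}
```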
Besides, in the elementary theory of types and names developed by
Jäger (1988), which is aimed at applications in computer science, the
set of names forms the level of types, and the naming relation connects
objects and types. Objects are the entities the computer can directly manipu-
late. They are promptly accessible and explicitly represented in suit-
able form, e.g., as bitstrings in a computer memory. In contrast to
this, types are abstract collections of objects. In order to address
them, computers have to use their names. Hence the name nX of
a type X has to provide enough information such that X can be
determined from nX . If the object a is the name of type X, then the
extension ext(a) of a is this (unique) type X. In the context of the
theory of named sets, the extension ext(a) of a is the interpretation
of a.
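A minimal Python illustration of this arrangement (with invented type names) may be helpful: objects are concrete values, types are abstract collections, and the machine reaches a type only through its name via the extension function ext.

```python
# Names of types are ordinary objects (here, strings); types are abstract
# collections that the machine can only reach through those names.
TYPE_REGISTRY = {
    "nat_below_5": frozenset({0, 1, 2, 3, 4}),
    "bit": frozenset({0, 1}),
}

def ext(name):
    """Return the extension ext(name): the unique type denoted by the name."""
    return TYPE_REGISTRY[name]

print(ext("bit"))               # frozenset({0, 1})
print(3 in ext("nat_below_5"))  # True
```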
Understanding importance of names and naming, philosophers
and logicians have developed different theories of names (cf., for
example, (Mill, 1862; Frege, 1892; Donnellan, 1972; Evans, 1973;
Putnam, 1975; Kripke, 1972; Geurts, 1997; Cumming, 2009)). The
most renowned are Mill’s theory of names (Mill, 1862), description
(descriptivist) theory of names, causal theory of names (Evans, 1973;
1985), and proper theory of names (Katz, 1977).
It is necessary to remark that contemporary philosophers are
mostly interested in proper names, creating their theories of names.
For instance, in the Stanford Encyclopedia of Philosophy, the article
Name (Cumming, 2009) starts with the statement that proper names
are familiar expressions of natural language and then deals only with
proper names.
In the 19th century, John Stuart Mill (1806–1873) developed the
predominant theory of names. The basic tenet of his theory suggests
that the meaning of a proper name is simply its bearer (direct refer-
ent) in the external world.
Now one of the most influential is the descriptivist (description)
theory of names generally attributed to Frege (1892) and Russell
(1905) and further developed by Church (1956) and Searle (1958).
This theory says that naming, and reference in general, goes on
by mentally connecting a set of properties with a name, identify-
ing something as having each of these properties, and applying the
name to the object according to this identification.
Frege (1892; 1892a) develops his approach on the distinction
between sense and reference. In the case of proper names, the sense
(or Sinn) of a name consists in the (usually) definite description that
speakers associate with it. This sense is objective (it is an abstract
object) for Frege and must not be confused with its subjective repre-
sentation in the mind of each individual speaker. At the same time,
a proper name can have more than one sense associated with it.
[Figure 5.13. The descriptivist model of a name: a triadic structure linking the name with its referent and its sense]

Following Frege, the descriptivist theory of names associates the


following triadic structure with a (proper) name (cf., Figure 5.13).
In this structure, the meaning (semantic content) of the name is
identical to the descriptions associated with the name by speakers,
while the name referent consists of the objects that satisfy these
descriptions. Besides, we see that this structure is, in essence, the
structure of a concept (cf., Figure 5.2).
Russell suggested a slightly different approach, making an impor-
tant distinction between what he calls “ordinary” proper names and
“logically” proper names. Logically proper names are indexicals such
as this and that, which are directly connected to sensual data or
other objects of immediate acquaintance. In contrast to this, ordi-
nary proper names are abbreviated definite descriptions.
The causal theory of names advanced by Saul Kripke and Hilary
Putnam assumes that whether a currently used name names a certain
object depends on whether the current use of the name causally depends
on its use by the people who originally dubbed the object with that name.
In this context, the causal theory of proper names is the view that:
— the meaning of a proper name is simply the individual object to
which, in the context of its use, the name refers;
— the name’s referent is originally fixed by the naming, in which
the name becomes a rigid designator of the referent;
— later uses of the name succeed in referring to the referent by being
linked by a causal chain to that original naming act.
Thus, the causal theory implies that all proper names get their
meaning by an initial act of naming, while their reference is fixed.
Whether it is a name of a person, a ship, a town, a planet, or what-
ever, the original act of naming always exists, and the object named
is rigidly connected to its name. In such a way, the name becomes a
rigid designator of what it refers to where a name is a rigid designator
if and only if it denotes its reference in all possible worlds.
The prominent philosopher Edmund Gustav Albrecht Husserl
(1859–1938) paid special attention to the explanation of the differ-
ence between names and objects. He separated different kinds of
names:
Meaningfully indicating names play the role of properties forming
a group that uniquely identifies an object and expresses a meaning.
Purely indicating names also form a group that uniquely identifies
an object but express no meaning other than being a name of some
object.
Universal names denote sets of objects united by a concept, which
is the meaning of a universal name.
For instance, “the victor in Jena” and “the loser in Waterloo”
are meaningfully indicating names of Napoleon Bonaparte although
they express different meanings.
In a similar way, “the equilateral triangle” and “the equiangular
triangle” are universal names that express different meanings but
designate the same class of objects, namely, the class of equilateral
(equivalently, equiangular) triangles.
In contrast to this, such names as Aristotle, Socrates, Napoleon
Bonaparte, Edmund Gustav Albrecht Husserl and so on, have no
meaning, but have the role of designating an object, namely, a defi-
nite person.
Thus, we can see that logic and philosophy have paid a signifi-
cant attention to names, treating name as one of the basic objects
of the logical and philosophical enquiry. Along these lines, specific
logics of names have been also developed. For instance, Abadi (1998)
constructed a logic to explicate and work with the meaning of local
names the Simple Distributed Security Infrastructure (SDSI) used
to provide security for distributed computer systems. Halpern and
van der Meyden (2001) built the Logic of Local Name Containment,
which provides a more exact characterization of SDSI name resolu-
tion. Its semantics is closely related to that of logic programs, leading
to efficient implementation of queries concerning local names.
Meanwhile, it was discovered that names and naming are intrin-
sically connected to named sets (cf., Appendix and (Burgin, 2011)).
For instance, all models of concepts considered above either are
named sets (fundamental triads), e.g., the set-theoretical model
(Diagram (5.2)), the fuzzy set-theoretical model (Diagram (5.3)) and
the knowledge-representation model (Diagram (5.4)), or are built of
named sets, e.g., the Frege model (Figure 5.2), the Russell model
(Figure 5.3) and multilevel representational model (Figure 5.12). Uti-
lization of named sets as a foundation for naming in logic provides
means for making logical studies more rigorous and concise.
The standard model of a named set consists of three components:
(1) people or some objects, (2) their names, and (3) connections
between people (objects) and their names.
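For illustration, this standard model can be written down as a small Python data structure; the class below is only a sketch of the three components and not the formal definition given in the Appendix.

```python
class NamedSet:
    """A fundamental triad: a support set, a set of names, and a naming
    correspondence connecting elements of the support with their names."""
    def __init__(self, support, names, naming):
        self.support = set(support)
        self.names = set(names)
        self.naming = set(naming)   # a set of (object, name) pairs

    def names_of(self, obj):
        """All names connected to a given object."""
        return {n for (o, n) in self.naming if o == obj}

people = NamedSet(
    support={"person_1", "person_2"},
    names={"Ann", "Bill"},
    naming={("person_1", "Ann"), ("person_2", "Bill")},
)
print(people.names_of("person_1"))   # {'Ann'}
```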
However, it is necessary to understand that named set theory
(Burgin, 2011) is not a theory of names. It is a mathematical theory
of fundamental mathematical structures and the unified foundation
of mathematics. At the same time, named sets rigorously explicate
the structure of the concept name and consequently provide tools for
building mathematical foundations for different theories of names.
Terms are a special case of names. There are two meanings of
the word term. In general, a term is a name (word) that has an exact
definition. In mathematics and logic, formal expressions are called
terms.
When we speak about a name, it is, as a rule, a name of some
object. Besides, the word object is very popular. Nevertheless, there
is no exact definition of an object. For instance, Gaede writes:
“. . . the study of Physics is first and foremost the study of objects.
Indeed, the most important topics of contemporary Physics revolve
around physical objects. Hawking (in his book “A brief history of time”)
states that space-time is an object, a black hole is widely considered to
be a dynamic object, and Particle Physics is defined as the study of
motion of subatomic objects known as ‘particles’. Therefore, in Physics,
we have no alternative but to define what we mean by object.
It turns out, however, that in the entire history of Physics no one has
ever bothered to define this fundamental term. Not a single textbook
begins by defining what an object is.”
(Gaede, 2003)

There are also particular theories of object, in which there are


definitions or, at least, description of the concept object. However, in
each case, the concept of an object in these theories is very specific
and restricted to a particular area.
For instance, object relations theory is a derivative of psychoana-
lytic theory that emphasizes interpersonal relations, primarily in the
family, e.g., between mother and child. In this theory, object has a
very specific meaning, being a person who is the target of another’s
feelings or intentions, and representation refers to the way the per-
son has or possesses an object and is the mental image of an object.
There are two positional types of objects:

— An external object is an actual person, place or thing that a person


has invested with emotional energy.
— An internal object is one person’s mental representation of
another person, such as a reflection of the child’s way of relating
to the mother, idea, or fantasy about a person, place, or thing.

In addition, there are two structural types of objects:

— A part-object is an object that is part of a person, such as a hand


or breast.
— A whole-object is another person who is recognized as having
rights, feelings, needs, hopes, strengths, weaknesses, and insecu-
rities just like one’s own.

In computer programming, there are object-oriented program-


ming languages based on the concept of an object. A software
object consists of two parts: the internal object state and the object
behavior. The state is written in fields, which are variables in the
object-oriented programming language, while the object behavior
is represented through methods, which are functions in the object-
oriented programming language. Methods operate on the internal
object states and serve as the primary mechanism for object-to-
object communication.
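For example, a short Python class makes the two parts visible: the fields keep the internal state, and the methods realize the behavior and serve as the only intended channel of object-to-object communication (the account example is, of course, purely illustrative).

```python
class BankAccount:
    """A software object: fields keep the internal state, methods define the behavior."""

    def __init__(self, owner, balance=0):
        # internal object state (fields)
        self.owner = owner
        self.balance = balance

    # object behavior (methods): the intended way to interact with the state
    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

account = BankAccount("Ann")
account.deposit(100)
account.withdraw(30)
print(account.balance)   # 70
```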
Object-oriented programming provides a number of benefits,
such as:

1. Modularity means that the source code for any object can be writ-
ten and maintained independently of the source code for other
objects in the same program. In addition, once created, an object
can be easily passed around inside the software system.
2. Reusability means that it is possible to use an object from one
program in any number of other programs. Thus, a software devel-
oper is able to use already implemented and tested complex, task-
specific objects, created by expert programmers.
3. Testability and debugging ease means that it is possible to test and
debug each object separately from other objects and the whole
software system.
4. Information-hiding means that interacting only with an object’s
methods, it is not necessary to use (disclose) the details of its
internal implementation.
5. Exchangeability means that it is possible to change one object by
another one if the first one, for example, turns out to be proble-
matic.

An interesting theory of objects was developed by Alexius


Meinong Ritter von Handschuchsheim (1853–1920), who was an
Austrian philosopher and psychologist from the University of Graz
(Meinong, 1904; 1904a; 1907). Assuming that the concept of an
object cannot be defined in terms of a general type and its varia-
tions, Meinong introduces only a vague understanding of the concept
object. So, whatever can be experienced in some way, i.e., be the tar-
get of a mental act, and be denoted by any grammatically correct
phrase is an object (Gegenstand in German). Consequently, not only
physically existing things but also all kinds of items of imagination
and thought are objects including even impossible objects such as
the round square, as well as paradoxical objects, or as Meinong calls
them, defective objects, such as the person who says “Whatever I say
is a lie.” For instance, the phrases “the present King of France,” or
“the round square” are supposed to denote genuine but paradoxical
objects. It is admitted that such objects do not subsist, but never-
theless they are supposed to be objects. This is in itself a difficult
view; but the chief objection is that such objects, admittedly, are apt
to infringe the law of contradiction.
To explain the possibility of such objects, Meinong analyzes the seem-
ingly paradoxical sentence “There are objects of which it is true that
there are no such objects” using two closely related principles: (1)
the principle of the independence of so-being from being, and (2) the
principle of the indifference of the pure object to being (Meinong,
1904).
The independence principle states that an object with properties
is independent of whether it has being or not.
The indifference principle states that an object is by nature indifferent
to being.
Meinong expressed this in the form that “the pure object stands
‘beyond being and non-being’”, meaning that neither being nor
non-being belongs to the make-up of an object’s nature.
In his theory, Meinong introduces the notion of incompletely deter-
mined objects, i.e., objects that are undetermined with respect to at
least one property. An example of incomplete objects is a triangle.
In relation to defective objects, Meinong separates absurd objects,
as examples of nonsense such as “lvap” or “bnuv de”, from inconsis-
tent objects such as the round square. In this context, inconsistency
is something that involves incompatible properties but is still under-
standable, whereas nonsense is something that cannot be understood
at all.
Here it is necessary to make two remarks. First, while people are
expanding their knowledge, what is absurd for one generation may
become consistent knowledge for another generation. For instance,
the word quark or the sentence (1) “Neutrons are built of quarks”
was an absurdity for all people living in the 10th century but now
physicists know that a quark is a legal physical object, while the
sentence (1) describes a meaningful physical relation between two
kinds of physical objects — neutrons and quarks.
Second, what is an absurdity for one person can be meaningful
for another individual. For instance, the word boson means nothing
for a farmer but is a very important term for a physicist who studies
the quantum reality.
The same is true for inconsistencies. For instance, from ancient
times to our days, philosophers are using the term round square as
an archetypal example of inconsistency, which does not define any
object. Meinong, for example, assumed that such objects as a round
square have no category of being at all as they are “homeless objects”,
to be found not even in Plato’s heaven of ideas.
However, mathematicians were able to ascribe meaning to this
term by building a mathematical object, which is naturally called
a round square. To do this, they build a geometrical object, which is
a circle and a square at the same time. As any circle is round, this
object is a round square. Let us describe how we can build such a
round square.
In the Euclidean metrics, the geometrical shape ABCD in
Figure 5.14 is a square because all its sides are equal and all its
angles are equal to 90◦ . At the same time, taking the Manhattan
metrics, the figure ABCD is a circle.
Indeed, a circle is a geometric figure in which all points are at the
same distance from one point, which is called the center of the cir-
cle. The Manhattan, or taxicab, metrics (cf., (Krause, 1987)) defines
the distance d between two points in the coordinate plane in the

[Figure 5.14. A round square: the figure ABCD with vertices A(−1, 0), B(0, 1), C(1, 0), and D(0, −1) in the coordinate plane, with center O at the origin]


following way:
d((x, y), (u, v)) = |x − u| + |y − v|.
In particular, the distance between the point (0, 0) and the point
(x, y) is equal to |x| + |y|. Consequently, the distance from the point
(0, 0) to any point of the figure ABCD is equal to 1. Thus, the
figure ABCD is a circle in the Manhattan metrics and a square in
the Euclidean metrics. Consequently, it is a round square.
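This construction is easy to check numerically. The following Python sketch samples points on the boundary of the figure ABCD, i.e., on the set where |x| + |y| = 1, and verifies that every sampled point lies at taxicab distance 1 from the center O, so that the Euclidean square is indeed a circle of radius 1 in the Manhattan metrics.

```python
def manhattan(p, q):
    """Taxicab distance d((x, y), (u, v)) = |x - u| + |y - v|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

# Sample points on the four sides of the square ABCD with vertices
# A(-1, 0), B(0, 1), C(1, 0), D(0, -1): the set of points with |x| + |y| = 1.
points = []
steps = 100
for i in range(steps + 1):
    t = -1 + 2 * i / steps          # t runs from -1 to 1
    points.append((t, 1 - abs(t)))  # upper two sides
    points.append((t, abs(t) - 1))  # lower two sides

center = (0.0, 0.0)
assert all(abs(manhattan(p, center) - 1.0) < 1e-12 for p in points)
print("every boundary point is at taxicab distance 1 from O")
```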
In his theory, Meinong gives the following classification of objects.
Objects, which always have outside-being, are separated into two
classes:
1. Objects that have being are separated into two classes:
a. Real objects, which exist as well as subsist.
b. Ideal objects, which only subsist.
2. Objects that do not have being are separated into two classes:
a. Objects that have non-being are separated into two classes:
i. Non-contradictory objects.
ii. Contradictory objects.
b. Objects that are not determined with respect to being.
Defining an object as what can be experienced in some way,
Meinong analyzes experiences. By his approach, all experiences, even
the most elementary ones, are complex mental phenomena, contain-
ing, at least, three constituents: (1) the action, (2) the psychological
(mental) content, and (3) the object of the experience. This gives us
the following diagram (cf., Figure 5.15).
While the first two components, (1) and (2), must exist if the
experience exists, the third component (3) need not. If somebody

[Figure 5.15. The structure of a Meinong object: a triad connecting the action, the psychological content, and the object of the experience]


has a strong hope for total peace, for example, then (1) the action of
hope exists, (2) the psychological content, i.e., hope, exists, but (3)
no total peace may occur. It means that there is only the non-existent
object the total peace.
Meinong believes that experiences can have different objects for
two reasons. First, different kinds of acts correspond to different
kinds of objects. For instance, “objects” correspond to representa-
tions, while “objectives” are related to thoughts. Second, it is pos-
sible to assume that inside an action, any variation of the objects
is dependent on a variation of some mental component that is the
psychological content of the experience. The difference between the
objects must somehow come down to an internal difference between
the representations in question. If you have two different represen-
tations, one of red and another of green, for example, the difference
between the objects is founded on a genuinely mental difference,
namely the difference between the psychological red-content and the
psychological green-content.
Meinong calls the relation of a content to its corresponding object
the “adequacy relation” (Meinong, 1910), and he takes it to be an
ideal relation. Ideal relations, in contrast to real relations, subsist
necessarily between the terms of the relation. If one color, say red,
is different from another, say green, then they must be different. If
you compare colors located somewhere, the relation between a color
spot and its location is called real because the color, say red, could
be located elsewhere, or another color could be in the place of the
red color spot. Ideal relations, however, attach once and for all and
with necessity to their terms.
Meinong just postulates the adequacy relation and offers only neg-
ative determinations of it. He stresses the point that adequacy is not
a relationship of sameness or of similarity. Since it is an ideal relation,
real relations — for example pictorial or even causal relationships —
are excluded. A positive hint is given by a kind of metaphorical use
of the word “fitting”: the mental content and its object must be fitted
to each other.
As Meinong supposes that the different kinds of acts are coordi-
nated with the different kinds of presented objects, his classification
of mental elementary experiences allows a categorization of all
objects. Namely, there are two kinds of experience — intellectual, represented
by thoughts, and emotional, represented by feelings and/or desires. This
results in three kinds of objects: objects of thought, objects of feeling,
and objects of desire.
We see that Meinong elaborated an extensive theory of objects but
did not give a sufficiently clear definition of an object. He was interested
in what objects exist and what objects do not exist but, like
the majority of philosophers and scientists, considered the concept of
object so clear that it does not need any definition. However, even
a rigorous discussion of object existence demands a sufficiently exact
definition of object.
Contemporary theories of objects, such as substance theory and
bundle theory, also do not provide adequate definitions of object.
Substance theory of objects maintains that an object is a sub-
stance, which stands under the change. However, ideal entities, such
as concepts, ideas, algorithms or names, can also be objects.
Bundle theory of objects implies that an object is a collection of
properties and relations. However, physical objects have independent
being and cannot be reduced to their properties and relations.
To eliminate these deficiencies and limitations, we suggest the
following definition.

Definition 5.2.4. An object is anything that is considered as a


whole, i.e., in its entirety, and has a name.
This definition correlates well with the definition of John Dewey,
who wrote (Dewey, 1938):
“By “object” is meant some element in the complex whole that is defined
in abstraction from the whole of which it is a distinction.”

However, in his definition, Dewey uses other concepts, such as


element or whole, which themselves have loose (if any) definitions.

In essence, we can see objects everywhere. According to the exis-


tential structuration of the world (cf., Section 2.2), there are physical
objects, mental objects, and structural objects.
[Figure 5.16. The structure of a total object: a triad of the structural component (e.g., name), the physical component, and the mental component]

At the same time, people make a distinction between material


(physical) objects, ideal objects, and abstract objects. This classifi-
cation is relative. For instance, the Earth as a planet is a material
object. The Earth as a goddess in Greek mythology is an ideal object,
while the Earth as an element of the Copernican model of the Solar
system is an abstract object.
It is possible to introduce the concept of a total object as an object
that has all three components — physical, mental, and structural.
For instance, when we see a tree in a forest or garden, it is a total
object, for which the tree itself is the physical component of this total
object. The image of this tree in mentality of a person is the mental
component of this total object and the structure of this tree is the
structural component of this total object. Note that the name is the
simplest structural component of a total object.
As a result, the model of a total object (cf., Figure 5.16) is similar
to the Russell model of a concept (Figure 5.3).
Now having a definition of an object and the corresponding typol-
ogy, it is possible to explore the problem of object existence as it was
done by Meinong, Russell, and several other philosophers, who inves-
tigated this problem. In contrast to what has been done before, we
can base our exploration on the given definition and the multidimen-
sional theory of existence developed in (Burgin, 2012).
The problem of existence has been in the center of philosophy
from the very beginning of its existence. Philosophers suggested dif-
ferent approaches to this problem and created theoretical systems
and philosophical directions based on these approaches. In West-
ern philosophy, the main of these directions are: idealism (objective,
subjective, and pluralistic), materialism, realism, pragmatism, social
constructivism, and existentialism. All these systems assumed that
there is only one unique reality and it is necessary to understand and
describe it. Each of these directions gives its own description of this
unique reality.
In contrast to this, the multidimensional theory of existence pos-
tulates three pure forms (dimensions) of reality:
1. The actual reality consists of natural objects and processes directly
or indirectly perceived by senses and reflected in the central ner-
vous system (CNS).
2. The virtual reality, often called virtuality, is created, simulated or
reflected by some technological system (device or machine), e.g.,
computer games, movies, and videos.
3. The imaginary reality is created by mentality, e.g., heroes in prose
and poems, characters in movies and plays.
In addition, there are four combined forms of reality:
1. The mixed reality is a combination of actual and virtual reality,
i.e., it is situated between actual reality and virtual reality forming
the Virtuality Continuum and including augmented reality and
augmented virtuality.
2. The materialized reality is a combination of imaginary and virtual
reality.
3. The actualized reality is a combination of imaginary and actual
reality.
4. The enhanced reality is a combination of actual, imaginary and
virtual reality.
Understanding multiplicity of realities allows obtaining natural
solutions to problems of object existence, which bothered philoso-
phers for a long time. For instance, philosophers actively discuss whether such
objects as “the king of France in 2000” or “Pegasus” or “a golden
mountain” exist. The multidimensional theory of existence tells us
that all these objects do exist but “the king of France in 2000” and “a
golden mountain” exist in the imaginary reality of the philosophical
discourse, while “Pegasus” exists in the imaginary reality of Greek
legends.
There has been an active discussion in philosophical circles about the
sense in which mathematical objects in general, and numbers in particular,
exist. The multidimensional theory of existence explains that num-
bers and other mathematical structures exist in the world of struc-
tures, as well as in mentality of people. The world of structures is the
actual reality, while mentality forms imaginary reality for numbers
and other mathematical structures (Burgin, 2012).
An important part of the concept meaning and sometimes the
whole meaning of the concept is its definition. Usually it places
the concept in a system of other concepts. According to Tappen-
den (2008), the discovery of a proper definition is rightly regarded
in practice as a significant contribution to mathematical knowledge.
For instance, the progress of algebraic geometry is reflected as much
in its definitions as in its theorems (Harris, 1988). In a similar way,
Arnauld and Nicole (1683) assume that nothing is more important
in science than classifying and defining in a good way.
To achieve better understanding of the physical world, people
elaborate definitions of different things or objects. To better under-
stand one another, people define words they use, or in other words,
they give definitions to names.
Naturally, studies of definitions have a long history, which begins
with ancient Greek philosophers Plato (427–347 B.C.E.), Archytas
(428–347 B.C.E.), Aristotle (384–322 B.C.E.) and Antipater of Tar-
sus (2nd century B.C.E.) and continued with some Roman intellectu-
als such as Marcus Tullius Cicero (106–43 B.C.E.) and Gaius Marius
Victorinus (4th century). Later many philosophers and mathemati-
cians, including such thinkers as Peter Abelard (1079–1142), Thomas
Hobbes of Malmesbury (1588–1679), John Locke (1634–1704), Blaise
Pascal (1623–1662), Joseph Diaz Gergonne (1771–1859), Augustus
De Morgan (1806–1871) and John Stuart Mill (1806–1873), discussed
how to build correct definitions, what types of definitions exist, and
what it is necessary to define.
For instance, Victorinus distinguished the following types of def-
initions (cf., (Popa, 1976)):

— Substantial definitions;
— Conceptual definitions;
— Substitution definitions;
— Differentiating definitions;
— Causal definitions;
— Rhetoric definitions;
— Exemplifying definitions;
— Relative definitions;
— Listing definitions.
Definitions are a specific kind of knowledge and therefore, they
also have three forms:
• Descriptive definitions;
• Operational definitions;
• Representational definitions.
The majority of researchers starting with Plato and Aristotle
acknowledged only descriptive definitions in the form of statements
or propositions (Popa, 1976). Only in the 20th century, Bridgman
(1927) introduced operational definitions as sets of operations with
the defined objects. Representational definitions were not explicitly
introduced or used in philosophy and methodology of science. How-
ever, such definitions are often employed in early childhood. Indeed,
to learn concepts, children observe representatives, e.g., exemplars or
prototypes of some concept and these representatives shape represen-
tational definitions of the learned concepts. Note that representatives
can be real objects or their images, e.g., pictures or photographs. For
instance, to learn the concept dog, a child observes different dogs con-
necting them with the word dog and in such a way, learns the concept.
Besides, representational definitions correspond to listing definitions
described by Marius Victorinus.
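Although representational definitions are not formalized in this book, their mechanism can be sketched computationally: a concept is represented by a prototype computed from observed exemplars, and a new object is assigned to the concept whose prototype it is closest to. The numeric feature encoding and the nearest-prototype rule in the following Python sketch are assumptions made only for the illustration.

```python
from math import dist  # Euclidean distance, available since Python 3.8

# Exemplars of two concepts, encoded by illustrative numeric features
# (e.g., size and loudness on some arbitrary scale).
exemplars = {
    "dog": [(0.6, 0.8), (0.7, 0.9), (0.5, 0.7)],
    "cat": [(0.3, 0.4), (0.2, 0.3), (0.4, 0.5)],
}

def prototype(points):
    """The prototype is the componentwise mean of the observed exemplars."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

prototypes = {name: prototype(pts) for name, pts in exemplars.items()}

def classify(obj):
    """A representational definition in action: name the nearest prototype."""
    return min(prototypes, key=lambda name: dist(obj, prototypes[name]))

print(classify((0.65, 0.85)))   # 'dog'
```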
The question of what it is necessary to define separated definitions
into two classes. Some philosophers presumed that people define things
(physical objects). For instance, Aristotle in his Topics wrote:
“A definition is a proposition describing the essence of a thing.”

However, starting with Hobbes, philosophers changed their atti-


tude to definitions assuming that it was necessary to define words
(names) and not things. Now many think the main objective is to
define concepts.
5.2.2. Statements, queries, and instructions


It is more important that a proposition be interesting than that it
be true.
Alfred North Whitehead

The next level of logic consists of classical logical objects such as


statements, propositions, and predicates, and new logical objects —
abstract properties (in assertoric logics), queries and questions (in
erotetic logics) and instructions, operators and operations (in oper-
ational logics). These logical objects as formal constructions use
elements from the first level as building blocks. As the majority of
logics, including classical logics, are assertoric, we begin with asser-
toric logics.
Looking at the second level of contemporary assertoric logics, we
see statements, propositions, and predicates. In logic and philosophy,
the term proposition, which is derived from the word proposal, refers
either to (a) the “content” or “meaning” of a meaningful declarative
sentence or to (b) the pattern of symbols, marks, or sounds that make
up a meaningful declarative sentence.
In the classical logic, a proposition is a sentence that is either
true or false. For a long time, propositions have been studied as
minimal elements (atoms) of logic that are either true or false.
Recently, logicians started to study structured propositions (King,
2011). However, even before that, the famous mathematician and philoso-
pher Bertrand Russell (1872–1970) ascribed a definite structure to
propositions, regarding concepts as constituents of propositions (Rus-
sell, 1903). That is why many current structured propositions theo-
rists attribute the idea of structured propositions to Russell.
Statements and predicates also have developed structures, which
represent corresponding statements and predicates (Church, 1956;
Burgin, 2011).
According to contemporary logic, there are two meanings of the
term statement:
(1) A statement is a well-formed declarative sentence.
(2) A statement is the information content of a well-formed declar-
ative sentence.
While in the first case, statements and sentences coincide, in the
second case, a sentence is only a linguistic carrier of a statement,
whereas there may be many other linguistic carriers of the same
statement. For instance, it is natural to assume that sentences “Alex
wrote this letter” and “This letter is written by Alex” express the
same statement. Thus, in the second case, a sentence, which is a
linguistic object bearing a statement, is related to this statement,
which is a logical object, like a numeral, which is physical entity, to
the number it refers to, which is a structural entity. This shows that
the second approach to statement meaning is more arguable than the
first one.
Often a statement is viewed as a truth bearer. However, many
understand that it involves the binary valued truth function, mean-
ing that a statement can be either true or false. This misconception
comes to us from classical logic. Now when we know various many-
valued logics, such as fuzzy logic, in which the truth function takes
values in the interval [0, 1], or intuitionistic logic, in which the truth
function takes values in the set “{true, false, unknown}”, it is neces-
sary to admit that the truth values of sentences cannot be restricted to two
values — true and false.
Under the condition of being a truth bearer in classical logic, the
sentence “In 2000, the King of France was wise” is not (does not
contain) a statement because this sentence is neither true nor false
as in 2000, there was no king in France. However, taking a logic
with three truth-values “true, false, undefined”, we conclude that
this sentence is (contains) a statement.
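As an illustration of a truth function with more than two values, the following Python sketch evaluates statements in a three-valued setting in the style of Kleene's strong tables (the particular tables are an assumption of the example, not a claim about any specific logic): a statement about a non-existent subject simply receives the value undefined.

```python
TRUE, FALSE, UNDEFINED = "true", "false", "undefined"

def neg(a):
    """Three-valued negation."""
    return {TRUE: FALSE, FALSE: TRUE, UNDEFINED: UNDEFINED}[a]

def conj(a, b):
    """Three-valued conjunction in the style of Kleene's strong tables."""
    if a == FALSE or b == FALSE:
        return FALSE
    if a == TRUE and b == TRUE:
        return TRUE
    return UNDEFINED

# "In 2000, the King of France was wise": no king existed, so the statement
# is evaluated neither as true nor as false.
king_of_france_was_wise = UNDEFINED
print(neg(king_of_france_was_wise))         # undefined
print(conj(king_of_france_was_wise, TRUE))  # undefined
```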
The concept statement is intrinsically related to the concept
proposition, which is fundamental in contemporary logic and has
the following properties. Like statements, propositions are repre-
sented by affirmative sentences reflecting the meaning and/or inten-
tion. They are the primary bearers of truth-values and the objects
of belief and other “propositional attitudes” (i.e., what is believed,
doubted, etc.), as well as the referents of that-clauses and the mean-
ings of declarative sentences.
This situation involves the following named set (fundamental
triad).
{propositions} −−representation−→ {affirmative sentences}.

Aristotelian logic treats a proposition as a sentence that affirms


or denies a predicate of the subject of this sentence. Let us consider
the following examples: “All stars are big” and “The Sun is a star”.
In the first example, the subject is “All stars” and the predicate
“are big”. In the second example, the subject is “The Sun” and the
predicate is “is a star”.
Some philosophers do not distinguish statements and propositions
in natural languages although formal logic uses only the term proposi-
tion. However, others conceive these terms as different. For instance,
Strawson advocated the use of the term statement with the second
meaning in preference to the term proposition (Strawson, 1950).
At the same time, some philosophers argue that some (or all)
kinds of speech, actions and other objects besides the declarative sen-
tences also have propositional content being carriers of propositions.
For instance, some signs can convey propositions without forming a
sentence nor even being linguistic, e.g., traffic signs express definite
meaning, which is either true or false.
Bertrand Russell assumed that propositions were structured sys-
tems composed of objects and properties as constituents. Wittgen-
stein and some other philosophers held that a proposition is the set
of possible worlds/states of affairs in which it is true. One important
difference between these views is that Russell’s approach allows
one to differentiate two propositions that are true in all possible
worlds/states.
It is possible to theorize four relations between concepts statement
and proposition:
1. A proposition is a special case of statements. Indeed, this will be
the case if we assume that only declarative sentences are bearers
(carriers) of propositions, while both declarative and affirmative
sentences are bearers (carriers) of statements.
2. A statement is a special case of propositions. Indeed, this will be
the case if we assume that only declarative sentences are bearers
(carriers) of statements, while both declarative and interrogative
sentences with yes–no answers are bearers (carriers) of proposi-


tions.
3. Neither statements form a subclass of propositions nor do propo-
sitions form a subclass of statements.
4. Proposition and statement are different names of the same con-
cept.

Propositions are used in formal logic as units of a formal language.


Logics based on propositions are called propositional, sentential, or
statement logics. They include only operators (actually, names of
operators) and propositional constants as symbols in their languages.
The propositions in this language are either propositional constants,
which are considered atomic propositions, or composite propositions,
which are built by recursively applying operators to propositions. It
means that proposition names are treated as propositions.
In contemporary logic, knowledge represented by propositions
acquires meaning through the possible-world semantics. The base
of this semantics for a propositional calculus C is a logical (formal)
universe W , states of which are fixed assignments of truth values to
primitive propositions from C. Such states of the universe W are
interpreted as the worlds that a cognitive system, e.g., an intelli-
gent agent, assumes as possible, i.e., a possible world is described
by the propositions that are true and by those that are false in this
world. In such a way, systems of propositions describe situations in
the world W .
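A minimal computational reading of this semantics can be given in Python: a possible world is a fixed assignment of truth values to the primitive propositions, and a composite proposition is identified with the set of worlds in which it is true. The encoding of formulas as Python functions is an assumption made for the sketch.

```python
from itertools import product

primitives = ["p", "q"]

# Every possible world is a fixed assignment of truth values to the primitives.
worlds = [dict(zip(primitives, values))
          for values in product([True, False], repeat=len(primitives))]

def holds(formula, world):
    """Evaluate a formula given as a Python function of the world."""
    return formula(world)

# The proposition "p and not q" is identified with the set of worlds where it is true.
p_and_not_q = lambda w: w["p"] and not w["q"]
true_worlds = [w for w in worlds if holds(p_and_not_q, w)]
print(true_worlds)   # [{'p': True, 'q': False}]
```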
Another type of logics, called predicate or quantificational logics,
includes names of variables and operators, predicate and function
symbols, and quantifiers as symbols in their languages. The propo-
sitions in these logics are more complex. First, terms are defined in
the following way.
A term is either a variable or a function symbol applied to the
number of terms equal to the number of variables in the function (its
arity).
For instance, if x, y, and z are variables and + is a binary function
symbol, i.e., the function + has two variables, then x+y and x+(y+z)
are terms. Propositions are constructed from terms.
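This definition translates directly into a small data structure. The Python sketch below enforces only the arity condition; the symbols and the tuple representation are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Var:
    """A variable is a term by itself."""
    name: str

@dataclass(frozen=True)
class FunTerm:
    """A function symbol applied to exactly `arity` terms is again a term."""
    symbol: str
    arity: int
    args: Tuple

    def __post_init__(self):
        if len(self.args) != self.arity:
            raise ValueError(f"{self.symbol} expects {self.arity} arguments")

x, y, z = Var("x"), Var("y"), Var("z")
plus = lambda a, b: FunTerm("+", 2, (a, b))

t1 = plus(x, y)            # the term x + y
t2 = plus(x, plus(y, z))   # the term x + (y + z)
print(t2)
```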
A proposition is either a predicate symbol applied to the number


of terms equal to the number of variables in it (its arity), or an
operator applied to the number of propositions equal to the number
of variables in it (its arity), or a quantifier applied to a proposition.
For instance, taking a binary predicate symbol =, the quantifier
∀, the symbol N as a name of the set of all natural numbers, the symbol
∈, the operator ⇒, and three variables x, y, and z, we can build the
proposition ∀x, y, z ∈ N ((x = y) ⇒ (x + z = y + z)). This more
complex structure of propositions allows predicate logics to build
finer inferences achieving greater expressive power.
Thus, it is possible to suggest denoting informational content of
sentences in natural languages by the term statement and calling
sentences in formal logical languages by the name proposition.
According to historical evidence, in the Western philosophical tra-
dition, it is possible to find the first employment of the notion of
proposition as the informational content of sentences in the writ-
ings of the Stoics. Namely, in the third century B.C., Zeno and his
followers distinguished the material aspects of words and sentences
from lekta that denoted what was expressed by words and sentences.
In turn, lekta included axiomata, or the meaning of declarative sen-
tences, while only axiomata, and not the words used to articulate
them, were properly said to be true or false.
In symbolic logic as a discipline, statements are formalized as
propositions and well-formed formulas. Here we describe how this
formalization is performed in symbolic logic and in mathematical
logic as the central sub-discipline of symbolic logic.
The majority of logics as systems have many formation layers.
The first formation layer consists of logics of sentences (also called
propositional or sentential logics). The second formation layer con-
sists of the first order predicate logics, which are logics of objects. The
third formation layer consists of the second order predicate logics and
so on.
There are different predicate logics (or predicate calculi): the
classical predicate logic (predicate calculus), monadic predicate logic
(predicate calculus) (Tharp, 1973), predicate functor logic (Quine,
1976; Kuhn, 1983), etc.
Predicate logics use various quantifiers. The most popular quan-
tifiers are:

— the existential quantifier ∃ means “there is” or “there are”;


— the universal quantifier ∀ means “for any” or “for all”.

If A = {ai ; i ∈ I} is an infinite set, then the expression “a pred-


icate P (x) is true for almost all elements from A”, or “almost all
elements from A have a property P ” or in logical notation ∀∀xP (x),
means that P (x) can be untrue only for a finite number of elements
from A. For instance, if A = ω, then almost all elements of A are big-
ger than 10, or another example is that conventional convergence of
a sequence l to x means that any neighborhood of x contains almost
all elements from l.
Church (1956) describes binary formal equivalence quantifier and
several implication quantifiers: singular-binary implication quanti-
fier, binary formal implication quantifier, ternary implication quan-
tifier and so on.
One more quantifier, ∃! , means “there is (a) unique”. It is called
the uniqueness quantifier.
There are also quantifiers ∃many , which means “there are many”,
and ∃few , which means “there are few”.
All these quantifiers are linear. To achieve higher expressibility of
logical languages, logicians invented branching quantifiers. A typical
example of such quantifiers is the simplest Henkin quantifier

∃x∀y,
∃a∀b.

In quantum logics and relational databases, quantifiers are map-


pings that satisfy specific axioms (cf., for example (Halmos, 1962;
2000; Leblanc, 1962; Plotkin, 1991)). For instance, in relational
databases, the existential quantifier ∃ is represented by the database
operator selection and the universal quantifier ∀ is represented by
the database operator projection.
As a result, quantifiers are represented by named sets as mappings
and operators are also represented by named sets (cf., Appendix).
Besides, whenever we have a representation, it involves a fundamen-
tal triad (named set). Thus, in the case of quantifiers, we have two
fundamental triads:
∀ −−represented by−→ projection operator

and

∃ −−represented by−→ selection operator.

Basic structures of logic, such as propositions, predicates, logical


calculi, and logical varieties, are constructed using different logical
languages. It is possible to show that any language, formal or natural,
is built of different named sets. All this demonstrates that any logic
is formed using different named sets as construction blocks.
Indeed, a language in a constructive representation/definition is a
triad (named set) of the form L = (X, R, L) where X is the alphabet,
R is the set of constructive algorithms/rules for building well-formed
(correct) expressions (e.g., words or texts) from L, and L is the set
of words of the language L.
Logical languages are artificial languages developed intentionally
for representation of logical reasoning within a culture. The typical
feature of logical languages is that their structure (inner relations)
and grammar (formation rules) are intended to express the logical
information within linguistic expressions in a clear and effective way.
Languages used in logic have, as a rule, constructive definitions in
a form of production rules. Elements of logical languages are logical
expressions or formulas constructed in a proper way and often called
wff.
Elements of the language of the classical propositional or sen-
tential logic/calculus represent propositions in a formal way using
variables and constants. Propositional variables and constants are
denoted (named) by letters, which are considered as atomic formulas
and form a part of the alphabet of the language, while another part
is formed by symbols of the following logical operations:
negation is denoted either by ¬ or by ∼; for example, ¬A means
“not A”;
September 27, 2016 19:40 Theory of Knowledge: Structures and Processes - 9in x 6in b2334-ch05 page 488

488 Theory of Knowledge: Structures and Processes

conjunction also called “logical and” is denoted either by ∧ or by &


or by ·; for example, A ∧ B means “A and B”;
disjunction also called “logical or” is denoted by ∨; for example,
A ∨ B means “A or B”;
implication is denoted either by → or by ⇒ or by ⊃; for example,
A → B means “A implies B”;
equivalence is denoted either by ↔ or by ≡ or by ⇔; for example,
A ↔ B means “A is equivalent to B”.
If P (x) and Q(x) are some propositions or predicates, then:
Their conjunction is equal to P (x) & Q(x) or P (x) ∧ Q(x).
Their disjunction is equal to P (x) ∨ Q(x).
The negation of P (x) is equal to ¬P (x).
It is possible to use fewer operations (and thus, a smaller alpha-
bet) by expressing some of the logical operations by means of others,
e.g., P → Q is equivalent to ¬P ∨ Q. For example, Church (1956) uses
only one logical operation ⊃ to build the classical propositional calculus.
In addition to constants, variables and operation symbols, the left
and right parentheses, ( and ), or/and the left and right brackets, [ and ],
are included in the alphabet.
It is interesting that there are logics that use many other opera-
tions. For instance, linear logic has the following system of operations:
• ⊗ is called multiplicative conjunction or multiplicative and or
times (or sometimes tensor);
• ⊕ is called additive disjunction or additive or or plus;
• & is called additive conjunction or additive and or with;
• ⅋ is called multiplicative disjunction or multiplicative or or par;
• ∧ is called classical conjunction;
• ∨ is called classical disjunction;
• ⊤ is called top;
• ⊥ is called bottom;
• ! is interpreted as of course (or sometimes bang);
• ? is interpreted as why not;
• −◦ is called linear implication;

Elements of the language LP of the classical propositional (or


sentential) logic/calculus are called wffs and are traditionally built
by the following rules:

1. Letters of the alphabet are wffs from LP .


2. If ϕ is a wff, then ¬ϕ is a wff from LP .
3. If ϕ and ψ are wffs, then (ϕ ∧ ψ), (ϕ ∨ ψ), (ϕ → ψ), and (ϕ ↔ ψ)
are wffs from LP .

These rules form the set of algorithms R that are used to build
the language LP .
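The three formation rules can be mirrored by a short recursive check. In the Python sketch below, formulas are represented as nested tuples and connectives by English words; this encoding is an assumption of the example, not part of the language LP itself.

```python
LETTERS = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
CONNECTIVES = {"and", "or", "implies", "iff"}

def is_wff(phi):
    """Check well-formedness following the three formation rules of L_P."""
    if isinstance(phi, str):                      # rule 1: letters are wffs
        return phi in LETTERS
    if isinstance(phi, tuple) and len(phi) == 2 and phi[0] == "not":
        return is_wff(phi[1])                     # rule 2: negation of a wff
    if isinstance(phi, tuple) and len(phi) == 3 and phi[0] in CONNECTIVES:
        return is_wff(phi[1]) and is_wff(phi[2])  # rule 3: binary connectives
    return False

print(is_wff(("implies", ("not", "A"), ("or", "A", "B"))))   # True
print(is_wff(("implies", "A")))                              # False
```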
Elements of the language LCP C of the classical predicate
logic/calculus of the first order give a formal representation of binary
properties. The predicate calculus language has a developed alpha-
bet and elaborated symbolic notation. Lower-case letters a, b, c, . . .,
x, y, z, . . . are traditionally used to denote individuals (variables or
constants). Upper-case letters M , N , P , Q, R, . . . are traditionally
used to denote (variable or constant) predicates.
The alphabet A of the language LCP C consists of six parts:

— A set F of function symbols (e.g., +, × or ·).


— A set P of predicate symbols (e.g., P1 or Pa ).
— A set C of logical connectives or symbols of operations (usually,
it is: , ∧, ∨, →, and ↔).
— A set S of punctuation symbols (usually, it is: ( , ), : and , ).
— A set Q of quantifiers (usually, they are ∀ and ∃, although some-
times other quantifiers such as ∀∀ and ∃! are also used).
— A set V of variables.

Note that non-classical logics often use not only logical opera-
tions but also logical operators. For instance, the modal operator □
expresses necessity, while the modal operator ♦ expresses possibil-
ity in modal logics.
Every function symbol, predicate symbol, and connective has its
arity. Namely, an n-ary function has the form f : Xⁿ → X and
an n-ary predicate has the form P (x1 , x2 , . . ., xn ). As a rule, 0-ary
predicates and/or 0-ary functions are called constants. Another way
to deal with constants is to include their names in the alphabet of
the language.
The language LCP C of the classical predicate calculus, as a general
structure, encompasses the language LP of the classical propositional
calculus because propositions may be constructed by juxtaposition of
a predicate with an individual constant or variable and using quan-
tifiers.
Elements of the language LCP C of the classical predicate
logic/calculus are also called wffs and built by the following rules:
(1) Letters of the alphabet A of the language LCP C are wffs from
LCP C .
(2) An expression P (x1 , x2 , . . ., xn ), where P is an n-ary predicate symbol, is a wff from LCPC .
(3) If ϕ is a wff, then ¬ϕ is a wff from LCPC .
(4) If ϕ and ψ are wffs, then (ϕ ∧ ψ), (ϕ ∨ ψ), (ϕ → ψ), and (ϕ ↔ ψ)
are wffs from LCP C .
(5) If H(x1 , x2 , . . ., xn ) is a wff containing a free variable x, then ∃
xH(x1 , x2 , . . ., xn ) and ∀ xH(x1 , x2 , . . ., xn ) are wffs from LCP C .
Here a variable is free if it is not bound by a quantifier. Consequently, the rule (5) makes every instance of x bound (that is, not free) in the formulas ∃xH(x1 , x2 , . . . , xn ) and ∀xH(x1 , x2 , . . . , xn ). In logic, the symbol ∃ means “there is” or “there are” and the symbol ∀ means “for any” or “for all”.
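The distinction between free and bound variables can be made computational. The following short Python sketch (an illustration, not the book's construction) computes the free variables of a formula in an assumed tuple representation in which atomic formulas look like ("P", "x", "y") and quantified formulas like ("forall", "x", body); all such names are chosen here for illustration only.

# Illustrative sketch: free variables of a first-order formula.
def free_variables(phi, variables=("x", "y", "z")):
    """Return the set of variables occurring free in phi."""
    op = phi[0]
    if op in ("forall", "exists"):
        # the quantified variable becomes bound in the subformula
        return free_variables(phi[2], variables) - {phi[1]}
    if op == "not":
        return free_variables(phi[1], variables)
    if op in ("and", "or", "imp", "iff"):
        return free_variables(phi[1], variables) | free_variables(phi[2], variables)
    # atomic case: an n-ary predicate applied to terms
    return {t for t in phi[1:] if t in variables}

phi = ("forall", "x", ("imp", ("P", "x"), ("Q", "x", "y")))
print(free_variables(phi))   # {'y'}: x is bound by the quantifier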
If A = {ai ; i ∈ I} is an infinite set, then the expression “a pred-
icate P (x) is true for almost all elements from A”, or “almost all
elements from A have a property P ” or in logical notation ∀∀xP (x),
means that P (x) can be untrue only for a finite number of elements
from A. For instance, if A is the set N of all natural numbers, then
almost all elements of A are bigger than 10. Taking the mathemat-
ical calculus (cf., for example, (Larson and Edwards, 2006; Burgin, 2008)), we have another example of the quantifier ∀∀, namely, the conventional convergence of a sequence l = {ai ; i = 1, 2, 3, . . .} to a number a means that any neighborhood Oa of a contains almost all elements from l, or in the formal language, we have:
lim i→∞ ai = a if and only if ∀Oa ∀∀i ∈ N (ai ∈ Oa).
There are many other logics and they have different languages.
For instance, conventional logics are extended to probabilistic log-
ics by assigning probabilities to statements, i.e., to propositions and
predicates. This allows representation of probabilistic knowledge (cf.,
for example, (Boole, 1854; Reichenbach, 1932; 1935; Hailperin, 1984;
Russell, 2014)). This direction in formal logic was initiated by Leib-
niz, who envisioned that it would be necessary to estimate likelihood
of propositions and a way of proof leading not to certainty but only to
probability of propositions. However, Leibniz did not develop such
a logic and it was George Boole, who introduced a mathematical
concept of imprecise probability aiming to reconcile classical logic,
which tends to express complete knowledge or complete ignorance,
and probability theory, which has a propensity to express partial
or/and imprecise knowledge or ignorance (Boole, 1854).
It is necessary to remark that some logics have an essentially dif-
ferent structure. For instance, the “logic” of (non-relativistic) quan-
tum mechanics is thought of as being the lattice of closed subspaces
of a separable infinite dimensional Hilbert space (Mackey, 1963).
5.2.3. Logical systems of inference
There is a tradition of opposition between adherents
of induction and of deduction.
In my view, it would be just as sensible for the
two ends of a worm to quarrel.
Alfred North Whitehead
Looking at the third level of logic, we see deductions and other
inference operations and processes. A deduction system consists of
deduction rules. Each deduction rule and consequently, each step of
deduction has the form
A→B (2.1)
or
A ⊢ B. (2.2)
Here A and B are some expressions or sets of expressions from
the language of the corresponding logic. For instance, taking as A
two propositions “We live in the USA” and “The USA are situated
in the Western Hemisphere of the Earth,” we can deduce as B the
proposition “We live in the Western Hemisphere of the Earth.”
Thus, a deduction rule is (has the structure of) the fundamental
triad (A, →, B).
A deduction is a sequence (of applications) of deduction rules, for example,
A → B → C → D → E, (2.3)
where A, B, C, D, and E are groups of logical statements. As a
result, any deduction incorporates the structure of a deduction rule in
a more elaborate structure of deduction, which is usually a sequence
of named sets.
The logical symbol |= is usually interpreted as truth-functional
entailment (cf., for example, (Bergman et al., 1980) or (Kleene,
1967)). That is, if a and b are some propositions or predicates, then
a |= b means “if a is true in some interpretation, then b is true in the
same interpretation”. In this context, |= b means that b is always true.
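For classical propositional formulas, truth-functional entailment can be checked mechanically by running through all interpretations. The sketch below is an illustration (not the book's code); it reuses the assumed tuple representation of formulas introduced earlier, and all function names are chosen only for this example.

# Illustrative sketch: brute-force check of a |= b over all interpretations.
from itertools import product

def evaluate(phi, v):
    """Evaluate a formula under the assignment v (a dict letter -> bool)."""
    if isinstance(phi, str):
        return v[phi]
    op = phi[0]
    if op == "not":
        return not evaluate(phi[1], v)
    a, b = evaluate(phi[1], v), evaluate(phi[2], v)
    return {"and": a and b, "or": a or b,
            "imp": (not a) or b, "iff": a == b}[op]

def letters(phi, acc=None):
    acc = set() if acc is None else acc
    if isinstance(phi, str):
        acc.add(phi)
    else:
        for sub in phi[1:]:
            letters(sub, acc)
    return acc

def entails(a, b):
    """True if b is true in every interpretation in which a is true."""
    atoms = sorted(letters(a) | letters(b))
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if evaluate(a, v) and not evaluate(b, v):
            return False
    return True

print(entails(("and", "p", ("imp", "p", "q")), "q"))   # True (modus ponens)
print(entails("p", ("and", "p", "q")))                 # False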
Dynamic properties of logics are realized by logical processes
based on reasoning operations, such as inference or deduction. Deduc-
tion is always formal, while there are three classes of inference:
formal inference, semiformal inference and informal inference. For
instance, in practical mathematics, inference is, as a rule, semifor-
mal, while in symbolic logic it is sometimes semiformal and some-
times formal, e.g., deduction in mathematical logic is a kind of formal
inference.
In turn, inference is a kind of reasoning when reasoning is per-
formed step by step and each step is a transition from premises to
conclusions. Another kind of reasoning is argumentation, the goal of
which is not formula derivation but persuasion of the opponent or
the audience.
Here we consider dynamical structures in mathematical logic. The
subject of mathematical logic has origins in philosophy. Philosophers
explained how and why the logical rules for inference or deduction
(such as the modus ponens or law of excluded middle) are valid. Later
philosophical arguments were formalized and then put into the math-
ematical form. It is also a legacy from philosophy that mathematical
logic distinguishes semantic analysis, answering the question “What
is true?” from syntactic considerations, answering the question “How
to express something true?” In such a way, logicians have developed
two traditional parts of mathematical logic: its syntax (proof theory)
and semantics (model theory).
Basic dynamic structures of logic are logical calculi and logical
varieties.
The idea of the concept of logical calculus comes from Leibniz, who also introduced the names differential calculus and integral calculus. He wrote that in the future the informal and vague arguments of philosophers would be changed for formal and exact calculations with formulas (Leibniz, 1989). Such calculations would allow one to find which of those philosophers was right and which was wrong.
To make such generalized calculations with formulas, people use
definite rules. Systems of rules form algorithms when they are pre-
cise, exactly realizable and sufficiently simple to be performed by a
mechanical device. Otherwise, such systems are called procedures.
Generalized calculations are performed with symbolic expressions,
which are elements of definite languages, usually, formal languages.
The goal of this work is to study relations between algorithms, pro-
cedures, languages, and calculi as a part of metalogic.
On the syntactic level, dynamics of a formal logic goes on in an
appropriate logical calculus in the traditional representation or by
the corresponding syntactic logical variety in the advanced represen-
tation (Burgin, 2010).
In mathematics, the word calculus has two meanings. The most
popular in general mathematics understanding is that Calculus is
a name that is now used to denote the field of mathematics that
studies properties of functions, curves, and surfaces. As this is the
most popular meaning in mathematics, we call it the Calculus. It is
usually subdivided into two parts: differential calculus and integral
calculus. The main tool of the Calculus is operating with functions to
study properties of these functions. This operation can be regarded
as a generalized calculation with these functions. This explains the
name calculus used for this field, which originated from the Latin
word meaning pebble because people many years ago used pebbles
to count and do arithmetical calculations. The Romans used calculos
subducere for “to calculate”.
Thus, the Calculus is called so because it provides analytic,
algebra-like techniques, or means of computing, which apply algo-
rithmically to various functions and curves. Many mathematical
problems that had very hard solutions or even such problems that
mathematicians had not been able to solve, after the calculus had
been developed, became easily solvable by mathematics students.
Later the Calculus developed into analysis, or mathematical anal-
ysis. There are also other calculi in analysis, for example, operational
calculus and calculus of variations.
Another mathematical meaning of the word calculus comes from
mathematical logic where calculus is a formal system used for logical
modeling of mathematical and scientific theories. In a traditional set-
ting, a logical calculus consists of three parts: axioms, rules of deduction (inference), and theorems (cf., (Kleene, 2002; Mendelson, 1997)).
There are two meanings of the term logical calculus. According
to the classical understanding, a logical calculus consists of a set A
of axioms or a priori statements, a set T of theorems or deduced
statements, and a set R of rules of deduction or inference. We call
this system (A, R, T ) a standard logical calculus or simply, a logical
calculus. If such a system (A, R, T ) contains non-logical axioms, it
is called a formal theory.
In another interpretation, a logical calculus consists only of a log-
ical language M and a set R of rules of deduction or inference. We
call this system (M , R) a free logical calculus.
Axioms form the foundation of a logical calculus because
the axiomatic approach provides efficient tools for knowledge
compression, storage, extraction and production. Axioms store
knowledge of the calculus, as well as of the represented system, in
the compressed form. Inference rules allow one to extract knowledge
from axioms and previously inferred knowledge items producing in
such a way new knowledge.
There is a long-standing debate whether a system that holds
axioms also possesses all knowledge inferred from these axioms. To
adequately answer this question and clarify the situation, we need to
make a distinction between potential knowledge and actual knowledge.
What is stored in the form of data in a knowledge system is actual
knowledge of this system.
What a knowledge system can extract or produce from its actual
knowledge is inner potential knowledge of this system.
What a knowledge system can extract or produce from accessible
to it knowledge is outer potential knowledge of this system.
Taking a logical calculus, it is natural to assume that axioms form
the actual knowledge of this calculus, while theorems give the inner
potential knowledge of this calculus.
Potential knowledge of a logical calculus is produced by inference
(deduction) algorithms. This naturally brings us to the levels of inner
potential knowledge.
Definition 5.2.5. If C = (A, R, T ) is a logical calculus, then the nth
level of inner potential knowledge consists of all theorems from T for
which there is an inference by rules from R with not more than n
steps.
By this definition, all axioms from A are situated at the zero level.
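Definition 5.2.5 suggests a simple computational picture: starting from the axioms (level 0), each further level adds everything obtainable by one more application of an inference rule. The following Python sketch is only an illustration of this idea (not the book's construction); it assumes that the sole rule is modus ponens and that implications are encoded as tuples ("imp", a, b).

# Illustrative sketch of levels of inner potential knowledge.
def level(axioms, n):
    """Return the set of formulas inferable in at most n steps."""
    known = set(axioms)                      # level 0: actual knowledge
    for _ in range(n):
        new = {f[2] for f in known
               if isinstance(f, tuple) and f[0] == "imp" and f[1] in known}
        if new <= known:                     # nothing new: the levels stabilize
            break
        known |= new
    return known

axioms = {"p", ("imp", "p", "q"), ("imp", "q", "r")}
print(level(axioms, 0))   # just the axioms
print(level(axioms, 1))   # adds q
print(level(axioms, 2))   # adds r as well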
Levels of inner potential knowledge allow us to solve the Problem
of Omniscience. Namely, Omniscience in this context means that a
person who has some knowledge K in the logical form and logical
rules of inference also knows all knowledge deducible from K. The
Problem of Omniscience asks whether it is possible for a human being
really to have infinite knowledge.
The solution to this problem is that a person who has some knowl-
edge K in the logical form and logical rules of inference knows only
some number of levels of knowledge deducible from K.
Logical calculi are constructed using different logical languages.
A language in a constructive representation/definition is a fun-
damental triad (named set) of the form L = (X, R, L) where X is
the alphabet of L, R is the set of constructive algorithms/rules for
building well-formed (correct) expressions (e.g., words or texts) from
L, and L is the set of words of the language L.
Logical languages are a special kind of artificial languages devel-
oped intentionally within a culture. The typical feature of logical
languages is that their structure (inner relations) and grammar (for-
mation rules) are intended to express the logical information within
linguistic expressions in a clear and effective way. Languages used
in logic have, as a rule, constructive definitions in a form of logi-
cal operations considered in the previous section. The rules of these
operations form the set of algorithms R that build the language L.
Elements of logical languages are logical expressions or formulas. To
emphasize that these formulas are constructed in a proper way, they
are often called wffs.
The language LP of the classical propositional (or sentential)
logic/calculus is considered in the previous section. Elements of the
language of the classical propositional or sentential logic/calculus are
formulas that give a formal representation of propositions. Proposi-
tional variables are denoted by letters, which are considered as atomic
formulas and form a part of the alphabet of the language.
The language LCPC of the classical predicate logic/calculus is con-
sidered in the previous section. Elements of the language of the
classical predicate logic/calculus are formulas that give a formal
representation of binary properties.
Now we give a general formal definition of a logical calculus.
Let L be a formal (logical) language or a language of well-formed
formulas and R be an algorithmic language, procedural language or
a language of rules of inference in L.
Definition 5.2.6. A (syntactic or deductive) logical calculus, usually
called calculus, in the pair of languages (L, R) is a triad of the form
C = (A, H, T ). (5.15)
where H ⊆ R, A, T ⊆ L, A is the set of axioms, H consists of infer-
ence rules (rules of deduction) by which from axioms the theorems
of the calculus are deduced, and the set of theorems T is obtained by
applying algorithms/procedures/rules from H to elements from A.
When L is a logical language and H consists of rules of logical
deduction, C is a deductive calculus.
The Principle or Law of Excluded Middle, or in Latin, tertium non datur (a third is not given), is one of the basic axioms of classical logic. It is formulated as “A has a property B or A does not have a property B.”
Categorical syllogisms of Aristotle give examples of classical
inference rules (cf., for example, (Timothy, 1973)).
The set A in the formula (5.15) is called the base (an axiom
system or generating expressions) of the calculus C . The set H is
called the system of inference rules of the calculus C . The set T
is called the body (the set of theorems or deducible expressions) of
the calculus C and is constructed by applying algorithms from H to
expressions from A. They are denoted as follows:
A = A(C ), H = H(C ), T = T (C ).
The named set (A, R, T ) is called the basic named set of the cal-
culus C . The same calculus may be represented by another (deduc-
tion) named set (A, d, T ) where the relation d connects any axiom
a from A with such theorems t from T that a is used in a process of
deduction of t.
Note that there are calculi that are not logical. For instance, differen-
tial calculus is a purely mathematical construction. That is, when L
contains descriptions and denotations of real/complex numbers and
functions, while H consists of rules of differentiation/integration, C
is the differentiation/integration calculus.
A deduction system consists of deduction rules. Each deduction
rule and consequently, each step of deduction has the form
A→B (5.16)
or
A ⊢ B. (5.17)
Here A and B are some expressions or sets of expressions from
the language of the corresponding logic. For instance, A consists of
two formulas ϕ and ϕ → ψ, while B is the formula ψ.
Thus, a deduction rule is the named set (A, →, B).
Often the system of inference rules has only one rule called modus
ponens:
ϕ, ϕ → ψ ⊢ ψ. (5.18)
In the programming language notation, it means
If ϕ and ϕ → ψ, then ψ. (5.19)
Other rules are derived from modus ponens and then used in for-
mal proofs to make proofs shorter and more understandable. These
rules serve to directly introduce or eliminate connectives, e.g.,
“If ϕ and χ, then ϕ ∧ χ” (or ϕ, χ ⊢ ϕ ∧ χ)
or
“If ϕ, then ϕ ∨ χ” (or ϕ ⊢ ϕ ∨ χ).
A standard transformation rule is substitution. This rule is neces-
sary because axiom schemas demand substitution to become axioms
and be applied.
Different systems of axioms for the classical propositional cal-
culus have been devised to achieve consistency, completeness, and
independence of axioms. All these systems are logically equivalent.
For instance, Kleene (2002) suggests the following list of axioms
(axiom schemas) for the classical propositional calculus, in which
Greek letters denote propositions:
ϕ → (χ → ϕ), (5.20)
(ϕ → (χ → ψ)) → ((ϕ → χ) → (ϕ → ψ)), (5.21)
ϕ → (χ → (ϕ ∧ χ)), (5.22)
ϕ → ϕ ∨ χ, (5.23)
χ → ϕ ∨ χ, (5.24)
ϕ ∧ χ → ϕ, (5.25)
ϕ ∧ χ → χ, (5.26)
(ϕ → ψ) → ((χ → ψ) → (ϕ ∨ χ → ψ)), (5.27)
(ϕ → χ) → ((ϕ → ¬χ) → ¬ϕ), (5.28)
¬¬ϕ → ϕ. (5.29)
Usually the system of inference/deduction rules of the classical
propositional calculus has only one rule called modus ponens (5.18).
A standard transformation rule is substitution. This rule is necessary
because axiom schemas demand substitution to become axioms and
be applied.
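It is easy to convince oneself that every instance of the schemas (5.20)–(5.29) is a tautology. The quick Python check below is an illustration only (not part of the book): each schema is written directly as a Boolean function of the truth values of ϕ, χ and ψ, and all eight assignments are enumerated.

# Illustrative check that the schemas (5.20)-(5.29) are tautologies.
from itertools import product

def imp(a, b):          # material implication
    return (not a) or b

schemas = {
    "5.20": lambda p, c, s: imp(p, imp(c, p)),
    "5.21": lambda p, c, s: imp(imp(p, imp(c, s)), imp(imp(p, c), imp(p, s))),
    "5.22": lambda p, c, s: imp(p, imp(c, p and c)),
    "5.23": lambda p, c, s: imp(p, p or c),
    "5.24": lambda p, c, s: imp(c, p or c),
    "5.25": lambda p, c, s: imp(p and c, p),
    "5.26": lambda p, c, s: imp(p and c, c),
    "5.27": lambda p, c, s: imp(imp(p, s), imp(imp(c, s), imp(p or c, s))),
    "5.28": lambda p, c, s: imp(imp(p, c), imp(imp(p, not c), not p)),
    "5.29": lambda p, c, s: imp(not (not p), p),
}

for name, schema in schemas.items():
    assert all(schema(*vals) for vals in product([True, False], repeat=3))
print("all schemas are tautologies")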
Shoenfield (1967) suggests the following list of axiom schemata of
the classical first-order predicate calculus:
Propositional Axiom: ϕ ∨ ¬ϕ
Identity Axiom: x = x
Substitution Axiom: ϕx [a] → ∃xϕ
Equality Axioms:
(a) If F is a symbol of an n-ary function from F, then
x1 = y1 ∧ x2 = y2 ∧ . . . ∧ xn = yn → F (x1 , x2 , . . . , xn )
= F (y1 , y2 , . . . , yn );
(b) If P is a symbol of an n-ary predicate from P, then
x1 = y1 ∧ x2 = y2 ∧ . . . ∧ xn = yn → P (x1 , x2 , . . . , xn )
= P (y1 , y2 , . . . , yn ).
Usually the system of inference/deduction rules of the classical
predicate calculus has two basic rules. One is modus ponens (5.18).
The other is called the substitution rule. It states:
If t is a term, ϕ is a formula possibly containing the variable x,
and ϕ[t/x] is the result of replacing all free instances of x by t in ϕ,
then ϕ implies ϕ[t/x], or as a formal expression:
ϕ ⊢ ϕ[t/x]. (5.30)
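The operation ϕ[t/x] itself is easy to implement. The Python sketch below is an illustration under simplifying assumptions (terms are single variables or constants, and no renaming is performed to avoid variable capture); the tuple representation and all names are chosen only for this example.

# Illustrative sketch of the substitution ϕ[t/x]: replace every free
# occurrence of the variable x in ϕ by the term t, leaving bound
# occurrences untouched.
def substitute(phi, x, t):
    op = phi[0]
    if op in ("forall", "exists"):
        if phi[1] == x:                 # x is bound here: stop substituting
            return phi
        return (op, phi[1], substitute(phi[2], x, t))
    if op == "not":
        return (op, substitute(phi[1], x, t))
    if op in ("and", "or", "imp", "iff"):
        return (op, substitute(phi[1], x, t), substitute(phi[2], x, t))
    # atomic formula: replace x in the argument list
    return (op,) + tuple(t if arg == x else arg for arg in phi[1:])

phi = ("and", ("P", "x"), ("forall", "x", ("Q", "x")))
print(substitute(phi, "x", "a"))
# ('and', ('P', 'a'), ('forall', 'x', ('Q', 'x'))) -- only the free x changes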
Shoenfield (1967) also suggests the following inference rules:
Extension rule: ϕ implies ϕ ∨ ψ
Cancellation rule: ϕ ∧ ψ implies ϕ
Associative rule: (ϕ ∨ ψ) ∨ χ = ϕ ∨ (ψ ∨ χ)
Cut rule: (ϕ ∨ ψ) and (¬ϕ ∨ χ) imply (ψ ∨ χ)
∃-introduction rule: ϕ → ψ implies ∃xϕ → ψ if x is not a free
variable in ψ.
While inference/deduction rules represent microsteps of rea-
soning, there are also macrosteps of reasoning. One of the most important macrosteps of reasoning, which is extensively used in mathematics, is the so-called proof from contradiction. The essence
of this proof is that trying to prove that some system (object) A
has a property P , we make an assumption that A does not have this
property P . Then we show that this contradicts the initial conditions.
This allows us to conclude that our assumption was not true and due
to the Principle or Law of Excluded Middle, the system (object) A
has the property P .
There are many other logics and they have different calculi. For
instance, proof nets used in linear logic give another example of
macrosteps of reasoning.
It is possible to read more about classical logical systems and structures, for example, in (Kleene, 2002; Shoenfield, 1967; Bergman et al., 1980; Manin, 1991; Mendelson, 1997).
Syntactic logical calculi provide functional formalization to the
notion of a formal theory (Smullyan, 1962). In turn, a formal theory
formalizes some source theory from a scientific discipline (e.g., from
mathematics, physics, or economics). In order to specify a formal
theory, predicates, functions, and relations, which are regarded as
basic for a given field of study, are chosen. These predicates delimit
the scope of the formal theory and are the primitives of the theory
and together with logical and punctuation symbols form the alphabet
of the theory language.
5.3. Theory of abstract properties
Democracy is when the indigent, and
not the men of property, are the rulers.
Aristotle
There are different generalizations of logical systems. One of the
most far-reaching generalizations is the theory of abstract proper-
ties developed in (Burgin, 1985a; 1986; 1989) and applied to social
sciences (Burgin, 1990a), to epistemology in (Burgin and Rothbart,
1998) and to methodology of science in (Burgin and Kuznetsov, 1992;
1992b; 1993; 1994).
Properties are very important. As the great Aristotle wrote, we
can know about things nothing but their properties. Thus, it is natu-
ral that properties play an important role in mathematics, logic and
all sciences. However, concepts of properties in mathematics, logic
and science are basically different.
In mathematics and logic, a property is represented by the pred-
icate that defines the set of all objects that have this property in
common. For instance, if P is a predicate on a set X, it is tradi-
tionally said that P is a property on X, while the notation P (x) is used to denote the sentence or statement that an object x has the property P . Then the set of all the objects that have this property P
is denoted by the formula {x|P (x)}, meaning that it is just a set
of all x for which P is true. By definition, a predicate P on X is
a Boolean-valued function P : X → {true, false} and thus, it is an
abstract property.
A similar situation exists in philosophy and linguistics where the
ontological fact that something has a property is typically repre-
sented in language by applying a predicate to a subject. For instance,
Swoyer and Orilia write:
“Properties (also called ‘attributes,’ ‘qualities,’ ‘features,’ ‘characteris-
tics,’ ‘types’) are those entities that can be predicated of things or, in
other words, attributed to them. For example, if we say that that thing
over there is an apple and is red, we are presumably attributing the
properties red and apple to it. Thus, properties can be characterized as
predicables. Relations, e.g., loving and between, can also be viewed as
predicables and more generally can be treated in many respects on a par
with properties. Indeed, they may even be viewed as kinds of properties.”
(Swoyer and Orilia, 2014)
Indeed, relations are also a kind of multivariable abstract proper-
ties (Burgin, 1985).
Based on understanding property as a predicate, logicians devel-
oped formal theories of properties, which are formalized systems
that aim at formulating “general non-contingent laws that deal with
properties” (Bealer and Mönnich, 1989). To do this, such theories
construct terms corresponding to properties, e.g., variables with sys-
tems of properties as their domain. There are two main approaches
in this area: either the terms standing for properties are predicates
(Cocchiarella, 1986) or such terms are subject terms that can be
linked to other subject terms by a special predicate, e.g., pred, that
is meant to express a predication relation similar to representation
of the membership relation in standard set theory by the special
predicate ∈ (Bealer, 1982). For instance, in the first approach,
the formula ∃P (P (j) & P (m)) represents the statement “there is
a property that both Alex and Bob have,” while in the second
approach, this statement is represented by the formula ∃x (pred(x,j)
& pred(x,m)). In Menzel (1986; 1993), both approaches are
combined.
With its development, logic created models for properties whose truthfulness can have more than two values. For instance, fuzzy predi-
cates and propositions were introduced into logic in the 20th century
to represent fuzzy properties. Truthfulness of fuzzy properties may be
partial with values in the interval [0, 1]. Consequently, fuzzy proper-
ties are not physical or other scientific properties. Nevertheless, fuzzy
predicates and propositions are also abstract properties, the scale of
which is equal to the interval [0, 1].
A very different concept of property exists in science. For instance,
a physical property is any measurable property, quality or attribute
whose values describe states of physical systems. The changes in
the physical properties of a system reflect transformations of this
system. Physical properties are often called observables. Thus, in
science, properties are not predicated but measured and in contrast
to predicated properties, they have different scales and diverse values.
That is why the goal of creation of the theory of abstract prop-
erties was a mathematical synthesis of the concept of property in
mathematics and logic with the concept of property in science. The
main concept in this theory is abstract property.
Let us take a class (universe) U of objects and an abstract class M of partially ordered sets, i.e., a class that with any partially ordered set also contains all partially ordered sets isomorphic to it, and consider a partially ordered set L from M.
Definition 5.3.1. (a) An abstract property P of objects from the
universe U is a named set P = (U , p, L) where p : U → L is a partial
function (L-predicate).
(b) The partial function p is called the evaluation function Ev(P )
(or the functional component) of the property P .
(c) The partially ordered set L is called the scale Sc(P ) of the
abstract property P .
We write P (u) = ∗ when the property P is undefined for the
object u.
Abstract properties are used as mathematical models of real
properties.
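Definition 5.3.1 can be rendered directly as a small data structure. The following Python sketch is illustrative only (the class name, the example universe and the "*" marker for undefined values are assumptions made for this example, not the book's notation).

# Illustrative rendering of Definition 5.3.1: an abstract property is a
# named set (U, p, L) with a partial evaluation function p.
class AbstractProperty:
    def __init__(self, universe, evaluation, scale):
        self.universe = universe        # the class U of objects
        self.evaluation = evaluation    # the partial function p : U -> L
        self.scale = scale              # the partially ordered set L

    def __call__(self, u):
        """Return P(u), or '*' if the property is undefined for u."""
        try:
            value = self.evaluation(u)
        except KeyError:
            return "*"
        return value if value in self.scale else "*"

# Example: "hard disk capacity in GB" on a small universe of computers.
capacities = {"desktop-1": 30, "laptop-2": 256}
disk = AbstractProperty(universe=set(capacities),
                        evaluation=lambda u: capacities[u],
                        scale=range(0, 100000))
print(disk("desktop-1"), disk("server-3"))   # 30 *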
For instance, when you want to know something about a com-
puter, you look into a list of its specifications, which contains many
properties of a computer, which are examples of abstract proper-
ties. The universe U where these properties are defined consists of
computers. Here is one such specification:
Table 5.1. A specification of a computer.
System memory installed 128 MB
Hard disk capacity 30 GB
Monitor diagonal size 18 in.
Graphics memory amount 64 MB
Graphics chipset nVidia GeForce II GTS
Supported operating systems Windows 98 SE
Graphics card Guillemot three-dimensional Prophet II GTS
Monitor type CRT
Processor model Athlon
Processor clock speed 1000 MHz
Some properties of computers are now incorporated into their
names. For instance, when you see the name “Dell Dimension L866r
Pentium III 866 MHz,” you know that this computer has the proces-
sor clock frequency 866 MHz. In turn, the processor clock frequency
determines how many instructions it can execute per second.
An important property of physical bodies is mass. The scale of
this property is the infinite interval [0, ∞) of the real line.
The most popular property in logic is truth defined for logical
expressions. In classical logics, only one such property as truth with
the scale L = {T, F} is considered, i.e., predicates and propositions
take one of two value T (true) and F (false). Thus, the set {T, F}
is the scale of the abstract properties that represent predicates and
propositions.
For modal logics, which have only one truth property that is deter-
mined for logical expressions, modalities are expressed by means of
modal operators. At the same time, modal operators are abstract
properties defined for well-formed formulas and taking values in
modal well-formed formulas. For instance, the modal operator of necessity is defined as □ : f → □f for any well-formed formula f .
Another possibility to express modality is to determine differ-
ent modal truth properties: “truth,” “necessary truth”, and “possible
truth”. These truth properties are also abstract properties.
Valued sets give one more example of abstract properties
(Dukhovny and Ovchinnikov, 2000; Ovchinnikov, 2000; Frascella and
Guido, 2008). We remind that a valued set is a function from a given
set into a given linearly ordered set L. Thus, we see that valued sets
also are particular cases of abstract properties and thus, they are
represented by named sets (Burgin, 2011).
There are several methods to represent concepts by abstract prop-
erties. For instance, we can take an abstract property P of names
with the scale that consists of conceptual representatives. It is also
possible to represent concepts by an abstract property P so that P assigns the Extent of a concept C to the name of
C. We obtain another representation when the Meaning of a concept
C is assigned to the name of C.
Taking the next level of logic, we see that it is possible to represent
statements and propositions by abstract properties. Indeed, a state-
ment or a proposition is the information content of a well-formed
declarative sentence. It means that statements and propositions are
represented by declarative sentences, for example, in English. At the
same time, there are two basic parts to every sentence in English:
the subject and the predicate (Webster’s English Language Desk
Reference, 1999). The simple subject is the noun or pronoun that
identifies the person, place, or thing the sentence is about. The com-
plete subject is the simple subject and all the words that modify
it. The predicate contains the verb that explains what is going on
with the subject. The simple predicate contains only the verb, while
the complete predicate contains the verb and any complements and
modifiers. Thus, taking the property of subjects that takes values
in predicates, we obtain an abstract property, which represents all
sentences in English. Consequently, this abstract property also repre-
sents all statements and propositions expressed by English sentences.
An important construction in the theory of abstract properties is
reduction (Burgin, 1989; 2007b). Informally, reduction of an abstract
property P1 to an abstract property P2 means that knowing values of
the abstract property P2 , we can find values of the abstract property
P1 . This idea is formalized in the following way giving several types
of abstract property reductions.
Let us consider a pair of universes U and V , a pair of classes
of mappings Φ and Ψ, and a pair of properties P = (U, p, L) and
R = (V, r, M ).
Definition 5.3.2. A property P = (U, p, L) is reduced to a property R = (V, r, M ) in the pair (Φ, Ψ) if there are mappings g : U → V from Φ and h : M → L from Ψ such that h preserves the partial order in M and p = h ◦ r ◦ g, i.e., the diagram with p : U → L on the top, g : U → V on the left, r : V → M on the bottom, and h : M → L on the right is commutative.
Reduction allows establishing relations between properties of
objects from different universes.
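The commutativity condition of Definition 5.3.2 can be checked mechanically on a finite universe. The Python sketch below is illustrative only; the example data (parcels, weights) and all function names are assumptions made for this example.

# Minimal sketch of Definition 5.3.2: P = (U, p, L) is reduced to
# R = (V, r, M) by mappings g : U -> V and h : M -> L when
# p(u) = h(r(g(u))) for every u in U.
def is_reduction(U, p, r, g, h):
    """Check the commutativity condition p = h o r o g on a finite U."""
    return all(p(u) == h(r(g(u))) for u in U)

# Example: weight in kilograms reduced to weight in grams.
U = ["parcel-a", "parcel-b"]
grams = {"parcel-a": 1500, "parcel-b": 250}

p = lambda u: grams[u] / 1000.0   # P: weight in kilograms
r = lambda v: grams[v]            # R: weight in grams
g = lambda u: u                   # same universe, identity on objects
h = lambda m: m / 1000.0          # scale mapping grams -> kilograms

print(is_reduction(U, p, r, g, h))   # True

In this example g happens to be the identity mapping, so the universes of the two properties coincide.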
Note that it is possible to treat reduction either as a relation on abstract properties or as an operation with them.
There are many reductions in the theory of algorithms, automata,
and computation. Let us consider some of them.
Example 5.3.1. In the axiomatic theory of algorithms, automata
and computation, the most general concept of operational reduction
is introduced (Burgin, 2007b). Let us assume that all algorithms
(automata) are defined and take values in a set X, e.g., in the set Σ∗
of all words in the alphabet Σ, and consider two classes of algorithms
(automata) R and Q, which play the role of the classes of mappings
Φ and Ψ from Definition 5.3.2.
Definition 5.3.3. An algorithm A is (R, Q)-reducible to an algo-
rithm B if there is an algorithm D from R and an algorithm H from
Q such that for any element x ∈ X, we have A(x) = H(B(D(x))),
or rA = rD ◦ rB ◦ rH , where rX is the mapping determined by an algorithm
(automaton) X.
Reduction of algorithms (automata) is represented by a commutative diagram in which the top arrow A : X → X coincides with the composite path that first applies D : X → X, then B : X → X, and finally H : X → X.
For instance, any Turing machine is reducible to a universal
Turing machine. This reduction and other reductions of algorithms
and automata have been very useful in computer science allow-
ing researchers to prove many useful and important results such as
undecidability of many algorithmic problems, NP-completeness of a
variety of practical problems or equivalence of different classes of
algorithms and automata (Minsky, 1967; Sipser, 1997; Burgin, 2005;
2007b; 2010d).
Another example of reduction of properties is reduction of prob-
lems, which is defined in the following way (Burgin, 2010d).
Definition 5.3.4. A problem P can be reduced to a problem Q if
knowing a solution to the problem Q, it is possible to find a solution
to the problem P .
To describe this relation by property reduction, we consider a
property Sol = (Pr, solproc, sol) where Pr is a set of problems, sol is
a set of problem solutions and solproc is a set of solving processes, each of which assigns (finds) a solution to a problem.
Reduction of problems is extensively used in mathematics and
computer science. For instance, in arithmetic, division of fractions is
reduced to multiplication of fractions by the rule
(b/a) ÷ (c/d) = (b/a) · (d/c).
This is reduction of the property division Div : F² → F to the property multiplication Mlt : F² → F, where F is the set of all fractions and the properties (operations) are defined in the following way: Div((b/a), (c/d)) = (b/a) ÷ (c/d) and Mlt((b/a), (c/d)) = (b/a) · (c/d). The reduction mapping is g : F² → F² where g((b/a), (c/d)) = ((b/a), (d/c)).
The corresponding commutative diagram has Div : F² → F along the top, Mlt : F² → F along the bottom, the mapping g : F² → F² on the left, and the identity mapping of F on the right.
In addition, in arithmetic, subtraction of signed numbers is reduced to addition by the rule
a − b = a + (−b).
This is reduction of the property subtraction Sn : R² → R to the property addition An : R² → R, where R is the set of all real numbers and the properties (operations) are defined in the following way: Sn(a, c) = a − c and An(a, c) = a + c. The reduction mapping is g : R² → R² where g(a, c) = (a, −c).
The corresponding commutative diagram has Sn : R² → R along the top, An : R² → R along the bottom, the mapping g : R² → R² on the left, and the identity mapping of R on the right.
Besides, multiplication of numbers is reduced to addition, squar-
ing, and subtraction by the rule
a × b = ((a + b)² − a² − b²)/2.
This is also reduction of properties.
Let us consider some properties of reduction.
Proposition 5.3.1. If a property P is reduced to a property R and
the property R is reduced to a property T , then the property P is
reduced to the property T .
Proof is left as an exercise.
As it is possible to reduce any property to itself, Proposition 5.3.1
implies the following result.
Proposition 5.3.2. Reduction of properties induces a partial pre-
order on properties.
Reduction of properties also generates an equivalence relation.
Definition 5.3.5. Properties P and R are r-equivalent if the prop-
erty P is reduced to the property R and the property R is reduced
to the property P .
Proposition 5.3.2 implies the following result.
Proposition 5.3.3. r-equivalence of properties is an equivalence
relation on properties.
Example 5.3.2. The famous formula E = mC² induces r-equivalence of properties mass m and energy E. Indeed, E is reduced to m by the mapping g(x) = xC² and m is reduced to E by the mapping h(x) = xC⁻².
Remark 5.3.1. r-equivalence of properties is a special case of real
equivalence of properties studied in (Burgin, 1985).
Remark 5.3.2. Reduction of properties is defined by a contravariant
morphism of named sets that represent these properties (Burgin,
2011).
Two special cases of extended reduction, direct and inverse reductions, are especially important for various applications. We obtain direct reductions by restricting the class of mappings Φ exactly to identical mappings and acquire inverse reductions by restricting the class of mappings Ψ exactly to identical mappings.
Let us consider a universe U and two properties P = (U , p, L)
and R = (U , r, M ).

Definition 5.3.6. A property P = (U , p, L) is directly reduced to a property R = (U , r, M ) if there is a mapping h : M → L that preserves the partial order in M such that p = h ◦ r, i.e., the diagram with p : U → L on the top, r : U → M on the left, and h : M → L on the right is commutative.
Definition 5.3.7. A direct reduction of a property P = (U , p, L)
to a property R = (U , r, M ) is strict if M = L.
For instance, if we take such properties as the weight P1 of an
individual in kilograms and the weight P2 of an individual in pounds,
then P1 is reducible to P2 and P2 is reducible to P1 .
Example 5.3.3. The concept of the right reduction from the
axiomatic theory of algorithms, automata and computation is an
example of a strict direct reduction of properties (Burgin, 2007b).
Indeed, let us assume that all algorithms (automata) are defined and
take values in a set X, e.g., in the set Σ∗ of all words in the alphabet
Σ, and consider a class of algorithms (automata) Q.
Definition 5.3.8. An algorithm A is right Q-reducible to an algo-
rithm B if there is an algorithm H from Q such that for any element
x ∈ X, we have A(x) = H(B(x)), or rA = rB ◦ rH , where rX is the mapping
determined by an algorithm (automaton) X.
Reduction of algorithms (automata) is represented by a commutative diagram in which the arrow A : X → X coincides with the composite path that first applies B : X → X and then H : X → X.
Properties of reduction in general imply corresponding properties
of right reduction.
Proposition 5.3.1 implies the following results.
Corollary 5.3.1. If a property P is a direct reduction of a property
R and the property R is a direct reduction of a property T , then the
property P is a direct reduction of the property T .
Corollary 5.3.2. If a property P is a strict direct reduction of a
property R and the property R is a strict direct reduction of a property
T , then the property P is a strict direct reduction of the property T .
Proposition 5.3.2 implies the following results.
Corollary 5.3.3. Direct reduction of properties induces a partial
preorder on properties.
Corollary 5.3.4. Strict direct reduction of properties induces a par-
tial preorder on properties, which correlates with the preorder induced
by direct reduction.
Direct reduction of properties also generates an equivalence
relation.
Definition 5.3.9. Properties P and R are (strictly) dr-equivalent if
the property P is a (strict) direct reduction of the property R and
the property R is a (strict) direct reduction of the property P .
Proposition 5.3.3 implies the following results.
Corollary 5.3.5. dr-equivalence of properties is an equivalence rela-
tion on properties.
Corollary 5.3.6. Strict dr-equivalence of properties is an equiva-
lence relation on properties.
There is also another kind of abstract property reduction.
Definition 5.3.10. A property P = (U , p, L) is inversely reduced to a property R = (V , r, L) if there is a mapping g : U → V such that p = r ◦ g, i.e., the diagram with p : U → L on the top, g : U → V on the left, and r : V → L on the right is commutative.
Definition 5.3.11. An inverse reduction of a property P = (U , p, L) to a property R = (V , r, L) is strict if U = V .
Example 5.3.4. An enumeration v is a mapping from the set N of
all natural numbers onto an arbitrary countable set (Ershov, 1977).
It means that an enumeration v of a set A is an abstract property
Ev = (N , v, A).
An enumeration v of a set R is reduced to an enumeration u of
the same set if there is a mapping f : N → N such that v = uf ,
that is, when the triangle formed by f : N → N and the enumerations v and u is commutative: v = u ◦ f. (5.31)
We see that reduction of enumerations is a strict inverse reduction
of abstract properties.
Example 5.3.5. The concept of the left reduction from the
axiomatic theory of algorithms, automata and computation is an
example of inverse reduction of properties (Burgin, 2007b). Indeed,
let us assume that all algorithms (automata) are defined and take
values in a set X, e.g., in the set Σ∗ of all words in the alphabet Σ,
and consider a class of algorithms (automata) R.
Definition 5.3.12. An algorithm A is left R-reducible to an algo-
rithm B if there is an algorithm D from R such that for any element
x ∈ X, we have A(x) = B(D(x)), or rA = rD ◦ rB , where rX is the mapping
determined by an algorithm (automaton) X.
Reduction of algorithms (automata) is represented by a commutative diagram in which the arrow A : X → X coincides with the composite path that first applies D : X → X and then B : X → X.
Definition 5.3.13. An algorithmic problem P can be m-reduced or
is m-reducible to an algorithmic problem Q in a class K if there is
an algorithm/automaton A from K and an injection m : D(P) →
D(Q) realized in K by an algorithm/automaton M such that given
an element x from D(P) and a solution SQ (M (x)) of the problem Q
for the element M (x) from D(Q), the value A(SQ (M (x))) is a solution
to the problem P for the element x from D(P), and in this way, it is
possible to obtain all solutions to the problem P, i.e., if the problem
Q does not have a solution for the element M (x), then P does not
have a solution for the element x.
We remind that the domain D(Q) of a problem Q is the set of all
tentative initial conditions for this problem.
Reduction of problems helps to find characteristics of these prob-
lems, such as decidability or recognizability.
Let us consider a class of automata/algorithms K, automata from
which determine total functions and which is closed with respect to
sequential composition (cf., (Burgin, 2010d)).
Proposition 5.3.4. If a problem P is m-reducible in the class K to
a problem Q and the problem Q is decidable in K, then the problem
P is also decidable in K.
Indeed, if an algorithm B from K decides the problem Q and an
algorithm A from K reduces the problem P to the problem Q, then
the sequential composition of A and B decides the problem P and is in K.
Properties of reduction in general imply corresponding properties
of inverse reduction.
Proposition 5.3.1 implies the following results.
Corollary 5.3.7. If a property P is an inverse reduction of a prop-
erty R and the property R is an inverse reduction of a property T ,
then the property P is an inverse reduction of the property T .
Corollary 5.3.8. If a property P is a strict inverse reduction of
a property R and the property R is a strict inverse reduction of a
property T , then the property P is a strict inverse reduction of the
property T .
Proposition 5.3.2 implies the following results.
Corollary 5.3.9. Inverse reduction of properties induces a partial
preorder on properties.
Corollary 5.3.10. Strict inverse reduction of properties induces a
partial preorder on properties, which correlates with the preorder
induced by inverse reduction.
Inverse reduction of properties also generates an equivalence relation.
Definition 5.3.14. Properties P and R are (strictly) ir-equivalent if
the property P is a (strict) inverse reduction of the property R and
the property R is a (strict) inverse reduction of the property P .
Proposition 5.3.3 implies the following results.
Corollary 5.3.11. ir-equivalence of properties is an equivalence
relation on properties.
Corollary 5.3.12. Strict ir-equivalence of properties is an equiva-
lence relation on properties.
To include logic into the theory of abstract properties, it is nec-
essary to define operations with abstract properties because there
are different operations in logic. The basic logical operations are: negation denoted either by ¬ or by ∼; conjunction denoted either by ∧ or
by & or by ·; disjunction denoted by ∨; implication denoted either
by → or by ⇒ or by ⊃; and equivalence denoted either by ↔ or by
≡ or by ⇔. So, we need similar operations with abstract properties.
We build these operations in a more general way.
Let us consider n abstract properties Pi = (U , pi , Li ) (i =
1, 2, 3, . . . , n), a partially ordered set L and a mapping ω : L1 × L2 ×
L3 × · · · × Ln → L of the Cartesian product L1 × L2 × L3 × · · · × Ln
into L.
Definition 5.3.15 (Burgin, 1985). A property P = (U , p, L) is
called the ω–composition of the properties Pi = (U , pi , Li ) (i = 1,
2, 3, . . ., n) if for any object u from U , the following conditions are
satisfied
P (u) = ω(P1 (u), P2 (u), P3 (u), . . . , Pn (u)) if Pi (u) ≠ ∗ for all i = 1, 2, 3, . . . , n,
P (u) = ∗ otherwise.
Proposition 5.3.5. All basic logical operations with classical propositions and predicates (¬, &, ∨, →, and ↔) are compositions of abstract properties in the form of propositions and predicates.
Proof. We show what mappings of the scales generate the basic logical operations, representing these mappings by the corresponding mapping
tables. We use here the scale {T, F } although it is also possible to
use the scale {0, 1}.
The mapping table for negation ¬:
x ¬x
T F
F T
The mapping table for conjunction &:
x y x&y
T T T
F T F
T F F
F F F
The mapping table for disjunction ∨:
x y x∨y
T T T
F T T
T F T
F F F
The mapping table for implication →:
x y x→y
T T T
F T T
T F F
F F T
The mapping table for equivalence ↔:
x y x↔y
T T T
F T F
T F F
F F T
It is easy to check that the compositions defined by these mappings define the corresponding logical operations.
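The ω-composition of Definition 5.3.15 is straightforward to express computationally. The Python sketch below is illustrative only (the "*" marker for undefined values, the example properties and all names are assumptions made for this example); it shows conjunction of two {T, F}-valued properties as a composition in the sense of Proposition 5.3.5.

# Illustrative sketch of an ω-composition of abstract properties.
def compose(omega, *properties):
    """Return the ω-composition of finitely many property functions."""
    def composed(u):
        values = [P(u) for P in properties]
        if any(v == "*" for v in values):   # undefined if any component is
            return "*"
        return omega(*values)
    return composed

conj = lambda x, y: "T" if (x, y) == ("T", "T") else "F"

is_even  = lambda n: "T" if n % 2 == 0 else "F"
is_small = lambda n: ("T" if n < 10 else "F") if n >= 0 else "*"

even_and_small = compose(conj, is_even, is_small)
print(even_and_small(4), even_and_small(12), even_and_small(-2))   # T F *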
In a similar way, we prove the following result.
Proposition 5.3.6. All basic logical operations with fuzzy propositions and predicates (¬, &, ∨, →, and ↔) are compositions of abstract properties in the form of propositions and predicates.
There are also other compositions of abstract properties.
Proposition 5.3.7. Any arithmetical operation, e.g., addition, mul-
tiplication or division, determines a composition of abstract proper-
ties with number scales.
For instance, the property ma, where m(u) is the mass of the body u and a is its acceleration, is the multiplicative composition of the properties m and a. According to Newton’s law F = ma, it is r-equivalent to the property F , which is the force acting on the body.
Newton’s gravitational law F = G(mM/r²) gives a more complicated composition of properties.
To immerse logic into the theory of abstract properties, we also
need partial reductions of abstract properties, which allow one to
represent logical deduction.
Let us consider a universe U , two properties P = (U, p, L) and
R = (U , r, M ) and elements a from M and b from L.
Definition 5.3.16 (Burgin, 2010). A property P = (U , p, L) is (a,
b)-reduced to a property R = (U , r, M ) if the value of P is b whenever
the value of R is a, i.e., R(u) = a implies P (u) = b for any u from U .
Partial reduction of abstract properties is intrinsically related to
logical deduction. Deduction is a technique for obtaining new true statements from given true statements. It means that if we consider such a property as truthfulness with two values T and F , then the conclusion of a deduction rule has to be true if the premises are true. This exactly means that the conclusion of a deduction rule is (T , T )-reduced to the conjunction of the premises.
The most popular deduction rule is modus ponens, which has the
form
A, A → B ⇒ B,
where A and B are propositions, or equivalently,
A&(A → B) ⇒ B.
Table 5.2. A truth table.
A B A→B A & (A → B )
T T T T
F T T F
T F F F
F F T F
According to modus ponens, if A and A → B are true, then B is true, as Table 5.2 shows.
The truth-value of a proposition is a property of this proposition
with the scale {0, 1}. Therefore, from Table 5.2, we see that the
property B is (T , T )-reduced to the property A & (A → B), while
the property A & (A → B) is (F , F )-reduced to the property B.
This shows that the mathematical theory of abstract properties
includes logic as its subtheory (Burgin, 1989). It is possible to regard
the theory of abstract properties as a synthesis of logic and qualita-
tive physics (Bobrow, 1984).
Using operations in the domain of abstract properties, it is possi-
ble to reduce systems of abstract properties to one abstract property.
This reduction is based on the following concept of equivalence.
Definition 5.3.17. An abstract property P = (U, p, L) is equivalent
to a system Z of abstract properties {Pi = (U, pi , Li ); i ∈ I} if
the validity of the inequality Pi (a) ≠ Pi (b) for some i ∈ I implies P (a) ≠ P (b) for any a, b ∈ U, and vice versa.
Composition of properties makes it possible to prove the following
result (Burgin, 1990a).
Theorem 5.3.1. For any system Z = {Pi = (U, pi , Li ); i ∈ I}
of abstract properties, there is a property P = (U, p, L) equivalent
to Z.
Abstract properties allow us to define natural properties.
Let us consider two sets of transformations TU and TL . The first
one, TU , consists of natural transformations of the universe U , while
the second one, TL , consists of admissible transformations of the
scale L.
Definition 5.3.18. An abstract property P = (U, p, L) is called
invariant with respect to TU , if for any transformation H ∈ TU and
any object A from U, we have p(A) = p(H(A)).
For instance, such a property of a physical body as speed is invariant
with respect to linear transformations of the physical space.
Definition 5.3.19. Abstract properties P = (U, p, L) and Q =
(U, q, M ) are called transformationally equivalent with respect to
TL , if there is a transformation G ∈ TL such that M = G(L) and for
any object A from U, q(A) = G(p(A)).
For instance, such a property as the weight of a physical body in
kilograms is transformationally equivalent to the weight of a physical
body in grams and to the weight of a physical body in pounds.
Definition 5.3.20. The equivalence class with respect to admissible
transformations TL of abstract properties invariant with respect to
natural transformations TU is called a natural property.
For instance, such a property of a physical body as weight is a natural property.
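The condition of Definition 5.3.19 is easy to check on concrete data. The Python sketch below is illustrative only (the universe of bodies, the measured values and the conversion factor are assumptions made for this example): the weight of a body in kilograms and in pounds are transformationally equivalent representatives of the same natural property, related by an admissible scale transformation.

# Illustrative check of transformational equivalence of two weight properties.
weights_kg = {"crate": 12.0, "box": 0.5}
weights_lb = {"crate": 26.455, "box": 1.102}   # the same bodies weighed in pounds

G = lambda x: 2.2046 * x                       # admissible transformation kg -> lb

# Definition 5.3.19: q(A) = G(p(A)) for every object A of the universe.
equivalent = all(abs(weights_lb[b] - G(weights_kg[b])) < 1e-2 for b in weights_kg)
print(equivalent)   # True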
5.4. Semantic networks and ontology
I learned very early the difference between
knowing the name of something and knowing something.
Richard Feynman
Semantic network or semantic net is a knowledge representation for-
malism that is based on a mathematical concept called a graph (cf.,
Appendix) and describes objects and their relationships in the form
of a network consisting of nodes and (usually directed) links between
nodes in the form of arcs or arrows. The nodes represent objects or
concepts by their names, while the links represent relations between
nodes also by their types (names). Both nodes and arcs are labeled
by the names of these nodes and arcs although when a semantic net-
work contains arcs of one type, these arcs are not labeled. In such a
way, a semantic network defines a set of binary relations on a set of
nodes. In the mathematical context, a semantic network is a labeled
(usually directed) graph. The structure of a semantic network defines
its meaning.
Typical semantic relations (links in a semantic network):
• is a
is a relation that indicates that one object is a subset of another
object in the network, e.g., “Elephant is a mammal”.

• is an instance of
is a relation that indicates that one object is an element of another
object in the network, e.g., “Jumbo is an instance of elephant”.
• is a prototype of
is a relation that indicates that one object is a special case or a model
of another object.
• is a part of
is a relation that indicates that one object is a physical part of
another object, e.g., “Wheel is a part of a car”.
It is possible to find other semantic relations in Section 4.1.2.
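As a small illustration of how such a labeled directed graph can be represented and queried, here is a Python sketch (not from the book); the example nodes, link labels and function names are assumptions made only for this example, and the query walks the subtype/instance links to realize inheritance.

# Illustrative sketch of a semantic network as a labeled directed graph.
links = [
    ("Jumbo", "is an instance of", "elephant"),
    ("elephant", "is a", "mammal"),
    ("mammal", "is a", "animal"),
    ("wheel", "is a part of", "car"),
]

def ancestors(node):
    """All concepts reachable from node along subtype/instance links."""
    found = set()
    frontier = [node]
    while frontier:
        current = frontier.pop()
        for src, label, dst in links:
            if src == current and label in ("is a", "is an instance of"):
                if dst not in found:
                    found.add(dst)
                    frontier.append(dst)
    return found

print(ancestors("Jumbo"))   # {'elephant', 'mammal', 'animal'}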
Semantic networks became very popular when researchers started
to use them in artificial intelligence and machine translation although
earlier versions have long been used in philosophy, psychology, and
linguistics.
The oldest known semantic network was drawn in the 3rd cen-
tury C.E. by the Greek philosopher Porphyry in his commentary
on Aristotle’s categories. Porphyry used this network to illustrate
Aristotle’s method of defining categories by specifying a genus or
general type, which encompasses the special cases as the subtypes
of the general type. Then this procedure was iterated for introduced
subtypes and so on. The structure of this semantic network is a spe-
cial kind of graphs called a forest.
For computers, semantic networks were first introduced by
Richard H. Richens of the Cambridge Language Research Unit in
1956 as an “interlingua” for machine translation of natural languages
because semantic networks allow spreading activation, imposing
inheritance, and using nodes as representations of objects.
Most semantic networks are cognitively based and can be orga-
nized into a taxonomic hierarchy. The declarative graphic form of
semantic networks provides for their efficient utilization aimed at
knowledge representation and support of automated systems for rea-
soning about the knowledge. Network form and linear notation are
both capable of expressing equivalent knowledge, but certain repre-
sentational mechanisms are better suited to one form or the other.
For instance, network form is more efficient for operation while linear
notation is more suitable for derivation.
Some semantic networks are highly informal, while others are for-
mally defined, e.g., as systems in logic. For formal definitions, it is
possible to use either languages of network theory, such as math-
ematical schema theory (Burgin, 2005; 2006; 2010a) and semantic
link network theory (Zhuge, 2004; 2010; 2012; Zhuge and Shi, 2003;
2004), or languages of logic.
Sowa (1991) discerns six types of semantic networks.

1. Definitional networks are utilized to define concepts utilizing


the subtype or “is-a” relation between concept types and their
subtypes. Definitional networks are also called generalization or
subsumption hierarchies because they support and graphically rep-
resent the rule of inheritance for transferring properties defined for
a type to all of its subtypes. Since being a subtype is true by def-
inition, the knowledge these networks represent is often assumed
to be necessarily true.
2. Assertional networks are designed to assert propositions. Unlike
definitional networks, the information in an assertional network is
assumed to be contingently true, unless it is explicitly marked with
a modal operator. Some assertional networks have been utilized as
models of the conceptual structures underlying natural language
semantics. The distinction between definitional and assertional net-
works is similar to the distinction between semantic memory and
episodic memory (Tulving, 1972).
3. Implicational networks are based on implication as the primary
relation for connecting nodes. They may be used to represent pat-
terns of beliefs, causality, statements, or inferences.
4. Executable networks include some mechanism, such as marker


passing or attached procedures, which can perform inferences,
pass messages, or search for patterns and associations.
5. Learning networks build or extend their representations by acquir-
ing knowledge from examples. The new knowledge may change the
old network by adding and deleting nodes and arcs or by modi-
fying numerical values, called weights, associated with the nodes
and arcs.
6. Hybrid networks combine two or more of the previous techniques,
either in a single network or in separate, but closely interacting
networks.

Some semantic networks have been explicitly designed to imple-


ment and test hypotheses about human cognitive mechanisms, since
computational reasoning may lead to the same conclusions as psy-
chological evidence, while others have been designed primarily
for increasing computer efficiency of knowledge representation and
processing.
However, there are some problems with semantic network utiliza-
tion. First, they are intractable for large domains. Second, they do not
represent performance or meta-knowledge very well. In spite of this,
semantic networks proved very useful in different areas including AI.
It is possible to distinguish two broad classes of semantic net-
works — static and dynamic semantic networks.
Static semantic networks are usually stable in the process of func-
tioning (utilization) but can be changed by those systems that have
permission to do this. Static semantic networks include:

• Linguistic networks which include conceptual networks and defini-


tional networks;
• Statement networks which include assertional networks;
• Implicational or causal networks.

An example of a linguistic semantic network is the lexical database


of English called WordNet. It provides short, general definitions of
words in English and exhibits various semantic relations between
these words, or more exactly, between the concepts named by these
words. Some of the most common semantic relations defined are:

— meronymy (A is part of B, i.e., B has A as a part of itself),


— holonymy (B is part of A, i.e., A has B as a part of itself),
— hyponymy (or troponymy) (A is a subordinate of B, i.e., A is a
kind of B),
— hypernymy (A is superordinate of B),
— synonymy (A denotes the same as B),
— antonymy (A denotes the opposite of B).

Let us consider examples of these relations.

1. The word “hand” is connected to the word “body” by the
meronymy relation.
2. The word “computer” is connected to the word “keyboard” by
the holonymy relation.
3. The word “car” is connected to the word “vehicle” by the
hyponymy relation.
4. The word “animal” is connected to the word “dog” by the hyper-
nymy relation.
5. The words “plane” and “aircraft” are connected by the synonymy
relation.
6. The words “light” and “heavy” are connected by the antonymy
relation.
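
These relations can also be queried programmatically. The following sketch
assumes the Python NLTK package with its WordNet corpus installed (pip install
nltk, then nltk.download('wordnet') once); it only illustrates typical calls,
and the exact senses returned depend on the WordNet version.

# Querying WordNet for the semantic relations listed above (NLTK assumed).
from nltk.corpus import wordnet as wn

car = wn.synsets("car")[0]             # first sense of "car"
print(car.hypernyms())                 # hypernymy: car -> motor vehicle
print(car.part_meronyms())             # meronymy: parts of a car

hand = wn.synsets("hand")[0]
print(hand.part_holonyms())            # holonymy: what a hand is a part of

light = wn.synsets("light", pos=wn.ADJ)[0]
print(light.lemmas()[0].antonyms())    # antonymy (sense-dependent)

plane = wn.synsets("airplane")[0]
print(plane.lemma_names())             # synonymy: airplane, aeroplane, plane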

Figure 5.17 gives an example of a conceptual (linguistic) network,


which represents knowledge in chemistry, namely, a chemical classi-
fication.
Figure 5.18 gives one more example of a conceptual (linguistic)
network, which represents knowledge in particle physics.
Now we give examples of statement networks in Figures 5.19
and 5.20.
In computer science, graphical representations of finite automata
are very popular (Burgin, 2005). They are examples of implicational
or causal networks. Indeed, the schema in Figure 5.21 informs us that
if the automaton is in the state q, then the input 1 causes transition
to the state p; if the automaton is in the state p, then the input 0
causes transition to the state t; if the automaton is in the state t,
then the input 1 causes transition to the state r, and so on.

Figure 5.17. A conceptual network representing chemical elements in which all
relations (links) are of the type “is a subclass”: the node Chemical elements is
linked to the nodes Metalloids, Alkali Metals, Alkaline Earth Metals, Transition
Metals, Other Metals, Non-metals, Halogens, Noble Gases, and Rare Earth
Elements.
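
The causal knowledge carried by such an automaton network can be stored
directly as a transition relation. The sketch below, written in Python as an
illustration rather than in the book's notation, includes only the transitions
spelled out above; the full automaton of Figure 5.21 also has ε-transitions
and further arcs.

# Transitions of the automaton as triples (state, input, next state);
# non-determinism is captured by mapping to sets of next states.
TRANSITIONS = {
    ("q", "1"): {"p"},
    ("p", "0"): {"t"},
    ("t", "1"): {"r"},
}

def step(states, symbol):
    """All states reachable from `states` by reading one input symbol."""
    result = set()
    for state in states:
        result |= TRANSITIONS.get((state, symbol), set())
    return result

def run(start_state, word):
    states = {start_state}
    for symbol in word:
        states = step(states, symbol)
    return states

print(run("q", "101"))   # {'r'}: q -1-> p, p -0-> t, t -1-> r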
Dynamic semantic networks change themselves in the process of
functioning (utilization). They include:
• Executable networks;
• Learning networks;
• Formation networks.
A useful form of dynamic semantic networks are growing semantic
networks, which are efficiently used as learning and formation net-
works (Gladun, 1986; 1987). In the process of their functioning, grow-
ing semantic networks form new concepts from information about
objects from some class of objects from the environment. The struc-
ture of a growing semantic network is a labeled pyramidal graph,
i.e., a labeled acyclic oriented graph in which a vertex either has
no entering edges or has more than one entering edge (cf., Fig-
ures 5.17, 5.18, 5.19 and 5.22). Vertices with no entering edges are
called receptors. Other vertices are called conceptors. A subgraph of
the pyramidal graph including some conceptor C and all vertices
from which there are paths to this conceptor is called the pyramid of
the conceptor C.
Receptors of growing semantic networks correspond to the names
of the relations, properties, states, actions, objects, and classes of
objects. Conceptors represent descriptions of objects or situations
and repeating fragments of such descriptions in the form of a concept.
Conceptors corresponding to intersections of descriptions of objects
or situations represent conjunctive definitions (concepts) of classes
of environment elements.

Figure 5.18. Modern classification of physical particles: a conceptual network
that divides physical microobjects into particles and antiparticles, particles
into elementary particles (bosons and fermions, including gauge bosons, quarks
and leptons) and composite particles/hadrons (baryons and mesons, including
nucleons, hyperons, charmed and bottom baryons, and pentaquarks), down to
individual particles such as the photon, the Higgs boson, the electron and the
proton.

Figure 5.19. A statement network for the proposition “A cat is an animal, while
a fish is not an animal”.

Figure 5.20. A statement network for the proposition “Andy goes to school and
Alice is in the theater”.

Figure 5.21. A non-deterministic finite automaton A with ε-transitions and the
alphabet {0, 1}.

Figure 5.22. A growing semantic network in which receptor nodes and conceptor
nodes are depicted by different symbols.
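
As a simplified illustration of this mechanism (not Gladun's pyramidal-network
formalism and not the named-set model discussed below), the following Python
sketch forms a new conceptor as the intersection of the receptor sets
describing several observed objects; all object and receptor names are
invented for the example.

# Concept formation in a growing semantic network: objects are described by
# sets of receptors (names of properties, parts, relations), and a conceptor
# is built from the receptors shared by all of the given descriptions.
descriptions = {
    "sparrow": {"has feathers", "has wings", "flies", "small"},
    "eagle":   {"has feathers", "has wings", "flies", "large", "predator"},
    "pigeon":  {"has feathers", "has wings", "flies", "small", "urban"},
}

def form_conceptor(object_names, descriptions):
    """Return the conjunctive concept covering all listed objects."""
    return frozenset(set.intersection(*(descriptions[n] for n in object_names)))

bird_concept = form_conceptor(["sparrow", "eagle", "pigeon"], descriptions)
print(sorted(bird_concept))   # ['flies', 'has feathers', 'has wings']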
A new mathematical model of growing semantic networks is devel-
oped in (Burgin and Gladun, 1989; 1990; 1990a). In contrast to the
traditional mathematical model, where growing semantic networks
are represented in the form of labeled graphs, the new approach
formalizes semantic networks as hierarchical systems of named sets.
This model has significant advantages because named set theory
(Burgin, 2011) provides many more operations for manipulating
semantic networks than graph theory does (Berge, 1973).
Another form of dynamic semantic networks is given by extended
Petri nets (Zhou, 2013).
As an alternative to semantic networks, contemporary artificial
intelligence and related areas, such as databases, use ontology as a
kind of knowledge representation although it is necessary to remark
that some ontologies have the form of semantic networks.
The notion of ontology came to artificial intelligence and infor-
mation science from philosophy. In philosophy, ontology is a branch
of metaphysics, which is concerned with the nature of reality and
existence analyzing various types or modes of existence. However,
coming to artificial intelligence, database and web applications, the
term ontology changed its meaning. The main difference is that for
a philosopher, ontology is a theoretical discipline, while for a com-
puter scientist, ontology is a system of description of some domain or
representation of knowledge about a domain. For instance, Gruber
defines ontology in the following way:
“An ontology is a description (like a formal specification of a program)
of the concepts and relationships that can formally exist for an agent
or a community of agents. This definition is consistent with the usage
of ontology as set of concept definitions, but more general. And it is a
different sense of the word than its use in philosophy” (Gruber, 1993).

As descriptions of domains, ontologies usually include the follow-


ing elements:
• Individuals called instances or “ground level” objects;
• Classes, sets, collections, concepts, types of objects, or kinds of
things;
• Attributes, aspects, properties, features, characteristics, or param-
eters that various objects and classes possess or may possess;
• Relations, which describe how classes and individuals from the
domain are related to one another;
• Complex structures, e.g., complex terms, formed from simpler
structures;
• Restrictions, e.g., axioms, which in contrast to axioms in logic


and mathematics, include all true statements about the domain,
defining what must be true and what must not be true in the
described domain;
• Rules often in the form of if-then statements used for logical infer-
ences and ontology transformations;
• Operations, which include both ontological operations and repre-
sentation of domain operations;
• Events, such as the changing of attributes or relations.
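
A toy domain ontology containing these elements can be written down directly.
The sketch below is only an illustration in Python rather than in an ontology
language such as OWL; the classes, individuals, relations and the single rule
are chosen for the example.

# A toy ontology of the software domain with classes, individuals, attributes,
# relations, one restriction and one if-then rule for inference.
ontology = {
    "classes":     {"Program", "Programmer", "ProgrammingLanguage"},
    "individuals": {"Linux": "Program", "Linus_Torvalds": "Programmer",
                    "C": "ProgrammingLanguage"},
    "attributes":  {"Linux": {"license": "GPL"}},
    "relations":   {("Linus_Torvalds", "writes", "Linux"),
                    ("Linux", "is_written_in", "C")},
    "restrictions": ["every Program is_written_in some ProgrammingLanguage"],
    # a rule for inference: if X writes P and P is written in L, then X knows L
    "rules": [("writes", "is_written_in", "knows")],
}

def apply_rules(onto):
    """Derive new relations from the chain rules of the ontology."""
    derived = set()
    for r1, r2, r3 in onto["rules"]:
        for (x, rel1, p) in onto["relations"]:
            for (q, rel2, l) in onto["relations"]:
                if rel1 == r1 and rel2 == r2 and p == q:
                    derived.add((x, r3, l))
    return derived

print(apply_rules(ontology))   # {('Linus_Torvalds', 'knows', 'C')}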

Usually, two kinds of ontologies are discerned — domain-specific


ontologies and upper or foundation ontologies (Maedche and Staab,
2001).
A domain-specific or simply domain ontology represents knowl-
edge about a specific domain as part of the world. For instance, an
ontology about the domain of computer software would model the
meaning of such terms as program, programmer, programming lan-
guage, etc.
In contrast to this, an upper ontology (or foundation ontology)
represents knowledge common to many domain ontologies.
There are special ontology languages to build ontologies (Uschold
and Gruninger, 1996).
To increase expressive abilities and efficiency of websites, devel-
opers use tags for structuring information. However, tags essentially
increase the complexity of the format of content available on web-
sites. Ontologies help to harness this complexity by taking into account
appropriate combinations of tags and content.

5.5. Scripts and productions

Resistance to change is as natural as change itself


— but history tells us that change is inevitable.
Matt Sakey

A script is a structured representation describing a stereotyped


sequence of events or actions in a particular context (Schank and
Abelson, 1977).
Developed by Roger Schank, Robert Paul Abelson (1928–2005)


and their research group, scripts have been used in natural language
understanding systems to organize a knowledge base in terms of the
situations that the system should understand. Scripts provide opera-
tional (procedural) knowledge about systems, actions, and situations.
A somewhat different understanding treats a script as a procedural
structure, which prescribes a set of circumstances expected to follow
on from one another in a definite environment, i.e., as an imaginary
sequence or chain of situations, which could be anticipated in some
setting.
A script is composed of the following components:

Entry conditions, which must be satisfied before the script starts.


Results or facts are conditions that will be true after the script has
terminated.
Events and actions which are represented in conceptual dependency
form using variables which are then concretized in the script.
Props are slots representing objects involved in events of the script.
Roles are the systems of actions that the individual participants
perform.
Scenes are sequences of events that occur according to the script
based on the temporal aspects of the script.
Scripts can also include tracks, which are versions of the script
and may share the same components of the script.
A social schema in the form of a script is generated by an event, e.g.,
going to a restaurant, which includes scenes, e.g., booking a table,
arriving at the restaurant, ordering food, etc.; props such as food
and menu; enabling conditions such as having money; roles such
as a waiter and a client; and outcomes such as not feeling hungry.
Social cognition researchers are particularly interested in studying
what happens when scripts involve conflicts with existing norms.

The classical example is the script “Restaurant” (Schank and


Abelson, 1977), which has the following informal description:

1. Go to a restaurant;
2. Be seated;
3. Get menu;
4. Read menu;
5. Order food;
6. Eat food;
7. Pay for meal;
8. Exit the restaurant.

After some formalization, we obtain a semi-formal script


“Restaurant”

Props:

Tables;
Menu;
Food;
Money.

Roles:

Customer;
Waiter;
Cook.

Scene 1: Entering
Customer PTRANS Customer into restaurant;
Customer ATTEND eyes to tables;
Customer MBUILD where to sit;
Customer PTRANS Customer to table;
Customer MOVE Customer to sitting position.

Scene 2: Ordering
Customer PTRANS menu to Customer (menu already on table);
Customer MBUILD choice of food;
Customer MTRANS signal to Waiter;
Waiter PTRANS to table;
Customer MTRANS ‘I want food’ to Waiter;
Waiter PTRANS to Cook.
Scene 3: Eating
Cook ATRANS food to Waiter;
Waiter PTRANS food to Customer;
Customer INGEST food.
Scene 4: Exiting
Waiter MOVE write check;
Waiter PTRANS to Customer;
Waiter ATRANS bill to Customer;
Customer ATRANS money to Waiter;
Customer PTRANS out of restaurant.
The final formalization gives us the formal script “Restaurant”:
Props

Tables;
Menu;
F = Food;
Bill;
Money.
Roles
P = Customer;
W = Waiter;
C = Cook;
K = Cashier;
O = Owner.

Entry conditions

P is hungry;
P has money.

Results

P has less money;


P is not hungry;
P is pleased (optional).
Scene 1: Entering
P PTRANS P into restaurant;
P ATTEND eyes to tables;
P MBUILD where to sit;
P PTRANS P to table;
P MOVE P to sitting position.

Scene 2: Ordering
(Alternative beginnings: menu on table; P asks for menu; O brings menu)
P PTRANS menu to P, or O MTRANS signal to W, or W PTRANS menu to P;
W PTRANS W to table;
P MTRANS “need menu” to W;
W PTRANS W to menu;
W PTRANS W to table;
P MTRANS food list to P;
* P MBUILD choice of F;
P MTRANS signal to W;
W PTRANS W to table;
P MTRANS ‘I want F’ to W;
W PTRANS W to C;
W MTRANS (ATRANS F) to C;
either: C MTRANS ‘no F’ to W;
W PTRANS W to P;
W MTRANS ‘no F’ to P;
(go back to *) or (go to Scene 4 at the no pay path);
or: C DO (prepare F script).

Scene 3: Eating
C ATRANS F to W
W ATRANS F to P
P INGEST F
(Option: Return to Scene 2 to order more; otherwise go to Scene 4)

Scene 4: Exiting
P MTRANS to W;
W MOVE (write bill);
W PTRANS W to P;
W ATRANS bill to P;
P ATRANS tip to W;
P PTRANS P to K;
P ATRANS money to K;
P ATRANS check to K;
P PTRANS P to out of restaurant.
(No pay path)

This script utilizes several conceptual dependencies:


PTRANS represents a physical motion from one place to another.
MTRANS represents information transmission.
ATRANS represents transmission of symbolic entities, e.g., trans-
mission of ownership.
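
Such a script can be represented directly as a data structure whose scenes
are sequences of conceptual dependency events. The following simplified sketch
in Python keeps only a few events from each scene, and the instantiate helper
is an invented illustration rather than part of Schank and Abelson's notation.

# A simplified restaurant script: scenes are lists of (actor, act, object)
# events written with the conceptual dependency primitives used above.
restaurant_script = {
    "props": ["tables", "menu", "food F", "bill", "money"],
    "roles": {"P": "Customer", "W": "Waiter", "C": "Cook", "K": "Cashier"},
    "entry_conditions": ["P is hungry", "P has money"],
    "results": ["P has less money", "P is not hungry"],
    "scenes": {
        "Entering": [("P", "PTRANS", "P into restaurant"),
                     ("P", "MBUILD", "where to sit"),
                     ("P", "PTRANS", "P to table")],
        "Ordering": [("P", "MBUILD", "choice of F"),
                     ("P", "MTRANS", "'I want F' to W"),
                     ("W", "MTRANS", "(ATRANS F) to C")],
        "Eating":   [("C", "ATRANS", "F to W"),
                     ("W", "ATRANS", "F to P"),
                     ("P", "INGEST", "F")],
        "Exiting":  [("W", "ATRANS", "bill to P"),
                     ("P", "ATRANS", "money to K"),
                     ("P", "PTRANS", "P out of restaurant")],
    },
}

def instantiate(script, bindings):
    """Concretize the script's variables (e.g., F) with given values."""
    def fill(text):
        for var, value in bindings.items():
            text = text.replace(var, value)
        return text
    return {scene: [(actor, act, fill(obj)) for (actor, act, obj) in events]
            for scene, events in script["scenes"].items()}

print(instantiate(restaurant_script, {"F": "pasta"})["Eating"])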

A behavioral script is a type of frame that describes behavior of


people in definite situations. Scripts allow different solutions for the
same situation or scene, which form the so-called semantic spectrum
of actions.
Script theory was primarily intended to explain language process-
ing and higher thinking skills in the context of storytelling and the
development of intelligent tutors (Schank, 1991). A variety of com-
puter programs have been developed based on scripts, for example,
educational software (Shank and Cleary, 1995).
A useful kind of operational (procedural) knowledge representa-


tion is productions, which are rules for obtaining knowledge, mak-
ing decisions and/or performing actions. Productions are intensively
used for representing (procedural) knowledge in expert systems. In
some sense, productions are elementary scripts.

The term production and the corresponding construction were intro-


duced and studied by Emil Post (1897–1954) in (Post, 1943).
There are two basic forms of productions:

If <premise>, then <conclusion> (5.32)

or

If <condition>, then <action>. (5.33)

The first form is called premise-conclusion rules and the second


form is called condition-action rules. In an abstract representation,
both forms are described by the formula A → B, which means A
implies B.
Conditions, premises and conclusions can be in the form of simple
or compound statements. For instance, we can consider the following
productions

If you want to go from America to Europe,


then you have to use a plane or ship, (5.34)
If you love art and want to spend vacations in Europe,
then go to Italy. (5.35)

The first of them represents a premise-conclusion rule and the


second one represents a condition-action rule.
Some of the benefits of if-then rules are that they are modular,
each defining a relatively small and, at least in principle, independent
piece of knowledge. New rules may be added and old ones deleted
usually independently of other rules.
Premise-conclusion rules are utilized for deriving new knowledge
from given knowledge. In this case, the technique called forward
chaining is used. For instance, we have the following productions


(5.36) and (5.37) and initial knowledge (5.38):
If a celestial body rotates about the Sun,
then it gets light from the Sun. (5.36)
If a celestial body is a planet from the Solar System,
then it rotates about the Sun. (5.37)
The Earth is a planet from the Solar System. (5.38)
Then we can derive the following inference:
Applying rule (5.37), we have:
The Earth rotates about the Sun.
Applying rule (5.36), we have:
The Earth gets light from the Sun.
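
This derivation can be reproduced mechanically. The following minimal
forward-chaining sketch in Python is only an illustration, not a full
inference engine; it applies rules (5.36) and (5.37) to the initial fact
(5.38) until no new facts can be derived.

# Forward chaining: repeatedly apply premise-conclusion rules to the fact
# base, adding conclusions as new facts, until nothing new is produced.
RULES = [
    ("rotates about the Sun", "gets light from the Sun"),              # (5.36)
    ("is a planet from the Solar System", "rotates about the Sun"),    # (5.37)
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for subject, predicate in list(facts):
                if predicate == premise and (subject, conclusion) not in facts:
                    facts.add((subject, conclusion))
                    changed = True
    return facts

facts = forward_chain({("The Earth", "is a planet from the Solar System")}, RULES)
for subject, predicate in sorted(facts):
    print(subject, predicate)
# prints, besides the given fact, the derived facts:
#   The Earth gets light from the Sun
#   The Earth rotates about the Sun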
Premise-conclusion rules are also utilized for decision justification.
In this case, the technique called backward chaining is used. For
instance, we have the following backward chaining:
I am going to buy tickets for a plane to Rome.
Why?
Because “I want to go to Italy and Rome is in Italy”.
Why?
Because “If you love art and want to spend vacations in Europe,
then go to Italy”.
Why?
Because I live in Canada, love art and want to spend vacations in
Europe.
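
The same justification can be obtained by a goal-driven procedure. In the
sketch below (again only a Python illustration, with the facts and rules
paraphrased from the example above), a goal is established either because it
is a known fact or because some rule concludes it and the rule's premise can
itself be established.

# Backward chaining: work from a goal back through rule conclusions to
# premises until every open subgoal is a known fact.
RULES = [
    ("loves art and wants to spend vacations in Europe", "goes to Italy"),
    ("goes to Italy and Rome is in Italy", "buys tickets for a plane to Rome"),
]
FACTS = {"loves art", "wants to spend vacations in Europe", "Rome is in Italy"}

def backward_chain(goal, rules, facts):
    if goal in facts:
        return True
    for premise, conclusion in rules:
        if conclusion == goal:
            parts = premise.split(" and ")   # a compound premise holds
            if all(backward_chain(p, rules, facts) for p in parts):  # if all parts hold
                return True
    return False

print(backward_chain("buys tickets for a plane to Rome", RULES, FACTS))   # True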
Condition-action rules are utilized for decision-making and orga-
nization of functioning. For instance, we have the following inference:
I live in Canada, love art and want to spend vacations in Europe.
Applying rule (5.35), we have:

I will go to Italy.

In addition, we have:

Rome is in Italy.

Applying rule (5.34), we have:

I will buy tickets for a plane to Rome.

Pospelov (1990) suggests a more detailed form of a production:

(N , D, C),

where N is the name, D represents the domain, i.e., those objects


to which this production can be applied, and C is the core of the
production.
The core of a production has the following form:

(S, A → B, P ),

where S denotes preconditions, i.e., conditions of the production


applicability, and P denotes postconditions, i.e., conditions that must
be fulfilled when the production has been applied. The component
A → B is called the nucleus of the production.
Note that all forms of productions and their components have the
structure of a fundamental triad (named set) (cf., Appendix A).
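
Pospelov's form of a production can be expressed as a small data type. In the
following Python sketch the field names and the apply method are illustrative
choices and are not part of Pospelov's notation.

# Pospelov's extended production (N, D, C) with the core (S, A -> B, P).
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Core:
    preconditions: Callable[[dict], bool]   # S: applicability conditions
    nucleus: Tuple[str, str]                # A -> B
    postconditions: Callable[[dict], bool]  # P: must hold after application

@dataclass
class Production:
    name: str       # N
    domain: str     # D: the objects the production can be applied to
    core: Core      # C

    def apply(self, state):
        a, b = self.core.nucleus
        if self.core.preconditions(state) and state.get(a):
            state = dict(state, **{b: True})
            assert self.core.postconditions(state)
        return state

go_to_italy = Production(
    name="go to Italy",
    domain="vacation planning",
    core=Core(preconditions=lambda s: s.get("loves art", False),
              nucleus=("wants vacations in Europe", "goes to Italy"),
              postconditions=lambda s: s.get("goes to Italy", False)),
)
print(go_to_italy.apply({"loves art": True, "wants vacations in Europe": True}))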
Usually rule-based systems, such as expert systems, consist of
a set of rules in the form of productions, a knowledge base and
an inference engine. The rules encode active domain knowledge as
premise-conclusion and/or condition-action pairs. The knowledge
base contains initial knowledge and previously deduced knowledge.
The inference engine works in the context of a non-monotonic logic
applying a conflict resolution strategy to deal with inconsistencies
and handle cases where more than one rule is suitable for application.
Comparing scripts and productions with abstract automata and
algorithms, which are popular means of operational knowledge rep-
resentation, we see that productions are on the level of finite
automata, while scripts are on the level of recursive and super-


recursive algorithms.

5.6. Frames and Schemas

All truths are easy to understand once they are discovered;


the point is to discover them.
Galileo Galilei

A frame is a data structure introduced by Marvin Minsky in the


1970s for knowledge representation that allows imitating the way
in which people keep information in the brain and make use of it
when the need arises. Minsky frames were intended to help Artificial
Intelligence systems recognize and utilize patterns and their specific
instances (Minsky, 1974). Frames support inheritance and are often
used to capture knowledge about typical objects or events from some
class, such as cars, planes, organizations, people or triangles.
A frame is a labeled graph in which the nodes are called slots.
Each frame is a structured knowledge item that contains information
about a given object. A frame also has a name (identification), by
means of which this frame with all its slots can be retrieved. Each
slot has a name and characterizes a property or relation of the object
represented by the frame. Aspects of the relation represented by a slot
are described by facets of the slot. Slots may contain default values
(subject to override by detecting a different value for an attribute),
refer to other frames (component relationships) or contain methods
for recognizing pattern instances being able to hold declarative and
procedural information.
Slots may have the following facets:
— Type of a value;
— Default value determined by default fillers;
— Current value determined by current fillers;
— Constraints on a value in the form of properties or axioms;
— Salience reflecting measure of the slot’s importance;
— Cardinality determining minimum and maximum values;
— Methods or Procedures determining actions on values and other
facets;
Here are examples of standard methods:

— The method IF <condition> NEEDED is applied to acquire a


slot value;
— The method IF <condition> CHANGED is applied to change a
value of a slot;
— The method IF <condition> ADDED is applied to add a value
to a slot;
— The method IF <condition> REMOVED is applied to delete a
value of a slot;

Facets of a slot may be variable or constant. Variables in slots of


a frame allow adaptation of the frame to different objects and situ-
ations by assigning constant values to these variables. Consequently,
frame languages are usually focused on the recognition and descrip-
tion of objects and classes.
Simple slots are pairs <slot name, slot value> or triples <frame
name, slot name, slot value>. However, in the majority of frame
systems, slots are complex structures that have more facets, which
describe the properties of the relation represented by the slot. The
value of a slot may be elementary, e.g., a text or a number, or it may
be another frame. Some frame systems allow multiple values for slots
and support procedural facets (methods) used either to compute the
slot value or to make consistency checking or to update values of this
or other slots.
The meaning of a frame is not defined by any systematic rules
from logic and language or by relations to the real world. It is deter-
mined only by the procedures that manipulate the slots and their
aspects, while there are no formalized restrictions on these proce-
dures.
Let us consider an example of a frame for the object car.

Example 5.5.1. A frame F with the name “Car”, which stores


knowledge about cars has the following slots:

1. The slot with the name “Vehicle” informs that the class of cars
is a subclass of the class of vehicles and is related to the frame
“Vehicle”.
2. The slot with the name “Number of wheels” informs how many
wheels the car has, e.g., this slot may have the value 4.
3. The slot with the name “Number of doors” informs how many
doors the car has, e.g., this slot may have the value 2 or 4.
4. The slot with the name “Make” informs what company produced
the car, e.g., this slot may have the value “GMC”, “Honda” or
“Toyota”.
5. The slot with the name “Model” describes the model of the car,
e.g., this slot may have the value “Accord” when the previous slot
has the value “Honda”.
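
The frame from Example 5.5.1 can be sketched as a data structure in which
every slot carries a few facets. The facet set and the class below are an
illustration and do not follow any particular frame language.

# The frame "Car": each slot has the facets type, default value, current
# value and an optional constraint on admissible values.
class Slot:
    def __init__(self, type_, default=None, constraint=None):
        self.type, self.default, self.constraint = type_, default, constraint
        self.value = None

    def get(self):
        return self.value if self.value is not None else self.default

    def set(self, value):
        if self.constraint is not None and not self.constraint(value):
            raise ValueError("facet constraint violated")
        self.value = value

car_frame = {
    "name": "Car",
    "is_a": "Vehicle",                    # link to the frame "Vehicle"
    "slots": {
        "Number of wheels": Slot(int, default=4),
        "Number of doors":  Slot(int, constraint=lambda v: v in (2, 4)),
        "Make":             Slot(str),
        "Model":            Slot(str),
    },
}

car_frame["slots"]["Make"].set("Honda")
car_frame["slots"]["Model"].set("Accord")
print(car_frame["slots"]["Number of wheels"].get())   # 4, the default value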

Collections of related frames are linked together into frame-


systems, which are used to make knowledge processing economical,
representing changes of emphasis and attention and accounting for the
effectiveness of the representation. Frame-systems provide additional
means for adaptation of frames to different objects and situations,
performing this process in two stages: at first, the most relevant frame
is selected from the frame-system and then this frame is adapted to
the current object or situation. For visual scene analysis, the different
frames of a system describe the object, e.g., a system, from different
viewpoints, and the transformations between one frame and another
stand for the effects of object changes, e.g., moving the object from
place to place. In turn, frame-systems are linked by an information
retrieval network performing selection of frames and their adaptation.
Frames became popular because they were assumed to be power-
ful, based on associated procedural languages, and easy to learn due
to the arbitrariness of slot names and values.
Frame technology was further developed in (Burgin and Slyusar,
1980) based on the block-schema language elaborated in (Burgin,
1973; 1976).
Some consider frames as a specific kind of semantic networks.
Others assume that semantic networks and frames have the following
differences:
1. In a semantic network, nodes hold no associated information
except their names, while in a frame, a node is a slot, which may
contain a lot of information.
2. In a semantic network, all kinds of relations are automatically


managed according to a set of methods of inheritance that can be
defined by the constructor of the network, while in frame systems,
inheritance of properties is described by a single class of relations.

Frames, semantic networks and scripts are kinds of the structure


called schema or scheme, which is very popular in the field of knowl-
edge representation. Note that the plural of schema is schemas in
the U.S. or schemata in the U.K., while the plural of scheme is schemes.
Kant was perhaps the first to introduce the word schema into
philosophy, making brief remarks about schemata and describing the
“schematism” as “an art concealed in the depths of the human soul,
whose real modes of activity nature is hardly likely ever to allow
us to discover” (Kant, 1781/1929). As an example, he described the
“dog” schema as a mental pattern that could delineate the figure
of a four-footed animal in a general manner, without limitation to
any single determinate figure from experience, or any possible image
that a person can represent directly. In neuroscience, the notion of
a schema was introduced by Head and Holmes (1911) who discussed
body schemas in the context of brain damage. Bartlett (1932) imple-
mented the notion of a schema as part of a study of remembering.
Another important use of schemas in psychology was initiated by
Piaget (1952), who viewed cognitive development from biological per-
spective and described it in terms of operation with schemas. With
respect to adaptation, Piaget believed that humans desire a state of
cognitive balance or equilibration. When the child experiences cog-
nitive conflict, such as a discrepancy between what the child believes
the state of the world to be and what she or he is experiencing, adap-
tation to the new situation is achieved through assimilation and/or
accommodation. Assimilation involves making sense of the current
situation in terms of previously existing mental structures called
schemas. Accommodation requires construction of new schemas when
new information does not fit into existing schemas. For instance,
when a child encounters a bird for the first time and learns that it
is different from dogs or cats, he must create a new schema for birds.
Piaget characterized schemas, or schemes, as general characteristics
of an action that allow the application of the same action to a dif-


ferent context by means of the mind’s natural tendency to organize
information into related, interconnected structures — schemas.
However, Bartlett’s research was neglected in America as the
behaviorist psychology prevailed and only later Ulric Gustav Neisser
(1928–2012) returned schemas to psychology (Neisser, 1967). In
recent years, researchers extensively explored human cognition and
knowledge structure in terms of schemas treated as an efficient tool
for representation of human knowledge (cf., for example, (Anderson,
1977; Rumelhart, 1980)). It has been experimentally demonstrated
that schemas from the person’s mentality influence the way incom-
ing information is acquired and interpreted (cf., for example, (Pichert
and Anderson, 1977)).
Some researchers assume that a schema is defined as an organized
body of knowledge, conceived theoretically as a set of interconnected
propositions centering on a general concept, and linked peripherally
with other concepts (cf., for example, (Gagne, 1986)). However, this
definition is too restrictive, and the majority of cognitive psychol-
ogists utilize different conceptions and models of schemas. That is
why in psychology and education, a schema is treated as a men-
tal structure people use to organize, compress and simplify their
knowledge of the world in which they live. There are schemas about
people, nature, artificial devices, animals, and in fact, about almost
everything. Schemas also represent knowledge about concepts and
objects with the relationships they have with other objects, situa-
tions, events, sequences of events, actions, processes and sequences
of actions. Researchers often deal with schemas as the basic units
of human knowledge and cognition (Suzuki, 1987). As Rumelhart
writes:

“Schemata can represent knowledge at all levels-from ideologies and cul-


tural truths to knowledge about the meaning of a particular word, to
knowledge about what patterns of excitations are associated with what
letters of the alphabet. We have schemata to represent all levels of our
experience, at all levels of abstraction. Finally, our schemata are our
knowledge. All of our generic knowledge is embedded in schemata.”
(Rumelhart, 1980)
In this context, a mental schema is an abstract structure of


knowledge, a mental representation stored in memory upon which
all information processing depends. It may represent knowledge at
different levels, e.g., cultural truths, linguistic knowledge, or ideolo-
gies. They are mental templates that represent a person’s knowl-
edge about people, situations or objects, and which originate from
prior knowledge or experiences. People use schemas to organize their
knowledge and provide a framework for future understanding. All
social stereotypes and roles, scripts, worldviews, and archetypes are
schemas.
Schemas influence our attention, as we are more likely to notice
things that fit into schemas we have. If our schemas do not allow
us to represent some incoming information, then this information is
either ignored or encoded into a new schema. Often (but not always)
schemas are prone to distortion. They influence what we look for
in different situations. They are conservative having a tendency to
remain unchanged, even in the face of contradictory information. All
biases have their schemas. People are inclined to place others who
do not fit their schema in a “special” or “different” category, rather
than to consider the possibility that the schema they have may be
faulty.
Dynamical schemas of situations are called interaction schemas,
which are applied to the visuomotor coordination of the frog, high-
level visual recognition, hand control, language processing, and per-
ceptual robotics (Arbib, 1992; 1995). Interaction schemas, which
include motor schemas and perceptual schemas, are defined by the
execution of tasks involving the physical environment. A set of
basic motor schemas is hypothesized to provide simple prototypical
patterns of interaction with the world, whereas perceptual schemas
recognize certain possibilities of interaction and regularities of the
physical world. Motor schemas are similar to control systems but
distinguished in that they can be combined to form coordinated con-
trol programs that control the phasing in and out of patterns of
co-activation, with mechanisms for the passing of control parameters
from perceptual to motor schemas and forming coordinated control
programs to mediate complex behaviors. Various schema parameters
represent properties of physical objects such as size, location, time,


and motion.
Many schemas may be abstracted from the perceptual-motor
interface. Schema activations are largely task-driven, reflecting the
goals of the organism and the physical and functional requirements
of the task.
Theoretical representation posits schemas as “programs” (in a
generalized sense) or some kind of procedures for a system that has
continuing perception of, and interaction with, its environment, with
concurrent activity of many different schemas passing messages back
and forth for the overall achievement of some goal. At the same time,
it is possible to treat schemas as self-contained computing agents
(objects) with the ability to communicate with other similar agents,
and whose functionality is specified by some behavior. In addition,
brain theory further requires schemas to be implemented in specific
neural networks.
A schema is both a store of knowledge and the description of a
process for applying that knowledge. Consequently, a schema is often
instantiated to form multiple schema instances as active copies of
the process to apply operational knowledge. Namely, given a schema
that represents generic knowledge about some object, an individ-
ual may need several active instances of the schema, each suitably
tuned, to subserve our perception of a different instance of the object.
Schemas can become instantiated in response to certain patterns of
input from sensory stimuli or other schema instances that are already
active.
The alternative view (Arbib and Liaw, 1995) is that there is a
limited set of schemas (maybe only one) and that only the schemas
can be active. By contrast a schema instance is rather a record in
working memory that records that a certain “region of space time”
R activated a specific schema S with certain parameters {P} and
confidence level C. On the latter view, processes of attention phase
the activity of a schema in and out for different regions. Presum-
ably, however, the working memory provides top-down activation of
a schema when attention returns to those regions where the schema
was recently active (cf., (Itti and Arbib, 2005)).
Each instance of a schema has an associated activity level. That of


a perceptual schema represents a “confidence level” that the object
represented by the schema is indeed present; while that of a motor
schema may signal its “degree of readiness” to control some course
of action. The activity level of a schema instance may be but one of
many parameters that characterize it. Thus the perceptual schema
for “ball” might include parameters to represent size, color, and
velocity.
The use, representation, and recall of knowledge is mediated
through the activity of a network of interacting computing agents, the
schema instances, which between them provide processes for going
from a particular situation and a particular structure of goals and
tasks to a suitable course of action (which may be overt or covert,
as when learning occurs without action or the animal changes its
state of readiness). This activity may involve passing of messages,
changes of state (including activity level), instantiation to add new
schema instances to the network, and deinstantiation to remove
instances. Moreover, such activity may involve self-modification and
self-organization.
The key question is to understand how local schema interac-
tions can integrate themselves to yield some overall result with-
out explicit executive control, but rather through cooperative com-
putation, a shorthand for “computation based on the competition
and cooperation of concurrently active agents”. For instance, in
interpretation of visual scenes, schema instances are used to rep-
resent hypotheses that particular objects occur at particular posi-
tions in a scene, so that instances may either represent conflict-
ing hypotheses or offer mutual support. Cooperation yields a pat-
tern of “strengthened alliances” between mutually consistent schema
instances that allows them to achieve high activity levels to con-
stitute the overall solution of a problem; competition ensures that
instances which do not meet the evolving consensus lose activ-
ity, and thus are not part of this solution (though their contin-
uing subthreshold activity may well affect later behavior). In this
way, a schema network does not, in general, need a top-level execu-
tor, since schema instances can combine their effects by distributed
processes of competition and cooperation, rather than the iteration


of an inference engine on a passive store of knowledge. This may
lead to apparently emergent behavior, due to the absence of global
control.
In brain theory, a given schema, defined functionally, may be dis-
tributed across more than one brain region; conversely, a given brain
region may be involved in many interaction schemas. A top-down
analysis may advance specific hypotheses about the localization of
(sub)-schemas in the brain and these may be tested by lesion exper-
iments, with possible modification of the model (e.g., replacing one
schema by several interacting schemas with different localizations)
and further testing.
Schemas, and their connections within a schema network, must
change so that over time they may well be able to handle a certain
range of situations in a sufficiently adaptive way. In a general set-
ting, there is no fixed repertoire of basic interaction schemas. New
schemas are formed as assemblages of old schemas and, once formed,
a schema is usually tuned by a certain adaptive mechanism. This
tunability of schema assemblages allows them to become units of
behavioral control, much as a skill is honed into a unified whole
from constituent parts. Such tuning may be expressed at the level
of schema theory itself, or may be driven by the dynamics of mod-
ification of unit interactions in some specific implementation of the
schemas. The theory of interaction schemas is consistent with a model
of the brain as an evolving self-configuring system of interconnected
units.
Once an interaction schema as a theoretical model of animal
behavior has been refined to the point of hypotheses about the local-
ization of schemas, it is possible to model a brain region by see-
ing if its known neural circuitry can indeed be shown to implement
the posited schema. In some cases, the model involves properties of
the circuitry that have not yet been predicted and tested, thus lay-
ing the ground for new enhancements and experiments. In AI, it
is possible to implement individual schemas using artificial neural
networks or appropriate programming language for developing the
corresponding software system on a computer.
Schemas affect what we notice, how we interpret things and how


we make decisions and act. They act like filters, accentuating and
downplaying various elements. We use them to classify things, such as
when we ‘pigeon-hole’ people. They also help us forecast, predicting
what will happen. We even remember and recall things via schemas,
using them to ‘encode’ memories.
Schemas are often shared within cultures, allowing short-cut com-
munications. Every word is, in effect, a schema, as when you read it
you receive a package of additional inferred information.
We tend to have favorite schemas which we use often. When inter-
preting the world, people will try to use these first, going on to others
if they do not sufficiently fit. Often when something does not match
the schema, it is ignored. Some schemas are easier to change than
other schemas, and some people are more open about changing any
of their schemas than other people.
Schema theory assumes that when individuals obtain informa-
tion, they attempt to fit it into some structure in memory that helps
them make sense of the received information. Schema theory pro-
poses that individuals break information into suitable chunks,
which are then coded, structured by existing schemas or organized in
new schemas and stored in the brain for later recall. Schema theory
suggests efficient tools and an active coding strategy necessary for
facilitating the recall and utilization of knowledge.
Schemas are hierarchically organized mental structures, which allow
the learners to understand and associate what is being presented
to them.
Information processing based on schema theory is far removed
from serial symbol-based computation, and artificial intelligence
actively contributes to schema theory, even when it does not use
this term. For instance, Minsky (1986) promoted a Society of Mind
analogy in which “members of society”, the intelligent agents, are
analogous to schemas. The study of interactive “agents” in a more
general context has become an established theme in artificial intelli-
gence when actions are mediated through a network of schemas.
Here are some examples of interaction schemas in human
mentality.
Example 5.6.1. The schema of face recognition is tentatively


acquired at around two or three months of age (it succeeds a
previous schema already present at birth). This schema corresponds
to the mental structure which connects the various states of a face
defined by configurations of perceptual indices (front view, side view,
etc.) related to actions-transformations (head rotations, subject’s or
object’s rotation).
Example 5.6.2. The schema of (shape or) size constancy is the
insertion of the various sizes of an object related to its distance from
the perceiver in a transformational system (system of transforma-
tions) governing the moves of the object. Present at birth, it could
be reconstructed during the first months of life.
Example 5.6.3. The schema of object’s permanence (the “objec-
tive” form), the one achieved according to Piaget at around 16–18
months of age, is the mental structure which connects the various
successive states of a set of objects (their different localizations or rel-
ative positions) to their successive displacements (transformations),
even bridging across periods when the object disappears from the
view.
To achieve better comprehensibility, interaction schemas are usu-
ally represented in a graphical form as in Figure 5.23.
Example 5.6.4. A hypothetical coordinated control schema for
reaching and grasping is given in Figure 5.23 (Arbib, 1989).
Interaction schemas form the base for the Computational Neuro-
science, structure and function of which are given in Figure 5.24. It
is interesting to note that even subneural modeling brings us to grid
automata from the automata theory (Burgin, 2005).
One more important type is image schema defined as a recurring
structure within our cognitive processes which establishes patterns of
understanding and reasoning (Johnson, 1987; Lakoff, 1987; Rohrer,
2006). Image schemas are formed from our bodily interactions, from
linguistic experience, and from historical context.
In contemporary cognitive linguistics, an image schema is treated
as an embodied prelinguistic structure of experience that motivates
conceptual metaphor mappings. Experimental evidence supporting
the existence of image schemas in the mentality of people is drawn from
several disciplines, including cognitive psychology, neuroscience and
studies in spatial cognition in both linguistics and psychology.

Figure 5.23. A hypothetical coordinated control schema for reaching and
grasping (Arbib, 1989), combining perceptual schemas (visual search, size
recognition, orientation recognition) with motor schemas (fast and slow phase
movement, hand preshape, hand rotation, actual grasp). Dashed lines denote
activation signals (i.e., control links, in our terminology); solid lines denote
transfer of data (i.e., information links, in our terminology).
Naturally, schemas are used in a variety of areas.
Mental schemas, also called mental models, concepts, mental rep-
resentations, and knowledge structures, are used by people for orga-
nization of their behavior, thinking, writing, and speaking.
Figure 5.24. A version of the schema for Computational Neuroscience suggested
by M. A. Arbib: the brain/behavior/organism level is decomposed functionally
into schemas and structurally into brain regions (components, layers, modules);
the two decompositions meet in generalized neural networks and, further, in
subneural modeling in the form of grid automata.

A social schema of behavior is generated by an event (going to


a restaurant), which consists of a script and scenes (booking a table,
arriving at the restaurant, ordering food, etc.); props (menu); enabling
conditions (money); roles (waiter, client); and outcomes (not feeling
hungry). Social cognition researchers are particularly interested in
studying what happens when the schema activated conflicts with
existing norms.
In general, social schemas represent general social knowledge, e.g.,
describing the structure of organizations.
An ideological schema is generated by attitudes or opinions on
relevant social or political issues, for example, on energy and ecology.
A formal schema is related to the rhetorical structure of a written
text, such as differences in genre or between narrative styles and their
corresponding structures.
A linguistic schema includes the decoding features a person needs


in order to understand how words are organized and fit together in
a sentence (be it spoken or written discourse).
A content schema refers to knowledge about the subject matter
or content of a text.
An orienting schema of the nearby environment, or “cognitive
map”, guides the organism around the environment. A cognitive map
contains schemata of the objects in the environment and spatial rela-
tions between the objects.
A schema for oneself is called a self-schema. People also hold
schemas for idealized or projected selves, or possible selves.
A schema of another person is called a person schema. Person
schemas can be mental or represented by physical objects, e.g., by
texts or graphs.
A role schema depicts roles or occupations of people, while a
schema for events or situations is called an event schema. Event
schemas are formally represented by scripts.
People use schemas to organize their knowledge and provide a
framework for future understanding, e.g., according to Piaget, in
their development, children adopt a variety of schemas to under-
stand the world. Examples of schemas include organization of sci-
entific disciplines, library classifications, social schemas, stereotypes,
social roles, worldviews, and archetypes. We can see that frames,
scripts and semantic networks are special cases of schemas.
It is natural to consider three types of schemas:

• static schemas describe (give knowledge about) classes of objects.


• function schemas describe (give knowledge about) classes of
changes or relations.
• process schemas describe (give knowledge about) classes of pro-
cesses, such as behavior, interaction, or communication.

In contrast to static schemas, function schemas and process


schemas are dynamic schemas. For instance, frames are usually static
schemas, while scripts are dynamic schemas.
In a natural way, schema theory came from psychology to edu-
cation where it was introduced by Anderson (1977). He pointed out
that schemas are a form of representation for complex knowledge pro-


viding a principled model of how old knowledge might influence the
acquisition of new knowledge. Following this principle, schema the-
ory was applied to understanding the reading process serving as an
important counterweight to purely bottom-up approaches to read-
ing. The schema theory emphasizes that reading involves both the
bottom-up information from the perceived letters coming into the
eye and the use of top-down knowledge to construct a meaningful
representation of the content of the text.
A notion of a schema rather different in emphasis and proper-
ties from those we have just been considering has been very popular
in programming, where it was formalized and extensively used for
theoretical purposes. At the beginning, program schemas, or pro-
gram schemata, were introduced by Lyapunov in 1953 and published
later in (Lyapunov, 1958) under the name operator schema. After-
wards Ianov, a graduate student of Lyapunov, transformed opera-
tor schemas into a logical form called a logical schema of algorithm
(later named Ianov program schemata) and proved many properties
of these schemas (Ianov, 1958; 1958a; 1958b). The main result of
Ianov is a theorem about the decidability of equivalency of schemas
that use only one-argument functions. Approximately at the same
time, Kaluznin (1959) introduced the concept of a graph-schema of
an algorithm. Subsequently, this concept was generalized by Bloch
(1975) and applied to automaton synthesis, discrete system design,
programming, and medical diagnostics.
Program schemas were later studied by different authors, who
introduced various kinds of program schemas: recursive, push-down,
free, standard, total schemas (cf., for example, (Karp and Miller,
1969; Paterson and Hewitt, 1970; Garland and Luckham, 1971;
Logrippo, 1978)). Fischer (1993) introduced the mathematical con-
cept of a lambda-calculus schema to compare the expressive power
of programming languages. The theory of program schemas has been
considered as a base for (Yershov, 1977) or one of the main direc-
tions (Kotov, 1978) in theoretical programming. In the 1960s, pro-
gram schemas were used to create programming languages and build
translators. To study parallel computations, flow graph and dataflow
schemas have been introduced and utilized (Slutz, 1968; Keller, 1973;
Dennis, Fossen, and Linderman, 1974). Dataflow schemas are for-
malizations of dataflow languages. Program schemas and dataflow
schemas formed an implicit base for the development of the first pro-
gramming metalanguage — the block-schema (flow-chart) language
(Burgin, 1973; 1976).
Moreover, the advent of the Internet and introduction of the
Extensible Markup Language, abbreviated XML, started the devel-
opment of schema languages (cf., for example, (Duckett et al., 2001;
Van Der Vlist, 2004)). As developers know, the advantage of XML is
that it is extensible, even to the point that you can invent new ele-
ments and attributes as you write XML documents. Then, however,
you need to define your changes so that applications will be able to
make sense of them and this is where XML schema languages come
into play. In these languages, schemas are machine-processable spec-
ifications that define the structure and syntax of metadata specifica-
tions in a formal schema language. There are many different XML
schema languages (W3C Schema, Schematron, Relax NG, and so
on). They are based on schemas that define the allowable content
of a class of XML documents. Schema languages form an alterna-
tive to the DTD (Document Type Definition), and offer more pow-
erful features including the ability to define data types and struc-
tures. XML schemas from these languages provide means for defining
the structure, content and semantics of XML documents, including
metadata. A specification for XML schemas is developed and main-
tained under the auspices of the World Wide Web Consortium. The
Resource Description Framework (RDF) is an evolving metadata
framework that offers a degree of semantic interoperability among
applications that exchange machine-understandable metadata on the
Web. RDF schema (Resource Description Framework schema) is a
specification developed and maintained under the auspices of the
World Wide Web Consortium. The Schematron schema language dif-
fers from most other XML schema languages because it is a rule-
based language that uses path expressions instead of grammars.
This means that instead of creating a grammar for an XML doc-
ument, a Schematron schema makes assertions applied to a specific
context within the document. If the assertion fails, a diagnostic message that is supplied by the author of the schema can be displayed.
RELAX NG is a grammar-based schema language, which is both
easy to learn for schema creators and easy to implement for software
developers.
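To illustrate how such schema languages are applied, the following sketch (in Python, assuming the third-party lxml library) validates a small XML document against an inline XML Schema definition; the "book" vocabulary used here is invented purely for the example.

# A minimal sketch of XML schema validation, assuming the third-party
# lxml library; the element names below are invented for illustration.
from lxml import etree

XSD = b"""<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="book">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
        <xs:element name="year" type="xs:integer"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

DOC = b"<book><title>A title</title><year>2016</year></book>"

schema = etree.XMLSchema(etree.fromstring(XSD))    # the machine-processable schema
document = etree.fromstring(DOC)                   # an instance document
print(schema.validate(document))                   # True: the document conforms

In lxml, RELAX NG and Schematron schemas can be used through the analogous etree.RelaxNG and etree.Schematron classes; only the constraint language differs.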
A natural tool for providing flexible data structures is to cre-
ate schemas for describing an object and any of the interrelation-
ships that exist within a data structure. There are many different
kinds of schemas used in different areas of information technology.
For instance, relational databases such as SQL Server use schemas
to contain their table names, column keys, and provide a reposi-
tory for trigger and stored procedures. In addition, when a developer
creates a class definition, he or she can define schemas to provide
an efficient object-oriented interface for properties, methods, and
events.
An XML schema is by definition a well-formed XML document
with a set of namespaces, which consist of declarations providing
a unique set of identifiers and establishing a definite structure on
XML elements and their attributes. The original namespace in the
XML specification used URIs (Uniform Resource Identifiers) as a
base for differentiating various XML vocabularies. Later this names-
pace was extended under the XML schema specification to include
schema components in the structure and not just single elements
and attributes. The unique identifier was changed from a URI to a
security boundary that is owned by the schema author because URI
does not point to a physical location. The whole namespace con-
sists of two components — the XML schema namespace and target
namespace.
A special kind of XML schema has been developed for energy simulation data representation (Gowri, 2001). Another application
of XML schemas is e-business. For instance, the ebXML specifi-
cation schema developed by UN/CEFACT and Oasis provides a
standard framework by which business systems may be config-
ured to support execution of business collaborations, which consist
of business transactions. Such schemas are called business process
specification schemas. Transactions can be implemented using one
of many available standard patterns of business interaction. These


patterns determine the ongoing exchange of business documents and
signals between the partners to achieve the required electronic com-
merce transactions.
Another example of business process specification schemas is the
XML schema definition developed by the Danish Broadcasting Cor-
poration for business-to-business exchange interface with the DR
metadata standard. XML schemas are also used for modeling and
exploration of business objects (Daum, 2003).
Star schemas determine methods of organizing information in a
data warehouse that allows the business information to be viewed
from many perspectives.
In addition, an important tool in database theory and technol-
ogy is the notion of the database schema, which gives a general
description of a database, is specified during database design, and
is not expected to change frequently (Elmasri and Navathne, 2000).
Database schemas are represented by schema diagrams. Database
management system (DBMS) architecture is often specified utilizing
database schemas. Three important tasks of databases are (Elmasri
and Navathne, 2000):

1. Insulation of program and data (program-data and program-


operation independence).
2. Support of multiple user views.
3. Use of a catalog to store the database description (schema).

To realize these tasks, the three-schema architecture, or


ANSI/SPARC architecture, of DBMS was developed (Tsichridsis and
Klug, 1978). In this architecture, schemas are defined at three levels:

1. The internal level has an internal schema, which describes the


physical storage structure of the database.
2. The conceptual level has a conceptual schema, which describes the
structure of the whole database for a community of users.
3. The external or view level includes a number of external schemas
or user views. Each external schema describes the part of the
database that a particular user group is interested in.
Most DBMS do not separate the three levels completely, but sup-
port the three-schema architecture to some extent.
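As a rough illustration (not a description of any particular DBMS), the three levels can be sketched in Python as three separate descriptions of the same database; all storage, table, column and view names below are hypothetical.

# A sketch of the ANSI/SPARC three-schema architecture; every concrete
# name (files, tables, columns, views) is a hypothetical illustration.
from dataclasses import dataclass, field

@dataclass
class InternalSchema:            # physical storage structure
    storage: dict = field(default_factory=lambda: {"employee.dat": "B-tree index on emp_id"})

@dataclass
class ConceptualSchema:          # structure of the whole database
    tables: dict = field(default_factory=lambda: {"employee": ["emp_id", "name", "dept"]})

@dataclass
class ExternalSchema:            # one user view over the conceptual level
    name: str
    columns: list

payroll_view = ExternalSchema("payroll_view", ["emp_id", "name"])
for level in (InternalSchema(), ConceptualSchema(), payroll_view):
    print(level)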
An interesting special kind of schemas was recently introduced
by Google in 2012. It is called a knowledge graph (Google: Knowl-
edge Graph, 2012; Kohs, 2014). It is organized around objects called
entities, which include individuals, societies, places, events, organiza-
tions, countries, sports teams, books, works of art, movies and so on,
with facts connected to them and relations between these different
objects. This structure is growing very fast: in May 2012, it included
3.5 billion facts connected to 500 million entities. By December of
that year, it had grown to include 570 million entities with 18 billion
facts connected to them.
The knowledge management system, also called Knowledge Graph,
organizes and manages this gigantic knowledge schema — a labeled
graph of knowledge — collecting and merging information about
entities from many data sources. Based on this knowledge schema,
Knowledge Graph provides structured and detailed information
about the topic in addition to a list of links to other sites.
Now other Internet companies, such as Yahoo or Diffbot, are
developing their own knowledge graphs.
It is interesting to know that the term knowledge graph appeared
in computer science much earlier. In 1982, Hoede and Stokman
started building a theory of knowledge graphs to use it for extract-
ing knowledge from medical and sociological texts and building
corresponding expert systems (cf., (Zhang, 2002)). Later several
researchers in the Netherlands continued to develop and apply this
theory (Hoede and Willems, 1989; Smit, 1991; van den Berg, 1993;
Zhang, 2002; Wang et al., 2010).
In this context, a knowledge graph is also a schema with a variable called a token, which is a node in a knowledge graph and denotes a perception of an individual from the real world. Note that
according to the Existential Triad, the real world consists of three
components: the physical world, the mental world and the structural
world (Burgin, 2012). Perceptions of an individual from the mental
world are usually called concepts or conceptions.
Perceptions have different types, which are constants represented


by marks and are also nodes in knowledge graphs. Tokens and marks are connected by links (edges or arcs) representing various relations. Here is an example of a knowledge graph, in which the middle node is a token:

Rose → token ← cat

Here Rose and cat are marks and Rose is the name of a cat.
In this example, relations are represented by arrows (directed
arcs). However, according to the theory of knowledge graphs, a rela-
tionship between two concepts a and b is a graph in which both a and
b occur. It gives an interesting example of a named set A = (a, f, b),
in which the naming relation f is a graph (cf., Appendix).
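To make the token–mark structure concrete, the example above can be encoded as a small labeled graph, for instance in Python; the relation labels "name" and "instance-of" used below are illustrative assumptions rather than part of the original knowledge graph formalism.

# A sketch of the token-mark example as a labeled graph; the relation
# labels "name" and "instance-of" are illustrative assumptions.
token = object()                       # the token: a perceived individual
edges = [
    ("Rose", "name", token),           # the mark "Rose" names the token
    ("cat", "instance-of", token),     # the mark "cat" gives its type
]
for mark, relation, _ in edges:
    print(f"{mark} --{relation}--> token")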
It is demonstrated that the mathematical schema theory (Burgin,
2005; 2006; 2010a) encompasses all types of schemas used in pro-
gramming, database theory, and computer science, as well as on the
Internet and real-world databases.
A notion of a schema has been also used in mathematical logic,
metamathematics, and set theory. In 1927, John von Neumann (orig-
inally, János Lajos Margittai von Neumann) (1903–1957) introduced
the concept of an axiom schema. It has become very useful in
axiomatic set theories (for instance, the axiom of subsets is, according
to the conceptions of Thoralf Skolem (1887–1963), Wilhelm Friedrich
Ackermann (1896–1962), Willard Van Orman Quine (1908–2000) and
some other logicians, an axiom schema) and other axiomatic math-
ematical theories (Fraenkel and Bar-Hillel, 1958). Logicians studied
axiomatizability by a schema in the context of general formal theo-
ries (Vaught, 1967). In addition to axiom schemas, schemas of infer-
ence (e.g., syllogism schemas) have been also studied in mathematical
logic (cf., for example, (Fraenkel and Bar-Hillel, 1958)). Actually, syl-
logisms introduced by Aristotle, as well as deduction rules of modern
logic are schemas for logical inference and mathematical proofs.
Another mathematical field where the concept of schema is
used is category theory. This concept was introduced by Alexan-
der Grothendieck (1928–2014) in a form equivalent to a multigraph
and later generalized to the form of a small category (Grothendieck,
1957). From categories, the concept of a schema came to algebraic


geometry, where now it plays an essential role.
In contrast to logic, mathematics and computer science, for a
long time, exploration and utilization of mental schemas, such as
interaction schemas, has yielded no efficient formalism.
The first step to formalization of mental schemas in general and
interaction schemas, in particular, was made by creation of the RS
(Robot Schema) language (Lyons, 1986; Lyons and Arbib, 1989) and
NSL (Neural Simulation Language) (Weitzenfeld, 1989; Weitzenfeld
et al., 2002). RS is a language designed to facilitate sensory-based
task-level robot programming. RS uses port automata (Arbib et al.,
1983) to provide semantics of schemas. NSL was developed to aid
the exploration of neural network simulations through interactive
computer graphics. Arbib and Ehrig (1990) made two first attempts
at providing a rapprochement between a methodology for parallel
and distributed computation in the context of brain theory and per-
ceptual robotics based on RS-schemas and an algebraic category the-
ory of the specification of modules and their interconnections devel-
oped in (Ehrig and Mahr, 1985; 1990). However, as Arbib (2005)
writes: “It must be confessed that [that work] was more a program
for research than a presentation of results, and that research remains
to be done”.
The first resourceful formalization of all-purpose schemas in gen-
eral and mental schemas, in particular, was achieved in mathemat-
ical schema theory developed by Burgin (2005; 2006; 2010a). This
theory encompasses all types of schemas used in psychology, sociol-
ogy, logic, meta-mathematics, set theory, programming, and many
other areas. Here we present elements of mathematical schema the-
ory, which, in particular, enables us to represent not only exter-
nal features of interaction schemas and their functioning but also
essential structural peculiarities of interaction schemas and their
assemblages.
Schemas can have ports, which are specific schema elements. They belong (are assigned) to schema nodes, and through them information/data come into the schema (input ports or inlets) and are sent outside the schema (output ports or outlets). Thus, as before, any system P of ports is the union of its two disjunctive subsets P = P in ∪ P out, where P in consists of all inlets from P and P out consists of all outlets from P. If there are ports that are both inlets and outlets, we compose such ports from pairs consisting of an input port and an output port.
To formalize schemas, we first consider the elements from which schemas are built. There are three types of schema elements:
nodes or vertices, ports, and ties or edges. Elements of all types belong
to three classes:
1. Object/node, port, and connection/edge constants.
2. Object/node, port, and connection/edge variables.
3. Objects, ports, and connections with variables.
In formal schemas, variables are represented by their names in a
conventional manner (cf., Examples 5.6.1–5.6.4). In informal schemas,
variables are represented by their descriptions or specifications in the
form of a text (cf., Example 5.6.3), picture, text with pictures, etc.

Example 5.6.5. The symbol T can be used as an automaton vari-


able the range of which is the class of Turing machines. The expres-
sion NN can be used as an automaton variable the range of which
is the class of neural networks. The symbol P can be used as an
automaton variable the range of which is the class of port automata.
Thus, variables T for Turing machines, A for finite automata, N
for neural networks, etc. in the schema from Example 5.6.10 are
automaton/node variables.

Example 5.6.6. Information connections denoted by solid lines and


process connections denoted by dashed lines in the schema from
Example 5.6.10 are connection/edge variables.
It is also possible to use different connection variables for links
implemented on physical media, such as coaxial cable, twisted pair,
or optical fiber.

Example 5.6.7. The expression T [with x tapes] can be used as a


denotation for an automaton with the variable x the range of which
is the number of Turing machines tapes.
Example 5.6.8. The expression c [with bandwidth x] can be used


as a denotation for a connection/link with the variable x the range
of which is the bandwidth (throughput) of the link.

Remark 5.6.1. Each variable x is determined by its name x and


range Rg x. Types of ranges determine types of variables. For
instance, a variable whose range encompasses some class of neural
networks has the neural network type.

Remark 5.6.2. In the general case, variables in a schema form not a set but a multiset (cf., for example, Knuth, 1997) because the same
variable x may be assigned to different nodes, links or ports.

In addition to variables, we need variable functions. A variable


function takes values in variables. For instance, a linear real func-
tion f is a variable function as it has the form f (x) = ax + b
where a and b are arbitrary real numbers. Another example of a
variable function is the function that takes any value xn for a given
argument x.
Variable functions can be of different types:
• fuzzy functions in the sense of fuzzy set theory when values of the
function have estimates, e.g., to what extent this value is correct,
true or exact (Zimmermann, 1991);
• non-deterministic functions when values of the function are not
uniquely determined by the argument;
• probabilistic functions when values of the function have probabilities showing, e.g., to what extent this value is correct, true or exact.
The wave function in quantum mechanics is an example of a probabilistic function.
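For instance, the linear variable function mentioned above can be sketched in Python as a function factory: choosing concrete values for the parameters a and b turns the variable function into an ordinary function.

# A sketch of a variable function: f(x) = a*x + b, where a and b are free
# parameters; fixing them yields an ordinary (constant) function.
def linear_family(a: float, b: float):
    def f(x: float) -> float:
        return a * x + b
    return f

f1 = linear_family(2.0, 1.0)    # one member of the family
f2 = linear_family(-3.0, 0.5)   # another member
print(f1(4.0), f2(4.0))         # 9.0 -11.5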

Remark 5.6.3. There is one-to-one correspondence between non-


deterministic functions and set-valued functions, in which values are
some sets.

Remark 5.6.4. There are fuzzy functions with fuzzy domain and/or
range. However, here we do not consider such functions.
Remark 5.6.5. There is one-to-one correspondence between fuzzy


functions in the above sense and fuzzy-set-valued functions, in which
values are some fuzzy sets.

All these structures make it possible to define basic schemas.

Definition 5.6.1. A basic schema R is the following system that


consists of two sets, two multisets, and one mapping:
R = (AR, VNR, CR, VCR, cR).
Here:
The set AR is the set of all object names (node constants) from R;
the multiset VNR consists of all object variables from R;
the set CR is the set of all connections/links (link constants) from R;
the multiset VCR consists of all link variables from R;
and
cR : CR ∪ VCR → ((AR ∪ VNR) × (AR ∪ VNR)) ∪ (A′R ∪ VN′R) ∪ (A″R ∪ VN″R) is a (variable) function, called the node-link adjacency function, that assigns connections to nodes, where A′R and A″R are disjunctive copies of AR, while VN′R and VN″R are disjunctive copies of VNR.
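The following Python sketch mirrors Definition 5.6.1 under simplifying assumptions: multisets are modeled as lists, variables as name–range pairs, and the node-link adjacency function as a dictionary; the concrete names used below are illustrative only.

# A simplified sketch of a basic schema R = (AR, VNR, CR, VCR, cR);
# the names "S1", "T", "NN", "c1", "c2" are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class BasicSchema:
    node_constants: set      # AR: object names
    node_variables: list     # VNR: multiset of (name, range) pairs
    link_constants: set      # CR: connections/links
    link_variables: list     # VCR: multiset of link variables
    adjacency: dict          # cR: link -> (node, node) or a single node

R = BasicSchema(
    node_constants={"S1"},
    node_variables=[("T", "Turing machines"), ("NN", "neural networks")],
    link_constants={"c1", "c2"},
    link_variables=[],
    adjacency={"c1": ("T", "S1"), "c2": ("S1", "NN")},
)
print(R.adjacency["c1"])     # ('T', 'S1')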
In some cases, we need more information about schemas. A spe-
cific kind of such information is related to ports of the nodes. Ports
are used to provide necessary connections between nodes inside the
schema and between the schema and other systems. In this case, we
consider port schemas.

Definition 5.6.2. A port schema B is the following system that


consists of three sets, three multisets, and three mappings
B = (AB, VNB, PB, VPB, CB, VCB, pIB, cB, pEB).

Here:
The set AB is the set of all object names (node constants) from B used as nodes of the schema; the multiset VNB consists of all object variables from B, also used as nodes of the schema; the set CB is the set of all connections/links (link constants) from B; the multiset VCB consists of all link variables from B; the set PB = PIB ∪ PEB (with PIB ∩ PEB = ∅) is the set of all ports of B, where PIB is the set of all ports (called internal ports) of the nodes from AB and PEB is the set of external ports of B, which are used for interaction of B with different external systems; the multiset VPB consists of all port variables from B and is divided into two disjunctive submultisets VPBin, which consists of all variable inlets from B, and VPBout, which consists of all variable outlets from B; pIB : PIB ∪ VPB → AB ∪ VNB is a (variable) total function, called the internal port assignment function, that assigns ports to nodes; cB : CB ∪ VCB → ((PIBout ∪ VPBout) × (PIBin ∪ VPBin)) ∪ (P′IBin ∪ VP′Bin) ∪ (P′IBout ∪ VP′Bout) is a (variable) function, called the port-link adjacency function, that assigns connections to ports, where P′IBin, P′IBout, VP′Bin and VP′Bout are disjunctive copies of PIBin, PIBout, VPBin and VPBout, correspondingly; and pEB : PEB ∪ VPB → AB ∪ PIB ∪ CB ∪ VNB ∪ VPB ∪ VCB is a (variable) function, called the external port assignment function, that assigns ports to arbitrary elements from B, e.g., ports can be assigned to links or to other ports.
However, in what follows, we assume that all studied port schemas
do not have external nodes.
Note that totality of the internal port assignment function pIB
means that each internal port is assigned to some node.
Usually, basic schemas are used when the modeling scale is big,
i.e., at the coarse-grain level, while port schemas are used when the
modeling scale is small and we need a fine-grain model.
Schemas without ports, i.e., basic schemas, give us the first
approximation to cognitive structures, while schemas with ports,
i.e., port schemas, give the second (more exact) approximation. In
some cases, it is sufficient to use schemas without ports, while in
other situations to build an adequate, flexible and efficient model, we
need schemas with ports. For instance, interaction schemas (Arbib,
1985), schemas of programs (cf., Garland and Luckham, 1969; Dennis
et al., 1974; Fischer, 1993), or flow-charts (cf., Burgin, 1976; 1985;
1996) do not traditionally have ports. Even schemas of computer
hardware are usually presented without ports (Heuring and Jordan,
1997).
Definition 5.6.3. Internal ports of a port schema B to which


no links are attached are called open or free. External ports of a
port schema B to which no links or nodes are attached are called
free.
External ports of a port schema B, being open when the schema
is considered by itself, are used for connecting B to external systems.

Remark 5.6.6. It is possible to consider two representations of


schemas: planar or graphical and linear or symbolic. To achieve better
comprehension, schemas are usually represented in a graphical form
as in Figures 5.21 and 5.22.

Example 5.6.5. A basic schema of a grid automaton. In the schema


from Figure 5.25, variables form the multiset, which contains: two
variables Tm, one variable NN, two variables RAM, five variables
FA, one variable CA, six variables m, and one variable GA. Here is
the semantics of these variables:

Tm is a variable the range of which is the class of all Turing machines;


RAM is a variable the range of which is the class of all random access
machines;
S is a variable the range of which is the class of all servers;
m is a variable the range of which is the class of all modems;
NN is a variable the range of which is the class of all neural networks;
FA is a variable the range of which is the class of all finite automata;
CA is a variable the range of which is the class of all cellular
automata;
GA is a variable the range of which is the class of all grid automata.

Example 5.6.11. A formal basic schema that formalizes the inter-


action schema from Figure 5.23 is given in Figure 5.26. This schema
has connections/links of two types: links for activation of nodes
and for transfer of data. Such a formalization of the schema from
Figure 5.23 allows us to better study its properties and transforma-
tions. It demonstrates that this schema has realizations not only by
the brain neural structures but also by computer programs.
Figure 5.25. A schema of a grid automaton GA

Here is the list of variables and their meaning in this schema:

X1 is a variable the range of which is the class of all schemas (algo-


rithms or neural assemblages) for visual location;
X2 is a variable the range of which is the class of all schemas (algo-
rithms or neural assemblages) for size recognition;
Figure 5.26. Dashed lines represent activation of signals, while solid lines represent transfer of data

X3 is a variable the range of which is the class of all schemas (algo-


rithms or neural assemblages) for orientation recognition;
X4 is a variable the range of which is the class of all schemas (algo-
rithms or neural assemblages) for fast phase movement;
X5 is a variable the range of which is the class of all schemas (algo-
rithms or neural assemblages) for hand preshape;
X6 is a variable the range of which is the class of all schemas (algo-
rithms or neural assemblages) for hand rotation;
X7 is a variable the range of which is the class of all schemas (algo-
rithms or neural assemblages) for slow phase movement;
X8 is a variable the range of which is the class of all schemas (algo-
rithms or neural assemblages) for actual grasp.

We discern three classes of constants and variables in schemas:

• static constants and variables;


• function constants and variables;
• process constants and variables.

Static constants are representations (names) of objects that do


not represent changes.
Static variables are variables the range of which consists of repre-


sentations (names) of objects that do not represent changes.
Function constants are representations (names) of functions and
relations.
Function variables are variables the range of which consists of
representations (names) of functions and relations.
Process constants are representations (names) of actions, events,
and processes.
Process variables are variables the range of which consists of rep-
resentations (names) of actions, events, and processes.
With respect to this classification, there are:

• static schemas with only static constants and variables;


• function schemas that have function and, possibly, static constants and variables;
• process schemas that have process constants and variables.

Schemas that have variables are called variable schemas.


There is a natural connection between basic schemas and port schemas. The following algorithm shows how to get basic schemas
from port schemas.
Let us consider a port schema B = (AB, VNB, PB, VPB, CB, VCB, pIB, cB, pEB) and build a basic schema DB. The internal port assignment function pIB and the port-link adjacency function cB determine the node-link adjacency function ncB of the basic schema DB in the following way. Let us take l ∈ CB, P̄IBin = PIBin ∪ VPBin, P̄IBout = PIBout ∪ VPBout, ĀB = AB ∪ VNB, Ā′B and Ā″B disjoint copies of ĀB, and pIB* = (pIB × pIB) ∗ pIB ∗ pIB : (P̄IBin × P̄IBout) ∪ P̄IBin ∪ P̄IBout → (ĀB × ĀB) ∪ Ā′B ∪ Ā″B. Here × is the product and ∗ is the coproduct of mappings in the sense of category theory (cf., for example, (Herrlich and Strecker, 1973)). Then ncB is a composition of the functions pIB and cB, namely, ncB(l) = pIB*(cB(l)).
The node-link adjacency function nc B determines a schema in
which links are adjusted directly to nodes, ignoring ports. Thus, it
is possible to exclude ports from the schema, obtaining a schema
without ports or a basic schema. Consequently, this algorithm gives
us a basic schema, which is denoted by DB, where DB = (AB, VNB, CB, VCB, ncB), and is called the projection of the port schema B.
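Under the same simplified dictionary representation as in the earlier sketches, the projection can be computed by composing the two functions link by link; the port and node names below are illustrative.

# A sketch of the projection of a port schema onto a basic schema: the
# node-link adjacency ncB is the composition of the port-link adjacency cB
# with the internal port assignment pIB.  All names are illustrative.
port_of_node = {"p1": "T", "p2": "S1", "p3": "S1", "p4": "NN"}   # pIB
link_to_ports = {"c1": ("p1", "p2"), "c2": ("p3", "p4")}         # cB

def project(link_to_ports: dict, port_of_node: dict) -> dict:
    """Return the node-link adjacency ncB of the basic schema DB."""
    nc = {}
    for link, image in link_to_ports.items():
        if isinstance(image, tuple):                  # link between two ports
            nc[link] = tuple(port_of_node[p] for p in image)
        else:                                         # open link attached to one port
            nc[link] = port_of_node[image]
    return nc

print(project(link_to_ports, port_of_node))   # {'c1': ('T', 'S1'), 'c2': ('S1', 'NN')}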
At the same time, it is possible to consider any basic schema as
a special kind of port schemas, in which any node has exactly one
port and all connections go through this port.

Example 5.6.7. Let us consider the basic schema (adapted from


(Burgin, 2005; Ch. 2)) of a Turing machine AT,w that reduces the
problem of deciding whether a given Turing machine has some non-
trivial property P to the halting problem for Turing machines (cf.,
Figure 5.27). This example shows that it is possible to build schemas
of algorithms and automata.
Here is the list of variables in this schema and their properties:

T denotes some Turing machine from a given class K .


M denotes a Turing machine that does not have the property P .
G denotes a finite automaton with two inputs.

One input comes from the outside through M , while the second
input of G is the output of the machine T . The automaton G can be
in two states: closed and open. Initially G is closed until it receives
some input from T , which makes it open. When G is closed, it gives
no output. When G is open, it gives the word that comes to G from
M as its output. The structure of the Turing machine AT,w is shown in Figure 5.27.
Figure 5.27. The basic schema of the Turing machine AT,w

When an informal schema, such as an interaction schema or flow-chart of a program, is formalized, its formal representation is a mathematical model of this schema. This model allows one to study,


build, and apply schemas utilizing powerful tools of mathematics.
The procedure of formalization is rather simple. To get a formal rep-
resentation of an interaction schema, we denote descriptions by vari-
ables, properly assign ranges of these variables, and make relevant
substitutions in the schema.

Remark 5.6.7. It is possible to consider schemas with zero vari-


ables.

Remark 5.6.8. If an automaton (system) is given by its specifi-


cation in the sense of Blum, Ehrig, and Parisi-Presicce (1987) and
Ehrig and Mahr (1985; 1990), then components and their composi-
tions become a special kind of schemas. This allows a more rigorous
development of a component-based technology similar to one devel-
oped by these same authors.

A port schema P is described by three grid characteristics, three


node characteristics, and three edge characteristics.
The grid characteristics are:

1. The space organization or structure of the schema P . This space


structure may be in physical space, reflecting where the corre-
sponding information processing systems (nodes) are situated, or
it may be a mathematical structure defined by the geometry of
node relations. Besides, we consider three levels of space structures: local, regional, and global space structures of a schema. Sometimes these structures are the same, while in other cases they are different. The space structure of a schema can be static or dynamic. The dynamic space structure can be of two kinds: persistent or flexible. However, the space structure of a schema may be variable.
Inherent structures of the schema are represented by its grid and connection grid (cf., Definitions 5.6.16 and 5.6.17). Due to a possible
non-determinism in the port assignment functions and port-link
adjacency function, there is a possibility of non-determinism in
inherent structures of the schema.
2. The topology of the schema P is a complex structure that consists


of node topology determined by the type of the node neighborhood
and port topology determined by the type of the port neighbor-
hood. A neighborhood of a node (port) is the set of those nodes
(ports) with which this node directly interacts (is directly con-
nected). As the port assignment functions and port-link adjacency
function may be non-deterministic, the topology of the schema P
also may be non-deterministic. In particular, a schema may have
fuzzy or probabilistic topology.
For deterministic schemas, we have three main types of topology:

— A uniform topology, in which neighborhoods of all nodes of the schema have the same structure.
— A regular topology, in which the structure of different node
neighborhoods is subjected to some regularity.
— An irregular topology where there is no regularity in the struc-
ture of different node neighborhoods.

An example of a regular but non-uniform schema topology is the


schema of a cellular automaton in the hyperbolic plane or on a
Fibonacci tree (Margenstern, 2002). In this schema, nodes are
variables ranging over finite automata, while all edges/links are
fixed.
Non-deterministic schemas can also be regular and irregular.
3. The dynamics of the schema P determines by what rules its
nodes exchange information with each other and with a tenta-
tive environment of P and in particular, how nodes use ports
and corresponding links. This dynamics is usually an algorith-
mic function that depends on values of its variables because some
of nodes and/or links are variables and there is a permissible
non-determinism in the port assignment functions and port-link
adjacency function.

Interaction with the environment separates two classes of schemas:


open schemas allow interaction (accepting and transmitting infor-
mation) with the environment through definite connections, while
closed schemas do not have means for such interaction. For instance,
traditional schemas representing concepts and logical propositions


are closed.
Existence of free ports makes a closed schema potentially open as
it is possible to attach connections to these ports.

The node characteristics are:

1. The type and structure of the node, including structures of its


ports. There are different levels of node typology. On the highest
level, there are two types of nodes: an object node and a variable
node. Each of these types has subtypes, e.g., a neural network,
Turing machine or finite state machine. These subtypes form the
next level of the type hierarchy. Subtypes of these subtypes (e.g.,
a Turing machine with one linear tape) form one more level of the
type hierarchy and so on.
2. The external dynamics of the node determines interactions of this
node. According to this characteristic, there are three types of
nodes: accepting nodes that only accept or reject their input;
generating nodes that only produce some output; and transduc-
ing nodes that both accept some input and produce some output.
Note that nodes with the same external dynamics can have dif-
ferent dynamics when they work in a grid. For instance, let us
take two nodes: a transducing node B and a generating node
B. Initially they have different dynamics. However, as parts of
a schema P , they both work as generating nodes because the
schema dynamics prescribes this. For nodes of the schema that
are variables, we have not a definite dynamics but a type of
dynamics.
Primitive ports do not change node dynamics. However, com-
pound ports are able to influence processes in the whole schema
and in the node to which they belong. For instance, a compound
port can be an object, e.g., an automaton, or even a schema.
3. The internal dynamics of the node determines what processes go
inside this node. For nodes of the schema that are variables, we
have not a definite dynamics but a type of dynamics. For instance,
it may be given that the node with number 3 in a schema computes
function f (x). Such nodes are usually used in program schemas
(which are traditionally called program schemata (cf., for exam-


ple, (Fischer, 1993))).
The edge characteristics are:
1. The external structure of the edge. According to this character-
istic, there are three types of edges: a closed edge (a link or link
variable) both sides of which are connected to ports of the schema;
an ingoing edge in which only the end side is connected to a port of
the port schema; and an outgoing edge in which only the beginning
side is connected to a port of the port schema.
2. Properties and the internal structure of the edge. There are dif-
ferent levels of edge typology. On the highest level, there are two
types: constant and variable links. Each of these types has sub-
types that form the next level of the type hierarchy. According to
the internal structure, there are three subtypes of edges: a sim-
ple channel that only transmits data/information; a channel with
filtering that separates a signal from noise; and a channel with
data correction. Subtypes of these subtypes form the next level
and so on.
3. The dynamics of the edge determines edge functioning. For
instance, two important dynamic characteristics of an edge are
bandwidth as the number of bits (data units) per second trans-
mitted on the edge and throughput as the measured performance
of the edge. In schemas, these characteristics may be variable.
Properties of links/edges separate all links into three standard
classes:
1. Information link/connection is a channel for processed data
transmission.
2. Control link/connection is a channel for instruction trans-
mission.
3. Process link/connection realizes control transfer and deter-
mines how the process goes by initiation of a node in the grid
by another node (other nodes) in the grid.
Process links determine what to do, control links are used to
instruct how to work, and information links supply automata with
data in a process of schema or its instantiation functioning.
There are different types and kinds of schema variables.


The dynamic typology discerns three types of basic variables:
1. System variables.
2. Function variables.
3. Process variables.
The schema from Example 5.6.5 uses system variables (Ti for
Turing machines, Ai for finite automata, N for neural networks, etc.).
The schema from Example 5.6.6 uses function variables, e.g., X2
is a variable for such a function as size recognition.
The scaling classification discerns three types of variables:
1. Individual variables that are used in one node, port or link from
the schema.
2. Local variables that are used in a group of nodes, ports or links
from the schema.
3. Global variables that are used for the whole schema.
Difference between constants and variables in schemas results in
existence of special classes of schemas:
— Basic/port schemas with constant nodes;
— Basic/port schemas with constant links;
— Port schemas with constant ports;
— Port schemas with constant port assignment;
— Basic/port schemas with constant node-link adjacency;
— Port schemas with dynamic port assignment;
— Basic/port schemas with dynamic node-link adjacency;
— Port schemas with deterministic port assignment;
— Basic/port schemas with deterministic node-link adjacency.
Let us consider operations on schemas. Utilization of different
schemas usually involves various operations. There are three basic
vertical unary operations in the hierarchy of both basic and port
schemas: abstraction, concretization, and determination.

Definition 5.6.4. Changing a variable to a constant from the range


of this variable is called an interpretation of this variable.
Definition 5.6.5. An operation of changing (interpreting) some of


the variables in a schema R to constants is called a concretization
operation Con applied to R, while the result Con R of this operation
is called a concretization of R.
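In the simplified dictionary representation used in the earlier sketches, concretization can be sketched as follows: an interpretation picks a constant from the range of each chosen variable and substitutes it throughout the schema; the names and ranges below are illustrative.

# A sketch of concretization: some variables are interpreted, i.e., replaced
# by constants taken from their ranges.  Names and ranges are illustrative.
ranges = {"T": {"Tm_1", "Tm_2"}, "NN": {"Net_A"}}        # ranges of the variables
adjacency = {"c1": ("T", "S1"), "c2": ("S1", "NN")}      # links of the schema R

def concretize(adjacency: dict, interpretation: dict, ranges: dict) -> dict:
    """Replace the interpreted variables by constants, checking the ranges."""
    for var, const in interpretation.items():
        assert const in ranges[var], f"{const} is not in the range of {var}"
    def subst(x):
        return interpretation.get(x, x)
    return {link: tuple(subst(end) for end in ends) for link, ends in adjacency.items()}

print(concretize(adjacency, {"T": "Tm_1"}, ranges))
# {'c1': ('Tm_1', 'S1'), 'c2': ('S1', 'NN')} -- a concretization Con R

Note that the links and their adjacencies are unchanged by the substitution, which illustrates Lemma 5.6.1 below.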

Example 5.6.8. An instantiation of a schema in the sense of (Arbib,


1989) is its maximal concretization.

Example 5.6.9. The grid automaton from Figure 5.23 is a con-


cretization of the schema from Figure 5.26.
Definitions imply the following result.

Lemma 5.6.1. Concretization of a schema preserves the schema


topology and structure.

Definition 5.6.6. If a concretization Con R of a schema R is a


grid automaton, then Con R is called a realization of R, while the
corresponding operation is called a realization operation.

Remark 5.6.9. We have noted that a schema may involve the


cooperative activity of multiple brain regions. In particular, then,
a schema becomes a “mode of activity” for grid automata and —
depending on input and context — a grid automaton can support
many different schemas as their realization.

Proposition 5.6.1. If P is a realization of a schema R, and R


is a concretization of a schema Q, then P is a realization of the
schema Q.

Indeed, P is obtained from R by changing all variables from R to


constants and R is obtained from Q by changing some variables from
Q to constants. Thus, P does not have variables and so it is a grid
automaton obtained by changing all variables from Q to constants.
Assigning to a schema Q its realization RQ is an operation, which is also called realization.

Corollary 5.6.1. Realization of a schema is an idempotent


operation.
Definition 5.6.7. A realization Rea R of a schema R becomes an


instantiation of R when Rea R starts functioning.
Abstraction is an operation opposite to concretization.

Definition 5.6.8. An operation of changing some of the constants


in a schema P to variables is called an abstraction operation Abs applied to P, while the result Abs P of this operation is called an abstraction of P.

Remark 5.6.10. Abstraction and concretization are operations


with a set of tentative results in contrast to conventional arithmeti-
cal and algebraic operations such as addition or multiplication, which
give only one result (if any).

Example 5.6.10. The schema from Figure 5.26 is an abstraction of


the grid automaton from Figure 5.23.
In some sense, operations of abstraction and concretization of
schemas are reciprocal (inverse) with respect to one another. Namely,
they have the following property.

Lemma 5.6.2. (a) If a schema P is obtained from a schema R by


abstraction, then it is possible to get schema R by concretization of P .
(b) If a schema R is obtained from a schema P by concretization,
then it is possible to get schema P by abstraction of R.

As abstractions and concretizations are transformations of


schemas, it is natural to introduce their composition as consecutive
performance of corresponding transformations. Definitions imply the
following result.

Proposition 5.6.2. (a) Composition of schema concretizations is a


schema concretization.
(b) Composition of schema abstractions is a schema abstraction.

Definition 5.6.9. (a) A schema P is (strongly) equivalent to a


schema R if they have the same realizations (concretizations).
(b) A schema P is (strongly) equivalent to a schema R with respect
to a class A of grid automata if they have the same realizations
(concretizations) in A.
For instance, taking an interaction schema, we are interested in


its realization in the class A of neural networks or even more exactly,
in the class B of neural ensembles in the brain.
Remark 5.6.11. There are other interesting equivalencies of
schemas.
Proposition 5.6.3. Two schemas are strongly equivalent if and only
if it is possible to obtain one from the other by renaming the variables
in the first schema.
Proof is left as an exercise.
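In the simplified representation used in the earlier sketches, the criterion of Proposition 5.6.3 can be tested directly for very small schemas by searching for a range-preserving renaming of variables; the following brute-force sketch is purely illustrative.

# An illustrative brute-force check of strong equivalence for tiny schemas:
# look for a bijective, range-preserving renaming of variables that carries
# one node-link adjacency onto the other.
from itertools import permutations

def strongly_equivalent(adj1: dict, vars1: dict, adj2: dict, vars2: dict) -> bool:
    names1, names2 = list(vars1), list(vars2)
    if len(names1) != len(names2):
        return False
    for perm in permutations(names2):
        renaming = dict(zip(names1, perm))
        if any(vars1[v] != vars2[renaming[v]] for v in names1):
            continue                                  # ranges must be preserved
        image = {l: tuple(renaming.get(e, e) for e in ends) for l, ends in adj1.items()}
        if image == adj2:
            return True
    return False

adj1, vars1 = {"c1": ("X", "S1")}, {"X": "Turing machines"}
adj2, vars2 = {"c1": ("Y", "S1")}, {"Y": "Turing machines"}
print(strongly_equivalent(adj1, vars1, adj2, vars2))   # True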
Operations of abstraction and concretization define corresponding
relations in the set of all schemas.
Definition 5.6.10. If a schema P is obtained from a schema R by abstraction (or a schema R is obtained from a schema P by concretization), then P is called more general than R and R is called more concrete than P.
It is denoted by P a> R and R >c P, respectively.
Example 5.6.11. The schema from Figure 5.26 is more abstract
than the schema from Figure 5.23.
We recall that a strict partial order on a set X is a transitive and antisymmetric relation.

Lemma 5.6.3. Both >c and a> are strict partial orders.

Proof. By the definition, a strict partial order is a transitive antisymmetric relation (cf., Appendix). Thus, we have to test these properties.
1. Transitivity. By Proposition 5.6.2.a, the relation >c is transitive and by Proposition 5.6.2.b, the relation a> is transitive.
2. Antisymmetry.
(a) If P a> R, then the schema P is obtained from the schema R by abstraction. By definition, this means that the number nvP of variables in P is larger than the number nvR of variables in R. Because whole numbers are linearly ordered, it is impossible to have the relation R a> P, which involves nvP < nvR. As P and R are arbitrary schemas, the relation a> is antisymmetric.
(b) If P >c R, then the schema P is obtained from the schema R by concretization. By definition, this means that the number nvP of variables in P is less than the number nvR of variables in R. Because whole numbers are linearly ordered, it is impossible to have the relation R >c P, which involves nvP > nvR. As P and R are arbitrary schemas, the relation >c is antisymmetric.

Lemma is proved.
Concretization is a special kind of a more general operation on
schemas.

Definition 5.6.11. An operation of decreasing (delimiting) the


range of a variable function f is called a determination operation
Det applied to f , while the result Det f of this operation is called a
determination of f .
In a similar way, it is possible to change port assignment and
port-link adjacency functions.

Definition 5.6.12. An operation of decreasing (delimiting) the


range of the variable port assignment functions and/or port-link adja-
cency function of a schema R is called a determination operation Det
applied to R, while the result Det R of this operation is called a
determination of R.
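As a simplified sketch of the idea behind Definitions 5.6.11 and 5.6.12, determination only narrows ranges; the variable name and range below are illustrative assumptions.

# A sketch of determination: the range of a variable is delimited to a subset.
ranges = {"T": {"Tm_1", "Tm_2", "Tm_3"}}

def determine(ranges: dict, variable: str, subrange: set) -> dict:
    assert subrange <= ranges[variable], "determination may only shrink the range"
    return {**ranges, variable: subrange}

print(determine(ranges, "T", {"Tm_1", "Tm_2"}))   # the range of T is delimited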
Determination operation defines specific relations between
schemas.

Definition 5.6.13. If a schema P is obtained from a schema R by


determination, then P is called more determined than R.
It is denoted by P det≥ R.

Lemma 5.6.4. The relation det≥ is a partial order.

Proof is similar to the proof of Lemma 5.6.3.


It is interesting to study properties of this partial order relation
for some well-known classes of schemas, e.g., for program schemata.
To represent structures of schemas, we use multigraphs, directed


multigraphs, partially directed multigraphs, generalized multigraphs,
generalized directed multigraphs, and generalized partially directed
multigraphs. Schema grids can be not only conventional or stable
directed multigraphs and generalized directed multigraphs, but also
variable directed multigraphs and generalized directed multigraphs.

Definition 5.6.14. (a) A multigraph G has the following form:

G = (V, E, c).

Here V is the set of vertices or nodes of G; E is the set of edges


of G; the edge-node adjacency or incidence function c : E → V × V
may be constant or variable. This function assigns each edge to a
pair of vertices.
(b) An oriented or directed multigraph G has the following form:

G = (V, E, c).

Here V is the set of vertices or nodes of G; E is the set of edges of


G, each of which has the beginning and the end, i.e., they are arrows;
the edge-node adjacency or incidence function c : E → V × V may
be constant or variable. This function assigns each edge to a pair of
vertices so that the beginning of each edge is connected to the first
element in the corresponding pair of vertices and the end of the same
edge is connected to the second element in the same pair of vertices.
(c) If in the structure G = (V, E, c), some edges are arrows and
some are not, this structure is a partially oriented or partially directed
multigraph.
A multigraph is a graph when c is an injection (Berge, 1973).
When the adjacency function is variable, the multigraph (directed
multigraph or partially directed multigraph) is called variable.
It is possible to consider bi-directed edges but they play the same
role in graphs as edges that are not directed.
Usually multigraphs, directed multigraphs, partially directed
multigraphs, generalized multigraphs, generalized directed multi-
graphs, and generalized partially directed multigraphs are
represented in the geometrical (graphical) form. For instance, the
geometrical structure in Figure 5.28(a) is a graph, the geometrical


structure in Figure 5.28(b) is a multigraph, the geometrical struc-
ture in Figure 5.28(c) is a directed graph, the geometrical structure
in Figure 5.28(d) is a directed multigraph, the geometrical structure
in Figure 5.28(e) is a partially directed graph, and the geometrical
structure in Figure 5.28(f) is a partially directed multigraph.
Schemas of open systems demand more general constructions as
their characteristics.
Definition 5.6.15. (a) A generalized multigraph G has the following
form:
G = (V, E, c).
Here V is the set of vertices or nodes of G; E is the set of edges of
G; the edge-node adjacency or incidence function c : E → (V ×V ∪V ),
which assigns each edge either to a pair of vertices or to one vertex,
may be constant or variable. In the latter case, when the image c(e) of an edge e belongs to V, it means that e is connected to only one vertex c(e).
(b) A generalized oriented or directed multigraph G has the fol-
lowing form:
G = (V, E, c : E → (V × V ∪ Vb ∪ Ve )).
Here V is the set of vertices or nodes of G; E is the set of edges
(arrows) of G (with fixed beginnings and ends); Vb ≈ Ve ≈ V ; the
edge-node adjacency function c: E → (V ×V ∪Vb ∪Ve ), which assigns
each edge either to a pair of vertices or to one vertex, may be constant
or variable. In the latter case, when the image c(e) of an edge e
belongs to Vb , it means that e is connected to the vertex c(e) by
its beginning. When the image c(e) of an edge e belongs to Ve , it
means that e is connected to the vertex c(e) by its end. Edges that
are mapped to the set Vb ∪ Ve are called open.
(c) If in the structure G = (V, E, c) some edges are arrows and
some are not, this structure is a generalized partially oriented or
partially directed multigraph.
Figure 5.28. Examples of graphs (a), multigraphs (b), directed graphs (c), directed multigraphs (d), partially directed graphs (e), and partially directed multigraphs (f)

Note that the difference between graphs (multigraphs) and generalized graphs (generalized multigraphs) is that in graphs (multigraphs), all edges are connected to vertices by both sides, while in generalized graphs (generalized multigraphs) edges can be connected to vertices only by one side.
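The distinction can be captured in a small Python sketch under an obvious encoding: an edge of a generalized directed multigraph is assigned either to a pair of vertices or, when it is open, to a single vertex together with a flag saying by which side it is attached. The vertex and edge names are illustrative.

# A sketch of a generalized directed multigraph G = (V, E, c): each edge is
# mapped either to a pair (tail, head) of vertices or, if it is open, to a
# single vertex plus a flag ("beginning" or "end") saying how it is attached.
V = {"v1", "v2", "v3"}
E = {"e1", "e2", "e3"}
c = {
    "e1": ("v1", "v2"),         # ordinary directed edge from v1 to v2
    "e2": ("v2", "beginning"),  # open edge attached to v2 only by its beginning
    "e3": ("v3", "end"),        # open edge attached to v3 only by its end
}
assert set(c) == E              # c is defined on all edges
open_edges = [e for e, image in c.items() if image[1] in ("beginning", "end")]
print(open_edges)               # ['e2', 'e3']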
Graphs, multigraphs, generalized graphs and generalized multi-
graphs are used to represent inner structure of schemas.

Definition 5.6.16. The grid G(P ) of a (basic or port) schema P is


the (variable) generalized oriented multigraph that has exactly the
same vertices and edges as P, while its adjacency function cG(P) is ncP.
Note that the grid of a variable schema can be constant if in this
schema, there are only node variables. Edge variables in a schema
also do not make its grid variable if the domain of each edge variable
contains edges of a single type.

Example 5.6.12. Figure 5.29 gives the graphical form of the grid
G(P ) of the schema P from Example 5.6.9.
We can see that the grid G(P ) of a schema P is a partially directed
graph.

Example 5.6.13. Figure 5.30 gives the graphical form of the grid
G(R) of the schema R from Example 5.6.12.

Figure 5.29. The grid G(P) of a schema P
Figure 5.30. The grid G(R) of the schema R from Example 5.6.12

Any port schema has the same grid as its projection on the
corresponding basic schema.
Proposition 5.6.4. For any port schema P , we have G(P ) =
G(DP ) where DP is the basic schema built from P .
Grids of schemas allow one to characterize definite classes of
schemas.
Proposition 5.6.5. A schema B is closed if and only if its grid
G(B) satisfies the condition Im c ⊆ V × V , or in other words, the
grid G(B) of B is a conventional multigraph.
Proposition 5.6.6. A schema B is an acceptor only if it has exter-
nal input ports or/and its grid G(B) has edges connected by their
end, or Im c ∩ Ve ≠ ∅.
Proposition 5.6.7. A schema B is a transmitter only if it has exter-
nal output ports or/and its grid G(B) has edges connected by their
beginning, or Im c ∩ Vb ≠ ∅.
Proposition 5.6.8. A schema B is a transducer only if it has exter-
nal input and output ports or/and its grid G(B) has edges connected
by their beginning and edges connected by their end.
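Read together, Propositions 5.6.5–5.6.8 give a simple classification procedure. The sketch below applies it to the encoding of open edges used in the previous sketch; for brevity, external ports are ignored.

# A sketch of classifying a schema by its grid: closed if there are no open
# edges, an acceptor if some edge is attached only by its end, a transmitter
# if some edge is attached only by its beginning, a transducer if both occur.
# External ports are ignored in this simplified sketch.
def classify(c: dict) -> str:
    has_input = any(image[1] == "end" for image in c.values())
    has_output = any(image[1] == "beginning" for image in c.values())
    if has_input and has_output:
        return "transducer"
    if has_input:
        return "acceptor"
    if has_output:
        return "transmitter"
    return "closed"

print(classify({"e1": ("v1", "v2")}))                          # closed
print(classify({"e1": ("v1", "v2"), "e2": ("v1", "end")}))     # acceptor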
Definition 5.6.17. The connection grid CG(B) of a port schema


B is the (variable) generalized oriented multigraph whose nodes
bijectively correspond to the ports of B, while edges and the adja-
cency function cCG(B) are the same as in B.

Proposition 5.6.9. The grid G(B) of a port schema B with con-


stant port assignment is a homomorphic image of its connection grid
CG(B).

Indeed, by the definition of a schema, ports are uniquely assigned


to nodes, and by the definition of the grid G(B) of a schema B, the
adjacency function cG(B) of the grid G(B) is the composition of the
port assignment function pB and the adjacency function cB of the
schema B.

Proposition 5.6.10. For each schema, there is a maximal abstraction with respect to the relations g≥ and det≥.

Proof . It is possible to change object names, ports, and connections


with variables to variables with corresponding ranges so that the
new schema is equivalent to the initial one. Besides, if we have a
set of variables, we can change this set to one variable with the
corresponding range so that the new schema is equivalent to the
initial one.
Let R be a schema. We transform it in the following way. We
assign to each node of its connection grid CG(R) a variable the range
of which encompasses all possible automata. We assign to each port
of its connection grid CG(R) a variable the range of which encom-
passes all possible ports. We assign to each edge of G(R) another vari-
able the range of which encompasses all possible links. In addition,
we take as the port assignment functions and port-link adjacency
function non-deterministic functions that allow maximal flexibility
of assignments and adjustments, i.e., any port may be assigned to
any node or link from R in a permissible way and any link may be
adjacent to any node or a pair of nodes.
In such a way, we obtain a maximal abstraction of R.
Dynamics of schemas is represented not only by operations but


also by different kinds of homomorphisms.

Definition 5.6.18. A structural homomorphism f of a basic schema


P into a basic schema R is a mapping of nodes and connections of
P such that nodes of P are mapped into nodes of R, connections of
P are mapped into connections of R, and the node-link adjacency
function is preserved.
For port schemas, we have two kinds of structural homo-
morphisms.

Definition 5.6.19. A (weak) structural homomorphism f of a port schema P into a port schema R is a mapping of nodes and connections of P such that nodes of P are mapped into nodes of R, connections of P are mapped into connections of R, and the port assignment functions and port-link adjacency function (in the weak case, the node-link adjacency function) are preserved.
It is possible to consider a structural homomorphism f of a basic
schema P into a basic schema R as a pair of mappings: one of them
f(n) maps nodes of P into nodes of R and the other one f(e) maps
links of P into links of R. In addition, assignment functions and
adjacency relation are preserved.
A structural homomorphism f of a port schema P into a port
schema R uniquely corresponds to and determines the homomor-
phism f g : CG(P ) → CG(R) of the corresponding connection grids,
while a weak structural homomorphism f of a port schema P into
a port schema R uniquely corresponds to and determines the homo-
morphism f g : G(P ) → G(R) of the corresponding grids.
In a natural way, compositions of structural homomorphisms and
weak structural homomorphisms of port schemas are introduced as
conventional sequential composition of mappings.
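The following sketch, with illustrative names, treats a structural homomorphism as a pair of mappings, one on nodes and one on links, and composes two such homomorphisms sequentially, as described above.

```python
# A small sketch (illustrative names, not the book's notation) of a structural
# homomorphism of schemas as a pair of mappings and of sequential composition.

from dataclasses import dataclass

@dataclass
class StructHom:
    node_map: dict    # node of P -> node of R
    link_map: dict    # link of P -> link of R

def compose(g: StructHom, f: StructHom) -> StructHom:
    """Sequential composition g o f: first apply f, then g."""
    return StructHom(
        node_map={n: g.node_map[f.node_map[n]] for n in f.node_map},
        link_map={e: g.link_map[f.link_map[e]] for e in f.link_map},
    )

f = StructHom({"A": "B"}, {"e1": "e2"})
g = StructHom({"B": "C"}, {"e2": "e3"})
print(compose(g, f).node_map)   # {'A': 'C'}
```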
Definitions imply the following results.

Proposition 5.6.11. Any concretization Con P (abstraction Abs P) of the schema P defines a structural homomorphism fcon: Con P → P (fabs: Abs P → P).
Proposition 5.6.12. Composition of (weak) structural homomorphisms of schemas is a (weak) structural homomorphism of schemas.

In some situations, it is useful to have more restrictions on mapping of schemas.

Definition 5.6.20. A (weak) structural homomorphism f of a schema P into a schema R is called a (weak) typed homomorphism if the following conditions are satisfied:

(a) variables from P are mapped into variables and constants of the same type from R;
(b) constants from P are mapped into constants of the same type from R.
In a natural way, compositions of typed homomorphisms of
schemas are introduced as conventional sequential composition of
mappings.

Proposition 5.6.13. Composition of typed homomorphisms of schemas is a typed homomorphism of schemas.

Proposition 5.6.14. (a) Any concretization Con R of a schema R is defined by a VE-homomorphism c : R → Con R of schemas.
(b) Any abstraction Abs P of a schema P is defined by an inverse VE-homomorphism h : P → Abs P of schemas.

Proposition 5.6.15. The transformation of a schema R into the corresponding basic schema DR is a weak typed homomorphism of schemas.

Utilizing typed homomorphisms and structural homomorphisms, as well as Propositions 5.6.11 and 5.6.12, we build four categories of schemas: the category TSC in which objects are schemas and morphisms are their typed homomorphisms; the category GSC in which objects are schemas and morphisms are their structural homomorphisms; the category WTSC in which objects are schemas and morphisms are their weak typed homomorphisms; and the category WGSC in which objects are schemas and morphisms are their weak structural homomorphisms.

Proposition 5.6.16. TSC is a subcategory of GSC, while WTSC is a subcategory of WGSC.

Indeed, typed homomorphisms are a special kind of structural homomorphisms. Consequently, the category TSC contains all objects from the category GSC but only some of its morphisms. In a similar way, the category WTSC contains all objects from the category WGSC but only some of its morphisms.
Definitions imply the following result.

Proposition 5.6.17. WTSC is a quotient category of TSC, while WGSC is a quotient category of GSC.

It is possible to separate in these categories some special classes of morphisms useful in schema theory. Such morphisms represent formation and transformations of schemas.

Definition 5.6.21. (a) A (structural) homomorphism of schemas f : R → P is called a (structural) V-monomorphism [E-monomorphism] if images of any two vertices [links] from R do not coincide.
(b) A (structural) homomorphism of schemas f : R → P is called a (structural) VE-monomorphism if it is both a (structural) V-monomorphism and E-monomorphism.

Definition 5.6.22. (a) A (structural) homomorphism of schemas f : R → P is called a (structural) V-epimorphism [E-epimorphism] if any vertex [link] from P is an image of some vertex [link] from R.
(b) A (structural) homomorphism of schemas f : R → P is called a (structural) VE-epimorphism if it is both a (structural) V-epimorphism and E-epimorphism.

Definition 5.6.23. The image Im f of a (structural) homomorphism of schemas f : R → P is the largest subschema of P such that each of its vertices is the image of some vertex from R and each of its links is the image of some link from R.
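Under the same pair-of-mappings representation used in the earlier sketch, the image Im f can be computed by collecting the images of all vertices and links; the short example below is illustrative rather than the book's formal construction.

```python
# A sketch of computing the image Im f of a structural homomorphism given as a
# pair of mappings: it collects the images of all vertices and links of R inside P.

def image(node_map: dict, link_map: dict):
    """Return the vertex set and link set of Im f."""
    return set(node_map.values()), set(link_map.values())

node_map = {"A": "X", "B": "X", "C": "Y"}
link_map = {"e1": "d1", "e2": "d1"}
print(image(node_map, link_map))   # ({'X', 'Y'}, {'d1'})
```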
Let f : R → P be a (structural) homomorphism of schemas.
Lemma 5.6.5. Im f is the largest subschema of P such that f defines a (structural) VE-epimorphism of R onto Im f.

Let f : R → P be a (structural) E-epimorphism of schemas.
It is possible to derive properties of R from properties of P and vice versa.

Proposition 5.6.18. (a) If the grid G(R) is connected (full) and f : R → P is a (structural) E-epimorphism of schemas, then the grid G(P) is also connected (full). (b) If fan-in (fan-out) of all edges from the grid G(P) is larger than n and f : R → P is a (structural) E-epimorphism of schemas, then fan-in (fan-out) of all edges from the grid G(R) is larger than n.

Corollary 5.6.2. (a) If the grid G(P) is disconnected and f : R → P is a (structural) E-epimorphism of schemas, then the grid G(R) is also disconnected. (b) If fan-in (fan-out) of all edges from the grid G(R) is smaller than n and f : R → P is a (structural) E-epimorphism of schemas, then fan-in (fan-out) of all edges from the grid G(P) is smaller than n.

Corollary 5.6.3. If f : R → P is a (structural) E-epimorphism of schemas, then the number of components of the grid G(R) is larger than or equal to the number of components of the grid G(P).

The concept of a subschema is important in schema theory.

Definition 5.6.24. A schema P is a (strong) structural subschema of a schema R if the grid G(P) is a generalized oriented submultigraph of the grid G(R) (the connection grid CG(P) is a generalized oriented submultigraph of the connection grid CG(R)).
It is denoted by P ⊆S R and P ⊆SS R, respectively.

Lemma 5.6.6. Any strong structural subschema of a schema R is its structural subschema, i.e., P ⊆SS R implies P ⊆S R.

Example 5.6.14. The schema R given in Figure 5.31 is a structural subschema of the schema from Figure 5.23 and of the schema from Figure 5.26. However, the schema R is neither a subschema of the schema from Figure 5.23 nor a subschema of the schema from Figure 5.26.

Figure 5.31. The schema R with nodes T1, FA1, FA, NN, and T3. Dashed lines represent activation of signals, while solid lines represent transfer of data.
Here is the list of variables in this schema:
T1 and T3 are Turing machines;
NN is a neural network;
FA and FA1 are variables the range of which is the class of all finite
automata.
Lemma 5.6.7. If a schema P is a structural subschema of a schema
R, then there is a structural VE-monomorphism of P into R.
Proof is left as an exercise. 
Definition 5.6.25. A schema P is a subschema of a schema R if all nodes of P belong to the set of nodes of R, all links of P belong to the set of links of R, all ports of P belong to the set of ports of R, and the internal and external port assignment functions pIP and pEP and the port-link adjacency function cP of P are restrictions of the internal and external port assignment functions pIR and pER and the port-link adjacency function cR of R, respectively. It is denoted by P ⊆ R.
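A minimal sketch of checking the subschema relation of Definition 5.6.25 follows, assuming a dictionary-based representation (the keys nodes, links, ports, port_of, and adjacency are illustrative): inclusion of the three sets plus the requirement that the assignment and adjacency functions of P are restrictions of those of R.

```python
# A sketch (assumed data representation) of the subschema check P ⊆ R.

def is_restriction(small: dict, big: dict) -> bool:
    """small is a restriction of big if they agree on small's whole domain."""
    return all(k in big and big[k] == v for k, v in small.items())

def is_subschema(P, R) -> bool:
    return (P["nodes"] <= R["nodes"]
            and P["links"] <= R["links"]
            and P["ports"] <= R["ports"]
            and is_restriction(P["port_of"], R["port_of"])
            and is_restriction(P["adjacency"], R["adjacency"]))

R = {"nodes": {"T1", "NN"}, "links": {"e"}, "ports": {"p", "q"},
     "port_of": {"p": "T1", "q": "NN"}, "adjacency": {"e": ("p", "q")}}
P = {"nodes": {"T1"}, "links": set(), "ports": {"p"},
     "port_of": {"p": "T1"}, "adjacency": {}}
print(is_subschema(P, R))   # True
```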
Let P be a subschema of a schema R.
Proposition 5.6.19. For any concretization Con P (abstraction
Abs P ) of the schema P , there is a unique minimal concretization
Con R (abstraction Abs R) of the schema R such that Con P (Abs
P ) is a subschema of Con R (Abs R).
Proof. (1) The concretization Con P of the schema P is constructed by changing some variables from P to constants. Let us take the set Vch of all changed variables in P and make the same changes in R as those that are performed in P. This will give us the concretization Con R of the schema R, in which Con P is a subschema and which is minimal because only the least number of variables from R necessary for making Con P a subschema of Con R is changed to constants.
(2) The abstraction Abs P of the schema P is constructed by changing some constants from P to variables. Let us take the set Cch of all changed constants in P and make the same changes in R as those that are performed in P. This will give us the abstraction Abs R of the schema R, in which Abs P is a subschema and which is minimal because only the least number of constants from R necessary for making Abs P a subschema of Abs R is changed to variables.
The Proposition is proved.

Let P be a subschema of a schema R.


Proposition 5.6.20. For any concretization Con R (abstraction
Abs R) of the schema R, there is a unique concretization Con P
(abstraction Abs P ) of the schema P such that Con P (Abs P ) is a
subschema of Con R (Abs R).
Indeed, to build the necessary subschema Con P (Abs P ) of the
schema P , we take the minimal subschema of Con R (Abs R) that
contains all nodes, links and, if necessary, ports from P .
Remark 5.6.12. The concept of a subschema of a schema is defined for both informal and formal schemas, as well as for basic and port schemas.
Proposition 5.6.21. Any port schema potentially is a subschema of
a closed port schema.
Indeed, a port schema is open when there are free ports, i.e., ports that are not assigned to any link. Thus, it is possible to make any port schema closed by assigning links to its free ports so that the new schema does not have ports for external interaction and, thus, is closed.
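The closure construction behind Proposition 5.6.21 can be sketched as follows; the representation and the way fresh links are attached (here as loops on free ports) are assumptions made only for illustration.

```python
# A sketch of closing a port schema: every free port, i.e., a port not adjacent
# to any link, gets a fresh link attached, so no port remains available for
# external interaction.

def close_schema(ports: set, adjacency: dict) -> dict:
    """Return a new port-link adjacency in which no port is free."""
    used = {p for ends in adjacency.values() for p in ends}
    closed = dict(adjacency)
    for i, p in enumerate(sorted(ports - used)):
        closed[f"new_link_{i}"] = (p, p)   # attach a fresh (loop) link to the free port
    return closed

print(close_schema({"p", "q", "r"}, {"e": ("p", "q")}))
```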
In the neurophysiological schema theory (Arbib, 1995; 2005), it is important to be able to elaborate different schemas into networks of interacting subschemas until finally it becomes possible to realize these constructs in terms of neural networks or other appropriate circuitry.
To provide means for construction of new schemas from given
schemas, it is possible to use operations introduced in the mathe-
matical schema theory (Burgin, 2010a).
Example 5.6.15. The (informal) schema of grasping given in
Figure 5.32 is a subschema of the schema from Figure 5.23.
Example 5.6.16. The (formal) schema given in Figure 5.33 is
a subschema of the schema from Figure 5.26. This schema has
connections/links of two types.
Here is the list of variables in this schema:
X4 is a variable the range of which is the class of all schemas (algorithms or neural assemblages) for fast phase movement;
X5 is a variable the range of which is the class of all schemas (algorithms or neural assemblages) for hand preshape;
X6 is a variable the range of which is the class of all schemas (algorithms or neural assemblages) for hand rotation;
X7 is a variable the range of which is the class of all schemas (algorithms or neural assemblages) for slow phase movement;
X8 is a variable the range of which is the class of all schemas (algorithms or neural assemblages) for actual grasp.

Figure 5.32. The grasping schema with components Hand Preshape, Hand Rotation, Actual Grasp, and Grasping, receiving visual and kinesthetic input and kinesthetic and tactile input. Dashed lines denote activation of signals and solid lines denote transfer of data.

Figure 5.33. The schema with nodes X4, X5, X6, X7, and X8. Dashed lines represent activation of signals, while solid lines represent transfer of data.

Proposition 5.6.22. If a schema P is a (structural) subschema of a schema R, and the schema R is a (structural) subschema of a schema Q, then the schema P is a (structural) subschema of the schema Q.

Proof. Let us assume that a schema P is a structural subschema of a schema R, and the schema R is a structural subschema of a schema Q. It means that the grid G(P) of P is a generalized oriented submultigraph of the grid G(R) of R and the grid G(R) of R is a generalized oriented submultigraph of the grid G(Q) of Q. Consequently, the grid G(P) of P is a generalized oriented submultigraph of the grid G(Q) of Q. By definition, it means that the schema P is a structural subschema of the schema Q.
Now let us assume that a schema P is a subschema of a schema R, and the schema R is a subschema of a schema Q. The relation "to be a subschema" is defined by inclusion of sets (of nodes, links and ports) and by restrictions of functions. As inclusion of sets and restriction of functions are transitive relations, the schema P is a subschema of the schema Q.
Proposition 5.6.23. If f : R → P is a (structural) homomorphism [V-monomorphism, E-monomorphism, VE-monomorphism] of schemas and Q is a subschema of the schema R, then f defines a restriction fQ of f on Q, which is a (structural) homomorphism [V-monomorphism, E-monomorphism, VE-monomorphism, respectively] of schemas.

Definition 5.6.26. A subschema Q of a schema R is called V-complete in R if Q contains all nodes from R.
The concept of a V-complete subschema is especially useful when
connections (links) are variable while all nodes are constants. In such
a way, it is possible to investigate how connections (links) between a
set of given nodes are changing with time.

Definition 5.6.27. A subschema Q of a schema R is called E-complete in R if Q contains all links from R that are connected in R to some node of Q.
The concept of an E-complete subschema is especially useful when nodes are variable while all connections (links) are constants. In such a way, it is possible to explore how nodes are changing with time while connections (links) remain stable.

Definition 5.6.28. A subschema Q of a schema R is called P-complete in R if Q contains all ports from R.
Definitions imply the following result.

Proposition 5.6.24. A subschema Q of a schema R is E-complete, P-complete, and V-complete at the same time in R if and only if it coincides with R.
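For a dictionary-based representation such as the one used in the earlier sketches, the three completeness notions of Definitions 5.6.26-5.6.28 can be checked directly; the helper names below are illustrative.

```python
# A sketch (assumed representation) of V-, E-, and P-completeness of a
# subschema Q in a schema R.

def v_complete(Q, R) -> bool:
    return R["nodes"] <= Q["nodes"]

def e_complete(Q, R) -> bool:
    # Q must contain every link of R that touches (via R's grid adjacency) a node of Q.
    touching = {l for l, ends in R["grid_adjacency"].items()
                if any(n in Q["nodes"] for n in ends)}
    return touching <= Q["links"]

def p_complete(Q, R) -> bool:
    return R["ports"] <= Q["ports"]

R = {"nodes": {"A", "B"}, "links": {"e"}, "ports": {"p", "q"},
     "grid_adjacency": {"e": ("A", "B")}}
Q = {"nodes": {"A", "B"}, "links": {"e"}, "ports": {"p", "q"}}
print(v_complete(Q, R), e_complete(Q, R), p_complete(Q, R))   # True True True
```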

Proposition 5.6.25. If a schema R does not have nodes without ports, then P-completeness of a subschema Q implies V-completeness of Q.

Indeed, ports cannot be separated from nodes. Therefore, if all ports are present in a subschema, then all nodes are also present in this subschema.
Proposition 5.6.26. If a schema R does not have links without ports and open ports, then P-completeness of a subschema Q without open ports implies E-completeness of Q.

Indeed, if there are no open ports in R, then any of its ports is connected by a link to another port. The subschema Q contains all ports from the schema R. Not to be open, these ports need all links from the schema R. Therefore, if all ports are present in the subschema Q, then all links are also present in Q and it is E-complete.

Proposition 5.6.27. If f : R → P is a structural V-epimorphism (E-epimorphism) of schemas, then the inverse image f−1(Q) of a V-complete (E-complete) subschema Q of a schema P is V-complete (E-complete).

Proof. (a) Let Q be a V-complete subschema of a schema P. Then f−1(Q) contains all nodes from R as f : R → P is a structural V-epimorphism, i.e., f−1(Q) is V-complete in R.
(b) Let Q be an E-complete subschema of a schema P and r be a link from R, at least, one end of which is connected to a node A such that f(A) is a node from Q. Then by the definition of a structural homomorphism, f(r) is connected to the node f(A) by, at least, one end. Thus, f(r) is a link from Q as Q is E-complete in P. Consequently, r belongs to f−1(Q). As r is an arbitrary link from R connected to a node from f−1(Q), the schema f−1(Q) is E-complete in R.
The Proposition is proved.

Definition 5.6.29. A schema P is open if it is connected to some other systems. Otherwise, P is closed.
Multigraphs of the schemas allow one to characterize definite
classes of schemas.

Proposition 5.6.28. A schema R is closed if its multigraph G(R) is not generalized.

Proof is left as an exercise.


Corollary 5.6.4. A schema R is a transducer only if its multigraph G(R) has edges connected by their beginning and edges connected by their end.

Proposition 5.6.29. If f : R → P is a (structural) homomorphism of schemas and R is a closed schema, then the image f(R) is closed.

Proof is left as an exercise.

Corollary 5.6.5. An epimorphic image of a closed schema is closed.

Proposition 5.6.30. If f : R → P is a (structural) homomorphism of schemas and R is a connected schema, then the image f(R) is connected.

Proof is left as an exercise.

Corollary 5.6.6. An epimorphic image of a connected schema is connected.

An important part of the mathematical schema theory is concerned with operations with schemas. There are operations with schemas aimed at the creation and development of computing and communication networks; some of these operations are constructed and studied in (Burgin, 2010a).
Another class of schema operations is related to concepts because
schemas provide efficient representation of concepts. That is why an
important operation with conceptual schemas is conceptual blending.
It is a process that operates below the level of consciousness and
involves connecting two concepts to create new meaning (Fauconnier
and Turner, 2002; Guhe et al., 2011). Researchers use this operation
to explain abstract thought, creativity, and language. For instance,
Fauconnier and Turner (2002) argue that all learning and all thinking
consist of blends of concepts and metaphors based on various phys-
ical experiences. Blending is a cyclic operation, in which the results
of previous blendings are then themselves blended together into an
increasingly rich structure that makes up people’s mental functioning
in modern society.
Chapter 6

Knowledge Structure and Functioning: Megalevel or Global Theory of Knowledge

Knowledge is not a series of self consistent theories that converges towards an ideal view; it is rather an ever increasing ocean of mutually incompatible (and perhaps even incommensurable) alternatives, each single theory, each fairy tale, each myth.
Paul Feyerabend

In the context of global knowledge, that is, on the knowledge megalevel, there are two approaches to knowledge structuration:

— System structuration organizes knowledge in the form of interacting knowledge systems.
— Typological structuration groups knowledge with respect to different types.

For instance, scientific and mathematical knowledge are structured as systems of interacting theories. At the same time, typological structuration organizes knowledge systems according to their types. For instance, as it is done in Chapter 2, three basic types of knowledge are discerned: descriptive, representational, and operational knowledge. Each type forms a subsystem or component of an advanced scientific or mathematical theory.

A global knowledge system encompasses knowledge about a domain that consists of an immense amount of objects, relations, processes, transformations, and interactions.
Any wide-ranging and advanced scientific or mathematical theory is an example of a global knowledge system and consequently, models of scientific and mathematical theories are models of global knowledge systems of a definite type. Namely, scientific and mathematical theories, in general, are more organized and more advanced than other types of knowledge. As a rule, these theories provide a more complete representation of global knowledge in comparison with other kinds of knowledge such as engineering knowledge or medical knowledge.
One more distinction is made between comprehensive and partial
knowledge systems.

Definition 6.1. A comprehensive knowledge system has all types of knowledge and knowledge of each type is usually organized as the corresponding subsystem of the whole knowledge system (cf., Table 4.1).
Advanced scientific theories, such as relativity theory, quantum
mechanics or genetics, are comprehensive knowledge systems.

Definition 6.2. A partial knowledge system does not have all types
of knowledge.
Formal theories, such as axiomatic set theory or Peano arithmetic,
are examples of partial knowledge systems.
Structural analysis of big knowledge systems in general and scien-
tific theories, in particular, has been, as a rule, concerned only with
the inner structure of knowledge systems. The reason for this was the
limited understanding of the concept of structure that had existed
for a long time. In the general theory of structures developed in
(Burgin, 2012), this limitation was eliminated by demonstration that
any system has five types of structures — internal structure, inner
structure, intermediate structure, external structure, and outer struc-
ture. In addition, the new theory demonstrated that the traditional understanding of the concept of structure, as well as its mathematical formalizations, is incomplete, and the complete concept of structure and its mathematical formalization were created. As a result, in the
knowledge realm, the theory of mathematical structures developed by Bourbaki (1960) became a special formalized subtheory of the general theory of structures (Burgin, 2012).

6.1. A typology of structures and scientific knowledge

Every task involves constraint,
Solve the thing without complaint;
There are magic links and chains
Forged to loose our rigid brains.
Structures, structures, though they bind,
Strangely liberate the mind.
James Falen

While the traditional approach takes into account only one structure
of a system, the general theory of structures postulates that any
system has different structures, which belong to five basic types:
inner, internal, intermediate, outer, and external structures.

Definition 6.1.1. An internal structure Q of a system R contains only inner structural parts, components and elements, i.e., parts, components, and elements of R; relations between these parts, components and elements; relations between these parts, components, elements and relations from Q; and relations between relations from Q.
A complex system can have different internal structures because
it is possible to consider this system selecting different parts, com-
ponents and elements. For instance, it is possible to treat a text as a
system consisting of words and relations between them or as a sys-
tem consisting of letters and relations between them. It will give us
different internal structures of the same text.
Moreover, if we select definite parts, components and elements,
it is possible to have different internal structures taking dissimilar
relations between them. For instance, taking words as elements of
a text, we obtain one internal structure when we choose syntactic
relations between these words and another internal structure when
we choose semantic relations between these words.
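A small Python sketch, not from the book, makes this point concrete: the same text yields different internal structures depending on whether words or letters are taken as elements, with a simple "follows" relation in both cases.

```python
# An illustrative sketch of how the same text yields different internal
# structures depending on the chosen elements: words versus letters, each with
# an adjacency ("follows") relation.

def internal_structure(elements, relation):
    """An internal structure as a pair: a set of elements and a relation on them."""
    return set(elements), set(relation)

text = "the sun is a star"

words = text.split()
word_structure = internal_structure(words, zip(words, words[1:]))

letters = [ch for ch in text if ch != " "]
letter_structure = internal_structure(letters, zip(letters, letters[1:]))

print(len(word_structure[0]), len(letter_structure[0]))   # 5 distinct words vs 9 distinct letters
```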
Note that by this definition, it is possible to treat the system R
as a part of itself. In this case, relations between elements and the
whole system R, as well as relations between relations in R and the
whole system R are also included in the internal structure. However, it is also useful to exclude such relations from consideration. It brings us to the special case of internal structures — inner structure.

Definition 6.1.2. An inner structure Q of a system R is an internal structure when the whole system R is not a part, component or element of itself.
For instance, the inner structure of an abstract set does not
include any relations between elements of this set, while the inter-
nal structure of an abstract set is based on the membership relation
between elements of this set and the set itself, i.e., if X is an abstract
set and x is an element from X, then we have the relation x ∈ X.
As in the case of internal structures, a complex system can have
different inner structures.

Definition 6.1.3. An external structure T of a system R is an extension of the internal structure, in which other systems, their parts, components and elements are included, as well as relations between all these included parts, components and elements, relations between these parts, components, elements and relations from T and relations between relations from T.
For instance, considering an organization A, we take members
and divisions of this organization, other organizations, divisions and
people with which members and divisions of this organization have
relations as elements of the external structure of A, and take relations
between these elements and relations as relations of the external
structure of A.
In the external structure of a scientific theory T , we include other
theories related to T , scientists who have been developing this theory,
applications of T , and relations between all these systems.
As in the case of internal and inner structures, a complex system
can have different external structures.

Definition 6.1.4. An intermediate structure T of a system R is an external structure when the whole system R is not a part, component, or element of itself and other systems are also excluded from T, as well as relations that include these systems.
For instance, the intermediate structure of a mathematical theory M includes axioms, statements, and theorems of other mathematical theories related to axioms, statements, and theorems of M, relations between all these objects and relations between these relations. If we
take, for example, the theory of groups, then its axioms and many of
its theorems are also axioms and theorems of the theory of abelian
groups, as well as of the theory of free groups.
As in the case of internal and inner structures, a complex system
can have different intermediate structures.

Definition 6.1.5. An outer structure of a system R is an inner structure of a system U in which R is only one of the inner elements of the internal structure Q of the system U.
For instance, the internal structure of the organization where an
individual works is an outer structure of this individual. The internal
structure of person’s family also is an outer structure of this person.
The outer structure of a group is the category of all groups, while
the outer structure of a set is the category of all sets.
As in the case of internal and inner structures, a complex system
can have different outer structures.
Besides, when models of knowledge systems are developed, it is
reasonable to make a difference between knowledge systems and
systems of other nature. It brings us to two types of intermediate
structures, two types of external structures, and two types of outer
structures of a knowledge system:

— knowledge-bounded intermediate structure;
— beyond-knowledge intermediate structure;
— knowledge-bounded external structure;
— beyond-knowledge external structure;
— knowledge-bounded outer structure;
— beyond-knowledge outer structure.

Advanced scientific theories are model examples of comprehensive and partial global knowledge systems. However, scientific theories represent only a specific type of global knowledge systems. Gopnik and Meltzoff (1997) give what is probably the most comprehensive set
of conditions that scientific theories have to satisfy. These conditions fall into three categories: structural, functional, and dynamic. It is interesting that these three categories exactly correspond to the three types of complexity measures of algorithms and computations (Burgin, 2005).
Structurally, scientific theories are abstract, coherent, causally
organized, and ontologically committed bodies of knowledge. They
are abstract in that they posit entities and laws using a vocabulary
that differs from the vocabulary used to state the evidence that sup-
ports them. They are coherent in that there are systematic relations
between the entities posited by the theory and the experimental and
observational data. Scientific theories are causal insofar as the struc-
ture that they posit in the world to explain observable regularities is
ordinarily a causal one. Keil (1989) suggests that causal relations are
central to scientific theories; especially, in those components that are
homeostatic and hierarchically organized. Finally, scientific theories
are ontologically committed as the entities that they hypothesize cor-
respond to real kinds supporting counterfactuals about how things
would be under various non-actual circumstances.
Functionally, scientific theories must make predictions, interpret
evidence in new ways, and impart explanations of phenomena in their
domain. The predictions of scientific theories often go beyond simple
generalizations of the evidence, and include ranges of phenomena
that the theory was not initially developed to cover. Scientific theories
interpret experimental and observational data by providing new
descriptions that influence what is seen as pertinent or salient and
what is not. Associative relations produced by interpretations supply
the raw data for theoretical development, as well as counterexamples
for discarding the theory. In addition, relevant scientific theories pro-
vide explanations of phenomena in their domain.
Dynamically, scientific theories are not static representations of
their domains but are transformed and developed by scientists.
Dynamic processes that involve scientific theories include:

— an initial period involving preliminary hypothesis formation and the accumulation of evidence via processes of experimentation and observation,
— discovery of counterexamples,
— possible discounting of such examples as noise,
— generation of hypotheses to modify a theory,
— production of a new theory when an old one has accumulated
too many counterexamples or repulsive and complicated auxiliary
amendments,
— applications of a theory.

Applying the scientific approach, many researchers studied inner structures of scientific theories building their models, which are often called reconstructions, and testing their validity by application to existing scientific theories. The most popular is the standard (positivist) model (reconstruction) of a scientific theory, which utilizes means of logic representing a scientific theory as a system of propositions (cf., for example, (Carnap, 1934/1937; Suppe, 1974; 1979; 1999)). Namely, according to Carnap, a scientific theory is an interpreted axiomatic formal system, which consists of:

• a formal language, including logical and non-logical terms;
• a set of logical axioms and corresponding rules of construction and inference;
• a set of mathematical axioms and corresponding rules of construction and inference;
• a set of non-logical axioms, which express the empirical content of the theory;
• a set of semantic postulates defining the meaning of non-logical terms, which formalize the analytic truths of the theory;
• a set of rules of correspondence, which give an empirical interpretation of the theory.

Another popular approach to description of the scientific theory structure is the structuralist model (reconstruction) of a scientific theory (cf., for example, (Sneed, 1971; Balzer et al., 1987)), which utilizes means of set theory representing a scientific theory as a system of models of the theory domain.
Some researchers treat scientific theories as devices for formulating and resolving scientific problems. In this context, they model
scientific theories by systems of statements and questions (problems)
including (in some models) various forms of problem representation,
rules and heuristics for resolving problems and utilizing erotetic logic
for rigorous analysis of problems and problem-solving (cf., for exam-
ple, (Lobovikov, 1984; Garrison, 1988)).
In his model, Thagard (1988) represented a scientific theory
as a highly organized package of rules, concepts, and problem
solutions.
All these and some other approaches were unified in the structure-
nominative model or reconstruction (SNR) of a scientific theory
(Burgin and Kuznetsov, 1989; 1989a; 1991; 1992; 1993; 1994; Burgin
et al., 1989; Balzer et al., 1991). As a result, other models of
theoretical knowledge that describe the inner structure of big knowledge systems, such as scientific theories, became subsystems of the
structure-nominative model of scientific knowledge (a scientific the-
ory) and all structures used in those models are either named sets
or systems of named sets. For instance, the structuralist model of a
scientific theory (cf., for example, (Sneed, 1971; Balzer et al., 1987))
is represented as the model-representing subsystem of a scientific
theory, while the standard (positivist) model of a scientific theory
(cf., for example, (Suppe, 1974; 1979; 1999)) is represented as the
logic-linguistic subsystem (LLS) of a scientific theory.
The structure-nominative model has been applied to the analy-
sis of laws and models of physical theories (Burgin and Kuznetsov,
1993), to formal and informal concept formation from a general
point of view (Burgin and Kuznetsov, 1988; 1990), to the analysis
of the structure and development of mathematical theories (Burgin and Kuznetsov, 1991a), to problems of intellectual activity and
cognition (Burgin and Kuznetsov, 1988a; 1988b), to problems of
pedagogy (Burgin et al., 1989), to esthetic features of scientific the-
ories (Burgin and Kuznetsov, 1993a) and to knowledge represen-
tation in AI systems (Burgin and Kuznetsov, 1987; 1988a). The
structure-nominative model has provided efficient means for studies
of scientific reduction (Balzer et al., 1991) demonstrating that the
structure-nominative general schema for reduction entails as spe-
cial cases Schröder-Heister–Schafer’s weak and strong representa-
tion and reduction concepts (Schröder-Heister and Schafer, 1989),
as well as the structuralist concept of reduction (Balzer et al., 1987).
Kuokkanen (1993) suggested one more application of the structure-
nominative model, namely, to study Rantala’s concept of correspon-
dence (Rantala, 1989).
A new development of the structure-nominative model of scien-
tific knowledge is elaborated in (Burgin, 2011) where the complete
beyond-knowledge external structure of a scientific/mathematical
theory is constructed. The complete beyond-knowledge external
structure of a scientific/mathematical theory (comprehensive knowl-
edge system) developed in the extended structure-nominative model
or reconstruction (ESNR) is given in Figure 6.1.
The model of global knowledge systems presented in this chapter
is the further development of the SNR of comprehensive scientific
knowledge system (Burgin and Kuznetsov, 1989; 1989a; 1991; 1992;
1993; 1994; Burgin et al., 1989; Balzer et al., 1991), as well as of
the ESNR of comprehensive scientific knowledge system elaborated
in (Burgin, 2011).
Scientific and mathematical theories represent a transition form
from the macrolevel to the megalevel of knowledge. When a mathe-
matical or scientific theory appears, it is, as a rule, small and is situ-
ated on the macrolevel of knowledge. This was the situation with the
calculus at the end of 17th century, with non-Euclidean geometries in
the middle of the 19th century or with non-Diophantine arithmetics
now. At the same time, mature theories, such as geometry, algebra,
genetics, or quantum physics, are on the megalevel.
Figure 6.1. The complete beyond-knowledge external structure of a scientific/mathematical theory (comprehensive knowledge system), which includes the inner structure (the middle layer of the diagram) and the beyond-knowledge intermediate structure, and is developed in the ESNR of comprehensive knowledge systems. The diagram comprises the Logic-Linguistic Subsystem (logical and linguistic parts), the Problem-Heuristic Subsystem (problem and heuristic parts), the Pragmatic-Procedural Subsystem (axiological and procedural parts), the Model-representing Subsystem (model and nomological parts), and the Subsystem of Ties, together with their connections to Thinking (problem thinking, affirmative thinking), Action, the Object Domain, and the World.
6.2. Nuclear and comprehensive knowledge systems

Integrity without knowledge is weak and useless, and knowledge without integrity is dangerous and dreadful.
Samuel Johnson

A nuclear knowledge system performs a specific function in knowledge functioning. There are several types of nuclear knowledge systems: descriptive knowledge systems, representational knowledge systems, operational knowledge systems, assertoric knowledge systems, erotetic knowledge systems, and heuristic knowledge systems. A comprehensive knowledge system performs all these functions combining all nuclear knowledge systems in a comprehensive totality of knowledge.
There are different models of knowledge on the megalevel. The most popular models, such as the positivist logical representation of knowledge, where a scientific theory is represented by logical propositions and/or predicates (cf., (Carnap, 1934/1937; Popper, 1965; 1979; Suppe, 1999)), or the model-oriented representation of knowledge, which assumes that a theory is a collection of models (cf., (Suppes, 1967; Suppe, 1979; Sneed, 1979; van Frassen, 2000)), represent only nuclear knowledge systems working well on the macrolevel but being essentially incomplete on the megalevel.
The structure-nominative model created by Burgin and Kuznetsov
(Burgin and Kuznetsov, 1991; 1992; 1993; 1994; Burgin et al., 1989;
Balzer et al., 1991) was the first conceptual and mathematical model
of comprehensive knowledge systems. This model encompasses all
other models of theoretical and practical knowledge that existed before, giving an advanced structural representation of scientific
knowledge in general and scientific theories in particular. Provid-
ing means for exploration of various traits and regularities of scien-
tific knowledge structure and functioning, the structure-nominative
model forms the base of the structure-nominative direction in the
methodology of science, and it is demonstrated that named sets form
the structural base of the SNR reconstruction of scientific knowledge
(Burgin and Kuznetsov, 1989; 1989a; 1991; 1992; 1993; 1994; Balzer
et al., 1991). Here we further develop the SNR model building the
modal stratified bond model (MSB model) of comprehensive knowl-
edge systems in general, as well as advanced scientific and mathemat-
ical theories, in particular, describing their structure and functioning.
To build the MSB model, we configure the global knowledge in
three directions — systemic, modal and hierarchical.
Namely, we take into account three modalities of knowledge con-
sidered in Chapter 2 to construct the modal direction:

∗ Assertoric knowledge consists of epistemic structures with implicit or explicit affirmation of being knowledge.
∗ Hypothetic or heuristic knowledge consists of epistemic structures
with implicit or explicit supposition that they may be knowledge.
∗ Erotetic knowledge consists of epistemic structures that express
lack of knowledge.

Logical propositions or statements, such as "The Sun is a star", are examples of assertoric units of knowledge.
Beliefs with low extent of certainty, i.e., when they are not sufficiently grounded, are examples of hypothetic knowledge.
Questions and problems are examples of erotetic knowledge.
Knowledge with different modalities forms strata in knowl-
edge systems determining the horizontal structure of comprehensive
knowledge systems.
In the hierarchical direction, we separate three levels of global
knowledge systems:

1. The componential level of global knowledge.
2. The attributed level of global knowledge.
3. The productive level of global knowledge.

The componential level consists of elements, parts and blocks from which systems from the attributed level are built. In some sense, the componential level is the substructural level of a global knowledge system.
The attributed level reflects the static structure of global knowledge as a system constructed from elements, parts and blocks from the componential level.
The productive level reflects the cognitive (dynamic) structure of
global knowledge, containing means for knowledge acquisition, pro-
duction, and transmission.
Note that each of these three levels has its strata and sublevels.
Levels in global knowledge systems determine the vertical struc-
ture of this system.
As it is explained in Chapter 2, it is natural to distinguish three
categories of knowledge, which form the systemic direction of knowl-
edge structuration:

∗ Descriptive knowledge.
∗ Representational knowledge.
∗ Operational knowledge.

Each of these categories is subdivided into three groups. Descriptive knowledge has three types:

◦ Informal knowledge based on natural languages.
◦ Semiformal knowledge based on logic.
◦ Formal knowledge.

Let us consider definitions of a limit as examples of the three types of descriptive knowledge.
An informal definition: A number a is a limit of a sequence of numbers ai (i = 1, 2, 3, . . .) if the distance between a and all but a finite number of elements from the sequence is smaller than any arbitrarily small positive number.
A semiformal definition: A number a is called a limit of a sequence l if for any ε ∈ R++ the inequality |a − ai| < ε is valid for almost all ai, i.e., there is such n that for any i > n, we have |a − ai| < ε.
A formal definition: a = limi→∞ ai if ∀ε ∈ R++ ∃n ∈ N ∀i > n (|a − ai| < ε).
Note that the formal definition is much shorter than both the informal and semiformal definitions.
Representational knowledge has three modes:

◦ Knowledge representing statics.
◦ Knowledge representing dynamics, which expresses two distinctive modes:
  • Knowledge representing functions;
  • Knowledge representing processes.

The statement "The Sun is a star" is an example of static representational knowledge.
The statement "The Sun gives light" is an example of representational knowledge representing functions.
The statement "The Earth rotates about the Sun" is an example of representational knowledge representing processes.
Operational knowledge has three types:

◦ Procedural knowledge.
◦ Instrumental knowledge.
◦ Axiological knowledge.

An algorithm, e.g., rules for adding decimal numbers, is an example of procedural knowledge.
An abstract automaton, e.g., a Turing machine (Burgin, 2005), is an example of instrumental knowledge.
The statement "A theory has to be able to explain the results of experiments" is an example of axiological knowledge.
Thus, in the systemic context, we discern five knowledge subsys-
tems: the logic-linguistic subsystem (LSS), the model-representation
subsystem (MSS), the procedural subsystem (PSS), the axiological
subsystem (ASS), and the instrumental subsystem (ISS) of a com-
plete knowledge system. Together, the PSS, ISS, and ASS form the
operational subsystem (OS) of a complete knowledge system but tak-
ing into account the intermediate structure (cf., (Burgin, 2012)) of
knowledge, we consider these three subsystems separately.
All these subsystems are explicated and studied based on func-
tions and functioning of real comprehensive knowledge systems. As
a theory contains knowledge, it possesses a variety of linguistic tools,
the main of which are different languages. From this perspective, a theory is regarded as a sophisticated system of statements about
its domain, e.g., quantum mechanics describes the microlevel of the
physical world. In addition, theoretical knowledge has developed log-
ical structures, while logical apparatus, such as inference, is used for
scientific and mathematical cognition, as well as for grounding and
testing acquired knowledge. To perform these functions, a mature
theory employs specific means from linguistics and logic, which are
amalgamated in the LSS of the theory. Any comprehensive knowl-
edge system models some domain, which results in the necessity of the
MSS. Operational functions of a comprehensive knowledge system
are performed by the PSS, the ASS, and the ISS. Each subsystem
has the same horizontal and vertical structures as the whole compre-
hensive knowledge system.
All subsystems of global (comprehensive) knowledge systems have
several levels, which are presented in Table 6.1 in the unified form
and considered in more detail in Sections 6.2–6.6.
In addition to types of knowledge, we have three knowledge
modalities:

— Assertoric knowledge;
— Erotetic knowledge;
— Hypothetic or heuristic knowledge.

An assertoric knowledge item asserts that something is or is not the case. For instance, the statement "The Sun is a star" is an assertoric proposition. Assertoric propositions and assertoric predicates are studied by assertoric logics such as the Aristotelian syllogistics.
An erotetic (from Greek, erōtēsis — questioning) knowledge item
expresses an inquiry or a question. For instance, the statement “Is
the Sun a star?” is an erotetic proposition (a question). Erotetic
knowledge is studied by erotetic logics (Belnap and Steel, 1976).
Hypothetic (heuristic) knowledge is knowledge that is not suffi-
ciently grounded, e.g., a hypothesis or a conjecture. For instance,
the statement “The age of the Sun is nine billion years” is a hypo-
thetic proposition (a hypothesis). Hypothetic logic studies hypothetic
propositions and hypothetic predicates (Baldoni et al., 1998).
Table 6.1. Parts and levels of global knowledge systems. The LSS corresponds to descriptive knowledge, the MSS to representational knowledge, and the PSS, ASS, and ISS to operational knowledge.

Componential level:
— LSS, nominalistic part: concepts, lexicon, vocabularies, alphabets;
— MSS, aspect part: properties;
— PSS, operating part: primitive relation, operations, data;
— ASS, scaling part: scales;
— ISS, fragment part: parts and their systems, components of automata, devices, machines.

Attributed level:
— LSS, linguistic part: grammars, linguistic relations, languages;
— MSS, modeling part: parametric, attributive and relational models;
— PSS, operator part: instructions, operators;
— ASS, evaluation part: estimates, judgments, norms, goals, measures, criteria, and values;
— ISS, performer part: automata, devices, machines, instruments.

Productive level:
— LSS, logical part: logical calculi, logics, deduction/inference rules, logical varieties, prevarieties and quasivarieties;
— MSS, nomological part: systems and algebras of models;
— PSS, algorithmic part: algorithms, procedures, operational schemas, scenarios;
— ASS, combination part: algebras and calculi of properties, estimates, judgments, norms, goals, measures, criteria, and values;
— ISS, system part: systems and networks of automata, devices, machines.
Modalities of knowledge determine three knowledge strata:

— The assertoric stratum;
— The erotetic stratum;
— The hypothetic, or heuristic, stratum.

Each of these strata comprises all types of knowledge — descriptive, representational, and operational — of the global knowledge system, e.g., of a scientific theory. Namely, the erotetic stratum has
logical, model, procedural, axiological, and instrumental components.
Problems and questions related to the knowledge in the LSS are
united in the logical component of the erotetic stratum. Problems
and questions related to the theory models are combined in the model
component of the erotetic stratum. Problems and questions related
to algorithms, procedures and methods are amalgamated in the pro-
cedural component of the erotetic stratum. Problems and questions
related to properties, estimates, and parameters are collected in the
axiological component of the erotetic stratum. Problems and ques-
tions related to instrumental issues are integrated in the instrumental
component of the erotetic stratum.
In a similar way, the heuristic stratum has logical, model, proce-
dural, axiological, and instrumental components. Conjectures, heuris-
tics, and hypotheses in the linguistic form are united in the logical
component of the heuristic stratum. Heuristic and hypothetic mod-
els, i.e., models that are not sufficiently validated, are integrated in
the model component of the heuristic stratum. Heuristic and hypo-
thetic properties, estimates, and parameters are unified in the axio-
logical component of the heuristic stratum. Heuristic and hypothetic
algorithms, procedures and methods are collected in the procedural
component of the heuristic stratum. Descriptions of heuristic instru-
ments and devices, as well as their properties are integrated in the
instrumental component of the heuristic stratum.
The five subsystems of a theory (knowledge system) and the-
ory strata are not disconnected — they function together, have
common elements, and there are various ties between all subsys-
tems. For instance, statements from LSS are interpreted in models
from MSS, while problems (questions) from the erotetic stratum
can have solutions (answers) in the assertoric stratum. All these ties are collected in the subsystem of bonds or ties of the theory.
Inference rules and algorithms in general belong both to the PSS
and to the LSS, while the PHS shares construction/generation
rules from erotetic and heuristic languages and inference rules from
erotetic and heuristic logics with the PPS. We remind that erotetic
languages describe and erotetic logics study the logic and prag-
matics of questions and answers, heuristic languages describe and
heuristic logics study the logic and pragmatics of hypotheses and
conjectures, while assertoric languages represent and assertoric log-
ics study the logic and pragmatics of assertions (statements or
propositions).
That is why in any developed knowledge system, there is the sub-
system of bonds or ties (BSS), which unites all other subsystems and
their strata, comprising connections and ties between their elements
and components. Examples of such connections are:

— the interpretations of languages and calculi (from LSS) in models (from MSS);
— the procedural interpretations (in PSS) of languages and calculi
(from LSS);
— the correspondence between algorithms and procedures of infer-
ence (from PSS) and logics (from LSS) where they are used;
— the relations between conjectures (from the hypothetic stratum)
and their properties (from the assertoric stratum);
— the relations between problems (the erotetic stratum) and algo-
rithms (from the assertoric stratum) that solve these problems;
— the correspondence between generative grammars (from PSS) and
languages (from LSS) generated by these grammars;
— the relations between hypotheses (from the hypothetic stratum)
and corresponding problems (from the erotetic stratum);
— the relations between heuristics (from the hypothetic stratum)
and their properties (from the assertoric stratum).
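As an illustration only, and not the book's formal construction, the subsystem of bonds can be pictured as a collection of labeled links between elements of different subsystems and strata, for example:

```python
# An illustrative sketch of the subsystem of bonds (BSS): each bond links an
# element of one subsystem or stratum to an element of another, labeled by the
# kind of connection. All element names are hypothetical.

from collections import namedtuple

Bond = namedtuple("Bond", ["source", "target", "kind"])

bonds = [
    Bond(("LSS", "language L"), ("MSS", "model M"), "interpretation"),
    Bond(("erotetic stratum", "problem P"), ("assertoric stratum", "algorithm A"), "solves"),
    Bond(("PSS", "grammar G"), ("LSS", "language L"), "generates"),
]

def bonds_between(bonds, subsystem_a, subsystem_b):
    """Select the bonds connecting two given subsystems or strata."""
    return [b for b in bonds if b.source[0] == subsystem_a and b.target[0] == subsystem_b]

print(bonds_between(bonds, "LSS", "MSS"))
```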

All subsystems, their levels and strata form the inner struc-
ture of a comprehensive knowledge system. This structure with
the beyond-knowledge intermediate structure and beyond-knowledge outer structure of a comprehensive knowledge system in general and of a scientific or mathematical theory, in particular, is presented in Figure 6.2.

Figure 6.2. The complete beyond-knowledge external structure of a comprehensive knowledge system, e.g., a scientific/mathematical theory, which includes the inner structure (the middle layer of the diagram) and the beyond-knowledge intermediate structure. The diagram connects the Mental World (hypothetic, affirmative, and inquisitive thinking, attitude evaluation), the Structural World (the hypothetic, assertoric, and erotetic strata; the Logic-Linguistic, Axiological, Procedural, Model-representation, and Instrumental Subsystems; and the Subsystem of Bonds), and the Physical World (the Object Domain and Action).
6.3. Logic-linguistic knowledge system and descriptive knowledge

Five paths to a single destination. What a waste. Better a labyrinth that leads everywhere and nowhere.
Umberto Eco

A great bulk of knowledge, and many think all knowledge, is represented in a linguistic form, that is, using language, or actually, a
variety of languages. There are many natural languages, which are
used for information exchange and knowledge preservation. There
is a diversity of artificial languages created for specific purposes.
Scientific languages, such as languages of physics or biology, have
been created for scientific cognition and accumulating its results.
Some artificial languages, such as Esperanto, were created to provide better tools for communication in society. Programming lan-
guages have been continuously created for controlling computers.
Mathematical languages, such as the language of arithmetic with
its numbers or the language of geometry with its figures, have been
created for mathematical cognition and accumulating its results.
Creation of logical languages have been aimed at formalization of
reasoning.
In any case, any global system of knowledge has a component or a subsystem in which knowledge is represented by languages and has a logical form, i.e., is organized according to the rules of logic into a reliable and efficient system. Many philosophers and logicians even assumed that any scientific or mathematical theory
as a kind of global knowledge is a system of statements (proposi-
tions). This is the standard model of a scientific theory introduced
by positivists (cf., (Carnap, 1934/1937; Popper, 1965; 1979; Suppe,
1999)).
The comprehensive models of global knowledge in general and
scientific theories, in particular, such as the structure-nominative
model (Burgin and Kuznetsov, 1991; 1992; 1993; 1994; Burgin et al.,
1989; Balzer et al., 1991), the extended structure-nominative model
(Burgin, 2011) and the MSB model presented in this book, do not
reject the importance of logic and linguistics for knowledge systems but rather
reflect their presence by separating out the specific LSS in comprehensive knowledge systems.

Figure 6.3. The vertical structure of the LSS: the componential level (concepts and the lexicon) forms the conceptual part, the attributed level (languages and grammars) forms the linguistic part, and the productive level (calculi and varieties) forms the logical part.
The LSS has three basic levels and three corresponding parts (cf.,
Figure 6.3):

• The componential level, which gives elements and components,


such as concepts, terms, names, and lexicons, for building next
levels;
• The attributed level, where languages and their constituents, such
as grammars and interpreters, are situated;
• The productive level, to which tools for knowledge production and organization, such as logical calculi, varieties, quasi-varieties, and prevarieties, belong.
Each level exists as the corresponding part of a global knowledge


system (scientific theory):

• The conceptual part,


• The linguistic part,
• The logical part.

Each basic level is, in turn, divided into sublevels, which struc-
turally form a named set sequence in the sense of (Burgin, 2011). Let
us consider all these levels.
The first level of the LSS consists of the symbols used in the
languages of this subsystem. For instance, mathematical theories use
such symbols as decimal digits, Latin letters, Greek letters, Hebrew letters, additional mathematical symbols, e.g., ∅, and letters of
the natural language used by mathematicians, e.g., French, German,
or Russian.
On the second level, symbols are organized into alphabets. For
instance, decimal digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 form the alphabet
of the decimal positional numerical system. If a knowledge system uses several languages (e.g., a physical theory, as a rule, uses natural languages, mathematical languages, and the language of its subject area), then it is possible to take the union of all their alphabets as the alphabet of this theory.
The third level of the LSS consists of the rules for building words,
expressions and well-formed formulas from the symbols.
On the fourth level, some words and expressions are chosen as
names and terms. For instance, 123 is a name of a natural number in the decimal positional numerical system. In the binary positional
numerical system, the same number has the name 1111011.
Concepts used in the considered knowledge system or connected
to this system form the fifth level of the LSS. Concept structure and
properties are modeled and studied by means of various named sets
(Burgin and Kuznetsov, 1988; 1990).
Symbolic forms of concept expressions (terms) are used for the
construction of the lexicon comprising alphabets and vocabularies of
languages from the whole knowledge system, which are elements of
the sixth level of the LSS.
The seventh level consists of the construction rules for expres-


sions, e.g., phrases, sentences, and texts, from the languages used
in the whole knowledge system in general and in the LSS, in par-
ticular. For instance, in the propositional calculus, logicians use the
construction rule:
If a and b are propositions, then a&b is also a proposition.
The eighth level of the LSS contains various languages treated as
systems of expressions built from symbols of alphabets in accordance
with construction rules. All these languages as systems of expression
are elements of the LSS. However, their semantics are defined by
means of other subsystems, while in some cases the languages also
belong to other subsystems. For instance, semantics of model lan-
guages is defined in the MSS and they also belong to this subsys-
tem. In a similar way, semantics of algorithmic languages is defined
in the procedural subsystem (PSS), to which these languages also
belong. The axiological subsystem (ASS) shares languages and log-
ics of norms and values with the logic-linguistic subsystem.
The ninth level of the LSS contains rules for transforming expres-
sions from the theory’s languages. In accordance with the classifica-
tion of languages, there are many kinds of transformation rules; for example, deduction and substitution are the most frequently utilized and the best known of them.
The tenth level, which is sometimes divided into three sublevels,
contains formal calculi. Any calculus is a named set C = (A, R, T )
where A is an axiom system, T is the set of theorems of the given cal-
culus, and R consists of the rules used in the process of deducing these
theorems, i.e., rules from the ninth level (cf., Section 3.8). The struc-
ture of a logical calculus gives rise to three sublevels of the tenth level:
— Systems of axioms, i.e., of expressions accepted without proofs;
— Systems of theorems, i.e., of expressions that are proved;
— Logical calculi, which combine axioms, deduction rules and the-
orems into unified systems.
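To make this named-set view of a calculus concrete, the following Python sketch (an illustration under simplifying assumptions, not the book's formal apparatus) encodes a tiny calculus C = (A, R, T): formulas are strings or implication triples, the only deduction rule is a propositional modus ponens, and the theorem set T is generated by closing the axiom system A under the rules in R. All concrete formulas are invented for the example.

def modus_ponens(derived):
    # One deduction rule (ninth level): from a and ('->', a, b), derive b.
    new = set()
    for f in derived:
        if isinstance(f, tuple) and f[0] == '->' and f[1] in derived:
            new.add(f[2])
    return new

def generate_theorems(axioms, rules, max_passes=10):
    # T: the closure of the axiom system A under the rules R (bounded number of passes).
    theorems = set(axioms)
    for _ in range(max_passes):
        fresh = set()
        for rule in rules:
            fresh |= rule(theorems)
        if fresh <= theorems:   # nothing new can be deduced
            break
        theorems |= fresh
    return theorems

# The calculus C = (A, R, T)
A = {'p', ('->', 'p', 'q'), ('->', 'q', 'r')}
R = [modus_ponens]
T = generate_theorems(A, R)

print('q' in T, 'r' in T)   # True True: both are theorems of this tiny calculus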
The eleventh level of the LSS contains towers of calculi introduced
for representation of dynamic aspects of formal theories (Maslov,
1983) and other logical varieties, prevarieties, and quasi-varieties


(Burgin, 1991d; 1997f; 2004c; Burgin and de Vey Mestdagh, 2011;
2015).
Additionally, a comprehensive knowledge system, e.g., an
advanced scientific theory, usually is a source and repository of
problems, conjectures, heuristics, and hypotheses. For instance, the-
ories are, as a rule, estimated by problems they allow scientists/
mathematicians to solve. Thus, the theory of non-Euclidean geome-
tries has been considered one of the most important mathematical
theories because it solved the long-standing problem about the fifth
postulate of Euclid.
In addition to parts and levels, the LSS has three basic strata (cf.,
Figure 6.4):

— The assertoric stratum, which contains knowledge about the domain of the knowledge system, e.g., of a theory, for example, propositions describing the properties of the knowledge domain.

Hypothetic
stratum

Assertoric
stratum

Subsystem

of BondsLSS

Erotetic
stratum

Figure 6.4. The horizontal structure of the LSS


September 27, 2016 19:40 Theory of Knowledge: Structures and Processes - 9in x 6in b2334-ch06 page 617

Knowledge Structure and Functioning 617

— The erotetic stratum, which contains knowledge on problems, tasks, and questions concerning the domain of the knowledge system, e.g., of a theory.
— The hypothetic stratum, which contains hypothetic, i.e., not sufficiently validated, knowledge about the domain of the knowledge system, e.g., of a theory, for example, conjectures about the properties of the knowledge domain.

The hypothetic and erotetic strata of the logic-linguistic subsystem also have several levels, inheriting them from the assertoric stratum of the knowledge system described above. However, while
the assertoric stratum employs assertoric languages, the erotetic stra-
tum of the LSS uses translational, heuristic and other languages for
representing problems, questions, tasks, hypotheses, and heuristic
methods of construction and search. It also contains calculi, log-
ics, and algebras of problems and questions, hypotheses, etc. The
hypothetic stratum of the LSS uses translational, heuristic and other
languages for representing conjectures, hypotheses, and heuristic
methods of construction and search. It also contains calculi, logics,
and algebras of conjectures and hypotheses, etc.
It is necessary to stress that the erotetic and hypothetic strata of
the majority of mathematical and scientific theories are less devel-
oped and formalized than their assertoric stratum.

6.4. Model-representation knowledge system and


representational knowledge

The beginning of knowledge is the discovery of something we do


not understand.
Frank Herbert

Any knowledge system, e.g., a scientific or mathematical theory, con-


sists of knowledge about some domain, called the knowledge domain,
e.g., the theory domain. To perform this function, a comprehensive
knowledge system, e.g., an advanced theory, uses an assortment of
models of the entities from the theory domain and in some cases,
of the domain as a whole. These models, e.g., mathematical models, form the MSS of the knowledge system.

Figure 6.5. The vertical structure of the MSS: the componential level (names and properties), the attributed level (models), and the productive level (systems of models), with the aspect, modeling, and nomological parts.
The MSS also has several levels, which form a named set sequence
(cf., Figure 6.5). Its first level consists of various names for objects
from the knowledge domain, e.g., the object field of a theory. Names
play a very important role for any kind of knowledge in general and
for representational knowledge, in particular. For instance, to define
a process calculus, one starts with a set of names of channels for
providing means of communication (Lanese et al., 2011).
Note that the names of the objects are not only separate words
but also formulas, expressions, or texts. For instance, an equation for the electron is a name of the electron.
The second level includes systems of names of objects with rela-
tions between the names. These systems are represented by semantic
networks or dependency tables. There are two types of relations
between the names of objects: linguistic relations, in which names


are connected as linguistic entities, and reflection relations, which
represent relations between objects.
The third level contains names of properties and relations between
the objects from the knowledge domain. Note that the names of
the properties and relations are not only separate words but also
formulas, expressions or texts.
The fourth level of the MSS consists of constructions describing
these properties and relations. The most important of such con-
structions is an abstract property P represented by a named set
P = (U, p, L), where U is the universe of entities considered, p is a partial mapping from U into L, and the partially ordered set L is the scale of the
property P (cf., Section 5.3).
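As a small illustration of this construction (a hedged sketch with invented data, not a definitive implementation of the book's formalism), an abstract property can be coded in Python as a triple of a universe U, a partial assignment p, and an ordered scale L; treating p as a partial mapping from U into L is the reading assumed here.

from dataclasses import dataclass

@dataclass
class AbstractProperty:
    universe: set          # U, the universe of entities under consideration
    assignment: dict       # p, partial: its keys form a subset of U
    scale: list            # L, listed here in increasing order

    def value(self, entity):
        # Return p(entity) if it is defined, otherwise None.
        if entity not in self.universe:
            raise ValueError(f"{entity!r} is not in the universe U")
        return self.assignment.get(entity)

# A toy property "boiling point in degrees Celsius" over a small universe of substances.
boiling_point = AbstractProperty(
    universe={'water', 'ethanol', 'mystery substance'},
    assignment={'water': 100, 'ethanol': 78},   # undefined for 'mystery substance'
    scale=list(range(-273, 10000)),
)

print(boiling_point.value('water'))              # 100
print(boiling_point.value('mystery substance'))  # None: p is only a partial mapping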
The fifth level includes systems of names of properties and rela-
tions with relations between the names. These systems are repre-
sented by semantic networks or dependency tables. There are two
types of relations between the names of properties and relations: lin-
guistic relations, in which names are connected as linguistic entities,
and reflection relations, which represent relations between properties
and relations.
The sixth level of the MSS consists of structural hierarchies built
of different sets, multisets and named sets that are used for building
models and properties in the considered system of knowledge, e.g.,
in a scientific or mathematical theory.
For instance, Bourbaki’s set scale (Bourbaki, 1960) and the concept of universes used in non-standard analysis (Cutland, 1988),
which were considered before as entities comprising the sixth level,
are special cases of structural hierarchies.
The seventh level of the MSS contains basic abstract entities
defined on the supports of the theory models. The choice of basic
properties depends on axiological estimates and judgments.
The eighth level consists of models, formal and informal, math-
ematical, and conceptual, which are used in the theory and called
theory models. In the MSB model, they have the general form of
the named set M = (R(D), f, L). Here D is the set that consists
of the names of the objects studied, the names of their properties
and relations between them, the names of names, etc.; the names
of abstract properties and relations corresponding to properties and
relations of objects, to properties of their properties, etc.; and names
of ideal entities like truth values; while R(D) is a subset of S(D) and
S(D) is the set scale in the sense of (Bourbaki, 1960) with basis D.
This set scale includes D and all its elements, functions from D into
D, functions defined on these functions; the set of all subsets of D;
and so on. In addition, L is the scale of the properties of the elements
from R(D) and f is the (partial) function that assigns values of their
properties to the elements from R(D).
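The following Python fragment is a rough, purely illustrative sketch of the shape of such a theory model M = (R(D), f, L): D is a set of names, R(D) picks a few constructions over D from the set scale, L is a scale of values, and f is a partial assignment of values to elements of R(D). The concrete names, the chosen constructions, and the values are all assumptions made for the example.

# Illustrative sketch of a theory model M = (R(D), f, L).
D = {'electron', 'proton', 'charge', 'mass', 'truth'}

# R(D): the names themselves plus a few finite constructions over D
# (here, frozensets pairing an object name with a property name).
R_D = set(D) | {
    frozenset({'electron', 'charge'}),
    frozenset({'proton', 'charge'}),
}

L = {'negative', 'positive', 'defined', 'undefined'}   # the scale of properties

f = {
    frozenset({'electron', 'charge'}): 'negative',
    frozenset({'proton', 'charge'}): 'positive',
    'mass': 'defined',
}   # partial: not every element of R(D) is assigned a value

def model_value(x):
    # Look up f(x) for an element of R(D); None encodes "f is undefined here".
    if x not in R_D:
        raise ValueError("x does not belong to R(D)")
    return f.get(x)

print(model_value(frozenset({'electron', 'charge'})))  # negative
print(model_value('truth'))                            # None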
The ninth level contains relations and ties between theory models,
as well as their properties and parameters. For instance, an important
relation between theory models is “to be a submodel of”. Examples
of properties are “standard” and “non-standard”, namely, there are
standard and non-standard models.
The tenth level of the MSS is comprised of relations and ties between
relations and ties between theory models, as well as their proper-
ties and parameters. Using the structural hierarchy described in Sec-
tion 6.2, it is possible to come to higher levels of the MSS.
The eleventh level contains algebras and calculi of models, as well
as their properties and parameters. Examples of such algebras and
calculi are:
• Process algebras (Hennessy, 1988; Burgin and Smith, 2010).
• Process calculi (Hoare, 1985; Milner, 1989; 1999; Moller and Tofts,
1990).
• Algebras of abstract automata (Burgin, 2010d).
These algebras and calculi contain models of physical (usually,
computational) processes. Moreover, now there is a tendency to treat
physical, biological, psychological, and economic processes as computational processes.
In addition to parts and levels, the MSS has three basic strata
(cf., Figure 6.6):
— The assertoric stratum, which contains knowledge about the domain of the knowledge system, e.g., of a theory, for example, propositions describing the properties of the knowledge domain.
Figure 6.6. The horizontal structure of the MSS: the assertoric, erotetic, and hypothetic strata linked through the subsystem of bonds.

— The erotetic stratum, which contains knowledge on problems, tasks, and questions concerning the domain of the knowledge system, e.g., of a theory.
— The hypothetic stratum, which contains hypothetic, i.e., not sufficiently validated, knowledge about the domain of the knowledge system, e.g., of a theory, for example, conjectures about the properties of the knowledge domain.

The hypothetic and erotetic strata of the MSS also have several levels, inheriting them from the assertoric stratum of the knowledge system described above. However, while the assertoric stratum employs
validated models of knowledge objects and their properties and rela-
tions, the hypothetic stratum of the MSS uses hypothetic and heuris-
tic models of knowledge objects and their properties and relations.
It also contains calculi and algebras of such models.
The erotetic stratum of the MSS contains problems and questions
concerning models of knowledge objects, as well as their properties
and relations.
6.5. Procedural, axiological and instrumental


knowledge systems, and operational knowledge

If knowledge can create problems, it is not through ignorance that


we can solve them.
Isaac Asimov

Any comprehensive knowledge system contains operational knowl-


edge, which is organized in three subsystems:

• the PSS, which encompasses diverse operations, procedures, algo-


rithms, instructions, rules for operation, tasks, action and interac-
tion schemas, scenarios, processes and methods.
• the ASS, which comprises property representations, estimates, judgments, norms, standards, indices, indicators, criteria, benchmarks, measures, qualities, characteristics, attributes, quantities, goals, and values.
• the ISS, which contains descriptions of various instruments,
abstract automata and machines, mechanisms and devices.

For instance, any theory is a tool for scientific inquiry and


knowledge integration. Thus, a theory has different operations, proce-
dures, algorithms, rules, action schemas, and methods for perform-
ing these functions, as well as for estimating obtained results and representing the properties of studied objects. All these tools are contained in the PSS of the theory. The PSS of the theory has
three components: inner, internal, and external. The inner compo-
nent includes operations, procedures, algorithms, scenarios, rules,
and action schemas that are used inside the PSS. Algorithms that
build other algorithms or enumerate other algorithms are examples of
elements from the inner component of the PSS. The internal com-
ponent includes operations, procedures, algorithms, scenarios, rules,
and action schemas that are used inside the comprehensive knowledge
system, e.g., a theory, but outside its PSS. Inference rules are exam-
ples of elements from the internal component of the PSS. The external
component includes operations, procedures, algorithms, scenarios,
rules, and action schemas that are used outside the comprehensive
knowledge system, e.g., a theory. Measurement and approximation
algorithms and procedures are examples of elements from the exter-


nal component of the PSS of physical theories.
Similar to other subsystems of comprehensive knowledge systems,
the PSS is also organized hierarchically.
The first (basic) level of the PSS consists of elementary actions and
operations. Their structure and properties are modeled and studied
by means of various named sets or fundamental triads (cf., (Burgin,
2011)).
The second level of the PSS contains relations between and
properties of elementary actions and operations. For instance, arith-
metical operations of addition and multiplication are commuta-
tive and associative. An example of relations between arithmeti-
cal operations is reduction, for instance, multiplication is reduced
to addition.
Instructions and rules for performing elementary actions and
operations, such as transition rules in abstract automata, instruc-
tions of Turing machines or instructions in programming languages,
are elements of the third level of the PSS.
The fourth level consists of construction rules for integrating ele-
mentary actions and operations into algorithms, programs, proce-
dures, scenarios, processes and methods as well as construction rules
for systems that perform these actions and operations. Examples
of construction rules are sequential composition and parallel com-
position of automata and algorithms, as well as many operations
described in (Burgin, 2010d).
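For a concrete feel of such construction rules, here is a minimal illustrative Python sketch (not the composition operations defined in (Burgin, 2005b; 2010d)) in which algorithms are modeled simply as functions and two construction rules, sequential and parallel composition, build new algorithms from given ones.

# Illustrative sketch of two construction rules for algorithms,
# modeled here simply as Python functions.

def sequential(f, g):
    # Sequential composition: run f, then feed its result to g.
    return lambda x: g(f(x))

def parallel(f, g):
    # Parallel composition: run f and g on the same input, return both results.
    return lambda x: (f(x), g(x))

double = lambda n: 2 * n
increment = lambda n: n + 1

double_then_increment = sequential(double, increment)
both = parallel(double, increment)

print(double_then_increment(5))  # 11
print(both(5))                   # (10, 6)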
The fifth level of the PSS contains various procedures, algorithms,
tasks, programs, schemas, scenarios, processes, and methods built
from elementary actions and operations by construction rules and
instructions. As demonstrated in (Burgin, 2011), algorithms and
procedures are represented by named sets and systems of named sets,
such as named set chains.
The sixth level of the PSS contains relations between and properties of procedures, algorithms, tasks, schemas, programs, scenarios, processes, and methods from the fifth level. Examples of
such properties are time complexity, space complexity, computa-
tional complexity, communication complexity, power and reliability
of algorithms, procedures, and programs. Examples of such relations


are linguistic equivalence, functional equivalence, and reducibility of
algorithms, procedures, and programs.
The seventh level of the PSS contains systems of procedures, algorithms, tasks, schemas, scenarios, processes, and methods from the fifth level. Examples of such systems are software systems, classes of algorithms of the same type, such as the class of all finite automata or the class of all Turing machines (cf., Appendix B), and classes of algorithms that solve the same problem, e.g., algorithms of data compression.
The eighth level contains composition operations for algorithms,
automata, procedures, and other models of processes (cf., for exam-
ple, (Burgin, 2005b; 2010d)). Composition operations allow build-
ing new algorithms, processes and procedures from other algorithms,
processes and procedures. As algorithms, processes and procedures
are represented by named sets and systems of named sets, compo-
sitions of algorithms and procedures are represented by operations
with named sets and systems of named sets, such as operations with
named set chains.
The ninth level of the PSS contains algebras and calculi of algo-
rithms, processes and procedures (cf., for example, (Baeten, 1990;
Baeten and Bergstra, 1991; 1996; Bergstra and Klop, 1984; Burgin,
1997j; Burgin and Smith, 2010)), as well as algebras of operations
with algorithms, automata, procedures, and processes. Note that
such algebras and calculi also belong to the MSS.
The tenth level consists of construction rules and metarules (rules
for operation) for combining conventional algorithms, automata, and
procedures into algorithms, automata, and procedures of the second
level (Burgin and Gupta, 2012; Burgin and Debnath, 2010).
The eleventh level of the PSS consists of algorithms, automata
and procedures of the second level (Burgin and Gupta, 2012; Burgin
and Debnath, 2010).
Building algorithms of higher and higher levels, it is possible to go
to higher levels of the PSS.
Levels and parts of the PSS form its vertical structure (cf.,
Figure 6.7).
Figure 6.7. The vertical structure of the PSS: the componential level (data and operations), the attributed level (operators), and the productive level (algorithms, procedures, and scenarios), with the operating, operator, and algorithmic parts.

In addition to levels, the PSS has three basic strata (cf.,


Figure 6.8):
— The assertoric stratum contains operational knowledge in the
form of operations, procedures, algorithms, instructions, rules for
operation, tasks, action and interaction schemas, scenarios, pro-
cesses and methods.
— The erotetic stratum contains knowledge on problems, tasks and
questions about operations, procedures, algorithms, instructions,
rules for operation, tasks, action and interaction schemas, scenar-
ios, processes and methods.
— The hypothetic, or heuristic, stratum contains heuristic, i.e., not
sufficiently validated, and empirical, i.e., not theoretically based,
operations, procedures, algorithms, instructions, rules for opera-
tion, tasks, action and interaction schemas, scenarios, processes,
and methods.
Figure 6.8. The horizontal structure of the PSS: the assertoric, erotetic, and hypothetic strata linked through the subsystem of bonds.

The PSS is closely connected to the other two subsystems containing


operational knowledge. The closest to it is the ISS, which contains
descriptions of various instruments, abstract automata and machines,
mechanisms, and devices for performing operations, procedures, algo-
rithms, rules for operation and tasks from the PSS, e.g., measuring
or experimental devices.
Similar to other subsystems of comprehensive knowledge systems,
the ISS is also organized hierarchically.
The first (basic) level of the ISS consists of descriptions of
elements, components and parts of machines, devices, abstract
automata, and instruments that are related to the comprehensive
knowledge system in question. For instance, alphabets of abstract
automata or descriptions of the parts of experimental devices belong
to the first level. Taking such an instrument as a single-beam
absorption spectrometer, we see that it has such parts as the ampli-
fier and the detector (cf., for example, (Rothbart, 1997)).
The second level consists of properties of and relations between
the elements, components, and parts described on the first level. For
instance, rules of automata can be deterministic or non-deterministic.
An example of a relation is to “be after”, e.g., the amplifier is after


the detector in a single-beam absorption spectrometer.
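As a toy illustration of such first- and second-level descriptions (with invented states and symbols, not a description of any actual device), the components of a finite automaton and the property of its transition rules being deterministic can be written down as follows.

# Illustrative sketch: components of a finite automaton (first level of the ISS)
# and one property of its rules (second level): determinism.

alphabet = {'a', 'b'}
states = {'q0', 'q1'}
# Transition rules as a set of (state, symbol, next_state) triples.
rules = {
    ('q0', 'a', 'q1'),
    ('q0', 'b', 'q0'),
    ('q1', 'a', 'q0'),
    ('q1', 'a', 'q1'),   # two rules for ('q1', 'a') make the rule set non-deterministic
}

def is_deterministic(rules):
    # The rule set is deterministic if no (state, symbol) pair has two next states.
    seen = set()
    for state, symbol, _ in rules:
        if (state, symbol) in seen:
            return False
        seen.add((state, symbol))
    return True

print(is_deterministic(rules))  # False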
The third level consists of rules for constructing machines, devices, and instruments from their elements, components
and parts described on the first level.
The fourth level of the ISS contains descriptions of various
machines, devices, abstract automata and instruments built by con-
struction rules from their elements, components and parts.
The fifth level consists of properties of and relations between the
machines, devices, abstract automata and instruments constructed
on the fourth level.
The sixth level of the ISS contains systems of abstract automata,
such as finite automata; abstract machines, such as Turing machines,
random access machines, cellular automata, and inductive Turing
machines; descriptions of devices and instruments, such as computers
or networks.
The seventh level contains composition operations for abstract
automata, abstract machines, devices and instruments (cf., for exam-
ple, (Burgin, 2005b; 2010d) for operations with abstract automata
and abstract machines). Composition operations allow building new
abstract automata, machines, devices and instruments from other
abstract automata, machines, devices, and instruments.
The eighth level of the ISS contains algebras and calculi of abstract
automata and abstract machines (cf., for example, (Burgin, 2010d)).
Note that such algebras and calculi also belong to the MSS.
Levels and parts of the ISS form its vertical structure (cf.,
Figure 6.9).
In addition to levels, the instrumental subsystem has three basic
strata (cf., Figure 6.10):

— The assertoric stratum contains operational knowledge in the


form of abstract automata and descriptions of instruments,
devices and machines;
— The erotetic stratum contains knowledge on problems, tasks and
questions about abstract automata and descriptions of instru-
ments, devices and machines;
Figure 6.9. The vertical structure of the ISS, with its componential, attributed, and productive levels and its fragment, performer, and system parts, covering parts and components of automata; automata, machines, and devices; and systems and networks of automata.

— The hypothetic, or heuristic, stratum contains descriptions of


heuristic, i.e., not sufficiently validated, and empirical instru-
ments, devices and machines.

Figure 6.10. The horizontal structure of the ISS: the assertoric, erotetic, and hypothetic strata linked through the subsystem of bonds.

Finally, the ASS of a comprehensive knowledge system encompasses property representations, estimates, judgments, norms, goals, measures, indices, indicators, attributes, criteria, and values used in this system, e.g., in a scientific theory. Some of these objects are applied inside the knowledge system they belong to, while others are used for the knowledge domain. For instance, the truth value (from the ASS) is applied to the assertions (from the LSS) of a scientific/mathematical theory. There are also other judgments and estimates, such as the adequacy of a model, the time complexity or space complexity of an algorithm, and so forth, which are used in the knowledge system itself. Other objects from the axiological
subsystem, such as measures, criteria and goals, are used in the


knowledge domain. For instance, in the theory of algorithms, such
measures as time complexity, space complexity, and Kolmogorov complexity are used for the estimation of programs, software systems,
and concrete algorithms, such as approximation algorithms or search
programs.
The ASS is also organized hierarchically containing the following
levels.
Its first (basic) level contains names of properties, indices, indica-
tors, attributes, estimates, judgments, norms, goals, measures, crite-
ria, and values.
The second level of the ASS contains scales of properties,
attributes, measures and norms.
The third level consists of abstract properties and quantities,
which are a specific kind of named sets (cf., Section 5.3).
The fourth level of the ASS consists of natural properties, mea-
sures, and characteristics.
The fifth level contains various criteria, indices, indicators, and


attributes. Their structure and properties are modeled and studied
by means of an assortment of named sets.
The sixth level of the ASS contains a diversity of estimates, norms, judgments, goals, standards, values, and benchmarks.
The seventh level contains operations with abstract properties,
quantities, real properties, measures, characteristics, criteria, esti-
mates, norms, standards, values, and benchmarks.
The eighth level of the ASS contains algebras of abstract proper-
ties and quantities, which are systems of named sets. Their structure
and properties are also modeled and studied by means of various
named sets (cf., Section 3.6).
The ninth level of the ASS contains algebras of real properties,
measures, and characteristics, which are also represented by systems
of named sets.
The tenth level contains algebras of criteria.
The eleventh level of the ASS contains algebras of indices, indi-
cators, attributes, estimates, norms, standards, values, and bench-
marks.
Levels and parts of the ASS form its vertical structure (cf.,
Figure 6.11).
Figure 6.11. The vertical structure of the ASS: the componential level (scales), the attributed level (values, norms, and estimates), and the productive level (algebras and calculi of estimates, norms, and values), with the scaling, evaluation, and combination parts.

In addition to levels, the axiological subsystem has three basic strata (cf., Figure 6.12):

— The assertoric stratum contains validated axiological knowledge in the form of estimates, norms, judgments, standards, criteria, measures, and values;
— The erotetic stratum contains knowledge on problems, tasks, and questions about these estimates, norms, judgments, standards, criteria, measures, and values;
— The hypothetic, or heuristic, stratum contains hypothetic, i.e., not sufficiently validated, and empirical estimates, norms, judgments, standards, criteria, measures, and values.

Figure 6.12. The horizontal structure of the ASS: the assertoric, erotetic, and hypothetic strata linked through the subsystem of bonds.

To conclude, it is necessary to remark that the described structuration of subsystems works efficiently in the studies of the statics and dynamics of scientific knowledge in general and of mathematical

and scientific theories, in particular (cf., (Burgin and Kuznetsov,


1989; 1989a; 1991; 1992; 1993; 1994; Balzer et al., 1991)). However, it
is not unique because it is possible to separate other levels, parts, and
strata. The choice of structuration depends on the studied knowledge
system and on the problems being solved.
The BSS inherits structuration into different levels, parts, and
strata from other subsystems of the knowledge system.

6.6. Relations between and operations with global


knowledge systems
The fact that scientists do not consciously practice a formal
methodology is very poor evidence that no such methodology exists. It
could be said–has been said–that there is a distinctive methodology of
science which scientists practice unwittingly, like the chap in Moliere
who found that all his life, unknowingly, he had been speaking prose.
Peter B. Medawar

It is natural to define set-theoretical relations between and operations with knowledge systems, treating these systems as sets of knowledge items or knowledge units, e.g., sets of propositions.
Here are some set-theoretical relations.
1. Set-theoretical inclusion of knowledge systems: A knowledge sys-
tem A is a subsystem of a knowledge system B if A is a subset
of B.
2. The binary relation to be disjoint: Knowledge systems A and B
are disjoint if A does not intersect with B.
Here are some set-theoretical operations.
1. Set-theoretical union of knowledge systems: A knowledge system
C is the union of knowledge systems A and B if C consists of
knowledge items from A and from B.
2. Set-theoretical intersection of knowledge systems: A knowledge
system D is the intersection of knowledge systems A and B if
D consists of all knowledge items that belong both to A and
to B.
3. Set-theoretical difference of knowledge systems: A knowledge system E is the difference of knowledge systems A and B if E consists of all knowledge items that belong to A but not to B.

For instance, knowledge about cats and dogs is the union of knowl-
edge about cats and knowledge about dogs.
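Treating knowledge systems simply as sets of knowledge items, these relations and operations can be illustrated with a few lines of Python (the items here are invented strings standing in for much richer knowledge units).

# Illustrative sketch: knowledge systems as plain sets of knowledge items.
cats = {'cats purr', 'cats are mammals', 'mammals are animals'}
dogs = {'dogs bark', 'dogs are mammals', 'mammals are animals'}

# Set-theoretical relations
print(cats <= (cats | dogs))       # True: cats is a subsystem of the union
print(cats.isdisjoint(dogs))       # False: they share 'mammals are animals'

# Set-theoretical operations
union = cats | dogs                # knowledge about cats and dogs
intersection = cats & dogs         # knowledge common to both systems
difference = cats - dogs           # knowledge about cats that is not about dogs

print(intersection)                # {'mammals are animals'}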
It is also possible to define structural relations between and struc-
tural operations with knowledge systems that have some structure,
e.g., a propositional algebra, i.e., a set of propositions closed with
respect to logical operations of disjunction, conjunction, implication
and negation, or a logical calculus (cf., Section 5.2).
Here are some of these relations.

1. The binary relation to be a structural subsystem: A knowledge


system A is a structural subsystem of a knowledge system B if
A is a subset of B inheriting its structure from B. For instance,
a subcalculus A of a calculus B is a structural subsystem of B.
A subtheory R of a formal theory T is a structural subsystem
of T .
2. The binary relation to be a structural extension: A knowledge sys-
tem B is a structural extension of a knowledge system A if B is a
structured system that contains A. For instance, any calculus B
that contains a set of formulas A is a structural extension of A.
3. The binary relation to be a strict structural extension: A knowl-
edge system B is a strict structural extension of a knowledge sys-
tem A if B is a minimal structured system that contains A. For
instance, a minimal calculus B that contains a set of formulas A
is a strict structural extension of A.

Here are examples of structural operations.

1. The logical intersection of axiomatic theories (calculi) T1 and T2


is built by taking the intersection A of the axioms of T1 and T2
and generating a theory from the axioms in A.
2. The logical union of axiomatic theories (calculi) T1 and T2 is built
by taking the union D of the axioms of T1 and T2 and generating
a theory from the axioms in D.
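Continuing the earlier toy calculus sketch (same simplifying assumptions: string formulas, implication triples, and modus ponens as the only rule), the logical intersection and the logical union of two axiomatic theories can be illustrated by intersecting or uniting their axiom sets and then regenerating the theorems.

# Illustrative sketch of the logical intersection and union of axiomatic theories.
def close(axioms, max_passes=10):
    # Generate a theory (its set of theorems) from a set of axioms by modus ponens.
    theorems = set(axioms)
    for _ in range(max_passes):
        new = {f[2] for f in theorems
               if isinstance(f, tuple) and f[0] == '->' and f[1] in theorems}
        if new <= theorems:
            break
        theorems |= new
    return theorems

axioms1 = {'p', ('->', 'p', 'q')}
axioms2 = {'p', ('->', 'p', 'r')}

logical_intersection = close(axioms1 & axioms2)   # generated from the shared axioms
logical_union = close(axioms1 | axioms2)          # generated from all axioms together

print(logical_intersection)                        # {'p'}
print('q' in logical_union, 'r' in logical_union)  # True True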
To organize theory-elements on the second level of the theory hier-


archy of the knowledge-bounded outer structure, structuralists use
the specialization relation σ between theory-elements, which means
that one theory is a specialization of another theory (Sneed, 1971).
Namely, we have the relation T1 →σ T2, when the theory T2 is constructed from the theory T1 by adding additional conditions (laws or axioms). For instance, the theory of
differentiable manifolds is a specialization of the theory of topological
manifolds.
Functioning of science and development of scientific knowledge
often involves the construction relation β between theory-elements.
Namely, we have the relation T1 →β T2, when the basic concepts of the theory (knowledge system) T2 are constructed from the basic concepts of the theory (knowledge system) T1.
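One convenient way to picture these inter-theoretical relations (a toy illustration, not part of the structuralist or MSB formalism) is as a labeled directed graph; the Python sketch below records σ-edges and β-edges between named theories, using examples mentioned in the text.

# Illustrative sketch: a theory-net as a labeled directed graph whose edges
# carry the inter-theoretical relation ('sigma' = specialization, 'beta' = construction).
edges = [
    ('theory of topological manifolds', 'theory of differentiable manifolds', 'sigma'),
    ('theory of abelian groups', 'field theory', 'beta'),   # fields are built from abelian groups
]

def related(t1, t2, relation, edges):
    # Check whether theory t2 stands in the given relation to theory t1.
    return (t1, t2, relation) in edges

def specializations(theory, edges):
    # All theories obtained from the given theory by the specialization relation sigma.
    return [t2 for (t1, t2, rel) in edges if t1 == theory and rel == 'sigma']

print(related('theory of topological manifolds',
              'theory of differentiable manifolds', 'sigma', edges))   # True
print(specializations('theory of topological manifolds', edges))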
There are three basic operations with formal theories, which are
nuclear (in this case logical) global knowledge systems: expansion,
contraction, and revision. The first two of them are binary opera-
tions, in which the first argument is a formal theory and the second
argument is an arbitrary set of formulas, while the third one is a
ternary operation, in which the first argument is a formal theory and
the second and the third arguments are arbitrary sets of formulas.
To define these operations in a formal way, we select a system of
consistency conditions C and assume that all considered theories are
consistent with respect to C (cf., Section 2.3.2). The model example
of consistency for formal theories is classical consistency, according to which a theory T is consistent if it does not contain both a formula and its negation.

Definition 6.6.1. Given a formal theory T and a set of formulas F ,


the expansion Exp(T , F ) of T by F is a minimal consistent formal
theory Q that contains both T and F .
Note that if there is no consistent formal theory Q that contains


both T and F , the expansion Exp(T , F ) is not defined. This happens, for example, when F contains a formula A and T contains its negation.
Proposition 6.6.1. If the intersection of consistent with respect to
C formal theories is a consistent with respect to C formal theory,
then the expansion Exp(T , F ) is defined in a unique way.
Otherwise, expansion of formal theories is a multivalued
operation.
Definition 6.6.2. Given a formal theory T and a set of formulas F ,
the contraction Cnt(T , F ) of T by F is a maximal consistent formal
subtheory Q of T that does not contain F .
Note that to build the contraction Cnt(T , F ) of T by F , we need
to exclude from T not only formulas from F but also such formulas
that allow deduction of formulas from F .
Proposition 6.6.2. If the logical closure of consistent with respect to C formal theories without formulas from F is a consistent with respect to C formal theory that does not contain formulas from F , then the contraction Cnt(T , F ) is defined in a unique way.
Note that contraction of formal theories is a multivalued operation
in the general case.
Definition 6.6.3. (a) Given a formal theory T and two sets of for-
mulas F and G, the positive revision Rvnp(T , F , G) of T by F and G is a minimal consistent formal theory Q that contains both T and the
set F \G.
(b) Given a formal theory T and two sets of formulas F and G,
the negative revision Rvnn(T , F , G) of T by F and G is the largest
consistent formal subtheory Q of T that does not contain the set
G\F .
Proposition 6.6.3. If sets of formulas F and G do not intersect,
then the negative revision Rvnn(T, F, G) coincides with the contrac-
tion Cnt(T, G) and the positive revision Rvnp(T, F, G) coincides with
the expansion Exp(T, F ).
Indeed, if sets of formulas F and G do not intersect, then F \G =


F and G\F = G.
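To see the shape of these operations in the simplest possible setting, here is an illustrative Python sketch over finite sets of literals, where deductive closure is ignored and consistency means that no literal occurs together with its negation. Under these simplifying assumptions the minimal and maximal theories of Definitions 6.6.1, 6.6.2, and 6.6.3 can be written down directly; all concrete formulas are invented for the example.

# Illustrative sketch of expansion, contraction, and revision for theories
# modeled as finite sets of literals, e.g., 'p' and its negation '~p'.
def negate(f):
    return f[1:] if f.startswith('~') else '~' + f

def consistent(theory):
    return all(negate(f) not in theory for f in theory)

def expansion(T, F):
    # Exp(T, F): the smallest consistent theory containing T and F (None if impossible).
    Q = set(T) | set(F)
    return Q if consistent(Q) else None

def contraction(T, F):
    # Cnt(T, F): the largest subtheory of T containing no formula from F.
    return set(T) - set(F)

def positive_revision(T, F, G):
    # Rvnp(T, F, G): expand T by the formulas of F that are not in G.
    return expansion(T, set(F) - set(G))

def negative_revision(T, F, G):
    # Rvnn(T, F, G): contract T by the formulas of G that are not in F.
    return contraction(T, set(G) - set(F))

T = {'p', 'q'}
print(expansion(T, {'r'}))                        # {'p', 'q', 'r'}
print(expansion(T, {'~p'}))                       # None: nothing consistent contains p and ~p
print(contraction(T, {'q'}))                      # {'p'}
print(positive_revision(T, {'r', 's'}, {'s'}))    # {'p', 'q', 'r'}
print(negative_revision(T, {'q'}, {'q', 'p'}))    # {'q'}  (only p is removed)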
An important operation is structuration of knowledge systems.
For instance, structuration of quantum knowledge is described in
Chapter 4. Structuration of full-size (global) knowledge systems is
described in Section 6.2.
A special case of structuration is stratification of a knowledge
system. For instance, stratification of full-size (global) knowledge
systems into assertoric, erotetic, and hypothetic strata is described
in Section 6.2. Abstraction of knowledge items also induces strati-
fication of knowledge into levels of abstractions (Section 2.3.8). In
Section 2.4, metaknowledge is stratified.

6.7. Hierarchies of knowledge systems

The only good is knowledge and the only evil is ignorance.


Socrates

Knowledge in science is not always well organized on the global plane.


That is why, we consider here only the hierarchy of scientific knowl-
edge, which is properly structured by various relations. This hierar-
chical structure was elaborated in the structuralist direction of the
methodology of science in the form of a theory-net, theory-evolution,
and theory-holon, representing the structural hierarchy of scientific
knowledge in the knowledge-bounded outer structure of a scientific
theory T (Sneed, 1971; Balzer et al., 1987). Here we further develop
this approach.
At the first (lowest) level of the theory hierarchy, structuralists
logically position theory-elements.

Definition 6.7.1. A theory-element is the smallest unit regarded as


a theory.
For instance, propositional calculus and number theory are
theory-elements, while theory of groups and topology are theory-
nets.
The knowledge-bounded outer structure of a scientific theory T
is represented by theory-nets, theory-evolutions and theory-holons
that contain T . Concepts of a theory-net, theory-evolution and theory-


holon were introduced in the structuralist model of scientific knowl-
edge (Sneed, 1979; Balzer et al., 1987).
To organize theory-elements on the second level of the theory hier-
archy of the knowledge-bounded outer structure, structuralists use
the specialization relation σ between theory-elements, which means
that one theory is a specialization of another theory. Namely, we have
the relation T1 →σ T2, when the theory T2 is constructed from the theory T1 by adding
additional conditions (laws or axioms). For instance, the theory of
differentiable manifolds is a specialization of the theory of topological
manifolds.
At the second level of the theory hierarchy, structuralists place
theory-nets.

Definition 6.7.2. A theory-net is a net of theory-elements connected


by the specialization relation σ.
An example of a structuralist theory-net in algebra is given in
Figure 6.13.

Figure 6.13. A part of the theory-net “Algebra”: the theory of universal algebras specializes into semigroup theory, ring theory, and module theory; these in turn specialize into group theory, field theory, the theory of associative rings, and the theory of vector spaces; and group theory further specializes into the theories of abelian groups and finite groups.

An important relation between theory-nets is inclusion, when one theory-net is a part of another theory-net. For instance, the theory-net “Topological algebra” is included (as a subnet) into both the theory-net “Algebra” and the theory-net “Topology.”
Scientific theories are not static systems. They are born, develop, and often pass into another world, from science to the history of science. To represent theory dynamics, structuralists introduced
the concept theory-evolution.

Definition 6.7.3. A theory-evolution is a theory-net “moving”


through historical time.
It is possible to find examples of theory-evolutions in (Sneed, 1971;
Balzer et al., 1987).
Besides, theory-nets form a hierarchical structure. To reflect this
structure, the concept theory-holon was introduced.

Definition 6.7.4. A theory-holon is a complex of theory-nets tied


by “essential” links.
Any big field in mathematics or in physics has to be represented
by a theory-holon. In essence, mathematics as a whole and theoretical physics as a whole are theory-holons.
The structuralist model gives a realistic picture of theoretical
knowledge, when it includes several theories. However, to have a more
exact and complete model of theoretical knowledge, the structure-
nominative model was elaborated. Here we go to a higher level, build-
ing the MSB model of advanced knowledge systems. In this model,
not only the specialization relation σ is used to form a mathematical
description of a theory-net but also other inter-theoretical relations
are employed. For instance, functioning of science and development
of scientific knowledge often involves the construction relation β
between theory-elements. Namely, we have the relation T1 →β T2,
when the basic concepts of the theory (knowledge system) T2
are constructed from the basic concepts of the theory (knowledge
system) T1 .
For instance, in algebra, since a field F is an abelian group with respect to addition and the set F \{0}, i.e., all elements of F except 0, is an abelian group with respect to multiplication, the concept field is based on the concept abelian group. As a result, the structure-
nominative model of the knowledge-bounded outer structure of a
scientific theory T , i.e., a theory-net containing T , can be different


from the corresponding structuralist model (theory-net).
There are also other inter-theoretical relations, i.e., relations
between theories, such as “to be a logical extension” (Shoenfield, 2001) and “to be a functional extension” (Burgin and Kuznetsov, 1985).
Besides, levels of theoreticity, abstraction, constructivity, and struc-
turation also define specific relations between theory-elements in a
theory-net (Burgin and Kuznetsov, 1994).
Based on these considerations, here we build the MSB hierarchy
of advanced theoretical knowledge systems.

Definition 6.7.5. A theory-component is a system of knowledge that


plays the role of a component of a theory.
In the SNR, five theory-components are discerned on the upper
level of a theoretical knowledge system: the LSS, the MSS, the PPS,
the PHS, and the subsystem of ties (ST). On the lower level of the
theory inner hierarchy, the structure-nominative model has such com-
ponents as the ASS, the procedural (operational) subsystem, logical
subsystem, and linguistic subsystem.
In the MSB model, five different theory-components are discerned on the upper level of a theoretical knowledge system: the LSS, the MSS, the PSS, the ASS, and the ISS, together with the subsystem of bonds or ties (BSS) of a complete knowledge system. In addition, each level of each subsystem is also a
theory-component (cf., Section 6.1).

Definition 6.7.6. A theory-unit is a system of knowledge treated as


a theory.
Analytical mechanics, quantum mechanics, group theory, number
theory, theory of manifolds, and probability theory are examples of
theory-units.
As in the structuralist model, the knowledge-bounded outer
structure of a scientific theory T is represented by theory-nets,
theory-evolutions and theory-holons that contain T . In addition,
theory-net-evolutions and theory-holon-evolutions are also intro-
duced to represent dynamics not only of theory-units but also of
theory-nets, theory-evolutions, and theory-holons.
Figure 6.14. A structure-nominative theory-net in topology: topology specializes into general topology, algebraic topology, and differential topology; these in turn specialize into the theory of topological groups, the topology of surfaces, and the topology of manifolds; and the theory of Lie groups appears as a further specialization of both the theory of topological groups and the topology of manifolds.

Definition 6.7.7. A theory-net is a net of theory-units connected


by inter-theoretical relations.
An example of a structure-nominative theory-net in topology is
given in Figure 6.14.
It is useful to understand that it is possible to treat some theo-
ries as theory-units in some situations and as theory-nets in other
situations. Analytical mechanics, quantum mechanics, group theory,
the calculus and probability theory are examples of such theories.
At the same time, there are theoretical knowledge systems, such as
quantum theory or algebra, that are definitely theory-nets.
Although theory-nets can be very big, at some level of complexity the philosophy and methodology of science encounter a qualitative alteration in complexity, which demands another concept for its representation.

Definition 6.7.8. A theory-holon is a complex of theory-nets tied


by knowledge-related links.
Global knowledge systems in general and scientific theories in particular are not static systems. They go through various transformations after emergence. To represent global knowledge dynamics, we introduce
the following concepts.

Definition 6.7.9. A theory-state is a theory-unit taken at a definite


interval of time.
For instance, it is possible to consider quantum mechanics in 1930
or number theory in 2000 as examples of theory-states.
Definition 6.7.10. A theory-evolution is a sequence of theory-states


taken at successive intervals of time.
For instance, it is possible to consider the sequence of the states
of quantum mechanics in 1930, 1940, 1950, and 1960 as a theory-
evolution of this discipline. We have another theory-evolution taking
the sequence of the states of number theory in 1900, 1925, 1950, 1975,
and 2000.
Not only scientific theories are changing with time. Theory-nets
and theory-holons are also changing. To represent their dynamics,
we introduce the following concepts.

Definition 6.7.11. A theory-net-state is a theory-net taken at a defi-


nite interval of time.
For instance, it is possible to consider quantum mechanics in 1930
or number theory in 2000 as examples of theory-net-states.

Definition 6.7.12. A theory-net-evolution is a sequence of theory-


net-states taken at successive intervals of time.
For instance, it is possible to consider the sequence of the states
of quantum mechanics in 1930, 1940, 1950, and 1960 as a theory-net-
evolution. We have another theory-net-evolution taking the sequence
of the states of topology in 1920, 1940, 1960, 1980, and 2000.

Definition 6.7.13. A theory-holon-state is a theory-holon taken at


a definite interval of time.
For instance, it is possible to consider quantum theory in 1930 or
mathematics in 2000 as examples of theory-holon-states.

Definition 6.7.14. A theory-holon-evolution is a sequence of theory-


holon-states taken at successive intervals of time.
For instance, it is possible to consider the sequence of the states
of physics in 1930, 1940, 1950 and 1960 as a theory-holon-evolution.
We have another theory-holon-evolution taking the sequence of the
states of mathematics in 1900, 1925, 1950, 1975, and 2000. Named
set chains give a natural mathematical representation for theory-
evolutions, theory-net-evolutions and theory-holon-evolutions (Bur-
gin, 2008a; 2011).
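As a final toy illustration (a sketch of the idea only, not the named set chain formalism), a theory-evolution in the sense of Definition 6.7.10 can be recorded as a time-indexed sequence of theory-states; the years and the placeholder content are invented.

# Illustrative sketch: a theory-evolution as a sequence of theory-states,
# each state being the (placeholder) content of a theory-unit at a given year.
evolution = [
    (1900, {'axiom 1', 'axiom 2'}),
    (1925, {'axiom 1', 'axiom 2', 'theorem A'}),
    (1950, {'axiom 1', 'axiom 2', 'theorem A', 'theorem B'}),
]

def state_at(evolution, year):
    # Return the latest theory-state recorded at or before the given year.
    states = [content for (t, content) in evolution if t <= year]
    return states[-1] if states else set()

print(sorted(state_at(evolution, 1930)))   # ['axiom 1', 'axiom 2', 'theorem A']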
Chapter 7

Knowledge Production, Acquisition,


Engineering, and Application

It is better to learn late than never.


Publilius Syrus

In the previous chapters, we studied knowledge properties, their


evaluation and verification, as well as knowledge structures on dif-
ferent levels. The goal of this chapter is to describe activities and
processes going on in the knowledge universe, which consists of
a diversity of knowledge systems and items together with their
representations and carriers. We consider such processes and activi-
ties as cognition, knowledge production, learning, knowledge acqui-
sition, knowledge discovery, reasoning, knowledge management and
application.
It is necessary to understand that these processes and activities
utilize data and knowledge representations, e.g., texts, schemas, for-
mulas, etc., which, as we have found before, are not knowledge itself
as the same knowledge can have different representations. However,
working with representations, we process knowledge, which is a very
important kind of structures (cf., Section 8.2).

7.1. Knowledge production, learning, and acquisition


as basic cognitive processes

Knowledge has to be improved, challenged, and increased


constantly, or it vanishes.
Peter Drucker

There are many processes in which people obtain knowledge, as


well as many names for these processes — cognition, knowledge
production, knowledge acquisition, knowledge creation, knowledge
capture, learning, knowledge reception, experience, observation,
experimentation, thinking, reasoning, perception, knowledge discern-
ment, knowledge apprehension, understanding, judgment, knowledge
comprehension, knowledge grasp, insight, knowledge purchase, and
knowledge discovery. There are different interpretations of and opin-
ions on the meaning of these terms. Here we take cognition as the
most generic term for obtaining knowledge and contemplate other
processes in this area as specific kinds and types of cognition.
In addition, we ascribe cognitive abilities not only to people but
also to other cognitive or epistemic systems such as intelligent tech-
nical systems, e.g., computers with the corresponding software, orga-
nizations, social groups, and communities.
Cognitive studies have explicated three basic ways of cognition:
1. Knowledge creation or production, when new knowledge is produced.
2. Knowledge acquisition/capture, which may be treated as proac-
tive learning where the learner (cognizer) actively seeks existing
knowledge in some area.
3. Knowledge reception or reflexive learning, when knowledge carriers,
such as instructions or statements, are given (are sent) to the
learner (cognizer).
In what follows, we study these ways of cognition in more detail.
Creation (production) of knowledge goes on at three major levels: personal, group, and social. In this process, personal, group, and social intelligence and creativity applied to personal/group/social knowledge often produce (create) new knowledge.
On the personal level, according to (Polanyi, 1966), knowledge


creation involves an ongoing process of transformation and integra-
tion of existing explicit and tacit items of knowledge. It is a highly
personal process, which depends on many factors: on the abilities
and knowledge of the individual, motivation, particular situation and
person’s perception of the situation (Polanyi, 1958).
Although some researchers think that knowledge is created only
by individuals, the history of humankind shows that knowledge is also
created by groups, communities and society as a whole but different
individuals play different (if any) roles in this creation. For instance,
the initial knowledge in the mathematical calculus was created by
the group of two people — Isaac Newton (1642–1727) and Gottfried
Wilhelm Leibniz (1646–1716). It is assumed that both mathemati-
cians did this independently. At the same time, some groups work
together in knowledge creation. For instance, the group of two math-
ematicians Kenneth Appel and Wolfgang Haken created knowledge
that the Four-Color Conjecture is true by proving the Four-Color The-
orem, which states that any map drawn on a plane can be colored with
only four colors so that no two adjacent countries have the same color
(Appel and Haken, 1977). In contrast to Newton and Leibniz, they
worked together proving the Four-Color Theorem.
Mathematical knowledge as a whole is created by the mathe-
matical community, in which mathematicians interact, discuss their
results and use knowledge obtained by other mathematicians.
It is necessary to explain that although knowledge creation and
knowledge production are placed in the same category of processes,
they are different when treated on a deeper level. Here we employ
the following definitions.

Definition 7.1.1. Creation in general and knowledge creation, in


particular, is an individual, often unique action.

Definition 7.1.2. Production in general and knowledge production,


in particular, is a process that consists of separate actions of creation.
It is possible to find formalization of the concepts action and
process in (Burgin and Smith, 2010).

Knowledge is acquired by an epistemic system E under the action


of information, i.e., knowledge is the result of information impact.
Note that it is possible to treat knowledge production as a kind of
acquisition of knowledge from the system that (who) produces this
knowledge. However, this will be ungrounded if we presuppose that
acquisition of some knowledge means that this knowledge already
exists when acquisition starts.
Procedural/instructional knowledge/information production and
creation has three forms:

— Generation of new knowledge/information;


— Transformation of existing knowledge/information into new infor-
mation/knowledge; it is a kind of knowledge/information evolu-
tion;
— Reorganization/restructuring of existing information/knowledge
with the goal to obtain new knowledge/information.

This classification essentially depends on the utilized classifica-


tions of instructions/rules. For instance, if deduction rules, such as
modus ponens, are treated as generation instructions, then deduction
is knowledge generation. However, if deduction rules are interpreted
as transformation instructions, then deduction is knowledge trans-
formation.
The difference between knowledge transformation and knowl-
edge restructuring is also relative. When we consider knowledge
as a whole, then knowledge restructuring is a kind of knowl-
edge transformation. However, when we deal with knowledge items
as separate objects, then knowledge transformation changes these
items, while knowledge restructuring changes only relations between
these items.
Tuomi (1999) separates three modes of knowledge generation:

— Anticipation when the model of a phenomenon we have changes


(breaks down), producing new knowledge from the tension
between anticipation and what is observed.
— Appropriation of knowledge existing in another system.

— Articulation means explication and reconfiguration of epis-


temic relationships within the meaning system available for the
cognizer.

Three basic ways of cognition considered above correspond to


three main sources of knowledge/information:

1. Personal and social intelligence and creativity applied to per-


sonal/social knowledge and data can give new knowledge.
2. Material activity, such as observation, experimentation, request
for information, work, search in books, databases and on the Inter-
net, communication, etc., is a rich source of knowledge and
experience. In turn, experience can give new knowledge to those
who can learn from experience. Note that people can learn from
their own experience and from experience of others.
3. Carriers of knowledge, for example, other people, books,
knowledge bases, computers, and networks, such as the Internet,
can provide knowledge on their own, e.g., people give lectures
or companies disseminate information about their products and
services.

Note that there are people who are creative but lack intelligence
and there are intelligent people who are deficient in creative skills.
There are different forms of material activity that involve knowl-
edge processes:

1. Observation, which is typically reflexive learning.


2. Experimentation, which is frequently proactive learning.
3. Practical activity, which can include all kinds of learning.
4. Interaction and its special kind communication.
5. Search as intellectual activity.
6. Selection as intellectual activity.
7. Games, which can include other material activities.

Note that observation, experimentation and practical activity can


be material, e.g., observation of planet movements, as well as mental,
e.g., experimentation in mathematics (cf., for example, (Poincaré,
1902; 1905; 1908; Burgin, 1998) or computational, e.g., computer


simulation of driving a car.
Three basic stages of knowledge acquisition by a cognitive (intel-
ligent) system are:

— Information search and selection,


— Information extraction, acquisition and accumulation,
— Transformation of information into knowledge.

All these stages are based on knowledge of the cognizing system.


Therefore, it is possible to say that knowledge generates (increases)
knowledge. Although in some cases, it may be false knowledge.
Perception as the process of creation of perceptual knowledge
depends on the use of knowledge stored in the system and not avail-
able in the immediate sensory input. This peculiarity makes it possible
to recognize and remember objects and scenes that make sense, as
opposed to those that do not (Biederman et al., 1974; Potter, 1975).
Although in this case the implicit knowledge is used unconsciously,
the perceiver would normally be able to identify and describe its
source; in other cases, however, the knowledge source is still not apparent
to the observer. For instance, some researchers suggest that motion
perception appears to involve implicit familiarity with the physics of
transparency (Stoner et al., 1990).
It is interesting that there are situations when false knowledge can
help to acquire true knowledge. Usually it happens when an individ-
ual, e.g., a scientist, takes some existing knowledge, makes observa-
tions and experiments and then based on these observations and
experiments obtains true (correct) knowledge. For instance, taking
false knowledge that the Sun rotates around the Earth, Copernicus
came to the conclusion that this was not true and the true knowledge
was that the Earth rotated around the Sun.
In a similar situation, Galileo had false knowledge that heavy
objects fell faster than lighter ones because Aristotle had written
so. In spite of this, Galileo decided to test this knowledge by dropping
two different weights from the Leaning Tower of Pisa. He found that
they landed at the same time. After repeating this experiment many
times, Galileo correctly concluded that the velocity of a falling body


did not depend on its weight. This was true knowledge.
In these cases, researchers abandoned false knowledge after find-
ing true knowledge. However, there are situations when false knowl-
edge is not discarded by acquisition of true knowledge. The most
famous example is Maxwell’s inference of the formula of gas pres-
sure. According to Fabrikant (1985), deriving this formula, Maxwell,
at first, made one mistake, obtaining incorrect knowledge, but then,
continuing his derivation, he made another mistake, which compen-
sated for the first one. As a result, Maxwell obtained the correct
formula, which represented true knowledge.
There are three basic sources in knowledge acquisition: from
practice/experience, from reasoning/thinking and from author-
ity/opinion. Namely, we have the following classification:

1. Knowledge acquisition by practice/experience means that the cog-


nitive agent gets knowledge from practical activity, e.g., it allows
better achieving some goals, and our experience gives evidence for
this.
2. Knowledge acquisition from authority/opinion means that an epis-
temic structure (knowledge) is acquired from opinion, which is
usually held as an authoritative one. Note that it may be an opin-
ion of an individual or of a social group taken from some source,
such as a book, magazine, or the Internet.
3. Knowledge acquisition by reasoning/thinking is performed in the
mentality of the cognitive agent and is explicit justification.

However, this traditional classification is incomplete. To show


this, we use the Componential Triune Brain model (CTBM) devel-
oped in (Burgin, 2010) and utilized in Section 7.2 for understanding
intuition. According to this model, the brain has three basic compo-
nents — the System of Rational Intelligence — SRI (also called the
System of Reasoning), the System of Emotions (the System of Affec-
tive States — SAS) and the System of Will and Instinct — SWI.
Each of these systems works as a source in knowledge acquisition
but only the SRI is taken into account in the above classification.
Consequently, there are two other kinds of mental-based knowledge
acquisition when the cognitive agent gets knowledge by emotions


and by instructions/assertions. Therefore, the third source has to be
interpreted in the following way:
3a. Knowledge acquisition by mental information processing is
performed in the mentality of the cognitive agent and can be either
explicit justification when it is reasoning or instruction, or implicit
justification when it comes from emotions or will.
There are three types of knowledge/information search:

— internal when a system searches in its own knowledge/informa-


tion storages such as memory, database or knowledge base;
— external when a system searches in its environment;
— mixed when a system searches everywhere.

Search and selection of relevant knowledge/information are cog-


nitive processes based on knowledge that the system already has.
Indeed, it is necessary, at least, to know what to search, where to
search, and what tools are suitable for the search. For instance, per-
ception, recognition and recollection of objects and scenes depend
on the use of knowledge stored in the system and demand
selection of relevant schemas (Biederman et al., 1974; Potter,
1975).
There are three basic types of knowledge/information produc-
tion/creation:

— Knowledge/information production by reasoning, e.g., by logical


inference, which includes deduction, induction, and abduction;
— Procedural/instructional knowledge/information production,
e.g., by construction, by experimentation or by transformation;
— Intuitive knowledge/information production by subconscious
contemplation, by guessing or by emotions, e.g., the so-called,
gut feeling.

According to Popper (1979), the aptitude for solving prob-


lems is “a creative ability to produce new guesses, and more new
guesses” and growth of knowledge is due to a procedure of “trial and
error”.

There are three basic types of knowledge capture or proactive


learning:
• knowledge capture by search,
• knowledge capture by selection,
• knowledge capture by explication or extraction.
Often these processes (actions) go one after another forming the
proactive learning cycle.
There are also three basic types of knowledge reception or reflexive
learning:
◦ knowledge reception by exchange;
◦ knowledge reception by inquiry;
◦ knowledge reception by acceptance of what is coming.
The main components of cognitive proficiency are intelligence,
persistence, and creativity. There are various theories of human intel-
ligence, as well as of artificial intelligence (AI), for which human
intelligence is taken as the model.
Usually it is assumed that intelligence is based on the intellect
of a person, which is a subsystem of personality, i.e., some struc-
ture. Another approach interpreted intelligence in the same way as
the intellect, treating it as a static property of an individual (such
as gifts, will, or moral beliefs). Thus, intellect and intelligence have
been considered as static entities related to a human being, while
in reality they are always displayed in behavior and,
more exactly, in the intellectual activity of a person.
Consequently, many proponents of behavior-based AI (cf., for
example, (Pfeifer and Scheier, 1999; Markman, 2000)) argue that
the traditional methods for studying intelligence have failed to pro-
vide sufficient insight into what intelligence is and how it works. The
root of this failure, they contend, is that cognitive science has focused
on complex internal mental processes to the exclusion of the factors
that permit intelligent agents to interact with their environment.
However, psychology has its own approach to behavioral aspects
of intelligence. It is based on intellectual activity. The term
intellectual activity has been extensively used by different authors for
a long time, but it has not been sufficiently exact, lacking an adequate
and efficient definition (Bogoyavlenskaya, 1983). Consequently, this
absence has caused many difficulties in the understanding and investi-
gation of this important phenomenon.
Such an exact definition was constructed by Burgin (1995d; 1996a;
1998a). According to this definition, intellectual activity is a mean-
ingful functioning of the mind (intelligent thinking).
This definition provides for the dynamic expression of human
intelligence as well as for elaboration of efficient means for its
study. That is why an investigation of various properties of intellec-
tual activity is of the greatest interest to psychology and pedagogy
because intelligence has always been considered a main characteris-
tic of a human being.
The main assumption of this approach is that intelligence is
always displayed in different kinds of behavior and, in particular,
in cognition. Thus, knowing very little about inner structure of intel-
ligence, it is more efficient to consider intelligent behavior, or more
exactly, intellectual activity of a person.
Taking essential components of human activity as the base, dif-
ferent types and grades of intellectual activity are explicated and
explored.
With respect to the result, there are three types of intellectual
activity:

— The reproductive intellectual activity is selection in and reproduc-


tion of the given knowledge.
— The bounded productive intellectual activity is search for necessary
knowledge.
— The productive intellectual activity is creation/production of new
knowledge.

We can observe the reproductive intellectual activity when a stu-


dent learns some material given by the teacher or by reading the
textbook.
We can observe the bounded productive intellectual activity when
a person searches the Internet or an encyclopedia for necessary infor-
mation.

We can observe the productive intellectual activity when a physi-


cist discovers a new law of physics or when a mathematician proves
a new theorem.
It is necessary to understand that in some cases, selection and/or
reproduction can be more complicated actions than search and even
creation of new knowledge. That is why selection and reproduction
also demand a definite level of intelligence.
With respect to the means used in achieving the result, there are
also three types of intellectual activity:

— The reproductive instrumental intellectual activity,


— The extended instrumental intellectual activity,
— The creative intellectual activity.

In the reproductive instrumental intellectual activity, an individ-


ual or a group uses known means (tools) in the conventional way.
For instance, when a student solves a problem by the technique
taught by the teacher, it is the reproductive instrumental intellectual
activity.
In the extended instrumental intellectual activity, an individual
or a group uses known means (tools) in a new way. For instance,
when a student solves a problem by the technique usually used in
a different area, it is the extended instrumental intellectual activity.
Another example of the extended instrumental intellectual activity
comes from computer programming when programmers and com-
puter scientists started to use logic for building programming lan-
guages and writing programs.
In the creative intellectual activity, an individual or a group
invents new means (tools) for their activity, e.g., solving the prob-
lem. For instance, the famous physicist Richard Feynman (1918–
1988) invented the path integral for giving a new model of quantum
mechanics.
Classification of intellectual activity is useful in several aspects.
First, it helps to study cognition and creativity from different per-
spectives. Second, it allows developing person-oriented cognitive
methodology and technology. Third, it provides a theoretical base for
the development of cognitive and creative skills. In particular, intel-


lectual activity has been applied to problems of education (Burgin,
1995d). Intellectual activity is an expression of intelligence in its
operational/behavioral form. This feature allows using intellectual
activity as a base for studying intelligence, while measures and meth-
ods of evaluation of intellectual activity provide tools for estimating
the active intelligence of people.
In psychological studies, the intellectual activity approach is
orthogonal to the classical psychological approach to intelligence
where intelligence is considered as a trait or faculty of people. For
instance, intellectual activity is complementary to the well-known
triarchic model of human intelligence of Robert Sternberg.
The triarchic model asserts that human intelligence consists of
three types of faculties: analytical giftedness, synthetic giftedness
(creativity), and contextual (practical) giftedness.
According to the triarchic model, analytical giftedness is domi-
nant in being able to take apart problems and being able to see solu-
tions not often seen. To explain these abilities, analytical giftedness is
associated with three types of components in the functioning of the
mind: metacomponents, performance components, and knowledge-
acquisition components (Sternberg, 1985; 1997).

Definition 7.1.3. The metacomponents control the functioning of the
mind, telling what the mind has to do.
These components are especially important in problem solving
and decision making.

Definition 7.1.4. The performance components are the processes


in the mind that actually carry out the actions the metacomponents
dictate, allowing people to perceive problems and relations between
objects, transform images and apply relations to another set
of terms.

Definition 7.1.5. The knowledge-acquisition components are used


in acquisition of new information, selection of the useful informa-
tion from irrelevant information and construction of new knowledge
combining the various pieces of information.

Intellectually gifted individuals are proficient in using knowledge-


acquisition components because they are able to learn new informa-
tion at a greater rate.
Synthetic giftedness is especially important for creating new ideas,
posing and solving new prob-
lems and managing novel situations.
Contextual (practical) giftedness involves the aptitude to apply
synthetic and analytic skills to everyday situations. The effectiveness
with which an individual matches the surroundings and contends
with daily situations reflects degree of definite intelligence. Practi-
cally gifted people are able to succeed in any setting, creating a
supreme fit between themselves and their environment.
Contextual (practical) giftedness is expressed in the three pro-
cesses: adaptation, shaping, and selection.
Definition 7.1.6. Adaptation is a process of making changes within
oneself to better adjust to the environment.
For instance, when the weather changes and temperatures drop,
people adapt by wearing extra layers of clothing to remain warm
outside and by heating up their homes to remain warm inside.
Definition 7.1.7. Shaping is a process of making changes in the
environment to better suit one’s needs.
For instance, people grow trees and bushes to have a better climate,
more oxygen and enhanced surroundings.
The suggested terminology can be extended based on the classi-
fication of developmental processes suggested by Jean Piaget (1896–
1980). Namely, Piaget suggested that development consists of accom-
modation and assimilation (Piaget, 1964).
Definition 7.1.8. Accommodation includes reinterpretation of the
current situation and changing the cognitive model and behavioral
schemas that are used.
Thus, accommodation is inner shaping of the situation.
Definition 7.1.9. Assimilation is the process of adjustment to the
current situation without changing its interpretation, e.g., by chang-
ing the behavioral schemas that are used.

We see that in his model, Piaget describes behavioral learning,
and adjustment means primarily the cognitive adaptation described in
Definition 7.1.6.

Definition 7.1.10. Selection is a process of changing the place of liv-


ing to find the environment that better meets the individual’s goals.
For instance, immigrants leave their native countries where they
endure economic, religious, and/or social oppression and go to other
countries in search of a better and less strained life.
With respect to knowledge, analytical giftedness expresses itself
in analyzing knowledge and solving cognitive problems. Synthetic
giftedness ameliorates creation of new knowledge, while contex-
tual (practical) giftedness facilitates applying knowledge to practical
problems.
There are three basic sources of knowledge for an epistemic
system E, i.e., a system, which has or/and produces knowledge and
other epistemic structures:
1. Another epistemic system R, e.g., a book, database, individual,
organization or movie.
2. Another physical system R, e.g., a molecule, atom, star, car, tree,
or the Earth.
3. The system E itself, for example, when E extracts (recollects) its
own knowledge or produces new knowledge.
There are three types of knowledge sources:
— Proactive sources send information themselves and the cognizer
only accepts (or does not accept) what was sent.
— Reactive sources send information in response to an inquiry or
request.
— Passive sources do not send information and so, the cognizer has
to extract information from passive sources.
First two types are active sources.
There are three ways of knowledge (information) extraction:
— Extraction by non-intrusive interaction when it is possible to dis-
regard the impact on the source, e.g., by observation.

— Extraction by intrusive interaction when it is necessary to take


into account the impact on the source, e.g., by experiment.
— Extraction by inquiry when a message is send to source, which
responds sending a reply to this message.

Important processes going on in society are knowledge translation
and knowledge integration.

Definition 7.1.11. Knowledge translation is transmission of knowl-


edge from one area to another and its adaptation, e.g., reinterpreta-
tion, to the new area.

Definition 7.1.12. Integration of knowledge from two areas is its


transmission to the common area and its mutual adaptation.
An important way of knowledge acquisition is interpretation,
which is considered as a procedure or function that assigns
knowledge to a given system, e.g., to a text, facilitating its under-
standing.
There is a special theoretical area of interpretation called
hermeneutics. In other words, hermeneutics is a methodology of
obtaining knowledge of a text by extracting it from the text. The
Greek word ‘hermeneuein’ means to express, explain, translate or
interpret (Thiselton, 1998). Hermeneutics appeared in 1654 as an
art of interpretation of biblical texts. Schleiermacher (1819) welded
different approaches into this field, making the main contribution to its devel-
opment and applying it not only to the Bible but also to other texts.
As a result, hermeneutics became a theory of understanding in
the broadest sense, and Dilthey (1981) applied it to all human acts
and products, including history and human life. He asked the ques-
tion, “How do the social or human sciences differ from the natural
sciences?” In his opinion, while the natural sciences explain, the
social and human sciences understand.
In contrast to this, modern methodology of science persuasively
demonstrated (Agazzi, 1992; Burgin, 1995) that structurally expla-
nation and understanding are similar processes. The main differ-
ence is in methodology they use. Natural science is based on direct
observations and experiments, while social science develops from


indirect evidence and opinions.
Contemporary hermeneutics, according to Dilthey (1981), consid-
ers two levels of understanding: the lower, immediate understanding
of simple expressions, and the higher understanding that involves com-
prehension of individuals.
In comparison with this, hermeneutics developed in Jewish tra-
dition is a system called PARDES, which comprises four levels of
understanding and is considered in more detail in Section 7.1.1.
Besides, achievements of modern science, advanced epistemology, and
innovative hermeneutics made it possible to build up more than ten
levels of cognition and understanding (Burgin, 1996c; 1996d).
Now let us consider some important categories of cognition in
more detail.

7.1.1. Scientific cognition


We’ve arranged a civilization in which most crucial elements
profoundly depend on science and technology.
Carl Sagan

Science emerged as an efficient tool for exploration and understand-


ing, at first, of nature and later of the whole world in which people
live. Scientists created powerful means of cognition.
Scientific cognition typically utilizes three basic processes:
• Theoretical reasoning;
• Observation and experiment;
• Intuitive insight.
As a rule, these processes are separate but depend on and support
one another expanding in the concurrent mode with multiple cycles
and iterations.
The formalized mode of theoretical reasoning is logical inference
(information/knowledge production), which has three main forms:
— Deduction;
— Induction;
— Abduction.

While philosophers have studied deduction and induction for a


long time, the term abduction was first introduced by the American
philosopher and semiotician Charles Sanders Peirce (1839–1914)
only in the 19th century. These three basic methods of reasoning
are applied to declarative knowledge in general and as formalized
techniques, to knowledge in the form of expressions (formulas) from
the utilized scientific or logical language in particular. Deduction,
induction and abduction are also used to obtain properties of oper-
ational and representational knowledge.
Deduction is a type of logical inference of knowledge performed
by application of specific deduction rules, which, in general, have the
form:
A → B                                  (7.1)
or
A ⊢ B.                                 (7.2)
Here A is called the assumption of the rule, B is called the con-
clusion of the rule and each of them is a finite number of expressions
or formulas. For instance, taking the expression “X is in Y ”, we can
build the deduction rule:
“U is in V ”, “V is in W ” ⊢ “U is in W ”.
Applying this rule to two propositions “We live in the USA” and
“The USA is situated in America” as A, we deduce the proposition
“We live in America” as B. A valid deduction guarantees the truth
of the conclusion given the truth of the assumptions. In a general
case, it is necessary to apply deduction rules several times to obtain
the necessary conclusion.
The most utilized deduction rule is modus ponens, which has the
form:
ϕ, ϕ → ψ ⊢ ψ,
where ϕ and ψ are statements or propositions. This rule has the
following meaning:
If ϕ is true and ϕ implies ψ, then ψ is true.
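To make the mechanics of this rule concrete, here is a minimal sketch in
Python (the function and data names are purely illustrative and are not
taken from any particular system) that repeatedly applies modus ponens
to a set of known statements and implications until nothing new can be
derived.

# A minimal sketch of repeated application of modus ponens.
# Statements are strings; an implication (premise, conclusion) stands for
# premise -> conclusion.

def apply_modus_ponens(facts, implications):
    """From phi and phi -> psi in the knowledge base, conclude psi."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise in known and conclusion not in known:
                known.add(conclusion)   # psi is added to the knowledge base
                changed = True
    return known

facts = {"We live in the USA"}
implications = [("We live in the USA", "We live in America")]
print(apply_modus_ponens(facts, implications))

Because the rule is applied repeatedly, chains of implications are also
handled, which corresponds to the remark above that deduction rules
often have to be applied several times to obtain the necessary conclusion.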

Mathematical and scientific practice shows that deduction is used


not so much for knowledge production as for knowledge justification.
Recursive algorithms (cf., (Burgin, 2005)) and the majority of logical
systems formalize deduction (cf., Chapter 5).
However, as Aristotle observed, scientific discovery by deduction
is impossible, unless one knows the “first” primary premises . . . and
it is necessary to obtain these premises by induction.
Induction, in its main interpretation, is a form of logical inference
that allows inferring a general statement from a sufficient number of
particular cases, which provide evidence for this general statement
(the conclusion). However, if the evidence is not complete, the con-
clusion may be incorrect. For instance, Aristotle saw that all of the
swans at the places where he lived were white, so he induced that
all swans are white in general. However, much later Europeans came
to Australia and discovered black swans. This shows that in contrast
to deduction, induction does not always give correct results because
it works with incomplete information, while the number of initial
cases is not bounded and the researcher does not know for sure when
to stop. However, the whole of science is actually built on induction
because scientific laws have to be in agreement with nature for nat-
ural sciences and with social systems for social sciences, while it is
possible to make only a finite number of experiments.
Induction has a long history. For instance, it is possible to find ele-
ments of inductive reasoning in Plato’s Parmenides (ca. 370 B.C.E.).
Later mathematicians started using inductive proofs, which after-
ward were shaped into the Principle of Induction. We can see (cf., for
example, (Katz, 1996; Rashed, 1994)) indication of inductive infer-
ence in works of the famous Euclid (ca. 300 B.C.E.), Islamic mathe-
matician al-Karaji (ca. 1000) and prominent Indian mathematician
Bhaskara (1114 — ca. 1185).
In the explicit form, mathematical induction was used (cf.,
(Rabinovitch, 1970)) by Jewish mathematician Levi ben Gerson
(1288–1344), while the great French mathematician Blaise Pascal (1623–
1662) unambiguously formulated the Principle of Induction in his
Traité du triangle arithmétique. Later mathematical induction has
become a popular tool in mathematics.

In addition to inductive reasoning, there is also inductive learning,


which involves making (often uncertain) inferences that go beyond
direct experience and are based on intuition and insight. This is the
main procedure in learning from experience.
Analysis of computational processes allows discovering one more
kind of induction — constructive mathematical induction. In the the-
ory of computation, it is called recursion. Indeed, informally recur-
sion is a technique such that given the value f (n) of some function
defined for natural numbers, it allows computing the value f (n + 1)
(Shoenfield, 2001; Burgin, 2005). When this technique is given, it is
assumed that it makes it possible to compute all values of the function
f . Similar to the conventional mathematical induction, in construc-
tive mathematical induction, description of a general step of com-
putation presupposes the possibility to perform computation for an
infinite quantity of inputs. Note that inductive reasoning is compu-
tation of the truth function.
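As a small illustration of this technique (a generic textbook example, not
one taken from the cited sources), the following Python sketch defines a
function by giving its value at 0 and a rule for obtaining the value at
n + 1 from the value at n; this finite description determines the value of
the function for every natural number.

# Constructive definition: a base value plus a step from f(n) to f(n + 1)
# determines f on all natural numbers.

def factorial(n):
    if n == 0:                        # base case: f(0) is given directly
        return 1
    return n * factorial(n - 1)       # step: f(n) is obtained from f(n - 1)

print([factorial(k) for k in range(6)])   # 1, 1, 2, 6, 24, 120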
Thus, it is possible to discern two types of induction as knowledge
production: the observational induction and abstract induction. The
observational induction, or more generally, empirical induction, is a
conclusion made for a large collection of objects, e.g., events, systems
or processes, based on observation of (and experiments with) some
(usually small) part of this collection.
The abstract induction is a conclusion made for a large (often
infinite) collection of objects, e.g., numbers, structures or procedures,
based on reasoning about some finite (usually small) part of this
collection.
In its essence, induction is reasoning based on a simple rule:

If it was so, then it will be so.

We call it the straightforward induction. For instance, when people


see that the Sun rises every morning, they suppose (induce) that it
will always be so. However, such induction is not always reliable. For
instance, when people go into space in a spaceship, they find that
their previous supposition is violated.
Correct application of the straightforward induction demands
highly developed intuition and can be invalidated by some new
observations or experiments. The straightforward induction is the


cornerstone cognitive technique in physics and other sciences. When
scientists declare that an experiment validates a scientific theory,
they implicitly apply the straightforward induction because any experiment
can tell only that the theory is correct in one particular case.
Mathematicians, being largely dissatisfied with the absence of absolute
reliability in the straightforward induction, invented rules for making
induction (totally, or at least, sufficiently) trustworthy. This brought
forward the imperative induction, which is based on definite rules for
testing a part of a big collection of objects and making conclusions
about the whole collection.
Note that the straightforward induction involves a divergent pro-
cess when it is impossible or unreasonable to get direct knowledge
about each object from the whole collection. Introducing rules for
making the process finite and tractable is renormalization of the rea-
soning process similar to renormalization used in physics.
The most popular kinds of the imperative induction are mathe-
matical induction and statistical inference.
Statistical inference allows obtaining properties of a large set of
objects called population in statistics, using data about these prop-
erties drawn from some (usually small) parts (subsets) of the whole
population. These parts (subsets) are called samples, which are usu-
ally selected according to definite rules.
Validity of statistical inference strongly depends on the relevance
of the statistical model used for inference to the modeled problem
(Cox, 2006).
A statistical model consists of three parts:

• A set of assumptions related to the whole population


• Rules for sample selection
• Rules for estimation of properties of the selected samples and for
making inference from obtained data.

Statisticians use three types of modeling assumptions:

• Fully parametric when it is assumed that the probability dis-


tributions describing the population and process generating the
statistical data are fully described by a family of probability dis-


tributions involving only a finite number of unknown parameters.
• Non-parametric when the assumptions made about the population
and the process generating the statistical data are minimal.
• Semi-parametric when the assumptions are not as complete as
in the fully parametric case but not as minimal as in the non-parametric case.
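The following Python sketch is a hedged illustration of the fully parametric
case (the population, the sample size and the numbers are invented for
the example): a sample is drawn by simple random sampling, a normal
family with unknown mean and variance is assumed, and an approximate
interval estimate of the population mean is inferred from the sample alone.

import random
import statistics

# Illustrative population; the statistician is assumed not to see it directly.
population = [random.gauss(170.0, 8.0) for _ in range(100000)]

# Rule for sample selection: simple random sampling.
sample = random.sample(population, 100)

# Rules for estimation under the fully parametric (normal) assumption.
mean = statistics.mean(sample)
sd = statistics.stdev(sample)
half_width = 1.96 * sd / (len(sample) ** 0.5)   # approximate 95% interval

print(f"estimated population mean: {mean:.1f} +/- {half_width:.1f}")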

Mathematical induction, in essence, reduces induction to deduc-


tion by the Principle of Induction formalized in the axiom of induc-
tion, which according to (Poincaré, 1905), has the following form.
Axiom of induction (numerical form). If a theorem is proved
for 1 and it is also proved that it is true for n + 1 whenever it is true
for n, then it is true for all natural numbers.
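In symbols, for a property P of natural numbers, this numerical form can
be written as the conventional schema
(P(1) ∧ ∀n (P(n) → P(n + 1))) → ∀n P(n).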
However, mathematical induction is applied not only to num-
bers but also to a variety of mathematical and other objects. These
applications are grounded by the following axiom of induction, which
is also called the principle of mathematical induction.
Assume that an infinite sequence of statements (propositions)
A1 , A2 , A3 , . . . , An , . . . is given and A is the statement (proposition)
that all are true.
Axiom of induction (logical form). If the statement (propo-
sition) A1 is proved and it is also proved that An+1 is true whenever
An is true for an arbitrary n = 1, 2, 3, . . . , then all statements (propo-
sitions) An are true.
However, a more general form of the axiom of induction is useful
in many situations.
Assume that an infinite sequence of objects O1 , O2 , O3 , . . . , On , . . .
is given and P is a property.
Axiom of induction (general form). If a theorem is proved
that the object O1 has the property P and it is also proved that
the object On+1 has the property P whenever the object On has
the property P , then all objects O1 , O2 , O3 , . . . , On , . . . have the
property P .
Note that the general and logical forms of the axiom of induc-
tion are equivalent when all properties are binary and are defined by
predicates, while having a property is represented by a logical state-


ment (proposition).
However, axioms of induction represent only the simplest form
of mathematical induction. Mathematicians have used many other
kinds of mathematical induction.
Principle of extended induction. If the statement (proposi-
tion) A1 is proved and it is also proved that An+1 is true whenever
A1 , A2 , . . . , An are true for an arbitrary n = 1, 2, 3, . . . , then all state-
ments (propositions) An are true.
One more kind of mathematical induction, which starts not from
the first statement, is shifted mathematical induction.
Principle of shifted induction. If the statement (proposition)
Ak is proved and for an arbitrary n ≥ k, it is also proved that An+1
is true whenever An is true, then all statements (propositions) An
are true (n = k, k + 1, k + 2, . . .).
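A standard textbook illustration of shifted induction (not taken from the
text) is the proof that 2ⁿ > n² for all n ≥ 5, where An is the statement
2ⁿ > n². The statement A5 is true because 2⁵ = 32 > 25 = 5². For the
step, if 2ⁿ > n² for some n ≥ 5, then 2ⁿ⁺¹ = 2 · 2ⁿ > 2n² = n² + n² ≥
n² + 2n + 1 = (n + 1)² because n² ≥ 2n + 1 for n ≥ 3. Thus, An is true
for all n ≥ 5, although, for example, A2, A3 and A4 are false.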
In some cases, it is necessary to extend our assumptions.
Principle of extended shifted induction. If the statement
(proposition) Ak is proved and for an arbitrary n ≥ k, it is also
proved that An+1 is true whenever Ak , Ak+1 , . . . , An are true, then
all statements (propositions) An are true (n = k, k + 1, k + 2, . . .).
There are also:

• Odd-even mathematical induction


• Backward mathematical induction
• Spiral mathematical induction
• Double mathematical induction
• Mathematical induction with parameter

While mathematical induction eliminates the necessity of intuition in


making the conclusion, acceptance of the axiom of induction and its
application still demand intuition (Poincaré, 1905).
There are special logical systems that formalize induction and
inductive reasoning (Kyburg, 1969; Hájek and Havranek, 1978). Nat-
urally, they develop logical representations only for some kinds of
mathematical induction, embedding mathematical induction, which
is an inference procedure (a deduction rule), into a logical calculus.

At the same time, empirical induction, which is prevalent in sci-


ence, is modeled and explored in the theory of abstract automata,
algorithms and computation. There are three main directions
in mathematical modeling of empirical induction: the so-called
Solomonoff theory of universal inductive inference, inductive infer-
ence based on learning in the limit and inductive inference based on
inductive Turing machines.
Solomonoff’s approach interprets knowledge acquisition as
gaining an ability to predict a symbol in a sequence based upon
the knowledge of previous symbols from this sequence (Solomonoff,
1964). The basic assumption made in this theory is that symbols
in the sequence follow some unknown but computable probability
distribution.
Inductive inference based on learning in the limit stems
from the innovative paper of Mark Gold (1965), where limiting recur-
sion was introduced in the form of limiting recursive and limiting
partial recursive functions. The goal of inductive inference is to rec-
ognize a function given some of its values. In this context, only
recursively computable functions are considered as descriptions of
scientific laws and epistemic systems are represented by inductive
inference machines (Angluin and Smith, 1983; Osherson et al., 1991).
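A hedged Python sketch of this idea (purely illustrative and not Gold's
own formalism): a learner is fed successive values of an unknown function,
outputs a hypothesis after each new value, and identifies the function in
the limit if its hypotheses eventually stop changing and the final
hypothesis is correct.

# Illustrative learner for functions of the form f(n) = a*n + b with integer
# coefficients: after seeing two values, the hypothesis stabilizes forever.

def learner(values):
    """Return a hypothesis (a, b) from the finite list of observed values."""
    if len(values) < 2:
        return (0, values[0] if values else 0)   # provisional guess
    a = values[1] - values[0]
    b = values[0]
    return (a, b)

def target(n):        # the unknown function the learner tries to identify
    return 3 * n + 7

hypotheses = [learner([target(k) for k in range(m + 1)]) for m in range(5)]
print(hypotheses)     # converges to (3, 7) and never changes afterwards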
In the third mathematically based direction, the process of induc-
tive inference is performed by an abstract automaton called an induc-
tive Turing machine, which represents the next step in the devel-
opment of computer science providing more adequate and efficient
models for contemporary computers and computer networks, consti-
tuting an important class of super-recursive algorithms and satisfying
all conditions in the informal definition of algorithm (Burgin, 2005).
In this context, cognition is represented by computations of induc-
tive Turing machines or inductive cellular automata (Burgin, 2005;
2015).
Abduction, according to Peirce, is a form of logical inference aimed
at finding a hypothetical explanation E for an observed phenomenon
P (Peirce, 1931–1935). In contrast to induction, abduction does not
demand many observations. In contrast to deduction, there are no
exact infallible rules for abduction.

Peirce assumed that abductive reasoning constitutes the first


stage of scientific research, as well as of any interpretive pro-
cesses (Peirce, 1931–1935). In this role, abduction utilizes two oper-
ations — formation of plausible hypotheses or premises by the pro-
cedure of reconstructing plausible causes or intentions, and selection of
some of them by rational principles and “guessing instinct”. However,
it is reasonable to add three more operations to the process of
abduction:

— Grounding the acceptance of the selected hypothesis;


— Experimental testing of the selected hypothesis;
— Application of the selected hypothesis.

The first of these operations is aimed at maintenance of logi-


cal coherence, while the second and the third operations pursue the
pragmatic relevance.
Peirce compares the process of abduction to the Darwinian model
of evolution where formation of hypotheses corresponds to the birth
of a living being and selection by the “guessing instinct” and the ratio-
nal “principle of economy” matches natural selection by a fitness rule.
Bonfantini and Proni (1983) suggest understanding the abductive
“guessing instinct” not only as a natural insight, which is inborn, but
also as a cultural insight, which is learned and rooted in the person’s
background. This connects the abductive reasoning to hermeneutic
processes (Eco, 1990).
As a rule, abduction also utilizes intuitive solutions for finding a
hypothetical explanation for an observed phenomenon because there
are no deterministic rules for this process (cf., Section 7.1.2).
Even deduction and induction demand intuition — deduction
when axioms for theories are selected and induction when the deci-
sion is made that number of particular cases is sufficient to make the
conclusion (cf., Section 7.1.2).
An important and popular kind of abduction is analogy.
Analogy as a method of reasoning (inference) is based on a simi-
larity relation (Osuga and Saeki, 1990; Burgin, 1993). To define ana-
logical reasoning in a formal way, we consider a similarity relation σ
between statements or propositions.

Mathematically, a similarity relation is represented by a tolerance


relation, which is a binary relation that is reflexive and symmet-
ric (cf., Appendix A). For instance, letters “a” and “a” are similar,
while letters “a” and “b” are not similar.
Taking some set of objects, it is possible to define that two objects
are similar when they have, at least, one common property. In this
case, any two letters are similar but a mountain and a letter are not
similar.
The procedure of analogical reasoning is described by the follow-
ing operation:

An inference system, e.g., a person or a machine, considers a system


of conditions a1 , a2 , a3 , . . . , an that imply a fact, e.g., a statement
or effect, C as a consequence, and another system of conditions
b1 , b2 , b3 , . . . , bn such that each ai is similar to the corresponding
bi , i.e., ai σbi is true. Then the inference system finds a fact B
similar to C and assumes that B is a consequence of conditions
b1 , b2 , b3 , . . . , bn .
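The following Python sketch (with invented statements, intended only as
an illustration under these assumptions) mimics this operation: a
tolerance relation σ between statements is fixed, a known inference from
conditions a1, . . . , an to a consequence C is stored, and when new
conditions b1, . . . , bn are pairwise similar to the old ones, the system
proposes a fact B similar to C as their assumed consequence.

# A toy analogy step over a tolerance (reflexive and symmetric) relation.

similar_pairs = {
    ("water flows downhill", "electric charge flows to lower potential"),
    ("narrow pipe resists flow", "thin wire resists current"),
    ("pressure drops along the pipe", "voltage drops along the wire"),
}

def similar(x, y):
    return x == y or (x, y) in similar_pairs or (y, x) in similar_pairs

known_conditions = ["water flows downhill", "narrow pipe resists flow"]
known_consequence = "pressure drops along the pipe"

new_conditions = ["electric charge flows to lower potential",
                  "thin wire resists current"]
proposed_consequence = "voltage drops along the wire"   # a fact similar to C

# If every bi is similar to the corresponding ai, assume B as a consequence.
if all(similar(a, b) for a, b in zip(known_conditions, new_conditions)):
    assert similar(known_consequence, proposed_consequence)
    print("Assume by analogy:", proposed_consequence)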
Nevertheless, in spite of serious studies of abductive reasoning in
epistemology and its increasing impact on linguistics, hermeneutics
and AI, the status of abductive reasoning remains a controversial
issue in philosophy of science and epistemology.
Sowa (1984) suggested describing deduction, induction, and
abduction in the following way:

• Deduction is described by the following formal rule:

If A and A → B are true, then conclude B.

• Abduction is described by the following informal rule:


If B and A → B are true, then assume A, or when several state-
ments A imply B, then use some selection rule or preference
relation to select the best A.
• Induction is described by the following informal rule:
If B is true for every case when A has been observed and there
was a sufficient number of observations, then assume A → B.
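A minimal Python sketch of these three rules (the data and the threshold
standing in for “a sufficient number of observations” are illustrative
assumptions) makes the contrast explicit: deduction concludes, while
abduction and induction merely assume.

# Toy versions of the three rules over propositional statements;
# an implication (A, B) stands for A -> B.

def deduce(facts, implications):
    """If A and A -> B are known, conclude B."""
    return {b for a, b in implications if a in facts}

def abduce(observation, implications):
    """If B is observed and A -> B is known, assume A as an explanation."""
    return {a for a, b in implications if b == observation}

def induce(cases, sufficient=3):
    """If B accompanied A in sufficiently many cases, assume A -> B."""
    counts = {}
    for a, b in cases:
        counts[(a, b)] = counts.get((a, b), 0) + 1
    return {(a, b) for (a, b), n in counts.items() if n >= sufficient}

implications = [("it rains", "the street is wet")]
print(deduce({"it rains"}, implications))          # {'the street is wet'}
print(abduce("the street is wet", implications))   # {'it rains'} (assumed)
print(induce([("it rains", "the street is wet")] * 4))  # assumed implication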

This description explicates the differences between the three approaches.


Abduction and induction both use the tentative verb assume instead
of the secure and comfortable verb conclude. Induction demands
some quantity of observations as the premise and has an implicit
quantifier every in the conclusion. This makes induction much more
complex than deduction, which generates a conclusion in a single
step. In comparison with abduction, induction is simpler because
finding a cause for something is usually more complicated than obser-
vation and experiment.
It is interesting to know that the basic scientific methods of cog-
nition — deduction, induction and abduction — intrinsically corre-
spond to three ways of cognition and understanding crystallized in
Jewish tradition. According to it, there is a system called PARDES,
which literally means “orchard” or “garden” and comprises four
manners or levels of cognition and understanding: Peshat or Pshat,
Remez, Derush or Drash and Sod.
An expert in Judaism Gershom Scholem (1897–1982) defines these
levels in the following way (Scholem, 1995):

• Peshat (Pshat) is the literal perception of what is given,


• Remez is the allegoric cognition and understanding,
• Derush (Drash) is the hermeneutical or logical cognition and
understanding,
• Sod is the mystical insight or esoteric cognition.

In this system, it is possible to correspond Drash to logical deduc-


tion, Remez to abduction and Pshat to induction. Esoteric cognition
(Sod) and mysticism lie outside science and scientific cognition.
The development of science and philosophy allows extending tra-
ditional and inventing new methods of cognition. As a result, achieve-
ments of modern science, advanced epistemology, and hermeneutics
make possible separation and development of more than ten levels of
cognition and understanding in comparison with four classical meth-
ods of PARDES (Burgin, 1996c; 1996d).
Usually reasoning is performed with expressions in natural lan-
guages or formulas in logical languages. However, as we have seen
in Chapter 5, some popular forms of knowledge representations
use geometrical images such as graphs, diagrams, networks, and


schemas. As a result, people created another type of reasoning with
visual images. For instance, using diagrams, which are usually two-
dimensional and only sometimes three-dimensional. Diagrammatic
reasoning is reasoning using diagrams as visual representations of
knowledge and information. According to psychological studies, 80%
of all information that people get through their senses is visual infor-
mation (Atkinson et al., 1990). Thus, diagrammatic reasoning helps
understanding of concepts and ideas, facilitates reasoning and com-
prehension.
Different knowledge representations bring us to separation of
three forms of reasoning:
1. Symbolic reasoning is performed with symbolic expressions such
as texts or formulas.
2. Diagrammatic reasoning is performed with firmly structured
images such as graphs, diagrams, networks, and schemas.
3. Picture reasoning is performed with weakly structured and
unstructured images such as pictures or paintings.
To conclude, it is necessary to remark that scientific reasoning in
general and logical inference, in particular, may include all three basic
forms of theoretical analysis — deduction, induction, and abduc-
tion. For instance, at first, a scientific law is obtained by inductive
inference based on experimental results or by abduction based on
intuition. Then different consequences of this law are acquired by
deduction. One of the most prominent examples of this situation is
obtaining Newton’s law of gravitation and deducing Kepler’s laws
of planetary motion from it.

7.1.2. Intuition as a cognitive instrument


Man knows more than he understands.
Alfred Adler

Although the general public assumes that scientific cognition is based


on exact methods of reasoning, intuition plays an important role in
science. For instance, abduction cannot be efficient without qualified
intuitive insights. Mathematics also relies on intuition. For instance,


to be able to select a relevant set of axioms for a mathematical the-
ory or to suggest a reasonable hypothesis, it is necessary to have a
developed mathematical intuition.
According to Hintikka (2003), the Latin term intuitio was intro-
duced into the mainstream philosophical usage by the scholastics,
who gave a variety of interpretations for the term intuitive cogni-
tion. However, the broad scholastic conception of intuition was not
long-lived.
The very general understanding (1) of the term intuition relates
this phenomenon to any source of ostensible knowledge not obtained
by conscious inference from the possessed knowledge.
Note that as sense-perception satisfies explanation (1), we come
to a specific kind of intuition called sensory intuition. At the same
time, Kant regarded intuition as a phenomenon strictly distinct from
any sensation.
In spite of many attempts to uniquely characterize intuition, no
efforts of researchers have resulted in the definition acceptable by the
majority of philosophers, psychologists, scientists, and mathemati-
cians. Therefore, there are many competing definitions and descrip-
tions of intuition, and we consider some of them.
Intuition is a deeper perception of inherent possibilities and inner
meanings.
Intuition is the ability to understand something instinctively,
without the need for conscious reasoning.
Intuitions are simply opinions . . . (Lewis, 1983).
Intuition is also characterized as the vehicle for knowing.
Intuitions are the tendencies that make certain beliefs attractive
to us, that ‘move’ us in the direction of accepting certain propo-
sitions without taking us all the way to acceptance (van Inwagen,
1997).
According to Locke, intuitive knowledge is the perception of the
certain agreement or disagreement of two directly compared ideas,
i.e., intuition is restricted to knowledge of ideas.
An intuition is a propositional attitude that either seems to be
true (Bealer, 1998; Pust, 2000; Huemer, 2001; 2005), or it is presented
to the subject as true (Chudnoff, 2011) or it pushes the subject to


believe a proposition in it (Koksvik, 2011).
Some researchers even regard intuition as a constituent of knowl-
edge. For instance, Charles Oppenheim contends that “knowledge
is a combination of information and a person’s experience, intuition
and expertise.” (cf., (Zins, 2007)).
The phenomenon of intuition has been explored for a long time.
Many philosophers and some mathematicians, such as Descartes,
Kant, Bergson, Poincaré, Gödel, Bunge and Kuhn to name but a
few of many, considered intuition as one of the main cognitive tools.
It is possible to find allusion to intuition in the work of the great
Aristotle, who declared that intuition would be the initial source of
knowledge (Aristotle, 1984).
René Descartes (1596–1650) regarded intuition as a clear and
attentive mind giving birth to truth in a way more reliable than
deduction. As a result, intuition provided propositions that had to
be chosen as axioms and postulates. As an example of an abso-
lutely clear, intuitive statement, Descartes suggested the equality
2 + 2 = 4. However, as it happened with the statement of Kant
that the Euclidean geometry is intuitively unique, this statement
of Descartes was invalidated with the discovery of non-Diophantine
arithmetics (Burgin, 1977; 1997g; 2007; 2010c).
Baruch Spinoza (1632–1677) elaborated the threefold division of
knowledge:
• Knowledge from imagination;
• Knowledge from intuition;
• Knowledge from the intellect.
We see that for Spinoza, intuition is one of the basic sources and
engines of knowledge. Bunge (1962) suggested that intuition in the
sense of Spinoza consisted of the fast inference of conclusions. How-
ever, Spinoza also treats intuitive knowledge as a transition from an
adequate idea of certain attributes of God to the adequate knowledge
of the essence of things (Parkinson, 1954).
Similar to Spinoza, Immanuel Kant (1724–1804) envisaged a tri-
adic structure of knowledge creation and acquisition coming from
three engines of knowledge:

• Empirical intuition;
• Reasoning;
• Pure intuition.

According to Kant, intuition is a conscious, objective representation, which is truly different from any sensation, while the latter is not a representation of something but only a state of the subject. In addition, intuitions are passive representations, by means of which sensibility enables sensations.
Concepts are also representations, which are general and mediate,
while intuitions are singular, immediate representations. For Kant,
experience is the mixture of an intuition with a concept in the form
of a judgment.
Thus, we see that in the epistemology of Kant, intuition plays
a fundamentally important role in knowledge creation.
For Henri-Louis Bergson (1859–1941), the doctrine of intuition is
of the utmost importance. For instance, he contends that philo-
sophical intuition is the instrument of metaphysical knowledge and
is the center of all philosophical work, or at least, of all philosophy
which deserves this name. It is intuition, not reasoning, which brings
people into sympathetic acquaintance with mental reality. In contrast
to the misleading shell of common knowledge, which is framed for
action, intuition enables people to understand the intrinsic proper-
ties of the world, as well as the external and relative character of the
impressions and thoughts about the world.
Bergson describes intuition as a simple, indivisible experience
through which people transcend into the inner nature of an object to
comprehend what is unique and indescribable within it. The absolute
essence that is grasped always perfectly correlates with the object
and is infinite because being grasped as a whole through a simple,
indivisible act of intuition, it encloses boundless representations when
analyzed (Bergson, 1923; 2007).
Intuition is especially important for metaphysics, which is, accord-
ing to Bergson, the science that dispenses with symbols to grasp the
absolute because metaphysics involves an inversion of the habitual
modes of thinking and needs its own method (Bergson, 1923;
2007).
Edmund Gustav Albrecht Husserl (1859–1938), who was also
intensely interested in intuition, distinguished several kinds of intuition.
The way we know sensible objects he called sensible intuition.
At the same time, immediately valid judgments, which serve as
the foundation of all proofs required in genuine science, derive their
validity from originally presentive intuitions.
Intuition displays aspects of reality and not only impressions of
these aspects. It is adequate because it provides the interface of con-
sciousness and of the object of consciousness. At the same time, intu-
ition does not presuppose for Husserl any special source of insight or
knowledge or any particular human capacity.
The famous psychologist Carl Gustav Jung (1875–1961) consid-
ered four ways of knowledge creation and reality comprehension:

— Sensation
— Intuition
— Thinking
— Feeling

Jung understood intuition as a deeper perception of inherent possibilities and inner meanings: intuitive perception ignores the details and focuses instead upon the general context or environment, adding meaning by encapsulating into the situation things that are not immediately apparent (Jung, 1969).
In his philosophical and methodological writings, the great mathe-
matician Jules Henri Poincaré (1854–1912) explained the important
role played by intuition in creation of mathematical and physical
knowledge. He assumes that there is a specific mathematical intu-
ition, which allows finding hidden harmony and relations (Poincaré,
1908). Giving many examples, Poincaré affirms that it is impossible
to create new mathematical knowledge without this mathematical
intuition.
In addition, Poincaré specifies several sorts of mathematical intuition (Poincaré, 1905):

— Intuition based on feelings and imagination;
— Intuition of generalization by induction, which is similar to intuition in natural sciences;
— Intuition of pure number, which serves as a base for mathematical induction.

In a similar way, the prominent French mathematician Jean
Alexandre Eugène Dieudonné (1906–1992) described the unifying
role of the mathematical intuition:
“Mathematics has less than ever been reduced to a purely mechanical
game of isolated formulas; more than ever does intuition dominate in the
genesis of discoveries”. (Dieudonné, 1975)

Other prominent mathematicians, Hilbert and Cohn-Vossen (1952), write that mathematical knowledge progresses and is understood and learned through a common, simple, highly integrated creative process involving both intuitive insight and intellectual analysis.
Gödel (1906–1978) discussed problems of intuition with considerable attention, promoting the view that there is a faculty called rational intuition, which resembles sense-perception but is directed towards abstract ideas rather than concrete bodies (Gödel, 1947). Rational intuition as applied specifically to mathematical concepts is usually called mathematical intuition, and Gödel called mathematical intuition applied specifically to set-theoretic concepts set-theoretic intuition.
The notable philosopher Thomas Samuel Kuhn (1922–1996)
assumes that paradigm change is a value change: while normal science can articulate a paradigm, it cannot change it; only intuition can change paradigms:
“Paradigms are not corrigible by normal science at all ... normal sci-
ence ultimately leads only to the recognition of anomalies and to crises.
And these are terminated, not by deliberations and interpretation, but by
a relatively sudden and unstructured event like the gestalt switch. Sci-
entists then often speak of the “scales falling from the eyes” or of the
“lightning flash” that “inundates” a previously obscure puzzle. On other
occasions, the relevant illumination comes in sleep. No ordinary sense of
the term “interpretation” fits these flashes of intuition through which a
new paradigm is borne.” (Kuhn, 1962).

It is also necessary to remark that intuition is the cornerstone
of a notable direction in mathematics, which is called intuition-
ism. According to intuitionism, mathematics is a free creation of the
human mind based on intuition, and a mathematical object or prop-
erty exists if and only if it can be physically or mentally constructed.
The basic principle of intuitionism is the primordial intuition of
natural numbers and mathematical induction (cf., (Fraenkel and
Bar-Hillel, 1958)).
At the same time, some researchers and other people discard intuition,
arguing that it is unreliable and lacks scientific explanation. Dis-
cussing the prejudice many have against intuition and in favor of
perception, Sosa states, “opposition to the reliability of intuition
appears to involve a self-defeating appeal to intuition” (Sosa, 2006).
Besides, as we have seen, perception also involves intuition — the so-called perceptual intuition. In addition, Sosa suggests an analogy between intuition and eye-witness testimony, which shows that observers in general, and witnesses in particular, are frequently mistaken about their perceptions because witnesses to the same event can give radically varying accounts of it. This analogy shows that intuitions are as subjective as eye-witness accounts, and that differences in intuitions among persons happen as often as divergences of eye-witness testimonies. In spite of this, intuitions can be and are relevant in many cases, and the epistemic role of intuition is not easily filled
in many cases and the epistemic role of intuition is not easily filled
by other cognitive abilities and sources of knowledge (Sosa, 2006;
Gilbert, 2008).
Philosophers and mathematicians considered different types of
intuition.
Parsons (2008) distinguishes intuition of from intuition that. For
instance, it is possible to have an intuition of a straight line in the
Euclidean plane without having an intuition that given any straight
line and any point beyond this line, there is one and only one straight
line parallel to the given straight line.
Burgess (2014) contemplates mathematical intuition as a kind of rational intuition with three forms:

— The set-theoretic intuition is intuition related to set-theoretic concepts.
— The geometric intuition, in the sense of Gödel, supports the belief that the three-dimensional Euclidean space correctly represents a certain structure existing in the realm of mathematical objects.
— The chronometric intuition is intuition related to time.

In addition, Burgess separates two forms of empirical geometric
intuition:
— The spatial intuition supports beliefs about the physical space.
— The temporal intuition is intuition related to physical time.
Fraenkel and Bar-Hillel (1958) consider two types of intuition: the
primordial intuition of natural numbers and mathematical induction
and the global intuition, which allows one to determine when two
symbols coincide or have the same type.
Bunge (1962) suggests many more types of intuition, which are
named and listed below:
1. Perceptual intuition is immediate identification of a thing, phe-
nomenon or symbol.
2. Comprehension intuition is clear understanding of the meaning
and/or interrelations of a system of symbols such as a text or a
diagram.
3. Interpretation intuition is easiness of interpretation of conven-
tional signs and symbols.
4. Geometrical intuition is the ability to envisage absent things and
construct visual models and schemas.
5. Metaphoric imagination is the ability to apprehend and develop
metaphors.
6. Creativity intuition is interpreted as creative imagination.
7. Reasoning intuition is interpreted as high-speed reasoning.
8. Synthetic intuition is the ability to easily synthesize different
elements, systems, and objects in a unified system.
9. Common sense intuition is the ability to make decisions without utilization of scientific knowledge or sophisticated reasoning.
10. Practical intuition is the ability to make sound judgments and estimates.

It is possible to consider three types of intuitive knowledge/
information production:

— Analogy;
— Extension or generalization;
— Guessing.

In this context, extension is a form of logical inference from less
general to more general.
To understand the neurophysiological foundation of intuition, we utilize the Componential Triune Brain model (CTBM) developed in (Burgin, 2010), which is a further development of the Triune Brain model (TBM) introduced and studied by MacLean (1913–2007). The main conception of the TBM is the existence of three levels of perception and action that are controlled by three corresponding centers of perception in the human brain (MacLean, 1973; 1982). These three centers together form the Triune Brain, which has the structure of a triad. The neural basis, or framework, of the brain consists of three parts: the spinal cord, hindbrain, and midbrain.
Besides, centuries of evolution have endowed people with three dis-
tinct cerebral systems (cf., Figure 7.1). The oldest of these is called
the reptilian brain or R-complex. It programs behavior that is pri-
marily related to instinctual actions based on ancestral learning and
memories, satisfying basic needs such as self-defense, reproduction,
and digestion. The reptilian brain is fundamental in acts such as pri-
mary motor functions, primitive sensations, dominance, establishing
territory, hunting, breeding, and mating.
Through evolution, people have developed a second cerebral
system, the limbic system, which MacLean refers to as the
paleomammalian brain and which contains hippocampus, amygdala,
hypothalamus, pituitary gland, and thalamus. This system is situated around the R-complex, is shared by humans with other mammals, and plays an important role in human emotional behavior.

Figure 7.1. The triune brain: three nested layers labeled Neomammalian, Paleomammalian, and Reptilian
The most recent addition to the cerebral hierarchy is called
the neomammalian brain, or the neocortex. It constitutes 85% of
the whole human brain mass and receives its information from the
external environment through the eyes, ears, and other organs of
senses. This brain component (neocortex) contains cerebrum, corpus
callosum, and cerebral cortex. The cerebrum and cerebral cortex are
divided into two hemispheres, while the corpus callosum connects
these hemispheres. The neocortex deals with information in a logical
and algorithmic way. It governs creative and intellectual functions
of people, such as social interaction and advance planning. The left
hemisphere works with symbolic information, applying step-by-step
reasoning, while the right hemisphere handles images processed by
massively parallel (gestalt) algorithms.
Even psychologists who have objections to the Anatomic Triune
Brain model admit that it is a useful, although oversimplified,
metaphor, as the structure presented as the triune brain is based
on a sound idea of three functional subsystems of the brain. In the
development of neurophysiology and neuropsychology, MacLean’s
theory was used as a base for the Whole Brain model, developed
by Herrmann (1990). The main idea of this development is a syn-
thesis of the Anatomic Triune Brain model with the two-hemisphere
approach to the brain functioning.
The theory of the triune brain (reptilian, old mammalian and
new mammalian) is used as a metaphor and a model of the inter-
play between instinct, emotion, and rationality in humans. Cory
(1999) applied this model to economic and political structures. In
Cory’s schema, the reptilian brain mediates the claims of self-interest,
whereas the old mammalian brain mediates the claims of empathy.
If selfish interests of an individual are denied for too long, there is
discontent due to a feeling of being unjustly treated. If empathic
interests are denied for too long, there is discontent due to guilt. In
either case, the center of intelligence at the prefrontal cortex plays
the role of a mediator. Its executive function is required to restore
balance, generating the reciprocity required for effective social and
economic structures.
The TBM is used to explain hyperactivity of youngsters stud-
ied by Zametkin (1990) and other researchers. Peter Levine bases his approach to trauma treatment on the TBM (Levine, 1990). According to Levine, there are three types of uniform stress and relaxation responses to a threatening situation that are active in all animal
species through the autonomic nervous system. In the everyday lan-
guage, these responses are metaphorically called fight, flight, or freeze.
The first two of them are well known, while the third one was intro-
duced by Levine. The freezing or immobility response has evolved
over millions of years and it has served an adaptive purpose well for
all species — except humans. In an individual, it can lead to trauma.
Many physical ailments are actually residues of thwarted trauma
reactions incurred during stressful events. What usually happens to
non-human species is that after the threatening situation resolves
itself, the animal forgets the stress and goes on its way without being
traumatized.
In contrast to this, people can get stuck in the freezing response,
while the reasoning mind resists or blocks the natural bodily sensa-
tions and fine motor movements needed to come out of the freeze
response. The contemporary rationalistic culture is not helpful in
supporting people in such a traumatizing situation. The feelings that
people go through after experiencing a traumatic event are outside of
their voluntary control, often being frightening and even potentially
re-traumatizing.
Levine (1999) postulates that trauma exists not in the event or
in the story of the event, but is stored within the nervous system
and thus, is expressed in definite reactions and behaviors. The main
principle of the Levine’s treatment approach is that the body has a
natural, innate, and miraculous capacity to heal once these reactions
and behaviors are understood and guided.
Although the Triune Brain has become a well-known model in contemporary psychology, it has caused several objections concerning the development and structure of the triune brain system. First, there is evidence that the so-called paleomammalian and neomammalian brains appeared, although in an undeveloped form, at much earlier stages of evolution than is assumed by MacLean. Second, there are experimental data showing that regions homologous to the so-called paleomammalian and reptilian brains exist in the neocortex and perform similar kinds of information processing.
For instance, neuropsychological data give evidence that amygdala,
which is a part of a limbic system, performs the low-level emotion pro-
cessing, while the ventromedial cortex performs the high-level emo-
tion processing. This shows that emotions exist, at least, on three
levels: on the subconscious level of limbic system, on the conscious
intuitive level, and on the conscious rational level in the cortex. The
first level utilizes direct affective information, while the second and,
to some extent, the third levels make use of cognitive emotional infor-
mation (Burgin, 2010).
At the same time, the development of the system of Will demands the inclusion into this system of some regions that do not belong to the R-complex (the reptilian brain in MacLean's theory). It means
that the centers of rational intelligence, emotion and will are not
concentrated in three separate regions of the brain but are highly
distributed among several components of the brain. Thus, it might
be better to call them not centers but systems of intelligence, emo-
tion, and will. This extension of functional characteristics results in
the necessity to change the TBM, making it more adequate to experimental data.

Figure 7.2. Three basic systems of the brain: the System of rational intelligence (SRI), the System of emotions/affective states (SAS), and the System of will and instinct (SWI)
All this brings us to the CTBM described in (Burgin, 2010), which
portrays the brain as a structure consisting of the three basic systems
(cf., Figure 7.2):

• the System (or Component) of Rational Intelligence (also called
the System of Reasoning) (SRI)
• the System (or Component) of Emotions (or more generally, of
Affective States) (SAS)
• the System (or Component) of Will and Instinct (SWI).

Note that in contrast to MacLean’s TBM, the Componential Tri-
une Brain model does not assume that each basic system belongs
only to one anatomic part of the brain. These systems can cover
regions of different anatomic parts of the brain, e.g., system of emo-
tions includes amygdala, which is a part of a limbic system, and the
ventromedial cortex, and even have common fragments.
All three systems of the brain are structural schemas in the sense
of the schema theory, which is developed as a specific direction of the
brain theory (Anderson, 1977; Arbib, 1992; 2005; Armbruster, 1996;
Burgin, 2005a; 2006; 2010a). According to this theory, brain schemas interact through concurrent competition and coordination. All
these interactions are based on physical processes but have an inher-
ent informational essence related to a specific type of information.
Information processes in the brain are more exactly reflected by the
theory of the triadic mental information than by the conventional
information theory that deals only with cognitive information. At
the same time, dynamic brain schemas are non-phenomenal repre-
sentations of mental knowledge.
It is natural to call MacLean's initial structure by the name
the Anatomic Triune Brain model because it is based on the anatomy
of the brain where three indispensable parts are distinguished: the
neocortex, limbic system and R-complex.
In standard structuring of the brain, we also find these three sys-
tems. In the conventional setting (cf., for example, (DeArmond et al.,
1989; Russell, 1992)), the brain includes three components: the fore-
brain, midbrain, and hindbrain.
The forebrain is the largest division of the brain involved in a
wide range of activities that make people human. The forebrain has a
developed inner structure. It includes the cerebrum, which consists of
two cerebral hemispheres. The cerebrum is the nucleus of the system
(center) of rational intelligence.
Under the cerebrum is the diencephalon, which contains the tha-
lamus and hypothalamus. The thalamus is the main relay center
between the medulla and the cerebrum. The hypothalamus is an
important control center for sex drive, pleasure, pain, hunger, thirst,
blood pressure, body temperature, and other visceral functions. The
forebrain also contains the limbic system, which is directly linked to
the experience of emotion. The limbic system is the nucleus of the
system (center) of emotions (or more generally, of affective states).
The midbrain is the smallest division and it makes connections
with the other two divisions — forebrain and hindbrain — and alerts
the forebrain to incoming sensations.
The hindbrain is involved in sleeping, waking, body movements,
and the control of vital reflexes such as heart rate and blood pressure.
The structures of the hindbrain include the pons, medulla, and cere-
bellum. The hindbrain is the nucleus of the system (center) of will
and instinct.
The SRI realizes rational thinking. It includes both symbol and
image processing, which go on in different hemispheres of the brain.
The SAS governs sensibility and the emotional sphere of personality.
The SWI directs behavior and thinking. The other two systems influence behavior only through the will. For instance, a person can know that it is necessary to help others, especially those who are in need and deserve help. However, in many cases, this person does nothing
without a will to help. In a similar way, we know situations when an
individual loves somebody but neither tells this nor explicitly shows
this due to an absence of a sufficient will.
It is necessary to remark that discussing the will of an individual
we distinguish conscious will, unconscious will, and instinct. All of
them are controlled by the SWI. In addition, it is necessary to make
distinctions between thoughts about intentions to do something and
the actual will to do this. Thoughts are generated in the SRI, while
the will dwells in the SWI. In other words, thoughts and words about
wills, wishes, and intentions may be deceptive if they are not based
on a will.
Will is a direct internal injunction, while any kind of motivation, that is, the forces that act on or within an organism to initiate and direct behavior, has to be transformed into a will in order to cause the corresponding action. The will is considered as a process that deliberates on what is to be done (Spence, 2000).
The CTBM is not only a necessary extension of the TBM but it also continues a long-standing approach to brain stratification. As Smith (2010) demonstrates, triadic models of the brain and psyche have featured through two and a half millennia of Western thought, starting with the works of Pythagoras, Plato, and Aristotle and receiving a modern airing in Paul MacLean's TBM. A generation after Pythagoras, Plato, and Aristotle, Herophilus and Erasistratus from Alexandria put together a more anatomically informed triadic
theory, which was modified by Galen in the 2nd century and remained
the prevailing paradigm for nearly 1,500 years until it was over-
turned by the great thinkers of the Renaissance. Nonetheless, the
notion that the human neuropsychological system is somehow best
thought of as having a triadic (tripartite) structure has remained
remarkably resilient and has reappeared time and again in modern and early modern times. For instance, the TBM correlates well with Freud's model of personality, which has the structure of the triad (Id, Ego, Super-ego). In the correspondence between the TBM and Freud's model, the reptilian complex corresponds to the Id, the limbic system corresponds to the Ego, and the neocortical complex corresponds to the Super-ego. In this vein, it is also possible to consider the triarchic theory of intelligence developed by Robert Sternberg (1985).
Taking each of these centers, SRI, SAS, and SWI, as a spe-
cific infological system, we find three types of information. One is
the conventional information that acts on the center of reasoning
and of other higher brain functions (SRI), which is situated in the
neocortex. This information gives knowledge, changes beliefs and
generates ideas. Thus, it is natural to call it cognitive information.
Information of the second type acts on the SAS, which includes the
paleomammalian brain. It is natural to call this information by the
name direct emotional information, or direct affective information or
emotive information. Information of the third type acts on the SWI,
which contains the reptilian brain. It is natural to call this informa-
tion by the name direct regulative or direct effective information.
Thus, anthropic information has three dimensions:
Cognitive information changes the content of the SRI, which includes
the knowledge system (thesaurus) and neocortex (neomammalian
brain) as its carrier.
Direct emotional/affective information changes the content of the
SAS, which includes the paleomammalian brain (limbic system).
Direct regulative/effective information changes the content of the
SWI, which includes the reptilian brain or R-complex.
However, in general, emotions constitute only one part of affective
states, which also include moods, feelings, etc. That is why in general,
direct affective information is more general than direct emotional
information. However, as there is no consensus on the differences
between emotions and affective states, these two types of information
are used without differentiation.
Interactions between the basic brain systems imply dependen-
cies between thinking, emotions, and actions of people. Emphasizing
some of these relations, psychologists build their theories and psy-
chotherapists develop their therapeutic approaches. Giving prior-
ity to the SRI, the so-called “cognitive revolution” has taken hold
around the world. It influenced both psychology, resulting in the emergence of cognitive psychology (Neisser, 1967; 1976), and psychotherapy, inspiring the creation of cognitive therapy (Beck, 1995). The main postulate in these areas is that all processes in the
human psyche are information processes. In psychology, the word cog-
nitive often means thinking in many contexts of contemporary life
(Freeman and DeWolf, 1992). The cognitive therapeutic approach
begins by using the extremely powerful reasoning abilities of the
human brain. This is important because our emotions and our actions
are not separate from our thoughts. They are all interrelated. Think-
ing (SRI) is the gateway to our emotions (SAS) — and our emo-
tions are the gateway to our actions through motivation and will
(SWI). This is only another way of saying that information from
the SRI goes to the SAS — and from it to the SWI that controls
our actions. Consequently, the cognitive psychotherapeutic approach,
which has been successfully utilized for treating many mental dis-
orders, gives additional supportive evidence for the theory of the
triune brain and behavior, as well as for the theory of the triadic
mental information. The latter explains that while going from the
SRI to the SAS and to the SWI, information is transformed from
cognitive information to direct emotional/affective information and then to
direct effective/regulative information. As a result, we have interac-
tion processes of the personality components of the Componential
Triune Brain presented in Figure 7.3.
An interesting connection between the TBM and Stoic philosophy has been found recently.
The prominent Roman–Greek Stoic philosopher Epictetus (ca.
55–135) suggested that the apprentice philosopher should be trained
in three distinct areas or topoi (Epictetus, 2008):

1. Desires (orexeis) and aversions (ekkliseis);


2. Impulse to act (hormas) and not to act (aphormas);
3. Freedom from deception, hasty judgment, and anything else
related to assents (sunkatatheseis).
Figure 7.3. Interaction between components of personality (Reasoning, Emotions/Affective states, Will/instinct, and Behavior)
These three areas of training exactly correspond to the three types
of philosophical discourse referred to by earlier Stoics (cf., (Diogenes
Laertius, 1991)):

1. The logical is the topos concerning assent to an impression or the value-judgment.
2. The ethical is the topos concerning impulse (hormê).
3. The physical is the topos concerning desire (orexis).

Naturally, the logical is directly related to the SRI, the ethical is
associated with the SAS, and interpreting desire as will or instinct,
the physical is linked to the SWI.
The structure of Stoic philosophy also correlates with the Exis-
tential Triad of the world (Section 2.2). Indeed, the logical belongs to
the World of Structures, the physical naturally belongs to the Physi-
cal World and the ethical, as you would expect, belongs to the Mental
World.
Analyzing three ways of intuitive knowledge/information pro-
duction considered above, i.e., by subconscious contemplation, by
guessing or by emotions, e.g., the so-called gut feeling, we see that
the Componential Triune Brain allows us to explicate three mental
capacities, which form the operational base of intuition:

• Unconscious deliberation in the system of mental reasoning, i.e.,
in the SRI.
• Emotional intelligence in the sense of (Burgin, 2010), which func-
tions in the SAS.
• Embodied instincts and instructions for control guessing based on
the SWI.

These types constitute only inner intuition, while many people
speak and write about intuition from external sources — from God,
from the Cosmos/Universe or the Earth, and from extraterrestrials.
Intuition is also extremely important in everyday life because it
allows utilization of partial knowledge without inference or the use of
reason. In this context, intuition is associated with instinct combined
with innate knowledge, i.e., it is related to the SWI, which is linked
to ethics and knowing instinctively the ‘right’ and the ‘wrong’ way
to behave.
There are approaches to formalization of intuitive information/
knowledge production. An attempt to make guessing grounded gave
birth to probability theory created in the 17th century by Blaise
Pascal (1623–1662) and Pierre de Fermat (1601–1665). The goal
of probability introduction was changing some types of intuitive
guessing to reasoning with probabilities (cf., for example, (Burton,
1997)). For a couple of centuries this reasoning was not formalized
and only in the 19th century, the first attempts to synthesize logic
with probability started. The notable logician George Boole aimed
to reconcile classical logic (which tends to express complete knowl-
edge or complete ignorance) and probability theory (which tends
to express partial or/and imprecise knowledge or ignorance), intro-
ducing imprecise probability (Boole, 1854). His approach represented
subjective interpretation of probabilities because people often do not
have enough information to assign definite numbers to probabilities
of given events. In the 20th century, various probabilistic logics also
called probability logics were introduced with the goal to combine
the ability of probability theory to work with uncertainty with the
capacity of deductive logic for formal inference — deduction. In par-
ticular, in probabilistic logics, truth values are not confined to the two values, true and false, as in traditional logics, but are probabilities that an expression is true. In comparison with conventional
logics, probabilistic logics provide richer and more expressive for-
malisms with a broad range of possible application areas. However,
a richer formalism also brings difficulties to probabilistic logics. First, they entail a higher computational complexity of processing their probabilistic and logical components. Second, there is a possibility of counter-intuitive results.
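As a purely illustrative sketch (not taken from the sources cited above), the following Python fragment shows one way in which truth values become probabilities and, in the spirit of Boole's imprecise probabilities, turn into intervals when the dependence between statements is unknown; the function names and the numerical values are hypothetical.

```python
# Illustrative sketch: probabilistic "truth values" with Boole-Frechet bounds.
# When the dependence between two uncertain statements A and B is unknown,
# conjunction and disjunction are bounded by intervals rather than fixed
# numbers, which hints at both the expressive power and the computational
# difficulties mentioned above.

def and_bounds(p_a: float, p_b: float) -> tuple[float, float]:
    """Lower and upper bounds for P(A and B) under unknown dependence."""
    return max(0.0, p_a + p_b - 1.0), min(p_a, p_b)

def or_bounds(p_a: float, p_b: float) -> tuple[float, float]:
    """Lower and upper bounds for P(A or B) under unknown dependence."""
    return max(p_a, p_b), min(1.0, p_a + p_b)

if __name__ == "__main__":
    # Two statements with subjective probabilities 0.7 and 0.8 of being true.
    print(and_bounds(0.7, 0.8))   # (0.5, 0.7)
    print(or_bounds(0.7, 0.8))    # (0.8, 1.0)
```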
Another approach to formalization of intuitive information/
knowledge production is based on fuzzy logics and linguistic variables
(Bandemer and Gottwald, 1996), as well as fuzzy logical varieties and
prevarieties (Burgin and Rybalov, 2003).
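A minimal sketch of the fuzzy route is given below; it assumes only the standard notion of membership degrees in [0, 1] and the usual min/max/complement connectives, and the linguistic variable and membership functions are invented for illustration rather than drawn from the cited works.

```python
# Illustrative sketch: a linguistic variable "temperature" with fuzzy terms.
# Membership degrees lie in [0, 1]; min, max and complement serve as the
# fuzzy counterparts of "and", "or" and "not".

def cold(t: float) -> float:
    """Degree to which temperature t (in degrees Celsius) is 'cold'."""
    return max(0.0, min(1.0, (15.0 - t) / 15.0))

def hot(t: float) -> float:
    """Degree to which temperature t (in degrees Celsius) is 'hot'."""
    return max(0.0, min(1.0, (t - 20.0) / 15.0))

def f_and(a: float, b: float) -> float:
    return min(a, b)

def f_or(a: float, b: float) -> float:
    return max(a, b)

def f_not(a: float) -> float:
    return 1.0 - a

if __name__ == "__main__":
    t = 12.0
    print(cold(t), hot(t))                        # 0.2 0.0
    print(f_or(cold(t), hot(t)))                  # degree of "cold or hot"
    print(f_and(f_not(cold(t)), f_not(hot(t))))   # degree of "mild"
```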
Note that, in essence, all scientific methods of cognition demand intuition. Indeed, experimentation needs intuition to decide what experiments would be useful to conduct and how to organize these experiments. In many cases, existing rules are insufficient. Observation also needs intuition for making the right selection from all observed events — not all of them are equally important for the definite case.
Reasoning also employs intuition. This is evident for abduction, where there are no formal rules for the generation and selection of possible explanations, and rational intuition can direct the researcher in the right way, or mislead her or him.
Concerning induction, we have already indicated that, as
Poincaré (1905) stressed, induction always involves a specific kind
of intuition. Even deduction engages intuition in many cases. For
instance, in non-monotonic logics, knowledge exchange and selec-
tion, which demand intuition, precede deduction (cf., Section 3.3).
In a similar way, when performing deduction in logical varieties and preva-
rieties, intuition is helpful for choosing the right component for this
operation (cf., Section 3.3).
To conclude, it is necessary to remark that although psycholo-
gists, mathematicians, and scientists have studied different forms of intu-
ition, they have not achieved a sufficiently full understanding of this
phenomenon.

7.1.3. Computers and networks as cognitive tools


The good news: Computers allow us to work 100% faster.
The bad news: They generate 300% more work.
Unknown
One of the most popular directions in application of computers to
obtaining knowledge is knowledge discovery in databases (KDD).
It is not coincidental. As Wright (1998) explains, the amount of data
being collected in databases today far exceeds our ability to reduce
and analyze data without the use of automated analysis techniques.
Many scientific and transactional business databases grow at a phe-
nomenal rate. Databases are increasing in size in two ways: (1) the
number N of records or objects in the database grows and (2) the
number d of fields or attributes to an object increases (Fayyad et al.,
1996). Databases where the number of objects is of the order of more than 100 are becoming increasingly common, for example, in the astronomical sciences. At the same time, a number of fields of the order of more than 100 occurs in many databases, for example, in medical diagnostic applications.
That is why KDD emerged as the field that is evolving to pro-
vide people an efficient approach to extract knowledge from these
volumes and volumes of data. KDD is aimed at an essential growth
of the efficiency of information processing in comparison with direct
data mining. KDD is a growing field and there are many knowledge
discovery methodologies in use and under development. However,
the majority of these methodologies are empirical, and the deficiency of sound theoretical foundations for KDD prevents achieving even sufficient efficiency in work with knowledge by means of computers.
Let us analyze the process of data transformation into knowledge.
Knowledge is achieved through information retrieval, which in its
turn is based on data collection, mining, and analysis. KDD is the new field that is evolving to provide automated analysis solutions. The term knowledge discov-
ery in databases was coined at the first KDD workshop in 1989 to
denote processes in which knowledge is the end-product, while data
are the raw material (Piatetsky-Shapiro, 1991). A particular step in
this process is called data mining, which is the application of specific
algorithms for extracting patterns from data (Fayyad et al., 1996;
Wright, 1998) although in many cases, KDD is not distinguished
from data mining.
Different methods of data mining have been developed to deal
with such amounts of data. However, data mining, while giving more
generalized and/or adequate data, does not provide knowledge per se.
As an example we can consider data mining on the Internet. This
process provides large amounts of data to a user (some relevant,
some not), but the user has to convert these data himself/herself
into knowledge. Knowledge is formed only inside some knowledge
system. It may be the mind of a user or an automated knowledge
system on a computer.
In addition, before data can become knowledge, it is necessary
to extract from these data appropriate information. KDD provides
the capability to discover new and meaningful information by using
existing data. KDD quickly exceeds the human capacity to analyze
large data sets. The amount of data that requires processing and
analysis in a large database exceeds human capabilities, and the
difficulty of accurately transforming raw data into knowledge sur-
passes the limits of traditional databases. Therefore, the full uti-
lization of stored data depends on the use of knowledge discovery
techniques.
As a first approximation, we develop a model of knowledge
discovery, which consists of three levels: data mining, information
retrieval, and knowledge formation (cf., Figure 7.4). Each of these
three levels may be divided into several sublevels. For instance, data
mining is performed on the level of raw data, level of interpreted
data, and level of attributed data. Separation of data mining, infor-
mation retrieval, and knowledge formation, as well as their sublevels,
is based on the general theory of information (Burgin, 2010) and system theory of knowledge presented in this book. Mathematical techniques from the theory of named sets, category theory, and general theory of properties are utilized.

Figure 7.4. The triadic model of KDD: data input → data mining → information retrieval → knowledge formation → knowledge output
Each of these processes, data mining, information retrieval, and
knowledge formation, is an essential component of information pro-
cessing, both by people and computers. Data mining provides a base
for information retrieval, which, in its turn, allows knowledge for-
mation and production. Under their conventions, the information
retrieval process takes the raw results from data mining (the pro-
cess of extracting trends or patterns from data) and carefully and
accurately transforms them into useful and understandable informa-
tion. This information is not typically retrievable by standard tech-
niques but is uncovered through the use of AI techniques. In its turn,
the retrieved information is transformed into knowledge through the
knowledge discovery process, which places information into a definite
knowledge system, which is coordinated with the knowledge system
of a user.
Thus, we have the triadic dynamic model of knowledge discovery presented above (Burgin and Gantenbein, 2002).

This stratification of knowledge discovery is based on principal dis-
tinctions between data, information and knowledge. Consequently,
there is a difference in working with data, processing information,
and acquiring or producing knowledge.
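The sketch below is our schematic rendering of this three-level stratification and not a formal model from the text; the function names and the toy records are hypothetical placeholders for real data-mining, information-retrieval, and knowledge-formation procedures.

```python
# Schematic sketch of the triadic pipeline: data mining -> information
# retrieval -> knowledge formation. Every step is a trivial placeholder.

from collections import Counter

def mine_data(raw_records: list[str]) -> Counter:
    """Data mining: extract raw patterns, here simple token frequencies."""
    return Counter(token for record in raw_records for token in record.split())

def retrieve_information(patterns: Counter, min_count: int = 2) -> dict[str, int]:
    """Information retrieval: keep only the patterns meaningful to the user."""
    return {token: n for token, n in patterns.items() if n >= min_count}

def form_knowledge(information: dict[str, int], knowledge_base: dict[str, int]) -> dict[str, int]:
    """Knowledge formation: integrate the new information into a knowledge system."""
    merged = dict(knowledge_base)
    for token, n in information.items():
        merged[token] = merged.get(token, 0) + n
    return merged

if __name__ == "__main__":
    records = ["fever cough", "cough fatigue", "fever headache", "cough"]
    knowledge = form_knowledge(retrieve_information(mine_data(records)), {})
    print(knowledge)   # {'fever': 2, 'cough': 3}
```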
This shows that the definition of knowledge discovery as “the
non-trivial extraction of implicit, unknown, and potentially useful
information from data” given in (Frawley et al., 1991) is essentially
incomplete.
A more detailed stratification of KDD is developed in (Fayyad
et al., 1996). In their model, data are treated as a set of facts (e.g.,
cases in a database), while a pattern is an expression in some lan-
guage describing a subset of the data or a model applicable to the
subset.
The KDD process is interactive and iterative, involving numerous
steps, iterations, and loops demanding many decisions from the user.
Figure 7.5. The expanded model of KDD. Starting from the initial problem, Stage 1 (identification of the application domain, goals, tools and initial database or databases for KDD) yields the initial data; Stage 2 (data selection) yields the target data; Stage 3 (preprocessing, which includes cleaning) yields the preprocessed data; Stage 4 (data transformation, which includes reduction) yields the transformed data; Stages 5 and 6 (selection of algorithms, models and methods) yield the algorithms, models and methods; Stage 7 (data mining) yields patterns; Stages 8 and 9 (interpretation/evaluation) yield knowledge.
The basic stages of the KDD process are (cf., Figure 7.5):

The first stage is developing an understanding of the application
domain, finding the relevant prior knowledge and identifying the
goals of the KDD process from the customer’s viewpoint.
The second stage is formation of a target entity in the database,
i.e., either selection of a data set, or focusing on a subset of variables
or data samples, on which knowledge discovery has to be performed.
The third stage is data cleaning and preprocessing including such
basic operations as (1) removing noise if appropriate; (2) collecting
the necessary information to model or account for noise; (3) deciding
on strategies for handling missing data and/or data fields; and (4)
accounting for time-sequence information and known changes.
The fourth stage is data and variables reduction and projection
with the goal of finding useful features and invariant representations
for the chosen data depending on the goal of the task.
The fifth stage is matching the goals of the KDD process obtained
on the first stage to a particular data-mining method, such as sum-
marization, classification, regression, clustering, and so on.
The sixth stage is an exploratory analysis with model and hypoth-
esis selection, choice of the data-mining algorithms and selecting
methods for data pattern searching. This process includes deciding
which models and parameters might be appropriate, e.g., models of
categorical data are different than models of vectors over the real
numbers, and matching a particular data-mining method with the
overall criteria of the KDD process, e.g., the end user might be more
interested in understanding the model than its predictive capabilities.
The seventh stage is data mining, which is searching for patterns
of interest in a particular representational form or a set of such repre-
sentations, including classification rules or trees, regression, and clus-
tering. The user can significantly optimize the data-mining method
by correctly performing the preceding operations.
The eighth stage is interpreting mined patterns, visualizing the
extracted patterns, models and/or data, obtaining knowledge, and
possibly returning to any of the preceding stages for further iteration.
The ninth stage is acting on the discovered knowledge by using
the knowledge directly, incorporating the knowledge into another
system for further action, or simply documenting it and reporting
it to interested parties. This process also includes checking for and
resolving potential conflicts with previously possessed or extracted
knowledge.
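The skeleton below is a purely illustrative arrangement of these stages, not a specification from Fayyad et al. (1996); every function body is a hypothetical placeholder standing in for real selection, cleaning, transformation, mining, and interpretation routines.

```python
# Illustrative skeleton of the staged KDD process described above.
# In a real system, stage 8 may loop back to any earlier stage.

def identify_domain(problem):
    # Stage 1: application domain, goals, tools and an initial (toy) database.
    return {"goal": problem, "data": [1, 2, 2, 3, 3, 3, 40]}

def select_target(setting):
    # Stage 2: select the target data set (here, the whole toy database).
    return setting["data"]

def preprocess(data):
    # Stage 3: cleaning and preprocessing, e.g., removing an obvious outlier.
    return [x for x in data if x < 30]

def transform(data):
    # Stage 4: transformation and reduction (a trivial placeholder projection).
    return [float(x) for x in data]

def choose_method(goal):
    # Stages 5 and 6: match the goal to a data-mining method and algorithm.
    return "frequency summarization"

def mine(data, method):
    # Stage 7: data mining; the chosen method is only recorded here.
    return {"method": method, "patterns": {x: data.count(x) for x in set(data)}}

def interpret(result):
    # Stage 8: interpretation/evaluation of the mined patterns.
    return {value: count for value, count in result["patterns"].items() if count > 1}

def act_on_knowledge(knowledge):
    # Stage 9: act on the discovered knowledge, e.g., document and report it.
    print("discovered regularities:", knowledge)

if __name__ == "__main__":
    setting = identify_domain("find typical values")
    data = transform(preprocess(select_target(setting)))
    act_on_knowledge(interpret(mine(data, choose_method(setting["goal"]))))
```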
There are many different methods classified as KDD techniques


and used for information retrieval and knowledge discovery. There
are quantitative methods, such as the probabilistic and statisti-
cal approaches. There are KDD schemas that utilize visualization
techniques (cf., for example, (Gantenbein and Sung, 2001)). There
are classification approaches to KDD such as Bayesian classifica-
tion, inductive logic, data cleaning/pattern discovery, and decision
tree analysis. Other approaches include deviation and trend analy-
sis, genetic algorithms, neural networks, and hybrid methodologies,
which combine two or more techniques. Many of them are described
and classified in (Wright, 1998).
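As one representative of the techniques listed above, the sketch below applies decision tree analysis to a tiny invented data set; it assumes the commonly used scikit-learn library and is our illustration rather than an example taken from Wright (1998).

```python
# Illustrative use of decision tree analysis as a KDD technique (assumes
# scikit-learn is installed). Records and labels are invented toy data.

from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical records: [age, systolic blood pressure] -> diagnosis (0 or 1).
X = [[25, 120], [40, 150], [35, 130], [60, 160], [50, 155], [30, 118]]
y = [0, 1, 0, 1, 1, 0]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The induced rules form a human-readable pattern extracted from the data.
print(export_text(clf, feature_names=["age", "blood_pressure"]))
print(clf.predict([[45, 148]]))   # classify a new, unseen record
```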
Other popular techniques of knowledge discovery and acquisition
based on computers are pattern recognition, computer simulation,
learning algorithms, and expert system technology. For instance,
computer simulations are an essential part of all experiments in
some fields, such as high-energy physics. It is possible to assert
that without computer simulations many experiments in these fields
would be impossible. In addition, computer simulations can essen-
tially decrease the cost of experimentation and save the environment
from the negative impact of some physical experiments.
Mathematicians also started to use computers in proving new theo-
rems and finding properties of mathematical objects. The most famous
result obtained with the help of computers was the Four-Color The-
orem, according to which any map drawn on a plane can be colored using only four colors without any two adjacent countries having the same color
(Appel and Haken, 1977). In the process of proving this theorem, a
computer checked through a large number of particular maps confirm-
ing that all of them satisfied the condition of the theorem.
Another prominent mathematical result obtained with the help
of computers is a proof of the famous Kepler conjecture. In 1611,
the great astronomer and mathematician Johannes Kepler (1571–1630)
studied the problem of how to stack spherical balls, e.g., apples or
oranges, in the best way, i.e., so that they fill space as densely as
possible. He found several ways to do this and conjectured that one
of them is the best possible.
In 1997, Thomas Hales of the University of Michigan announced
a proof of this conjecture, which involved, like the famous computer
proof of the 4-color theorem, computer checking of thousands of sep-
arate cases, many of them individually very laborious. The proof
contained over 3 gigabytes of computer code and data.
To justify this result, it was necessary to check it (cf., Chapter 3).
However, the manuscript with the proof contained around 250 pages.
With all the codes and formulas, this was hard for a single mathematician to do.
So, a committee of 12 experts was appointed to verify the result. They
started to work but after 4 years, the committee announced that they
had found no errors, but still could not confirm the correctness and
gave up. This clearly demonstrates the existing problems with computerized
proofs of complex mathematical results.
In addition to using computers as assistants in their work,
mathematicians and computer scientists have been developing auto-
mated systems for theorem proving, deduction, and general rea-
soning. It is necessary to remark that automated theorem provers
also found applications beyond mathematics, for example, in valida-
tion of software and hardware correctness. However, inside mathe-
matics, achievements of automated theorem provers have been very
moderate.
At the same time, mathematicians, computer scientists and psychol-
ogists started developing computational cognition as the computa-
tional basis of learning and inference trying to find foundations of
cognitive processes and utilizing mathematical modeling, computer
simulation, and behavioral experiments.
It is also necessary to emphasize that the Internet has become a
powerful depository of information and thus, a useful cognitive tool.
Access to immense amounts of information allows the Internet users
to successfully create new knowledge.
One more direction in application of computers to cognition is
scientific and cognitive visualization (cf., for example, (Butler and
Bryson, 1992; Hartley and Barnden, 1997; Burgin, 1997f; Burgin
et al., 2001; 2001a; 2001b; Gantenbein and Sung, 2001; Liu et al.,
2001; Zellweger, 2011)).
7.1.4. Learning
“The most erroneous stories are those we think we know best —
and therefore never scrutinize or question.”
Stephen Jay Gould

Traditionally learning is understood as the process of knowledge
acquisition or knowledge transfer from one individual to another or
from a book to an individual. There is also a much broader interpre-
tation of learning. For instance, Tuomi (1999) suggests that learn-
ing is synonymous with internalization of new knowledge, creation of knowledge and/or the development of new skills. However, this broad
approach makes learning equivalent to cognition in contrast to the
customary utilization of these two terms.
To avoid blending of these terms, we use two definitions.

Definition 7.1.13. Learning in the exact sense is internalization
of knowledge transferred from another carrier of knowledge, e.g.,
another person, organization, textbook, or the Internet.
Learning of students at a college or university is learning in the
exact sense.
Note that this definition includes the development of new skills
because it is natural to treat skills as embodied operational knowl-
edge. This definition also includes learning not only of individuals
but also learning of groups, organizations, and societies.

Definition 7.1.14. Learning in the broad sense is internalization of
knowledge created, discovered or obtained by the knower.
According to this definition, it is possible to learn from experience
or from nature.
Bearing in mind learning in the broad sense, Tuomi (1999) devel-
ops a classification of modes, sources and processes of ontogenetic
learning, which is presented in Table 7.1 with some modifications.
The given definition allows us to discern different types and levels
of learning:

• The Xerox learning is simple acquisition of knowledge from another
system.
Table 7.1. Modes, sources, and processes of ontogenetic learning. The sources of the behavioral change are the Environment, Society, and the Self.

Language (verbal mode):
   Environment: generation of spontaneous empirical concepts and ideas
   Society: training; generation of scientific concepts and ideas; participation in thought communities
   Self: conceptual thinking

Knowledge processing (self-referential non-linguistic mode):
   Environment: experience; empirical experiment
   Society: reflective socialization
   Self: imagination

Organism (non-referential non-linguistic mode):
   Environment: habit formation; skill acquisition
   Society: tacit socialization
   Self: intuition

• The comprehensive learning is acquisition and understanding of
knowledge from another system.
• The functional learning is acquisition of knowledge from another
system and developing skills for its application.
• The articulate learning is acquisition of knowledge from another
system and being able to explain (transmit) it to somebody else.
• The behavioral learning is acquisition of knowledge from
another system and expressing this knowledge in one's behavior/
functioning.

For instance, we can observe the Xerox learning when somebody
learns by heart a definite text.
A teacher needs the articulate learning to be able to teach what
she learned before.
Professional learning has to be the functional learning because it
is necessary to use what is learned in one’s professional activity.
When students are taught in schools, colleges and universities, it is
the behavioral learning because they have to express their knowledge
in tests and quizzes.
Differentiation between learners gives us the following classifica-
tion:

◦ The individual learning is learning of an individual.
◦ The collective learning is learning of more than one individual.

In turn, collective learning can be:

◦ The group learning is learning of a group, where a group can be any assembly of people.
◦ The organizational learning is learning of an organization, i.e., of
a structured group.
◦ The social learning is learning of society.

Based on his types of knowledge (cf., Section 2.1), Tuomi (1999)
introduces several types of learning:

— The intra-generational learning is learning inside a generation of the learner.
— The inter-generational learning is learning across different generations, e.g., formation of instincts (instinctive knowledge).
— The cognitive or conceptual learning is internalization of cognitive
knowledge.
— The structural learning is acquisition of habits on the individ-
ual level and practices/routines on the organizational and social
levels, as well as the development of instincts.
— The ontogenetic learning is learning that takes place in the onto-
genetic development.
— The phylogenetic learning is learning that takes place in the phy-
logenetic development.

Note that although traditionally instincts are linked only to individuals, it is possible to speak about instincts of crowds, organizations, and societies in the same way as we can speak about collective intelligence (Burgin, 2012).
Learning is a kind of intellectual activity. That is why the three basic types of intellectual activity induce the corresponding typology of learning in the exact sense:

— The reactive learning is selection of the given knowledge items and reproduction of the selected items in the memory of the learner.
— The active learning is search for and extraction of the necessary knowledge.
— The passive learning is reproduction of the given knowledge in
the memory of the learner.

In the case of learning in the broad sense, we have one more type:

— The proactive learning, which involves creation of new knowledge.

Gregory Bateson (1904–1980) introduced several levels of learning (Bateson, 1973):

• Learning 0 means that there is no change, while the system is involved in repetitive behaviors of individuals, groups or organiza-
tions ‘inside the box’. It is characterized by specificity of response,
which — right or wrong — is not corrected.
• Learning I is gradual, incremental change involving corrections and
adaptations through behavioral flexibility and extensions. While
these modifications may help to increase the capabilities of indi-
viduals, groups or organizations, they are still ‘inside the box’. It
is characterized by the modification of specificity of response by
correction of errors in the choice of alternatives.
• Learning II is a change in the process of Learning I, e.g., a cor-
rective change in the set of alternatives from which the choice on
the level I is made or a change in how the set of alternatives is
organized. It is a fast and often discontinuous change involving
instantaneous shifts of responses to an entirely different category
or class of behavior and performing switches from one type of
“box” to another such as changes in policies, values, or priorities.
• Learning III is a change in the process of Learning II. It is charac-
terized by momentous revisions, which go beyond the boundaries
of the current identity of the individual, group, or organization. It is possible to say that the change is "outside a collection of boxes".
• Learning IV is a change in the process of Learning III, and in its
essence, it is revolutionary change involving beginning of some-
thing completely new, unique and transformative.

Researchers also study learning strategies. For instance, in (Molnar, 1997) six learning strategies are described:

— memory learning strategy includes grouping and structured reviewing of information, as well as applying images and sound and employing action
— compensation learning strategy involves guessing and inferring
— cognitive learning strategy entails analyzing, reasoning, trans-
ferring information, taking notes, practicing, highlighting, and
summarizing
— meta-cognitive learning strategy includes organization, evaluation,
and planning of learning by setting goals and objectives
— affective learning strategy is based on emotions when learners reg-
ulate their emotional state
— social learning strategy implies working with others and asking
questions

There are different models of the learning process. For instance, John Dewey (1859–1952), who, in the opinion of many educators, was the most significant educational thinker of his time, suggested a
model that is not consistent with all types of learning because it applies mainly to proactive learning. It consists of the following steps (Dewey, 1938), with a small illustrative sketch after the list:

1. Interruption of the routine functioning.


2. Problem definition and conceptualization.
3. Working hypothesis formation.
4. Inference and thought experiment for testing the hypothesis.
5. Experimental action for testing the hypothesis.
6. Problem solving as the confirmation of the hypothesis.
7. Returning to the new routine functioning.
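Although Dewey's model is conceptual, readers who prefer an operational view may find a minimal sketch useful; the following Python fragment (an illustration, not part of Dewey's or the author's formalism) simply walks an agent through the seven steps in order, with the step names taken from the list above.

```python
from typing import Callable, List

# The seven steps of Dewey's model, in the order given above.
DEWEY_STEPS: List[str] = [
    "interruption of the routine functioning",
    "problem definition and conceptualization",
    "working hypothesis formation",
    "inference and thought experiment for testing the hypothesis",
    "experimental action for testing the hypothesis",
    "problem solving as the confirmation of the hypothesis",
    "returning to the new routine functioning",
]

def run_dewey_cycle(on_step: Callable[[str], None] = print) -> None:
    """Walk through one pass of Dewey's learning cycle, reporting each step."""
    for number, step in enumerate(DEWEY_STEPS, start=1):
        on_step(f"{number}. {step}")

if __name__ == "__main__":
    run_dewey_cycle()
```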
We can see that for Dewey learning is not only knowledge acquisition but mostly solving a problem, while the solution may be not only new knowledge but also different behavior or a new way of functioning.
It is possible to ask how this schema works when the routine functioning is learning. For instance, a scientist has to learn all the time and not only when a new problem emerges.
Besides, in the case of the Xerox learning, there is no hypothesis formation because the knowledge representation is routinely stored in the memory of the learner.
Several factors influence the utilization of strategies by learners:

— learner ability
— gender
— culture
— attitude
— motivation
— learner ability to manage the learning process

All learning processes considered above are based on a simple schema in which the learner learns by receiving information from some source, e.g., from a teacher, instructor or textbook. In real-
ity, learning can include more stages and involve more participants.
To reflect these peculiarities, researchers introduced and studied
the concept of iterated learning (cf., for example, (Kalish et al.,
2007)). Iterated learning is intergenerational knowledge and infor-
mation transmission in the social context of the cultural evolu-
tion. Cultural information transmission by iterated learning may
explain why language is structured explicating key sources of its
evolution.
There is a mathematical model of iterated learning based on the
constructive hierarchy of inductive Turing machines (Burgin and
Klinger, 2004). This model demonstrates that deductive iterated
learning strategies do not increase learning possibilities in compar-
ison with individual learning. At the same time, inductive iterated
learning strategies essentially increase learning abilities in compari-
son with individual learning.
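The text does not reproduce the model of Burgin and Klinger itself; the following toy simulation is only a hedged illustration of the basic iterated-learning setup, in which each generation induces a hypothesis from noisy examples produced by the previous generation and then teaches the next one (the numbers and the averaging "learner" are assumptions made for the example).

```python
import random
from typing import List

def produce_examples(hypothesis: float, n: int, noise: float) -> List[float]:
    """A 'teacher' holding a one-parameter hypothesis emits noisy examples."""
    return [hypothesis + random.gauss(0.0, noise) for _ in range(n)]

def learn(examples: List[float]) -> float:
    """A 'learner' induces a hypothesis from the examples (here, their mean)."""
    return sum(examples) / len(examples)

def iterated_learning(start: float, generations: int, n: int, noise: float) -> List[float]:
    """Chain of learners: each generation learns only from the previous one's output."""
    hypotheses = [start]
    for _ in range(generations):
        examples = produce_examples(hypotheses[-1], n, noise)
        hypotheses.append(learn(examples))
    return hypotheses

if __name__ == "__main__":
    random.seed(0)
    chain = iterated_learning(start=1.0, generations=10, n=5, noise=0.2)
    print([round(h, 3) for h in chain])  # drift of the transmitted hypothesis
```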
Learning is intrinsically connected to teaching. Researchers distinguish three basic approaches to teaching:

— The teacher-centered approach is realized when the teacher is the main authority figure, while the primary role of students is passive information reception via lectures and direct instruction, with the end goal of assessment through objectively graded tests.
— The student-centered approach is realized when the teacher and
students play an equally active role in the learning process, while
student achievements are measured through both formal and
informal forms of assessment, including group projects, student
portfolios, and class participation.
— In the content-centered approach, the teacher makes an effort to
give more information to the students, making them acquire (learn) more knowledge.

In the teacher-centered approach, the teacher is the sole supplier of knowledge and information, who can play the following roles:

— A formal authority who is in position of power focusing on rules


and expectations
— An expert who possesses all knowledge and expertise within the
classroom
— A role model who is leading by example demonstrating how to
access information and obtain knowledge

The student-centered approach focuses on student investigation and hands-on learning, when the teacher can play the following roles:

— A facilitator who places a strong emphasis on teacher–student relationships,
— A delegator who plays a passive role in the student learning,
— An organizer who organizes the process of the student learning.

Discussing learning of individuals and organizations, it is necessary not to forget that contemporary learning technologies often
involve computers and networks, e.g., the Internet (Keenan, 1964;


Molnar, 1997). As Molnar (1997) states, “research shows that edu-
cational technology, when properly applied, can provide an effective
means for learning.”
There are different terms that denote utilization of computers in
education:

• Computer Assisted Instruction (CAI)


• Computer Aided Instruction (CAI)
• Computer Assisted Learning (CAL)
• Computer Based Education (CBE)
• Computer Based Instruction (CBI)
• Computer Enriched Instruction (CEI)
• Computer Managed Instruction (CMI)

When educators started using the Internet for teaching/learning/training, new terms appeared:

• Web Based Training


• Web Based Learning
• Web Based Instruction

It is possible to discern three directions in utilization of computers for the purpose of learning/teaching/training:

• The control technology when the computer plays the role of an instructor telling students what to do.
• The teacher instrumental technology when the computer is used as
a tool of an instructor.
• The student instrumental technology when the computer is used as
a helping device of a student, who is telling computer what she/he
needs.

These first two directions correlate with two teaching strategies:

• The control technology corresponds to the teacher as an instructor.
• The instrumental technology corresponds to the teacher as a facilitator.
For instance, in programmed instruction, which is a kind of the control technology, the computer presents learning material to the
student, often employing text, graphics, sound and video. In the pro-
cess of presentation, questions are posed to the student, and based
on her answers, the computer, or more exactly, the education soft-
ware (program) makes a choice how to continue its presentation. In
some cases, new topics are chosen for exposition, while in other situa-
tions, the computer repeats previous topics if they were not properly
learned.
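As a hedged illustration of this branching behavior (the topics, questions, and answers below are invented for the example, not taken from any particular courseware), a programmed-instruction loop can be sketched as follows: present a topic, pose a question, advance on a correct answer, and repeat the topic otherwise.

```python
from typing import Callable, Dict, List

# Invented lesson units: topic, question, and the expected answer.
LESSON: List[Dict[str, str]] = [
    {"topic": "Tacit vs. explicit knowledge",
     "question": "Is a skill an example of tacit knowledge? (y/n)",
     "answer": "y"},
    {"topic": "Learning in the broad sense",
     "question": "Can a knower learn from experience? (y/n)",
     "answer": "y"},
]

def run_lesson(lesson: List[Dict[str, str]],
               ask: Callable[[str], str] = input,
               show: Callable[[str], None] = print) -> None:
    """Present topics in order; a wrong answer makes the program repeat the topic."""
    i = 0
    while i < len(lesson):
        unit = lesson[i]
        show(f"Topic: {unit['topic']}")
        reply = ask(unit["question"] + " ").strip().lower()
        if reply == unit["answer"]:
            i += 1                              # correct: choose the next topic
        else:
            show("Let us revisit this topic.")  # incorrect: repeat the exposition

if __name__ == "__main__":
    run_lesson(LESSON)
```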
The control technology is also used in the simulation assisted
training when the student works with a simulation of the real world
systems, processes, and situations. Simulation assisted training is
used when it is not practical or feasible to provide the learner with real-life systems, for example, in preparatory pilot training.
At the same time, the teacher can use computers as a tool for
enhancing her teaching, for example, suggesting the students to
participate in simulating experiments, when students explore sim-
ulations of the real experiments on the computer screen instead of
doing this either in a laboratory or in the field.
Educationally-oriented computer games are also used for enhanc-
ing teaching. Games generally include a competitive element reinforc-
ing motivation of learning and knowledge that the student is assumed
to have.
Each of the two computerized teaching/training technologies has three specific forms.
The control technology can have the following forms:

• A collaborative instruction allows the student to change the process of instruction/teaching.
• A feedback-based instruction is organized taking into account the
feedback the student gives in response to questions and problems
posed by the computer.
• A commanding instruction simply gives an exposition of the nec-
essary material.

The teacher instrumental technology corresponds to the teacher as a facilitator.
The teacher instrumental technology can have the form of:

• Proactive interaction with the student


• Active interaction with the student
• Reactive interaction with the student

In addition, there is active research in the area of automated learning, or machine learning, when computers are themselves learning (cf., for example, (Weiss and Kulikowski, 1991)).
To conclude, it is necessary to note that, according to the general theory of information (Burgin, 2010), a learner cannot get knowledge directly, e.g., a teacher cannot give knowledge directly to her students. The learner, e.g., a student, obtains only information, which may or may not be converted into knowledge. Only the learner can perform the conversion (transformation) of information into knowledge, while the teacher or a textbook can only help or hinder the student in this process.

7.1.5. Knowledge creation in organizations


Advances are made by answering questions. Discoveries are made
by questioning answers.
Bernhard Haisch

Nonaka et al. (2000) suggested their model of knowledge creation in organizations. It consists of three components:

1. The socialization, externalization, combination, internalization (SECI) process of knowledge creation through the conversion of tacit and explicit knowledge.
2. The shared context for knowledge creation called ‘ba’ in Japanese.
3. The utilized knowledge assets, such as the inputs, outputs, and
moderators of the knowledge-creating process.

According to this model, the knowledge creation process has the form of a spiral, which develops based on these three components
and dialectical thinking as the leading mechanism of the process.
The top and middle management of the organization has to generate
ba efficient for knowledge creation. In such a way, using existing knowledge assets, an organization creates new knowledge through
the SECI process that goes on in ba, while new knowledge, once cre-
ated, becomes, in turn, the basis for the next stage of the knowledge
creation process.
The main steps in the SECI process are socialization, external-
ization, combination, and internalization.
Socialization is the process of collecting, accumulating, and dis-
seminating tacit knowledge. Members of the organization collect and
accumulate knowledge from their professional activity, from daily
social life, interaction with external experts and informal meetings
with competitors outside the organization (external search), from
their colleagues inside the organization (internal search) and grow
craftsmanship and expertise through practice and demonstrations
by a master. This process is called socialization because it goes on
in social interactions.
Externalization is the process of converting tacit knowledge into
explicit knowledge by means of creative and essential dialogues,
utilization of metaphors, abductive thinking of the members of the
organization and knowledge sessions of experts (knowledge engineers)
with the knowers, i.e., with the members of the organization who suc-
cessfully perform their functions.
Combination is the process of acquisition, integration, synthe-
sis, processing, and disseminating explicit knowledge by planning
strategies and operations, assembling internal, and external data
by using published literature, computer simulation and forecasting,
writing manuals, documents, and building databases on products and
services.
Internalization is the process of converting explicit knowledge into
tacit knowledge and embodying it into the members of the organi-
zation in the form of shared mental models or technical know-how.
In such a way, explicit knowledge created is shared throughout an
organization becoming a valuable asset. For instance, documents and
manuals contain explicit knowledge but without the necessary appli-
cation skills of this knowledge, it usually remains useless. Skills are
based on tacit operational knowledge. Therefore, only conversion of
[Figure 7.6. The SECI process: a two-by-two diagram of knowledge conversion. Socialization converts tacit into tacit knowledge (empathizing), externalization converts tacit into explicit knowledge (articulating), internalization converts explicit into tacit knowledge (embodying), and combination converts explicit into explicit knowledge (connecting).]

explicit descriptions from documents and manuals to tacit proficiency allows the organization to make these documents and manuals truly useful.
The whole SECI process is presented in Figure 7.6, which illus-
trates the four ways of knowledge conversion and the evolving spiral
movement of knowledge through the SECI process.
An important peculiarity of the SECI process is that it moves
through the four modes of knowledge conversion not in a circle but
in a spiral, which becomes larger in scale and scope as it moves up
through the ontological levels. On any stage, created knowledge can
set off a new spiral of knowledge creation, expanding horizontally
and vertically inside and beyond the organization, starting at the
individual level and growing as it moves through communities of
interaction that transcend individual, sectional, departmental, divi-
sional, and even organizational boundaries. Being properly orga-
nized, organizational knowledge creation is a never-ending process
that accelerates and enhances itself continuously.
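A minimal sketch of the four conversion modes, assuming nothing beyond the mode names and the tacit/explicit forms shown in Figure 7.6; one call of the function below corresponds to one pass around the SECI cycle.

```python
from enum import Enum
from typing import Dict, List, Tuple

class Form(Enum):
    TACIT = "tacit"
    EXPLICIT = "explicit"

# The four SECI conversion modes keyed by (source form, target form), as in Figure 7.6.
SECI_MODE: Dict[Tuple[Form, Form], str] = {
    (Form.TACIT, Form.TACIT): "socialization (empathizing)",
    (Form.TACIT, Form.EXPLICIT): "externalization (articulating)",
    (Form.EXPLICIT, Form.EXPLICIT): "combination (connecting)",
    (Form.EXPLICIT, Form.TACIT): "internalization (embodying)",
}

def seci_cycle() -> List[str]:
    """One pass around the cycle: tacit -> tacit -> explicit -> explicit -> tacit."""
    forms = [Form.TACIT, Form.TACIT, Form.EXPLICIT, Form.EXPLICIT, Form.TACIT]
    return [SECI_MODE[(forms[i], forms[i + 1])] for i in range(len(forms) - 1)]

if __name__ == "__main__":
    for mode in seci_cycle():
        print(mode)
```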
However, this schema is incomplete because organizations and individuals also use two other knowledge creation processes:

— Formation is the process of amalgamating explicit and tacit knowledge into tacit knowledge by means of learning from books,
manuals and instructions to enhance individual and organiza-
tional skills.
— Construction is the process of synthesizing explicit and tacit
knowledge into explicit knowledge by means of creativity tech-
niques and procedures, such as brainstorming, thinking outside
the box, USIT, TRIZ, and morphological analysis.

Let us look at definitions of these creativity techniques.


Definition 7.1.15. Brainstorming is a group creativity technique by
which efforts are made to find a conclusion for a specific problem by
gathering a list of ideas spontaneously contributed by the members
of the participating group, discussion of these ideas and selection of
the most efficient ones.
Definition 7.1.16. Thinking outside the box (also called thinking
out of the box or thinking beyond the box) is a creativity technique,
both group and individual, in which people try solving a problem in a
more general context than it is originally given, thinking differently,
unconventionally, or from a new perspective.
Definition 7.1.17. USIT is an individual or group creativity tech-
nique, which is based on three fundamental components: objects,
attributes, and the effects they support. There are two groups of effects: beneficial effects, called functions, and non-beneficial effects, called unwanted effects.
USIT consists of three common phases:

— Problem definition in terms of objects, attributes, and effects.


— Problem analysis, which has two forms: (1) the “closed-world”
analysis of the problem to understand functional connectivity of
objects, attributes and effects; and (2) the “particles method”,
which starts from a possible solution and works back to the prob-
lem situation.
— Application of solution strategies: utilization, nullification, and elimination of the unwanted effect.

It is assumed that each phase takes equal time when the problem
is solved.

Definition 7.1.18. TRIZ (literally meaning a theory of the resolution of invention-related tasks), which is also called the theory of inventive problem solving and occasionally goes by the English acronym TIPS, is a problem-solving, analysis and forecasting tool developed by the Soviet inventor and science-fiction author Genrich Altshuller (1926–1998) and his colleagues, beginning in 1946.
However, TRIZ is not only a theory but also includes a practi-
cal methodology, creative tools, a knowledge base, and model-based
technology for innovative problem solving.

Definition 7.1.19. Morphological analysis is a method for systematically structuring and investigating the total set of relationships contained in given problem complexes with the goal of solving a definite problem.
It was developed by Fritz Zwicky (1898–1974) for exploring all
possible solutions to a multidimensional, non-quantified complex
problem (Zwicky, 1967, 1969).
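As a hedged illustration of the core mechanical step of morphological analysis (the parameters and values below are invented for the example), the "morphological box" is simply the Cartesian product of the value sets of the chosen parameters, which can then be filtered by cross-consistency constraints:

```python
from itertools import product
from typing import Dict, List

# Invented parameters for an illustrative problem: "organize a knowledge repository".
PARAMETERS: Dict[str, List[str]] = {
    "storage": ["database", "knowledge base", "document archive"],
    "access": ["open", "restricted"],
    "maintenance": ["manual", "automated"],
}

def morphological_box(parameters: Dict[str, List[str]]) -> List[Dict[str, str]]:
    """Enumerate every combination of parameter values (the morphological box)."""
    names = list(parameters)
    return [dict(zip(names, combo)) for combo in product(*parameters.values())]

def consistent(option: Dict[str, str]) -> bool:
    """Illustrative cross-consistency check: open access excludes manual maintenance."""
    return not (option["access"] == "open" and option["maintenance"] == "manual")

if __name__ == "__main__":
    options = [o for o in morphological_box(PARAMETERS) if consistent(o)]
    print(len(options), "consistent configurations; first one:", options[0])
```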
Researchers have also developed different mathematical tools for
modeling, study, and improvement of knowledge processes in orga-
nizations. For instance, Heather and Rossiter (2009) apply category
theory to problems of interoperability of knowledge systems, provid-
ing a powerful abstraction of the business process.
To conclude, it is necessary to clarify a controversial issue of who
creates knowledge. Some researchers insist that only individuals cre-
ate new knowledge, while it cannot be created by groups, organiza-
tions and society. For instance, Nonaka and Takeuchi (1995) write:
“In a strict sense, knowledge is created only by individuals. Organiza-
tional knowledge creation, therefore, should be understood as a process
that ‘organizationally’ amplifies the knowledge created by individuals and
crystallizes it as a part of the knowledge network of the organization.”
However, this is a misconception. For instance, such a knowledge creation technique as brainstorming implies that the participating
group solves the problem creating new knowledge. Besides, there are
examples from the history of science and mathematics when a group
solves a definite problem although the majority of problems have
been solved by individual researchers. In many cases, the groups of
scientists (mathematicians) who together solve some problem work
also together. For instance, working together Marie Sklodowska Curie
(1867–1934) and Pierre Curie (1859–1906) discovered two previously
unknown elements — polonium and radium — creating new physical
and chemical knowledge.
However, there are situations when a group of people working separately creates new knowledge. A famous example of such a situation is the discovery of the subatomic particle called the positron by a group of two.
History tells us that in 1928, the famous physicist Paul Dirac
(1902–1984) introduced what was later called the Dirac equation.
It unified quantum mechanics and special relativity describing the
properties of the electron. However, calculations from this equation
indicated two particles: one was the ordinary negatively charged elec-
tron and another was a positively charged particle with the same mass as that of the negatively charged electron. Dirac did not neglect the negative answer, suggesting the existence of an unknown particle, but nobody believed him and many physicists even mocked him.
However, in 1932, another physicist, Carl Anderson (1905–1991), discovered a particle with the parameters predicted by the theory of Dirac. This shows that the new particle, called the positron, and knowledge about it were discovered by a group of two physicists: Dirac and Anderson.
At the same time, there is a class of situations when groups and,
in some sense, organizations create new knowledge. This happened
in the case of geographical discoveries because they were made when
groups of people reached new lands. For instance, when Columbus
came to America, new knowledge about this continent was created
not only by Columbus but by all members of the ship crews that
reached America, while any crew of a ship is an organization.
7.2. Knowledge organization and engineering

Real knowledge is to know the extent of one’s ignorance.

Confucius

Engineering, in general, is defined as application of scientific principles to practical ends, such as the design, manufacture, and operation
of structures and mechanisms.
Consequently, knowledge engineering is application of scientific
principles to practical ends associated with knowledge, such as the
design, manufacture, maintenance, and operation of knowledge sys-
tems.
In a similar way, Feigenbaum and McCorduck (1983) define
knowledge engineering as an engineering discipline that involves
integrating knowledge into computer systems in order to solve com-
plex problems normally requiring a high level of human expertise.
However, there are other definitions of knowledge engineering, which belong either to the transfer approach or to the modeling approach.
The definition of Feigenbaum and McCorduck is the objective for
the Transfer Approach, which treats knowledge engineering as appli-
cation of engineering techniques to transfer human knowledge into
AI systems. It usually means that the human knowledge for solving a
problem is transferred to the knowledge base assuming that experts
already have this knowledge in the explicit form. Thus, the trans-
fer approach disregards the tacit knowledge an individual possesses
for solving the problem. This deficiency caused the paradigm shift
towards the modeling approach, which is more adequate to reality
describing problem-solving as a dynamic, cyclic, incessant process
dependent on the knowledge acquired and the interpretations made
by the system.
Therefore, in the Modeling Approach, the goal of knowledge engi-
neering is to include problem solving techniques and knowledge of
the domain expert into AI systems modeling how an expert solves
problems in real life (Schreiber et al., 2000).
In the contemporary practice, knowledge engineering consists
of construction, maintenance, and development of knowledge-based
systems and is related to many computer science domains such as AI, databases, data mining, expert systems, decision support systems,
and geographic information systems. Besides, knowledge engineering
is also linked to mathematical logic, as well as to cognitive science
and socio-cognitive engineering.
Knowledge engineering includes various activities specific for the
development of a knowledge-based system:

• Assessment of the problem;


• Development of a knowledge-based system shell/structure;
• Acquisition of the relevant information, knowledge, and specific preferences (the IPK model), which is the principal stage of the process;
• Knowledge representation, for which available representation schemas, e.g., rules or semantic networks, are used (see the sketch after this list);
• Knowledge structuring;
• Translation of the structured knowledge into knowledge bases;
• Testing and validation of the inserted knowledge;
• Integration and maintenance of the system;
• Revision and evaluation of the system;
• Inference of new knowledge;
• Explanation and justification of new knowledge.
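To make the knowledge representation step more concrete, here is a minimal sketch, not tied to any particular tool, of a rule-based representation with naive forward chaining; the facts and rules are invented for the example.

```python
from typing import List, Set, Tuple

# A rule is a pair (premises, conclusion); facts are plain strings.
Rule = Tuple[Set[str], str]

RULES: List[Rule] = [
    ({"expert_knowledge_elicited", "knowledge_is_explicit"}, "knowledge_base_filled"),
    ({"knowledge_base_filled", "knowledge_validated"}, "system_ready_for_integration"),
]

def forward_chain(facts: Set[str], rules: List[Rule]) -> Set[str]:
    """Derive every fact reachable by repeatedly applying the rules to known facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

if __name__ == "__main__":
    initial = {"expert_knowledge_elicited", "knowledge_is_explicit", "knowledge_validated"}
    print(sorted(forward_chain(initial, RULES)))
```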

Being still more art than engineering, knowledge engineering does not exactly follow the above list in practice. The phases overlap, and the process might be iterative and contain many cycles.
One of the central goals of knowledge engineering is to identify
an appropriate conceptual lexicon building an ontology. Ontologies
allow building models of the knowledge domain and defining terms
used inside the domain and the relationships between them. There
are different types of ontologies such as domain ontologies, generic
ontologies, application ontologies, and representational ontologies.
Ontology engineering, or ontology building, is a subfield of knowl-
edge engineering that studies the methods and methodologies for
working with ontologies including ontology development processes,
the ontology life cycle, the methods and methodologies for building
ontologies, and the tool suites, and languages that support them. The
goal of ontology engineering is extraction of knowledge from software applications, enterprise and business procedures, and organizational
practice. For instance, ontology engineering develops approaches to
solving the interoperability problems brought about by semantic
obstacles, such as the obstacles related to the definitions of business
terms and software classes.
Another goal of knowledge engineering is building, maintaining,
and using knowledge-based systems, such as expert systems, by
extracting knowledge from a human expert and then translating this
knowledge into a knowledge base and using various technical, scien-
tific and social tools.
Knowledge engineers have developed a number of principles,
methods and tools to improve the knowledge acquisition and struc-
turing. Some of the key principles are:
• There are different:
◦ types of knowledge each requiring its own approach and tech-
nique.
◦ types of experts and expertise, such that methods should be
chosen appropriately.
◦ ways of representing knowledge, which can aid the acquisition,
validation and re-use of knowledge.
◦ ways of using knowledge, so that the acquisition process can be
guided by the project aims (goal-oriented).
• Structured methods allow increasing efficiency of knowledge acqui-
sition and structuring processes.
• Knowledge engineering is aimed at efficient structuring of knowledge and organization of knowledge processes.
Organization of knowledge processes involves the following activ-
ities:
— Process identification;
— Process design;
— Process implementation;
— Process facilitation;
— Process monitoring;
— Process analysis;
— Process mending.

Thus, we can see that knowledge engineering is a field of engineering where knowledge is the object of operations and is a sphere of activity of knowledge engineers. Knowledge engineering is closely related to software engineering.
Successful knowledge engineering demands various skills includ-
ing: computer and network skills, effective communication skills,
logical thinking, understanding of organizations and individuals,
self-confidence, and patience.

7.3. Knowledge management and application

A little knowledge that acts is worth infinitely more than much


knowledge that is idle.
Kahlil Gibran

Knowledge management (KM) has become such a hot topic that it has been dubbed the business mantra of the 1990s (Hallal, 1998). To
explain this situation, new terms such as knowledge-oriented organi-
zation or knowledge-generating organization were introduced. In the
contemporary economical and social environment, efficient KM is
necessary to enable an organization to function resourcefully in the
long run.
At the same time, while KM has grown to be a highly prominent topic of research and practice, the term remains rather ambiguous and controversial, which impedes progress in articulating what KM entails and what knowledge-based organizations will look like (Baker and Badamshina, 2002).
The most popular approach to KM describes it as a collection
of processes directed at creation, capturing, accumulating, sharing,
applying and reusing knowledge (Sydanmaanlakka, 2000). Another
approach interprets KM as delivering the right knowledge to the
right persons at the right time. Baker and Badamshina (2002) advo-
cate a very different perspective by comprehending KM as devel-
oping and managing integrated, well-configured knowledge systems
and increasingly embedding work systems within these knowledge systems, rather than managing something as nebulous as knowledge per se.
Here we suggest looking at KM from a more general point of
view within the business model or organization. Management has
always been responsible for effective acquisition and allocation of
people, resources and tools at all levels in an organization, as well
as for the planning of processes and the evaluation of the results.
In a similar way, it is possible to define KM as effective acquisition and allocation of knowledge at all levels in an organization, as well as the planning of knowledge processes and the evaluation of the
results. Naturally, this includes building and enhancing knowledge
systems although not embedding work systems within these knowl-
edge systems but efficiently embedding knowledge systems within
work systems.
Whereas there is no generally agreed upon definition of KM, there is near consensus that it constitutes the combination of all the actions necessary for ensuring that organizations learn from past practice and make effective use of all skills and knowledge that their staff possesses (Powell, 2003).
In many organizations ‘managing’ knowledge has come to occupy
a central place in a manager’s work although knowledge engineers
ought to play the leading role in this process. Knowledge manage-
ment makes great demands on the strategic insight, problem-solving
ability, and tact of the person involved in this activity.
To be efficient, KM has to include the following activities:

1. Determination and identification of needs for knowledge (information).
2. Location of needs in knowledge (information).
3. Search for knowledge (information). It is necessary to understand that such a search can be unsuccessful, i.e., without finding/discovering the necessary knowledge.
4. Knowledge discovery and collection consists of finding the nec-
essary knowledge and bringing it to the organization. According
to the current terminology, knowledge discovery is a process of
transformation of data into knowledge using data mining and information extraction.
5. Knowledge creation/production is a process of generating (constructing) overall new knowledge or of reconstructing locally new knowledge, i.e., knowledge that is new for those who reconstruct it but existed somewhere before this reconstruction.
6. Knowledge reception is a process of receiving information (data)
sent by another system, e.g., an agent or organization, and con-
verting it into knowledge.
7. Knowledge acquisition has several forms: (1) it is a process of
acceptance of the created, received or found knowledge into
the system, e.g., into the organization or the knowledge base;
(2) it is a process of capturing existing implicit knowledge
and transforming it into explicit knowledge, which is performed
mainly through knowledge sessions of experts (knowledge engi-
neers) with the knowers, i.e., people who have this implicit
knowledge.
8. Knowledge appropriation and representation is a process of mak-
ing knowledge suitable for definite people and/or definite tasks
and/or definite organizations. It is performed by transformation
and transmutation of knowledge, knowledge representations, and
knowledge carriers.
9. Knowledge codification is a process of changing knowledge representation aimed at providing a possibility for placing knowledge in a structured repository using knowledge models.
10. Knowledge storing is a process of accumulation of knowledge in
a physical repository such as a database, knowledge base, library
or archive.
11. Knowledge integration has several forms: (1) it is a process of
integration of one knowledge system into another knowledge sys-
tem; (2) it is a process of transformation of several knowledge
systems into one knowledge system; and (3) it is a process of
integration of a knowledge system into some area of activity.
12. Evaluation of knowledge assets is a process of finding essential
properties, parameters, characteristics, and attributes of knowledge
in the organization.
13. Knowledge sharing and dissemination is a process of transmis-


sion and distribution of knowledge inside an organization, which
is often done through person-to-person contacts, lectures, work-
shops, seminars, webinars, and e-mails.
14. Knowledge hiding is a process of protecting knowledge assets
from unsanctioned access, which is often missed in descriptions
of knowledge management in organizations.
15. Knowledge translation is a process of transmitting knowledge from one area to another, e.g., from creation and discovery to applications.
16. Knowledge maintenance consists of actions aimed at modifying, updating, and correcting organizational knowledge so as to keep it operational and acceptable to its users, increasing its usability.
17. Knowledge application, implementation, and utilization are processes in which knowledge existing in an organization is used for supporting various activities, e.g., for problem solving or decision-making.
18. Knowledge monitoring is a process of monitoring and evaluating
of knowledge usage.
19. Knowledge exchange and trade are performed in interaction with
other organizations and individuals.
20. Knowledge revision is a process of evaluating the situation and changing knowledge when it is reasonable or necessary.
21. Knowledge retirement is a process of deleting and excluding some
knowledge items.

Each of the activities from the knowledge management process includes the following stages:

➢ Goal determination;
➢ Means determination;
➢ Activity organization;
➢ Activity realization;
➢ Result evaluation.

All these activities go on concurrently shaping different cycles. For instance, search for knowledge may be repeated several times before
the result is obtained, or the cycle of knowledge creation, knowledge appropriation, and knowledge storing is performed many times through the whole process of KM.
An important component of knowledge management is the Orga-
nizational Knowledge cycle:

1. Knowledge acquisition;
2. Knowledge dissemination;
3. Knowledge utilization.

Some of these activities, e.g., knowledge codification or knowledge storing, involve only explicit knowledge, while others, e.g.,
knowledge creation or knowledge appropriation, involve both explicit
and tacit knowledge. Knowledge acquisition includes extraction, col-
lection, analysis, modeling and validation of knowledge by means of
knowledge engineering for knowledge management projects.
Usually it is possible to access explicit knowledge in an organi-
zation at three stages: before, during, or after KM-related activities.
However, access to tacit knowledge is rather restricted.
Note that all research and application of knowledge management
concentrate their attention on knowledge discovery, creation, acqui-
sition, and storing. At the same time, an important type of pro-
cesses in knowledge management has been overlooked for a long time,
namely, knowledge protection by means of knowledge hiding. In con-
trast to knowledge sharing, hiding is an intentional concealment of
knowledge requested or supposedly looked for by another person or
organization.
In the perfect world, all knowledge would be open to everybody. In contrast to this ideal picture, there are many restrictions on the access to different knowledge items in the world we live in. This is
caused by various reasons. An example of such restrictions is paid
access versus open access to scientific and other publications. The
reason for this is that publishers have to spend money on publica-
tion and they do not want to do this without compensation of the
expenses and even some profit. Another situation emerges due to
the competition between companies, which necessitates protection of
their knowledge from competitors.
Only recently, researchers started studies of knowledge hiding in organizations (cf., for example, (Connelly et al., 2011)). Researchers
found that there are three ways of knowledge hiding: evasive hiding,
rationalized hiding, and playing dumb.
Another activity missed by knowledge management studies is
knowledge retirement. Remember to forget is an essential principle
for efficient knowledge management. Forgetting, i.e., eliminating or
deleting knowledge, is important in the following situations:

• When the knowledge base has limited space, it is necessary to make room for new knowledge. In this case, it is necessary to correctly select what can be deleted without impairing the functioning of the person or organization (a minimal eviction sketch is given after this list).
• When new knowledge contradicts some stored knowledge and preservation of the old knowledge may decrease the efficiency of the system, it is necessary to delete the old knowledge. This is a general situation in expert systems and in non-monotonic and default logics, where old knowledge is excluded to evade contradictions and inconsistency (cf., Section 3.3).
• When some knowledge may be detrimental to the system, its elim-
ination is vital. Operational knowledge in the form of computer
viruses gives examples of such knowledge.

Now knowledge maintenance is mostly developed for operational knowledge in the form of computer and network software. For
instance, software developers often release upgrades in the form of a
“patch” in order to correct program bugs that are an inevitable part
of software development (Burgin and Debnath, 2006). Many software
companies also suggest updates for solving different problems with
their software.
At the same time, it is also necessary to maintain descriptive and representational knowledge. For instance, it is important to permanently update information in databases and knowledge bases because
outdated information can be very detrimental. Such processes are
continuously going on in web search systems, which constantly
update the stored search data by means of the system called the
Index.
[Figure 7.7. Internal structure of knowledge management: people, processes, and technology connected by mutual relations. Processes define the roles of, and knowledge needed by, people; people help design and then use processes; people help design and then operate technology; technology provides support for people; technology makes possible new kinds of processes; processes determine the need for technology.]

Knowledge management goes on in a system that has three basic components, people, technology, and processes, which are connected as shown in Figure 7.7 (Edwards, 2009; 2011).
Arrows in Figure 7.7 show what one component is doing with
another component. For instance, processes define the roles of and
knowledge needed by people, while technology provides support for
people. Processes determine needs for technology. In turn, technology
makes possible new kinds of processes.
To conclude, it is necessary to make two remarks. First, knowl-
edge management is a very popular and flourishing area of research
and here we presented only some basic elements from its theory and
methodology. It is possible to find much more in the literature on
this topic.
Second, it is vital to understand that knowledge itself does not
solve our problems or problems of the organization. Even having the
best knowledge of what to do, people often do not or cannot do what is necessary to achieve their goals. Thus, it is extremely important how we utilize the knowledge we have. This applies to individuals, organizations, and societies.

Chapter 8

Knowledge, Data, and Information

The mathematician, carried along on his flood of symbols, dealing


apparently with purely formal truths, may still reach results of
endless importance for our description of the physical universe.
Karl Pearson

Knowledge is intrinsically related to data and information. This association became evident with the advent of computers. How-
ever, people still struggle to achieve unambiguous understanding of
relations between these concepts. There is a huge diversity of judg-
ments on this topic. Many papers and books present views of their
authors on such relations. Some works reflect points of view of differ-
ent authors. For instance, 130 definitions of data, information, and
knowledge formulated by 45 scholars are collected in (Zins, 2007). A
variety of approaches to this problem is also described in the book
(Burgin, 2010).
Here, based on the general theory of information (Burgin, 2010),
we further develop the understanding according to which knowledge,
data, and information do not belong to the same plane of reality
having, as a result, dissimilar functions. Knowledge and data are of
the same ilk but situated on different levels of structural reality play-
ing the role of substance in the world of structures, while information
functions in the world of structures as energy works in the physical world.
Consequently, information is complementary to knowledge and data
constituting orthogonal dimensions in the world of structures.

8.1. Epistemic structures and cognitive information

The greatest obstacle to discovering the shape of the earth, the


continents, and the oceans was not ignorance but the illusion of
knowledge.
Daniel J. Boorstin

As it is demonstrated in Section 3.1, epistemic structures provide a sound foundation for knowledge studies. At the same time, they
are also crucial for understanding the information phenomenon and
building information theories.
When people speak about information, they mean, as a rule,
cognitive information. Indeed, since the 16th century, we find the
word information in ordinary French, English, Spanish, and Italian
in the sense we use it today: to instruct, to furnish with knowledge
(Capurro, 1991). However, scientific usage of the notion of informa-
tion (cf., for example, (Loewenstein, 1999)) implies a necessity to
have a more general definition. For instance, Heidegger pointed to
the naturalization of the concept of information in biology in the
form of genetic information (Heidegger and Fink, 1970). An example
of such a situation is when biologists discuss information in DNA in
general or in the human genome, in particular.
As a result, we come to the world of structures. Modification of
structural features of a system, and transformation of other system
characteristics through these changes, is the essence of information in
the strict sense. This correlates with von Weizsäcker's remark that information, being neither matter nor energy, according to (Wiener, 1961), has a similar status as the "platonic eidos" and the "Aristotelian form" (Weizsäcker, 1974).
In the context of the general theory of information, these ideas
are crystallized in the following principle (Burgin, 2010).

Ontological Principle O2a (the Special Transformation Principle). Information in the strict sense or proper information or, simply, information for a system R, is a capacity to change structural infological elements from an infological system IF(R) of the system R.
The infological system plays the role of a free parameter in the definition of information. One of the ways to vary this parame-
ter is to choose a definite kind of infological elements. Additional
conditions on infological elements imply a more restricted concept of
information.
Note that it is possible to comprehend the concept of information
in a much broader sense, which includes energy as a particular form
of information (Burgin, 2010).
There are researchers whose interpretations of information come
close to the definition given in the Special Transformation Principle.
One of them is developed by Karpatschof (2000) based on the concept
of release mechanism and activity theory. He postulates existence of
systems with stored potential energy, which is released by specific
release mechanisms triggered by some signals with a low energy. Then
information is defined as a property of a signal to trigger the release
mechanism of a system (Karpatschof, 2000). If we take the release
mechanism of a system R as its infological system IF(R) and take this
definition, we come to the definition from the Special Transformation
Principle.
Discussing definitions of data, knowledge and information,
Capurro explains that he prefers the everyday meaning (since Moder-
nity) of the term information, namely, information is “the act of
communicating knowledge” (cf., (Zins, 2007)).
In the same way as energy is a source of physical system transfor-
mation and emergence, information is a source of transformation and
emergence in epistemic systems in general and in knowledge systems,
in particular.
Madden (2000) defines information as “a stimulus originating in
one system that affects the interpretation by another system of either
the second system’s relationship to the first, or of the relationship
the two systems share with a given environment . . .”. Thus, if we
take interpretations or relationships of a system R as its infological
system IF(R) and take this definition, we come to the definition from
the Special Transformation Principle.
In another work (Madden, 2004), information is defined as
“a stimulus which expands or amends the World View of the
informed”. Thus, if we take the World View of a system R as its


infological system IF(R) and take the new definition, we once more
come to the definition from the Transformation Principle formulated above or to the Cognitive Transformation Principle presented below.
However, here we are mostly interested in the particular case of
proper information — in cognitive information. It is identified by the
Cognitive Transformation Principle.
To better understand how an infological system can help to explicate
the concept of information in the strict sense, we consider cognitive
infological systems.
Definition 8.1.1. An infological system IF(R) of the system R is called cognitive if it contains (stores) epistemic structures (elements) such as knowledge, data, images, ideas, fancies, abstractions, beliefs, etc.
Cognitive infological systems are standard examples of infologi-
cal systems, while their elements, such as knowledge, data, images,
ideas, fantasies, abstractions, and beliefs, are standard examples of
infological elements. Cognitive infological systems are very impor-
tant, especially, for intelligent systems as the majority of researchers
believe that information is intrinsically connected to knowledge
(cf., (Flückiger, 1995)).
The system of knowledge KIF(R) of a system R is a kind of infological system of an intelligent system R. In cybernetics, the knowl-
edge system of R is called the thesaurus Th(R) of the system R.
A thesaurus is a part of larger cognitive infological systems. Another
example of an infological system is the memory of a computer. Such
a memory is a place in which data and programs are stored.
A cognitive infological system of a system R is denoted by CIF(R)
and is related to cognitive information. This relation is described in
the following ontological principle.
Ontological Principle O2c (the Cognitive Transformation
Principle). Cognitive information for a system R is a (potential)
capacity to cause changes in the cognitive infological system CIF(R)
of the system R.
In this context, a cognitive infological system CIF(R) con-
tains, acquires, stores, and processes epistemic structures, such as
knowledge, data, ideas, beliefs, images, algorithms, tasks, procedures,


problems, schemas, scenarios, values, measures, opinions, goals,
ideals, fantasies, abstractions, etc.
An epistemic structure (element) E is any system (element) used for
cognition. It represents or reflects, i.e., contains information about,
some domain or system, which is called the domain of the epistemic
structure E.
We discern pure epistemic structures and weighted epistemic structures.
A pure epistemic structure p contains information only about its
domain.
Example 8.1.1. A proposition is a pure epistemic structure.
Example 8.1.2. A semantic network is a pure epistemic structure.
Example 8.1.3. A frame is a pure epistemic structure.
A weighted epistemic structure consists of a pure epistemic structure p and its weights. Namely, a weighted epistemic structure has the form (p; w1, w2, w3, ..., wk), where wi is the weight of p in the dimension i.
Examples of weighted epistemic structures are considered in Section 3.1.
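The weighted epistemic structure described above maps directly onto a small data type; the dimension names used in the example are illustrative assumptions, not part of the original definition.

```python
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class WeightedEpistemicStructure:
    """A pure epistemic structure p together with its weights (p; w1, ..., wk)."""
    structure: Any              # e.g., a proposition, a frame, or a semantic network
    weights: Dict[str, float]   # the weight of p in each named dimension

    def weight(self, dimension: str) -> float:
        """Return the weight of p in the given dimension (0.0 if absent)."""
        return self.weights.get(dimension, 0.0)

if __name__ == "__main__":
    p = WeightedEpistemicStructure(
        structure="The positron has the same mass as the electron",
        weights={"confidence": 0.99, "relevance": 0.7},   # illustrative dimensions
    )
    print(p.weight("confidence"))
```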
Some researchers also relate information to structures. For
instance, information is characterized as a property of how entities
are organized and arranged, but not the property of entities them-
selves (Reading, 2006). Other researchers have related information
to form, while form is an explicit structure of an object (Burgin,
2012). For instance, Dretske (2000) characterizes information as an
attribute of the form (in-form-ation) that matter and energy have
but not as a feature of the matter and energy themselves.
Cognitive infological systems are very important, especially, for
intelligent systems. Indeed, the majority of researchers believe that
information in general is intrinsically connected to cognition, while
cognitive information is one of the three basic types of anthropic
information studied in (Burgin, 2011a). Moreover, some researchers
believe that people’s knowledge about physical reality is the result of
information they obtain from external sources (von Weizsäcker, 1958;
von Weizsäcker et al., 1958; Wheeler, 1990; Frieden, 1998). Understanding that physicists study physical systems not directly but only
through information they get from these systems has created a school
of thought about the role of information processing in physical pro-
cesses and its influence on physical theories. According to one of
the leading physicists of the 20th century John Archibald Wheeler
(1911–2008), it means that every physical quantity derives its ulti-
mate significance from information. He called this idea “It from Bit”
where “It” stands for things, while “Bit” impersonates information
as the most popular information unit (Wheeler, 1990). For Wheeler
and his followers, space–time itself must be understood and described
in terms of a more fundamental pregeometry without dimensions and
classical causality. These features of the physical world only appear as
emergent properties in the ideal modeling the physical reality based
on information about complex interactions of very simple basic ele-
ments, such as subatomic particles.
Not all proper information is cognitive. For instance, there are
also such types of information as emotional information and effective
information, which are proper but not cognitive as they impact the
emotional and effective subsystems of the brain (Burgin, 2010).
In the mathematical representation of information, a cognitive
infological system is modeled by a mathematical structure, for exam-
ple, a space of knowledge, beliefs, and fantasies. Information, or more
exactly, a unit of information, is an operator acting on this structure.
A global unit of information is also an operator. Its place is in a higher
level of hierarchy as it acts on the space of all (or some) cognitive
infological systems.
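To make the operator view above concrete, here is a minimal Python sketch (a toy model, not the book's formalism; all names are hypothetical): the cognitive infological system is represented as a set of knowledge items, and a portion of information is an operator that transforms this set.

    from typing import Callable, FrozenSet

    KnowledgeState = FrozenSet[str]                                   # a crude model of CIF(R)
    InformationPortion = Callable[[KnowledgeState], KnowledgeState]   # an operator acting on it

    def learn(item: str) -> InformationPortion:
        """A portion of information that adds one knowledge item to the system."""
        return lambda state: state | {item}

    state: KnowledgeState = frozenset({"snow is white"})
    state = learn("grass is green")(state)    # the operator acts on the structure
    print(sorted(state))                      # ['grass is green', 'snow is white']

A global unit of information would, analogously, be an operator acting on a family of such states rather than on a single one.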
Knowledge constitutes a substantial part of the cognitive infolog-
ical system. Some researchers even equate cognition with knowledge
acquisition and consider the system of knowledge often called the-
saurus as the whole cognitive infological system. In a general case,
knowledge, as a whole, constitutes a huge system, which is orga-
nized hierarchically and has many levels as it is demonstrated in
Chapters 4–6.
However, two issues, absence of the exact concept of structure and
lack of understanding that structures can objectively exist, result
in contradictions and misconceptions related to information. For


instance, one author writes that information is simply a construct
used to explain causal interaction, and in the next sentence, the same
author asserts that information is a fundamental source of change in
the natural world. Constructs cannot be sources of change — they
can only explain change.
The Ontological Principle O2c implies that information is not of
the same kind as knowledge and data, which are structures. Actually,
if we take that matter is the name for all substances as opposed to
energy and the vacuum, we have the relation that is represented by
the diagram called the Structure-Information-Matter-Energy (SIME)
Square considered in the next section.

8.2. Structural aspects of knowledge–information duality

Information is not knowledge.
Albert Einstein

In the traditional approach, relations between knowledge and information
are usually analyzed in the context of the triad Data–Information–Knowledge,
which attracted the attention of researchers only in the 20th century.
In contrast to this, the best minds in society were concerned with the
problem of knowledge from ancient times. Concepts of information and data
in their contemporary understanding were introduced much later. Consequently,
information attracted the interest of researchers millennia after knowledge
had become one of the primary concerns of philosophers. Only in the 1920s
did the first publications appear that later formed the core of two principal
directions of information theory. In 1922, Fisher introduced an important
information measure now called Fisher information and used in a
variety of areas — from physics to information geometry and from
biology to sociology. Two other papers (Nyquist, 1924) and (Hartley,
1928), which started the development of the most popular direction in
information theory, appeared a little later in communication
theory. Later many other directions, such as algorithmic information
theory, qualitative information theory, semantic information theory,


pragmatic information theory, economic information theory, dynamic
information theory, quantum information theory, and operator infor-
mation theory, have been developed by different researchers (Burgin,
2010).
Interest in the third component of the triad Data–Information–Knowledge,
namely data, emerged in the research community only after computers
became the foremost information processing devices. As a result,
researchers started to study the whole triad only in the 1980s.
It is possible to approach relations between data,
information and knowledge from many directions. Now the most pop-
ular approach to the triad Data–Information–Knowledge represents
hierarchical relations between them in a form of a pyramid with data
at the bottom, knowledge at the top and information in the middle
(cf., for example, (Landauer, 1998; Boisot and Canals, 2004; Sharma,
2005; Rowley, 2007)).
In knowledge management literature, this hierarchy is mostly
referred to as the Knowledge Hierarchy or the Knowledge Pyra-
mid, while in the information science domain, the same hierarchy
is called Information Hierarchy or Information Pyramid for obvious
reasons. Often the choice between using the label “Information” or
“Knowledge” is based on what the particular profession believes to
fit best into the scope of this profession. Here we use the term Data–
Information–Knowledge Hierarchy or Data–Information–Knowledge

Figure 8.1. The Data–Information–Knowledge pyramid: Data at the bottom, Information in the middle, and Knowledge at the top.

Pyramid because researchers prefer the image of a pyramid, as it
conveys more implicit information than an arbitrary hierarchy does.
Besides, researchers extended the Data–Information–
Knowledge Pyramid by one more level — wisdom. The extended form
of this pyramid is called the Data–Information–Knowledge–Wisdom
Hierarchy (Pyramid) (Sharma, 2005).
The form of the pyramid implies that the bulk of data in the
world is much larger than the volume of information, which, in turn,
is larger than the amount of knowledge in the world. Another
metaphor conveyed by the form of a pyramid is that data are
compressed into information, while information, in turn, is
condensed into knowledge. In other words, the concepts go from the
least to most processed or integrated, with data the rawest, and
knowledge the most rarefied. The message contained in the form of
the pyramid, as well as in the concept of a hierarchy, is that infor-
mation is situated on the higher level than data, while knowledge is
situated on the higher level than information. Here higher level can
mean “more valuable” or “better organized”.
As Sharma writes (2005), there are two separate threads that
lead to the origin of the Data–Information–Knowledge Hierarchy.
In knowledge management, Ackoff is often cited as the originator
of the Data–Information–Knowledge Hierarchy. His 1988 Presidential
Address to the International Society for General Systems Research
(ISGSR) (Ackoff, 1989) is considered by many to be the earliest
mention of the hierarchy, as he gives its detailed description
and does not cite any earlier sources on this topic. However, Zeleny
(1987) described this hierarchy in the field of knowledge management
two years earlier than Ackoff. In his paper, Zeleny defined data as
“know-nothing”, information as “know-what”, knowledge as “know-
how”, and added wisdom as “know-why” (cf., Figure 8.2). Almost
at the same time as Zeleny’s article appeared, Cooley (1987) built
the Data–Information–Knowledge Hierarchy in his discussion of tacit
knowledge and common sense. In all of these papers, no earlier work
is cited or referred to.
Nevertheless, much earlier in information science, Cleveland
(1982) described the Data–Information–Knowledge Hierarchy in

Figure 8.2. The extended Data–Information–Knowledge Pyramid or Data–Information–Knowledge–Wisdom Hierarchy: Data, Information, Knowledge, and Wisdom from bottom to top.

detail. Besides, Cleveland pointed out that the surprising origin of the
hierarchy itself is due to the poet T.S. Eliot, who wrote in “The Rock”
(1934):

Where is the Life we have lost in living?


Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?

This is the first vague mention of the Data–Information–


Knowledge Hierarchy, which was expanded by Cleveland and others
by adding the layer of “Data” or in terms of Cleveland, of “facts
and ideas”. Cleveland (1982) concedes that information scientists are
“still struggling with the definitions of basic terms” of the hierarchy.
He uses Eliot’s metaphor as a starting point to explain the basic
terms. Cleveland also agrees that there are many ways in which the
elements of the hierarchy may be defined, yet universal agreement
on them need not be a goal in itself. At the same time, the major-
ity of researchers assume that the difference between data, infor-
mation, and knowledge has pivotal importance in our information
age with its information technology and knowledge-based economy.
For instance, Landauer (1998) writes, “the repeated failure of nat-
ural language models to succeed beyond the most superficial level
is due to their not taking into account these different levels (of the
Data–Information–Knowledge Pyramid) and the fundamentally dif-
ferent processing required within and between them”.
The most popular approach to the relation between data and
information implies that information is an organized collection of
facts and data. In this context, the process of transition from data to
knowledge goes in two steps: at first, data are transformed into infor-
mation and then information is converted into knowledge by structur-
ing processes. Thus, information comes into view as an intermediate
level of similar phenomena situated between data and knowledge
forming the triad Data–Information–Knowledge.
However, the triad Data–Information–Knowledge is not always
visually represented by a three-level pyramid. Another structure
(cf., Figure 8.3) of the system Data–Information–Knowledge — the
chain — is considered by Liew (2007). In this structure, not only

Figure 8.3. The Data–Information–Knowledge chain: Data (captured and stored) are externalized (verbalized and/or illustrated) into Information; Information is processed and analyzed into a reconstructed picture of historical events and/or a projection of possible future events; this result is internalized (absorbed and understood by the human mind) as Knowledge.


levels of the system Data–Information–Knowledge are explicated but


also connections between these levels are described.
In this Data–Information–Knowledge Chain, data are interpreted
as recorded (captured and stored) symbols and signal readings, while
symbols are interpreted in a broad sense and include words (texts
and/or verbal expressions), numbers, diagrams, and images, and sig-
nals also interpreted in a broad sense include sensor and/or sensory
readings of light, sound, smell, taste, and touch.
Information, according to Liew (2007), is a message that contains
relevant meaning, implication, or input for decision and/or action,
while knowledge is the (1) cognition or recognition (know-what),
(2) capacity to act (know-how), and (3) understanding (know-why)
that resides or is contained within the mind or in the brain.
Some researchers extend the Data–Information–Knowledge Pyramid
by including an additional level called wisdom, labeling the result
the Data–Information–Knowledge–Wisdom Hierarchy (cf.,
Figure 8.2).
Other researchers include one more level, “understanding”, in
the Data–Information–Knowledge Pyramid. As Ackoff writes (1989),
descending from wisdom there are understanding, knowledge, infor-
mation, and, at the bottom, data (cf., Figure 8.4). Each of the levels
includes the categories that fall below it. However, the majority of
researchers do not add understanding to their studies of the Data–
Information–Knowledge Pyramid.
The Data–Information–Knowledge Pyramid is a very simple
schema. That is why it is so appealing to researchers. However, as
we know from physics and other sciences, reality does not always
comply with the simplest schemes. Therefore, to find relevance of
this pyramid to the objective reality, we need to understand all its
levels — data, information, and knowledge.
According to Davis, “data is the plural of datum, although the
singular form is rarely used. Purists who remember their first-year
Latin may insist on using a plural verb with data, but they forget
that English grammar permits collective nouns. Depending on the
context, data can be used in the plural or as a singular word meaning
a set or collection of facts. Etymologically, data, as noted, is the plural

Figure 8.4. The Data–Information–Knowledge–Understanding–Wisdom pyramid or hierarchy: Data, Information, Knowledge, Understanding, and Wisdom from bottom to top.

of datum, a noun formed from the past participle of the Latin verb
dare — to give. Originally, data were things that were given (accepted
as “true”). A data element, d , “is the smallest thing which can be
recognized as a discrete element of that class of things named by a
specific attribute, for a given unit of measure with a given precision
of measurement”. (cf., (Zins, 2007))
Data has experienced a variety of definitions, largely depending
on the context of its use. With the advent of information technol-
ogy the word data became very popular and is used in a diversity of
ways. For instance, information science defines data as unprocessed
information, while in other domains data are treated as a represen-
tation of objective facts. In computer science, expressions such as
a data stream and packets of data are commonly used. The concep-
tualizations of data as a flow in both a data stream and drowning
in data occur due to our common experience of conflating a multi-
plicity of moving objects with a flowing substance. Data can travel
down a communication channel. Other commonly encountered ways
of talking about data include having sources of data or working with
raw data. We can place data in storage, e.g., in files or in databases,
or fill a repository with data. Data are viewed as discrete entities.
They can pile up, be recorded or stored and manipulated, or captured


and retrieved. Data can be mined for useful information or we can
extract knowledge from data. Databases contain data. We can look
at the data, process data, or experience the tedium of data-entry. It
is possible to separate different classes of data, such as operational
data, product data, account data, planning data, input data, output
data, and so on. All these expressions reflecting usage of the term
data assign some meaning to this term.
In particular, this gives us two important conceptualizations of
data: data are a resource, and data are manipulable objects. In turn,
this may implicitly suggest that data are solid, physical things with
an objective existence, which allows manipulation and transformation
of data, such as rearrangement of data, conversion to a different form,
or sending data from one system to another.
Ackoff defines data as symbols that represent properties of objects,
events and their environments (Ackoff, 1989). In this context, data
are products of observation by people or automatic instrument
systems.
According to Hu and Feng (2006), data is a set of values recorded
in an information system, which are collected from the real world,
generated from some pre-defined procedures, indicating the nature
of stored values, or regarding usage of stored values themselves.
Data are also considered as discernible differences between states
of some systems in the world (Lloyd, 2000; 2002), i.e., light versus
dark, present versus absent, hot versus cold, 1 versus 0, + versus −,
etc. In many cases, binary digits, or bits, represent such differences
and thus, carry information about these differences. However, accord-
ing to Boisot (2002), information itself is a relation between these
discernible states and an observer. A given state may be informative
for someone. Information is then what an observer will extract from
data as a function of his/her expectations or prior knowledge (Boisot,
1998).
Many understand data as discrete atomistic tiny packets with no
inherent structure or necessary relationship between them. However,
this is not true. Besides, there are different kinds and types of data.
In addition, data as an abundant resource can pile up to such an
extent that many people find themselves drowning in data. As a


substance, data can be measured. The most popular way to mea-
sure data is in bits, where the term bit is derived from binary digit,
0 or 1. The reason for this is very simple. Namely, the vast major-
ity of information processing and storage systems, such as comput-
ers, calculators, embedded devices, CD, DVD, flash memory storage
devices, magneto-optical drives, and other electronic storage devices,
represent data in the form of sequences of binary digits.
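As a simple hedged illustration of this point (a sketch, not tied to any particular device or standard), the following Python lines show a small piece of data represented as a sequence of binary digits and measured in bits:

    text = "data"
    raw = text.encode("utf-8")                    # the stored byte representation
    bits = "".join(f"{byte:08b}" for byte in raw)
    print(bits)                                   # 01100100011000010111010001100001
    print(len(raw) * 8, "bits")                   # 32 bits
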
It would be useful to better clarify our understanding of data.
At first glance, data look like some material things in contrast to
knowledge, which is structural in essence. However, when we start
analyzing what data are, we see that this is not so. Indeed, numbers
are a kind of data. Let us take a number, say 10. In mathematics,
it is called a natural number. From mathematics, we also know that
natural numbers are collections of equivalent sets, i.e., sets that have the same
number of elements (Bourbaki, 1960). This is a structure. Another
way to define natural numbers is to axiomatically define the set N
of all natural numbers and to call its elements by the name natural
number (Kuratowski and Mostowski, 1967). Abstract sets and their
elements definitely are structures. One more way to represent natural
numbers is to use abstract properties (Burgin, 1989). This approach
develops the idea of Cantor, who defined the cardinal number of a set
A as a property of A that remains after abstraction from qualities of
elements from A and their order (cf., for example, (Kuratowski and
Mostowski, 1967)). Properties, abstract and real, are also structures.
Thus, in any mathematical sense, a natural number, in particular,
number 10, is a structure. Moreover, the number 10 has different
concrete representations in mathematical structures. In the decimal
numerical system, it is represented as 10. In the binary numerical
system, it is represented as 1010. In the ternary numerical system, it
is represented as 101. In English, it is represented by the word ten,
and so on. In the material world, it is possible to represent number
10 as a written word on paper, as a written word on a board, as a
said word, as a geometrical shape on the computer or TV screen, as
signals, as a state of some system and so on. All these representations
form a structure that we call number 10.
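The different representations listed above can be checked with a short Python sketch (an illustrative aside; the helper to_base is introduced here only for this example):

    def to_base(n: int, base: int) -> str:
        """Write a natural number in a positional system with the given base."""
        digits = ""
        while n:
            digits = str(n % base) + digits
            n //= base
        return digits or "0"

    n = 10
    print(to_base(n, 10))   # '10'   -- decimal representation
    print(to_base(n, 2))    # '1010' -- binary representation
    print(to_base(n, 3))    # '101'  -- ternary representation
    print("ten")            # an English-language representation of the same number
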
We come to similar situations analyzing letters and words, which


are also data. Let us consider the letter a. It also has different physical
representations in different typefaces, styles, sizes, and cases (various
forms of a and A), and many others. In addition, it is possible to represent the letter
a as a written symbol on paper, as a said word, as a written symbol
on a board, as a geometrical shape on the computer or TV screen,
as signals, as a state of some system, and so on. Thus, the letter a,
as well as any other letter or word, is a structure. It is known that
words are represented by written and printed symbols, pixels on the
screen, electrical charges, and brain-waves (Suppes and Han, 2000).
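The same point can be made with concrete encodings (a minimal sketch; the choice of UTF-8 here is just one of many possible codes):

    letter = "a"
    print(ord(letter))                   # 97 -- Unicode/ASCII code point
    print(letter.encode("utf-8"))        # b'a' -- byte representation
    print(format(ord(letter), "08b"))    # 01100001 -- bit pattern of that byte
    print(letter.upper(), letter)        # A a -- two glyph forms of one letter
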
Let us look how the upper level of the Data–Information–
Knowledge pyramid — knowledge — is defined and treated in con-
temporary studies. Logical analysis shows that knowledge is difficult
to define. Taking knowledge as the third and upper component of
the Data–Information–Knowledge pyramid, we see that in spite of
the long history of knowledge studies, there is no consensus on what
knowledge is. As we have seen, over the millennia, the philosophers
of each age and epoch have added their own ideas on the essence and
nature of knowledge to the list. Science has extended this list as well.
As a result, there is a lot of confusion in this area. As Land et al.
(2007) write, knowledge itself is understood to be a slippery concept,
which has many definitions.
In any case, our civilization is based on knowledge and information
processing. That is why it is so important to know what knowledge is.
For instance, the principal problem for computer science as well as
for computer technology is to process not only data but also knowl-
edge. Knowledge processing and management make problem solving
much more efficient and are crucial (if not vital) for big companies
and institutions (Ueno et al., 1987; Osuga, 1989; Dalkir, 2005). To
achieve this goal, it is necessary to distinguish knowledge and knowledge
representation, to know the regularities of knowledge structure, functioning,
and representation, and to develop software (and in some cases,
hardware) that is based on these regularities. Many intelligent sys-
tems search concept spaces that are explicitly or implicitly predefined
by the choice of knowledge representation. In effect, the knowledge
representation serves as a strong bias.
In contrast to this, some researchers conceive information as a


process, whereas knowledge is perceived as a state (Machlup et al.,
1983). According to Sholle (1999), information and knowledge are dis-
tinguished along three axes: multiplicity, temporality, and spatiality.
Multiplicity means that information is piecemeal, fragmented, and
particular, while knowledge is structured, coherent, and universal.
Temporality means that information is timely, transitory, and even
ephemeral, while knowledge is enduring and temporally expansive.
Spatiality means that information is a flow across spaces, while
knowledge is a stock, specifically located, yet spatially expansive.
If we take formal definitions of knowledge, we see that they deter-
mine only some specific knowledge representation. For instance, in
logic, knowledge is represented by logical propositions and predicates.
On the one hand, informal definitions of knowledge provide few
opportunities for computer processing of knowledge because computers
can process only formalized information. On the other hand, there is
a great variety of formalized knowledge representation schemes and
techniques: semantic and functional networks, frames, productions,
formal scenarios, relational, and logical structures. However, without
explicit knowledge about knowledge structures per se, these means
of representation are used inefficiently.
The middle level in the Data–Information–Knowledge pyramid is
information. We can see the lack of agreement about the definition of
this term, although a multitude of definitions have been suggested in
information sciences, knowledge management, and database theory.
Moreover, the effort to define information has been active in other
disciplines such as epistemology, cognitive sciences, computer science,
electrical engineering, and systems theory, among others.
Now let us look in more detail at structural relations between infor-
mation, knowledge, and data. A systemic approach demands not to
consider components from the Data–Information–Knowledge pyra-
mid as given but to define these concepts in a more consistent way
according to the understanding of the majority of researchers:

Information is structuring of data (or structured data);


Knowledge is structuring of information (or structured information).
However, another approach (cf., for example, (Meadow and Yuan,


1997)) suggests a different picture:

Data usually means a set of symbols with little or no meaning to a


recipient.
Information is a set of symbols that does have meaning or significance
to their recipient.
Knowledge is the accumulation and integration of information
received and processed by a recipient.

From the perspective of knowledge management, information is


used to designate isolated pieces of meaningful data. These data inte-
grated within a context constitute knowledge (Gundry, 2001; Probst,
Raub, and Romhard, 1999).
As a result, it is often assumed that data themselves are of no
value until they are transformed into a relevant form. This implies
that the difference between data and information is functional, not
structural.
Stenmark (2002) collected definitions of the components from the
Data–Information–Knowledge pyramid from seven sources.
Wiig (1993) considers information as facts organized to describe a
situation or condition, while knowledge consists of truths and beliefs,
perspectives and concepts, judgments and expectations, methodolo-
gies and “know how”.
Nonaka and Takeuchi (1995) consider information as a flow of
meaningful messages, while knowledge consists of commitments and
beliefs created from these messages.
According to Spek and Spijkervet (1997) data are not yet inter-
preted symbols, information consists of data with meaning, while
knowledge is the ability to assign meaning.
According to Davenport (1997) data are simple observations,
information consists of data with relevance and purpose, while knowl-
edge is valuable information from the human mind.
According to Davenport and Prusak (1998) data are discrete
facts, information consists of messages meant to change the receiver’s
perception, while knowledge is experiences, values, insights, and
contextual information.
Quigley and Debons (1999) treat data as a text that does not
answer questions to a particular problem. They look upon informa-
tion as a text that answers the questions who, when, what, or where.
In addition, they deal with knowledge as a text that answers the
questions why and how.
Choo et al. (2000) understand data as facts and messages, assume
that information is data vested with meaning, and perceive knowl-
edge as justified, true beliefs.
Dalkir (2005) treats data as “content that is directly observable or
verifiable,” information as “content that represents analyzed data”
or as “analyzed data — facts that have been organized in order to
impart meaning”, while knowledge is defined in his book as subjective
and valuable information.
One more view of the difference between data and information
is that data is potential information (Meadow, 1996). A message as
a set of data may potentially be information, but the potential is not
always realized.
This distinction is similar to the distinction between potential
energy and kinetic energy. In mechanics, kinetic energy is that asso-
ciated with movement. Potential energy is associated with the posi-
tion of a thing, e.g., a weight raised to a height. If the weight falls, its
energy becomes kinetic. Impact is what happens if the weight, with
its kinetic energy, hits something.
Meadow and Yuan (1997) suggest that in the information world,
impact is what happens after a recipient receives and in some man-
ner acts upon information. This perfectly correlates with Ontological
Principles O1 and O2 from the general theory of information (Burgin,
2010).
The approach to information most popular in computer science is
expressed by Rochester (1996), who defines information as an orga-
nized collection of facts and data. Rochester develops this definition
through building a hierarchy in which data are transformed into
information into knowledge into wisdom. Thus, information appears
as an intermediate level of similar phenomena leading from data to


knowledge.
An interesting approach to understanding data, information, and
knowledge in a unified context of semiotics is developed by Lenski
(2004). He suggests that data denote the syntactical dimension of a
sign, knowledge denotes the semantic dimension of a sign, and infor-
mation denotes the pragmatic dimension of a sign.
Data, in the sense of Lenski, are a system that is organized by
structural and grammatical rules of sign. In contrast to this, knowl-
edge results from neglecting the amount of individual contribution
to the semantic abstraction process. However, such an abstraction
process may be only subjectively acknowledged resulting in personal
knowledge. Comprising pragmatic dimension, information is bound
to a (cognitive) system that processes the possible contributions pro-
vided by signs that constitute data for a possible action. Moreover,
information inherits the same interpretation relation as knowledge
with the difference that the latter is abstracted from any reference to
the actual performance whereas information, in contrast, emphasizes
reaction, and performance. As a result, knowledge and information
are closely tied together.
To elaborate his own definition of information, Lenski (2004)
uses seven principles as a system of prerequisites for any subse-
quent theory of information that claims to capture the essentials
of a publication-related concept of information. These principles
were introduced by other researchers as determining features of
information.
Principle 1. According to Bateson (1973), information is a differ-
ence that makes a difference.
Principle 2. According to Losee (1997), information is the value of
characteristics in the processes’ output.
Principle 3. According to Belkin and Robertson (1976), informa-
tion is that which is capable of transforming structure.
Principle 4. According to Brookes (1977), information is that which
modifies . . . a knowledge structure.
Principle 5. According to Brookes (1980), knowledge is a linked


structure of concepts.
Principle 6. According to Brookes (1980), information is a small
part of such a structure.
Principle 7. According to Mason (1978), information can be viewed
as a collection of symbols.
After formulating these principles, Lenski gives his own interpre-
tation, explicating their relation to the “fundamental equation” (8.1)
of Brookes (1980).
K(S) + ∆I = K(S + ∆S). (8.1)
This equation reflects the situation when a portion of information
∆I acts on a knowledge structure K(S), transforming it into the
structure K(S + ∆S) with ∆S as the effect of this change.
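A toy reading of Brookes’ equation (8.1) can be sketched in Python (a modelling choice made only for illustration, not Brookes’ own formalism): the knowledge structure K(S) is a set of links between concepts, and a portion of information ∆I is a small modification of that set.

    # K(S): a linked structure of concepts (cf. Principle 5).
    K_S = frozenset({("data", "information"), ("information", "knowledge")})

    # Delta I: a small part of such a structure (cf. Principle 6).
    delta_I = frozenset({("knowledge", "wisdom")})

    # K(S + Delta S): the knowledge structure after the information has acted on it.
    K_S_plus = K_S | delta_I

    # In this toy case the effect Delta S of the change coincides with the added link.
    assert K_S_plus - K_S == delta_I
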
In this context, principles of Lenski acquire the following meaning.
Principle 1 reflects the overall characterization of information
along with its functional behavior and refers to the ∆-operator in
the “fundamental equation”.
Principle 2 expresses a process involved with results ∆I that are
constituents of information.
Principle 3 specifies the concept of difference as a transformation
process resulting in K[S + ∆S].
Principle 4 shows on what information acts, namely, on the knowl-
edge (or structure) system K[S].
Principle 5 explains what knowledge (or a knowledge structure) is.
Principle 6 relates information to knowledge structures.
Principle 7 specifies carriers of information.
To achieve his goal, Lenski (2004) formulates one more principle.
Principle 8. The emergence of information is problem-driven.
Based on these principles, Lenski (2004) presents a working defi-
nition for a publication-related concept of information.
Definition 8.2.1. Information is the result of a problem-driven dif-
ferentiation process in a structured knowledge base.
Thus, very often, it is assumed that being different, knowledge


and information nevertheless have the same nature. For instance,
the sociologist Merton (1968) writes that knowledge implies a body
of facts or ideas, whereas information carries no such implication of
systematically connected facts or ideas.
In many books and papers, the terms knowledge and informa-
tion are used interchangeably, even though the two entities, being
intertwined and interrelated concepts, are far from identical. More-
over, some researchers define information in terms of data and
knowledge. At the same time, other researchers define knowledge
and/or data in terms of information. For instance, Kogut and Zander
(1992) conceive information as “knowledge which can be transmit-
ted without loss of integrity”, while Meadow and Yuan (1997) write
that knowledge is the accumulation and integration of informa-
tion. MacKay (1969) also assumes that knowledge itself turns out
to be a special kind of information. In a similar way, Davenport
(1997) treats information as data with relevance and purpose, while
Tuomi (1999) argues that data emerge as a result of adding value to
information.
In a similar way, commonplace usage of words data and informa-
tion blurs differences between these concepts. For instance, Machlup
and Mansfield write (1980):
“Data are the things given to the analyst, investigator, or problem-solver;
they may be numbers, words, sentences, records, assumptions — just any-
thing given, no matter in what form and of what origin . . . Many writers
prefer to see data themselves as a type of information, while others want
information to be a type of data.”

At the same time, according to Frost (1986), “Knowledge is the


symbolic representation of aspects of some named universe of dis-
course”, while data are the symbolic representation of simple aspects
of some named universe of discourse. The universe of discourse may
be the actual universe or a fictional one, one in the future, or in
some belief. In any case, this means that data are a special case of
knowledge.
All these and many other inconsistencies related to the Data–
Information–Knowledge pyramid cause grounded criticism of this
approach to understanding data, knowledge, and information. For


instance, Capurro and Hjorland (2003) write that the semantic
concept of information, located between data and knowledge, is
not consistent with the view that equates information management
with information technology. Boisot and Canals (2004) criticize dis-
tinctions that have been drawn between data, information, and
knowledge by those who analyzed the Data–Information–Knowledge
pyramid. Fricke (2008) also offers valid arguments that this pyramid
is unsound and methodologically undesirable explaining that it has
no foundation.
In addition, researchers criticized various implications of the
Data–Information–Knowledge pyramid. For instance, challenging the
traditional approach, Tuomi (1999) argues that in contrast to the
conventional estimation that knowledge is more valuable than infor-
mation, while information is superior to data, these relations have
to be reversed. Thus, data emerge as a result of adding value to
information, which in turn is knowledge that has been structured
and verbalized. Moreover, there are no “raw” data as any observ-
able, measurable, and collectible fact has been affected by the very
knowledge that made this fact observable, measurable, and col-
lectible. According to Tuomi, knowledge, embedded in minds of peo-
ple, is a prerequisite for getting information. Some researchers treat
knowledge in the mind as information, which, in turn, is explicit
and appropriate for processing when it is codified into data. Since
only data can effectively be processed by computers, Tuomi (1999)
explains, data is from the technological perspective the most valu-
able of the three components of the Data–Information–Knowledge
pyramid, which consequently should be turned upside-down.
In a similar way, Childers reasons “. . . knowledge is that which is
known, and it exists in the mind of the knower in electrical pulses.
Alternatively, it can be disembodied into symbolic representations of
that knowledge (at this point becoming a particular kind of infor-
mation, not knowledge). Strictly speaking, represented knowledge is
information. Knowledge — that which is known — is by definition
subjective, even when aggregated to the level of social, or public,
knowledge — which is the sum, in a sense, of individual “knowings”.
Data and information can be studied as perceived by and “embod-


ied” (known) by the person or as found in the world outside the
person” (cf., (Zins, 2007)).
Le Coadic also explains “datum (in our sector mainly electronic)
is the conventional representation, after coding (using ASCII, for
example), of information. Information is knowledge recorded on a
spatio-temporal support. Knowledge is the result of forming in mind
an idea of something” (cf., (Zins, 2007)).
It means that information is a kind of knowledge, while data are
obtained from information.
In opposition to the conventional approach, according to which
knowledge is a kind of information, Zins (2007) believes that in the
subjective domain, information is empirical knowledge.
In turn, Poli (cf., (Zins, 2007)) does not agree that data, informa-
tion, knowledge, and message are placed on the same level of analy-
sis. In his opinion, message is the “vehicle” carrying either data or
information (which can be taken as synonymous). At the same time,
knowledge hints to either a systematic framework (e.g., laws, rules or
regularities, that is higher-order “abstractions” from data) or what
somebody or some community knows.
Stewart (2002) advocates a subjective approach to relations
between data and knowledge writing “. . . one man’s data can be
another man’s knowledge, and vice versa, depending on context”.
All these considerations show that it is necessary to make a
distinction between data, knowledge, beliefs, ideas and their rep-
resentations. For knowledge, it is a well-known fact and knowledge
representation is an active research area (cf., for example, (Ueno
et al., 1987)). Change of representation is called codification. Knowl-
edge codification serves the pivotal role of allowing what is collec-
tively known to be shared and used (Dalkir, 2005). At the same
time, differences between data and data representations are often
ignored.
A variety of perspectives on the triad Data–Information–
Knowledge is collected in (Zins, 2007), where 45 scholars formulated
130 definitions of data, information, and knowledge mapping the
major conceptual approaches for defining these three key concepts.
Here we acquaint the reader with some of these approaches from


(Zins, 2007) in cases that are different from the conventional
approach to the Data–Information–Knowledge pyramid.
In contrast to the widespread opinion, Rafael Capurro suggests
that data are (or datum is) an abstraction. Indeed, while the con-
cept of data or datum suggests that there is something there that is
purely given and that can be known as such, this understanding con-
tradicts the grounded opinion that there is nothing like ‘the given’
or ‘naked facts’ but that every (human) experience/knowledge is
biased.
In contrast to data, Capurro comprehends knowledge as the event
of meaning selection of a (psychic/social) system from its “world” on
the basis of communication. To know is then to understand on the
grounds of making a difference between message as a meaning offer
and information as meaning selection.
According to Capurro, information is a multi-layered concept with
Latin roots (where informatio means “to give a form”) going back to
Greek ontology and epistemology. The use of this concept in infor-
mation science is, at first sight, highly controversial when it
refers to the everyday meaning (since Modernity), which explains
that information is “the act of communicating knowledge”. In addi-
tion, Capurro suggests using this definition as far as it points to the
phenomenon of message that he treats as the basic one in information
science.
It brings us to the Capurro communication triad (cf., Figure 8.5).
Following the main principles of systems theory and second-order
cybernetics, Capurro advocates that a message is a meaning offer,
while information refers to the meaning selection within a system and
understanding expresses the possibility that the receiver integrates
the selection within his/her pre-knowledge.
Thus, Capurro concludes, “Putting the three concepts (“data”,
“information”, and “knowledge”) gives the impression of a logical

Figure 8.5. The Capurro communication triad: Message — Information — Understanding.

hierarchy: information is set together out of data and knowledge


comes out from putting together information. This is a fairytale”.
Summing up all discussions related to the Data–Information–
Knowledge pyramid and the Data–Information–Knowledge–Wisdom
pyramid, Bates (2010) writes that “it is difficult to take it from its
popular meaning and develop it into something sufficiently refined
to be useful for research”.
In comparison with knowledge, information is an active structure.
As some researchers have observed (cf., for example, (Hodgson and
Knudsen, 2007)), information causes some action.
This well correlates with the approach elaborated in the general
theory of information (Burgin, 2010) where in contrast to the Data–
Information–Knowledge Pyramid (Figure 8.1), Data–Information–
Knowledge Chain (Figure 8.3) and Lenski’s approach, a different
schema called the Knowledge–Information–Matter–Energy (KIME)
Square is elaborated. It is based on the Ontological Principle O2a
(the Special Transformation Principle) of the general theory of infor-
mation discussed in the previous section.
In essence, the Ontological Principle O2a implies that information
is not of the same kind as knowledge and data, which are structures
(Burgin, 2010). Taking matter as the name for all physical substances
as opposed to energy and the vacuum, we have the system of relations
represented by the diagram in Figure 8.6.
The SIME Square visualizes and embodies the following principle:

Information is related to structures as energy is related to matter.

Figure 8.6. The SIME Square: Energy and Information are similar, and Matter and Structures are similar; matter contains energy, and structures contain information.

Some researchers have also related information to the structure of an


object. For instance, information is characterized as a property of
how entities are organized and arranged, but not the property of
entities themselves (Reading, 2006). Other researchers have related
information to form, while form is an explicit structure of an object.
For instance, information is characterized as an attribute of the form
(in-form-ation) that matter and energy have and not of the matter
and energy themselves (Dretske, 2000).
However, two issues, absence of the exact concept of structure and
lack of understanding that structures can objectively exist, result
in contradictions and misconceptions related to information. For
instance, one author writes that information is simply a construct
used to explain causal interaction, and in the next sentence, the same
author asserts that information is a fundamental source of change in
the natural world. Constructs cannot be sources of change; they can
only explain change.
Assigning the place for information in the world, some researchers
also contrasted it to matter and energy. For instance, Wiener (1961)
wrote, “Information is information, not matter or energy.”
Bates (2010) also argues that information is not identical to the
physical material that composes it; rather information is the pattern
of organization of that material, not the material itself. However,
pattern of organization is a pattern of structure, and according to
the comprising definition of structure (cf., Sections 5.1.2 and 6.1),
a pattern of structure is itself a structure. This is the main distinc-
tion of her definition from the definition in the general theory of
information.
Here we are interested not in all structures but only in structures
called knowledge. That is why we build a special form of the SIME
Square, which is called the KIME Square (cf., Figure 8.7).
The KIME Square visualizes and embodies the following principle:

Information is related to knowledge and data as energy is related to matter.

This schema means that information has an essentially different
nature than knowledge and data, which are of the same kind.

Figure 8.7. The KIME Square: Energy and Information are similar, and Matter and Knowledge/Data are similar; matter contains energy, and knowledge and data contain information.

Knowledge and data are structures, while information is only rep-


resented and can be carried by structures.
This approach to knowledge is to a great extent similar to the
theory of MacKay (1969), who asserted that knowledge must be
understood as a coherent representation. Here a representation is any
structure (a pattern, picture, or model) whether abstract or concrete
which, by virtue of its features, could symbolize some other structure.
Such representation structures contain information elements. Infor-
mation elements in the sense of MacKay are not shapeless isolated
things, but are imbedded in a structure, which helps the receiver to
infer the meaning of the information intended by the sender. MacKay
illustrates this thought with the following example. A single word
rarely makes sense. Usually it takes a group of words or even several
sentences acting together to clarify the role of a single word within
a sentence. In this view, the receiver will find it all the easier to inte-
grate a piece of information in what she knows, the more familiar she
is with all words in the group or, at least, with some parts of it. In
particular, when people receive the same message several times from
different sources, usually it will each time become more familiar and
seem more plausible to them. This effect is by no means limited to
verbal communication. For instance, the result of a scientific experi-
ment gains in evidence the more often it is reproduced under similar
conditions.
However, the KIME representation of relations between knowl-
edge and other basic phenomena (the KIME Square) is different from
the approach of MacKay in several essential issues. First, MacKay
does not specify what knowledge is a representation of, while the KIME
model does (Burgin, 2010). Second, MacKay does not specify what
kind of representation knowledge is, while the KIME model does,
assuming that knowledge is a structure or a system of structures.
Third, the KIME model provides an explication of the quantum
knowledge structure (cf., Section 4.1).
The KIME Square well correlates with the distinction some
researchers in information sciences have continually made between
information and knowledge. For instance, Machlup (1983) and Sholle
(1999) distinguish information and knowledge along three axes:

1. Multiplicity: Information is piecemeal, fragmented, particular,


while knowledge is structured, coherent and universal.
2. Time: Information is timely, transitory, even ephemeral, while
knowledge is enduring and temporally expansive.
3. Space: Information is a flow across spaces, while knowledge is a
stock, specifically located, yet spatially expansive.

These distinctions show that information is conceived of as a pro-


cess, whereas knowledge is a specific kind of substance. This under-
standing well correlates with the information concept determined by
the Ontological Principle 2c from the general theory of information
(Burgin, 2010), which is incorporated in the KIME Square. Indeed,
cognitive information is conceived as a kind of energy in the World of
Structures, which under definite conditions induces structural work
on knowledge structures. At the same time, knowledge is a substance
in the World of Structures.
Mey (1986) considered matter, energy, and information as a triad,
the elements of which, in their interaction, constitute those features
of an object that can be perceived by people. The SIME Square shows that
it is necessary to add structure to this triad.
Some researchers, such as von Weizsäcker or Mattessich, wrote
about similarities between energy and information. For instance,
Mattessich (1993), assuming a holistic point of view and tracing
information back to the physical level, concluded that every man-
ifestation of active or potential energy is connected to some kind
of information, just as the transmission of any kind of information


requires active or potential energy.
It is possible to ask why data and knowledge occupy the same
place in the KIME Square (cf., Figure 8.7). It is possible to explain
similarities and distinctions between data and knowledge with the
following two metaphors.
Metaphor A. Data and knowledge are like molecules, but data
are like molecules of water, which consist of just three atoms, while knowledge is
like molecules of DNA, which unite billions of atoms.
Metaphor B. Data and knowledge are like living beings, but data
are like bacteria, while knowledge is like a human being.
It is possible to base our understanding of how knowledge is
different from data on the following descriptions of knowledge and
data:

• Knowledge is a compressed, goal-oriented structural depository


(carrier) of refined information.
• Data is a raw structural depository (carrier) of information.

It is possible to treat conversion of data into knowledge as refine-


ment of information.
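These descriptions can be illustrated with a deliberately simple Python sketch (an illustrative analogy only, not a formal model from the book): raw data serve as an unrefined carrier of information, and the knowledge extracted from them is a compressed, goal-oriented formulation of the same information.

    # Raw data: repeated temperature readings (a raw depository of information).
    readings = [99.8, 100.1, 100.0, 99.9, 100.2]

    # Refinement: compress the readings into a compact, goal-oriented statement.
    mean = sum(readings) / len(readings)
    knowledge = f"In this experiment, water boiled at about {mean:.1f} degrees C."
    print(knowledge)    # a compressed depository (carrier) of the refined information
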
An interesting question is which of these three essences, data,
information, and knowledge, are external phenomena, i.e., belong to
the physical world, and which are internal phenomena, i.e., belong to
the mental world. Zins (2007) specifies five models of
these relations:

1. Data and information are external phenomena, while knowledge


is an internal phenomenon.
2. Data constitute an external phenomenon, while knowledge and
information are internal phenomena.
3. Data constitute an external phenomenon, while knowledge and
information can be both external and internal.
4. Data and information can be both external and internal, while
knowledge is an internal phenomenon.
5. Data, knowledge, and information can be both external and
internal.
According to the general theory of information, data, knowledge,


and information all belong to the world of structures having carriers
in both mental and physical worlds (Burgin, 2010).
New understanding that comes from the general theory of infor-
mation necessitates changes of the definition of the term information
in the American Heritage Dictionary (1996). The main implication
is that it is more adequate to say and write that information gives
knowledge of a specific event or situation. When people say and write
that information is a collection of facts or data (The American Her-
itage Dictionary, 1996), the general theory of information suggests
that it is more adequate to say and write that a collection of facts
or data contains information.
According to the American Heritage Dictionary, Informa-
tion is:

1. Knowledge derived from study, experience, or instruction.


2. Knowledge of a specific event or situation; intelligence.
3. A collection of facts or data: “statistical information”.
4. The act of informing or the condition of being informed; commu-
nication of knowledge: “Safety instructions are provided for the
information of our passengers”.
5. A non-accidental signal or character used as an input to a com-
puter or communications system (in Computer Science).
6. A numerical measure of the uncertainty of an experimental
outcome.
7. A formal accusation of a crime made by a public officer rather
than by grand jury indictment (in Law).

According to the general theory of information, more adequate


expressions for the above definitions are:

1. Knowledge is derived from information obtained from study, expe-


rience, or instruction.
2. Information gives knowledge of a specific event or situation; or
information provides intelligence.
3. A collection of facts or data contains (statistical) information.
4. The act of informing and the condition of being informed are


components of information transmission; communication is an
exchange of information.
5. A non-accidental signal or character used as an input to a com-
puter or communications system contains information.
6. A numerical measure of the uncertainty of an experimental out-
come is a measure of information.
7. A formal accusation of a crime made by a public officer contains
information on who committed the crime.

These expressions give some properties of information.


It is possible to give a similar explication of the correct meaning for
the following definitions of information given in the Roget’s New
Thesaurus (1995):

1. That which is known about a specific subject or situation: data,


fact (used in plural), intelligence, knowledge, lore.
2. That which is known; the sum of what has been perceived, discov-
ered, or inferred: knowledge, lore, wisdom.

According to the general theory of information, more adequate


expressions for the above definitions are:

1. That which is known about a specific subject or situation, i.e.,


data, facts, intelligence, knowledge, and lore, contains informa-
tion.
2. That which is known, i.e., knowledge, contains information; the
sum of what has been perceived, discovered, or inferred contains
information, i.e., knowledge and lore contain information, while
wisdom assumes possession of a big quantity of information, i.e.,
a wise person has a lot of information and what is even more
important, this person can properly use that information.

This analysis of the common usage of the word information shows


that the general theory of information does not essentially reverse the
conventional meaning. The theory makes this meaning more precise
by separating information from its carriers and representations. In
our everyday speech, we do not differentiate between information
and its representation. As a rule, it does not matter. However, this


differentiation can be important in science and even in the every-
day communication, for example, when we need to find the intended
meaning of a message, going beyond its literal understanding.
An important component of information processes is their con-
text. According to the existential triad, it has three components:
the structural information context, physical information context, and
mental information context. These contexts are analyzed based on
the general theory of communication and interaction, as well as the
general theory of information.
Thus, the physical information context consists of:

♦ A physical system that is the source of information.


♦ A physical system that is the receiver/recipient of information.
♦ A physical system that is the object of information or represents
this object, e.g., brain processes represent thoughts.
♦ A physical system that is the carrier of information.
♦ A physical system that is the channel of information transmission
or/and communication space.
♦ Information transmission as a physical process, which includes
interactions of the source and recipient with the carrier.
♦ Physical environment in which information transmission goes on.

Thus, the mental information context consists of:

♦ An infological system that is the source of information, e.g., mind


of a person.
♦ An infological system that is the recipient of information.
♦ Mental representations of the object of information, e.g., if the
object of information is a book, then its mental representation is
a reflection of the book content in the mind.
♦ Mental environment in which information transmission goes on.

The structural information context consists of:

♦ A structural infological system of the source of information, e.g.,


knowledge of a person.
♦ A structural infological system of the receiver/recipient of infor-


mation.
♦ The structure of the object of information.
♦ The structure of the carrier of information.
♦ The structure of the channel of information transmission or/and
communication space.
♦ The structure of information transmission, which includes inter-
actions of the source and recipient with the carrier.
♦ The structure of the environment in which information transmis-
sion goes on.

While the first two components of the structural information con-


text are well understood in the majority of cases, the object of infor-
mation is very often neglected. Given a portion of information I,
we define information object as the object to which changes in the
infological system caused by the portion of information I are related
(Burgin, 2010). For instance, the object of a portion of cognitive
information is the object about which this information gives knowl-
edge to the receiver/recipient. The object of a portion of emotional
information is the object to which emotions caused by this informa-
tion in the receiver/recipient are related. Note that an object can
be a complex system or even a collection of some objects. In the
algorithmic information theory (Burgin, 2010), objects are words or
texts of some languages.
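To make this decomposition concrete, the following minimal Python sketch (an illustration only; the class name, field names, and sample values are our assumptions, not part of the theory) collects the recurring roles that appear, with different interpretations, in the physical, mental, and structural information contexts, together with the information object discussed above.

from dataclasses import dataclass
from typing import Any

@dataclass
class InformationContext:
    """Illustrative container for the recurring components of an
    information context (physical, mental, or structural)."""
    kind: str          # "physical", "mental", or "structural"
    source: Any        # system that is the source of information
    recipient: Any     # system that is the receiver/recipient of information
    info_object: Any   # the object of information (or its representation)
    carrier: Any       # the carrier of information
    channel: Any       # the channel of transmission / communication space
    environment: Any   # the environment in which transmission goes on

# A toy physical context for an e-mail message about a book.
email_context = InformationContext(
    kind="physical",
    source="sender's computer",
    recipient="recipient's computer",
    info_object="the book the message is about",
    carrier="the e-mail message",
    channel="the Internet",
    environment="the physical network infrastructure",
)
print(email_context.kind, "->", email_context.info_object)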
Knowledge structures explicated in previous chapters allow us to
discern several kinds of data situated between raw data and knowl-
edge providing a mathematical perspective on transformation of data
into knowledge.
According to the quantum theory of knowledge, the surface outer
structure of knowledge has the following form:

Knowledge domain (object) D ──────→ knowledge K.                (8.2)

As it was demonstrated, any knowledge object (domain) D has


some indicative knowledge (a name of this object or domain) N
and the aspect A reflected by the knowledge K. This gives us


Diagram (8.3) representing the first-order knowledge structure.
                g
        A ──────────→ K
        ↑             ↑
        q             p                    (8.3)
        │             │
        D ──────────→ N
                f

It means that in Diagram (8.3), we have the knowledge domain


(knowledge object) D, an aspect A of the domain (object) D, the symbol N, which denotes the name of D or a class of names of the objects from D (a name of the object D), and K, which is a knowledge item (unit).
The symbolic form of the first-order knowledge structure pre-
sented by Diagram (8.3) is [(D, f, N ), (q, p), (A, g, K)].
Knowledge quanta studied in Section 4.1 have the same structure,
i.e., it is possible to describe them by Diagram (8.3).
When Diagram (8.3) represents the structure of descriptive (quan-
tum) knowledge, the aspect A of the domain (object) D is a property
(feature) of D (the objects from D) and knowledge K is the attribute
(abstract property) that corresponds to A (cf., Section 4.1).
When Diagram (8.3) represents the structure of representational
(quantum) knowledge, the aspect A of the domain (object) D is an
intrinsic structure of D (of the objects from D) and knowledge K
is a model (ascribed structure) of D (of the objects from D) (cf.,
Section 4.1).
When Diagram (8.3) represents the structure of operational
(quantum) knowledge, the knowledge domain (knowledge object) D
consists of actions, operations or processes (is an action, operation
or process), the aspect A is the class of structures of the actions,
operations and processes from the domain D (the structure of the
action, operation or process D), the symbol N denotes the class of
names of these objects from D (the name of the object D), and K
is the class of symbolic representations of objects from D, such as
systems of instructions, algorithms or procedures (cf., Section 4.1).
Giving an exact definition of the first-order knowledge structure by building its mathematical model in the form [(D, f, N ), (q, p), (A, g, K)] allows us to study data in a more exact
form than before, discern data from knowledge, and specify several
types of data.
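The symbolic form [(D, f, N), (q, p), (A, g, K)] can be rendered directly in code. The following minimal Python sketch (our illustration; the dictionaries and sample objects are invented) represents the two named sets (D, f, N) and (A, g, K) as finite mappings and checks the commutativity condition g(q(d)) = p(f(d)) suggested by Diagram (8.3).

# A toy first-order knowledge structure [(D, f, N), (q, p), (A, g, K)].
# D: objects of the knowledge domain, N: their names, A: an aspect of the
# objects (here, a colour property), K: the corresponding knowledge items.

D = ["rose", "violet"]                                # knowledge domain (objects)
f = {"rose": "Rosa", "violet": "Viola"}               # naming relation f: D -> N
q = {"rose": "red", "violet": "blue"}                 # q: D -> A, aspect of each object
g = {"red": "colour: red", "blue": "colour: blue"}    # g: A -> K
p = {"Rosa": "colour: red", "Viola": "colour: blue"}  # p: N -> K

def commutes(domain, f, q, g, p):
    """Check that g(q(d)) == p(f(d)) for every object d of the domain."""
    return all(g[q[d]] == p[f[d]] for d in domain)

print(commutes(D, f, q, g, p))   # True for this toy structure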
The first type is raw or uninterpreted data. Their first-order struc-
ture is represented by Diagram (8.4).

U ───q───→ A.                (8.4)

In this diagram, U consists of some objects, q is a relation between U and A, while A denotes: (1) attributes of these objects in the case of descriptive data, (2) representations of these objects in the case of representational data, and (3) operations, actions, and processes related to these objects in the case of operational data. For
instance, all three types of data are used in the object-oriented pro-
gramming (OOP) for object description. OOP is a programming
paradigm based on the concept of abstract objects represented by
data structures that include attributes in the form of descriptive
data, characteristics in the form of representational data and meth-
ods in the form of operational data.
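As a small, hedged illustration of the last remark (the class below is our own toy example rather than a canonical OOP pattern), the three kinds of data can be kept apart inside one object: attributes carry descriptive data, a model of the object carries representational data, and methods carry operational data.

class WeatherStation:
    """Toy object whose state mixes the three kinds of data."""

    def __init__(self, station_id, location):
        # Descriptive data: attributes (abstract properties with values)
        self.station_id = station_id
        self.temperature_c = None

        # Representational data: a model (ascribed structure) of the object
        self.model = {"type": "unmanned station", "location": location}

    # Operational data: procedures attached to the object
    def record(self, temperature_c):
        self.temperature_c = temperature_c

    def report(self):
        return f"{self.station_id} at {self.model['location']}: {self.temperature_c} C"

station = WeatherStation("WS-01", "hilltop")
station.record(17.5)
print(station.report())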
Raw data correspond to the substantial component of knowledge
(cf., Section 4.1). The difference is that in contrast to the substantial
component of knowledge, raw data are not related to any definite
property.
Having raw data, a person or a computer system can transform
them into knowledge by means of additional knowledge that this
person or computer system already has.
The second type of data is formally interpreted data. They are
related to the abstract property P , forming a named subset of the
information component of a knowledge unit represented by Diagram
(8.3), and the first-order structure of this component is represented
by Diagram (8.5).

N ───p───→ L.                (8.5)
Here N consists of the names of objects and L is the set of values


(the scale) of the property P on the names of objects that tentatively
have these properties, while p is the relation that connects names of
considered objects with values of the ascribed properties of these
objects, i.e., p is the functional component (evaluation function) of
the property P .
Example 8.2.1. Automatic instrument systems perform measure-
ment, which is a process of data acquisition, producing formally inter-
preted data. An unmanned weather station, for example, may record
daily maximum and minimum temperatures. Such recordings are formally interpreted data because numbers (the set L) are put into correspondence with the names "maximal temperature" and "minimal temperature" (the set N).
Formally interpreted data correspond to the symbolic component
of knowledge (cf., Section 4.1). The difference is that formally inter-
preted data are not necessarily included in a knowledge system.
The third type is attributed data. They are related not to the
abstract property P but to the values of an intrinsic property, i.e.,
to the attribute A. The first-order structure of attributed data is
represented by Diagram (8.6).
A ───g───→ L.                (8.6)
The fourth type of data is naming data, the first-order structure
of which is represented by Diagram (8.7).
U ───f───→ N.                (8.7)
There are two more types of data, which are more enhanced and
are closer to knowledge.
The fifth type is object interpreted data. Their first-order structure is represented by Diagram (8.8).
                      L
                      ↑
                      p                    (8.8)
                      │
        U ──────────→ N
                f
Taking Example 8.2.1 and adding information that maximal and


minimal temperatures are temperatures of air (or water), we obtain
object interpreted data. In this process, Diagram (8.5) is extended
to Diagram (8.8).
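A minimal sketch of this extension, reusing the weather-station data of Example 8.2.1 (the Python names and the numeric values are ours): formally interpreted data realize only the relation p: N → L of Diagram (8.5), and object interpreted data add the naming relation f: U → N of Diagram (8.8).

# Formally interpreted data (Diagram (8.5)): names N related to values L by p.
p = {
    "maximal temperature": 23.4,   # degrees Celsius (sample values)
    "minimal temperature": 11.8,
}

# Object interpreted data (Diagram (8.8)): the naming relation f: U -> N
# ties the names to the object that actually has the measured property.
f = {
    "air at the station": ["maximal temperature", "minimal temperature"],
}

# The composite view: for each object, the named values ascribed to it.
for obj, names in f.items():
    for name in names:
        print(f"{obj}: {name} = {p[name]}")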
The sixth type of data is object attributed data. Their first-order
structure is represented by Diagram (8.9).
                g
        A ──────────→ L
        ↑
        q                                  (8.9)
        │
        N

It is interesting to remark that the statement about the corre-


spondence between linguistic constructions representing knowledge
and things in the external world as a necessary component of knowl-
edge, which makes it different from data, was discovered in (Burgin,
1989a) and then reiterated in (Davis et al., 1993) and (Burgin, 1995a;
2004; 2010).
Remark 8.2.1. It is possible to treat all types of data as incomplete
knowledge.
Formally naming data are names of considered objects and cor-
respond to the naming component of knowledge (cf., Section 4.1).
The difference is that naming data are not necessarily included in a
knowledge system.
It is important to understand that data and knowledge can them-
selves be objects, which have names, as well as intrinsic and ascribed
properties. In particular, when the domain U consists of (some kind
of) data, then in this case, we come to named data, which play an
important role in the recent ideas for the development of the Internet
(Jacobson et al., 2012; Ntuli and Han, 2012). To understand what
named data are and why they are so popular, we consider the schema
of the data transfer on the Internet.
The contemporary Internet is based on the TCP/IP communication protocol. In it, the transmission control protocol (TCP) part performs separation of the file/message into packets on the source computer and reassembling of the received packets at the destination,
e.g., at the recipient computer. The internet protocol (IP) part han-
dles the address of the destination computer so that each packet is
routed (sent) to its proper destination.
In named data networking (NDN) architecture for the future
Internet, the transmitted packets of data carry data names rather
than source or destination addresses. The developers of this architec-
ture believe that this conceptually simple shift will have far-reaching
implications for how people design, develop, deploy, and use networks
and applications. The named data principle implies that a communication network should allow a user to focus on the data he or she needs, identified by their names, rather than having to reference a specific physical location from which those data would be retrieved.
Actually, Internet packets of data are already named by destina-
tion addresses and the new approach suggests changing these names
to the original data names (identifiers). It is assumed that such a
renaming brings potential for a wide range of benefits such as sim-
pler configuration of network devices, building security into the net-
work at the data level and content caching to reduce congestion and
improve delivery speed. In addition, sustained growth in e-commerce,
digital media, social networking, and smartphone applications has led
to prevailing use of the Internet in the role of a distribution network.
Utilization of a point-to-point communication protocol in distribu-
tion networks is complex and error-prone, while NDN better suits
distribution environment.
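The shift from address-based to name-based retrieval can be sketched as follows. This is a deliberately simplified toy model, not the actual NDN protocol; the dictionaries and function names are our own assumptions. In the address-based scheme a request must carry the location of the data, while in the name-based scheme a request carries only the data name, and any node holding a cached copy can answer it.

# Toy contrast between address-based and name-based retrieval.

# Address-based (IP-like): the requester must know where the data live.
hosts = {
    "198.51.100.7": {"/video/lecture1": b"...bytes..."},
}

def fetch_by_address(address, path):
    return hosts[address][path]

# Name-based (NDN-like): any node caching the named data can satisfy
# the request; the requester never references a physical location.
content_store = {
    "/video/lecture1": b"...bytes...",   # cached copy, wherever it sits
}

def fetch_by_name(name):
    return content_store.get(name)       # None means "not cached here"

print(fetch_by_address("198.51.100.7", "/video/lecture1") == fetch_by_name("/video/lecture1"))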
In this context, named sets give a natural mathematical model for
named data, which form the naming component of the knowledge
quanta with data as their object or domain (Burgin and Tandon,
2006). Consequently, named set theory provides powerful means for
network algorithms and procedures in the form of various operations
and correspondences (Burgin, 2011).
Another example of naming data is named graphs, which are a key structure of the Semantic Web architecture. In it, a set of resource description framework (RDF) statements (a graph) is identified using a universal resource identifier (URI), allowing derivation of descriptions of context, provenance information, or other metadata. This shows that named graphs form an extension of the RDF data
model giving additional evidence for importance of naming data in


contemporary information technology.
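A named graph can likewise be sketched as a named set whose object is a set of RDF-style triples (a toy illustration with invented URIs, not real Semantic Web data); the graph name makes it possible to attach provenance to the whole set of statements.

# A named graph: a URI naming a set of (subject, predicate, object) triples.
named_graphs = {
    "http://example.org/graphs/books": {
        ("http://example.org/book1", "dc:title", "Theory of Knowledge"),
        ("http://example.org/book1", "dc:creator", "M. Burgin"),
    },
}

# The name makes it possible to attach provenance to the whole graph.
provenance = {"http://example.org/graphs/books": {"source": "library catalogue"}}

graph_uri = "http://example.org/graphs/books"
print(len(named_graphs[graph_uri]), provenance[graph_uri]["source"])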

8.3. Information as a source of knowledge

Some people drink deeply from the fountain of knowledge. Others


just gargle.
Grant M. Bright

As the cognitive infological system contains the knowledge of the system to which it belongs, cognitive information is the source of knowledge changes.
This perfectly correlates with the approach of Dretske (1983) and
Goldman (1967) who defined knowledge as information-caused belief,
i.e., information produces beliefs that are called knowledge. More-
over, it is impossible to obtain knowledge without information.
Dretske (1983) develops this idea, implying that information pro-
duces beliefs, which, according to our definition, are also elements of
the cognitive infological system. Moreover, many researchers relate
all information exclusively to knowledge. For instance, Mackay (1969)
writes:
“Suppose we begin by asking ourselves what we mean by information.
Roughly speaking, we say that we have gained information when we
know something now that we didn’t know before; when ‘what we know’
has changed.”

However, information comes to the knower not by itself but in some carrier. Often this carrier is called data. Therefore, the new understanding of the relations between information and knowledge suggests that a specific fundamental triad called the Data–Knowledge Triad provides a more relevant theoretical explication for Elliot's metaphor than the Data–Information–Knowledge Pyramid. Namely, the Data–Knowledge Triad has the following form:
Data ───information───→ Knowledge.                (8.10)

Synthesizing data, information, and knowledge into a conceptual


structure, the triadic representation (8.10) has the following inter-
pretation, which reveals data–knowledge dynamics:
Data, under the influence (action) of additional information, become (are transformed into) knowledge.
That is, information is the active essence that transforms data
into knowledge. It is similar to the situation in the physical world,
where energy is used to perform work, which changes material things,
their positions and dynamics.
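To make the triad (8.10) tangible, here is a deliberately small sketch (ours, not part of the book's formal apparatus) in which a raw datum becomes a knowledge item only when additional information supplies the object, the ascribed property, and a justification.

from dataclasses import dataclass

@dataclass
class KnowledgeItem:
    obj: str           # what the knowledge is about
    attribute: str     # the property ascribed to the object
    value: float       # the value of that property
    justified_by: str  # why the ascription counts as knowledge, not mere belief

def apply_information(raw_datum, information):
    """Transform a raw datum into a knowledge item using added information."""
    return KnowledgeItem(
        obj=information["object"],
        attribute=information["attribute"],
        value=raw_datum,
        justified_by=information["justification"],
    )

raw_datum = 37.2   # by itself, just a number
information = {
    "object": "patient P",
    "attribute": "body temperature (C)",
    "justification": "calibrated thermometer reading",
}
print(apply_information(raw_datum, information))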
Diagram (8.10) can also be interpreted in the following way:

➢ Knowledge is a compressed, goal oriented structural depository


of refined information.
➢ Data is a raw symbolic depository of information.
➢ Conversion of data into knowledge is a refinement of information.

According to Bates (2010), Dretske conceives information as a commodity that is capable of yielding knowledge (Dretske, 1981). Although Dretske assumes that knowledge, and hence information, are always true, his definition well correlates with the Data–Knowledge Triad.
Barwise and Seligman (1997) write “information is closely tied to
knowledge”. Meadow and Yuan suggest (1997) that the recipient’s
knowledge increases or changes as a result of receipt and processing
of new information. Cognitive information related to knowledge was
also studied by Shreider (1967).
At the same time, other researchers connect cognitive informa-
tion to experience. For instance, Boulding (1956) calls a collection of
experiences by the name image and explains that messages consist
of information as they are structured experiences, while the mean-
ing of a message is the change that it produces in the image. How-
ever, experience as a trait of a personality is, in essence, explicit and
implicit knowledge obtained from experience as a process. So, this
understanding also connects information to knowledge.
Diagram (8.10) is a sub-diagram (a part) of Diagram (8.11).
Cognitive Information ──effective information──→ Data ──effective information──→ Knowledge.        (8.11)
This relation between information and knowledge is well under-
stood in the area of information management. For instance, Kirk
Figure 8.8. Transformation of data into knowledge
[Diagram: data coming from the World pass through filters formed by prior expectations, yielding information that enters the agent's knowledge base.]

(1999) writes, “the effectiveness of information management can be


measured by the extent of knowledge creation or innovation in orga-
nizations”.
The triadic representation also well correlates with opinions of
Bar-Hillel (1964), Heyderhoff and Hildebrand (1973), Brookes (1980),
and Mizzaro (1996; 1998; 2001), who describe an information process
as a process of knowledge acquisition, which is often understood as
reduction of uncertainty. The information triad (8.10) also well cor-
relates with the approach of Boisot (1998) where information is what
an observer extracts from data as a function of his/her expectations
or prior knowledge. Boisot illustrates the point by means of the dia-
gram given in Figure 8.8.
The diagram from Figure 8.8 indicates that data, or more exactly,
data representations, are physical, but not information. Therefore,
data are rooted in the world’s physical properties. Knowledge, by
contrast, is rooted in the comprehensions, interpretations, estimates,
and expectations of individuals. Information is what a knowing indi-
vidual can use for creation and extraction of knowledge from data,
given the capacity of his/her knowledge. This extraction of knowledge essentially depends on the types of knowledge used, i.e., whether it is represented by (contained in) algorithms and logic, models and language,
operations and procedures, goals, and problems. The situation is sim-
ilar to the situation with (physical) energy when energy is extracted
from different substances and processes: from petroleum, natural gas,
coal, wood, sunlight, wind, ocean tides, etc. This extraction essen-
tially depends on used devices, technologies, and techniques. For
instance, solar cells are used to convert sunlight into electricity. Wind
and water turbines rotate magnets and in such a way create electric
current. Petroleum-powered engines make cars ride and planes fly.
The KIME Square shows essential distinction between knowledge
and information in general, as well as between knowledge and cogni-
tive information, in particular. This distinction has important impli-
cations for education. For instance, transaction of information (for
example, in a teaching process) does not give knowledge itself. It only
causes such changes that may result in the growth of knowledge.
This correlates with the approaches of Dretske (1981) and
MacKay (1969), who declare that information increases knowledge
and knowledge is considered as a completed act of information.
However, the general theory of information differs from Dretske’s
and MacKay’s conceptions of information because the general theory
of information demonstrates that information transaction may result
not only in the growth of knowledge but also in the decrease of knowl-
edge (Burgin, 1994). An obvious case for the decrease of knowledge is
misinformation and disinformation. For instance, people know about
the tragedy of the Holocaust during the World War II. However,
when articles and books denying the Holocaust appear, some people
believe this and lose their knowledge about the Holocaust. Disin-
formation, or false information, is even used to corrupt opponent’s
knowledge in information warfare.
Moreover, as the general theory of information demonstrates, even genuine information can decrease knowledge (Burgin, 1994).
For instance, some outstanding thinkers in ancient time, e.g., Greek
philosophers Leucippus (5th century B.C.E.) and Democritus (ca.
460–370 B.C.E.), knew, in some sense, that all physical things con-
sisted of atoms. Having no proofs of this feature of nature and lack-
ing any other information supporting it, people lost this knowledge.
Although some sources preserved these ideas, they were considered
as false beliefs. So, information about absence of supporting evidence
often decreases knowledge in society. Nevertheless later when physics
and chemistry matured, they found experimental evidence for atomic
structure of physical objects, and knowledge about these structures


was not only restored but also essentially expanded.
In a similar way, knowledge about America was periodically lost
in Europe and America was rediscovered several times. A thousand
years ago, nearly half a millennium before Columbus, the Norse
extended their explorations from Iceland and Greenland to the shores
of Northeastern North America, and, possibly, beyond. According to
the Olaf saga, the glory of having discovered America belongs to
Bjarni, son of Herjulf, who was believed to have discovered Vinland,
Markland, and Helluland as early as 985 or 986 on a voyage from
Iceland to Greenland (Reeves et al., 1906). There are also stories
of other pre-Columbian discoveries of America by Phoenician, Irish,
and Welsh, but all accounts of such discoveries rest on insufficient, vague, or unreliable testimony.
Sometimes people lose some knowledge and achieve other knowl-
edge. For instance, for centuries mathematicians knew that there was
only one geometry. The famous German philosopher Kant (1724–
1804) wrote that knowledge about the Euclidean geometry is given
to people a priori, i.e., without special learning. However, when
mathematicians accepted the discovery of non-Euclidean geometries,
they lost the knowledge about uniqueness of geometry. This knowl-
edge was substituted by a more exact knowledge about a diversity
of geometries.
Some may argue that it was only a belief in uniqueness but not real knowledge. However, in the 18th century, for example, the statement

There is only one geometry                (8.12)

was sufficiently validated (for that time). In addition, it well correlated with the reality known at that time, as the Euclidean geometry was successfully applied in physics. Thus, there are all grounds to assume that statement (8.12) represented knowledge of the 18th century in Europe.
the vast majority of people know that two plus two is equal to four
although existence of non-Diophantine arithmetics shows that there
are situations when two plus two is not equal to four (Burgin, 1997c;
2007; 2010c).
Distinction between knowledge and cognitive information implies
that transaction of information (for example, in a teaching process)
does not give knowledge itself. It only causes changes that may result
in the growth of knowledge. In other words, it is possible to trans-
mit only information from one system to another, allowing a cor-
responding infological system to transform data into knowledge. In
microphysics, the main objects are subatomic particles and quantum
fields of interaction. In this context, knowledge and data play the role of particles, while information realizes interaction.
As we have seen, usually people assume that information creates knowledge. However, utilizing information, it is also possible to create data from knowledge. For instance, when an individual sends some text by e-mail, this text, which is usually a knowledge representation, is converted into data packages, which are then transmitted to the recipient. This shows that the triad (8.13) inverse to the triad (8.10) is also meaningful and reflects a definite type of information processes.
Knowledge ───information───→ Data.                (8.13)

At the same time, data are used not only for knowledge generation but also for deriving other epistemic structures such as beliefs and fantasies. This gives us two more information diagrams:

Data ───information───→ Beliefs                (8.14)

and

Data ───information───→ Fantasy.                (8.15)

To explain why it is more efficient, i.e., more adequate to real-


ity and more productive as cognitive hypothesis, etc., to consider
information as the knowledge content than to treat information in
the same category as data and knowledge, let us consider people’s
beliefs. Can we say that belief is structuring of information or struc-
tured information? No. However, we understand that people’s beliefs
are formed by the impact some information has on people’s mind.
People process and refine information and form beliefs, knowledge,
ideas, hypotheses, etc. Beliefs are structured in the same way as
knowledge, but in contrast to knowledge, they are not sufficiently


justified. Thus, the concept of information in the sense of Stonier
(1991; 1992) does not allow fitting beliefs into the general schema,
while the concept of information in the sense of the general theory of
information naturally integrates beliefs into the system data, infor-
mation, and knowledge.
There are other examples that give evidence to support the statement that Diagrams (8.10) and (8.11) correctly explain the relation between information, data, and knowledge. In spite of this, people are very conservative in their beliefs and do not want to change what they "know" to more grounded knowledge, mostly because this demands an intellectual effort (action), and even people are systems that comply with the principle of minimal action.

8.4. Dynamic aspects of knowledge, data,


and information interaction

Knowledge must come through action;


you can have no test which is not fanciful, save by trial.
Sophocles

As demonstrated in the previous section, information changes epistemic structures in general and knowledge in particular. Using epistemic spaces (cf., Section 3.1) as theoretical patterns of knowledge systems, we model information by epistemic information operators. They act either in pure or in weighted epistemic spaces, transforming these spaces and describing the dynamics of infological systems and of epistemic spaces, which consist of epistemic structures modeling infological systems.
Let us consider two (weighted) epistemic spaces E and H.

Definition 8.4.1. (a) A (partial) mapping A : E → H is called an


epistemic information operator.
(b) If both epistemic spaces E and H have the same structure
and an information operator A : E → H preserves this structure,
then A is called a structured information operator or an information
homomorphism.
(c) When E = H, the operator A is called an inner epistemic


information operator.
An inner epistemic information operator changes elements or states of (weighted) epistemic spaces and multispaces.
There are three pure basic types of inner epistemic information
operators: content, bond, and weight operators.

Definition 8.4.2. A content epistemic information operator acts on


symbolic epistemic items (structures) in an epistemic space changing
its state.
For instance, all information operators studied in (Mizzaro, 2001;
Burgin, 2010; 2011a) are content epistemic information operators.

Definition 8.4.3. A bond epistemic information operator acts on


connections (bonds or relations) between symbolic epistemic items
(structures) in an epistemic space changing its state.
Such operators as interpretation and reinterpretation of informa-
tion/knowledge items (Burgin, 2011) are bond epistemic information
operators.
In weighted epistemic spaces, we have one more type of inner
epistemic information operators.

Definition 8.4.4. A weight epistemic information operator acts on


weights of symbolic epistemic items (structures) in an epistemic space
changing its state.
In addition, there are mixed epistemic information operators.

Definition 8.4.5. A mixed epistemic information operator acts on


symbolic epistemic items (structures) and on connections (bonds
or relations) between symbolic epistemic items (structures) and/or
weights of symbolic epistemic items (structures) in an epistemic space
changing its state.
Operators of logical inference, such as rules of deduction, are mixed epistemic information operators because they add new knowledge items in the form of propositions or/and predicates and establish relations of inferability/deducibility between propositions or/and predicates.
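The following minimal sketch (our own model, with invented class and function names) represents a state of a weighted epistemic space as a set of items, a set of bonds between them, and a weight assignment, and implements one operator of each pure type: a content operator that adds an item, a bond operator that adds a relation, and a weight operator that changes a weight.

from dataclasses import dataclass, field

@dataclass
class EpistemicState:
    items: set = field(default_factory=set)      # symbolic epistemic items
    bonds: set = field(default_factory=set)      # (item, relation, item) triples
    weights: dict = field(default_factory=dict)  # item -> weight value

# Content epistemic information operator: acts on the items themselves.
def add_item(state, item, weight=None):
    state.items.add(item)
    if weight is not None:
        state.weights[item] = weight

# Bond epistemic information operator: acts on relations between items.
def add_bond(state, a, relation, b):
    state.bonds.add((a, relation, b))

# Weight epistemic information operator: acts only on weights.
def set_weight(state, item, weight):
    state.weights[item] = weight

s = EpistemicState()
add_item(s, "B implies A", weight=0.6)       # content operator
add_item(s, "B", weight=0.9)
add_bond(s, "B implies A", "premise", "B")   # bond operator
set_weight(s, "B implies A", 0.8)            # weight operator
print(len(s.items), len(s.bonds), s.weights["B implies A"])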
There are three key content inner epistemic information opera-


tors:

• An addition/deletion content operator AD (DL) adds a symbolic epistemic item (knowledge item) to the state of the (weighted) epistemic space or deletes it from this state.
• A transformation/substitution content operator TR (ST) trans-
forms or substitutes a symbolic epistemic item in the state of the
(weighted) epistemic space.
• A substantiation content operator switches on or off an existing
symbolic epistemic item in the state of the (weighted) epistemic
space.

Thesaurus information operators are a special type of epistemic


information operators.

Definition 8.4.6. A content thesaurus information operator acts on


knowledge items in a knowledge space.
For instance, all information operators studied in (Mizzaro, 1996;
1998; 2001; Burgin, 2010a) are content thesaurus information oper-
ators.

Definition 8.4.7. A bond thesaurus information operator acts on


connections (bonds or relations) between knowledge items in a knowl-
edge state.
Such operators as interpretation and reinterpretation of informa-
tion/knowledge items are bond thesaurus information operators.

Example 8.4.1. When an intelligent agent learns, it usually adds


knowledge items to its knowledge base changing in such a way the
state of this base. The knowledge base is naturally represented by
an appropriate epistemic space. Thus, growth of knowledge in the
process of learning is modeled by application of addition content
information operators.

Definition 8.4.8. (a) A replica of an epistemic (knowledge) item is


another knowledge item equivalent to the initial one.
(b) A replication epistemic information operator REPL makes a


replica of an epistemic (knowledge) item and adds it to the current
epistemic (knowledge) state.
Note that copies of an epistemic (knowledge) item are always its replicas, but a replica of an epistemic (knowledge) item is not always its copy.

Example 8.4.2. Let us consider logical knowledge representation


in which knowledge items are propositions. Then according to
laws of logic there are equivalent propositions. For instance, tak-
ing the proposition (1) “B implies A”, we have equivalent proposi-
tions (2) “A follows from B”, (3) “If B, then A”, and (4) “A is a
consequence of B”. All of them are replicas of one another although
they are not copies.
Addition of symbolic epistemic items can be performed by five
operations:

• by generation of a new item inside the current state of the


(weighted) epistemic space;
• by generation of a new item outside (the current state of) the
epistemic space and its transition into the current state of the
(weighted) epistemic space;
• by transition of an existing item from the epistemic space into the
current state of the (weighted) epistemic space;
• by replication of an item from the current state of the (weighted)
epistemic space;
• by replication of an item outside (the current state of) the epis-
temic space and transition of this replica into the current state of
the (weighted) epistemic space.

Consequently, substitution of symbolic epistemic items can be


performed by five operations because substitution is the sequential
composition of elimination and addition.
In the case of stratified epistemic spaces, there is one more type of key content epistemic operator, namely, a moving operator MV, which moves epistemic items from one stratum to another (Burgin, 2011a).
A transformation epistemic information operator TR takes a group of epistemic (knowledge) items (possibly a single item) from the current epistemic (knowledge) state and transforms it into another group of epistemic (knowledge) items (possibly into a single item).
A generation epistemic information operator GR takes a group of epistemic (knowledge) items (possibly a single item) from the current epistemic (knowledge) state and generates another group of epistemic (knowledge) items (possibly a single item).
The difference between transformation and generation is that in
generation, the initial group of epistemic items is preserved, while in
transformation, it is not preserved.
There are three key bond epistemic information operators:
(1) An addition/deletion bond operator adds or deletes a connec-
tion (bond or relation) between symbolic epistemic items in the
current epistemic (knowledge) state of the (weighted) epistemic
space.
(2) A substitution bond operator changes a connection (bond or rela-
tion) between symbolic epistemic items in the current epistemic
(knowledge) state of the (weighted) epistemic space to another
connection (bond or relation).
(3) A substantiation bond operator switches on or off an existing con-
nection (bond or relation) between symbolic epistemic items in
the current epistemic (knowledge) state of the (weighted) epis-
temic space.
There are three basic weight epistemic information operators:
(1) An addition/deletion weight operator adds or deletes a weight to
symbolic epistemic items in the weighted epistemic space.
For instance, suppose epistemic items have one weight, constructible, which indicates whether the structure is constructible or not. Then the weight complexity may be added by the addition operator. The new weight reflects the complexity of the structure construction.
(2) A transformation/substitution weight operator substitutes one
weight of symbolic epistemic items in the weighted epistemic
space by another weight.
For instance, let us consider the situation when weighted epistemic items (structures) have the weight justification, which is then substituted by the weight provability. In different circumstances, a transformation operator changes the weight complexity to the weight hardship. Such a situation happens in software engineering (Burgin and Debnath, 2003).

(3) A value-changing weight operator changes weights of symbolic


epistemic items in the weighted epistemic space.

For instance, let us assume that the value of the weight complexity for texts was estimated based on recursive algorithms, such as Turing machines. Later the complexity estimate was obtained by means of super-recursive algorithms, such as inductive Turing machines. As it is proved that super-recursive algorithms decrease algorithmic complexity (Burgin, 2005), the value-changing operator has to be applied to give the correct complexity of the texts processed by inductive Turing machines.
There are also mixed epistemic information operators. A mixed
epistemic information operator acts on symbolic epistemic items
(structures) in some epistemic state, on their weights and on their
connections (bonds or relations).
For instance, a mixed epistemic information operator can act on
knowledge items in a knowledge state, their weights and their con-
nections (bonds or relations).
Operators of logical inference, such as rules of deduction, are mixed epistemic information operators because they add new knowledge items in the form of propositions or/and predicates and establish relations of provability/deducibility between propositions or/and predicates.
Subspaces of knowledge spaces represent subsystems of knowledge
systems. For instance, in large knowledge systems, such as a scien-
tific theory, it is possible to separate the subsystem of denotational
knowledge and the subsystem of operational knowledge.
It looks like it might be sufficient to consider only finite or at least,
locally finite agents. However, if knowledge is represented by logical
statements and it is assumed (as it is done, for example, in the theory
of semantic information developed by Bar-Hillel and Carnap (1958)) that any knowledge system contains all logical consequences of all its elements, then an agent with such a knowledge system is infinite.
In information algebras, portions of information are represented by
closed subsets of sentences from a logical language L (Kohlas and
Stärk, 2007).
However, in conventional logics, sets closed with respect to such information operators as deduction are infinite because any sentence p implies p ∨ q for any sentence q from L, which is, as a rule, infinite (cf., for example, (Shoenfield, 1967)). Thus, in the context of classical logic and information algebras, any portion of information has infinitely many representations. Consequently, such a portion generates a system with an infinite number of knowledge items.
Let us consider epistemic information operators from a weighted
epistemic space E into a weighted epistemic space H.

Definition 8.4.9. An epistemic information operator A : E → H is


called:

(a) stationary if for any epistemic structures e and l, the equal-


ity A(e; w1 , . . . , wk ) = (l; v1 , . . . , vh ) implies the equality A(e;
u1 , . . . , uk ) = (l; q1 , . . . , qh ) for any (e; u1 , . . . , uk ).
(b) permanent if for any weighted epistemic structure (e; w1 , . . . , wk ),
we have A(e; w1 , . . . , wk ) = (e; v1 , . . . , vk ).
(c) semi-permanent if for any epistemic structure e and any num-
ber, k, there is a number h such that for any system of weights
(w1 , . . . , wk ) of e, we have A(e; w1 , . . . , wk ) = (e; v1 , . . . , vh ).

Definitions imply the following result.

Lemma 8.4.1. Any permanent epistemic information operator A


is semi-permanent, while any semi-permanent epistemic information
operator B is stationary.

Lemma 8.4.2. (a) Any weight epistemic information operator is


semi-permanent.
(b) Operators of adding weights and of deleting weights are semi-


permanent but not permanent.
(c) Operators of substituting weights and of changing values of
weights are permanent.

Stationary epistemic information operators are related to morphisms of epistemic vector bundles.
We remind (Le Potier, 1997) that a morphism of a vector bundle
E = (E, p, B) into a vector bundle H = (H, r, D) is a pair of continu-
ous mappings f : E → H and g : B → D such that Diagram (8.16)
is commutative.

                f
        E ──────────→ H
        │             │
        p             r                    (8.16)
        ↓             ↓
        B ──────────→ D
                g

Note that morphisms of vector bundles are mappings (morphisms) of named sets, which are studied in the theory of named sets (Burgin, 2011). Thus, weighted epistemic spaces form a category
with vector bundle morphisms as its morphisms. It makes possible
application of results for categorical information modeling (Burgin,
2010b; 2011b) to epistemic information operators in weighted epis-
temic spaces.
Let us consider two weighted epistemic spaces E and H.

Proposition 8.4.1. An epistemic information operator A : E → H


is stationary if and only if it induces a morphism of the vector bundle
E = (E, pE , Ee ) into the vector bundle H = (H, pH , He ).

Proof . Necessity. Let us consider a stationary epistemic information


operator A : E → H. Then by Definition 8.4.9, each fiber Fa of the
vector bundle E = (E, pE , Ee ) is mapped by A into a single fiber Gb
of the vector bundle H = (H, pH , He ). That is why, we can build the
mapping C : Ee → He defining C(a) = b. By construction, we have
C(pE (x)) = pH (A(x)) for all elements x from E. This gives us the
commutative diagram (8.17).
                A
        E ──────────→ H
        │             │
        pE            pH                   (8.17)
        ↓             ↓
        Ee ─────────→ He
                C

It means that the pair (A, C) is a morphism of the vector bundle


E = (E, pE , Ee ) into the vector bundle H = (H, pH , He ).
Sufficiency. Let the pair (A, C) be a morphism of the vector bundle E = (E, pE , Ee ) into the vector bundle H = (H, pH , He ), i.e., let Diagram (8.17) be commutative. Then each fiber Fa of the vector bundle E = (E, pE , Ee ) is mapped by A into a single fiber Gb of the vector bundle H = (H, pH , He ), i.e., A is a stationary epistemic information operator.

Definition 8.4.10. A stationary epistemic information operator A :


E → H is called:

(a) uniform if for any real number a and any weighted epistemic structures (e; w1 , . . . , wk ) and (l; v1 , . . . , vh ), the equality A(e; w1 , . . . , wk ) = (l; v1 , . . . , vh ) implies the equality A(e; aw1 , . . . , awk ) = (l; av1 , . . . , avh ).
(b) additive if A(e; w1 + u1 , . . . , wk + uk ) = A(e; w1 , . . . , wk ) +
A(e; u1 , . . . , uk ).
(c) linear if it is uniform and additive.

Example 8.4.3. Let us consider a weighted epistemic space E, in


which there are n epistemic structures e1 , e2 , e3 , . . . , en , the distance
between any two of them is 1 and each of them has one weight w the
range of which is the real line R. Taking a real number t, we define
the epistemic information operator A by the following rule:
A(ek , w) = (ek , tw ).
By definition, this operator is linear and thus, uniform and
additive.
At the same time, the epistemic information operator B with


B(ek , w) = (ek , k) is neither uniform nor additive nor linear.
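The two operators of Example 8.4.3 can be checked numerically. In the sketch below (our illustration, treating a weight as a single real number), the operator A, which multiplies the weight by a fixed t, passes the uniformity and additivity tests of Definition 8.4.10, while the operator B, which replaces the weight by the index k, fails both.

t = 2.5

def A(k, w):           # A(e_k, w) = (e_k, t * w)
    return (k, t * w)

def B(k, w):           # B(e_k, w) = (e_k, k)
    return (k, float(k))

def uniform(op, k, w, a):
    return op(k, a * w)[1] == a * op(k, w)[1]

def additive(op, k, w, u):
    return op(k, w + u)[1] == op(k, w)[1] + op(k, u)[1]

print(uniform(A, 3, 1.5, 4.0), additive(A, 3, 1.5, 2.0))   # True True
print(uniform(B, 3, 1.5, 4.0), additive(B, 3, 1.5, 2.0))   # False False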
To study linear epistemic information operators, we need some
topological constructions.
In (Burgin, 2004b), the concept of path Q-connectedness is intro-
duced and studied. To find relations between linearity and bounded-
ness of information operators, we need to further develop this concept
for metric spaces. Path Q-connectedness is important for epistemic
spaces because in many cases, epistemic spaces have the structure of
a graph or network of epistemic items connected by various relations.
For instance, Motter et al. (2002) build the conceptual network of a
language using exactly this approach.
Let C be a subspace of a metric space U with the distance func-
tion d.
Definition 8.4.11. The space C is called path (q, r)-connected in
U if for any two points a and b in C, there exists a sequence
a1 , a2 , a3 , . . . , an of points in C such that d(a, a1 ) ≤ r, d(an , b) ≤ r,
d(ai , ai+1 ) ≤ r for all i = 1, 2, 3, . . . , n − 1 and [d(a, a1 ) + d(a1 , a2 ) +
d(a2 , a3 ) + · · · + d(an−1 , an ) + d(an , b)] < q · d(a, b).

Example 8.4.4. Taking a square whose side length is equal to 1 and which is situated in the Euclidean plane (cf., Figure 8.9), and taking the space of its vertices C = {A, B, C, D}, we see that it is path (2, 1)-connected but it is not path (1, 1)-connected and not path (2, 1/2)-connected.

Figure 8.9. A topological space that is path (2, 1)-connected but is not path (1, 1)-connected
[Diagram: the four vertices A, B, C, D of a unit square.]

Indeed, d(A, C) = √2, while the lengths of both paths (A, B, C) and (A, D, C) are equal to 2 with d(A, B) = d(B, C) = d(A, D) = d(D, C) = 1. Consequently, d(A, B) + d(B, C) < 2 · d(A, C) but d(A, B) + d(B, C) > 1 · d(A, C). Thus, the space C is path (2, 1)-connected but it is not path (1, 1)-connected. In addition, it is not path (q, p)-connected when p < 1 because there are no paths between A and C such that the distance between two consecutive points is less than 1.
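Definition 8.4.11 can be tested mechanically on small finite spaces. The brute-force sketch below (ours; it only searches short intermediate paths, which suffices for this four-point space) confirms the claims of Example 8.4.4: the vertex space of the unit square is path (2, 1)-connected but not path (1, 1)-connected.

from itertools import product
from math import sqrt

points = ["A", "B", "C", "D"]          # vertices of a unit square
side, diag = 1.0, sqrt(2.0)

def dist(x, y):
    """Euclidean distances between the vertices A, B, C, D of Example 8.4.4."""
    if x == y:
        return 0.0
    sides = {"AB", "BA", "BC", "CB", "CD", "DC", "DA", "AD"}
    return side if x + y in sides else diag

def path_qr_connected(pts, d, q, r, max_len=3):
    """Brute-force check of Definition 8.4.11 over intermediate paths of
    length <= max_len (enough for this four-point space)."""
    for a in pts:
        for b in pts:
            if a == b:
                continue
            found = False
            for n in range(1, max_len + 1):
                for mid in product(pts, repeat=n):
                    chain = [a, *mid, b]
                    steps = [d(x, y) for x, y in zip(chain, chain[1:])]
                    if max(steps) <= r and sum(steps) < q * d(a, b):
                        found = True
                        break
                if found:
                    break
            if not found:
                return False
    return True

print(path_qr_connected(points, dist, 2, 1))   # True
print(path_qr_connected(points, dist, 1, 1))   # False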
However, not all sets in metric spaces are path (q, r)-connected.

Example 8.4.5. Taking the parabola y = x² as the space C, we see that it is not path (q, r)-connected for any numbers q and r. Indeed, taking points u = (x, x²) and w = (−x, x²), we see that the distance between these points is d(u, w) = 2x. Now let us suppose that this parabola C is path (q, r)-connected for some numbers q and r. It means that there is a sequence a1 , a2 , a3 , . . . , an of points in C such that d(u, a1 ) ≤ r, d(an , w) ≤ r, d(ai , ai+1 ) ≤ r for all i = 1, 2, 3, . . . , n − 1. As all points a1 , a2 , a3 , . . . , an belong to C, the sum [d(u, a1 ) + d(a1 , a2 ) + d(a2 , a3 ) + · · · + d(an−1 , an ) + d(an , w)] is larger than x² − ((1/2)r)². At the same time, we have [d(u, a1 ) + d(a1 , a2 ) + d(a2 , a3 ) + · · · + d(an−1 , an ) + d(an , w)] < q · d(u, w). Thus, x² − ((1/2)r)² < 2qx because d(u, w) = 2x. Transforming this inequality, we obtain

    x² − 2qx < ((1/2)r)²,
    x(x − 2q) < ((1/2)r)².

For sufficiently large x, this inequality cannot be valid because r is a fixed number. Thus, our assumption is not true and the parabola y = x² is not path (q, r)-connected for any numbers q and r.
Here we are mostly interested in content epistemic information
operators, which we simply call epistemic information operators in
what follows. Using classical concepts of continuity and bounded-
ness (Kuratowski, 1966; Alexandroff, 1961), as well as the concept of
(p, q)-continuity from neoclassical analysis (Burgin, 2008), we deter-
mine important classes of epistemic information operators.
Let us consider epistemic information operators from a weighted


epistemic space E with a metric d into a weighted epistemic space H
with a metric d. In this case, it is possible to define the diameter d
of sets in E and in H. Namely, if X ⊆ E, then d(X) = sup{d(x, z);
x, z ∈ X} when this supremum exists and undefined otherwise.

Definition 8.4.12. An epistemic information operator A : E → H


is called:

(a) bounded if, given a state X of E, for any number r there is a number t such that the condition d(X) ≤ r implies the condition d(A(X)) ≤ t.
(b) uniformly bounded if for any number r there is a number t such that for any state X of E, the condition d(X) ≤ r implies the condition d(A(X)) ≤ t.
(c) continuous if A is a continuous mapping.
(d) (p, q)-continuous if A is a (p, q)-continuous mapping.

Boundedness of an epistemic information operator A means that


when the distances between epistemic structures in a set X are
bounded, then the distances between epistemic structures in the
image A(X) of the set X are bounded.
(p, q)-continuity of an epistemic information operator A infor-
mally means that when the distances between epistemic structures in
a set X are not larger than p, then the distances between epistemic
structures in the image A(X) of the set X are not larger than q.

Lemma 8.4.3. Any uniformly bounded epistemic information oper-


ator A: E → H is bounded.

In a discrete metric space, any point is an open and a closed set


(Kuratowski, 1966). Thus, any epistemic information operator in an
epistemic space E is continuous, open and closed.
Results from (Burgin, 2008) give us the following property of epis-
temic information operators.

Lemma 8.4.4. An epistemic information operator A : E → H is continuous if and only if it is (0, 0)-continuous.
In addition, we need some constructions from neoclassical analy-


sis, such as fuzzy limits and fuzzy continuity (Burgin, 2008).
Let r ∈ R + .
Definition 8.4.13. (a) An element a from E is called an r-limit of
a sequence l (it is denoted by a = r − limi→∞ ai or a = r − lim l) if
for any ε ∈ R++ the inequality d(a, ai ) < r + ε is valid for almost all
ai , i.e., there is such n that for any i > n, we have d(a, ai ) < r + ε.
(b) A sequence l that has an r-limit is called r-convergent and it
is said that l r-converges to its r-limit a.
Informally, a is an r-limit of a sequence l if for an arbitrarily small
ε, the distance between a and all but a finite number of elements from
l is smaller than r + ε. In other words, an element a is an r-limit of
a sequence l if for any ε ∈ R++ almost all ai belong to the interval
(a − r − ε, a + r + ε).
It is a natural generalization of the classical concept of a limit as
the following result demonstrates.
Lemma 8.4.5. A point a is a limit of a sequence l if and only if it is a 0-limit of the sequence l.
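An r-limit can be probed numerically on a finite prefix of a sequence. The sketch below (ours; a finite check can support but never prove the limit claim) tests whether all inspected terms beyond some index stay within r + ε of the candidate point, and illustrates that 0 is a 1-limit, but not a 0.5-limit, of the sequence (−1)^i.

def looks_like_r_limit(a, seq, r, eps=1e-6, tail_from=100):
    """Heuristic finite check that a is an r-limit of the sequence seq.

    True means every inspected term with index >= tail_from satisfies
    |a - a_i| < r + eps; a genuine proof needs the full infinite tail."""
    return all(abs(a - x) < r + eps for x in seq[tail_from:])

# The sequence a_i = (-1)^i oscillates between -1 and 1.
seq = [(-1) ** i for i in range(1, 1001)]
print(looks_like_r_limit(0.0, seq, r=1))     # True: 0 is a 1-limit candidate
print(looks_like_r_limit(0.0, seq, r=0.5))   # False: not a 0.5-limit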
We also need fuzzy continuity.
Definition 8.4.14. (a) A partial function f : R → R is called
(q, r)-continuous at a point a ∈ R if for any sequence l = {ai ∈
R; i = 1, 2, 3, . . .}, for which a is a q-limit, the point f (a) is an
r-limit of the sequence {f (ai ) ∈ R; i = 1, 2, 3, . . .}.
(b) A function f : R → R is called (q, r)-continuous in (inside)
set X ⊆ R if f (x) (the restriction of f (x) on X) is (q, r)-continuous
at each point a from X ∩ Domf .
Fuzzy continuity is a natural generalization of the classical con-
cept of continuity as the following result demonstrates.
Lemma 8.4.6. A function f (x) is continuous at a point a ∈ R if
and only if it is (0, 0)-continuous at the point a.
These results show that the concept of (q, r)-continuity is a nat-
ural extension of the concept of conventional continuity.
Lemma 8.4.7. If t > r and p < q, then any function f (x) that is (q, r)-continuous at a is also (p, t)-continuous at a.
Note that if q < p, then it is possible that a function that is (q, r)-continuous at a is not (p, r)-continuous at a. For instance, the function f (x) = x is (0, 0)-continuous at the point 0, but for any p > 0, it is not (p, 0)-continuous at 0.
Let us consider two (weighted) epistemic spaces E and H, assum-
ing that the space E is a path (q, 1)-connected metric space with
the metric d and the space H is a metric space with the metric d.
Denoting metrics in different spaces by the same letter d follows the
mathematical tradition and does not cause confusion.

Theorem 8.4.1. An epistemic information operator A : E → H is (1, k)-continuous for some positive number k if and only if it is uniformly bounded.

Proof. Necessity. Let us consider a (1, k)-continuous epistemic information operator A : E → H and two points a and b from a set X in E. By Definition 8.4.11, there exists a path (sequence of points) l = {a1 , a2 , a3 , . . . , an } in E such that
d(a, a1 ) ≤ 1, d(an , b) ≤ 1, d(ai , ai+1 ) ≤ 1
for all i = 1, 2, 3, . . . , n − 1
and
[d(a, a1 ) + d(a1 , a2 ) + d(a2 , a3 ) + · · · + d(an−1 , an ) + d(an , b)]
< q · d(a, b).
If for some i, d(ai , ai+1 ) < 1/2 and d(ai+1 , ai+2 ) < 1/2, then it is possible to eliminate the point ai+1 from the path because, by the properties of a metric, d(ai , ai+2 ) ≤ d(ai , ai+1 ) + d(ai+1 , ai+2 ) < 1/2 + 1/2 = 1, and the new path is not longer than the previous one. We can reduce the initial path in such a way and assume that we have an irreducible path l = {a1 , a2 , a3 , . . . , an } between a and b.
As A is a (1, k)-continuous epistemic information operator,
d(A(a), A(a1 )) < k, d(A(an ), A(b)) < k, d(A(ai ), A(ai+1 )) < k for
all i = 1, 2, 3, . . . , n − 1. Thus,
d(A(a), A(b)) ≤ d(A(a), A(a1 )) + d(A(a1 ), A(a2 )) + · · ·
+ d(A(ai−1 ), A(an )) + d(A(an ), A(b)) < k(n + 1).
Now let us estimate the number n. Taking an irreducible path l = {a1 , a2 , a3 , . . . , an }, we know that between any two pairs (ai , ai+1 ) and (ai+2 , ai+3 ) with the distance less than 1/2, there is at least one pair (ai+1 , ai+2 ) with the distance larger than 1/2. Thus, the number of pairs (ai , ai+1 ) the distance between which is larger than 1/2 is more than (1/4)n. Consequently, the length of the path l = {a1 , a2 , a3 , . . . , an } is larger than (1/2) · (1/4)n = (1/8)n, i.e., (1/8)n < [d(a, a1 ) + d(a1 , a2 ) + d(a2 , a3 ) + · · · + d(an−1 , an ) + d(an , b)] < q · d(a, b) = q · d, where the distance d(a, b) is equal to d. Thus, n < 8q · d.
Let us assume that X is a bounded set. It means that there is a positive number h such that for any two points a and b from X, the distance d(a, b) is less than h. It is possible to assume that h > 1. Then n < 8q · h and d(A(a), A(b)) < k(n + 1) < k(8q · h + 1) < (8qk + k) · h = t. It means that the operator A is uniformly bounded because the numbers k and q are constants and t depends only on the one variable h.
Necessity is proved.
Sufficiency. Let us consider a uniformly bounded epistemic information operator A: E → H. Then (cf., Definition 8.4.12) for any number r, there is a number t such that for any state X of E, the condition d(X) ≤ r implies the condition d(A(X)) ≤ t. In particular, for the number 1, there is a number k such that for any points a and b from E, the condition d(a, b) ≤ 1 implies the condition d(A(a), A(b)) ≤ k. It means that the operator A is (1, k)-continuous.

Remark 8.4.1. The proof of Theorem 8.4.1 is sufficiently general.


Therefore, this result remains true for general metric spaces that
satisfy the necessary conditions.
One of the basic results of functional analysis is the theorem stat-
ing that a linear operator in Banach space is continuous if and only if
it is uniformly bounded (Dunford and Schwartz, 1958; Rudin, 1991).
It is demonstrated that (1, 0)-continuity is stronger than continu-
ity in metric spaces (Burgin, 2008). Thus, it is possible to ask a
question whether it would be possible to change the condition of
(1, k)-continuity to the condition of continuity in Theorem 8.4.1.
The following example shows that it is impossible as there are linear


epistemic information operators that are continuous but not uni-
formly bounded.
Example 8.4.6. Let us consider a weighted epistemic space E,
in which there is a countable number of epistemic structures
e1 , e2 , e3 , . . . , en , . . ., the distance between any two of them is 1 and
each of them has one weight w the range of which is the real line R.
We define the epistemic information operator A by the following rule:
A(en , w) = (en , nw ).
By definitions, this operator is linear and continuous but it is not uniformly bounded. Indeed, d((e1 , 1), (en , 1)) ≤ 2, while d(A(e1 , 1), A(en , 1)) > n for any n.
Even more, there are linear epistemic information operators that
are continuous but not bounded.
Example 8.4.7. Let us consider a weighted epistemic space E,
in which there is a countable number of epistemic items e1 , e2 ,
e3, . . . , en, . . . such that d(en, en+1) = 1/2^n and d(en, en+k) = Σ_{i=n}^{n+k−1} (1/2^i). Each of these epistemic items has one weight w the
range of which is the real line R. We define the epistemic information
operator A by the following rule
A(en , w) = (en , nw ).
By definitions, the set U = {(e1 , 1), (e2 , 1), (e3 , 1), . . . , (en , 1), . . .}
is bounded as d(U ) = sup d(en , en+k ) < 3. The operator A is linear
and continuous. However, it is not bounded because in the set A(U ),
there are pairs of points with the arbitrary big distance between
them, e.g., d(A(e1 , 1), A(en , 1)) > n.
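To make Examples 8.4.6 and 8.4.7 more tangible, here is a small Python sketch that simulates the operator A(en, w) = (en, nw) on finitely many items. It assumes, purely for illustration, a product-style metric d((ei, v), (ej, w)) = d(ei, ej) + |v − w| on the weighted space, which is not specified in the text, and all function names are invented for this sketch.

# Sketch of Examples 8.4.6/8.4.7: a linear, continuous, but unbounded operator.
# Assumption (not from the text): the metric on weighted items combines the
# base distance and the absolute difference of the weights.

def base_distance(i, j):
    # Example 8.4.7: d(e_n, e_{n+k}) = sum of 1/2^m for m = n, ..., n+k-1
    lo, hi = min(i, j), max(i, j)
    return sum(1.0 / 2**m for m in range(lo, hi))

def d(point1, point2):
    (i, v), (j, w) = point1, point2
    return base_distance(i, j) + abs(v - w)

def A(point):
    # The operator A(e_n, w) = (e_n, n*w): it multiplies the weight by n.
    n, w = point
    return (n, n * w)

# The set U = {(e_1, 1), (e_2, 1), ...} is bounded ...
U = [(n, 1.0) for n in range(1, 30)]
diameter_U = max(d(p, q) for p in U for q in U)
# ... but its image A(U) is not: distances in A(U) grow without bound.
diameter_AU = max(d(A(p), A(q)) for p in U for q in U)
print(f"diam U  = {diameter_U:.3f}")    # stays below 3
print(f"diam AU = {diameter_AU:.3f}")   # grows with the number of items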
Although continuity is insufficient for boundedness of a linear
epistemic information operator in a general case, there are situations
when boundedness is still equivalent to continuity.
Let us consider two weighted epistemic spaces E and H such that
both spaces Ee and He are metric spaces with the metric d, the base
epistemic space Ee is finite and all fibers Fa of the vector bundle
E = (E, p, Ee ) and fibers Ga of the vector bundle H = (H, p, He ) are
hyperseminormed vector spaces (Burgin, 2013).
Theorem 8.4.2. A linear epistemic information operator A : E → H is continuous in each fiber Fa of the vector bundle E = (E, p, Ee) if and only if A is bounded.
Proof. Necessity. Let us consider an epistemic information operator A : E → H continuous in each fiber Fa of the vector bundle E = (E, p, Ee) and a bounded set X in E. Then each intersection Xa = X ∩ Fa is a bounded set because a subset of a bounded set is also bounded. As the operator A is continuous in the fiber Fa, the image A(Xa) of Xa is bounded (Burgin, 2013). As the union of a finite number of bounded sets is bounded and the base Ee is finite, the image A(X) = ∪a∈Ee A(Xa) is a bounded set.
Sufficiency. Let us consider a bounded epistemic information
operator A : E → H. Then it is bounded on any subset of E,
in particular, on each fiber Fa of the vector bundle E = (E, p, Ee ).
As each fiber Fa of the vector bundle E = (E, p, Ee ) is a hypersemi-
normed vector space, the results from (Burgin, 2013) show that the
epistemic information operator A is continuous in each fiber Fa of
the vector bundle E = (E, p, Ee ).
In many applications, the base epistemic space Ee is finite. For
instance, a popular type of the base epistemic space Ee is a semantic
network (cf., Chapter 5) and all known semantic networks are finite.
Thus, the conditions from Theorem 8.4.2 are almost always satisfied.
As normed vector spaces are an important special case of hyper-
seminormed vector spaces (Burgin, 2013) the following result is
implied by Theorem 8.4.2.
Corollary 8.4.1. If all fibers Fa and Ga are normed vector spaces, then a linear epistemic information operator A : E → H is continuous in each fiber Fa of the vector bundle E = (E, p, Ee) if and only if A is bounded.

For linear operators in vector spaces, uniform boundedness coincides with boundedness (Dunford and Schwartz, 1958; Rudin, 1991). This gives us the following result.

Corollary 8.4.2. If all fibers Fa and Ga are seminormed and, in particular, normed, vector spaces, then a linear epistemic information operator A : E → H is continuous in each fiber Fa of the vector bundle E = (E, p, Ee) if and only if A is uniformly bounded.

When Ee consists of a single element, Theorem 8.4.2 gives us the classical result of functional analysis.

Corollary 8.4.3. (Dunford and Schwartz, 1958; Rudin, 1991). A linear operator in Banach space is continuous if and only if it is uniformly bounded.
Many epistemic spaces in general and M-spaces in particular are stratified (cf., Chapter 3). For instance, an important technique is consistent stratification of inconsistent knowledge using logical varieties, prevarieties and quasi-varieties (Burgin, 1991d; Burgin and de Vey Mestdagh, 2015).

Stratification of the knowledge system and the corresponding M-space allows defining specific classes of epistemic information operators.

Definition 8.4.15. An epistemic information operator A is called stratified if for any j ∈ J, there is k ∈ J such that for any Ki from KSM, we have A(Kij) ⊆ Kik.

Stratified information operators preserve the structure, i.e., stratification, of knowledge states. Note that addition and deletion operators are intrinsically stratified.
Definition 8.4.16. (a) An epistemic information operator A is called closed if for any j ∈ J and for any Ki from KSM, we have A(Kij) ⊆ Kij.

(b) An epistemic information operator A is called closed in a Mizzaro space Ki from KSM if A(Ki) ⊆ Ki.

Lemma 8.4.8. Any closed epistemic information operator A is stratified.

Definition 8.4.17. (a) A stratified epistemic information operator A in a linearly stratified M-space M is called monotone (antitone) if for any n ∈ N, there is k ∈ N such that k ≥ n (k ≤ n) and for any Ki from KSM, we have A(Kin) ⊆ Kik.
(b) A stratified epistemic information operator A in a linearly stratified M-space M is called strictly monotone (strictly antitone) if for any n ∈ N, there is k ∈ N such that k > n (k < n) and for any Ki from KSM, we have A(Kin) ⊆ Kik.

Definitions imply the following property of epistemic information operators.

Lemma 8.4.9. Any strictly monotone (strictly antitone) epistemic information operator A is monotone (antitone).

Lemma 8.4.10. In a finite linearly stratified M-space M, there are no strictly monotone and strictly antitone information operators.

Definition 8.4.18. An epistemic information operator A is called contracting if there is k ∈ J such that for any Ki from KSM, we have A(Kij) ⊆ Kik.
Definitions imply the following result.

Lemma 8.4.11. Any contracting epistemic information operator A is stratified.
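As a quick illustration of Definitions 8.4.15, 8.4.16 and 8.4.18, the following Python sketch models an operator as acting item-wise on a small stratified space and checks by brute force whether it is stratified, closed, or contracting. This is a deliberate simplification of the general definitions, and all names in the code are invented for the example.

# A toy model: an operator sends an item of stratum j to the stratum op(item, j).
STRATA = [0, 1, 2]
ITEMS = ["p", "q", "r"]

def target_strata(op, j):
    # All strata that items of stratum j can be sent to by op.
    return {op(item, j) for item in ITEMS}

def is_stratified(op):
    # Stratified: for every stratum j there is one stratum k receiving A(K_ij).
    return all(len(target_strata(op, j)) == 1 for j in STRATA)

def is_closed(op):
    # Closed: items of stratum j always stay in stratum j.
    return all(target_strata(op, j) == {j} for j in STRATA)

def is_contracting(op):
    # Contracting: a single stratum k receives the images of all strata.
    return len(set().union(*(target_strata(op, j) for j in STRATA))) == 1

identity = lambda item, j: j              # closed, hence stratified (Lemma 8.4.8)
shift_up = lambda item, j: min(j + 1, 2)  # stratified but not closed
to_zero  = lambda item, j: 0              # contracting, hence stratified (Lemma 8.4.11)

for name, op in [("identity", identity), ("shift_up", shift_up), ("to_zero", to_zero)]:
    print(name, is_stratified(op), is_closed(op), is_contracting(op))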
There are six types of basic epistemic operations: adding, deleting, moving, replicating, generating, and transforming knowledge, and six types of corresponding basic epistemic information operators: addition AD, deletion DEL, moving MV, replication REPL, generation GEN, and transformation TR epistemic information operators.

Definition 8.4.19. A transformation epistemic information operator TR takes a group of knowledge items (may be, one item) from the current knowledge state and transforms it into another group of knowledge items (may be, into one item).

An example of a transformation epistemic information operator is the operation of substitution in logic described in Section 5.2.3.

Definition 8.4.20. A generation epistemic information operator GEN takes a group of knowledge items (may be, one item) from the current knowledge state and generates another group of knowledge items (may be, one item).
The difference between transformation and generation is that in generation the initial group of knowledge items is preserved, while in transformation it is not preserved.
Lemma 8.4.12. AD is equal to GEN with the empty set of the initial
knowledge items.
Definition 8.4.21. A moving epistemic information operator MV
moves a knowledge item from one stratum into another one.
For instance, an operator that moves a knowledge item from the
short-term memory to the long-term memory is an example of a
moving operator.
Definition 8.4.22. (a) A replica of a knowledge item is another
knowledge item equivalent to the initial one.
(b) A replication epistemic information operator REPL makes a
replica of a knowledge item and adds it to the current knowledge
state.
Example 8.4.8. Let us consider logical knowledge representation
in which knowledge items are propositions. Then according to laws
of logic there are equivalent propositions. For instance, taking the
proposition (1) “B implies A”, we have equivalent propositions (2) “A
follows from B”, (3) “If B, then A”, and (4) “A is a consequence of
B”. All of them are replicas of one another although they are not
copies.
If the proposition (1) belongs to the stratum K1 , then its repli-
cation to the stratum K2 can introduce either proposition (1) or
proposition (2) or proposition (3) to the stratum K2 , while its copy-
ing to the stratum K2 can introduce only proposition (1) to the
stratum K2 .
An important special case of a replication epistemic information
operator is a copying epistemic information operator COPY, which
makes a copy of a knowledge item and adds it to the current knowl-
edge state.
Another important special case of a replication epistemic informa-
tion operator is a restricted replication epistemic information oper-
ator REPL0 , which replicates a knowledge item and adds it only
to a stratum of the current knowledge state that does not have the
same replica. One of its special cases is a restricted copying epistemic
information operator COPY0 , which makes a copy of a knowledge
item and adds it only to a stratum of the current knowledge state
that does not have the same replica. Operators REPL0 and COPY0 are used in stratified M-spaces in order not to turn these spaces into stratified M-multispaces.
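The basic operators AD, DEL, MV, COPY and TR can be prototyped directly on a dictionary-of-sets model of a stratified knowledge state. The Python sketch below is only an illustration of how such operators might act, not a definitive implementation; the function names are chosen for this example, and TR is realized as deletion followed by addition in the same stratum, in the spirit of Proposition 8.4.8.

from copy import deepcopy

# A knowledge state: stratum index -> set of knowledge items.
def AD(state, item, stratum):
    """Addition: put a knowledge item into a stratum."""
    new = deepcopy(state)
    new.setdefault(stratum, set()).add(item)
    return new

def DEL(state, item, stratum):
    """Deletion: remove a knowledge item from a stratum."""
    new = deepcopy(state)
    new.get(stratum, set()).discard(item)
    return new

def MV(state, item, src, dst):
    """Moving: transfer an item from stratum src to stratum dst."""
    if item in state.get(src, set()):
        return AD(DEL(state, item, src), item, dst)
    return deepcopy(state)  # nothing to move: the operator acts as identity

def COPY(state, item, src, dst):
    """Copying: a special case of replication that adds an exact copy."""
    if item in state.get(src, set()):
        return AD(state, item, dst)
    return deepcopy(state)

def TR(state, old_items, new_items, stratum):
    """Transformation: replace a group of items by another group."""
    new = deepcopy(state)
    for x in old_items:
        new = DEL(new, x, stratum)
    for y in new_items:
        new = AD(new, y, stratum)
    return new

K = {0: {"B implies A"}, 1: set()}
K = MV(K, "B implies A", 0, 1)                       # the item migrates to stratum 1
K = TR(K, ["B implies A"], ["If B, then A"], 1)
print(K)   # {0: set(), 1: {'If B, then A'}}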
Proposition 8.4.2. The operator COPY0 can copy a knowledge item only to a different stratum, i.e., if a ∈ Ki and COPY0 a ∈ Kj, then i ≠ j.

Indeed, if this condition is violated, then the initial M-space is converted to an M-multispace.
Complex information operations and operators are studied in
(Burgin, 1997e).
Definition 8.4.23. An epistemic information operator C is called
the sequential composition of an epistemic information operator A
with an epistemic information operator B if C(x) is defined and equal
to B(A(x)) when: 1) A(x) is defined and belongs to the domain of B; 2) B(A(x)) is defined. Otherwise, C gives no result when applied to x, i.e., C(x) = ∗.

It is denoted by B ◦ A.
Taking sequential composition of an epistemic information oper-
ator A with itself, we obtain sequential powers An of the operator A.
In the general case, the sequential composition of epistemic infor-
mation operators is not commutative in M-spaces as the following
example demonstrates.
Example 8.4.9. Let us consider a structured M-space M = {KSM; OSM} where KSM = ∪i∈I KSMi. In this space, the operator MVaij moves an element a from the stratum KSMi to the stratum KSMj, and does not change other elements from KSM. Taking the sequential composition of such operators, we have

MVaij ◦ MVaik = MVaij ≠ MVaik ◦ MVaij = MVaik

if i ≠ j, k ≠ j, and i ≠ k. Thus, operators MVaij do not commute with one another.
At the same time, all these operators are idempotents, i.e., MVaij ◦ MVaij = MVaij.

It is necessary to remark that in a structured M-multispace M with an infinite number of copies of the element a in each stratum KSMi, operators MVaij and MVaik commute with one another. This demonstrates the difference between M-spaces and M-multispaces.
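Example 8.4.9 can also be checked mechanically. The Python sketch below defines a minimal moving operator on the same dictionary-of-sets model and verifies that applying two moving operators in different orders gives different knowledge states, while each operator is idempotent; again, this is an illustrative toy rather than the book's formal construction.

from copy import deepcopy

def MV(state, item, src, dst):
    # Moving operator: transfer item from stratum src to stratum dst, if present.
    new = deepcopy(state)
    if item in new.get(src, set()):
        new[src].discard(item)
        new.setdefault(dst, set()).add(item)
    return new

def MV_op(item, src, dst):
    # Package MV^a_{src,dst} as a state -> state function so it can be composed.
    return lambda state: MV(state, item, src, dst)

K = {1: {"a"}, 2: set(), 3: set()}
mv12 = MV_op("a", 1, 2)
mv13 = MV_op("a", 1, 3)

print(mv12(mv13(K)))    # apply mv13 first, then mv12: a ends up in stratum 3
print(mv13(mv12(K)))    # apply mv12 first, then mv13: a ends up in stratum 2
# The two orders give different states, so the moving operators do not commute.
print(mv12(mv12(K)) == mv12(K))   # True: each moving operator is idempotent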
Proposition 8.4.3. If A and B are closed (closed in a Mizzaro space Ki) operators, then their sequential composition A ◦ B is also a closed (closed in a Mizzaro space Ki) operator.

Indeed, if A and B are closed epistemic information operators in a structured M-multispace M, then for any j ∈ J and for any Ki from KSM, we have A(Kij) ⊆ Kij and B(Kij) ⊆ Kij. Thus, (A ◦ B)(Kij) = B(A(Kij)) ⊆ B(Kij) ⊆ Kij.

For closed in a Mizzaro space Ki operators, the proof is similar.

Proposition 8.4.4. If A and B are contracting operators, then their sequential composition A ◦ B is also a contracting operator.

Proof is similar to the proof of Proposition 8.4.3.

Proposition 8.4.5. If A and B are stratified operators, then their sequential composition A ◦ B is also a stratified operator.

Proof is similar to the proof of Proposition 8.4.3.

Proposition 8.4.6. If A and B are (strictly) monotone [antitone] operators, then their sequential composition A ◦ B is also a (strictly) monotone [antitone] operator.

Indeed, if A and B are monotone epistemic information operators in a structured M-space M, then for any Ki from KSM, we have A(Kij) ⊆ Kik with k ≥ j and B(Kik) ⊆ Kih with h ≥ k. Thus, (A ◦ B)(Kij) = B(A(Kij)) ⊆ B(Kik) ⊆ Kih with h ≥ j.

Considerations for strictly monotone, antitone and strictly antitone epistemic information operators are similar.
Let us consider an M-space M with a finite linear stratification.
Proposition 8.4.7. For any monotone and any antitone epistemic information operator A, there is a number n such that the sequential power An is also a closed epistemic information operator.

Indeed, if A is a monotone epistemic information operator in a structured M-space M, then making each step, it either increases the number of the stratum or the image of a stratum remains in the same stratum. If the second case is true for all strata of M, then A itself is a closed epistemic information operator. Otherwise, A can increase the number of a stratum only for a finite number of steps because there are only a finite number of strata in M. Thus, after some number of repetitions, the image of a stratum remains in the same stratum. Taking the largest number of such steps, we obtain the necessary number n.
Note that n cannot be larger than the number of strata in M .
Let us explore relations between basic epistemic information
operators.
Definition 8.4.24. (Burgin, 2010d). Two operators A and B are functionally equivalent if they have the same definability domain D and A(x) = B(x) for any element x from D.

For instance, two algorithms that compute the same function on natural numbers are functionally equivalent.

Proposition 8.4.8. A transformation epistemic information operator TR is functionally equivalent to the sequential composition of a deletion epistemic information operator DEL and addition epistemic information operator AD that act in the same stratum of the M-space.

Indeed, if TR takes items a1, a2, . . . , an from KSM and transforms them into b1, b2, . . . , bm, then it is possible to achieve the same result by deleting a1, a2, . . . , an and adding b1, b2, . . . , bm to the corresponding stratum of KSM.

Proposition 8.4.9. A moving epistemic information operator MV is functionally equivalent to deletion of a knowledge item in one stratum and adding the same knowledge item to another stratum.
For instance, it is possible to describe the process of transition of a knowledge item X from the short-term memory in the brain to the long-term memory as deletion of X from the short-term memory and addition of X to the long-term memory.
Proposition 8.4.10. A replication epistemic information operator REPL is functionally equivalent to adding an equivalent knowledge item to the corresponding stratum.

For instance, it is possible to describe the process of rewriting a word X that represents a knowledge item from one tape t1 of a Turing machine (cf., Appendix B) to another tape t2 as addition of X to the tape t2 without changing the tape t1.

Proposition 8.4.11. For any M-space M, there is a superspace H, in which all deletion and addition epistemic information operators DEL and AD in M are functionally equivalent to moving epistemic information operators MV in H.

Proof. To build a superspace H with the necessary properties, we add one more stratum E called the external stratum to the initial M-space M. In addition, we assume that E contains all elements from the universal set (multiset) W and each element has infinitely many copies in E. In this case, any deletion of an element a from a state K from M is equivalent to moving the same element a to the stratum E. In a similar way, any addition of an element a to a state K from M is equivalent to moving the same element a from the stratum E to the state K.

Proposition is proved.

Proposition 8.4.12. A moving epistemic information operator MV can be (functionally) simulated by copying COPY and deletion DEL epistemic information operators.

Indeed, instead of moving a knowledge item a from a stratum Ki of a state K to a stratum Kj of a state K′, it is possible to copy a from Ki to Kj and then to delete this element from Ki.
Proposition 8.4.13. A generation epistemic information operator GEN is functionally equivalent to the sequential composition of a transformation epistemic information operator TR and addition epistemic information operator AD that act in the same stratum of the M-space.

Proof is similar to the proof of Proposition 8.4.12.

Proposition 8.4.14. A transformation epistemic information operator TR is functionally equivalent to the sequential composition of a generation epistemic information operator GEN and deletion epistemic information operator DEL that act in the same stratum of the M-space.

Proof is similar to the proof of Proposition 8.4.12.

Definition 8.4.25. A system B of epistemic information operators is an operator basis of an M-space M if any A from OSM is a composition of elements from B.

Operator bases can be useful in many situations. For instance, knowing properties of operators from such a base and properties of compositions, we can find properties of other operators.

Assuming that all operators in an M-space M are compositions of basic epistemic information operators, we have the following results.

Proposition 8.4.15. (a) {AD, DEL} is an operator basis of an arbitrary (stratified) M-space M.

(b) {TR, MV} is an operator basis of an arbitrary (stratified) M-space M.

(c) {TR} is an operator basis of an arbitrary (i.e., non-stratified) M-space M.

Proof is based on Propositions 8.4.9–8.4.14.

Proposition 8.4.16. (a) In a stratified M-space M with the external stratum, {REPL0, DEL} is an operator basis.

(b) In a stratified M-space M with the external stratum, {MV} is an operator basis.

(c) In a stratified M-multispace M with the external stratum, {REPL, DEL} is an operator basis.

Proof is based on Propositions 8.4.9–8.4.14.
Different classes of epistemic information operators mathematically represent properties of information and its action on knowledge spaces, which are mathematically modeled by stratified M-spaces and stratified M-multispaces. This is the functional mathematical model of dynamic features of knowledge and information interaction.
Another mathematical model of dynamic features of knowledge
and information interaction uses category theory and is studied in
(Burgin, 2010b; 2011b).
8.5. Knowledge as a measure of information

He who knows, does not speak. He who speaks, does not know.
Lao-Tzu

The methodological principles of the anthropic information explication and classification bring us to cognitive information and its important subclass called epistemic information.
Portions of epistemic information are modeled/represented by
epistemic information operators acting in spaces of knowledge, which
are represented by a formal construction called a Mizzaro space.
These spaces consist of knowledge items often unified by structural
relations.
In a general setting, epistemic information has been studied by dif-
ferent authors. Bar-Hillel and Carnap (1958), Hintikka (1968; 1970;
1971) and Israel and Perry (1990) explored information in knowledge
represented by means of mathematical logic. Shreider (1965), Mackay
(1969), Brookes (1980), Mizzaro (1996; 1998; 2001) and Gackowski
(2004) base their theories on the following assumption:
Information is a change in a knowledge system.
Later this principle has been made more exact (Mizzaro, 2001) and
formulated as
Epistemic information is a change in a knowledge system.
The general theory of information (Burgin, 2010) makes the next step toward a better understanding of epistemic information. Namely, it is explained that

Epistemic information is a capacity to cause changes in a knowledge system, and it is possible to measure this capacity by changes in the knowledge system impacted by information.
Such changes in the knowledge system of a system R reflect
accepted information. Note that information can be accepted not
only as true information but also as false information. In this case,
changes in the knowledge system can result in exclusion of some
knowledge or in labeling this knowledge as false, e.g., treating it as
a misconception or blunder.
To reflect interactions between information and knowledge in the
mathematical form, we assume existence of a universal knowledge set
(knowledge multiset) W of knowledge units (knowledge items) and
consider a class A of dynamic systems, e.g., intelligent or cognitive
agents, that have knowledge. This assumption is based on a tradi-
tional approach to knowledge, which is called the representational
atomism (Campbell, 1998), according to which, knowledge is built
from some basic or primitive or elementary units by combining them
into a more complex structure.
It is possible to take the set W C of elementary knowledge units
mathematically modeled in Chapter 4 as the universal set (multiset)
W . Another possibility for W is realized by the set (multiset) W L
of propositions and/or predicates from a logical language L. Propo-
sitions and predicates are symbolic knowledge units in the logical
approach developed in works of Bar-Hillel and Carnap (1958), Hin-
tikka (1968; 1970) and some other authors. Shreider (1965) inter-
preted symbolic knowledge units as texts in a thesaurus. Many
researchers employ mental schemas as cognitive/symbolic knowledge
units in the brain (cf., for example, (Anderson, 1977; Arbib, 1992;
Armbruster, 1996; Burgin, 2006)). Taking descriptions of these situ-
ations possible in a world U , which can be a physical world, a mental
world or a world of some organization, as knowledge items or knowl-
edge units, we obtain one more universal knowledge set (multiset)
W S (cf., for example, (Barwise and Perry, 1983)).
As it is explained in Section 3.1, the set (multiset) W is called universal because we assume that the following axiom is true.

UKEFA 1 (the Internal Representation Axiom). Knowledge of any system A from the class A is organized in the form of subsets (submultisets) of the set (multiset) W.

Note that UKEFA 1 is a particular case of the Internal Cognitive Representation Axiom (UCEF 1) introduced in Section 3.1.
It is possible to interpret W as the base of all knowledge
that systems from the class A are able to have about their envi-
ronment.
If A is a system from the class A, then knowledge of A at some
moment is called a knowledge state (KS) of A or of the knowledge
space of A. By Axiom UKEFA 1, any KS is a subset (submultiset) of
the set (multiset) W . It means that knowledge of any system A from
the class A consists of atomic components called knowledge items or
knowledge units (KI) Additionally, we assume that knowledge of A
is stratified, which means that elements from a KS are situated in
several strata.
According to our model (cf., Section 3.1), the set (multiset) W
and all stratified knowledge state (KS) are knowledge spaces. This
model further develops the approach of Mizzaro (1996; 1998; 2001)
and Burgin (2010; 2011a) to epistemic information.
The set of all actual knowledge states of a system A is denoted
by KSA and the set of all possible knowledge states of the system
A is denoted by PKSA. By Axiom UKEFA 1, both of these sets are
subsets of the power-set 2W .
Thus, any knowledge state of a system A consists of two parts: the actual knowledge KA of A and the potential knowledge KP of A. Note that the potential knowledge KP of A, i.e., knowledge accessible by A from this state without incoming information, depends on the actual knowledge KA of A in this state.
As it is explained in Section 3.1, information is modeled by epis-
temic information operators, which change knowledge states (KS) of
systems from the class A. Namely, each portion (piece) of informa-
tion is represented by an epistemic information operator.
Definition 8.5.1. (a) A (partial) mapping R : KSA → KSA is called an actual epistemic information operator in the actual knowledge space of the system A.

(b) A pair (R, P) of (partial) mappings R : KSA → KSA and P : PKSA → PKSA is called an extended epistemic information operator in the extended knowledge space of the system A.
If the knowledge state of the system A is KI , then the action of
the operator R changes it to the state KRF .
To define change caused by the operator R, we make a distinction
between transition changes and multistep changes. There are three
basic kinds of such changes (operations): knowledge items are added
to the initial knowledge state, knowledge items are removed (deleted)
from the initial knowledge state and knowledge items from the initial
knowledge state are moved from one stratum to another. In transition
changes, each operation is performed at most one time with one
knowledge item. In multistep changes, each operation is performed
at most one time with one knowledge item on each step of the whole
process. For instance, when changes are multistep, it is possible to
remove a knowledge item k at the first step and then to add k at
the step three. The same knowledge item can be moved several times
inside the knowledge system and/or can be moved several times into
and out from the knowledge system.
We use the following notation:

• KRO is the set of all knowledge items added in the process of information acceptation, i.e., in the process caused in the knowledge space of A by the actual epistemic information operator R.
• KRD is the set of all knowledge items removed (deleted) in the process of information acceptation.
• KRM is the set of all knowledge items moved between strata in the process of information acceptation.

Note that it is possible to treat removed or deleted knowledge as knowledge moved outside the system A.
Proposition 8.5.1. In the context of transition (one-step) changes, the following equalities are true for any actual epistemic information operator R:

KRO = KRF\KI

and

KRD = KI\KRF.
Proof. Knowledge items that belong to KRF but do not belong to KI are definitely added when the operator R is applied. This gives us the inclusion KRO ⊇ KRF\KI. Besides, knowledge items that belong to KI cannot be added in the transition process, as it is only possible to move or to delete them. This gives us the equality KRO = KRF\KI.

Knowledge items that belong to KI but do not belong to KRF are definitely deleted when the operator R is applied. This gives us the inclusion KRD ⊇ KI\KRF. Besides, knowledge items that belong to KRF cannot be deleted in the transition process, as it is only possible to move or to add them. This gives us the equality KRD = KI\KRF.

Proposition is proved.
Introduced components of knowledge states allow us to define
measures of information. At first, we define measures of received
information.
Let us consider an actual epistemic information operator R and
a system A from K in the knowledge state KI .
Definition 8.5.2. The transitional measure |IRA | of information IRA
transmitted by R to A represents changes of knowledge in A with
the knowledge state KI under the impact of R and is defined by the
following formula
|IRA | = ∆K = |KRF | − |KI |
where |X| is the number of elements in the set X.
This shows that the transitional measure of transmitted information can be positive, when more knowledge items are added than deleted, or negative, when fewer knowledge items are added than deleted.

As in the transition process each operation is performed at most one time with each knowledge item from KI, Proposition 8.5.1 gives us the following result.
Proposition 8.5.2. In the context of transition (one-step) changes, the following equalities are true for any actual epistemic information operator R:

|IRA| = |KRF\KI| − |KI\KRF| = |KRO| − |KRD|.
Usually, acceptation of information is a complex process that
involves many steps of operation. Consequently, we need a more
exact measure taking into account different operations performed
at all steps.
Definition 8.5.3. The operational measure ‖IRA‖ of information IRA transmitted by R to A represents changes of knowledge in A with the knowledge state KI under the impact of R and is defined by the following formula

‖IRA‖ = ∆K = |KRO| + |KRD| + |KRM|,

where |X| is the number of elements in the set X.

Corollary 8.5.1. When the knowledge system of A is not stratified, we have

‖IRA‖ = ∆K = |KRO| + |KRD|.
The operational measure of transmitted information is always
non-negative and can be applied both to transitional and multistep
changes.
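The transitional and operational measures of Definitions 8.5.2 and 8.5.3 can be computed directly once knowledge states are represented as sets. The Python sketch below does this for a one-step change, using Proposition 8.5.1 to recover the added and deleted items; the set of moved items is passed in explicitly because it is not determined by the initial and final states alone, and the function names are illustrative only.

def transitional_measure(K_I, K_RF):
    # |I_RA| = |K_RF| - |K_I|  (Definition 8.5.2); may be negative.
    return len(K_RF) - len(K_I)

def operational_measure(K_I, K_RF, moved=frozenset()):
    # ||I_RA|| = |K_RO| + |K_RD| + |K_RM|  (Definition 8.5.3); never negative.
    K_RO = K_RF - K_I          # added items (Proposition 8.5.1)
    K_RD = K_I - K_RF          # deleted items (Proposition 8.5.1)
    return len(K_RO) + len(K_RD) + len(moved)

K_I  = {"a", "b", "c"}         # initial knowledge state
K_RF = {"b", "c", "d", "e"}    # final knowledge state after the operator R acts

print(transitional_measure(K_I, K_RF))              # 1  (one more item than before)
print(operational_measure(K_I, K_RF))               # 3  (2 added + 1 deleted)
print(operational_measure(K_I, K_RF, moved={"b"}))  # 4  (plus one moved item)

In this toy run |IRA| = |KRO| − |KRD| = 2 − 1 = 1, in agreement with Proposition 8.5.2, and |IRA| ≤ ‖IRA‖, as stated in Proposition 8.5.3.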
Definitions imply the following result.

Proposition 8.5.3. For any actual epistemic information operator R and any system A from the class A, the following inequality is true:

|IRA| ≤ ‖IRA‖.

Proof is left as an exercise.

Proposition 8.5.3 gives us the following result.

Proposition 8.5.4. In the context of transition (one-step) changes, the following equalities are true for any actual epistemic information operator R:

‖IRA‖ = |KRF\KI| + |KI\KRF| + |KRM|.
Corollary 8.5.2. When the knowledge system of A is not stratified, we have

‖IRA‖ = |KRF\KI| + |KI\KRF|.
Measures of transmitted information are used to define measures of a portion of information represented by an actual epistemic information operator R. At first, we define measures of R for an object A.

Definition 8.5.4. The upper transitional measure |IRA|U of information of R for A represents maximal changes of knowledge in A under the impact of R and is defined by the following formula

|IRA|U = max{|IRA|; for all KI ∈ KSA}.

Similar to the transitional measure of transmitted information, the upper transitional measure can be positive or negative. Note that negativity of the upper transitional measure for the object A means that the portion of information represented by the operator R always causes decrease of knowledge in A independently of its state.

Proposition 8.5.2 implies the following result.

Proposition 8.5.5. In the context of transition (one-step) changes, the following inequalities are true for any actual epistemic information operator R:

min{|KRF\KI|; for all KI ∈ KSA} − max{|KI\KRF|; for all KI ∈ KSA} ≤ |IRA|U ≤ max{|KRF\KI|; for all KI ∈ KSA} − min{|KI\KRF|; for all KI ∈ KSA}.

Definition 8.5.5. The lower transitional measure |IRA|L of information of R for A represents minimal changes of knowledge in A under the impact of R and is defined by the following formula

|IRA|L = min{|IRA|; for all KI ∈ KSA}.
Similar to the transitional measure of transmitted information, the lower transitional measure can be positive or negative. Note that positivity of the lower transitional measure for the object A means that the portion of information represented by the operator R always causes increase of knowledge in A independently of its state.

Naturally, we have the inequality

|IRA|L ≤ |IRA|U.

Proposition 8.5.6. In the context of transition (one-step) changes, the following inequalities are true for any actual epistemic information operator R:

min{|KRF\KI|; for all KI ∈ KSA} − max{|KI\KRF|; for all KI ∈ KSA} ≤ |IRA|L ≤ max{|KRF\KI|; for all KI ∈ KSA} − min{|KI\KRF|; for all KI ∈ KSA}.
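The upper and lower measures of Definitions 8.5.4 through 8.5.7 are simply maxima and minima of the pointwise measures over the family of knowledge states. A minimal Python illustration, with an invented toy operator and a finite family of states standing in for KSA:

def transitional_measure(K_I, K_RF):
    # |I_RA| = |K_RF| - |K_I|  (Definition 8.5.2)
    return len(K_RF) - len(K_I)

def upper_transitional(R, states):
    # |IRA|U = max{|IRA|; for all K_I in KSA}
    return max(transitional_measure(K_I, R(K_I)) for K_I in states)

def lower_transitional(R, states):
    # |IRA|L = min{|IRA|; for all K_I in KSA}
    return min(transitional_measure(K_I, R(K_I)) for K_I in states)

# A toy operator: R adds the item "d" and deletes "a" if it is present.
R = lambda K: (K | {"d"}) - {"a"}

states = [{"a"}, {"b", "c"}, {"a", "b", "c"}]
print(upper_transitional(R, states), lower_transitional(R, states))  # 1 0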
Usually, acceptation of information is a complex process that
involves many steps of operation. Consequently, we need more exact
measures taking into account different operations performed at all
steps of information transmission.
Definition 8.5.6. The upper operational measure ‖IRA‖U of information of R for A represents maximal changes of knowledge in A under the impact of R and is defined by the following formula

‖IRA‖U = max{‖IRA‖; for all KI ∈ KSA}.

Similar to the operational measure of transmitted information, the upper operational measure is always non-negative.

Definition 8.5.6 implies the following result.

Proposition 8.5.7. In the context of multistep changes, the following inequalities are true for any actual epistemic information operator R:

min{|KRO|; for all KI ∈ KSA} + min{|KRD|; for all KI ∈ KSA} + min{|KRM|; for all KI ∈ KSA} ≤ ‖IRA‖U ≤ max{|KRO|; for all KI ∈ KSA} + max{|KRD|; for all KI ∈ KSA} + max{|KRM|; for all KI ∈ KSA}.

Definition 8.5.7. The lower operational measure ‖IRA‖L of information of R for A represents minimal changes of knowledge in A under the impact of R and is defined by the following formula

‖IRA‖L = min{‖IRA‖; for all KI ∈ KSA}.

Similar to the operational measure of transmitted information, the lower operational measure is always non-negative.

Naturally, we have the inequality

‖IRA‖L ≤ ‖IRA‖U.

Definition 8.5.7 implies the following result.

Proposition 8.5.8. In the context of multistep changes, the following inequalities are true for any actual epistemic information operator R:

min{|KRO|; for all KI ∈ KSA} + min{|KRD|; for all KI ∈ KSA} + min{|KRM|; for all KI ∈ KSA} ≤ ‖IRA‖L ≤ max{|KRO|; for all KI ∈ KSA} + max{|KRD|; for all KI ∈ KSA} + max{|KRM|; for all KI ∈ KSA}.
It is necessary to remark that measuring knowledge by the num-
ber of knowledge items and applying these measures to measuring
information gives us only the first approximation to modeling infor-
mation processes related to knowledge transformations. Indeed, if
some knowledge item contains another knowledge item, then the sec-
ond knowledge item does not add knowledge to the first knowledge
item. Besides, one knowledge item can contain much more knowledge
than another knowledge item. In addition, the number of knowl-
edge items as a measure of knowledge reflects only the substan-
tial dimensions of knowledge, while the relational dimensions, which
describe relations between knowledge items, belong to higher-level
models of knowledge transformations and operational information
theory, which are studied elsewhere. Taking into account such rela-
tions allows one to achieve higher precision in measuring information
and knowledge. However, there are many situations where precision
provided by quantities of knowledge items is sufficient. For instance,
exactly this level of precision is successfully used in software engi-
neering and technology where program instructions, which belong to
the procedural type of knowledge, are used as knowledge items that
determine measures of information and knowledge transformations
(Baer and Zeidman, 2009).
The most popular measures of information are Shannon’s entropy
and Kolmogorov (or algorithmic) complexity (Burgin, 2010).
The goal of Claude Elwood Shannon (1916–2001) was to measure
transmission information. To achieve this goal, he introduced the
entropy of a message m that informs about happening of an event E
or the outcome D of an experiment H, assuming that there are n pos-
sible alternatives E1 , E2 , E3 , . . . , En , one of which is E, to the event
E or there are n possible outcomes D1 , D2 , D3 , . . . , Dn of the experi-
ment H, one of which is D (Shannon, 1948). In this case, the entropy
of the message m is defined by the following formula

H(m) = H(p1, p2, . . . , pn) = −Σ_{i=1}^{n} pi · log2 pi.   (3.2.4)

Probabilities used in the formula (3.2.4) are obtained from experiments and observation, using relative frequencies.
By definition, the entropy H(m) of the message m measures infor-
mation by knowledge given by the message m about happening of
an event E or the outcome D of an experiment H.
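For readers who want to experiment with formula (3.2.4), here is a small Python function computing the entropy of a message over n alternatives; it is a routine illustration of the formula and nothing in it is specific to this book.

import math

def shannon_entropy(probabilities):
    """H(p1, ..., pn) = -sum(p_i * log2(p_i)); terms with p_i = 0 contribute 0."""
    if abs(sum(probabilities) - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1")
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: two equally likely alternatives
print(shannon_entropy([0.25] * 4))   # 2.0 bits: four equally likely alternatives
print(shannon_entropy([0.9, 0.1]))   # about 0.469 bits: a nearly certain outcome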
Kolmogorov (or algorithmic) complexity was introduced by
three authors — Ray Solomonoff (1926–2009), Andrey Nikolayevich
Kolmogorov (1903–1987) and Gregory Chaitin. In this context, infor-
mation is considered not as some intrinsic property of different
objects but is related to algorithms that use, extract or produce
this information. As a result, this approach was the first developed
form of a complexity measure of objects, such as a piece of text or
an arbitrary sequence of symbols, which estimates the computational
resources needed to specify the object.
Namely, the Kolmogorov (or algorithmic) complexity CA(x) of an object (word) x with respect to an algorithm A is defined as

CA(x) = min{l(p); A(p) = x}

in the case when there is an input word (program of A) p with the length l(p) such that A(p) = x; otherwise CA(x) is not defined.

This shows that the Kolmogorov complexity CA(x) of an object (word) x measures information sufficient for the reconstruction of the object (word) x. With respect to the algorithm A, its program or input data playing the role of a program are representations of operational knowledge as they determine how to reconstruct the object. The length of a program (word) is its measure. Therefore, in this case, information is also measured by knowledge.
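The definition CA(x) = min{l(p); A(p) = x} can be made concrete with a toy algorithm A. In the Python sketch below, A is a deliberately simple, invented decoder whose programs are either literal words or run-length codes, and CA(x) is found by brute-force search over short programs. This is only an illustration of the definition: genuine Kolmogorov complexity is defined relative to a universal algorithm and is not computable.

from itertools import product

PROGRAM_ALPHABET = "aLR23456789"   # symbols a program may use (toy choice)

def A(program):
    """A toy decoder: 'L<w>' outputs w literally, 'R<d><w>' outputs w repeated d times."""
    if program.startswith("L"):
        return program[1:]
    if program.startswith("R") and len(program) >= 3 and program[1].isdigit():
        return program[2:] * int(program[1])
    return None   # the program is not well formed, so A gives no result

def C_A(x):
    """Brute-force min{l(p) : A(p) = x}; feasible only for very short words x."""
    for length in range(1, len(x) + 2):            # 'L' + x is always a valid program
        for symbols in product(PROGRAM_ALPHABET, repeat=length):
            p = "".join(symbols)
            if A(p) == x:
                return len(p), p
    return None

print(C_A("aaaa"))   # (3, 'R4a'): a short description of a regular word
print(C_A("a"))      # (2, 'La'): the literal program is already shortest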
In addition to Kolmogorov complexity CA(x), there are also other variations of this measure: uniform complexity KR(x), prefix complexity or prefix-free complexity K(x), monotone complexity Km(x), conditional Kolmogorov complexity CD(x), time-bounded Kolmogorov complexity Ct(x), space-bounded Kolmogorov complexity Cs(x), resource-bounded Kolmogorov complexity Ct,s(x) and inductive Kolmogorov complexity (cf., (Burgin, 2004c; 2010)). All these measures estimate information by the amount of operational knowledge.
An axiomatic approach to measuring information by the amount of operational knowledge is developed in (Burgin, 1982a; 1983; 1990b; Câmpeanu, 2012). All kinds of Kolmogorov/algorithmic complexity are encompassed by this approach.
Kolmogorov/algorithmic complexity as a measure of information
has been applied in a variety of areas such as medicine, biology, neuro-
physiology, physics, economics, hardware, and software engineering.
Chapter 9

Conclusion

We now know enough to know that we will never know everything.
Maria Popova, "Creating a 'Fourth Culture' of Knowledge"

Thus, we can see that knowledge is an extremely complex and at
the same time, more than ever important phenomenon. It plays a
pivotal role in the life of individuals, functioning of organizations
and the whole existence of society. It is important to understand the
two-sided function of knowledge for a system that possesses knowl-
edge. On the one hand, knowledge can be and in many cases, is the
source of change and development. On the level of an individual (the
knower), good knowledge and its correct application can lead to a
better life and higher achievements. On the level of organization,
knowledge can enhance extension of activities, expansion into new
domains, and improvement of organizational culture and practices.
On the level of society, knowledge brings steady technological and
economical progress, which accelerates all the time. In essence, it is
possible to envision knowledge processes as fundamental drivers of
life on all levels.
On the other hand, knowledge allows a system to preserve
dynamic stability and consistency on all levels. For an individual,
it would be very hard (if not impossible) to preserve wellbeing and
even life without proper knowledge in the permanently changing
and sometimes hostile environment. Organizations always live in a state of competition, and insufficient knowledge and/or its wrong application brings bad consequences, up to decline and disintegration. History shows that when cognition is not supported and/or its
results are not properly used, society declines and decays. Thus, we
come to the necessity to have knowledge about knowledge.
Synthesizing existing and developing new approaches in knowl-
edge studies, an exposition of the synthetic theory of knowledge is
given in this book. It includes philosophical, methodological, theoret-
ical and applied aspects of knowledge and knowledge processes. This
theory is constructed on three levels — quantum, average, and global.
In the context of the synthetic theory of knowledge, several math-
ematical techniques for modeling knowledge processes are described.
Epistemic structures, epistemic spaces and epistemic information
operators provide efficient means for knowledge studies. Different
types of knowledge spaces presented in this book — unstructured
knowledge spaces, stratified knowledge spaces, typological knowledge
spaces, and structured knowledge spaces — are aimed at the devel-
opment of knowledge processing in technical systems, such as com-
puters, computer networks, robots, etc. The mathematical models and constructions studied here, such as logical varieties and prevarieties, open new possibilities for artificial intelligence.
Novel theoretical and practical ideas are introduced. For instance,
the conditional approach to knowledge definition is developed in
Chapter 2. This approach better fits the practice of utilization of the
term knowledge allowing efficient differentiation and integration of
knowledge. At the same time, a more complete description of knowl-
edge management is elaborated in the book. Namely, new aspects
and stages of knowledge management, such as knowledge hiding,
knowledge retirement and knowledge maintenance, are explicated
and explored in this book.
The necessity of strongly multidisciplinary and transdisciplinary
approaches in knowledge studies has been demonstrated. That is
why we have explored basic properties of knowledge, knowledge
representations, knowledge processes and knowledge functions from
philosophical, methodological, mathematical, scientific and practical
perspectives opening new and emphasizing existing directions and
areas in knowledge studies.
For instance, an important direction is the measurement theory of
knowledge and information. It is necessary to build efficient measures
of knowledge providing effective tools for evaluation of knowledge
assets of individuals and organizations. There are already some sim-
ple measures such as the number of propositions and predicates as
the measure of descriptive knowledge in the propositional form or
the length of a program as the measure of operational knowledge
in the procedural form. However, it is necessary to have more mea-
sures for evaluation of different knowledge properties based on sound
theoretical foundations.
It is possible to essentially extend the range of practical applica-
tions for some theories presented in this book. For instance, mathe-
matical schema theory presented in Chapter 5 can be used for stud-
ies of the brain and human intelligence (Burgin, 2005a), for building
computer and communication networks (Burgin, 2006), and for orga-
nizational planning. One more prospective application of mathemat-
ical schemas is the three-schema approach, or the Three Schema Con-
cept, in software and system engineering. According to this approach,
construction of information systems such as databases and knowledge
bases demands conceptual modeling as the key condition to achiev-
ing high-quality data and knowledge integration (Loomis, 1987). The
three-schema model comprises three types of schemas:
• External schemas represent the views of users;
• Conceptual schemas integrate external schemas in a logical structure;
• Internal schemas define physical storage structures for knowledge and data.

The external schema is the highest level of abstraction, which is
situated at the view level, reflecting the user vision of the system
and its interface. The conceptual schema, which is situated at the
logical level, defines the logical structure of the entire system and
the ontology of the system data, information and/or knowledge. For
instance, in the case of databases and knowledge bases, the con-
ceptual schema describes what data and/or knowledge are stored in
the system, the relationships among the data (knowledge items) and
complete description of the user's requirements without any concern for the physical implementation, which is relegated to the next level. The physical schema is situated at the internal or physical level, which is the lowest level of the system representation. It deals with the physical representation of data and/or knowledge, describing how they are physically stored and organized on the storage medium.
To conclude, we formulate some open problems in knowledge stud-
ies, which, in turn, open new directions for the future research.
As psychologists found, people use mental schemas all the time (Neisser, 1967; Anderson, 1977; Rumelhart, 1980). For instance, image schemas establish patterns of understanding and reasoning (Johnson, 1987; Lakoff, 1987; Rohrer, 2006). Interaction schemas
are determined by the execution of tasks involving the physical
environment (Arbib, 1992; 1995). At the same time, mathematical
schema theory provides more efficient tools than empiric approaches
in schema theory (Burgin, 2005; 2006; 2010a). This brings us to the
following problems.
Problem 1. Use mathematical schema theory for modeling and exploration of mental activity of people.

Problem 2. Use mathematical schema theory for modeling and exploration of the brain.
To make the theory of knowledge really scientific, it is necessary to develop experimentation techniques in this area. This brings us to the following problems.
Problem 3. Create a theory of knowledge measurement and evaluation.

Problem 4. Elaborate a theory of properties of and relations between knowledge items.
In this book, we constructed various operations with knowledge
items. The next step is to add new operations and organize a unified
structure of all operations with knowledge items. This results in the
following problems.
Problem 5. Build algebraic systems (algebras) of knowledge.
Problem 6. Develop a mathematical theory of knowledge algebras (algebraic systems) and study their properties.
Contemporary logic is based on logical calculi, their semantics
and models. Logical varieties, quasivarieties and prevarieties studied
in this book (Chapter 3) are powerful generalizations of logical calculi
providing many additional possibilities. This naturally brings us to
the following problem.
Problem 7. Develop logic as a theoretical discipline based on logical varieties, quasi-varieties and prevarieties of all three types: syntactic, semantic, and model varieties, quasi-varieties and prevarieties.
Finally, some theories presented in this book, such as the mathe-
matical theory of schemas (Chapter 5), the theory of logical varieties,
quasi-varieties, and prevarieties (Chapter 3) or the theory of abstract
properties (Chapter 5), allow and need further development.
Appendix

The science of Pure Mathematics, in its modern developments, may claim to be the most original creation of the human spirit.
A. N. Whitehead
A. Set theoretical foundations

∅ is the empty set.
If X is a set, then r ∈ X means that r belongs to X or r is a
member of X.
If X and Y are sets, then Y ⊆ X means that Y is a subset of X,
i.e., Y is a set such that all elements of Y belong to X.
The union Y ∪X of two sets Y and X is the set that consists of
all elements from Y and from X.
The intersection Y ∩X of two sets Y and X is the set that consists
of all elements that belong both to Y and to X.
The union ∪i∈I Xi of sets Xi is the set that consists of all elements from all sets Xi, i ∈ I.

The intersection ∩i∈I Xi of sets Xi is the set that consists of all elements that belong to each set Xi, i ∈ I.

The difference Y\X of two sets Y and X is the set that consists of all elements that belong to Y but do not belong to X.
If X is a set, then 2X is the power set of X, which consists of all
subsets of X. The power set of X is also denoted by P(X).
If X and Y are sets, then X × Y = {(x, y); x ∈ X, y ∈ Y} is the direct or Cartesian product of X and Y; in other words, X × Y is the set of all pairs (x, y), in which x belongs to X and y belongs to Y.

YX is the set of all mappings from X into Y.

Xn = X × X × · · · × X × X (n times).
Elements of the set X n have the form (x1 , x2 , . . . , xn ) with all
xi ∈ X and are called n-tuples, or simply, tuples.
If X is a set, then |X| is the cardinality of set X. If a set X is
finite, then |X| is the number of elements in the set X.
A fundamental structure of mathematics is the function. However, functions are special kinds of binary relations between two sets.
A binary relation T between sets X and Y is a subset of the direct
product X × Y . The set X is called the domain of T (X = Dom(T ))
and Y is called the codomain of T (Y = CD(T )). The range of the
relation T is Rg(T ) = {y; ∃x ∈ X ((x, y) ∈ T )}. The domain of
definition of the relation T is DDom(T ) = {x; ∃y ∈ Y ((x, y) ∈ T )}. If
(x, y) ∈ T , then one says that the elements x and y are in relation T ,
and one also writes T (x, y).
Binary relations are also called multivalued functions (mappings
or maps).
If T is a binary relation between sets X and Y, then the binary relation T−1 = {(y, x) for all (x, y) ∈ T} is called the inverse of the relation T.
Taking two binary relations R ⊆ X × Y and Q ⊆ Y × Z, it is possible to build a new relation QR ⊆ X × Z, also denoted by Q ◦ R, that is called the composition or superposition of the relations R and Q and is defined by the following rule

QR = {(x, z) ∈ X × Z; ∃y ∈ Y ((x, y) ∈ R and (y, z) ∈ Q)}.
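A short Python illustration of the composition and the inverse of binary relations, representing a relation simply as a set of pairs (the helper names are ad hoc):

def inverse(T):
    """T^{-1} = {(y, x) for all (x, y) in T}."""
    return {(y, x) for (x, y) in T}

def compose(Q, R):
    """QR = {(x, z) : there is y with (x, y) in R and (y, z) in Q}."""
    return {(x, z) for (x, y1) in R for (y2, z) in Q if y1 == y2}

R = {(1, "a"), (2, "b")}               # a relation between X = {1, 2} and Y = {"a", "b"}
Q = {("a", "north"), ("b", "south")}   # a relation between Y and Z

print(compose(Q, R))   # {(1, 'north'), (2, 'south')}
print(inverse(R))      # {('a', 1), ('b', 2)}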
If T is a binary relation between sets X and Y, and R is a binary relation between sets Z and V, then the direct or Cartesian product T × R of the binary relations T and R is defined as

T × R = {((x, z), (y, v)); (x, y) ∈ T, (z, v) ∈ R}.
A family J of subsets of X is called a filter if it satisfies the following conditions:

If P ∈ J and P ⊆ Q, then Q ∈ J.
If P, Q ∈ J, then P ∩ Q ∈ J.

It is possible to read more on set theory, for example, in (Bourbaki, 1960; Kuratowski and Mostowski, 1967).

A preorder (also called quasiorder) on a set X is a binary relation Q on X that satisfies the following axioms:

1. Q is reflexive, i.e., xQx for all x from X.
2. Q is transitive, i.e., xQy and yQz imply xQz for all x, y, z ∈ X.

A partial order is a preorder that satisfies the following additional axiom:

3. Q is antisymmetric, i.e., xQy and yQx imply x = y for all x, y ∈ X.

A strict partial order is a preorder that is not reflexive, is transitive and satisfies the following additional axiom:

4. Q is asymmetric, i.e., only one of the relations xQy and yQx can be true for all x, y ∈ X.

An equivalence on a set X is a binary relation Q on X that is reflexive, transitive and satisfies the following additional axiom:

5. Q is symmetric, i.e., xQy implies yQx for all x and y from X.

A function (also called a mapping or map or total function or total mapping) f from X to Y is a binary relation between sets X and Y in which no element from X corresponds to more than one element from Y and every element from X corresponds to some element from Y. Often total functions are also called everywhere defined functions. Traditionally, the element f(a) is called the image of the element a and denotes the value of f on the element a from X. At the same time, the function f is also denoted by f : X → Y or by f(x). In the latter formula, x is a variable and not a concrete element from X.
A partial function (or partial mapping) f from X to Y is a binary relation between sets X and Y in which no element from X corresponds to more than one element from Y. Thus, any function is also a partial function. Sometimes, when the domain of a partial function is not specified, we call it simply a function because any partial function is a total function on its domain.

A multivalued function (or mapping) f from X to Y is any binary relation between sets X and Y.
f (x) ≡ a means that the function f (x) is equal to a at all points
where f (x) is defined.
Two important concepts of mathematics are the domain and
range of a function. However, there is some ambiguity for the first
of them. Namely, there are two distinct meanings in current math-
ematical usage for this concept. In the majority of mathematical
areas, including the calculus and analysis, the term “domain of f ”
is used for the set of all values x such that f (x) is defined. However,
some mathematicians (in particular, category theorists), consider the
domain of a function f : X → Y to be X, irrespective of whether
f (x) is defined for all x in X. To eliminate this ambiguity, we sug-
gest the following terminology consistent with the current practice
in mathematics.
If f is a function from X into Y , then the set X is called the
domain of f (it is denoted by Dom f ) and Y is called the codomain
of T (it is denoted by Codom f ). The range Rg f of the function f is
the set of all elements from Y assigned by f to, at least, one element
from X, or formally, Rg f = {y; ∃x ∈ X(f (x) = y)}. The domain of
definition DDom f of the function f is the set of all elements from X that are related by f to at least one element from Y, or formally, DDom f = {x; ∃y ∈ Y (f(x) = y)}. Thus, for a partial function f(x),
its domain of definition DDom f is the set of all elements for which
f (x) is defined.
Taking two mappings (functions) f : X → Y and g : Y → Z, it
is possible to build a new mapping (function) gf : X → Z that is
called composition or superposition of mappings (functions) f and g
and defined by the rule gf(x) = g(f (x)) for all x from X.
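
A minimal illustrative sketch of composition:

```python
def compose(g, f):
    """Return the composition g∘f defined by (g∘f)(x) = g(f(x))."""
    return lambda x: g(f(x))

f = lambda x: x + 1        # f: X -> Y
g = lambda y: y * y        # g: Y -> Z
gf = compose(g, f)         # gf: X -> Z
print(gf(3))               # g(f(3)) = (3 + 1)**2 = 16
```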
For any set S, χS (x) is its characteristic function, also called set
indicator function, if χS (x) is equal to 1 when x ∈ S and is equal
to 0 when x ∈ / S, and CS (x) is its partial characteristic function if
CS (x) is equal to 1 when x ∈ S and is undefined when x ∈ / S.
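
The two kinds of characteristic functions can be sketched as follows (Python; modeling undefinedness by raising an exception is just one possible convention):

```python
S = {2, 3, 5, 7}

def chi_S(x):
    """Total characteristic function of S: 1 on S, 0 outside S."""
    return 1 if x in S else 0

def C_S(x):
    """Partial characteristic function of S: 1 on S, undefined outside S."""
    if x in S:
        return 1
    raise ValueError("C_S is undefined at " + repr(x))

print(chi_S(3), chi_S(4))   # 1 0
print(C_S(5))               # 1
# C_S(4) would raise an error, modeling undefinedness.
```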
If f : X → Y is a function and Z ⊆ X, then the restriction
f|Z of f on Z is the function defined only for elements from Z and
f|Z (z) = f (z) for each element z from Z.
If U is a correspondence of a set X to a set Y (a binary relation
between X and Y ), i.e., U ⊆ X ×Y , then U (x) = {y ∈ Y ; (x, y) ∈ U }
and U −1 (y) = {x ∈ X; (x, y) ∈ U }.
An n-ary relation R in a set X is a subset of the nth power of X,
i.e., R ⊆ X n . If (a1 , a2 , . . . , an ) ∈ R, then one says that the elements
a1 , a2 , . . . , an from X are in relation R.
Let X be a set. An integral operation W on the set X is a mapping
that assigns to a given subset of X an element from X and satisfies
W ({x}) = x for any x ∈ X.
Examples of integral operations are: sums, products, taking min-
imum, taking maximum, taking infimum, taking supremum, integra-
tion, taking the first element from a given subset, taking the sum of
the first and second elements from a given subset, and so on.
Examples of finite integral operations defined for numbers are:
sums, products, taking minimum, taking maximum, taking average,
weighted average, taking the first element from a given subset, and
so on.
As a rule, integral operations are partial, that is, they assign val-
ues, e.g., numbers, only to some subsets of X.
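
For illustration, here is a minimal sketch of one sample integral operation, taking the maximum, which is partial because it is undefined on the empty set (Python; the name W_max is chosen for this example only):

```python
def W_max(subset):
    """A sample integral operation on numbers: taking the maximum.

    It is partial: it is undefined (here: raises an error) on the empty set,
    and for a singleton {x} it returns x, as required of an integral operation.
    """
    if not subset:
        raise ValueError("W is undefined on the empty set")
    return max(subset)

print(W_max({4}))          # 4  -- W({x}) = x
print(W_max({1, 5, 3}))    # 5
```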

Proposition A.1. Any binary operation in X generates a finite


ordinal integral operation on X.

It is possible to read more about integral operations and their


applications in (Burgin and Karasik, 1976; Burgin, 2004a).
Set theory is rightly considered the foundation of the major part of
contemporary mathematics. For a long time, the word set was used
in mathematics as an informal notion. Only at the end of the 19th
century and at the beginning of the 20th century was this notion
formalized in the process of the development of set theory. In turn,
set theory provided rigorous foundations for the whole of mathematics.
However, applications of mathematical structures to real-life phe-
nomena demonstrated limitations of sets. As a result, various gen-
eralizations of sets have been suggested. The most popular of these
generalizations are fuzzy sets and multisets.
A multiset is similar to a set, but can contain indiscernible ele-
ments or different copies of the same elements. It is possible to read
more about multisets in (Aigner, 1979; Knuth, 1997).
A fuzzy set A in a set U is the triad (U, µA , [0, 1]), where [0, 1] is
an interval of real numbers, µA : U → [0, 1] is a membership function
of A, and µA (x) is the degree of membership in A of x ∈ U . It is
possible to read more about fuzzy sets, for example, in (Klir and
Folger, 1988; Zimmermann, 2001).
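
As a small illustration, the following sketch defines a hypothetical fuzzy set of real numbers "close to 10" through its membership function (Python):

```python
# A fuzzy set A in U = real numbers, given by its membership function mu_A.
# The particular function below is invented for this illustration.

def mu_A(x):
    """Degree of membership in A of x: 1 at x = 10, decreasing to 0 with distance."""
    return max(0.0, 1.0 - abs(x - 10) / 5.0)

for x in (10, 12, 20):
    print(x, mu_A(x))   # 10 -> 1.0, 12 -> 0.6, 20 -> 0.0
```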
Named sets as the most encompassing and fundamental mathe-
matical construction encompass all generalizations of ordinary sets
and provide unified foundations for the whole mathematics (Burgin,
2011).
A named set (also called a fundamental triad) has the following
graphic representation (Burgin, 1990; 1991; 1997; 2011):
Entity 1 −−(connection)−→ Entity 2,            (1)

Essence 1 −−(correspondence)−→ Essence 2.      (2)

In the fundamental triad (named set) (1) or (2), Entity 1


(Essence 1) is called the support, the Entity 2 (Essence 2) is called
the reflector (also called the set or component of names) and the con-
nection (correspondence) between Entity 1 (Essence 1) and Entity 2
(Essence 2) is called the reflection (also called the naming correspon-
dence) of the fundamental triad (1) (respectively, (2)).
In the symbolic form, a named set (fundamental triad) X is a triad
(X,f ,I) where X is the support of X and is denoted by S(X), I is the
component of names (also called set of names or reflector) of X and
is denoted by N(X), and f is the naming correspondence (also called
reflection) of the named set X and is denoted by n(X). The most
popular type of named sets is a named set X = (X, f, I) in which X


and I are sets and f consists of connections between their elements.
When these connections are set theoretical, i.e., each connection is
represented by a pair (x, a) where x is an element from X and a
is its name from I, we have a set theoretical named set, which is a
binary relation. Even before the concept of a fundamental triad was
introduced, Bourbaki in their fundamental monograph (1960) had
also represented binary relations in a form of a triad (named set).
Using the term triad, it is necessary to distinguish it from the
notion of a triplet. A triad is a system that consists of three parts
(elements or components), while a triplet is any three objects. Thus,
any triad is a triplet, but not any triplet is a triad. In a triad, there
are ties and/or relations between all three parts (objects from the
triad), while for a triplet, this is not necessary.
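
A set theoretical named set can be modeled directly as a triple; the following minimal sketch (Python, with invented data) shows the support, the naming correspondence and the set of names:

```python
# A set theoretical named set X = (X, f, I):
# support X, naming correspondence f (a set of pairs), and set of names I.
support = {"Alice", "Bob", "Eve"}
names = {"manager", "engineer"}
naming = {("Alice", "manager"), ("Bob", "engineer"), ("Eve", "engineer")}

named_set = (support, naming, names)   # the triad (X, f, I)

S = named_set[0]   # S(X), the support
n = named_set[1]   # n(X), the naming correspondence (a binary relation)
N = named_set[2]   # N(X), the component of names
print(sorted(name for (person, name) in n if person == "Eve"))   # ['engineer']
```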
There are many named sets that are not set theoretical. For
instance, an algorithmic named set A = (X, A, Y ) consists of an
algorithm A, the set X of inputs and the set Y of outputs.

Figure A1. A set theoretical named set X = (X, f, I)

Let us
take as X and Y the set of all words in some alphabet Q and all Tur-
ing machines that work with words in the alphabet Q as algorithms.
Then the theory of algorithms tells us that there are many more algorith-
mic named sets than relations (set theoretical named sets) because
several different algorithms (e.g., Turing machines) can define the
same function or relation. Thus, algorithmic named sets are different
from set theoretical named sets.
Mereological named sets are essentially different from set theo-
retical named sets (Leśniewski, 1916; 1992; Leonard and Goodman,
1940; Burgin, 2011). Categorical named sets (fundamental triads)
also are different from set theoretical named sets. For instance, an
arrow in a category is a fundamental triad but does not include sets
as components (Herrlich and Strecker, 1973). Named sets with phys-
ical components, such as (a woman and her name), (an article and
its title), (a book and its title) and many others, are far from being
set theoretical.
People meet fundamental triads (named sets) constantly in their
everyday life. People and their names constitute a named set. Cars
and their owners constitute another named set. Books and their
authors constitute one more named set. A different example of a
named set (fundamental triad) is given by the traditional scheme of
communication:
Sender −−−−−−→ Receiver.

In this case, connection may be one of the following: a channel,


communication media, or a message.
People can even see some fundamental triads (named sets). Here
are some examples.
When it is raining, we see the fundamental triad that consists of a
cloud(s) (Entity 1), the Earth where we stand (Entity 2) and flows of
water (the correspondence). When we see lightning, we see another
fundamental triad that consists of a cloud(s) (Entity 1), the Earth
where we stand (Entity 2) and the lightning (the correspondence).
There are many fundamental triads in which Entity 1 is some
set, Entity 2 consists of the names of the elements from the Entity 1
and elements are connected with their names by the naming relation.
This explains the name “named set” that has been applied to this
structure (Burgin, 1990; 1991; 2011). A standard model of a named
set is a set of people who constitute the carrier, their names that
form the set of names, and the naming relation consists of the cor-
respondence between people and their names.
Many mathematical systems are particular cases of named sets or
fundamental triads. The most important of such systems are fuzzy
sets (Zadeh, 1965; 1973; Zimmermann, 1991), multisets (Knuth,
1997), graphs and hypergraphs (Berge, 1973), topological and fiber
bundles (Husemoller, 1994; Herrlich and Strecker, 1973). Moreover,
any ordinary set is, as a matter of fact, some named set, and namely,
a singlenamed set, i.e., such a named set in which all elements have
the same name (Burgin, 2011). It is interesting that utilization of
singlenamed sets instead of ordinary sets allows one to solve some
problems in science, e.g., Gregg’s paradox in biology (Ruse, 1973;
Burgin, 1983).
It is possible to find many named sets in physics. For instance,
according to particle physics, any particle has a corresponding
antiparticle, e.g., electron corresponds to positron, while proton cor-
responds to antiproton. Thus, we have a named set with particles as
its support and antiparticles as its set of names. A particle and its
antiparticle have identical mass and spin, but have opposite value
for all other non-zero quantum number labels. These labels are elec-
tric charge, color charge, flavor, electron number, muon number, tau
number, and baryon number. Particles and their quantum number
labels form another named set with particles as its support and quan-
tum number labels as its set of names.
When we study information and information processes, fun-
damental triads become extremely important. Each direction in
information theory has fundamental concepts that are and mod-
els of which are either fundamental triads or systems built of
fundamental triads. Indeed, the relation between information and
the receiver/recipient introduced in the Ontological Principle O1
is a fundamental triad, while the Ontological Principles O4 and
O4a introduce the interaction and second communication triads
(Burgin, 2010). The first communication triad is basic in statistical
information theory (Chapter 3). A special case of named sets is given
by Chu spaces, which are used in information studies, were introduced
by Barr (1979) and were theoretically developed by Chu.
A Chu space C over a set B is a triad (A, r, X) with r : A × X →
B. The set A is called the carrier and X the cocarrier of the Chu
space C. Matrices are often used to represent Chu spaces. Note that
in the form (A, r, X), Chu space is a triad but not a fundamental
triad. At the same time, in the more complete form (A × X, r, B),
Chu space is a particular case of fundamental triads.
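
A minimal sketch of the matrix representation of a Chu space over B = {0, 1} (Python; the carrier and cocarrier below are invented for illustration):

```python
# A Chu space C = (A, r, X) over B = {0, 1}, represented by a matrix:
# rows are indexed by the carrier A, columns by the cocarrier X,
# and the entry in row a, column x is r(a, x).
A = ["a1", "a2", "a3"]        # carrier
X = ["x1", "x2"]              # cocarrier
matrix = [[0, 1],
          [1, 1],
          [1, 0]]

def r(a, x):
    """The function r : A × X → B read off from the matrix."""
    return matrix[A.index(a)][X.index(x)]

print(r("a2", "x1"))   # 1
```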
One more example of named sets is given by operands from the
multidimensional structured model of computer systems and compu-
tations (Burgin and Karasik, 1976; Burgin, 1976; 1982).
An operand Q is a triad (Z, r, C) where Z = Z1 × Z2 × · · · × Zn ,
each Zi is a subset of the set Z of all integer numbers, r : Z × Z → C
and C is a set. The set Z is called the support of the operand Q.
Taking an operand with n = 2 and arbitrary sets Z1 and Z2 , we
get the concept of a valued relation (Dukhovny and Ovchinnikov,
2000; Frascella and Guido, 2008).
An operator A in the multidimensional structured model of com-
puter systems and computations is a mapping
A : Ql → Qh ,
where Qk is the set of all k-tuples (Q1 , Q2 , . . . , Qk ) and all Qi are
operands (i = 1, 2, 3, . . . , k; k ∈ {l, h}).
Many constructions, such as valued relations, fuzzy relations
(Salii, 1965), classifications in the sense of (Barwise and Seligman,
1997), and Chu spaces, are particular cases of such operands.
Named sets are explicitly used in many areas: in models of com-
puters and computation (Burgin and Karasik, 1976), artificial intel-
ligence (Burgin and Gladun, 1989; Burgin and Gorsky, 1991; Bur-
gin and Kuznetsov, 1992), mathematical linguistics (Burgin and
Burgina, 1982), software engineering (Browne et al., 1995) and
Internet technology (Balakrishnan et al., 2004; Cunnigham, 2004).
Set-theoretical named sets are very popular in databases and knowl-
edge engineering due to the fact that both binary relations and hier-
archical structures are specific kinds of named sets. Even images of
data and knowledge structures and operations in such structures are


presented as named sets (cf., for example, (Elmasri and Navathe,
2000)). The reason for such popularity of named sets in these areas
is the fact that it is possible to represent any data structure by named
sets and their chains (Burgin, 1997; Burgin and Zellweger, 2005; Bur-
gin, 2008a).
There are different formal mathematical definitions of named
sets/fundamental triads: in categories, in set theory and by axioms
(Burgin, 2011). Axiomatic representation of named sets shows that
named set theory, as a formalized mathematical theory, is indepen-
dent from set theory and category theory. When category theory is
built independently from set theory, then categorical representations
of named sets are also independent from set theory. It is also nec-
essary to emphasize that physical fundamental triads (named sets),
i.e., fundamental triads that are structures of physical objects, are
independent from set theory.
An important class of named sets is formed by abstract properties.
An abstract property P of objects from the universe U is a named
set P = (U, p, L) where L is a partially ordered set called the scale
Sc(P ) of the abstract property P and p : U → L is a partial function
(L-predicate) called the evaluation function Ev(P ) of P .
Abstract properties are used as mathematical models of real prop-
erties.
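
As an illustration, the following sketch models a hypothetical abstract property with a three-element scale and a partial evaluation function (Python; returning None is just one way to represent undefinedness):

```python
# An abstract property P = (U, p, L): a hypothetical property "maturity"
# with a linearly (hence partially) ordered scale L and a partial
# evaluation function p defined only on some objects of the universe U.
U = {"seed", "sprout", "tree", "rock"}
L = ["low", "medium", "high"]          # the scale Sc(P), ordered by list position
p = {"seed": "low", "sprout": "medium", "tree": "high"}   # Ev(P); undefined on "rock"

def evaluate(obj):
    """Partial evaluation function Ev(P): returns None where p is undefined."""
    return p.get(obj)

print(evaluate("sprout"))   # medium
print(evaluate("rock"))     # None (the property is not evaluated on this object)
```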
It is possible to read more about named sets in (Burgin, 1990;
1991; 1992a; 1997; 2011).

B. Elements of the theory of algorithms

The theory of abstract algorithms is the most abstract part of the theory
of algorithms, computation and automata. Abstract algorithms and
automata work with symbolic data, usually in the form of words.
An alphabet is a set of symbols, e.g., A = {1, 0} or B = {a, b, c}.
A string is a sequence of alphabet symbols, e.g., 101000.
A word is a string that belongs to some language.
A formal language is a set of words in a fixed alphabet.
The most general structure in the theory of abstract automata is


state transition machine.
A state transition machine (STM), also called a state transition
system, A consists of three structures and can be represented as
A = (L, S, δ):

— The linguistic structure L = (AI , Q, AO ) where AI is an alphabet


of input symbols, Q is a set of states, and AO is a set of output
symbols of the STM A;
— The state structure S = (Q, q0 , F ) where q0 is an element from Q
called the initial or start state and F is a subset of Q called the
set of final states of the STM A;
— The action structure δ traditionally called the transition function,
or more exactly, transition relation of the STM A; the transition
relation δ determines how input and the current state determine
the next state and output, i.e.,
δ : AI × Q → Q × AO

δ is, in general, a multivalued function. It can be represented by two


relations (functions):
The state transition relation (function):
δtr : AI × Q → Q
and the output relation (function):
δout : AI × Q → AO .
Examples of state transition machines are finite automata, pushdown
automata, Turing machines, inductive Turing machines, limit Turing
machines, Petri nets, neural networks, and cellular automata (cf.,
(Burgin, 2005)).
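
For illustration, here is a minimal sketch of a deterministic state transition machine, namely a finite automaton that accepts binary words with an even number of 1s (Python; only the state transition relation δtr is modeled, and the output is reduced to acceptance):

```python
# A minimal deterministic state transition machine (a finite automaton).
# Linguistic structure: input alphabet {"0", "1"}.
# State structure: states {"even", "odd"}, start state "even", final states {"even"}.
# Action structure: the state transition function delta_tr.
delta_tr = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}
start, final = "even", {"even"}

def accepts(word):
    state = start
    for symbol in word:
        state = delta_tr[(state, symbol)]
    return state in final

print(accepts("1010"))   # True: two 1s
print(accepts("111"))    # False: three 1s
```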
The structure of an inductive Turing machine or a Turing
machine, as an abstract automaton, consists of three components
called hardware, software, and infware. Infware is a description and
specification of information that is processed by an (inductive)
Turing machine. Computer infware consists of data processed by
the computer. Inductive Turing machines and conventional Turing
machines are abstract automata working with the same symbolic
information in the form of words. Consequently, formal languages


with which (inductive) Turing machines work constitute their
infware.
Computer hardware consists of all devices (the processor, system
of memory, display, keyboard, etc.) that constitute the computer.
In a similar way, an inductive Turing machine (a Turing machine)
M has three abstract devices: a control device A, which is a finite
automaton and controls performance of M ; a processor or operating
device H, which corresponds to one or several heads of a conventional
Turing machine; and the memory E, which corresponds to the tape
or tapes of a conventional Turing machine. The memory E of the
simplest inductive Turing machine consists of three linear tapes, and
the operating device consists of three heads, each of which is the same
as the head of a Turing machine and works with the corresponding
tape.
The control device A is a finite automaton that regulates: the state
of the whole machine M , the processing of information by H, and
the storage of information in the memory E.
The memory E is divided into different but, as a rule, uniform
cells. It is structured by a system of relations that organize memory
as a well-structured system and provide connections or ties between
cells. In particular, input registers, the working memory, and output
registers of the inductive Turing machine M are separated. Connec-
tions between cells form an additional structure K of E. Each cell can
contain a symbol from an alphabet of the languages of the machine
M or it can be empty.
In a general case, cells may be of different types. Different types
of cells may be used for storing different kinds of data. For example,
binary cells, which have type B, store bits of information represented
by symbols 1 and 0. Byte cells (type BT) store information repre-
sented by strings of eight binary digits. Symbol cells (type SB) store
symbols of the alphabet(s) of the machine M . Cells in conventional
Turing machines have SB type. Natural number cells, which have
type NN, are used in random access machines. Cells in the mem-
ory of quantum computers (type QB) store q-bits or quantum bits
(Deutsch, 1985). Cells of the tape(s) of real-number Turing machines
(Burgin, 2005) have type RN and store real numbers. When dif-
ferent kinds of devices are combined into one, this new device has
several types of memory cells. In addition, different types of cells
facilitate modeling the brain neuron structure by inductive Turing
machines.
It is possible to realize an arbitrary structured memory of an
inductive Turing machine M , using only one linear one-sided tape L.
To do this, the cells of L are enumerated in the natural order from
the first one to infinity. Then L is decomposed into three parts
according to the input and output registers and the working memory
of M . After this, nonlinear connections between cells are installed.
When an inductive Turing machine with this memory works, the
head/processor is not moving only to the right or to the left cell
from a given cell, but uses the installed nonlinear connections.
Such realization of the structured memory allows us to consider an
inductive Turing machine with a structured memory as an inductive
Turing machine with conventional tapes in which additional connec-
tions are established. This approach has many advantages. One of
them is that inductive Turing machines with a structured memory
can be treated as multitape automata that have additional struc-
ture on their tapes. Then it is conceivable to study different ways to
construct this structure. In addition, this representation of memory
allows us to consider any configuration in the structured memory E
as a word written on this unstructured tape.
If we look at other devices of the inductive Turing machine M ,
we can see that the processor H performs information processing
in M . However, in comparison to computers, this operational device
performs very simple operations. When H consists of one unit, it can
change a symbol in the cell that is observed by H, and go from this
cell to another using a connection from K. This is exactly what the
head of a Turing machine does.
It is possible that the processor H consists of several processing
units similar to heads of a multihead Turing machine. This allows
one to model in a natural way various real and abstract computing
systems by inductive Turing machines. Examples of such systems
are: multiprocessor computers; Turing machines with several tapes;
networks, grids and clusters of computers; cellular automata; neural


networks; and systolic arrays.
We know that programs constitute computer software and tell
the system what to do (and what not to do). The software R of the
inductive Turing machine (Turing machine) M is also a program in
the form of simple rules:
qh ai → aj qk , (B.1)
qh ai → cq k , (B.2)
qh ai → aj qk c. (B.3)
Here qh and qk are states of A, ai and aj are symbols of the
alphabet of M , and c is a type of connection in the memory E.
Each rule directs one step of computation of the inductive Turing
machine M . The rule (B.1) means that if the state of the control device
A of M is qh and the processor H observes in the cell the symbol
ai , then the state of A becomes qk and the processor H writes the
symbol aj in the cell where it is situated. The rule (B.2) means that
the processor H then moves to the next cell by a connection of the
type c. In Turing machines with linear tapes, there are only two
types of connections: R is the connection to the right neighbor and
L is the connection to the left neighbor of the cell. The rule (B.3) is a
combination of rules (B.1) and (B.2).
Like Turing machines, inductive Turing machines can be deter-
ministic and non-deterministic. For a deterministic inductive Turing
machine, there is at most one connection of any type from any cell.
In a non-deterministic inductive Turing machine, several connections
of the same type may go from some cells, connecting them with (dif-
ferent) other cells. If there is no connection of the type prescribed by
an instruction that goes from the cell observed by H, then H
stays in the same cell. There may be connections of a cell with itself.
Then H also stays in the same cell. It is possible that H observes an
empty cell. To represent this situation, we use the symbol Λ. Thus,
it is possible that some elements ai and/or aj in the rules from R
are equal to Λ in the rules of all types. Such rules describe situations
when H observes an empty cell and/or when H simply erases the
symbol from some cell, writing nothing in it.
The rules of the type (B.3) allow an inductive Turing machine to


rewrite a symbol in a cell and to make a move in one step. Other
rules (B.1) and (B.2) separate these operations. Rules of the inductive
Turing machine M define the transition function of M and describe
changes of A, H, and E. Consequently, these rules also determine the
transition functions of A, H, and E.
A general step of the machine M has the following form. At the
beginning of any step, the processor H observes some cell with a
symbol ai (for an empty cell the symbol is Λ) and the control device
A is in some state qh .
Then the control device A (and/or the processor H) chooses from
the system R of rules a rule r with the left part equal to qh ai and
performs the operation prescribed by this rule. If there is no rule
in R with such a left part, the machine M stops functioning. If
there are several rules with the same left part, M works as a non-
deterministic Turing machine, performing all possible operations.
When A comes to one of the final states from F , the machine M also
stops functioning. In all other cases, it continues operation without
stopping.
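
The following highly simplified sketch shows how rules of the form (B.3) could drive such steps on an ordinary linear tape with only the connections R and L (Python; this is an illustration of the rule format, not the full machinery of inductive Turing machines):

```python
# One step driven by rules of form (B.3):
# (state, symbol) -> (new symbol, new state, connection), where the only
# connections are 'R' (right neighbor) and 'L' (left neighbor) of a cell.
rules = {
    ("q0", "1"): ("0", "q0", "R"),   # rewrite 1 by 0 and move right
    ("q0", "0"): ("0", "q1", "R"),
}

def step(tape, head, state):
    symbol = tape[head] if 0 <= head < len(tape) else "Λ"   # Λ marks an empty cell
    if (state, symbol) not in rules:
        return None                  # no applicable rule: the machine stops
    new_symbol, new_state, connection = rules[(state, symbol)]
    tape = tape[:head] + [new_symbol] + tape[head + 1:]
    head = head + 1 if connection == "R" else head - 1
    return tape, head, new_state

config = (["1", "1", "0"], 0, "q0")
while config is not None:
    print(config)
    config = step(*config)
```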
For an abstract automaton, as well as for a computer, three things
are important: how it receives data, processes data and obtains its
results. In contrast to Turing machines, inductive Turing machines
obtain results even in the case when their operation is not terminated.
This results in an essential increase of the performance abilities of systems
of algorithms.
The computational result of the inductive Turing machine M is
the word that is written in the output register of M : when M halts
while its control device A is in some final state from F , or when M
never stops but at some step of computation the content of the output
register becomes fixed and does not change although the machine M
continues to function. In all other cases, M gives no result.
The memory E is called recursive if all relations that define its
structure are recursive.
Here recursive means that there are some Turing machines that
decide/build all naming mappings and relations in the structured
memory.
Inductive Turing machines with recursive memory are called


inductive Turing machines of the first order.
The memory E is called n-inductive if all relations that define
its structure are constructed by an inductive Turing machine of
order n.
Inductive Turing machines with n-inductive memory are called
inductive Turing machines of the order n + 1.
Limit Turing machines have the same structure (hardware) as
inductive Turing machines. The difference is in a more general way
of obtaining the result of computation. To obtain their result, limit
Turing machines need some topology in the set of all words that are
processed by these machines.
Let a limit Turing machine L work with words in an alphabet A
and in the set A* of all such words, a topology T is defined. While
the machine L works, it produces words w1 , w2 , . . . , wn , . . . in the
output tape (output memory). Then the result of computation of
the limit Turing machine L is the limit of this sequence of words in
the topology T .
When the set A* has the discrete topology, limit Turing machines
coincide with inductive Turing machines.
Turing machines are special cases of inductive Turing machines.
The difference is in the way of obtaining the result of
computation: a Turing machine gives its result
if and only if its control device A comes to a final state from F .
Besides, the memory of a Turing machine consists of one (or several)
potentially infinite one-dimensional (or multidimensional) tapes.
It is possible to read more on algorithms in (Sipser, 1997; Hopcroft
et al., 2007; Burgin, 2005).

C. Elements of algebra and category theory

An algebraic system is a structure A = (X, Ω, R) that consists of: a


non-empty set X called the carrier or the underlying set of A and
elements of which are called the elements of A; a family Ω of algebraic
operations, which are mappings ωi : X ni → X(i ∈ I); and a family
R of relations rj ⊆ X mj (j ∈ J) defined on X.
The non-negative integers ni and mj are called the arities of the


respective operations ωi and relations rj . When ω is an operation
from Ω with arity n, then the image ω(a1 , a2 , . . . , an ) of the element
(a1 , a2 , . . . , an ) from X n under the mapping ω : X n → X is called
the value of the operation ω for elements a1 , a2 , . . . , an .
In a general case, some operations from Ω may be partial map-
pings. For instance, division in such a universal algebra (algebraic
system) as a field is not defined for 0, i.e., it is impossible to divide
by 0.
The operations from Ω and the relations from R are called basic
or primitive. Using basic operations, it is possible to build many
different derivative operations in the algebra A. The pair of families
({ni ; i ∈ I}; {mj ; j ∈ J}) is called the type of the algebraic system A.
Two algebraic systems A and A′ have the same type if I = I ′ , J = J ′ ,
and ni = n′i and mj = m′j for all i ∈ I, j ∈ J.
An algebraic system A is called finite if the set X is finite and it
is called of finite type if the set I ∪ J is finite.
An algebraic system A = (X, Ω, R) is called a universal algebra
or simply, an algebra if the set R of its basic relations is empty.
An algebraic system A = (X, Ω, R) is called a model (in logic) or
a relational system if the set Ω of basic operations is empty.
A heterogeneous or many-sorted or multibase universal algebra
A is a set A with a system of operations Σ where A is the union
∪{Ai ; i ∈ I} of the indexed system (family) {Ai ; i ∈ I} of sets and
each operation is a mapping of the form f : Ai1 ×Ai2 ×· · ·×Aik → Ai .
The system A is called the carrier or support of the multibase algebra
A and the system I is called the collection of sorts of the multibase
algebra A.
For instance, a deterministic finite automaton A with the input
alphabet Σ, the output alphabet Ω and the set of states Q is a
many-sorted algebra, which has the support {Σ, Q, Ω}, two binary
operations δ : Σ × Q → Q, and σ : Σ × Q → Ω, and several unary
operations σ0 , σ1 , . . . , σk on the set Q : σ0 = q0 , σ1 = q1 , . . . , σk = qk
with q1 , . . . , qk ∈ F .
Classical algebraic systems are groups, rings, linear (vector)
spaces, linear algebras, lattices, ordered sets, ordered groups, etc.
Groups, rings, modules, fields, linear spaces, linear algebras, lattices,


and semigroups are classical universal algebras.
For instance, a linear space or a vector space L over the field F,
e.g., the field R of real numbers, elements of which are often called
vectors, has two operations:
addition: L × L → L denoted by x + y where x and y belong to L;
scalar multiplication: F × L → L denoted by ax where a ∈ F and
x ∈ L.
These operations satisfy the following axioms:
Addition is associative:
For all x, y, z from L, we have x + (y + z) = (x + y) + z.
Addition is commutative:
For all x, y from L, we have x + y = y + x.
Addition has an identity element:

There exists an element 0 from L, called the zero vector,


such that x + 0 = x for all x from L.

Addition has an inverse element:


For any x from L, there exists an element z from L,
called the additive inverse of x, such that x + z = 0.
Scalar multiplication is distributive over addition in L:
For all elements a from F and vectors y, w from L, we have
a (y + w) = a y + a w.
Scalar multiplication is distributive over addition in F :
For all elements a, b from F and any vector y from L, we
have
(a + b)y = ay + by.
Scalar multiplication is compatible with multiplication in F :
For all elements a, b from F and any vector y from L, we have


a(by) = (ab)y.
The identity element 1 from the field F also is an identity element
for scalar multiplication:
For all vectors x from L, we have 1x = x.
Vectors x1 , x2 , . . . , xn from L are called linearly dependent in L
if there is an equality a1 x1 + a2 x2 + · · · + an xn = 0 where ai are elements from R
(from F , in a general case) and not all of them are equal to 0. When
there is no such equality, vectors x1 , x2 , . . . , xn are called linearly
independent.
A system B of linearly independent vectors from L is called a basis
of L if any element x from L is equal to a sum a1 x1 + a2 x2 + · · · + an xn where n is
some natural number, xi are elements from B and ai are elements
from R (from F , in a general case).
The number of elements in a basis is called the dimension of the
space L. It is proved that all bases of the same space have the same
number of elements.
The space R is a one-dimensional vector (linear) space over itself.
The space Rn is an n-dimensional vector (linear) space over R.
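
Linear (in)dependence is easy to test computationally: vectors are linearly independent exactly when the matrix formed from them has rank equal to the number of vectors. A minimal sketch using NumPy:

```python
import numpy as np

# Vectors in R^3, written in the standard basis.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([1.0, 1.0, 0.0])   # v3 = v1 + v2, so {v1, v2, v3} is dependent

def linearly_independent(*vectors):
    matrix = np.vstack(vectors)
    return np.linalg.matrix_rank(matrix) == len(vectors)

print(linearly_independent(v1, v2))       # True
print(linearly_independent(v1, v2, v3))   # False
```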
An n-ary operation of a universal algebra A is called commutative
if for any permutation i1 , i2 , i3 , . . . , in of numbers 1, 2, 3, . . . , n, we
have
ω(x1 , x2 , x3 , . . . , xn ) = ω(xi1 , xi2 , xi3 , . . . , xin )
for any elements x1 , x2 , x3 , . . . , xn from A.
A subsystem of a universal algebra A closed with respect to basic
operations is called a sub-algebra of A. A subsystem of a model is
called a sub-model. The concept of a sub-algebra essentially depends
on the set of operations of the algebra under consideration. In con-
trast to this, any non-empty subset of a model is a sub-model.
A set V of universal algebras of a type Ω is called a variety if there
is a system of identities such that V consists of all universal algebras
of a type Ω that satisfy these identities. A variety of universal alge-
bras may be characterized as a non-empty class of algebras closed
under taking quotient algebras, sub-algebras and direct products.
It is possible to find other structures from algebra and their prop-


erties, for example, in (Kurosh, 1963; Van der Varden, 1971).
There are two approaches to the mathematical structure called
a category. One approach treats categories in the framework of the
general set-theoretical mathematics. Another approach establishes
categories independently of sets and uses them as a foundation of
mathematics different from set theory. It is possible to build the
whole mathematics in the framework of categories. For instance, such
a basic concept as a binary relation is frequently studied in categories.
Toposes allow one to reconstruct set theory as a subtheory of category
theory (cf., for example, (Goldblatt, 1979)). According to the first
approach, we have the following definition of a category.
A category C consists of two collections Ob C, the objects of
C, and Mor C, the morphisms of C that satisfy the following three
axioms:
A1. For every pair A, B of objects, there is a set MorC (A, B), also
denoted by HC (A, B) or HomC (A, B), elements of which are
called morphisms from A to B in C. When f is a morphism
from A to B, it is denoted by f : A → B.
A2. For every three objects A, B and C from Ob C, there is a
binary partial operation, which is a partial function from pairs
of morphisms that belong to the direct product MorC (A, B) ×
MorC (B, C) to morphisms in MorC (A, C). In other words, when
f : A → B and g : B → C, there is a morphism g ◦ f : A → C
called the composition of morphisms g and f in C. This com-
position is associative, that is, if f : A → B, g : B → C and
h : C → D, then h ◦ (g ◦ f ) = (h ◦ g) ◦ f .
A3. For every object A, there is a morphism 1A in MorC (A, A),
called the identity on A, for which if f : A → B, then 1B ◦f = f
and f ◦ 1A = f .
Examples of categories:
The category of sets SET: objects are arbitrary sets and morphisms
are mappings of these sets.
The category of groups GRP: objects are arbitrary groups and
morphisms are homomorphisms of these groups.
The category of topological spaces TOP: objects are arbitrary


topological spaces and morphisms are continuous mappings of these
topological spaces.
Mapping of categories that preserve their structure are called
functors. There are functors of two types: covariant functors and
contravariant functors.
A covariant functor F : C → K, also called a functor, from a
category C to a category K is a mapping that is stratified into two
related mappings FObC : Ob C → Ob K and FMor C : Mor C →
Mor K, i.e., FObC associates an object F (A) from the category K to
each object A from the category C and FMorC associates a morphism
F (f ) : F (A) → F (B) from the category K to each morphism f :
A → B from the category C. In addition, F satisfies the following
two conditions:
F (1A ) = 1F (A) for every object A from the category C;
F (f ◦ g) = F (f ) ◦ F (g) for all morphisms f and g from the category
C when their composition f ◦ g exists.
That is, functors preserve identity morphisms and composition of
morphisms.
A contravariant functor F : C → K from a category C to a
category K consists of two mappings FObC : Ob C → Ob K and
FMorC : Mor C → Mor K, i.e., FObC associates an object F (A) from
the category K to each object A from the category C and FMorC
associates a morphism F (f ) : F (A) → F (B) from the category K
to each morphism f : A → B from the category C, that satisfy the
following two conditions:
F (1A ) = 1F (A) for every object A from the category C;
F (f ◦ g) = F (g) ◦ F (f ) for all morphisms f and g from the category
C when their composition f ◦ g exists.
It is possible to define a contravariant functor as a covariant func-
tor on the dual category Cop .
A functor from a category to itself is called an endofunctor.
There is an approach to the definition of a category in which a
category C consists only of the collection Mor C of the morphisms
(also called arrows) of C with corresponding axioms. Objects of C


are associated with identity morphisms 1A . It is possible to do this
because 1A is unique in each set MorC (A, A), and uniquely identi-
fies the object A. In any case, morphism is the central concept in a
category. But what is a morphism?
If f : A → B is a morphism, then it is a one-to-one named set
({A}, f, {B}). Thus, the main object of a category is a named set
(fundamental triad), and categories are built from these named sets
(fundamental triad). Besides, a construction or separation of a cat-
egory begins with separation of all elements into two sets and call-
ing all elements from one of these sets by the name “objects” and
elements from the other sets by the name “morphisms”. In such
a way, two named sets appear. In addition, composition of mor-
phisms, as any algebraic operation, is also represented by a named
set (MorC (A, B)×MorC (B, C), ◦, MorC (A, C)). This shows that the
informal notion of a named set is prior both to categories and sets.
As a result, we come to the conclusion that any category is built of
different named sets. Moreover, functors between categories, which
are structured mappings of categories (Herrlich and Strecker, 1973),
are morphisms of those named sets.
It is possible to read more about categories, functors and their
properties, for example, in (Goldblatt, 1979; Herrlich and Strecker,
1973).

D. Numbers and numerical functions

N is the set of all natural numbers 1, 2, . . . , n, . . . .


ω is the sequence of all natural numbers.
N0 is the set of all whole numbers 0, 1, 2, . . . , n, . . . .
Z is the set of all integer numbers or of integers.
Q is the set of all rational numbers.
R is the set of all real numbers or of reals. The geometric form of the
set R is called the real line.
C is the set of all complex numbers.
∞ is the positive infinity, −∞ is the negative infinity. Usually, these
elements are added to the set R. The new set is denoted by
R∞ = R ∪ {∞, −∞}.
If a is a real number, then |a| or ‖a‖ denotes its absolute value or
modulus. Thus, |a| = a when a is non-negative and |a| = −a when a
is negative. Two more important constructions are the integral part
or integral value [a] (also denoted by ⌊a⌋) of a, which is equal to the
largest integer number that is not larger than a, and ]a[ (also denoted
by ⌈a⌉), which is equal to the least integer number that is not less than a.
For instance, [1.2] = 1, [7.99] = 7 and [−1.2] = −2. The difference
a − [a], also denoted by a mod 1, or {a}, is called the fractional
part of a. All these constructions define real-valued functions |x|, [x],
{x}, and ]x[, where ]x[ is sometimes called the ceiling function and
[x] is sometimes called the floor function.
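
These functions are directly available in most programming languages; a minimal sketch in Python:

```python
import math

a = -1.2
print(math.floor(a))       # -2, the integral part [a]
print(math.ceil(a))        # -1, the least integer not less than a, ]a[
print(a - math.floor(a))   # 0.8, the fractional part {a}
print(abs(a))              # 1.2, the absolute value |a|
```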
The symbol ≈ denotes the relation “approximately equal”. For
instance, we have 5 ≈ 5.001.
Axioms for operations with real and complex numbers:

Commutativity of addition: a + b = b + a;
Associativity of addition: (a + b) + c = a + (b + c);
Commutativity of multiplication: a · b = b · a;
Associativity of multiplication: (a · b) · c = a · (b · c);
Distributivity of multiplication with respect to addition:
a · (b + c) = a · b + a · c.
Zero is a neutral element with respect to addition:
a + 0 = 0 + a = a.
One is a neutral element with respect to multiplication:
a · 1 = 1 · a = a.
A function with R as its range is called a real function or a real-
valued function.
A function with C as its range is called a complex function or a
complex-valued function.

Operations with numbers induce similar operations with func-


tions:

Addition of functions (f + g)(x) = f (x) + g(x).


Subtraction of functions (f − g)(x) = f (x) − g(x).
Multiplication of functions (f · g)(x) = f (x) · g(x).
Scalar multiplication of functions (k · f )(x) = k · f (x) by a number k.
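
A minimal sketch of these induced operations, with functions represented as Python callables:

```python
def add(f, g):
    return lambda x: f(x) + g(x)

def multiply(f, g):
    return lambda x: f(x) * g(x)

def scale(k, f):
    return lambda x: k * f(x)

f = lambda x: x + 1
g = lambda x: 2 * x
print(add(f, g)(3))        # f(3) + g(3) = 4 + 6 = 10
print(multiply(f, g)(3))   # f(3) * g(3) = 4 * 6 = 24
print(scale(5, f)(3))      # 5 * f(3) = 20
```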
E. Topological, metric and normed spaces

A topology in a set X is a system O(X) of subsets of X that are


called open subsets and satisfy the following axioms:

T1. X ∈ O(X) and ∅ ∈ O(X).


T2. For all A, B, if A, B ∈ O(X), then A ∩ B ∈ O(X).

T3. For all Ai , i ∈ I, if all Ai ∈ O(X), then ∪i∈I Ai ∈ O(X).

A set X with a topology in it is called a topological space.


Topology in a set can be also defined by a system of neighborhoods
of points from this set. In this case, a set is open in this topology if it
contains a standard neighborhood of each of its points. For instance,
if a is a real number and t ∈ R++ , then an open interval Ot (a) = {x ∈
R; a − t < x < a + t} is a standard neighborhood of a.
If X is a subset of a topological space, then Cl(X) denotes the
closure of the set X.
In many interesting cases, topology is defined by a metric.
A metric in a set X is a mapping d : X × X → R+ that defines
distances between points of X and satisfies the following axioms:

M1. d(x, y) = 0 if and only if x = y, i.e., the distance between an


element and itself is equal to zero, while the distance between
any different elements is equal to a positive number.
M2. d(x, y) = d(y, x) for all x, y ∈ X, i.e., the distance between x
and y is equal to the distance between y and x.
M3. The triangle inequality:

d(x, y) ≤ d(x, z) + d(z, y) for all x, y, z ∈ X.

That is, the distance from x through z to y is never less than the
distance directly from x to y, or the shortest distance between any
two points is a straight line.
A set X with a metric d is called a metric space. The number
d(x, y) is called the distance between x and y in the metric space X.
For instance, in the set R of all real numbers, the distance d(x, y)
between numbers x and y is the absolute value |x − y| , i.e., d(x, y) =
|x−y|. This metric defines the following topology in R. If a is a point
from R, then a standard neighborhood of a has the form Or (a) =


{y; |a − y| < r} with r ∈ R++ . A set is open in this topology if it
contains a standard neighborhood of each of its points.
Mappings of metric (topological) spaces that preserve the metric
(topological) structure are called continuous.
It is possible to find other structures from topology and their
properties, for example, in (Kuratowski, 1966).
Σni=1 ci denotes the sum c1 + c2 + c3 + · · · + cn .
If U is a subset of a metric space, then the diameter D(U )
of the set U is equal to sup{d(x, y); x, y ∈ U }.
If l = {ai ∈ M ; i = 1, 2, 3, . . .} is a sequence, and f : M → L is a
mapping, then f (l) = {f (ai ); i = 1, 2, 3, . . .}.
a = lim l means that a number a is a limit of a sequence l.
It is possible to introduce a natural metric in the space Rn :
if x, y ∈ Rn , x = Σni=1 ai xi , and y = Σni=1 bi xi where xi are ele-
ments from a basis B of the space Rn , then
d(x, y) = √((a1 − b1 )² + (a2 − b2 )² + · · · + (an − bn )²).
It is called the Euclidean metric in the space Rn . The space Rn
with the Euclidean metric is called an Euclidean space and often
denoted by E n .
Another natural metric in Rn is the Manhattan distance, where
the distance between any two points, or vectors, is the sum of the
distances between corresponding coordinates, i.e., d(x, y) = |a1 −
b1 | + |a2 − b2 | + · · · + |an − bn |.
It is possible to consider any non-empty set X as a metric space
with the distance d(x, y) = 1 for all x not equal to y and d(x, y) = 0
otherwise. It is a discrete metric.
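
For illustration, the three metrics mentioned above can be sketched as follows (Python):

```python
import math

def euclidean(x, y):
    """Euclidean metric in R^n."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def manhattan(x, y):
    """Manhattan distance in R^n."""
    return sum(abs(a - b) for a, b in zip(x, y))

def discrete(x, y):
    """Discrete metric on any set."""
    return 0 if x == y else 1

x, y = (0, 0), (3, 4)
print(euclidean(x, y))   # 5.0
print(manhattan(x, y))   # 7
print(discrete(x, y))    # 1
```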
A norm in a linear space L over the field R is a mapping ‖·‖ :
L → R+ that satisfies the following axioms:

N1. ‖x‖ = 0 if and only if x = 0, i.e., the zero vector has zero length,
while any other vector has a positive length.
N2. For any positive number a from R, we have ‖ax‖ = a‖x‖, i.e.,
multiplying a vector by a positive number has the same effect
on the length.
N3. The triangle inequality:
‖x + y‖ ≤ ‖x‖ + ‖y‖,
i.e., the norm of a sum of vectors is never larger than the sum
of their norms.

This implies: ‖x‖ − ‖y‖ ≤ ‖x + y‖.
A vector space L with a norm is called a normed vector space or
simply, a normed space.
The space Rn is a normed vector space with the following norm:
if x ∈ Rn and x = Σni=1 ai xi where xi are elements from a basis B
of the space Rn , then ‖x‖ = √(a1² + a2² + · · · + an²).
Proposition D.1. Any normed vector space is a metric space.
Indeed, we can define d(x, y) = ‖x − y‖ and check that all axioms
of metric are valid for this distance.
Another natural metric in a normed vector space is the British
Rail metric (also called the Post Office metric or the SNCF metric)
on a normed vector space, given by d(x, y) = ‖x‖ + ‖y‖ for distinct
vectors x and y, and d(x, x) = 0.
A Hilbert space is an abstract linear (vector) space over the field
R of real numbers or the field C of complex numbers with an inner
product and complete as a metric space.
An inner product in a vector space V is a function ⟨·, ·⟩ : V × V →
R (or in the complex case ⟨·, ·⟩ : V × V → C) that satisfies the
following properties for all vectors x, y, and z from V and any number a:
Conjugate symmetry:
⟨x, y⟩ = ⟨y, x⟩∗ , where ∗ denotes complex conjugation (in the real
case, ⟨x, y⟩ = ⟨y, x⟩).
Linearity in the first argument:
⟨ax, y⟩ = a⟨x, y⟩,
⟨x + z, y⟩ = ⟨x, y⟩ + ⟨z, y⟩.
Positive-definiteness:
⟨x, x⟩ > 0
for all x ≠ 0 from V .
A fiber bundle B (also called fibre bundle) is a triad (E, p, B)


where the topological space E is called the total space or simply,
space of the fiber bundle B, the topological space B the base space
or simply, base of the fiber bundle B, and p is a topological projection
of E onto B such that every point in the base space has a neighbor-
hood U such that p−1 (b) = F for all points b from B and p−1 (U ) is
homeomorphic to the direct product U × F . The topological space F
is called the fiber of the fiber bundle B. Informally, a fiber bundle is
a topological space which looks locally like a product space U × F .
Fiber bundles are special cases of topological bundles. A topolog-
ical bundle D is a triad (named set) (E, p, B) where E and B are
topological spaces and p is a topological (i.e., continuous) projection
of E onto B.
It is possible to find introductory information on topological man-
ifolds, fiber and topological bundles, and other topological structures
in (Gauld, 1974; Husemoller, 1994; Lee, 2000).
Bibliography

Abadi, M. (1998) On SDSI’s linked local name spaces, Journal of Computer


Security, v. 6, No. 1–2, pp. 3–21.
Abadi, M. and Cardelli, L. (1996) A Theory of Objects (Monographs in
Computer Science), Springer, New York.
Abadi, A., Rabinovich, A. and Sagiv, M. (2010) Decidable fragments of
many-sorted logic, Journal of Symbolic Computation, v. 45, No. 2,
pp. 153–172.
Abelard, P. (1927) Logica ‘ingredientibus’, glossae super perihermeneias,
philosophische schriften, in Beiträge zur Geschichte der Philosophie und
Theologie im Mittelalter, v. 21, No. 3, Aschendorff, Münster.
Abelard, P. (1956) Dialectica, Van Gorcum, Assen.
Ackoff, R. L. (1989) From data to wisdom, Journal of Applied Systems
Analysis, v. 16, pp. 3–9.
Ackermann, W. (1940) Zur widerspruchsfreiheit der zahlentheorie, Mathe-
matische Annalen, v. 117, pp. 162–194.
Ackermann, R. (1967) Introduction to Many-Valued Logics, Routledge &
Kegan Paul, London & New York.
Adams, M. P. (2009) Empirical evidence and the knowledge-that/
knowledge-how distinction, Synthese, v. 170, pp. 97–114.
Agarwal, R., Bohner, M., O’Regan, D. and Peterson, A. (2002) Dynamic
equations on time scales: A survey, Journal of Computational and
Applied Mathematics, v. 141, pp. 1–26.
Agazzi, E. (1992) Intelligibility, understanding and explanation in science,
in Idealization IV: Intelligibility in Science, Amsterdam, pp. 25–46.
Agre, P. E. (2000) The market logic of information, Knowledge, Technology,
and Policy, v. 13, No. 3, pp. 67–77.

Aiello, M., Pratt-Hartmann, I. and van Benthem, J. (Eds.) (2007) Handbook


of Spatial Logics, Springer, New York.
Aigner, M. (1979) Combinatorial Theory, Springer Verlag, New York/
Berlin.
Alavi, M. and Leidner, D. E. (2001) KM and KM systems: Conceptual
foundations and research issues, MIS Quarterly, v. 25, No. 1, pp. 107–
136.
Albert, D. and Lukas, J. (Eds.) (1999) Knowledge Spaces: Theories, Empir-
ical Research, Applications, Lawrence Erlbaum Associates, Hillsdale,
NJ.
Albert, R. and Barabasi, A. L. (2000) Topology of evolving networks: Local
events and universality, Physical Review Letters, v. 85, pp. 5234–5237.
Alexander, J. and Weinberg, J. (2007) Analytic epistemology and experi-
mental philosophy, Philosophy Compass, v. 2, pp. 56–80.
Alston, W. (1986) Epistemic circularity, Philosophy and Phenomenological
Research, v. 47, pp. 1–30.
Alexander, P. A., Schallert, D. L. and Hare, V. C. (1991) Coming to terms:
How researchers in learning and literacy talk about knowledge, Review
of Educational Research, v. 61, pp. 315–343.
Alexandroff, P. (1961) Elementary Concepts of Topology, Dover Publica-
tions, Inc., New York.
Allen, E. H. (1976) Negative probabilities and the uses of signed probability
theory, Philosophy of Science, v. 43, No. 1, pp. 53–70.
Allen, J. F. (1984) Towards a general theory of action and time, Artificial
Intelligence, v. 23, pp. 123–154.
Alon, N. and Spencer, J. H. (2000) The Probabilistic Method, Wiley-
Interscience, New York.
Alp, K. O. (2010) A comparison of sign and symbol (their contents and
boundaries), Semiotica, v. 182, No. 1/4, pp. 1–13.
Alter, T. (2001) Know-how, ability, and the ability hypothesis, Theoria,
v. 67, No. 3, pp. 229–239.
Alter, S. (2006) Goals and tactics on the dark side of knowledge manage-
ment, Proc. of the 39th Hawaii International Conference on System
Sciences, IEEE Press.
Ambrosini, V. and Bowman, C. (2001) Tacit knowledge: Some suggestions
for operationalization, Journal of Management Studies, v. 38, No. 6,
pp. 811–829.
American Heritage Dictionary of the English Language (2009) Houghton
Mifflin Company, Boston.
Amgoud, L. and Cayrol, C. (1998) On the acceptability of arguments in
preference-based argumentation, Proc. of 14th Conference on Uncer-
tainty in Artificial Intelligence (UAI’98), pp. 1–7.
Amidon, D. M. (1997) Innovation Strategy for the Knowledge Economy,


Butterworth-Heinemann, Boston, MA.
Amir, E. (2002) Dividing and Conquering Logic, Ph.D. Thesis, Stanford
University, Computer Science Department.
Amir, E. and McIlraith, S. (2005) Partition-based logical reasoning for first-
order and propositional theories, Artificial Intelligence, v. 162, No. 1/2,
pp. 49–88.
Anderson, R. C. (1977) The notion of schemata and the educational enter-
prise, in Schooling and the Acquisition of Knowledge, R. C. Anderson,
R. J. Spiro and W. E. Montague (Eds.), Lawrence Erlbaum, Hillsdale,
NJ, USA.
Anderson, C. A. (1984) General intensional logic, in Handbook of Philo-
sophical Logic, Volume II, Chapter II.7, pp. 355–385.
Anderson, A. R. and Belnap, N. (1975) Entailment: The Logic of Relevance
and Necessity, v. I, Princeton University Press, Princeton.
Angluin, D. and Smith, C. H. (1983) Inductive inference: Theory and meth-
ods, Computing Surveys, v. 15, No. 3, pp. 237–269.
Appel, K. and Haken, W. (1977) Every planar map is four colorable. Part I.
Discharging, Illinois Journal of Mathematics, v. 21, pp. 429–490; Part
II. Reducibility, Illinois Journal of Mathematics, v. 21, pp. 491–567.
Arbib, M. A. (1989) Schemas and neural networks for sixth generation com-
puting, Journal of Parallel and Distributed Computing, v. 6, pp. 185–
216.
Arbib, M. A. (1992) Schema theory, in The Encyclopedia of Artificial Intel-
ligence, S. Shapiro (Ed.), 2nd Edition, Wiley-Interscience, pp. 1427–
1443.
Arbib, M. A. (1994) Schema theory: Cooperative computation for brain
theory and distributed AI, in Artificial Intelligence and Neural Net-
works: Steps toward Principled Integration, Academic Press, New York,
pp. 51–74.
Arbib, M. A. (1995) Schema theory: From Kant to McCulloch and beyond,
in Brain Processes, Theories and Models. An International Conference
in Honor of W. S. McCulloch 25 Years After His Death, R. Moreno-
Diaz and J. Mira-Mira (Eds.), The MIT Press, Cambridge, MA,
pp. 11–23.
Arbib, M. A. (2005) Modules, brains and schemas, Formal Methods, LNCS
3393, pp. 153–166.
Arbib, M. A. and Ehrig, H. (1990) Linking schemas and module specifi-
cations for distributed systems, Proc. 2nd IEEE Workshop on Future
Trends of Distributed Computing Systems, Cairo, pp. 165–171.
Arbib, M. A. and Liaw, J.-S. (1995) Sensorimotor transformations in the
worlds of frogs and robots, Artificial Intelligence, v. 72, pp. 53–79.
Arbib, M. A., Steenstrup, M. and Manes, E. G. (1983) Port automata and


the algebra of concurrent processes, Journal of Computer and System
Sciences, v. 27, pp. 29–50.
Areces, C., Blackburn, P. and Marx, M. (2001) Hybrid logics: Characteri-
zation, interpolation, and complexity, Journal of Symbolic Logic, v. 66,
pp. 977–1010.
Aristotle (1984) The Complete Works of Aristotle, Princeton University
Press, Princeton.
Armbruster, B. (1996) Schema theory and the design of content-area text-
books, Educational Psychologist, v. 21, pp. 253–276.
Armstrong, D. (1973) Belief, Knowledge, and Truth, Cambridge University
Press, London.
Armstrong, S. L., Gleitman, L. R. and Gleitman, H. (1983) What some
concepts might not be, Cognition, v. 13, pp. 263–308.
Arnauld, A. and Nicole, P. (1683) La Logique, ou L’art de Penser,
G. Desprez, Paris.
Arruda, A. I. (1980) A survey of paraconsistent logic, in Mathematical Logic
in Latin America, North-Holland, pp. 3–41.
Artale, A., Guarino, N. and Keet, C. M. (2008) Formalising temporal con-
straints on part-whole relations. Principles of knowledge representa-
tion and reasoning, Proc. of the Eleventh International Conference (KR
2008), Sydney, Australia, pp. 673–683.
Atanasov, K. (1999) Intuitionistic Fuzzy Sets: Theory and Applications,
Physica-Verlag, Heidelberg/New York.
Atiyah, M. F. (1990) The Geometry and Physics of Knots, Cambridge Uni-
versity Press, Cambridge.
Atkinson, R. L., Atkinson, R. C., Smith E. E. and Bem, D. J. (1990)
Introduction to Psychology, Harcourt Brace Jovanovich, Inc., San
Diego/New York/Chicago.
St. Augustine (1974) The Essential Augustine, V. J. Bourke (Ed.), Hackett,
Indianapolis.
Aune, B. (1967) Knowledge, Mind and Nature, Random House, New York.
Bacon, R. (ca. 1267/1978) De signis, Traditio, v. 34, pp. 75–136.
Baer, N. and Zeidman, B. (2009) Measuring software evolution with
changing lines of code, Proc. of the 24th International Confer-
ence on Computers and Their Applications (CATA-2009), pp. 264–
170.
Baeten, J. C. M. (Ed.) (1990) Applications of Process Algebra, Cambridge
University Press, Cambridge.
Baeten, J. C. M. (2005) A brief history of process algebra, Theoretical
Computer Science, v. 335, No. 2–3, pp. 131–146.
Baeten, J. C. M. and Bergstra, J. A. (1991) Real time process algebra,
Formal Aspects of Computing, v. 3, No. 2, pp. 142–188.
Baeten, J. C. M. and Bergstra, J. A. (1996) Discrete time process algebra,
Formal Aspects of Computing, v. 8, No. 2, pp. 188–208.
Baeten, J. C. M. and Weijland, W. P. (1990) Process Algebra, Cambridge
University Press, Cambridge.
Bagley, P. (1968) Extension of Programming Language Concepts,
University City Science Center, Philadelphia.
Baillargeon, R. (2000) How do infants learn about the physical world? Infant
development: The essential readings, in Essential Readings in Develop-
ment Psychology, Blackwell, Malden, MA, pp. 195–212.
Baker, K. A. and Badamshina, G. M. (2002) Knowledge manage-
ment, in Science Policy; Strategy; Change Management; Compe-
tencies; Innovation (http://www.wren-network.net/resources/bench
mark/05KnowledgeManagement.pdf).
Balakrishnan, H., Lakshminarayanan, K., Ratnasamy, S., Shenker, S., Sto-
ica, I. and Walfish, M. (2004) A layered naming architecture for the
internet, SIGCOMM ’04, Portland, Oregon, pp. 343–352.
Balasubramanian, R. (2000) Introduction, in History of Science, Philoso-
phy and Culture in Indian Civilization. V. II Part 2, Advaita Vedanta,
Chattopadhyana (Ed.), Centre for Studies in Civilizations, Delhi.
Baldoni, M., Giordano, L. and Martelli, A. (1998) A modal extension
of logic programming: Modularity, beliefs and hypothetical reasoning,
Journal of Logic and Computation, v. 8, pp. 597–635.
Baldwin, J. F. (1986) Support logic programming, International Journal of
Intelligent Systems, v. 1, pp. 73–104.
Baldwin, D. A. and Markman, E. M. (1989) Establishing word-object rela-
tions: A first step, Child Development, v. 60, No. 2, pp. 381–398.
Balzer, R. (1991) Tolerating inconsistency, Proc. of 13th International Con-
ference on Software Engineering (ICSE-13), Austin, TX, USA, IEEE
Computer Society Press, Silver Spring, MD, pp. 158–165.
Balzer, W., Burgin, M. and Kuznetsov, V. (1991) Reduction and the
structure-nominative view of theories, Abstracts of the 9th Interna-
tional Congress of Logic, Methodology and Philosophy of Science, Upp-
sala, Sweden, v. II, p. 6.
Bandemer, H. and Gottwald, S. (1996) Fuzzy Sets, Fuzzy Logic, Fuzzy Meth-
ods With Applications, Wiley, London.
Banerji, R. B. (1988) Learning theories in a subset of a polyadic logic,
Proc. of the First Annual Workshop on Computational learning theory
(COLT ’88), Morgan Kaufmann Publishers Inc., San Francisco, CA,
USA, pp. 267–278.
Bar-Hillel, Y. (1964) Language and Information, Selected Essays on their
Theory and Application, Addison-Wesley Publishing Company, Read-
ing, Massachusetts.
Baral, C., Kraus, S. and Minker, J. (1991) Combining multiple knowledge
bases, IEEE Transactions on Knowledge and Data Engineering, v. 3,
No. 2, pp. 208–220.
Baral, C., Kraus, S., Minker, J. and Subrahmanian, V. S. (1992) Combin-
ing knowledge bases consisting of first order theories, Computational
Intelligence, v. 8, No. 1, pp. 45–71.
Barendregt, H. P. (1984) The Lambda Calculus: Its Syntax and Semantics,
North Holland, Amsterdam.
Bar-Hillel, Y. and Carnap, R. (1958) Semantic information, British Journal
of Philosophical Sciences, v. 4, No. 3, pp. 147–157.
Barr, A. and Feigenbaum, E. (1981) The Handbook of Artificial Intelligence,
Kaufmann, Los Altos, CA.
Barr, M. (1979) *-Autonomous Categories, Lecture Notes in Mathematics,
v. 752, Springer, New York.
Bartlett, F. C. (1932) Remembering: A Study in Experimental and Social
Psychology, Cambridge University Press, Cambridge.
Barthes, R. (1967) Elements of Semiology (Translated by Annette
Lavers & Colin Smith), Jonathan Cape, London.
Barthes, R. (1971) Action Sequences, in Patterns of Literary Style,
J. Strelka (Ed.), Pennsylvania State University Press, University Park,
PA, pp. 5–14.
Barthes, R. (1971a) The structuralist activity, in Critical Theory Since
Plato, H. Adams (Ed.), Harcourt Brace Jovanovich, San Diego
pp. 1196–1199.
Barthes, R. (1972) Mythologies, Hill and Wang, New York.
Barthes, R. (1977) Introduction to the structural analysis of narratives, in
Image Music Text, Hill and Wang, New York, pp. 79–124.
Bartlett, M. S. (1945) Negative probability, Mathematical Proceedings of
the Cambridge Philosophical Society, v. 41, pp. 71–73.
Bartlett, M. S. (1986) Some aspects of negative probabilities, Physics
Reports, v. 133, No. 6, pp. 392–392.
Bartocci, C., Bruzzo, U. and Hernández Ruipérez, D. (1991) The Geometry
of Supermanifolds, Kluwer Academic Publ., Dordrecht.
Barwise, K. J. (1969) Infinitary logic and admissible sets, Journal of Sym-
bolic Logic, v. 34, No. 2, pp. 226–252.
Barwise, J. and Perry, J. (1983) Situations and Attitudes, MIT Press,
Cambridge, Massachusetts, and London, England.
Barwise, J. and Seligman, J. (1997) Information Flow : The Logic of Dis-
tributed Systems, Cambridge Tracts in Theoretical Computer Science,
v. 44, Cambridge University Press, Cambridge.
Barwise, K. J., Kaufman, M. and Makkai, M. (1978) Stationary logic,
Annals of Mathematical Logic, v. 13, pp. 171–224.
Basili, V. R. and Rombach, H. D. (1988) The TAME project: Towards
improvement-oriented software environments, IEEE Transactions on
Software Engineering, v. 14, No. 6, pp. 758–773.
Basin, D., D’Agostino, M., Gabbay, D. M., Matthews, S. and Vigano, L.
(Eds.) (2000) Labelled Deduction, Kluwer Academic Publishers, Dor-
drecht.
Bass, R. F. and Burdzy, K. (1990) A probabilistic proof of the bound-
ary Harnack principle, in Seminar on Stochastic Processes, Birkhäuser,
Boston, pp. 1–16.
Bates, M. J. (2010) Information, in Encyclopedia of Library and Informa-
tion Sciences, Third Edition, v. 1, pp. 2347–2360.
Bateson, G. (1973) Steps to an Ecology of Mind: Collected Essays in Anthro-
pology, Psychiatry, Evolution and Epistemology, Paladin, Granada,
London.
Beach, L. R. and Mitchell, T. R. (1987) Image theory: Principles, goals, and
plans in decision making, Acta Psychologica, v. 66, No. 3, pp. 201–220.
Bealer, G. (1982) Quality and Concept, Clarendon Press, Oxford.
Bealer, G. (1998) Intuition and the autonomy of philosophy, in Rethinking
Intuition: The Psychology of Intuition and Its Role in Philosophical
Inquiry, DePaul and Ramsey (Eds.), Rowman and Littlefield, Lanham,
MD, pp. 201–240.
Bealer, G. and Mönnich, U. (1989) Property theories, in Handbook
of Philosophical Logic, D. Gabbay and F. Guenthner (Eds.), Vol. IV,
Reidel, Dordrecht, pp. 133–251.
Bean, C. A. and Green, R. (Eds.) (2001) Relationships in the Organization
of Knowledge, Kluwer Academic Publishers, Dordrecht.
Beck, J. (1995) Cognitive Therapy: Basics and Beyond, Guilford, New York,
NY, USA.
Bekenstein, J. D. (2003) Information in the holographic universe, Scientific
American, v. 289, No. 2, pp. 58–65.
Belchior, A. D., Xexéo, G. and da Rocha, A. R. C. (1996) Evaluating
software quality requirements using fuzzy theory, Proc. of ISAS 96,
Orlando.
Belkin, N. and Robertson, S. (1976) Information science and the phe-
nomenon of information, Journal of the American Society for Infor-
mation Science, v. 27, pp. 197–204.
Bellinger, G., Castro, D. and Mills, A. (1997) Data, Information, Knowl-
edge, and Wisdom (http://www.outsights.com/systems/dikw/dikw.
htm).
Belnap, N. D. and Steel, T. B. (1976) The Logic of Questions and Answers,
Yale University Press, New Haven/London.
Bem, D. J. (1970) Beliefs, Attitudes, and Human Affairs, Brooks/Cole P.C.,
Belmont, California.
Benacerraf, P. (1973) Mathematical truth, The Journal of Philosophy, v. 70,
pp. 661–679.
Bender, J. (1993) Art as a source of knowledge: Linking analytic aesthetics
and epistemology, in Contemporary Philosophy of Art, J. Bender and
G. Blocker (Eds.) Prentice Hall, Englewood Cliffs, NJ.
Benferhat, S. and Baida, R. (2004) A stratified first order logic approach
for access control, International Journal of Intelligent Systems, v. 19,
pp. 817–836.
Benferhat, S. and Garcia, L. (2002) Handling locally stratified inconsistent
knowledge bases, Studia Logica, v. 70, pp. 77–104.
Benferhat, S., Kaci, S., Berre, D. and Williams, M. (2004) Weakening con-
flicting information for iterated revision and knowledge integration,
Artificial Intelligence, v. 153, No. 1–2, pp. 339–371.
Benferhat, S., Cayrol, C., Dubois, D., Lang, J., and Prade, H. (1993)
Inconsistency management and prioritized syntax-based entailment,
Proc. of 13th International Joint Conference on Artificial Intelligence
(IJCAI’93), pp. 640–645.
Benferhat, S., Dubois, D. and Prade, H. (1992) Representing default rules
in possibilistic logic, Proc. of 3rd International Conference of Principles
of Knowledge Representation and Reasoning (KR’92), pp. 673–684.
Benferhat, S., Dubois, D. and Prade, H. (1995) How to infer from incon-
sistent beliefs without revising? Proc. of 14th Int. Joint Conference on
Artificial Intelligence (IJCAI’95).
Bengson, J. (2012) Two conceptions of mind and action: Knowing how and
the philosophical theory of intelligence, in Knowing How: Essays on
Knowledge, Mind, and Action, J. Bengson and M. A. Moffett (Eds.),
Oxford University Press, Oxford, pp. 3–58.
Berge, C. (1973) Graphs and Hypergraphs, North Holland P.C.,
Amsterdam/New York.
Bergmann, M. (2006) Justification without Awareness, Oxford University
Press, New York.
Bergmann, M., Moor, J. and Nelson, J. (1980) The Logic Book, Random
House, New York.
Bergson, H. (1923/2007) The Creative Mind: An Introduction to Meta-
physics, Dover Books on Western Philosophy, Dover Publications.
Bergstra, J. A. and Klop, J. W. (1984) Process algebra for synchronous
communication, Information and Control, v. 60, No. 1–3, pp. 109–137.
Berkeley, G. (1710) A Treatise Concerning the Principles of Human Knowl-
edge, Dublin.
Berkeley, G. (1948–1957) The Works of George Berkeley, Bishop of Cloyne,
A. A. Luce and T. E. Jessop (Eds.), Thomas Nelson and Sons, London,
9 vols.
Berlinski, D. (2001) The Advent of the Algorithm: The Idea that Rules the
World, Mariner Books.
Bernecker, S. and Dretske, F. (Eds.) (2000) Knowledge: Readings in Con-
temporary Epistemology, Oxford University Press, Oxford/New York.
Berners-Lee, T., Hendler, J. and Lassila, O. (2001) The Semantic Web,
Scientific American, v. 284, No. 5, pp. 34–43.
Bertossi, L. E., Hunter, A. and Schaub, T. (Eds.) (2005) Inconsistency
Tolerance, LNCS, v. 3300, Springer, Heidelberg.
Besnard, P. and Hunter, A. (1995) Quasi-classical logic: Non-
trivializable classical reasoning from inconsistent information, Proc. of
ECSQARU’95, LNAI, v. 946, pp. 44–51.
Beynon-Davies, P. (2002) Information Systems: An Introduction to Infor-
matics in Organizations, Palgrave, Basingstoke, UK.
Beziau, J-Y. (2012) Universal Logic: An Anthology, from Paul Hertz to Dov
Gabbay, Birkhäuser, Boston.
Bezdek, J. C. (1978) Fuzzy partitions and relations; an axiomatic basis for
clustering, Fuzzy Sets and Systems, v. 1, pp. 111–127.
Biederman, I. (1987) Recognition-by-components: A theory of human image
understanding, Psychological Review, v. 94, pp. 115–147.
Biederman, I., Rabinowitz, J. C., Glass, A. L. and Stacey, E. W. J. (1974)
On the information extracted from a glance at a scene, Journal of
Experimental Psychology, v. 103, No. 3, pp. 597–600.
Binas, A. and McIlraith, S. (2008) Peer-to-peer query answering with
inconsistent knowledge, Proc. on the 11th International Conference on
Principles of Knowledge Representation and Reasoning, pp. 329–339,
Sydney, Australia.
Bird, A. (1996) Careers as repositories of knowledge: considerations for
boundaryless careers, in The Boundaryless Career : A New Employment
Principle for a New Organizational Era, Oxford University Press, New
York, pp. 150–168.
Bird, A. (1998) Philosophy of Science, McGill-Queen’s University Press,
Montreal/London/Ithaca.
Birnbaum, L. (1980) n-polar logic of classes, Notre Dame Journal of Formal
Logic, v. 21, No. 2, pp. 365–379.
Bisiach, E., Luzzatti, C. and Perani, D. (1979) Unilateral neglect, represen-
tational schema and consciousness, Brain, v. 102, pp. 609–618.
Bitbol, M. (2011) The quantum structure of knowledge, Axiomathes, v. 21,
pp. 357–371.
Bjerring, J. C. (2014) Problems in epistemic space, Journal of Philosophical
Logic, v. 43, No. 1, pp. 153–170.
Black, J. (2014) The Power of Knowledge: How Information and Technology
Made the Modern World, Yale University Press, New Haven.
Blake, J. (2003) Metaphor and Knowledge: The Challenge of Writing Sci-
ence, State University of New York Press, New York.
Blackler, F. (1995) Knowledge, knowledge work and organizations: An
overview and interpretation, Organization Studies, v. 16, No. 6,
pp. 1021–1046.
Blakemore, D. (1990) Understanding Utterances: The Pragmatics of Natu-
ral Language, Blackwell, Oxford.
Bliss, H. E. (1929) The Organization of Knowledge and the System of the
Sciences, (with an introduction by John Dewey), Henry Holt and Co.,
New York.
Bloch, A. S. (1975) Graph-Schemes and their Application, Vysheishaya
Shkola, Minsk, Belaruss (in Russian).
Bloch, M. (1949) Apologie pour l’Histoire ou Métier d’Historien, Armand
Colin, Paris.
Bloor, D. (1991) Knowledge and Social Imagery, University of Chicago
Press, Chicago.
Blum, M. (1967) On the size of machines, Information and Control, v. 11,
pp. 257–265.
Blum, E. K., Ehrig, H., and Parisi-Presicce, F. (1987) Algebraic specifica-
tions of modules and their basic interconnections, Journal of Computer
and System Sciences, v. 34, pp. 293–339.
Blum, C. and Roli, A. (2003) Metaheuristics in combinatorial optimization:
Overview and conceptual comparison, Computing Surveys, v. 35, No. 3,
pp. 268–308.
Bobrow, D. (Ed.) (1984) Qualitative Reasoning About Physical Systems,
MIT Press, Cambridge, Mass.
Bogoyavlenskaya, D. B. (1983) Intellectual Activity as a Problem of Cre-
ativity, Rostov State University, Rostov (in Russian).
Bohrer, K. (1997) Middleware isolates business logic, Object Magazine, v. 7,
No. 9, pp. 41–46.
Boisot, M. H. (1998) Knowledge Assets: Securing Competitive Advantage
in the Information Economy, Oxford University Press, Oxford.
Boisot, M. (2002) The Structuring and Sharing of Knowledge, in Strate-
gic Management of Intellectual Capital and Organization Knowledge,
C. Wei Choo and N. Bontis (Eds.), Oxford University Press, Oxford,
pp. 65–77.
Boisot, M. and Canals, A. (2004) Data, information, and knowledge: Have
we got it right, Journal of Evolutionary Economics, v. 14, pp. 43–67.
Bonfantini, M. and Proni, G. (1983) To guess or not to guess, in The Sign
of the Three: Dupin, Holmes, Peirce, Indiana University Press, Bloom-
ington, IN, pp. 119–134.
Boolos, G. (1993) The Logic of Provability, Cambridge University Press,
Cambridge.
Boole, G. (1854) An Investigation of the Laws of Thought on Which are
Founded the Mathematical Theories of Logic and Probabilities, Walton
and Maberly, London.
Boring, E. G., et al. (1945) Symposium on operationism, The Psychological
Review, v. 52, pp. 241–294.
Born, M. (1953) Physical reality, The Philosophical Quarterly, v. 3, No. 11,
pp. 139–149.
Borodyanskiy, Y. M. and Burgin, M. S. (1994) Problems of artificial intel-
ligence and transrecursive operators, Visnik of the National Academy
of Sciences of Ukraine, No. 11/12, pp. 29–34 (in Ukrainian).
Botha, A., Kourie, D. and Snyman, R. (2008) Coping with Continuous
Change in the Business Environment, Knowledge Management and
Knowledge Management Technology, Chandos Publishing Ltd.
Boulding, K. E. (1956) The Image, University of Michigan Press, Ann Arbor.
Bourbaki, N. (1948) L’architecture des mathématiques, in Les grands
courants de la pensée mathématique, Cahiers du Sud, pp. 35–47.
Bourbaki, N. (1957) Structures, Hermann, Paris.
Bourbaki, N. (1960) Theorie des Ensembles, Hermann, Paris.
Bourbaki, N. (1987) Elements of Mathematics, in Topological Vector Spaces,
Chapters 1–5, Springer-Verlag, Berlin.
Boussinesq, J. (1877) Essai sur la théorie des eaux courantes, Mémoires
présentés par divers savants à l’Académie des Sciences, Institut de France,
XXIII, pp. 1–680.
Brachman, R. J. (1983) What IS-A is and isn’t. An analysis of taxonomic
links in semantic networks, IEEE Computer, v. 16, No. 10.
Brachman, R. J. (1990) The future of knowledge representation, Proc.
AAAI-90, pp. 1082–1092.
Brachman, R. J. and Levesque, H. J. (2004) Knowledge Representation and
Reasoning, Morgan Kaufmann, San Mateo, CA.
Brenner, J. E. (2008) Logic in Reality, Springer, Dordrecht.
Brewer, W. F. (1999) Schemata, in The MIT Encyclopedia of the Cognitive
Sciences, MIT Press, Cambridge, MA.
Brewka, G. (1989) Preferred subtheories: An extended logical framework for
default reason, Proc. of 11th International Joint Conference on Artifi-
cial Intelligence (IJ CAI’89), pp. 1043–1048.
Brewka, G. (1991) Cumulative default logic: In defense of nonmonotonic
inference rules, Artificial Intelligence, v. 50, pp. 183–205.
Bridges, D. and Richman, F. (1987) Varieties of Constructive Mathematics,
Cambridge University Press, Cambridge.
Bridgman, P. W. (1927) The Logic of Modern Physics, Macmillan, New York.
Bridgman, P. W. (1936) The Nature of Physical Theory, Dover, New York.
Bridgman, P. W. (1938) Operational analysis, Philosophy of Science, v. 5,
pp. 114–131.
Bridgman, P. W. (1950) The nature of some of our physical concepts, British
Journal for the Philosophy of Science, v. 1, pp. 257–272.
Bridgman, P. W. (1951) The nature of some of our physical concepts, British
Journal for the Philosophy of Science, v. 2, pp. 25–44.
Bridgman, P. W. (1959) The Way Things Are, Harvard University Press,
Cambridge, MA.
Brookes, B. C. (1980) The foundations of information science, pt. 1, Philo-
sophical aspects, Journal of Information Science, v. 2, pp. 125–133.
Brooks, K. (1977) The developing cognitive viewpoint in information sci-
ence, Proc. International Workshop on the Cognitive Viewpoint, Uni-
versity of Ghent, Ghent, pp. 195–203.
Brookes, S. D., Hoare, C. A. R. and Roscoe, A. W. (1984) A theory of
communicating sequential processes, Journal of the ACM, v. 31, No. 3,
pp. 560–599.
Brown, P. and Lauder, H. (2000) Collective intelligence, in Social Capital:
Critical Perspectives, Oxford University Press, New York.
Browne, S., Dongarra, J., Green, S., Moore, K., Pepin, T., Rowan, T.
and Wade, R. (1995) Location-independent naming for virtual dis-
tributed software repositories, ACM SIGSOFT Software Engineering
Notes, Proc. of the 1995 Symposium on Software Reusability, v. 20, pp.
179–185.
Brown, B. and Priest, G. (2004) Chunk and permeate: A paraconsistent
inference strategy, part I: The infinitesimal calculus, Journal of Philo-
sophical Logic, v. 33, No. 4, pp. 379–388.
Bucur, I. and Deleanu, A. (1968) Introduction to the Theory of Categories
and Functors, John Wiley, London.
Bunge, M. (1962) Intuition and Science, Prentice-Hall, Englewood Cliffs,
NJ.
Bunt, H. and Black, W. (2000) Abduction, Belief and Context in Dialogue:
Studies in Computational Pragmatics, Natural Language Processing,
v. 1, John Benjamins, Amsterdam & Philadelphia.
Burgess, J. P. (2014) Intuitions of Three Kinds in Gödel’s Views on the
Continuum, in Interpreting Gödel, Cambridge University Press, Cam-
bridge, pp. 11–31.
Burgin, M. (1973) The Block–Schema language as a programming language,
Problems of Radio-electronics, No. 7, pp. 39–58 (in Russian).
Burgin, M. (1976) Recursion operator and representability of functions in
the block-schema language, Programming, No. 4, pp. 13–23 (Program-
ming and Computer Software, 1976, v. 2, No. 4).
Burgin, M. (1977) Non-classical models of natural numbers, Russian Math-
ematical Surveys, v. 32, No. 6, pp. 209–210 (in Russian).
Burgin, M. (1982) Products of operators in a multidimensional structured
model of systems, Mathematical Social Sciences, No. 2, pp. 335–343.
Burgin, M. (1982a) Generalized Kolmogorov complexity and duality in the-
ory of computations, Notices of the Academy of Sciences of the USSR,
v. 264, No. 2, pp. 19–23 (translated from Russian, v. 25, No. 3).
Burgin, M. (1983) On the Greg’s paradox in taxonomy, Abstracts presented
to the American Mathematical Society, v. 4, No. 3, p. 303.
Burgin, M. (1985) Multiple computations and Kolmogorov complexity for
such processes, Notices of the Academy of Sciences of the USSR, v. 269,
No. 4, pp. 793–797 (translated from Russian, v. 27, No. 2).
Burgin, M. (1985a) Abstract theory of properties, in Non-classical Logics,
Institute of Philosophy, Moscow, Russia, pp. 109–118 (in Russian).
Burgin, M. (1985b) Psychological aspects of flow-chart utilization in pro-
gramming, in Psychological Problems of Computer Design and Utiliza-
tion, Moscow, pp. 95–96 (in Russian).
Burgin, M. (1986) Quantifiers in theory of properties, in Nonstandard
Semantics of Non-classical Logics, Institute of Philosophy, Moscow,
pp. 99–107 (in Russian).
Burgin, M. (1989) Named Sets, General Theory of Properties, and Logic,
Institute of Philosophy, Kiev, Ukraine (in Russian).
Burgin, M. (1989a) Knowledge in intelligent systems, Proc. of the Confer-
ence on Intelligent Management Systems, Varna, Bulgaria, pp. 281–
286.
Burgin, M. (1990) Theory of named sets as a foundational basis for math-
ematics, in Structures in Mathematical Theories, San Sebastian, Spain
pp. 417–420.
Burgin, M. (1990a) Abstract theory of properties and sociological scaling,
in Expert Evaluation in Sociological Studies, Kiev, pp. 243–264 (in Rus-
sian).
Burgin, M. (1990b) Generalized Kolmogorov complexity and other
dual complexity measures, Cybernetics and System Analysis, No. 4,
pp. 21–29.
Burgin, M. (1991) Named sets in the semantic network theory, in Knowl-
edge-Dialog-Decision, Leningrad, pp. 43–47 (in Russian).
Burgin, M. (1991a) Logical methods in artificial intelligent systems, Vestnik
of the Computer Society, No. 2, pp. 66–78 (in Russian).
Burgin, M. (1992) Reflexive calculi and logic of expert systems, in Creative
Processes Modeling by Means of Knowledge Bases, Sofia, pp. 139–160
(in Russian).
Burgin, M. (1992a) Named sets as a mathematical apparatus for a data
structures representation, in Problem Solving on Mathematical Models
of Object Domains, Kiev, pp. 13–21 (in Russian).
Burgin, M. (1993) Analogy and argumentation in artificial intelligence sys-
tems, Vychislitelnyye Sistemy (Logical Methods in Computer Science),
v. 148, pp. 82–93 (in Russian).
Burgin, M. (1993a) Triad as a fundamental structure in human culture,
Studia Culturologia, v. 2, pp. 51–63.
Burgin, M. (1993b) Information triads, Philosophical and Sociological
Thought, No. 7–8, pp. 243–246 (in Russian and Ukrainian).
Burgin, M. (1994) Fundamental base of the theory of triads, Idea, No. 2,
pp. 32–45 (in Ukrainian).
Burgin, M. (1994a) Fuzzy terminological systems, Proc. of the 2nd Euro-
pean Congress on Intelligent Technologies and Soft Computing, Aachen,
Germany, pp. 984–988.
Burgin, M. (1994b) Is it possible that mathematics gives new knowledge
about reality? Philosophical and Sociological Thought, No. 1, pp. 240–
249 (in Russian and Ukrainian).
Burgin, M. (1995) Named sets as a basic tool in epistemology, Epistemolo-
gia, v. XVIII, pp. 87–110.
Burgin, M. (1995a) The phenomenon of knowledge, Philosophical and Soci-
ological Thought, No. 3–4, pp. 41–63 (in Russian and Ukrainian).
Burgin, M. (1995b) Logical tools for inconsistent knowledge systems, Infor-
mation: Theories & Applications, v. 3, No. 10, pp. 13–19.
Burgin, M. (1995c) Mistakes and misconceptions as engines of progress
in science, Visnik of the National Academy of Science of Ukraine,
No. 11/12, pp. 64–70 (in Ukrainian).
Burgin, M. (1995d) Intellectual activity and student development, in Psy-
chological Foundations of Education Humanization, Rivne, pp. 30–36
(in Ukrainian).
Burgin, M. (1996) Triad as a way to mutual understanding, Philosophi-
cal and Sociological Thought, No. 7/8, pp. 232–237 (in Russian and
Ukrainian).
Burgin, M. (1996a) Intellectual activity as a psychological phenomenon,
International Journal of Psychology, v. 31, No. 3/4 (XXVI Interna-
tional Congress of Psychology, Montreal, 1996).
Burgin, M. (1996b) Flow-charts in programming: Arguments pro et contra,
Control Systems and Machines, No. 4–5, pp. 19–29 (in Russian).
Burgin, M. (1996c) Understanding and cognition: Jewish sages
and modern science (Rational Aspects), Aviv, No. 4, pp. 54–61 (in
Russian).
Burgin, M. (1996d) Understanding and cognition: Jewish sages and
modern science (Mystical Structures), Aviv, No. 5, pp. 55–61 (in
Russian).
Burgin, M. (1997) Fundamental Structures of Knowledge and Information,
Ukrainian Academy of Information Sciences, Kiev (in Russian).
Burgin, M. (1997a) Mathematical theory of technology, in Theoretical Prob-
lems of Mathematics and Information Sciences, Ukrainian Academy of
Information Sciences, Kiev, pp. 91–100 (in Russian).
Burgin, M. (1997b) A technological approach to the system science-
industry-consumption, Science and Science of Science, No. 3/4, pp. 73–
88 (in Russian).
Burgin, M. (1997c) Non-Diophantine Arithmetics or is it Possible that 2 +
2 is not Equal to 4? Ukrainian Academy of Information Sciences, Kiev
(in Russian, English summary).
Burgin, M. (1997d) Logical varieties and covarieties, in Theoretical Prob-
lems of Mathematics and Information Sciences, Ukrainian Academy of
Information Sciences, Kiev, pp. 18–34 (in Russian).
Burgin, M. (1997e) Information algebras, Control Systems and Machines,
v. 6, pp. 5–16 (in Russian).
Burgin, M. (1997f) Knowledge visualization as a tool for creative think-
ing, 7th International Conference on Human-Computer Interaction,
San Francisco, p. 63.
Burgin, M. (1998) On the Nature and Essence of Mathematics, Ukrainian
Academy of Information Sciences, Kiev (in Russian).
Burgin, M. (1998a) Intellectual Components of Creativity, Aerospace
Academy of Ukraine, Kiyiv (in Ukrainian).
Burgin, M. (1998/1999) Information and transformation, Transformation,
No. 1, pp. 48–53 (in Polish).
Burgin, M. (2001) Information in the context of education, The Journal of
Interdisciplinary Studies, v. 14, pp. 155–166.
Burgin, M. (2002) Knowledge and data in computer systems, Proc. of the
ISCA 17th International Conference “Computers and their Applica-
tions”, International Society for Computers and their Applications, San
Francisco, California, pp. 307–310.
Burgin, M. (2002a) Elements of the System Theory of Time, LANL,
Preprint in Physics 0207055 (electronic edition: http://arXiv.org).
Burgin, M. (2003) Levels of system functioning description: From algorithm
to program to technology, Proc. of the Business and Industry Simula-
tion Symposium, Society for Modeling and Simulation International,
Orlando, Florida, pp. 3–7.
Burgin, M. (2003a) From neural networks to grid automata, Proc. of the
IASTED International Conference “Modeling and Simulation”, Palm
Springs, California, pp. 307–312.
Burgin, M. (2003b) Cluster Computers and Grid Automata, Proc. of the
ISCA 17th International Conference “Computers and their Applica-
tions”, International Society for Computers and their Applications,
Honolulu, Hawaii, pp. 106–109.
Burgin, M. (2004) Data, information, and knowledge, Information, v. 7,
No. 1, pp. 47–57.
Burgin, M. (2004a) Logical tools for program integration and interop-
erability, Proc. of the IASTED International Conference on Soft-
ware Engineering and Applications, MIT, Cambridge, MA, pp. 743–
748.
Burgin, M. (2004b) Discontinuity structures in topological spaces, Interna-
tional Journal of Pure and Applied Mathematics, v. 16, No. 4, pp. 485–
513.
Burgin, M. (2004c) Algorithmic complexity of recursive and induc-
tive algorithms, Theoretical Computer Science, v. 317, No. 1/3,
pp. 31–60.
Burgin, M. (2005) Super-recursive Algorithms, Springer, New York/
Heidelberg/Berlin.
Burgin, M. (2005a) Mathematical Models in Schema Theory, Preprint in
Computer Science and Artificial Intelligence, cs.AI/0512099 (electronic
edition: http://arXiv.org).
Burgin, M. (2005b) Grammars with prohibition and human-computer inter-
action, Proc. of the Business and Industry Simulation Symposium, Soci-
ety for Modeling and Simulation International, San Diego, California,
pp. 143–147.
Burgin, M. (2005c) Recurrent points of fuzzy dynamical systems, Journal
of Dynamical Systems and Geometric Theories, v. 3, No. 1, pp. 1–14.
Burgin, M. (2006) Mathematical schema theory for modeling in business
and industry, Proc. of the 2006 Spring Simulation MultiConference
(SpringSim ’06), Huntsville, Alabama, pp. 229–234.
Burgin, M. (2007) Elements of non-diophantine arithmetics, 6th Annual
International Conference on Statistics, Mathematics and Related
Fields, 2007 Conference Proc., Honolulu, Hawaii, pp. 190–203.
Burgin, M. (2007a) Languages, Algorithms, Procedures, Calculi, and
Metalogic, Preprint in Mathematics LO/0701121 (electronic edition:
http://arXiv.org).
Burgin, M. (2007b) Universality, reducibility, and completeness, Lecture
Notes in Computer Science, v. 4664, pp. 24–38.
Burgin, M. (2008) Neoclassical Analysis, Nova Science Publishers,
New York.
Burgin, M. (2008a) Structural organization of temporal databases, Proc.
of the 17 th International Conference on Software Engineering and
Data Engineering (SEDE-2008), ISCA, Los Angeles, California,
pp. 68–73.
Burgin, M. (2009) Structures in mathematics and beyond, Proc. of the
8th Annual International Conference on Statistics, Mathematics and
Related Fields, Honolulu, Hawaii, pp. 449–469.
Burgin, M. (2009a) Mathematical theory of information technology, Proc.
of the 8th WSEAS International Conference on Data Networks,
Communications, Computers (DNCOCO’09), Baltimore, Maryland,
pp. 42–47.
Burgin, M. (2010) Theory of Information: Fundamentality, Diversity and
Unification, World Scientific, New York/London/Singapore.
Burgin, M. (2010a) Mathematical Schema Theory for Network Design,
Proc. of the ISCA 25th International Conference “Computers
and their Applications” (CATA-2010), ISCA, Honolulu, Hawaii,
pp. 157–162.
Burgin, M. (2010b) Information operators in categorical information spaces,
Information, v. 1, No. 1, pp. 119–152.
Burgin, M. (2010c) Introduction to Projective Arithmetics, Preprint in
Mathematics, math.GM/1010.3287, p. 21 (electronic edition: http://
arXiv.org).
Burgin, M. (2010d) Measuring Power of Algorithms, Computer Programs,
and Information Automata, Nova Science Publishers, New York.
Burgin, M. (2010e) Interpretations of Negative Probabilities, Preprint
in Quantum Physics, quant-ph/1008.1287 (electronic edition: http://
arXiv.org).
Burgin, M. (2010d) Algorithmic complexity of computational problems,
International Journal of Computing & Information Technology, v. 2,
No. 1, pp. 149–187.
Burgin, M. (2011) Theory of Named Sets, Nova Science Publishers, New
York.
Burgin, M. (2011a) Epistemic information in stratified M-spaces, Informa-
tion, v. 2, No. 2, pp. 697–726.
Burgin, M. (2011b) Information dynamics in a categorical setting, in
Information and Computation, World Scientific, New York/London/
Singapore, pp. 35–78.
Burgin, M. (2011c) Information: Concept clarification and theoretical rep-
resentation, TripleC, v. 9, No. 2, pp. 347–357 (http://triplec.uti.at).
Burgin, M. (2011d) Information in the structure of the world, Information:
Theories & Applications, v. 18, No. 1, pp. 16–32.
Burgin, M. (2012) Structural Reality, Nova Science Publishers, New York.
Burgin, M. (2012a) A system approach to information structuring in the
context of social interaction, Information: Theories & Applications,
v. 19, No. 1, pp. 3–13.
Burgin, M. (2013) Semitopological vector spaces and hyperseminorms, The-
ory and Applications of Mathematics and Computer Science, v. 3, No. 2,
pp. 1–35.
Burgin, M. (2014) Weighted E-Spaces and epistemic information operators,
Information, v. 5, No. 3, pp. 357–388.
Burgin, M. (2015) Inductive cellular automata, International Journal of
Data Structures and Algorithms, v. 1, No. 1, pp. 1–9.
Burgin, M. (2015a) Grammars with exclusion, Journal of Computer Tech-
nology & Applications (JoCTA), v. 6, No. 2, pp. 56–66.
Burgin, M. S. and Bratalskii, E. A. (1986) The principle of asymptotic
uniformity in complex system modeling, in Operation Research and
Automated Control Systems, Kiev, pp. 115–122 (in Russian).
Burgin, M. S., Bratalskii, E. A. and Belkov, M. S. (1979) The language PDL
for the large system design automation, Programming and Computer
Software, v. 5, No. 1, pp. 80–90.
Burgin, M. and Burgina, E. (1982) Information retrieval and multi-valued
partitions in languages, Cybernetics and System Analysis, No. 1, pp. 30–
42 (translated from Russian).
Burgin, M., Calude, C. S. and Calude, E. (2013) Inductive complexity mea-
sures for mathematical problems, International Journal of Foundations
of Computer Science, v. 24, No. 4, pp. 487–500.
Burgin, M. and Debnath, N. (2003) Complexity of algorithms and software
metrics, Proc. of the ISCA 18 th International Conference “Computers
and their Applications”, International Society for Computers and their
Applications, Honolulu, Hawaii, pp. 259–262.
Burgin, M. and Debnath, N. (2006) Software correctness, Proc. of the ISCA
21st International Conference on Computers and their Applications
(CATA 2006), ISCA, Seattle, Washington, pp. 259–264.
Burgin, M. and Debnath, N. (2007) Testing in the software life cycle, Proc.
of the International Conference on Computer Applications in Industry
and Engineering (CAINE-07), San Francisco, California, pp. 156–161.
Burgin, M. and Debnath, N. (2008) Testing: Organization and evaluation,
Proc. of the ISCA 23 rd International Conference on Computers and
their Applications (CATA-2008), ISCA, Cancun, Mexico, pp. 203–208.
Burgin, M. and Debnath, N. (2009) Superrecursive algorithms in test-
ing distributed systems, Proc. of the ISCA 24th International Con-
ference “Computers and their Applications” (CATA-2009), ISCA, New
Orleans, Louisiana, USA, pp. 209–214.
Burgin, M. and Debnath, N. (2010) Reusability as design of second-level
algorithms, Proc. of the ISCA 25th International Conference “Com-
puters and their Applications” (CATA-2010), ISCA, Honolulu, Hawaii,
pp. 147–152.
Burgin, M. and Debnath, N. C. (2012) Interplay of logical verification and
performance testing in software assurance, Proc. of the 21st Inter-
national Conference on Software Engineering and Data Engineering
(SEDE-2012), ISCA, Los Angeles, California, pp. 155–160 (in collabo-
ration).
Burgin, M. and Dodig-Crnkovic, G. (2011) Information and Computation —
Omnipresent and Pervasive, in Information and Computation, World
Scientific, New York/London/Singapore, pp. vii–xxxii.
Burgin, M. and Eberbach, E. (2009) On foundations of evolutionary compu-
tation: An evolutionary automata approach, in Handbook of Research
on Artificial Immune Systems and Natural Computing: Applying Com-
plex Adaptive Technologies, H. Mo (Ed.), Section II: Natural Com-
puting, Section II.1: Evolutionary Computing, Chapter XVI, Medical
Information Science Reference/IGI Global, Hershey, Pennsylvania,
pp. 342–260.
Burgin, M. and Eberbach, E. (2013) Recursively generated evolution-
ary turing machines and evolutionary automata, in Artificial Intel-
ligence, Evolutionary Computing and Metaheuristics, X.-S. Yang
(Ed.), Studies in Computational Intelligence, v. 427, Springer-Verlag,
Berlin/Heidelberg, pp. 201–230.
Burgin, M. and Eggert, P. (2004) Types of software systems and
structural features of programming and simulation languages,
Proc. of the Business and Industry Simulation Symposium, Soci-
ety for Modeling and Simulation International, Arlington, Virginia,
pp. 177–181.
Burgin, M. and Gabovich, A. M. (1997) Why a discovery was not made,
Visnik of the National Academy of Science of Ukraine, No. 3/4, pp. 55–
60 (in Ukrainian).
Burgin, M. and Gantenbein, R. E. (2002) Knowledge discovery, informa-
tion retrieval, and data mining, Proc. of the ISCA 17th International
Conference “Computers and their Applications” (CATA-2002), ISCA,
San Francisco, California, pp. 55–58.
Burgin, M. and Gladun, V. P. (1989) Mathematical foundations of the
semantic networks theory, Lecture Notes in Computer Science, v. 364,
pp. 117–135.
Burgin, M. and Gladun, V. P. (1990) Mathematical models of semantic net-
works based on the named sets, in Decision Support Systems, Budapest,
pp. 81–96 (in Russian).
Burgin, M. and Gladun, V. P. (1990a) Elements of the mathematical theory
of semantic networks, in Knowledge-Dialog-Decision, Kiev, pp. 173–184
(in Russian).
Burgin, M. S. and Gladun, V. P. (1990b) Mathematical modeling of
knowledge representation structures in artificial intelligence systems, in
Methodology of Mathematical Modeling, Sofia, pp. 317–319 (in Russian).
Burgin, M. and Gorsky, D. (1991) Towards the construction of general the-
ory of concept, in The Opened Curtain, Oulder/San Francisco/Oxford,
pp. 167–195.
Burgin, M. and Gupta, B. (2012) Second-level algorithms, superrecursiv-
ity, and recovery problem in distributed systems, Theory of Computing
Systems, v. 50, No. 4, pp. 694–705.
Burgin, M. and Karasik, A. (1976) Operators of multidimensional struc-
tured model of parallel computations, Automation and Remote Control,
v. 37, No. 8, pp. 1295–1300.
Burgin, M. and Kavunenko, L. (1994) Measurement and Evaluation in Sci-
ence, STEPS, Kiev (in Russian).
Burgin, M. and Kharlamov, M. (1978) Connections between relations and
their application in computer science, in Problems of Information The-
ory and Practice, Moscow, pp. 84–93 (in Russian).
Burgin, M. and Klinger, A. (2004) Experience, generations, and limits
in machine learning, Theoretical Computer Science, v. 317, No. 1/3,
pp. 71–91.
Burgin, M. and Krymsky, S. B. (1985) Rationality principles and the prob-
lem of their modeling, in Methodological aspects of scientific research,
Kiev, pp. 3–17 (in Russian).
Burgin, M. and Kuznetsov, V. (1988) The structure-nominative recon-
struction of scientific knowledge, Epistemologia, v. XI, No. 2, pp. 235–
254.
Burgin, M. and Kuznetsov, V. (1988b) Concepts in cognitive systems and
their structure-nominative models, in Theory of cognition and logic,
Works of soviet scientists to the XVIII International Congress of Phi-
losophy (Brighton,1988), pp. 52–58.
Burgin, M. and Kuznetsov, V. (1989) Logical and structural principles
of knowledge, Proc. Conference on Intelligent Management Systems,
Varna, pp. 266–272.
Burgin, M. and Kuznetsov, V. (1991) System organization of data and
knowledge bases, in Data and Knowledge Bases in Automated Systems,
Kiev, pp. 149–157 (in Russian).
Burgin, M. and Kuznetsov, V. (1991a) Scientific knowledge in the expert
system “NT-1”, in Software, Tver, pp. 77–82 (in Russian).
Burgin, M. and Kuznetsov, V. (1992) The structure-nominative analysis of
theoretical knowledge, Proc. of the Institute of Philosophy, Kiev.
Burgin, M. and Kuznetsov, V. (1992a) The structure-nominative recon-
struction and intelligibility of cognition, Epistemologia, v. XV, No. 2,
pp. 219–238.
Burgin, M. and Kuznetsov, V. (1992b) The structure-nominative direction
in Methodology of Science (1984–1991), in Methodological Conceptions
and Schools in the USSR (1951–1991), Novosibirsk, pp. 111–130 (in
Russian).
Burgin, M. and Kuznetsov, V. (1993) Properties in science and their mod-
eling, Quality & Quantity, v. 27, pp. 371–382.
Burgin, M. and Kuznetsov, V. (1994) Introduction to Modern Exact
Methodology of Science, International Science Foundation, Moscow (in
Russian).
Burgin, M. and Kuznetsov, V. (1994a) Knowledge representation in intelli-
gent systems, in Intellect, Man, and Computer, Novosibirsk, pp. 35–56
(in Russian).
Burgin, M. and Kuznetsov, V. (1994b) Scientific problems and questions
from a logical point of view, Synthese, v. 100, No. 1, pp. 1–28.
Burgin, M., Kuznetsov, V. I. and Dmitrik, I. (1989) The structure-
nominative analysis of pedagogical theories, Soviet Pedagogy, No. 3,
pp. 59–64 (in Russian).
Burgin, M., Liu, D. and Karplus, W. (2001) The problem of time scales
in computer visualization, in Computational Science, Lecture Notes in
Computer Science, v. 2074, part II, pp. 728–737.
Burgin, M., Liu, D. and Karplus, W. (2001a) Branching computation for
visualization in medical systems, Proc. of the ISCA 16th International
Conference “Computers and their Applications”, ISCA, Seattle, Wash-
ington, pp. 481–484.
Burgin, M., Liu, D. and Karplus, W. (2001b) Visualization in Human-
Computer Interaction, UCLA, Computer Science Department, Report
CSD-010010, Los Angeles, July, p. 108.
Burgin, M. and Meissner, G. (2010) Negative probabilities in modeling ran-
dom financial processes, Integration: Mathematical Theory and Appli-
cations, v. 2, No. 3, pp. 305–322.
Burgin, M. and Meissner, G. (2012) Negative probabilities in financial mod-
eling, Wilmott Magazine, pp. 60–65.
Burgin, M. and Mikkilineni, R. (2014) Semantic Network Organization
Based on Distributed Intelligent Managed Elements, Sixth International
Conference on Advances in Future Internet (AFIN 2014), Lis-
bon, Portugal, pp. 1–7 (http://www.thinkmind.org/index.php?view=
instance&instance=AFIN+2014).
Burgin, M. and Milov, Yu. (1999) Existential triad: A structural analysis
of the whole, Totalogy, v. 2/3, pp. 387–406 (in Russian).
Burgin, M., Parfenzeva, N. and Gladkova, V. I. (1997) Mathematical Mod-
els of Classifications in Statistics, Institute of Statistics, Kyiv (in
Ukrainian).
Burgin, M. and Rothbart, D. (1998) Metaphor as an exact concept in the
theory of properties, Theoria, No. 2, pp. 91–103 (in Serbian).
Burgin, M. and Rybalov, A. (2003) Fuzzy logical varieties as models of
thinking, emotions, and will, Proc. of the 10th IFSA World Congress,
Istanbul, Turkey, pp. 31–34.
Burgin, M. and Schumann, J. (2006) Three levels of the symbolosphere,
Semiotica, v. 160, No. 1/4, pp. 185–202.
Burgin, M. and Slyusar, V. (1980) Structured organization of frames for
knowledge-base construction, in Knowledge Representation in Artificial
Intelligence Systems, Moscow, pp. 21–24 (in Russian).
Burgin, M. and Smith, M. L. (2006) Compositions of concurrent processes,
in Concurrent Systems Engineering Series, Frederick R. M. Barnes, Jon
M. Kerridge, Peter H. Welch (Eds.), IOS Press, Amsterdam, Communi-
cating Process Architectures, Napier University (Edinburgh, Scotland),
pp. 281–296.
Burgin, M. and Smith, M. L. (2007) A unifying model of concurrent pro-
cesses, Proc. of the 2007 International Conference on Foundations of
Computer Science (FCS’07), H. R. Arabnia and M. Burgin (Eds.),
CSREA Press, Las Vegas, Nevada, USA, pp. 321–327.
Burgin, M. and Smith, M. L. (2010) A Theoretical Model for Grid,
Cluster and Internet Computing, Selected Topics in Communi-
cation Networks and Distributed Systems, World Scientific, New
York/London/Singapore, pp. 485–535.
Burgin, M. and Tandon, A. (2006) Naming and its regularities in distributed
environments, Proc. of the 2006 International Conference on Founda-
tions of Computer Science, CSREA Press, pp. 10–16.
Burgin, M. and Tkachenko, O. I. (1993) Data and knowledge in software
systems, in Intelligent Instrumental Programming Tools, Kiev, pp. 72–
80 (in Russian).
Burgin, M. and Valkman, Yu. R. (1997) Principles of scientific library cata-
logues intellectualization, in Problems of Scientific Library Catalogues
Improvement, Kyiv, pp. 97–99 (in Ukrainian).
Burgin, M. and de Vey Mestdagh, C. N. J. (2011) The representation of
inconsistent knowledge in advanced knowledge based systems, Lecture
Notes in Computer Science, Knowlege-Based and Intelligent Informa-
tion and Engineering Systems, v. 6882, pp. 524–537.
Burgin, M. and de Vey Mestdagh, C. N. J. (2015) Consistent structuring
of inconsistent knowledge, Journal of Intelligent Information Systems,
v. 45, No. 1, pp. 5–28.
Burgin, M. and Zellweger, P. (2005) A unified approach to data represen-
tation, Proc. of the 2005 International Conference on Foundations of
Computer Science, CSREA Press, Las Vegas, pp. 3–9.
Burton, D. M. (1997) The History of Mathematics, The McGraw-Hill Co.,
New York.
Butler, D. and Bryson, S. (1992) Vector-bundle classes form powerful tool
for scientific visualization, Computers in Physics, v. 6, pp. 576–584.
Buzaglo, M. (2002) The Logic of Concept Expansion, Cambridge University
Press, Cambridge.
Cadoli, M. and Schaerf, M. (1992) Approximate inference in default rea-
soning and circumscription, Proc. ECAI’92, pp. 319–323.
Campbell, R. L. (1998) Representation by correspondence: an inadequate
conception of knowledge for artificial systems, in Advanced Topics in
Artificial Intelligence, Lecture Notes in Artificial Intelligence, v. 1502,
pp. 15–26.
Campbell, D. G., Brundin, M., MacLean, G. and Baird, C. (2007) Every-
thing old is new again: Finding a place for knowledge structures in a
satisficing world, Proc. of the North American Symposium on Knowl-
edge Organization, v. 1, pp. 21–30.
Câmpeanu, C. (2012) A note on Blum static complexity measures, Compu-
tation, Physics and Beyond, pp. 71–80.
Cannataro, M. and Talia, D. (2004) Semantics and knowledge grids: Building
the next-generation grid, IEEE Intelligent Systems, v. 19, No. 1, pp. 56–63.
Capurro, R. and Hjorland, B. (2003) The concept of information, Annual
Review of Information Science and Technology, v. 37, No. 8, pp. 343–
411.
Capurro, R. (1991) Foundations of information science: Review and per-
spectives, Proc. of the International Conference on Conceptions of
Library and Information Science, University of Tampere, Tampere, Fin-
land, pp. 26–28.
Carlson, J. M. and Doyle, J. (2002) Complexity and robustness, Proc. Nat.
Acad. Science of the USA, v. 99, No. 1, pp. 2538–2545.
Carlucci, D., Marr, B. and Schiuma, G. (2004) The knowledge value chain:
how intellectual capital impacts on business performance, International
Journal of Technology Management, v. 27, No. 6/7, pp. 575–590.
Carnap, R. (1928) Der logische Aufbau der Welt. Scheinprobleme in der
Philosophie, Berlin (English translation: The Logical Structure of the
World. Pseudoproblems in Philosophy, London/Berkeley, CA, 1967).
Carnap, R. (1934) Logische Syntax der Sprache (English translation The
Logical Syntax of Language, Humanities, New York, 1937).
Carnap, R. (1952) The Continuum of Inductive Methods, Chicago Univer-
sity Press, Chicago.
Carnielli, W. A., Coniglio, M. E., Marcos, J. (2007) Logics of formal
inconsistency, in Handbook of Philosophical Logic, D. Gabbay and
F. Guenthner (Eds.), v. 14, Springer-Verlag, Berlin, pp. 1–93.
Carr, W. and Kemmis, S. (1986) Becoming Critical: Education, knowledge
and action research, Falmer Press, Lewes.
Carter, J. (2008) Structuralism as a philosophy of mathematical practice,
Synthese, v. 163, No. 2, pp. 119–131.
Cartwright, N. (1983) How the Laws of Physics Lie, Clarendon Press,
Oxford.
Cath, Y. (2009) The ability hypothesis and the new knowledge-how, Noûs,
v. 43, No. 1, pp. 137–156.
Cath, Y. (2012) Knowing how without knowing that, in Knowing How:
Essays on Knowledge, Mind, and Action, J. Bengson and M. A. Moffett
(Eds.), Oxford University Press, Oxford, pp. 113–135.
Cayrol, C. and Lagasquie-Schiex, M. C. (1998) Nonmonotonic reasoning:
from complexity to algorithms, Annals of Mathematics and Artificial
Intelligence, v. 22, pp. 207–236.
Chalmers, D. J. (2010) The Nature of Epistemic Space, in Epistemic Modal-
ity, Oxford University Press, New York, pp. 60–107.
Chang, H. (2004) Inventing Temperature: Measurement and Scientific
Progress, Oxford University Press, New York.
Chang, H. (2009) Operationalism, The Stanford Encyclopedia of
Philosophy (Internet Edition: http://plato.stanford.edu/archives/
fall2009/entries/operationalism/).
Chang, C.-L. and Lee, R. C.-T. (1973) Symbolic Logic and Mechanical The-
orem Proving, Academic Press, New York.
Chang, C. C. and Keisler, H. J. (1966) Continuous Model Theory, Princeton
University Press, Princeton.
Chechkin, A. V. (1991) Mathematical Informatics, Nauka, Moscow (in
Russian).
Chellas, B. (1980) Modal Logic: An Introduction, Cambridge University
Press, Cambridge.
Chen, C. and Huang, J. (2007) How organizational climate and structure
affect knowledge management — The social interaction perspective,
International Journal of Information Management, v. 27, No. 2,
pp. 104–118.
Chi, R. S. Y. (1969) Buddhist Formal Logic: A Study of Dignāga’s Hetucakra
and K’uei-chi’s Great Commentary on the Nyāyapraveśa, The Royal
Asiatic Society of Great Britain, London.
Chisholm, R. (1989) Theory of Knowledge, Prentice Hall, Englewood Cliffs,
NJ.
Chmielewski, J. (2009) Language and Logic in Ancient China, Collected
Papers on the Chinese Language and Logic, PAN, Warszawa, Poland.
Cholewinski, P. (1994) Stratified default logic, Proc. of Computer Science
Logic’94, LNCS, v. 933, pp. 456–470.
Chong, S. C. and Choi, Y. S. (2005) Critical factors in the success-
ful implementation of knowledge management, Journal of Knowl-
edge Management Practice (Electronic edition: http://www.tlv.
ainc.com/articl90.htm).
Choo, C. W. (1998) The Knowing Organization, Oxford University Press,
New York, NY.
Choo, C. W., Detlor, B. and Turnbull, D. (2000) Web Work : Information
Seeking and Knowledge Work on the World Wide Web, Kluwer Aca-
demic Publishers, Dordrecht.
Choquet-Bruhat, Y. and DeWitt-Morette, C. (1982) Analysis, Manifolds
and Physics, Part 1: Basics, Elsevier, Amsterdam.
Christensen, D. (2007) Epistemology of disagreement: The good news,
Philosophical Review, v. 116, pp. 187–217.
Christensen, D. (2009) Disagreement as evidence: The epistemology of con-
troversy, Philosophy Compass, v. 4, pp. 756–767.
Chua, A. and Lam, W. (2005) Why KM projects fail: a multi-case analysis,
Journal of Knowledge Management, v. 9, No. 3, pp. 6–17.
Chudnoff, E. (2011) What intuitions are like, Philosophy and Phenomeno-
logical Research, v. 82, pp. 625–654.
Chudnoff, E. (2011a) The Nature of intuitive justification, Philosophical
Studies, v. 153, pp. 313–333.
Church, A. (1932) A set of postulates for the foundation of logic, Annals of
Mathematics, Series 2, v. 33, pp. 346–366.
Church, A. (1956) Introduction to Mathematical Logic, Princeton University
Press, Princeton.
Ciuciura, J. (2008) Frontiers of the discursive logic, Bulletin of the Section
of Logic, v. 37, No. 2, pp. 81–92.
Clancey, W. J. (1997) Situated Cognition: On Human Knowledge and
Computer Representations, Cambridge University Press, Cambridge,
UK.
Claver-Cortés, E., Zaragoza-Sáez, P. and Pertusa-Ortega, E. (2007) Orga-
nizational structure features supporting knowledge management pro-
cesses, Journal of Knowledge Management, v. 11, No. 4, pp. 45–57.
Cleveland, H. (1982) Information as a resource, The Futurist, v. 16, No. 6,
pp. 34–39.
Cleveland, H. (1985) The Knowledge Executive: Leadership in an Informa-
tion Society, Truman Talley Books, New York.
Clouston, R. (2009) Equational Logic for Names and Binders, Dissertation,
Churchill College, University of Cambridge.
Cocchiarella, N. B. (1986) Logical Investigations of Predication Theory and
the Problem of Universals, Bibliopolis, Napoli.
Cocchiarella, N. B. (2005) Denoting concepts, reference, and the logic of
names, classes as many, groups, and plurals? Linguistics and Philoso-
phy, v. 28, No. 2, pp. 135–179.
Codd, E. F. (1968) Cellular Automata, Academic Press, New York.
Codd, E. F. (1970) A relational model of data for large shared data banks,
Communications of the ACM, v. 13, No. 6, pp. 377–387.
Codd, E. (1990) Relational Model for Data Management, Addison-Wesley,
Reading, MA.
Coecke, B., Moore, D. and Wilce, A. (2000) Operational quantum logic: An
overview, Preprint in quantum physics, (arXiv:quant-ph/0008019).
Cohen, P. J. (1966) Set Theory and the Continuum Hypothesis, Benjamin,
New York.
Cohen, S. (1984) Justification and truth, Philosophical Studies, v. 46,
pp. 279–295.
Cohen, S. (2002) Basic knowledge and the problem of easy knowledge, Phi-
losophy and Phenomenological Research, v. 65, pp. 309–329.
Cohen, N. J. and Squire, L. R. (1980) Preserved learning and retention of
pattern-analyzing skill in amnesia: Dissociation of knowing how and
knowing that, Science, v. 210, pp. 207–210.
Cohen, B. and Murphy, G. L. (1984) Models of concepts, Cognitive Science,
v. 8, No. 1, pp. 27–58.
Collins, H. (1993) The structure of knowledge, Social Research, v. 60, No. 1,
pp. 95–116.
Confucius (1979) The Analects, Harmondsworth, New York.
Connelly, E. C., Zweig, D., Webster, J. and Trougakos, P. J. (2011) Knowl-
edge hiding in organizations, Journal of Organizational Behavior, v. 33,
pp. 64–88.
Connors, K. A. (1991) Chemical Kinetics: The Study of Reaction Rates in
Solution, VCH Publishers.
Cook, S. D. and Brown, J. S. (1999) Bridging epistemologies: The generative
dance between organizational knowledge and organizational knowing,
Organization Science, v. 10, No. 4.
Corcoran, J. (1998) Information-theoretic logic, in Truth in Perspective,
Ashgate, Aldershot, pp. 113–135.
Corlett, J. A. (1996) Analyzing Social Knowledge, Rowman & Littlefield,
Lanham/New York.
Cornford, F. M. (2003) Plato’s Theory of Knowledge: The Theaetetus and
The Sophist, Dover, New York.
Corry, L. (1996) Modern Algebra and the Rise Mathematical Structures,
Birkhäuser, Basel/Boston/Berlin.
Cory, G. A. (1999) The Reciprocal Modular Brain in Economics and Politics:
Shaping the Rational and Moral Basis of Organization, Exchange, and
Choice, Kluwer Academic/Plenum Publishers, New York.
Cox, D. R. (2006) Principles of Statistical Inference, Cambridge University
Press, Cambridge, UK.
Cronk, M. (2011) Social capital, knowledge sharing, and intellectual
capital in the Web 2.0 enabled world, in Leading Issues in Social
Knowledge Management, Academic Publishing International Limited,
pp. 74–87.
Croxford, M. and Chapman, R. (2005) Correctness by construction: A man-
ifesto for high-integrity software, CrossTalk, Journal of Defense Soft-
ware Engineering (Internet publication).
Cruz, F. A. O., Vilhena, S. D. S. and Cortez, C. M. (2000) Solutions of non-
linear Poisson-Boltzmann equation for erythrocyte membrane, Brazil-
ian Journal of Physics, v. 30, pp. 403–409.
Cumming, S. (2009) Names, in Stanford Encyclopedia of Philosophy.
Cunningham, D. W. (2012) A Logical Introduction to Proof, Springer, New
York.
Cunningham, W. (2004) Objects, patterns, Wiki and XP: All are systems of
names, OOPSLA 2004, Vancouver, Canada.
Cutland, N. (1988) Nonstandard Analysis and Its Applications, Mathemat-
ical Society, London.
Cuzzocrea, A. (2004) Knowledge on the web: Making web services
knowledge-aware, Proc. IEEE/WIC/ACM International Conference on
Web Intelligence (WI 2004), pp. 419–426.
Cuzzocrea, A. (2006) Combining multidimensional user models and knowl-
edge representation and management techniques for making web ser-
vices knowledge-aware, Web Intelligence and Agent Systems, v. 4, No. 3,
pp. 289–312.
Cuzzocrea, A. and Mastroianni, C. A. (2003) Reference architecture for
knowledge management-based web systems, WISE 2003, pp. 347–
354.
Da Costa, N. C. A. (1963) Calcul propositionnel pour les systèmes formels
inconsistants, Compte Rendu Academie des Sciences (Paris), v. 257,
pp. 3790–3792.
Dalal, M. (1988) Investigations into a theory of knowledge base revision:
Preliminary report, Proc. of the Seventh National Conference on Arti-
ficial Intelligence (AAAI’88), pp. 475–479.
Dale, J. (1996) Meinongian Logic. The Semantics of Existence and Nonex-
istence, Perspektiven der analytischen Philosophie, v. 11, Berlin/
New York.
Dalkir, K. (2005) Knowledge Management in Theory and Practice, Elsevier
Science Ltd, Amsterdam.
Damasio, C. V. and Pereira, L. M. (1997) A paraconsistent semantics with
contradiction support detection, Proc. of 4th Conference on Logic Pro-
gramming and Non Monotonic Reasoning (LPNMR’97), LNAI, v. 1265,
pp. 224–243.
Date, C. J., Darwen, H. and Lorentzos, N. (2002) Temporal Data & the
Relational Model, Morgan Kaufmann, San Mateo, CA.
Daum, B. (2003) Modeling Business Objects with XML Schema, Morgan
Kaufmann, San Francisco, CA.
Davenport, T. H. (1997) Information Ecology, Oxford University Press,
New York.
Davenport, T. H. and Prusak, L. (1998) Working Knowledge: How Orga-
nizations Manage What They Know, Harvard Business School Press,
Boston.
Davenport, T. H., De Long, D. W. and Beers, M. C. (1998) Suc-
cessful knowledge management projects, Sloan Management Review,
pp. 53–65.
Davies, P. (1980) Other Worlds, Simon and Schuster, New York.
Davis, E. (1990) Representations of Common-Sense Knowledge, Morgan
Kaufmann, San Mateo, CA.
Davis, P. J. and Hersh, R. (1986) The Mathematical Experience, Penguin
Books, London.
Davis, R., Shrobe, H. and Szolovits, P. (1993) What is knowledge represen-
tation? AI Magazine, v. 14, No. 1, pp. 17–33.
de Roure, D., Jennings, N. R. and Shadbolt, N. R. (2005) The semantic grid:
Past, present, and future, Proc. of the IEEE, v. 93, No. 3, pp. 669–681.
DeArmond, S. J., Fusco, M. M. and Dewey, M. (1989) Structure of the
Human Brain: A Photographic Atlas, Oxford University Press, New
York, NY, USA.
Degen, J. W. (1984) Systeme der kumulativen Logik, Philosophia Verlag.
Delgrande, J. P. and Mylopoulos, J. (1986) Knowledge Representation: Fea-
tures of Knowledge. Fundamentals of Artificial Intelligence, Springer
Verlag, Berlin/New York/Tokyo, pp. 3–38.
Dempster, A. P. (1967) Upper and lower probabilities induced by multival-
ued mappings, Annals of Mathematical Statistics, v. 38, pp. 325–
339.
Dennis, J. B., Fossen, J. B. and Linderman, J. P. (1974) Data Flow Schemes,
LNCS, 19, Springer, Berlin.
DePaul, M. and Ramsey, W. (Eds.) (1998) Rethinking Intuition: The Psy-
chology of Intuition and Its Role in Philosophical Inquiry, Rowman and
Littlefield, Lanham, MD.
Descartes, R. (1984) The Philosophical Writings of Descartes, Cottingham,
J., R. Stoothoff and D. Murdoch (Eds.), Cambridge University Press,
Cambridge.
Deutsch, D. (1985) Quantum theory, the Church-Turing principle and the
universal quantum computer, Proceedings of the Royal Society of Lon-
don, A 400, pp. 97–117.
Deutsch, M. (2009) Experimental philosophy and the theory of reference,
Mind and Language, v. 24, pp. 445–466.
Deutsch, M. (2010) Intuitions, counter-examples, and experimental philos-
ophy, Review of Philosophy and Psychology, v. 1, pp. 447–460.
Devitt, M. (2006) Intuitions in linguistics, British Journal for the Philoso-
phy of Science, v. 57, pp. 481–513.
Devitt, M. (2011) Methodology and the nature of knowing how, The Journal
of Philosophy, v. 108, No. 4, pp. 205–218.
Devitt, M. (2011a) Experimental semantics, Philosophy and Phenomeno-
logical Research, v. 82, pp. 418–435.
Dewey, J. (1938) Experience and Education, Collier-MacMillan Canada
Ltd., Toronto.
DeWitt, B. S. (1971) The many-universes interpretation of quantum
mechanics, in Foundations of Quantum Mechanics, Academic Press,
New York, pp. 167–218.
de Vey Mestdagh, C. N. J. and Burgin, M. (2015) Reasoning and decision
making in an inconsistent world: Labeled logical varieties as a tool
for inconsistency robustness, in Intelligent Decision Technologies, ser.
Smart Innovation, Systems and Technologies, R. Neves-Silva, L. C. Jain
and R. J. Howlette (Eds.), Springer, v. 39, pp. 411–438.
Dieudonné, J. (1970) The work of Nicholas Bourbaki, American Mathemat-
ical Monthly, v. 77, No. 2, pp. 134–145.
Dieudonné, J. (1975) L’abstraction et l’intuition mathématique, Dialectica,
v. 29, No. 1, pp. 39–54.
Dillon, J. T. (1988) Questioning in Science, in Questions and Questioning,
De Gruyter, New York.
Dilthey, W. (1981) Der Aufbau der geschlichtlichen Welt in den Geisteswis-
senschaften, Suhrkamp, Frankfurt am Main.
Ding, E. (2009) The platonic triad and its Chinese counterpart, Signs, v. 3,
pp. 41–56.
Dirac, P. A. M. (1928) The quantum theory of the electron, Proceedings of
the Royal Society of London, Series A, v. 117, No. 778, pp. 610–624.
Dirac, P. A. M. (1930) Note on exchange phenomena in the Thomas atom,
Proceedings of the Cambridge Philosophical Society, v. 26, pp. 376–395.
Dirac, P. (1930a) Principles of Quantum Mechanics, Clarendon Press,
Oxford.
Dirac, P. A. M. (1942) The physical interpretation of quantum mechanics,
Proceedings of the Royal Society of London, Series A, v. 180, pp. 1–39.
Dirac, P. A. M. (1974) Spinors in Hilbert Space, Plenum, New York.
Dixon, N. M. (2000) Common Knowledge: How Companies Thrive by Sharing
What They Know, Harvard Business School Press, Boston, MA, USA.
Dodig-Crnkovic, G. (2007) Knowledge generation as natural computation,
Proc. of International Conference on Knowledge Generation, in Com-
munication and Management (KGCM 2007), Orlando, Florida, USA,
pp. 25–28.
Dodig-Crnkovic, G. (2013) Rethinking knowledge. Modelling the world as
unfolding through info-computation for an embodied situated cognitive
agent, LITTERATUR OCH SPRÅK, pp. 5–27.
Dodig-Crnkovic, G. (2013a) Information, Computation, Cognition. Agency-
based Hierarchies of Levels, Preprint in Artificial Intelligence,
cs.AI/1311.0413 (electronic edition: http://arXiv.org).
Doignon, J.-P. and Falmagne, J.-Cl. (1985) Spaces for the assessment of
knowledge, International Journal of Man-Machine Studies, v. 23, No. 2,
pp. 175–196.
Doignon, J.-P. and Falmagne, J.-Cl. (1999) Knowledge Spaces, Springer
Verlag, Heidelberg.
Donnellan, K. (1972) Proper names and identifying descriptions, in Seman-
tics of Natural Language, D. Reidel Publishing Company, Dordrecht,
pp. 356–379.
Dretske, F. I. (1981) Knowledge and the Flow of Information, Basil Black-
well, Oxford.
Dretske, F. (1983) Précis of knowledge and the flow of information, Behav-
ioral Brain Sciences, v. 6, pp. 55–63.
Dretske, F. (1988) Explaining Behavior, MIT Press, Cambridge, MA.
Dretske, F. (2000) Perception, Knowledge and Belief : Selected Essays,
Cambridge University Press, Cambridge.
Dreyfus, H. L. (1973) What Computers Can’t Do, Harper&Row, New York.
Drucker, P. (1969) The Age of Discontinuity. Guidelines to our Changing
Society, Harper & Row, New York.
Duckett, J., Ozu, N., Williams, K., Mohr, S., Cagle, K., Griffin, O., Francis
Norton, F., Stokes-Rees, I. and Tennison, J. (2001) Professional XML
Schemas, Wrox Press Ltd.
Duffy, D. A. (1991) Principles of Automated Theorem Proving, John
Wiley & Sons, New York.
Duffie, D. and Singleton, K. (1999) Modeling term structures of defaultable
bonds, Review of Financial Studies, v. 12, pp. 687–720.
Dukhovny, A. and Ovchinnikov, S. (2000) Families of valued sets as media,
Proc. of the IPMU 2000 Conference, Madrid, Spain, pp. 205–212.
Dummett, M. (1973) The philosophical basis of intuitionistic logic, in Logic
Colloquium 1973, North-Holland, pp. 5–40.
Dunford, N. and Schwartz, J. (1958) Linear Operators, Interscience Pub-
lishers, New York.
Dung, P. M. (1995) On the acceptability of arguments and its fundamen-
tal role in non-monotonic reasoning, logic programming and n-person
games, Artificial Intelligence, v. 77, pp. 321–357.
Durkheim, E. (1984) The Division of Labor in Society, Free Press, New
York.
Dylla, M., Sozio, M. and Theobald, M. (2011) Resolving temporal conflicts
in inconsistent RDF knowledge bases, BTW 2011, pp. 474–493.
Earlenbaugh, J. and Molyneux, B. (2009) Intuitions are inclinations to
believe, Philosophical Studies, v. 145, pp. 89–109.
Easterbrook, S. M. (1996) Learning from inconsistency, Proc. of 8th Inter-
national Workshop on Software Specification and Design (IWSSD-8),
Paderborn, Germany, IEEE Computer Society Press, Silver Spring,
MD, pp. 136–140.
Eco, U. (1976) A Theory of Semiotics, Macmillan, London.
Eco, U. (1984) Semiotics and the Philosophy of Language, Indiana Univer-
sity Press, Bloomington.
Eco, U. (1990) The Limits of Interpretation, Indiana University Press,
Bloomington, IN.
Edwards, J. S. (2009) Business process and knowledge management, in
Encyclopedia of Information Science and Technology, v. 1, IGI Global,
Hershey, PA, pp. 471–476.
Edwards, J. S. (2011) A process view of knowledge management: It ain’t
what you do, it’s the way you it, Electronic Journal of Knowledge Man-
agement , v. 9, No. 4, pp. 297–306.
Ehrig, H. and Mahr, B. (1985) Fundamentals of Algebraic Specification 1:
Equations and Initial Semantics, EACTS Monographs on Theoretical
Computer Science, v. 6, Springer-Verlag.
Ehrig, H. and Mahr, B. (1990) Fundamentals of Algebraic Specification 2:
Module Specifications and Constraints, EATCS Monographs on Theo-
retical Computer Science, v. 21, Springer-Verlag.
Einstein, A. (1915) Die Feldgleichungen der Gravitation, Sitzungsberichte
der Preussischen Akademie der Wissenschaften zu Berlin, pp. 844–847.
Ekinge, R. and Lennartsson, B. (2000) Organizational knowledge as a basis
for the management of development projects, Accepted to Discovering
Connections: A Renaissance Through Systems Learning Conference,
Dearborn, Michigan.
Elder, L. and Paul, R. (2007) Universal Intellectual Standards, The
Critical Thinking Community (electronic edition: http://www.
criticalthinking.org/pages/universal-intellectual-standards/527).
Eliot, T. S. (1934) The Rock, Faber & Faber.
Elmasri, R. and Navathe, S. B. (2000) Fundamentals of Database Systems,
Addison-Wesley Publishing Company, Reading, Massachusetts.
Epictetus (2008) Discourses and Selected Writings, (translated by R. Dob-
bin), Penguin Classics, Oxford.
Erdelyi, A. (2013) Operational Calculus and Generalized Functions, Dover
Books on Mathematics, New York.
Ershov, A. P. (1977) Introduction to Theoretical Programming, Nauka,
Moscow (in Russian).
Ershov, Yu. and Samochvalov, K. (1984) On a new approach to the philos-
ophy of mathematics, Computing Systems, v. 101, pp. 141–148.
Etzkowitz, H. and Leydesdorff, L. (1995) The triple helix — university–
industry–government relations: A laboratory for knowledge based eco-
nomic development, EASST Review, v. 14, pp. 14–19.
Etzkowitz, H. and Leydesdorff, L. (1997) Universities and the
Global Knowledge Economy: A Triple Helix of University–Industry–
Government Relations, Pinter, London.
Etzkowitz, H. and Leydesdorff, L. (1998) The endless transition: A “triple
helix” of university–industry–government relations, Minerva, v. 36,
pp. 203–208.
Etzkowitz, H. and Leydesdorff, L. (2000) The dynamics of innovation:
From national systems and ‘Mode 2’ to a triple helix of university–
industry–government relations, Research Policy, v. 29, No. 2,
pp. 109–123.
Etzkowitz, H., Webster, A., Gebhardt, C. and Terra, B. R. C. (2000) The
future of the university and the university of the future: Evolution of
ivory tower to entrepreneurial paradigm, Research Policy, v. 29, No. 2,
pp. 313–330.
Evans, G. (1973) A causal theory of names, Proceedings of the Aristotelian
Society, Supplementary v. 47, pp. 187–208.
Everett, H. (1957) ‘Relative State’ formulation of quantum mechanics,
Reviews of Modern Physics, v. 29, pp. 454–462.
Everett, H. (1957a) On the Foundations of Quantum Mechanics, Ph.D.
thesis, Department of Physics, Princeton University, Princeton.
Everett, A. and Hofweber, T. (Eds.) (2000) Empty Names, Fiction and the
Puzzles of Non-Existence, CSLI Publications, Stanford.
Fabrikant, V. (1985) Some pedagogical ideas of J.C. Maxwell, in Maxwell
and the Development of Physics in 19th–20th centuries, Nauka,
Moscow, pp. 185–189 (in Russian).
Fallis, D. (2004) On verifying the accuracy of information, Library Trends,
v. 52, No. 3, pp. 463–487.
Fantl, J. (2009) Knowing-how and knowing-that, Philosophy Compass, v. 3,
No. 3, pp. 451–470.
Fantl, J. (2012) Knowledge How, The Stanford Encyclopedia of Philosophy
(Internet Edition: http://plato.stanford.edu/archives/).
Fantl, J. and McGrath, M. (2009) Knowledge in an Uncertain World,
Oxford University Press, Oxford.
Fauconnier, G. and Turner, M. (2002) The Way we Think : Conceptual
Blending and the Mind’s Hidden Complexities, Basic Books, New York.
Faye, J., Scheffler, U. and Urchs, M. (Eds.) (2000) Things, Facts and Events,
Rodopi, Amsterdam.
Fayyad, U. M., Piatetsky-Shapiro, G. and Smyth, P. (1996) From data
mining to knowledge discovery: An overview, in Advances in Knowledge
Discovery And Data Mining, AAAI Press/The MIT Press, Menlo Park,
CA, pp. 1–34.
Feest, U. (2005) Operationism in psychology: What the debate is about,
what the debate should be about, Journal of the History of the Behav-
ioral Sciences, v. 41, No. 2, pp. 131–149.
Feigenbaum, E. and McCorduck, P. (1983) The Fifth Generation, Addison
Wesley, Reading, MA.
Feldman, R. (2007) Reasonable religious disagreements, in Philosophers
without Gods, L. Antony (Ed.), Oxford University Press, Oxford.
Feng, J. and Hu, W. (2002) Some considerations for a semantic analysis
of conceptual data schemata, in Systems Theory and Practice in the
Knowledge Age, G. Ragsdell, D. West and J. Wilby (Eds.), Kluwer
Academic/Plenum Publishers, New York.
Feynman, R. P. (1948) Space–time approach to non-relativistic quantum
mechanics, Reviews of Modern Physics, v. 20, pp. 367–387.
Feynman, R. P. (1949) The theory of positrons, Physical Review, v. 76,
pp. 749–759.
Feynman, R. P. (1950) The concept of probability theory in quantum
mechanics, in The Second Berkeley Symposium on Mathematical Statis-
tics and Probability Theory, University of California Press, Berkeley,
California.
Feynman, R. P. (1987) Negative probability, in Quantum Implications:
Essays in Honour of David Bohm, Routledge & Kegan Paul Ltd,
London & New York, pp. 235–248.
Field, H. (1989) Realism, Mathematics and Modality, Blackwell, New York.
Fillmore, C. J. (1963) The position of embedding transformations in a gram-
mar, Word, v. 19, pp. 208–231.
Fillmore, C. J. (1976) Frame semantics and the nature of language, Annals
of the New York Academy of Sciences: Conference on the Origin and
Development of Language and Speech, v. 280, pp. 20–32.
Fillmore, C. J. (1982) Frame semantics, in Linguistics in the Morning Calm,
Seoul, Hanshin Publishing Co., pp. 111–137.
Findler, N. V. (Ed.) (1979) Associative Networks: Representation and Use
of Knowledge by Computers, Academic Press, New York.
Fisch, M. and Turquette, A. (1966) Peirce’s triadic logic, Transactions of
the Charles S. Peirce Society, v. 2, No. 2, pp. 71–85.
Fisher, R. A. (1955) Statistical methods and scientific induction, Journal
of the Royal Statistical Society, Series B, v. 17, pp. 69–78.
Fisher, R. A. (1956) Statistical Methods and Scientific Inference, Oliver and
Boyd, Edinburgh and London.
Fishman, G. S. (1978) Principles of Discrete Event Simulation, Wiley,
New York.
Fitch, F. (1974) Elements of Combinatory Logic, Yale University Press,
Yale.
Flake, G. W. (1998) The Computational Beauty of Nature: Computer Explo-
rations of Fractals, Chaos, Complex Systems, and Adaptation, MIT
Press, Cambridge, MA.
Fogel, L., Owens, A. J. and Walsh, M. J. (1966) Artificial Intelligence through
Simulated Evolution, John Wiley & Sons, Inc., New York, NY.
Fogolari, F., Zuccato, P., Esposito, G. and Viglino, P. (1999) Biomolecu-
lar electrostatics with the linearized Poisson–Boltzmann equation, Bio-
physical Journal, v. 76, pp. 1–16.
Fokkink, W. J. (2000) Introduction to Process Algebra, Texts in Theoretical
Computer Science, An EATCS Series, Springer.
Foray, D. (2004) The Economics of Knowledge, MIT Press, Cambridge,
MA.
Foray, D. and Lundvall, B.-A. (1996) The knowledge-based economy: From
the economics of knowledge to the learning economy, in OECD Doc-
uments: Employment and Growth in the Knowledge-Based Economy,
OECD, Paris, pp. 11–32.
Ford, K. M. and Bradshaw, J. M. (1993) Knowledge Acquisition as Model-
ing, John Wiley & Sons, Inc., New York, NY.
Forsyth, P. A., Vetzal, K. R. and Zvan, R. (2001) Negative Coefficients in
Two Factor Option Pricing Models, Working Paper (electronic edition:
http://citeseer.ist.psu.edu/435337.html).
Foucault, M. (1966) Les mots et les Choses — une archéologie des sciences
humaines, Gallimard, Paris.
Fraenkel, A. A. and Bar-Hillel, Y. (1958) Foundations of Set Theory, North
Holland P.C., Amsterdam.
Frank, P. G. (Ed.) (1956) The Validation of Scientific Theories, Beacon
Press, Boston.
Frascella, A. and Guido, C. (2008) Transporting many-valued sets along
many-valued relations, Fuzzy Sets and Systems, v. 159, No. 1, pp. 1–22.
Frawley, W. J., Piatetsky-Shapiro, G. and Matheus, C. (1991) Knowl-
edge Discovery in Databases: An Overview, in Knowledge Discovery
in Databases, AAAI Press/MIT Press, Cambridge, MA, pp. 1–30.
Fredkin, E. and Toffoli, T. (1982) Conservative logic, International Journal
of Theoretical Physics, v. 21, No. 3–4, pp. 219–253.
Freeman, C. (1982) The Economics of Industrial Innovation, Penguin,
Harmondsworth.
Freeman, A. and DeWolf, R. (1992) The 10 Dumbest Mistakes Smart People
Make and How to Avoid Them, Harper Collins Publ., New York, NY, US.
Frenken, K. (2005) Innovation, Evolution and Complexity Theory, Edward
Elgar, Cheltenham, UK/Northampton, MA.
Fricke, M. (2008) The Knowledge Pyramid: A Critique of the DIKW Hierarchy, Preprint (electronic edition: http://dlist.sir.arizona.edu/2327/).
Frieden, R. B. (1998) Physics from Fisher Information, Cambridge Univer-
sity Press, Cambridge.
Frieden, B. R. (2004) Science from Fisher Information: A Unification,
Cambridge University Press, Cambridge.
Frieden, B. R. and Soffer, B. H. (1995) Lagrangians of physics and the game
of Fisher-information transfer, Physical Review E 52, 2274.
Frieden, B. R., Plastino, A. and Soffer, B. H. (2001) Population genetics
from an information perspective, Journal of Theoretical Biology, v. 208,
pp. 49–64.
Friedman, N. and Halpern, J. Y. (1994) A knowledge-based framework
for belief change, part II: Revision and update, Proc. of the Fourth
International Conference on the Principles of Knowledge Representa-
tion and Reasoning (KR’94), pp. 190–200.
Frege, G. (1891) Funktion und Begriff, Hermann Pohle, Jena.
Frege, G. (1892) Über Begriff und Gegenstand, Vierteljahrsschrift für wis-
senschaftliche Philosophie, v. 16, pp. 192–205.
Frege, G. (1892a) Über Sinn und Bedeutung, Zeitschrift für Philosophie
und philosophische Kritik, v. 100, pp. 25–50.
Frost, R. A. (1986) Introduction to Knowledge Base Systems, Collins.
Fujigaki, Y. (1998) Filling the gap between discussions on science
and scientists’ everyday activities: Applying the autopoiesis system
theory to scientific knowledge, Social Science Information, v. 37, No. 1,
pp. 5–22.
Gabbay, D. M. (1993) Restrictive access logics for inconsistent information,
in ECCSQARU, Lecture Notes in Computer Science, Springer, Berlin,
pp. 137–144.
Gabbay, D. M. (1994) Labelled deductive systems and the informal fallacies,
Proc. of 3rd International Conference on Argumentation, v. 2: Analysis
and Evaluation, International Society for the Study of Argumentation,
pp. 308–319.
Gabbay, D. M. (1996) Labelled Deductive Systems, Oxford Logic Guides,
v. 33, Clarendon Press/Oxford Science Publications, Oxford.
Gabbay, D. (1999) Fibring Logics, Clarendon Press, Oxford.
Gabbay, D. M. (2002) A theory of hypermodal logics: Mode shifting in
modal logic, Journal of Philosophical Logic, v. 31, No. 3, pp. 211–243.
Gabbay, D. M. and Hunter, A. (1991) Making inconsistency respectable,
part I, Proc. of Fundamental of Artificial Intelligence Research (FAIR
’91), LNAI, Springer-Verlag, v. 535, pp. 19–32.
Gabbay, D. M. and Hunter, A. (1993) Making inconsistency respectable,
part II, Proc. of Euro Conference on Symbolic and Quantitive
Approaches to Reasoning and Uncertainity, LNCS, Springer-Verlag,
v. 747, pp. 129–136.
Gabbay, D. M. and Malod, G. (2002) Naming worlds in modal and temporal
logic, Journal of Logic, Language and Information, v. 11, pp. 29–65.
Gabbay, D. M. and Queiroz, R. J. G. B. (1992) Extending the Curry-
Howard interpretation to linear, relevance and other resource logics,
Journal of Symbolic Logic, v. 56, pp. 1129–1140.
Gabbey, A. (1995) The Pandora’s box model of the history of philosophy,
Etudes Maritainiennes — Maritain Studies, v. 11, pp. 61–74.
Gackowski, Z. J. (2004) What to teach business students in MIS courses
about data and information, Issues in Informing Science & Information
Technology; v. 1, pp. 845–867.
Gebert, H., Geib, M., Kolbe, L. and Riempp, G. (2002) Towards Customer
Knowledge Management: Integrating Customer Relationship Manage-
ment and Knowledge Management Concepts, Institute of Information
Management, University of St. Gallen, St. Gallen.
Goldman, A. (1989) Metaphysics, mind and mental science, Philosophical
Topics, v. 17, pp. 131–145.
Goldman, A. (1992) Cognition and modal metaphysics, in Liaisons: Philos-
ophy Meets the Cognitive and Social Sciences, A. Goldman (Ed.), MIT
Press, Cambridge, MA.
Goldman, A. I. (1999) A Priori warrant and naturalistic epistemology, in
Philosophical Perspectives, v. 13, pp. 1–28.
Goldman, A. (2007) Philosophical Intuitions: Their Target, Their Source,
and Their Epistemic Status, Grazer Philosophische Studien, v. 74,
pp. 1–26.
Google: Knowledge Graph (http://searchengineland.com/library/google/
google-knowledge-graph).
de Gooijer, J. (2000) Designing a knowledge management performance frame-
work, Journal of Knowledge Management, v. 4, No. 4, pp. 303–310.
Guptara, P. (1999) Why knowledge management fails: how to avoid the
common pitfalls, Knowledge Management Review, v. 9, pp. 26–29.
Gurteen, D. (2012) Introduction to leading issues in social knowledge man-
agement — A brief and personal history of Knowledge Management! in
Leading Issues in Social Knowledge Management, Academic Publishing
International Limited, pp. iii–viii.
Gaede, W. (2003) What is an object? Apeiron, v. 10, No. 1, pp. 15–31.
Gagne, R. M. (1985) The Conditions of Learning and Theory of Instruction,
Holt, Rinehart, and Winston, New York.
Gagne, R. M. (1986) Instructional technology: The research field, Journal
of Instructional Development, v. 8, No. 3, pp. 7–14.
Ganeri, J. (Ed.) (2001) Indian Logic: A Reader, Routledge Curzon,
New York.
Ganeri, J. (2004) Indian logic, in Greek, Indian and Arabic Logic,
D. Gabbay and J. Woods (Eds.), Volume I of the Handbook of the
History of Logic, Elsevier, Amsterdam, pp. 309–396.
Gantenbein, R. E. and Sung, C.-O. (2001) Integrating knowledge discov-
ery and spatial data visualization for health care research, Proc. of the
ISCA 16th International Conference on Computers and their Applications, ISCA.
Ganter, B. and Wille, R. (1999) Formal Concept Analysis — Mathematical
Foundations, Springer, Heidelberg.
Gärdenfors, P. (1988) Knowledge in Flux — Modeling the Dynamic of Epis-
temic States, MIT Press, Cambridge, MA.
Gärdenfors, P. (1988a) Semantics, conceptual spaces and music, in Essays
on the Philosophy of Music (Acta Philosophica Fennica, v. 43), The
Philosophical Society of Finland, Helsinki, pp. 9–27.
Gärdenfors, P. (1990) Induction, conceptual spaces and AI, Philosophy of
Science, v. 57, pp. 78–95.
Gärdenfors, P. (1991) Frameworks for properties: possible worlds vs. con-
ceptual spaces, in Language, Knowledge and Intentionality (Acta Philo-
sophica Fennica, v. 49), The Philosophical Society of Finland, Helsinki,
pp. 383–407.
Gärdenfors, P. (1993) Induction and the evolution of conceptual spaces, in
Charles S. Peirce and the Philosophy of Science, University of Alabama
Press, Tuscaloosa, pp. 72–88.
Gärdenfors, P. (2000) Conceptual Spaces: On the Geometry of Thought,
MIT Press, Cambridge, MA.
Gärdenfors, P. (2004) Conceptual spaces as a framework for knowledge
representation, Mind and Matter, v. 2, No. 2, pp. 9–27.
Gärdenfors, P. and Rott, H. (1995) Belief revision, in Handbook of Logic in
Artificial Intelligence and Logic Programming, v. 4, Oxford University
Press, Oxford, pp. 35–132.
Garland, S. J. and Luckham, D. C. (1969) Program Schemas, Recur-
sion Schemas and Formal Languages, Report UCLA-EHG-7154, Los
Angeles.
Garner, R. T. and Rosen, B. (1967) Moral Philosophy: A Systematic Intro-
duction to Normative Ethics and Meta-ethics, Macmillan, New York.
Garrison, J. W. (1988) Hintikka, Laudan and Newton: An interrogative
model of scientific discovery, Synthese, v. 74, No. 2, pp. 45–172.
Gauld, D. B. (1974) Topological properties of manifolds, The American
Mathematical Monthly, v. 81, No. 6, pp. 633–636.
Gell-Mann, M. (1995) Remarks on simplicity and complexity, Complexity,
v. 1, No. 1, pp. 16–19.
Gentner, D. (1983) Structure-mapping: A theoretical framework for
analogy, Cognitive Science, v. 7, pp. 155–170.
Gentzen, G. (1936) Die Widerspruchsfreiheit der reinen Zahlentheorie, Math-
ematische Annalen, v. 112, pp. 493–565.
Georgi, H. (1999) Lie Algebras in Particle Physics, Perseus Books, Reading,
Massachusetts.
Gerber, A. (2010) Epistemic space/Spatial knowledge, Proc. of the ARCC/
EAAE 2010 International Conference on Architectural Research,
pp. 351–572.
Gershman, S. J., Horvitz, E. J. and Tenenbaum, J. B. (2015) Computational
rationality: A converging paradigm for intelligence in brains, minds, and
machines, Science, v. 349, pp. 273–278.
Gettier, E. (1963) Is justified true belief knowledge? Analysis, v. 23,
pp. 121–123.
Geurts, B. (1997) Good news about the description theory of names, Jour-
nal of Semantics, v. 14, pp. 319–348.
Giarratano, J. and Riley, G. (1998) Expert Systems: Principles and Pro-
gramming, PWS Pub. Co., Boston.
Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P. and
Trow, M. (1994) The New Production of Knowledge: The Dynamics of Science
and Research in Contemporary Societies, Sage, London.
Gilbert, M. A. (2008) How to Win An Argument : Surefire Strategies for
Getting Your Point Across, University Press of America, Lanham, MD.
Gillies, D. A. (1972) Operationalism, Synthese, v. 25, pp. 1–24.
Ginet, C. (1975) Knowledge, Perception, and Memory, D. Reidel Publishing
Company, Dordrecht.
Ginetsinsky, V. I. (1989) Knowledge as a Pedagogical Category, Leningrad
University Press, Leningrad (in Russian).
Girard, J.-Y. (1987) Linear logic, Theoretical Computer Science, v. 50,
No. 1, pp. 1–102.
Gladun, V. P. (1986) Growing semantic networks in adaptive problem
solving systems, Computers and Artificial Intelligence, v. 5, No. 1,
pp. 13–27.
Gladun, V. (1987) Decisions Planning, Naukova dumka, Kiev (in Russian).
Glover, F. and Kochenberger, G. A. (2003) Handbook of Metaheuristics,
Springer, International Series in Operations Research & Management
Science, Springer, New York.
Gödel, K. (1931–1932) Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I, Monatshefte für Mathematik und Physik, v. 38, No. 1, pp. 173–198.
Gödel, K. (1940) The Consistency of the Axiom of Choice and of the Gen-
eralized Continuum-Hypothesis With the Axioms of Set Theory, Prince-
ton University Press, Princeton.
Gödel, K. (1947) What is Cantor's continuum problem? first version, American Mathematical Monthly, v. 54, No. 9, pp. 515–525.
Godin, B. (2005) The knowledge-based economy: Conceptual framework or buzzword? Journal of Technology Transfer, forthcoming.
Godin, B. and Gingras, Y. (2000) The place of universities in the sys-
tem of knowledge production, Research Policy, v. 29, No. 2, pp. 273–
278.
Goertzel, K. and Winograd, T. (2008) Enhancing the Development Life
Cycle to Produce Secure Software: A Reference Guidebook on Soft-
ware Assurance, Department of Homeland Security and Department
of Defense Data and Analysis Center for Software.
Goguen, J. (2006) Mathematical models of cognitive space and time, in
Reasoning and Cognition, Proc. of the Interdisciplinary Conference on
Reasoning and Cognition, Keio University Press, pp. 125–128.
Gold, E. M. (1965) Limiting recursion, Journal of Symbolic Logic, v. 30,
No. 1, pp. 28–46.
Goldblatt, R. (1984) Topoi: The Categorial Analysis of Logic, North-
Holland P.C., Amsterdam.
Goldblatt, R. (2000) Algebraic polymodal logic: A survey, Logic Journal
of the IGPL, v. 8, No. 4, pp. 393–450.
Goldman, A. (1967) A causal theory of knowledge, The Journal of Philos-
ophy, v. 64, pp. 357–372.
Goldman, A. I. (2004) Pathways to Knowledge: Private and Public, Oxford
University Press, Oxford, UK.
Goodman, N. (1965) Fact, Fiction, and Forecast, Bobbs-Merrill, Indianapo-
lis.
Goodman, N. (1968) Languages of Art : An Approach to a Theory of Sym-
bols, Bobbs-Merrill, Indianapolis.
Gopnik, A. and Meltzoff, A. (1997) Words, Thoughts, and Theories, MIT
Press, Cambridge, MA.
Gorsky, D. P., Ivin, A. A. and Nikiforov, A. L. (1991) Brief Dictionary of
Logic, Prosveshcheniye, Moscow (in Russian).
Gowri, K. (2001) EnrXML — A schema for representing energy simula-
tion data, 7th Int. IBPSA Conference, Rio de Janeiro, Brazil, pp. 257–
261.
Granstrand, O. (1999) The Economics and Management of Intellectual
Property: Towards Intellectual Capitalism, Edward Elgar, Cheltenham,
UK.
Grant, R. M. (1996) Toward a knowledge-based theory of the firm, Strategic
Management Journal, v. 17, pp. 109–122.
Grant, J. and Hunter, A. (2006) Measuring inconsistency in knowledge
bases, Journal of Intelligent Information Systems, v. 27, pp. 159–184.
Grassberger, P. (1990) Information and complexity measures in dynamical
systems, in Information Dynamics, Plenum press, New York.
Gregg, D. G. (2010) Designing for collective intelligence, Communications
of the ACM, v. 53, No. 4, pp. 134–138.
Grene, M. (1974) The Knower and the Known, University of California
Press, Berkeley and Los Angeles.
Grice, P. (1989) Studies in the Way of Words, Harvard University Press,
Cambridge, MA.
Griffin, M. (2003) More features of the mythic spacetime algebra, Journal
of Literary Semantics, v. 32, pp. 49–72.
Griffin, M. (2006) Mythic algebra uses: Metaphor, logic, and the semiotic
sign, Semiotica, v. 158, No. 1–4, pp. 309–318.
Griffin, M. (2008) Looking behind the symbol: Mythic algebra, numbers,
and the illusion of linear sequence, Semiotica, v. 171, pp. 1–13.
Griffin, M. (2009) Semiosis, mythic algebra, and the laws of association,
Semiotica, v. 176, pp. 1–14.
Grothendieck, A. (1957) Sur quelques points d’algebre homologique, Tohoku
Mathematical Journal, v. 2, No. 9, pp. 119–221.
Grove, A. J. (1995) Naming and identity in epistemic logic, II: A first-order
logic for naming, Artificial Intelligence, v. 74, No. 2, pp. 311–350.
Grove, A. J. and Halpern, J. Y. (1993) Naming and identity in epistemic
logics, I: The propositional case, Journal of Logic and Computation,
v. 3, No. 4, pp. 345–378.
Gruber, T. (1993) A translation approach to portable ontology specifications, Knowledge Acquisition, v. 5, No. 2, pp. 199–220.
Gruziel, M., Grochowski, P. and Trylska, J. (2008) The Poisson–Boltzmann
model for tRNA. Journal of Computational Chemistry, v. 29, pp. 1970–
1981.
Guarino, N. (2008) Ontological foundations of conceptual modeling and
knowledge representation. Proc. of the Sixteenth Italian Symposium on
Advanced Database Systems (SEBD 2008), Mondello, PA, Italy.
Guarino, N. (2009) The ontological level: Revisiting 30 years of knowl-
edge representation, Conceptual Modeling: Foundations and Applica-
tions, pp. 52–67.
Guhe, M., Pease, A., Smaill, A., Martinez, M., Schmidt, M., Gust,
H., Kühnberger, K-U. and Krumnack, U. (2011) A computational
account of conceptual blending in basic mathematics, Cognitive Sys-
tems Research, v. 12, No. 3–4, pp. 249–265.
Gundry, J. (2001) Knowledge Management (electronic edition:
http://www.knowab.co.uk/kma.html).
Haack, S. (1974) Deviant Logics, Cambridge University Press, London.
Haas, A. R. (1995) An epistemic logic with quantification over names, Com-
putational Intelligence, v. 11, No. 3, pp. 460–497.
Hacking, I. (1983) Representing and Intervening, Cambridge University
Press, Cambridge.
Hailperin, T. (1984) Probability logic, Notre Dame Journal of Formal Logic,
v. 25, No. 3, pp. 198–212.
Hájek, P. (1998) Metamathematics of Fuzzy Logic, Kluwer Academic Pub-
lishers, Dordrecht.
Hájek, P. and Havranek, T. (1978) Mechanizing Hypothesis Formation:
Mathematical Foundations of General Theory, Springer, New York/
Heidelberg/Berlin.
Hall, M., Jr (1959) The Theory of Groups, The Macmillan Company, New
York.
Hall, D. G. (1999) Semantics and the acquisition of proper names, in Lan-
guage, Logic, and Concepts: Essays in Memory of John McNamara,
MIT Press, Cambridge, MA, pp. 337–372.
Halal, W. E. (1998) The Infinite Resource: Creating and Leading the
Knowledge Enterprise, Jossey-Bass Publishing, San Francisco.
Halmos, P. R. (1962) Algebraic Logic, The Macmillan Company, New York.
Halmos, P. R. (2000) An autobiography of polyadic algebras, Logic Journal
of the IGPL, v. 8, No. 4, pp. 383–392.
Halpern, J. Y. (1990) An analysis of first-order logics of probability, Arti-
ficial Intelligence Journal, v. 46, No. 3, pp. 311–350.
Halpern, J. Y. (1999) Hypothetical knowledge and counterfactual reason-
ing, International Journal of Game Theory, v. 28, pp. 315–330.
Halpern, J. Y. and Moses, Y. (1984) Knowledge and common knowledge in
distributed environment, Proc. of 3rd of ACM Conf. on Principles of
Distributed Computing, Los Angeles, CA, pp. 50–61.
Halpern, J. Y. and Moses, Y. (1985) Towards a theory of knowledge
and ignorance, in Logics and Models of Concurrent Systems, Springer-
Verlag, New York, pp. 459–476.
Halpern, J. Y. and van der Meyden, R. (2001) A logic for SDSI’s linked
local name spaces, Journal of Computer Security, v. 9, No. 1–2,
pp. 105–142.
Hamblin, C. L. (1970) Fallacies, Methuen, London.
Hamkins, J. D. and Lewis, A. (2000) Infinite time Turing machines, Journal
of Symbolic Logic, v. 65, No. 3, pp. 567–604.
Hamlyn, D. W. (1970) The Theory of Knowledge, Macmillan, London.
Hampton, J. A. (2011) Concepts and natural language, in Concepts and
Fuzzy Logic, MIT Press, Cambridge, MA, pp. 233–258.
Hand, M. (1988) Game-theoretical semantics, Montague semantics, and
questions, Synthese, v. 74, No. 2, pp. 207–222.
Hardcastle, G. L. (1995) S. S. Stevens and the origins of operationism,
Philosophy of Science, v. 62, pp. 404–424.
Hardy, L. (2001) Quantum Theory From Five Reasonable Axioms,
Preprint in quantum physics, quant-ph/0101012 (electronic edition:
http://arXiv.org).
Harel, D. (1979) First Order Dynamic Logic, Springer-Verlag, New York.
Harel, D., Kozen, D. and Parikh, R. (1982) Process logic: Expressiveness,
decidability, completeness, Journal of Computer and System Sciences,
v. 25, No. 2, pp. 144–170.
Harel, D., Kozen, D. and Tiuryn, J. (2000) Dynamic Logic, MIT Press,
Cambridge, MA.
Harland, W. B., Armstrong, R. L., Cox, A. V., Craig, L. E., Smith, A. G.
and Smith, D. G. (1990) A Geologic Time Scale, Cambridge University
Press, Cambridge.
Harrah, D. (2002) The logic of questions, in Handbook of Philosophical
Logic, v. 8, Kluwer, Dordrecht/Boston/London, pp. 1–60.
Harris, J. (1988) Developments in algebraic geometry, Proc. of the
AMS Centennial Symposium, A.M.S. Publications, Providence,
pp. 89–100.
Hartley, R. T. and Barnden, J. A. (1997) Semantic networks: visualizations
of knowledge, Trends in Cognitive Science, v. 1, No. 5, pp. 169–175.
Haug, E. G. (2004) Why so negative to negative probabilities, Wilmott
Magazine, Sep/Oct, pp. 34–38.
Hawking, S. W. (1988) A Brief History of Time: From the Big Bang to
Black Holes, Bantam Books, Toronto/New York/London.
Hawley, K. (2003) Success and knowledge-how, American Philosophical
Quarterly, v. 40, No. 1, pp. 19–31.
Hawthorne, J. and Stanley, J. (2008) Knowledge and action, Journal of
Philosophy, v. 105, No. 10, pp. 571–590.
Hayakawa, S. I. (1949) Language in Thought and Action, Harcourt, Brace
and Co., New York.
Hayakawa, S. I. (1963) Symbol, Status, and Personality, Harcourt, Brace &
World, New York.
Hayakawa, S. I. (1979) Through the Communication Barrier: On Speaking,
Listening, and Understanding, Harper & Row, New York.
Hayakawa, S. I. (Ed.) (1971) Our Language and Our World, Books for
Libraries Press, Freeport, NY.
Hayakawa, S. I. (Ed.) (1964) The Use and Misuse of Language, Fawcett
Publications, Greenwich, CT.
Head, H. and Holmes, G. (1911) Sensory disturbances from cerebral lesions,
Brain, v. 34, pp. 102–254.
Heather, M. and Rossiter, N. (2009) Fragmentary structure of global knowl-
edge: constructive processes for interoperability, Kybernetes, v. 38,
No. 7/8, pp. 1409–1418.
Hecht, M., Maier, R., Seeber, I. and Waldhart, G. (2011) Fostering adop-
tion, acceptance, and assimilation in knowledge management system
design, Proc. of the 11th International Conference on Knowledge Man-
agement and Knowledge Technologies. Graz, ACM Digital Library, New
York, pp. 1–8.
Hegel, G. W. F. (1813) Wissenschaft der Logik, bd. 1&2, Schrag, Nürnberg.
Heisenberg, W. (1931) Über die inkohärente Streuung von Röntgenstrahlen,
Physik. Zeitschr., v. 32, pp. 737–740.
Heisenberg, W. (1971) Physics and Beyond, George Allen & Unwin,
London.
Held, L. and Bové, D. S. (2014) Applied Statistical Inference — Likelihood
and Bayes, Springer, New York.
Hersh, R. (1995) Fresh breezes in the philosophy of mathematics, American
Mathematical Monthly, v. 102, pp. 589–594.
Hersh, R. (1997) What is Mathematics, Really? Oxford University Press,
Oxford.
Hersh, R. (Ed.) (2005) 18 Unconventional Essays on the Nature of Mathe-
matics, Springer, New York.
Hempel, C. G. (1966) Philosophy of Natural Science, Prentice-Hall,
Englewood Cliffs, N.J.
Hennessy, M. (1988) Algebraic Theory of Processes, MIT Press, Cambridge,
Massachusetts.
Herbert, N. (1987) Quantum Reality: Beyond the New Physics, Anchor
Books, New York.
Hereth, J., Stumme, G., Wille, R. and Wille, U. (2000) Conceptual knowl-
edge discovery in data analysis, in Conceptual Structures: Logical, Lin-
guistic and Computational Issues, LNAI, v. 1867, Springer, Berlin,
pp. 421–437.
Herrmann, N. (1990) The Creative Brain, Brain Books, Lake Lure, North
Carolina.
Herrlich, H. and Strecker, G. E. (1973) Category Theory, Allyn and Bacon
Inc., Boston.
Herzberger, H. H. (1982) Three systems of Buddhist logic, in Matilal and
Evans (Eds.), pp. 59–76.
Hetherington, S. (2006) How to know (that knowledge-that is knowledge-
how), in Epistemology Futures, S. Hetherington (Ed.), Oxford Univer-
sity Press, Oxford, pp. 71–94.
Heuring, V. P. and Jordan, H. F. (1997) Computer Systems
Design and Architecture, Addison Wesley Longman, Inc., Menlo
Park/Reading/Harlow.
Heyderhoff, P. and Hildebrand, T. (1973) Informationsstrukturen, eine
Einführung in die Informatik, B. I.-Wissenschaftsverlag, Mannheim/
Wien/Zürich.
Heylighen, F. (1996) What is complexity? Principia Cybernetica (http://
pespmc1.vub.ac.be/COMPLEXI.html).
Hilbert, D. and Cohn-Vossen, S. (1952) Geometry and the Imagination,
Chelsea, New York.
Hilpinen, R. (Ed.) (1971) Deontic Logic: Introductory and Systematic Read-
ings, Reidel, Boston.
Hindley, R., Lercher, B. and Seldin, J. (1972) Introduction to Combinatory
Logic, Cambridge University Press, Cambridge.
Hintikka, J. (1968) The varieties of information and scientific explanation,
in Logic, Methodology and Philosophy of Science, B. van Rootselaar and
J. F. Staal (Eds.), Amsterdam, The Netherlands, v. III, pp. 311–331.
Hintikka, J. (1970) Surface Information and depth information, in Infor-
mation and Inference, Synthese Library, Humanities Press, New York,
pp. 263–297.
Hintikka, J. (1971) On defining information, Ajatus, v. 33, pp. 271–273.
Hintikka, J. (1973) Surface semantics: Definition and its motivation, in
Truth, Syntax, and Modality, pp. 127–147.
Hintikka, J. (1973a) Logic, Language-Games and Information, Clarendon,
Oxford.
Hintikka, J. (1976) The semantics of questions and the questions of seman-
tics, Acta Philosophica Fennica, v. 28, pp. 302–315.
Hintikka, J. (1988) What is the logic of experimental inquiry? Synthese,
v. 74, No. 2, pp. 173–190.
Hintikka, J. (1999) Inquiry as Inquiry: A Logic of Scientific Discovery,
Kluwer, Dordrecht/Boston/London.
Hintikka, J. (2003) The notion of intuition in Husserl, Revue internationale
de philosophie, No. 224, pp. 57–79.
Hintikka, J. (2005) Knowledge and Belief: An Introduction to the Logic of
the Two Notions, Kings College Publications, New York.
Hintikka, J. (2007) Formal Ontology and Conceptual Realism, Springer,
Dordrecht.
Hintikka, J. and Hintikka, M. (1988) The Logic of Epistemology and the
Epistemology of Logic: Selected Essays, Kluwer Academic Publishers,
Dordrecht/Boston/London.
Hintikka, J. and Sandu, G. (1989) Informational Independence as a Seman-
tical Phenomenon, in Logic, Methodology and Philosophy of Science,
Vol. 8, J. E. Fenstad, I. T. Frolov and R. Hilpinen (Eds.), Amsterdam:
Elsevier, pp. 571–589.
Hjelmslev, L. (1958) Dans quelle mesure les significations des mots peuvent-elles être considérées comme formant une structure? Proc. of the
8th International Congress of Linguistics, Oslo, pp. 636–654.
Hjelmslev, L. (1963) Prolegomena to a Theory of Language, University of
Wisconsin Press, Madison.
Hjørland, B. (2007) Semantics and knowledge organization, Annual Review
of Information Science and Technology, v. 41, No. 1, pp. 367–405.
Hoare, C. A. R. (1969) An axiomatic basis for computer programming,
Communications of ACM, v. 12, pp. 576–580, 583.
Hoare, C. A. R. (1985) Communicating Sequential Processes, Prentice Hall
International Series in Computer Science, Prentice-Hall International,
UK, Ltd.
Hodges, W. (1997) Compositional semantics for a language of imperfect
information, Logic Journal of the IGPL, v. 5, pp. 539–563.
Hodges, W. (1997a) Some Strange Quantifiers, in Structures in Logic
and Computer Science: A Selection of Essays in Honor of A.
Ehrenfeucht, (Lecture Notes in Computer Science, Volume 1261),
J. Mycielski, G. Rozenberg and A. Salomaa (Eds.), Springer, London,
pp. 51–65.
Hodgkin, A. L. and Huxley, A. F. (1952) A quantitative description of
membrane current and its application to conduction and excitation in
nerve, The Journal of Physiology, v. 117, No. 4, pp. 500–544.
Hodgson, G. M. and Knudsen, T. (2007) Information, complexity, and gen-
erative replication, Biology & Philosophy, v. 43, No. 1, pp. 47–65.
Hopcroft, J. E., Motwani, R. and Ullman, J. D. (2007) Introduction
to Automata Theory, Languages, and Computation, Addison Wesley,
Boston/San Francisco/New York.
Horibe, F. (1999) Managing Knowledge Workers — New Skills and Atti-
tudes to Unlock the Intellectual Capital in Your Organization, John
Wiley & Sons, New York.
Hornsby, J. (2012) Ryle’s knowing-how, and knowing how to act, in Know-
ing How: Essays on Knowledge, Mind, and Action, J. Bengson and
M. A. Moffett (Eds.), Oxford University Press, Oxford, pp. 80–100.
Horst, S. (1996) Symbols, Computation and Intentionality: A Critique of
the Computational Theory of Mind, University of California Press,
Berkeley, CA.
Horwich, P. (1998) Meaning, Oxford University Press, Oxford.
Howlett, P. and Morgan, M. S. (2010) How Well Do Facts Travel?: The
Dissemination of Reliable Knowledge, Cambridge University Press,
Cambridge.
Howson, C. (2003) Probability and logic, Journal of Applied Logic, v. 1,
Nos. 3–4, pp. 151–165.
Hu, W. and Feng, J. (2006) Data and information quality: An information-
theoretic perspective, Proc. of the 2nd International Conference on
Information Management and Business (IMB), Sydney, Australia,
pp. 482–491.
Huemer, M. (2001) Skepticism and the Veil of Perception, Rowman and
Littlefield, Lanham, MD.
Huemer, M. (2005) Moral Intuitionism, Palgrave Macmillan, New York.
Hughes, G. and Cresswell, M. (1968) An Introduction to Modal Logic,
Methuen, London.
Hume, D. (1772/1999) An Enquiry concerning Human Understanding,
Oxford Philosophical Texts, Oxford University Press, Oxford.
Hunter, G. (1971) Metalogic: An Introduction to the Metatheory of Standard
First-Order Logic, University of California Press.
Hunter, A. and Liu, W. (2010) A survey of formalisms for representing and
reasoning with scientific knowledge, Knowledge Engineering Review,
v. 25, No. 2, pp. 199–222.
Hutchins, R. (1968) The Learning Society, Penguin, London.
Husén, T. (1974) The Learning Society, Methuen, London.
Husemöller, D. (1994) Fibre Bundles, Springer Verlag, Berlin/
Heidelberg/New York.
Hyman, J. (1999) How Knowledge Works, The Philosophical Quarterly,
v. 49, No. 197, pp. 433–451.
Ianov, Y. I. (1958) On the equivalence and transformation of program
schemes, Communications of the ACM, v. 1, No. 10, pp. 8–12.
Ianov, Y. I. (1958a) On matrix program schemes, Communications of the
ACM, v. 1, No. 12, pp. 3–6.
Ianov, Y. I. (1958b) On the logical schemata of algorithms, Problems of
Cybernetics, v. 1, pp. 75–127 (in Russian).
Ilyin, V. V. (1989) Criteria of Scientific Knowledge, Vysshaya Shkola,
Moscow (in Russian).
Israel, D. and Perry, J. (1990) What is information? in Information, Lan-
guage and Cognition, University of British Columbia Press, Vancouver,
pp. 1–19.
Itti, L and Arbib, M. A. (2005) Visual salience facilitates entry into con-
scious scene representation, Proc. 9 th Annual Meeting of the Associ-
ation for the Scientific Study of Consciousness (ASSC9), Pasadena,
CA.
Jaffe, A. B. and Trajtenberg, M. (2002) Patents, Citations, and
Innovations: A Window on the Knowledge Economy, MIT Press,
Cambridge, MA.
Jacobson, V., Smetters, D. K., Thornton, J. D., Plass, M. F., Briggs, N. H.
and Braynard, R. L. (2012) Networking Named Content, Communica-
tions of the ACM, v. 55, No. 1, pp. 117–124.
Jagadish, H. V., Lakshmanan, V. S. and Srivastava, D. (1999) Revisiting
the hierarchical data model, IEICE Transactions on Information and
Systems, v. E00-A, No. 1.
Jäger, G. (1988) Induction in the elementary theory of types and names,
Proc. CSL 1987, Lecture Notes in Computer Science, v. 329, pp. 118–
128.
Jago, M. (2009) Logical information and epistemic space, Synthese, v. 167,
No. 2, pp. 327–341.
Jakobson, R. (1960) Closing statements: Linguistics and poetics, in Style
in Language, T. A. Sebeok (Ed.), MIT Press, Cambridge, pp. 350–377.
Jakobson, R. (1971) Selected Writings, Word and Language, v. II, Mouton
de Gruyter, Berlin, Germany.
Japaridze, G. (1988) The polymodal logic of provability, in Intensional Log-
ics and Logical Structure of Theories, Metsniereba, Tbilisi, pp. 16–48
(in Russian).
Japaridze, G. (1993) A generalized notion of weak interpretability and the
corresponding modal logic, Annals of Pure and Applied Logic, v. 61,
No. 1/2, pp. 113–160.
Japaridze, G. (2003) Introduction to computability logic, Annals of Pure
and Applied Logic, v. 123, pp. 1–99.
Japaridze, G. (2006) Propositional computability logic I, ACM Transac-
tions on Computational Logic, v. 7, pp. 302–330.
Japaridze, G. (2006a) Propositional computability logic II, ACM Transac-
tions on Computational Logic, v. 7, pp. 331–362.
Japaridze, G. and de Jongh, D. (1998) The logic of provability, in Handbook
of Proof Theory, Elsevier, pp. 475–546.
Jarrow, R. and Turnbull, S. (1995) Pricing derivatives on financial securities
subject to credit risk, Journal of Finance, v. L, No. 1, pp. 53–85.
Jaśkowski, S. (1948) Rachunek zdań dla systemów dedukcyjnych
sprzecznych, Studia Societatis Scientiarun Torunesis (Sectio A), v. 1,
No. 5, pp. 55–77.
Jaśkowski, S. (1949) O koniunkcji dyskusyjnej w rachunku zdań dla
systemów dedukcyjnych sprzecznych, Studia Societatis Scientiarum
Torunensis (Sectio A), v. 1, No. 8.
Jaśkowski, S. (1948/1999) A propositional calculus for inconsistent deduc-
tive systems, Logic and Logical Philosophy, v. 7, pp. 35–56.
Jaśkowski, S. (1949/1999) On the discussive conjunction in the propo-
sitional calculus for inconsistentdeductive systems, Logic and Logical
Philosophy, v. 7, pp. 57–59.
Jayatilleke, K. N. (1963) Early Buddhist Theory of Knowledge, George Allen
and Unwin Ltd., London.
Jeffrey, R. C. (1966) The Logic of Decision, Chicago University Press,
Chicago.
Johannsen, W. (2015) On semantic information in nature, Information, v. 6,
No. 3, pp. 411–431.
John, E. (1998) Reading fiction and conceptual knowledge: Philosophical
thought in literary context, Journal of Aesthetics and Art Criticism, v. 56, pp. 331–348.
Johnson, M. (1987) The Body in the Mind : The Bodily Basis of Meaning,
Imagination, and Reason, University of Chicago Press, Chicago.
De Jong, T. and Ferguson-Hessler, M. G. M. (1996) Types and qualities of
knowledge, Educational Psychologist, v. 31, No. 2, pp. 105–113.
Josephson, J. R. and Josephson, S. G. (Eds.) (1995) Abductive Inference:
Computation, Philosophy, Technology, Cambridge University Press,
Cambridge, UK.
Jung, C. (1928) On psychic energy, in On the Nature of the Psyche, Prince-
ton University Press, Princeton.
Jung, C. G. (1969) The Structure and Dynamics of the Psyche, Princeton
University Press, Princeton.
Kahn, G. (1974) The semantics of a simple language for parallel program-
ming, Proc. of the IFIP Congress 74, North-Holland Publishing Co.,
Amsterdam.
Kalfoglou, Y., Dasmahapatra, S. and Chen-Burger, Y. (2004) FCA in
Knowledge Technologies: Experiences and Opportunities, in Concept
Lattices: Second International Conference on Formal Concept Analy-
sis, LNCS, v. 2961, Springer, Berlin, pp. 252–260.
Kalfoglou, Y. and Schorlemmer, M. (2005) Using Formal Concept Analysis
and Information Flow for Modeling and Sharing Common Semantics:
lessons learnt and emergent issues, Proc. of the 13th International Con-
ference on Conceptual Structures (ICCS2005), Kassel, Germany, July
2005.
Kalish, M. L., Griffiths, T. L. and Lewandowsky, S. (2007) Iterated learn-
ing: Intergenerational knowledge transmission reveals inductive biases,
Psychonomic Bulletin & Review, v. 14, No. 2, pp. 288–294.
Kaluznin, L. A. (1959) On mathematical problem algorithmization, Prob-
lems of Cybernetics, v. 2, pp. 51–69 (in Russian).
Kamm, F. (1998) Moral intuitions, cognitive psychology, and the harming-
versus-not-aiding distinction, Ethics, v. 108, pp. 463–488.
Kandel, A. and Dick, S. (2005) Computational Intelligence in Software
Quality, Series in Machine Perception Artificial Intelligence, World Sci-
entific, Singapore, v. 63.
Kaner, C., Nguyen, H. Q. and Falk, J. (1988) Testing Computer Software,
Thomson Computer Press, Boston.
Kant, I. (1929) Critique of Pure Reason, Macmillan, London (German first
edition of original work published in 1781).
Karp, R. M. and Miller, R. E. (1969) Parallel program schemata, Journal
of Computer and System Science, v. 3, pp. 147–195.
Karpatschof, B. (2000) Human Activity: Contributions to the Anthropolog-
ical Sciences from a Perspective of Activity Theory, Dansk Psykologisk
Forlag, Copenhagen, Denmark.
Karttunen, L. (1977) Syntax and semantics of questions, Linguistics and
Philosophy, v. 1, pp. 3–44.
Katz, J. J. (1977) A proper theory of names, Philosophical Studies, v. 31,
No. 1, pp. 1–80.
Katz, V. J. (1996) Combinatorics and induction in medieval Hebrew and
Islamic mathematics, in Vita Mathematica: Historical Research and Integration with Teaching, R. Calinger (Ed.), Mathematical Association of America, Washington, D.C., pp. 99–107.
Katzoff, C. (1984) Knowing how, Southern Journal of Philosophy, v. 22,
pp. 61–67.
Kauppinen, A. (2007) The rise and fall of experimental philosophy, Philo-
sophical Explorations, v. 10, pp. 95–118.
Keenan, T. A. (1964) Computers and education, Communications of the
ACM, v. 7, No. 4, pp. 205–209.
Keil, F. (1989) Concepts, Kinds, and Cognitive Development, MIT Press,
Cambridge, MA.
Keller, R. M. (1973) Parallel program schemata and maximal paral-
lelism II: Construction of closures, Journal of the ACM, v. 20, No. 4,
pp. 696–710.
Kelley, J. (2002) Knowledge Nirvana: Achieving The Competitive Advan-
tage Through Enterprise Content Management and Optimizing Team
Collaboration, Xulon Press, Fairfax, VA.
Kendal, S. L. and Creen, M. (2007) An Introduction to Knowledge Engi-
neering, Springer, New York.
Kesh, S. and Ratnasingam, P. (2007) A knowledge architecture for IT secu-
rity, Communications of the ACM, v. 50, No. 7, pp. 103–108.
Ketelaar, E. (1997) Can we trust information? The International Informa-
tion & Library Review, v. 29, No. 3–4, pp. 333–338.
Khrennikov, A. (2009) Interpretations of Probability, Walter de Gruyter,
Berlin/New York.
Kiel, L. D. (Ed.) (2001) Knowledge Management, Organizational Intelli-
gence and Learning, and Complexity, EOLSS Publishers, Oxford.
Kiernan, V. (April 28 1995) Gravitational constant is up in the air, The
New Scientist, p. 18.
Kim, J. (1993) Supervenience and Mind, Cambridge University Press,
Cambridge.
King, J. C. (1995) Structured propositions and complex predicates, Noûs,
v. 29, pp. 516–535.
Kirby, S., Griffiths, T. and Smith, K. (2014) Iterated learning and the
evolution of language, Current Opinion in Neurobiology, v. 28, pp. 108–
114.
Kirk, J. (1999) Information in organizations: Directions for information
management, Information Research, v. 4, No. 3 (http://informationr.
net/4-3/paper57.html).
Kirkpatrick, S., Gelatt Jr., C. D. and Vecchi, M. P. (1983) Optimization by
simulated annealing, Science, v. 220, No. 4598, pp. 671–680.
Kleene, S. C. (1936) Lambda-definability and recursiveness, Duke Mathe-
matical Journal, v. 2, pp. 340–353.
Kleene, S. C. (2002) Mathematical Logic, Courier Dover Publications, New
York.
Klein, D. A. (2013) Rabbi Ishmael, Meet Jaimini: The thirteen middot of
interpretation in light of comparative law, Hakirah, v. 16, pp. 91–112.
Kleiner, S. A. (1970) Erotetic logic and the structure of scientific revolution,
British Journal of Philosophical Science, v. 21, No. 2, pp. 149–165.
Kleiner, S. A. (1988) Erotetic logic and scientific inquiry, Synthese, v. 74,
No. 1, pp. 19–46.
Klir, G. J. and Wang, Z. (1993) Fuzzy Measure Theory, Kluwer Academic
Publishers, Boston/Dordrecht/London.
Knorr-Cetina, K. (1998) Epistemics in society. On the nesting of knowledge
structures into social structures, Sociologie et sociétés: Sociology’s Sec-
ond Wind, 30.
Knuth, D. (1997) The Art of Computer Programming, v. 2: Seminumerical
Algorithms, Addison-Wesley.
Kogut, B. and Zander, U. (1992) Knowledge of the firm: Combinative capa-
bilities, and the replication of technology, Organization Science, v. 3,
No. 3, pp. 383–397.
Kohlas, J. and Stärk, R. F. (2007) Information algebras and consequence
operators, Logica Universalis, v. 1, pp. 139–165.
Kohs, G. (2014) Google’s knowledge graph boxes: Killing Wikipedia?
In Wikipediocracy (http://wikipediocracy.com/2014/01/06/googles-
knowledge-graph-killing-wikipedia/).
Koksvik, O. (2011) Intuition, Ph.D. Thesis, Australian National University.
Kolmogorov, A. (1932) Zur Deutung der intuitionistischen Logik, Mathe-
matische Zeitschrift, v. 34, pp. 58–65.
Konig, H. (2009) Measure and Integration: An Advanced Course in Basic
Procedures and Applications, Lecture Notes in Mathematics, Springer,
New York.
Konolige, K. (1988) On the relation between default and autoepistemic
logic, Artificial Intelligence, v. 35, pp. 343–382.
Koriche, F. (2001) On anytime coherence-based reasoning, in symbolic and
quantitative approaches to reasoning with uncertainty, Lecture Notes
in Computer Science, v. 2143, pp. 556–567.
Korteweg, D. J. and de Vries, G. (1895) On the change of form of
long waves advancing in a rectangular canal, and on a new type
of long stationary waves, Philosophical Magazine, v. 39, No. 240,
pp. 422–443.
Korzybski, A. (1933) Science and Sanity: An Introduction to Non-
Aristotelian Systems and General Semantics, Science Press Printing
Co., Lancaster, PA.
Kotov, V. E. (1978) Introduction to the Theory of Program Schemas, Nauka,
Novosibirsk (in Russian).
Koura, A. (1988) An approach to why-questions, Synthese, v. 74, pp. 191–
207.
Krause, E. F. (1987) Taxicab Geometry, Dover, New York.
Kripke, S. A. (1963) Semantical considerations on modal logic, Acta Philo-
sophica Fennica, v. 16.
Kripke, S. (1972) Naming and Necessity, Harvard University Press, Cam-
bridge, MA.
Kripke, S. (1979) A puzzle about belief, in Meaning and Use, (Margalit, A.
Ed.), D. Reidel Publishing Company, Boston, pp. 239–283.
Kripke, S. (1982) Wittgenstein on Rules and Private Language: An Ele-
mentary Exposition, Harvard University Press, Cambridge, MA.
Kucza, T. (2001) Knowledge Management Process Model, VTT Publica-
tions, Finland.
Kuhn, T. S. (1962) The Structure of Scientific Revolutions, University of
Chicago Press, Chicago, IL.
Kuhn, S. T. (1983) An axiomatization of predicate functor logic, Notre
Dame Journal of Formal Logic, v. 24, pp. 233–241.
Kuratowski, K. (1966) Topology, v. 1, Academic Press, Warszawa, Poland.
Kurosh, A. G. (1963) Lectures on General Algebra, Chelsea P. C., New
York.
Kurtz, J. (2011) The Development of Chinese Logic, Brill, Leiden, The
Netherlands.
Kuzmin, V. B. (1982) Building Group Decisions in Spaces of Strict and
Fuzzy Binary Relations, Nauka, Moscow (in Russian).
Kyburg, H. E. (1970) Probability and Inductive Logic, Macmillan,
New York, NY.
LaDuke, B. (2002) Anti-knowledge and ten immutable knowledge creation
laws, Proc. of the 6th World Multi-Conference on Systemics, Cybernet-
ics and Informatics (WMSCI 2002), Orlando, Florida.
Laertius, D. (1991) Lives of Eminent Philosophers, Loeb Classical Library,
Harvard University Press, Cambridge, MA.
Lakatos, I. (1976) Proofs and Refutations, Cambridge University Press,
Cambridge, UK.
Lakoff, G. (1987) Women, Fire, and Dangerous Things: What Categories
Reveal About the Mind, University of Chicago Press, Chicago.
Lambek, J. and Scott, P. J. (1988) Introduction to Higher Order Categorical
Logic, Cambridge University Press, Cambridge.
Lambert, K. (2003) Free Logic: Selected Essays, Cambridge University
Press, Cambridge.
Land, F., Land, N., Sevasti M. and Amjad, U. (2007) Knowledge manage-
ment: the darker side of KM, Ethicomp Journal, v. 3, No. 1 (http://
www.ccsr.cse.dmu.ac.uk/journal/do previous.php?prev=View+Papers
&id=5).
Landau, R. H., Bordeianu, C. C. and Paez, M. J. (2008) A Survey of Compu-
tational Physics: Introductory Computational Science, Princeton Uni-
versity Press, Princeton.
Landauer, C. (1998) Data, information, knowledge, understanding: Com-
puting up the meaning hierarchy, Proc. of the 1998 IEEE International
Conference on Systems, Man, and Cybernetics (SMC’98), San Diego,
California, pp. 2255–2260.
Landry, E. (1999) Category Theory as a framework for mathematical struc-
turalism, The 1998 Annual Proc. of the Canadian Society for the His-
tory and Philosophy of Mathematics, pp. 133–142.
Landry, E. (2006) Category theory as a framework for an in re interpre-
tation of mathematical structuralism, in The Age of Alternative Log-
ics: Assessing Philosophy of Logic and Mathematics Today, Springer
Netherlands, pp. 163–179.
Langacker, R. W. (1987) Foundations of Cognitive Grammar, v. I, Theo-
retical Prerequisites, Stanford University Press, Stanford, California.
Langacker, R. W. (1991) Foundations of Cognitive Grammar, v. II, Theo-
retical Prerequisites, Stanford University Press, Stanford, California.
Langacker, R. W. (1991a) Concept, Image, and Symbol: The Cognitive Basis
of Grammar, Mouton de Gruyter, Berlin/New York.
Larson, R. and Edwards, C. H. (2006) Calculus: An Applied Approach,
Houghton Mifflin company, Boston/New York.
Larson, R. and Ludlow, P. (1993) Interpreted logical forms, Synthese, v. 95,
pp. 305–355.
Larson, R. and Segal, G. (1995) Knowledge of Meaning, MIT Press,
Cambridge, MA.
Lassez, C., McAloon, K. and Port, G. S. (1989) Stratification and knowledge
base management, Journal of Symbolic Computation, pp. 509–522.
Lautman, A. (1938) Essai sur les notions de structure et d’ existence en
mathématique, Hermann, Paris.
Le Potier, J. (1997) Lectures on Vector Bundles, Cambridge Studies in
Advanced Mathematics, v. 54, Cambridge University Press, Cambridge.
Leatherdale, W. (1974) The Role of Analogy, Model and Metaphor in
Science, American Elsevier, New York.
Leblanc, L. (1962) Nonhomogeneous polyadic algebras, Proc. of the
American Mathematical Society, v. 13, No. 1, pp. 59–65.
Lee, J. M. (2000) Introduction to Topological Manifolds, Graduate Texts in
Mathematics 202, Springer, New York.
Lee, E. A. and Parks, T. M. (1995) Dataflow Process Networks, Proc. of
the IEEE, May 1995 (http://ptolemy.eecs.berkeley.edu/papers/
processNets).
Lee, E. A. and Sangiovanni-Vincentelli, A. (1996) Comparing models of
computation, Proc. of the 1996 IEEE/ACM International Conference
on Computer-Aided Design, IEEE Computer Society, Washington, DC,
pp. 234–241.
Lehmann, F. (Ed.) (1992) Semantic Networks in Artificial Intelligence,
Pergamon Press, Oxford.
Lehmann D. (1995) Another perspective on default reasoning, Annals of
Mathematics and Artificial Intelligence, v. 15, pp. 61–82.
Lehmann, D. (1995a) Belief revision, revised, Proc. of the Fourteenth
International Joint Conference on Artificial Intelligence (IJCAI’95),
pp. 1534–1540.
Lehmann, F. and Wille, R. (1995) A triadic approach to formal concept
analysis, in Conceptual Structures: Applications, Implementation and
Theory, Lecture Notes in Computer Science, v. 954, pp. 32–43.
Lehrer, K. (1990) Theory of Knowledge, Boulder, Colorado.
Leibniz, G. W. (1989) Philosophical Essays, Hackett, Indianapolis.
Lenski, W. (2004) Remarks on a publication-based concept of information,
in New Developments in Electronic Publishing AMS/SMM Special Ses-
sion, ECM4 Satellite Conference, Stockholm, pp. 119–135.
Leonard, H. S. and Goodman, N. (1940) The calculus of individuals and its
uses, Journal of Symbolic Logic, v. 5, pp. 45–55.
Leonard, H. S. (1956) The logic of existence, Philosophical Studies, v. 7,
pp. 49–64.
Leśniewski, S. (1929) Grundzüge eines neuen Systems der Grundlagen der
Mathematik, Fundamenta Mathematicae, v. 14, pp. 1–81.
Leśniewski, S. (1992) Podstawy ogólnej teoryi mnogosci, I, Prace Polskiego
Kola Naukowego w Moskwie, Sekcya matematyczno-przyrodnicza, 1916
(Eng. trans. by D. I. Barnett: Foundations of the General Theory of
Manifolds I, in S. Leśniewski, Collected Works, Dordrecht: Kluwer, v.
1, 1992, pp. 129–173).
Levesque, H. J. (1984) A logic of implicit and explicit belief, Proc. AAAI’84,
pp. 198–202.
Levesque, H. and Mylopoulos, J. (1979) A procedural semantics for seman-
tic networks, in Associative Networks: Representation and Use of
Knowledge by Computers, Academic Press, New York, pp. 93–120.
Levine, P. A. (1999) Waking the Tiger: Healing Trauma, Sounds True,
Boulder, CO.
Lewicki, P., Maria, C. and Hunter. H. (1987) Unconscious acquisition of
complex procedural knowledge, Journal of Experimental Psychology,
v. 13, No. 4, pp. 523–530.
Lewis, D. (1983) Philosophical Papers: Volume I, Oxford University Press,
New York.
Lewis, D. (1990) What experience teaches, in Mind and Cognition: A
Reader, W. G. Lycan (Ed.), Blackwell, Oxford, pp. 469–477.
Lewis, D. (1996) Elusive Knowledge, Australasian Journal of Philosophy,
v. 74, No. 4, pp. 549–567.
Lewontin, R. (2000). The Triple Helix: Gene, Organism, and Environment,
Harvard University Press, Cambridge, MA/London.
Leydesdorff, L. and Meyer, M. (2003) The triple helix of university-
industry-government relations: Introduction to the topical issue, Sci-
entometrics, v. 58, No. 2, pp. 191–203.
Leydesdorff, L. (2006a) The Knowledge-Based Economy: Modeled, Mea-
sured, Simulated. Universal Publishers, Boca Raton, Florida.
Leydesdorff, L. (2006b) The knowledge-based economy and the triple helix
model, in Reading the Dynamics of a Knowledge Economy, Edward
Elgar, Cheltenham, pp. 42–76.
Liew, A. (2007) Understanding data, information, knowledge and their
inter-relationships, Journal of Knowledge Management Practice, v. 8,
No. 2, pp. 21–36.
Lindsay, R. B. (1937) A critique of operationalism in physics, Philosophy
of Science, v. 4, pp. 456–470.
Liu, A. (2004) The Laws of Cool: Knowledge Work and the Culture of Infor-
mation, University of Chicago Press, Chicago.
Liu, D., Burgin, M., Karplus, W. and Valentino, D.J. (2001) Large scale flow
field visualization in medical systems, Proc. of the 16th ACM Symposium
on Applied Computing, Las Vegas, pp. 68–72.
Lloyd, S. (2000) Ultimate physical limits to computation, Nature, v. 406,
pp. 1047–1054.
Lloyd, S. (2002) Computational capacity of the universe, Physics Review
Letters, v. 88, No. 23, pp. 7901–7904.
Lobovikov, V. (1984) Scientific theory as a system of statements, problems
and intentions, in Analysis of scientific cognitive systems, Sverdlovsk,
pp. 54–58 (in Russian).
Locke, J. (1823) The Works of John Locke, A New Edition, Corrected, In
Ten Volumes, Vol. III, T. Tegg, London.
Locke, J. (1975) An Essay Concerning Human Understanding, Clarendon
Press, Oxford.
Loewenstein, W. R. (1999) The Touchstone of Life: Molecular Information,
Cell Communication, and the Foundation of Life, Oxford University
Press, Oxford/New York.
Logrippo, L. (1978) Renamings and economy of memory in program
schemata, Journal of the ACM, v. 25, No. 1, pp. 10–22.
Loomis, M. E. S. (1987) The Database Book, Macmillan, New York, NY.
Losee, R. M. (1997) A discipline independent definition of information,
Journal of the American Society for Information Science, v. 48, No. 3,
pp. 254–269.
Lomuscio, A., van der Meyden, R. and Ryan, M. (2000) Knowledge in multi-
agent systems: Initial configurations and broadcast, ACM Transactions
of Computational Logic, v. 1, No. 2, pp. 247–284.
Lotman, Y. M. (1990) Universe of the Mind: A Semiotic Theory of Culture.
(Translated by Ann Shukman) I. B. Tauris, London.
Loveland, D. W. (1978) Automated Theorem Proving: A Logical Basis, Fun-
damental Studies in Computer Science, v. 6, North-Holland Publishing.
Luchi, D. and Montagna, F. (1999) An Operational Logic of Proofs with
Positive and Negative Information, Studia Logica, v. 63, No. 1, pp. 7–25.
Łukasiewicz, J. (1920) O logice trójwartościowej, Ruch filozoficzny, v. 5,
pp. 170–171 (in Polish).
Łukasiewicz, J. and Tarski, A. (1930) Untersuchungen über den Aus-
sagenkalkül, Comptes Rendus des Séances de la Société des Sciences et
des Lettres de Varsovie, Cl. III, v. 23, pp. 30–50.
Lupasco, S. (1951) Le Principe D’antagonisme et la Logique de L’énergie,
Éditions Hermann, Paris.
Lyapunov, A. A. (1958) On the logical schemata of programs, Problems of
Cybernetics, v. I, pp. 46–74 (in Russian).
Lyons, D. M. (1986) A Formal Model of Distributed Computation for
Sensory-Based Robot Control. Dept. of Computer and Information Sci-
ences Technical Report 86-43, University of Massachusetts, Amherst
MA.
Lyons, D. M. and Arbib, M. A. (1989) A formal model of computation for
sensory-based robotics, IEEE Trans. on Robotics and Automation, v. 5,
pp. 280–293.
MacCartney, B., McIlraith, S. A., Amir, A. and Uribe, T. (2003) Practical
partition-based theorem proving for large knowledge bases, Proc. of
the Eighteenth International Joint Conference on Artificial Intelligence
(IJCAI-03), pp. 89–96.
MacCormac, E. (1976) Metaphor and Myth in Science and Religion, Duke
University Press, Durham, NC.
Machlup, F. (1962) The Production and Distribution of Knowledge in the
United States, Princeton University Press, Princeton.
Machlup, F. (1980) Knowledge and Knowledge Production, Princeton Uni-
versity Press, Princeton.
Machlup, F. and Mansfield, U. (Eds.) (1983) The Study of Information:
Interdisciplinary Messages, Wiley, New York.
Machtey, M. and Young, P. R. (1978) An Introduction to the General Theory
of Algorithms. North Holland, New York.
MacKay, D. M. (1969) Information, Mechanism and Meaning, MIT Press,
Cambridge, Massachusetts.
Mackey, G. W. (1963) The Mathematical Foundations of Quantum Mechan-
ics, W. A. Benjamin Inc., New York.
Madden, A. D. (2000) A definition of information, Aslib Proceedings, v. 52,
No. 9, pp. 343–349.
Madden, A. D. (2004) Evolution and information, Journal of Documenta-
tion, v. 60, No. 1, pp. 9–23.
Maedche, A. and Staab, S. (2001) Ontology learning for the Semantic Web,
Intelligent Systems, IEEE, v. 16, No. 2, pp. 72–79.
Magnani, L. (2001) Abduction, Reason, and Science: Processes of Discovery
and Explanation, Kluwer Academic Publishers, New York.
Magnani, L. (2007) Morality in a Technological World: Knowledge as Duty,
Cambridge University Press, New York.
Maier, R. (2002). Knowledge Management Systems: Information and Com-
munication Technologies for Knowledge Management, Springer-Verlag,
Berlin/Heidelberg.
Makinson, D. (2005) Bridges from Classical to Nonmonotonic Logic, College
Publications.
Manekin, C. H. (1992) The Logic of Gersonides: An Analysis of Selected
Doctrines, Kluwer Academic Publishers, Dordrecht.
Manetti, G. (1993) Theories of the Sign in Classical Antiquity, Indiana
University Press, Bloomington, IN.
Manin, Y. I. (1991) Course in Mathematical Logic, Springer-Verlag, New
York.
Manna, Z. and Waldinger, R. (1993) The Deductive Foundations of Com-
puter Programming, Addison-Wesley, Boston/New York/Toronto.
Manzano, M. (1993) Introduction to many-sorted logic, in Many-sorted
Logic and Its Applications, John Wiley & Sons, Inc., New York, NY,
pp. 3–86.
Mansell, R. and Wehn, U. (1998) Knowledge Societies: Information Tech-
nology for Sustainable Development, United Nations Commission on
Science and Technology for Development/Oxford University Press,
New York.
March, J. G. (1988) Technology of foolishness, in Decisions in Organiza-
tions, Blackwell, Oxford, pp. 253–265.
Marek, W. and Truszczynski, M. (1993) Nonmonotonic Logics: Context-
Dependent Reasoning, Springer-Verlag, New York.
Margenstern, M. (2002) Cellular automata in the hyperbolic plane: A sur-
vey, Romanian Journal of Information Science, v. 5, No. 1/2, pp. 155–
179.
Markman, E. M. (1989) Categorization and Naming in Children: Problems
of Induction, MIT Press, Cambridge, MA.
Markman, A. B. (2000) If you build it, will it know, Science, v. 288,
No. 5466, pp. 624–625.
Markus, L. (2001) Toward a theory of knowledge reuse: Types of knowledge
reuse situations and factors in reuse success, Journal of Management
Information Systems, v. 18, No. 1, pp. 57–93.
Martinez, A. A. (2006) Negative Math: How Mathematical Rules Can Be
Positively Bent, Princeton University Press, Princeton.
Marwick, A. D. (2001) Knowledge management technology, IBM Systems
Journal, v. 40, No. 4, pp. 814–830.
Maslov, S.Yu. (1987) A Theory of Deductive Systems and Its Applications,
Radio and Svyaz, Moscow (in Russian).
Mason, R. O. (1978) Measuring information output: A communication sys-
tems approach, Information and Management, v. 1, pp. 219–234.
Mates, B. (1953) Stoic Logic, University of California Press, Berkeley.
Matilal, B. K. (1971) Epistemology, Logic, and Grammar in Indian Philo-
sophical Analysis, Mouton and Co., The Hague.
Matilal, B. K. (1985) Logic, Language, and Reality: An Introduction to
Indian Philosophical Studies, Motilal Barnassidas, Delhi.
Matilal, B. K. (1998) The Character of Logic in India, State University of
New York Press, Albany.
Mattessich, R. (1993) On the nature of information and knowledge and the
interpretation in the economic sciences, Library Trends, v. 41, No. 4,
pp. 567–593.
Maturana, H. R. and Varela, F. J. (1980) Autopoiesis and Cognition: The
Realization of the Living, D. Reidel Publishing Company, Dordrecht.
Maturana, H. R. and Varela F. J. (1992) The Tree of Knowledge: The
Biological Roots of Human Understanding, Shambhala, Boston.
Maxion, R. A. and Olszewski, R. T. (1998) Improving software robustness
with dependability cases, Proc. of the 28th International Symposium
on Fault-Tolerant Computing, Munich, Germany, pp. 346–355.
Maxwell, J. C. (1865) A dynamical theory of the electromagnetic field,
Philosophical Transactions of the Royal Society of London, v. 55,
pp. 459–512.
McCarthy, H., Miller, P. and Sidmore, P. (Eds.) (2004) Network Logic: Who
Governs in an Interconnected World ? Demos, London, UK.
McCulloch, G. (1989) The Game of the Name: Introducing Logic, Language
and Mind, Oxford University Press, Oxford.
McDermott, D. and Doyle, J. (1980) Non-monotonic logic, I. Artificial Intel-
ligence, v. 25, pp. 41–72.
McGrath, M. (2014) Propositions, in The Stanford Encyclopedia of Philos-
ophy, E. N. Zalta (Ed.), (http://plato.stanford.edu/archives/spr2014/
entries/propositions/).
McIlraith, S. and Amir, E. (2001) Theorem proving with structured theo-
ries, Proc. of the 17th Intl’ Joint Conference on Artificial Intelligence,
(IJCAI ’01), pp. 624–631.
McKeen, J. D. and Staples, D. S. (2002) Knowledge managers: Who they are
and what they do?, in Handbook on Knowledge Management, Springer-
Verlag, New York.
McNeill, D. and Freiberger, P. (1993) Fuzzy Logic, Simon and Schuster,
New York.
Meadow, C. T. and Yuan, W. (1997) Measuring the impact of informa-
tion: Defining the concepts, Information Processing and Management,
v. 33, No. 6, pp. 697–714.
Medin, D. L. and Ortony, A. (1989). Psychological essentialism, in Simi-
larity and Analogical Reasoning, S. Vosniadou and A. Ortony (Eds.),
Cambridge University Press, Cambridge, pp. 179–195.
Medin, D. L. and Shoben, E. J. (1988) Context and structure in conceptual
combination, Cognitive Psychology, v. 20, pp. 158–190.
Meinke, K. and Tucker, J. V. (Eds.) (1993) Many-sorted Logic and Its Appli-
cations, John Wiley & Sons, Inc., New York, NY.
Meinong, A. (1904) Über Gegenstandstheorie, in Untersuchungen zur
Gegenstandstheorie und Psychologie, pp. 1–51.
Meinong, A. (Ed.) (1904a) Untersuchungen zur Gegenstandstheorie und
Psychologie.
Meinong, A. (1907) Über die Stellung der Gegenstandstheorieim System der
Wissenschaften, R. Voigtländer, Leipzig.
Meinong, A. (1910) Über Annahmen, J. A. Barth, Leipzig, Germany.
Mendelson, E. (1997) Introduction to Mathematical Logic, Chapman & Hall,
London.
Menzel, C. (1986) A Complete Type-free ‘Second Order ’ Logic and
its Philosophical Foundations, Center for the Study of Language
and Information, Technical Report #CSLI-86–40, Stanford Univer-
sity, CA.
Menzel, C. (1993) The proper treatment of predication in fine-grained inten-
sional logic, Philosophical Perspectives, v. 7, pp. 61–87.
Merrill, M. D. and Tennyson, R. D. (1977) Concept Teaching: An Instruc-
tional Design Guide, Educational Technology, Englewood Cliffs, NJ.
Merton, R. K. (1968) Social Theory and Social Structure, Free Press,
New York.
Meyer, M. (1988) The revival of questioning in the twentieth century, Syn-
these, v. 74, No. 1, pp. 5–18.
Mika, P. and Akkermans, H. (2004) Towards a New Synthesis of Ontology
Technology and Knowledge Management, Technical Report IR-BI-001,
Free University Amsterdam VUA.
Mikkulainen, R. (1993) Subsymbolic Natural Language Processing: An Inte-
grated Model of Scripts, Lexicon, and Memory, MIT Press, Cambridge
MA.
Mill, J. (1973) A System of Logic, Ratiocinative and Inductive, in The
Collected Works of J. S. Mill (Vols. 7–8), University of Toronto Press,
Toronto.
Mill, J. S. (1862) A System of Logic, v. 1, Parker, Son, and Bourn, London.
Miller, M. (2008) Cloud Computing: Web-Based Applications That Change
the Way You Work and Collaborate Online, Safari.
Milner, B. (1972) Disorders of learning and memory after temporal lobe
lesions in man, Clinical Neurosurgery, v. 19, pp. 421–446.
Milner, R. (1989) Communication and Concurrency, Prentice-Hall, Engle-
wood Cliffs, NJ.
Milner, R. (1999) Communicating and Mobile Systems: The π-calculus,
Cambridge University Press, Cambridge, UK.
Milnor, J. W. (1963) Morse Theory, Princeton University Press, Princeton,
NJ.
Milovanović, S. (2011) Aims and critical success factors of knowledge man-
agement system projects, Economics and Organization, v. 8, No. 1,
pp. 31–40.
Milton, N. R. (2007) Knowledge Acquisition in Practice, Springer, New York.
Minsky, M. (1967) Computation: Finite and Infinite Machines, Prentice-
Hall, New York/London/Toronto.
Minsky, M. (Ed.) (1968) Semantic Information Processing, MIT Press,
Cambridge, MA.
Minsky, M. (1974) A Framework for Knowledge Representation, AI Memo
No. 306, MIT, Cambridge.
Minsky, M. (1986) The Society of Mind, Simon and Schuster, New York.
Minsky, M. (1991) Society of mind: A response to four reviews, Artificial
Intelligence, v. 48, pp. 371–396.
Minsky, M. (1991a) Conscious machines, in Machinery of Consciousness,
75th Anniversary Symposium on Science in Society, National Research
Council of Canada.
Minsky, M. (2006) The Emotion Machine, Simon and Schuster, New York.
Minsky, M. (2011) Interior Grounding, Reflection, and Self-Consciousness,
in Information and Computation, World Scientific, New York/London/
Singapore, pp. 287–305.
Mishkoff, M. (1985) Understanding Artificial Intelligence, Howard W. Sams,
Indianapolis, IN.
Mittelstaedt, P. (1978) Quantum Logic, D. Reidel Publishing Company,
Dordrecht.
Mizzaro, S. (1996) On the Foundations of Information Retrieval, in Atti del
Congresso Nazionale AICA’96 (Proc. of AICA’96), Roma, IT, pp. 363–
386.
Mizzaro, S. (1998) How many relevances in information retrieval? Interact-
ing with computers, v. 10, pp. 303–320.
Mizzaro, S. (2001) Towards a theory of epistemic information, in Informa-
tion Modelling and Knowledge Bases, 12, IOS Press, Amsterdam, The
Netherlands, pp. 1–20.
Mohanta (2010) Knowledge Worker Productivity Improvement Processes,
Technologies and Techniques in Defence R&D Laboratories: An Evalu-
ative Study, Bharath University, School of Management Studies.
Moller, F. and Tofts, C. (1990) A temporal calculus of communicating
systems, CONCUR-90 — Theories of Concurrency: Unification and
Extension, LNCS, v. 458, pp. 401–415.
Molnar, A. R. (1997) Computers in education: A brief history, T.H.E. Jour-
nal, v. 24, No. 11, pp. 63–68.
Montanet, L., et al. (1994) Review of particle properties, Physical Review D,
v. 50, pp. 1173–1826.
Moore, R. (1985) Semantical considerations on nonmonotonic logic. Artifi-
cial Intelligence, v. 25, No. 1, pp. 75–94.
Morris, C. W. (1938) Foundation of the theory of signs, in International
Encyclopedia of Unified Science, v. 1, No. 2.
Morris, C. W. (1946) Signs, Language and Behavior, Prentice-Hall, Inc.,
New York.
Morris, C. W. (1964) Signification and Significance: A Study of the Rela-
tions of Signs and Values, MIT Press, Cambridge, Massachusetts.
Morris, C. W. (1971) Writings on the General Theory of Signs, Mouton,
The Hague.
Mostowski, A. (1951) On the rules of proof in the pure functional calculus
of the first order, Journal of Symbolic Logic, v. 16, pp. 107–111.
Motter, A. E., de Moura, A. P. S., Lai, Y.-C. and Dasgupta, P. (2002)
Topology of the conceptual network of language, Physical Review E, v.
65, p. 065102.
Moutafakis, N. J. (1987) The Logics of Preference: A Study of Prohairetic
Logics in Twentieth Century Philosophy, Kluwer Academic Publishers,
Dordrecht.
Moyer, A. E. (1991) P. W. Bridgman’s operational perspective on physics,
Studies in History and Philosophy of Science, v. 22, pp. 237–258 and
373–397.
Mukherji, A., Kedia, B. L., Parente, R. and Kock, N. (2004) Strate-
gies, structures and information architectures: Toward international
Gestalts, Problems and Perspectives in Management, v. 2, No. 3,
pp. 181–195.
Munindar P. S. and Nicholas M. A. (1993) A logic of intentions and beliefs,
Journal of Philosophical Logic, v. 22, pp. 513–544.
Muyeba, M. and Rybakov, V. (2014) Knowledge representation in agent’s
logic with uncertainty and agent’s interaction, Preprint in computer
Science, 1406.5495 [cs.LO] (electronic edition: http://arXiv.org).
Müller-Merbach, H. (2004) Is knowledge merely perception? Knowledge
Management Research and Practice, v. 2, pp. 200–210.
Müller-Merbach, H. (2004a) Knowledge is more than information, Knowl-
edge Management Research and Practice, v. 2, pp. 61–62.
Müller-Merbach, H. (2006) Mittelstrass’s triad: information, knowledge,
opinion, Knowledge Management Research and Practice, v. 4, pp. 331–
332.
Müller-Merbach, H. (2006a) Three kinds of knowledge, reflecting Kant’s
three kinds of action, Knowledge Management Research and Practice,
v. 4, pp. 73–74.
Murphy, G. L. and Medin, D. L. (1985) The role of theories in conceptual
coherence, Psychological Review, v. 92, pp. 289–316.
Nakashima, D. and Roué, M. (2002) Knowledge and foresight: the predictive
capacity of traditional knowledge applied to environmental assessment.
International Social Science Journal, v. 54, No. 173, (The Knowledge
Society), pp. 337–347.
Nebel, B. (1991) Belief revision and default reasoning: Syntax-based
approaches, Proc. of the Second International Conference on the Princi-
ples of Knowledge Representation and Reasoning (KR’91), pp. 417–428.
Nebel, B. (1994) Base revision operations and schemes: Semantics, repre-
sentation and complexity, Proc. of the Eleventh European Conference
on Artificial Intelligence (ECAI’94), pp. 341–345.
Neisser, U. (1967) Cognitive Psychology, Appleton-Century Crofts, New
York.
Neisser, U. (1976) Cognition and Reality, Freeman, San Francisco.
Nekrašas, E. (1987) Probabilistic Knowledge, Mintis, Vilnus, Lithuania (in
Russian).
Newell, A. (1973) Production Systems: Models of Control Structures, Visual
Information Processing, Academic Press, New York.
Newell, A. (1982) The knowledge level, Artificial Intelligence, v. 18, No. 1,
pp. 87–127.
Newell, A. (1990) Unified Theories of Cognition, The William James lec-
tures, 1987, Harvard University Press, Cambridge, MA.
Newell, A. and Simon, H. (1972) Human Problem Solving, Prentice Hall,
Englewood Cliffs, NJ.
Nguen, N. T. (2008) Inconsistency of knowledge and collective intelligence,
Cybernetics and Systems, v. 39, No. 6, pp. 542–562.
Nguen, N. T. (2008a) Advanced Methods for Inconsistent Knowledge Man-
agement, Springer Series: Advanced Information and Knowledge Pro-
cessing, Springer, New York/Heidelberg/Berlin.
Nguyen, H. T. and Walker, E. A. (1996) A First Course in Fuzzy Logic,
CRC Press, New York.
Nielson, H. R. and Nielson, F. (1995) Semantics with Applications, A For-
mal Introduction, John Wiley & Sons, Chichester, England.
Nielsen, M. and Valencia, F. D. (2004) Notes on timed CCP, Lectures on
Concurrency and Petri Nets 2003, Springer-Verlag, pp. 702–741.
Nikonov, V. A. (1974) Name and Society, Moscow (in Russian).
Nishanov, V. K. (1990) The Phenomenon of Understanding: Cognitive
Analysis, Ilim, Frunze (in Russian).
Nocedal, A. S., Gerrikagoitia Arrien, J. K. and Burgin, M. (2011) A
mathematical model for managing XML data, International Jour-
nal of Metadata, Semantics and Ontologies (IJMSO), v. 6, No. 1,
pp. 56–73.
Nonaka, I. and Takeuchi, H. (1995) The Knowledge Creating Company,
Oxford University Press, Oxford, UK.
Nonaka, I. and Toyama, R. (2003) The knowledge-creating theory revisited:
knowledge creation as a synthesising process, Knowledge Management
Research and Practice, v. 1, No. 1, pp. 2–10.
Nonaka, I. and Toyama, R. (2005) The theory of the knowledge-creating
firm: subjectivity, objectivity and synthesis, Industrial and Corporate
Change, v. 14, No. 3, pp. 419–436.
Nonaka, I., Toyama, R. and Konno, N. (2000) SECI, Ba and leadership: A
unified model of dynamic knowledge creation, Long Range Planning,
v. 33, pp. 5–34.
Norman, D. (1972) Memory, Knowledge and the answering of questions,
Loyola Symposium on Cognitive Psychology, Chicago.
North, D. (1981) Structure and Change in Economic History,
W.W. Norton, New York.
North, J. (2009) The “structure” of physics: A case study, Journal of Phi-
losophy, v. 106, pp. 57–88.
Northrop, F. S. C. (1947) The Logic of the Sciences and the Humanities,
The World Publishing Company, Cleveland/New York.
Nöth, W. (1990) Handbook of Semiotics, Indiana University Press, Bloom-
ington, IN.
Novák, V., Perfilieva, I. and Močkoř, J. (1999) Mathematical Principles of
Fuzzy Logic, Kluwer Academic Publishers, Dordrecht.
Nowotny, H., Scott, P. and Gibbons, M. (2001) Re-Thinking Science:
Knowledge and the Public in an Age of Uncertainty, Polity, Cambridge.
Ntuli, N. and Han, S. (2012) Detecting router cache snooping in Named
Data Networking, in International Conference on ICT Convergence
(ICTC), pp. 714–718.
Nuseibeh, B., Kramer, J. and Finkelstein, A. C. W. (1994) A framework
for expressing the relationships between multiple views in requirements
specification, Transactions on Software Engineering, v. 20, No. 10,
pp. 760–773.
Nuseibeh, B., Easterbrook, S. and Russo, A. (2001) Making inconsistency
respectable in software development, Journal of Systems and Software,
v. 58, No. 2, pp. 171–180.
Nute, D. (1980) Topics in Conditional Logic, D. Reidel Publishing Com-
pany, Boston.
Ogden, C. K. and Richards, I. A. (1953) The Meaning of Meaning, Rout-
ledge and Kegan, London.
Okhotin, A. (2003) Boolean grammars, Information and Computation,
v. 194, pp. 19–48.
Osherson, D. N. and Smith, E. E. (1981) On the adequacy of prototype
theory as a theory of concepts, Cognition, v. 9, pp. 35–58.
Osherson, D., Stob, M. and Weinstein, S. (1986) Systems That Learn, An
Introduction to Learning Theory for Cognitive and Computer Scientists,
MIT Press.
Osherson, D, Stob, M. and Weinstein, S. (1991) A Universal inductive
inference machine, Journal of Symbolic Logic, v. 56, No. 2, pp. 661–
672.
Osgood, C. E., Suci, G. J. and Tannenbaum, P. H. (1978) The Measurement
of Meaning, University of Illinois Press, Urbana/Chicago/London.
Osuga, S. (1989) Knowledge Processing, Mir, Moscow (Russian translation
from the Japanese).
Osuga, S. and Saeki, I. (Eds.) (1990) Knowledge Acquisition, Mir, Moscow
(Russian translation from the Japanese).
Oussalah, M. (2000) On the Qualitative/Necessity Possibility Measure, I,
Information Sciences, v. 126, pp. 205–275.
Ovchinnikov, S. (2000) Well-graded spaces of valued sets, Discrete Mathe-
matics, v. 245, No. 1, pp. 205–212.
The Oxford English Dictionary, 2nd ed. (1989) OED Online, Oxford Uni-
versity Press, Oxford (http://dictionary.oed.com/).
Pager, D. (1970) On the Efficiency of algorithms, Journal of the ACM,
v. 17, No. 4, pp. 708–714.
Papini, O. (1992) A complete revision function in propositional calculus,
Proc. of 10th European Conference on Artificial Intelligence (ECAI’92),
pp. 339–343.
Parkinson, G. H. R. (1954) Spinoza’s Theory of Knowledge, Clarendon
Press, Oxford.
Parkinson, G. H. R. (Ed.) (1968) The Theory of Meaning, Oxford University
Press, Oxford.
Parsons, C. (1990) The structuralist view of mathematical objects, Syn-
these, v. 84, pp. 303–346.
Parsons, C. (2008) Mathematical Thought and Its Objects, Cambridge Uni-
versity Press, Cambridge.
Partridge, D. and Wilks, Y. (1990) The Foundations of Artificial Intelli-
gence, Cambridge University Press, Cambridge.
Paterson, M. S. and Hewitt, C. (1970) Comparative Schematology, MIT A.I.
Lab Technical Memo No. 201 (also in Proc. of Project MAC Conference
on Concurrent Systems and Parallel Computation).
Paul, B. (2002) Complexity — the Enemy of Integration, Managing Infor-
mation Strategies (http://www.misweb.com).
Pauli, W. (1943) On Dirac’s new method of field quantization, Reviews of
Modern Physics, v. 15, No. 3, pp. 175–207.
Pauli, W. (1956) Remarks on problems connected with the renormalization
of quantized fields, Il Nuovo Cimento, v. 4, Suppl. No. 2, pp. 703–710.
Pearce, D. and Rantala, V. (1981) On a New Approach to Metascience,
Reports of the Department of Philosophy, No. 1, University of Helsinki,
Helsinki, Finland.
Pearl, J. (1988) Probabilistic Reasoning in Intelligent Systems, Morgan
Kaufmann, San Mateo, CA.
Peirce, C. (1878) How to make our ideas clear, Popular Science Monthly,
v. 12, pp. 286–302.
Peirce, C. S. (1881) On the logic of number, American Journal of Mathe-
matics, v. 4, No. 1–4, pp. 85–95.
Peirce, C. S. (1885) On the algebra of logic, American Journal of Mathe-
matics, v. 7, pp. 180–202.
Peirce, C. S. (1903) A Syllabus of Certain Topics of Logic, Alfred Mudge
& Son, Boston.
Peirce, C. S. (1931–1935) Collected Papers, v. 1–6, Harvard University
Press, Cambridge, MA.
Perez Bergliaffa, S. E., Romero, G. E. and Vucetich, H. (1998) Steps towards
an axiomatic pregeometry of space-time, International Journal of The-
oretical Physics, v. 37, pp. 2281.
Perrett, R. (Ed.) (2001) Logic and Language: Indian Philosophy, Routledge,
New York.
Perrier, J. L. (1909) The Revival of Scholastic Philosophy in the Nineteenth
Century, Columbia University Press, New York.
Petersen, Å. (1963) The philosophy of Niels Bohr, Bulletin of the Atomic
Scientists, v. 19, pp. 8–14.
Peterson, J. L. (1981) Petri Net Theory and the Modeling of Systems,
Prentice-Hall, Inc., Englewood Cliffs, NJ.
Petri, C. (1962) Kommunikation mit Automaten, Ph.D. Dissertation, Uni-
versity of Bonn, Bonn, Germany.
Pfeifer, R. and Scheier, C. (1999) Understanding Intelligence, MIT Press,
Cambridge.
Piaget, J. (1936/1952) The Origins of Intelligence in Children, International
Universities Press, New York.
Piaget, J. (1937/1954) The Construction of Reality in the Child, Basic
Books, New York.
Piaget, J. (1950/1995) Explanation in sociology, in Sociological Studies,
J. Piaget (Ed.), Routledge, New York.
Piaget, J. (1964) Mother structures and the notion of number, in Cognitive
Studies and Curriculum Development, School of Education, Cornell
University, pp. 33–39.
Piaget, J. (1964a) Development and learning, Journal of Research in Sci-
ence Teaching, v. 2, No. 3, pp. 176–186.
Piaget, J. (1967/1971) Biology and Knowledge, University of Chicago
Press, Chicago.
Piaget, J. (1971) Structuralism, Routledge and Kegan Paul, London.
Piaget, J. (2001) Studies in Reflecting Abstraction, Psychology Press,
Sussex.
Piatetsky-Shapiro, G. (1991) Knowledge discovery in real databases: A
report on the IJCAI-89 workshop, AI Magazine, v. 11, No. 5, pp. 68–70.
Pichert, I. W. and Anderson, R. C. (1977) Taking different perspectives on
a story, Journal of Educational Psychology, v. 69, pp. 309–315.
Pinkas, G. and Loui, R. P. (1992) Reasoning from inconsistency: A taxon-
omy of principles for resolving conflict, Proc. KR’92, pp. 709–719.
Pitts, A. M. (2003) Nominal logic, a first order theory of names and bind-
ing, Information and Computation, Theoretical Aspects of Computer
Software (TACS 2001), v. 186, No. 1/2, pp. 165–193.
Plato (1961) The Collected Dialogues of Plato, Princeton University Press,
Princeton.
Plotkin, B. I. (1966) Groups of Automorphisms of Algebraic Systems,
Nauka, Moscow (in Russian).
Plotkin, B. I. (1991) Universal Algebra, Algebraic Logic, and Databases,
Nauka, Moscow (in Russian).
Poincaré, H. (1902) La Science et l’hypothèse, Flammarion, Paris.
Poincaré, H. (1905) La valeur de la science, Flammarion, Paris.
Poincaré, H. (1908) Science et Méthode, Flammarion, Paris.
Poinsot, J. (1632) Tractatus de Signis, Alcala de Henares (Complutum),
Iberia.
Polanyi, M. (1958) Personal Knowledge, University of Chicago Press,
Chicago and London.
Polanyi, M. (1966) The Tacit Dimension, Routledge & Kegan Paul, London.
Polanyi, M. (1974) Personal Knowledge: Towards a Post-Critical Philoso-
phy, University of Chicago Press, Chicago.
Pollock, J. L. (1974) Knowledge and Justification, Princeton University
Press, Princeton.
Pollock, J. L. and Cruz, J. (1999) Contemporary Theories of Knowledge,
Rowman and Littlefield, Lanham/New York.
Pollock, J. L. and Gillies, A. (2000) Belief revision and epistemology, Syn-
these, v. 122, pp. 69–92.
Pollock, J. L. and Oved, I. (2005) Vision, knowledge, and the mystery link,
Philosophical Perspectives, v. 19, pp. 309–351.
Popper, K. R. (1972) The Logic of Scientific Discovery, Hutchinson,
London.
Popper, K. R. (1979) Objective Knowledge: An Evolutionary Approach,
Oxford University Press, New York.
Polya, G. (1954) Mathematics and Plausible Reasoning. Induction and
Analogy in Mathematics, Princeton University Press, Princeton.
Polya, G. (1962) Mathematical Discovery, John Wiley & Sons, New York.
Popa, C. (1976) Theory of Definition, Progress, Moscow (in Russian).
Popper, K. R. (2002) Conjectures and Refutations: The Growth of Scientific
Knowledge, Routledge, London, UK.
Porphyry, (1992) On Aristotle’s Categories, translated by S. K. Strange,
Cornell University Press, Ithaca, NY.
Pospelov, D. A. (1990) Production models, in Artificial Intelligence,
v. 2, Models and Methods, Radio and sviaz, Moscow, pp. 49–55 (in
Russian).
Post, E. L. (1921) Introduction to a general theory of elementary propositions,
American Journal of Mathematics, v. 43, pp. 163–185.
Post, E. L. (1936) Finite combinatory processes. Formulation, Journal of
Symbolic Logic, v. 1, pp. 103–105.
Post, E. L. (1943) Formal reductions of the general combinatorial decision
problem, American Journal of Mathematics, v. 65, pp. 197–215.
Potter, M. C. (1975) Meaning in visual search, Science, v. 187, pp. 965–
966.
Powell, M. (2003) Information Management for Development Organiza-
tions, Oxfam GB, Oxford.
Priest, G. (1986) Contradiction, belief, and rationality, Proceedings of the
Aristotelian Society, v. 86, pp. 99–116.
Priest, G., Routley, R. and Norman, J. (Eds.) (1989) Paraconsistent Logic:
Essays on the Inconsistent, Philosophia Verlag, München.
Pring, R. (1976) Knowledge and Schooling, Open Books, London.
Prior, M. and Prior, A. (1955) Erotetic logic, Philosophical Review, v. 64,
No. 1, pp. 43–59.
Pritchard, D. and Turri, J. (2014) The value of knowledge, in The Stan-
ford Encyclopedia of Philosophy, E. N. Zalta (Ed.), (http://
plato.stanford.edu/archives/spr2014/entries/knowledge-value/).
Probst, G., Raub, S. and Romhardt, K. (2000) Managing Knowledge: Build-
ing Blocks for Success, John Wiley& Sons, Chichester, England, UK.
Pust, J. (2000) Intuitions as Evidence, Garland/Routledge, New York.
Putnam, H. (1975) The meaning of ‘meaning’, in Language, Mind, and
Knowledge, Minnesota Studies in the Philosophy of Science (Volume 7),
University of Minnesota Press, Minneapolis, Minnesota, pp. 131–193.
Putnam, H. (1980) Models and reality, Journal of Symbolic Logic, v. 45,
No. 3, pp. 464–482.
Putnam, H. (1981) Reason, Truth, and History, Cambridge University
Press, Cambridge.
Quigley, E. J. and Debons, A. (1999) Interrogative theory of information
and knowledge, Proc. of SIGCPR ’99, ACM Press, New Orleans, pp.
4–10.
Quine, W. V. O. (1947) On universals, The Journal of Symbolic Logic, v. 12,
pp. 74–84.
Quine, W. V. O. (1954) Quantification and the empty domain, Journal of
Symbolic Logic, v. 19, pp. 177–179.
Quine, W. V. O. (1960) Word and Object : An Inquiry into the Linguistic
Mechanisms of Objective Reference, MIT Press, Cambridge, MA.
Quine, W. V. O. (1964) On what there is, in From a Logical Point of View,
Harvard University Press, Cambridge, Massachusetts, pp. 1–19.
Quine, W. V. O. (1969) Propositional objects, in Ontological Relativity and
Other Essays, Columbia University Press, New York, pp. 139–160.
Quine, W. V. O. (1976) Algebraic logic and predicate functors, in Ways
of Paradox and Other Essays, Harvard University Press, Cambridge,
Massachusetts, pp. 283–307.
Quine, W. V. O. (1981) Things and their place in theories, in Theo-
ries and Things, Harvard University Press, Cambridge, Massachusetts,
pp. 1–23.
Quine, W. V. O. (1982) Methods of Logics, Harvard University Press,
Cambridge, Massachusetts.
Rabinovitch, N. L. (1970) Rabbi Levi Ben Gershon and the origins of math-
ematical induction, Archive for History of Exact Sciences, v. 6, No. 3,
pp. 237–248.
Rao, V. S. (1998) Theories of Knowledge: Its Validity and its Sources, Sri
Satguru Publications, Delhi, India.
Rashed, R. (1994) The Development of Arabic Mathematics: Between Arith-
metic and Algebra, Boston Studies in the Philosophy of Science, v. 156,
Kluwer Academic Publishers, Dordrecht.
Rational Software (1997) UML Semantics, (http://www.rational.
com/media/uml/resources/media/ad970804 UML11 Semantics2.pdf).
Read, S. (1988) Relevant Logic, Blackwell, Oxford.
Reading, A. (2006) The biological nature of meaningful information, Bio-
logical Theory, v. 1, No. 3, pp. 243–249.
Reeves, A.M., Beamish, N.L., Anderson, R.B., and Buel, J.W. (1906)
The Norse Discovery of America, Norroena Society, London/
Stockholm/Copenhagen/Berlin/New York.
Reichenbach, H. (1932) Axiomatik der Wahrscheinlichkeitsrechnung, Math-
ematische Zeitschrift, v. 34, No. 1, pp. 568–619.
Reichenbach, H. (1935) Wahrscheinlichkeitslehre: eine Untersuchung über
die logischen und mathematischen Grundlagen der Wahrscheinlichkeit-
srechnung, Sijthoff, Leyden.
Reichenbach, H. (1947) Elements of Symbolic Logic, Macmillan, New York.
Reichenbach, H. (1949) Experience and Prediction, University of Chicago
Press, Chicago.
Reichenbach, H. (1949a) The Theory of Probability, University of California
Press, Berkeley.
Reid, L. A. (1985) Art and knowledge, British Journal of Aesthetics, v. 25,
pp. 115–224.
Reiter, R. (1980) A logic for default reasoning, Artificial Intelligence, v. 13,
pp. 81–132.
Renzl, B. (2002) Facilitating knowledge sharing and knowledge creation
through interaction analyses, Proc. of the 3 rd European Conference on
Organizational Knowledge, Learning, and Capabilities, Alba, Athens,
Greece.
Renzl, B. (2007) Language as a vehicle of knowing: The role of language and
meaning in constructing knowledge, Knowledge Management Research
& Practice, v. 5, pp. 44–53.
Rescher, N. (1976) Plausible Reasoning: An Introduction to the Theory and
Practice of Plausibilistic Inference, Van Gorcum, Assen, Amsterdam.
Rescher, N. and Manor, R. (1970) On inference from inconsistent premises,
Theory Decision, v. 1, No. 2, pp. 179–217.
Resnick, L. B. (1983) Mathematics and science learning: A new conception,
Science, v. 220, No. 4596, pp. 477–478.
Resnick, M. (1997) Mathematics as a Science of Structures, Oxford Uni-
versity Press, Oxford.
Resnik, M. D. (1999) Mathematics as a Science of Patterns,
Clarendon Press, Oxford.
Restall, G. (2000) An Introduction to Substructural Logics, Routledge, Lon-
don.
Rieger, C. (1976) An organization of knowledge for problem solving and
language comprehension, Artificial Intelligence, v. 7, No. 2, pp. 89–127.
Rivière, F., Bouchenaki, M., Daniel, J., Erdelen, W., Khan, A. W., Sané,
P., Tidjani-Serpos, N., d’Orville, H. and Lievesley, D. (2005) Towards
Knowledge Societies, UNESCO, Paris.
Roberts, D. D. (1973) The Existential Graphs of Charles S. Peirce, Mouton,
The Hague.
Robinson, A. and Voronkov, A. (Eds.) (2001) Handbook of Automated Rea-
soning, v. I & II, Elsevier and MIT Press, Cambridge, Massachusetts.
Rocchi, P. (2014) Janus-Faced Probability, Springer, New York.
Rochester, J. B. (1996) Using Computers and Information, Education and
Training, Indianapolis.
Rogers, H. (1987) Theory of Recursive Functions and Effective Computabil-
ity, MIT Press, Cambridge, Massachusetts.
Roget’s II (1995) The New Thesaurus, Third Edition, Bantam Books, New
York/London.
Rohrer, T. (2006) Image Schemata in the Brain, in From Perception to
Meaning: Image Schemas in Cognitive Linguistics, Mouton de Gruyter,
Berlin.
Roland, J. (1958) On “Knowing How” and “Knowing That”, The Philo-
sophical Review, v. 67, No. 3, pp. 379–388.
Roos, N. (1992) A logic for reasoning with inconsistent knowledge, Artificial
Intelligence, v. 57, pp. 69–103.
Roos, N. (2000) On resolving conflicts between arguments, Computational
Intelligence, v. 16, No. 3, pp. 469–501.
Rosch, E. (1973) Natural categories, Cognitive Psychology, v. 4, No. 3, pp.
328–350.
Rosch, E. and Mervis, C. B. (1975) Family resemblances: Studies in the
internal structure of categories, Cognitive Psychology, v. 7, pp. 573–
605.
Rosefeldt, T. (2004) Is knowing-how simply a case of knowing-that? Philo-
sophical Investigations, v. 27, No. 4, pp. 370–379.
Ross, T. J. (1994) Fuzzy Logic with Engineering Applications, McGraw-Hill
P. C., New York.
Rossberg, M. (2004) First-Order Logic, Second-Order Logic, and Complete-
ness, in First-order logic revisited, Logos-Verlag, Berlin.
Rothbart, D. (1997) Explaining the Growth of Scientific Knowledge:
Metaphors, Models, and Meanings, Problems in Contemporary Philos-
ophy, v. 37, The Edwin Mellen Press, Lampeter, UK.
Routley, R. (1979) The choice of logical foundations: Non-classical choices
and ultralogical choices, Studia Logica, v. 39, No. 1, pp. 77–98.
Routley, R., Plumwood, V., Meyer, R. K. and Brady, R. T. (1982) Relevant
Logics and their Rivals, Ridgeview, Atascadero, CA.
Rudavsky, T. (2015) Gersonides, The Stanford Encyclopedia of Philos-
ophy, Edward N. Zalta (Ed.), (http://plato.stanford.edu/archives/
win2015/entries/gersonides/).
Rudenko, D. I. (1986) Common Name as a Phenomenon of a Natural Lan-
guage, Izvestiya of the Academy of Sciences of the USSR, Ser. Litera-
ture and Language, v. 45, No. 1, pp. 7–9 (in Russian).
Rudenko, D. I. (1986a) “Empty Name” in Logic and Semantics of a Nat-
ural Language, in Analysis of Language Systems, History of logic and
methodology of science, (in Russian).
Rudenko, D. I. (1987) Names of natural classes, proper names and names
of nominal classes in semantics of a natural language, Izvestiya of the
Academy of Sciences of the USSR, Ser. Literature and Language, v. 46,
No. 1, pp. 8–11 (in Russian).
Rudenko, D. I. (1990) Name in Paradigms of “Language Philosophy”,
Osnova, Kharkov (in Russian).
Rudin, W. (1991) Functional Analysis, McGraw-Hill, New York.
Rumelhart, D. E. (1975) Notes on a schema for stories, in Representation
and Understanding: Studies in Cognitive Science, Academic Press, New
York, pp. 185–210.
Rumelhart, D. E. (1980) Schemata: the building blocks of cognition,
in Theoretical Issues in Reading Comprehension, Lawrence Erlbaum,
Hillsdale, NJ, pp. 38–58.
Ruse, M. (1973) The Philosophy of Biology, Hutchinson University Library,
London.
Russell, B. (1903) Principles of Mathematics, Cambridge University Press,
Cambridge.
Russell, B. (1905) On Denoting, Mind, v. 14, pp. 479–493.
Russell, B. (1908) Mathematical logic as based on the theory of types,
American Journal of Mathematics, v. 30, pp. 222–262.
Russell, B. (1912) Problems of Philosophy, Oxford University Press, Oxford.
Russell, B. (1921) Introduction to Mathematical Philosophy, George Allen
and Unwin, London.
Russell, B. (1926) Theory of Knowledge, Encyclopedia Britannica.
Russell, B. (1948) Human Knowledge: Its Scope and Limits, Simon and
Schuster, New York.
Russell, P. (1992) The Brain Book, Penguin Books, London, UK.
Russell, S. (2014) Unifying Logic and Probability: A New Dawn for AI? in
Information Processing and Management of Uncertainty in Knowledge-
Based Systems Communications in Computer and Information Science,
v. 442, pp. 10–14.
Ryle, G. (1949) The Concept of Mind, University of Chicago Press, Chicago.
Ryle, G. (1971/1946) Knowing How and Knowing That, in Collected
Papers, v. 2, Barnes and Nobles, New York, pp. 212–225.
Ryle, G. (1957) The Theory of Meaning, Allen & Unwin, London.
Sagan, H. (1992) Introduction to the Calculus of Variations, Dover, New York.
Salii, V. N. (1965) Binary L-relations, Izv. Vysh. Uchebn. Zaved., Matem-
atika, v. 44, No. 1, pp. 133–145 (in Russian).
Sassone, V., Nielsen, M. and Winskel, G. (1996) Models for concurrency:
Towards a classification, Theoretical Computer Science, v. 170, Nos. 1–
2, pp. 297–348.
Satoh, K. (1988) Nonmonotonic reasoning by minimal belief revision, Proc.
of the International Conference on Fifth Generation Computer Systems
(FGCS’88), pp. 455–462.
Sauer, T. (2006) Numerical Analysis, Pearson Education, Inc., Boston.
de Saussure, F. (1916) Cours de linguistique générale, ed. C. Bally and
A. Sechehaye, with the collaboration of A. Riedlinger, Payot, Lausanne
and Paris (English translation by W. Baskin: Course in General Lin-
guistics, Fontana/Collins, Glasgow, 1977).
de Saussure, F. (1916a) Nature of the Linguistic Sign, in Cours de linguis-
tique générale, McGraw Hill Education.
Sax, G. (2010) Having Know-How: Intellect, Action, and Recent Work
on Ryle’s Distinction Between Knowledge-How and Knowledge-That,
Pacific Philosophical Quarterly, v. 91, No. 4, pp. 507–530.
Scaruffi, P. (2011) A Brief History of Knowledge, Amazon, Kindle edition.
Schaerf, M. and Cadoli, M. (1995) Tractable reasoning via approximation,
Artificial Intelligence, v. 74, pp. 249–310.
Schank, R. C. (Ed.) (1975) Conceptual Information Processing, North-
Holland Publishing Co., Amsterdam.
Schank, R. C. (1982) Dynamic Memory, Cambridge University Press, New
York.
Schank, R. C. (1991) Tell Me a Story: A New Look at Real and Artificial
Intelligence, Simon & Schuster, New York.
Schank, R. C. and Abelson, R. P. (1977) Scripts, Plans, Goals and Under-
standing, Lawrence Erlbaum Associates, Hillsdale, NJ.
Schank, R. C. and Childers, P. G. (1984) The Cognitive Computer, Addison-
Wesley Publishing Company, Reading, Massachusetts.
Schank, R. C. and Cleary, C. (1995) Engines for Education, Erlbaum
Assoc., Hillsdale, NJ.
Schank, R. C. and Tesler, L. G. (1969) A conceptual parser for natural
language, Proc. IJCAI-69, pp. 569–578.
Schank, R. C., Kass, A. and Riesbeck, C. K. (1994) Inside Case-Based
Explanation, Lawrence Erlbaum Associates, Hillsdale, NJ.
Scharffe, F. and Ding, Y. (2006) Three Levels of Knowledge Structura-
tion for the Web, Citeulike (electronic publication: http://www.
citeulike.org/user/Sulpicus/article/1092431).
Scheffler, I. (1965) Conditions of Knowledge, Scott, Foresman & Co,
Chicago/Atlanta/Dallas.
Schensted, I. V. (1967) A Short Course on the Application of Group Theory
to Quantum Mechanics, NEO Press, Ann Arbor, MI.
Schiffer, S. (1972) Meaning, Oxford University Press, Oxford.
Schiffer, S. (1987) Remnants of Meaning, MIT Press, Cambridge, MA.
Schiffer, S. (2002) Amazing knowledge, The Journal of Philosophy, v. 99,
No. 4, pp. 200–202.
Schiffer, S. (2006) Two perspectives on knowledge of language, Philosophical
Issues, v. 16, pp. 275–287.
Schleiermacher, F. D. E. (1819) Hermeneutics, Winter, Heidelberg.
Schlesinger, G. N. (1985) The Range of Epistemic Logic, Aberdeen Univer-
sity Press.
Schlick, M. (1979/1930) On the Foundations of Knowledge, in Philosophical
Papers, vol. 2 (1925–1936), H. L. Mulder and B. F. B. van de Velde-
Schlick (Eds.), Reidel, Dordrecht, pp. 370–387.
Schmitt, F. F. (1994) Socializing Epistemology: The Social Dimensions of
Knowledge, Rowman & Littlefield, Lanham/New York.
Scholem G. G. (1995) Major Trends in Jewish Mysticism, Schoken Books,
New York.
Schonland, D. (1965) Molecular Symmetry, D. Van Nostrand, London.
Schottenloher, M. (2008) Axioms of relativistic quantum field theory, Lec-
ture Notes in Physics, v. 759, pp. 121–152.
Schreiber, A. T., Akkermans, H., Anjewierden, A., Dehoog, R., Shadbolt,
N., Vandevelde, W. and Wielinga, B. (2000) Knowledge Engineering
and Management: The CommonKADS Methodology, MIT Press, Cam-
bridge, MA.
Schrepp, M. (1999) Extracting knowledge structures from observed data,
British Journal of Mathematical and Statistical Psychology, v. 52, No. 2,
pp. 213–224.
Schultze, U. and Leidner, D. (2002) Studying knowledge management in
information systems research: Discourses and theoretical assumptions,
MIS Quarterly, v. 26, No. 3, pp. 213–242.
Schütte, K. (1960) Beweistheorie, Springer-Verlag, Berlin.
Schwanke, R. W. and Kaiser, G. E. (1988) Living with inconsistency in large
systems, Proc. of the International Workshop on Software Version and
Configuration Control, Grassau, Germany, Teubner, Stuttgart, pp. 98–
118.
Searle, J. (1958) Proper names, Mind, v. 67, No. 266, pp. 166–173.
Searle, J. R. and Vanderveken, D. (1985) Foundations of Illocutionary Logic,
Cambridge University Press, Cambridge.
Selman, B., Levesque, H. and Mitchell, D. (1992) A new method for solving
hard satisfiability problems, Proc. AAAI’92, pp. 440–446.
Sethi, S. P. (1983) Deterministic and stochastic optimization of a dynamic
advertising model, Optimal Control Application and Methods, v. 4,
No. 2, pp. 179–184.
Setzer, V. W. (1989) Computers in Education, Floris Books, Edinburgh.
Shafer, G. (1976) A Mathematical Theory of Evidence, Princeton University
Press, Princeton.
Shah, H. (1990) Types of knowledge (jnana) in jainism (http://
www.fas.harvard.edu/˜pluralsm/affiliates/jainism/article/jnana.htm).
Shannon, C. E. (1948) A mathematical theory of communication, Bell
System Technical Journal, v. 27, No. 3, pp. 379–423; No. 4, pp. 623–656.
Shapiro, S. C. (1971) A net structure for semantic information storage,
deduction and retrieval, Proc. IJCAI-71, pp. 512–523.
Sharma, C. (1994) A Critical Survey of Indian Philosophy, Motilal Banar-
sidass, Delhi.
Shoenfield, J. R. (2001) Mathematical Logic, Addison-Wesley, Reading,
Massachusetts.
Shoesmith, D. J. and Smiley, T. J. (1978) Multiple Conclusion Logic,
Cambridge University Press, Cambridge.
Sholle, D. (1999) What is information? The flow of bits and the con-
trol of chaos, MIT Communication Forum (http://web.mit.edu/comm-
forum/papers/sholle.html).
Shreider, Y. A. (1965) On the semantic characteristics of information, Infor-
mation Storage and Retrieval, v. 2, pp. 221–233.
Siegel, M. and Madnick, S. (1991) A metadata approach to resolu-
tion semantic conflicts, Proc. Int. Conf. on Very Large Databases,
Barcelona, Spain, pp. 133–145.
Silverstein, M. (1993) Metapragmatic discourse and metapragmatic func-
tion, in Reflexive Language: Reported Speech and Metapragmatics,
J. Lucy (Ed.), Cambridge University Press, Cambridge, pp. 33–58.
Silverstein, M. (2004) “Cultural” concepts and the language-culture nexus,
Current Anthropology, v. 45, No. 5.
Simon, H. A. (1971) Designing organizations for an information-rich world,
in Computers, Communications, and the Public Interest, M. Green-
berger (Ed.), Johns Hopkins Press, Baltimore, MD, pp. 37–72.
Simons, P. (2001) Calculi of names: Free and modal, in New Essays in Free
Logic, Applied Logic Series, v. 23, pp. 49–65.
Simonson, S. (2000) Mathematical gems of Levi ben Gershon, Mathematics
Teacher, v. 93, No. 8, pp. 659–663.
Sion, A. (2010) Talmudic hermeneutics, in Logic in Religious Discourse,
Ontos-Verl, Frankfurt, pp. 105–117.
Sipser, M. (1997) Introduction to the Theory of Computation, PWS Pub-
lishing Co., Boston.
Slupecki, J. (1958) Towards a generalized mereology of Lesniewski, Studia
Logica, v. 8, pp. 1–154.
Slutz, D. R. (1968) The flow graph schemata model of parallel computation,
Rep. MAC-TRy53 (Thesis), MIT Project MAC, Boston.
Smarandache, F. (2002) A unifying field in logics: Neutrosophic logic, Mul-
tiple Valued Logic, v. 8, No. 3, pp. 385–438.
Smart, J. J. C. (1965) The methods of ethics and the methods of science,
Journal of Philosophy, v. 62, pp. 344–349.
Smehov, B. M. (1987) Logic of Planning, Economics, Moscow (in Russian).
Smit, H. J. (1991) Consistency and Robustness of Knowledge Graphs, Ph.D.
thesis, University of Twente, Enschede, The Netherlands.
Smith, B. (1996) Mereotopology: A theory of parts and boundaries, Data
and Knowledge Engineering, v. 20, pp. 287–303.
Smith, B. (2008) Ontology (Science), in Formal Ontology in Information
Systems, Proc. of FOIS 2008, C. Eschenbach and M. Gruninger (Eds.),
ISO Press, Amsterdam/New York, pp. 21–35.
Smith, C. U. M. (2010) The triune brain in antiquity: Plato, Aristotle,
Erasistratus, Journal of the History of the Neurosciences, v. 19, No. 1,
pp. 1–14.
Smith, M. K. (1999) Aristotle on knowledge, in The Encyclopaedia of Infor-
mal Education (http://infed.org/mobi/aristotle-on-knowledge/).
Smith, M. L. (2000) View-Centric Reasoning about Parallel and Distributed
Computation, Ph.D. thesis, University of Central Florida, Orlando, FL.
Smith, R. G. and Farquhar, A. (2000) The road ahead for knowledge man-
agement, AI Magazine, pp. 17–40.
Smolin, L. (1995) The Bekenstein Bound, Topological Quantum Field The-
ory and Pluralistic Quantum Field Theory, Penn State preprint CGPG-
95/8–7; Los Alamos Archives preprint in physics, gr-qc/9508064.
http://arXiv.org.
Smolin, L. (1999) The Life of the Cosmos, Oxford University Press, Oxford/
New York.
Smullyan, R. (1978) What is the Name of this Book?, Prentice Hall, Engle-
wood Cliffs, NJ.
Smullyan, R. M. (1962) Theory of Formal Systems, Princeton University
Press, Princeton.
Sneed, J. D. (1971) The Logical Structure of Mathematical Physics, D. Rei-
del Publishing Company, Dordrecht.
Snodgrass, R. T. and Jensen, C. S. (1999) Developing Time-Oriented
Database Applications in SQL, Morgan Kaufmann, San Mateo, CA.
Snowdon, P. (2003) Knowing how and knowing that: A distinction
reconsidered, Proceedings of The Aristotelian Society, v. 104, No. 1,
pp. 1–29.
Sober, E. (1991) Core Questions in Philosophy, Macmillan Publishing Co.,
New York.
Solomonoff, R. (1964) A formal theory of inductive inference, Information
and Control, v. 7, No. 1 (Part I), pp. 1–22; No. 2 (Part II), pp. 224–254.
Solovay, R. M. (1976) Provability interpretations of modal logic, Israel Jour-
nal of Mathematics, v. 25, pp. 287–304.
Sorensen, R. (1992) Thought Experiments, Oxford University Press, New
York.
Sosa, D. (2006) Scepticism about intuition, Philosophy: The Journal of the
Royal Institute of Philosophy, v. 81, pp. 633–647.
Sowa, J. F. (1976) Conceptual graphs for a database interface, IBM Journal
of Research and Development, v. 20, No. 4, pp. 336–357.
Sowa, J. F. (1984) Conceptual Structures: Information Processing in Mind
and Machine, Addison-Wesley, Reading, MA.
Sowa, J. F. (1987) Semantic networks, Encyclopedia of Artificial Intelli-
gence, Wiley, New York.
Sowa, J. F. (Ed.) (1991) Principles of Semantic Networks: Explorations in
the Representation of Knowledge, Morgan Kaufmann Publishers, San
Mateo, CA.
Sowa, J. F. (2000) Knowledge Representation: Logical, Philosophical,
and Computational Foundations, Brooks/Cole Publishing Co., Pacific
Grove, CA.
Sowa, J. F. (2000a) Ontology, metadata, and semiotics, in Conceptual Struc-
tures: Logical, Linguistic, and Computational Issues, B. Ganter and
G. W. Mineau (Eds.), Lecture Notes in AI, v. 1867, Springer-Verlag,
Berlin, pp. 55–81.
Spanier, E. H. (1966) Algebraic Topology, Springer-Verlag, New York/
Heidelberg/Berlin.
Speaks, J. (2014) Theories of Meaning, in The Stanford Encyclopedia of
Philosophy (Fall 2014 Edition), E. N. Zalta (Ed.), (http://plato. stan-
ford.edu/archives/fall2014/entries/meaning/).
Sperber, D. and Wilson, D. (1986) Relevance: Communication and Cogni-
tion, Blackwell, Oxford.
Squire, L. R. (1994) Declarative and non-declarative memory: Multiple
brain systems supporting learning and memory, in Memory Systems,
MIT Press, Cambridge, MA, pp. 203–231.
Squire, L. R. (2004) Memory systems of the brain: A brief history and cur-
rent perspective, Neurobiology of Learning and Memory, v. 82, pp. 171–
177.
Squire, L. R. and Zola, S. M. (1996) Structure and function of declarative
and nondeclarative memory systems, Proc. of the National Academy of
Sciences USA, v. 93 No. 24, pp. 13515–13522.
Stanley, J. (2011) Know How, Oxford University Press, Oxford.
Stanley, J. and Williamson, T. (2001) Knowing how, The Journal of Phi-
losophy, v. 98, No. 8, pp. 411–444.
Stata, R. (1989) Organizational learning: The key to management innova-
tion, Sloan Management Review, v. 30, No. 3, pp. 63–74.
Stegmüller, W. (1976) The Structure and Dynamics of Theories, Springer-
Verlag, New York/Heidelberg/Berlin.
Stegmüller, W. (1979) The Structuralist View. A Possible Analogue of
the Bourbaki Programme in Physical Science, Springer-Verlag, New
York/Heidelberg/Berlin.
Stehr, N. (1994) Knowledge Societies: The Transformation of Labour, Prop-
erty and Knowledge in Contemporary Society, Sage, London.
Stenmark, D. (2002) The relationship between information and knowl-
edge and the role of intranets in knowledge management, Proc. of
the 35th Annual Hawaii International Conference on System Sciences
(HICSS-35 ), v. 4, IEEE Press, Hawaii (http://csdl2.computer.org/
comp/proceedings/hicss/2002/1435/04/ 14350104b.pdf).
Stepanov, Y. S. (1975) Foundations of General Linguistics, Prosveshchenie,
Moscow (in Russian).
Sternberg, R. J. (1985) Beyond IQ: A Triarchic Theory of Intelligence,
Cambridge University Press, Cambridge.
Sternberg, R. J. (1997) A triarchic view of giftedness: Theory and prac-
tice, in Handbook of Gifted Education, Allyn and Bacon, Boston, MA,
pp. 43–53.
Steyvers, M. and Tenenbaum, J. B. (2005) The large-scale structure of
semantic networks: Statistical analyses and a model of semantic growth,
Cognitive Science, v. 29, pp. 41–78.
Stewart, T. A. (1997) Intellectual Capital: The New Wealth of Organiza-
tions, Doubleday, New York.
Stewart, T. A. (2002) The Wealth of Knowledge: Intellectual Capital and
the Twenty-First Century Organization, Nicholas Brealey Publishing,
London, UK.
Stiglitz, J. E. (1999) Knowledge as a global public good, in Global Public
Goods: International Cooperation in the 21st Century, Oxford Univer-
sity Press for UNDP, New York/Oxford.
Stoner, G. R., Albright, T. D. and Ramachandran, V. S. (1990) Trans-
parency and coherence in human motion perception, Nature, v. 344,
pp. 153–155.
Stonier, T. (1990) Information and the Internal Structure of the Universe:
An Exploration into Information Physics, Springer, New York/ London.
Stonier, T. (1991) Towards a new theory of information, Journal of Infor-
mation Science, v. 17, pp. 257–263.
Stonier, T. (1992) Beyond Information: The Natural History of Intelligence,
Springer-Verlag, London.
Stonier, T. (1996) Information as a basic property of the universe, Bio
Systems, v. 38, pp. 135–140.
Strawson, P. (1950) Truth, Proceedings of the Aristotelian Society, v. 24,
pp. 129–156.
Streater, R. F. and Wightman, A. S. (2000) PCT, Spin and Statistics, and
All That, Princeton University Press, Princeton.
Stubbe, H. (1670) The Plus Ultra Reduced to a Non Plus, London, England.
Studer, R., Benjamins, R. and Fensel, D. (1998) Knowledge engineering:
Principles and methods, Data & Knowledge Engineering, v. 25, pp. 161–
197.
Stumme, G. and Wille, R. (Eds.) (2000) Begriffliche Wissensverar-
beitung — Methoden und Anwendungen, Springer, Heidelberg.
Sugeno, M. (1974) Theory of Fuzzy Integrals and Its Application, Doctoral
Thesis, Tokyo Institute of Technology.
Sugeno, M. (1977) Fuzzy Measures and Fuzzy Integrals — A Survey, in
Fuzzy Automata and Decision Processes, North-Holland, New York,
pp. 89–102.
Suppe, F. (Ed.) (1974) The Structure of Scientific Theories, University of
Illinois Press, Urbana.
Suppe, F. (Ed.) (1979) The Structure of Scientific Theories, University of
Illinois Press, Urbana.
Suppe, F. (1999) The positivist model of scientific theories, in Scientific
Inquiry, Oxford University Press, New York, pp. 16–24.
Suppes, P. (1967) What is a scientific theory? in Philosophy of Science
Today, Basic Books, New York, pp. 55–67.
Suppes, P. and Han, B. (2000) Brain-wave representation of words by super-
position of a few sine waves, Proceedings of the National Academy of
Sciences, v. 97, pp. 8738–8743.
Surowiecki, J. (2004) The Wisdom of Crowds: Why the Many are Smarter
than the Few and How Collective Wisdom Shapes Business, Economies,
Societies and Nations, Little Brown, Boston.
Suzuki, K. (1987) Schema theory: A basis for domain information design,
in Application of the Schema Theory to Instructional Design, A sym-
posium conducted at the Annual Meeting of the Association for Edu-
cational Communications and Technology, Atlanta, GA, USA.
Sveiby, K.-E. (1997) The New Organizational Wealth — Managing and
Measuring Knowledge-Based Assets, Berrett-Koehler, San Francisco.
Swan, J. and Scarbrough, H. (2001) Knowledge management: Concepts and
controversies, Journal of Management Studies, v. 38, No. 7, pp. 913–
921.
Swift, D. (2008) The Epicurean Theory of Mind, Meaning and Knowledge,
Cambridge Scholars Publishing, Cambridge.
Swoyer, C. and Orilia, F. (2014) Properties, The Stanford Encyclo-
pedia of Philosophy, E. N. Zalta (Ed.), (http://plato.stanford.edu/
archives/fall2014/entries/properties/).
Sydanmaanlakka, P. (2000) Understanding organizational learning through
knowledge management, competence management, and performance
management, in 2nd Annual Knowledge Management and Organizational
Learning Conference, Linkage International, London, pp. 329–341.
Sydanmaanlakka, P. (2002) An Intelligent Organization — Integrating Per-
formance, Competence and Knowledge Management, Capstone Pub-
lishing, Knoxville, TN, USA.
Szuba, T. (2001) Computational Collective Intelligence, John Wiley & Sons,
Inc., New York, NY.
Tahara, I. and Nobesawa, S. (2006) Reasoning with inconsistent knowledge
base, Systems and Computers in Japan, v. 37, No. 3, pp. 41–48.
Talbi, E.-G. (2009) Metaheuristics: From Design to Implementation. John
Wiley & Sons, Inc., New York, NY.
Tamassia, R. (1996) Data structures, ACM Computing Surveys, v. 28, No. 1,
pp. 23–26.
Tanaka-Ishii, K. and Ishii, Y. (2007) Icon, index, symbol and denotation,
connotation, metasign, Semiotica, v. 166, No. 1/4, pp. 393–407.
Tannenbaum, A. (2002) Metadata Solutions, Addison-Wesley, Reading,
Mass.
Tappenden, J. (2008) Mathematical concepts and definitions, in The Phi-
losophy of Mathematical Practice, P. Mancosu (Ed.), Oxford University
Press, Oxford, pp. 256–275.
Tarábek, P. (2006) Concept levels imagined by triangular model of concept
structure, in Educational and Didactic Communication 2006, Educa-
tional Publisher Didaktis, Bratislava, pp. 49–58.
Tarábek, P. (2007) Cognitive analysis and triangular modeling of concepts,
Educational and Didactic Communication, v. 2, Educational Publisher
Didaktis, Bratislava, pp. 107–149.
Tarábek, P. (2009) Cognitive architecture of common and scientific con-
cepts, International Conference On Physics Education (ICPE-2009),
AIP Conf. Proc. 1263, pp. 151–154.
Tarasov, K. E., Velikov, V. K. and Frolova, A. I. (1989) Logic and Semiotics
of Diagnosis, Medicine, Moscow (in Russian).
Tarski, A. (1944) The semantic conception of truth, Philosophy and Phe-
nomenological Research, v. 4, pp. 341–375.
Tassey, G. (2002) The Economic Impacts of Inadequate Infrastructure for
Software Testing, NIST Report 7007.011.
Tatievskaia, E. (1999) Russell on the Structure of Propositions, The Paideia
Archive (http://www.bu.edu/wcp/).
Taylor, B. (2006) Models, Truth, and Realism, Oxford University Press,
Oxford.
Tell, F. (2004) What do organizations know? Dynamics of justification con-
texts in R&D activities, Organization, v. 11, No. 4, pp. 443–471.
Thagard, P. (1988) Computational Philosophy of Science, A Bradford Book,
Oxford.
Tharp, L. H. (1973) The characterization of monadic logic, Symbolic Logic,
v. 38, No. 3, pp. 481–488.
Thayse, A., Gribomont, P., Louis, G., Snyers, D., Wodon, P., Gochet, P.,
Grégoire, E., Sanchez, E. and Delsarte, P. (1988) Approche Logique de
l'Intelligence Artificielle, Bordas, Paris.
Thibodeau, P. (2002) Buggy software costs users, vendors nearly $60B
annually, Computerworld, Washington.
Thiselton, A. C. (1998) Biblical Hermeneutics, in The Routledge Encyclo-
pedia of Philosophy, v. 4, pp. 389–395.
Thompson, M. and Walsham, G. (2004) Placing knowledge management in
context, Journal of Management Studies, v. 41, No. 5, pp. 725–747.
Thorkelson, E. (2008) Knowledge as ideology: Lycée philosophy classes
and the category of the intellectual, Social Epistemology: A Journal
of Knowledge, Culture and Policy, v. 22, No. 2, pp. 165–196.
Timbrell, T. G. and Jewels, T. J. (2002) Knowledge re-use situations in an
enterprise systems context, in Issues and Trends of Information Tech-
nology Management in Contemporary Organizations, Seattle, Washing-
ton, USA.
Timothy, S. (1973) What is a syllogism? Journal of Philosophical Logic,
v. 2, pp. 136–154.
Tiwana, A. (2001) The Essential Guide to Knowledge Management — E-
Business and CRM Applications, Prentice-Hall, Upper Saddle River,
NJ, USA.
Toffler, A. (1990) Powershift, Bantam Books, Toronto/New York/ London.
Tondl, L. (1975) Problems of Semantics, Progress, Moscow (in Russian).
Tong, J. and Mitra, A. (2008) Knowledge maps and organisations: An
overview and interpretation, International Journal of Business Infor-
mation Systems, v. 3, No. 6, pp. 587–608.
Toulmin, S. (1956) The Uses of Argument, Cambridge University Press,
Cambridge.
Trahtenbrot, B. A. and Barzdin, J. M. (1970) Finite Automata: Behavior
and Synthesis, Nauka, Moscow (in Russian).
Tsoukas, H. and Vladimirou, E. (2001) What is organizational knowledge?
Journal of Management Studies, v. 38, No. 7, pp. 973–992.
Tulving, E. (1972) Episodic and semantic memory, in Organization of Mem-
ory, Academic Press, New York.
Tulving, E. (1983) Elements of Episodic Memory, Oxford University Press,
Oxford.
Tuomi, I. (1999) Corporate Knowledge: Theory and Practice of Intelligent
Organization, Metaxis, Helsinki.
Tuomi, I. (1999/2000) Data is more than knowledge: Implications of the
reversed knowledge hierarchy for knowledge management and knowl-
edge memory, Journal of Management Information Systems, v. 16,
No. 3, pp. 103–117.
Turing, A. M. (1937) Computability and λ-definability, The Journal of Sym-
bolic Logic, v. 2, No. 4, pp. 153–163.
Turing, A. (1956) Can a machine think? in The World of Mathematics, J. R.
Newman (Ed.), v. 4, pp. 2099–2123.
Turner, R. (1984) Logics for Artificial Intelligence, Ellis Horwood Ltd.
Tsichritzis, D. and Klug, A. (Eds.) (1978) The ANSI/X3/SPARC DBMS
Framework, AFIPS Press.
Ueno, H., Koyama, T., Okamoto, T., Matsubi, B. and Isidzuka, M. (1987)
Knowledge Representation and Utilization, Mir, Moscow (Russian
translation from the Japanese).
Umezawa, T. (1959) On logics intermediate between intuitionistic and clas-
sical predicate logic, Journal of Symbolic Logic, v. 24, No. 2, pp. 141–
153.
Uschold, M. and Gruninger, M. (1996) Ontologies: Principles, methods and
applications, Knowledge Engineering Review, v. 11, No. 2, pp. 93–136.
Uzgalis, W. (2014) John Locke, The Stanford Encyclopedia of Philosophy
(http://plato.stanford.edu/archives/win2014/entries/locke/).
Väänänen, J. (2001) Second-order logic and foundations of mathematics,
Bulletin of Symbolic Logic, v. 7, No. 4, pp. 504–520.
Väänänen, J. (2007) Dependence Logic: A New Approach to Independence
Friendly Logic, Cambridge University Press, Cambridge.
Valente, G. and Rigallo, A. (2002) Operational knowledge management: A
way to manage competence, Proc. of the International Conference on
Information and Knowledge Engineering, pp. 124–130.
Valente, G. and Rigallo, A. (2003) An innovative approach for managing
competence: An operational knowledge management framework, Proc.
of the 7th International Conference on Knowledge-Based Intelligent
Information and Engineering Systems, Springer-Verlag, pp. 124–130.
van Benthem, J. (1991) The Logic of Time, Kluwer Academic Publishers,
Boston/London/Dordrecht.
Van Benthem, J. and Sarenac, D. (2004) The Geometry of Knowledge, in
Aspects of Universal Logic, Centre de Recherches Sémiologiques, Uni-
versity of Neuchâtel.
van der Spek, R. and Spijkervet, A. (1997) Knowledge management: Deal-
ing intelligently with knowledge, in Knowledge Management and Its
Integrative Elements, CRC Press, New York, pp. 31–59.
van der Waerden, B. L. (1971) Algebra, Springer-Verlag, Berlin/
Heidelberg/New York.
Van Der Vlist, E. (2004) RELAX NG, O’Reilly & Associates Incorporated.
van der Walt, M. (2006) Knowledge management and scientific knowl-
edge generation, Knowledge Management Research and Practice, v. 4,
pp. 319–330.
van den Berg, H. (1993) Knowledge Graphs and Logic: One of Two Kinds,
Ph.D. thesis, University of Twente, Enschede, The Netherlands.
van Dijk, T. A. (2004) Discourse, knowledge and ideology: Reformulating
old questions and proposing some new solutions, in Communicating
Ideologies: Multidisciplinary Perspectives on Language, Discourse, and
Social Practice, Peter Lang — Europäischer Verlag der Wissenschaften,
Frankfurt am Main, Germany, pp. 5–38.
van Doren, C. (1992) A History of Knowledge: Past, Present, and Future,
Random House Publishing Group, New York.
Van Inwagen, P. (1997) Materialism and the psychological-continuity
account of personal identity, Philosophical Perspectives, v. 11, pp. 305–
319.
Van Leeuwen, J. and Wiedermann, J. (2000) On the power of interac-
tive computing, Proc. of the IFIP Theoretical Computer Science 2000,
pp. 619–623.
VanPool, T. L. and VanPool, C. S. (Eds.) (2003) Essential Tensions in
Archaeological Method and Theory, University of Utah Press, Salt Lake
City.
van Rijsbergen, C. J. (1989) Towards an information logic, Proc. of the 12th
Annual International ACM SIGIR Conference on Research and Devel-
opment in Information Retrieval, Cambridge, Massachusetts, pp. 77–
86.
Vaught, R. L. (1967) Axiomatizability by a schema, Journal of Symbolic
Logic, v. 32, No. 4, pp. 473–479.
Vavilov, S. I. (1951) Isaac Newton, Akademie-Verlag, Berlin.
Vetrov, A. A. (1968) Semiotics and Its Main Problems, Politizdat, Moscow
(in Russian).
Vendler, Z. (1972) On what one knows, in Res Cogitans, Cornell University
Press, Ithaca, pp. 89–119.
Vey Mestdagh, C. N. J. de (1998) Legal expert systems. Experts or expe-
dients? The representation of legal knowledge in an expert system for
environmental permit law, The Law in the Information Society, Con-
ference Proc. on CD-Rom, Firenze, p. 8.
Vey Mestdagh, C. N. J. de and Hoepman, J. H. (2011) Inconsistent knowl-
edge as a natural phenomenon: The ranking of reasonable inferences
as a computational approach to naturally inconsistent (Legal) theories,
in Information and Computation, G. Dodig-Crnkovic and M. Burgin
(Eds.), World Scientific, Singapore, pp. 439–476.
Vey Mestdagh, C. N. J. de, Verwaard, W. and Hoepman, J. H. (1991)
The logic of reasonable inferences, in legal knowledge based sys-
tems, model-based legal reasoning, Proc. 4th Annual JURIX Con-
ference on Legal Knowledge Based Systems, Vermande, Lelystad,
pp. 60–76.
Vidyabhusana, S. C. (1921) A History of Indian Logic: Ancient, Medieval
and Modern, Calcutta University, Calcutta.
Viganò, L. and Volpe, M. (2008) Labeled natural deduction systems for
a family of tense logics, in 15th International Symposium on Tempo-
ral Representation and Reasoning (TIME 2008), S. Demri and C. S.
Jensen (Eds.), University of Quebec, Montreal, Canada, 16–18 June
2008. IEEE Computer Society, pp. 118–126.
Vlastos, G. (1957) Socratic knowledge and platonic “pessimism”, The Philo-
sophical Review, v. 66, No. 2, pp. 226–238.
von Baeyer, H. C. (2004) Information: The New Language of Science, Har-
vard University Press, Harvard.
von Krogh, G., Ichijo, K. and Nonaka, I. (2000) Enabling Knowledge Cre-
ation: How to Unlock the Mystery of Tacit Knowledge and Release the
Power of Innovation, Oxford University Press, New York.
von Neumann, J. (1932) Mathematische Grundlagen der Quantenmechanik,
Springer-Verlag, Berlin (English translation: Mathematical Foundations
of Quantum Mechanics, Princeton University Press, 1955).
von Uexküll, T. (1982) Semiotics and medicine, Semiotica, v. 38, No. 3/4,
pp. 205–215.
von Weizsäcker, C. F. (1958) Die Quantentheorie der einfachen Alterna-
tive (Komplementarität und Logik, II), Zeitschrift für Naturforschung,
v. 13, pp. 245–253.
von Weizsäcker, C. F. (1974) Die Einheit der Natur, Deutscher Taschenbuch
Verlag, Munich, Germany.
von Weizsäcker, C. F., Scheibe, E. and Süssmann, G. (1958) Komplemen-
tarität und Logik, III (Mehrfache Quantelung), Zeitschrift für Natur-
forschung, v. 13, pp. 705–721.
von Wright, G. H. (1951) Deontic logic, Mind, v. 60, pp. 1–15.
von Wright, G. H. (1963) Norm and Action: A Logical Inquiry, Kegan Paul,
London.
von Wright, G. H. (1963a) The Logic of Preference, Edinburgh University Press, Edinburgh.
von Wright, G. H. (1968) An Essay on Deontic Logic and the General
Theory of Action, North-Holland.
Vygotskii, L. S. (1956) Selected Psychological Works, Nauka, Moscow (in
Russian).
Wallis, C. (2008) Consciousness, context, and know-how, Synthese, v. 160,
pp. 123–153.
Walsh, J. P. and Ungson, G. R. (1991) Organizational memory, The
Academy of Management Review, v. 16, No. 1, pp. 57–91.
Waltz, E. (2003) Knowledge Management in the Intelligent Enterprise,
Artech House Inc.
Wang, X., Liu, X., Feng, X. and Hoede, C. (2010) A novel approach to
concepts via knowledge graph theory and AFS theory, 2010 Interna-
tional Conference on Intelligent Control and Information Processing
(ICICIP), pp. 87–92.
Wassermann, R. (2000) An algorithm for belief revision, Proc. of 7th Inter-
national Conf. of Principles of Knowledge Representation and Reason-
ing (KR’2000).
Watson, B. (2003) Xunzi: Basic Writings, Columbia University Press, New
York, NY.
Waxman, M. J. (1996) On Problem Complexity (unpublished work).
Waxman, S. R. (1998) Linking object categorization and naming: Early
expectations and the shaping role of language, The Psychology of Learn-
ing and Motivation, v. 38, pp. 249–291.
Waxman, S. R. (1999) The dubbing ceremony revisited: Object naming
and categorization in infancy and early childhood, in Folkbiology, MIT
Press, Cambridge, MA, pp. 233–284.
Waxman, S. R. (2002) Early word learning and conceptual development:
Everything had a name, and each name gave birth to a new thought,
in Handbook of Childhood Cognitive Development, Blackwell, Oxford,
pp. 102–126.
Waxman, S. R. (2003) Links between object categorization and naming:
Origins and emergence in human infants, in Early Category and Con-
cept Development : Making Sense of the Blooming, Buzzing Confusion,
Oxford University Press, New York, pp. 213–241.
Waxman, S. R. and Braun, I. E. (2005) Consistent (but not variable) names
as invitations to form object categories: New evidence from 12-month-
old infants, Cognition, v. 95, pp. B59–B68.
Weber, R. O. (2007) Addressing failure factors in knowledge manage-
ment, Electronic Journal of Knowledge Management, v. 5, No. 3,
pp. 333–346.
Weber, R., Sandhu, N. and Breslow, L. (2001) On the technological,
human, and managerial issues in sharing organizational lessons, Proc.
of the 14th Annual Conference of the International Florida Arti-
ficial Intelligence Research Society, AAAI Press, Menlo Park, CA,
pp. 334–338.
Webster’s Revised Unabridged Dictionary (1998) MICRA, Inc. of Plainfield,
NJ.
Webster’s English Language Desk Reference (1999) Gramercy Books, New
York.
Weimin, S. (2009) Chinese logic and the absence of theoretical sciences in
ancient China, Dao, v. 8, No. 4, pp. 403–423.
Weinzierl, A. (2010) Comparing inconsistency resolutions in multi-context
systems, in Student Session of the European Summer School for Logic,
Language, and Information, pp. 17–24.
Weiss, A. (2005) The power of collective intelligence, netWorker — Beyond
file-sharing, Collective Intelligence, v. 9, No. 3, pp. 16–24.
Weiss, S. I. and Kulikowski, C. (1991) Computer Systems That Learn: Clas-
sification and Prediction Methods from Statistics, Neural Networks,
Machine Learning, and Expert Systems, Morgan Kaufmann, San Fran-
cisco, CA.
Weitzenfeld, A. (1989) NSL, Neural Simulation Language, Version 1.0,
Technical Report 89–02, USC, Center for Neural Engineering.
Weitzenfeld, A., Arbib, M. A. and Alexander, A. (2002) The Neural Simu-
lation Language: A System for Brain Modeling, MIT Press, Cambridge,
MA.
Weld, D. and de Kleer, J. (Eds.) (1990) Readings in Qualitative Reasoning
about Physical Systems, Morgan Kaufmann, San Mateo, CA.
Weller, T. (2007) Information history: Its importance, relevance and future,
Aslib Proceedings, v. 59, No. 4/5, pp. 437–448.
Weller, T. (2008) Information History an Introduction: Exploring an Emer-
gent Field, Chandos, Oxford.
Weller, T. (2012) The information state: A historical perspective on
surveillance, in Routledge Handbook of Surveillance Studies, K. Ball,
K. Haggerty and D. Lyon (Eds.), Routledge, pp. 57–63.
Wellman, J. L. (2009) Organizational Learning: How Companies and Insti-
tutions Manage and Apply Knowledge, Palgrave Macmillan, New York.
Wells, R. O. (1979) Complex manifolds and mathematical physics, Bulletin
of the American Mathematical Society (N.S.), v. 1, No. 2, pp. 296–336.
Wheeler, J. A. (1990) Information, Physics, Quantum: The Search for Links,
in Complexity, Entropy, and the Physics of Information, W. Zurek
(Ed.), Addison-Wesley, Redwood City, CA, pp. 3–28.
Whitehead, A. N. (1919) An Enquiry Concerning the Principles of Natural
Knowledge, Cambridge University Press, Cambridge.
Whitehead, A. N. and Russell, B. (1910–1913) Principia Mathematica, 3
vols., Cambridge University Press, Cambridge.
Wiener, N. (1961) Cybernetics, or Control and Communication in the Ani-
mal and the Machine, 2nd revised and enlarged edition, MIT Press and
Wiley, New York, London.
Wigner, E. P. (1932) On the quantum correction for thermodynamic equi-
librium, Physical Review, v. 40, pp. 749–759.
Wigner, E. P. (1959) Group Theory and its Application to the Quantum
Mechanics of Atomic Spectra, Academic Press, New York.
Wiig, K. (1993) Knowledge Management Foundations: Thinking about
Thinking — How People and Organizations Create, Represent, and Use
Knowledge, Schema Press, Arlington, TX.
Wijnhoven, F. (2003) Operational knowledge management: Identification
of knowledge objects, operation methods, and goals and means for the
support function, Journal of the Operational Research Society, v. 54,
pp. 194–203.
Williams, B. A. O. (1968) Knowledge and meaning in the philosophy of
mind, The Philosophical Review, v. 77, No. 2, pp. 216–228.
Williams, M. A. (1993) Transmutations of Knowledge Systems, Ph.D. the-
sis, University of Sydney, Australia.
Williams, M. A. (1994) Transmutations of knowledge systems, Proc. of 4th
International Conference of Principles of Knowledge Representation
and Reasoning (KR’94), pp. 412–421.
Williams, M. A. (1996) A practical approach to belief revision: Reason-
based change, Proc. of 5th International Conference of Principles of
Knowledge Representation and Reasoning (KR’96), pp. 412–421.
Williamson, T. (2000) Knowledge and Its Limits, Oxford University Press,
Oxford.
Williamson, T. (2007) The Philosophy of Philosophy, Routledge, New York.
Wing, J. M. (1998) Formal methods: Past, present, and future, Advances
in Computing Science, Lecture Notes in Computer Science, v. 1538,
pp. 224–245.
Winskel, G. (1985) Synchronization trees, Theoretical Computer Science,
v. 34, pp. 33–82.
Winter, S. (1987) Knowledge and competence as strategic assets, in The
Competitive Challenge, D. J. Teece (Ed.), Ballinger, Cambridge, MA,
pp. 159–184.
Wittgenstein, L. (1922) Tractatus Logico-Philosophicus, (English transla-
tion by C. K. Ogden and F. P. Ramsey) Routledge and Kegan Paul,
London.
Wittgenstein, L. (1953) Philosophical Investigations, Macmillan, New York.
Woods, W. A. (1975) What’s in a link: foundations for semantic networks,
in Representation and Understanding, D. G. Bobrow and A. Collins
(Eds.), Academic Press, New York, pp. 35–82.
Worth, S. E. (2010) Art and Epistemology, The Internet Encyclopedia of
Philosophy (IEP) (Internet Edition: http://www.iep.utm.edu/art-ep/).
Wright, P. (1998) Knowledge discovery in databases: Tools and techniques,
ACM Crossroads, v. 5, No. 2, pp. 7–17.
Wright, T., Watson, S. and Castrataro, D. (2010) To tweet or not to tweet:
social media as a missed opportunity for knowledge management, in
Leading Issues in Social Knowledge Management, Academic Publishing
International Limited, pp. 42–55.
Wrona, M. (2005) Stratified Boolean grammars, in Mathematical Founda-
tions of Computer Science, Lecture Notes in Computer Science, v. 3618,
Springer, Berlin/Heidelberg, pp. 801–812.
Wu, J., Du, H., Li, X. and Li, P. (2010) Creating and delivering a suc-
cessful knowledge management strategy, in Knowledge Management
Strategies for Business Development, Business Science Reference, Hershey,
PA, pp. 261–276.
Xenakis, J. (1956) Sentence and statement, Analysis, v. 16, No. 4,
pp. 91–94.
Xu, B. and Zhuge, H. (2009) The basic operation set of the semantic link net-
work and its completeness, Fifth International Conference on Semantics,
Knowledge and Grid, SKG, IEEE, pp. 232–239.
Yaghoubi, N. M. and Maleki, N. (2012) Critical success factors of knowledge
management (A case study: Zahedan Electric Distribution Com-
pany), Journal of Basic and Applied Scientific Research, v. 2, No. 12,
pp. 12024–12030.
Yazdani, B. O., Yaghoubi, N. M. and Hajiabadi, M. (2011) Critical suc-
cess factors for knowledge management in organization: An empirical
assessment, European Journal of Humanities and Social Sciences, v. 3,
No. 1, pp. 95–117.
Yip, M. W., Ng, A. H. H. and Lau, D. H. C. (2012) Employee participa-
tion: Success factor of knowledge management, International Journal
of Information and Education Technology v. 2, No. 3, pp. 262–264.
Young, M. F. (Ed.) (1957) Knowledge and Control, Collier-Macmillan,
London.
Zadeh, L. (1965) Fuzzy sets, Information and Control, v. 8, No. 3, pp. 338–
353.
Zadeh, L. A. (1973) The Concept of a Linguistic Variable and its Applica-
tion to Approximate Reasoning, Memorandum ERL-M 411, Berkeley.
Zadeh, L. A. (1975) Fuzzy logic and approximate reasoning, Synthese, v. 30,
pp. 407–428.
Zadeh, L. A. (1978) Fuzzy sets as a basis for a theory of possibility, Fuzzy
Sets and Systems, v. 1, pp. 3–28.
Zametkin, A. J. (1990) Cerebral glucose metabolism in adults with hyper-
activity of childhood onset, The New England Journal of Medicine,
v. 323, pp. 1361–1366.
Zellweger, H. P. (2011) A knowledge visualization of database content cre-
ated by a database taxonomy, Proc. of the 15th International Confer-
ence on Information Visualization, London, United Kingdom, pp. 323–
328.
Zhang, L. (2002) Knowledge Graph Theory and Structural Parsing, Twente
University Press, Twente, the Netherlands.
Zhang, X., Zhang, Z., Xu, D. and Lin, Z. (2010) Argumentation-based rea-
soning with inconsistent knowledge bases, Advances in Artificial Intel-
ligence, Lecture Notes in Computer Science, v. 6085, pp. 87–99.
Zhen, L. and Jiang, Z.-H. (2008) Innovation-oriented knowledge query in
knowledge grid, Journal of Information Science and Engineering, v. 24,
No. 2, pp. 601–613.
Zhou, R. Q. (2013) A new method of semantic network knowledge rep-
resentation based on extended Petri net, Computer Technology and
Application, v. 4, pp. 245–253.
Zhu, Z. (2006) Nonaka meets Giddens: A critique, Knowledge Management
Research and Practice, v. 4, pp. 106–115.
Zhuge, H. (2003) Active e-document framework ADF: Model and platform,
Information and Management, v. 41, No. 1, pp. 87–97.
Zhuge, H. (2004) China’s e-science knowledge grid environment, IEEE Intel-
ligent Systems, v. 19, No. 1, pp. 13–17.
Zhuge, H. (2005) The future interconnection environment, IEEE Computer,
v. 38, No. 4, pp. 27–33.
Zhuge, H. (2005a) Semantic grid: Scientific issues, infrastructure, and
methodology, Communications of the ACM, v. 48, No. 4, pp. 117–119.
Zhuge, H. (2006) Discovery of knowledge flow in science, Communications
of the ACM, v. 49, No. 5, pp. 101–107.
Zhuge, H. (2006a) Semantic component networking: Toward the synergy of
static reuse and dynamic clustering of resources in the knowledge grid,
Journal of Systems and Software, v. 79, pp. 1469–1482.
Zhuge, H. (2007) Autonomous semantic link networking model for the
knowledge grid, Concurrency and Computation: Practice and Experi-
ence, v. 7, No. 19, pp. 1065–1085.
Zhuge, H. (2008) The knowledge grid environment, IEEE Intelligent Sys-
tems, v. 23, No. 6, pp. 63–71.
Zhuge, H. (2009) Communities and emerging semantics in semantic link
network: Discovery and learning, IEEE Transactions on Knowledge and
Data Engineering, v. 21, No. 6, pp. 785–799.
Zhuge, H. (2010) Interactive semantics, Artificial Intelligence, v. 174,
pp. 190–204.
Zhuge, H. (2010a) Special section: Semantic link network, Future Genera-
tion Computer Systems, v. 26, No. 3, pp. 359–360.
Zhuge, H. (2010b) Socio-natural thought semantic link network: A method
of semantic networking, The Cyber Physical Society, AINA, pp. 19–26.
Zhuge, H. (2011) Semantic linking through spaces for cyber-physical-socio
intelligence: A methodology, Artificial Intelligence, v. 175, pp. 988–
1019.
Zhuge, H. (2012) The Knowledge Grid : Toward Cyber-Physical Society,
World Scientific Publishing Co.
Zhuge, H. and Jia, R. (2004) Semantic link network builder and intelli-
gent browser, Concurrency and Computation: Practice and Experience,
v. 16, No. 14, pp. 1453–1476.
Zhuge, H., Ding, L. and Li, X. (2007) Networking scientific resources in the
knowledge grid environment, Concurrency and Computation: Practice
and Experience, v. 7, No. 19, pp. 1087–1113.
Zhuge, H. and Li, X. (2007) Peer-to-peer in metric space and semantic
space, IEEE Transactions on Knowledge and Data Engineering, v. 6,
No. 19.
Zhuge, H., Liu, J., Feng, L., Sun, X. and He, C. (2005) Query routing in a
peer-to-peer semantic link network, Computational Intelligence, v. 21,
No. 2, pp. 197–216.
Zhuge, H. and Luo, X. (2006) Automatic generation of semantics for doc-
uments in the knowledge grid, Journal of Systems and Software, v. 79,
pp. 969–983.
Zhuge, H. and Luo, X. (2005) The knowledge map: Mathematical model
and dynamic behaviors, Journal of Computer Science and Technology,
v. 20, No. 3, pp. 289–295.
Zhuge, H. and Shi, X. (2004) Toward the eco-grid: A harmoniously evolved
interconnection environment, Communications of the ACM, v. 47,
No. 9, pp. 78–83.
Zhuge, H. and Shi, X. (2003) Fighting epidemics in the information and
knowledge age, IEEE Computer, v. 36, No. 10, pp. 114–116.
Zhuge, H. and Sun, Y. (2010) The schema theory for semantic link network,
Future Generation Computer Systems, v. 26, No. 3, pp. 408–420.
Zhuge, H. and Xu, B. (2011) Basic operations, completeness and dynamic-
ity of cyber physical socio semantic link network CPSocio-SLN, Con-
currency and Computation: Practice and Experience, v. 23, No. 9,
pp. 924–939.
Zhuge, H., Yuan, K., Liu, J., Zhang, J. and Wang, X. (2008) Modeling lan-
guage and tools for the semantic link network, Concurrency and Com-
putation: Practice and Experience, v. 20, No. 7, pp. 885–902.
Zhuge, H. and Zhang, J. (2010) Topological centrality and its applications,
Journal of the American Society for Information Science and Technol-
ogy, v. 61, No. 9, pp. 1824–1841.
Zhuge, H. and Zhang, J. (2011) Automatically constructing semantic link
network on documents, Concurrency and Computation: Practice and
Experience, v. 23, No. 9, pp. 956–971.
Zilberstein, S. (1996) Using anytime algorithms in intelligent systems, AI
Magazine, v. 17, No. 3, pp. 73–83.
Ziman, J. M. (1991) Reliable Knowledge: An Exploration of the Grounds
for Belief in Science, Cambridge University Press, New York.
Zimmermann, H.-J. (1991) Fuzzy Set Theory and Its Applications, Kluwer
Academic Publishers, Dordrecht.
Zinoviev, A. (1973) Foundations of the Logical Theory of Scientific Knowl-
edge (Complex logic), Reidel, Dordrecht.
Zins, C. (2007) Conceptual approaches for defining data, information, and
knowledge, Journal of the American Society for Information Science
and Technology, v. 58, No. 4, pp. 479–493.
Zwicky, F. (1969) Discovery, Invention, Research — Through the Morpho-
logical Approach, The Macmillan Company, Toronto.
Zwicky, F. and Wilson, A. (Eds.) (1967) New Methods of Thought and Pro-
cedure: Contributions to the Symposium on Methodologies, Springer,
Berlin.

Index

A universal, 192
Abstract, 28, 30, 41, 52, 138 Algorithm, 49–50, 157, 424–425
representation, 160
Abstraction, 28, 65, 138, 175
constructive, 487
ladder, 138
data-mining, 693
Abstractness, 95, 137
deduction, 279
level of, 137
first-level, 160
Action, 14, 21, 49
generating, 403
structure, 820
genetic, 694
Acquisition, x, 6
learning, 163
Activity, 15, 17
local search, 163
economic, 7 measuring, 220
intellectual, xv network, 759
mental, 55 nondeterministic, 63
practical, 55 optimization, 163
Adaptation, 537 partial search, 163
Adequate, 561 power of, 304
Aggregate, 82 probabilistic, 63
Algebra, 422, 601 quantum, 77
of sets, 236 randomized, 64
Boolean, 439 recursive, 74
epistemic quantum, 369 second-level, 160–162
information, 192 super-recursive, 220
knowledge, 358 symbolic, 54
Lie, 420 wired, 54
linear, 827 Algorithmic
multibase, 359 information theory, 131
process, 420 complexity, 96, 131
relational, 153 ladder, 304

language, 404, 423, 425 semantics, 145
problem, 218 set theory, 279
process, 125 theory of algorithms, 248
representation, 160, 402
space, 179 B
verification, 258 Base, 56, 59
Alphabet, 100 Belief, 169, 175
Analogy, 677 excessive, 122
Analysis false, 763
axiological, 3 justified, 23
functional, 3 true, 23
methodological, 48 Binary
sociological, 48 code, 328
structural, 2 information operator, 192
Analytic representation, 34 numerical system, 349
Anti-knowledge, 76 operation, 360
Architecture property, 105
concept, 454 relation, 188, 280, 375
named data networking, representation, 132
759 Biology, 403
network, 155 Block-schema, 551
networking, 759 Bond, 201, 340
semantic web, 759 Boolean
three-schema, 553 algebra, 439
Arithmetic, 24, 33, 35–36, 76, 764 grammar, 305
Array, 153, 410 Bounded
systolic, 823 intellectual activity, 652
Arrow, 329 rationality, 140
Artificial set, 211
intelligence, 2–3, 79 uniformly, 214
neural network, 54 Bundle, 782
Aspect, 75
Assertion, 260 C
Automaton Calculation, 36
abstract, 157 Calculus
accepting, 159 basic, 75
cellular, 820 classical, 57
deterministic, 100, 159 logical, 216
finite, 157, 159 process, 420
grid, 161, 571 propositional, 97
non-deterministic, 361 Cardinality, 293, 810
Axiom, 62, 108, 239 Category, 417
Axiomatic abstract, 417
approach, 131 algebraic, 417
mathematical system, 26 Cell, 821
Century, 8, 11 axiomatic complexity measure,
Certainty, 24, 27–28 127
Clarity, 130 cognitive, 127
Classification computational, 301
bidirectional, 66 measure, 131
confidence, 66 direct complexity measure, 131
domain-oriented, 64 dual complexity measure, 131
eight-fold bidirectional, 67–68 dynamic complexity measure,
epistemic, 244 131
hierarchical, 66–67 inductive Kolmogorov, 801
ontological, 66 Kolmogorov, 131, 629, 800–801
problem-oriented, 61 of acquisition, 53
social, 66 of integration, 53
three-dimensional, 65 of knowledge, 53
Creation, 87, 152 of learning, 53
Creativity, 87 operational knowledge
Culture, 206, 350 complexity measure, 131
Code, 328 prefix, 801
Codification, 744 problem, 128
Codomain, 810 space, 131
Cognition, 3, 9, 15, 20, 28, 78–79, 122, static complexity measure, 131
124, 128–129, 171, 175, 196, 232, time, 131
239, 344, 451, 537, 540, 644, 647, transformation, 128
653, 658, 665, 688, 696, 725, 732 uniform, 801
Cognitive utilization, 128
classification, 55 Component, 20, 47
information, 42 Composition, 153
process, 29, 42 Comprehensibility, 130
semantics, 144 Computability, 280
skills, 49 Computation, 304, 402
Cognitology, 3 concurren, 280
Communication, 117 cooperative, 543
Completeness, 139 distributed, 556
Complex, 124 sequential, 545
mathematical object, 473 symbol-based, 545
metaphor, 422 Computational
operational knowledge, 123 complexity, 301
phenomenon, 123 neuroscience, 548
process, 796 parallel, 550
system, 123 practice, 425
Complexity, 53, 125–127 process, 320
algorithmic, 96, 131 Computer
average, 131 graphics, 556
average space, 131 hardware, 560
average time, 131 program, 562
programming, 653 Core, 452–454, 535
science, 556 Correct
Computing descriptively, 256
cloud, 161 functionally, 256
metrics, 220 operationally, 257
power, 304 Correctness, 96
practice, 424 consumer-oriented, 259
Concept, 6, 47 designer-oriented, 259
analysis, 184 programmer-oriented, 259
behavioral, 173 software, 97, 257
formation, 205 user-oriented, 259
illusive, 124 Correlation, 96
informal, 185 Creativity, 399
knowledge, 65 Criterion
learning, 205 pragmatic, 98
mathematical, 72 semantic, 98
meaning, 143 syntactic, 98
model of, 447, 452–454 Cycle, 19, 249
predicate, 34
pure, 34 D
subject, 34 Data, 735–740
Confidence Data-mining, 690
level, 121 algorithm, 693
interval, 121 method, 693
coefficient, 121 Decidability, 301, 512
bound, 121 Decidable set, 512
Configuration, 71, 346, 546, 822 Decision
Connectedness, 207 algorithm, 694
Consistency, 96 Decidable problem, 512
local, 263 Deduction, 37, 232
global, 263 algorithm, 279
Consistent, 97, 107, 276, 296 rule, 156
Constructive Definability, 788
algorithm, 487 Description
definition, 487, 496 of algorithm, 161
logic, 442 Descriptive programming, 401
representation, 441, 487, 496 Design, 8, 126, 711
Constructivity, 639 Device, 3, 424, 478
Context, 148 Differentiation, 497, 684
abstract, 103 Dimension
algebraic, 103 abstraction, 65
formal, 284 abstractness, 95
Control, 21, 49 certainty, 95
device, 821 codification, 65
Contraction, 634 completeness, 95
complexity, 95 external, 568
confidence, 95 internal, 568
correctness, 95
diffusion, 66 E
dynamic, 65 Efficiency, 135
exactness, 95 Efficient, 135
feature, 95 Effective procedure, 424
integration, 95 Element
meaning, 95 chemical, 523
possession, 65 cognitive, 725
separation, 95 competitive, 704
typological, 65 construction, 158
validation, 95 data, 733
Direct product, 208 discrete, 733
Direction environment, 525
Buddhist religious, 17 explicit, 56
in Chinese philosophy, 431 grammatical, 427
classical, 431 infological, 722–724
in epistemology, 139 information, 748
in Indian philosophy, 10 inner, 597
of information theory, 727 knowledge, 54, 93–94
in linguistic semantics, 144 minimal, 481
in logic, 269 quantum, 307
methodological, x rare earth, 523
in methodology, 46 schema, 556–557
notable, 675 semiotic, 351
philosophical, x, 477 structural, 590
popular, 269, 727 tacit, 56
of research, 415 traditional, 346
scientific, 395 triad, 749
specific, 681 unknown, 710
structuralist, 46 XML, 552
structure-nominative, 47, 603 Elementary
Discreteness, 95 data, 328
Disjunctive normal form, 373–374 image, 351
Dissemination, 6 knowledge unit, 179, 189, 307,
Domain, 61 792
Doxastology, 45 particle, 104, 315, 325, 524
Dyadic phenomenon, 652
model, 348, 350 script, 533
Dynamical system, 240 unit, 328–330, 792
fuzzy, 236 Emergence, 640, 685, 723, 741
hybrid discrete–continuous, Emotion, 15, 56, 82, 141, 406–407,
286 650
Dynamics Enumeration, 511
Energy, 7, 127 characteristic, 813
Epistemic, 170, 178 elementary, 87
Epistemology, 9 computable, 665
Equivalence, 186, 341, 361, 366 fuzzy, 558
Euclidean general recursive, 305
geometry, 671 non-deterministic, 558
metrics, 473–474 partial, 338, 503, 778, 812
plane, 675, 775 partial recursive, 341, 665
space, 211, 676, 834 probabilistic, 558
Evaluation, 169 recursive, 280
Evidence, 267 set-valued, 558
Existence total, 811
material, 84 transition, 160
mental, 84 variable, 558
physical, 84 Functional
structural, 84 complexity measure, 127
Expansion, 634 programming language, 426
Experience, 56 Functioning, 701
Expression, 278 Fuzzy, 103
Extension, 677
G
F Generality, 137
False, 52 Generating rules, 403
absolutely, 52 Goal, 32, 39, 104–105, 155, 164, 171,
Fantasy, 225–226, 765 173, 197, 492, 665, 709, 736
Feeling, 673 Government, 4, 8, 21
Finite sequence, 784 Grammar
Finite word, 782 context-free, 304–305
Flow-chart, 561 context-sensitive, 305
Form phrase-structure, 305
analytic, 411 regular, 305
dynamic, 286 Graph, 518
static, 521 Grid
Formal automaton, 561–562, 571–572
grammar, 487 Guessing, 677
language, 487
Formula H
Boolean, 501 Hardware
open, 565 abstract, 400
closed, 565 Head, 431
Frame, 536 Hierarchy, 152, 197, 568, 570, 636,
Function 729
adjacency, 564 High-performance computing, 130
algorithmic, 567 History, 227
I Information processing system (see
Icon, 355 also IPS)
Idea, 175 abstract, 566
Ideal, 175 autonomous, 567
Illogical, 429 real, 566
Image, 113 Information theory
conceptual, 113 algorithmic, 131, 727, 754
Inconsistency, 108, 472–473, 719 dynamic, 728
Inconsistent, 41, 57, 107–108, 218, 270 economic, 728
Index, 355 general, 728
Indian philosophy, 9 operator, 728
Induction, 37 pragmatic, 728
Inductive qualitative, 728
computation, 320 quantum, 728
hierarchy, 414 semantic, 728
inference, 665, 669 Inference, 9
process, 660 Infware
Turing machine, 157 abstract, 820
Industry, 8, 125 Input, 72
Inference, 37, 107 alphabet, 826
Infinity, 822 condition, 253
actual, 822 data, 251
negative, 831
positive, 831
Infological system, 118, 172, 175, 180, register, 821
203, 723–724, 754, 765 relevant, 424
cognitive, 175 rule, 161
Information sensory, 648
cognitive, 42 symbol, 159
concept of, 722 symbolic, 424
essence of, 722 tactile, 547
genetic, 722 test, 260
in the strict sense, 722, 724 variable, 260
operation, 786 visual, 547
phenomenon, 722 word, 160
processing, 4, 175, 264–265, 307, Insight
541, 545, 680, 689, 691 rational, 55
proper, 722, 724, 726 Instruction
science, 5, 8, 154, 526, 728–729 of a Turing machine, 565
storage, 195–196, 650 Integration, 12, 155, 367, 497, 706,
theory, 131, 170, 179, 328, 347, 712, 804
727 Intellectual activity
transmission, 404–405, 407, 409, bounded productive, 652
532, 752–754 productive, 652
triadic mental, 86, 682, 685 reproductive, 652
Intelligence foundational, 242
artificial, 2–3, 79, 151, 264, 295, internal, 242
395, 526, 651, 804, 818 knowledge, 242
collective, 85, 698 probabilistic, 242
Interaction, 42 procedural, 245
Interactive process, 243
process, 201, 301 rehabilistic, 243
Turing machine, 157 Justified
Interface, 673 by observation, 23
Internet, 690 by experience, 33
Interpretation by probable arguments, 33
conceptual, 187 by weather forecast, 222
erotetic, 187 by the recent inspection, 266
operational, 187
Intuition, 676 K
chronometric, 676 Key levels of knowledge, 41
common sense, 676 Knowledge
comprehension, 676 abstract, 28, 66
creativity, 676 acquisition, 2, 9, 17, 605
geometrical, 676 active, 65
global, 676 agentive, 61
interpretation, 676 amount of, 96
mathematical, 670, 673–674 analytic, 34
metaphoric, 676 analytical, 35
perceptual, 676 ancillary, 65
practical, 676 a posteriori, 55
primordial, 675–676 application, 3
reasoning, 676 a priori, 55
set-theoretic, 676 articulated, 56–57
spatial, 676 assertoric, 607
synthetic, 676 auditory, 68
temporal, 676 axiological, 49
base, 56
J biological, 68
Journal, 60, 90, 120, 154, 165 breadth of, 138
Justification, 23 carrier, 80
by authority/opinion, 244 case-specific, 65
by practice/experience, 244 certainty of, 119
by reasoning/thinking, 244 characteristics of, 36, 52, 96
coherence, 242 clarity of, 122
doxatic, 242 classification of, 44, 64–65, 75
epistemological, 215 codified, 65
existential, 244 cognitive, 67–68
external, 242 coherent, 64
faith-based, 245 collective, 322
completeness of, 139 erotetic, 607
complexity of, 53, 125, 129, esoteric, 61
303–304 evaluation, 215
communal, 55 exact, 61
compiled, 64 exactness of, 102
concealed, 168 existential, 49
concept, 65 existential characteristics of, 40,
conceptual, 67 77, 81
conscious, 168 exoteric, 61
conditional, 70 expectational, 65
confidence in, 119 explicit, 91, 705–706, 716, 718,
consistency of, 96, 107 737
correctness of, 97, 112, 117, 231, explicitly absent, 168
242 external, 59
creation, 2, 644–645, 671–673, external characteristics of, 149
705–710, 716, 718, 762 externally explicit, 57–58
cultural, 66 factual, 66
declarative, 48, 69 false, 118, 648
deep, 67 fictitious, 66
deep-level, 67 focal, 59
definitional, 65 formal, 49
depth of, 138 function, 2–3, 804
descriptive, 48, 211, 605 future, 66
descriptive properties of, 91, 149 fuzzy, 61
differentiation of, 50 general, 65
diffused, 66 generality of, 137
digital, 68 geological, 68
dissemination, 717–718 global theory of, 593
domain, 96 graph, 554
domain-specific, 64 group, 58
domain-oriented, 64 habitual, 67
economic, 69 higher, 9
educational, 69 hypothetic, 607
efficiency of, 134 implicit, 56–57, 59, 648, 716, 761
embedded, 53 incommunicable, 57
embodied, 53 incomplete, 265, 272, 758
embrained, 53 indeterminate, 17, 61
economy, 7 inductive, 64
efficiency of, 134, 147, 521, 713 informal, 49
elementary knowledge unit, 179, individual, 58, 69
189, 307, 792 instinctive, 67, 698
empirical, 55, 744 instrumental, 49
encultured, 53, 65 intellective, 61
engineering, 1 intellectual, 69
entailed, 65 internally explicit, 57
September 27, 2016 19:41 Theory of Knowledge: Structures and Processes - 9in x 6in b2334-index page 936

936 Subject Index

intuitive, 31, 650, 670–671, 677, procedural, 67, 69


686 propositional, 48, 98
justification, 215 public, 58, 65
learned, 67 quantum, 68, 93–94
link, 330 quantum theory of, 41, 309–310,
logico-mathematical, 60 341, 362, 754
lower, 9 quantum unit of, 309
management, x, 1 real, 66
mathematical, 24, 124, 158, 256, referential, 55
479, 593, 645, 673–674 regularity of, 405
meaning of, 140 relational, 70
measure, 111 relevance of, 104
memorized, 64 reliability of, 32, 95, 136
methodological, 65 religious, 69
modality of, 100 representation, 5, 41, 80, 97, 131,
molecular, 68 185, 263, 280, 341, 398–400,
moral, 64 449
network, 55, 709 representation of, 141, 177, 400,
node, 330 526, 603
non-referential, 55 representational, 49, 605
objectified, 58 role of, 3, 7, 40, 179
of ideas, 670 scientific, 46–47, 124, 158, 401,
olfactory, 68 600, 634, 638
ontogenetic, 67 sediment, 67
operational, 49 self-referential, 67
organization, 92–93, 301, 711 semiformal, 49
organizational, 58 shared, 65
partial, 71, 139–140, 594, 687 significance of, 131
passive, 65, 67 situational, 67
perceptual, 64 sociological, 53
personal, 65 social, 60
phylogenetic, 67 society, 6
physical, 60, 68 social, 60, 69–70, 131, 548, 644,
political, 68 647
practical, 60, 69 socio-cultural knowledge, 67
pragmatic, 69–70 source of, 32, 42, 136, 176, 433,
precision of, 139 760
printed, 68 space, 185
probabilistic, 61 specific, 66
procedural, 49, 69 spiritual, 69
processing, 4, 147, 239, 538, 697, state, 189
736, 804 state-referential, 55
production, 1, 4, 7–8, 643–644 stockpiled, 67
productive, 60 structure, 547
professional, 53 strategic, 67
studies, 1–2, 9, 39–40, 117, 151, descriptive, 311
722, 804 operations with, 386
structural, 49
structure of, 92 L
subsymbolic, 54 Language
superficial, 67 abstract, 380
subconscious, 59 algorithmic, 425, 427, 496, 615
supplementary, 65 block-scheme, 539
surface, 67 context-free, 304
symbol-type, 53 empty, 119
symbolic, 54, 68 formal, 403
synthetic, 35 functional programming, 426
system, 98 imperative programming, 426
tacit, 55 input, 72
technology, 1 natural, 93, 145, 316, 343,
testing, 215 403–404, 427, 441, 466, 519,
theoretical, 60 605, 730
theory, 142 of mathematics, 403
theory of, 9 of science, 411
transgenerational, 67 object-oriented programming,
true, 648 401, 426, 470
truthfulness, 112 output, 705
unarticulated, 57 procedural programming, 426
undiffused, 66 programming, 50
unit of, 98, 311 regular, 304
utilization, 1 representation of a, 452
validation, 253 working, 129
verification, 253 Learning
visual, 68 automated, 705
wired, 54 machine, 705
written, 68 Length, 204
Knowledge system Level
axiological, 606 attributive, 604
comprehensive, 594, 601, 603, componential, 604
611, 626 of the world, 87
instrumental, 606, 622 productive, 604
logic-linguistic, 612 Limit, 605
model-representation, 617 Limit partial recursive function,
nuclear, 603 665
procedural, 67, 69, 142, 606 Limit recursive function, 665
Knowledge systems Limited recursion, 665
hierarchies of, 636 Link, 329–340, 557
operations with, 359 arrow semantic, 330
relations between, 72 complete semantic, 330
Knowledge unit inner semantic, 330
knowledge, 330 hybrid, 442
semantic, 330 hypermodal, 443
Linguistic illocutionary, 443
representation, 148, 177, 401, inclusive, 443
446 Indian, 430
structure, 820 inductive, 443
Logic infinitary, 443
algebraic, 441 informal, 443
Aristotelian, 441 information-theoretic, 443
autoepistemic, 442 intensional, 443
Avicennian, 432 intermediate, 443
belief, 442 interpretability, 443
Buddhist, 430 intuitionistic, 443
business, 442 Jain, 429
categorical, 442 Judaic, 433
classical, 441 labeled, 443
combinatory, 442 linear, 443
complex, 442 local, 443
computability, 442 many-sorted, 443
conclusion, 442 many-valued, 443
conditional, 442 market, 440
conservative, 442 mathematical, 441, 463, 485,
constructive, 442 493–494, 555
contemporary, 440 modal, 443
continuous, 442 nominal, 443
cumulative, 442 non-symbolic, 443
default, 442 of computation, 426
deontic, 442 of decision, 443
dependence, 442 of diagnosis, 440
deviant, 442 of discovery, 443
dialectic, 440 of epistemology, 440
discussive, 442 of formal inconsistency, 443
discursive, 442 of names, 443
dynamic, 442 of physics, 440
epistemic, 442 of science, 440
equational, 442 operational, 443
erotetic, 442 paraconsistent, 443
European, 430, 432 polar, 443
fibring, 442 polyadic, 443
first-order, 442 polymodal, 443
formal, 440 Port-Royal, 358, 438
free, 442 possibilistic, 443
functor, 442 predicate, 441
fuzzy, 442 probabilistic, 443
higher-order, 442 process, 443
prohairetic, 443 Manifold, 278
propositional, 439, 441 Many-worlds theory, 291
provability, 443 Mathematics, 35, 62, 86, 130, 156
quantum, 443 Mathematical, 41
relevant logic, 443 Meaning, 145
resource, 443 contextual, 147
scholastic, 440 denotation, 147
second-order, 443 estimate, 147
slash, 443 implicational, 147
spatial, 443 relation, 147
stationary, 443 sender, 145
substructural, 443 sentence, 145
superintuitionistic, 443 speaker, 145
symbolic, 440 textual, 147
temporal, 443 Measure
tense, 443 direct complexity, 131
triadic, 443 dual complexity, 131
type-free, 443 dynamic complexity, 131
universal, 440 functional complexity, 131
Logical of information, 118, 802
calculus, 281 integral complexity, 129
implication, 438 semantic, 118
language, 97, 119, 141, 167, 179, static complexity, 131
181, 189–190, 290, 400, 496, Memory
659 bubble, 200
positivism, 46, 218 collective, 85
prevariety, 281 computer, 90, 178, 191, 197, 199,
quasivariety, 281 310, 400, 465
reasoning, 429, 487 core, 199
semantics, 144, 181, 429, 441 declarative, 51
theory, 280 echoic, 195–196
variety, 186, 191–192, 270–274, eidetic, 52, 195
281, 294, 305 electro-acoustic, 200
Logic-based programming languages, episodic, 51
401, 426 explicit, 51
flash, 199
M haptic, 196
Machine human, 195–197
finite state, 568 iconic, 196
inductive Turing, 822 implicit, 51
limit Turing, 825 internal, 197
Turing, 825 long-term, 51, 195–196, 785, 789
Macrolevel, 41 main, 197–198
Macrosystem, 307 molecular, 200
Maintenance, 6 n-inductive, 825
perceptual, 197 temporal, 228
personal, 197 Mode
phase-change, 200 accepting, 567
primary, 195 computing, 598
procedural, 51 concurrent, 599
secondary, 195 deciding, 587
semantic, 51–52 of activity, 571
semiconductor, 199 Model
sensory, 195–196 constructive, 205
short-term, 195–196, 785, 789 many-world, 291
skill, 197 mathematical, 287
social, 85 multidimensional structured, 265
static, 199 structure-nominative, 600
structured, 822 Modeling, 6
system, 197 Multigraph, 575
thin-film, 199 Multiplicity, 178
twistor, 199 Multiset, 178
vacuum tube, 200
working, 542, 821–822 N
Megalevel, 41–42, 328, 593, 601, 603 Name, 345
Mental world, 84–87, 554, 792 Named
Mentality, 84–85, 87, 89–91, 244, data, 759
477–479, 540, 545, 650 set, 487, 814
Metadata, 40, 151–155, 551, 553, 759 Natural number, 35
Meta-epistemology, 164 Network
Meta-ethics, 164 abstract neural, 425
Metaheuristic, 163 artificial neural, 558
Metaknowledge, 40, 151–153, 158, architecture, 155
162, 165–167, 353, 636 assertional, 520
Metalanguage, 167 definitional, 520
Metalogic, 164 executable, 521
Metaphilosophy, 164 hybrid, 521
Metarule, 159 implicational, 520–521
Methodology, 158 learning, 521
Metric, 833 linguistic, 521
Microlevel, 41, 307 neural, 694
Microsystem, 307 semantic, 32, 520
Minimization, 197 statement, 521
Misconception, 83 theory, 520
Modality unsupervised neural, 544
anticipation, 76 Neuron, 419
bygone, 76 Node
confidential, 76 accepting, 568
current, 76 generating, 568
of knowledge, 100 transducing, 568
Norm, 834–835 closed epistemic information, 783
Novelty, 96, 150 content, 767
content epistemic information,
O 767
∆-operator, 741 content thesaurus information,
Object 768
abstract, 83, 312 continuous epistemic
constructive, 96 information, 777
natural, 312 contracting epistemic
Object-oriented programming (see information, 784
also OOP), 401, 426, 470, 756 copying epistemic information,
Observers 785–786
abstract, 83 database, 484
external, 83 deletion bond, 770
internal, 83 deletion content, 768
real, 83 deletion weight, 770
Ontology, 42, 84 emotional information, 171
Operation epistemic information, 170, 193,
arithmetical, 516 766
effective, 424 generation epistemic
functional, 492 information, 770
information, 786 information, 191–193
integral, 813 inner epistemic information, 767
order, 305 instructional information, 171
topological, 412, 415 knowledge information, 171
Operational decomposition, 548 linear, 780
Operational device, 822 linear epistemic information,
Operational programming language, 775, 781–782
160 logical, 489
Operator, 49 mixed epistemic information,
addition bond, 770 767
addition content, 768 modal, 489
addition epistemic information, monotone epistemic
788–790 information, 783
addition weight, 770 moving, 769, 785
additive epistemic information, moving epistemic
788 information, 785, 788–789
antitone epistemic information, information theory, 328
784 of adding weights, 773
binary, 360 of deleting weights, 773
bond, 767 of substituting weights, 773
bond epistemic information, 767 permanent epistemic
bond thesaurus information, 768 information, 772
bounded epistemic information, projection, 487
777, 780 (p, q)-continuous epistemic
information, 777 port, 557
replication epistemic variable, 260
information, 768
schema, 550 P
selection, 487 Part
semipermanent epistemic algorithmic, 608, 625
information, 772 aspect, 608, 618
stationary epistemic combination, 608, 631
information, 772 evaluation, 608, 631
stratified epistemic fragment, 608, 628
information, 787 linguistic, 608, 613–614
strictly antitone epistemic logical, 602, 608, 613
information, 784, 787 nominalistic, 608
strictly monotone epistemic nomological, 602, 608, 618
information, 784 operating, 608, 625
structured information, 766 operator, 608, 625
substantiation bond, 770 scaling, 608, 631
substantiation content, 768 system, 608, 628
substitution bond, 770 Partial order, 811
substitution content, 768 Partial projection, 208
substitution weight, 770 Partial recursive function, 341
tense, 432 Perception, 9
transformation content, 768 Philosophy, 158
transformation weight, 770 Physics, 361, 403, 416
transformation epistemic Physical, 423
information, 770 Potential infinity, 825
uniform epistemic Potential process, 109
information, 777 Port, 559
uniformly bounded epistemic Power
information, 780 accepting, 670
value-changing weight, 771 computing, 304
weight, 767 decision, 309
weight epistemic information, set, 237, 809
767 Pragmatics, 145
Operationalism, 46 Precision, 139
Operationism, 46 Predicate, 308
Opinion, 244 Prefix function, 801
Order, 165 Preprocessing, 692–693
Ordinal, 234, 813 Principle
Organization, 1, 7–8, 149, 656, 700, methodological, 791
719 of minimal action, 766
Output named data, 329, 758
alphabet, 826 ontological, 817
condition, 253 special transformation, 722–723,
data, 260 746
Problem R
decidable, 512 Range, 558
halting, 128, 565 Real number, 132, 214, 315, 774,
undecidable, 301 832–833
Procedural programming, 426 Realizability, 265
Procedure, 461 Recursion, 172, 661
Process, 11, 48 Recursive relation, 351
algorithmic, 125 Reduction, 301, 367
business, 56 Relation, 147
cognitive, 29 Relationship, 275
complex, 796 Relevance, 96
dynamic, 78 Reliability, 95
integration, 125 Representation
problem-solving, 67 of a language, 141
thermonuclear, 68 operational, 187
Processor, 821 Representational type, 217
Product, 4, 363 Resource, 4, 127
Production, 427–428 Restriction, 392
Program, 126–127, 131 Result
Property final, 530
abstract, 41, 91 of acceptation, 796
ascribed, 113, 311–312 of computation, 825
contextual, 92 Revision, 634
descriptive, 91 Robustness, 124
existential, 40 Rule
intellectual, 4 construction, 160
intrinsic, 113, 312 data transformation, 159
of knowledge, 74, 101, 133 deduction, 555
natural, 311 derivation, 465
relational, 92 execution, 159
Proposition, 176 formation, 487
Psychology logical, 493, 495
social, 85 midot, 433
of a Turing machine, 158
Q of correspondence, 422, 599
Quality of inference, 265, 284, 402
primary, 312 of interpretation, 429
secondary, 312 of propagation, 465
Quantifier production, 487
existential, 486 syntactic, 258, 427
universal, 486
Quantum, 316 S
Query, 303, 481 Scale, 170
Question, 481, 600 Scenario, 624
Schema, 539 formal, 144
basic, 559 frame, 144
closed, 567 lexical, 144
conceptual, 553 linguistic, 144
content, 549 logical, 144
database, 553 operational, 145
dataflow, 551 structural, 144
deterministic, 567 Semiotic
external, 553 triangle, 356
formal, 548 Sequential composition, 153, 512–513,
function, 549 623, 769, 786
ideological, 548 Set
image, 546 acceptable, 72
interaction, 541 computable, 364
internal, 553 empty, 187
linguistic, 549 enumerable, 511
mental, 541, 547 fuzzy, 103
motor, 541 named, 330, 814
network, 543 Sign, 345
non-deterministic, 567 conceptual, 343
open, 567 material, 343
perceptual, 541 Size, 542–543
port, 559 Social
potentially open, 568 context, 3, 463, 701
process, 549 influence, 701
role, 549 intelligence, 85, 644, 647
theory, 544 mentality, 85
social, 548 memory, 85
specification, 552 psychology, 85
star, 553 recognition, 4
static, 549 Society, 80
theory, 555 Sociology, 347, 415, 556
XML, 551–552 Software, 97
Script, 527 Software metric, 220
Search, 163 Solution, 709
Semantic Source, 154
link, 329 Space
node, 329 abstract epistemic, 178–182
network, 538–539, 725, 782 conceptual, 204
Semantic link network, 330 conceptual epistemic, 206
Semantics epistemic, 178–182
axiomatic, 145 Euclidean, 676, 834
cognitive, 144 knowledge, 187
conceptual, 144 linear, 827
denotational, 145 metric, 833–835
organization, 566 external, 415, 569
structure, 566 global space, 566
symbolic epistemic, 178–182 inner, 415
topological, 189, 833 intermediate, 415
topological vector, 189 internal, 415, 569
vector, 208 knowledge, 603
weighted epistemic, 206 mathematical, 157
weighted propositional of a system, 368
epistemic, 207 order, 412
Spatial, 676 outer, 415
Spatiality, 737 pure epistemic, 725
Specification, 26, 258, 274, 457 region space, 542
Square space, 566
Structure-Information-Matter- state, 820
Energy (see also SIME), 727, static, 605
746, 749 static space, 606
Specialized concurrent processing, 426 static synthetic, 804
State static systemic, 604
accepting, 794 topological, 412
configuration, 346 weighted epistemic, 725
final, 820 Structured
initial, 174 memory, 822
inner, 247 memory of order n, 305
space, 192, 201 programming, 305
start, 159, 380, 820 proposition, 483
structure, 820 Subconcept, 185
Statement, 19, 24 Subset, 362
Stoic philosophy, 686 Substance, 476
Stratification, 84, 284 Subsystem
Stratum axiological, 606
assertoric, 609 of bonds, 610
erotetic, 609 instrumental, 606
heuristic, 609 logic-linguistic, 606
hypothetic, 609 model-representing, 606
String, 819 of ties, 639
Structuralism, 415 operational, 639
Structuration, 84, 576, 593 pragmatic-procedural, 602
Structure, problem-heuristical, 602
action, 820 procedural, 602
algebraic, 412, 415 structural, 603
cognitive, 725 Syllogism, 16, 25
concept, 614 Symbol, 355
dynamic, 605 binary, 360
dynamic space, 566 empty, 360
epistemic, 170 of the alphabet, 380
Symbolic, 441 of Emotions, 649
Synchronization, 421 of conditions, 667
System of equalities, 328
abstract, 83 of instructions, 320
AI, 42 of named sets, 323
algebraic, 807 of names, 323
artificial, 41 of operations, 329
assertoric knowledge, 603 of philosophy, 12
automated, 695 of properties, 324
axiom, 497 of Rational Intelligence (see also
brain, 684 SRI), 649, 681–686
cerebral, 677 of Reasoning, 244, 649, 681
cognitive, 177–178 of symbols, 676
cognizing, 648 of Will and Instinct (see also
complex, 123 SWI), 681–686, 770
comprehensive knowledge, 77 operational knowledge, 603
computer, 395, 468 partial knowledge, 594
computational, 46 physical, 656, 753
conceptual, 170 processing, 566
decision support, 712 religious, 11
descriptive knowledge, 805 representation, 806
epistemic, 178 representational knowledge,
expert, 237, 265 Samkhya, 589
formal, 164, 404 search, 279
functioning, 49 sign, 298
geographic information, 712 software, 264
global knowledge, 594 Solar, 71
heuristic knowledge, 603–604 static, 168
holistic, 13 storage, 148
inference, 277 structuration, 658
infological, 684 structured, 159
information processing, 566 symbolic, 25, 369
informational, 56 technical, 354
intelligent, 175 theory, 258
knowledge, 178 theoretical, 369
knowledge-based, 712 threefold, 458
limbic, 677 transition, 125, 326
logical, 745 Vedanta, 11–12
mathematical, 754 Systemic
nervous, 85 context, 125
neuropsychological, 683
nuclear knowledge, 603 T
numerical, 614 Table, 528
of Affective States (see also Table form, 725
SAS), 649, 681–686 Tape, 159
Task, 45, 259 scientific, 325
Technology Theory-element, 368
computer, 698 Time
information, 687 physical, 676
software, 254 system, 127
theory of, 248 Tool, 487
Temporal, 51, 76 Topology, 184, 567, 833
Temporality, 737 discrete, 825
Testing Total function, 478
action, 253 Transmission, 6
ad hoc, 255 Triad, 208, 281, 329
alpha, 255 attributive, 219
auditing, 252 Bacon/Augustine Sign, 345
beta, 255 balanced sign, 350
black-box, 254 communication, 745
decision, 253 Data-Knowledge, 760
diagnostic, 253 Data-Information-Knowledge, 42
exploitation, 251–252 dyadic sign, 347
exploratory, 255 dynamic sign, 357
functional, 252–253 epistemic, 172
load, 254 epistemological, 142
logical, 252 evaluation, 220–221
performance, 251–252 existential, 84–89, 554
processual, 252–253 functional, 261
recovery, 255 fundamental, 341, 448, 487, 496,
simulation, 251 814
smoke, 255 general Popper, 89
static, 252 information, 407
stress, 254 language, 405
structural, 252 Locke triad of the world, 89
verification, 253 Locke triad of science, 89
volume, 255 process, 262
white-box, 254 reflection epistemic, 173
Theorem, 62 sign, 345
Theory substantiation epistemic, 173
of average knowledge, 346 symbolic, 330
fuzzy set, 103, 558 triangular, 587
global, 647 Triadic
many-worlds, 489 approach, 261
mathematical, 247 concept, 357
mathematical schema, 555 context, 456
of information, 365 dynamic model, 417
of knowledge, 3 logic, 444
physical, 270 mental information, 257
quantum, 208 model, 457
relation, 357 Word
representation, 367 empty, 367
sign model, 348 infinite, 498
stratification, 387 finite, 657
structure, 405, 467 length of a, 217
typology, 247 Working tape, 647
Triune Brain, 677 World
Componential Triune Brain, 677 actual, 398
True, 52 business, 720
absolutely, 52 conceptual, 658
Turing machine, 157 existing, 347
Typology extended mental, 87
acquisition, 67 external, 89
dynamic, 67 information, 739
triadic, 67 material, 84–88
mental, 84–88
U mind-independent, 96
Unconscious mystical, 147
collective, 89 natural, 348
Understanding, 57 objective, 647
Universal, 324 outside, 702
University, 696 physical, 84–88
Utilization, 598 of ideas, 324
of forms, 301
V of structures, 84–88
perfect, 654
Validation, 169
possible, 478
Value, 328
real, 415
Variable, 558
structural, 84–88
Vector bundle, 211, 780
view, 650
Verification, 369
W
Weight, 208
Wisdom, 598