Theory of
Knowledge
Structures and Processes
Mark Burgin
University of California, Los Angeles, USA
World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TAIPEI • CHENNAI • TOKYO
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance
Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy
is not required from the publisher.
Printed in Singapore
Contents
Preface
Acknowledgments
About the Author
1. Introduction
1.1. The role of knowledge in the contemporary society
1.2. A brief history of knowledge studies
1.3. Structure of the book
September 27, 2016 19:41 Theory of Knowledge: Structures and Processes - 9in x 6in b2334-fm page vi
9. Conclusion
Appendix
A. Set theoretical foundations
B. Elements of the theory of algorithms
C. Elements of algebra and category theory
D. Numbers and numerical functions
E. Topological, metric and normed spaces
Bibliography
Preface
in these studies:
— How are these models used to better understand and utilize computers and the Internet, cognition and education, communication and computation?
Acknowledgments
Dr. Mark Burgin received his M.A. and Ph.D. in mathematics from
Moscow State University, which was one of the best universities in
the world at that time, and Doctor of Science in logic and philos-
ophy from the National Academy of Sciences of Ukraine. He was
a Professor at the Institute of Education, Kiev; at International
Solomon University, Kiev; at Kiev State University, Ukraine; and
Head of the Assessment Laboratory in the Research Center of Sci-
ence at the National Academy of Sciences of Ukraine. Currently he
is working at UCLA, USA. Dr. Burgin is a member of the New York
Academy of Sciences and an Honorary Professor of the Aerospace
Academy of Ukraine. Dr. Burgin is a member of the Science Advi-
sory Committee at Science of Information Institute, Washington.
He was the Editor-in-Chief of the international journals Integra-
tion and Information, as well as an Editor and Member of Editorial
Boards of various journals. Dr. Burgin has done research, published, and taught courses in various areas of mathematics, artificial
intelligence, information sciences, system theory, computer science,
epistemology, logic, psychology, social sciences, and methodology of
science. He originated theories such as the general theory of informa-
tion, theory of named sets, mathematical theory of schemas, theory
of oracles, hyperprobability theory, system theory of time, theory
of non-Diophantine arithmetics and neoclassical analysis (in mathe-
matics) and made essential contributions to fields such as founda-
tions of mathematics, theory of algorithms and computation, theory
of knowledge, theory of intellectual activity, and complexity studies.
Chapter 1
Introduction
Knowledge is power.
Francis Bacon
together with information is becoming the key tool not only for fur-
ther development but also for present survival in conditions of the
knowledge economy.
To describe the role of knowledge in contemporary society, Fritz Machlup (1902–1983) introduced the concept of the knowledge economy in his book (Machlup, 1962). The knowledge economy is a knowledge-driven stage of economic development, based on knowledge and succeeding a phase based on physical assets such as workforce, energy, and matter. Knowledge is in the process of taking the place of the workforce and other resources, making it possible to get better results with fewer of them. Knowledge is substitutable for substance and money, meaning that knowledge can replace, to some extent, capital, labor, or physical materials. Namely, knowledge allows one to use less money, labor, or physical material than would be possible without this knowledge. As a result, the created wealth is measured less by the output of work itself and more by the general level of scientific and technological development (Jaffe and Trajtenberg, 2002). Amidon explained that knowledge about how to produce different products and provide services, as well as their embedded knowledge, is often more valuable than the products and services themselves or the materials they contain (Amidon, 1997).
That is why Machlup (1962) defined knowledge as a commod-
ity, developing techniques for measuring the magnitude of its pro-
duction and distribution within a modern economy. He correctly
assumed that all devices involved in knowledge production, dissem-
ination, and utilization have to be taken into account in these mea-
surements.
A diversity of activities linked to research, education, and services tends to assume increasing importance in the knowledge economy. Besides, the importance of knowledge in economic activity is not confined to the high-tech sectors; it also pervades modes of organization of production and commerce in apparently low-tech sectors, which have likewise been essentially transformed. Toffler explains that knowledge is a wealth and force multiplier, in that it augments what is available or reduces the amount of resources needed to achieve a
(Rao, 1998):
Darsana. The first is of eight kinds and the second, of four” (Shah,
1990). Namely, sensation (Darsana) is of four kinds:
• Visual (Cakshusa)
• Non-visual (Acakshusa)
• Clairvoyant (Avadhi Darsana)
• Pure (Kevala)
Each piece of knowledge is experienced with reference to its
characteristic (Dharma) and its substratum (Dharmin). In addition,
Jainas discerned two kinds of knowledge: direct knowledge and indi-
rect knowledge. Direct knowledge does not demand the medium of
another knowledge in contrast to indirect knowledge.
According to Jainas, it is possible to obtain indirect knowledge
by five techniques: recollection, recognition, Reductio ad Absurdum
(Tarka), inference, and syllogism.
— Non-cognition (Anupalabdhi).
— Cause in itself (Svabhava).
— Effect (Karya).
In addition, the Nyaya believed that the five sense organs — eye,
ear, nose, tongue, and skin — have the five elements — light, ether,
earth, water, and air — as their field, with corresponding qualities
of color, sound, smell, taste, and touch.
According to logicians, there are also three ways for getting invalid
knowledge (Ayatharthanubhava or Bhrama):
Tarka includes:
came through our senses is not knowledge of the thing itself but only
knowledge of the imperfect changing copy of the form. Thus, the only
possible way to acquire correct knowledge of the forms was through
reasoning as senses could provide only opinion.
For a long time, philosophers were not able to clearly and consistently explain what Plato's forms, or ideas (eidos), are. Only at the end of the 20th century was it discovered that the concept of structure provides a scientific representation of Plato's forms, while the existence of the world of structures was postulated and proved (Burgin, 1997; 2010; 2012).
Another great philosopher, Aristotle (384–322 B.C.E.), studied problems of knowledge, categorizing it with respect to knowledge domains (objects) and the relative certainty with which one could know those domains (objects). He assumed that certain
domains (such as in mathematics or logic) permit one to have abso-
lute knowledge that is true all the time. However, his examples of
absolute knowledge, such as two plus two is always equal to four or all
swans are white, failed when new discoveries were made. For instance,
the statement two plus two always equals four was disproved when
non-Diophantine arithmetics were discovered (Burgin, 1977; 1997c;
2007; 2010c). The statement “all swans are white” was invalidated
when Europeans came to Australia and found black swans.
According to Aristotle, absolute knowledge, e.g., mathematical
knowledge, is characterized by certainty and precise explanations.
However, unlike Plato and Socrates, Aristotle did not demand cer-
tainty in everything. Some domains, such as human behavior, do
not permit precise knowledge. The corresponding vague knowledge
involves expectations, chances, and imprecise explanations. Knowl-
edge that falls into this category is related to ethics, psychology, or
politics. One cannot expect the same level of certainty in politics or
ethics that one can demand in geometry or logic. In his work Ethics,
Aristotle defines the difference between knowledge in different areas
in the following way:
“we must be satisfied to indicate the truth with a rough and general
sketch: when the subject and the basis of a discussion consist of matters
which hold good only as a general rule, but not always, the conclusions
reached must be of the same order ... For a well-schooled man is one
who searches for that degree of precision in each kind of study which
the nature of the subject at hand admits: it is obviously just as foolish
to accept arguments of probability from a mathematician as to demand
strict demonstrations from an orator”.
(Aristotle, 1984)
All A are B.
C is A.
Therefore, C is B.
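This syllogistic pattern can be read set-theoretically: "All A are B" says that A is a subset of B, and "C is A" says that C is a member of A. A minimal Python sketch (the example sets are invented for illustration, echoing the whale example Aristotle's method suggests):

```python
# "All A are B" read as A ⊆ B; "C is A" read as C ∈ A.
# The sets below are invented for illustration.
B_animals = {"whale", "tiger", "mouse", "swan"}
A_mammals = {"whale", "tiger", "mouse"}

assert A_mammals <= B_animals      # All A are B (subset)
c = "whale"
assert c in A_mammals              # C is A (membership)
assert c in B_animals              # therefore, C is B
```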
there must be one or several “first principles”, from which all other
knowledge follows and which themselves do not follow from anything.
However, if these first principles do not follow from anything, then
by Aristotle, they cannot count as knowledge because there are no
reasons or premises we can give to prove that they are true. Aristotle
suggests that these first principles are a kind of intuition of the facts
and ideas we recognize in experience.
Aristotle believes that knowledge domains or objects are struc-
tured hierarchically. Consequently, he treats definition as a process
of division and specification. For instance, defining whale, we observe
that whales are animals, which is the genus to which they belong.
Then we search for various conditions, which distinguish whales from
other animals such as: whales live in water, unlike tigers, and they
are very big, unlike mice.
While true knowledge is derived from knowledge of first principles, actual argument and debate are much less immaculate. When two people argue, they do not go back to first principles to ground every claim but simply suggest premises to which they both acquiesce. The essence of debate is to find premises your opponent can agree with and then show that conclusions different from your opponent's position follow necessarily from these premises. In the Topics, Aristotle
classifies the kinds of conclusions that can be drawn from different
kinds of premises, while in the Sophistical Refutations, he explores
various logical ploys used to trick people into accepting a faulty line
of reasoning.
Thus, we can see that Aristotle strives to organize knowledge in
the manner of a well-structured, architectural construction with a
firm foundation of unshakable first principles and an upper struc-
ture of propositions firmly attached to the foundation by steadfast
inference. In such a way, Euclid's geometry and virtually any
axiomatic mathematical system is built. It has a foundation of defi-
nitions, postulates, and axioms or common notions as first principles
and an upper structure of deduced propositions — theorems and
lemmas.
In the first millennium, the distinguished philosopher Abū Naṣr al-Fārābī (870–950) also studied knowledge and its sources. He
— certain knowledge that the thing exists, which is called the knowl-
edge of existence;
— certain knowledge of the cause of the thing, which is called knowl-
edge why;
— certain knowledge of both together.
In its turn, sense perception comes from the actual things them-
selves, while the human mind does not have inborn ideas. At the same
time, people possess a natural ability to abstract knowledge. When
people see an object such as a tree, the actual tree is what the person
observes, perceiving its reflection through the senses. The mind knows that
what it is seeing corresponds to reality and as a result, an individual
attains knowledge about the tree. The form of the real object, e.g.,
a tree, is not generated by the senses, or the mind of the perceiver,
but is impressed by the object itself. All external knowledge obtained
through sense is combined by the common sense, which causes the
unifying process of the senses into a single perception, which is then
presented to the mind. The mind forms a representation sent to the
intellect, which generates the universal idea from it by abstraction
and names it by a word.
The great French philosopher René Descartes evaluates knowledge in terms of doubt and certainty, distinguishing certain rigorous
knowledge (scientia) and knowledge with lesser grades of certainty
(persuasio). Descartes posits that doubt and certainty are comple-
mentary feelings — when certainty increases, doubt decreases, and
vice versa. Consequently, according to Descartes, knowledge is con-
viction based on a reason so strong that it could never be shaken by
any stronger reason. As a result, knowledge becomes absolute and
utterly indefeasible. Descartes writes:
(1) God
(2) Material/corporeal substance
(3) Other created substance.
However, Descartes discards options (1) and (3), leaving only the second possibility for sensations.
Descartes' basic principle of doubting any knowledge claim, as well as every attempt at justification of knowledge claims, gained much support in traditional epistemology. It has been assumed
that it is vital to find a bedrock of certain knowledge immune to
all possible doubt. However, this search did not bring conclusive
solutions.
criteria work for most people. Those for whom they fail are called
“mad” and are widely disregarded.
However, according to contemporary psychological and neuro-
physiological theories, an individual sees a tree only if she receives
tree sense-data through her senses. However, the reception of sense-
data is not enough. To see a tree as a tree, the brain has to correctly
process sense-data, building a relevant image and assigning the cor-
rect name “tree” to this image. Besides, it is possible to know what
a tree is by observing not trees but their images, e.g., pictures or
movies with trees. After images of trees are stored in the memory,
an individual can see a tree in her dreams. In this case, the brain
simulates acceptance of sense-data from a physical tree or from its
picture.
Alfred Korzybski
Chapter 2
Knowledge Characteristics
and Typology
For millennia, philosophers, who were the first to study the problems of knowledge, have asserted that knowledge is a kind of belief, reducing all knowledge to declarative or descriptive knowledge and actively imposing this opinion on all others. Even now, the majority of philosophers believe in this declaration. For instance, such
experts in contemporary philosophical theories of knowledge as John
Pollock and Joseph Cruz write: “Epistemology might better be called
“doxastology”, which means the study of beliefs” (Pollock and Cruz,
1999).
However, this understanding was challenged. At first, physi-
cists discovered operational knowledge. It was Nobel laureate Percy
Williams Bridgman (1882–1961), who insisted that conceptual
knowledge is, in essence, operational. He wrote that “any concept
problems centered especially around the term “belief.” First, the defini-
tion reduces knowledge to propositional knowledge, “knowing-that,” thus
occluding other knowledge types like practical “know-how” (knowledge
embodied in routinized dispositions), affective states (knowledge embod-
ied in emotion and sentiment), and phenomenological acquaintance (con-
ferred, for instance, by sensory experience or artistic representation).”
1. Symbol-type knowledge.
2. Embodied knowledge.
3. Embrained knowledge.
4. Encultured knowledge.
— Personal knowledge.
— Communal knowledge.
— Network knowledge.
— General knowledge.
— Domain-specific knowledge.
— General knowledge.
— Procedural interpretation knowledge
— Declarative knowledge.
— Procedural knowledge.
— Special knowledge.
— Compiled knowledge.
— Coherent knowledge.
Dynamic typology
following way:
— Surface or superficial knowledge is easily accessible knowledge.
— Deep or deep-level knowledge is knowledge that is hard to obtain.
For instance, according to the accessibility criterion, knowledge that the Sun gives light is surface knowledge, while knowledge that the Sun radiates its energy due to thermonuclear processes is deep knowledge.
The representability trait sets these two types apart on a different foundation:
— Surface or superficial knowledge is knowledge about outward
attributes of the knowledge object (domain).
— Deep or deep-level knowledge is knowledge about imperative prop-
erties of the knowledge object (domain).
For instance, according to the representability trait, knowledge
that the Earth is big is surface knowledge, while knowledge that the
Earth is a planet is deep knowledge.
One of the criteria for knowledge classification is the nature of the
carrier of knowledge. According to this criterion, we have the follow-
ing types: digital knowledge, printed knowledge, written knowledge,
symbolic knowledge, molecular knowledge, quantum knowledge, and
so on. For instance, digital knowledge is represented by digits, printed
knowledge is contained in printed texts, and quantum knowledge is
contained in quantum systems.
Another criterion is the type of the system that acquires infor-
mation used in knowledge formation. According to this criterion,
we have the following types: visual knowledge, auditory knowledge,
olfactory knowledge, cognitive knowledge, genetic knowledge, and so
on. For instance, according to neuropsychological data, 80% of all information that people get through their senses is visual information, 10% is auditory information, and only 10% comes through the other senses.
One more criterion for knowledge classification is the specific
domain this knowledge is about. According to this criterion, we have
the following types: physical knowledge, biological knowledge, genetic
from the first four postulates. Being unable to achieve this, math-
ematicians were becoming frustrated and tried some indirect meth-
ods. Girolamo Saccheri (1667–1733) tried to prove a contradiction by
assuming that the first four postulates were valid, while the fifth pos-
tulate was not true (Burton, 1997). To do this, he developed an essential part of what is now called non-Euclidean geometry. Thus, he could have become the creator of the first non-Euclidean geometry.
However, Saccheri was so sure that the only possible geometry was Euclidean geometry that at some point he claimed to have found a contradiction and stopped further reasoning. Actually, his contradiction was only
applicable in Euclidean geometry. Of course, Saccheri did not realize
this at the time and he died thinking he had proved Euclid’s fifth
postulate from the first four. Thus, knowledge about non-Euclidean
geometries was available but not acceptable to Saccheri. As a result,
he missed an opportunity to obtain one of the most outstanding results in the whole of mathematics.
A more tragic situation due to biased comprehension involved
such outstanding mathematicians as Niels Henrik Abel (1802–1829)
and Carl Friedrich Gauss (1777–1855). As history tells us (Bell,
1965), there was a famous long-standing problem of solvability in
radicals of an arbitrary fifth-degree algebraic equation. Abel solved this problem by proving the impossibility of such a solution in the general case. In spite of being very poor, Abel himself paid for printing
a memoir with his solution. This was an outstanding mathematical
achievement. That is why Abel sent his memoir to Gauss, the best
mathematician of his time. Gauss duly received the work of Abel
and without deigning to read it he tossed it aside with the disgusted
exclamation “Here is another of those monstrosities!”
Moreover, people often do not want to hear the truth because it is unacceptable to them. For instance, the Catholic Church suppressed knowledge that the Earth revolves around the Sun because people who were in control (the Pope and others) believed that this knowledge contradicted what was written in the Bible.
Relations between these three types of knowledge show that any
available knowledge is also accessible. However, not any accessible
knowledge is available and not any acceptable knowledge is available
two millennia ago somebody was asked what the Sun was doing.
Historical sources allow us to presume that he or she would say:
The Sun is giving light and is rotating around the Earth. (2.1)
World of Structures
in any case, the mental world is different from the physical world and
constitutes an important part of our reality.
Psychological experiments and theoretical considerations show
that the Mental World is stratified into three spheres: cognitive or
intellectual sphere, affective or emotional sphere and effective or reg-
ulative sphere. This stratification is based on the extended theory
of triune brain and the concept of the triadic mental information
(Burgin, 2010).
The Mental World has elements and components, which are sim-
ilar to elements and components of the Physical World. In a natural
way, the Mental World has its mental space, mental objects (struc-
tures), and mental representations (Burgin, 1998a).
It is also necessary to explain that the World of Structures directly
corresponds to Plato’s World of Ideas/Forms because ideas or forms
might be associated with structures. Indeed, on the level of ideas,
it is possible to link ideas or forms to structures in the same way
as atoms of modern physics may be related to atoms of Democritus
and Leucippus. Only recently did modern science come to a new understanding of Plato's ideas, representing the global world structure as the
Existential Triad of the world, in which the World of Structures is
much more comprehensible, exact, and explored in comparison with
the World of Ideas/Forms. When Plato and other adherents of the
World of Ideas/Forms were asked what idea or form was, they did
not have a satisfactory answer. In contrast to this, many researchers
have been analyzing and developing the concept of a structure (Ore,
1935; 1936; Bourbaki, 1948; 1957; 1960; Bucur and Deleanu, 1968;
Corry, 1996; Burgin, 1997; 2010; 2011; 2012; Landry, 1999; 2006). It
is possible to find the most thorough analysis and the most advanced
concept of a structure in (Burgin, 2012). As a result, in contrast to Plato, mathematics and science have been able to elaborate a sufficiently exact definition of a structure and to prove the existence of the world of structures, demonstrating by means of observations and experiments that this world constitutes the structural level of the world as a whole. Informally, a structure is a collection of symbolic
(abstract) objects and relations between these objects. Each system,
phenomenon or process in nature, technology or society has some
World of Signs
Doctrine of Signs
Knowledge Item K —representation→ Object (Domain) D,
Knowledge Item K ←reflection— Object (Domain) D. (2.2)
Note that n may be not only a number but also a vector (when we consider correctness components separately) or an element of a partially ordered set. Let us consider some examples.
Proposition p1:
7 ≥ 3.
Correctness weights:
w1(p1) = 0 because it is not a syntactically correct English sentence.
w2(p1) = 1 because it is true that 7 ≥ 3.
w3(p1) = 1 because the conventional (Diophantine) arithmetic is a model for p1.
Proposition p2:
A bear is an animal.
Correctness weights:
w1(p2) = 1 because it is a syntactically correct English sentence.
w2(p2) = 1 because it is true that a bear is an animal.
w3(p2) = 1 because the set of all animals is a model for p2.
Proposition p3:
(p1; 0, 1, 1)
(p2; 1, 1, 1)
(p3; 1, 0, 0)
Note that Example 2.3.3 shows that the most popular attribute
of knowledge — truth — is only one kind of knowledge correctness.
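The weight triples above can be modeled directly. The following Python sketch (the class and field names are my own, not the book's notation) records the three correctness components of Example 2.3.3 and shows that truth (w2) is just one coordinate of the correctness vector:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposition:
    text: str
    w1: int  # syntactic correctness: a well-formed English sentence?
    w2: int  # semantic correctness: is the statement true?
    w3: int  # model existence: is there a model for the statement?

    @property
    def weights(self):
        return (self.w1, self.w2, self.w3)

p1 = Proposition("7 >= 3", 0, 1, 1)                # true, but not an English sentence
p2 = Proposition("A bear is an animal.", 1, 1, 1)  # correct in all three senses

assert p1.weights == (0, 1, 1)
assert p2.weights == (1, 1, 1)
# p1 is true (w2 = 1) yet not fully correct: truth is one component among three.
```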
using only means from the theory U . Nevertheless, the theory U may
be internally incorrect but externally correct if there are other means
to prove its consistency.
Testability as a component of the logical modality of knowledge
correctness reflects how and to what extent it is possible to estimate (e.g., support by evidence) or infer the correctness of a given knowledge item.
Testability is essentially important for operational knowledge. For
instance, it is possible to treat a computer program as a potentially
correct operational knowledge item if it is possible to test it finding
and correcting all deficiencies. In this case, the possibility of defi-
ciency correction is also a correctness condition.
An important type of knowledge correctness is truthfulness. There are different ways to introduce truthfulness of knowledge. All of them involve three types of truthfulness functions: domain-oriented, reference-oriented, and attitude-oriented. According to the first approach, we have the following model of truthfulness.
A system T, or, as it is now fashionable to call it, an agent A, that has knowledge K about the domain (object) D is considered. Then the truthfulness of K means (a condition from C) that the description that K gives of D is true. Thus, the truthfulness tr(K, D) of
the knowledge K about the domain D is a function of two variables
that takes two values — true and false.
In addition, the function tr(K, D) gives conditions for differenti-
ating knowledge from similar structures, such as beliefs, descriptions
or fantasies.
Knowledge truthfulness, or domain related correctness, shows
absence of distortions in knowledge representation of its domain.
Thus, truthfulness is closely related to accuracy of knowledge, which reflects how close given knowledge is to absolutely exact knowledge. However, truthfulness and accuracy of knowledge are different
properties. For instance, statements “π is approximately equal to
3.14” and “π is approximately equal to 3.14159” are both true, i.e.,
their truthfulness is equal to 1. At the same time, their accuracy is
different. The second statement is more accurate than the first one.
We see that conventional truthfulness can indicate only two possibil-
ities: complete truth and complete falsehood.
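The distinction between binary truthfulness and graded accuracy can be sketched in Python (the tolerance for "approximately equal" is my own assumption, not the book's):

```python
import math

def truthful(x, tol=0.01):
    """Binary truthfulness of 'pi is approximately x' (assumed tolerance)."""
    return abs(math.pi - x) <= tol

def accuracy_error(x):
    """Graded accuracy: distance to the exact value; smaller is more accurate."""
    return abs(math.pi - x)

# Both statements are true, so their truthfulness is the same...
assert truthful(3.14) and truthful(3.14159)
# ...but the second statement is more accurate than the first.
assert accuracy_error(3.14159) < accuracy_error(3.14)
```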
— correct knowledge,
— incorrect knowledge,
— unverified knowledge.
[Diagram (2.5), relating t, f, p, U, and C; the figure is not recoverable from the extracted text.]
1. “π is equal to 3.”
2. “π is equal to 3.1.”
3. “π is equal to 3.14.”
4. “π is equal to 3.1415926535.”
5. “π is equal to (4/3)².”
— no credible evidence,
— some credible evidence,
— a preponderance of evidence,
— clear and convincing evidence,
— beyond reasonable doubt,
— beyond any shadow of a doubt, what is usually recognized as an impossible standard to meet.
more and more complex systems. On the other hand, the develop-
ment of engineering and social organization resulted in building more
and more complex technical systems and developing more and more
complex social systems.
All this is directly related to knowledge. Studying complex sys-
tems in nature, society, and technology, scientists, as a rule, need ade-
quately complex knowledge systems to represent and model systems
they study. To create and invent complex systems, engineers, includ-
ing software and social engineers, need sufficiently complex opera-
tional knowledge.
Second, complexity serves as a measure of needed resources. In
turn, needed resources correlate with system efficiency. Indeed, when
two systems give the same results but the first one demands fewer resources than the second, then the first system is more efficient than the second. Thus, complexity becomes a measure of efficiency. For instance, knowledge that demands less time or effort to understand is more efficient for learning. At the same time, simple knowledge usually demands less time and effort to understand than complex knowledge. For instance, it is easier to under-
stand that 2 + 2 = 4 than the statement that there are infinitely
many prime numbers.
Pager (1970) defines efficiency of computation as the value that
is inversely proportional to complexity of the same computation. In
the same way, it is possible to define efficiency of any process as the
value that is inversely proportional to complexity of this process.
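Pager's inverse relation can be written as E(P) = 1/C(P) for a process P with positive complexity C(P). A minimal sketch (the numeric complexity scores are invented placeholders, not values from the book):

```python
def efficiency(complexity):
    """Pager-style efficiency as the reciprocal of a positive complexity."""
    if complexity <= 0:
        raise ValueError("complexity must be positive")
    return 1.0 / complexity

# Invented scores: understanding "2 + 2 = 4" vs. understanding the proof
# that there are infinitely many prime numbers.
simple_fact, euclid_proof = 1.0, 25.0
assert efficiency(simple_fact) > efficiency(euclid_proof)  # simpler => more efficient
assert efficiency(4.0) == 0.25
```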
Efficiency is a key problem and a pivotal characteristic of any activity. Inefficient systems are ousted by more efficient ones. Consequently, problems of efficiency are vital to any society and any individual. Many great societies, such as the Roman Empire and the British Empire, perished because they had become inefficient. However, there are many different criteria of efficiency, and to understand this important and, at the same time, complex phenomenon, it is necessary
to use mathematical methods. Such methods are provided by the
mathematical theory of complexity.
Moreover, many other properties of systems are related to com-
plexity. For example, Carlson and Doyle (2002) investigate relations
Note that in the first three cases, knowledge K plays the role of
the used resource and its complexity is a significant characteristic of
this resource.
Problem complexity is very important because problems represent
a pivotal form of erotetic knowledge. People solve problems all the
time and solution of some of these problems is vital for individuals,
organizations, and communities. Thus, complexity of some problems
is essentially important for people. If it is impossible to solve a prob-
lem with given resources, we assume that it has infinite complexity.
The halting problem for Turing machines is an example of a problem
with infinite complexity for operational knowledge in the form of Tur-
ing machines since we know that it has no solution in the class of all
Turing machines. However, for operational knowledge in the form of
inductive Turing machines, this problem has finite complexity. This
shows that, in general, problem complexity is a relative property,
which essentially depends on knowledge used for solving the problem.
Definitions 2.3.25 and 2.3.27 imply that complexity is always com-
plexity of doing something and although complexity is attributed
to a system, it is a principal characteristic of a process and of the
operational knowledge in the form of an algorithm if the process is
determined by an algorithm. However, it is possible to extend the
constructions of such measures to complexity of arbitrary processes
and through processes to arbitrary systems. For instance, if we take
some non-algorithmic process, such as cognition, then it is possible
user. Note that not only an individual but also a software system or
a robot can be a knowledge user.
Figure: a knowledge item with its denotation/domain and its connotation/content.
The Greek word meta means “beside”, “after”, “later than” or “in
succession to”. Often people understand that something with the
name “metaX” occurs later on the timeline than X. However, a more
popular meaning in contemporary languages is “beside” or “after.”
For instance, carpus is the wrist, while metacarpus is the part of the
human hand between the wrist and the fingers or we may say, after
the wrist and before the fingers. In a similar way, metatarsus is the
part of the human foot after the tarsus and before the toes.
∂^m ui (t, x)/∂t^m = ∑_{|α|+k ≤ m, k < m} Pα (aαji (t, x), u(t, x), Dx^α Dt^k ui (t, x))
— relation between knowledge and its domain, e.g., nature for nat-
ural sciences or society for social sciences;
— development of scientific knowledge;
— structure of scientific knowledge.
qh ai → aj qk ,
qh ai → qk R,
qh ai → qk L.
machine head to the right cell of the tape, and L indicates a move of
the Turing machine head to the left cell of the tape.
Each rule directs a separate step of computation of the corre-
sponding Turing machine. These rules are data transformation rules.
In addition, any algorithm has execution rules, also called metarules.
These rules instruct how to apply data transformation rules to data,
i.e., to words in the case of Turing machines.
For instance, the transition function (or relation) δA of a deter-
ministic finite automaton A is the system of data transformation
rules. However, to apply these rules and to organize a computational
process, it is necessary to have metarules for data transformation
rule application. In addition, it is necessary to have metarules that
specify how input is given to the automaton and how its output is
obtained (Burgin, 2005). Usually such metarules are described but
not formalized (cf., for example, (Hopcroft et al., 2007)). Here we
describe metarules for deterministic finite automata in an explicit
form.
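The division of labor between data transformation rules and metarules can be sketched in code. This is an illustrative Python model, not Burgin's formal description: the transition function `delta` holds the data transformation rules, while the driver loop encodes the metarules that govern how input is given, how the rules are applied, and how the output is obtained.

```python
# Minimal sketch of a deterministic finite automaton (illustrative names).
from typing import Dict, Set, Tuple

def run(delta: Dict[Tuple[str, str], str],
        start: str, finals: Set[str], word: str) -> bool:
    state = start                       # metarule: begin in the start state
    for symbol in word:                 # metarule: read input left to right
        if (state, symbol) not in delta:
            return False                # metarule: undefined transition rejects
        state = delta[(state, symbol)]  # apply one data transformation rule
    return state in finals              # metarule: accept iff a final state

# Automaton accepting binary words with an even number of 1s:
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}
assert run(delta, "even", {"even"}, "1100")
assert not run(delta, "even", {"even"}, "010")
```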
For an accepting deterministic finite automaton A, we have the
following metarules:
and its methods and approaches. In the analytical tradition, the term
metaphilosophy is mostly used to name commentaries and research on
previous works as opposed to original contributions towards solving
philosophical problems. The growing research in the field of metaphi-
losophy in general and its directions, in particular, led to the estab-
lishment of the special journal Metaphilosophy in 1970.
Finally, here is one more example. The knowledge presented in
this book is metaknowledge because it is knowledge about knowledge.
In studies of metaknowledge, it is useful to introduce orders of
metaknowledge.
Knowledge, the domain of which is not knowledge, e.g., a physical
system, has the zero order as metaknowledge.
Knowledge the domain of which is metaknowledge of the zero
order is metaknowledge of the first order.
Knowledge the domain of which is metaknowledge of the first
order is metaknowledge of the second order.
Knowledge the domain of which is metaknowledge of the order n
is metaknowledge of the order n + 1.
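The recursive definition of orders of metaknowledge above can be sketched in code; the `Knowledge` class and the example items are illustrative assumptions for this sketch, not a formalization from the book.

```python
# Sketch: a knowledge item modeled only by its domain, which is either
# a non-knowledge object (order 0) or another knowledge item.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Knowledge:
    name: str
    domain: Optional["Knowledge"] = None  # None: domain is not knowledge

def meta_order(k: Knowledge) -> int:
    """Order n + 1 if the domain is metaknowledge of order n; 0 otherwise."""
    if k.domain is None:
        return 0
    return meta_order(k.domain) + 1

equation = Knowledge("model of a physical system")         # order 0
metamodel = Knowledge("metamodel of the model", equation)  # order 1
assert meta_order(equation) == 0
assert meta_order(metamodel) == 1
assert meta_order(Knowledge("m2", metamodel)) == 2
```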
We see that orders of metaknowledge depend on knowledge inter-
pretation. Because knowledge and especially metaknowledge have
different interpretations, it is possible that the same knowledge item
has different orders as metaknowledge. For instance, it is possible to
interpret the differential equation (2.6) as model knowledge about
some physical system. In this case, it has the zero order as meta-
knowledge. When Equation (2.6) is treated as a metamodel for the
differential equation (2.7), which is model knowledge about gases,
Equation (2.6) has the first order as metaknowledge. When Equation
(2.7) is treated as a metamodel for Equations (2.8) and (2.9), which
constitute the Carleman system or for Equations (2.10)–(2.12), which
constitute the Broadwell model, Equation (2.6) has the second order
as metaknowledge. For instance, languages of metamathematics are
also metalanguages of the second and higher orders because many
mathematical languages are metalanguages of the first and higher
orders.
As classical logic contains metaknowledge about declarative
knowledge in the form of propositions and predicates, metalogic
of four classes:
1. Conscious knowledge is knowledge K such that the knower has
and knows that she/he/it has K, i.e., the knower has assertoric
metaknowledge about K.
2. Unconscious knowledge is knowledge K such that the knower has
but does not know that she/he/it has K, i.e., the knower does not
have metaknowledge about K.
3. Explicitly absent (unknown) knowledge is knowledge K such that
the knower knows that she/he/it does not have K, i.e., the knower
has erotetic metaknowledge about K.
4. Concealed knowledge is knowledge K such that the knower does
not know that she/he/it does not have K.
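The four classes above are determined by two conditions: whether the knower has K, and whether the knower correctly knows his/her/its situation with respect to K. A minimal sketch with an illustrative function name and boolean encoding:

```python
# Sketch: classifying knowledge K for a knower. has_metaknowledge means
# the knower correctly knows whether he/she/it has K.

def classify(has_k: bool, has_metaknowledge: bool) -> str:
    if has_k:
        # Knower has K: conscious if aware of it, unconscious otherwise.
        return "conscious" if has_metaknowledge else "unconscious"
    # Knower lacks K: explicitly absent if aware of the lack (erotetic
    # metaknowledge), concealed otherwise.
    return "explicitly absent" if has_metaknowledge else "concealed"

assert classify(True, True) == "conscious"
assert classify(False, False) == "concealed"
```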
Chapter 3
Knowledge Evaluation
and Validation in the Context
of Epistemic Structures
Figure: two diagrams relating an epistemic structure ES and its domain D: representation and reflection in the first, interpretation and substantiation in the second.
At the same time, the same symbolic system can represent dif-
ferent epistemic structures. For instance, a statement can represent
knowledge, a belief or an idea.
A symbolic representation of an epistemic structure (cognitive
epistemic unit) is called a symbolic epistemic structure (symbolic
epistemic unit). Thus, it is possible to discern structural cognitive
Diagram (3.2): relations between an epistemic structure, its domain, and its representation (model): referring, interpreting, substantiating, and attributing.
• G is a set of objects,
• M is a set of attributes,
• I is a relation between G and M , in which (g, m) ∈ I means that the object g has the attribute m, i.e., I is the connection between objects and attributes.
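A small formal context can be sketched directly from this definition. The derivation operator below is the standard one of formal concept analysis (the attributes shared by every object in a set); the objects, attributes, and incidence relation are made up for the example.

```python
# Sketch of a formal context (G, I, M) with illustrative data.
G = {"sparrow", "penguin", "bat"}
M = {"flies", "bird", "mammal"}
I = {("sparrow", "flies"), ("sparrow", "bird"),
     ("penguin", "bird"),
     ("bat", "flies"), ("bat", "mammal")}

def common_attributes(objects):
    """Derivation operator: attributes m with (g, m) in I for every g."""
    return {m for m in M if all((g, m) in I for g in objects)}

assert common_attributes({"sparrow", "bat"}) == {"flies"}
assert common_attributes({"sparrow", "penguin"}) == {"bird"}
```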
Thus, we can see that formal contexts are named sets (cf.,
Appendix A) and it is possible to apply to them different named
set operations constructed in (Burgin, 2011). There are also vari-
ous relations between formal contexts that come from the named set
Figure: a hierarchy of knowledge systems K1 , K2 , K3 , . . . , Kn−1 , Kn distributed across sensory memory, short-term memory, and long-term memory.
2. Cache
2a. Micro operations cache forms the level 0 and can be several KiB in contemporary computers, where one KiB (kibibyte) is 2^10 = 1024 bytes.
2b. Instruction cache is a part of the level 1 and can be 128 KiB
in contemporary computers.
2c. Data cache is another part of the level 1 and can be 128 KiB in contemporary computers with an access speed of about 700 GiB/second, where one GiB (gibibyte) is equal to 2^10 = 1024 MiB.
2d. Instruction and data (shared) cache forms the level 2 and can be around 1 MiB in contemporary computers with an access speed of around 200 GiB/second, where one MiB (mebibyte) is equal to 1024 KiB = 1,048,576 bytes and one gibibyte is equal to 2^10 = 1024 MiB.
2e. Lesser shared cache forms the level 3 and can be several
MiB in contemporary computers with the access speed about
100 GiB/second.
2f. Larger shared cache forms the level 4 and can be 128 MiB
in contemporary computers with the access speed about
40 GiB/second.
3. Main memory (or primary storage) can be a number of gibibytes with an access speed of around 10 GiB/second.
4. Disk storage (or secondary storage) can be a number of tebibytes (TiB) with an access speed (from a solid-state drive) of about 600 MiB/second, where one tebibyte is equal to 1024 GiB.
5. Nearline storage (or tertiary storage) can be a number of exbibytes (EiB) with an access speed of about 160 MiB/second, where one exbibyte is equal to 2^20 TiB.
6. Offline storage.
Such memory stratifications are linear, reflecting the access time
with the fast CPU registers at the top and the slow hard drive at the
bottom.
Another, although related, stratification is induced by different
electronic devices, which include CPU registers, on-die SRAM caches,
• Magnetic disks, such as floppy disks, used for off-line storage, and the hard disk drive, used for secondary storage.
• Magnetic tape data storage, used for tertiary and off-line storage.
bubble memory, while magnetic tapes were often used for secondary
storage.
Another popular type of storage is optical discs, which store information in deformities on the surface of a circular disc, reading this information by illuminating the surface with a laser diode and observing the reflection. In modern computers, there are the following kinds of optical storage devices:
reaching some goal, e.g., for reaching Mars, the efficiency of e
for understanding e, the efficiency of e for understanding people, the
efficiency of e for building some object A and the efficiency of e for
obtaining knowledge about some object D.
Another example of efficiency is given by such an epistemic struc-
ture int as knowledge of mathematical integration and its efficiency
for a student C. In this case, efficiency of int for getting a high grade
in the class is rather high, while efficiency of int for getting from
home to the college is rather low (usually it is zero).
Complexity comprises such properties as compression, under-
standability, and hardness.
In addition to attributes that constitute dimensions, there are
other properties/attributes of epistemic structures and epistemic
units. For instance, an important epistemic attribute is novelty with
respect to the infological system of an intelligent agent (cognitive sys-
tem). This attribute is a (fuzzy) function of another attribute that
shows the time of attribution of epistemic structure to the given
infological system.
Epistemic structures (knowledge units) taken without their prop-
erties are pure. Properties/attributes of structures (knowledge units)
and characteristics of objects they reflect (represent) induce weights
of these epistemic units. For instance, taking such an epistemic unit
as a statement P , we can consider its properties (attributes): (1) time
when this statement P was made; (2) person(s) who made this state-
ment; (3) people who supported this statement; (4) time needed to
prove validity (truthfulness) of P ; (5) truth value of P ; and so on.
The value of the first attribute is the first weight w1 of P . It is
a numerical value. The value of the second attribute is the second
weight w2 of P . It is a nominal value, i.e., it is a name or names of
people who made statement P , or a functional value, i.e., it is the
indicator function of the names of people who made statement P .
The value of the third attribute is the third weight w3 of P . It is
similar to the second weight. The value of the fourth attribute is the
fourth weight w4 of P . It is numerical and similar to the first weight.
The value of the fifth attribute is the fifth weight w5 of P and it
takes two values from the two-element set {true, false} if we utilize
The values of the first weight are positive numbers, while the
values of the second and third weights are functions (cf., Burgin,
2005).
Dimensions, which are basic complex properties, also add weights
to knowledge units as a specific kind of epistemic structures. For
instance, the knowledge unit A represented by the sentence “Now it
is ten o’clock in the morning” is the symbolic part of pure knowl-
edge. However, it can be true or false depending on the current time.
This estimate defines the weight of the knowledge unit in the relevance
dimension. Namely, if the estimate “true” is represented by 1 and
the estimate “false” is represented by 0, then the weight of A is 1
when it is really ten o’clock in the morning and the weight of A is 0
when this is wrong.
As a result, dimensions and other properties/attributes bring us
from pure epistemic structures (knowledge units) to weighted epis-
temic structures (weighted knowledge units). To determine weights,
we fix a vector of attributes (A1 , . . . , Ak ). Then we change a pure epistemic structure (pure knowledge unit) e to the weighted epistemic structure (weighted knowledge unit) B = (e; w1 , . . . , wk ), where wi is the weight of e with respect to the attribute Ai (the dimension i). The value of the weight wi of the epistemic structure e with respect to the attribute Ai reflects to what extent e has the attribute Ai . When the attribute Ai is an abstract property in the sense of (Burgin, 1985; cf., also Section 5.2), then wi is the value of this property for the epistemic structure e.
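The passage from a pure epistemic structure e to a weighted knowledge unit B = (e; w1 , . . . , wk ) can be sketched as a simple data structure; the concrete attribute names and values below are illustrative assumptions, not data from the book.

```python
# Sketch: a weighted knowledge unit pairing a pure epistemic structure
# (here, a statement) with the values of its attributes (its weights).
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class WeightedUnit:
    e: str                   # the pure epistemic structure
    weights: Dict[str, Any]  # wi = value of the attribute Ai for e

P = WeightedUnit(
    e="There are infinitely many prime numbers.",
    weights={
        "stated_by": "Euclid",   # nominal weight (name of the author)
        "truth_value": True,     # weight from the set {true, false}
    },
)
assert P.weights["truth_value"] is True
```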
Figure 3.8. A simple graphical example of a weighted conceptual epistemic space WECS: a quadrilateral ABCD with the statements A: ABCD is a square; B: ABCD is a rectangle; C: ABCD is a rhombus.
d((e; w1 , . . . , wn ), (l; u1 , . . . , um ))
= dv ((w1 , . . . , wn ), (u1 , . . . , um )) + d(e, l)   (3.3)
when n = m;
d((e; w1 , . . . , wn ), (l; u1 , . . . , um ))
= dv ((w1 , . . . , wn ), (u1 , . . . , um , 0, . . . , 0)) + d(e, l)   (3.4)
when n > m;
d((e; w1 , . . . , wn ), (l; u1 , . . . , um ))
= dv ((w1 , . . . , wn , 0, . . . , 0), (u1 , . . . , um )) + d(e, l)   (3.5)
when n < m.
In finite-dimensional vector spaces, we can take the Euclidean metric as dv , defining the distance dv ((x1 , . . . , xn ), (y1 , . . . , yn )) = ((x1 − y1 )^2 + · · · + (xn − yn )^2 )^1/2 .
However, as we discussed before, it is natural to assume that the
fiber F is an infinite-dimensional vector space. In this case, we simply
postulate existence of a metric in it. Usually, metrics in vector spaces
are defined by norms (Rudin, 1991; Burgin, 2013). Note that in this
case, we use only formula (3.3) because all fibers Fa have the same
dimension.
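Formulas (3.3)–(3.5) can be sketched in code: the shorter weight vector is padded with zeros, and the Euclidean metric on weights is added to a metric on the base. The function names and the toy base metric are illustrative choices for this sketch.

```python
# Sketch of the distance between weighted epistemic structures.
import math

def d_v(w, u):
    """Euclidean metric on weight vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(w, u)))

def distance(e_w, l_u, d_base):
    """Formulas (3.3)-(3.5): pad the shorter vector with zeros, then
    add the weight distance d_v to the base distance d_base."""
    (e, w), (l, u) = e_w, l_u
    n = max(len(w), len(u))
    w = list(w) + [0.0] * (n - len(w))  # padding realizes (3.5)
    u = list(u) + [0.0] * (n - len(u))  # padding realizes (3.4)
    return d_v(w, u) + d_base(e, l)

# Toy base metric on epistemic structures: the discrete metric.
d0 = lambda a, b: 0.0 if a == b else 1.0
assert distance(("e", [3.0]), ("e", [0.0, 4.0]), d0) == 5.0
```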
c ≤ a + b because d is a metric in W ,
r ≤ d + k because dv is a metric in the vector space F .
Consequently,
d((e; w1 , . . . , wn ), (h; v1 , . . . , vp )) = r + c ≤ (d + a) + (k + b)
= d((e; w1 , . . . , wn ), (l; u1 , . . . , um )) + d((l; u1 , . . . , um ), (h; v1 , . . . , vp )).
There are other ways to define metrics in the spaces Wesw and
Wsesw based on metrics in the base and fiber of the correspond-
ing vector bundle. For instance, it is possible to use the following
formulas:
d((e; w1 , . . . , wn ), (l; u1 , . . . , um ))
= (dv ((w1 , . . . , wn ), (u1 , . . . , um ))^2 + d(e, l)^2 )^1/2   (3.6)
when n = m;
d((e; w1 , . . . , wn ), (l; u1 , . . . , um ))
= (dv ((w1 , . . . , wn ), (u1 , . . . , um , 0, . . . , 0))^2 + d(e, l)^2 )^1/2   (3.7)
when n > m;
d((e; w1 , . . . , wn ), (l; u1 , . . . , um ))
= (dv ((w1 , . . . , wn , 0, . . . , 0), (u1 , . . . , um ))^2 + d(e, l)^2 )^1/2   (3.8)
when n < m.
Structures in the spaces Wes , Wses , Wesw , and Wsesw are inherited by epistemic spaces and their states. In particular, a weighted epistemic space E and each of its states is a vector bundle E = (E, pE , Ee ) with the metric d in the base E.
Recall that a set X in a metric space E with a metric d is called bounded if there is a number k such that d(a, b) < k for any points a and b from X.
To study bounded sets in metric spaces that are spaces of vector
bundles, we need additional concepts.
➢ Testing by computation;
➢ Testing by inference;
➢ Testing by application.
Figure: a model (metaknowledge) is obtained by modeling a knowledge system, which is interpreted in a knowledge domain; the similarity between model and domain ranges from complete relevance to complete irrelevance.
beliefs about beliefs and attitudes play an even more crucial role.”
As a result, beliefs are thoroughly studied in psychology and logic. Belief systems are formalized by logical structures that introduce structure into belief spaces and calculi, as well as by belief measures that evaluate attitudes to cognitive structures and are built in the context of fuzzy set theory. Methods of logics of beliefs (cf., for example, (Munindar and Nicholas, 1993) or (Baldoni et al., 1998)) and of belief functions (Shafer, 1976) have been developed. Logical methods,
theory of possibility, fuzzy set theory, and probabilistic technique
form a good base for building CIF systems in computers.
Diagram (3.11): a scale ranging from complete efficiency to complete inefficiency.
(FM1) g(Ø) = 0.
(FM2) For any A and B from B, the inclusion A ⊆ B implies g(A) ≤
g(B).
+ · · · + (−1)^{n+1} Bel(A1 ∩ · · · ∩ An ),
where Ā is the complement of A.
For each set A ∈ P(X), the number Bel(A) is interpreted as the
degree of belief (based on available evidence) that a given element x
of X belongs to the set A. Another interpretation treats subsets of
X as answers to a particular question. It is assumed that some of the
answers are correct, but we do not know with full certainty which
ones they are. Then the number Bel(A) estimates our belief that the
answer A is correct.
One more class is plausibility measures, which are related to belief
measures.
Belief measures and plausibility measures are dual: for any belief measure Bel(A), Pl(A) = 1 − Bel(Ā) is a plausibility measure, and for any plausibility measure Pl(A), Bel(A) = 1 − Pl(Ā) is a belief measure.
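The duality between belief and plausibility measures can be illustrated with a basic mass assignment in the style of Dempster–Shafer theory; the frame and the mass values below are made up for the example.

```python
# Sketch: belief and plausibility induced by a basic mass assignment m
# on subsets of a frame X, with a check of Pl(A) = 1 - Bel(complement of A).
X = frozenset({"a", "b", "c"})
m = {frozenset({"a"}): 0.5, frozenset({"b", "c"}): 0.3, X: 0.2}

def bel(A):
    """Total mass of focal sets contained in A."""
    return sum(v for B, v in m.items() if B <= A)

def pl(A):
    """Total mass of focal sets intersecting A."""
    return sum(v for B, v in m.items() if B & A)

A = frozenset({"a"})
assert abs(pl(A) - (1 - bel(X - A))) < 1e-12  # duality holds
```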
When Axiom (Be2) for belief measures is replaced with a stronger
axiom
Bel(A ∪ B) = Bel(A) + Bel(B) whenever A ∩ B = Ø,
we obtain a special type of belief measures, the classical probability
measures (sometimes also referred to as Bayesian belief measures).
In a similar way, some special kinds of probability measures are
constructed (Dempster, 1967).
It is possible to consider dynamical systems in spaces with a belief,
plausibility or possibility measure. These systems allow one to model
mental processes, cognition, and knowledge processing in intelligent
systems. For example, it is possible to consider data- and knowledge
bases as dynamical systems with a belief measure and study their
behavior. Such a belief measure can reflect user beliefs in correctness
and validity of data, as well as user beliefs in truth and groundedness
of knowledge systems.
A belief system, either of an individual or of a community, con-
tains not only beliefs, but also disbeliefs, i.e., beliefs that something
is not true. To represent this peculiarity, extended belief measures
are introduced in (Burgin, 2010).
(EBe0) X = X + ∪ X −
(EBe1) Ø, X + , X ∈ A, Bel(Ø) = 0, Bel(X + ) = 1 and Bel(X) = 0.
(EBe2) For any system {Ai ; i = 1, 2, . . . , n} of sets from A ∩ X + and
any n from N , we have
Bel(A1 ∪ · · · ∪ An ) ≥ ∑_{i=1}^{n} Bel(Ai ) − ∑_{i<j} Bel(Ai ∩ Aj )
+ · · · + (−1)^{n+1} Bel(A1 ∩ · · · ∩ An ).
all other beliefs from basic beliefs. When beliefs are represented by
statements in a conventional logical system, then inference is done
using deduction rules of this system.
Pollock and Cruz (1999) argue that ontologically it is impossible
to have basic beliefs. However, in spite of this, people psychologically
tend to form a system of basic beliefs in some area deducing other
beliefs from the basic beliefs. Examples of basic beliefs are axioms in mathematics or theoretical physics (cf., for example, (Gödel, 1940; Fraenkel and Bar-Hillel, 1958; Burgin, 2011) for mathematics and (Dirac, 1930a; von Neumann, 1932; Mackey, 1963; Atiyah, 1990; Perez Bergliaffa et al., 1998; Streater and Wightman, 2000; Hardy, 2001; Schottenloher, 2008) for physics), axioms in computer science, where there is such a discipline as the axiomatic theory of algorithms (Burgin, 2010d), religious credos, laws in social systems, ethical principles, etc.
Note that even in mathematics, axioms do not represent absolute
truth and absolute knowledge. For instance, it is possible to build
consistent set theory with the continuum hypothesis as an axiom
(Gödel, 1940) and it is also possible to establish non-contradictory
set theory where the negation of the continuum hypothesis is an
axiom (Cohen, 1966).
Another example of internal logical justification is a grounded
selection (formation) of a system of basic beliefs and comparison of
all other beliefs with basic beliefs. A belief is considered justified if
there is consistency with the system of basic beliefs. When beliefs
are represented by statements in a logical system, then consistency
means logical consistency, in which addition of the compared belief
does not cause contradictions.
Now operational knowledge is mostly represented by computer
software and hardware. Consequently, correctness and reliability have
become critical issues in software and hardware production and uti-
lization as a result of the increased social and individual reliance
upon computers and computer networks. A study by the National Institute of Standards and Technology found that software defects alone resulted in $59.5 billion in annual losses. In turn, problems caused by software
faults often translate into loss of potential customers, lower sales,
higher warranty repair costs, and losses due to legal actions from
customers.
Thus, we see that production of highly reliable software, on time
and within budget, is a constant challenge for the software industry.
Just as we know how disruptive failures can be, we all know how often
schedules slip. To overcome these inconveniences, software assurance
(SwA) is employed.
According to NASA (NASA-STD-2201-93), software assurance is
a “planned and systematic set of activities that ensures that soft-
ware processes and products conform to requirements, standards, and
procedures. It includes the disciplines of Quality Assurance, Quality
Engineering, Verification and Validation, Non-conformance Report-
ing and Corrective Action, Safety Assurance, and Security Assurance
and their application during a software life cycle.”
It is useful to distinguish three types of software assurance:
➢ Verification testing;
➢ Diagnostic testing;
➢ Action/decision testing.
Ad hoc testing is done without any formal Test Plan or Test Case creation. Ad hoc testing helps in deciding the scope and duration of the various other kinds of testing, and it also helps testers learn the application prior to starting any other testing.
Exploratory testing is similar to ad hoc testing and is done in order to learn/explore the application.
Usability testing is done if the user interface is important and needs to be specific for a specific type of user.
Smoke testing, also called sanity testing, is performed to check whether the application is ready for further major testing and works properly without failing up to the least expected level.
Recovery testing is chiefly done in order to check how fast and how well the application can recover from any type of crash or hardware failure. The type or extent of recovery is specified in the requirement specifications.
Volume testing is done to test the efficiency of the application. A huge amount of data is processed through the application (which is being tested) in order to check the extreme limitations of the system.
In user acceptance testing, the software is handed over to the user in order to find out whether the software meets the user's expectations and works as it is expected to.
In alpha testing, the users are invited at the development center
where they use the application and the developers note every partic-
ular input or action carried out by the user. Any type of abnormal
behavior of the system is noted and rectified by the developers.
In beta testing, the software is distributed as a beta version to the users, and the users test the application at their sites. As the users explore the software, any exception or defect that occurs is reported to the developers.
All considered kinds and types of operational knowledge testing
have their counterparts in the area of descriptive and representational
knowledge testing.
Testing in science is called an experiment. Some time ago, a new form of scientific testing was created. It is called computer simulation. In natural science in general and in physics, in particular,
experiment provides the evidence that tests and grounds scientific
of all system crashes (Maxion and Olszewski, 1998), hence, are wor-
thy of serious attention. In general, an exception is any unexpected
condition or event, usually environment- or data-driven, which would
cause an otherwise operational program to fail. Many different types
of conditions can cause exceptions including an empty data file, insuf-
ficient memory, type mismatch, wrong command-line argument, pro-
tection violation, and bad data returned from another program.
Conditional correctness encompasses all known types of correct-
ness and verification techniques. For instance, model-determined cor-
rectness means that conditions from C determine properties of a
relevant (usually, fixed) model of the system R. Building a relevant
formal model of R provides for using computers for correctness verifi-
cation. Different approaches exist for doing this. For instance, model
checking is an algorithmic verification technique in which efficient
programs are used to check, in an automatic way, whether a desired
property holds for a finite model of a software system. By the classi-
fication developed in (Burgin and Debnath, 2006), it is a kind of soft-
ware descriptive correctness. With respect to finite automata models,
this kind of software correctness is considered in (Burgin and Tandon,
2003).
This conditional definition makes the concept of correctness flex-
ible, efficient, and adaptive to changes. The type of conditions in C
determines the type of software correctness.
A software system is traditionally called functionally correct, if it
realizes functions prescribed by its specification. In this case, func-
tional specification gives conditions on software’s behavior. This def-
inition of correctness assumes that a specification of the system is
available and that it is possible to determine unambiguously whether
or not the software meets the specification.
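Functional correctness, as described here, can be sketched by expressing the specification as an executable condition and checking the program against it; the integer square root function and the helper names are illustrative assumptions for this sketch.

```python
# Sketch: functional correctness as conformance to an executable specification.

def int_sqrt(n: int) -> int:
    """The software system under test: integer square root by counting up."""
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

def meets_specification(f, inputs) -> bool:
    """Specification: for every n >= 0, f(n)^2 <= n < (f(n) + 1)^2."""
    return all(f(n) ** 2 <= n < (f(n) + 1) ** 2 for n in inputs)

# The system is functionally correct on the sampled inputs:
assert meets_specification(int_sqrt, range(100))
```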
A software system is traditionally called textually correct if its
text/code does not have errors. In this case, the condition is that the
software text complies with the syntactic rules of the corresponding
programming language. This definition of correctness is the simplest.
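For Python programs, such a textual correctness check can be sketched with the built-in compile(), which verifies compliance with the language's syntactic rules without executing the code; the helper name is an illustrative choice.

```python
# Sketch: textual correctness = the source complies with the syntactic
# rules of the programming language (here checked via compile()).

def textually_correct(source: str) -> bool:
    try:
        compile(source, "<string>", "exec")  # parse only, do not execute
        return True
    except SyntaxError:
        return False

assert textually_correct("x = 1 + 2")
assert not textually_correct("x = 1 +")  # incomplete expression
```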
A software system is descriptively correct if it corresponds to its
specification. In this case, specification describes conditions that the
program must satisfy. Note that descriptive correctness includes (is
where elements DI , WI , EI from the first triad are input data, program (software system) and environment, respectively; DP , WP , EP are data, program (software system) and environment during the computational process; and DO , WO , EO are output data, program
where CDI , CWI , CEI are input conditions on data, program (soft-
ware system) and environment, respectively; CDP , CWP , CEP are
conditions on data, program (software system) and environment dur-
ing the computational process; and CDO , CWO , CEO are output
conditions on data, program (software system) and environment,
respectively.
A functional triad has three correctness interpretations:
1. If the first part is true, then the second and the third parts are also true.
2. If the first and the second parts are true, then the third part is also true.
3. If the first part and a fragment of the second part are true, then the complementary fragment of the second part and the third part are also true.
doing that, they cut themselves off from the powerful new discoveries of
computer science. Yes, it is true that we can describe the operation of
a computer’s hardware in terms of simple logical expressions. But no,
we cannot use the same expressions to describe the meanings of that
computer’s output — because that would require us to formalize those
descriptions inside the same logical system. And this, I claim, is some-
thing we cannot do without violating that assumption of consistency.”
(Minsky, 1991a).
α(x) : β(x)
-----------
γ(x)
In some cases, the fourth stage can cause new inconsistencies. This
demands going back to the second stage and repeating the whole
cycle.
The most general operational framework for managing inconsis-
tency was developed by Nuseibeh et al. (1991). Its modification is
presented in Figure 3.15.
Nuseibeh et al. (1994) use the term inconsistency to denote any
situation in which two descriptions do not obey some relationship that
is prescribed to hold between them. A precondition for inconsistency
is that the descriptions in question have some area of overlap. A rela-
tionship between descriptions can be expressed as a consistency con-
dition, against which descriptions can be checked. In current practice,
some such consistency rules are captured in various project docu-
ments, others are embedded in tools, and some are not captured
anywhere.
Figure 3.15 (fragment): measurement of inconsistency; analysis of impact and risk.
• A pure logical calculus uses only a logical language and has only
logical axioms.
• An applied logical calculus or formal theory uses some formal lan-
guage and has non-logical axioms.
this domain into several parts with various functions and relations
connecting them. In this case, these parts being formalized form a
model variety, while the system of logics that describe these parts
forms a syntactic variety.
For instance, semantics of computer languages employ different
types (domains) of data, such as the integers and the real numbers.
Each domain has its own equality, relations, identities, and arith-
metical operations. The logical language that describes the union of
these domains will have two sorts of variables, real variables, and
integer variables. The meaning of a quantifier would be determined
by the type of the variable it binds. The corresponding logic will be
a logical variety built of two calculi. Intersection of these calculi will
include such formulas as the commutative law
x + y = y + x,
and the associative law
x + (y + z) = (x + y) + z.
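The two-sorted situation described above can be illustrated with a toy encoding: the integer calculus and the real calculus are represented as sets of formula strings, and the intersection of the two components holds the laws shared by both domains. The representation below is chosen here for illustration and is not from the source.

```python
# A toy illustration (the string encoding is ours) of a logical variety built
# of two calculi: each calculus is a set of formulas, and the intersection of
# the components contains the laws valid in both domains.

integer_calculus = {
    "x + y = y + x",              # commutative law
    "x + (y + z) = (x + y) + z",  # associative law
    "x + 1 > x",                  # holds in the integer domain
}
real_calculus = {
    "x + y = y + x",
    "x + (y + z) = (x + y) + z",
    "exists y: y * y = x for x >= 0",  # real-specific formula
}

variety = [integer_calculus, real_calculus]  # components of the logical variety
shared = integer_calculus & real_calculus    # formulas valid in both calculi

print(sorted(shared))
# the commutative and associative laws belong to both components
```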
Towers of calculi introduced by Maslov (1983) for representation
of dynamic aspects of formal theories are an example of logical vari-
eties.
One more example of naturally formed logical varieties is the technique Chunk and Permeate built by Brown and Priest (2004). This technique suggests that reasoning from inconsistent premises should proceed by separating the assumptions into consistent theories (called chunks by the authors). These chunks are components of the logical variety shaped by them. After this, appropriate consequences are derived in one component (chunk). Then those consequences are transferred to a different component (chunk) for further consequences to be derived. This is exactly how logical varieties are used to realize and model non-monotonic reasoning (Burgin, 1991d). Brown and Priest suggest that Newton's original reasoning in taking derivatives in the calculus was of this form.
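The Chunk and Permeate procedure can be sketched schematically: inconsistent premises are split into chunks that are each consistent, consequences are derived inside one chunk, and selected consequences are permeated into another chunk. The facts and rules below are invented stand-ins loosely echoing the infinitesimal example, not the authors' actual calculus.

```python
# A schematic sketch of Chunk and Permeate (Brown and Priest, 2004): derive
# inside one consistent chunk, permeate a consequence into another chunk,
# then derive further. Facts and rules here are hypothetical stand-ins.

def derive(facts: set, rules: list) -> set:
    """Close a set of atomic facts under rules of the form (premises, conclusion)."""
    closed = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= closed and conclusion not in closed:
                closed.add(conclusion)
                changed = True
    return closed

# Two chunks, each consistent on its own; together "dx_is_nonzero" and
# "dx_is_zero" would be inconsistent.
chunk1 = {"dx_is_nonzero", "ratio_defined"}
chunk2 = {"dx_is_zero"}
rules1 = [(["dx_is_nonzero", "ratio_defined"], "quotient_computed")]
rules2 = [(["quotient_computed", "dx_is_zero"], "derivative_value")]

# Derive in chunk 1, then permeate one consequence into chunk 2.
consequences1 = derive(chunk1, rules1)
permeated = chunk2 | {"quotient_computed"}
consequences2 = derive(permeated, rules2)
print("derivative_value" in consequences2)  # → True
```

Each chunk plays the role of a component of the logical variety; the permeation step is the controlled transfer of consequences between components.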
Concepts of logical varieties and prevarieties provide further for-
malization for local logics of Barwise and Seligman (1997), many-
worlds model of quantum reality of Everett (Everett, 1957; 1957a;
Definition 3.3.17. (a) The set V = {Ti ; i ∈ I} is called the flat map
of the syntactic quasi-prevariety (quasi-variety, prevariety, or variety)
M and a quasi-prevariety (quasi-variety, prevariety, or variety) flat
map of the set N .
(b) If N = M, then the set V = {Ci ; i ∈ I} is called a quasi-prevariety (quasi-variety, prevariety, or variety) map of the set N.
Note that one set N of formulas can have different maps.
Definition 3.3.18. (a) The set W of all flat maps of a set of formulas
N corresponding to quasi-prevarieties (quasi-varieties, prevarieties,
or varieties) from H is called the flat H-atlas of N .
(b) The set W of all quasi-prevariety (quasi-variety, prevariety, or variety) maps of a set of formulas N corresponding to quasi-prevarieties (quasi-varieties, prevarieties, or varieties) from H is called the quasi-prevariety (quasi-variety, prevariety, or variety) H-atlas of N.
Proof. We prove this result for i = 1. For all other values of i, the proof is the same.
Theorem is proved.
The compatibility of a logical variety means that it is possible to
immerse all components of this variety into one calculus from the
class K of logical calculi. Thus, the obtained results show that the
possibility of logic system immersion into one calculus is undecidable
for a finite number of logics (Theorem 3.3.2), while for an infinite
number of logics, the decidability problem is reducible to the finite
case (Theorem 3.3.3) and thus, it is undecidable in general.
It is necessary to explain that logical varieties, prevarieties, and
quasi-varieties implicitly perform various functions in knowledge
organization and management. One of these functions is stratification
of knowledge systems.
Stratification is a popular technique in knowledge base theory
and practice. For instance, Hunter and Liu (2009) introduce knowl-
edge base stratification to solve the problem of merging multiple
knowledge bases. Benferhat and Baida (2004) use stratified first
order logic for access control in knowledge bases. Benferhat and Gar-
cia (2002) employ stratification for handling inconsistent knowledge
bases. Lassez et al. (1989) show how stratification can be useful as a
tool in the interactive model-building process, demonstrating that it
is possible to reduce the computational complexity of the process by
the use of stratification that limits consistency checking to minimal
strata.
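The idea of limiting consistency checking to minimal strata, as in Lassez et al., can be sketched as follows: the knowledge base is split into strata ordered by reliability, and strata are added one by one, each checked only against what has already been accepted. The literal representation and the example are invented here for illustration.

```python
# A schematic sketch of stratified consistency checking: strata are processed
# from the minimal (most reliable) one, and a stratum is rejected if adding
# it would break consistency. The encoding of literals is hypothetical.

def consistent(literals: set) -> bool:
    """A set of literals is consistent if it contains no pair p, not-p."""
    return not any(("not " + p) in literals for p in literals)

def accept_strata(strata: list) -> set:
    """Add strata in order, skipping any stratum that would break consistency."""
    accepted: set = set()
    for stratum in strata:
        candidate = accepted | stratum
        if consistent(candidate):
            accepted = candidate
    return accepted

kb = [
    {"bird(tweety)"},                       # stratum 0: hard facts
    {"flies(tweety)"},                      # stratum 1: default conclusions
    {"not flies(tweety)", "sick(tweety)"},  # stratum 2: less reliable reports
]
print(sorted(accept_strata(kb)))
# the third stratum conflicts with the second and is rejected
```

The computational saving comes from never re-examining the already-accepted strata against each other: each new stratum is checked once, against the accumulated consistent core.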
— Time slicing when each step is assigned some period of time for
realization.
— Elementary operations.
Chapter 4
Knowledge Structure
and Functioning: Microlevel
or Quantum Theory
of Knowledge
Alex K     A    A    A    C
Ben X      A    B    A    B
Costas H   B    N/T  A    A
Dan R      C    D    N/T  N/T
Eddi T     C    C    D    A
Frank S    D    D    B    F

Student Frank S has grade D in Math 180,    (4.2)
system has two parts: cognitive, that is, knowledge per se, and sub-
stantial, which consists of the knowledge object (knowledge domain)
with its structure (internal or external). These two parts are con-
nected by a relation (correspondence), which is conveyed by the word
“about” in English.
The cognitive part of a knowledge system is also called symbolic because, as a rule, it is represented by a system of symbols. Cognitive/symbolic parts of knowledge form knowledge systems per se as abstract structures, while the addition of substantial parts to them forms extended knowledge systems.
First, let us consider descriptive knowledge as the most typical category of knowledge (cf., Chapter 2). In this case, the simplest knowledge about an object gives some property of this object. As Aristotle wrote, we can know nothing about things but their properties. The simplest property is the existence of the object in question. However, speaking about properties, we have to discern intrinsic
and ascribed properties of objects. In this, we are following the long-
standing tradition of attributive realism, in which it is assumed that
objects have intrinsic properties. Taking an object A and its feature
(intrinsic property) QA , we come to an inherent descriptive quantum
(IKQ) of knowledge K = (A, q, QA ), the graphical form of which is
represented by Diagram (4.5).
A ---q---> QA.    (4.5)
Note that it is possible to treat the property QA as a traditional
property of an object represented by a value of an attribute or as
the attribute itself represented by a predicate in the conventional
description of properties or by an abstract or natural property in the
advanced portrayal of properties (Burgin, 1985; 1986; 2010).
For example, taking a physical body B, we know that it can have
such an intrinsic property as 10 kg of mass. At the same time, it
can have such an intrinsic property as “being a rigid object” (an
attribute), as well as intrinsic property mass (a natural property).
Definition 4.1.1. When the object A and the property QA are inde-
composable, the inherent quantum of knowledge (4.1) is called an
elementary inherent descriptive knowledge unit (EIKU).
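The inherent descriptive knowledge quantum K = (A, q, QA) is a fundamental triad, and it can be rendered directly as a small data structure. The class and field names below are our rendering; the physical-body example follows the text.

```python
# A minimal rendering (field names are ours) of an inherent descriptive
# knowledge quantum K = (A, q, QA): an object A, a correspondence q, and an
# intrinsic property QA, i.e., the fundamental triad A --q--> QA.

from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class KnowledgeQuantum:
    obj: Any    # the object A
    q: str      # the correspondence (arrow label) q
    prop: Any   # the intrinsic property QA

# The physical-body example from the text: a body with 10 kg of mass and
# the attribute of being a rigid object.
body_mass = KnowledgeQuantum(obj="physical body B", q="has", prop="mass of 10 kg")
body_rigid = KnowledgeQuantum(obj="physical body B", q="is", prop="a rigid object")

print(body_mass.prop)  # → mass of 10 kg
```

When `obj` and `prop` are indecomposable values, such a quantum corresponds to an elementary inherent descriptive knowledge unit (EIKU) in the sense of Definition 4.1.1.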
QA ---e---> PA
 ^           ^
 q           p
 |           |
 A ---n---> NA.    (4.6)
Here NA is a name of the object A and QA is a feature (an intrinsic
property) of A, e.g., if A is a book, then NA is usually the title of
A, the intrinsic property QA may be the year of its publication or
the author, while the attribute (ascribed property) PA is the cognitive
representation of QA . In our case, when QA is the year of publication,
then PA is the number that represents this year, e.g., 2012, or if QA
is the author, PA is the first and the last names of the author.
For instance, it is possible to understand PA in Diagram (4.2) as
a classical property such as being white, as a fuzzy property such as
being 50% white, as a physical property such as weight or height, and
as a value of a physical property, e.g., having weight 100 lb.
Table 4.1 represents a system of descriptive extended knowledge
quanta. Let us consider their structural portrayal.
knowledge of Math 180 content ----------> A
            ^                             ^
            |                             |
     person Alex K ---with the name---> Alex K,    (4.7)

knowledge of Hist 200 content ----------> B
            ^                             ^
            |                             |
     person Ben X ---with the name---> Ben X,    (4.8)

knowledge of taken courses ----------> (C, C, D, A)
            ^                             ^
            |                             |
     person Eddi T ---with the name---> Eddi T.    (4.9)
number of doors ---value---> 4
       ^                     ^
       nd                  number
       |                     |
  car (vehicle) ---make---> Cadillac.    (4.14)

color ---value---> white
  ^                  ^
  nd               number
  |                  |
car (vehicle) ---make---> Cadillac.    (4.15)
the intrinsic structure ---modeling---> a model of the Solar System
          ^                                        ^
     structuring                             representation
          |                                        |
  the Solar System ------naming------> "the Solar System".    (4.19)
SA ---g---> TA
 ^           ^
 q           p
 |           |
 A ---f---> NA.    (4.20)
the intrinsic structure ---description---> a program (algorithm) for computation of π
          ^                                        ^
     structuring                              description
          |                                        |
 computational process ---naming---> computation of π.    (4.21)
A ---q---> SA,    (4.22)

NA ---p---> MA.    (4.23)
and

NA ---p---> TA.    (4.27)
W ---g---> L
^           ^
q           p
|           |
U ---f---> C.    (4.30)
In Diagram (4.30), the correspondence f relates each object H
from U to its name «H » = f (H) from C (or to its system of names or,
more generally, to its conceptual representative or conceptual image
in the sense of (Burgin and Gorsky, 1991)) and the correspondence
g assigns values of the property Q to values of the property P . In
other words, g relates values of the intrinsic property to values of
the ascribed property. For instance, when we consider such property
of material things as weight, it is an intrinsic property. In weighing anything, we can get only an approximate value of the real weight,
or weight with some precision. It is the ascribed property. That is,
when we measure an intrinsic property, we obtain values of the cor-
responding ascribed property.
Relation f in Diagram (4.30) can have the form of some algo-
rithms/procedures of object recognition, construction, or acquisition.
Relation g can have the form of some algorithms/procedures of mea-
surement, evaluation, or prediction.
Note that in general, any object from U can be a big system that
consists of other objects. For instance, it can be a galaxy or the whole
physical universe. In this case, the knowledge quantum K about U
is not elementary.
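The correspondences f and g from Diagram (4.30) can be sketched as procedures: f performs object recognition, relating an object to its name, while g performs measurement, mapping intrinsic values to ascribed values with some precision. Both functions below and the parcel example are invented for illustration.

```python
# A sketch (functions invented for illustration) of the correspondences in
# Diagram (4.30): f relates an object to its name, and g relates values of
# the intrinsic property to values of the ascribed property, here a weight
# measurement that yields only an approximate value.

def f(obj: dict) -> str:
    """Object recognition: relate an object to its name."""
    return obj["name"]

def g(true_weight_kg: float, precision_kg: float = 0.1) -> float:
    """Measurement: map the intrinsic weight to an ascribed, rounded value."""
    return round(true_weight_kg / precision_kg) * precision_kg

parcel = {"name": "parcel 7", "true_weight_kg": 2.3456}

print(f(parcel))                     # → parcel 7
print(g(parcel["true_weight_kg"]))   # an approximate value of the real weight
```

The gap between `true_weight_kg` and the value returned by `g` is exactly the distinction drawn above between the intrinsic property and the ascribed property obtained by measuring it.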
W ----g----> L
^            ^
rU           pC
|            |
DU ---p---> BC
^            ^
t            rC
|            |
U ----f----> C.    (4.38)
X ---α---> Y.

Besides, Zhuge also calls the labeled arrow X ---α---> Y, as well as the inner component α of the semantic link α = (X, α, Y ), by the
Note that a semantic link, like many other basic structures, is a kind of fundamental triad (named set) (cf., Appendix). The general nature of nodes in a semantic link implies that it is possible to
use semantic links for building not only semantic networks but also
networks in which physical objects are connected by semantic links.
A semantic link network is a triad (N , L, ) where N is a set of
nodes, L is a set of semantic links and is a semantic space, which
consists of a concept hierarchy ℘ and a set of rules . The extended
representation of a semantic link network also includes a mapping
from nodes and links into the semantic space .
As we can see, semantic links in the sense of Zhuge can con-
nect physical objects. Here we are interested in knowledge, which
is a structural essence. That is why we consider here only symbolic
semantic links to build the SLTK.
The SLTK elementary unit of knowledge is called a knowledge
link and is a symbolic triad α = (X, α, Y ) where X and Y are called
knowledge nodes and can be names of any symbolic objects, e.g.,
texts, words, symbols, pictures, semantic links, etc., while α is a
connection (link) between the knowledge nodes X and Y , which rep-
resents a semantic relation between the objects with the names X
and Y . Thus, a knowledge link is a kind of complete semantic links
in which nodes are arbitrary symbolic objects.
We will discern individual knowledge/semantic links and type
knowledge/semantic links. In an individual knowledge/semantic link,
α is an individual name of a certain relation. For instance, the knowl-
edge/semantic link ({1, 2, 3}, {(1, 3)}, {1, 2, 3}), where {(1, 3)} is
a binary relation in the set {1, 2, 3}, is individual.
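A knowledge link can be rendered as a symbolic triad directly. The class below is our rendering, but the individual link example is the one from the text, ({1, 2, 3}, {(1, 3)}, {1, 2, 3}), alongside a type link whose inner component names a kind of relation.

```python
# The SLTK knowledge link as a symbolic triad (X, alpha, Y); the class is our
# rendering, while the individual link example is taken from the text.

from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class KnowledgeLink:
    left: Any    # knowledge node X
    inner: Any   # the connection alpha between the nodes
    right: Any   # knowledge node Y

# An individual knowledge link: alpha is one concrete binary relation.
individual = KnowledgeLink(frozenset({1, 2, 3}),
                           frozenset({(1, 3)}),
                           frozenset({1, 2, 3}))

# A type knowledge link: alpha is the name of a kind of relation.
typed = KnowledgeLink("the Earth", "elOf", "the Solar System")

print(typed.inner)  # → elOf
```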
node, i.e., the complete semantic link (X, insOf, Y ) means the
node X is an instance of the node Y .
6. The sequential link, in which the inner semantic link is denoted
by seq indicating that the right node follows the left node, i.e.,
the complete semantic link (X, seq, Y ) means the node Y follows
the node X.
7. The reference link, in which the inner semantic link is denoted
by ref indicating that the right node is a further explanation of
the left node, i.e., the complete semantic link (X, ref, Y ) means
the node Y is further explanation of the node X.
8. The equal link, or equality link, in which the inner semantic link
is denoted by e indicating that the meaning of the right node
is the same as the meaning of the left node, i.e., the complete
semantic link (X, e, Y ) shows the meaning of the node X is the same
as the meaning of the node Y .
9. The empty link, in which the inner semantic link is denoted by
Ø indicating that the right and the left nodes are completely
irrelevant to one another, i.e., the complete semantic link (X, Ø,
Y ) means the nodes Y and X are completely irrelevant to one
another.
10. The null or unknown link, in which the inner semantic link is
denoted by Null or by N indicating that the relation between the
two nodes is unknown or uncertain, i.e., the complete semantic
link (X, N , Y ) shows the relation between X and Y is unknown
or uncertain.
11. The semantic equivalence link, in which the inner semantic link
is denoted by equiv, indicating that the connected nodes can
substitute for one another wherever they occur, i.e., the complete
semantic link (X, equiv, Y ) shows X and Y can substitute for
one another wherever they occur.
The last of general semantic links considered by Zhuge (2012) rep-
resents a unary operation with semantic links and will be described
in Section 4.3.2.
12. The non-α relation link, in which the inner semantic link is
denoted by Non (α) or by αN indicating that there is no relation
α between the two nodes, i.e., the complete semantic link (X,
αN , Y ) shows there is no relation α between X and Y .
13. The property link, in which the inner semantic link is denoted by
prOf indicating that the left node is a property (feature) of the
right node, i.e., the complete semantic link (X, prOf, Y ) means
the node X is a property (feature) of the node Y . For instance,
the complete semantic link (blue, prOf, blue ball) means the color blue is a property (feature) of a blue ball.
14. The part link, in which the inner semantic link is denoted by ptOf
indicating that the left node is a part of the right node, i.e., the
complete semantic link (X, ptOf, Y ) means the node X is a part
of the node Y . For instance, the complete semantic link (an arm,
ptOf, a woman) means an arm is a part of a woman.
15. The element link, in which the inner semantic link is denoted by
elOf indicating that the left node is an element of the right node,
i.e., the complete semantic link (X, elOf, Y ) means the node X
is an element of the node Y . For instance, the complete semantic
link (the Earth, elOf, the Solar System) means the Earth is an
element of the Solar System.
16. The name link, in which the inner semantic link is denoted by
nmOf indicating that the left node is a name of the right node,
i.e., the complete semantic link (X, nmOf, Y ) means the node X is a name of the node Y . For instance, the complete
semantic link (Michael, nmOf, the man) means Michael is the
name of the man.
17. The before link, in which the inner semantic link is denoted by be
indicating that on the time scale, the left node is before the right
node, i.e., the complete semantic link (X, be, Y ) means the node
X is before the node Y . For instance, the complete semantic link
(Winter, be, Spring) means Winter is before the Spring.
18. The after link, in which the inner semantic link is denoted by
af indicating that on the time scale, the left node is after the
right node, i.e., the complete semantic link (X, af, Y ) means the
node X is after the node Y . For instance, the complete seman-
tic link (Summer, af, Spring) means that Summer is after the
Spring.
19. The function link, in which the inner semantic link is denoted by
fnOf indicating that the features of the left node is a function
of the right node, i.e., the complete semantic link (X, fnOf, Y )
means the features of the node X is a function of the node Y .
For instance, the complete semantic link (moving people, fnOf,
a car) means that a function of a car is moving people.
20. The relation link, in which the inner semantic link is denoted by
rn indicating that the left node and the right node are in some
relation, i.e., the complete semantic link (X, rn, Y ) means the node X and the node Y are in some relation. For
instance, the complete semantic link (10, rn, 5) means numbers
5 and 10 are in some relation, in particular, in the relation of
divisibility, i.e., 10 is divisible by 5.
21. The in link, in which the inner semantic link is denoted by in
indicating that the left node is in the right node, i.e., the complete
semantic link (X, in, Y ) means the node X is in the node Y .
For instance, the complete semantic link (Michael, in, the house)
means Michael is in the house.
22. The better link, in which the inner semantic link is denoted by
bt indicating that on some scale, the left node is better than the right node, i.e., the complete semantic link (X, bt, Y ) means the node X is better than the node Y . For instance, the complete
semantic link (honesty, bt, deception) means honesty is better
than deception.
23. The bigger link, in which the inner semantic link is denoted by
bg indicating that for some scale, the left node is bigger than the
right node, i.e., the complete semantic link (X, bg, Y ) means the
node X is bigger than the node Y . For instance, the complete
semantic link (the Sun, bg, the Earth) means the Sun is bigger
than the Earth.
24. The subclass link, in which the inner semantic link is denoted by
scOf indicating that the left node is a subclass of the right node,
i.e., the complete semantic link (X, scOf, Y ) means the node X
is a subclass of the node Y . For instance, the complete semantic
link (all dogs, scOf, all animals) means the class of all dogs is a
subclass of the class of all animals.
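The inner-link symbols from the list above can be kept in a small lookup table, so that a complete semantic link can be read back as a sentence. The dictionary below is a convenient rendering of a selection of the items quoted in the text; the `gloss` helper is ours.

```python
# A selection of inner-link symbols from Zhuge's general semantic links,
# kept in a lookup table; the dictionary and the gloss helper are our
# rendering of the enumeration in the text.

SEMANTIC_LINKS = {
    "insOf": "left node is an instance of the right node",
    "seq":   "right node follows the left node",
    "ref":   "right node is a further explanation of the left node",
    "e":     "the two nodes have the same meaning",
    "equiv": "the nodes can substitute for one another wherever they occur",
    "prOf":  "left node is a property (feature) of the right node",
    "ptOf":  "left node is a part of the right node",
    "elOf":  "left node is an element of the right node",
    "nmOf":  "left node is a name of the right node",
    "be":    "left node is before the right node on the time scale",
    "af":    "left node is after the right node on the time scale",
    "scOf":  "left node is a subclass of the right node",
}

def gloss(link):
    """Read a complete semantic link (X, inner, Y) back as a sentence."""
    x, inner, y = link
    return f"{x} / {y}: {SEMANTIC_LINKS[inner]}"

print(gloss(("the Earth", "elOf", "the Solar System")))
```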
the complete semantic link (X, (stOf, ext), Y ) means the features
of the node X all features of the node Y to the extent ext.
5. The graded similar link, or graded similarity, link, in which the
inner semantic link is denoted by (sim, sd) indicating that the
semantics of the right node is similar to the semantics of the left
node to the degree sd, i.e., the complete semantic link (X, (sim, sd), Y ) means the semantics of the node X is similar to the semantics of the node Y to the degree sd.
6. The graded instance link, in which the inner semantic link is
denoted by (insOf, id) indicating that the left node is an instance
of the right node to the degree id, i.e., the complete semantic link
(X, (insOf, id), Y ) means the node X is an instance of the node
Y to the degree id.
7. The graded sequential link, in which the inner semantic link is
denoted by (seq, pr) indicating that the right node follows the
left node with the probability pr, i.e., the complete semantic link
(X, (seq, pr), Y ) means the node Y follows the node X with the
probability pr.
8. The graded reference link, in which the inner semantic link is
denoted by (ref, rext) indicating that the right node is a further
partial explanation of the left node, i.e., explanation to the extent rext. So, the complete semantic link (X, (ref, rext), Y ) means the
node Y is an explanation of the node X to the extent rext.
9. The graded equal link, or graded equality, link, in which the inner
semantic link is denoted by (e, eext) indicating that the meaning
of the right node is almost the same as the meaning of the left
node, i.e., the complete semantic link (X, (e, eext), Y ) shows the meaning of the node X is the same to the extent eext as the
meaning of the node Y .
10. The graded empty link, in which the inner semantic link is
denoted by (Ø, ed) indicating that the right and the left nodes
are partially irrelevant to one another to the degree ed, i.e., the
complete semantic link (X, (Ø, ed), Y ) means the nodes Y and
X are irrelevant to one another to the degree ed.
11. The graded null or graded unknown link, in which the inner
semantic link is denoted by (Null, nd) or by (N , nd) indicating
17. The graded name link, in which the inner semantic link is denoted
by (nmOf, next) indicating what part of a name of the right node
is the left node, i.e., the complete semantic link (X, (nmOf, next), Y ) means the node X is, to the extent next, a part of a name of the node Y . For instance, if the name of a bridge is the Golden Bridge, then the complete semantic link (Golden, (nmOf, 1/2), bridge) means that Golden is one half of the name of this bridge.
18. The graded before link, in which the inner semantic link is
denoted by (be, bdg) indicating to what degree on the time scale,
the left node is before the right node, i.e., the complete semantic
link (X, (be, bdg), Y ) means the node X was bdg years before the node Y . For instance, the complete semantic link (Columbus, (be, bdg), Washington) means Columbus lived bdg years before Washington.
19. The graded after link, in which the inner semantic link is denoted
by (af, adg) indicating to what degree on the time scale, the left
node is after the right node, i.e., the complete semantic link (X, (af, adg), Y ) means the node X is (was) adg years after the node Y . For instance, the complete semantic link (Washington, (af, adg), Columbus) means Washington lived adg years after Columbus.
20. The graded function link, in which the inner semantic link is
denoted by (fnOf, fgext) indicating that the features of the left node are a partial function of the right node, i.e., the complete semantic link (X, (fnOf, fgext), Y ) means the features of the node X are a function of the node Y to the extent fgext. For instance, the semantic
link (moving things, (fnOf, 30%), a car) means that 30% of car
functions is moving things.
21. The graded relation link, in which the inner semantic link is
denoted by rn indicating that the left node and the right node
are in some relation, i.e., the complete semantic link (X, rn,
Y ) means the features of the node X and the node Y are in
some relation. For instance, the complete semantic link (10, rn,
5) means numbers 5 and 10 are in some relation, in particular,
10 is divisible by 5.
22. The graded in link, in which the inner semantic link is denoted
by in indicating that the left node is in the right node, i.e., the
1. knowledge flow
2. information flow
3. service flow
4. material flow
5. energy flow
6. money and other symbolic goods, e.g., stocks or bonds, flow
or
(X, α, Y ).
signs. Thus, in what follows, the word sign denotes a conceptual sign,
while a name of a sign usually means a material sign.
In a similar way, there are three different but connected meanings of the word “symbol”. In a broad sense, symbol is the
same as sign. For example, the terms “symbolic system” and “sign
system” are considered as synonyms, although the first term is used
much more often. Another understanding identifies symbol with a
physical sign.
Theoretical models of the structure of conceptual signs have been constructed in the discipline called semiotics, which is a general theory of signs. Semiotics studies structures and functions of signs and their communicative operation, including sign processes (semiosis), indication, designation, signification, likeness, analogy, metaphor, symbolism, and communication.
The term semiotics comes from the Greek word σημεῖον (meaning a sign or a mark) and it was first used in English by Henry Stubbes
(1670) in the form semeiotics denoting the branch of medical science
related to the interpretation of signs and by John Locke (1690) in
the form semeiotike as “the doctrine of signs”.
The importance of signs and signification has been recognized
throughout much of the history of philosophy, and in psychology as
well. For instance, Umberto Eco (1986) argues that semiotic theo-
ries are implicit in the work of most, perhaps all, major thinkers.
Plato and Aristotle both explored the relationship between words
(as signs) and the real world. Much later, Augustine of Hippo (354–
430) (Saint Augustine) considered the nature of the sign in society
(St. Augustine, 1974). The general study of signs was popular in
scholastic philosophy and logic. For instance, Peter Abelard (1079–1142) noted that linguistic signification does not cover the whole range of sign processes and instructed that arbitrary things might function as signs, too, if they were connected to each other in such a way that the perception of one led to the cognition of the other (Abelard,
1927; 1956). The unknown author, now commonly named Ps.-Robert
Kilwardby, in his work written somewhere between 1250 and 1280,
strengthens Augustine’s renowned dictum that “all instruction is
either about things or about signs” stating that “every science is
(Figure fragments: Material Sign; signification; signifier and signified; expression plane and content plane.)

Figure 4.4. The Hjelmslev–Eco model of sign with one expression and one content.

Figure 4.5. The Hjelmslev–Eco model of sign with one expression and several contents.

Figure 4.6. The Hjelmslev–Eco model of sign with several expressions and one content.

Figure 4.7. The Hjelmslev–Eco model of sign with one expression plane and two content planes (content plane A and content plane B).
is, a material sign, which is called sign in everyday life. In other words,
a sign is often comprehended as some elementary image inscribed on
paper, clay tablet, piece of wood or stone, presented on the screen of
a computer monitor, and so on. This material representation plays
the role of a name of the sign.
Peirce implied that signs establish meaning through recursive rela-
tionships that arise in sets of three main semiotic elements:
• Representamen, also called Sign or Sign Vehicle by Peirce or Sign
Name in a general context, is the component of the sign that rep-
resents the denoted object or objects and is similar to Saussure’s
signifier. Note that in this context, the sign name is not necessarily
a single word. It can be a quite elaborated object.
• Object, also called extent, is what the sign represents or encodes.
• Interpretant is the meaning formed into a further sign by inter-
preting or decoding a sign.
The object of a sign can be anything thinkable, for example, a law,
fact, possibility, or idea. Peirce considered two kinds of sign objects:
• The immediate object is represented in the sign name (representa-
men).
• The dynamic object is the object as it really is.
In addition, Peirce considered three kinds of the interpretants of
a sign:
◦ The immediate interpretant is the meaning that is already in the
sign, or more exactly, the meaning that is without delay ascribed
to the sign by the interpreter when she/he receives the sign.
◦ The dynamic interpretant is the meaning as formed in a process of
sign comprehension by the interpreter.
◦ The final interpretant is the meaning that would be reached if
formation process were to be pushed far enough, namely, it is a
kind of an ideal meaning, with which an actual, that is, dynamic,
interpretant may, at most, coincide.
Note that it is also possible to consider the final object, which is
an ideal collection of the object with all its possible changes.
object ---> name (symbol) of the object ---> name (symbol) of the name (a string of letters)
a fire is burning. Words this, that, these, and those are also examples
of indices.
(Figure fragments: Interpreter (I); Sign/Sign Name; Interpreter; Sign vehicle; Sense; Referent.)
In this context, sense denotes the concept meaning for the interpreter
of the sign. Eco (1976) discerns three kinds of the sign vehicles, which
are material signs:
2. Signs whose tokens are different but similar, for example, a word
which someone speaks or which is handwritten.
3. Signs whose token is their type, or signs in which type and
token are identical, for example, a unique original oil-painting or
sculpture.
Another semiotic triangle (Figure 4.14) was suggested by Ogden
and Richards (1953). Note that not only this but also other versions
of the Balanced Sign Triad of Peirce have been called by the name
semiotic triangle.
Sowa introduced another model of sign, which is similar to the
Dynamic Sign Triad of Morris. In his model, a sign has three aspects:
(1) an entity that represents (2) another entity to (3) an agent (Sowa,
2000a). It is represented in Figure 4.15.
In (Vetrov, 1968), another model of a sign is considered (cf.,
Figure 4.16).
There is one more model of a sign, in which the name and the
object are connected by the triadic relation. Namely, according to
(Figure 4.15 fragments: entity, entity, agent. Figure 4.16 fragments: Name, Type, Object, Origin, Reliability.)
(Fragment of Diagram (4.40): vertical correspondences p and q over the bottom row C ---f---> D.)

W ---k---> M
^           ^
t           q
|           |
U ---h---> D.    (4.41)
(Fragment of Diagram (4.43): vertical correspondences r and l over the bottom row V ---m---> E. Diagram (4.44) was not recoverable.)
Having the joined diagram for FG and HG and the joined diagram
for HG and KG, we can build the joined diagram for FG and KG
because composition (product) of injections is an injection.
As we can see from Chapter 2, another important relation between
knowledge quanta (knowledge units) is consistency. For instance, the
knowledge quanta (A, is, a man) and (A, is, a student) are consistent,
while the knowledge quanta (A, is, a man) and (A, is, a building) are
inconsistent.
Computability and measurability are useful properties of knowl-
edge quanta (knowledge units).
P ---l---> L
^           ^
p           q
|           |
V ---f---> D,    (4.45)

W ---k---> M
^           ^
t           r
|           |
U ---h---> C.    (4.46)
{color} ---l---> {white}
   ^                ^
   p                q
   |                |
{car 1} ---f---> {BMW1},    (4.47)

{color} ---k---> {white, gray}
   ^                  ^
   t                  r
   |                  |
{car 1, car 2} ---h---> {BMW1, BMW2}.    (4.48)
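The quantum of Diagram (4.47) can be checked mechanically: the four correspondences are finite and the square should commute, i.e., going object → property → value should agree with going object → name → value. The dictionary encoding below is ours.

```python
# A sketch of the extended quantum in Diagram (4.47) as a square of named
# sets: car 1 is named BMW1, its intrinsic property color has the ascribed
# value white. The correspondences f, p, q, l are given as dicts (our encoding).

f = {"car 1": "BMW1"}    # object -> name
p = {"car 1": "color"}   # object -> intrinsic property
q = {"BMW1": "white"}    # name -> ascribed value
l = {"color": "white"}   # intrinsic property -> ascribed value

def commutes(square):
    """Check l(p(x)) == q(f(x)) for every object x, i.e., the square commutes."""
    f, p, q, l = square
    return all(l[p[x]] == q[f[x]] for x in f)

print(commutes((f, p, q, l)))  # → True
```

The collective quantum of Diagram (4.48) is the same construction with two cars; any disagreement between the measured value and the named value would show up as a failure of `commutes`.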
D ---f---> M.    (4.50)

QA ---e---> PA
 ^           ^
 q           p
 |           |
 A ---n---> NA,    (4.51)
QA ---e---> PA
 ^           ^
 q           po
 |           |
 A ---no---> MA.    (4.52)
Renaming can be pure, when only the name is changed, and com-
bined, when, for example, the ascribed property (attribute) is also
changed.
QA ---e---> PA
 ^           ^
 q           p
 |           |
 A ---n---> NA,    (4.53)
QA = QB ---e---> PA
   ^              ^
   qo             po
   |              |
   B ---no---> NB = NA.    (4.54)
QA ---e---> PA
 ^           ^
 q           p
 |           |
 A ---n---> NA,    (4.55)
QB ---e---> PB
 ^           ^
 qo          po
 |           |
 B ---no---> NB.    (4.56)
K:

W ---g---> L
^           ^
q           p
|           |
U ---f---> C,    (4.57)

H:

V ---h---> M
^           ^
r           n
|           |
T ---t---> D,    (4.58)

K ∪ H:

{W, V} ---g∪h---> {L, M}
   ^                 ^
  q∪r               p∪n
   |                 |
{U, H} ---f∪t---> {C, D}.    (4.59)

{U, H} ---w---> C ∪ D
For instance, the domain U has one object with the name ball and
the domain H has one object with the name ball. Then due to the
name amalgamation, the domain {U , H} has two objects with the
same name ball. That is, C = D = {ball} and C ∪ D = {ball}.
As the union of sets is a commutative operation (Fraenkel and
Bar-Hillel, 1958), we have the following result.
(Fragment of a preceding diagram: bottom row I, segment.)

length ---g---> 25 in
  ^               ^
  q               r
  |               |
  I ----------> interval,    (4.64)

length ---g---> 25 in
  ^               ^
  q            {p, r}
  |               |
  I ----------> {segment, interval}.    (4.65)
[Diagram (4.59): the Cartesian product diagram, with bottom row U × H --f×t--> C × D and vertical mappings q × r and p × n.]
As the Cartesian product of sets is a commutative operation
(Fraenkel and Bar-Hillel, 1958), we have the following result.
Proposition 4.3.25. The Cartesian product of collective extended
quantum knowledge units is a commutative operation.
As the Cartesian product of sets is an associative operation
(Fraenkel and Bar-Hillel, 1958), we have the following result.
Proposition 4.3.26. The Cartesian product of collective extended
quantum knowledge units is an associative operation.
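As an illustrative sketch (not the book's formalism), a knowledge quantum can be modeled as a named-set triple (support, naming, names), and the commutativity of the Cartesian product up to the pair-swapping isomorphism can be checked directly; the sample quanta below are invented for illustration:

```python
from itertools import product as pairs

# A named set (X, f, I): support X, naming f, set of names I.
def cartesian_product(k1, k2):
    """Cartesian product (X1 x X2, f1 x f2, I1 x I2) of two named sets."""
    (x1, f1, i1), (x2, f2, i2) = k1, k2
    x = set(pairs(x1, x2))
    i = set(pairs(i1, i2))
    f = {(a, b): (f1[a], f2[b]) for a, b in x}
    return (x, f, i)

# Two toy quanta: a car named BMW1, and a color valued white.
car = ({"car 1"}, {"car 1": "BMW1"}, {"BMW1"})
color = ({"color"}, {"color": "white"}, {"white"})

p = cartesian_product(car, color)
q = cartesian_product(color, car)

# Commutativity holds up to the isomorphism swapping pair components.
def swap(s):
    return {(b, a) for a, b in s}

assert swap(p[0]) == q[0] and swap(p[2]) == q[2]
```

The same swap argument, applied to triples instead of pairs, sketches the associativity stated in Proposition 4.3.26.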
There are also operations with higher than two arity, as well
as integral operations with extended knowledge units (knowledge
quanta) but they are studied elsewhere.
Example 4.3.7. The before link is the reverse of the after link.
Example 4.3.8. The worse link is the reverse of the better link.
Example 4.3.9. The smaller link is the reverse of the bigger link.
α ◦ β = (X, α ◦ β, Y ).
α ∨ β = (X, α ∨ β, Y ).
There are also operations with higher arity than two and integral
operations with semantic links but they are studied elsewhere.
Operations with semantic links and symbolic knowledge quanta
play an important role in construction and functioning of semantic
networks.
Although people are more accustomed to operations with a fixed
arity, e.g., to binary operations, integral operations with knowledge
quanta find various applications. For instance, as explained at
the beginning of this chapter, a relation in a relational database is
an individual knowledge quantum if such a relation describes one
object and is a collective knowledge quantum when it describes sev-
eral objects. Then the basic operations in relational databases —
projection and selection — are examples of integral operations with
knowledge quanta.
Projection is a basic operation in relational databases (Codd,
1970). It is applied to rows (tuples) in a relation R from a rela-
tional database and has a set of attribute names {a1 , a2 , a3 , . . ., an }
as its parameters or arguments. Projection transforms the relation R
in such a way that all rows in the result are restricted to the set {a1 ,
a2 , a3 , . . ., an }. As projection can be applied to any number of rows
(tuples) in a relation and rows are symbolic individual knowledge
quanta, it is an integral operation on symbolic individual knowledge
quanta. Being applied to one row, database projection coincides with
the right projection of symbolic knowledge quanta.
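A minimal sketch of projection, together with the selection operation named earlier, over a toy relation; rows are represented as Python dicts, and the relation R and its attributes are invented for illustration:

```python
# A toy relation: rows (tuples) are dicts; each row is an individual
# knowledge quantum, the whole relation a collective one.
R = [
    {"name": "car 1", "maker": "BMW", "color": "white"},
    {"name": "car 2", "maker": "BMW", "color": "gray"},
    {"name": "car 3", "maker": "Audi", "color": "white"},
]

def projection(relation, attributes):
    """Restrict every row to the given attribute names (an integral operation)."""
    return [{a: row[a] for a in attributes} for row in relation]

def selection(relation, q):
    """Keep the rows for which the relation (predicate) q holds."""
    return [row for row in relation if q(row)]

print(projection(R, ["name", "color"]))
print(selection(R, lambda row: row["maker"] == "BMW"))
```

Applied to a one-row relation, `projection` returns a single restricted row, matching the remark that database projection then coincides with the right projection of a symbolic knowledge quantum.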
Selection, sometimes called restriction, is another basic operation
in relational databases (Codd, 1970). It is applied to rows (tuples)
in a relation R from a relational database and has a relation Q
between two of the attributes as its parameter or argument. Selec-
tion selects all those rows in R for which the relation Q holds for the
Chapter 5

Knowledge Structure and Functioning: Macrolevel or Theory of Average Knowledge
— Formal representations
— Informal representations
— Semiformal representations
their papers, but they never formulated exact definitions that com-
pletely characterized them. However, without exact definitions, semi-
formal methods are more flexible and adaptive.
Informal methods are very flexible and adaptive but not efficient
for many purposes. For instance, natural languages do not allow pre-
cise representation of scientific knowledge. Informal methods are also
very hard for computer processing. Natural languages form the most
popular informal knowledge representations.
It is interesting that similar to knowledge, knowledge representa-
tions also have the same three forms:
— Mathematical representations,
— Logical representations,
— Scientific representations,
— Metaphoric representations,
— Linguistic representations,
— Digital representations,
— Schematic representations,
— Iconic representations,
— Symbolic representations,
— Algorithmic representations.
— Visual representations,
— Vocal representations,
— Tactile representations, e.g., the tactile writing system Braille.
The term language has two basic meanings: an abstract concept stud-
ied by linguistics and a specific linguistic system, e.g., “English”.
There are various interpretations of language as a linguistic system.
[Figure: the three dimensions of language: Syntax, Semantics, and Pragmatics.]
From the pragmatics point of view, there are four types of sen-
tences in languages that have the same structure as English, French,
or Spanish:
4. The Poetic Function focuses on “the message for its own sake”
(the code itself, and how it is used) performing representation of
descriptive assertoric knowledge about the text (code) and is the
operative function in poetry as well as in slogans.
5. The Phatic Function is utilization of language for the sake of
interaction, e.g., building a relationship between both parties in a
conversation or dialogue. We can observe the Phatic Function in
greetings and casual discussions of the weather, particularly with
strangers. It also provides the keys to start, maintain, verify, or
finish the communication process with such words as “Hello?”,
“Ok?”, “Hummm”, “Goodbye”, etc.
6. The Metalingual (also called Metalinguistic or Reflexive) Func-
tion is the use of language (or of code by Jakobson) to discuss or
describe itself, i.e., it involves self-reference. For instance, the sen-
tence “The previous sentence is declarative” performs metalingual
function presenting knowledge about the text.
— Internal structures
— Inner structures
— External structures
— Intermediate structures
— Outer structures
1. Major premise;
2. Minor premise;
3. Conclusion.
Eliezer ben Jose, also called Eliezer ben Yose HaGelili, was a
renowned Jewish rabbi who lived in Judea in the 2nd century; he was
a student of the famous Rabbi Akiva (ca. 50–135), and his views
were recorded in the Talmud. In particular, Rabbi Eliezer ben Jose
elaborated the thirty-two rules of Rabbi Eliezer, intended for
haggadic interpretation.
— syllogistics (Aristotle);
— classical logic, which includes the classical propositional logic and
classical predicate logic (Boole; De Morgan; Peirce);
— algebraic logic (cf., (Halmos, 1962; Plotkin, 1991));
— algebraic polymodal logic (Goldblatt, 2000);
Each level has its own basic structures and rules of their
composition.
However, the development of logic and extension of its domain
brought forth a new (organizational) level of knowledge where real-
ity (the knowledge domain or knowledge object) is described by
logical varieties, prevarieties and quasi-varieties (Burgin, 1991d;
1995b; 1997d; 2004a; 2008a; Burgin and de Vey Mestdagh, 2011;
2015; de Vey Mestdagh and Burgin, 2015). On this level, all sys-
tems from the previous levels, such as concepts, statements, lan-
guages, rules of inference, and logical calculi, are organized as
the higher-order structures called logical varieties, prevarieties and
quasi-varieties.
However, this description of four logical levels reflects the classical
approach to logic representing only descriptive or assertoric knowl-
edge, which asserts something about its domain (object). At the same
time, people use not only statements for reasoning, but also ques-
tions, queries, and conjectures. To represent these forms of reasoning
in a formal way, probabilistic, hypothetic, and erotetic logics have
been created in the 20th century (Boole, 1854; Reichenbach, 1932;
1935; Hailperin, 1984; Halpern, 1999; Russell, 2014; Kleiner, 1970;
Belnap and Steel, 1976; Harrah, 2002).
In addition, dynamic and operational logics have been elaborated
for operational knowledge representation (cf., for example, (Allen,
1984; van Benthem, 1991; Luchi and Montagna, 1999; Harel, 1979;
Harel et al., 2000)).
In non-assertoric logics, the first level is similar to the conceptual
level of the classical logic, employing names, terms, and concepts.
Only the basic names, terms, and concepts from these logics rep-
resent other epistemological objects, forming the foundation for the
second level, which is essentially different. For instance, on the second
level, erotetic logics employ questions, queries and problems instead
of statements; probabilistic logics assign probabilities to statements;
dynamic logics use instructions and other names of processes, actions,
and events; while hypothetic logics use conjectures and hypotheses
instead of statements. Consequently, the third level of these logics
consists of hypothetic, dynamic and erotetic logical calculi, while the
fourth level encompasses hypothetic, dynamic, and erotetic logical
varieties.
Below we consider the first three levels in more detail, while the
fourth level is described in depth in Section 3.3.
[Figure: the Concept Triangle of Frege (Concept name, Denotation, Sense) and the Concept Triangle of Russell (Name, Denotation, Meaning).]

following structure.

                 indicates
   Proper name ---------------> Meaning of the proper name (object).        (5.4)
Thus, for Russell a concept has a name and two more constituents.
On one hand, concepts symbolize objects that are their exemplifica-
tions. Russell calls the relation between concepts and their particular
exemplifications denotation (Russell, 1905). This relation is objective
or, as Russell also says, logical. On the other hand, concepts have a
part formed by meanings of corresponding linguistic expressions that
mean objects denoted by the concept. This gives us the following
structure of a concept.
Each component of the structure of a concept — the name, deno-
tation and meaning in the Concept Triangle of Russell or the name,
denotation and sense in the Concept Triangle of Frege — is also an
object and has a name, the name of this object. In addition, these
objects also have a denotation and sense (or meaning), associated
with their names. As a name is itself an object, it has a name and a
denotation and sense (or meaning), as any other object. In an inten-
sional context, the names that occur denote the meaning or sense of
the objects for the reader or listener. It means that each component
of a concept can acquire the role of another component.
As a result, the structure of a concept has the
property called fractality, which tells that the structure of the whole
is repeated/reflected in the structure of its parts.
It is interesting that explicating the structure of concepts,
proper names and propositions in the theories of Russell and Frege,
Tatievskaia (1999) demonstrates that they have the form of a fun-
damental triad, which is called a named set, or named set chain
(cf., Appendix). She portrays these structures as the diagrams in
Figure 5.4. This portrayal allows us to see that in the first diagram
in Figure 5.4, Sense connects Sentence with Reference, while in the
second diagram, Sense connects Proper Name with Reference. At the
same time, the third diagram is a composition of two fundamental
                   t
   Concept name ---------> Set of objects.        (5.5)

                   w
   Concept name ---------> Prototype.        (5.9)
[Figure: concept triangles with vertices Concept name, Examples, and Attributes; Name, Examples, and Attributes with a Core; and Meaning and Sense attached to the core.]
• G is a set of objects,
• M is a set of attributes,
[Figure: a classification of concept components. The Conceptual branch covers Intention, Sense, and Meaning (attributive, operational, or relational; literal/direct or metaphorical/indirect; strict or fuzzy; graded, exact, or approximate). The Representative branch covers Prototype, Exemplars, and Scope/Extent (literal/direct or metaphorical/indirect; individual or group; all objects united by the concept).]
when reasoning about each other. At the same time, problems moti-
vated by practical computer science applications show that utilized
theories of naming are often inadequate. For instance, their main
concern is proper names while other names, i.e., common names for
many objects, are also extremely important. Thus, practical appli-
cations demand that logic pay more attention to names and naming.
Logicians have developed special tools for working with names in
logic, and all of them involve building new or transforming existing
named sets. For instance, Gabbay and Malod (2002) extend predicate
modal and temporal logics introducing a special predicate W (x),
which names the world under consideration. Such a naming allows
one to compare the different states the world (universe or individual)
can be in after a given period of time, depending on the alternatives
taken on the way. As Gabbay and Malod (2002) remark, the idea of
naming the worlds and/or time points goes back to Prior (1967).
Labeling is a kind of naming, and labeled logics and labeled deduc-
tive systems form a new and actively expanding direction in logic (cf.,
(Basin et al., 2000; Chau, 1993; Gabbay, 1994; 1996; Gabbay and
Malod, 2002; Viganò and Volpe, 2008)). Labeled logics use labeled
signed formulas where labels (names of the formulas) are taken from
information frames. As a result, the set of formulas in a labeled
logic becomes an explicit named set, the support of which consists of
logical formulas, while the set of names is an information frame, i.e.,
the system of labels. The derivation rules act on the labels as well
as on the formulae, according to certain fixed rules of propagation.
It means that derivation rules are morphisms (mappings) of the cor-
responding named sets. In default logics, there is even an algorithm
for grounded naming (labeling) (Roos, 2000).
Besides, the set of names is the level of types, and the naming rela-
tion connects objects and types. The elementary theory of types and
names developed by Jäger (1988) is aimed at application in computer
science. Objects are the entities the computer can directly manipu-
late. They are promptly accessible and explicitly represented in suit-
able form, e.g., as bitstrings in a computer memory. In contrast to
this, types are abstract collections of objects. In order to address
them, computers have to use their names. Hence the name nX of
[Figure: the name triangle: name, referent, sense.]
1. Modularity means that the source code for any object can be writ-
ten and maintained independently of the source code for other
[Figure: the square ABCD in the coordinate plane with vertices A = (−1, 0), B = (0, 1), C = (1, 0), and D = (0, −1), and points K and H on its sides.]
following way:
d((x, y), (u, v)) = |x − u| + |y − v|.
In particular, the distance between the point (0, 0) and the point
(x, y) is equal to |x| + |y|. Consequently, the distance from the point
(0, 0) to any point of the figure ABCD is equal to 1. Thus, the
figure ABCD is a circle in the Manhattan metric and a square in
the Euclidean metric. Consequently, it is a round square.
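The round-square computation can be checked directly; a small sketch assuming points are coordinate pairs:

```python
def manhattan(p, q):
    # d((x, y), (u, v)) = |x - u| + |y - v|
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

O = (0, 0)
A, B, C, D = (-1, 0), (0, 1), (1, 0), (0, -1)

# Every vertex of ABCD lies at Manhattan distance 1 from the origin,
for vertex in (A, B, C, D):
    assert manhattan(O, vertex) == 1

# and so does a point on the side AB, e.g., (-0.5, 0.5):
assert manhattan(O, (-0.5, 0.5)) == 1.0
print("ABCD is a unit circle in the Manhattan metric")
```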
In his theory, Meinong gives the following classification of objects.
Objects, which always have outside-being, are separated into two
classes:
1. Objects that have being are separated into two classes:
a. Real objects, which exist as well as subsist.
b. Ideal objects, which only subsist.
2. Objects that do not have being are separated into two classes:
a. Objects that have non-being are separated into two classes:
i. Non-contradictory objects.
ii. Contradictory objects.
b. Objects that are not determined with respect to being.
Defining an object as what can be experienced in some way,
Meinong analyzes experiences. By his approach, all experiences, even
the most elementary ones, are complex mental phenomena, contain-
ing, at least, three constituents: (1) the action, (2) the psychological
(mental) content, and (3) the object of the experience. This gives us
the following diagram (cf., Figure 5.15).
While the first two components, (1) and (2), must exist if the
experience exists, the third component (3) need not. If somebody
[Figure 5.15: the structure of an experience: action, content, object.]
has a strong hope for total peace, for example, then (1) the action of
hope exists, (2) the psychological content, i.e., hope, exists, but (3)
no total peace may occur. It means that there is only the non-existent
object the total peace.
Meinong believes that experiences can have different objects for
two reasons. First, different kinds of acts correspond to different
kinds of objects. For instance, “objects” correspond to representa-
tions, while “objectives” are related to thoughts. Second, it is pos-
sible to assume that inside an action, any variation of the objects
is dependent on a variation of some mental component that is the
psychological content of the experience. The difference between the
objects must somehow come down to an internal difference between
the representations in question. If you have two different represen-
tations, one of red and another of green, for example, the difference
between the objects is founded on a genuinely mental difference,
namely the difference between the psychological red-content and the
psychological green-content.
Meinong calls the relation of a content to its corresponding object
the “adequacy relation” (Meinong, 1910), and he takes it to be an
ideal relation. Ideal relations, in contrast to real relations, subsist
necessarily between the terms of the relation. If one color, say red,
is different from another, say green, then they must be different. If
you compare colors located somewhere, the relation between a color
spot and its location is called real because the color, say red, could
be located elsewhere, or another color could be in the place of the
red color spot. Ideal relations, however, attach once and for all and
with necessity to their terms.
Meinong just postulates the adequacy relation and offers only neg-
ative determinations of it. He stresses the point that adequacy is not
a relationship of sameness or of similarity. Since it is an ideal relation,
real relations — for example pictorial or even causal relationships —
are excluded. A positive hint is given by a kind of metaphorical use
of the word “fitting”: the mental content and its object must be fitted
to each other.
As Meinong supposes that the different kinds of acts are coordi-
nated with the different kinds of presented objects, his classification
described it. Each of these directions gives its own description of this
unique reality.
In contrast to this, the multidimensional theory of existence pos-
tulates three pure forms (dimensions) of reality:
1. The actual reality consists of natural objects and processes directly
or indirectly perceived by senses and reflected in the central ner-
vous system (CNS).
2. The virtual reality, often called virtuality, is created, simulated or
reflected by some technological system (device or machine), e.g.,
computer games, movies, and videos.
3. The imaginary reality is created by mentality, e.g., heroes in prose
and poems, characters in movies and plays.
In addition, there are four combined forms of reality:
1. The mixed reality is a combination of actual and virtual reality,
i.e., it is situated between actual reality and virtual reality forming
the Virtuality Continuum and including augmented reality and
augmented virtuality.
2. The materialized reality is a combination of imaginary and virtual
reality.
3. The actualized reality is a combination of imaginary and actual
reality.
4. The enhanced reality is a combination of actual, imaginary and
virtual reality.
Understanding the multiplicity of realities allows obtaining natural
solutions to problems of object existence, which have bothered philoso-
phers for a long time. For instance, philosophers have actively discussed whether such
objects as “the king of France in 2000” or “Pegasus” or “a golden
mountain” exist. The multidimensional theory of existence tells us
that all these objects do exist but “the king of France in 2000” and “a
golden mountain” exist in the imaginary reality of the philosophical
discourse, while “Pegasus” exists in the imaginary reality of Greek
legends.
There has been an active discussion in philosophical circles in what
sense mathematical objects in general and numbers, in particular,
— Substantial definitions;
— Conceptual definitions;
— Substitution definitions;
— Differentiating definitions;
— Causal definitions;
— Rhetoric definitions;
— Exemplifying definitions;
— Relative definitions;
— Listing definitions.
Definitions are a specific kind of knowledge and therefore, they
also have three forms:
• Descriptive definitions;
• Operational definitions;
• Representational definitions.
The majority of researchers starting with Plato and Aristotle
acknowledged only descriptive definitions in the form of statements
or propositions (Popa, 1976). Only in the 20th century, Bridgman
(1927) introduced operational definitions as sets of operations with
the defined objects. Representational definitions were not explicitly
introduced or used in philosophy and methodology of science. How-
ever, such definitions are often employed in early childhood. Indeed,
to learn concepts, children observe representatives, e.g., exemplars or
prototypes of some concept and these representatives shape represen-
tational definitions of the learned concepts. Note that representatives
can be real objects or their images, e.g., pictures or photographs. For
instance, to learn the concept dog, a child observes different dogs con-
necting them with the word dog and in such a way, learns the concept.
Besides, representational definitions correspond to listing definitions
described by Marius Victorinus.
The question of what it is necessary to define separated definitions
into two classes. Some philosophers presumed that people define things
(physical objects). For instance, Aristotle in his Topics wrote:
“A definition is a proposition describing the essence of a thing.”
                    representation
   {propositions} ---------------> {affirmative sentences}.

∃x∀y,
∃a∀b.

and

        represented by
   ∃ ---------------> selection operator.
• ⊤ is called top;
• ⊥ is called bottom;
• ! is interpreted as of course (or sometimes bang);
• ? is interpreted as why not;
• −◦ is called linear implication;
These rules form the set of algorithms R that are used to build
the language LP .
Elements of the language LCPC of the classical predicate
logic/calculus of the first order give a formal representation of binary
properties. The predicate calculus language has a developed alpha-
bet and elaborated symbolic notation. Lower-case letters a, b, c, . . .,
x, y, z, . . . are traditionally used to denote individuals (variables or
constants). Upper-case letters M , N , P , Q, R, . . . are traditionally
used to denote (variable or constant) predicates.
The alphabet A of the language LCPC consists of six parts:
Note that non-classical logics often use not only logical opera-
tions but also logical operators. For instance, the modal operator
(1) Letters of the alphabet A of the language LCPC are wffs from
LCPC.
(2) Expressions P (x1, x2, . . ., xn) where P is an n-ary predicate
symbol are wffs from LCPC.
(3) If ϕ is a wff, then ¬ϕ is a wff from LCPC.
(4) If ϕ and ψ are wffs, then (ϕ ∧ ψ), (ϕ ∨ ψ), (ϕ → ψ), and (ϕ ↔ ψ)
are wffs from LCPC.
(5) If H(x1, x2, . . ., xn) is a wff containing a free variable x, then
∃xH(x1, x2, . . ., xn) and ∀xH(x1, x2, . . ., xn) are wffs from LCPC.
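The inductive rules above can be sketched as constructors that build formulas as nested tuples; the representation and all names are illustrative only:

```python
# Constructors mirroring the inductive wff definition; formulas are
# nested tuples and all predicate and variable names are illustrative.
def atom(pred, *args):   # rule (2): P(x1, ..., xn)
    return (pred,) + args

def neg(phi):            # rule (3): the negation of a wff is a wff
    return ("not", phi)

def conj(phi, psi):      # rule (4): (phi AND psi); the other connectives are analogous
    return ("and", phi, psi)

def forall(x, phi):      # rule (5): quantification over a free variable x
    return ("forall", x, phi)

# The wff  forall x (P(x) AND not Q(x, y)):
f = forall("x", conj(atom("P", "x"), neg(atom("Q", "x", "y"))))
print(f)
```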
almost all elements of A are bigger than 10. Taking the mathemat-
ical calculus (cf., for example, (Larson and Edwards, 2006; Burgin,
2008)), we have another example of the quantifier ∀∀, namely, the
conventional convergence of a sequence l = {ai; i = 1, 2, 3, . . .} to a
number a means that any neighborhood Oa of a contains almost all
elements from l, or in the formal language, we have:
∀Oa ∀∀i (ai ∈ Oa).
There are many other logics and they have different languages.
For instance, conventional logics are extended to probabilistic log-
ics by assigning probabilities to statements, i.e., to propositions and
predicates. This allows representation of probabilistic knowledge (cf.,
for example, (Boole, 1854; Reichenbach, 1932; 1935; Hailperin, 1984;
Russell, 2014)). This direction in formal logic was initiated by Leib-
niz, who envisioned that it would be necessary to estimate likelihood
of propositions and a way of proof leading not to certainty but only to
probability of propositions. However, Leibniz did not develop such
a logic and it was George Boole, who introduced a mathematical
concept of imprecise probability aiming to reconcile classical logic,
which tends to express complete knowledge or complete ignorance,
and probability theory, which has a propensity to express partial
or/and imprecise knowledge or ignorance (Boole, 1854).
It is necessary to remark that some logics have an essentially dif-
ferent structure. For instance, the “logic” of (non-relativistic) quan-
tum mechanics is thought of as being the lattice of closed subspaces
of a separable infinite dimensional Hilbert space (Mackey, 1963).
By this definition, all axioms from A are situated at the zero level.
Levels of inner potential knowledge allow us to solve the Problem
of Omniscience. Namely, Omniscience in this context means that a
person who has some knowledge K in the logical form and logical
rules of inference also knows all knowledge deducible from K. The
Problem of Omniscience asks whether it is possible for a human being
really to have infinite knowledge.
The solution to this problem is that a person who has some knowl-
edge K in the logical form and logical rules of inference knows only
some number of levels of knowledge deducible from K.
C = (A, H, T ). (5.15)
Associative rule: (ϕ ∨ ψ) ∨ χ = ϕ ∨ (ψ ∨ χ)
Cut rule: (ϕ ∨ ψ) and (¬ϕ ∨ χ) imply (ψ ∨ χ)
∃-introduction rule: ϕ → ψ implies ∃xϕ → ψ if x is not a free
variable in ψ.
Pentium III 866 MHz,” you know that this computer has the proces-
sor clock frequency 866 MHz. In turn, the processor clock frequency
determines how many instructions it can execute per second.
An important property of physical bodies is mass. The scale of
this property is the infinite interval [0, ∞) of the real line.
The most popular property in logic is truth, defined for logical
expressions. In classical logics, only one such property, truth with
the scale L = {T, F}, is considered, i.e., predicates and propositions
take one of the two values T (true) and F (false). Thus, the set {T, F}
is the scale of the abstract properties that represent predicates and
propositions.
For modal logics, which have only one truth property that is deter-
mined for logical expressions, modalities are expressed by means of
modal operators. At the same time, modal operators are abstract
properties defined for well-formed formulas and taking values in
modal well-formed formulas. For instance, the modal operator of
necessity □ is defined as □: f → □f for any well-formed formula f.
Another possibility to express modality is to determine differ-
ent modal truth properties: “truth,” “necessary truth”, and “possible
truth”. These truth properties are also abstract properties.
Valued sets give one more example of abstract properties
(Dukhovny and Ovchinnikov, 2000; Ovchinnikov, 2000; Frascella and
Guido, 2008). Recall that a valued set is a function from a given
set into a given linearly ordered set L. Thus, we see that valued sets
also are particular cases of abstract properties and thus, they are
represented by named sets (Burgin, 2011).
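A minimal sketch of a valued set as a function from a support into a linearly ordered scale, here L = [0, 1] with the usual order; the support and values below are invented for illustration:

```python
# A valued set sketched as a mapping from a support into the linearly
# ordered scale L = [0, 1].
valued_set = {"white": 1.0, "gray": 0.6, "red": 0.0}

def cut(vs, threshold):
    """Elements whose value reaches the threshold (a cut of the valued set)."""
    return {x for x, v in vs.items() if v >= threshold}

print(cut(valued_set, 0.5))
```

Viewed as a named set, the support is the domain of the mapping, the scale L is the set of names, and the mapping itself is the naming.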
There are several methods to represent concepts by abstract prop-
erties. For instance, we can take an abstract property P of names
with the scale that consists of conceptual representatives. It is also
possible to represent concepts by an abstract
property P so that P assigns the Extent of a concept C to the name of
C. We obtain another representation when the Meaning of a concept
C is assigned to the name of C.
Taking the next level of logic, we see that it is possible to represent
statements and propositions by abstract properties. Indeed, a state-
ment or a proposition is the information content of a well-formed
b/a ÷ c/d = b/a · d/c.
[Diagram: the operation Div: F² → F represented through the operation Mlt: F² → F by a mapping g.]
a − b = a + (−b).
[Diagram (5.31): a named-set diagram with components U, V, L, N and mappings g, f, p, r, u, v.]

x ¬x
T F
F T
x y x&y
T T T
F T F
T F F
F F F
x y x∨y
T T T
F T T
T F T
F F F
x y x→y
T T T
F T T
T F F
F F T
x y x↔y
T T T
F T F
T F F
F F T
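The truth tables above can be regenerated mechanically; a sketch treating the four connectives as Boolean functions:

```python
from itertools import product

# The four binary connectives tabulated above, as Boolean functions.
ops = {
    "x & y":   lambda x, y: x and y,
    "x v y":   lambda x, y: x or y,
    "x -> y":  lambda x, y: (not x) or y,
    "x <-> y": lambda x, y: x == y,
}

for name, op in ops.items():
    print(name)
    for x, y in product([True, False], repeat=2):
        to_tf = lambda v: "T" if v else "F"
        print(to_tf(x), to_tf(y), to_tf(op(x, y)))
```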
• is an instance of
is a relation that indicates that one object is an element of another
object in the network, e.g., “Jumbo is an instance of elephant”.
• is a prototype of
is a relation that indicates that one object is a special case or a model
of another object.
• is a part of
is a relation that indicates that one object is a physical part of
another object, e.g., “Wheel is a part of a car”.
It is possible to find other semantic relations in Section 4.1.2.
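A semantic network built from such relations can be sketched as a set of (source, relation, target) triples, i.e., symbolic knowledge quanta; the example network below is invented for illustration:

```python
# A tiny semantic network: edges are (source, relation, target) triples.
network = [
    ("Jumbo", "is an instance of", "elephant"),
    ("elephant", "is a", "animal"),
    ("wheel", "is a part of", "car"),
]

def related(net, source, relation):
    """All targets linked to `source` by `relation`."""
    return [t for s, r, t in net if s == source and r == relation]

print(related(network, "Jumbo", "is an instance of"))  # ['elephant']
```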
Semantic networks became very popular when researchers started
to use them in artificial intelligence and machine translation although
earlier versions have long been used in philosophy, psychology, and
linguistics.
The oldest known semantic network was drawn in the 3rd cen-
tury C.E. by the Greek philosopher Porphyry in his commentary
on Aristotle’s categories. Porphyry used this network to illustrate
Aristotle’s method of defining categories by specifying a genus or
general type, which encompasses the special cases as the subtypes
of the general type. Then this procedure was iterated for introduced
subtypes and so on. The structure of this semantic network is a
special kind of graph called a forest.
For computers, semantic networks were first introduced by
Richard H. Richens of the Cambridge Language Research Unit in
1956 as an “interlingua” for machine translation of natural languages
because semantic networks allow spreading activation, imposing
inheritance, and using nodes as representations of objects.
[Figure: a classification forest: chemical elements; physical microobjects divided into particles and antiparticles; particles into quarks and leptons; quarks into the up, down, charm, strange, top, and bottom quarks; antiparticles including the positron.]
A cat --is--> an animal
A fish --is not--> an animal
Figure 5.19. A statement network for the proposition “A cat is an animal, while
a fish is not an animal”
Andy --goes to--> his school
Alice --is in--> the theater
Figure 5.20. A statement network for the proposition “Andy goes to school and
Alice is in the theater”
[Figure: a finite automaton with states q, p, r, and t, and transitions labeled 0, 1, and ε.]
1. Go to a restaurant;
2. Be seated;
3. Get menu;
4. Read menu;
5. Order food;
6. Eat food;
7. Pay for meal;
8. Exit the restaurant.
Props:
Tables;
Menu;
Food;
Money.
Roles:
Customer;
Waiter;
Cook.
Scene 1: Entering
Customer PTRANS Customer into restaurant;
Customer ATTEND eyes to tables;
Customer MBUILD where to sit;
Customer PTRANS Customer to table;
Customer MOVE Customer to sitting position.
Scene 2: Ordering
Customer PTRANS menu to Customer (menu already on table);
Customer MBUILD choice of food;
Customer MTRANS signal to Waiter;
Waiter PTRANS to table;
Customer MTRANS ‘I want food’ to Waiter;
Waiter PTRANS to Cook.
Scene 3: Eating
Cook ATRANS food to Waiter;
Waiter PTRANS food to Customer;
Customer INGEST food.
Scene 4: Exiting
Waiter MOVE write check;
Waiter PTRANS to Customer;
Waiter ATRANS bill to Customer;
Customer ATRANS money to Waiter;
Customer PTRANS out of restaurant.
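A script of this kind is essentially structured data: an ordered list of scenes, each an ordered list of events of the form (actor, primitive act, object). The Python sketch below encodes a fragment of the restaurant script this way; the encoding and the helper acts_by are invented for illustration and are not Schank and Abelson's own formalism.

```python
# A script as structured data: scenes in order, each scene an ordered
# list of events (actor, primitive act, object).  The primitive acts
# (PTRANS, MTRANS, MBUILD, INGEST) follow conceptual-dependency theory.
restaurant_script = {
    "props": ["tables", "menu", "food", "money"],
    "roles": ["customer", "waiter", "cook"],
    "scenes": [
        ("Entering", [
            ("customer", "PTRANS", "customer into restaurant"),
            ("customer", "MBUILD", "where to sit"),
        ]),
        ("Ordering", [
            ("customer", "MBUILD", "choice of food"),
            ("customer", "MTRANS", "'I want food' to waiter"),
        ]),
        ("Eating", [
            ("waiter", "PTRANS", "food to customer"),
            ("customer", "INGEST", "food"),
        ]),
    ],
}

def acts_by(script, actor):
    """All primitive acts performed by an actor, in scene order."""
    return [act for _, events in script["scenes"]
            for who, act, _ in events if who == actor]

print(acts_by(restaurant_script, "customer"))
```

Queries such as acts_by show why the formalization is useful: once the script is data, each role's expected behavior can be read off mechanically.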
The final formalization gives us the formal script “Restaurant”:
Props
Tables;
Menu;
F = Food;
Bill;
Money.
Roles
P = Customer;
W = Waiter;
C = Cook;
K = Cashier;
O = Owner.
Entry conditions
P is hungry;
P has money.
Results
Scene 1: Entering
P PTRANS P into restaurant;
P ATTEND eyes to tables;
P MBUILD where to sit;
P PTRANS P to table;
P MOVE P to sitting position.
Scene 2: Ordering
Alternative openings: (Menu on table), (S asks for menu), (O brings menu).
P PTRANS menu to P;
O MTRANS signal to W;
W PTRANS menu to P;
W PTRANS W to table;
P MTRANS “need menu” to W;
W PTRANS W to menu;
W PTRANS W to table.
Scene 3: Eating
C ATRANS F to W
W ATRANS F to P
P INGEST F
(Option: Return to Scene 2 to order more; otherwise go to Scene 4)
Scene 4: Exiting
S MTRANS to W
or
I will go to Italy.
In addition, we have:
Rome is in Italy.
(N , D, C),
(S, A → B, P ),
1. The slot with the name “Vehicle” informs that the class of cars
is a subclass of the class of vehicles and is related to the frame
“Vehicle”.
2. The slot with the name “Number of wheels” informs how many
wheels the car has, e.g., this slot may have the value 4.
3. The slot with the name “Number of doors” informs how many
doors the car has, e.g., this slot may have the value 2 or 4.
4. The slot with the name “Make” informs what company produced
the car, e.g., this slot may have the value “GMC”, “Honda” or
“Toyota”.
5. The slot with the name “Model” describes the model of the car,
e.g., this slot may have the value “Accord” when the previous slot
has the value “Honda”.
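Slot-and-filler structures of this kind translate directly into code. Below is a minimal Python sketch of a frame with slots and a link to a parent frame; the lookup rule (consult the parent when a slot is missing) is one common convention for default inheritance, and all class and slot names are chosen for this example.

```python
# A frame as a named collection of slots.  A slot missing from a frame
# is looked up in its parent frame, giving simple default inheritance.
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent
        self.slots = slots

    def get(self, slot):
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

vehicle = Frame("Vehicle", number_of_wheels=4)
car = Frame("Car", parent=vehicle, number_of_doors=4,
            make="Honda", model="Accord")

print(car.get("number_of_doors"))   # 4: stored in the Car frame itself
print(car.get("number_of_wheels"))  # 4: inherited from the Vehicle frame
```

The "Vehicle" slot of the car frame from the text corresponds here to the parent link, so subclass information and default values travel along the same connection.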
[Figure body: visual activation drives the location of visual search; size recognition and orientation recognition determine the target location.]
Figure 5.23. Dashed lines — activation of signals (i.e., control links, in our
terminology); solid lines — transfer of data (i.e., information links, in our
terminology)
[Figure body: Computational Neuroscience approached via Structure and Function, with Subneural Modeling in a form of grid automata.]
Figure 5.24. A version of the schema for the Computational Neuroscience sug-
gested by M. A. Arbib
schemas have been introduced and utilized (Slutz, 1968; Keller, 1973;
Dennis, Fossen, and Linderman, 1974). Dataflow schemas are for-
malizations of dataflow languages. Program schemas and dataflow
schemas formed an implicit base for the development of the first pro-
gramming metalanguage — the block-schema (flow-chart) language
(Burgin, 1973; 1976).
Moreover, the advent of the Internet and introduction of the
Extensible Markup Language, abbreviated XML, started the devel-
opment of schema languages (cf., for example, (Duckett et al., 2001;
Van Der Vlist, 2004)). As developers know, the advantage of XML is
that it is extensible, even to the point that you can invent new ele-
ments and attributes as you write XML documents. Then, however,
you need to define your changes so that applications will be able to
make sense of them and this is where XML schema languages come
into play. In these languages, schemas are machine-processable spec-
ifications that define the structure and syntax of metadata specifica-
tions in a formal schema language. There are many different XML
schema languages (W3C Schema, Schematron, Relax NG, and so
on). They are based on schemas that define the allowable content
of a class of XML documents. Schema languages form an alterna-
tive to the DTD (Document Type Definition), and offer more pow-
erful features including the ability to define data types and struc-
tures. XML schemas from these languages provide means for defining
the structure, content and semantics of XML documents, including
metadata. A specification for XML schemas is developed and main-
tained under the auspices of the World Wide Web Consortium. The
Resource Description Framework (RDF) is an evolving metadata
framework that offers a degree of semantic interoperability among
applications that exchange machine-understandable metadata on the
Web. RDF schema (Resource Description Framework schema) is a
specification developed and maintained under the auspices of the
World Wide Web Consortium. The Schematron schema language dif-
fers from most other XML schema languages because it is a rule-
based language that uses path expressions instead of grammars.
This means that instead of creating a grammar for an XML doc-
ument, a Schematron schema makes assertions applied to a specific
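As a concrete illustration of the grammar-based style of schema language, here is a small sketch in the W3C XML Schema language; the element and type names are invented for the example.

```xml
<!-- A sketch of a W3C XML Schema declaring a "book" element whose
     "title" is a string and whose "year" is an integer. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="book">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
        <xs:element name="year" type="xs:integer"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

A conforming document would contain a book element with a string title followed by an integer year; a validator rejects any document whose structure or data types deviate from this declaration.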
Most DBMS do not separate the three levels completely, but sup-
port the three-schema architecture to some extent.
An interesting special kind of schemas was introduced by Google
in 2012. It is called a knowledge graph (Google: Knowledge Graph,
2012; Kohs, 2014). It is organized around objects called
entities, which include individuals, societies, places, events, organiza-
tions, countries, sports teams, books, works of art, movies and so on,
with facts connected to them and relations between these different
objects. This structure is growing very fast: in May 2012, it included
3.5 billion facts connected to 500 million entities. By December of
that year, it had grown to include 570 million entities with 18 billion
facts connected to them.
The knowledge management system also called Knowledge Graph
organizes and manages this gigantic knowledge schema — a labeled
graph of knowledge — collecting and merging information about
entities from many data sources. Based on this knowledge schema,
Knowledge Graph provides structured and detailed information
about the topic in addition to a list of links to other sites.
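The core operation described here, merging facts about the same entity from many data sources into one labeled graph, can be sketched simply. This toy Python illustration is not Google's actual system; the entities and facts are invented.

```python
# A toy knowledge graph: facts are (entity, relation, value) triples,
# merged from several sources into one store keyed by entity.
from collections import defaultdict

def merge_sources(*sources):
    graph = defaultdict(set)
    for source in sources:
        for entity, relation, value in source:
            graph[entity].add((relation, value))
    return graph

source_a = [("Rome", "capital-of", "Italy"), ("Rome", "type", "city")]
source_b = [("Rome", "type", "city"), ("Italy", "type", "country")]

kg = merge_sources(source_a, source_b)
print(sorted(kg["Rome"]))  # duplicate facts from the two sources merged
```

Because facts are stored as a set per entity, repeating a fact in several sources leaves a single copy, which is the essence of the merging step.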
Now other Internet companies, such as Yahoo or Diffbot, are
developing their own knowledge graphs.
It is interesting to know that the term knowledge graph appeared
in computer science much earlier. In 1982, Hoede and Stokman
started building a theory of knowledge graphs to use it for extract-
ing knowledge from medical and sociological texts and building
corresponding expert systems (cf., (Zhang, 2002)). Later several
researchers in the Netherlands continued to develop and apply this
theory (Hoede and Willems, 1989; Smit, 1991; van den Berg, 1993;
Zhang, 2002; Wang et al., 2010).
In this context, a knowledge graph is also a schema with a
variable called a token, which is a node in the knowledge graph and
denotes a perception of an individual from the real world. Note that
according to the Existential Triad, the real world consists of three
components: the physical world, the mental world and the structural
world (Burgin, 2012). Perceptions of an individual from the mental
world are usually called concepts or conceptions.
[Diagram: the marks “Rose” and “cat” linked by directed arcs to a common node.]
Here Rose and cat are marks and Rose is the name of a cat.
In this example, relations are represented by arrows (directed
arcs). However, according to the theory of knowledge graphs, a rela-
tionship between two concepts a and b is a graph in which both a and
b occur. It gives an interesting example of a named set A = (a, f, b),
in which the naming relation f is a graph (cf., Appendix).
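A named set A = (a, f, b) can be rendered directly in code as a triple whose middle component connects the other two. In the sketch below the naming component is a set of labeled arcs standing in for the graph f; all names are invented for the example.

```python
# A named set (a, f, b): a support, a naming component, and a set of
# names.  Here the naming component f is a set of labeled arcs, a
# stand-in for the graph used in knowledge-graph theory.
from collections import namedtuple

NamedSet = namedtuple("NamedSet", ["support", "naming", "names"])

cats = {"cat1"}
names = {"Rose"}
f = {("cat1", "is-named", "Rose")}   # the naming relation as a graph

A = NamedSet(support=cats, naming=f, names=names)

def name_of(named_set, element):
    """Names assigned to an element by the naming component."""
    return {n for s, _, n in named_set.naming if s == element}

print(name_of(A, "cat1"))  # {'Rose'}
```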
It is demonstrated that the mathematical schema theory (Burgin,
2005; 2006; 2010a) encompasses all types of schemas used in pro-
gramming, database theory, and computer science, as well as on the
Internet and in real-world databases.
A notion of a schema has also been used in mathematical logic,
metamathematics, and set theory. In 1927, John von Neumann (orig-
inally, János Lajos Margittai von Neumann) (1903–1957) introduced
the concept of an axiom schema. It has become very useful in
axiomatic set theories (for instance, the axiom of subsets is, according
to the conceptions of Thoralf Skolem (1887–1963), Wilhelm Friedrich
Ackermann (1896–1962), Willard Van Orman Quine (1908–2000) and
some other logicians, an axiom schema) and in other axiomatic math-
ematical theories (Fraenkel and Bar-Hillel, 1958). Logicians studied
axiomatizability by a schema in the context of general formal theo-
ries (Vaught, 1967). In addition to axiom schemas, schemas of infer-
ence (e.g., syllogism schemas) have also been studied in mathematical
logic (cf., for example, (Fraenkel and Bar-Hillel, 1958)). Actually, syl-
logisms introduced by Aristotle, as well as deduction rules of modern
logic, are schemas for logical inference and mathematical proofs.
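As an illustration, the axiom of subsets mentioned here (the separation schema) is not a single axiom but a pattern with a metavariable: for every formula φ(z) of the language of set theory it yields the axiom

```latex
\forall x \, \exists y \, \forall z \, \bigl( z \in y \leftrightarrow ( z \in x \wedge \varphi(z) ) \bigr)
```

with one instance of the schema for each choice of φ, which is what distinguishes an axiom schema from an ordinary axiom.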
Another mathematical field where the concept of schema is
used is category theory. This concept was introduced by Alexan-
der Grothendieck (1928–2014) in a form equivalent to a multigraph
and later generalized to the form of a small category (Grothendieck,
Remark 5.6.4. There are fuzzy functions with fuzzy domain and/or
range. However, here we do not consider such functions.
Here:
The set A_B is the set of all object names (node constants) from B
used as nodes of the schema; the multiset VN_B consists of all object
variables from B also used as nodes of the schema; the set C_B is
the set of all connections/links (link constants) from B; the multiset
[Figure body: schemas GA, GA1, and GA3 built from components such as Tm1, CA1, FA1, FA2, FA4, FA5, RAM1, and RAM2 with links c1–c6 and m2–m6, and a grid with nodes X1–X8.]
Figure 5.26. Dashed lines represent activation of signals, while solid lines rep-
resent transfer of data
One input comes from the outside through M , while the second
input of G is the output of the machine T . The automaton G can be
in two states: closed and open. Initially G is closed until it receives
some input from T , which makes it open. When G is closed, it gives
no output. When G is open, it gives the word that comes to G from
M as its output. The structure of the Turing machine A_T,w is shown below.
When an informal schema, such as an interaction schema or flow-
chart of a program, is formalized, its formal representation is a
[Figure: the automaton A_T,w built from the gate automaton G, the component M with input u, and the Turing machine T with input w.]
Lemma is proved.
Concretization is a special kind of a more general operation on
schemas.
G = (V, E, c).
Figure 5.28. Examples of graphs (a), multigraphs (b), directed graphs (c),
directed multigraphs (d), partially directed graphs (e), and partially directed
multigraphs (f)
Example 5.6.12. Figure 5.29 gives the graphical form of the grid
G(P ) of the schema P from Example 5.6.9.
We can see that the grid G(P ) of a schema P is a partially directed
graph.
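Under simple assumptions, a grid can be encoded as a set of nodes together with a map c sending each edge either to an ordered pair of nodes or, for an edge with a free end, to a single node. The Python sketch below checks the multigraph condition Im c ⊆ V × V in this simplified encoding; the encoding itself is invented for illustration.

```python
# A partially directed grid: nodes V and a map c from edges to either a
# pair of nodes (both ends attached) or a single node (one free end).
# The grid is a conventional multigraph when every edge maps to a pair.
def is_conventional_multigraph(nodes, c):
    """True when Im c is contained in V x V, i.e., no dangling edges."""
    for image in c.values():
        if not (isinstance(image, tuple) and len(image) == 2
                and image[0] in nodes and image[1] in nodes):
            return False
    return True

V = {"X1", "X2", "X3"}
closed = {"e1": ("X1", "X2"), "e2": ("X2", "X3")}
dangling = {"e1": ("X1", "X2"), "e2": "X3"}   # e2 attached at one end only

print(is_conventional_multigraph(V, closed))    # True
print(is_conventional_multigraph(V, dangling))  # False
```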
Example 5.6.13. Figure 5.30 gives the graphical form of the grid
G(R) of the schema R from Example 5.6.12.
[Figure body: a partially directed graph whose nodes are shown as circles.]
Figure 5.30. The grid G(R) of the schema R from Example 5.6.12
Any port schema has the same grid as its projection on the
corresponding basic schema.
Proposition 5.6.4. For any port schema P , we have G(P ) =
G(DP ) where DP is the basic schema built from P .
Grids of schemas allow one to characterize definite classes of
schemas.
Proposition 5.6.5. A schema B is closed if and only if its grid
G(B) satisfies the condition Im c ⊆ V × V , or in other words, the
grid G(B) of B is a conventional multigraph.
Proposition 5.6.6. A schema B is an acceptor only if it has exter-
nal input ports and/or its grid G(B) has edges connected by their
end, or Im c ∩ V_e = Ø.
Proposition 5.6.7. A schema B is a transmitter only if it has exter-
nal output ports and/or its grid G(B) has edges connected by their
beginning, or Im c ∩ V_b = Ø.
Proposition 5.6.8. A schema B is a transducer only if it has exter-
nal input and output ports and/or its grid G(B) has edges connected
by their beginning and edges connected by their end.
(a) variables from P are mapped into variables and constants of the
same type from R;
(b) constants from P are mapped into constants of the same type
from R.
In a natural way, compositions of typed homomorphisms of
schemas are introduced as conventional sequential composition of
mappings.
WGSC in which objects are schemas and morphisms are their weak
structural homomorphisms.
[Figure body: a schema with Turing machines T1 and T3, a neural network NN, and finite automata FA and FA1.]
Figure 5.31. Dashed lines represent activation of signals, while solid lines rep-
resent transfer of data
the schema from Figure 5.23 nor a subschema of the schema from
Figure 5.26.
Here is the list of variables in this schema:
T1 and T3 are Turing machines;
NN is a neural network;
FA and FA1 are variables whose range is the class of all finite
automata.
Lemma 5.6.7. If a schema P is a structural subschema of a schema
R, then there is a structural VE-monomorphism of P into R.
Proof is left as an exercise.
Definition 5.6.25. A schema P is a subschema of a schema R if
all nodes of P belong to the set of nodes of R, all links of P belong
to the set of links of R, all ports of P belong to the set of ports of
R, and the internal and external port assignment functions p_IP and
p_EP and the port-link adjacency function c_P of P are restrictions of the
internal and external port assignment functions p_IR and p_ER and the
port-link adjacency function c_R of R, respectively. It is denoted by
P ⊆ R.
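Ignoring ports, the conditions of this definition reduce to containment of nodes and links together with agreement of the adjacency function on the smaller schema. A simplified Python check, with a schema encoded as a triple (nodes, links, adjacency), all invented for illustration:

```python
# A simplified subschema test: P is a subschema of R when P's nodes and
# links are among R's and R's adjacency function, restricted to P's
# links, coincides with P's.  Ports are omitted for brevity.
def is_subschema(p, r):
    p_nodes, p_links, p_adj = p
    r_nodes, r_links, r_adj = r
    return (p_nodes <= r_nodes
            and p_links <= r_links
            and all(p_adj[link] == r_adj[link] for link in p_links))

P = ({"a", "b"}, {"l1"}, {"l1": ("a", "b")})
R = ({"a", "b", "c"}, {"l1", "l2"}, {"l1": ("a", "b"), "l2": ("b", "c")})

print(is_subschema(P, R))  # True
```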
Let P be a subschema of a schema R.
Proposition 5.6.19. For any concretization Con P (abstraction
Abs P ) of the schema P , there is a unique minimal concretization
Con R (abstraction Abs R) of the schema R such that Con P (Abs
P ) is a subschema of Con R (Abs R).
[Figure body: a grasping schema in which visual and kinesthetic input drives hand preshape and hand rotation, and kinesthetic and tactile input drives the actual grasp.]
Figure 5.32. Dashed lines denote activation of signals and solid lines denote
transfer of data
[Figure body: a grid with nodes X4–X8.]
Figure 5.33. Dashed lines represent activation of signals, while solid lines rep-
resent transfer of data
Chapter 6
Definition 6.2. A partial knowledge system does not have all types
of knowledge.
Formal theories, such as axiomatic set theory or Peano arithmetic,
are examples of partial knowledge systems.
Structural analysis of big knowledge systems in general, and scien-
tific theories in particular, has, as a rule, been concerned only with
the inner structure of knowledge systems. The reason for this was the
limited understanding of the concept of structure that had existed
for a long time. In the general theory of structures developed in
(Burgin, 2012), this limitation was eliminated by demonstrating that
any system has five types of structures: internal structure, inner
structure, intermediate structure, external structure, and outer struc-
ture. In addition, the new theory demonstrated that the traditional
understanding of the concept of structure, as well as its mathematical
formalizations, is incomplete, and a complete concept of structure
with its mathematical formalization was created. As a result, in the
While the traditional approach takes into account only one structure
of a system, the general theory of structures postulates that any
system has different structures, which belong to five basic types:
inner, internal, intermediate, outer, and external structures.
— discovery of counterexamples,
— possible discounting of such examples as noise,
— generation of hypotheses to modify a theory,
— production of a new theory when an old one has accumulated
too many counterexamples or repulsive and complicated auxiliary
amendments,
— applications of a theory.
[Figure: the structure of a knowledge system relating problem thinking and affirmative thinking to its subsystems: the Logic-Linguistic Subsystem with logical and linguistic parts, the Problem-Heuristic Subsystem with heuristic and problem parts, the Pragmatic-Procedural Subsystem with procedural and axiological parts, the Model-representing Subsystem with model and nomological parts, and the Subsystem of Ties, grounded in the object domain and the action world.]
(Burgin and Kuznetsov, 1989; 1989a; 1991; 1992; 1993; 1994; Balzer
et al., 1991). Here we further develop the SNR model by building the
modal stratified bond model (MSB model) of comprehensive knowl-
edge systems in general, as well as of advanced scientific and mathemat-
ical theories in particular, describing their structure and functioning.
To build the MSB model, we configure the global knowledge in
three directions: systemic, modal, and hierarchical.
Namely, we take into account three modalities of knowledge con-
sidered in Chapter 2 to construct the modal direction:
∗ Descriptive knowledge.
∗ Representational knowledge.
∗ Operational knowledge.
◦ Procedural knowledge.
◦ Instrumental knowledge.
◦ Axiological knowledge.
— Assertoric knowledge;
— Erotetic knowledge;
— Hypothetic or heuristic knowledge.
Subsystem (Part)
Componential level. Nominalistic part: concepts, lexicon, vocabularies, alphabets. Aspect part: properties. Operating part: primitive operations, data. Scaling part: relations, scales and their systems. Fragment part: parts and components of automata, devices, machines.
Attributed level. Linguistic part: grammars, linguistic relations, languages. Modeling part: parametric, attributive and relational models. Operator part: instructions, operators. Evaluation part: estimates, judgments, norms, goals, measures, criteria, values. Performer part: automata, devices, machines, instruments.
Productive level. Logical part: logical calculi, logics, deduction/inference rules. Nomological part: systems and algebras of models. Algorithmic part: algorithms, procedures, operational. Combination part: algebras and calculi of properties, values. System part: systems and networks of automata, devices.
All subsystems, their levels and strata form the inner struc-
ture of a comprehensive knowledge system. This structure with
[Figure: in the Mental World, hypothetic, affirmative, and inquisitive thinking together with attitude evaluation; in the Structural World, the corresponding hypothetic and assertoric strata of the knowledge system with the Logic-Linguistic, Axiological, Procedural, Instrumental, and Model-representation Subsystems and Subsystems of Bonds, grounded in the object domain and action.]
[Figure: the structure of the LSS: the componential level with concepts and lexicon (conceptual part), the attributed level with languages and grammars (linguistic part), and the productive level with calculi and varieties (logical part).]
Each basic level is, in turn, divided into sublevels, which struc-
turally form a named set sequence in the sense of (Burgin, 2011). Let
us consider all these levels.
The first level of the LSS consists of the symbols used in the
languages of this subsystem. For instance, mathematical theories use
such symbols as: decimal digits, Latin letters, Greek letters, Hebrew
letters, additional mathematical symbols, e.g., ∅, and letters of
the natural language used by mathematicians, e.g., French, German,
or Russian.
On the second level, symbols are organized into alphabets. For
instance, decimal digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 form the alphabet
of the decimal positional numerical system. If a knowledge system
uses several languages, e.g., a physical theory, as a rule, uses natural
languages, mathematical languages and the language of the area of
this theory, then it is possible to take the union of all its alphabets
as the alphabet of this theory.
The third level of the LSS consists of the rules for building words,
expressions and well-formed formulas from the symbols.
On the fourth level, some words and expressions are chosen as
names and terms. For instance, 123 is a name of natural number
in the decimal positional numerical system. In the binary positional
numerical system, the same number has the name 1111011.
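This point, that one number has different names in different positional numerical systems, is easy to check in code:

```python
# The same natural number has different names in different positional
# numerical systems: "123" in decimal, "1111011" in binary.
n = 123
binary_name = format(n, "b")        # the binary name of the number
print(binary_name)                  # 1111011
print(int(binary_name, 2) == n)     # True: both names denote one number
```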
Concepts used in the considered knowledge system or connected
to this system form the fifth level of the LSS. Concept structure and
properties are modeled and studied by means of various named sets
(Burgin and Kuznetsov, 1988; 1990).
Symbolic forms of concept expressions (terms) are used for the
construction of the lexicon comprising alphabets and vocabularies of
languages from the whole knowledge system, which are elements of
the sixth level of the LSS.
[Figure: the strata of the LSS: the assertoric, erotetic, and hypothetic strata connected by the Subsystem of Bonds.]
[Figure: the structure of the MSS: names and properties at the componential level, and systems of models (the modeling and nomological parts) at the productive level.]
and relations between them, the names of names, etc.; the names
of abstract properties and relations corresponding to properties and
relations of objects, to properties of their properties, etc.; and names
of ideal entities like truth values; while R(D) is a subset of S(D) and
S(D) is the set scale in the sense of (Bourbaki, 1960) with basis D.
This set scale includes D and all its elements, functions from D into
D, functions defined on these functions; the set of all subsets of D;
and so on. In addition, L is the scale of the properties of the elements
from R(D) and f is the (partial) function that assigns values of their
properties to the elements from R(D).
The ninth level contains relations and ties between theory models,
as well as their properties and parameters. For instance, an important
relation between theory models is “to be a submodel of”. Examples
of properties are “standard” and “non-standard”, namely, there are
standard and non-standard models.
The tenth level of MSS is comprised of relations and ties between
relations and ties between theory models, as well as their proper-
ties and parameters. Using the structural hierarchy described in Sec-
tion 6.2, it is possible to come to higher levels of the MSS.
The eleventh level contains algebras and calculi of models, as well
as their properties and parameters. Examples of such algebras and
calculi are:
• Process algebras (Hennessy, 1988; Burgin and Smith, 2010).
• Process calculi (Hoare, 1985; Milner, 1989; 1999; Moller and Tofts,
1990).
• Algebras of abstract automata (Burgin, 2010d).
These algebras and calculi contain models of physical (usually,
computational) processes. Moreover, now there is a tendency to treat
physical, biological, psychological and economical processes as com-
putational processes.
In addition to parts and levels, the MSS has three basic strata
(cf., Figure 6.6):
— The assertoric stratum, which contains knowledge about the
domain of the knowledge system (e.g., of a theory), for example,
propositions describing the properties of the knowledge domain.
[Figure: the strata of the MSS: the assertoric, erotetic, and hypothetic strata connected by the Subsystem of Bonds.]
The hypothetic and erotetic strata of the MSS also have several
levels, inherited from the assertoric stratum of the knowledge system
described above. However, while the assertoric stratum employs
validated models of knowledge objects and their properties and rela-
tions, the hypothetic stratum of the MSS uses hypothetic and heuris-
tic models of knowledge objects and their properties and relations.
It also contains calculi and algebras of such models.
The erotetic stratum of the MSS contains problems and questions
concerning models of knowledge objects, as well as their properties
and relations.
[Figure: the structure of the PSS: data and operations at the componential level (operating part), operators at the attributed level (operator part), and algorithms, procedures, and scenarios at the productive level (algorithmic part).]
[Figure: the strata of the PSS: the assertoric, erotetic, and hypothetic strata connected by bonds.]
[Figure: the structure of the ISS: parts and components of automata (fragment part), automata, devices, and machines (performer part), and systems and networks of automata (system part).]
[Figure: the strata of the ISS: the assertoric, erotetic, and hypothetic strata connected by bonds.]
[Figure: the structure of the ASS: scales at the componential level, estimates (evaluation part), and algebras and calculi of estimates, norms, and values at the productive level (combination part).]
[Figure: the strata of the ASS: the assertoric, erotetic, and hypothetic strata connected by bonds.]
For instance, knowledge about cats and dogs is the union of knowl-
edge about cats and knowledge about dogs.
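When knowledge systems are modeled simply as sets of knowledge items, the union operation described here is literal set union; a toy Python sketch with invented propositions:

```python
# Union of knowledge systems modeled as sets of propositions:
# knowledge about cats and dogs is the union of the two systems.
knowledge_about_cats = {"a cat is an animal", "cats purr"}
knowledge_about_dogs = {"a dog is an animal", "dogs bark"}

knowledge_about_cats_and_dogs = knowledge_about_cats | knowledge_about_dogs
print(len(knowledge_about_cats_and_dogs))  # 4
```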
It is also possible to define structural relations between and struc-
tural operations with knowledge systems that have some structure,
e.g., a propositional algebra, i.e., a set of propositions closed with
respect to logical operations of disjunction, conjunction, implication
and negation, or a logical calculus (cf., Section 5.2).
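A propositional algebra in this sense can be sketched in code by modeling propositions as truth tables and closing a set of them under the connectives. The sketch below uses two atoms; since negation, conjunction, and disjunction form a functionally complete basis, the closure of {p, q} reaches all sixteen two-atom truth tables. The encoding is invented for illustration.

```python
# A tiny propositional algebra: propositions modeled as truth tables
# over two atoms, closed under negation, conjunction, and disjunction.
from itertools import product

atoms = list(product([False, True], repeat=2))  # valuations of (p, q)

def table(f):
    """Truth table of a proposition, as a hashable tuple."""
    return tuple(f(p, q) for p, q in atoms)

p = table(lambda p, q: p)
q = table(lambda p, q: q)

def conj(a, b): return tuple(x and y for x, y in zip(a, b))
def disj(a, b): return tuple(x or y for x, y in zip(a, b))
def neg(a):     return tuple(not x for x in a)

# Close {p, q} under the connectives, up to a fixed point.
system = {p, q}
while True:
    new = {neg(a) for a in system}
    new |= {c(a, b) for a in system for b in system for c in (conj, disj)}
    if new <= system:
        break
    system |= new

print(len(system))  # 16: every truth table over two atoms is generated
```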
Here are some of these relations.
[Figure: specialization relations σ in algebra: the theory of universal algebras specializes to semigroup theory, ring theory, and module theory; semigroup theory to group theory and further to the theories of abelian groups and finite groups; ring theory to field theory and to the theory of associative rings; module theory to the theory of vector spaces.]
Scientific theories are not static systems. They are born, develop,
and often go to another world — from science to the history
of science. To represent theory dynamics, structuralists introduced
the concept of theory-evolution.
Chapter 7
Note that there are people who are creative but lack intelligence
and there are intelligent people who are deficient in creative skills.
There are different forms of material activity that involve knowl-
edge processes:
a long time, but it has not been sufficiently exact, lacking an adequate
and efficient definition (Bogoyavlenskaya, 1983). Consequently, this
absence has caused many difficulties for the understanding and investi-
gation of this important phenomenon.
Such an exact definition was constructed by Burgin (1995d; 1996a;
1998a). According to this definition, Intellectual activity is a mean-
ingful functioning of mind (intelligent thinking).
This definition provides for the dynamic expression of human
intelligence, as well as for the elaboration of efficient means for its
study. That is why an investigation of various properties of intellec-
tual activity is of the greatest interest to psychology and pedagogy,
because intelligence has always been considered a main characteris-
tic of a human being.
The main assumption of this approach is that intelligence is
always displayed in different kinds of behavior and, in particular,
in cognition. Thus, knowing very little about the inner structure of intel-
ligence, it is more efficient to consider intelligent behavior or, more
exactly, the intellectual activity of a person.
Taking essential components of human activity as the base, dif-
ferent types and grades of intellectual activity are explicated and
explored.
With respect to the result, there are three types of intellectual
activity:
• Empirical intuition;
• Reasoning;
• Pure intuition.
— Sensation
— Intuition
— Thinking
— Feeling
— Analogy;
— Extension or generalization;
— Guessing.
[Figure: the triune brain: neomammalian, paleomammalian, and reptilian levels, related to will/instinct and behavior.]
The basic stages of the KDD process are (cf., Figure 7.5):
[Figure: Stage 1: identification of the application domain, goals, tools and initial database (or databases) for KDD; Stages 5 and 6: selection of algorithms, models and methods; Stage 7: data mining, which produces patterns; Stages 8 and 9: interpretation/evaluation, which produce knowledge used for information retrieval.]
7.1.4. Learning
“The most erroneous stories are those we think we know best —
and therefore never scrutinize or question.”
Stephen Jay Gould
[Table: sources of behavioral change in learning: the environment, society, and the self.]
In the case of learning in the broad sense, we have one more type:
— learner ability
— gender
— culture
— attitude
— motivation
— learner ability to manage the learning process
[Figure: the SECI model of knowledge conversion between tacit and explicit knowledge. Tacit to tacit: Socialization (empathizing); tacit to explicit: Externalization (articulating); explicit to tacit: Internalization (embodying); explicit to explicit: Combination (connecting).]
It is assumed that each phase takes equal time when the problem
is solved.
Confucius
— Process analysis;
— Process mending.
➢ Goal determination;
➢ Means determination;
➢ Activity organization;
➢ Activity realization;
➢ Result evaluation.
1. Knowledge acquisition;
2. Knowledge dissemination;
3. Knowledge utilization.
[Figure: PEOPLE define the roles of, and the knowledge needed by, technology, and help design and then use TECHNOLOGY.]
Chapter 8
[Figure: the three-level Data–Information–Knowledge pyramid and its four-level extension, the Data–Information–Knowledge–Wisdom pyramid.]
is due to their not taking into account these different levels (of the
Data–Information–Knowledge Pyramid) and the fundamentally dif-
ferent processing required within and between them”.
The most popular approach to the relation between data and
information implies that information is an organized collection of
facts and data. In this context, the process of transition from data to
knowledge goes in two steps: at first, data are transformed into infor-
mation and then information is converted into knowledge by structur-
ing processes. Thus, information comes into view as an intermediate
level of similar phenomena situated between data and knowledge
forming the triad Data–Information–Knowledge.
However, the triad Data–Information–Knowledge is not always
visually represented by a three-level pyramid. Another structure
(cf., Figure 8.3) of the system Data–Information–Knowledge — the
chain — is considered by Liew (2007). In this structure, not only
[Figure 8.3: the chain structure of the system Data–Information–Knowledge, extended with Understanding and Wisdom.]
of datum, a noun formed from the past participle of the Latin verb
dare — to give. Originally, data were things that were given (accepted
as “true”). A data element, d, “is the smallest thing which can be
recognized as a discrete element of that class of things named by a
specific attribute, for a given unit of measure with a given precision
of measurement” (cf., (Zins, 2007)).
Data has experienced a variety of definitions, largely depending
on the context of its use. With the advent of information technol-
ogy the word data became very popular and is used in a diversity of
ways. For instance, information science defines data as unprocessed
information, while in other domains data are treated as a represen-
tation of objective facts. In computer science, expressions such as
a data stream and packets of data are commonly used. The concep-
tualizations of data as a flow in both a data stream and drowning
in data occur due to our common experience of conflating a multi-
plicity of moving objects with a flowing substance. Data can travel
down a communication channel. Other commonly encountered ways
of talking about data include having sources of data or working with
raw data. We can place data in storage, e.g., in files or in databases,
or fill a repository with data. Data are viewed as discrete entities.
(Diagrams: Energy is similar to Information, while Matter is similar to Structures and, in the KIME model, to Knowledge/Data; in each diagram the lower level contains the upper one.)
does not specify representation of what knowledge is, while the KIME
model does it (Burgin, 2010). Second, MacKay does not specify what
kind of representation knowledge is, while the KIME model does it,
assuming that knowledge is a structure or a system of structures.
Third, the KIME model provides an explication of the quantum
knowledge structure (cf., Section 4.1).
The KIME Square correlates well with the distinction some
researchers in information sciences have continually made between
information and knowledge. For instance, Machlup (1983) and Sholle
(1999) distinguish information and knowledge along three axes:
(Diagrams (8.4) and (8.5): triadic diagrams on the structures D, N, U, A and N, L with the mappings f, q, and p.)
e.g., at the recipient computer. The internet protocol (IP) part han-
dles the address of the destination computer so that each packet is
routed (sent) to its proper destination.
In named data networking (NDN) architecture for the future
Internet, the transmitted packets of data carry data names rather
than source or destination addresses. The developers of this architec-
ture believe that this conceptually simple shift will have far-reaching
implications for how people design, develop, deploy, and use networks
and applications. The named data principle implies that a communication network should allow a user to focus on the data he or she needs, identified by their names, rather than having to reference a specific, physical location where that data would be retrieved.
Actually, Internet packets of data are already named by destina-
tion addresses and the new approach suggests changing these names
to the original data names (identifiers). It is assumed that such a
renaming brings potential for a wide range of benefits such as sim-
pler configuration of network devices, building security into the net-
work at the data level and content caching to reduce congestion and
improve delivery speed. In addition, sustained growth in e-commerce,
digital media, social networking, and smartphone applications has led
to prevailing use of the Internet in the role of a distribution network.
Utilization of a point-to-point communication protocol in distribution networks is complex and error-prone, while NDN better suits a distribution environment.
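The shift from address-based to name-based retrieval described above can be sketched in a few lines; the addresses, data names, and payloads below are invented for illustration.

```python
# Toy contrast between address-based (IP-style) and name-based
# (NDN-style) retrieval. All names and data are hypothetical.

def ip_fetch(routing_table, address):
    """Address-based: the requester must know where the data lives."""
    return routing_table.get(address)

def ndn_fetch(content_store, name):
    """Name-based: the requester asks for the data by its own name;
    any node holding a copy (e.g., a cache) can answer."""
    return content_store.get(name)

# IP-style: data is reachable only through its host's address.
routing_table = {"192.0.2.7": "video-frame-bytes"}

# NDN-style: the same data is keyed by a hierarchical data name,
# so replicas cached anywhere respond to the same request.
content_store = {"/example/videos/clip1/frame4": "video-frame-bytes"}

assert ip_fetch(routing_table, "192.0.2.7") == ndn_fetch(
    content_store, "/example/videos/clip1/frame4")
```

The dictionary key is what changes: a location identifier in the first case, the datum's own name in the second, which is why caching and replication come for free in the name-based scheme.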
In this context, named sets give a natural mathematical model for
named data, which form the naming component of the knowledge
quanta with data as their object or domain (Burgin and Tandon,
2006). Consequently, named set theory provides powerful means for
network algorithms and procedures in the form of various operations
and correspondences (Burgin, 2011).
Another example of naming data is named graphs, which are a key
structure of the Semantic Web architecture. In it, a set of resource
description framework (RDF) statements (a graph) is identified
using a universal resource identifier (URI), allowing derivation of
descriptions of context, provenance information or other metadata.
This shows that named graphs form an extension of the RDF data
and water turbines rotate magnets, thereby creating electric current. Petroleum-powered engines make cars ride and planes fly.
The KIME Square shows essential distinction between knowledge
and information in general, as well as between knowledge and cogni-
tive information, in particular. This distinction has important impli-
cations for education. For instance, transaction of information (for
example, in a teaching process) does not give knowledge itself. It only
causes such changes that may result in the growth of knowledge.
This correlates with the approaches of Dretske (1981) and
MacKay (1969), who declare that information increases knowledge
and knowledge is considered as a completed act of information.
However, the general theory of information differs from Dretske’s
and MacKay’s conceptions of information because the general theory
of information demonstrates that information transaction may result
not only in the growth of knowledge but also in the decrease of knowl-
edge (Burgin, 1994). An obvious case for the decrease of knowledge is
misinformation and disinformation. For instance, people know about
the tragedy of the Holocaust during World War II. However,
when articles and books denying the Holocaust appear, some people
believe this and lose their knowledge about the Holocaust. Disin-
formation, or false information, is even used to corrupt opponent’s
knowledge in information warfare.
Moreover, even genuine information can decrease knowledge (Burgin, 1994).
For instance, some outstanding thinkers in ancient time, e.g., Greek
philosophers Leucippus (5th century B.C.E.) and Democritus (ca.
460–370 B.C.E.), knew, in some sense, that all physical things con-
sisted of atoms. Having no proofs of this feature of nature and lack-
ing any other information supporting it, people lost this knowledge.
Although some sources preserved these ideas, they were considered
as false beliefs. So, information about absence of supporting evidence
often decreases knowledge in society. Nevertheless, later, when physics and chemistry matured, they found experimental evidence for atomic
does not give knowledge itself. It only causes changes that may result
in the growth of knowledge. In other words, it is possible to trans-
mit only information from one system to another, allowing a cor-
responding infological system to transform data into knowledge. In
microphysics, the main objects are subatomic particles and quantum
fields of interaction. In this context, knowledge and data play the role of
particles, while information realizes interaction.
As we have seen, usually people assume that information cre-
ates knowledge. For instance, when an individual sends some text
by e-mail, this text, which is usually a knowledge representation,
is converted to data packages, which are then transmitted to the
recipient. However, utilizing information, it is possible to create data
from knowledge. This shows that the triad (8.13) inverse to the triad
(8.10) is also meaningful and reflects a definite type of information
processes.
Knowledge →information→ Data. (8.13)
At the same time, data are used not only for knowledge generation but also for deriving other epistemic structures such as beliefs and fantasies. This gives us two more information diagrams.
Data →information→ Beliefs (8.14)
and
Data →information→ Fantasy. (8.15)
For instance, let us assume the value of the weight complexity for
texts was estimated based on recursive algorithms, such as Turing
machines. Later the complexity estimate was obtained by means of
super-recursive algorithms, such as inductive Turing machines. As it
is proved that super-recursive algorithms decrease algorithmic com-
plexity (Burgin, 2005), the value-changing operator has to be applied
to give correct complexity of the texts processed by inductive Turing
machines.
There are also mixed epistemic information operators. A mixed
epistemic information operator acts on symbolic epistemic items
(structures) in some epistemic state, on their weights and on their
connections (bonds or relations).
For instance, a mixed epistemic information operator can act on
knowledge items in a knowledge state, their weights and their con-
nections (bonds or relations).
Operators of logical inference, such as rules of deduction, are mixed epistemic information operators because they add new knowledge items in the form of propositions or/and predicates and
establish relations of provability/deducibility between propositions
or/and predicates.
Subspaces of knowledge spaces represent subsystems of knowledge
systems. For instance, in large knowledge systems, such as a scien-
tific theory, it is possible to separate the subsystem of denotational
knowledge and the subsystem of operational knowledge.
It looks like it might be sufficient to consider only finite or, at least, locally finite agents. However, if knowledge is represented by logical
statements and it is assumed (as it is done, for example, in the theory
             f
       E --------> H
       |           |
     p |           | r        (8.16)
       v           v
       B --------> D
             g
C(pE (x)) = pH (A(x)) for all elements x from E. This gives us the
commutative diagram (8.17).
             A
       E --------> H
       |           |
    pE |           | pH       (8.17)
       v           v
       EE -------> HE
             C
(a) uniform if for any real number a and any weighted epistemic structures (e; w1, . . . , wk) and (l; v1, . . . , vh), the equality A(e; w1, . . . , wk) = (l; v1, . . . , vh) implies the equality A(e; aw1, . . . , awk) = (l; av1, . . . , avh).
(b) additive if A(e; w1 + u1 , . . . , wk + uk ) = A(e; w1 , . . . , wk ) +
A(e; u1 , . . . , uk ).
(c) linear if it is uniform and additive.
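A minimal sketch of a linear (uniform and additive) weight-changing operator; the structure labels and the particular map below are invented for illustration.

```python
# Sketch of a linear weight-changing operator on weighted epistemic
# structures. The labels 'e', 'l' and the weights are hypothetical.

def A(item):
    """Map (e; w1, w2) to (l; w1 + w2, w1 - w2): linear in the weights."""
    e, w1, w2 = item
    return ("l", w1 + w2, w1 - w2)

# Uniformity: scaling every input weight by a scales every output
# weight by a.
a = 3.0
_, v1, v2 = A(("e", 2.0, 5.0))
_, u1, u2 = A(("e", a * 2.0, a * 5.0))
assert (u1, u2) == (a * v1, a * v2)

# Additivity: A(e; w + u) = A(e; w) + A(e; u), weight-wise.
_, p1, p2 = A(("e", 1.0, 4.0))
_, q1, q2 = A(("e", 2.0 + 1.0, 5.0 + 4.0))
assert (q1, q2) == (v1 + p1, v2 + p2)
```

An operator that, say, squared each weight would pass neither check, so it would be neither uniform nor additive.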
(Figure 8.9. A topological space that is path (2, 1/2)-connected but is not path (1, 1)-connected: four points A, B, C, D at the corners of a unit square.)
(2, 1/2)-connected. Indeed, d(A, C) = √2, the lengths of both paths
(A, B, C) and (A, D, C) are equal to 2 with d(A, B) = d(B, C) =
d(A, D) = d(D, C) = 1. Consequently, d(A, B) + d(B, C) <
2d(A, C) but d(A, B) + d(B, C) > 1 · d(A, C). Thus, the space C is
path (2, 1)-connected but it is not path (1, 1)-connected. In addition,
it is not path (q, p)-connected when p < 1 because there are no paths
between A and C such that the distance between two consecutive
points is less than 1.
However, not all sets in metric spaces are path (q, r)-connected.
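The distance computations behind this example can be checked numerically, placing A, B, C, D at the corners of a unit square as in Figure 8.9.

```python
# Numerical check of the distances in the four-point square A, B, C, D
# (unit side, with A and C at opposite corners).
from math import dist, isclose, sqrt

A, B, C, D = (0, 0), (1, 0), (1, 1), (0, 1)

assert isclose(dist(A, C), sqrt(2))          # the diagonal
assert dist(A, B) == dist(B, C) == dist(A, D) == dist(D, C) == 1

# Both paths (A, B, C) and (A, D, C) have length 2, and
# 1 * d(A, C) < 2 < 2 * d(A, C), matching the inequalities in the text.
assert 1 * dist(A, C) < dist(A, B) + dist(B, C) < 2 * dist(A, C)
```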
to a stratum of the current knowledge state that does not have the
same replica. One of its special cases is a restricted copying epistemic
information operator COPY0 , which makes a copy of a knowledge
item and adds it only to a stratum of the current knowledge state
that does not have the same replica. Operators REPL0 and COPY0 are used in stratified M-spaces so as not to turn these spaces into stratified M-multispaces.
Proposition 8.4.2. The operator COPY0 can copy a knowledge item only to a different stratum, i.e., if a ∈ Ki and COPY0 a ∈ Kj, then i ≠ j.
Indeed, if this condition is violated, then the initial M-space is
converted to an M-multispace.
Complex information operations and operators are studied in
(Burgin, 1997e).
Definition 8.4.23. An epistemic information operator C is called
the sequential composition of an epistemic information operator A
with an epistemic information operator B if C(x) is defined and equal
to B(A(x)) when: 1) A(x) is defined and belongs to the domain of
B; 2) B(A(x)) is defined. Otherwise, C gives no result when applied to x, i.e., C(x) = ∗.
It is denoted by B ◦ A.
Taking sequential composition of an epistemic information oper-
ator A with itself, we obtain sequential powers An of the operator A.
In the general case, the sequential composition of epistemic infor-
mation operators is not commutative in M-spaces as the following
example demonstrates.
Example 8.4.9. Let us consider a structured M-space M = {KS M ;
OSM } where KS M = ∪i∈I KS M i . In this space, the operator MVaij
moves an element a from the stratum KS M i to the stratum KS M j ,
and does not change other elements from KS M . Taking the sequential
composition of such operators, we have
MVaij ◦ MVaik = MVaij ≠ MVaik ◦ MVaij = MVaik
if i ≠ j, k ≠ j, and i ≠ k. Thus, operators MVaij do not commute with one another.
At the same time, all these operators are idempotents, i.e., MV aij ◦
MV aij = MV aij .
It is necessary to remark that in a structured M-multispace M with an infinite number of elements a in each stratum KSMi, operators MVaij and MVaik commute with one another. This demonstrates a difference between M-spaces and M-multispaces.
Proposition is proved.
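A minimal sketch of the move operators from Example 8.4.9, modeling strata as sets; the element and stratum labels are hypothetical, and composition is applied right operator first, as in Definition 8.4.23.

```python
# Sketch of the move operators MV_a^{ij} on a stratified M-space,
# modeled as a dict of strata (sets). Element names are hypothetical.

def MV(a, i, j):
    """Return an operator moving element a from stratum i to stratum j."""
    def op(space):
        space = {k: set(v) for k, v in space.items()}  # work on a copy
        if a in space[i]:
            space[i].remove(a)
            space[j].add(a)
        return space
    return op

space = {1: {"a"}, 2: set(), 3: set()}

# Non-commutativity: the two orders of applying MV_a^{12} and
# MV_a^{13} leave "a" in different strata.
left = MV("a", 1, 2)(MV("a", 1, 3)(space))   # "a" ends in stratum 3
right = MV("a", 1, 3)(MV("a", 1, 2)(space))  # "a" ends in stratum 2
assert left != right

# Idempotence: applying the same move twice changes nothing further.
once = MV("a", 1, 2)(space)
assert MV("a", 1, 2)(once) == once
```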
He who knows, does not speak. He who speaks, does not know.
Lao-Tzu
• KRO is the set of all knowledge items added in the process of infor-
mation acceptation, i.e., in the process caused in the knowledge
space of A by the actual epistemic information operator R.
• KRD is the set of all knowledge items removed (deleted) in the
process of information acceptation.
• KRM is the set of all knowledge items moved between strata in the
process of information acceptation.
Proposition 8.5.1. For any actual epistemic information operator R,
KRO = KRF \KI ,
and
KRD = KI \KRF .
Proof. Knowledge items that belong to KRF but do not belong to
KI are definitely added when the operator R is applied. This gives us
the inclusion KRO ⊇ KRF \KI . Besides, knowledge items that belong
to KI cannot be added in the transition process — it is only possible
to move or to delete them. This gives us the equality KRO = KRF \KI .
Knowledge items that belong to KI but do not belong to KRF
are definitely deleted when the operator R is applied. This gives us
the inclusion KRD ⊇ KI \KRF . Besides, knowledge items that belong
to KRF cannot be deleted in the transition process — it is only
possible to move or to add them. This gives us the equality KRD =
KI \KRF .
Proposition is proved.
Introduced components of knowledge states allow us to define
measures of information. At first, we define measures of received
information.
Let us consider an actual epistemic information operator R and
a system A from K in the knowledge state KI .
Definition 8.5.2. The transitional measure |IRA | of information IRA
transmitted by R to A represents changes of knowledge in A with
the knowledge state KI under the impact of R and is defined by the
following formula
|IRA | = ∆K = |KRF | − |KI |
where |X| is the number of elements in the set X.
This shows that the transitional measure of transmitted information can be positive, when more knowledge items are added than deleted, or negative, when fewer knowledge items are added than deleted.
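The bookkeeping of Proposition 8.5.1 and Definition 8.5.2 can be illustrated with invented knowledge items.

```python
# Sketch of the knowledge-state bookkeeping around an actual epistemic
# information operator R. The knowledge items are hypothetical labels.

K_I = {"k1", "k2", "k3"}         # knowledge state before R is applied
K_RF = {"k2", "k3", "k4", "k5"}  # knowledge state after R is applied

K_RO = K_RF - K_I                # items added by R
K_RD = K_I - K_RF                # items deleted by R

transitional_measure = len(K_RF) - len(K_I)

assert K_RO == {"k4", "k5"}
assert K_RD == {"k1"}
assert transitional_measure == 1   # more items added than deleted
```

Swapping the two sets would make the measure negative, the case of information that decreases knowledge discussed earlier.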
As, in the transition process, each operation is performed at most once on each knowledge item from KI, Proposition 8.5.1 gives us the following result.
Chapter 9
Conclusion
Appendix
A. N. Whitehead
Q is asymmetric, i.e., at most one of the relations xQy and yQx holds for any x, y ∈ X.
An equivalence on a set X is a binary relation Q on X that is reflexive, transitive and satisfies the following additional axiom of symmetry: xQy implies yQx for all x, y ∈ X.
For any set S, χS (x) is its characteristic function, also called set
indicator function, if χS (x) is equal to 1 when x ∈ S and is equal
to 0 when x ∈ / S, and CS (x) is its partial characteristic function if
CS (x) is equal to 1 when x ∈ S and is undefined when x ∈ / S.
If f : X → Y is a function and Z ⊆ X, then the restriction
f|Z of f on Z is the function defined only for elements from Z and
f|Z (z) = f (z) for each element z from Z.
If U is a correspondence of a set X to a set Y (a binary relation
between X and Y ), i.e., U ⊆ X ×Y , then U (x) = {y ∈ Y ; (x, y) ∈ U }
and U −1 (y) = {x ∈ X; (x, y) ∈ U }.
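These set-theoretic constructions can be sketched directly; the sets S and Z, the function f, and the correspondence U below are invented examples.

```python
# Characteristic functions, restriction, and correspondences in Python.
# All concrete sets and functions here are illustrative.

S = {1, 2, 3}

def chi_S(x):
    """Characteristic (indicator) function of S."""
    return 1 if x in S else 0

def C_S(x):
    """Partial characteristic function of S: undefined outside S."""
    if x in S:
        return 1
    raise ValueError("undefined outside S")

def f(x):
    return x * x

Z = {2, 3}
f_restricted = {z: f(z) for z in Z}   # the restriction f|Z as a table

# A correspondence U between X and Y, with U(x) and its inverse image.
U = {(1, "a"), (1, "b"), (2, "a")}
U_of = lambda x: {y for (u, y) in U if u == x}
U_inv = lambda y: {x for (x, v) in U if v == y}

assert chi_S(2) == 1 and chi_S(7) == 0
assert f_restricted == {2: 4, 3: 9}
assert U_of(1) == {"a", "b"} and U_inv("a") == {1, 2}
```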
An n-ary relation R in a set X is a subset of the nth power of X,
i.e., R ⊆ X n . If (a1 , a2 , . . . , an ) ∈ R, then one says that the elements
a1 , a2 , . . . , an from X are in relation R.
Let X be a set. An integral operation W on the set X is a mapping that assigns to a subset of X an element from X such that W({x}) = x for any x ∈ X.
Examples of integral operations are: sums, products, taking min-
imum, taking maximum, taking infimum, taking supremum, integra-
tion, taking the first element from a given subset, taking the sum of
the first and second elements from a given subset, and so on.
Examples of finite integral operations defined for numbers are:
sums, products, taking minimum, taking maximum, taking average,
weighted average, taking the first element from a given subset, and
so on.
As a rule, integral operations are partial, that is, they assign val-
ues, e.g., numbers, only to some subsets of X.
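A small illustration of integral operations, using the built-in sum, min, and max on finite sets of numbers; each satisfies the defining condition W({x}) = x, and the sum-based ones are partial in general (undefined, e.g., on infinite subsets).

```python
# Integral operations sketched as functions from finite subsets of X
# (here, subsets of the real numbers) to elements of X.

W_min = min
W_max = max
W_sum = sum

for W in (W_min, W_max, W_sum):
    assert W({7}) == 7            # the defining condition W({x}) = x

assert W_min({3, 1, 4}) == 1
assert W_max({3, 1, 4}) == 4
assert W_sum({3, 1, 4}) == 8
```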
Essence 1 →correspondence→ Essence 2. (2)
take as X and Y the set of all words in some alphabet Q and all Tur-
ing machines that work with words in the alphabet Q as algorithms.
Then the theory of algorithms tells us that there are many more algorithmic named sets than relations (set-theoretical named sets) because several different algorithms (e.g., Turing machines) can define the same function or relation. Thus, algorithmic named sets are different
from set theoretical named sets.
Mereological named sets are essentially different from set theo-
retical named sets (Leśniewski, 1916; 1992; Leonard and Goodman,
1940; Burgin, 2011). Categorical named sets (fundamental triads)
also are different from set theoretical named sets. For instance, an
arrow in a category is a fundamental triad but does not include sets
as components (Herrlich and Strecker, 1973). Named sets with phys-
ical components, such as (a woman and her name), (an article and
its title), (a book and its title) and many others, are far from being
set theoretical.
People meet fundamental triads (named sets) constantly in their
everyday life. People and their names constitute a named set. Cars
and their owners constitute another named set. Books and their
authors constitute one more named set. A different example of a
named set (fundamental triad) is given by the traditional scheme of
communication:
Sender → Receiver.
This explains the name “named set” that has been applied to this
structure (Burgin, 1990; 1991; 2011). A standard model of a named
set is given by a set of people constituting the carrier, their names forming the set of names, and the naming relation, which consists of the correspondence between people and their names.
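This standard model can be sketched as a triple (carrier, naming relation, set of names); the people and names below are invented.

```python
# The standard model of a named set, with hypothetical people and names.

carrier = {"person1", "person2", "person3"}
names = {"Alice", "Bob"}
naming = {("person1", "Alice"), ("person2", "Bob"), ("person3", "Alice")}

named_set = (carrier, naming, names)

# Every pair in the naming relation connects a carrier element to a name.
assert all(p in carrier and n in names for (p, n) in naming)

# A singlenamed set: all elements carry the same name, which is how an
# ordinary set appears as a named set.
singlenamed = (carrier, {(p, "element") for p in carrier}, {"element"})
assert {n for (_, n) in singlenamed[1]} == {"element"}
```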
Many mathematical systems are particular cases of named sets or
fundamental triads. The most important of such systems are fuzzy
sets (Zadeh, 1965; 1973; Zimmermann, 1991), multisets (Knuth,
1997), graphs and hypergraphs (Berge, 1973), topological and fiber
bundles (Husemoller, 1994; Herrlich and Strecker, 1973). Moreover,
any ordinary set is, as a matter of fact, some named set, and namely,
a singlenamed set, i.e., such a named set in which all elements have
the same name (Burgin, 2011). It is interesting that utilization of
singlenamed sets instead of ordinary sets allows one to solve some
problems in science, e.g., Gregg’s paradox in biology (Ruse, 1973;
Burgin, 1983).
It is possible to find many named sets in physics. For instance,
according to particle physics, any particle has a corresponding
antiparticle, e.g., electron corresponds to positron, while proton cor-
responds to antiproton. Thus, we have a named set with particles as
its support and antiparticles as its set of names. A particle and its
antiparticle have identical mass and spin, but have opposite value
for all other non-zero quantum number labels. These labels are electric charge, color charge, flavor, electron number, muon number, tau number, and baryon number. Particles and their quantum number
labels form another named set with particles as its support and quan-
tum number labels as its set of names.
When we study information and information processes, fundamental triads become extremely important. Each direction in information theory has fundamental concepts that are, or models of which are, either fundamental triads or systems built of fundamental triads.
the receiver/recipient introduced in the Ontological Principle O1
is a fundamental triad, while the Ontological Principles O4 and
O4a introduce the interaction and second communication triads
(Burgin, 2010). The first communication triad is basic in statistical
information theory (Chapter 3). A special case of named sets is given by Chu spaces, used in information studies, introduced by Barr (1979) and theoretically developed by Chu.
A Chu space C over a set B is a triad (A, r, X) with r : A × X →
B. The set A is called the carrier and X the cocarrier of the Chu
space C. Matrices are often used to represent Chu spaces. Note that
in the form (A, r, X), Chu space is a triad but not a fundamental
triad. At the same time, in the more complete form (A × X, r, B),
Chu space is a particular case of fundamental triads.
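A small invented Chu space over B = {0, 1} and its matrix representation make the two forms concrete.

```python
# A small Chu space C = (A, r, X) over B = {0, 1}, written as a matrix:
# rows indexed by the carrier A, columns by the cocarrier X.
# The concrete sets are hypothetical.

A = ["a1", "a2"]          # carrier
X = ["x1", "x2", "x3"]    # cocarrier
B = {0, 1}

r = {("a1", "x1"): 1, ("a1", "x2"): 0, ("a1", "x3"): 1,
     ("a2", "x1"): 0, ("a2", "x2"): 1, ("a2", "x3"): 1}

# The matrix representation mentioned in the text.
matrix = [[r[(a, x)] for x in X] for a in A]
assert matrix == [[1, 0, 1], [0, 1, 1]]

# In the form (A x X, r, B), the Chu space is a fundamental triad:
# a correspondence from the set A x X to the set B.
assert all(v in B for v in r.values())
```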
One more example of named sets is given by operands from the
multidimensional structured model of computer systems and compu-
tations (Burgin and Karasik, 1976; Burgin, 1976; 1982).
An operand Q is a triad (Z, r, C) where Z = Z1 × Z2 × · · · × Zn, each Zi is a subset of the set of all integers, r : Z × Z → C and C is a set. The set Z is called the support of the operand Q.
Taking an operand with n = 2 and arbitrary sets Z1 and Z2 , we
get the concept of a valued relation (Dukhovny and Ovchinnikov,
2000; Frascella and Guido, 2008).
An operator A in the multidimensional structured model of com-
puter systems and computations is a mapping
A : Ql → Qh, where Qk is the set of all k-tuples (Q1, Q2, . . . , Qk) of operands Qi (i = 1, 2, . . . , k; k ∈ {l, h}).
Many constructions, such as valued relations, fuzzy relations
(Salii, 1965), classifications in the sense of (Barwise and Seligman,
1997), and Chu spaces, are particular cases of such operands.
Named sets are explicitly used in many areas: in models of com-
puters and computation (Burgin and Karasik, 1976), artificial intel-
ligence (Burgin and Gladun, 1989; Burgin and Gorsky, 1991; Bur-
gin and Kuznetsov, 1992), mathematical linguistics (Burgin and
Burgina, 1982), software engineering (Browne et al., 1995) and
Internet technology (Balakrishnan et al., 2004; Cunnigham, 2004).
Set-theoretical named sets are very popular in databases and knowl-
edge engineering due to the fact that both binary relations and hier-
archical structures are specific kinds of named sets. Even images of
(Burgin, 2005) have type RN and store real numbers. When dif-
ferent kinds of devices are combined into one, this new device has
several types of memory cells. In addition, different types of cells
facilitate modeling the brain neuron structure by inductive Turing
machines.
It is possible to realize an arbitrary structured memory of an
inductive Turing machine M , using only one linear one-sided tape L.
To do this, the cells of L are enumerated in the natural order from
the first one to infinity. Then L is decomposed into three parts
according to the input and output registers and the working memory
of M . After this, nonlinear connections between cells are installed.
When an inductive Turing machine with this memory works, the
head/processor is not moving only to the right or to the left cell
from a given cell, but uses the installed nonlinear connections.
Such realization of the structured memory allows us to consider an
inductive Turing machine with a structured memory as an inductive
Turing machine with conventional tapes in which additional connec-
tions are established. This approach has many advantages. One of
them is that inductive Turing machines with a structured memory
can be treated as multitape automata that have additional struc-
ture on their tapes. Then it is conceivable to study different ways to
construct this structure. In addition, this representation of memory
allows us to consider any configuration in the structured memory E
as a word written on this unstructured tape.
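The linearization described above can be sketched roughly; the partition of cells and the installed connections below are illustrative assumptions, not the book's exact construction.

```python
# Sketch of realizing a structured memory on one one-sided linear tape:
# cells are enumerated, split into three parts (input, output, working),
# and nonlinear connections between cells are stored separately.

N = 12                       # a finite prefix of the infinite tape
tape = [None] * N

# Decompose the tape: input register, output register, working memory
# (the interleaving rule here is a hypothetical choice).
part_of = lambda i: ("input" if i % 3 == 0 else
                     "output" if i % 3 == 1 else "working")

# Nonlinear connections installed between cells (hypothetical).
connections = {0: 5, 5: 2, 2: 9}

def step(cell):
    """Move the head along an installed connection if one exists,
    otherwise to the right neighbour, as on an ordinary tape."""
    return connections.get(cell, cell + 1)

# The head can jump nonlinearly (0 -> 5) or move conventionally (3 -> 4).
assert step(0) == 5 and step(3) == 4
assert part_of(0) == "input" and part_of(4) == "output"
```

The point of the sketch is only that one unstructured tape plus a table of extra connections suffices to simulate a structured memory.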
If we look at other devices of the inductive Turing machine M ,
we can see that the processor H performs information processing
in M . However, in comparison to computers, this operational device
performs very simple operations. When H consists of one unit, it can
change a symbol in the cell that is observed by H, and go from this
cell to another using a connection from K. This is exactly what the
head of a Turing machine does.
It is possible that the processor H consists of several processing
units similar to heads of a multihead Turing machine. This allows
one to model in a natural way various real and abstract computing
systems by inductive Turing machines. Examples of such systems
are: multiprocessor computers; Turing machines with several tapes;
Commutativity of addition: a + b = b + a;
Associativity of addition: (a + b) + c = a+ (b + c);
Commutativity of multiplication: a · b = b · a;
Associativity of multiplication: (a · b) · c = a · (b · c);
Distributivity of multiplication with respect to addition:
a · (b + c) = a · b + a · c.
Zero is a neutral element with respect to addition:
a + 0 = 0 + a = a.
One is a neutral element with respect to multiplication:
a · 1 = 1 · a = a.
A function with R as its range is called a real function or a real-
valued function.
A function with C as its range is called a complex function or a
complex-valued function.
That is, the distance from x through z to y is never less than the
distance directly from x to y, or the shortest distance between any
two points is a straight line.
A set X with a metric d is called a metric space. The number
d(x, y) is called the distance between x and y in the metric space X.
For instance, in the set R of all real numbers, the distance d(x, y) between numbers x and y is the absolute value |x − y|, i.e., d(x, y) = |x − y|. This metric defines the following topology in R. If a is a point
N1. ‖x‖ = 0 if and only if x = 0, i.e., the zero vector has zero length, while any other vector has a positive length.
N2. For any positive number a from R, we have ‖ax‖ = a‖x‖, i.e., multiplying a vector by a positive number has the same effect on its length.
i.e., the norm of a sum of vectors is never larger than the sum of their norms.
This implies: ‖x‖ − ‖y‖ ≤ ‖x + y‖.
A vector space L with a norm is called a normed vector space or
simply, a normed space.
The space Rn is a normed vector space with the following norm: if x ∈ Rn and x = a1x1 + a2x2 + · · · + anxn, where the xi are elements from a basis B of the space Rn, then ‖x‖ = √(a1² + a2² + · · · + an²).
Proposition D.1. Any normed vector space is a metric space.
Indeed, we can define d(x, y) = ‖x − y‖ and check that all axioms of metric are valid for this distance.
Another natural metric in a normed vector space is the British Rail metric (also called the Post Office metric or the SNCF metric), given by d(x, y) = ‖x‖ + ‖y‖ for distinct vectors x and y, and d(x, x) = 0.
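Proposition D.1 and the British Rail metric can be checked numerically for the Euclidean norm on R²; the sample vectors are arbitrary.

```python
# Numerical check that a norm induces a metric, using the Euclidean
# norm on R^2, with the British Rail metric for comparison.
from math import hypot, isclose

norm = lambda x: hypot(x[0], x[1])
sub = lambda x, y: (x[0] - y[0], x[1] - y[1])

d = lambda x, y: norm(sub(x, y))                          # induced metric
d_rail = lambda x, y: 0 if x == y else norm(x) + norm(y)  # British Rail

x, y, z = (3.0, 4.0), (0.0, 1.0), (1.0, 1.0)

assert isclose(norm(x), 5.0)
assert d(x, x) == 0 and isclose(d(x, y), d(y, x))
assert d(x, y) <= d(x, z) + d(z, y)                  # triangle inequality
assert d_rail(x, y) == norm(x) + norm(y)             # travel via the hub
```

The British Rail metric forces every trip between distinct points through the origin (the "hub"), which is why it is defined directly from the two norms.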
A Hilbert space is an abstract linear (vector) space over the field
R of real numbers or the field C of complex numbers with an inner
product and complete as a metric space.
An inner product in a vector space V is a function ⟨·, ·⟩ : V × V → R (or, in the complex case, ⟨·, ·⟩ : V × V → C) that satisfies the following properties for all vectors x, y, and z from V and any number a:
Conjugate symmetry: ⟨x, y⟩ is the complex conjugate of ⟨y, x⟩.
Linearity in the first argument: ⟨ax, y⟩ = a⟨x, y⟩, ⟨x + z, y⟩ = ⟨x, y⟩ + ⟨z, y⟩.
Positive-definiteness: ⟨x, x⟩ > 0 for all x ≠ 0 from V.
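The inner-product axioms can be checked numerically for the standard dot product on R³ (over R, conjugation is trivial); the sample vectors are arbitrary.

```python
# The standard dot product on R^3 as an inner product, with the three
# properties checked numerically.
from math import isclose

inner = lambda x, y: sum(a * b for a, b in zip(x, y))
x, y, z = (1.0, 2.0, -1.0), (0.5, 1.0, 3.0), (2.0, 0.0, 1.0)
a = 2.5

assert isclose(inner(x, y), inner(y, x))                    # symmetry
assert isclose(inner(tuple(a * c for c in x), y), a * inner(x, y))
assert isclose(inner(tuple(p + q for p, q in zip(x, z)), y),
               inner(x, y) + inner(z, y))                   # additivity
assert inner(x, x) > 0                                      # positive-definite
```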
Bibliography
Foucault, M. (1966) Les mots et les Choses — une archéologie des sciences
humaines, Gallimard, Paris.
Fraenkel, A. A. and Bar-Hillel, Y. (1958) Foundations of Set Theory, North
Holland P.C., Amsterdam.
Frank, P. G. (Ed.) (1956) The Validation of Scientific Theories, Beacon
Press, Boston.
Frascella, A. and Guido, C. (2008) Transporting many-valued sets along
many-valued relations, Fuzzy Sets and Systems, v. 159, No. 1, pp. 1–22.
Frawley, W. J., Piatetsky-Shapiro, G. and Matheus, C. (1991) Knowl-
edge Discovery in Databases: An Overview, in Knowledge Discovery
in Databases, AAAI Press/MIT Press, Cambridge, MA, pp. 1–30.
Fredkin, E. and Toffoli, T. (1982) Conservative logic, International Journal
of Theoretical Physics, v. 21, No. 3–4, pp. 219–253.
Freeman, C. (1982) The Economics of Industrial Innovation, Penguin,
Harmondsworth.
Freeman, A. and DeWolf, R. (1992) The 10 Dumbest Mistakes Smart People
Make and How to Avoid Them, Harper Collins Publ., New York, NY, US.
Frenken, K. (2005) Innovation, Evolution and Complexity Theory, Edward
Elgar, Cheltenham, UK/Northampton, MA.
Fricke, M. (2008) The Knowledge Pyramid: A Critique of the DIKW Hierarchy, Preprint (electronic edition: http://dlist.sir.arizona.edu/2327/).
Frieden, B. R. (1998) Physics from Fisher Information, Cambridge University Press, Cambridge.
Frieden, B. R. (2004) Science from Fisher Information: A Unification,
Cambridge University Press, Cambridge.
Frieden, B. R. and Soffer, B. H. (1995) Lagrangians of physics and the game of Fisher-information transfer, Physical Review E, v. 52, p. 2274.
Frieden, B. R., Plastino, A. and Soffer, B. H. (2001) Population genetics
from an information perspective, Journal of Theoretical Biology, v. 208,
pp. 49–64.
Friedman, N. and Halpern, J. Y. (1994) A knowledge-based framework
for belief change, part II: Revision and update, Proc. of the Fourth
International Conference on the Principles of Knowledge Representa-
tion and Reasoning (KR’94), pp. 190–200.
Frege, G. (1891) Funktion und Begriff, Hermann Pohle, Jena.
Frege, G. (1892) Über Begriff und Gegenstand, Vierteljahrsschrift für wis-
senschaftliche Philosophie, v. 16, pp. 192–205.
Frege, G. (1892a) Über Sinn und Bedeutung, Zeitschrift für Philosophie
und philosophische Kritik, v. 100, pp. 25–50.
Friedman, N. and Halpern, J. Y. (1994) A knowledge-based framework
for belief change, part II: Revision and update, Proc. of the Fourth
Harel, D., Kozen, D. and Tiuryn, J. (2000) Dynamic Logic, MIT Press,
Cambridge, MA.
Harland, W. B., Armstrong, R. L., Cox, A. V., Craig, L. E., Smith, A. G.
and Smith, D. G. (1990) A Geologic Time Scale, Cambridge University
Press, Cambridge.
Harrah, D. (2002) The logic of questions, in Handbook of Philosophical
Logic, v. 8, Kluwer, Dordrecht/Boston/London, pp. 1–60.
Harris, J. (1988) Developments in algebraic geometry, Proc. of the
AMS Centennial Symposium, A.M.S. Publications, Providence,
pp. 89–100.
Hartley, R. T. and Barnden, J. A. (1997) Semantic networks: visualizations
of knowledge, Trends in Cognitive Sciences, v. 1, No. 5, pp. 169–175.
Haug, E. G. (2004) Why so negative to negative probabilities, Wilmott
Magazine, Sep/Oct, pp. 34–38.
Hawking, S. W. (1988) A Brief History of Time: From the Big Bang to
Black Holes, Bantam Books, Toronto/New York/London.
Hawley, K. (2003) Success and knowledge-how, American Philosophical
Quarterly, v. 40, No. 1, pp. 19–31.
Hawthorne, J. and Stanley, J. (2008) Knowledge and action, Journal of
Philosophy, v. 105, No. 10, pp. 571–590.
Hayakawa, S. I. (1949) Language in Thought and Action, Harcourt, Brace
and Co., New York.
Hayakawa, S. I. (1963) Symbol, Status, and Personality, Harcourt, Brace &
World, New York.
Hayakawa, S. I. (1979) Through the Communication Barrier: On Speaking,
Listening, and Understanding, Harper & Row, New York.
Hayakawa, S. I. (Ed.) (1964) The Use and Misuse of Language, Fawcett
Publications, Greenwich, CT.
Hayakawa, S. I. (Ed.) (1971) Our Language and Our World, Books for
Libraries Press, Freeport, NY.
Head, H. and Holmes, G. (1911) Sensory disturbances from cerebral lesions,
Brain, v. 34, pp. 102–254.
Heather, M. and Rossiter, N. (2009) Fragmentary structure of global knowl-
edge: constructive processes for interoperability, Kybernetes, v. 38,
No. 7/8, pp. 1409–1418.
Hecht, M., Maier, R., Seeber, I. and Waldhart, G. (2011) Fostering adop-
tion, acceptance, and assimilation in knowledge management system
design, Proc. of the 11th International Conference on Knowledge Man-
agement and Knowledge Technologies, Graz, ACM Digital Library, New
York, pp. 1–8.
Hegel, G. W. F. (1813) Wissenschaft der Logik, bd. 1&2, Schrag, Nürnberg.
Heisenberg, W. (1931) Über die inkohärente Streuung von Röntgenstrahlen,
Physik. Zeitschr., v. 32, pp. 737–740.
McCarthy, H., Miller, P. and Sidmore, P. (Eds.) (2004) Network Logic: Who
Governs in an Interconnected World? Demos, London, UK.
McCulloch, G. (1989) The Game of the Name: Introducing Logic, Language
and Mind, Oxford University Press, Oxford.
McDermott, D. and Doyle, J. (1980) Non-monotonic logic I, Artificial Intel-
ligence, v. 13, pp. 41–72.
McGrath, M. (2014) Propositions, in The Stanford Encyclopedia of Philos-
ophy, E. N. Zalta (Ed.), (http://plato.stanford.edu/archives/spr2014/
entries/propositions/).
McIlraith, S. and Amir, E. (2001) Theorem proving with structured theo-
ries, Proc. of the 17th Int’l Joint Conference on Artificial Intelligence
(IJCAI’01), pp. 624–631.
McKeen, J. D. and Staples, D. S. (2002) Knowledge managers: Who they are
and what they do?, in Handbook on Knowledge Management, Springer-
Verlag, New York.
McNeill, D. and Freiberger, P. (1993) Fuzzy Logic, Simon and Schuster,
New York.
Meadow, C. T. and Yuan, W. (1997) Measuring the impact of informa-
tion: Defining the concepts, Information Processing and Management,
v. 33, No. 6, pp. 697–714.
Medin, D. L. and Ortony, A. (1989) Psychological essentialism, in Simi-
larity and Analogical Reasoning, S. Vosniadou and A. Ortony (Eds.),
Cambridge University Press, Cambridge, pp. 179–195.
Medin, D. L. and Shoben, E. J. (1988) Context and structure in conceptual
combination, Cognitive Psychology, v. 20, pp. 158–190.
Meinke, K. and Tucker, J. V. (Eds.) (1993) Many-sorted Logic and Its Appli-
cations, John Wiley & Sons, Inc., New York, NY.
Meinong, A. (1904) Über Gegenstandstheorie, in Untersuchungen zur
Gegenstandstheorie und Psychologie, J. A. Barth, Leipzig, pp. 1–51.
Meinong, A. (Ed.) (1904a) Untersuchungen zur Gegenstandstheorie und
Psychologie, J. A. Barth, Leipzig.
Meinong, A. (1907) Über die Stellung der Gegenstandstheorie im System
der Wissenschaften, R. Voigtländer, Leipzig.
Meinong, A. (1910) Über Annahmen, J. A. Barth, Leipzig, Germany.
Mendelson, E. (1997) Introduction to Mathematical Logic, Chapman & Hall,
London.
Menzel, C. (1986) A Complete Type-free ‘Second Order’ Logic and
its Philosophical Foundations, Center for the Study of Language
and Information, Technical Report #CSLI-86-40, Stanford Univer-
sity, CA.
Menzel, C. (1993) The proper treatment of predication in fine-grained inten-
sional logic, Philosophical Perspectives, v. 7, pp. 61–87.
Rabinovitch, N. L. (1970) Rabbi Levi Ben Gershon and the origins of math-
ematical induction, Archive for History of Exact Sciences, v. 6, No. 3,
pp. 237–248.
Rao, V. S. (1998) Theories of Knowledge: Its Validity and its Sources, Sri
Satguru Publications, Delhi, India.
Rashed, R. (1994) The Development of Arabic Mathematics: Between Arith-
metic and Algebra, Boston Studies in the Philosophy of Science, v. 156,
Kluwer Academic Publishers, Dordrecht.
Rational Software (1997) UML Semantics, (http://www.rational.com/
media/uml/resources/media/ad970804_UML11_Semantics2.pdf).
Read, S. (1988) Relevant Logic, Blackwell, Oxford.
Reading, A. (2006) The biological nature of meaningful information, Bio-
logical Theory, v. 1, No. 3, pp. 243–249.
Reeves, A. M., Beamish, N. L., Anderson, R. B. and Buel, J. W. (1906)
The Norse Discovery of America, Norroena Society, London/
Stockholm/Copenhagen/Berlin/New York.
Reichenbach, H. (1932) Axiomatik der Wahrscheinlichkeitsrechnung, Math-
ematische Zeitschrift, v. 34, No. 1, pp. 568–619.
Reichenbach, H. (1935) Wahrscheinlichkeitslehre: eine Untersuchung über
die logischen und mathematischen Grundlagen der Wahrscheinlichkeit-
srechnung, Sijthoff, Leyden.
Reichenbach, H. (1947) Elements of Symbolic Logic, Macmillan, New York.
Reichenbach, H. (1949) Experience and Prediction, University of Chicago
Press, Chicago.
Reichenbach, H. (1949a) The Theory of Probability, University of California
Press, Berkeley.
Reid, L. A. (1985) Art and knowledge, British Journal of Aesthetics, v. 25,
pp. 115–224.
Reiter, R. (1980) A logic for default reasoning, Artificial Intelligence, v. 13,
pp. 81–132.
Renzl, B. (2002) Facilitating knowledge sharing and knowledge creation
through interaction analyses, Proc. of the 3rd European Conference on
Organizational Knowledge, Learning, and Capabilities, Alba, Athens,
Greece.
Renzl, B. (2007) Language as a vehicle of knowing: The role of language and
meaning in constructing knowledge, Knowledge Management Research
& Practice, v. 5, pp. 44–53.
Rescher, N. (1976) Plausible Reasoning: An Introduction to the Theory and
Practice of Plausibilistic Inference, Van Gorcum, Assen, Amsterdam.
Rescher, N. and Manor, R. (1970) On inference from inconsistent premises,
Theory Decision, v. 1, No. 2, pp. 179–217.
Russell, B. (1948) Human Knowledge: Its Scope and Limits, Simon and
Schuster, New York.
Russell, P. (1992) The Brain Book, Penguin Books, London, UK.
Russell, S. (2014) Unifying Logic and Probability: A New Dawn for AI?, in
Information Processing and Management of Uncertainty in Knowledge-
Based Systems, Communications in Computer and Information Science,
v. 442, pp. 10–14.
Ryle, G. (1949) The Concept of Mind, University of Chicago Press, Chicago.
Ryle, G. (1971/1946) Knowing How and Knowing That, in Collected
Papers, v. 2, Barnes and Noble, New York, pp. 212–225.
Ryle, G. (1957) The Theory of Meaning, Allen & Unwin, London.
Sagan, H. (1992) Introduction to the Calculus of Variations, Dover, New York.
Salii, V. N. (1965) Binary L-relations, Izv. Vyssh. Uchebn. Zaved., Matem-
atika, v. 44, No. 1, pp. 133–145 (in Russian).
Sassone, V., Nielsen, M. and Winskel, G. (1996) Models for concurrency:
Towards a classification, Theoretical Computer Science, v. 170, Nos. 1–
2, pp. 297–348.
Satoh, K. (1988) Nonmonotonic reasoning by minimal belief revision, Proc.
of the International Conference on Fifth Generation Computer Systems
(FGCS’88), pp. 455–462.
Sauer, T. (2006) Numerical Analysis, Pearson Education, Inc., Boston.
de Saussure, F. (1916) Cours de linguistique générale, ed. C. Bally and
A. Sechehaye, with the collaboration of A. Riedlinger, Payot, Lausanne
and Paris (English translation by W. Baskin: Course in General Lin-
guistics, Fontana/Collins, Glasgow, 1977).
de Saussure, F. (1916a) Nature of the Linguistic Sign, in Cours de linguis-
tique générale, McGraw Hill Education.
Sax, G. (2010) Having Know-How: Intellect, Action, and Recent Work
on Ryle’s Distinction Between Knowledge-How and Knowledge-That,
Pacific Philosophical Quarterly, v. 91, No. 4, pp. 507–530.
Scaruffi, P. (2011) A Brief History of Knowledge, Amazon, Kindle edition.
Schaerf, M. and Cadoli, M. (1995) Tractable reasoning via approximation,
Artificial Intelligence, v. 74, pp. 249–310.
Schank, R. C. (Ed.) (1975) Conceptual Information Processing, North-
Holland Publishing Co., Amsterdam.
Schank, R. C. (1982) Dynamic Memory, Cambridge University Press, New
York.
Schank, R. C. (1991) Tell Me a Story: A New Look at Real and Artificial
Intelligence, Simon & Schuster, New York.
Schank, R. C. and Abelson, R. P. (1977) Scripts, Plans, Goals and Under-
standing, Lawrence Erlbaum Associates, Hillsdale, NJ.
Smolin, L. (1999) The Life of the Cosmos, Oxford University Press, Oxford/
New York.
Smullyan, R. (1978) What is the Name of this Book? Prentice Hall, Engle-
wood Cliffs, NJ.
Smullyan, R. M. (1962) Theory of Formal Systems, Princeton University
Press, Princeton.
Sneed, J. D. (1971) The Logical Structure of Mathematical Physics, D. Rei-
del Publishing Company, Dordrecht.
Snodgrass, R. T. and Jensen, C. S. (1999) Developing Time-Oriented
Database Applications in SQL, Morgan Kaufmann, San Mateo, CA.
Snowdon, P. (2003) Knowing how and knowing that: A distinction
reconsidered, Proceedings of The Aristotelian Society, v. 104, No. 1,
pp. 1–29.
Sober, E. (1991) Core Questions in Philosophy, Macmillan Publishing Co.,
New York.
Solomonoff, R. (1964) A formal theory of inductive inference, Information
and Control, v. 7, No. 1 (Part I), pp. 1–22; No. 2 (Part II), pp. 224–254.
Solovay, R. M. (1976) Provability interpretations of modal logic, Israel Jour-
nal of Mathematics, v. 25, pp. 287–304.
Sorensen, R. (1992) Thought Experiments, Oxford University Press, New
York.
Sosa, D. (2006) Scepticism about intuition, Philosophy: The Journal of the
Royal Institute of Philosophy, v. 81, pp. 633–647.
Sowa, J. F. (1976) Conceptual graphs for a database interface, IBM Journal
of Research and Development, v. 20, No. 4, pp. 336–357.
Sowa, J. F. (1984) Conceptual Structures: Information Processing in Mind
and Machine, Addison-Wesley, Reading, MA.
Sowa, J. F. (1987) Semantic networks, Encyclopedia of Artificial Intelli-
gence, Wiley, New York.
Sowa, J. F. (Ed.) (1991) Principles of Semantic Networks: Explorations in
the Representation of Knowledge, Morgan Kaufmann Publishers, San
Mateo, CA.
Sowa, J. F. (2000) Knowledge Representation: Logical, Philosophical,
and Computational Foundations, Brooks/Cole Publishing Co., Pacific
Grove, CA.
Sowa, J. F. (2000a) Ontology, metadata, and semiotics, in Conceptual Struc-
tures: Logical, Linguistic, and Computational Issues, B. Ganter and
G. W. Mineau (Eds.), Lecture Notes in AI, v. 1867, Springer-Verlag,
Berlin, pp. 55–81.
Spanier, E. H. (1966) Algebraic Topology, Springer-Verlag, New York/
Heidelberg/Berlin.
Index
A
Abstract, 28, 30, 41, 52, 138
  representation, 160
Abstraction, 28, 65, 138, 175
  ladder, 138
Abstractness, 95, 137
  level of, 137
Action, 14, 21, 49
  structure, 820
Acquisition, x, 6
Activity, 15, 17
  economic, 7
  intellectual, xv
  mental, 55
  practical, 55
Adaptation, 537
Adequate, 561
Aggregate, 82
Algebra, 422, 601
  of sets, 236
  Boolean, 439
  epistemic quantum, 369
  information, 192
  knowledge, 358
  Lie, 420
  linear, 827
  multibase, 359
  process, 420
  relational, 153
  universal, 192
Algorithm, 49–50, 157, 424–425
  constructive, 487
  data-mining, 693
  deduction, 279
  first-level, 160
  generating, 403
  genetic, 694
  learning, 163
  local search, 163
  measuring, 220
  network, 759
  nondeterministic, 63
  optimization, 163
  partial search, 163
  power of, 304
  probabilistic, 63
  quantum, 77
  randomized, 64
  recursive, 74
  second-level, 160–162
  super-recursive, 220
  symbolic, 54
  wired, 54
Algorithmic
  information theory, 131
  complexity, 96, 131
  ladder, 304
Problem
  decidable, 512
  halting, 128, 565
  undecidable, 301
Procedural programming, 426
Procedure, 461
Process, 11, 48
  algorithmic, 125
  business, 56
  cognitive, 29
  complex, 796
  dynamic, 78
  integration, 125
  problem-solving, 67
  thermonuclear, 68
Processor, 821
Product, 4, 363
Production, 427–428
Program, 126–127, 131
Property
  abstract, 41, 91
  ascribed, 113, 311–312
  contextual, 92
  descriptive, 91
  existential, 40
  intellectual, 4
  intrinsic, 113, 312
  of knowledge, 74, 101, 133
  natural, 311
  relational, 92
Proposition, 176
Psychology
  social, 85

Q
Quality
  primary, 312
  secondary, 312
Quantifier
  existential, 486
  universal, 486
Quantum, 316
Query, 303, 481
Question, 481, 600

R
Range, 558
Real number, 132, 214, 315, 774, 832–833
Realizability, 265
Recursion, 172, 661
Recursive relation, 351
Reduction, 301, 367
Relation, 147
Relationship, 275
Relevance, 96
Reliability, 95
Representation
  of a language, 141
  operational, 187
Representational type, 217
Resource, 4, 127
Restriction, 392
Result
  final, 530
  of acceptation, 796
  of computation, 825
Revision, 634
Robustness, 124
Rule
  construction, 160
  data transformation, 159
  deduction, 555
  derivation, 465
  execution, 159
  formation, 487
  logical, 493, 495
  midot, 433
  of a Turing machine, 158
  of correspondence, 422, 599
  of inference, 265, 284, 402
  of interpretation, 429
  of propagation, 465
  production, 487
  syntactic, 258, 427

S
Scale, 170
Scenario, 624
W
Weight, 208
Wisdom, 598