One of the earliest models in information theory is the model of communication by
Shannon and Weaver. Claude Shannon, an engineer at Bell Telephone Laboratories,
worked with Warren Weaver on the classic book ‘The Mathematical Theory of
Communication’. In this work, Shannon and Weaver sought to identify the quickest and
most efficient way to get a message from one point to another. Their goal was to discover
how communication messages could be converted into electronic signals most efficiently,
and how those signals could be transmitted with a minimum of error. In studying this,
Shannon and Weaver developed a mechanical and mathematical model of
communication, known as the “Shannon and Weaver model of communication”.
Conceptual Model
Studies of the Shannon and Weaver model take two major orientations. One stresses
the engineering principles of transmission and perception (in the electronic sciences). The
other considers how people are able or unable to communicate accurately
because they have different experiences and attitudes (in the social sciences).
2. Algorithmic information theory
Overview
This use of the term "information" might be a bit misleading, as it depends upon the
concept of compressibility. Informally, from the point of view of algorithmic information
theory, the information content of a string is equivalent to the length of the shortest
possible self-contained representation of that string. A self-contained representation is
essentially a program – in some fixed but otherwise irrelevant universal programming
language – that, when run, outputs the original string.
From this point of view, a 3000 page encyclopedia actually contains less information than
3000 pages of completely random letters, despite the fact that the encyclopedia is much
more useful. This is because to reconstruct the entire sequence of random letters, one
must know, more or less, what every single letter is. On the other hand, if every vowel
were removed from the encyclopedia, someone with reasonable knowledge of the English
language could reconstruct it, just as one could likely reconstruct the sentence "Ths sntnc
hs lw nfrmtn cntnt" from the context and consonants present. For this reason, high-
information strings and sequences are sometimes called "random"; people also sometimes
attempt to distinguish between "information" and "useful information" and attempt to
provide rigorous definitions for the latter, with the idea that the random letters may have
more information than the encyclopedia, but the encyclopedia has more "useful"
information.
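The compressibility intuition behind this can be checked with an ordinary compressor. The sketch below is a rough illustration in Python (an assumed choice of language and tool); the compressed size is only an upper bound on the information content, not the formal quantity itself, but it separates the two cases:

import random
import string
import zlib

# A highly regular text versus pure random letters, both 10,000 characters long.
regular_text = "ab" * 5000
random_text = "".join(random.choices(string.ascii_lowercase, k=10000))

# The regular text compresses to a few dozen bytes; the random text stays
# close to its original size, so in the algorithmic sense it carries more
# "information" even though it is useless as ordinary text.
print(len(zlib.compress(regular_text.encode())))   # on the order of tens of bytes
print(len(zlib.compress(random_text.encode())))    # on the order of thousands of bytes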
History
Algorithmic information theory was founded by Ray Solomonoff[2], who published the
basic ideas on which the field is based as part of his invention of algorithmic probability,
a way to overcome serious problems associated with the application of Bayes' rule in
statistics. He first described his results at a conference at Caltech in 1960,[3] and in a
February 1960 report, "A Preliminary Report on a General Theory of Inductive Inference".[4]
Algorithmic information theory was later developed independently by Andrey
Kolmogorov in 1965 and by Gregory Chaitin around 1966.
Specific Sequence
Consider the following two strings, each 64 characters long:
"0101010101010101010101010101010101010101010101010101010101010101"
"1100100001100001110111101110110011111010010000100101011110010110"
The first string admits a short description, namely "32 repetitions of '01'". The second
string presumably has no simple description other than writing down the string itself.
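As a minimal sketch (using Python source length as a stand-in for program length, which is an illustrative assumption rather than the formal definition), a program that reproduces the first string from its rule is far shorter than one that has to spell the second string out:

# Source-code length as a rough proxy for the length of the shortest
# self-contained representation of each string.
regular_program = 'print("01" * 32)'   # 16 characters of source
random_string = "1100100001100001110111101110110011111010010000100101011110010110"
random_program = 'print("' + random_string + '")'   # 73 characters of source

print(len(regular_program), len(random_program))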
More formally, the Algorithmic Complexity (AC) of a string x is defined as the length of
the shortest program that computes or outputs x, where the program is run on some fixed
reference universal computer. The major drawback of AC and of algorithmic probability
(AP) is their incomputability. Time-bounded "Levin" complexity penalizes a slow program
by adding the logarithm of its running time to its length. This leads to computable
variants of AC and AP, and Universal "Levin" Search (US) solves all inversion problems
in optimal time (apart from some unrealistically large multiplicative constant).
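In standard notation, with U the fixed reference universal computer, |p| the length of a program p, and t(p) its running time, these two quantities can be written as

K(x)  = \min \{\, |p| : U(p) = x \,\}
Kt(x) = \min \{\, |p| + \log t(p) : U(p) = x \,\}

where K is the Algorithmic Complexity and Kt is the time-bounded Levin complexity; penalizing running time is what makes the latter computable, as noted above.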
3.
1. Knowledge pyramid:
Data: facts and figures
-> Decision Making ->
Information: data with meaning
-> Critical Thinking ->
Knowledge: information with value
-> Reflective Thinking ->
Wisdom: knowledge with personal insight
2. Circular flow of information
3. Data-Info-Knowledge Continuum
4. IM vs KM
IM
• Records management
• Can be stored, catalogued, organized
• Align with corporate goals and strategies
• Set up a database
• Utilize for day-to-day optimum decision-making
• Focus on critical NEEDS for the company
• Aims for efficiency
• Data with a purpose
KM
5. Intelligent Organization