88  The Question of the Origin of Biological Information

…struction a priori, however, seems impossible. If we could really simulate the process of the biological generation of information—for example, without giving the target sequence and the evaluation scheme for the "letter mutants"—we would have solved a central problem in the field of artificial intelligence: the computer could generate information de novo in a self-organization process, simply by consuming energy. Various attempts have been undertaken to make the simulation more realistic—for example, by omitting the definition of a target sequence and instead defining general criteria of fitness. The difficulties outlined above, however, cannot be put aside, as they are fundamental in nature. The reasons for this will be examined in chapter 12.

IV  Algorithmic Information Theory

9  Random Sequences

A mere look at the discussion of philosophical problems shows that hardly anything could be as random as the ways in which the concept of randomness has been used in connection with evolution. This is all the more serious for us because it is precisely Darwin's theory of evolution that makes such central use of the idea of randomness. For this reason, one of the aims of this discussion will be to define the concept of randomness as precisely as possible, and to find out its consequences for evolutionary biology.

As was shown in chapter 6, the idea of a random sequence is of major importance for the problem of the origin of biological information. The mathematical conception of random sequences, which will now be described, is due to the work of Gregory Chaitin, Andrei Kolmogorov, and Ray Solomonoff. In the literature it is frequently referred to as algorithmic information theory.

Almost everyone has an intuitive picture of what a random sequence is. Consider the following binary sequences.
Sequence S1: 10101010101010101010101010101010101010101010101010
Sequence S2: 10100011111001011001011100010110001010000001000111

The two sequences have in common that they are 50 characters long and consist only of the two characters "0" and "1." For the sequence S1 there is, however, a simple rule according to which the character sequence can be continued indefinitely, since the characters "0" and "1" appear in strict alternation. The resulting sequence pattern is clearly an ordered sequence. For the sequence S2, on the other hand, there does not seem to be any rule that allows a logical extension of the sequence pattern. It is therefore reasonable to denote the sequence S2 as a random sequence.

The classical method of generating a random sequence is to toss a coin (whose faces are "0" and "1"). One could be tempted to conclude from this that it is the origin, or the manner of origination, of a sequence that determines whether it is random in nature. But this would be fallacious. For example, if we toss a coin fifty times in succession and note the result ("0" or "1") each time, then at the end we could have any one of the 2^50 combinatorially possible sequences, since each of these has the same prior probability. This applies just as much to the two binary sequences that we have used as examples of an ordered and a disordered character sequence. Clearly, we need a definition of the concept "random sequence" that is independent of the manner in which the sequence originated, but that still is consonant with our intuitive picture of ordered and disordered sequences. The algorithmic definition of randomness that we shall now discuss does not consider the origin of a character sequence, but refers purely and simply to the characteristics of the sequence pattern.
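The point can be made concrete with a general-purpose compressor as a crude stand-in for a "short description." The sketch below is our illustration, not part of the original argument: both 50-character sequences are equally probable outcomes of fifty coin tosses, yet only the ordered one is compressible.

```python
import zlib

S1 = "10" * 25  # strict alternation, as in the text
S2 = "10100011111001011001011100010110001010000001000111"

# Each sequence has prior probability 2**-50 under fair coin tossing,
# yet the compressor finds a short description only for S1.
c1 = zlib.compress(S1.encode(), 9)
c2 = zlib.compress(S2.encode(), 9)
print(len(c1), len(c2))  # the ordered sequence compresses far better
```

The compressed size is only an upper bound on the true descriptive complexity, but it already separates the two cases cleanly.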
In order to explain the definition, we shall again make use of the language of information theory. We consider once more the binary sequences S1 and S2. To establish the identity of a single binary character requires exactly one yes/no decision, that is, one bit of information. For equal prior probabilities of the individual characters, equation (2) shows that each sequence contains 50 bits of information (or, in the general case of a binary sequence of length n, n bits of information).

We now examine the following problem in information theory: A recipient, who has at his disposal the same communications system as the sender, is to be sent the information coded in the sequences S1 and S2 in the most economical way possible. By "information" we mean here the syntactic information in S1 and S2, that is, the structural information in the sequences in the sense of equation (2).

In the case of the binary sequence S1, the sender would send the recipient the following message, again in binary code:

PRINT "10" (n/2) TIMES

In the present case, n is equal to 50. As can be seen, the length of the transmission program becomes only slightly greater when n increases, even if n becomes very large indeed. The number of units of information in the transmission stays roughly the same, however long the binary sequence S1 is—assuming, of course, that the alternating sequence pattern of S1 continues for n > 50.

Things are different for the binary sequence S2. Here, since there is obviously no repeating pattern in the sequence, the sender must each time transmit the entire string of characters:

PRINT "10100011111001011001011100010110001010000001000111"

The amount of information required for transmission is therefore proportional to the length of the sequence itself.
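The contrast in transmission cost can be sketched numerically. In the illustration below, the constant 16 is an arbitrary stand-in for the fixed program text (PRINT "10" … TIMES); only the repeat count, roughly log2(n/2) bits, grows with n.

```python
import math

def transmission_bits(n):
    """Bits needed to send S1 of length n as a program: a fixed-size
    instruction plus ~log2(n/2) bits for the repeat count.
    The constant 16 is an illustrative assumption, not from the text."""
    return 16 + math.ceil(math.log2(n // 2))

# The sequence grows by a factor of 100,000; the message barely grows.
for n in (50, 5_000, 5_000_000):
    print(n, transmission_bits(n))
```

For S2, by contrast, the message length is simply n bits plus the fixed program text, since the whole string must be copied verbatim.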
The concept of the Turing machine (figure 17) is of fundamental importance for our further discussion. The machine's input is written on a tape as a row of ones and zeros; a second tape carries the output, and a third tape is used during the calculation as a working tape. The programs consist of finite series of single instructions, in which each one follows on from the previous one in a purely mechanical way, without intermediate decisions. A program that consists of instructions of this kind is termed an algorithm.

Figure 17  A Turing machine. The machine shown has two different states; for each possible combination of machine state and current symbol ("0" or "1"), a fixed set of transition rules determines in which direction the monitor is to move, which character it is to print out, and which state it is to adopt. The figure shows a short calculation in which an input is transformed into the output "01110" and in which the program then stops.

With the idea of a Turing machine, a whole series of fundamental mathematical problems can be solved. These include above all the so-called decision problems, whose common feature is this question: "Can it be decided, in a finite number of computational steps, whether a given character sequence A can be obtained from a character sequence B by the application of an algorithm?"

According to the above definition, a character sequence S is to be regarded as random if it can only be generated by a program that on the input tape of a Turing machine contains as many binary characters as does the binary representation of S itself (figure 18).

The smallest possible algorithm with which a given character sequence can be generated is called the minimal algorithm or the minimal program. By definition, it cannot be reduced further to a smaller algorithm. The algorithms are themselves binary sequences when they are fed into the Turing machine. For a given character sequence there can exist one or more minimal algorithms.
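The mechanics of such a machine can be sketched in a few lines. The states, tape contents, and rules below are illustrative stand-ins, not the ones from figure 17; the rule table maps (state, symbol) to (symbol to write, direction of movement, next state).

```python
def run_turing(tape, rules, state="s0", pos=0, max_steps=10_000):
    """Simulate a simple Turing machine. `rules` maps (state, symbol)
    to (write, move, next_state); move is +1 (right) or -1 (left).
    The machine halts when no rule matches the current situation."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        symbol = cells.get(pos, "0")
        if (state, symbol) not in rules:
            break
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, "0") for i in range(lo, hi + 1))

# A hypothetical one-state machine that inverts symbols while moving
# right, halting at the end marker "_":
rules = {
    ("s0", "0"): ("1", +1, "s0"),
    ("s0", "1"): ("0", +1, "s0"),
}
print(run_turing("0110_", rules))  # "1001_"
```

Each step is purely mechanical, exactly in the sense of the definition of an algorithm given above.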
In every case, the minimal algorithm is a random sequence, independently of whether the character sequence it generates is an ordered sequence or not, because the minimal algorithm is by definition not further reducible.

Figure 18  Calculation (a) of an ordered sequence and (b) of a random sequence.

The concept of a minimal algorithm leads to another important idea in algorithmic information theory, that of algorithmic complexity. This will play an important part in our further discussions, so it will be useful to place it on a more exact mathematical basis now. We shall make special use of the concept of recursive functions. A function is termed recursive when there is an algorithm, for example for the Turing machine, with the help of which for given values of the arguments the function values can be calculated. In the case where the algorithm never ends, and the function thus remains undefined for some values of its argument, the function is termed partially recursive. We now introduce the concept of algorithmic complexity by the use of three definitions:

1. A "computer" is defined as a partially recursive function C(p) with a binary sequence p as its argument. The function value C(p) is the binary sequence that the computer produces as output when the program p is fed into it.

2. The "algorithmic complexity" K_C(S) of a binary sequence S is defined as the length L of the shortest computer program p that produces as output the binary sequence S:

K_C(S) = min L(p) over all p with C(p) = S    (14)

3. A computer U is termed "universal" when for any computer C and any binary sequence S the relation

K_U(S) ≤ K_C(S) + c    (15)

holds, where c is a constant (greater than zero) that depends only upon C.

Kolmogorov was able to show that there really is a universal computer U for which there exists an optimal algorithm in the sense of equation (15). We now opt to use a universal machine U as our standard machine, so that we from now on can assume without loss of generality that K_U(S) = K(S).
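Definition 2 can be made tangible with a toy "computer" whose programs are bitstrings, searched by brute force for the shortest one that prints a given sequence. The instruction set below is invented for illustration; it is not Chaitin's or Kolmogorov's construction.

```python
from itertools import product

def C(p):
    """Toy 'computer': read the program p two bits at a time.
    00 -> append '0'; 01 -> append '1'; 10 -> double the output so far;
    11 -> no operation (reserved). Odd-length programs are invalid."""
    if len(p) % 2:
        return None
    out = ""
    for i in range(0, len(p), 2):
        op = p[i:i + 2]
        if op == "00":
            out += "0"
        elif op == "01":
            out += "1"
        elif op == "10":
            out += out
    return out

def K(s, max_len=12):
    """Brute-force K_C(s): length of the shortest program p with C(p) = s."""
    for length in range(2, max_len + 1, 2):
        for bits in product("01", repeat=length):
            if C("".join(bits)) == s:
                return length
    return None

# The patterned string "0101" can be built by emit-emit-double (6 bits),
# while "0110" can only be spelled out literally (8 bits).
print(K("0101"), K("0110"))
```

Even in this toy setting the defining feature appears: sequences with internal regularity admit programs shorter than themselves, and irregular ones do not.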
Further theorems about the properties of K(S) can be found in the relevant literature.

Using the example of the binary sequence S1, we have seen that any sequence of length n that can be generated by the repeated application of a finite algorithm possesses an algorithmic complexity K(S) whose value lies close to log2(n), as long as n is very large. In accordance with the definition of algorithmic complexity, sequences of length n with maximal complexity cannot be generated by a program whose length (in bits) is much less than the length (in bits) of the sequence itself. In other words, in a sequence with maximal complexity the syntactic information is encoded irreducibly, and the simplest way of transmitting the sequence is to copy it symbol by symbol. Clearly, the sequence of characters in sequences of maximal complexity is so irregular, and thus so unpredictable, that it is appropriate here to speak of a "random sequence."

We now redefine the term "random sequence" with the help of the concept of complexity: An n-membered character sequence S will be called a random sequence when its algorithmic complexity K(S) contains approximately the same number of binary digits as the character sequence itself does. The qualifying term "approximately" already shows that the transition from random sequences to ordered sequences is a gradual one. This fact is already implied in our definition. We can also now determine the degree of randomness of a character sequence: if a sequence S consists of n binary symbols, then we can—at least in principle—classify all the 2^n combinatorially possible sequences according to their degree of complexity. In this way, a hierarchical order of sequences arises, and we can choose what limits to set for the degree of complexity that a sequence shall have if it is to be regarded as random.
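How the 2^n sequences distribute over the complexity classes can be bounded by a standard program-counting argument, sketched here as our own illustration: there are at most 2^(n−c) − 1 programs shorter than n − c bits, and each program generates at most one sequence.

```python
def max_fraction_compressible(n, c):
    """Upper bound on the fraction of n-bit sequences S with K(S) < n - c.
    There are 2**0 + 2**1 + ... + 2**(n-c-1) = 2**(n-c) - 1 programs
    shorter than n - c bits, and each outputs at most one sequence."""
    return (2 ** (n - c) - 1) / 2 ** n

# For c = 10 the bound is just under 1/1024: fewer than one sequence in
# a thousand is compressible by ten bits, so a fair coin yields a
# sequence with K(S) > n - 10 with probability greater than 0.999.
print(max_fraction_compressible(50, 10))
```

The bound is independent of n: raising the compression margin c by one bit halves the fraction of sequences that can achieve it.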
The algorithmic concept of complexity will be used again in chapter 12 in order to bring into focus the biologically important concept of "structural complexity."

In principle, it is always possible to show that a given sequence is not a random sequence. To do this it is necessary merely to find an algorithm that is capable of generating the sequence but that in binary code is shorter than the sequence. However, to prove the opposite, that is, that a particular sequence is random, it is necessary to prove that such an algorithm does not exist.

It can easily be shown that most binary sequences are random. For example, only every thousandth of all n-membered binary sequences possesses a complexity K(S) with a value less than n − 10, and is thus capable of being generated by an algorithm that is ten bits shorter than the sequence itself. So if a coin is tossed n times, there is a probability greater than 0.999 that the result will be a random sequence with a degree of complexity K(S) > n − 10. But Chaitin has shown that a proof of the randomness of a character sequence can only be given in very rare cases (the randomness theorem). The ultimate origin of the randomness theorem is the incompleteness theorem of Kurt Gödel.

In proving his theorem, Chaitin started out from a variant of the so-called Berry paradox, which runs as follows. Consider the number 111,777. This number is remarkable because it satisfies the following definition:

THE LEAST INTEGER NOT NAMEABLE IN FEWER THAN NINETEEN SYLLABLES

But "the least integer not nameable in fewer than nineteen syllables" is itself a name consisting of eighteen syllables; hence the least integer not nameable in fewer than nineteen syllables can be named in eighteen syllables, which is a contradiction.

To prove his randomness theorem, Chaitin transformed the Berry paradox into a decision problem for the universal Turing machine, so that the paradox could be formulated as a halting problem.
For this purpose, Chaitin replaced the vague notion of the "specification" of a sequence with the mathematically precise concept of complexity, as introduced above. The Berry paradox could then be encoded in the following program for a Turing machine:

FIND A SERIES OF BINARY DIGITS THAT CAN BE PROVED TO BE OF A COMPLEXITY GREATER THAN THE NUMBER OF BITS IN THIS PROGRAM

Within the framework of the given formal system, the program is given the task of testing all possible proofs for their complexity, until it finds the first proof showing that a specific sequence of digits possesses a greater complexity than the number of bits in the test program in question. It then prints out the sequence, and the Turing machine stops.

There is a clear contradiction here, since the program is supposed to calculate a number that by definition no program of this size can calculate. In other words, the program finds the first number for which it can be proved that the number never could have been found by the program. The Turing machine can therefore never fulfill the task set forth above. This is a direct consequence of the Berry paradox, and it is naturally not eliminated by the transformation into a decision problem. It can therefore not be proved in a formal system that a specific series of digits possesses a greater complexity than the program needed to generate the series.

For our further discussion we shall need to state this important result more precisely. To this end we consider the construction and the function of a Turing machine more closely. We assume that the Turing program contains a fixed subroutine of length c bits, with the help of which a given system of axioms can be used to generate all the proofs that are possible in this axiom system. We assume further that the quantity of information that resides in the system of axioms and the deductive rules is K(S) bits, so that the total length of the Turing program is K(S) + c bits.
[The actual value of c is independent of K(S) and depends only on the machine language used.] In order to carry out the task set above, a Turing machine would have to proceed step by step. It would first generate all the proofs that require one computational step, then all those that require two steps, and so forth. For each proof, the subroutine would check whether it establishes a complexity significantly greater than K(S) + c bits. If the machine were to find such a proof, it would print out the binary sequence and stop.

However, this result would be a contradiction in itself. The Turing program has itself a total length of only K(S) + c bits. Yet according to the above scheme the program would generate a string that contained significantly more bits than the program itself. This string would, according to our definition of randomness, not be able to be generated by the program. So we can formulate a general theorem stating that in a formal system of complexity K(S) it is impossible to prove that a certain string has a complexity greater than K(S) + c.

The randomness of a string S of length n can therefore only be proved within a formal system whose complexity exceeds that of S. It is apparent that a proof of randomness for S is in practice impossible, for in order to decide whether the formal system F1 possesses the (provable) complexity K1 > n, one must construct a formal system F2 of complexity K2 > K1. The complexity of F2 can in turn only be proved in a transcendent system of complexity K3 > K2, and so on ad infinitum. We shall take up this result in part V and discuss its relevance to our main problem.
Limits of Objective Knowledge in Biology

[Figure 19  The nucleotide sequence of figure 9, reexpressed as a sequence of binary digits.]

Such a sequence of symbols can be analyzed with the tools of algorithmic information theory: since nucleic acids are built up from four classes of nucleotide, two binary digits suffice to represent one nucleotide in the binary system. Figure 19 is figure 9 reexpressed in the following code: A = 00; U = 11; G = 01; C = 10. Since DNA molecules can thus be expressed as binary sequences, we can apply the theorems of algorithmic information theory directly to the genetic information-carriers.
We examine first the chance hypothesis. Jacques Monod was convinced that the irregularity of the macromolecular structures occurring in Nature reflects the random manner of their origin. Monod's remarks can be applied directly to the sequences of genetic symbols. Thus Monod assumed that the obvious irregularity of genetic symbol sequences would reflect directly the random nature of their origin.

With the help of the randomness theorem developed in chapter 9, the epistemological content of Monod's chance hypothesis can be examined. The randomness theorem of algorithmic information theory states that the randomness of an n-membered sequence can only be proved in a formal system F1 of complexity K1 greater than n. This implies that the proof of the randomness of an n-membered sequence presupposes the systematic construction of a formal system F1 with K1 > n. However, the proof that F1 possesses the complexity K1 requires in turn a higher system F2 with K2 > K1, a requirement that continues indefinitely. The random construction of a "provable" system F1 is, in the context of the degrees of complexity discussed here, just as improbable as the chance origin of a genetic symbol sequence of defined semantics.

Consequently, Monod's claim that the genetic plan of living organisms is the product of a random synthesis, based as it is on the lack of demonstrable sequence patterns, is not provable. Incidentally, this is entirely an epistemological problem: the randomness theorem does not exclude the existence of random sequences, but denies fundamentally the possibility of proving their randomness in a deductive process.
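The binary transliteration of genetic sequences used in this discussion (figure 19) can be sketched directly. The short example sequence "AUG" below is ours, not taken from figure 9.

```python
# The code given in the text: two binary digits per nucleotide.
CODE = {"A": "00", "U": "11", "G": "01", "C": "10"}

def nucleotides_to_binary(seq):
    """Transliterate a nucleotide sequence into a binary sequence."""
    return "".join(CODE[nt] for nt in seq)

print(nucleotides_to_binary("AUG"))  # "001101"
```

Once a genetic sequence is in this form, any complexity measure defined for binary strings applies to it without further modification.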
It is a fact that, with a few exceptions, no long-range periodicity has yet been observed in genetic symbol sequences. The genetic information-carriers seem rather to conform to Erwin Schrödinger's picture of an aperiodic crystal.

Further, the considerations in chapter 6 concerning the sequence variability of homologous proteins showed that we cannot exclude the historical possibility that the plan for an organism arose in a singular, random process. The objections are more of an epistemological nature and have to do with the methodological status of the chance hypothesis. There is indeed no way in which the chance hypothesis can be proved—as Monod tried to do—by the analysis of genetic sequences. Here, there are fundamental limits to what we can discover.

It could at this point be objected that the lack of provability of the chance hypothesis is characteristic of all physical hypotheses, for, according to the demarcation criteria laid down by the school of critical rationalism, hypotheses can never be proved; at best, wrong alternatives can be excluded by falsification. Nevertheless, the chance hypothesis occupies a rather special place in this respect. The central role that Monod ascribes to the effect of chance in evolution raises the phenomenon of random events to the status of an antiteleological law that in its structure represents a negation of the modern version of the principle of causality. Seen thus, it is not a contradiction in terms when Monod speaks of the "law of chance."

In the context of Monod's hypothesis, the concept of chance takes on an essential, and no longer merely operational, meaning. Monod illustrates exactly what this difference consists in by taking two easily grasped examples.
These underline the point that the element of chance associated, for example, with a roulette game is merely an operational indeterminacy, that is, an indeterminacy that could in principle be removed, if only the trajectory of the roulette ball could be determined with sufficient precision. On the other hand, there are, according to Monod, situations in which chance has an essential significance: "This is the case, for instance, in what could be called 'absolute coincidences,' those that result from the intersection of two totally independent chains of events. Suppose that Dr. Brown sets out on an emergency call to a new patient. In the meantime Jones the carpenter has started work on repairs to the roof of a nearby building. As Dr. Brown walks past the building, Jones inadvertently drops his hammer, whose (deterministic) trajectory happens to intercept that of the physician, who dies of a fractured skull. We say he was a victim of chance. What other term fits such an event, by its very nature unforeseeable? Chance is obviously the essential factor here, inherent in the complete independence of two causal chains of events whose convergence produces the accident."

Whether two chains of events cross or not depends substantially in this example upon the realization of particular initial conditions. In this way, Monod transfers the element of randomness to the coincidental meeting of certain initial conditions (see chapter 13). This is made clear by the following example. Let there be two bodies that move independently, in straight lines, from points A and B, respectively, with velocities vA and vB. The starting conditions for these two chains of events are given by the positions and the speeds of the two bodies at time t0. We consider the following three situations: In situation (1), the coordinates of position and velocity are chosen such that the two chains of events can never cross.
In situation (2) the opposite is the case: however vA and vB are chosen, the result will always be a collision, that is, a crossing of the two chains of events. Only situation (3) expresses the concept of chance as used by Monod. In this case, it is the random constellation of particular values of the initial velocities that determines whether or not there will be a crossing of the paths. [Seen in this way, situation (2) is a special case of situation (3), the example of the inherently random superposition of two independent causal chains.]

In a comparable way, according to Monod, the appearance of a mutation in the carriers of genetic information (the nucleic acids) and its effects at the phenotypic level (e.g., improved function of the teleonomic apparatus) are two events independent of one another. The origin of biological information is thus inherently determined by chance, inasmuch as there exists no causal connection between its syntactic aspect, represented by the sequence of a nucleic acid, and its semantic aspect, represented by the functional consequences of this sequence at the protein level. It is true that the genetic code represents a mere "syntactic" translation from the level of nucleic acids to that of proteins. But it provides no kind of causal connection between the syntactic, purely material aspect and the purpose and meaning of biological information.

The semantics of biological information are for Monod a mere epiphenomenon of particular syntactic structures, that is, of the sequences of nucleic acids and proteins. Since there is obviously no causal connection between the semantic and the syntactic levels, neither is there any lawlike explanation for the origin of semantic information. For Monod, the origin of semantic information, that is, the "improvement of the teleonomic apparatus,"
is therefore, in the above sense, a process caused inherently by chance. This is the quintessence of his assertion (see note 161) that it is impossible to formulate any theoretical or empirical rule that would allow the function of a biological macromolecule to be placed in a direct causal context.

In chapters 12 and 13, we shall see that it is possible to agree with Monod to a certain extent. It is true that the origin of semantic information cannot be explained in detail by natural laws. But we must immediately add that it certainly is possible to explain the origin of semantic information as a general lawlike phenomenon.

We now turn to the teleological explanatory approach. In order to assess its epistemological status, we must first take up the question of how the phenomenon of a relationship governed by natural law can be understood from an information-theoretical viewpoint. We have already answered this question, implicitly, in the foregoing discussion. Now, we must generalize this concept and apply it to the idea of a teleological law. The first question to be clarified is that of what we are to understand at all by a "teleological law."

It will be remembered that the starting point of our discussion was the sequences of biological information-carriers. The experimentally determined sequences—for example, the sequence of nucleotide monomers in a DNA molecule—were transliterated into the language of information theory by representing them as a defined sequence of binary characters (see figure 19). This procedure is not limited to DNA sequences. On the contrary, all observational data of science, as far as they can be described in natural or artificial language, can be encoded in sequences of binary symbols. In order to introduce the idea of a "relationship governed by natural law" in a meaningful way, we return to the conceptual apparatus of algorithmic information theory.
As Ray Solomonoff points out, there is a clear lawlike relationship in a set of observational data when and only when the symbol sequences in which the observables are encoded are nonrandom, that is, when their regularity makes them compressible.

The concept of natural law introduced by way of algorithmic information theory is phenomenological in nature, as long as the logical structure of such a lawlike (i.e., compact) algorithm is not specified further. Although the information-theoretical interpretation of the concept of natural law still has to be worked out in detail, there are already several important conclusions that can be drawn. First, we observe that the reduction of theories, discussed in so much detail in the philosophy of science, represents basically a reduction in the complexity of a compact algorithm; and, second, we note that the philosophical idea of a unified theory of all natural appearances can in this reduction schema be described as the search for the smallest irreducible algorithm with which our world can be described completely. Such a minimal algorithm represents by definition a random sequence (see chapter 9). On the other hand, the randomness theorem of Gregory Chaitin shows us that the randomness of an algorithm, and therefore its property of being a minimal algorithm, is in principle not provable. In other words, in the framework of algorithmic information theory, there is a strict mathematical proof for the assertion that we can never know whether we are in possession of the minimal formula by means of which all the phenomena of the real world can be predicted. The completeness of a scientific theory can in principle never be proved. It is only refutable by pragmatic means, namely, by the invention of a new, even more compact algorithm.

We now apply the information-theoretical concept of natural law to the teleological approach.
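Solomonoff's criterion can be illustrated with a compressor as a crude stand-in for the minimal algorithm. In this sketch (our illustration, with arbitrary example data), "observational data" generated by a simple law compress well, while pure noise does not.

```python
import random
import zlib

random.seed(0)

# "Observational data": a lawlike signal (a simple periodic function
# of the sample index) versus pure noise of the same length.
lawlike = bytes(i % 256 for i in range(1000))
noise = bytes(random.randrange(256) for _ in range(1000))

print(len(zlib.compress(lawlike, 9)), len(zlib.compress(noise, 9)))
```

The compressed lawlike data are, in effect, a short algorithm plus residuals; the noise admits no description appreciably shorter than itself.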
Clearly, in the context of algorithmic information theory, all teleological explanations of the origin of biological information are irrefutable, for the teleological approach postulates the existence of an algorithm that embodies a law immanent in animate Nature according to which the DNA molecules of living organisms are constructed. However, a teleological algorithm can only have the character of a natural law when the symbol sequences that it produces are not random sequences and the algorithm itself is more compact than they are. Yet the existence of such compact algorithms, as postulated by the teleological approach, cannot be disproved, since their nonexistence is not provable. On account of this fundamental irrefutability, teleological explanatory models are immune against all attempts to falsify them. We have therefore, within algorithmic information theory, a proof of what had already been presumed by Monod, namely, that the postulate of objectivity (see chapter 2) is a pure postulate that never can be proved, for "it is obviously impossible to imagine an experiment proving the nonexistence anywhere in nature of a purpose, of a pursued end."

Furthermore, within the framework of teleological theory formation, such an information-generating algorithm has never been concretely stated. The teleological approach thus represents a mere pseudosolution for the problem of the origin of biological information, one which must always rest upon current gaps in physical and chemical knowledge.

Let us thus summarize the fundamental limits of objective knowledge in biology: the chance hypothesis is inherently incapable of proof, the teleological approach of disproof.

V  The Evolutionary Origin of Information
