
STOCHASTIC CHEMICAL KINETICS: Theory and Systems Biological Applications

Péter Érdi
perdi@kzoo.edu

Henry R. Luce Professor


Center for Complex Systems Studies
Kalamazoo College
http://people.kzoo.edu/~perdi/

and
Institute for Particle and Nuclear Physics, Wigner Research Centre,
Hungarian Academy of Sciences, Budapest
http://cneuro.rmki.kfki.hu/
STOCHASTIC CHEMICAL KINETICS: Theory and Systems Biological Applications

1973 1989 2014


STOCHASTIC CHEMICAL KINETICS: Theory and Systems Biological Applications

I. Stochastic kinetics: why and how?
• Chemical kinetics: some basic concepts
• Fluctuation phenomena: introductory and historical remarks

II. Continuous time discrete state stochastic models
• Stochastic processes: some basic concepts
• Model frameworks, master equation
• Solutions of the master equation
• Stationary distributions
• Simulation methods
• Non-Markovian approaches

III. Systems Biological Applications
• Fluctuations near instabilities
• Enzyme kinetics
• Signal processing in biochemical networks
• The beneficial role of noise: fluctuation-dissipation theorem and stochastic resonance
• Stochastic models of gene expression
• Chiral symmetry
Stochastic kinetics: why and how?

Chemical kinetics: some basic concepts

CHEMICAL REACTION

0 = ∑_{i=1}^{n} ν_{j,i} A_i   (j = 1, 2, . . . , m)   (1)

m: the number of different reactions; ν_{j,i}: stoichiometric coefficient of component A_i in reaction step j, positive for species that are produced, negative for species that are consumed in reaction step j.

KINETIC EQUATION

dc(t)/dt = f(c(t); k);   c(0) = c_0   (2)

where f is the function which governs the temporal evolution of the system, k is the vector of the parameters (rate constants or rate coefficients), and c_0 (with elements [A_1]_0, [A_2]_0, . . . , [A_n]_0) is the initial value vector of the component concentrations.

MASS ACTION KINETICS

f_i(c) = ∑_{j=1}^{m} ν_{j,i} v_j   (i = 1, 2, . . . , n)   (3)

The constituent functions of f are the sums of the rates of the individual steps.

Power law kinetics:

v_j = k_j ∏_{k=1}^{n} [A_k]^{α_{j,k}}   (4)

f_i(c) = ∑_{j=1}^{m} ν_{j,i} k_j ∏_{k=1}^{n} [A_k]^{α_{j,k}}   (i = 1, 2, . . . , n)   (5)

α_{j,k}: order of reaction step j with respect to component A_k.

Continuous time, Continuous state, Deterministic (CCD) model.
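The CCD model (2)-(5) can be integrated numerically. Below is a minimal sketch for a single mass-action step A + B → C; the rate coefficient and the initial concentrations are hypothetical illustration values, not taken from the slides.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of the CCD (deterministic mass-action) model, eqs. (2)-(5),
# for the single step A + B -> C. Rate coefficient and initial concentrations
# are hypothetical illustration values.
k = 0.5

def f(t, c):
    cA, cB, cC = c
    v = k * cA * cB          # power-law rate of the step, eq. (4)
    return [-v, -v, +v]      # sum of nu_{j,i} * v_j over steps, eq. (3)

sol = solve_ivp(f, (0.0, 20.0), [1.0, 0.6, 0.0])
print(sol.y[:, -1])          # concentrations [A], [B], [C] at t = 20
```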
Stochastic kinetics: why and how?

Fluctuation phenomena: introductory and historical remarks

Einstein–Smoluchowski–Langevin theory of the Brownian motion I.

• gave a relationship for the time-dependence of the average of the square of the displacement X(t) of the Brownian particle,

⟨X(t)²⟩ := ⟨[x(t) − x(0)]²⟩ = 2Dt,   (6)

where x(t) is the actual and x(0) is the initial coordinate of the Brownian particle;

• found the connection between the mobility of the particle and the (macroscopic) diffusion constant:

D = µ k_B T.   (7)

• considered the motion of the Brownian particles as memoryless, with non-differentiable trajectories, and helped prepare the pathway to the formulation of the theory of stochastic (actually Markov) processes.

• offered a new method to determine the Avogadro constant.

FDT: the fluctuation of the particle at rest has the same origin as the dissipative frictional force.

Figure 1: there is a large discrepancy between the classical displacement and the Brownian motion displacement.

Figure 2: non-differentiable trajectories.
Stochastic kinetics: why and how?

Fluctuation phenomena: introductory and historical remarks

Einstein–Smoluchowski–Langevin theory of the Brownian motion II.

White noise is considered as a stationary Gaussian process with E[ξ_t] = 0 and E[ξ_t ξ_{t′}] = δ(t − t′), where δ is the Dirac delta function.

Langevin equation: the forcing function has a systematic, deterministic part and a term due to the rapidly varying, highly irregular random effects:

dx/dt = a(x, t) + b(x, t) ξ(t)   (8)

Equation (8) is not precise, since ξ(t) is not an ordinary function and x(t) is non-differentiable (further studies: stochastic integrals, "Itô versus Stratonovich").

Retrospectively, Einstein's theory corresponds to a(x, t) = 0 and b(x, t) = √(2D); assuming Gaussian white noise, the Langevin equation for the velocity reads

dv = −γ v dt + √(2D) dW(t).   (9)

Figure 3: white noise and its spectrum.
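A minimal Euler–Maruyama sketch of equation (9) follows; γ, D, the step size and the number of steps are hypothetical illustration values. The long-time variance of v should approach the stationary Ornstein–Uhlenbeck value D/γ.

```python
import numpy as np

# Euler-Maruyama integration of dv = -gamma*v dt + sqrt(2D) dW(t), eq. (9).
# gamma, D, dt and n_steps are hypothetical values.
rng = np.random.default_rng(0)
gamma, D, dt, n_steps = 1.0, 0.5, 1e-3, 200_000

v = np.empty(n_steps)
v[0] = 0.0
for i in range(1, n_steps):
    dW = np.sqrt(dt) * rng.normal()
    v[i] = v[i - 1] - gamma * v[i - 1] * dt + np.sqrt(2.0 * D) * dW

# the long-time variance should approach the stationary value D/gamma
print(v[n_steps // 2:].var(), D / gamma)
```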
Stochastic kinetics: why and how?

Fluctuation phenomena: introductory and historical remarks: Diffusion processes

Fokker–Planck equation:

A = a ∂/∂x + (1/2) b ∂²/∂x²   (10)

• a(x, t): the velocity of the conditional expectation (called "drift")
• b(x, t): the velocity of the conditional variance (called a "diffusion constant")
• The general form of the FP equation:

dP/dt = A P   (11)

b = 0: Liouville process.

Figure 4: Temporal evolution of a Liouville process: drift of the PDF in time without changing its shape. Starting from a degenerate density function (i.e. when the initial PDF is concentrated at a point), the point is drifted along x(t), the solution of dx/dt = A(x, t) with x(t_0) = x_0.

a(x, t) = a, b(x, t) = b: Wiener process, X(t) ≡ W(t); a = 0, b = 1: special Wiener process.

Figure 5: Temporal evolution of a special Wiener process: it is a Gaussian process, but it is not stationary. It is used in modeling the Brownian process. For finite t > t_0 the PDF is Gaussian, with mean x_0 + a(t − t_0) and standard deviation |b(t − t_0)|^{1/2}, flattening out as time tends to infinity. For a special Wiener process the center of the spreading curve remains unchanged.

Ornstein–Uhlenbeck process (OU):

a(x, t) = −kx,  b(x, t) = D,  (k > 0, D ≥ 0).   (12)

Figure 6: The OU process has a stationary PDF, the normal density function. For any t > 0 the PDF is a normal density function with an exponentially moving mean, x(t) = x_0 exp[−k(t − t_0)], and standard deviation [(D/2k)(1 − exp[−2k(t − t_0)])]^{1/2}. The temporal evolution of the standard deviation describes spreading out, but the width of the curve stays finite even for infinite time. The OU process, defined by a linearly state-dependent drift and a constant diffusion term, proved to be a very good model of the velocity of a Brownian particle characterized by a normal distribution.
Stochastic kinetics: why and how?

Fluctuation-dissipation theorem
• Noise in electric circuits in thermal equilibrium: Nyquist (theory) and Johnson (experiments).

• Internal voltage fluctuation: proportional to the resistance R, the temperature T and the bandwidth Δν of the measurement:

⟨V²⟩ = 4 R k_B T Δν.   (13)

• The same forces that cause the fluctuations also result in their dissipation.

• Chemical kinetics: estimation of individual rate constants from concentration fluctuations.

• Chemical fluctuation measurements: electric conductance; fluorescence correlation spectroscopy.

• Fluctuation or noise phenomena: representation in the time domain and in the frequency domain.

C(τ) := E[ξ(t) ξ(t − τ)]   (14)

S(ω) = (1/2π) ∫_{−∞}^{+∞} C(τ) cos(ωτ) dτ = (1/2π) · γ(ω)/(γ(ω)² + ω²) · ⟨ξ²⟩_eq   (15)

γ(ω) is the Fourier transform of the dissipation constant, and ⟨ξ²⟩_eq characterizes the measure of the equilibrium fluctuations. The relationship is famously called the Wiener–Khinchine theorem.
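As a hedged numerical illustration of (14)-(15): an exponentially decaying autocorrelation function gives a Lorentzian spectrum. The decay rate and the fluctuation strength below are hypothetical values, not from the slides.

```python
import numpy as np

# Wiener-Khinchine check: C(tau) = <xi^2>_eq * exp(-gamma*|tau|) should give a
# Lorentzian spectrum. gamma and <xi^2>_eq are hypothetical values.
gamma, xi2_eq = 2.0, 1.0
tau = np.linspace(-50.0, 50.0, 200_001)
dtau = tau[1] - tau[0]
C = xi2_eq * np.exp(-gamma * np.abs(tau))

omega = 3.0
S_num = np.sum(C * np.cos(omega * tau)) * dtau / (2.0 * np.pi)
S_analytic = xi2_eq * gamma / (np.pi * (gamma**2 + omega**2))
print(S_num, S_analytic)   # the two values should agree closely
```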
Stochastic kinetics: why and how?

Model framework: preliminary remarks

• a chemical reaction is considered as a Markovian jump process:

P_n(t) := P(ξ(t) = n)   (16)

• Kolmogorov equation (master equation):

dP_n/dt = A P_n(t).   (17)

• a_{nn′}: infinitesimal transition probability, which gives the probability (per unit time) of the jump from n′ to n

• master equation: a gain–loss equation for the probability of each state n:

dP_n/dt = ∑_{n′} [a_{nn′} P_{n′}(t) − a_{n′n} P_n(t)].   (18)
Stochastic kinetics: why and how?

A model of genetic expression: first encounter


inactive gene ⇌ active gene → mRNA → protein   (rate constants: λ_1^+, λ_1^− for the gene switch, λ_2 for transcription, λ_3 for translation)   (19)

supplemented with two degradation steps (the second also called proteolysis):

mRNA −(γ_m)→ 0,   protein −(γ_p)→ 0   (20)

dP(n_1, n_2, n_3)(t)/dt =
  λ_1^+ (n_1^max − n_1 + 1) P(n_1 − 1, n_2, n_3) − λ_1^+ (n_1^max − n_1) P(n_1, n_2, n_3)
+ λ_1^− (n_1 + 1) P(n_1 + 1, n_2, n_3) − λ_1^− n_1 P(n_1, n_2, n_3)
+ λ_2 n_1 P(n_1, n_2 − 1, n_3) − λ_2 n_1 P(n_1, n_2, n_3)
+ ((n_2 + 1)/τ_2) P(n_1, n_2 + 1, n_3) − (n_2/τ_2) P(n_1, n_2, n_3)
+ λ_3 n_2 P(n_1, n_2, n_3 − 1) − λ_3 n_2 P(n_1, n_2, n_3)
+ ((n_3 + 1)/τ_3) P(n_1, n_2, n_3 + 1) − (n_3/τ_3) P(n_1, n_2, n_3),   (21)

where n_1^max denotes the constant total number of switching genes.
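A hedged sketch of how the state and the transition rates of (19)-(21) can be encoded for simulation; all rate constants and the gene copy number n_1^max below are hypothetical. The propensity vector corresponds to the gain/loss terms of (21) and can be fed to the stochastic simulation algorithm shown later in the Simulation methods section.

```python
import numpy as np

# State: (n1 active genes, n2 mRNA, n3 protein). All numerical values are
# hypothetical illustration values, not taken from the slides.
lam1_plus, lam1_minus = 0.05, 0.02   # gene activation / inactivation
lam2, lam3 = 2.0, 5.0                # transcription / translation
gamma_m, gamma_p = 1.0, 0.1          # degradation (1/tau2, 1/tau3 in eq. (21))
n1_max = 2                           # total number of gene copies

# change in (n1, n2, n3) for each reaction step
stoich = np.array([[+1, 0, 0], [-1, 0, 0], [0, +1, 0],
                   [0, -1, 0], [0, 0, +1], [0, 0, -1]])

def propensities(n):
    n1, n2, n3 = n
    return np.array([lam1_plus * (n1_max - n1),   # inactive -> active
                     lam1_minus * n1,             # active -> inactive
                     lam2 * n1,                   # transcription
                     gamma_m * n2,                # mRNA degradation
                     lam3 * n2,                   # translation
                     gamma_p * n3])               # proteolysis

print(propensities((1, 3, 10)))   # transition rates out of the state (1, 3, 10)
```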
Continuous time discrete state stochastic models

Model frameworks, master equation

CDS model of chemical kinetics


• (a) takes into consideration
  – the discrete character of the quantity of the components
  – the inherently random character of the phenomena
• (b) is in accordance (more or less)
  – with the theories of thermodynamics
  – with the theory of stochastic processes
• (c) is appropriate to describe
  – 'small systems'
  – instability phenomena

Figure 7: Rényi, A. (1953). Treating chemical reactions using the theory of stochastic processes. MTA Alk. Mat. Int. Közl., 2, 83-101 (in Hungarian). The first complete treatment of a second-order reaction was given. This paper gives evidence that the differential equation for the expectation of the stochastic model cannot be identified with the differential equation associated with the usual deterministic model.


Continuous time discrete state stochastic models

Model frameworks, master equation


Mathematical background

{X(t)}_{t∈R} is a continuous time Markov process if the following equation holds for every t_1 < t_2 < · · · < t_{n+1} (n is a positive integer):

P(X(t_{n+1}) = j | X(t_1) = i_1, X(t_2) = i_2, . . . , X(t_n) = i_n) = P(X(t_{n+1}) = j | X(t_n) = i_n)   (22)

The transition probability for a Markov process is defined as:

p_{ij}(s, t) = P(X(t) = j | X(s) = i).   (23)

Obviously, the following equation holds for the transition probabilities for all possible i values:

∑_{k∈S} p_{ik}(s, t) = 1   (24)

Chapman–Kolmogorov equation:

p_{ij}(s, t) = ∑_k p_{ik}(s, u) p_{kj}(u, t)   (i, j = 0, 1, 2, . . .)   (25)

The transition probability matrix P(s, t) can be constructed from the individual transition probabilities:

P(s, t) = ⎛ p_{1,1}(s, t)  p_{1,2}(s, t)  ⋯ ⎞
          ⎜ p_{2,1}(s, t)  p_{2,2}(s, t)  ⋯ ⎟
          ⎝       ⋮              ⋮        ⋱ ⎠   (26)

The absolute state probabilities P_i(t) := P(X(t) = i) of a CDS Markov process, i.e. the probabilities for the system to be in the state i, satisfy a relatively simple recursive equation with the transition probabilities:

P_i(t) = ∑_j p_{ji}(s, t) P_j(s)   (27)

The P_i(t) functions are very often the most preferred characteristics of CDS Markov processes in physical and chemical applications.
Continuous time discrete state stochastic models

Master equation
dP_i(t)/dt = ∑_j (q_{ji} P_j(t) − q_{ij} P_i(t)),   (28)

The q_{ij} values, the infinitesimal transition probabilities (transition rates), can be obtained from the transition probabilities:

q_{ii} = 0,   q_{ij} = lim_{s→t} p_{ij}(s, t)/(t − s)   (i ≠ j)   (29)

Example: a birth-and-death process

Ø −(k_1)→ A_1
A_1 −(k_2)→ 2A_1   (30)
A_1 −(k_3)→ Ø

p_{i,i+1}(t, t + h) = λ_i h + o(h)
p_{i,i−1}(t, t + h) = µ_i h + o(h)
p_{i,i}(t, t + h) = 1 − (λ_i + µ_i) h + o(h)
p_{i,j}(t, t + h) = o(h)   if j ≠ i and j ≠ i ± 1,  h → 0.   (31)

dP_0(t)/dt = −λ_0 P_0(t) + µ_1 P_1(t)
dP_n(t)/dt = −(λ_n + µ_n) P_n(t) + λ_{n−1} P_{n−1}(t) + µ_{n+1} P_{n+1}(t)   (n ≥ 1)   (32)
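A minimal sketch of integrating the truncated master equation (32) numerically for the scheme (30), where λ_n = k_1 + k_2 n and µ_n = k_3 n; the rate constants, the truncation level and the initial state are hypothetical illustration values (the probability leaking past the truncation is negligible here).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Truncated birth-and-death master equation (32) for the scheme (30).
# Rate constants and truncation N are hypothetical.
k1, k2, k3, N = 2.0, 0.4, 1.0, 200

n = np.arange(N + 1)
lam, mu = k1 + k2 * n, k3 * n

def rhs(t, P):
    dP = -(lam + mu) * P
    dP[1:] += lam[:-1] * P[:-1]   # gain from the state below (birth)
    dP[:-1] += mu[1:] * P[1:]     # gain from the state above (death)
    return dP

P0 = np.zeros(N + 1)
P0[0] = 1.0                        # start with no A1 molecules
sol = solve_ivp(rhs, (0.0, 20.0), P0, method='LSODA')
P_final = sol.y[:, -1]
print("mean copy number at t = 20:", (n * P_final).sum())
```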
Continuous time discrete state stochastic models

Chemical Master equation: general case


Transition rate of reaction step j starting from state (a_1, a_2, . . . , a_n): combinatorial kinetics, e.g. "k n(n − 1)" for a second-order step.

The full master equation of the process is obtained by summing all transition rates relevant to a given state:

dP_{f(a_1,a_2,...,a_n)}/dt = − ∑_{j=1}^{m} v_j(a_1, a_2, . . . , a_n) P_{f(a_1,a_2,...,a_n)}
+ ∑_{j=1}^{m} v_j(a_1 − ν_{j,1}, a_2 − ν_{j,2}, . . . , a_n − ν_{j,n}) P_{f(a_1−ν_{j,1}, a_2−ν_{j,2}, . . . , a_n−ν_{j,n})}   (33)

In compact form,

dP(t)/dt = Ω P(t)   (34)
Continuous time discrete state stochastic models

Solutions of the master equation

• Direct matrix operations


• Laplace transformation
• Generating functions
• Other techniques (Q-functions, Poisson representation)
• Stationary distributions
• Simulation method
Continuous time discrete state stochastic models

Solutions of the master equation


Generating function:

G(z_1, z_2, . . . , z_n, t) = ∑_{all states} z_1^{a_1} z_2^{a_2} · · · z_n^{a_n} P_{f(a_1,a_2,...,a_n)}(t),   z_i ∈ C,  i = 1, 2, . . . , n   (35)

WHY? All the individual variables of interest in the stochastic description can be obtained from it in a relatively straightforward manner.

The expectation of the number of A_i molecules can be generated using first partial derivatives:

⟨a_i⟩(t) = ∂G(1, 1, . . . , 1, t)/∂z_i   (36)

Second order moments and correlations can be given using second and mixed partial derivatives as follows:

⟨a_i²⟩(t) = ∂²G(1, 1, . . . , 1, t)/∂z_i² + ∂G(1, 1, . . . , 1, t)/∂z_i   (37)

⟨a_i a_j⟩(t) = ∂²G(1, 1, . . . , 1, t)/∂z_i ∂z_j   (i ≠ j)   (38)

Transient solutions for compartmental systems are fully known.

The master equation (a differential–difference equation) is transformed into a partial differential equation for the generating function, which can be solved if it is linear.
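A small worked example, assuming the textbook generating function of first-order decay A → 0 (each of the n_0 initial molecules independently survives to time t with probability e^{−kt}); the moments then follow from (36)-(37). The initial copy number is a hypothetical value.

```python
import sympy as sp

z, t, k = sp.symbols('z t k', positive=True)
n0 = 20   # hypothetical initial number of A molecules

# generating function of first-order decay A -> 0: binomial survival
G = (1 + (z - 1) * sp.exp(-k * t))**n0

mean = sp.diff(G, z).subs(z, 1)                       # eq. (36)
second_moment = sp.diff(G, z, 2).subs(z, 1) + mean    # eq. (37)
variance = sp.simplify(second_moment - mean**2)

print(sp.simplify(mean))   # n0*exp(-k*t)
print(variance)            # n0*exp(-k*t)*(1 - exp(-k*t)), i.e. binomial variance
```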
Continuous time discrete state stochastic models

Stationary distributions

• A stationary distribution is a vector, usually denoted by π, satisfying the following equation, which is derived from equation (34) by setting the left-hand side to 0:

0 = Ω π   (39)

• In typical cases, the stationary distribution π can also be thought of as the limit of the vector of probability functions as time approaches infinity:

π_{f(a_1,a_2,...,a_n)} = lim_{t→∞} P_{f(a_1,a_2,...,a_n)}(t)   (40)

• unimodality versus multimodality

• stochastic theory of bistable reactions

Stationary distributions

Schlögl reaction of the first-order phase transition:

A + 2X ⇌ 3X   (rate constants k_1, k_2)   (41)

B ⇌ X   (rate constants k_3, k_4)   (42)

The deterministic model is

dx(t)/dt = k_1 a x² − k_2 x³ − k_4 x + k_3 b;   x(0) = x_0.   (43)

Three stationary states: bistability.

The master equation is given as

dP_n(t)/dt = λ_{n−1} P_{n−1} + µ_{n+1} P_{n+1} − (λ_n + µ_n) P_n,   (44)

for n = 1 . . . ∞, and

dP_0/dt = µ_1 P_1 − λ_0 P_0.   (45)

Here λ_n = k̂_3 n_B + k̂_1 n_A n(n − 1) and µ_n = k̂_4 n + k̂_2 n(n − 1)(n − 2); λ_n and µ_n are the birth and death rates, respectively. When the volume V is explicitly taken into account (k̂_i = k_i/V^{m−1}, m being the order of the step), the rates read

λ_n = a k_1 n(n − 1)/(V N_A) + b k_3 V N_A,   (46)

and

µ_n = n k_4 + k_2 n(n − 1)(n − 2)/(N_A V)².   (47)

The stationary distribution, calculated by using the detailed balance assumption λ_{n−1} P^{ss}_{n−1} = µ_n P^{ss}_n, is

P^{ss}_n = P^{ss}_0 ∏_{i=0}^{n−1} λ_i/µ_{i+1},   P^{ss}_0 = 1 − ∑_{j=1}^{∞} P^{ss}_j.   (48)

Figure 8: The volume-dependence of the modality.
Continuous time discrete state stochastic models: Simulation methods

The Doob–Hanusse–Hárs–Tóth–Érdi–Gillespie algorithm

• when will the next reaction occur?
• what kind of reaction will it be?
• Doob theorem
• P(τ, µ): reaction probability density function
• variations: accelerated, hybrid etc. methods
• algorithms
• simulation software: Cain
• visualization methods

P(τ, µ) dτ is the probability at time t that the next reaction in V will occur in the differential time interval (t + τ, t + τ + dτ), and that it will be an R_µ reaction.

Figure 9: Realization.

Figure 10: Histogram.
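A minimal sketch of the direct-method version of the algorithm: the two questions above are answered by sampling the waiting time τ and the reaction index µ from P(τ, µ). The illustrative reaction set and rate constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_direct(x0, stoich, propensities, t_max):
    """Direct-method SSA: sample tau and mu from P(tau, mu), then update the state."""
    t, x = 0.0, np.array(x0, dtype=int)
    times, states = [t], [x.copy()]
    while t < t_max:
        a = np.array([f(x) for f in propensities])
        a0 = a.sum()
        if a0 == 0:                           # no reaction can fire any more
            break
        t += rng.exponential(1.0 / a0)        # when will the next reaction occur?
        mu = rng.choice(len(a), p=a / a0)     # what kind of reaction will it be?
        x = x + stoich[mu]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# illustration on the birth-and-death scheme (30); rate constants are hypothetical
k1, k2, k3 = 5.0, 0.1, 0.2
stoich = np.array([[+1], [+1], [-1]])          # 0 -> A1, A1 -> 2A1, A1 -> 0
props = [lambda x: k1, lambda x: k2 * x[0], lambda x: k3 * x[0]]
t, xs = gillespie_direct([0], stoich, props, t_max=100.0)
print("final copy number:", xs[-1])
```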


CCD versus CDS models

A more satisfactory way of dealing with this problem could be called stochastic mapping (G. Lente), which attempts to identify the part of the parameter space of a given kinetic scheme in which only the stochastic approach is viable. A convenient definition of this stochastic region is the set of parameter values for which the stochastic approach shows that the relative standard error of the target variable is larger than a pre-set critical value (often 1%, because of the usual precision of analytical methods used for concentration determination). Although there is no general proof known yet, a small standard deviation of the expectation of a variable calculated based on the stochastic approach seems to ensure that the stochastic expectation is very close to the deterministic solution.

Systems Biological Applications

• Fluctuations near instabilities

• Enzyme kinetics

• Signal processing in biochemical networks

• Calcium signaling

• The beneficial role of noise: fluctuation-dissipation theorem and stochastic resonance

• Stochastic models of gene expression

• (Chiral symmetry)
Fluctuations near instabilities

CCD vs CDS models

A + X −(λ′)→ 2X   (49)
X −(µ)→ 0.   (50)

The CCD model is

dx(t)/dt = (λ − µ) x(t);   x(0) = x_0,   (51)

(here λ = λ′[A]). The solution is

x(t) = x_0 exp[(λ − µ)t].   (52)

If λ > µ, x is an exponentially increasing function of time. For λ = µ:

x(t) = x_0.   (53)

The CDS model (master equation):

dP_k(t)/dt = −k(λ + µ) P_k(t) + λ(k − 1) P_{k−1}(t) + µ(k + 1) P_{k+1}(t)   (54)

P_k(0) = δ_{k,x_0};   k = 1, 2, . . . , N.   (55)

E[ξ(t)] = x_0 exp[(λ − µ)t],   (56)

and, for the case of λ = µ,

D²[ξ(t)] = x_0 (λ + µ) t = 2 x_0 λ t.   (57)

Figure 11: Amplification of fluctuations might imply instability. While the expectation is constant, the variance increases in time (the plot shows E[ξ(t)] and E[ξ(t)] ± D[ξ(t)] as functions of t).
Systems Biological Applications

Enzyme kinetics

E + S ⇌ ES → E + P   (rate constants k_1, k_{−1} for the reversible binding, k_2 for product formation)   (58)

dP_{e,s}(t)/dt = −[κ_1 e s + (κ_{−1} + κ_2)(e_0 − e)] P_{e,s}(t)
+ κ_1 (e + 1)(s + 1) P_{e+1,s+1}(t)
+ κ_{−1} (e_0 − e + 1) P_{e−1,s−1}(t)
+ κ_2 (e_0 − e + 1) P_{e−1,s}(t)   (59)

Figure 12: Stochastic map of the Michaelis–Menten mechanism with the number of product molecules formed as the target variable.
Systems Biological Applications

Enzyme kinetics

Figure 13: An overlooked very important paper.

Figure 14: Comparison of the CDS and CCD models.
Systems Biological Applications

Enzyme kinetics
Oxidation of molecular hydrogen by the HynSL hydrogenase from Thiocapsa roseopersicina: a three-step catalytic cycle.

E_2 + E_3 −(κ_b)→ 2E_3
E_3 −(κ_c)→ E_4   (60)
E_4 + H_2 + 2M_o −(κ_d)→ E_2 + 2M_r

E_2, E_3 and E_4 are different enzyme forms in the catalytic cycle, H_2 is hydrogen, whereas M_o and M_r are the oxidized and reduced forms of the electron acceptor compound benzyl viologen. The state is given by the numbers of E_2, E_3 and M_r molecules, denoted e_2, e_3 and p; m and h stand for the numbers of M_r and H_2 species, and the constant n is the total number of all enzyme forms (n = e_2 + e_3 + e_4). The master equation can then be stated as

dP_{e_2,e_3,p}(t)/dt = −[κ_b e_2 e_3 + κ_c e_3 + κ_d (n − e_2 − e_3) m(m − 1) h] P_{e_2,e_3,p}(t)
+ κ_b (e_2 + 1)(e_3 − 1) P_{e_2+1,e_3−1,p}(t) + κ_c (e_3 + 1) P_{e_2,e_3+1,p}(t)
+ κ_d (n − e_2 − e_3 + 1)(m + 2)(m − 1)(h + 1) P_{e_2−1,e_3,p−2}(t)   (61)

It is possible to find rate constants leading to extinction.

Extinction in an autocatalytic system occurs when the molecule number of the autocatalytic species falls to zero in a system that involves a pathway for the decay of the autocatalyst. This phenomenon is unknown in deterministic kinetics, as an initially nonzero concentration can at no time be exactly zero there.
Systems Biological Applications

Signal processing in biochemical networks

Evaluation of signal transfer by mutual information

• Biochemical networks map time-dependent inputs to time-dependent outputs.
• The efficiency of information transmission is measured by the mutual information between the input signal I and the output signal O:

M(I, O) = H(O) − H(O|I)   (62)

Here H(O) ≡ −∫ p(O) log p(O) dO is the information-theoretical entropy of the output O having probability distribution p(O), and H(O|I) ≡ −∫ p(I) dI ∫ p(O|I) log p(O|I) dO is the average (over inputs I) information-theoretical entropy of O given I, with p(O|I) the conditional probability distribution of O given I.

For input species S and output species X (Tostevin and ten Wolde (2009)):

M(S, X) = ∫ DS(t) ∫ DX(t) p(S(t), X(t)) log [p(S(t), X(t)) / (p(S(t)) p(X(t)))]   (63)
Systems Biological Applications

Signal processing in biochemical networks

Analytical results: small Gaussian fluctuations

Mutual information rate: R(s, x) = lim_{T→∞} M(s, x)/T, calculated from the power spectra of the fluctuations:

R(s, x) = −(1/4π) ∫_{−∞}^{∞} dω ln[1 − |S_{sx}(ω)|² / (S_{ss}(ω) S_{xx}(ω))]   (64)

R(s, x) captures the temporal correlations between the input and output signals. Equation (64) is exact for linear systems with Gaussian noise. Detection of input signals may generate correlations between the signal and the intrinsic noise of the reactions. If there is NO correlation, the spectral addition rule (65) holds:

S_{xx}(ω) = N(ω) + g²(ω) S_{ss}(ω)   (65)

Here N(ω) is the spectrum of the internal fluctuations, S_{ss}(ω) is the power spectrum of the input signal, and g²(ω) = |S_{sx}(ω)|² / S_{ss}(ω)² is the frequency-dependent gain.

Writing the spectrum of the transmitted signal as P(ω) = g²(ω) S_{ss}(ω), equation (64) becomes

R(s, x) = (1/4π) ∫_{−∞}^{∞} dω ln[1 + P(ω)/N(ω)]   (66)
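A minimal numerical sketch of equation (66) for the spectral-addition case; the Lorentzian transmitted-signal spectrum and the white intrinsic-noise spectrum below are hypothetical placeholders, not taken from the slides.

```python
import numpy as np
from scipy.integrate import quad

def info_rate(P, N):
    """Numerical evaluation of R(s, x) from eq. (66)."""
    integrand = lambda w: np.log(1.0 + P(w) / N(w))
    val, _ = quad(integrand, -np.inf, np.inf)
    return val / (4.0 * np.pi)

P = lambda w: 4.0 / (1.0 + w**2)   # hypothetical transmitted-signal spectrum g^2(w) S_ss(w)
N = lambda w: 1.0                  # hypothetical white intrinsic noise spectrum

# analytic check: integral of ln((w^2+5)/(w^2+1)) over R is 2*pi*(sqrt(5)-1),
# so R = (sqrt(5)-1)/2 ~ 0.618 nats per unit time
print("R(s, x) ≈", info_rate(P, N))
```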
Systems Biological Applications

Signal processing in biochemical networks


The effect of external noise (on E_+): addition of a correction (diffusion) term to the deterministic result (σ is the forward enzyme noise strength):

R_N(σ)(X_ss, E_+; E_−) = E_+ − [k_− E_− (X_0 − X_ss)(K_+ + X_ss)] / [k_+ X_ss (K_− + X_0 − X_ss)] + σ² k_+ K_+ / (K_+ + X_ss)² = 0   (67)

Figure 15: The enzymatic futile cycle reaction mechanism. Adapted from Samoilov 2005.

Figure 16: Transition between uni- and multistationarity. The bifurcation parameter p is the exponent of E_+ in a relationship connecting the variance of E_+ and E_+. (Samoilov 2005)
Signal processing in biochemical networks
Stochastic bifurcation in a signaling pathway

Figure 17: Stochastic bifurcation plot of the fractional steady-state values of activated kinase as a function of the volume V. The solid curves represent the maxima of the steady-state distribution, and the dashed curve represents the minimum of the distribution. V = 1.67 is the critical value, where the bistability disappears. (Bishop-Qian 2010)
Calcium signaling
Hierarchy of spatiotemporal events

• single channel opening (blip)
• the opening of several closely packed channels (puff)
• cooperation of puffs may set off a wave traveling through the cell
• waves may occur periodically, so they can be seen as global oscillations

Figure 18: A lumped kinetic scheme of the channel kinetics. X_00: state with no Ca²⁺ bound; X_10: activated state; X_11 and X_01: inhibited states. An index is 1 if an ion is bound and 0 if not. Transition rates are shown at the edges of the rectangle. (Falcke 2003)

Figure 19: Distributions of clusters and of receptors involved, obtained with simulations of intercluster dynamics with calcium accumulation. A power law distribution can be fitted rather well. Based on Lopez 2012.
Systems Biological Applications

The beneficial role of noise: fluctuation-dissipation theorem and stochastic resonance

• Estimation of rate constants from equilibrium fluctuations
• Chemical fluctuation measurements: electric conductance; fluorescence correlation spectroscopy
• Membrane noise analysis

For the association-dissociation reaction of beryllium sulfate, described by the reaction X ⇌ Y (rate constants k_1, k_{−1}), the spectrum of the electric fluctuation (which reflects the concentration fluctuation) was found to be

S_ν = const / [1 + (2πν/(k_1 + k_{−1}))²].   (68)

Since the ratio k_1/k_{−1} is given by the deterministic equilibrium, the individual rate constants can be calculated.

Fluorescence correlation spectroscopy is able to measure the fluctuation of the concentration of fluorescent particles (molecules). Temporal changes in the fluorescence emission intensity caused by single fluorophores are recorded. The autocorrelation function C(τ) := E[ξ(t)ξ(t − τ)] of the signal ξ(t) is calculated, and from its time-dependent decay the rate parameters can be calculated. Higher order correlations C_mn(τ) := E[ξ(t)^m ξ(t − τ)^n] were used to study the details of molecular aggregation. To extract more information from the available data beyond the average and the variance, at least two efficient methods were suggested: fluorescence-intensity distribution analysis is able to calculate the expected distribution of the number of photon counts, and the photon counting histogram gives an account of the spatial brightness function. Forty years on, fluorescence fluctuation spectroscopy is still a developing method.
Parameter estimation for stochastic kinetic models: beyond the fluctuation-dissipation theorem

• time-dependent data
• maximum likelihood estimator for the rate constants
• kinetic parameters of biochemical reactions, such as gene regulatory, signal transduction and metabolic networks, generally cannot be measured directly

L(k) = ∏_{j=1}^{m} ∏_{i=1}^{n} f(o_i^j, t_i; k),   (69)

where the jth experimental replicates o_1^j, o_2^j, . . . , o_n^j are taken at time points t_1, t_2, . . . , t_n for j = 1, 2, . . . , m (i.e. the experiments are done in m replicates). f(o_i^j, t_i; k) is the likelihood function determined by the density function (histogram) constructed from the realizations of the stochastic process specified by the master equation.

The maximization of the likelihood function (actually, for numerical reasons, the minimization of the negative log-likelihood function) gives the best estimated parameters, i.e. those giving the greatest possible probability to the given (training) data set:

k* = argmin_k [−log L(k)] = argmin_k ∑_{j=1}^{m} ∑_{i=1}^{n} [−log P(o_i^j, t_i)],   (70)

where P(o_i^j, t_i) is the conditional probability density function reconstructed from the simulated realizations.
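A hedged sketch of the histogram-based maximum-likelihood idea in (69)-(70), illustrated on first-order decay A → 0 observed at a single time point. All numbers (true rate, initial copy number, observation time, replicate count) are hypothetical; the simulation-based objective is noisy, so a large number of simulated realizations keeps the estimate stable.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

n0, t_obs, k_true = 100, 1.0, 0.7
# "experimental" replicates: exact CDS realizations of first-order decay are binomial
data = rng.binomial(n0, np.exp(-k_true * t_obs), size=50)

def neg_log_likelihood(k, n_sim=20000):
    # reconstruct P(o, t_obs) from simulated realizations of the CDS model
    sims = rng.binomial(n0, np.exp(-k * t_obs), size=n_sim)
    hist = np.bincount(sims, minlength=n0 + 1) / n_sim
    p = np.clip(hist[data], 1e-12, None)   # avoid log(0) for unvisited states
    return -np.log(p).sum()

res = minimize_scalar(neg_log_likelihood, bounds=(0.05, 5.0), method='bounded')
print("estimated k ≈", res.x, "(true value 0.7)")
```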
Stochastic resonance

Photosensitive Belousov-Zhabotinsky reaction (J. Phys. Chem. A 102 (1998) 4541)

Figure 20: A typical curve of stochastic resonance shows a single maximum of the performance of the output as a function of the intensity of the input noise.

Figure 21: Signal-to-noise ratio as a function of the width of the double well. Since the width can be identified as the noise parameter, the peak is the feature of stochastic resonance. From Leonard 1994.

Figure 22: Threshold paradigm of SR. (a) One-parameter system: a threshold is constant in time and the system generates a response if the sum of the signal and additive noise exceeds the threshold. (b) Two-parameter system: a periodic modulation in one parameter changes the threshold of the other parameter, and the system generates a response when noise in the latter parameter exceeds the modulated threshold. From Tomo Yamaguchi's lab, 1998.
Stochastic resonance of aperiodic signals

Not only periodic but also aperiodic signals may be amplified by noise, both in experimental (actually electrochemical) and in model studies. Information transfer is quantified by the cross-correlation function C_0,

C_0 = ⟨(x_1 − ⟨x_1⟩_t)(x_2 − ⟨x_2⟩_t)⟩_t,   (71)

where x_1 and x_2 represent the time series of the aperiodic input signal and of the noise-induced response of the electrochemical system, respectively, and ⟨·⟩_t denotes the respective time averages.

Figure 23 illustrates the existence of an optimal noise level for information transfer.

Figure 23: Cross-correlation as a function of the noise level. Parmananda 2005.
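A minimal sketch of the cross-correlation measure (71) for a subthreshold aperiodic input passed through a noisy threshold element; the synthetic signals, threshold and noise levels below are hypothetical placeholders. An intermediate noise level typically gives the largest C_0, which is the signature of aperiodic stochastic resonance.

```python
import numpy as np

def c0(x1, x2):
    """Cross-correlation measure of eq. (71) between input x1 and response x2."""
    return np.mean((x1 - x1.mean()) * (x2 - x2.mean()))

rng = np.random.default_rng(2)
walk = np.cumsum(rng.normal(size=5000))
x1 = 0.1 * walk / np.abs(walk).max()     # subthreshold aperiodic input, |x1| <= 0.1
threshold = 0.2

for sigma in (0.01, 0.05, 0.1, 0.2, 0.5, 2.0):
    x2 = (x1 + sigma * rng.normal(size=x1.size) > threshold).astype(float)
    print(f"noise {sigma:4.2f}   C0 = {c0(x1, x2):.4f}")
```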


Systems Biological Applications

Stochastic models of gene expression: second encounter


A three-stage model of gene expression (see Paulsson 2005):

inactive gene ⇌ active gene → mRNA → protein   (rate constants: λ_1^+, λ_1^− for the gene switch, λ_2 for transcription, λ_3 for translation)   (72)

supplemented with two degradation steps (the second also called proteolysis):

mRNA −(γ_m)→ 0,   protein −(γ_p)→ 0   (73)
The circular gene hypothesis: feedback effects and protein fluctuations
Raoul Wadhwa, László Zalányi, Judit Szente, László Négyessy, Péter Érdi

1. Canonical stochastic models of gene expression (from Paulsson (2005)).

2. Circular gene expression hypothesis (Motti Choder): the skeleton network model of coordinated mRNA degradation and synthesis.

3. Direct kinetic measurements are missing.

4. Stochastic simulation. Table 1: the reaction list for the feedback model, which takes into account the presence of the xrn1 promoter; the value of the transcript_rate parameter is 0.0231. The reaction setup was built and the simulations were run in the Cain: Stochastic Simulations for Chemical Kinetics environment.

5. Simulation results.
Figure 1: The stationary distribution of the number of protein particles at t = 50,000 for n = 100,000 simulated stochastic trajectories of the canonical model with a normal transcription rate (propensity = 0.0231). The resulting distribution is statistically well modeled by a Poisson distribution with expected value λ = 654.
Figure 2: The same for the canonical model with an elevated transcription rate (propensity = 0.1155); the resulting distribution is well modeled by a Poisson distribution with expected value λ = 3,270.
Figure 3: The stationary distribution of the number of protein particles at t = 50,000 for n = 100,000 simulated stochastic trajectories of the feedback model with a normal transcription rate (propensity = 0.0231). The resulting distribution is statistically well modeled by an exponential distribution.
Figure 4: The same for the feedback model with an elevated transcription rate (propensity = 0.1155); the resulting distribution is again well modeled by an exponential distribution.

6. Where are we now? A possible test: HOW does the feedback mechanism influence protein fluctuation?
• PROTEIN FLUCTUATION seems to be sensitive
• mRNA FLUCTUATION is smaller but shows similar tendencies
• MODEL DISCRIMINATION seems to be possible
• Some ANALYTICAL calculations look feasible
• The justification of the circular gene expression hypothesis needs kinetic studies

7. A similar project: Investigation of transmitter-receptor interactions by analyzing postsynaptic membrane noise using stochastic kinetics (Érdi, P., Ropolyi, L.)
Abstract: The stoichiometric and kinetic details of transmitter-receptor interaction (the number of conformations and the rate constants of conformation changes) in synaptic transmission have been investigated by analyzing postsynaptic membrane noise with the aid of the fluctuation-dissipation theorem of stochastic chemical kinetics. The main assumptions are the following: (i) the transmitter-receptor interactions are modelled by a closed compartment system (a special complex chemical reaction) of unknown length, (ii) the quantity of transmitter is maintained at a constant level, (iii) the conductance is a linear function of the conformation quantity vector. The main conclusion is that the conductance spectral density function is determined by three qualitatively different factors: (i) the length of the compartment system, (ii) the precise form of the conductance-conformation quantity vector, (iii) the matrix of the reaction rate constants.

8. References
Haimovich, G., Medina, D., Causse, S., & Garber, M. Gene expression is circular: factors for mRNA degradation also foster mRNA synthesis. Cell, 153, 1000-1011.
Paulsson, J. Models of stochastic gene expression. Physics of Life Reviews, 2, 157-175.
Mauch, S., & Stalzer, M. Efficient formulations for exact stochastic simulation of chemical systems. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 27-35.
Systems Biological Applications

Chiral symmetry: an important topic (Gábor Lente)

• Molecular chirality: the lack of certain symmetry elements in the three-dimensional structures of molecules.
• Homochirality or biological chirality: mirror-image counterparts have very different roles; typically, only one of them is abundant and it cannot be exchanged with the other one.
• Racemic mixtures (R and S)

P(r, s) = [(r + s)! / (r! s!)] (0.5 + ε)^r (0.5 − ε)^s   (74)

Here, P(r, s) is the probability that r molecules of the R enantiomer and s molecules of the S enantiomer occur in an ensemble of (r + s) molecules. Parameter ε is characteristic of the degree of inherent difference between the two enantiomers (ε ≤ 0.5), and is connected to the energy difference (∆E) between the R and S molecules as follows:

ε = (e^{∆E/RT} − 1) / [2(e^{∆E/RT} + 1)]   (75)

The expectation and standard deviation of the number of R enantiomers from this distribution are given by straightforward formulae:

⟨r⟩ = (0.5 + ε)(r + s)   (76)

σ_r = √[(r + s)(0.5 + ε)(0.5 − ε)]   (77)
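A small numerical check of equations (74)-(77); the energy difference, temperature and ensemble size below are hypothetical illustration values.

```python
import numpy as np
from math import comb, exp

R_gas, T = 8.314, 298.0
DeltaE = 10.0   # J/mol, a tiny hypothetical R/S energy difference
eps = (exp(DeltaE / (R_gas * T)) - 1) / (2 * (exp(DeltaE / (R_gas * T)) + 1))   # eq. (75)

N = 1000                                           # ensemble size r + s
mean_r = (0.5 + eps) * N                           # eq. (76)
sigma_r = np.sqrt(N * (0.5 + eps) * (0.5 - eps))   # eq. (77)

r = 520
P = comb(N, r) * (0.5 + eps)**r * (0.5 - eps)**(N - r)   # eq. (74)
print(f"eps = {eps:.2e}, <r> = {mean_r:.2f}, sigma_r = {sigma_r:.2f}, "
      f"P(r=520, s=480) = {P:.3e}")
```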
Systems Biological Applications

Chiral symmetry

Figure 24: The classical Frank model (1953).

Master equation of the stochastic Frank model (a, r, s: the numbers of A, R and S molecules; u, c, d, f: rate constants; n: the total number of molecules):

dP(a, r, s, t)/dt = −[2au + arc + asc + rsd + (n − a)f] P(a, r, s, t)
+ [(a + 1)u + (a + 1)(r − 1)c] P(a + 1, r − 1, s, t)
+ [(a + 1)u + (a + 1)(s − 1)c] P(a + 1, r, s − 1, t)
+ (r + 1)(s + 1)d P(a, r + 1, s + 1, t)
+ (r + 1)f P(a − 1, r + 1, s, t)
+ (s + 1)f P(a − 1, r, s + 1, t)
+ (n − a − r − s + 1)f P(a − 1, r, s, t)

Figure 26: Final probability distributions in the Frank model in a closed system.