$$\int E\Big[\sum_{n=1}^{\tau(B)} h_i(X_n)\,\Big|\,X_0 = x\Big]\,\mu(dx) < \infty,$$
for $i = 1, 2, \ldots, l$, $B \in \mathcal{A}$, where $\tau(B) = \min\{n : X_n \in B\}$.
Without loss of generality, $s(x)$ can be considered (see Nummelin, 1984) as the indicator of a set $S$, i.e.,
$$s(x) = \begin{cases} 1, & x \in S, \\ 0, & \text{otherwise.} \end{cases}$$
For the case when $\{X_n\}$ is a homogeneous Markov chain with a finite number of states, $\mathcal{X} = \{0, 1, \ldots, m\}$, denote the transition matrix by $P$ and the stationary probability vector by $\pi = (\pi_0, \ldots, \pi_m)$.
Let us suppose that
(d) there exists $n$ such that $(P^n)_{ij} > 0$, where $(P^n)_{ij}$ is the element with subscripts $i, j$ of the matrix $P^n$, $i, j = 0, 1, \ldots, m$;
(e) $h_i(x) < \infty$ for $x \in \mathcal{X}$, $i = 1, 2, \ldots, l$, and $h_i(x) \geq 0$.
Condition (d) means that the chain is aperiodic and positive recurrent.
2.2 Simulation Problem
It is required to estimate one or several functionals of the form
$$J_i = \int_{\mathcal{X}} h_i(x)\,\pi(dx), \quad i = 1, 2, \ldots, l,$$
from simulation results.
In the case of chains with finite state space these functionals have the form
$$J_i = \sum_{x=0}^{m} h_i(x)\,\pi_x, \quad i = 1, 2, \ldots, l.$$
2.3 Branching Technique
Let $\{X_n\}$ be a Markov chain of the general form described in 1.1. Let us introduce independent random variables $Z_n$, $n = 0, 1, \ldots$, where $P\{Z_n = 1\} = s(X_n)$ and $P\{Z_n = 0\} = 1 - s(X_n)$.
Let $\nu_n$ chains $\{X_n, Z_n\}$, denoted $\{X_n^{(\lambda)}, Z_n^{(\lambda)};\ \lambda = 1, 2, \ldots, \nu_n\}$, be simulated at the $n$th step.

270 Melas

A procedure for regulating $\nu_n$ can be introduced in the following way.
Definition. A measurable function $\varphi(x)$ on $(\mathcal{X}, \mathcal{A})$ with the following properties:
1) $\varphi(x) = \text{const}$, $x \in S$,
2) $\varphi(x) \geq 0$, $x \in \mathcal{X}$,
3) $\int_B \varphi(x)\,\pi(dx) > 0$ if $\pi(B) > 0$,
is called a branching density.
For finite Markov chains satisfying condition (d), conditions 2) and 3) are replaced by $\varphi(x) > 0$, $x \in \mathcal{X}$.
Introduce the random variable
$$r(x, y) = \lfloor \varphi(y)/\varphi(x) \rfloor \quad \text{with probability } 1 - \{\varphi(y)/\varphi(x)\}$$
and
$$r(x, y) = \lfloor \varphi(y)/\varphi(x) \rfloor + 1 \quad \text{with probability } \{\varphi(y)/\varphi(x)\},$$
where $\lfloor a \rfloor$, with $a = \varphi(y)/\varphi(x)$, denotes the integer part of $a$ and $\{a\} = a - \lfloor a \rfloor$ denotes the fractional part of $a$.
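This randomized rounding can be sketched in a few lines of Python (the function name `branch_count` is ours, not the paper's); it returns $\lfloor a \rfloor$ or $\lfloor a \rfloor + 1$ so that the expectation equals $a = \varphi(y)/\varphi(x)$ exactly:

```python
import math
import random

def branch_count(phi_x: float, phi_y: float) -> int:
    """Randomized rounding of a = phi(y)/phi(x): returns floor(a) with
    probability 1 - {a} and floor(a) + 1 with probability {a}, so that
    the expected value of the result is exactly a."""
    a = phi_y / phi_x
    base = math.floor(a)
    frac = a - base
    return base + (1 if random.random() < frac else 0)
```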
At step zero, set $\nu_0 = 1$. At each following step ($n + 1 = 1, 2, \ldots$), if $Z_{n+1} = 1$, then the simulation of the path stops. If $Z_{n+1} = 0$ and $X_n = x$, $X_{n+1} = y$, then when $\varphi(y)/\varphi(x) < 1$ the simulation of the path stops with probability $1 - \varphi(y)/\varphi(x)$. When $\varphi(y)/\varphi(x) \geq 1$ we simulate $r(x, y) - 1$ additional paths, beginning from the point $x$, where the $k$th steps of these chains are denoted by the index $n + k$.
It can be shown (see Theorem 1 below) that the number $N = \min\{n : \nu_k > 0,\ k < n,\ \nu_n = 0\}$ is finite with probability 1, and moreover $T = \sum_{n=0}^{N} \nu_n < \infty$ with probability 1. Thus the simulation terminates with probability 1. This process will be called a loop.
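For a finite chain, one loop of this splitting-and-roulette scheme can be sketched as follows. This is a minimal Python illustration, not the paper's code: $S = \{\text{start}\}$ is the regeneration set, `phi` is the branching density as a vector over states, and (as a common implementation variant) the split copies are spawned at the newly reached state $y$:

```python
import math
import random

def simulate_loop(P, phi, start):
    """One loop of the splitting-and-roulette (branching) scheme for a
    finite Markov chain.  P is a row-stochastic matrix (list of rows),
    phi a positive branching density over states, S = {start} the
    regeneration set.  Returns the list of (state, weight) pairs
    visited, with weight = 1/phi(state)."""
    visited = [(start, 1.0 / phi[start])]
    active = [start]
    while active:
        next_active = []
        for x in active:
            # one transition of the underlying chain
            y = random.choices(range(len(P)), weights=P[x])[0]
            # Z = 1 with probability s(X): the path stops at regeneration
            if y == start:
                continue
            a = phi[y] / phi[x]
            if a < 1.0:
                # roulette: the path survives with probability a
                if random.random() > a:
                    continue
                copies = 1
            else:
                # splitting: randomized rounding keeps E[copies] = a;
                # 'copies' counts the continuing path plus its r-1 clones
                copies = math.floor(a) + (1 if random.random() < a - math.floor(a) else 0)
            for _ in range(copies):
                visited.append((y, 1.0 / phi[y]))
                next_active.append(y)
        active = next_active
    return visited
```

With $\varphi \equiv 1$ every ratio $a$ equals 1, no path is ever split or killed by roulette, and the loop reduces to a single regeneration cycle of the chain.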
Let us simulate $k$ loops: $\{X_n^{(\lambda)}(i);\ \lambda = 1, 2, \ldots, \nu_n(i)\}$, $i = 1, 2, \ldots, k$. Let us determine estimators of the functionals $J_j$ ($j = 1, 2, \ldots, l$) by the formula
$$\hat{J}_{k,\varphi}(j) = \sum_{i=1}^{k} Y_\varphi(i, j) \Big/ \sum_{i=1}^{k} \tilde{Y}_\varphi(i, j),$$
where
$$Y_\varphi(i, j) = \sum_{n=0}^{N(i)} \sum_{\lambda=1}^{\nu_n(i)} h_j\big(X_n^{(\lambda)}(i)\big) \big/ \varphi\big(X_n^{(\lambda)}(i)\big)$$
and $\tilde{Y}_\varphi(i, j)$ equals $Y_\varphi(i, j)$ computed with $h_j(x) \equiv 1$.
Note that when $\varphi(x) \equiv 1$ the process reduces to direct simulation of the chain $\{X_n\}$, and the estimators $\hat{J}_{i,\varphi}$ become the known ratio estimators of the regeneration method (see Crane, Iglehart, 1974).
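Given loops recorded as lists of $(x, 1/\varphi(x))$ pairs (for instance by a loop simulator), the ratio estimator can be sketched as follows; the helper name `ratio_estimate` is ours:

```python
def ratio_estimate(loops, h):
    """Ratio estimator J_hat = sum_i Y(i) / sum_i Y0(i), where Y(i)
    sums h(x)/phi(x) over the states visited in loop i and Y0(i) is
    the same sum with h == 1.  Each loop is a list of (state, weight)
    pairs with weight = 1/phi(state)."""
    num = sum(h(x) * w for loop in loops for x, w in loop)
    den = sum(w for loop in loops for _, w in loop)
    return num / den
```

With $\varphi \equiv 1$ all weights are 1 and the estimator is just the fraction of visits satisfying $h$, i.e., the classical regenerative ratio estimator.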
In the next section we show that these estimators are asymptotically unbiased and introduce a representation for their variance-covariance matrix.
3 EXPRESSION FOR COVARIANCES
Set
$$D(\varphi) = \Big[\lim_{k \to \infty} k\,\mathrm{Cov}\big(\hat{J}_{k,\varphi}(i), \hat{J}_{k,\varphi}(j)\big)\Big]_{i,j=1}^{l},$$
$$T(\varphi) = \lim_{k \to \infty} \sum_{i=1}^{k} T_{i,\varphi}\big/k.$$
By virtue of the Law of Large Numbers, $T(\varphi)$ is the expectation of the number of steps of all chains in one loop.
Set
$$\hat{h}_j(x) = \hat{J}_{k,\varphi}(j), \quad j = 1, 2, \ldots, l,$$
under the condition that all paths begin at $x$ ($X_0^{(1)}(i) = x$, $i = 1, \ldots, k$).
Theorem 1. When conditions (a)-(c) are valid for general Markov chains, the estimators $\hat{J}_{k,\varphi}(i)$ are asymptotically unbiased and
$$D(\varphi) = \Big[\int \frac{1}{\varphi(x)}\,\big(d_{ij}(x) + r_\varphi\big)\,\pi(dx)\Big]_{i,j=1}^{l}, \qquad T(\varphi) = \int \varphi(x)\,\pi(dx),$$
where $d_{ij}(x) = E\hat{h}_i(x)\hat{h}_j(x) - E\hat{h}_i(x)\,E\hat{h}_j(x)$, $E$ is the expectation operator, and $r_\varphi$ is a remainder term.
This theorem is a straightforward modification of Theorem 3 in Melas (1993), where the form of the remainder term can be found, along with arguments for neglecting it.
Note that the value of $d_{ij}(x)$ can be estimated from the current simulation data in the standard way on the basis of the formula from Theorem 1.
A similar result is valid for nite Markov chains.
Theorem 1 permits us to introduce optimality criteria for the choice of the branching measure. To simplify the notation, let us assume that $\pi(dx) = \pi(x)\,dx$. Set
$$\varepsilon(x) = \varphi(x)\,\pi(x) \Big/ \int \varphi(x)\,\pi(x)\,dx, \qquad (1)$$
$$\varepsilon_x = \varphi_x \pi_x \Big/ \sum_x \varphi_x \pi_x \quad \text{(finite chains)}. \qquad (2)$$
On the Efficiency of the Splitting and Roulette Approach for Sensitivity Analysis 271
Then $T(\varepsilon) = 1$ for any $\varepsilon$. Dropping the remainder term, we obtain the matrix
$$D(\varepsilon) = \Big[\int \frac{1}{\varepsilon(x)}\,d_{ij}(x)\,\pi^2(x)\,dx\Big]_{i,j=1}^{l},$$
$$D(\varepsilon) = \Big[\sum_{x=0}^{m} \frac{1}{\varepsilon_x}\,\pi_x^2\,d_{ij}(x)\Big]_{i,j=1}^{l} \quad \text{(finite chains).}$$
As an estimation accuracy criterion let us take the quantities
$$\mathrm{tr}\,A D(\varepsilon), \qquad (3)$$
$$\det D(\varepsilon), \qquad (4)$$
where $A$ is an arbitrary nonnegative-definite matrix. Note that when $A = I$ (where $I$ is the identity matrix) and $l = 1$, $\mathrm{tr}\,A D(\varepsilon)$ is just the variance of the estimator $\hat{J}_1$. The statistical meaning of criterion (4) is well known from the theory of regression experimental design (see Kiefer, 1974).
The probability measure $\varepsilon(x)$ is called a design of the simulation experiment.
4 OPTIMAL EXPERIMENTAL DESIGNS
Consider the minimization problem for criteria (3) and (4) in the class of probability measures induced by branching densities. The optimal design for criterion (3) can be found with the help of the Schwarz inequality; its direct application yields the following result.
Theorem 2. Let the hypothesis of Theorem 1 be satisfied. Then the experimental design minimizing the value of $\mathrm{tr}\,A D(\varepsilon)$ is given by formula (1), where
$$\varphi(x) = \big(\mathrm{tr}\,A D(x)\big)^{1/2}$$
and $D(x)$ denotes the matrix $\big(d_{ij}(x)\big)_{i,j=1}^{l}$.
Let us formulate the result for criterion (4) for finite Markov chains and the case $h_i(x) = \delta_{ix}$, where $\delta_{ix}$ is the Kronecker delta, $i = 1, 2, \ldots, m$. The general case can be treated in the same way.
Theorem 3. For finite Markov chains satisfying conditions (d) and (e), and functions $h_i(x)$ of the form $h_i(x) = \delta_{ix}$, $i = 1, \ldots, m$, there exists an experimental design minimizing the value of (4). This design is unique and satisfies the relation
$$\varepsilon^*(x) = \big(\mathrm{tr}\,B_x B^{-1}(\varepsilon^*)\big)^{1/2}\,\pi_x \Big/ \sum_{x=0}^{m} \big(\mathrm{tr}\,B_x B^{-1}(\varepsilon^*)\big)^{1/2}\,\pi_x,$$
where
$$B_x = \big(p_{xy}\,\delta_{yz} - p_{xy}\,p_{xz}\big)_{y,z=1}^{m}, \quad x = 0, 1, \ldots, m,$$
$$B(\varepsilon) = \sum_{x=0}^{m} \big(\pi_x^2/\varepsilon_x\big)\,B_x.$$
The following iterative method can be used to find the optimal design. Set
$$\varepsilon_0(y) = \pi_y, \quad y = 0, \ldots, m,$$
$$\varepsilon_{k+1}(y, \alpha) = (1 - \alpha)\,\varepsilon_k(y) + \alpha\,\varepsilon^{(x^*)}(y), \quad k = 0, 1, 2, \ldots,$$
where
$$\varepsilon^{(x^*)}(y) = \delta_{x^* y}, \qquad x^* = \arg\max_x \frac{\pi_x^2}{\varepsilon_k^2(x)}\,\mathrm{tr}\,B_x B^{-1}(\varepsilon_k).$$
Set
$$\alpha_k = \arg\min_{\alpha \in [0,1]} \det B\big(\{\varepsilon_{k+1}(y, \alpha)\}\big), \qquad \varepsilon_{k+1}(y) = \varepsilon_{k+1}(y, \alpha_k), \quad y = 0, 1, \ldots, m.$$
Theorem 4. Under the hypothesis of Theorem 3, for $k \to \infty$
$$\det D(\varepsilon_k) \to \det D(\varepsilon^*),$$
where $\det D(\varepsilon^*) = \min_\varepsilon \det D(\varepsilon)$.
When applying the branching method in practice, statistical estimators of $\pi_x$ obtained from the simulation results are to be used in this algorithm instead of the quantities $\pi_x$ themselves. Proofs of Theorems 2-4 can be found in Ermakov, Melas (1995).
5 BRANCHING EFFICIENCY
Let us study the efficiency of the branching technique for the case of finite Markov chains and for random walk processes.
5.1 Finite Markov Chains
Let us investigate the efficiency of the branching method with the optimally chosen design $\varepsilon^*$ relative to direct simulation, which corresponds to the experimental design $\varepsilon = \pi$. The relative efficiency is defined by the formula
$$R = \frac{\det D(\pi)}{\det D(\varepsilon^*)}.$$
This value depends on the transition matrix $P$, so we write $R = R(P)$.
It is not difficult to verify that $R = 1$ in the case when $\pi = (1/(m+1), \ldots, 1/(m+1))$, and in the case when the Markov chain degenerates into a sequence of independent random variables. However, these cases are of no practical interest. The following result demonstrates that the relative efficiency can be arbitrarily large.
Theorem 5. Under the hypothesis of Theorem 3, when $m = 1$
$$\sup_{P:\,\pi P = \pi} R(P) = \min\Big\{\frac{1}{1 + 2\pi_1},\ \max(\pi_0, \pi_1)\Big\};$$
when $m > 1$
$$\sup_{P} R(P) = \infty.$$
The proof of Theorem 5 can be found in Ermakov, Melas (1995). The efficiency of the branching technique for the special class of chains for which $p_{i,i+l} = 0$, $l > 1$, is investigated in Melas (1994), and a generalization of the formula for $\sup_{P:\,\pi P = \pi} R(P)$ to an arbitrary given $\pi$ and arbitrary $m$ was developed in Melas, Fomina (1997).
5.2 Random Walk
Consider a random walk process on a line,
$$S_1 = 0, \quad S_{n+1} = S_n + X_n, \quad n = 1, 2, \ldots,$$
where $X_n$, $n = 1, 2, \ldots$, are independent random variables with common density function $f(x)$, and connect it with the waiting process
$$W_1 = 0, \quad W_{n+1} = \max(0, W_n + X_n), \quad n = 1, 2, \ldots.$$
It is known (see Feller, 1970) that the quantities $W_n$ and
$$M_n = \max\{S_1, \ldots, S_n\}$$
have the same distribution. Under an additional condition on $EX_1$, $M_n \to M$ and $W_n \to W$ in distribution, where $M$ and $W$ are random variables.
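The waiting-time recursion itself takes only a few lines of code; a minimal sketch (the function name is ours):

```python
def waiting_times(increments):
    """Lindley recursion W_1 = 0, W_{n+1} = max(0, W_n + X_n),
    applied to a given sequence of increments X_1, X_2, ..."""
    w = [0.0]
    for x in increments:
        w.append(max(0.0, w[-1] + x))
    return w
```

Feeding the increments of the random walk through this recursion yields a sample path of $\{W_n\}$ directly, without tracking the running maximum $M_n$.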
Set $\alpha = P\{W \geq v\} = P\{M \geq v\}$. This value cannot in general be calculated analytically, and the problem is to estimate $\alpha$ with the help of simulation. This problem has a number of applications; it was investigated in Siegmund (1976) and Asmussen (1985).
Set $\pi(0) = P\{W = 0\}$, $\pi(x) = W'(x)$ with $W(x) = P\{W < x\}$, $\pi(x) = 0$ for $x < 0$, $X_n = W_n$, $S = \{0\}$, $\mu(dx) = a f(x)\,dx$, $a = 1/(1 - F(0))$, where
$$1 - F(0) = \int_0^\infty f(t)\,dt,$$
and $h_1(x) = 0$ for $x < v$, $h_1(x) = 1$ for $x \geq v$, $l = 1$.
Then it is evident that conditions (a), (b) and (c) are valid. The problem is the estimation of the functional
$$J = J_1 = \int h_1(x)\,\pi(x)\,dx.$$
Suppose that $f(x)$ satisfies the condition
$$\int e^{s_0 t} f(t)\,dt = 1$$
for some positive $s_0$. Set $\varphi_0(x) \equiv 1$, $\varphi(t) = e^{s_0 t}$ for $0 < t < v$, and $\varphi(t) = e^{s_0 v}$ for $t \geq v$. Define the relative efficiency by the formula
$$R = R_v = \frac{T_{\varphi_0}\,D\hat{J}_{\varphi_0}}{T_{\varphi}\,D\hat{J}_{\varphi}}.$$
It is known (see Feller, 1970) that as $v \to \infty$, $\alpha\,e^{s_0 v}$ tends to a positive constant.
Theorem 6. Let the conditions formulated above be satisfied and, for some $\delta > 0$,
$$\int e^{2(s_0+\delta)t} f(t)\,dt < \infty.$$
Then for $v \to \infty$
$$R_v = O\Big(\frac{1}{\alpha \ln^2(1/\alpha)}\Big), \qquad \alpha = e^{-s_0 v}.$$
Thus the relative efficiency tends to $\infty$ as $v \to \infty$. Theorem 6 is a modification of Theorem 5.5 from Ermakov, Melas (1995), chapter 4.
Since Theorem 6 describes only the asymptotic behavior of the relative efficiency, we give numerical results in the next section.
6 NUMERICAL EXAMPLE
Let us consider a queueing system of the type GI/G/1/$\infty$. This system can be described in the following manner. Demands arrive at moments $t_1, t_2, \ldots$ and need service times $u_1, u_2, \ldots$, respectively. Demands are served in order of arrival. We assume that $\{v_i = t_{i+1} - t_i\}$ and $\{u_i\}$ are independent random variables. In the simulation experiments it is assumed that $v_i$ and $u_i$ ($i = 1, 2, \ldots$) have exponential distributions with parameters $a$ and 1, respectively. The problem is to estimate the steady-state probability $\alpha = \lim_{n \to \infty} P\{W_n \geq v\}$, where $W_n$ is the waiting time of the $n$th demand and $v$ is a given magnitude.
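A minimal direct-simulation sketch (no branching) of this estimate may be useful for orientation; for the exponential case considered here the well-known M/M/1 closed form $P\{W \geq v\} = a\,e^{-(1-a)v}$ (service rate 1, arrival rate $a < 1$) provides a sanity check. The function name and its defaults are illustrative only:

```python
import math
import random

def estimate_alpha(a, v, n_steps=200_000, seed=0):
    """Direct-simulation estimate of alpha = lim P{W_n >= v} for a
    queue with exponential interarrival times (rate a) and exponential
    service times (rate 1), via the Lindley recursion with increments
    X_n = u_n - v_n (service time minus interarrival gap)."""
    rng = random.Random(seed)
    w, hits = 0.0, 0
    for _ in range(n_steps):
        x = rng.expovariate(1.0) - rng.expovariate(a)
        w = max(0.0, w + x)
        hits += (w >= v)
    return hits / n_steps
```

For $a = 1/2$ and moderate $v$ this crude estimate agrees with the closed form to a few digits; for the very small $\alpha$ of the experiments below ($10^{-4}$ and beyond), direct simulation of this kind becomes impractically noisy, which is exactly the regime where the branching technique pays off.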
To compare the branching technique (BT) and the usual regeneration technique (RT) we simulated the corresponding random walk by both methods during 100,000 loops. The magnitude $v$ was chosen so that $\alpha = 10^{-2}, 10^{-3}, \ldots, 10^{-6}$, with $a = 1/2$. Denote by $I$ the magnitude
$$I = \alpha^2 \big/ (ET \cdot D),$$
where $\alpha$ is the proper value of the estimated parameter, $ET$ is the mean length of the simulated paths, and $D$