
Applied Mathematics and Computation 154 (2004) 389–403

www.elsevier.com/locate/amc

Estimations of parameters in a three-state reliability semi-Markov model
A. El-Gohary

Department of Mathematics, Faculty of Science, Mansoura University, Mansoura 35516, Egypt

Abstract

Maximum likelihood and Bayes estimates of the parameters included in a three-state semi-Markov reliability model are presented. The renewal kernel, which depends upon a vector of unknown parameters, is used to define a semi-Markov process that can be used to describe some reliability models. Moreover, some of the obtained results are compared with those obtained in other literature. A special case of the obtained results is presented.
© 2003 Elsevier Inc. All rights reserved.

Keywords: Maximum likelihood; Bayes estimators; Gamma prior distribution; Posterior mean; Semi-Markov model; Standby system with repair

1. Introduction

Many reliability systems can be modelled by using semi-Markov processes, for example a machine with a repairman, a standby system with repair, and others. An assumption of exponentiality of all distributions is rather unrealistic. If a system under consideration is not highly reliable, no asymptotic methods can be applied [1]. A discrete semi-Markov risk model is introduced in [2]; in that paper, a recursive system for finding the probability of ruin and the distribution of the severity of ruin is presented.

* Address: Department of Statistics and O.R., Faculty of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia.
E-mail addresses: aigohary@ksu.edu.sa, elgohary0@yahoo.com (A. El-Gohary).

0096-3003/$ - see front matter © 2003 Elsevier Inc. All rights reserved.
doi:10.1016/S0096-3003(03)00718-5


System reliability engineering has advanced to a level at which it is beginning to be separated into various specialized areas such as life cycle costing, reliability growth modeling, reliability optimization and others. A useful model for forecasting the future development of salary costs in a firm is presented in [7]. Four Markov models pertaining to repairable and non-repairable on-surface transit systems are introduced in [8]. Maximum likelihood and Bayes estimates for the parameters included in a 1-out-of-2: G repairable system are presented in [10]. In that study, it is assumed that the failure time and repair time distributions of the units are exponential with unknown parameters. Thomas Bayes's famous paper was published in 1763 and provides the basis for the method known as Bayesian statistical inference. Because of its fundamental importance, the paper was republished in 1953. Over the years, reliability estimation methods based on sampling theory have been found to be extremely useful for a wide variety of problems. There are two important practical benefits of a Bayesian analysis. One of them is the increased quality of the inferences, provided the prior information accurately reflects the true variation in the parameters. The second is the reduction in testing requirements in Bayesian reliability demonstration test programs.
A semi-Markov process $\{X(t) : t \ge 0\}$ is a stochastic process in which changes of state occur according to a Markov chain and in which the time interval between two successive transitions is a random variable whose distribution depends on the state from which the transition takes place as well as the state to which the next transition takes place [3]. Generally, a semi-Markov process with discrete state space can be defined by means of a Markov renewal process [4]. Assuming that the state space $S$ is finite, we can define the renewal kernel as follows.
Definition 1 [4]. The stochastic matrix $Q(t) = [Q_{ij}(t)]$, $i,j \in S$, $t \ge 0$, is said to be a renewal kernel if and only if the following conditions are satisfied:
1. The functions $Q_{ij}(t)$ are non-decreasing functions of $t$.
2. $\sum_{j \in S} Q_{ij}(t) = G_i(t)$ are distribution functions in $t$.
3. $Q_{ij}(\infty) = p_{ij}$, $i,j \in S$, where $P = [p_{ij}]$ is a stochastic matrix.

Definition 2 [4]. A two-dimensional Markov process $\{(\xi_n, \vartheta_n),\ n \in N\}$ with values in $S \times [0,\infty)$ is called a Markov renewal process if and only if
1. $Q_{ij}(t) = P\{\xi_{n+1} = j,\ \vartheta_{n+1} \le t \mid \xi_n = i,\ \vartheta_n = t_n, \ldots, \xi_0 = i_0,\ \vartheta_0 = t_0\} = P\{\xi_{n+1} = j,\ \vartheta_{n+1} \le t \mid \xi_n = i\}$;
2. $P\{\xi_0 = i,\ \vartheta_0 = 0\} = p_i^{(0)}$.
It follows from this definition that the transition probabilities of a Markov renewal process do not depend upon the second component. This means that the


transition probabilities depend only on the discrete component. In a Markov renewal process, the non-negative random variables $\vartheta_n$, $n \ge 1$, define the intervals between the Markov renewal times
$$\tau_n = \sum_{k=1}^{n} \vartheta_k, \quad n \ge 1, \qquad \tau_0 = 0. \qquad (1.1)$$

Now, let
$$\nu(t) := \sum_{n=1}^{\infty} I_{[0,t]}(\tau_n).$$
The process $\nu(t)$ is called a counting process. It determines the number of renewal times on the segment $[0,t]$.
Definition 3 [4]. A stochastic process $\{X(t) : t \ge 0\}$, where $X(t) = \xi_{\nu(t)}$, is called a semi-Markov process generated by the Markov renewal process with initial distribution $P_0$ and kernel $Q(t)$, $t \ge 0$.

Since the counting process $\nu(t)$ keeps constant values on the half-intervals $[\tau_n, \tau_{n+1})$ and is continuous from the right, the semi-Markov process also keeps constant values on these half-intervals: $X(t) = \xi_n$ for $t \in [\tau_n, \tau_{n+1})$. Moreover, the sequence $\{X(\tau_n) : n \in N\}$ is a Markov chain with transition probability matrix $P = \{p_{ij} = Q_{ij}(\infty);\ i,j \in S\}$, which is called the embedded Markov chain. The concept of a Markov renewal process is a natural generalization of the ordinary renewal process, which is given by a sequence of independent, identically distributed non-negative random variables $\vartheta_n$, $n \ge 1$. The random variables $\vartheta_n$ can be interpreted as lifetimes.
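To make the construction above concrete, the following Python sketch simulates one trajectory of a semi-Markov process from a renewal kernel specified through an embedded transition matrix and state-dependent sojourn-time samplers; the particular matrix and the exponential sojourn times are illustrative assumptions, not part of the model studied below.

```python
import numpy as np

def simulate_semi_markov(P, sojourn_samplers, x0, horizon, rng=None):
    """Simulate a semi-Markov trajectory up to a time horizon.

    P                : embedded Markov chain transition matrix (rows sum to 1)
    sojourn_samplers : sojourn_samplers[i][j]() draws the holding time of an i -> j jump
    x0               : initial state
    Returns the visited states and the jump times (the Markov renewal times tau_n).
    """
    rng = rng or np.random.default_rng()
    states, times = [x0], [0.0]
    t, i = 0.0, x0
    while t < horizon:
        j = rng.choice(len(P), p=P[i])          # next state from the embedded chain
        t += sojourn_samplers[i][j]()           # holding time depends on (i, j)
        states.append(int(j))
        times.append(t)
        i = j
    return states, times

# Illustrative three-state example (assumed, not from the paper):
P = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.4, 0.0],
              [0.3, 0.7, 0.0]])
rng = np.random.default_rng(0)
samplers = [[lambda r=rate: rng.exponential(1.0 / r) for rate in row]
            for row in [[1, 1, 2], [3, 3, 1], [2, 2, 1]]]
states, times = simulate_semi_markov(P, samplers, x0=2, horizon=10.0, rng=rng)
print(list(zip(states, np.round(times, 3))))
```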
Now, using Definition 3, the following lemma can be formulated.
Lemma 1. If $\{X(t) : t \ge 0\}$ is a semi-Markov process with the renewal kernel
$$Q(t) = [Q_{ij}(t)], \quad i,j \in S,\ t \in [0,\infty),$$
then
$$P\{\xi_0 = i_0,\ \vartheta_0 = 0,\ \xi_1 = i_1,\ \vartheta_1 \le u_1, \ldots, \xi_n = i_n,\ \vartheta_n \le u_n\} = p_{i_0} \prod_{k=1}^{n} Q_{i_{k-1} i_k}(u_k). \qquad (1.2)$$

This lemma will be used to obtain the likelihood function of some semi-Markov reliability models.


2. Bayesian estimation

Assume that the semi-Markov renewal kernel of the reliability model depends upon an unknown vector of parameters $\Theta = (\theta_1, \theta_2, \ldots, \theta_n)$. That is,
$$Q(t \mid \Theta) = \{Q_{ij}(t \mid \Theta) : i,j \in S\}. \qquad (2.1)$$
Our aim in this paper is to find both the maximum likelihood estimator and the Bayes estimator of the unknown vector $\Theta$, based on a realization of the semi-Markov process given by the sequence of observations $(i_0,t_0), (i_1,t_1), \ldots, (i_n,t_n)$ of the random vectors $(\xi_0,\vartheta_0), (\xi_1,\vartheta_1), \ldots, (\xi_n,\vartheta_n)$.

We assume that there exist functions $q_{ij}(t \mid \Theta)$, $i,j \in S$, such that
$$Q_{ij}(t \mid \Theta) = \int_0^t q_{ij}(u \mid \Theta)\,du. \qquad (2.2)$$

Using Lemma 1, the likelihood function for the given observations of the semi-Markov process is given by
$$L(i_0,t_0,i_1,t_1,\ldots,i_n,t_n;\Theta) = p_{i_0} \prod_{k=1}^{n} q_{i_{k-1} i_k}(t_k \mid \Theta). \qquad (2.3)$$
In the Bayesian procedure, we assume that $\Theta$ is a vector of random variables. Accordingly, $g(\Theta)$ is assumed to be the density function of $\Theta$, called the prior density function of $\Theta$. If the loss incurred when the vector $\Theta$ of unknown parameters is estimated by $\hat\Theta$ is quadratic, then the Bayes estimator of $\theta_i$ is the posterior mean
$$\hat\theta_i = E(\theta_i \mid z) = \int \theta_i\, g(\theta_i \mid z)\,d\theta_i, \quad i = 1,2,\ldots \qquad (2.4)$$
In this paper, we present maximum likelihood and Bayesian approaches for estimating the parameters included in a three-state semi-Markov reliability model. It is assumed that the failure rate is a linear function of the lifetime. The posterior mean and the associated minimum posterior risk are obtained.
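As a minimal numerical illustration of (2.4), the sketch below computes the posterior mean of a single parameter by integrating an unnormalized posterior on a grid; the exponential likelihood and the gamma prior used here are assumptions chosen only to keep the example short.

```python
import numpy as np

# Assumed toy setup: exponential sojourn times with rate theta, gamma(a, b) prior.
data = np.array([0.8, 1.3, 0.4, 2.1, 0.9])   # observed holding times (made up)
a, b = 2.0, 1.0                                # prior shape and rate (assumed)

theta = np.linspace(1e-3, 10.0, 4000)          # integration grid
log_post = (a - 1 + len(data)) * np.log(theta) - (b + data.sum()) * theta
post = np.exp(log_post - log_post.max())       # unnormalized posterior, stabilized

# Bayes estimate under quadratic loss = posterior mean, as in (2.4).
theta_hat = (theta * post).sum() / post.sum()
print(theta_hat)   # close to (a + n) / (b + sum of data) for this conjugate pair
```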

3. Semi-Markov standby model

In this section, we consider a system consisting of one operating unit, an identical spare, a switch and a repair facility. When the operating unit fails, the spare is immediately put into operation by the switch. Using the repair facility, the failed unit can be repaired. It is assumed that after repair the repaired unit operates perfectly (as a new one). The system fails either if the operating unit fails and the repair has not yet been finished, or if both the operating unit and the switch fail. Now, assume that the lifetimes of the operating units


are represented by independent copies of a non-negative random variable $\xi_1$ with distribution function $F(t) = P\{\xi_1 \le t\}$, $t \ge 0$. We suppose that the repair periods of the units are represented by identical copies of a non-negative random variable $\xi_2$ with distribution function $H(t) = P\{\xi_2 \le t\}$. Let $E$ denote the event of a switchover at the moment when the operating unit fails; the probability that the switch performs when required is $P(E) = k_1$. We assume that the whole system can also be repaired, in the sense that a failed system is replaced by a new identical one. The replacement time is represented by a non-negative random variable $\xi_3$ with distribution function $G(t) = P\{\xi_3 \le t\}$. Further, the random variables $\xi_i$, $i = 1, 2, 3$, are assumed to be independent.

Now, we define the following states $\{0, 1, 2\}$ of the described semi-Markov reliability system:
0. failure of the system;
1. the failed unit is under repair and the spare is operating;
2. both the operating unit and the spare are up.
Let $\tau^*_0, \tau^*_1, \tau^*_2, \ldots$ denote the instants at which the state of the system changes, where $\tau^*_0 = 0$, and let $\{X(t) : t \ge 0\}$ be a stochastic process with state space $S = \{0, 1, 2\}$ that keeps constant values on the half-intervals $[\tau^*_n, \tau^*_{n+1})$ and is continuous from the right. This process is not semi-Markov, because the first condition of the definition of a semi-Markov process is not satisfied at some instants.

Let us define a new stochastic process as follows. Assume that $\tau_0 = 0$ and that $\tau_n$, $n = 1, 2, \ldots$, represent the instants at which a component of the system fails or the whole system is renewed. The stochastic process $\{Y(t) : t \ge 0\}$ defined by
$$Y(0) = 0, \qquad Y(t) = X(\tau_n) \quad \text{for } t \in [\tau_n, \tau_{n+1}) \qquad (3.1)$$
is a semi-Markov process, and the kernel of this process is given by the matrix
$$Q(t) = \begin{bmatrix} 0 & 0 & Q_{02}(t) \\ Q_{10}(t) & Q_{11}(t) & 0 \\ Q_{20}(t) & Q_{21}(t) & 0 \end{bmatrix}. \qquad (3.2)$$

The semi-Markov process $\{Y(t) : t \ge 0\}$ is well defined if all elements of its kernel are given. Let us deduce the elements of the semi-Markov kernel as follows:


$$Q_{02}(t) = P\{Y(\tau_{n+1}) = 2,\ \vartheta_{n+1} \le t \mid Y(\tau_n) = 0\} = P\{\xi_3 \le t\} = G(t),$$
$$\begin{aligned} Q_{10}(t) &= P\{Y(\tau_{n+1}) = 0,\ \vartheta_{n+1} \le t \mid Y(\tau_n) = 1\} \\ &= P\{\xi_1 \le t,\ \xi_2 > \xi_1\} + P\{\bar{E},\ \xi_1 \le t,\ \xi_2 < \xi_1\} \\ &= \int_0^t [1 - H(x)]\,dF(x) + (1 - k_1)\int_0^t H(x)\,dF(x) \\ &= F(t) - k_1 \int_0^t H(x)\,dF(x), \end{aligned}$$
$$Q_{11}(t) = P\{Y(\tau_{n+1}) = 1,\ \vartheta_{n+1} \le t \mid Y(\tau_n) = 1\} = P\{E,\ \xi_1 \le t,\ \xi_2 < \xi_1\} = k_1 \int_0^t H(x)\,dF(x), \qquad (3.3)$$
$$Q_{21}(t) = P\{Y(\tau_{n+1}) = 1,\ \vartheta_{n+1} \le t \mid Y(\tau_n) = 2\} = P\{E,\ \xi_1 \le t\} = k_1 F(t),$$
$$Q_{20}(t) = P\{Y(\tau_{n+1}) = 0,\ \vartheta_{n+1} \le t \mid Y(\tau_n) = 2\} = P\{\bar{E},\ \xi_1 \le t\} = (1 - k_1) F(t).$$
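To sanity-check the derivation of (3.3), the following Monte Carlo sketch estimates $Q_{11}(t) = P\{E,\ \xi_1 \le t,\ \xi_2 < \xi_1\}$ and compares it with $k_1\int_0^t H(x)\,dF(x)$; the exponential choices for $F$ and $H$ are assumptions made only for this experiment.

```python
import numpy as np

rng = np.random.default_rng(4)
k1, lam_f, lam_h, t = 0.9, 1.0, 2.0, 1.5      # assumed switch probability and rates
N = 200_000

xi1 = rng.exponential(1.0 / lam_f, N)          # lifetimes, F(x) = 1 - exp(-lam_f x)
xi2 = rng.exponential(1.0 / lam_h, N)          # repair times, H(x) = 1 - exp(-lam_h x)
E = rng.random(N) < k1                          # switch performs when required

mc = np.mean(E & (xi1 <= t) & (xi2 < xi1))     # Monte Carlo estimate of Q11(t)

# k1 * integral_0^t H(x) dF(x) evaluated in closed form for the exponential choices above.
analytic = k1 * ((1 - np.exp(-lam_f * t))
                 - lam_f / (lam_f + lam_h) * (1 - np.exp(-(lam_f + lam_h) * t)))
print(round(mc, 4), round(analytic, 4))
```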
The joint posterior probability density function for the given observations $z = (i_0,t_0), (i_1,t_1), \ldots, (i_n,t_n)$ is, according to the Bayes formula,
$$g(\Theta \mid z) = \frac{g(\Theta)\prod_{k=1}^{n} q_{i_{k-1} i_k}(t_k \mid \Theta)}{\int g(\Theta)\prod_{k=1}^{n} q_{i_{k-1} i_k}(t_k \mid \Theta)\,d\Theta}. \qquad (3.4)$$
Now, the kernel densities of the defined semi-Markov process can be obtained in the following form:
$$q_{02}(t) = g(t),\ t \ge 0; \quad q_{10}(t) = f(t)[1 - k_1 H(t)]; \quad q_{11}(t) = k_1 H(t) f(t); \quad q_{20}(t) = (1 - k_1) f(t); \quad q_{21}(t) = k_1 f(t). \qquad (3.5)$$

Now, assume that the failure rate of the system depends linearly on the lifetime of the system; that is, the failure rate is given by $k_2 + k_3 t$, so the probability density function of the lifetime is
$$f(t) = (k_2 + k_3 t)\exp\left[-\left(k_2 t + \tfrac{1}{2} k_3 t^2\right)\right]. \qquad (3.6)$$
The main aim of this paper is to estimate the unknown parameters $k_\ell$, $\ell = 1, 2, 3$.
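A small Python sketch of the kernel densities (3.5) with the linear-failure-rate lifetime density (3.6) is given below; the repair-time cdf $H$ and the replacement-time pdf $g$ are left as caller-supplied functions, and the exponential choices in the usage lines are only assumptions for illustration.

```python
import numpy as np

def lifetime_pdf(t, k2, k3):
    """Lifetime density (3.6) with linear failure rate k2 + k3*t."""
    return (k2 + k3 * t) * np.exp(-(k2 * t + 0.5 * k3 * t ** 2))

def kernel_densities(t, k1, k2, k3, H, g):
    """Kernel densities (3.5); H is the repair-time cdf, g the replacement-time pdf."""
    f = lifetime_pdf(t, k2, k3)
    return {
        "q02": g(t),
        "q10": f * (1.0 - k1 * H(t)),
        "q11": k1 * H(t) * f,
        "q20": (1.0 - k1) * f,
        "q21": k1 * f,
    }

# Illustrative (assumed) choices for H and g:
H = lambda t: 1.0 - np.exp(-2.0 * t)          # exponential repair times, rate 2
g = lambda t: 0.5 * np.exp(-0.5 * t)          # exponential replacement times, rate 0.5
print(kernel_densities(t=1.0, k1=0.9, k2=0.5, k3=0.1, H=H, g=g))
```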

4. Maximum likelihood estimation

In this section, we derive the maximum likelihood estimators of the unknown vector of parameters $\Lambda = (k_1, k_2, k_3)$ included in the semi-


Markov reliability model described here. Using (3.5) and (3.6), the semi-Markov kernel densities of the described model, as functions of the unknown parameters, are given by
$$\begin{aligned} q_{10}(t) &= [1 - k_1 H(t)](k_2 + k_3 t)\exp\left[-\left(k_2 t + \tfrac{1}{2} k_3 t^2\right)\right], \\ q_{11}(t) &= k_1 H(t)(k_2 + k_3 t)\exp\left[-\left(k_2 t + \tfrac{1}{2} k_3 t^2\right)\right], \\ q_{20}(t) &= (1 - k_1)(k_2 + k_3 t)\exp\left[-\left(k_2 t + \tfrac{1}{2} k_3 t^2\right)\right], \\ q_{21}(t) &= k_1 (k_2 + k_3 t)\exp\left[-\left(k_2 t + \tfrac{1}{2} k_3 t^2\right)\right]. \end{aligned} \qquad (4.1)$$

We can easily observe that the kernel $Q_{ij}(t)$, $i,j \in \{0,1,2\}$, of the semi-Markov reliability model depends upon the unknown vector $\Lambda = (k_1, k_2, k_3)$. That is,
$$Q(t \mid \Lambda) = \{Q_{ij}(t \mid \Lambda)\}, \quad i,j \in \{0,1,2\}.$$
Now we proceed to derive the maximum likelihood estimators of $\Lambda$ based on the sequence of observations $z = \{(i_0,0), (i_1,t_1), \ldots, (i_n,t_n)\}$ of the random vector $\{(\xi_0,\vartheta_0), (\xi_1,\vartheta_1), \ldots, (\xi_n,\vartheta_n)\}$.

These observations can be classified as follows. Let
$$A_{ij} = \{k : i_{k-1} = i,\ i_k = j,\ k = 1, 2, \ldots, n\}$$
be the set of indices of directly observed transitions from state $i$ to state $j$, and let $n_{ij}$ denote the cardinality of $A_{ij}$, i.e. the number of direct transitions from state $i$ to state $j$. In the present case we find that
$$n_{02} + n_{10} + n_{11} + n_{20} + n_{21} = n. \qquad (4.2)$$
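To illustrate this bookkeeping, the short Python sketch below extracts the index sets $A_{ij}$, the counts $n_{ij}$ and the sums $s_2$, $s_3$ that appear in (4.5) from an observed trajectory; the trajectory values are made up for the example.

```python
from collections import defaultdict

def classify_observations(states, times):
    """Group observed transitions (i_{k-1} -> i_k) and collect sufficient statistics."""
    A = defaultdict(list)          # A[(i, j)] = indices k of direct i -> j transitions
    for k in range(1, len(states)):
        A[(states[k - 1], states[k])].append(k)
    n = {ij: len(ks) for ij, ks in A.items()}
    # Sojourn times of all transitions out of states 1 and 2 (the set A of (4.5)).
    lifetimes = [times[k] for (i, _), ks in A.items() if i in (1, 2) for k in ks]
    s2 = sum(lifetimes)
    s3 = 0.5 * sum(t ** 2 for t in lifetimes)
    return A, n, s2, s3

# Made-up observed trajectory: visited states and the sojourn times t_k before each jump.
states = [2, 1, 1, 0, 2, 0, 2, 1, 0]
times  = [0.0, 1.2, 0.7, 0.9, 0.3, 1.5, 0.2, 0.8, 1.1]
A, n, s2, s3 = classify_observations(states, times)
print(dict(n), round(s2, 3), round(s3, 3))
```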

Using (2.3), the likelihood function $L(z; \Lambda)$ becomes
$$L(z; \Lambda) = \prod_{k \in A_{02}} g(t_k) \prod_{k \in A_{10}} q_{10}(t_k \mid \Lambda) \prod_{k \in A_{11}} q_{11}(t_k \mid \Lambda) \prod_{k \in A_{20}} q_{20}(t_k \mid \Lambda) \prod_{k \in A_{21}} q_{21}(t_k \mid \Lambda). \qquad (4.3)$$


Substituting from (4.1) into (4.3), we get
$$\begin{aligned} L(z;\Lambda) = {}& \prod_{k=1}^{n_{02}} g(t_k) \prod_{k=1}^{n_{10}} [1 - k_1 H(t_k)](k_2 + k_3 t_k)\exp\left[-\left(k_2 t_k + \tfrac{1}{2} k_3 t_k^2\right)\right] \\ &\times \prod_{k=1}^{n_{11}} k_1 H(t_k)(k_2 + k_3 t_k)\exp\left[-\left(k_2 t_k + \tfrac{1}{2} k_3 t_k^2\right)\right] \\ &\times \prod_{k=1}^{n_{20}} (1 - k_1)(k_2 + k_3 t_k)\exp\left[-\left(k_2 t_k + \tfrac{1}{2} k_3 t_k^2\right)\right] \\ &\times \prod_{k=1}^{n_{21}} k_1 (k_2 + k_3 t_k)\exp\left[-\left(k_2 t_k + \tfrac{1}{2} k_3 t_k^2\right)\right]. \end{aligned} \qquad (4.4)$$

After some straightforward calculations we get
$$L(z;\Lambda) = C(t)\exp\left[-(k_2 s_2 + k_3 s_3)\right] k_1^{\,n_{11}+n_{21}} (1 - k_1)^{n_{20}} \prod_{k \in A_{10}} [1 - k_1 H(t_k)] \prod_{k \in A} (k_2 + k_3 t_k), \qquad (4.5)$$
where
$$s_2 = \sum_{k \in A} t_k, \qquad s_3 = \frac{1}{2}\sum_{k \in A} t_k^2, \qquad C(t) = \prod_{k \in A_{02}} g(t_k) \prod_{k \in A_{11}} H(t_k), \qquad A = A_{10} \cup A_{11} \cup A_{20} \cup A_{21}.$$


The log-likelihood function is
$$\begin{aligned} \mathcal{L} = \log L(z;\Lambda) = {}& \log C(t) - k_2 s_2 - k_3 s_3 + (n_{11} + n_{21})\log k_1 + n_{20}\log(1 - k_1) \\ &+ \sum_{k \in A_{10}} \left\{\log(k_2 + k_3 t_k) + \log[1 - k_1 H(t_k)]\right\} + \sum_{k \in A_{11}} \log(k_2 + k_3 t_k) \\ &+ \sum_{k \in A_{20}} \log(k_2 + k_3 t_k) + \sum_{k \in A_{21}} \log(k_2 + k_3 t_k). \end{aligned} \qquad (4.6)$$


The maximum likelihood equations are
$$\frac{\partial \mathcal{L}}{\partial k_1} = \frac{n_{11} + n_{21}}{k_1} - \frac{n_{20}}{1 - k_1} - \sum_{k \in A_{10}} \frac{H(t_k)}{1 - k_1 H(t_k)} = 0,$$
$$\frac{\partial \mathcal{L}}{\partial k_2} = -s_2 + \sum_{k \in A} \frac{1}{k_2 + k_3 t_k} = 0, \qquad (4.7)$$
$$\frac{\partial \mathcal{L}}{\partial k_3} = -s_3 + \sum_{k \in A} \frac{t_k}{k_2 + k_3 t_k} = 0.$$

The maximum likelihood estimators of the parameters $k_\ell$, $\ell = 1, 2, 3$, are the solutions of the system of non-linear equations (4.7). As can be seen, the solution of this system is difficult: it has no closed form in the $k_\ell$ in general, so it must be obtained numerically (a sketch of such a numerical solution is given below). A special case, however, can be derived from the system (4.7). Setting $1 - k_1 H(t_k) = 1 - k_1$ for $k \in A_{10}$ and $k_3 = 0$, the system (4.7) reduces to the maximum likelihood equations of the parameters included in the semi-Markov reliability model in which the lifetime of the system has a constant failure rate. That is, the results presented here generalize the results obtained in [6], where the failure rate of the lifetime of the system is constant.
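As a hedged illustration of how (4.7) might be solved in practice, the sketch below maximizes the log-likelihood (4.6) numerically with scipy; the repair-time cdf $H$ and the synthetic sojourn times are assumptions made only so the example runs.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, t_A10, t_A11, t_A20, t_A21, H):
    """Negative of the log-likelihood (4.6), up to the additive constant log C(t)."""
    k1, k2, k3 = params
    t_A = np.concatenate([t_A10, t_A11, t_A20, t_A21])
    n11, n21, n20 = len(t_A11), len(t_A21), len(t_A20)
    s2, s3 = t_A.sum(), 0.5 * (t_A ** 2).sum()
    ll = (-k2 * s2 - k3 * s3
          + (n11 + n21) * np.log(k1) + n20 * np.log(1.0 - k1)
          + np.sum(np.log(k2 + k3 * t_A))
          + np.sum(np.log(1.0 - k1 * H(t_A10))))
    return -ll

# Assumed inputs: exponential repair-time cdf and made-up sojourn-time samples.
H = lambda t: 1.0 - np.exp(-2.0 * t)
rng = np.random.default_rng(1)
t_A10, t_A11 = rng.exponential(1.0, 8), rng.exponential(1.0, 12)
t_A20, t_A21 = rng.exponential(1.0, 5), rng.exponential(1.0, 15)

res = minimize(neg_log_likelihood, x0=[0.8, 1.0, 0.1],
               args=(t_A10, t_A11, t_A20, t_A21, H),
               bounds=[(1e-6, 1 - 1e-6), (1e-6, None), (0.0, None)],
               method="L-BFGS-B")
print(res.x)   # numerical estimates of (k1, k2, k3)
```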
Now, in this special case the maximum likelihood estimators are
$$\hat k_1 = \frac{n_{11} + n_{21}}{m}, \qquad \hat k_2 = \frac{m}{s_2}, \qquad (4.8)$$
where $m = n_{10} + n_{11} + n_{20} + n_{21}$.
To find the local Fisher information matrix, the second partial derivatives of the log-likelihood function $\mathcal{L}$ must be found. In this special case they are
$$\frac{\partial^2 \mathcal{L}}{\partial k_1^2} = -\frac{n_{11} + n_{21}}{k_1^2} - \frac{n_{10} + n_{20}}{(1 - k_1)^2}, \qquad \left.\frac{\partial^2 \mathcal{L}}{\partial k_1^2}\right|_{k_1 = \hat k_1} = -\frac{m^3}{(n_{11} + n_{21})(n_{10} + n_{20})}, \qquad (4.9)$$
$$\frac{\partial^2 \mathcal{L}}{\partial k_2^2} = -\frac{m}{k_2^2}, \qquad \left.\frac{\partial^2 \mathcal{L}}{\partial k_2^2}\right|_{k_2 = \hat k_2} = -\frac{s_2^2}{m}, \qquad \frac{\partial^2 \mathcal{L}}{\partial k_1 \partial k_2} = 0. \qquad (4.10)$$

The local Fisher information matrix is determined by using the estimators $\hat k_1$, $\hat k_2$ as arguments in the second partial derivatives. One must calculate the Fisher information matrix in order to obtain the asymptotic variances and covariance of the maximum likelihood estimators of the distribution parameters:


$$F = -\begin{bmatrix} \dfrac{\partial^2 \mathcal{L}}{\partial k_1^2} & \dfrac{\partial^2 \mathcal{L}}{\partial k_1 \partial k_2} \\[6pt] \dfrac{\partial^2 \mathcal{L}}{\partial k_1 \partial k_2} & \dfrac{\partial^2 \mathcal{L}}{\partial k_2^2} \end{bmatrix}_{(\hat k_1, \hat k_2)}.$$
Generally the matrix $F$ is symmetric. Evaluated at $k_i = \hat k_i$, its inverse gives the local Fisher information results
$$F^{-1} = \begin{bmatrix} \operatorname{Var}(\hat k_1) & \operatorname{Cov}(\hat k_1, \hat k_2) \\ \operatorname{Cov}(\hat k_1, \hat k_2) & \operatorname{Var}(\hat k_2) \end{bmatrix}_{(\hat k_1, \hat k_2)} = \begin{bmatrix} \dfrac{(n_{11} + n_{21})(n_{10} + n_{20})}{m^3} & 0 \\[6pt] 0 & \dfrac{m}{s_2^2} \end{bmatrix}.$$
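The closed-form special case lends itself to a direct check; the sketch below computes the estimators (4.8) and the asymptotic variances from the inverse of the observed information, using made-up counts and sojourn times.

```python
import numpy as np

def special_case_mle(n10, n11, n20, n21, sojourn_times):
    """MLEs (4.8) and asymptotic variances for the constant-failure-rate special case."""
    m = n10 + n11 + n20 + n21
    s2 = float(np.sum(sojourn_times))
    k1_hat = (n11 + n21) / m
    k2_hat = m / s2
    # Observed information (negative second derivatives at the MLEs) and its inverse.
    info = np.diag([(n11 + n21) / k1_hat**2 + (n10 + n20) / (1 - k1_hat)**2,
                    m / k2_hat**2])
    cov = np.linalg.inv(info)
    return (k1_hat, k2_hat), cov

# Made-up data: transition counts and the sojourn times of all transitions out of states 1, 2.
rng = np.random.default_rng(2)
(k1_hat, k2_hat), cov = special_case_mle(8, 12, 5, 15, rng.exponential(1.0, 40))
print(k1_hat, k2_hat)
print(np.diag(cov))   # Var(k1_hat) = (n11+n21)(n10+n20)/m^3, Var(k2_hat) = m/s2^2
```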

5. Bayes analysis

As we have seen in the previous section, the maximum likelihood equations have no closed-form solution in the unknown parameters in the general case. Therefore, we look for another approach that may enable us to obtain estimates of these parameters in closed form. To establish a theorem that gives the joint posterior pdf of $\Lambda$, we need the following lemma.
Lemma 2. The following relation is fulfilled for a non-negative integer $n$:
$$\prod_{k=1}^{n}(k_2 + k_3 t_k) = \sum_{\ell=0}^{n} u_\ell(t)\, k_2^{\,n-\ell}\, k_3^{\,\ell}, \qquad (5.1)$$
where
$$u_\ell(t) = \sum_{1 \le k_1 < k_2 < \cdots < k_\ell \le n}\ \prod_{j=1}^{\ell} t_{k_j}, \quad \ell = 1, 2, \ldots, n, \qquad u_0(t) = 1.$$
A short numerical check of identity (5.1) is sketched below.
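The coefficients $u_\ell(t)$ are the elementary symmetric polynomials of the $t_k$, which can be built by repeated convolution; the test values in the following minimal sketch are arbitrary.

```python
import numpy as np
from itertools import combinations

def elementary_symmetric(ts):
    """Return [u_0, u_1, ..., u_n] for the values ts, by expanding prod(1 + t*x)."""
    coeffs = np.array([1.0])
    for t in ts:
        coeffs = np.convolve(coeffs, np.array([1.0, t]))   # multiply by (1 + t*x)
    return coeffs                                           # coeffs[l] == u_l(t)

# Check identity (5.1) on arbitrary values.
ts = [0.7, 1.4, 0.2, 2.5]
k2, k3 = 0.9, 0.3
u = elementary_symmetric(ts)
lhs = np.prod([k2 + k3 * t for t in ts])
rhs = sum(u[l] * k2 ** (len(ts) - l) * k3 ** l for l in range(len(ts) + 1))
# Same thing via the defining sum over l-subsets, as in the statement of u_l(t).
rhs_direct = sum(k2 ** (len(ts) - l) * k3 ** l *
                 sum(np.prod(c) for c in combinations(ts, l))
                 for l in range(len(ts) + 1))
print(np.isclose(lhs, rhs), np.isclose(lhs, rhs_direct))
```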

The proof of this lemma can be reached by using the mathematical induction. Using the above lemma, one can formulate and prove the following
lemma.
Lemma 3. The likelihood function can be written, up to the factor $\prod_{k \in A_{02}} g(t_k)$, which does not depend on $\Lambda$, in the following form:
$$L(z \mid \Lambda) = \sum_{\ell_1=0}^{n_{10}} \sum_{\ell_2=0}^{n_{10}} \sum_{\ell_3=0}^{n_{11}} \sum_{\ell_4=0}^{n_{20}} \sum_{\ell_5=0}^{n_{21}} (-1)^{\ell_1} A_{\ell_1 \ldots \ell_5}\, (1 - k_1)^{n_{20}}\, k_1^{\,n_{11}+n_{21}+\ell_1}\, k_2^{\,m-\ell}\, k_3^{\,\ell}\, e^{-(s_2 k_2 + s_3 k_3)}, \qquad (5.2)$$


where $\ell = \ell_2 + \ell_3 + \ell_4 + \ell_5$, $m = n_{10} + n_{11} + n_{20} + n_{21}$,
$$u^{(1)}_{\ell_1}(t) = \sum_{1 \le k_1 < \cdots < k_{\ell_1} \le n_{10}} \prod_{j=1}^{\ell_1} H(t_{k_j}), \qquad u^{(2)}_{\ell_2}(t) = \sum_{1 \le k_1 < \cdots < k_{\ell_2} \le n_{10}} \prod_{j=1}^{\ell_2} t_{k_j},$$
$$u^{(3)}_{\ell_3}(t) = \sum_{1 \le k_1 < \cdots < k_{\ell_3} \le n_{11}} \prod_{j=1}^{\ell_3} t_{k_j}, \qquad u^{(4)}_{\ell_4}(t) = \sum_{1 \le k_1 < \cdots < k_{\ell_4} \le n_{20}} \prod_{j=1}^{\ell_4} t_{k_j},$$
$$u^{(5)}_{\ell_5}(t) = \sum_{1 \le k_1 < \cdots < k_{\ell_5} \le n_{21}} \prod_{j=1}^{\ell_5} t_{k_j}, \qquad s_1 = \prod_{k=1}^{n_{11}} H(t_k),$$
$$A_{\ell_1 \ldots \ell_5} = s_1 \prod_{\nu=1}^{5} u^{(\nu)}_{\ell_\nu}(t), \qquad (5.3)$$
the indices in the $\nu$th sum running over the sets $A_{10}$, $A_{10}$, $A_{11}$, $A_{20}$ and $A_{21}$, respectively.
To obtain the Bayes estimates of the unknown parameters $k_\ell$, $\ell = 1, 2, 3$, the following assumptions are adopted:
1. the unknown parameters $k_\ell$, $\ell = 1, 2, 3$, behave as independent random variables;
2. the prior distributions of the unknown parameters $k_\ell$, $\ell = 1, 2, 3$, are specified as follows:
2.1. the parameter $k_1$ has a beta distribution with known parameters $a_1$, $b_1$, that is,
$$g_1(k_1) = \frac{k_1^{a_1-1}(1 - k_1)^{b_1-1}}{\beta(a_1, b_1)}, \qquad k_1 \in (0,1),\ a_1, b_1 > 0; \qquad (5.4)$$
2.2. the parameters $k_2$ and $k_3$ have gamma distributions with known parameters $(a_2, b_2)$ and $(a_3, b_3)$, respectively, that is,
$$g_2(k_2) = \frac{b_2^{a_2}}{\Gamma(a_2)}\, k_2^{a_2-1} \exp(-b_2 k_2), \qquad k_2 \in (0,\infty),\ a_2, b_2 > 0, \qquad (5.5)$$
$$g_3(k_3) = \frac{b_3^{a_3}}{\Gamma(a_3)}\, k_3^{a_3-1} \exp(-b_3 k_3), \qquad k_3 \in (0,\infty),\ a_3, b_3 > 0; \qquad (5.6)$$
3. the loss incurred when the vector $\Lambda$ is estimated by $\hat\Lambda$ is quadratic, that is, the loss function is given by
$$L(\Lambda, \hat\Lambda) = \sum_{\ell=1}^{3} c_\ell (k_\ell - \hat k_\ell)^2, \qquad c_\ell > 0. \qquad (5.7)$$
Using assumptions 1 and 2, the joint prior pdf of $\Lambda$, say $g(\Lambda)$, takes the following form:


$$g(\Lambda) = \frac{k_1^{a_1-1}(1 - k_1)^{b_1-1}}{\beta(a_1, b_1)} \prod_{m=2}^{3} \frac{b_m^{a_m}\, k_m^{a_m-1}\, e^{-b_m k_m}}{\Gamma(a_m)}, \qquad k_1 \in (0,1),\ k_2, k_3 \in (0,\infty). \qquad (5.8)$$

Now, we proceed to present a theorem that gives the joint posterior pdf of $\Lambda$ given $z$.

Theorem 1. Under assumptions 1 and 2, the joint posterior pdf of $\Lambda$ given the observations $z$ is
$$g(\Lambda \mid z) = \frac{1}{D_0} \sum_{\ell_1=0}^{n_{10}} \sum_{\ell_2=0}^{n_{10}} \sum_{\ell_3=0}^{n_{11}} \sum_{\ell_4=0}^{n_{20}} \sum_{\ell_5=0}^{n_{21}} (-1)^{\ell_1} A_{\ell_1 \ldots \ell_5}\, k_1^{\,n_{11}+n_{21}+\ell_1+a_1-1} (1 - k_1)^{n_{20}+b_1-1}\, k_2^{\,m-\ell+a_2-1}\, k_3^{\,\ell+a_3-1}\, e^{-(s_2+b_2)k_2}\, e^{-(s_3+b_3)k_3}, \quad 0 \le k_1 \le 1,\ k_2, k_3 \ge 0, \qquad (5.9)$$

where $D_0$ is given by
$$D_0 = \sum_{\ell_1=0}^{n_{10}} \sum_{\ell_2=0}^{n_{10}} \sum_{\ell_3=0}^{n_{11}} \sum_{\ell_4=0}^{n_{20}} \sum_{\ell_5=0}^{n_{21}} (-1)^{\ell_1} A_{\ell_1 \ldots \ell_5}\, \beta(n_{11}+n_{21}+\ell_1+a_1,\ n_{20}+b_1)\, \frac{\Gamma(m-\ell+a_2)}{(s_2+b_2)^{\,m-\ell+a_2}}\, \frac{\Gamma(\ell+a_3)}{(s_3+b_3)^{\,\ell+a_3}}. \qquad (5.10)$$

Proof. Using the Bayes theorem [5], the joint posterior pdf of $\Lambda$ given the observations $z$ is related to the joint prior pdf of $\Lambda$ and the likelihood of $z$ by
$$g(\Lambda \mid z) = \frac{g(\Lambda)\, L(z \mid \Lambda)}{\int g(\Lambda)\, L(z \mid \Lambda)\,d\Lambda} = \frac{\beta(a_1, b_1)\,\Gamma(a_2)\,\Gamma(a_3)}{b_2^{a_2}\, b_3^{a_3}\, D_0}\, g(\Lambda)\, L(z; \Lambda), \qquad (5.11)$$
where $D_0$ is given by
$$D_0 = \frac{\beta(a_1, b_1)\,\Gamma(a_2)\,\Gamma(a_3)}{b_2^{a_2}\, b_3^{a_3}} \int g(\Lambda)\, L(z; \Lambda)\,d\Lambda. \qquad (5.12)$$

Substituting from (5.2) and (5.8) into (5.11), we obtain (5.9), where $D_0$ can be obtained as follows. Substituting from (5.2) and (5.8) into (5.12), one can get
$$D_0 = \sum_{\ell_1=0}^{n_{10}} \sum_{\ell_2=0}^{n_{10}} \sum_{\ell_3=0}^{n_{11}} \sum_{\ell_4=0}^{n_{20}} \sum_{\ell_5=0}^{n_{21}} (-1)^{\ell_1} A_{\ell_1 \ldots \ell_5} \int_0^1 k_1^{\,n_{11}+n_{21}+\ell_1+a_1-1}(1 - k_1)^{n_{20}+b_1-1}\,dk_1\ \int_0^\infty k_2^{\,m-\ell+a_2-1}\, e^{-(s_2+b_2)k_2}\,dk_2\ \int_0^\infty k_3^{\,\ell+a_3-1}\, e^{-(s_3+b_3)k_3}\,dk_3. \qquad (5.13)$$


Using [9], one can evaluate these integrals to obtain $D_0$ as given by (5.10), which completes the proof of the theorem. □
The following corollary gives the marginal posterior pdf of $k_\nu$, $\nu = 1, 2, 3$, given $z$.
Corollary 1. The marginal posterior pdf of $k_1$ is given by
$$g_1(k_1 \mid z) = \frac{1}{D_0} \sum_{\ell_1=0}^{n_{10}} \sum_{\ell_2=0}^{n_{10}} \sum_{\ell_3=0}^{n_{11}} \sum_{\ell_4=0}^{n_{20}} \sum_{\ell_5=0}^{n_{21}} (-1)^{\ell_1} A_{\ell_1 \ldots \ell_5}\, k_1^{\,n_{11}+n_{21}+\ell_1+a_1-1}(1 - k_1)^{n_{20}+b_1-1}\, \frac{\Gamma(m-\ell+a_2)}{(s_2+b_2)^{\,m-\ell+a_2}}\, \frac{\Gamma(\ell+a_3)}{(s_3+b_3)^{\,\ell+a_3}}, \quad 0 \le k_1 \le 1, \qquad (5.14)$$
and the marginal posterior pdf of $k_2$ is given by
$$g_2(k_2 \mid z) = \frac{1}{D_0} \sum_{\ell_1=0}^{n_{10}} \sum_{\ell_2=0}^{n_{10}} \sum_{\ell_3=0}^{n_{11}} \sum_{\ell_4=0}^{n_{20}} \sum_{\ell_5=0}^{n_{21}} (-1)^{\ell_1} A_{\ell_1 \ldots \ell_5}\, \beta(n_{11}+n_{21}+\ell_1+a_1,\ n_{20}+b_1)\, \frac{\Gamma(\ell+a_3)}{(s_3+b_3)^{\,\ell+a_3}}\, k_2^{\,m-\ell+a_2-1}\, e^{-(s_2+b_2)k_2}, \quad k_2 > 0, \qquad (5.15)$$
with the marginal posterior pdf of $k_3$ obtained from (5.15) by interchanging the roles of $(a_2, b_2, s_2, m-\ell)$ and $(a_3, b_3, s_3, \ell)$.

The proof of this corollary can be done by integrating the joint posterior pdf of $\Lambda$ given $z$ over the variables $k_m$, $m \in \{1, 2, 3\} \setminus \{\nu\}$.
The $r$th moments, $r = 1, 2, \ldots$, of the marginal posterior pdfs of $k_\nu$, $\nu = 1, 2, 3$, are given by the following theorem.

Theorem 2. The $r$th moment, $r = 1, 2, \ldots$, of the marginal posterior pdf of $k_\nu$, $\nu = 1, 2, 3$, is given by
$$\mu_\nu^{(r)} = E(k_\nu^r \mid z) = \frac{D_{r\delta_{1\nu},\, r\delta_{2\nu},\, r\delta_{3\nu}}}{D_0}, \qquad \nu = 1, 2, 3,\ r = 1, 2, \ldots, \qquad (5.16)$$

where $\delta_{\ell\nu}$ denotes the Kronecker delta and
$$D_{p_1, p_2, p_3} = \sum_{\ell_1=0}^{n_{10}} \sum_{\ell_2=0}^{n_{10}} \sum_{\ell_3=0}^{n_{11}} \sum_{\ell_4=0}^{n_{20}} \sum_{\ell_5=0}^{n_{21}} (-1)^{\ell_1} A_{\ell_1 \ldots \ell_5}\, \beta(n_{11}+n_{21}+\ell_1+a_1+p_1,\ n_{20}+b_1)\, \frac{\Gamma(m-\ell+p_2+a_2)}{(s_2+b_2)^{\,m-\ell+p_2+a_2}}\, \frac{\Gamma(\ell+p_3+a_3)}{(s_3+b_3)^{\,\ell+p_3+a_3}}. \qquad (5.17)$$

Proof. The $r$th posterior moment of $k_\nu$, $\nu = 1, 2, 3$, is defined as the posterior expectation of $k_\nu^r$. That is,



$$\mu_\nu^{(r)} = \int k_\nu^r\, g_\nu(k_\nu \mid z)\,dk_\nu, \qquad \nu = 1, 2, 3.$$

Substituting from (5.14) and (5.15) and calculating the resulting integrals, one reaches the proof. □
The following theorem gives the Bayes estimators of the unknown parameters $k_\nu$.

Theorem 3. Under assumptions 1–3 we have:
1. the Bayes estimator of $k_\nu$, $\nu = 1, 2, 3$, is
$$\hat k_\nu = E(k_\nu \mid z) = \frac{D_{\delta_{1\nu},\, \delta_{2\nu},\, \delta_{3\nu}}}{D_0}; \qquad (5.18)$$

2. the minimum posterior risk associated with the Bayes estimator $\hat k_\nu$, $\nu = 1, 2, 3$, is
$$\operatorname{Var}(k_\nu \mid z) = \frac{D_{2\delta_{1\nu},\, 2\delta_{2\nu},\, 2\delta_{3\nu}}}{D_0} - \left(\frac{D_{\delta_{1\nu},\, \delta_{2\nu},\, \delta_{3\nu}}}{D_0}\right)^2. \qquad (5.19)$$
Proof. Using the definitions of the posterior mean and the posterior variance, one can get
$$\hat k_\nu = E(k_\nu \mid z) = \int k_\nu\, g_\nu(k_\nu \mid z)\,dk_\nu, \qquad \nu = 1, 2, 3. \qquad (5.20)$$
Using Theorem 2, we get
$$\hat k_\nu = E(k_\nu \mid z) = \frac{D_{\delta_{1\nu},\, \delta_{2\nu},\, \delta_{3\nu}}}{D_0}, \qquad \nu = 1, 2, 3. \qquad (5.21)$$
Also, the posterior variance can be obtained in the form
$$\operatorname{Var}(k_\nu \mid z) = E(k_\nu^2 \mid z) - \left[E(k_\nu \mid z)\right]^2 = \frac{D_{2\delta_{1\nu},\, 2\delta_{2\nu},\, 2\delta_{3\nu}}}{D_0} - \left(\frac{D_{\delta_{1\nu},\, \delta_{2\nu},\, \delta_{3\nu}}}{D_0}\right)^2, \qquad \nu = 1, 2, 3, \qquad (5.22)$$
which completes the proof. □
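As a cross-check of the closed-form Bayes estimators, the sketch below computes posterior means and variances by direct numerical integration of the unnormalized posterior $g(\Lambda)\, L(z;\Lambda)$ on a grid, using the likelihood (4.5) without its constant factor; the prior parameters, the repair-time cdf and the data are all assumptions for illustration.

```python
import numpy as np

# Assumed data (sojourn times grouped by transition type) and repair-time cdf.
rng = np.random.default_rng(3)
t_A10, t_A11 = rng.exponential(1.0, 6), rng.exponential(1.0, 9)
t_A20, t_A21 = rng.exponential(1.0, 4), rng.exponential(1.0, 11)
H = lambda t: 1.0 - np.exp(-2.0 * t)
a1, b1, a2, b2, a3, b3 = 2.0, 2.0, 1.5, 1.0, 1.5, 1.0   # assumed prior parameters

def log_likelihood(k1, k2, k3):
    """Log of (4.5) without the constant factor C(t); works on scalar or array grids."""
    t_A = np.concatenate([t_A10, t_A11, t_A20, t_A21])
    n11, n21, n20 = len(t_A11), len(t_A21), len(t_A20)
    s2, s3 = t_A.sum(), 0.5 * (t_A ** 2).sum()
    out = -k2 * s2 - k3 * s3 + (n11 + n21) * np.log(k1) + n20 * np.log1p(-k1)
    out = out + sum(np.log(k2 + k3 * t) for t in t_A)
    out = out + sum(np.log1p(-k1 * H(t)) for t in t_A10)
    return out

def log_prior(k1, k2, k3):
    """Beta(a1, b1) prior for k1 and gamma priors for k2, k3, up to constants."""
    return ((a1 - 1) * np.log(k1) + (b1 - 1) * np.log1p(-k1)
            + (a2 - 1) * np.log(k2) - b2 * k2 + (a3 - 1) * np.log(k3) - b3 * k3)

# Grid integration of the unnormalized posterior over (k1, k2, k3).
k1g = np.linspace(0.01, 0.99, 40)
k2g = np.linspace(0.01, 4.0, 40)
k3g = np.linspace(0.01, 2.0, 40)
K1, K2, K3 = np.meshgrid(k1g, k2g, k3g, indexing="ij")
logp = log_likelihood(K1, K2, K3) + log_prior(K1, K2, K3)
post = np.exp(logp - logp.max())
Z = post.sum()
for name, K in [("k1", K1), ("k2", K2), ("k3", K3)]:
    mean = (K * post).sum() / Z
    var = (K ** 2 * post).sum() / Z - mean ** 2
    print(name, round(mean, 4), round(var, 5))
```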

Finally, the maximum likelihood estimators and the Bayes estimators of the unknown parameters $k_\nu$, $\nu = 1, 2, 3$, are given by (4.8) and (5.21); we can use these estimators to deduce estimators of the system reliability at a given time. The results obtained here generalize the results obtained in [6]. Bayes estimators for the unknown parameters included in a three-state semi-Markov reliability model with a constant failure rate can be obtained from the present results by putting $k_3 = 0$.

6. Conclusions

We have presented both maximum likelihood and Bayes approaches for estimating the parameters included in a three-state semi-Markov reliability model. In the Bayes analysis, we have used a beta prior distribution for the first parameter and gamma priors for the second and third parameters.

References

[1] I. Ushakov (Ed.), Handbook of Reliability Engineering, John Wiley and Sons, New York, 1994.
[2] J.M. Reinhard, M. Snoussi, The severity of ruin in a discrete semi-Markov risk model, Stochastic Models 18 (1) (2002) 85–107.
[3] J. Medhi, Stochastic Processes, Wiley Eastern Limited, New Delhi, 1982.
[4] V. Korolyuk, A. Swishchuk, Semi-Markov Random Evolutions, Kluwer Academic Publishers, Dordrecht, 1994.
[5] H.F. Martz, R.A. Waller, Bayesian Reliability Analysis, John Wiley and Sons, New York, 1982;
V. Korolyuk, A. Swishchuk, Semi-Markov Random Evolutions, Kluwer Academic Publishers, London, 1995.
[6] F. Grabski, Bayesian estimation of parameters in semi-Markov reliability models, in: Proceedings of the Tenth European Conference on Safety and Reliability Engineering, Germany, 1999.
[7] J. Janssen, R. Manca, Salary cost evaluation by means of non-homogeneous semi-Markov processes, Stochastic Models 18 (1) (2002) 7–23.
[8] B. Dhillon, S. Rayapati, Reliability and availability analysis of on surface transit systems, Microelectronics Reliability 24 (6) (1984) 1029–1033.
[9] I.S. Gradshteyn, I.M. Ryzhik, Table of Integrals, Series, and Products, Academic Press, New York, 1980.
[10] A. Sarhan, A. El-Gohary, Parameter estimations of 1-out-of-2: G repairable system, Appl. Math. Comput. 145 (2003) 469–479.
