The basics of the mathematical theory relevant to the study of reliability and safety engineering are discussed in this chapter. The basic concepts of set theory and probability theory are explained first. Then the elements of component reliability are presented. The distributions used in reliability and safety studies are then explained with suitable examples. The treatment of failure data is given in the last section of the chapter.
The intersection of A and B is the set of elements which belong to both sets. The intersection is denoted by A ∩ B and read as "A AND B". The Venn diagram of A ∩ B is shown in Figure 2.3.
Two sets A and B are termed mutually exclusive or disjoint sets when A and B have no elements in common, i.e., A ∩ B = ∅, where ∅ denotes the empty set. This can be represented by a Venn diagram as shown in Figure 2.4.
Name                 Description
Identity law         A ∪ ∅ = A;  A ∪ U = U;  A ∩ ∅ = ∅;  A ∩ U = A
Idempotency law      A ∪ A = A;  A ∩ A = A
Commutative law      A ∪ B = B ∪ A;  A ∩ B = B ∩ A
Associative law      A ∪ (B ∪ C) = (A ∪ B) ∪ C;  A ∩ (B ∩ C) = (A ∩ B) ∩ C
Distributive law     A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C);  A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
Complementation law  A ∪ A′ = U;  A ∩ A′ = ∅;  (A′)′ = A
De Morgan's laws     (A ∪ B)′ = A′ ∩ B′;  (A ∩ B)′ = A′ ∪ B′
Table 2.2 Boolean algebra postulates and theorems

Postulate/theorem                                        Remarks
x + 0 = x;  x · 1 = x                                    Identity
x + x = x;  x · x = x                                    Idempotence
0′ = 1;  1′ = 0                                          Complement
(x′)′ = x                                                Involution
x1 + x1·x2 = x1;  x1·(x1 + x2) = x1                      Absorption
x1 + (x2 + x3) = (x1 + x2) + x3;  x1·(x2·x3) = (x1·x2)·x3   Associative
(x1 + x2)′ = x1′·x2′;  (x1·x2)′ = x1′ + x2′              De Morgan's theorem
Consider a function f(x1, x2, x3, …, xn) of n variables, which are combined by Boolean operations. Depending upon the values of the constituent variables x1, x2, …, xn, f will be 1 or 0. As there are n variables and each can take two possible values, 1 or 0, 2^n combinations of variables have to be considered to determine the value of f. Truth tables are used to represent the value of f for all these combinations. A truth table is given for the Boolean expression f(x1, x2, x3) = x1·x2 + x2·x3 + x1·x3 in Table 2.3.
In reliability calculations, it is necessary to minimize the Boolean expression in order to eliminate repetition of the same elements. The premise of all minimization techniques is the set of Boolean algebra theorems mentioned in Table 2.2. The labor involved in minimization increases as the number of variables increases. Geometric methods such as the well-known Karnaugh map are applicable only up to about six variables. Nowadays, sophisticated computerized algorithms are available for calculation with a large number of variables.
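The enumeration of all 2^n combinations described above can be sketched in a few lines of Python; the function below is the example expression f(x1, x2, x3) = x1·x2 + x2·x3 + x1·x3 from Table 2.3.

```python
from itertools import product

def f(x1, x2, x3):
    # Boolean OR of the three AND terms: x1*x2 + x2*x3 + x1*x3
    return (x1 & x2) | (x2 & x3) | (x1 & x3)

# Enumerate all 2**3 combinations of the variables, as in Table 2.3.
table = [(x1, x2, x3, f(x1, x2, x3)) for x1, x2, x3 in product((0, 1), repeat=3)]
```

Each tuple holds (x1, x2, x3, f); the f column reproduces Table 2.3.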
Table 2.3 Truth table

x1  x2  x3  f
0   0   0   0
0   0   1   0
0   1   0   0
0   1   1   1
1   0   0   0
1   0   1   1
1   1   0   1
1   1   1   1
In the classical interpretation, if an experiment has N equally likely outcomes and n of them are favorable to the event E, the probability of E is

P(E) = n/N.    (2.1)

In the frequency interpretation, the probability of E is the limiting relative frequency of its occurrence over a large number of trials:

P(E) = lim (N → ∞) n/N.    (2.2)

The subjective interpretation of probability views probability as a degree of belief held by an individual.
The axioms of probability are as follows. For a sample space S and events E, E1, E2:

P(S) = 1;
0 ≤ P(E) ≤ 1;
for two events E1 and E2 with E1 ∩ E2 = ∅, P(E1 ∪ E2) = P(E1) + P(E2).
Two events are said to be independent if the occurrence of one does not affect the probability of occurrence of the other. Let us say A and B are two events; if the occurrence of A does not provide any information about the occurrence of B, then A and B are statistically independent. For example, in a process plant, the failure of a pump does not affect the failure of a valve.
Two events are said to be mutually exclusive if the occurrence of one event prevents the occurrence of the other. If two events A and B are mutually exclusive, then they are statistically dependent. The success and failure events of any component are mutually exclusive: over a given interval, a pump that is operating successfully implies that failure has not taken place.
If the event B occurs then, in order for A to occur, it is necessary that the actual occurrence be a point in both A and B, i.e., it must be in A ∩ B (Figure 2.5). Since we know that B has occurred, B becomes our new sample space, and hence the probability that A occurs given B equals the probability of A ∩ B relative to the probability of B. Mathematically,

P(A | B) = P(A ∩ B)/P(B).    (2.3)

Similarly,

P(B | A) = P(A ∩ B)/P(A).    (2.4)

Rearranging gives the multiplication rule

P(A ∩ B) = P(B) P(A | B) = P(A) P(B | A).    (2.5)

If A and B are independent, P(A | B) = P(A), so

P(A ∩ B) = P(A) P(B).    (2.6)
Thus when A and B are independent, the probability that A and B occur together
is simply the product of the probabilities that A and B occur individually.
In general, the probability of joint occurrence of n dependent events E1, E2, …, En is calculated by the following expression:

P(E1 ∩ E2 ∩ … ∩ En) = P(E1) P(E2 | E1) P(E3 | E1 ∩ E2) … P(En | E1 ∩ E2 ∩ … ∩ En−1).    (2.7)
Let A and B be two events. From the Venn diagram (Figure 2.6), as regions 1, 2, and 3 are mutually exclusive, it follows that

P(A ∪ B) = P(1) + P(2) + P(3),
P(A) = P(1) + P(2),
P(B) = P(2) + P(3),

which shows that

P(A ∪ B) = P(A) + P(B) − P(2).

As P(2) = P(A ∩ B),

P(A ∪ B) = P(A) + P(B) − P(A ∩ B).    (2.8)

The above expression can be extended to n events E1, E2, …, En by the following equation:

P(E1 ∪ E2 ∪ … ∪ En) = P(E1) + P(E2) + … + P(En)
  − [P(E1 ∩ E2) + P(E1 ∩ E3) + … + P(En−1 ∩ En)]
  + [P(E1 ∩ E2 ∩ E3) + P(E2 ∩ E3 ∩ E4) + … + P(En−2 ∩ En−1 ∩ En)]
  − … + (−1)^(n+1) P(E1 ∩ E2 ∩ … ∩ En).    (2.9)
Let A1, A2, …, An be n mutually exclusive events forming the sample space S, with P(Ai) > 0, i = 1, 2, …, n (Figure 2.7). For an arbitrary event B one has

B = B ∩ S = B ∩ (A1 ∪ A2 ∪ … ∪ An)
  = (B ∩ A1) ∪ (B ∩ A2) ∪ … ∪ (B ∩ An),

where the events B ∩ A1, B ∩ A2, …, B ∩ An are mutually exclusive. Hence, by the total probability theorem,

P(B) = Σi P(B ∩ Ai) = Σi P(Ai) P(B | Ai).    (2.10)

From the definition of conditional probability,

P(A | B) = P(A) P(B | A)/P(B).    (2.11)
This is a useful result that enables us to solve for P(A | B) in terms of P(B | A). In general, if P(B) is written using the total probability theorem, we obtain the following general result, which is known as Bayes' theorem:

P(Ai | B) = P(Ai) P(B | Ai) / Σj P(Aj) P(B | Aj).    (2.12)
The probability distribution of a random variable X is a description of the probabilities associated with the possible values of X. For a discrete random variable,
the distribution is often specified by just a list of the possible values along with the
probability of each. In some cases, it is convenient to express the probability in
terms of a formula.
Let X be a discrete random variable defined over a sample space S = {x1, x2, …, xn}. A probability can be assigned to each value in S; it is usually denoted by f(x). For a discrete random variable X, a probability distribution is a function such that

f(xi) ≥ 0,
Σ (i = 1 to n) f(xi) = 1,
f(xi) = P(X = xi).
Probability distribution is also known as probability mass function. Some examples are binomial, Poisson, and geometric distributions. The graph of a discrete
probability distribution looks like a bar chart or histogram. For example, in five
flips of a coin, where X represents the number of heads obtained, the probability
mass function is shown in Figure 2.8.
Figure 2.8 A discrete probability mass function
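The coin-flip mass function of Figure 2.8 can be computed directly; a quick sketch (anticipating the binomial formula, Equation 2.23, with n = 5 and p = 0.5):

```python
from math import comb

n, p = 5, 0.5
# f(x) = C(n, x) * p**x * (1 - p)**(n - x), the probability of x heads in 5 flips
pmf = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]
```

The values are symmetric about x = 2.5 and sum to one, matching the bar chart.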
Figure 2.9 A discrete CDF
A CDF satisfies

0 ≤ F(x) ≤ 1,
if x ≤ y then F(x) ≤ F(y).
As the elements of sample space for a continuous random variable X are infinite in
number, the probability of assuming exactly any of its possible values is zero.
Density functions are commonly used in engineering to describe physical systems.
Similarly, a probability density function (PDF) f(x) can be used to describe the probability distribution of a continuous random variable X. It satisfies

∫ (−∞ to ∞) f(x) dx = 1,
P(a ≤ X ≤ b) = ∫ (a to b) f(x) dx.

The CDF is obtained from the PDF as

F(x) = P(X ≤ x) = ∫ (−∞ to x) f(θ) dθ.    (2.13)
The PDF of a continuous random variable can be determined from the CDF by differentiating. Recall that the fundamental theorem of calculus states that

d/dx ∫ (−∞ to x) f(θ) dθ = f(x).

Hence

f(x) = dF(x)/dx.    (2.14)
The mean (expected value) of a distribution is

Mean = Σi xi f(xi) for the discrete case, and Mean = ∫ (−∞ to ∞) x f(x) dx for the continuous case.

The variance measures the spread about the mean:

Variance = Σi (xi − mean)² f(xi) for the discrete case, and Variance = ∫ (−∞ to ∞) (x − mean)² f(x) dx for the continuous case.    (2.15)
As reliability denotes failure-free operation, it can be termed the success probability. Conversely, the probability that failure occurs before time t is called the failure probability or unreliability. The failure probability can be mathematically expressed as the probability that the time to failure T falls before a specified period of time t:

F(t) = P(T < t).    (2.16)

As per probability terminology, F(t) is the CDF of the random variable T and is the complement of the reliability:

F(t) = 1 − R(t) = P(T < t).    (2.17)

Going by the first axiom of probability, the probability of the sample space is unity. The sample space for the continuous random variable T is from 0 to ∞. Mathematically, this is expressed as

P(S) = 1,
P(0 < T < ∞) = 1.
The sample space can be divided into two mutually exclusive intervals: T < t and T ≥ t. Using the third axiom of probability, we can write

P(0 < T < ∞) = 1,
P(T < t ∪ T ≥ t) = 1,
P(T < t) + P(T ≥ t) = 1, i.e., F(t) + R(t) = 1.    (2.18)

As the time to failure is a continuous random variable, the probability of T taking exactly a precise value t is zero. In this situation, it is appropriate to introduce the probability associated with a small range of values that the random variable can take on:

P(t < T < t + Δt) = F(t + Δt) − F(t).

Dividing by Δt and letting Δt → 0 gives the failure density function:

f(t) = lim (Δt → 0) [F(t + Δt) − F(t)]/Δt = dF(t)/dt = −dR(t)/dt (from Equation 2.18).

From the above derivation we have an important relation between R(t), F(t), and f(t):

f(t) = dF(t)/dt = −dR(t)/dt.    (2.19)

Integrating,

F(t) = ∫ (0 to t) f(θ) dθ,
R(t) = ∫ (t to ∞) f(θ) dθ.    (2.20)
The quantity

[R(t) − R(t + Δt)]/R(t)

is the conditional probability that the unit fails in (t, t + Δt) given that it has survived up to t; dividing by Δt,

[R(t) − R(t + Δt)]/[R(t) Δt]

is the conditional probability of failure per unit of time (failure rate). Then the hazard rate is

λ(t) = lim (Δt → 0) [R(t) − R(t + Δt)]/[R(t) Δt] = −(dR(t)/dt) (1/R(t)) = f(t)/R(t).    (2.21)

Solving this differential equation for the reliability yields

R(t) = exp[−∫ (0 to t) λ(θ) dθ].    (2.22)
Consider a trial in which the only outcome is either success or failure. A random variable X associated with this trial takes the value 1 on success and 0 on failure. The random variable X is said to be a Bernoulli random variable if its probability mass function is given by

P(X = 1) = p,
P(X = 0) = 1 − p,

where p is the probability that the trial is a success. Suppose now that n independent trials, each of which results in a success with probability p and in a failure with probability 1 − p, are to be performed. If X represents the number of successes that occur in the n trials, then X is said to be a binomial random variable with parameters n and p. The probability mass function of a binomial random variable is given by

P(X = i) = nCi p^i (1 − p)^(n−i),   i = 0, 1, 2, …, n.    (2.23)

The binomial CDF is

P(X ≤ i) = Σ (j = 0 to i) nCj p^j (1 − p)^(n−j).    (2.24)
The mean of the binomial distribution is np:

E(X) = Σ (i = 0 to n) i · nCi p^i (1 − p)^(n−i)
     = np Σ (i = 1 to n) (n−1)C(i−1) p^(i−1) (1 − p)^(n−i)
     = np Σ (j = 0 to m) mCj p^j (1 − p)^(m−j)   (with m = n − 1, j = i − 1)
     = np.
33
Example 1 In a lot of 50 items, the probability that any item is defective is 0.04. What is the probability that (i) none of the items is defective and (ii) at least one item is defective?

Solution: The number of defective items X follows a binomial distribution,

P(X = x) = nCx p^x q^(n−x),   with n = 50, p = 0.04, q = 0.96.

(i) In the case of zero defects, i.e., P(X = 0),

P(X = 0) = 50C0 (0.04)^0 (0.96)^50 = 0.1299.

(ii) The probability of at least one defective item is the complement:

P(X ≥ 1) = 1 − P(X = 0) = 1 − 0.1299 = 0.8701.
Example 2 A system consists of three identical components, each with success probability p = 0.99. The system succeeds if at least two components succeed. Calculate the system failure and success probabilities.

Solution: The number of successful components X follows a binomial distribution, P(X = x) = 3Cx p^x q^(3−x) with p = 0.99 and q = 0.01.

Formula                          Value
(i)   P(X = 0) = 3C0 p^0 q^3     P(0) = 1 × 10⁻⁶
(ii)  P(X = 1) = 3C1 p^1 q^2     P(1) = 2.97 × 10⁻⁴
(iii) P(X = 2) = 3C2 p^2 q^1     P(2) = 2.9 × 10⁻²
(iv)  P(X = 3) = 3C3 p^3 q^0     P(3) = 0.970299

Now the failure probability is the sum of (i) and (ii), which is obtained as 2.98 × 10⁻⁴, and the success probability is the sum of (iii) and (iv), which is obtained as 0.999702.
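The sums above can be checked with a short script (the 2-out-of-3 success criterion and p = 0.99 follow the worked values in the table):

```python
from math import comb

p, q, n = 0.99, 0.01, 3
# P(X = x) for x successful components out of three
P = [comb(n, x) * p**x * q**(n - x) for x in range(n + 1)]
failure = P[0] + P[1]   # fewer than two components succeed
success = P[2] + P[3]   # at least two components succeed
```

As expected, failure + success = 1.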
The Poisson distribution is useful for modeling when the event occurrences are discrete and the interval is continuous. For a trial to be a Poisson process, it has to satisfy the following conditions: the occurrences are independent of one another, the rate of occurrence λ is constant, and two or more occurrences cannot happen simultaneously.
A random variable X is said to have a Poisson distribution if the probability distribution is given by

f(x) = e^(−λt) (λt)^x / x!,   x = 0, 1, 2, …    (2.25)

The CDF is

F(x) = Σ (i = 0 to x) f(X = i).    (2.26)
The probability mass function and CDF for λ = 1.5/year and t = 1 year are shown in Figure 2.12. Both the mean and variance of the Poisson distribution are λt.
If the probability of occurrence is near zero and the sample size is very large, the Poisson distribution may be used to approximate the binomial distribution.
Example 3 If the rate of failure for an item is twice a year, what is the probability that no failure will happen over a period of 2 years?

Solution: With λ = 2/year, t = 2 years, and x = 0,

P(X = 0) = e^(−λt) (λt)^0 / 0! = e^(−2×2) (2 × 2)^0 / 0! = e⁻⁴ = 0.0183.
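Example 3 can be reproduced with Equation 2.25 directly:

```python
from math import exp, factorial

def poisson_pmf(x, lam, t):
    # Equation 2.25: probability of x occurrences in time t at rate lam
    return exp(-lam * t) * (lam * t) ** x / factorial(x)

p0 = poisson_pmf(0, lam=2, t=2)   # no failures in 2 years at 2 failures/year
```

`p0` is e⁻⁴ ≈ 0.0183, as in the example.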
Figure 2.12 Probability functions for Poisson distribution: (a) probability mass function and
(b) CDF
When a sample of n items is drawn without replacement from a population of N items of which K possess the attribute of interest, the number of such items in the sample follows the hypergeometric distribution:

f(x) = KCx · (N−K)C(n−x) / NCn,   x = 0, 1, 2, 3, 4, …, n.    (2.27)

The mean of the hypergeometric distribution is

nK/N,    (2.28)

and its variance is

n (K/N)(1 − K/N)(N − n)/(N − 1).    (2.29)

In a sequence of independent Bernoulli trials with success probability p, the number of trials up to and including the first success follows the geometric distribution

f(x) = p(1 − p)^(x−1),   x = 1, 2, 3, …,    (2.30)

with mean

1/p

and variance

(1 − p)/p².

The geometric distribution is the only discrete distribution which exhibits the memoryless property, as does the exponential distribution in the continuous case.
The exponential distribution is the most widely used distribution in reliability and risk assessment. It is the only distribution having a constant hazard rate and is used to model the useful life of many engineering systems. The exponential distribution is closely related to the Poisson distribution, which is discrete. If the number of failures per unit time is Poisson distributed, then the time between failures follows an exponential distribution. The PDF of the exponential distribution is

f(t) = λ e^(−λt)   for 0 ≤ t,
     = 0           for t < 0.    (2.31)
The exponential PDFs are shown in Figure 2.13 for different values of λ.
The CDF and the reliability function are obtained by integration:

F(t) = ∫ (0 to t) λ e^(−λθ) dθ = 1 − e^(−λt),    (2.32)

R(t) = 1 − F(t) = e^(−λt).    (2.33)
The exponential reliability functions are shown in Figure 2.14 for different values of λ.
The hazard function is the ratio of the PDF to the reliability function; for the exponential distribution it is

h(t) = f(t)/R(t) = λ e^(−λt)/e^(−λt) = λ.    (2.34)
The exponential hazard function is the constant λ. This is the reason for the memoryless property of the exponential distribution. The memoryless property means that the probability of failure in a specific time interval is the same regardless of the starting point of that time interval.
Mean and Variance of the Exponential Distribution

E(T) = ∫ (0 to ∞) t f(t) dt = ∫ (0 to ∞) t λ e^(−λt) dt.

Integrating by parts,

E(T) = [−t e^(−λt)] (0 to ∞) + ∫ (0 to ∞) e^(−λt) dt = 0 + [−e^(−λt)/λ] (0 to ∞) = 1/λ.

Thus the mean time to failure of the exponential distribution is the reciprocal of the failure rate. For the variance,

Variance(T) = E(T²) − (mean)²,

E(T²) = ∫ (0 to ∞) t² f(t) dt = ∫ (0 to ∞) t² λ e^(−λt) dt.

Integrating by parts,

E(T²) = [−t² e^(−λt)] (0 to ∞) + ∫ (0 to ∞) 2t e^(−λt) dt = 0 + (2/λ) ∫ (0 to ∞) t λ e^(−λt) dt.

But the integral term in the above expression is E(T), which is equal to 1/λ; substituting the same,

E(T²) = (2/λ)(1/λ) = 2/λ².

Now the variance is

Variance = 2/λ² − 1/λ² = 1/λ².    (2.35)
Example 4 The failure time T of an electronic circuit board follows an exponential distribution with failure rate λ = 10⁻⁴/h. What is the probability that (i) it will fail before 1000 h; (ii) it will survive at least 10,000 h; (iii) it will fail between 1000 h and 10,000 h? Determine (iv) the mean time to failure and (v) the median time to failure.
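The solution of Example 4 is not reproduced in this excerpt; a quick numerical sketch using Equations 2.32 and 2.33:

```python
from math import exp, log

lam = 1e-4  # failure rate, per hour

def F(t):
    # CDF of the exponential distribution, Equation 2.32
    return 1 - exp(-lam * t)

p1 = F(1000)              # (i) fails before 1000 h
p2 = exp(-lam * 10000)    # (ii) survives at least 10,000 h (Equation 2.33)
p3 = F(10000) - F(1000)   # (iii) fails between 1000 h and 10,000 h
mttf = 1 / lam            # (iv) mean time to failure, 1/lambda
median = log(2) / lam     # (v) median: solve R(t) = 0.5
```

This gives (i) ≈ 0.0952, (ii) ≈ 0.3679, (iii) ≈ 0.5370, (iv) 10,000 h, and (v) ≈ 6931 h.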
The normal distribution is the most important and widely used distribution in the entire field of statistics and probability. It is also known as the Gaussian distribution, and it was introduced as early as 1733. The normal distribution often occurs in practical applications because the sum of a large number of statistically independent random variables converges to a normal distribution (the central limit theorem). The normal distribution can be used to represent the wear-out region of the bath-tub curve, where fatigue and aging can be modeled. It is also used in stress-strength interference models in reliability studies. The PDF of the normal distribution is

f(t) = (1/(σ√(2π))) exp[−(1/2)((t − μ)/σ)²],   −∞ < t < ∞,    (2.36)

where μ and σ are the parameters of the distribution. The distribution is bell-shaped and symmetrical about its mean, with the spread of the distribution determined by σ. It is shown in Figure 2.15.
The normal distribution is not a true reliability distribution since its random variable ranges from −∞ to +∞. But if the mean μ is positive and is larger than σ severalfold, the probability that the random variable T takes negative values is negligible, and the normal distribution can therefore be a reasonable approximation to a failure process.
The normal reliability function and CDF are

R(t) = ∫ (t to ∞) (1/(σ√(2π))) exp[−(1/2)((t′ − μ)/σ)²] dt′,    (2.37)

F(t) = ∫ (−∞ to t) (1/(σ√(2π))) exp[−(1/2)((t′ − μ)/σ)²] dt′.    (2.38)

As there is no closed-form solution to these integrals, the reliability and CDF are often expressed as functions of the standard normal distribution (μ = 0 and σ = 1) (Figure 2.16). Transformation to the standard normal distribution is achieved with the expression

z = (t − μ)/σ.
42
( z) =
z2
2
dz .
(2.39)
Table A.1 (see Appendix) provides the cumulative probability of the standard normal distribution. This can be used to find the cumulative probability of any normal distribution. However, these tables are becoming unnecessary, as electronic spreadsheets (e.g., Microsoft Excel) have built-in statistical functions.
The hazard function of the normal distribution is

h(t) = f(t)/R(t) = f(t)/[1 − Φ(z)].    (2.40)
Example 5 The failure times (in hours) of 10 components are given in Table 2.5. Assuming the data follow a normal distribution, estimate the parameters and the hazard rate at t = 1000 h.

Solution: The sample mean is

x̄ = Σxi/n = 9889/10 = 988.9.

The sample standard deviation σ is

σ = √[(n Σxi² − (Σxi)²)/(n(n − 1))] = 84.8455.

(Calculations are given in Table 2.5.) The instantaneous failure rate is given by the hazard function:

h(t) = f(t)/R(t) = f(t)/[1 − Φ(z)],

h(1000) = f(1000)/R(1000) = 0.0046619/(1 − 0.552) = 0.0104.
Table 2.5 Calculations for the sample mean and standard deviation

xi         xi²
850        722500
890        792100
921        848241
955        912025
980        960400
1025       1050625
1036       1073296
1047       1096209
1065       1134225
1120       1254400
Σxi = 9889   Σxi² = 9844021
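The sample statistics and the hazard value above can be checked numerically (the normal PDF and CDF are written out with `math` functions):

```python
from math import erf, exp, pi, sqrt

x = [850, 890, 921, 955, 980, 1025, 1036, 1047, 1065, 1120]
n = len(x)
mean = sum(x) / n                                                  # 9889 / 10
# sample standard deviation from the n*sum(x^2) - (sum x)^2 form used in the text
sigma = sqrt((n * sum(v * v for v in x) - sum(x) ** 2) / (n * (n - 1)))

t = 1000.0
z = (t - mean) / sigma
pdf = exp(-z * z / 2) / (sigma * sqrt(2 * pi))                     # f(1000)
cdf = 0.5 * (1 + erf(z / sqrt(2)))                                 # Phi(z)
h = pdf / (1 - cdf)                                                # Equation 2.40
```

This reproduces x̄ = 988.9, σ ≈ 84.85, and h(1000) ≈ 0.0104.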
The PDF of the lognormal distribution is

f(t) = (1/(σ t √(2π))) exp[−(1/2)((ln t − μ)/σ)²],   t > 0,    (2.41)

where μ and σ are known as the location parameter and shape parameter, respectively. The shape of the distribution changes with different values of σ, as shown in Figure 2.18.
The reliability and CDF of the lognormal distribution are

R(t) = 1 − Φ[(ln t − μ)/σ],    (2.42)

F(t) = Φ[(ln t − μ)/σ].    (2.43)
The mean and variance of the lognormal distribution are

E(t) = e^(μ + σ²/2),    (2.44)

V(t) = e^(2μ + σ²) (e^(σ²) − 1).    (2.45)
Example 6 Determine the mean and variance of the time to failure for a system having lognormally distributed failure time with μ = 5 years and σ = 0.8.

Solution: The mean of the lognormal distribution is

E(t) = e^(μ + σ²/2) = e^(5 + 0.8²/2) = e^5.32 = 204.3839.

The variance of the lognormal distribution is

V(t) = e^(2μ + σ²) (e^(σ²) − 1) = e^(10.64) (e^(0.64) − 1) ≈ 37448.
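Example 6 can be verified directly from Equations 2.44 and 2.45:

```python
from math import exp

mu, sigma = 5.0, 0.8
mean = exp(mu + sigma**2 / 2)                        # Equation 2.44
var = exp(2 * mu + sigma**2) * (exp(sigma**2) - 1)   # Equation 2.45
```

This gives a mean of about 204.38 and a variance of about 3.74 × 10⁴.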
The PDF of the Weibull distribution is

f(t) = (β/η)(t/η)^(β−1) exp[−(t/η)^β],   t > 0,    (2.46)
Figure 2.21 Weibull PDF
47
where η and β are known as the scale parameter (or characteristic life) and the shape parameter, respectively. An important property of the Weibull distribution is that as β increases, the mean of the distribution approaches η and the variance approaches zero. The effect of β on the shape of the distribution can be seen in Figure 2.21, drawn for different values of β (η = 10 is assumed in all the cases).
It is interesting to see from Figure 2.21 that for particular values of β the Weibull distribution is identical to, or approximately matches, several other distributions. Due to this flexibility, the Weibull distribution provides a good model for much of the failure data found in practice. Table 2.6 summarizes this behavior.
Table 2.6 Distributions with different values of β

β      Remarks
1      Identical to exponential
2      Identical to Rayleigh
2.5    Approximates lognormal
3.6    Approximates normal
The CDF and reliability function of the Weibull distribution are

F(t) = 1 − exp[−(t/η)^β],    (2.47)

R(t) = exp[−(t/η)^β].    (2.48)

The Weibull reliability functions for different values of β are shown in Figure 2.22.
The Weibull hazard function is

h(t) = (β/η)(t/η)^(β−1).    (2.49)

The effects of β on the hazard function are demonstrated in Figure 2.23. All three regions of the bath-tub curve can be represented by varying the β value: β < 1 gives a decreasing failure rate, β = 1 a constant failure rate, and β > 1 an increasing failure rate.
The mean of the Weibull distribution is

Mean = ∫ (0 to ∞) t f(t) dt = ∫ (0 to ∞) t (β/η)(t/η)^(β−1) exp[−(t/η)^β] dt.

Let

x = (t/η)^β,   so that t = η x^(1/β) and dx = (β/η)(t/η)^(β−1) dt.

Then

Mean = ∫ (0 to ∞) η x^(1/β) e^(−x) dx = η Γ(1 + 1/β),    (2.50)

where the gamma function is Γ(x) = ∫ (0 to ∞) y^(x−1) e^(−y) dy. Similarly, the variance is

σ² = η² [Γ(1 + 2/β) − Γ²(1 + 1/β)].    (2.51)
Example 7 The failure time of a component follows a Weibull distribution with shape parameter β = 1.5 and scale parameter η = 10,000 h. Determine the time at which the reliability of the component is 0.95.

Solution: The Weibull reliability function is

R(t) = exp[−(t/η)^β],
0.95 = exp[−(t/10000)^1.5].

Taking logarithms,

(t/10000)^1.5 = −ln 0.95 = 0.051293,
1.5 (ln t − ln 10000) = ln 0.051293,
ln t = ln 10000 + (1/1.5) ln 0.051293,
t = 1380.38 h.
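The inversion of the Weibull reliability function above can be done in one line:

```python
from math import exp, log

beta, eta, R = 1.5, 10000.0, 0.95
# invert R(t) = exp(-(t/eta)**beta) for t
t = eta * (-log(R)) ** (1 / beta)
```

Evaluating `exp(-(t/eta)**beta)` recovers 0.95, confirming the inversion, and t is about 1380 h.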
As the name suggests, the gamma distribution derives its name from the well-known gamma function. It is similar to the Weibull distribution in that, by varying the parameter of the distribution, a wide range of other distributions can be derived. The gamma distribution is often used to model the lifetimes of systems. If an event takes place after n exponentially distributed events take place sequentially, the resulting random variable follows a gamma distribution. Examples of its application include the time to failure of a system consisting of n independent components, with n − 1 of them being standby components; the time between maintenance actions for a system that requires maintenance after a fixed number of uses; and the time to failure of a system which fails after n shocks. The gamma PDF is

f(t) = γ(t; α, λ) = (λ^α/Γ(α)) t^(α−1) e^(−λt),   t ≥ 0,    (2.52)

where Γ(α) = ∫ (0 to ∞) x^(α−1) e^(−x) dx. The gamma CDF is

F(t) = ∫ (0 to t) (λ^α/Γ(α)) t^(α−1) e^(−λt) dt.    (2.53)
The gamma CDF in general does not have a closed-form solution. However, tables are available giving the values of the CDF of the standard gamma distribution function. The gamma PDF is plotted for different values of α in Figure 2.24.
Parameter value        Distribution
α = 1                  Exponential
α = integer            Erlangian
α = v/2, λ = 1/2       Chi-square
α large                Approximates normal
The mean and variance of the gamma distribution are

Mean = α/λ,    (2.54)

Variance = α/λ².    (2.55)
For integer values of α, the gamma PDF is known as the Erlangian PDF:

f(t) = λ(λt)^(α−1) e^(−λt)/(α − 1)!.    (2.56)

Its reliability function is

R(t) = Σ (k = 0 to α−1) (λt)^k e^(−λt)/k!,    (2.57)

and its hazard function is

h(t) = λ(λt)^(α−1) / [(α − 1)! Σ (k = 0 to α−1) (λt)^k/k!].    (2.58)
By changing the value of α, all three phases of the bath-tub curve can be represented (Figure 2.25): if α < 1, the failure rate is decreasing; if α = 1, the failure rate is constant; and if α > 1, the failure rate is increasing.
A special case of the gamma distribution, with λ = 1/2 and α = v/2, is the chi-square (χ²) distribution, which is used in goodness-of-fit tests and in setting confidence limits.
The chi-square PDF is

χ²(x, v) = f(x) = (1/(2^(v/2) Γ(v/2))) x^(v/2−1) e^(−x/2),   x > 0.    (2.59)
The sum of squares of v independent standard normal random variables has a chi-square distribution with v degrees of freedom.
The chi-square PDF is plotted for v = 1, 6, and 12.
If χ1² and χ2² are independent chi-square random variables with v1 and v2 degrees of freedom, then the ratio

F = (χ1²/v1)/(χ2²/v2)    (2.60)

follows an F-distribution with v1 and v2 degrees of freedom.
The PDF of the F-distribution is

f(F) = [Γ((v1 + v2)/2)/(Γ(v1/2) Γ(v2/2))] (v1/v2)^(v1/2) F^(v1/2 − 1) [1 + (v1/v2)F]^(−(v1+v2)/2),   F > 0.    (2.61)
The F PDF is plotted for (v1, v2) = (18, 6), (6, 18), and (5, 5).
If Fα(v1, v2) denotes the value that leaves an area α under the F PDF, with degrees of freedom v1 and v2, to its right, then

F(1−α)(v1, v2) = 1/Fα(v2, v1).    (2.62)
The ratio of the scaled sample variances of two normal populations,

(s1²/σ1²)/(s2²/σ2²) = (σ2² s1²)/(σ1² s2²),    (2.63)

follows an F-distribution and is used for comparing variances.
If Z is a normally distributed random variable and the independent random variable χ² follows a chi-square distribution with v degrees of freedom, then the random variable t defined by

t = Z/√(χ²/v)    (2.64)

follows a Student's t-distribution with v degrees of freedom.
Table 2.8 covers the applications of the distributions discussed in this chapter: Poisson, binomial, exponential, Weibull, lognormal, normal, gamma, chi-square, F, and t.
The PDF of the t-distribution is

f(t) = [Γ((v + 1)/2)/(Γ(v/2)√(vπ))] (1 + t²/v)^(−(v+1)/2),   −∞ < t < ∞.    (2.65)
Like the standard normal density, the t-density is symmetrical about zero. In addition, as v becomes larger, it becomes more and more like the standard normal density. Further, E(t) = 0 and V(t) = v/(v − 2) for v > 2.
2.4.3 Summary
The summary of applications of the various distributions is described in Table 2.8.
In the classical sense, the probability of success of a trial is

P = ns/N = ns/(ns + nf),    (2.66)

where ns is the number of favorable outcomes, nf is the number of unfavorable outcomes, and N = ns + nf is the total number of trials.
When N units are tested, let us assume that ns(t) units survive the life test after
time t and that nf(t) units have failed over the time t. Using the above equation, the
reliability of such a unit can be expressed as
R(t) = ns(t)/N = ns(t)/[ns(t) + nf(t)].    (2.67)
This definition of reliability assumes that the test is conducted over a large
number of identical units.
The unreliability Q(t) of the unit is the probability of failure over time t; it is equivalent to the CDF, F(t):

Q(t) ≡ F(t) = nf(t)/N.    (2.68)
We know that the derivative of the CDF of a continuous random variable gives
the PDF. In reliability studies, the failure density function f(t) associated with failure time of a unit can be defined as follows:
f(t) = dF(t)/dt = dQ(t)/dt = (1/N) dnf(t)/dt = lim (Δt → 0) [nf(t + Δt) − nf(t)]/(N Δt).    (2.69)
The hazard rate can be derived from Equation 2.21 by substituting f(t) and R(t)
as expressed below
h(t) = lim (Δt → 0) [nf(t + Δt) − nf(t)]/[ns(t) Δt].    (2.70)
Equations 2.67, 2.69, and 2.70 can be used for computing reliability, failure
density, and hazard functions from the given failure data.
The preliminary information on the underlying failure model can be obtained
if we plot the failure density, hazard rate, and reliability functions against time.
We can define piecewise continuous functions for these three characteristics by selecting some small time interval Δt. In the limiting condition Δt → 0, or when the data set is large, this discretization approaches the continuous analysis. The number of intervals can be decided based on the range of the data and the accuracy desired: the higher the number of intervals, the better the accuracy of the results, but the computational effort increases considerably with a large number of intervals. An optimum number of intervals is given by Sturges [4], which can be used to analyze the data. If n is the optimum number of intervals and N is the total number of failures, then
(2.71)
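Sturges' rule (Equation 2.71) is a one-liner:

```python
from math import log10

def sturges(N):
    """Optimum number of histogram intervals for N failures (Equation 2.71)."""
    return 1 + 3.3 * log10(N)

n = sturges(30)   # ~5.87, rounded up to 6 intervals in the example below
```

For the 30 lamp failures analyzed next, this yields about 5.87, i.e., 6 intervals.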
Example 8 The failure times (in hours) of 30 identical lamps are given in Table 2.9. Obtain nonparametric estimates of R(t), F(t), f(t), and h(t) from these data.

Solution:
Table 2.9 Failure data

Lamp  Failure time   Lamp  Failure time   Lamp  Failure time
1     5495.05        11    3511.42        21    4037.11
2     8817.71        12    6893.81        22    933.79
3     539.66         13    1853.83        23    1485.66
4     2253.02        14    3458.4         24    4158.11
5     18887          15    7710.78        25    6513.43
6     2435.62        16    324.61         26    8367.92
7     99.33          17    866.69         27    1912.24
8     3716.24        18    6311.47        28    13576.97
9     12155.56       19    3095.62        29    1843.38
10    552.75         20    927.41         30    4653.99
The optimum number of intervals as per Sturges' formula (Equation 2.71) with N = 30 is

n = 1 + 3.3 log(30) = 5.87 ≈ 6.

In order to group the failure times into intervals, the data are arranged in increasing order. Table 2.10 shows the data in ascending order of failure time. The minimum and maximum failure times are 99.33 and 18,887 respectively.
Table 2.10 Failure data in ascending order

Bulb  Failure time   Bulb  Failure time   Bulb  Failure time
1     99.33          11    1912.24        21    5495.05
2     324.61         12    2253.02        22    6311.47
3     539.66         13    2435.62        23    6513.43
4     552.75         14    3095.62        24    6893.81
5     866.69         15    3458.4         25    7710.78
6     927.41         16    3511.42        26    8367.92
7     933.79         17    3716.24        27    8817.71
8     1485.66        18    4037.11        28    12155.56
9     1843.38        19    4158.11        29    13576.97
10    1853.83        20    4653.99        30    18887
Interval size = Δti = (18887 − 99.33)/6 = 3131.27 ≈ 3150.

We can now develop a table showing the intervals and corresponding values of R(t), F(t), f(t), and h(t). The following notation is used:

f(ti) = nf(ti)/(N Δti),
h(ti) = nf(ti)/[ns(ti) Δti].

The summary of calculations is shown in Table 2.11.
Table 2.11 Summary of calculations

Interval       ns(ti)  nf(ti)  R(ti)  F(ti)  f(ti)     h(ti)
0–3150         30      14      1      0      1.48E−4   1.48E−4
3151–6300      16      7       0.53   0.47   7.4E−5    1.38E−4
6301–9450      9       6       0.3    0.7    6.35E−5   2.11E−4
9451–12600     3       1       0.1    0.9    1.06E−5   1.05E−4
12601–15750    2       1       0.066  0.934  1.06E−5   1.58E−4
15751–18900    1       1       0.033  0.967  1.06E−5   3.17E−4
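The whole nonparametric table can be generated mechanically from the raw failure times of Table 2.9 using Equations 2.67, 2.69, and 2.70:

```python
times = [5495.05, 8817.71, 539.66, 2253.02, 18887, 2435.62, 99.33, 3716.24,
         12155.56, 552.75, 3511.42, 6893.81, 1853.83, 3458.4, 7710.78, 324.61,
         866.69, 6311.47, 3095.62, 927.41, 4037.11, 933.79, 1485.66, 4158.11,
         6513.43, 8367.92, 1912.24, 13576.97, 1843.38, 4653.99]
N, dt = len(times), 3150.0
rows = []
for k in range(6):
    lo, hi = k * dt, (k + 1) * dt
    ns_start = sum(t > lo for t in times)     # survivors entering the interval
    nf = sum(lo < t <= hi for t in times)     # failures within the interval
    R = ns_start / N                          # Equation 2.67 at the interval start
    rows.append((lo, hi, ns_start, nf, R, 1 - R,
                 nf / (N * dt),               # f(ti), failure density (Eq. 2.69)
                 nf / (ns_start * dt)))       # h(ti), hazard rate (Eq. 2.70)
```

The resulting rows reproduce the ns, nf, R, F, f, and h columns of Table 2.11.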
The piecewise estimates of f(t), h(t), and R(t)/F(t) are plotted against time in the accompanying figures.
In the parametric approach, fitting a theoretical distribution consists of the following three steps:
1. identifying candidate distributions;
2. estimating the parameters of distributions;
3. performing a goodness-of-fit test.
In the earlier section on nonparametric methods, we saw how one can obtain empirical distributions or histograms from the basic failure data. This exercise helps one to guess a failure distribution that might be employed to model the failure data, but it says nothing about the appropriateness of the choice. Probability plots provide a method of evaluating the fit of a set of data to a distribution.
A probability plot is a graph in which the scales have been changed in such a
manner that the CDF associated with a given family of distributions, when represented graphically on that plot, becomes a straight line. Since straight lines are
easily identifiable, a probability plot provides a better visual test of a distribution
than comparison of a histogram with a PDF. Probability plots provide a quick
method to analyze, interpret, and estimate the parameters associated with a model.
Probability plots may also be used when the sample size is too small to construct
histograms and may be used with incomplete data.
The approach to probability plots is to fit a linear regression line of the form
mentioned below to a set of transformed data:
y = mx + c .
(2.72)
The nature of the transform will depend on the distribution under consideration.
If the data of failure times fit the assumed distribution, the transformed data will
graph as a straight line.
Consider an exponential distribution whose CDF is F(t) = 1 − e^(−λt). Rearranging to 1 − F(t) = e^(−λt) and taking the natural logarithm of both sides,

ln(1 − F(t)) = ln(e^(−λt)),
ln(1 − F(t)) = −λt,
ln[1/(1 − F(t))] = λt.

Now if y = ln[1/(1 − F(t))] is plotted on the ordinate against t on the abscissa, the plot will be a straight line with a slope of λ.
The failure data are generally available in terms of the failure times of n items that have failed during a test conducted on the original population of N items. Since F(t) is not available, we can make use of its expected value,

E[F(ti)] = i/(N + 1).    (2.73)
Example 9 Table 2.12 gives a chronological sequence of the grid supply outages
at a process plant. Using a probability plotting method, identify the possible distributions.
Solution:
Table 2.13 gives the summary of calculations for x and y coordinates. These are
plotted in Figure 2.31.
Table 2.12 Chronological sequence of grid outages

No.  Date/time          Cumulative days  TBF (days)
1    11.04.1998/14:35   101              101
2    17.06.1998/12:30   168              67
3    24.07.1998/09:19   205              37
4    13.08.1999/10:30   590              385
5    27.08.1999         604              14
6    21.11.1999         721              117
7    02.01.2000         763              42
8    01.05.2000/15:38   882              119
9    27.10.2000/05:56   1061             179
10   14.05.2001         1251             190
11   03.07.2001/09:45   1301             50
12   12.07.2002/18:50   1674             374
13   09.05.2003/08:43   1976             301
14   28.12.2005         2940             964
15   02.05.2006/11:02   3065             125
16   17.05.2007/11:10   3445             380
17   02.06.2007/16:30   3461             16
Table 2.13 Time between failure (TBF) values for outage of class IV (for Weibull plotting)

i    t (days)  F(ti)      y = ln(ln(1/R(t)))  x = ln(t)
1    14        0.04023    −3.19268            2.639057
2    16        0.097701   −2.27488            2.772589
3    37        0.155172   −1.78009            3.610918
4    42        0.212644   −1.43098            3.73767
5    50        0.270115   −1.1556             3.912023
6    67        0.327586   −0.92412            4.204693
7    101       0.385057   −0.72108            4.615121
8    117       0.442529   −0.53726            4.762174
9    119       0.5        −0.36651            4.779123
10   125       0.557471   −0.20426            4.828314
11   179       0.614943   −0.04671            5.187386
12   190       0.672414   0.109754            5.247024
13   301       0.729885   0.269193            5.70711
14   374       0.787356   0.437053            5.924256
15   380       0.844828   0.622305            5.940171
16   385       0.902299   0.844082            5.953243
17   964       0.95977    1.16725             6.871091
The least-squares fit to the plotted points (Figure 2.31) is y = 0.996x − 5.2748.
The shape parameter is β = 0.996, and the scale parameter is η = e^5.2748 = 194.4 days. As the shape parameter is close to unity, the data fit an exponential distribution.
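The coordinates and regression of Table 2.13 can be reproduced from the raw TBF values. One assumption here: the F(ti) column of the table matches Benard's median-rank approximation (i − 0.3)/(N + 0.4) rather than the mean rank i/(N + 1) of Equation 2.73.

```python
from math import log

tbf = [14, 16, 37, 42, 50, 67, 101, 117, 119, 125, 179, 190, 301, 374, 380, 385, 964]
N = len(tbf)
# (x, y) Weibull plotting coordinates with median ranks for F(ti)
pts = [(log(t), log(log(1 / (1 - (i - 0.3) / (N + 0.4)))))
       for i, t in enumerate(sorted(tbf), start=1)]

# least-squares fit of y = m*x + c
xbar = sum(x for x, _ in pts) / N
ybar = sum(y for _, y in pts) / N
m = sum((x - xbar) * (y - ybar) for x, y in pts) / sum((x - xbar) ** 2 for x, _ in pts)
c = ybar - m * xbar
```

The slope m is the shape parameter estimate (close to 1) and the intercept c is near −5.27, as in the plotted fit.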
Table 2.14 summarizes (x, y) coordinates of various distributions used in probability plotting.
Table 2.14 Coordinates of distributions for probability plotting

Distribution  CDF                         (x, y)                            y = mx + c
Exponential   F(t) = 1 − e^(−λt)          (t, ln[1/(1 − F(t))])             m = λ;  c = 0
Weibull       F(t) = 1 − e^(−(t/η)^β)     (ln t, ln ln[1/(1 − F(t))])       m = β;  c = β ln(1/η)
Normal        F(t) = Φ[(t − μ)/σ]         (t, Φ⁻¹[F(t)])                    m = 1/σ;  c = −μ/σ
Let the failure times t1, t2, …, tn represent observed data from a population distribution whose PDF is f(t | θ1, …, θk), where θi is a parameter of the distribution. Then the problem is to find the likelihood function given by

L(θ1, …, θk) = Π (i = 1 to n) f(ti | θ1, …, θk).    (2.74)

The objective is to find the values of the estimators of θ1, …, θk that render the likelihood function as large as possible for the given values of t1, t2, …, tn. As the likelihood function is in multiplicative form, it is easier to maximize log L instead of L; the two are equivalent, since maximizing L is the same as maximizing log L.
By taking partial derivatives of the equation with respect to θ1, …, θk and setting these equal to zero, the necessary conditions for finding the MLEs can be obtained:

∂ln L(θ1, …, θk)/∂θi = 0,   i = 1, 2, …, k.    (2.75)
Exponential MLE

For the exponential distribution, L(λ) = Π (j = 1 to n) λ e^(−λtj), so

ln L(λ) = n ln λ − λ Σ (j = 1 to n) tj.    (2.76)

Setting the derivative to zero,

∂ln L/∂λ = n/λ − Σ (j = 1 to n) tj = 0,    (2.77)

so the MLE of the failure rate is

λ̂ = n / Σ (j = 1 to n) tj.    (2.78)
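A minimal numerical sketch of the exponential MLE; the failure times here are hypothetical, not from the text:

```python
# hypothetical failure times in hours
failure_times = [55.0, 120.0, 310.0, 420.0, 580.0]
n = len(failure_times)

lam_hat = n / sum(failure_times)   # MLE of the failure rate (Equation 2.78)
mttf_hat = 1 / lam_hat             # estimated mean time to failure
```

For exponential data the MLE of the MTTF is simply the sample mean of the failure times.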
The point estimate provides the best single estimate of the parameter, whereas interval estimation offers the bounds within which the parameter would lie; in other words, it provides a confidence interval for the parameter. A confidence interval gives a range of values within which we have a high degree of confidence that the distribution parameter lies.
Since there is always uncertainty associated with this parameter estimation, it is essential to find the upper and lower confidence limits of these parameters.
Upper and Lower Confidence of the Failure Rate
The chi-square distribution is used to find the upper and lower confidence limits
of the mean time to failure. The chi-square equations are given as follows:
LC
UC
2 T
2
2r , / 2
2 T
2
2r , 1 / 2
(2.79)
where:
(2.80)
The mean time θ represents the mean time between failures or the mean time to failure. When the failure model follows an exponential distribution, the failure rate can be expressed as

λ = 1/θ.

Thus, the inverses of θLC and θUC give the maximum and minimum possible values of the failure rate, i.e., the upper and lower confidence limits of the failure rate.
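A rough numerical sketch of these confidence limits can be made with the chi-square percentiles from scipy.stats; the values of T, r, and α below are assumed for illustration only.

```python
from scipy.stats import chi2

# Illustrative evaluation of the MTTF confidence limits (Equations 2.79-2.80).
# T, r and alpha are assumed values, not data from the text.
T = 10.0      # total test time (years)
r = 17        # number of failures
alpha = 0.10  # 90% two-sided confidence

theta_hat = T / r  # point estimate of the MTTF

# chi2(2r, p) is taken here as the value exceeded with probability p,
# i.e. the (1 - p) percentile, so scipy's ppf is called at 1 - p.
theta_lc = 2 * T / chi2.ppf(1 - alpha / 2, 2 * r)  # lower confidence limit
theta_uc = 2 * T / chi2.ppf(alpha / 2, 2 * r)      # upper confidence limit
print(theta_lc, theta_hat, theta_uc)
```

The point estimate always lies between the two limits; the inverses of these limits bound the failure rate, as noted above.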
Upper and Lower Confidence Limits of the Demand Failure Probability

In the case of demand failure probability, the F-distribution is used to derive the upper and lower confidence limits:
PLC = r / [r + (D − r + 1) F0.95(2D − 2r + 2, 2r)],  (2.81)

PUC = (r + 1) F0.95(2r + 2, 2D − 2r) / [D − r + (r + 1) F0.95(2r + 2, 2D − 2r)],  (2.82)
where:
PLC, PUC = lower and upper confidence limits for demand failure probabilities;
r = number of failures;
D = number of demands;
F0.95 = 95th percentile of the F-distribution (Table A.4).
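Equations 2.81 and 2.82 can be evaluated with the F-distribution percentiles from scipy.stats; r and D below are assumed values for illustration.

```python
from scipy.stats import f

# Illustrative evaluation of the demand-failure-probability limits
# (Equations 2.81-2.82); r and D below are assumed values.
r, D = 2, 100

p_hat = r / D  # point estimate of the demand failure probability

# Lower confidence limit (Equation 2.81)
f_low = f.ppf(0.95, 2 * D - 2 * r + 2, 2 * r)
p_lc = r / (r + (D - r + 1) * f_low)

# Upper confidence limit (Equation 2.82)
f_up = f.ppf(0.95, 2 * r + 2, 2 * D - 2 * r)
p_uc = (r + 1) * f_up / (D - r + (r + 1) * f_up)
print(p_lc, p_hat, p_uc)
```

As expected, the interval brackets the point estimate r/D, and it is noticeably asymmetric when r is small.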
Example 10 Estimate the point and 90% confidence interval for the data given in
the previous example on grid outage in a process plant.
Solution: Total number of outages = 17; total period = 10 years; mean failure rate
= 17/10 = 1.7/year = 1.94 × 10⁻⁴/h.
The lower (5%) and upper (95%) limits of the failure rate for a time-terminated test are obtained from the chi-square (χ²) distribution as

χ²(α/2; 2r) / 2T ≤ λ ≤ χ²(1 − α/2; 2r + 2) / 2T,  (2.83)

where:
α = 100 − 90 = 10%;
r = number of failures = 17;
T = 10 years.

Substituting,

χ²(0.05; 2 × 17) / (2 × 10) ≤ λ ≤ χ²(0.95; 2 × 17 + 2) / (2 × 10).

Obtaining the respective values from the χ² tables (Table A.3), 1.077 ≤ λ ≤ 2.55.
The mean value of the grid outage frequency is 1.7/year (1.94 × 10⁻⁴/h), with lower and upper limits of 1.077/year (1.23 × 10⁻⁴/h) and 2.55/year (2.91 × 10⁻⁴/h), respectively.
2.5.2.3 Goodness-of-fit Tests
The chi-square statistic compares the observed frequency xi in each class interval with the frequency Ei expected under the assumed distribution:

χ² = Σ (i = 1 to n) (xi − Ei)² / Ei.  (2.84)
If we obtain a very low χ² value (e.g., less than the 10th percentile), it suggests that the data correspond closely to the proposed distribution. Higher values of χ² cast doubt on the null hypothesis. The null hypothesis is usually rejected when the value of χ² falls beyond the 90th percentile. If χ² is below this value, there is insufficient evidence to reject the hypothesis that the data come from the supposed distribution.
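A minimal sketch of the χ² statistic in Equation 2.84, using illustrative observed and expected class frequencies (not data from the text):

```python
# Minimal sketch of the chi-square goodness-of-fit statistic (Equation 2.84).
# Observed and expected class frequencies below are illustrative only.
observed = [10, 8, 6, 4]
expected = [9.0, 7.0, 7.0, 5.0]

chi_sq = sum((x - e) ** 2 / e for x, e in zip(observed, expected))
print(round(chi_sq, 4))   # 0.5968

# Compare with the tabulated 90th percentile for the appropriate degrees of
# freedom (here 4 classes - 1 = 3 if no parameters were estimated):
# chi-square(0.90; 3) = 6.251, so the hypothesis would not be rejected.
```

The degrees of freedom are reduced further by one for each distribution parameter estimated from the same data.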
For further reading on the treatment of statistical data for reliability analysis, interested readers may refer to Ebeling [5] and Misra [6].
Exercise Problems
1. A continuous random variable T is said to have an exponential distribution with parameter λ if its PDF is given by f(t) = λe^(−λt). Calculate the mean and variance of T.
2. Given the following PDF for the random variable time to failure of a circuit
breaker, what is the reliability for a 1500 h operating life?
f(t) = (1/θ) e^(−t/θ)
3. Given the hazard rate function λ(t) = 2 × 10⁻⁵ t, determine R(t) and f(t) at t = 500 h.
4. The diameter of a bearing manufactured by a company under the specified supply conditions has a normal distribution with a mean of 10 mm and standard
deviation of 0.2 mm.
(i) Find the probability that a bearing has a diameter between 9.8 and 10.2 mm.
(ii) Find the diameter such that 10% of the bearings have diameters below that value.
References
1. Ross SM (1987) Introduction to probability and statistics for engineers and scientists. John
Wiley & Sons, New York
2. Montgomery DC, Runger GC (1999) Applied statistics and probability for engineers. John
Wiley & Sons, New York
3. Weibull W (1951) A statistical distribution function of wide applicability. Journal of Applied Mechanics
18:293–297
4. Sturges HA (1926) The choice of a class interval. Journal of the American Statistical Association
21:65–66
5. Ebeling CE (1997) An introduction to reliability and maintainability engineering. Tata
McGraw-Hill, New Delhi
6. Misra KB (1992) Reliability analysis and prediction. Elsevier, Amsterdam
http://www.springer.com/978-1-84996-231-5