PROBABILITY
Probability of any event: 0 ≤ P(event) ≤ 1

Permutations (ordered subsets of r elements from n different elements):
  nPr = n!/(n − r)! = n(n − 1)(n − 2)⋯(n − r + 1)
Permutations of similar objects (n₁ of one type, n₂ of a second type, …, among n = n₁ + n₂ + ⋯ + nᵣ elements):
  n!/(n₁! n₂! n₃! ⋯ nᵣ!)
Combinations (subsets of size r from a set of n elements):
  nCr = C(n, r) = n!/[r!(n − r)!]

Addition rules:
  P(A ∪ B) = P(A or B) = P(A) + P(B) − P(A and B)
  P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A∩B) − P(A∩C) − P(B∩C) + P(A∩B∩C)
Independent events:
  P(A ∩ B) = P(A and B) = P(A)P(B)
  P(A | B) = P(A);  P(B | A) = P(B)
Law of total probability:
  P(B) = P(B∩A) + P(B∩A′) = P(B | A)P(A) + P(B | A′)P(A′)
Dependent events:
  P(A and B) = P(A) × P(B given A)
  P(A ∩ B) = P(A | B)P(B) = P(B ∩ A) = P(B | A)P(A)
  P(A and B and C) = P(A) × P(B | A) × P(C given A and B)
Mutually exclusive events (A ∩ B = ∅):
  P(A ∪ B) = P(A or B) = P(A) + P(B)

Bayes' Theorem
  P(A ∩ B) = P(AB) = P(A | B)P(B) = P(B | A)P(A)
  Conditional probability: P(A | B) = P(A ∩ B)/P(B) = P(B | A)P(A)/P(B)
    = P(B | A)P(A) / [P(B | A)P(A) + P(B | A′)P(A′)]
  where A, B = any two events; A′ = complement of A
Markov's Inequality: if X is a non-negative random variable with mean μ, then for any constant a > 0,
  P(X ≥ a) ≤ μ/a
Chebyshev's Inequality: if X is a random variable with finite mean μ and variance σ², then for any constant a > 0,
  P(|X − μ| ≥ a) ≤ σ²/a²
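The counting rules and Bayes' theorem above can be sketched in Python; the event probabilities in the example are hypothetical.

```python
from math import comb, perm

# Permutations of r from n distinct items: P(n, r) = n!/(n - r)!
# Combinations: C(n, r) = n!/(r!(n - r)!)
n_perm = perm(5, 2)   # 5*4 = 20 ordered pairs
n_comb = comb(5, 2)   # 10 unordered pairs

def bayes(p_b_given_a, p_a, p_b_given_not_a):
    """P(A|B) via Bayes' theorem with the total-probability denominator."""
    p_not_a = 1 - p_a
    p_b = p_b_given_a * p_a + p_b_given_not_a * p_not_a
    return p_b_given_a * p_a / p_b

# Hypothetical example: P(A) = 0.01, P(B|A) = 0.9, P(B|A') = 0.05
posterior = bayes(0.9, 0.01, 0.05)
```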
DECISION ANALYSIS
Criterion of Realism:
  Weighted average = α(best in row) + (1 − α)(worst in row)
  For minimization: weighted average = α(best in row) + (1 − α)(worst in row), where "best" is the lowest payoff in the row.

Expected Monetary Value:
  EMV(alternative) = Σ Xᵢ P(Xᵢ)
  Xᵢ = payoff for the alternative in state of nature i
  P(Xᵢ) = probability of achieving payoff Xᵢ (i.e., probability of state of nature i)
  Σ = summation symbol
  EMV(alternative i) = (payoff of first state of nature) × (probability of first state of nature) + (payoff of second state of nature) × (probability of second state of nature) + ⋯ + (payoff of last state of nature) × (probability of last state of nature)

Expected Value with Perfect Information:
  EVwPI = Σ (best payoff in state of nature i) × (probability of state of nature i)
       = (best payoff for first state of nature) × (probability of first state of nature) + (best payoff for second state of nature) × (probability of second state of nature) + ⋯ + (best payoff for last state of nature) × (probability of last state of nature)
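A minimal sketch of the EMV and EVwPI calculations above; the payoff table and probabilities are hypothetical.

```python
# Rows = alternatives, columns = states of nature (hypothetical numbers).
payoffs = {
    "large plant": [200000, -180000],
    "small plant": [100000, -20000],
    "do nothing": [0, 0],
}
probs = [0.5, 0.5]  # P(state of nature i)

def emv(row, probs):
    # EMV(alternative) = sum of X_i * P(X_i)
    return sum(x * p for x, p in zip(row, probs))

best_emv = max(emv(row, probs) for row in payoffs.values())

# EVwPI = sum over states of (best payoff in that state) * P(state)
evwpi = sum(max(row[i] for row in payoffs.values()) * probs[i]
            for i in range(len(probs)))

# Standard follow-on (not listed above): EVPI = EVwPI - best EMV
evpi = evwpi - best_emv
```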
REGRESSION MODELS
True model: Y = β₀ + β₁X + ε;  fitted line: Ŷ = b₀ + b₁X
  Ŷ = predicted value of Y
  b₀ = estimate of β₀, based on sample results
  b₁ = estimate of β₁, based on sample results
  b₁ = Σ(X − X̄)(Y − Ȳ) / Σ(X − X̄)²
  b₀ = Ȳ − b₁X̄

Hypothesis test: H₀: β₁ = 0 vs. H₁: β₁ ≠ 0
  Reject H₀ if F_calculated > F(α, df₁, df₂), with df₁ = k and df₂ = n − k − 1
  p-value = P(F > calculated test statistic); reject H₀ if p-value < α

Multiple regression: Ŷ = b₀ + b₁X₁ + b₂X₂ + ⋯ + b_k X_k
  Ŷ = predicted value of Y
  b₀ = sample intercept (an estimate of β₀)
  bᵢ = sample coefficient of the i-th variable (an estimate of βᵢ)

Coefficient of determination: r² = SSR/SST = 1 − SSE/SST
Mean squared error: s² = MSE = SSE/(n − k − 1)
F statistic: F = MSR/MSE
  degrees of freedom for the numerator = df₁ = k
  degrees of freedom for the denominator = df₂ = n − k − 1
Simple linear regression (least squares):
  S_xy = Σxᵢyᵢ − (Σxᵢ)(Σyᵢ)/n;  S_xx = Σxᵢ² − (Σxᵢ)²/n
  β̂₁ = S_xy/S_xx;  β̂₀ = ȳ − β̂₁x̄, where x̄ = (1/n)Σxᵢ and ȳ = (1/n)Σyᵢ
Error sum of squares: SS_E = Σeᵢ² = Σ(yᵢ − ŷᵢ)²
Unbiased estimator: σ̂² = SS_E/(n − 2)
  SS_E = SS_T − β̂₁S_xy;  SS_T = Σ(yᵢ − ȳ)²
Estimated standard error of the slope: se(β̂₁) = √(σ̂²/S_xx)
Estimated standard error of the intercept: se(β̂₀) = √(σ̂²[1/n + x̄²/S_xx])

Hypothesis testing (t-test):
  H₀: β₁ = β₁,₀ vs. H₁: β₁ ≠ β₁,₀
  Test statistic: T₀ = (β̂₁ − β₁,₀)/√(σ̂²/S_xx); reject H₀ if |t₀| > t(α/2, n−2)
  H₀: β₀ = β₀,₀ vs. H₁: β₀ ≠ β₀,₀
  Test statistic: T₀ = (β̂₀ − β₀,₀)/√(σ̂²[1/n + x̄²/S_xx]); reject H₀ if |t₀| > t(α/2, n−2)

Analysis of variance: SS_T = SS_R + SS_E
Test for significance of regression: F₀ = (SS_R/1)/[SS_E/(n − 2)] = MS_R/MS_E, which follows the F(1, n−2) distribution; reject the null hypothesis if f₀ > f(α, 1, n−2)

100(1 − α)% confidence intervals:
  Slope β₁: β̂₁ − t(α/2, n−2)√(σ̂²/S_xx) ≤ β₁ ≤ β̂₁ + t(α/2, n−2)√(σ̂²/S_xx)
  Intercept β₀: β̂₀ − t(α/2, n−2)√(σ̂²[1/n + x̄²/S_xx]) ≤ β₀ ≤ β̂₀ + t(α/2, n−2)√(σ̂²[1/n + x̄²/S_xx])
  Mean response at x₀: μ̂(Y|x₀) ± t(α/2, n−2)√(σ̂²[1/n + (x₀ − x̄)²/S_xx])
  Prediction interval on a future observation: ŷ₀ − t(α/2, n−2)√(σ̂²[1 + 1/n + (x₀ − x̄)²/S_xx]) ≤ Y₀ ≤ ŷ₀ + t(α/2, n−2)√(σ̂²[1 + 1/n + (x₀ − x̄)²/S_xx])
  where ŷ₀ = β̂₀ + β̂₁x₀ is computed from the regression model.

Residuals: eᵢ = yᵢ − ŷᵢ;  standardized residuals: dᵢ = eᵢ/√σ̂²
Coefficient of determination: R² = SS_R/SS_T = 1 − SS_E/SS_T
Correlation coefficient: ρ = σ_XY/(σ_X σ_Y)
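The S_xy/S_xx least-squares formulas above can be sketched in a few lines of pure Python; the data values are hypothetical.

```python
# Least-squares fit using S_xy and S_xx (hypothetical data).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.1, 5.9, 8.2, 9.8]
n = len(x)

xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum(xi**2 for xi in x) - sum(x)**2 / n
sxy = sum(xi * yi for xi, yi in zip(x, y)) - sum(x) * sum(y) / n

b1 = sxy / sxx           # slope estimate
b0 = ybar - b1 * xbar    # intercept estimate

sse = sum((yi - (b0 + b1 * xi))**2 for xi, yi in zip(x, y))
sst = sum((yi - ybar)**2 for yi in y)
r2 = 1 - sse / sst       # coefficient of determination
sigma2_hat = sse / (n - 2)   # unbiased estimator of sigma^2
```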
Multiple Regression
Model: y = Xβ + ε;  yᵢ = β₀ + β₁xᵢ₁ + β₂xᵢ₂ + ⋯ + β_k x_ik + εᵢ, for i = 1, 2, …, n

In matrix form:
  y = [y₁ y₂ ⋯ y_n]′
  X = n × p design matrix whose i-th row is [1 xᵢ₁ xᵢ₂ ⋯ x_ik]
  β = [β₀ β₁ ⋯ β_k]′;  ε = [ε₁ ε₂ ⋯ ε_n]′

Normal equations: X′Xβ̂ = X′y. Written out:
  nβ̂₀ + β̂₁Σxᵢ₁ + β̂₂Σxᵢ₂ + ⋯ + β̂_kΣx_ik = Σyᵢ
  β̂₀Σxᵢ₁ + β̂₁Σxᵢ₁² + β̂₂Σxᵢ₁xᵢ₂ + ⋯ + β̂_kΣxᵢ₁x_ik = Σxᵢ₁yᵢ
  ⋮
  β̂₀Σx_ik + β̂₁Σx_ik xᵢ₁ + β̂₂Σx_ik xᵢ₂ + ⋯ + β̂_kΣx_ik² = Σx_ik yᵢ

Least squares estimate of β: β̂ = (X′X)⁻¹X′y
Estimator of variance: σ̂² = Σeᵢ²/(n − p) = SS_E/(n − p), where p = k + 1
Covariance matrix: C = σ²(X′X)⁻¹
Estimated standard error of β̂ⱼ: se(β̂ⱼ) = √(σ̂²Cⱼⱼ)
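A sketch of β̂ = (X′X)⁻¹X′y via the normal equations, using a tiny Gauss–Jordan solver instead of an explicit inverse; the two-predictor data set is hypothetical and noise-free, so the true coefficients are recovered exactly.

```python
def solve(a, b):
    """Solve the square system a x = b by Gauss-Jordan elimination."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]            # partial pivoting
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

# Hypothetical data generated from y = 1 + 2*x1 + 1*x2
rows = [(1.0, 2.0), (2.0, 1.0), (3.0, 4.0), (4.0, 3.0), (5.0, 5.0)]
y = [5.0, 6.0, 11.0, 12.0, 16.0]
X = [[1.0, x1, x2] for x1, x2 in rows]   # design matrix with intercept

# Normal equations: (X'X) beta = X'y
xtx = [[sum(X[i][r] * X[i][c] for i in range(len(X))) for c in range(3)]
       for r in range(3)]
xty = [sum(X[i][r] * y[i] for i in range(len(X))) for r in range(3)]
beta = solve(xtx, xty)
```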
Hypothesis of ANOVA test
  Null hypothesis: H₀: β₁ = β₂ = ⋯ = β_k = 0
  SS_E = y′y − β̂′X′y;  SS_T = y′y − (Σyᵢ)²/n;  SS_R = β̂′X′y − (Σyᵢ)²/n
  R² = SS_R/SS_T = 1 − SS_E/SS_T
  Adjusted R²: R²_adj = 1 − [SS_E/(n − p)]/[SS_T/(n − 1)]

Tests on individual coefficients:
  Null hypothesis: H₀: βⱼ = βⱼ,₀;  alternative: H₁: βⱼ ≠ βⱼ,₀
  Test statistic: T₀ = (β̂ⱼ − βⱼ,₀)/√(σ̂²Cⱼⱼ) = (β̂ⱼ − βⱼ,₀)/se(β̂ⱼ)
  Reject the null hypothesis if |t₀| > t(α/2, n−p)

100(1 − α)% confidence intervals:
  Coefficient βⱼ: β̂ⱼ − t(α/2, n−p)√(σ̂²Cⱼⱼ) ≤ βⱼ ≤ β̂ⱼ + t(α/2, n−p)√(σ̂²Cⱼⱼ)
  Mean response at x₀: μ̂(Y|x₀) − t(α/2, n−p)√(σ̂²x₀′(X′X)⁻¹x₀) ≤ μ(Y|x₀) ≤ μ̂(Y|x₀) + t(α/2, n−p)√(σ̂²x₀′(X′X)⁻¹x₀)
  Prediction interval: ŷ₀ − t(α/2, n−p)√(σ̂²[1 + x₀′(X′X)⁻¹x₀]) ≤ Y₀ ≤ ŷ₀ + t(α/2, n−p)√(σ̂²[1 + x₀′(X′X)⁻¹x₀])
Residuals: eᵢ = yᵢ − ŷᵢ
Standardized residuals: dᵢ = eᵢ/√σ̂² = eᵢ/√MS_E, where i = 1, 2, …, n
Studentized residuals: rᵢ = eᵢ/√[σ̂²(1 − hᵢᵢ)], where i = 1, 2, …, n
Hat matrix: H = X(X′X)⁻¹X′; hᵢᵢ = i-th diagonal element of H = xᵢ′(X′X)⁻¹xᵢ
Cook's Distance:
  Dᵢ = (β̂₍ᵢ₎ − β̂)′X′X(β̂₍ᵢ₎ − β̂)/(pσ̂²) = rᵢ²hᵢᵢ/[p(1 − hᵢᵢ)], where i = 1, 2, …, n

Stepwise Regression
  Partial F statistic: Fⱼ = SS_R(βⱼ | β₁, β₀)/MS_E(xⱼ, x₁)
Prediction error sum of squares: PRESS = Σ(yᵢ − ŷ₍ᵢ₎)² = Σ[eᵢ/(1 − hᵢᵢ)]²
C_p statistic: C_p = SS_E(p)/σ̂² − n + 2p
Variance inflation factor: VIF(βⱼ) = 1/(1 − Rⱼ²), where j = 1, 2, …, k
Single-Factor Experiments
(a treatments, n observations per treatment, N = an total observations)
  SS_T = SS_Treatments + SS_E
  E(SS_Treatments) = (a − 1)σ² + nΣτᵢ²
  E(SS_E) = a(n − 1)σ²
  MS_E = SS_E/[a(n − 1)]
  F₀ = [SS_Treatments/(a − 1)]/{SS_E/[a(n − 1)]} = MS_Treatments/MS_E
Computing formulas:
  SS_T = ΣᵢΣⱼ yᵢⱼ² − y..²/N
  SS_Treatments = Σᵢ yᵢ.²/n − y..²/N
  SS_E = SS_T − SS_Treatments
  T = (ȳᵢ. − μᵢ)/√(MS_E/n) has a t distribution with a(n − 1) degrees of freedom
Comparison of treatment means:
  t₀ = (ȳᵢ. − ȳⱼ.)/√(2MS_E/n)
  LSD = t(α/2, a(n−1))√(2MS_E/n); reject the null hypothesis if |ȳᵢ. − ȳⱼ.| > LSD
  For unequal sample sizes: LSD = t(α/2, N−a)√[MS_E(1/nᵢ + 1/nⱼ)]
Power of the ANOVA test: 1 − β = P{reject H₀ | H₀ is false} = P{F₀ > f(α, a−1, a(n−1)) | H₀ is false}
  Φ² = nΣτᵢ²/(aσ²)
Random effects model:
  E(MS_Treatments) = E[SS_Treatments/(a − 1)] = σ² + nσ_τ²
  E(MS_E) = E{SS_E/[a(n − 1)]} = σ²
  σ̂² = MS_E;  σ̂_τ² = (MS_Treatments − MS_E)/n
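The computing formulas above can be sketched directly; the three hypothetical treatment groups below each have n = 4 observations.

```python
# One-way ANOVA sums of squares and F0 (hypothetical data).
groups = [
    [20.0, 22.0, 19.0, 21.0],
    [25.0, 27.0, 26.0, 24.0],
    [18.0, 17.0, 19.0, 18.0],
]
a = len(groups)          # number of treatments
n = len(groups[0])       # observations per treatment
N = a * n

all_y = [y for g in groups for y in g]
grand = sum(all_y)       # y..

ss_t = sum(y**2 for y in all_y) - grand**2 / N
ss_treat = sum(sum(g)**2 for g in groups) / n - grand**2 / N
ss_e = ss_t - ss_treat

ms_treat = ss_treat / (a - 1)
ms_e = ss_e / (a * (n - 1))
f0 = ms_treat / ms_e     # compare with f(alpha, a-1, a(n-1))
```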
Randomized block design (a treatments, b blocks):
  E(MS_Treatments) = σ² + bΣτᵢ²/(a − 1)
  E(MS_Blocks) = σ² + aΣβⱼ²/(b − 1)
  E(MS_E) = σ²
  MS_Treatments = SS_Treatments/(a − 1)
  MS_Blocks = SS_Blocks/(b − 1)
  MS_E = SS_E/[(a − 1)(b − 1)]
Computing formulas:
  SS_T = ΣᵢΣⱼ yᵢⱼ² − y..²/(ab)
  SS_Treatments = (1/b)Σᵢ yᵢ.² − y..²/(ab)
  SS_Blocks = (1/a)Σⱼ y.ⱼ² − y..²/(ab)
  SS_E = SS_T − SS_Treatments − SS_Blocks
Residuals: eᵢⱼ = yᵢⱼ − ŷᵢⱼ, where ŷᵢⱼ = ȳᵢ. + ȳ.ⱼ − ȳ..
Two-factor factorial design (a levels of factor A, b levels of factor B, n replicates):
Totals and averages:
  yᵢ.. = ΣⱼΣₖ yᵢⱼₖ;  ȳᵢ.. = yᵢ../(bn), where i = 1, 2, …, a
  y.ⱼ. = ΣᵢΣₖ yᵢⱼₖ;  ȳ.ⱼ. = y.ⱼ./(an), where j = 1, 2, …, b
  yᵢⱼ. = Σₖ yᵢⱼₖ;  ȳᵢⱼ. = yᵢⱼ./n, where i = 1, 2, …, a
  y... = ΣᵢΣⱼΣₖ yᵢⱼₖ;  ȳ... = y.../(abn)
  SS_T = ΣᵢΣⱼΣₖ (yᵢⱼₖ − ȳ...)²
Expected mean squares:
  E(MS_A) = E[SS_A/(a − 1)] = σ² + bnΣτᵢ²/(a − 1)
  E(MS_B) = E[SS_B/(b − 1)] = σ² + anΣβⱼ²/(b − 1)
  E(MS_AB) = E{SS_AB/[(a − 1)(b − 1)]} = σ² + nΣᵢΣⱼ(τβ)ᵢⱼ²/[(a − 1)(b − 1)]
  E(MS_E) = E{SS_E/[ab(n − 1)]} = σ²
F tests:
  F test for factor A: F₀ = MS_A/MS_E
  F test for factor B: F₀ = MS_B/MS_E
  F test for AB interaction: F₀ = MS_AB/MS_E
Computing formulas:
  SS_T = ΣᵢΣⱼΣₖ yᵢⱼₖ² − y...²/(abn)
  SS_A = Σᵢ yᵢ..²/(bn) − y...²/(abn)
  SS_B = Σⱼ y.ⱼ.²/(an) − y...²/(abn)
  SS_AB = ΣᵢΣⱼ yᵢⱼ.²/n − y...²/(abn) − SS_A − SS_B
Three-factor factorial ANOVA (a, b, c levels; n replicates):

  Source | Sum of Squares | df              | Mean Square | E(MS)                                          | F₀
  A      | SS_A  | a − 1                    | MS_A   | σ² + bcnΣτᵢ²/(a − 1)                           | MS_A/MS_E
  B      | SS_B  | b − 1                    | MS_B   | σ² + acnΣβⱼ²/(b − 1)                           | MS_B/MS_E
  C      | SS_C  | c − 1                    | MS_C   | σ² + abnΣγₖ²/(c − 1)                           | MS_C/MS_E
  AB     | SS_AB | (a − 1)(b − 1)           | MS_AB  | σ² + cnΣΣ(τβ)ᵢⱼ²/[(a − 1)(b − 1)]              | MS_AB/MS_E
  AC     | SS_AC | (a − 1)(c − 1)           | MS_AC  | σ² + bnΣΣ(τγ)ᵢₖ²/[(a − 1)(c − 1)]              | MS_AC/MS_E
  BC     | SS_BC | (b − 1)(c − 1)           | MS_BC  | σ² + anΣΣ(βγ)ⱼₖ²/[(b − 1)(c − 1)]              | MS_BC/MS_E
  ABC    | SS_ABC| (a − 1)(b − 1)(c − 1)    | MS_ABC | σ² + nΣΣΣ(τβγ)ᵢⱼₖ²/[(a − 1)(b − 1)(c − 1)]     | MS_ABC/MS_E
  Error  | SS_E  | abc(n − 1)               | MS_E   | σ²                                             |
  Total  | SS_T  | abcn − 1                 |        |                                                |
2^k Factorial Designs
(1) represents the treatment combination with both factors at the low level.
  Main effect of factor A: A = ȳ(A+) − ȳ(A−) = [a + ab − b − (1)]/(2n)
  Main effect of factor B: B = ȳ(B+) − ȳ(B−) = [b + ab − a − (1)]/(2n)
  Interaction effect AB: AB = [ab + (1) − a − b]/(2n)
Contrast coefficients are always +1 or −1; for example, Contrast_A = a + ab − b − (1).
  Effect = Contrast/(n·2^(k−1))
  Sum of squares for an effect: SS = (Contrast)²/(n·2^k)
Regression model: Y = β₀ + β₁x₁ + β₂x₂ + ε
  Coefficient estimate: β̂ = effect/2 = (ȳ₊ − ȳ₋)/2
  Standard error of a coefficient: se(β̂) = √[σ̂²/(n·2^k)] = σ̂/√(n·2^k)
  t statistic for a coefficient: t = β̂/se(β̂)
2^k Factorial Designs for k ≥ 3 factors
  A = ȳ(A+) − ȳ(A−) = [a + ab + ac + abc − (1) − b − c − bc]/(4n)
  B = ȳ(B+) − ȳ(B−) = [b + ab + bc + abc − (1) − a − c − ac]/(4n)
  C = ȳ(C+) − ȳ(C−) = [c + ac + bc + abc − (1) − a − b − ab]/(4n)
  AB = [abc − bc + ab − b − ac + c − a + (1)]/(4n)
  AC = [(1) − a + b − ab − c + ac − bc + abc]/(4n)
  BC = [(1) + a − b − ab − c − ac + bc + abc]/(4n)
  ABC = [abc − bc − ac + c − ab + b + a − (1)]/(4n)
Regression model: Y = β₀ + β₁x₁ + β₂x₂ + β₁₂x₁x₂ + ε
Test for curvature (n_F factorial points, n_C center points):
  SS_Curvature = n_F·n_C(ȳ_F − ȳ_C)²/(n_F + n_C) = (ȳ_F − ȳ_C)²/(1/n_F + 1/n_C)
First-order model: Y = β₀ + β₁x₁ + β₂x₂ + ⋯ + β_k x_k + ε
Steepest ascent follows the fitted first-order model: ŷ = β̂₀ + Σβ̂ᵢxᵢ
Fitted second-order model: ŷ = β̂₀ + Σβ̂ᵢxᵢ + Σβ̂ᵢᵢxᵢ² + ΣΣ(i<j) β̂ᵢⱼxᵢxⱼ
FORECASTING
  MAD = Σ|forecast error|/n
  MSE = Σ(error)²/n
  Weighted moving average = Σ(weight for period i × demand in period i)/Σ(weights)
  New forecast = last period's forecast + α(last period's actual demand − last period's forecast)
Exponential smoothing with trend:
  F(t+1) = FIT(t) + α[Y(t) − FIT(t)]
  T(t+1) = T(t) + β[F(t+1) − FIT(t)]
  FIT(t+1) = F(t+1) + T(t+1)
Trend projection: Ŷ = b₀ + b₁X
  where Ŷ = predicted value; b₀ = intercept; b₁ = slope of the line; X = time period (i.e., X = 1, 2, 3, …, n)
Multiple-factor model: Ŷ = a + b₁X₁ + b₂X₂ + b₃X₃ + b₄X₄
Tracking signal = RSFE/MAD = Σ(forecast error)/MAD
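The trend-adjusted exponential smoothing recursion above can be sketched as a short loop; the smoothing constants and demand series are hypothetical, and the trend is initialized to zero.

```python
# Trend-adjusted exponential smoothing (FIT), per the recursion above.
alpha, beta = 0.3, 0.2                     # hypothetical constants
demand = [100.0, 106.0, 112.0, 121.0, 130.0]

F, T = demand[0], 0.0                      # initial forecast and trend
FIT = F + T
for y in demand:
    F = FIT + alpha * (y - FIT)            # F_{t+1} = FIT_t + a*(Y_t - FIT_t)
    T = T + beta * (F - FIT)               # T_{t+1} = T_t + b*(F_{t+1} - FIT_t)
    FIT = F + T                            # FIT_{t+1} = F_{t+1} + T_{t+1}
# FIT now holds the trend-adjusted forecast for the next period.
```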
INVENTORY CONTROL
  Q = number of units in each order
  D = the annual demand in units; C_o = ordering cost per order; C_h = holding (carrying) cost per unit per year
  Annual ordering cost = (D/Q)C_o
  Annual holding cost = average inventory × (carrying cost per unit per year) = (Q/2)C_h
Economic Order Quantity: set annual ordering cost = annual holding cost,
  (D/Q)C_o = (Q/2)C_h  ⇒  Q* = √(2DC_o/C_h)
Production run model (d = daily demand rate, p = daily production rate, t = length of the run):
  Total produced Q = pt; maximum inventory = pt − dt = Q(1 − d/p)
  Average inventory = (Q/2)(1 − d/p)
  Annual setup cost = (D/Q)C_s
  Annual holding cost = (Q/2)(1 − d/p)C_h
  Q* = √{2DC_s/[C_h(1 − d/p)]}
  Q = number of pieces per order, or production run
Quantity Discount Model:
  EOQ = √[2DC_o/(IC)]
  If EOQ < minimum quantity for a discount, adjust the quantity to Q = minimum for the discount.
  Total cost = material cost + ordering cost + holding cost = DC + (D/Q)C_o + (Q/2)C_h
  Holding cost per unit is based on cost, so C_h = IC, where I = holding cost as a percentage of the unit cost (C)
Safety Stock with Normal Distribution:
  d = daily demand
  Total annual holding cost with safety stock = holding cost of regular inventory + holding cost of safety stock:
  THC = (Q/2)C_h + (SS)C_h
  ROP = d̄L + Z√(σ_d²L + d̄²σ_L²)
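The EOQ and production-run formulas above in a minimal sketch; the demand and cost figures are hypothetical.

```python
from math import sqrt

# EOQ: Q* = sqrt(2 D Co / Ch), where ordering cost = holding cost.
D, Co, Ch = 6000.0, 30.0, 2.0   # hypothetical annual demand and costs
eoq = sqrt(2 * D * Co / Ch)

annual_ordering = (D / eoq) * Co    # (D/Q) Co
annual_holding = (eoq / 2) * Ch     # (Q/2) Ch -- equal at Q*

# Production run model with daily demand d and production rate p.
d, p, Cs = 40.0, 100.0, 30.0
q_prod = sqrt(2 * D * Cs / (Ch * (1 - d / p)))
```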
PROJECT MANAGEMENT
  Expected activity time: t = (a + 4m + b)/6
  Activity standard deviation: σ = (b − a)/6
  Earliest start = largest of the earliest finish times of immediate predecessors: ES = largest EF of immediate predecessors
  Latest finish time = smallest of the latest start times for following activities: LF = smallest LS of following activities
  Project variance = sum of the variances of activities on the critical path
  Z = (due date − expected date of completion)/σ_T, where σ_T = √(project variance)
  Activity difference = actual cost − value of work completed
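The PERT formulas above can be sketched as follows; the (a, m, b) estimates for the critical-path activities and the due date are hypothetical.

```python
from math import erf, sqrt

# Hypothetical critical-path activities, each as (a, m, b).
critical_path = [(2.0, 4.0, 6.0), (3.0, 5.0, 13.0), (1.0, 2.0, 3.0)]

t = [(a + 4 * m + b) / 6 for a, m, b in critical_path]   # expected times
var = [((b - a) / 6) ** 2 for a, m, b in critical_path]  # activity variances

expected = sum(t)            # expected project completion time
sigma_t = sqrt(sum(var))     # project standard deviation

due = 15.0
z = (due - expected) / sigma_t
p_on_time = 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF at z
```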
WAITING LINE MODELS
λ = mean arrival rate; μ = mean service rate

M/M/1 (single server):
  L = λ/(μ − λ) = average number of units in the system
  W = 1/(μ − λ) = average time a unit spends in the system (waiting plus being served)
  L_q = λ²/[μ(μ − λ)] = average number of customers or units in line waiting for service
  W_q = λ/[μ(μ − λ)] = average time a customer spends waiting in the queue
  ρ = λ/μ = the utilization factor for the system (rho): the probability the service facility is being used
  P₀ = 1 − λ/μ;  P_n = (1 − λ/μ)(λ/μ)^n
  Total waiting cost = (λW)C_w

M/M/m (m servers):
  P₀ = 1 / [Σ(n=0 to m−1) (1/n!)(λ/μ)^n + (1/m!)(λ/μ)^m · mμ/(mμ − λ)]
  L = {λμ(λ/μ)^m/[(m − 1)!(mμ − λ)²]}P₀ + λ/μ
  W = L/λ;  L_q = L − λ/μ;  W_q = L_q/λ

Finite Population Model (M/M/1 with Finite Source):
  λ = mean arrival rate; μ = mean service rate; N = size of the population
  Probability that the system is empty: P₀ = 1 / Σ(n=0 to N) [N!/(N − n)!](λ/μ)^n
  Average length of the queue: L_q = N − [(λ + μ)/λ](1 − P₀)
  Average number of customers (units) in the system: L = L_q + (1 − P₀)
  P_n = [N!/(N − n)!](λ/μ)^n P₀, for n = 0, 1, …, N

Constant Service Time Model (M/D/1):
  Average length of the queue: L_q = λ²/[2μ(μ − λ)]
  Average waiting time in the queue: W_q = λ/[2μ(μ − λ)]
  Average number of customers in the system: L = L_q + λ/μ

General relationships:
  W = E(waiting time in the system (includes service time) for each individual customer)
  W_q = E(waiting time in the queue (excludes service time) for each individual customer)
  Little's formula: L = λW;  L_q = λW_q;  W = W_q + 1/μ
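The M/M/1 measures and Little's formula above in a minimal sketch; the arrival and service rates are hypothetical (and must satisfy λ < μ).

```python
# M/M/1 performance measures (hypothetical rates, lam < mu required).
lam, mu = 2.0, 3.0

rho = lam / mu                     # utilization factor
L = lam / (mu - lam)               # average number in the system
W = 1 / (mu - lam)                 # average time in the system
Lq = lam**2 / (mu * (mu - lam))    # average number in the queue
Wq = lam / (mu * (mu - lam))       # average wait in the queue
P0 = 1 - rho                       # probability of an empty system
# Little's formula ties the measures together: L = lam*W, Lq = lam*Wq.
```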
Properties of exponential arrivals:
  If U = min(U₁, …, U_n), where the Uᵢ are independent exponential random variables with parameters λᵢ, then P{U > t} = exp[−(Σλᵢ)t]; that is, U is exponential with parameter λ = Σλᵢ.
Birth-death process results:
  Σ(n=0 to ∞) P_n = 1;  P_n = C_n P₀, so P₀ = 1/[Σ(n=0 to ∞) C_n], with C₀ = 1
  Average arrival rate in the long run: λ̄ = Σ(n=0 to ∞) λ_n P_n
Let T₁, T₂, … be independent service-time random variables having an exponential distribution with parameter μ, and let S(n+1) = T₁ + T₂ + ⋯ + T(n+1), for n = 0, 1, 2, … Then S(n+1) represents the conditional waiting time given n customers already in the system, and S(n+1) is known to have an Erlang distribution.
For M/M/1, the waiting time in the system is exponential: P{W > t} = e^(−μ(1−ρ)t), for t ≥ 0.

M/M/s (s servers; ρ = λ/(sμ)):
  C_n = (λ/μ)^n/n!, for n = 1, 2, …, s;  C_n = (λ/μ)^n/(s!·s^(n−s)), for n ≥ s
  P₀ = 1 / [Σ(n=0 to s−1) (λ/μ)^n/n! + (λ/μ)^s/s! · 1/(1 − λ/(sμ))]
  P_n = [(λ/μ)^n/n!]P₀, if 0 ≤ n ≤ s;  P_n = [(λ/μ)^n/(s!·s^(n−s))]P₀, if n ≥ s
  L_q = P₀(λ/μ)^s ρ/[s!(1 − ρ)²]
  W_q = L_q/λ;  W = W_q + 1/μ;  L = L_q + λ/μ
Waiting-time distributions:
  P{W > t} = e^(−μt){1 + P₀(λ/μ)^s[1 − e^(−μt(s−1−λ/μ))]/[s!(1 − ρ)(s − 1 − λ/μ)]}
  P{W_q > t} = (1 − P{W_q = 0})e^(−sμ(1−ρ)t)
  P{W_q = 0} = Σ(n=0 to s−1) P_n
M/M/1 with finite calling population N:
  C_n = [N!/(N − n)!](λ/μ)^n, for n ≤ N;  C_n = 0, for n > N
  P₀ = 1 / Σ(n=0 to N) [N!/(N − n)!](λ/μ)^n
  P_n = [N!/(N − n)!](λ/μ)^n P₀, if n = 1, 2, …, N
  L_q = Σ(n=1 to N) (n − 1)P_n = N − [(λ + μ)/λ](1 − P₀)
  L = N − (μ/λ)(1 − P₀)
  W = L/λ̄;  W_q = L_q/λ̄, where λ̄ = Σ λ_n P_n = λ(N − L)

M/M/1/K (finite queue capacity K; ρ = λ/μ):
  L = ρ/(1 − ρ) − (K + 1)ρ^(K+1)/(1 − ρ^(K+1))
  L_q = L − (1 − P₀)
  W_q = L_q/λ̄;  W = L/λ̄, where λ̄ = Σ λ_n P_n = λ(1 − P_K)
M/M/s/K:
  C_n = (λ/μ)^n/n!, for n = 1, 2, …, s
  C_n = [(λ/μ)^s/s!][λ/(sμ)]^(n−s) = (λ/μ)^n/(s!·s^(n−s)), for s ≤ n ≤ K
  C_n = 0, for n > K
  P₀ = 1 / [Σ(n=0 to s) (λ/μ)^n/n! + ((λ/μ)^s/s!)Σ(n=s+1 to K) (λ/(sμ))^(n−s)]
  P_n = [(λ/μ)^n/n!]P₀, if 0 ≤ n ≤ s;  P_n = [(λ/μ)^n/(s!·s^(n−s))]P₀, if s ≤ n ≤ K;  P_n = 0, if n > K
  L_q = {P₀(λ/μ)^s ρ/[s!(1 − ρ)²]}[1 − ρ^(K−s) − (K − s)ρ^(K−s)(1 − ρ)], where ρ = λ/(sμ)
  L = Σ(n=0 to s−1) nP_n + L_q + s[1 − Σ(n=0 to s−1) P_n]
  W_q = L_q/λ̄;  W = L/λ̄, where λ̄ = λ(1 − P_K)

M/M/s with finite calling population N:
  C_n = [N!/((N − n)!·n!)](λ/μ)^n, for n = 0, 1, …, s
  C_n = [N!/((N − n)!·s!·s^(n−s))](λ/μ)^n, for s ≤ n ≤ N
  C_n = 0, for n > N
  P₀ = 1 / {Σ(n=0 to s−1) [N!/((N − n)!·n!)](λ/μ)^n + Σ(n=s to N) [N!/((N − n)!·s!·s^(n−s))](λ/μ)^n}
  P_n = [N!/((N − n)!·n!)](λ/μ)^n P₀, if 0 ≤ n ≤ s
  P_n = [N!/((N − n)!·s!·s^(n−s))](λ/μ)^n P₀, if s ≤ n ≤ N;  P_n = 0, if n > N
  L_q = Σ(n=s to N) (n − s)P_n
  L = Σ(n=0 to s−1) nP_n + L_q + s[1 − Σ(n=0 to s−1) P_n]
  W_q = L_q/λ̄;  W = L/λ̄, where λ̄ = Σ λ_n P_n = λ(N − L)
M/G/1 Model (service-time distribution with mean 1/μ and variance σ²; ρ = λ/μ < 1):
  P₀ = 1 − ρ
  L_q = (λ²σ² + ρ²)/[2(1 − ρ)]
  L = ρ + L_q
  W_q = L_q/λ;  W = W_q + 1/μ
M/D/s Model
  For the M/D/1 model (constant service time, σ² = 0): L_q = ρ²/[2(1 − ρ)]
Erlang service times:
  Mean = 1/μ; standard deviation = 1/(√k·μ)
  T₁, T₂, …, T_k are k independent random variables with an identical exponential distribution whose mean is 1/(kμ). Then T = T₁ + T₂ + ⋯ + T_k has an Erlang distribution with parameters μ and k. The exponential and degenerate (constant) distributions are special cases of the Erlang distribution with k = 1 and k = ∞, respectively.
M/E_k/1 Model:
  L_q = [λ²/(kμ²) + ρ²]/[2(1 − ρ)] = [(1 + k)/(2k)]·λ²/[μ(μ − λ)]
  W_q = [(1 + k)/(2k)]·λ/[μ(μ − λ)]
  W = W_q + 1/μ;  L = λW
Nonpreemptive Priorities Model (N priority classes):
  λᵢ = mean arrival rate for priority class i;  λ = Σ(i=1 to N) λᵢ
  s = number of servers; μ = mean service rate per busy server; r = λ/μ; requires Σλᵢ < sμ
  W_k = 1/(A·B(k−1)·B_k) + 1/μ, for k = 1, 2, …, N
  where A = s!·[(sμ − λ)/r^s]·Σ(j=0 to s−1) r^j/j! + sμ
  B₀ = 1;  B_k = 1 − Σ(i=1 to k) λᵢ/(sμ)
  L_k = λ_k W_k, for k = 1, 2, …, N
  For a single server (s = 1), A = μ²/λ.
With different exponential service rates per class (single server), μ_k = mean service rate for priority class k, for k = 1, 2, …, N:
  W_k = a_k/(b(k−1)·b_k) + 1/μ_k, for k = 1, 2, …, N
  where a_k = Σ(i=1 to k) λᵢ/μᵢ²;  b₀ = 1;  b_k = 1 − Σ(i=1 to k) λᵢ/μᵢ;  requires Σ(i=1 to k) λᵢ/μᵢ < 1
Preemptive Priorities Model:
  For s = 1: W_k = (1/μ)/(B(k−1)·B_k), for k = 1, 2, …, N
  L_k = λ_k W_k, for k = 1, 2, …, N

Jackson Networks
With m service facilities where facility i (i = 1, 2, …, m) has:
  1. an infinite queue
  2. customers arriving from outside the system according to a Poisson input process with parameter aᵢ
  3. sᵢ servers with an exponential service-time distribution with parameter μᵢ
Total arrival rate at facility j: λⱼ = aⱼ + Σ(i=1 to m) λᵢpᵢⱼ, where sⱼμⱼ > λⱼ
  qᵢ = 1 − Σⱼ pᵢⱼ = probability that a customer leaving facility i departs the system
  ρᵢ = λᵢ/(sᵢμᵢ)
MARKOV ANALYSIS
  π(i) = vector of state probabilities for period i = (π₁, π₂, π₃, …, π_n)
  where n = number of states and π₁, π₂, …, π_n = probability of being in state 1, state 2, …, state n
  P_ij = conditional probability of being in state j in the future given the current state of i

      | P₁₁ P₁₂ ⋯ P₁ₙ |
  P = | P₂₁ P₂₂ ⋯ P₂ₙ |
      |  ⋮   ⋮      ⋮  |
      | Pₘ₁ Pₘ₂ ⋯ Pₘₙ |

For any period n we can compute the state probabilities for period n + 1:
  π(n + 1) = π(n)P
Equilibrium condition: π = πP
Fundamental Matrix: F = (I − B)⁻¹
Inverse of a 2 × 2 matrix:
  P = | a b |    P⁻¹ = | d/r  −b/r |    where r = ad − bc
      | c d |          | −c/r  a/r |
Partition of the matrix for absorbing states:
  P = | I O |
      | A B |
  I = identity matrix; O = a matrix with all 0s
M represents the amount of money in each of the nonabsorbing states:
  M = (M₁, M₂, M₃, …, M_n)
  n = number of nonabsorbing states
  M₁ = amount in the first state or category
  M₂ = amount in the second state or category
  M_n = amount in the nth state or category
n-step transition probabilities (states 0, 1, …, M):
  Σ(j=0 to M) P_ij(n) = 1, for all i; n = 0, 1, 2, …
  P(n) is the matrix of n-step transition probabilities P_ij(n), with rows i = 0, 1, …, M and columns j = 0, 1, …, M.
Chapman–Kolmogorov Equations:
  P_ij(n) = Σ(k=0 to M) P_ik(m)·P_kj(n−m), for all i = 0, 1, …, M; j = 0, 1, …, M; m = 1, 2, …, n−1; n = m+1, m+2, …
Steady-state probabilities:
  π_j = lim(n→∞) (1/n)Σ(k=1 to n) p_ij(k), for any i
  π_j = Σ(i=0 to M) π_i p_ij;  Σ(j=0 to M) π_j = 1
Long-run expected average cost per unit time:
  lim(n→∞) E[(1/n)Σ(t=1 to n) C(X_t)] = Σ(j=0 to M) π_j C(j)
First Passage Time:
  f_ij(1) = p_ij(1) = p_ij
  f_ij(n) = Σ(k≠j) p_ik f_kj(n−1)
  Σ(n=1 to ∞) f_ij(n) ≤ 1
Expected first passage time:
  μ_ij = ∞, if Σ(n=1 to ∞) f_ij(n) < 1
  μ_ij = Σ(n=1 to ∞) n·f_ij(n), if Σ(n=1 to ∞) f_ij(n) = 1
  When Σ(n=1 to ∞) f_ij(n) = 1: μ_ij = 1 + Σ(k≠j) p_ik μ_kj
Absorbing States:
  Probability of absorption into state k: f_ik = Σ(j=0 to M) p_ij f_jk, for i = 0, 1, …, M
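The relation π(n + 1) = π(n)P above converges to the equilibrium vector for a regular chain; a minimal sketch with a hypothetical 2-state transition matrix:

```python
# Steady-state probabilities via repeated multiplication pi(n+1) = pi(n) P.
P = [[0.8, 0.2],
     [0.3, 0.7]]          # hypothetical transition matrix
pi = [1.0, 0.0]           # initial state vector

for _ in range(200):      # iterate until convergence
    pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]
# pi now satisfies pi = pi P (approximately): the equilibrium condition.
```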
STATISTICAL QUALITY CONTROL
  Standard deviation of the sample means: σ_x̄ = σ_x/√n
  Range of the sample = X_max − X_min
R-charts:
  UCL_R = D₄R̄;  LCL_R = D₃R̄
  UCL_R = upper control chart limit for the range; LCL_R = lower control chart limit for the range
  D₄ and D₃ = upper-range and lower-range constants
p-charts:
  UCL_p = p̄ + zσ_p;  LCL_p = p̄ − zσ_p
  p̄ = mean proportion or fraction defective in the sample; μ is estimated by p̄
  Estimated standard deviation of a binomial distribution: σ_p = √[p̄(1 − p̄)/n], where n is the size of each sample
c-charts:
  The mean is c̄ and the standard deviation is equal to √c̄
  UCL_c = c̄ + 3√c̄;  LCL_c = c̄ − 3√c̄
Control Chart Model (for a sample statistic W):
  UCL = μ_W + kσ_W;  CL = μ_W;  LCL = μ_W − kσ_W
  μ_W is the mean of the sample statistic W; σ_W is the standard deviation of W; for x̄, σ_W = σ/√n
  k is the distance of the control limits from the center line, expressed in standard-deviation units; a common choice is k = 3.
X̄ Control Chart:
  UCL = μ + 3σ/√n;  CL = μ;  LCL = μ − 3σ/√n
  Estimate of the mean of the population (grand mean): μ̂ = X̿ = (1/m)Σ(i=1 to m) X̄ᵢ
  X̿ is the center line on the X̄ control chart
X̄ control chart based on R:
  UCL = x̿ + A₂r̄;  CL = x̿;  LCL = x̿ − A₂r̄
R Control Chart:
  Estimate of the mean range: R̄ = (1/m)Σ(i=1 to m) Rᵢ
  Estimate of σ: σ̂ = R̄/d₂ (the constant d₂ is tabulated for various sample sizes)
  UCL = X̿ + 3R̄/(d₂√n);  LCL = X̿ − 3R̄/(d₂√n);  A₂ = 3/(d₂√n)
  Chart limits: UCL = D₄r̄;  CL = r̄;  LCL = D₃r̄
  where r̄ is the sample average range; D₃ and D₄ are tabulated for various sample sizes
Moving Range Control Chart:
  M̄R = [1/(m − 1)]Σ(i=2 to m) |Xᵢ − X(i−1)|
  Estimate of σ: σ̂ = M̄R/d₂ = M̄R/1.128
  CL, UCL, and LCL for the control chart for individuals:
  UCL = x̄ + 3·m̄r/d₂ = x̄ + 3·m̄r/1.128;  CL = x̄;  LCL = x̄ − 3·m̄r/d₂ = x̄ − 3·m̄r/1.128
  Control chart for moving ranges:
  UCL = D₄·m̄r = 3.267·m̄r;  CL = m̄r;  LCL = D₃·m̄r = 0, as D₃ is 0 for n = 2.
S Control Chart:
  Estimate of σ: σ̂ = S̄/c₄ (the constant c₄ is tabulated for various sample sizes)
  UCL = s̄ + 3(s̄/c₄)√(1 − c₄²);  CL = s̄;  LCL = s̄ − 3(s̄/c₄)√(1 − c₄²)
X̄ control chart based on S:
  UCL = x̿ + 3s̄/(c₄√n);  CL = x̿;  LCL = x̿ − 3s̄/(c₄√n)
Process capability ratio:
  PCR = (USL − LSL)/(6σ̂), with σ̂ = r̄/d₂
  One-sided PCR: PCR_k = min[(USL − μ)/(3σ̂), (μ − LSL)/(3σ̂)]
  Fraction nonconforming: P(X > USL) = P(Z > (USL − μ)/σ);  P(X < LSL) = P(Z < (LSL − μ)/σ)
EWMA control chart limits:
  UCL = μ₀ + 3(σ/√n)·√{[λ/(2 − λ)][1 − (1 − λ)^(2t)]}
  CL = μ₀
  LCL = μ₀ − 3(σ/√n)·√{[λ/(2 − λ)][1 − (1 − λ)^(2t)]}
Parameter estimates: μ̂₀ = X̿;  σ̂ = R̄/d₂ = S̄/c₄;  for n = 1, μ̂₀ = X̄ and σ̂ = M̄R/1.128
CUSUM Chart (Cumulative Sum Control Chart):
  Reference value: K = |μ₁ − μ₀|/2, where μ₁ = μ₀ + δ
  K = kσ;  decision interval H = hσ
  Estimate of the process mean after a signal:
  μ̂ = μ₀ + K + s_H(i)/n_H, if s_H(i) > H
  μ̂ = μ₀ − K − s_L(i)/n_L, if s_L(i) > H
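The X̄ and R chart limits above in a minimal sketch; the subgroup data are hypothetical, and the constants A₂, D₃, D₄ are the tabulated values for subgroups of size n = 5.

```python
# X-bar and R chart limits using tabulated constants for n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

xbars = [10.2, 9.9, 10.1, 10.0, 9.8]   # hypothetical subgroup means
ranges = [0.4, 0.5, 0.3, 0.6, 0.2]     # hypothetical subgroup ranges

xbarbar = sum(xbars) / len(xbars)      # grand mean (center line of X-bar)
rbar = sum(ranges) / len(ranges)       # average range (center line of R)

ucl_x = xbarbar + A2 * rbar            # X-bar chart limits
lcl_x = xbarbar - A2 * rbar
ucl_r = D4 * rbar                      # R chart limits
lcl_r = D3 * rbar
```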
OTHERS
Computing lambda and the consistency index (AHP):
  CI = (λ_max − n)/(n − 1)
Consistency Ratio:
  CR = CI/RI
Dynamic programming:
  The input to one stage is also the output from another stage: s(n−1) = output from stage n
  The transformation function: t_n = transformation function at stage n
  General formula to move from one stage to another using the transformation function: s(n−1) = t_n(s_n, d_n)
  The total return at any stage: f_n = total return at stage n
  Transformation Functions: s(n−1) = a_n s_n + b_n d_n + c_n
  Return Equations: r_n = a_n s_n + b_n d_n + c_n
Break-even analysis:
  Break-even point (units) = fixed cost/(price/unit − variable cost/unit)
  Probability of breaking even: Z = (break-even point − μ)/σ
  P(loss) = P(demand < break-even);  P(profit) = P(demand > break-even)
  EMV = (price/unit − variable cost/unit)(mean demand) − fixed costs
Opportunity Loss:
  Opportunity loss = K(break-even point − X), for X ≤ break-even point;  $0, for X > break-even point
  Using the unit normal loss integral, EOL can be computed using EOL = KσN(D)
  where
  EOL = expected opportunity loss
  K = loss per unit when sales are below the break-even point
  X = sales in units
  σ = standard deviation of the distribution
  N(D) = value for the unit normal loss integral for a given value of D
  D = |break-even point − μ|/σ
Matrix operations:
  Row × column: [a b c] × [d e f]′ = ad + be + cf
  Column × row: [a b c]′ × [d e] = | ad ae |
                                   | bd be |
                                   | cd ce |
  2 × 2 product: | a b | × | e f | = | ae + bg  af + bh |
                 | c d |   | g h |   | ce + dg  cf + dh |
Determinants:
  2 × 2 (rows a b; c d): determinant value = (a)(d) − (c)(b)
  3 × 3 (rows a b c; d e f; g h i): determinant value = aei + bfg + cdh − gec − hfa − idb
Cramer's rule: X = (numerical value of numerator determinant)/(numerical value of denominator determinant)
Matrix inverse (2 × 2):
  Original matrix: | a b |;  determinant value of the original matrix = ad − cb
                   | c d |
  Matrix of cofactors: | d −c |;  adjoint (transpose): | d −b |
                       | −b a |                        | −c a |
  Inverse = [1/(ad − cb)] | d −b |
                          | −c a |
Calculus:
  Slope as change in Y over change in X, with ΔX = X₂ − X₁:
  For Y₁ = aX² + bX + c and Y₂ = a(X + ΔX)² + b(X + ΔX) + c:
  ΔY = Y₂ − Y₁ = b(ΔX) + 2aX(ΔX) + a(ΔX)²
  ΔY/ΔX = b + 2aX + a(ΔX)
Derivative rules:
  Y = c (a constant) → dY/dX = 0
  Y = Xⁿ → dY/dX = nX^(n−1)
  Y = cXⁿ → dY/dX = cnX^(n−1)
  Y = 1/Xⁿ = X⁻ⁿ → dY/dX = −nX^(−n−1) = −n/X^(n+1)
  Y = g(x) + h(x) → Y′ = g′(x) + h′(x)
  Y = g(x) − h(x) → Y′ = g′(x) − h′(x)
  Y = g(x)h(x) → Y′ = g′(x)h(x) + g(x)h′(x)
  Y = g(x)/h(x) → Y′ = [g′(x)h(x) − g(x)h′(x)]/[h(x)]²
Economic Order Quantity by calculus:
  dTC/dQ = −DC_o/Q² + C_h/2; setting this equal to zero gives Q = √(2DC_o/C_h)
  d²TC/dQ² = 2DC_o/Q³ > 0, confirming a minimum.
PROBABILITY DISTRIBUTIONS
E(X) is the expected value (mean); Xᵢ = the random variable's possible values; P(Xᵢ) = probability of each of the random variable's values:
  E(X) = μ = Σ(i=1 to n) XᵢP(Xᵢ);  Variance σ² = Σ(i=1 to n) [Xᵢ − E(X)]²P(Xᵢ)

Uniform (Discrete)
  When to use: equal probability; finite number of possible values.
  f(x) = 1/(b − a + 1), for x = a, a + 1, …, b
  Mean μ = (b + a)/2;  Variance σ² = [(b − a + 1)² − 1]/12

Binomial / Bernoulli (Discrete)
  When to use: Bernoulli trials — each trial is independent, the probability of success in a trial is constant, and there are only two possible outcomes. Known: number of trials; unknown: number of successes.
  X = number of trials that result in a success
  f(x) = nCx·pˣq^(n−x) = [n!/(x!(n − x)!)]pˣq^(n−x), for x = 0, 1, …, n
  Binomial expansion: Σ(k=0 to n) nCk·aᵏb^(n−k) = (a + b)ⁿ
  Expected value (mean) = np;  Variance = np(1 − p)

Geometric (Discrete)
  X = number of trials until the first success.
  f(x) = (1 − p)^(x−1)·p, for x = 1, 2, …, n
  Expected value (mean) = E(X) = μ = 1/p;  Variance σ² = (1 − p)/p²

Negative Binomial (Discrete)
  X = number of trials until r successes.
  f(x) = C(x − 1, r − 1)(1 − p)^(x−r)·pʳ, for x = r, r+1, r+2, …
  E(X) = μ = r/p;  V(X) = σ² = r(1 − p)/p²

Hypergeometric (Discrete)
  A set of N objects contains K objects classified as successes and N − K objects classified as failures; a sample of n objects is selected without replacement.
  f(x) = C(K, x)·C(N − K, n − x)/C(N, n), for x = max(0, n − N + K) to min(K, n)
  E(X) = μ = np, where p = K/N
  V(X) = V(X) of the binomial × (N − n)/(N − 1), where (N − n)/(N − 1) is called the finite population correction factor
  If n << N or (n/N) < 0.1, the hypergeometric is approximately equal to the binomial.
  Approximated by the normal distribution if np > 5, n(1 − p) > 5, and (n/N) < 0.1.

Poisson (Discrete)
  Poisson process: the probability of more than one event in a subinterval is zero; the probability of one event in a subinterval is constant and proportional to the length of the subinterval; the event in each subinterval is independent. The arrival rate does not change over time, the arrival pattern does not follow a regular pattern, and arrivals in disjoint time intervals are independent.
  X = number of events in the interval
  f(x) = P(X = x) = e^(−λ)λˣ/x!, for x = 0, 1, 2, …
  E(X) = μ = λ = Variance = σ²
  Approximated by the normal distribution if λ > 5.

Continuous random variables:
  E(X) = μ = ∫xf(x)dx;  Variance σ² = ∫(x − μ)²f(x)dx
  P(x₁ < X < x₂) = P(x₁ ≤ X < x₂) = P(x₁ < X ≤ x₂) = P(x₁ ≤ X ≤ x₂) = ∫(x₁ to x₂) f(u)du

Uniform (Continuous)
  When to use: equal probability over an interval.
  f(x) = 1/(b − a), for a ≤ x ≤ b
  F(x) = 0, for x < a;  (x − a)/(b − a), for a ≤ x < b;  1, for b ≤ x
  Mean μ = E(X) = (a + b)/2;  Variance σ² = V(X) = (b − a)²/12

Normal (Continuous)
  Notation: N(μ, σ²). X is any random variable.
  f(x) = [1/(σ√(2π))]·e^(−(x−μ)²/(2σ²)), for −∞ < x < ∞ and −∞ < μ < +∞
  E(X) = μ;  V(X) = σ²
  Z = (X − μ)/σ is standard normal (mean = 0, variance = 1); P(X ≤ x) = P(Z ≤ (x − μ)/σ)
  Cumulative: Φ(z) = P(Z < z), where Z is standard normal.
  Normal approximation to the binomial (continuity correction):
  P(X ≤ x) ≈ P(X ≤ x + 0.5) = P(Z ≤ (x + 0.5 − np)/√(np(1 − p)))
  P(x ≤ X) ≈ P(x − 0.5 ≤ X) = P((x − 0.5 − np)/√(np(1 − p)) ≤ Z)

Exponential (Continuous)
  When to use: distance between successive events of a Poisson process with mean λ > 0; length until the first count in a Poisson process. Memoryless: P(X > t₁ + t₂ | X > t₁) = P(X > t₂).
  f(x) = λe^(−λx), for x ≥ 0
  E(X) = μ = 1/λ;  V(X) = σ² = 1/λ²

Erlang (Continuous)
  r = shape, λ = scale; times between events are independent; length until r counts in a Poisson process.
  f(x) = λʳx^(r−1)e^(−λx)/(r − 1)!, for x > 0 and r = 1, 2, …
  Mean μ = r/λ;  Variance σ² = r/λ²
  If r = 1, an Erlang random variable is an exponential random variable.
  Example: P(X > 0.1) = 1 − F(0.1).

Gamma (Continuous)
  Gamma Function: Γ(r) = ∫(0 to ∞) x^(r−1)e^(−x)dx, for r > 0
  f(x) = λʳx^(r−1)e^(−λx)/Γ(r), for x > 0
  Mean μ = r/λ;  Variance σ² = r/λ²

Weibull (Continuous)
  f(x) = (β/δ)(x/δ)^(β−1)·exp[−(x/δ)^β], for x > 0
  Cumulative: F(x) = 1 − exp[−(x/δ)^β]
  E(X) = δΓ(1 + 1/β);  V(X) = δ²Γ(1 + 2/β) − δ²[Γ(1 + 1/β)]²

Lognormal (Continuous)
  W = ln(X) is normal with mean θ and variance ω².
  F(x) = P(X ≤ x) = P(exp(W) ≤ x) = P(W ≤ ln(x)) = P(Z ≤ (ln(x) − θ)/ω);  F(x) = 0, for x ≤ 0
  f(x) = [1/(xω√(2π))]·exp[−(ln x − θ)²/(2ω²)], for 0 < x < ∞
  E(X) = e^(θ+ω²/2);  V(X) = e^(2θ+ω²)(e^(ω²) − 1)

Beta (Continuous)
  f(x) = [Γ(α + β)/(Γ(α)Γ(β))]·x^(α−1)(1 − x)^(β−1), for 0 ≤ x ≤ 1
  E(X) = α/(α + β);  V(X) = αβ/[(α + β)²(α + β + 1)]

Power Law (Continuous)
  Called a heavy-tailed distribution: f(x) decreases rapidly with x, but not as rapidly as the exponential distribution.
  f(x) = [(α − 1)/x_min]·(x/x_min)^(−α), for x ≥ x_min

Central Limit Theorem
  If X₁, X₂, …, X_n is a random sample of size n taken from a population with mean μ and variance σ², and if X̄ is the sample mean, the limiting form of the distribution of Z = (X̄ − μ)/(σ/√n), as n → ∞, is the standard normal distribution.
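The discrete pmf formulas above can be checked numerically; this sketch sums the binomial pmf to verify its mean np and variance np(1 − p), with hypothetical parameter values.

```python
from math import comb, exp, factorial

# Binomial pmf: f(x) = C(n, x) p^x q^(n-x); check mean and variance.
n, p = 10, 0.3
binom_pmf = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]
mean_b = sum(x * f for x, f in zip(range(n + 1), binom_pmf))
var_b = sum((x - mean_b)**2 * f for x, f in zip(range(n + 1), binom_pmf))

# Poisson pmf: f(x) = e^(-lam) lam^x / x!
lam = 2.0
def pois_pmf(x):
    return exp(-lam) * lam**x / factorial(x)
```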
Two or More Discrete Random Variables
  Joint probability mass function: f(X₁X₂⋯X_p)(x₁, x₂, …, x_p) = P(X₁ = x₁, X₂ = x₂, …, X_p = x_p), for all points of X₁, X₂, …, X_p
  Joint probability mass function for a subset: f(X₁X₂⋯X_k)(x₁, …, x_k) = P(X₁ = x₁, X₂ = x₂, …, X_k = x_k), summed over points of X₁, …, X_p for which X₁ = x₁, X₂ = x₂, …, X_k = x_k
  Marginal probability mass function: f_X(x) = P(X = x);  f(Xᵢ)(xᵢ) = P(Xᵢ = xᵢ) = Σ f(X₁X₂⋯X_p)(x₁, …, x_p), over all points for which Xᵢ = xᵢ
  Conditional pmf: f(Y|x)(y) = f(XY)(x, y)/f_X(x), for f_X(x) > 0
  Conditional mean: E(Y | x) = Σ_y y·f(Y|x)(y);  conditional variance: V(Y | x) = Σ_y (y − μ(Y|x))²·f(Y|x)(y)
  Independence: f(Y|x)(y) = f_Y(y); f(X|y)(x) = f_X(x); f(XY)(x, y) = f_X(x)f_Y(y) for all x and y
  Mean: E(Xᵢ) = Σ xᵢ·f(X₁X₂⋯X_p)(x₁, …, x_p);  Variance: V(Xᵢ) = Σ (xᵢ − μ(Xᵢ))²·f(X₁X₂⋯X_p)(x₁, …, x_p)

Multinomial Probability Distribution
  The random experiment that generates the probability distribution consists of a series of n independent trials; the result of each trial can be categorized into one of k classes.
  P(X₁ = x₁, X₂ = x₂, …, X_k = x_k) = [n!/(x₁!x₂!⋯x_k!)]·p₁^(x₁)p₂^(x₂)⋯p_k^(x_k)
  for x₁ + x₂ + ⋯ + x_k = n and p₁ + p₂ + ⋯ + p_k = 1
  E(Xᵢ) = npᵢ;  V(Xᵢ) = npᵢ(1 − pᵢ)

Two or More Continuous Random Variables
  Marginal density: f_X(x) = ∫f(XY)(x, y)dy;  f_Y(y) = ∫f(XY)(x, y)dx
  Conditional density: f(Y|x)(y) = f(XY)(x, y)/f_X(x), for f_X(x) > 0
  Conditional mean: E(Y | x) = ∫y·f(Y|x)(y)dy;  conditional variance: V(Y | x) = ∫(y − μ(Y|x))²·f(Y|x)(y)dy
  Independence: f(Y|x)(y) = f_Y(y); f(X|y)(x) = f_X(x); f(XY)(x, y) = f_X(x)f_Y(y) for all x and y
  P[(X₁, X₂, …, X_p) ∈ B] = ∫⋯∫(B) f(X₁X₂⋯X_p)(x₁, …, x_p)dx₁⋯dx_p
  Joint density for a subset: f(X₁X₂⋯X_k)(x₁, …, x_k) = ∫⋯∫ f(X₁X₂⋯X_p)(x₁, …, x_p)dx(k+1)⋯dx_p, over the range of the remaining variables
  Marginal probability density function: f(Xᵢ)(xᵢ) = ∫⋯∫ f(X₁X₂⋯X_p)(x₁, …, x_p)dx₁dx₂⋯dx(i−1)dx(i+1)⋯dx_p, over all points for which Xᵢ = xᵢ
  Mean: E(Xᵢ) = ∫⋯∫ xᵢ·f(X₁X₂⋯X_p)(x₁, …, x_p)dx₁dx₂⋯dx_p
  Variance: V(Xᵢ) = ∫⋯∫ (xᵢ − μ(Xᵢ))²·f(X₁X₂⋯X_p)(x₁, …, x_p)dx₁dx₂⋯dx_p

Covariance and Correlation
  Covariance is a measure of the linear relationship between the random variables. If the relationship between the random variables is nonlinear, the covariance might not be sensitive to the relationship.
  Covariance: σ(XY) = E[(X − μ_X)(Y − μ_Y)] = E(XY) − μ_Xμ_Y
  Two random variables with nonzero correlation are said to be correlated. Similar to covariance, the correlation measures the linear relationship between random variables.
  Correlation: ρ(XY) = σ(XY)/(σ_Xσ_Y) = cov(X, Y)/√[V(X)V(Y)]

Bivariate Normal Distribution
  f(XY)(x, y; μ_X, μ_Y, σ_X, σ_Y, ρ) = {1/[2πσ_Xσ_Y√(1 − ρ²)]}·exp{−[1/(2(1 − ρ²))]·[(x − μ_X)²/σ_X² − 2ρ(x − μ_X)(y − μ_Y)/(σ_Xσ_Y) + (y − μ_Y)²/σ_Y²]}
  for −∞ < x < ∞ and −∞ < y < ∞, with parameters σ_X > 0, σ_Y > 0, −∞ < μ_X < ∞, −∞ < μ_Y < ∞, and −1 < ρ < 1
  Conditional mean: μ(Y|x) = μ_Y + ρ(σ_Y/σ_X)(x − μ_X);  conditional variance: σ²(Y|x) = σ_Y²(1 − ρ²)

Linear Functions of Random Variables
  Y = c₁X₁ + c₂X₂ + ⋯ + c_pX_p, with E(Xᵢ) = μᵢ and V(Xᵢ) = σᵢ²
  E(Y) = c₁μ₁ + c₂μ₂ + ⋯ + c_pμ_p
  If X₁, …, X_p are independent: V(Y) = c₁²V(X₁) + c₂²V(X₂) + ⋯ + c_p²V(X_p) = c₁²σ₁² + c₂²σ₂² + ⋯ + c_p²σ_p²

General Functions of Random Variables
  For Y = h(X₁, …, X_p), the mean and variance can be approximated by a Taylor expansion about the means: E(Y) ≈ h(μ₁, …, μ_p);  V(Y) ≈ Σ (∂h/∂xᵢ)²σᵢ²
Confidence intervals and hypothesis tests: x̄ is an estimator of μ; S² is the sample variance.
  Type I error: rejecting the null hypothesis H₀ when it is true. Type II error: failing to reject the null hypothesis H₀ when it is false.
  Probability of Type I error = α = P(type I error) = significance level = α-error = α-level = size of the test.
  Probability of Type II error = β = P(type II error).
  Power = probability of rejecting the null hypothesis H₀ when the alternative hypothesis is true = 1 − β = probability of correctly rejecting a false null hypothesis.
  P-value = smallest level of significance that would lead to the rejection of the null hypothesis H₀.

Inference on the mean, σ known (z-test):
  100(1 − α)% CI on μ: x̄ − z(α/2)σ/√n ≤ μ ≤ x̄ + z(α/2)σ/√n
  z(α/2) is the upper 100α/2 percentage point of the standard normal distribution
  100(1 − α)% upper confidence bound: μ ≤ x̄ + z(α)σ/√n;  lower confidence bound: x̄ − z(α)σ/√n = l ≤ μ
  Large sample size n: using the central limit theorem, X̄ has approximately a normal distribution with mean μ and variance σ²/n, so x̄ − z(α/2)S/√n ≤ μ ≤ x̄ + z(α/2)S/√n
  Test statistic: Z₀ = (X̄ − μ₀)/(σ/√n)

  Alternative hypothesis | p-value                                        | Rejection criteria
  H₁: μ ≠ μ₀             | above |z₀| and below −|z₀|: P = 2[1 − Φ(|z₀|)] | z₀ > z(α/2) or z₀ < −z(α/2)
  H₁: μ > μ₀             | above z₀: P = 1 − Φ(z₀)                        | z₀ > z(α)
  H₁: μ < μ₀             | below z₀: P = Φ(z₀)                            | z₀ < −z(α)

  Sample size for a specified error E: E = z(α/2)σ/√n ⇒ n = [z(α/2)σ/E]²
  Probability of Type II error for a two-sided test: β = Φ(z(α/2) − δ√n/σ) − Φ(−z(α/2) − δ√n/σ), where δ = μ − μ₀
  Sample size, two-sided test: n = (z(α/2) + z(β))²σ²/δ²;  one-sided test: n = (z(α) + z(β))²σ²/δ²
  d = |μ − μ₀|/σ = |δ|/σ
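The two-sided z-test above can be sketched using the error function for the standard normal CDF; the sample figures are hypothetical.

```python
from math import erf, sqrt

# Two-sided z-test on a mean with known sigma (hypothetical sample).
mu0, sigma, n = 50.0, 2.0, 25
xbar = 51.3

z0 = (xbar - mu0) / (sigma / sqrt(n))        # Z0 = (xbar - mu0)/(sigma/sqrt(n))
def phi(z):
    return 0.5 * (1 + erf(z / sqrt(2)))      # standard normal CDF
p_value = 2 * (1 - phi(abs(z0)))             # H1: mu != mu0
# Reject H0 at level alpha if p_value < alpha.
```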
Inference on the mean, σ unknown (t-test):
  The t distribution is similar to the normal in symmetry and is unimodal, but the t distribution has heavier tails than the normal; k = n − 1 degrees of freedom.
  T = (X̄ − μ)/(S/√n)
  f(x) = {Γ[(k + 1)/2]/[√(πk)·Γ(k/2)]} · 1/[(x²/k) + 1]^((k+1)/2)
  Mean = 0; variance = k/(k − 2), for k > 2
  100(1 − α)% CI on μ: x̄ − t(α/2, n−1)s/√n ≤ μ ≤ x̄ + t(α/2, n−1)s/√n
  Upper confidence bound: μ ≤ u = x̄ + t(α, n−1)s/√n
  100(1 − α)% lower confidence bound: x̄ − t(α, n−1)s/√n = l ≤ μ
  Test statistic: T₀ = (X̄ − μ₀)/(S/√n)

  Alternative hypothesis | p-value                     | Rejection criteria
  H₁: μ ≠ μ₀             | above |t₀| and below −|t₀|  | t₀ > t(α/2, n−1) or t₀ < −t(α/2, n−1)
  H₁: μ > μ₀             | above t₀                    | t₀ > t(α, n−1)
  H₁: μ < μ₀             | below t₀                    | t₀ < −t(α, n−1)

  The probability of a Type II error for a two-sided test is found from the OC curves with d = |μ − μ₀|/σ = |δ|/σ.
Inference on the variance of a normal distribution (chi-square test):
  X² = (n − 1)S²/σ² has a chi-square distribution with k = n − 1 degrees of freedom
  f(x) = [1/(2^(k/2)Γ(k/2))]·x^((k/2)−1)·e^(−x/2), for x > 0
  Mean = k; variance = 2k
  100(1 − α)% CI on σ²: (n − 1)s²/χ²(α/2, n−1) ≤ σ² ≤ (n − 1)s²/χ²(1−α/2, n−1)
  One-sided bounds use (n − 1)s²/χ²(α, n−1) and (n − 1)s²/χ²(1−α, n−1)
  Null hypothesis: H₀: σ² = σ₀²;  test statistic: X₀² = (n − 1)S²/σ₀²

  Alternative hypothesis | Rejection criteria
  H₁: σ² ≠ σ₀²           | χ₀² > χ²(α/2, n−1) or χ₀² < χ²(1−α/2, n−1)
  H₁: σ² > σ₀²           | χ₀² > χ²(α, n−1)
  H₁: σ² < σ₀²           | χ₀² < χ²(1−α, n−1)
Inference on a population proportion:
  Z = (X − np)/√[np(1 − p)] ≈ (P̂ − p)/√[p(1 − p)/n]
  100(1 − α)% CI on p: p̂ − z(α/2)√[p̂(1 − p̂)/n] ≤ p ≤ p̂ + z(α/2)√[p̂(1 − p̂)/n]
  One-sided bounds: p ≤ p̂ + z(α)√[p̂(1 − p̂)/n];  p̂ − z(α)√[p̂(1 − p̂)/n] ≤ p
  Sample size: n = [z(α/2)/E]²·p(1 − p);  p can be computed as p̂ from a preliminary sample, or use the maximum value of p(1 − p), which occurs at p = 0.5
  Null hypothesis: H₀: p = p₀;  test statistic: z₀ = (X − np₀)/√[np₀(1 − p₀)]

  Alternative hypothesis | p-value                                        | Rejection criteria
  H₁: p ≠ p₀             | above |z₀| and below −|z₀|: P = 2[1 − Φ(|z₀|)] | z₀ > z(α/2) or z₀ < −z(α/2)
  H₁: p > p₀             | above z₀: P = 1 − Φ(z₀)                        | z₀ > z(α)
  H₁: p < p₀             | below z₀: P = Φ(z₀)                            | z₀ < −z(α)

  Type II error: β = Φ{[p₀ − p + z(α/2)√(p₀(1 − p₀)/n)]/√[p(1 − p)/n]} − Φ{[p₀ − p − z(α/2)√(p₀(1 − p₀)/n)]/√[p(1 − p)/n]}
  Sample size, two-sided test: n = {z(α/2)√[p₀(1 − p₀)] + z(β)√[p(1 − p)]}²/(p − p₀)²
  One-sided test: n = {z(α)√[p₀(1 − p₀)] + z(β)√[p(1 − p)]}²/(p − p₀)²
Inference on the difference in means, variances known:
  100(1 − α)% CI on μ₁ − μ₂:
  x̄₁ − x̄₂ − z(α/2)√(σ₁²/n₁ + σ₂²/n₂) ≤ μ₁ − μ₂ ≤ x̄₁ − x̄₂ + z(α/2)√(σ₁²/n₁ + σ₂²/n₂)
  Z = [X̄₁ − X̄₂ − (μ₁ − μ₂)]/√(σ₁²/n₁ + σ₂²/n₂) is standard normal
  Sample size for a specified error E (n₁ = n₂ = n): E = z(α/2)√(σ₁²/n + σ₂²/n) ⇒ n = [z(α/2)/E]²(σ₁² + σ₂²)
  Sample size for a two-sided test, with n₁ = n₂: n = (z(α/2) + z(β))²(σ₁² + σ₂²)/(δ − Δ₀)²
  Sample size for a one-sided test, with n₁ = n₂: n = (z(α) + z(β))²(σ₁² + σ₂²)/(δ − Δ₀)²
  d = |μ₁ − μ₂ − Δ₀|/√(σ₁² + σ₂²) = |δ − Δ₀|/√(σ₁² + σ₂²)
  Test statistic: Z₀ = (X̄₁ − X̄₂ − Δ₀)/√(σ₁²/n₁ + σ₂²/n₂)

  Alternative hypothesis  | p-value                                        | Rejection criteria
  H₁: μ₁ − μ₂ ≠ Δ₀        | above |z₀| and below −|z₀|: P = 2[1 − Φ(|z₀|)] | z₀ > z(α/2) or z₀ < −z(α/2)
  H₁: μ₁ − μ₂ > Δ₀        | above z₀: P = 1 − Φ(z₀)                        | z₀ > z(α)
  H₁: μ₁ − μ₂ < Δ₀        | below z₀: P = Φ(z₀)                            | z₀ < −z(α)
Inference on the difference in means, variances unknown:
  Case 1, σ₁² = σ₂² (pooled t-test):
  Pooled variance: S_p² = [(n₁ − 1)S₁² + (n₂ − 1)S₂²]/(n₁ + n₂ − 2)
  Test statistic: T₀ = (X̄₁ − X̄₂ − Δ₀)/[S_p√(1/n₁ + 1/n₂)], with n₁ + n₂ − 2 degrees of freedom
  Alternative hypotheses H₁: μ₁ − μ₂ ≠ Δ₀, > Δ₀, < Δ₀ use the same rejection criteria as the z-test, with t replacing z.
  100(1 − α)% CI: x̄₁ − x̄₂ ± t(α/2, n₁+n₂−2)·S_p√(1/n₁ + 1/n₂)

  Case 2, σ₁² ≠ σ₂² (Welch): if H₀: μ₁ − μ₂ = Δ₀ is true, the statistic
  T₀* = (X̄₁ − X̄₂ − Δ₀)/√(S₁²/n₁ + S₂²/n₂)
  is distributed approximately as t with degrees of freedom
  ν = (S₁²/n₁ + S₂²/n₂)² / {(S₁²/n₁)²/(n₁ − 1) + (S₂²/n₂)²/(n₂ − 1)}
  100(1 − α)% CI: x̄₁ − x̄₂ ± t(α/2, ν)√(s₁²/n₁ + s₂²/n₂)
Chi-square goodness-of-fit test:
  X₀² = Σ(i=1 to k) (Oᵢ − Eᵢ)²/Eᵢ
  where Oᵢ is the observed frequency and Eᵢ is the expected frequency in the i-th class.
  For large n, the statistic is approximated by a chi-square distribution with k − p − 1 degrees of freedom; p represents the number of estimated parameters. If the test statistic exceeds χ²(α, k−p−1), reject the null hypothesis.
Contingency tables (r rows, c columns):
  uᵢ = (1/n)Σ(j=1 to c) Oᵢⱼ;  vⱼ = (1/n)Σ(i=1 to r) Oᵢⱼ
  Expected frequency of each cell: Eᵢⱼ = n·uᵢ·vⱼ, where n = ΣᵢΣⱼ Oᵢⱼ
  X₀² = Σ(i=1 to r)Σ(j=1 to c) (Oᵢⱼ − Eᵢⱼ)²/Eᵢⱼ, with (r − 1)(c − 1) degrees of freedom
  The p-value is the corresponding upper-tail chi-square probability.
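The goodness-of-fit statistic above in a minimal sketch; the observed and expected counts are hypothetical, with no parameters estimated from the data (p = 0).

```python
# Chi-square goodness-of-fit statistic (hypothetical counts).
observed = [18, 25, 32, 25]
expected = [25.0, 25.0, 25.0, 25.0]

# X0^2 = sum of (O_i - E_i)^2 / E_i
x0_sq = sum((o - e)**2 / e for o, e in zip(observed, expected))

df = len(observed) - 0 - 1   # k - p - 1, with p = 0 estimated parameters
# Compare x0_sq with the chi-square critical value at df degrees of freedom.
```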
100(1 − α)% prediction interval on a single future observation from a normal distribution:
  x̄ − t(α/2, n−1)·s√(1 + 1/n) ≤ X(n+1) ≤ x̄ + t(α/2, n−1)·s√(1 + 1/n)
The prediction interval for X(n+1) will always be longer than the CI for μ because there is more variability associated with predicting a single observation.
Tolerance interval for capturing at least γ% of the values of a normal distribution with confidence level 100(1 − α)%:
  x̄ − ks, x̄ + ks (the factor k is tabulated)
Sign test (on the median μ̃):
  Null hypothesis: H₀: μ̃ = μ̃₀
  One-sided hypotheses: H₁: μ̃ > μ̃₀ uses P-value = P(R⁺ ≥ r⁺ when p = 1/2); H₁: μ̃ < μ̃₀ uses P-value = P(R⁺ ≤ r⁺ when p = 1/2)
  Two-sided hypothesis H₁: μ̃ ≠ μ̃₀: if r⁺ < n/2, P-value = 2P(R⁺ ≤ r⁺ when p = 1/2); if r⁺ > n/2, P-value = 2P(R⁺ ≥ r⁺ when p = 1/2)
  Normal approximation: z₀ = (R⁺ − 0.5n)/(0.5√n)

Wilcoxon signed-rank test:
  Null hypothesis: H₀: μ = μ₀. Sort the absolute differences |xᵢ − μ₀| in ascending order and give the ranks the signs of their corresponding differences.
  Sum of positive ranks: W⁺; absolute value of the sum of negative ranks: W⁻; W = min(W⁺, W⁻)
  H₁: μ ≠ μ₀ — reject if w ≤ w*(α);  H₁: μ > μ₀ — reject if w⁻ ≤ w*(α);  H₁: μ < μ₀ — reject if w⁺ ≤ w*(α)
  Normal approximation: z₀ = [W − n(n + 1)/4]/√[n(n + 1)(2n + 1)/24]

Wilcoxon rank-sum test (two samples):
  W₂ = (n₁ + n₂)(n₁ + n₂ + 1)/2 − W₁
  Normal approximation: Z₀ = (W₁ − μ(W₁))/σ(W₁)

Paired t-test:
  Test statistic: T₀ = (D̄ − Δ₀)/(S_D/√n), where D̄ is the sample average of the n paired differences.
F Distribution:
  F = (W/u)/(Y/v), where W and Y are independent chi-square random variables with u and v degrees of freedom
  f(x) = {Γ[(u + v)/2]·(u/v)^(u/2)·x^(u/2−1)} / {Γ(u/2)Γ(v/2)·[(u/v)x + 1]^((u+v)/2)}, for 0 < x < ∞
  Mean = v/(v − 2), for v > 2
  Variance = 2v²(u + v − 2)/[u(v − 2)²(v − 4)], for v > 4
  Lower-tail points: f(1−α, u, v) = 1/f(α, v, u)
  F = (S₁²/σ₁²)/(S₂²/σ₂²) has an F distribution with n₁ − 1 and n₂ − 1 degrees of freedom
Test on the equality of two variances:
  Null hypothesis: H₀: σ₁² = σ₂²;  test statistic: F₀ = S₁²/S₂²

  Alternative hypothesis | Rejection criteria
  H₁: σ₁² ≠ σ₂²          | f₀ > f(α/2, n₁−1, n₂−1) or f₀ < f(1−α/2, n₁−1, n₂−1)
  H₁: σ₁² > σ₂²          | f₀ > f(α, n₁−1, n₂−1)
  H₁: σ₁² < σ₂²          | f₀ < f(1−α, n₁−1, n₂−1)

  P-value is the area (probability) under the F distribution with n₁ − 1 and n₂ − 1 degrees of freedom that lies beyond the computed value of the test statistic f₀.
  100(1 − α)% CI on σ₁²/σ₂²:
  (s₁²/s₂²)·f(1−α/2, n₂−1, n₁−1) ≤ σ₁²/σ₂² ≤ (s₁²/s₂²)·f(α/2, n₂−1, n₁−1)
  where f(α/2, n₂−1, n₁−1) and f(1−α/2, n₂−1, n₁−1) are the upper and lower α/2 percentage points of the F distribution with n₂ − 1 numerator and n₁ − 1 denominator degrees of freedom.
Inference on two population proportions:
  Null hypothesis: H₀: p₁ = p₂;  alternative: H₁: p₁ ≠ p₂ (or one-sided)
  Test statistic: Z₀ = (P̂₁ − P̂₂)/√[p̂q̂(1/n₁ + 1/n₂)]
  where p̂ = (n₁p̂₁ + n₂p̂₂)/(n₁ + n₂) and q̂ = 1 − p̂ = [n₁(1 − p̂₁) + n₂(1 − p̂₂)]/(n₁ + n₂)
  The p-values and rejection criteria parallel the one-sample z-test.
  Type II error: β = Φ{[z(α/2)√(p̄q̄(1/n₁ + 1/n₂)) − (p₁ − p₂)]/σ(P̂₁−P̂₂)} − Φ{[−z(α/2)√(p̄q̄(1/n₁ + 1/n₂)) − (p₁ − p₂)]/σ(P̂₁−P̂₂)}
  Sample size for a two-sided test (n₁ = n₂ = n):
  n = {z(α/2)√[(p₁ + p₂)(q₁ + q₂)/2] + z(β)√(p₁q₁ + p₂q₂)}²/(p₁ − p₂)²
  where q₁ = 1 − p₁ and q₂ = 1 − p₂
  100(1 − α)% CI on the difference in the true proportions p₁ − p₂:
  p̂₁ − p̂₂ − z(α/2)√(p̂₁q̂₁/n₁ + p̂₂q̂₂/n₂) ≤ p₁ − p₂ ≤ p̂₁ − p̂₂ + z(α/2)√(p̂₁q̂₁/n₁ + p̂₂q̂₂/n₂)
Inferences:
1. Population is normal: sign test or t-test.
   a. The t-test has the smallest value of β for a given significance level α, so the t-test is superior to the other tests.
2. Population is symmetric but not normal (but with finite mean):
   a. The t-test will have a smaller β (or higher power) than the sign test.
   b. The Wilcoxon signed-rank test is comparable to the t-test.
3. Distribution with heavier tails:
   a. The Wilcoxon signed-rank test is better than the t-test, since the t-test depends on the sample mean, which is unstable in heavy-tailed distributions.
4. Distribution is not close to normal:
   a. The Wilcoxon signed-rank test is preferred.
5. Paired observations:
   a. Both the sign test and the Wilcoxon signed-rank test can be applied. In the sign test, the median of the differences is equal to zero under the null hypothesis. In the Wilcoxon signed-rank test, the mean of the differences is equal to zero under the null hypothesis.