
UNSW Business School

School of Risk and Actuarial Studies

ACTL2131
Probability and Mathematical Statistics

Exercises

S1 2016

March 4, 2016
Contents

Schedule of Tutorial Exercises 1

1 Probability Theory 2
Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Exercise 1.1 [wk01Q1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Exercise 1.2 [wk01Q2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Exercise 1.3 [wk01Q3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Exercise 1.4 [wk01Q4] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Exercise 1.5 [wk01Q5] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Exercise 1.6 [wk01Q6] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Exercise 1.7 [wk01Q7] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Exercise 1.8 [wk01Q8] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1 Mathematical Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Exercise 1.9 [wk01Q9] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Exercise 1.10 [wk01Q10] . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Exercise 1.11 [wk01Q11] . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Exercise 1.12 [wk01Q12] . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Exercise 1.13 [wk01Q13] . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Exercise 1.14 [wk01Q14] . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Exercise 1.15 [wk01Q15] . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Exercise 1.16 [wk01Q16] . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Exercise 1.17 [wk01Q17] . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2 Univariate Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Exercise 1.18 [wk02Q1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Exercise 1.19 [wk02Q2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Exercise 1.20 [wk02Q3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Exercise 1.21 [wk02Q4] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Exercise 1.22 [wk02Q5] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Exercise 1.23 [wk02Q6] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

ACTL2131 Probability and Mathematical Statistics, S1 2016 Exercises

Exercise 1.24 [wk02Q7] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9


Exercise 1.25 [wk02Q8] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3 Joint and Multivariate Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Exercise 1.26 [wk03Q4] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Exercise 1.27 [wk03Q5] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Exercise 1.28 [wk03Q6] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Exercise 1.29 [wk03Q8] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Exercise 1.30 [wk03Q9] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Exercise 1.31 [wk03Q10] . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Exercise 1.32 [wk03Q11] . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Exercise 1.33 [wk03Q12] . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.4 Sampling and Summarising Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Exercise 1.34 [wk03Q1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Exercise 1.35 [wk03Q2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Exercise 1.36 [wk03Q3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Exercise 1.37 [wk03Q7] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5 Functions of Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Exercise 1.38 [wk04Q1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Exercise 1.39 [wk04Q2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Exercise 1.40 [wk04Q3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Exercise 1.41 [wk04Q4] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Exercise 1.42 [wk04Q5] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Exercise 1.43 [wk05Q14] . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Exercise 1.44 [wk05Q15] . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Exercise 1.45 [wk05Q16] . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Exercise 1.46 [wk04Q6] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Exercise 1.1 [wk01Q1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Exercise 1.2 [wk01Q2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Exercise 1.3 [wk01Q3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Exercise 1.4 [wk01Q4] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Exercise 1.5 [wk01Q5] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Exercise 1.6 [wk01Q6] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Exercise 1.7 [wk01Q7] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Exercise 1.8 [wk01Q8] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Exercise 1.9 [wk01Q9] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Exercise 1.10 [wk01Q10] . . . . . . . . . . . . . . . . . . . . . . . . . . . 21


Exercise 1.11 [wk01Q11] . . . . . . . . . . . . . . . . . . . . . . . . . . . 22


Exercise 1.12 [wk01Q12] . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Exercise 1.13 [wk01Q13] . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Exercise 1.14 [wk01Q14] . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Exercise 1.15 [wk01Q15] . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Exercise 1.16 [wk01Q16] . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Exercise 1.17 [wk01Q17] . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Exercise 1.18 [wk02Q1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Exercise 1.19 [wk02Q2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Exercise 1.20 [wk02Q3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Exercise 1.21 [wk02Q4] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Exercise 1.22 [wk02Q5] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Exercise 1.23 [wk02Q6] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Exercise 1.24 [wk02Q7] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Exercise 1.25 [wk02Q8] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Exercise 1.26 [wk03Q4] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Exercise 1.27 [wk03Q5] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Exercise 1.28 [wk03Q6] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Exercise 1.29 [wk03Q8] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Exercise 1.30 [wk03Q9] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Exercise 1.31 [wk03Q10] . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Exercise 1.32 [wk03Q11] . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Exercise 1.33 [wk03Q12] . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Exercise 1.34 [wk03Q1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Exercise 1.35 [wk03Q2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Exercise 1.36 [wk03Q3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Exercise 1.37 [wk03Q7] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Exercise 1.38 [wk04Q1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Exercise 1.39 [wk04Q2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Exercise 1.40 [wk04Q3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Exercise 1.41 [wk04Q4] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Exercise 1.42 [wk04Q5] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Exercise 1.43 [wk05Q14] . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Exercise 1.44 [wk05Q15] . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Exercise 1.45 [wk05Q16] . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Exercise 1.46 [wk04Q6] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55


2 Parameter Estimation 58
2.1 Estimation Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Exercise 2.1 [wk05Q2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Exercise 2.2 [wk05Q5] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Exercise 2.3 [wk05Q6] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Exercise 2.4 [wk05Q9] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Exercise 2.5 [wk05Q12] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Exercise 2.6 [wk05Q13] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
2.2 Limit Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Exercise 2.7 [wk05Q1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Exercise 2.8 [wk05Q3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Exercise 2.9 [wk05Q4] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Exercise 2.10 [wk05Q7] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Exercise 2.11 [wk05Q8] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Exercise 2.12 [wk05Q10] . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Exercise 2.13 [wk05Q11] . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
2.3 Evaluating Estimators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Exercise 2.14 [wk06Q1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Exercise 2.15 [wk06Q2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Exercise 2.16 [wk06Q3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Exercise 2.17 [wk06Q4] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Exercise 2.18 [wk06Q5] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Exercise 2.19 [wk06Q6] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Exercise 2.20 [wk06Q7] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Exercise 2.21 [wk06Q8] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Exercise 2.22 [wk06Q9] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Exercise 2.23 [wk06Q10] . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Exercise 2.24 [wk06Q11] . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Exercise 2.25 [wk06Q12] . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Exercise 2.26 [wk06Q13] . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Exercise 2.27 [wk06Q14] . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Exercise 2.1 [wk05Q2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Exercise 2.2 [wk05Q5] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Exercise 2.3 [wk05Q6] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Exercise 2.4 [wk05Q9] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Exercise 2.5 [wk05Q12] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70


Exercise 2.6 [wk05Q13] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71


Exercise 2.7 [wk05Q1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Exercise 2.8 [wk05Q3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Exercise 2.9 [wk05Q4] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Exercise 2.10 [wk05Q7] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Exercise 2.11 [wk05Q8] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Exercise 2.12 [wk05Q10] . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Exercise 2.13 [wk05Q11] . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Exercise 2.14 [wk06Q1] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Exercise 2.15 [wk06Q2] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Exercise 2.16 [wk06Q3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Exercise 2.17 [wk06Q4] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Exercise 2.18 [wk06Q5] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Exercise 2.19 [wk06Q6] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Exercise 2.20 [wk06Q7] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Exercise 2.21 [wk06Q8] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Exercise 2.22 [wk06Q9] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Exercise 2.23 [wk06Q10] . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Exercise 2.24 [wk06Q11] . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Exercise 2.25 [wk06Q12] . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Exercise 2.26 [wk06Q13] . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Exercise 2.27 [wk06Q14] . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

Schedule of Tutorial Exercises

Exercises   Before Tutorial     During Tutorial                After Tutorial                 Additional

Week 1      1.1, 1.2            1.10, 1.11, 1.12, 1.13         1.14, 1.15, 1.16, 1.17         1.3-1.9
Week 2      1.18, 1.19          1.20, 1.21, 1.22               1.23, 1.24, 1.25
Week 3      1.34, 1.35, 1.36    1.26, 1.28, 1.27               1.37, 1.29, 1.30, 1.31, 1.32   1.33
Week 4      1.38                1.40(1,2,3), 1.41, 1.40(4,5)   1.39, 1.42                     1.46, 1.43, 1.44, 1.45
Week 5      2.7, 2.1            2.8, 2.9, 2.2, 2.3             2.10, 2.11, 2.4, 2.12          2.13, 2.5, 2.6
Week 6      2.14, 2.15          2.17, 2.16, 2.19, 2.18         2.20, 2.21, 2.22, 2.23         2.24, 2.25, 2.26, 2.27

Table 1: Schedule of tutorial exercises.

Module 1

Probability Theory

Preliminaries
Exercise 1.1: [wk01Q1, Solution, Schedule] An urn contains one black ball and one gold ball while
a second urn contains one white and one gold ball. One ball is selected at random from each urn.

1. Describe the sample space for this experiment.

2. Describe a σ-algebra for this experiment.

3. Describe the event that both balls will be of the same colour. What is the probability of this
event?
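As a quick check (not part of the original exercise), the sample space can be enumerated directly; assuming each of the four outcomes is equally likely, a sketch in Python:

```python
from itertools import product

# Urn 1 holds {black, gold}; urn 2 holds {white, gold}.
# One ball is drawn from each urn, so the sample space is the Cartesian product.
urn1, urn2 = ["black", "gold"], ["white", "gold"]
sample_space = list(product(urn1, urn2))

# Event "both balls have the same colour": only (gold, gold) qualifies.
same_colour = [o for o in sample_space if o[0] == o[1]]
prob = len(same_colour) / len(sample_space)   # outcomes are equally likely
print(sample_space)
print(prob)   # 0.25
```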

Exercise 1.2: [wk01Q2, Solution, Schedule] A box contains 100 Christmas balls: 49 are red, 34 are
gold, and 17 are silver. Three balls are to be drawn without replacement. Determine the probability
that:

1. all 3 balls are red;

2. the balls are drawn in the order: red, gold, and silver;

3. the third ball is a silver, given that the first 2 are red and gold (not necessarily in that order); and

4. the first 2 are red, given that the third ball is a silver.
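The four probabilities can be verified with exact fractions; the sketch below (illustrative, not part of the exercise) conditions by direct counting:

```python
from fractions import Fraction as F

# 100 balls: 49 red, 34 gold, 17 silver; three drawn without replacement.
p_all_red = F(49, 100) * F(48, 99) * F(47, 98)      # part 1
p_rgs     = F(49, 100) * F(34, 99) * F(17, 98)      # part 2: order red, gold, silver
p3        = F(17, 98)                               # part 3: 98 balls remain, 17 silver

# part 4: Pr(first two red | third silver); by symmetry Pr(third silver) = 17/100.
p4 = (F(49, 100) * F(48, 99) * F(17, 98)) / F(17, 100)

print(float(p_all_red), float(p_rgs), float(p3), float(p4))
```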

Exercise 1.3: [wk01Q3, Solution, Schedule] Let A and B be two independent events. Prove that the
following pairs are also independent:

1. A and B^c;

2. A^c and B;

3. A^c and B^c.

Exercise 1.4: [wk01Q4, Solution, Schedule] A pair of events A and B cannot be simultaneously
mutually exclusive and independent. Assume that their probabilities are strictly positive, i.e., Pr (A) >
0 and Pr (B) > 0. Prove the following:


1. If A and B are mutually exclusive, then they cannot be independent.

2. If A and B are independent, then they cannot be mutually exclusive.

Exercise 1.5: [wk01Q5, Solution, Schedule] This exercise shows that the three-way factorisation Pr(E1 ∩ E2 ∩ E3) = Pr(E1) Pr(E2) Pr(E3) does not imply pairwise independence. Consider a random experiment which consists of tossing two dice. Define
the following events:

E1 = {doubles appear}

E2 = {the sum is between (and includes) 7 and 10}

E3 = {the sum is 2 or 7 or 8}

1. Show that Pr(E1 ∩ E2 ∩ E3) = Pr(E1) Pr(E2) Pr(E3).

2. Show that E1 and E2 are not pairwise independent.

3. Show that E2 and E3 are not pairwise independent.

4. What about E1 and E3: are they pairwise independent?

Exercise 1.6: [wk01Q6, Solution, Schedule] In an undergraduate statistics class, three students A, B,
and C submitted exactly (word-for-word) the same solution to a homework problem. It is the policy
of the lecturer to give zero marks for those who copy homework problems. Believing that there must
be one of the three who actually did the work, the lecturer will pardon one of the three and chooses at
random the student to pardon.
However, the lecturer will only inform the students at the end of the semester who among them has
been pardoned.
The next day, A tries to get the lecturer to tell him who had been pardoned. The lecturer refuses. A
then asks which of B or C will not be pardoned. The lecturer thinks for a while, then tells A that B is
not to be pardoned.

Lecturer's reasoning: Each student has a 1/3 chance of being pardoned. Clearly, either B or C
must not be pardoned, so I have given A no information about whether A will be pardoned.

A's reasoning: Given that B will not be pardoned, then either A or C will be pardoned. My
chance of being pardoned has risen to 1/2.

1. Evaluate the lecturer's reasoning, i.e., explain whether his reasoning is justified.

2. Evaluate student A's reasoning, i.e., explain whether his reasoning is justified.
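This is the classical three-prisoners problem; a Monte Carlo sketch (assuming the lecturer names B or C uniformly at random when both are eligible) suggests which of the two arguments holds up:

```python
import random

random.seed(1)
b_named = a_pardoned_and_b_named = 0

for _ in range(100_000):
    pardoned = random.choice("ABC")
    # The lecturer names one of B, C who will NOT be pardoned;
    # when both qualify (A is pardoned), assume a uniform choice.
    if pardoned == "B":
        named = "C"
    elif pardoned == "C":
        named = "B"
    else:
        named = random.choice("BC")
    if named == "B":
        b_named += 1
        a_pardoned_and_b_named += pardoned == "A"

p_est = a_pardoned_and_b_named / b_named
print(p_est)   # close to 1/3: the lecturer's answer carries no information for A
```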

Exercise 1.7: [wk01Q7, Solution, Schedule] Two airlines serving some of the same cities in Australia
have merged. Management has decided to eliminate some of the repetitious daily flights. On the
Perth-Sydney route, one airline originally had five daily flights (each at a different time) and the other
had six daily flights (each at a different time). Determine the number of ways:

1. four flights can be eliminated.

2. the first airline can eliminate two of its scheduled five flights.

3. the second airline can eliminate two of its scheduled six flights.


4. two flights can be eliminated from each airline.

Exercise 1.8: [wk01Q8, Solution, Schedule] Three boxes are numbered 1, 2 and 3. For k = 1, 2 and
3, box k contains k blue marbles and 5 − k red marbles. In a two-step experiment, a box is selected and
2 marbles are drawn from it without replacement. If the probability of selecting box k is proportional
to k, what is the probability that the two marbles drawn have different colours?
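A short exact computation (a sketch, not part of the original question) over the three boxes:

```python
from fractions import Fraction as F

total = F(0)
for k in (1, 2, 3):
    p_box = F(k, 6)            # selection probability proportional to k (1 + 2 + 3 = 6)
    blue, red = k, 5 - k       # box k holds k blue and 5 - k red marbles
    # two draws without replacement differ in colour: blue-then-red or red-then-blue
    p_diff = 2 * F(blue, 5) * F(red, 4)
    total += p_box * p_diff

print(total)   # 17/30
```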

1.1 Mathematical Methods


Exercise 1.9: [wk01Q9, Solution, Schedule] The probability function of a certain discrete random
variable on the non-negative integers satisfies the following:

Pr(0) = Pr(1)
Pr(k + 1) = Pr(k)/k for k = 1, 2, 3, . . ..

Find Pr(0).
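Numerically, the recursion pins down Pr(0) once the probabilities are normalised; a sketch (the closed form 1/(1 + e) is what the recursion implies) confirms the value:

```python
import math

# Unnormalised probabilities: start from Pr(0) = 1 and apply the recursion.
p = [1.0, 1.0]                  # Pr(0) = Pr(1)
for k in range(1, 60):
    p.append(p[k] / k)          # Pr(k + 1) = Pr(k) / k
p0 = 1.0 / sum(p)               # normalise so that the probabilities sum to one

print(p0, 1 / (1 + math.e))     # both ≈ 0.26894
```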

Exercise 1.10: [wk01Q10, Solution, Schedule] Consider X, a continuous random variable with density function:

f_X(x) = c e^{−x}, x > 1, and zero otherwise.

Find

1. all c such that f_X is a probability density function (so that X is a well-defined random variable), and

2. Pr(X < 3 | X > 2).

Exercise 1.11: [wk01Q11, Solution, Schedule] The distribution function for a discrete random variable X is given by:

F_X(x) =
    0,     if x < −1;
    1/3,   if −1 ≤ x < 2/3;
    1,     if x ≥ 2/3.

1. Specify the probability mass function pX (x).


2. Sketch the graphs of pX (x) and F X (x).

Exercise 1.12: [wk01Q12, Solution, Schedule] Let X be a random variable with density:

f_X(x) = (1/(σ√(2π))) exp[ −(1/2) ((x − μ)/σ)² ], for −∞ < x < ∞.

Here, X is called a normally distributed random variable.

1. Find an expression for the moment generating function M_X(t) of X.

2. Now define S(t) = log[M_X(t)]. Show that, in general,

   (d/dt) S(t) |_{t=0} = E[X]   and   (d²/dt²) S(t) |_{t=0} = Var(X).


3. Use the above result to prove that, with the normal density, we have

   E(X) = μ and Var(X) = σ².

4. What is the function S(t) called?
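A numerical sketch of part 2 (μ = 0.7 and σ = 1.3 are arbitrary illustrative values, not from the exercise): the mgf is computed by quadrature, and finite differences of S(t) = log M_X(t) recover the mean and variance.

```python
import math

MU, SIGMA = 0.7, 1.3   # illustrative parameter values

def mgf(t, n=4000, lo=-12.0, hi=12.0):
    """E[exp(tX)] for X ~ N(MU, SIGMA^2), via the trapezoid rule."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        pdf = math.exp(-0.5 * ((x - MU) / SIGMA) ** 2) / (SIGMA * math.sqrt(2 * math.pi))
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(t * x) * pdf
    return total * h

def S(t):
    return math.log(mgf(t))

eps = 1e-3
S1 = (S(eps) - S(-eps)) / (2 * eps)             # central difference for S'(0)
S2 = (S(eps) - 2 * S(0.0) + S(-eps)) / eps**2   # second difference for S''(0)
print(S1, S2)   # ≈ MU and ≈ SIGMA**2
```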

Exercise 1.13: [wk01Q13, Solution, Schedule] Let X be a random variable with parameters α, β, γ, and δ ∈ ℝ, and with the following moment generating function: M_X(t) = 1 + αt + βt² + γt³ + δt⁴.

1. How many distribution functions correspond to this m.g.f. for given values of the parameters?

2. Determine the first five non-central moments of X.

3. Determine the first five central moments of X.

4. Determine the mean, variance, skewness, and kurtosis of X.

5. Let X represent the claim sizes, i.e., a higher value is bad for the insurer. Insurers A and B_i each ask for a quote for reinsuring a tail risk (for example: the reinsurer makes a payment to the insurer if the loss is larger than $1 million). Based on the mean, variance, skewness and kurtosis, which of the two would receive a higher quote for reinsuring the risk, and why, if:

   i) A's parameters are: α = 1, β = 2, γ = 1, and δ = 1, and B1's parameters are: α = 1, β = 1, γ = 0.5384, and δ = 0.2606;

   ii) A's parameters are: α = 1, β = 2, γ = 1, and δ = 1, and B2's parameters are: α = 1, β = 2, γ = 2, and δ = 2;

   iii) A's parameters are: α = 1, β = 2, γ = 1, and δ = 1, and B3's parameters are: α = 1, β = 2, γ = 1, and δ = 1.625.

Exercise 1.14: [wk01Q14, Solution, Schedule] The probability density function for a continuous random variable X is given by:

f_X(x) =
    2/x³,  for x ≥ 1;
    0,     otherwise.

1. Determine a formula for the cumulative distribution function F X (x).

2. Determine the probability that X ≤ 4.

3. Sketch the graphs of fX (x) and F X (x).

Exercise 1.15: [wk01Q15, Solution, Schedule] Let X be a random variable with probability density function:

f_X(x) =
    (λ/2) e^{−λx},  if x ≥ 0;
    (λ/2) e^{λx},   if x < 0.

1. Verify that f_X(·) is a pdf.

2. Find expression for the cdf F X (x).

3. Find the moment generating function and the probability generating function of X.


4. Suppose λ = 1. Evaluate Pr(|X| < 3/4).
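For part 4 a simulation sketch (λ = 1, sampling the two-sided density by attaching a random sign to an exponential draw) can be checked against the exact value 1 − e^{−3/4}:

```python
import math, random

random.seed(0)
n, hits = 200_000, 0
for _ in range(n):
    # With lambda = 1 the density is (1/2)e^{-|x|}: an Exp(1) magnitude, random sign.
    x = random.expovariate(1.0) * random.choice((-1, 1))
    hits += abs(x) < 0.75

estimate = hits / n
print(estimate, 1 - math.exp(-0.75))   # both ≈ 0.5276
```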

Exercise 1.16: [wk01Q16, Solution, Schedule] Actuaries often model the age-at-death as a non-negative random variable X and define the force of mortality as follows:

μ(x) = lim_{h→0} [F_X(x + h) − F_X(x)] / [h (1 − F_X(x))],

where F_X(·) denotes the cdf of X.

1. Using this definition, prove that:

   F_X(x) = 1 − exp( −∫₀ˣ μ(z) dz ).

2. Show that for a non-negative random variable:

   E[X] = ∫₀^∞ [1 − F_X(z)] dz.

   Use this result to show that:

   E[X] = E[ 1/μ(X) ].
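The identity in part 2 can be sanity-checked by simulation for a lifetime with a non-constant force of mortality; the sketch below uses a Weibull law (shape 1.5, scale 1.5 — arbitrary illustrative values), whose hazard is (k/b)(x/b)^{k−1}:

```python
import math, random

random.seed(42)
k, b = 1.5, 1.5   # illustrative Weibull shape and scale

def hazard(x):
    # force of mortality of a Weibull(k, b) lifetime
    return (k / b) * (x / b) ** (k - 1)

n = 200_000
# inverse-cdf sampling: if E ~ Exp(1) then b * E**(1/k) is Weibull(k, b)
xs = [b * random.expovariate(1.0) ** (1 / k) for _ in range(n)]

mean_x = sum(xs) / n
mean_inv_hazard = sum(1 / hazard(x) for x in xs) / n
print(mean_x, mean_inv_hazard, b * math.gamma(1 + 1 / k))   # all ≈ 1.354
```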

Exercise 1.17: [wk01Q17, Solution, Schedule] A random variable X has a probability density function of the form:

f_X(x) = ax(1 − bx²), for 0 ≤ x ≤ 1, and zero otherwise,

where a and b are positive constants.

1. Determine the value of a in terms of b and show that b ≤ 1.

2. For the case b = 1, determine the mean and variance of X.

1.2 Univariate Distributions


Exercise 1.18: [wk02Q1, Solution, Schedule] For each of the following situations, specify the type
of distribution that best models the random variable X and give the parameters of the distribution
chosen (where possible):

1. This year, there are 100 students enrolled in an introductory actuarial studies course. For the
mid-session test for this course, the papers are marked by a team of tutors; however, a sample of
these papers is examined by the course professor for marking consistency. Experience suggests
that 1% of all papers will be improperly marked. The professor selects 10 papers at random
from the 100 papers and examines them for marking inconsistencies. X is the number of papers
in the sample that are improperly marked.

2. A standard drug has been known to be effective in 90% of the cases in which it is used. To
re-evaluate the effectiveness of this same drug, a clinical trial will be performed in which 20
patients have volunteered. X is the number of cases where the drug has been found effective.


3. An immunologist is studying blood disorders exhibited by people with rare blood types. It
is estimated that 10% of the population has the type of blood being investigated. Volunteers
whose blood type is unknown are tested until 100 people with the desired blood type are found.
X is the number of people tested who do not have the desired rare blood type.

4. Customers arrive at a fastfood restaurant independently and at random. During lunch hour,
when more customers than usual are expected, customers arrive at the fastfood restaurant
at the rate of two per minute on average. X is the number of people who arrive between 12:15
p.m. and 12:30 p.m.

5. A set of 25 multiple-choice questions was asked in an examination. It has been determined,
from experience, that the proportion of questions that are guessed and answered correctly
is 35%. X is the number of questions guessed and answered correctly by a particular student
who sat the examination.

Exercise 1.19: [wk02Q2, Solution, Schedule] For each of the following moment generating functions of discrete random variables X, identify the distribution and specify the associated parameters.

1. M_X(t) = e^t / (2 − e^t)

2. M_X(t) = ( (e^t + 1)/2 )³

3. M_X(t) = exp( (1/2) e^t − 1/2 )

4. M_X(t) = ( e^t / (2 − e^t) )⁴

5. M_X(t) = ( (3e^t + 1)/4 )⁵

Exercise 1.20: [wk02Q3, Solution, Schedule] Poisson approximation to the binomial. This exercise
is to show that binomial probabilities can be approximated using the Poisson probabilities, which
are generally easier to calculate. Let X ∼ Binomial(n, p) and Y ∼ Poisson(λ), where λ = np. The
approximation states that

Pr(X = x) ≈ Pr(Y = x),

for large n and small p. This can be proven using convergence of mgfs. Denote the respective mgfs
by M_X(t) and M_Y(t).

1. Prove that lim_{n→∞} M_X(t) = M_Y(t).

   Hint: use lim_{n→∞} (1 + x/n)ⁿ = exp(x) = lim_{h→0} (1 + hx)^{1/h}.

2. Another method to prove this approximation is as follows: First, establish that the Poisson
   distribution satisfies the relation

   Pr(Y = x) / Pr(Y = x − 1) = λ/x,  for x = 1, 2, . . . .

7
ACTL2131 Probability and Mathematical Statistics, S1 2016 Exercises

Second, a similar relation can be approximated for the binomial distribution:

   lim_{p→0} Pr(X = x) / Pr(X = x − 1) = np/x.

   Hint: first show that Pr(X = x)/Pr(X = x − 1) = ((n − x + 1)/x) · (p/(1 − p)), then take the limit
   p → 0 and use that lim_{p→0}(np) = λ. Then show that:

   Pr(Y = 0) ≈ Pr(X = 0),

   for large n.

3. A typesetter makes, on average, one error per 400 words typeset. A typical page contains
300 words. Use the Poisson approximation to the binomial to compute the probability that there
will be more than 3 errors in 10 pages.
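For part 3, 10 pages amount to 3 000 words, so λ = np = 3 000/400 = 7.5; a sketch comparing the Poisson approximation with the exact binomial tail:

```python
import math

n, p = 3000, 1 / 400          # 10 pages of 300 words, one error per 400 words
lam = n * p                   # Poisson mean 7.5

# Poisson approximation to Pr(more than 3 errors)
poisson_tail = 1 - sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(4))

# exact Binomial(3000, 1/400) tail for comparison
binom_tail = 1 - sum(math.comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(4))

print(poisson_tail, binom_tail)   # ≈ 0.9409 vs ≈ 0.9411
```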

Exercise 1.21: [wk02Q4, Solution, Schedule] An insurance company receives 200 claims per day on
the average. Claims arrive independently and at random at the company office. Of the claims, 95%
are for amounts less than $100 and are processed immediately; the remaining 5% are examined more
closely to verify their accuracy and eligibility.

1. Determine the probability of getting no claims over $100 in a given day.

2. Determine the probability of getting at most two claims over $100 in a given day.

3. How many claims for amounts less than $100 should this company expect to receive in a week
(5 business days)?
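A numerical sketch, assuming the two claim streams arise by independent thinning of the Poisson arrivals (so claims over $100 form a Poisson stream with mean 200 × 0.05 = 10 per day):

```python
import math

lam = 200 * 0.05   # daily claims over $100: Poisson with mean 10

p_none = math.exp(-lam)                                  # part 1
p_at_most_2 = math.exp(-lam) * (1 + lam + lam**2 / 2)    # part 2
weekly_small = 5 * 200 * 0.95                            # part 3: five business days

print(p_none, p_at_most_2, weekly_small)   # ≈ 4.54e-05, ≈ 0.00277, 950.0
```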

Exercise 1.22: [wk02Q5, Solution, Schedule] Let X have a Gamma(α, β) distribution.

1. Prove that the mgf of X can be expressed as:

   M_X(t) = ( β/(β − t) )^α,  for t < β.

2. Establish also that for any positive constant r:

   E[X^r] = Γ(r + α) / (β^r Γ(α)).

Exercise 1.23: [wk02Q6, Solution, Schedule] Suppose that you have $1 000 to invest for a year. You
are currently evaluating two investments: Investment A and Investment B, with annual rates of return,
respectively denoted by RA and RB. Assume:

RA ∼ Normal(0.05, 0.1) and RB ∼ Normal(0.10, 0.5).

1. Under Investment A, compute the probability that your investment will be below $1 000 in a
year.

2. Under Investment A, compute the probability that your investment will exceed $1 200 in a year.

3. Under Investment B, compute the probability that your investment will be below $1 000 in a year.


4. Under Investment B, compute the probability that your investment will exceed $1 200 in a year.

Exercise 1.24: [wk02Q7, Solution, Schedule] A city engineer has studied the frequency of accidents
at two busy intersections. He has determined that the time T in months between accidents at each
intersection has an exponential distribution. The parameters for these two distributions are 2 and 2.5.
Assume that the occurrence of accidents at these intersections is independent.

1. Determine the probability that there are no accidents at either intersection in the next month.

2. Determine the probability that there will be no accidents for at least one of the intersections in
the next month.

Exercise 1.25: [wk02Q8, Solution, Schedule] The Pareto distribution is very commonly used to
model certain insurance loss amounts. We say X has a Pareto distribution if its density can be
expressed as:

f_X(x) = α θ^α / x^{α+1},  for x > θ,

and zero otherwise.

1. Find expressions for the mean and variance of X.

2. Find an expression for the u-quantile of X. The quantile function is f(u) = F_X^{−1}(u); hence, one should solve u = F_X( F_X^{−1}(u) ).

3. What is then its median (i.e., u = 0.5)?

4. An insurance policy has a deductible¹ of $5. The random variable for the loss amount (before
deductible) on claims filed has a Pareto distribution with α = 3.5 and θ = 4. Find:

(a) the mean loss amount;


(b) the expected value of the amount of a single claim; and
(c) the variance of the amount of a single claim.
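Assuming the single-parameter Pareto form F_X(x) = 1 − (θ/x)^α for x > θ, parts 1-3 have closed forms; a sketch evaluating them at the part-4 parameters α = 3.5, θ = 4:

```python
alpha, theta = 3.5, 4.0   # the parameters given in part 4

# closed forms for the single-parameter Pareto, F(x) = 1 - (theta/x)**alpha, x > theta
mean = alpha * theta / (alpha - 1)              # finite only when alpha > 1
second_moment = alpha * theta**2 / (alpha - 2)  # E[X^2], finite only when alpha > 2
var = second_moment - mean**2

def quantile(u):
    # invert u = 1 - (theta/x)**alpha
    return theta * (1 - u) ** (-1 / alpha)

print(mean, var, quantile(0.5))   # 5.6, ≈ 5.973, median ≈ 4.876
```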

1.3 Joint and Multivariate Distributions


Exercise 1.26: [wk03Q4, Solution, Schedule] Let X and Y be two discrete random variables whose
joint probability function is given by:

Pr(X = x, Y = y)    x = 0    x = 1    x = 2    x = 3
       y = 1         0.05     0.20     0.15     0.05
       y = 2         0.20     0.15     0.12     0.08
Calculate:
1
A deductible is that the policy only makes no payment if the loss amount is smaller than the deductible; and the claim
amount equals the loss amount minus the deductible if the loss amount is larger than the deductible.


1. E [X]
2. E [Y]
3. E [X |Y = 1]
4. Var (Y |X = 3)
5. E [XY] and Cov(X,Y).
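All of these quantities follow mechanically from the table; a quick sketch iterating over the joint pmf:

```python
# joint pmf from the table: y-rows 1 and 2, x-columns 0..3
pmf = {(x, 1): p for x, p in zip(range(4), (0.05, 0.20, 0.15, 0.05))}
pmf.update({(x, 2): p for x, p in zip(range(4), (0.20, 0.15, 0.12, 0.08))})

EX  = sum(x * p for (x, y), p in pmf.items())
EY  = sum(y * p for (x, y), p in pmf.items())
EXY = sum(x * y * p for (x, y), p in pmf.items())
cov = EXY - EX * EY

# E[X | Y = 1]: renormalise the y = 1 row by Pr(Y = 1)
pY1 = sum(p for (x, y), p in pmf.items() if y == 1)
EX_given_Y1 = sum(x * p for (x, y), p in pmf.items() if y == 1) / pY1

print(EX, EY, EX_given_Y1, EXY, cov)   # 1.28, 1.55, ≈ 1.444, 1.91, -0.074
```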

Exercise 1.27: [wk03Q5, Solution, Schedule] Let X and Y be two discrete random variables whose
joint probability mass function is given by:

Pr(X = x, Y = y)    x = 1    x = 2    x = 3    x = 4
       y = 1         0.10     0.05     0.02     0.02
       y = 2         0.05     0.20     0.05     0.02
       y = 3         0.02     0.05     0.20     0.04
       y = 4         0.02     0.02     0.04     0.10

1. Find the marginal probability mass functions of X and Y.


2. Find the conditional probability mass functions of X given Y = 2 and of Y given X = 2.
3. Find E[XY] and Cov(X, Y).

Exercise 1.28: [wk03Q6, Solution, Schedule] Let X and Y have the joint density:

f_{X,Y}(x, y) = (6/7)(x + y)²,  for 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1,

and zero otherwise.

1. By integrating over the appropriate regions, find:


(a) Pr(X < Y)
(b) Pr(X + Y ≤ 1)
(c) Pr(X ≤ 1/2)
2. Find the marginal densities of X and Y.
3. Find the two conditional densities.

Exercise 1.29: [wk03Q8, Solution, Schedule] Let x̄_n and s²_n denote the sample mean and variance
for the sample x_1, x_2, . . . , x_n. Let x̄_{n+1} and s²_{n+1} denote these quantities when an additional observation
x_{n+1} is added to the sample.

1. Show how x̄_{n+1} can be computed from x̄_n and x_{n+1}.

2. Show that:

   s²_{n+1} = ((n − 1)/n) s²_n + (1/(n + 1)) (x_{n+1} − x̄_n)²,

   so that s²_{n+1} can be computed from x̄_n, x_{n+1}, and s²_n.
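The two update formulas can be verified against direct recomputation on simulated data; a sketch using the statistics module (which also uses the n − 1 divisor for the sample variance):

```python
import random
from statistics import mean, variance

random.seed(3)
xs = [random.gauss(0, 1) for _ in range(41)]
n = 40
xbar_n, s2_n = mean(xs[:n]), variance(xs[:n])   # sample stats of the first n points
x_new = xs[n]

# one-step updates from the exercise
xbar_next = (n * xbar_n + x_new) / (n + 1)
s2_next = ((n - 1) / n) * s2_n + (x_new - xbar_n) ** 2 / (n + 1)

# compare with recomputing from scratch on all n + 1 points
print(abs(xbar_next - mean(xs)), abs(s2_next - variance(xs)))   # both ≈ 0
```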


Exercise 1.30: [wk03Q9, Solution, Schedule] Suppose X and Y are two continuous random variables.
Prove that:

E[Y] = ∫_{−∞}^{∞} E[Y | X = x] f_X(x) dx.

Exercise 1.31: [wk03Q10, Solution, Schedule] You are given:

X1 ∼ Uniform[0, 1];

conditional on X1, X2 ∼ Uniform[0, X1].

1. Find the joint distribution function of X1 and X2 .

2. Find the marginal distribution function of X2 .

Exercise 1.32: [wk03Q11, Solution, Schedule] Suppose that the joint distribution function of X1 and
X2 is given by

F_{X1,X2}(x1, x2) =
    0,                                  if x1 < 0 or x2 < 0;
    x1 x2 [1 + (1/2)(1 − x1)(1 − x2)],  if 0 ≤ x1 ≤ 1 and 0 ≤ x2 ≤ 1;
    F_{X1}(x1),                         if x2 > 1;
    F_{X2}(x2),                         if x1 > 1.

1. Find the joint density.

2. Find the marginal distribution functions of X1 and X2 . Can you recognise these distributions?

3. Find the correlation coefficient of X1 and X2 .

Exercise 1.33: [wk03Q12, Solution, Schedule] We have the joint probability density function:

    f_{X_1,X_2}(x_1, x_2) = { k(1 − x_2),  if 0 ≤ x_1 ≤ x_2 ≤ 1;
                            { 0,           else.

1. Determine the value k for which this function is a density.

2. Determine the region of integration for determining Pr(X_1 ≤ 3/4, X_2 ≥ 1/2).

3. Calculate Pr(X_1 ≤ 3/4, X_2 ≥ 1/2).

1.4 Sampling and Summarising Data


Exercise 1.34: [wk03Q1, Solution, Schedule] The claim amounts (in dollars, to the nearest $10) for
a sample of 24 recent claims for storm damage to private homes in a particular town are as follows:2

2 710 670 2 380 4 670 1 220 6 780 1 590 3 110


960 8 230 3 320 3 380 2 490 1 940 3 710 4 630
4 270 4 210 1 880 3 880 1 490 5 400 2 430 850
² Modified Institute of Actuaries exam question.


1. Construct a stem-and-leaf display of these claim amounts.

2. Find the mean and median of the claim amounts. What can you say about the skewness of the
distribution?

3. Find the interquartile range of the claim amounts.

4. Evaluate F_24(1 000), where F_24(·) denotes the empirical cumulative distribution function.

Exercise 1.35: [wk03Q2, Solution, Schedule] Data were collected on 100 consecutive days for the
number of claims, X, arising from a group of insurance policies. This resulted in the following
frequency distribution:

observed claims from policy (x): 0 1 2 3 4 5


frequency: 14 25 26 18 12 5

Calculate the following sample statistics for these data:

1. mode

2. median

3. interquartile range

4. Suppose the average value for 5 claims or more is 7.5. Calculate the sample mean.

Exercise 1.36: [wk03Q3, Solution, Schedule] For a set of 32 observations, you are given:
    Σ_{k=1}^{32} x_k = 13 337.6   and   Σ_{k=1}^{32} x_k² = 5 667 388.7.

The largest of the observations is 605. Suppose you are interested in measuring the impact of the
largest observation on the mean and standard deviation.

1. Calculate the sample mean and the sample standard deviation.

2. Calculate the sample mean and the sample standard deviation, with the largest observation
deleted.

3. What is the percentage change in the mean?

4. What is the percentage change in the standard deviation?

Exercise 1.37: [wk03Q7, Solution, Schedule] Two independent measurements, X and Y, are taken
of a quantity μ. We are given the means are equal, E[X] = E[Y] = μ, but the variances σ²_X and σ²_Y
are not equal. The two measurements are then combined by means of a weighted average to give:

    Z = αX + (1 − α)Y,

where α is a constant between 0 and 1, i.e., 0 ≤ α ≤ 1.

1. Show that E[Z] = μ.


2. Find α in terms of σ_X and σ_Y to minimise Var(Z).

3. Under what circumstances is it better to use the average (X + Y)/2 than either X or Y alone to
   determine μ? Hint: a smaller variance would give a better estimate of the population mean.

4. Now, suppose X and Y are instead not independent and have covariance:

       Cov(X, Y) = σ_{XY}.

   Find α in terms of σ_X, σ_Y and σ_{XY} to minimise Var(Z).

1.5 Functions of Random Variables


Exercise 1.38: [wk04Q1, Solution, Schedule] Compound Distribution. In a portfolio of insurance
policies, the amount of each claim is a random variable X_k, which has an exponential distribution with
mean 1/β, for k = 1, 2, . . .. The number of claims N in a single period is also a random variable, with
a Poisson(λ) distribution. The total claims in the portfolio during the period are then given by:

    S = X_1 + X_2 + . . . + X_N.

1. Find the mean of S, E[S].

2. Find the variance of S, Var(S).

3. Find the moment generating function of S, M_S(t).

Exercise 1.39: [wk04Q2, Solution, Schedule] Let X_1, X_2 and X_3 be i.i.d. with common density:

    f_X(x) = e^{−x},  x ≥ 0.

1. Find the joint density of X_(1) and X_(3).

   Hint: First find the joint density of X_(1), X_(2), and X_(3), i.e., f_{X_(1),X_(2),X_(3)}(y_1, y_2, y_3) = . . .
   Second, find the distribution of only X_(1) and X_(3) by integrating over the other random
   variable (similar to finding a marginal distribution). Be careful with the limits for X_(2): what are
   the lowest and highest values it can take?

2. Compute E[X_(1)] and E[X_(3)].

3. Compute Var(X_(1)) and Var(X_(3)).

4. Compute E[X_(1) X_(3)] and the correlation coefficient ρ(X_(1), X_(3)).

Exercise 1.40: [wk04Q3, Solution, Schedule] Let X ∼ Gamma(α, 1) and Y ∼ Gamma(β, 1) be inde-
pendent random variables. Define U = X + Y and V = X/(X + Y).

1. Use the moment generating function technique to find the distribution of U.

2. Use the Jacobian transformation technique to find the joint distribution of U and V.

3. Show that U and V are independent.

   Hint: You do not need to do any additional calculations to show this.


4. Find the marginals of U and V using their joint distribution derived in part 2. Demonstrate that
   the marginal of U is consistent with that derived in part 1.

5. Use parts 3 and 4 to find the mean and variance of V.

Exercise 1.41: [wk04Q4, Solution, Schedule] Let X_1, X_2 and X_3 be three independent and identically
distributed Exp(1) random variables. Find:

1. E[X_(3) | X_(1) = x]

2. E[X_(1) | X_(3) = x]

3. f_{X_(1),X_(3)}(x, y)

4. f_R(r), where R = X_(3) − X_(1) is the range.

Exercise 1.42: [wk04Q5, Solution, Schedule] Let X_1 and X_2 be i.i.d. (independent and identically
distributed) N(0, 1) random variables.

1. Show that X_1 + X_2 has a normal distribution and specify its parameters.

2. Show that X_1 − X_2 has the same distribution as X_1 + X_2.

3. Suppose X_1 and X_2 are no longer independent but each still has a N(0, 1) distribution. Will X_1 + X_2
   and X_1 − X_2 still be independent?

4. Let X be Gamma(α, β) distributed.

   (a) Find the p.d.f. of an Inverse Gamma distribution, i.e., find the p.d.f. of Y = 1/X.
   (b) Find the c.d.f. of the inverse gamma distribution as a function of the c.d.f. of the gamma
       distribution.

Exercise 1.43: [wk05Q14, Solution, Schedule] Using the p.d.f. of a chi-squared distribution with
one degree of freedom:

    f_Y(y) = exp(−y/2) / √(2πy),  if y > 0,

and zero otherwise, prove that the moment generating function of Y is given by:

    M_Y(t) = (1 − 2t)^{−1/2}.

Exercise 1.44: [wk05Q15, Solution, Schedule] Prove that:

    t_{n−1} →_d N(0, 1)  as n → ∞,

where you might use that:

    Γ((n + 1)/2) / Γ(n/2) ∼ √(n/2)  as n → ∞.


Exercise 1.45: [wk05Q16, Solution, Schedule] Prove that the p.d.f. of a Snedecor F distribution,
given by the transformation:

    F = (U/n_1) / (V/n_2),

where U ∼ χ²(n_1) and V ∼ χ²(n_2), is given by:

    f_F(f) = [Γ((n_1 + n_2)/2) / (Γ(n_1/2) Γ(n_2/2))] · n_1^{n_1/2} n_2^{n_2/2} · f^{n_1/2 − 1} / (n_2 + f·n_1)^{(n_1+n_2)/2}.

Exercise 1.46: [wk04Q6, Solution, Schedule]

I Let Z_1 and Z_2 be two independent N(0, 1) random variables and let V_1 ∼ χ²(r_1) and V_2 ∼
  χ²(r_2) be two independent chi-squared random variables. Which of the following random vari-
  ables has a t-distribution with (r_1 + r_2) degrees of freedom?

  (A) (Z_1 + Z_2) / √((V_1 + V_2)/(r_1 + r_2))

  (B) (Z_1 + Z_2) / √((V_1/r_1) + (V_2/r_2))

  (C) (Z_1 + Z_2) / √(2(V_1 + V_2)/(r_1 + r_2))

  (D) (Z_1 − Z_2) / √((V_1 + V_2)/(r_1 + r_2))

  (E) Z_1/√(V_1/r_1) + Z_2/√(V_2/r_2)
II Let Z_1 and Z_2 be two independent standard normal random variables. Which of the following
   combinations of the two is also a standard normal random variable?

   (A) (Z_1 + Z_2)/2
   (B) Z_1 + Z_2
   (C) Z_1/Z_2
   (D) Z_1 · Z_2
   (E) (Z_1 − Z_2)/√2

III Let Z_1 ∼ N(0, 1) and Z_2 ∼ N(0, 1) be two random variables with correlation coefficient

        ρ(Z_1, Z_2) = ρ,

    where −1 ≤ ρ ≤ 1. Let V be a χ²(r) random variable independent of Z_1 and Z_2.

    Which of the following has a t-distribution with r degrees of freedom?

    i.   √r · Z_1 · V^{−1/2}

    ii.  √r · Z_2 · V^{−1/2}

    iii. √(r/2) · (Z_1 + Z_2) · V^{−1/2}

    iv.  √(r/(2(ρ + 1))) · (Z_1 + Z_2) · V^{−1/2}


(A) All but i


(B) All but ii
(C) All but iii
(D) All but iv
(E) All

IV Let X_1, X_2, . . . , X_n be i.i.d. (independent and identically distributed) Exp(λ) random variables
   (m.g.f.: M_{X_i}(t) = (1 − t/λ)^{−1}). Which of the following describes the distribution of the sample
   mean:

       X̄ = (1/n) Σ_{k=1}^{n} X_k ?

   (A) X̄ ∼ Exp(λ)
   (B) X̄ ∼ Exp(nλ)
   (C) X̄ ∼ Exp(λ/n)
   (D) X̄ ∼ Gamma(n, n/λ)
   (E) X̄ ∼ Gamma(n, nλ)

   Note: m.g.f. of a Gamma(α, β) random variable: M_{X_i}(t) = (1 − t/β)^{−α}.
V Let X_1, . . . , X_n be n independent and identically distributed Poisson random variables with mean
  λ. Describe the distribution of the sum of these random variables:

      S = Σ_{k=1}^{n} X_k.

  (A) S ∼ Poisson(1)
  (B) S ∼ Poisson(λ)
  (C) S ∼ Poisson(λ/n)
  (D) S ∼ Poisson(nλ)
  (E) Cannot be determined from the given information
VI Suppose X_1, X_2, . . . , X_20 are twenty independent random variables, identically distributed
   as Exp(2). Determine Pr(X_(20) > 1).

   (A) Pr(X_(20) > 1) = 0.94
   (B) Pr(X_(20) > 1) = 0.95
   (C) Pr(X_(20) > 1) = 0.96
   (D) Pr(X_(20) > 1) = 0.97
   (E) Pr(X_(20) > 1) = 0.98
VII Let X_1, X_2, . . . , X_n be n i.i.d. (independent and identically distributed) random variables each
    with density:

        f_X(x) = 2x,  for 0 < x < 1,

    and zero otherwise.

    Determine E[X_(n)].

(A) E[X_(n)] = n/(n + 2)

(B) E[X_(n)] = n/(n + 1)

(C) E[X_(n)] = 1

(D) E[X_(n)] = 2n/(2n + 1)

(E) E[X_(n)] = 2n/(n + 1)

VIII In a 100-meter Olympic race, the running times are considered to be uniformly distributed
between 8.5 and 10.5 seconds. Suppose there are 8 competitors in the finals. The current world
record is 9.9 seconds.
Determine the probability that the loser of the race will not break the world record.

(A) 0.54
(B) 0.64
(C) 0.74
(D) 0.84
(E) 0.94


Solutions
Solution 1.1: [wk01Q1, Exercise, Schedule] Urn 1: 1 Black (B), 1 Gold (G) and Urn 2: 1 White
(W), 1 Gold (G). Define B = black ball, G = gold ball, W = white ball.

1. Ω = {BW, BG, GW, GG}

2. F = { ∅, {BW}, {BG}, {GW}, {GG}, {BW, BG}, {BW, GW}, {BW, GG},
         {BG, GW}, {BG, GG}, {GW, GG}, {BW, BG, GW}, {BW, BG, GG},
         {BW, GW, GG}, {BG, GW, GG}, {BW, BG, GW, GG} }

3. Let E be the event of getting the same color for both balls. Then E = {GG} and
   Pr(E) = Pr(Urn 1 = G ∩ Urn 2 = G) =* Pr(Urn 1 = G) · Pr(Urn 2 = G) = (1/2) · (1/2) = 1/4,
   * using independence.

Solution 1.2: [wk01Q2, Exercise, Schedule]

1. Define R = red ball, G = gold ball, S = silver ball.

   Draw without replacement: use combinations. Possible combinations of RRR (n = 49, r = 3):
   C(49, 3); total combinations (n = 100, r = 3): C(100, 3).

       Pr(RRR) = C(49, 3) / C(100, 3) = (49 · 48 · 47) / (100 · 99 · 98) = 94/825 = 0.1139.

2. Pr(RGS) = (49/100) · (34/99) · (17/98) = 289/9900 = 0.0292, using the multiplication rule.

3. Let A = 3rd ball is silver and B = first 2 are red and gold. Thus,

       Pr(A | B) = Pr(A ∩ B) / Pr(B) = Pr(RGS ∪ GRS) / Pr(RG· ∪ GR·),

   where
       Pr(RGS ∪ GRS) = 2! · (49/100) · (34/99) · (17/98)
   and
       Pr(RG· ∪ GR·) = 2! · (49/100) · (34/99) · (98/98).

   Therefore, we have the required probability:

       Pr(A | B) = 17/98 = 0.1735.

4. Let C = first 2 are red. Thus,

       Pr(C | A) = Pr(A ∩ C) / Pr(A) = Pr(RRS) / Pr(A),

   where
       Pr(RRS) = (49 · 48 · 17) / (100 · 99 · 98),

   and note that for the event A, you can have either 0, 1, or 2 silver balls in the first two, so that

       Pr(A) = (83 · 82 · 17)/(100 · 99 · 98) + 2! · (83 · 17 · 16)/(100 · 99 · 98) + (17 · 16 · 15)/(100 · 99 · 98).

   After simplifying, the required probability is:

       Pr(C | A) = (49 · 48) / ((83 · 82) + 2 · (83 · 16) + (16 · 15)) = 0.2424.
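The four probabilities above can be cross-checked with exact rational arithmetic. A minimal sketch, assuming the stated composition of 49 red, 34 gold and 17 silver balls:

```python
from fractions import Fraction as F
from math import comb

# draws without replacement from 49 red, 34 gold, 17 silver (100 balls)
p_rrr = F(comb(49, 3), comb(100, 3))
p_rgs = F(49, 100) * F(34, 99) * F(17, 98)
p_a_given_b = F(17, 98)  # third ball silver, given first two are red and gold
p_c_given_a = F(49 * 48, 83 * 82 + 2 * 83 * 16 + 16 * 15)

assert p_rrr == F(94, 825)
assert p_rgs == F(289, 9900)
assert abs(float(p_a_given_b) - 0.1735) < 5e-4
assert abs(float(p_c_given_a) - 0.2424) < 5e-4
```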


Solution 1.3: [wk01Q3, Exercise, Schedule] Let A and B be two independent events so that
Pr(A ∩ B) = Pr(A) Pr(B), Pr(B|A) = Pr(B), and Pr(A|B) = Pr(A).

1. To show A and B^C are independent, we have:

       Pr(A ∩ B^C) = Pr(B^C | A) Pr(A)
                   = (1 − Pr(B | A)) Pr(A)      [Pr(B | A) = Pr(B)]
                   = Pr(B^C) Pr(A),

   thus independent.

2. To show A^C and B are independent, we have:

       Pr(A^C ∩ B) = Pr(A^C | B) Pr(B)
                   = (1 − Pr(A | B)) Pr(B)      [Pr(A | B) = Pr(A)]
                   = Pr(A^C) Pr(B),

   thus independent.

3. It then becomes straightforward to show A^C and B^C are independent. Given that A and B are
   independent, we know from part (b) that A^C and B are also independent. Applying (a), then A^C
   and B^C must also be independent.

Solution 1.4: [wk01Q4, Exercise, Schedule]

1. Let A and B be mutually exclusive, i.e., A ∩ B = ∅ and Pr(A ∩ B) = 0. Suppose they are also
   independent. Then
       Pr(A ∩ B) = Pr(A) Pr(B) = 0.
   Therefore, either Pr(A) = 0 or Pr(B) = 0. But both Pr(A) > 0 and Pr(B) > 0 by assumption.
   This is a contradiction, so A and B cannot be independent.

2. Now suppose A and B are independent, i.e., Pr(A ∩ B) = Pr(A) Pr(B). Suppose they are also
   mutually exclusive. Then Pr(A ∩ B) = 0, which implies Pr(A) Pr(B) = 0, and following a
   similar argument to the above, this cannot be true. Therefore they cannot be mutually exclusive.

Solution 1.5: [wk01Q5, Exercise, Schedule] There are a total of 6 · 6 = 36 possible outcomes.
We have that: E_1 = {(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6)},
E_2 = {(1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1), (2, 6), (3, 5), (4, 4), (5, 3), (6, 2), (3, 6), (4, 5), (5, 4), (6, 3), (4, 6), (5, 5), (6, 4)},
and
E_3 = {(1, 1), (1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1), (2, 6), (3, 5), (4, 4), (5, 3), (6, 2)}.
Thus, by counting from these possible outcomes, we see that:

    Pr(E_1) = 6/36 = 1/6,
    Pr(E_2) = 18/36 = 1/2,
    Pr(E_3) = 12/36 = 1/3.


1. Note that
       Pr(E_1 ∩ E_2 ∩ E_3) = Pr(double and sum is 8) = Pr((4, 4)) = 1/36
   and
       Pr(E_1) Pr(E_2) Pr(E_3) = (1/6) · (1/2) · (1/3) = 1/36.
   Thus, the product rule holds for the three events taken together.

2. However,
       Pr(E_1 ∩ E_2) = Pr(double and sum is 8 or 10) = 1/18,
   which is not equal to:
       Pr(E_1) Pr(E_2) = (1/6) · (1/2) = 1/12.

3. Note that:
       Pr(E_2 ∩ E_3) = Pr(sum is 7 or 8) = 11/36,
   which is not equal to:
       Pr(E_2) Pr(E_3) = (1/2) · (1/3) = 1/6.

4. Consider:
       Pr(E_1 ∩ E_3) = Pr(double and sum is 2 or 8) = 2/36 = 1/18
   and note that
       Pr(E_1) Pr(E_3) = (1/6) · (1/3) = 1/18.
   Therefore, they are independent.
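These counts can be verified by brute-force enumeration of all 36 rolls. A sketch, with the events read off the listed outcomes (E_1 = doubles, E_2 = sum between 7 and 10, E_3 = sum in {2, 7, 8}):

```python
from itertools import product
from fractions import Fraction as F

outcomes = list(product(range(1, 7), repeat=2))      # 36 equally likely rolls

E1 = {o for o in outcomes if o[0] == o[1]}           # doubles
E2 = {o for o in outcomes if 7 <= sum(o) <= 10}      # sum is 7, 8, 9 or 10
E3 = {o for o in outcomes if sum(o) in (2, 7, 8)}    # sum is 2, 7 or 8

def pr(event):
    return F(len(event), len(outcomes))

assert pr(E1) == F(1, 6) and pr(E2) == F(1, 2) and pr(E3) == F(1, 3)
# the triple product rule holds ...
assert pr(E1 & E2 & E3) == pr(E1) * pr(E2) * pr(E3)
# ... but E1 and E2 are not pairwise independent
assert pr(E1 & E2) != pr(E1) * pr(E2)
```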

Solution 1.6: [wk01Q6, Exercise, Schedule] Let A = student A is pardoned, B = student B is
pardoned, C = student C is pardoned, and Z = lecturer says B is not pardoned.

    Pr(Z) =* Pr(Z ∩ A) + Pr(Z ∩ B) + Pr(Z ∩ C)
         =** Pr(A) Pr(Z|A) + Pr(Z ∩ B) + Pr(C) Pr(Z|C)
          = (1/3) · (1/2) + 0 + (1/3) · 1 = 1/2.
* using law of total probability, ** using multiplication rule.

1. Thus, using the lecturer's reasoning,

       Pr(A | Z) = Pr(lecturer says B is not pardoned and A is pardoned) / Pr(Z)
                 = (1/6) / (1/2) = 1/3.

   The lecturer's reasoning is clearly justified, as the additional information produces no change in the
   probability.

2. However, A falsely interprets the event Z as equal to the event B^C (event B is not pardoned) and
   calculates:

       Pr(A | B^C) = Pr(A ∩ B^C) / Pr(B^C) = (1/3) / (2/3) = 1/2.


   Student A's reasoning is not justified, i.e., the event Z carries more information than the event
   B^C. The lecturer does provide extra information on the probability that event C will happen, i.e.,
   Pr(C | Z) = Pr(C ∩ Z) / Pr(Z) = (1/3) / (1/2) = 2/3.
   Using Σ_x p_X(x) = 1, Pr(C | Z) = 2/3 and Pr(B | Z) = 0, we have that Pr(A | Z) = 1/3.

Solution 1.7: [wk01Q7, Exercise, Schedule]

1. Use combinations. We have n = 11, r = 4: C(11, 4) = 330 ways to eliminate 4 flights.

2. Use combinations. We have n = 5, r = 2: C(5, 2) = 10 ways to eliminate 2 flights for the
   first airline.

3. Use combinations. We have n = 6, r = 2: C(6, 2) = 15 ways to eliminate 2 flights for the
   second airline.

4. Use the multiplication rule. S_1 = number of ways the first airline can eliminate 2 flights, with
   n_1 = C(5, 2); S_2 = number of ways the second airline can eliminate 2 flights, with n_2 = C(6, 2).
   There are n_1 · n_2 = C(5, 2) · C(6, 2) = 150 ways to eliminate 2 flights for each airline.

Solution 1.8: [wk01Q8, Exercise, Schedule] Define D = 2 marbles have different colors, B_1 =
Box 1 is selected, B_2 = Box 2 is selected, B_3 = Box 3 is selected. Let p be the probability that
box 1 is selected. Then p + 2p + 3p = 1, thus p = 1/6. The required probability is:

    Pr(D) = Pr(D|B_1) Pr(B_1) + Pr(D|B_2) Pr(B_2) + Pr(D|B_3) Pr(B_3)
          = [(1 · 4)/C(5, 2)] · (1/6) + [(2 · 3)/C(5, 2)] · (2/6) + [(3 · 2)/C(5, 2)] · (3/6)
          = 17/30.
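As a numerical cross-check of the value 17/30 (a sketch; the box compositions 1+4, 2+3 and 3+2 marbles of the two colors are assumptions inferred from the fractions in the displayed sum):

```python
from fractions import Fraction as F
from math import comb

# box k is chosen with probability k/6; each box holds 5 marbles,
# so there are C(5, 2) = 10 equally likely unordered pairs per box
boxes = {1: (1, 4), 2: (2, 3), 3: (3, 2)}  # assumed (color-a, color-b) counts

p_diff = sum(F(k, 6) * F(a * b, comb(a + b, 2)) for k, (a, b) in boxes.items())
assert p_diff == F(17, 30)
```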

Solution 1.9: [wk01Q9, Exercise, Schedule] We know: Pr(i) ≥ 0 for i = 0, 1, 2, . . . and Σ_i Pr(i) = 1.
Rewriting the p.m.f. gives: Pr(0) = Pr(1) = Pr(2); Pr(3) = (1/2) Pr(2) = (1/2) Pr(0); Pr(4) =
(1/3) Pr(3) = (1/3!) Pr(0).

Hence, we have: Pr(k) = (1/(k − 1)) Pr(k − 1) = (1/(k − 1)!) Pr(0) for k ≥ 1.

Then we rewrite:

    Σ_{i=0}^{∞} Pr(i) = Pr(0) + Σ_{i=1}^{∞} Pr(i) = Pr(0) + Σ_{i=1}^{∞} (1/(i − 1)!) Pr(0)
                      = (1 + Σ_{i=1}^{∞} 1/(i − 1)!) Pr(0) = 1.

Using the series expansion for e^x at x = 1, we have Pr(0) = 1/(1 + e).
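A quick numerical sanity check that the p.m.f. with Pr(0) = 1/(1 + e) sums to one (a sketch; the series is truncated, which is harmless since the factorial terms vanish rapidly):

```python
from math import e, factorial

p0 = 1 / (1 + e)                          # Pr(0) = 1/(1+e)
pk = lambda k: p0 / factorial(k - 1)      # Pr(k) for k >= 1

total = p0 + sum(pk(k) for k in range(1, 60))
assert abs(total - 1) < 1e-12
assert pk(1) == p0 and pk(2) == p0        # Pr(0) = Pr(1) = Pr(2)
```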

Solution 1.10: [wk01Q10, Exercise, Schedule] X is a continuous random variable with density func-
tion:
    f_X(x) = c e^{−x},  x > 1,
and zero otherwise.

1. To prove X is a random variable, the following two conditions must be satisfied: 1) ∫_{−∞}^{∞} f_X(x) dx =
   1 and 2) f_X(x) ≥ 0 for all x ∈ ℝ.

       ∫_{−∞}^{∞} f_X(x) dx = ∫_1^{∞} c e^{−x} dx = [−c e^{−x}]_1^{∞} = c e^{−1} = 1.

   Thus for c = e the first condition holds.

   For c = e we have: f_X(x) = e^{−x+1} ≥ 0 for x > 1 and zero otherwise, hence the
   second condition is also satisfied. Thus c = e.

2. Pr(X < 3 | X > 2) = Pr(X < 3 ∩ X > 2) / Pr(X > 2) = (∫_2^3 c e^{−x} dx) / (∫_2^{∞} c e^{−x} dx)
   = (∫_2^3 e^{−x} dx) / (∫_2^{∞} e^{−x} dx) = (e^{−2} − e^{−3}) / e^{−2} = 1 − e^{−1} = (e − 1)/e.
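The normalising constant and the conditional probability can both be confirmed by numeric integration. A minimal sketch (the midpoint rule and the truncation point 40 are implementation choices, not part of the solution):

```python
from math import e, exp

def integrate(f, a, b, n=200_000):
    # simple composite midpoint rule
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: e * exp(-x)                      # density with c = e on (1, inf)

assert abs(integrate(f, 1, 40) - 1) < 1e-6     # density integrates to 1
p = integrate(f, 2, 3) / integrate(f, 2, 40)   # Pr(X < 3 | X > 2)
assert abs(p - (1 - exp(-1))) < 1e-6
```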

Solution 1.11: [wk01Q11, Exercise, Schedule]

1.
    p_X(x) = { 1/3,  if x = 1;
             { 2/3,  if x = 0;
             { 0,    otherwise.

2. Graph of p_X: (figure not reproduced in this extract.)

Solution 1.12: [wk01Q12, Exercise, Schedule] We are given a normally distributed random variable.

1. M_X(t) = e^{μt + σ²t²/2} (shown in Module 1.2 video lecture).

2. Since S(t) = log[M_X(t)], we have

       (d/dt) S(t) = M_X'(t) / M_X(t)   ⟹   (d/dt) S(t)|_{t=0} = M_X'(0) / M_X(0) =* E(X),

   * using M_X(0) = 1 and M_X'(0) = E[X],

   and (using the quotient rule for derivatives)

       (d²/dt²) S(t) = [M_X(t) M_X''(t) − (M_X'(t))²] / [M_X(t)]²,

   so

       (d²/dt²) S(t)|_{t=0} = [M_X(0) M_X''(0) − (M_X'(0))²] / [M_X(0)]² =** E[X²] − [E(X)]² = Var(X).

   ** using M_X(0) = 1, M_X'(0) = E[X], and M_X''(0) = E[X²].

  
3. Thus, S (t) = log (MX (t)) = log exp + 1/22 t2 = t + 12 2 t2 implies S 0 (t) = + 2 t and
S 00 (t) = 2 so that
S 0 (0) = and S 00 (0) = 2
and the result E [X] = and V (X) = 2 immediately follows.
4. This function is called the cumulant generating function of X.

Solution 1.13: [wk01Q13, Exercise, Schedule]

1. By the theorem explained in the lecture notes, we know that every m.g.f. corresponds to only
   one distribution.

2. First, we determine the first five derivatives of M_X(t) with respect to t:

       M_X^(1)(t) = μ + 2t + 3t² + 4t³
       M_X^(2)(t) = 2 + 6t + 12t²
       M_X^(3)(t) = 6 + 24t
       M_X^(4)(t) = 24
       M_X^(5)(t) = 0

   Next, we can easily derive the non-central moments:

       E[X]  = M_X^(1)(0) = μ
       E[X²] = M_X^(2)(0) = 2
       E[X³] = M_X^(3)(0) = 6
       E[X⁴] = M_X^(4)(0) = 24
       E[X⁵] = M_X^(5)(0) = 0

3. The central moments can easily be determined using the non-central moments:

       E[X − μ] = μ − μ = 0

       E[(X − μ)²] = E[X² − 2Xμ + μ²]
                   = E[X²] − 2E[X]μ + μ²
                   = 2 − μ²

       E[(X − μ)³] =* E[X³ − 3X²μ + 3Xμ² − μ³]
                   = E[X³] − 3E[X²]μ + 3E[X]μ² − μ³
                   = 6 − 6μ + 2μ³

       E[(X − μ)⁴] =* E[X⁴ − 4X³μ + 6X²μ² − 4Xμ³ + μ⁴]
                   = E[X⁴] − 4E[X³]μ + 6E[X²]μ² − 4E[X]μ³ + μ⁴
                   = 24 − 24μ + 12μ² − 3μ⁴

       E[(X − μ)⁵] =* E[X⁵ − 5X⁴μ + 10X³μ² − 10X²μ³ + 5Xμ⁴ − μ⁵]
                   = E[X⁵] − 5E[X⁴]μ + 10E[X³]μ² − 10E[X²]μ³ + 5E[X]μ⁴ − μ⁵
                   = −120μ + 60μ² − 20μ³ + 4μ⁵


   * using the binomial expansion: (a + b)^n = a^n + C(n, 1) a^{n−1} b + C(n, 2) a^{n−2} b² + . . . + b^n,
   with a = X, b = −μ, and n = 2, 3, 4, or 5.

4. Given the central moments we have:

   - Mean: μ = E[X] = μ;
   - Variance: σ² = E[(X − μ)²] = 2 − μ²;
   - Skewness: γ = E[(X − μ)³] / (E[(X − μ)²])^{3/2} = (6 − 6μ + 2μ³) / (2 − μ²)^{3/2};
   - (Excess) Kurtosis: κ = E[(X − μ)⁴] / (E[(X − μ)²])² − 3 = (24 − 24μ + 12μ² − 3μ⁴) / (2 − μ²)² − 3.

5. First, we calculate the mean, variance, skewness, and kurtosis for these four sets of parameters.
   We have that:

   - for A we have: Mean = 1, Variance = 3, skewness = −0.7698, and (Excess) Kurtosis = −2/3;
   - for B1 we have: Mean = 1, Variance = 1, skewness = −0.7698, and (Excess) Kurtosis = −2/3;
   - for B2 we have: Mean = 1, Variance = 3, skewness = 0.3849, and (Excess) Kurtosis = −2/3;
   - for B3 we have: Mean = 1, Variance = 3, skewness = −0.7698, and (Excess) Kurtosis = 1.

   Comparing the distributions we have:

   i) We have a smaller variance for insurer B1 than for A, and the same mean, skewness, and
      kurtosis. This implies that a large claim for insurer A is more likely than for insurer B1
      (i.e., more variability in the claim size for insurer A), hence the price for reinsuring this
      risk is larger for insurer A than for insurer B1.

   ii) We have a smaller skewness for insurer A than for B2, and the same mean, variance, and
       kurtosis. The negative skewness of insurer A indicates that the probability of a claim
       larger than the mean is more than 50%. This implies that a large claim for insurer A is
       more likely than for insurer B2, hence the price for reinsuring this risk is larger for insurer
       A than for insurer B2.

   iii) We have a smaller kurtosis for insurer A than for B3, and the same mean, variance, and skew-
        ness. Hence, the distribution of the claims of insurer A is flatter (has lighter tails) than that of
        insurer B3. This implies that a large claim for insurer B3 is more likely than for insurer A, hence
        the price for reinsuring this risk is larger for insurer B3 than for insurer A.

Solution 1.14: [wk01Q14, Exercise, Schedule] The probability density function for a continuous
random variable X is given by:

    f_X(x) = { 2/x³,  for x ≥ 1;
             { 0,     otherwise.

1. F_X(x) = Pr(X ≤ x) = ∫_{−∞}^x f_X(z) dz = ∫_1^x (2/z³) dz = 2 ∫_1^x z^{−3} dz = [−z^{−2}]_1^x
   = −1/x² − (−1) = 1 − 1/x²  for x ≥ 1, and zero otherwise.

2. Pr(X ≥ 4) = 1 − Pr(X < 4) =* 1 − Pr(X ≤ 4) = 1 − F_X(4) = 1/16.
   * using Pr(X = x) = 0 for continuous random variables.


3. Graphs of f_X and F_X: (figures not reproduced in this extract; F_X(x) = 1 − 1/x² rises from 0 at x = 1 towards 1 as x → ∞.)
Solution 1.15: [wk01Q15, Exercise, Schedule]


1. To show f_X(·) is a density, we have to show that: 1) ∫_{−∞}^{∞} f_X(x) dx = 1 and 2) f_X(x) ≥ 0 for all
   x. For 1) we have:

       ∫_{−∞}^{∞} f_X(x) dx = ∫_{−∞}^0 (λ/2) e^{λx} dx + ∫_0^{∞} (λ/2) e^{−λx} dx
                          = [(1/2) e^{λx}]_{−∞}^0 + [−(1/2) e^{−λx}]_0^{∞}
                          = { 1/2 + 1/2 = 1,  if λ > 0;
                            { 0,              if λ = 0;
                            { divergent,      if λ < 0,

   and for 2) we have that f_X(x) ≥ 0 for all x if λ ≥ 0.

   Thus, combining 1) and 2), we have that f_X is a p.d.f. if λ > 0.

2. For x < 0 we have,

       F_X(x) = ∫_{−∞}^x f_X(u) du = ∫_{−∞}^x (λ/2) e^{λu} du = [(1/2) e^{λu}]_{−∞}^x = (1/2) e^{λx},

   and for x ≥ 0,

       F_X(x) = ∫_{−∞}^0 (λ/2) e^{λu} du + ∫_0^x (λ/2) e^{−λu} du = (1/2) + [−(1/2) e^{−λu}]_0^x = 1 − (1/2) e^{−λx}.

   Thus,

       F_X(x) = { 1 − (1/2) e^{−λx},  if x ≥ 0;
                { (1/2) e^{λx},        if x < 0.


3. To find the m.g.f.,

       M_X(t) = E[e^{Xt}] = ∫_{−∞}^{∞} e^{xt} f_X(x) dx = ∫_{−∞}^0 e^{xt} (λ/2) e^{λx} dx + ∫_0^{∞} e^{xt} (λ/2) e^{−λx} dx
              = (λ/2) ∫_{−∞}^0 e^{x(t+λ)} dx + (λ/2) ∫_0^{∞} e^{x(t−λ)} dx
              = (λ/2) [exp(x(t + λ))/(t + λ)]_{−∞}^0 + (λ/2) [exp(x(t − λ))/(t − λ)]_0^{∞}
              = (λ/2) · 1/(λ + t) + (λ/2) · 1/(λ − t) = λ²/(λ² − t²),

   provided t + λ > 0 and t − λ < 0 (because e to the power ∞ is infinity). This is equivalent to
   −λ < t < λ. Therefore, blindly applying the formula, the p.g.f. would be:

       P_X(t) = M_X(ln t) = λ²/(λ² − (ln t)²),

   but this is not a p.g.f., as this is not a discrete distribution (trick question!). Hence, the p.g.f.
   does not exist for this continuous random variable.
4. Suppose λ = 1, then

       f_X(x) = { (1/2) e^{−x},  if x ≥ 0;
                { (1/2) e^{x},   if x < 0,

   and

       Pr(|X| < 3/4) = Pr(−3/4 < X < 3/4) = ∫_{−3/4}^0 (1/2) e^{x} dx + ∫_0^{3/4} (1/2) e^{−x} dx
                     = (1/2)(1 − e^{−3/4}) + (1/2)(1 − e^{−3/4}) = 1 − e^{−3/4}.

   Or, alternatively:

       Pr(|X| < 3/4) = Pr(−3/4 < X < 3/4) = Pr(X < 3/4) − Pr(X ≤ −3/4)
                     =* Pr(X ≤ 3/4) − Pr(X ≤ −3/4) = F(3/4) − F(−3/4) = 1 − e^{−3/4}.

   * using Pr(X = x) = 0 for continuous r.v.
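A quick numeric confirmation of part 4 for the standard Laplace density (λ = 1); the midpoint-rule integrator is an implementation choice:

```python
from math import exp, isclose

def f(x):
    # Laplace density with lambda = 1: (1/2) e^{-|x|}
    return 0.5 * exp(-abs(x))

def integrate(a, b, n=100_000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

p = integrate(-0.75, 0.75)
assert isclose(p, 1 - exp(-0.75), abs_tol=1e-9)   # Pr(|X| < 3/4)
```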

Solution 1.16: [wk01Q16, Exercise, Schedule] We define the force of mortality as:

    μ(x) = lim_{h→0} [F_X(x + h) − F_X(x)] / [h (1 − F_X(x))].

The force of mortality is thus a conditional instantaneous rate of death at age x, conditional on
surviving to age x. (The corresponding conditional instantaneous probability of death at age x is thus
μ(x) dx.)

1. First, note that we can express this as:

       μ(x) = 1/(1 − F_X(x)) · lim_{h→0} [F_X(x + h) − F_X(x)]/h = F_X'(x)/(1 − F_X(x)) = f_X(x)/(1 − F_X(x)).


   Now, integrate both sides from 0 to x and we get:

       ∫_0^x μ(z) dz = ∫_0^x f_X(z)/(1 − F_X(z)) dz.

   Applying a change of variable u = 1 − F_X(z), du = −f_X(z) dz, then

       ∫_0^x μ(z) dz = −∫_1^{1−F_X(x)} (1/u) du = −[ln(u)]_1^{1−F_X(x)} = −ln(1 − F_X(x)).

   Thus we have,

       exp(−∫_0^x μ(z) dz) = exp(ln(1 − F_X(x))) = 1 − F_X(x),

   which then gives the result.


2. By definition of expectation, we know E[X] = ∫_0^{∞} z f_X(z) dz. Now integrate the right-hand side
   by applying integration by parts (see F&T page 3), i.e., ∫_a^b u (dv/dx) dx = [u·v]_a^b − ∫_a^b v (du/dx) dx:

       u = z  and  dv/dx = f_X(z),

   so that du = dz and v = −(1 − F_X(z)). The choice of v is not black magic: if you just use
   F_X(z) then the first term on the RHS below is not finite.³ The only choice F_X(z) + C for an
   antiderivative of f_X(z) that might produce finite terms on the RHS is F_X(z) − 1.⁴ Therefore

       E[X] = ∫_0^{∞} z f_X(z) dz
            = [−z (1 − F_X(z))]_0^{∞} + ∫_0^{∞} (1 − F_X(z)) dz
            = 0 + 0 + ∫_0^{∞} (1 − F_X(z)) dz = ∫_0^{∞} (1 − F_X(z)) dz.

   Using this result, we have:

       E[X] = ∫_0^{∞} (1 − F_X(z)) dz
            = ∫_0^{∞} [(1 − F_X(z))/f_X(z)] f_X(z) dz = E[1/μ(X)],

   since (1 − F_X(z))/f_X(z) = 1/μ(z).

   ³ Using v = F_X(z) we would have:

       E[X] = ∫_0^{∞} z f_X(z) dz = [z F_X(z)]_0^{∞} − ∫_0^{∞} F_X(z) dz = ∞ − ∫_0^{∞} F_X(z) dz.

   Hence, the only way to get the first part ([z (F_X(z) + C)]_0^{∞}) to a value smaller than infinity (and larger than
   minus infinity) is to set C = −1.

   ⁴ That the integrated term vanishes for x → ∞ is proven as follows: since E[X] < ∞, the integral ∫_0^{∞} x f_X(x) dx
   is convergent, and hence the tails tend to zero, so

       x[1 − F_X(x)] = x ∫_x^{∞} f_X(t) dt ≤ ∫_x^{∞} t f_X(t) dt → 0  for x → ∞.


Solution 1.17: [wk01Q17, Exercise, Schedule] We are given the p.d.f.:

    f_X(x) = ax(1 − bx²),  for 0 ≤ x ≤ 1.

1. We must have ∫_{−∞}^{∞} f_X(x) dx = ∫_0^1 f_X(x) dx = 1 so that:

       ∫_0^1 ax(1 − bx²) dx = ∫_0^1 (ax − abx³) dx = [(a/2)x² − (ab/4)x⁴]_0^1 = a/2 − ab/4
                            = a(2 − b)/4 = 1   ⟹   a = 4/(2 − b).

   Since a > 0 (given), we must have b < 2. Also, f_X(x) ≥ 0 for all x, so that at x = 1, f_X(1) =
   a(1 − b) ≥ 0, which implies b ≤ 1.
 
2. If b = 1, then a = 4 and f_X(x) = 4x(1 − x²). Therefore, the mean is

       E[X] = ∫_0^1 4x²(1 − x²) dx = 8/15,

   and

       E[X²] = ∫_0^1 4x³(1 − x²) dx = 1/3.

   The variance is Var(X) = E[X²] − (E[X])² = 1/3 − (8/15)² = 11/225
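The moments for the b = 1 case can be verified by numeric integration. A minimal sketch (the midpoint rule is an implementation choice):

```python
from math import isclose

# f(x) = 4x(1 - x^2) on [0, 1]; r-th moment via midpoint rule
def moment(r, n=100_000):
    h = 1 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x**r * 4 * x * (1 - x**2)
    return total * h

assert isclose(moment(0), 1, abs_tol=1e-9)                          # integrates to 1
assert isclose(moment(1), 8 / 15, abs_tol=1e-9)                     # E[X]
assert isclose(moment(2) - (8 / 15) ** 2, 11 / 225, abs_tol=1e-9)   # Var(X)
```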

Solution 1.18: [wk02Q1, Exercise, Schedule]

1. X ∼ Binomial(n = 10, p = 0.01).

2. X ∼ Binomial(n = 20, p = 0.90).

3. Let Y be the random variable representing the number of people tested to get 100 rare blood
   types. Then Y ∼ NB(r = 100, p = 0.10). We have that X = Y − 100, hence: X + 100 ∼
   N.B.(r = 100, p = 0.10).

4. 2 per minute, thus in 15 minutes you expect 30 on average. X ∼ Poisson(λ = 30).

5. X ∼ Binomial(n = 25, p = 0.35)

Solution 1.19: [wk02Q2, Exercise, Schedule]

1. Geometric(p = 1/2)
2. Binomial(n = 3, p = 1/2)
3. Poisson( = 1/2)
4. N.B.(r = 4, p = 1/2)
5. Binomial(n = 5, p = 3/4)

Solution 1.20: [wk02Q3, Exercise, Schedule] Let X ∼ Binomial(n, p). Then

    M_X(t) = (pe^t + (1 − p))^n,

and let Y ∼ Poisson(λ). Then

    M_Y(t) = e^{λ(e^t − 1)} = e^{np(e^t − 1)},

where λ is set to np.


1. Now, consider

       lim_{n→∞} M_X(t) = lim_{n→∞} (pe^t + (1 − p))^n
                        = lim_{n→∞} (1 + p(e^t − 1))^n
                        = lim_{n→∞} (1 + (λ/n)(e^t − 1))^n.

   Equivalently, by taking x = 1/n, so that as n → ∞, x → 0,

       lim_{n→∞} M_X(t) = lim_{x→0} (1 + xλ(e^t − 1))^{1/x}
                        =* e^{λ(e^t − 1)} = M_Y(t).

   * using:
       lim_{n→∞} (1 + x/n)^n = e^x
   or, equivalently (h = 1/n),
       lim_{h→0} (1 + hx)^{1/h} = e^x.

2. Let Y ∼ Poisson(λ) so that:

       Pr(Y = x) / Pr(Y = x − 1) = (e^{−λ} λ^x / x!) / (e^{−λ} λ^{x−1} / (x − 1)!) = λ/x,  for x = 1, 2, . . . .

   Similarly, for X ∼ Binomial(n, p), we have:

       Pr(X = x) / Pr(X = x − 1) = [C(n, x) p^x (1 − p)^{n−x}] / [C(n, x − 1) p^{x−1} (1 − p)^{n−x+1}]
                                 =* [(n − x + 1)/x] · [p/(1 − p)].

   * using C(n, k) = n!/((n − k)! k!), thus: C(n, x)/C(n, x − 1) = [n!/((n − x)! x!)] · [(n − x + 1)! (x − 1)!/n!]
     = (n − x + 1)/x.

   Now, let λ = np and let p → 0, i.e., let p be small. We have:

       lim_{p→0} [(n − x + 1) p] / [x(1 − p)] = lim_{p→0} (np − xp + p) / (x − px)
                                             = lim_{p→0} [np − p(x − 1)] / (x − px)
                                             =** λ/x.

   ** using lim_{p→0} np = λ, lim_{p→0} (1 − p) = 1, and lim_{p→0} p(x − 1) = 0. Note that:

       Pr(Y = 0) = e^{−λ},

   and that, using the p.m.f. of the Binomial distribution, we have:

       Pr(X = 0) = C(n, 0) p^0 (1 − p)^n = (1 − p)^n = (1 − λ/n)^n,

   so that
       lim_{n→∞} (1 − λ/n)^n = e^{−λ} = Pr(Y = 0).


3. n = 300 · 10 = 3,000 and p = 1/400. Then X ∼ Binomial(n = 3,000, p = 1/400) and λ =
   E[X] = np = 7.5. Approximating using Y ∼ Poisson(λ = 7.5), we have:

       Pr(X > 3) ≈ Pr(Y > 3) = 1 − Pr(Y ≤ 3)
                 = 1 − [Pr(Y = 0) + Pr(Y = 1) + Pr(Y = 2) + Pr(Y = 3)]
                 = 1 − e^{−7.5} (1 + 7.5/1! + 7.5²/2! + 7.5³/3!) = 0.9408545.

   Doing the Poisson approximation in R you would use:

       1-sum(dpois(0:3,7.5))

   The true value is given by:

       1-sum(dbinom(0:3,3000,(1/400)))

   which gives 0.9410733.
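The course uses R here; the same two tail probabilities can be reproduced from scratch in Python as a cross-check:

```python
from math import comb, exp, factorial

lam, n, p = 7.5, 3000, 1 / 400

# Poisson approximation: 1 - sum of Pr(Y = k), k = 0..3
pois_tail = 1 - sum(exp(-lam) * lam**k / factorial(k) for k in range(4))
# exact binomial value: 1 - sum of Pr(X = k), k = 0..3
binom_tail = 1 - sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(4))

assert abs(pois_tail - 0.9408545) < 1e-6
assert abs(binom_tail - 0.9410733) < 1e-6
```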

Solution 1.21: [wk02Q4, Exercise, Schedule] Let X = number of claims over $100. Then X ∼
Binomial(n = 200, p = 0.05).

1. Pr(X = 0) = C(200, 0) (0.05)⁰ (0.95)²⁰⁰ = 0.0000351.

2. Pr(X = 0) + Pr(X = 1) + Pr(X = 2) = 0.0000351 + C(200, 1)(0.05)¹(0.95)¹⁹⁹ + C(200, 2)(0.05)²(0.95)¹⁹⁸
   = 0.0023363.

3. E[5(n − X)] = 5 · E[n − X] = 5(n − E[X]) = 5(200 − 200 · 0.05) = 950

Solution 1.22: [wk02Q5, Exercise, Schedule] We are given X ∼ Gamma(α, β), so that the p.d.f. has
the form

    f_X(x) = β^α x^{α−1} e^{−βx} / Γ(α),  for x ≥ 0.

1. The m.g.f. is:

       M_X(t) = E[e^{Xt}] = ∫_0^{∞} e^{xt} β^α x^{α−1} e^{−βx} / Γ(α) dx
              = ∫_0^{∞} β^α x^{α−1} e^{−x(β−t)} / Γ(α) dx
              = (β/(β − t))^α ∫_0^{∞} (β − t)^α x^{α−1} e^{−x(β−t)} / Γ(α) dx
              = (β/(β − t))^α,

   where the last integral equals 1 because the integrand is the density of a Gamma(α, β − t);
   this will exist provided β − t > 0, or equivalently t < β.


2. To find an expression for the higher moments,

       E[X^r] = ∫_0^{∞} x^r β^α x^{α−1} e^{−βx} / Γ(α) dx
              = ∫_0^{∞} β^α x^{r+α−1} e^{−βx} / Γ(α) dx
              = [Γ(r + α) / (Γ(α) β^r)] ∫_0^{∞} β^{r+α} x^{r+α−1} e^{−βx} / Γ(r + α) dx
              = Γ(r + α) / (Γ(α) β^r),

   where again the last integral equals 1 because the integrand is the density of a Gamma(r + α, β).

Solution 1.23: [wk02Q6, Exercise, Schedule] Returns are normally distributed. Use the standard
normal table to get the desired probabilities, but only after standardising, i.e., Z = (X − μ)/σ.

1. The probability that the investment will be below 1,000 is:

       Pr(1000(1 + R_A) < 1000) = Pr(R_A < 0) = Pr(Z_A < (0 − 0.05)/√0.1)
                                ≈ Pr(Z_A < −0.16) = 1 − Φ(0.16)
                                = 1 − 0.5636 = 0.4364.

2. The probability that the investment will exceed 1,200 is:

       Pr(1000(1 + R_A) > 1200) = Pr(R_A > 0.2) = Pr(Z_A > (0.2 − 0.05)/√0.1)
                                ≈ Pr(Z_A > 0.47) = 1 − Φ(0.47)
                                = 1 − 0.6808 = 0.3192.

   Similar procedures apply under investment B, but the mean and variance are different. You can
   verify that for part (c) the probability is 0.4443, and for part (d) the probability is 0.4443.
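The table look-ups can be reproduced with the error function. A sketch, assuming mean 0.05 and variance 0.1 for investment A (inferred from the z-values used above); the small discrepancies come from rounding z to two decimals:

```python
from math import erf, sqrt

# standard normal c.d.f. via the error function
Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma = 0.05, sqrt(0.1)           # assumed: mean 5%, variance 0.1
p_below = Phi((0 - mu) / sigma)       # Pr(R_A < 0)
p_above = 1 - Phi((0.2 - mu) / sigma) # Pr(R_A > 0.2)

assert abs(p_below - 0.4364) < 3e-3
assert abs(p_above - 0.3192) < 3e-3
```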

Solution 1.24: [wk02Q7, Exercise, Schedule] Let T_1 and T_2 be the times until the next accident at
each of the busy intersections. Then T_1 ∼ Exp(2) and T_2 ∼ Exp(2.5).

1. The probability that there are no accidents at either intersection in the next month is:

       Pr(T_1 > 1 ∩ T_2 > 1) =* Pr(T_1 > 1) · Pr(T_2 > 1)
                             = (1 − Pr(T_1 ≤ 1)) · (1 − Pr(T_2 ≤ 1))
                             = (1 − F_{T_1}(1)) · (1 − F_{T_2}(1))
                             = (1 − (1 − e^{−2})) · (1 − (1 − e^{−2.5}))
                             = e^{−2} · e^{−2.5} = e^{−4.5} = 0.0111.

   * using independence between T_1 and T_2.

2. The probability that there will be no accidents for at least one of the intersections in the next
   month is:

       Pr(T_1 > 1 ∪ T_2 > 1) = Pr(T_1 > 1) + Pr(T_2 > 1) − Pr(T_1 > 1 ∩ T_2 > 1)
                             = (1 − F_{T_1}(1)) + (1 − F_{T_2}(1)) − Pr(T_1 > 1 ∩ T_2 > 1)
                             = e^{−2} + e^{−2.5} − e^{−4.5} = 0.2063.
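Both survival probabilities follow from the exponential c.d.f., so a short check is straightforward:

```python
from math import exp, isclose

p1, p2 = exp(-2), exp(-2.5)   # Pr(T1 > 1), Pr(T2 > 1)

assert isclose(p1 * p2, exp(-4.5))              # intersection (independence)
assert abs(p1 * p2 - 0.0111) < 5e-4
assert abs(p1 + p2 - p1 * p2 - 0.2063) < 5e-4   # union via inclusion-exclusion
```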


Solution 1.25: [wk02Q8, Exercise, Schedule]

1. Note that:

       E[X] = ∫_{−∞}^{∞} x f_X(x) dx = ∫_β^{∞} x (αβ^α / x^{α+1}) dx = αβ^α ∫_β^{∞} x^{−α} dx
            = αβ^α [x^{−α+1} / (−α + 1)]_β^{∞} = αβ/(α − 1),

   and that

       E[X²] = ∫_{−∞}^{∞} x² f_X(x) dx = αβ^α ∫_β^{∞} x^{−α+1} dx
             = αβ^α [x^{−α+2} / (−α + 2)]_β^{∞} = αβ²/(α − 2).

   Thus, the variance is given by:

       Var(X) = αβ²/(α − 2) − (αβ/(α − 1))² = αβ² / ((α − 1)²(α − 2)).

2. The distribution function of X is given by:

       F_X(x) = ∫_{−∞}^x f_X(u) du = ∫_β^x (αβ^α / u^{α+1}) du = [−(β/u)^α]_β^x
              = 1 − (β/x)^α,  for x > β.

   We have that F_X(x) = 0 for x ≤ β. Hence, the quantile function of X is given by solving
   u = F_X(F_X^{−1}(u)):

       F_X^{−1}(u) = β / (1 − u)^{1/α},

   where u ∈ [0, 1].

3. The median is therefore:

       M = β · 2^{1/α}.

4. Define X to be the loss amount and Y the amount of a claim. Thus,

Y = X − 5 if X ≥ 5, and Y = 0 otherwise; i.e., Y = max{0, X − 5}.

Notice that the variable Y is a mixed random variable with a probability mass at zero, reflecting
loss amounts smaller than or equal to five, i.e., Pr(Y = 0) = ∫_0^5 f_X(x)dx. For loss amounts
larger than five, the density of the claim amount equals the density of the loss amount at
y = x − 5.
The expected loss amount is (with α = 3.5 and β = 4):

E[X] = αβ/(α − 1) = 3.5(4)/(3.5 − 1) = 5.6


and the expected amount of a single claim is:

E[Y] = ∫_0^∞ y f_Y(y) dy = ∫_5^∞ (x − 5) f_X(x) dx = ∫_5^∞ (x − 5) · (3.5 · 4^3.5 / x^4.5) dx
= 448 ∫_5^∞ ( x^(−3.5) − 5x^(−4.5) ) dx
= 448 [ −x^(−2.5)/2.5 + 5x^(−3.5)/3.5 ]_5^∞
= 448 ( 5^(−2.5)/2.5 − 5^(−2.5)/3.5 )
= 0.9159

and the variance of a single claim is:

Var(Y) = E[(Y − E[Y])²] = E[Y²] − (0.9159)² = ∫_0^∞ y² f_Y(y) dy − 0.8389
= ∫_5^∞ (x − 5)² f_X(x) dx − 0.8389
= 448 ∫_5^∞ ( x² − 10x + 25 ) x^(−4.5) dx − 0.8389
= 448 ∫_5^∞ ( x^(−2.5) − 10x^(−3.5) + 25x^(−4.5) ) dx − 0.8389
= 448 [ −x^(−1.5)/1.5 + 10x^(−2.5)/2.5 − 25x^(−3.5)/3.5 ]_5^∞ − 0.8389
= 448 ( 5^(−1.5)/1.5 − 10 · 5^(−2.5)/2.5 + 25 · 5^(−3.5)/3.5 ) − 0.8389
= 6.106 − 0.8389 = 5.267.
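The tail integrals above are all of the form ∫_5^∞ x^(−p) dx = 5^(1−p)/(p − 1), so the claim mean and variance can be re-derived in a few lines as a check:

```python
alpha, beta, d = 3.5, 4.0, 5.0        # Pareto shape/scale and the threshold 5
c = alpha * beta ** alpha             # constant in f_X(x) = c / x**(alpha + 1)

# Tail integral: int_d^inf x**(-p) dx = d**(1 - p) / (p - 1), valid for p > 1
tail = lambda p: d ** (1 - p) / (p - 1)

EY = c * (tail(alpha) - d * tail(alpha + 1))                            # E[max(0, X-5)]
EY2 = c * (tail(alpha - 1) - 2 * d * tail(alpha) + d ** 2 * tail(alpha + 1))  # E[Y^2]
VarY = EY2 - EY ** 2
print(round(EY, 4), round(VarY, 3))
```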

Solution 1.26: [wk03Q4, Exercise, Schedule] First we derive the marginals:

x                                    0     1     2     3
Pr(X = x) = Σ_y Pr(X = x, Y = y)     0.25  0.35  0.27  0.13

and

y                                    1     2
Pr(Y = y) = Σ_x Pr(X = x, Y = y)     0.45  0.55

The conditional probability functions are:

x                                               0    1    2    3
Pr(X = x|Y = 1) = Pr(X = x, Y = 1)/Pr(Y = 1)    1/9  4/9  3/9  1/9

and

y                                               1     2
Pr(Y = y|X = 3) = Pr(X = 3, Y = y)/Pr(X = 3)    5/13  8/13

1. E[X] = Σ_x x Pr(X = x) = 0(0.25) + 1(0.35) + 2(0.27) + 3(0.13) = 1.28

2. E[Y] = Σ_y y Pr(Y = y) = 1(0.45) + 2(0.55) = 1.55

3. E[X|Y = 1] = Σ_x x Pr(X = x|Y = 1) = 0(1/9) + 1(4/9) + 2(3/9) + 3(1/9) = 13/9


4. Var(Y|X = 3) = E[Y²|X = 3] − (E[Y|X = 3])² = 40/169, where E[Y²|X = 3] = Σ_y y² Pr(Y =
y|X = 3) = 1² · 5/13 + 2² · 8/13 = 37/13 and E[Y|X = 3] = Σ_y y Pr(Y = y|X = 3) =
1 · 5/13 + 2 · 8/13 = 21/13.

5. E[XY] = Σ_x Σ_y x · y · Pr(X = x, Y = y) = 0·1·0.05 + 1·1·0.20 + 2·1·0.15 + 3·1·0.05 + 0·2·0.20
+ 1·2·0.15 + 2·2·0.12 + 3·2·0.08 = 1.91 and Cov(X, Y) = E[XY] − E[X] · E[Y] =
1.91 − 1.28 · 1.55 = −0.074.
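Because the joint pmf is small, the whole calculation is easy to replicate directly from the table (note the covariance comes out negative):

```python
# Joint pmf Pr(X = x, Y = y) read off the exercise table
p = {(0, 1): 0.05, (1, 1): 0.20, (2, 1): 0.15, (3, 1): 0.05,
     (0, 2): 0.20, (1, 2): 0.15, (2, 2): 0.12, (3, 2): 0.08}

EX = sum(x * v for (x, y), v in p.items())       # 1.28
EY = sum(y * v for (x, y), v in p.items())       # 1.55
EXY = sum(x * y * v for (x, y), v in p.items())  # 1.91
cov = EXY - EX * EY                              # -0.074
```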

Solution 1.27: [wk03Q5, Exercise, Schedule] The marginals can be obtained using:

p_X(x) = Σ_y p(x, y) and p_Y(y) = Σ_x p(x, y)

and the conditional probability mass functions using:

p_{X|Y}(x|Y = 2) = p(x, 2)/p_Y(2) and p_{Y|X}(y|X = 2) = p(2, y)/p_X(2).

1. The marginal probability mass functions for X and Y, respectively:

x/y                  1     2     3     4
p_X(x) = Pr(X = x)   0.19  0.32  0.31  0.18
p_Y(y) = Pr(Y = y)   0.19  0.32  0.31  0.18

2. The conditional probability mass functions are:

x/y            1     2      3     4
Pr(X|Y = 2)    5/32  20/32  5/32  2/32
Pr(Y|X = 2)    5/32  20/32  5/32  2/32

3. We have:

E[XY] = Σ_x Σ_y x · y · p(x, y)
= 0.10·1·1 + 0.05·2·1 + 0.02·3·1 + 0.02·4·1
+ 0.05·1·2 + 0.20·2·2 + 0.05·3·2 + 0.02·4·2
+ 0.02·1·3 + 0.05·2·3 + 0.20·3·3 + 0.04·4·3
+ 0.02·1·4 + 0.02·2·4 + 0.04·3·4 + 0.10·4·4
= 0.34 + 1.36 + 2.64 + 2.32 = 6.66.

We have Cov(X, Y) = E[XY] − E[X] · E[Y] = 6.66 − 2.48 · 2.48 = 0.5096, where E[X] =
Σ_x x p_X(x) = 1·0.19 + 2·0.32 + 3·0.31 + 4·0.18 = 2.48 and E[Y] = Σ_y y p_Y(y) = 1·0.19 + 2·0.32 + 3·0.31 + 4·0.18 = 2.48.

Solution 1.28: [wk03Q6, Exercise, Schedule]

1. Rewrite f_{X,Y}(x, y) = (6/7)x² + (6/7)y² + (12/7)xy for 0 ≤ x, y ≤ 1. We have that:

i.
Pr(X < Y) = ∫_0^1 ∫_0^y ( (6/7)x² + (6/7)y² + (12/7)xy ) dx dy
= ∫_0^1 [ (6/21)x³ + (6/7)y²x + (6/7)x²y ]_0^y dy
= ∫_0^1 ( (2/7)y³ + (6/7)y³ + (6/7)y³ ) dy
= ∫_0^1 2y³ dy = [ y⁴/2 ]_0^1 = 1/2.


ii. Use X + Y ≤ 1 ⟺ X ≤ 1 − Y:

Pr(X + Y ≤ 1) = ∫_0^1 ∫_0^(1−y) ( (6/7)x² + (6/7)y² + (12/7)xy ) dx dy
= ∫_0^1 [ (6/21)x³ + (6/7)y²x + (6/7)x²y ]_0^(1−y) dy
= ∫_0^1 ( (2/7)(1 − y)³ + (6/7)(1 − y)y² + (6/7)(1 − y)²y ) dy
= ∫_0^1 ( (2/7)z³ + (6/7)z(1 − z)² + (6/7)z²(1 − z) ) dz*
= ∫_0^1 ( (2/7)z³ − (6/7)z² + (6/7)z ) dz
= [ (2/28)z⁴ − (2/7)z³ + (3/7)z² ]_0^1 = 1/14 − 2/7 + 3/7 = 3/14,

* using the substitution z = 1 − y, dz = −dy.

iii.
Pr(X ≤ 1/2) = ∫_0^(1/2) ∫_0^1 ( (6/7)x² + (6/7)y² + (12/7)xy ) dy dx
= ∫_0^(1/2) [ (6/7)x²y + (6/21)y³ + (6/7)xy² ]_0^1 dx = ∫_0^(1/2) ( (6/7)x² + 2/7 + (6/7)x ) dx
= [ (2/7)x³ + (2/7)x + (3/7)x² ]_0^(1/2) = 1/28 + 1/7 + 3/28 = 2/7.

2. Rewriting f_{X,Y}(x, y) = (6/7)(x² + y² + 2xy) = (6/7)(x + y)², we have the following marginal densities:

f_X(x) = ∫_0^1 (6/7)(x² + y² + 2xy) dy
= (6/7) [ x²y + y³/3 + xy² ]_0^1 = (2/7)( 3x² + 3x + 1 ), for 0 ≤ x ≤ 1,

and zero otherwise, and

f_Y(y) = ∫_0^1 (6/7)(x² + y² + 2xy) dx
= (6/7) [ x³/3 + y²x + yx² ]_0^1 = (2/7)( 3y² + 3y + 1 ), for 0 ≤ y ≤ 1,

and zero otherwise.

3. You can then immediately check the following conditional densities:

f_{Y|X}(y|x) = f_{X,Y}(x, y)/f_X(x) = 3(x + y)²/(3x² + 3x + 1), for 0 ≤ y ≤ 1,

and zero otherwise, and

f_{X|Y}(x|y) = f_{X,Y}(x, y)/f_Y(y) = 3(x + y)²/(3y² + 3y + 1), for 0 ≤ x ≤ 1,

and zero otherwise.
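The three probabilities in part 1 can be checked with a crude midpoint-rule double sum over the unit square (a sketch; tolerances are loose because the event boundaries cut through grid cells):

```python
# Midpoint-rule check of the probabilities for f(x,y) = (6/7)*(x+y)**2 on [0,1]^2
n = 500
h = 1.0 / n
f = lambda x, y: (6 / 7) * (x + y) ** 2

def prob(event):
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            if event(x, y):
                total += f(x, y)
    return total * h * h

p1 = prob(lambda x, y: x < y)        # should be near 1/2
p2 = prob(lambda x, y: x + y <= 1)   # should be near 3/14
p3 = prob(lambda x, y: x <= 0.5)     # should be near 2/7
```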

35
ACTL2131 Probability and Mathematical Statistics, S1 2016 Exercises

Solution 1.29: [wk03Q8, Exercise, Schedule] Let x̄_n and s²_n denote the sample mean and variance
for the sample x₁, x₂, ..., x_n. Let x̄_(n+1) and s²_(n+1) denote these quantities when an additional observation
x_(n+1) is added to the sample.

1. To show that x̄_(n+1) can be computed from x̄_n and x_(n+1), note that:

x̄_(n+1) = (1/(n+1)) Σ_{k=1}^{n+1} x_k = (1/(n+1)) ( Σ_{k=1}^{n} x_k + x_(n+1) ) = ( n x̄_n + x_(n+1) )/(n + 1).

2. Using the result in part (a), we start with:

s²_(n+1) = (1/n) Σ_{k=1}^{n+1} ( x_k − x̄_(n+1) )²
= (1/n) Σ_{k=1}^{n+1} ( x_k − (n x̄_n + x_(n+1))/(n + 1) )²
= (1/n) Σ_{k=1}^{n+1} ( (x_k − x̄_n) + (x̄_n − x_(n+1))/(n + 1) )²*
= (1/n) Σ_{k=1}^{n+1} [ (x_k − x̄_n)² + 2(x_k − x̄_n)(x̄_n − x_(n+1))/(n + 1) + ( (x̄_n − x_(n+1))/(n + 1) )² ]
= (1/n) Σ_{k=1}^{n} (x_k − x̄_n)² + (1/n)(x_(n+1) − x̄_n)²
+ (2/n) · ( (x̄_n − x_(n+1))/(n + 1) ) Σ_{k=1}^{n+1} (x_k − x̄_n) + ((n + 1)/n) · (x̄_n − x_(n+1))²/(n + 1)²,**

* using x̄_n − x̄_(n+1) = x̄_n − (n x̄_n + x_(n+1))/(n + 1) = (x̄_n − x_(n+1))/(n + 1), and ** using
Σ_{k=1}^{n+1} (x_k − x̄_n)² = Σ_{k=1}^{n} (x_k − x̄_n)² + (x_(n+1) − x̄_n)². From the
last part of the above equation, the first term can be shown to be equal to:

(1/n) Σ_{k=1}^{n} (x_k − x̄_n)² = ((n − 1)/n) s²_n,

while the remaining terms can be shown to be equal to:

(1/(n + 1)) (x_(n+1) − x̄_n)².


Proceed as follows. Note that Σ_{k=1}^{n+1} (x_k − x̄_n) = x_(n+1) − x̄_n, using Σ_{k=1}^{n} (x_k − x̄_n) = 0. The remaining
terms are then:

(1/n)(x_(n+1) − x̄_n)² + (2/n) · ( (x̄_n − x_(n+1))/(n + 1) ) · (x_(n+1) − x̄_n) + (1/n) · (x̄_n − x_(n+1))²/(n + 1)
= (1/n)(x_(n+1) − x̄_n)² [ 1 − 2/(n + 1) + 1/(n + 1) ]***
= (1/n)(x_(n+1) − x̄_n)² · ( n/(n + 1) )
= (x_(n+1) − x̄_n)²/(n + 1),

as required, *** using (a − b)² = (−(a − b))² = (b − a)².
Hence, we have s²_(n+1) = ((n − 1)/n) s²_n + (x_(n+1) − x̄_n)²/(n + 1).
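The two update formulas can be verified against the direct definitions on arbitrary data (the numbers below are made up for illustration):

```python
# Verify the recursive updates
#   xbar_{n+1} = (n*xbar_n + x_{n+1}) / (n+1)
#   s2_{n+1}   = ((n-1)/n)*s2_n + (x_{n+1} - xbar_n)**2 / (n+1)
data = [4.2, 7.1, 3.3, 9.8, 5.0, 6.6, 2.4]

def direct(xs):
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m, s2

m, s2 = direct(data[:2])
for i in range(2, len(data)):
    n, x_new = i, data[i]
    # Tuple assignment evaluates both right-hand sides with the OLD mean m
    m, s2 = (n * m + x_new) / (n + 1), ((n - 1) / n) * s2 + (x_new - m) ** 2 / (n + 1)
```

Running both paths on the same data should agree to machine precision, which is a handy regression test if you implement this update in practice.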

Solution 1.30: [wk03Q9, Exercise, Schedule] Starting with the right-hand side, we have:

∫ E[Y|X = x] f_X(x) dx = ∫ ∫ y f_{Y|X}(y|x) f_X(x) dy dx
= ∫ ∫ y ( f(x, y)/f_X(x) ) f_X(x) dy dx
= ∫ ∫ y f(x, y) dy dx
= ∫ y ( ∫ f(x, y) dx ) dy     [the inner integral is f_Y(y)]
= ∫ y f_Y(y) dy = E[Y].

Solution 1.31: [wk03Q10, Exercise, Schedule] X₁ ~ Uniform[0, 1] implies that f_{X₁}(x₁) = 1 for
0 ≤ x₁ ≤ 1, and zero otherwise, and conditional on X₁, X₂ ~ Uniform[0, X₁] implies:

f_{X₂|X₁}(x₂|x₁) = 1/x₁, for 0 ≤ x₂ ≤ x₁ ≤ 1,

and zero otherwise.

1. Thus, the joint density of X₁ and X₂ is given by:

f(x₁, x₂) = f_{X₂|X₁}(x₂|x₁) f_{X₁}(x₁) = 1/x₁, for 0 ≤ x₂ ≤ x₁ ≤ 1,

and zero otherwise.


The joint distribution function for 0 ≤ x₂ ≤ x₁ ≤ 1 is:

F(x₁, x₂) = ∫_0^(x₂) ∫_(u₂)^(x₁) f_{X₁,X₂}(u₁, u₂) du₁ du₂ = ∫_0^(x₂) ∫_(u₂)^(x₁) (1/u₁) du₁ du₂
= ∫_0^(x₂) log(x₁/u₂) du₂ = x₂ ( 1 + log(x₁/x₂) ).

Thus we have:

F_{X₁,X₂}(x₁, x₂) =
0, if x₂ < 0 or x₁ < 0;
x₂ ( 1 + log(x₁/x₂) ), if 0 ≤ x₂ ≤ x₁ ≤ 1;
x₁ ( 1 + log(x₁/x₁) ) = x₁, if 1 ≥ x₂ > x₁ ≥ 0;
1, else.

2. The marginal density of X₂ is:

f_{X₂}(x₂) = ∫ f(x₁, x₂) dx₁ = ∫_(x₂)^1 (1/x₁) dx₁ = −log(x₂), for 0 < x₂ < 1,

and zero otherwise, and hence the marginal distribution function is:

F_{X₂}(x₂) =
0, if x₂ < 0;
∫_0^(x₂) −log(u₂) du₂ = [ u₂ − u₂ log(u₂) ]_0^(x₂) = x₂ − x₂ log(x₂), if 0 ≤ x₂ ≤ 1;
1, if x₂ > 1.
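Since X₂ is the product of two independent uniforms (X₂ = X₁·U), the marginal c.d.f. x₂ − x₂ log(x₂) is easy to spot-check by simulation, here at the point x₂ = 0.5:

```python
import random, math

random.seed(42)
N = 200_000
# X1 ~ U(0,1); given X1, X2 ~ U(0, X1) -- so X2 is a product of two uniforms
count = sum(1 for _ in range(N) if random.random() * random.random() <= 0.5)
empirical = count / N
theory = 0.5 - 0.5 * math.log(0.5)   # F_{X2}(0.5) ~= 0.8466
```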

Solution 1.32: [wk03Q11, Exercise, Schedule] First note that we can re-express the joint distribution
function as:

F(x₁, x₂) = (3/2)x₁x₂ − (1/2)x₁²x₂ − (1/2)x₁x₂² + (1/2)x₁²x₂².

1. The joint density can be derived by taking the partial derivative twice:

f(x₁, x₂) = ∂²F(x₁, x₂)/∂x₁∂x₂ = 3/2 − x₁ − x₂ + 2x₁x₂

if 0 ≤ x₁, x₂ ≤ 1 and zero otherwise.

2. The marginal densities are:

f_{X₁}(x₁) = ∂F(x₁, 1)/∂x₁ = 3/2 − x₁ − 1/2 + x₁ = 1, for 0 ≤ x₁ ≤ 1

and zero otherwise, and

f_{X₂}(x₂) = ∂F(1, x₂)/∂x₂ = 3/2 − 1/2 − x₂ + x₂ = 1, for 0 ≤ x₂ ≤ 1

and zero otherwise, so that the marginal distribution functions are:

F_{X₁}(x₁) =
0, if x₁ < 0;
∫_0^(x₁) 1 du₁ = [u₁]_0^(x₁) = x₁, if 0 ≤ x₁ ≤ 1;
1, if x₁ > 1.


and

F_{X₂}(x₂) =
0, if x₂ < 0;
∫_0^(x₂) 1 du₂ = [u₂]_0^(x₂) = x₂, if 0 ≤ x₂ ≤ 1;
1, if x₂ > 1.

They are both uniform on [0, 1].

3. To derive the correlation, we first note that:

E[X₁] = E[X₂] = 1/2

and

Var(X₁) = Var(X₂) = 1/12.

Furthermore, we have that:

E[X₁X₂] = ∫ ∫ x₁x₂ f_{X₁,X₂}(x₁, x₂) dx₁ dx₂
= ∫_0^1 ∫_0^1 x₁x₂ ( 3/2 − x₁ − x₂ + 2x₁x₂ ) dx₁ dx₂
= ∫_0^1 ∫_0^1 ( (3/2)x₁x₂ − x₁²x₂ − x₁x₂² + 2x₁²x₂² ) dx₁ dx₂
= ∫_0^1 [ (3/4)x₁²x₂ − (1/3)x₁³x₂ − (1/2)x₁²x₂² + (2/3)x₁³x₂² ]_0^1 dx₂
= ∫_0^1 ( (3/4)x₂ − (1/3)x₂ − (1/2)x₂² + (2/3)x₂² ) dx₂
= [ (3/8)x₂² − (1/6)x₂² − (1/6)x₂³ + (2/9)x₂³ ]_0^1
= 19/72.

Therefore,

Cov(X₁, X₂) = E[X₁X₂] − E[X₁]E[X₂] = 19/72 − 1/4 = 1/72

so that

ρ(X₁, X₂) = Cov(X₁, X₂) / √( Var(X₁) Var(X₂) ) = (1/72)/(1/12) = 1/6.
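The value E[X₁X₂] = 19/72 is easy to confirm with a midpoint-rule double sum over the density derived above:

```python
# Midpoint-rule check that E[X1*X2] = 19/72 for f(x1,x2) = 3/2 - x1 - x2 + 2*x1*x2
n = 400
h = 1.0 / n
total = 0.0
for i in range(n):
    x1 = (i + 0.5) * h
    for j in range(n):
        x2 = (j + 0.5) * h
        total += x1 * x2 * (1.5 - x1 - x2 + 2 * x1 * x2)
E = total * h * h   # should be close to 19/72 ~= 0.26389
```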

Solution 1.33: [wk03Q12, Exercise, Schedule] We have the joint probability density function:

f_{X₁,X₂}(x₁, x₂) = k(1 − x₂), if 0 ≤ x₁ ≤ x₂ ≤ 1; and 0, else.

1. For f_{X₁,X₂}(x₁, x₂) to be a (joint) probability density function, the function should satisfy the two
conditions:
1) f_{X₁,X₂}(x₁, x₂) ≥ 0 for all x₁, x₂;
2) ∫ ∫ f_{X₁,X₂}(x₁, x₂) dx₁ dx₂ = 1.


1) is satisfied if k ≥ 0.
For the second condition we calculate:

∫ ∫ f_{X₁,X₂}(x₁, x₂) dx₁ dx₂ = ∫_0^1 ∫_(x₁)^1 k(1 − x₂) dx₂ dx₁
= k ∫_0^1 [ x₂ − x₂²/2 ]_(x₁)^1 dx₁
= k ∫_0^1 ( 1/2 − x₁ + x₁²/2 ) dx₁
= k [ x₁/2 − x₁²/2 + x₁³/6 ]_0^1
= k ( 1/2 − 1/2 + 1/6 ) = k/6.

Thus, setting k/6 equal to 1 implies that k = 6.

2. To determine the region of integration for Pr(X₁ ≤ 3/4, X₂ ≥ 1/2) we have three
conditions, namely:

0 ≤ X₁ ≤ X₂ ≤ 1: part above the red-dotted line;
X₁ ≤ 3/4: part left of the blue-dashed line;
X₂ ≥ 1/2: part above the black-solid line.

Hence, the upper left part of the figure is the region over which we integrate.

[Figure: the unit square in the (x₁, x₂) plane, showing the support 0 ≤ x₁ ≤ x₂ ≤ 1 above the diagonal, cut by the vertical line x₁ = 3/4 and the horizontal line x₂ = 1/2.]

3. To calculate Pr(X₁ ≤ 3/4, X₂ ≥ 1/2) we split the integral into three parts, namely:

0 ≤ X₁ ≤ 1/2, and 1/2 ≤ X₂ ≤ 1: black part;
1/2 ≤ X₁ ≤ 3/4, and 3/4 ≤ X₂ ≤ 1: blue part;


1/2 ≤ X₁ ≤ X₂ ≤ 3/4: red part.

Thus we have:

Pr(X₁ ≤ 3/4, X₂ ≥ 1/2)
= ∫_(1/2)^1 ∫_0^(1/2) k(1 − x₂) dx₁ dx₂ + ∫_(3/4)^1 ∫_(1/2)^(3/4) k(1 − x₂) dx₁ dx₂
+ ∫_(1/2)^(3/4) ∫_(x₁)^(3/4) k(1 − x₂) dx₂ dx₁
= ∫_(1/2)^1 (k/2)(1 − x₂) dx₂ + ∫_(3/4)^1 (k/4)(1 − x₂) dx₂ + k ∫_(1/2)^(3/4) ( 15/32 − x₁ + x₁²/2 ) dx₁
= (k/2) [ x₂ − x₂²/2 ]_(1/2)^1 + (k/4) [ x₂ − x₂²/2 ]_(3/4)^1 + k [ 15x₁/32 − x₁²/2 + x₁³/6 ]_(1/2)^(3/4)
= (k/2)( 1/2 − 3/8 ) + (k/4)( 1/2 − 15/32 ) + k( 9/64 − 25/192 )
= 3/8 + 3/64 + 4/64 = 31/64 = 0.484375.

Solution 1.34: [wk03Q1, Exercise, Schedule] The data arranged in ascending order:

670 1880 3110 4270


850 1940 3320 4630
960 2380 3380 4670
1220 2430 3710 5400
1490 2490 3880 6780
1590 2710 4210 8230

1. Stem-and-Leaf Display of the claim amounts:

> stem(storm,scale=2)

The decimal point is 3 digit(s) to the right of the |

0 | 79
1 | 025699
2 | 4457
3 | 13479
4 | 2367


5 | 4
6 | 8
7 |
8 | 2

2. Mean = 3175, Median = 2710 + (1/2)(3110 − 2710) = 2910. The stem-and-leaf display appears
to show a positively-skewed distribution.
3. Q1 = 1807.5 and Q3 = 4225. Therefore IQR = Q3 − Q1 = 2417.5.
4. F_24(1000) = (1/24) Σ_{k=1}^{24} I(x_k ≤ 1000) = 3/24 = 1/8.
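The same summary statistics can be reproduced from the raw claim amounts with the standard library (`method='inclusive'` matches the linear-interpolation quartiles used here, i.e. R's default type 7):

```python
import statistics

claims = [670, 850, 960, 1220, 1490, 1590, 1880, 1940, 2380, 2430, 2490, 2710,
          3110, 3320, 3380, 3710, 3880, 4210, 4270, 4630, 4670, 5400, 6780, 8230]

mean = statistics.mean(claims)                                       # 3175
median = statistics.median(claims)                                   # 2910
q1, _, q3 = statistics.quantiles(claims, n=4, method='inclusive')    # 1807.5, 4225
ecdf_1000 = sum(x <= 1000 for x in claims) / len(claims)             # 3/24 = 0.125
```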

Solution 1.35: [wk03Q2, Exercise, Schedule] Sample statistics:

1. Mode = 2
2. Median = 2
3. Q1 = 1 and Q3 = 3 so that IQR = 3 − 1 = 2.
4. Since the average value of the 5 claims of 5 or more is 7.5, the sum of claims of 5 or more is
5 × 7.5 = 37.5. Therefore:

x̄ = (1/100) Σ x_k = ( 0(14) + 1(25) + 2(26) + 3(18) + 4(12) + 37.5 )/100 = 2.165.

Solution 1.36: [wk03Q3, Exercise, Schedule] Recall the formulas for the sample mean and variance:

x̄ = (1/n) Σ x_k and s² = (1/(n − 1)) Σ (x_k − x̄)² = (1/(n − 1)) ( Σ x_k² − n x̄² ).

1. x̄ = (1/32)(13337.6) = 416.8 and s = √( (1/31)( 5667388.7 − 32(416.8)² ) ) = √3492.8071 = 59.1.

2. Let x̄_new and s²_new be the new mean and variance respectively after deleting the largest observation. Thus,

x̄_new = (1/31)(13337.6 − 605) = 410.73

and

s²_new = (1/30) ( 5667388.7 − 605² − 31(410.73)² ) = 2389.686.

Therefore, s_new = √2389.686 = 48.9.

3. Percentage change in the mean = (new − old)/old × 100% = (410.73 − 416.8)/416.8 × 100% = −1.46%.
4. Percentage change in the standard deviation = (new − old)/old × 100% = (48.9 − 59.1)/59.1 × 100% = −17.26%.
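The whole update can be done mechanically from the two running sums (note the solution rounds the new mean to 410.73 before squaring; working at full precision gives s_new ≈ 48.89, which still rounds to 48.9):

```python
import math

n, total, total_sq = 32, 13337.6, 5667388.7    # n, sum of x, sum of x^2
mean = total / n
s = math.sqrt((total_sq - n * mean ** 2) / (n - 1))

# Delete the largest observation (605) and update the summary statistics
largest = 605.0
mean_new = (total - largest) / (n - 1)
s_new = math.sqrt((total_sq - largest ** 2 - (n - 1) * mean_new ** 2) / (n - 2))

pct_mean = (mean_new - mean) / mean * 100      # about -1.46%
pct_sd = (s_new - s) / s * 100                 # about -17.3%
```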

Solution 1.37: [wk03Q7, Exercise, Schedule] We are given that Z = αX + (1 − α)Y.

1. Therefore,

E[Z] = E[αX + (1 − α)Y] = αE[X] + (1 − α)E[Y] = αμ + (1 − α)μ = μ.


2. First, we note that we can express the variance as:

Var(Z) = Var(αX + (1 − α)Y)
= Var(αX) + Var((1 − α)Y) + 2Cov(αX, (1 − α)Y)
= α²Var(X) + (1 − α)²Var(Y) + 2α(1 − α)Cov(X, Y)   [= 0 by independence]
= α²σ²_X + (1 − α)²σ²_Y.

Taking the first order condition (FOC) with respect to α, i.e., differentiating with respect to α
and then equating to zero, we have:

∂Var(Z)/∂α = 2ασ²_X − 2(1 − α)σ²_Y = 0,

which gives us:

2ασ²_X = 2(1 − α)σ²_Y ⟹ α/(1 − α) = σ²_Y/σ²_X
⟹ α = σ²_Y/(σ²_X + σ²_Y).

You must check the second derivative to ensure this gives the minimum!

3. (X + Y)/2 is better than either X or Y if it has smaller variance than both of them, i.e.,

Var((X + Y)/2) < Var(X) and Var((X + Y)/2) < Var(Y).

Equivalently,

(1/4)(σ²_X + σ²_Y) < σ²_X and (1/4)(σ²_X + σ²_Y) < σ²_Y
σ²_Y < 3σ²_X and σ²_X < 3σ²_Y
σ²_X/σ²_Y > 1/3 and σ²_X/σ²_Y < 3.

Thus, it is better than either X or Y if

1/3 < σ²_X/σ²_Y < 3.

4. If there is a covariance, we have:

Var(Z) = α²σ²_X + (1 − α)²σ²_Y + 2α(1 − α)σ_XY.

Hence, we have the FOC:

∂Var(Z)/∂α = 2ασ²_X − 2(1 − α)σ²_Y + 2(1 − 2α)σ_XY = 0
2ασ²_X − 2ασ_XY = 2(1 − α)σ²_Y − 2(1 − α)σ_XY
α/(1 − α) = (σ²_Y − σ_XY)/(σ²_X − σ_XY),


thus we have:

α = (σ²_Y − σ_XY) / (σ²_X + σ²_Y − 2σ_XY).

Note: second derivative:

∂²Var(Z)/∂α² = 2σ²_X + 2σ²_Y − 4σ_XY = 2( σ²_X + σ²_Y − 2σ_XY ) = 2 Var(X − Y) ≥ 0.

Thus, α = (σ²_Y − σ_XY)/(σ²_X + σ²_Y − 2σ_XY) is indeed the minimum.
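The closed-form minimiser in part 4 can be confirmed by a brute-force grid search over α (the variances and covariance below are hypothetical numbers picked for illustration):

```python
# Hypothetical inputs: var_x, var_y, cov chosen only for illustration
var_x, var_y, cov = 4.0, 1.0, 0.5

var_z = lambda a: a ** 2 * var_x + (1 - a) ** 2 * var_y + 2 * a * (1 - a) * cov
a_star = (var_y - cov) / (var_x + var_y - 2 * cov)   # closed form: 0.125

# Grid search over alpha in [0, 1] confirms the closed-form minimiser
a_grid = min((i / 10_000 for i in range(10_001)), key=var_z)
```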

Solution 1.38: [wk04Q1, Exercise, Schedule] We are given X ~ Exp(β) so that:

E[X] = 1/β and Var(X) = 1/β².

Also, N ~ Poisson(λ) so that:

E[N] = λ and Var(N) = λ.

1. The mean of S is:

E[S] = E[ E[S|N] ] = E[N E[X]] = E[N] E[X] = λ/β.

2. The variance of S is:

Var(S) = E[N] Var(X) + (E[X])² Var(N)
= λ (1/β²) + (1/β)² λ = 2λ/β².

3. The m.g.f. of S is:

M_S(t) = M_N( log M_X(t) ),

where

M_X(t) = β/(β − t) and M_N(t) = e^(λ(e^t − 1)).

Thus,

M_S(t) = e^(λ( e^(log(β/(β−t))) − 1 )) = exp( λ( β/(β − t) − 1 ) ) = exp( λt/(β − t) ).

Solution 1.39: [wk04Q2, Exercise, Schedule] X_k ~ Exp(1) implies that f_{X_k}(x) = e^(−x) for x ≥ 0 and
zero otherwise, for k = 1, 2, 3. We have that:

F_{X_k}(x) = 0, if x < 0; and F_{X_k}(x) = 1 − e^(−x), if x ≥ 0,

for k = 1, 2, 3.
Let X_(1) = min{X₁, X₂, X₃} and X_(3) = max{X₁, X₂, X₃}. Finding the distributions of the minimum and
the maximum, we have:

F_{X_(1)}(x) = 1 − (1 − F(x))³ = 1 − e^(−3x),


for x ≥ 0 and zero otherwise. So that:

f_{X_(1)}(x) = 3e^(−3x), for x ≥ 0,

and zero otherwise. This is Exp(3), and

F_{X_(3)}(x) = (F(x))³ = ( 1 − e^(−x) )³,

for x ≥ 0 and zero otherwise. So that:

f_{X_(3)}(x) = 3e^(−x)( 1 − e^(−x) )², for x ≥ 0,

and zero otherwise.

1. The joint distribution of (X_(1), X_(2), X_(3)) is given by:

f_{X_(1),X_(2),X_(3)}(y₁, y₂, y₃) = 3! f(y₁) f(y₂) f(y₃) = 6e^(−(y₁+y₂+y₃)), for 0 ≤ y₁ ≤ y₂ ≤ y₃ < ∞,

and zero otherwise. Therefore, we get the joint distribution of (X_(1), X_(3)) by integrating over all
possible values of X_(2) as:

f_{X_(1),X_(3)}(y₁, y₃) = ∫_(y₁)^(y₃) 6e^(−(y₁+y₂+y₃)) dy₂ = 6 [ −e^(−(y₁+y₂+y₃)) ]_(y₁)^(y₃)
= 6 ( e^(−2y₁−y₃) − e^(−y₁−2y₃) ), for 0 ≤ y₁ ≤ y₃ < ∞,

and zero otherwise.

2. We can show that:

E[X_(1)] = 3 ∫_0^∞ x e^(−3x) dx = 3 [ (e^(−3x)/9)(−3x − 1) ]_0^∞ = 1/3,

* using ∫ x e^(cx) dx = (e^(cx)/c²)(cx − 1), and (note e^(a)·e^(b) = e^(a+b)):

E[X_(3)] = 3 ∫_0^∞ y e^(−y)( 1 − e^(−y) )² dy
= 3 ∫_0^∞ ( y e^(−y) − 2y e^(−2y) + y e^(−3y) ) dy
= 3 [ (e^(−y))(−y − 1) − 2(e^(−2y)/4)(−2y − 1) + (e^(−3y)/9)(−3y − 1) ]_0^∞
= 3 ( 1 − 1/2 + 1/9 ) = 11/6.


3. We have that (note e^(a)·e^(b) = e^(a+b)):

Var(X_(1)) = E[X²_(1)] − ( E[X_(1)] )² = 3 ∫_0^∞ x² e^(−3x) dx − 1/9
= 3 · (2/27) − 1/9 = 2/9 − 1/9 = 1/9,

and

Var(X_(3)) = E[X²_(3)] − ( E[X_(3)] )² = 3 ∫_0^∞ y² e^(−y)( 1 − e^(−y) )² dy − (11/6)²
= 3 ∫_0^∞ ( y² e^(−y) − 2y² e^(−2y) + y² e^(−3y) ) dy − (11/6)²
= 3 ( 2 − 2 · (2/8) + 2/27 ) − 121/36 = 49/36,

** using ∫_0^∞ y² e^(−cy) dy = 2/c³.

4. Now, for:

E[X_(1)X_(3)] = ∫_0^∞ ∫_x^∞ xy · 6 ( e^(−2x−y) − e^(−x−2y) ) dy dx
= ∫_0^∞ 6x ( e^(−2x) ∫_x^∞ y e^(−y) dy − e^(−x) ∫_x^∞ y e^(−2y) dy ) dx
= ∫_0^∞ 6x ( e^(−2x) · e^(−x)(x + 1) − e^(−x) · (e^(−2x)/4)(2x + 1) ) dx*
= ∫_0^∞ ( 6x² e^(−3x) + 6x e^(−3x) − (12/4)x² e^(−3x) − (6/4)x e^(−3x) ) dx
= ∫_0^∞ ( 3x² e^(−3x) + (9/2)x e^(−3x) ) dx
= 3 · (2/27) + (9/2) · (1/9) = 2/9 + 1/2 = 13/18,**

again * using ∫ x e^(cx) dx = (e^(cx)/c²)(cx − 1),
** using ∫_0^∞ x² e^(−cx) dx = 2/c³.

Therefore, we have:

Cov(X_(1), X_(3)) = E[X_(1)X_(3)] − E[X_(1)] E[X_(3)]
= 13/18 − (1/3)(11/6) = 13/18 − 11/18 = 1/9.


Therefore, the required correlation coefficient is:

ρ( X_(1), X_(3) ) = Cov(X_(1), X_(3)) / √( Var(X_(1)) Var(X_(3)) )
= (1/9) / √( (1/9)(49/36) ) = 2/7.
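All four quantities (E[X_(1)] = 1/3, E[X_(3)] = 11/6, Cov = 1/9, and hence ρ = 2/7) are easy to spot-check by simulating triples of unit exponentials:

```python
import random

random.seed(3)
N = 200_000
mins, maxs = [], []
for _ in range(N):
    xs = [random.expovariate(1.0) for _ in range(3)]
    mins.append(min(xs))
    maxs.append(max(xs))

m1 = sum(mins) / N                                            # ~ 1/3
m3 = sum(maxs) / N                                            # ~ 11/6
cov = sum(a * b for a, b in zip(mins, maxs)) / N - m1 * m3    # ~ 1/9
```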

Solution 1.40: [wk04Q3, Exercise, Schedule] X ~ Gamma(α, 1) implies:

f_X(x) = x^(α−1) e^(−x) / Γ(α), for x ≥ 0, and M_X(t) = ( 1/(1 − t) )^α.

Similarly, Y ~ Gamma(β, 1) implies:

f_Y(y) = y^(β−1) e^(−y) / Γ(β), for y ≥ 0, and M_Y(t) = ( 1/(1 − t) )^β.

1. Using the m.g.f. technique, we have:

M_U(t) = E[e^(Ut)] = E[e^((X+Y)t)] = E[e^(Xt)] E[e^(Yt)]
= M_X(t) M_Y(t) = ( 1/(1 − t) )^(α+β),

which is the m.g.f. of a Gamma(α + β, 1).

2. By independence, we note that:

f(x, y) = f_X(x) f_Y(y) = ( x^(α−1) e^(−x) / Γ(α) ) ( y^(β−1) e^(−y) / Γ(β) ).

The inverse of the transformation:

u = x + y and v = x/(x + y)

is given by:

x = uv and y = u − uv = u(1 − v),

which is derived by: x = u − y and v = x/u, so uv = u − y, hence y = (1 − v)u and
x = u − u(1 − v) = uv.
Its Jacobian is:

J(u, v) = det [ ∂h₁/∂u  ∂h₁/∂v ; ∂h₂/∂u  ∂h₂/∂v ] = det [ v  u ; 1 − v  −u ]
= −uv − u(1 − v) = −u.

Thus |J(u, v)| = u, because 0 < u < ∞. By the Jacobian transformation technique, the joint
density of U and V is:

f_{U,V}(u, v) = u · (uv)^(α−1) e^(−uv) [ u(1 − v) ]^(β−1) e^(−u(1−v)) / ( Γ(α) Γ(β) )

for 0 < u < ∞ and 0 < v < 1 and zero otherwise.


3. Using e^(−uv) e^(−u(1−v)) = e^(−u), we can further simplify the joint density as:

f_{U,V}(u, v) = [ 1/( Γ(α) Γ(β) ) ] · u^(α+β−1) e^(−u) · v^(α−1) (1 − v)^(β−1),

where the first factor is a constant, the second a function of u alone, and the third a function of
v alone. Thus, we see that we can express the joint density as a product of functions of u alone and v
alone, i.e., f_{U,V}(u, v) = f_U(u) f_V(v). Therefore, U and V are independent.

4. Note x, y ≥ 0, thus 0 ≤ X/(X + Y) ≤ X/X = 1. For the marginal of U, we have:

f_U(u) = ∫_0^1 u^(α+β−1) e^(−u) v^(α−1) (1 − v)^(β−1) / ( Γ(α) Γ(β) ) dv
= ( u^(α+β−1) e^(−u) / Γ(α + β) ) ∫_0^1 ( Γ(α + β)/( Γ(α) Γ(β) ) ) v^(α−1) (1 − v)^(β−1) dv
[the integrand is the density of a Beta(α, β), so the integral equals 1]
= u^(α+β−1) e^(−u) / Γ(α + β), for u > 0,

and zero otherwise. This is the density of a Gamma(α + β, 1). This reinforces the result in part 1.
Note: 0 ≤ X + Y ≤ ∞. For the marginal of V, we have:

f_V(v) = ∫_0^∞ u^(α+β−1) e^(−u) v^(α−1) (1 − v)^(β−1) / ( Γ(α) Γ(β) ) du
= ( Γ(α + β)/( Γ(α) Γ(β) ) ) v^(α−1) (1 − v)^(β−1) ∫_0^∞ u^(α+β−1) e^(−u) / Γ(α + β) du
[the integrand is the density of a Gamma(α + β, 1), so the integral equals 1]
= ( Γ(α + β)/( Γ(α) Γ(β) ) ) v^(α−1) (1 − v)^(β−1), for 0 < v < 1,

and zero otherwise. This is the density of a Beta(α, β).

5. Since X = UV and by independence, we have:

E[X] = E[U] E[V]
α = (α + β) E[V]

so that

E[V] = α/(α + β).

Similarly, we have for the variance:

Var(X) = Var(UV) = E[U²V²] − ( E[UV] )² = E[U²] E[V²] − α²

so that

E[V²] = ( α + α² ) / E[U²] = ( α + α² ) / ( (α + β)(1 + α + β) ).


Thus, the variance of V is:

Var(V) = E[V²] − ( E[V] )²
= ( α + α² ) / ( (α + β)(1 + α + β) ) − ( α/(α + β) )²
= ( α(1 + α)(α + β) − α²(1 + α + β) ) / ( (α + β)²(1 + α + β) )
= αβ / ( (α + β)²(1 + α + β) ).
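The Beta(α, β) mean and variance for V = X/(X + Y) can be verified by simulating independent gammas (shape values below are illustrative):

```python
import random

random.seed(5)
alpha, beta = 2.0, 3.0        # illustrative shape parameters
N = 200_000
vs = []
for _ in range(N):
    x = random.gammavariate(alpha, 1.0)
    y = random.gammavariate(beta, 1.0)
    vs.append(x / (x + y))    # V should be Beta(alpha, beta)

mv = sum(vs) / N                                 # ~ alpha/(alpha+beta) = 0.4
vv = sum((v - mv) ** 2 for v in vs) / (N - 1)    # ~ alpha*beta/((alpha+beta)**2*(1+alpha+beta)) = 0.04
```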

Solution 1.41: [wk04Q4, Exercise, Schedule] X_k ~ Exp(1) implies that f_{X_k}(x) = e^(−x) for x ≥ 0 and
zero otherwise, for k = 1, 2, 3. We have that:

F_{X_k}(x) = 0, if x < 0; and F_{X_k}(x) = 1 − e^(−x), if x ≥ 0,

for k = 1, 2, 3.
Let X_(1) = min{X₁, X₂, X₃} and X_(3) = max{X₁, X₂, X₃}. As in Solution 1.39, the distributions of the
minimum and the maximum are:

F_{X_(1)}(x) = 1 − (1 − F(x))³ = 1 − e^(−3x) and f_{X_(1)}(x) = 3e^(−3x) (this is Exp(3)),
F_{X_(3)}(x) = (F(x))³ = ( 1 − e^(−x) )³ and f_{X_(3)}(x) = 3e^(−x)( 1 − e^(−x) )²,

each for x ≥ 0 and zero otherwise. The joint density of (X_(1), X_(2), X_(3)) is
3! f(y₁) f(y₂) f(y₃) = 6e^(−(y₁+y₂+y₃)) for 0 ≤ y₁ ≤ y₂ ≤ y₃ < ∞ and zero otherwise, and integrating
out X_(2) gives:

f_{X_(1),X_(3)}(y₁, y₃) = ∫_(y₁)^(y₃) 6e^(−(y₁+y₂+y₃)) dy₂ = 6 ( e^(−2y₁−y₃) − e^(−y₁−2y₃) ), for 0 ≤ y₁ ≤ y₃ < ∞,

and zero otherwise.

1. First, we find the conditional density of X_(3) given X_(1); by definition,

f_{X_(3)|X_(1)}(y₃|y₁) = f_{X_(1),X_(3)}(y₁, y₃) / f_{X_(1)}(y₁)
= 6 ( e^(−2y₁−y₃) − e^(−y₁−2y₃) ) / ( 3e^(−3y₁) ), for 0 ≤ y₁ ≤ y₃ < ∞,

and zero otherwise.

Replacing y₁ = x and y₃ = y, we have:

f_{X_(3)|X_(1)}(y|x) = 2 ( e^(−(y−x)) − e^(−2(y−x)) ), for 0 ≤ x ≤ y < ∞,

and zero otherwise. Thus,

E[ X_(3) | X_(1) = x ] = ∫_x^∞ 2y ( e^(−(y−x)) − e^(−2(y−x)) ) dy
= 2e^x ∫_x^∞ y e^(−y) dy − 2e^(2x) ∫_x^∞ y e^(−2y) dy
= 2e^x · e^(−x)(x + 1) − 2e^(2x) · (e^(−2x)/4)(2x + 1)*
= 2(x + 1) − (2x + 1)/2
= x + 3/2,

* using ∫ y e^(cy) dy = (e^(cy)/c²)(cy − 1).

2. Similarly, we can find the conditional density of X_(1) given X_(3) as:

f_{X_(1)|X_(3)}(y|x) = 2 ( e^(−2y) − e^(−(y+x)) ) / ( 1 − e^(−x) )², for 0 ≤ y ≤ x < ∞,

and zero otherwise. Thus,

E[ X_(1) | X_(3) = x ] = ∫_0^x y · 2 ( e^(−2y) − e^(−(y+x)) ) / ( 1 − e^(−x) )² dy
= ( 2/(1 − e^(−x))² ) ( ∫_0^x y e^(−2y) dy − e^(−x) ∫_0^x y e^(−y) dy )
= ( 2/(1 − e^(−x))² ) ( ( 1 − (2x + 1)e^(−2x) )/4 − e^(−x)( 1 − (x + 1)e^(−x) ) )*
= ( 1 − 4e^(−x) + 3e^(−2x) + 2xe^(−2x) ) / ( 2( 1 − e^(−x) )² ),

* using ∫ x e^(cx) dx = (e^(cx)/c²)(cx − 1).

3. Already derived earlier (the joint density of X_(1) and X_(3)).

4. We use the Jacobian transformation by first letting:

R = X_(3) − X_(1) and S = X_(1)

with the inverse transformation:

X_(1) = S and X_(3) = R + S.

The Jacobian of this transformation is given by:

J(r, s) = det [ ∂s/∂r  ∂s/∂s ; ∂(r + s)/∂r  ∂(r + s)/∂s ] = det [ 0  1 ; 1  1 ] = −1.

Thus, |J(r, s)| = 1 and

f_{R,S}(r, s) = 6 ( e^(−2s−(r+s)) − e^(−s−2(r+s)) ) = 6e^(−3s−r)( 1 − e^(−r) ), for 0 ≤ s < ∞ and 0 ≤ r < ∞,

and zero otherwise. Thus, the marginal density of R is obtained by integrating over all possible
values of S:

f_R(r) = ∫_0^∞ 6e^(−3s−r)( 1 − e^(−r) ) ds
= 6 ( 1 − e^(−r) ) e^(−r) [ −e^(−3s)/3 ]_0^∞
= 6 ( 1 − e^(−r) ) e^(−r)/3
= 2e^(−r)( 1 − e^(−r) ), for 0 ≤ r < ∞,

and zero otherwise.
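The range density f_R(r) = 2e^(−r)(1 − e^(−r)) has mean ∫ r f_R(r) dr = 2 − 2/4 = 3/2, which agrees with E[X_(3)] − E[X_(1)] = 11/6 − 1/3 and is easy to verify by simulation:

```python
import random

random.seed(11)
N = 200_000
ranges = []
for _ in range(N):
    xs = [random.expovariate(1.0) for _ in range(3)]
    ranges.append(max(xs) - min(xs))   # R = X_(3) - X_(1)

mean_r = sum(ranges) / N               # should be near 3/2
```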


 
Solution 1.42: [wk04Q5, Exercise, Schedule] Use the m.g.f. technique. Recall that if X ~ N(μ, σ²),
then

M_X(t) = e^(μt + σ²t²/2).

1. Let S = X₁ + X₂.

M_S(t) = E[e^((X₁+X₂)t)] = E[e^(X₁t)] E[e^(X₂t)]
= e^(t²/2) · e^(t²/2) = e^((2)t²/2),

which is the m.g.f. of a N(0, 2).

2. Let D = X₁ − X₂.

M_D(t) = E[e^((X₁−X₂)t)] = E[e^(X₁t)] E[e^(X₂(−t))]
= e^(t²/2) · e^((−t)²/2) = e^((2)t²/2),

which is the m.g.f. of a N(0, 2). Thus, D has the same distribution as S.

3. Now, assume that they are no longer independent but have the bivariate normal density:

f_{X₁,X₂}(x₁, x₂) = ( 1/( 2π√(1 − ρ²) ) ) exp( −( x₁² − 2ρx₁x₂ + x₂² )/( 2(1 − ρ²) ) ).

Using the Jacobian transformation technique, we find the joint distribution of S and D. From

S = X₁ + X₂ and D = X₁ − X₂

the inverse of this transformation is

X₁ = (S + D)/2 and X₂ = (S − D)/2,

which is derived by X₁ = S − X₂ and D = S − 2X₂, so X₂ = (S − D)/2 and X₁ = S − (S − D)/2 = (S + D)/2. Its
Jacobian is:

J(s, d) = det [ ∂((s + d)/2)/∂s  ∂((s + d)/2)/∂d ; ∂((s − d)/2)/∂s  ∂((s − d)/2)/∂d ]
= det [ 1/2  1/2 ; 1/2  −1/2 ] = −1/4 − 1/4 = −1/2.

Thus |J(s, d)| = 1/2. Therefore,

f_{S,D}(s, d) = ( 1/( 2π√(1 − ρ²) ) ) exp( −( ((s + d)/2)² − 2ρ((s + d)/2)((s − d)/2) + ((s − d)/2)² )/( 2(1 − ρ²) ) ) · (1/2)
= ( 1/( 4π√(1 − ρ²) ) ) exp( −( (s + d)² − 2ρ(s² − d²) + (s − d)² )/( 8(1 − ρ²) ) )
= ( 1/( 4π√(1 − ρ²) ) ) exp( −( (1 − ρ)s² + (1 + ρ)d² )/( 4(1 − ρ²) ) )
= ( 1/( 4π√(1 − ρ²) ) ) exp( −s²/( 4(1 + ρ) ) ) exp( −d²/( 4(1 − ρ) ) ).

Therefore, clearly we can write the density as a product of functions of s alone and d alone. S
and D are therefore independent.

4. We have that the p.d.f. of X is given by:

f_X(x) = ( β^α/Γ(α) ) x^(α−1) e^(−βx), if x > 0,

and zero otherwise.

(a) The transformation g(X) = 1/X is a monotonic decreasing function for x > 0, because
dg(x)/dx = d(1/x)/dx = −x^(−2) < 0 for x > 0. Hence, we can apply the transformation technique: with
Y = g(X) = 1/X, g^(−1)(y) = 1/y, and ∂g^(−1)(y)/∂y = ∂(1/y)/∂y = −y^(−2) < 0, support of Y: g(0⁺) = ∞,
g(∞) = 0, we have:

f_Y(y) = f_X( g^(−1)(y) ) | ∂g^(−1)(y)/∂y |
= ( β^α/Γ(α) ) (1/y)^(α−1) e^(−β/y) y^(−2)
= ( β^α/Γ(α) ) y^(−α−1) e^(−β/y),

for y > 0 and zero otherwise.

(b) The c.d.f. of the inverse gamma distribution, as a function of the c.d.f. of the gamma
distribution, is given by applying the CDF technique:

F_Y(y) = Pr(Y ≤ y) = Pr(1/X ≤ y) = Pr(X ≥ 1/y) = 1 − F_X(1/y).
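The independence of S and D in part 3 (despite X₁ and X₂ being correlated) can be spot-checked by simulating a correlated standard-normal pair; with correlation ρ, we expect Var(S) = 2(1 + ρ), Var(D) = 2(1 − ρ), and Cov(S, D) = 0:

```python
import random, math

random.seed(13)
rho, N = 0.5, 200_000     # illustrative correlation
S, D = [], []
for _ in range(N):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1, x2 = z1, rho * z1 + math.sqrt(1 - rho ** 2) * z2   # correlated N(0,1) pair
    S.append(x1 + x2)
    D.append(x1 - x2)

mS, mD = sum(S) / N, sum(D) / N
varS = sum((s - mS) ** 2 for s in S) / (N - 1)   # ~ 2*(1 + rho) = 3
varD = sum((d - mD) ** 2 for d in D) / (N - 1)   # ~ 2*(1 - rho) = 1
cov = sum((s - mS) * (d - mD) for s, d in zip(S, D)) / (N - 1)   # ~ 0
```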


Solution 1.43: [wk05Q14, Exercise, Schedule] The p.d.f. of a chi-squared distribution with one
degree of freedom is:

f_Y(y) = exp(−y/2)/√(2πy), if y > 0,

and zero otherwise. We need to prove that the moment generating function of Y is given by:

M_Y(t) = (1 − 2t)^(−1/2).

Using the substitution x = √( 2y(1/2 − t) ) (so that y = x²/(1 − 2t) and dy = (2x/(1 − 2t)) dx),
valid for t < 1/2, we have:

M_Y(t) = ∫_0^∞ e^(ty) f_Y(y) dy = ∫_0^∞ exp(ty) exp(−y/2)/√(2πy) dy
= ∫_0^∞ exp( −y(1/2 − t) )/√(2πy) dy
= ( 2/√(1 − 2t) ) ∫_0^∞ exp(−x²/2)/√(2π) dx*
= ( 2/√(1 − 2t) ) · (1/2)
= (1 − 2t)^(−1/2),

* using that ∫_0^∞ exp(−x²/2)/√(2π) dx is the integral of the p.d.f. of a standard normal distributed random variable
over the positive values of x. By the symmetry of the standard normal distribution about 0,
we have that this integral equals 1/2.
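Since Y = Z² for Z ~ N(0, 1) is chi-squared with one degree of freedom, the m.g.f. can be checked empirically at a single point t < 1/2:

```python
import random, math

random.seed(17)
t, N = 0.2, 200_000
# Y = Z^2 with Z ~ N(0,1) is chi-squared with 1 degree of freedom
est = sum(math.exp(t * random.gauss(0, 1) ** 2) for _ in range(N)) / N
target = (1 - 2 * t) ** -0.5          # (1 - 0.4)^(-1/2) ~= 1.2910
```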

Solution 1.44: [wk05Q15, Exercise, Schedule] We need to prove that:

t_(n−1) →(d) N(0, 1) as n → ∞.

This says that a t-distribution converges in distribution to a standard normal distribution as n →
∞. Here we cannot use the moment generating function, because it is not defined for a Student-t
distribution. Note that the definition of convergence in distribution is:
X_n converges in distribution to the random variable X as n → ∞ if and only if, for every x:

F_{X_n}(x) → F_X(x) as n → ∞.

This implies that one can use the cumulative distribution functions of the Student-t distribution and the
standard normal distribution to prove the convergence. However, these do not have a closed form
expression. Therefore, we will prove that the probability density function of a Student-t distribution
converges to the standard normal one as n → ∞. When the probability density function converges,
the cumulative distribution function must converge as well.


We have:

lim_{n→∞} f_{t|n}(x) = lim_{n→∞} [ Γ((n + 1)/2) / ( √(nπ) Γ(n/2) ) ] ( 1 + x²/n )^(−(n+1)/2)
= lim_{n→∞} [ √(n/2) / √(nπ) ] ( 1 + x²/n )^(−(n+1)/2)*
= lim_{n→∞} ( 1/√(2π) ) ( 1 + (x²/2)/(n/2) )^(−n/2 − 1/2)
= ( 1/√(2π) ) lim_{n→∞} [ ( 1 + (x²/2)/(n/2) )^(n/2) ]^(−1) · ( 1 + (x²/2)/(n/2) )^(−1/2)
= ( 1/√(2π) ) e^(−x²/2) · 1**,***
= ( 1/√(2π) ) e^(−x²/2),

which is the probability density function of a standard normal random variable, * using
lim_{n→∞} Γ((n + 1)/2) / ( √(n/2) Γ(n/2) ) = 1, ** using e^a = lim_{n→∞} ( 1 + a/n )^n, and *** using
lim_{n→∞} ( 1 + (x²/2)/(n/2) )^(−1/2) = 1.
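The convergence of the densities can be visualised numerically. A sketch comparing the Student-t density at n = 1000 with the standard normal density on a grid (note `math.lgamma` is used because `math.gamma` overflows for large arguments):

```python
import math

def t_pdf(x, n):
    # Student-t density with n degrees of freedom; lgamma avoids overflow
    c = math.exp(math.lgamma((n + 1) / 2) - math.lgamma(n / 2)) / math.sqrt(n * math.pi)
    return c * (1 + x * x / n) ** (-(n + 1) / 2)

normal_pdf = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Largest pointwise gap on the grid x = -4, -3.9, ..., 4 should already be tiny
diff = max(abs(t_pdf(x / 10, 1000) - normal_pdf(x / 10)) for x in range(-40, 41))
```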

Solution 1.45: [wk05Q16, Exercise, Schedule] i) Define the transformations:

F = (U/n₁)/(V/n₂), G = V.

ii) Determine the inverse of the transformations:

V = G, U = n₁ F V/n₂ = n₁ F G/n₂.

iii) Calculate the absolute value of the Jacobian:

J = det [ ∂U/∂F  ∂U/∂G ; ∂V/∂F  ∂V/∂G ] = det [ g n₁/n₂  f n₁/n₂ ; 0  1 ] = g n₁/n₂.

iv) Determine the joint probability density function of F and G:

f_{F,G}(f, g) = |J| f_{U,V}(u, v) = |J| f_U(u) f_V(v)*
= ( n₁ g/n₂ ) · ( u^((n₁−2)/2) exp(−u/2) / ( 2^(n₁/2) Γ(n₁/2) ) ) · ( v^((n₂−2)/2) exp(−v/2) / ( 2^(n₂/2) Γ(n₂/2) ) )
= ( n₁ g/n₂ ) · ( ( f n₁ g/n₂ )^((n₁−2)/2) exp( −f n₁ g/(2n₂) ) / ( 2^(n₁/2) Γ(n₁/2) ) ) · ( g^((n₂−2)/2) exp(−g/2) / ( 2^(n₂/2) Γ(n₂/2) ) )**
= ( n₁ ( f n₁ )^((n₁−2)/2) / ( n₂^(n₁/2) 2^(n₁/2) Γ(n₁/2) ) ) · ( g^((n₁+n₂−2)/2) / ( 2^(n₂/2) Γ(n₂/2) ) ) exp( −g ( 1/2 + f n₁/(2n₂) ) )***

* using independence between U and V, ** using the inverse transformation determined in step ii), and
*** using exp(−ga) exp(−gb) = exp(−g(a + b)) and a^b a^c = a^(b+c).


v) Calculate the marginal distribution of F by integrating over the other variable:

f_F(f) = ∫_0^∞ f_{F,G}(f, g) dg
= ( n₁ ( f n₁ )^((n₁−2)/2) / ( 2^(n₂/2) Γ(n₂/2) n₂^(n₁/2) 2^(n₁/2) Γ(n₁/2) ) ) ∫_0^∞ g^((n₁+n₂−2)/2) exp( −g ( 1/2 + f n₁/(2n₂) ) ) dg
= ( n₁ ( f n₁ )^((n₁−2)/2) / ( 2^(n₂/2) Γ(n₂/2) n₂^(n₁/2) 2^(n₁/2) Γ(n₁/2) ) ) ( 2n₂/(n₂ + f n₁) )^((n₁+n₂−2)/2) ( 2n₂/(n₂ + f n₁) ) ∫_0^∞ x^((n₁+n₂−2)/2) exp(−x) dx*
= ( n₁ ( f n₁ )^((n₁−2)/2) / ( 2^((n₁+n₂)/2) Γ(n₁/2) Γ(n₂/2) n₂^(n₁/2) ) ) ( 2n₂/(n₂ + f n₁) )^((n₁+n₂)/2) Γ( (n₁ + n₂)/2 )**
= ( Γ((n₁ + n₂)/2) / ( Γ(n₁/2) Γ(n₂/2) ) ) f^((n₁−2)/2) n₁^(n₁/2) n₂^(n₂/2) / ( n₂ + f n₁ )^((n₁+n₂)/2)
= n₁^(n₁/2) n₂^(n₂/2) ( Γ((n₁ + n₂)/2) / ( Γ(n₁/2) Γ(n₂/2) ) ) f^(n₁/2 − 1) / ( n₂ + f n₁ )^((n₁+n₂)/2),

* using the substitution x = ( 1/2 + f n₁/(2n₂) ) g, so that g = ( 2n₂/(n₂ + f n₁) ) x and
dg = ( 2n₂/(n₂ + f n₁) ) dx, and ** using Γ(α) = ∫_0^∞ x^(α−1) exp(−x) dx.
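A basic sanity check on the derived F-density is that it integrates to 1; a sketch with illustrative degrees of freedom n₁ = 3, n₂ = 5, using a midpoint rule on a truncated range (the neglected tail is negligible for these parameters):

```python
import math

n1, n2 = 3, 5    # illustrative degrees of freedom
const = (math.exp(math.lgamma((n1 + n2) / 2) - math.lgamma(n1 / 2) - math.lgamma(n2 / 2))
         * n1 ** (n1 / 2) * n2 ** (n2 / 2))
f_pdf = lambda f: const * f ** (n1 / 2 - 1) / (n2 + f * n1) ** ((n1 + n2) / 2)

n, hi = 400_000, 400.0
h = hi / n
total = sum(f_pdf((i + 0.5) * h) for i in range(n)) * h   # should be close to 1
```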

Solution 1.46: [wk04Q6, Exercise, Schedule]

I. C
A t-distribution is obtained as a standard normal r.v. divided by the square root of a chi-squared
r.v. divided by its degrees of freedom.
We have Z₁ + Z₂ ~ N(0, 2), i.e., (Z₁ + Z₂)/√2 ~ N(0, 1) (see lecture notes).
For a chi-squared distribution we have the m.g.f.: M_{V_i}(t) = (1 − 2t)^(−r_i/2) for i = 1, 2. Hence,

M_{V₁}(t) M_{V₂}(t) = (1 − 2t)^(−r₁/2) (1 − 2t)^(−r₂/2) = (1 − 2t)^(−(r₁+r₂)/2),

which is the m.g.f. of a chi-squared distribution with r₁ + r₂ degrees of freedom. Hence, V₁ + V₂
has a chi-squared distribution with r₁ + r₂ degrees of freedom.

II. E
See lecture notes/previous question.

III. C
We have:

Z₁ + Z₂ ~ N( 0, Var(Z₁) + Var(Z₂) + 2Cov(Z₁, Z₂) )
~ N( 0, 1 + 1 + 2ρ·1·1 )
~ N( 0, 2 + 2ρ )
~ N( 0, 2(1 + ρ) ).


Thus Var( (Z₁ + Z₂)/√2 ) = 2(1 + ρ)/2 = 1 + ρ ≠ 1 (unless ρ = 0).

IV. D
We have M_{X_k}(t) = (1 − t/λ)^(−1) for k = 1, ..., n. Let Y_k = X_k/n; then we have:
M_{Y_k}(t) = M_{X_k/n}(t) = M_{X_k}(t/n) = ( 1 − t/(nλ) )^(−1) for k = 1, ..., n.
Using the m.g.f. technique we determine the distribution of the sample mean by the m.g.f.:

M_{X̄}(t) = M_{Y₁}(t) · ... · M_{Y_n}(t) = ( 1 − t/(nλ) )^(−n),

which is the m.g.f. of a Gamma distribution with parameters n and nλ.

V. D
Use the m.g.f. technique. M_{X_k}(t) = exp( λ(exp(t) − 1) ) for k = 1, ..., n. We have:

M_S(t) = M_{X₁}(t) · ... · M_{X_n}(t)
= Π_{k=1}^n exp( λ(exp(t) − 1) )
= ( exp( λ(exp(t) − 1) ) )^n
= exp( nλ (exp(t) − 1) ),

which is the m.g.f. of a Poisson r.v. with mean nλ.

VI. B
We have:

Pr( X_(20) > 1 ) = 1 − Pr( X_(20) ≤ 1 )
= 1 − ( F_X(1) )^20
= 1 − ( 1 − exp(−2) )^20.

VII. D
We have:

F_X(x) = ∫_0^x f_X(u) du = ∫_0^x 2u du = [ u² ]_0^x = x², if 0 < x < 1;
with F_X(x) = 0 if x ≤ 0 and F_X(x) = 1 if x ≥ 1.

Let U = X_(n); then:

f_U(u) = n f_X(u) ( F_X(u) )^(n−1) = n · 2u · u^(2(n−1)),

for u ∈ [0, 1] and zero otherwise.
Thus we have:

E[U] = ∫ u f_U(u) du = ∫_0^1 u · n · 2u · u^(2(n−1)) du
= 2n ∫_0^1 u^(2n) du
= 2n [ u^(2n+1)/(2n + 1) ]_0^1
= 2n/(2n + 1).


VIII. E
We have that X ~ U(8.5, 10.5); then f_X(x) = 1/2 if x ∈ [8.5, 10.5] and zero otherwise, and we
have:

F_X(x) = 0, if x < 8.5; (x − 8.5)/2, if 8.5 ≤ x ≤ 10.5; 1, if x > 10.5.

Then we have: Pr(loser will not break world record) = Pr( X_(8) ≥ 9.9 ) = 1 − Pr( X_(8) < 9.9 ) =
1 − ( F_X(9.9) )^8 = 1 − 0.7^8.
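The formula E[X_(n)] = 2n/(2n + 1) from item VII can be verified by simulation; since F(x) = x² on (0, 1), a draw from this density is the square root of a uniform, and below n = 5 is used as an illustrative sample size (so the target is 10/11):

```python
import random

random.seed(19)
n_obs, N = 5, 200_000
# X with density f(x) = 2x on (0,1) can be sampled as sqrt(U), since F(x) = x^2
mean_max = sum(max(random.random() ** 0.5 for _ in range(n_obs))
               for _ in range(N)) / N          # ~ 2n/(2n+1) = 10/11
```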

Module 2

Parameter Estimation

2.1 Estimation Techniques


Exercise 2.1: [wk05Q2, Solution, Schedule] Assume that X₁, X₂, ..., X_n is a random sample from a
population with density:

f_X(x|θ) = 2(θ − x)/θ², for 0 < x < θ; and 0, otherwise.

Find an estimator for θ using the method of moments.

Exercise 2.2: [wk05Q5, Solution, Schedule] Consider N independent random variables, each having a binomial distribution with parameters n = 3 and θ, so that:
Pr(Xi = k) = C(3, k) θ^k (1 − θ)^(3−k),
for i = 1, 2, . . . , N and k = 0, 1, 2, 3, and zero otherwise. Assume that of these N random variables n0 take the value 0, n1 take the value 1, n2 take the value 2, and n3 take the value 3, with N = n0 + n1 + n2 + n3.

1. Use maximum likelihood to develop a formula to estimate θ.


2. Assume that when you go to the races that you always bet on 3 races. You have taken a random
sample of your last 20 visits to the races and find that you had no winning bets on 11 visits, one
winning bet on 7 visits, and two winning bets on 2 visits. Estimate the probability of winning
on any single bet.

Exercise 2.3: [wk05Q6, Solution, Schedule] Assume that we have n independent observations y⊤ = [y1, y2, . . . , yn], each with the Pareto p.d.f. given by:
f_{Yi|θ}(yi|θ; A) = θ A^θ / yi^(θ+1),
where 0 < θ < ∞ and 0 < A < yi < ∞, and zero otherwise. You are now told the value of A, leaving θ as the only unknown parameter.

1. Explain why the likelihood function L(θ; y, A) can be written as:
θ^n A^(nθ) / G^(n(θ+1)),
where G = (y1 y2 · · · yn)^(1/n) is the geometric mean of the observations.


2. Explain why we can express the relationship between the posterior distribution, prior distribution and likelihood function as follows:
π(θ|y; A) ∝ f_{Y|θ}(y|θ; A) π(θ).

3. We assume our prior p.d.f. for θ is such that log(θ) is uniformly distributed, implying:
π(θ) ∝ 1/θ, 0 < θ < ∞.
Show that the posterior p.d.f. for θ is:
π(θ|y; A) ∝ θ^(n−1) e^(−anθ),
where a = log(G/A).

4. Explain why the posterior p.d.f. is given by:
π(θ|y; A) = ((an)^n / Γ(n)) θ^(n−1) e^(−anθ), 0 < θ < ∞.

5. Calculate the Bayes estimator of θ, θ̂_B.

Exercise 2.4: [wk05Q9, Solution, Schedule] Suppose there are n realizations xi, where i = 1, 2, . . . , n. We know that xi|p ∼ Ber(p) and p ∼ U(0, 1).

1. Find the Bayesian estimator for p.

2. Find the Bayesian estimator for p(1 − p).

3. Why might we be interested in the Bayesian estimator for p(1 − p)? Hint: consider the case when n is large.

Exercise 2.5: [wk05Q12, Solution, Schedule] Suppose that X follows a geometric distribution, with
probability mass function:

Pr(X = k) = p (1 − p)^(k−1), if k = 1, 2, . . ., and zero otherwise,

and assume a sample of size n.

1. Find the method of moments estimator of p.

2. Find the maximum likelihood estimator of p.

Exercise 2.6: [wk05Q13, Solution, Schedule] The Pareto distribution is often used in economics as a model for a density function with a slowly decaying tail. Its density is given by:
f_X(x|θ) = θ x0^θ x^(−θ−1), x ≥ x0, θ > 1,
and zero otherwise. Assume that x0 > 0 is given and that x1, . . . , xn is a sample from this distribution.

1. Find the method of moments estimate of θ.

2. Find the maximum likelihood estimator of θ.


2.2 Limit Theorems


Exercise 2.7: [wk05Q1, Solution, Schedule]
An insurance company has a portfolio of 100 insurance contracts. The company's losses on these contracts are independent and identically distributed. Each loss X has an exponential distribution with mean 5,000, and each policyholder pays a premium of 5,050. Notice that each policyholder pays an amount larger than their expected loss. Determine the probability that the aggregate loss of the
insurance company will exceed the total premiums collected. Use the normal approximation (Central
Limit Theorem).

Exercise 2.8: [wk05Q3, Solution, Schedule] Let X1, X2, . . . be a sequence of independent random variables with common mean E[Xk] = μ but different variances Var(Xk) = σk². Suppose:
(1/n²) ∑_{k=1}^n σk² → 0, as n → ∞.
Prove that X̄ → μ in probability.

Exercise 2.9: [wk05Q4, Solution, Schedule] A drunkard executes a random walk in the following
manner: each minute, he takes a step north or south, with probability 1/2 each, and his successive step
directions are independent. Each step he takes is of length 50 cm. Use the central limit theorem to
approximate the probability distribution of his location after one hour. Where is he most likely to be?

Exercise 2.10: [wk05Q7, Solution, Schedule] Using moment generating functions:

1. show that as n → ∞, p → 0 and np → λ, the binomial distribution with parameters n and p tends to the Poisson distribution.

2. show that as α → ∞, the gamma distribution with parameters α and β, properly standardised,
tends to the standard Normal distribution.

Exercise 2.11: [wk05Q8, Solution, Schedule] A random variable X with p.d.f.


f_X(x) = 1/(π(1 + x²)), for −∞ < x < ∞,

is said to have a Cauchy distribution. It is well known that the mean of a Cauchy distribution does not exist. Furthermore, suppose X1, X2, . . . , Xn are n independent Cauchy random variables; then it can be shown that the sample mean:
X̄n = (1/n) ∑_{k=1}^n Xk
also has a Cauchy distribution.¹ Deduce from these results that the Cauchy distribution violates the law of large numbers. Explain why.

Exercise 2.12: [wk05Q10, Solution, Schedule] Let X1 , X2 , . . . be independent random variables with
common density:
f_X(x) = α x^(−(α+1)), for x > 1,
¹ Proofs of these results are not expected for this course.


where α > 0. Define a new sequence of random variables:
Yn = X_(n) / n^(1/α),
where X_(n) is the highest observation of the n i.i.d. r.v.s X1, . . . , Xn.
Show that Yn converges in distribution as n → ∞ and find the limiting distribution.

Exercise 2.13: [wk05Q11, Solution, Schedule] (Problem from [JR]) Suppose that X1 , X2 , . . . , X20 are
independent random variables with density functions:

f_X(x) = 2x, for 0 ≤ x ≤ 1,

and zero otherwise. Let S = X1 + . . . + X20 . Use the central limit theorem to approximate

Pr(S ≤ 10).

2.3 Evaluating Estimators


Exercise 2.14: [wk06Q1, Solution, Schedule] Let X1, X2, . . . , Xn be a random sample from an exponential distribution with:
f_X(x|λ) = λ e^(−λx), x > 0,
and zero otherwise, where λ > 0. Find the value of a so that the interval from 0 to a/X̄ provides a 95% confidence interval for the parameter λ.

Exercise 2.15: [wk06Q2, Solution, Schedule] Consider random sampling from a normal distribution with mean μ and variance σ². Derive a 100(1 − α)% confidence interval for σ² when μ is known.

Exercise 2.16: [wk06Q3, Solution, Schedule] This exercise aims to show that if we sample from a
continuous distribution, a pivotal quantity always exists. Let X1 , X2 , . . . , Xn be a random sample from
a continuous distribution f_X(x|θ). Denote the corresponding cumulative distribution function by:
F_X(x|θ) = ∫_{−∞}^x f_X(z|θ) dz.

(a) Show that F_X(X|θ) ∼ U(0, 1).


Hint: Show that Pr(F_X(X|θ) ≤ x) = x using the quantile function (inverse of the c.d.f.), then explain why x (representing a probability, taking values between 0 and 1) would have this distribution.

(b) Show that W = log(1/F_X(X|θ)) has an exponential distribution with mean 1. To do so, first find the c.d.f. of W.
(c) From (b), deduce that ∑_{k=1}^n log(1/F_X(Xk|θ)) has a Gamma distribution. Specify its parameters.

(d) Use (c) to prove that there will always be a pivotal quantity when sampling from a continuous
distribution.

Exercise 2.17: [wk06Q4, Solution, Schedule] (modified based on a past Institute of Actuaries exam.)
Let X1, X2, . . . , Xn denote a random sample from a Gamma(3, λ) distribution and let X̄ be the sample mean.


(a) Describe the distribution of the sample mean X̄.

(b) Use (a) to construct a lower 95% confidence interval for λ, of the form (0, U).

(c) Use (a) to construct an upper 95% confidence interval for λ, of the form (L, ∞).

(d) Use (a) to construct a 95% confidence interval for λ, of the form (L, U), where L and U are not necessarily equal to those found in (b) and (c).

(e) Evaluate the intervals in (b), (c) and (d) in the case for which the total of a random sample of 20 observations yielded a value of ∑_{k=1}^{20} xk = 98.2.

Exercise 2.18: [wk06Q5, Solution, Schedule] A local health club advertises that its members lose at
least 10 pounds on the average during a 30-day weight loss programme. After receiving a number
of complaints from people who were enticed to join the club, the Better Business Bureau sends out a
representative to the club to check out the claim. The representative sampled the following nine (9)
people who are enrolled in the program:

Person    Before-Weight    After-Weight    Difference
1         157              150             7
2         174              167             7
3         198              187             11
4         205              198             7
5         147              146             1
6         165              153             12
7         212              199             13
8         169              171             -2
9         158              156             2
∑ xi      1,585            1,527           58
∑ xi²     283,457          262,465         590

The representative of Better Business Bureau reported its findings in terms of a confidence interval.
Construct the appropriate 95% confidence interval for the average weight loss for participants in the
programme.

Exercise 2.19: [wk06Q6, Solution, Schedule] (Past Institute of Actuaries Exam Question) Independent random samples of size n1 and n2 are taken from the normal populations N(μ1, σ1²) and N(μ2, σ2²). Let the sample means be X̄1 and X̄2 and the sample variances be S1² and S2². You may assume that X̄l and Sl², l = 1, 2, are independent and distributed as follows:
X̄k ∼ N(μk, σk²/nk) and (nk − 1)Sk²/σk² ∼ χ²(nk − 1), for k = 1, 2.

(a) It is required to construct a confidence interval for (μ1 − μ2), the difference between the population means.
i. Suppose that σ1² and σ2² are known. State the distribution of X̄1 − X̄2 and write down a suitable pivotal quantity together with its sampling distribution. Hence, write down a 95% confidence interval for (μ1 − μ2).
ii. Suppose that σ1² and σ2² are unknown but are known to be equal. State the definition of a t_k variable in terms of independent N(0, 1) and χ²_k variables and use it to develop a suitable pivotal quantity. Hence, write down a 95% confidence interval for (μ1 − μ2).


(b) It is required to construct a confidence interval for σ1²/σ2², the ratio of the population variances. State the definition of an F_{k,l} variable in terms of independent χ²_k and χ²_l variables and use it to develop a suitable pivotal quantity. Hence, obtain a 90% confidence interval for σ1²/σ2².
(c) A regional newspaper included a consumer rights article comparing the cost of shopping in
corner shops and supermarkets. The researchers investigated the price of a standard se-
lection of household goods in a sample of 10 corner shops selected at random from the region,
and in a sample of 10 supermarkets selected at random from the region. The data yielded the
following values:
Sample Mean Sample S.D.
Corner Shops 22.55 1.22
Supermarkets 19.72 0.96
i. Use the result in part (a)(ii) to calculate a 95% confidence interval for (μ1 − μ2), the difference between the population means (1 = corner shops, 2 = supermarkets).
ii. Use your result in part (b) to calculate a 90% confidence interval for σ1²/σ2², the ratio of the population variances. Use this result to comment briefly on the assumption of equal variances required for the confidence interval in part (c)(i).

Exercise 2.20: [wk06Q7, Solution, Schedule] (IoA, Subject CT3, April 2005, No.6) In a survey
conducted by a mail order company a random sample of 200 customers yielded 172 who indicated
that they were highly satisfied with the delivery time of their orders.
Calculate an approximate 95% confidence interval for the proportion of the company's customers who are highly satisfied with delivery times.

Exercise 2.21: [wk06Q8, Solution, Schedule] (IoA, Subject CT3, April 2005, No.8) The distribution
of claim size under a certain class of policy is modelled as a normal random variable, and previous
years' records indicate that the standard deviation is 120.

(a) Calculate the width of a 95% confidence interval for the mean claim size if a sample of size 100
is available.
(b) Determine the minimum sample size required to ensure that a 95% confidence interval for the
mean claim size is of width at most 10.
(c) Comment briefly on the comparison of the confidence intervals in (a) and (b) with respect to
widths and sample sizes used.

Exercise 2.22: [wk06Q9, Solution, Schedule] (IoA, Subject CT3, April 2005, No.12 (partial))

1. A random variable Y has a Poisson distribution with parameter λ, but there is a restriction that zero counts cannot occur. The distribution of Y in this case is referred to as the zero-truncated Poisson distribution.
(a) Show that the probability function of Y is given by:
p_Y(y) = λ^y e^(−λ) / (y! (1 − e^(−λ))), for y = 1, 2, 3, . . . ,
and zero otherwise.



(b) Show that E[Y] = λ / (1 − e^(−λ)).
2. Answer the following.

(a) Let y1, . . . , yn denote a random sample from the zero-truncated Poisson distribution. Show that the maximum likelihood estimate of λ may be determined by the solution to the following equation:
λ / (1 − e^(−λ)) − ȳ = 0,
and deduce that the maximum likelihood estimate is the same as the method of moments estimate.
(b) Obtain an expression for the Cramér-Rao lower bound for the variance of an unbiased estimator of λ.

Exercise 2.23: [wk06Q10, Solution, Schedule] (IoA, Subject 101, April 2004, No.12) For the estimation of a Bernoulli probability p = Pr(success), a series of n independent trials is performed and X represents the number of successes observed.

(a) Write down the likelihood function L(p; x) and show that the maximum likelihood estimator (MLE) of p is p̂ = X/n.

(b) Answer the following.

1. Determine the Cramér-Rao lower bound for the estimation of p.
2. Show that the variance of the MLE is equal to the Cramér-Rao lower bound.
3. Write down an approximate sampling distribution for p̂ valid for large n.

(c) In order to develop an approximate 95% confidence interval for p for large n, the following pivotal quantity is to be used:
(p̂ − p) / √(p(1 − p)/n) ∼ N(0, 1).
Assuming that this pivotal quantity is monotonic in p, show that rearrangement of the inequality:
−1.96 < (p̂ − p) / √(p(1 − p)/n) < 1.96
leads to a quadratic inequality in p, and hence determine an approximate 95% confidence interval for p.

(d) A simpler and more widely used approximate confidence interval is obtained by using the following pivotal quantity:
(p̂ − p) / √(p̂(1 − p̂)/n) ∼ N(0, 1).
Determine the resulting approximate 95% confidence interval using this.

(e) In two separate applications the following data were observed:


(a) 4 successes out of 10 trials


(b) 80 successes out of 200 trials

In each case calculate the two approximate confidence intervals from parts (c) and (d) and
comment briefly on your answers.

Exercise 2.24: [wk06Q11, Solution, Schedule] A random sample of 16 values, x1 , x2 , . . . , x16 , was
drawn from a normal population and gave the following summary statistics:
∑_{i=1}^{16} xi = 51.2 and ∑_{i=1}^{16} xi² = 243.19.

Calculate a 95% confidence interval for the population mean.

Exercise 2.25: [wk06Q12, Solution, Schedule] Consider a random sample of size n from a normal distribution N(μ, σ²) and let S² denote the sample variance.

1. State the sampling distribution of (n − 1)S²/σ² and specify an approximate sampling distribution for this expression when n is large.

2. For n = 101, calculate an approximate value for the probability that S² exceeds σ² by more than a factor of 10%, i.e. Pr(S² > 1.1σ²).

Exercise 2.26: [wk06Q13, Solution, Schedule] A group of 500 insurance policies gave rise to a total
of 83 claims during the last year. Assuming a Poisson model for the occurrence of claims, calculate
an approximate 95% confidence interval for λ, the claim rate per policy per year.

Exercise 2.27: [wk06Q14, Solution, Schedule] Let Xi , i = 1, . . . , n denote a random sample of size
n from a population with a uniform distribution on the interval (0, θ). Let X_(n) = max{X1, . . . , Xn} and define U = (1/θ) X_(n).

1. Show that U has distribution function:
F_U(u) = 0, if u < 0;
F_U(u) = u^n, if 0 ≤ u ≤ 1;
F_U(u) = 1, if u > 1.

2. Because the distribution of U does not depend on θ, U is a pivotal quantity. Find the 95% lower confidence bound for θ.


Solutions
Solution 2.1: [wk05Q2, Exercise, Schedule] To find an estimator for θ using the method of moments, set E[X] = X̄. We then have:
X̄ = E[X] = ∫ x f_X(x) dx
   = ∫_0^θ x · 2(θ − x)/θ² dx
   = (2/θ²) ∫_0^θ (θx − x²) dx
   = (2/θ²) [θx²/2 − x³/3]_0^θ
   = (2/θ²) (θ³/2 − θ³/3)
   = θ/3.
Hence, the method of moments estimate is:
θ̂ = 3X̄.

Solution 2.2: [wk05Q5, Exercise, Schedule] Consider N independent random variables, each having a binomial distribution with parameters n = 3 and θ, so that Pr(Xi = k) = C(3, k) θ^k (1 − θ)^(3−k), for i = 1, 2, . . . , N and k = 0, 1, 2, 3. Assume that of these N random variables n0 take the value 0, n1 take the value 1, n2 take the value 2, and n3 take the value 3, with N = n0 + n1 + n2 + n3.

1. The likelihood function is given by:
L(θ; x) = ∏_{i=1}^N f_X(xi)
        = [C(3, 0) (1 − θ)³]^n0 · [C(3, 1) θ (1 − θ)²]^n1 · [C(3, 2) θ² (1 − θ)]^n2 · [C(3, 3) θ³]^n3.
The log-likelihood function is given by:
ℓ(θ; x) = log(L(θ; x))
        = n0 [log(C(3, 0)) + 3 log(1 − θ)] + n1 [log(C(3, 1)) + log(θ) + 2 log(1 − θ)]
        + n2 [log(C(3, 2)) + 2 log(θ) + log(1 − θ)] + n3 [log(C(3, 3)) + 3 log(θ)],
* using log(a · b) = log(a) + log(b) and log(a^c · b) = c log(a) + log(b).
Then, take the FOC of ℓ(θ; x):
∂ℓ(θ; x)/∂θ = −3n0/(1 − θ) + n1/θ − 2n1/(1 − θ) + 2n2/θ − n2/(1 − θ) + 3n3/θ
            = (n1 + 2n2 + 3n3)/θ − (3n0 + 2n1 + n2)/(1 − θ).


Equating this to zero we obtain:
(n1 + 2n2 + 3n3)/θ − (3n0 + 2n1 + n2)/(1 − θ) = 0,
or, equivalently:
(n1 + 2n2 + 3n3)(1 − θ) = (3n0 + 2n1 + n2) θ.
Thus the maximum likelihood estimator for θ is:
θ̂ = (n1 + 2n2 + 3n3) / [(n1 + 2n2 + 3n3) + (3n0 + 2n1 + n2)]
  = (n1 + 2n2 + 3n3) / (3n0 + 3n1 + 3n2 + 3n3)
  = (n1 + 2n2 + 3n3) / (3N),
* using: a/(1 − a) = b/c ⇔ 1/(1/a − 1) = b/c ⇔ 1/a − 1 = c/b ⇔ 1/a = (c + b)/b ⇔ a = b/(b + c).
2. We have:
N = 20, n0 = 11, n1 = 7, n2 = 2, n3 = 0.
Thus the ML estimate for θ is given by:
θ̂ = (n1 + 2n2 + 3n3) / (3N) = (7 + 4)/60 = 11/60 ≈ 0.1833.
Thus, the estimated probability of winning any single bet is 0.1833.
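As a quick numerical check (a sketch, not part of the original solution), the closed-form MLE above can be evaluated directly from the observed counts:

```python
# Counts from the 20 race visits: n0 visits with 0 winning bets, etc.
n0, n1, n2, n3 = 11, 7, 2, 0
N = n0 + n1 + n2 + n3  # 20 visits, 3 bets each

# MLE derived above: theta_hat = (n1 + 2 n2 + 3 n3) / (3N)
theta_hat = (n1 + 2 * n2 + 3 * n3) / (3 * N)
print(round(theta_hat, 4))  # 0.1833
```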

Solution 2.3: [wk05Q6, Exercise, Schedule]

1. The likelihood function is given by:
L(θ; y, A) = ∏_{i=1}^n f_Y(yi) = ∏_{i=1}^n θ A^θ / yi^(θ+1)
           = θ^n A^(nθ) / ∏_{i=1}^n yi^(θ+1)
           = θ^n A^(nθ) / (∏_{i=1}^n yi^(1/n))^(n(θ+1))
           = θ^n A^(nθ) / G^(n(θ+1)).
2. In the lecture we have seen that:
π(θ|y; A) = f_{θ|Y}(θ|y; A)
          = f_{Y|θ}(y|θ; A) π(θ) / ∫ f_{Y|θ}(y|θ; A) π(θ) dθ   (*)
          = f_{Y|θ}(y|θ; A) π(θ) / f_Y(y; A)   (**)
          ∝ f_{Y|θ}(y|θ; A) π(θ),   (***)
* using Bayes' formula: Pr(Ai|B) = Pr(B|Ai) Pr(Ai) / ∑_j Pr(B|Aj) Pr(Aj), where the sets Ai, i = 1, . . . , n, form a complete partition of the sample space;
** using the law of total probability: Pr(A) = ∑_i Pr(A|Bi) Pr(Bi) if Bi, i = 1, . . . , n, is a complete partition of the sample space;
*** using that f_Y(y; A) is, given the data, a known constant.

3. We have that the posterior density is given by:
π(θ|y; A) = f_{θ|Y}(θ|y; A)
          ∝ f_{Y|θ}(y|θ; A) π(θ)
          = L(θ; y, A) π(θ)
          ∝ L(θ; y, A) · 1/θ
          = θ^(n−1) A^(nθ) / G^(n(θ+1))
          = θ^(n−1) (A/G)^(nθ) · 1/G^n   (*)
          ∝ θ^(n−1) (A/G)^(nθ)   (**)
          = θ^(n−1) exp(nθ log(A/G))
          = θ^(n−1) exp(−nθ log(G/A))
          = θ^(n−1) exp(−naθ),
* using independence between f_{Y|θ}(yi|θ; A) and f_{Y|θ}(yj|θ; A) for i ≠ j;
** using that 1/G^n is a known constant.

4. We have that π(θ|y; A) ∝ θ^(n−1) exp(−naθ) or, equivalently, there exists some constant c < ∞ for which π(θ|y; A) = c θ^(n−1) exp(−naθ). We need to determine the constant c. We know that ∫_0^∞ π(θ|y; A) dθ = 1, because otherwise it would not be a posterior density.
Given this observation, we compare c θ^(n−1) exp(−naθ) with the p.d.f. of X ∼ Gamma(αx, βx), which is given by:
f_X(x) = (βx^αx / Γ(αx)) x^(αx−1) e^(−βx · x).
Now substitute x = θ, αx = n, βx = an and c = βx^αx / Γ(αx) = (an)^n / Γ(n). Then we have the density of a Gamma(n, an) distribution. Hence, the posterior density is given by:
π(θ|y; A) = ((an)^n / Γ(n)) θ^(n−1) e^(−anθ), for 0 < θ < ∞,
and zero otherwise.


5. The Bayesian estimator of θ is the expected value of the posterior. The posterior is a Z ∼ Gamma(n, an) distribution, with E[Z] = n/(an). Thus:
θ̂_B = E[θ|y; A] = n/(na) = 1/a.
Thus the Bayesian estimator of θ is 1/a.
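A small Monte Carlo sketch illustrates that the estimator 1/a, with a = log(G/A), recovers the parameter for a large sample; the values θ = 3, A = 2 and the sample size are arbitrary illustrative choices (`random.paretovariate(b)` draws a Pareto variate with shape b and scale 1, which we rescale by A):

```python
import math
import random

random.seed(1)
theta_true, A, n = 3.0, 2.0, 100_000

# Pareto(theta) observations with lower bound A.
ys = [A * random.paretovariate(theta_true) for _ in range(n)]

# a = log(G/A) is the sample mean of log(y_i/A), G the geometric mean.
a = sum(math.log(y / A) for y in ys) / n

bayes_est = 1 / a  # mean of the Gamma(n, an) posterior
print(bayes_est)   # close to theta_true
```

Since E[log(Y/A)] = 1/θ for a Pareto(θ) variable, 1/a is also consistent in the frequentist sense.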

Solution 2.4: [wk05Q9, Exercise, Schedule] There are n realizations xi, where i = 1, 2, . . . , n. We know that xi|p ∼ Ber(p) and p ∼ U(0, 1). We are asked to find the Bayesian estimators for p and p(1 − p). Since the n random variables are conditionally independent given p:
f(x1, x2, . . . , xn|p) = ∏_{i=1}^n f(xi|p) = p^(∑ xi) (1 − p)^(n − ∑ xi).
Since the prior density of p equals one on (0, 1):
f(x1, x2, . . . , xn, p) = p^(∑ xi) (1 − p)^(n − ∑ xi).
Then we can compute the joint density of x1, . . . , xn:
f(x1, x2, . . . , xn) = ∫_0^1 p^(∑ xi) (1 − p)^(n − ∑ xi) dp = Γ(∑ xi + 1) Γ(n − ∑ xi + 1) / Γ(n + 2).

1. Method 1: Hence we can obtain the posterior density:
f(p|x1, x2, . . . , xn) = f(x1, x2, . . . , xn, p) / f(x1, x2, . . . , xn)
 = (Γ(n + 2) / (Γ(∑ xi + 1) Γ(n + 1 − ∑ xi))) p^(∑ xi) (1 − p)^(n − ∑ xi),
which is the probability density function of Beta(∑ xi + 1, n + 1 − ∑ xi).
Method 2: Observe that f(p|x1, x2, . . . , xn) is proportional to the p.d.f. of a Beta distribution and use this to identify its distribution. Hence, we have f(p|x1, x2, . . . , xn) = f_Y(p), where Y ∼ Beta(∑ xi + 1, n + 1 − ∑ xi).
The Bayesian estimator for p will thus be:
p̂_B = E[p|X] = (∑ xi + 1) / (n + 2).
(See Formulae and Tables page 13.)


2. Now we wish to find a Bayesian estimator for p(1 − p). Using the same idea:
E[p(1 − p)|X] = ∫_0^1 p(1 − p) f(p|x1, x2, . . . , xn) dp
 = (Γ(n + 2) / (Γ(∑ xi + 1) Γ(n + 1 − ∑ xi))) ∫_0^1 p^(1 + ∑ xi) (1 − p)^(n + 1 − ∑ xi) dp
 = (Γ(n + 2) / (Γ(∑ xi + 1) Γ(n + 1 − ∑ xi))) · Γ(∑ xi + 2) Γ(n − ∑ xi + 2) / Γ(n + 4)   (*)
 = (∑ xi + 1)(n + 1 − ∑ xi) / ((n + 3)(n + 2)),   (**)
* using the Beta function: B(α, β) = Γ(α)Γ(β)/Γ(α + β) = ∫_0^1 x^(α−1) (1 − x)^(β−1) dx, where α = ∑ xi + 2, β = n − ∑ xi + 2 and α + β = n + 4;
** using the Gamma function recursion: Γ(α) = (α − 1) Γ(α − 1).
Alternatively, using the first two moments of the beta distribution (see Formulae and Tables page 13), with a = ∑ xi + 1 and b = n + 1 − ∑ xi:
E[p(1 − p)|X] = E[p|X] − E[p²|X]
 = a/(a + b) − (a + 1) a / ((a + b + 1)(a + b))
 = (∑ xi + 1)(n + 1 − ∑ xi) / ((n + 3)(n + 2)).

3. We are interested in the Bayesian estimator of p(1 − p), since np(1 − p) is the variance of the binomial distribution (with n a known constant) and we can use this for the normal approximation.

Solution 2.5: [wk05Q12, Exercise, Schedule] Note that X can be interpreted as a geometric random variable where k is the total number of trials. Here E[X] = 1/p.

1. The method of moments estimator is given by:
X̄ = 1/p,
so that:
p̃ = 1/X̄ = n / ∑_{i=1}^n Xi.


2. The likelihood function is:
L(p; x) = ∏_{i=1}^n f_X(xi) = ∏_{i=1}^n p (1 − p)^(xi − 1) = p^n (1 − p)^(∑ xi − n).
The log-likelihood function is:
ℓ(p; x) = log(L(p; x)) = n log(p) + (∑ xi − n) log(1 − p).
Take the FOC of ℓ(p; x) w.r.t. p and equate to zero:
ℓ′(p) = n/p − (∑ xi − n)/(1 − p) = 0.
Then we obtain the maximum likelihood estimator for p:
p̂ = n / (∑ Xi − n + n) = n / ∑_{i=1}^n Xi,
* using: a/(1 − a) = b/c ⇔ 1/(1/a − 1) = b/c ⇔ 1/a − 1 = c/b ⇔ 1/a = (c + b)/b ⇔ a = b/(b + c).

Solution 2.6: [wk05Q13, Exercise, Schedule] For the Pareto distribution with parameters x0 and θ we have the p.d.f.:
f(x) = θ x0^θ x^(−θ−1), x ≥ x0, θ > 1,
and zero otherwise. The expected value of the random variable X is then given by:
E[X] = ∫ x f_X(x) dx = ∫_{x0}^∞ x · θ x0^θ x^(−θ−1) dx
     = θ x0^θ ∫_{x0}^∞ x^(−θ) dx
     = θ x0^θ [x^(−θ+1)/(−θ + 1)]_{x0}^∞
     = θ x0^θ · x0^(1−θ)/(θ − 1)
     = θ x0 / (θ − 1).

1. Given x0, we have E[X] = θ x0/(θ − 1); thus:
θ x0/(θ − 1) = X̄
θ x0 = X̄ (θ − 1)
θ x0 = X̄ θ − X̄
X̄ = θ (X̄ − x0)
θ̃ = X̄ / (X̄ − x0).
Thus the method of moments estimator of θ is X̄/(X̄ − x0).


2. The likelihood function is given by:
L(θ; x) = ∏_{i=1}^n f_X(xi) = ∏_{i=1}^n θ x0^θ xi^(−θ−1) = θ^n x0^(nθ) ∏_{i=1}^n xi^(−θ−1).
The log-likelihood function is given by:
ℓ(θ; x) = log(L(θ; x)) = n log(θ) + nθ log(x0) − (θ + 1) ∑_{i=1}^n log(xi).
Take the FOC of ℓ(θ; x) and equate to zero:
∂ℓ(θ)/∂θ = n/θ + n log(x0) − ∑_{i=1}^n log(xi) = 0
n/θ = −n log(x0) + ∑_{i=1}^n log(xi)
θ̂ = n / (∑_{i=1}^n log(xi) − n log(x0)).
Thus, the maximum likelihood estimator for θ is n / (∑_{i=1}^n log(xi) − n log(x0)).

Solution 2.7: [wk05Q1, Exercise, Schedule] We are given that X ∼ Exp(1/5000). Thus, E[X] = 5000 and Var(X) = 5000². Let S = X1 + . . . + X100. Then E[S] = 100 · 5000 = 500,000 and Var(S) = 100 · 5000². Thus, using the central limit theorem, we have:
Pr(S > 100 · 5050) = Pr((S − E[S]) / √Var(S) > 100 · 50 / (10 · 5000))
                   ≈ Pr(Z > 0.10) = 1 − 0.5398 = 0.4602.
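The normal-approximation step can be verified with the Python standard library (a checking sketch, not part of the original solution):

```python
from statistics import NormalDist

n, mean_loss, premium = 100, 5000, 5050
ES = n * mean_loss            # 500,000
sd_S = n ** 0.5 * mean_loss   # 10 * 5000 = 50,000

z = (n * premium - ES) / sd_S  # 0.10
p = 1 - NormalDist().cdf(z)    # Pr(S > total premiums), approximately
print(round(p, 4))  # 0.4602
```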
Solution 2.8: [wk05Q3, Exercise, Schedule] To prove X̄n → μ in probability, we show that for any ε > 0 we have:
Pr(|X̄n − μ| > ε) → 0, as n → ∞,
or, equivalently:
lim_{n→∞} Pr(|X̄n − μ| > ε) = 0.
First, note that we have:
E[X̄n] = μ and Var(X̄n) = (1/n²) ∑_{k=1}^n σk².
Applying Chebyshev's inequality:
Pr(|X̄n − μ| > ε) ≤ (1/ε²) (1/n²) ∑_{k=1}^n σk².
Taking limits on both sides:
lim_{n→∞} Pr(|X̄n − μ| > ε) ≤ (1/ε²) lim_{n→∞} (1/n²) ∑_{k=1}^n σk² = 0,
where the limit on the right-hand side is zero by assumption. Thus, the result follows.

Solution 2.9: [wk05Q4, Exercise, Schedule] Let L be the location (in cm north of the starting point) after one hour (60 minutes). Therefore:
L = X1 + . . . + X60,
where
Xk = +50 cm, w.p. 1/2, and Xk = −50 cm, w.p. 1/2,
so that E[Xk] = 0 and Var(Xk) = 2500. Therefore:
E[L] = 0 and Var(L) = 60 · 2500 = 150,000.
Thus, using the central limit theorem, we have:
Pr(L ≤ x) = Pr((L − E[L]) / √Var(L) ≤ x / √150,000) ≈ Pr(Z ≤ x / (100 √15)).
In other words, approximately:
L ∼ N(0, 150,000).
The mean of a normal distribution is also its mode; therefore his most likely position after one hour is 0, the point where he started.
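A short simulation sketch agrees with the N(0, 150,000) approximation (the number of trials is an arbitrary choice):

```python
import random
from statistics import fmean, pvariance

random.seed(0)
trials = 20_000

# Each of the 60 minutes: +50 cm (north) or -50 cm (south), w.p. 1/2 each.
positions = [sum(random.choice((50, -50)) for _ in range(60))
             for _ in range(trials)]

print(fmean(positions))      # near 0
print(pvariance(positions))  # near 60 * 2500 = 150,000
```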

Solution 2.10: [wk05Q7, Exercise, Schedule] We use moment generating functions to show that:

1. The binomial tends to the Poisson: Let X ∼ Binomial(n, p). Its m.g.f. is therefore:
M_X(t) = (1 − p + p e^t)^n
let np = λ so that p = λ/n:
       = (1 + (λ/n)(e^t − 1))^n
       = (1 + λ(e^t − 1)/n)^n,
and by taking limits on both sides, we have:
lim_{n→∞} M_X(t) = lim_{n→∞} (1 + λ(e^t − 1)/n)^n = exp(λ(e^t − 1)),
which is the moment generating function of a Poisson with mean λ.
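The same limit can be seen numerically by comparing the Binomial(n, λ/n) p.m.f. with the Poisson(λ) p.m.f. for large n (λ = 2 and k = 3 are arbitrary illustrative choices):

```python
import math

lam, n, k = 2.0, 10_000, 3
p = lam / n

binom_pmf = math.comb(n, k) * p**k * (1 - p)**(n - k)
poisson_pmf = lam**k * math.exp(-lam) / math.factorial(k)

print(binom_pmf, poisson_pmf)  # nearly identical for large n
```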

2. The gamma, properly standardized, tends to the normal: Let X ∼ Gamma(α, β), so that its density is of the form:
f(x) = (β^α / Γ(α)) x^(α−1) e^(−βx), for x ≥ 0,
and zero otherwise, and its m.g.f. is:
M_X(t) = (β/(β − t))^α.
Its mean and variance are, respectively, α/β and α/β². These results were derived in the week 2 lecture. Consider the standardized Gamma random variable:
Y = (X − E[X]) / √Var(X) = (X − α/β) / √(α/β²) = (βX − α)/√α.
Its moment generating function is:
M_Y(t) = e^(−√α t) E[e^(βtX/√α)] = e^(−√α t) M_X(βt/√α)
       = e^(−√α t) (1/(1 − t/√α))^α = e^(−√α t) e^(−α log(1 − t/√α))
       = exp(−√α t + α (t/√α + (1/2)(t/√α)² + R))
         (here R is the Taylor series remainder term)
       = exp(t²/2 + R′),
where R′ involves powers of 1/√α. Thus, in the limit, M_Y(t) → exp(t²/2) as α → ∞.

Solution 2.11: [wk05Q8, Exercise, Schedule] If the law of large numbers were to hold here, the sample mean X̄ would approach the mean of X, which does not exist in this case. At first glance, therefore, it would seem not to be a violation. But, in fact, it is, because the assumption of a finite mean does not hold for the Cauchy distribution, and therefore the law of large numbers cannot hold.

Solution 2.12: [wk05Q10, Exercise, Schedule] The common distribution function is given by:
F_X(x) = ∫_1^x α u^(−(α+1)) du = [−u^(−α)]_1^x = 1 − x^(−α), if x > 1,
and zero otherwise. The distribution function of Yn will be:
F_{Yn}(x) = Pr(Yn ≤ x) = Pr(X_(n)/n^(1/α) ≤ x)
          = Pr(X_(n) ≤ n^(1/α) x) = (1 − (n^(1/α) x)^(−α))^n
          = (1 − x^(−α)/n)^n,
if x > 0 and zero otherwise. Notice that whereas x > 1 for each Xi, the transformation Yn = X_(n)/n^(1/α) gives support y > 0, since n^(1/α) is large. Taking the limit as n → ∞, we have:
lim_{n→∞} F_{Yn}(x) = lim_{n→∞} (1 − x^(−α)/n)^n = exp(−x^(−α)).
Thus, the limit exists and Yn therefore converges in distribution. The limiting distribution is:
F_Y(y) = exp(−y^(−α)), for y > 0,
and zero otherwise; the corresponding density is:
f_Y(y) = ∂F_Y(y)/∂y = α y^(−(α+1)) exp(−y^(−α)), if y > 0,
and zero otherwise. You can verify that this is a legitimate density: f_Y(y) ≥ 0 for all y, because α > 0, y^(−(α+1)) ≥ 0 and exp(−y^(−α)) ≥ 0, and F_Y(∞) = ∫ f_Y(y) dy = exp(−0) = 1.

Solution 2.13: [wk05Q11, Exercise, Schedule] The mean and the variance of S are, respectively:
E[S] = 40/3 and Var(S) = 10/9.
Thus, using the central limit theorem, we have:
Pr(S ≤ 10) = Pr((S − E[S]) / √Var(S) ≤ (10 − 40/3) / √(10/9))
           ≈ Pr(Z ≤ −√10) = Pr(Z ≤ −3.16) = 0.0008.
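The final numerical step can be checked with the standard library (a verification sketch):

```python
import math
from statistics import NormalDist

ES, VarS = 40 / 3, 10 / 9
z = (10 - ES) / math.sqrt(VarS)  # -sqrt(10), about -3.16
p = NormalDist().cdf(z)
print(round(p, 4))  # 0.0008
```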

Solution 2.14: [wk06Q1, Exercise, Schedule] We wish to find a so that:
Pr(0 < λ < a/X̄) = 0.95,
or, equivalently:
Pr(λX̄ < a) = 0.95,
since λ > 0 and X̄ > 0. Note that X ∼ Exp(λ), so that:
M_X(t) = λ/(λ − t),
and the m.g.f. of the sample mean X̄ is:
M_X̄(t) = M_{∑ Xi/n}(t) = M_{∑ Xi}(t/n) = (M_Xi(t/n))^n = (λ/(λ − t/n))^n = (nλ/(nλ − t))^n,
which is the m.g.f. of a Gamma(n, nλ). Therefore:
2nλX̄ ∼ Gamma(n, 1/2) = χ²(2n),
which is free of the parameter λ. Thus, using this as a pivot, we have:
Pr(2nλX̄ < χ²_{1−α}(2n)) = 1 − α,
or, equivalently:
Pr(λ < χ²_{1−α}(2n)/(2nX̄)) = 1 − α.
For α = 0.05, the required constant is:
a = χ²_{0.95}(2n)/(2n),
where χ²_{0.95}(2n) denotes the 95th percentile of a chi-squared distribution with 2n degrees of freedom.
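When statistical tables or R are unavailable, the chi-squared percentile can be approximated with the Wilson-Hilferty transformation using only the standard library (an illustrative sketch; the sample size n = 10 is an arbitrary choice):

```python
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-squared p-quantile."""
    z = NormalDist().inv_cdf(p)
    c = 2 / (9 * df)
    return df * (1 - c + z * c**0.5) ** 3

n = 10  # illustrative sample size
a = chi2_quantile(0.95, 2 * n) / (2 * n)
print(a)  # about 1.57, since chi2_0.95(20) is roughly 31.4
```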


 
Solution 2.15: [wk06Q2, Exercise, Schedule] Let X1, . . . , Xn be a random sample from N(μ, σ²), and let Zk be standard normally distributed. If μ is known, then it is known that:
((Xk − μ)/σ)² = Zk² ∼ χ²(1),
so that:
∑_{k=1}^n ((Xk − μ)/σ)² = ∑_{k=1}^n Zk² ∼ χ²(n),
and to construct a 100(1 − α)% confidence interval for σ², we define χ²_{α/2}(n) and χ²_{1−α/2}(n) to be the (α/2)th and (1 − α/2)th quantiles, respectively, of a chi-squared distribution with n degrees of freedom. Using the above as a pivotal quantity, we have:
Pr(χ²_{α/2}(n) < ∑_{k=1}^n (Xk − μ)²/σ² < χ²_{1−α/2}(n)) = 1 − α,
which implies that:
Pr(∑_{k=1}^n (Xk − μ)²/χ²_{1−α/2}(n) < σ² < ∑_{k=1}^n (Xk − μ)²/χ²_{α/2}(n)) = 1 − α.
Thus we have a 100(1 − α)% confidence interval estimate for σ² when μ is known:
(∑_{k=1}^n (Xk − μ)²/χ²_{1−α/2}(n), ∑_{k=1}^n (Xk − μ)²/χ²_{α/2}(n)).

Solution 2.16: [wk06Q3, Exercise, Schedule] We have a random sample from a continuous distribution.

1. To prove that the c.d.f., when viewed as a random variable, has a uniform distribution, we have:
Pr(F_X(X) ≤ x) = Pr(X ≤ F_X^(−1)(x)) = F_X(F_X^(−1)(x)) = x.
We know that this is the c.d.f. of a Uniform(0, 1) random variable, because x represents a probability, which lies between 0 and 1, and F(x) = x on (0, 1) is exactly the uniform c.d.f.

2. Let W = log(1/F_X(X)). Then we have:
F_W(x) = Pr(W ≤ x) = Pr(log(1/F_X(X)) ≤ x)
       = Pr(−log(F_X(X)) ≤ x)
       = Pr(log(F_X(X)) ≥ −x)
       = 1 − Pr(log(F_X(X)) < −x)
       = 1 − Pr(F_X(X) < e^(−x))
       = 1 − e^(−x),
so that its density is (take the derivative of F_W(x) w.r.t. x):
f_W(x) = ∂F_W(x)/∂x = e^(−x),
which implies W ∼ Exp(1), a standard exponential.


3. Let Wk = log(1/F_X(Xk|θ)) ∼ Exp(1). Then, the m.g.f. of Wk, with λ = 1, is given by:
M_Wk(t) = (1 − t)^(−1).
Using the m.g.f. technique and the properties of m.g.f.s (week 1), with
Y = ∑_{k=1}^n Wk,
we have:
M_Y(t) = M_{W1 + . . . + Wn}(t) = (M_Wk(t))^n = (1 − t)^(−n),
so Y ∼ Gamma(n, 1). It has a gamma distribution with parameters α = n and β = 1.

4. Using the properties of m.g.f.s and the result from part 3, for 2Y = 2Σ_{k=1}^n W_k we have:

    M_{2Y}(t) = M_{W_1 + ... + W_n}(2t) = (M_{W_k}(2t))^n = (1 − 2t)^{−n} = (1 − 2t)^{−2n/2},

so that:

    2Y = 2Σ_{k=1}^n W_k ~ Gamma(2n/2, 1/2) = χ²(2n).

Thus, you can always choose 2Σ_{k=1}^n W_k as a pivot, because its distribution is free of any parameter.
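A short simulation (not part of the original solution) supports the χ²(2n) claim by checking the first two moments: χ²(2n) has mean 2n and variance 4n, here 10 and 20 with n = 5.

```python
import random

random.seed(1)
n, reps = 5, 200_000
# 2 * (sum of n independent Exp(1) variables) should follow chi^2(2n).
samples = [2 * sum(random.expovariate(1.0) for _ in range(n)) for _ in range(reps)]
mean = sum(samples) / reps                           # chi^2(10) mean = 10
var = sum((s - mean) ** 2 for s in samples) / reps   # chi^2(10) variance = 20
```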

Solution 2.17: [wk06Q4, Exercise, Schedule] Suppose X₁, X₂, ..., X_n is a random sample from a Gamma(3, β), so that its m.g.f. is:

    M_{X_i}(t) = (β/(β − t))³.

1. Use the m.g.f. technique. The m.g.f. of the sample mean X̄ can be written as:

    M_{X̄}(t) = M_{Σ_{i=1}^n X_i/n}(t) = M_{Σ_{i=1}^n X_i}(t/n) = (M_{X_i}(t/n))^n
             = (β/(β − t/n))^{3n} = (nβ/(nβ − t))^{3n},

which has the form of the m.g.f. of a Gamma(3n, nβ) distribution.

2. Note that the m.g.f. of the random variable Y = 2nβX̄ is:

    M_Y(t) = E[e^{Yt}] = M_{X̄}(2nβt) = (1/(1 − 2t))^{6n/2},

which is the m.g.f. of a χ²(6n). Thus, we can use it as a pivot to construct confidence intervals. To construct a lower 95% confidence interval for β, we note:

    Pr(0 < 2nβX̄ < χ²_{0.95}(6n)) = 0.95,


so that equivalently:

    Pr(0 < β < χ²_{0.95}(6n)/(2nX̄)) = 0.95.

Therefore:

    U = (0, χ²_{0.95}(6n)/(2nX̄))

is the required confidence interval.

3. Similar to (b) above, it can be shown that:

    Pr(χ²_{0.05}(6n) < 2nβX̄ < ∞) = 0.95,

so that equivalently:

    Pr(χ²_{0.05}(6n)/(2nX̄) < β < ∞) = 0.95,

and, hence:

    L = (χ²_{0.05}(6n)/(2nX̄), ∞)

provides an upper confidence interval.

4. Similar to (b) and (c) above, the following:

    (χ²_{0.025}(6n)/(2nX̄), χ²_{0.975}(6n)/(2nX̄))

provides the required confidence interval.

5. Using Σ_{k=1}^{20} x_k = 98.2 (so 2Σ_{k=1}^{20} x_k = 196.4), we have the following 95% confidence intervals for β:

    Lower tail: (0, χ²_{0.95}(120)/196.4) = (0, 146.57/196.4) = (0, 0.7463)

    Upper tail: (χ²_{0.05}(120)/196.4, ∞) = (95.70/196.4, ∞) = (0.4873, ∞)

    Both tails: (χ²_{0.025}(120)/196.4, χ²_{0.975}(120)/196.4) = (91.58/196.4, 152.21/196.4) = (0.4663, 0.7750).

Note: the values χ²_{0.95}(120) = 146.57, χ²_{0.05}(120) = 95.70, χ²_{0.975}(120) = 152.21, and χ²_{0.025}(120) = 91.58 are computed using R or Excel (using =chiinv(q,df)). Formulae and Tables only provides percentage points of the chi-squared distribution up to 100 degrees of freedom (page 169).

Alternatively, use the approximate distribution of a chi-squared random variable for large n. Let Y ~ χ²(n); then we know Y = Σ_{i=1}^n Z_i², where the Z_i are i.i.d. standard normal random variables. Applying the Central Limit Theorem (see week 5) we have Y ≈ N(n, 2n). Using this, χ²_α(n) ≈ n + √(2n)·z_α, thus: χ²_{0.95}(120) ≈ 145.49 (z_{0.95} = 1.6449), χ²_{0.05}(120) ≈ 94.52 (z_{0.05} = −1.6449), χ²_{0.975}(120) ≈ 150.36 (z_{0.975} = 1.96), and χ²_{0.025}(120) ≈ 89.64 (z_{0.025} = −1.96).
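The three intervals and the normal approximation to the chi-squared quantiles can be reproduced as follows (a sketch, not part of the original solution; the exact quantiles 146.57, 95.70, 91.58, 152.21 are the R/Excel values quoted above):

```python
import math
from statistics import NormalDist

n_df, s = 120, 2 * 98.2  # degrees of freedom 6n and 2 * sum(x_k) = 196.4

def chi2_quantile_approx(q, df):
    """Normal approximation chi2_q(df) ~= df + sqrt(2*df) * z_q, for large df."""
    return df + math.sqrt(2 * df) * NormalDist().inv_cdf(q)

lower_tail = (0, 146.57 / s)           # ≈ (0, 0.7463)
upper_tail = (95.70 / s, math.inf)     # ≈ (0.4873, inf)
both_tails = (91.58 / s, 152.21 / s)   # ≈ (0.4663, 0.7750)
approx_95 = chi2_quantile_approx(0.95, n_df)  # ≈ 145.5, vs. exact 146.57
```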

Solution 2.18: [wk06Q5, Exercise, Schedule] The weight loss is clearly:

    D = After-Weight − Before-Weight.

A negative difference means a weight loss and a positive difference a weight gain. The sample mean and standard deviation of the differences are:

    d̄ = −6.4444  and  s_D = √( (Σ_{i=1}^n d_i² − n·d̄²)/(n − 1) ) = 5.1988.

The required 95% confidence interval is therefore given by:

    d̄ ± t_{1−α/2, n−1}·s_D/√n = −6.4444 ± (2.306)·5.1988/√9
                              = (−10.44056322, −2.448236783).

This result may differ slightly because of rounding.
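The arithmetic above can be replicated directly (a sketch using the figures quoted in the solution: n = 9 differences and t_{0.975,8} = 2.306 from tables):

```python
import math

# Paired-differences t interval.
n, d_bar, s_d, t_crit = 9, -6.4444, 5.1988, 2.306
half_width = t_crit * s_d / math.sqrt(n)
ci = (d_bar - half_width, d_bar + half_width)  # ≈ (-10.4406, -2.4482)
```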

Solution 2.19: [wk06Q6, Exercise, Schedule]

1. (a) We have that the difference in sample means, given known population variances, satisfies:

    (X̄₁ − X̄₂) ~ N(μ₁ − μ₂, σ₁²/n₁ + σ₂²/n₂);

note that the samples are independent, thus Cov(X̄₁, X̄₂) = 0. The pivotal quantity is then:

    Z = ((X̄₁ − X̄₂) − (μ₁ − μ₂)) / √(σ₁²/n₁ + σ₂²/n₂) ~ N(0, 1).

Finally, using the pivotal quantity, the 95% confidence interval is given by:

    (x̄₁ − x̄₂) − z_{1−0.025}·√(σ₁²/n₁ + σ₂²/n₂) < (μ₁ − μ₂) < (x̄₁ − x̄₂) + z_{1−0.025}·√(σ₁²/n₁ + σ₂²/n₂),

where z_{1−0.025} is the 0.975 quantile of a standard normal random variable.

(b) Now, with equal but unknown population variances, we have:

    t_k = [ ((X̄₁ − X̄₂) − (μ₁ − μ₂)) / (σ·√(1/n₁ + 1/n₂)) ] / √( (kS_p²/σ²)/k ) = Z/√(Y/k),

where S_p² = ((n₁ − 1)S₁² + (n₂ − 1)S₂²)/(n₁ + n₂ − 2), Z ~ N(0, 1), and Y = kS_p²/σ² ~ χ²_{n₁+n₂−2}, thus k = n₁ + n₂ − 2. The pivotal quantity is given by:

    ((X̄₁ − X̄₂) − (μ₁ − μ₂)) / (S_p·√(1/n₁ + 1/n₂)) ~ t_k.

The 95% confidence interval is then given by:

    (x̄₁ − x̄₂) − t_{1−0.025,k}·s_p·√(1/n₁ + 1/n₂) < (μ₁ − μ₂) < (x̄₁ − x̄₂) + t_{1−0.025,k}·s_p·√(1/n₁ + 1/n₂),

where t_{1−0.025,k} is the 0.975 quantile of a Student-t random variable with parameter k = n₁ + n₂ − 2.


2. We have that (n₁ − 1)S₁²/σ₁² ~ χ²(n₁ − 1) and (n₂ − 1)S₂²/σ₂² ~ χ²(n₂ − 1). Thus we have:

    F_{k,l} = [ ((n₁ − 1)S₁²/σ₁²)/(n₁ − 1) ] / [ ((n₂ − 1)S₂²/σ₂²)/(n₂ − 1) ]
            = (S₁²/σ₁²)/(S₂²/σ₂²)
            = (Y_k/k)/(Y_l/l),

where Y_k ~ χ²(n₁ − 1), hence k = n₁ − 1, and Y_l ~ χ²(n₂ − 1), hence l = n₂ − 1. The pivotal quantity is given by:

    F_{k,l} = (σ₂²S₁²)/(σ₁²S₂²).

Thus the 90% confidence interval is given by:

    (s₁²/s₂²)·1/F_{0.95}(n₁ − 1, n₂ − 1) < σ₁²/σ₂² < (s₁²/s₂²)·1/F_{0.05}(n₁ − 1, n₂ − 1),

where F_α(n₁ − 1, n₂ − 1) is the αth quantile of an F-distributed random variable with parameters n₁ − 1 and n₂ − 1.
3. (a) We use the t-distribution here because n₁ = 10 and n₂ = 10; hence we do not have a large sample and cannot use t_n → N(0, 1) as n → ∞. We have S₁² = (1.22)² = 1.4884 and S₂² = (0.96)² = 0.9216, hence:

    S_p² = ((n₁ − 1)S₁² + (n₂ − 1)S₂²)/(n₁ + n₂ − 2) = ((10 − 1)·1.4884 + (10 − 1)·0.9216)/(10 + 10 − 2) = 1.205.

Thus, the 95% confidence interval is given by:

    (x̄₁ − x̄₂) − t_{1−0.025,k}·s_p·√(1/n₁ + 1/n₂) < (μ₁ − μ₂) < (x̄₁ − x̄₂) + t_{1−0.025,k}·s_p·√(1/n₁ + 1/n₂)
    2.83 − t_{1−0.025,18}·√(1.205·2/10) < (μ₁ − μ₂) < 2.83 + t_{1−0.025,18}·√(1.205·2/10)
    1.798582315 < (μ₁ − μ₂) < 3.861417685,

using t_{1−0.025,18} = 2.101 (see table, Formulae and Tables page 163). Note that as n → ∞, t_{1−0.025,n} → z_{1−0.025} = 1.96; we see that the small sample size makes a difference.
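The pooled-variance computation can be sketched as follows (using the figures from the solution; t_{0.975,18} = 2.101 is the tabulated value):

```python
import math

# Two-sample pooled-variance t interval.
n1, n2 = 10, 10
s1_sq, s2_sq = 1.22 ** 2, 0.96 ** 2        # sample variances
diff, t_crit = 2.83, 2.101                 # observed mean difference, t quantile
sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)  # pooled variance
half = t_crit * math.sqrt(sp_sq) * math.sqrt(1 / n1 + 1 / n2)
ci = (diff - half, diff + half)            # ≈ (1.7986, 3.8614)
```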
(b) The 90% confidence interval for the ratio of the population variances is given by:

    (s₁²/s₂²)·1/F_{0.95}(9, 9) < σ₁²/σ₂² < (s₁²/s₂²)·1/F_{0.05}(9, 9)
    (1.4884/0.9216)·(1/3.179) < σ₁²/σ₂² < (1.4884/0.9216)·3.179
    0.508027 < σ₁²/σ₂² < 5.13414,

using F_{0.95}(9, 9) = 3.179 and F_{0.05}(9, 9) = 1/F_{0.95}(9, 9) = 1/3.179 (see table, Formulae and Tables page 172).
Since the value one lies in the 90% confidence interval for the variance ratio, we cannot reject the hypothesis that the two population variances are equal at the 10% significance level. Therefore, the assumption of equal variances is reasonable.
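In code (a sketch using the tabulated F_{0.95}(9, 9) = 3.179 and the reciprocal property F_{0.05}(9, 9) = 1/3.179):

```python
# 90% CI for the ratio of two normal population variances.
s1_sq, s2_sq, f_095 = 1.4884, 0.9216, 3.179
ratio = s1_sq / s2_sq
ci = (ratio / f_095, ratio * f_095)  # ≈ (0.5080, 5.1341)
contains_one = ci[0] < 1 < ci[1]     # True -> equal variances not rejected
```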


Solution 2.20: [wk06Q7, Exercise, Schedule] (See Q9 before doing this.) Let X be the number of customers who indicated high satisfaction; X ~ Binomial(200, p). The estimator of the parameter p is:

    p̂ = X/n = 172/200 = 43/50 = 0.86.

Then:

    Z = (0.86 − p)/√(0.86(1 − 0.86)/200) ≈ N(0, 1).

An approximate 95% confidence interval for p is:

    (0.86 − z_{1−0.025}·√(0.86(1 − 0.86)/200), 0.86 + z_{1−0.025}·√(0.86(1 − 0.86)/200))
    = (0.86 − 1.96·√(0.86(1 − 0.86)/200), 0.86 + 1.96·√(0.86(1 − 0.86)/200))
    = (0.81191005, 0.90808995).
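The Wald (normal-approximation) interval above is a two-liner (a sketch, not part of the original solution):

```python
import math

# Wald interval for a binomial proportion: X = 172 successes out of n = 200.
x, n, z = 172, 200, 1.96
p_hat = x / n
half = z * math.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - half, p_hat + half)  # ≈ (0.8119, 0.9081)
```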

Solution 2.21: [wk06Q8, Exercise, Schedule] The distribution of claim size under a certain class of policy is modelled as a normal random variable, and previous years' records indicate that the standard deviation is 120.

1. Let X̄ = Σ_{i=1}^n X_i / n, an unbiased estimator of μ. We have:

    X̄ ~ N(μ, σ_X̄² = σ²/n).

To derive a 95% confidence interval for μ, use as pivot function:

    Z = (X̄ − μ_X̄)/σ_X̄ = (X̄ − μ)/(σ/√n) ~ N(0, 1).

In general,

    (X̄ − z_{1−α/2}·σ/√n, X̄ + z_{1−α/2}·σ/√n)

is an approximate 100(1 − α)% confidence interval for μ. Hence the width of a 95% confidence interval for μ is 2z_{1−0.025}·σ/√n. For this problem (√n = 10),

    width = 2(1.96)·120/10 = 47.04.

2. We want:

    2z_{1−0.025}·σ/√n ≤ 10,

hence:

    2(1.96)·120/√n ≤ 10,

and n ≥ (2(1.96)·120/10)² = 2212.7616. The minimum sample size required is 2213.

3. The smaller the width of the confidence interval, the larger the required sample size.
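Both numbers follow directly from the width formula (a sketch with σ = 120 and z_{0.975} = 1.96):

```python
import math

# CI width for n = 100, and the minimum n giving width <= 10.
sigma, z = 120, 1.96
width_n100 = 2 * z * sigma / math.sqrt(100)   # 47.04
n_min = math.ceil((2 * z * sigma / 10) ** 2)  # smallest integer n with width <= 10
```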

Solution 2.22: [wk06Q9, Exercise, Schedule]


1. A random variable Y has a Poisson distribution with parameter λ, but with the restriction that zero counts cannot occur. The distribution of Y in this case is referred to as the zero-truncated Poisson distribution.

(a) Let X be the non-truncated counterpart of Y, so X has a Poisson distribution. Note that:

    Pr(Y = y) = Pr(X = y | X > 0) = Pr(X = y)/Pr(X ≥ 1)
              = Pr(X = y)/(1 − Pr(X = 0))
              = λ^y e^{−λ}/(y!(1 − e^{−λ})),  for y = 1, 2, 3, ...,

and zero otherwise.

(b)

    E[Y] = E[X | X > 0] = Σ_{y=1}^∞ y·λ^y e^{−λ}/(y!(1 − e^{−λ}))
         = (1/(1 − e^{−λ})) Σ_{y=1}^∞ λ^y e^{−λ}/(y − 1)!
         = (1/(1 − e^{−λ})) Σ_{y=0}^∞ λ^{y+1} e^{−λ}/y!
         = (λ/(1 − e^{−λ})) Σ_{y=0}^∞ λ^y e^{−λ}/y!
         =* λ/(1 − e^{−λ}),

* using that Σ_{y=0}^∞ λ^y e^{−λ}/y! is the sum of the probability mass function of a Poisson random variable with parameter λ and thus equals one.

Alternatively, one could use:

    E[X] = E[X | X > 0]·Pr(X > 0) + E[X | X ≤ 0]·Pr(X ≤ 0),

where the second term equals zero. Using this we have:

    E[X | X > 0] = E[X]/Pr(X > 0) =* λ/(1 − Pr(X = 0)) =** λ/(1 − e^{−λ}),

* using E[X] = λ and ** using Pr(X = 0) = e^{−λ}λ⁰/0! = e^{−λ}.
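The closed-form mean can be verified by summing the series directly (an illustrative sketch with λ = 2, not part of the original solution):

```python
import math

# E[Y] for the zero-truncated Poisson: series sum vs. lambda / (1 - exp(-lambda)).
lam = 2.0
norm = 1 - math.exp(-lam)  # Pr(X >= 1), the truncation normaliser
series_mean = sum(
    y * lam ** y * math.exp(-lam) / (math.factorial(y) * norm)
    for y in range(1, 100)  # terms beyond y ~ 100 are negligible for lam = 2
)
closed_form = lam / norm    # ≈ 2.3130
```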
2. (a) The likelihood function is:

    L(λ; y) = Π_{i=1}^n f_Y(y_i) = Π_{i=1}^n λ^{y_i} e^{−λ}/(y_i!(1 − e^{−λ}))
            = λ^{Σ_{i=1}^n y_i} e^{−nλ} / ((1 − e^{−λ})^n Π_{i=1}^n y_i!),

and the log-likelihood function is:

    ℓ(λ; y) = Σ_{i=1}^n log(f_Y(y_i)) = −nλ − n ln(1 − e^{−λ}) + Σ_{i=1}^n y_i ln λ − ln(Π_{i=1}^n y_i!).

To find the maximum point, we set the derivative of the log-likelihood function equal to zero:

    ∂ℓ(λ; y)/∂λ = −n − n e^{−λ}/(1 − e^{−λ}) + (Σ_{i=1}^n y_i)/λ = 0.

Equivalently,

    −1 − e^{−λ}/(1 − e^{−λ}) + Ȳ/λ = 0,

or, since 1 + e^{−λ}/(1 − e^{−λ}) = 1/(1 − e^{−λ}), multiplying through by λ:

    Ȳ − λ/(1 − e^{−λ}) = 0.

Also, from the method of moments:

    Ȳ = E[Y] =* λ/(1 − e^{−λ})   ⟹   Ȳ − λ/(1 − e^{−λ}) = 0,

* using the result in part 1(b). Hence the maximum likelihood estimate is the same as the method of moments estimate.
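The estimating equation Ȳ = λ/(1 − e^{−λ}) has no closed-form solution, but since the right-hand side is increasing in λ it can be solved by bisection. A sketch (not part of the original solution; the target mean 2.3130353 corresponds to λ = 2 by part 1(b)):

```python
import math

def truncated_poisson_mle(y_bar, lo=1e-8, hi=50.0, tol=1e-10):
    """Solve y_bar = lam / (1 - exp(-lam)) for lam by bisection."""
    g = lambda lam: lam / (1 - math.exp(-lam)) - y_bar  # increasing in lam
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

lam_hat = truncated_poisson_mle(2.3130353)  # ≈ 2.0
```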
(b)

    ∂ℓ(λ; y)/∂λ = −n − n e^{−λ}/(1 − e^{−λ}) + (Σ_{i=1}^n y_i)/λ,

    ∂²ℓ(λ; y)/∂λ² = n e^{−λ}/(1 − e^{−λ})² − (Σ_{i=1}^n y_i)/λ²,

using d/dλ[−e^{−λ}/(1 − e^{−λ})] = e^{−λ}/(1 − e^{−λ})². Taking expectations, with E[Σ_{i=1}^n Y_i] = nE[Y] = nλ/(1 − e^{−λ}):

    E[∂²ℓ(λ; y)/∂λ²] = n e^{−λ}/(1 − e^{−λ})² − n/(λ(1 − e^{−λ})),

so that:

    −E[∂²ℓ(λ; y)/∂λ²] = n/(λ(1 − e^{−λ})) − n e^{−λ}/(1 − e^{−λ})²
                      = (n(1 − e^{−λ}) − nλe^{−λ}) / (λ(1 − e^{−λ})²)
                      = (n − n e^{−λ} − nλe^{−λ}) / (λ(1 − e^{−λ})²),

and hence:

    CRLB = λ(1 − e^{−λ})² / (n − n e^{−λ} − nλe^{−λ}).

Solution 2.23: [wk06Q10, Exercise, Schedule] We observe Y_i ~ Ber(p), i = 1, ..., n, and we know that Σ_{i=1}^n Y_i ~ Bin(n, p). Let X = Σ_{i=1}^n Y_i be the number of successes.

1. We have that:

    L(p; y) = Π_{i=1}^n p^{y_i}(1 − p)^{1−y_i} = p^x(1 − p)^{n−x} = L(p; x).

The maximum likelihood estimator can be found using the log-likelihood:

    ℓ(p; y) = Σ_{i=1}^n [y_i log(p) + (1 − y_i) log(1 − p)] = X log(p) + (n − X) log(1 − p) = ℓ(p; x).

Then take the derivative of ℓ(p; x) with respect to p and set it equal to zero:

    0 = ∂ℓ(p; x)/∂p = X/p − (n − X)/(1 − p)
    ((1 − p)X − (n − X)p)/(p(1 − p)) = 0
    (1 − p)X = (n − X)p
    X = np
    p̂ = X/n.

2. (a) The Cramer-Rao lower bound is given by:

    Var(T) ≥ 1/(n·I(p)),

where

    I(p) = E[ (∂log f(x; p)/∂p)² ] = −E[ ∂²log f(x; p)/∂p² ].

With log f(x; p) = log(p^x(1 − p)^{1−x}) = x log(p) + (1 − x) log(1 − p), we have:

    −E[ ∂²/∂p² (x log(p) + (1 − x) log(1 − p)) ] = −E[ ∂/∂p (x/p − (1 − x)/(1 − p)) ]
    = E[ x/p² + (1 − x)/(1 − p)² ]
    = p/p² + (1 − p)/(1 − p)²
    = 1/p + 1/(1 − p) = 1/(p(1 − p)),

using E[X^r] = p (since X^r = X for a Bernoulli random variable). Thus we have the CRLB:

    Var(T) ≥ p(1 − p)/n.


(b) The variance of the maximum likelihood estimator is given by:

    Var(X/n) = Var(X)/n² = np(1 − p)/n² = p(1 − p)/n.

Thus the MLE attains the CRLB.

(c) The approximate sampling distribution for p̂ is obtained using the Central Limit Theorem:

    p̂ = X/n ≈ N(p, p(1 − p)/n).

3. We have:

    −1.96 < (p̂ − p)/√(p(1 − p)/n) < 1.96
    |p̂ − p|/√(p(1 − p)/n) < 1.96
    (p̂ − p)²/(p(1 − p)/n) < 1.96²
    p̂² + p² − 2p̂p < 1.96²·p(1 − p)/n
    (1 + 1.96²/n)·p² − (2p̂ + 1.96²/n)·p + p̂² < 0

    [(2p̂ + 1.96²/n) − √((2p̂ + 1.96²/n)² − 4(1 + 1.96²/n)p̂²)] / (2(1 + 1.96²/n)) < p
      < [(2p̂ + 1.96²/n) + √((2p̂ + 1.96²/n)² − 4(1 + 1.96²/n)p̂²)] / (2(1 + 1.96²/n)),

where the last step is derived using the quadratic (abc-)formula: ax² + bx + c = 0 ⟹ d = b² − 4ac, x = (−b ± √d)/(2a).
4. Using p̂(1 − p̂) → p(1 − p) as n → ∞, we can approximate the pivotal quantity by:

    (p̂ − p)/√(p̂(1 − p̂)/n) ≈ N(0, 1).

Using this approximation, the 95% confidence interval becomes:

    −z_{0.975} < (p̂ − p)/√(p̂(1 − p̂)/n) < z_{0.975}
    −1.96 < (p̂ − p)/√(p̂(1 − p̂)/n) < 1.96
    −1.96·√(p̂(1 − p̂)/n) < p̂ − p < 1.96·√(p̂(1 − p̂)/n)
    p̂ − 1.96·√(p̂(1 − p̂)/n) < p < p̂ + 1.96·√(p̂(1 − p̂)/n).

5. Now we apply the two confidence intervals to data:

(a) We have n = 10, X = 4 and p̂ = X/n = 0.4. The confidence interval from part 3 is (0.168177581, 0.687330453) and from part 4 it is (0.096358106, 0.703641894).

(b) We have n = 200, X = 80 and p̂ = X/n = 0.4. The confidence interval from part 3 is (0.33460464, 0.469164561) and from part 4 it is (0.332103608, 0.467896392).

We observe that for large n the convergence p̂(1 − p̂) → p(1 − p) indeed gives a good approximation, but for small n (here, n = 10) it does not: the approximate interval from part 4 differs substantially from the interval from part 3.
Note also that both intervals rely on the Central Limit Theorem for the pivotal quantity, which is a good approximation only for a large sum; this is not the case for n = 10. It would therefore be better to use the exact binomial distribution when n is small rather than the normal approximation. Hence, if n is large, both the normal approximation for the pivot and the convergence p̂(1 − p̂) → p(1 − p) can be used to obtain a good approximate confidence interval for p, but if n is small, one should use the exact binomial pivotal quantity.
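Both intervals are easy to compute side by side (a sketch, not part of the original solution; z = 1.96 throughout):

```python
import math

def quadratic_ci(x, n, z=1.96):
    """Interval from part 3: invert the quadratic in p (score-type interval)."""
    p_hat = x / n
    a = 1 + z ** 2 / n
    b = 2 * p_hat + z ** 2 / n
    d = math.sqrt(b ** 2 - 4 * a * p_hat ** 2)
    return ((b - d) / (2 * a), (b + d) / (2 * a))

def wald_ci(x, n, z=1.96):
    """Interval from part 4: plug p_hat into the variance (Wald interval)."""
    p_hat = x / n
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - half, p_hat + half)

ci_small = quadratic_ci(4, 10)   # ≈ (0.1682, 0.6873)
ci_wald_small = wald_ci(4, 10)   # ≈ (0.0964, 0.7036)
ci_large = quadratic_ci(80, 200) # ≈ (0.3346, 0.4692)
ci_wald_large = wald_ci(80, 200) # ≈ (0.3321, 0.4679)
```

For n = 200 the two intervals nearly coincide, while for n = 10 they differ markedly, matching the observation above.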

Solution 2.24: [wk06Q11, Exercise, Schedule] Note that the sample size is 16 (i.e., n = 16); thus we have a small sample and have to use the Student-t distribution for the population mean. The 95% (i.e., α = 0.05) confidence interval for the population mean is given by:

    x̄ − t_{1−α/2, n−1}·s/√n < μ < x̄ + t_{1−α/2, n−1}·s/√n,

with x̄ = (Σ_{i=1}^{16} x_i)/n and s² = (Σ_{i=1}^{16} x_i² − n·x̄²)/(n − 1). Substituting Σ x_i = 51.2 and Σ x_i² = 243.19:

    51.2/16 − t_{1−0.025,15}·√((243.19 − 163.84)/15)·√(1/16) < μ < 51.2/16 + t_{1−0.025,15}·√((243.19 − 163.84)/15)·√(1/16)
    3.2 − 2.131·√(5.29/16) < μ < 3.2 + 2.131·√(5.29/16)
    1.974675 < μ < 4.425325,

using t_{1−0.025,15} = 2.131 (see table, Formulae and Tables page 163).

Solution 2.25: [wk06Q12, Exercise, Schedule]

1. We have that:

    (n − 1)S²/σ² ~ χ²(n − 1).

Moreover, χ²(n − 1) = Σ_{i=1}^{n−1} Z_i² ≈ N(n − 1, 2(n − 1)) as n → ∞, by the Central Limit Theorem. Thus, as n → ∞, we approximately have:

    ((n − 1)S²/σ² − (n − 1))/√(2(n − 1)) ≈ N(0, 1).


2. Using n = 101, we need to find:

    Pr(S² > 1.1σ²) = Pr(S²/σ² > 1.1)
    = Pr( ((n − 1)S²/σ² − (n − 1))/√(2(n − 1)) > (1.1(n − 1) − (n − 1))/√(2(n − 1)) )
    = Pr( Z > 0.1(n − 1)/√(2(n − 1)) )
    = 1 − Φ( 0.1(n − 1)/√(2(n − 1)) )
    = 1 − Φ( 0.1·100/√200 )
    = 1 − Φ(1/√2) = 1 − 0.76025 = 0.23975.

Solution 2.26: [wk06Q13, Exercise, Schedule] We have X_i ~ POI(λ) i.i.d. for i = 1, ..., 500. Using the moment generating function technique we have M_{Σ_{i=1}^{500} X_i}(t) = (M_{X_i}(t))^{500} = (exp(λ(e^t − 1)))^{500} = exp(500λ(e^t − 1)). Thus we have Σ_{i=1}^{500} X_i = X ~ POI(500λ).
By the Central Limit Theorem, Σ_{i=1}^{500} X_i = X is approximately normally distributed with mean 500λ and variance 500λ. Thus:

    (X − 500λ)/√(500λ) ≈ N(0, 1).

With the observed value x = 83, we have:

    z_{0.025} < (83 − 500λ)/√(500λ) < z_{0.975}
    −1.96 < (83 − 500λ)/√(500λ) < 1.96
    |83 − 500λ|/√(500λ) < 1.96
    (83 − 500λ)² < 1.96²·500λ
    83²/500 + 500λ² − 2·83λ < 1.96²·λ
    500λ² − (2·83 + 1.96²)λ + 83²/500 < 0
    0.133923 < λ < 0.205761,

where the last step is derived using the quadratic (abc-)formula, i.e., ax² + bx + c = 0 ⟹ d = b² − 4ac, x = (−b ± √d)/(2a).
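The final quadratic inversion can be checked directly (a sketch, not part of the original solution):

```python
import math

# Roots of 500*lam^2 - (2*83 + 1.96^2)*lam + 83^2/500 = 0 give the CI endpoints.
a = 500
b = -(2 * 83 + 1.96 ** 2)
c = 83 ** 2 / 500
d = math.sqrt(b ** 2 - 4 * a * c)
lo, hi = (-b - d) / (2 * a), (-b + d) / (2 * a)  # ≈ (0.133923, 0.205761)
```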

Solution 2.27: [wk06Q14, Exercise, Schedule] We have X_i ~ UNIF(0, θ) i.i.d. for i = 1, ..., n. We denote U = (1/θ)X_(n).

1. We know (week 5) that the cumulative distribution function of the maximum is F_{X_(n)}(x_(n)) = (F_X(x_(n)))^n, and we have F_X(x) = x/θ if 0 < x < θ. Thus we have:

    F_{X_(n)}(x_(n)) = 0,              if x_(n) < 0;
                       (x_(n)/θ)^n,    if 0 ≤ x_(n) ≤ θ;
                       1,              if x_(n) > θ.

Hence, using the transformation U = (1/θ)X_(n), we have:

    F_U(u) = 0,    if u < 0;
             u^n,  if 0 ≤ u ≤ 1;
             1,    if u > 1.

2. We use U as a pivotal quantity. To find the confidence interval for θ, we seek a bound of the form q₁ = c·X_(n) such that:

    Pr(q₁ ≤ θ) = 0.95
    Pr(q₁ ≤ X_(n)/U) = 0.95   (* using U = (1/θ)X_(n), i.e., θ = X_(n)/U)
    Pr(U ≤ X_(n)/q₁) = 0.95
    F_U(X_(n)/q₁) = 0.95   (note X_(n)/q₁ = 1/c is a constant)
    (X_(n)/q₁)^n = 0.95
    X_(n)/q₁ = 0.95^{1/n}
    q₁ = X_(n)·0.95^{−1/n}
    Pr(X_(n)·0.95^{−1/n} ≤ θ) = 0.95.

Thus X_(n)·0.95^{−1/n} is a 95% lower confidence bound for θ, and the corresponding one-sided 95% confidence interval for θ is (X_(n)·0.95^{−1/n}, ∞).
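The stated coverage can be checked by simulation (a sketch, not part of the original solution; θ = 3 and n = 8 are arbitrary choices):

```python
import random

# Monte Carlo check that X_(n) * 0.95**(-1/n) lies below theta in ~95% of samples.
random.seed(7)
theta, n, reps = 3.0, 8, 50_000
factor = 0.95 ** (-1 / n)
covered = sum(
    max(random.uniform(0, theta) for _ in range(n)) * factor <= theta
    for _ in range(reps)
)
coverage = covered / reps  # close to 0.95
```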
