

Filtration - An Example

Suppose we are modeling the price of an asset over two periods. We know its price at day 0. The price can then either go up or go down by a certain percentage on each of the two days. Let u denote the event that the price goes up on a particular day and d denote the event that the price goes down on a particular day. The sample space of our 2-day model is therefore Ω = {(u, u), (u, d), (d, u), (d, d)}, where (u, u) denotes the event that the price goes up on the first day and up on the second day, and so on. We now construct the filtered probability space (Ω, {F_t}, P) of our 2-day model as follows:

F_0 = {∅, Ω},    |F_0| = 2^(2^0) = 2

F_1 = {∅, {(u, u), (u, d)}, {(d, u), (d, d)}, Ω},    |F_1| = 2^(2^1) = 4

F_2 = {∅, {(u, u)}, {(u, d)}, {(d, u)}, {(d, d)}, {(u, u), (u, d)}, {(u, u), (d, u)}, {(u, u), (d, d)}, {(u, d), (d, u)}, {(u, d), (d, d)}, {(d, u), (d, d)}, {(u, u), (u, d), (d, u)}, {(u, u), (u, d), (d, d)}, {(u, d), (d, u), (d, d)}, {(u, u), (d, u), (d, d)}, Ω},    |F_2| = 2^(2^2) = 16
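To make the construction concrete, here is a minimal Python sketch (the helper name sigma_field and the variable names are ours, purely for illustration) that builds Ω and generates each F_t as the collection of all unions of blocks of the partition describing what is known at time t, then checks the sizes 2, 4 and 16 and the inclusions.

```python
from itertools import combinations, product

# Sample space of the 2-day model: all (day-1 move, day-2 move) pairs.
omega = set(product("ud", repeat=2))   # {('u','u'), ('u','d'), ('d','u'), ('d','d')}

def sigma_field(partition):
    """Sigma-field generated by a partition of omega: all unions of blocks,
    including the empty union (the empty set) and the full union (omega)."""
    blocks = [frozenset(b) for b in partition]
    return {frozenset().union(*combo)
            for r in range(len(blocks) + 1)
            for combo in combinations(blocks, r)}

# Information available at t = 0, 1, 2, expressed as partitions of omega.
partition_0 = [omega]                                               # nothing known yet
partition_1 = [{("u", "u"), ("u", "d")}, {("d", "u"), ("d", "d")}]  # first move known
partition_2 = [{w} for w in omega]                                  # both moves known

F0, F1, F2 = (sigma_field(p) for p in (partition_0, partition_1, partition_2))

print(len(F0), len(F1), len(F2))   # 2 4 16
print(F0 <= F1 <= F2)              # True: F0 ⊆ F1 ⊆ F2, i.e. {F_t} is a filtration
```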

Clearly, F_0 ⊆ F_1 ⊆ F_2 = 2^Ω. Therefore the set {F_t}, t ∈ {0, 1, 2}, is a filtration. Let X_t : Ω → R be a random variable, where t ∈ {0, 1, 2}. Define X_0 : Ω → R as follows:

X_0((u, u)) = 1,  X_0((u, d)) = 2,  X_0((d, u)) = 3,  X_0((d, d)) = 4

Is X_0 F_0-measurable? No. Consider any of the possible values X_0 can take, for example 1. The set {ω : X_0(ω) = 1} = {(u, u)} ∉ F_0. In fact, none of the sets {ω : X_0(ω) = x_i}, where x_i ∈ {1, 2, 3, 4}, is an element of F_0. Hence X_0 is not F_0-measurable. (Note: we only need to find one negative case in order to reject a random variable as F_0-measurable.) Suppose we define another random variable X_0 : Ω → R as follows

X_0((u, u)) = X_0((u, d)) = X_0((d, u)) = X_0((d, d)) = 1

Now the set {ω : X_0(ω) = 1} = {(u, u), (u, d), (d, u), (d, d)} = Ω ∈ F_0, hence X_0 defined this way is F_0-measurable. We denote this fact by X_0 ∈ F_0. Moving on to F_1, we define the random variable X_1 as follows

X_1((u, u)) = 1,  X_1((u, d)) = 1,  X_1((d, u)) = 2,  X_1((d, d)) = 2

X_1 is F_1-measurable since

{ω : X_1(ω) = 1} = {(u, u), (u, d)} ∈ F_1
{ω : X_1(ω) = 2} = {(d, u), (d, d)} ∈ F_1

In particular, on day 1 we know that what has happened so far is described by either {(u, u), (u, d)} or {(d, u), (d, d)}. In the former case X_1 equals 1, and in the latter case it equals 2. Thus if we know which event in F_1 has occurred, we know definitively which value X_1 has taken. This is the interpretation of X_1 being F_1-measurable. Similarly, define the random variable X_2 as

X_2((u, u)) = 1,  X_2((u, d)) = 2,  X_2((d, u)) = 3,  X_2((d, d)) = 4

X_2 is F_2-measurable, since

{ω : X_2(ω) = 1} = {(u, u)} ∈ F_2
{ω : X_2(ω) = 2} = {(u, d)} ∈ F_2
{ω : X_2(ω) = 3} = {(d, u)} ∈ F_2
{ω : X_2(ω) = 4} = {(d, d)} ∈ F_2

The set of random variables {X_t} defined this way, where t ∈ {0, 1, 2}, i.e. the set {X_0, X_1, X_2}, is called a stochastic process adapted to the filtration F = {F_0, F_1, F_2}.
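The measurability checks in this section can be mechanised. The following self-contained Python sketch (the helper names sigma_field and is_measurable are ours, for illustration only) rebuilds F_0, F_1 and F_2, tests whether a random variable is F_t-measurable by checking that every preimage {ω : X(ω) = x} is an element of F_t, and confirms that the first X_0 above (taking four distinct values) fails the test while the process as finally defined is adapted.

```python
from itertools import combinations, product

omega = set(product("ud", repeat=2))

def sigma_field(partition):
    """Sigma-field generated by a partition of omega: all unions of blocks."""
    blocks = [frozenset(b) for b in partition]
    return {frozenset().union(*combo)
            for r in range(len(blocks) + 1)
            for combo in combinations(blocks, r)}

F0 = sigma_field([omega])
F1 = sigma_field([{("u", "u"), ("u", "d")}, {("d", "u"), ("d", "d")}])
F2 = sigma_field([{w} for w in omega])

def is_measurable(X, F):
    """X is F-measurable iff every preimage {w : X(w) = x} is an element of F."""
    return all(frozenset(w for w in omega if X[w] == x) in F
               for x in set(X.values()))

# The first attempt at X_0, taking four distinct values.
X0_bad = {("u", "u"): 1, ("u", "d"): 2, ("d", "u"): 3, ("d", "d"): 4}
# The constant X_0, and X_1, X_2 as defined above.
X0 = {w: 1 for w in omega}
X1 = {("u", "u"): 1, ("u", "d"): 1, ("d", "u"): 2, ("d", "d"): 2}
X2 = {("u", "u"): 1, ("u", "d"): 2, ("d", "u"): 3, ("d", "d"): 4}

print(is_measurable(X0_bad, F0))   # False: {w : X0_bad(w) = 1} = {(u,u)} is not in F0
print(all(is_measurable(X, F) for X, F in [(X0, F0), (X1, F1), (X2, F2)]))
# True: the process {X0, X1, X2} is adapted to the filtration {F0, F1, F2}
```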

Conditional probability and expectation - An Example

The following definitions are from Klebaner [1]. Given a probability space (Ω, 2^Ω, P), the conditional probability of an event A given a σ-field F, generated by a partition of Ω denoted {D_1, D_2, ..., D_k} (by the definition of a partition, D_i ∩ D_j = ∅ for any i ≠ j and ∪_{i=1}^{k} D_i = Ω), is a function P(A|F) : Ω → R defined to be

P(A|F)(ω) = Σ_{i=1}^{k} P(A|D_i) I_{D_i}(ω)

Given a random variable X : Ω → R, which takes on the values x_i ∈ R, i ∈ {1, 2, ..., k}, and a σ-field F: since X is a function from Ω to R, every element ω ∈ Ω is mapped to one and only one value in R, and therefore the sets {ω : X(ω) = x_i}, i ∈ {1, 2, ..., k}, must form a partition of Ω. Call this partition {A_i}, where A_i = {ω : X(ω) = x_i}. The conditional expectation of X given F is defined to be

E(X|F) = Σ_{i=1}^{k} x_i P(A_i|F)

In addition, if X is F-measurable, then E(X|F) = X.
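As a sketch of how these two definitions can be evaluated directly (the function names below and the choice of the uniform measure are ours, not from Klebaner), the conditional probability and the conditional expectation given the σ-field generated by a partition can be written as functions of ω:

```python
from itertools import product

omega = set(product("ud", repeat=2))
P = {w: 1 / 4 for w in omega}   # assumed: the uniform measure on omega

def prob(event):
    """P(E) for an event E, i.e. a subset of omega."""
    return sum(P[w] for w in event)

def cond_prob(A, partition):
    """P(A|F) as a function of w, where F is generated by the partition:
    P(A|F)(w) = sum_i P(A|D_i) * I_{D_i}(w)."""
    def value(w):
        D = next(D for D in partition if w in D)   # the unique block containing w
        return prob(A & D) / prob(D)               # only that indicator is 1 at w
    return value

def cond_exp(X, partition):
    """E(X|F) as a function of w: E(X|F)(w) = sum_i x_i * P(A_i|F)(w),
    where A_i = {w : X(w) = x_i}."""
    def value(w):
        return sum(x * cond_prob({v for v in omega if X[v] == x}, partition)(w)
                   for x in set(X.values()))
    return value
```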

Following on from our previous example, recall that

Ω = {(u, u), (u, d), (d, u), (d, d)}
F_1 = {∅, {(u, u), (u, d)}, {(d, u), (d, d)}, Ω}
D_1 = {(u, u), (u, d)}
D_2 = {(d, u), (d, d)}

and X_1 is defined to be

X_1((u, u)) = 1,  X_1((u, d)) = 1,  X_1((d, u)) = 2,  X_1((d, d)) = 2

Applying the definitions, given any event A ∈ 2^Ω,

P(A|F_1)(ω) = P(A|D_1) I_{D_1}(ω) + P(A|D_2) I_{D_2}(ω)

is a random variable, since I_{D_1} and I_{D_2} are indicator random variables (while P(A|D_1) and P(A|D_2) are constants). Expanding further,

P(A|D_1) = P(A ∩ D_1)/P(D_1) = P(A ∩ {(u, u), (u, d)})/P({(u, u), (u, d)}) = P(A ∩ {(u, u), (u, d)})/(1/2)

P(A|D_2) = P(A ∩ D_2)/P(D_2) = P(A ∩ {(d, u), (d, d)})/P({(d, u), (d, d)}) = P(A ∩ {(d, u), (d, d)})/(1/2)

assuming P(ω) = 1/4 for each ω ∈ Ω = {(u, u), (u, d), (d, u), (d, d)}. In particular, note that P(D_i|D_j) = 1 when i = j and P(D_i|D_j) = 0 when i ≠ j, since D_i ∩ D_j = ∅ when i ≠ j. Recall that X_1 is F_1-measurable, meaning that the partition of Ω generated by X_1 is exactly the partition generating F_1. Therefore A_1 = D_1 = {(u, u), (u, d)} and A_2 = D_2 = {(d, u), (d, d)}. Then

E(X_1|F_1) = (1)P(A_1|F_1) + (2)P(A_2|F_1)
= (1)[P(A_1|D_1) I_{D_1} + P(A_1|D_2) I_{D_2}] + (2)[P(A_2|D_1) I_{D_1} + P(A_2|D_2) I_{D_2}]
= (1)[P(D_1|D_1) I_{D_1} + P(D_1|D_2) I_{D_2}] + (2)[P(D_2|D_1) I_{D_1} + P(D_2|D_2) I_{D_2}]
= (1)[P(D_1|D_1) I_{D_1} + 0] + (2)[0 + P(D_2|D_2) I_{D_2}]
= (1) I_{D_1} + (2) I_{D_2}
= X_1
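As a numerical sanity check of this derivation (a self-contained sketch, again assuming the uniform measure P(ω) = 1/4; the function names are ours), we can evaluate E(X_1|F_1)(ω) term by term and confirm that it agrees with X_1(ω) for every ω:

```python
from itertools import product

omega = set(product("ud", repeat=2))
P = {w: 1 / 4 for w in omega}   # assumed uniform measure, as above
D1 = {("u", "u"), ("u", "d")}
D2 = {("d", "u"), ("d", "d")}
X1 = {("u", "u"): 1, ("u", "d"): 1, ("d", "u"): 2, ("d", "d"): 2}

def prob(event):
    return sum(P[w] for w in event)

def cond_exp_X1(w):
    """E(X1|F1)(w) = 1*P(A1|F1)(w) + 2*P(A2|F1)(w), with A1 = D1, A2 = D2."""
    total = 0.0
    for x, A in [(1, D1), (2, D2)]:
        for D in (D1, D2):
            if w in D:                      # I_D(w) = 1 only for the block containing w
                total += x * prob(A & D) / prob(D)
    return total

print(all(cond_exp_X1(w) == X1[w] for w in omega))   # True: E(X1|F1) = X1
```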

References
[1] Klebaner, F., Introduction to Stochastic Calculus with Applications, Imperial College Press, 2nd edition, 2005. Chapter 2.
