
Course : T0264 Artificial Intelligence
Year : 2013

LECTURE 14
Probabilistic Reasoning Over Time

Learning Outcomes
At the end of this session, students will be able to:
LO 5 : Apply various techniques to an agent when acting under uncertainty

T0264 - Artificial Intelligence

Outline
1. Time and Uncertainty
2. Inference in Temporal Models
3. Hidden Markov Models
4. Dynamic Bayesian Networks
5. Summary

Time and Uncertainty

Markov Chains

Markov Process
Markov Property: the state of the system at time t+1 depends only on the state of the system at time t:

  P(X_{t+1} = x_{t+1} | X_1 = x_1, ..., X_t = x_t) = P(X_{t+1} = x_{t+1} | X_t = x_t)

  X1 → X2 → X3 → X4 → X5

Stationary Assumption: transition probabilities are independent of the time t:

  P(X_{t+1} = b | X_t = a) = p_ab

This gives a bounded-memory transition model.

A Markov chain is a simple type of stochastic process with many social-science applications.
The Markov decision process is one of the well-known methods for solving optimization problems in stochastic modeling theory.

Consider a process that occupies one of n possible states each period. If the process is currently in state i, then it will occupy state j in the next period with probability P(i, j).
Crucially, transition probabilities are determined entirely by the current state: no further history dependence is permitted, and these probabilities remain fixed over time.
Under these conditions, this process is a Markov chain process, and the sequence of states generated over time is a Markov chain.
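The definition above can be sketched as a short simulation. The 2-state transition matrix below is a made-up illustration (not from the slides); the point is that each step depends only on the current state.

```python
import random

# Hypothetical 2-state transition matrix: P[i][j] = P(next = j | current = i).
P = {
    1: {1: 0.7, 2: 0.3},
    2: {1: 0.4, 2: 0.6},
}

def next_state(i, rng):
    """Sample the next state given only the current state i (Markov property)."""
    r, acc = rng.random(), 0.0
    for j, p in P[i].items():
        acc += p
        if r < acc:
            return j
    return j  # guard against floating-point rounding at the boundary

def run_chain(start, steps, seed=0):
    """Generate a state sequence of the given length starting from `start`."""
    rng = random.Random(seed)
    chain = [start]
    for _ in range(steps):
        chain.append(next_state(chain[-1], rng))
    return chain

print(run_chain(start=1, steps=10))
```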

Suppose that the possible states of the process are given by the finite set S = {1, 2, ..., n}. Let st ∈ S denote the state occupied by the process in period t ∈ {0, 1, 2, ...}. Further suppose that the process moves from state i to state j with probability P(i, j), independent of t.
The parameters of a Markov chain process can thus be summarized by an n × n transition matrix, written as:

  P = [ P(1,1)  P(1,2)  ...  P(1,n)
        P(2,1)  P(2,2)  ...  P(2,n)
        ...
        P(n,1)  P(n,2)  ...  P(n,n) ]

Example: a Markov process with 3 states and a 3 × 3 transition matrix, together with the transition diagram for that matrix (both shown as a figure on the slide).

Markov Chains Example:
A child with a lower-class parent has a 60% chance of remaining in the lower class, a 40% chance of rising to the middle class, and no chance of reaching the upper class. A child with a middle-class parent has a 30% chance of falling to the lower class, a 40% chance of remaining middle class, and a 30% chance of rising to the upper class. Finally, a child with an upper-class parent has no chance of falling to the lower class, a 70% chance of falling to the middle class, and a 30% chance of remaining in the upper class.
Assume that 20% of the population belongs to the lower class, 30% to the middle class, and 50% to the upper class.

Solution:
Transition matrix (rows = parent's class, columns = child's class, ordered lower, middle, upper):

           lower  middle  upper
  lower     0.6    0.4     0.0
  middle    0.3    0.4     0.3
  upper     0.0    0.7     0.3

(A transition diagram for this matrix appeared as a figure on the slide.)
The initial condition π(0) = (0.2, 0.3, 0.5) reflects the individuals' class distribution.

Solution (continued):
To illustrate, consider the population dynamics over the next 4 generations, obtained by repeated multiplication: π(t+1) = π(t) P. (The resulting table of distributions appeared as a figure on the slide.)
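The four-generation computation can be reproduced in a few lines of Python. The matrix follows the example above, with the middle-class row taken as (0.3, 0.4, 0.3) so that each row sums to 1.

```python
# Social-mobility Markov chain: rows/columns ordered lower, middle, upper.
P = [
    [0.6, 0.4, 0.0],  # lower-class parent
    [0.3, 0.4, 0.3],  # middle-class parent
    [0.0, 0.7, 0.3],  # upper-class parent
]

def step(dist, P):
    """One generation of the chain: pi(t+1) = pi(t) P."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [0.2, 0.3, 0.5]  # initial condition pi(0)
for t in range(1, 5):
    dist = step(dist, P)
    print(f"generation {t}:", [round(p, 4) for p in dist])
# generation 1 is (0.21, 0.55, 0.24); every distribution still sums to 1
```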

Inference in Temporal Models


Hidden Markov Models (HMM)

Hidden states:  H1 → H2 → … → Hi → … → HL-1 → HL
Observed data:  X1   X2   …   Xi   …   XL-1   XL

Each hidden state Hi emits the observed datum Xi.

Coin-Tossing Example

Transition probabilities:
  fair → fair: 0.9       fair → loaded: 0.1
  loaded → loaded: 0.9   loaded → fair: 0.1

Emission probabilities:
  fair:   P(head) = 1/2,  P(tail) = 1/2
  loaded: P(head) = 3/4,  P(tail) = 1/4

Hidden states (Fair/Loaded):  H1 → H2 → … → Hi → … → HL-1 → HL
Observed data (Head/Tail):    X1   X2   …   Xi   …   XL-1   XL
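As a sketch, filtering (the forward algorithm) can track the posterior over fair/loaded after each toss, using the transition and emission probabilities above. The uniform initial distribution is an assumption, since the slide does not give one.

```python
STATES = ["fair", "loaded"]

TRANS = {  # P(next state | current state), from the slide
    "fair":   {"fair": 0.9, "loaded": 0.1},
    "loaded": {"fair": 0.1, "loaded": 0.9},
}

EMIT = {  # P(observation | state), from the slide
    "fair":   {"head": 0.5,  "tail": 0.5},
    "loaded": {"head": 0.75, "tail": 0.25},
}

def forward(observations, prior=None):
    """Return P(H_t | x_1..x_t) after each observation (filtering)."""
    belief = prior or {s: 0.5 for s in STATES}  # assumed uniform prior
    history = []
    for obs in observations:
        # Predict: push the belief through the transition model.
        predicted = {s: sum(belief[p] * TRANS[p][s] for p in STATES)
                     for s in STATES}
        # Update: weight by the emission probability, then normalize.
        unnorm = {s: predicted[s] * EMIT[s][obs] for s in STATES}
        z = sum(unnorm.values())
        belief = {s: unnorm[s] / z for s in STATES}
        history.append(belief)
    return history

beliefs = forward(["head", "head", "head", "head"])
print(beliefs[-1])  # a long run of heads shifts belief toward "loaded"
```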


Dynamic Bayesian Networks (DBN)

DBNs are directed graphical models of stochastic processes.
DBNs generalize HMMs and Kalman filter models by representing the hidden and observed state in terms of state variables, which can have complex interdependencies.
The graphical structure provides an easy way to specify these conditional independencies.
A DBN is a compact parameterization of the state-space model.
DBNs are an extension of Bayesian networks to handle temporal models.
Time-invariant: the term "dynamic" means that we are modeling a dynamic model, not that the networks change over time.
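A minimal sketch of these ideas, with made-up numbers (none of the probabilities here come from the slides): two binary state variables whose transition model is factored (A' depends only on A, B' depends on A and B), with exact filtering by enumerating the small joint state space.

```python
from itertools import product

# Hypothetical factored transition and sensor models for one DBN slice.
P_A = {True: 0.7, False: 0.3}  # P(A' = true | A)
P_B = {(True, True): 0.9, (True, False): 0.6,
       (False, True): 0.4, (False, False): 0.1}  # P(B' = true | A, B)
P_OBS = {True: 0.8, False: 0.2}  # P(obs = true | B')

def step(belief, obs):
    """One predict-update filtering step over the joint state (A, B)."""
    new = {}
    for a2, b2 in product([True, False], repeat=2):
        p = 0.0
        for (a, b), w in belief.items():
            pa = P_A[a] if a2 else 1 - P_A[a]
            pb = P_B[(a, b)] if b2 else 1 - P_B[(a, b)]
            p += w * pa * pb          # transition, factored per variable
        like = P_OBS[b2] if obs else 1 - P_OBS[b2]
        new[(a2, b2)] = p * like      # weight by the observation
    z = sum(new.values())
    return {s: v / z for s, v in new.items()}  # normalize

belief = {s: 0.25 for s in product([True, False], repeat=2)}  # uniform prior
belief = step(belief, obs=True)
print(belief)
```

Only the conditional tables P(A'|A) and P(B'|A,B) are specified, rather than a full table over the joint state; this is the compact parameterization the bullet points describe.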


DBNs vs. HMMs


Unrolling a Dynamic Bayesian Network
A DBN is specified as a template over two consecutive time slices; it is unrolled by copying that template once per time step, for as many steps as there are observations.


Summary
The changing state of the world is handled by using a set of random variables to represent the state at each point in time.
Representations can be designed to satisfy the Markov property, so that the future is independent of the past given the present. Combined with the assumption that the process is stationary, that is, that the dynamics do not change over time, this greatly simplifies the representation.
The principal inference tasks in temporal models are filtering, prediction, smoothing, and computing the most likely explanation. Each of these can be achieved using simple, recursive algorithms whose run time is linear in the length of the sequence. Two families of temporal models were studied in more depth: hidden Markov models and dynamic Bayesian networks.
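The "most likely explanation" task can be sketched with the Viterbi algorithm on the fair/loaded coin model from earlier in the lecture; the uniform initial distribution is again an assumption.

```python
TRANS = {"fair":   {"fair": 0.9, "loaded": 0.1},
         "loaded": {"fair": 0.1, "loaded": 0.9}}
EMIT = {"fair":   {"head": 0.5,  "tail": 0.5},
        "loaded": {"head": 0.75, "tail": 0.25}}

def viterbi(obs):
    """Return the most likely hidden-state sequence for the observations."""
    states = list(TRANS)
    # best[s] = (probability of the best path ending in s, that path)
    best = {s: (0.5 * EMIT[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        best = {
            s: max(
                ((p * TRANS[prev][s] * EMIT[s][o], path + [s])
                 for prev, (p, path) in best.items()),
                key=lambda t: t[0],
            )
            for s in states
        }
    return max(best.values(), key=lambda t: t[0])[1]

print(viterbi(["head"] * 6 + ["tail"] * 2))  # a long head run suggests "loaded"
```

Unlike filtering, which returns a distribution at each step, Viterbi keeps only the single best path into each state, so its run time is also linear in the sequence length.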

References
Stuart Russell, Peter Norvig. 2010. Artificial Intelligence: A Modern Approach. Pearson Education, New Jersey. ISBN 9780132071482. Chapter 15.
Elaine Rich, Kevin Knight, Shivashankar B. Nair. 2010. Artificial Intelligence. McGraw-Hill Education, New York. Chapter 8.
Probabilistic Reasoning over Time, Part I: http://www-users.cselabs.umn.edu/classes/Spring-2012/csci5512/slides/lec8.pdf
Probabilistic Reasoning over Time, Part II: http://www-users.cselabs.umn.edu/classes/Spring-2012/csci5512/slides/lec9.pdf

<< CLOSING >>

End of Session 14
Good Luck
