Andrei Barbu
Every event has a probability of occurring, some event always occurs, and combining mutually exclusive events adds their probabilities.
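Written out, these are the three standard axioms the sentence above summarizes:
$P(A) \ge 0$ for every event $A$, $\qquad P(\Omega) = 1$, $\qquad P(A \cup B) = P(A) + P(B)$ when $A$ and $B$ are disjoint.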
Negative p
I’m offering to pay out R at a price of pR, which is negative: I pay you −pR up front and can only lose.
Probabilities don’t sum to one
Take out a bet that always pays off.
If the sum is below 1, I pay out R for less than R dollars (sketched numerically below).
If the sum is above 1, buy the bet and sell it back to me for more.
When X and Y are incompatible, the value of a bet on either isn’t the sum of their values
If the value is bigger, I still pay out more.
If the value is smaller, sell me my own bets.
Decisions under probability are “rational”.
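A minimal numeric sketch (my own example, not from the slides) of the “sum below 1” case: the quoted probabilities over exhaustive, mutually exclusive outcomes sum to 0.9, so buying every ticket costs 0.9 but always pays 1, and the bookmaker loses 0.1 no matter which outcome occurs.

```python
# Dutch-book exploit when quoted probabilities sum to less than 1.
# Prices, outcomes, and the payout R are made up for illustration.

R = 1.0                                  # each ticket pays R if its outcome occurs
quoted = {"A": 0.4, "B": 0.3, "C": 0.2}  # bookie's (incoherent) probabilities, sum = 0.9

cost = sum(p * R for p in quoted.values())   # price of holding every ticket
print(f"cost of all tickets: {cost:.2f}")    # 0.90

# Exactly one outcome occurs, so exactly one ticket pays R.
for outcome in quoted:
    profit = R - cost
    print(f"if {outcome} occurs: profit = {profit:+.2f}")  # +0.10 regardless of outcome
```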
Mean: $\mu_X = E[X] = \sum_x x\,p(x)$
Variance: $\sigma_X^2 = \mathrm{var}(X) = E[(X - \mu_X)^2]$
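A short sketch (my own made-up pmf, not from the slides) computing both quantities directly from the definitions above:

```python
pmf = {1: 0.2, 2: 0.5, 3: 0.3}          # discrete distribution over x = 1, 2, 3

mean = sum(x * p for x, p in pmf.items())                 # mu_X = sum_x x p(x)
var  = sum((x - mean) ** 2 * p for x, p in pmf.items())   # E[(X - mu_X)^2]
print(mean, var)                                           # 2.1 and 0.49
```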
Mean
Variance
Covariance
Correlation
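The slide only names covariance and correlation; for reference, their standard definitions are:
$\mathrm{cov}(X, Y) = E[(X - \mu_X)(Y - \mu_Y)]$, $\qquad \mathrm{corr}(X, Y) = \dfrac{\mathrm{cov}(X, Y)}{\sigma_X \sigma_Y}$.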
Common Probability Distributions (slides: Prof. Kai Arras, Human-Oriented Robotics, Social Robotics Lab)
Bernoulli Distribution
Parameters
• $p$: probability of observing a success
• Probability mass function: $P(X = 1) = p$, $P(X = 0) = 1 - p$
• Notation: $X \sim \mathrm{Ber}(p)$
Expectation
• $E[X] = p$
Variance
• $\mathrm{var}(X) = p(1 - p)$
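A quick check, with an arbitrary $p$ of my choosing, that samples from a Bernoulli match the stated mean $p$ and variance $p(1-p)$:

```python
import random

p = 0.3
samples = [1 if random.random() < p else 0 for _ in range(100_000)]

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(mean, var)   # close to 0.3 and 0.3 * 0.7 = 0.21
```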
Gaussian Distribution
Parameters
• $\mu$: mean vector
• $\Sigma$: covariance matrix
• Notation: $X \sim \mathcal{N}(\mu, \Sigma)$
Expectation
• $E[X] = \mu$
Variance
• $\mathrm{var}(X) = \Sigma$
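A short sketch (parameters made up) that draws from a multivariate Gaussian and confirms the sample mean and covariance recover $\mu$ and $\Sigma$:

```python
import numpy as np

mu = np.array([1.0, -2.0])                     # mean vector
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])                 # covariance matrix

rng = np.random.default_rng(0)
X = rng.multivariate_normal(mu, Sigma, size=100_000)

print(X.mean(axis=0))                          # approximately mu
print(np.cov(X, rowvar=False))                 # approximately Sigma
```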
Bayesian updates
$P(\theta \mid X) = \dfrac{P(X \mid \theta)\,P(\theta)}{P(X)}$
P(θ): I have some prior over how good a player is: informative vs. uninformative.
P(X | θ): I think dart throwing is a stochastic process; every player has an unknown mean.
X: I observe them throwing darts.
P(X): across all parameters, this is how likely the data is.
Normalization is usually hard to compute, but it’s often not needed.
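A minimal sketch of the dart example, with made-up numbers, treating it as a conjugate Gaussian update: a Gaussian prior over the player's unknown mean and Gaussian throw noise with a known variance, so the posterior has a closed form and $P(X)$ never needs to be computed.

```python
import numpy as np

# Prior over the player's unknown mean accuracy (distance from bullseye, in cm).
mu0, tau0 = 10.0, 5.0        # prior mean and prior std (informative if tau0 is small)
sigma = 3.0                  # known throw-to-throw noise std (assumed)

throws = np.array([7.2, 8.5, 6.9, 7.8])   # observed distances X (made up)

# Closed-form conjugate update: posterior precision is the sum of precisions.
n = len(throws)
post_prec = 1 / tau0**2 + n / sigma**2
post_var = 1 / post_prec
post_mean = post_var * (mu0 / tau0**2 + throws.sum() / sigma**2)

print(post_mean, post_var ** 0.5)   # posterior mean pulled from the prior 10.0 toward the data mean 7.6
```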
$P(c \mid X) \propto P(c) \prod_{k=1}^{K} P(X_k \mid c)$
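A small sketch (my own toy numbers) of using the proportionality above as a naive Bayes classifier: score each class by $P(c)\prod_k P(X_k \mid c)$ and take the argmax; the normalizer $P(X)$ cancels because it is the same for every class.

```python
import math

priors = {"good": 0.5, "bad": 0.5}                       # P(c), made up
likelihoods = {                                           # P(X_k = 1 | c), made up
    "good": [0.9, 0.7, 0.8],
    "bad":  [0.2, 0.4, 0.3],
}

x = [1, 0, 1]   # observed binary features X_1..X_K

scores = {}
for c in priors:
    log_score = math.log(priors[c])
    for k, xk in enumerate(x):
        p = likelihoods[c][k]
        log_score += math.log(p if xk == 1 else 1 - p)   # Bernoulli P(X_k | c)
    scores[c] = log_score

print(max(scores, key=scores.get), scores)   # "good" wins for this x
```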