
Estimation theory

Johan E. Carlson

Dept. of Computer Science, Electrical and Space Engineering

Luleå University of Technology

Lecture 1

1 / 26

Outline

1. General course information

2. Introduction (Chapter 1)

2.1. Estimation theory in Signal Processing

2.2. Problem formulation

2.3. Assessing estimator performance

3. Minimum variance unbiased (MVUB) estimation (Chapter 2)

3.1. Unbiased estimators

3.2. Existence of the MVUB estimator

3.3. Finding the MVUB estimator

3.4. Extension to a vector parameter

2 / 26


http://staff.www.ltu.se/~johanc/estimation_theory/

Textbook:

Steven M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Vol. 1. Prentice Hall, 1993. ISBN-10: 0133457117

Examination:

Completion of theoretical homework assignments (written

solutions to be handed in to me).

Completion of computer assignments (short lab reports to me).

3 / 26

Modern estimation theory is central to many electrical systems, e.g.

Radar, Sonar, Acoustics

Speech

Image and video

Biomedicine (Biomedical engineering)

Communications (Channel estimation, synchronization, etc.)

Automatic control

Seismology

4 / 26


… materials

Locate flaws/cracks in internal layers

5 / 26

Simply put, given an observed N-point data set

{x[0], x[1], . . . , x[N − 1]}

which depends on an unknown parameter θ, we define the estimator

θ̂ = g(x[0], x[1], . . . , x[N − 1]),

where g is some function.

6 / 26


Mathematically, the data are described in terms of probability density functions (PDF), i.e.

p(x[0], x[1], . . . , x[N − 1]; θ),

where the semicolon (;) denotes that the PDF is parameterized by the unknown parameter θ.

The estimation problem is thus to find (infer) the value of θ from the observations. The PDF should be chosen so that

It takes into account any prior knowledge or constraints

It is mathematically tractable

7 / 26

Example: The Dow-Jones index

[Figure: Dow-Jones average (approximately 2800 to 3200) versus day number (0 to 100)]

8 / 26


A reasonable model could then be

x[n] = A + Bn + w[n],    n = 0, 1, . . . , N − 1,

where w[n] is white Gaussian noise (WGN), i.e. each sample of w[n] has the PDF N(0, σ²) and is uncorrelated with all the other samples. The unknown parameters can be arranged in the vector θ = [A B]ᵀ. The PDF of the data is then

$$p(\mathbf{x}; \theta) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}\bigl(x[n] - A - Bn\bigr)^2\right].$$

9 / 26
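As a complement to the formula above, the following Python sketch simulates the straight-line-plus-WGN model and evaluates the logarithm of the PDF given on the slide. The values of A, B, σ and N are illustrative choices, not taken from the lecture.

import numpy as np

# Minimal sketch of the straight-line-plus-WGN model from the slide.
# A, B, sigma and N below are illustrative, not from the lecture.
rng = np.random.default_rng(0)
N = 100
A, B, sigma = 2950.0, 2.0, 25.0

n = np.arange(N)
x = A + B * n + sigma * rng.standard_normal(N)   # x[n] = A + B*n + w[n]

def log_pdf(x, A, B, sigma):
    """Log of p(x; theta) for theta = [A, B]^T, as given on the slide."""
    N = len(x)
    resid = x - A - B * np.arange(N)
    return -0.5 * N * np.log(2 * np.pi * sigma**2) - np.sum(resid**2) / (2 * sigma**2)

print(log_pdf(x, A, B, sigma))    # log-likelihood at the true parameters
print(log_pdf(x, A, 0.0, sigma))  # a wrong slope gives a much lower value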

Example: The Dow-Jones index

The straight-line assumption is consistent with the data. A models the offset and B models the linear increase over time.

The choice of the Gaussian PDF makes the model mathematically tractable.

Here the parameters are assumed to be unknown, but deterministic.

One could also assume that θ is random, but constrained, say A lies in [2800, 3200] and is uniformly distributed over this interval. This would lead to a Bayesian approach, and the joint PDF would be

p(x, θ) = p(x|θ) p(θ).

10 / 26
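A quick sketch of how the Bayesian variant could be simulated (illustrative only; B, σ and N are hypothetical, and only the uniform prior on A from the slide is used):

import numpy as np

# Bayesian variant: A is itself random, uniform on [2800, 3200] as on the slide.
# B, sigma and N are illustrative. Drawing (A, x) in two stages corresponds to
# the joint PDF p(x, theta) = p(x | theta) p(theta).
rng = np.random.default_rng(5)
N, B, sigma = 100, 2.0, 25.0
n = np.arange(N)

A = rng.uniform(2800.0, 3200.0)                  # draw A from its prior p(A)
x = A + B * n + sigma * rng.standard_normal(N)   # then draw x from p(x | A)
print(A, x[:5])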


Example: estimating a constant (DC) level A in noise (i.e. a constant signal corrupted by noise).

[Figure: a realization of x[n] (values roughly −1 to 3) versus sample index n (0 to 100)]

x[n] = A + w[n],

where w[n] is N(0, σ²).

11 / 26

How to estimate A?

A reasonable estimator would be the sample mean

$$\hat{A} = \frac{1}{N}\sum_{n=0}^{N-1} x[n].$$

How close will Â be to A?

Are there any better estimators than the sample mean?

12 / 26


Another candidate estimator is

Â = x[0],

which does not use all available data.

But, for any given realization of x[n], it might actually be closer to the true A than the sample mean.

So, how do we assess the performance?

We need to consider the estimators from a statistical perspective!

Let's consider E(Â) and var(Â).

13 / 26

For the first estimator,

$$E(\hat{A}) = E\left[\frac{1}{N}\sum_{n=0}^{N-1} x[n]\right] = \frac{1}{N}\sum_{n=0}^{N-1} E(x[n]) = A.$$

For the second estimator,

$$E(\hat{A}) = E(x[0]) = A.$$

14 / 26


For the first estimator,

$$\operatorname{var}(\hat{A}) = \operatorname{var}\left[\frac{1}{N}\sum_{n=0}^{N-1} x[n]\right] = \frac{1}{N^2}\sum_{n=0}^{N-1}\operatorname{var}(x[n]) = \frac{N\sigma^2}{N^2} = \frac{\sigma^2}{N}.$$

For the second estimator,

$$\operatorname{var}(\hat{A}) = \operatorname{var}(x[0]) = \sigma^2.$$

15 / 26

So, the expected value of both estimators is E(Â) = A, i.e. they are both unbiased.

The variance of the second estimator is σ², which is larger than the variance σ²/N of the first estimator.

It appears that the sample mean is indeed a better estimator than x[0]!

16 / 26
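A small Monte Carlo sketch (not part of the lecture; A, σ, N and the number of trials are illustrative) confirms these results empirically: both estimators average to A, but their variances come out near σ²/N and σ², respectively.

import numpy as np

# Monte Carlo check of the bias and variance results above (illustrative values).
rng = np.random.default_rng(1)
A, sigma, N, trials = 1.0, 1.0, 50, 100_000

x = A + sigma * rng.standard_normal((trials, N))  # x[n] = A + w[n], one row per realization
A_hat1 = x.mean(axis=1)   # sample-mean estimator
A_hat2 = x[:, 0]          # "first sample" estimator

print(A_hat1.mean(), A_hat1.var())   # ~ A and ~ sigma^2 / N
print(A_hat2.mean(), A_hat2.var())   # ~ A and ~ sigma^2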


In this course, we mainly consider estimation of unknown but deterministic parameters.

We will restrict the search for estimators to those that on average yield the true parameter value, i.e. to unbiased estimators.

Among all possible unbiased estimators, we will then look for the one with the minimum variance, i.e. the minimum variance unbiased (MVUB) estimator.

17 / 26

Unbiased estimators

An estimator is said to be unbiased if

E(θ̂) = θ

for all possible values of θ.

If θ̂ = g(x), this means that

$$E(\hat{\theta}) = \int g(\mathbf{x})\, p(\mathbf{x}; \theta)\, d\mathbf{x} = \theta$$

for all θ.

18 / 26
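Since the condition must hold for every value of θ, one way to illustrate it is to check it empirically over a grid of θ values. The sketch below does this for the sample mean in white Gaussian noise; the θ grid, N and the unit noise variance are illustrative assumptions, not from the lecture.

import numpy as np

# Empirical check that E(theta_hat) = theta for several values of theta.
rng = np.random.default_rng(2)
N, trials = 20, 200_000

for theta in [-5.0, 0.0, 3.0, 10.0]:
    x = theta + rng.standard_normal((trials, N))   # x[n] = theta + w[n], w[n] ~ N(0, 1)
    theta_hat = x.mean(axis=1)                     # sample mean as g(x)
    print(theta, theta_hat.mean())                 # close to theta for every theta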


A natural optimality criterion is the mean square error (MSE),

$$\operatorname{mse}(\hat{\theta}) = E\left[\bigl(\hat{\theta} - \theta\bigr)^2\right].$$

19 / 26

Unfortunately, this criterion often leads to unrealizable estimators, since

$$\operatorname{mse}(\hat{\theta}) = E\left[\Bigl(\hat{\theta} - E(\hat{\theta}) + E(\hat{\theta}) - \theta\Bigr)^2\right] = \operatorname{var}(\hat{\theta}) + \bigl[E(\hat{\theta}) - \theta\bigr]^2 = \operatorname{var}(\hat{\theta}) + b^2(\theta),$$

which shows that the MSE depends both on the variance of the estimator and on the bias. If the bias depends on the parameter itself, we're in trouble!

Let's restrict ourselves to searching only for unbiased estimators!

20 / 26
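The decomposition can be verified numerically. The sketch below uses a deliberately biased "shrinkage" estimator Â = a · (sample mean), an illustrative choice that is not from the lecture; note that its bias (a − 1)A depends on the unknown A itself, which is exactly the difficulty mentioned above. A, σ, N and a are hypothetical values.

import numpy as np

# Numerical check of mse = var + bias^2 for a biased estimator (illustrative values).
rng = np.random.default_rng(3)
A, sigma, N, trials, a = 2.0, 1.0, 10, 500_000, 0.8

x = A + sigma * rng.standard_normal((trials, N))
A_hat = a * x.mean(axis=1)          # biased: E(A_hat) = a*A != A for a != 1

mse  = np.mean((A_hat - A) ** 2)
var  = A_hat.var()
bias = A_hat.mean() - A
print(mse, var + bias**2)           # the two numbers agree up to Monte Carlo error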


Does an MVUB estimator always exist? No!

21 / 26

A counterexample to the existence

Assume that we have two independent observations x[0] and x[1] with PDFs

$$x[0] \sim \mathcal{N}(\theta, 1), \qquad x[1] \sim \begin{cases}\mathcal{N}(\theta, 1), & \theta \ge 0 \\ \mathcal{N}(\theta, 2), & \theta < 0.\end{cases}$$

22 / 26


Consider the two estimators

$$\hat{\theta}_1 = \frac{1}{2}\bigl(x[0] + x[1]\bigr), \qquad \hat{\theta}_2 = \frac{2}{3}x[0] + \frac{1}{3}x[1],$$

with variances

$$\operatorname{var}(\hat{\theta}_1) = \frac{1}{4}\bigl(\operatorname{var}(x[0]) + \operatorname{var}(x[1])\bigr), \qquad \operatorname{var}(\hat{\theta}_2) = \frac{4}{9}\operatorname{var}(x[0]) + \frac{1}{9}\operatorname{var}(x[1]).$$

23 / 26

As a result (looking back at the PDFs), we have that

$$\operatorname{var}(\hat{\theta}_1) = \begin{cases}18/36, & \theta \ge 0 \\ 27/36, & \theta < 0\end{cases} \qquad \operatorname{var}(\hat{\theta}_2) = \begin{cases}20/36, & \theta \ge 0 \\ 24/36, & \theta < 0.\end{cases}$$

So, for θ ≥ 0 the minimum variance is 18/36 (estimator 1), and for θ < 0 it is 24/36 (estimator 2). Hence, no single estimator has the uniformly minimum variance for all θ.

24 / 26
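The two variance expressions from the previous slide can be evaluated directly for the two regimes of θ, reproducing the numbers above (a short sketch; θ = ±1 is chosen only to select the regime):

# Variances of the two estimators in the counterexample, computed from
# var(x[0]) = 1 and var(x[1]) = 1 (theta >= 0) or 2 (theta < 0).
def variances(theta):
    v0, v1 = 1.0, (1.0 if theta >= 0 else 2.0)
    var1 = 0.25 * (v0 + v1)              # theta_hat_1 = (x[0] + x[1]) / 2
    var2 = (4 / 9) * v0 + (1 / 9) * v1   # theta_hat_2 = (2/3) x[0] + (1/3) x[1]
    return var1, var2

print(variances(+1.0))   # (0.5, 0.556)  -> 18/36 vs 20/36: estimator 1 wins
print(variances(-1.0))   # (0.75, 0.667) -> 27/36 vs 24/36: estimator 2 wins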


In general, there is no straightforward procedure for finding the MVUB estimator.

There are some possible approaches though:

Determine the Cramér-Rao lower bound (CRLB) and check if some estimator satisfies it (Chapters 3 and 4).

Apply the Rao-Blackwell-Lehmann-Scheffé (RBLS) theorem (Chapter 5).

Further restrict the estimators to also be linear, and then find the MVUB estimator within this class (Chapter 6).

25 / 26

If θ = [θ₁, θ₂, . . . , θₚ]ᵀ is a vector of unknown parameters, we say that an estimator is unbiased if

$$E(\hat{\theta}_i) = \theta_i, \qquad i = 1, 2, \ldots, p.$$

By defining

$$E(\hat{\boldsymbol{\theta}}) = \begin{bmatrix} E(\hat{\theta}_1) \\ E(\hat{\theta}_2) \\ \vdots \\ E(\hat{\theta}_p) \end{bmatrix},$$

an unbiased estimator then has the property E(θ̂) = θ.

26 / 26
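As an illustration of componentwise unbiasedness (a sketch, not part of the lecture), the snippet below estimates θ = [A B]ᵀ in the straight-line model of slide 9 by ordinary least squares, chosen here only because it is easy to compute and unbiased for this model; A, B, σ, N and the trial count are illustrative values.

import numpy as np

# Componentwise unbiasedness check for theta = [A, B]^T in x[n] = A + B*n + w[n].
rng = np.random.default_rng(4)
A, B, sigma, N, trials = 1.0, 0.5, 2.0, 30, 50_000

n = np.arange(N)
H = np.column_stack([np.ones(N), n])      # observation matrix for the line model
theta_hats = np.empty((trials, 2))

for k in range(trials):
    x = A + B * n + sigma * rng.standard_normal(N)
    theta_hats[k], *_ = np.linalg.lstsq(H, x, rcond=None)

print(theta_hats.mean(axis=0))            # close to [A, B] in each component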
