
1. Given that \(\int_{-\infty}^{\infty} p_1(y)\,dy = \int_{-\infty}^{\infty} p_0(y)\,dy = 1\), the constants \(c_0\) and \(c_1\) can be found as \(c_0 = \frac{3}{2}\), \(c_1 = \frac{1}{9}\). Hence, the prior ratio is \(\eta = 1\) and the likelihood ratio \(L(y)\) is
\[
L(y) = \frac{p_1(y)}{p_0(y)} =
\begin{cases}
\dfrac{2(3-|y|)}{27y^2} & |y| \le 1\\[2mm]
\infty & 1 < |y| \le 3,
\end{cases}
\]
which, by using \(L(y) \gtrless \eta\), gives the Bayes rule \(\hat{H}_B(y)\) and the minimum risk \(r_B\) as
\[
\hat{H}_B(y) =
\begin{cases}
1 & y \in \Lambda_1 = \{y \mid |y| \le c \ \vee\ 1 < |y| \le 3\}\\
0 & y \in \Lambda_0 = \{y \mid -1 \le y < -c \ \vee\ c < y \le 1\}
\end{cases},
\qquad c = \frac{1}{27}\left(\sqrt{163} - 1\right) \simeq 0.43582,
\]
\[
r_B = \frac{1}{2}\left(\Pr(y \in \Lambda_1 \mid H = 0) + \Pr(y \in \Lambda_0 \mid H = 1)\right) = \frac{11423 - 326\sqrt{163}}{39366} \simeq 0.184446.
\]
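These closed-form values can be cross-checked numerically. The sketch below is in Python (rather than the MATLAB used later in this document), and the variable names are mine; it verifies the normalization constants, the threshold \(c\), and the minimum risk.

```python
import math

# Closed-form constants from the solution above
c0, c1 = 3 / 2, 1 / 9
c = (math.sqrt(163) - 1) / 27          # positive root of 27c^2 + 2c - 6 = 0

# Midpoint-rule check that both densities integrate to 1
n = 200_000
p0_mass = sum(c0 * y * y for y in (-1 + (k + 0.5) * 2 / n for k in range(n))) * (2 / n)
p1_mass = sum(c1 * (3 - abs(y)) for y in (-3 + (k + 0.5) * 6 / n for k in range(n))) * (6 / n)

# Error probabilities of the Bayes rule and the minimum risk
P_F = c ** 3                           # Pr(|y| <= c | H0)
P_M = (5 - 6 * c + c * c) / 9          # Pr(c < |y| <= 1 | H1)
r_B = (P_F + P_M) / 2                  # equal priors, uniform costs
```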

3. As in Problem 1, the likelihood ratio is
\[
L(y) = \frac{p_1(y)}{p_0(y)} =
\begin{cases}
\dfrac{2(3-|y|)}{27y^2} & |y| \le 1\\[2mm]
\infty & 1 < |y| \le 3,
\end{cases}
\]
which, under the assumption of uniform costs and by using \(L(y) \gtrless \eta = \frac{\pi_0}{1-\pi_0}\), gives the Bayes rule \(\hat{H}_B(y, \pi_0)\) as
\[
\hat{H}_B(y, \pi_0) =
\begin{cases}
1 & y \in \Lambda_1(\pi_0) = \{y \mid |y| \le c(\pi_0) \ \vee\ 1 < |y| \le 3\}\\
0 & y \in \Lambda_0(\pi_0) = \{y \mid -1 \le y < -c(\pi_0) \ \vee\ c(\pi_0) < y \le 1\}
\end{cases},
\]
in which \(c(\pi_0)\) is
\[
c(\pi_0) =
\begin{cases}
1 & 0 \le \pi_0 \le \frac{4}{31}\\[2mm]
\dfrac{1}{27\pi_0}\left(\pi_0 - 1 + \sqrt{-161\pi_0^2 + 160\pi_0 + 1}\right) & \frac{4}{31} < \pi_0 \le 1.
\end{cases}
\]
With \(P_F(\pi_0) = c(\pi_0)^3\) and \(P_M(\pi_0) = \frac{1}{9}\left(5 - 6c(\pi_0) + c(\pi_0)^2\right)\), the minimum Bayes risk is
\[
r_B(\pi_0) =
\begin{cases}
\pi_0 & 0 \le \pi_0 < \frac{4}{31}\\[2mm]
\pi_0\, c(\pi_0)^3 + \dfrac{1-\pi_0}{9}\left(5 - 6c(\pi_0) + c(\pi_0)^2\right) & \frac{4}{31} \le \pi_0 \le 1.
\end{cases}
\]

Reza Oftadeh

For the minimax rule we have to find the least favorable prior \(\pi_l\) for which \(r_B(\pi_l)\) is maximum. The above function is continuous, and a simple plot shows that it attains its maximum on \(\frac{4}{31} \le \pi_0 \le 1\).

[Figure: \(r_B(\pi_0)\) versus \(\pi_0\); the vertical axis runs from 0.05 to 0.20.]

Hence, imposing the equalizer condition \(P_F = P_M\), i.e. \(c^3 = \frac{1}{9}(5 - 6c + c^2)\), gives the least favorable prior \(\pi_l \simeq 0.3414\) and the minimax risk \(r_B(\pi_l) = c^3 \simeq 0.2024\), with the minimax rule
\[
\hat{H}_M(y) =
\begin{cases}
1 & y \in \Lambda_1 = \{y \mid |y| \le c \ \vee\ 1 < |y| \le 3\}\\
0 & y \in \Lambda_0 = \{y \mid -1 \le y < -c \ \vee\ c < y \le 1\}
\end{cases},
\qquad c = 0.587156.
\]
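The equalizer threshold and the associated quantities can be recomputed numerically; a small Python sketch (stdlib only, names mine) solves the equalizer cubic by bisection:

```python
import math

# Equalizer condition P_F = P_M:  c^3 = (5 - 6c + c^2)/9,
# i.e. f(c) = 9c^3 - c^2 + 6c - 5 = 0 on (0, 1); solve by bisection.
def f(c):
    return 9 * c ** 3 - c ** 2 + 6 * c - 5

lo, hi = 0.0, 1.0                      # f(0) = -5 < 0 < 9 = f(1)
for _ in range(200):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
c = (lo + hi) / 2

risk = c ** 3                          # equalized conditional error = minimax risk
eta = 2 * (3 - c) / (27 * c ** 2)      # likelihood ratio L(c) = implied threshold
pi_l = eta / (1 + eta)                 # from eta = pi_l / (1 - pi_l)
```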

4. (a) The minimum Bayes risk \(V(\pi_0)\) is a concave function; the least favorable prior is the \(\pi_l\) that maximizes it, and the minimax risk is its value \(V(\pi_l)\). \(V(\pi_0) = 2 - \pi_0^2\) has its maximum at \(\pi_l = 0\), and the minimax risk is \(V(\pi_l) = 2\).

(b) Under the prior \(\pi_l = 0\) the Bayes rule is simply 1 for any \(y \in \mathbb{R}\), and hence \(\Lambda_1 = \mathbb{R}\) and \(\Lambda_0 = \emptyset\). Therefore, the probability of error under \(H_0\) is \(\Pr(\delta(y) = 1 \mid H_0) = 1\).

5. We have \(H_i : Y \sim \mathcal{N}((i-3)\mu, \sigma^2)\). Under uniform costs the Bayes risk becomes, as a function of the priors \(\pi = \{\pi_1, \dots, \pi_5\}\),
\[
r(\delta, \pi) = \sum_{i=1}^{5}\sum_{j=1}^{5} c_{ij}\,\pi_j \Pr(\delta(y) = i \mid H_j)
= \sum_{i=1}^{5}\sum_{\substack{j=1\\ j \ne i}}^{5} \pi_j \Pr(\delta(y) = i \mid H_j)
= \sum_{i=1}^{5} \pi_i \left(1 - \Pr(\delta(y) = i \mid H_i)\right).
\]
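The reduction from the double sum to \(\sum_i \pi_i (1 - \Pr(\text{correct}))\) holds for any confusion matrix; a quick Python sanity check (names mine, arbitrary random values):

```python
import random

# Sanity check of the risk reduction: for uniform costs (c_ij = 1 - delta_ij),
# sum_i sum_{j != i} pi_j Pr(i | H_j)  ==  sum_i pi_i (1 - Pr(i | H_i)).
random.seed(0)
M = 5

w = [random.random() for _ in range(M)]
pi = [x / sum(w) for x in w]           # arbitrary priors

# Arbitrary row-stochastic confusion matrix P[j][i] = Pr(decide i | H_j)
P = []
for j in range(M):
    row = [random.random() for _ in range(M)]
    s = sum(row)
    P.append([x / s for x in row])

double_sum = sum(pi[j] * P[j][i]
                 for i in range(M) for j in range(M) if j != i)
collapsed = sum(pi[i] * (1 - P[i][i]) for i in range(M))
```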


The Bayes decision rule \(\delta(y, \pi)\) minimizes the above risk and can be written as
\[
\delta(y, \pi) = \arg\max_{1 \le i \le 5}\ \pi_i\, p_i(y)
= \arg\max_{1 \le i \le 5} \left\{ -\frac{1}{2\sigma^2}(y - (i-3)\mu)^2 - \frac{1}{2}\ln 2\pi - \frac{1}{2}\ln \sigma^2 + \ln \pi_i \right\}
= \arg\max_{1 \le i \le 5} \left\{ \ln \pi_i - \frac{1}{2\sigma^2}(y - (i-3)\mu)^2 \right\}.
\]

The above maximization is over the second-order polynomials \(s_i(y) = \ln \pi_i - \frac{1}{2\sigma^2}(y - (i-3)\mu)^2\). The crossing point of \(s_i\) and \(s_j\) is where the decision regions change from \(i\) to \(j\), but there are many subtleties to account for, since some priors can lead to some regions becoming empty. Generally, \(s_i(y) = s_j(y)\) results in
\[
y_{ij} = \frac{\sigma^2}{\mu(i-j)} \ln \frac{\pi_j}{\pi_i} + \frac{1}{2}(i+j-6)\mu,
\]

which is simplified to
\[
\delta(y, \pi) =
\begin{cases}
1 & y \in \left(-\infty,\ \min_{j>1} y_{1j}\right]\\
2 & y \in \left(\max_{j<2} y_{2j},\ \min_{j>2} y_{2j}\right]\\
3 & y \in \left(\max_{j<3} y_{3j},\ \min_{j>3} y_{3j}\right]\\
4 & y \in \left(\max_{j<4} y_{4j},\ \min_{j>4} y_{4j}\right]\\
5 & y \in \left(\max_{j<5} y_{5j},\ \infty\right).
\end{cases}
\]

By setting \(d = \mu/\sigma\) and standardizing each boundary under \(H_i\), let
\[
y'_{ij}(d, \pi) = \frac{y_{ij} - (i-3)\mu}{\sigma} = \frac{1}{d(j-i)} \ln \frac{\pi_i}{\pi_j} + \frac{(j-i)\,d}{2};
\]

we have
\[
\Pr(\delta(y, \pi) = i \mid H_i) =
\begin{cases}
\Phi\!\left(\min_{j>1} y'_{1j}(d, \pi)\right) & i = 1\\[1mm]
\max\!\left(\Phi\!\left(\min_{j>i} y'_{ij}(d, \pi)\right) - \Phi\!\left(\max_{j<i} y'_{ij}(d, \pi)\right),\ 0\right) & i \in \{2, 3, 4\}\\[1mm]
1 - \Phi\!\left(\max_{j<5} y'_{5j}(d, \pi)\right) & i = 5.
\end{cases}
\]

Hence, the average Bayes risk for the Bayes rule is \(r(d, \pi) = \sum_{i=1}^{5} \pi_i \left(1 - \Pr(\delta(y, \pi) = i \mid H_i)\right)\), which is a function of \(d\) and the priors \(\pi\). Now, for the minimax rule, we have to find the least favorable prior \(\pi_l\) that maximizes \(r(d, \pi)\) and, as an equalizer, makes \(R_i = 1 - \Pr(\delta(y, \pi) = i \mid H_i)\) the same for all \(i\). The minimax rule is then \(\delta(y, \pi_l)\), with minimax error \(r(d, \pi_l)\), which is a function of \(d\) only.

As requested, I wrote a MATLAB code to calculate \(\pi_l\) for \(d = 3\), which is shown below:


function v = proberror(p)
% Negated Bayes risk of the Bayes rule for priors p (5x1 column vector),
% so that fmincon's minimization maximizes the risk.
d = 3;                                  % d = mu/sigma; the problem asks for d = 3
yp = zeros(5,5);
for i = 1:5
    for j = 1:5
        if j == i
            yp(i,j) = 0;
        else
            % normalized crossing point y'_ij (standardized under H_i)
            yp(i,j) = 1/(d*(j-i))*log(p(i)/p(j)) + (j-i)*d/2;
        end
    end
end
pr(1) = phi(min(yp(1,2:5)));
pr(2) = phi(min(yp(2,3:5))) - phi(max(yp(2,1)));
pr(3) = phi(min(yp(3,4:5))) - phi(max(yp(3,1:2)));
pr(4) = phi(min(yp(4,5)))   - phi(max(yp(4,1:3)));
pr(5) = 1 - phi(max(yp(5,1:4)));
for i = 1:5
    if pr(i) < 0                        % empty decision region
        pr(i) = 0;
    end
end
v = -sum(p.*(1 - pr'));
end

function pr = phi(x)
% standard normal CDF
pr = (1/2)*erfc(-x/sqrt(2));
end

% From a separate script (proberror is saved as proberror.m):
pil = fmincon(@proberror, [1/5; 1/15; 1/15; 1/15; 1/5], ...
    [], [], [1,1,1,1,1], 1, [0;0;0;0;0], [1;1;1;1;1]);

The optimization provides the least favorable priors \(\pi_l\) and the respective error probabilities. For the requested comparison, with equal priors \(\pi_i = \frac{1}{5}\ \forall i\), the Bayes rule becomes the minimum-distance rule
\[
\delta(y, \pi) = \arg\min_{1 \le i \le 5} \left\{(y - (i-3)\mu)^2\right\} =
\begin{cases}
1 & y \in \left(-\infty,\ -\tfrac{3}{2}\mu\right]\\
2 & y \in \left(-\tfrac{3}{2}\mu,\ -\tfrac{1}{2}\mu\right]\\
3 & y \in \left(-\tfrac{1}{2}\mu,\ \tfrac{1}{2}\mu\right]\\
4 & y \in \left(\tfrac{1}{2}\mu,\ \tfrac{3}{2}\mu\right]\\
5 & y \in \left(\tfrac{3}{2}\mu,\ \infty\right).
\end{cases}
\]
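Under equal priors the boundaries sit midway between adjacent means, so the conditional error is \(Q(d/2)\) for the edge hypotheses and \(2Q(d/2)\) for the middle ones. A quick Python check of the worst case for \(d = 3\) (names mine):

```python
import math

def Q(x):
    """Standard normal tail probability Q(x) = 1 - Phi(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

d = 3.0
# Equal priors: each decision boundary lies at distance d/2 (in noise units)
# from the adjacent means.
p_err_edge = Q(d / 2)                  # hypotheses 1 and 5: one boundary
p_err_middle = 2 * Q(d / 2)            # hypotheses 2, 3, 4: two boundaries
worst_case = max(p_err_edge, p_err_middle)
```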


6. (a) With the given costs and the ROC \(P_D = P_F^{1/4}\), the Bayes risk is
\[
r(\delta(y)) = \pi_0\left(c_{10}P_F + c_{00}(1 - P_F)\right) + \pi_1\left(c_{11}P_D + c_{01}(1 - P_D)\right)
= \pi_0(P_F + 2) + 2(1 - \pi_0)(1 - P_D)
= 2(\pi_0 - 1)P_F^{1/4} + \pi_0 P_F + 2.
\]
Now the Bayes rule, for a given \(\pi_0\), minimizes the above risk by setting \(P_F(\delta(y))\) through \(\delta(y)\). The \(P_F = P_{F,m}\) that minimizes the risk \(r(\delta(y))\) is achieved by setting
\[
\frac{\partial r(\delta(y))}{\partial P_F} = \frac{\pi_0 - 1}{2}P_F^{-3/4} + \pi_0 = 0
\implies P_{F,m} = \left(\frac{1 - \pi_0}{2\pi_0}\right)^{4/3} \wedge\ 0 \le P_{F,m} \le 1.
\]
For \(\pi_0 = \frac{1}{3}\) the above expression gives \(P_{F,m} = 1\), and \(P_{F,m} > 1\) for \(\pi_0 < \frac{1}{3}\). Hence, for \(\pi_0 \le \frac{1}{3}\), the minimum Bayes risk is achieved by setting
\[
P_{F,m} = \Pr(\delta(y) = 1 \mid H_0) = 1 \implies \delta(y) = 1.
\]
(b) From the previous part we know that \(P_{F,m} = \left(\frac{1 - \pi_0}{2\pi_0}\right)^{4/3}\) minimizes the Bayes risk. For \(\pi_0 = \frac{1}{2}\) we have \(P_{F,m} = 2^{-4/3}\), and the minimum Bayes risk becomes
\[
r_B = 2(\pi_0 - 1)P_{F,m}^{1/4} + \pi_0 P_{F,m} + 2 = 2 - 2^{-1/3} + 2^{-7/3} = 2 - \frac{3}{2^{7/3}} \simeq 1.40472.
\]
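The minimizer and minimum risk for \(\pi_0 = \frac{1}{2}\) can be verified by a brute-force grid search (Python, names mine):

```python
# Brute-force check of part (b): minimize
#   r(P_F) = 2(pi0 - 1) P_F^{1/4} + pi0 * P_F + 2   over P_F in [0, 1].
pi0 = 0.5

def risk(pf):
    return 2 * (pi0 - 1) * pf ** 0.25 + pi0 * pf + 2

n = 200_000
best_pf = min((k / n for k in range(n + 1)), key=risk)

pf_closed = 2 ** (-4 / 3)              # closed-form minimizer
rb_closed = 2 - 3 / 2 ** (7 / 3)       # closed-form minimum risk
```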

Optional Problems:

1. The minimum Bayes risk \(r_B\) is
\[
r_B = \pi_0\left(c_{10}P_F + c_{00}(1 - P_F)\right) + \pi_1\left(c_{11}P_D + c_{01}(1 - P_D)\right)
= \pi_0(P_F + 1) + (1 - \pi_0)(4 - 2P_D)
= \pi_0(P_F + 2P_D - 3) + (4 - 2P_D).
\]
Comparing with the given \(V(\pi_0) = 2 - \pi_0^2\): by the envelope theorem, the slope satisfies \(V'(\pi_0) = (P_F + 1) - (4 - 2P_D) = -2\pi_0\), and the intercept satisfies \(4 - 2P_D = V(\pi_0) - \pi_0 V'(\pi_0) = 2 + \pi_0^2\). Hence \(P_D = 1 - \frac{\pi_0^2}{2}\) and \(P_F = (1 - \pi_0)^2\), and eliminating \(\pi_0 = 1 - \sqrt{P_F}\) gives the ROC
\[
P_D(P_F) = 1 - \frac{\left(1 - \sqrt{P_F}\right)^2}{2}, \qquad 0 \le P_F \le 1.
\]

[Figure: ROC \(P_D(P_F)\) of the optimal detectors.]


Assignment 3

Problems:

1. Find the Bayes rule and the minimum Bayes risk for a binary hypothesis testing problem with equal priors and uniform costs, with
\[
p_0(y) = \begin{cases} c_0 y^2 & |y| \le 1\\ 0 & \text{otherwise} \end{cases},
\qquad
p_1(y) = \begin{cases} c_1(3 - |y|) & |y| \le 3\\ 0 & \text{otherwise.} \end{cases}
\]

2. A communication system uses binary signaling by sending one of the following two signals: \(s_1(t)\) to send one, and \(s_0(t)\) to send zero, where \(s_1(t) = -As(t)\), \(s_0(t) = As(t)\), \(A > 0\), and
\[
s(t) = \begin{cases} 1 & 0 < t < 1\\ 0 & \text{otherwise.} \end{cases}
\]
The transmitted signal is corrupted by zero-mean white Gaussian noise \(n(t)\) with autocorrelation function \(R_n(\tau) = \sigma^2 \delta(\tau)\), so that the received signal is given by
\[
y(t) = \begin{cases} -As(t) + n(t) & 1 \text{ sent}\\ As(t) + n(t) & 0 \text{ sent.} \end{cases}
\]
The receiver computes the decision statistic
\[
Z = \int_{-\infty}^{\infty} y(t)s(t)\,dt.
\]
The receiver would like to decide whether 0 or 1 was sent, or (if it is not confident about a 0/1 decision) erase the signal. We would like to obtain a Bayesian decision rule for doing this as follows.

Let \(\Theta = \{0, 1\}\) and \(A = \{0, 1, e\}\). The observation is \(Z\). Assume the cost structure
\[
C(a, \theta) = \begin{cases} 1 & a \ne \theta,\ a \in \{0, 1\}\\ 0 & a = \theta\\ c & a = e. \end{cases}
\]
(a) Find the Bayes rule for equal priors. Simplify the form of the rule as much as possible and specify its dependence on the problem parameters \(c\), \(A\), and \(\sigma^2\).

(b) For \(d = A/\sigma = 5\), find \(c\) and a corresponding Bayesian decision rule \(\delta_A\) so that the probability of erasure \(p_1\) is twice the probability of error \(p_2\). Compare with the values of \(p_1\) and \(p_2\) for a Bayesian decision rule \(\delta_B\) corresponding to \(c = 1/2\).


3. Find the minimax rule and minimax risk for the binary hypothesis testing problem with
\[
p_0(y) = \begin{cases} c_0 y^2 & |y| \le 1\\ 0 & \text{otherwise} \end{cases},
\qquad
p_1(y) = \begin{cases} c_1(3 - |y|) & |y| \le 3\\ 0 & \text{otherwise.} \end{cases}
\]

4. The minimum Bayes risk for a binary hypothesis testing problem with costs \(C_{00} = 1\), \(C_{11} = 2\), \(C_{10} = 2\), \(C_{01} = 4\) is given by
\[
V(\pi_0) = 2 - \pi_0^2,
\]
where \(\pi_0\) is the prior probability of hypothesis \(H_0\).

(a) Find the minimax risk and the least favorable prior.

(b) What is the conditional probability of error given H0 for the minimax rule in (a)?

5. A communication system uses multi-amplitude signaling to send one of 5 signals. The decision statistic when \(i\) \((i = 1, \dots, 5)\) is sent is given by
\[
Y = (i-3)\mu + N, \qquad i = 1, \dots, 5,
\]
where \(N\) is Gaussian noise with mean zero and variance \(\sigma^2\). For uniform costs, specify in its simplest form a Bayes rule which is an equalizer, and hence is minimax. Show that the minimax error probability depends only on \(d = \mu/\sigma\), and give a numerical value for the minimax error probability for \(d = 3\). You may use a computer program if you wish, but you may save time by deriving an approximation based on large \(d\) instead. Compare with the worst-case error probability using a Bayes rule for uniform costs and equal priors.

6. Consider a binary hypothesis testing problem in which the ROC for the Neyman-Pearson rule is given by \(P_D = P_F^{1/4}\). For the same hypotheses, consider a Bayesian problem with cost structure \(C_{10} = 3\), \(C_{01} = 2\), \(C_{00} = 2\), \(C_{11} = 0\).

(a) Show that the decision rule \(\delta(y) = 1\) is optimal for \(\pi_0 \le 1/3\).

(b) Find the minimum Bayes risk for \(\pi_0 = 1/2\).

Optional Problems:

1. The minimum Bayes risk for a binary hypothesis testing problem with costs \(C_{00} = 1\), \(C_{11} = 2\), \(C_{10} = 2\), \(C_{01} = 4\) is given by
\[
V(\pi_0) = 2 - \pi_0^2,
\]
where \(\pi_0\) is the prior probability of hypothesis \(H_0\). Find the receiver operating characteristic of optimal detectors.
