
Chapter 4

Experimental Research

4.1 Experimental Research of the Properties of Offered Optimal Algorithms

Let us consider examples for a concrete probability distribution law, in particular the normal law (3.2), for experimental research of the properties of the offered algorithms. Examples 1 and 2 investigate the quality of hypotheses testing by unconditional and conditional Bayesian algorithms. Example 1 makes evident the character of the changes in the coefficients and in the regions of acceptance of hypotheses as the restriction level of the considered task changes, and also the ratios among the qualities of hypotheses testing in the conditional and unconditional Bayesian tasks. Example 2 shows the specificity of the hypotheses testing rules in the unconditional and conditional Bayesian tasks. From it, it is evident that the unconditional Bayesian decision rule always accepts one of the tested hypotheses, even if the set of formed hypotheses does not contain the true hypothesis; in this case, it accepts the hypothesis which is closest to the true one. In contradistinction to this, the conditional Bayesian decision rule does not accept any hypothesis unless the set of tested hypotheses contains the true hypothesis or a hypothesis rather close to the true one. Such a situation arises in many practical problems. For example, when monitoring ship hull deformation caused by storm waves in the open sea, the decision that the ship is in a critical situation must be made when the set of tested hypotheses contains the mathematical expectation of the measurement results of the controlled parameters [Korolev, (2008)]. If the ship is not in a critical situation, the set of tested hypotheses does not contain the mathematical expectation of the measurement results, and none of the tested hypotheses must be accepted on their basis.
Examples 3 and 4 are cited to show the necessity of using the offered approach (the statement of the Bayesian problem in the form of a conditional optimization problem) when solving problems in such important areas of human activity as environment protection and air defence, though it is evident that the applications of this approach extend further than these areas.

Example 1. Tested hypotheses: H1 : μ11 = 1, μ21 = 1; H2 : μ12 = 4, μ22 = 4. A priori probabilities of the hypotheses: p(H1) = 0.5, p(H2) = 0.5. Covariance matrices used:

W1 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad W2 = \begin{pmatrix} 1 & 0.4 \\ 0.6 & 1 \end{pmatrix}, \quad W3 = \begin{pmatrix} 1 & 0.8 \\ 0.99 & 1 \end{pmatrix}.
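The unconditional rule referred to throughout this example can be illustrated with a short sketch. It is a minimal illustration, not the software actually used for Table 1: it assumes the rule reduces to comparing the posterior weights p(Hi) f(x | Hi) of the two hypotheses, here with the identity covariance matrix W1.

```python
import numpy as np
from scipy.stats import multivariate_normal

means = {"H1": np.array([1.0, 1.0]), "H2": np.array([4.0, 4.0])}
priors = {"H1": 0.5, "H2": 0.5}
W1 = np.eye(2)   # the identity covariance matrix W1 of Example 1

def unconditional_bayes(x, cov):
    """Accept the hypothesis with the largest posterior weight p(Hi) f(x | Hi)."""
    scores = {h: priors[h] * multivariate_normal.pdf(x, mean=m, cov=cov)
              for h, m in means.items()}
    return max(scores, key=scores.get)

for x in [(1.49, 1.49), (2.51, 2.51), (2.5, 2.5)]:
    print(x, "->", unconditional_bayes(np.array(x), W1))
# (1.49, 1.49) -> H1 and (2.51, 2.51) -> H2, as in Table 1; at (2.5, 2.5) the two
# weights are exactly equal, so the returned hypothesis is only a numerical tie-break.
```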
On the basis of the calculation results given in Table 1, we infer the following. The calculated values of the average risk r in Tasks 1 and 2 and of the average power of the criterion subtracted from one, 1 - G, in Tasks 4 and 7, for the values of α at which the Lagrange multipliers take the values λ^(1) = 1, λ_1^(2) = p(H1) = 0.5, λ_2^(2) = p(H2) = 0.5, λ^(4) = 1 and λ_1^(7) + λ_2^(7) = 1, respectively, coincide with the value of the average risk in the unconditional task for Tasks 1, 2 and 4, and with twice that value for Task 7.
In the general case, in all conditional Bayesian tasks, the interval of variation of α contains three sub-intervals. If α falls in the middle sub-interval, the correct decision is made; if it falls in the left or the right sub-interval, both hypotheses are accepted or rejected, respectively. To the extreme sub-intervals correspond values of the Lagrange multipliers on opposite sides of unity, i.e. less or more than one. For example, in conditional Task 1, for x1 = 1.49, x2 = 1.49 and W1, at the extreme points of the sub-interval where hypothesis H1 is accepted, i.e. for α = 0.0002 and α = 0.244, we have λ = 411.387 and λ = 0.0234, respectively, i.e. the coefficient λ changes from 411.387 to 0.0234.
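The link between the Lagrange multiplier λ and the restriction level α can be sketched numerically. The sketch below is hedged: it assumes a Task-1-style rule in which H1 is accepted on the region {x : p(H1) f1(x) > λ p(H2) f2(x)} and λ is tuned so that the probability of this region under H1 equals 1 - α; the exact restriction of Task 1 in Chapter 3 may differ in detail.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
m1, m2, W = np.array([1.0, 1.0]), np.array([4.0, 4.0]), np.eye(2)
sample_H1 = rng.multivariate_normal(m1, W, size=100_000)   # draws under H1
f1 = multivariate_normal.pdf(sample_H1, mean=m1, cov=W)
f2 = multivariate_normal.pdf(sample_H1, mean=m2, cov=W)

def coverage(lam):
    """Monte Carlo estimate of P(H1 accepted | H1 true) for a given multiplier lam."""
    return np.mean(0.5 * f1 > lam * 0.5 * f2)

def lam_for_alpha(alpha, lo=1e-6, hi=1e6, iters=60):
    """Bisection on a log scale: coverage decreases as lam grows."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if coverage(mid) > 1.0 - alpha:
            lo = mid    # region still covers more than 1 - alpha, lam can grow
        else:
            hi = mid
    return np.sqrt(lo * hi)

print(lam_for_alpha(0.05))
# smaller alpha gives larger lam; lam = 1 corresponds to alpha near the unconditional risk
```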
For all considered covariance matrices and for α at which the Lagrange coefficients in the conditional tasks satisfy the condition λ^(1) = λ_1^(2) + λ_2^(2) = λ^(4) = λ_1^(7) + λ_2^(7) = 1, comparison of the calculation results of the unconditional and conditional tasks confirms the validity of Proposition 3.8. In particular, r_α,uncond = r_α,1 = r_α,2 = 1 - G_α,4 = (1 - G_α,7)/2.
At x1 = 2.5, x2 = 2.5, the middle sub-interval, i.e. the sub-interval of acceptance of one tested hypothesis, degenerates into an empty set and no decision is made, since, in accordance with the condition of the example, this measured value could have been generated by both distributions with equal probabilities.

At x1 = 2.5, x2 = 2.5, for Tasks 1, 2, 4 and for all considered W, the thresholds separating the sub-regions of acceptance of both hypotheses or of acceptance of neither hypothesis coincide. In Task 7 these thresholds are equal to the corresponding thresholds of the previous tasks divided by two. This is easy to explain by comparing the restrictions of Task 7 with the restrictions of Tasks 1, 2 and 4.
In Tasks 2 and 7, λ_1^(2) = λ_2^(2) and λ_1^(7) = λ_2^(7), because identical values of α are used in the appropriate restrictions.


Table 1. The results of hypotheses testing by unconditional and conditional Bayesian tasks

Unconditional task
Covariance   Measurement     Accepted          Risk
matrix       result          hypothesis Hi     function
W1           (2.5, 2.5)      H2                0.01695
W2           (2.5, 2.5)      H2                0.04163
W3           (2.5, 2.5)      H2                0.06632
W1           (2.51, 2.51)    H2                0.01695
W2           (2.51, 2.51)    H2                0.04163
W3           (2.51, 2.51)    H2                0.06632
W1           (1.49, 1.49)    H1                0.01695
W2           (1.49, 1.49)    H1                0.04163
W3           (1.49, 1.49)    H1                0.06632

Conditional Task 1
Covariance   Measurement     Restriction           Accepted                         Risk       Lagrange
matrix       result          level alpha           hypothesis                       function   multiplier
W1           (2.5, 2.5)      <= 0.01694            both hypotheses                  0.01695    1.00074
                             > 0.01694             no hypothesis
W2           (2.5, 2.5)      <= 0.0416             both hypotheses                  0.04166    1.00125
                             > 0.0416              no hypothesis
W3           (2.5, 2.5)      <= 0.06632            both hypotheses                  0.06632    1.00002
                             > 0.06632             no hypothesis
W1           (2.51, 2.51)    < 0.0163              both hypotheses
                             (0.0163, 0.0175)      H2                               0.01699    1.0048
                             > 0.0175              no hypothesis (r = 0.0169)
W2           (2.51, 2.51)    < 0.0406              both hypotheses
                             (0.04063, 0.042)      H2                               0.04179    1.00634
                             > 0.042               no hypothesis (r = 0.04127)
W3           (2.51, 2.51)    < 0.065               both hypotheses
                             (0.0651, 0.0676)      H2                               0.06756    1.02913
                             > 0.0676              no hypothesis (r = 0.0651)
W1           (1.49, 1.49)    < 0.0002              both hypotheses
                             (0.0002, 0.244)       H1                               0.01699    1.0048
                             > 0.244               no hypothesis (r = 0.0169)
W2           (1.49, 1.49)    < 0.0018              both hypotheses
                             (0.0018, 0.285)       H1                               0.04176    1.00516
                             > 0.285               no hypothesis (r = 0.0415)
W3           (1.49, 1.49)    < 0.021               both hypotheses
                             (0.021, 0.344)        H1                               0.06664    1.00754
                             > 0.344               no hypothesis (r = 0.066)

Conditional Task 2
Covariance   Measurement     Restriction           Accepted                         Risk       Lagrange
matrix       result          level alpha           hypothesis                       function   multipliers
W1           (2.5, 2.5)      <= 0.01694            both hypotheses                  0.01695    0.5004, 0.5004
                             > 0.016945            no hypothesis
W2           (2.5, 2.5)      <= 0.04193            both hypotheses                  0.04164    0.5002, 0.5002
                             > 0.04193             no hypothesis
W3           (2.5, 2.5)      <= 0.06632            both hypotheses                  0.06632    0.5, 0.5
                             > 0.06632             no hypothesis
W1           (2.51, 2.51)    < 0.01636             both hypotheses
                             (0.01636, 0.01755)    H2                               0.01699    0.5024, 0.5024
                             > 0.01755             no hypothesis (r = 0.01699)
W2           (2.51, 2.51)    < 0.0409              both hypotheses
                             (0.0409, 0.043)       H2                               0.04167    0.5007, 0.5007
                             > 0.043               no hypothesis (r = 0.0419)
W3           (2.51, 2.51)    < 0.065               both hypotheses
                             (0.0651, 0.0676)      H2                               0.06634    0.5002, 0.5002
                             > 0.0676              no hypothesis (r = 0.0663)
W1           (1.49, 1.49)    < 0.00019             both hypotheses
                             (0.00019, 0.244)      H1                               0.01694    0.4999, 0.4999
                             > 0.244               no hypothesis (r = 0.01695)
W2           (1.49, 1.49)    < 0.000187            both hypotheses
                             (0.000187, 0.2857)    H1                               0.04167    0.5007, 0.5007
                             > 0.2857              no hypothesis (r = 0.0419)
W3           (1.49, 1.49)    < 0.00592             both hypotheses
                             (0.00592, 0.311)      H1                               0.06632    0.5, 0.5
                             > 0.311               no hypothesis (r = 0.06632)

Conditional Task 4
Covariance   Measurement     Restriction           Accepted                         1 - G      Lagrange
matrix       result          level alpha           hypothesis                                  multiplier
W1           (2.5, 2.5)      < 0.01694             no hypothesis                    0.9830     1.0008
                             = 0.01694             both hypotheses                  0.9831     0.9997
                             > 0.01694             both hypotheses
W2           (2.5, 2.5)      < 0.04163             no hypothesis                    0.9584     1.0001
                             = 0.04163             both hypotheses                  0.9584     0.9997
                             > 0.04163             both hypotheses
W3           (2.5, 2.5)      < 0.06632             no hypothesis                    0.9337     1.0000
                             = 0.06633             both hypotheses                  0.9337     0.9998
                             > 0.06633             both hypotheses
W1           (2.51, 2.51)    < 0.01755             no hypothesis (lambda > 1)
                             (0.01637, 0.01755)    H2                               0.9825     1.0609
                             > 0.01755             both hypotheses (lambda < 1)
W2           (2.51, 2.51)    < 0.0406              no hypothesis (lambda > 1)
                             (0.0407, 0.0426)      H2                               0.9574     1.0373
                             > 0.0426              both hypotheses (lambda < 1)
W3           (2.51, 2.51)    < 0.0651              no hypothesis (lambda > 1)
                             (0.0651, 0.0676)      H2                               0.9324     1.0291
                             > 0.0676              both hypotheses (lambda < 1)
W1           (1.49, 1.49)    < 0.000192            no hypothesis (lambda > 1)
                             (0.000193, 0.2441)    H1                               0.9829     1.0160
                             > 0.2441              both hypotheses (lambda < 1)
W2           (1.49, 1.49)    < 0.00188             no hypothesis (lambda > 1)
                             (0.00188, 0.28578)    H1                               0.9582     1.0052
                             > 0.28578             both hypotheses (lambda < 1)
W3           (1.49, 1.49)    < 0.00593             no hypothesis (lambda > 1)
                             (0.00593, 0.3116)     H1                               0.9334     1.0075
                             > 0.3116              both hypotheses (lambda < 1)

Conditional Task 7
Covariance   Measurement     Restriction           Accepted                         1 - G      Lagrange
matrix       result          level alpha           hypothesis                                  multipliers
W1           (2.5, 2.5)      < 0.00847             no hypothesis                    0.98305    1.0007, 1.0007
                             = 0.00847             no hypothesis
                             > 0.00847             both hypotheses
W2           (2.5, 2.5)      < 0.0208              no hypothesis                    0.95834    1.0013, 1.0013
                             = 0.0208              no hypothesis
                             > 0.0208              both hypotheses
W3           (2.5, 2.5)      < 0.0331              no hypothesis                    0.93356    1.0028, 1.0028
                             = 0.0331              both hypotheses
                             > 0.0331              both hypotheses
W1           (2.51, 2.51)    < 0.0081              no hypothesis (lambdas > 1)
                             (0.0082, 0.0087)      H2                               0.98244    1.0109, 1.0109
                             > 0.0087              both hypotheses (r = 0.00842; lambdas < 1)
W2           (2.51, 2.51)    < 0.0204              no hypothesis (lambdas > 1)
                             (0.0204, 0.0213)      H2                               0.95813    1.0091, 1.0091
                             > 0.0213              both hypotheses (r = 0.0207; lambdas < 1)
W3           (2.51, 2.51)    < 0.0325              no hypothesis (lambdas > 1)
                             (0.0326, 0.0338)      H2                               0.93336    1.0075, 1.0075
                             > 0.0338              both hypotheses (r = 0.033; lambdas < 1)
W1           (1.49, 1.49)    < 0.000096            no hypothesis (lambdas > 1)
                             (0.000097, 0.122)     H1                               0.98301    1.0048, 1.0048
                             > 0.122               both hypotheses (r = 0.00845; lambdas < 1)
W2           (1.49, 1.49)    < 0.00093             no hypothesis (lambdas > 1)
                             (0.00094, 0.142)      H1                               0.95813    1.0091, 1.0091
                             > 0.142               both hypotheses (r = 0.0207; lambdas < 1)
W3           (1.49, 1.49)    < 0.00296             no hypothesis (lambdas > 1)
                             (0.00297, 0.155)      H1                               0.93336    1.0075, 1.0075
                             > 0.155               both hypotheses (r = 0.033; lambdas < 1)


Example 2. For simplicity, the case of two hypotheses is considered. By means of a generator of multivariate normal vectors, normally distributed measurement results of the coordinates of plane points were simulated, taken five at a time around each of the points (5, 5), (6, 6), (7, 7), (8, 8) and (9, 9) (see Table 2). The components of the simulated vectors are correlated with covariance matrix

W = \begin{pmatrix} 1 & 0.4 \\ 0.6 & 1 \end{pmatrix}.

On the basis of these measurement results, the following hypotheses must be tested: H1 : μ11 = 1, μ21 = 1; H2 : μ12 = 9, μ22 = 9, i.e. the hypotheses that the mathematical expectation of the measured vector is equal to (1, 1) or to (9, 9). In testing, the a priori probabilities of the hypotheses are taken as p(H1) = 0.5 and p(H2) = 0.5.

From the computation results, it is seen that, in the unconditional Bayesian task, a decision is always made, independently of whether the set of tested hypotheses contains the true (or a close-to-it) hypothesis or not. In the conditional Bayesian task, a decision is made in favour of one of the tested hypotheses only when the probability of appearance of the considered measurement result under one of the tested hypotheses is high, i.e. when the set of tested hypotheses contains the true or a close-to-it hypothesis.

The conditional tasks work logically right. In particular, in the considered example, for the observation result x = (2.5, 2.5), which could belong to both hypothetical distributions with identical probabilities, no decision is made, in contradistinction to the unconditional task, in which the second hypothesis is accepted. In the other cases, correct decisions are made if it is possible to provide the given significance level of the criterion; otherwise a decision is not made.
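The simulation behind Example 2 can be sketched as follows. The covariance matrix as printed above is not symmetric (0.4 above and 0.6 below the diagonal), so a symmetrized version with off-diagonal entries 0.5 is assumed here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
W = np.array([[1.0, 0.5], [0.5, 1.0]])   # assumed symmetrization of the printed W
for mean in [(5, 5), (6, 6), (7, 7), (8, 8), (9, 9)]:
    xs = rng.multivariate_normal(mean, W, size=5)   # five measurements per point
    print(mean, np.round(xs, 3))
```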
Using the offered conditional Bayesian tasks, there is an opportunity of realizing easily and clearly the idea offered in [Kiefer, (1977)], the essence of which is the multilevelness of the decision rule depending on the value of the observation result on the basis of which the decision is made. Here there is an opportunity of introducing not only two levels, a strongly conclusive decision and a weakly conclusive decision, for each of the tested hypotheses, but any number of levels, by introducing a gradation of the values of the significance level of the criterion by which the decision is made. For clarity, let us cite an example from [Kiefer, (1977)]: a normally distributed random variable with mathematical expectation θ and unit variance is considered. It is required to test the hypotheses H0 : θ = -1 and H1 : θ = 1. For distinguishing the cases when the decision is made on the basis of the observation results x = 0.5 and x = 5, the regions of the Neyman-Pearson criterion Γ0 = {x : x < 0} and Γ1 = {x : x >= 0} are divided into sub-regions: Γ0 = {x : -c < x < 0} and Γ1 = {x : 0 <= x < c} for making a weakly conclusive decision, and Γ0 = {x : x <= -c} and Γ1 = {x : x >= c} for making a strongly conclusive decision. Here c = 1 is chosen. For achieving the same goal in our case, by the example of Task 1, it is possible to act as follows. In Task 1, at x = 0.5, hypothesis H1 is accepted when α ∈ (0.0668, 0.3085). For α < 0.0668, both hypotheses are accepted, and, at α > 0.3085, no hypothesis is accepted. At x = 5, hypothesis H1 is accepted when 0 <= α <= 1 (within the limits of accuracy realized by a computer program), i.e. in the complete range of variation of α. Let us introduce the following gradation of hypotheses testing by the significance level: if the hypothesis is accepted for α ∈ (0.06, 0.4), we call such a decision weakly conclusive; if the hypothesis is accepted for α ∈ (0.006, 0.75), we call it strongly conclusive; and if the hypothesis is accepted for α ∈ (0, 1), we call it the strongest conclusive decision. In this case, hypotheses testing on the basis of the observation result x = 1.5 yields a weakly conclusive decision, as the range (0.0062, 0.6915) of acceptance of hypothesis H1 corresponds to it, and hypotheses testing on the basis of the observation result x = 5 belongs to the class of the strongest conclusive decisions.
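The gradation just described reduces to checking whether the α-interval over which a hypothesis stays accepted covers the interval of the corresponding conclusiveness class. A minimal sketch, using the interval endpoints quoted in the text:

```python
def conclusiveness(alpha_lo, alpha_hi):
    """Classify the acceptance interval (alpha_lo, alpha_hi) of a hypothesis."""
    if alpha_lo <= 0.0 and alpha_hi >= 1.0:
        return "strongest conclusive"
    if alpha_lo <= 0.006 and alpha_hi >= 0.75:
        return "strongly conclusive"
    if alpha_lo <= 0.06 and alpha_hi >= 0.4:
        return "weakly conclusive"
    return "inconclusive"

print(conclusiveness(0.0062, 0.6915))   # x = 1.5 -> weakly conclusive
print(conclusiveness(0.0, 1.0))         # x = 5   -> strongest conclusive
```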


Table 2. The specificity of hypotheses testing rules in unconditional and conditional Bayesian tasks

                                                     Unconditional task        Conditional Task 1
Mathematical   Measurement                           Accepted     Risk         Accepted      Risk        Significance   Lagrange
expectation    result                                hypothesis                hypothesis                level          multiplier
(5.0, 5.0)     (6.601381, 5.960739)                  H2           0.000002     no decision               0.05
               (4.477790, 3.479032)                  H1           0.000002     no decision               0.05
               (5.062139, 6.161450)                  H2           0.000002     no decision               0.05
               (7.027960, 4.518770)                  H2           0.000002     no decision               0.05
               (4.930952, 5.760556)                  H2           0.000002     no decision               0.05
(6.0, 6.0)     (6.743147, 5.500071)                  H2           0.000002     no decision               0.05
               (6.346078, 5.615309)                  H2           0.000002     no decision               0.05
               (7.340373, 7.072274)                  H2           0.000002     no decision               0.05
               (5.814550, 5.753617)                  H2           0.000002     no decision               0.05
               (5.835888, 5.098627)                  H2           0.000002     no decision               0.05
(7.0, 7.0)     (7.024591, 8.316058)                  H2           0.000002     no decision               0.05
               (8.515107, 6.315058)                  H2           0.000002     H2            1.56e-14    0.05           1.17e-12
               (7.284548, 7.100557)                  H2           0.000002     no decision               0.05
               (6.868883, 7.284989)                  H2           0.000002     no decision               0.05
               (7.493570, 8.226540)                  H2           0.000002     H2            1.56e-14    0.05           1.17e-12
(8.0, 8.0)     (9.067327, 9.299284)                  H2           0.000002     H2            1.56e-14    0.05           1.17e-12
               (7.126108, 7.398457)                  H2           0.000002     no decision               0.05
               (7.025678, 6.848193)                  H2           0.000002     no decision               0.05
               (7.425569, 7.490179)                  H2           0.000002     no decision               0.05
               (7.923893, 7.297512)                  H2           0.000002     H2            1.56e-14    0.05           1.17e-12
(9.0, 9.0)     (7.732763, 7.486079)                  H2           0.000002     H2            1.56e-14    0.05           1.17e-12
               (7.696770, 10.67430)                  H2           0.000002     H2            1.56e-14    0.05           1.17e-12
               (8.852090, 10.14633)                  H2           0.000002     H2            1.56e-14    0.05           1.17e-12
               (10.68423, 10.38797)                  H2           0.000002     H2            1.56e-14    0.05           1.17e-12
               (8.703572, 8.388226)                  H2           0.000002     H2            1.56e-14    0.05           1.17e-12

Example 3. As a real example for comparison of the working qualities of the conditional and unconditional Bayesian problems, an example from [Kachiashvili & Melikdzhanian, (2006)] can be offered. The problem of detection of excessive river water pollution sources operating between two controllable cross-sections is considered there. The given section of the river is polluted by five potential emergency pollutants. Let us designate them as OB1, OB2, OB3, OB4 and OB5. The following parameters of pollution are considered: chlorides, sulfates, ammonia nitrogen, petroleum products and iron. In the upper section the pollution does not exceed the norm. In the lower section, it has been revealed that the concentrations of chlorides, sulfates and ammonia nitrogen exceed the corresponding maximum allowable concentrations. This shows that pollution exists in the controllable section.

By means of a specialized software package, for the considered problem, on the basis of modelling of the process of river pollution, six hypotheses regarding the fault of the aforementioned five potential pollutants in the emergency pollution were set up (see Table 3 from [Kachiashvili & Melikdzhanian, (2006)]). The decision about the fault of the potential pollution sources in the emergency pollution was made by using the conditional and unconditional Bayesian problems. Four cases differing by the variance of the measurement results were simulated according to the formula x_{i,mes} = μ_i + k N(x; 0, σ_i), k = 1, 2, 3, 4, where N(x; 0, σ) is a normally distributed random quantity with mathematical expectation equal to zero and variance σ²; μ = (μ1, ..., μ5) = (2519, 592, 81, 0.03, 0.41) and σ = (σ1, ..., σ5) = (20, 10, 0.2, 0.002, 0.01).
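The simulation formula above can be transcribed directly. The sketch assumes independent components, which is how the formula reads; it is an illustration rather than the original software.

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([2519.0, 592.0, 81.0, 0.03, 0.41])
sigma = np.array([20.0, 10.0, 0.2, 0.002, 0.01])

for k in (1, 2, 3, 4):
    # five simulated measurement vectors per noise level, as in the text
    x_mes = mu + k * rng.normal(0.0, sigma, size=(5, 5))
    print(f"k={k}:", np.round(x_mes[0], 3))
```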
For each value of the variance, five cases were simulated. The results of the modeling (see Table 4) showed that, for the two minimum values of the variance, both methods allowed making correct decisions. At the following increase of the variance, both algorithms make a wrong decision in one case out of five. For the maximum value of the variance, on the basis of the unconditional algorithm, the wrong decision is made twice, whereas, on the basis of the conditional algorithm, all decisions are correct.

Besides the fact that the results obtained by the conditional Bayesian method are more reliable, the following is obvious. In the considered problem, the two possible errors, namely an unjustified accusation of a concrete object (enterprise) of the emergency pollution, and a failure to reveal the enterprise responsible for the emergency pollution of the river, have different prices and consequences. Depending on the state policy, defining the priority of protection of the water object from pollution or the encouragement of the development of the given branch of industry, the errors of one kind can and should be limited and, under these conditions, the errors of the other kind should be minimized [Primak, Kafarov & Kachiashvili, (1991)].


Table 3. Hypotheses concerning the objects responsible for emergency pollution of the given section of the river

Hypotheses   Objects (enterprises)     Values of the parameters measured in the lower controlled range
numbers      suspected in pollution    Chlorides,         Sulfates,       Ammonia            Petroleum         Iron,
                                       mg/l               mg/l            nitrogen, mg/l     products, mg/l    mg/l
H1           OB1, OB2                  1519.53 (0)        592.28 (0)      81.56 (0)          0.03 (0)          0.41 (0.253)
H2           OB1, OB2, OB4             1519.53 (0)        592.28 (0)      109.44 (0.79977)   0.04 (1.0)        0.80 (0.7733)
H3           OB1, OB3                  1616.33 (0.6934)   592.28 (0)      88.54 (0.20023)    0.03 (0)          0.59 (0.4933)
H4           OB1, OB2, OB5             1616.33 (0.6934)   592.28 (0)      116.42 (1.0)       0.04 (1.0)        0.97 (1.0)
H5           OB1                       1362.53 (0)        592.28 (0)      81.56 (0)          0.03 (0)          0.22 (0)
H6           OB1, OB2, OB4, OB5        1728.53 (1.0)      634.28 (1.0)    109.44 (0.7998)    0.04 (1.0)        0.80 (0.7733)

Normalized values are given in brackets. The highlighted values show the true hypothesis for the considered measurement result.

Table 4. Hypotheses testing results

Hypotheses            Accepted hypotheses at four different experiments
testing methods       x_mes = mu + N(x;0,sigma)   x_mes = mu + 2N(x;0,sigma)   x_mes = mu + 3N(x;0,sigma)   x_mes = mu + 4N(x;0,sigma)
Unconditional task    H1; H1; H1; H1; H1          H1; H1; H1; H1; H1           H1; H1; H3; H1; H1           H1; H1; H3; H3; H1
Conditional task      H1; H1; H1; H1; H1          H1; H1; H1; H1; H1           H1; H1; H1; H3; H1           H1; H1; H1; H1; H1

Example 4. The essence of the problems of detection and tracking of flying objects on the basis of radar-tracking information consists in the following [Potapov, Vinogradov, Goritskiy & Pertsov, (1975)]: to isolate, from a set of spatial points of possible detection, a subset of points at which the detection of flying objects is the most probable, on the basis of the measurement results. To be specific, let us consider a tracking problem. The essence of the problem can be represented schematically as follows.

Let, by means of the measurement information in n-dimensional space at discrete moments of time t1, t2, ..., tm, there be separated k points of possible location of the detected flying objects. The tracking problem consists in finding the sets of points (O_{i1}(t1), O_{i2}(t2), ..., O_{im}(tm)), i_j ∈ {1, 2, ..., k}, j ∈ {1, 2, ..., m}, that correspond to the valid trajectories of the flying objects. Special algorithms separate the most probable trajectories (we designate their number by S) from all possible combinations of these points (we designate their number by N). It is evident that S <= N, and the number of possible trajectories is N = k^m. By the coordinates of the separated combinations of points, the trajectories of movement of the flying objects are approximated. After that, on the basis of the measured values, hypotheses concerning the truth of the separated trajectories are tested. Errors of two kinds are possible: true trajectories may be incorrectly rejected and false trajectories may be incorrectly accepted. It is obvious that in the considered problem the prices of the errors of different kinds are different. For example, in air defence it is preferable to make a wrong decision that a given trajectory corresponds to an enemy's flying object and to bear the expenses of its neutralization, rather than not to detect a true trajectory and to miss the enemy who is aimed at destruction. Therefore, when solving such problems, the probability of non-detection of true trajectories is bounded and, under this condition, the probability of acceptance of false trajectories is minimized.

Let us simulate the described problem schematically on the computer. For simplicity of understanding, we consider the case of a one-dimensional measurement space, which corresponds to one-coordinate radio-locators. Let three objects move along the trajectories

x1(t) = 2 + 0.7 t + 0.1 t²,
x2(t) = 3 + 1.5 t - 0.1 t²,
x3(t) = 5 - 0.5 t + 0.1 t².
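These trajectories and the simulated radar measurements x_mes = x_i(t) + N(0, σ) can be reproduced directly; the sketch below evaluates the true positions at the measurement moments and adds one draw of Gaussian noise per object.

```python
import numpy as np

def x1(t): return 2 + 0.7 * t + 0.1 * t**2
def x2(t): return 3 + 1.5 * t - 0.1 * t**2
def x3(t): return 5 - 0.5 * t + 0.1 * t**2

rng = np.random.default_rng(3)
for t in (2, 4, 6):
    true = np.array([x1(t), x2(t), x3(t)])   # e.g. t = 2 -> (3.8, 5.6, 4.4)
    meas = true + rng.normal(0.0, 0.8, size=3)   # sigma = 0.8, as in Table 5 a)
    print(t, true, np.round(meas, 3))
```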
The measurements are carried out at the moments of time t1 = 2, t2 = 4 and t3 = 6 (see Fig. 4.1). Because of the sketchiness of the example, the measurement units are omitted. In Table 5, in the columns designated by Hi, i = 1, ..., 4, the hypothetical trajectories along which the movement of the objects is possible are given. For mean square deviations of the measurement errors σ = 0.8 and σ = 1, five different measurement results are simulated for each, and for σ = 2, nine measurement results. The data in the first three lines correspond to the three flying objects at the moment of time t1 = 2, in the next three lines to the same objects at the moment of time t2 = 4, and in the last three lines to the same objects at the moment of time t3 = 6. In Table 5 b), three more lines are added, which correspond to the moment of time t4 = 8. Thus, the data in lines 1, 4, 7 and 10 refer to one trajectory, the data in lines 2, 5, 8 and 11 to another trajectory, and the data in lines 3, 6, 9 and 12 to the third trajectory. From the condition of the problem it is easy to be convinced that hypothesis H1 is true, i.e. it corresponds to Fig. 4.1. The results of the work of the conditional and unconditional Bayesian problems are given in Table 6.

Figure 4.1: True trajectories of flying objects

In all considered cases, in the conditional Bayesian problem, the significance level of the criterion is α = 0.05. For σ = 0.8 and σ = 1, the results of the work of both algorithms are identical. In particular, for σ = 0.8 all decisions are correct, and for σ = 1 both algorithms make one error. If, because of the above-mentioned specificity of the problem, it is essential not to miss the true trajectories of the flying objects with higher confidence, it is necessary to reduce α. Then, for the case of the erroneous acceptance of hypothesis H2 at σ = 1, we obtain the following: at 0.0018 <= α <= 0.00498, the measured values appear in the sub-region of intersection of the regions of acceptance of hypotheses H1 and H2, and the algorithm specifies that both hypotheses seem to be true with the given significance level of the criterion. At 0.00001 < α < 0.0018, the measured values appear in the sub-region of intersection of the regions of acceptance of hypotheses H1, H2 and H4, and the algorithm specifies that these hypotheses seem to be true with the given significance level of the criterion (0.00001 is the maximum accuracy of the algorithm realized in the program). If, on the basis of the given measurement information, it is expedient not to miss the enemy's flying object even at the expense of additional costs, it is necessary to take a small α and to realize defensive actions against all probable trajectories. In this case the enemy's flying object will be hit by all means, though at increased expenses. It is obvious that, in certain conditions, these expenses are justified, as, in the case of not detecting the enemy's flying object, the consequences could be most deplorable. If it is possible not to make the final decision at the given stage, then, after the next measurement of the coordinates of the trajectories, i.e. after obtaining additional information, the hypotheses testing procedure is repeated.

At σ = 2, out of the nine considered cases, the unconditional Bayesian algorithm gives a wrong decision three times, i.e. in more than 33% of the cases. This is absolutely unacceptable for the considered problem. In the same cases, the conditional Bayesian algorithm suspects the first (true) and the fourth hypotheses to be true. Naturally, with the reduction of α, the number of trajectories suspected to be true does not decrease. It means that, on the basis of this measurement information, it is impossible to make an unequivocal decision with a high degree of guarantee of the detection of the flying objects. Therefore, it is necessary either to incur additional expenses for the neutralization of the enemy and operate against all suspected trajectories, or to obtain additional information, if that is acceptable in the given situation. The results of the simulation of such actions are given in the additional three lines (surrounded by double lines) of Table 5 b). Hypotheses testing is postponed until the results of the next measurement of the flying object coordinates are obtained, i.e. until the measurement results at the moment of time t4 = 8 are obtained. As is seen from the lines of Table 6 surrounded by double lines, all decisions made on the basis of such information in the considered case are correct, both in the conditional and in the unconditional Bayesian problems. This results from the fact that the additional information for the trajectories of interest has increased the information distances among the tested hypotheses so much that both algorithms work with a high degree of reliability. In particular, the critical values of the significance level of the criterion in the conditional Bayesian problem for all considered measurements satisfy α < 0.005, and for some measurement results they reach 0.00001, i.e. by Kiefer they are the strongest conclusive decisions.
Table 5. Simulation results of possible trajectories of flying objects and the radar-tracking measurement information

a)
      H1    H2    H3    H4    Measurement results, sigma = 0.8                 Measurement results, sigma = 1
1     3.8   5.6   5.6   5.6   2.335  3.062  4.051  3.938  3.383                5.334  3.939  2.046  3.679  4.279
2     5.6   4.4   4.4   4.4   5.581  6.462  6.464  4.897  6.382                3.48   7.952  6.072  7.478  5.21
3     4.4   3.8   3.8   3.8   4.725  5.504  3.793  4.378  3.977                4.598  5.216  4.63   3.027  4.81
4     6.4   7.4   6.4   4.6   5.663  6.301  6.804  5.986  7.08                 6.34   7.729  5.211  6.112  7.708
5     7.4   6.4   4.6   7.4   7.974  7.647  6.834  8.764  7.935                5.437  6.272  10.171 6.905  7.46
6     4.6   4.6   7.4   6.4   5.017  4.425  6.322  4.448  4.117                4.27   4.179  5.176  3.515  4.795
7     9.8   9.8   9.8   9.8   9.032  10.067 8.651  9.554  9.634                10.716 9.741  9.823  9.291  10.671
8     8.4   8.4   5.6   8.4   8.242  9.354  7.647  8.365  8.838                8.409  10.386 8.663  7.931  10.344
9     5.6   5.6   8.4   5.6   4.948  6.747  4.64   4.997  4.215                5.604  4.6    7.377  6.92   4.014

b)
      H1    H2    H3    H4    Measurement results, sigma = 2
1     3.8   5.6   5.6   5.6   1.749  2.399  4.928  1.921  4.305  2.148  -1.22   6.781  1.762
2     5.6   4.4   4.4   4.4   3.72   8.452  5.604  6.956  4.356  1.123  6.136   5.005  4.238
3     4.4   3.8   3.8   3.8   4.622  3.42   7.9    8.887  5.915  0.546  5.955   5.447  4.025
4     6.4   7.4   6.4   4.6   8.451  6.973  6.756  6.836  4.004  5.202  10.555  4.814  5.258
5     7.4   6.4   4.6   7.4   5.164  3.857  5.678  10.644 5.241  7.587  5.881   10.289 6.57
6     4.6   4.6   7.4   6.4   4.81   6.726  3.416  4.262  4.616  4.67   8.841   2.726  9.1
7     9.8   9.8   9.8   9.8   10.299 8.142  10.088 8.808  8.418  6.045  10.482  12.511 7.283
8     8.4   8.4   5.6   8.4   8.209  8.103  10.184 5.381  9.899  7.622  10.454  5.854  7.859
9     5.6   5.6   8.4   5.6   7.789  7.437  4.133  5.284  8.01   6.222  4.18    6.491  6.471
10    14    8.6   14    14    14.313 15.096 14.129 15.834 16.294 12.225 13.394  10.737 13.825
11    8.6   14    7.4   7.4   8.569  10.68  7.416  8.429  8.381  7.277  8.594   9.133  4.803
12    7.4   7.4   8.6   8.6   10.707 7.447  11.031 5.957  8.799  3.104  6.124   3.297  9.505

Table 6. Hypotheses testing results by conditional and unconditional Bayesian methods

Methods of                             Accepted hypotheses at different experiments
hypotheses testing                     x_mes = x_i(t) + N(x; 0, 0.8)   x_mes = x_i(t) + N(x; 0, 1)   x_mes = x_i(t) + N(x; 0, 2)
Unconditional (on the basis
of 9 measurement results)              H1; H1; H1; H1; H1              H2; H1; H1; H1; H1            H1; H1; H1; H1; H4; H4; H1; H1; H4
Conditional (on the basis
of 9 measurement results)              H1; H1; H1; H1; H1              H2; H1; H1; H1; H1            H1; H1; H1; H1; H1&H4; H1&H4; H1; H1; H1&H4
Unconditional (on the basis
of 12 measurement results)                                                                           H1; H1; H1; H1; H1; H1; H1; H1; H1
Conditional (on the basis
of 12 measurement results)                                                                           H1; H1; H1; H1; H1; H1; H1; H1; H1

4.2 Sensitivity Analysis

Criticism of the application of the loss functions used in different works is presented in [Berger, (1985)]. In particular, the following reasons for such criticism are given: (i) inappropriateness for inference problems, (ii) complexity of implementation, and (iii) non-robustness of the obtained results. We shall not dwell on the first disadvantage for the simple reason that we consider only statistical hypotheses testing problems in this work. From our point of view, the second disadvantage is not topical, taking into account today's advancement of computer science. The problem of robustness is very important and delicate in statistics [Grillenzoni, (2008); Dhar & Chaudhuri, (2008)]. Therefore, below, in Tables 7 and 8, we present the results of the investigation of this problem for the above-described hypotheses testing tasks under normality of the probability distribution of the vector of measured parameters, i.e. under (3.2).

By non-robustness is meant the dependence of the hypotheses testing results and the corresponding risk functions on the loss function, on the one hand, and on the a priori probabilities of the tested hypotheses, on the other hand. Hence, below are given the computation results showing the dependence of the correctness of the decisions made and the corresponding values of the risk functions on the loss function values and on the a priori probabilities of the tested hypotheses for conditional Bayesian problems 1, 2, 4 and 7 (Tables 7 and 8).

It should be noted that the conditional Bayesian tasks of hypotheses testing are free from the necessity of artificial introduction of loss functions. Therefore, only the results of the research of the dependence of the conditional Bayesian problems on the covariance matrices are given in Table 7. The results of the research of the dependence of the conditional Bayesian problems on the change of the a priori probabilities, for Example 6, are given in Table 8. The investigations were performed for two examples. Let us introduce the data of these examples.


Example 5. Tested hypotheses: H1 : μ11 = 1, μ21 = 1; H2 : μ12 = 4, μ22 = 4; H3 : μ13 = 8, μ23 = 8. Measurement results: x1 = 2, x2 = 2. A priori probabilities of the tested hypotheses: p(H1) = 1/3, p(H2) = 1/3, p(H3) = 1/3.

In both examples, the following covariance matrices were used:

W1 = \begin{pmatrix} 2 & 1.5 \\ 1.5 & 2 \end{pmatrix}, \quad W2 = \begin{pmatrix} 4 & 3 \\ 3 & 5 \end{pmatrix}, \quad W3 = \begin{pmatrix} 7 & 5 \\ 5 & 8 \end{pmatrix}, \quad W4 = \begin{pmatrix} 10 & 9 \\ 9 & 10 \end{pmatrix}.

Example 6. Tested hypotheses: H1 : μ11 = 2, μ21 = 2; H2 : μ12 = 4, μ22 = 4; H3 : μ13 = 7, μ23 = 7. Measurement results: x1 = 2.5, x2 = 2.5. A priori probabilities of the tested hypotheses: p(H1) = 1/3, p(H2) = 1/3, p(H3) = 1/3.


Table 7. Conditional hypotheses testing problem

                          Example 5                                                  Example 6
Covariance   Accepted     Risk       Lagrange                 Significance   Accepted    Risk       Lagrange                 Significance
matrix       hypothesis   function   multipliers              level          hypothesis  function   multipliers              level

Task 1
W1           H1           0.0685     1.11756                  0.05           H1          0.2735     1.772                    0.07
W2           H1           0.32417    1.48775                  0.18           H1          0.41       1.3838                   0.29
W3           H1           0.3725     1.33949                  0.27           H1          0.447      1.39795                  0.39
W4           H1           0.33067    1.33992                  0.26           H1          0.379      1.49311                  0.4

Task 2
W1           H1           0.08967    0.4248, 0.741, 0.1122    0.05           H1          0.23783    0.4808, 0.5887, 0.2348   0.1
W2           H1           0.24183    0.2486, 0.4784, 0.143    0.27           H2          0.30583    0.2682, 0.4667, 0.1691   0.4
W3           H2           0.33383    0.2795, 0.4988, 0.1919   0.34           H2          0.40883    0.3490, 0.5349, 0.2686   0.42
W4           H2           0.266      0.3321, 0.4635, 0.2741   0.33           H2          0.35767    0.4026, 0.5302, 0.3564   0.42

Task 4 (average power instead of the risk function)
W1           H1           0.9270     1.19201                  0.05           H1          0.72900    1.64594                  0.07
W2           H1           0.6630     1.43825                  0.16           H1          0.44283    1.15623                  0.14
W3           H1           0.56117    1.17280                  0.2            H1          0.40117    0.94346                  0.19
W4           H1           0.60717    0.89237                  0.2            H1          0.42767    0.76443                  0.17

Task 7 (average power instead of the risk function)
W1           H1           0.97367    0.4611, 0.5826, 0.1181   0.05           H1          0.83117    1.0240, 1.2522, 0.4043   0.05
W2           H1           0.63867    1.4439, 1.7936, 0.9693   0.05           H1          0.55050    1.1695, 1.0559, 0.8837   0.08
W3           H1           0.63917    1.177, 1.2195, 0.8621    0.05           H1          0.53650    0.9067, 0.7825, 0.7084   0.11
W4           H1           0.61283    0.9627, 0.8960, 0.7095   0.07           H1          0.5345     0.7686, 0.6863, 0.6400   0.01


Table 8. Conditional hypotheses testing problem

A priori probabilities                    Accepted     Risk       Lagrange                   Significance
p(H1)     p(H2)     p(H3)                 hypothesis   function   multipliers                level

Task 1, W1, variant a)
0.333     0.333     0.333                 H1           0.06617    1.15846                    0.05
0.3       0.3       0.4                   H1           0.06305    1.08882                    0.05
0.2       0.2       0.6                   H1           0.04060    1.84007                    0.05
0.1       0.1       0.8                   H1           0.01630    0.56790                    0.05
0.01      0.01      0.98                  H1           0.00422    0.47767                    0.011

Task 1, W1, variant b)
0.333     0.333     0.333                 H1           0.06617    1.15846                    0.05
0.3       0.4       0.3                   H1           0.08540    1.26509                    0.05
0.2       0.6       0.2                   H2           0.07410    1.11382                    0.05
0.1       0.8       0.1                   H2           0.06300    1.15645                    0.05
0.01      0.98      0.01                  H2           0.00201    0.02436                    0.05

Task 2, variant a)
0.333     0.333     0.333                 H1           0.08617    0.4317, 0.5744, 0.1401     0.05
0.3       0.3       0.4                   H1           0.07255    0.3549, 0.5626, 0.1351     0.05
0.2       0.2       0.6                   H1           0.0513     0.2056, 0.4082, 0.0662     0.06
0.1       0.1       0.8                   H1           0.03005    0.0952, 0.2346, 0.0206     0.09
0.01      0.01      0.98                  H2           0.00756    0.0043, 0.1033, 0.0010     0.016

Task 2, variant b)
0.333     0.333     0.333                 H1           0.08617    0.4317, 0.5744, 0.1401     0.05
0.3       0.4       0.3                   H1           0.08400    0.4921, 0.5288, 0.2018     0.05
0.2       0.6       0.2                   H1           0.08785    1.0445, 0.1648, 0.3956     0.05
0.1       0.8       0.1                   H1           0.09167    1.0609, 0.1670, 0.3650     0.05
0.01      0.98      0.01                  H1           0.09738    1.2476, 0.0153, 0.4156     0.05

Task 4, variant a) (average power instead of the risk function)
0.333     0.333     0.333                 H1           0.92667    1.14523                    0.05
0.3       0.3       0.4                   H1           0.93755    1.09875                    0.05
0.2       0.2       0.6                   H1           0.96290    0.86380                    0.05
0.1       0.1       0.8                   H1           0.98395    0.51878                    0.05
0.01      0.01      0.98                  H1           0.99587    0.44273                    0.098

Task 4, variant b)
0.333     0.333     0.333                 H1           0.92667    1.14523                    0.05
0.3       0.4       0.3                   H1           0.92225    1.19872                    0.05
0.2       0.6       0.2                   H2           0.92100    1.21524                    0.052
0.1       0.8       0.1                   H2           0.93280    1.19274                    0.05
0.01      0.98      0.01                  H2           0.99765    0.02435                    0.05

Task 7, variant a) (average power instead of the risk function)
0.333     0.333     0.333                 H1           0.97233    0.4416, 0.6179, 0.1107     0.05
0.3       0.3       0.4                   H1           0.98180    0.4025, 0.5317, 0.1260     0.05
0.2       0.2       0.6                   H1           0.98750    0.3160, 0.4371, 0.1947     0.041
0.1       0.1       0.8                   H1           0.98870    0.3307, 0.4439, 0.5808     0.02
0.01      0.01      0.98                  H1           0.99569    0.4459, 0.4716, 11.876     0.001

Task 7, variant b)
0.333     0.333     0.333                 H1           0.97233    0.4416, 0.6179, 0.1107     0.05
0.3       0.4       0.3                   H1           0.97130    0.4172, 0.7498, 0.1106     0.05
0.2       0.6       0.2                   H1           0.96890    0.3056, 1.2891, 0.1064     0.042
0.1       0.8       0.1                   H1           0.96080    0.2672, 3.6028, 0.0944     0.02
0.01      0.98      0.01                  H2           0.99795    0.0272, 0.0451, 0.0115     0.019


Small information distances between the hypotheses lead to errors in hypotheses testing. In conditional Bayesian Tasks 1, 4 and 7, correct decisions are made for all considered information distances in Examples 5 and 6. The unreliable work of Task 2 for small information distances between the hypotheses can be explained by the rigid restrictions of this task, which keep the sizes of the regions of acceptance of hypotheses at the given level. In the considered examples, this causes expansion of the region of acceptance of the first hypothesis to the left and of that of the third hypothesis to the right; owing to this, the given size of the region of acceptance of the second hypothesis is kept as the information distance between the hypotheses decreases. Therefore, the second hypothesis is accepted incorrectly. This is confirmed by the fact that, for the measurement results x1 = 1.8 and x2 = 1.8 in Example 5, the decisions made by Task 2 for all considered information distances are correct, whereas, for the measurement results x1 = 2.3 and x2 = 2.3 in Example 6 with covariance matrix W2, a correct decision is already made by Task 2.

Tasks 2 and 7 are especially robust to changes of the a priori probabilities: they make correct decisions in all considered cases with one exception each. Of decisive importance for the correct testing of hypotheses in all considered tasks is the correct (or close to correct) choice of the a priori probabilities of the true and close-to-true hypotheses. If there is no confidence in the correctness of the chosen values of the a priori probabilities, it is better to ignore this information and take p(H1) = p(H2) = ... = p(HS), where S is the number of tested hypotheses.

Conditional Bayesian Tasks 1, 4 and 7, unlike Task 2, are robust to changes of the information distances between the hypotheses; Task 2 is the most sensitive to these changes among all considered tasks. Tasks 2 and 7 are the least sensitive to changes of the a priori probabilities of the hypotheses. On this basis, the task (among the offered set of tasks) for hypotheses testing should be chosen when solving practical problems. Moreover, it should be taken into account that in Tasks 1, 2 and 3 differently weighted probabilities of correctly accepted hypotheses are restricted from below and the averaged error of the first kind is minimized, while in Tasks 4, 5, 6 and 7 differently weighted probabilities of incorrectly rejected hypotheses are restricted and the averaged power of the criterion is minimized. From the specificity of conditional Bayesian Tasks 1-7, it is obvious that, for each information distance between the hypotheses, there exists a certain restriction on the level of the power of the criterion attainable in principle (for Tasks 1-3) or on the level of the errors of the first kind (for Tasks 4-7). The minimum attainable levels in the corresponding restrictions are given in Tables 7 and 8. Therefore, the comparison of the values of the risk functions in Tasks 1-3 makes sense only at identical levels in the restrictions. Analogously, in Tasks 4-7, it is possible to compare the values of the function of averaged power only at identical levels in the restrictions.

4.3 Experimental Research of Peculiarities

For showing the rightness of the reasoning cited in 3.1.1, below are given the results of calculations for concrete examples, in which two hypotheses concerning the mathematical expectation of a three-dimensional normally distributed vector are tested. In both examples, the following covariance matrices are used in the calculations:

W1 = \begin{pmatrix} 0.1 & 0 & 0 \\ 0 & 0.1 & 0 \\ 0 & 0 & 0.1 \end{pmatrix}, \quad W2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad W3 = \begin{pmatrix} 2 & 0.5 & 0.1 \\ 0.5 & 2 & 0.1 \\ 0.1 & 0.4 & 2 \end{pmatrix},

W4 = \begin{pmatrix} 4 & 2 & 1 \\ 2 & 4 & 3 \\ 1 & 3 & 4 \end{pmatrix}, \quad W5 = \begin{pmatrix} 8 & 4 & 2 \\ 4 & 8 & 3 \\ 2 & 3 & 8 \end{pmatrix}, \quad W6 = \begin{pmatrix} 12 & 9 & 7 \\ 9 & 12 & 6 \\ 7 & 6 & 12 \end{pmatrix},

W7 = \begin{pmatrix} 20 & 18 & 15 \\ 18 & 20 & 17 \\ 15 & 17 & 20 \end{pmatrix}, \quad W8 = \begin{pmatrix} 100 & 80 & 70 \\ 80 & 100 & 65 \\ 70 & 65 & 100 \end{pmatrix}.

Example 7. On the basis of the measurement result x = (1.5, 1.5, 1.5) of the three-dimensional normally distributed vector X = (X1, X2, X3), the decision concerning the correctness of one of the two hypotheses H1 : μ1 = (1, 1, 1) and H2 : μ2 = (3, 3, 3) is made. The a priori probabilities of the tested hypotheses are equal, i.e. p(H1) = p(H2) = 1/2.

Example 8. On the basis of the measurement result x = (2, 2, 2) of the three-dimensional normally distributed vector X = (X1, X2, X3), the decision concerning the correctness of one of the two hypotheses H1 : μ1 = (1, 1, 1) and H2 : μ2 = (3, 3, 3) is made. The a priori probabilities of the tested hypotheses are equal, i.e. p(H1) = p(H2) = 1/2.

The calculation results are given in Table 9, from which it is seen that, in Example 7, for all considered covariance matrices, right decisions are made both in the unconditional and in the conditional Bayesian tasks. In this case, the averaged risk in the conditional Bayesian task at λ = 1 is equal to the significance level of the criterion α, and it is also equal to the average risk in the unconditional Bayesian task, which completely corresponds to the results of the theoretical research. For covariance matrix W1, there is no such result of the computation of the conditional Bayesian task because, in the appropriate program package, the possibility of specifying the significance level of the criterion with an accuracy of up to 2.15 · 10^{-8} is not realized.

From the calculation results of Example 8, it is seen that, in the unconditional Bayesian task, for all considered covariance matrices, the second hypothesis is accepted, which is not exactly correct, as the measurement result is informationally equidistant from both hypotheses, and therefore both hypotheses are true (or false) with identical probability. In this sense, the results of the conditional Bayesian task are absolutely correct, as, depending on the significance level of the criterion, either both hypotheses are rejected or both are accepted. In this case, at λ = 1, the values of the average risk in both tasks are equal, and they are equal to the significance level of the criterion in the conditional Bayesian task, which corresponds to the results of the theoretical research. In the conditional Bayesian task, the value λ = 1 is the threshold at which the result of the operation of the algorithm changes from the impossibility of accepting a single-valued decision to the impossibility of accepting any decision.
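The symmetry argument of Example 8 is easy to verify numerically: x = (2, 2, 2) lies at the same Mahalanobis distance from μ1 = (1, 1, 1) and μ2 = (3, 3, 3), since the two deviation vectors differ only in sign, so the likelihoods coincide for any common covariance matrix and the posterior probabilities are 1/2 each.

```python
import numpy as np
from scipy.stats import multivariate_normal

x = np.array([2.0, 2.0, 2.0])
mu1, mu2 = np.ones(3), 3.0 * np.ones(3)
W2 = np.eye(3)   # the identity covariance matrix W2 of Section 4.3

f1 = multivariate_normal.pdf(x, mean=mu1, cov=W2)
f2 = multivariate_normal.pdf(x, mean=mu2, cov=W2)
print(f1, f2, f1 / (f1 + f2))   # equal densities, posterior probability 0.5
```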


Table 9.

Example 7
                Unconditional task          Conditional task
Covariance      Accepted     Risk           Restriction              Accepted            Risk          Lagrange
matrix          hypothesis   function       level alpha              hypothesis          function      multiplier
W1              H1           2.15e-8        = 0.00001                H1                  1.0e-16       9.4e-14
                                            < 0.00469                both hypotheses
                                            (0.00469, 0.193)         H1                  0.0047        0.04994
                                            = 0.193                  H1                  0.19319       20.0724
                                            > 0.193                  neither
W2              H1           0.044163       = 0.0416                 H1                  0.04166       1.00125
                                            < 0.05515                both hypotheses
                                            (0.05515, 0.2972)        H1                  0.05517       0.32208
                                            = 0.2972                 H1                  0.29725       3.10582
                                            > 0.2972                 neither
W3              H1           0.14353        = 0.1435                 H1                  0.1435        1.00031
                                            < 0.1714                 both hypotheses
                                            (0.1714, 0.37591)        H1                  0.17139       0.67033
                                            = 0.37591                H1                  0.3759        1.49176
                                            > 0.37591                neither
W4              H1           0.26354        = 0.2635                 H1                  0.26359       1.00017
                                            < 0.2427                 both hypotheses
                                            (0.2427, 0.408)          H1                  0.24264       0.80541
                                            = 0.408                  H1                  0.40793       1.2414
                                            > 0.408                  neither
W5              H1           0.32087        = 0.3208                 H1                  0.32095       1.00019
                                            < 0.3065                 both hypotheses
                                            (0.3065, 0.43302)        H1                  0.30641       0.89242
                                            = 0.43302                H1                  0.43292       1.12035
                                            > 0.43302                neither
W6              H1           0.36792        = 0.3675                 H1                  0.36834       1.00075
                                            < 0.35996                both hypotheses
                                            (0.35996, 0.4524)        H1                  0.35999       0.94449
                                            = 0.4524                 H1                  0.45243       1.05881
                                            > 0.4524                 neither
W7              H1           0.40554        = 0.405                  H1                  0.40607       1.00066
                                            < 0.4337                 both hypotheses
                                            (0.4337, 0.4778)         H1                  0.4337        0.98769
                                            = 0.4778                 H1                  0.4778        1.01246
                                            > 0.4778                 neither
W8              H1           0.45568        = 0.4556                 H1                  0.45576       1.00004

Example 8
                Unconditional task          Conditional task
Covariance      Accepted     Risk           Restriction              Accepted            Risk          Lagrange
matrix          hypothesis   function       level alpha              hypothesis          function      multiplier
W1              H2           2.15e-8        (0.00001, 0.9999)        neither
W2              H2           0.04163        <= 0.04163               both hypotheses     0.04163       1.00009
                                            > 0.04163                neither
W3              H2           0.14353        <= 0.14353               both hypotheses     0.14354       1.00003
                                            > 0.14353                neither
W4              H2           0.26354        <= 0.2635                both hypotheses     0.26359       1.00017
                                            > 0.2635                 neither
W5              H2           0.32087        <= 0.32087               both hypotheses     0.32088       1.00001
                                            > 0.32087                neither
W6              H2           0.36792        <= 0.36791               both hypotheses     0.36793       1.00001
                                            > 0.36791                neither
W7              H2           0.40554        <= 0.40553               both hypotheses     0.40554       1.00001
                                            > 0.40553                neither
W8              H2           0.45568        <= 0.45568               both hypotheses     0.45568       1.00000
                                            > 0.45568                neither
From the above reasoning it is obvious that, in conditional problems of hypotheses testing, the situation is similar to that of the sequential analysis [Wald, (1947)]. A situation can arise when, on the basis of the available information, it is impossible to accept a hypothesis with the specified reliability. In this situation, the actions are similar to those of the sequential analysis, i.e. it is necessary to obtain extra information in the form of additional observation results or to change the level of reliability of the test.
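The sequential behaviour just described can be sketched as a loop. The names `draw_observation` and `conditional_test` below are hypothetical stand-ins: the latter is assumed to return the set of hypotheses accepted by any of the conditional Bayesian tasks at the required level α.

```python
import numpy as np

def sequential_testing(draw_observation, conditional_test, alpha, max_steps=20):
    """Accumulate observations until exactly one hypothesis is accepted at level alpha."""
    data = []
    for _ in range(max_steps):
        data.append(draw_observation())
        accepted = conditional_test(np.array(data), alpha)
        if len(accepted) == 1:          # an unambiguous decision has been reached
            return accepted.pop(), len(data)
    return None, len(data)              # no decision with the available information
```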
Thus, the introduced generalization of the classical unconditional Bayesian problem of hypotheses testing in the form of the conditional Bayesian problem allows
passing from the parallel analysis of the data to the sequential analysis (if possible).

85

Therefore, it is natural to suppose that, besides the above-mentioned, the conditional


Bayesian problems of hypotheses testing have the same advantages over the unconditional Bayesian problem that the sequential analysis has over the parallel analysis. In
particular, among them are the guaranteed of errors of both the first and the second
kinds and the minimality on the average of the number of observations necessary for
ensuring the desired significance level of criterion. Though, this statement calls for a
rigorous proof.

4.4 The Results of Experimental Research of the Quasi-optimal Decision Rule

For the purpose of experimental research of the quality of the offered quasi-optimal algorithms (see Section 3.5), a problem of emergency detection of pollution sources of river water on a given section is considered [Kachiashvili and Melikdzhanian, (2006)]. The pollution level of the river water is controlled by the content of chlorides, sulfates, ammonia nitrogen, petroleum products and iron. The concentrations of these components in the river water are caused by five different pollution sources operating on the controllable section of the river. We designate these objects as OB1, OB2, OB3, OB4 and OB5. In normal operating conditions they discharge wastes with one concentration of the components into the river, and in emergency conditions with another, considerably higher, concentration. Six hypotheses concerning these objects being in the emergency operating conditions are formulated (see Table 3) [Kachiashvili and Melikdzhanian, (2006)]. We have: the dimension of the observation vector n = 5 and the number of hypotheses S = 6.

As is seen, the values of the water parameters diverge considerably. Therefore, for avoiding the overflow of registers or the occurrence of division by zero in the computer, prior to hypotheses testing, normalization of the initial data is carried out by the formulae x_i' = (x_i - c_i)/(d_i - c_i), σ_i'^2 = σ_i^2/(d_i - c_i)^2, i = 1, ..., n, where c_i and d_i are the minimum and maximum values of the i-th controllable parameter over the set of considered hypotheses, respectively. After such normalization, the coordinates of the hypotheses take on the values specified in brackets in Table 3.
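The normalization can be transcribed directly; the sketch assumes c_i and d_i are the per-parameter minima and maxima over the hypothesis means of Table 3.

```python
import numpy as np

def normalize(x, sigma2, hypothesis_means):
    """Map x to (x - c)/(d - c) and rescale the variances by (d - c)^2, componentwise."""
    c = hypothesis_means.min(axis=0)
    d = hypothesis_means.max(axis=0)
    return (x - c) / (d - c), sigma2 / (d - c) ** 2

# Example: iron means of hypotheses H1..H6 (last column of Table 3)
iron = np.array([[0.41], [0.80], [0.59], [0.97], [0.22], [0.80]])
x_norm, s2_norm = normalize(np.array([0.41]), np.array([0.1 ** 2]), iron)
print(np.round(x_norm, 4))   # [0.2533], matching the bracketed value for H1 in Table 3
```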
The results of the work of the considered optimal and quasi-optimal algorithms are given in Table 10. In each case, for testing the formulated hypotheses (see Table 3), five-dimensional observation vectors with identical covariance matrix, taken five at a time, are used. In each case, the observation results are simulated by the formula x_mes = μ + k N(x|0; σ), k = 1/2, 1, 2, 3, where N(x|0; σ) is a normally distributed random vector with mathematical expectation equal to zero and the vector of variances σ = (σ1, ..., σ5); μ = (μ1, ..., μ5) = (1519, 592, 81, 0.03, 0.41) and σ = (σ1, ..., σ5) = (115.7, 34.6, 14.2, 0.006, 0.1).
In our case, it is known that the originators of the emergency pollution are objects OB1 and OB2, i.e. hypothesis H1 is true [Kachiashvili and Melikdzhanian, (2006)].

In Table 10, the optimal and approximate values of the risk functions for the conditional Bayesian task are given in the same columns: near the optimal values of the risk functions, the approximate values are given in brackets. For the conditional Bayesian task, the optimal, approximate and quasi-optimal values of the risk functions are given for identical significance levels of the criterion. For the conditional tasks, in the corresponding columns, the minimum values of the significance levels of the criterion for which unambiguous decisions are made are given. For comparison with the optimal problem, in the quasi-optimal conditional one, those significance levels of the criterion are given in brackets for which a hypothesis is accepted in the optimal conditional task. The values of the risk functions for different numbers of tested hypotheses and different significance levels of the criterion for 2Σ are given in Table 11. For clearness, the dependences of the risk functions on the number of tested hypotheses, the module of the covariance matrix and the significance level of the criterion are shown in Figs. 4.2-4.4. From the modelling results (see Table 10), it is evident that the quality of hypotheses testing of the quasi-optimal algorithms coincides with the quality of the optimal algorithms for the considered example.

From the data of Tables 10 and 11 and the figures, the validity of the statements cited in Item 3.5.3 is evident. In particular, for α < 0.2, the value of the risk function of the optimal conditional Bayesian task is greater than the approximate value and less than the corresponding value of the quasi-optimal task. For the values α = 0.2, 0.3, 0.4, the value of the risk function of the optimal conditional Bayesian task becomes less than the approximate value; for S = 6, such a change occurs at α = 0.3. The computation results also confirmed the evident fact that the values of the risk functions r^opt, r^a and r^Q are equal for S = 2.


Table 10. The results of statistical hypotheses testing by the considered methods

                                                            Optimal conditional task                         Quasi-optimal conditional task
Covariance  Observation result (x1, ..., x5)                Accepted               Risk function     Signif.  Accepted   Risk       Significance
matrix                                                      hypothesis Hi          r^opt (r^a)       level    Hi         function   level
Sigma/2     (0.0566, 1.4944, 0.0888, -0.4399, 0.2289)       H1                     0.0970 (0.0869)   0.05     H1         0.2015     0.0312 (0.05)
            (0.0767, -0.3574, 0.2390, -0.1898, 0.3139)      H1                                       0.05     H1         0.7090     0.0312 (0.05)
            (0.5341, 1.0880, 0.1249, 0.1688, 0.2302)        H1                                       0.05     H1         0.7090     0.0312 (0.05)
            (0.1296, -0.4322, 0.4862, 0.8181, 0.2725)       H1                                       0.05     H1         0.7090     0.0312 (0.05)
            (-0.0087, 0.0811, -0.2503, -0.9805, 0.3361)     H1                     0.452000          0.05     H1         0.7090     0.0312 (0.05)
Sigma       (0.2565, -0.2249, 0.2069, -0.3295, 0.0794)      H5 (H1, H5 suspected)  0.258583 (0.1921) 0.1155   H5                    0.0242 (0.1155)
            (0.3845, 2.0334, -0.3063, 0.0452, 0.3367)       H1 (H1, H3 suspected)  0.259833 (0.1927) 0.115    H1                    0.0241 (0.115)
            (0.4556, 0.1030, 0.0383, -0.4091, 0.1675)       H1 (H1, H5 suspected)  0.253583 (0.1908) 0.1166   H1                    0.0245 (0.1166)
            (-0.1027, -0.6681, 0.2551, 0.9105, 0.3726)      H1                     (0.5002)          0.109    H1                    0.0228 (0.109)
            (0.0871, 0.8164, -0.1306, 0.4583, 0.4756)       H1 (H1, H3 suspected)  0.262917 (0.2005)          H1
2 Sigma     (0.1132, 2.9888, 0.1776, -0.8798, 0.2049)       H1 (H1, H5 suspected)  1.20063 (0.8806)  0.228    H1         0.8675     0.0504 (0.228)
            (0.1535, -0.7148, 0.478, -0.3796, 0.3748)       H1 (H1, H3, H5 susp.)  0.432000 (0.2965) 0.242    H1         0.8379     0.0539 (0.242)
            (1.0682, 2.1760, 0.2499, 0.3376, 0.2074)        H3 (H1, H3 suspected)  0.59950 (0.4116)  0.0895   H3         1.3199     0.0186 (0.0895)
            (0.2593, -0.8645, 0.9724, 1.6362, 0.2921)       H1 (H1, H2, H3, H5 susp.)  0.893667 (0.6429)  0.168  H1      1.0179     0.0361 (0.168)
            (-0.0175, 0.1622, -0.5007, -1.9611, 0.4192)     H1 (H1, H3 suspected)  0.757250 (0.5467) 0.115    H1         1.2012     0.0241 (0.115)
3 Sigma     (-0.5813, 2.7802, 0.4397, 1.5866, 0.3466)       H1 (H1, H2, H5 susp.)  1.70053 (1.3076)  0.2241   H1         1.3282     0.0495 (0.2241)
            (-0.3016, -0.7448, -0.6955, -0.3628, -0.0938)   H5 (H1, H5 suspected)  0.763917 (0.5260) 0.226    H5         1.3227     0.0499 (0.226)
            (0.5203, -0.8906, 0.9369, 1.1071, 0.4657)       H3 (H2, H3, H6 susp.)  0.671667 (0.4657) 0.255    H3         1.2439     0.0572 (0.255)
            (-0.4434, -3.0615, -0.8523, 0.0107, 0.5735)     H1 (H1, H2, H3, H5 susp.)  1.078833 (0.7833)  0.134  H1      1.6474     0.0284 (0.134)
            (0.8846, 0.4363, -0.2710, 1.4114, 0.0958)       H1 (H1, H3, H5 susp.)  0.828833 (0.5817) 0.1996   H5         1.4023     0.0436 (0.1996)

Table 11. The values of risk functions for 2Σ

Tested             Risk        Significance level of the conditional task
hypotheses         function    0.001     0.01      0.02      0.05      0.1       0.2       0.3       0.4
H1, H2, H3         r^a         0.6435    0.2904    0.2026    0.1077    0.0527    0.0230    0.0110    0.0056
                   r^opt       0.7019    0.3152    0.2069    0.1083    0.0567    0.0179    0.0077    0.0029
                   r^Q         0.7237    0.3523    0.2556    0.1476    0.0846    0.0396    0.0216    0.0124
H1, H2, H3, H4     r^a         0.8087    0.3807    0.2713    0.1534    0.0876    0.0419    0.0236    0.0139
                   r^opt       0.9545    0.4614    0.3301    0.1719    0.0927    0.0345    0.0168    0.0082
                   r^Q         1.0188    0.5569    0.4260    0.2697    0.1701    0.0909    0.0554    0.0352
H1, H2, H3,        r^a         0.9910    0.5477    0.4218    0.2747    0.1839    0.1124    0.0788    0.0579
H4, H5             r^opt       1.1677    0.6887    0.5329    0.3447    0.2028    0.1049    0.0521    0.0279
                   r^Q         1.3210    0.8456    0.7012    0.5142    0.3786    0.2516    0.1826    0.1366
H1, H2, H3,        r^a         1.3447    0.7131    0.5362    0.3341    0.2133    0.1232    0.0837    0.0605
H4, H5, H6         r^opt       1.5484    0.9377    0.7274    0.4668    0.2932    0.1452    0.0815    0.0454
                   r^Q         1.7959    1.1646    0.9667    0.7090    0.5226    0.3489    0.2551    0.1925

Figure 4.2: Dependence of risk functions on the number of hypotheses in conditional tasks for 2Σ and α = 0.05.

Figure 4.3: Dependence of risk functions on the module of the covariance matrix in conditional tasks for S = 6 and α = 0.05.

Figure 4.4: Dependence of risk functions on the significance level in conditional tasks for S = 6 and 2Σ.

Conclusion

The statement of the Bayesian problem of testing many hypotheses as a conditional optimization problem and its solution give new opportunities in the theory and practice of hypotheses testing. It allows making a decision at a guaranteed value of the probability of errors of one type and at the minimum possible value of the probability of errors of the second type. It allows determining the minimum achievable level of the errors of a given type for the concrete observation result on the basis of which the decision is made. On the basis of the last-mentioned, there is an opportunity of introducing a differentiation of the decisions made by the achieved confidence levels, similarly to [Kiefer, (1977)].

For the conditional Bayesian problems of hypotheses testing, there are values of the Lagrange multipliers at which the regions of acceptance of hypotheses in these problems coincide with the regions of acceptance of hypotheses in the unconditional Bayesian problem, i.e. at which the conditional problems transform into the unconditional Bayesian problem of hypotheses testing. Thus, the latter is a special case of the conditional Bayesian problems of hypotheses testing.

The specific features of the regions of acceptance of hypotheses in conditional Bayesian problems of statistical hypotheses testing have been researched. It is shown that the classical Bayesian statement of the problem of statistical hypotheses testing in the form of an unconditional optimization problem is a special case of the conditional Bayesian problems of hypotheses testing set in the form of a conditional optimization problem. It is also shown that in conditional problems of hypotheses testing the situation is similar to that of the sequential analysis: a situation can occur when, on the basis of the available information, it is impossible to make a decision with the specified validity. In such a situation, the actions are similar to those of the sequential analysis, i.e. it is necessary to obtain additional information in the form of new observation results or to change the significance level of the criterion.

Conditional Bayesian Tasks 1, 4 and 7 work more stably at small information distances between the hypotheses, while conditional Task 2 is sensitive to changes of the information distances among the hypotheses. Tasks 2 and 7 are especially robust to changes of the a priori probabilities. If there is no confidence in the correctness of the chosen values of the a priori probabilities, it is better to ignore this information and take p(H1) = p(H2) = ... = p(HS), where S is the number of tested hypotheses.

The offered quasi-optimal algorithms of testing many hypotheses simplify considerably the realization of the hypotheses testing algorithms in conditional Bayesian tasks of testing many hypotheses and the computation of the corresponding risk functions. In particular, owing to this approximation, for the multidimensional normal probability distribution, it is possible to realize the hypotheses testing algorithms analytically, in seconds, and to compute the corresponding values of the risk functions, which is impossible for the optimal algorithms. The offered method of approximate computation of the risk functions in the conditional Bayesian task allows us, if necessary, to compute their estimations simply and rapidly. As a result of the research, the ratios between the risk functions of the considered problems were determined. The computation results for the concrete examples justified the obtained results and the conclusions made.
