64 www.erpublication.org
Two Methods of Obtaining a Minimal Upper Estimate for the Error Probability of the Restoring Formal Neuron
$$f(v) = \mathop{\circledast}_{i=1}^{n+1} f_i(v_i), \qquad (7)$$

where $\circledast$ (a superposition of the addition and multiplication signs) is the convolution symbol.

It is obvious that in view of formula (7) the error probability at the decision element output can be written in two equivalent forms

$$Q = \mathrm{Prob}(v<0) = \int_{-\infty}^{0} f(v)\,dv = \int_{-\infty}^{0} \mathop{\circledast}_{i=1}^{n+1} f_i(v_i)\,dv = \int_{-\infty}^{0} \mathop{\circledast}_{i=1}^{n+1} \left[(1-q_i)\,\delta(v_i-a_i) + q_i\,\delta(v_i+a_i)\right] dv \qquad (8)$$

and

$$Q = \sum_{v<0} f(v) = \sum_{v<0} \mathop{\circledast}_{i=1}^{n+1} f_i(v_i) = \sum_{v<0} \mathop{\circledast}_{i=1}^{n+1} q_i^{(a_i-v_i)/2a_i}(1-q_i)^{(a_i+v_i)/2a_i}, \qquad (9)$$

where the probability distribution density $f_i(v_i)$ is defined by (5) in the first case and by (6) in the second case.

Note that for practical calculations formula (9) can be written in a more convenient form. Indeed, the complete number of discrete values of the variable $v$ is $2^{n+1}$, since

$$v = \tilde a_1 + \tilde a_2 + \dots + \tilde a_n + \tilde a_{n+1},$$

where $\tilde a_i$ is equal either to $a_i$ or to $(-a_i)$ (the proper sign of the weight $a_i$ is meant to be within the round brackets), and the probability of each such value of $v$ is the product of $(n+1)$ co-factors of the form $q_k$ or $(1-q_k)$.

In particular, since

$$Q = \mathrm{Prob}(v \le 0) = \int_{-\infty}^{0} f(v)\,dv,$$

it follows that for a real positive number $s$ $(s>0)$

$$Q \le \int_{-\infty}^{0} e^{-sv} f(v)\,dv \le \int_{-\infty}^{+\infty} e^{-sv} f(v)\,dv,$$

because $e^{-sv} \ge 1$ on the integration domain $v \le 0$. But the right-hand part of this inequality is the Laplace transform of the function $f(v)$:

$$L[f(v)] = \int_{-\infty}^{+\infty} e^{-sv} f(v)\,dv, \qquad (10)$$

where $L$ is the Laplace transform operator. Therefore

$$Q \le L[f(v)]. \qquad (11)$$

The random value with realizations $v$ is the sum of independent random variables having realizations $v_i$. In that case, as is known, the Laplace transform of the convolution $f(v)$ of the functions $f_i(v_i)$ is equal to the product of the Laplace transforms of the convoluted functions:

$$L[f(v)] = \prod_{i=1}^{n+1} L[f_i(v_i)]. \qquad (12)$$

By expression (5) for the functions $f_i(v_i)$ and the Laplace transform definition, we obtain

$$L[f_i(v_i)] = \int_{-\infty}^{+\infty} e^{-sv_i} f_i(v_i)\,dv_i = \int_{-\infty}^{+\infty} e^{-sv_i}\left[(1-q_i)\,\delta(v_i-a_i) + q_i\,\delta(v_i+a_i)\right] dv_i. \qquad (13)$$

Here we should make use of one more property of the Dirac delta function:

$$\int_{-\infty}^{+\infty} g(t)\,\delta(t-t_0)\,dt = g(t_0).$$
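Because the weighted sum $v$ takes only these $2^{n+1}$ discrete values, the error probability of formula (9) can be computed exactly by direct enumeration. The following sketch is not from the paper: the function name and the strict error test $v<0$ are my assumptions.

```python
# Illustrative sketch of the enumeration behind formula (9); the helper
# name and the strict error test v < 0 are assumptions, not the paper's code.
from itertools import product

def exact_error_probability(q, a):
    """q[i]: error probability of input i; a[i]: its weight.
    Input i contributes +a[i] with probability 1 - q[i] (correct signal)
    and -a[i] with probability q[i] (erroneous signal)."""
    Q = 0.0
    for signs in product((+1, -1), repeat=len(q)):
        v = sum(s * ai for s, ai in zip(signs, a))
        if v < 0:  # the decision element errs when the weighted sum is negative
            p = 1.0
            for s, qi in zip(signs, q):
                p *= (1 - qi) if s > 0 else qi
            Q += p
    return Q

# Three inputs (n + 1 = 3) with equal weights: an ordinary majority vote.
print(exact_error_probability([0.1, 0.1, 0.1], [1.0, 1.0, 1.0]))
```

For three equal inputs with $q_i = 0.1$ this returns the majority-vote error probability $3q^2(1-q) + q^3 = 0.028$.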
International Journal of Engineering and Technical Research (IJETR)
ISSN: 2321-0869, Volume-02, Issue-02, February 2014
With this property taken into account, from formula (13) we obtain

$$Q \le \prod_{i=1}^{n+1}\left[(1-q_i)e^{-a_i s} + q_i e^{+a_i s}\right]. \qquad (14)$$

Here $s$, as mentioned above, is an arbitrary real positive number. Before we continue simplifying the right-hand part of inequality (14), we have to define a value of $s$ for which expression (14) gives a minimal upper estimate.

Passing to the natural logarithm of inequality (14), we come to the expression

$$\ln Q \le \sum_{i=1}^{n+1} \ln\left[(1-q_i)e^{-a_i s} + q_i e^{+a_i s}\right].$$

Let us define here partial derivatives with respect to the arguments $a_i$ by using the elementary fact that

$$\frac{dy}{dx} = f'(x)\,e^{f(x)} \quad \text{if } y = e^{f(x)},$$

and also the fact that

$$\frac{d}{dx}\ln f(x) = \frac{f'(x)}{f(x)}.$$

Hence we obtain

$$\frac{\partial \ln Q}{\partial a_i} \le \sum_{i=1}^{n+1} \frac{1}{(1-q_i)e^{-a_i s} + q_i e^{+a_i s}}\left[s q_i e^{+a_i s} - s(1-q_i)e^{-a_i s}\right].$$

For the right-hand part of this inequality to be equal to zero, it suffices that the following condition be fulfilled:

$$s q_i e^{+a_i s} - s(1-q_i)e^{-a_i s} = 0,$$

whence it follows that

$$e^{+2a_i s} = \frac{1-q_i}{q_i},$$

or, which is the same,

$$a_i s = \frac{1}{2}\ln\frac{1-q_i}{q_i}.$$

If the weights $a_i$ of the neuron inputs are put into correspondence with the error probabilities $q_i$ of these inputs by the relations

$$a_i = \ln\frac{1-q_i}{q_i}, \qquad (15)$$

then the sought value of $s$ will be

$$s = \frac{1}{2}. \qquad (16)$$

Using equality (16) in formula (14), we obtain a minimal upper estimate for the error probability $Q$ of the restoring neuron. Indeed, for the right-hand part of expression (14) the following chain of identical transforms is valid:

$$q_i e^{\frac{a_i}{2}} + (1-q_i)e^{-\frac{a_i}{2}} = 2\sqrt{q_i(1-q_i)}\cdot\frac{q_i e^{\frac{a_i}{2}} + (1-q_i)e^{-\frac{a_i}{2}}}{2\sqrt{q_i(1-q_i)}} = 2\sqrt{q_i(1-q_i)}\cdot\frac{\sqrt{\frac{q_i}{1-q_i}}\,e^{\frac{a_i}{2}} + \sqrt{\frac{1-q_i}{q_i}}\,e^{-\frac{a_i}{2}}}{2}.$$

Let us take into account here that

$$\sqrt{\frac{q_i}{1-q_i}} = \exp\left(-\frac{1}{2}\ln\frac{1-q_i}{q_i}\right), \qquad \sqrt{\frac{1-q_i}{q_i}} = \exp\left(+\frac{1}{2}\ln\frac{1-q_i}{q_i}\right).$$

Besides, we denote

$$\frac{1}{2}\left(a_i - \ln\frac{1-q_i}{q_i}\right) = \mu_i.$$

Then we have

$$q_i e^{\frac{a_i}{2}} + (1-q_i)e^{-\frac{a_i}{2}} = 2\sqrt{q_i(1-q_i)}\cdot\frac{e^{\mu_i} + e^{-\mu_i}}{2}.$$

The second co-factor in the right-hand part of this expression is the hyperbolic cosine of the argument $\mu_i$:

$$\frac{e^{\mu_i} + e^{-\mu_i}}{2} = \operatorname{ch}\mu_i.$$

Therefore

$$q_i e^{\frac{a_i}{2}} + (1-q_i)e^{-\frac{a_i}{2}} = 2\sqrt{q_i(1-q_i)}\,\operatorname{ch}\mu_i = 2\sqrt{q_i(1-q_i)}\,\operatorname{ch}\frac{a_i - \ln\frac{1-q_i}{q_i}}{2}.$$

Finally, for estimate (14) we can write

$$Q \le \prod_{i=1}^{n+1} 2\sqrt{q_i(1-q_i)}\,\operatorname{ch}\frac{a_i - \ln\frac{1-q_i}{q_i}}{2}.$$

For the error probability $Q$, the right-hand part of the above inequality is the upper estimate $Q^+$:

$$Q^+ = \prod_{i=1}^{n+1} 2\sqrt{q_i(1-q_i)}\,\operatorname{ch}\frac{a_i - \ln\frac{1-q_i}{q_i}}{2}.$$

The minimum $Q^+_{\min}$ of this upper estimate $Q^+$ is equal to

$$Q^+_{\min} = \prod_{i=1}^{n+1} 2\sqrt{q_i(1-q_i)} = 2^{n+1}\prod_{i=1}^{n+1}\sqrt{q_i(1-q_i)}. \qquad (17)$$

It is attained for the zero argument, at which the hyperbolic cosine takes its minimum value equal to 1.

This estimate confirms, in a certain sense, the advantage of choosing the weights of the restoring neuron in compliance with the error probabilities of the input signals according to the relations

$$a_i = \ln\frac{1-q_i}{q_i}, \qquad i = \overline{1, n+1}.$$
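The first method is easy to check numerically. The sketch below uses my own helper names, not the paper's: it computes the weights (15), the minimal upper estimate (17), and the exact error probability by enumerating all sign patterns, confirming that the estimate bounds the exact value from above.

```python
# Sketch (assumed helper names): weights (15), estimate (17), and an
# exhaustive computation of the exact error probability for comparison.
from itertools import product
from math import log, sqrt, prod

def optimal_weights(q):
    # formula (15): a_i = ln((1 - q_i) / q_i)
    return [log((1 - qi) / qi) for qi in q]

def minimal_upper_estimate(q):
    # formula (17): Q+_min = prod_i 2 * sqrt(q_i * (1 - q_i))
    return prod(2 * sqrt(qi * (1 - qi)) for qi in q)

def exact_error_probability(q, a):
    # enumerate all 2**(n+1) sign patterns of the weighted sum v
    Q = 0.0
    for signs in product((+1, -1), repeat=len(q)):
        if sum(s * ai for s, ai in zip(signs, a)) < 0:
            Q += prod((1 - qi) if s > 0 else qi for s, qi in zip(signs, q))
    return Q

q = [0.05, 0.1, 0.2]
a = optimal_weights(q)
print(exact_error_probability(q, a))   # exact Q
print(minimal_upper_estimate(q))       # upper estimate (17), never smaller
```

For these three inputs the exact error probability is 0.033, while estimate (17) gives about 0.209; the bound is loose for small $n$, but it decreases exponentially as inputs are added.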
Since in relation (20) summation is carried out on the set of two possible values $a_i$ and $-a_i$ of the variable $v_i$, using (6) we have

$$\varphi_{v_i}(S) = (1-q_i)S^{a_i} + q_i S^{-a_i}. \qquad (21)$$

The substitution of (21) into relation (18) gives

$$\varphi_v(S) = \prod_{i=1}^{n+1}\left[(1-q_i)S^{a_i} + q_i S^{-a_i}\right].$$

When $v < 0$, the value $S^v$ satisfies the condition

$$S^v = \frac{1}{S^{(-v)}} > 1,$$

if of course

$$0 < S < 1. \qquad (22)$$

Let us assume that inequality (22) is fulfilled. Then the following relation is valid:

$$Q = \sum_{v<0} f(v) < \sum_{v<0} S^v f(v).$$

Since every summand $S^v f(v)$ is non-negative, we have the inequality

$$\sum_{v<0} S^v f(v) \le \sum_{v} S^v f(v).$$

Therefore

$$Q < \varphi_v(S). \qquad (23)$$

The right-hand part of this expression can be taken as the upper estimate $Q^+$ of the error probability $Q$ of the restoring neuron:

$$Q^+ = \prod_{i=1}^{n+1}\left[(1-q_i)S^{a_i} + q_i S^{-a_i}\right].$$

The latter relation is easily rewritten in the equivalent form

$$Q^+ = \prod_{i=1}^{n+1} Q_i^+ = \prod_{i=1}^{n+1}\left[(1-q_i)w_i + \frac{q_i}{w_i}\right], \qquad (24)$$

where

$$w_i = S^{a_i}. \qquad (25)$$

Each co-factor $Q_i^+ = (1-q_i)w_i + q_i/w_i$ attains its minimum with respect to $w_i$ at

$$w_i = \sqrt{\frac{q_i}{1-q_i}}, \qquad (26)$$

and this minimum is equal to $2\sqrt{q_i(1-q_i)}$. Hence

$$Q^+_{\min} = 2^{n+1}\prod_{i=1}^{n+1}\sqrt{q_i(1-q_i)}, \qquad (27)$$

which coincides with result (17) obtained by the first method.

The weights $a_i$ $\left(i=\overline{1,n+1}\right)$ which match the error probabilities $q_i$ $\left(i=\overline{1,n+1}\right)$ are defined from relations (26) with notation (25) taken into account:

$$a_i = -\frac{\ln\frac{1-q_i}{q_i}}{2\ln S}, \qquad i = \overline{1, n+1}.$$

Since the value $S$ satisfies condition (22), we have $\ln S < 0$ and therefore

$$a_i = K\ln\frac{1-q_i}{q_i}, \qquad i = \overline{1, n+1}, \qquad (28)$$

where

$$K = -\frac{1}{2\ln S}, \qquad 0 < K < \infty. \qquad (29)$$

Thus, the weights $a_i$ $\left(i=\overline{1,n+1}\right)$, which are consistent with the error probabilities $q_i$ $\left(i=\overline{1,n+1}\right)$ and attach a minimum to the upper estimate of the error probability of the restoring neuron, are defined to within a general positive factor $K$.

IV. CONCLUSION

A minimal upper estimate of the error probability of the restoring formal neuron is defined by formula (17) or, which is the same, by formula (27). In both cases the result can be written in the form

$$Q^+_{\min} = \exp\left[-\sum_{i=1}^{n+1} A(q_i)\right], \qquad (30)$$
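The statement that the weights are defined only to within the positive factor $K$ can be illustrated numerically: for every $0 < S < 1$, choosing the weights by (28) and (29) collapses each co-factor of the estimate to $2\sqrt{q_i(1-q_i)}$, so the minimal value (27) is reproduced regardless of $S$. A brief sketch (helper names are mine):

```python
# Sketch of the second method (assumed names): for weights (28) with
# K = -1/(2 ln S) from (29), the upper estimate phi(S) equals the
# minimal estimate (27) for every choice of 0 < S < 1.
from math import log, sqrt, prod

def phi(S, q, a):
    # product of co-factors (1 - q_i) * S**a_i + q_i * S**(-a_i)
    return prod((1 - qi) * S**ai + qi * S**(-ai) for qi, ai in zip(q, a))

q = [0.05, 0.1, 0.2]
target = prod(2 * sqrt(qi * (1 - qi)) for qi in q)  # estimate (27)

for S in (0.2, 0.5, 0.9):
    K = -1 / (2 * log(S))                      # formula (29)
    a = [K * log((1 - qi) / qi) for qi in q]   # formula (28)
    print(S, phi(S, q, a), target)             # the last two coincide
```

The invariance in $S$ is exactly the "general positive factor" freedom: rescaling the weights by $K$ is compensated by the corresponding choice of $S$.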
where

$$A(q_i) = -\ln\left[2\sqrt{q_i(1-q_i)}\right]. \qquad (31)$$

In view of relation (31), which confirms the non-negativity of the values $A(q_i)$, formula (30) implies that an increase of the number $n$ of inputs of the formal decision neuron brings about a monotone decrease of the minimal upper estimate $Q^+_{\min}$ of the error probability of restoration of the binary signal by the exponential law, provided, certainly, that the error probabilities $q_i$ $\left(i=\overline{1,n+1}\right)$ at these inputs are not equal to $\frac{1}{2}$, when the minimal upper estimate $Q^+_{\min}$ of the error probability is equal to 1.

This result demonstrates an essential inner connection with Shannon's theorem [10]. According to this theorem, the number of messages of length $n$ (duration $T$) composed of individual symbols, both in the absence and in the presence of fixed and probabilistic constraints (in the latter case it is assumed that the source is ergodic), grows by the asymptotically exponential law as $n$ (or $T$) increases. In particular, we understand this connection as follows: as the number $n$ of inputs of the restoring formal neuron increases, the initial information to be used in making the decision $Y$ increases by the exponential law, since so does the number of possible versions of the input signal, while the minimal upper estimate $Q^+_{\min}$ of the probability $Q$ that the made decision is erroneous decreases by the same exponential law.

Archil I. Prangishvili was born April 13, 1961, in Tbilisi (Georgia). During 1978-1983 he was a student of the Faculty of Automatics and Computer Engineering at Georgian Polytechnic Institute. During 1983-1987 he was a postgraduate student at Georgian Polytechnic Institute. Currently, he is a Doctor of Technical Sciences, Full Professor at the Faculty of Informatics and Control Systems of Georgian Technical University, full member (academician) of the Georgian National Academy of Sciences, President of the Georgian Engineering Academy, Member of the International Engineering Academy and the International Academy of Informatization of the UN (United Nations), and Rector of Georgian Technical University. Archil I. Prangishvili is an expert in the field of Computer Science, Applied Expert Systems, Artificial Intelligence and Control Systems, head of the Editorial Board of the journal "Automated Control Systems" and member of the Editorial Board of the Georgian Electronic Scientific Journal (GESJ): Computer Science and Telecommunications. Number of published works: more than 120, including 7 monographs and 5 textbooks.
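As a closing numerical illustration of formulas (30) and (31) from the Conclusion: for identical inputs with $q_i = q \ne \frac{1}{2}$, the minimal upper estimate is $\exp[-(n+1)A(q)] = \left[2\sqrt{q(1-q)}\right]^{n+1}$ and thus falls exponentially as inputs are added. A brief sketch (the helper name `A` follows the text; the rest is my own):

```python
# Numerical check of the exponential decrease implied by (30)-(31);
# the function name A follows the paper's notation, the rest is a sketch.
from math import exp, log, sqrt

def A(q):
    # formula (31): A(q) = -ln[2 * sqrt(q * (1 - q))], non-negative
    return -log(2 * sqrt(q * (1 - q)))

q = 0.1
for n in (1, 3, 7, 15):
    # formula (30) with n + 1 identical inputs: (2*sqrt(q*(1-q)))**(n+1)
    print(n, exp(-(n + 1) * A(q)))
```

At $q = \frac{1}{2}$ the exponent $A(q)$ vanishes and the estimate stays at 1, matching the degenerate case noted above.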