http://www.paper.edu.cn
too much detail of the samples and cannot reflect the embedded laws of the samples.

2.2 BP Network algorithm
The training process of a BP network is as below [4].
(1) Initialization. Endow every connection weight value and threshold value with a small random value.
(2) Input to the corresponding neurons in the input layer the components of an eigenvector Xpk = (Xpk1, Xpk2, Xpk3, ..., Xpkn).
(3) Use the eigenvector of the input samples to calculate the corresponding output value O1pk = f(Xpkn) of the neurons in the hidden layer.
(4) Use each unit output O1pk of the hidden layer to calculate the input value of each unit in the output layer, and then further calculate the corresponding output O2pm = f(O1pk) of each unit in the output layer.
(5) Calculate the generalized error of each unit in the output layer via the teacher signals.
(6) Use the connection weight value W2km between the middle layer and the output layer, the generalized error di of each unit of the output layer, and the output O1pk of each unit in the middle layer to calculate the generalized error of each unit in the middle layer.
(7) Use the generalized error dj of each unit in the output layer and the output O1pk of each unit in the middle layer to modify the weight value w2km between the output layer and the middle layer and the threshold value Yj of each unit in the output layer.
(8) Use the generalized error ei of each unit in the middle layer and the input Xpkn of each unit in the input layer to modify the weight value w1nk between the input layer and the middle layer and the threshold value of each unit in the hidden layer.
(9) Select the next sample in order and return to step (2) until all the samples in the training collection have been learned.
(10) Return to step (2) again until the error function is lower than the predetermined value, i.e. the network converges, or until the number of training iterations is greater than the predetermined value.

3 The Applications of BP Network in digital recognition
Firstly, the numbers 0~9 should be digitally processed to constitute the input samples. Considering that a 0-1 image on a 5x5 matrix represents every number clearly, a neural network that can recognize the ten numbers 0~9 is then designed and trained. When the trained network is given an input that represents some number, the network can represent this number correctly through the 8421 BCD code at the output terminal. This network can remember all ten numbers through learning and training. The neural network training should be supervised: ten groups of arrays that represent the numbers 0~9 are used for training, and the corresponding four-digit binary numbers of 1~10 are shown at the output terminal.

3.1 The Design of network structure
From the analysis above, the neural network needs N = 5x5 = 25 input neurons and M = 4 output neurons. The two-layer logsig/logsig network, adopting the logarithmic Sigmoid type activation function with range (0,1), is quite effective for the 0-1 Boolean values. So the network may adopt the N-K-M structure, where the hidden layer is a single layer, N is the number of neurons in the input layer, K is the number of neurons in the hidden layer, and M is the number of neurons in the output layer. The numbers 0 to 9 can be represented respectively by 0-1 charts as:

le0=[1 1 1 1 1     le1=[0 0 1 0 0
     1 0 0 0 1          0 0 1 0 0
     1 0 0 0 1          0 0 1 0 0
     1 0 0 0 1          0 0 1 0 0
     1 1 1 1 1]         0 0 1 0 0]

le2=[1 1 1 1 1     le3=[1 1 1 1 1
     0 0 0 0 1          0 0 0 0 1
     1 1 1 1 1          1 1 1 1 1
     1 0 0 0 0          0 0 0 0 1
     1 1 1 1 1]         1 1 1 1 1]

le4=[1 0 0 0 1     le5=[1 1 1 1 1
     1 0 0 0 1          1 0 0 0 0
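The training steps (1)-(10) of section 2.2 can be sketched as a minimal NumPy implementation. This is a sketch under stated assumptions, not the paper's own code: it assumes one hidden layer, the logsig activation, a squared-error loss, and sample-by-sample gradient descent; the names `train_bp`, `logsig`, and all parameter defaults are hypothetical.

```python
import numpy as np

def logsig(x):
    # Logarithmic sigmoid with range (0, 1), as in the logsig/logsig network
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, T, K=10, lr=0.5, max_epochs=10000, goal=1e-3, seed=0):
    """Plain gradient-descent BP training following steps (1)-(10)."""
    rng = np.random.default_rng(seed)
    N, M = X.shape[1], T.shape[1]
    # (1) Initialization: small random weights and thresholds
    W1 = rng.uniform(-0.5, 0.5, (N, K)); b1 = rng.uniform(-0.5, 0.5, K)
    W2 = rng.uniform(-0.5, 0.5, (K, M)); b2 = rng.uniform(-0.5, 0.5, M)
    for epoch in range(max_epochs):           # (10) repeat over the whole set
        err = 0.0
        for x, t in zip(X, T):                # (9) take samples in order
            # (2)-(4) forward pass: hidden outputs o1, output-layer outputs o2
            o1 = logsig(x @ W1 + b1)
            o2 = logsig(o1 @ W2 + b2)
            # (5) generalized error of output units from the teacher signal t
            d2 = (t - o2) * o2 * (1 - o2)
            # (6) generalized error of hidden (middle-layer) units
            d1 = (d2 @ W2.T) * o1 * (1 - o1)
            # (7)-(8) modify weights and thresholds of both layers
            W2 += lr * np.outer(o1, d2); b2 += lr * d2
            W1 += lr * np.outer(x, d1);  b1 += lr * d1
            err += 0.5 * np.sum((t - o2) ** 2)
        if err < goal:                        # (10) stop when the error goal is met
            break
    return W1, b1, W2, b2
```

The per-sample weight updates mirror the sequential presentation of samples in steps (2)-(9); a batch variant would accumulate the gradients of one epoch before updating.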
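The 5x5 0-1 charts and the 8421 BCD output coding described above can be assembled into training data roughly as follows. This is only a sketch of the data layout: `le0` and `le1` copy the charts above, and the `bcd` helper is a hypothetical name for producing the four-bit 8421 code targets.

```python
import numpy as np

# 5x5 0-1 charts for digits 0 and 1 (from the le0/le1 patterns above)
le0 = np.array([[1,1,1,1,1],
                [1,0,0,0,1],
                [1,0,0,0,1],
                [1,0,0,0,1],
                [1,1,1,1,1]])
le1 = np.array([[0,0,1,0,0],
                [0,0,1,0,0],
                [0,0,1,0,0],
                [0,0,1,0,0],
                [0,0,1,0,0]])

# Each chart is flattened into an N = 5*5 = 25 component eigenvector
# for the input layer
X = np.array([le0.flatten(), le1.flatten()])

# M = 4 output neurons carry the 8421 BCD code of the digit
def bcd(digit):
    # Bits in 8-4-2-1 order, most significant first
    return np.array([(digit >> k) & 1 for k in (3, 2, 1, 0)])

T = np.array([bcd(0), bcd(1)])   # targets: 0000 and 0001
```

With all ten charts prepared this way, `X` has shape (10, 25) and `T` has shape (10, 4), matching the N-K-M structure with N = 25 and M = 4.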
Fig.2 Training curve of L-M algorithm

Table 1 The comparison of training effects of different BP algorithms

  Training   BP training algorithm                Convergence times /     Precision
  function                                        actual training times
  traingd    gradient descent                     10000 / 6842            0.0999982
  traingda   gradient descent with self-adaptive  5000 / 130              0.0971766
             learning rate
  traingdm   gradient descent with momentum       20000 / 18373           0.0999995
  traingdx   gradient descent with momentum and   5000 / 156              0.0957755
             self-adaptive learning rate

4 Conclusions
The network was trained with five kinds of BP algorithms, and the results show that, in terms of rapidity of convergence, the L-M algorithm is the fastest and its precision is the highest, while the other algorithms are not as good as the L-M algorithm. The L-M algorithm is fit for trainings that have a great quantity of samples.

In addition, the reliability of digital recognition using the neural network can be obtained by testing with hundreds of vectors that contain random noise. If a higher recognition precision is required, the network training time can be lengthened to make the training error precision higher, or the number of neurons in the hidden layer can be increased. Otherwise, the resolving power of the input vectors can be enhanced, for example by adopting a 16x16 lattice.

References:
[1] Ying Liandong. The Design and Application of BP Neural Network [J]. Information Technology, 2003, 27(6): 18-20.
[2] Ng S C, Cheung C C, Leung S H. Fast Convergence for Back-Propagation Network with Magnified Gradient Function [J]. IEEE, 2003, 9(3): 1903-1908.
[3] He Qingbi, Zhou Jianli. The convergence and improvements of BP neural network [J]. Journal of Chongqing Jiaotong University, 2005, 24(1): 143-145.
[4] Fan Lei, Zhang Yuntao, Chen Zhenjun. Application of Improved BP Neural Network