
An Improved LS-SVM Based on Quantum PSO Algorithm and Its Application

Guofeng Pan, Kewen Xia, Yao Dong, Jin Shi


School of Information Engineering, Hebei University of Technology, 300401, Tianjin, China
bluegranule@163.com

Abstract

In order to avoid the matrix inversion required by the LS-SVM algorithm, an improved LS-SVM based on the quantum PSO (QPSO) algorithm is presented. The main process is to encode the particle swarm with quantum bits and then to solve the set of linear equations with the iterative QPSO algorithm. In this way the training speed of the LS-SVM algorithm is improved, computer memory is saved, and the least squares solution is always obtained. An actual application in the Changqing oil field indicates that the effect is better than that of the classical SVM and the LM neural network in oil layer recognition: the improved LS-SVM algorithm not only improves recognition accuracy but also accelerates convergence, and the oil layer recognition result fully accords with that of the oil trial.

1. Introduction

With the rapid development of computational intelligence techniques, neural computation, evolutionary computation and granular computing have been applied successfully in many fields and have become main techniques of oil layer recognition. For example, under the condition of few samples, the Support Vector Machine (SVM) based on the SRM rule [1] can avoid the problems of over-learning, the curse of dimensionality and local minima [2] found in classical learning methods, and has been applied successfully to many classification problems [3]. In practice, the Least Squares Support Vector Machine (LS-SVM) [4] advanced by J. A. K. Suykens overcomes the slow training of SVM on large-scale problems, as the LS-SVM algorithm translates the quadratic optimization problem into that of solving a set of linear equations. However, LS-SVM requires computing a matrix inverse, which is difficult on a computer for the large-scale problems of practical projects. Therefore, based on the iterative idea, a new method, the Quantum Particle Swarm Optimization (QPSO) algorithm, is studied, in which the particle swarm is encoded with quantum bits to solve any linear system of equations. It not only accelerates the computation, but also avoids the matrix inversion calculation and saves computer memory.

The improved LS-SVM algorithm is therefore applied to oil layer recognition, where it overcomes some disadvantages of general and empirical well logging techniques [5], such as the difficulty of adapting to the complex and varied environments of oil wells and low recognition accuracy, and improves both recognition accuracy and computation speed.

2. Improved LS-SVM based on the QPSO algorithm

2.1. QPSO algorithm

PSO is a new evolutionary computation technique that reproduces swarm intelligence. After Kennedy and Eberhart proposed the first PSO model [6], many scholars added a convergence constant $\chi$, an inertia weight $\omega$ and a restraint constant $a$ to the basic model, obtaining the improved model [7]:

$$v_i = \chi\left[\omega v_{i-1} + c_1 r_1 (p_{best} - x_{i-1}) + c_2 r_2 (g_{best} - x_{i-1})\right] \qquad (1)$$

$$x_i = x_{i-1} + a v_i \qquad (2)$$

QPSO is presented based on the idea of quantum computation, with the particle swarm encoded by quantum bits. Unlike the classical bit, the quantum bit is a two-state system that can also take linear combinations of the states $|0\rangle$ and $|1\rangle$ [8]. The state of a quantum bit is usually denoted by

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle \qquad (3)$$

where $\alpha$ and $\beta$ are complex amplitudes whose squared moduli $|\alpha|^2$ and $|\beta|^2$ are the probabilities of observing $|0\rangle$ and $|1\rangle$ respectively, satisfying the normalization condition

$$|\alpha|^2 + |\beta|^2 = 1 \qquad (4)$$

(This work is supported by the National Natural Science Foundation of China under Grants 60377020 and 60673087.)
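As a concrete aside (ours, not part of the original paper), the following minimal Python sketch shows how a single quantum bit from equations (3)-(4) can be stored and collapsed to a classical bit; the equal-superposition initialization $1/\sqrt{2}$ is the one used by the pseudocode in the next section.

```python
import numpy as np

# A quantum bit is kept as its amplitude pair (alpha, beta) from equation (3);
# |alpha|^2 and |beta|^2 are the probabilities of observing 0 and 1.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)  # equal superposition

# Normalization condition of equation (4): |alpha|^2 + |beta|^2 = 1.
assert np.isclose(abs(alpha) ** 2 + abs(beta) ** 2, 1.0)

# Observing (collapsing) the quantum bit yields a classical bit:
# 0 with probability |alpha|^2, otherwise 1.
rng = np.random.default_rng(0)
bit = 0 if rng.random() < abs(alpha) ** 2 else 1
```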

Suppose one particle contains $n$ quantum bits. The state of each quantum bit is denoted by the complex number pair $(\alpha, \beta)$, so each particle is encoded by $n$ pairs of complex numbers:

$$x = \begin{bmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_n \\ \beta_1 & \beta_2 & \cdots & \beta_n \end{bmatrix}^T \qquad (5)$$

Each quantum bit may collapse to the state $|0\rangle$ or $|1\rangle$, i.e. each particle carries a quantum bit string of length $n$. As each quantum bit can represent two classical states, $n$ quantum bits can represent $2^n$ different pieces of classical information, so QPSO can search a broader space than basic PSO.
The pseudocode of the procedure is as follows:

For each particle
    Initialize the quantum particle $x = (x_{11}, x_{12}, \ldots, x_{mn})$, where $x_{ij}$ is the $j$-th quantum bit in the encoding of the $i$-th individual, denoted by $\alpha_{ij}$ and $\beta_{ij}$, both initialized to $1/\sqrt{2}$
End
Do
    For each particle
        Update the velocity and position of each quantum bit by equations (1) and (2)
        Adjust the position and velocity as follows:

$$x_i = \begin{cases} X_{\max}, & \text{if } x_i > X_{\max} \\ -X_{\max}, & \text{if } x_i < -X_{\max} \end{cases} \qquad (6)$$

$$v_i = \begin{cases} V_{\max}, & \text{if } v_i > V_{\max} \\ -V_{\max}, & \text{if } v_i < -V_{\max} \end{cases} \qquad (7)$$

        Evaluate the fitness value
        If $f(x_i) < f(pbest_i)$, record the best position in history: $pbest_i = x_i$
        If $f(x_i) < f(gbest_i)$, record the global optimal solution: $gbest_i = x_i$
    End
While the maximum iteration count or the minimum error criterion is not reached
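As a hedged illustration (ours, not the authors' code), the sketch below condenses this pseudocode into NumPy. For brevity it treats each particle's position vector directly as a real-valued candidate solution, i.e. only the $\alpha$ amplitudes of equation (5) are kept, initialized to $1/\sqrt{2}$; the function name `qpso_minimize` and all default parameter values are illustrative assumptions.

```python
import numpy as np

def qpso_minimize(fitness, dim, n_particles=20, t_max=100,
                  c1=2.05, c2=2.05, chi=0.729, a=0.8,
                  w_max=0.9, w_min=0.1, x_max=1.0, v_max=0.5, seed=0):
    """Sketch of the QPSO loop above: constricted PSO updates (1)-(2)
    with clamping (6)-(7) and a linearly decreasing inertia weight."""
    rng = np.random.default_rng(seed)
    x = np.full((n_particles, dim), 1 / np.sqrt(2))  # amplitudes start in equal superposition
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for t in range(t_max):
        w = w_max - (w_max - w_min) * t / t_max      # linear inertia decay (equation (12) below)
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = chi * (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))  # equation (1)
        v = np.clip(v, -v_max, v_max)                # velocity clamping, equation (7)
        x = np.clip(x + a * v, -x_max, x_max)        # position update (2) with clamping (6)
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f                       # update personal bests
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()       # update the global best
    return gbest, pbest_f.min()
```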
2.2. Improved LS-SVM algorithm

Suppose the training set is $S = \{(x_k, y_k) \mid k = 1, 2, \ldots, N\}$, where $x_k \in R^n$ and $y_k \in R$ are the input and output data respectively. Unlike the classical SVM, the objective function and constraints of LS-SVM are constructed by the SRM rule as follows:

$$\min_{w,b,e} J(w, e) = \frac{1}{2} w^T w + \frac{\gamma}{2} \sum_{k=1}^{N} e_k^2$$

$$\text{s.t.}\quad y_k = w^T \varphi(x_k) + b + e_k, \quad k = 1, 2, \ldots, N \qquad (8)$$

In order to solve the optimization problem (8), we translate it into the following set of linear equations [4]:

$$\begin{bmatrix} 0 & \vec{1}^T \\ \vec{1} & K + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \vec{a} \end{bmatrix} = \begin{bmatrix} 0 \\ \vec{Y} \end{bmatrix} \qquad (9)$$

where $\vec{1} = [1, \ldots, 1]^T$, $K_{kl} = \varphi(x_k)^T \varphi(x_l)$ is a kernel matrix satisfying the Mercer condition, $\vec{Y} = [y_1, \ldots, y_N]^T$ and $\vec{a} = [a_1, \ldots, a_N]^T$.

We rewrite equation (9) in the generic form

$$Ax = z \quad (A \in R^{m \times n},\ z \in R^m) \qquad (10)$$

In the LS-SVM algorithm, this system is generally solved by the least squares method [9]. However, when the dimension of $A^T A$ is large, it is difficult to solve on a computer, so an iterative method is applied instead.

The PSO algorithm searches for the optimal solution by iteration, so a large matrix equation can be solved with it: the matrix equation (10) is regarded as the problem of finding the optimal particle $X = (x_1, x_2, \ldots, x_n)^T$ iteratively by the QPSO algorithm. In this setting, the fitness function is defined as the mean square error of the residual $(z - Ax)$:

$$f(x) = \frac{1}{mn} \sum_{i=1}^{m} \left( z_i - \sum_{j=1}^{n} a_{ij} x_j \right)^2 \qquad (11)$$
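To make equations (9) and (11) concrete, here is a minimal NumPy sketch of ours, using the paper's RBF kernel and the section 3.4 values $\gamma = 1000$ and $\sigma^2 = 0.125$; the function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def rbf_kernel(X, sigma2=0.125):
    """Gram matrix K[k, l] = exp(-||x_k - x_l||^2 / (2 * sigma2))."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma2))

def build_lssvm_system(X, y, gamma=1000.0, sigma2=0.125):
    """Assemble the (N+1) x (N+1) linear system of equation (9),
    whose unknown vector is [b, a_1, ..., a_N]."""
    N = len(y)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0                                  # row [0, 1^T]
    A[1:, 0] = 1.0                                  # column of ones
    A[1:, 1:] = rbf_kernel(X, sigma2) + np.eye(N) / gamma
    z = np.concatenate(([0.0], np.asarray(y, float)))
    return A, z

def fitness(x, A, z):
    """Equation (11): mean square error of the residual z - A x."""
    m, n = A.shape
    return np.sum((z - A @ x) ** 2) / (m * n)
```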
Example: To verify the validity of solving a matrix equation by the QPSO algorithm above, we use the improved PSO and QPSO algorithms to solve a typical ill-conditioned Hilbert linear system $Ax = z$, where

$$A = \begin{bmatrix} 1 & \frac{1}{2} & \cdots & \frac{1}{n} \\ \frac{1}{2} & \frac{1}{3} & \cdots & \frac{1}{n+1} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{1}{n} & \frac{1}{n+1} & \cdots & \frac{1}{2n-1} \end{bmatrix}$$

is the $n$-th order Hilbert matrix. We choose $n = 5$ and the solution $x = [1\ 1\ 1\ 1\ 1]^T$, so that $z = [2.2833\ 1.4500\ 1.0929\ 0.8845\ 0.7456]^T$. The maximum and minimum singular values of this system are $\sigma_{\max} = 1.5671$ and $\sigma_{\min} = 3.2879 \times 10^{-6}$ respectively, giving the condition number $k(A) = \sigma_{\max}/\sigma_{\min} = 4.7661 \times 10^5$. Obviously, it is a seriously ill-conditioned system.
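The test problem is easy to reproduce; the following snippet (an addition of ours, using SciPy's `hilbert` helper) recovers the quoted right-hand side and condition number:

```python
import numpy as np
from scipy.linalg import hilbert

n = 5
A = hilbert(n)                    # 5th-order Hilbert matrix, A[i, j] = 1 / (i + j + 1)
x_true = np.ones(n)               # chosen solution [1 1 1 1 1]
z = A @ x_true                    # ~ [2.2833, 1.4500, 1.0929, 0.8845, 0.7456]

s = np.linalg.svd(A, compute_uv=False)
print(s[0], s[-1], s[0] / s[-1])  # sigma_max ~ 1.5671, sigma_min ~ 3.2879e-6, k(A) ~ 4.7661e5
```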

The parameters of the algorithms are set as follows: the number of particles is 20, the dimension of the solution space is $m = 5$, the number of quantum bits is 5, the maximum iteration count is $T_{\max} = 100$, $r_1 = r_2 = \mathrm{rand}(0, 1)$, and $c_1 = c_2 = 2.05$, so that $\varphi = c_1 + c_2 = 4.1$ and the convergence constant is

$$\chi = \frac{2}{\left| 2 - \varphi - \sqrt{\varphi^2 - 4\varphi} \right|} = 0.729$$

The inertia weight $\omega$ is initialized to 0.9 and decreased linearly to 0.1 over the iterations according to equation (12), adjusting the search capability of the algorithm to achieve the optimization aim; the restraint factor is $a = 0.8$.

$$\omega = \omega_{\max} - \frac{\omega_{\max} - \omega_{\min}}{T_{\max}} T \qquad (12)$$

The iterative error curves are shown in figure 1. The error of the improved PSO algorithm is $2.7782 \times 10^{-3}$ after 79 iterations, while the QPSO algorithm reaches an error of $1.8332 \times 10^{-5}$ after only 37 iterations. The outputs $x$ of the two algorithms are shown in table 1.

Figure 1. The error curves of the two algorithms: (a) the improved PSO algorithm; (b) the QPSO algorithm.

Table 1. The comparison of the outputs of the two algorithms

  Expected output x     Improved PSO output    QPSO output
  1                     0.7924                 0.9919
  1                     2.1016                 1.0004
  1                     0.0707                 1.0852
  1                     0.5233                 0.9699
  1                     1.4191                 0.9344
  Mean square error     0.3177                 0.0224

From table 1 it can be concluded that the QPSO algorithm not only converges faster and with higher computational accuracy than the improved PSO algorithm, but also solves the matrix equation satisfactorily. Therefore, the improved LS-SVM based on the iterative optimization of the QPSO algorithm is presented: the optimal parameters $\{a_i\}_{i=1}^{N}$ and $b$ are solved by the QPSO algorithm, and the recognition model is obtained as follows:

$$y(x) = \sum_{k=1}^{N} a_k K(x, x_k) + b \qquad (13)$$

In this paper we choose the radial basis function $K(x, x_k) = \exp\left(-\|x - x_k\|^2 / 2\sigma^2\right)$ as the kernel function.
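Once the QPSO iteration has produced $\{a_i\}$ and $b$, equation (13) is a direct sum; a one-function NumPy sketch (ours, with an illustrative name) is:

```python
import numpy as np

def lssvm_predict(x, X_train, a, b, sigma2=0.125):
    """Recognition model of equation (13):
    y(x) = sum_k a_k * exp(-||x - x_k||^2 / (2 * sigma2)) + b."""
    d2 = ((np.asarray(X_train) - np.asarray(x)) ** 2).sum(axis=1)
    return float(a @ np.exp(-d2 / (2 * sigma2)) + b)
```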

3. Application example

The Changqing oil field is a hidden lithologic oil reservoir with low yield, low oil abundance and a large distribution area, so it is difficult to evaluate the well logging of its oil layers quantitatively with conventional recognition methods. Therefore, a key well segment (1200 m ~ 1290 m) is evaluated with the improved LS-SVM algorithm based on QPSO. The improved LS-SVM recognition model for the oil layer is shown in figure 2.

Figure 2. The LS-SVM recognition model for the oil layer based on the QPSO algorithm (training path: sample information selecting & preprocessing → attribute discretization & generalization → attribute reduction → LS-SVM training by QPSO; recognition path: unknown information → redundant attribute elimination & preprocessing → LS-SVM recognition → output).

3.1. Selection & preprocessing of samples

The sample information should be complete and general, should relate closely to the evaluation of the oil layer, and the selected items should not overlap. 98 samples are selected from the well segment (1200 m ~ 1290 m), including 38 sample points from the oil layer and 60 sample points from the non-oil layer. To avoid computational saturation, the sample data are normalized by equation (14):

$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \qquad (14)$$

where $x \in [x_{\min}, x_{\max}]$, and $x_{\min}$ and $x_{\max}$ stand for the minimum and maximum values respectively.
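Equation (14) is the usual min-max scaling; as a small sketch of ours:

```python
import numpy as np

def minmax_normalize(X):
    """Equation (14): rescale each attribute column to [0, 1]
    using that column's own minimum and maximum."""
    X = np.asarray(X, dtype=float)
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
```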
3.2. Attribute discretization & generalization

The decision attributes of a sample are {non-oil layer, oil layer}; let the decision attribute be $D = \{d\}$ with $d = \{d_i = i,\ i = 0, 1\}$, where 0 and 1 denote the non-oil layer and the oil layer respectively. The sample information has 12 condition attributes, i.e. AC, CNL, DEN, GR, RT, RI, RXO, SP, R2M, R025, BZSP and RA2. The continuous attributes are discretized by the golden section algorithm.

3.3. Attribute reduction

After preprocessing and discretization of the above 12 condition attributes, the attribute set {AC, CNL, DEN, GR, RT, RI, RXO, SP} is obtained as the minimum reduction by the attribute reduction algorithm based on attribute similarity [10]. The numerical ranges of the 8 sample attributes are listed in table 2.

Table 2. The numerical ranges of the sample attributes after reduction

  Attribute    AC     CNL    DEN    GR     RT    RI    RXO    SP
  Min          50     10     1      8      3     2     1      -30
  Max          150    50     3      100    90    90    330    -5

The 8 attributes in the well segment 1200 m ~ 1290 m are normalized according to equation (14); the normalized curves are shown in figure 3.

Figure 3. The normalized curves of the 8 attributes.

3.4. Iterative optimization

The scale of the particle swarm is set to 30, and the dimension of the solution space to 98, the number of parameters to be optimized; the maximum iteration count is 1000, the acceleration constants are $c_1 = c_2 = 2.05$ with convergence constant $\chi = 0.729$, the inertia weight is initialized to $\omega = 0.9$, and the restraint constant is $a = 0.8$. In the LS-SVM algorithm, the regularization parameter $\gamma = 1000$ and the width of the radial basis kernel $\sigma^2 = 0.125$ are selected by cross validation. The improved PSO and QPSO algorithms are applied to solve equation (9) of the LS-SVM algorithm, and their error curves are shown in figure 4: the iterative error of the improved PSO algorithm is 0.5867 after 499 iterations, while the QPSO algorithm reaches an error of 0.3783 after 121 iterations.

Figure 4. The error curves of the two algorithms: (a) the improved PSO algorithm; (b) the QPSO algorithm.
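The cross-validation selection of $(\gamma, \sigma^2)$ mentioned above can be sketched as a simple grid search. The code below is our hedged reconstruction, not the authors' procedure: it solves equation (9) directly with `np.linalg.solve` for each fold (the QPSO solver of section 2 could be substituted), assumes 0/1 labels thresholded at 0.5, and uses illustrative candidate grids.

```python
import numpy as np

def rbf(X, Z, sigma2):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma2))

def lssvm_fit(X, y, gamma, sigma2):
    """Solve equation (9) for [b, a_1..a_N] by a direct linear solve."""
    N = len(y)
    M = np.zeros((N + 1, N + 1))
    M[0, 1:] = M[1:, 0] = 1.0
    M[1:, 1:] = rbf(X, X, sigma2) + np.eye(N) / gamma
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                     # b, a

def cv_error(X, y, gamma, sigma2, k=5, seed=0):
    """k-fold cross-validation misclassification rate for one (gamma, sigma2)."""
    idx = np.random.default_rng(seed).permutation(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        b, a = lssvm_fit(X[tr], y[tr], gamma, sigma2)
        pred = rbf(X[fold], X[tr], sigma2) @ a + b
        errs.append(np.mean((pred > 0.5) != (y[fold] > 0.5)))
    return float(np.mean(errs))

# Illustrative grid search (grids are assumptions, not the paper's):
# best = min(((g, s2) for g in (10, 100, 1000) for s2 in (0.05, 0.125, 0.5)),
#            key=lambda p: cv_error(X, y, *p))
```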

4. Result analysis and comparison

The prediction model obtained after the iterative search for the optimum is applied to recognize the oil layer in the well depth range 1200 m ~ 1290 m, and the result is compared with those of the LM algorithm and the classical SVM algorithm. To evaluate the recognition capability, we define the following performance indexes:

Root mean square error: $\mathrm{RMSE} = \sqrt{\dfrac{1}{N} \sum_{i=1}^{N} e_i^2}$  (15)

Maximum positive error: $\mathrm{MAXPE} = \max_i \{e_i, 0\}$  (16)

Maximum negative error: $\mathrm{MAXNE} = \min_i \{e_i, 0\}$  (17)

where $e_i = \hat{y}_i - y_i$, and $\hat{y}_i$ and $y_i$ are the recognition output and the expected output of the LS-SVM respectively.
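These three indexes translate directly into NumPy (a sketch of ours; the function name is illustrative):

```python
import numpy as np

def recognition_metrics(y_hat, y):
    """Performance indexes (15)-(17) on the errors e_i = y_hat_i - y_i."""
    e = np.asarray(y_hat, float) - np.asarray(y, float)
    rmse = np.sqrt(np.mean(e ** 2))   # equation (15)
    maxpe = max(e.max(), 0.0)         # equation (16): largest positive error
    maxne = min(e.min(), 0.0)         # equation (17): largest negative error
    return rmse, maxpe, maxne
```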
The LM algorithm is adopted with a 5-4-1 network structure and a learning constant of 0.05; the Tansig function $f(x) = 2/(1 + e^{-2x}) - 1$ is selected as the transfer function of the hidden layer and the output layer. The kernel function of the classical SVM algorithm is the radial basis function. The comparison of the performance indexes is shown in table 3.
Table 3. The comparison of performance indexes

  Algorithm                    RMSE              MAXPE     MAXNE      Recognition rate
  LM algorithm                 0.2602            1         -0.9990    84.1%
  Classical SVM algorithm      0.2182            1         -1         94.6%
  Improved LS-SVM algorithm    2.8024 x 10^-4    0.9588    -1.0412    99.7%

The results indicate that the improved LS-SVM algorithm is clearly superior to the LM algorithm and the classical SVM algorithm on these performance indexes, and that both the improved LS-SVM and the classical SVM algorithm are superior to the LM algorithm in recognition precision and generalization ability. The improved LS-SVM algorithm based on QPSO not only effectively avoids disadvantages of the LM algorithm such as over-learning, the curse of dimensionality and local minima, but also overcomes the high computational complexity and slow computation of the SVM algorithm on large-scale samples.

The recognition result of the improved LS-SVM algorithm based on QPSO is shown in figure 5: the depth sections 1249.5 m ~ 1253.5 m, 1256 m ~ 1257.25 m and 1259 m ~ 1262.5 m are recognized as oil layer and the others as non-oil layer, which fully accords with the result of the oil trial. The application shows that the method presented in this paper is very effective.

Figure 5. The recognition result of the oil layer (the shaded intervals represent the oil layer; the other intervals represent the non-oil layer).

5. Conclusions

The LS-SVM algorithm based on the SRM rule is a good classification method. In order to avoid solving a matrix inverse, an improved LS-SVM based on the QPSO algorithm is presented; it not only avoids the matrix inversion but also always obtains the optimal solution, and both the training speed and the solution accuracy are improved. Actual application of the improved LS-SVM shows a notable effect in oil layer recognition, so the improved LS-SVM has good application prospects.

References

[1] Vladimir N. Vapnik. The Nature of Statistical Learning Theory. New York: Springer, 2000.
[2] Cortes C., Vapnik V. Support vector machine. Machine Learning, 1995, 20: 273-297.
[3] Nello Cristianini, John Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Beijing: Publishing House of Electronics Industry, 2005.
[4] Suykens J. A. K., Vandewalle J. Least squares support vector machine classifiers. Neural Processing Letters, 1999, 9(3): 293-300.
[5] Well Logging Compilation Group. Well Logging. Beijing: Petroleum Industry Press, 1998.
[6] Kennedy J., Eberhart R. C. Particle swarm optimization. Proc. IEEE Int. Conf. Neural Networks. Piscataway, NJ: IEEE Press, 1995: 1942-1948.
[7] Shi Y., Eberhart R. A modified particle swarm optimizer. In: IEEE World Congress on Computational Intelligence, 1998: 69-73.
[8] Michael A. Nielsen, Isaac L. Chuang. Quantum Computation and Quantum Information. Beijing: Higher Education Press, 2003.
[9] Qingyang Li, Nengchao Wang, Dayi Yi. Numerical Analysis. Beijing: Tsinghua University Press & Springer, 2001.
[10] Kewen Xia, Mingxiao Liu, Zhiwei Zhang, Yao Dong. An Approach to Attribute Reduction Based on Attribute Similarity. Journal of Hebei University of Technology, 2005, 34(4): 20-23.

