
SOLUTION CHARACTERISTICS OF QUADRATIC POWER FLOW PROBLEMS

Y. V. Makarov, University of Sydney, Australia
A. M. Kontorovich, Tel Aviv, Israel
D. J. Hill, University of Sydney, Australia
I. A. Hiskens, University of Newcastle, Australia

KEYWORDS

Load flow analysis, numerical techniques.

ABSTRACT

A number of facts about quadratic power flow problems $f(x) = y + g(x) = 0$, $x \in R^{n_x}$, $y \in R^{n_y}$, and applied Newton-Raphson methods with optimal multipliers are presented. The main results concern solution structure, singular points and Newton-Raphson solutions. Comments and discussion are given. Although we address the power flow problem here, there are many areas where the presented results may be used effectively; in fact, they are valid for any problem described by an algebraic system of quadratic equations.

INTRODUCTION

The Newton-Raphson (NR) method and its various modifications are the most popular numerical techniques used in load flow problems - see for instance [1]-[5].

There are two main forms used for the load flow equations: the polar and rectangular forms. Both have advantages. The polar form provides a significant reduction of computation; for instance, the method which uses P-Q decomposition of the load flow problem [3] is widely used in practice. The rectangular form of the load flow equations can be used effectively as well - see [6]-[12]. The most important feature of that form is that the power mismatch function can be expressed exactly using only the linear and second order terms of the Taylor series.

It is well known that the NR method has good quadratic convergence if the initial estimates are close to a solution point. However, if they are far from a solution, or the load flow problem is ill-conditioned, convergence of the NR method may be slow or may fail altogether. To overcome this problem, a number of numerical techniques have been proposed - see [4]-[8]. The general idea behind all of them is to apply corrections at each step of the method in such a way that the iterative process does not oscillate or diverge.

Due to its nonlinearity, the load flow problem may have a number of distinct solutions. Studies of the multiple solutions of the load flow problem play a role in determining proximity to voltage collapse [9, 13]. In order to obtain multiple load flow solutions, Tamura et al. used a set of quadratic load flow equations and the NR optimal multiplier method [11]. Iba et al. used Tamura's approach and some newly discovered convergence peculiarities of the NR method to find a pair of closest multiple solutions [12].

It is observed from the experimental results in [12] that if a point x comes close to a line connecting a couple of distinct solutions, the subsequent NR iterative process in rectangular form goes along this line. A further observation in [12] is that, in the vicinity of a singular point, the NR method with the optimal multiplier gives a trajectory which tends to the straight line connecting a pair of closely located but distinct solutions. These features are used effectively in [12] to locate multiple load flow solutions. The authors asked for a theoretical explanation of these experimentally discovered properties. The present paper proves a number of such properties and explains some features of the load flow problem, the NR method in rectangular coordinates, and the optimal multiplier technique. The main results establish the following properties:

• A variation of x along a straight line through a pair of distinct solutions of the problem f(x) = 0 results in variation of the mismatch vector f(x) along a straight line in $R^{n_y}$.
• There is a singular point in the middle of a straight line connecting a pair of distinct solutions x1, x2 in $R^{n_x}$ [9, 10, 14, and others].
• A vector co-linear to a straight line connecting a pair of distinct solutions in $R^{n_x}$ nullifies the Jacobian matrix $J(x) = \partial f / \partial x$ at the centre point of the line [9, 10, and others].
• If x belongs to a straight line connecting a pair of distinct solutions, the NR iterative process goes along that line.
• The maximal number of solutions on any straight line in $R^{n_x}$ is two.
• Along a straight line through two distinct solutions x1, x2, the problem can be reduced to a single scalar quadratic equation which locates these solutions.
• If a loading process $y(\lambda)$ in $R^{n_y}$ reaches a singular point det J(x) = 0, the corresponding trajectory of $x(\lambda)$ in $R^{n_x}$ tends to the right eigenvector nullifying J(x) at the singular point (except in some special cases).
• At any singular point, there are two merging solutions (except in some special cases).
• For any two points x1 ≠ x2 with det J(x1) ≠ 0, the number and location of singularities of the quadratic problem f(x) = 0 on the straight line through x1, x2 are defined by the real eigenvalues of the matrix $J^{-1}(x_1) J(x_2)$.
• The derivative of the function $\varphi(\mu; \Delta x) = \|f(x + \mu \Delta x)\|^2$, where $\Delta x$ is the NR correction vector, with respect to the optimal multiplier $\mu$ is equal to minus twice the function value [6].
• An optimal multiplier $\mu$, corresponding to a minimum of the function $\varphi(\mu; \Delta x)$, always exists in the range [0,2] [6, 12].
• The scalar product of the NR correction vector $\Delta x$ and the gradient of the function $\varphi(\mu; \Delta x)$ is equal to minus twice the value of this function [6].
• The function $\varphi(\mu; \Delta x)$ taken along a straight line through a pair of distinct solutions x1, x2 has a maximum in the middle of the line.
• Zero gradients of the function $\varphi(\mu; \Delta x)$ correspond either to solutions of the problem or to points where f(x) is an orthogonal vector to the singular surface det J(x) = 0.
• The gradient of the function $\varphi(\mu; \Delta x)$ at the centre of a straight line connecting a pair of distinct solutions is an orthogonal vector to this line.

Some of these results are already known - see the references above. They are included here to assemble them with the new results presented in the paper, and clearer explanations are given for them. Earlier versions of some of the new results presented here appear in [6, 15, 16].

1. OPTIMAL MULTIPLIER TECHNIQUE

A load flow problem consists of the solution of nonlinear equations of the general form

f(x) = y + g(x) = 0    (1)

where $y \in R^{n_y}$ is the vector of specified independent parameters, such as active and reactive powers of loads and generators or fixed voltages, and $x \in R^{n_x}$ is the state, consisting of nodal voltages. The vector function g(x) defines the sum of power flows or currents into each bus from the rest of the network. If the nodal voltages x are expressed in rectangular coordinates, then f(x) is a quadratic function of x.

The results given here are motivated by, but not limited to, the power flow problem. We refer to the general form (1) throughout.

The optimal multiplier technique [6, 7, 8] uses the following iterative step:

x_{i+1} = x_i - \mu_i J^{-1}(x_i) f(x_i)    (2)

where i is the iteration number and J(x) is the Jacobian matrix of f(x) evaluated at the point x. The algorithm is the usual NR one with the addition of the optimal multiplier $\mu_i$, which is chosen for fixed $x_i$ and $\Delta x_i$ as a solution of the problem

\min_{\mu_i} \varphi(\mu_i) = \min_{\mu_i} \|f(x_i + \mu_i \Delta x_i)\|^2    (3)

where $\Delta x_i = x_{i+1} - x_i$ is calculated from (2) with $\mu_i = 1$. The multiplier $\mu_i$ can be found as a solution of the equation

\varphi'_{\mu_i} = 0    (4)

where $\varphi'$ denotes $\partial\varphi / \partial\mu_i$. If rectangular coordinates are used and f(x) has quadratic nonlinearity, the function $\varphi'_{\mu_i}$ is a scalar cubic function of $\mu_i$, and solutions of (4) are easily found. Practical implementations of the optimal multiplier technique show that it provides reliable convergence in a great number of cases [6]-[8]. An optimal multiplier technique for the solution of ill-conditioned load flow problems was proposed in [6]-[8].
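To make the iteration (2)-(4) concrete, the following Python sketch applies the optimal multiplier step to an assumed toy two-variable quadratic mismatch function; the toy system, the tolerances and the root-selection rule are illustrative choices, not part of the original formulation. For quadratic f the second-order Taylor data are exact, so $\varphi(\mu)$ is a quartic in $\mu$ and (4) is a cubic with easily computed real roots.

    # A minimal sketch (not the authors' implementation) of the step (2)-(4),
    # assuming a toy quadratic mismatch f(x) = y + g(x) with two state variables.
    import numpy as np

    def f(x, y):
        # toy quadratic mismatch; g(x) collects the quadratic terms
        return y + np.array([x[0]**2 + x[1]**2, x[0]*x[1] - x[0]])

    def J(x):
        # Jacobian matrix of f with respect to x
        return np.array([[2*x[0], 2*x[1]],
                         [x[1] - 1.0, x[0]]])

    def optimal_multiplier_step(x, y):
        fx = f(x, y)
        dx = -np.linalg.solve(J(x), fx)          # NR correction, i.e. (2) with mu = 1
        # exact Taylor data: f(x + mu*dx) = a0 + mu*a1 + mu^2*a2 for quadratic f
        a0, a1 = fx, J(x) @ dx
        a2 = f(x + dx, y) - fx - a1              # equals 0.5*W(dx)
        # phi(mu) = ||a0 + mu*a1 + mu^2*a2||^2 is a quartic; (4) is its cubic derivative
        quartic = [a2 @ a2, 2*a1 @ a2, a1 @ a1 + 2*a0 @ a2, 2*a0 @ a1, a0 @ a0]
        cubic = [4*quartic[0], 3*quartic[1], 2*quartic[2], quartic[3]]
        real_roots = [r.real for r in np.roots(cubic) if abs(r.imag) < 1e-9] or [1.0]
        mu = min(real_roots, key=lambda m: np.polyval(quartic, m))
        return x + mu * dx, mu

    x, y = np.array([1.0, 1.0]), np.array([-2.0, 0.5])
    for _ in range(15):
        x, mu = optimal_multiplier_step(x, y)
    print(x, f(x, y))                            # f(x, y) should be close to zero

With the multiplier fixed at 1 the sketch reduces to the plain NR method; the multiplier only rescales the step along the NR direction.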
2. QUADRATIC POWER FLOW STUDIES

This section deals with some basic properties of quadratic power flow problems, their solutions, singularities, and the applied NR method.

Property 1. For quadratic mismatch functions f(x), a variation of x along a straight line through a pair of distinct solutions of the problem f(x) = 0 results in variation of the mismatch vector f(x) along a straight line in $R^{n_y}$.

Proof. Let x be a point on the straight line connecting two distinct solutions x1, x2:

x = x_1 + \mu (x_2 - x_1) = x_1 + \mu \Delta x_{21}    (5)
where $\mu$ is a parameter and $\Delta x_{21} = x_2 - x_1$. For quadratic mismatch functions,

f(x) = f(x_1) + \mu J(x_1) \Delta x_{21} + 0.5 \mu^2 W(\Delta x_{21})    (6)

f(x_2) = f(x_1) + J(x_1) \Delta x_{21} + 0.5 W(\Delta x_{21})    (7)

where $0.5 W(\Delta x_{21})$ is the quadratic term of the Taylor series expansion (7). At the points x1, x2 we have f(x1) = f(x2) = 0. So, from (7),

0.5 W(\Delta x_{21}) = -J(x_1) \Delta x_{21}    (8)

Using (8), equation (6) transforms to

f(x_1 + \mu \Delta x_{21}) = \mu (1 - \mu) J(x_1) \Delta x_{21} = \beta \Phi    (9)

where $\beta = \mu(1 - \mu)$ and $\Phi = J(x_1)\Delta x_{21}$. Thus the mismatch function $f(x_1 + \mu \Delta x_{21})$ varies along the straight line $\beta\Phi$ in $R^{n_y}$. □

Comments. This fact was mentioned in [9]. There is an interesting practical application: finding multiple solutions of a quadratic problem f(x) = 0. Note that (9) can be rewritten as

f(x_1 + \Delta x) + (\mu - 1) J(x_1) \Delta x = 0    (10)

where x1 is a known solution, $\Delta x$ is an unknown increment of the state variables, and $\mu$ is an unknown scalar parameter. Except for the trivial case $\Delta x = 0$, the last equation corresponds to a different solution

x_2 = x_1 + \mu^{-1} \Delta x, \quad |\mu| < 1, \ \mu \neq 0

The system (10) has n equations and n + 1 unknown variables, so it is necessary to add an additional equation to (10), for instance

r^t \Delta x - 1 = 0    (11)

where r is a nonzero vector. By varying r and substituting newly discovered solutions in place of x1 in (10), (11), it is possible to get all solutions of a quadratic problem.
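As an illustration of this search scheme, the hedged sketch below solves (10)-(11) for an assumed toy quadratic system with the known solution x1 = (1, 1); the system, the vector r and the starting guess are assumptions made only for this example.

    # A sketch of the multiple-solution search (10)-(11); the toy system, the
    # choice of r and the initial guess are illustrative assumptions.
    import numpy as np
    from scipy.optimize import fsolve

    def f(x):
        return np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])

    def J(x):
        return np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])

    x1 = np.array([1.0, 1.0])          # a known solution, f(x1) = 0
    r  = np.array([1.0, 0.0])          # nonzero vector for the normalising equation (11)

    def augmented(z):
        dx, mu = z[:2], z[2]
        eq10 = f(x1 + dx) + (mu - 1.0) * (J(x1) @ dx)   # equation (10)
        eq11 = r @ dx - 1.0                             # equation (11)
        return np.append(eq10, eq11)

    z = fsolve(augmented, np.array([0.5, 0.5, 0.5]))    # n + 1 unknowns: dx and mu
    dx, mu = z[:2], z[2]
    print(x1 + dx / mu, mu)            # second solution x2 = x1 + dx/mu; (-1, -1) expected here

Convergence depends on the choice of r and on the initial guess; repeating the search with different r vectors and restarting from each newly found solution implements the enumeration described above.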
Property 2. For a quadratic problem f(x) = 0, there is a point of singularity at the centre of a straight line connecting a pair of distinct solutions in $R^{n_x}$, and a vector co-linear to this line nullifies the Jacobian matrix evaluated at the centre point.

Proof. Let x1, x2 be two distinct solutions of a quadratic problem f(x) = 0. A line connecting these solutions can be defined as in (5), and due to the quadratic nonlinearity,

f(x_1) = f(x_2) - J(x_2) \Delta x_{21} + 0.5 W(-\Delta x_{21})    (12)

f(x_2) = f(x_1) + J(x_1) \Delta x_{21} + 0.5 W(\Delta x_{21})    (13)

It is clear that $W(-\Delta x) = W(\Delta x)$. From (12) and (13),

[J(x_1) + J(x_2)] \Delta x_{21} = 0    (14)

For a quadratic function f(x), the Jacobian matrix contains elements which are linear functions of x, so it can be represented as

J(x) = \sum_{i=1}^{n} A_i x_i + J(0)    (15)

where $A_i$ and J(0) are (n × n) constant matrices of Jacobian coefficients and $x_i$ is the i-th component of x. Using (15), the equality (14) can be rewritten as

2 J(x_0) \Delta x_{21} = 0    (16)

where $x_0 = (x_1 + x_2)/2$. As $\Delta x_{21} \neq 0$, the vector $\Delta x_{21}$ is the right eigenvector corresponding to a zero eigenvalue of the Jacobian matrix. Moreover, for all $\Delta x \neq 0$ which are co-linear with $\Delta x_{21}$, we get $J(x_0)\Delta x = 0$. □

Comments. Both the first and second parts were proved in [9, 6, 14, and others]. The above proof seems simpler and more compact.

An interesting conclusion follows from Properties 1 and 2. Variations of x along a straight line connecting a couple of distinct solutions are actually motions of x along the right eigenvector nullifying J(x) at the middle of the line.
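A quick numerical check of Property 2 on an assumed toy quadratic system (the same two-variable example used in the sketches above) is shown below; the two solutions are known in closed form for this example.

    # Numerical check of Property 2 for an assumed toy system with known solutions
    # x1 = (1, 1) and x2 = (-1, -1).
    import numpy as np

    def J(x):
        # Jacobian of f(x) = (x1^2 + x2^2 - 2, x1 - x2)
        return np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])

    x1, x2 = np.array([1.0, 1.0]), np.array([-1.0, -1.0])
    x0 = 0.5 * (x1 + x2)                 # centre of the line, as in (16)
    dx21 = x2 - x1

    print(np.linalg.det(J(x0)))          # ~0: the centre point is singular
    print(J(x0) @ dx21)                  # ~[0, 0]: x2 - x1 is a right null vector of J(x0)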
Property 3. If any point x is on a straight line connecting two distinct solutions of the quadratic problem f(x) = 0, the NR iterative process with initial point x follows this line.

Proof. If a point $x_i$ is on the line connecting two distinct solutions, it can be described in the form (5). The middle point of $\Delta x_{21}$ is $x_0 = 0.5(x_1 + x_2)$. The quadratic mismatch function can be expressed as

f(x_i) = f(x_0) + J(x_0)(x_i - x_0) + 0.5 W(x_i - x_0)    (17)

To express the last term of (17) in terms of Jacobian matrices, we write

f(x) = f(0) + J(0) x + 0.5 W(x)    (18)

f(0) = f(x) - J(x) x + 0.5 W(x)    (19)

By summing the last two equalities, $W(x) = [J(x) - J(0)] x$, and so
0.5 W(x_i - x_0) = 0.5 [J(x_i - x_0) - J(0)] (x_i - x_0)

On the other hand, taking into account the quadratic nonlinearity of f(x) and (15),

J(x_i - x_0) = J(x_i) - J(x_0) + J(0)

Therefore, in (17), we have

0.5 W(x_i - x_0) = 0.5 [J(x_i) - J(x_0)] (x_i - x_0)

Noting (16), $J(x_0)(x_i - x_0) = 0$. From (17),

f(x_i) = f(x_0) + 0.5 J(x_i)(x_i - x_0)    (20)

For the NR method with initial point $x_i$, we have the following expression for the correction vector $\Delta x_i$:

f(x_i) + J(x_i) \Delta x_i = 0    (21)

From (9),

f(x_i) = \mu (1 - \mu) J(x_1) \Delta x_{21}    (22)

and so

J(x_1) \Delta x_{21} = \mu^{-1} (1 - \mu)^{-1} f(x_i), \quad \mu \neq 0, 1    (23)

At the point $\mu = 0.5$ we have $x_i = x_0$, and it follows from (22) and (23) that

f(x_0) = 0.25 \mu^{-1} (1 - \mu)^{-1} f(x_i)    (24)

By substitution of (24) into (20), it follows that

f(x_i) = 0.25 \mu^{-1} (1 - \mu)^{-1} f(x_i) + 0.5 J(x_i)(x_i - x_0)    (25)

Multiplying (25) by $J^{-1}(x_i)$ and taking into account (21),

\Delta x_i = -2 \mu (1 - \mu) [4 \mu (1 - \mu) - 1]^{-1} (x_i - x_0)    (26)

Equation (26) shows that the NR correction vector $\Delta x_i$ belongs to the straight line directed by the vector $(x_i - x_0)$, i.e. the iterative process goes along the line connecting x1, x2. □

Comments. This fact was discovered experimentally in [12], where again the authors asked for a theoretical proof of the phenomenon.
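The following hedged sketch illustrates Property 3 numerically on the same assumed toy system: plain NR iterates started from a point on the line through the two solutions remain on that line.

    # Illustration of Property 3 on the assumed toy system; the starting point is
    # chosen on the line (5) through the two known solutions.
    import numpy as np

    def f(x):
        return np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])

    def J(x):
        return np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])

    x1, x2 = np.array([1.0, 1.0]), np.array([-1.0, -1.0])
    x = x1 + 0.35 * (x2 - x1)                 # a point on the line, cf. (5)

    for _ in range(6):
        x = x - np.linalg.solve(J(x), f(x))   # plain NR step
        t = (x - x1) @ (x2 - x1) / ((x2 - x1) @ (x2 - x1))
        print(np.linalg.norm(x - (x1 + t * (x2 - x1))))   # distance to the line stays ~0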
Property 4. The maximum number of solutions of a quadratic equation f(x) = 0 on each straight line in the state space $R^{n_x}$ is two.

Proof. Let us take the function

\varphi(\mu; \Delta x) = f^t(x + \mu \Delta x) f(x + \mu \Delta x)    (27)

For a quadratic mismatch function f(x),

f(x + \mu \Delta x) = f(x) + \mu J(x) \Delta x + 0.5 \mu^2 W(\Delta x)

So,

\varphi(\mu; \Delta x) = \|f(x) + \mu J(x)\Delta x + 0.5 \mu^2 W(\Delta x)\|^2
= \|f(x)\|^2 + \mu^2 \|J(x)\Delta x\|^2 + \|0.5 \mu^2 W(\Delta x)\|^2 + 2\mu f^t(x) J(x)\Delta x + \mu^3 W^t(\Delta x) J(x)\Delta x + \mu^2 f^t(x) W(\Delta x)

The function $\varphi(\mu; \Delta x)$ equals zero if and only if $f(x + \mu\Delta x) = 0$. At a solution point $x = x^*$, f(x) = 0, and the function (27) becomes

\varphi(\mu; \Delta x) = 0.25 \mu^4 \|W(\Delta x)\|^2 + \mu^3 W^t(\Delta x) J(x)\Delta x + \mu^2 \|J(x)\Delta x\|^2 = (a\mu^2 + b\mu + c)\mu^2

where a, b, c are the obvious functions of $\Delta x$. For any fixed direction $\Delta x \neq 0$, $\varphi(\mu; \Delta x)$ equals zero in the two following cases:
(a) $\mu = 0$;
(b) $a\mu^2 + b\mu + c = 0$.
The first case gives the original solution point $x = x^*$. The second case corresponds to solutions $x \neq x^*$ on the straight line directed by $\Delta x$. However, as is clear from (27), the function (27) cannot be negative. Thus $a\mu^2 + b\mu + c \geq 0$, so any real root of (b) is a repeated root, and it is possible to have only one additional solution besides $x^*$, not two or more. So, on the line we get the root $x = x^*$, and at most one additional root corresponding to condition (b). □

Property 5. For a straight line connecting two solutions in $R^{n_x}$, the system of quadratic equations can be reduced to a single scalar quadratic equation which locates these solutions.

Proof. Let x1, x2 be unknown distinct solutions of a quadratic problem f(x) = 0. Suppose we have a point x and a direction $\Delta x$ which define a line connecting the pair of solutions. The mismatch function calculated along this line is

F(\gamma) = f(x + \gamma \Delta x)    (28)

where $\gamma$ is a scalar parameter. Let us take any fixed value $\gamma = \gamma^*$ and define a constant

\Phi = F(\gamma^*) \neq 0    (29)

Having (28) and (29), consider the equation

\Phi^t F(\gamma) = 0    (30)

It follows from Property 1 that $\Phi$ and $F(\gamma)$ are co-linear vectors, and (30) is true only if $F(\gamma) = 0$. Using (29) and the Taylor series expansion

F(\gamma) = f(x) + \gamma J(x) \Delta x + 0.5 \gamma^2 W(\Delta x)
we get the scalar quadratic equation

a\gamma^2 + b\gamma + c = 0    (31)

where $a = 0.5\Phi^t W(\Delta x)$, $b = \Phi^t J(x)\Delta x$, $c = \Phi^t f(x)$. Equation (31) must have a pair of distinct real roots $\gamma_1$, $\gamma_2$ corresponding to x1, x2, and we can define these solutions as $x_1 = x + \gamma_1 \Delta x$, $x_2 = x + \gamma_2 \Delta x$. □
jectories x1( ), x2( ) tend to the right eigen- trajectory y ( ) in (32) can be represented as the
vector r corresponding to a zero eigenvalue of convergence trajectory of the NR method. If it
the Jacobian matrix J (x0). comes close to the singular margin, it tends to
Proof. Let a loading process with variable
the right eigenvector r which nulli es the Jaco-
and bian matrix in the middle point between closely
y( ) + g (x) = 0 (32) located solutions. Property 3 says that further
iterative process goes along the line directed by
end at a singular point x0 corresponding to = the vector (x1; x2), and that line is co-linear to r.
0. At the singular point, dy = y 0 ( 0)d = So, Properties 3 and 6 explain these phenomena.
Y0 d : The implicit function theorem gives Property 7. For any two points x1 6= x2 and
detJ (x1 ) 6= 0, the number and location of sin-
J (x0 )dx + dy = J (x0)dx + Y0d = 0 (33) gularities of the quadratic problem f (x) = 0 on
By multiplying (33) by st , where s is the left the straight line through x1 ; x2 is de ned by real
eigenvector of J (x0 ) corresponding to a zero eigenvalues of the matrix J ;1 (x1)J (x2 ).
Proof. Let us de ne the line through x1; x2 as
eigenvalue, we get (5). Using (15), it is easy to show that
stJ (x0 )dx + stY0d = st Y0d = 0 J [x1 + (x2 ; x1 )] = (1 ; )J (x1 )+ J (x2 ) (35)
For the general case of loading, st Y0 = 6 0; and, As x1 is a nonsingular point, for  6= 0, the
therefore, d = 0. This means that the loading
parameter reaches its extremal value 0 at the h
expression (35) can be written as
J (x) = J (x1 ) J ;1 (x1)J (x2 ) ; ( ; 1);1I
i
singular point x0 . Alternatively, when st Y0 = 0,
the loading trajectory y ( ) tends to the tangent where I is the identity matrix. So, all singular
hyper-plane to the singular margin (detJ (x) = points on the line (5) can be computed as real
0) of the problem f (x) = 0 at the point x0 . eigenvalues of the matrix J ;1 (x1)J (x2 ). 2
It follows from the fact that s is an orthogonal
vector with respect to the singular margin in Rny. 3. OPTIMAL MULTIPLIERS
Using (33), we have This section is devoted to some interesting
properties of so-called optimal multiplier func-
J (x0)dx = 0 (34) tion (; x) = kf (x + x)k2.
So, the increment dx has the same direction as Property 8. The directional derivative of
the right eigenvector r of the Jacobian matrix (; x) = kf (x + x)k2, along the NR cor-
corresponding to its zero eigenvalue, and the last rection vector x, with respect to the optimal
part of Property 6 has been proved. multiplier  at  = 0, equals twice the value of
On the other hand, having (34), the function evaluated at the same point.
Proof. Let us consider the function
;Y0 = J (x0)x + 12 W (x) ;!
!0 1
2 W (x) (; x) = f t (x + x)f (x + x)
where x is a given vector, $\Delta x$ is the NR correction vector evaluated at x, and $\mu$ is the optimal multiplier. By differentiating $\varphi(\mu; \Delta x)$ with respect to $\mu$,

\varphi'(\mu; \Delta x) = 2 \Delta x^t J^t(x + \mu\Delta x) f(x + \mu\Delta x)

and

\varphi'(0; \Delta x) = 2 \Delta x^t J^t(x) f(x) = -2 \varphi(0; \Delta x)    (36)

□

Comments. This fact was proved in [6]. It was also mentioned in [6] that (36) can be used as a measure of proximity of a correction vector $\Delta x$ to the NR correction vector. This estimate can be utilized in numerical algorithms which use simplified Jacobian matrices.

Property 9. The optimal multiplier always exists in the range [0,2].

Proof. It is clear from Property 8 that

\varphi'(0; \Delta x) = -2 f^t(x) f(x) = -2 \varphi(0) \leq 0    (37)

For an NR correction vector $\Delta x$,

\varphi'(2; \Delta x) = 2 \Delta x^t J^t(x + 2\Delta x) f(x + 2\Delta x)
= 2 \Delta x^t [J(x) + 2J(\Delta x) - 2J(0)]^t [f(x) + 2J(x)\Delta x + 2W(\Delta x)]
= 2 \|{-f(x)} + 2J(\Delta x)\Delta x - 2J(0)\Delta x\|^2 \geq 0    (38)

In (38) we take into account that for the quadratic case $J(x + 2\Delta x) = J(x) + J(2\Delta x) - J(0) = J(x) + 2J(\Delta x) - 2J(0)$, and $W(\Delta x) = [J(\Delta x) - J(0)]\Delta x$. By comparing (37) and (38), we get that $\varphi(\mu; \Delta x)$ has at least one extremum in the range $\mu \in [0,2]$. As $\varphi'(0; \Delta x) \leq 0$, the function $\varphi(\mu; \Delta x)$ has a minimum in the range $\mu \in [0,2]$. □

Comments. Property 9 was proved in [6]. It explains the observation about a characteristic of the optimal multiplier given in [12].
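A small numerical check of this bracketing argument on the assumed toy system is given below: $\varphi'(\mu)$ is non-positive at $\mu = 0$ and non-negative at $\mu = 2$, and the minimising real root of the cubic lies between them for this example.

    # Numerical check of Property 9 on the assumed toy system: the derivative of
    # phi is <= 0 at mu = 0 and >= 0 at mu = 2, so a minimum lies in [0, 2].
    import numpy as np

    def f(x):
        return np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])

    def J(x):
        return np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])

    x  = np.array([3.0, 0.5])                       # an arbitrary (assumed) starting point
    dx = -np.linalg.solve(J(x), f(x))               # NR correction vector

    a0, a1 = f(x), J(x) @ dx
    a2 = f(x + dx) - a0 - a1                        # 0.5*W(dx), exact for quadratic f
    quartic = [a2 @ a2, 2*a1 @ a2, a1 @ a1 + 2*a0 @ a2, 2*a0 @ a1, a0 @ a0]
    dphi = np.polyder(np.poly1d(quartic))           # the cubic phi'(mu), cf. (4) and (43)

    print(dphi(0.0) <= 0.0, dphi(2.0) >= 0.0)       # True True, matching (37) and (38)
    mu_opt = min((r.real for r in dphi.roots if abs(r.imag) < 1e-9),
                 key=lambda m: np.polyval(quartic, m))
    print(mu_opt, 0.0 <= mu_opt <= 2.0)             # the minimiser lies in [0, 2] here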
Property 10. The scalar product of the NR correction vector $\Delta x$ and the gradient of the function $\varphi(0; x) = \|f(x)\|^2$ is equal to minus twice the value of the function.

Proof. The gradient of the function $\varphi(0; x) = \|f(x)\|^2 = f^t(x) f(x)$ is

grad[\varphi(0; x)] = 2 f^t(x) J(x)    (39)

Multiplying (39) by the NR correction vector $\Delta x$ gives

grad[\varphi(0; x)] \Delta x = 2 f^t(x) J(x) \Delta x = -2 \varphi(0; x)    (40)

□

Comments. Property 10 gives a relationship between the NR and gradient methods. To illustrate this, let us consider the following cases.

1. A point where det J(x) = 0. It is clear that in this case the NR correction vector $\|\Delta x\| \to \infty$ while $\varphi(x)$ still has a finite value. This means that $\Delta x$ becomes an orthogonal vector with respect to $-grad[\varphi(0; x)]$. So we have $\cos\theta = 0$, where $\theta$ is the angle between the vectors. The vector $\Delta x$ is a tangent vector to the hypersurface $\varphi(0; x) = const$.

2. A point where $-grad[\varphi(0; x)]$ has the same direction as $\Delta x$. In this case $\cos\theta = 1$.

3. Intermediate points. As the scalar product of $\Delta x$ with $-grad[\varphi(0; x)]$ is always positive (see (40)), we have $0 < \cos\theta < 1$. This means that $\Delta x$ at the intermediate points is directed inside the region bounded by the hypersurface $\varphi(0; x) = const$, and it decreases $\varphi(0; x)$. A question remains, however, about the length of the vector $\Delta x$. In the vicinity of the singular margin it can be so large that the function actually increases after the corresponding NR iteration. Nevertheless, it is possible to restrict the step length and so provide a decrease of the function at each NR iteration. This is the idea of the optimal multiplier technique.

Property 11. The function $\varphi(\mu; \Delta x) = \|f(x + \mu \Delta x)\|^2$ calculated along a straight line connecting a pair of distinct solutions of a quadratic problem f(x) = 0 has a maximum at the middle point of this line.

The gradient of $\varphi$ taken at the middle of a straight line connecting a couple of distinct solutions is an orthogonal vector to this line.

Proof. For the line of x connecting a pair of distinct solutions x1, x2, the function is $\varphi(\mu; \Delta x) = \|f[x_1 + \mu(x_2 - x_1)]\|^2$. It has extremal values if

\varphi'_\mu(\mu) = \varphi'_x x'_\mu = 2 f^t(x) J(x) (x_2 - x_1) = 0    (41)

Property 2 says that at the middle point $x_0 = 0.5(x_1 + x_2)$ of the line (x1, x2), the following equality holds: $J(x_0)(x_2 - x_1) = 0$. Thus,

\varphi'(0.5) = 0    (42)

On the other hand, the function $\varphi'(\mu)$ can be expressed as a cubic function of $\mu$:

\varphi'(\mu) = 4 a_4 \mu^3 + 3 a_3 \mu^2 + 2 a_2 \mu + a_1    (43)

Two solutions of (43), namely $\mu = 0$ and $\mu = 1$, correspond to global minima of $\varphi(\mu; \Delta x)$ at zero value:

\varphi'(0; \Delta x) = 0, \quad \varphi(0; \Delta x) = 0
\varphi'(1; \Delta x) = 0, \quad \varphi(1; \Delta x) = 0
It follows from Property 4 that $\varphi(\mu; \Delta x)$ has only two zero minima on the line. The third zero point (42) therefore corresponds to a maximum of $\varphi(\mu; \Delta x)$. The first part of Property 11 has been proved.

The second part follows directly from (41). Actually, (41) can be rewritten as

grad[\varphi(\mu; \Delta x)] (x_2 - x_1) = 0

□

Comments. It should be pointed out that Property 11 does not mean that $\varphi(\mu; \Delta x)$ has an absolute maximum at $x = x_0$. To have an absolute maximum, the function $\varphi'_x = 2 f^t(x) J(x)$ has to be zero, and the quadratic form $\Delta x^t \varphi''_{xx} \Delta x$ has to be negative definite. So Property 11 deals with the maximum of $\varphi(\mu; \Delta x)$ defined along the line. In some cases we can have an absolute maximum or minimum of the function $\varphi(\mu; \Delta x)$ at some points of the singular margin; these can be described with the help of Property 11.
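A quick numerical look at Property 11 on the assumed toy system: along the line between the two known solutions, $\varphi$ vanishes at $\mu = 0$ and $\mu = 1$ and attains its largest value on the segment at the centre $\mu = 0.5$.

    # Illustration of Property 11 for the assumed toy system with solutions
    # x1 = (1, 1) and x2 = (-1, -1).
    import numpy as np

    def f(x):
        return np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])

    x1, x2 = np.array([1.0, 1.0]), np.array([-1.0, -1.0])
    phi = lambda mu: float(np.sum(f(x1 + mu * (x2 - x1))**2))

    print([round(phi(m), 3) for m in np.linspace(0.0, 1.0, 11)])
    # zero at mu = 0 and mu = 1, maximum value 4.0 at the centre mu = 0.5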
Property 12. A zero gradient of the function $\varphi(0; x)$ corresponds to a solution of the problem f(x) = 0, or to points where the function f(x) is an orthogonal vector with respect to the singular margin of the load flow problem.

Proof. The function $\varphi$ can be represented as

\varphi(0; x) = \|f(x)\|^2 = \|f(x) - f(x^*_i)\|^2    (44)

where $x^*_i$, $i = 1, \ldots, m$, is a solution of the problem f(x) = 0 and m is the number of distinct solutions. It is clear that (44) is the square of the distance from a solution point $f(x^*_i)$ to a point f(x) in $R^{n_y}$. The distance function $\varphi(0; x)$ has extrema where its gradient equals zero:

grad[\varphi(0; x)] = \varphi'_x = 2 f^t(x) J(x) = 0    (45)

There are two cases in which (45) is satisfied.
(a) The function f(x) = 0, i.e. $x = x^*_i$, where $x^*_i$ is a solution point.
(b) The Jacobian matrix J(x) is singular, and f(x) is a left eigenvector corresponding to a zero eigenvalue of the Jacobian matrix.
It is clear that in case (b), due to (44) and (45), the vector f(x) corresponds to a local minimum or maximum of the distance from a solution point to the singular margin in $R^{n_y}$. □

Comments. Equation (45) can play an important role in the determination of a shortest distance from a current load flow point to the saddle node bifurcation boundary [19]-[24].
ACKNOWLEDGMENTS

This work was sponsored in part by an Australian Electricity Supply Industry Research Board grant "Voltage Collapse Analysis and Control". The authors wish to thank Professors N. Barabanov (Russia), I. Dobson (USA), A. Volberg (USA, France) and V. Yakubovich (Russia) for their interest, examination and consultations regarding some aspects of the work. The authors express their gratitude to Mr. Bhudjanga Chakrabarti, who did a great deal of work on the bibliography and raised very valuable questions, which actually initiated the present work.

References

[1] L.A. Krumm, "Applications of the Newton-Raphson method to a stationary load flow computation in bulk power systems", Izvestia AN SSSR: Energetika i Transport, No. 1, 1966, pp. 3-12 (in Russian).

[2] W.F. Tinney and C.E. Hart, "Power flow solution by Newton's method", IEEE Winter Power Meeting, N.Y., No. 31, 1967, pp. 17-25.

[3] B. Stott and O. Alsac, "Fast decoupled load flow", IEEE Trans. on Power App. and Syst., Vol. PAS-93, No. 3, May-June 1974, pp. 859-869.

[4] V.A. Matveev, "A method of numerical solution of sets of nonlinear equations", Zhurnal Vychislitelnoi Matematiki i Matematicheskoi Fiziki, Vol. 4, No. 6, 1964, pp. 983-994 (in Russian).

[5] A.M. Kontorovich, Y.V. Makarov and A.A. Tarakanov, "Improved methods for load flow analysis", Acta Polytechnica, Prace CVUT v Praze, Vol. 5/III, 1983, pp. 121-125.

[6] A.M. Kontorovich, "A method of load flow and steady-state stability analysis for complicated power systems with respect to frequency variations", PhD Thesis, Leningrad Polytechnic Institute, Leningrad, 1979 (in Russian).

[7] S. Iwamoto and Y. Tamura, "A load flow calculation method for ill-conditioned power systems", Proc. of the IEEE PES Summer Meeting, Vancouver, British Columbia, Canada, July 1979.
[8] S. Iwamoto and Y. Tamura, "A load flow calculation method for ill-conditioned power systems", IEEE Trans. on Power App. and Syst., Vol. PAS-100, No. 4, April 1981, pp. 1736-1743.

[9] Y. Tamura, K. Sakamoto and Y. Tayama, "Voltage instability proximity index (VIPI) based on multiple load flow solutions in ill-conditioned power systems", Proc. of the 27th Conference on Decision and Control, Austin, Texas, December 1988.

[10] Y. Tamura, Y. Nakanishi and S. Iwamoto, "On the multiple solution structure, singular point and existence condition of the multiple load flow solutions", Proc. of the IEEE PES Winter Meeting, New York, February 1980.

[11] Y. Tamura, K. Iba and S. Iwamoto, "A method for finding multiple load-flow solutions for general power systems", Proc. of the IEEE PES Winter Meeting, N.Y., February 1980.

[12] K. Iba, H. Suzuki, M. Egawa and T. Watanabe, "A method for finding a pair of multiple load flow solutions in bulk power systems", Proc. of the IEEE Power Industry Computer Application Conference, Seattle, Washington, May 1989.

[13] Y. Tamura, H. Mori and S. Iwamoto, "Relationship between voltage instability and multiple load flow solutions in electric power systems", IEEE Trans. on Power App. and Syst., Vol. PAS-102, No. 5, May 1983.

[14] V.I. Idelchik and A.I. Lazebnik, "An analytical research of solution existence and uniqueness of electrical power system load flow equations", Izvestia AN SSSR: Energetika i Transport, No. 2, 1972, pp. 18-24 (in Russian).

[15] Y.V. Makarov and I.A. Hiskens, "Solution characteristics of the quadratic power flow problem", Technical Report No. EE9377 (revised), Department of Electrical and Computer Engineering, University of Newcastle, Australia, May 1994.

[16] Y.V. Makarov, I.A. Hiskens and D.J. Hill, "Study of multisolution quadratic load flow problems and applied Newton-Raphson like methods", Proc. IEEE International Symposium on Circuits and Systems, Seattle, Washington, April-May 1995, paper no. 541.

[17] S.N. Chow and J. Hale, "Methods of bifurcation theory", N.Y.: Springer-Verlag, 1982 (Section 6.2).

[18] I. Dobson and L. Lu, "Voltage collapse precipitated by the immediate change in stability when generator reactive power limits are encountered", IEEE Trans. on Circuits and Systems - Fundamental Theory and Applications, Vol. 39, No. 9, September 1992, pp. 762-766.

[19] Y.V. Makarov and I.A. Hiskens, "A continuation method approach to finding the closest saddle node bifurcation point", Proc. NSF/ECC Workshop on Bulk Power System Voltage Phenomena III, Davos, Switzerland, published by ECC Inc., Fairfax, Virginia, August 1994.

[20] A.M. Kontorovich, A.V. Krukov, Y.V. Makarov, et al., "Computer methods of steady-state stability indices computations for bulk power systems", Irkutsk: Publishing House of the Irkutsk University, 1988 (in Russian).

[21] C.A. Cañizares and F.L. Alvarado, "Computational experience with the point of collapse method on very large AC/DC systems", Proc.: Bulk Power System Voltage Phenomena - Voltage Stability and Security, ECC/NSF Workshop, Deep Creek Lake, MD, published by ECC Inc., Fairfax, Virginia, August 1991.

[22] T. Van Cutsem, "A method to compute reactive power margins with respect to voltage collapse", IEEE Trans. on Power Systems, Vol. 6, No. 1, February 1991, pp. 145-156.

[23] J. Jarjis and F.D. Galiana, "Quantitative analysis of steady state stability in power networks", IEEE Trans. on Power App. and Syst., Vol. PAS-100, No. 1, January 1981, pp. 318-326.

[24] I. Dobson and L. Lu, "New methods for computing a closest saddle node bifurcation and worst case load power margin for voltage collapse", IEEE Trans. on Power Systems, Vol. 8, No. 3, August 1993, pp. 905-913.
