2. A problem in optimal control of Navier-Stokes flows. Consider a steady, uniform, external, viscous, incompressible flow of a Newtonian fluid around a body with bounding surface Γ. We distinguish two possibly disjoint regions of the boundary: Γ_0, on which the velocity is specified to be zero (i.e. the no-slip condition is specified), and Γ_c, on which velocity controls are applied. Thus Γ = Γ_0 ∪ Γ_c. To approximate the far-field velocity condition, we truncate the domain of the problem with an inflow boundary Γ_∞, on which the freestream velocity u_∞ is enforced, and an outflow boundary Γ_2, on which a zero-traction condition is maintained. The flow domain is denoted as Ω. Let us represent the velocity vector, pressure, stress tensor, density, and viscosity of the fluid by, respectively, u, p, σ, ρ, and μ.

The optimal control problem is to find the velocity control function u_c acting on Γ_c, and the resulting fluid field variables, that minimize the rate of energy dissipation, subject to the Navier-Stokes equations. Mathematically,
the problem is to minimize

(2.1)        (μ/2) ∫_Ω (∇u + ∇uᵀ) : (∇u + ∇uᵀ) dΩ,
subject to

(2.2)        ρ(u·∇)u − ∇·σ = 0   in Ω,
(2.3)        σ = −p I + μ(∇u + ∇uᵀ)   in Ω,
(2.4)        ∇·u = 0   in Ω,
(2.5)        u = 0   on Γ_0,
(2.6)        u = u_c   on Γ_c,
(2.7)        u = u_∞   on Γ_∞,
(2.8)        σ·n = 0   on Γ_2,
where (∇u)_ij = ∂u_j/∂x_i, and the symbol ":" represents the scalar product of two tensors, so that

        ∇u : ∇v = Σ_{i,j} (∂u_i/∂x_j)(∂v_i/∂x_j).
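As a quick numerical illustration of this tensor scalar product (a sketch with hypothetical 2×2 arrays, not tied to the flow problem):

```python
import numpy as np

# Two hypothetical 2x2 velocity-gradient tensors, with (grad u)_ij = du_j/dx_i.
grad_u = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
grad_v = np.array([[5.0, 6.0],
                   [7.0, 8.0]])

# grad u : grad v = sum over i,j of (du_i/dx_j)(dv_i/dx_j), i.e. the
# componentwise (Frobenius) inner product of the two tensors.
scalar_product = np.einsum('ij,ij->', grad_u, grad_v)
print(scalar_product)  # 1*5 + 2*6 + 3*7 + 4*8 = 70.0
```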
Here, (2.1) is the dissipation function, (2.2) the conservation of linear momentum equation, (2.3) the constitutive law, (2.4) the conservation of mass (actually volume) equation, and (2.5)-(2.8) are boundary conditions. We eliminate the constitutive law and stress by substituting (2.3) into (2.2), and we further reduce the size of the problem by employing a penalty method: let us relax (2.4) by replacing it with

(2.9)        ∇·u = −εp   in Ω.

Clearly, as ε → 0 we recover the original equation; in fact, the error in the derivative of u is of order ε [11]. By introducing the pressure in the mass equation, we can eliminate it from the problem by solving for p in (2.9) and substituting the resulting expression into (2.3).
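Written out, the elimination is a one-line substitution (a sketch using the standard Newtonian constitutive law for (2.3)):

```latex
% Solving the penalized mass equation (2.9) for the pressure gives
p = -\tfrac{1}{\varepsilon}\,\nabla\!\cdot\mathbf{u}.
% Substituting into the constitutive law (2.3),
% \sigma = -p\mathbf{I} + \mu\,(\nabla\mathbf{u} + \nabla\mathbf{u}^{T}),
% yields a stress expressed in the velocity alone:
\sigma = \tfrac{1}{\varepsilon}\,(\nabla\!\cdot\mathbf{u})\,\mathbf{I}
       + \mu\,\left(\nabla\mathbf{u} + \nabla\mathbf{u}^{T}\right),
% which is the source of the (1/\varepsilon)(\nabla\cdot u^h)(\nabla\cdot v^h)
% penalty term in the discrete weak form (2.11).
```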
In general it is not possible to solve infinite-dimensional optimization problems such as (2.1)-(2.8) in closed form. Thus, we seek numerical approximations. Here, we use a Galerkin finite element method. Let the Sobolev subspace U^h be the space of all C^0 continuous piecewise polynomials that vanish on Γ_∞ and Γ_0, and define the Sobolev subspace V^h similarly, with the added requirement that the functions also vanish on Γ_c. By restricting the velocity and control vectors to U^h, the infinite-dimensional optimization problem (2.1)-(2.8) becomes finite-dimensional. We "triangulate" the computational domain to obtain N_s nodes in Ω and on Γ_2, N_c nodes on Γ_c, and N_∞ nodes on Γ_∞. Corresponding to a node i with coordinates x_i, we have the compactly supported finite element basis function φ_i(x). Let N represent the total number of unknown nodal velocity vectors in the optimal control problem, i.e. N = N_s + N_c. Thus,

        U^h = span{φ_1, ..., φ_N},

and the boundary data are interpolated as

        u_c^h(x) = Σ_{i=N_s+1}^{N} u_i φ_i(x)   and   u_∞^h(x) = Σ_{i=N+1}^{N+N_∞} u_i φ_i(x).

Finally, we can represent u^h, the finite element approximation to the velocity field, as

        u^h(x) = Σ_{i=1}^{N} u_i φ_i(x);

the control variables are simply the nodal values of the control function:

        u_i = u_c(x_i),   i = N_s+1, ..., N.
The finite element approximation of the optimal control problem (2.1)-(2.8) can now be stated as finding u^h ∈ U^h so that

(2.10)       (μ/2) ∫_Ω (∇u^h + (∇u^h)ᵀ) : (∇u^h + (∇u^h)ᵀ) dΩ

is minimized, subject to

(2.11)       (μ/2) ∫_Ω (∇u^h + (∇u^h)ᵀ) : (∇v^h + (∇v^h)ᵀ) dΩ
             + (1/ε) ∫_Ω (∇·u^h)(∇·v^h) dΩ
             + ρ ∫_Ω v^h · (u^h·∇)u^h dΩ = 0   for all v^h ∈ V^h.
The objective function (2.10) is the discrete form of the dissipation function;
the constraints (2.11) are a discretization of the variational form of the
penalized conservation of linear momentum equation.
Let us define n = dN as the total number of unknown velocity components, where d is the physical dimension of the flow, i.e. d = 2 or 3. Let u ∈ ℝ^n denote the vector of unknown nodal velocity components.¹ The vector u includes both state and control variables, and can be symbolically partitioned as

        u = [ u_s ]
            [ u_c ]

where u_s contains the state components and u_c the control components. The viscous matrix K and the pressure (penalty) matrix K^ε are partitioned conformally:

        K = [ K_ss  K_sc ],        K^ε = [ K^ε_ss  K^ε_sc ].
            [ K_cs  K_cc ]                [ K^ε_cs  K^ε_cc ]
"
The sparsity of K , K , and the Jacobian of h(u) is dictated by the sparsity
of the graph underlying the nite element mesh. For quasi-uniform meshes,
a node has a bounded number of neighbors, so that the number of nonzeroes
per row is independent of problem size. Thus, these matrices have O(n)
nonzeroes. The constant is determined by d, by the order of the basis
functions (x), and by the structure of the mesh, but is usually small.
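The bounded nonzeros-per-row claim is easy to see in one dimension; the following toy sketch (a 1D piecewise-linear stiffness matrix standing in for K, not the paper's assembly) counts nonzeros per row:

```python
import numpy as np

def stiffness_1d(n):
    """Tridiagonal stiffness matrix for piecewise-linear elements on n nodes:
    each node interacts only with its immediate mesh neighbors."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Nonzeros per row stay bounded (here <= 3) as n grows, so the total
# nonzero count is O(n) with a small constant, as claimed for K, K^eps, J.
for n in (10, 100, 500):
    K = stiffness_1d(n)
    assert (K != 0).sum(axis=1).max() <= 3
```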
The optimization problem (2.10)-(2.11) can be rewritten as follows: find u_s ∈ ℝ^{n_s} and u_c ∈ ℝ^{n_c} so that

(2.12)       (1/2) u_sᵀ K_ss u_s + (1/2) u_cᵀ K_cc u_c + u_sᵀ K_sc u_c

¹ Not to be confused with u(x) ∈ ℝ³, the exact velocity field; the distinction will be clear from the context.

is minimized, subject to

(2.13)       (K_ss + K^ε_ss) u_s + (K_sc + K^ε_sc) u_c + h_s(u) = 0.
extend sophisticated PDE solution techniques, such as domain decomposition methods and multilevel preconditioners, to the optimization problem, as solution of (2.12)-(2.13) appears to require. For these reasons, state equation elimination methods are almost universal for PDE-constrained problems. For examples of this approach in the context of flow control, see the papers contained in [12].
Nevertheless, the approach outlined in the last two paragraphs possesses a distinct disadvantage: it requires exact solution of the state equations at each iteration. This can be an onerous requirement, especially for highly nonlinear problems such as those governed by the Navier-Stokes equations. Instead, we pursue here bona fide SQP methods for the problem (2.12)-(2.13). SQP requires satisfaction of only a linear approximation of the constraints at each iteration, thereby avoiding the need to converge them fully [10]. Thus, the state equations are satisfied simultaneously as the control variables are converged to their optimal values. Furthermore, we consider a special reduced Hessian SQP method that retains the three advantages attributed to GRG in the paragraph above. In order to retain the last advantage, i.e. that existing (GRG-ready) PDE solvers can be used, several conditions must be met. First, the PDE solver must be Newton-based. Second, the sensitivity analysis capability of the PDE solver must be based on an adjoint approach. Finally, the solver must be modular enough that the inner Newton linear step can be isolated from the code.
We begin with some definitions. Let c_s(u) : ℝ^n → ℝ^{n_s} represent the residual of the discrete Navier-Stokes equations, and indefinite, and can be partitioned into state and control blocks, in the same manner as the viscous and pressure matrices. Note that the sparsity structure of J(u) is identical to that of K. Let the matrix A_s(u) ∈ ℝ^{n_s×n} represent the Jacobian of c_s(u) with respect to u. We can partition A_s(u) into A_ss(u) ∈ ℝ^{n_s×n_s}, the Jacobian of c_s(u) with respect to u_s, and A_sc(u) ∈ ℝ^{n_s×n_c}, the Jacobian of c_s(u) with respect to u_c. Explicitly, the matrix

        H(λ) = Σ_{k=1}^{n_s} λ_k H_k

has an (i,j) entry that is nonzero only when nodes i and j belong to the same element. Therefore, H(λ) has the same mesh-based sparsity structure as the other finite element matrices (K, K^ε, and J), as does W(λ), since W(λ) = K + H(λ).
The necessary conditions for an optimal solution of the problem (2.12)-(2.13) reflect the vanishing of the gradient of the Lagrangian (3.1) and can be expressed by the set of nonlinear equations

(3.2)
    [ K_ss            K_sc            K_ss + K^ε_ss ] { u_s }   { J_ssᵀ λ_s }   { 0 }
    [ K_cs            K_cc            K_cs + K^ε_cs ] { u_c } + { J_scᵀ λ_s } = { 0 }
    [ K_ss + K^ε_ss   K_sc + K^ε_sc   0             ] { λ_s }   { h_s(u)    }   { 0 }
SQP can be derived as a Newton method for solving the optimality conditions [10]. One step of Newton's method for (3.2) is given by solving the linear system

(3.3)
    [ K_ss + H_ss^k            K_sc + H_sc^k            (K_ss + K^ε_ss + J_ss^k)ᵀ ] { p_s^k     }
    [ K_cs + H_cs^k            K_cc + H_cc^k            (K_sc + K^ε_sc + J_sc^k)ᵀ ] { p_c^k     }
    [ K_ss + K^ε_ss + J_ss^k   K_sc + K^ε_sc + J_sc^k   0                         ] { λ_s^{k+1} }

            { K_ss u_s^k + K_sc u_c^k                               }
        = − { K_cs u_s^k + K_cc u_c^k                               }
            { (K_ss + K^ε_ss) u_s^k + (K_sc + K^ε_sc) u_c^k + h_s^k }
for the increments in the state and control velocities, p_s^k and p_c^k, and for the new multiplier estimate λ_s^{k+1}, where the superscript k indicates evaluation of a quantity at u^k or λ^k. The state and control variables are then updated by

        u_s^{k+1} = u_s^k + p_s^k,        u_c^{k+1} = u_c^k + p_c^k.
Let us rewrite the Newton equations (3.3) in terms of the symbols defined at the beginning of this section as

(3.4)
    [ W_ss^k   W_sc^k   (A_ss^k)ᵀ ] { p_s^k     }   { −g_s^k }
    [ W_cs^k   W_cc^k   (A_sc^k)ᵀ ] { p_c^k     } = { −g_c^k }
    [ A_ss^k   A_sc^k   0         ] { λ_s^{k+1} }   { −c_s^k }
The major difficulty in solving this linear system is its extremely large, sparse coefficient matrix, the Karush-Kuhn-Tucker (KKT) matrix, which is of order (n + n_s) × (n + n_s). For industrial flow problems on unstructured meshes, the forward (i.e. flow simulation) problem is memory-bound, and even on large supercomputers one can barely afford memory for A_ss, let alone the entire matrix. So sparse factorization of the KKT matrix is not viable. One possibility is to solve (3.4) using a Krylov method such as the minimum residual method. However, it is not immediately obvious how to precondition the coefficient matrix.
Instead, consider the following block elimination. In the remainder of this section we drop the superscript k; it is understood that all quantities depending on u or λ_s are evaluated at u^k or λ_s^k. First, solve for p_s from the last block of equations to obtain

(3.5)        p_s = −A_ss⁻¹ (c_s + A_sc p_c),

where the invertibility of A_ss is guaranteed by the well-posedness of the boundary value problem (provided, of course, that we are away from limit points). Then, substitute this expression into the first block of equations. Finally, premultiply the first block of equations by −A_scᵀ A_ss⁻ᵀ and add it to the second block of equations, thereby eliminating λ_s. The result is a linear system that can be solved for p_c, namely

(3.6)        W_z p_c = −(A_scᵀ A_ss⁻ᵀ W_ss − W_cs) A_ss⁻¹ c_s + A_scᵀ A_ss⁻ᵀ g_s − g_c,
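This elimination can be verified on a small dense system with hypothetical matrices (random stand-ins for the W, A, g, and c blocks; note the leading minus on the c_s term, which follows from (3.4) and (3.5)):

```python
import numpy as np

rng = np.random.default_rng(0)
ns, nc = 5, 3

# Hypothetical SQP blocks: W symmetric positive definite, Ass invertible.
M = rng.standard_normal((ns + nc, ns + nc))
W = M @ M.T + np.eye(ns + nc)
Wss, Wsc = W[:ns, :ns], W[:ns, ns:]
Wcs, Wcc = W[ns:, :ns], W[ns:, ns:]
Ass = rng.standard_normal((ns, ns)) + ns * np.eye(ns)
Asc = rng.standard_normal((ns, nc))
gs, gc = rng.standard_normal(ns), rng.standard_normal(nc)
cs = rng.standard_normal(ns)

# Reference: solve the full KKT system (3.4) directly.
KKT = np.block([[Wss, Wsc, Ass.T],
                [Wcs, Wcc, Asc.T],
                [Ass, Asc, np.zeros((ns, ns))]])
ps_ref, pc_ref, lam = np.split(
    np.linalg.solve(KKT, -np.concatenate([gs, gc, cs])), [ns, ns + nc])

# Block elimination: AinvAsc = Ass^{-1} Asc, so AinvAsc.T = Asc^T Ass^{-T}.
AinvAsc = np.linalg.solve(Ass, Asc)
# Z = [-Ass^{-1} Asc; I] spans the null space of [Ass  Asc]:
assert np.allclose(Ass @ (-AinvAsc) + Asc, 0.0)
# Reduced Hessian Wz = Z^T W Z, the coefficient matrix of (3.6).
Wz = Wcc - Wcs @ AinvAsc - AinvAsc.T @ (Wsc - Wss @ AinvAsc)
rhs = (-(AinvAsc.T @ Wss - Wcs) @ np.linalg.solve(Ass, cs)
       + AinvAsc.T @ gs - gc)
pc = np.linalg.solve(Wz, rhs)              # control step from (3.6)
ps = -np.linalg.solve(Ass, cs + Asc @ pc)  # state step from (3.5)
```

The steps recovered from the reduced system agree with the full KKT solve, which is exactly the point of the elimination.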
If one defines the null space and range space basis matrices

(3.11)       Z = [ −A_ss⁻¹ A_sc ]
                 [      I       ]

(3.12)       Y = [ I ]
                 [ 0 ]
then one sees that the null space step (3.10) is identical to (3.6), the equation for determining the move in the control variables. Indeed, the coefficient matrix of (3.6), W_z, is exactly the reduced Hessian Zᵀ W Z, and is therefore at least positive semidefinite in the vicinity of a minimum. The state variable update (3.5) comprises the state equation Newton step, i.e. (3.9) using the range basis (3.12), as well as the null space contribution −A_ss⁻¹ A_sc p_z, where p_z ≡ p_c. The choice of bases (3.11) and (3.12) is known as a "coordinate basis," and has been applied to optimization problems in inverse heat conduction [19], structural design [24], [25], and compressible flow [22], [21]. SQP methods using these bases have been analyzed in [7], [26], and [2], among others.
As mentioned earlier, one of the difficulties with Algorithm 3.1, i.e. the bona fide Newton method, arises when iterative solution of systems involving A_ss is necessary; in this case the benefit of solving n_c + 2 systems with the same coefficient matrix but different right-hand sides is not as extensive as with sparse LU factorization. Might it be possible to give up the (local) quadratic convergence guarantee in exchange for the need to solve fewer systems involving A_ss at each iteration?
The answer turns out to be affirmative if we consider a quasi-Newton, rather than a true Newton, method. Consider (3.6), the control variable equation. Its coefficient matrix is the reduced Hessian W_z. This matrix is positive definite at a strict local minimum, as well as being small (n_c × n_c) and dense. It makes sense to recur a quasi-Newton approximation to it; thus we can avoid the construction of the matrix A_ss⁻¹ A_sc. By replacing W_z with its quasi-Newton approximation, B_z, it is easy to see that (3.5)-(3.7) now entail only five solutions of systems with A_ss or its transpose as coefficient matrix. This is a big reduction, especially for large n_c. However, we can do even better. At the expense of a reduction from one-step to two-step superlinear convergence [2], we ignore the second-order terms (those involving submatrices of W) on the right-hand side of (3.6). Furthermore, we reduce (3.7) to a first-order Lagrange multiplier estimate by dropping terms involving blocks of W. This results in the following algorithm, in which the BFGS formula is used to update the quasi-Newton approximation of the reduced Hessian.
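The BFGS update itself can be sketched generically (hypothetical step and gradient-difference vectors; this is the textbook formula, not code from the paper):

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of a symmetric approximation B, given a step s
    and a change in (reduced) gradient y; requires the curvature s^T y > 0."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)

B = np.eye(4)                          # typical initialization of B_z
s = np.array([1.0, 0.5, -0.3, 0.2])    # hypothetical control-space step
y = np.array([1.2, 0.4, -0.5, 0.3])    # hypothetical gradient difference
B_new = bfgs_update(B, s, y)

# The update preserves symmetry and enforces the secant condition B_new s = y.
assert np.allclose(B_new, B_new.T)
assert np.allclose(B_new @ s, y)
```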
Algorithm 3.2 requires only two solves involving A_ss per iteration, compared with n_c + 2 in Algorithm 3.1. This represents a substantial reduction in effort when iterative solvers are used and when n_c is significant. Of course, Algorithm 3.2 will generally require more iterations to converge than Algorithm 3.1, since it does not compute exact curvature information. The first of the two linear solves has A_ss as its coefficient matrix, and we term it the state variable update. It comprises two components: a Newton step on the state equations (−A_ss⁻¹ c_s), and a first-order change in the states due to a change in the control variables (−A_ss⁻¹ A_sc p_c). The second linear system to be solved at each iteration has A_ssᵀ as its coefficient matrix, and is termed the adjoint step, because of parallels with adjoint methods for sensitivity analysis [15], [14]. The steps of this algorithm are almost identical to those of a quasi-Newton GRG method; the major difference is that in GRG the state equations are fully converged at each iteration, while in Algorithm 3.2 essentially only a single Newton step is carried out.
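To make the two-solve structure concrete, here is a toy iteration in the spirit of Algorithm 3.2, applied to a small quadratic objective with a linear state equation (all matrices hypothetical; the actual algorithm operates on the Navier-Stokes discretization):

```python
import numpy as np

rng = np.random.default_rng(2)
ns, nc = 5, 3

# Hypothetical toy problem (not the flow equations): minimize
#   0.5 us.us + 0.5 uc.uc   subject to the linear "state equation"
#   Ass us + Asc uc = b.
Ass = 2.0 * np.eye(ns)
Asc = 0.1 * rng.standard_normal((ns, nc))
b = rng.standard_normal(ns)

us, uc = np.zeros(ns), np.zeros(nc)
Bz = np.eye(nc)                # quasi-Newton reduced Hessian approximation
gz_old = pc_old = None

for k in range(50):
    gs, gc = us, uc                     # gradients of the objective
    cs = Ass @ us + Asc @ uc - b        # state equation residual
    # Adjoint step: one solve with Ass^T for a first-order multiplier.
    lam = np.linalg.solve(Ass.T, -gs)
    gz = gc + Asc.T @ lam               # reduced gradient
    if pc_old is not None:              # BFGS update of Bz
        s, y = pc_old, gz - gz_old
        if s @ y > 1e-12:
            Bs = Bz @ s
            Bz = Bz - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
    pc = np.linalg.solve(Bz, -gz)       # control variable step
    # State variable update: one solve with Ass, combining a Newton step on
    # the state equations with the first-order change due to the control step.
    ps = -np.linalg.solve(Ass, cs + Asc @ pc)
    us, uc = us + ps, uc + pc
    gz_old, pc_old = gz, pc
```

On this convex quadratic the iterates converge to the solution of the corresponding KKT system, illustrating how the state equations and the controls are converged simultaneously.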
Algorithms 3.1 and 3.2 as presented above are not sufficient to guarantee convergence to a stationary point from arbitrary initial points. It is well known that for the forward problem, i.e. solving the discrete Navier-Stokes equations, Newton's method is only locally convergent. The diameter of the ball of convergence is of the order of the inverse of the Reynolds number that characterizes the flow [11]; better initial guesses are thus required as the Reynolds number increases. The optimal control problem should be no easier to converge than the forward problem, given that the flow equations form part of the first-order optimality conditions. This suggests continuation methods, which are popular techniques for globalizing the forward problem [11]. Here, we use a simple continuation on Reynolds number. That is, suppose we want to solve an optimal control problem with a Reynolds number of Re*, for which it is difficult to find a starting point from which Newton's method will converge. Instead, we solve a sequence of optimization problems characterized by increasing Reynolds number, beginning with Re = 0 and incrementing by ΔRe. Optimization problem i, with Reynolds number iΔRe, is solved by either Algorithm 3.1 or 3.2, to generate a good starting point for problem i + 1. Algorithm 3.1 is initialized with the optimal Lagrange multipliers and state and control variables from optimization problem i − 1. Algorithm 3.2 includes the same initializations, but in addition takes the initial BFGS approximation to the Lagrangian Hessian matrix to be the Hessian approximation at the solution of problem i − 1. Note that when Re = 0, the nonlinear terms drop out.
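A driver for this continuation strategy can be sketched as follows (solve_optimal_control is a hypothetical placeholder for a routine that runs Algorithm 3.1 or 3.2 and returns its converged state for warm-starting):

```python
def continuation(Re_target, dRe, solve_optimal_control):
    """Solve a sequence of optimal control problems with increasing Reynolds
    number, warm-starting each from the previous solution (a sketch; the
    solver argument is a hypothetical stand-in for Algorithm 3.1 or 3.2)."""
    warm_start = None          # the Re = 0 problem is solved from scratch
    Re = 0.0
    while True:
        warm_start = solve_optimal_control(Re, warm_start)
        if Re >= Re_target:
            return warm_start
        Re = min(Re + dRe, Re_target)
```

For instance, with Re_target = 500 and dRe = 100 the driver solves the Re = 0, 100, ..., 500 problems in turn, each initialized from its predecessor.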
[Figure: computational domain and boundary conditions for steady flow around an infinite cylinder (U, V: velocities; Tx, Ty: tractions; cylinder radius r = L/20; domain dimensions L and L/2). Conditions shown include U = 1, V = 0 and Tx = 0, Ty = 0 on the outer boundaries, Tx = 0, V = 0 on the symmetry boundaries, and U = V = 0 on the cylinder surface.]
Table 4.1
point, which occurred with SD-GRG for Re = 400 and 500, and with QN-GRG for Re = 500. QN-GRG takes an order of magnitude fewer iterations than SD-GRG, due to its ability to approximate the curvature of the control variable space. However, CQN-GRG takes almost twice as many iterations as QN-GRG. The reason is that the sequence of Reynolds number steps is "packed" into a single optimization iteration with QN-GRG, while CQN-GRG "promotes" the continuation on Re to the level of the optimization problem; thus, these steps contribute to the number reported in Table 4.1. On the other hand, the cost per iteration of CQN-GRG should be significantly lower than that of QN-GRG, since each flow solution need only be converged for the current Reynolds number, and the solver benefits from an initial guess taken from the previous converged value. QN-SQP reduces the number of iterations by almost 50% over CQN-GRG, since it liberates the optimizer from having to follow a path dictated by satisfaction of the flow equations.
Fig. 4.6. Streamlines for steady flow around a cylinder, optimal control, Re = 500.
Table 4.2
Timings in minutes for GRG and SQP methods on a DEC 3000/700 with 225 MHz Alpha processor and 512 MB memory.
make N-SQP roughly twice as expensive per iteration as QN-SQP. Still, the Newton method does take less time; whether this is worth the additional effort of implementing second derivatives will depend on the particular application.

Finally, we demonstrate the capability of the reduced Hessian SQP optimal control method by solving a three-dimensional optimal flow control problem. We select a model problem of flow around a sphere at a Reynolds number of 130, which is the limit of steady flow for this problem [1]. Again, we exploit symmetry, this time about two orthogonal planes, to obtain a quarter model of the sphere. Along the planes of symmetry, zero tangential traction and zero normal velocity are specified as boundary conditions. A spherical far-field boundary truncates the flow domain, and on it traction-free boundary conditions are imposed. The computational domain is meshed into 455 isoparametric triquadratic prism elements, resulting in N = 4155 nodes and n = 12465 unknown velocity components. In the absence of boundary velocity control, the flow separates and forms a standing ring-eddy behind the sphere, as shown in Figure 4.7.
Fig. 4.7. Velocity field for steady flow around a sphere, no control, Re = 130.
with the flow is shown for Case 1 in Figure 4.8. The figure indicates the

Fig. 4.8. Velocity field for steady flow around a sphere, Case 1 optimal control, Re = 130.

Fig. 4.9. Velocity field for steady flow around a sphere, Case 2 optimal control, Re = 130.

an improvement over the uncontrolled flow (Figure 4.7), there is some recirculation immediately downstream of the sphere that was not present in the Case 1 optimal solution. Indeed, we expect that the Case 2 optimal solution should be at least as good as that of Case 1, since the former duplicates the holes of the latter while adding several more. However, the optimizer "sees" only the value of the objective function (and its derivatives), and in Case 2 the optimal value of the dissipation function is indeed lower than
Table 4.3
                        No control    Case 1      Case 2      Case 3
                            --          16          33          65
Dissipation function    5.775457    1.729944    1.551276    1.430053
that of Case 1, as seen in Table 4.3. The Case 3 optimal flowfield, shown in Figure 4.10, resembles that of Case 2, but as seen in Table 4.3, its dissipation function is lower than that of Case 2, so the result matches our expectation.⁹

Fig. 4.10. Velocity field for steady flow around a sphere, Case 3 optimal control, Re = 130.
Because of the large computational burden imposed by the three-dimensional problems, we did not conduct careful timings. However, the wall-clock time required (on the workstation described in Table 4.2) in all three cases was less than a day. The majority of the time was undoubtedly spent on factoring the coefficient matrix A_ss.
5. Final Remarks. Based on the comparison of the previous section, we conclude that the reduced Hessian SQP methods are overwhelmingly superior to the GRG methods that are popular for optimization problems involving PDEs as constraints, offering over an order of magnitude improvement in the time required for the optimal flow control problems considered. In particular, the methods are so efficient that the optimal control for a two-dimensional flow around a cylinder at Reynolds number 500 is found in about a half hour on a desktop workstation. Furthermore, the Newton-SQP method is capable of solving some three-dimensional optimal flow control problems on the same workstation in less than a day. Even though

⁹ Even if this were not the case, the issue of local vs. global optimality might explain such an apparently anomalous result.
the Newton-SQP method takes significantly fewer iterations, its need to construct exact Hessians and to solve linear systems that number on the order of the number of control variables makes it only marginally more efficient than its quasi-Newton counterpart, based on our two-dimensional results.

The GRG methods described here represent a strict interpretation of the GRG idea: at each optimization iteration, the flow equations are converged to a tolerance of 10⁻⁷. However, in the spirit of SQP, one might choose a larger value of the tolerance at early iterations, making sure to reduce the tolerance to its target value as the optimization iterations converge. Indeed, in the limit of one Newton step on the flow equations per optimization iteration, we essentially recover the reduced SQP method. A related idea is to pose the early optimization iterations on a coarse mesh, and refine as the optimum is approached; this can be very effective, as advanced in [16] and [27]. Finally, in [3], a number of possibilities are defined that are intermediate between the extremes of GRG (full flow convergence per optimization iteration) and reduced SQP (one state equation Newton step per optimization iteration).
We imagine that for larger problems, the cost of factoring the state equation Jacobian matrix will begin to display its asymptotic behavior and dominate the lower-order terms, leading to increasing efficiency of the Newton method relative to quasi-Newton. However, problem size cannot continue to grow indefinitely and still allow sparse LU factorization. For example, in solving three-dimensional Navier-Stokes flow control problems on the workstation described in Table 4.2, we encountered a limit of about 13,000 state variables, due to memory. Beyond this size, where undoubtedly most industrial-scale flow control problems lie, iterative solvers are required, and it remains to be seen whether they can be tailored to multiple right-hand sides sufficiently well that Newton-SQP can retain its superiority over quasi-Newton SQP.
For the largest problems, parallel computing will become essential. Of course, there is a long history of parallel algorithms and implementations for the forward problem, i.e. Navier-Stokes flow simulation. Algorithms 3.1 and 3.2 are well suited to parallel machines, since the majority of their work involves the solution of linear systems having state equation Jacobians as their coefficient matrix; this is just a step of the forward problem, the parallelization of which is well understood. Indeed, in [9] we discuss the parallel implementation of Algorithm 3.2 for a problem in shape optimization governed by compressible flows.
REFERENCES

[1] G. K. Batchelor, An Introduction to Fluid Dynamics, Cambridge University Press, 1967.
[2] L. T. Biegler, J. Nocedal, and C. Schmid, A reduced Hessian method for large-scale constrained optimization, SIAM Journal on Optimization, 5 (1995), pp. 314–347.
[3] E. J. Cramer, J. E. Dennis, P. D. Frank, R. M. Lewis, and G. R. Shubin, Problem formulation for multidisciplinary optimization, SIAM Journal on Optimization, 4 (1994), pp. 754–776.
[4] T. A. Davis, Users' guide for the unsymmetric pattern multifrontal package (UMFPACK), Tech. Rep. TR-93-020, CIS Dept., Univ. of Florida, Gainesville, FL, 1993.
[5] T. A. Davis and I. S. Duff, An unsymmetric pattern multifrontal method for sparse LU factorization, Tech. Rep. TR-93-018, CIS Dept., Univ. of Florida, Gainesville, FL, 1993.
[6] C. Farhat and P.-S. Chen, Tailoring domain decomposition methods for efficient parallel coarse grid solution and for systems with many right hand sides, in Domain Decomposition Methods in Science and Engineering, vol. 180 of Contemporary Mathematics, American Mathematical Society, 1994, pp. 401–406.
[7] D. Gabay, Reduced quasi-Newton methods with feasibility improvement for nonlinearly constrained optimization, Mathematical Programming Study, 16 (1982), p. 18.
[8] M. Gad-el-Hak, Flow control, Applied Mechanics Reviews, 42 (1989), pp. 261–292.
[9] O. Ghattas and C. E. Orozco, A parallel reduced Hessian SQP method for shape optimization, in Multidisciplinary Design Optimization: State-of-the-Art, N. Alexandrov and M. Hussaini, eds., SIAM, 1996. To appear.
[10] P. E. Gill, W. Murray, and M. H. Wright, Practical Optimization, Academic Press, 1981.
[11] M. D. Gunzburger, Finite Element Methods for Viscous Incompressible Flows, Academic Press, 1989.
[12] M. D. Gunzburger, ed., Flow Control, vol. 68 of IMA Volumes in Mathematics and its Applications, Springer-Verlag, 1995.
[13] M. D. Gunzburger, L. S. Hou, and T. P. Svobodny, Optimal control and optimization of viscous, incompressible flows, in Incompressible Computational Fluid Dynamics, M. D. Gunzburger and R. A. Nicolaides, eds., Cambridge, 1993, ch. 5, pp. 109–150.
[14] R. T. Haftka, Z. Gurdal, and M. P. Kamat, Elements of Structural Optimization, Kluwer Academic Publishers, 1990.
[15] E. J. Haug and J. S. Arora, Applied Optimal Design, Wiley-Interscience, 1979.
[16] W. P. Huffman, R. G. Melvin, D. P. Young, F. T. Johnson, J. E. Bussoletti, M. B. Bieterman, and C. L. Hilmes, Practical design and optimization in computational fluid dynamics, in Proceedings of the 24th AIAA Fluid Dynamics Conference, Orlando, Florida, July 1993.
[17] T. J. R. Hughes, W. K. Liu, and A. Brooks, Finite element analysis of incompressible viscous flow by the penalty function formulation, Journal of Computational Physics, 30 (1979), pp. 1–60.
[18] M. S. Khaira, G. L. Miller, and T. J. Sheffler, Nested dissection: A survey and comparison of various nested dissection algorithms, Tech. Rep. CMU-CS-92-106R, Carnegie Mellon University, 1992.
[19] F. S. Kupfer and E. W. Sachs, A prospective look at SQP methods for semilinear parabolic control problems, in Optimal Control of Partial Differential Equations, K. Hoffmann and W. Krabs, eds., Springer, 1991, pp. 145–157.
[20] P. Moin and T. Bewley, Feedback control of turbulence, Applied Mechanics Reviews, 47 (1994), pp. S3–S13.
[21] C. E. Orozco and O. Ghattas, Massively parallel aerodynamic shape optimization, Computing Systems in Engineering, 1–4 (1992), pp. 311–320.
[22] C. E. Orozco and O. Ghattas, Optimal design of systems governed by nonlinear partial differential equations, in Fourth AIAA/USAF/NASA/OAI Symposium on Multidisciplinary Analysis and Optimization, AIAA, 1992, pp. 1126–1140.
[23] L. Prandtl and O. G. Tietjens, Applied Hydro- and Aeromechanics, Dover, 1934, p. 81.
[24] U. Ringertz, Optimal design of nonlinear shell structures, Tech. Rep. FFA TN 91-18, The Aeronautical Research Institute of Sweden, 1991.
[25] U. Ringertz, An algorithm for optimization of nonlinear shell structures, International Journal for Numerical Methods in Engineering, 38 (1995), pp. 299–314.
[26] Y. Xie, Reduced Hessian algorithms for solving large-scale equality constrained optimization problems, PhD thesis, University of Colorado, Boulder, Department of Computer Science, 1991.
[27] D. P. Young, W. P. Huffman, R. G. Melvin, M. B. Bieterman, C. L. Hilmes, and F. T. Johnson, Inexactness and global convergence in design optimization, in Proceedings of the 5th AIAA/NASA/USAF/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Panama City, Florida, September 1994.