
Stochastic Partial Differential Equations
Advances in Applied Mathematics

Series Editor: Daniel Zwillinger

Published Titles
Stochastic Partial Differential Equations, Second Edition Pao-Liu Chow
Advances in Applied Mathematics

Stochastic Partial Differential Equations

Pao-Liu Chow
Wayne State University
Detroit, Michigan, USA
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2015 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works


Version Date: 20140818

International Standard Book Number-13: 978-1-4665-7957-6 (eBook - PDF)

This book contains information obtained from authentic and highly regarded sources. Reasonable
efforts have been made to publish reliable data and information, but the author and publisher cannot
assume responsibility for the validity of all materials or the consequences of their use. The authors and
publishers have attempted to trace the copyright holders of all material reproduced in this publication
and apologize to copyright holders if permission to publish in this form has not been obtained. If any
copyright material has not been acknowledged please write and let us know so we may rectify in any
future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are
used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
To Jeanne,
Larry, and Lily
Contents

Preface to Second Edition xi

Preface xiii

1 Preliminaries 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Some Examples . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Brownian Motions and Martingales . . . . . . . . . . . . . . 7
1.4 Stochastic Integrals . . . . . . . . . . . . . . . . . . . . . . . 9
1.5 Stochastic Differential Equations of Itô Type . . . . . . . . . 13
1.6 Lévy Processes and Stochastic Integrals . . . . . . . . . . . . 15
1.7 Stochastic Differential Equations of Lévy Type . . . . . . . . 18
1.8 Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

2 Scalar Equations of First Order 21


2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2 Generalized Itô's Formula . . . . . . . . . . . . . . . . . . . . 23
2.3 Linear Stochastic Equations . . . . . . . . . . . . . . . . . . 28
2.4 Quasilinear Equations . . . . . . . . . . . . . . . . . . . . . . 34
2.5 General Remarks . . . . . . . . . . . . . . . . . . . . . . . . 37

3 Stochastic Parabolic Equations 39


3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.3 Solution of Stochastic Heat Equation . . . . . . . . . . . . . 52
3.4 Linear Equations with Additive Noise . . . . . . . . . . . . . 57
3.5 Some Regularity Properties . . . . . . . . . . . . . . . . . . . 65
3.6 Stochastic Reaction-Diffusion Equations . . . . . . . . . . . 79
3.7 Parabolic Equations with Gradient-Dependent Noise . . . . . 88
3.8 Nonlinear Parabolic Equations with Lévy-Type Noise . . . . 98

4 Stochastic Parabolic Equations in the Whole Space 105


4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
4.2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.3 Linear and Semilinear Equations . . . . . . . . . . . . . . . . 109
4.4 Feynman-Kac Formula . . . . . . . . . . . . . . . . . . . . . 115


4.5 Positivity of Solutions . . . . . . . . . . . . . . . . . . . . . . 118


4.6 Correlation Functions of Solutions . . . . . . . . . . . . . . . 123

5 Stochastic Hyperbolic Equations 129


5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
5.2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5.3 Wave Equation with Additive Noise . . . . . . . . . . . . . . 130
5.4 Semilinear Wave Equations . . . . . . . . . . . . . . . . . . . 140
5.5 Wave Equations in an Unbounded Domain . . . . . . . . . . 147
5.6 Randomly Perturbed Hyperbolic Systems . . . . . . . . . . . 151

6 Stochastic Evolution Equations in Hilbert Spaces 161


6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
6.2 Hilbert Space-Valued Martingales . . . . . . . . . . . . . . . 161
6.3 Stochastic Integrals in Hilbert Spaces . . . . . . . . . . . . . 167
6.4 Itô's Formula . . . . . . . . . . . . . . . . . . . . . . . . . . 176
6.5 Stochastic Evolution Equations . . . . . . . . . . . . . . . . . 179
6.6 Mild Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . 182
6.7 Strong Solutions . . . . . . . . . . . . . . . . . . . . . . . . . 197
6.8 Stochastic Evolution Equations of the Second Order . . . . . 215

7 Asymptotic Behavior of Solutions 225


7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
7.2 Itô's Formula and Lyapunov Functionals . . . . . . . . . . . 226
7.3 Boundedness of Solutions . . . . . . . . . . . . . . . . . . . . 228
7.4 Stability of Null Solution . . . . . . . . . . . . . . . . . . . . 233
7.5 Invariant Measures . . . . . . . . . . . . . . . . . . . . . . . 237
7.6 Small Random Perturbation Problems . . . . . . . . . . . . . 245
7.7 Large Deviations Problems . . . . . . . . . . . . . . . . . . . 250

8 Further Applications 253


8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
8.2 Stochastic Burgers and Related Equations . . . . . . . . . . 254
8.3 Random Schrödinger Equation . . . . . . . . . . . . . . . . . 255
8.4 Nonlinear Stochastic Beam Equations . . . . . . . . . . . . . 257
8.5 Stochastic Stability of Cahn-Hilliard Equation . . . . . . . . 260
8.6 Invariant Measures for Stochastic Navier-Stokes Equations . 262
8.7 Spatial Population Growth Model in Random Environment . 265
8.8 HJMM Equation in Finance . . . . . . . . . . . . . . . . . . 267

9 Diffusion Equations in Infinite Dimensions 271


9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
9.2 Diffusion Processes and Kolmogorov Equations . . . . . . . . 273
9.3 Gauss-Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . 279
9.4 Ornstein-Uhlenbeck Semigroup . . . . . . . . . . . . . . . . 284

9.5 Parabolic Equations and Related Elliptic Problems . . . . . 291


9.6 Characteristic Functionals and Hopf Equations . . . . . . . . 302

Bibliography 309

Index 315
Preface to Second Edition

Since the first edition of this book was published in 2007, inspired by applications to biological and financial problems, there has been a surge of interest in stochastic partial differential equations driven by Lévy-type noise. This can be seen from the increasing number of papers published on the subject. In the meantime, I found numerous typos and careless errors in the original book, for which I owe the readers a sincere apology. For these reasons, when the publisher offered me the chance to revise the book, I gladly took the opportunity to upgrade it and to correct the errors therein.
The changes made in this edition are of two types. The first is the infusion of new material concerning stochastic partial differential equations driven by Lévy-type noises. The general theory of partial differential equations driven by Lévy noises was presented, in a Hilbert space setting, in the very comprehensive book by Peszat and Zabczyk [78]. As an introduction, I have inserted this material into the present volume in the form of concrete cases rather than a general theory. To merge seamlessly with the original text, the new material is placed in the appropriate sections of the book as follows. In Chapter 1, two new sections on Lévy-type stochastic integrals and the related stochastic differential equations in finite dimensions were added. In Chapter 3, Poisson random fields and the related stochastic integrals are introduced in Section 3.2, and the solution of a stochastic heat equation with Poisson noise is given in Section 3.3. Then, in Sections 3.4 and 3.5, the existence and regularity of mild solutions to some linear parabolic equations with Poisson noises are treated, and a new section, Section 3.8, was added to cover the nonlinear case. For stochastic hyperbolic equations, in Chapter 5, the linear and semilinear wave equations driven by Poisson-type noises are treated in Sections 5.3 and 5.4, respectively. To unify the special cases, in Chapter 6, the Poisson stochastic integral in a Hilbert space is introduced in Section 6.2, and the mild solutions of stochastic evolution equations with Poisson noises are analyzed in Sections 6.5 and 6.6.
As for the second type of changes, my aim was to improve the presentation of the material and to correct the typos and errors detected in the first edition. For instance, several proofs, such as those of Lemmas 5.5 and 6.2, were revised. Theorem 3.4, on explosive solutions of stochastic reaction-diffusion equations, is new. Also, two applications of stochastic PDEs, to population biology and finance, were added in Sections 8.7 and 8.8, respectively. A good part of Chapter 9, Section 9.5 in particular, was rewritten to keep the book up to date. In addition, there are other minor changes and modifications, too many to be cited here. Concerning the second goal, I am very grateful to the many readers who kindly provided helpful comments and pointed out typos and possible errors in the first edition. In particular, I would like to thank Professor Hongbo Fu, of Huazhong University of Science and Technology in Wuhan, China, who kindly sent me a list of typos compiled after he ran a graduate seminar on stochastic partial differential equations based on my book. All of the readers' comments have contributed to the elimination of the detected errors and to the improvement of the book's quality.
In revising the book I have received a great deal of additional help from several people. First, I would like to thank my wife for her constant support and valuable assistance in the LaTeX typesetting and error detection. To Mr. Robert Stern of Chapman & Hall/CRC Press, I am very grateful for getting me interested in writing the book several years ago and then encouraging me to revise it to keep it up to date. I would also like to thank Mr. Michael Davidson and Mr. Robert Ross, both editors at the same publisher, for their valuable assistance in the process of publishing this revised volume. In addition, many thanks go to the technical staff members who assisted me in turning my revised manuscript into a published book.
Preface

This is an introductory book on stochastic partial differential equations, by which we mean partial differential equations (PDEs) of evolution type whose coefficients (including the inhomogeneous terms) are random functions of space and time. In particular, such random functions, or random fields, may be generalized random fields, for instance spatially dependent white noises. In the case of ordinary differential equations (ODEs), this type of stochastic equation was made precise by the theory of Itô's stochastic integral equations. Before 1970, there was no general framework for the study of stochastic PDEs. Soon afterward, by recasting stochastic PDEs as stochastic evolution equations, or stochastic ODEs in Hilbert or Banach spaces, a more coherent theory of stochastic PDEs, under the cover of stochastic evolution equations, began to develop steadily. Since then, stochastic PDEs have been more or less synonymous with stochastic evolution equations. This stands in contrast with the deterministic PDE theory, which began with the study of concrete model equations in mathematical physics and evolved gradually into a branch of modern analysis, including the theory of evolution equations in function spaces. As a relatively new area of mathematics, the subject is still at a tender age and has not yet received much attention in the mathematical community.
So far there are very few books on stochastic PDEs. The most well-known and comprehensive book on the subject is Stochastic Equations in Infinite Dimensions (Cambridge University Press, 1992) by G. Da Prato and J. Zabczyk. That book is concerned with stochastic evolution equations, mainly in Hilbert spaces, and is equipped with the prerequisite material, such as probability measures and stochastic integration in function spaces. When I decided to write a book on this subject, the initial plan was to expand and update a set of my lecture notes on stochastic PDEs and turn it into a book. Had I proceeded with the original plan, the end product would have been an expository book, in a somewhat different style, on a subject treated so well by the aforementioned authors. In the meantime, there was a lack, persisting even today, of an introductory book on this subject that does not presume prior knowledge of infinite-dimensional stochastic analysis. On second thought, I changed my plan and decided to write an introductory book requiring only a graduate-level course in stochastic processes, in particular the ordinary Itô differential equations, without assuming specific knowledge of partial differential equations. The main objective of the book is to introduce readers conversant with basic probability theory to the subject and to highlight some of the computational and analytical techniques involved. The route taken is first to bring the subject back to its roots in classical concrete problems and then to proceed to a unified theory of stochastic evolution equations with some applications. At the end, we will point out the connection of stochastic PDEs to infinite-dimensional stochastic analysis. The subject is therefore not only of practical interest, but also provides many challenging problems in stochastic analysis.
As an introductory book, an attempt is made to provide an overview of the relevant topics, but it is by no means exhaustive. For instance, we will not cover martingale solutions, Poisson-type white noises, or stochastic flow problems. The same can be said about the references given herein: some of them may not be the original sources but are more accessible or familiar to the author. Since, in various parts of the book, the material is taken from my published works, it is natural that I cite my own papers more often. In order to come to the subject quickly, without having to work through highly technical material first, the prerequisite material, such as probability measures and stochastic integration in function spaces, is not given systematically at the beginning. Instead, the necessary information is provided, somewhat informally, wherever the need arises. In fact, in my personal experience, many students of probability theory tend to shy away from the subject because of its heavy prerequisites. Hopefully, this less technical book has redeeming value and will serve the purpose of enticing interested readers to pursue the subject further. Through many concrete examples, the book may also be of interest to those who use stochastic PDEs as models in the applied sciences.
Writing this book has been a long journey for me. Since the fall of 2003, I have worked on it intermittently. In the process, many people have been extremely helpful. First of all, I must thank my wife for her unfailing support and encouragement, as well as her valuable assistance in technical typing. I am greatly indebted to my colleagues: José-Luis Menaldi, who read the first draft of the manuscript and pointed out some errors and omissions for correction and improvement; George Yin, who was very helpful in resolving many of the LaTeX problems; and my long-time friend Hui-Hsiung Kuo, who encouraged me to write the book and provided his moral support. In my professional life I owe my early career to my mentor, Professor Joseph B. Keller of Stanford University, who initiated my interest in stochastic PDEs through applications and provided me with a most stimulating environment in which to work, for several years, at the Courant Institute of Mathematical Sciences, New York University. There, I had the good fortune to learn a great deal from the late Professor Monroe D. Donsker, whose inspiring lectures on integration in function spaces got me interested in the subject of stochastic analysis. To both of them I wish to express my deepest gratitude.
This book would not have been written had it not been prompted by a campus visit in the fall of 2002 by Robert B. Stern, executive editor of Chapman & Hall/CRC Press. During the visit I showed him a copy of my lecture notes on stochastic PDEs. He encouraged me to expand them into a book and later kindly offered me a contract. Afterward, for the reason given above, I decided to write quite a different type of book instead, which caused a long delay in completing it. To Mr. Stern I would like to give my hearty thanks for his continuing support and infinite patience during the preparation of my manuscript. I also appreciate the able assistance of Jessica Vakili, the project coordinator, in expediting the book's publication.

Pao-Liu Chow
Detroit, Michigan
1
Preliminaries

1.1 Introduction
The theory of stochastic ordinary differential equations has been well developed since K. Itô introduced the stochastic integral and the stochastic integral equation in the mid-1940s [45, 46]. Such an equation is therefore also known as an Itô equation, in honor of its originator. Owing to its diverse applications, ranging from biology and physics to finance, the subject has become increasingly important and popular. For an introduction to the theory and many references, the reader is referred to the books [3, 36, 44, 59, 67, 74], among many others.
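To fix ideas, an Itô equation $dX_t = b(X_t)\,dt + \sigma(X_t)\,dw_t$ can be approximated numerically by the Euler-Maruyama scheme. The sketch below is purely illustrative and not part of the text; the drift and diffusion (an Ornstein-Uhlenbeck-type choice) are arbitrary.

```python
import numpy as np

def euler_maruyama(b, sigma, x0, T, n_steps, rng):
    """Simulate one path of dX_t = b(X_t) dt + sigma(X_t) dw_t."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x[k + 1] = x[k] + b(x[k]) * dt + sigma(x[k]) * dw
    return x

# Illustrative choice: dX = -X dt + 0.5 dw (Ornstein-Uhlenbeck process)
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x: -x, lambda x: 0.5, 1.0, T=1.0, n_steps=1000, rng=rng)
```

The scheme converges to the Itô solution as the step size shrinks (strong order 1/2 under Lipschitz coefficients), which is why the Itô integral is the natural limit object here.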
Up to the early 1960s, most work on stochastic differential equations had been confined to stochastic ordinary differential equations (ODEs). Since then, spurred by the demands of modern applications, partial differential equations with random parameters, such as the coefficients or the forcing term, have begun to attract the attention of many researchers. Most of them were motivated by applications to physical and biological problems. Notable examples are turbulent flow in fluid dynamics and diffusion and waves in random media [6, 10, 48]. In general, the random parameters or random fields involved need not be of white-noise type, but in many applications, models with white noises provide reasonable approximations. Besides, as a mathematical subject, they pose many interesting and challenging problems in stochastic analysis. As a generalization of the Itô equations in $\mathbb{R}^d$, it seems natural to consider stochastic partial differential equations (PDEs) of Itô type as stochastic evolution equations in some Hilbert or Banach space. This book is mainly devoted to stochastic PDEs of Itô type. Due to a recent surge of interest in the case of discontinuous noise, stochastic PDEs with Lévy-type noise will also be treated.
The study of stochastic partial differential equations in a Hilbert space goes back to Baklan [4]. He proved an existence theorem for a stochastic parabolic equation, or parabolic Itô equation, by recasting it as an integral equation with the aid of the associated Green's function. This is the precursor of what is now known as the semigroup method, and the solution in the Itô sense is called a mild solution [21]. Alternatively, by writing such an equation as an Itô integral equation with respect to the time variable only, the corresponding solution in a variational formulation is known as a strong solution or a variational solution. The existence and uniqueness of strong solutions were first discussed by Bensoussan and Temam [5, 6] and further developed by Pardoux [75] and Krylov and Rozovskii [53], among many others. The aforementioned solutions are both weak solutions, rather than classical solutions, in the sense of partial differential equations. However, there are other notions of weak solutions, such as distribution-valued solutions and martingale solutions. The latter will not be considered in this book, in favor of more regular versions of solutions that are amenable to analytical techniques from stochastic analysis and partial differential equations.
In the deterministic case, partial differential equations originated from mathematical models of physical problems, such as heat conduction and wave propagation in continuous media, and gradually developed into an important mathematical subject in both theory and application. In contrast, the theoretical development of stochastic partial differential equations leapt quickly from concrete problems to general theory. Most stochastic PDEs were treated as stochastic evolution equations in a Hilbert or Banach space, without close connection to specific equations arising from applications. For an introductory book, it seems wise to start by studying some special, concrete equations before taking up a unified theory of stochastic evolution equations. With this in mind, we shall first study the stochastic transport equation and then proceed to the stochastic heat and wave equations. To analyze these basic equations constructively, we will systematically employ the familiar tools of partial differential equations, such as eigenfunction expansions, Green's functions, and Fourier transforms, together with conventional techniques of stochastic analysis. We then show how these concrete results lead to the investigation of stochastic evolution equations in a Hilbert space. Abstract theorems on the existence, uniqueness, and regularity of solutions will be proved and applied later, to study the asymptotic behavior of solutions and other types of stochastic partial differential equations not treated earlier in the book.
To be more specific, the book can be divided into two parts. In the first part, Chapter 1 begins with some well-known examples and a brief review of stochastic processes and stochastic ordinary differential equations with Wiener and Lévy-type noises. Then, in view of the seeming disconnect between the general theory of stochastic evolution equations and concrete stochastic PDEs, the book proceeds to study some archetypal stochastic PDEs, such as the transport equation and the heat and wave equations. Following the tradition in PDEs, we start with a class of first-order scalar equations in Chapter 2. For simplicity, the random coefficients are assumed to be spatially dependent white noises in finite dimensions. These equations are relatively simple, and their pathwise solutions can be obtained by the method of stochastic characteristics. In Chapters 3 and 4, stochastic parabolic equations, in a bounded domain and in the whole space, are treated, respectively. They are analyzed by Fourier methods: the method of eigenfunction expansion in a bounded domain and the method of Fourier transform in the whole space. Combining the Fourier methods with some techniques for stochastic ODEs and the convergence of random functions, it is possible to prove, constructively, the existence, uniqueness, and regularity properties of solutions. By a similar approach, stochastic hyperbolic equations are analyzed in Chapter 5. Thus far, based on the Fourier methods and concrete calculations, without resorting to heavy machinery from stochastic analysis, we are able to study the solutions of typical stochastic PDEs in explicit form. The methods for performing such calculations are themselves of interest and well worth knowing. After gaining some feeling for the solution properties of concrete stochastic PDEs, we move on to the second part of the book: the stochastic evolution equations alluded to earlier. In Chapter 6, existence and uniqueness theorems for linear and nonlinear stochastic evolution equations are proved for two kinds of solutions: the mild solution and the strong solution. The mild solution is usually associated with the semigroup approach, treated extensively in the book by Da Prato and Zabczyk [21], while the strong solution, based on the variational formulation, is not covered in that book. Under stronger assumptions, such as the conditions of coercivity and monotone nonlinearity, strong solutions have more regularity. As a consequence, the Itô formula holds for the corresponding stochastic evolution equations. This allows a deeper study of the behavior of solutions, such as their asymptotic properties for long times or for small noises. In particular, the boundedness and stability of solutions, the existence of invariant measures, and small perturbation problems will be discussed in Chapter 7. To show that the theorems given in Chapter 6 have a wide range of applications, several more examples, arising from stochastic models in turbulence, population biology, and finance, are provided in Chapter 8. Finally, in Chapter 9, we give a brief exposition of the connection between stochastic PDEs and diffusion equations in infinite dimensions. In particular, the associated Kolmogorov and Hopf equations are studied in some Gauss-Sobolev spaces.
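To preview the eigenfunction-expansion method in its simplest setting (an illustrative sketch, not the book's notation): for a stochastic heat equation $\partial_t u = \partial_x^2 u + \dot W(x,t)$ on $(0,\pi)$ with zero boundary conditions, expanding in the eigenfunctions $\sin kx$ reduces the SPDE to independent scalar Ornstein-Uhlenbeck equations $du_k = -k^2 u_k\,dt + dw_k(t)$ for the Fourier modes, which can be sampled exactly.

```python
import numpy as np

def stochastic_heat_modes(n_modes, T, n_steps, rng):
    """Sample Fourier coefficients u_k(T) of du_k = -k^2 u_k dt + dw_k,
    u_k(0) = 0, using the exact one-step Ornstein-Uhlenbeck transition."""
    dt = T / n_steps
    k = np.arange(1, n_modes + 1)
    u = np.zeros(n_modes)
    decay = np.exp(-k**2 * dt)
    # exact conditional std of the OU transition over one step
    noise_sd = np.sqrt((1.0 - np.exp(-2.0 * k**2 * dt)) / (2.0 * k**2))
    for _ in range(n_steps):
        u = decay * u + noise_sd * rng.normal(size=n_modes)
    return u  # mode k has stationary variance 1 / (2 k^2)

rng = np.random.default_rng(1)
u_T = stochastic_heat_modes(n_modes=50, T=1.0, n_steps=100, rng=rng)
# Reconstruct the field at grid points: u(x, T) = sum_k u_k(T) sin(k x)
x = np.linspace(0.0, np.pi, 101)
field = (u_T[:, None] * np.sin(np.outer(np.arange(1, 51), x))).sum(axis=0)
```

Note how the decay of the mode variances like $1/(2k^2)$ already hints at the spatial regularity questions taken up in Chapter 3.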
This chapter serves as an introduction to the subject. To motivate the
study of stochastic PDEs, we shall first give several examples of stochastic
partial differential equations arising from the applied sciences. Then we briefly
review some basic facts about stochastic processes and stochastic ODEs that
will be needed in the subsequent chapters.

1.2 Some Examples


(Example 1) Nonlinear Filtering

In nonlinear filtering theory, one is interested in estimating the state of a partially observable dynamical system by computing the relevant conditional probability density function. It was shown by Zakai [97] that an

un-normalized conditional probability density function $u(x,t)$ satisfies the stochastic parabolic equation:
$$
\begin{aligned}
\frac{\partial u}{\partial t} &= \frac{1}{2}\sum_{j,k=1}^{d}\frac{\partial}{\partial x_j}\Big[a_{jk}(x)\,\frac{\partial u}{\partial x_k}\Big] + \sum_{k=1}^{d}\frac{\partial}{\partial x_k}\big[g_k(x)\,u\big] + \dot V(x,t)\,u, \quad x\in\mathbb{R}^d,\ t>0,\\
u(x,0) &= u_0(x).
\end{aligned} \tag{1.1}
$$

Here the coefficients $a_{jk}$, $g_k$ and the initial state $u_0$ are given functions of $x$, and $\dot V = \partial_t V$, where
$$V(x,t) = \sum_{j=1}^{d} h_j(x)\,w_j(t)$$
with known coefficients $h_j$, and $w(t) = (w_1,\dots,w_d)(t)$ is the standard Brownian motion, or Wiener process, in $\mathbb{R}^d$. The formal derivative $\dot w(t)$ of the Brownian motion $w(t)$ is known as a white noise. Due to its important applications in systems science, this equation has attracted the attention of many workers in engineering and mathematics.

(Example 2) Turbulent Transport

In a turbulent flow, let $u(x,t)$ denote the concentration of a passive substance, such as smoke particles or pollutants, which undergoes molecular diffusion and turbulent transport. Let $\kappa$ be the diffusion coefficient and let $v(x,t,\omega) = (v_1, v_2, v_3)(x,t,\omega)$ be the turbulent velocity field. The turbulent mixing of the passive substance contained in a domain $D \subset \mathbb{R}^3$ is governed by the following initial-boundary value problem:

$$
\begin{aligned}
\frac{\partial u}{\partial t} &= \kappa\,\Delta u - \sum_{k=1}^{3} v_k\,\frac{\partial u}{\partial x_k} + q(x,t),\\
\frac{\partial u}{\partial n}\Big|_{\partial D} &= 0, \qquad u(x,0) = u_0(x),
\end{aligned} \tag{1.2}
$$

where $\Delta$ is the Laplacian operator, $u_0(x)$ and $q(x,t)$ are the initial density and the source distribution, respectively, and $\partial/\partial n$ denotes the normal derivative at the boundary $\partial D$. When the random velocity field $v(x,t)$ fluctuates rapidly, it may be approximated by a Gaussian white noise $\dot W(x,t)$ with a spatial parameter, to be introduced later. Then the concentration $u(x,t)$ satisfies the parabolic Itô equation (1.2).

(Example 3) Random Schrödinger Equation

As is well known, the Schrödinger equation arises in quantum mechanics.



In the case of a random potential, it takes the form


$$
i\,\frac{\partial u}{\partial t} = \Delta u + V(x,t)\,u, \quad x\in\mathbb{R}^d,\ t\in(0,T), \qquad u(x,0) = u_0(x), \tag{1.3}
$$

where $i = \sqrt{-1}$, $u_0$ is the initial state, and $V$ is a random potential. For $d = 2$, equation (1.3) arises from a time-harmonic random wave propagation problem under a forward-scattering approximation (see Section 8.3), where $t$ is the third space variable, in the direction of wave propagation. At high frequencies, $V(x,t)$ is taken to be a spatially dependent Gaussian white noise.

(Example 4) Stochastic Sine-Gordon Equation

The Sine-Gordon equation is used to describe the dynamics of coupled Josephson junctions driven by a fluctuating current source. As a continuous model, the time rate of change of the voltage $u(x,t)$ at $x$ satisfies the nonlinear stochastic wave equation

$$
\begin{aligned}
\frac{\partial^2 u}{\partial t^2} &= \alpha\,\Delta u + \beta \sin u + \dot V(x,t), \quad x\in\mathbb{R}^d,\ t>0,\\
u(x,0) &= u_0(x), \qquad \frac{\partial u}{\partial t}(x,0) = u_1(x),
\end{aligned} \tag{1.4}
$$
where $\alpha$, $\beta$ are positive parameters, $u_0$, $u_1$ are the initial states, and the current source $\dot V(x,t)$ is a spatially dependent Gaussian white noise. Equation (1.4) is an example of a hyperbolic Itô equation.

(Example 5) Stochastic Burgers Equation

The Burgers equation is a nonlinear parabolic equation that was introduced as a simplified model for the Navier-Stokes equations in fluid mechanics. In the statistical theory of turbulence, a random-force method was proposed in the hope of finding a physically meaningful stationary distribution of turbulence (see Section 8.6). In this miniature version, one considers the randomly forced Burgers equation in $D \subset \mathbb{R}^3$. The velocity field $u = (u_1, u_2, u_3)$ satisfies
$$
\begin{aligned}
\frac{\partial u}{\partial t} + (u\cdot\nabla)u &= \nu\,\Delta u + \dot V(x,t),\\
u\big|_{\partial D} &= 0, \qquad u(x,0) = u_0(x),
\end{aligned} \tag{1.5}
$$
where
$$(u\cdot\nabla) = \sum_{k=1}^{3} u_k\,\frac{\partial}{\partial x_k},$$

$\nu > 0$ is the diffusion coefficient, $\dot V(x,t)$ is a spatially dependent white noise in $\mathbb{R}^3$, and $u_0(x)$ is the initial state, which may be random. This equation will be considered later in Section 8.2.
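In one space dimension, a crude finite-difference sketch conveys the flavor of the randomly forced Burgers equation $\partial_t u + u\,\partial_x u = \nu\,\partial_x^2 u + \dot V$. This is purely illustrative (the text treats the equation rigorously in Section 8.2); the grid, noise scaling, and parameter values are arbitrary choices.

```python
import numpy as np

def burgers_1d(nu, n_x, T, n_steps, noise_amp, rng):
    """Explicit scheme for u_t + u u_x = nu u_xx + noise on (0, 1),
    with zero boundary conditions and zero initial data."""
    dx = 1.0 / (n_x - 1)
    dt = T / n_steps
    u = np.zeros(n_x)
    for _ in range(n_steps):
        ux = (u[2:] - u[:-2]) / (2 * dx)                  # centered u_x
        uxx = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2      # centered u_xx
        # discretized space-time white noise: increments ~ N(0, dt/dx)
        dW = rng.normal(0.0, np.sqrt(dt / dx), size=n_x - 2)
        u[1:-1] += dt * (-u[1:-1] * ux + nu * uxx) + noise_amp * dW
        u[0] = u[-1] = 0.0                                # Dirichlet boundary
    return u

rng = np.random.default_rng(3)
u = burgers_1d(nu=0.05, n_x=64, T=0.1, n_steps=2000, noise_amp=0.1, rng=rng)
```

The $\sqrt{dt/dx}$ scaling of the noise increments is what makes the forcing approximate a space-time white noise on the grid; the time step is kept small so the explicit diffusion step remains stable.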

(Example 6) Stochastic Population Growth Model

Reaction-diffusion equations are often used as spatial population growth models. In a random environment, the growth rate fluctuates in space and time. For an animal population of size $u(x,t)$ on an island $D \subset \mathbb{R}^2$ with finite resources, intense competition among the animals will limit the population growth. A simple model is given by the stochastic logistic equation in $D$:
$$
\begin{aligned}
\frac{\partial u}{\partial t} &= \kappa\,\Delta u + u\,\big[\alpha\,\dot W(x,t) - f(u)\big],\\
\frac{\partial u}{\partial n}\Big|_{\partial D} &= 0, \qquad u(x,0) = u_0(x),
\end{aligned} \tag{1.6}
$$
where $\kappa$, $\alpha$ are positive constants and $f(u)$ is a power function of $u$. This type of equation will be studied in Section 8.7.

(Example 7) Stochastic PDE in Finance

A stochastic model in bond market theory was proposed by Heath, Jarrow, and Morton [40] in 1992. Let $f(t,\tau)$ denote the forward interest rate, depending on the maturity time $\tau$. The model for the interest rate fluctuation is governed by the so-called HJM equation
$$df(t,\tau) = \alpha(t,\tau)\,dt + (\sigma(t,\tau), dW_t), \tag{1.7}$$
where
$$(\sigma(t,\tau), dW_t) = \sum_{i=1}^{n} \sigma_i(t,\tau)\,dw_i(t),$$
and $W_t = (w_1(t),\dots,w_n(t))$ is the standard Brownian motion in $\mathbb{R}^n$. Assume that $\alpha(t,\tau)$ and $\sigma(t,\tau) = (\sigma_1(t,\tau),\dots,\sigma_n(t,\tau))$, for each $\tau$, are given stochastic processes. The connection of the HJM equation to a stochastic PDE was introduced by Musiela [73] by parameterizing the maturity time $\tau$. Let $\tau = t + \xi$ with parameter $\xi \ge 0$, and define $r(t)(\xi) = f(t, t+\xi)$, $a(t)(\xi) = \alpha(t, t+\xi)$, and $b(t)(\xi) = \sigma(t, t+\xi)$. The function $r$ is called the forward curve. Then $r(t)(\xi)$ satisfies the stochastic transport equation

dr(t)() = [ r(t)() + a(t)()] dt + (b(t)(), dWt ). (1.8)

Now assume that the volatility b depends on the forward curve r in the form
b(t)() = G(t, r(t)()). By substituting this relation in equation (1.8), one
obtains the HJMM equation

dr(t)() = [ r(t)() + F (t, r(t))()] dt + (G(t, r(t))(), dWt ),

where Z
F (t, r(t))() = (G(t, r(t))(), G(t, r(t))( ) d ).
0
This equation will be analyzed in Section 8.8.
Preliminaries 7

1.3 Brownian Motions and Martingales


Let (Ω, F, P) be a probability space, where Ω is a set with elements ω, F denotes the Borel σ-field of subsets of Ω, and P is a probability measure. A measurable function X: Ω → R^d is called a random variable with values in the d-dimensional Euclidean space. Let T = R₊, or an interval in R₊ = [0, ∞). A family of random variables X(t), also written as X_t, t ∈ T, is known as a stochastic process with values in R^d. It is said to be a continuous stochastic process if its sample function X(t, ω), or X_t(ω), is a continuous function of t ∈ T for almost every ω ∈ Ω. Continuity in probability or in the mean can be defined similarly, as in the convergence of random variables. A stochastic process Y_t is said to be a modification, or a version, of X_t if P{ω : X_t(ω) = Y_t(ω)} = 1 for each t ∈ T. To check continuity, the following theorem is well known.

Theorem 3.1 (Kolmogorov's Continuity Criterion) Let X_t, t ∈ T, be a stochastic process in R^d. Suppose that there exist positive constants α, β, C such that

    E|X_t − X_s|^α ≤ C |t − s|^{1+β},                      (1.9)

for any t, s ∈ T. Then X has a continuous version. Moreover, the process X_t is Hölder-continuous with any exponent γ < β/α.

Remark: In fact, the above theorem holds for X_t being a stochastic process in a Banach space B, with the Euclidean norm |·| in (1.9) replaced by the B-norm ‖·‖ (1.4, [55]).

A stochastic process w_t = (w_t^1, ..., w_t^d), t ≥ 0, is called a (standard) Brownian motion, or a Wiener process, in R^d if it is a continuous Gaussian process with independent increments such that w_0 = 0 a.s. (almost surely), the mean E w_t = 0, and the covariances Cov{w_t^i, w_s^j} = δ_ij (t ∧ s) for any s, t > 0, i, j = 1, ..., d. Here δ_ij denotes the Kronecker delta symbol, with δ_ij = 0 for i ≠ j and δ_ii = 1, and (t ∧ s) = min{t, s}.
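As a quick numerical sanity check of the covariance identity above (a sketch, not from the book; the time points and sample size are arbitrary choices), one can simulate the pair (w_s, w_t) for two coordinates of a 2-dimensional Brownian motion:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, t = 500_000, 0.4, 0.8
# w_s in two independent coordinates, then w_t = w_s + an independent increment
ws = rng.normal(size=(n, 2)) * np.sqrt(s)
wt = ws + rng.normal(size=(n, 2)) * np.sqrt(t - s)
cov_same = np.mean(wt[:, 0] * ws[:, 0])   # should approach delta_11 * (t ^ s) = 0.4
cov_cross = np.mean(wt[:, 0] * ws[:, 1])  # should approach delta_12 * (t ^ s) = 0
```

With half a million samples, the two estimates agree with δ_ij (t ∧ s) to about two decimal places.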
Let {F_t : t ∈ T}, or simply {F_t}, be a family of sub-σ-fields of F. It is called a filtration if {F_t} is right-continuous, increasing (F_s ⊂ F_t for s < t), and F_t contains all P-null sets for each t ∈ T. A process X_t, t ∈ T, is said to be F_t-adapted if X_t is F_t-measurable for each t ∈ T. Given a stochastic process X_t, t ∈ T, let F_t be the smallest σ-field generated by X_s, s ≤ t. Then X_t is F_t-adapted.
Given a filtration {F_t}, an F_t-adapted stochastic process X_t is said to be an F_t-martingale (or simply a martingale) if it is integrable, with E|X_t| < ∞, and the following holds:

    E{X_t | F_s} = X_s, a.s. for any t > s.


If X_t is a real-valued process, it is called a submartingale (supermartingale) if it satisfies

    E{X_t | F_s} ≥ (≤) X_s, a.s. for any t > s.

If X_t is an R^d-valued martingale with E|X_t|^p < ∞, t ∈ T, for some p ≥ 1, then it is called an L^p-martingale. It is easy to check that |X_t|^p is then a submartingale. The most well-known example of a continuous martingale is the Brownian motion w_t, t ≥ 0. For T = [0, T] or R₊ = [0, ∞), an extended real-valued random variable τ is called a stopping time if {τ ≤ t} ∈ F_t holds for any t ∈ T. An R^d-valued process X_t is said to be a local martingale if there exists an increasing sequence of stopping times τ_n ↑ ∞ a.s. such that the stopped process X_{t∧τ_n} is an F_t-martingale for each n. Local submartingales and local supermartingales are defined similarly. A continuous F_t-adapted process is said to be a semimartingale if it can be written as the sum of a local martingale and a process of bounded variation. In particular, if b_t is a continuous F_t-adapted process and M_t is a continuous martingale with quadratic variation process Q_t, then X_t defined by

    X_t = ∫₀^t b_s ds + M_t

is a continuous semimartingale. If Q_t has a density q_t such that

    Q_t = [M]_t = ∫₀^t q_s ds,

then the pair (q_t, b_t) is called the local characteristic of the semimartingale X_t. The quadratic variation [M]_t will be defined shortly.
The following Doob submartingale inequalities are well known and will be used often later on.
The following Doobs submartingale inequalities are well known and will
be used often later on.

Theorem 3.2 (Doob's Inequalities) Let ξ_t, t ∈ T, be a positive continuous L^p-submartingale. Then, for any p ≥ 1 and λ > 0,

    λ P{sup_{s≤t} ξ_s > λ} ≤ E{ξ_t : sup_{s≤t} ξ_s ≥ λ},   (1.10)

and, for p > 1, the following holds:

    E{sup_{s≤t} ξ_s^p} ≤ q^p E{ξ_t^p},                     (1.11)

for any t ∈ T, where q = p/(p − 1).
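Applying (1.11) to the submartingale |w_s| with p = 2 gives E sup_{s≤t} |w_s|^2 ≤ 4 E|w_t|^2 = 4t. A Monte Carlo illustration (a sketch, not from the book; path count and step count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, t = 20_000, 200, 1.0
dw = rng.normal(0.0, np.sqrt(t / n_steps), size=(n_paths, n_steps))
w = np.cumsum(dw, axis=1)
# Doob's L^2 inequality (1.11) for |w_s| with p = 2, hence q = 2 and q^p = 4:
lhs = np.mean(np.max(w**2, axis=1))   # E sup_{s<=t} |w_s|^2
rhs = 4.0 * np.mean(w[:, -1]**2)      # q^p * E|w_t|^p, approximately 4t
```

The left side comes out well below the bound, as expected (the inequality is far from tight for Brownian motion).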

Let X_t, t ∈ [0, T], be a continuous real-valued F_t-adapted stochastic process. Let Δ_T = {0 = t_0 < t_1 < ⋯ < t_m = T} be a partition of [0, T] with mesh |Δ_T| = max_{1≤k≤m} (t_k − t_{k−1}). Define ζ_t(Δ_T) as

    ζ_t(Δ_T) = Σ_{k=1}^{m} (X_{t_k∧t} − X_{t_{k−1}∧t})².

If, for any sequence of partitions Δ_T^n, ζ_t(Δ_T^n) converges in probability to a limit [X]_t, or [X_t], as |Δ_T^n| → 0 for t ∈ [0, T], then [X]_t is called the quadratic variation of X_t. For instance, if X_t = w_t is a Brownian motion, then [w]_t = t a.s. If X_t is a process of bounded variation, then [X]_t = 0 a.s. Similarly, let X_t, Y_t, 0 ≤ t ≤ T, be two continuous, real-valued, F_t-adapted processes. Define

    η_t(Δ_T) = Σ_{k=1}^{m} (X_{t_k∧t} − X_{t_{k−1}∧t})(Y_{t_k∧t} − Y_{t_{k−1}∧t}).

Then the mutual variation, or covariation, of X_t and Y_t, denoted by ⟨X, Y⟩_t or ⟨X_t, Y_t⟩, is defined as the limit of η_t(Δ_T^n) in probability as |Δ_T^n| → 0. Alternatively, the mutual variation can be expressed in terms of the quadratic variations as follows:

    ⟨X, Y⟩_t = ¼ {[X + Y]_t − [X − Y]_t}.                  (1.12)
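Both definitions are easy to illustrate numerically (a sketch, assuming numpy; the partition size is arbitrary): the sum of squared Brownian increments over a fine partition of [0, 1] is close to t = 1, and the polarization identity (1.12), applied to the discrete sums, recovers the cross-sum Σ ΔX ΔY exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000                                 # partition points of [0, 1]
dX = rng.normal(0.0, np.sqrt(1.0 / n), n)   # increments of one Brownian motion
dY = rng.normal(0.0, np.sqrt(1.0 / n), n)   # increments of an independent one
qv = np.sum(dX**2)                          # zeta_1 -> [w]_1 = 1 as the mesh -> 0
# polarization identity (1.12) on the discrete sums:
polar = (np.sum((dX + dY)**2) - np.sum((dX - dY)**2)) / 4.0
direct = np.sum(dX * dY)                    # eta_1, tends to <X, Y>_1 = 0 here
```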

1.4 Stochastic Integrals


The stochastic integral was first introduced by K. Itô, based on a standard Brownian motion. It was later generalized to that of a local martingale and a semimartingale. In this section we shall confine our exposition to the special case of a continuous L²-martingale. Let M_t be a continuous, real-valued L²-martingale and let f(t) be a continuous adapted process in R for 0 ≤ t ≤ T. For any partition Δ_T^n = {0 = t_0 < t_1 < ⋯ < t_n = T}, define

    I_t^n = Σ_{k=1}^{n} f_{t_{k−1}∧t} (M_{t_k∧t} − M_{t_{k−1}∧t}).     (1.13)

Then I_t^n is a continuous martingale with the quadratic variation

    [I^n]_t = ∫₀^t |f_s^n|² d[M]_s,

where f_t^n = f_{t_{k−1}} for t_{k−1} ≤ t < t_k.
Suppose that

    ∫₀^T |f_t|² d[M]_t < ∞, a.s.                           (1.14)

Then the sequence I_t^n will converge uniformly in probability, as |Δ_T^n| → 0, to a limit

    I_t = ∫₀^t f_s dM_s,                                   (1.15)

which is independent of the choice of the partition. The limit I_t is called the Itô integral of f_t with respect to the martingale M_t. Instead of I_t^n given by (1.13), define

    J_t^n = Σ_{k=1}^{n} ½ [f_{t_{k−1}∧t} + f_{t_k∧t}] (M_{t_k∧t} − M_{t_{k−1}∧t}).     (1.16)

The corresponding limit J_t of J_t^n, written as

    J_t = ∫₀^t f_s ∘ dM_s,                                 (1.17)

is known as the Stratonovich integral of f_t with respect to M_t. Similar to the Itô integral, the Stratonovich integral (1.17) is a generalization from the case when M_t = w_t is a Brownian motion.
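The difference between the two integrals is already visible for f_t = w_t and M_t = w_t: the left-point sums (1.13) converge to (w_T² − T)/2, while the midpoint-averaged sums (1.16) telescope to w_T²/2, the two limits differing by [w]_T/2 = T/2. A numerical sketch (not from the book):

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 200_000, 1.0
dw = rng.normal(0.0, np.sqrt(T / n), n)
w = np.concatenate([[0.0], np.cumsum(dw)])
ito = np.sum(w[:-1] * dw)                     # left-point sums (1.13) with f = w
strat = np.sum(0.5 * (w[:-1] + w[1:]) * dw)   # midpoint-averaged sums (1.16)
# limits: ito ~ (w_T^2 - T)/2, strat ~ w_T^2 / 2
```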

Remarks: For simplicity, the integrand f_t was assumed to be a continuous adapted process. In fact, it is known that the stochastic integrals introduced above can be defined for a more general class of integrands, known as predictable processes. Technically, the predictable σ-field P is the smallest σ-field on the product set [0, T] × Ω generated by subsets of the form (s, t] × B, with (s, t] ⊂ [0, T], B ∈ F_s, and {0} × B_0, with B_0 ∈ F_0. A stochastic process f_t is said to be predictable if the function (t, ω) → f_t(ω) is P-measurable on [0, T] × Ω. In particular, a left-continuous adapted process is predictable.

Theorem 4.1 Let M_t and f_t be given as above. Then the Itô integral I_t = ∫₀^t f_s dM_s is a continuous local martingale satisfying E I_t = 0 and

    [I]_t = ∫₀^t |f_s|² d[M]_s, a.s.                       (1.18)

Moreover, the Stratonovich integral is related to the Itô integral as follows:

    ∫₀^t f_s ∘ dM_s = ∫₀^t f_s dM_s + ½ ⟨f, M⟩_t.          (1.19)

Now let Z_t = (Z_t^1, ..., Z_t^d) be a continuous martingale and let b_t = (b_t^1, ..., b_t^d) be an adapted integrable process over [0, T] in R^d. Let X_t = (X_t^1, ..., X_t^d) be a continuous semimartingale defined by

    X_t^i = ∫₀^t b_s^i ds + Z_t^i,  i = 1, ..., d.         (1.20)

In what follows we quote the famous Itô formula.

Theorem 4.2 (Itô's Formula) Let X_t be a continuous semimartingale given by (1.20). Suppose that φ: R^d × [0, T] → R is a continuous function such that φ(x, t) is continuously differentiable twice in x and once in t. Then the following formula holds:

    φ(X_t, t) = φ(X_0, 0) + ∫₀^t ∂φ/∂s (X_s, s) ds
        + Σ_{i=1}^{d} ∫₀^t ∂φ/∂x_i (X_s, s) dX_s^i        (1.21)
        + ½ Σ_{i,j=1}^{d} ∫₀^t ∂²φ/∂x_i∂x_j (X_s, s) d⟨X^i, X^j⟩_s.

If, in addition, φ(x, t) is three-times differentiable in x, then the above formula can be written simply, in the Stratonovich form, as

    φ(X_t, t) = φ(X_0, 0) + ∫₀^t ∂φ/∂s (X_s, s) ds
        + Σ_{i=1}^{d} ∫₀^t ∂φ/∂x_i (X_s, s) ∘ dX_s^i.     (1.22)

In particular, let Z_t be an R^d-valued Itô integral with respect to the standard Brownian motion w_t in R^m, defined by

    Z_t^i = Σ_{j=1}^{m} ∫₀^t σ_ij(s) dw_s^j,  i = 1, ..., d,       (1.23)

where the σ_ij(t), i = 1, ..., d, j = 1, ..., m, are predictable processes such that

    ∫₀^t σ_ij²(s) ds < ∞, a.s. for i = 1, ..., d, j = 1, ..., m.

Then the corresponding Itô integrals exist and are local martingales, and equation (1.21) yields the conventional Itô formula

    φ(X_t, t) = φ(X_0, 0) + ∫₀^t ∂φ/∂s (X_s, s) ds
        + Σ_{i=1}^{d} ∫₀^t ∂φ/∂x_i (X_s, s) b_s^i ds
        + Σ_{i=1}^{d} Σ_{j=1}^{m} ∫₀^t ∂φ/∂x_i (X_s, s) σ_ij(s) dw_s^j      (1.24)
        + ½ Σ_{i,j=1}^{d} Σ_{k=1}^{m} ∫₀^t σ_ik(s) σ_jk(s) ∂²φ/∂x_i∂x_j (X_s, s) ds.

For x, y ∈ R^d, let (x, y) denote the inner product of x and y, and let Dφ(x) be the gradient vector of φ. In matrix notation, a vector will be regarded as a column matrix, and an m × n matrix A with entries a_ij will be written as [a_ij]_{m×n}, or simply [a_ij]. Denote by D²φ(x) = [∂²φ(x)/∂x_i∂x_j]_{d×d} the Hessian matrix of φ and by [Z]_t = [⟨Z^i, Z^j⟩_t]_{d×d} the quadratic variation matrix of Z. Then the Itô formulas (1.21) and (1.22) can be written, respectively, as

    φ(X_t, t) = φ(X_0, 0) + ∫₀^t ∂φ/∂s (X_s, s) ds
        + ∫₀^t (Dφ(X_s, s), b_s) ds + ∫₀^t (Dφ(X_s, s), dZ_s)      (1.25)
        + ½ ∫₀^t Tr{D²φ(X_s, s) d[Z]_s},

    φ(X_t, t) = φ(X_0, 0) + ∫₀^t ∂φ/∂s (X_s, s) ds
        + ∫₀^t (Dφ(X_s, s), b_s) ds + ∫₀^t (Dφ(X_s, s), ∘dZ_s),    (1.26)

where, for a matrix A = [a_ij]_{d×d}, Tr A = Σ_{i=1}^{d} a_ii denotes the trace of A. Similarly, let σ = [σ_ij]_{d×m} be a diffusion matrix with its transpose denoted by σ*. One can also rewrite (1.24) as

    φ(X_t, t) = φ(X_0, 0) + ∫₀^t ∂φ/∂s (X_s, s) ds
        + ∫₀^t (Dφ(X_s, s), b_s) ds + ∫₀^t (Dφ(X_s, s), σ(s) dw_s)  (1.27)
        + ½ ∫₀^t Tr{D²φ(X_s, s) σ(s) σ*(s)} ds.
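As a pathwise numerical check of (1.27) in one dimension (a sketch, not from the book), take X = w (so b = 0, σ = 1) and φ(x) = eˣ; then (1.27) reads e^{w_t} = 1 + ∫₀^t e^{w_s} dw_s + ½ ∫₀^t e^{w_s} ds, and discretized sums reproduce it up to discretization error:

```python
import numpy as np

rng = np.random.default_rng(4)
n, T = 100_000, 1.0
dt = T / n
dw = rng.normal(0.0, np.sqrt(dt), n)
w = np.concatenate([[0.0], np.cumsum(dw)])
# right-hand side of (1.27) with phi(x) = exp(x), X = w, b = 0, sigma = 1
stoch = np.sum(np.exp(w[:-1]) * dw)        # Ito integral term (left-point sums)
trace = 0.5 * np.sum(np.exp(w[:-1])) * dt  # (1/2) Tr{D^2 phi sigma sigma*} term
lhs, rhs = np.exp(w[-1]), 1.0 + stoch + trace
```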
By means of Itô's formula and Doob's inequality, it can be shown that the following well-known Burkholder–Davis–Gundy (B-D-G) inequality holds.

Theorem 4.3 (B-D-G Inequality) Let M_t, t ∈ [0, T], be any continuous real-valued martingale with M_0 = 0 and E|M_T|^p < ∞. Then, for any p > 0, there exist two positive constants c_p and C_p such that

    c_p E [M]_T^{p/2} ≤ E{sup_{t≤T} |M_t|^p} ≤ C_p E [M]_T^{p/2}.      (1.28)

As a consequence, if Z_t is the Itô integral given by (1.23), then we have the following:

Corollary 4.4 For any p > 0, there exists a constant K_p > 0 such that

    E{sup_{t≤T} |∫₀^t σ(s) dw_s|^p} ≤ K_p E{∫₀^T Tr[σ(s) σ*(s)] ds}^{p/2},      (1.29)

provided that E{∫₀^T Tr[σ(s) σ*(s)] ds}^{p/2} < ∞.
1.5 Stochastic Differential Equations of Itô Type

Let b: R^d × [0, T] → R^d and σ: R^d × [0, T] → R^{d×m} be vector- and matrix-valued functions, respectively. Consider the Itô differential equation in R^d:

    dx(t) = b(x(t), t) dt + σ(x(t), t) dw_t,
    x(0) = ξ,                                              (1.30)

where w_t is a Brownian motion in R^m and ξ is an F_0-measurable R^d-valued random variable. By convention, this differential equation is interpreted as the stochastic integral equation

    x(t) = ξ + ∫₀^t b(x(s), s) ds + ∫₀^t σ(x(s), s) dw_s,  0 ≤ t ≤ T.      (1.31)

Under the usual Lipschitz continuity and linear growth conditions, the following existence theorem holds.

Theorem 5.1 (Existence and Uniqueness) Let the coefficients b(x, t) and σ(x, t) of the equation (1.31) be measurable functions satisfying the following conditions:

(1) For any x ∈ R^d and t ∈ [0, T], there exists a constant C > 0 such that

    |b(x, t)|² + Tr[σ(x, t) σ*(x, t)] ≤ C (1 + |x|²).

(2) For any x, y ∈ R^d and t ∈ [0, T],

    |b(x, t) − b(y, t)|² + Tr{[σ(x, t) − σ(y, t)][σ(x, t) − σ(y, t)]*} ≤ K |x − y|²,

for some K > 0.

Then, given ξ ∈ L²(Ω; R^d), the equation has a unique solution x(t), which is a continuous F_t-adapted process in R^d with E sup_{t≤T} |x(t)|² < ∞.
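Under conditions (1) and (2), the solution of (1.31) can be approximated by the Euler–Maruyama scheme x_{k+1} = x_k + b(x_k, t_k)Δt + σ(x_k, t_k)Δw_k. A minimal sketch (not from the book; the Ornstein–Uhlenbeck coefficients below are an arbitrary Lipschitz example):

```python
import numpy as np

def euler_maruyama(b, sigma, x0, T, n_steps, n_paths, rng):
    """Euler-Maruyama discretization of dx = b(x,t) dt + sigma(x,t) dw_t, eq. (1.30)."""
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x + b(x, k * dt) * dt + sigma(x, k * dt) * dw
    return x

rng = np.random.default_rng(5)
# Ornstein-Uhlenbeck example: b(x, t) = -x, sigma = 0.5; then E x(1) = x0 * e^{-1}
xT = euler_maruyama(lambda x, t: -x, lambda x, t: 0.5, 1.0, 1.0, 400, 50_000, rng)
```

With 50,000 paths, the sample mean of x(1) matches x_0 e^{−1} and the sample variance matches the stationary formula σ²(1 − e^{−2t})/2 to Monte Carlo accuracy.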

Remarks:

(1) Suppose that the above conditions are satisfied only locally. That is, for any N > 0, there exist constants C and K depending on N such that conditions (1) and (2) are satisfied for |x| ≤ N and |y| ≤ N. Then it can be shown, by considering a truncated equation with coefficients b_N and σ_N, that the equation (1.31) has a unique local solution x(t). That is, there exists an increasing sequence {τ_n} of stopping times converging to ∞ a.s. such that x(t) satisfies the equation (1.31) for t < τ_n, for each n.
(2) Instead of the Itô equation, let us consider the Stratonovich equation

    x(t) = ξ + ∫₀^t b(x(s), s) ds + ∫₀^t σ(x(s), s) ∘ dw_s,  0 ≤ t ≤ T.    (1.32)

The above can be rewritten as the Itô equation

    x(t) = ξ + ∫₀^t b̃(x(s), s) ds + ∫₀^t σ(x(s), s) dw_s,                  (1.33)

where b̃(x, s) = b(x, s) + d(x, s), with

    d_i(x, s) = ½ Σ_{j=1}^{d} Σ_{k=1}^{m} [∂/∂x_j σ_ik(x, s)] σ_jk(x, s),   (1.34)

for i = 1, ..., d. If conditions (1), (2) in Theorem 5.1 are satisfied with b replaced by b̃, equation (1.32) has a unique solution.
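For example, for the scalar Stratonovich equation dx = σx ∘ dw_t (b = 0, d = m = 1), formula (1.34) gives d(x) = ½σ²x, and the corrected Itô equation dx = ½σ²x dt + σx dw_t indeed has the Stratonovich solution x(t) = x_0 e^{σ w_t}. A numerical sketch (not from the book; parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
sig, x0, T, n = 0.5, 1.0, 1.0, 200_000
dt = T / n
dw = rng.normal(0.0, np.sqrt(dt), n)
x = x0
for k in range(n):
    # Ito form (1.33): corrected drift b~ = 0 + d(x) = (1/2) sig^2 x, by (1.34)
    x = x + 0.5 * sig**2 * x * dt + sig * x * dw[k]
exact = x0 * np.exp(sig * dw.sum())   # pathwise Stratonovich solution x0 * e^{sig w_T}
```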

(3) The Itô equation (1.31) can be generalized to the semimartingale case in a straightforward manner. If w_t is replaced by a continuous martingale M_t, 0 ≤ t ≤ T, in R^m, it yields

    x(t) = ξ + ∫₀^t b(x(s), s) ds + ∫₀^t σ(x(s), s) dM_s,  0 ≤ t ≤ T.      (1.35)

It is possible to give sufficient conditions on the coefficients of the equation and the quadratic variation matrix [M]_t = [⟨M^i, M^j⟩_t] such that there exists a unique solution. Suppose that there exists a predictable matrix-valued process q_t satisfying

    [M]_t = ∫₀^t q_s ds,

and that conditions (1), (2) in Theorem 5.1 are replaced by the following:

(1a) For any x ∈ R^d and t ∈ [0, T], there exists a constant C > 0 such that

    |b(x, t)|² + Tr[σ(x, t) q_t σ*(x, t)] ≤ C (1 + |x|²).

(2a) For any x, y ∈ R^d and t ∈ [0, T],

    |b(x, t) − b(y, t)|² + Tr{[σ(x, t) − σ(y, t)] q_t [σ(x, t) − σ(y, t)]*} ≤ K |x − y|²,

for some K > 0.

Then equation (1.35) has a unique solution.


1.6 Lévy Processes and Stochastic Integrals

Let X_t, t ≥ 0, be a stochastic process in R^m defined on a probability space (Ω, F, P) equipped with a filtration {F_t}. Then X(t) is said to be a Lévy process if it is stochastically continuous and has independent and stationary increments, with X(0) = 0 a.s. Clearly, the Brownian motion w_t is a Lévy process. Another important example is given by the compound Poisson process Y_t, defined by

    Y_t = Σ_{i=1}^{π(t)} ξ_i,                              (1.36)

where the ξ_i are i.i.d. random variables taking values in R^m with a common distribution, and π(t) is a Poisson process with intensity parameter λ. In fact, the aforementioned processes are the two key building blocks for a Lévy process, due to the Lévy–Itô decomposition theorem (p. 126, [2]).
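A compound Poisson path is straightforward to simulate, and the moment identities E Y_t = λt E ξ_1 and Var Y_t = λt E ξ_1² are easy to confirm. A sketch (not from the book; the rate, horizon, and jump law are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
lam, t, n_paths = 3.0, 2.0, 100_000
counts = rng.poisson(lam * t, size=n_paths)   # pi(t), the Poisson count at time t
# Y_t = sum of pi(t) i.i.d. jumps; here xi_i ~ N(1, 1), so E xi = 1 and E xi^2 = 2
Y = np.array([rng.normal(1.0, 1.0, k).sum() for k in counts])
mean_Y, var_Y = Y.mean(), Y.var()   # ~ lam*t*E[xi] = 6 and ~ lam*t*E[xi^2] = 12
```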
Given a compound Poisson process Y_t, define the Poisson point process

    ξ_t = Y_t − Y_{t−},                                    (1.37)

where Y_{t−} is the left limit of Y at t > 0. Introduce the Poisson random measure N(t, ·) associated with Y_t, defined by

    N(t, B) = #{0 ≤ s ≤ t : ξ_s ∈ B} = Σ_{0≤s≤t} I_B(ξ_s),         (1.38)

where B is a Borel set in R_0^m = R^m \ {0} and I_B(·) denotes the indicator function of the set B. In what follows, we shall denote the Borel σ-field of R_0^m by B(R_0^m). For t > 0 and ω ∈ Ω, N(t, B, ω) counts the number of times the sample path Y·(ω) jumps into the set B over the time interval [0, t]. For a given set B, it is known that N(t, B) is a Poisson process with intensity parameter μ(B), so that E N(t, B) = Var{N(t, B)} = t μ(B). Henceforth, we assume that the intensity measure μ is a finite measure satisfying the conditions μ({0}) = 0 and ∫_B |ξ|² μ(dξ) < ∞. Define the compensated Poisson random measure by

    Ñ(t, B) = N(t, B) − t μ(B),

which has mean E{Ñ(t, B)} = 0 and variance E{Ñ(t, B)}² = t μ(B). Moreover, Ñ(t, ·) is a measure-valued martingale. It also enjoys the favorable property that, for each t > 0 and any collection of disjoint Borel sets {B_1, B_2, ..., B_n} in R_0^m, the random variables N(t, B_1), N(t, B_2), ..., N(t, B_n) are independent. In what follows, we need to consider Poisson integrals over t. It is more convenient to define N([s, t], B) = N(t, B) − N(s, B) for s ≤ t, which can be regarded as a Poisson random measure over the σ-field of [0, ∞) × R^m.

Let f: R^m → R^d be a Borel function. For B ∈ B(R_0^m), define the Poisson integral of f as follows:

    ∫_B f(ξ) N(t, dξ, ω) = Σ_{ξ∈B} f(ξ) N(t, {ξ}, ω)
                         = Σ_{0≤s≤t} f(ξ_s(ω)) I_B(ξ_s(ω)),        (1.39)

which is a finite sum over the points jumping into the set B. By taking the expectation, we get

    E{∫_B f(ξ) N(t, dξ)} = t ∫_B f(ξ) μ(dξ).

Based on (1.39), we can define

    ∫_B f(ξ) Ñ(t, dξ) = ∫_B f(ξ) N(t, dξ) − t ∫_B f(ξ) μ(dξ),      (1.40)

so that

    E{∫_B f(ξ) Ñ(t, dξ)} = 0

and

    E|∫_B f(ξ) Ñ(t, dξ)|² = t ∫_B |f(ξ)|² μ(dξ).
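The two moment identities above can be confirmed by simulation (a sketch, not from the book). Take f(ξ) = ξ and jumps with an Exp(1) distribution arriving at rate λ, so μ(dξ) = λ e^{−ξ} dξ on B = (0, ∞), t ∫ f dμ = tλ, and t ∫ |f|² dμ = 2tλ:

```python
import numpy as np

rng = np.random.default_rng(8)
lam, t, n_paths = 2.0, 1.0, 100_000
counts = rng.poisson(lam * t, size=n_paths)
# Poisson integral of f(xi) = xi over B = (0, oo): just the sum of the jumps
pois = np.array([rng.exponential(1.0, k).sum() for k in counts])
comp = pois - t * lam * 1.0       # compensated integral (1.40); E[xi] = 1
m, v = comp.mean(), comp.var()    # ~ 0 and ~ t * lam * E[xi^2] = 2 * 2 = 4
```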

Let R₊ = [0, ∞) and denote by D(R₊, R^d) the class of R^d-valued functions on R₊ that are right-continuous with left limits. As stochastic processes, both of the Poisson integrals defined above are jump processes that can be so defined that their sample paths are in D(R₊, R^d).
Now suppose that g: [0, T] × R^m × Ω → R^d is an F_t-adapted random field and, for each B and ξ ∈ B, that g(t, ξ, ·) is left-continuous in t. To put it simply, we shall say that such a random field g(t, ξ, ω) is a left-continuous predictable function with values in R^d. By generalizing the integral (1.39), the Poisson stochastic integral of g is defined by

    ∫₀^t ∫_B g(s, ξ) N(ds, dξ) = Σ_{0≤s≤t} g(s, ξ_s) I_B(ξ_s),     (1.41)

which is a random finite sum.
Suppose that ∫₀^T ∫_B E|g(s, ξ)|² μ(dξ) ds < ∞. We can then define the compensated Poisson stochastic integral as follows:

    ∫₀^t ∫_B g(s, ξ) Ñ(ds, dξ) = ∫₀^t ∫_B g(s, ξ) N(ds, dξ)
                                 − ∫₀^t ∫_B g(s, ξ) μ(dξ) ds,       (1.42)

which has mean E{∫₀^T ∫_B g(s, ξ) Ñ(ds, dξ)} = 0.
0 B
Similar to the Itô isometry used in defining a Brownian stochastic integral, regarding Ñ(t, ·) as a measure-valued martingale, the Poisson stochastic integral can also be defined as an L²(P)-limit as follows. Let 0 = t_0 < t_1 < t_2 < ⋯ < t_n = T, and let A_1, A_2, ..., A_m be disjoint subsets of B such that ∪_{j=1}^{m} A_j = B. Define

    J_T^{m,n}(g) = Σ_{i,j=1}^{m,n} g(t_{i−1}, ξ_j) Ñ((t_{i−1}, t_i], A_j),

with each ξ_j ∈ A_j. Then, by direct computation, we can show that E J_T^{m,n}(g) = 0 and

    E|J_T^{m,n}(g)|² = Σ_{i,j=1}^{m,n} E|g(t_{i−1}, ξ_j)|² (t_i − t_{i−1}) μ(A_j).      (1.43)

Let δ_n = sup_{1≤i≤n} (t_i − t_{i−1}) and d_m = sup_{1≤j≤m} |A_j|, where |A_j| denotes the diameter of A_j. Then, as n, m → ∞ with δ_n, d_m → 0, the sum on the right-hand side of (1.43) converges to the integral

    ∫₀^T ∫_B E|g(s, ξ)|² μ(dξ) ds.

Correspondingly, the sequence {J_T^{m,n}(g)} will converge to a limit J_T(g) in L²(Ω), which defines the stochastic integral

    J_T(g) = ∫₀^T ∫_B g(s, ξ) Ñ(ds, dξ).

Theorem 6.1 Let g(t, ξ, ω) be a left-continuous predictable function with values in R^d such that ∫₀^T ∫_B E|g(s, ξ)|² μ(dξ) ds < ∞. Then the compensated Poisson stochastic integral

    Z_t = ∫₀^t ∫_B g(s, ξ) Ñ(ds, dξ),

as defined by (1.42), is a martingale with mean E Z_t = 0. Moreover, it satisfies the following properties:

    E{|∫₀^t ∫_B g(s, ξ) Ñ(ds, dξ)|²} = ∫₀^t ∫_B E|g(s, ξ)|² μ(dξ) ds,       (1.44)

    E sup_{0≤s≤t} {|∫₀^s ∫_B g(r, ξ) Ñ(dr, dξ)|²}
        ≤ 4 ∫₀^t ∫_B E|g(s, ξ)|² μ(dξ) ds.                                  (1.45)

By combining the Itô and Poisson stochastic integrals defined in Theorems 4.1 and 6.1, respectively, we obtain a Lévy process of the form

    X_t = X_0 + ∫₀^t b(s) ds + ∫₀^t σ(s) dw_s + ∫₀^t ∫_B g(s, ξ) Ñ(ds, dξ),     (1.46)

where we set b_s = b(s) and g_s = g(s), and w_t and N(t, ·) are independent.

Theorem 6.2 (Itô's Formula) Suppose that the conditions in Theorem 4.1 and Theorem 6.1 are satisfied. Then the Lévy process X_t given by (1.46) is a local semimartingale. If φ: R^d → R is a twice continuously differentiable function, then the following formula holds:

    φ(X_t) = φ(X_0) + ∫₀^t (Dφ(X_s), b(s)) ds
        + ∫₀^t (Dφ(X_s), σ(s) dw_s)
        + ½ ∫₀^t Tr{D²φ(X_s) σ(s) σ*(s)} ds
        + ∫₀^t ∫_B {φ[X_{s−} + g(s, ξ)] − φ(X_{s−})} Ñ(ds, dξ)              (1.47)
        + ∫₀^t ∫_B {φ[X_{s−} + g(s, ξ)] − φ(X_{s−})
                    − (Dφ(X_{s−}), g(s, ξ))} ds μ(dξ),

where X_{s−} denotes the left limit of X at s.

The proofs of the above theorems can be found in Chapter 5 of the book by Applebaum [2]. For brevity, they are omitted here.

1.7 Stochastic Differential Equations of Lévy Type

By generalizing the Itô differential equation (1.30), consider a Lévy-type stochastic differential equation in R^d:

    dy(t) = b(y(t), t) dt + σ(y(t), t) dw_t
            + ∫_B f(y(t), t, ξ) Ñ(dt, dξ),                 (1.48)
    y(0) = y_0,

where f: R^d × [0, T] × R^m → R^d and y_0 is an F_0-measurable R^d-valued random variable. To be precise, the equation (1.48) is the differential form of

the following stochastic integral equation:

    y(t) = y_0 + ∫₀^t b(y(s), s) ds + ∫₀^t σ(y(s), s) dw_s
           + ∫₀^t ∫_B f(y(s), s, ξ) Ñ(ds, dξ),             (1.49)

where, to be specific, the set B may be taken to be {ξ ∈ R^m : 0 < |ξ| < c} for some c > 0. Similar to Theorem 5.1 for the Itô equation, we have the following existence and uniqueness theorem.

Theorem 7.1 (Existence and Uniqueness) Let the coefficients b(x, t), σ(x, t), and f(x, t, ξ) of the equation (1.49) be continuous functions satisfying the following conditions:

(1) For any x ∈ R^d and t ∈ [0, T], there exists a constant C > 0 such that

    |b(x, t)|² + Tr[σ(x, t) σ*(x, t)] + ∫_B |f(x, t, ξ)|² μ(dξ) ≤ C (1 + |x|²).

(2) For any x, y ∈ R^d and t ∈ [0, T],

    |b(x, t) − b(y, t)|² + Tr{[σ(x, t) − σ(y, t)][σ(x, t) − σ(y, t)]*}
        + ∫_B |f(x, t, ξ) − f(y, t, ξ)|² μ(dξ) ≤ K |x − y|²,

for some K > 0.

Then, given y_0 ∈ L²(Ω; R^d), the equation has a unique solution y(t), which is an F_t-adapted process in D([0, T], R^d) with E sup_{t≤T} |y(t)|² ≤ λ E|y_0|², for some constant λ > 0.

Remarks:
(1) The proof is similar to that of an Itô equation and is given in the aforementioned book (Theorem 6.3.1, [2]).

(2) The equation (1.49) can be generalized to the following:

    y(t) = y_0 + ∫₀^t b(y(s), s) ds + ∫₀^t σ(y(s), s) dw_s
           + ∫₀^t ∫_{|ξ|<c} f(y(s), s, ξ) Ñ(ds, dξ)        (1.50)
           + ∫₀^t ∫_{|ξ|≥c} g(y(s), s, ξ) N(ds, dξ),

where g(x, t, ξ) is defined similarly to f. However, by making use of equation (1.42), one can rewrite the above equation in the form of equation (1.49).
(3) Obviously, the stochastic integral ∫₀^t σ(y(s), s) dw_s in (1.49) can be extended to an integral ∫₀^t σ(y(s), s) dM_s with respect to a continuous martingale.

1.8 Comments

The material presented in this chapter is intended to motivate the study of stochastic partial differential equations by examples and to give a brief review of some basic definitions and the theory of stochastic differential equations in finite dimensions. There exists a long list of books and articles on the subject, some of which were cited in the introduction. For Brownian motions and martingales, the reader is referred to the books [73, 36, 81]. For a more in-depth study of stochastic integrals and stochastic differential equations, one can consult the additional books [18, 44] and [86], among many others. Concerning Lévy processes and the related stochastic differential equations, the books [2] and [83] abound with useful information about these subjects.
As a passage to the main topics of stochastic partial differential equations, in the next chapter we shall first consider a relatively simple class of equations: first-order scalar partial differential equations with finite-dimensional white-noise coefficients. A general theory of this type of equation was first introduced by Kunita [54] and was generalized to the case of martingale white noises with spatial parameters [55].
2
Scalar Equations of First Order

2.1 Introduction

Partial differential equations of the first order involving one single unknown function are regarded as the simplest ones to study. As is well known, in the deterministic case, integration of such an equation can be reduced to solving a family of ordinary differential equations. This approach is known as the method of characteristics [19], [28]. In applications, these equations arise in continuum mechanics to describe a certain type of conservation law. For instance, considering the dispersion of smoke particles in an incompressible fluid, the conservation of mass in the Euclidean space R^3 leads to the first-order equation [60]

    ∂u/∂t (x, t) + Σ_{j=1}^{3} v_j(x, t) ∂u/∂x_j (x, t) = 0,  u(x, 0) = u_0(x),      (2.1)

where u(x, t) is the particle density at time t and position x = (x1 , x2 , x3 ),


v = (v1 , v2 , v3 ) is the flow velocity, and u0 is the initial density distribution.
Given the velocity field v, the Cauchy problem for the linear equation (2.1) can be easily solved by introducing the characteristic equation

    dx(t)/dt = v(x(t), t),  x(0) = x.                      (2.2)

Let φ(x, t) denote the solution of equation (2.2). Then, along the solution curve, equation (2.1) yields du(φ(t), t)/dt = 0. Therefore, for a smooth initial density u_0, the solution of equation (2.1) can be written simply as

    u(x, t) = u_0[φ⁻¹(x, t)],                              (2.3)

where φ⁻¹(·, t) denotes the inverse map of φ(·, t) on R^3, provided that it exists.
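For a constant velocity field, the characteristics are straight lines φ(x, t) = x + vt, so (2.3) gives u(x, t) = u_0(x − vt), and the PDE residual can be checked by finite differences. A minimal sketch (not from the book; the velocity and initial density are arbitrary):

```python
import numpy as np

# Transport equation (2.1) with constant velocity v: u(x, t) = u0(x - v t),
# since the characteristic map phi(x, t) = x + v t has inverse x - v t.
v = np.array([1.0, -0.5, 0.25])
u0 = lambda x: np.exp(-np.sum(x**2, axis=-1))   # smooth initial density on R^3

def u(x, t):
    return u0(x - v * t)

# check du/dt + sum_j v_j du/dx_j = 0 at a sample point by central differences
x, t, h = np.array([0.3, -0.2, 0.1]), 0.7, 1e-5
u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
grad = np.array([(u(x + h * e, t) - u(x - h * e, t)) / (2 * h) for e in np.eye(3)])
residual = u_t + v @ grad
```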


Now, in a turbulent flow, the fluid velocity is a random field. In the presence of rapid fluctuations, it is plausible to model the turbulent velocity as the sum of a random field and a spatially dependent white noise,

    v_i(x, t) = b_i(x, t) + Σ_{j=1}^{k} σ_ij(x, t) ẇ_t^j,

or, in the differential form,

    v_i(x, t) dt = b_i(x, t) dt + Σ_{j=1}^{k} σ_ij(x, t) ∘ dw_t^j,  i = 1, 2, 3,     (2.4)

where w_t = (w_t^1, ..., w_t^k) is a standard Wiener process in R^k, and b_i(x, t) and σ_ij(x, t) are random fields adapted to the σ-field generated by the Wiener process. As a matter of convenience, throughout this chapter the Stratonovich differential ∘dw_t^j is often used instead of the Itô differential dw_t^j. In view of Theorem 1-3.1,¹ under some smoothness condition, they differ only by a correction term [see equation (1.34)]. In view of equation (2.4), the corresponding equation (2.1) can be interpreted as the integral equation

    u(x, t) = u_0(x) − Σ_{i=1}^{3} ∫₀^t b_i(x, s) ∂u/∂x_i (x, s) ds
              − ∫₀^t Σ_{i=1}^{3} Σ_{j=1}^{k} ∂u/∂x_i (x, s) σ_ij(x, s) ∘ dw_s^j.     (2.5)

Similar to (2.2), introduce the stochastic characteristic equation

    x(t) = x + ∫₀^t b(x(s), s) ds + ∫₀^t σ(x(s), s) ∘ dw_s,        (2.6)

where b = (b_1, b_2, b_3) and σ ∘ dw_s = Σ_{j=1}^{k} σ_j ∘ dw_s^j, with σ_j the j-th column of σ. Then, formally, by applying the Itô formula, we have du[x(t), t] = 0, so that the corresponding solution u(x, t) of the Cauchy problem (2.1) is given by

    u(x, t) = u_0[φ⁻¹(x, t)],                              (2.7)

where the solution φ(x, t) of equation (2.6) is assumed to have an inverse φ⁻¹(x, t) a.s. With this simple example in mind, we shall extend this idea to
treat a more general set of linear and nonlinear first-order equations. In this chapter we shall deal with linear and quasilinear first-order equations whose coefficients are finite-dimensional, spatially dependent white noises. As will be seen, the solutions are no longer finite-dimensional stochastic processes. They need to be described by a certain type of random field, known as a semimartingale with a spatial parameter. To construct a solution by the method of characteristics, two essential tools in stochastic analysis are required: a generalized Itô formula and the solution of a stochastic equation as a stochastic flow of diffeomorphisms. These technical preliminaries are given in Section 2.2. The linear and quasilinear equations are treated in Section 2.3 and Section 2.4, respectively. The equations are integrated along the stochastic characteristic curves. This approach gives a unique path-wise solution in explicit form. Some general comments are given in Section 2.5.
¹ Here and henceforth, when citing a theorem (lemma) in Chapter m, Theorem (Lemma) n-j.k denotes Theorem (Lemma) j.k given in Chapter n for n ≠ m, while, if it appears within Chapter m, it will be cited simply as Theorem (Lemma) j.k.
2.2 Generalized Itô's Formula

In this section, we shall collect some technical lemmas concerning the Itô integral and the Stratonovich integral depending on a spatial parameter x ∈ R^d. To this end, let w_t = (w_t^1, ..., w_t^n), t ≥ 0, be a standard Wiener process in R^n defined on a complete probability space (Ω, F, P), and let F_t^s = σ{w_r, s ≤ r ≤ t} be the sub-σ-field of F over the time interval [s, t] ⊂ [0, ∞). Let Φ(x, t, ω), or simply Φ(x, t), for t ∈ [0, T] and ω ∈ Ω, be a family of real-valued random processes with parameter x ∈ R^d. As a random function of x and t, Φ(x, t) will be called a random field. If, for each x ∈ R^d, Φ(x, t) is F_t-adapted (predictable), it will be termed an adapted (predictable) random field. In general, a random field may be scalar-, vector-, or matrix-valued. In the latter cases the components of Φ(x, t) are labeled as Φ_i(x, t), Φ_ij(x, t), etc. In this chapter all random fields are assumed to be continuous in x and t a.s. and F_t-adapted. We say that Φ(x, t) is a spatially dependent martingale (semimartingale) if, for each x ∈ R^d, it is a martingale (semimartingale) with respect to the filtration {F_t}. By convention, we may sometimes write Φ(x, t), Φ_i(x, t), ... as Φ_t(x), Φ_t^i(x), ... . We shall deal with semimartingales V(x, t), with components V_i(x, t), i = 1, ..., p, of the special form

    V(x, t) = ∫₀^t b(x, s) ds + ∫₀^t σ(x, s) ∘ dw_s,

or, in components,

    V_i(x, t) = ∫₀^t b_i(x, s) ds + Σ_{j=1}^{n} ∫₀^t σ_ij(x, s) ∘ dw_s^j,    (2.8)

where b_i(x, t) and σ_ij(x, t) are continuous F_t-adapted random fields. The mutual variation of V_i(x, t) and V_j(y, t) is given by

    ⟨V_i(x, t), V_j(y, t)⟩ = ∫₀^t q_ij(x, y, s) ds,        (2.9)

where q_ij(x, y, s) = Σ_{l=1}^{n} σ_il(x, s) σ_jl(y, s). Let q(x, y, t) = [q_ij(x, y, t)]_{p×p} be the covariation matrix. Then the local characteristic of V(x, t) is given by the pair (q(x, y, t), b̃(x, t)), where b̃(x, t) = b(x, t) + d(x, t) and d(x, t) is the Stratonovich correction term as defined in equation (1.34).
For a smooth function f: R^d → R, let ∂_x^α f(x) be defined by

    ∂_x^α f(x) = ∂^{|α|} f(x) / [(∂x_1)^{α_1} ⋯ (∂x_d)^{α_d}],

or simply ∂^α f = ∂_{x_1}^{α_1} ⋯ ∂_{x_d}^{α_d} f, where α = (α_1, ..., α_d) is a multi-index with |α| = α_1 + ⋯ + α_d. Let C^m(R^d; R), or C^m in short, denote the set of all m-times continuously differentiable real-valued functions f on R^d, so that the partial derivatives ∂^α f exist for |α| ≤ m. Define a norm in C^m by

    ‖f‖_m = sup_{x∈R^d} |f(x)|/(1 + |x|) + Σ_{1≤|α|≤m} sup_{x∈R^d} |∂^α f(x)|.

For δ ∈ (0, 1], let C^{m,δ}(R^d; R), or simply C^{m,δ}, denote the set of C^m-functions f such that all of the partial derivatives ∂^α f with |α| = m are Hölder-continuous with exponent δ. The corresponding norm on C^{m,δ} is given by

    ‖f‖_{m+δ} = ‖f‖_m + Σ_{|α|=m} ‖∂^α f‖_δ,

where

    ‖f‖_δ = sup_{x,y∈R^d, x≠y} |f(x) − f(y)| / |x − y|^δ.

Let C_b^m denote the set of bounded C^m-functions and C_b^{m,δ} the set of bounded C^{m,δ}-functions.
Similarly, let g(x, y) be a real-valued function on R^d × R^d. Using the same notation as before, we say that g ∈ C^m if the mixed derivatives ∂_x^α ∂_y^β g(x, y) exist for |α|, |β| ≤ m. Define the C^m-norm of g as

    ‖g‖_m = sup_{(x,y)∈R^{2d}} |g(x, y)| / [(1 + |x|)(1 + |y|)]
            + Σ_{1≤|α|,|β|≤m} sup_{(x,y)∈R^{2d}} |∂_x^α ∂_y^β g(x, y)|.

Analogously, the space C^{m,δ} consists of C^m-functions g such that

    ‖g‖_{m+δ} = ‖g‖_m + Σ_{|α|,|β|=m} ‖∂_x^α ∂_y^β g‖_δ < ∞,

where

    ‖g‖_δ = sup |g(x, y) − g(x′, y) − g(x, y′) + g(x′, y′)| / (|x − x′|^δ |y − y′|^δ),

with the supremum taken over (x, y), (x′, y′) ∈ R^{2d} with x ≠ x′, y ≠ y′. The spaces of bounded functions g in C^m and C^{m,δ} will be denoted by C_b^m and C_b^{m,δ}, respectively, as before.

Let $f(x,t)$ be a continuous, $\mathcal F_t$-adapted, real-valued random field such that
$$ \int_0^t |f(x,s)|^2\,ds < \infty \quad a.s. \ \text{for each } x, $$
and let $w_t$ be a Brownian motion in one dimension. Then the Itô integral $\int_0^t f(x,s)\,dw_s$ and the Stratonovich integral $\int_0^t f(x,s)\circ dw_s$ are well defined for each fixed $x$. However, they may not be defined as a random field, since the union of the null sets $N_x$ over all $x$ may not be a null set. For instance, in order to define the Itô integral $I_t(x) = \int_0^t f(x,s)\,dw_s$ as a continuous random field, it is necessary to impose a certain regularity property, such as Hölder continuity, on the integrand $f(x,t)$. Then, by applying Kolmogorov's continuity theorem for random fields, such an integral may be defined for all $(x,t)$ as a continuous version of the family of random variables $\{I_t(x)\}$. More generally, consider the semimartingale $V(x,t)$ defined by equation (2.8). By means of Theorem 3-1.2 in [55], it can be shown that the following lemma holds.

Lemma 2.1 Let $b_i(\cdot,t)$ and $\sigma_{ij}(\cdot,t)$ be continuous, $\mathcal F_t$-adapted $C^{m,\delta}$-processes for $m\ge 1$ such that
$$ \sup_{i,j}\int_0^T \{\,\|b_i(\cdot,t)\|_{m+\delta} + \|\sigma_{ij}(\cdot,t)\|_{m+1+\delta}^2\,\}\,dt < \infty \quad a.s. $$
Then, for each $i$, with $i=1,\dots,p$, the spatially dependent Stratonovich process
$$ V_i(x,t) = \int_0^t b_i(x,s)\,ds + \sum_{j=1}^n \int_0^t \sigma_{ij}(x,s)\circ dw_s^j $$
has a regular version that is a continuous $C^{m,\varepsilon}$-semimartingale for any $\varepsilon<\delta$. Moreover, the following differentiation formula holds:
$$ \partial_x^\alpha V_i(x,t) = \int_0^t \partial_x^\alpha b_i(x,s)\,ds + \sum_{j=1}^n \int_0^t \partial_x^\alpha\sigma_{ij}(x,s)\circ dw_s^j \quad a.s., \tag{2.10} $$
for $|\alpha|\le m$.

Let $F(x,t)$ be a continuous $C^m$-semimartingale with local characteristic $(q(x,y,t),\,b(x,t))$ and let $g_t\in\mathbb R^d$ be a continuous predictable process such that
$$ \int_0^T \{\,|b(g_t,t)| + q(g_t,g_t,t)\,\}\,dt < \infty \quad a.s. $$
Then we can define the Itô (line) integral of $F$ along $g_t$ as the stochastic limit:
$$ \int_0^t F(g_s,ds) = \lim_{|\Delta|\to 0}\sum_{j=0}^{N-1}\{\,F(g_{t_j\wedge t},\,t_{j+1}\wedge t) - F(g_{t_j\wedge t},\,t_j\wedge t)\,\}, \tag{2.11} $$
where the convergence is uniform over $[0,T]$ in probability, for any partition $\Delta = \{0 = t_0 < t_1 < \cdots < t_N = T\}$ with mesh size $|\Delta|$. Correspondingly, the Stratonovich integral is defined as
$$ \int_0^t F(g_s,\circ ds) = \lim_{|\Delta|\to 0}\sum_{j=0}^{N-1}\frac12\{\,F(g_{t_{j+1}\wedge t},\,t_{j+1}\wedge t) + F(g_{t_j\wedge t},\,t_{j+1}\wedge t) - F(g_{t_{j+1}\wedge t},\,t_j\wedge t) - F(g_{t_j\wedge t},\,t_j\wedge t)\,\}. \tag{2.12} $$
In particular, we have
$$ \int_0^t V(g_s,\circ ds) = \int_0^t b(g_s,s)\,ds + \int_0^t \sigma(g_s,s)\circ dw_s. \tag{2.13} $$

Lemma 2.2 Let $F(x,t)$, $x\in\mathbb R^d$, $t\in[0,T]$, be a continuous $C^1$-semimartingale with local characteristic satisfying
$$ \int_0^T \{\,\|b(\cdot,t)\|_1 + \|q(\cdot,\cdot,t)\|_{2+\delta}\,\}\,dt < \infty \quad a.s., $$
and let $g_t = (g_t^1,\dots,g_t^d)$ be a continuous semimartingale in $\mathbb R^d$. Then both the Itô and the Stratonovich integrals given above are well defined, and they are related by the following formula:
$$ \int_0^t F(g_s,\circ ds) = \int_0^t F(g_s,ds) + \frac12\sum_{i=1}^d \Big\langle \int_0^t \frac{\partial F}{\partial x_i}(g_s,ds),\; g^i\Big\rangle_t. \tag{2.14} $$
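The relation (2.14) can be checked numerically in the simplest case $F(x,t)=x\,w_t$ and $g_t=w_t$, where it reduces to the classical identity $\int_0^t w\circ dw = \int_0^t w\,dw + \tfrac12\langle w,w\rangle_t$. The sketch below is my own illustration (not from the text); it forms the partition sums (2.11) and (2.12) directly along one simulated Brownian path:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 1.0, 100_000
dt = T / N
dw = rng.normal(0.0, np.sqrt(dt), N)             # Brownian increments
w = np.concatenate(([0.0], np.cumsum(dw)))       # Brownian path on the grid

ito = np.sum(w[:-1] * dw)                        # Ito sum (2.11): left endpoints
strat = np.sum(0.5 * (w[:-1] + w[1:]) * dw)      # Stratonovich sum (2.12): averaged endpoints

# (2.14) predicts: strat - ito -> (1/2) <w, w>_T = T/2
print(strat - ito)
```

The Stratonovich sum telescopes exactly to $\tfrac12 w_T^2$, while the difference of the two sums converges to $T/2$, the correction term in (2.14).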

As is well known, the Itô formula plays an important role in stochastic analysis. If $F(x,t)$ were a deterministic function, the formula for $F(g_t,t)$ could be written down easily. As it turns out, the formula may be generalized to the semimartingale case (see Theorem 3.3.2, [55]). The following theorem gives a generalized Itô formula.

Theorem 2.3 Let $F(x,t)$ be a continuous $C^3$ process and a continuous $C^2$-semimartingale, for $x\in\mathbb R^d$ and $t\in[0,T]$. Suppose that its local characteristic satisfies
$$ \int_0^T \{\,\|b(\cdot,t)\|_1 + \|q(\cdot,\cdot,t)\|_{2+\delta}\,\}\,dt < \infty \quad a.s. $$
If $g_t = (g_t^1,\dots,g_t^d)$ is a continuous semimartingale, then $F(g_t,t)$ is a continuous semimartingale and the following formula holds:
$$ F(g_t,t) = F(g_0,0) + \int_0^t F(g_s,\circ ds) + \sum_{j=1}^d \int_0^t \frac{\partial F}{\partial x_j}(g_s,s)\circ dg_s^j, \tag{2.15} $$
which, when written in terms of Itô integrals, gives
$$ \begin{aligned} F(g_t,t) ={}& F(g_0,0) + \int_0^t F(g_s,ds) + \sum_{i=1}^d \int_0^t \frac{\partial F}{\partial x_i}(g_s,s)\,dg_s^i \\ &+ \frac12\sum_{i,j=1}^d \int_0^t \frac{\partial^2 F}{\partial x_i\,\partial x_j}(g_s,s)\,d\langle g^i,g^j\rangle_s + \frac12\sum_{i=1}^d \Big\langle \int_0^t \frac{\partial F}{\partial x_i}(g_s,ds),\; g^i\Big\rangle_t. \end{aligned} \tag{2.16} $$

As a matter of convenience, in the differential form, the Stratonovich formula (2.15) will be written symbolically as
$$ dF(g_t,t) = F(g_t,\circ dt) + \sum_{j=1}^d \frac{\partial F}{\partial x_j}(g_t,t)\circ dg_t^j. \tag{2.17} $$

Suppose that $g_t$ is replaced by a continuous $C^r$-semimartingale $V(x,t)$, $x\in\mathbb R^d$, as given by equation (2.8). By changing the spatial parameter, let $F(\lambda,t)$, $\lambda\in\mathbb R^p$, be the semimartingale as in the previous lemma. We need to differentiate random fields such as $F[V(x,t),t]$ and $J(x,t) = \int_0^t F[V(x,s),\circ ds]$ with respect to the spatial parameter $x$. The next lemma, a simplified version of Theorem 3.3.4 in [55], provides the desired chain rules of differentiation.

Lemma 2.4 Let $F(\lambda,t)$, $\lambda\in\mathbb R^p$, be a continuous semimartingale with local characteristic $(q,b)$ such that
$$ \int_0^T \{\,\|b(\cdot,t)\|_{m+\delta} + \|q(\cdot,\cdot,t)\|_{m+1+\delta}\,\}\,dt < \infty \quad a.s. $$
Let $V(x,t)$, $x\in\mathbb R^d$, be a continuous $C^{r,\delta}$-semimartingale in $\mathbb R^p$ as given by (2.8) such that
$$ \sup_{i,j}\int_0^T \{\,\|b_i(\cdot,t)\|_{m+\delta} + \|\sigma_{ij}(\cdot,t)\|_{m+1+\delta}^2\,\}\,dt < \infty \quad a.s. $$
Then the random field $F[V(x,t),t]$ and the Stratonovich integral $J(x,t) = \int_0^t F[V(x,s),\circ ds]$ are well defined as continuous $C^{m_0,\varepsilon}$-semimartingales with $m_0 = m\wedge r$ and $\varepsilon<\delta$. Moreover, the following differentiation formulas hold:
$$ \frac{\partial}{\partial x_i}F[V(x,t),t] = \sum_{j=1}^p \frac{\partial F}{\partial\lambda_j}[V(x,t),t]\,\frac{\partial V_j}{\partial x_i}(x,t), $$
and
$$ \frac{\partial J}{\partial x_i}(x,t) = \sum_{j=1}^p \int_0^t \frac{\partial V_j}{\partial x_i}(x,s)\,\frac{\partial F}{\partial\lambda_j}[V(x,s),\circ ds]. $$

Now we consider the following system of stochastic equations:
$$ \begin{aligned} d\phi_t^i &= V_i(\phi_t,\circ dt) = b_i(\phi_t,t)\,dt + \sum_{j=1}^n \sigma_{ij}(\phi_t,t)\circ dw_t^j, \\ \phi_0^i &= x_i, \qquad i=1,\dots,d, \quad 0<t<T. \end{aligned} \tag{2.18} $$
If $b_i$ and $\sigma_{ij}$ are smooth, then, under the usual linear growth and Lipschitz continuity assumptions, the stochastic system has a unique continuous solution $\phi_t(x)$ with $\phi_t = (\phi_t^1,\dots,\phi_t^d)$. Here we regard $\phi_t(x)$ as a random field and are interested in the mapping $\phi_t(\cdot):\mathbb R^d\to\mathbb R^d$. Suppose the mapping is continuous and has a continuous inverse $\phi_t^{-1}(\cdot)$ a.s. Then the composition maps $\phi_{s,t} = \phi_t\circ\phi_s^{-1}$, for $0\le s\le t\le T$, form a stochastic flow of homeomorphisms. If $\phi_{s,t}(\cdot)\in C^m$ and all of its partial derivatives $\partial_x^\alpha\phi_{s,t}(x)$ with $|\alpha|\le m$ are continuous in $(x,s,t)$ a.s., then $\{\phi_{s,t}\}$ is said to be a stochastic flow of $C^m$-diffeomorphisms. The following is a more precise statement of the key theorem.

Theorem 2.5 For $i=1,\dots,d$, $j=1,\dots,n$, let $b_i(x,t)$ and $\sigma_{ij}(x,t)$ be continuous $\mathcal F_t$-adapted $C_b^{m,\delta}$-random fields such that, for $m\ge 1$,
$$ \sup_{i,j}\int_0^T \{\,\|b_i(\cdot,t)\|_{m+\delta} + \|\sigma_{ij}(\cdot,t)\|_{m+1+\delta}^2\,\}\,dt < \infty \quad a.s. $$
Then the equation (2.18) has a continuous solution $\phi_t(x)$ which is a $C^{m,\varepsilon}$-semimartingale for any $\varepsilon<\delta$. Moreover, it generates a stochastic flow of $C^m$-diffeomorphisms over $[0,T]$.

This theorem follows from Theorem 4.7.3 in [55]. Since the stochastic flow is generated by Wiener processes, it is known as the Brownian flow of diffeomorphisms.

2.3 Linear Stochastic Equations

Consider a linear partial differential equation of first order in the variables $t$ and $x$ as follows:
$$ \frac{\partial u}{\partial t} = \sum_{i=1}^d v_i(x,t)\,\frac{\partial u}{\partial x_i} + v_{d+1}(x,t)\,u + v_{d+2}(x,t), $$
for $t>0$ and $x = (x_1,\dots,x_d)$ in $\mathbb R^d$, where $v_i$, $i=1,\dots,d+2$, are given functions of $x$ and $t$. Let $V(x,t) = (V_1(x,t),\dots,V_{d+2}(x,t))$ be a continuous spatially dependent semimartingale given by (2.8) with $p = d+2$. In the differential form, it is written as
$$ V_i(x,dt) = b_i(x,t)\,dt + \sum_{j=1}^n \sigma_{ij}(x,t)\circ dw_t^j, \tag{2.19} $$
or, formally,
$$ \dot V_i(x,t) = b_i(x,t) + \sum_{j=1}^n \sigma_{ij}(x,t)\,\dot w_t^j, \quad i=1,\dots,d+2, $$
which is a spatially dependent white noise with a random drift. Now we replace each coefficient $v_i(x,t)$ of the linear partial differential equation by the white noise $\dot V_i(x,t)$ given above, and consider the associated Cauchy problem:
$$ \frac{\partial u}{\partial t} = \sum_{i=1}^d \dot V_i(x,t)\,\frac{\partial u}{\partial x_i} + \dot V_{d+1}(x,t)\,u + \dot V_{d+2}(x,t), \qquad u(x,0) = u_0(x), \tag{2.20} $$
where $u_0(x)$ is a given function. By convention, equation (2.20) is interpreted as the following stochastic integral equation:
$$ u(x,t) = u_0(x) + \sum_{i=1}^d \int_0^t \frac{\partial u}{\partial x_i}(x,s)\,V_i(x,\circ ds) + \int_0^t u(x,s)\,V_{d+1}(x,\circ ds) + V_{d+2}(x,t). \tag{2.21} $$

To construct a solution to the Cauchy problem (2.20), we introduce the associated characteristic system
$$ \begin{aligned} \phi_t^i(x) &= x_i - \int_0^t V_i[\phi_s(x),\circ ds] \\ &= x_i - \int_0^t b_i[\phi_s(x),s]\,ds - \sum_{j=1}^n \int_0^t \sigma_{ij}[\phi_s(x),s]\circ dw_s^j, \end{aligned} \tag{2.22} $$
for $i=1,\dots,d$, and
$$ \eta_t(x,r) = r + \int_0^t \eta_s(x,r)\,V_{d+1}[\phi_s(x),\circ ds] + \int_0^t V_{d+2}[\phi_s(x),\circ ds], \tag{2.23} $$
where $\phi_t(x) = (\phi_t^1(x),\dots,\phi_t^d(x))$, for $t\in[0,T]$, is a characteristic curve starting from $x$ at $t=0$. Let $\zeta_t(\xi) = (\phi_t(x),\eta_t(\xi))$ with $\xi = (x,r)$. Then the system (2.22) and (2.23) can be written as a single equation of the form
$$ \zeta_t(\xi) = \xi + \int_0^t K[\zeta_s(\xi),s]\,V[\zeta_s(\xi),\circ ds], $$
where $K(\xi,s)$ is a $(d+1)\times(d+2)$ matrix. Under some suitable conditions, the above equation has a unique solution that defines a stochastic flow of diffeomorphisms.

Lemma 3.1 For $i=1,\dots,d+2$ and $j=1,\dots,n$, let $b_i(x,t)$ and $\sigma_{ij}(x,t)$ be continuous $\mathcal F_t$-adapted $C_b^{m,\delta}$-processes with $m\ge 1$, $\delta\in(0,1]$, such that
$$ \int_0^T \|b_i(\cdot,t)\|_{m+\delta}^2\,dt \le c_1 \quad\text{and}\quad \int_0^T \|\sigma_{ij}(\cdot,t)\|_{m+1+\delta}^2\,dt \le c_2 \quad a.s., $$
for some positive constants $c_1$ and $c_2$. Then the system (2.22) and (2.23) has a unique solution $\zeta_t(\xi) = (\phi_t(x),\eta_t(x,r))$ over $[0,T]$. Moreover, the solution is a continuous $C^m$-semimartingale that defines a stochastic flow of $C^m$-diffeomorphisms.

This lemma follows immediately from Theorem 2.5. In particular, $\phi_t:\mathbb R^d\to\mathbb R^d$, $0\le t\le T$, is a stochastic flow of $C^m$-diffeomorphisms. Let $\psi_t = \phi_t^{-1}$ denote the inverse of $\phi_t$ and define $\rho(x,t) = \eta_t[x,u_0(x)]$. With the aid of the solution of the characteristic system, we can construct a path-wise solution to the Cauchy problem (2.20) as follows.

Theorem 3.2 Let the conditions in Lemma 3.1 hold true for $m\ge 3$. For $u_0\in C^{m,\delta}$, the Cauchy problem (2.20) has a unique solution $u(\cdot,t)$ for $0\le t\le T$ such that it is a $C^m$-semimartingale, which can be represented as
$$ \begin{aligned} u(x,t) ={}& u_0[\psi_t(x)]\,\exp\Big\{\int_0^t V_{d+1}[\phi_s(y),\circ ds]\Big\}\Big|_{y=\psi_t(x)} \\ &+ \int_0^t \exp\Big\{\int_\tau^t V_{d+1}[\phi_s(y),\circ ds]\Big\}\,V_{d+2}[\phi_\tau(y),\circ d\tau]\,\Big|_{y=\psi_t(x)}. \end{aligned} \tag{2.24} $$

Proof. Let $\rho(x,t) = \eta_t[x,u_0(x)]$ and $\Phi(x,t) = \rho[\psi_t(x),t]$. We are going to show that $\Phi(x,t)$ is a solution of the Cauchy problem (2.20). From (2.23) we have
$$ \rho(x,t) = u_0(x) + \int_0^t \rho(x,s)\,V_{d+1}[\phi_s(x),\circ ds] + \int_0^t V_{d+2}[\phi_s(x),\circ ds], \tag{2.25} $$
so that
$$ \begin{aligned} \rho[\psi_t(x),\circ dt] &= \rho[\psi_t(x),t]\,V_{d+1}[(\phi_t\circ\psi_t)(x),\circ dt] + V_{d+2}[(\phi_t\circ\psi_t)(x),\circ dt] \\ &= \Phi(x,t)\,V_{d+1}(x,\circ dt) + V_{d+2}(x,\circ dt), \end{aligned} \tag{2.26} $$
due to the fact that $(\phi_t\circ\psi_t)(x) = x$. By the generalized Itô formula (2.15),
$$ \Phi(x,t) = \rho[\psi_t(x),t] = u_0(x) + \int_0^t \rho[\psi_s(x),\circ ds] + \int_0^t (\nabla\rho)[\psi_s(x),s]\circ d\psi_s(x), \tag{2.27} $$
where
$$ (\nabla\rho)[\psi_s(x),s]\circ d\psi_s(x) = \sum_{i=1}^d \Big\{\frac{\partial\rho(y,s)}{\partial y_i}\Big|_{y=\psi_s(x)}\Big\}\circ d\psi_s^i(x). $$
Since $d(\phi_t\circ\psi_t)(x) = dx = 0$, by setting $(\phi_t\circ\psi_t)(x) = \phi(\psi_t,t)$ and noticing (2.17), we get
$$ d\,\phi(\psi_t,t) = \phi(\psi_t,\circ dt) + (\nabla\phi)(\psi_t,t)\circ d\psi_t(x) = 0. $$
The above equation gives
$$ d\psi_t = -[(\nabla\phi)^{-1}(\psi_t,t)]\,\phi(\psi_t,\circ dt), $$
which, in view of (2.22), yields
$$ d\psi_t = [(\nabla\phi)^{-1}(\psi_t,t)]\,\hat V(x,\circ dt), \tag{2.28} $$
where we set $\hat V = (V_1,\dots,V_d)$ and the inverse $(\nabla\phi_t)^{-1}$ exists by Lemma 3.1. On the other hand, it is clear that
$$ \nabla_x(\phi_t\circ\psi_t)(x) = [(\nabla\phi_t)(\psi_t)]\,\nabla_x\psi_t(x) = I_d, $$
where $I_d$ is the $(d\times d)$ identity matrix. It follows that
$$ (\nabla\phi)^{-1}[\psi_t(x),t] = \nabla_x\psi_t(x). \tag{2.29} $$
In view of (2.28) and (2.29),
$$ \begin{aligned} (\nabla\rho)[\psi_s(x),s]\circ d\psi_s(x) &= \nabla_y\rho(y,s)\big|_{y=\psi_s(x)}\,[\nabla_x\psi_s(x)]\,\hat V(x,\circ ds) \\ &= \nabla_x\Phi(x,s)\,\hat V(x,\circ ds). \end{aligned} \tag{2.30} $$

By taking the equations (2.26) and (2.30) into account, equation (2.27) can be written as
$$ \Phi(x,t) = u_0(x) + \sum_{i=1}^d \int_0^t \frac{\partial\Phi}{\partial x_i}(x,s)\,V_i(x,\circ ds) + \int_0^t \Phi(x,s)\,V_{d+1}(x,\circ ds) + V_{d+2}(x,t). \tag{2.31} $$
Hence $\Phi(x,t)$ is a solution of the Cauchy problem (2.20) as claimed. Since the equation is linear, the solution $u = \Phi(x,t)$ is easily shown to be unique. The fact that $u(\cdot,t)$ is a continuous $C^m$-semimartingale is a consequence of Theorem 2.5. To verify the representation (2.24), by the Itô formula, one can check that the solution of equation (2.25) is given by
$$ \rho(y,t) = u_0(y)\exp\Big\{\int_0^t V_{d+1}[\phi_s(y),\circ ds]\Big\} + \int_0^t \exp\Big\{\int_\tau^t V_{d+1}[\phi_s(y),\circ ds]\Big\}\,V_{d+2}[\phi_\tau(y),\circ d\tau], $$
which yields the representation (2.24) by setting $y = \psi_t(x)$. $\square$


As an example, consider the one-dimensional problem:
$$ \frac{\partial u}{\partial t} = \dot V(x,t)\,\frac{\partial u}{\partial x} + \alpha u + f(x,t), \qquad u(x,0) = u_0(x), \tag{2.32} $$
where $\alpha$ is a constant, $f$ is a bounded continuous function, and the random coefficient is given by
$$ \dot V(x,t) = x\,[\,b(t) + \sigma(t)\,\dot w_t\,], $$
in which $b(t)$ and $\sigma(t)$ are continuous $\mathcal F_t$-adapted processes and $w_t$ is a scalar Brownian motion. Then the characteristic equation (2.22) yields
$$ \phi_t(x) = x - \int_0^t b(s)\,\phi_s(x)\,ds - \int_0^t \sigma(s)\,\phi_s(x)\circ dw_s, $$
which can be solved to give
$$ \phi_t(x) = x\exp\Big\{-\int_0^t b(s)\,ds - \int_0^t \sigma(s)\circ dw_s\Big\}. $$
Its inverse is simply
$$ \psi_t(x) = x\exp\Big\{\int_0^t b(s)\,ds + \int_0^t \sigma(s)\circ dw_s\Big\}. $$

By the representation formula (2.24), we obtain
$$ \begin{aligned} u(x,t) ={}& u_0\Big(x\exp\Big\{\int_0^t b(s)\,ds + \int_0^t \sigma(s)\circ dw_s\Big\}\Big)\,e^{\alpha t} \\ &+ \int_0^t e^{\alpha(t-s)}\,f\Big(x\exp\Big\{\int_s^t b(\tau)\,d\tau + \int_s^t \sigma(\tau)\circ dw_\tau\Big\},\,s\Big)\,ds. \end{aligned} $$
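As a numerical sanity check (my own sketch, with hypothetical constant coefficients $b(t)\equiv b$ and $\sigma(t)\equiv\sigma$, so that $\int_0^t\sigma(s)\circ dw_s = \sigma w_t$), the closed-form characteristic $\phi_t(x) = x\exp\{-bt-\sigma w_t\}$ can be compared with a Stratonovich–Heun (predictor-corrector) discretization of $d\phi = -b\phi\,dt - \sigma\phi\circ dw$:

```python
import numpy as np

rng = np.random.default_rng(42)
b, sigma, x0, T, N = 0.1, 0.3, 1.0, 1.0, 20_000   # hypothetical constants
dt = T / N
dw = rng.normal(0.0, np.sqrt(dt), N)

# Heun scheme, which converges to the Stratonovich solution
phi = x0
for dwk in dw:
    pred = phi * (1.0 - b * dt - sigma * dwk)      # Euler predictor
    phi += 0.5 * (-b * (phi + pred)) * dt + 0.5 * (-sigma * (phi + pred)) * dwk

w_T = dw.sum()
exact = x0 * np.exp(-b * T - sigma * w_T)          # closed-form characteristic
print(phi, exact)
```

The midpoint correction built into the Heun scheme is exactly what makes it consistent with the Stratonovich (rather than Itô) interpretation of the characteristic equation.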

We wish to point out that the first-order Stratonovich equation (2.21) is actually a parabolic Itô equation. This can be seen by rewriting the Stratonovich integrals in (2.21) in terms of the associated Itô integrals with proper correction terms. For instance, by applying Lemma 2.2 and taking equation (2.19) into account, we have
$$ \begin{aligned} \int_0^t u(x,s)\,V_{d+1}(x,\circ ds) ={}& \int_0^t u(x,s)\,V_{d+1}(x,ds) + \frac12\sum_{j=1}^d\int_0^t q_{d+1,j}(x,x,s)\,\frac{\partial u}{\partial x_j}(x,s)\,ds \\ &+ \frac12\int_0^t [\,q_{d+1,d+1}(x,x,s)\,u(x,s) + q_{d+1,d+2}(x,x,s)\,]\,ds, \end{aligned} \tag{2.33} $$
where
$$ q_{i,j}(x,y,t) = \frac{\partial}{\partial t}\langle V_i(x,\cdot),V_j(y,\cdot)\rangle_t = \sum_{k=1}^n \sigma_{ik}(x,t)\,\sigma_{jk}(y,t), $$
for $i,j=1,\dots,d+2$. Similarly, by differentiating equation (2.21) with respect to $x_i$, we can obtain
$$ \begin{aligned} \int_0^t \frac{\partial u}{\partial x_i}(x,s)\,V_i(x,\circ ds) ={}& \int_0^t \frac{\partial u}{\partial x_i}(x,s)\,V_i(x,ds) + \frac12\Big\langle V_i(x,\cdot),\,\frac{\partial u}{\partial x_i}(x,\cdot)\Big\rangle_t \\ ={}& \int_0^t \frac{\partial u}{\partial x_i}(x,s)\,V_i(x,ds) + \frac12\sum_{j=1}^d\int_0^t q_{i,j}(x,x,s)\,\frac{\partial^2 u}{\partial x_i\,\partial x_j}(x,s)\,ds \\ &+ \frac12\int_0^t \Big[\sum_{j=1}^d \bar\partial q_{i,j}(x,x,s) + q_{i,d+1}(x,x,s)\Big]\,\frac{\partial u}{\partial x_i}(x,s)\,ds \\ &+ \frac12\int_0^t \big[\,\bar\partial q_{i,d+1}(x,x,s)\,u(x,s) + \bar\partial q_{i,d+2}(x,x,s)\,\big]\,ds, \end{aligned} \tag{2.34} $$
where $\bar\partial q_{i,j}(x,y,t) = \dfrac{\partial q_{i,j}}{\partial y_i}(x,y,t)$. When the above results (2.33) and (2.34) are used in equation (2.21), we obtain the parabolic Itô equation
$$ \begin{aligned} u(x,t) ={}& u_0(x) + \int_0^t L_s u(x,s)\,ds + \sum_{i=1}^d \int_0^t \frac{\partial u}{\partial x_i}(x,s)\,V_i(x,ds) \\ &+ \int_0^t u(x,s)\,V_{d+1}(x,ds) + \tilde V_{d+2}(x,t), \end{aligned} \tag{2.35} $$
where $L_t$ is a second-order elliptic operator defined by
$$ L_t u = \frac12\sum_{i,j=1}^d q_{i,j}(x,x,t)\,\frac{\partial^2 u}{\partial x_i\,\partial x_j} + \sum_{i=1}^d p_i(x,t)\,\frac{\partial u}{\partial x_i} + r(x,t)\,u, $$
$$ p_i(x,t) = \frac12\Big[\sum_{j=1}^d \bar\partial q_{i,j}(x,x,t) + q_{i,d+1}(x,x,t) + q_{d+1,i}(x,x,t)\Big], $$
$$ r(x,t) = \frac12\Big[\,q_{d+1,d+1}(x,x,t) + \sum_{j=1}^d \bar\partial q_{j,d+1}(x,x,t)\Big], $$
$$ \tilde V_{d+2}(x,t) = V_{d+2}(x,t) + \frac12\int_0^t \Big[\sum_{j=1}^d \bar\partial q_{j,d+2}(x,x,s) + q_{d+1,d+2}(x,x,s)\Big]\,ds. $$
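The appearance of the second-order term $\tfrac12 q\,\partial^2 u$ in $L_t$ can be seen in a toy case of my own (not from the text): take $d=1$, $V_1(x,t) = \sigma w_t$ with a constant $\sigma$, and no zeroth-order terms. The pathwise solution is the random translation $u(x,t) = u_0(x+\sigma w_t)$, while the Itô form says the mean field solves the heat equation $\partial_t\,\mathbb Eu = \tfrac12\sigma^2\partial_x^2\,\mathbb Eu$; for $u_0 = \sin$, both give $\mathbb E\,u(x,t) = e^{-\sigma^2 t/2}\sin x$:

```python
import numpy as np

rng = np.random.default_rng(7)
sigma, T, x, M = 0.5, 1.0, 0.8, 200_000

# pathwise transport solution u(x,T) = u0(x + sigma*w_T), averaged over paths
w_T = rng.normal(0.0, np.sqrt(T), M)
mc_mean = np.sin(x + sigma * w_T).mean()

# heat-equation prediction from the Ito correction term (1/2) sigma^2 u_xx
heat = np.exp(-0.5 * sigma**2 * T) * np.sin(x)
print(mc_mean, heat)
```

The Monte Carlo mean of the randomly translated profile agrees with the deterministically smoothed one, which is precisely the correction effect captured by $L_t$.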
Corollary 3.3 Let the conditions in Lemma 3.1 hold true for $m\ge 3$. For $u_0\in C^{m,\delta}$, the Cauchy problem (2.35) has a unique solution $u(\cdot,t)$ for $0\le t\le T$ such that it is a $C^m$-semimartingale, which can be represented by the equation (2.24).

2.4 Quasilinear Equations

Suppose that the random coefficients $V_i$ in equation (2.20) may also depend on $u$. Then one is led to consider a quasilinear equation of the form:
$$ \frac{\partial u}{\partial t} = \sum_{i=1}^d \dot V_i(x,u,t)\,\frac{\partial u}{\partial x_i} + \dot V_{d+1}(x,u,t), \qquad u(x,0) = u_0(x), \tag{2.36} $$
where, formally, $\dot V_i(x,u,t) = \frac{\partial}{\partial t}V_i(x,u,t)$. Similar to (2.19), $V_i(x,u,t)$, $(x,u)\in\mathbb R^{d+1}$, is a continuous semimartingale given by
$$ V_i(x,u,t) = \int_0^t b_i(x,u,s)\,ds + \sum_{j=1}^n \int_0^t \sigma_{ij}(x,u,s)\circ dw_s^j, \tag{2.37} $$
for $i=1,\dots,d+1$, where $b_i(x,u,t)$ and $\sigma_{ij}(x,u,t)$ are continuous adapted random fields. Of course, the equation (2.36) is equivalent to the integral equation
$$ u(x,t) = u_0(x) + \sum_{i=1}^d \int_0^t \frac{\partial u}{\partial x_i}(x,s)\,V_i(x,u(x,s),\circ ds) + \int_0^t V_{d+1}(x,u(x,s),\circ ds). \tag{2.38} $$

Introduce the associated characteristic system
$$ \phi_t^i(x,r) = x_i - \int_0^t V_i[\phi_s(x,r),\eta_s(x,r),\circ ds], \quad i=1,\dots,d, \tag{2.39} $$
and
$$ \eta_t(x,r) = r + \int_0^t V_{d+1}[\phi_s(x,r),\eta_s(x,r),\circ ds]. \tag{2.40} $$
Notice that, in contrast with the system (2.22) and (2.23) for the linear case, the equation (2.39) is now coupled with (2.40). However, in the semilinear case, when $V_i(x,u,t)$, $i=1,\dots,d$, do not depend on $u$, equation (2.39) can be solved independently as in the linear case. Similar to Lemma 3.1, the following lemma holds for the above system.
Lemma 4.1 For $i=1,\dots,d+1$ and $j=1,\dots,n$, let $b_i(x,r,t)$ and $\sigma_{ij}(x,r,t)$ be continuous $\mathcal F_t$-adapted $C_b^{m,\delta}$-processes with $m\ge 1$, $\delta\in(0,1]$, such that
$$ \int_0^T \|b_i(\cdot,\cdot,t)\|_{m+\delta}^2\,dt \le c_1 \quad\text{and}\quad \int_0^T \|\sigma_{ij}(\cdot,\cdot,t)\|_{m+1+\delta}^2\,dt \le c_2 \quad a.s., $$
for some positive constants $c_1$ and $c_2$. Then the system (2.39) and (2.40) has a unique solution $\zeta_t(x,r) = (\phi_t(x,r),\eta_t(x,r))$ over $[0,T]$. Moreover, the solution is a continuous $C^m$-semimartingale which defines a stochastic flow of $C^m$-diffeomorphisms.

Let $\lambda_t(x) = \phi_t[x,u_0(x)]$ and $\rho(x,t) = \eta_t[x,u_0(x)]$. By Lemma 4.1, the inverse of $\lambda_t$ exists and is denoted by $\psi_t$. Then the following existence theorem holds.

Theorem 4.2 Let the conditions in Lemma 4.1 hold true with $m\ge 3$. For $u_0\in C^{m,\delta}$, the Cauchy problem (2.36) has a unique solution given by $u(x,t) = \rho[\psi_t(x),t]$ for $0\le t\le T$. Moreover, it is a $C^m$-semimartingale.

Proof. Since the proof is quite similar to that of the linear case, it will only be sketched. In view of (2.38) and (2.39), $\lambda_t(x)$ and $\rho(x,t)$ satisfy the following equations:
$$ \lambda_t^i(x) = x_i - \int_0^t V_i[\lambda_s(x),\rho(x,s),\circ ds], \quad i=1,\dots,d, \tag{2.41} $$
and
$$ \rho(x,t) = u_0(x) + \int_0^t V_{d+1}[\lambda_s(x),\rho(x,s),\circ ds], \tag{2.42} $$
which are both continuous $C^m$-semimartingales by Theorem 2.5. We wish to show that $\Phi(x,t) = \rho[\psi_t(x),t]$ is a solution of the Cauchy problem. To this end, we apply the generalized Itô formula to $\rho(\cdot,t)$ to get
$$ \rho[\psi_t(x),t] = u_0(x) + \int_0^t \rho[\psi_s(x),\circ ds] + \int_0^t (\nabla\rho)[\psi_s(x),s]\circ d\psi_s(x), $$
which, by noticing (2.42), implies that
$$ \Phi(x,t) = u_0(x) + \int_0^t V_{d+1}[x,\Phi(x,s),\circ ds] + \int_0^t (\nabla\rho)[\psi_s(x),s]\circ d\psi_s(x). \tag{2.43} $$

Similar to (2.28) and (2.30), it can be shown that
$$ d\psi_t = [(\nabla\lambda_t)^{-1}(\psi_t)]\,\hat V(\psi_t,\circ dt), \tag{2.44} $$
and
$$ (\nabla\rho)[\psi_t(x),t]\,(\nabla\lambda_t)^{-1}[\psi_t(x)] = \nabla_x\Phi(x,t), \tag{2.45} $$
where we set $\hat V = (V_1,\dots,V_d)$ and the inverse $(\nabla\lambda_t)^{-1}$ exists by Lemma 4.1. With the aid of (2.44) and (2.45), equation (2.43) can be reduced to
$$ \Phi(x,t) = u_0(x) + \int_0^t \nabla_x\Phi(x,s)\cdot\hat V[x,\Phi(x,s),\circ ds] + \int_0^t V_{d+1}[x,\Phi(x,s),\circ ds], \tag{2.46} $$
which shows that $\Phi = \rho[\psi_t(x),t]$ solves the Cauchy problem (2.36). By Theorem 2.5, we can deduce that $\Phi(\cdot,t)$ is a continuous $C^m$-semimartingale.
To prove uniqueness, suppose $\tilde u$ is another solution. We let $\hat u = u - \tilde u$ and show that $\hat u = 0$ a.s. It follows from (2.38) and (2.46) that $\hat u$ satisfies
$$ \begin{aligned} \hat u(x,t) ={}& \int_0^t \nabla_x u(x,s)\cdot\hat V[x,u(x,s),\circ ds] - \int_0^t \nabla_x\tilde u(x,s)\cdot\hat V[x,\tilde u(x,s),\circ ds] \\ &+ \int_0^t V_{d+1}[x,u(x,s),\circ ds] - \int_0^t V_{d+1}[x,\tilde u(x,s),\circ ds], \end{aligned} $$
which, by letting $u = \hat u + \tilde u$, can be regarded as a stochastic equation in $\hat u$ for a given $\tilde u$. By the assumed uniform boundedness of the local characteristics of the semimartingales $V_i$, as stipulated in Lemma 3.1, it can be shown that the coefficients of the resulting equation are Lipschitz continuous and of linear growth, as required for a standard existence and uniqueness theorem to hold. Since $\hat u \equiv 0$ is a solution, by uniqueness, it is the only solution. $\square$

For the existence of a global solution, we have assumed that the coefficients of the quasilinear equation (2.36) are uniformly bounded in a Hölder norm. Under some milder conditions, such as boundedness on compact sets, it is possible to show the existence of a local solution, which may explode in finite time. For instance, consider the following simple example in one space dimension:
$$ \frac{\partial u}{\partial t} + \frac{\partial u}{\partial x} = u^2\,\dot w_t, \qquad u(x,0) = u_0(x), \tag{2.47} $$
where $w_t$ is a one-dimensional Brownian motion and $u_0(x)$ is a bounded continuous function. It is readily seen that the characteristic equations are
$$ \phi_t(x) = x + t, \qquad \eta(x,t) = u_0(x) + \int_0^t \eta^2(x,s)\circ dw_s, $$
which yield $\psi_t(x) = x - t$ and
$$ \eta(x,t) = \frac{u_0(x)}{1 - u_0(x)\,w_t}. $$
So the solution of (2.47) is given by
$$ u(x,t) = \eta(x-t,\,t) = \frac{u_0(x-t)}{1 - u_0(x-t)\,w_t}. $$
For instance, suppose that $u_0 \equiv c$ is a nonzero constant. Then the solution will explode a.s. as $t$ approaches $\tau = \inf\{t>0 : w_t = 1/c\}$.
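The closed form $\eta(x,t) = u_0(x)/(1-u_0(x)\,w_t)$ can itself be verified numerically (my own sketch; the constant $u_0\equiv 0.2$ is a hypothetical choice, kept small so the simulated path stays far from the blow-up time $\tau$):

```python
import numpy as np

rng = np.random.default_rng(11)
c, T, N = 0.2, 1.0, 20_000        # u0 = c constant, small enough that 1 - c*w_t stays positive
dt = T / N
dw = rng.normal(0.0, np.sqrt(dt), N)

# Stratonovich-Heun scheme for d(eta) = eta^2 o dw
eta = c
for dwk in dw:
    pred = eta + eta**2 * dwk
    eta += 0.5 * (eta**2 + pred**2) * dwk

w_T = dw.sum()
exact = c / (1.0 - c * w_T)       # explicit solution; blows up when w_t hits 1/c
print(eta, exact)
```

For larger $u_0$ the same denominator $1-u_0 w_t$ approaches zero at the first passage of $w_t$ to $1/u_0$, which is exactly the blow-up time $\tau$ above.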
2.5 General Remarks


The material in this chapter is taken from the well-known book by Kunita [55]. For simplicity we have assumed that the random coefficients $V_i(x,t)$ involved are driven by finite-dimensional Wiener processes of the form (2.8), as was done in his original paper [54]. In fact, the results in the previous sections hold true when $V_i(\cdot,t)$ is a general continuous $C^m$-semimartingale decomposable as a bounded-variation part $B_i(x,t)$ plus a spatially dependent martingale $M_i(x,t)$. The Itô and Stratonovich line integrals can be defined in a similar fashion. By confining ourselves to finite-dimensional white noises, we showed that some first-order stochastic partial differential equations can be solved by the method of ordinary Itô equations. Most theorems were stated without proofs, which can be found in [55]. In the subsequent chapters, we shall deal with infinite-dimensional Wiener processes as random coefficients.
The method of stochastic characteristics was introduced by Krylov and Rozovsky [52] and by Kunita [54] to solve a certain type of linear parabolic Itô equations, such as the Zakai equation in nonlinear filtering theory as mentioned before. This approach, when applied to such equations, has led to a probabilistic representation of the solution such as equation (2.24). This representation is known as the stochastic Feynman–Kac formula, and it will be discussed briefly in Chapter 4. The aforementioned authors used both the forward and the backward stochastic flows in the construction of solutions. However, to avoid undue technical complication, backward stochastic integrals were not considered here.
In the deterministic case, the method of characteristics is commonly applied to first-order hyperbolic systems [88]. The generalization of the scalar equation (2.36) to a stochastic hyperbolic system has not been done. In particular, it is of interest to determine what types of random coefficients are admissible so that the Cauchy problem for a stochastic hyperbolic system is well posed. Some special types of stochastic hyperbolic systems will be treated in Chapter 5 by the so-called energy method.
3
Stochastic Parabolic Equations

3.1 Introduction

In this chapter, we shall study stochastic partial differential equations of parabolic type, or simply, stochastic parabolic equations in a bounded domain. As a relatively simple problem, consider the heat conduction over a thin rod $(0\le x\le L)$ with a constant thermal diffusivity $\kappa>0$. Let $u(x,t)$ denote the temperature distribution at point $x$ and time $t>0$ due to a randomly fluctuating heat source $q(x,t,\omega)$. Suppose both ends of the rod are maintained at the freezing temperature. Then, given an initial temperature $u_0(x)$, the temperature field $u(x,t)$ is governed by the initial-boundary value problem for the stochastic heat equation:
$$ \begin{cases} \dfrac{\partial u(x,t)}{\partial t} = \kappa\,\dfrac{\partial^2 u}{\partial x^2} + q(x,t,\omega), & x\in(0,L),\ t>0, \\ u(0,t) = u(L,t) = 0, \\ u(x,0) = u_0(x). \end{cases} \tag{3.1} $$

Suppose that the noise term $q$ is a regular random field, say,
$$ E\int_0^T\!\!\int_0^L q^2(x,t)\,dt\,dx < \infty, $$
where, by convention, $\omega$ in $q$ was omitted, and $u_0\in L^2(0,L)$. Equation (3.1) can be solved for almost every $\omega$ similar to the deterministic case. In fact, it can be solved by the well-known method of Fourier series or the eigenfunction expansion. Now let $q(\cdot,t)$ be a spatially dependent white noise. Formally, it is a generalized Gaussian random field with mean zero and the correlation function:
$$ E\{q(x,t)\,q(y,s)\} = \delta(t-s)\,r(x,y), $$
for $t,s>0$ and $x,y\in(0,L)$, where $\delta(t)$ denotes the Dirac delta function and $r(x,y)$ is the spatial correlation function. In contrast with the previous chapter, the spatially dependent white noise term, written formally as $q(x,t) = \dot W(x,t)$, is not finite-dimensional. Even for this simple problem, the mathematical meaning of the equation and the related questions, such as in what sense the problem has a solution, are no longer clear. Therefore, from now on, the whole book will be devoted exclusively to the subject of stochastic PDEs driven by the spatially dependent Wiener and Poisson noises.
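To preview the eigenfunction-expansion method in a purely numerical way (a sketch of mine under simplifying assumptions: Dirichlet sine modes on $(0,L)$ and noise acting independently on each mode with amplitudes $q_k$ — not a construction from the text), each Fourier mode of (3.1) solves a scalar Ornstein–Uhlenbeck equation $du_k = -\kappa(k\pi/L)^2 u_k\,dt + \sqrt{q_k}\,dw_t^k$, which can be stepped exactly through its Gaussian transition density:

```python
import numpy as np

rng = np.random.default_rng(5)
kappa, L, T, K, steps = 0.1, 1.0, 2.0, 20, 400
dt = T / steps
lam = kappa * (np.arange(1, K + 1) * np.pi / L) ** 2    # decay rate of mode k
q = 1.0 / np.arange(1, K + 1) ** 2                      # assumed noise amplitude per mode

u = np.zeros(K)
u[0] = 1.0                                              # u0(x) = sqrt(2/L) sin(pi x / L)
decay = np.exp(-lam * dt)
noise_std = np.sqrt(q * (1.0 - decay**2) / (2.0 * lam)) # exact OU transition std
for _ in range(steps):
    u = decay * u + noise_std * rng.normal(size=K)

x = np.linspace(0, L, 101)
modes = np.sqrt(2.0 / L) * np.sin(np.outer(np.arange(1, K + 1) * np.pi / L, x))
u_field = u @ modes                                     # reconstructed u(x, T)
print(u_field.shape)
```

The reconstructed field automatically satisfies the Dirichlet boundary conditions, since every sine mode vanishes at $x=0$ and $x=L$.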
In the next section, we shall first present some basic facts concerning the
elliptic operator and define the Wiener and Poisson random fields and the
related stochastic integrals. Then the heat equation driven by such noises will
be solved by the method of eigenfunction expansion. The results will be used to
reveal the regularity properties of the solution. In the subsequent sections, the
existence, uniqueness, and the regularity questions for linear and semilinear
stochastic parabolic equations will be studied.

3.2 Preliminaries

Let $D\subset\mathbb R^d$ be a bounded domain with a smooth boundary $\partial D$. Denote $L^2(D)$ by $H$ with inner product $(\cdot,\cdot)$ and norm $\|\cdot\|$. For any integer $k\ge 0$, let $H^k$ denote the $L^2$-Sobolev space of order $k$, which consists of $L^2$-functions with $k$-times generalized partial derivatives, with norm
$$ \|\phi\|_k = \Big\{\sum_{j=0}^k\sum_{i=1}^d \int_D \Big|\frac{\partial^j\phi(x)}{\partial x_i^j}\Big|^2\,dx\Big\}^{1/2}. \tag{3.2} $$
Let $H^{-k}$ denote the dual of $H^k$ with norm $\|\cdot\|_{-k}$, and let the duality pairing between $H^k$ and $H^{-k}$ be denoted by $\langle\cdot,\cdot\rangle_k$. In particular, we set $\langle\cdot,\cdot\rangle_1 = \langle\cdot,\cdot\rangle$, $H^0 = H$ and $\langle\cdot,\cdot\rangle_0 = (\cdot,\cdot)$.
Consider the second-order elliptic operator
$$ \mathcal Lu = \sum_{j,k=1}^d a_{jk}(x)\,\frac{\partial^2 u}{\partial x_j\,\partial x_k} + \sum_{j=1}^d a_j(x)\,\frac{\partial u}{\partial x_j} + c(x)\,u \tag{3.3} $$
in the domain $D$. Assume that $a_{jk} = a_{kj}$ and, for simplicity, that all of the coefficients $a_{jk}, a_j, c$ are $C_b^m$-smooth with $m\ge 2$ in the closure $\bar D$ of $D$. Then, clearly, we can rewrite the operator $\mathcal L$ in the divergence form:
$$ \mathcal Lu = \sum_{j,k=1}^d \frac{\partial}{\partial x_j}\Big[a_{jk}(x)\,\frac{\partial u}{\partial x_k}\Big] + \sum_{j=1}^d b_j(x)\,\frac{\partial u}{\partial x_j} + c(x)\,u, $$
where $b_j(x) = a_j(x) - \sum_{k=1}^d \dfrac{\partial a_{jk}(x)}{\partial x_k}$. The operator $\mathcal L$ is said to be (uniformly) strongly elliptic if there exist constants $\alpha_1\ge\alpha_0>0$ such that, for any vector $\xi\in\mathbb R^d$,
$$ \alpha_0|\xi|^2 \le \sum_{j,k=1}^d a_{jk}(x)\,\xi_j\xi_k \le \alpha_1|\xi|^2. \tag{3.4} $$
In particular, by dropping the first-order terms, this yields the operator $A = A(x,\partial_x)$ given by
$$ Au = \sum_{j,k=1}^d \frac{\partial}{\partial x_j}\Big[a_{jk}(x)\,\frac{\partial u}{\partial x_k}\Big] + c(x)\,u. \tag{3.5} $$
Let $C_0^\infty(D)$ denote the set of $C^\infty$-functions on $D$ with compact support. $A$ is a formally self-adjoint, strongly elliptic operator in the sense that, for $\phi,\varphi\in C_0^\infty$, we have $(A\phi,\varphi) = (\phi,A\varphi)$. This follows from an integration by parts:
$$ \int_D \Big\{\sum_{j,k=1}^d \frac{\partial}{\partial x_j}\Big[a_{jk}(x)\,\frac{\partial\phi(x)}{\partial x_k}\Big]\varphi(x) + c(x)\,\phi(x)\,\varphi(x)\Big\}\,dx = \int_D \Big\{\sum_{j,k=1}^d \frac{\partial}{\partial x_j}\Big[a_{jk}(x)\,\frac{\partial\varphi(x)}{\partial x_k}\Big]\phi(x) + c(x)\,\phi(x)\,\varphi(x)\Big\}\,dx. $$

Let $H_0^m$ denote the closure of $C_0^\infty(D)$ in $H^m$. By making use of the ellipticity condition (3.4), the boundedness of $c(x)$, and an integration by parts, there exists a constant $\delta>0$ such that, for any $\phi\in H_0^1$,
$$ \langle (A-\beta I)\phi,\phi\rangle \le -\delta\|\phi\|_1^2, \tag{3.6} $$
for any $\beta > c_0$ with $c_0 = \sup_{x\in D}|c(x)|$, where, by setting $k=1$ in (3.2), the $H^1$-norm is given by
$$ \|\phi\|_1^2 = \int_D \Big\{\sum_{j=1}^d \Big|\frac{\partial\phi(x)}{\partial x_j}\Big|^2 + |\phi(x)|^2\Big\}\,dx. \tag{3.7} $$
In view of (3.6), by definition, the linear operator $A: H_0^1\to H^{-1}$ is said to satisfy the coercivity condition.
Now consider the elliptic eigenvalue problem:
$$ A(x,\partial_x)\,\phi(x) = -\mu\,\phi(x), \quad x\in D; \qquad \mathcal B\phi(x) = 0, \quad x\in\partial D, \tag{3.8} $$
where $\mathcal B$ denotes a boundary operator. It is assumed to be one of the three types: $\mathcal B\phi = \phi$ for the Dirichlet boundary condition, $\mathcal B\phi = \frac{\partial\phi}{\partial n}$ for the Neumann boundary condition, and $\mathcal B\phi = \frac{\partial\phi}{\partial n} + \beta\phi$ for the mixed boundary condition, where $\frac{\partial}{\partial n}$ denotes the outward normal derivative to $\partial D$ and $\beta>0$. In the variational formulation, regarding $A$ as a linear operator in $H$ with domain $H^2\cap H_0^1$, the eigenvalue problem (3.8) can be written simply as
$$ A\phi = -\mu\phi, \tag{3.9} $$
where $A$ is self-adjoint and coercive. The following theorem is the basis for the method of eigenfunction expansion, which will be employed later on (see Section 6.5, [28] or Theorem 7.22, [31]).

Theorem 2.1 For the eigenvalue problem (3.8) or (3.9), there exists a sequence of real eigenvalues $\{\mu_k\}$ of finite multiplicity such that
$$ -\infty < \mu_1 \le \mu_2 \le \cdots \le \mu_k \le \cdots, $$
and $\mu_k\to\infty$ as $k\to\infty$. Corresponding to each eigenvalue $\mu_k$, there is an eigenfunction $e_k$ such that the set $\{e_k,\ k=1,2,\dots\}$ forms a complete orthonormal basis for $H$. Moreover, if the coefficients of $A$ are $C^\infty$-smooth, so is the eigenfunction $e_k$ for each $k\ge 1$.

Suppose that $A$ is strictly negative, that is, there is $\delta>0$ such that $\langle -A\phi,\phi\rangle \ge \delta\|\phi\|^2$ for all $\phi\in H_0^1$. Then clearly we have all eigenvalues $\mu_k\ge\delta$. As a special case, let $A$ be given by
$$ A = -(\alpha - \kappa\Delta), \tag{3.10} $$
which is clearly a strictly negative, self-adjoint, strongly elliptic operator for $\alpha,\kappa>0$. Therefore, as far as the method of eigenfunction expansion is concerned, the elliptic operators defined by (3.5) and (3.10) are similar. For the ease of discussion, the elliptic operator to appear in the parabolic and hyperbolic equations will be taken to be of the special form (3.10) in the subsequent analysis. For a comprehensive discussion of elliptic equations and basic solution properties, one is referred to the books by Evans [28] and Folland [31].

Remark: In view of (3.10) and (3.7), by an integration by parts,
$$ (\alpha\wedge\kappa)\,\|\phi\|_1^2 \le \langle -A\phi,\phi\rangle \le (\alpha+\kappa)\,\|\phi\|_1^2. $$
On the other hand, by the eigenfunction expansion, we can define a norm (denoted here by $\|\cdot\|_A$) by
$$ \|\phi\|_A^2 := \langle -A\phi,\phi\rangle = \sum_{k=1}^\infty \mu_k\,(\phi,e_k)^2. $$
Clearly, the norms $\|\cdot\|_1$ and $\|\cdot\|_A$ are equivalent, and we shall often make no distinction between them.
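For the model operator (3.10) on the interval $(0,1)$ with Dirichlet conditions, the eigenvalues in the convention $A\phi = -\mu\phi$ are $\mu_k = \alpha + \kappa(k\pi)^2$, with eigenfunctions $\sqrt2\sin(k\pi x)$. A finite-difference check (my own sketch, with hypothetical values $\alpha=1$, $\kappa=0.5$):

```python
import numpy as np

alpha, kappa, n = 1.0, 0.5, 400            # hypothetical coefficients; grid on (0, 1)
h = 1.0 / n
main = (alpha + 2.0 * kappa / h**2) * np.ones(n - 1)
off = (-kappa / h**2) * np.ones(n - 2)
M = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # matrix of -A on interior points

mu = np.sort(np.linalg.eigvalsh(M))[:3]                  # smallest three eigenvalues of -A
exact = alpha + kappa * (np.arange(1, 4) * np.pi) ** 2   # mu_k = alpha + kappa*(k*pi)^2
print(mu)
print(exact)
```

The discretization error is $O(h^2)$, so the computed $\mu_k$ agree with $\alpha+\kappa(k\pi)^2$ to a fraction of a percent at this resolution.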

Let $R: H\to H$ be a linear integral operator with a symmetric kernel $r(x,y) = r(y,x)$, defined by
$$ (R\phi)(x) = \int_D r(x,y)\,\phi(y)\,dy, \quad x\in D. $$
If $r$ is square-integrable, so that
$$ \int_D\!\int_D |r(x,y)|^2\,dx\,dy < \infty, $$
then $R$ on $H$ is a self-adjoint Hilbert–Schmidt operator, which is compact, and the eigenvalues $\lambda_k$ of $R$ are positive and such that [96]
$$ \sum_{k=1}^\infty \lambda_k^2 < \infty, $$
and the normalized eigenfunctions $\phi_k$ form a complete orthonormal basis for $H$. The Hilbert–Schmidt norm $\|\cdot\|_{\mathcal L_2}$ is defined by
$$ \|R\|_{\mathcal L_2}^2 = \int_D\!\int_D |r(x,y)|^2\,dx\,dy = \sum_{k=1}^\infty \lambda_k^2. $$
In case the eigenvalues $\lambda_k$ go to zero faster, so that
$$ \sum_{k=1}^\infty \lambda_k < \infty, $$
the operator $R$ is said to be a nuclear (or trace class) operator. The norm of such an operator is given by its trace:
$$ \|R\|_{\mathcal L_1} = \operatorname{Tr}R = \sum_{k=1}^\infty \lambda_k = \int_D r(x,x)\,dx. $$

Let $\{w_t^k\}$ be a sequence of independent, identically distributed standard Brownian motions in one dimension. Assume that $R$ is a positive nuclear operator on $H$ defined as above. Define the random field:
$$ W_t^n = W^n(\cdot,t) = \sum_{k=1}^n \sqrt{\lambda_k}\,w_t^k\,\phi_k. \tag{3.11} $$
It is easy to check that $W^n(\cdot,t)$ is a Gaussian random field in $H$ with mean $E\,W^n(\cdot,t) = 0$ and the covariance:
$$ E\{W^n(x,t)\,W^n(y,s)\} = r_n(x,y)\,(t\wedge s), \tag{3.12} $$
where
$$ r_n(x,y) = \sum_{k=1}^n \lambda_k\,\phi_k(x)\,\phi_k(y). $$
Note that, for $n>m\ge 1$,
$$ \|W_t^n - W_t^m\|^2 = \sum_{k=m+1}^n \lambda_k\,(w_t^k)^2 $$
is an $\mathcal F_t$-submartingale. By the B-D-G inequality given by Theorem 1-4.3 (for the notation, see the footnote on p. 24), we get
$$ E\sup_{0\le t\le T}\|W_t^n - W_t^m\|^2 \le C\,E\,\|W_T^n - W_T^m\|^2 = CT\sum_{k=m+1}^n \lambda_k, $$
which goes to zero as $n,m\to\infty$. Therefore, the sequence $\{W_t^n\}$ of $H$-valued Wiener processes converges in $L^2(\Omega;C([0,T];H))$ to a limit $W_t$ given by
$$ W_t = \lim_{n\to\infty} W_t^n = \sum_{k=1}^\infty \sqrt{\lambda_k}\,w_t^k\,\phi_k. \tag{3.13} $$

In the meantime, the integral operator $R_n$ with kernel $r_n$ converges to $R$ in the trace norm. Hence $W_t$ is called a Wiener process in $H$ with covariance operator $R$, or an $R$-Wiener process in $H$. In fact, it can be shown that the series in (3.13) converges uniformly with probability one (Theorem 6-2.5). An $H$-valued process $X_t$ is called a Gaussian process in $H$ if, for each $g\in H$, $(X_t,g)$ is a real-valued Gaussian process. In this sense $W_t$ is a Gaussian process in $H$ with covariance operator $R$.
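A truncated $R$-Wiener process (3.11) is straightforward to simulate. The sketch below is my own illustration with assumed spectral data $\lambda_k = k^{-2}$ and $\phi_k(x) = \sqrt2\sin(k\pi x)$ on $D=(0,1)$ (so that $R$ is nuclear), together with a Monte Carlo check of the covariance identity (3.12):

```python
import numpy as np

rng = np.random.default_rng(1)
K, t, paths = 50, 2.0, 20_000
k = np.arange(1, K + 1)
lam = 1.0 / k**2                                   # assumed eigenvalues (sum finite: nuclear)
x, y = 0.3, 0.6
phi_x = np.sqrt(2.0) * np.sin(k * np.pi * x)
phi_y = np.sqrt(2.0) * np.sin(k * np.pi * y)

# W^n(x, t) = sum_k sqrt(lam_k) w_t^k phi_k(x), with w_t^k ~ N(0, t) independent
wt = rng.normal(0.0, np.sqrt(t), size=(paths, K))
Wx = wt @ (np.sqrt(lam) * phi_x)
Wy = wt @ (np.sqrt(lam) * phi_y)

emp = np.mean(Wx * Wy)                             # Monte Carlo E W(x,t) W(y,t)
r_xy = np.sum(lam * phi_x * phi_y)                 # truncated kernel r_n(x, y)
print(emp, t * r_xy)
```

The sample covariance at two fixed points matches $t\,r_n(x,y)$ up to Monte Carlo error, as (3.12) predicts with $s=t$.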

Theorem 2.2 The $R$-Wiener process $W_t$, $t\ge 0$, is a continuous $H$-valued Gaussian process with $W_0 = 0$ such that:

(1) For any $s,t\ge 0$ and $g,h\in H$,
$$ E\,(W_t,g) = 0, \qquad E\,(W_t,g)(W_s,h) = (t\wedge s)\,(Rg,h). $$

(2) $W_t$ has stationary independent increments.

(3) $W_t$, $t\ge 0$, is a continuous $L^2$-martingale with values in $H$, and there exists $C$ such that
$$ E\sup_{0\le t\le T}\|W_t\|^2 \le C\,T\,(\operatorname{Tr}R), $$
for any $T>0$.

(4) $W(x,t)$ is a continuous random field for $x\in\bar D$, $t\ge 0$, if the covariance function $r(x,y)$ is Hölder-continuous in $\bar D\times\bar D$.

Proof. Inherited from the properties of its finite-sum approximation Wtn ,


the properties (1) and (2) can be easily verified. Since Wtn is a continuous
H-valued L2 -martingale, as mentioned before, it converges to Wt in L2 (; H)
uniformly in t [0, T ]. Hence Wt is a continuous L2 -martingale in H and, by
a Doobs inequality for submartingale (Theorem 1-2.2),

E sup kWt k2 CE kWT k2 = CT (T r R),


0tT

for some positive constant C. To verify property (4), we notice that, for any
x, y D, the Gaussian process M (x, y, t) = W (x, t) W (y, t) is a martingale
with quadratic variation [M (x, y, )]t = t{r(x, x) 2r(x, y) + r(y, y)}. Hence,
Stochastic Parabolic Equations 45

by the B-D-G inequality,
$$\begin{aligned}
E \sup_{0\le t\le T} |W(x,t) - W(y,t)|^{2p} &= E \sup_{0\le t\le T} |M(x,y,t)|^{2p} \\
&\le C_1\, E\,[M(x,y,\cdot)]_T^p \\
&\le C_2\, T\, |r(x,x) - 2r(x,y) + r(y,y)|^p \\
&\le C_3(p,T)\,|x - y|^{\alpha p},
\end{aligned}$$
where $C_1, C_2, C_3$ are some positive constants and $\alpha$ is the Hölder exponent for $r$. Therefore, by taking $p > d/\alpha$ and applying Kolmogorov's continuity criterion for random fields (Theorem 1.4.1, [55]), we can conclude that, as a real-valued random field, $W(x,t)$ is continuous in $\bar D \times [0,T]$ as asserted. $\square$
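As a supplementary numerical sketch (an illustration, not part of the text), the truncated series defining $W_t^n$ can be simulated directly. The domain and coefficients below are assumptions: since the $e_k$ are orthonormal in $H$, $\|W_T^n\|^2 = \sum_k \sigma_k^2 (w_T^k)^2$, so the Monte Carlo mean of the squared norm should approximate $T\,\mathrm{Tr}\,R_n$, in line with property (3).

```python
import numpy as np

# Hedged sketch: truncated R-Wiener series W_t^n = sum_k sigma_k w_t^k e_k,
# with assumed coefficients sigma_k = 1/k (so Tr R_n = sum sigma_k^2 < inf).
rng = np.random.default_rng(0)
n_modes, n_steps, n_paths, T = 20, 200, 4000, 1.0
dt = T / n_steps
sigma = 1.0 / np.arange(1, n_modes + 1)

# Standard one-dimensional Wiener paths w_t^k from Gaussian increments.
dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_modes, n_steps))
w_T = dw.sum(axis=2)                       # terminal values w_T^k per path

# Orthonormality of {e_k} gives ||W_T^n||^2 = sum_k sigma_k^2 (w_T^k)^2.
norm2 = (sigma**2 * w_T**2).sum(axis=1)
trace_R = (sigma**2).sum()                 # Tr R_n
est = norm2.mean()                         # Monte Carlo E ||W_T^n||^2
```

The estimate `est` should land near `T * trace_R`; the specific $\sigma_k$ are illustrative only.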

In Chapter 2, some stochastic integrals with respect to a finite-dimensional Wiener process were introduced. Here we shall introduce a stochastic integral in $H$ with respect to the Wiener random field $W_t$. To this end, let $f(x,t)$ be a continuous, $\mathcal{F}_t$-adapted random field such that
$$E \int_0^T \|f(\cdot,t)\|^2\, dt = E \int_0^T\!\!\int_D |f(x,t)|^2\, dx\, dt < \infty. \tag{3.14}$$
We first consider the stochastic integral of $f$ in $H$ with respect to a one-dimensional Wiener process $w_t = w_t^1$ given by
$$I(x,t) = \int_0^t f(x,s)\, dw_s.$$
By convention we write $I_t = I(\cdot,t)$, $f_t = f(\cdot,t)$, etc.; this integral can be written as
$$I_t = \int_0^t f_s\, dw_s. \tag{3.15}$$
Similar to the usual Itô integral, under condition (3.14), this integral can be defined as the limit in $L^2(\Omega; H)$ of an approximating sum:
$$I_t = \lim_{|\pi|\to 0} \sum_{j=0}^{n-1} f_{t_j \wedge t}\,\big[w_{t_{j+1}\wedge t} - w_{t_j \wedge t}\big],$$
where $|\pi|$ is the mesh of the partition $\pi = \{0 = t_0 < t_1 < t_2 < \cdots < t_n = T\}$. It is straightforward to show that $I_t$ thus defined, for $t \in [0,T]$, is a continuous $L^2$-martingale in $H$ with mean zero and
$$E\,\|I_t\|^2 = E \int_0^t \|f_s\|^2\, ds < \infty. \tag{3.16}$$
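The approximating sum and the isometry (3.16) can be checked numerically. The sketch below is an assumption-laden illustration: it takes the deterministic integrand $f(x,s) = s\sin x$ on $D = (0,\pi)$, forms the partition sum for $I_T$ on a uniform mesh, and compares the Monte Carlo mean of $\|I_T\|^2$ with $\int_0^T \|f_s\|^2\, ds$.

```python
import numpy as np

# Hedged sketch: Riemann-Ito partition sum for I_T with mesh dt, for the
# assumed integrand f(s, x) = s * sin(x) on D = (0, pi).
rng = np.random.default_rng(1)
n_x, n_steps, n_paths, T = 64, 400, 4000, 1.0
x = np.linspace(0.0, np.pi, n_x)
dx = x[1] - x[0]
dt = T / n_steps
t_grid = np.arange(n_steps) * dt

f = t_grid[:, None] * np.sin(x)[None, :]     # f(t_j, x)
dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))

I_T = dw @ f                                  # sum_j f(t_j, .) [w_{j+1} - w_j]
lhs = ((I_T**2).sum(axis=1) * dx).mean()      # Monte Carlo E ||I_T||^2
rhs = ((f**2).sum(axis=1) * dx * dt).sum()    # int_0^T ||f_s||^2 ds (same grid)
```

Both sides use the same grid quadrature, so `lhs` should match `rhs` up to Monte Carlo error.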

Before defining the stochastic integral of $f_t$ with respect to the Wiener random field $W_t$, we first consider the case of a bounded integrand. To this end, let $\sigma(x,t)$ be an essentially bounded, continuous adapted random field such that
$$E \int_0^T \sup_{x\in D} |\sigma(x,t)|^2\, dt < \infty. \tag{3.17}$$

For any orthonormal eigenfunction $e_k$ of $R$, we set
$$\sigma^k(x,t) = \sigma(x,t)\, e_k(x).$$
Then, in view of (3.17), $\sigma^k$ satisfies the condition (3.14), since
$$E \int_0^T \|\sigma^k(\cdot,t)\|^2\, dt = E \int_0^T\!\!\int_D |\sigma(x,t)\, e_k(x)|^2\, dx\, dt \le E \int_0^T \sup_{x\in D} |\sigma(x,t)|^2\, dt < \infty.$$
Therefore, the $H$-valued stochastic integral
$$I_t^k = \int_0^t \sigma_s^k\, dw_s^k = \Big(\int_0^t \sigma_s\, dw_s^k\Big)\, e_k$$
is well defined and it satisfies
$$E\,\|I_t^k\|^2 \le E \int_0^T \sup_{x\in D} |\sigma(x,t)|^2\, dt, \tag{3.18}$$
for any $k$ and $t \in [0,T]$.


Let $J_t^n$ denote the stochastic integral defined by
$$J_t^n = \int_0^t \sigma_s\, dW_s^n = \sum_{k=1}^n \sigma_k \Big(\int_0^t \sigma_s\, dw_s^k\Big)\, e_k = \sum_{k=1}^n \sigma_k\, I_t^k.$$
In view of (3.17) and (3.18), for $n > m$, there is a constant $C > 0$ such that
$$E\,\|J_t^m - J_t^n\|^2 = \sum_{k=m+1}^n \sigma_k^2\, E\,\|I_t^k\|^2 \le C \sum_{k=m+1}^n \sigma_k^2,$$
which goes to zero as $n > m \to \infty$. Therefore, the sequence $\{J_t^n\}$ of continuous $L^2$-martingales in $H$ converges to a limit $J_t$ for $t \in [0,T]$. We define
$$J_t = \int_0^t \sigma_s\, dW_s \tag{3.19}$$
as the stochastic integral of $\sigma$ with respect to the $R$-Wiener process in $H$. In fact, it can be shown that the following theorem holds.

Theorem 2.3 Let $\sigma_t = \sigma(\cdot,t)$ be a continuous adapted random field satisfying the property (3.17). The stochastic integral $J_t$ given by (3.19) is a continuous process in $H$ satisfying the following properties:

(1) For any $g, h \in H$, we have $E\,(J_t, g) = 0$, and
$$E\{(J_t, g)(J_s, h)\} = E \int_0^{t\wedge s} (Q_\tau\, g, h)\, d\tau, \tag{3.20}$$
where $Q_t$ is defined by
$$(Q_t\, g, h) = \int_D\!\!\int_D q(x,y,t)\, g(x)\, h(y)\, dx\, dy \tag{3.21}$$
with $q(x,y,t) = r(x,y)\,\sigma(x,t)\,\sigma(y,t)$.

(2) In particular, the following holds:
$$E\,\|J_t\|^2 = E\,(J_t, J_t) = E \int_0^t \mathrm{Tr}\, Q_s\, ds, \tag{3.22}$$
where $\mathrm{Tr}\, Q_t = \int_D q(x,x,t)\, dx$.

(3) $J_t$ is a continuous $L^2$-martingale in $H$ with local covariation operator $Q_t$ defined by
$$\langle (J_\cdot, g), (J_\cdot, h)\rangle_t = \int_0^t (Q_s\, g, h)\, ds, \tag{3.23}$$
or, simply, $[J]_t = \int_0^t Q_s\, ds$.

The local covariation operator $Q_t$ for an $H$-valued martingale $J_t$ will also be called a local characteristic operator, and the local characteristic $q(x,y,t)$, a local covariation function. More generally, we need to define a stochastic integral $J_t$ with integrand $\sigma_t \in H$ satisfying condition (3.14) instead of (3.17). For simplicity, we will show this is possible under some conditions stronger than necessary, namely, that the integrand is a continuous adapted process and the covariance function of the Wiener random field is bounded.

Theorem 2.4 Let $\sigma_t = \sigma(\cdot,t) \in H$ be a continuous adapted random field satisfying the condition
$$E \int_0^T \|\sigma(\cdot,t)\|^2\, dt = E \int_0^T\!\!\int_D |\sigma(x,t)|^2\, dx\, dt < \infty. \tag{3.24}$$
If the covariance function $r(x,y)$ is bounded such that
$$r_0 = \sup_{x\in D} r(x,x) < \infty,$$
then the stochastic integral $J_t$ given by (3.19) is well defined and has the same properties as depicted in Theorem 2.3.

Proof. Since the set of $C_0^\infty$-functions on $D$ is dense in $H$, there exists a sequence $\{\sigma_t^n\}$ of continuous adapted $C_0^\infty$-random fields satisfying condition (3.17) such that
$$\lim_{n\to\infty} E \int_0^T \|\sigma_t^n - \sigma_t\|^2\, dt = 0. \tag{3.25}$$
By Theorem 2.3, we can define
$$J_t^n = \int_0^t \sigma_s^n\, dW_s,$$
and, for $n > m$,
$$E\,\|J_t^n - J_t^m\|^2 = E\,\Big\|\int_0^t (\sigma_s^n - \sigma_s^m)\, dW_s\Big\|^2 = E \int_0^t \mathrm{Tr}\, Q_s^{mn}\, ds,$$
where
$$\mathrm{Tr}\, Q_s^{mn} = \int_D q^{mn}(x,x,s)\, dx = \int_D r(x,x)\,[\sigma^n(x,s) - \sigma^m(x,s)]^2\, dx \le r_0\, \|\sigma_s^n - \sigma_s^m\|^2.$$
It follows that
$$E\,\|J_t^n - J_t^m\|^2 \le r_0\, E \int_0^T \|\sigma_t^n - \sigma_t^m\|^2\, dt,$$
which goes to zero as $n > m \to \infty$ due to (3.25). Therefore, the sequence $\{J_t^n\}$ converges to the limit denoted by $J_t$ as in (3.19). We can check that all its properties given in Theorem 2.3 are valid. This completes the proof. $\square$

Remarks:

(1) Here we restricted the definition of $J_t$ to continuous adapted integrands and a bounded covariance. The definition can be extended to the case with weaker conditions, such as a right-continuous integrand and an $L^2$-bounded covariance function.

(2) Notice that
$$\mathrm{Tr}\, Q_t = \int_D q(x,x,t)\, dx = \int_D r(x,x)\,\sigma^2(x,t)\, dx \le r_0\,\|\sigma_t\|^2,$$
which suggests that we may write $\mathrm{Tr}\, Q_t = \|\sigma_t\|_R^2$.

(3) Let $\Phi_t: H \to H$ be a continuous linear operator for $t \in [0,T]$ such that $\|\Phi_t h\| \le C\,\|h\|$ for some $C > 0$. Then, similar to the proof of Theorem 2.4, the stochastic evolution integral $\int_0^t \Phi_{t-s}\,\sigma_s\, dW_s$ can be defined for each $t \in [0,T]$, as will be shown in Section 6.3.
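The trace identity in Remark (2) admits a small deterministic check in a truncated model. Everything below is an assumed illustration: a kernel $r(x,y)$ built from a finite eigenfunction sum on $D = (0,\pi)$ and a fixed integrand $\sigma(x,t)$, for which $\mathrm{Tr}\, Q_t = \int_D r(x,x)\sigma^2(x,t)\,dx \le r_0\|\sigma_t\|^2$ grid-wise.

```python
import numpy as np

# Hedged sketch: truncated kernel r(x,x) = sum_k s_k^2 e_k(x)^2 on (0, pi)
# with assumed eigenvalues s_k^2 of R, and an assumed integrand sigma(x, t).
n_x, n_modes = 400, 10
x = np.linspace(0.0, np.pi, n_x)
dx = x[1] - x[0]

e = np.sqrt(2.0 / np.pi) * np.sin(np.outer(np.arange(1, n_modes + 1), x))
s_k = 1.0 / np.arange(1, n_modes + 1)
r_xx = (s_k[:, None]**2 * e**2).sum(axis=0)     # diagonal of the kernel
sigma_x = 1.0 + 0.5 * np.cos(x)                 # sigma(., t) at a fixed t

trace_Q = (r_xx * sigma_x**2).sum() * dx        # int r(x,x) sigma^2 dx
r0 = r_xx.max()
bound = r0 * (sigma_x**2).sum() * dx            # r0 * ||sigma_t||^2
```

Here `trace_Q` stays below `bound`, mirroring $\mathrm{Tr}\,Q_t \le r_0\|\sigma_t\|_{}^2$.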

Now we turn to the Lévy process and the related stochastic integral in a Hilbert space. As in the finite-dimensional case discussed in Section 1.6, the main ingredients in building the Lévy process in a Hilbert space $H$ consist of a Wiener process $W_t$ and a Poisson random field $P_t$ in $H$ (p. 53, [78]). For the latter, without undue complication, we shall assume that the Poisson field $P_t(x) = P(x,t)$, $x \in \mathbb{R}^d$, is generated by a Poisson random measure over $\mathcal{B}(\mathbb{R}_0^m)$ as introduced in Section 1.6. Recall that $N(t,B) = N([0,t],B)$, $t \ge 0$, denotes a Poisson random measure of the set $B \in \mathcal{B}(\mathbb{R}_0^m)$ with the Lévy measure $\mu$ such that $\mu(\{0\}) = 0$ and $\int_{\mathbb{R}_0^m} |\xi|^2\,\mu(d\xi) < \infty$. For a given $B$, $N(t,B)$ is a Poisson process with intensity $\mu(B)$ and the mean
$$E\, N(t,B) = t\,\mu(B).$$
The compensated Poisson random measure
$$\tilde N(t,B) = N(t,B) - t\,\mu(B) \tag{3.26}$$
has zero mean and
$$E\,\{\tilde N(t,B_1)\,\tilde N(s,B_2)\} = (t\wedge s)\,\mu(B_1 \cap B_2),$$
for any $t, s \ge 0$ and $B_1, B_2 \subset \mathbb{R}_0^m$.
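The covariance identity for the compensated measure (3.26) can be illustrated by Monte Carlo. The setup below is an assumption: $\mu$ is Lebesgue measure on $(0,1)$, $B_1 = (0, 0.6)$, $B_2 = (0.4, 1.0)$; since counts over disjoint mark intervals are independent Poisson variables, $N(t,B_i)$ can be assembled additively.

```python
import numpy as np

# Hedged sketch: E N~(t,B1) N~(t,B2) = t * mu(B1 ∩ B2), with assumed
# mu = Lebesgue measure on (0,1) and overlap (0.4, 0.6) of mass 0.2.
rng = np.random.default_rng(2)
t, n_paths = 2.0, 200000
a = rng.poisson(t * 0.4, n_paths)   # marks in (0.0, 0.4)
c = rng.poisson(t * 0.2, n_paths)   # marks in (0.4, 0.6) = B1 ∩ B2
b = rng.poisson(t * 0.4, n_paths)   # marks in (0.6, 1.0)

n1 = a + c                          # N(t, B1), mean t * mu(B1) = 0.6 t
n2 = c + b                          # N(t, B2), mean 0.6 t
est = ((n1 - 0.6 * t) * (n2 - 0.6 * t)).mean()
target = t * 0.2                    # t * mu(B1 ∩ B2)
```

The product of compensated counts has mean `target`, carried entirely by the shared overlap counts.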


Now we shall introduce the Poisson random field $P_t$ as a stochastic integral in $H$ with respect to the Poisson random measure. To this end, for $x \in D \subset \mathbb{R}^d$, $\xi \in \mathbb{R}^m$, let $\Phi(x,t,\xi)$ be an $\mathcal{F}_t$-adapted random field such that
$$E \int_0^T\!\!\int_B \|\Phi_s(\xi)\|^2\, ds\,\mu(d\xi) < \infty, \tag{3.27}$$
where, by convention, we set $\Phi_t(\xi) = \Phi(\cdot,t,\xi)$ and let $B \subset \mathbb{R}_0^m$ be a bounded open set. In particular, we may take $B = \{\xi \in \mathbb{R}^m \,|\, 0 < |\xi| < 1\}$. Define
$$P(x,t) = \int_0^t\!\!\int_B \Phi(x,s,\xi)\, N(ds, d\xi),$$
or
$$P_t(\Phi) = \int_0^t\!\!\int_B \Phi_s(\xi)\, N(ds, d\xi). \tag{3.28}$$
Since $N(t,B) = \tilde N(t,B) + t\,\mu(B)$, we can write
$$P_t = J_t + M_t,$$
where
$$J_t(\Phi) = \int_0^t\!\!\int_B \Phi_s(\xi)\, ds\,\mu(d\xi), \qquad M_t(\Phi) = \int_0^t\!\!\int_B \Phi_s(\xi)\, \tilde N(ds, d\xi).$$

Let $\mathcal{M}_T^2$ denote the Hilbert space of all $\mathcal{F}_t$-adapted processes $X_t$ in $H$ with norm $\|X\|_T$ such that
$$\|X\|_T^2 = E \int_0^T \|X_t\|^2\, dt < \infty.$$
It is relatively easy to show that the ordinary integral $J_t$ can be defined as an $H$-valued process in $\mathcal{M}_T^2$. To define the stochastic integral $M_t$, let $\{e_j\}$ be an orthonormal basis for $H$ and set $\Phi_t^j = (\Phi_t, e_j)$. Then the real-valued Poisson integral
$$m_t^j(\Phi) = \int_0^t\!\!\int_B \Phi_s^j(\xi)\, \tilde N(ds, d\xi)$$
is well defined. It is a martingale with mean zero and
$$E\,|m_t^j(\Phi)|^2 = E \int_0^t\!\!\int_B |\Phi_s^j(\xi)|^2\, ds\,\mu(d\xi).$$
Now we define
$$M_t^n(\Phi) = \sum_{j=1}^n m_t^j(\Phi)\, e_j = \int_0^t\!\!\int_B \Big[\sum_{j=1}^n \Phi_s^j(\xi)\, e_j\Big]\, \tilde N(ds, d\xi). \tag{3.29}$$
Then it is easy to show that $E\, M_t^n(\Phi) = 0$ and
$$E\,\|M_t^n(\Phi)\|^2 = E \int_0^t\!\!\int_B \Big[\sum_{j=1}^n |\Phi_s^j(\xi)|^2\Big]\, ds\,\mu(d\xi) \le E \int_0^T\!\!\int_B \|\Phi_s(\xi)\|^2\, ds\,\mu(d\xi). \tag{3.30}$$
This implies that the sequence $\{M_t^n(\Phi)\}$ converges in $\mathcal{M}_T^2$ to a limit $M_t(\Phi)$. We then define this limit as the Poisson stochastic integral
$$M_t(\Phi) = \int_0^t\!\!\int_B \Phi(s,\xi)\, \tilde N(ds, d\xi) \tag{3.31}$$
of $\Phi$ with respect to the compensated Poisson random measure. Since $M_t^n(\Phi)$ is a square-integrable martingale, so is the limit $M_t(\Phi)$. Similar to the Wiener integral, it can be shown that $M_t$ is mean-square continuous. We thus arrive at the following theorem:

Theorem 2.5 If $\Phi \in \mathcal{M}_T^2$, then the integral $P_t(\Phi)$, $t \in [0,T]$, given by (3.28), is a square-integrable $\mathcal{F}_t$-adapted process in $H$, which is mean-square continuous with mean
$$E\, P_t(\Phi) = E \int_0^t\!\!\int_B \Phi_s(\xi)\, ds\,\mu(d\xi).$$
Moreover, the deviation $M_t(\Phi) = [P_t(\Phi) - E\, P_t(\Phi)]$ is an $\mathcal{F}_t$-martingale with the covariance
$$E\,\|M_t(\Phi)\|^2 = E \int_0^t\!\!\int_B \|\Phi(s,\xi)\|^2\, ds\,\mu(d\xi), \tag{3.32}$$
for $t \in [0,T]$.
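The isometry (3.32) can be checked in a scalar toy case. The choices below are assumptions: $\Phi(s,\xi) = \xi$ on $B = (0,1)$ with $\mu$ the Lebesgue measure, so the compensated integral is the sum of jump marks minus $t\int_0^1 \xi\,d\xi = t/2$, and $E\,|M_t|^2 = t\int_0^1 \xi^2\,d\xi = t/3$.

```python
import numpy as np

# Hedged scalar sketch of (3.32): Phi(s, xi) = xi on B = (0, 1), mu = Lebesgue.
rng = np.random.default_rng(3)
t, n_paths = 2.0, 50000
counts = rng.poisson(t * 1.0, size=n_paths)    # N(t, B), since mu(B) = 1

M = np.empty(n_paths)
for i, n_jumps in enumerate(counts):
    marks = rng.uniform(0.0, 1.0, size=n_jumps)
    M[i] = marks.sum() - t * 0.5               # compensate by t * int xi mu(d xi)

second_moment = (M**2).mean()                  # should approach t / 3
```

With these assumptions, `second_moment` should settle near `t / 3`.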

Remarks: As mentioned before, by the Lévy–Itô decomposition, the representation of a general Lévy process $X_t$ in a Hilbert space $H$ is provided by Theorem 4.23 of [78]. Here, as a special case, assume the process $X_t$ takes the form
$$X_t = a\,t + W_t + P_t,$$
where $a \in H$, $W_t$ is a Wiener process, and $P_t$ is given by the Poisson integral (3.28). By the independence of $W_t$ and $P_t$, this offers the possibility of treating problems with Lévy noise by considering the cases with Wiener noise and Poisson noise separately and then composing the results properly.

In the subsequent sections, to prove the existence theorems, we shall often use a fixed-point argument, in particular, the contraction mapping principle, as in the case of deterministic partial differential equations. For the convenience of later applications, we will state the contraction mapping principle for a random equation in a Banach space. To this end, let $\mathcal{X}$ be a separable Banach space of random variables $\eta(\omega)$ with norm $\|\cdot\|_{\mathcal{X}}$. Let $\mathcal{Y}$ be a closed subset of $\mathcal{X}$. Suppose that the map $\Gamma: \mathcal{Y} \to \mathcal{Y}$ is well defined. Consider the equation:
$$\eta = \Gamma(\eta), \qquad \eta \in \mathcal{Y}. \tag{3.33}$$
By the Banach fixed-point theorem (p. 116, [43]), the following contraction mapping principle holds.

Theorem 2.6 (Contraction Mapping Principle) Suppose, for any $\eta, \zeta \in \mathcal{Y}$, there is a constant $\rho \in [0,1)$ such that
$$\|\Gamma(\eta) - \Gamma(\zeta)\|_{\mathcal{X}} \le \rho\,\|\eta - \zeta\|_{\mathcal{X}}.$$
Then there exists a unique solution $\eta \in \mathcal{Y}$ of equation (3.33), which is the fixed point of the map $\Gamma$.
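In a deterministic toy setting the principle can be sketched as a short iteration: for a contraction, the Picard iterates $\eta_{n+1} = \Gamma(\eta_n)$ converge geometrically to the unique fixed point. The particular map and tolerances below are illustrative assumptions.

```python
import math

# Minimal sketch of Theorem 2.6: iterate a contraction to its fixed point.
def fixed_point(gamma, x0, tol=1e-12, max_iter=10000):
    """Iterate x_{n+1} = gamma(x_n) until successive iterates differ by < tol."""
    x = x0
    for _ in range(max_iter):
        x_next = gamma(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# gamma(x) = cos(x) is a contraction on the interval its iterates enter.
root = fixed_point(math.cos, 1.0)
```

The returned `root` satisfies $\cos(\text{root}) \approx \text{root}$, the unique fixed point of the cosine map.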

Another useful result in differential equations is the well-known Gronwall inequality (p. 36, [38]).

Lemma 2.7 (Gronwall Inequality) If $C$ is a real constant, and $g(t) \ge 0$ and $\varphi(t)$ are real continuous functions on $[a,b] \subset \mathbb{R}$ such that
$$\varphi(t) \le C + \int_a^t g(s)\,\varphi(s)\, ds, \qquad a \le t \le b,$$
then
$$\varphi(t) \le C \exp\Big\{\int_a^t g(s)\, ds\Big\}, \qquad a \le t \le b.$$

3.3 Solution of Stochastic Heat Equation

Let $D$ be a bounded domain in $\mathbb{R}^d$ with a smooth, say $C^2$, boundary $\partial D$. We first consider the simple initial-boundary value problem for the randomly perturbed heat equation:
$$\begin{aligned}
\frac{\partial u}{\partial t} &= (\kappa\Delta - \alpha)\,u + \dot W(x,t), \qquad x \in D,\ t \in (0,T), \\
u|_{\partial D} &= 0, \\
u(x,0) &= h(x),
\end{aligned} \tag{3.34}$$
where $\Delta = \sum_{i=1}^d \frac{\partial^2}{\partial x_i^2}$ is the Laplacian operator, $\kappa$ and $\alpha$ are positive constants, and $h(x)$ is a given function in $H = L^2(D)$. The spatially dependent white noise $\dot W(x,t)$, by convention, is the formal time derivative $\partial_t W(x,t)$ of the Wiener random field $W(x,t)$. Let $W(\cdot,t)$ be an $H$-valued Wiener process with mean zero and covariance function $r(x,y)$ so that
$$E\, W(x,t) = 0, \qquad E\,\{W(x,t)\,W(y,s)\} = (t\wedge s)\, r(x,y), \qquad t, s \in [0,T],\ x, y \in D.$$
The associated covariance operator $R$ in $H$ with kernel $r(x,y)$ is assumed to be of finite trace, that is,
$$\mathrm{Tr}\, R = \int_D r(x,x)\, dx < \infty.$$
To construct a solution, we apply the well-known method of eigenfunction expansion. Let $\{\lambda_k\}$ be the set of eigenvalues of $(-\kappa\Delta + \alpha)$ with a homogeneous boundary condition and, correspondingly, let $\{e_k\}$ denote the complete orthonormal system of eigenfunctions such that
$$(-\kappa\Delta + \alpha)\, e_k = \lambda_k\, e_k, \qquad e_k|_{\partial D} = 0, \qquad k = 1, 2, \ldots, \tag{3.35}$$
with $\lambda_1 < \lambda_2 \le \cdots \le \lambda_k \le \cdots$, and $\lambda_k \to \infty$ as $k \to \infty$. By Theorem 2.1, they are known to have finite multiplicity and the corresponding eigenfunctions $e_k \in C^\infty$. Moreover, the eigenvalues have the asymptotic property [42]:
$$\lambda_k = C_d(D)\, k^{2/d} + o(k^{2/d}) \quad \text{as } k \to \infty, \tag{3.36}$$
where the constant $C_d(D)$ is independent of $k$.

Now we seek a formal solution in terms of the eigenfunctions as follows:
$$u(x,t) = \sum_{k=1}^\infty u_t^k\, e_k(x), \tag{3.37}$$
where $u_t^k$, $k = 1, 2, \ldots$, are unknown processes to be determined. To this end, let $\beta_t^k = (W(\cdot,t), e_k)$ for each $k$. To simplify the computation, assume the covariance operator satisfies the condition $(R\,e_j, e_k) = \delta_{jk}\,\sigma_k^2$ for some positive constants $\sigma_k$, $j, k = 1, 2, \ldots$. Then it is easy to check that $\{\beta_t^k\}$ is a sequence of independent one-dimensional Wiener processes with mean zero and covariance $E\,\{\beta_t^j \beta_s^k\} = \delta_{jk}\,\sigma_k^2\,(t\wedge s)$. So we can write $\beta_t^k = \sigma_k\, w_t^k$ and
$$W(x,t) = \sum_{k=1}^\infty \beta_t^k\, e_k(x) = \sum_{k=1}^\infty \sigma_k\, w_t^k\, e_k(x), \tag{3.38}$$
where $\{w_t^k\}$ is a sequence of standard one-dimensional Wiener processes. Upon substituting (3.37) and (3.38) into (3.34), one obtains an infinite system of Itô equations:
$$du_t^k = -\lambda_k\, u_t^k\, dt + \sigma_k\, dw_t^k, \qquad u_0^k = h_k, \qquad k = 1, 2, \ldots, \tag{3.39}$$
where $h_k = (h, e_k)$ with $h \in L^2(D)$. The solution of (3.39) is given by
$$u_t^k = h_k\, e^{-\lambda_k t} + \sigma_k \int_0^t e^{-\lambda_k(t-s)}\, dw_s^k, \qquad k = 1, 2, \ldots, \tag{3.40}$$
which are independent O-U (Ornstein–Uhlenbeck) processes with mean
$$\bar u_t^k = E\, u_t^k = h_k\, e^{-\lambda_k t}$$
and covariance
$$\mathrm{Cov}\,\{u_t^k, u_s^l\} = \delta_{kl}\,\frac{\sigma_k^2}{2\lambda_k}\,\big\{e^{-\lambda_k|t-s|} - e^{-\lambda_k(t+s)}\big\}, \tag{3.41}$$
where $\delta_{kl} = 1$ if $k = l$ and $= 0$ otherwise. To show the convergence of the series solution (3.37), we write
$$u(x,t) = \bar u(x,t) + v(x,t).$$
It consists of the mean solution
$$\bar u(x,t) = \sum_{k=1}^\infty \bar u_t^k\, e_k(x), \tag{3.42}$$
and the random deviation
$$v(x,t) = \sum_{k=1}^\infty v_t^k\, e_k(x), \tag{3.43}$$

where
$$v_t^k = \sigma_k \int_0^t e^{-\lambda_k(t-s)}\, dw_s^k. \tag{3.44}$$
We know that the mean solution series (3.42) converges in the $L^2$ sense and it satisfies problem (3.34) without random perturbation. It suffices to show that the random series (3.43) converges in some sense to a solution of (3.34) with $h = 0$. To proceed, consider the partial sum
$$v^n(x,t) = \sum_{k=1}^n v_t^k\, e_k(x). \tag{3.45}$$
In view of (3.44), we have
$$E\,\|v^n(\cdot,t)\|^2 = \sum_{k=1}^n E\,|v_t^k|^2 = \sum_{k=1}^n \frac{\sigma_k^2}{2\lambda_k}\,\{1 - e^{-2\lambda_k t}\} \le \sum_{k=1}^n \frac{\sigma_k^2}{2\lambda_k}. \tag{3.46}$$
Recall that $\lambda_k \ge \lambda_1$ and $\sigma_k^2 = (R\,e_k, e_k)$. The above yields
$$\sup_{t\le T} E\,\|v^n(\cdot,t)\|^2 \le \frac{1}{2\lambda_1}\sum_{k=1}^n \sigma_k^2 \le \frac{1}{2\lambda_1}\,\mathrm{Tr}\, R, \tag{3.47}$$
which is finite by assumption. It follows that
$$\sup_{t\le T} E\,\|v(\cdot,t) - v^n(\cdot,t)\|^2 \le \sum_{k=n+1}^\infty \frac{\sigma_k^2}{2\lambda_k} \to 0, \quad \text{as } n \to \infty,$$
and $v(\cdot,t)$ is an $\mathcal{F}_t$-adapted $H$-valued process. In fact, the process is mean-square continuous. To see this, we compute
$$E\,\|v(\cdot,t) - v(\cdot,s)\|^2 = \sum_{k=1}^\infty E\,|v_t^k - v_s^k|^2. \tag{3.48}$$
For $0 \le s < t \le T$,
$$v_t^k - v_s^k = \sigma_k\Big\{\int_0^t e^{-\lambda_k(t-r)}\, dw_r^k - \int_0^s e^{-\lambda_k(s-r)}\, dw_r^k\Big\} = \sigma_k\Big\{\int_0^s \big[e^{-\lambda_k(t-r)} - e^{-\lambda_k(s-r)}\big]\, dw_r^k + \int_s^t e^{-\lambda_k(t-r)}\, dw_r^k\Big\},$$
0 s

so that, noting the independence of the above integrals,
$$\begin{aligned}
E\,|v_t^k - v_s^k|^2 &= \sigma_k^2\Big\{E\,\Big|\int_0^s \big[e^{-\lambda_k(t-r)} - e^{-\lambda_k(s-r)}\big]\, dw_r^k\Big|^2 + E\,\Big|\int_s^t e^{-\lambda_k(t-r)}\, dw_r^k\Big|^2\Big\} \\
&= \sigma_k^2\Big\{\int_0^s \big[e^{-\lambda_k(t-r)} - e^{-\lambda_k(s-r)}\big]^2\, dr + \int_s^t e^{-2\lambda_k(t-r)}\, dr\Big\} \\
&\le \frac{\sigma_k^2}{\lambda_k}\,\big[1 - e^{-2\lambda_k(t-s)}\big] \le 2\,\sigma_k^2\,(t-s).
\end{aligned}$$
Making use of this bound in (3.48), we obtain
$$E\,\|v(\cdot,t) - v(\cdot,s)\|^2 \le 2\sum_{k=1}^\infty \sigma_k^2\,(t-s) = 2\,(\mathrm{Tr}\, R)\,(t-s),$$

which shows that $v(\cdot,t)$ is $H$-continuous in mean-square. A question arises naturally: In what sense does the random series (3.37) represent a solution to the problem (3.34)? To answer this question, we first show that, merely under the condition of finite trace, it may not be a strict solution in the sense that $u(\cdot,t)$ is in the domain of $A = -(\kappa\Delta - \alpha)$, a subset of $H^2$, and it satisfies, for each $(x,t) \in D\times[0,T]$,
$$u(x,t) = h(x) + \int_0^t (\kappa\Delta - \alpha)\,u(x,s)\, ds + W(x,t), \quad \text{a.s.}$$
To justify our claim, it is enough to show that the sequence $\{A\,v^n(\cdot,t)\}$ in $H$ may diverge in mean-square. In view of (3.45) and (3.46),
$$E\,\|A\,v^n(\cdot,t)\|^2 = \sum_{k=1}^n \lambda_k^2\, E\,|v_t^k|^2 = \sum_{k=1}^n \frac{\lambda_k\,\sigma_k^2}{2}\,\{1 - e^{-2\lambda_k t}\} \ge \frac{1}{2}\,\big(1 - e^{-2\lambda_1 t}\big)\sum_{k=1}^n \lambda_k\,\sigma_k^2.$$
For instance, for $d = 2$, we have $\lambda_k \ge C_1 k$ by (3.36). If $\sigma_k^2 \ge C_2/k$ for large $k$, then the above partial sum diverges, since $E\,\|A\,v^n(\cdot,t)\|^2 \to \infty$. In general, this rules out the case of a strict solution. In fact, it turns out to be a weak solution in the PDE sense. This means that $u(\cdot,t)$ is an adapted $H$-valued process such that for any test function $\phi \in C_0^\infty$ and for each $t \in [0,T]$, the following equation holds:
$$(u(\cdot,t), \phi) = (h, \phi) - \int_0^t (u(\cdot,s), A\phi)\, ds + (W(\cdot,t), \phi), \quad \text{a.s.} \tag{3.49}$$
This equation can be easily verified for its $n$-term approximation $u^n$ given by
$$u^n(x,t) = \sum_{k=1}^n u_t^k\, e_k(x).$$

By integrating this equation against the test function $\phi$ over $D$ and making use of (3.39), we get
$$\begin{aligned}
(u^n(\cdot,t), \phi) &= \sum_{k=1}^n u_t^k\,(e_k, \phi) = \sum_{k=1}^n \Big\{h_k - \lambda_k \int_0^t u_s^k\, ds + \sigma_k\, w_t^k\Big\}\,(e_k, \phi) \\
&= (h^n, \phi) - \int_0^t (u^n(\cdot,s), A\phi)\, ds + (W^n(\cdot,t), \phi),
\end{aligned}$$
where $h^n$ and $W^n$ are the $n$-term approximations of $h$ and $W$, respectively. Since they, together with $u^n$, converge strongly in mean-square, by passing to the limit in each term of the above equation as $n \to \infty$, this equation yields equation (3.49). The fact that the weak solution is unique is easy to verify. Suppose that $u$ and $u'$ are both solutions. Then it follows from (3.49) that $\eta = (u - u')$ satisfies the equation
$$(\eta(\cdot,t), \phi) = -\int_0^t (\eta(\cdot,s), A\phi)\, ds.$$
By letting $\phi = e_k$ and $\eta_t^k = (\eta(\cdot,t), e_k)$, the above yields
$$\eta_t^k = -\lambda_k \int_0^t \eta_s^k\, ds, \qquad k = 1, 2, \ldots,$$
which implies
$$|\eta_t^k|^2 \le \lambda_k^2\, T \int_0^t |\eta_s^k|^2\, ds, \qquad k = 1, 2, \ldots.$$
By the Gronwall inequality, we can conclude that $\eta_t^k = (u(\cdot,t) - u'(\cdot,t), e_k) = 0$ for every $k$. Since the set of eigenfunctions is complete, we have $u(\cdot,t) = u'(\cdot,t)$ a.s. in $H$.

We notice that, since each term in the series (3.37) is an O-U process, the sum $u(\cdot,t)$ is an $H$-valued Gaussian process with mean $\bar u(x,t)$ given by (3.42) and covariance function
$$\mathrm{Cov}\,\{u(x,t), u(y,s)\} = \sum_{k=1}^\infty \frac{\sigma_k^2}{2\lambda_k}\,\big\{e^{-\lambda_k|t-s|} - e^{-\lambda_k(t+s)}\big\}\, e_k(x)\, e_k(y). \tag{3.50}$$

Let us summarize the previous results as a theorem.

Theorem 3.1 Let the eigenfunction expansion of $u(x,t)$ be given by (3.37). Then $u(\cdot,t)$, $0 \le t \le T$, is an adapted $H$-valued Gaussian process which is continuous in the mean-square sense. Moreover, it is the unique weak solution of the problem (3.34) satisfying the equation (3.49).
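The eigenfunction-expansion solution lends itself to a direct spectral simulation. The sketch below is a hedged illustration with assumed data: on $D = (0,\pi)$ with $\kappa = \alpha = 1$ one has $\lambda_k = k^2 + 1$; each coefficient is advanced by the exact O-U transition implied by (3.40), so the empirical second moment can be compared with $h_k^2 e^{-2\lambda_k T} + \sigma_k^2(1 - e^{-2\lambda_k T})/(2\lambda_k)$.

```python
import numpy as np

# Hedged sketch: spectral Galerkin solution of the stochastic heat equation,
# with assumed sigma_k = 1/k and initial data h_k = 1/k^2.
rng = np.random.default_rng(4)
n_modes, n_steps, n_paths, T = 15, 50, 5000, 1.0
dt = T / n_steps
k = np.arange(1, n_modes + 1, dtype=float)
lam = k**2 + 1.0                 # lambda_k for -Delta + 1 on (0, pi)
sigma = 1.0 / k
h_k = 1.0 / k**2

u = np.tile(h_k, (n_paths, 1))
decay = np.exp(-lam * dt)
step_sd = np.sqrt(sigma**2 * (1.0 - decay**2) / (2.0 * lam))
for _ in range(n_steps):
    # Exact one-step O-U transition for each mode and path.
    u = decay * u + rng.normal(0.0, 1.0, size=u.shape) * step_sd

exact = h_k**2 * np.exp(-2 * lam * T) + sigma**2 * (1 - np.exp(-2 * lam * T)) / (2 * lam)
est = (u**2).mean(axis=0)        # empirical E |u_T^k|^2 per mode
```

Since the transition is exact, `est` should track `exact` mode by mode up to Monte Carlo error.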

It is well known that the Green's function $G(x,y;t)$ for the heat equation in (3.34) can be expressed as
$$G(x,y;t) = \sum_{k=1}^\infty e^{-\lambda_k t}\, e_k(x)\, e_k(y). \tag{3.51}$$
Then the solution (3.37) can be rewritten as
$$u(x,t) = \bar u(x,t) + v(x,t) = \sum_{k=1}^\infty h_k\, e^{-\lambda_k t}\, e_k(x) + \sum_{k=1}^\infty \sigma_k \int_0^t e^{-\lambda_k(t-s)}\, dw_s^k\, e_k(x),$$
which shows formally that
$$u(x,t) = \int_D G(x,y;t)\, h(y)\, dy + \int_0^t\!\!\int_D G(x,y;t-s)\, W(y,ds)\, dy. \tag{3.52}$$
Introduce the associated Green's operator $G_t$ defined by
$$(G_t h)(x) = \int_D G(x,y;t)\, h(y)\, dy. \tag{3.53}$$
Then (3.52) can be written simply as
$$u(\cdot,t) = G_t h + \int_0^t G_{t-s}\, W(\cdot, ds). \tag{3.54}$$
Also it is easy to check the properties: $G_0 h = h$ and $G_t(G_s h) = G_{t+s} h$, for all $t, s > 0$. As a useful analytical technique, the Green's function representation will be used extensively in the subsequent sections.

We observe that, in view of (3.36), the partial sum may converge even if the trace of the covariance operator $R$ is infinite. This is the case, for instance, when $d = 1$, $r(x,y) = \delta(x-y)$, and $\dot W(x,t)$ becomes a space–time white noise. Further properties of solution regularity and their dependence on the noise will be discussed later on. It is also worth noting that the theorem holds when the Dirichlet boundary condition is replaced by a Neumann boundary condition, for which the normal derivative $\frac{\partial}{\partial n} u|_{\partial D} = 0$, or by a proper mixed boundary condition. However, for simplicity, we shall consider only the first two types of boundary conditions.
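In Fourier coefficients the Green's operator (3.53) acts diagonally, $(G_t h, e_k) = e^{-\lambda_k t}(h, e_k)$, which makes the semigroup properties $G_0 h = h$ and $G_t G_s = G_{t+s}$ immediate. A minimal sketch with assumed eigenvalues:

```python
import numpy as np

# Truncated Green's operator in coefficient space; lam and h are assumptions.
lam = np.arange(1, 50, dtype=float)**2 + 1.0    # assumed lambda_k
h = 1.0 / np.arange(1, 50, dtype=float)          # assumed coefficients (h, e_k)

def G(t, coeffs):
    """Apply the truncated Green's operator G_t coefficient-wise."""
    return np.exp(-lam * t) * coeffs

lhs = G(0.3, G(0.2, h))     # G_t (G_s h)
rhs = G(0.5, h)             # G_{t+s} h
```

The two coefficient vectors agree, mode by mode, since the exponentials multiply.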

3.4 Linear Equations with Additive Noise

Let $f(x,t)$ and $\sigma(x,t)$, $x \in D$, $t \in [0,T]$, be random fields such that $f(\cdot,t)$ and $\sigma(\cdot,t)$ are $\mathcal{F}_t$-adapted and
$$E \int_0^T \big\{\|f(\cdot,s)\|^2 + \|\sigma(\cdot,s)\|^2\big\}\, ds < \infty. \tag{3.55}$$
Regarded as $H$-valued processes, $f(\cdot,t)$, $\sigma(\cdot,t)$ will sometimes be written as $f_t$, $\sigma_t$, and so on for brevity. Suppose that $W(x,t)$ is an $R$-Wiener random field with covariance function $r(x,y)$ bounded by $r_0$, or $\sup_{x\in D} r(x,x) \le r_0$. By Theorem 2.4, the Itô integral
$$M(\cdot,t) = \int_0^t \sigma(\cdot,s)\, W(\cdot, ds) \tag{3.56}$$
is well defined and it will also be written as
$$M_t = \int_0^t \sigma_s\, dW_s.$$
It is an $H$-valued martingale with local covariation operator $Q_t$ defined by
$$[Q_t h](x) = \int_D q(x,y;t)\, h(y)\, dy$$
with the kernel $q(x,y;t) = r(x,y)\,\sigma(x,t)\,\sigma(y,t)$. Notice that, by (3.55) and the bound on $r$,
$$E \int_0^T \mathrm{Tr}\, Q_s\, ds = E \int_0^T\!\!\int_D r(x,x)\,\sigma^2(x,s)\, dx\, ds \le r_0\, E \int_0^T \|\sigma(\cdot,s)\|^2\, ds < \infty. \tag{3.57}$$
Let
$$V(x,t) = \int_0^t f(x,s)\, ds + M(x,t), \qquad \text{or} \qquad V_t = \int_0^t f_s\, ds + M_t, \tag{3.58}$$
which is a spatially dependent semimartingale with local characteristic $(q(x,y;t), f(x,t))$.

Replacing $W(x,t)$ by $V(x,t)$ in (3.34), we now consider the following problem:
$$\begin{aligned}
\frac{\partial u}{\partial t} &= (\kappa\Delta - \alpha)\,u + \dot V(x,t), \qquad x \in D,\ t \in (0,T), \\
Bu|_{\partial D} &= 0, \qquad u(x,0) = h(x),
\end{aligned} \tag{3.59}$$
where $Bu = u$ or $Bu = \frac{\partial}{\partial n}u$, and $\dot V(x,t) = f(x,t) + \sigma(x,t)\,\dot W(x,t)$. Suggested by the representation (3.52) in terms of the Green's function (3.51), we rewrite (3.59) as
$$\begin{aligned}
u(x,t) &= \int_D G(x,y;t)\, h(y)\, dy + \int_0^t\!\!\int_D G(x,y;t-s)\, V(y,ds)\, dy \\
&= \int_D G(x,y;t)\, h(y)\, dy + \int_0^t\!\!\int_D G(x,y;t-s)\, f(y,s)\, ds\, dy \\
&\quad + \int_0^t\!\!\int_D G(x,y;t-s)\,\sigma(y,s)\, W(y,ds)\, dy,
\end{aligned} \tag{3.60}$$
0 D

which is the equation for the mild solution of (3.59). We claim it is also a weak solution satisfying
$$(u_t, \phi) = (h, \phi) - \int_0^t (u_s, A\phi)\, ds + \int_0^t (f_s, \phi)\, ds + \int_0^t (\phi, \sigma_s\, dW_s), \tag{3.61}$$
for any $\phi \in C_0^2$, where $A = -(\kappa\Delta - \alpha)$ is defined as before. To verify this fact, we let
$$u(x,t) = \bar u(x,t) + v(x,t), \tag{3.62}$$
where
$$\bar u(x,t) = \int_D G(x,y;t)\, h(y)\, dy + \int_0^t\!\!\int_D G(x,y;t-s)\, f(y,s)\, dy\, ds. \tag{3.63}$$
It is easily shown that $\bar u(x,t)$ is a weak solution of the problem (3.59) when $\sigma(x,t) \equiv 0$. This reduces to showing that
$$v(x,t) = \int_0^t\!\!\int_D G(x,y;t-s)\,\sigma(y,s)\, W(y,ds)\, dy \tag{3.64}$$
must satisfy the equation:
$$(v_t, \phi) = -\int_0^t (v_s, A\phi)\, ds + \int_0^t (\phi, \sigma_s\, dW_s). \tag{3.65}$$
In view of (3.51), equation (3.64) can be written in terms of the eigenfunctions as follows:
$$v_t = \sum_{k=1}^\infty v_t^k\, e_k, \tag{3.66}$$
where
$$v_t^k = (v_t, e_k) = \int_0^t e^{-\lambda_k(t-s)}\, dz_s^k, \tag{3.67}$$
and
$$z_t^k = (M_t, e_k) = \int_0^t (e_k, \sigma_s\, dW_s). \tag{3.68}$$
One can check that the process $z_t^k$ is a continuous martingale with the quadratic variation
$$[z^k]_t = \int_0^t q_s^k\, ds,$$
where
$$q_t^k = \int_D\!\!\int_D q(x,y;t)\, e_k(x)\, e_k(y)\, dx\, dy = (Q_t\, e_k, e_k).$$

Let $v^n(\cdot,t)$ be the $n$-term approximation of $v_t$:
$$v^n(\cdot,t) = \sum_{k=1}^n v_t^k\, e_k. \tag{3.69}$$
From (3.68), we obtain
$$E\,|z_t^k|^2 = E \int_0^t (Q_s\, e_k, e_k)\, ds. \tag{3.70}$$
Therefore, by condition (3.55), we get
$$E\,\|v^n(\cdot,t)\|^2 = \sum_{k=1}^n E\,|v_t^k|^2 = E \int_0^t \sum_{k=1}^n e^{-2\lambda_k(t-s)}\,(Q_s\, e_k, e_k)\, ds \le E \int_0^t \sum_{k=1}^n (Q_s\, e_k, e_k)\, ds \le E \int_0^t \mathrm{Tr}\, Q_s\, ds < \infty.$$
It follows that the sequence $\{v^n\}$ converges in $H$ to $v$ in mean-square uniformly for $t \in [0,T]$, or
$$\sup_{0\le t\le T} E\,\|v_t - v^n(\cdot,t)\|^2 \to 0, \quad \text{as } n \to \infty. \tag{3.71}$$
Hence we get
$$E\,\|v_t\|^2 = E \int_0^t \mathrm{Tr}\,\big[G_{2(t-s)}\, Q_s\big]\, ds \le E \int_0^t \mathrm{Tr}\, Q_s\, ds. \tag{3.72}$$
In fact, by writing
$$v_t^k = z_t^k - \lambda_k \int_0^t e^{-\lambda_k(t-s)}\, z_s^k\, ds,$$
with the aid of Doob's inequality and (3.70), we can obtain the estimate
$$E \sup_{0\le t\le T} |v_t^k|^2 \le 4\, E \sup_{0\le t\le T} |z_t^k|^2 \le 16\, E\,|z_T^k|^2 = 16\, E \int_0^T (Q_s\, e_k, e_k)\, ds.$$
This implies that
$$E \sup_{0\le t\le T} \|v_t\|^2 \le \sum_{k=1}^\infty E \sup_{0\le t\le T} |v_t^k|^2 \le 16 \int_0^T E\,(\mathrm{Tr}\, Q_s)\, ds < \infty. \tag{3.73}$$

To show that $v(\cdot,t)$ is mean-square continuous, we first estimate
$$\begin{aligned}
E\,|v_t^k - v_s^k|^2 &= E\,\Big|\int_0^s \big[e^{-\lambda_k(t-r)} - e^{-\lambda_k(s-r)}\big]\, dz_r^k\Big|^2 + E\,\Big|\int_s^t e^{-\lambda_k(t-r)}\, dz_r^k\Big|^2 \\
&= E\,\Big\{\int_0^s \big[e^{-\lambda_k(t-r)} - e^{-\lambda_k(s-r)}\big]^2\, q_r^k\, dr + \int_s^t e^{-2\lambda_k(t-r)}\, q_r^k\, dr\Big\}.
\end{aligned}$$
Since, by noticing (3.73), $\|v_t - v_s\|^2 \le 4 \sup_{0\le t\le T} \|v_t\|^2 < \infty$ a.s., by invoking the dominated convergence theorem, the above integrals go to zero as $s \to t$, so that $\lim_{s\to t} E\,\|v_t - v_s\|^2 = 0$. So the process $v(\cdot,t)$ is mean-square continuous. Clearly, by (3.67), $v_t^k$ satisfies the equation:
$$v_t^k = -\lambda_k \int_0^t v_s^k\, ds + z_t^k.$$
Hence the equation (3.69) can be written as
$$v^n(x,t) = -\int_0^t A\,v^n(x,s)\, ds + M^n(x,t), \tag{3.74}$$
where $M^n(x,t)$ is the $n$-term approximation of $M(x,t)$. By taking the inner product with respect to $\phi$, this equation yields
$$(v^n(\cdot,t), \phi) = -\int_0^t (v_s^n, A\phi)\, ds + (\phi, M_t^n).$$
Since, as $n \to \infty$, each term in the above equation converges to the corresponding term in equation (3.65), we conclude that $v(x,t)$ given by (3.64) is indeed a weak solution. Similar to Theorem 3.1, the uniqueness of the solution can be shown easily. Thereby we have proved the following theorem.

Theorem 4.1 Let condition (3.55) hold true and let $u(x,t)$ be defined by (3.60). Then, for $h \in H$ and $t \in [0,T]$, $u(\cdot,t)$ is an adapted $H$-valued process which is continuous in mean-square. Furthermore, it is the unique weak solution of the problem (3.59) satisfying the equation (3.61).

In fact, we can show more, namely that the weak solution is also a mild solution and the second moment of the solution is uniformly bounded in $t$ over $[0,T]$. More precisely, the following theorem holds.

Theorem 4.2 Under the same conditions as given in Theorem 4.1, the (weak) solution $u(\cdot,t)$ given by (3.61) satisfies the following inequality:
$$E \sup_{0\le t\le T} \|u(\cdot,t)\|^2 \le C(T)\,\Big\{\|h\|^2 + E \int_0^T \big[\|f(\cdot,s)\|^2 + \mathrm{Tr}\, Q_s\big]\, ds\Big\}, \tag{3.75}$$
where $C(T)$ is a positive constant depending on $T$.

Proof. For a mild solution $u$ to be a weak solution, it must satisfy
$$(u_t, \phi) = (h, \phi) - \int_0^t (u_s, A\phi)\, ds + \int_0^t (f_s, \phi)\, ds + \int_0^t (\phi, \sigma_s\, dW_s)$$
for any $\phi \in C_0^\infty$. In particular, take $\phi = e_k$. Then the above equation yields
$$(u_t, e_k) = (h, e_k) - \lambda_k \int_0^t (u_s, e_k)\, ds + \int_0^t (f_s, e_k)\, ds + \int_0^t (e_k, \sigma_s\, dW_s),$$
which can be solved to give
$$\begin{aligned}
(u_t, e_k) &= (h, e_k)\, e^{-\lambda_k t} + \int_0^t (f_s, e_k)\, e^{-\lambda_k(t-s)}\, ds + \int_0^t e^{-\lambda_k(t-s)}\, (e_k, \sigma_s\, dW_s) \\
&= (G_t h, e_k) + \int_0^t (G_{t-s} f_s, e_k)\, ds + \int_0^t (e_k, G_{t-s}\,\sigma_s\, dW_s),
\end{aligned}$$
which holds for every $e_k$. Therefore, we can deduce that $u$ is a mild solution satisfying
$$u_t = G_t h + \int_0^t G_{t-s} f_s\, ds + \int_0^t G_{t-s}\,\sigma_s\, dW_s.$$
It follows from the above equation that
$$\|u_t\|^2 \le 3\,\Big\{\|G_t h\|^2 + \Big\|\int_0^t G_{t-s} f_s\, ds\Big\|^2 + \Big\|\int_0^t G_{t-s}\,\sigma_s\, dW_s\Big\|^2\Big\}. \tag{3.76}$$
By making use of (3.51), we find that
$$\|G_t h\|^2 = \sum_{k=1}^\infty e^{-2\lambda_k t}\,(h, e_k)^2 \le \|h\|^2 \tag{3.77}$$
and
$$\Big\|\int_0^t G_{t-s} f_s\, ds\Big\|^2 \le t \int_0^t \|G_{t-s} f_s\|^2\, ds \le t \int_0^t \|f_s\|^2\, ds. \tag{3.78}$$
By taking (3.76), (3.77), (3.78), and (3.73) into account, one obtains
$$E \sup_{0\le t\le T} \|u_t\|^2 \le 3\,\Big\{\|h\|^2 + T\, E \int_0^T \|f_s\|^2\, ds + 16\, E \int_0^T \mathrm{Tr}\, Q_s\, ds\Big\}.$$
This yields the bound (3.75) with $C(T) = 3\max\{16, T\}$. $\square$
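The coefficient-wise mild-solution formula from the proof can be checked numerically for the mean part. The data below are assumptions: with constant forcing, each coefficient should satisfy $(u_t, e_k) = h_k e^{-\lambda_k t} + f_k(1 - e^{-\lambda_k t})/\lambda_k$, and a quadrature of $G_T h + \int_0^T G_{T-s} f\, ds$ should reproduce it.

```python
import numpy as np

# Hedged sketch: mild formula u_t = G_t h + int_0^t G_{t-s} f ds, mean part,
# with assumed eigenvalues and constant Fourier data.
lam = np.array([2.0, 5.0, 10.0])
h_k = np.array([1.0, 0.5, 0.25])
f_k = np.array([0.3, 0.2, 0.1])
T, n_steps = 1.0, 20000
dt = T / n_steps
s = (np.arange(n_steps) + 0.5) * dt            # midpoint quadrature nodes

conv = (np.exp(-lam[None, :] * (T - s[:, None])) * f_k[None, :]).sum(axis=0) * dt
mild = h_k * np.exp(-lam * T) + conv
exact = h_k * np.exp(-lam * T) + f_k * (1.0 - np.exp(-lam * T)) / lam
```

The quadrature value `mild` agrees with the closed form `exact` to quadrature accuracy.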

Remarks:

(1) Notice that, since the proofs are based on the methods of eigenfunction expansion and Green's function, Theorem 4.1 and Theorem 4.2 still hold true if $A = -(\kappa\Delta - \alpha)$ is replaced by a self-adjoint, strongly elliptic operator with smooth coefficients. Keep this fact in mind in the subsequent analysis.

(2) If $\dot W(x,t)$ is a space–time white noise with $r(x,y) = \delta(x-y)$, the corresponding Wiener random field $W(\cdot,t)$ is known as a cylindrical Brownian motion in $H$ with the identity operator $I$ as its covariance operator [21]. By (3.36), the eigenvalues $\lambda_k$ of $A$ grow asymptotically like $k^{2/d}$ for large $k$. It is easy to show that
$$E\,\|v(\cdot,t)\|^2 = \sum_k \frac{1}{2\lambda_k}\,\big(1 - e^{-2\lambda_k t}\big)$$
converges for $d = 1$ but diverges for $d \ge 2$. This means that, in the case of a space–time white noise, the solution $u(\cdot,t)$ cannot exist as an $H$-valued process except for $d = 1$. For this reason, in order for the solution to have better regularity properties, we shall consider only a smooth Wiener random field with a finite-trace covariance operator.
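The dimensional dichotomy in Remark (2) reduces to the behavior of $\sum_k 1/(2\lambda_k)$ with $\lambda_k \sim k^{2/d}$: a convergent $\sum k^{-2}$ for $d = 1$ versus a harmonic-like $\sum k^{-1}$ for $d = 2$. A quick numeric illustration of the contrast in partial sums:

```python
import numpy as np

# Partial sums of 1/(2 lambda_k) with lambda_k ~ k^{2/d}: convergent for
# d = 1 (sum ~ k^{-2}), divergent for d = 2 (sum ~ k^{-1}).
n = 10**6
k = np.arange(1, n + 1, dtype=float)
s_d1 = (0.5 / k**2).sum()               # d = 1: tends to pi^2 / 12
s_d2_half = (0.5 / k[: n // 2]).sum()
s_d2_full = (0.5 / k).sum()             # d = 2: grows like (ln n) / 2
```

Doubling the number of terms leaves `s_d1` essentially unchanged but adds roughly $\tfrac{1}{2}\ln 2 \approx 0.35$ to the $d = 2$ sum.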

Now we consider the problem (3.59) with the Wiener noise replaced by a Poisson noise:
$$\begin{aligned}
\frac{\partial u}{\partial t} &= (\kappa\Delta - \alpha)\,u + f(x,t) + \dot M(x,t), \qquad t \in (0,T), \\
Bu|_{\partial D} &= 0, \qquad u(x,0) = h(x), \qquad x \in D,
\end{aligned} \tag{3.79}$$
where the random field $f(x,t)$ and
$$M(x,t) = \int_0^t\!\!\int_B \Phi(x,s,\xi)\, \tilde N(ds, d\xi)$$
are given as before. Assume that
$$E \int_0^T\!\!\int_B \|\Phi(\cdot,s,\xi)\|^2\, ds\,\mu(d\xi) < \infty. \tag{3.80}$$
Again, by superposition, we can write the mild solution of the problem (3.79) as
$$u(x,t) = \bar u(x,t) + v(x,t), \tag{3.81}$$
where $\bar u(x,t)$ is the same as (3.63), but $v(x,t)$ is redefined as
$$v(x,t) = \int_0^t\!\!\int_B\int_D G(x,y;t-s)\,\Phi(y,s,\xi)\, dy\, \tilde N(ds, d\xi),$$
or, in short,
$$v_t = \int_0^t\!\!\int_B G_{t-s}\,\Phi(\cdot,s,\xi)\, \tilde N(ds, d\xi). \tag{3.82}$$

To show that the mild solution given by (3.81) is also a weak solution, for the aforementioned reason, it suffices to show that $v_t = v(\cdot,t)$ satisfies the following equation:
$$(v_t, g) = -\int_0^t (v_s, A\,g)\, ds + \int_0^t\!\!\int_B (\Phi(\cdot,s,\xi), g)\, \tilde N(ds, d\xi), \tag{3.83}$$
for every $g \in C_0^2(D)$. By the eigenfunction expansion, express $g$ as $g = \sum_{k=1}^\infty (g, e_k)\, e_k$ and set $g_n = \sum_{k=1}^n (g, e_k)\, e_k$. Consider
$$\int_0^t (v_s, A\,g_n)\, ds = \sum_{k=1}^n (g, e_k) \int_0^t \lambda_k\,(v_s, e_k)\, ds, \tag{3.84}$$
where, in view of (3.82),
$$\int_0^t \lambda_k\,(v_s, e_k)\, ds = \int_0^t\!\!\int_0^s\!\!\int_B \lambda_k\,(G_{s-r}\,\Phi(\cdot,r,\xi), e_k)\, \tilde N(dr, d\xi)\, ds = \int_0^t\!\!\int_0^s\!\!\int_B \lambda_k\, e^{-\lambda_k(s-r)}\,(\Phi(\cdot,r,\xi), e_k)\, \tilde N(dr, d\xi)\, ds.$$
Let $m_s^k = (M(\cdot,s), e_k)$. Then the above gives
$$\int_0^t \lambda_k\,(v_s, e_k)\, ds = \int_0^t\!\!\int_0^s \lambda_k\, e^{-\lambda_k(s-r)}\, dm_r^k\, ds = -\int_0^t \frac{d}{ds}\Big(\int_0^s e^{-\lambda_k(s-r)}\, dm_r^k\Big)\, ds + m_t^k = -\int_0^t e^{-\lambda_k(t-s)}\, dm_s^k + m_t^k. \tag{3.85}$$
By making use of (3.85) in (3.84), we obtain
$$\int_0^t (v_s, A\,g_n)\, ds = \sum_{k=1}^n (g, e_k)\Big\{m_t^k - \int_0^t e^{-\lambda_k(t-s)}\, dm_s^k\Big\} = (M(\cdot,t), g_n) - (v_t, g_n),$$
or
$$(v_t, g_n) = -\int_0^t (v_s, A\,g_n)\, ds + \int_0^t\!\!\int_B (\Phi(\cdot,s,\xi), g_n)\, \tilde N(ds, d\xi).$$
Since the sequence $g_n$ in the domain of $A$ converges to $g$, it is not hard to see that, as $n \to \infty$, the above equation yields equation (3.83) for a weak solution. This means that $u_t$ given by (3.81) is a weak solution of the problem (3.79). The uniqueness can be verified easily. Further results similar to those in Theorems 4.1 and 4.2 can be obtained, but the proof is omitted.

Theorem 4.3 Assume that $f(x,t)$ and $\Phi(x,t,\xi)$, for $x \in D$, $\xi \in B$, are given predictable random fields such that
$$E\,\Big\{\int_0^T \|f(\cdot,s)\|^2\, ds + \int_0^T\!\!\int_B \|\Phi(\cdot,s,\xi)\|^2\, ds\,\mu(d\xi)\Big\} < \infty.$$
Let $u(x,t)$ be defined by (3.81) with $h \in H$. Then, for $t \in [0,T]$, $u(\cdot,t)$ is an adapted $H$-valued process which is continuous in mean-square. Furthermore, it is the unique weak solution of the problem (3.79) satisfying
$$\sup_{0\le t\le T} E\,\|u(\cdot,t)\|^2 \le C(T)\,\Big\{\|h\|^2 + E \int_0^T \|f(\cdot,s)\|^2\, ds + E \int_0^T\!\!\int_B \|\Phi(\cdot,s,\xi)\|^2\, ds\,\mu(d\xi)\Big\},$$
for some $C(T) > 0$.

3.5 Some Regularity Properties

In this section we will study further regularity of the weak solution $u(x,t)$ to (3.59) given by (3.60). In particular, we shall consider the Sobolev $H^1$-regularity in space and Hölder continuity in time. Recall that, for any integer $k$, the Sobolev space $H^k = H^k(D)$ denotes the $k$-th order Sobolev space of functions on $D$, with $H^0 = H = L^2$. As a matter of convenience, instead of the usual $H^k$-norm defined by (3.2), we shall often use an equivalent norm on $H^k$ with $k \ge 0$, which is still denoted by $\|\cdot\|_k$ and defined as follows:
$$\|\phi\|_k = \Big\{\sum_{j=1}^\infty \lambda_j^k\,(\phi, e_j)^2\Big\}^{1/2} = \big\{\langle(-\kappa\Delta + \alpha)^k \phi, \phi\rangle\big\}^{1/2}, \qquad \phi \in H^k. \tag{3.86}$$
Referring back to (3.60)–(3.62), let $u = \bar u + v$, where we rewrite $\bar u$ and $v$ as
$$\bar u_t = G_t h + \int_0^t G_{t-s}\, f_s\, ds \tag{3.87}$$
and
$$v_t = \int_0^t G_{t-s}\, dM_s. \tag{3.88}$$
Recall that $M(x,t) = \int_0^t \sigma(x,s)\, W(x,ds)$ is a martingale with local covariation function $q(x,y;t) = r(x,y)\,\sigma(x,t)\,\sigma(y,t)$, where $\sigma(x,t)$ is a predictable random field and $r(x,y)$ is bounded by $r_0$. Before proceeding to the regularity question, we shall collect several basic inequalities as technical lemmas, which will be useful in the subsequent analysis. Most of these results were derived in the previous section.

Lemma 5.1 Suppose that $h \in H$ and $f(\cdot,t) \in H$ is a predictable random field such that
$$E \int_0^T \|f(\cdot,t)\|^2\, dt < \infty.$$

Then u
(, t) is a continuous adapted process in H such that

u L2 (; C([0, T ]; H)) L2 ( (0, T ); H 1 ).

Moreover, the following inequalities hold:


Z T
1
sup kGt hk khk; kGt hk21 dt khk2 , (3.89)
0tT 0 2
Z t Z T
E sup k Gts f (, s)dsk2 T E kf (, t)k2 dt; (3.90)
0tT 0 0
Z t Z T
1
E sup k Gts f (, s)dsk2 E kf (, t)k21 dt, (3.91)
0tT 0 0

and Z Z Z
T t T
T
E k Gts f (, s)dsk21 dt E kf (, t)k2 dt. (3.92)
0 0 2 0

Proof. First suppose that u is regular as indicated. The first inequalities in


(3.89) and (3.90) were shown in (3.77) and (3.78), respectively. The remaining
ones can be easily verified. For instance, consider the last inequality (3.92) as
follows:
Z T Z t Z TZ t
2
k Gts fs dsk1 dt tkGts fs k21 dsdt
0 0 0 0
Z
X T Z t
T k e2k (ts) (ek , fs )2 dsdt
k=1 0 0

X Z T Z T
T T
= (1 e2k (T t) )(ek , ft )2 dt kft k2 dt,
2 0 2 0
k=1

which yields (3.92) after taking the expectation.


It is known that, for $h \in H$, $G_t h \in C([0,T]; H) \cap L^2((0,T); H^1)$ [88]. To show that $\bar u \in L^2(\Omega; C([0,T]; H)) \cap L^2(\Omega \times (0,T); H^1)$, let $\eta_t = \int_0^t G_{t-s} f_s \, ds$ and $\eta_t^n = \int_0^t G_{t-s} f_s^n \, ds$, where we let
$$f_t^n = \sum_{k=1}^n (f_t, e_k) e_k.$$
Then it is easy to check that $\eta_t^n \in L^2(\Omega; C([0,T]; H)) \cap L^2(\Omega \times (0,T); H^1)$. Moreover, we can obtain
$$E \sup_{0 \le t \le T} \|\eta_t - \eta_t^n\|^2 \le T \, E \sum_{k>n} \int_0^T (f_t, e_k)^2 \, dt$$
Stochastic Parabolic Equations 67

and
$$E \int_0^T \|\eta_t - \eta_t^n\|_1^2 \, dt \le \frac{T}{2} \, E \sum_{k>n} \int_0^T (f_t, e_k)^2 \, dt,$$
both of which go to zero as $n \to \infty$. Hence, the limit $\eta_t$ belongs to $L^2(\Omega; C([0,T]; H)) \cap L^2(\Omega \times (0,T); H^1)$ as claimed. $\square$
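The two bounds in (3.89) are easy to check numerically in spectral coordinates, where $G_t$ acts on the $j$-th coefficient as multiplication by $e^{-\lambda_j t}$ and $\|\cdot\|_1$ is the equivalent norm of (3.86). A small illustration (the data $\lambda_j = j^2$, $h_j = 1/j$ are arbitrary assumed choices, not from the text):

```python
import math

# Spectral-coefficient check (illustration, not from the text) of the
# semigroup bounds (3.89): G_t multiplies the j-th coefficient by
# exp(-lambda_j t).  Assumed data: lambda_j = j^2, h_j = 1/j.

lam = [j * j for j in range(1, 201)]
h = [1.0 / j for j in range(1, 201)]
norm_h_sq = sum(c * c for c in h)

T, n_steps = 1.0, 2000
dt = T / n_steps
sup_norm_sq = 0.0
int_H1_sq = 0.0
for i in range(n_steps):
    t = (i + 0.5) * dt                      # midpoint rule in time
    coeffs = [c * math.exp(-l * t) for c, l in zip(h, lam)]
    sup_norm_sq = max(sup_norm_sq, sum(c * c for c in coeffs))
    int_H1_sq += dt * sum(l * c * c for c, l in zip(coeffs, lam))

print(sup_norm_sq <= norm_h_sq)             # sup_t ||G_t h|| <= ||h||
print(int_H1_sq <= 0.5 * norm_h_sq)         # int ||G_t h||_1^2 dt <= ||h||^2 / 2
```

Per mode, $\lambda_j \int_0^T e^{-2\lambda_j t}\,dt = (1 - e^{-2\lambda_j T})/2 \le 1/2$, which is exactly where the factor $\tfrac12$ in (3.89) comes from.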

Lemma 5.2 Suppose that
$$v(\cdot,t) = \int_0^t G_{t-s} \sigma(\cdot,s) W(\cdot,ds)$$
as given by (3.88), and assume that
$$E \int_0^T (\mathrm{Tr}\, Q_t) \, dt = E \int_0^T \!\! \int_D r(x,x) \sigma^2(x,t) \, dx \, dt < \infty. \tag{3.93}$$

Then $v(\cdot,t)$ is an $H^1$-process with a continuous trajectory in $H$ over $[0,T]$, and the following inequalities hold:
$$E \|v(\cdot,t)\|^2 \le E \int_0^t (\mathrm{Tr}\, Q_s) \, ds, \tag{3.94}$$
$$E \sup_{0 \le t \le T} \|v(\cdot,t)\|^2 \le 16 \, E \int_0^T (\mathrm{Tr}\, Q_t) \, dt, \tag{3.95}$$
and
$$E \int_0^T \|v(\cdot,t)\|_1^2 \, dt \le \frac{1}{2} \, E \int_0^T \mathrm{Tr}\, Q_t \, dt. \tag{3.96}$$

Proof. Under condition (3.93), the inequality (3.95) was shown in (3.73). Let $v^n(\cdot,t)$ be the $n$-term approximation of $v(\cdot,t)$ given by (3.69), where $v_t^k$ is continuous in $H$. Hence, in view of (3.73), $\{v^n(\cdot,t)\}$ is a Cauchy sequence in $L^2(\Omega; C([0,T]; H))$ convergent to $v$, and it has a continuous version. The inequalities (3.94) and (3.95) were verified by (3.72) and (3.73), respectively. To show (3.96), consider the $n$-term approximation $v^n(\cdot,t) = \sum_{k=1}^n v_t^k e_k$, where
$$v_t^k = (v(\cdot,t), e_k) = \int_0^t e^{-\lambda_k(t-s)} (e_k, \sigma(\cdot,s) \, dW_s).$$
So we have
$$E \int_0^T \|v^n(\cdot,t)\|_1^2 \, dt = E \int_0^T \sum_{k=1}^n \lambda_k \Big( \int_0^t e^{-\lambda_k(t-s)} (e_k, \sigma_s \, dW_s) \Big)^2 dt = E \int_0^T \sum_{k=1}^n \lambda_k \int_0^t e^{-2\lambda_k(t-s)} (Q_s e_k, e_k) \, ds \, dt$$
$$\le \frac{1}{2} \, E \int_0^T \sum_{k=1}^n (Q_s e_k, e_k) \, ds \le \frac{1}{2} \, E \int_0^T \mathrm{Tr}\, Q_t \, dt.$$
It can be shown that $E \int_0^T \|v(\cdot,t) - v^n(\cdot,t)\|_1^2 \, dt \to 0$ as $n \to \infty$. Therefore the inequality (3.96) holds true. $\square$

With the aid of Lemma 5.1 and Lemma 5.2, we can prove the following theorem, which is a more regular version of Theorem 4.1.

Theorem 5.3 Let the condition (3.55) be satisfied and let $h \in H$. The linear problem (3.59) has a unique weak solution $u(\cdot,t)$ given by (3.60), which is an adapted $H^1$-valued process with a continuous trajectory in $H$ for $t \in [0,T]$ such that
$$E \sup_{0 \le t \le T} \|u(\cdot,t)\|^2 + E \int_0^T \|u(\cdot,t)\|_1^2 \, dt \le C(T) \Big\{ \|h\|^2 + E \int_0^T [\, \|f(\cdot,s)\|^2 + \mathrm{Tr}\, Q_s \,] \, ds \Big\}, \tag{3.97}$$
for some constant $C(T) > 0$. Moreover, the energy equation holds true:
$$\|u(\cdot,t)\|^2 = \|h\|^2 + 2 \int_0^t \langle A u(\cdot,s), u(\cdot,s) \rangle \, ds + 2 \int_0^t (u(\cdot,s), f(\cdot,s)) \, ds + 2 \int_0^t (u(\cdot,s), M(\cdot,ds)) + \int_0^t \mathrm{Tr}\, Q_s \, ds, \tag{3.98}$$
where $M(\cdot,ds) = \sigma(\cdot,s) W(\cdot,ds)$.

Proof. In view of Lemma 5.1 and Lemma 5.2, since the problem is linear, $u = \bar u + v$ is a unique weak solution satisfying the regularity property $u \in L^2(\Omega \times [0,T]; H^1) \cap L^2(\Omega; C([0,T]; H))$. Making use of the simple inequalities
$$\|u_t\|^2 \le 3 \Big\{ \|G_t h\|^2 + \Big\| \int_0^t G_{t-s} f_s \, ds \Big\|^2 + \|v_t\|^2 \Big\}$$
and
$$\int_0^T \|u_t\|_1^2 \, dt \le 3 \Big\{ \int_0^T \|G_t h\|_1^2 \, dt + \int_0^T \Big\| \int_0^t G_{t-s} f_s \, ds \Big\|_1^2 \, dt + \int_0^T \|v_t\|_1^2 \, dt \Big\},$$
we can apply the inequalities (3.89), (3.91), (3.95), and (3.96) to obtain the bound in (3.97).
Concerning the energy equation (3.98), recall that $u_t = \bar u_t + v_t$ as defined by (3.62), and denote the $n$-term approximation $u^n(\cdot,t) = \bar u_t^n + v^n(\cdot,t)$. By definitions, it is easy to show that $u^n(\cdot,t)$ satisfies equation (3.98). That is,
$$\|u^n(\cdot,t)\|^2 = \|h^n\|^2 + 2 \int_0^t (A u^n(\cdot,s), u^n(\cdot,s)) \, ds + 2 \int_0^t (u^n(\cdot,s), f_s^n) \, ds + 2 \int_0^t (u^n(\cdot,s), dM_s^n) + \int_0^t \mathrm{Tr}\, Q_s^n \, ds, \tag{3.99}$$

where $f_t^n$ and $M_t^n$ are the $n$-term approximations of $f_t$ and $M_t$, respectively. Clearly, we have
$$\|u_t - u^n(\cdot,t)\|^2 \le 2 \{ \|\bar u_t - \bar u^n(\cdot,t)\|^2 + \|v_t - v^n(\cdot,t)\|^2 \}. \tag{3.100}$$
From (3.63), with the aid of Lemma 5.1, we can deduce that
$$\sup_{0 \le t \le T} E \|\bar u_t - \bar u^n(\cdot,t)\|^2 \le 2 \Big\{ \|h - h^n\|^2 + T \, E \int_0^T \|f_t - f_t^n\|^2 \, dt \Big\} \to 0, \quad \text{as } n \to \infty.$$

Making use of (3.95), we can get
$$E \sup_{0 \le t \le T} \|v_t - v^n(\cdot,t)\|^2 \le 16 \, E \int_0^T (\mathrm{Tr}\, \tilde Q_s^n) \, ds,$$
with $\tilde Q_t^n = \frac{d}{dt} [\tilde M^n]_t$, where $\tilde M_t^n = (M_t - M_t^n)$ and $[M]_t$ denotes the covariation operator for $M_t$. Similar to the derivation of (3.64)–(3.71), it can be shown that
$$E \int_0^T (\mathrm{Tr}\, \tilde Q_s^n) \, ds \to 0, \quad \text{as } n \to \infty. \tag{3.101}$$

In view of the above results, the equation (3.100) yields
$$\sup_{0 \le t \le T} E \|u_t - u^n(\cdot,t)\|^2 \to 0, \quad \text{as } n \to \infty, \tag{3.102}$$
which implies
$$E \|u^n(\cdot,t)\|^2 \to E \|u_t\|^2, \quad \text{as } n \to \infty. \tag{3.103}$$

By means of Lemmas 5.1 and 5.2, and the fact that $|\langle A\phi, \psi \rangle| \le \|\phi\|_1 \|\psi\|_1$ for $\phi, \psi \in H^1$, we can deduce that
$$E \Big| \int_0^T \langle A u_t, u_t \rangle \, dt - \int_0^T (A u^n(\cdot,t), u^n(\cdot,t)) \, dt \Big| \le E \int_0^T \{ |\langle A(u_t - u^n(\cdot,t)), u_t \rangle| + |\langle A u^n(\cdot,t), u_t - u^n(\cdot,t) \rangle| \} \, dt$$
$$\le E \int_0^T (\|u_t\|_1 + \|u^n(\cdot,t)\|_1) \|u_t - u^n(\cdot,t)\|_1 \, dt \le \Big\{ E \int_0^T (\|u_t\|_1 + \|u^n(\cdot,t)\|_1)^2 \, dt \Big\}^{1/2} \Big\{ E \int_0^T \|u_t - u^n(\cdot,t)\|_1^2 \, dt \Big\}^{1/2}$$
$$\le C \Big\{ E \int_0^T \|u_t - u^n(\cdot,t)\|_1^2 \, dt \Big\}^{1/2} \le 2C \Big\{ E \int_0^T ( \|\bar u_t - \bar u^n(\cdot,t)\|_1^2 + \|v_t - v^n(\cdot,t)\|_1^2 ) \, dt \Big\}^{1/2}, \tag{3.104}$$
for some $C > 0$. It follows that
$$\lim_{n \to \infty} \int_0^T (A u^n(\cdot,t), u^n(\cdot,t)) \, dt = \int_0^T \langle A u_t, u_t \rangle \, dt \tag{3.105}$$

in $L^1(\Omega)$. This is so because
$$E \int_0^T \|\bar u_t - \bar u^n(\cdot,t)\|_1^2 \, dt = E \sum_{k>n} \lambda_k \int_0^T \Big( G_t h + \int_0^t G_{t-s} f_s \, ds, \; e_k \Big)^2 dt$$
$$\le 2 \sum_{k>n} \Big\{ \lambda_k \int_0^T e^{-2\lambda_k t} (h, e_k)^2 \, dt + \lambda_k \, E \int_0^T \int_0^t e^{-2\lambda_k(t-s)} (f_s, e_k)^2 \, ds \, dt \Big\} \le \sum_{k>n} (h, e_k)^2 + E \sum_{k>n} \int_0^T (f_t, e_k)^2 \, dt,$$
which goes to zero as $n \to \infty$, and


$$E \int_0^T \|v_t - v^n(\cdot,t)\|_1^2 \, dt = E \int_0^T \sum_{k>n} \lambda_k \Big( \int_0^t e^{-\lambda_k(t-s)} (e_k, \sigma_s \, dW_s) \Big)^2 dt = E \sum_{k>n} \lambda_k \int_0^T \int_0^t e^{-2\lambda_k(t-s)} (Q_s e_k, e_k) \, ds \, dt$$
$$\le \frac{1}{2} \, E \sum_{k>n} \int_0^T (Q_s e_k, e_k) \, ds \to 0, \quad \text{as } n \to \infty.$$

If we can show that $\int_0^t (u_s^n, dM_s^n)$ and $\int_0^t \mathrm{Tr}\, Q_s^n \, ds$ converge, respectively, to $\int_0^t (u_s, dM_s)$ and $\int_0^t \mathrm{Tr}\, Q_s \, ds$, then by passing the limits in (3.99) termwise, the equation (3.98) follows. For instance, consider
$$E \Big| \int_0^t (u_s, dM_s) - \int_0^t (u^n(\cdot,s), dM_s^n) \Big| \le E \Big| \int_0^t (\tilde u^n(\cdot,s), dM_s) \Big| + E \Big| \int_0^t (u^n(\cdot,s), d\tilde M_s^n) \Big|,$$
where we set $\tilde u^n = (u - u^n)$. First it is easy to show that $\int_0^t \mathrm{Tr}\, Q_s^n \, ds \to \int_0^t \mathrm{Tr}\, Q_s \, ds$ in $L^1(\Omega)$. Next, by a submartingale inequality,
$$E \Big| \int_0^t (\tilde u^n(\cdot,s), dM_s) \Big| \le C_1 \, E \Big\{ \int_0^t (Q_s \tilde u^n(\cdot,s), \tilde u^n(\cdot,s)) \, ds \Big\}^{1/2} \le C_1 \Big\{ E \sup_{0 \le t \le T} \|u_t - u^n(\cdot,t)\|^2 \; E \int_0^T (\mathrm{Tr}\, Q_s) \, ds \Big\}^{1/2},$$
which, by (3.102), goes to zero as $n \to \infty$. Similarly, we have
$$E \Big| \int_0^T (u^n(\cdot,s), d\tilde M_s^n) \Big| \le C_1 \, E \Big\{ \int_0^T (\tilde Q_s^n u^n(\cdot,s), u^n(\cdot,s)) \, ds \Big\}^{1/2} \le C_1 \Big\{ \big[ E \sup_{0 \le t \le T} \|u^n(\cdot,t)\|^2 \big] \, E \int_0^T \mathrm{Tr}\,(\tilde Q_s^n) \, ds \Big\}^{1/2} \to 0, \quad \text{as } n \to \infty.$$
Hence the theorem is proved. $\square$

Next we consider the $L^p$-regularity of the solution. Before stating the theorem, we shall present the following two lemmas.

Lemma 5.4 Let $\bar u(x,t)$ be given by (3.87). Suppose that $h \in H$ and $f(\cdot,t)$ is an $\mathcal{F}_t$-adapted process in $H$ such that
$$E \Big\{ \int_0^T \|f(\cdot,t)\|^2 \, dt \Big\}^p < \infty, \tag{3.106}$$
for $p \ge 1$. Then $\bar u(\cdot,t)$ is an adapted $H^1$-valued process which is continuous in $H$, such that
$$E \sup_{0 \le t \le T} \|\bar u(\cdot,t)\|^{2p} + E \Big( \int_0^T \|\bar u(\cdot,t)\|_1^2 \, dt \Big)^p \le C_p \Big\{ \|h\|^{2p} + E \Big( \int_0^T \|f(\cdot,t)\|^2 \, dt \Big)^p \Big\}, \tag{3.107}$$
for some constant $C_p > 0$. If $h \in H^1$ and condition (3.106) holds for $p > 1$, the process $\bar u(\cdot,t)$, $t \in [0,T]$, is Hölder-continuous in $H$ with exponent $\alpha < (p-1)/2p$.

Proof. It follows from Lemma 5.1 that $\bar u(\cdot,t)$, being continuous in $H$, is an $H^1$-valued process, and for $h \in H$, we have
$$\sup_{0 \le t \le T} \|G_t h\|^{2p} + \Big( \int_0^T \|G_t h\|_1^2 \, dt \Big)^p \le C_1 \|h\|^{2p}. \tag{3.108}$$
Similarly,
$$E \sup_{0 \le t \le T} \Big\| \int_0^t G_{t-s} f_s \, ds \Big\|^{2p} + E \Big\{ \int_0^T \Big\| \int_0^t G_{t-s} f_s \, ds \Big\|_1^2 \, dt \Big\}^p \le C_2 \, E \Big\{ \int_0^T \|f_s\|^2 \, ds \Big\}^p, \tag{3.109}$$
for some positive constants $C_1$ and $C_2$. Therefore the inequality (3.107) is a consequence of (3.108) and (3.109). To show the Hölder continuity, for $t, (t+\tau) \in [0,T]$, we have
$$\|\bar u_{t+\tau} - \bar u_t\|^2 \le 2 \Big\{ \|(G_{t+\tau} - G_t) h\|^2 + \Big\| \int_0^{t+\tau} G_{t+\tau-s} f_s \, ds - \int_0^t G_{t-s} f_s \, ds \Big\|^2 \Big\}. \tag{3.110}$$
By the series representation (3.51) for the Green's function $G_t$, for $h \in H^1$, it can be shown that
$$\|G_{t+\tau} h - G_t h\|^2 \le \frac{|\tau|}{2} \|h\|_1^2. \tag{3.111}$$
Clearly,
$$\Big\| \int_0^{t+\tau} G_{t+\tau-s} f_s \, ds - \int_0^t G_{t-s} f_s \, ds \Big\|^2 \le 2 \Big\{ \Big\| \int_t^{t+\tau} G_{t+\tau-s} f_s \, ds \Big\|^2 + \Big\| \int_0^t (G_{t+\tau-s} - G_{t-s}) f_s \, ds \Big\|^2 \Big\}, \tag{3.112}$$
and
$$\Big\| \int_t^{t+\tau} G_{t+\tau-s} f_s \, ds \Big\|^2 \le \Big( \int_t^{t+\tau} \|f_s\| \, ds \Big)^2 \le |\tau| \int_0^T \|f_s\|^2 \, ds. \tag{3.113}$$
We claim that the last integral in (3.112) satisfies (see the following Remark (1))
$$\Big\| \int_0^t (G_{t+\tau-s} - G_{t-s}) f_s \, ds \Big\|^2 \le C |\tau| \int_0^T \|f_s\|^2 \, ds, \tag{3.114}$$
for some constant $C > 0$. By taking (3.109)–(3.114) into account, we can deduce that, for any $p > 1$, there exists a constant $C(p,T) > 0$ such that
$$E \Big\| \int_0^{t+\tau} G_{t+\tau-s} f_s \, ds - \int_0^t G_{t-s} f_s \, ds \Big\|^{2p} \le C(p,T) |\tau|^p \, E \Big\{ \int_0^T \|f_s\|^2 \, ds \Big\}^p.$$
By the Kolmogorov continuity criterion, the above shows that $\bar u(\cdot,t)$ is Hölder-continuous with exponent $\alpha < (p-1)/2p$. $\square$

Remarks:

(1) To verify (3.114), take $\tau > 0$ and $t, t+\tau \in [0,T]$. By using the series representation of the Green's function and the bound $e^{-\lambda t} \le e^{-1} (1/\lambda t)$ for any $\lambda > 0$, $t > 0$, it can be shown that
$$\|(G_{t+\tau} - G_t) h\| \le C_1 (\tau/t) \|h\|$$
for any $h \in H$, $t > 0$, and for some $C_1 > 0$. Hence there is a constant $C_2 > 0$ such that
$$\|(G_{t+\tau} - G_t) h\| \le C_2 \, \rho(\tau/t) \|h\|, \quad t > 0,$$
where $\rho(\tau/t) = \{1 \wedge (\tau/t)\}$. It follows that
$$\Big\| \int_0^t (G_{t+\tau-s} - G_{t-s}) f_s \, ds \Big\|^2 \le C_2^2 \Big\{ \int_0^t \rho\Big(\frac{\tau}{t-s}\Big) \|f_s\| \, ds \Big\}^2 \le C_2^2 \int_0^t \rho^2(\tau/s) \, ds \int_0^t \|f_s\|^2 \, ds \le C |\tau| \int_0^T \|f_s\|^2 \, ds,$$
which verifies (3.114).

(2) If the initial state $h \in H$ instead of $H^1$, one can only show that $\bar u(\cdot,t)$ is Hölder-continuous in $[\epsilon, T]$ for any $\epsilon > 0$.
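The kernel-difference bound of Remark (1) can be checked numerically in spectral coordinates: per eigenmode, $(G_{t+\tau} - G_t)$ multiplies the $j$-th coefficient by $e^{-\lambda_j t}(1 - e^{-\lambda_j \tau})$, and this factor is dominated by $\rho(\tau/t) = 1 \wedge (\tau/t)$, so in this diagonal setting even $C_2 = 1$ suffices. A sketch (illustration only, with assumed eigenvalues $\lambda_j = j^2$):

```python
import math

# Check (illustration, not from the text) that each eigenmode factor of
# (G_{t+tau} - G_t) is bounded by rho(tau/t) = min(1, tau/t):
# e^{-lam t}(1 - e^{-lam tau}) <= min(1, lam*tau*e^{-lam t}) <= min(1, tau/(e*t)).

def mode_factor(lam, t, tau):
    return math.exp(-lam * t) * (1.0 - math.exp(-lam * tau))

ok = True
lams = [j * j for j in range(1, 300)]
for t in [0.01, 0.1, 0.5, 1.0]:
    for tau in [0.001, 0.01, 0.1, 1.0]:
        worst = max(mode_factor(l, t, tau) for l in lams)
        ok = ok and worst <= min(1.0, tau / t) + 1e-12
print(ok)
```

The chain of inequalities in the comment uses exactly the elementary bound $\lambda t\, e^{-\lambda t} \le e^{-1}$ quoted in Remark (1).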

Lemma 5.5 Suppose that $\sigma(\cdot,t)$ is a predictable $H$-valued process such that, for $p \ge 1$,
$$E \Big\{ \int_0^T \mathrm{Tr}\, Q_t \, dt \Big\}^p = E \Big\{ \int_0^T \!\! \int_D q(x,x,t) \, dx \, dt \Big\}^p < \infty. \tag{3.115}$$
Then $v(\cdot,t)$ is a continuous process in $H$ such that the following inequality holds:
$$E \sup_{0 \le t \le T} \|v(\cdot,t)\|^{2p} + E \Big\{ \int_0^T \|v(\cdot,t)\|_1^2 \, dt \Big\}^p \le C_p \, E \Big\{ \int_0^T (\mathrm{Tr}\, Q_t) \, dt \Big\}^p, \tag{3.116}$$
for some constant $C_p > 0$. Moreover, if $\sigma(\cdot,t)$ is a predictable $H^1$-valued process such that
$$E \sup_{0 \le t \le T} (\mathrm{Tr}\, Q_t)^p < \infty, \tag{3.117}$$
then, for $p > 2$, the process $v(\cdot,t)$ has a Hölder-continuous version in $H$ with exponent $\alpha < (p-2)/4p$.

Proof. From the energy equation (3.98) with $h = 0$ and $f = 0$, and by invoking the simple inequality $(a+b)^p \le 2^p (|a|^p + |b|^p)$, we obtain
$$\|v_t\|^{2p} \le 4^p \Big\{ \Big| \int_0^t (v_s, dM_s) \Big|^p + \Big| \int_0^t (\mathrm{Tr}\, Q_s) \, ds \Big|^p \Big\}. \tag{3.118}$$

By the B-D-G inequality,
$$E \sup_{0 \le t \le T} \Big| \int_0^t (v_s, dM_s) \Big|^p \le C_1 \, E \Big\{ \int_0^T (Q_s v_s, v_s) \, ds \Big\}^{p/2} \le C_1 \, E \Big[ \sup_{0 \le t \le T} \|v_t\|^p \Big\{ \int_0^T (\mathrm{Tr}\, Q_s) \, ds \Big\}^{p/2} \Big] \le \epsilon \, E \sup_{0 \le t \le T} \|v_t\|^{2p} + C_\epsilon \, E \Big\{ \int_0^T (\mathrm{Tr}\, Q_s) \, ds \Big\}^p, \tag{3.119}$$
where use was made of the fact $C_1(ab) \le (\epsilon a^2 + C_\epsilon b^2)$ with $C_\epsilon = (C_1^2 / 4\epsilon)$ for any $\epsilon > 0$. In view of (3.118) and (3.119), we get
$$E \sup_{0 \le t \le T} \|v_t\|^{2p} \le 4^p \Big\{ \epsilon \, E \sup_{0 \le t \le T} \|v_t\|^{2p} + (C_\epsilon + 1) \, E \Big[ \int_0^T (\mathrm{Tr}\, Q_s) \, ds \Big]^p \Big\}.$$
By choosing $\epsilon = 1/(2 \cdot 4^p)$, the above yields the inequality
$$E \sup_{0 \le t \le T} \|v_t\|^{2p} \le C_2 \, E \Big\{ \int_0^T (\mathrm{Tr}\, Q_s) \, ds \Big\}^p \tag{3.120}$$
with $C_2 = 2 \cdot 4^p (C_\epsilon + 1)$. From the energy equation (3.98), we can deduce that
$$\int_0^t \|v_s\|_1^2 \, ds \le C_3 \Big\{ \Big| \int_0^t (v_s, dM_s) \Big| + \int_0^t (\mathrm{Tr}\, Q_s) \, ds \Big\}.$$
Following similar estimates from (3.118) to (3.120), we can get
$$E \Big\{ \int_0^T \|v_s\|_1^2 \, ds \Big\}^p \le C_4 \, E \Big\{ \int_0^T (\mathrm{Tr}\, Q_s) \, ds \Big\}^p. \tag{3.121}$$
Now the inequality (3.116) follows from (3.120) and (3.121) with $C_p = C_2 + C_4$.
To show the Hölder continuity, referring to equation (3.88), we can obtain, for $s < t$,
$$v_t - v_s = \int_s^t G_{t-r} \, dM_r + \int_0^s (G_{t-s} - I) G_{s-r} \, dM_r.$$
It follows that
$$E \|v_t - v_s\|^{2p} \le 4^p \Big\{ E \Big\| \int_s^t G_{t-r} \, dM_r \Big\|^{2p} + E \Big\| \int_0^s (G_{t-s} - I) G_{s-r} \, dM_r \Big\|^{2p} \Big\}. \tag{3.122}$$

To estimate the above convolution-type stochastic integrals, we need a maximal inequality, similar to the B-D-G inequality, which will be given later as Theorem 6.2. By applying this maximal inequality, we obtain
$$E \Big\| \int_s^t G_{t-r} \, dM_r \Big\|^{2p} \le C_1 \, E \Big\{ \int_s^t \mathrm{Tr}\, Q_r \, dr \Big\}^p \le C_1 \sup_{t \le T} E (\mathrm{Tr}\, Q_t)^p \, (t-s)^p, \tag{3.123}$$

for some constants C1 > 0. Similarly, consider the last term in equation
(3.122).
Z s Z s
2p
Ek (G I)Gsr dMr k C2 E { T r {K (s r)Qr K (s r)}dr}p ,
0 0

where we let K (s r) = (G I)Gsr . In terms of the eigenfunctions, we


write
X
T r {K (s r)Qr K (s r)} = (K (s r)Qr K (s r) en , en )
X n
= (1 en )2 e2n (sr) (Qr en , en ),
n

which converges uniformly over [0, T ]. By means of the simple inequality


1/2 et < (et)1/2 for > 0, it can be shown that

en )2 e2n (sr) (1 en ) e2n (sr)


(1 Z

1/2
1/2
n e
n r
dr(1/2
n e
2n (sr)
) C3 ( ) ,
0 sr

for some C3 > 0. In view of the above two inequalities, we get


1/2
T r {K (s r)Qr K (s r)} C3 ( ) T r Qr
sr
and hence Z s
Ek (G I)Gsr dMr k2p
0Z s
1/2
C1 [ C3 ( ) T r Qr dr]p (3.124)
0 sr
C4 E sup( T rQt )p p/2 .
tT

By making use of the inequalities (3.123), (3.124), and the condition (3.117) in the equation (3.122), we can show that there exists a constant $C_5 > 0$ such that
$$E \|v_t - v_s\|^{2p} \le C_5 \, \tau^{p/2} = C_5 |t-s|^{p/2},$$
for $p > 2$. By applying Kolmogorov's continuity criterion, we conclude that $v(\cdot,t)$ has a regular version as a Hölder-continuous process in $H$ with exponent $\alpha < (p-2)/4p$. $\square$

With the aid of Lemma 5.4 and Lemma 5.5, it is not hard to show that the following theorem holds true.

Theorem 5.6 Let the conditions (3.106) and (3.115) be satisfied with $p \ge 1$. For any $h \in H$, the solution $u(\cdot,t)$ of the problem (3.59) given by (3.60) is a predictable $H^1$-valued process which is continuous in $H$ such that the energy equation (3.98) holds and
$$E \sup_{0 \le t \le T} \|u(\cdot,t)\|^{2p} \le C \Big\{ \|h\|^{2p} + E \Big[ \int_0^T \|f(\cdot,t)\|^2 \, dt \Big]^p + E \Big[ \int_0^T (\mathrm{Tr}\, Q_t) \, dt \Big]^p \Big\}, \tag{3.125}$$
where $C$ is a positive constant depending on $T$ and $p$. If $h \in H^1$ and the conditions (3.106) and (3.117) are satisfied for $p > 2$, then it is Hölder-continuous in $H$ with exponent $\alpha < (p-2)/4p$.

Similarly, we can consider the spatial regularity of solutions in Sobolev spaces. For example, under some smoothness conditions, the solution will have improved regularity, as shown by the following theorem.

Theorem 5.7 In the linear stochastic equation (3.59), assume that $h \in H_0^1$ and there exist constants $C_1, C_2 > 0$ such that the following conditions are satisfied:
$$E \int_0^T \|f_s\|_1^2 \, ds \le C_1, \tag{3.126}$$
$$E \int_0^T \mathrm{Tr}\,[(-A) Q_s] \, ds \le C_2. \tag{3.127}$$
Then the solution $u$ of the problem (3.59) has the additional regularity $u \in L^2((0,T) \times \Omega; H^2)$ with $E \int_0^T \|u_s\|_2^2 \, ds < \infty$.

Proof. It suffices to show that $u_t$ is in the domain $\mathcal{D}(A)$ of $A$ and
$$E \int_0^T \|A u_s\|^2 \, ds < \infty.$$
This will be proved by the method of eigenfunction expansion as done in Theorem 5.3. We will only provide the main steps of the proof.

Let $V_n$ denote the subspace of $H$ spanned by the set of eigenvectors $\{e_1, \cdots, e_n\}$ and let $P_n : H \to V_n$ denote the orthogonal projection operator defined by $P_n h = \sum_{k=1}^n (h, e_k) e_k$ for $h \in H$. Define $A_n = A P_n : H \to V_n$, which is a bounded linear operator on $H$ so that, for $h \in H$, $A_n h = \sum_{k=1}^n (-\lambda_k)(h, e_k) e_k$. It is easy to show that, for $g \in H^2$, $\|A g - A_n g\| \to 0$ as $n \to \infty$.

Referring to equations (3.62) and (3.63), we can write
$$A_n u_t = \xi_t^n + \zeta_t^n, \tag{3.128}$$
where
$$\xi_t^n = A_n G_t h + \int_0^t A_n G_{t-s} f_s \, ds, \tag{3.129}$$
$$\zeta_t^n = \int_0^t A_n G_{t-s} \, dM_s. \tag{3.130}$$

From (3.128), we get
$$E \int_0^T \|A_n u_s\|^2 \, ds \le 2 \Big\{ E \int_0^T \|\xi_s^n\|^2 \, ds + E \int_0^T \|\zeta_s^n\|^2 \, ds \Big\}. \tag{3.131}$$
By noting (3.129), it is not hard to show that
$$E \int_0^T \|\xi_s^n\|^2 \, ds \le K_1 \Big\{ \|h\|_1^2 + E \int_0^T \|f_s\|_1^2 \, ds \Big\}, \tag{3.132}$$
for some constant $K_1 > 0$. To estimate the last integral in (3.131), we first let $M_s = \sum_{k=1}^\infty m_s^k e_k$ with $m_s^k = (M_s, e_k)$. Then it follows from (3.130) that
$$\zeta_t^n = \sum_{k=1}^n (-\lambda_k) \Big[ \int_0^t e^{-\lambda_k(t-s)} \, dm_s^k \Big] e_k,$$
and, with $q_s^k = (Q_s e_k, e_k)$,
$$E \int_0^T \|\zeta_s^n\|^2 \, ds = E \sum_{k=1}^n \int_0^T \Big[ \lambda_k \int_0^t e^{-\lambda_k(t-s)} \, dm_s^k \Big]^2 dt = E \sum_{k=1}^n \int_0^T \lambda_k^2 \int_0^t e^{-2\lambda_k(t-s)} q_s^k \, ds \, dt$$
$$\le \frac{1}{2} \, E \sum_{k=1}^n \lambda_k \int_0^T q_s^k \, ds = \frac{1}{2} \, E \int_0^T \sum_{k=1}^n (Q_s e_k, (-A) e_k) \, ds \le \frac{1}{2} \, E \int_0^T \mathrm{Tr}\,[(-A) Q_s] \, ds. \tag{3.133}$$

In view of the above estimates (3.128)–(3.133), we can conclude that there exists a constant $K_3 > 0$ such that
$$E \int_0^T \|A_n u_s\|^2 \, ds \le K_3 \Big\{ \|h\|_1^2 + E \int_0^T \|f_s\|_1^2 \, ds + E \int_0^T \mathrm{Tr}\,[(-A) Q_s] \, ds \Big\}.$$
By taking $n \to \infty$ and noticing the conditions $h \in H_0^1$, (3.126), and (3.127), it follows that $E \int_0^T \|A u_s\|^2 \, ds < \infty$, or $u \in L^2((0,T) \times \Omega; H^2)$. $\square$

Remarks:

(1) Here, for simplicity, we assumed that initially $u(\cdot,0) = h \in H$ is nonrandom. It is clear from the proof that the theorem holds true if $h(x)$ is an $\mathcal{F}_0$-measurable random field such that $h \in L^{2p}(\Omega, H)$. In the estimate (3.125), we simply change $\|h\|^{2p}$ to $E \|h\|^{2p}$.

(2) Owing to the $H^1$-regularity of the solution, it is not hard to show that $u$ satisfies the so-called variational equation:
$$(u_t, \phi) = (h, \phi) + \int_0^t \langle A u_s, \phi \rangle \, ds + \int_0^t (f_s, \phi) \, ds + \int_0^t (\phi, dM_s), \tag{3.134}$$
for any $\phi \in H^1$. In contrast with the notion of a mild solution, a solution to the variational equation (3.134) is known as a strong solution.

(3) The continuity properties of the solution $u$ in both space and time can also be studied. However, to avoid further technical complications, they will not be discussed here.

Before passing, we shall briefly discuss the heat equation (3.79) with Poisson noise. In this case, the mild solution is also more regular than stated in Theorem 4.3 for a Wiener noise. It will be shown that, analogous to Theorem 5.3, the solution $u$ given by (3.81) is in $L^2((0,T) \times \Omega; H^1)$.

Theorem 5.8 Let the conditions of Theorem 4.3 be satisfied. Then the mild solution $u(\cdot,t)$ defined by (3.81) is an adapted process in $H_0^1$ such that
$$\sup_{0 \le t \le T} E \|u(\cdot,t)\|^2 + E \int_0^T \|u(\cdot,t)\|_1^2 \, dt \le C(T) \Big\{ \|h\|^2 + E \int_0^T \|f(\cdot,s)\|^2 \, ds + E \int_0^T \!\! \int_B \|\Phi(\cdot,s,\xi)\|^2 \, ds \, \mu(d\xi) \Big\},$$
for some $C(T) > 0$.

Proof. The proof is parallel to that of Theorem 5.3. The key point is to show that $v(\cdot,t)$, redefined here as
$$v(\cdot,t) = \int_0^t \!\! \int_B G_{t-s} \Phi(\cdot,s,\xi) \, \tilde N(ds, d\xi),$$
belongs to $L^2((0,T) \times \Omega; H_0^1)$. To this end, we follow the approach in Lemma 5.2. Consider the $n$-term approximation $v^n(\cdot,t) = \sum_{k=1}^n v_t^k e_k$, where
$$v_t^k = (v(\cdot,t), e_k) = \int_0^t \!\! \int_B e^{-\lambda_k(t-s)} (e_k, \Phi(\cdot,s,\xi)) \, \tilde N(ds, d\xi).$$

It follows that
$$E \int_0^T \|v^n(\cdot,t)\|_1^2 \, dt = E \int_0^T \sum_{k=1}^n \lambda_k \int_0^t \!\! \int_B e^{-2\lambda_k(t-s)} (e_k, \Phi(\cdot,s,\xi))^2 \, ds \, \mu(d\xi) \, dt$$
$$\le \frac{1}{2} \, E \int_0^T \!\! \int_B \sum_{k=1}^n (e_k, \Phi(\cdot,s,\xi))^2 \, ds \, \mu(d\xi) \le \frac{1}{2} \, E \int_0^T \!\! \int_B \|\Phi(\cdot,s,\xi)\|^2 \, ds \, \mu(d\xi).$$
By letting $n \to \infty$, the above yields
$$E \int_0^T \|v(\cdot,t)\|_1^2 \, dt \le \frac{1}{2} \, E \int_0^T \!\! \int_B \|\Phi(\cdot,s,\xi)\|^2 \, ds \, \mu(d\xi),$$
which shows that $v \in L^2(\Omega \times (0,T); H_0^1)$. The rest of the proof will be omitted. $\square$

3.6 Stochastic Reaction–Diffusion Equations


Consider the nonlinear initial-boundary value problem:
$$\frac{\partial u}{\partial t} = (\kappa \Delta - \alpha) u + f(u,x,t) + \dot V(u,x,t), \quad t \in (0,T),$$
$$B u|_{\partial D} = 0, \quad u(x,0) = h(x), \quad x \in D. \tag{3.135}$$
In the above equation, the nonlinear term depends only on $u$ but not on its gradient $\partial_x u$. This is known as a reaction–diffusion equation [89] perturbed by a state-dependent white noise
$$\dot V(u,x,t) = g(x,t) + \sigma(u,x,t) \frac{\partial}{\partial t} W(x,t), \tag{3.136}$$
where $g(x,t)$, $x \in D$, and the nonlinear terms $f(u,x,t)$ and $\sigma(u,x,t)$ for $(u,x) \in \mathbb{R} \times D$ are given predictable random fields to be specified later.

Let $G(x,y;t)$ be the Green's function as before, with the associated Green's operator $G_t$. Then the system (3.135) can be converted into the integral equation:
$$u(x,t) = \int_D G(x,y;t) h(y) \, dy + \int_0^t \!\! \int_D G(x,y;t-s) g(y,s) \, dy \, ds + \int_0^t \!\! \int_D G(x,y;t-s) f(u(y,s),y,s) \, dy \, ds$$
$$+ \int_0^t \!\! \int_D G(x,y;t-s) \sigma(u(y,s),y,s) \, W(y,ds) \, dy. \tag{3.137}$$

Now we consider the mild solution for the problem (3.135). Rewriting the above as an integral equation in $H$, we will also write $u_t = u(\cdot,t)$, $F_t(u) = f(u(\cdot,t),\cdot,t)$, $g_t = g(\cdot,t)$, $\Sigma_t(u) = \sigma(u(\cdot,t),\cdot,t)$ and $dW_t = W(\cdot,dt)$, so that equation (3.137) yields
$$u_t = G_t h + \int_0^t G_{t-s} g_s \, ds + \int_0^t G_{t-s} F_s(u) \, ds + \int_0^t G_{t-s} \Sigma_s(u) \, dW_s. \tag{3.138}$$
Here we say that $u \in L^2(\Omega \times [0,T]; H)$ is a mild solution of the problem (3.135) if $u(\cdot,t)$ is an adapted process in $H$ which satisfies the integral equation (3.138) for a.e. $(\omega,t) \in \Omega \times [0,T]$ such that
$$E \int_0^T \{ \|F_t(u)\|^2 + (R \, \Sigma_t(u), \Sigma_t(u)) \} \, dt = E \int_0^T \!\! \int_D \{ |f(u(x,t),x,t)|^2 + r(x,x) \sigma^2(u(x,t),x,t) \} \, dx \, dt < \infty.$$

To prove the existence theorem, we impose the following conditions:

(A.1) $f(r,x,t)$ and $\sigma(r,x,t)$ are predictable random fields. There exists a constant $K_1 > 0$ such that
$$\|f(u,\cdot,t)\|^2 + \|\sigma(u,\cdot,t)\|^2 \le K_1 (1 + \|u\|^2), \quad a.s.$$
for any $u \in H$, $t \in [0,T]$.

(A.2) There exists a constant $K_2 > 0$ such that
$$\|f(u,\cdot,t) - f(v,\cdot,t)\|^2 + \|\sigma(u,\cdot,t) - \sigma(v,\cdot,t)\|^2 \le K_2 \|u - v\|^2, \quad a.s.$$
for any $u, v \in H$, $t \in [0,T]$.

(A.3) $g(\cdot,t)$ is a predictable $H$-valued process such that
$$E \Big( \int_0^T \|g_t\|^2 \, dt \Big)^p < \infty,$$
for $p \ge 1$, and $W(x,t)$ is an $R$-Wiener random field of finite trace whose covariance function $r(x,y)$ is bounded by $r_0$.

Let $X_{p,T}$ denote the set of all continuous $\mathcal{F}_t$-adapted processes in $H$ for $0 \le t \le T$ such that $E \sup_{0 \le t \le T} \|u(\cdot,t)\|^{2p} < \infty$, for a given $p \ge 1$. Then $X_{p,T}$ is a Banach space under the norm
$$\|u\|_{p,T} = \{ E \sup_{0 \le t \le T} \|u_t\|^{2p} \}^{1/2p}. \tag{3.139}$$
Define an operator $\Gamma$ in $X_{p,T}$ as follows:
$$\Gamma_t u = G_t h + \int_0^t G_{t-s} F_s(u) \, ds + \int_0^t G_{t-s} g_s \, ds + \int_0^t G_{t-s} \Sigma_s(u) \, dW_s, \tag{3.140}$$
for $u \in X_{p,T}$. In the following two lemmas, we will show that the operator $\Gamma$ is well defined and Lipschitz continuous in $X_{p,T}$.
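Since the fixed point of the above map is found by Picard iteration, it may help to see the contraction at work in the simplest possible setting. The toy sketch below (an illustration, not from the text) drops the noise and keeps a single eigenmode, so the mild equation reduces to the scalar Volterra map $u \mapsto e^{-\lambda t} h + \int_0^t e^{-\lambda(t-s)} f(u(s))\,ds$ with $f(u) = a \sin u$; all numerical values are arbitrary assumed choices:

```python
import math

# Toy Picard iteration (illustration, not from the text): one eigenmode,
# no noise, f(u) = a*sin(u).  Assumed data: lam = 4, a = 1, h = 1, T = 0.5.

lam, a, h, T, n = 4.0, 1.0, 1.0, 0.5, 500
dt = T / n

def gamma(u):
    """One application of the (discretized) map to a path u on the grid."""
    out = []
    for i in range(n + 1):
        t = i * dt
        conv = sum(math.exp(-lam * (t - j * dt)) * a * math.sin(u[j]) * dt
                   for j in range(i))
        out.append(math.exp(-lam * t) * h + conv)
    return out

u = [h] * (n + 1)                 # initial guess: the constant path
dists = []
for _ in range(6):
    v = gamma(u)
    dists.append(max(abs(x - y) for x, y in zip(u, v)))
    u = v
print(dists)                      # sup-distances shrink geometrically
```

With Lipschitz constant $a = 1$ and $\int_0^t e^{-\lambda(t-s)}\,ds \le 1/\lambda = 0.25$, successive sup-distances contract by roughly a factor of four per iteration, mirroring the small-$T$ contraction estimate used below.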

Lemma 6.1 Under the conditions (A.1)–(A.3) with $p \ge 1$, the mapping $\Gamma$ given by (3.140) is a well-defined bounded operator which maps $X_{p,T}$ into itself such that
$$\|\Gamma u\|_{p,T}^{2p} \le b_1 \Big\{ 1 + \|h\|^{2p} + E \Big( \int_0^T \|g_t\|^2 \, dt \Big)^p + \|u\|_{p,T}^{2p} \Big\}, \tag{3.141}$$
for some constant $b_1 > 0$, depending only on $p$, $r_0$ and $T$.

Proof. By condition (A.1), we have
$$E \Big( \int_0^T \|F_t(u)\|^2 \, dt \Big)^p \le K_1^p \, E \Big\{ \int_0^T (1 + \|u_t\|^2) \, dt \Big\}^p \le (2K_1)^p \Big\{ T^p + E \Big( \int_0^T \|u_t\|^2 \, dt \Big)^p \Big\} \le (2TK_1)^p \Big\{ 1 + E \sup_{0 \le t \le T} \|u_t\|^{2p} \Big\}.$$
Therefore, there exists a positive constant $C_1$, depending on $p$ and $T$, such that
$$E \Big( \int_0^T \|F_t(u)\|^2 \, dt \Big)^p \le C_1 (1 + \|u\|_{p,T}^{2p}). \tag{3.142}$$
Recall that, in Remark (2) following Theorem 2.4, we introduced the notation
$$\|\Sigma_t(u)\|_R^2 = \mathrm{Tr}\, Q_t(u) = \int_D r(x,x) \sigma^2(u(x,t),x,t) \, dx. \tag{3.143}$$
Then, noting (A.1), we can get
$$\|\Sigma_t(u)\|_R^2 \le r_0 \|\Sigma_t(u)\|^2 \le K_1 r_0 (1 + \|u_t\|^2).$$



As before, we can find a constant $C_2 > 0$ such that
$$E \Big[ \int_0^T \|\Sigma_t(u)\|_R^2 \, dt \Big]^p \le C_2 (1 + \|u\|_{p,T}^{2p}). \tag{3.144}$$
Now we let $v = \Gamma u$ in (3.140). Due to the inequalities (3.142) and (3.144), we can apply Theorem 5.6 with $u$ replaced by $v$ to assert that $v(\cdot,t)$ is a continuous and adapted process in $H$, and to obtain from (3.125) the estimate
$$E \sup_{0 \le t \le T} \|v_t\|^{2p} \le C \Big\{ \|h\|^{2p} + E \Big[ \int_0^T \|F_t(u)\|^2 \, dt \Big]^p + E \Big( \int_0^T \|g_t\|^2 \, dt \Big)^p + E \Big[ \int_0^T \|\Sigma_t(u)\|_R^2 \, dt \Big]^p \Big\}. \tag{3.145}$$
By making use of (3.142) and (3.144) in (3.145), the desired bound (3.141) follows with some constant $b_1 > 0$. Therefore the map $\Gamma : X_{p,T} \to X_{p,T}$ is well defined and bounded as asserted. $\square$

Lemma 6.2 Suppose the conditions (A.1) to (A.3) hold true. Then the map $\Gamma : X_{p,T} \to X_{p,T}$ is Lipschitz continuous. Moreover, for any $u, u' \in X_{p,T}$ with $0 < T \le 1$, there exists a positive constant $b_2$ independent of $T \in (0,1]$ such that
$$\|\Gamma u - \Gamma u'\|_{p,T} \le b_2 \sqrt{T} \, \|u - u'\|_{p,T}. \tag{3.146}$$

Proof. By condition (A.2), we have
$$E \Big[ \int_0^T \|F_t(u) - F_t(u')\|^2 \, dt \Big]^p \le K_2^p \, E \Big( \int_0^T \|u_t - u_t'\|^2 \, dt \Big)^p \le (K_2 T)^p \|u - u'\|_{p,T}^{2p}. \tag{3.147}$$
Again, by conditions (A.1) and (A.2), we get
$$\|\Sigma_t(u) - \Sigma_t(u')\|_R^2 \le r_0 \|\Sigma_t(u) - \Sigma_t(u')\|^2 \le K_2 r_0 \|u_t - u_t'\|^2.$$
With the aid of this inequality, it is clear that
$$E \Big[ \int_0^T \|\Sigma_t(u) - \Sigma_t(u')\|_R^2 \, dt \Big]^p \le (K_2 r_0)^p \, E \Big( \int_0^T \|u_t - u_t'\|^2 \, dt \Big)^p \le [K_2 r_0 T]^p \|u - u'\|_{p,T}^{2p}. \tag{3.148}$$
Let $v = \Gamma u$, $v' = \Gamma u'$ and $\tilde v = v - v'$. Then, in view of (3.140), $\tilde v$ satisfies
$$\tilde v_t = \int_0^t G_{t-s} [F_s(u) - F_s(u')] \, ds + \int_0^t G_{t-s} [\Sigma_s(u) - \Sigma_s(u')] \, dW_s.$$
By applying Theorem 5.6 to the above equation, the estimate (3.125) gives rise to
$$E \sup_{0 \le t \le T} \|\tilde v_t\|^{2p} = E \sup_{0 \le t \le T} \|\Gamma_t u - \Gamma_t u'\|^{2p} \le C \, E \Big\{ \Big[ \int_0^T \|F_t(u) - F_t(u')\|^2 \, dt \Big]^p + \Big[ \int_0^T \|\Sigma_t(u) - \Sigma_t(u')\|_R^2 \, dt \Big]^p \Big\}. \tag{3.149}$$
By taking (3.147) and (3.148) into account, we can deduce from (3.149) that
$$\|\Gamma u - \Gamma u'\|_{p,T}^{2p} \le C (1 + r_0^p)(K_2 T)^p \|u - u'\|_{p,T}^{2p}, \tag{3.150}$$
which implies the inequality (3.146) with $b_2 = \sqrt{K_2} \, \{ C(1 + r_0^p) \}^{1/2p}$. $\square$

With the aid of the above lemmas, it is rather easy to prove the existence
theorem for the integral equation (3.137) for a mild solution of (3.135). The
proof follows from the contraction mapping principle (Theorem 2.6).

Theorem 6.3 Let the conditions (A.1) to (A.3) be satisfied and let $h$ be an $\mathcal{F}_0$-measurable random field such that $E \|h\|^{2p} < \infty$ for $p \ge 1$. Then the initial-boundary value problem for the reaction–diffusion equation (3.135) has a unique (mild) solution $u(\cdot,t)$, which is a continuous adapted process in $H$ such that $u \in L^{2p}(\Omega; C([0,T]; H))$, satisfying
$$E \sup_{0 \le t \le T} \|u(\cdot,t)\|^{2p} \le C \Big\{ 1 + E \|h\|^{2p} + E \Big( \int_0^T \|g_t\|^2 \, dt \Big)^p \Big\}, \tag{3.151}$$
for some constant $C > 0$, depending on $p$, $r_0$, and $T$. Moreover, the energy inequality holds:
$$E \|u(\cdot,t)\|^2 \le E \Big\{ \|h\|^2 + 2 \int_0^t (u_s, F_s(u)) \, ds + 2 \int_0^t (u_s, g_s) \, ds + \int_0^t \mathrm{Tr}\, Q_s(u) \, ds \Big\}, \tag{3.152}$$
for $t \in [0,T]$, where $\mathrm{Tr}\, Q_s(u) = \|\Sigma_s(u)\|_R^2$.

Proof. For the first part of the theorem, we need to prove that the integral equation (3.138) has a unique solution in $X_{p,T}$. Now Lemma 6.1 and Lemma 6.2 show that the map $\Gamma : X_{p,T} \to X_{p,T}$ is bounded and Lipschitz continuous. More precisely, it satisfies (3.146), so that
$$\|\Gamma u - \Gamma u'\|_{p,T} \le \frac{1}{2} \|u - u'\|_{p,T}, \tag{3.153}$$
if $T \le T_1$ with $T_1 = 1/(4 b_2^2)$. Therefore $\Gamma$ is a contraction mapping in the Banach space $X_{p,T_1}$ and, by Theorem 2.6, it has a unique fixed point $u$ satisfying $u_t = \Gamma_t u$ for $0 \le t \le T_1$. This means that $u$ is a unique local solution of the integral equation (3.138) over $[0, T_1]$. The solution can be extended over any finite interval $[0,T]$ by continuing the solution to $[T_1, 2T_1]$, $[2T_1, 3T_1]$, and so on.
Now, from (3.145) one can obtain
$$E \sup_{0 \le t \le T} \|u_t\|^{2p} \le C \, E \Big\{ \|h\|^{2p} + \Big[ \int_0^T \|g_t\|^2 \, dt \Big]^p + \Big[ \int_0^T \|F_t(u)\|^2 \, dt \Big]^p + \Big[ \int_0^T \|\Sigma_t(u)\|_R^2 \, dt \Big]^p \Big\}. \tag{3.154}$$
By making use of some inequalities leading to (3.142) and (3.144), we can deduce from (3.154) that there is a constant $C_1 > 0$ such that
$$E \sup_{0 \le t \le T} \|u_t\|^{2p} \le C_1 \, E \Big\{ 1 + \|h\|^{2p} + \Big[ \int_0^T \|g_t\|^2 \, dt \Big]^p + \Big[ \int_0^T \|u_t\|^2 \, dt \Big]^p \Big\}$$
$$\le C_1 \Big\{ 1 + E \|h\|^{2p} + E \Big[ \int_0^T \|g_t\|^2 \, dt \Big]^p + T^{(p-1)} \int_0^T E \sup_{0 \le s \le t} \|u_s\|^{2p} \, dt \Big\},$$
which, by the Gronwall lemma, implies the inequality (3.151). Indeed, for $u \in L^{2p}(\Omega; C([0,T]; H))$, by condition (A.1), it is easy to check that $f_t = F_t(u)$ and $\sigma_t = \Sigma_t(u)$ satisfy the conditions (3.106) and (3.115), respectively, so that, by invoking Lemmas 5.4 and 5.5, the inequality (3.151) follows easily.
To verify the energy inequality, let $P_n$ be the projection operator from $H$ into the linear space $V_n \subset \mathcal{D}(A)$ spanned by the first $n$ eigenfunctions $\{e_1, \cdots, e_n\}$ of $A$, so that, for any $h \in H$,
$$P_n h = h^n = \sum_{k=1}^n (h, e_k) e_k.$$
Let $u^n = P_n u$, $F_s^n = P_n F_s$, and so on. Then, after applying $P_n$ to equation (3.138) and noticing $P_n G_t = G_t P_n$, we get
$$u^n(\cdot,t) = G_t h^n + \int_0^t G_{t-s} g_s^n \, ds + \int_0^t G_{t-s} F_s^n(u) \, ds + \int_0^t G_{t-s} \Sigma_s^n(u) \, dW_s.$$
Since $u^n(\cdot,t) \in \mathcal{D}(A)$, the above equation can be rewritten as
$$u^n(\cdot,t) = h^n + \int_0^t A u_s^n \, ds + \int_0^t F_s^n(u) \, ds + \int_0^t g_s^n \, ds + \int_0^t \Sigma_s^n(u) \, dW_s. \tag{3.155}$$

By applying the Itô formula to $\|u^n(\cdot,t)\|^2$ in finite dimensions, taking the expectation, and recalling the fact $(A u^n, u^n) \le 0$, we obtain
$$E \|u^n(\cdot,t)\|^2 \le \|h^n\|^2 + 2 E \int_0^t (F_s^n(u), u_s^n) \, ds + 2 E \int_0^t (g_s^n, u_s^n) \, ds + E \int_0^t \mathrm{Tr}\, Q_s^n(u) \, ds,$$
which yields the energy inequality (3.152) as $n \to \infty$. $\square$

For example, consider the initial-boundary value problem:
$$\frac{\partial u}{\partial t}(x,t) = (\kappa \Delta - \alpha) u + a \sin u + g(x,t) + \sigma_0(x,t) \, u \, \dot W(x,t),$$
$$u(\cdot,t)|_{\partial D} = 0, \quad u(x,0) = h(x), \tag{3.156}$$
for $x \in D$, $t \in (0,T)$, where $a$ is a constant and $\sigma_0(x,t)$ is a predictable bounded random field such that
$$|\sigma_0(x,t)| \le C, \quad a.s. \text{ for each } (x,t) \in D \times [0,T], \tag{3.157}$$
for some constant $C > 0$. In (3.135), $f(u,x,t) = a \sin u$ and $\sigma(u,x,t) = \sigma_0(x,t) u$. Clearly the conditions (A.1) and (A.2) are satisfied. So the following is a corollary of Theorem 6.3.

Corollary 6.4 Assume that conditions (3.157) and (A.3) hold. Given an $\mathcal{F}_0$-measurable random field $h$ such that $E \|h\|^{2p} < \infty$ for $p \ge 1$, the initial-boundary value problem (3.156) has a unique mild solution $u(\cdot,t)$, which is a continuous adapted process in $H$. Furthermore, $u \in L^{2p}(\Omega; C([0,T]; H))$ such that the inequality (3.151) holds.
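To make the corollary concrete, the sketch below runs a naive finite-difference Euler-Maruyama simulation of a problem of the form (3.156) with $f(u) = a\sin u$, $\sigma(u) = \sigma_0 u$ and $g \equiv 0$ on $D = (0,1)$. The smooth Wiener random field is mimicked by a few independent Brownian modes; every numerical parameter here ($\kappa$, $\alpha$, $a$, $\sigma_0$, grid and noise sizes) is an arbitrary assumed choice for illustration, not prescribed by the text:

```python
import math, random

# Euler-Maruyama sketch (illustration, not from the text) of a problem of
# type (3.156): du = (kappa*Laplacian - alpha)u + a*sin(u) + sigma0*u*dW.
# Assumed parameters throughout; the noise field is a sum of a few modes.

random.seed(0)
kappa, alpha, a, sigma0 = 0.1, 0.5, 1.0, 0.1
m, steps, dt = 21, 200, 0.005
dx = 1.0 / (m + 1)
xs = [(i + 1) * dx for i in range(m)]
u = [math.sin(math.pi * x) for x in xs]        # initial state h(x)

n_modes = 5                                    # spatially smooth noise
for _ in range(steps):
    dW = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n_modes)]
    noise = [sum(dW[k] * math.sin((k + 1) * math.pi * x) / (k + 1)
                 for k in range(n_modes)) for x in xs]
    new = []
    for i in range(m):
        left = u[i - 1] if i > 0 else 0.0      # Dirichlet boundary
        right = u[i + 1] if i < m - 1 else 0.0
        lap = (left - 2 * u[i] + right) / dx ** 2
        drift = kappa * lap - alpha * u[i] + a * math.sin(u[i])
        new.append(u[i] + drift * dt + sigma0 * u[i] * noise[i])
    u = new

print(max(abs(v) for v in u))                  # the path stays bounded
```

The explicit scheme requires $\kappa\,\Delta t/\Delta x^2 \le 1/2$ for stability; with the choices above the ratio is about $0.24$.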

In Theorem 6.3, the global conditions (A.1) and (A.2) on linear growth and Lipschitz continuity can be relaxed to hold only locally. However, without an additional constraint, they may lead to a solution that explodes in finite time. With these in mind, we impose the following conditions:

($A_n$.1) $f(r,x,t)$ and $\sigma(r,x,t)$ are predictable random fields. There exists a constant $C_n > 0$ such that
$$\|f(u,\cdot,t)\|^2 + \|\sigma(u,\cdot,t)\|^2 \le C_n, \quad a.s.$$
for any $n > 0$, $u \in H$ with $\|u\| \le n$, $t \in [0,T]$.

($A_n$.2) There exists a constant $K_n > 0$ such that
$$\|f(u,\cdot,t) - f(v,\cdot,t)\|^2 + \|\sigma(u,\cdot,t) - \sigma(v,\cdot,t)\|^2 \le K_n \|u - v\|^2, \quad a.s.$$
for any $u, v \in H$ with $\|u\| \vee \|v\| \le n$, $t \in [0,T]$.

(A.4) There exists a constant $C_1 > 0$ such that
$$(u, f(u,\cdot,t)) + \frac{1}{2} \mathrm{Tr}\, Q_t(u) \le C_1 (1 + \|u\|^2), \quad a.s.,$$
for any $u \in H$, $t \in [0,T]$.

It can be shown that, under conditions ($A_n$.1), ($A_n$.2), and (A.3), the problem has a local solution. If, in addition, condition (A.4) holds, then the solution exists in any finite time interval. The proof is based on a truncation technique making use of a mollifier $\psi_n$ on $[0,\infty)$ to be defined as follows. For $n > 0$, $\psi_n : [0,\infty) \to [0,1]$ is a $C^\infty$-function such that
$$\psi_n(r) = \begin{cases} 1, & \text{for } 0 \le r \le n, \\ 0, & \text{for } r > 2n. \end{cases} \tag{3.158}$$
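A mollifier with the properties required in (3.158) can be built from the classical bump function $e^{-1/x}$. The construction below is one standard choice (an illustration only, since the text merely requires the stated properties):

```python
import math

# One concrete C-infinity mollifier psi_n with the properties of (3.158).

def bump(x):
    """exp(-1/x) for x > 0, extended smoothly by 0."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

def smooth_step(x):
    """C-infinity, equals 0 for x <= 0 and 1 for x >= 1."""
    return bump(x) / (bump(x) + bump(1.0 - x))

def psi(n, r):
    """psi_n(r): 1 on [0, n], 0 on [2n, inf), smooth in between."""
    return 1.0 - smooth_step((r - n) / float(n))

n = 3
print(psi(n, 0.0), psi(n, n), psi(n, 1.5 * n), psi(n, 2 * n), psi(n, 10 * n))
# prints: 1.0 1.0 0.5 0.0 0.0
```

Since $\psi_n'$ vanishes outside $[n, 2n]$, the truncation $J_n u = \psi_n(\|u\|) u$ used in the proof of Theorem 6.5 below is globally Lipschitz.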

Theorem 6.5 Let the conditions ($A_n$.1), ($A_n$.2), and (A.3) be satisfied and let $h$ be an $\mathcal{F}_0$-measurable random field such that $E \|h\|^2 < \infty$. Then the initial-boundary value problem for the reaction–diffusion equation (3.135) has a unique local solution $u(\cdot,t)$, which is an adapted, continuous process in $H$. If, in addition, condition (A.4) holds, then the solution exists for $t \in [0,T]$ with any $T > 0$, and $u \in L^2(\Omega; C([0,T]; H))$ satisfies
$$E \sup_{0 \le t \le T} \|u(\cdot,t)\|^2 \le C \{ 1 + E \|h\|^2 \}, \tag{3.159}$$
for some constant $C > 0$, depending on $T$.

Proof. Instead of the equation (3.135), consider the truncated system:
$$\frac{\partial u}{\partial t}(x,t) = (\kappa \Delta - \alpha) u + f_n(u,x,t) + g(x,t) + \sigma_n(u,x,t) \dot W(x,t),$$
$$B u|_{\partial D} = 0, \quad u(x,0) = h(x), \tag{3.160}$$
where $f_n(u,x,t) = f(J_n u, x, t)$ and $\sigma_n(u,x,t) = \sigma(J_n u, x, t)$ with $J_n u = \psi_n(\|u\|) u$. Then the conditions ($A_n$.1) and ($A_n$.2) imply that $f_n$ and $\sigma_n$ satisfy the global conditions (A.1) and (A.2) for a fixed $n$. For instance, we will show that ($A_n$.2) implies (A.2). Without loss of generality, let $\|u\| > \|v\|$. Then
$$\|f_n(u,\cdot,t) - f_n(v,\cdot,t)\|^2 + \|\sigma_n(u,\cdot,t) - \sigma_n(v,\cdot,t)\|^2 = \|f(J_n u,\cdot,t) - f(J_n v,\cdot,t)\|^2 + \|\sigma(J_n u,\cdot,t) - \sigma(J_n v,\cdot,t)\|^2$$
$$\le K_n \| \psi_n(\|u\|) u - \psi_n(\|v\|) v \|^2 = K_n \| \psi_n(\|u\|)(u - v) + v \, [\psi_n(\|u\|) - \psi_n(\|v\|)] \|^2 \le K_n \{ \|u - v\| + \|v\| \, |\psi_n(\|u\|) - \psi_n(\|v\|)| \}^2.$$

For $r < s$, by the mean-value theorem, there is $\theta \in (r,s)$ such that
$$|\psi_n(r) - \psi_n(s)| \le |\psi_n'(\theta)| \, |r - s| \le \beta_n |r - s|,$$
noticing that the derivative $\psi_n'(\theta) = 0$ for $\theta < n$ or $\theta > 2n$, where we set $\beta_n = \max_{n \le \theta \le 2n} |\psi_n'(\theta)|$; moreover, the factor $\|v\|$ above may be taken to be at most $2n$, since otherwise $\psi_n(\|u\|) = \psi_n(\|v\|) = 0$. Now condition (A.2) follows from the above two inequalities. Hence, by Theorem 6.3, the truncated system has a unique continuous solution $u^n(\cdot,t)$ in $H$ over $[0,T]$. Introduce a stopping time $\tau_n$ defined by
$$\tau_n = \inf \{ t > 0 : \|u^n(\cdot,t)\| > n \}$$
if it exists, and set $\tau_n = T$ otherwise. Then, for $t < \tau_n$, $u_t = u^n(\cdot,t)$ is the solution of the problem (3.135). Since $\tau_n$ is increasing in $n$, let $\tau = \lim_{n \to \infty} \tau_n$ a.s. For $t < \tau$, we have $t < \tau_n$ for some $n > 0$, and define $u_t = u^n(\cdot,t)$. Then $\lim_{t \to \tau} \|u_t\| = \infty$ if $\tau < T$, and hence $u_t$ is a local (maximal) solution. For uniqueness, suppose that there is another solution $\tilde u_t$, $t < \tilde\tau$, for a stopping time $\tilde\tau$. Then $\tilde u_t = u^n(\cdot,t)$ for $t < \tau_n$. It follows that $\tilde u_t = u_t$ for $t < \tau$ and $\tilde\tau = \tau$.
To show the existence of a global solution, by making use of condition (A.4), similar to the energy inequality (3.152), we can obtain
$$E \|u_{t \wedge \tau_n}\|^2 \le \|h\|^2 + E \int_0^{t \wedge \tau_n} \|g_s\|^2 \, ds + E \int_0^{t \wedge \tau_n} \|u_s\|^2 \, ds + C_1 E \int_0^{t \wedge \tau_n} (1 + \|u_s\|^2) \, ds$$
$$\le \|h\|^2 + C_1 T + E \int_0^T \|g_s\|^2 \, ds + (C_1 + 1) E \int_0^t \|u_{s \wedge \tau_n}\|^2 \, ds,$$
which, by Gronwall's inequality, yields the following bound:
$$E \|u_{T \wedge \tau_n}\|^2 \le C_T, \tag{3.161}$$
for some constant $C_T > 0$ independent of $n$. On the other hand, we have
$$E \|u_{T \wedge \tau_n}\|^2 \ge E \{ I(\tau_n \le T) \|u_{T \wedge \tau_n}\|^2 \} \ge n^2 P \{ \tau_n \le T \}, \tag{3.162}$$
where $I(\cdot)$ denotes the indicator function. In view of (3.161) and (3.162), we obtain
$$P \{ \tau_n \le T \} \le \frac{C_T}{n^2},$$
so that, by the Borel–Cantelli lemma,
$$P \{ \tau > T \} = 1,$$
for any $T > 0$. Hence $u(\cdot,t) = \lim_{n \to \infty} u^n(\cdot,t)$ is a global solution as claimed. $\square$

As an example, consider the stochastic reaction–diffusion equation in $D = (0,1)$:
\[
\begin{aligned}
&\frac{\partial u}{\partial t} = \nu\,\frac{\partial^2 u}{\partial x^2} - \sigma u + \lambda\,\phi(\|u\|)\,u + \sigma_0\,u\,\frac{\partial W}{\partial t}(x,t), \\
&u(0,t) = u(1,t) = 0, \quad 0 < t < T, \\
&u(x,0) = h(x), \quad 0 < x < 1,
\end{aligned} \tag{3.163}
\]
where $\nu$, $\sigma$, $\sigma_0$ are given positive constants, $\lambda \in \mathbb{R}$, $\phi: [0,\infty) \to [0,\infty)$ is a given continuously differentiable function, $h \in H = L^2(0,1)$ with norm $\|\cdot\|$, and $W(x,t)$ is a Wiener random field with covariance function $r(x,y)$ bounded by $r_0$. Then, referring to (3.135), $f(u,x,t) = \lambda\,\phi(\|u\|)\,u$, $\sigma(u,x,t) = \sigma_0\,u$, and $g(x,t) \equiv 0$. For the existence result, clearly condition (A.3) holds by assumption. To check conditions (A$_n$.1) and (A$_n$.2), since $\sigma$ is linear in $u$, we need only show that $f$ is locally bounded and Lipschitz continuous in $L^2(0,1)$. Since $\phi$ is a $C^1$ function on $[0,\infty)$, $\phi$ and $\phi'$ are bounded on any finite interval $[0,n]$. It follows that conditions (A$_n$.1), (A$_n$.2), and (A.3) are satisfied. Therefore the problem (3.163) has a unique local solution by Theorem 6.5. If the parameter $\lambda \le 0$, we have
\[
(u, f(u,\cdot,t)) + \tfrac12\,\mathrm{Tr}\,Q_t(u) = \lambda\,\phi(\|u\|)\,(u,u) + \tfrac12\,\sigma_0^2\,\|u\|_R^2 \le \tfrac12\,r_0\,\sigma_0^2\,\|u\|^2,
\]
which shows that condition (A.4) is also met. Thus, for $\lambda \le 0$, the solution exists in any time interval $[0,T]$.
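The dissipative behavior for $\lambda \le 0$ can be observed with a crude numerical experiment. The following is a minimal explicit finite-difference sketch of (3.163); the choices $\phi(r) = r/(1+r)$, the rank-one noise field $W(x,t) = \rho(x)\,w(t)$, and all parameter values are illustrative assumptions, not taken from the text.

```python
import numpy as np

np.random.seed(0)
nu, sig, sigma0, lam = 1.0, 0.5, 0.5, -1.0   # illustrative constants
J, N, T = 64, 1000, 0.1
dx, dt = 1.0 / J, T / N                      # nu*dt/dx^2 ~ 0.41: explicit scheme stable
x = np.linspace(0.0, 1.0, J + 1)

phi = lambda r: r / (1.0 + r)                # a bounded C^1 function on [0, inf)
rho = np.sin(np.pi * x)                      # rank-one noise profile: W(x,t) = rho(x) w(t)

u = np.sin(np.pi * x)                        # initial datum h, compatible with u(0)=u(1)=0
norm0 = np.sqrt(dx * np.sum(u ** 2))
for _ in range(N):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
    nrm = np.sqrt(dx * np.sum(u ** 2))       # the norm entering phi(||u||)
    dW = np.sqrt(dt) * np.random.randn()     # Brownian increment of w(t)
    u = u + dt * (nu * lap - sig * u + lam * phi(nrm) * u) + sigma0 * u * rho * dW
    u[0] = u[-1] = 0.0                       # Dirichlet boundary conditions

normT = np.sqrt(dx * np.sum(u ** 2))
print(norm0, normT)                          # the discrete L2 norm should have decayed
```

With $\lambda \le 0$ every drift term is dissipative here, so the discrete $L^2$ norm decreases along the path; for $\lambda > 0$ and a rapidly growing $\phi$, the same scheme can be used to watch $\|u_t\|$ grow, matching the local-versus-global dichotomy above.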

3.7 Parabolic Equations with Gradient-Dependent Noise

So far we have treated parabolic equations in which the nonlinear terms do not depend on the gradient $\nabla u$ of $u$. Now we consider a nonlinear problem as follows:
\[
\begin{aligned}
&\frac{\partial u}{\partial t} = (\kappa\Delta - \alpha)u + f(u, \nabla u, x, t) + \dot V(u, \nabla u, x, t), \\
&Bu|_{\partial D} = 0, \qquad u(x,0) = h(x),
\end{aligned} \tag{3.164}
\]
for $x \in D$, $t \in (0,T)$, where
\[
\dot V(u, \xi, x, t) = g(x,t) + \sigma(u, \xi, x, t)\,\frac{\partial W}{\partial t}(x,t), \tag{3.165}
\]
$f(u,\xi,x,t)$ and $\sigma(u,\xi,x,t)$ are predictable random fields with parameter $(u,\xi,x) \in \mathbb{R}\times\mathbb{R}^d\times D$, and $g(x,t)$ and $W(x,t)$ are given as before. In contrast
Stochastic Parabolic Equations 89

with the previous problem (3.135), the dependence of $f$ and $\sigma$ on the gradient $\nabla u$ will cause some technical complications. To see this, it is instructive to go over an elementary example. Consider the following simple equation in one space dimension:
\[
\frac{\partial u}{\partial t} = \nu\,\frac{\partial^2 u}{\partial x^2} + \sigma_0\,\frac{\partial u}{\partial x}\,\dot w(t), \tag{3.166}
\]
where $\nu$ and $\sigma_0$ are positive constants, and $w(t)$ is a standard Brownian motion in one dimension. Suppose that the equation is subject to the periodic boundary conditions $u(0,t) = u(2\pi,t)$ and $\partial_x u(0,t) = \partial_x u(2\pi,t)$. The associated eigenfunctions are $\{e_n(x) = \frac{1}{\sqrt{2\pi}}\,e^{nix}\}$ with $i = \sqrt{-1}$, for $n = 0, \pm 1, \pm 2, \dots$. Given $u(x,0) = h(x)$ in $L^2(0,2\pi)$, this problem can be solved easily by the method of eigenfunction (Fourier series) expansion:
\[
u(x,t) = \sum_{n=-\infty}^{\infty} u^n_t\, e_n(x), \tag{3.167}
\]
where
\[
u^n_t = h_n\,\exp\{-n^2(\nu - \tfrac12\sigma_0^2)\,t + n\,i\,\sigma_0\,w(t)\}
\]
with $h_n = (h, e_n)$. It is clear that if
\[
(\nu - \tfrac12\,\sigma_0^2) \ge 0, \tag{3.168}
\]
the solution series converges properly. However, in contrast with the case of gradient-independent noise, if $\nu - \tfrac12\sigma_0^2 < 0$, the problem is ill-posed in the sense of Hadamard [35]. This can be seen by taking $h = \frac{1}{n^2}\,e_n$. Then $\|h\| = \frac{1}{n^2} \to 0$ as $n \to \infty$, but $\|u(\cdot,t)\| = |u^n_t| = \frac{1}{n^2}\exp\{n^2\,|\nu - \tfrac12\sigma_0^2|\,t\} \to \infty$ as $n \to \infty$, for any $t > 0$. Therefore, the solution does not depend continuously on the initial data. This example suggests that, for a general parabolic initial-boundary value problem containing a gradient-dependent noise, well-posedness requires the imposition of a stochastic coercivity condition similar to (3.168). Physically, this means that the noise intensity should not exceed a certain threshold set by the diffusion constant $\nu$. Heuristically, a violation of this condition leads to a pseudo heat equation with a negative diffusion coefficient, for which the initial(-boundary) value problem is known to be ill-posed.
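The threshold (3.168) is easy to exhibit numerically, because the random factor $\exp\{n\,i\,\sigma_0 w(t)\}$ in $u^n_t$ has modulus one, so the amplitude of each mode is deterministic. The short computation below contrasts the two regimes for the Hadamard data $h = e_n/n^2$; the parameter values are illustrative.

```python
import numpy as np

def mode_amplitude(n, t, nu, sigma0):
    # |u_t^n| for h = e_n / n^2; exp(n*i*sigma0*w(t)) has modulus 1,
    # so the modulus of the n-th mode is non-random.
    return (1.0 / n ** 2) * np.exp(-n ** 2 * (nu - 0.5 * sigma0 ** 2) * t)

t = 0.1
ns = np.array([1, 2, 4, 8, 16])

well_posed = mode_amplitude(ns, t, nu=1.0, sigma0=1.0)  # nu - sigma0^2/2 = 0.5 > 0
ill_posed = mode_amplitude(ns, t, nu=0.1, sigma0=1.0)   # nu - sigma0^2/2 = -0.4 < 0

print(well_posed)  # decays in n: small data give small solutions
print(ill_posed)   # grows rapidly in n although ||h|| = 1/n^2 -> 0
```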
Before dealing with the nonlinear problem, we first consider the following linear case:
\[
\begin{aligned}
&\frac{\partial u}{\partial t} = \nabla\cdot(\kappa\nabla u) + a(x,t)\,u + [\,b(x,t)\cdot\nabla_x u\,]\,\frac{\partial W}{\partial t}(x,t), \\
&Bu|_{\partial D} = 0, \qquad u(x,0) = h(x),
\end{aligned} \tag{3.169}
\]
for $x \in D$, $t \in (0,T)$, where $a(x,t)$ and $b(x,t)$ are given predictable random fields with $b = (b_1,\dots,b_d)$. For any $y \in \mathbb{R}^d$, $b\cdot y = \sum_{k=1}^d b_k y_k$ denotes the inner product in $\mathbb{R}^d$. To find a mild solution, we convert the system (3.169) into an integral equation:
\[
u_t = G_t h + \int_0^t G_{t-s}\,a_s u_s\,ds + \int_0^t G_{t-s}\,(b_s\cdot\nabla u_s)\,dW_s. \tag{3.170}
\]

In order to control the gradient noise term, suggested by the estimate (3.125), we introduce a Banach space $Y_T$ equipped with the norm
\[
\|u\|_T = \Big\{\,E\,\big[\sup_{0\le t\le T}\|u_t\|^2 + \int_0^T\|u_t\|_1^2\,dt\,\big]\Big\}^{1/2}. \tag{3.171}
\]
Let $\Gamma$ denote a mapping in $Y_T$ defined by
\[
\Gamma_t u = G_t h + \int_0^t G_{t-s}\,a_s u_s\,ds + \int_0^t G_{t-s}\,[\,b_s\cdot\nabla u_s\,]\,dW_s. \tag{3.172}
\]

**Theorem 7.1** Assume that $a(x,t)$ and $b(x,t)$ are predictable random fields, and there exist positive constants $\alpha$ and $\delta$, with $\delta \in (0,1)$, such that
\[
|a(x,t)| \le \alpha, \quad a.s. \ \text{for any } (x,t) \in D\times[0,T],
\]
and the following coercivity condition holds:
\[
\langle Av, v\rangle - \tfrac12\,\|(b_t\cdot\nabla v)\|_R^2 \ge \delta\,\|v\|_1^2, \quad a.s. \tag{3.173}
\]
for any $v \in H^1$, $t \in [0,T]$. Then, given $h \in H$, the linear equation (3.169) has a unique solution $u(\cdot,t)$ as an adapted, continuous process in $H$. Moreover, $u$ belongs to $L^2(\Omega; C([0,T];H)) \cap L^2(\Omega\times[0,T]; H_0^1)$ such that
\[
E\,\big\{\sup_{0\le t\le T}\|u_t\|^2 + \int_0^T\|u_t\|_1^2\,dt\big\} < \infty. \tag{3.174}
\]

**Proof.** The theorem will be proved by a contraction mapping argument. However, the usual approach does not work; it is necessary to introduce an equivalent norm to overcome the difficulty, as we shall see. As before, we consider the map $\Gamma$ given by (3.172) and first show that it is well defined. For $u \in Y_T$, let
\[
v_t = \Gamma_t u = G_t h + \xi_t + \zeta_t, \tag{3.175}
\]
where (with the names $\xi$, $\zeta$ chosen here for the two convolution terms)
\[
\xi_t = \int_0^t G_{t-s}\,a_s u_s\,ds, \qquad \zeta_t = \int_0^t G_{t-s}\,(b_s\cdot\nabla u_s)\,dW_s.
\]

By Lemma 5.1, it is easy to obtain
\[
\|G_\cdot h\|_T^2 \le \tfrac32\,\|h\|^2. \tag{3.176}
\]
Furthermore, we have
\[
\|\xi_t\|^2 = \Big\|\int_0^t G_{t-s}\,a_s u_s\,ds\Big\|^2 \le t\int_0^t\|a_s u_s\|^2\,ds \le \alpha^2\,T\int_0^t\|u_s\|^2\,ds, \tag{3.177}
\]
and
\[
\int_0^T\|\xi_t\|_1^2\,dt = \int_0^T\Big\|\int_0^t G_{t-s}\,a_s u_s\,ds\Big\|_1^2\,dt \le \frac{T}{2}\int_0^T\|a_t u_t\|^2\,dt \le \frac{\alpha^2 T}{2}\int_0^T\|u_t\|^2\,dt. \tag{3.178}
\]
Recall that, for convenience, we sometimes define $\|v\|_1^2 = \langle Av, v\rangle$. The coercivity condition (3.173) can then be rewritten as
\[
\|b_t\cdot\nabla v\|_R^2 \le 2\eta\,\|v\|_1^2, \tag{3.179}
\]
with $\eta = (1-\delta) \in (0,1)$. By means of Lemma 5.2 and (3.179), we can deduce that
\[
E\sup_{0\le t\le T}\|\zeta_t\|^2 = E\sup_{0\le t\le T}\Big\|\int_0^t G_{t-s}\,(b_s\cdot\nabla u_s)\,dW_s\Big\|^2 \le 16\,E\int_0^T\|(b_s\cdot\nabla u_s)\|_R^2\,ds \le 32\,\eta\,E\int_0^T\|u_s\|_1^2\,ds, \tag{3.180}
\]
and
\[
E\int_0^T\|\zeta_t\|_1^2\,dt = E\int_0^T\Big\|\int_0^t G_{t-s}\,(b_s\cdot\nabla u_s)\,dW_s\Big\|_1^2\,dt \le \frac12\,E\int_0^T\|(b_s\cdot\nabla u_s)\|_R^2\,ds \le \eta\,E\int_0^T\|u_s\|_1^2\,ds. \tag{3.181}
\]

From (3.175), we have
\[
\|\Gamma u\|_T^2 \le 3\,\big(\|G_\cdot h\|_T^2 + \|\xi\|_T^2 + \|\zeta\|_T^2\big).
\]
By taking the inequalities (3.176), (3.177), (3.178), (3.180), and (3.181) into account, we can find a constant $C_1(T) > 0$ such that
\[
\|\Gamma u\|_T^2 \le C_1(T)\,\big(\|h\|^2 + \|u\|_T^2\big).
\]
Therefore the linear operator $\Gamma: Y_T \to Y_T$ is well defined and bounded.


It remains to show that $\Gamma$ is a contraction. To this end, for a technical reason to be seen, we need to introduce an equivalent norm in $Y_T$, depending on a parameter $\lambda > 0$, defined as follows:
\[
\|u\|_{\lambda,T} = \Big\{\,E\,\big[\sup_{0\le t\le T}\|u_t\|^2 + \lambda\int_0^T\|u_t\|_1^2\,dt\,\big]\Big\}^{1/2}. \tag{3.182}
\]
Let $u, u' \in Y_T$ and set $w = u - u'$. Then, in view of (3.172), $\rho_t = \Gamma_t u - \Gamma_t u'$ satisfies
\[
\rho_t = \int_0^t G_{t-s}\,a_s w_s\,ds + \int_0^t G_{t-s}\,(b_s\cdot\nabla w_s)\,dW_s. \tag{3.183}
\]
In the meantime,
\[
\|\Gamma u - \Gamma u'\|_{\lambda,T}^2 = E\,\big\{\sup_{0\le t\le T}\|\rho_t\|^2 + \lambda\int_0^T\|\rho_t\|_1^2\,dt\big\}. \tag{3.184}
\]

By making use of (3.183) and the simple inequality $(a+b)^2 \le C_\epsilon\,a^2 + (1+\epsilon)\,b^2$ with $C_\epsilon = (1+\epsilon)/\epsilon$, for any $\epsilon > 0$, we get
\[
\begin{aligned}
E\sup_{0\le t\le T}\|\rho_t\|^2 &= E\sup_{0\le t\le T}\Big\|\int_0^t G_{t-s}\,a_s w_s\,ds + \int_0^t G_{t-s}\,(b_s\cdot\nabla w_s)\,dW_s\Big\|^2 \\
&\le E\sup_{0\le t\le T}\Big\{C_\epsilon\,\Big\|\int_0^t G_{t-s}\,a_s w_s\,ds\Big\|^2 + (1+\epsilon)\,\Big\|\int_0^t G_{t-s}\,(b_s\cdot\nabla w_s)\,dW_s\Big\|^2\Big\},
\end{aligned} \tag{3.185}
\]
and, similarly,
\[
\begin{aligned}
\lambda\,E\int_0^T\|\rho_t\|_1^2\,dt &= \lambda\,E\int_0^T\Big\|\int_0^t G_{t-s}\,a_s w_s\,ds + \int_0^t G_{t-s}\,(b_s\cdot\nabla w_s)\,dW_s\Big\|_1^2\,dt \\
&\le \lambda\,E\Big\{C_\epsilon\int_0^T\Big\|\int_0^t G_{t-s}\,a_s w_s\,ds\Big\|_1^2\,dt + (1+\epsilon)\int_0^T\Big\|\int_0^t G_{t-s}\,(b_s\cdot\nabla w_s)\,dW_s\Big\|_1^2\,dt\Big\}.
\end{aligned} \tag{3.186}
\]

By applying the estimates (3.177), (3.178), (3.180), and (3.181) to (3.185) and (3.186), and making use of (3.184), one can obtain
\[
\|\Gamma u - \Gamma u'\|_{\lambda,T}^2 \le C_\epsilon\,\alpha^2 T^2\,(1+\lambda/2)\;E\sup_{0\le t\le T}\|w_t\|^2 + \big[\,\eta\,(1+\epsilon)(1+32/\lambda)\,\big]\;\lambda\,E\int_0^T\|w_t\|_1^2\,dt. \tag{3.187}
\]
Recall that $C_\epsilon = (1+\epsilon)/\epsilon$ and $0 < \eta < 1$. Choose $\lambda = 32/\epsilon$ and $\epsilon = \big(\sqrt{(1+\eta)/(2\eta)} - 1\big)$ so that
\[
\eta\,(1+\epsilon)(1+32/\lambda) = \eta\,(1+\epsilon)^2 = (1+\eta)/2 < 1. \tag{3.188}
\]
Clearly
\[
C_\epsilon\,\alpha^2 T^2\,(1+\lambda/2) < 1 \tag{3.189}
\]
for a sufficiently small $T$. In view of (3.187), (3.188), and (3.189), we have
\[
\|\Gamma u - \Gamma u'\|_{\lambda,T} \le \gamma\,\|u - u'\|_{\lambda,T}
\]
for some $\gamma \in (0,1)$. Therefore, $\Gamma$ is a contraction in $Y_T$ for a small $T$. By Theorem 2.6, this implies the existence of a unique local solution in $Y_T$, which can be continued to any finite time interval $[0,T]$ as mentioned before. $\square$
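The structure of the argument, iterating the map $\Gamma$ and letting the contraction estimate drive the error to zero, is the classical Picard scheme. As a purely illustrative deterministic stand-in for (3.172), the following computes the fixed point of $\Gamma u(t) = h + a\int_0^t u(s)\,ds$ on a grid; all numbers are assumptions made for the demonstration.

```python
import numpy as np

a, h, T, N = -2.0, 1.0, 0.5, 1000
t = np.linspace(0.0, T, N + 1)
dt = T / N

def Gamma(u):
    # trapezoidal quadrature of the Volterra term: (Gamma u)(t) = h + a * int_0^t u ds
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * dt)))
    return h + a * integral

exact = h * np.exp(a * t)          # fixed point of the continuous map
u = np.full(N + 1, h)              # initial guess
errors = []
for _ in range(30):
    u = Gamma(u)
    errors.append(np.max(np.abs(u - exact)))

print(errors[0], errors[-1])       # the iteration error decays to quadrature level
```

The same idea carries over to (3.172): the weighted norm (3.182) plays the role of a problem-adapted metric in which the iteration map becomes a strict contraction.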

Now let us return to the nonlinear problem (3.164). Assume that the following conditions are satisfied:

(B.1) For $r \in \mathbb{R}$, $y \in \mathbb{R}^d$, $x \in D$, and $t \in [0,T]$, $f(r,y,x,t)$ and $\sigma(r,y,x,t)$ are predictable, continuous random fields. There exists a constant $\beta > 0$ such that
\[
\|f(v,\nabla v,\cdot,t)\|^2 + \|\sigma(v,\nabla v,\cdot,t)\|_R^2 \le \beta\,(1 + \|v\|^2 + \|v\|_1^2), \quad a.s.
\]
for any $v \in H^1$, $t \in [0,T]$.

(B.2) There exist positive constants $\beta$, $\beta_1$, and $\beta_2$ with $(\beta_1+\beta_2) < 1$ such that
\[
\|f(v,\nabla v,\cdot,t) - f(v',\nabla v',\cdot,t)\|^2 \le \beta\,\|v-v'\|^2 + \beta_1\,\|v-v'\|_1^2, \quad a.s.
\]
and
\[
\|\sigma(v,\nabla v,\cdot,t) - \sigma(v',\nabla v',\cdot,t)\|_R^2 \le \beta\,\|v-v'\|^2 + \beta_2\,\|v-v'\|_1^2, \quad a.s.
\]
for any $v, v' \in H^1$, $t \in [0,T]$.

(B.3) Let $g(\cdot,t)$ be a predictable $H$-valued process such that
\[
E\int_0^T\|g_t\|^2\,dt < \infty,
\]
and let $W(x,t)$ be an $R$-Wiener random field as depicted in condition (A.3).

Introduce the operator $\Gamma$ in $Y_T$ associated with the problem (3.164):
\[
\Gamma_t u = G_t h + \int_0^t G_{t-s}\,F_s(u_s,\nabla u_s)\,ds + \int_0^t G_{t-s}\,g_s\,ds + \int_0^t G_{t-s}\,\Sigma_s(u_s,\nabla u_s)\,dW_s. \tag{3.190}
\]

Under conditions (B.1) to (B.3), the map $\Gamma: Y_T \to Y_T$ is well defined, and it is a contraction mapping, as will be shown in the following existence theorem. Since the proof is similar to the linear case, it will only be sketched.

**Theorem 7.2** Suppose that conditions (B.1) to (B.3) are satisfied. Then, given $h \in H$, the nonlinear problem (3.164) has a unique solution $u(\cdot,t)$ as an adapted, continuous process in $H$. Moreover, $u$ belongs to $L^2(\Omega; C([0,T];H)) \cap L^2(\Omega\times[0,T]; H_0^1)$ such that
\[
E\,\big\{\sup_{0\le t\le T}\|u_t\|^2 + \int_0^T\|u_t\|_1^2\,dt\big\} < \infty. \tag{3.191}
\]

**Proof.** By means of Lemma 5.1 and condition (B.1), we have
\[
\begin{aligned}
E\sup_{0\le t\le T}\Big\|\int_0^t G_{t-s}\,F_s(u_s,\nabla u_s)\,ds\Big\|^2 &\le T\,E\int_0^T\|F_s(u_s,\nabla u_s)\|^2\,ds \\
&\le \beta\,T\,E\int_0^T(1+\|u_s\|^2+\|u_s\|_1^2)\,ds \le \beta\,T\,\{\,T + (T+1)\,\|u\|_T^2\,\},
\end{aligned} \tag{3.192}
\]
and
\[
E\int_0^T\Big\|\int_0^t G_{t-s}\,F_s(u_s,\nabla u_s)\,ds\Big\|_1^2\,dt \le \frac{T}{2}\,E\int_0^T\|F_t(u_t,\nabla u_t)\|^2\,dt \le \frac{\beta\,T}{2}\,\{\,T + (T+1)\,\|u\|_T^2\,\}. \tag{3.193}
\]
Similarly, by making use of Lemma 5.2 and condition (B.1), it is easy to check that
\[
E\sup_{0\le t\le T}\Big\|\int_0^t G_{t-s}\,\Sigma_s(u_s,\nabla u_s)\,dW_s\Big\|^2 \le 16\,\beta\,\{\,T + (T+1)\,\|u\|_T^2\,\}, \tag{3.194}
\]
and
\[
E\int_0^T\Big\|\int_0^t G_{t-s}\,\Sigma_s(u_s,\nabla u_s)\,dW_s\Big\|_1^2\,dt \le \frac{\beta}{2}\,\{\,T + (T+1)\,\|u\|_T^2\,\}. \tag{3.195}
\]
The fact that $G_t h \in Y_T$, together with (B.3) and the inequalities (3.192)–(3.195), implies that the map $\Gamma: Y_T \to Y_T$ is bounded.

To show that $\Gamma$ is a contraction operator in $Y_T$, we use an equivalent norm $\|\cdot\|_{\lambda,T}$ as defined by (3.182). For $u, v \in Y_T$,
\[
\|\Gamma u - \Gamma v\|_{\lambda,T}^2 = E\,\big\{\sup_{0\le t\le T}\|\Gamma_t u - \Gamma_t v\|^2 + \lambda\int_0^T\|\Gamma_t u - \Gamma_t v\|_1^2\,dt\big\}. \tag{3.196}
\]

By invoking Lemma 5.1 and Lemma 5.2, similar to the linear case, we have
\[
\begin{aligned}
E\sup_{0\le t\le T}\|\Gamma_t u - \Gamma_t v\|^2 \le\ & C_\epsilon\,T\,E\int_0^T\|F_t(u_t,\nabla u_t) - F_t(v_t,\nabla v_t)\|^2\,dt \\
&+ 16\,(1+\epsilon)\,E\int_0^T\|\Sigma_t(u_t,\nabla u_t) - \Sigma_t(v_t,\nabla v_t)\|_R^2\,dt,
\end{aligned} \tag{3.197}
\]
and
\[
\begin{aligned}
\lambda\,E\int_0^T\|\Gamma_t u - \Gamma_t v\|_1^2\,dt \le\ & \frac{\lambda\,T}{2}\,C_\epsilon\,E\int_0^T\|F_t(u_t,\nabla u_t) - F_t(v_t,\nabla v_t)\|^2\,dt \\
&+ \frac{\lambda}{2}\,(1+\epsilon)\,E\int_0^T\|\Sigma_t(u_t,\nabla u_t) - \Sigma_t(v_t,\nabla v_t)\|_R^2\,dt.
\end{aligned} \tag{3.198}
\]
By making use of (3.197), (3.198), and conditions (B.1)–(B.3), the equation (3.196) yields
\[
\|\Gamma u - \Gamma v\|_{\lambda,T}^2 \le \kappa_1(\epsilon,\lambda,T)\,E\sup_{0\le t\le T}\|u_t - v_t\|^2 + \kappa_2(\epsilon,\lambda,T)\,\lambda\,E\int_0^T\|u_t - v_t\|_1^2\,dt, \tag{3.199}
\]
where
\[
\kappa_1 = \beta\,T\,[\,C_\epsilon T(1+\lambda/2) + (1+\epsilon)(16+\lambda/2)\,], \qquad
\kappa_2 = \frac{1}{\lambda}\,[\,C_\epsilon T(1+\lambda/2)\,\beta_1 + (1+\epsilon)(16+\lambda/2)\,\beta_2\,].
\]
As in the linear case, it is possible to choose $\lambda$ sufficiently large and $\epsilon$, $T$ sufficiently small such that $\gamma = (\kappa_1 \vee \kappa_2) < 1$ (see the following Remark (1)). Then
\[
\|\Gamma u - \Gamma v\|_{\lambda,T}^2 \le \gamma\,\|u - v\|_{\lambda,T}^2,
\]
so that the conclusion of the theorem follows. $\square$

Remarks:
(1) To show the possibility of choosing the parameters $\epsilon$, $\lambda$, and $T$ so as to make $\kappa_2 < 1$, we first let $T < \epsilon$ and recall that $C_\epsilon = (1+\epsilon)/\epsilon$, so that $C_\epsilon T < (1+\epsilon)$. Then $\kappa_2$ can be bounded by
\[
\kappa_2 \le \frac{1}{\lambda}\,(1+\epsilon)(16+\lambda/2)\,(\beta_1+\beta_2).
\]
Since $(\beta_1+\beta_2) < 1$, the above yields $\kappa_2 < 3/4$ if we choose $\lambda = 32(1+\epsilon)/\epsilon$ and $\epsilon = 1/4$. It is evident that $\kappa_1$ can also be made less than $3/4$ by taking $T$ to be sufficiently small.

(2) Similar to Theorem 6.5, conditions (B.1) and (B.2) in the above theorem can be relaxed to hold only locally to obtain a unique local solution, and a global solution can be obtained if a certain energy inequality holds. This will be discussed in Chapter 6 (Theorem 7.5) for a general class of stochastic evolution equations under weaker conditions.

So far, the equations under consideration have been restricted to a single noise term. The previous results can easily be extended to similar equations with multiple noise terms. In lieu of (3.164), consider the generalized problem:
\[
\begin{aligned}
&\frac{\partial u}{\partial t} = \kappa\,\Delta u - \alpha u + f(u,\nabla_x u, x, t) + g(x,t) + \sum_{k=1}^m \sigma_k(u,\nabla_x u, x, t)\,\dot W_k(x,t), \\
&Bu|_{\partial D} = 0, \qquad u(x,0) = h(x),
\end{aligned} \tag{3.200}
\]
for $x \in D$, $t \in (0,T)$. Here, for $k = 1,\dots,m$, $\sigma_k(r,y,x,t)$ is a continuous $\mathcal{F}_t$-adapted random field and $W_k(x,t)$ is a Wiener random field. Moreover, assume the following:
(B.4) There exist positive constants $\beta$ and $\beta_2$, with $\beta_1 + \beta_2 < 1$, such that
\[
\sum_{k=1}^m \|\sigma_k(v,\nabla v,\cdot,t)\|_{R_k}^2 \le \beta\,(1 + \|v\|^2 + \|v\|_1^2), \quad a.s.
\]
and
\[
\sum_{k=1}^m \|\sigma_k(v,\nabla v,\cdot,t) - \sigma_k(v',\nabla v',\cdot,t)\|_{R_k}^2 \le \beta\,\|v-v'\|^2 + \beta_2\,\|v-v'\|_1^2, \quad a.s.
\]
for any $v, v' \in H^1$, $t \in [0,T]$, where $\beta_1$ is the same constant as in (B.2) and $\|\sigma_k(\cdot)\|_{R_k}^2$ is defined similarly as before.

(B.5) $\{W_k(x,t),\ k = 1,\dots,m\}$ is a set of independent Wiener random fields, and the covariance operator $R_k$ for $W_k(\cdot,t)$, with corresponding kernel $r_k(x,y)$, is bounded by $r_0$.

**Corollary 7.3** Let $f(u,\nabla_x u, x, t)$ and $g(x,t)$ be given as in Theorem 7.2. Suppose that conditions (B.4) and (B.5) are satisfied. Then, given $h \in H$, the nonlinear problem (3.200) has a unique solution $u(\cdot,t)$ as a continuous adapted process in $H$. Moreover, $u$ belongs to $L^2(\Omega; C([0,T];H)) \cap L^2(\Omega\times[0,T]; H_0^1)$.

Except for notational complication, the proof of this corollary follows the same arguments as that of Theorem 7.2 and will therefore be omitted here. As an example, consider a model problem for turbulent diffusion with a chemical reaction in $D \subset \mathbb{R}^3$ as follows:
\[
\begin{aligned}
&\frac{\partial u}{\partial t} + \sum_{k=1}^3 v_k(x,t)\,\frac{\partial u}{\partial x_k} = \kappa\,\Delta u - \alpha u + b(u,x,t) + \sum_{k=1}^m \sigma_k(u,\nabla_x u, x, t)\,\dot W_k(x,t), \\
&\frac{\partial u}{\partial n}\Big|_{\partial D} = 0, \qquad u(x,0) = h(x),
\end{aligned} \tag{3.201}
\]
for $x \in D$, $t \in (0,T)$, where $v = (v_1,v_2,v_3)$ denotes the random flow velocity. Suppose that the velocity consists of the mean velocity $g = E\,v$ and a white-noise fluctuation, so that
\[
v_k(x,t) = g_k(x,t) + \sigma_k(x,t)\,\dot W_k(x,t),
\]
for $k = 1,2,3$, where $\sigma_k(x,t)$ is a predictable random field. The reaction term $b(u,x,t)$ is assumed to be non-random. In the notation of equation (3.200), we have
\[
f(u,\nabla_x u, x, t) = b(u,x,t),
\]
and
\[
\sigma_k(\nabla_x u, x, t) = \sigma_k(x,t)\,\frac{\partial u}{\partial x_k}.
\]
Here we impose the following conditions:

(C.1) Let $g_k(x,t)$ and $r_k(x,y)$ be continuous for $k = 1,2,3$, and suppose there exist positive constants $\alpha$ and $r_0$ such that
\[
\max_{1\le k\le 3}\ \sup_{(x,t)\in D_T}|g_k(x,t)| \le \alpha, \qquad \max_{1\le k\le 3}\ \sup_{x\in D} r_k(x,x) \le r_0,
\]
where we let $D_T = D\times[0,T]$.

(C.2) $b(s,x,t)$ is a continuous function for $s \in \mathbb{R}$, $(x,t) \in D_T$, and there exist positive constants $c_1$ and $c_2$ such that
\[
\|b(u,\cdot,t)\|^2 \le c_1\,(1 + \|u\|^2),
\]
and
\[
\|b(u,\cdot,t) - b(u',\cdot,t)\|^2 \le c_2\,\|u-u'\|^2,
\]
for any $u, u' \in H^1$, $t \in [0,T]$.

(C.3) For $k = 1,2,3$, $\sigma_k(x,t)$ is a bounded predictable random field, and there exists $\sigma_0 > 0$ such that
\[
\sup_{(x,t)\in D_T}|\sigma_k(x,t)| \le \sigma_0 \quad a.s., \quad \text{with } \sigma_0^2 < \kappa/r_0.
\]

Under conditions (C.1)–(C.3), it is easy to verify that the conditions of Theorem 7.2 are met. For instance, to check condition (B.4), we have
\[
\sum_{k=1}^m \|\sigma_k(u,\cdot,t)\|_{R_k}^2 = \sum_{k=1}^m \Big\|\sigma_k(\cdot,t)\,\frac{\partial u}{\partial x_k}\Big\|_{R_k}^2
\le r_0\,\sigma_0^2\,\sum_{k=1}^m\Big\|\frac{\partial u}{\partial x_k}\Big\|^2 \le \frac{r_0\,\sigma_0^2}{\kappa}\,\|u\|_1^2,
\]
where we used an equivalent $H^1$-norm defined as $\|v\|_1 = \{\,\kappa\sum_{k=1}^m\|\frac{\partial v}{\partial x_k}\|^2 + \|v\|^2\,\}^{1/2}$. Since $\sigma_k(u,\cdot,t)$ is linear in $u$, we have $\sigma_k(u,\cdot,t) - \sigma_k(u',\cdot,t) = \sigma_k((u-u'),\cdot,t)$, so that
\[
\sum_{k=1}^m \|\sigma_k(u,\cdot,t) - \sigma_k(u',\cdot,t)\|_{R_k}^2 \le \frac{r_0\,\sigma_0^2}{\kappa}\,\|u-u'\|_1^2.
\]
Hence, noticing that $\beta_1 = 0$, condition (B.4) is satisfied provided that $\beta_2 = \dfrac{r_0\,\sigma_0^2}{\kappa} < 1$. By means of Theorem 7.2, we obtain the following result.

**Corollary 7.4** Assume that conditions (C.1), (C.2), and (C.3) are satisfied. Then, given $h \in H$, the problem (3.201) has a unique solution $u(\cdot,t)$ as an adapted, continuous process in $H$. Moreover, $u$ belongs to $L^2(\Omega; C([0,T];H)) \cap L^2(\Omega\times[0,T]; H_0^1)$.

3.8 Nonlinear Parabolic Equations with Lévy-Type Noise

Let $N(t,U)$ be the Poisson random measure of a Borel set $U$ with a finite Lévy measure $\mu$, introduced in Section 1.7. Let $D$ be a bounded domain in $\mathbb{R}^d$ and let $g(x,t,\xi)$ be an $\mathcal{F}_t$-adapted continuous random field for $x \in D$, $t \ge 0$, $\xi \in \mathbb{R}^m$ such that
\[
E\int_D\!\!\int_B\!\!\int_0^T |g(x,t,\xi)|^2\,dt\,\mu(d\xi)\,dx < \infty. \tag{3.202}
\]
For $x \in D$, by Theorem 1-7.1, we can define the stochastic integral with respect to the compensated Poisson measure
\[
M(x,t) = \int_0^t\!\!\int_B g(x,s,\xi)\,\widetilde N(ds,d\xi), \tag{3.203}
\]
which is a martingale with zero mean and
\[
E\,\Big\{\int_0^t\!\!\int_B g(x,s,\xi)\,\widetilde N(ds,d\xi)\Big\}^2 = E\int_0^t\!\!\int_B |g(x,s,\xi)|^2\,\mu(d\xi)\,ds.
\]

To define it as an integral in the Hilbert space $H = L^2(D)$, let $M_t = M(\cdot,t)$ and $g_s(\xi) = g(\cdot,s,\xi)$. Rewrite equation (3.203) as
\[
M_t = \int_0^t\!\!\int_B g_s(\xi)\,\widetilde N(ds,d\xi). \tag{3.204}
\]
Let $\{e_k\}$ be an orthonormal basis for $H$ which, for convenience, is chosen to be the set of eigenfunctions of $A$. For the $n$-term approximation $g^n_s$ of $g_s$ given by
\[
g^n_s(\xi) = \sum_{k=1}^n (g_s(\xi), e_k)\,e_k,
\]
we define
\[
M^n_t = \int_0^t\!\!\int_B g^n_s(\xi)\,\widetilde N(ds,d\xi) = \sum_{k=1}^n \Big\{\int_0^t\!\!\int_B (g_s(\xi), e_k)\,\widetilde N(ds,d\xi)\Big\}\,e_k. \tag{3.205}
\]
Then it is easy to check by Theorem 1-7.1 that, in the above sum, each integral $m^k_t = \int_0^t\!\int_B (g_s(\xi), e_k)\,\widetilde N(ds,d\xi)$ is an $L^2$-martingale with zero mean and
\[
E\,|m^k_t|^2 = E\int_0^t\!\!\int_B (g_s(\xi), e_k)^2\,\mu(d\xi)\,ds.
\]
Therefore, from (3.205), we can get
\[
E\,\|M^n_t\|^2 = \sum_{k=1}^n E\int_0^t\!\!\int_B (g^n_s(\xi), e_k)^2\,\mu(d\xi)\,ds = E\int_0^t\!\!\int_B \|g^n_s(\xi)\|^2\,\mu(d\xi)\,ds,
\]
which shows the isometry between the random variables $M^n_t \in L^2(\Omega, H)$ and the processes $g^n_s \in L^2(\Omega\times(0,t)\times(B,\mu); H)$. Similar to the case of the Itô integral, this implies that the sequence $\{M^n_t\}$ converges to a limit $M_t$ in $L^2(\Omega, H)$. This limit defines the Poisson stochastic integral given by equation (3.204). From the previous results, it can be shown that the following properties of $M_t$ hold.

**Theorem 8.1** Let $g(x,t,\xi)$ be an $\mathcal{F}_t$-adapted continuous random field for $x \in D$, $t > 0$, $\xi \in \mathbb{R}^m$ such that condition (3.202) holds. Then the Poisson stochastic integral $M_t$ defined by (3.204) is an $L^2$-martingale in $H$ such that $E\,M_t = 0$ and, for test functions $\phi, \psi \in H$ and $t \in [0,T]$, we have
\[
E\,(M_t, \phi)(M_t, \psi) = E\int_0^t (K_s\,\phi, \psi)\,ds, \tag{3.206}
\]
where $K_t$ is defined by
\[
(K_t\,\phi, \psi) = \int_D\!\!\int_D \theta(x,y,t)\,\phi(x)\,\psi(y)\,dx\,dy, \tag{3.207}
\]
with $\theta(x,y,t) = \int_B g(x,t,\xi)\,g(y,t,\xi)\,\mu(d\xi)$.

Now let us consider the parabolic Itô equation (3.135) with an additional Poisson noise as follows:
\[
\begin{aligned}
&\frac{\partial u}{\partial t} = (\kappa\Delta - \alpha)u + f(u,x,t) + \dot V(u,x,t) + \dot Z(u,x,t), \\
&Bu|_{\partial D} = 0, \qquad u(x,0) = h(x),
\end{aligned} \tag{3.208}
\]
for $x \in D$, $t \in (0,T)$, where
\[
V(u,x,t) = \int_0^t \sigma(u,x,s)\,W(x,ds)
\]
and
\[
Z(u,x,t) = \int_0^t\!\!\int_B \Phi(u,x,s,\xi)\,\widetilde N(ds,d\xi).
\]
In the above integrals, the random fields $\sigma(r,x,s)$ and $\Phi(r,x,s,\xi)$, for $r \in \mathbb{R}$, $x \in D$, $s \in (0,T)$, and $\xi \in B$, are to be specified later. We shall seek a mild solution by first converting problem (3.208) into the integral equation
\[
\begin{aligned}
u(x,t) =\ &\int_D G(x,y;t)\,h(y)\,dy + \int_0^t\!\!\int_D G(x,y;t-s)\,f(u(y,s),y,s)\,dy\,ds \\
&+ \int_0^t\!\!\int_D G(x,y;t-s)\,\sigma(u(y,s),y,s)\,W(y,ds)\,dy \\
&+ \int_0^t\!\!\int_D\!\!\int_B G(x,y;t-s)\,\Phi(u(y,s),y,s,\xi)\,\widetilde N(ds,d\xi)\,dy,
\end{aligned} \tag{3.209}
\]
which can be written in short as
\[
u_t = G_t h + \int_0^t G_{t-s}\,F_s(u_s)\,ds + \int_0^t G_{t-s}\,\Sigma_s(u_s)\,dW_s + \int_0^t\!\!\int_B G_{t-s}\,\Phi_s(u_s,\xi)\,\widetilde N(ds,d\xi). \tag{3.210}
\]
In the above equations, the convolutional stochastic integrals of Poisson type can be defined similarly to those in the Wiener case. Let $\mathcal{F}_t$ denote the $\sigma$-algebra generated jointly by $W_s$ and $N_s$ for $0 \le s \le t$. Define $\mathcal{M}_T$ as the space of all $\mathcal{F}_t$-adapted processes $\{u_t\}$ in $H$ for $0 \le t \le T$, with norm $\|u\|_T$ given by
\[
\|u\|_T = \Big\{\int_0^T E\,\|u_t\|^2\,dt\Big\}^{1/2} < \infty.
\]
0

To prove an existence theorem, we assume that the following conditions hold:

(D1) The random fields $f(u,x,t)$ and $\sigma(u,x,s)$ are $\mathcal{F}_t$-adapted and satisfy conditions (A1) and (A2) for Theorem 6.3.

(D2) There exists a constant $K_3 > 0$ such that
\[
\sup_{0\le t\le T}\int_B \|\Phi(u,\cdot,t,\xi)\|^2\,\mu(d\xi) \le K_3\,(1 + \|u\|^2) \quad a.s.
\]
for any $u \in H$.

(D3) There exists a constant $K_4 > 0$ such that
\[
\sup_{0\le t\le T}\int_B \|\Phi(u,\cdot,t,\xi) - \Phi(v,\cdot,t,\xi)\|^2\,\mu(d\xi) \le K_4\,\|u-v\|^2 \quad a.s.
\]
for any $u, v \in H$.

Before stating the existence theorem, we first give a lemma, which follows easily from Theorem 8.1.

**Lemma 8.2** Let $M_t = \int_0^t\!\int_B g_s(\xi)\,\widetilde N(ds,d\xi)$ be given by (3.204), where $g_t(\xi) = g(\cdot,t,\xi)$ is an $\mathcal{F}_t$-adapted random field satisfying condition (3.202). Then the following inequality holds:
\[
E\,\Big\|\int_0^t\!\!\int_B G_{t-s}\,g_s(\xi)\,\widetilde N(ds,d\xi)\Big\|^2 \le E\int_0^t\!\!\int_B \|g_s(\xi)\|^2\,\mu(d\xi)\,ds.
\]

**Theorem 8.3** Suppose that conditions (D1) to (D3) are satisfied. Then, for any $h \in H$, the problem (3.208) has a unique mild solution $u$ which satisfies the equation (3.210) and the inequality
\[
\sup_{0\le t\le T} E\,\|u_t\|^2 \le C(T)\,\{1 + \|h\|^2\}, \tag{3.211}
\]
for some $C(T) > 0$.

**Proof.** For simplicity, we shall prove the theorem by setting $\sigma \equiv 0$, so that the equation (3.210) reduces to
\[
u_t = G_t h + \int_0^t G_{t-s}\,F_s(u_s)\,ds + \int_0^t\!\!\int_B G_{t-s}\,\Phi_s(u_s,\xi)\,\widetilde N(ds,d\xi). \tag{3.212}
\]
As seen before, the inclusion of the Wiener integral term does not cause any technical problem. Similar to the proof of Theorem 6.3, the proof will be carried out by applying the fixed-point theorem. To this end, define the mapping $\Gamma: \mathcal{M}_T \to \mathcal{M}_T$ by
\[
\Gamma_t u = G_t h + \int_0^t G_{t-s}\,F_s(u_s)\,ds + \int_0^t\!\!\int_B G_{t-s}\,\Phi_s(u_s,\xi)\,\widetilde N(ds,d\xi). \tag{3.213}
\]
It follows that
\[
\begin{aligned}
E\,\|\Gamma_t u\|^2 &\le 3\,E\,\Big\{\|G_t h\|^2 + \Big\|\int_0^t G_{t-s}\,F_s(u_s)\,ds\Big\|^2 + \Big\|\int_0^t\!\!\int_B G_{t-s}\,\Phi_s(u_s,\xi)\,\widetilde N(ds,d\xi)\Big\|^2\Big\} \\
&\le 3\,\Big\{\|h\|^2 + T\,E\int_0^t\|F_s(u_s)\|^2\,ds + E\int_0^t\!\!\int_B \|\Phi_s(u_s,\xi)\|^2\,\mu(d\xi)\,ds\Big\},
\end{aligned} \tag{3.214}
\]
where use was made of Lemma 5.1 and Lemma 8.2. Now, in view of the boundedness conditions (A1) and (D2), we can deduce from the above inequality that
\[
\|\Gamma u\|_T^2 = \int_0^T E\,\|\Gamma_t u\|^2\,dt \le C_1(T)\,\{1 + \|u\|_T^2\},
\]
for some $C_1(T) > 0$. This shows that the map $\Gamma: \mathcal{M}_T \to \mathcal{M}_T$ is bounded.

To show that the map is a contraction, for $u, v \in \mathcal{M}_T$, consider
\[
E\,\|\Gamma_t u - \Gamma_t v\|^2 \le 2\,E\,\Big\{\Big\|\int_0^t G_{t-s}\,[F_s(u_s) - F_s(v_s)]\,ds\Big\|^2 + \Big\|\int_0^t\!\!\int_B G_{t-s}\,[\Phi_s(u_s,\xi) - \Phi_s(v_s,\xi)]\,\widetilde N(ds,d\xi)\Big\|^2\Big\}.
\]
Similar to the previous estimates, by invoking conditions (A2) and (D3), we can show that the above yields the inequality
\[
E\,\|\Gamma_t u - \Gamma_t v\|^2 \le 2\,(K_3 T + K_4)\,E\int_0^t \|u_s - v_s\|^2\,ds,
\]
which implies
\[
\|\Gamma u - \Gamma v\|_T^2 \le 2\,(K_3 T + K_4)\,T\,\|u - v\|_T^2.
\]
Hence the map $\Gamma$ on $\mathcal{M}_T$ is a contraction for a sufficiently small $T$. The fixed point $u \in \mathcal{M}_T$ gives the mild solution, which can be continued to any $T > 0$. To verify the inequality (3.211), it follows from (3.214) that
\[
E\,\|u_t\|^2 \le 3\,\Big\{\|h\|^2 + T\,E\int_0^t\|F_s(u_s)\|^2\,ds + E\int_0^t\!\!\int_B \|\Phi_s(u_s,\xi)\|^2\,\mu(d\xi)\,ds\Big\}.
\]
By invoking conditions (A1) and (D2), we can then apply Gronwall's inequality to obtain the desired inequality (3.211). $\square$
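The two properties the proof leans on, $E\,M_t = 0$ and the isometry $E\,\|M_t\|^2 = E\int_0^t\!\int_B\|g_s(\xi)\|^2\mu(d\xi)\,ds$, can be checked by Monte Carlo for a scalar compensated Poisson integral. Below, the Lévy measure $\mu = \lambda\cdot\mathrm{Uniform}(0,1)$ with total mass $\lambda$ and the integrand $g(\xi) = \xi$ are illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, T, n_paths = 3.0, 2.0, 100000

int_g = lam * 0.5            # \int_B g dmu   for g(xi) = xi, mu = lam * U(0,1)
int_g2 = lam / 3.0           # \int_B g^2 dmu

counts = rng.poisson(lam * T, size=n_paths)     # number of jumps on [0, T]
M = np.empty(n_paths)
for i, n in enumerate(counts):
    jumps = rng.uniform(0.0, 1.0, size=n)       # marks xi_j distributed as mu/lam
    M[i] = jumps.sum() - T * int_g              # compensation by t * \int g dmu

print(M.mean())              # ~ 0: zero-mean martingale at t = T
print(M.var())               # ~ T * \int g^2 dmu = 2.0: the isometry
```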
4
Stochastic Parabolic Equations in the Whole Space

4.1 Introduction

This chapter is concerned with some parabolic Itô equations in $\mathbb{R}^d$. In Chapter 3 we considered such equations in a bounded domain $D$. The analysis was based mainly on the method of eigenfunction expansion for the associated elliptic boundary-value problem. In $\mathbb{R}^d$ the spectrum of such an elliptic operator may be continuous, and the method of eigenfunction expansion is no longer suitable. For an elliptic operator with constant coefficients, the method of Fourier transform is a natural substitute. Consider the initial-value or Cauchy problem for the heat equation on the real line:
\[
\begin{aligned}
&\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + f(x,t), \quad -\infty < x < \infty,\ t > 0, \\
&u(x,0) = h(x),
\end{aligned} \tag{4.1}
\]
where $f$ and $h$ are given smooth functions. Let $\hat u(\xi,t)$ denote the Fourier transform of $u$ in the space variable, defined by
\[
\hat u(\xi,t) = \sqrt{\frac{1}{2\pi}}\int e^{-i\xi x}\,u(x,t)\,dx
\]
with $i = \sqrt{-1}$, where $\xi \in \mathbb{R}$ is a parameter and the integration is over the real line. By applying the Fourier transform to the heat equation, we obtain a simple ordinary differential equation,
\[
\frac{d\hat u}{dt} = -\xi^2\,\hat u + \hat f(\xi,t), \qquad \hat u(\xi,0) = \hat h(\xi), \tag{4.2}
\]
which can be easily solved to give
\[
\hat u(\xi,t) = \hat h(\xi)\,e^{-\xi^2 t} + \int_0^t e^{-\xi^2(t-s)}\,\hat f(\xi,s)\,ds.
\]

By the inverse Fourier transform, we obtain the solution of the heat equation
(4.1) as follows:
r Z
1
u(x, t) = eix u(, t)d
2
Z Z tZ
= K(x y, t)h(y)dy + K(x y, t s)f (y, s)dyds,
0

where the Greens function K is the well-known Gaussian or heat kernel given
by
1
K(x, t) = exp{x2 /4t}. (4.3)
4t
This shows that, similar to the eigenfunction expansion, the Fourier transform
reduces the partial differential equation (4.1) to an ordinary differential equa-
tion (4.2), which can be analyzed more easily. With proper care, this method
can also be applied to stochastic partial differential equations as one will see.
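Two defining properties of the heat kernel (4.3), unit mass for every $t$ and the semigroup identity $K(\cdot,s)\star K(\cdot,t) = K(\cdot,s+t)$ that underlies the convolution formula above, can be confirmed by direct quadrature. The grid below is an arbitrary illustrative choice.

```python
import numpy as np

def K(x, t):
    # the Gaussian heat kernel (4.3)
    return np.exp(-x ** 2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

x = np.linspace(-30.0, 30.0, 6001)
dx = x[1] - x[0]

mass = K(x, 1.0).sum() * dx                        # should be ~1 for every t

# semigroup (Chapman-Kolmogorov) property via discrete convolution
conv = np.convolve(K(x, 1.0), K(x, 0.5), mode="same") * dx
err = np.max(np.abs(conv - K(x, 1.5)))

print(mass, err)
```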
In the following sections, Section 4.2 is concerned with some basic facts about the Fourier transforms of random fields and a special form of the Itô formula. Similar to the case of a bounded domain, the existence and uniqueness questions for linear and semilinear parabolic Itô equations are treated in Section 4.3. For a linear parabolic equation with a multiplicative noise, a stochastic Feynman–Kac formula will be given in Section 4.4 as a probabilistic representation of its solution. As a model equation, solutions to a parabolic Itô equation representing, say, a mass density are required to be nonnegative on physical grounds. This issue is studied in Section 4.5. Finally, in Section 4.6, the derivation of partial differential equations for the correlation functions of the solution to a certain linear parabolic Itô equation will be provided.

4.2 Preliminaries

Let $H$ be the Hilbert space $L^2(\mathbb{R}^d)$ of real- or complex-valued functions with the inner product
\[
(g,h) = \int g(x)\,\overline{h(x)}\,dx,
\]
where the integration is over $\mathbb{R}^d$ and the over-bar denotes the complex conjugate. As in Chapter 3, we shall use the same notation for the corresponding Sobolev spaces $H^m$ with norm $\|\cdot\|_m$, and so on.

For $g \in H$, let $\hat g(\xi)$ denote the Fourier transform of $g$, defined as follows:
\[
\hat g(\xi) = (\mathcal{F}g)(\xi) = \Big(\frac{1}{2\pi}\Big)^{d/2}\int g(x)\,e^{-i\,\xi\cdot x}\,dx, \qquad \xi \in \mathbb{R}^d, \tag{4.4}
\]
Stochastic Parabolic Equations in the Whole Space 107

where the dot denotes the inner product in $\mathbb{R}^d$. The inverse Fourier transform of $\hat g$ is given by
\[
g(x) = (\mathcal{F}^{-1}\hat g)(x) = \Big(\frac{1}{2\pi}\Big)^{d/2}\int \hat g(\xi)\,e^{i\,\xi\cdot x}\,d\xi, \qquad x \in \mathbb{R}^d. \tag{4.5}
\]
By Plancherel's theorem and Parseval's identity, we have
\[
\|g\|^2 = \|\hat g\|^2; \qquad (g,h) = (\hat g,\hat h), \tag{4.6}
\]
which shows that $\mathcal{F}: H \to H$ is an isometry. For $g \in C_0^m$, define the $H^m$-norm $\|g\|_m$ of $g$ by
\[
\|g\|_m = \|\lambda^{m/2}\,\hat g\| = \Big\{\int \lambda^m(\xi)\,|\hat g(\xi)|^2\,d\xi\Big\}^{1/2}, \tag{4.7}
\]
where
\[
\lambda(\xi) = (1 + |\xi|^2).
\]
For $g \in L^1$ and $h \in L^p$, the convolution $g\star h$ of $g$ and $h$ is defined as
\[
(g\star h)(x) = \int g(x-y)\,h(y)\,dy. \tag{4.8}
\]
Then it satisfies the following property:
\[
(g\star h)^{\wedge} = (h\star g)^{\wedge} = (2\pi)^{d/2}\,\hat g\,\hat h, \tag{4.9}
\]
and Young's inequality holds:
\[
|g\star h|_p \le |g|_1\,|h|_p \tag{4.10}
\]
for $1 \le p \le \infty$, where $|\cdot|_p$ denotes the $L^p$-norm with $|\cdot|_2 = \|\cdot\|$ (see [31], [88] for details).
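Discrete analogues of (4.6) and (4.10) make handy sanity checks: the orthonormal DFT preserves the $\ell^2$ norm, and Young's inequality holds for the circular convolution. The vectors below are arbitrary random data.

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal(1024)
h = rng.standard_normal(1024)

# Plancherel: with norm="ortho" the DFT is unitary, so norms are preserved
plancherel_gap = abs(np.linalg.norm(np.fft.fft(g, norm="ortho")) - np.linalg.norm(g))

# Young's inequality |g*h|_2 <= |g|_1 |h|_2 for the circular convolution,
# computed through the convolution theorem (the discrete version of (4.9))
conv = np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(h)))
lhs = np.linalg.norm(conv)
rhs = np.sum(np.abs(g)) * np.linalg.norm(h)

print(plancherel_gap, lhs <= rhs)
```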

Let $M(\cdot,t)$, $t \in [0,T]$, be a continuous $H$-valued martingale with $M(\cdot,0) = 0$ and the covariation operator
\[
[M]_t = \int_0^t Q_s\,ds,
\]
where the local characteristic operator $Q_t$ has $q(x,y;t)$ as its kernel. Suppose that
\[
E\sup_{0\le t\le T}\|M(\cdot,t)\|^2 \le C\,E\int_0^T \mathrm{Tr}\,Q_s\,ds < \infty. \tag{4.11}
\]
Let $\hat M(\xi,t)$ denote the Fourier transform of $M(x,t)$. Then $\hat M(\xi,t)$ is a continuous martingale in $H$ with local characteristic operator $\hat Q_t$ having kernel $\hat q(\xi,\zeta;t)$, so that
\[
\langle \hat M(\xi,\cdot), \hat M(\zeta,\cdot)\rangle_t = \int_0^t \hat q(\xi,\zeta;s)\,ds.
\]
It is easy to check that
\[
\hat q(\xi,\zeta;t) = \Big(\frac{1}{2\pi}\Big)^{d}\int\!\!\int q(x,y;t)\,e^{-i(\xi\cdot x - \zeta\cdot y)}\,dx\,dy, \tag{4.12}
\]
and $\mathrm{Tr}\,Q_t = \mathrm{Tr}\,\hat Q_t$.

Let $K(x,t)$ be a function on $\mathbb{R}^d\times[0,T]$ such that
\[
\sup_{0\le t\le T}\int |K(x,t)|\,dx \le C_1, \tag{4.13}
\]
for some constant $C_1 > 0$. Consider the stochastic convolution
\[
J(x,t) = \int_0^t\!\!\int K(x-y,t-s)\,M(y,ds)\,dy. \tag{4.14}
\]
To show that it is well defined, let $J^n(x,t)$ denote its $n$-term (finite sum) approximation, so that
\[
J^n(x,t) = \sum_{j=1}^n \int K(x-y,t-t_{j-1})\,[\,M(y,t_j) - M(y,t_{j-1})\,]\,dy,
\]
with $0 = t_0 < t_1 < \dots < t_j < \dots < t_n = t$. Its Fourier transform is given by
\[
\hat J^n(\xi,t) = (2\pi)^{d/2}\sum_{j=1}^n \hat K(\xi, t-t_{j-1})\,[\,\hat M(\xi,t_j) - \hat M(\xi,t_{j-1})\,]. \tag{4.15}
\]
By condition (4.13), $\hat K(\xi,t)$ is bounded in $\mathbb{R}^d\times[0,T]$. From this and condition (4.11), one can show that $\hat J^n(\xi,t)$ converges in $L^2(\Omega; H)$, uniformly in $t$, to the limit
\[
\hat J(\xi,t) = (2\pi)^{d/2}\int_0^t \hat K(\xi,t-s)\,\hat M(\xi,ds). \tag{4.16}
\]
Since the Fourier transform $\mathcal{F}$ is an $L^2$-isometry, we can conclude that $J^n(\cdot,t)$ converges in $L^2(\Omega; H)$ to $J(\cdot,t)$ given by (4.14), and $J(\cdot,t) = \{\mathcal{F}^{-1}\hat J\}(\cdot,t)$.

Let
\[
V(x,t) = V_0(x) + \int_0^t b(x,s)\,ds + M(x,t),
\]
where $V_0 \in H$ and $b(\cdot,t)$ is a continuous adapted process in $H^{-1}$. For any $\phi \in H^1$, consider the following equation:
\[
(V(\cdot,t),\phi) = (V_0(\cdot),\phi) + \int_0^t \langle b(\cdot,s),\phi\rangle\,ds + (M(\cdot,t),\phi).
\]
Define $v(t) = (V(\cdot,t),\phi)$. Then $v(t) - v(0)$ is a real-valued semimartingale with local characteristic $(q_\phi(t); b_\phi(t))$, where $b_\phi(t) = \langle b(\cdot,t),\phi\rangle$ and $q_\phi(t) = (Q_t\phi,\phi)$.

For $\phi_j \in H^1$, let $v_j(t) = (V(\cdot,t),\phi_j)$, $b_j(t) = \langle b(\cdot,t),\phi_j\rangle$, $z_j(t) = (M(\cdot,t),\phi_j)$, and $q_{jk}(t) = (Q_t\phi_j,\phi_k)$, for $j,k = 1,\dots,m$. Let $F(x_1,\dots,x_m)$ be a $C^2$-function on $\mathbb{R}^m$ and set $v^m(t) = (v_1(t),\dots,v_m(t))$. By the usual Itô formula, we obtain
\[
\begin{aligned}
F[v^m(t)] = F[v^m(0)] &+ \sum_{j=1}^m\int_0^t F_j[v^m(s)]\,b_j(s)\,ds + \sum_{j=1}^m\int_0^t F_j[v^m(s)]\,dz_j(s) \\
&+ \frac12\sum_{j,k=1}^m\int_0^t F_{jk}[v^m(s)]\,q_{jk}(s)\,ds,
\end{aligned} \tag{4.17}
\]
where we denote $F_j = \dfrac{\partial F}{\partial x_j}$ and $F_{jk} = \dfrac{\partial^2 F}{\partial x_j\,\partial x_k}$.

4.3 Linear and Semilinear Equations

First let us consider the heat equation (3.59) with $\kappa = \alpha = 1$ and $D = \mathbb{R}^d$:
\[
\begin{aligned}
&\frac{\partial u}{\partial t} = (\Delta - 1)u + f(x,t) + \sigma(x,t)\,\dot W(x,t), \quad t \in (0,T), \\
&u(x,0) = h(x), \quad x \in \mathbb{R}^d,
\end{aligned} \tag{4.18}
\]
where $h \in H$; $f(\cdot,t)$ and $\sigma(\cdot,t)$ are continuous adapted processes in $H$; and $W(x,t)$ is a continuous Wiener random field with covariance function $r(x,y)$ bounded by $r_0$. Assume that
\[
E\int_0^T \{\,\|f(\cdot,s)\|^2 + \|\sigma(\cdot,s)\|^2\,\}\,ds < \infty, \tag{4.19}
\]
which implies that
\[
E\int_0^T\|\sigma(\cdot,s)\|_R^2\,ds = E\int_0^T\!\!\int r(x,x)\,\sigma^2(x,s)\,dx\,ds \le r_0\,E\int_0^T\|\sigma(\cdot,s)\|^2\,ds < \infty.
\]
Rewrite (4.18) as an integral equation:
\[
u(x,t) = h + \int_0^t Au(x,s)\,ds + \int_0^t f(x,s)\,ds + M(x,t), \tag{4.20}
\]
where $A = (\Delta - 1)$ and
\[
M(x,t) = \int_0^t \sigma(x,s)\,W(x,ds).
\]
Applying the Fourier transform to (4.20), we obtain
\[
\hat u(\xi,t) = \hat h(\xi) - \lambda(\xi)\int_0^t \hat u(\xi,s)\,ds + \int_0^t \hat f(\xi,s)\,ds + \hat M(\xi,t), \tag{4.21}
\]
for $\xi \in \mathbb{R}^d$, $t \in (0,T)$, where we recall that $\lambda(\xi) = (1+|\xi|^2)$, and $\hat M(\xi,t)$ is the Fourier transform of $M(x,t)$. For each $\xi$, the above equation has the solution
\[
\hat u(\xi,t) = \hat h(\xi)\,e^{-\lambda(\xi)t} + \int_0^t e^{-\lambda(\xi)(t-s)}\,\hat f(\xi,s)\,ds + \int_0^t e^{-\lambda(\xi)(t-s)}\,\hat M(\xi,ds). \tag{4.22}
\]

By means of the inverse Fourier transform, this yields
\[
u(x,t) = \tilde u(x,t) + v(x,t), \tag{4.23}
\]
where
\[
\tilde u(x,t) = (G_t h)(x) + \int_0^t (G_{t-s}f)(x,s)\,ds \tag{4.24}
\]
and
\[
v(x,t) = \int_0^t (G_{t-s}M)(x,ds). \tag{4.25}
\]
Let $G_t$ denote the Green operator defined by
\[
(G_t h)(x) = \int K(x-y,t)\,h(y)\,dy,
\]
where $K(x,t)$ is the Gaussian kernel given by
\[
K(x,t) = (2\pi)^{-d}\int e^{\,i\,\xi\cdot x - \lambda(\xi)t}\,d\xi = (4\pi t)^{-d/2}\,\exp\Big\{-\frac{|x|^2}{4t} - t\Big\}. \tag{4.26}
\]
Similar to Lemma 3-5.1, by Plancherel's theorem and Parseval's equality for the Fourier transform, the following lemma can be easily verified.

**Lemma 3.1** Let $h \in H$ and let $f(\cdot,t)$ be a continuous predictable process in $H$ such that condition (4.19) is satisfied. Then $\tilde u(\cdot,t)$ is a continuous, adapted process in $H$ such that $\tilde u \in L^2(\Omega; C([0,T];H)) \cap L^2(\Omega\times(0,T); H^1)$. Moreover, the following estimates hold:
\[
\sup_{0\le t\le T}\|G_t h\| \le \|h\|; \qquad \int_0^T\|G_t h\|_1^2\,dt \le \frac12\,\|h\|^2, \tag{4.27}
\]
\[
E\sup_{0\le t\le T}\Big\|\int_0^t G_{t-s}\,f(\cdot,s)\,ds\Big\|^2 \le T\,E\int_0^T\|f(\cdot,t)\|^2\,dt, \tag{4.28}
\]
and
\[
E\int_0^T\Big\|\int_0^t G_{t-s}\,f(\cdot,s)\,ds\Big\|_1^2\,dt \le \frac{T}{2}\,E\int_0^T\|f(\cdot,t)\|^2\,dt. \tag{4.29}
\]

**Proof.** The first inequality in (4.27) follows immediately from Young's inequality (4.10) with $p = 2$. To verify the second inequality in (4.27), we have
\[
\int_0^T\|G_t h\|_1^2\,dt \le \int_0^T\!\!\int \lambda(\xi)\,e^{-2\lambda(\xi)t}\,|\hat h(\xi)|^2\,d\xi\,dt \le \frac12\,\|\hat h\|^2 = \frac12\,\|h\|^2.
\]
Of the remaining inequalities, we will only verify the last one, (4.29). Clearly we have
\[
\Big\|\int_0^t G_{t-s}\,f_s\,ds\Big\|_1^2 \le \Big[\int_0^t\|G_{t-s}f_s\|_1\,ds\Big]^2 \le T\int_0^t\|G_{t-s}f_s\|_1^2\,ds.
\]
Since, in view of (4.7) and (4.9),
\[
\|G_{t-s}f_s\|_1^2 = \|\lambda^{1/2}(\xi)\,e^{-\lambda(\xi)(t-s)}\,\hat f_s\|^2,
\]
we get
\[
E\int_0^T\Big\|\int_0^t G_{t-s}f_s\,ds\Big\|_1^2\,dt \le T\,E\int_0^T\!\!\int_0^t\!\!\int \lambda(\xi)\,e^{-2\lambda(\xi)(t-s)}\,|\hat f(\xi,s)|^2\,d\xi\,ds\,dt \le \frac{T}{2}\,E\int_0^T\|f_t\|^2\,dt,
\]
by interchanging the order of integration and making use of the fact that $\|\hat f_s\| = \|f_s\|$. $\square$

As a counterpart of Lemma 3-5.2, we shall verify the next lemma.

**Lemma 3.2** Let $v(\cdot,t) = \int_0^t G_{t-s}\,M(\cdot,ds)$ be the stochastic integral given by (4.25) with $M(\cdot,ds) = \sigma(\cdot,s)\,W(\cdot,ds)$ such that
\[
E\int_0^T\|\sigma(\cdot,t)\|^2\,dt < \infty. \tag{4.30}
\]
Then $v(\cdot,t)$ is an $H^1$-process with a continuous trajectory in $H$ over $[0,T]$, and the following inequalities hold:
\[
E\,\|v(\cdot,t)\|^2 \le E\int_0^t\|\sigma(\cdot,s)\|_R^2\,ds, \tag{4.31}
\]
\[
E\sup_{0\le t\le T}\|v(\cdot,t)\|^2 \le 16\,E\int_0^T\|\sigma(\cdot,t)\|_R^2\,dt, \tag{4.32}
\]
and
\[
E\int_0^T\|v(\cdot,t)\|_1^2\,dt \le \frac12\,E\int_0^T\|\sigma(\cdot,t)\|_R^2\,dt. \tag{4.33}
\]

**Proof.** Applying the Fourier transform to $v$, we get
\[
\hat v(\xi,t) = \int_0^t e^{-\lambda(\xi)(t-s)}\,\hat M(\xi,ds), \tag{4.34}
\]
which can be shown to be a continuous, adapted process in $H$; hence, so is $v(x,t)$. To show (4.31), it follows from (4.34) that
\[
E\,\|v_t\|^2 = E\,\|\hat v_t\|^2 = E\int_0^t\!\!\int e^{-2\lambda(\xi)(t-s)}\,\hat q(\xi,\xi;s)\,d\xi\,ds \le E\int_0^t \mathrm{Tr}\,\hat Q_s\,ds = E\int_0^t\|\sigma_s\|_R^2\,ds.
\]
By rewriting (4.34) as
\[
\hat v(\xi,t) = \hat M(\xi,t) - \lambda(\xi)\int_0^t e^{-\lambda(\xi)(t-s)}\,\hat M(\xi,s)\,ds,
\]
we can obtain (4.32) in a manner similar to the corresponding inequality in Lemma 3-5.2. As for (4.33), we have
\[
\begin{aligned}
E\int_0^T\|v_t\|_1^2\,dt &= E\int_0^T\!\!\int_0^t\!\!\int \lambda(\xi)\,e^{-2\lambda(\xi)(t-s)}\,\hat q(\xi,\xi,s)\,d\xi\,ds\,dt \\
&= E\int\!\!\int_0^T\!\!\int_s^T \lambda(\xi)\,e^{-2\lambda(\xi)(t-s)}\,\hat q(\xi,\xi,s)\,dt\,ds\,d\xi \\
&\le \frac12\,E\int_0^T \mathrm{Tr}\,\hat Q_s\,ds = \frac12\,E\int_0^T\|\sigma_t\|_R^2\,dt,
\end{aligned}
\]
where we made use of the equality $\mathrm{Tr}\,Q_s = \mathrm{Tr}\,\hat Q_s$ and a change in the order of integration, which can be justified by invoking Fubini's theorem. $\square$

By means of Lemmas 3.1 and 3.2, the following theorem can be proved easily.

Theorem 3.3 Let the condition (4.19) be satisfied. For h H, the solution
u(, t) of the linear problem (4.18) given by (4.23) is an Ft adapted H 1 -valued
process over [0, T ] with a continuous trajectory in H such that
Z T
2
E sup ku(, t)k + E ku(, t)k21 dt
0tT 0
Z T (4.35)
C(T ) { khk2 + E [ kf (, s)k2 + k(, s)k2R ]ds },
0
Stochastic Parabolic Equations in the Whole Space 113

for some constant C(T) > 0. Moreover, the energy equation holds true:
\[
\|u(\cdot,t)\|^2 = \|h\|^2 + 2\int_0^t \langle Au(\cdot,s),\, u(\cdot,s)\rangle\, ds
+ 2\int_0^t (u(\cdot,s),\, f(\cdot,s))\, ds
\]
\[
\qquad + 2\int_0^t (u(\cdot,s),\, M(\cdot,ds)) + \int_0^t \|\sigma(\cdot,s)\|_R^2\, ds. \tag{4.36}
\]

Proof. In view of (4.23), by making use of Lemmas 3.1 and 3.2, similarly to the proof of Theorem 3-5.3, we can show that the solution u has the regularity property \(u\in L^2(\Omega\times[0,T];H^1)\cap L^2(\Omega;C([0,T];H))\), and the inequality (4.35) holds true. By applying the Itô formula to \(|\hat u(\xi,t)|^2\) and noticing (4.21), we get
\[
|\hat u(\xi,t)|^2 = |\hat h(\xi)|^2 - 2\eta(\xi)\int_0^t |\hat u(\xi,s)|^2\, ds
+ 2\,\mathrm{Re}\int_0^t \overline{\hat u}(\xi,s)\, \hat f(\xi,s)\, ds
\]
\[
+ 2\,\mathrm{Re}\int_0^t \overline{\hat u}(\xi,s)\, \hat M(\xi,ds)
+ \int_0^t \hat q(\xi,\xi,s)\, ds,
\]
where the over-bar denotes the complex conjugate and Re means the real part of a complex number. By integrating the above equation with respect to \(\xi\), and making use of Plancherel's theorem and Parseval's equality, it gives rise to the energy equation (4.36).
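The mild-solution construction behind Theorem 3.3 — propagate each Fourier mode with the decay factor \(e^{-\eta(\xi)(t-s)}\) and superpose the forcing and noise contributions — can be mimicked numerically. The sketch below is an illustration only, not taken from the text: it integrates a one-dimensional analogue \(\partial u/\partial t = (\partial^2/\partial x^2 - 1)u + \sigma\dot W\) spectrally on a large periodic box standing in for the whole line; the grid, the spatially smoothed noise model, and all parameter values are hypothetical choices.

```python
import numpy as np

# Spectral integration of du/dt = (d^2/dx^2 - 1)u + sigma*dW/dt on a large
# periodic box standing in for the whole line.  eta(xi) = 1 + xi^2 is the
# Fourier symbol of -(Laplacian - 1), i.e. the decay factor e^{-eta(xi)(t-s)}.
rng = np.random.default_rng(0)
L, n, dt, nsteps, sigma = 40.0, 512, 1e-3, 1000, 0.1
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
eta = 1.0 + xi**2

u_hat = np.fft.fft(np.exp(-x**2))      # initial datum h(x) = exp(-x^2)
decay = np.exp(-eta * dt)              # exact factor for the linear part
for _ in range(nsteps):
    dW = rng.standard_normal(n) * np.sqrt(dt)
    dW_hat = np.fft.fft(dW) * np.exp(-0.5 * xi**2)   # spatially smoothed noise
    u_hat = decay * (u_hat + sigma * dW_hat)

u = np.real(np.fft.ifft(u_hat))
print(float(np.max(np.abs(u))))        # stays bounded, consistent with (4.35)
```

Because the linear part is advanced by its exact exponential factor, the scheme is stable for any step size; only the noise term carries an \(O(dt)\) error.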

Now consider the semilinear parabolic equation in \(\mathbb{R}^d\):
\[
\frac{\partial u}{\partial t} = (\Delta - 1)u + f(u,\nabla u,x,t) + g(x,t) + \sigma(u,\nabla u,x,t)\,\dot W(x,t), \tag{4.37}
\]
\[
u(x,0) = h(x),
\]
for \(x\in\mathbb{R}^d\), \(t\in(0,T)\), where \(g(x,t)\), \(f(u,\zeta,x,t)\) and \(\sigma(u,\zeta,x,t)\) are given predictable random fields, as in the case of a bounded domain D in Section 3.7. Analogous to conditions (B.1)–(B.3), we assume the following:

(C.1) \(f(r,\zeta,x,t)\) and \(\sigma(r,\zeta,x,t)\), for \(r\in\mathbb{R}\); \(\zeta, x\in\mathbb{R}^d\) and \(t\in[0,T]\), are predictable random fields. There exists a constant \(\beta > 0\) such that
\[
\|f(v,\nabla v,\cdot,t)\|^2 + \|\sigma(v,\nabla v,\cdot,t)\|_R^2 \le \beta\,(1 + \|v\|^2 + \|v\|_1^2), \quad a.s.,
\]
for any \(v\in H^1\), \(t\in[0,T]\).

(C.2) There exist positive constants \(\beta\), \(\beta_1\) and \(\beta_2\) with \((\beta_1 + \beta_2) < 1\) such that
\[
\|f(v,\nabla v,\cdot,t) - f(v',\nabla v',\cdot,t)\|^2 \le \beta\,\|v - v'\|^2 + \beta_1\,\|v - v'\|_1^2, \quad a.s.,
\]
and
\[
\|\sigma(v,\nabla v,\cdot,t) - \sigma(v',\nabla v',\cdot,t)\|_R^2 \le \beta\,\|v - v'\|^2 + \beta_2\,\|v - v'\|_1^2, \quad a.s.,
\]
for any \(v, v'\in H^1\), \(t\in[0,T]\).

(C.3) Let \(g(\cdot,t)\) be a predictable H-valued process such that
\[
E\int_0^T \|g(\cdot,t)\|^2\, dt < \infty,
\]
and let \(W(\cdot,t)\) be an R-Wiener random field in H with a bounded covariance function r(x,y).

With the aid of Lemmas 3.1 and 3.2 together with Theorem 3.3, under conditions (C.1)–(C.3), the existence theorem for the problem (4.37) can be proved in a fashion similar to that of Theorem 3-7.2. Therefore we shall simply state the theorem and omit the proof.

Theorem 3.4 Suppose the conditions (C.1), (C.2), and (C.3) are satisfied. Then, given \(h\in H\), the semilinear equation (4.37) has a unique solution \(u(\cdot,t)\) as an adapted, continuous process in H. Moreover, it has the regularity property \(u\in L^2(\Omega;C([0,T];H))\cap L^2(\Omega\times[0,T];H^1)\), and the energy equation holds:
\[
\|u(\cdot,t)\|^2 + 2\int_0^t \|u(\cdot,s)\|_1^2\, ds
= \|h\|^2 + 2\int_0^t (u(\cdot,s),\, g(\cdot,s))\, ds + \int_0^t Tr\, Q_s(u,\nabla u)\, ds
\]
\[
+ 2\int_0^t (u(\cdot,s),\, f[u(\cdot,s),\nabla u(\cdot,s),\cdot,s])\, ds
+ 2\int_0^t (u(\cdot,s),\, M[u(\cdot,s),\nabla u(\cdot,s),\cdot,ds]). \tag{4.38}
\]
Moreover, we have
\[
E\Big\{ \sup_{0\le t\le T}\|u(t)\|^2 + \int_0^T \|u(\cdot,s)\|_1^2\, ds \Big\}
\le C\,\Big\{ \|h\|^2 + E\int_0^T \|g(\cdot,t)\|^2\, dt \Big\}, \tag{4.39}
\]
for some C > 0.


4.4 Feynman–Kac Formula

Let us consider the linear parabolic equation with a multiplicative noise:
\[
\frac{\partial u}{\partial t} = \frac12\,\Delta u + u\,\dot V(x,t) + g(x,t), \quad x\in\mathbb{R}^d,\ t\in(0,T), \tag{4.40}
\]
\[
u(x,0) = h(x),
\]
where \(\dot V(x,t) = \frac{\partial}{\partial t}V(x,t)\) and
\[
V(x,t) = \int_0^t b(x,s)\, ds + N(x,t), \qquad
N(x,t) = \int_0^t \sigma(x,s)\, W(x,ds). \tag{4.41}
\]

In order to obtain a path-wise smooth solution, the random fields \(W(\cdot,t)\), \(g(\cdot,t)\), \(b(\cdot,t)\) and \(\sigma(\cdot,t)\) are required to have certain additional regularity properties, to be specified below. Recall that in Section 2.3 we gave a probabilistic representation of the solution to a linear parabolic equation, where, to avoid introducing the Wiener random field, the noise was assumed to be finite-dimensional. In fact, for a continuous \(C^m\)-semimartingale, under suitable conditions, the probabilistic representation given by Theorem 2-3.2 is also valid. Before stating this result, we must extend the definitions of the stochastic integrals with respect to a semimartingale such as (4.41). By Theorem 3.2.5 in [55], the following lemma holds.

Lemma 4.1 Let \(\Phi(x,t)\) be a \(C^1\)-semimartingale with local characteristic \((\psi(x,y,t);\varphi(x,t))\) for \(x, y\in\mathbb{R}^d\), and let \(\zeta_t\) be a continuous, adapted process in \(\mathbb{R}^d\) such that
\[
\int_0^T \{\,|\varphi(\zeta_s,s)| + \psi(\zeta_s,\zeta_s,s)\,\}\, ds < \infty, \quad a.s.
\]
Then the Itô integral \(\int_0^t \Phi(\zeta_s,ds)\) and the Stratonovich integral \(\int_0^t \Phi(\zeta_s,\circ\,ds)\) can be defined by (2.11) and (2.12), respectively. Moreover, for \(m\ge 1\), the following formula holds:
\[
\int_0^t \Phi(\zeta_s,\circ\,ds) = \int_0^t \Phi(\zeta_s,ds)
+ \frac12\sum_{i=1}^d \Big\langle \int_0^t \frac{\partial\Phi}{\partial x_i}(\zeta_s,ds),\ \zeta_t^i \Big\rangle, \tag{4.42}
\]
where \(\langle\cdot,\cdot\rangle\) denotes the mutual variation.

Theorem 4.2 Suppose that W(x,t) is a \(C^m\)-Wiener random field such that the correlation function r(x,y) belongs to \(C_b^{m,\delta}\). Let \(b(\cdot,t)\), \(\sigma(\cdot,t)\) and \(g(\cdot,t)\) be continuous adapted \(C_b^{m,\delta}\)-processes such that
\[
\int_0^T \|b(\cdot,t)\|_{m+\delta}^2\, dt \le c_1, \qquad \int_0^T \|g(\cdot,t)\|_{m+\delta}^2\, dt \le c_2,
\]
and
\[
\sup_{0\le t\le T} \|\sigma(\cdot,t)\|_{m+1+\delta}^2 \le c_3, \quad a.s.,
\]
for some positive constants \(c_i\), \(i = 1,2,3\). Then, for \(h\in C_b^{m+1,\delta}\) with \(m\ge 3\), the solution \(u(\cdot,t)\), \(0\le t\le T\), to the Cauchy problem (4.40) is a \(C^m\)-semimartingale which has the probabilistic representation
\[
u(x,t) = E_z\Big\{ h[x+z(t)]\,\exp\Big\{\int_0^t V[x+z(t)-z(s),\,ds]\Big\}
\]
\[
+ \int_0^t g[x+z(t)-z(s),\,s]\,\exp\Big\{\int_s^t V[x+z(t)-z(r),\,dr]\Big\}\, ds \Big\}, \tag{4.43}
\]
where \(E_z\) denotes the partial expectation with respect to the standard Brownian motion \(z(t) = (z_1(t),\ldots,z_d(t))\) in \(\mathbb{R}^d\) independent of W(x,t).

Proof. Consider the first-order equation:
\[
\frac{\partial v}{\partial t} = -\sum_{i=1}^d \dot z_i(t)\,\frac{\partial v}{\partial x_i} + v\,\dot V(x,t) + g(x,t), \tag{4.44}
\]
\[
v(x,0) = h(x), \quad x\in\mathbb{R}^d,\ t\in(0,T),
\]
where \(\{z_1,\ldots,z_d\}\) are independent standard Brownian motions as mentioned before. As indicated at the end of Chapter 2, by Theorem 6.2.5 in [55], Theorem 2-3.2 holds when the finite-dimensional noise is replaced by a \(C^m\)-Wiener random field. Here we let \(V_i = z_i(t)\), \(i = 1,\ldots,d\); \(V_{d+1} = V\) and \(V_{d+2} = g\). The solution of the first-order equation (4.44) has the probabilistic representation:
\[
v(x,t) = \Big\{ h[\phi_t(x)]\,\exp\Big\{\int_0^t V[\phi_s(y),\circ\,ds]\Big\}
+ \int_0^t g[\phi_s(y),s]\,\exp\Big\{\int_s^t V[\phi_r(y),\circ\,dr]\Big\}\, ds \Big\}\Big|_{y=\phi_t^{-1}(x)}, \tag{4.45}
\]
where \(\phi_t(x) = x - z(t)\) and \(\phi_t^{-1}(x) = x + z(t)\). By Lemma 4.1,
\[
\int_0^t V[\phi_s(y),\circ\,ds] = \int_0^t V[\phi_s(y),ds]
+ \frac12\sum_{j=1}^d \Big\langle \int_0^t \frac{\partial}{\partial x_j} V[\phi_s(y),ds],\ z_j(t)\Big\rangle
= \int_0^t V[\phi_s(y),ds],
\]
since z(t) is independent of W(x,t). Therefore, (4.45) can be written as
\[
v(x,t) = h[x+z(t)]\,\exp\Big\{\int_0^t V[x+z(t)-z(s),\,ds]\Big\}
+ \int_0^t g[x+z(t)-z(s),\,s]\,\exp\Big\{\int_s^t V[x+z(t)-z(r),\,dr]\Big\}\, ds. \tag{4.46}
\]
By making use of the fact that
\[
E_z\Big\{ -\sum_{i=1}^d \dot z_i(t)\,\frac{\partial v}{\partial x_i} \Big\} = \frac12\,\Delta u,
\]
and taking a partial expectation of the equation (4.46) with respect to the Brownian motion z(t), it yields the equation (4.43). It follows from this observation and (4.46) that
\[
u(x,t) = E_z\{v(x,t)\} = E_z\Big\{ h[x+z(t)]\,\exp\Big\{\int_0^t V[x+z(t)-z(s),\,ds]\Big\}
\]
\[
+ \int_0^t g[x+z(t)-z(s),\,s]\,\exp\Big\{\int_s^t V[x+z(t)-z(r),\,dr]\Big\}\, ds \Big\}
\]
is the solution of (4.40) as claimed.

Remarks: The stochastic Feynman–Kac formula for the equation (4.40) has been obtained by other methods. For instance, the formula can be derived by first approximating the Wiener random field by the finite-dimensional Wiener process
\[
W_t^n = \sum_{k=1}^n \lambda_k^{1/2}\, w_t^k\, \phi_k,
\]
where \(\phi_k\) is the eigenfunction of the covariance operator R associated with the eigenvalue \(\lambda_k\), and \(\{w_t^k\}\) is a sequence of independent copies of standard Brownian motion in one dimension. Then it can be shown that, under a \(C^m\)-regularity condition for W, the corresponding solution \(u^n(x,t)\) has a Feynman–Kac representation which converges to the formula (4.43) in a Sobolev space as \(n\to\infty\). However, as seen from the results given in Section 2.3, the present approach allows us to generalize the probabilistic representation of the solution to a stochastic parabolic equation in which the Laplacian in equation (4.40) is replaced by an elliptic operator A that is the infinitesimal generator of a diffusion process \(y_t\). In this case, the Feynman–Kac formula can be obtained similarly with \(z_t\) replaced by \(y_t\).
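The representation (4.43) also lends itself to direct Monte Carlo evaluation. As a sanity check (an illustration, not from the text), take the degenerate case \(\sigma\equiv 0\), \(g\equiv 0\) and a constant drift \(b\equiv c\), so that \(V(x,t) = ct\) and (4.43) collapses to \(u(x,t) = e^{ct}\,E_z\,h[x+z(t)]\), which is available in closed form for a Gaussian datum h; all numerical values below are hypothetical.

```python
import numpy as np

# Monte Carlo evaluation of the Feynman-Kac formula (4.43) in the degenerate
# case sigma = 0, g = 0, b(x,t) = c: then V[x + z(t) - z(s), ds] = c ds and
# u(x,t) = exp(c*t) * E_z h[x + z(t)].
rng = np.random.default_rng(1)
c, t, x, npaths = 0.7, 1.0, 0.3, 200_000
h = lambda y: np.exp(-y**2)

zt = np.sqrt(t) * rng.standard_normal(npaths)    # z(t) ~ N(0, t)
u_mc = np.exp(c * t) * h(x + zt).mean()

# closed form: E h(x + Z) with Z ~ N(0, t) and h Gaussian
u_exact = np.exp(c * t) * np.exp(-x**2 / (1 + 2 * t)) / np.sqrt(1 + 2 * t)
print(u_mc, u_exact)
```

The two printed values agree to Monte Carlo accuracy; with nonzero b, g and \(\sigma\) the same scheme applies path-by-path, at the cost of simulating the whole Brownian path and the field V along it.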
4.5 Positivity of Solutions

As a physical model, the unknown function u(x,t) often represents the density or concentration of some substance that diffuses in a randomly fluctuating medium. In such instances, given positive data, the solution u must remain positive (non-negative). Due to the presence of noisy coefficients, this fact is far from obvious and needs to be confirmed. For example, consider the linear equation (4.40). In view of the Feynman–Kac formula (4.43), if the data \(h(x)\ge 0\) and \(g(x,t)\ge 0\) a.s., then the solution u(x,t) is clearly positive. In fact, the following theorem holds.

Theorem 5.1 Suppose that the conditions for Theorem 4.2 hold true. If \(g(x,t)\ge 0\) and \(h(x) > 0\) for all \((x,t)\in\mathbb{R}^d\times[0,T]\), then the solution u(x,t) of the equation (4.40) remains strictly positive in \(\mathbb{R}^d\times[0,T]\).

Proof. As mentioned above, the solution is known to be nonnegative, or \(u(x,t)\ge 0\). In view of the formula (4.43), it suffices to show that
\[
\Phi(x,t) = E_z\{\, h[x+z(t)]\,\exp\Psi(x,t)\,\} > 0, \tag{4.47}
\]
where
\[
\Psi(x,t) = \int_0^t V[x+z(t)-z(s),\,ds].
\]
This can be shown by applying the Cauchy–Schwarz inequality as follows:
\[
\{E_z\, h^{1/2}[x+z(t)]\}^2 = \{E_z\, (h[x+z(t)]\,e^{\Psi(x,t)})^{1/2}\, e^{-\Psi(x,t)/2}\}^2
\le E_z\{h[x+z(t)]\,e^{\Psi(x,t)}\}\; E_z\, e^{-\Psi(x,t)},
\]
or, in view of (4.47), the above yields
\[
\Phi(x,t) \ge \frac{\{E_z\, h^{1/2}[x+z(t)]\}^2}{E_z\, e^{-\Psi(x,t)}} > 0,
\]
for any \((x,t)\in\mathbb{R}^d\times[0,T]\).

Now we will show that, under suitable conditions, the positivity of the solution holds true for a class of semilinear parabolic Itô equations of the form:
\[
\frac{\partial u}{\partial t} = (\nu\Delta - \alpha)u + f(u,\nabla u,x,t) + g(x,t) + \sigma(u,\nabla u,x,t)\,\dot W(x,t), \tag{4.48}
\]
\[
u(x,0) = h(x),
\]
for \(x\in\mathbb{R}^d\), \(t\in(0,T)\), where the positive parameters \(\nu\), \(\alpha\) are reintroduced for physical reasons. Before stating the theorem, we need to introduce some auxiliary functions. For \(r\in\mathbb{R}\), let \(\varphi(r)\) denote the negative part of r, or
\[
\varphi(r) = \begin{cases} -r, & r < 0,\\ 0, & r\ge 0,\end{cases} \tag{4.49}
\]
and put \(k(r) = \varphi^2(r)\), so that
\[
k(r) = \begin{cases} r^2, & r < 0,\\ 0, & r\ge 0.\end{cases} \tag{4.50}
\]
For \(\varepsilon > 0\), let \(k_\varepsilon(r)\) be a \(C^2\)-regularization of k(r) defined by
\[
k_\varepsilon(r) = \begin{cases}
r^2 - \varepsilon^2/6, & r < -\varepsilon,\\[2pt]
-\Big(\dfrac{4r^3}{3\varepsilon} + \dfrac{r^4}{2\varepsilon^2}\Big), & -\varepsilon\le r < 0,\\[2pt]
0, & r\ge 0.
\end{cases} \tag{4.51}
\]
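The gluing in (4.51) can be checked numerically: the quartic bridge matches the quadratic branch to second order at \(r = -\varepsilon\), vanishes to second order at \(r = 0\), and \(\sup_r |k_\varepsilon(r) - k(r)| = \varepsilon^2/6\). The short script below is an illustration of these facts, not part of the text.

```python
import numpy as np

# The C^2 regularization k_eps of k(r) = (r^-)^2 from (4.51):
# quadratic for r < -eps, a quartic bridge on [-eps, 0), zero for r >= 0.
def k(r):
    return np.where(r < 0, r**2, 0.0)

def k_eps(r, eps):
    bridge = -(4 * r**3 / (3 * eps) + r**4 / (2 * eps**2))
    return np.where(r < -eps, r**2 - eps**2 / 6,
                    np.where(r < 0, bridge, 0.0))

eps = 0.1
r = np.linspace(-1.0, 1.0, 20001)
gap = np.max(np.abs(k_eps(r, eps) - k(r)))   # uniform distance to k
seam = k_eps(np.array([-eps]), eps)[0]       # value where the two pieces meet
print(gap, eps**2 / 6)                       # k_eps -> k uniformly, rate eps^2/6
print(seam, eps**2 - eps**2 / 6)             # continuity at r = -eps
```

The uniform bound \(\varepsilon^2/6\) is exactly what makes \(k_\varepsilon(u_t)\to k(u_t)\) harmless when passing to the limit in (4.55) below.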

Let u(x,t) denote the solution of the parabolic Itô equation (4.48). By convention we set \(u_t = u(\cdot,t)\), \([f_t(u)](x) = f[u(x,t),\nabla u(x,t),x,t]\), and so on. Also let
\[
M[(u(t),dt)](x) = M[u(x,t),\nabla u(x,t),x,dt] = \sigma[u(x,t),\nabla u(x,t),x,t]\, W(x,dt).
\]
Denote by \(q(s,\zeta,x;\,s',\zeta',x';\,t)\) the local characteristic of the martingale
\[
M(s,\zeta,x,t), \quad \text{for } s, s'\in\mathbb{R};\ \zeta,\zeta',x,x'\in\mathbb{R}^d;\ t\in[0,T],
\]
and set \(q(s,\zeta,x;t) = q(s,\zeta,x;\,s,\zeta,x;\,t)\). Then we have
\[
q(s,\zeta,x,t) = r(x,x)\,\sigma^2(s,\zeta,x,t),
\]
and, for brevity, denote
\[
[q_t(u)](x) = q[u(x,t),\nabla u(x,t),x;t].
\]
Define
\[
\Lambda_\varepsilon(u_t) = (1, k_\varepsilon[u(\cdot,t)]) = \int k_\varepsilon[u(x,t)]\, dx. \tag{4.52}
\]
Apply the Itô formula to \(k_\varepsilon[u(x,t)]\) and then integrate over x to obtain
\[
\Lambda_\varepsilon(u_t) = \Lambda_\varepsilon(h) + \int_0^t \langle (\nu\Delta - \alpha)u_s,\, k_\varepsilon'(u_s)\rangle\, ds
+ \int_0^t (k_\varepsilon'(u_s),\, f_s(u))\, ds + \int_0^t (k_\varepsilon'(u_s),\, g_s)\, ds \tag{4.53}
\]
\[
+ \frac12\int_0^t (k_\varepsilon''(u_s),\, q_s(u))\, ds + \int_0^t (k_\varepsilon'(u_s),\, M(u_s,ds)),
\]
where \(k_\varepsilon'(r)\), \(k_\varepsilon''(r)\) denote the first two derivatives of \(k_\varepsilon\). Since, by an integration by parts,
\[
\int_0^t \langle \Delta u_s,\, k_\varepsilon'(u_s)\rangle\, ds = -\int_0^t (k_\varepsilon''(u_s),\, |\nabla u_s|^2)\, ds,
\]
we have the following lemma:

Lemma 5.2 Let \(u_t = u(\cdot,t)\) denote the solution of the parabolic Itô equation (4.48). Then the following equation holds:
\[
\Lambda_\varepsilon(u_t) = \Lambda_\varepsilon(h) - \nu\int_0^t (k_\varepsilon''(u_s),\, |\nabla u_s|^2)\, ds
+ \int_0^t (k_\varepsilon'(u_s),\, f_s(u) - \alpha u_s + g_s)\, ds \tag{4.54}
\]
\[
+ \frac12\int_0^t (k_\varepsilon''(u_s),\, q_s(u))\, ds + \int_0^t (k_\varepsilon'(u_s),\, M(u_s,ds)).
\]

Theorem 5.3 Suppose that the conditions for Theorem 3.4 hold. In addition, assume the following conditions:

(1) \(h(x)\ge 0\) and \(g(x,t)\ge 0\), a.s., for almost every \(x\in\mathbb{R}^d\) and \(t\in[0,T]\).

(2) There exists a constant \(c_1 > 0\) such that
\[
\frac12\, q(s,\zeta,x;t) - \nu\,|\zeta|^2 \le c_1 s^2,
\]
for all \(s\in\mathbb{R}\); \(\zeta, x\in\mathbb{R}^d\), \(t\in[0,T]\).

(3) For any \(v\in H^1\) and \(t\in[0,T]\),
\[
(\varphi(v),\, f(v,\nabla v,\cdot,t)) \ge 0,
\]
where we set \([\varphi(v)](x) = \varphi[v(x)]\) and \(\varphi(r)\) is given by (4.49).

Then the solution of the semilinear parabolic equation (4.48) remains positive, so that \(u(x,t)\ge 0\), a.s., for almost every \(x\in\mathbb{R}^d\), \(t\in[0,T]\).

Proof. From Lemma 5.2, after taking an expectation of the equation (4.54), it can be written as
\[
E\,\Lambda_\varepsilon(u_t) = \Lambda_\varepsilon(h)
+ E\int_0^t \big(k_\varepsilon''(u_s),\ \tfrac12 q_s(u) - \nu|\nabla u_s|^2\big)\, ds
+ E\int_0^t \big(k_\varepsilon'(u_s),\ f_s(u) - \alpha u_s + g_s\big)\, ds.
\]
By making use of conditions (1) and (2) and, in view of (4.50), the properties \(k_\varepsilon'(r) = 0\) for \(r\ge 0\), \(k_\varepsilon'(r)\le 0\) and \(k_\varepsilon''(r)\ge 0\) for any \(r\in\mathbb{R}\), the above equation yields
\[
E\,\Lambda_\varepsilon(u_t) \le c_1\, E\int_0^t (k_\varepsilon''(u_s),\, |u_s|^2)\, ds
+ E\int_0^t (k_\varepsilon'(u_s),\, f_s(u_s) - \alpha u_s)\, ds. \tag{4.55}
\]
Referring to (4.50) and (4.51), it is clear that, for each \(r\in\mathbb{R}\), we have \(k_\varepsilon(r)\to k(r) = \varphi^2(r)\) and \(k_\varepsilon'(r)\to -2\varphi(r)\le 0\) as \(\varepsilon\to 0\). Also notice that \(k_\varepsilon''(r)\to 2\chi(r)\) and \(0\le k_\varepsilon''(r)\le 3\,\chi(r)\) for any \(\varepsilon > 0\), where \(\chi(r) = 1\) for \(r < 0\) and \(\chi(r) = 0\) for \(r\ge 0\). By taking the limits in equation (4.55) as \(\varepsilon\to 0\), we get
\[
E\,\Lambda(u_t) = E\,\|\varphi(u_t)\|^2
\le 2c_1\, E\int_0^t (\chi(u_s),\, |u_s|^2)\, ds - 2\,E\int_0^t (\varphi(u_s),\, f_s(u_s))\, ds
\]
\[
= 2c_1\, E\int_0^t \|\varphi(u_s)\|^2\, ds - 2\,E\int_0^t (\varphi(u_s),\, f_s(u_s))\, ds.
\]
By making use of condition (3), the above yields
\[
E\,\|\varphi(u_t)\|^2 \le 2c_1\, E\int_0^t \|\varphi(u_s)\|^2\, ds,
\]
which, by means of the Gronwall inequality, implies that
\[
E\,\|\varphi(u_t)\|^2 = E\int \varphi^2[u(x,t)]\, dx = 0, \quad \text{for every } t\in[0,T].
\]
It follows that \(\varphi[u(x,t)] = u^-(x,t) = 0\), a.s. for a.e. \(x\in\mathbb{R}^d\), \(t\in[0,T]\). This proves the positivity of the solution as claimed.

Remarks:
(1) By a similar proof, one can show that this theorem also holds true in a bounded domain \(D\subset\mathbb{R}^d\).

(2) It should be pointed out that this approach, based on the Itô formula applied to a regularized comparison function \(k_\varepsilon\), was introduced in [75] for a special semilinear parabolic Itô equation. Theorem 5.3 is a generalization of that positivity result to a larger class of stochastic parabolic equations.

(3) Notice that condition (2) of the theorem constitutes a stronger form of the coercivity condition, and that condition (3) holds if the function f is positive.

(4) As mentioned before, the theorem holds for multidimensional noise as well, and this can be shown by an obvious modification of the proof. For instance,

suppose that the noise term in (4.48) is replaced by
\[
\sum_{i=1}^d \sigma_i(u,\nabla u,x,t)\,\dot W_i(x,t),
\]
where \(W_i(\cdot,t)\) are Wiener random fields with covariance matrix \([r_{ij}(x,y)]_{d\times d}\). If, in condition (2), we define
\[
q(s,\zeta,x;t) = \sum_{i,j=1}^d r_{ij}(x,x)\,\sigma_i(s,\zeta,x,t)\,\sigma_j(s,\zeta,x,t), \tag{4.56}
\]
then the corresponding solution is positive.

As an example, consider the turbulent diffusion model equation with chemical reaction:
\[
\frac{\partial u}{\partial t} = \nu\Delta u - \alpha u - \gamma u^3 + g(x,t)
- \sum_{i=1}^d \sigma_i(x,t)\,\frac{\partial u}{\partial x_i}\,\dot W_i(x,t), \tag{4.57}
\]
\[
u(x,0) = h(x),
\]
where \(\nu\), \(\alpha\) and \(\gamma\) are positive constants. Referring to equation (4.57), suppose that there is a constant \(q_0 > 0\) such that
\[
q(\zeta,x;t) = \sum_{i,j=1}^d r_{ij}(x,x)\,\sigma_i(x,t)\,\sigma_j(x,t)\,\zeta_i\,\zeta_j \le q_0\,|\zeta|^2, \tag{4.58}
\]
which is independent of s, for all \(\zeta, x\in\mathbb{R}^d\), \(t\in[0,T]\).

Corollary 5.4 Let \(h(x)\ge 0\) and \(g(x,t)\ge 0\), for a.e. \(x\in\mathbb{R}^d\) and \(t\in[0,T]\). If the condition (4.58) holds with \(q_0\le 2\nu\), then the solution of (4.57) remains positive a.s. for any \(t\in[0,T]\), a.e. \(x\in\mathbb{R}^d\).

Proof. By the assumptions and (4.58), condition (1) of Theorem 5.3 is met, and
\[
\frac12\, q(\zeta,x,t) - \nu\,|\zeta|^2 \le \frac12\,(q_0 - 2\nu)\,|\zeta|^2 \le 0,
\]
so that condition (2) is also satisfied. In addition, we have \(f_t(v) = -\gamma v^3\), so that
\[
(\varphi(v),\, f_t(v)) = -\gamma\,(\varphi(v),\, v^3) \ge 0.
\]
Hence, condition (3) is valid, and the conclusion follows from Theorem 5.3.

Physically this means that the turbulent diffusion model (4.57) is feasible when the coercivity condition holds: the concentration u(x,t) will then remain positive, as it should.
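The positivity mechanism of Theorem 5.3 can also be observed in a crude simulation. The sketch below is an illustration only — the explicit scheme, the choice of a single Brownian motion \(w(t)\) driving a noise term linear in u, and all parameter values are hypothetical — but the resulting equation \(du = (\nu u_{xx} - \alpha u - \gamma u^3 + g)\,dt + \sigma u\,dw\) does belong to the class (4.48) and satisfies conditions (1)–(3) of the theorem, and the computed solution indeed stays nonnegative.

```python
import numpy as np

# Explicit Euler-Maruyama for du = (nu*u_xx - alpha*u - gamma*u^3 + g) dt
#                               + sigma*u dw(t)
# on a periodic 1-D grid: a member of the class (4.48) with f(u) = -gamma*u^3
# and noise linear in u, so conditions (1)-(3) of Theorem 5.3 hold.
rng = np.random.default_rng(2)
n, dx, dt, nsteps = 64, 0.05, 1e-4, 2000
nu, alpha, gamma, sigma = 0.1, 0.1, 1.0, 0.5
x = dx * np.arange(n)
u = np.maximum(0.0, np.sin(2 * np.pi * x / (n * dx)))   # h >= 0, touching zero
g = 0.05 * np.ones(n)                                   # g >= 0

for _ in range(nsteps):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    dw = np.sqrt(dt) * rng.standard_normal()            # one Brownian motion
    u = u + dt * (nu * lap - alpha * u - gamma * u**3 + g) + sigma * u * dw

print(u.min())  # remains nonnegative, as Theorem 5.3 predicts
```

With the small step size chosen here the discrete update also preserves nonnegativity exactly: at a zero of u only the nonnegative diffusion and source terms contribute.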
4.6 Correlation Functions of Solutions

Let \(u(\cdot,t)\) be a solution of a parabolic Itô equation. For \(x^1,\ldots,x^m\in\mathbb{R}^d\), \(t\in[0,T]\), a function \(\mu_m(x^1,\ldots,x^m,t)\) is said to be the m-th order correlation function, or the correlation function of order m, for the solution u if \(\mu_m(\cdot,\ldots,\cdot,t)\in H(\mathbb{R}^{md})\) and, for any \(\phi_1,\ldots,\phi_m\in H\),
\[
E\,\{(\phi_1, u(\cdot,t))\cdots(\phi_m, u(\cdot,t))\}
= (\mu_m(\cdot,\ldots,\cdot,t),\ \phi_1\otimes\cdots\otimes\phi_m)_m
\]
\[
= \int\cdots\int \mu_m(x^1,\ldots,x^m,t)\,\phi_1(x^1)\cdots\phi_m(x^m)\, dx^1\cdots dx^m,
\]
where \((\cdot,\cdot)_m\) denotes the inner product in \(H(\mathbb{R}^{md})\) and \(\otimes\) means a tensor product. Similarly, the duality pairing between \(H^{-1}(\mathbb{R}^{md})\) and \(H^1(\mathbb{R}^{md})\) is denoted by \(\langle\cdot,\cdot\rangle_m\). Clearly, \(\mu_m(x^1,\ldots,x^m,t)\) is symmetric in the space variables under permutations. We are interested in the possibility of deriving the correlation equations, or the partial differential equations governing the \(\mu_m\)'s, that can be solved recursively.

In general this is impossible, as can be seen from equation (4.48): formally, if we take an expectation of this equation, it does not lead to a closed equation for \(\mu_1\). From the computational viewpoint, this is a source of difficulty in dealing with nonlinear stochastic equations. However, we will show that, for a certain type of linear parabolic Itô equation, if all correlations up to a certain order exist, then they satisfy a recursive system of partial differential equations. To be specific, consider the linear equation with a multiplicative noise in \(\mathbb{R}^d\):
\[
\frac{\partial u}{\partial t} = \kappa\Delta u + a(x,t)\,u + g(x,t) + u\,\sigma(x,t)\,\dot W(x,t), \tag{4.59}
\]
\[
u(x,0) = h(x),
\]
where the functions a(x,t), \(\sigma(x,t)\), and g(x,t) are assumed to be non-random.

Lemma 6.1 Suppose that \(a, \sigma\in C_b(\mathbb{R}^d\times[0,T])\), \(g\in L^\infty([0,T];H)\), and \(r_0 = \sup_{x\in\mathbb{R}^d} r(x,x) < \infty\). Then, given \(h\in H\), the solution u of equation (4.59) belongs to \(L^p(\Omega\times(0,T);H)\cap L^2(\Omega\times(0,T);H^1)\) for any \(p\ge 2\), and
\[
\sup_{0\le t\le T} E\,\|u_t\|^p \le C_p\,\Big\{ \|h\|^p + \int_0^T \|g_t\|^p\, dt \Big\}, \tag{4.60}
\]
for some constant \(C_p > 0\).

By the regularity assumptions on the data, we can apply Theorem 3.4 to conclude that the equation has a unique solution \(u\in L^p(\Omega\times(0,T);H)\cap L^2(\Omega\times(0,T);H^1)\) for any \(p\ge 2\). Let \(\phi_k\in H^1\) for \(k = 1,2,\ldots\). Since u is a
strong solution, we have
\[
(u(\cdot,t),\, \phi_1) = (h,\, \phi_1) + \int_0^t \langle [\kappa\Delta + a(\cdot,s)]u(\cdot,s) + g(\cdot,s),\ \phi_1\rangle\, ds
+ \int_0^t (\phi_1,\ u(\cdot,s)\,\sigma(\cdot,s)\,W(\cdot,ds)).
\]
By taking an expectation and changing the order of integration, and so on, the above equation yields the equation for the first-order correlation, or the mean solution:
\[
(\mu_1(\cdot,t),\, \phi_1) = (h,\, \phi_1) + \int_0^t \langle [\kappa\Delta + a(\cdot,s)]\mu_1(\cdot,s),\ \phi_1\rangle\, ds
+ \int_0^t (g(\cdot,s),\, \phi_1)\, ds. \tag{4.61}
\]
These operations can be justified and will be repeated in later computations without further mention. Let
\[
v_k(t) = v_k(0) + \int_0^t b_k(s)\, ds + z_k(t), \tag{4.62}
\]
where
\[
v_k(t) = (u(\cdot,t),\, \phi_k), \qquad
b_k(t) = \langle [\kappa\Delta + a(\cdot,t)]u(\cdot,t) + g(\cdot,t),\ \phi_k\rangle,
\]
and \(z_k(t)\) is the martingale term:
\[
z_k(t) = \int_0^t (\phi_k,\ u(\cdot,s)\,\sigma(\cdot,s)\,W(\cdot,ds)).
\]
For any j, k, the local characteristic \(q_{jk}(u,t)\) of \(z_j(t)\) and \(z_k(t)\) is given by
\[
q_{jk}(u,t) = \int\!\!\int q(x,y;t)\,u(x,t)\,u(y,t)\,\phi_j(x)\,\phi_k(y)\, dx\, dy,
\]
with \(q(x,y;t) = r(x,y)\,\sigma(x,t)\,\sigma(y,t)\).


Let \(F(v_1(t), v_2(t)) = v_1(t)\,v_2(t)\). In view of (4.62), by the Itô formula (4.17), we have
\[
v_1(t)v_2(t) = v_1(0)v_2(0) + \int_0^t \{b_1(s)v_2(s) + b_2(s)v_1(s)\}\, ds
+ \int_0^t \{v_2(s)\,dz_1(s) + v_1(s)\,dz_2(s)\} + \int_0^t q_{12}(u,s)\, ds.
\]
In the above equation, after taking an expectation and recalling the definitions of \(v_j(t)\), \(b_k(t)\), and so on, we can obtain the equation for the second-order correlation in the form:
\[
(\mu_t^2,\ \phi_1\otimes\phi_2)_2 = (\mu_0^2,\ \phi_1\otimes\phi_2)_2
+ \int_0^t \langle \kappa[\Delta\otimes I + I\otimes\Delta]\,\mu_s^2,\ \phi_1\otimes\phi_2\rangle_2\, ds
\]
\[
+ \int_0^t \big([\,a_s\otimes I + I\otimes a_s + q(s)\,I\,]\,\mu_s^2,\ \phi_1\otimes\phi_2\big)_2\, ds
+ \int_0^t (g_s\otimes\mu_s^1 + \mu_s^1\otimes g_s,\ \phi_1\otimes\phi_2)_2\, ds, \tag{4.63}
\]
where we write \(\mu_m(\cdot,\ldots,\cdot,t)\) as \(\mu_t^m\) and I is an identity operator. Explicitly, the equation (4.63) reads
\[
\int\!\!\int \mu_2(x^1,x^2,t)\,\phi_1(x^1)\,\phi_2(x^2)\, dx^1 dx^2
= \int\!\!\int \mu_2(x^1,x^2,0)\,\phi_1(x^1)\,\phi_2(x^2)\, dx^1 dx^2
\]
\[
+ \int_0^t\!\!\int\!\!\int \kappa\,(\Delta_1 + \Delta_2)\,\mu_2(x^1,x^2,s)\,\phi_1(x^1)\,\phi_2(x^2)\, dx^1 dx^2\, ds
\]
\[
+ \int_0^t\!\!\int\!\!\int [\,a(x^1,s) + a(x^2,s) + q(x^1,x^2,s)\,]\,\mu_2(x^1,x^2,s)\,\phi_1(x^1)\,\phi_2(x^2)\, dx^1 dx^2\, ds
\]
\[
+ \int_0^t\!\!\int\!\!\int [\,g(x^1,s)\,\mu_1(x^2,s) + g(x^2,s)\,\mu_1(x^1,s)\,]\,\phi_1(x^1)\,\phi_2(x^2)\, dx^1 dx^2\, ds,
\]
which yields a parabolic equation in \(\mathbb{R}^{2d}\) in a distributional sense:
\[
\frac{\partial \mu_2(x^1,x^2,t)}{\partial t} = \kappa\,(\Delta_1 + \Delta_2)\,\mu_2
+ [\,a(x^1,t) + a(x^2,t) + q(x^1,x^2,t)\,]\,\mu_2
+ g(x^1,t)\,\mu_1(x^2,t) + g(x^2,t)\,\mu_1(x^1,t), \tag{4.64}
\]
\[
\mu_2(x^1,x^2,0) = h(x^1)\,h(x^2),
\]
where \(\Delta_j\) means the Laplacian in the variable \(x^j\).
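The structure of (4.64) — the noise enters the second-moment equation only through the extra zero-order coefficient q — is easiest to see in a zero-dimensional analogue. For the scalar SDE \(du = a\,u\,dt + s\,u\,dw\) (equation (4.59) with the spatial variable suppressed; all numerical values below are hypothetical), the moments obey \(\mu_1' = a\mu_1\) and \(\mu_2' = (2a + s^2)\mu_2\), with \(s^2\) playing the role of q; a short Monte Carlo run confirms this:

```python
import numpy as np

# Zero-dimensional analogue of (4.59)/(4.64): du = a*u dt + s*u dw.
# First moment:  mu1' = a*mu1            -> mu1(t) = h*exp(a*t)
# Second moment: mu2' = (2*a + s^2)*mu2  -> mu2(t) = h^2*exp((2*a + s^2)*t),
# the extra s^2 playing the role of the coefficient q in (4.64).
rng = np.random.default_rng(3)
a, s, h, t = -0.5, 0.4, 1.0, 1.0
npaths, nsteps = 400_000, 200
dt = t / nsteps

u = np.full(npaths, h)
for _ in range(nsteps):
    dw = np.sqrt(dt) * rng.standard_normal(npaths)
    u = u * np.exp((a - 0.5 * s**2) * dt + s * dw)   # exact lognormal update

mu1_mc, mu2_mc = u.mean(), (u**2).mean()
mu1_ex, mu2_ex = h * np.exp(a * t), h**2 * np.exp((2 * a + s**2) * t)
print(mu1_mc, mu1_ex)
print(mu2_mc, mu2_ex)
```

The exact lognormal update keeps the simulated moments unbiased, so the only discrepancy is Monte Carlo sampling error.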


In general, let us consider the m-th order correlation \(\mu_m(x^1,\ldots,x^m,t)\), for \(m\ge 1\). We let
\[
F(v_1(t),\ldots,v_m(t)) = v_1(t)\cdots v_m(t).
\]
As in the case m = 2, by applying the Itô formula and taking an expectation, we can obtain the m-th correlation equation as follows:
\[
\frac{\partial \mu_m(x^1,\ldots,x^m,t)}{\partial t}
= \kappa\Big(\sum_{j=1}^m \Delta_j\Big)\mu_m
+ \Big[\sum_{j=1}^m a(x^j,t) + \sum_{1\le j<k\le m} q(x^j,x^k,t)\Big]\mu_m
\]
\[
+ \sum_{j=1}^m g(x^j,t)\,\mu_{m-1}(x^1,\ldots,\hat x^j,\ldots,x^m,t), \tag{4.65}
\]
\[
\mu_m(x^1,\ldots,x^m,0) = h(x^1)\cdots h(x^m),
\]
where \(\hat x^j\) in the argument of a function means that the variable \(x^j\) is deleted. Notice that the sum \(\sum_{j=1}^m \Delta_j\) is just the Laplacian in \(\mathbb{R}^{md}\). Therefore, for any \(N\ge 2\), the N-th moment \(\mu_N\) can be determined by solving the parabolic equation (4.65) recursively for \(m = 1,\ldots,N\). Since a linear deterministic parabolic equation is a special case of its stochastic counterpart, we can apply the existence theorem (Theorem 3.4) to show the existence of solutions to the correlation equations.

Theorem 6.2 Suppose that \(a, \sigma\in C_b(\mathbb{R}^d\times[0,T];\mathbb{R})\), \(g\in L^\infty([0,T];H)\), and \(r_0 = \sup_{x\in\mathbb{R}^d} r(x,x) < \infty\). Then, given \(h\in H\), there exists a unique solution \(\mu_m(\cdot,\ldots,\cdot,t)\) of the correlation equation (4.65) such that, for any \(m\ge 1\),
\[
\mu_m \in C([0,T];H(\mathbb{R}^{md})) \cap L^2((0,T);H^1(\mathbb{R}^{md})).
\]

Proof. The theorem can be easily proved by induction. For m = 1, since \(h\in H(\mathbb{R}^d)\) and \(g\in L^2((0,T);H)\), by Theorem 3.4, there exists a unique solution \(\mu_1\in C([0,T];H(\mathbb{R}^d))\cap L^2((0,T);H^1(\mathbb{R}^d))\).

For m = 2, it is easy to see that \(h\otimes h\in H(\mathbb{R}^{2d})\) and \([g(\cdot,t)\otimes\mu_1(\cdot,t) + \mu_1(\cdot,t)\otimes g(\cdot,t)]\in L^2((0,T);H(\mathbb{R}^{2d}))\), since
\[
\int_0^T \|g(\cdot,t)\otimes\mu_1(\cdot,t) + \mu_1(\cdot,t)\otimes g(\cdot,t)\|^2\, dt
\le 4\int_0^T \|g(\cdot,t)\otimes\mu_1(\cdot,t)\|^2\, dt
\]
\[
= 4\int_0^T\!\!\int\!\!\int [g(x^1,t)\,\mu_1(x^2,t)]^2\, dx^1 dx^2\, dt
= 4\int_0^T \|g(\cdot,t)\|^2\,\|\mu_1(\cdot,t)\|^2\, dt
\le 4\sup_{0\le t\le T}\|g(\cdot,t)\|^2 \int_0^T \|\mu_1(\cdot,t)\|^2\, dt.
\]
Now suppose that the theorem holds for \((m-1)\), so that
\[
\mu_{m-1}\in C([0,T];H(\mathbb{R}^{(m-1)d}))\cap L^2((0,T);H^1(\mathbb{R}^{(m-1)d})).
\]
Then \(\prod_{j=1}^m h(x^j)\) belongs to \(H(\mathbb{R}^{md})\) and, similarly to the case m = 2,
\[
\int_0^T\!\!\int\cdots\int \Big| \sum_{j=1}^m g(x^j,t)\,\mu_{m-1}(x^1,\ldots,\hat x^j,\ldots,x^m,t)\Big|^2\, dx^1\cdots dx^m\, dt
\]
\[
\le C_m \int_0^T\!\!\int\cdots\int |\mu_{m-1}(x^1,\ldots,x^{m-1},t)\,g(x^m,t)|^2\, dx^1\cdots dx^m\, dt
\]
\[
= C_m \int_0^T \|\mu_{m-1}(\cdot,\ldots,\cdot,t)\|^2\,\|g(\cdot,t)\|^2\, dt
\le C_m \sup_{0\le t\le T}\|g(\cdot,t)\|^2 \int_0^T \|\mu_{m-1}(\cdot,\ldots,\cdot,t)\|^2\, dt,
\]
for some constant \(C_m > 0\). It follows that the equation (4.65) has a unique solution \(\mu_m\in C([0,T];H(\mathbb{R}^{md}))\cap L^2((0,T);H^1(\mathbb{R}^{md}))\). Therefore, by the induction principle, the claim is true for any \(m\ge 1\).

Remarks:

(1) Clearly, if instead of \(\mathbb{R}^d\) the domain D is bounded, the corresponding moment functions for the solution are also given by (4.65), subject to suitable homogeneous boundary conditions.

(2) As it stands, the moment functions may not be continuous. According to the theory of parabolic equations [33], if the data h, a, g, and q are sufficiently smooth, the correlation equations will have classical solutions that satisfy the parabolic equations point-wise. If the m-th moment \(E\,[u(x,t)]^m\) exists at x, then it is equal to \(\mu_m(x,\ldots,x,t)\).

(3) The correlation equations can be derived for other types of linear stochastic parabolic equations. An interesting example is provided in the following.

Consider the following turbulent diffusion model equation (compare with (4.57)) in \(\mathbb{R}^d\):
\[
\frac{\partial u}{\partial t} = \kappa\Delta u - \sum_{i=1}^d \dot V_i(x,t)\,\frac{\partial u}{\partial x_i} + g(x,t), \tag{4.66}
\]
\[
u(x,0) = h(x),
\]
where the random velocity \(V_i\) is of the form
\[
V_i(x,t) = \int_0^t v_i(x,s)\, ds + \int_0^t \sigma_i(x,s)\, W_i(x,ds),
\]
in which \(v_i(x,t)\) is the i-th component of the mean velocity, and \(\sigma_i(x,t)\), \(W_i(x,t)\) are given as before. Suppose that \(v_i\) and \(\sigma_i\) are bounded and continuous on \([0,T]\times\mathbb{R}^d\), and the covariance functions \(r_{ij}\) for the Wiener fields \(W_i\) and \(W_j\) are bounded. Then the correlation function \(\mu_m\) for the solution can be derived similarly for any m. For instance, the first- and second-order correlation equations are given by
\[
\frac{\partial \mu_1(x,t)}{\partial t} = \kappa\Delta\mu_1 - \sum_{i=1}^d v_i(x,t)\,\frac{\partial \mu_1}{\partial x_i} + g(x,t), \tag{4.67}
\]
\[
\mu_1(x,0) = h(x),
\]
and
\[
\frac{\partial \mu_2(x^1,x^2,t)}{\partial t} = \kappa\,(\Delta_1 + \Delta_2)\,\mu_2
+ \frac12\sum_{i,j=1}^d q_{ij}(x^1,x^2,t)\,\frac{\partial^2 \mu_2}{\partial x_i^1\,\partial x_j^2}
\]
\[
- \sum_{i=1}^d \Big[\, v_i(x^1,t)\,\frac{\partial \mu_2}{\partial x_i^1} + v_i(x^2,t)\,\frac{\partial \mu_2}{\partial x_i^2}\,\Big]
+ [\, g(x^1,t)\,\mu_1(x^2,t) + g(x^2,t)\,\mu_1(x^1,t)\,], \tag{4.68}
\]
\[
\mu_2(x^1,x^2,0) = h(x^1)\,h(x^2),
\]
where \(q_{ij}(x^1,x^2,t) = r_{ij}(x^1,x^2)\,\sigma_i(x^1,t)\,\sigma_j(x^2,t)\). The equations for higher-order correlation functions can also be derived, but will be omitted here. Notice that extra diffusion and drift terms appear in (4.68) to account for the effects of the turbulent motion of the fluid.
5 Stochastic Hyperbolic Equations

5.1 Introduction
Wave motion and mechanical vibration are two of the most commonly ob-
served physical phenomena. As mathematical models, they are usually de-
scribed by partial differential equations of hyperbolic type. The most well-
known one is the wave equation. In contrast with the heat equation, the first-
order time derivative term is replaced by a second-order one. Excited by a
white noise, a stochastic wave equation takes the form
\[
\frac{\partial^2 u}{\partial t^2} = c^2\,\Delta u + \sigma_0\,\dot W(x,t)
\]
in some domain D, where c is known as the wave speed and \(\sigma_0\) is the noise
intensity parameter. If the domain D is bounded in Rd , the equation can de-
scribe the random vibration of an elastic string for d = 1, an elastic membrane
for d = 2, and a rubbery solid for d = 3. When D = Rd , it may be used to
model the propagation of acoustic or optical waves generated by a random
excitation. For a large-amplitude vibration or wave motion, some nonlinear
effects must be taken into consideration. In this case we should include ap-
propriate nonlinear terms in the wave equation. For instance, it may look
like
\[
\frac{\partial^2 u}{\partial t^2} = c^2\,\Delta u + f(u) + \sigma(u)\,\dot W(x,t),
\]
where f(u) and \(\sigma(u)\) are some nonlinear functions of u. More generally, the
wave motion may be modeled by a linear or nonlinear first-order hyperbolic
system. A special class of stochastic hyperbolic system will be discussed later
on.
In this chapter, we briefly review some basic definitions of hyperbolic equa-
tions in Section 5.2. In the following two sections, we study the existence of
solutions to some linear and semilinear stochastic wave equations in a bounded
domain. As in the parabolic case, we shall employ the Green's function approach based on the method of eigenfunction expansion. Then, in Section 5.5,
this approach coupled with the method of Fourier transform is used to analyze
stochastic wave equations in an unbounded domain. Finally, in Section 5.6,
a similar approach will be adopted to study the solutions of a class of linear
and semilinear stochastic hyperbolic systems.


5.2 Preliminaries
Consider a second-order linear partial differential equation with constant coefficients:
\[
\frac{\partial^2 u}{\partial t^2} = \sum_{j,k=1}^d a_{jk}\,\frac{\partial^2 u}{\partial x_j\,\partial x_k}
+ \sum_{j=1}^d b_j\,\frac{\partial u}{\partial x_j} + c\,u + f(x,t), \tag{5.1}
\]
where \(a_{jk}\), \(b_j\), c are constants, and f(x,t) is a given function. Let \(A = [a_{jk}]\) be the \(d\times d\) coefficient matrix with entries \(a_{jk}\). According to the standard classification, equation (5.1) is said to be strongly hyperbolic if all of the eigenvalues of A are real, distinct, and nonzero. In this case, by a linear transformation of the coordinates, the equation can be reduced, without changing notation for u, to a perturbed wave equation
\[
\frac{\partial^2 u}{\partial t^2} = c^2\,\Delta u + \sum_{j=1}^d \tilde b_j\,\frac{\partial u}{\partial x_j} + \tilde c\,u + \tilde f(x,t),
\]
where c, \(\tilde b_j\), \(\tilde c\) are some new constants, and \(\tilde f\) is a known function.


Similarly, we consider a system of n linear first-order equations:
\[
\frac{\partial u_i}{\partial t} = \sum_{k=1}^d \sum_{j=1}^n a_{ij}^k\,\frac{\partial u_j}{\partial x_k}
+ \sum_{j=1}^n b_{ij}\,u_j + f_i(x,t), \tag{5.2}
\]
for \(i = 1,\ldots,n\). For each k, let \(A_k = [a_{ij}^k]\) be the \(n\times n\) matrix with entries \(a_{ij}^k\). Given \(\xi\in\mathbb{R}^d\), we form the \(n\times n\) matrix \(A(\xi) = \sum_{k=1}^d \xi_k A_k\). For \(|\xi| = 1\), if all of the eigenvalues of \(A(\xi)\) are real, then the system is said to be hyperbolic. The system is strongly hyperbolic if the eigenvalues are real, distinct, and nonzero. It is well known that the wave equation can be written as a hyperbolic system. For the stochastic counterpart, the classification is similar.

5.3 Wave Equation with Additive Noise

As in Chapter 3, let D be a bounded domain in \(\mathbb{R}^d\) with a smooth boundary \(\partial D\). We first consider the initial–boundary value problem for a randomly perturbed wave equation in D:
\[
\frac{\partial^2 u}{\partial t^2} = (\nu\Delta - \alpha)u + f(x,t) + \dot M(x,t), \quad t\in(0,T),
\]
\[
u(x,0) = g(x), \qquad \frac{\partial u}{\partial t}(x,0) = h(x), \quad x\in D, \tag{5.3}
\]
\[
Bu|_{\partial D} = 0,
\]
where \(\nu = c^2\) and \(\alpha\) are positive constants; g(x), h(x) are given functions, and, as introduced before, \(\dot M(x,t)\) is the formal time derivative of the martingale M(x,t) given by the Itô integral:
\[
M(x,t) = \int_0^t \sigma(x,s)\, W(x,ds). \tag{5.4}
\]

Similar to the case of a parabolic equation in Sections 3.3 and 3.4, for \(g, h\in H\), suppose that \(f(\cdot,t)\) and \(\sigma(\cdot,t)\) are predictable processes in H such that
\[
E\int_0^T \|f(\cdot,s)\|^2\, ds < \infty, \tag{5.5}
\]
and
\[
E\int_0^T \|\sigma(\cdot,s)\|_R^2\, ds \le r_0\int_0^T \|\sigma(\cdot,s)\|^2\, ds < \infty, \tag{5.6}
\]
where we recall that \(\|\sigma(\cdot,s)\|_R^2 = Tr\,Q_s = \int_D q(x,x,s)\, dx\), with \(q(x,y,s) = r(x,y)\,\sigma(x,s)\,\sigma(y,s)\). Then the problem (5.3) can be solved by the method of eigenfunction expansions. To this end, let \(\lambda_k\) and \(e_k(x)\) denote the associated eigenvalues and eigenfunctions of \((-A) = (-\nu\Delta + \alpha)\), respectively, for \(k = 1,2,\ldots\). Let
\[
u(x,t) = \sum_{k=1}^\infty u_t^k\, e_k(x), \tag{5.7}
\]
with \(u_t^k = (u(\cdot,t), e_k)\). By expanding the given functions in (5.3) in terms of the eigenfunctions, we obtain
\[
\frac{d^2 u_t^k}{dt^2} = -\lambda_k\, u_t^k + f_t^k + \dot z_t^k, \qquad
u_0^k = g_k, \quad \frac{d}{dt}u_0^k = h_k, \quad k = 1,2,\ldots, \tag{5.8}
\]
where \(f_t^k = (f(\cdot,t), e_k)\), \(g_k = (g, e_k)\), \(h_k = (h, e_k)\) and \(\dot z_t^k = \frac{d}{dt}z_t^k\). Recall that \(z_t^k = (M(\cdot,t), e_k)\) is a martingale with local characteristic \(q_t^k = (Q_t e_k, e_k)\). The equation (5.8), sometimes called a second-order Itô equation, is to be interpreted as the system:
\[
du_t^k = v_t^k\, dt, \qquad
dv_t^k = -\lambda_k\, u_t^k\, dt + f_t^k\, dt + dz_t^k, \qquad
u_0^k = g_k, \quad v_0^k = h_k, \tag{5.9}
\]
which can be easily solved to give the solution
\[
u_t^k = g_k\,\cos\sqrt{\lambda_k}\,t + h_k\,\frac{\sin\sqrt{\lambda_k}\,t}{\sqrt{\lambda_k}}
+ \int_0^t \frac{\sin\sqrt{\lambda_k}(t-s)}{\sqrt{\lambda_k}}\, f_s^k\, ds
+ \int_0^t \frac{\sin\sqrt{\lambda_k}(t-s)}{\sqrt{\lambda_k}}\, dz_s^k,
\]
\[
v_t^k = -\sqrt{\lambda_k}\, g_k\,\sin\sqrt{\lambda_k}\,t + h_k\,\cos\sqrt{\lambda_k}\,t
+ \int_0^t \cos\sqrt{\lambda_k}(t-s)\, f_s^k\, ds
+ \int_0^t \cos\sqrt{\lambda_k}(t-s)\, dz_s^k. \tag{5.10}
\]
In view of (5.7) and (5.10), the formal solution of equation (5.3) can be written as
\[
u(x,t) = \int_D K'(x,y,t)\, g(y)\, dy + \int_D K(x,y,t)\, h(y)\, dy
\]
\[
+ \int_0^t\!\!\int_D K(x,y,t-s)\, f(y,s)\, dy\, ds
+ \int_0^t\!\!\int_D K(x,y,t-s)\, M(y,ds)\, dy, \tag{5.11}
\]
where \(K'(x,y,t) = \frac{\partial}{\partial t}K(x,y,t)\) and K(x,y,t) is the Green's function for the wave equation, given by
\[
K(x,y,t) = \sum_{k=1}^\infty \frac{\sin\sqrt{\lambda_k}\,t}{\sqrt{\lambda_k}}\, e_k(x)\, e_k(y). \tag{5.12}
\]
In contrast with the heat equation, it can be shown that, for \(d\ge 2\), the above series does not converge in \(H\otimes H\). In general, \(K(\cdot,\cdot,t)\) is a generalized function, or a distribution, and so is \(K'(\cdot,\cdot,t)\) for \(d\ge 1\).

Let \(G_t\) denote the Green's operator with kernel K(x,y,t), so that, for any smooth function g,
\[
(G_t\, g)(x) = \int_D K(x,y,t)\, g(y)\, dy, \tag{5.13}
\]
and set \(G_t' = \frac{d}{dt}G_t\), which is the derived Green's operator with kernel
\[
K'(x,y,t) = \sum_{k=1}^\infty (\cos\sqrt{\lambda_k}\,t)\, e_k(x)\, e_k(y). \tag{5.14}
\]
Then the equation (5.11) can be written simply as
\[
u_t = G_t'\, g + G_t\, h + \int_0^t G_{t-s}\, f_s\, ds + \int_0^t G_{t-s}\, dM_s. \tag{5.15}
\]
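Each coefficient in the expansion (5.10) is a one-dimensional stochastic convolution, and its variance is fixed by the Itô isometry: if \(z_s\) is a Brownian motion with \(E\,(dz_s)^2 = q\,ds\), then \(E\,\big|\int_0^t \frac{\sin\sqrt{\lambda}(t-s)}{\sqrt{\lambda}}\,dz_s\big|^2 = q\int_0^t \frac{\sin^2\sqrt{\lambda}(t-s)}{\lambda}\,ds\). The sketch below (illustrative only; \(\lambda\), q and the grid are hypothetical choices) confirms this for a single mode:

```python
import numpy as np

# One mode of (5.10): u_t = int_0^t sin(sqrt(lam)*(t-s))/sqrt(lam) dz_s,
# with z a Brownian motion of intensity q (E (dz_s)^2 = q ds).  The Ito
# isometry gives  E|u_t|^2 = q * int_0^t sin^2(sqrt(lam)*(t-s))/lam ds.
rng = np.random.default_rng(4)
lam, q, t, npaths, nsteps = 9.0, 0.5, 2.0, 100_000, 400
dt = t / nsteps
s = dt * np.arange(nsteps)                  # left endpoints of the time grid
kernel = np.sin(np.sqrt(lam) * (t - s)) / np.sqrt(lam)

u_t = np.zeros(npaths)
for i in range(nsteps):                     # Riemann-Ito sum of the convolution
    u_t += kernel[i] * np.sqrt(q * dt) * rng.standard_normal(npaths)

var_mc = np.var(u_t)
var_exact = q * np.sum(kernel**2) * dt      # discrete Ito isometry
print(var_mc, var_exact)
```

Summing such mode variances, weighted by \(1/\lambda_k \le 1/\lambda_1\), is precisely the computation carried out in the proof of Lemma 3.3 below.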

In the following lemma, for \(k = 0, 1, 2\), we let \(G_t^{(k)} = \frac{d^k}{dt^k}G_t\) with \(G_t^{(0)} = G_t\), where the kernel of \(G_t^{(k)}\) is given by \(K^{(k)}(x,y,t) = \frac{\partial^k}{\partial t^k}K(x,y,t)\). As in Chapter 3, for the Sobolev space \(H^m = H^m(D)\), we shall use the equivalent norm
\[
\|g\|_m = \Big\{ \sum_{j=1}^\infty \lambda_j^m\,(g, e_j)^2 \Big\}^{1/2} \quad \text{for } g\in H^m.
\]

Lemma 3.1 Let k and m be nonnegative integers. Then, for any function \(g\in H^{k+m-1}\), the following estimates hold:
\[
\sup_{0\le t\le T} \|G_t^{(k)}\,g\|_m^2 \le \|g\|_{k+m-1}^2, \quad \text{for } 0\le k+m\le 2. \tag{5.16}
\]

Proof. The estimates can be verified by using the series representation (5.12) of the Green's function. We will show that (5.16) holds for \(k = m = 1\), so that
\[
\sup_{0\le t\le T} \|G_t'\,g\|_1^2 \le \|g\|_1^2.
\]
In view of (5.14), the above estimate can be easily verified as follows:
\[
\sup_{0\le t\le T} \|G_t'\,g\|_1^2 = \sup_{0\le t\le T} \sum_{k=1}^\infty \lambda_k\,(\cos^2\sqrt{\lambda_k}\,t)\,(g, e_k)^2
\le \sum_{k=1}^\infty \lambda_k\,(g, e_k)^2 = \|g\|_1^2.
\]
The other cases can be shown similarly.

Lemma 3.2 Let \(f(\cdot,t)\in L^2(\Omega\times(0,T);H)\) satisfy (5.5). Then \(\eta(\cdot,t) = \int_0^t G_{t-s}\,f(\cdot,s)\,ds\) is a continuous, adapted \(H^1\)-valued process, and its time derivative \(\eta'(\cdot,t)\) is a continuous H-valued process such that
\[
E\sup_{0\le t\le T} \|\eta(\cdot,t)\|_k^2 \le C_k\,T\,E\int_0^T \|f(\cdot,s)\|_{k-1}^2\,ds, \quad \text{for } k = 0, 1, \tag{5.17}
\]
with \(C_0 = 1/\lambda_1\), \(C_1 = 1\), and
\[
E\sup_{0\le t\le T} \|\eta'(\cdot,t)\|^2 \le T\,E\int_0^T \|f(\cdot,s)\|^2\,ds. \tag{5.18}
\]

Proof. We will verify only the first inequality, for k = 1. Clearly,
\[
\|\eta_t\|_1^2 = \Big\|\int_0^t G_{t-s}\,f_s\,ds\Big\|_1^2
\le \Big[\int_0^t \|G_{t-s}\,f_s\|_1\,ds\Big]^2
\le T\int_0^t \|G_{t-s}\,f_s\|_1^2\,ds,
\]
so that, by Lemma 3.1,
\[
E\sup_{0\le t\le T} \|\eta_t\|_1^2 \le T\,E\int_0^T \|f_s\|^2\,ds.
\]
The continuity of \(\eta(\cdot,t)\) and \(\eta'(\cdot,t)\) will not be shown here.

Lemma 3.3 Let
$$\zeta(\cdot,t) = \int_0^t G_{t-s}\,M(\cdot,ds) = \int_0^t G_{t-s}\,\sigma(\cdot,s)\,W(\cdot,ds), \qquad (5.19)$$
where $\sigma$ satisfies the condition (5.6). Then $\zeta(\cdot,t)$ is a continuous adapted $H^1$-valued
process in $[0,T]$, and its derivative $\zeta'(\cdot,t) = \partial_t\zeta(\cdot,t)$ is continuous in
$H$. Moreover, for $k,m = 0,1$, the following inequalities hold:
$$E\,\|\zeta^{(k)}(\cdot,t)\|_m^2 \le C_{k,m}\,E\int_0^T\|\sigma(\cdot,t)\|_R^2\,dt \qquad (5.20)$$
and
$$E\sup_{0\le t\le T}\|\zeta^{(k)}(\cdot,t)\|_m^2 \le D_{k,m}\,E\int_0^T\|\sigma(\cdot,t)\|_R^2\,dt, \qquad (5.21)$$
with $0 \le k+m \le 1$, where $C_{0,0} = 1/\lambda_1$, $C_{0,1} = C_{1,0} = 1$, and $D_{0,0} =
8/\lambda_1$, $D_{0,1} = D_{1,0} = 8$.

Proof. We will first check the inequality (5.20) when $k = m = 0$. Similar to
the proof of Lemma 4-5.2, let $\zeta^n(\cdot,t)$ be the $n$-term approximation of $\zeta(\cdot,t)$
given by
$$\zeta^n(x,t) = \sum_{k=1}^n \zeta_t^k\,e_k(x), \qquad (5.22)$$
where
$$\zeta_t^k = (\zeta(\cdot,t),e_k) = \int_0^t \frac{\sin\sqrt{\lambda_k}(t-s)}{\sqrt{\lambda_k}}\,dz_s^k. \qquad (5.23)$$
Since
$$E\,\|\zeta(\cdot,t)-\zeta^n(\cdot,t)\|^2 = \sum_{k>n}E\int_0^t \frac{\sin^2\sqrt{\lambda_k}(t-s)}{\lambda_k}\,q_s^k\,ds
\le \frac{1}{\lambda_1}\sum_{k>n}E\int_0^t q_s^k\,ds = \frac{1}{\lambda_1}\sum_{k>n}E\int_0^t (Q_s e_k,e_k)\,ds,$$
it is easily seen that, by condition (5.6), $\lim_{n\to\infty}E\,\|\zeta(\cdot,t)-\zeta^n(\cdot,t)\|^2 = 0$
uniformly in $t\in[0,T]$. Therefore we have
$$E\,\|\zeta(\cdot,t)\|^2 = \lim_{n\to\infty}E\,\|\zeta^n(\cdot,t)\|^2 = \lim_{n\to\infty}\sum_{k=1}^n E\int_0^t \frac{\sin^2\sqrt{\lambda_k}(t-s)}{\lambda_k}\,q_s^k\,ds
\le \frac{1}{\lambda_1}\sum_{k=1}^{\infty}E\int_0^t q_s^k\,ds = \frac{1}{\lambda_1}\,E\int_0^t\|\sigma(\cdot,s)\|_R^2\,ds,$$
which verifies (5.20) with $k = m = 0$.


Next, we will verify the inequality (5.21) with $k = m = 0$. It follows from
(5.23) that
$$|\zeta_t^k|^2 \le \frac{2}{\lambda_k}\Big\{\Big|\int_0^t(\cos\sqrt{\lambda_k}\,s)\,dz_s^k\Big|^2 + \Big|\int_0^t(\sin\sqrt{\lambda_k}\,s)\,dz_s^k\Big|^2\Big\},$$
which, by invoking Doob's inequality, yields
$$E\sup_{0\le t\le T}|\zeta_t^k|^2 \le \frac{8}{\lambda_k}\,E\Big\{\int_0^T(\cos^2\sqrt{\lambda_k}\,s)\,q_s^k\,ds
+ \int_0^T(\sin^2\sqrt{\lambda_k}\,s)\,q_s^k\,ds\Big\} = \frac{8}{\lambda_k}\,E\int_0^T q_s^k\,ds. \qquad (5.24)$$
From the inequality (5.24), (5.22), and condition (5.6), we can deduce that
$$E\sup_{0\le t\le T}\|\zeta(\cdot,t)-\zeta^n(\cdot,t)\|^2 \le \frac{8}{\lambda_1}\sum_{k>n}E\int_0^T(Q_s e_k,e_k)\,ds \to 0$$
as $n\to\infty$, and
$$E\sup_{0\le t\le T}\|\zeta(\cdot,t)\|^2 = \lim_{n\to\infty}E\sup_{0\le t\le T}\|\zeta^n(\cdot,t)\|^2
\le \frac{8}{\lambda_1}\sum_{k=1}^{\infty}E\int_0^T(Q_s e_k,e_k)\,ds = \frac{8}{\lambda_1}\,E\int_0^T\|\sigma(\cdot,s)\|_R^2\,ds.$$
Therefore, we can conclude that the inequality (5.21) holds. Since $\zeta^n(\cdot,t)$ is a
continuous adapted process in $H^1$, so is $\zeta(\cdot,t)$. The inequalities in (5.21) with
$k+m \ne 0$ can be verified similarly.
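The Doob-type bound (5.24) can be illustrated by simulating the single coefficient process (5.23). The following Monte-Carlo sketch assumes a constant local intensity $q_s^k \equiv 1$ and an illustrative eigenvalue; the step count and sample size are arbitrary choices, not from the text.

```python
import numpy as np

# Simulate zeta_t^k = int_0^t sin(sqrt(lam)(t-s))/sqrt(lam) dz_s^k  (5.23)
# with q_s^k == 1, and check E sup_t |zeta_t^k|^2 <= (8/lam) T  from (5.24).
lam, T, n, N = 4.0, 1.0, 200, 2000
dt = T / n
w = np.sqrt(lam)
rng = np.random.default_rng(1)

zeta = np.zeros(N)          # N independent sample paths
eta = np.zeros(N)           # eta = d(zeta)/dt
sup2 = np.zeros(N)          # running sup of zeta^2 along each path
c, s = np.cos(w * dt), np.sin(w * dt)
for _ in range(n):
    # exact rotation for d(zeta) = eta dt, d(eta) = -lam zeta dt ...
    zeta, eta = c * zeta + (s / w) * eta, -w * s * zeta + c * eta
    # ... followed by the noise increment dz on the velocity component
    eta += rng.standard_normal(N) * np.sqrt(dt)
    sup2 = np.maximum(sup2, zeta ** 2)

bound = (8.0 / lam) * T          # right-hand side of (5.24) with q == 1
assert sup2.mean() < bound
print(f"E sup |zeta|^2 ~ {sup2.mean():.3f} <= bound {bound:.3f}")
```

The sample mean of the pathwise supremum sits well below the bound; the factor 8 comes from splitting $\sin\sqrt{\lambda_k}(t-s)$ into the two martingale integrals above and applying Doob's $L^2$ maximal inequality to each.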

In view of the above lemmas, it can be shown that the formal solution
(5.15) constructed by the eigenfunction expansion is a weak solution. Here, a
test function $\phi(x,t)$ is taken to be a $C^\infty$ function on $\bar D\times[0,T]$ with $\phi(x,T) =
\partial_t\phi(x,T) = 0$, and $\phi(\cdot,t)$ has a compact support in $D$ for each $t\in[0,T]$. Then
we say that $u$ is a weak solution of (5.3) if $u(\cdot,t)$ is a continuous adapted
process in $H$ and, for any test function $\phi$, the following equation holds:
$$\int_0^T (u(\cdot,t),\phi''(\cdot,t))\,dt = -(u(\cdot,0),\phi'(\cdot,0)) + (u'(\cdot,0),\phi(\cdot,0))
+ \int_0^T (u(\cdot,t),A\phi(\cdot,t))\,dt + \int_0^T (f(\cdot,t),\phi(\cdot,t))\,dt
+ \int_0^T (\phi(\cdot,t),M(\cdot,dt)), \qquad (5.25)$$
in which the prime denotes the time derivative.

Theorem 3.4 Suppose that the conditions in Lemmas 3.2 and 3.3 hold true.
Then, for $g,h\in H$, $u(\cdot,t)$ defined by the equation (5.15) is the unique weak solution
of the problem (5.3) with $u\in L^2(\Omega;C([0,T];H))$. Moreover, it satisfies
the following inequality:
$$E\sup_{0\le t\le T}\|u(\cdot,t)\|^2 \le C(T)\,\Big\{\|g\|^2 + \|h\|^2
+ E\int_0^T[\,\|f(\cdot,s)\|^2 + \|\sigma(\cdot,s)\|_R^2\,]\,ds\Big\}, \qquad (5.26)$$
where $C(T)$ is a positive constant depending on $T$.

Proof. For convenience, let us rewrite the equation (5.15) as
$$u(\cdot,t) = \bar u(\cdot,t) + \zeta(\cdot,t),$$
where
$$\bar u(\cdot,t) = G_t'\,g + G_t h + \int_0^t G_{t-s}f_s\,ds$$
and $\zeta$ is given by (5.19). From Lemmas 3.1 and 3.2, it is easy to see that $\bar u(\cdot,t)$
is a continuous adapted process in $H$. By an $n$-term approximation and then
taking the limit, one can show that, for any test function $\phi$, $\bar u(\cdot,t)$ satisfies
the equation
$$\int_0^T (\bar u(\cdot,t),\phi''(\cdot,t))\,dt = -(g,\phi'(\cdot,0)) + (h,\phi(\cdot,0))
+ \int_0^T (\bar u(\cdot,t),A\phi(\cdot,t))\,dt + \int_0^T (f(\cdot,t),\phi(\cdot,t))\,dt. \qquad (5.27)$$

According to Lemma 3.3, $\zeta(\cdot,t)$ is a continuous process in $H$. By the
linearity of the equation and Lemma 3.3, it remains to show that $\zeta(\cdot,t)$ satisfies
$$\int_0^T (\zeta(\cdot,t),\phi''(\cdot,t))\,dt = \int_0^T (\zeta(\cdot,t),A\phi(\cdot,t))\,dt
+ \int_0^T (\phi(\cdot,t),M(\cdot,dt)). \qquad (5.28)$$
To this end, let us consider the $n$-term approximation $\zeta^n(\cdot,t)$ given by (5.22).
In view of (5.23), the expansion coefficient $\zeta_t^k$ satisfies the Itô equations
$$d\zeta_t^k = \eta_t^k\,dt, \qquad d\eta_t^k = -\lambda_k\zeta_t^k\,dt + dz_t^k,$$
with $\zeta_0^k = \eta_0^k = 0$. Let $\phi_t^k = (\phi(\cdot,t),e_k)$. By applying an integration by parts
and the Itô formula, we can obtain
$$\int_0^T \zeta_t^k\,(\phi_t^k)''\,dt = -\lambda_k\int_0^T \zeta_t^k\phi_t^k\,dt + \int_0^T \phi_t^k\,dz_t^k.$$
Therefore
$$\int_0^T (\zeta^n(\cdot,t),\phi''(\cdot,t))\,dt = \sum_{k=1}^n\int_0^T \zeta_t^k\,(\phi_t^k)''\,dt
= \sum_{k=1}^n\Big\{-\lambda_k\int_0^T \zeta_t^k\phi_t^k\,dt + \int_0^T \phi_t^k\,dz_t^k\Big\},$$
which can be rewritten as
$$\int_0^T (\zeta^n(\cdot,t),\phi''(\cdot,t))\,dt = \int_0^T (\zeta^n(t),A\phi(\cdot,t))\,dt
+ \int_0^T (\phi(\cdot,t),M^n(\cdot,dt)).$$
As $n\to\infty$, each term in the above equation converges properly in $L^2(\Omega)$ to
the expected limit. The equation (5.28) is thus verified. The uniqueness of the
solution follows from the fact that the difference of any two solutions satisfies
the homogeneous wave equation with zero initial-boundary conditions. The
inequality (5.26) follows easily from equation (5.15) and the bounds given in
Lemmas 3.1 to 3.3.

If $g\in H^1$, $h\in H$, we will show that the solution $u(\cdot,t)$ is an $H^1$-valued
process and that the associated energy equation holds.

Theorem 3.5 Suppose that $g\in H^1$, $h\in H$, and the random fields $f(\cdot,t)$ and
$\sigma(\cdot,t)$ are given as in Lemmas 3.2 and 3.3. Then the solution $u(\cdot,t)$ of (5.3) is
an $H^1$-valued process with the regularity properties $u\in L^2(\Omega;C([0,T];H^1))$
and $v(\cdot,t) = \partial_t u(\cdot,t)\in L^2(\Omega;C([0,T];H))$. Moreover, the energy equation
holds:
$$\|v(\cdot,t)\|^2 + \|u(\cdot,t)\|_1^2 = \|h\|^2 + \|g\|_1^2 + 2\int_0^t (f(\cdot,s),v(\cdot,s))\,ds
+ 2\int_0^t (v(\cdot,s),M(\cdot,ds)) + \int_0^t \mathrm{Tr}\,Q_s\,ds. \qquad (5.29)$$

Proof. In view of Theorem 3.4, with the aid of the estimates from Lemmas
3.1 to 3.3, the regularity results $u\in L^2(\Omega;C([0,T];H^1))$ and $v\in
L^2(\Omega;C([0,T];H))$ can be proved in a manner similar to the parabolic case
(see Theorem 3-5.6).
Concerning the energy equation, we first show that it holds for the $n$-term
approximate equations
$$du^n(\cdot,t) = v^n(\cdot,t)\,dt, \qquad dv^n(\cdot,t) = [Au^n(\cdot,t) + f^n(\cdot,t)]\,dt + M^n(\cdot,dt),$$
where the expansion coefficients $u_t^k$ and $v_t^k$ for $u^n$ and $v^n$ satisfy the system
$$du_t^k = v_t^k\,dt, \qquad dv_t^k = (-\lambda_k u_t^k + f_t^k)\,dt + dz_t^k.$$
Apply the Itô formula to get
$$d|v_t^k|^2 = 2v_t^k\,dv_t^k + q_t^k\,dt = 2(-\lambda_k u_t^k + f_t^k)v_t^k\,dt + 2v_t^k\,dz_t^k + q_t^k\,dt,$$
which, when written in the integral form, leads to
$$|v_t^k|^2 + \lambda_k|u_t^k|^2 = |v_0^k|^2 + \lambda_k|u_0^k|^2 + 2\int_0^t f_s^k v_s^k\,ds
+ 2\int_0^t v_s^k\,dz_s^k + \int_0^t q_s^k\,ds.$$
Summing the above over $k$ from $1$ to $n$, we obtain
$$\|v^n(\cdot,t)\|^2 + \|u^n(\cdot,t)\|_1^2 = \|v^n(\cdot,0)\|^2 + \|u^n(\cdot,0)\|_1^2
+ 2\int_0^t (f^n(\cdot,s),v^n(\cdot,s))\,ds + 2\int_0^t (v^n(\cdot,s),M^n(\cdot,ds))
+ \int_0^t \mathrm{Tr}\,Q_s^n\,ds,$$
which converges in the mean to the energy equation (5.29) as $n\to\infty$.
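Taking expectations in (5.29) with $f = 0$ gives $E\{\|v_t\|^2 + \|u_t\|_1^2\} = \|h\|^2 + \|g\|_1^2 + t\,\mathrm{Tr}\,Q$, since the stochastic integral has mean zero. This linear-in-time growth of the mean energy can be checked on the truncated spectral system. The sketch below assumes $q_t^k \equiv 1$ on $K$ modes (so $\mathrm{Tr}\,Q^n = K$) with illustrative eigenvalues and initial data.

```python
import numpy as np

# Mean of the energy equation (5.29) with f = 0:
# E{||v_t||^2 + ||u_t||_1^2} = ||h||^2 + ||g||_1^2 + t * Tr Q.
K, T, n, N = 10, 1.0, 400, 4000
lam = np.arange(1, K + 1) ** 2.0    # hypothetical eigenvalues lam_k = k^2
w = np.sqrt(lam)
dt = T / n
rng = np.random.default_rng(2)

u = np.ones((N, K)) / lam           # coefficients of g (so g is in H^1)
v = np.zeros((N, K))                # coefficients of h = 0
e0 = (lam * u ** 2 + v ** 2).sum(axis=1).mean()
c, s = np.cos(w * dt), np.sin(w * dt)
for _ in range(n):
    # exact free-wave rotation conserves lam*u^2 + v^2 in each mode ...
    u, v = c * u + (s / w) * v, -w * s * u + c * v
    # ... and the additive noise dz^k (q^k == 1, Tr Q = K) pumps in energy
    v += rng.standard_normal((N, K)) * np.sqrt(dt)
eT = (lam * u ** 2 + v ** 2).sum(axis=1).mean()
assert abs(eT - (e0 + K * T)) < 0.1 * (e0 + K * T)
print(f"mean energy: {e0:.2f} -> {eT:.2f}, predicted {e0 + K * T:.2f}")
```

The sample-mean terminal energy matches the prediction to within Monte-Carlo error, illustrating why the trace term in (5.29) is the only source of mean energy growth.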

Let us now consider the linear wave equation (5.3) with the Wiener noise
replaced by an additive Poisson noise:
$$\frac{\partial^2 u}{\partial t^2} = (\kappa\Delta-\alpha)u + f(x,t) + \dot P(x,t), \quad t\in(0,T),$$
$$u(x,0) = g(x), \quad \frac{\partial u}{\partial t}(x,0) = h(x), \quad x\in D, \qquad (5.30)$$
$$Bu|_{\partial D} = 0,$$
where
$$P(x,t) = \int_0^t\!\!\int_B \phi(x,s,\xi)\,\tilde N(ds,d\xi).$$
In terms of the associated Green's operator, similar to (5.15), the formal solution
of equation (5.30) can be written as
$$u_t = \bar u_t + \zeta_t, \qquad (5.31)$$
where
$$\bar u_t = G_t'\,g + G_t h + \int_0^t G_{t-s}f_s\,ds, \qquad \zeta_t = \int_0^t G_{t-s}\,dP_s.$$
We will first show that the expression given by (5.31) is a weak solution. To
proceed, similar to the deterministic case, it is not hard to verify that, for
$f_t\in L^2((0,T)\times\Omega;H)$, $\bar u_t$ is a strong solution of the initial-boundary value
problem in the absence of the Poisson noise. As in the parabolic case in Section
3.4, it suffices to prove that $\zeta_t$ is a weak solution of the equation
$$\frac{\partial^2\zeta}{\partial t^2} = (\kappa\Delta-\alpha)\zeta + \dot P(x,t),$$
subject to the homogeneous initial-boundary conditions. This means that
$\zeta\in L^2((0,T)\times\Omega;H)$ satisfies the following equation:
$$\int_0^T (\zeta_t,g_t'')\,dt = \int_0^T (\zeta_t,Ag_t)\,dt + \int_0^T (g_t,dP_t), \qquad (5.32)$$
for any test function $g_t\in C_0^\infty([0,T]\times D)$ with compact support, where we set
$A = (\kappa\Delta-\alpha)$.

Lemma 3.6 Assume that
$$E\int_0^T\!\!\int_B \|\phi_s(\xi)\|^2\,ds\,\mu(d\xi) < \infty. \qquad (5.33)$$
Then, for $k,m = 0,1$ with $k+m \le 1$, there exist constants $C_{k,m}>0$ such
that
$$\sup_{0\le t\le T}E\,\|\zeta_t^{(k)}\|_m^2 \le C_{k,m}\,E\int_0^T\!\!\int_B \|\phi_s(\xi)\|^2\,ds\,\mu(d\xi). \qquad (5.34)$$

Proof. The proof is similar to part of Lemma 3.3. Let $e_k$ and $\lambda_k$ denote
the $k$-th eigenfunction and eigenvalue of $A$, respectively. Consider the $k$-th
component of $\zeta_t$:
$$\zeta_t^k = (\zeta(\cdot,t),e_k) = \int_0^t \frac{\sin\sqrt{\lambda_k}(t-s)}{\sqrt{\lambda_k}}\,dp_s^k,$$
with $dp_s^k = \int_B (\phi_s(\xi),e_k)\,\tilde N(ds,d\xi)$. We shall only verify inequality (5.34) for
$k = 1$ and $m = 0$. Let the prime denote the time derivative. Since
$$E\,\|\zeta'(\cdot,t)-\zeta'^n(\cdot,t)\|^2 = \sum_{k>n}E\int_0^t\!\!\int_B \cos^2\sqrt{\lambda_k}(t-s)\,(\phi_s(\xi),e_k)^2\,ds\,\mu(d\xi)
\le \sum_{k>n}E\int_0^T\!\!\int_B (\phi_s(\xi),e_k)^2\,ds\,\mu(d\xi),$$
it follows that, by condition (5.33), $\lim_{n\to\infty}E\,\|\zeta'(\cdot,t)-\zeta'^n(\cdot,t)\|^2 = 0$, uniformly
in $t\in[0,T]$. This, in turn, implies that
$$\sup_{0\le t\le T}E\,\|\zeta_t'\|^2 = \sup_{0\le t\le T}\lim_{n\to\infty}E\,\|\zeta'^n(\cdot,t)\|^2
\le E\int_0^T\!\!\int_B \|\phi_s(\xi)\|^2\,ds\,\mu(d\xi).$$
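The step behind Lemma 3.6 is the isometry of the compensated Poisson integral: $E\,|\int_0^T\!\int_B \phi\,\tilde N(ds,d\xi)|^2 = E\int_0^T\!\int_B \phi^2\,ds\,\mu(d\xi)$. A Monte-Carlo sketch with a single constant jump amplitude and intensity (all constants hypothetical, chosen only for illustration):

```python
import numpy as np

# Compensated-Poisson isometry: for p_T = int_0^T int_B a N-tilde(ds, dxi)
# with constant amplitude a and total jump intensity mu(B) = lam_p,
# E |p_T|^2 = a^2 * lam_p * T.
a, lam_p, T, N = 0.7, 3.0, 2.0, 20000
rng = np.random.default_rng(5)
counts = rng.poisson(lam_p * T, size=N)    # number of jumps on [0, T]
p_T = a * (counts - lam_p * T)             # jumps of size a, compensated
second_moment = np.mean(p_T ** 2)
expected = a ** 2 * lam_p * T
assert abs(second_moment - expected) < 0.1 * expected
print(f"E|p_T|^2 ~ {second_moment:.3f}, isometry predicts {expected:.3f}")
```

The same identity, applied coefficient by coefficient to $dp_s^k$, is what converts the series for $E\|\zeta'_t\|^2$ into the double integral on the right of (5.34).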

Now we turn our attention to the weak solution. More precisely, we will
prove the following results.

Theorem 3.7 Suppose that $f_t = f(\cdot,t)$ is an $H$-valued process such that
$E\int_0^T\|f_t\|^2\,dt < \infty$, and the condition (5.33) in Lemma 3.6 holds. Then, for
$g,h\in H$, the problem has a unique weak solution given by (5.31). Moreover,
there exists a constant $C(T)>0$ such that
$$\sup_{0\le t\le T}E\,\|u(\cdot,t)\|^2 \le C(T)\,\Big\{\|g\|^2 + \|h\|^2
+ E\int_0^T\|f(\cdot,s)\|^2\,ds + E\int_0^T\!\!\int_B\|\phi_s(\xi)\|^2\,ds\,\mu(d\xi)\Big\}. \qquad (5.35)$$

Proof. For the reason mentioned above, the weak solution of equation
(5.30) can be reduced to verifying equation (5.32). Similar to the proof of
Theorem 3.4, this can be done by first showing that an $n$-term approximation
$\zeta^n(\cdot,t)$ satisfies equation (5.32) and then taking the limit as $n\to\infty$. The
detail is omitted here.
To verify the inequality (5.35), we obtain from equation (5.31) that
$$E\,\|u_t\|^2 \le 4\,\Big\{\|G_t'\,g\|^2 + \|G_t h\|^2 + E\,\Big\|\int_0^t G_{t-s}f_s\,ds\Big\|^2 + E\,\|\zeta_t\|^2\Big\}.$$
Now, by applying Lemmas 3.1, 3.2, 3.3, and 3.6, the above yields the desired
inequality (5.35).

Remark. With the aid of Lemma 3.6 and Theorem 3.7, instead of the
energy equation, we can obtain the following mean energy inequality:
$$\sup_{0\le t\le T}E\,\{\|u(\cdot,t)\|_1^2 + \|v(\cdot,t)\|^2\} \le C(T)\,\Big\{\|g\|_1^2 + \|h\|^2
+ E\int_0^T\|f(\cdot,s)\|^2\,ds + E\int_0^T\!\!\int_B\|\phi_s(\xi)\|^2\,ds\,\mu(d\xi)\Big\}.$$

5.4 Semilinear Wave Equations

Let $f(s,x,t)$ and $\sigma(s,x,t)$ be predictable random fields depending on the
parameters $s\in R$ and $x\in D$. We consider the initial-boundary value problem
for a stochastic wave equation as follows:
$$\frac{\partial^2 u}{\partial t^2} = (\kappa\Delta-\alpha)u + f(u,x,t) + \dot M(u,x,t),$$
$$Bu|_{\partial D} = 0, \qquad (5.36)$$
$$u(x,0) = g(x), \quad \frac{\partial u}{\partial t}(x,0) = h(x),$$
for $x\in D$, $t\in(0,T)$, where
$$\dot M(u,x,t) = \sigma(u,x,t)\,\dot W(x,t). \qquad (5.37)$$
In terms of the Green's operator (5.13), we rewrite the system (5.36) as
the integral equation
$$u(\cdot,t) = G_t'\,g + G_t h + \int_0^t G_{t-s}f_s(u)\,ds + \int_0^t G_{t-s}\,M(u,ds), \qquad (5.38)$$
where we set $f_s(u) = f(u(\cdot,s),\cdot,s)$ and $M(u,ds) = M(u(\cdot,s),\cdot,ds)$. In lieu of
(5.36), for $g,h\in H$, if $u(\cdot,t)$ is a continuous adapted process in $H$ satisfying the
equation (5.38), then it is a mild solution. Suppose that, for $g\in H^1$, $h\in H$,
$u(\cdot,t)$ is a mild solution with the regularity property $u\in L^2([0,T]\times\Omega;H^1)$.
Then, as will be seen, such a mild solution will turn out to be a strong solution.
For the existence of a mild solution, let $f(\cdot,\cdot,t)$ and $\sigma(\cdot,\cdot,t)$ be predictable
random fields that satisfy the usual conditions (to be called conditions A) as
follows:

(A.1) There exist positive constants $b_1, b_2$ such that
$$|f(s,x,t)|^2 + |\sigma(s,x,t)|^2 \le b_1(1+|s|^2) \quad a.s.$$

(A.2) $|f(s,x,t)-f(s',x,t)|^2 + |\sigma(s,x,t)-\sigma(s',x,t)|^2 \le b_2|s-s'|^2$ a.s., for any
$s,s'\in R$; $x\in D$ and $t\in[0,T]$.

(A.3) $W(\cdot,t)$ is an $R$-Wiener process in $H$ with covariance function $r(x,y)$
bounded by $r_0$, for $x,y\in D$.

Theorem 4.1 Suppose that conditions A hold true. Then, given $g,h\in H$,
the wave equation (5.36) has a unique mild solution $u\in L^2(\Omega;C([0,T];H))$
such that
$$E\sup_{0\le t\le T}\|u(\cdot,t)\|^2 < \infty.$$

Proof. To prove the theorem by the contraction mapping principle, let $X_T$
be the Banach space of continuous adapted processes $\phi(\cdot,t)$ in $H$ with the norm
$$\|\phi\|_T = \{E\sup_{0\le t\le T}\|\phi(\cdot,t)\|^2\}^{1/2} < \infty. \qquad (5.39)$$
Define the operator $\Gamma$ in $X_T$ by
$$\Gamma\phi_t = G_t'\,g + G_t h + \int_0^t G_{t-s}f_s(\phi)\,ds + \int_0^t G_{t-s}\sigma_s(\phi)\,dW_s. \qquad (5.40)$$
By making use of Lemmas 3.1 to 3.3 together with conditions (A.1) and (A.3),
equation (5.40) yields
$$\|\Gamma\phi\|_T^2 \le C_1\Big\{\|g\|^2 + \|h\|^2 + E\int_0^T[\,\|f(\phi(\cdot,s),\cdot,s)\|^2
+ \|\sigma(\phi(\cdot,s),\cdot,s)\|^2\,]\,ds\Big\} \le C_1\{\|g\|^2 + \|h\|^2 + b_1 T(1+\|\phi\|_T^2)\},$$
for some constant $C_1 > 0$. This shows that $\Gamma: X_T\to X_T$ is bounded and
well defined. Let $\phi$ and $\phi'\in X_T$. Then, by taking (5.40), Lemmas 3.2-3.3,
and conditions A into account, we obtain
$$\|\Gamma\phi-\Gamma\phi'\|_T^2 = E\sup_{0\le t\le T}\Big\|\int_0^t G_{t-s}[\,f_s(\phi)-f_s(\phi')\,]\,ds
+ \int_0^t G_{t-s}[\,\sigma_s(\phi)-\sigma_s(\phi')\,]\,W(\cdot,ds)\Big\|^2$$
$$\le 2\,E\Big\{\frac{T}{\lambda_1}\int_0^T\|f_s(\phi)-f_s(\phi')\|^2\,ds
+ \frac{8}{\lambda_1}\int_0^T\|\sigma_s(\phi)-\sigma_s(\phi')\|_R^2\,ds\Big\}
\le \rho\,E\int_0^T\|\phi_s-\phi_s'\|^2\,ds,$$
where $\rho = \dfrac{2b_2}{\lambda_1}(T+8r_0)$. It follows that
$$\|\Gamma\phi-\Gamma\phi'\|_T^2 \le \rho\,T\,\|\phi-\phi'\|_T^2,$$
which shows that $\Gamma$ is a contraction mapping in $X_T$ for a sufficiently small
$T$. Therefore, by Theorem 3-2.6, there exists a unique fixed point $u$, which is
the mild solution as asserted. The solution can then be extended to any finite
time interval $[0,T]$.
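The contraction argument of Theorem 4.1 is constructive: iterating the map $\Gamma$ of (5.40) from any starting process converges to the mild solution on a small interval. Below is a one-mode sketch with hypothetical Lipschitz nonlinearities $f(u) = \sin u$, $\sigma(u) = 0.5\cos u$, one fixed Brownian path, and an Euler discretization; none of these choices come from the text.

```python
import numpy as np

# Picard iteration u_{m+1} = Gamma(u_m) for one spectral mode:
# du = v dt,  dv = (-lam u + sin(u_prev)) dt + 0.5 cos(u_prev) dW.
lam, T, n = 4.0, 0.25, 250
dt = T / n
rng = np.random.default_rng(3)
dW = rng.standard_normal(n) * np.sqrt(dt)   # one fixed Brownian path

def gamma(u_prev):
    """One application of Gamma: solve the linear problem with f and sigma
    frozen along the previous iterate (the nonlinearity is lagged)."""
    u = np.zeros(n + 1)
    v = np.zeros(n + 1)
    u[0] = 1.0                               # g = 1, h = 0 in this mode
    for i in range(n):
        u[i + 1] = u[i] + v[i] * dt
        v[i + 1] = v[i] + (-lam * u[i] + np.sin(u_prev[i])) * dt \
                   + 0.5 * np.cos(u_prev[i]) * dW[i]
    return u

u = np.zeros(n + 1)
diffs = []
for _ in range(6):
    u_new = gamma(u)
    diffs.append(np.max(np.abs(u_new - u)))
    u = u_new
assert diffs[-1] < 0.5 * diffs[0]   # successive iterates contract on small T
print("sup-differences between iterates:", [f"{d:.2e}" for d in diffs])
```

On a short interval the sup-norm gap between consecutive iterates shrinks rapidly, mirroring the bound $\|\Gamma\phi-\Gamma\phi'\|_T^2 \le \rho T\|\phi-\phi'\|_T^2$ in the proof; extending to an arbitrary $[0,T]$ is done by restarting on successive subintervals.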

More generally, the nonlinear terms in the wave equation may also depend
on the velocity $v = \partial_t u$ (in the case of damping) and the gradient $\nabla u$. Then
the corresponding initial-boundary value problem is given by
$$\frac{\partial^2 u}{\partial t^2} = (\kappa\Delta-\alpha)u + f(Ju,v,x,t) + \dot M(Ju,v,x,t),$$
$$Bu|_{\partial D} = 0, \qquad (5.41)$$
$$u(x,0) = g(x), \quad \frac{\partial u}{\partial t}(x,0) = h(x),$$
for $x\in D$, $t\in(0,T)$, where we let $Ju = (u;\nabla u)$; $f(z,s,x,t)$, $\sigma(z,s,x,t)$ are
predictable random fields with parameters $z\in R^{d+1}$, $x\in D$, $s\in R$, $t\in[0,T]$,
and
$$\dot M(Ju,v,x,t) = \sigma(Ju,v,x,t)\,\dot W(x,t).$$

In this case, we rewrite the equation (5.41) as a pair of integral equations:
$$u(\cdot,t) = G_t'\,g + G_t h + \int_0^t G_{t-s}f_s(Ju,v)\,ds + \int_0^t G_{t-s}\sigma_s(Ju,v)\,dW_s,$$
$$v(\cdot,t) = G_t''\,g + G_t'\,h + \int_0^t G_{t-s}'\,f_s(Ju,v)\,ds + \int_0^t G_{t-s}'\,\sigma_s(Ju,v)\,dW_s, \qquad (5.42)$$
where we put
$$f_s(Ju,v) = f(Ju(\cdot,s),v(\cdot,s),\cdot,s) \quad\text{and}\quad \sigma_s(Ju,v) = \sigma(Ju(\cdot,s),v(\cdot,s),\cdot,s).$$
Introduce the product Hilbert space $\mathcal H = H_0^1\times H$, and let $U_t = (u(\cdot,t);v(\cdot,t))$
with $U_0 = (g;h)$, $F_s(U) = (0;f_s(Ju,v))$ and $\Sigma_s(U) = (0;\sigma_s(Ju,v))$. Then the
above system can be regarded as an equation in $\mathcal H$:
$$U_t = \mathcal G_t U_0 + \int_0^t \mathcal G_{t-s}F_s(U)\,ds + \int_0^t \mathcal G_{t-s}\Sigma_s(U)\,dW_s, \qquad (5.43)$$
where the vectors $U_t$, $F_s$, and $\Sigma_s$ are written as column matrices and $\mathcal G_t$ is
the Green's matrix:
$$\mathcal G_t = \begin{pmatrix} G_t' & G_t \\ G_t'' & G_t' \end{pmatrix}. \qquad (5.44)$$
For $\Phi = (\phi;\psi)\in\mathcal H$, let $e(\Phi) = e(\phi,\psi)$ be the energy function defined by
$$e(\Phi) = \|\phi\|_1^2 + \|\psi\|^2, \qquad (5.45)$$
which induces the energy norm $\|\cdot\|_e$ on $\mathcal H$ as follows:
$$\|\Phi\|_e = [e(\Phi)]^{1/2}. \qquad (5.46)$$

We will show that the equation (5.41) has a strong solution by proving the
existence of a unique solution to the equation (5.43) in $\mathcal H$. To this end, let
$f(\cdots,t)$ and $\sigma(\cdots,t)$ be predictable random fields satisfying the following
conditions B:

(B.1) There exist positive constants $c_1, c_2$ such that
$$|f(s,y,\eta,x,t)|^2 + |\sigma(s,y,\eta,x,t)|^2 \le c_1\{1+|s|^2+|y|^2+|\eta|^2\} \quad a.s.$$

(B.2) The Lipschitz condition holds:
$$|f(s,y,\eta,x,t)-f(s',y',\eta',x,t)|^2 + |\sigma(s,y,\eta,x,t)-\sigma(s',y',\eta',x,t)|^2$$
$$\le c_2\{|s-s'|^2 + |y-y'|^2 + |\eta-\eta'|^2\} \quad a.s.,$$
for any $s,s',\eta,\eta'\in R$; $y,y'\in R^d$; $x\in D$ and $t\in[0,T]$.

(B.3) $W(\cdot,t)$ is an $R$-Wiener process in $H$ with covariance function $r(x,y)$
bounded by $r_0$, for $x,y\in D$.

Theorem 4.2 Suppose the conditions B are satisfied. For $g\in H^1$ and $h\in H$,
the semilinear wave equation (5.41) has a unique solution $u$ which belongs to
$L^2(\Omega;C([0,T];H_0^1))$ with $v = \partial_t u\in L^2(\Omega;C([0,T];H))$. Moreover, there is a
constant $C>0$, depending on $T$, $c_1$, $c_2$ and the initial data, such that
$$E\sup_{0\le t\le T}\{\,\|u(\cdot,t)\|_1^2 + \|v(\cdot,t)\|^2\,\} \le C. \qquad (5.47)$$

Proof. To show the existence of a unique solution, let $Y_T$ denote the Banach
space of $\mathcal F_t$-adapted, continuous $\mathcal H$-valued processes $U_t = (u_t;v_t)$ with the
norm
$$|||U|||_T = \{E\sup_{0\le t\le T}e(U_t)\}^{1/2} = \{E\sup_{0\le t\le T}[\,\|u_t\|_1^2 + \|v_t\|^2\,]\}^{1/2}. \qquad (5.48)$$
For $U = (u;v)\in Y_T$, define the mapping $\Gamma$ on $Y_T$ by
$$\Gamma_t(U) = \mathcal G_t U_0 + \int_0^t \mathcal G_{t-s}F_s(U)\,ds + \int_0^t \mathcal G_{t-s}\Sigma_s(U)\,W(\cdot,ds). \qquad (5.49)$$
We first show that the operator $\Gamma$ maps $Y_T$ into itself. For $U\in Y_T$, let
$\Gamma_t U = (\phi_t;\psi_t)$. Then, in view of (5.48) and (5.49), we have
$$|||\Gamma U|||_T^2 \le E\sup_{0\le t\le T}\|\phi(\cdot,t)\|_1^2 + E\sup_{0\le t\le T}\|\psi(\cdot,t)\|^2. \qquad (5.50)$$
By noticing (5.42) and making use of Lemmas 3.1-3.3, similar to the proof of
the previous theorem, it can be shown that there exist constants $C_1$ and $C_2$
such that
$$E\sup_{0\le t\le T}\|\phi(\cdot,t)\|_1^2 = E\sup_{0\le t\le T}\Big\|G_t'\,g + G_t h + \int_0^t G_{t-s}f_s(Ju,v)\,ds
+ \int_0^t G_{t-s}\sigma_s(Ju,v)\,dW_s\Big\|_1^2$$
$$\le C_1\Big\{\|U_0\|_e^2 + E\int_0^T[\,\|f_s(Ju,v)\|^2 + \|\sigma_s(Ju,v)\|^2\,]\,ds\Big\},$$
and
$$E\sup_{0\le t\le T}\|\psi(\cdot,t)\|^2 = E\sup_{0\le t\le T}\Big\|G_t''\,g + G_t'\,h + \int_0^t G_{t-s}'\,f_s(Ju,v)\,ds
+ \int_0^t G_{t-s}'\,\sigma_s(Ju,v)\,dW_s\Big\|^2$$
$$\le C_2\Big\{\|U_0\|_e^2 + E\int_0^T[\,\|f_s(Ju,v)\|^2 + \|\sigma_s(Ju,v)\|^2\,]\,ds\Big\}.$$

Therefore, the equation (5.50) yields
$$|||\Gamma_t U|||_T^2 \le (C_1+C_2)\Big\{\|U_0\|_e^2 + E\int_0^T[\,\|f_s(Ju,v)\|^2
+ \|\sigma_s(Ju,v)\|^2\,]\,ds\Big\}. \qquad (5.51)$$
By making use of condition (B.1) and the fact that $(\|g\| + \|\nabla g\|) \le c_0\|g\|_1$ for
some $c_0>0$, we can deduce from (5.51) that there is a constant $C$, depending
on $T$, $e(U_0)$ and so on, such that
$$|||\Gamma U|||_T^2 \le C(1 + |||U|||_T^2).$$
Hence, the map $\Gamma: Y_T\to Y_T$ is well defined.
To show that $\Gamma$ is a contraction map, for $U,U'\in Y_T$, we consider
$$|||\Gamma U-\Gamma U'|||_T^2 = E\sup_{0\le t\le T}\{\,\|\phi(\cdot,t)-\phi'(\cdot,t)\|_1^2 + \|\psi(\cdot,t)-\psi'(\cdot,t)\|^2\,\},$$
in which both terms on the right-hand side can be estimated by applying
Lemmas 3.1-3.3 to give the upper bound
$$C_3\,E\int_0^t\{\,\|f_s(Ju,v)-f_s(Ju',v')\|^2 + \|\sigma_s(Ju,v)-\sigma_s(Ju',v')\|^2\,\}\,ds,$$
for some $C_3>0$. Now, making use of the Lipschitz condition (B.2), the above
inequality gives rise to
$$|||\Gamma U-\Gamma U'|||_T^2 \le c_2 C_3\,E\int_0^T\{\,\|u_s-u_s'\|^2
+ \|\nabla u_s-\nabla u_s'\|^2 + \|v_s-v_s'\|^2\,\}\,ds \le C_4\,T\,|||U-U'|||_T^2,$$
for some $C_4>0$. Hence, for small enough $T$, the map $\Gamma$ is a contraction,
which has a fixed point $U = (u;v)\in Y_T$. Hence $u$ is the unique solution with
the asserted regularity. To show that this solution is a strong solution, as in
the parabolic case, we first show that the mild solution is a weak solution
satisfying
$$\int_0^T \langle u_s,(\phi_s^n)''\rangle\,ds = -(g,(\phi^n)'_0) + (h,\phi_0^n)
+ \int_0^T \langle (\kappa\Delta-\alpha)u_s,\phi_s^n\rangle\,ds + \int_0^T (f_s(Ju,v),\phi_s^n)\,ds
+ \int_0^T (\phi_s^n,M(Ju,v,ds)),$$
for a sequence $\phi^n\in C_0^\infty$ approaching a limit $\phi_s\in H_0^1$. As $n\to\infty$, the above
yields the variational equation for a strong solution.

Before changing the subject, we shall briefly consider the semilinear wave
equation (5.36) with a Poisson noise. Let $f(r,x,t)$ and $\phi(r,x,\xi,t)$ be predictable
random fields depending on the parameters $r\in R$, $\xi\in B$ and $x\in D$.
We consider the initial-boundary value problem for the stochastic wave equation
$$\frac{\partial^2 u}{\partial t^2} = A u + f(u,x,t) + \dot P(u,x,t), \quad x\in D,\ t\in(0,T),$$
$$u(x,0) = g(x), \quad \frac{\partial u}{\partial t}(x,0) = h(x), \qquad (5.52)$$
$$Bu|_{\partial D} = 0,$$
where $A = (\kappa\Delta-\alpha)$ and
$$P(u,x,t) = \int_0^t\!\!\int_B \phi(u,x,\xi,s)\,\tilde N(ds,d\xi). \qquad (5.53)$$
For the existence of a mild solution, we assume that there exist positive constants
$c_1, c_2$ such that
$$|f(r,x,t)|^2 + \int_0^t\!\!\int_B|\phi(r,x,\xi,s)|^2\,ds\,\mu(d\xi) \le c_1(1+r^2), \qquad (5.54)$$
$$|f(r,x,t)-f(r',x,t)|^2 + \int_0^t\!\!\int_B|\phi(r,x,\xi,s)-\phi(r',x,\xi,s)|^2\,ds\,\mu(d\xi) \le c_2|r-r'|^2, \qquad (5.55)$$
for each $r,r'\in R$, $x\in D$, $\xi\in B$ and $t\in[0,T]$.
In analogy with Theorem 4.1, the following existence theorem holds.

Theorem 4.3 Let the conditions (5.54) and (5.55) hold. Then, for $g,h\in H$,
the problem has a unique mild solution $u_t$, which is an adapted $H$-valued
process over $[0,T]$ such that
$$\sup_{0\le t\le T}E\,\|u_t\|^2 \le K < \infty,$$
where $K>0$ is a constant depending on $c_1$, $c_2$, and $T$.

Proof. The proof is again based on the contraction mapping principle. Here,
let $X_T$ denote the Banach space of adapted processes $u_t$ in $H$ over $[0,T]$ with
the norm
$$\|u\|_T = \Big\{\sup_{0\le t\le T}E\int_0^t\|u_s\|^2\,ds\Big\}^{1/2}.$$
For $u\in X_T$, define the operator $\Gamma$ by
$$\Gamma u_t = G_t'\,g + G_t h + \int_0^t G_{t-s}f_s(u)\,ds + \int_0^t G_{t-s}\,P(u,ds).$$
With the aid of Theorem 3.7 and condition (5.54), it is easy to show that the
map $\Gamma: X_T\to X_T$ is bounded and well defined.
To show the map is a contraction for a small $T$, for $u,u'\in X_T$, we make use
of Lemmas 3.2, 3.6, and the Lipschitz condition (5.55) to obtain the following
estimates:
$$E\,\|\Gamma u_t-\Gamma u_t'\|^2 \le 2\,\Big\{E\,\Big\|\int_0^t G_{t-s}[\,f_s(u)-f_s(u')\,]\,ds\Big\|^2
+ E\,\Big\|\int_0^t\!\!\int_B G_{t-s}[\,\phi_s(u,\xi)-\phi_s(u',\xi)\,]\,\tilde N(ds,d\xi)\Big\|^2\Big\}
\le C\,E\int_0^t\|u_s-u_s'\|^2\,ds,$$
for some constant $C>0$. This implies that
$$\|\Gamma u-\Gamma u'\|_T^2 \le C\,T\,\|u-u'\|_T^2.$$
Hence, $\Gamma: X_T\to X_T$ is a contraction for small $T$ and the fixed point $u_t$ is
the mild solution. It is easy to show that the norm $\|u\|_T$ is finite for any $T>0$.
This can be done by applying Lemmas 3.1, 3.2, 3.6, and condition (5.54)
to the integral equation for the mild solution and then invoking Gronwall's
inequality.

Remarks: It is possible to obtain the $H^1$-regularity result and the mean
energy inequality for the mild solution. The results analogous to Theorem
4.2 for the Poisson noise can also be worked out. But, for brevity, they are
omitted here.

5.5 Wave Equations in an Unbounded Domain

As in the case of parabolic equations, we will show that the results obtained
in the last section for the Wiener noise hold true when $D = R^d$ by means of
a Fourier transform. First consider the Cauchy problem for the linear wave
equation as in (5.3):
$$\frac{\partial^2 u}{\partial t^2} = (\Delta-1)u + f(x,t) + \dot M(x,t),$$
$$u(x,0) = g(x), \quad \frac{\partial u}{\partial t}(x,0) = h(x), \qquad (5.56)$$
for $t\in(0,T)$, $x\in R^d$, where we set $\kappa = \alpha = 1$ for convenience, and $\dot M(x,t) =
\sigma(x,t)\,\dot W(x,t)$.
Assume the random fields $f$ and $\sigma$ are regular as usual. Following the same
notation as in Chapter 4, apply a Fourier transform to the problem (5.56) to

get the second-order It


o equation:

d2
c
u(, t) + f(, t) + M
u(, t) = () (, t),
dt2 (5.57)
du
u
(, 0) = g(), (, 0) = h(),
dt
c(, t) is the Fourier
for Rd , t (0, T ), where () = (1 + ||2 ), and M
transform of M (x, t). This initial-value problem can be solved to give

u
(, t) = g()K K(,
(, t) + h() t)
Z t Z t
t s)f(, s)ds + c(, ds),
t s)M (5.58)
+ K(, K(,
0 0

where the prime denotes the time derivative and

t) = sin ()t ,
K(, (5.59)
()
p
with () = () = (1 + ||2 )1/2 . By an inverse Fourier transform, the
equation (5.58) yields
Z Z
u(x, t) = K (x y, t)g(y)dy + K(x y, t)h(y)dy
Z tZ
+ K(x y, t s)f (y, s)dyds (5.60)
Z0 t Z
+ K(x y, t s)M (y, ds)dy,
0

or Z Z
t t
u(, t) = Gt g + Gt h + Gts fs ds + Gts dMs , (5.61)
0 0

where Gt is the Greens operator with kernel G(x, t) = (1/2)d/2 K(x, t) and
so that:
K is the inverse Fourier transform of K
Z
1 sin ()t ix
G(x, t) = ( )d e d, (5.62)
2 ()

which exists in the sense of distribution. Recall that in Rd , we defined the


norm k kk in the Sobolev space H k in terms of the Fourier transform:
Z Z
2 d = k ()|()|
kk2k = (1 + ||2 )k |()| 2 d.

Parallel to Lemmas 3.1 to 3.3, we can obtain the same estimates for the terms
on the right-hand side of the solution formula (5.61). They will be summarized
in the following single lemma. Due to the corresponding Fourier analysis,
the assertion of the lemma is not surprising. Since the techniques of proof are
similar, we will only check a few of the estimates as an illustration of the
general idea involved.

Lemma 5.1 Let $f(\cdot,t)$ and $\sigma(\cdot,t)$ be predictable random fields such that
$$E\int_0^T\|f(\cdot,s)\|^2\,ds < \infty, \quad\text{and}\quad
E\int_0^T\|\sigma(\cdot,s)\|_R^2\,ds \le r_0\,E\int_0^T\|\sigma(\cdot,s)\|^2\,ds < \infty.$$
Let $\eta(\cdot,t) = \int_0^t G_{t-s}f(\cdot,s)\,ds$ and $\zeta(\cdot,t) = \int_0^t G_{t-s}\,M(\cdot,ds)$. Then all the
inequalities in Lemmas 3.1, 3.2 and 3.3 hold true with $\lambda_1 = 1$ in Lemma 3.3.

Proof. For Lemma 3.1, we will only verify that the inequality (5.16) holds
for $k = m = 1$, so that
$$\sup_{0\le t\le T}\|G_t'\,g\|_1^2 \le \|g\|_1^2.$$
In view of (5.59), we have $\hat K'(\xi,t) = \cos\sigma(\xi)t$. Hence
$$\|G_t'\,g\|_1^2 = \|(G_t'\,g)^{\wedge}\|_1^2 = \int \lambda(\xi)[\,\cos^2\sigma(\xi)t\,]\,|\hat g(\xi)|^2\,d\xi
\le \int \lambda(\xi)\,|\hat g(\xi)|^2\,d\xi = \|g\|_1^2,$$
which verifies the inequality (5.16) for $k = m = 1$.
Next, let us verify the inequality (5.18) in Lemma 3.2:
$$E\sup_{0\le t\le T}\|\eta'(\cdot,t)\|^2 \le T\,E\int_0^T\|f_s\|^2\,ds.$$
Consider
$$\|\eta'(\cdot,t)\|^2 = \Big\|\int_0^t G_{t-s}'\,f_s\,ds\Big\|^2 \le T\int_0^t\|G_{t-s}'\,f_s\|^2\,ds
= T\int_0^t\|\hat K_{t-s}'\,\hat f_s\|^2\,ds$$
$$= T\int_0^t\!\!\int \cos^2\sigma(\xi)(t-s)\,|\hat f(\xi,s)|^2\,d\xi\,ds \le T\int_0^T\|f_s\|^2\,ds,$$
which implies the desired inequality.
Finally, consider the inequality (5.21) with $k = 0$, $m = 1$, in Lemma 3.3:
$$E\sup_{0\le t\le T}\|\zeta(\cdot,t)\|_1^2 \le 8\,E\int_0^T\|\sigma(\cdot,t)\|_R^2\,dt.$$
We recall that the Fourier transform of the stochastic integral $\zeta(\cdot,t)$ can be
written as
$$\hat\zeta(\xi,t) = \int_0^t \hat K(\xi,t-s)\,\hat M(\xi,ds) = \int_0^t \frac{\sin[\sigma(\xi)(t-s)]}{\sigma(\xi)}\,\hat M(\xi,ds).$$
Therefore, we have
$$\|\zeta(\cdot,t)\|_1^2 = \int \lambda(\xi)\,|\hat\zeta(\xi,t)|^2\,d\xi
= \int\Big|\int_0^t \sin[\sigma(\xi)(t-s)]\,\hat M(\xi,ds)\Big|^2\,d\xi$$
$$\le 2\int\Big\{\Big|\int_0^t\cos[\sigma(\xi)s]\,\hat M(\xi,ds)\Big|^2
+ \Big|\int_0^t\sin[\sigma(\xi)s]\,\hat M(\xi,ds)\Big|^2\Big\}\,d\xi.$$
It follows that
$$E\sup_{0\le t\le T}\|\zeta(\cdot,t)\|_1^2 \le 2\,E\int\sup_{0\le t\le T}\Big\{\Big|\int_0^t\cos[\sigma(\xi)s]\,\hat M(\xi,ds)\Big|^2
+ \Big|\int_0^t\sin[\sigma(\xi)s]\,\hat M(\xi,ds)\Big|^2\Big\}\,d\xi$$
$$\le 8\,E\int_0^T\!\!\int \hat q(\xi,\xi,s)\,d\xi\,ds = 8\,E\int_0^T \mathrm{Tr}\,Q_s\,ds
= 8\,E\int_0^T\|\sigma(\cdot,t)\|_R^2\,dt,$$
which verifies the inequality mentioned above.
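The solution formula (5.58) is directly computable when the Fourier transform is replaced by an FFT on a large periodic box. The following sketch solves the free Cauchy problem ($f = 0$, no noise) for $u_{tt} = (\Delta - 1)u$ in one dimension and checks that the energy $\|v\|^2 + \|u\|_1^2$ is conserved; the box size, grid, and initial data are illustrative approximations of the whole-space problem, not from the text.

```python
import numpy as np

# Free Cauchy problem u_tt = (Delta - 1) u via (5.58)-(5.59), with a periodic
# FFT standing in for the Fourier transform on R (an approximation).
L, n, t = 40.0, 1024, 2.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
sig = np.sqrt(1.0 + xi ** 2)                 # sigma(xi) = (1 + |xi|^2)^(1/2)

g = np.exp(-x ** 2)                          # initial displacement
h = np.zeros_like(x)                         # initial velocity
gh, hh = np.fft.fft(g), np.fft.fft(h)

uh = gh * np.cos(sig * t) + hh * np.sin(sig * t) / sig    # K-hat' g + K-hat h
vh = -gh * sig * np.sin(sig * t) + hh * np.cos(sig * t)   # time derivative

# discrete analogue of ||v||^2 + ||u||_1^2, conserved by the free flow
energy = lambda U, V: float(np.sum(np.abs(V) ** 2 + sig ** 2 * np.abs(U) ** 2))
assert abs(energy(uh, vh) - energy(gh, hh)) < 1e-6 * energy(gh, hh)
u_t = np.real(np.fft.ifft(uh))               # solution profile at time t
print("energy conserved:", energy(gh, hh), "->", energy(uh, vh))
```

Adding forcing or noise amounts to accumulating the two convolution integrals in (5.58) mode by mode with the same multipliers $\cos\sigma(\xi)t$ and $\sin\sigma(\xi)t/\sigma(\xi)$.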

With the aid of the estimates given above in Lemma 5.1, it is clear that,
by following the proofs of Theorems 4.1 and 4.2 with $D$ replaced by $R^d$, the
following two theorems for the Cauchy problem (5.56) can be proved verbatim.
Therefore, we shall only state the existence theorems without proof.

Theorem 5.2 Consider the Cauchy problem:
$$\frac{\partial^2 u}{\partial t^2} = (\kappa\Delta-\alpha)u + f(u,x,t) + \dot M(u,x,t), \quad t\in(0,T),$$
$$u(x,0) = g(x), \quad \frac{\partial u}{\partial t}(x,0) = h(x), \quad x\in R^d. \qquad (5.63)$$
Suppose that conditions A for Theorem 4.1 hold true with $D = R^d$. Then,
given $g,h\in H$, the Cauchy problem (5.63) has a unique mild solution $u\in
L^2(\Omega;C([0,T];H))$ such that
$$E\sup_{0\le t\le T}\|u(\cdot,t)\|^2 < \infty.$$

Theorem 5.3 Consider the Cauchy problem:
$$\frac{\partial^2 u}{\partial t^2} = (\kappa\Delta-\alpha)u + f(Ju,v,x,t) + \dot M(Ju,v,x,t),$$
$$u(x,0) = g(x), \quad \frac{\partial u}{\partial t}(x,0) = h(x), \quad x\in R^d,\ t\in(0,T). \qquad (5.64)$$
Suppose the conditions B for Theorem 4.2 are satisfied with $D = R^d$.
For $g\in H^1$ and $h\in H$, the Cauchy problem (5.64) has a unique solution
$u\in L^2(\Omega;C([0,T];H^1))$ with a continuous trajectory in $H^1$, and with
$v = \partial_t u\in L^2(\Omega;C([0,T];H))$. Moreover, there is a constant $C>0$, depending
on $T$, $c_1$ and the initial data, such that
$$E\sup_{0\le t\le T}\{\,\|u(\cdot,t)\|_1^2 + \|v(\cdot,t)\|^2\,\} \le C.$$

Remarks: A well-known example is given by the Sine-Gordon equation perturbed
by a state-dependent noise:
$$\frac{\partial^2 u}{\partial t^2} = (\kappa\Delta-\alpha)u + \beta\sin u + u\,\dot M(x,t)$$
for $x\in R^d$, $t\in[0,T]$, where $\beta$ is a constant. Then, with the initial conditions
given in (5.64), Theorem 5.2 is applicable to show the existence of a unique
solution as depicted therein. So far we have proved the existence results based
on the usual assumption that the nonlinear terms satisfy a global Lipschitz
condition. Under weaker assumptions, such as a local Lipschitz condition, a
solution may explode in a finite time. Also, the analytical techniques used here
can be adapted to treat other wave-like equations, such as the elastic plate
and beam equations, as well as the Schrödinger equation with noise terms.
These matters will be discussed in Chapter 8.

5.6 Randomly Perturbed Hyperbolic Systems

In mathematical physics, the wave motion in a continuous medium is often
modeled by a first-order hyperbolic system [19]. In particular, we first consider
the following random linear system:
$$\frac{\partial u_i}{\partial t} = \sum_{j=1}^n\sum_{k=1}^d a_{ij}^k\frac{\partial u_j}{\partial x_k} - \nu u_i + f_i(x,t) + \dot M_i(x,t),$$
$$u_i(x,0) = h_i(x), \quad i = 1,2,\ldots,n, \qquad (5.65)$$
for $t\in(0,T)$, $x\in R^d$, where $\nu>0$ and the coefficients $a_{ij}^k$ are assumed to be
constant,
$$\dot M_i(x,t) = \sum_{j=1}^m \sigma_{ij}(x,t)\,\dot W_j(x,t), \qquad (5.66)$$
$f_i(x,t)$ and $\sigma_{ij}(x,t)$ are predictable random fields; $W_1(x,t),\ldots,W_m(x,t)$ are
independent, identically distributed Wiener random fields in $H$ with a common
covariance function $r(x,y)$ bounded by $r_0$; and $h_1,\ldots,h_n$ are the initial data. Let

$q_{ij}$ denote the local covariation function of $M_i$ and $M_j$, so that
$$q_{ij}(x,y,t) = r(x,y)\sum_{k=1}^m \sigma_{ik}(x,t)\,\sigma_{jk}(y,t), \qquad (5.67)$$
with the associated local covariation operator $Q_{ij}(t)$. Let $Q_t$ denote the matrix
operator $[Q_{ij}(t)]$. Then
$$\mathrm{Tr}\,Q_t = \sum_{i=1}^n \mathrm{Tr}\,Q_{ii}(t) = \sum_{i=1}^n\int q_{ii}(x,x,t)\,dx. \qquad (5.68)$$
Let the vectors $u = (u_1,\ldots,u_n),\ldots,h = (h_1,\ldots,h_n)$ be regarded as column
matrices. Define the $(n\times n)$ coefficient matrices $A^k = [a_{ij}^k]$ and the $n\times m$
diffusion matrix $\sigma = [\sigma_{ij}]$. Then the above system can be written in the matrix
form:
$$\frac{\partial u}{\partial t} = \sum_{k=1}^d A^k\frac{\partial u}{\partial x_k} - \nu u + f(x,t) + \dot M(x,t), \quad t\in(0,T),$$
$$u(x,0) = h(x), \quad x\in R^d, \qquad (5.69)$$
where $\dot M(x,t) = \sigma(x,t)\,\dot W(x,t)$. By applying a Fourier transform to equation
(5.69), we obtain, for $t\in(0,T)$,
$$\frac{d\hat u(\xi,t)}{dt} = [\,iA(\xi)-\nu I\,]\,\hat u(\xi,t) + \hat f(\xi,t) + \dot{\hat M}(\xi,t),$$
$$\hat u(\xi,0) = \hat h(\xi), \quad \xi\in R^d, \qquad (5.70)$$
where we set $A(\xi) = \big(\sum_{k=1}^d \xi_k A^k\big)$, $I$ is the identity matrix of order $n$, and
$\hat M(\xi,t)$ is the Fourier transform of $M(x,t)$. The local covariation operator $\hat Q_t$
for $\hat M(\xi,t)$ has the matrix kernel $\hat q(\xi,\xi';t) = [\hat q_{ij}(\xi,\xi';t)]$, in which the entries
are given by
$$\hat q_{ij}(\xi,\xi';t) = \Big(\frac{1}{2\pi}\Big)^d\int\!\!\int e^{-i(\xi\cdot x-\xi'\cdot y)}\,q_{ij}(x,y;t)\,dx\,dy.$$
Then it can be shown that $\mathrm{Tr}\,\hat Q_t = \mathrm{Tr}\,Q_t$ as in the scalar case.


The system (5.65) is said to be strongly hyperbolic if the eigenvalues
$\lambda_1(\xi),\ldots,\lambda_n(\xi)$ of the matrix $A(\xi)$ are real, distinct and non-zero for $|\xi| =
1$ [72]. Let $\{\chi_1(\xi),\ldots,\chi_n(\xi)\}$ denote the corresponding set of orthonormal
eigenvectors. The solution of the linear Itô equation (5.70) is given by
$$\hat u(\xi,t) = \hat K(\xi,t)\,\hat h(\xi) + \int_0^t \hat K(\xi,t-s)\,\hat f(\xi,s)\,ds
+ \int_0^t \hat K(\xi,t-s)\,\hat M(\xi,ds), \qquad (5.71)$$
where $\hat K$ is the fundamental matrix
$$\hat K(\xi,t) = e^{tB(\xi)}, \quad\text{with } B(\xi) = \{iA(\xi)-\nu I\}. \qquad (5.72)$$
By an inverse Fourier transform, the above result (5.71) leads to the solution
of (5.65):
$$u(x,t) = (G_t h)(x) + \int_0^t (G_{t-s}f)(x,s)\,ds + \int_0^t (G_{t-s}M)(x,ds). \qquad (5.73)$$
Here, $G_t$ denotes the Green's operator defined by
$$(G_t g)(x) = (1/2\pi)^{d/2}\int K(x-y,t)\,g(y)\,dy,$$
where
$$K(x,t) = \Big(\frac{1}{2\pi}\Big)^{d/2}\int e^{i\xi\cdot x}\,e^{tB(\xi)}\,d\xi.$$
The above integral exists in the sense of distributions; that is, for any test
function $\phi\in C_0^\infty(R^d;R^n)$,
$$\int K(x,t)\,\phi(x)\,dx = \Big(\frac{1}{2\pi}\Big)^{d/2}\int\!\!\int e^{i\xi\cdot x}\,e^{tB(\xi)}\,\phi(x)\,d\xi\,dx.$$
To study the regularity of the solution, let $\mathbf H = (H)^n$ denote the Cartesian
product of $n$ copies of the $L^2$-space $H$. We shall use the same notation for the
inner product $(\cdot,\cdot)$ and the norm $\|\cdot\|$ in $\mathbf H$ as in $H$, provided that there is
no confusion. Therefore, for $g,h\in\mathbf H$, we write $(g,h) = \sum_{j=1}^n(g_j,h_j)$ and
$\|g\|^2 = \sum_{j=1}^n\|g_j\|^2$. As in the case of the Fourier transform of real-valued
functions, the Plancherel theorem and the Parseval identity also hold for
vector-valued functions. For linear partial differential equations with constant
coefficients, the Fourier transform is a useful tool in analyzing the solutions,
as shown in the previous chapter.

Lemma 6.1 For $h\in\mathbf H$, let $f_i(\cdot,t)$ and $\sigma_{ij}(\cdot,t)$ be predictable $H$-valued
processes, and let $W_1,\ldots,W_m$ be independent, identically distributed Wiener
random fields with a bounded common covariance function $r(x,y)$ as assumed
before. Suppose that
$$E\Big\{\int_0^T\|f_i(\cdot,t)\|^2\,dt + \int_0^T\|\sigma_{ij}(\cdot,t)\|^2\,dt\Big\} < \infty, \qquad (5.74)$$
for $i = 1,\ldots,n$ and $j = 1,\ldots,m$. Then the following inequalities hold:
$$\|G_t h\|^2 \le \|h\|^2, \qquad (5.75)$$
$$\sup_{0\le t\le T}E\,\Big\|\int_0^t G_{t-s}f(\cdot,s)\,ds\Big\|^2 \le T\,E\int_0^T\|f(\cdot,s)\|^2\,ds, \qquad (5.76)$$

and
$$\sup_{0\le t\le T}E\,\Big\|\int_0^t G_{t-s}\,M(\cdot,ds)\Big\|^2 \le E\int_0^T \mathrm{Tr}\,Q_s\,ds, \qquad (5.77)$$
where $\mathrm{Tr}\,Q_s = \sum_{j=1}^n\int q_{jj}(x,x;s)\,dx$.

Proof. In view of (5.72), we see that the matrix norm $|\hat K(\xi,t)| \le e^{-\nu t}$. The
first inequality (5.75) follows immediately from the $L^2$-isometry as mentioned
above:
$$\|G_t h\|^2 = \|\hat K(\xi,t)\,\hat h\|^2 \le e^{-2\nu t}\|\hat h\|^2 \le \|h\|^2.$$
Next, by making use of the above result, we obtain
$$\sup_{0\le t\le T}E\,\Big\|\int_0^t G_{t-s}f_s\,ds\Big\|^2 \le T\,E\int_0^T\|G_{t-s}f(\cdot,s)\|^2\,ds,$$
which gives the inequality (5.76).
For the last inequality, let $\zeta(\cdot,t) = \int_0^t G_{t-s}\,M(\cdot,ds)$ and let $\hat\zeta(\xi,t)$ denote
its Fourier transform, which can be rewritten as the Itô equation
$$d\hat\zeta(\xi,t) = B(\xi)\,\hat\zeta(\xi,t)\,dt + \hat M(\xi,dt), \quad t\in(0,T),$$
$$\hat\zeta(\xi,0) = 0, \quad \xi\in R^d, \qquad (5.78)$$
where $B(\xi) = [\,iA(\xi)-\nu I\,]$. By the Itô formula,
$$d\|\hat\zeta_t\|^2 = (d\hat\zeta_t,\hat\zeta_t) + (\hat\zeta_t,d\hat\zeta_t) + \mathrm{Tr}\,\hat Q_t\,dt
= [\,(\hat\zeta_t,B\hat\zeta_t) + (B\hat\zeta_t,\hat\zeta_t)\,]\,dt + 2\,\mathrm{Re}\,(\hat\zeta_t,\hat M(\cdot,dt)) + \mathrm{Tr}\,\hat Q_t\,dt$$
$$= -2\nu\,\|\hat\zeta_t\|^2\,dt + 2\,\mathrm{Re}\,(\hat\zeta_t,\hat M(\cdot,dt)) + \mathrm{Tr}\,\hat Q_t\,dt,$$
where we made use of the fact that $A(\xi)$ is a Hermitian matrix. By taking
an expectation and integrating, the above equation yields
$$E\,\|\zeta_t\|^2 = E\,\|\hat\zeta_t\|^2 = E\int_0^t e^{-2\nu(t-s)}\,\mathrm{Tr}\,\hat Q_s\,ds,$$
which implies the inequality (5.77).
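The decay bound $|\hat K(\xi,t)| \le e^{-\nu t}$ used above is easy to verify for a concrete system. The sketch below takes a hypothetical strongly hyperbolic $2\times 2$ example with $A(\xi) = \xi\,[[0,1],[1,0]]$ and $\nu = 0.5$, for which $e^{tB(\xi)}$ has a closed form; the example system is an assumption for illustration, not one from the text.

```python
import numpy as np

# Propagator K-hat(xi, t) = exp(t B(xi)), B(xi) = i A(xi) - nu I  (5.72),
# for A(xi) = xi * [[0,1],[1,0]].  Since A(xi)^2 = xi^2 I,
# exp(t B) = e^{-nu t} (cos(xi t) I + i sin(xi t) [[0,1],[1,0]]).
nu = 0.5
I2 = np.eye(2)
A0 = np.array([[0.0, 1.0], [1.0, 0.0]])

def K_hat(xi, t):
    return np.exp(-nu * t) * (np.cos(xi * t) * I2 + 1j * np.sin(xi * t) * A0)

rng = np.random.default_rng(4)
for _ in range(100):
    xi, t = rng.uniform(-5, 5), rng.uniform(0, 3)
    h = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    # the unitary rotation times e^{-nu t} never amplifies: |K-hat| <= e^{-nu t}
    assert np.linalg.norm(K_hat(xi, t) @ h) <= \
           np.exp(-nu * t) * np.linalg.norm(h) + 1e-12
print("matrix-norm bound |K-hat(xi,t)| <= exp(-nu t) verified on samples")
```

Since $A(\xi)$ is Hermitian here, $\cos(\xi t)I + i\sin(\xi t)A_0$ is unitary, so the damping factor $e^{-\nu t}$ is exactly the operator norm; this is the inequality that produces (5.75) and (5.77).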

Theorem 6.2 Suppose that the conditions for Lemma 6.1 are fulfilled.
Then, given $h\in\mathbf H$, the mild solution $u_t$ of the Cauchy problem given by
(5.73) belongs to $L^2(\Omega;\mathbf H)$, and there is $C_T>0$ such that the energy inequality
holds:
$$\sup_{0\le t\le T}E\,\|u(\cdot,t)\|^2 \le C_T\,E\,\Big\{\|h\|^2 + \int_0^T\|f(\cdot,s)\|^2\,ds
+ \int_0^T\|\sigma(\cdot,s)\|^2\,ds\Big\}, \qquad (5.79)$$
where $\|\sigma(\cdot,t)\|^2 = \sum_{j=1}^n\sum_{k=1}^m\|\sigma_{jk}(\cdot,t)\|^2$.

Proof. By the $L^2$-isometry mentioned above, we will first show that the Fourier
transform $\hat u_t$ of the solution $u$ belongs to the space $L^2(\Omega;\mathbf H)$. By applying the
inequalities in Lemma 6.1, we can obtain
$$\sup_{0\le t\le T}E\,\|u(\cdot,t)\|^2 \le 3\,E\sup_{0\le t\le T}\Big\{\|G_t h\|^2
+ \Big\|\int_0^t (G_{t-s}f)(\cdot,s)\,ds\Big\|^2 + \Big\|\int_0^t (G_{t-s}M)(\cdot,ds)\Big\|^2\Big\}$$
$$\le 3\,\Big\{\|h\|^2 + T\,E\int_0^T\|f_s\|^2\,ds + E\int_0^T \mathrm{Tr}\,Q_s\,ds\Big\}.$$
Recall that $\mathrm{Tr}\,Q_s = \sum_{j=1}^n\int q_{jj}(x,x,s)\,dx$, where, by (5.67),
$$q_{jj}(x,x,s) = r(x,x)\sum_{k=1}^m \sigma_{jk}^2(x,s) \le r_0\sum_{k=1}^m \sigma_{jk}^2(x,s).$$
It follows that $\sup_{0\le t\le T}E\,\|u(\cdot,t)\|^2 < \infty$, and the energy inequality (5.79) holds
true.

Now let us consider a semilinear hyperbolic system for which the random fields $f_j$ and $\sigma_{jk}$ in (5.65) depend on the unknowns $u^1, \dots, u^n$. The associated Cauchy problem reads
\[
\begin{aligned}
\frac{\partial u_i}{\partial t} &= \sum_{j=1}^n \sum_{k=1}^d a_{ij}^k\,\frac{\partial u_j}{\partial x_k} - \lambda u_i + f_i(u^1, \dots, u^n, x, t) + \dot M_i(u^1, \dots, u^n, x, t), \quad t \in (0,T), \\
u_i(x,0) &= g_i(x), \quad i = 1, 2, \dots, n; \quad x \in \mathbf{R}^d,
\end{aligned} \qquad (5.80)
\]
in which
\[
\dot M_i(u^1, \dots, u^n, x, t) = \sum_{j=1}^m \sigma_{ij}(u^1, \dots, u^n, x, t)\,\dot W_j(x,t). \qquad (5.81)
\]
Here, $f_i(u^1, \dots, u^n, x, t)$ and $\sigma_{ij}(u^1, \dots, u^n, x, t)$ are predictable random fields, and $W_1, \dots, W_m$ are the same Wiener random fields as given before. By using the matrix notation, the system (5.80) can be written as
\[
\begin{aligned}
\frac{\partial u}{\partial t} &= \sum_{k=1}^d A^k\,\frac{\partial u}{\partial x_k} - \lambda u + f(u,x,t) + \sigma(u,x,t)\,\dot W(x,t), \\
u(x,0) &= g(x),
\end{aligned} \qquad (5.82)
\]
for $t \in (0,T)$, $x \in \mathbf{R}^d$. To prove the existence theorem for the above system,

we suppose that, for $i = 1, \dots, n$; $j = 1, \dots, m$, $f_i(\zeta, x, t)$ and $\sigma_{ij}(\zeta, x, t)$ are predictable random fields with parameters $\zeta \in \mathbf{R}^n$, $x \in \mathbf{R}^d$, such that the following conditions P hold:

(P.1) For $i = 1, \dots, n$, let $f_i(\zeta, \cdot, t)$ be a predictable $H$-valued process and set $\bar f_i(\cdot,t) = f_i(0, \cdot, t)$. Suppose there exists a positive constant $b_1$ such that
\[
E \int_0^T \|\bar f(\cdot,t)\|^2\,dt = E \sum_{i=1}^n \int_0^T \|\bar f_i(\cdot,t)\|^2\,dt \le b_1.
\]

(P.2) For $i = 1, \dots, n$; $j = 1, \dots, m$, $\sigma_{ij}(\zeta, x, t)$ is a predictable $H$-valued process and there exists a positive constant $b_2$ such that
\[
|\sigma(\zeta,x,t)|^2 = \sum_{i=1}^n \sum_{j=1}^m |\sigma_{ij}(\zeta,x,t)|^2 \le b_2\,(1 + |\zeta|^2) \quad a.s.,
\]
for any $\zeta \in \mathbf{R}^n$, $x \in \mathbf{R}^d$, and $t \in [0,T]$.

(P.3) There exist positive constants $b_3$ and $b_4$ such that
\[
|f(\zeta,x,t) - f(\zeta',x,t)|^2 = \sum_{i=1}^n |f_i(\zeta,x,t) - f_i(\zeta',x,t)|^2 \le b_3\,|\zeta - \zeta'|^2,
\]
and
\[
|\sigma(\zeta,x,t) - \sigma(\zeta',x,t)|^2 = \sum_{i=1}^n \sum_{j=1}^m |\sigma_{ij}(\zeta,x,t) - \sigma_{ij}(\zeta',x,t)|^2 \le b_4\,|\zeta - \zeta'|^2 \quad a.s.,
\]
for any $\zeta, \zeta' \in \mathbf{R}^n$, $x \in \mathbf{R}^d$, and $t \in [0,T]$.

(P.4) The independent, identically distributed Wiener random fields $W_1(x,t), \dots, W_m(x,t)$ are given as before, with common covariance function $r(x,y)$ bounded by $r_0$, and $\mathrm{Tr}\,R = \int r(x,x)\,dx = r_1$ is finite.

With the aid of Lemma 6.1 and Theorem 6.2, we shall prove the following
existence theorem for the semilinear hyperbolic system (5.80) or (5.82) under
conditions P.

Theorem 6.3 Suppose that conditions (P.1)–(P.4) are satisfied. Then, for $h \in H$, the Cauchy problem for the hyperbolic system (5.80) has a unique (mild) solution $u_t \in L^2(\Omega; H)$ such that
\[
\sup_{0\le t\le T} E\,\|u(\cdot,t)\|^2 \le C_T,
\]

for some constant CT > 0.

Proof. We rewrite the system (5.82) in the integral form
\[
u_t = G_t h + \int_0^t G_{t-s}\,f_s(u)\,ds + \int_0^t G_{t-s}\,\sigma_s(u)\,dW_s. \qquad (5.83)
\]
To show the existence of a solution to this equation, introduce the Banach space $\mathcal{X}_T$ of continuous adapted $H$-valued processes $u(\cdot,t)$ on $[0,T]$ with norm $|||u|||_T$ defined by
\[
|||u|||_T = \{ \sup_{0\le t\le T} E\,\|u_t\|^2 \}^{1/2}.
\]
Let $\Gamma$ be the operator on $\mathcal{X}_T$ defined by
\[
\Gamma_t u = G_t h + \int_0^t G_{t-s}\,f_s(u)\,ds + \int_0^t G_{t-s}\,\sigma_s(u)\,dW_s. \qquad (5.84)
\]

We will first show that $\Gamma: \mathcal{X}_T \to \mathcal{X}_T$ is bounded. For $u \in \mathcal{X}_T$, clearly we have
\[
\sup_{0\le t\le T} E\,\|\Gamma_t u\|^2 \le 3 \sup_{0\le t\le T} E\,\Big\{ \|G_t h\|^2 + \Big\| \int_0^t G_{t-s}\,f_s(u)\,ds \Big\|^2 + \Big\| \int_0^t G_{t-s}\,\sigma_s(u)\,dW_s \Big\|^2 \Big\}, \qquad (5.85)
\]
provided that the upper bound is finite. This is the case because, by Lemma 6.1, we have $\|G_t h\|^2 \le \|h\|^2$,
\[
\sup_{0\le t\le T} E\,\Big\| \int_0^t G_{t-s}\,f_s(u)\,ds \Big\|^2 \le T\,E \int_0^T \|f_s(u)\|^2\,ds,
\]
and
\[
\sup_{0\le t\le T} E\,\Big\| \int_0^t G_{t-s}\,\sigma_s(u)\,dW_s \Big\|^2 \le K_T\,E \int_0^T \mathrm{Tr}\,Q_s(u)\,ds,
\]
for some constant $K_T > 0$. Furthermore, by conditions (P.1) and (P.3), we obtain
\[
\|f_s(u)\|^2 \le 2\,(\,\|\bar f_s\|^2 + \|f_s(u) - f_s(0)\|^2\,) \le 2\,\{\,\|\bar f_s\|^2 + b_3\,(1 + \|u_s\|^2)\,\},
\]
so that
\[
E \int_0^T \|f_s(u)\|^2\,ds \le 2\,E \int_0^T \{\,\|\bar f_s\|^2 + b_3\,(1 + \|u_s\|^2)\,\}\,ds \le 2\,\{\,b_1 + b_3 T\,(1 + |||u|||_T^2)\,\}. \qquad (5.86)
\]
By invoking conditions (P.2) and (P.4), we get
\[
E \int_0^T \mathrm{Tr}\,Q_s(u)\,ds = E \int_0^T \int r(x,x)\,|\sigma(u,x,s)|^2\,dx\,ds \le E \int_0^T \int r(x,x)\,b_2\,[\,1 + |u(x,s)|^2\,]\,dx\,ds \le b_2 T\,(r_1 + r_0\,|||u|||_T^2). \qquad (5.87)
\]

When (5.86) and (5.87) are used in (5.85), this gives
\[
|||\Gamma u|||_T^2 \le 3\,\{\,\|h\|^2 + 2\,[\,b_1 + b_3 T\,(1 + |||u|||_T^2)\,] + K_T\,b_2 T\,(r_1 + r_0\,|||u|||_T^2)\,\},
\]
which verifies the boundedness of the map $\Gamma$.


To show that $\Gamma$ is a contraction map, let $u, v \in \mathcal{X}_T$ and consider
\[
\begin{aligned}
\|\Gamma_t u - \Gamma_t v\|^2 &= \Big\| \int_0^t G_{t-s}\,\tilde f_s(u,v)\,ds + \int_0^t G_{t-s}\,\tilde \sigma_s(u,v)\,dW_s \Big\|^2 \\
&\le 2\,\Big\{ \Big\| \int_0^t G_{t-s}\,\tilde f_s(u,v)\,ds \Big\|^2 + \Big\| \int_0^t G_{t-s}\,\tilde \sigma_s(u,v)\,dW_s \Big\|^2 \Big\},
\end{aligned} \qquad (5.88)
\]
where we set $\tilde f_s(u,v) = [\,f_s(u) - f_s(v)\,]$ and $\tilde \sigma_s(u,v) = [\,\sigma_s(u) - \sigma_s(v)\,]$. By means of Lemma 6.1 and condition (P.3), we can deduce that
\[
\sup_{0\le t\le T} E\,\Big\| \int_0^t G_{t-s}\,\tilde f_s(u,v)\,ds \Big\|^2 \le T\,E \int_0^T \|f_s(u) - f_s(v)\|^2\,ds \le b_3 T\,E \int_0^T \|u_s - v_s\|^2\,ds \le b_3 T^2\,|||u - v|||_T^2,
\]
and
\[
\begin{aligned}
\sup_{0\le t\le T} E\,\Big\| \int_0^t G_{t-s}\,\tilde \sigma_s(u,v)\,dW_s \Big\|^2 &\le K_T\,E \int_0^T \mathrm{Tr}\,\tilde Q_s(u,v)\,ds \\
&= K_T\,E \int_0^T \int r(x,x)\,|\sigma(u,x,s) - \sigma(v,x,s)|^2\,dx\,ds \\
&\le K_T\,b_4 r_0\,E \int_0^T \|u_s - v_s\|^2\,ds \le K_T\,b_4 r_0 T\,|||u - v|||_T^2.
\end{aligned}
\]
By taking the above two upper bounds into account, the inequality (5.88) gives rise to
\[
|||\Gamma u - \Gamma v|||_T^2 \le 2T\,(b_3 T + K_T\,b_4 r_0)\,|||u - v|||_T^2,
\]
which shows that $\Gamma$ is a contraction map for small $T$. Again, by the principle of contraction mapping, the equation (5.83) has a unique solution $u_t \in L^2(\Omega; H)$ as claimed. Since $\Gamma u = u$, from the estimates in (5.85)–(5.87) we can show that there exist constants $C_3, C_4$ such that
\[
|||u|||_t^2 \le C_3 + C_4 \int_0^t |||u|||_s^2\,ds,
\]
which, by Gronwall's inequality, implies that $|||u|||_T^2 = \sup_{0\le t\le T} E\,\|u(\cdot,t)\|^2 \le C_T$ for some $C_T > 0$. $\square$
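The contraction-and-Gronwall argument above can be seen in miniature in a deterministic toy problem. The sketch below (a hypothetical illustration, not from the text) iterates the Picard map $u \mapsto h + \int_0^t f(u(s))\,ds$ for a Lipschitz drift on a time grid and records the sup-norm gap between successive iterates, which shrinks to zero just as $|||\Gamma u - \Gamma v|||_T$ contracts.

```python
import numpy as np

def picard_solve(f, h, T=0.5, n_steps=200, n_iter=30):
    """Iterate u^{k+1}(t) = h + int_0^t f(u^k(s)) ds on a grid
    (left-endpoint quadrature); return the grid, the last iterate,
    and the sup-norm gaps between successive iterates."""
    t = np.linspace(0.0, T, n_steps + 1)
    dt = t[1] - t[0]
    u = np.full_like(t, h)                      # u^0 == h (constant guess)
    gaps = []
    for _ in range(n_iter):
        integrand = f(u)
        # left-Riemann cumulative integral of f(u^k) from 0 to t_j
        u_new = h + np.concatenate(([0.0], np.cumsum(integrand[:-1]) * dt))
        gaps.append(np.max(np.abs(u_new - u)))  # sup-norm distance of iterates
        u = u_new
    return t, u, gaps

# Lipschitz drift f(u) = -2u; the exact solution is h * exp(-2t)
t, u, gaps = picard_solve(lambda u: -2.0 * u, h=1.0)
```

The successive gaps decay factorially, as in the classical Picard iteration, and the fixed point approximates the exact solution $e^{-2t}$ on the grid.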

As an example, consider the problem of jet noise propagation in one dimension. A simple model consists of an acoustic wave propagating over the mean flow field due to a randomly fluctuating source. Let $v_0$, $\rho$, and $c$ denote the unperturbed flow velocity, fluid density, and local sound speed, respectively, which are assumed to be constants. Then the acoustic velocity $v(x,t)$ and pressure $p(x,t)$ for this model satisfy the following system:
\[
\begin{aligned}
\frac{\partial v}{\partial t} + v_0\,\frac{\partial v}{\partial x} + \frac{1}{\rho}\,\frac{\partial p}{\partial x} &= f_1(v,p,x,t) + \dot M_1(x,t), \\
\frac{\partial p}{\partial t} + v_0\,\frac{\partial p}{\partial x} + \rho c^2\,\frac{\partial v}{\partial x} &= f_2(v,p,x,t) + \dot M_2(x,t), \\
v(x,0) = h_1(x), \quad p(x,0) &= h_2(x),
\end{aligned} \qquad (5.89)
\]
for $t \in (0,T)$, $x \in \mathbf{R}$. In the above equations, $f_i(v,p,x,t)$, $i = 1,2$, are predictable random fields which account for the fluctuation of the flow field; $\dot M_i(x,t)$, $i = 1,2$, are the random source terms; and $h_i(x)$, $i = 1,2$, are the initial data. In particular, let $f_i(v,p,x,t)$ be linear in $v$ and $p$, so that
\[
f_i(v,p,x,t) = \bar f_i(x,t) + \sum_{j=1}^2 b_{ij}(x,t)\,u_j(x,t), \qquad (5.90)
\]
and, as before,
\[
\dot M_i(x,t) = \sum_{j=1}^2 \sigma_{ij}(x,t)\,\dot W_j(x,t), \quad i = 1,2, \qquad (5.91)
\]
where we set $u_1 = v$, $u_2 = p$, and $\bar f_i(x,t)$, $b_{ij}(x,t)$, $\sigma_{ij}(x,t)$ are some given predictable random fields. With the notation $u = (v; p)$, the system (5.89) can be rewritten in the matrix form (5.82), with $d = 1$, $n = 2$, $\lambda = 0$, as follows:
\[
\begin{aligned}
\frac{\partial u}{\partial t} &= A\,\frac{\partial u}{\partial x} + f(u,x,t) + \sigma(x,t)\,\dot W(x,t), \quad t \in (0,T), \\
u(x,0) &= h(x), \quad x \in \mathbf{R},
\end{aligned} \qquad (5.92)
\]
where
\[
A = \begin{pmatrix} v_0 & 1/\rho \\ \rho c^2 & v_0 \end{pmatrix}
\]
has two distinct real eigenvalues $\lambda_\pm = v_0 \pm c$. Therefore, the system (5.89) is strongly hyperbolic. It follows from Theorem 6.3 that the system has a unique solution under some suitable conditions.
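The hyperbolicity claim is easy to check numerically. A minimal sketch (the values of $v_0$, $\rho$, $c$ below are hypothetical, chosen only for illustration) confirms that the coefficient matrix has the two distinct real eigenvalues $v_0 \pm c$:

```python
import numpy as np

v0, rho, c = 3.0, 1.2, 340.0   # hypothetical flow speed, density, sound speed

# Coefficient matrix of the acoustic system; its eigenvalues are the
# characteristic speeds v0 +/- c, so the system is strongly hyperbolic.
A = np.array([[v0, 1.0 / rho],
              [rho * c**2, v0]])
eigvals = np.sort(np.linalg.eigvals(A).real)
```

Since the off-diagonal product is $(1/\rho)(\rho c^2) = c^2 > 0$, the eigenvalues are real and distinct for any positive $\rho$ and $c$.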

Theorem 6.4 Let $f_i(v,p,x,t)$ and $\dot M_i(x,t)$, $i = 1,2$, be given by (5.90) and (5.91), respectively. Suppose that $\bar f_i(\cdot,t)$ and $\sigma_{ij}(\cdot,t)$ are predictable processes in $H$ such that
\[
E\,\Big\{ \int_0^T \|\bar f_i(\cdot,s)\|^2\,ds + \int_0^T \|\sigma_{ij}(\cdot,s)\|^2\,ds \Big\} < \infty, \quad i, j = 1, 2,
\]
and $b_{ij}(x,t)$, $i, j = 1,2$, are bounded, continuous adapted random fields. If condition (P.4) holds, then for $h_i \in H$, $i = 1,2$, the Cauchy problem for the system (5.89) has a unique solution $(v_t; p_t) \in L^2(\Omega; H \times H)$ such that
\[
\sup_{0\le t\le T} E\,\{ \|v(\cdot,t)\|^2 + \|p(\cdot,t)\|^2 \} \le C_T,
\]
for some constant $C_T > 0$.

From the assumptions, it is easy to check that the conditions (P.1)–(P.4) of Theorem 6.3 are satisfied and, consequently, the conclusion of Theorem 6.4 holds.
6
Stochastic Evolution Equations in Hilbert Spaces

6.1 Introduction

So far we have studied two principal types of stochastic partial differential equations, namely, the parabolic and hyperbolic equations. To provide a unified theory with a wider range of applications, we shall consider stochastic evolution equations in a Hilbert space driven by Wiener and/or Lévy noises. We first introduce the notion of Hilbert space–valued martingales in Section 6.2. Then stochastic integrals in Hilbert spaces with respect to the Wiener process and the Poisson measure are defined in Section 6.3. In Section 6.4, an Itô formula is presented. After a brief introduction to stochastic evolution equations in Section 6.5, we consider, in Section 6.6, the mild solutions by the semigroup approach. This subject was treated extensively in the classic book by Da Prato and Zabczyk [21] and the article by Walsh [94], to which one is referred for further development. Next, in Section 6.7, the strong solutions of linear and nonlinear stochastic evolution equations are studied, mainly under the so-called coercivity and monotonicity conditions. These conditions are commonly assumed in the study of deterministic partial differential equations [61] and were introduced to the stochastic counterpart by Bensoussan and Temam [5], [6], and Pardoux [75]. Their functional-analytic approach will be adopted to study the existence, uniqueness, and regularity of strong solutions. We should also mention that a real-analytic approach to strong solutions was developed by Krylov [51]. Finally, the strong solutions of second-order stochastic evolution equations will be treated separately, in Section 6.8, due to the fact that they are neither coercive nor monotone. Some simple examples are given, and more interesting ones will be discussed in Chapter 8.

6.2 Hilbert Space–Valued Martingales

Let $H$, $K$ be real separable Hilbert spaces with inner products $(\cdot,\cdot)_H$, $(\cdot,\cdot)_K$ and norms $\|\cdot\|_H = (\cdot,\cdot)_H^{1/2}$, $\|\cdot\|_K = (\cdot,\cdot)_K^{1/2}$, respectively. The subscripts


will often be dropped when there is little chance of confusion. Suppose that $\Phi$ is a linear operator from $K$ into $H$. Introduce the following classes of linear operators:

(1) $L(K;H)$: the space of bounded linear operators $\Phi: K \to H$ with the uniform operator norm $\|\Phi\|_L$.

(2) $L_2(K;H)$: the space of Hilbert–Schmidt (H.S.) operators with norm defined by
\[
\|\Phi\|_{L_2} = \Big\{ \sum_{k=1}^\infty \|\Phi e_k\|^2 \Big\}^{1/2},
\]
where $\{e_k\}$ is a complete orthonormal basis for $K$.

(3) $L_1(K;H)$: the space of nuclear (trace-class) operators with norm given by
\[
\|\Phi\|_{L_1} = \mathrm{Tr}\,\tilde\Phi = \sum_{k=1}^\infty (\tilde\Phi\,e_k, e_k),
\]
where $\tilde\Phi = (\Phi\Phi^\star)^{1/2}$ and $\Phi^\star$ denotes the adjoint of $\Phi$.

Clearly, we have the inclusions $L_1(K;H) \subset L_2(K;H) \subset L(K;H)$. If $K = H$, we set $L_k(H;H) = L_k(H)$, for $k = 0, 1, 2$, with $L_0(H;H) = L(H)$. Notice that if $\Phi \in L_1(H)$ is self-adjoint and nonnegative, then $\|\Phi\|_{L_1} = \mathrm{Tr}\,\Phi$. The class of self-adjoint operators in $L_1(K)$ will be denoted by $\hat L_1(K)$. The following lemma will be found useful later on.

Lemma 2.1 Let $A: K \to H$ and $B: H \to H$ be bounded linear operators. Then the following inequalities hold:

(1) $\|A\|_L \le \|A\|_{L_2} \le \|A\|_{L_1}$;

(2) $\|BA\|_{L_1} \le \|A\|_{L_2}\,\|B\|_{L_2}$;

(3) $\|BA\|_{L_j} \le \|B\|_L\,\|A\|_{L_j}$, $j = 1, 2$.
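In finite dimensions these are the operator, Frobenius (Hilbert–Schmidt), and nuclear norms of a matrix, all computable from its singular values, so the inequalities of Lemma 2.1 can be verified numerically. A minimal sketch, with random matrices standing in for $A$ and $B$ (the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))   # plays A: K = R^4 -> H = R^6
B = rng.standard_normal((6, 6))   # plays B: H -> H

sA = np.linalg.svd(A, compute_uv=False)      # singular values of A
sB = np.linalg.svd(B, compute_uv=False)
sBA = np.linalg.svd(B @ A, compute_uv=False)

# ||A||_L = s_max, ||A||_{L2} = (sum s_k^2)^{1/2}, ||A||_{L1} = sum s_k
op_norm = sA.max()
hs_norm = np.sqrt((sA**2).sum())
nuc_norm = sA.sum()
```

The three inequalities of the lemma then read as elementary facts about singular values.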

Let $(\Omega, \mathcal{F}, P)$ be a complete probability space and let $\{\mathcal{F}_t, t \in [0,T]\}$ be a filtration of increasing sub-$\sigma$-fields of $\mathcal{F}$. In what follows we shall present some basic definitions and facts about martingales and some related stochastic processes in Hilbert spaces. The proofs of the theorems can be found in several books, such as [21], [69], [81], and [86].

Let $M_t$, $t \in [0,T]$, be an $\mathcal{F}_t$-adapted $K$-valued stochastic process defined on $\Omega$ such that $E\,\|M_t\| = E\,\|M_t\|_K < \infty$. It is said to be a $K$-valued martingale if
\[
E\{ M_t \,|\, \mathcal{F}_s \} = M_s, \quad a.s.,
\]
Stochastic Evolution Equations in Hilbert Spaces 163

for any $s, t \in [0,T]$ with $s \le t$. Clearly, the norm $\|M_t\|$ is a real-valued submartingale, so that
\[
E\{ \|M_t\| \,|\, \mathcal{F}_s \} \ge \|M_s\| \quad a.s.
\]
If there exists a sequence of stopping times $\{\tau_n\}$ with $\tau_n \uparrow \infty$ a.s. such that $M_{t \wedge \tau_n}$ is a $K$-valued martingale for each $n$, then $M_t$ is said to be a $K$-valued local martingale.

If $E\,\|M_t\|^p < \infty$ with $p \ge 1$, $M_t$ is said to be a $K$-valued $L^p$-martingale. For such a martingale, if there exists an integrable $\hat L_1(K)$-valued process $\hat Q_t$ over $[0,T]$ such that
\[
\langle (M,g), (M,h) \rangle_t = (\hat Q_t\,g, h), \quad a.s., \quad g, h \in K, \qquad (6.1)
\]

then we define $[M]_t = \hat Q_t$ as the covariation operator of $M_t$. If it has a density $Q_t$ such that
\[
[M]_t = \int_0^t Q_s\,ds,
\]
then $Q_t$ is said to be the local covariation operator, or the local characteristic operator, of $M_t$. Let $N_t$ be another continuous $K$-valued $L^2$-martingale. The covariation operator $\langle M, N \rangle_t = \hat Q_t^{MN}$ and the local covariation operator $Q_t^{MN}$ for $M$ and $N$ are defined similarly, as follows:
\[
\langle (M,g), (N,h) \rangle_t = (\hat Q_t^{MN}\,g, h) = \int_0^t (Q_s^{MN}\,g, h)\,ds, \quad a.s., \qquad (6.2)
\]
for all $g, h \in K$.

Lemma 2.2 Let $M_t$ be a continuous $L^2$-martingale in $K$ with $M_0 = 0$ and covariation operator $\hat Q_t \in \hat L_1(K)$. Then
\[
E\,\|M_t\|^2 = E\,\mathrm{Tr}\,[M]_t = E\,(\mathrm{Tr}\,\hat Q_t). \qquad (6.3)
\]

Proof. Let $\{e_k\}$ be a complete orthonormal basis for $K$. Then
\[
E\,\|M_t\|^2 = \sum_{k=1}^\infty E\,(M_t, e_k)^2. \qquad (6.4)
\]
Since $m_t^k = (M_t, e_k)$ is a real continuous martingale with $m_0^k = 0$, we have
\[
|m_t^k|^2 = 2 \int_0^t m_s^k\,dm_s^k + [m^k]_t,
\]
so that
\[
E\,|m_t^k|^2 = E\,[m^k]_t = E\,(\hat Q_t\,e_k, e_k).
\]

When this equation is used in (6.4), we obtain
\[
E\,\|M_t\|^2 = \sum_{k=1}^\infty E\,(\hat Q_t\,e_k, e_k) = E\,(\mathrm{Tr}\,\hat Q_t),
\]
which verifies the lemma. $\square$

Let $\mathcal{M}_T^p(K)$ denote the Banach space of continuous, $K$-valued $L^p$-martingales with norm given by
\[
\|M\|_{p,T} = \big( E \sup_{0\le t\le T} \|M_t\|^p \big)^{1/p}.
\]

Theorem 2.3 Let $M_t$, $t \in [0,T]$, be a continuous $K$-valued $L^p$-martingale. Then the following inequalities hold.

(1) For $p \ge 1$ and for any $\epsilon > 0$,
\[
P\{ \sup_{0\le t\le T} \|M_t\| \ge \epsilon \} \le \epsilon^{-p}\,E\,\|M_T\|^p. \qquad (6.5)
\]

(2) For $p = 1$,
\[
E\{ \sup_{0\le t\le T} \|M_t\| \} \le 3\,E\,\{ \mathrm{Tr}\,[M]_T \}^{1/2}. \qquad (6.6)
\]

(3) For $p > 1$,
\[
E\{ \sup_{0\le t\le T} \|M_t\|^p \} \le \Big( \frac{p}{p-1} \Big)^p\,E\,\|M_T\|^p. \qquad (6.7)
\]

A proof of the inequality (6.6) can be found in (p. 6, [75]). Since $\|M_t\|$ is a positive submartingale, the Doob inequalities (6.5) and (6.7) are well known (see p. 21, [86]).
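A quick Monte Carlo sanity check of the $p = 2$ case of (6.7), where the constant is $(p/(p-1))^p = 4$, using a discretized one-dimensional Brownian motion as the martingale (path counts and step sizes below are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_steps = 2000, 500

# Brownian motion on [0, 1]: cumulative sums of N(0, dt) increments.
increments = rng.standard_normal((n_paths, n_steps)) / np.sqrt(n_steps)
paths = np.cumsum(increments, axis=1)

lhs = np.mean(np.max(paths**2, axis=1))   # estimates E sup_t |M_t|^2
rhs = 4.0 * np.mean(paths[:, -1]**2)      # estimates 4 E |M_T|^2
```

The maximal second moment sits well below the Doob bound, while $E\,|M_T|^2$ is close to its exact value $T = 1$.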

Theorem 2.4 Let $M_t$ be an $\mathcal{M}_T^2(K)$-martingale with $M_0 = 0$, and let $\{e_k\}$ be a complete orthonormal basis for $K$. Then $M_t$ has the following series expansion:
\[
M_t = \sum_{k=1}^\infty m_t^k\,e_k, \qquad (6.8)
\]
where $m_t^k = (M_t, e_k)$ and the series converges in $\mathcal{M}_T^2(K)$.

Proof. Let $M_t^n$ denote the partial sum
\[
M_t^n = \sum_{k=1}^n m_t^k\,e_k, \qquad (6.9)
\]
where $m_t^k$ is a real-valued martingale with covariation $q_t^k = (\hat Q_t\,e_k, e_k)$. It suffices to show that $\{M_t^n\}$ is a Cauchy sequence of martingales in the space

$\mathcal{M}_T^2(K)$. To this end, we see clearly that $M_t^n$ is a continuous martingale and, by the maximal inequality (6.7),
\[
\|M^n\|_{2,T}^2 = E \sup_{0\le t\le T} \|M_t^n\|^2 \le 4\,E\,\|M_T^n\|^2 = 4\,E \sum_{k=1}^n |m_T^k|^2 = 4 \sum_{k=1}^n E\,(\hat Q_T\,e_k, e_k) \le 4\,E\,(\mathrm{Tr}\,\hat Q_T),
\]
which, in view of Lemma 2.2, implies that $M^n \in \mathcal{M}_T^2(K)$. For $m > n$, let $M_t^{m,n} = (M_t^m - M_t^n)$, which is a continuous martingale in $K$. As before, we have
\[
\|M^{m,n}\|_{2,T}^2 = E \sup_{0\le t\le T} \|M_t^{m,n}\|^2 \le 4\,E \sum_{k=n+1}^m |m_T^k|^2 = 4\,E \sum_{k=n+1}^m (\hat Q_T\,e_k, e_k) \to 0
\]
as $m, n \to \infty$, since $E\,(\mathrm{Tr}\,\hat Q_T) = \sum_{k=1}^\infty E\,(\hat Q_T\,e_k, e_k) < \infty$. Therefore $\{M^n\}$ is a Cauchy sequence converging in $\mathcal{M}_T^2(K)$ to a limit $M' = \sum_{k=1}^\infty m^k\,e_k$. Since $(M', e_k) = m^k = (M, e_k)$ for every $k \ge 1$, we can conclude that the series representation (6.8) for $M$ is valid. $\square$

A stochastic process $X_t$ in a Hilbert space $K$ is said to be a $K$-valued Gaussian process if, for any $g \in K$, $(X_t, g)$ is a real-valued Gaussian process. Let $R \in \hat L_1(K)$ be a self-adjoint nuclear operator on $K$. A $K$-valued stochastic process $\{W_t, t \ge 0\}$ is called a Wiener process in $K$ with covariance operator $R$, or an $R$-Wiener process, if

(1) $W_t$ is a continuous process in $K$ for $t \ge 0$ with $W_0 = 0$ a.s.;

(2) $W_t$ has stationary, independent increments; and

(3) $W_t$ is a centered Gaussian process in $K$ with covariance operator $R$ such that
\[
E\,(W_t, g) = 0, \qquad E\,(W_t, g)(W_s, h) = (t \wedge s)(Rg, h), \qquad (6.10)
\]
for any $s, t \in [0, \infty)$, $g, h \in K$.

It is easy to verify that $W_t$ is a continuous $\mathcal{F}_t$-adapted $L^2$-martingale in $K$ with covariation operator $\hat Q_t = tR$ for $t \ge 0$.

Theorem 2.5 Let $\{e_k\}$ be the complete set of eigenfunctions of the trace-class covariance operator $R$, with eigenvalues $\{\lambda_k\}$ such that $Re_k = \lambda_k\,e_k$, $k = 1, 2, \dots$. Then, for $0 \le t \le T$, an $R$-Wiener process $W_t$ in $K$ has the following series representation:
\[
W_t = \sum_{k=1}^\infty \sqrt{\lambda_k}\,b_t^k\,e_k, \qquad (6.11)
\]

where $\{b_t^k\}$, $k = 1, 2, \dots$, is a sequence of independent, identically distributed standard Brownian motions in one dimension. Moreover, the series converges uniformly on any finite interval $[0,T]$ with probability one.

Proof. Let $W_t^n$ denote the partial sum of (6.11),
\[
W_t^n = \sum_{k=1}^n m_t^k\,e_k,
\]
where $m_t^k = \sqrt{\lambda_k}\,b_t^k$ is a real martingale with mean zero and covariation $q_t^k = t\,(Re_k, e_k) = t\,\lambda_k$. For each $n$, $W_t^n$ is a continuous martingale in $K$. Similar to the proof of Theorem 2.4, we can show that the series (6.11) converges in $C([0,T]; K)$ to the limit $W_t$ in mean-square, and thus in probability. The limit is a Gaussian process in $K$ with $E\,W_t = 0$ and, for $g, h \in K$,
\[
E\,(W_t, g)(W_s, h) = (t \wedge s) \sum_{k=1}^\infty \lambda_k\,(g, e_k)(h, e_k) = (t \wedge s)(Rg, h).
\]
Hence, it is an $R$-Wiener process in $K$. To show that the series converges in $C([0,T]; K)$ almost surely, we invoke a theorem (Theorem 4.3, [21]) which says that, if the terms of a series are mutually independent and symmetrically distributed random variables in a Banach space, then convergence in probability implies almost sure convergence. Clearly, the sequence of random variables $m_t^k\,e_k = \sqrt{\lambda_k}\,b_t^k\,e_k$ in $C([0,T]; K)$ satisfies these conditions. Therefore, the almost sure convergence of the series follows. $\square$
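The representation (6.11) is easy to simulate by truncation. In the eigenbasis of $R$, the coordinates of $W_t$ are independent $N(0, \lambda_k t)$ variables, so $E\,\|W_t\|^2 = t\,\mathrm{Tr}\,R$. The following sketch (the eigenvalue sequence $\lambda_k = k^{-2}$ and sample sizes are arbitrary choices) checks this by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(7)
lam = 1.0 / (np.arange(1, 51) ** 2)   # eigenvalues of a trace-class R
t, n_paths = 1.0, 4000

# Truncated Karhunen-Loeve sum W_t = sum_k sqrt(lam_k) b_t^k e_k:
# in the eigenbasis, the coordinates of W_t are independent N(0, lam_k * t).
coords = np.sqrt(lam * t) * rng.standard_normal((n_paths, lam.size))

mean_sq_norm = np.mean(np.sum(coords**2, axis=1))   # estimates E ||W_t||^2
trace_R = lam.sum()                                  # Tr R = sum_k lam_k
```

The truncation error in $\mathrm{Tr}\,R$ is the tail $\sum_{k>50} k^{-2}$, which is already below $2\%$ here.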

So far we have considered the $R$-Wiener process $W_t$ with a trace-class covariance operator. It is also of interest to consider the so-called cylindrical Wiener process $B_t$ in $K$. Formally, $B_t = R^{-1/2}\,W_t$ has the identity covariance operator $I$. In fact, it makes sense to regard $B_t$ as a generalized Wiener process with values in the dual space $K_R'$ of $K_R$, where $K_R$ denotes the completion of $(R^{1/2} K)$ with respect to the norm $\|\cdot\|_R = \|R^{-1/2}\,\cdot\,\|$. Since the inclusion $i: K \hookrightarrow K_R'$ is Hilbert–Schmidt, the triple $(i, K, K_R')$ forms a so-called abstract Wiener space (p. 63, [57]). The cylindrical Wiener process $B_t$ can then be defined as a Wiener process in $K_R'$. In this case, $K_R'$ is also known as a reproducing kernel space (p. 40, [21]).

Remarks:

(1) $B_t$, $t \ge 0$, is a Gaussian process in $K_R'$ with $E\,\langle B_t, g \rangle = 0$ and
\[
E\,\langle B_t, g \rangle \langle B_s, h \rangle = (t \wedge s)(g, h)_K,
\]
for any $g, h \in K_R$, where $\langle \cdot, \cdot \rangle$ denotes the duality product between $K_R'$ and $K_R$.

(2) Let $\Pi_n: K_R' \to K$ be a projection operator such that $\Pi_n \to I$ strongly in $K_R'$. Then one can show that (p. 66, [57])
\[
W_t = R^{1/2}\,B_t = \lim_{n \to \infty} R^{1/2}\,\Pi_n\,B_t
\]
in the mean-square sense.

(3) The formal derivative $\dot B_t$ is a space–time white noise satisfying
\[
E\,\langle \dot B_t, g \rangle = 0, \qquad E\,\langle \dot B_t, g \rangle \langle \dot B_s, h \rangle = \delta(t - s)(g, h)_K,
\]
for any $g, h \in K_R$, where $\delta(t)$ denotes the Dirac delta function. $\dot B_t$ can be regarded as a white noise in the sense of Hida [58].

6.3 Stochastic Integrals in Hilbert Spaces

Let $K$, $H$ be real separable Hilbert spaces and let $W_t$, $t \ge 0$, be an $R$-Wiener process in $K$ defined on a complete probability space $(\Omega, \mathcal{F}, P)$ with a filtration $\{\mathcal{F}_t, t \ge 0\}$ of increasing sub-$\sigma$-fields of $\mathcal{F}$. To define a stochastic integral with respect to the Wiener process $W_t$, let $K_R$ be the Hilbert subspace of $K$ introduced in the last section, with norm $\|\cdot\|_R$ and the inner product
\[
(g, h)_R = (R^{-1/2}\,g, R^{-1/2}\,h)_K = \sum_{k=1}^\infty \frac{1}{\lambda_k}\,(g, e_k)_K\,(h, e_k)_K, \qquad (6.12)
\]
for $g, h \in K_R$. Denote by $L_R^2$ the space $L_2(K_R, H)$ of H.S. operators. We claim that the following lemma holds.

Lemma 3.1 The class $L_R^2$ of H.S. operators is a separable Hilbert space with norm
\[
\|F\|_R = ((F, F))_R^{1/2} = [\,\mathrm{Tr}\,(F R F^\star)\,]^{1/2}, \qquad (6.13)
\]
induced by the inner product $((F, G))_R = \mathrm{Tr}\,(G R F^\star)$, for any $F, G \in L_R^2$, where $\star$ denotes the adjoint.

For the proof, one is referred to Theorem 1.3 in [57]. Now we turn to the stochastic integrals in $H$.

Let $\Phi_t(\omega) \in L_R^2$, for $t \in [0,T]$, be an $\mathcal{F}_t$-adapted $L_R^2$-valued process such that
\[
E \int_0^T \|\Phi_s\|_R^2\,ds < \infty. \qquad (6.14)
\]
We will first define an $H$-valued stochastic integral of the form
\[
(\Phi \cdot W) = \int_0^T \Phi_s\,dW_s, \qquad (6.15)
\]
from which we then define
\[
(\Phi \cdot W)_t = \int_0^T I_{[0,t]}(s)\,\Phi_s\,dW_s = \int_0^t \Phi_s\,dW_s, \quad t \le T, \qquad (6.16)
\]
where $I_\Lambda(\cdot)$ denotes the indicator function of the set $\Lambda$.


As usual, we begin with a simple operator of the form
\[
\Phi_s^n = \sum_{k=1}^n I_{[t_{k-1}, t_k)}(s)\,\Phi_{k-1}, \qquad (6.17)
\]
where $\pi_n = \{t_0, t_1, \dots, t_k, \dots, t_n\}$, with $0 = t_0 < t_1 < \dots < t_k < \dots < t_n = T$, is a partition of $[0,T]$, and $\Phi_k$ is an $\mathcal{F}_{t_k}$-adapted random operator in $L_R^2$ such that
\[
E\,\|\Phi_k\|_R^2 = E\,\mathrm{Tr}\,(\Phi_k R \Phi_k^\star) < \infty, \qquad (6.18)
\]
for $k = 0, 1, \dots, n$.

Let $\mathcal{S}_T(K_R, H)$ denote the set of all simple operators of the form (6.17). For $\Phi \in \mathcal{S}_T(K_R, H)$, define
\[
(\Phi \cdot W) = \int_0^T \Phi_s\,dW_s = \sum_{k=1}^n \Phi_{k-1}\,(W_{t_k} - W_{t_{k-1}}), \qquad (6.19)
\]

and, in this case, define the integral (6.16) directly as
\[
(\Phi \cdot W)_t = \int_0^t \Phi_s\,dW_s = \sum_{k=1}^n \Phi_{k-1}\,(W_{t \wedge t_k} - W_{t \wedge t_{k-1}}). \qquad (6.20)
\]

Lemma 3.2 For $\Phi \in \mathcal{S}_T(K_R, H)$, $X_t = (\Phi \cdot W)_t$ is a continuous $H$-valued $L^2$-martingale such that, for any $g, h \in H$, the following hold:
\[
E\,(X_t, g) = 0, \qquad E\,(X_t, g)(X_s, h) = E \int_0^{t \wedge s} (Q_r\,g, h)\,dr, \qquad (6.21)
\]
and
\[
E\,\|X_t\|^2 = E \int_0^t \mathrm{Tr}\,Q_r\,dr, \qquad (6.22)
\]
where $(\cdot,\cdot) = (\cdot,\cdot)_H$ and $Q_t = (\Phi_t R \Phi_t^\star)$, with $\star$ denoting the adjoint.

Proof. Let $\Phi = \Phi^n$ be a simple operator, where $\Phi^n$ is given by (6.17). Then, by definition (6.20),
\[
(X_t, g) = \sum_{k=1}^n (g, \Phi_{k-1}\,\Delta W_{t \wedge t_k}), \qquad (6.23)
\]

with $\Delta W_{t \wedge t_k} = (W_{t \wedge t_k} - W_{t \wedge t_{k-1}})$. It is easy to check that the sum (6.23) is a continuous martingale. Since
\[
E\,(g, \Phi_{k-1}\,\Delta W_{t \wedge t_k}) = E\,E\{ (\Phi_{k-1}^\star\,g, \Delta W_{t \wedge t_k}) \,|\, \mathcal{F}_{t_{k-1}} \} = 0,
\]
we have $E\,(X_t, g) = 0$. For $j < k$ and $t \le s$,
\[
E\,[\,(g, \Phi_{j-1}\,\Delta W_{t \wedge t_j})(h, \Phi_{k-1}\,\Delta W_{s \wedge t_k})\,] = E\,\{ (g, \Phi_{j-1}\,\Delta W_{t \wedge t_j})\,E[\,(h, \Phi_{k-1}\,\Delta W_{s \wedge t_k}) \,|\, \mathcal{F}_{t_{k-1}}\,] \} = 0.
\]
It follows that, for $t \le s$,
\[
E\,(X_t, g)(X_s, h) = \sum_{t_k \le t} E\,[\,(g, \Phi_{k-1}\,\Delta W_{t_k})(h, \Phi_{k-1}\,\Delta W_{t_k})\,],
\]
where
\[
\begin{aligned}
E\,[\,(g, \Phi_{k-1}\,\Delta W_{t_k})(h, \Phi_{k-1}\,\Delta W_{t_k})\,]
&= E\,E\{ (\Phi_{k-1}^\star\,g, \Delta W_{t_k})(\Phi_{k-1}^\star\,h, \Delta W_{t_k}) \,|\, \mathcal{F}_{t_{k-1}} \} \\
&= E\,(R\,\Phi_{k-1}^\star\,g, \Phi_{k-1}^\star\,h)(t_k - t_{k-1}) = E\,(\Phi_{k-1} R \Phi_{k-1}^\star\,g, h)(t_k - t_{k-1}).
\end{aligned}
\]
Therefore,
\[
E\,(X_t, g)(X_s, h) = \sum_{t_k \le t \wedge s} E\,(\Phi_{k-1} R \Phi_{k-1}^\star\,g, h)(t_k - t_{k-1}) = E \int_0^{t \wedge s} (\Phi_r R \Phi_r^\star\,g, h)\,dr,
\]

which verifies (6.21). To show (6.22), take a complete orthonormal basis $\{f_k\}$ for $H$. For $g = h = f_k$, the equation (6.21) yields
\[
E\,(X_t, f_k)^2 = E \int_0^t (\Phi_r R \Phi_r^\star\,f_k, f_k)\,dr,
\]
so that
\[
E\,\|X_t\|^2 = \sum_{k=1}^\infty E\,(X_t, f_k)^2 = E \int_0^t \sum_{k=1}^\infty (\Phi_r R \Phi_r^\star\,f_k, f_k)\,dr = E \int_0^t \mathrm{Tr}\,Q_r\,dr,
\]
where the interchange of expectation, integration, and summation can be justified by the monotone convergence theorem and the fact that
\[
E \int_0^t \mathrm{Tr}\,Q_r\,dr \le T \max_{1\le k\le n} E\,\mathrm{Tr}\,(\Phi_{k-1} R \Phi_{k-1}^\star) < \infty,
\]
by invoking the property (6.18). $\square$
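The isometry (6.22) can be illustrated in finite dimensions, taking $K = \mathbf{R}^3$, $H = \mathbf{R}^2$, a diagonal $R$, and a deterministic simple integrand (so the $\Phi_{k-1}$ are constant matrices). Everything here — dimensions, grid, matrices, sample size — is an arbitrary choice for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
d_K, d_H, n_paths = 3, 2, 5000
R = np.diag([1.0, 0.5, 0.25])              # covariance operator on K = R^3
t_grid = np.array([0.0, 0.3, 0.7, 1.0])    # partition of [0, T]
Phis = [rng.standard_normal((d_H, d_K)) for _ in range(len(t_grid) - 1)]

# Exact second moment: E ||X_T||^2 = sum_k Tr(Phi_k R Phi_k^T) (t_{k+1} - t_k)
exact = sum(np.trace(P @ R @ P.T) * dt
            for P, dt in zip(Phis, np.diff(t_grid)))

# Monte Carlo: X_T = sum_k Phi_{k-1} (W_{t_k} - W_{t_{k-1}})
X = np.zeros((n_paths, d_H))
for P, dt in zip(Phis, np.diff(t_grid)):
    dW = rng.multivariate_normal(np.zeros(d_K), dt * R, size=n_paths)
    X += dW @ P.T
estimate = np.mean(np.sum(X**2, axis=1))
```

The empirical mean of $\|X_T\|^2$ matches $\sum_k \mathrm{Tr}(\Phi_k R \Phi_k^\star)\,\Delta t_k$ up to Monte Carlo error, and the sample mean of $X_T$ is near zero, as (6.21) requires.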

To extend the integrand to a more general random operator in $L_R^2$, we introduce the class $\mathcal{P}_T$. Let $\mathcal{B}_T$ be the $\sigma$-algebra generated by sets of the form $\{0\} \times B_0$ and $(s, t] \times B$, for $0 \le s < t < T$, $B_0 \in \mathcal{F}_0$, $B \in \mathcal{F}_s$. Then an $L_R^2$-valued process $\Phi_t$ is said to be a predictable operator if the mapping $\Phi: [0, T) \times \Omega \to L_R^2$ is $\mathcal{B}_T$-measurable. Define $\mathcal{P}_T$ to be the space of predictable operators in $L_R^2$ equipped with the norm
\[
\|\Phi\|_{\mathcal{P}_T} = \Big\{ E \int_0^T \mathrm{Tr}\,(\Phi_t R \Phi_t^\star)\,dt \Big\}^{1/2}. \qquad (6.24)
\]
In particular, it is known that $\Phi^n \in \mathcal{P}_T$. The following lemma is essential in extending a simple integrand to a predictable operator. A proof can be found in (p. 96, [21]).

Lemma 3.3 The set $\mathcal{P}_T$ is a separable Hilbert space under the norm (6.24), and the set $\mathcal{S}_T(K_R, H)$ of all simple predictable operators is dense in $\mathcal{P}_T$.

In view of Lemma 3.2 and (6.24), for $\Phi^n \in \mathcal{S}_T(K_R, H)$, let $X_T^n = (\Phi^n \cdot W)$. Then we have
\[
E\,\|X_T^n\|_H^2 = E \int_0^T \mathrm{Tr}\,(\Phi_t^n R \Phi_t^{n\star})\,dt = E \int_0^T \|\Phi_t^n\|_R^2\,dt. \qquad (6.25)
\]
For $\Phi \in \mathcal{P}_T$ satisfying
\[
E \int_0^T \mathrm{Tr}\,(\Phi_t R \Phi_t^\star)\,dt < \infty, \qquad (6.26)
\]
by Lemma 3.3, there exists $\Phi^n \in \mathcal{S}_T(K_R, H)$ such that $\|\Phi - \Phi^n\|_{\mathcal{P}_T} \to 0$ as $n \to \infty$. The equation (6.25) shows that the stochastic integral is an isometric transformation from $\mathcal{S}_T(K_R, H) \subset \mathcal{P}_T$ into the space $\mathcal{M}_T^2$ of $H$-valued martingales. Define It\^o's stochastic integral of $\Phi$ as the limit
\[
(\Phi \cdot W) = \int_0^T \Phi_s\,dW_s = \lim_{n \to \infty} \int_0^T \Phi_s^n\,dW_s, \qquad (6.27)
\]
in $\mathcal{M}_T^2$, and then set
\[
(\Phi \cdot W)_t = \int_0^T I_{[0,t]}(s)\,\Phi_s\,dW_s = \int_0^t \Phi_s\,dW_s, \qquad (6.28)
\]
which is a continuous $H$-valued martingale.

Remarks:

(1) In fact, the It\^o integral (6.28) can be defined as the limit of a sequence of integrals with simple predictable operators under the weaker condition [21]:
\[
P\Big\{ \int_0^T \mathrm{Tr}\,(\Phi_t R \Phi_t^\star)\,dt < \infty \Big\} = 1.
\]
In this case the convergence in (6.28) is uniform in $t \in [0,T]$ with probability one, and $X_t$ will be a continuous local martingale in $H$.

(2) As a special case, for $\Phi_t: K \to \mathbf{R}$, we shall write
\[
(\Phi \cdot W)_t = \int_0^t (\Phi_s, dW_s). \qquad (6.29)
\]
As in the simple predictable case (Lemma 3.2), the It\^o integral defined above enjoys the same properties.

Theorem 3.4 For $\Phi \in \mathcal{P}_T$, the stochastic integral $X_t = (\Phi \cdot W)_t$ defined by (6.28), $0 \le t \le T$, is a continuous $L^2$-martingale in $H$ with the covariation operator
\[
[\,(\Phi \cdot W)\,]_t = \int_0^t (\Phi_s R \Phi_s^\star)\,ds, \qquad (6.30)
\]
and it has the following properties:
\[
E\,(X_t, g) = 0, \qquad E\,(X_t, g)(X_s, h) = E \int_0^{t \wedge s} (\Phi_r R \Phi_r^\star\,g, h)\,dr, \qquad (6.31)
\]
for any $g, h \in H$. In particular,
\[
E\,\|X_t\|^2 = E \int_0^t \mathrm{Tr}\,(\Phi_s R \Phi_s^\star)\,ds. \qquad (6.32)
\]

Here we will skip the proof, which can be carried out by applying Lemmas 3.2 and 3.3 via a sequence of simple predictable approximations.
Instead of the stochastic integral in $H$ with respect to a Wiener process $W_t$, as in the finite-dimensional case, we can also define a stochastic integral with respect to a continuous martingale in $K$. For instance, if $f_t \in H$ is a bounded and predictable process over $[0,T]$, then, similar to (6.28), we can define $(f \cdot X)_t$ as the integral
\[
(f \cdot X)_t = \int_0^t (f_s, dX_s) = \int_0^t (f_s, \Phi_s\,dW_s). \qquad (6.33)
\]
Then, by following the same procedure used in defining the It\^o integral, we can verify the following lemma, which will be useful later.

Lemma 3.5 Let $f_t$ and $g_t$ be any $H$-valued processes which are bounded and predictable over $[0,T]$. Then the following hold:
\[
E\,(f \cdot X)_t = 0, \qquad E\,(f \cdot X)_t\,(g \cdot X)_t = E \int_0^t (\Phi_s R \Phi_s^\star\,f_s, g_s)\,ds.
\]

In the subsequent applications, we need to define a stochastic integral of the form
\[
(F\Phi \cdot W)_t = \int_0^t F(t, s)\,\Phi_s\,dW_s, \qquad (6.34)
\]

where $F: [0,T] \times [0,T] \to L(H)$ is bounded and uniformly continuous for $s, t \in [0,T]$. One way to proceed is as follows. For $t \in (0, T]$, instead of (6.33), define an integral of the form
\[
Y_t = \int_0^T F(t, s)\,\Phi_s\,dW_s, \qquad (6.35)
\]
as a process in $H$ with $E \int_0^T \|Y_t\|^2\,dt < \infty$. Then we let $(F\Phi \cdot W)_t = \int_0^T I_{[0,t)}(s)\,F(t, s)\,\Phi_s\,dW_s$. To define the integral (6.35), an obvious way is to proceed as before. Starting with a simple predictable approximation, let $\Phi^n \in \mathcal{S}_T(K_R, H)$ and let
\[
F^n(t, s) = \sum_{k=1}^n I_{[t_{k-1}, t_k)}(s)\,F(t, t_{k-1}).
\]

Then we define
\[
Y_t^n = \int_0^T F^n(t, s)\,\Phi_s^n\,dW_s = \sum_{k=1}^n F(t, t_{k-1})\,\Phi_{k-1}\,(W_{t_k} - W_{t_{k-1}}), \qquad (6.36)
\]
where, for $0 = t_0 < t_1 < \dots < t_{n-1} < t_n = T$, $\Phi_k$ is an $\mathcal{F}_{t_k}$-adapted random operator from $K$ into $H$. Since $F$ is bounded and continuous, we have
\[
E \int_0^T \mathrm{Tr}\,[\,F(t, t_{k-1})\,\Phi_{k-1} R \Phi_{k-1}^\star\,F^\star(t, t_{k-1})\,]\,dt \le C_T\,E\,\|\Phi_{k-1}\|_R^2 < \infty, \qquad (6.37)
\]
for some $C_T > 0$. Clearly we have $E\,Y_t^n = 0$, and
\[
\begin{aligned}
E \int_0^T \|Y_t^n\|^2\,dt &= E \int_0^T \Big\{ \sum_{k=1}^n \mathrm{Tr}\,[\,F(t, t_{k-1})\,\Phi_{k-1} R \Phi_{k-1}^\star\,F^\star(t, t_{k-1})\,]\,(t_k - t_{k-1}) \Big\}\,dt \\
&= E \int_0^T \int_0^T \mathrm{Tr}\,[\,F(t, s)\,\Phi_s^n R \Phi_s^{n\star}\,F^\star(t, s)\,]\,ds\,dt. \qquad (6.38)
\end{aligned}
\]

In general, let $\Phi \in \mathcal{P}_T$ and let $F(t, s) \in L(H)$, $s, t \in [0,T]$, be bounded and uniformly continuous. Then we have
\[
E \int_0^T \int_0^T \mathrm{Tr}\,[\,F(t, s)\,\Phi_s R \Phi_s^\star\,F^\star(t, s)\,]\,ds\,dt \le \beta^2\,T\,E \int_0^T \mathrm{Tr}\,(\Phi_s R \Phi_s^\star)\,ds < \infty,
\]
where $\beta = \sup_{s,t} \|F(t, s)\|_{L(H)}$. By Lemma 3.3, there exists $\Phi^n \in \mathcal{S}_T(K_R; H)$ such that
\[
E \int_0^T \int_0^T \|F(t, s)\,\Phi_s - F^n(t, s)\,\Phi_s^n\|_R^2\,ds\,dt \to 0
\]
as $n \to \infty$. Correspondingly, $\{Y^n\}$ is a Cauchy sequence in $L^2(\Omega \times (0, T); H)$ which converges to a limit $Y$, and we denote $Y(t) = \int_0^T F(t, s)\,\Phi_s\,dW_s$ as the stochastic integral in (6.35). Then we define the stochastic integral (6.34) by
\[
(F\Phi \cdot W)_t = \int_0^T I_{[0,t]}(s)\,F(t, s)\,\Phi_s\,dW_s = \int_0^t F(t, s)\,\Phi_s\,dW_s. \qquad (6.39)
\]

This integral will be called an evolutional stochastic integral. For the special case where $F(t, s) = F(t - s)$, we have
\[
Y_t = \int_0^t F(t - s)\,\Phi_s\,dW_s,
\]
which will be called a convolutional stochastic integral. The following theorem, which seems obvious, will be useful later. The proof is omitted.

Theorem 3.6 Let $F(t, s) \in L(H)$ be bounded and uniformly continuous in $s, t \in [0,T]$, such that
\[
\sup_{s,t \in [0,T]} \|F(t, s)\|_L < \infty. \qquad (6.40)
\]
Then the evolutional stochastic integral $Y_t = \int_0^t F(t, s)\,\Phi_s\,dW_s$ has a continuous version which is an $\mathcal{F}_t$-adapted process in $H$ such that $E\,Y_t = 0$ and
\[
E\,(Y_t, Y_s) = E \int_0^{t \wedge s} \mathrm{Tr}\,[\,F(t, r)\,\Phi_r R \Phi_r^\star\,F^\star(s, r)\,]\,dr. \qquad (6.41)
\]

Remark: Here, for simplicity, we assumed that $F(t, s)$ is uniformly continuous in $s, t \in [0,T]$. In fact, the integral $Y_t$ can also be defined under weaker conditions, such as $F(t, s)$ being a strongly continuous linear operator for $t \ne s$, or an $\mathcal{F}_t$-predictable random operator $F(\omega, t)$.
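For a scalar convolution kernel $F(u) = e^{-au}$ and a constant integrand $\Phi = \sigma$, the convolutional integral $Y_T = \int_0^T e^{-a(T-s)}\,\sigma\,dW_s$ has variance $\sigma^2 (1 - e^{-2aT})/(2a)$ by the It\^o isometry, which a left-point discretization reproduces. All parameter values in this sketch are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
a, sigma, T, n_steps, n_paths = 1.5, 0.8, 1.0, 400, 4000
dt = T / n_steps
s = np.arange(n_steps) * dt           # left endpoints of the time grid

# Discretized Y_T = int_0^T F(T - s) sigma dW_s with F(u) = exp(-a u)
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
kernel = np.exp(-a * (T - s))
Y_T = (kernel * sigma * dW).sum(axis=1)

exact_var = sigma**2 * (1.0 - np.exp(-2.0 * a * T)) / (2.0 * a)
```

The sample mean of $Y_T$ is near zero and its sample variance matches the isometry value up to discretization and Monte Carlo error.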

So far we have defined stochastic integrals with respect to a Wiener process. In a similar fashion, a generalization to stochastic integrals with respect to a continuous $K$-valued $L^2$-martingale $M_t$ can easily be carried out. Suppose that $M_t$ has a local characteristic operator $Q_t$. Let $\Phi_t$ be a predictable random operator in $L(K; H)$ such that
\[
E \int_0^T \mathrm{Tr}\,(\Phi_t Q_t \Phi_t^\star)\,dt < \infty. \qquad (6.42)
\]
Then we can define the stochastic integral with respect to the martingale,
\[
Z_t = (\Phi \cdot M)_t = \int_0^t \Phi_s\,dM_s, \qquad (6.43)
\]
analogously to the case of a Wiener process.



Theorem 3.7 Let $M_t$ be a continuous $K$-valued martingale with $M_0 = 0$ and local characteristic operator $Q_t$ for $0 \le t \le T$. Suppose that $\Phi_t$ is a predictable process with values in $L(K, H)$ such that the property (6.42) is satisfied. Then the stochastic integral $Z_t$ given by (6.43) is a continuous $H$-valued $L^2$-martingale with mean $E\,Z_t = 0$ and covariation operator
\[
[\,Z\,]_t = \int_0^t (\Phi_s Q_s \Phi_s^\star)\,ds. \qquad (6.44)
\]

Now we turn to the stochastic integral with respect to a Poisson random measure. To be specific, as in the previous sections, we consider $N(t, \cdot)$ to be a Poisson measure on the Borel subsets of $\mathbf{R}_0^m$ instead of a Hilbert space. Technically there is little difference between the two. Since $N(t, \cdot) = \tilde N(t, \cdot) + t\,\mu(\cdot)$, it suffices to consider the integral with respect to the compensated Poisson random measure. Let $B$ be a Borel set in $\mathcal{B}(\mathbf{R}_0^m)$. First we define a real-valued integral over $[0,T] \times B$. Let $\pi_n = \{0 = t_0, t_1, t_2, \dots, t_n = T\}$ be a partition of $[0,T]$ and let $\{B_1, B_2, \dots, B_n\}$ be a partition of the set $B$. Introduce a sequence of real-valued random variables $\{X_j, j = 1, 2, \dots, n\}$ such that, for each $j$, $X_j$ is $\mathcal{F}_{t_{j-1}}$-adapted with $E\,|X_j|^2 < \infty$. Let $\mathcal{S}_T(B)$ be the set of simple processes $\varphi: [0,T] \times B \times \Omega \to \mathbf{R}$ of the form
\[
\varphi_n(t, \xi) = \sum_{j=1}^n X_j\,I_{(t_{j-1}, t_j]}(t)\,I_{B_j}(\xi), \qquad (6.45)
\]
where $I$ denotes the indicator function. We then define
\[
J_T(\varphi_n) = \sum_{j=1}^n X_j\,\tilde N((t_{j-1}, t_j], B_j) = \int_0^T \int_B \varphi_n(t, \xi)\,\tilde N(dt, d\xi). \qquad (6.46)
\]
By easy calculations, we obtain $E\,J_T(\varphi_n) = 0$ and
\[
E\,[J_T(\varphi_n)]^2 = \sum_{j=1}^n E\,|X_j|^2\,(t_j - t_{j-1})\,\mu(B_j) = E \int_0^T \int_B |\varphi_n(t, \xi)|^2\,dt\,\mu(d\xi). \qquad (6.47)
\]

Let $M_T^2$ denote the set of all $\mathcal{F}_t$-predictable random fields $\{\varphi(t, \xi),\ t \in [0,T],\ \xi \in B\}$ such that
\[
\|\varphi\|_T^2 = E \int_0^T \int_B |\varphi(t, \xi)|^2\,dt\,\mu(d\xi) < \infty.
\]
Under the norm $\|\cdot\|_T$, $M_T^2$ is a Hilbert space. Now the set of simple processes $\mathcal{S}_T(B)$ is known to be dense in $M_T^2$. Given $\varphi \in M_T^2$, it can be approximated by a sequence of simple processes of the form (6.45). It follows from (6.47), by the It\^o isometry argument, that the integral $J_T(\varphi_n)$ defined in (6.46) converges to a limit $J_T(\varphi)$ in $L^2(\Omega)$. Then we define the Poisson integral
\[
J_T(\varphi) = \int_0^T \int_B \varphi(t, \xi)\,\tilde N(dt, d\xi). \qquad (6.48)
\]
More precisely, the above results can be found in Section 8.4 of [78], in which Theorem 8.6 implies the following lemma.

Lemma 3.8 If $\varphi \in M_T^2$, then the integral $J_t(\varphi)$, $t \in [0,T]$, is a square-integrable $\mathcal{F}_t$-martingale which is stochastically continuous, with mean $E\,J_t(\varphi) = 0$ and
\[
E\,\{J_t(\varphi)\}^2 = E \int_0^t \int_B |\varphi(r, \xi)|^2\,dr\,\mu(d\xi).
\]

To define a Hilbert spacevalued Poisson stochastic integral in H, for t


[0, T ] and B, let (t, ) be a Ft -predictable random field with values in
H such that Z Z T
||||||2T = E k(t, )k2 dt(d) < ,
0 B

and denote by M2T the collection of all such random fields. Let {ek } be an
orthonormal basis for H and set j = (, ej ). Then we have j MT2 and the
Poisson integral Z tZ
Jtj () = j (t, )N (dt, d)
0 B
is well defined by Lemma 3.8. Now we define
Xn Z tZ X n
n j (ds, d).
Jt () = Jt () ej = [ j (s, ) ej ]N (6.49)
j=1 0 B j=1

Then it is easy to show that E Jtn () = 0 and


Z tZ X n
E kJtn ()k2 = E [ |j (s, )|2 ] ds(d)
0 B j=1
Z Z (6.50)
T
2
E k(s, )k ds(d).
0 B

It can be shown readily that the sequence \{J_t^n(\Phi)\} will converge in mean square to a limit J_t(\Phi) uniformly in t \in [0,T]. We then define

J_t(\Phi) = \int_0^t \int_B \Phi(s,\xi)\, \tilde N(ds, d\xi)    (6.51)

as the Poisson integral of \Phi. Since J_t^n(\Phi) is a square-integrable martingale, by taking the limits in (6.49) and (6.50), we arrive at the following theorem:

Theorem 3.9  If \Phi \in \mathcal M_T^2, then, for t \in [0,T], the integral J_t(\Phi) defined by (6.51) is a square-integrable \mathcal F_t-martingale which is stochastically continuous with mean E\, J_t(\Phi) = 0 and

E\, \|J_t(\Phi)\|^2 = E \int_0^t \int_B \|\Phi(r,\xi)\|^2\, dr\, \mu(d\xi),

for t \in [0,T].

Remarks:

(1) The general theory of Lévy processes and the related compound Poisson processes in a Hilbert space can be found in Chapter 4 of the informative book by Peszat and Zabczyk [78]. By the Lévy–Itô decomposition, the representation of a general Lévy process X_t on a Hilbert space is given in Theorem 4.23 of [78]. The process X_t can be decomposed as follows:

X_t = at + W_t + P_t,

where a \in H, W_t is an R-Wiener process and, as a special case, P_t = \int_0^t \int_B \xi\, \tilde N(dt, d\xi) is a Poisson integral over B \subset \mathbb R_0^m. The integral with respect to a Lévy process X_t in a Hilbert space can be defined as a sum of an ordinary integral, the Wiener integral, and the Poisson integral. From the stochastic analysis viewpoint, the deterministic type of integral \int_0^t \int_B \phi(s,\xi)\, ds\, \mu(d\xi) will play a minor role. So, separated from Wiener stochastic integrals, we have considered only the compensated Poisson type of integrals in the stochastic PDEs.

(2) In defining the Hilbert space H-valued Poisson integral, it is possible to do so directly, similar to that of an Itô integral in a Hilbert space. Here we first defined a real-valued Poisson integral and then, by means of an orthonormal basis of H, extended it from a sequence of finite-dimensional integrals to an H-valued Poisson integral. This approach seems more useful in later applications.

6.4 Itô's Formula
Let \xi be an \mathcal F_0 random variable in H with E\|\xi\|^2 < \infty. Suppose that V_t is a predictable H-valued process integrable over [0,T] and X_t is a continuous H-valued martingale with X_0 = 0 and local characteristic operator Q_t such that

E\Big\{ \int_0^T \|V_t\|^2\, dt + \int_0^T Tr\, Q_t\, dt \Big\} < \infty.    (6.52)
Stochastic Evolution Equations in Hilbert Spaces 177

Let Y_t be a semimartingale given by

Y_t = \xi + \int_0^t V_s\, ds + X_t.    (6.53)


For a functional F(\cdot,\cdot): H \times [0,T] \to \mathbb R, let \partial_t F(\phi,t) denote the partial derivative in t, and denote the first two partial (Fréchet) derivatives of F with respect to \phi by F'(\phi,t) \in H and F''(\phi,t) \in L(H), respectively. F is said to be in the class of C_U^2-Itô functionals if it satisfies the following properties:

(1) F: H \times [0,T] \to \mathbb R and its partial derivatives \partial_t F(\phi,t), F'(\phi,t), and F''(\phi,t) are continuous in H \times [0,T].

(2) F and its derivatives \partial_t F and F': H \to H are bounded and uniformly continuous on bounded subsets of H \times [0,T].

(3) For any \Lambda \in L_1(H), the map (\phi,t) \mapsto Tr[F''(\phi,t)\Lambda] is bounded and uniformly continuous on bounded subsets of H \times [0,T].

Under the above conditions, we shall present the following Itô formula without proof.

Theorem 4.1  Let Y_t, t \in [0,T], be a semimartingale given by (6.53) satisfying condition (6.52), and let F(\phi,t) be a C_U^2-Itô functional defined as above. Then the following formula holds a.s.:

F(Y_t, t) = F(\xi, 0) + \int_0^t \{ \partial_s F(Y_s, s) + (F'(Y_s, s), V_s) \}\, ds
          + \int_0^t (F'(Y_s, s), dX_s) + \frac{1}{2} \int_0^t Tr[ F''(Y_s, s)\, Q_s ]\, ds.    (6.54)

Remarks: The Itô formula in a Hilbert space setting is given in [75] and in Theorem 4.17 of [21]. A more general Itô formula for the Lévy type of semimartingale Y_t can be found in (Chapter 3, [69]) and on p. 393 of [78]. However, this formula will not be quoted here.

In particular, when X_t represents the stochastic integral (6.16), we obtain the following Itô formula as a corollary of Theorem 4.1. A direct proof of this theorem is given in Theorem 4.17 of [21].

Theorem 4.2  Let

Y_t = \xi + \int_0^t V_s\, ds + \int_0^t \Phi_s\, dW_s,    (6.55)

and let F be a C_U^2-Itô functional. Then the following formula holds:

F(Y_t, t) = F(\xi, 0) + \int_0^t \{ \partial_s F(Y_s, s) + (F'(Y_s, s), V_s) \}\, ds
          + \int_0^t (F'(Y_s, s), \Phi_s\, dW_s) + \frac{1}{2} \int_0^t Tr[ F''(Y_s, s)\, \Phi_s R \Phi_s^* ]\, ds.    (6.56)
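Formula (6.56) can be illustrated in a finite-dimensional truncation. For F(y) = \|y\|^2 with constant drift V and constant diffusion \Phi (and R = I), taking expectations in (6.56) kills the stochastic integral term and yields E\|Y_t\|^2 = \|\xi + Vt\|^2 + t\, Tr(\Phi R \Phi^*). The sketch below, with assumed values of \xi, V, \Phi, checks this by Monte Carlo; it is a demonstration, not part of the text.

```python
import numpy as np

rng = np.random.default_rng(1)
d, t_end, n_steps, n_paths = 2, 1.0, 100, 100_000
dt = t_end / n_steps
xi = np.array([1.0, -0.5])                 # initial datum (values assumed for the demo)
V = np.array([0.3, 0.2])                   # constant drift V_s
Phi = np.array([[0.5, 0.0], [0.2, 0.4]])   # constant diffusion Phi_s, with R = I

Y = np.tile(xi, (n_paths, 1))
for _ in range(n_steps):
    # with constant coefficients this Euler step is exact in law
    dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, d))
    Y = Y + V * dt + dW @ Phi.T

simulated = (Y**2).sum(axis=1).mean()
# (6.56) with F(y) = ||y||^2, after taking expectations:
# E ||Y_t||^2 = ||xi + V t||^2 + t Tr(Phi R Phi^*)
predicted = ((xi + V * t_end)**2).sum() + t_end * np.trace(Phi @ Phi.T)
```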

Later on it will be necessary to generalize the formula under weaker conditions in connection with stochastic evolution equations. As an application of the Itô formula (6.56), we shall sketch a proof of the Burkholder–Davis–Gundy inequality, which has been used several times before.

Lemma 4.3  Let X_t = \int_0^t \Phi_s\, dW_s such that, for p \ge 1,

E\Big\{ \int_0^T Tr[\Phi_s R \Phi_s^*]\, ds \Big\}^p < \infty.

Then there exists a constant C_p > 0 such that

E \sup_{0 \le s \le t} \|X_s\|^{2p} \le C_p\, E\Big\{ \int_0^t Tr[\Phi_s R \Phi_s^*]\, ds \Big\}^p,    0 < t \le T.    (6.57)

Proof.  Since X_t is a martingale, clearly (6.57) holds for p = 1. For p > 1, let F(\phi) = \|\phi\|^{2p}, \phi \in H. Then \partial_t F = 0, F(0) = 0, and

F'(\phi) = 2p \|\phi\|^{2(p-1)} \phi,
F''(\phi) = 2p \|\phi\|^{2(p-1)} I + 4p(p-1) \|\phi\|^{2(p-2)} (\phi \otimes \phi),

where I is the identity operator and \otimes denotes the tensor product. Let \nu_t = \sup_{0 \le s \le t} \|X_s\|^{2p}. Notice that, for \Lambda \in L_1(H),

Tr[F''(\phi)\Lambda] \le 2p \|\phi\|^{2(p-1)} Tr\,\Lambda + 4p(p-1) \|\phi\|^{2(p-2)} (\Lambda \phi, \phi)
                    \le 2p(2p-1) \|\phi\|^{2(p-1)} Tr\,\Lambda,    (6.58)

and, by the maximal inequality (6.7),

E\big( \sup_{0 \le s \le t} \|X_s\|^{2p} \big) \le c_p\, E \|X_t\|^{2p},    (6.59)

for some constant c_p > 0. By applying the Itô formula (6.56) and making use of (6.58), we get

E \|X_t\|^{2p} \le p\, E \int_0^t \|X_s\|^{2(p-1)} Tr[\Phi_s R \Phi_s^*]\, ds
              + 2p(p-1)\, E \int_0^t \|X_s\|^{2(p-2)} (\Phi_s R \Phi_s^* X_s, X_s)\, ds    (6.60)
              \le p(2p-1)\, E\Big\{ \sup_{0 \le s \le t} \|X_s\|^{2(p-1)} \int_0^t Tr(\Phi_s R \Phi_s^*)\, ds \Big\}.

By Hölder's inequality with exponents p > 1 and q = p/(p-1), the above yields

E \|X_t\|^{2p} \le p(2p-1)\, \big\{ E \sup_{0 \le s \le t} \|X_s\|^{2p} \big\}^{(p-1)/p} \big\{ E\big[ \int_0^t Tr(\Phi_s R \Phi_s^*)\, ds \big]^p \big\}^{1/p}.    (6.61)

Now it follows from (6.59) and (6.61) that

E\big( \sup_{0 \le s \le t} \|X_s\|^{2p} \big) \le c_p\, p(2p-1) \big\{ E \sup_{0 \le s \le t} \|X_s\|^{2p} \big\}^{(p-1)/p} \big\{ E\big[ \int_0^t Tr(\Phi_s R \Phi_s^*)\, ds \big]^p \big\}^{1/p},

or

E\big( \sup_{0 \le s \le t} \|X_s\|^{2p} \big) \le C_p\, E\big[ \int_0^t Tr(\Phi_s R \Phi_s^*)\, ds \big]^p,

which is the desired inequality (6.57) with C_p = [p(2p-1)c_p]^p.
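The constant C_p = [p(2p-1)c_p]^p from the proof can be checked against simulation in the scalar case X_t = \sigma_0 W_t, where Tr(\Phi_s R \Phi_s^*) = \sigma_0^2 is deterministic. The sketch below assumes the Doob constant c_p = (2p/(2p-1))^{2p} from the standard maximal inequality, which is how (6.7) is usually stated; this choice, and the parameter values, are assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(3)
p, sigma0, t_end = 2, 0.7, 1.0
n_steps, n_paths = 500, 10_000
dt = t_end / n_steps
# scalar case X_t = sigma0 * W_t, so Tr(Phi_s R Phi_s^*) = sigma0^2 and (6.57) reads
# E sup_{s<=t} |X_s|^{2p} <= C_p (sigma0^2 t)^p
X = sigma0 * np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
sup_moment = (np.abs(X).max(axis=1)**(2 * p)).mean()

c_p = (2 * p / (2 * p - 1))**(2 * p)   # assumed Doob constant in (6.59)
C_p = (p * (2 * p - 1) * c_p)**p       # constant produced by the proof
bound = C_p * (sigma0**2 * t_end)**p
```

The simulated supremum moment should sit well below the (deliberately crude) bound, while exceeding the moment of the terminal value alone.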

6.5 Stochastic Evolution Equations

In the previous chapters, we have studied stochastic parabolic and hyperbolic partial differential equations. For a unified approach, they will be treated as two special cases of stochastic ordinary differential equations in a Hilbert or Banach space, known as stochastic evolution equations. This approach, though more technical mathematically, has the advantage of being conceptually simpler, and the general results can be applied to a wider range of problems. To fix the idea, let us revisit the parabolic Itô equation considered in Chapter 3:

\partial u/\partial t = Au + f(Ju, x, t) + \sigma(Ju, x, t)\, \dot W(x, t),
u(x, 0) = h(x),    x \in D,    (6.62)
u(\cdot, t)|_{\partial D} = 0,    t \in (0,T),

where A is the second-order elliptic operator of Chapter 3, Ju = (u; \partial_x u), and

\sigma(Ju, x, t)\, \dot W(x, t) = \sum_{k=1}^m \sigma_k(Ju, x, t)\, \dot W_k(x, t).

As done previously, let H = L^2(D) and write u_t = u(\cdot, t) and W_t^k = W_k(\cdot, t) with W_t = (W_t^1, \dots, W_t^m), where the W_t^k's are independent R_k-Wiener processes in H. Introduce the m-fold product Hilbert space K = (H)^m. Then W_t is an R-Wiener process in K with R = diag\{R_1, \dots, R_m\} being a diagonal operator in K. We also set F_t(u) = f(Ju, \cdot, t) and define the (multiplication) operator \Sigma_t(u): K \to H by

[\Sigma_t(u)\phi](x) = \sigma(Ju, x, t) \cdot \phi(x) = \sum_{k=1}^m \sigma_k(Ju, x, t)\, \phi_k(x),

for \phi = (\phi_1, \dots, \phi_m) \in K. In view of the homogeneous boundary condition in (6.62), we restrict A to the domain D(A) = H^2 \cap H_0^1. Denote H_0^1 by V and H^{-1} by V'. Let A: V \to V' be continuous and coercive, and let F_t(\cdot): V \to H and \Sigma_t(\cdot): V \to L_R^2 satisfy certain regularity conditions. The equation (6.62) can be interpreted as an Itô equation in V':

u_t = h + \int_0^t A u_s\, ds + \int_0^t F_s(u_s)\, ds + \int_0^t \Sigma_s(u_s)\, dW_s,    (6.63)

where the last term is an Itô integral defined in Section 6.3.

Alternatively, let g(t-s, x, y) denote the Green's function for the linear problem associated with equation (6.62). As in Chapter 3, let G_t denote the Green's operator defined by

[G_t \phi](x) = \int_D g(t, x, y)\, \phi(y)\, dy,    \phi \in H,    t > 0.

It is easy to check that, as a linear operator in H, G_t: H \to H is bounded and strongly continuous on [0, \infty). Moreover, the family \{G_t, t \ge 0\} forms a semigroup of bounded linear operators on H with the properties G_0 = I and G_t G_s = G_{t+s} for any t, s \ge 0. The differential operator A is known as the infinitesimal generator of the semigroup G_t, which is often written as e^{tA} (see [77]). When written as an integral equation in terms of the Green's operator, it yields the stochastic integral equation in H:

u_t = G_t h + \int_0^t G_{t-s} F_s(u_s)\, ds + \int_0^t G_{t-s} \Sigma_s(u_s)\, dW_s,    (6.64)

where the last term is a convolutional stochastic integral with the Green's operator G_t as defined in Section 6.3.
The two interpretations given by (6.63) and (6.64) lead to two different notions of solutions: namely, the strong solution and the mild solution, respectively. Recall that the strong solution is also known as a variational solution. The formulation (6.64) is often referred to as a semigroup approach. We will generalize the above setup to a large class of problems.
In general, let K and H be two real separable Hilbert spaces. Most results to be presented in this section hold when H is complex, by a change of notation. Let V \subset H be a reflexive Banach space. Identify H with its dual H' and denote the dual of V by V'. Then we have

V \subset H \cong H' \subset V',

where the inclusions are assumed to be dense and continuous. The triad (V, H, V') is known as a Gelfand triple.

For t \in [0,T], let A_t(\omega) be a family of closed \mathcal F_t-adapted random linear operators in H. Let F_t(\cdot, \omega): V \to H and \Sigma_t(\cdot, \omega): V \to L_R^2, \omega \in \Omega, be \mathcal F_t-adapted random mappings. In K the R-Wiener process is denoted again by W_t, t \ge 0. Corresponding to equation (6.62), we consider

du_t = A_t u_t\, dt + F_t(u_t)\, dt + \Sigma_t(u_t)\, dW_t,    t \in (0,T),
u_0 = h \in H.    (6.65)

As before, unless necessary, the dependence on \omega will be omitted. The equation (6.63) is now generalized to

u_t = h + \int_0^t A_s u_s\, ds + \int_0^t F_s(u_s)\, ds + \int_0^t \Sigma_s(u_s)\, dW_s.    (6.66)

For 0 \le s \le t \le T, let G(t,s) be the Green's operator, or the propagator, associated with A_t satisfying

dG(t,s)/dt = A_t\, G(t,s),    G(s,s) = I,    (6.67)

where I is the identity operator in H. Then, corresponding to (6.64), the stochastic integral equation for a mild solution takes the form

u_t = G(t,0)h + \int_0^t G(t,s) F_s(u_s)\, ds + \int_0^t G(t,s) \Sigma_s(u_s)\, dW_s,    (6.68)

where G(t,s) is a random Green's operator and the last term is an evolutional stochastic integral as mentioned in Section 6.3. In particular, if A_t, F_t, and \Sigma_t are independent of t, the equation (6.65) is said to be autonomous.
Next, instead of (6.62), we consider the parabolic equation with a Poisson noise:

\partial u/\partial t = Au + f(u, x, t) + g(u, x, t)\, \dot P(x, t),
u(x, 0) = h(x),    x \in D,    (6.69)
u(\cdot, t)|_{\partial D} = 0,    t \in (0,T),

where f and g are some given functions. Formally, we denote \dot P = \partial P/\partial t and

P(x, t) = \int_0^t \int_B \phi(x, s, \xi)\, \tilde N(ds, d\xi),    (6.70)

where \phi(x, t, \xi) is a predictable scalar random field such that

E \int_0^T \int_D \int_B |\phi(x, s, \xi)|^2\, dx\, ds\, \mu(d\xi) < \infty.

To simplify the notation, define

\gamma(r, x, t, \xi) = g(r, x, t)\, \phi(x, t, \xi),    t \in [0,T],

which is a predictable random field for r \in \mathbb R, x \in D, \xi \in B. As done previously, let F_t(u) = f(u, \cdot, t) and \Gamma_t(u, \xi) = \gamma(u, \cdot, t, \xi). Then equation (6.69) can be written as

du_t = A u_t\, dt + F_t(u_t)\, dt + \int_B \Gamma_t(u_t, \xi)\, \tilde N(dt, d\xi),
u_0 = h.    (6.71)

To consider a mild solution, the above equation will be converted into the integral equation

u_t = G_t h + \int_0^t G_{t-s} F_s(u_s)\, ds + \int_0^t \int_B G_{t-s}\, \Gamma_s(u_s, \xi)\, \tilde N(ds, d\xi),    (6.72)

where G_t, t \ge 0, is a strongly continuous semigroup on H with infinitesimal generator A.
The following sections will be concerned with the existence and uniqueness questions for equations (6.65) and (6.69), respectively. We will take up the mild solution first, because it is technically less complicated.

6.6 Mild Solutions

In the semigroup approach, it is necessary to assume that A_t = A is independent of t with domain D(A) dense in H, such that A generates a strongly continuous C_0-semigroup G_t, for t \ge 0, which satisfies the properties \lim_{t \to 0} G_t h = h, h \in H, and \|G_t\|_L \le M \exp\{\alpha t\}, for some positive constants M, \alpha [79]. In addition, suppose that, for a.e. (t, \omega) \in [0,T] \times \Omega, F_t(\cdot, \omega): H \to H and \Sigma_t(\cdot, \omega): H \to L(K, H) are \mathcal F_t-adapted and satisfy a certain integrability condition. First we consider the stochastic equation:

du_t = [A u_t + F_t(u_t)]\, dt + \Sigma_t(u_t)\, dW_t,    t \in (0,T),
u_0 = h \in H.    (6.73)

Then the integral equation for a mild solution takes the form:

u_t = G_t h + \int_0^t G_{t-s} F_s(u_s)\, ds + \int_0^t G_{t-s} \Sigma_s(u_s)\, dW_s.    (6.74)

Let us first consider the convolutional stochastic integral

X_t = \int_0^t G_{t-s} \Phi_s\, dW_s,    (6.75)

where Gt is a C0 -semigroup and PT is a predictable process in L2R . Then


we have

Lemma 6.1. Let PT . Then the stochastic integral Xt given by (6.75)


is an Ft adapted, mean-square continuous H-valued process over [0, T ] with
mean EXt = 0 and the covariance operator
Z t
t = E (Gts s Rs Gts )ds, (6.76)
0

so that Z t
2
E kXt k = E T r(Gts s Rs Gts )ds. (6.77)
0

Proof. For any , H, we have


Z t Z t
E (Xt , ) = E (, Gts s dWs ) = E (Gts , s dWs ),
0 0

and
Z t Z t
E (Xt , )(Xt , ) = E[ (Gts , s dWs )][ (Gts , s dWs )].
0 0

For a fixed t, let fs = Gts and gs = Gts . It follows from Lemma 3.5
that

E (Xt , ) = 0,
E (Xt , )(Xt , ) = E (t , ) (6.78)
Z t
=E (Gts s Rs Gts , )ds,
0
which imply that EXt = 0 and the equation (6.76) holds. Let {k } be any
complete orthonormal basis of H. By taking = = k in the second
equation of (6.78) and summing over k, we obtain (6.77).
To show the mean-square continuity, consider

E \|X_{t+\epsilon} - X_t\|^2 = E\, \Big\| \int_t^{t+\epsilon} G_{t+\epsilon-s} \Phi_s\, dW_s + \int_0^t ( G_{t+\epsilon-s} - G_{t-s} ) \Phi_s\, dW_s \Big\|^2
  \le 2 \Big\{ E \Big\| \int_t^{t+\epsilon} G_{t+\epsilon-s} \Phi_s\, dW_s \Big\|^2 + E \Big\| \int_0^t ( G_{t+\epsilon-s} - G_{t-s} ) \Phi_s\, dW_s \Big\|^2 \Big\} = 2( J_\epsilon + K_\epsilon ).    (6.79)

Similar to (6.77), we can get

J_\epsilon = E \int_t^{t+\epsilon} \| G_{t+\epsilon-s} \Phi_s \|_R^2\, ds \le C\, E \int_t^{t+\epsilon} \|\Phi_s\|_R^2\, ds,

and

K_\epsilon = E \int_0^t \| G_{t-s} ( G_\epsilon - I ) \Phi_s \|_R^2\, ds \le C\, E \int_0^t \| ( G_\epsilon - I ) \Phi_s \|_R^2\, ds,

where we recalled that \|\Phi\|_R^2 = Tr( \Phi R \Phi^* ) and made use of the bound \|G_t\|_{L(H)}^2 \le C. Since J_\epsilon \to 0 and K_\epsilon \to 0 as \epsilon \to 0 by the dominated convergence theorem, the mean-square continuity of X_t follows from (6.79).
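Formula (6.77) is easy to verify numerically when G_t, \Phi, and R are simultaneously diagonal: each mode of X_t is then a scalar Ornstein–Uhlenbeck integral, and E\|X_t\|^2 = \sum_k r_k (1 - e^{-2\lambda_k t}) / (2\lambda_k). The sketch below takes G_t as the Dirichlet heat semigroup on (0, \pi) truncated to K modes, with \Phi = I; the eigenvalue choices for R are assumptions made for the demonstration, not data from the text.

```python
import numpy as np

rng = np.random.default_rng(2)
K, t_end, n_steps, n_paths = 8, 0.5, 200, 50_000
kk = np.arange(1, K + 1)
lam = kk.astype(float)**2        # -A has eigenvalues k^2 (Dirichlet Laplacian on (0, pi))
r = 1.0 / kk**2                  # assumed eigenvalues of the covariance R (trace class)

dt = t_end / n_steps
x = np.zeros((n_paths, K))
decay = np.exp(-lam * dt)
# each mode of X_t = int_0^t G_{t-s} Phi dW_s is an Ornstein-Uhlenbeck integral;
# the one-step update below is exact in law, not an Euler approximation
step_std = np.sqrt(r * (1.0 - np.exp(-2.0 * lam * dt)) / (2.0 * lam))
for _ in range(n_steps):
    x = decay * x + step_std * rng.normal(size=(n_paths, K))

simulated = (x**2).sum(axis=1).mean()
# (6.77) in the diagonal case: E ||X_t||^2 = sum_k r_k (1 - e^{-2 lam_k t}) / (2 lam_k)
predicted = (r * (1.0 - np.exp(-2.0 * lam * t_end)) / (2.0 * lam)).sum()
```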

In what follows, we shall present two maximal inequalities that are essential in studying mild solutions. To proceed, we need a couple of definitions. A C_0-semigroup G_t with infinitesimal generator A is said to be a contraction semigroup if \|G_t h\| \le \|h\| for h \in H, and (A\phi, \phi) \le 0 for \phi \in D(A). In particular, when \{U_t, -\infty < t < \infty\} is a C_0-group of unitary operators in a Hilbert space H, we have \|U_t h\| = \|h\| and U_{-t} = U_t^{-1} = U_t^*, where U_t^{-1} and U_t^* denote the inverse and the adjoint of U_t, respectively [79].

Theorem 6.2  Suppose that A generates a contraction semigroup G_t on H and \Phi \in \mathcal P_T satisfies the condition

E\Big\{ \int_0^T [ Tr( \Phi_s R \Phi_s^* ) ]\, ds \Big\}^p < \infty,    (6.80)

for p \ge 1. Then there exists a constant C_p > 0 such that, for any t \in [0,T],

E\Big\{ \sup_{0 \le r \le t} \Big\| \int_0^r G_{r-s} \Phi_s\, dW_s \Big\|^{2p} \Big\} \le C_p\, E\Big\{ \int_0^t \|\Phi_s\|_R^2\, ds \Big\}^p,    (6.81)

where \|\Phi_s\|_R^2 = Tr( \Phi_s R \Phi_s^* ). Moreover, the stochastic integral X_t given by (6.75) has a continuous version.

Proof.  Since the C_0-semigroup \{G_t, t \ge 0\} is a contraction, by making use of a theorem of Sz.-Nagy and Foias (Theorem I.8.1, [87]), there exists a Hilbert space \tilde H \supset H and a unitary group of operators \{U_t, t \in \mathbb R\} on \tilde H such that G_t h = P U_t h, h \in H, where P: \tilde H \to H is an orthogonal projection. Therefore, noticing U_{t-s} = U_t U_{-s}, the convolutional stochastic integral (6.75) can be written as

X_t = \int_0^t P\, U_{t-s} \Phi_s\, dW_s = P\, U_t Y_t,    (6.82)

where

Y_t = \int_0^t U_{-s} \Phi_s\, dW_s    (6.83)

is a well-defined stochastic integral in \tilde H. Since

E\Big\{ \int_0^T [ Tr_{\tilde H}( U_{-s} \Phi_s R \Phi_s^* U_{-s}^* ) ]\, ds \Big\}^p \le E\Big\{ \int_0^T [ Tr( \Phi_s R \Phi_s^* ) ]\, ds \Big\}^p < \infty,    (6.84)

it follows from Lemma 4.3 that

E\Big\{ \sup_{0 \le r \le t} \|Y_r\|_{\tilde H}^{2p} \Big\} \le C_p\, E\Big\{ \int_0^t [ Tr( \Phi_s R \Phi_s^* ) ]\, ds \Big\}^p.    (6.85)

In view of (6.82) and (6.85), we obtain

E\Big\{ \sup_{0 \le r \le t} \|X_r\|^{2p} \Big\} \le E\Big\{ \sup_{0 \le r \le t} \|Y_r\|_{\tilde H}^{2p} \Big\} \le C_p\, E\Big\{ \int_0^t [ Tr( \Phi_s R \Phi_s^* ) ]\, ds \Big\}^p,    (6.86)

which verifies the maximal inequality (6.81). The continuity of X_t follows from the fact that the stochastic integral Y_t has a continuous version in \tilde H.

Remark:  The maximal inequality for the convolutional stochastic integral has been established by several authors (see, e.g., [90], [50]). The simple proof given here, based on expressing G_t as the product of the projector P and the unitary operator U_t, was adopted from the paper [39].

The above theorem was extended to the case of a general C_0-semigroup G_t (see Proposition 7.3, [21]), where, by the method of factorization, the maximal inequality was proved with a slightly weaker upper bound.

Theorem 6.3  Let G_t be a strongly continuous semigroup on H and let the following condition hold:

E\Big\{ \int_0^t \|\Phi_s\|_R^{2p}\, ds \Big\} < \infty.

Then, for p > 1 and t \in [0,T], there exists a constant C_{p,T} > 0 such that the following inequality holds:

E\Big\{ \sup_{0 \le r \le t} \Big\| \int_0^r G_{r-s} \Phi_s\, dW_s \Big\|^{2p} \Big\} \le C_{p,T}\, E\Big\{ \int_0^t \|\Phi_s\|_R^{2p}\, ds \Big\}.    (6.87)

Moreover, the process X_t in H has a continuous version.

Now consider the linear equation with an additive noise:

du_t = [ A u_t + f_t ]\, dt + \Phi_t\, dW_t,    t \in (0,T),
u_0 = h \in H,    (6.88)

where f_t is an \mathcal F_t-adapted, locally integrable process in H. Then the mild solution of (6.88) is given by

u_t = G_t h + \int_0^t G_{t-s} f_s\, ds + \int_0^t G_{t-s} \Phi_s\, dW_s.    (6.89)

With the aid of the above theorems, we can prove the following result.

Theorem 6.4 Suppose that Gt is a strongly continuous semigroup on H,


ft is an Ft -adapted, locally integrable process in H and PT such that,
for p 2,
Z T
E ( kfs kp + ks kpR )ds < . (6.90)
0

Then, for p = 2, the mild solution of the linear equation (6.88) given by
(6.89) is a mean-square continuous, Ft -adapted process in H and there exists
a constant C1 (T ) > 0 such that
Z T
sup E kut k2 C1 (T ){khk2 + E ( kfs k2 + ks k2R )ds}. (6.91)
0tT 0

For p > 2, the solution has a continuous sample path in H with the properties
u Lp (; C([0, T ]; H)) and there exists a constant Cp (T ) > 0 such that
Z T
p
E{ sup kut k } Cp (T ){khk + E p
( kfs kp + ks kpR )ds}. (6.92)
0tT 0

Proof.  From (6.89), we have

\|u_t\|^p \le 3^p \Big\{ \|G_t h\|^p + \Big\| \int_0^t G_{t-s} f_s\, ds \Big\|^p + \Big\| \int_0^t G_{t-s} \Phi_s\, dW_s \Big\|^p \Big\}
         \le 3^p \Big\{ \beta^p \|h\|^p + \beta^p T^{p-1} \int_0^t \|f_s\|^p\, ds + \|X_t\|^p \Big\},    (6.93)

for some \beta > 0, where X_t is given by (6.75). For p = 2, it is easy to show that u_t belongs to C([0,T]; L^2(\Omega; H)) and the inequality (6.91) follows after taking an expectation of (6.93).
For p > 2, by making use of Theorem 6.3, the inequality (6.93) yields

E\Big\{ \sup_{0 \le r \le t} \|u_r\|^p \Big\} \le 3^p \Big\{ \beta^p \|h\|^p + \beta^p T^{p-1} E \int_0^t \|f_s\|^p\, ds + E\Big( \sup_{0 \le r \le t} \|X_r\|^p \Big) \Big\}
  \le 3^p \Big\{ \beta^p \|h\|^p + \beta^p T^{p-1} E \int_0^T \|f_s\|^p\, ds + C_{p/2,T}\, E \int_0^t \|\Phi_s\|_R^p\, ds \Big\},

which implies the desired inequality (6.92). The continuity of the solution can be shown by making use of the Kolmogorov criterion.

Remark:
Here and hereafter, for conciseness, the initial datum u_0 = h is assumed to be non-random. The theorem is still true if u_0 = \xi \in H is an \mathcal F_0-adapted L^p-random variable such that E\|\xi\|^p < \infty. In this case, we simply replace the term \|h\|^p in (6.92) by E\|\xi\|^p.

We now turn to the nonlinear stochastic equation (6.73) with Lipschitz-continuous coefficients and a sublinear growth. To be precise, assume the following conditions A:

(A.1) A is the infinitesimal generator of a strongly continuous semigroup G_t, t \ge 0, in H.

(A.2) For h \in H, t \in [0,T], F_t(h, \omega) is an \mathcal F_t-adapted, locally integrable H-valued process, and \Sigma_t(h, \omega) is an L_R^2-valued process in \mathcal P_R.

(A.3) There exist positive constants b and c such that

E \int_0^T [ \|F_s(0, \omega)\|^p + \|\Sigma_s(0, \omega)\|_R^p ]\, ds \le b,

with p \ge 2, and, for any g \in H,

\|F_t(g, \omega) - F_t(0, \omega)\|^2 + \|\Sigma_t(g, \omega) - \Sigma_t(0, \omega)\|_R^2 \le c (1 + \|g\|^2),

for a.e. (t, \omega) \in [0,T] \times \Omega.

(A.4) There exists a constant \kappa > 0 such that, for any g, h \in H,

\|F_t(g, \omega) - F_t(h, \omega)\|^2 + \|\Sigma_t(g, \omega) - \Sigma_t(h, \omega)\|_R^2 \le \kappa \|g - h\|^2,

for a.e. (t, \omega) \in [0,T] \times \Omega.

Under conditions A, we can generalize Theorem 6.4 to the nonlinear case as follows.

Theorem 6.5  Suppose that the conditions (A.1)–(A.4) are satisfied. Then, for h \in H and p \ge 2, the nonlinear equation (6.73) has a unique mild solution u_t, 0 \le t \le T, with u \in C([0,T]; L^p(\Omega; H)) such that

E\Big\{ \sup_{0 \le t \le T} \|u_t\|^p \Big\} \le K_p(T) \Big\{ 1 + E \int_0^T [ \|F_s(0)\|^p + \|\Sigma_s(0)\|_R^p ]\, ds \Big\},    (6.94)

for some constant K_p(T) > 0. For p > 2, the solution has continuous sample paths with u \in L^p(\Omega; C([0,T]; H)).

Proof.  We will first take care of the uniqueness question. Suppose that u_t and \tilde u_t are both mild solutions satisfying the integral equation (6.74). Set v_t = u_t - \tilde u_t. Then the following equation holds:

v_t = \int_0^t G_{t-s} [ F_s(u_s) - F_s(\tilde u_s) ]\, ds + \int_0^t G_{t-s} [ \Sigma_s(u_s) - \Sigma_s(\tilde u_s) ]\, dW_s.    (6.95)

Let \beta = \sup_{0 \le t \le T} \|G_t\|_{L(H)}. By making use of condition (A.4), we can deduce from (6.95) that

E\, \|v_t\|^2 \le 2\beta^2 T\, E\Big\{ \int_0^t \|F_s(u_s) - F_s(\tilde u_s)\|^2\, ds + \int_0^t \|\Sigma_s(u_s) - \Sigma_s(\tilde u_s)\|_R^2\, ds \Big\}
            \le 2\beta^2 T \kappa \int_0^t E\, \|v_s\|^2\, ds.

It then follows from the Gronwall inequality that

E\, \|v_t\|^2 = E\, \|u_t - \tilde u_t\|^2 = 0,

uniformly in t over [0,T]. Therefore, the solution is unique.

The existence proof is based on the standard contraction mapping principle. Since the proof of existence and the inequality (6.94) for p = 2 is relatively simple, we will prove only the second part of the theorem with p > 2. This will be done in steps.

(Step 1)  For p > 2, let X_{p,T} denote the Banach space of \mathcal F_t-adapted, continuous processes in H with norm

\|u\|_{p,T} = \{ E \sup_{0 \le t \le T} \|u_t\|^p \}^{1/p}.    (6.96)

Define the mapping \Gamma: X_{p,T} \to X_{p,T} as follows:

\zeta_t = \Gamma_t(u) = G_t h + \int_0^t G_{t-s} F_s(u_s)\, ds + \int_0^t G_{t-s} \Sigma_s(u_s)\, dW_s.    (6.97)

To show this map is well defined, let I_1(t) = G_t h,

I_2(t) = \int_0^t G_{t-s} F_s(u_s)\, ds,    and    I_3(t) = \int_0^t G_{t-s} \Sigma_s(u_s)\, dW_s,

so that \zeta_t = \Gamma_t(u) = \sum_{i=1}^3 I_i(t). Then

\|\Gamma u\|_{p,T} \le \sum_{i=1}^3 \|I_i\|_{p,T} = \sum_{i=1}^3 \eta_i(T),    (6.98)

where \eta_i(T) = \{ E( \sup_{0 \le t \le T} \|I_i(t)\|^p ) \}^{1/p}, i = 1, 2, 3.
We shall estimate \eta_i^p(T) separately. First, we have

\eta_1^p(T) = \sup_{0 \le t \le T} \|G_t h\|^p \le \beta^p \|h\|^p,    (6.99)

and

\eta_2^p(T) = E\Big( \sup_{0 \le t \le T} \Big\| \int_0^t G_{t-s} F_s(u_s)\, ds \Big\|^p \Big)
           \le \beta^p\, E\Big[ \int_0^T \|F_s(u_s)\|\, ds \Big]^p
           \le \beta^p\, E\Big\{ \int_0^T [ \|F_s(u_s) - F_s(0)\| + \|F_s(0)\| ]\, ds \Big\}^p.

By making use of condition (A.3) and the Hölder inequality, we can deduce from the above that

\eta_2^p(T) \le (2\beta)^p T^{p-1} E\Big\{ \int_0^T (2c)^{p/2} (1 + \|u_s\|^p)\, ds + \int_0^T \|F_s(0)\|^p\, ds \Big\}    (6.100)
           \le (2\beta)^p T^{p-1} \{ b + (2c)^{p/2} T (1 + \|u\|_{p,T}^p) \}.

Similarly, by Theorem 6.3 and (A.3), we have

\eta_3^p(T) \le C_{p/2,T} \Big\{ E \int_0^T \|\Sigma_s(u_s)\|_R^2\, ds \Big\}^{p/2}    (6.101)
           \le 2^p C_{p/2,T} T^{(p-2)/2} \{ b + (2c)^{p/2} T (1 + \|u\|_{p,T}^p) \}.

By means of Theorem 6.4, it can be shown that \zeta_t = \Gamma_t(u) is continuous in t. This fact, together with (6.98) and the estimates (6.99), (6.100), and (6.101) for I_i, shows that \Gamma maps X_{p,T} into itself.

(Step 2)  We will show that \Gamma is a contraction mapping for a small T = T_1. To this end, for u, \tilde u \in X_{p,T}, we define

J_1(t) = \int_0^t G_{t-s} [ F_s(u_s) - F_s(\tilde u_s) ]\, ds,
J_2(t) = \int_0^t G_{t-s} [ \Sigma_s(u_s) - \Sigma_s(\tilde u_s) ]\, dW_s.

Then

\|\Gamma(u) - \Gamma(\tilde u)\|_{p,T} \le \|J_1\|_{p,T} + \|J_2\|_{p,T}.    (6.102)

By condition (A.4),

\|J_1(t)\| \le \beta \int_0^t \|F_s(u_s) - F_s(\tilde u_s)\|\, ds,

so that

\|J_1\|_{p,T} \le \beta \kappa^{1/2} T \|u - \tilde u\|_{p,T}.    (6.103)

By virtue of Theorem 6.3 and condition (A.4), we can get

\|J_2\|_{p,T}^p \le C_{p/2,T}\, E \int_0^T \|\Sigma_s(u_s) - \Sigma_s(\tilde u_s)\|_R^p\, ds
             \le C_{p/2,T}\, \kappa^{p/2}\, E \int_0^T \|u_s - \tilde u_s\|^p\, ds \le C_{p/2,T}\, \kappa^{p/2}\, T \|u - \tilde u\|_{p,T}^p,

or

\|J_2\|_{p,T} \le \kappa^{1/2} ( C_{p/2,T}\, T )^{1/p} \|u - \tilde u\|_{p,T}.    (6.104)

By taking (6.102), (6.103), and (6.104) into account, we get

\|\Gamma(u) - \Gamma(\tilde u)\|_{p,T} \le \rho(T) \|u - \tilde u\|_{p,T},

where \rho(T) = \kappa^{1/2} \{ \beta T + ( C_{p/2,T} )^{1/p} T^{1/p} \}. Let T = T_1 be sufficiently small so that \rho(T_1) < 1. Then \Gamma is a Lipschitz-continuous contraction mapping in X_{p,T_1} which has a fixed point u. This element u_t is the unique mild solution of (6.73). For T > T_1, we can extend the solution by continuation, from T_1 to T_2 and so on. This completes the existence and uniqueness proof.

(Step 3)  To verify the inequality (6.94), we make a minor change of the estimates for \eta_2^p and \eta_3^p in (6.100) and (6.101) to get

\eta_2^p(T) \le (2\beta)^p T^{p-1} \Big\{ b + (2c)^{p/2} \Big( T + \int_0^T \|u\|_{p,s}^p\, ds \Big) \Big\},

and

\eta_3^p(T) \le 2^p C_{p/2,T} T^{(p-2)/2} \Big\{ b + (2c)^{p/2} \Big( T + \int_0^T \|u\|_{p,s}^p\, ds \Big) \Big\}.

Therefore, it follows from (6.98) and the previous estimates that

\|u\|_{p,T}^p = \|\Gamma u\|_{p,T}^p \le 3^p \sum_{i=1}^3 \eta_i^p(T) \le C_1 (1 + \|h\|^p)
  + C_2 \Big\{ \int_0^T \|u\|_{p,s}^p\, ds + E \int_0^T [ \|F_s(0)\|^p + \|\Sigma_s(0)\|_R^p ]\, ds \Big\},

for some constants C_1, C_2 depending on p and T. With the aid of Gronwall's lemma, the above inequality implies the desired inequality (6.94).

Remarks:

(1) As in the linear case, the theorem holds when the initial datum u_0 = \xi is an \mathcal F_0-measurable L^p-random variable in H, with \|h\|^p in (6.94) replaced by E\|\xi\|^p. Also, here we assumed the Hilbert space H is real. The theorem holds for a complex Hilbert space as well.

(2) For simplicity, we impose the standard Lipschitz continuity and linear growth conditions on the nonlinear terms. As seen from concrete problems in the previous chapters, the Lipschitz condition may be relaxed to hold locally, but then the solution may also be local, unless there exists an a priori bound on the solution.

(3) If the semigroup is analytic and the coefficients are smooth, it is possible to study the regularity properties of a mild solution in Sobolev subspaces of H. More extensive treatment of mild solutions is given in [21].

Now we turn to the mild solution satisfying the following equation with a Poisson noise:

u_t = G_t h + \int_0^t G_{t-s} F_s(u_s)\, ds + \int_0^t \int_B G_{t-s}\, \Gamma_s(u_s, \xi)\, \tilde N(ds, d\xi),    (6.105)

where \Gamma_t(u, \xi) is a predictable random field with respect to the \sigma-field \mathcal F_t generated by the Poisson measure N(s, \cdot) for 0 \le s \le t.
First consider the Poisson integral

Z_t = \int_0^t \int_B G_{t-s}\, \Phi_s(\xi)\, \tilde N(ds, d\xi),    (6.106)

where \Phi_t(\xi), \xi \in B, is a predictable H-valued random field such that

E \int_0^T \int_B \|\Phi_s(\xi)\|^2\, ds\, \mu(d\xi) < \infty.    (6.107)

Then, by Theorem 3.9, the following lemma can be verified readily.

Lemma 6.6  Let Z_t be the integral defined by (6.106) satisfying condition (6.107). Then Z_t is an \mathcal F_t-adapted, mean-square continuous process in H with mean E\, Z_t = 0 and

E\, \|Z_t\|^2 = E \int_0^t \int_B \| G_{t-s}\, \Phi_s(\xi) \|^2\, ds\, \mu(d\xi).    (6.108)

Since the proof is similar to that of Lemma 6.1, it is omitted here. By mimicking the proof of Theorem 6.2, one can show that the analogous maximal inequality for Z_t holds. The result will be presented below as a lemma without proof.

Lemma 6.7  Suppose that G_t, t \ge 0, is a semigroup of contractions on H. Instead of condition (6.107), suppose the following condition holds:

E \int_0^T \int_B \|\Phi_s(\xi)\|^p\, ds\, \mu(d\xi) < \infty,

for p \ge 2. Then there is a constant C_p > 0 such that

E\Big\{ \sup_{0 \le t \le T} \Big\| \int_0^t \int_B G_{t-s}\, \Phi_s(\xi)\, \tilde N(ds, d\xi) \Big\|^p \Big\} \le C_p\, E \int_0^T \int_B \|\Phi_s(\xi)\|^p\, ds\, \mu(d\xi).

For the existence theorem, we shall consider only the mild solution in an L^2 setting, even though the L^p theory can be worked out as in the Wiener case.

Theorem 6.8  Let F_t: H \to H be an \mathcal F_t-adapted continuous function and let \Gamma_t: H \times B \to H be a predictable random field. Assume that there exist positive constants K_1, K_2 such that the following conditions hold for all t \in [0,T], u, v \in H:

\|F_t(u)\|^2 + \int_B \|\Gamma_t(u, \xi)\|^2\, \mu(d\xi) \le K_1 (1 + \|u\|^2)    a.s.,    (6.109)

\|F_t(u) - F_t(v)\|^2 + \int_B \|\Gamma_t(u, \xi) - \Gamma_t(v, \xi)\|^2\, \mu(d\xi) \le K_2 \|u - v\|^2    a.s.    (6.110)

Then, for any h \in H, the equation (6.71) has a unique solution u_t, t \in [0,T], which is an adapted process in H such that

\sup_{0 \le t \le T} E\, \|u_t\|^2 \le C (1 + \|h\|^2),    (6.111)

for some constant C > 0.

Proof.  Let Y_T denote the Banach space of \mathcal F_t-adapted processes in H with norm

\|u\|_T = \{ \sup_{0 \le t \le T} E\, \|u_t\|^2 \}^{1/2}.

We shall prove that the integral equation (6.72) has a unique solution in Y_T by the contraction principle. To proceed, define the mapping \Lambda: Y_T \to Y_T by

\Lambda_t(u) = G_t h + \int_0^t G_{t-s} F_s(u_s)\, ds + \int_0^t \int_B G_{t-s}\, \Gamma_s(u_s, \xi)\, \tilde N(ds, d\xi).    (6.112)

To show that the map is well defined, let \Lambda_t(u) = v_t. Then the above equation yields

E\, \|v_t\|^2 \le 3 \{ J_1(t) + J_2(t) + J_3(t) \},    (6.113)

where

J_1(t) = \|G_t h\|^2 \le C_1 \|h\|^2,
J_2(t) = E\, \Big\| \int_0^t G_{t-s} F_s(u_s)\, ds \Big\|^2 \le C_2\, E \int_0^t \|F_s(u_s)\|^2\, ds,

and, by applying Theorem 3.9,

J_3(t) = E\, \Big\| \int_0^t \int_B G_{t-s}\, \Gamma_s(u_s, \xi)\, \tilde N(ds, d\xi) \Big\|^2 \le C_3\, E \int_0^t \int_B \|\Gamma_s(u_s, \xi)\|^2\, ds\, \mu(d\xi),

where C_1, C_2, C_3 are some positive constants. Now, by making use of the above upper bounds on J_i, i = 1, 2, 3, together with the condition (6.109) in (6.113), one can obtain

E\, \|v_t\|^2 \le C_4 \Big\{ 1 + \|h\|^2 + E \int_0^t \|u_s\|^2\, ds \Big\},

which gives

\sup_{0 \le t \le T} E\, \|v_t\|^2 \le C_4 \{ 1 + \|h\|^2 + T \sup_{0 \le t \le T} E\, \|u_t\|^2 \},

for a constant C_4 > 0. Hence the map \Lambda: Y_T \to Y_T is bounded.


To show that the map is a contraction, let u, \tilde u \in Y_T. From equation (6.112), we get

\|\Lambda_t(u) - \Lambda_t(\tilde u)\|^2 \le 2 \Big\{ \Big\| \int_0^t G_{t-s} [ F_s(u_s) - F_s(\tilde u_s) ]\, ds \Big\|^2
  + \Big\| \int_0^t \int_B G_{t-s} [ \Gamma_s(u_s, \xi) - \Gamma_s(\tilde u_s, \xi) ]\, \tilde N(ds, d\xi) \Big\|^2 \Big\}.

By taking the expectation of the above inequality and applying Theorem 3.9, it can be reduced to

E\, \|\Lambda_t(u) - \Lambda_t(\tilde u)\|^2 \le C_5\, E\Big\{ \int_0^t \|F_s(u_s) - F_s(\tilde u_s)\|^2\, ds
  + \int_0^t \int_B \|\Gamma_s(u_s, \xi) - \Gamma_s(\tilde u_s, \xi)\|^2\, ds\, \mu(d\xi) \Big\}
  \le C_6\, E \int_0^t \|u_s - \tilde u_s\|^2\, ds,

where use was made of the Lipschitz condition (6.110), and C6 , C7 are some
positive constants. It follows that
u)k2 C6 T sup E kut u
sup E kt (u) t ( t k2
0tT 0tT

or p
kt (u) t (
u)kT C6 T E ku ukT .
This shows that is a contraction on YT for sufficiently small T . Therefore it
has a unique fixed point u, which is the mild solution of equation (6.105). The
inequality (6.111) can be verified readily by some bounds as done previously
and the Gronwall inequality.

Remarks:

(1) In view of Theorem 3.9, the Poisson integral on the right-hand side of equation (6.105) is stochastically continuous, and so is the mild solution.

(2) Here we consider only the L^2 solution. With the aid of Lemma 6.7, it is possible to consider the L^p solution for p \ge 2 as in the Wiener case, provided that G_t is a contraction semigroup.

(3) By combining the results for the Wiener and the Poisson noises, we can also deal with the problem with the Lévy type of noise in the L^2 setting. Extensive treatment of the Lévy noise case can be found in the book [78].

To connect the general theorems with specific problems studied in Chapters 3 and 4, we will give some examples. Further applications will be provided later.

(Example 6.1)  Parabolic Itô Equation

As in Chapter 3, let D \subset \mathbb R^d be a bounded domain with a smooth boundary \partial D. We assume the following:

(1) Let A be a second-order strongly elliptic operator in D given by

A = \sum_{j,k=1}^d \frac{\partial}{\partial x_j} \Big[ a_{jk}(x) \frac{\partial}{\partial x_k} \Big] + \sum_{j=1}^d a_j(x) \frac{\partial}{\partial x_j} + a_0(x),    (6.114)

with smooth coefficients a's on \bar D, such that

\sum_{j,k=1}^d \int_D a_{jk}(x) \frac{\partial \phi(x)}{\partial x_j} \frac{\partial \phi(x)}{\partial x_k}\, dx > \sum_{j=1}^d \int_D a_j(x) \frac{\partial \phi(x)}{\partial x_j}\, \phi(x)\, dx + \int_D a_0(x)\, \phi^2(x)\, dx,

for any C^1-function \phi on D.



(2) Let f(r, x, t) and \sigma(r, x, t) be real-valued continuous functions on \mathbb R \times \bar D \times [0,T]. Suppose that there exist constants b_1, b_2 > 0 such that

|f(r, x, t)|^2 + |\sigma(r, x, t)|^2 \le b_1 (1 + |r|^2),
|f(0, x, t)| + |\sigma(0, x, t)| \le b_2,

for any (r, x, t) \in \mathbb R \times \bar D \times [0,T].

(3) There is a positive constant b_3 such that

|f(r, x, t) - f(s, x, t)| + |\sigma(r, x, t) - \sigma(s, x, t)| \le b_3 |r - s|,

for any r, s \in \mathbb R, x \in \bar D, and t \in [0,T].

(4) W_t(x) is an R-Wiener process in L^2(D) with covariance function r(x, y), which is bounded and continuous for x, y \in \bar D.

Now consider the parabolic Itô equation

\partial u/\partial t = Au + f(u, x, t) + \sigma(u, x, t)\, \dot W(x, t),
u|_{\partial D} = 0,    (6.115)
u(x, 0) = h(x),

for t \in (0,T), x \in D. Let H = L^2(D). Under condition (1), by the semigroup theory (p. 210, [79]), with the domain D(A) = H^2 \cap H_0^1, A generates a contraction semigroup G_t in H. Setting F_t(g) = f(g(\cdot), \cdot, t) and regarding \Sigma_t(g) = \sigma(g(\cdot), \cdot, t) as a multiplication operator from K = H into H, it is easy to check that conditions (1)–(3) imply that the conditions (A.1)–(A.4) for Theorem 6.5 are satisfied. Moreover, condition (4) means that W_t = W(\cdot, t) is an R-Wiener process in K with Tr\, R < \infty. By applying Theorem 6.5, we conclude that the initial-boundary value problem (6.115) has a unique continuous mild solution u \in L^p(\Omega; C([0,T]; H)) for any p > 2, and there exists a constant K_p(T) > 0 such that

E\Big\{ \sup_{0 \le t \le T} \Big( \int_D |u(x, t)|^2\, dx \Big)^{p/2} \Big\} \le K_p(T) \Big\{ 1 + \int_0^T \Big[ \int_D f^2(0, x, t)\, dx \Big]^{p/2} dt + \int_0^T \Big[ \int_D r(x, x)\, \sigma^2(0, x, t)\, dx \Big]^{p/2} dt \Big\}.

(Example 6.2) Hyperbolic It


o Equation

Consider the initial-boundary value problem in domain D:


2v
= Av + f (v, x, t) + (v, x, t) W (x, t),
t2 t
v|D = 0, (6.116)
v
v(x, 0) = g(x), (x, 0) = h(x).
t
196 Stochastic Partial Differential Equations

Here we assume that $A$ is given by (6.114) with $a_j = 0$, that is,
\[
A = \sum_{j,k=1}^{d}\frac{\partial}{\partial x_j}\Big[a_{jk}(x)\frac{\partial}{\partial x_k}\Big] + a_0(x),
\tag{6.117}
\]
where $a_{jk} = a_{kj}$ and $a_0$ are smooth coefficients. Then $A$ with domain $H^2\cap H_0^1$ is a self-adjoint, strictly negative operator. Let
\[
u = \begin{pmatrix} v \\ \dot v \end{pmatrix}
\quad\text{with}\quad
u_0 = \begin{pmatrix} g \\ h \end{pmatrix},
\qquad
\mathcal{A} = \begin{pmatrix} 0 & 1 \\ A & 0 \end{pmatrix},
\quad
F_t(u) = \begin{pmatrix} 0 \\ f(v,\cdot,t) \end{pmatrix},
\quad
\Sigma_t(u) = \begin{pmatrix} 0 \\ \sigma(v,\cdot,t) \end{pmatrix}.
\]
Introduce the spaces $H = L^2(D)$ and $\mathcal{H} = H_0^1\times H$ with the energy norm $\|u\|_e = \{\|v\|_1^2 + \|\dot v\|^2\}^{1/2}$, where $\|\cdot\|_1$ and $\|\cdot\|$ denote the norms in $H^1(D)$ and $L^2(D)$, respectively. Let $\mathcal{D}(\mathcal{A}) = \mathcal{D}(A)\times H^1$. Then the equation (6.116) can be put in the standard form (6.73) with $A$ replaced by $\mathcal{A}$. It is known that $\mathcal{A}$ generates a strongly continuous semigroup, in fact a group, in $\mathcal{H}$ (p. 423, [96]). Under the conditions (1)–(4) in Example 6.1, the assumptions (A1)–(A4) are satisfied. Therefore, by applying Theorem 6.5, we can conclude that, for $g\in H_0^1$, $h\in L^2(D)$, the problem (6.116) has a unique continuous mild solution $v\in L^p(\Omega; C([0,T]; H_0^1))$ for any $p > 2$, and there exists a constant $K_p(T) > 0$ such that
\[
E\Big\{\sup_{0\le t\le T}\Big(\|u(\cdot,t)\|_1^2 + \Big\|\frac{\partial u}{\partial t}(\cdot,t)\Big\|^2\Big)^{p/2}\Big\}
\le K_p(T)\Big\{1 + \int_0^T\Big[\int_D f^2(0,x,t)\,dx\Big]^{p/2}dt
+ \int_0^T\Big[\int_D r(x,x)\,\sigma^2(0,x,t)\,dx\Big]^{p/2}dt\Big\}.
\]
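The reduction of the second-order equation to the first-order system for $(v, \dot v)$ can also be tried numerically. The sketch below is a hypothetical toy, not taken from the text: it integrates a one-dimensional constant-coefficient analogue of (6.116) on $D = (0,1)$ as a (position, velocity) pair with a semi-implicit Euler step (velocity updated first, then position), small multiplicative noise $\sigma(w) = 0.05\cos w$ on a single smooth mode, and the illustrative data $g(x) = \sin\pi x$, $h = 0$; the discrete energy $\|w_x\|^2 + \|p\|^2$ mirrors the energy norm $\|u\|_e$ above.

```python
import numpy as np

def simulate_wave(n_x=20, n_t=4000, T=1.0, seed=1):
    # Hypothetical sketch of (6.116) as the first-order system
    # du = v dt, dv = Au dt + sigma(u) dW, with A the 1-D Laplacian.
    # The semi-implicit (symplectic) Euler step is stable since dt << dx.
    rng = np.random.default_rng(seed)
    dx, dt = 1.0 / n_x, T / n_t
    x = np.linspace(0.0, 1.0, n_x + 1)
    w = np.sin(np.pi * x)          # initial position g(x)
    p = np.zeros_like(x)           # initial velocity h(x) = 0
    e = np.sin(np.pi * x)          # spatial mode of the noise
    for _ in range(n_t):
        lap = np.zeros_like(w)
        lap[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dx**2
        dW = np.sqrt(dt) * rng.standard_normal()
        p = p + dt * lap + 0.05 * np.cos(w) * e * dW   # velocity update
        w = w + dt * p                                  # position update
        w[0] = w[-1] = 0.0                              # Dirichlet boundary
    return x, w, p

x, w, p = simulate_wave()
dx = 1.0 / 20
energy = np.sum((np.diff(w) / dx)**2) * dx + np.sum(p**2) * dx
```

Because the scheme is symplectic and the noise is weak, the discrete energy stays close to its initial value $\|g'\|^2 = \pi^2/2$, illustrating the bounded-energy behavior the moment estimate above guarantees.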

(Example 6.3) Parabolic Equation with Poisson Noise

Let us return to the initial-boundary value problem (6.69) in a bounded domain $D$, where $A$ is the strongly elliptic operator given as in Example 6.1. Suppose that $f(r,x,t)$ and $g(r,x,t)$ are real-valued functions and there exist positive constants $c_1, c_2, c_3$ such that, for all $r,s\in\mathbb{R}$, $x\in D$, $t\in[0,T]$, the following conditions hold:

(1)
\[
|f(r,x,t)|^2 + |g(r,x,t)|^2 \le c_1(1+|r|^2),
\]

(2)
\[
|f(r,x,t) - f(s,x,t)|^2 + |g(r,x,t) - g(s,x,t)|^2 \le c_2|r-s|^2.
\]

(3) $\phi(x,t,\xi)$ is an adapted real-valued random field with $x\in D$, $t\in[0,T]$ and $\xi\in B$ such that
\[
\sup_{x\in D,\,0\le t\le T}\int_B |\phi(x,t,\xi)|^2\,\mu(d\xi) \le c_3 \quad a.s.
\]

Under the above conditions, we claim that the conditions (6.109) and (6.110) in Theorem 6.8 are satisfied. To verify (6.109), by making use of conditions (1) and (3), we can get, for $u\in H$,
\[
\begin{aligned}
\|F_t(u)\|^2 &+ \int_B \|\Phi_t(u,\xi)\|^2\,\mu(d\xi)\\
&= \int_D |f(u,x,t)|^2\,dx + \int_D\int_B |g(u,x,t)\,\phi(x,t,\xi)|^2\,\mu(d\xi)\,dx\\
&\le c_1\int_D (1+|u|^2)\,dx\,\Big\{1 + \sup_{x\in D,\,0\le t\le T}\int_B |\phi(x,t,\xi)|^2\,\mu(d\xi)\Big\}\\
&\le c_1(1+c_3)(|D| + \|u\|^2) \le K_1(1+\|u\|^2),
\end{aligned}
\]
where $|D|$ denotes the volume of $D$ and $K_1 = c_1(1+c_3)\max\{1,|D|\}$.

Similarly, by invoking conditions (2) and (3), the inequality (6.110) can be verified as follows. For $u, v\in H$, we have
\[
\begin{aligned}
\|F_t(u) - F_t(v)\|^2 &+ \int_B \|\Phi_t(u,\xi) - \Phi_t(v,\xi)\|^2\,\mu(d\xi)\\
&= \int_D |f(u,x,t) - f(v,x,t)|^2\,dx
+ \int_D\int_B \big|[g(u,x,t) - g(v,x,t)]\,\phi(x,t,\xi)\big|^2\,\mu(d\xi)\,dx\\
&\le c_2\int_D |u-v|^2\,dx + c_2\int_D |u-v|^2\int_B |\phi(x,t,\xi)|^2\,\mu(d\xi)\,dx\\
&\le c_2(1+c_3)\int_D |u-v|^2\,dx \le K_2\|u-v\|^2,
\end{aligned}
\]
with $K_2 = c_2(1+c_3)$. Therefore the conditions (6.109) and (6.110) hold true. By Theorem 6.8, for $h\in H$, the initial-boundary value problem (6.69) has a unique mild solution $u$ such that
\[
\sup_{0\le t\le T} E\int_D |u(x,t)|^2\,dx \le C\Big(1 + \int_D |h(x)|^2\,dx\Big)
\]
for some constant $C > 0$.

6.7 Strong Solutions


As in Section 6.5, let $K, H$ be two real Hilbert spaces and let $V\subset H$ be a reflexive Banach space with dual space $V'$. By identifying $H$ with its dual $H'$, we have
\[
V\subset H \cong H' \subset V',
\]
where the inclusions are assumed to be dense and continuous. Following the previous notation, let the norms in $V$, $H$, and $V'$ be denoted by $\|\cdot\|_V$, $\|\cdot\|$ and $\|\cdot\|_{V'}$, respectively. The inner product of $H$ and the duality pairing between $V'$ and $V$ will be denoted by $(\cdot,\cdot)$ and $\langle\cdot,\cdot\rangle$.

Let $A_t(\omega): V\to V'$, $F_t(\cdot,\omega): V\to V'$ and $\Sigma_t(\cdot,\omega): V\to L^2_R$ a.e. $(t,\omega)\in\Omega_T = [0,T]\times\Omega$. As usual, $W_t$ is a $K$-valued $R$-Wiener process with $\mathrm{Tr}\,R < \infty$. Here we say that an $\mathcal{F}_t$-adapted $V$-valued process $u_t$ is a strong solution, or a variational solution, of the equation (6.62) if $u\in L^2(\Omega_T; V)$ and, for any $\phi\in V$, the equation
\[
(u_t,\phi) = (h,\phi) + \int_0^t \langle A_s u_s, \phi\rangle\,ds + \int_0^t (F_s(u_s),\phi)\,ds + \int_0^t (\phi, \Sigma_s(u_s)\,dW_s)
\tag{6.118}
\]
holds for each $t\in[0,T]$ a.s. Alternatively, $u_t$ satisfies the Itô equation (6.63) in $V'$.
We shall impose the following conditions on $A_t$:

(B.1) Let $A_t$ be a continuous family of closed random linear operators with domain $\mathcal{D}(A)$ (independent of $t, \omega$) dense in $H$ such that $A_t: V\to V'$ and, for any $v\in V$, $A_t v$ is an adapted continuous $V'$-valued process.

(B.2) For any $u, v\in V$ there exists $\alpha > 0$ such that
\[
|\langle A_t u, v\rangle| \le \alpha\|u\|_V\|v\|_V, \quad a.e.\ (t,\omega)\in\Omega_T.
\]

(B.3) $A_t$ satisfies the coercivity condition: there exist constants $\beta > 0$ and $\lambda$ such that
\[
\langle A_t v, v\rangle \le -\beta\|v\|_V^2 + \lambda\|v\|^2, \quad \forall\,v\in V,\ a.e.\ (t,\omega)\in\Omega_T.
\]

Before dealing with the equation (6.118), consider the linear problem:
\[
du_t = A_t u_t\,dt + f_t\,dt + \Sigma_t\,dW_t, \quad 0 < t < T, \qquad u_0 = h,
\tag{6.119}
\]
or
\[
u_t = h + \int_0^t A_s u_s\,ds + \int_0^t f_s\,ds + M_t,
\tag{6.120}
\]
where $f_t$ is an adapted, locally integrable process in $L^2(\Omega_T; H)$; $\Sigma_t$ is a predictable operator in $\mathcal{P}_T$ and $M_t = \int_0^t \Sigma_s\,dW_s$ is a $V_0$-valued martingale, where $V_0\subset\mathcal{D}(A)$ is a Hilbert space dense in $H$. We shall first prove the existence of a unique solution to (6.119) by assuming that the martingale $M_t$ is smooth.

Lemma 7.1 Suppose that the conditions (B.1)–(B.3) hold and $f_t$ is an $\mathcal{F}_t$-adapted process in $L^2(\Omega_T; H)$. Assume that $M_t$ is a $V_0$-valued process such that $N_t = A_t M_t$ is an $L^2$-martingale in $H$. Then, for $h\in H$, the equation (6.120) has a unique strong solution $u\in L^2(\Omega; C([0,T]; H))\cap L^2(\Omega_T; V)$ such that the energy equation holds:
\[
\|u_t\|^2 = \|h\|^2 + 2\int_0^t \langle A_s u_s, u_s\rangle\,ds + 2\int_0^t (f_s, u_s)\,ds
+ 2\int_0^t (u_s, \Sigma_s\,dW_s) + \int_0^t \|\Sigma_s\|_R^2\,ds.
\tag{6.121}
\]

Proof. Since the martingale in (6.120) is smooth, it is possible, by a simple transformation, to reduce the problem to an almost deterministic case. To proceed, we let
\[
v_t = u_t - M_t.
\tag{6.122}
\]
Then it follows from (6.120) that $v_t\in V$ and it satisfies
\[
v_t = h + \int_0^t A_s v_s\,ds + \int_0^t (f_s + N_s)\,ds,
\tag{6.123}
\]
for almost every $(t,\omega)\in\Omega_T$. Since $\tilde f_t = (f_t + N_t)\in L^2((0,T); H)$ a.s., by appealing to a deterministic result in [62], the equation (6.123) has a unique solution $v\in C([0,T]; H)\cap L^2((0,T); V)$ satisfying
\[
\|v_t\|^2 = \|h\|^2 + 2\int_0^t \langle A_s v_s, v_s\rangle\,ds + 2\int_0^t (f_s + N_s, v_s)\,ds \quad a.s.
\tag{6.124}
\]
By condition (B.3) and the Gronwall inequality, we can deduce that
\[
E\Big\{\sup_{0\le t\le T}\|v_t\|^2 + \int_0^T \|v_s\|_V^2\,ds\Big\} \le K,
\]
for some $K > 0$. In view of (6.122) and the above inequality, we can assert that $u$ belongs to $L^2(\Omega; C([0,T]; H))\cap L^2(\Omega_T; V)$ and it is the unique solution of (6.120). The energy equation (6.121) can be obtained from (6.124) by substituting $v = (u - M)$ back in (6.124) and making use of Itô's formula.

Now we will show that the statement of the lemma holds true when $M_t$ is an $H$-valued martingale.

Theorem 7.2 Let conditions (B.1)–(B.3) hold true. Assume that $f_t$ is a predictable $H$-valued process and $\Sigma\in\mathcal{P}_T$ such that
\[
E\int_0^T (\|f_s\|^2 + \|\Sigma_s\|_R^2)\,ds < \infty.
\tag{6.125}
\]
Then, for $h\in H$, the linear problem (6.119) has a unique strong solution $u\in L^2(\Omega; C([0,T]; H))\cap L^2(\Omega_T; V)$ such that the energy equation holds:
\[
\|u_t\|^2 = \|h\|^2 + 2\int_0^t \langle A_s u_s, u_s\rangle\,ds + 2\int_0^t (f_s, u_s)\,ds
+ 2\int_0^t (u_s, \Sigma_s\,dW_s) + \int_0^t \|\Sigma_s\|_R^2\,ds.
\tag{6.126}
\]

Proof. For uniqueness, suppose that $u$ and $\tilde u$ are both strong solutions of (6.119). Let $v = (u - \tilde u)$. Then $v$ satisfies the homogeneous equation
\[
dv_t = A_t v_t\,dt, \qquad v_0 = 0,
\]
which, as in the deterministic case, implies that
\[
\|v_t\|^2 = 2\int_0^t \langle A_s v_s, v_s\rangle\,ds \quad a.s.
\]
By condition (B.3), the above yields
\[
\|v_t\|^2 + 2\beta\int_0^t \|v_s\|_V^2\,ds \le 2\lambda\int_0^t \|v_s\|^2\,ds.
\]
It follows from Gronwall's inequality that
\[
\|v_t\|^2 = \|u_t - \tilde u_t\|^2 = 0,
\quad\text{and}\quad
\int_0^T \|u_t - \tilde u_t\|_V^2\,dt = 0, \quad a.s.
\]

We will carry out the existence proof in several steps.

(Step 1) Approximate Solutions:

Let $\{\phi_k\}$, with $\phi_k\in V_0$, $k = 1, 2, \dots$, be a complete orthonormal basis for $H$, which also spans $V_0$. Let $H_n = \mathrm{span}\{\phi_1,\dots,\phi_n\}$. Suppose that $P_n: H\to H_n$ is an orthogonal projection operator defined by $P_n g = \sum_{k=1}^n (g,\phi_k)\phi_k$ for $g\in H$. Then $P_n g\in\mathcal{D}(A)$. Since $\Sigma\in\mathcal{P}_T$, $M_t$ is an $L^2$-martingale in $H$. Define $M_t^n = P_n M_t$ so that $M_t^n$ is an $L^2$-martingale in $\mathcal{D}(A)$. In lieu of (6.120), for $n = 1, 2, \dots$, consider the approximate equations:
\[
u_t^n = h + \int_0^t A_s u_s^n\,ds + \int_0^t f_s\,ds + M_t^n.
\tag{6.127}
\]
Because $N_t^n = A_t M_t^n$ is an $L^2$-martingale in $H$, it follows from Lemma 7.1 that the approximate equation (6.127) has a unique solution $u^n$ belonging to $L^2(\Omega; C([0,T]; H))\cap L^2(\Omega_T; V)$ such that it satisfies
\[
\|u^n(\cdot,t)\|^2 = \|h\|^2 + 2\int_0^t \langle A_s u_s^n, u_s^n\rangle\,ds + 2\int_0^t (f_s, u_s^n)\,ds
+ 2\int_0^t (u_s^n, dM_s^n) + \int_0^t \|P_n\Sigma_s\|_R^2\,ds.
\tag{6.128}
\]

(Step 2) Convergent Sequence of Approximations:

Let $Y_T$ denote the Banach space of $\mathcal{F}_t$-adapted, $V$-valued processes over $[0,T]$, with continuous sample paths in $H$ and norm $\|\cdot\|_T$ defined by
\[
\|u\|_T = \Big\{E\Big[\sup_{0\le t\le T}\|u_t\|^2 + \int_0^T \|u_s\|_V^2\,ds\Big]\Big\}^{1/2}.
\tag{6.129}
\]
Let $u^m$ be the solution of (6.127) with $n$ replaced by $m$, and put $u^{mn} = (u^m - u^n)$ and $M_t^{mn} = (M_t^m - M_t^n)$. Then it follows from (6.127) that $u^{mn}$ satisfies the equation:
\[
u_t^{mn} = \int_0^t A_s u_s^{mn}\,ds + M_t^{mn}.
\tag{6.130}
\]
0

By the coercivity condition (B.3) and (6.128), we obtain from (6.130) that
\[
\|u_t^{mn}\|^2 + 2\beta\int_0^t \|u_s^{mn}\|_V^2\,ds
\le 2\lambda\int_0^t \|u_s^{mn}\|^2\,ds
+ \int_0^t \mathrm{Tr}\,Q_s^{mn}\,ds
+ 2\int_0^t (u_s^{mn}, dM_s^{mn}),
\tag{6.131}
\]
where $Q_s^{mn}$ denotes the local covariation operator for $M_s^{mn}$. After taking the expectation, the above gives
\[
E\|u_t^{mn}\|^2 + 2\beta\,E\int_0^t \|u_s^{mn}\|_V^2\,ds
\le 2\lambda\int_0^t E\|u_s^{mn}\|^2\,ds + E\int_0^t \mathrm{Tr}\,Q_s^{mn}\,ds.
\tag{6.132}
\]
Therefore, by condition (6.125) and the Gronwall lemma, we can deduce, first, that
\[
\sup_{0\le t\le T} E\|u_t^{mn}\|^2 \le K_1\,E\int_0^T \mathrm{Tr}\,Q_s^{mn}\,ds,
\tag{6.133}
\]
so that (6.132) yields
\[
2\beta\,E\int_0^T \|u_s^{mn}\|_V^2\,ds
\le 2\lambda T\sup_{0\le t\le T}E\|u_t^{mn}\|^2 + E\int_0^T \mathrm{Tr}\,Q_s^{mn}\,ds
\le (2\lambda K_1 T + 1)\,E\int_0^T \mathrm{Tr}\,Q_s^{mn}\,ds,
\]
or
\[
E\int_0^T \|u_s^{mn}\|_V^2\,ds \le K_2\,E\int_0^T \mathrm{Tr}\,Q_s^{mn}\,ds,
\tag{6.134}
\]

for some constants $K_1, K_2 > 0$. Moreover, it can be derived from (6.131) that
\[
E\sup_{0\le r\le t}\|u_r^{mn}\|^2
\le 2\lambda\,E\int_0^t \sup_{0\le r\le s}\|u_r^{mn}\|^2\,ds
+ E\int_0^t \mathrm{Tr}\,Q_s^{mn}\,ds
+ 2\,E\sup_{0\le r\le t}\Big|\int_0^r (u_s^{mn}, dM_s^{mn})\Big|,
\tag{6.135}
\]
where
\[
\int_0^t \mathrm{Tr}\,Q_s^{mn}\,ds = \int_0^t \|(P_m - P_n)\Sigma_s\|_R^2\,ds.
\tag{6.136}
\]
By a maximal inequality in Theorem 2.3,
\[
\begin{aligned}
E\sup_{0\le r\le t}\Big|\int_0^r (u_s^{mn}, dM_s^{mn})\Big|
&\le 3\,E\Big\{\int_0^t (Q_s^{mn}u_s^{mn}, u_s^{mn})\,ds\Big\}^{1/2}\\
&\le 3\,E\Big\{\sup_{0\le r\le t}\|u_r^{mn}\|\Big(\int_0^t \mathrm{Tr}\,Q_s^{mn}\,ds\Big)^{1/2}\Big\}\\
&\le \frac14\,E\sup_{0\le r\le t}\|u_r^{mn}\|^2 + 9\,E\int_0^t \mathrm{Tr}\,Q_s^{mn}\,ds.
\end{aligned}
\tag{6.137}
\]
By taking (6.135) and (6.137) into account and invoking the Gronwall inequality again, we can find a constant $K_3 > 0$ such that
\[
E\sup_{0\le t\le T}\|u_t^{mn}\|^2 \le K_3\,E\int_0^T \mathrm{Tr}\,Q_s^{mn}\,ds.
\tag{6.138}
\]

From (6.134) and (6.138), we obtain
\[
\|u^{mn}\|_T^2 = \|u^m - u^n\|_T^2 \le (K_2 + K_3)\,E\int_0^T \mathrm{Tr}\,Q_s^{mn}\,ds.
\]
In view of (6.136), we can deduce that $E\int_0^T \mathrm{Tr}\,Q_s^{mn}\,ds\to 0$ as $m, n\to\infty$. Therefore $\{u^n\}$ is a Cauchy sequence in $Y_T$ which converges to a limit $u$.
(Step 3) Strong Solution:

We will show that the limit $u$ thus obtained is a strong solution. To this end, let $\phi\in V$ and apply the Itô formula to get
\[
(u_t^n,\phi) = (h,\phi) + \int_0^t \langle A_s u_s^n,\phi\rangle\,ds + \int_0^t (f_s,\phi)\,ds + (M_t^n,\phi).
\tag{6.139}
\]
By taking the limit as $n\to\infty$, the equation (6.139) converges term-wise in $L^2(\Omega)$ to
\[
(u_t,\phi) = (h,\phi) + \int_0^t \langle A_s u_s,\phi\rangle\,ds + \int_0^t (f_s,\phi)\,ds + (M_t,\phi),
\tag{6.140}
\]
for any $\phi\in V$. Therefore $u$ is the strong solution.

(Step 4) Energy Equation:

To verify the energy equation (6.126), we proceed by taking the term-wise limit in the approximate equation (6.128). It is easy to show that $\|u^n(\cdot,t)\|^2\to\|u_t\|^2$ in the mean and $\int_0^t (f_s, u_s^n)\,ds\to\int_0^t (f_s, u_s)\,ds$ in mean square. By the monotone convergence, the term $\int_0^t \|P_n\Sigma_s\|_R^2\,ds$ converges in the mean to $\int_0^t \|\Sigma_s\|_R^2\,ds$. For the remaining two terms in (6.128), first consider
\[
\begin{aligned}
\Big|\int_0^t \langle A_s u_s, u_s\rangle\,ds - \int_0^t \langle A_s u_s^n, u_s^n\rangle\,ds\Big|
&\le \int_0^t |\langle A_s(u_s - u_s^n), u_s\rangle|\,ds + \int_0^t |\langle A_s u_s^n, u_s - u_s^n\rangle|\,ds\\
&\le \alpha\int_0^t (\|u_s\|_V + \|u_s^n\|_V)\,\|u_s - u_s^n\|_V\,ds,
\end{aligned}
\]
where use was made of condition (B.2). Therefore,
\[
E\Big|\int_0^t \langle A_s u_s, u_s\rangle\,ds - \int_0^t \langle A_s u_s^n, u_s^n\rangle\,ds\Big|^2
\le 2\alpha^2\Big\{E\int_0^T (\|u_s\|_V^2 + \|u_s^n\|_V^2)\,ds\Big\}\Big\{E\int_0^T \|u_s - u_s^n\|_V^2\,ds\Big\},
\]
which converges to zero as $n\to\infty$. Finally, for the stochastic integral term, we have
\[
E\Big|\int_0^t (u_s, dM_s) - \int_0^t (u_s^n, dM_s^n)\Big|
\le E\Big|\int_0^t (u_s - u_s^n, dM_s)\Big| + E\Big|\int_0^t (u_s^n, d\tilde M_s^n)\Big|,
\]
where $\tilde M_s^n = M_s - M_s^n$. Now, by a maximal inequality, we have
\[
\begin{aligned}
E\sup_{0\le t\le T}\Big|\int_0^t (u_s - u_s^n, dM_s)\Big|
&\le 3\,E\Big\{\int_0^T \|\Sigma_s\|_R^2\,\|u_s - u_s^n\|^2\,ds\Big\}^{1/2}\\
&\le 3\,\Big\{E\sup_{0\le s\le T}\|u_s - u_s^n\|^2\Big\}^{1/2}\Big\{E\int_0^T \|\Sigma_s\|_R^2\,ds\Big\}^{1/2}\to 0,
\end{aligned}
\]
as $n\to\infty$. Similarly,
\[
E\sup_{0\le t\le T}\Big|\int_0^t (u_s^n, d\tilde M_s^n)\Big|
\le 3\,E\Big\{\int_0^T \|\Sigma_s - \Sigma_s^n\|_R^2\,\|u_s^n\|^2\,ds\Big\}^{1/2}
\le 3\,\Big\{E\sup_{0\le s\le T}\|u_s^n\|^2\Big\}^{1/2}\Big\{E\int_0^T \|\Sigma_s - \Sigma_s^n\|_R^2\,ds\Big\}^{1/2},
\]
which also tends to zero as $n\to\infty$. The above inequalities imply that $E\sup_{0\le t\le T}\big|\int_0^t (u_s, dM_s) - \int_0^t (u_s^n, dM_s^n)\big|\to 0$. Finally, by taking all of the above inequalities into account, we obtain the energy equation as given by (6.126).

Remark: Here we assumed that $f_t$ is a predictable $H$-valued process. It is not hard to show that the theorem holds for $f\in L^2(\Omega_T; V')$ as well.

As a consequence of this theorem, the following lemma will be found useful later.

Lemma 7.3 Assume that all conditions for Theorem 7.2 are satisfied. Then the following energy inequality holds:
\[
E\sup_{0\le t\le T}\|u_t\|^2 + E\int_0^T \|u_s\|_V^2\,ds
\le C_T\Big\{\|h\|^2 + E\int_0^T (\|f_s\|^2 + \|\Sigma_s\|_R^2)\,ds\Big\},
\tag{6.141}
\]
for some constant $C_T > 0$.

Proof. As expected, the inequality of interest follows from the energy equation (6.126). Since the proof is similar to that in (Step 2) of Theorem 7.2, it will only be sketched. By the coercivity condition (B.3), we can deduce from (6.126) that
\[
\|u_t\|^2 + 2\beta\int_0^t \|u_s\|_V^2\,ds
\le \|h\|^2 + (2\lambda+1)\int_0^t \|u_s\|^2\,ds
+ \int_0^t (\|f_s\|^2 + \|\Sigma_s\|_R^2)\,ds + 2\int_0^t (u_s, \Sigma_s\,dW_s).
\tag{6.142}
\]
By taking the expectation over the above equation and making use of Gronwall's inequality, it can be shown that there exist positive constants $C_1$ and $C_2$ depending on $T$ such that
\[
\sup_{0\le t\le T} E\|u_t\|^2 \le C_1\Big\{\|h\|^2 + E\int_0^T (\|f_s\|^2 + \|\Sigma_s\|_R^2)\,ds\Big\},
\tag{6.143}
\]
and
\[
E\int_0^T \|u_s\|_V^2\,ds \le C_2\Big\{\|h\|^2 + E\int_0^T (\|f_s\|^2 + \|\Sigma_s\|_R^2)\,ds\Big\}.
\tag{6.144}
\]
Returning to (6.142), we can get
\[
E\sup_{0\le t\le T}\|u_t\|^2
\le \|h\|^2 + (2\lambda+1)\,E\int_0^T \|u_s\|^2\,ds
+ E\int_0^T (\|f_s\|^2 + \|\Sigma_s\|_R^2)\,ds
+ 2\,E\sup_{0\le t\le T}\Big|\int_0^t (u_s, \Sigma_s\,dW_s)\Big|.
\tag{6.145}
\]
With the aid of (6.143) and a maximal inequality, it is possible to derive from (6.145) that
\[
E\sup_{0\le t\le T}\|u_t\|^2 \le C_3\Big\{\|h\|^2 + E\int_0^T (\|f_s\|^2 + \|\Sigma_s\|_R^2)\,ds\Big\}.
\tag{6.146}
\]
Now the energy inequality follows from (6.144) and (6.146).

We will next consider the strong solution of the nonlinear equation (6.118) with regular nonlinear terms. More precisely, let them satisfy the following linear growth and Lipschitz conditions:

(C.1) For any $v\in H$, $F_t(v)$ and $\Sigma_t(v)$ are $\mathcal{F}_t$-adapted processes with values in $H$ and $L^2_R$, respectively. Suppose that there exist positive constants $b, C$ such that
\[
E\int_0^T \{\|F_t(0)\|^2 + \|\Sigma_t(0)\|_R^2\}\,dt \le b,
\]
and
\[
\|\tilde F_t(v)\|^2 + \|\tilde\Sigma_t(v)\|_R^2 \le C(1+\|v\|^2), \quad a.e.\ (t,\omega)\in\Omega_T,
\]
where we set
\[
\tilde F_t(v) = F_t(v) - F_t(0), \qquad \tilde\Sigma_t(v) = \Sigma_t(v) - \Sigma_t(0).
\]

(C.2) There exists a constant $\kappa > 0$ such that, for any $u, v\in H$, the Lipschitz continuity condition holds:
\[
\|F_t(u) - F_t(v)\|^2 + \|\Sigma_t(u) - \Sigma_t(v)\|_R^2 \le \kappa\|u-v\|^2, \quad a.e.\ (t,\omega)\in\Omega_T.
\]

Theorem 7.4 Let conditions (B.1)–(B.3), (C.1), and (C.2) hold true. Then, for $h\in H$, the nonlinear problem (6.65) has a unique strong solution $u\in L^2(\Omega; C([0,T]; H))\cap L^2(\Omega_T; V)$ such that the energy equation holds:
\[
\|u_t\|^2 = \|h\|^2 + 2\int_0^t \langle A_s u_s, u_s\rangle\,ds + 2\int_0^t (F_s(u_s), u_s)\,ds
+ 2\int_0^t (u_s, \Sigma_s(u_s)\,dW_s) + \int_0^t \|\Sigma_s(u_s)\|_R^2\,ds.
\tag{6.147}
\]

Proof. The theorem can be proved by the contraction mapping principle. To this end, let $Y_T$ be the space introduced in the proof of Theorem 7.2 with norm $\|\cdot\|_T$ defined by (6.129). Given $u\in Y_T$, let $\eta$ be the solution of the equation
\[
\eta_t = h + \int_0^t A_s\eta_s\,ds + \int_0^t F_s(u_s)\,ds + \int_0^t \Sigma_s(u_s)\,dW_s.
\tag{6.148}
\]

Then, by condition (C.1), we have
\[
\begin{aligned}
E\int_0^T [\|F_s(u_s)\|^2 + \|\Sigma_s(u_s)\|_R^2]\,ds
&\le 2\,E\int_0^T \big\{[\|\tilde F_s(u_s)\|^2 + \|\tilde\Sigma_s(u_s)\|_R^2] + [\|F_s(0)\|^2 + \|\Sigma_s(0)\|_R^2]\big\}\,ds\\
&\le 2\Big[C\Big(T + E\int_0^T \|u_s\|^2\,ds\Big) + b\Big].
\end{aligned}
\]
For a given $u\in Y_T$, by Theorem 7.2, equation (6.148) has a unique solution $\eta\in Y_T$. Let $\Gamma$ denote the solution map so that $\eta_t = \Gamma_t u$. By Lemma 7.3, the map $\Gamma: Y_T\to Y_T$ is bounded. To show the contraction property, for $u, \tilde u\in Y_T$, let $\eta = \Gamma u$ and $\tilde\eta = \Gamma\tilde u$. It follows from (6.148), by definition, that $\zeta = (\eta - \tilde\eta)$ satisfies
\[
\zeta_t = \int_0^t A_s\zeta_s\,ds + \int_0^t [F_s(u_s) - F_s(\tilde u_s)]\,ds + \int_0^t [\Sigma_s(u_s) - \Sigma_s(\tilde u_s)]\,dW_s.
\tag{6.149}
\]
By making use of the energy equation (6.126) and taking conditions (B.3) and (C.2) into account, we can deduce that
\[
\begin{aligned}
\|\zeta_t\|^2 + 2\beta\int_0^t \|\zeta_s\|_V^2\,ds
&\le 2\lambda\int_0^t \|\zeta_s\|^2\,ds
+ 2\int_0^t ([F_s(u_s) - F_s(\tilde u_s)], \zeta_s)\,ds\\
&\quad + \int_0^t \|\Sigma_s(u_s) - \Sigma_s(\tilde u_s)\|_R^2\,ds
+ 2\int_0^t (\zeta_s, [\Sigma_s(u_s) - \Sigma_s(\tilde u_s)]\,dW_s)\\
&\le (2\lambda+1)\int_0^t \|\zeta_s\|^2\,ds + 2\kappa\int_0^t \|u_s - \tilde u_s\|^2\,ds\\
&\quad + 2\int_0^t (\zeta_s, [\Sigma_s(u_s) - \Sigma_s(\tilde u_s)]\,dW_s).
\end{aligned}
\tag{6.150}
\]

Now, since the following estimates are similar to those in the proof of Lemma 7.3, we omit the details. First, by taking the expectation of (6.150) and then applying Gronwall's inequality, we can get
\[
\sup_{0\le r\le t} E\|\zeta_r\|^2 \le C_1(T)\,E\int_0^t \|u_s - \tilde u_s\|^2\,ds,
\]
and
\[
E\int_0^T \|\zeta_s\|_V^2\,ds \le C_2(T)\,E\int_0^T \|u_s - \tilde u_s\|^2\,ds,
\tag{6.151}
\]
where $C_1 = 2\kappa\exp\{(2\lambda+1)T\}$ and $C_2 = (C_1 T + 2\kappa)/2\beta$. Next, by taking the supremum of (6.150) over $t$ and then the expectation, it can be shown that
\[
E\sup_{0\le t\le T}\|\zeta_t\|^2 \le C_3(T)\,E\int_0^T \|u_s - \tilde u_s\|^2\,ds,
\tag{6.152}
\]
where $C_3 = 2[C_1 T(2\lambda+1) + 20\kappa]$. It follows from (6.151) and (6.152) that
\[
\|\Gamma u - \Gamma\tilde u\|_T^2 = \|\zeta\|_T^2
\le (C_2 + C_3)\,E\int_0^T \|u_s - \tilde u_s\|^2\,ds
\le (C_2 + C_3)\,T\,\|u - \tilde u\|_T^2,
\]
which shows that $\Gamma$ is a contraction mapping in $Y_T$ for small $T$. Therefore, by continuation, there exists a unique strong solution for any $T > 0$. The energy equation follows from the corresponding result in Theorem 7.2.
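The contraction mapping principle used in this proof can be illustrated in the simplest deterministic setting. The sketch below is a toy, not the book's construction: it iterates the analogue of the solution map $\Gamma$ for the scalar integral equation $u(t) = h + \int_0^t \sin(u(s))\,ds$ on $[0, 1/2]$. Since $\sin$ is 1-Lipschitz, the map contracts for $T = 1/2$, and the successive sup-norm differences decay geometrically, mirroring the estimate $\|\Gamma u - \Gamma\tilde u\|_T^2 \le (C_2+C_3)T\,\|u - \tilde u\|_T^2$.

```python
import numpy as np

def gamma_map(u, t, h):
    # Deterministic analogue of the solution map in (6.148):
    # (Gamma u)(t) = h + int_0^t sin(u(s)) ds, via the trapezoid rule.
    g = np.sin(u)
    steps = 0.5 * (g[1:] + g[:-1]) * np.diff(t)
    return h + np.concatenate(([0.0], np.cumsum(steps)))

t = np.linspace(0.0, 0.5, 501)   # small horizon T = 1/2, so L*T = 1/2 < 1
u = np.zeros_like(t)             # initial guess for the fixed-point iteration
sup_diffs = []
for _ in range(8):
    u_next = gamma_map(u, t, 1.0)
    sup_diffs.append(np.max(np.abs(u_next - u)))
    u = u_next
```

Exactly as in the proof, the contraction only holds for small $T$; for a longer horizon one would restart the iteration from the value at $T$, which is the "continuation" argument invoked above.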

In the above theorem, the nonlinear terms were assumed to be bounded from $H$ into $H$. If they are only bounded in $V$, the problem becomes technically more complicated. A proof based on the contraction mapping principle fails unless the nonlinear terms are small in some sense. Instead, under certain reasonable assumptions, we will prove an existence theorem by a different approach. To be specific, assume that the following conditions hold:

(D.1) For any $v\in V$, $F_t(v)$ is an $\mathcal{F}_t$-adapted process in $H$, $\Sigma_t(v)$ is a predictable operator in $\mathcal{P}_T$, and there exist positive constants $b, C$ such that
\[
E\int_0^T \{\|F_t(0)\|^2 + \|\Sigma_t(0)\|_R^2\}\,dt \le b,
\]
and, for any $N > 0$, there exists a constant $C_N > 0$ such that
\[
|(F_t(v), v)| + \|\Sigma_t(v)\|_R^2 \le C_N(1+\|v\|_V^2), \quad a.e.\ (t,\omega)\in\Omega_T,
\]
for any $v\in V$ with $\|v\|_V < N$.

(D.2) For any $N > 0$, there exists a constant $K_N > 0$ such that the local Lipschitz continuity condition holds:
\[
\|F_t(u) - F_t(v)\|^2 + \|\Sigma_t(u) - \Sigma_t(v)\|_R^2 \le K_N\|u-v\|_V^2, \quad a.e.\ (t,\omega)\in\Omega_T,
\]
for any $u, v\in V$ with $\|u\|_V < N$ and $\|v\|_V < N$.

(D.3) For any $v\in V$, there exist constants $\beta > 0$, $\lambda$ and $C_1$ such that the coercivity condition holds:
\[
\langle A_t v, v\rangle + (F_t(v), v) + \tfrac12\|\Sigma_t(v)\|_R^2
\le -\beta\|v\|_V^2 + \lambda\|v\|^2 + C_1, \quad a.e.\ (t,\omega)\in\Omega_T.
\]

(D.4) For any $u, v\in V$, the monotonicity condition holds:
\[
2\langle A_t(u-v), u-v\rangle + 2(F_t(u) - F_t(v), u-v) + \|\Sigma_t(u) - \Sigma_t(v)\|_R^2 \le \gamma\|u-v\|^2, \quad a.e.\ (t,\omega)\in\Omega_T,
\]
for some constant $\gamma$.

Theorem 7.5 Let conditions (B.1)–(B.3) and (D.1)–(D.4) hold true. Then, for $h\in H$, the nonlinear problem (6.65) has a unique strong solution $u\in L^2(\Omega; C([0,T]; H))\cap L^2(\Omega_T; V)$. Moreover, the energy equation holds:
\[
\|u_t\|^2 = \|h\|^2 + 2\int_0^t \langle A_s u_s, u_s\rangle\,ds + 2\int_0^t (F_s(u_s), u_s)\,ds
+ \int_0^t \|\Sigma_s(u_s)\|_R^2\,ds + 2\int_0^t (u_s, \Sigma_s(u_s)\,dW_s).
\tag{6.153}
\]

Proof. For uniqueness, suppose that $u$ and $\tilde u$ are strong solutions. Then, in view of (6.153) and condition (D.4), we can easily show that
\[
\begin{aligned}
E\|u_t - \tilde u_t\|^2
&= 2\,E\int_0^t \langle A_s(u_s - \tilde u_s), u_s - \tilde u_s\rangle\,ds\\
&\quad + E\int_0^t \{2(F_s(u_s) - F_s(\tilde u_s), u_s - \tilde u_s) + \|\Sigma_s(u_s) - \Sigma_s(\tilde u_s)\|_R^2\}\,ds\\
&\le \gamma\,E\int_0^t \|u_s - \tilde u_s\|^2\,ds,
\end{aligned}
\]
which, with the aid of Gronwall's inequality, yields
\[
E\|u_t - \tilde u_t\|^2 = 0, \quad t\in[0,T],
\]
which implies the uniqueness.

A detailed existence proof is rather lengthy. For clarity, we will sketch the proof in several steps to follow.

(Step 1) Approximate Solutions:

Let $\{v_k\}$ be a complete orthonormal basis for $H$ with $v_k\in V$ and let $H_n = \mathrm{span}\{v_1,\dots,v_n\}$. Suppose that $P_n: H\to H_n$ is the orthogonal projector such that, for $h\in H$, $P_n h = \sum_{k=1}^n (h, v_k)v_k$. Extend $P_n$ to a projection operator $\tilde P_n: V'\to V_n$ defined by $\tilde P_n w = \sum_{k=1}^n \langle w, v_k\rangle v_k$ for $w\in V'$. Clearly we have $V_n = H_n = V_n'$. Let $\{e_k\}$ be the set of eigenfunctions of $R$ and let $K_n = \mathrm{span}\{e_1,\dots,e_n\}$. Denote by $\Pi_n$ the projection operator from $K$ into $K_n$ such that $\Pi_n = \sum_{k=1}^n (\cdot, e_k)e_k$. Introduce the following truncations:
\[
A_t^n v = \tilde P_n A_t v, \qquad F_t^n(v) = \tilde P_n F_t(v), \qquad \Sigma_t^n(v) = \tilde P_n\Sigma_t(v),
\tag{6.154}
\]
for $v\in V$. Consider the approximate equation to (6.118):
\[
du^n(\cdot,t) = A_t^n u^n(\cdot,t)\,dt + F_t^n(u^n(\cdot,t))\,dt + \Sigma_t^n(u^n(\cdot,t))\,dW_t^n,
\qquad u_0^n = h^n,
\tag{6.155}
\]
for $t\in(0,T)$, where $W_t^n = \Pi_n W_t$ and $h^n = P_n h$ for $h\in H$.

Notice that the above equation can be regarded as an Itô equation in $\mathbb{R}^n$. It can be shown that, under conditions (D.1)–(D.4), the mollified coefficients $F^n$ and $\Sigma^n$ are locally bounded, Lipschitz continuous, and monotone. Therefore, by an existence theorem in finite dimensions, it has a unique solution $u^n(\cdot,t)$ in $V_n$ on any finite time interval $[0,T]$. Moreover, it satisfies the property: $u^n\in L^2(\Omega_T; V)\cap L^2(\Omega; C([0,T]; H)) = Y_T$.
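A concrete, and entirely hypothetical, instance of the truncation (6.154): take $H = L^2(0,1)$ with the Dirichlet eigenbasis $v_k(x) = \sqrt{2}\sin(k\pi x)$, so that $P_n$ picks off the first $n$ Fourier sine coefficients and the projected equation (6.155) becomes an $n$-dimensional Itô system for those coefficients. The Python sketch below computes $P_n h$ for a sample datum and runs an Euler–Maruyama step on the truncated stochastic heat equation with additive mode-wise noise (the noise amplitude and data are illustrative choices, not from the text).

```python
import numpy as np

def sine_coeffs(h_vals, x, n):
    # P_n h = sum_{k<=n} (h, v_k) v_k with v_k(x) = sqrt(2) sin(k pi x);
    # the coefficients (h, v_k) are computed by the trapezoid rule.
    dx = x[1] - x[0]
    out = []
    for k in range(1, n + 1):
        vk = np.sqrt(2.0) * np.sin(k * np.pi * x)
        f = h_vals * vk
        out.append(np.sum(0.5 * (f[1:] + f[:-1])) * dx)
    return np.array(out)

def galerkin_heat(a0, n_t=1000, T=0.1, noise=0.01, seed=2):
    # Truncated equation: da_k = -(k pi)^2 a_k dt + noise dW_k, an
    # n-dimensional Ito system as in Step 1, stepped by Euler-Maruyama.
    rng = np.random.default_rng(seed)
    n = len(a0)
    dt = T / n_t
    lam = -(np.pi * np.arange(1, n + 1))**2   # Dirichlet Laplacian spectrum
    a = a0.copy()
    for _ in range(n_t):
        a = a + dt * lam * a + noise * np.sqrt(dt) * rng.standard_normal(n)
    return a

x = np.linspace(0.0, 1.0, 1001)
h = np.sin(np.pi * x)                 # sample initial datum
a0 = sine_coeffs(h, x, 8)             # coefficients of P_8 h
aT = galerkin_heat(a0)
```

Since $h = \sin\pi x$ has the single coefficient $(h, v_1) = 1/\sqrt2$, the projection is essentially exact for every $n \ge 1$, and the first mode decays like $e^{-\pi^2 t}$ up to the small noise, which is the finite-dimensional behavior the a priori bounds of Step 2 control uniformly in $n$.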

(Step 2) Boundedness of Approximate Solutions:

To show that the sequence $\{u^n\}$ is bounded in the Banach space $Y_T$, it follows from the energy equation for (6.155) that
\[
\begin{aligned}
\|u^n(\cdot,t)\|^2 &= \|h^n\|^2 + 2\int_0^t \langle A_s^n u_s^n, u_s^n\rangle\,ds
+ 2\int_0^t (F_s^n(u_s^n), u_s^n)\,ds\\
&\quad + \int_0^t \|\Sigma_s^n(u_s^n)\|_R^2\,ds
+ 2\int_0^t (u_s^n, \Sigma_s^n(u_s^n)\,\Pi_n\,dW_s).
\end{aligned}
\tag{6.156}
\]
By the simple inequality
\[
2(F_s(v), v) + \|\Sigma_s(v)\|_R^2 \le 2[(\tilde F_s(v), v) + \|\tilde\Sigma_s(v)\|_R^2] + 2[(F_s(0), v) + \|\Sigma_s(0)\|_R^2],
\]
and by invoking condition (D.3), we can obtain from (6.156) that
\[
\begin{aligned}
\|u^n(\cdot,t)\|^2 + 2\beta\int_0^t \|u_s^n\|_V^2\,ds
&\le \|h^n\|^2 + 2C_1 T + (2\lambda+1)\int_0^t \|u_s^n\|^2\,ds\\
&\quad + 2\int_0^t [\|F_s(0)\|^2 + \|\Sigma_s(0)\|_R^2]\,ds
+ 2\int_0^t (u_s^n, \Sigma_s^n(u_s^n)\,\Pi_n\,dW_s).
\end{aligned}
\tag{6.157}
\]
By using estimates similar to those used to derive the inequalities (6.151) and (6.152), we can deduce from (6.157) that
\[
E\int_0^T \|u_s^n\|_V^2\,ds \le K_1, \qquad\text{and}\qquad E\sup_{0\le t\le T}\|u^n(\cdot,t)\|^2 \le K_2,
\tag{6.158}
\]
for some positive constants $K_1, K_2$.

(Step 3) Weak Limits:

From (6.158) it follows that there exists a subsequence, still denoted by $u^n$, which converges weakly to $u$ in $L^2(\Omega_T; V)$. Moreover, by virtue of (6.158) and condition (D.1), we can further deduce that $F^n(u^n)\rightharpoonup f$ in $L^2(\Omega_T; H)$, $\Sigma^n(u^n)\Pi_n\rightharpoonup \hat\Sigma$ in $L^2(\Omega_T; L^2_R)$ and $u_T^n\rightharpoonup \xi$ in $L^2(\Omega; H)$, where $\rightharpoonup$ denotes weak convergence.

For technical reasons, we extend the time interval to $(-\delta, T+\delta)$ with $\delta > 0$, and set the terms in the equation equal to zero for $t\notin[0,T]$. Let $\phi\in C^1(-\delta, T+\delta)$ with $\phi(0) = 1$ and, for $v_k\in V$, define $v_t^k = \phi(t)v_k$. Apply the Itô formula to $(u_t^n, v_t^k)$ to get
\[
(u_T^n, v_T^k) = (h^n, v_k) + \int_0^T (u_s^n, \dot v_s^k)\,ds + \int_0^T \langle A_s u_s^n, v_s^k\rangle\,ds
+ \int_0^T (F_s^n(u_s^n), v_s^k)\,ds + \int_0^T (v_s^k, \Sigma_s^n(u_s^n)\,\Pi_n\,dW_s),
\tag{6.159}
\]
where $\dot v_s^k = [\frac{d}{ds}\phi(s)]v_k$. By letting $n\to\infty$ in (6.159), in view of the above weak convergence results, we can obtain
\[
(\xi, v_T^k) = (h, v_k) + \int_0^T (u_s, \dot v_s^k)\,ds + \int_0^T \langle A_s u_s, v_s^k\rangle\,ds
+ \int_0^T (f_s, v_s^k)\,ds + \int_0^T (v_s^k, \hat\Sigma_s\,dW_s).
\tag{6.160}
\]
To justify the limit of the stochastic integral term, let $\Sigma^n = \Sigma^n(u^n)\Pi_n$ in $L^2(\Omega_T; L^2_R)$. The fact that $\Sigma^n\rightharpoonup\hat\Sigma$ in $\mathcal{P}_T$ means that, for any $Q\in\mathcal{P}_T$, $(\Sigma^n, Q)_{\mathcal{P}_T}\to(\hat\Sigma, Q)_{\mathcal{P}_T}$, where the inner product is $(\Sigma, Q)_{\mathcal{P}_T} = E\int_0^T (\Sigma_s, Q_s)_R\,ds$. Now the limit of the stochastic integral follows from the inequality
\[
|(\Sigma^n(u^n)\Pi_n - \hat\Sigma, Q)_{\mathcal{P}_T}|
\le |(\Sigma^n(u^n) - \hat\Sigma, Q)_{\mathcal{P}_T}| + |(\Sigma^n(u^n), Q\,\Pi_n - Q)_{\mathcal{P}_T}|,
\]
which goes to zero as $n\to\infty$.

Now choose $\phi_m\in C^1(-\delta, T+\delta)$ with $\phi_m(0) = 1$, for $m = 1, 2, \dots$, such that $\phi_m\rightharpoonup\chi_t$ and $\dot\phi_m\rightharpoonup -\delta_t$, where $\chi_t(s) = 1$ for $s\le t$ and $0$ otherwise, and $\delta_t(s) = \delta(t-s)$ is the Dirac delta function. Let $v_t^{k,m} = \phi_m(t)v_k$. By replacing $v_t^k$ with $v_t^{k,m}$ in (6.160) and taking the limit as $m\to\infty$, it yields
\[
(u_t, v_k) = (h, v_k) + \int_0^t \langle A_s u_s, v_k\rangle\,ds + \int_0^t (f_s, v_k)\,ds
+ \int_0^t (v_k, \hat\Sigma_s\,dW_s), \quad\text{for } 0 < t < T,
\tag{6.161}
\]
with $(u_T, v_k) = (\xi, v_k)$, for any $v_k\in V$. Since $V$ is dense in $H$, the above two equations hold with any $v\in V$ in place of $v_k$. Therefore $u$ is a strong solution of the linear Itô equation:
\[
u_t = h + \int_0^t A_s u_s\,ds + \int_0^t f_s\,ds + \int_0^t \hat\Sigma_s\,dW_s, \qquad u_T = \xi,
\tag{6.162}
\]
and, by Theorem 7.2, it satisfies the energy equation:
\[
\|u_t\|^2 = \|h\|^2 + 2\int_0^t \langle A_s u_s, u_s\rangle\,ds + 2\int_0^t (f_s, u_s)\,ds
+ 2\int_0^t (u_s, \hat\Sigma_s\,dW_s) + \int_0^t \|\hat\Sigma_s\|_R^2\,ds.
\tag{6.163}
\]

(Step 4) Strong Solution and Energy Equation:

We will show that $f_s = F_s(u_s)$ and $\hat\Sigma_s = \Sigma_s(u_s)$, so that $u_t$ is a strong solution of the nonlinear problem (6.65). To this end, for any $v\in L^2(\Omega_T; V)$, we let
\[
\begin{aligned}
Z_n &= 2\,E\int_0^T e^{-\gamma s}\big[\langle A_s(u_s^n - v_s), u_s^n - v_s\rangle - \tfrac{\gamma}{2}\|u_s^n - v_s\|^2\big]\,ds\\
&\quad + 2\,E\int_0^T e^{-\gamma s}(F_s^n(u_s^n) - F_s^n(v_s), u_s^n - v_s)\,ds\\
&\quad + E\int_0^T e^{-\gamma s}\|\Sigma_s^n(u_s^n) - \Sigma_s^n(v_s)\|_R^2\,ds \le 0,
\end{aligned}
\tag{6.164}
\]
by the monotonicity condition (D.4). In what follows, to simplify the computations, we set $\gamma = 0$ (otherwise, replace $A_t$ by $(A_t - \tfrac{\gamma}{2})$). Rewrite $Z_n$ as
\[
Z_n = Z_n' + Z_n'' \le 0,
\tag{6.165}
\]
where
\[
Z_n' = 2\,E\int_0^T \langle A_s u_s^n, u_s^n\rangle\,ds + 2\,E\int_0^T (F_s^n(u_s^n), u_s^n)\,ds
+ E\int_0^T \|\Sigma_s^n(u_s^n)\|_R^2\,ds,
\tag{6.166}
\]
\[
\begin{aligned}
Z_n'' &= 2\,E\int_0^T [\langle A_s v_s, v_s\rangle - \langle A_s u_s^n, v_s\rangle - \langle A_s v_s, u_s^n\rangle]\,ds\\
&\quad + 2\,E\int_0^T [(F_s^n(v_s), v_s) - (F_s^n(u_s^n), v_s) - (F_s^n(v_s), u_s^n)]\,ds\\
&\quad + E\int_0^T [\|\Sigma_s^n(v_s)\|_R^2 - 2(\Sigma_s^n(u_s^n), \Sigma_s^n(v_s))_R]\,ds,
\end{aligned}
\tag{6.167}
\]
and $(\cdot,\cdot)_R$ denotes the inner product in $L^2_R$. In view of (6.156) and (6.166), we get
\[
Z_n' = E\|u_T^n\|^2 - \|h^n\|^2 \ge E\|u_T^n\|^2 - \|h\|^2,
\]
which, with the aid of Fatou's lemma and the equation (6.162), implies that
\[
\liminf_{n\to\infty} Z_n' \ge E\|u_T\|^2 - \|h\|^2
= 2\,E\int_0^T \langle A_s u_s, u_s\rangle\,ds + 2\,E\int_0^T (f_s, u_s)\,ds + E\int_0^T \|\hat\Sigma_s\|_R^2\,ds.
\tag{6.168}
\]

In the meantime, noting that Zn has termwise limits, we can deduce from
(6.164)(6.168) that
Z T
2E hAs (us vs ), us vs i
0Z
T
+2E (Fs (us ) fs , us vs )ds (6.169)
Z T0
+E ks (us ) s k2R ds lim inf Zn 0.
0 n

Hence, by setting us = vs , the above gives


Z T
E ks (us ) s k2R ds = 0,
0

or s = s (us ) a.s.. Then it follows from (6.169) that


Z T
2E hAs (us vs ), us vs ids
0Z (6.170)
T
+2E (Fs (vs ) fs , us vs )ds 0.
0

Let v = u w, where w L2 (T ; V ) and > 0. Then the above inequality


becomes
Z T Z T
E hAs (ws , ws ids + E (Fs (us ws ) fs , ws )ds 0.
0 0

Letting 0, we get
Z T
E (Fs (us ) fs , ws )ds 0, w L2 (T ; V ),
0

which implies that fs = Fs (us ) a.s. Therefore the weak limit u is in fact a
strong solution.
The energy equation follows immediately from (6.163) after setting fs =
Fs (us ) and s = s (us ). Finally, from the equations (6.65) for u and (6.155)
for un together with the associated energy equations, one can show that
E sup kut un (, t)k2 0 as n and, hence, u L2 (; C([0, T ]; H))
0tT
L2 (T ; V ).

Remarks: Here the proof, based on functional-analytic techniques, was adapted from the works of Bensoussan and Temam [5] and Pardoux [75]. If the coercivity and monotonicity conditions are not met, there may exist only a local solution. We wish to point out an obvious fact: since $A_t$ is a linear operator-valued process, in contrast with the equation (6.73), the semigroup approach fails to apply to the problem (6.65) treated in this section.

(Example 7.1) Consider the following initial-boundary value problem for a parabolic equation with random coefficients:
\[
\begin{aligned}
\frac{\partial u}{\partial t} &= \sum_{j,k=1}^{d}\frac{\partial}{\partial x_j}\Big[a_{jk}(x,t)\frac{\partial u}{\partial x_k}\Big]
+ \sum_{k=1}^{d} b_k(x,t)\frac{\partial u}{\partial x_k}\\
&\quad + c(x,t)u + f(x,t) + \sigma(x,t)\,\dot W(x,t), \qquad x\in D,\ t\in(0,T),\\
u|_{\partial D} &= 0, \qquad u(x,0) = h(x),
\end{aligned}
\tag{6.171}
\]
where $a_{jk}, b_k, c, f$ and $\sigma$ are all given adapted random fields such that all of them are bounded and continuous in $\bar D\times[0,T]$ a.s., and there exist positive constants $\delta, \bar\delta$ such that, for any $\xi\in\mathbb{R}^d$,
\[
\delta|\xi|^2 \le \sum_{j,k=1}^{d} a_{jk}(x,t)\,\xi_j\xi_k \le \bar\delta|\xi|^2, \qquad \forall\,(x,t)\in \bar D\times[0,T], \ a.s.
\tag{6.172}
\]
As usual, $W(x,t)$ is a Wiener random field in $L^2(D)$ with a bounded covariance function $r(x,y)$. Let $H = K = L^2(D)$, $V = H_0^1$ and $V' = H^{-1}$, and define
\[
A_t\phi = \sum_{j,k=1}^{d}\frac{\partial}{\partial x_j}\Big[a_{jk}(\cdot,t)\frac{\partial\phi}{\partial x_k}\Big]
+ \sum_{k=1}^{d} b_k(\cdot,t)\frac{\partial\phi}{\partial x_k} + c(\cdot,t)\phi, \qquad \phi\in V.
\tag{6.173}
\]
We set $f_t = f(\cdot,t)$ and $M_t = \int_0^t \sigma(\cdot,s)\,dW(\cdot,s)$. Then the equation (6.171) takes the form (6.120). By an integration by parts, we get
\[
\langle A_t\phi, \psi\rangle
= -\int_D \sum_{j,k=1}^{d} a_{jk}(x,t)\,\frac{\partial\phi}{\partial x_k}(x)\,\frac{\partial\psi}{\partial x_j}(x)\,dx
+ \int_D \Big[\sum_{k=1}^{d} b_k(x,t)\,\frac{\partial\phi}{\partial x_k}(x) + c(x,t)\phi(x)\Big]\psi(x)\,dx,
\tag{6.174}
\]
for any $\phi, \psi\in V$. By condition (6.172) and the boundedness of the coefficients, we can verify from (6.174) that conditions (B.1)–(B.3) are satisfied. Therefore, by Theorem 7.2, if $h\in H$ and
\[
E\int_0^T \Big\{\int_D |f(x,t)|^2\,dx + \int_D r(x,x)\,\sigma^2(x,t)\,dx\Big\}\,dt < \infty,
\]
the equation (6.171) has a unique solution $u\in L^2(\Omega; C([0,T]; H))\cap L^2(\Omega_T; H_0^1)$.

(Example 7.2) Let us consider the randomly convective reaction-diffusion equation in a domain $D\subset\mathbb{R}^3$ with a cubic nonlinearity:
\[
\frac{\partial u}{\partial t} = (\nu\Delta - \alpha)u - \beta u^3 + f(x,t)
+ \sum_{k=1}^{3}\frac{\partial u}{\partial x_k}\,\dot W_k(x,t),
\qquad
u|_{\partial D} = 0, \quad u(x,0) = h(x),
\tag{6.175}
\]
where $\nu, \alpha, \beta$ are given positive constants, $f(\cdot,t)$ is a predictable process in $H$, and $W_k(x,t)$, $k = 1, 2, 3$, are independent Wiener random fields with continuous covariance functions $r_k(x,y)$ such that
\[
E\int_0^T\int_D |f(x,t)|^2\,dx\,dt < \infty, \qquad \sup_{x,y\in D}|r_k(x,y)| \le r_0,
\tag{6.176}
\]
for $k = 1, 2, 3$. Let the spaces $H$ and $V$ and the function $f_t$ be defined as in Example 7.1. Here we define $K = (H)^3$ to be the triple Cartesian product of $H$. Let $A_t = (\nu\Delta - \alpha)$ and $F_t(u) = -\beta u^3 + f_t$. For $Z = (Z^1, Z^2, Z^3)\in K$, define $[\Sigma_t(u)]Z = \sum_{k=1}^{3}(\frac{\partial u}{\partial x_k})Z^k$ with $\Sigma_t(0) = 0$, and set $W_t = (W_t^1, W_t^2, W_t^3)$. Then the equation (6.175) is of the form (6.65). The conditions (B.1)–(B.3) are clearly met. To check condition (D.1), noticing (6.176), we have
\[
|(\tilde F(v), v)| + \|\Sigma(v)\|_R^2
= \beta|v|_4^4 + \sum_{k=1}^{3}\int_D r_k(x,x)\Big(\frac{\partial v}{\partial x_k}(x)\Big)^2\,dx
\le \beta|v|_4^4 + r_0\|\nabla v\|^2 \le \beta|v|_4^4 + C_1\|v\|_1^2,
\]
for some constant $C_1 > 0$. To check condition (D.2), for $u, v\in H^1$, we have
\[
\|F(u) - F(v)\|^2 + \|\Sigma(u) - \Sigma(v)\|_R^2 \le \beta^2\|u^3 - v^3\|^2 + C_1\|u-v\|_1^2,
\]
where
\[
\|u^3 - v^3\|^2 = \|(u^2 + uv + v^2)(u-v)\|^2 \le 8\{\|u^2(u-v)\|^2 + \|v^2(u-v)\|^2\}.
\]
Making use of the Sobolev imbedding theorem (p. 97, [1]), we can get the $L^4$-norm bound $|u|_4\le C_1\|u\|_1$ and $\|u^2 v\|^2\le C_2\|u\|_1^4\|v\|_1^2$, for some constants $C_1, C_2 > 0$. Hence, there exists $C_3 > 0$ such that the above inequality yields
\[
\|u^3 - v^3\|^2 \le C_3(\|u\|_1^4 + \|v\|_1^4)\,\|u-v\|_1^2,
\]
which satisfies condition (D.2). The condition (D.3) is also true because
\[
\langle Av, v\rangle + (\tilde F(v), v) + \tfrac12\|\Sigma(v)\|_R^2
= \nu\langle\Delta v, v\rangle - \alpha\|v\|^2 - \beta(v^3, v) + \tfrac12\|\Sigma(v)\|_R^2
\le -\Big(\nu - \frac{r_0}{2}\Big)\|\nabla v\|^2 - \alpha\|v\|^2.
\]
So condition (D.3) holds, provided that $2\nu > r_0$. This condition also implies the monotonicity condition (D.4), since
\[
2\langle A(u-v), u-v\rangle + 2(F(u) - F(v), u-v) + \|\Sigma(u) - \Sigma(v)\|_R^2
\le -(2\nu - r_0)\|\nabla(u-v)\|^2 - 2\alpha\|u-v\|^2 \le 0.
\]
Consequently, by applying Theorem 7.5, we can conclude that, if the condition (6.176) holds with $r_0 < 2\nu$, then, given $h\in H$, the equation (6.175) has a unique solution $u\in L^2(\Omega; C([0,T]; H))\cap L^2(\Omega_T; H_0^1)$. Moreover, it can be shown from the energy equation that $u\in L^4(\Omega_T; L^4(D))$.
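The elementary algebra behind this example can be spot-checked numerically: the pointwise monotonicity $(u^3 - v^3)(u - v) = (u - v)^2(u^2 + uv + v^2) \ge 0$, which makes the cubic term dissipative in (D.4), and the pointwise bound $(u^2 + uv + v^2)^2 \le 8(u^4 + v^4)$, which underlies the estimate for $\|u^3 - v^3\|^2$ above. A quick Python check on random samples (an illustration only, with arbitrary sample size and seed):

```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.standard_normal(100000)
v = rng.standard_normal(100000)

# (u^3 - v^3)(u - v) = (u - v)^2 (u^2 + uv + v^2) >= 0, since
# u^2 + uv + v^2 = (u + v/2)^2 + (3/4) v^2 >= 0.
mono = (u**3 - v**3) * (u - v)

# (u^2 + uv + v^2)^2 <= ((3/2)(u^2 + v^2))^2 <= (9/2)(u^4 + v^4) <= 8(u^4 + v^4),
# which gives ||u^3 - v^3||^2 <= 8{ ||u^2 (u - v)||^2 + ||v^2 (u - v)||^2 }.
lhs = (u**3 - v**3)**2
rhs = 8.0 * ((u**2 * (u - v))**2 + (v**2 * (u - v))**2)
```

Of course a numerical check is no proof; the two inequalities follow from completing the square and from $uv \le (u^2 + v^2)/2$, as noted in the comments.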

6.8 Stochastic Evolution Equations of the Second Order

Let us consider the Cauchy problem in $V'$:
\[
\frac{\partial^2 u_t}{\partial t^2} = A_t u_t + F_t(u_t, v_t) + \Sigma_t(u_t, v_t)\,\dot W_t,
\qquad
u_0 = g, \quad \frac{\partial u_0}{\partial t} = h,
\tag{6.177}
\]
for $t\in[0,T]$, where $v_t = \frac{d}{dt}u_t$; $A_t$, $F_t(\cdot)$ and $\Sigma_t(\cdot)$ are defined similarly as in the previous section, and $g\in V$, $h\in H$ are the initial data. We rewrite the equation (6.177) as the first-order system
\[
du_t = v_t\,dt, \qquad
dv_t = [A_t u_t + F_t(u_t, v_t)]\,dt + \Sigma_t(u_t, v_t)\,dW_t, \qquad
u_0 = g, \quad v_0 = h,
\tag{6.178}
\]
or, in the integral form,
\[
u_t = g + \int_0^t v_s\,ds, \qquad
v_t = h + \int_0^t [A_s u_s + F_s(u_s, v_s)]\,ds + \int_0^t \Sigma_s(u_s, v_s)\,dW_s.
\tag{6.179}
\]
In Section 6.6 we studied the mild solution of such a first-order stochastic evolution equation. Here we will consider the strong solution of the system. To establish an a priori estimate for the nonlinear problem, we first consider the linear case:
\[
du_t = v_t\,dt, \qquad
dv_t = [A_t u_t + f_t]\,dt + dM_t, \qquad
u_0 = g, \quad v_0 = h,
\tag{6.180}
\]
where $f_t$ is a predictable $H$-valued process and $M_t$ is a $V$-valued martingale with local characteristic operator $Q_t$. In addition to conditions (B.1)–(B.3), we impose the following conditions on $A_t$:

(B.3$'$) There exists $\delta > 0$ such that, for any $v\in V$,
\[
\langle A_t v, v\rangle \le -\delta\|v\|_V^2, \quad a.e.\ (t,\omega)\in\Omega_T.
\]

(B.4) Let $\dot A_t = \frac{d}{dt}A_t: V\to V'$ and there is a constant $\delta_1 > 0$ such that, for any $v\in V$,
\[
|\langle \dot A_t v, v\rangle| \le \delta_1\|v\|_V^2, \quad a.e.\ (t,\omega)\in\Omega_T.
\]

Introduce the Hilbert space $\mathcal H = V \times H$ and $C([0,T]; \mathcal H) = C([0,T]; V) \times C([0,T]; H)$. Let $V_0 \subset \mathcal D(A)$ be a Hilbert space dense in $H$. The lemma given below is analogous to Lemma 7.1.

Lemma 8.1  Suppose that the conditions (B.1), (B.2), and (B.3) hold and $f_t$ is an $\mathcal F_t$-predictable process in $L^2(\Omega_T; H)$. Assume that $M_t$ is a continuous $V_0$-valued martingale such that $A_t M_t$ is a continuous $L^2(\Omega)$-process in $H$. Then, for $g \in V$ and $h \in H$, the system (6.180) has a unique strong solution $(u; v) \in L^2(\Omega; C([0,T]; \mathcal H))$ such that the energy equation holds:
\[
\begin{aligned}
\|v_t\|^2 - \langle A_t u_t, u_t\rangle + \langle A_0 g, g\rangle
 = \|h\|^2 &+ 2\int_0^t \langle A'_s u_s, u_s\rangle\,ds + 2\int_0^t (f_s, v_s)\,ds\\
 &+ 2\int_0^t (v_s, dM_s) + \int_0^t \mathrm{Tr}\,Q_s\,ds.
\end{aligned}
\tag{6.181}
\]

Proof.  The proof is similar to that of Lemma 7.1 and will only be sketched. Define $N_t = \int_0^t M_s\,ds$. Let $\xi_t = u_t - N_t$ and $\eta_t = v_t - M_t$. Then it follows from (6.180) that $\xi_t$ and $\eta_t$ satisfy
\[
\begin{aligned}
d\xi_t &= \eta_t\,dt,\\
d\eta_t &= A_t \xi_t\,dt + \tilde f_t\,dt,\\
\xi_0 &= g, \qquad \eta_0 = h,
\end{aligned}
\tag{6.182}
\]
where, by the assumption on $M_t$, $\xi \in C([0,T]; V)$, $\eta \in C([0,T]; H)$, and $\tilde f_t = (f_t + A_t N_t) \in L^2((0,T); H)$. By appealing to the deterministic theory [62] for the corresponding equation, we can deduce that the system (6.182) has a unique solution: $\xi \in C([0,T]; V)$ and $\eta \in L^2((0,T); H)$ a.s., such that the energy equation holds:
\[
\begin{aligned}
\|\eta_t\|^2 - \langle A_t \xi_t, \xi_t\rangle + \langle A_0 g, g\rangle
 = \|h\|^2 &+ 2\int_0^t \langle A'_s \xi_s, \xi_s\rangle\,ds\\
 &+ 2\int_0^t (f_s, \eta_s)\,ds + 2\int_0^t (A_s N_s, \eta_s)\,ds.
\end{aligned}
\tag{6.183}
\]
Therefore, $u$ and $v$ satisfy the system (6.180) with the same degree of regularity as that of $\xi$ and $\eta$, correspondingly. The energy equation (6.181) can be derived from (6.183) by expressing $\xi_s$ and $\eta_s$ in terms of $u_s$ and $v_s$ and then applying the Itô formula.

With the aid of this lemma, we will prove the existence theorem for the linear equation. To this end, we introduce the energy function $e(u, v)$ on $\mathcal H$ defined by
\[
e(u, v) = \|u\|_V^2 + \|v\|^2,
\tag{6.184}
\]
and the energy norm $\|(u; v)\|_e = \{e(u, v)\}^{1/2}$. We shall seek a solution of the system (6.180) with a bounded energy. Since the idea of proof is the same as that of Theorem 7.2, some details will be omitted.

Theorem 8.2  Let conditions (B.1), (B.2), (B.3), and (B.4) hold true. Assume that $f_t$ is a predictable $H$-valued process and $M_t$ is a continuous $L^2$-martingale in $H$ with local covariation operator $Q_t$ such that
\[
E\int_0^T \{\,\|f_s\|^2 + \mathrm{Tr}\,Q_s\,\}\,ds < \infty.
\tag{6.185}
\]
Then, for $g \in V$, $h \in H$, the linear problem (6.180) has a unique strong solution $(u; v)$ with $u \in L^2(\Omega; C([0,T]; V))$ and $v \in L^2(\Omega; C([0,T]; H))$ such that the energy equation (6.181) holds true.

Proof.  By means of the energy equation (6.181), the uniqueness question can be easily disposed of. For the existence proof, we take similar steps as in Theorem 7.2.

(Step 1) Approximate Solutions:

Let $\{M_t^n\}$ be a sequence of continuous martingales in $V_0$ such that $A_t M_t^n$ is a continuous $L^2$-process in $H$ and $E \sup_{0\le t\le T} \|M_t - M_t^n\|^2 \to 0$ as $n \to \infty$. Consider the approximate system of (6.180):
\[
\begin{aligned}
du_t^n &= v_t^n\,dt,\\
dv_t^n &= [A_t u_t^n + f_t]\,dt + dM_t^n,\\
u_0^n &= g, \qquad v_0^n = h.
\end{aligned}
\tag{6.186}
\]
By Lemma 8.1, the above system has a unique solution $(u^n; v^n) \in L^2(\Omega; C([0,T]; \mathcal H))$ such that the energy equation holds:
\[
\begin{aligned}
\|v_t^n\|^2 - \langle A_t u_t^n, u_t^n\rangle + \langle A_0 g, g\rangle
 = \|h\|^2 &+ 2\int_0^t \langle A'_s u_s^n, u_s^n\rangle\,ds + 2\int_0^t (f_s, v_s^n)\,ds\\
 &+ 2\int_0^t (v_s^n, dM_s^n) + \int_0^t \mathrm{Tr}\,Q_s^n\,ds.
\end{aligned}
\tag{6.187}
\]

(Step 2) Convergence of Approximation Sequences:

Let $Z_T$ be the Banach space of adapted continuous processes in the space $L^2(\Omega; C([0,T]; \mathcal H))$ with norm $\|\cdot\|_T$ defined by
\[
\|(u; v)\|_T = \{E \sup_{0\le t\le T} e(u_t, v_t)\}^{1/2},
\tag{6.188}
\]
where $e(\cdot,\cdot)$ denotes the energy function given by (6.184).

Let $(u^m; v^m)$ and $(u^n; v^n)$ be solutions. Set $(u^{mn}; v^{mn}) = (u^m - u^n; v^m - v^n)$ and $M_t^{mn} = (M_t^m - M_t^n)$. Then, in view of (6.180), the following equations hold:
\[
\begin{aligned}
u_t^{mn} &= \int_0^t v_s^{mn}\,ds,\\
v_t^{mn} &= \int_0^t A_s u_s^{mn}\,ds + M_t^{mn}.
\end{aligned}
\tag{6.189}
\]
The corresponding energy equation reads
\[
\|v_t^{mn}\|^2 - \langle A_t u_t^{mn}, u_t^{mn}\rangle
 = 2\int_0^t \langle A'_s u_s^{mn}, u_s^{mn}\rangle\,ds
 + 2\int_0^t (v_s^{mn}, dM_s^{mn}) + \int_0^t \mathrm{Tr}\,Q_s^{mn}\,ds,
\tag{6.190}
\]

where $Q_t^{mn}$ denotes the local covariation operator for $M_t^{mn}$. By making use of conditions (B.3), (B.4) and estimates similar to those in Theorem 7.2, we can show that there exists $C > 0$ such that
\[
E \sup_{0\le t\le T} \{\|u_t^{mn}\|_V^2 + \|v_t^{mn}\|^2\}
 \le C\,E\int_0^T \mathrm{Tr}\,Q_s^{mn}\,ds \to 0
\tag{6.191}
\]
as $m, n \to \infty$. Hence $\{(u^n; v^n)\}$ is a Cauchy sequence in $Z_T$ converging to $(u; v) \in L^2(\Omega; C([0,T]; \mathcal H))$.

(Step 3) Strong Solution and Energy Equation:

For any $\phi \in H$ and $\psi \in V$, by using the Itô formula, we have
\[
\begin{aligned}
(u_t^n, \phi) &= (g, \phi) + \int_0^t (v_s^n, \phi)\,ds,\\
(v_t^n, \psi) &= (h, \psi) + \int_0^t \langle A_s u_s^n, \psi\rangle\,ds + \int_0^t (f_s, \psi)\,ds + (M_t^n, \psi).
\end{aligned}
\]
By taking the termwise limit, the above equations converge in $L^2(\Omega)$ to
\[
\begin{aligned}
(u_t, \phi) &= (g, \phi) + \int_0^t (v_s, \phi)\,ds,\\
(v_t, \psi) &= (h, \psi) + \int_0^t \langle A_s u_s, \psi\rangle\,ds + \int_0^t (f_s, \psi)\,ds + (M_t, \psi),
\end{aligned}
\]
which implies that $(u; v)$ is a strong solution.

Finally, the energy equation can be obtained by taking the limits in the approximation equation (6.187) in a similar fashion as in the proof of Theorem 7.2.

As a corollary of the theorem, we can easily obtain the following energy inequality.
Lemma 8.3  Assume the conditions of Theorem 8.2 are satisfied. Then the following energy inequality holds:
\[
E \sup_{0\le t\le T} \{\|u_t\|_V^2 + \|v_t\|^2\}
 \le C_T\,\Big\{\|g\|_V^2 + \|h\|^2 + E\int_0^T (\|f_s\|^2 + \mathrm{Tr}\,Q_s)\,ds\Big\},
\tag{6.192}
\]
for some constant $C_T > 0$.

Now we turn to the nonlinear system (6.178) and impose the following conditions on the nonlinear terms:

(E.1) For any $u \in V$, $v \in H$, let $F_t(u, v)$ and $\Sigma_t(u, v)$ be $\mathcal F_t$-adapted processes with values in $H$ and $L_R^2$, respectively. Suppose there exist positive constants $b$, $C_1$ such that
\[
E\int_0^T [\,\|F_t(0,0)\|^2 + \|\Sigma_t(0,0)\|_R^2\,]\,dt \le b,
\]
and
\[
\|\tilde F_t(u, v)\|^2 + \|\tilde\Sigma_t(u, v)\|_R^2 \le C_1(1 + \|u\|_V^2 + \|v\|^2), \quad \text{a.e. } (t,\omega) \in \Omega_T,
\]
where $\tilde F_t(u, v) = F_t(u, v) - F_t(0,0)$ and $\tilde\Sigma_t(u, v) = \Sigma_t(u, v) - \Sigma_t(0,0)$.

(E.2) For any $u, u' \in V$ and $v, v' \in H$, there is a constant $C_2 > 0$ such that the Lipschitz condition holds:
\[
\|F_t(u, v) - F_t(u', v')\|^2 + \|\Sigma_t(u, v) - \Sigma_t(u', v')\|_R^2
 \le C_2(\|u - u'\|_V^2 + \|v - v'\|^2), \quad \text{a.e. } (t,\omega) \in \Omega_T.
\]

Theorem 8.4  Let conditions (B.1), (B.2), (B.3), (B.4), and (E.1), (E.2) hold true. Then, for $g \in V$, $h \in H$, the nonlinear problem (6.178) has a unique strong solution $(u; v)$ with $u \in L^2(\Omega; C([0,T]; V))$ and $v \in L^2(\Omega; C([0,T]; H))$ such that the energy equation holds:
\[
\begin{aligned}
\|v_t\|^2 - \langle A_t u_t, u_t\rangle = \|h\|^2 - \langle A_0 g, g\rangle
 &+ 2\int_0^t \langle A'_s u_s, u_s\rangle\,ds + 2\int_0^t (F_s(u_s, v_s), v_s)\,ds\\
 &+ 2\int_0^t (v_s, \Sigma_s(u_s, v_s)\,dW_s) + \int_0^t \|\Sigma_s(u_s, v_s)\|_R^2\,ds.
\end{aligned}
\tag{6.193}
\]

Proof.  We will prove the theorem by the contraction mapping principle. In the proof, for brevity, we will set $F_t(0,0) = 0$ and $\Sigma_t(0,0) = 0$, so that $\tilde F_t = F_t$ and $\tilde\Sigma_t = \Sigma_t$. As seen from the proof of Theorem 6.5, such terms do not play an essential role in the proof. Given $u \in L^2(\Omega; C([0,T]; V))$ and $v \in L^2(\Omega; C([0,T]; H))$, consider the following system:
\[
\begin{aligned}
\xi_t &= g + \int_0^t \eta_s\,ds,\\
\eta_t &= h + \int_0^t A_s \xi_s\,ds + \int_0^t F_s(u_s, v_s)\,ds + \int_0^t \Sigma_s(u_s, v_s)\,dW_s.
\end{aligned}
\tag{6.194}
\]
Let $f_t = F_t(u_t, v_t)$ and $M_t = \int_0^t \Sigma_s(u_s, v_s)\,dW_s$ with local characteristic operator $Q_t$. Then, by condition (E.1), we have
\[
\begin{aligned}
E\int_0^T \{\,\|f_t\|^2 + \mathrm{Tr}\,Q_t\,\}\,dt
 &= E\int_0^T \{\,\|F_t(u_t, v_t)\|^2 + \|\Sigma_t(u_t, v_t)\|_R^2\,\}\,dt\\
 &\le C_1 T\,\{1 + E \sup_{0\le t\le T} [\,\|u_t\|_V^2 + \|v_t\|^2\,]\} < \infty.
\end{aligned}
\tag{6.195}
\]
Therefore, for $(u; v) \in L^2(\Omega; C([0,T]; \mathcal H))$, we can apply Theorem 8.2 to assert that the system (6.194) has a unique solution $(\xi; \eta) = \Lambda(u, v) \in Z_T$, and that the solution map $\Lambda : Z_T \to Z_T$ is well defined.

To show the map $\Lambda$ is a contraction for small $T$, we let $u, \bar u \in L^2(\Omega; C([0,T]; V))$ and $v, \bar v \in L^2(\Omega; C([0,T]; H))$. Let $(\bar\xi; \bar\eta) = \Lambda(\bar u, \bar v)$. Set $\xi' = \xi - \bar\xi$ and $\eta' = \eta - \bar\eta$. Then $\xi'$ and $\eta'$ satisfy the system:
\[
\begin{aligned}
\xi'_t &= \int_0^t \eta'_s\,ds,\\
\eta'_t &= \int_0^t A_s \xi'_s\,ds + \int_0^t [\,F_s(u_s, v_s) - F_s(\bar u_s, \bar v_s)\,]\,ds
 + \int_0^t [\,\Sigma_s(u_s, v_s) - \Sigma_s(\bar u_s, \bar v_s)\,]\,dW_s.
\end{aligned}
\tag{6.196}
\]
By applying Lemma 8.3 and taking condition (E.2) into account, we can deduce from (6.196) that
\[
\begin{aligned}
E \sup_{0\le t\le T} \{\|\xi'_t\|_V^2 + \|\eta'_t\|^2\}
 &\le C\,E\int_0^T \{\,\|F_s(u_s, v_s) - F_s(\bar u_s, \bar v_s)\|^2
 + \|\Sigma_s(u_s, v_s) - \Sigma_s(\bar u_s, \bar v_s)\|_R^2\,\}\,ds\\
 &\le C_2'\,T\,E \sup_{0\le t\le T} \{\|u_t - \bar u_t\|_V^2 + \|v_t - \bar v_t\|^2\},
\end{aligned}
\tag{6.197}
\]
for some $C_2' > 0$. Hence $\Lambda : Z_T \to Z_T$ is a contraction mapping for small $T$, and the unique fixed point $(u; v)$ is the solution of the system (6.194), which can be continued to any $T > 0$. By making use of equation (6.188) in Theorem 8.2, the energy equation (6.193) can be verified.
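The contraction argument behind Theorem 8.4 is constructive: iterating the solution map $\Lambda$ is Picard iteration. A scalar caricature of this (the choice $F(u,v) = \sin u - v$, the coefficients, and one frozen Brownian path are all assumptions made for illustration, not the book's setting):

```python
import numpy as np

def picard_iteration(g=1.0, h0=0.0, a=-1.0, sigma=0.2, T=0.5, n=500,
                     n_iter=12, seed=1):
    """Iterate (u, v) -> Lambda(u, v) for the scalar system
        du = v dt,  dv = (a*u + sin(u) - v) dt + sigma dW,
    along a single fixed Brownian path; returns the final iterate and the
    sup-norm gaps between successive iterates."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dW = rng.standard_normal(n) * np.sqrt(dt)
    u = np.full(n + 1, g)            # zeroth iterate: frozen initial data
    v = np.full(n + 1, h0)
    gaps = []
    for _ in range(n_iter):
        drift = a * u[:-1] + np.sin(u[:-1]) - v[:-1]
        u_new = g + np.concatenate(([0.0], np.cumsum(v[:-1] * dt)))
        v_new = h0 + np.concatenate(([0.0], np.cumsum(drift * dt + sigma * dW)))
        gaps.append(np.max(np.abs(u_new - u)) + np.max(np.abs(v_new - v)))
        u, v = u_new, v_new
    return u, v, gaps
```

The gap sequence shrinks essentially factorially in the number of iterations, which is the discrete shadow of the small-$T$ contraction estimate (6.197).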

Under the linear growth and the uniform Lipschitz continuity conditions, it was shown that the solution exists in any finite time interval. If the nonlinear terms satisfy these conditions only locally, as in the case of a polynomial nonlinearity, the solution may explode in finite time [13]. To obtain a global solution, it is necessary to impose some additional condition on the nonlinear terms to prevent the solution from exploding. The next theorem addresses this fact. Before stating the theorem, we make the following assumptions:

(F.1) For $u \in V$, $v \in H$, let $F_t(u, v)$ and $\Sigma_t(u, v)$ be $\mathcal F_t$-predictable processes with values in $H$ and $L_R^2$, respectively. For any $u \in V$, $v \in H$ and $N > 0$ with $e(u, v) \le N^2$, there exist positive constants $b$, $C_N$ such that
\[
E\int_0^T [\,\|F_t(0,0)\|^2 + \|\Sigma_t(0,0)\|_R^2\,]\,dt \le b,
\qquad
\|\tilde F_t(u, v)\|^2 + \|\tilde\Sigma_t(u, v)\|_R^2 \le C_N, \quad \text{a.e. } (t,\omega) \in \Omega_T,
\]
where $\tilde F_t(u, v) = F_t(u, v) - F_t(0,0)$ and $\tilde\Sigma_t(u, v) = \Sigma_t(u, v) - \Sigma_t(0,0)$.

(F.2) For any $u, u' \in V$ and $v, v' \in H$ with $e(u, v) \le N^2$ and $e(u', v') \le N^2$, there is a constant $K_N > 0$ such that the Lipschitz condition holds:
\[
\|F_t(u, v) - F_t(u', v')\|^2 + \|\Sigma_t(u, v) - \Sigma_t(u', v')\|_R^2
 \le K_N(\|u - u'\|_V^2 + \|v - v'\|^2), \quad \text{a.e. } (t,\omega) \in \Omega_T.
\]

(F.3) Fix $T > 0$. For any $\phi \in C([0,T]; V) \cap C^1([0,T]; H)$, there is a constant $\beta > 0$ such that
\[
\int_0^t \{\,2(F_s(\phi_s, \dot\phi_s), \dot\phi_s) + \|\Sigma_s(\phi_s, \dot\phi_s)\|_R^2\,\}\,ds
 \le \beta\,\Big\{\,1 + \int_0^t e(\phi_s, \dot\phi_s)\,ds\,\Big\},
\]
for $t \in [0,T]$, where we set $\dot\phi_t = \frac{d}{dt}\phi_t$.

Theorem 8.5  Let conditions (B.1), (B.2), (B.3), (B.4), and (F.1), (F.2) be fulfilled. Then, for $g \in V$, $h \in H$, the nonlinear problem (6.178) has a unique continuous local solution $(u_t; v_t)$ with $u_t \in V$ and $v_t \in H$ for $t < \tau$, where $\tau$ is a stopping time. If, in addition, condition (F.3) is also satisfied, then the solution exists in any finite time interval $[0,T]$ with $u \in L^2(\Omega; C([0,T]; V))$ and $v \in L^2(\Omega; C([0,T]; H))$. Moreover, the energy equation (6.193) holds true as well.
Proof.  As before, for the sake of brevity, we set $F_t(0,0) = 0$ and $\Sigma_t(0,0) = 0$. First we will show the existence of a local solution by truncation. For $u \in V$, $v \in H$ and $N > 0$, introduce the truncation operators $\pi_N$ and $\theta_N$ given by
\[
\pi_N(u) = \frac{N u}{N \vee \|u\|_V}, \qquad \theta_N(v) = \frac{N v}{N \vee \|v\|},
\tag{6.198}
\]
and define
\[
F_t^N(u, v) = F_t[\pi_N(u), \theta_N(v)], \qquad \Sigma_t^N(u, v) = \Sigma_t[\pi_N(u), \theta_N(v)].
\tag{6.199}
\]
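In finite dimensions, the truncation (6.198) is simply a radial retraction onto the closed ball of radius $N$; a minimal sketch (the Euclidean norm stands in for the $\|\cdot\|_V$ and $\|\cdot\|$ norms):

```python
import numpy as np

def truncate(u, N):
    """pi_N(u) = N*u / max(N, ||u||): identity inside the ball of radius N,
    projection onto its boundary outside; bounded by N and Lipschitz."""
    return N * u / max(N, np.linalg.norm(u))
```

For example, `truncate(np.array([3.0, 4.0]), 2.0)` has norm 2, while any vector of norm at most $N$ is returned unchanged; boundedness and Lipschitz continuity of this retraction are what make $F_t^N$ and $\Sigma_t^N$ satisfy the global conditions (E.1) and (E.2).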

Consider the truncated system:
\[
\begin{aligned}
du_t^N &= v_t^N\,dt,\\
dv_t^N &= [A_t u_t^N + F_t^N(u_t^N, v_t^N)]\,dt + \Sigma_t^N(u_t^N, v_t^N)\,dW_t,\\
u_0^N &= g, \qquad v_0^N = h.
\end{aligned}
\tag{6.200}
\]
Since $\pi_N(u)$ and $\theta_N(v)$ are bounded and Lipschitz continuous, it then follows from condition (F.2) and (6.199) that $F_t^N$ and $\Sigma_t^N$ satisfy the conditions (E.1) and (E.2). Therefore, by applying Theorem 8.4, the system has a unique strong solution $(u_t^N; v_t^N)$ over $[0,T]$ with the depicted regularity.

Introduce the stopping time $\tau_N = \inf\{t \in (0,T] : e(u_t^N, v_t^N) > N^2\}$, and put it equal to $T$ if the set is empty. Let $(u^N(\cdot, t); v^N(\cdot, t))$ be the solution of equation (6.200). Then, for $t < \tau_N$, $(u_t; v_t) = (u_t^N; v_t^N)$ is the solution of equation (6.178). Since the sequence $\{\tau_N\}$ increases with $N$, the limit $\tau = \lim_{N\to\infty} \tau_N$ exists a.s. For $t < \tau < T$, we have $t < \tau_n$ for some $n > 0$, and we define $(u_t; v_t) = (u_t^n; v_t^n)$. Then $(u; v)$ is the desired local strong solution.
To obtain a global solution under condition (F.3), we make use of the local version of the energy equation (6.193):
\[
\begin{aligned}
\|v_{t\wedge\tau_N}\|^2 - \langle A_{t\wedge\tau_N} u_{t\wedge\tau_N}, u_{t\wedge\tau_N}\rangle
 = \|h\|^2 - \langle A_0 g, g\rangle
 &+ 2\int_0^{t\wedge\tau_N} \langle A'_s u_s, u_s\rangle\,ds
 + 2\int_0^{t\wedge\tau_N} (F_s(u_s, v_s), v_s)\,ds\\
 &+ 2\int_0^{t\wedge\tau_N} (v_s, \Sigma_s(u_s, v_s)\,dW_s)
 + \int_0^{t\wedge\tau_N} \|\Sigma_s(u_s, v_s)\|_R^2\,ds.
\end{aligned}
\tag{6.201}
\]
By taking the expectation over the above equation and making use of conditions (B.3), (B.4), and (F.3), we can deduce that
\[
E\{\|v_{t\wedge\tau_N}\|^2 + \alpha\,\|u_{t\wedge\tau_N}\|_V^2\}
 \le \|h\|^2 + C\,\|g\|_V^2 + 2\alpha_1 E\int_0^t \|u_{s\wedge\tau_N}\|_V^2\,ds
 + \beta\,\Big\{1 + E\int_0^t e(u_{s\wedge\tau_N}, v_{s\wedge\tau_N})\,ds\Big\},
\]
which implies that there exist positive constants $C_0$ and $\mu$ such that
\[
E\,e(u_{t\wedge\tau_N}, v_{t\wedge\tau_N}) \le C_0 + \mu\int_0^t E\,e(u_{s\wedge\tau_N}, v_{s\wedge\tau_N})\,ds.
\]
Therefore, by means of the Gronwall inequality, we get
\[
E\,e(u_{T\wedge\tau_N}, v_{T\wedge\tau_N}) \le C_0\,e^{\mu T}.
\tag{6.202}
\]
On the other hand, we have
\[
E\,e(u_{T\wedge\tau_N}, v_{T\wedge\tau_N}) \ge E\{I(\tau_N \le T)\,e(u_{T\wedge\tau_N}, v_{T\wedge\tau_N})\}
 \ge N^2\,P\{\tau_N \le T\},
\tag{6.203}
\]
where $I(\cdot)$ denotes the indicator function. In view of (6.202) and (6.203), we get
\[
P\{\tau_N \le T\} \le C_0\,e^{\mu T}/N^2,
\]
which, by invoking the Borel–Cantelli lemma, implies that $P\{\tau \le T\} = 0$. This shows that the strong solution exists in any finite time interval $[0,T]$. Finally, the energy equation now follows from (6.201) by letting $N \to \infty$.

(Example 8.1)  Consider the Sine–Gordon equation in a domain $D$ in $\mathbb R^d$ perturbed by a white noise:
\[
\begin{aligned}
\frac{\partial^2 u}{\partial t^2} &= (\Delta - \alpha)u + \beta \sin u - \gamma\,\frac{\partial u}{\partial t} + f(x,t) + \sigma(x,t)\,\frac{\partial}{\partial t}W(x,t),\\
u|_{\partial D} &= 0,\\
u(x, 0) &= g(x), \qquad \frac{\partial u}{\partial t}(x, 0) = h(x),
\end{aligned}
\tag{6.204}
\]
where $\alpha > 0$, $\beta$ and $\gamma$ are given constants, $f(\cdot, t)$ and $\sigma(\cdot, t)$ are continuous adapted processes in $H = L^2(D)$, and $W(\cdot, t)$ is a Wiener random field in $K = H$ with covariance function $r(x, y)$. Assume that
\[
E\Big\{\int_0^T\!\!\int_D |f(x,t)|^2\,dx\,dt + \int_0^T\!\!\int_D r(x,x)\,\sigma^2(x,t)\,dx\,dt\Big\} \le b,
\]
for some $b > 0$. Let $A = (\Delta - \alpha)$, $F_t(u, v) = (\beta \sin u - \gamma v) + f(\cdot, t)$ and $\Sigma_t = \sigma(\cdot, t)$. Then it is easy to check that, for $V = H_0^1$, all conditions in Theorem 8.4 are satisfied. Hence, for $g \in H_0^1$, $h \in H$, the equation (6.204) has a unique strong solution $u \in L^2(\Omega; C([0,T]; H_0^1))$ with $\partial_t u \in L^2(\Omega; C([0,T]; H))$.

(Example 8.2)  Let $D$ be a domain in $\mathbb R^d$ for $d \le 3$. Consider the stochastic wave equation in $D$ with a cubic nonlinear term:
\[
\begin{aligned}
\frac{\partial^2 u}{\partial t^2} &= (\Delta - \alpha)u + \gamma u^3 - \beta\,\frac{\partial u}{\partial t} + f(x,t) + \sigma(x,t)\,\frac{\partial}{\partial t}W(x,t),\\
u|_{\partial D} &= 0,\\
u(x, 0) &= g(x), \qquad \frac{\partial u}{\partial t}(x, 0) = h(x),
\end{aligned}
\tag{6.205}
\]
where all the coefficients of the equation are given as in the previous example. Let $A = (\Delta - \alpha)$ and $F_t(u, v) = (\gamma u^3 - \beta v) + f(\cdot, t)$. Making use of the estimates for the cubic term $u^3$ in Example 7.2, it is easy to show that conditions (F.1) and (F.2) are satisfied. Therefore, by the first part of Theorem 8.5, the problem (6.205) has a unique local solution. To check condition (F.3), for any $\phi \in C([0,T]; V) \cap C^1([0,T]; H)$, we have
\[
\begin{aligned}
\int_0^t \{\,2(F_s(\phi_s, \dot\phi_s), \dot\phi_s) + \|\Sigma_s(\phi_s, \dot\phi_s)\|_R^2\,\}\,ds
 &= 2\gamma\int_0^t (\phi_s^3, \dot\phi_s)\,ds - 2\beta\int_0^t \|\dot\phi_s\|^2\,ds + \int_0^t \|\sigma_s\|_R^2\,ds\\
 &\le \frac{\gamma}{2}\,(|\phi_t|_4^4 - |\phi_0|_4^4) + 2|\beta|\int_0^t \|\dot\phi_s\|^2\,ds + b\\
 &\le \frac{|\gamma|}{2}\,|\phi_0|_4^4 + b + 2|\beta|\int_0^t e(\phi_s, \dot\phi_s)\,ds, \qquad t \in [0,T],
\end{aligned}
\]
provided that $\gamma < 0$. In this case, (F.3) holds true. Thus, by Theorem 8.5, the equation (6.205) has a unique strong solution $u \in L^2(\Omega; C([0,T]; V))$ with $\partial_t u \in L^2(\Omega; C([0,T]; H))$ for any $T > 0$. In fact, from the energy inequality, we can show $u \in L^4(\Omega_T; H)$. However, if $\gamma > 0$, it can be shown that the solution may blow up in finite time [13].
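The dichotomy in Example 8.2 is already visible in a single-mode caricature of (6.205), the ODE $\ddot u = -u + \gamma u^3$ (a deliberate oversimplification of my own: one Fourier mode, no noise, no damping, all parameters assumed):

```python
def single_mode(gamma, u0=3.0, dt=1e-5, t_max=5.0, cap=1e6):
    """Integrate u'' = -u + gamma*u**3 with u(0)=u0, u'(0)=0 by symplectic
    Euler; report (time reached, whether |u| exceeded the blow-up cap)."""
    u, v, t = u0, 0.0, 0.0
    while t < t_max and abs(u) < cap:
        v += (-u + gamma * u ** 3) * dt
        u += v * dt
        t += dt
    return t, abs(u) >= cap
```

With `gamma = 1.0` and large data the trajectory escapes to infinity in finite time, while `gamma = -1.0` (the defocusing case $\gamma < 0$ of the text) stays bounded for all time.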
7
Asymptotic Behavior of Solutions

7.1 Introduction
In the previous chapter, we have discussed the existence and uniqueness ques-
tions for stochastic evolution equations. Now some properties of the solutions
will be studied. In particular, we shall deal with some problems concerning the
asymptotic properties of solutions, such as boundedness, asymptotic stability,
invariant measures and small random perturbations. For stochastic equations
in finite dimension, the asymptotic problems have been studied extensively
for many years by numerous authors. For the stability and related questions,
a comprehensive treatment of the subject is given in the classic book by Khasminskii [47]. In particular, his systematic development of the stability analysis
based on the method of Lyapunov functions has a natural generalization to
an infinite-dimensional setting. As will be seen, the method of Lyapunov func-
tionals will play an important role in the subsequent asymptotic analysis. For
stochastic processes in finite dimensions, the small random perturbation and
the related large deviations problems are treated in the well-known books
by Freidlin and Wentzell [32], Deuschel and Stroock [26], Varadhan [91], and
Dembo and Zeitouni [25]. For stochastic partial differential equations, there
are relatively fewer papers in asymptotic results. A comprehensive discussion
of some stability results for stochastic evolution equations in Hilbert spaces
can be found in a book by Liu [63], and the subject of invariant measures is
treated by Da Prato and Zabczyk [22] in detail.
This chapter consists of seven sections. In Section 7.2, a generalized Itô formula and the Lyapunov functional are introduced. By means of the Lyapunov functionals, the boundedness of solutions and the asymptotic stability
of the null solution to some stochastic evolution equations will be treated in
Section 7.3 and Section 7.4, respectively. The question on the existence of
invariant measures is to be examined in Section 7.5. Then, in Section 7.6,
we shall take up the small random perturbation problems. Finally the large
deviations problem will be discussed briefly in Section 7.7.


7.2 Itô's Formula and Lyapunov Functionals

Consider the linear equation:
\[
u_t = u_0 + \int_0^t A_s u_s\,ds + \int_0^t f_s\,ds + M_t.
\tag{7.1}
\]
Rt
In particular, for u0 = h and Mt = 0 s dWs , this equation yields equation
(6.120) for strong solutions. We shall generalize the It o formula (6.54) to the
one for the strong solution ut of (7.1). Recall that, in Section 6.4, we defined
an Ito functional F : H [0, T ] R to be a function F (v, t) with derivatives
t F, F , F satisfying conditions (1)(3) in Theorem 6-4.1. Due to the fact
As us V , the previous It o formula is no longer valid. To extend this formula
to the present case, we need to impose stronger conditions on F . To this
end, let U H be an open set and let U [0, T ] = UT . Here a functional
: UT R is said to be a strong It o functional if it satisfies the following:
(1) : UT R is locally bounded and continuous such that its first two
partial derivatives t (v, t), (v, t) and (v, t) exist for each (v, t) UT .
(2) The derivatives t and H are locally bounded and continuous in
UT .
(3) For any L1 (H), the map: (v, t) T r[ (v, t)] is locally bounded
and continuous in (v, t) UT .
(4) (, t) : V U V is such that h (, t), v i is continuous in t [0, T ] for
any v V and
k (v, t)kV (1 + kvkV ), (v, t) (V U ) [0, T ],
for some > 0.
The following theorem gives a generalized Itô formula for the strong solution of equation (7.1).

Theorem 2.1  Suppose that $A_t$ satisfies conditions (B.1)–(B.3) given in Section 6.7, $f \in L^2(\Omega_T; H)$ is an integrable, adapted process, and $M_t$ is a continuous $L^2$-martingale in $H$ with local characteristic operator $Q_t$. Let $u_t$ be the strong solution of (7.1) with $u_0 \in L^2(\Omega; H)$ being $\mathcal F_0$-measurable. For any strong Itô functional $\Phi$ on $U_T$, the following formula holds:
\[
\begin{aligned}
\Phi(u_t, t) = \Phi(u_0, 0) &+ \int_0^t \partial_s \Phi(u_s, s)\,ds + \int_0^t \langle A_s u_s, \Phi'(u_s, s)\rangle\,ds\\
 &+ \int_0^t (f_s, \Phi'(u_s, s))\,ds + \int_0^t (\Phi'(u_s, s), dM_s)
 + \frac{1}{2}\int_0^t \mathrm{Tr}[\Phi''(u_s, s)Q_s]\,ds.
\end{aligned}
\tag{7.2}
\]
Proof.  The idea of proof is based on a sequence of approximations to the equation (7.1) by regularizing its coefficients so that Theorem 6-4.2 is applicable. Then the theorem is proved by a limiting process. The proof is long and tedious [75, 76]. To show some technical aspects of the problem, we will sketch a proof for the special case when $M_t$ is a continuous $L^2$-martingale in $V$.

By invoking Theorem 6-7.2, the equation (7.1) has a strong solution $u \in L^2(\Omega_T; V) \cap L^2(\Omega; C([0,T]; H))$. Let $\tilde u = (u - M)$ and $\tilde v_t = (A_t u_t + f_t)$. Then $d\tilde u = \tilde v\,dt$, and for $M \in L^2(\Omega_T; V)$, we have $\tilde u \in L^2(\Omega_T; V)$ and $\tilde v_t \in L^2(\Omega_T; V')$.

Let $Y_T = L^2(\Omega_T; V) \cap L^2(\Omega; C([0,T]; H))$ with norm $\|u\|_T$ defined as in Theorem 6-7.2. Suppose that $\{\tilde u^n\}$ is a sequence in $Y_T$ with $\tilde u_0^n = u_0$ such that $\tilde u^n \in C^1((0,T); H)$ a.s., $\tilde u^n \to \tilde u$ in $L^2(\Omega_T; V)$, and $\frac{d}{dt}\tilde u^n = \tilde v^n \to \tilde v$ in $L^2(\Omega_T; V')$. Let $u^n = \tilde u^n + M$. Then, by Theorem 6-4.1, the following Itô formula holds:
\[
\begin{aligned}
\Phi(\tilde u_t^n + M_t, t) = \Phi(u_0, 0) &+ \int_0^t \partial_s \Phi(\tilde u_s^n + M_s, s)\,ds
 + \int_0^t \langle \tilde v_s^n, \Phi'(\tilde u_s^n + M_s, s)\rangle\,ds\\
 &+ \int_0^t (\Phi'(\tilde u_s^n + M_s, s), dM_s)
 + \frac{1}{2}\int_0^t \mathrm{Tr}[\Phi''(\tilde u_s^n + M_s, s)Q_s]\,ds.
\end{aligned}
\tag{7.3}
\]
To construct such a sequence $\tilde u^n$, let $A_0 : V \to V'$ be a constant closed, coercive linear operator with domain $\mathcal D(A)$. Let $\tilde u^n = u^n - M^n$, where $A M^n \in L^2(\Omega_T; H)$ is such that $M^n \to M$ in $L^2(\Omega_T; V)$ and $u^n \to u$ in $Y_T$. In particular, we define $u^n$ by the equation:
\[
\begin{aligned}
du^n(\cdot, t) &= A_0 u^n(\cdot, t)\,dt + f_t^n\,dt + dM_t^n,\\
u_0^n &= P_n u_0,
\end{aligned}
\tag{7.4}
\]
where $f_t^n = P'_n(\tilde v_t - A_0 u)$; $P_n : H \to V$ and $P'_n : V' \to V$ are two projectors such that $P_n u_0 \to u_0$ in $H$ and $P'_n \tilde v_t \to \tilde v_t$ in $L^2(\Omega_T; V')$. By Theorem 6-7.2, we can assert that the problem (7.4) has a unique strong solution $u^n \in Y_T$ and, furthermore, $u^n \to u$ in $Y_T$. Hence we have $\tilde u^n \to \tilde u$ in $Y_T$. To check $\tilde u^n \in C^1((0,T); H)$, in view of (7.4), $\tilde u^n$ satisfies
\[
d\tilde u_t^n = A_0 \tilde u_t^n\,dt + (f_t^n + A_0 M_t^n)\,dt, \qquad \tilde u_0^n = P_n u_0,
\]
which, by a regularity property of PDEs [61], has a solution $\tilde u^n \in C^1((0,T); H)$ a.s. Now, with the aid of the above convergence results and the conditions on $\Phi$, the Itô formula (7.2) can be verified by taking the limit in (7.3).

For the general case, we can approximate $M_t$ in $H$ by a sequence $M_t^n$ of $V$-valued martingales such that $M^n \to M$ in $L^2(\Omega_T; H)$, and prove the Itô formula (7.2) by a limiting process. However, this will not be carried out here.

Let us consider the strong solution of the equation considered in Section 6.7:
\[
du_t = A_t u_t\,dt + F_t(u_t)\,dt + \Sigma_t(u_t)\,dW_t, \qquad t \ge 0,
\tag{7.5}
\]
where the coefficients $A_t$, $F_t$, and $\Sigma_t$ are assumed to be non-random or deterministic. Then, by Theorem 2.1, the Itô formula holds with the differential generator
\[
\mathcal L_s \Phi(v, s) = \frac{\partial}{\partial s}\Phi(v, s)
 + \frac{1}{2}\mathrm{Tr}[\Phi''(v, s)\Sigma_s(v)R\Sigma_s^*(v)]
 + \langle A_s v, \Phi'(v, s)\rangle + (F_s(v), \Phi'(v, s)).
\tag{7.6}
\]

Let $U \subset H$ be a neighborhood of the origin. A strong Itô functional $\Phi : U \times \mathbb R^+ \to \mathbb R$ is said to be a Lyapunov functional for the equation (7.5) if:

(1) $\Phi(0, t) = 0$ for all $t \ge 0$, and, for any $\epsilon > 0$, there is $\delta > 0$ such that
\[
\inf_{t\ge 0,\ \|h\| \ge \epsilon} \Phi(h, t) \ge \delta; \quad \text{and}
\]

(2) for any $t \ge 0$ and $v \in U \cap V$,
\[
\mathcal L_t \Phi(v, t) \le 0.
\tag{7.7}
\]

In particular, if $\Phi = \Phi(v)$ is independent of $t$, we have $\mathcal L_t \Phi(v) = \mathcal L\Phi(v)$ with
\[
\mathcal L\Phi(v) = \frac{1}{2}\mathrm{Tr}[\Phi''(v)\Sigma_s(v)R\Sigma_s^*(v)]
 + \langle A_s v, \Phi'(v)\rangle + (F_s(v), \Phi'(v)).
\tag{7.8}
\]

Remark: For second-order stochastic evolution equations treated in Section 6.8, Itô's formula is not valid in general, because the solutions are insufficiently regular. However, it does hold for a quadratic functional, such as the energy equation given in Theorem 6-8.4. The energy-related functionals will play an important role in studying the asymptotic behavior of solutions to second-order equations, such as a stochastic hyperbolic equation.

7.3 Boundedness of Solutions

Let $u_t^h$ be a strong solution of the equation (7.5) with $u_0^h = h$. Here and throughout the remaining chapter, $W_t$ denotes a $K$-valued Wiener process with a finite-trace covariance operator $R$. The solution is said to be non-explosive if
\[
\lim_{r\to\infty} P\{\sup_{0\le t\le T} \|u_t^h\| > r\} = 0,
\]
for any $T > 0$. If the above holds for $T = \infty$, the solution is said to be ultimately bounded.

Lemma 3.1  Let $\Phi : U \times \mathbb R^+ \to \mathbb R^+$ be a Lyapunov functional and let $u_t^h$ denote the strong solution of (7.5). For $r > 0$, let $B_r = \{h \in H : \|h\| \le r\}$ be such that $B_r \subset U$. Define
\[
\tau = \inf\{t > 0 : u_t^h \in B_r^c,\ h \in B_r\},
\]
with $B_r^c = H \setminus B_r$, and put $\tau = T$ if the set is empty. Then the process $\Phi_t = \Phi(u_{t\wedge\tau}^h, t\wedge\tau)$ is a local $\mathcal F_t$-supermartingale and the following Chebyshev-type inequality holds:
\[
P\{\sup_{0\le t\le T} \|u_t^h\| > r\} \le \frac{\Phi(h, 0)}{\Phi_r},
\tag{7.9}
\]
where
\[
\Phi_r = \inf_{0\le t\le T,\ h \in U \cap B_r^c} \Phi(h, t).
\]

Proof.  By Itô's formula and the properties of a Lyapunov functional, we have
\[
\begin{aligned}
\Phi(u_{t\wedge\tau}^h, t\wedge\tau)
 &= \Phi(h, 0) + \int_0^{t\wedge\tau} \mathcal L_s \Phi(u_s^h, s)\,ds
 + \int_0^{t\wedge\tau} (\Phi'(u_s^h, s), \Sigma_s(u_s^h)\,dW_s)\\
 &\le \Phi(h, 0) + \int_0^{t\wedge\tau} (\Phi'(u_s^h, s), \Sigma_s(u_s^h)\,dW_s),
\end{aligned}
\]
so that $\Phi_t = \Phi(u_{t\wedge\tau}^h, t\wedge\tau)$ is a local supermartingale and
\[
E\,\Phi_t \le E\,\Phi_0 = \Phi(h, 0).
\]
But
\[
E\,\Phi_T = E\,\Phi(u_{T\wedge\tau}^h, T\wedge\tau)
 \ge E\{\Phi(u_\tau^h, \tau);\ \tau \le T\}
 \ge \inf_{0\le t\le T,\ \|h\| = r} \Phi(h, t)\,P\{\tau \le T\}
 \ge \Phi_r\,P\{\sup_{0\le t\le T} \|u_t^h\| > r\},
\]
which implies (7.9).

As a consequence of this lemma, the following two boundedness theorems can be proved easily.

Theorem 3.2  Suppose that there exists a Lyapunov functional $\Phi : H \times \mathbb R^+ \to \mathbb R^+$ such that
\[
\Phi_r = \inf_{t\ge 0,\ \|h\| \ge r} \Phi(h, t) \to \infty, \quad \text{as } r \to \infty.
\]
Then the solution $u_t^h$ is ultimately bounded.

Proof.  By Lemma 3.1, we can deduce that
\[
P\{\sup_{t\ge 0} \|u_t^h\| > r\} = \lim_{T\to\infty} P\{\sup_{0\le t\le T} \|u_t^h\| > r\} \le \frac{\Phi(h, 0)}{\Phi_r}.
\]
Therefore we have
\[
\lim_{r\to\infty} P\{\sup_{t\ge 0} \|u_t^h\| > r\} = 0,
\]
which implies the assertion.

Theorem 3.3  If there exist an Itô functional $\Phi : H \times \mathbb R^+ \to \mathbb R^+$ and a constant $\mu > 0$ such that
\[
\mathcal L_t \Phi(v, t) \le \mu\,\Phi(v, t) \quad \text{for any } v \in V,
\]
and the infimum $\inf_{t\ge 0,\ \|h\|\ge r} \Phi(h, t) = \Phi_r$ exists such that $\lim_{r\to\infty} \Phi_r = \infty$, then the solution $u_t^h$ does not explode in finite time.

Proof.  Let $\Psi(v, t) = e^{-\mu t}\,\Phi(v, t)$. Then it is easy to check that
\[
\mathcal L_t \Psi(v, t) \le 0,
\]
so that $\Psi$ is a Lyapunov functional. Hence, by Lemma 3.1,
\[
P\{\sup_{0\le t\le T} \|u_t^h\| > r\} \le \frac{\Psi(h, 0)}{\Psi_r} \le \frac{\Phi(h, 0)}{e^{-\mu T}\,\Phi_r} \to 0
\]
as $r \to \infty$, for any $T > 0$.

(Example 3.1)  Consider the reaction–diffusion equation in $D \subset \mathbb R^3$:
\[
\begin{aligned}
\frac{\partial u}{\partial t} &= \kappa\Delta u + f(u) + \sum_{k=1}^3 \frac{\partial u}{\partial x_k}\,\frac{\partial}{\partial t}W_k(x,t),\\
u|_{\partial D} &= 0, \qquad u(x, 0) = h(x),
\end{aligned}
\tag{7.10}
\]
where $f(\cdot) : H = L^2(D) \to L^p(D)$ is a nonlinear function with $p \le 3$, and the $W_k(x,t)$ are Wiener random fields with bounded covariance functions $r_{jk}(x,y)$ such that
\[
\sum_{j,k=1}^3 r_{jk}(x,x)\,\zeta_j\zeta_k \le r_0\,|\zeta|^2, \qquad \zeta \in \mathbb R^3,
\tag{7.11}
\]
for some $r_0 > 0$. Suppose that, by Theorem 6-7.5, the problem (7.10) has a strong solution $u(\cdot, t) \in V$ with $V = L^{p+1}(D) \cap H_0^1 = H_0^1$, due to the Sobolev imbedding $H_0^1 \subset L^{p+1}(D)$ for $p \le 5$. Consider the reduced ordinary differential equation
\[
\frac{dv}{dt} = f(v).
\]
Assume there exist a $C^2$-function $\varphi(v) \ge 0$ and a constant $\theta$ such that $\varphi(v) = 0$ implies $v = 0$, and it satisfies
\[
f(v)\,\varphi'(v) \le -\theta\,\varphi(v), \qquad \varphi''(v) \ge 0, \qquad \text{and} \quad \varphi(v) \to \infty \ \text{as} \ |v| \to \infty.
\tag{7.12}
\]
Define
\[
\Phi(v) = \int_D \varphi[v(x)]\,dx, \qquad v \in V.
\tag{7.13}
\]
Let $\Sigma(v) = \nabla v$ and $R = [r_{jk}]_{3\times 3}$. Then we have
\[
\begin{aligned}
\mathcal L\Phi(v) &= \kappa(\Delta v, \Phi'(v)) + (f(v), \Phi'(v)) + \frac{1}{2}\mathrm{Tr}[\Phi''(v)\Sigma(v)R\Sigma^*(v)]\\
 &= -\kappa\int_D \varphi''[v(x)]\,|\nabla v(x)|^2\,dx + \int_D f[v(x)]\,\varphi'[v(x)]\,dx\\
 &\quad + \frac{1}{2}\int_D \varphi''[v(x)] \sum_{j,k=1}^3 r_{jk}(x,x)\,\frac{\partial v(x)}{\partial x_j}\,\frac{\partial v(x)}{\partial x_k}\,dx.
\end{aligned}
\]
In view of (7.11) and (7.12), the above yields
\[
\mathcal L\Phi(v) \le -(\kappa - r_0/2)\int_D \varphi''[v(x)]\,|\nabla v(x)|^2\,dx - \theta\,\Phi(v).
\tag{7.14}
\]
Therefore, if $\kappa \ge r_0/2$, by Theorem 3.3, the solution does not explode in finite time. From Theorem 3.2, if $\kappa \ge r_0/2$ and $\theta \ge 0$, the solution is ultimately bounded.
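Condition (7.12) can be checked on a grid for concrete choices; for instance, with the assumed pair $f(v) = -v - v^3$ and $\varphi(v) = v^2$ one has $f(v)\varphi'(v) = -2v^2(1 + v^2) \le -2\varphi(v)$, so $\theta = 2$ works. A small verification sketch (the functions and grid are my own illustrative choices):

```python
import numpy as np

def check_condition(f, phi, dphi, d2phi, theta, grid):
    """Verify f(v)*phi'(v) <= -theta*phi(v) and phi''(v) >= 0 on a grid,
    with a small tolerance for roundoff at the equality point v = 0."""
    v = np.asarray(grid)
    drift_ok = np.all(f(v) * dphi(v) <= -theta * phi(v) + 1e-12)
    convex_ok = np.all(d2phi(v) >= 0.0)
    return bool(drift_ok and convex_ok)
```

Such a grid check is of course no substitute for the pointwise inequality (7.12), but it is a quick way to screen candidate pairs $(f, \varphi)$.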
On the other hand, if the nonlinear terms in the equation are not globally Lipschitz continuous, the solution may explode in finite time. To illustrate such a phenomenon, we consider the following initial-boundary value problem for the reaction–diffusion equation
\[
\begin{aligned}
\frac{\partial u}{\partial t} &= \kappa\Delta u + f(u) + \sigma(u, x)\,\frac{\partial}{\partial t}W(x,t),\\
u|_{\partial D} &= 0, \qquad u(x, 0) = h(x),
\end{aligned}
\tag{7.15}
\]
where the initial state $h(x)$ is positive in $D$. Under suitable conditions on $f(u)$ and $\sigma(u, x)$ given in Theorem 3-6.5 and Theorem 4-5.1, the equation (7.15) has a unique positive local solution. We will show that such a solution may explode in an $L^p$-sense. First, consider the eigenvalue problem
\[
\kappa\Delta v = -\lambda v \ \text{in} \ D, \qquad v|_{\partial D} = 0.
\]
It is known that the principal eigenvalue $\lambda_1 > 0$ and the corresponding eigenfunction $\phi_1$ does not change sign in $D$ (p. 356, [28]). So we can choose $\phi_1$ such that
\[
\phi_1(x) > 0 \ \text{for} \ x \in D, \quad \text{and} \quad \int_D \phi_1(x)\,dx = 1.
\tag{7.16}
\]

Theorem 3.4  Suppose that $f(u)$ and $\sigma(u, x)$ are continuous functions such that the problem has a positive local strong solution. Moreover, assume that:

(1) The function $f(r)$ is non-negative for $r \in \mathbb R$ and convex for $r > 0$.

(2) There is a constant $M > 0$ such that $f(r) > \lambda_1 r$ for $r > M$ and
\[
\int_M^\infty \frac{dr}{f(r) - \lambda_1 r} < \infty.
\]

(3) $\lambda_0 := \int_D h(x)\,\phi_1(x)\,dx > M$.

Then, for any $p \ge 1$, there exists $T_p > 0$ such that
\[
\lim_{t\to T_p} \int_D |u(x,t)|^p\,dx = \infty.
\tag{7.17}
\]

Proof.  We shall prove the theorem by contradiction. Suppose there exist a positive solution $u$ and some $p \ge 1$ such that
\[
\sup_{0\le t\le T} E\int_D |u(x,t)|^p\,dx < \infty
\tag{7.18}
\]
for any $T > 0$. Define
\[
\bar u(t) := \int_D u(x,t)\,\phi_1(x)\,dx > 0.
\tag{7.19}
\]
By (7.16), $\phi_1$ can be regarded as the probability density function for a random variable $\zeta$ in $D$ independent of $W_t$. The integral (7.19) may then be interpreted as the expectation $\bar u(t) = E_\zeta\,u(\zeta, t)$ with respect to the random variable $\zeta$. In view of (7.15) and (7.19), we can obtain the equation
\[
\bar u(t) = (h, \phi_1) - \lambda_1\int_0^t \bar u(s)\,ds + \int_0^t (f(u_s), \phi_1)\,ds + \int_0^t (\phi_1, \sigma(u_s)\,dW_s).
\]
After taking an expectation, the above yields
\[
\lambda(t) = (h, \phi_1) - \lambda_1\int_0^t \lambda(s)\,ds + E\int_0^t (f(u_s), \phi_1)\,ds,
\]
or, in differential form,
\[
\frac{d\lambda(t)}{dt} = -\lambda_1\,\lambda(t) + E\int_D f(u(x,t))\,\phi_1(x)\,dx, \qquad \lambda(0) = \lambda_0,
\tag{7.20}
\]
where we set $\lambda(t) = E\,\bar u(t)$ and $\lambda_0 = (h, \phi_1)$. Since, in view of condition (1), the function $f(r)$ is convex for $r > 0$, by Jensen's inequality, we have
\[
E\int_D f(u(x,t))\,\phi_1(x)\,dx = E\,E_\zeta\,f(u(\zeta, t)) \ge f(\lambda(t)).
\tag{7.21}
\]
Upon using (7.21) in (7.20), one obtains
\[
\frac{d\lambda(t)}{dt} \ge f(\lambda(t)) - \lambda_1\,\lambda(t), \qquad \lambda(0) = \lambda_0,
\]
which implies, for $\lambda(0) > M$,
\[
T \le \int_{\lambda(0)}^{\lambda(T)} \frac{dr}{f(r) - \lambda_1 r} \le \int_M^\infty \frac{dr}{f(r) - \lambda_1 r} < \infty,
\]
by making use of conditions (2) and (3). Hence the inequality (7.18) cannot hold for all $T > 0$. This contradiction shows that $\lambda(t) = E\int_D u(x,t)\,\phi_1(x)\,dx$ will blow up at a time
\[
T_e \le \int_{\lambda(0)}^\infty \frac{dr}{f(r) - \lambda_1 r}.
\]
Since $\phi_1$ is bounded and continuous on $\bar D$, we can apply Hölder's inequality for each $p \ge 1$ to get
\[
\lambda(t) \le C_p\,\Big\{E\int_D |u(x,t)|^p\,dx\Big\}^{1/p},
\]
where $C_p = \big\{\int_D |\phi_1(x)|^q\,dx\big\}^{1/q}$ with $q = p/(p-1)$. Therefore the solution explodes at a time $T_p \le T_e$, as asserted by equation (7.17).
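The blow-up time bound $T_e$ is easy to evaluate numerically for concrete data; for the assumed example $f(r) = r^2$ with $\lambda_1 = 1$ it is available in closed form, $\int_{\lambda_0}^\infty dr/(r^2 - r) = \log(\lambda_0/(\lambda_0 - 1))$, which a quadrature sketch reproduces:

```python
import numpy as np

def blowup_time_bound(lam0, lam1=1.0, f=lambda r: r * r,
                      upper=1e6, n=200_000):
    """Truncated quadrature of T_e = int_{lam0}^inf dr / (f(r) - lam1*r)
    on a log-spaced grid (f, lam1, and the grid are illustrative choices;
    the tail beyond `upper` is dropped)."""
    r = np.logspace(np.log10(lam0), np.log10(upper), n)
    y = 1.0 / (f(r) - lam1 * r)
    # trapezoidal rule on the nonuniform grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r)))
```

For $\lambda_0 = 2$, `blowup_time_bound(2.0)` agrees with $\log 2 \approx 0.6931$ to within the truncation error of the tail.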

7.4 Stability of Null Solution

To study the stability problems, we assume throughout this section that, for given $u_0 = h \in H$, the initial-value problem (7.5) has a global strong solution $u^h \in L^2(\Omega\times(0,T); V) \cap L^2(\Omega; C((0,T); H))$ for any $T > 0$. Suppose that the equation has an equilibrium solution $u^*$. For simplicity, suppose $F_t(0) = 0$ and $\Sigma_t(0) = 0$, so that $u^* \equiv 0$ is such a solution for all $t > 0$. In particular, we shall study the stability of the null solution.

As for definitions of stability, we say that the null solution $u \equiv 0$ of (7.5) is stable in probability in $H$ if for any $\epsilon_1, \epsilon_2 > 0$, there is $\delta > 0$ such that if $\|h\| < \delta$, then
\[
P\{\sup_{t>0} \|u_t^h\| > \epsilon_1\} < \epsilon_2.
\]
The null solution is said to be asymptotically $p$-stable in $H$ for $p \ge 1$ if there exists $\delta > 0$ such that if $\|h\| < \delta$, then
\[
\lim_{t\to\infty} E\|u_t^h\|^p = 0.
\]
If, in addition, there are positive constants $K(\delta)$, $\mu$, and $T$ such that
\[
E\|u_t^h\|^p \le K(\delta)\,e^{-\mu t} \qquad \forall\, t > T,
\]
then the null solution is exponentially $p$-stable in $H$. For $p = 2$, these properties are known as asymptotic stability in mean-square and exponential stability in mean-square, respectively.

The null solution is said to be a.s. (almost surely) asymptotically stable in $H$ if there exists $\delta > 0$ such that if $\|h\| < \delta$, then
\[
P\{\lim_{t\to\infty} \|u_t^h\| = 0\} = 1,
\]
and it is a.s. (almost surely) exponentially stable in $H$ if there exist positive constants $\mu$, $K(\delta)$, and a random variable $T(\omega) > 0$ such that if $\|h\| < \delta$, then
\[
\|u_t^h\| \le K(\delta)\,e^{-\mu t} \qquad \forall\, t > T, \ \text{a.s.}
\]
In view of the above definitions, it is clear that exponential stability implies asymptotic stability. If an asymptotic stability condition holds for any $\delta > 0$, then we say that the solution is asymptotically globally stable in $H$.

Theorem 4.1  Suppose there exists a Lyapunov functional $\Phi : H \times \mathbb R^+ \to \mathbb R^+$. Then the null solution of (7.5) is stable in probability.

Proof.  Let $r > 0$ be such that $B_r = \{h \in H : \|h\| \le r\} \subset U$. For any $\epsilon_1, \epsilon_2 > 0$, by Lemma 3.1, we have
\[
P\{\sup_{0\le t\le T} \|u_t^h\| > \epsilon_1\} \le P\{\sup_{0\le t\le T} \|u_t^h\| > (r\wedge\epsilon_1)\}
 \le \frac{\Phi(h, 0)}{\Phi_{(r\wedge\epsilon_1)}}.
\]
As $T \to \infty$, we get
\[
P\{\sup_{t\ge 0} \|u_t^h\| > \epsilon_1\} \le \frac{\Phi(h, 0)}{\Phi_{(r\wedge\epsilon_1)}}.
\]
Since $\Phi(h, 0)$ is continuous with $\Phi(0, 0) = 0$, given $\epsilon_2 > 0$, there exists $\delta > 0$ such that the right-hand side of the above inequality becomes less than $\epsilon_2$ if $\|h\| < \delta$.

Theorem 4.2  Let the equation (7.5) have a global strong solution $u_t^h$ such that $E\|u_t^h\|^p < \infty$ for any $t > 0$. Suppose there exists a Lyapunov functional $\Phi(h, t)$ on $H \times \mathbb R^+$ such that
\[
\mathcal L_t \Phi(v, t) \le -\mu\,\Phi(v, t), \qquad \forall\, v \in V, \ t > 0,
\tag{7.22}
\]
and
\[
\alpha\,\|h\|^p \le \Phi(h, t) \le \beta\,\|h\|^p, \qquad \forall\, h \in H, \ t > 0,
\tag{7.23}
\]
for some positive constants $\mu, \alpha, \beta$. Then the null solution is exponentially $p$-stable.

Proof.  For any $\delta > 0$, let $h \in B_\delta$. By Itô's formula, we have
\[
e^{\mu t}\,\Phi(u_t^h, t) = \Phi(h, 0) + \int_0^t e^{\mu s}\,[\mathcal L_s \Phi(u_s^h, s) + \mu\,\Phi(u_s^h, s)]\,ds
 + \int_0^t e^{\mu s}\,(\Phi'(u_s^h, s), \Sigma_s(u_s^h)\,dW_s),
\]
so that, by condition (7.22),
\[
E\,e^{\mu t}\,\Phi(u_t^h, t) = \Phi(h, 0) + \int_0^t e^{\mu s}\,E[\mathcal L_s \Phi(u_s^h, s) + \mu\,\Phi(u_s^h, s)]\,ds
 \le \Phi(h, 0).
\]
In view of condition (7.23), the above implies
\[
E\|u_t^h\|^p \le \frac{\Phi(h, 0)}{\alpha}\,e^{-\mu t}, \qquad t > 0.
\]

Theorem 4.3  Suppose there exists a Lyapunov functional $\Phi : H \times \mathbb R^+ \to \mathbb R^+$ such that
\[
\mathcal L_t \Phi(v, t) \le -\mu\,\Phi(v, t), \qquad \forall\, v \in V, \ t > 0,
\tag{7.24}
\]
for some constant $\mu > 0$. Then the null solution is a.s. asymptotically stable. If, in addition, there exist positive constants $\alpha$, $p$ such that
\[
\Phi(h, t) \ge \alpha\,\|h\|^p, \qquad \forall\, h \in H, \ t > 0,
\tag{7.25}
\]
then the null solution is a.s. exponentially stable.

Proof.  For $h \in H$, let
\[
\Psi(u_t^h, t) = e^{\mu t}\,\Phi(u_t^h, t),
\]
and let $\tau_n = \inf_{t>0}\{\|u_t^h\| > n\}$. By Itô's formula and condition (7.24), we have
\[
\Psi(u_{t\wedge\tau_n}^h, t\wedge\tau_n) = \Phi(h, 0)
 + \int_0^{t\wedge\tau_n} e^{\mu s}\,(\mathcal L_s + \mu)\Phi(u_s^h, s)\,ds
 + \int_0^{t\wedge\tau_n} e^{\mu s}\,(\Phi'(u_s^h, s), \Sigma_s(u_s^h)\,dW_s).
\]
It is easy to check that, as $n \to \infty$, $\Psi_t = \Psi(u_t^h, t)$ is a positive supermartingale. By Theorem 4.2 and a supermartingale convergence theorem (p. 65, [81]), it converges almost surely to a positive finite limit $\Psi_\infty(\omega)$ as $t \to \infty$. That is, there exists $\Omega_0$ with $P\{\Omega_0\} = 1$ such that
\[
\lim_{t\to\infty} e^{\mu t}\,\Phi(u_t^h, t) = \lim_{t\to\infty} \Psi_t(\omega) = \Psi_\infty(\omega), \qquad \omega \in \Omega_0.
\]
It follows that, for each $\omega \in \Omega_0$, there exists $T(\omega) > 0$ such that
\[
\Phi(u_t^h, t) \le 2\,\Psi_\infty(\omega)\,e^{-\mu t}, \qquad t > T(\omega),
\tag{7.26}
\]
which implies that the null solution is a.s. asymptotically stable. If condition (7.25) also holds, the inequality (7.26) yields
\[
\|u_t^h\|^p \le \frac{2}{\alpha}\,\Psi_\infty(\omega)\,e^{-\mu t},
\]
or
\[
\|u_t^h\| \le K^h\,e^{-\nu t}, \qquad t > T, \ \text{a.s.},
\]
with $K^h = (2\Psi_\infty/\alpha)^{1/p}$ and $\nu = \mu/p$. Hence the null solution is a.s. exponentially stable.

(Example 4.1)  Suppose that the coefficients of equation (7.5) satisfy the coercivity condition as in Theorem 6-7.5:
\[
2\langle A_t v, v\rangle + 2(F_t(v), v) + \|\sigma_t(v)\|_R^2 \le \beta\|v\|^2 - \alpha\|v\|_V^2, \quad \forall\, v\in V, \tag{7.27}
\]
for some constants $\alpha, \beta$ with $\alpha > 0$. Then the following stability result holds.

Theorem 4.4  Let the coercivity condition (7.27) be satisfied. If $\delta = \inf_{v\in V} \dfrac{\|v\|_V^2}{\|v\|^2} > \beta/\alpha$, then the null solution of (7.5) is a.s. exponentially stable.

Proof.  Let $\Lambda(h) = \|h\|^2$. Then, by condition (7.27),
\[
\mathcal{L}\Lambda(v) = 2\langle A_t v, v\rangle + 2(F_t(v), v) + \|\sigma_t(v)\|_R^2
\le \beta\|v\|^2 - \alpha\|v\|_V^2 \le \Big(\beta - \alpha\frac{\|v\|_V^2}{\|v\|^2}\Big)\|v\|^2
\le (\beta - \alpha\delta)\|v\|^2,
\]
or
\[
\mathcal{L}\Lambda(v) \le -\gamma\,\Lambda(v), \quad \text{for } v\in V,
\]
with $\gamma = \alpha\delta - \beta > 0$. Therefore, by Theorem 4.3, the null solution is a.s. exponentially stable.
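The constant $\delta$ in Theorem 4.4 can be checked numerically in a concrete case (my own illustration, not from the book): for $H = L^2(0,\pi)$ and $V = H_0^1$ with $\|v\|_V^2 = \|v\|^2 + \|v'\|^2$, expanding $v$ in the Dirichlet eigenbasis $\{\sin kx\}$ gives $\|v\|_V^2/\|v\|^2 = 1 + \sum k^2 c_k^2/\sum c_k^2 \ge 2$, so $\delta = 2$ and the stability condition reads $\beta/\alpha < 2$.

```python
import numpy as np

# Numerical check that ||v||_V^2 / ||v||^2 >= 2 on (0, pi) with Dirichlet
# boundary conditions, using random sine-series trial functions
# (an illustration with assumed norms, not from the text).
rng = np.random.default_rng(1)

x = np.linspace(0.0, np.pi, 2001)
dx = x[1] - x[0]
k = np.arange(1, 9)

ratios = []
for _ in range(200):                      # random trial functions in V
    c = rng.normal(size=8)                # sine-series coefficients
    v = (c[:, None] * np.sin(np.outer(k, x))).sum(axis=0)
    dv = (c[:, None] * k[:, None] * np.cos(np.outer(k, x))).sum(axis=0)
    num = np.sum(v**2 + dv**2) * dx       # ||v||_V^2 (Riemann sum)
    den = np.sum(v**2) * dx               # ||v||^2
    ratios.append(num / den)

delta_est = min(ratios)   # numerical lower estimate of delta (exact: 2)
```

No random trial function should drop the Rayleigh-type quotient below $1 + \lambda_1 = 2$.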

(Example 4.2)  Let us reconsider (Example 3.1) under the same assumptions. Suppose $f(0) = 0$ so that $u \equiv 0$ is a solution of (7.10). Let $\Lambda$ be the Lyapunov functional given by (7.13). In view of (7.14), if $\sigma \le r_0/2$ and $\lambda < 0$, then
\[
\mathcal{L}\Lambda(v) \le -|\lambda|\,\Lambda(v), \quad \text{for } v\in V.
\]
By invoking Theorem 4.3, the null solution is a.s. asymptotically stable.

7.5 Invariant Measures

Consider the autonomous version of equation (7.5):
\[
du_t = Au_t\,dt + F(u_t)\,dt + \sigma(u_t)\,dW_t, \quad t \ge 0, \qquad u_0 = \xi, \tag{7.28}
\]
where $\xi \in L^2(\Omega; H)$ is an $\mathcal{F}_0$-measurable random variable. Suppose that the equation has a unique global strong solution $u_t$, $t \ge 0$, as depicted in Theorem 6-7.5. Then, as in finite dimensions, it is easy to show that the solution $u_t$ is a time-homogeneous, continuous Markov process in $H$ with the transition probability function $P(h,t,\cdot)$ defined by the conditional probability
\[
P(h,\, t-s,\, B) = P\{u_t \in B \,|\, u_s = h\},
\]
for $0 \le s < t$, $h \in H$, and $B \in \mathcal{B}(H)$, the Borel field of $H$. For any bounded continuous function $\Phi \in C_b(H)$, the transition operator $P_t$ is defined by
\[
(P_t\Phi)(h) = \int_H \Phi(g)\,P(h,t,dg) = E\{\Phi(u_t)\,|\,u_0 = h\}. \tag{7.29}
\]

Recall that a probability measure $\mu$ on $(H, \mathcal{B}(H))$ is said to be an invariant measure for the given transition probability function if it satisfies the equation
\[
\mu(B) = \int_H P(h,t,B)\,\mu(dh),
\]
or, equivalently, the following holds:
\[
\int_H (P_t\Phi)(h)\,\mu(dh) = \int_H \Phi(h)\,\mu(dh), \tag{7.30}
\]
for any bounded continuous function $\Phi \in C_b(H)$.

For $\Phi \in C_b(H)$, if $P_t\Phi(h)$ is bounded and continuous for $t > 0$, $h \in H$, then the transition probability $P(h,t,\cdot)$ is said to possess the Feller property. Let $M_1(H)$ denote the space of probability measures on $\mathcal{B}(H)$. A sequence of probability measures $\{\mu_n\}$ is said to converge weakly to $\mu$ in $M_1(H)$, or simply, $\mu_n \Rightarrow \mu$, if
\[
\lim_{n\to\infty}\int_H \Phi(h)\,\mu_n(dh) = \int_H \Phi(h)\,\mu(dh)
\]
for any $\Phi \in C_b(H)$.

Lemma 5.1  Let $\mu_t = \mathcal{L}\{u_t\}$ denote the probability measure for the solution $u_t$ of (7.28) that has the Feller property. If $\mu_t \Rightarrow \mu$ as $t \to \infty$, then $\mu$ is an invariant measure for the solution process.

Proof.  Let $\nu = \mathcal{L}\{\xi\}$ be the initial distribution. For $t > 0$ and $\Phi \in C_b(H)$, since $\mu_t \Rightarrow \mu$, we have
\[
\int_H \Phi(h)\,\mu_t(dh) = \int_H (P_t\Phi)(h)\,\nu(dh) \to \int_H \Phi(h)\,\mu(dh),
\]
as $t \to \infty$. Hence
\[
\int_H (P_{t+s}\Phi)(h)\,\nu(dh) \to \int_H \Phi(h)\,\mu(dh), \tag{7.31}
\]
for any $s > 0$. On the other hand,
\[
\int_H (P_{t+s}\Phi)(h)\,\nu(dh) = \int_H [P_t(P_s\Phi)](h)\,\nu(dh) \to \int_H (P_s\Phi)(h)\,\mu(dh), \tag{7.32}
\]
as $t \to \infty$ with fixed $s$, since $P_s\Phi \in C_b(H)$ by the Feller property. In view of (7.31) and (7.32), we have
\[
\int_H (P_s\Phi)\,d\mu = \int_H \Phi\,d\mu
\]
for any $s > 0$, or $\mu$ is an invariant measure by (7.30).

Lemma 5.2  Let $\mu_t^h(\cdot) = P(h,t,\cdot)$ be the distribution of $u_t^h$ given $\xi = h \in H$. If $\mu_t^h \Rightarrow \mu$ for all $h \in H$, then $\mu$ is the unique invariant measure for the solution process of equation (7.28).

Proof.  By the weak convergence, for $\Phi \in C_b(H)$,
\[
(P_t\Phi)(h) = \int_H \Phi(g)\,\mu_t^h(dg) \to \int_H \Phi(g)\,\mu(dg).
\]
Now, for any invariant measure $\nu \in M_1(H)$, we have
\[
\int_H \Phi(h)\,\nu(dh) = \lim_{t\to\infty}\int_H (P_t\Phi)(g)\,\nu(dg) = \int_H \Phi(g)\,\mu(dg),
\]
or $\mu$ is the unique invariant measure.

Theorem 5.3  Let $\mu$ be an invariant measure for equation (7.28) with the initial distribution $\mathcal{L}\{\xi\} = \mu$. Then the solution $\{u_t, t > 0\}$ is a stationary process in $H$.

Proof.  Let $\Phi_i$, $i = 1, 2, \dots, n$, be bounded measurable functions and let $0 \le t_1 < t_2 < \dots < t_n$. We must show that, for any $\tau > 0$,
\[
M_n = E\{\Phi_1(u_{t_1+\tau})\,\Phi_2(u_{t_2+\tau})\cdots\Phi_n(u_{t_n+\tau})\}
\]
is independent of $\tau$, for any $n \ge 1$. This can be verified by induction. For $n = 1$, due to the invariant measure $\mu$,
\[
M_1 = E\,\Phi_1(u_{t_1+\tau}) = \int_H (P_{t_1+\tau}\Phi_1)\,d\mu
= \int_H P_\tau(P_{t_1}\Phi_1)\,d\mu = \int_H (P_{t_1}\Phi_1)\,d\mu = E\,\Phi_1(u_{t_1}).
\]
Now suppose that
\[
M_{n-1} = E\{\Phi_1(u_{t_1+\tau})\,\Phi_2(u_{t_2+\tau})\cdots\Phi_{n-1}(u_{t_{n-1}+\tau})\}
\]
is independent of $\tau$. Then, making use of the Markov property,
\[
\begin{aligned}
M_n &= E\{[\Phi_1(u_{t_1+\tau})\cdots\Phi_{n-1}(u_{t_{n-1}+\tau})]\,E[\Phi_n(u_{t_n+\tau})\,|\,\mathcal{F}_{t_{n-1}+\tau}]\} \\
&= E\{[\Phi_1(u_{t_1+\tau})\cdots\Phi_{n-2}(u_{t_{n-2}+\tau})]\,[\Phi_{n-1}(u_{t_{n-1}+\tau})\,P_{t_n-t_{n-1}}\Phi_n(u_{t_{n-1}+\tau})]\} \\
&= E\{[\Phi_1(u_{t_1})\cdots\Phi_{n-2}(u_{t_{n-2}})]\,[\Phi_{n-1}(u_{t_{n-1}})\,P_{t_n-t_{n-1}}\Phi_n(u_{t_{n-1}})]\}
\end{aligned}
\]
by the induction hypothesis, so that $M_n$ is also independent of $\tau$.

As a special case of (7.28), consider the linear equation with an additive noise:
\[
du_t = Au_t\,dt + dW_t, \quad t > 0, \qquad u_0 = \xi. \tag{7.33}
\]
Since $A$ is coercive, it generates a strongly continuous semigroup $G_t$ on $H$. It is known that the solution of (7.33) is given by
\[
u_t = G_t\xi + \int_0^t G_{t-s}\,dW_s, \tag{7.34}
\]
which has mean $E u_t = G_t(E\xi)$ and covariance operator
\[
\Gamma_t = \int_0^t (G_s R\, G_s^\star)\,ds. \tag{7.35}
\]

Theorem 5.4  Suppose that $G_t$ is a contraction semigroup in $H$ such that

(1) there are positive constants $C, \gamma$ such that
\[
\|G_t h\| \le C\,\|h\|\,e^{-\gamma t}, \quad \forall\, t > 0,\ h \in H, \text{ and}
\]

(2) $\displaystyle\int_0^\infty \mathrm{Tr}\,(G_s R\, G_s^\star)\,ds < \infty$.

Then there exists a unique invariant measure $\mu$ for the equation (7.33) and $\mu \sim N(0,\Gamma)$, the Gaussian measure in $H$ with mean zero and covariance operator $\Gamma = \int_0^\infty (G_s R\, G_s^\star)\,ds$.

Proof.  Fix $\xi = h \in H$. We know that the solution $u_t^h$ given by (7.34) is a Gaussian process with mean $m_t^h = G_t h$ and covariance $\Gamma_t$ defined by (7.35). Denote the corresponding Gaussian measure by $\mu_t^h$. Let $\Psi_t^h(\lambda)$, $\lambda \in H$, be the characteristic functional of $u_t^h$ given by
\[
\Psi_t^h(\lambda) = E\exp\{i(u_t^h,\lambda)\} = \exp\{i(m_t^h,\lambda) - \tfrac12(\Gamma_t\lambda,\lambda)\}, \quad \lambda\in H. \tag{7.36}
\]
To show $\mu_t^h \Rightarrow \mu$, it suffices to prove that
\[
\lim_{t\to\infty}\Psi_t^h(\lambda) = \Psi(\lambda) = \exp\{-\tfrac12(\Gamma\lambda,\lambda)\},
\]
uniformly in $\lambda$ on any bounded set in $H$. By condition (1), it is easily seen that
\[
|(m_t^h,\lambda)| \le C\,\|h\|\,\|\lambda\|\,e^{-\gamma t} \to 0,
\]
uniformly in $\lambda$ for any $h\in H$, as $t\to\infty$. By condition (2), we see that $\int_t^\infty \mathrm{Tr}\,(G_s R\, G_s^\star)\,ds \to 0$ as $t\to\infty$. Therefore, we have
\[
|(\Gamma\lambda,\lambda) - (\Gamma_t\lambda,\lambda)| = \Big|\Big(\int_t^\infty (G_s R\, G_s^\star)\,ds\ \lambda,\ \lambda\Big)\Big|
\le \|\lambda\|^2\int_t^\infty \mathrm{Tr}\,(G_s R\, G_s^\star)\,ds \to 0
\]
uniformly in $\lambda$ as $t\to\infty$. Now, in view of the above results, we deduce that
\[
|\Psi_t^h(\lambda) - \Psi(\lambda)| \le |\exp\{i(m_t^h,\lambda)\} - 1| + |1 - \exp\{-\tfrac12([\Gamma - \Gamma_t]\lambda,\lambda)\}|
\le |(m_t^h,\lambda)| + \tfrac12|([\Gamma - \Gamma_t]\lambda,\lambda)| \to 0
\]
uniformly on $\lambda$ for each $h \in H$. Hence, the corresponding probability distributions $\mu_t^h \Rightarrow N(0,\Gamma)$ independent of $h$, and the invariant measure is unique by Lemma 5.2.

Remark: If the initial distribution of $\xi$ is $\mu$, then, by Theorem 5.3, the solution $u_t$ is a stationary Gaussian process, known as the Ornstein–Uhlenbeck process.
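In a diagonal case the covariance operator $\Gamma$ of Theorem 5.4 can be computed explicitly, which gives a simple consistency check (a sketch of mine with made-up spectra, not from the text): if $A$ has eigenvalues $-\lambda_k$, so that $G_t$ acts as $e^{-\lambda_k t}$ on the $k$-th mode, and $R = \mathrm{diag}(r_k)$, then $G_s R G_s^\star$ is diagonal with entries $r_k e^{-2\lambda_k s}$ and (7.35) integrates to the mode variances $r_k/(2\lambda_k)$ of the stationary Ornstein–Uhlenbeck process.

```python
import numpy as np

# Diagonal sketch of Theorem 5.4 (hypothetical spectra): compare the
# quadrature value of Gamma = \int_0^inf exp(-2*lam*s) * r ds per mode
# against the closed-form stationary variances r / (2*lam).
lam = np.array([1.0, 2.0, 5.0])        # spectrum of -A (assumed)
r = np.array([0.3, 1.0, 0.7])          # noise intensities (assumed)

s = np.linspace(0.0, 40.0, 400001)     # quadrature grid for (7.35)
ds = s[1] - s[0]
gamma_num = np.array([np.sum(np.exp(-2 * l * s) * ri) * ds
                      for l, ri in zip(lam, r)])
gamma_exact = r / (2 * lam)            # closed-form limit covariance
```

The truncated Riemann sum should agree with $r_k/(2\lambda_k)$ to within quadrature error.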

For $N > 0$, let $K_N = \{v \in V : \|v\|_V \le N\}$ and $A_N = H \setminus K_N$. Recall that the inclusion $V \subset H$ is compact, so that $K_N$ is a compact set in $H$. We will prove the following theorem.

Theorem 5.5  Suppose that the transition probability function $P(h,t,\cdot)$ for the solution process $u_t$ has the Feller property and satisfies the following condition: For some $h \in H$, there exists a sequence $T_n \uparrow \infty$ such that
\[
\lim_{N\to\infty}\frac{1}{T_n}\int_0^{T_n} P(h,t,A_N)\,dt = 0, \tag{7.37}
\]
uniformly in $n$. Then there is an invariant measure $\mu$ on $(H,\mathcal{B}(H))$.

Proof.  For any $B \in \mathcal{B}(H)$, define
\[
\mu_n(B) = \frac{1}{T_n}\int_0^{T_n} P(h,t,B)\,dt. \tag{7.38}
\]
Then $\{\mu_n\}$ is a family of probability measures on $(H,\mathcal{B}(H))$. By conditions (7.37) and (7.38), for any $\epsilon > 0$, there exists a compact set $K_N \subset H$ such that
\[
\mu_n\{H\setminus K_N\} = \mu_n\{A_N\} < \epsilon, \quad \forall\, n \ge 1.
\]
Therefore, by the Prokhorov theorem [8], the family $\{\mu_n\}$ is weakly compact. It follows that there is a subsequence $\{\mu_{n_k}\}$ such that $\mu_{n_k} \Rightarrow \mu$ in $H$.

To show $\mu$ being an invariant measure, let $\Phi \in C_b(H)$. By the Feller property, $P_t\Phi \in C_b(H)$. In view of the weak convergence, we have
\[
\int_H P_t\Phi\,d\mu = \lim_{k\to\infty}\int_H P_t\Phi\,d\mu_{n_k}
= \lim_{k\to\infty}\frac{1}{T_{n_k}}\int_0^{T_{n_k}}\!\!\int_H P_t\Phi(g)\,P(h,s,dg)\,ds
= \lim_{k\to\infty}\frac{1}{T_{n_k}}\int_0^{T_{n_k}} (P_{t+s}\Phi)(h)\,ds,
\]
where use was made of the Fubini theorem and the Markov property. Now, for any fixed $t > 0$ and $h \in H$, we can write
\[
\frac{1}{T_{n_k}}\int_0^{T_{n_k}} (P_{t+s}\Phi)(h)\,ds
= \frac{1}{T_{n_k}}\Big\{\int_0^{T_{n_k}} (P_s\Phi)(h)\,ds + \int_{T_{n_k}}^{T_{n_k}+t} (P_s\Phi)(h)\,ds - \int_0^t (P_s\Phi)(h)\,ds\Big\}.
\]
Since $\|P_t\Phi\| \le \|\Phi\|$, we can deduce from the last two equations that
\[
\int_H P_t\Phi\,d\mu = \lim_{k\to\infty}\frac{1}{T_{n_k}}\int_0^{T_{n_k}} (P_s\Phi)(h)\,ds
= \lim_{k\to\infty}\int_H \Phi\,d\mu_{n_k} = \int_H \Phi\,d\mu,
\]
by the Fubini theorem and the weak convergence. Hence $\mu$ is an invariant measure as claimed.

The following theorem is the main result of this section.



Theorem 5.6  Suppose that the coefficients of equation (7.28), as a special case of equation (6.56), satisfy all the conditions of Theorem 6-7.5. In addition, assume condition (D.3) is strengthened to read
\[
2\langle Av, v\rangle + 2(F(v),v) + \|\sigma(v)\|_R^2 \le \alpha - \beta\|v\|_V^2, \quad \forall\, v\in V, \tag{7.39}
\]
for some constants $\alpha$ and $\beta > 0$, and that the constant $\lambda$ in condition (D.4) is negative, $\lambda < 0$. Then there exists a unique invariant measure for the solution process.

Proof.  We shall apply Theorem 5.5 to show the existence of an invariant measure. To verify the Feller property, let $u_t^g$ and $u_t^h$ be solutions of (7.28) with initial conditions $\xi = g$ and $h$ in $H$, respectively. Then it follows from the energy equation and the monotonicity condition (D.4) that the difference $u_t^g - u_t^h$ satisfies
\[
E\|u_t^g - u_t^h\|^2 = \|g-h\|^2 + E\int_0^t \{2\langle A(u_s^g - u_s^h),\, u_s^g - u_s^h\rangle
+ 2(F(u_s^g) - F(u_s^h),\, u_s^g - u_s^h) + \|\sigma(u_s^g) - \sigma(u_s^h)\|_R^2\}\,ds
\le \|g-h\|^2 - \gamma\int_0^t E\|u_s^g - u_s^h\|^2\,ds,
\]
where $\gamma = -\lambda$ and $\gamma > 0$. It follows from the above inequality that
\[
E\|u_t^g - u_t^h\|^2 \le e^{-\gamma t}\,\|g-h\|^2, \quad \text{for } t \ge 0. \tag{7.40}
\]
Since any $\Phi \in C_b(H)$ can be approximated pointwise by a sequence of functions in $C_b^1(H)$, to verify the Feller property, it suffices to take a bounded Lipschitz-continuous function $\Phi$. Then, for $s,t \in [0,T]$ and $g,h \in H$,
\[
|(P_t\Phi)(g) - (P_s\Phi)(h)| \le |[(P_t - P_s)\Phi](g)| + |(P_t\Phi)(g) - (P_t\Phi)(h)|
\le E\,|\Phi(u_t^g) - \Phi(u_s^g)| + E\,|\Phi(u_t^g) - \Phi(u_t^h)|
\le k\{(E\|u_t^g - u_s^g\|^2)^{1/2} + (E\|u_t^g - u_t^h\|^2)^{1/2}\},
\]
where $k$ is the Lipschitz constant. Making use of the fact that $u_t^g$ is mean-square continuous and (7.40), the Feller property follows.
Now we will show that condition (7.37) in Theorem 5.5 is satisfied. By the energy equation and condition (7.39), we can obtain
\[
E\|u_t^h\|^2 = \|h\|^2 + E\int_0^t \{2\langle Au_s^h, u_s^h\rangle + 2(F(u_s^h), u_s^h) + \|\sigma(u_s^h)\|_R^2\}\,ds
\le \|h\|^2 + \alpha t - \beta\int_0^t E\|u_s^h\|_V^2\,ds,
\]
which implies
\[
\int_0^t E\|u_s^h\|_V^2\,ds \le \frac{1}{\beta}\,(\alpha t + \|h\|^2). \tag{7.41}
\]
Since $u_t^h$ is a $V$-valued process, its probability measure $\mu_t^h$ has support in $V$ so that $P(h,t,A_N) = P\{\|u_t^h\|_V > N\}$. By invoking the Chebyshev inequality and (7.41), we deduce that
\[
\frac{1}{T}\int_0^T P(h,t,A_N)\,dt = \frac{1}{T}\int_0^T P\{\|u_t^h\|_V > N\}\,dt
\le \frac{1}{N^2 T}\int_0^T E\|u_t^h\|_V^2\,dt \le \frac{1}{\beta N^2 T}\,(\alpha T + \|h\|^2),
\]
which converges to zero as $N \to \infty$ uniformly in $T \ge 1$. Therefore condition (7.37) in Theorem 5.5 is met and $\mu_t^h \Rightarrow \mu$ for any $h \in H$. Moreover, by Lemma 5.2, the invariant measure is unique.

Remarks: The results in Theorems 5.5 and 5.6 were adopted from the paper [17]. In fact, it can be shown that the invariant measure $\mu$ has support in $V$. This and more general existence and uniqueness results can be found in the aforementioned paper and the book [22].

(Example 5.1)  Consider the reaction–diffusion equation in $D \subset \mathbf{R}^d$:
\[
\frac{\partial u}{\partial t} = \sum_{j,k=1}^d \frac{\partial}{\partial x_j}\Big[a_{jk}(x)\frac{\partial u}{\partial x_k}\Big] - c(x)u + f(u,x)
+ \sigma(x)\dot W(x,t), \quad x\in D,\ t\in(0,T), \qquad u|_{\partial D} = 0, \quad u(x,0) = h(x), \tag{7.42}
\]
where the coefficients $a_{jk}(x)$, $c(x)$, $\sigma(x)$, and $f(u,x)$ are bounded continuous on $D$. Suppose the following:

(1) There exist positive constants $\alpha_1, \alpha_2, \alpha_3$ such that
\[
\alpha_1|\xi|^2 \le \sum_{j,k=1}^d a_{jk}(x)\,\xi_j\xi_k \le \alpha_2|\xi|^2, \quad \forall\, \xi\in\mathbf{R}^d,
\]
and $c(x) \ge \alpha_3$, for all $x \in D$.

(2) There exist $C_1, C_2 > 0$ such that
\[
|f(r,x)| \le C_1(1 + |r|),
\]
\[
|f(r,x) - f(s,x)| \le C_2|r - s|, \quad \forall\, x\in D,\ r,s\in\mathbf{R},
\]
and $r f(r,x) \le 0$, $\forall\, x\in D$.

(3) The Wiener random field $W(x,t)$ has a bounded correlation function $r(x,y)$ such that $\int_D r(x,x)\,\sigma^2(x)\,dx = \sigma_0 < \infty$.

As before, let $H = L^2(D)$ and $V = H_0^1$, here with the norm $\|v\|_1^2 = \|v\|^2 + \|\nabla v\|^2$. Set $Au = \sum_{j,k=1}^d \frac{\partial}{\partial x_j}[a_{jk}\frac{\partial u}{\partial x_k}] - cu$, $F(u) = f(u,\cdot)$ and $\sigma = \sigma(\cdot)$. Then, under the above assumptions, it is easy to check that all conditions for Theorem 5.6 are satisfied. In particular, to verify condition (7.39), we make use of the assumptions (1)–(3) to obtain
\[
2\langle Av, v\rangle + 2(F(v),v) + \|\sigma(v)\|_R^2
= -2\int_D \Big\{\sum_{j,k=1}^d a_{jk}(x)\frac{\partial v(x)}{\partial x_j}\frac{\partial v(x)}{\partial x_k} + c(x)v^2(x)\Big\}\,dx
+ 2\int_D v(x)f(v,x)\,dx + \int_D r(x,x)\,\sigma^2(x)\,dx
\]
\[
\le -2\int_D \{\alpha_1|\nabla v(x)|^2 + \alpha_3|v(x)|^2\}\,dx + \sigma_0
\le \sigma_0 - \beta\|v\|_1^2,
\]
with $\beta = 2(\alpha_1\wedge\alpha_3)$. Therefore, by Theorem 5.6, there exists a unique invariant measure for this problem.

(Example 5.2)  Referring to (Example 7.2) in Chapter 6, consider the stochastic reaction–diffusion equation in a domain $D \subset \mathbf{R}^3$ with a cubic nonlinearity:
\[
\frac{\partial u}{\partial t} = (\kappa\Delta - \alpha)u - \beta u^3 + \sigma\sum_{k=1}^3 \frac{\partial u}{\partial x_k}\,\dot W_k(x,t),
\qquad u|_{\partial D} = 0, \quad u(x,0) = h(x), \tag{7.43}
\]
where $\kappa, \alpha, \beta, \sigma$ are given positive constants and $W_k(x,t)$, $k = 1,\dots,3$, are independent Wiener random fields with continuous covariance functions $r_k(x,y)$ such that
\[
\sup_{x,y\in D} |r_k(x,y)| \le r_0, \quad k = 1,2,3. \tag{7.44}
\]
Let the spaces $H$ and $V$ be defined as in (Example 5.1). It was shown that equation (7.43) has a unique strong solution $u \in L^2(\Omega; C([0,T];H)) \cap L^2(\Omega_T; H_0^1)$. Similar to (Example 4.1), we can obtain the following estimate:
\[
2\langle Av, v\rangle + 2(F(v),v) + \|\sigma(v)\|_R^2 \le -(2\kappa - \sigma^2 r_0)\|\nabla v\|^2 - 2\alpha\|v\|^2 - 2\beta|v|_4^4,
\]
which implies condition (7.39) of Theorem 5.6 provided that $\sigma^2 r_0 < 2\kappa$. In this case equation (7.43) has a unique invariant measure.

7.6 Small Random Perturbation Problems

Consider the equation (7.28) with a small noise:
\[
du_t^\epsilon = Au_t^\epsilon\,dt + F(u_t^\epsilon)\,dt + \epsilon\,\sigma(u_t^\epsilon)\,dW_t, \quad t > 0, \qquad u_0^\epsilon = h, \tag{7.45}
\]
where $u_t^\epsilon$ shows the dependence of the solution on a small parameter $\epsilon \in (0,1)$, and $h \in H$. Equation (7.45) is regarded as a random perturbation of the deterministic equation:
\[
du_t = Au_t\,dt + F(u_t)\,dt, \quad t > 0, \qquad u_0 = h. \tag{7.46}
\]
We wish to show that, under suitable conditions, the seemingly obvious fact holds: $u_t^\epsilon \to u_t$ on $[0,T]$ in some probabilistic sense. To this end, we recall conditions B and conditions D in Theorem 6-7.5 in Section 6.7, which are assumed to hold for the coefficients of equation (7.45).

Theorem 6.1  Assume the conditions for Theorem 6-7.5 hold and, in addition, there is $b > 0$ such that
\[
\|\sigma(v)\|_R^2 \le b\,(1 + \|v\|_V^2), \quad \forall\, v\in V. \tag{7.47}
\]
Then the perturbed solution $u_t^\epsilon$ of equation (7.45) converges uniformly on $[0,T]$ to the solution $u_t$ of equation (7.46) in mean-square. In fact, there exists a constant $C_T > 0$ such that
\[
E\sup_{0\le t\le T}\|u_t^\epsilon - u_t\|^2 \le \epsilon^2\,C_T\,(1 + \|h\|^2), \quad \forall\,\epsilon\in(0,1). \tag{7.48}
\]

Proof.  By invoking Theorem 6-7.5, equation (7.45) has a unique strong solution satisfying the energy equation
\[
\|u_t^\epsilon\|^2 = \|h\|^2 + 2\int_0^t \langle Au_s^\epsilon, u_s^\epsilon\rangle\,ds + 2\int_0^t (F(u_s^\epsilon), u_s^\epsilon)\,ds
+ \epsilon^2\int_0^t \|\sigma(u_s^\epsilon)\|_R^2\,ds + 2\epsilon\int_0^t (u_s^\epsilon, \sigma(u_s^\epsilon)\,dW_s). \tag{7.49}
\]
Making use of similar estimates as in the proof of the existence theorem, we can deduce from (7.49) that there is $C_1(T) > 0$ such that
\[
E\int_0^T \|u_t^\epsilon\|_V^2\,dt \le C_1(T), \tag{7.50}
\]
where $C_1(T)$ is independent of $\epsilon$. Let $v_t^\epsilon = u_t^\epsilon - u_t$. Then $v_t^\epsilon$ satisfies
\[
dv_t^\epsilon = Av_t^\epsilon\,dt + [F(u_t^\epsilon) - F(u_t)]\,dt + \epsilon\,\sigma(u_t^\epsilon)\,dW_t, \quad t \ge 0, \qquad v_0^\epsilon = 0. \tag{7.51}
\]
By applying the Itô formula, we get
\[
\|v_t^\epsilon\|^2 = 2\int_0^t \langle Av_s^\epsilon, v_s^\epsilon\rangle\,ds + 2\int_0^t (F(u_s^\epsilon) - F(u_s), v_s^\epsilon)\,ds
+ \epsilon^2\int_0^t \|\sigma(u_s^\epsilon)\|_R^2\,ds + 2\epsilon\int_0^t (v_s^\epsilon, \sigma(u_s^\epsilon)\,dW_s). \tag{7.52}
\]
By making use of condition (D.4) (assuming $\lambda \ge 0$), (7.47), and a submartingale inequality, the above yields
\[
E\sup_{0\le t\le T}\|v_t^\epsilon\|^2 \le 2\lambda\,E\int_0^T \|v_t^\epsilon\|^2\,dt + \epsilon^2 E\int_0^T \|\sigma(u_t^\epsilon)\|_R^2\,dt
+ 6\epsilon\Big\{E\int_0^T (\sigma(u_t^\epsilon)R\sigma^\star(u_t^\epsilon)v_t^\epsilon, v_t^\epsilon)\,dt\Big\}^{1/2}
\]
\[
\le 2\lambda\,E\int_0^T \|v_t^\epsilon\|^2\,dt + \epsilon^2 b\,(1+18)\,E\int_0^T (1 + \|u_t^\epsilon\|_V^2)\,dt
+ \frac12\,E\sup_{0\le t\le T}\|v_t^\epsilon\|^2,
\]
or, in view of (7.50),
\[
E\sup_{0\le t\le T}\|v_t^\epsilon\|^2 \le 4\lambda\int_0^T E\sup_{0\le s\le t}\|v_s^\epsilon\|^2\,dt + 4\epsilon^2 b\,(18+1)\,(C_1(T)+T).
\]
It follows from Gronwall's inequality that there exists $C_T > 0$ such that, for any $\epsilon \in (0,1)$,
\[
E\sup_{0\le t\le T}\|v_t^\epsilon\|^2 \le \epsilon^2\,C_T,
\]
with $C_T = 80\,b\,(C_1 + T)\,e^{4\lambda T}$.

Remark: In comparison with (7.48), if we take the expectation of (7.49), we can obtain the estimate:
\[
\sup_{0\le t\le T} E\|u_t^\epsilon - u_t\|^2 \le C_T\,\epsilon^2,
\]
for some $C_T > 0$.
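The order-$\epsilon^2$ rate in the remark is exact in the linear scalar case, which makes a convenient sanity check (an illustration of mine with assumed parameters, not from the text): for $du_t^\epsilon = -a\,u_t^\epsilon\,dt + \epsilon\,dW_t$, the gap $v_t = u_t^\epsilon - u_t$ is Gaussian with variance $\epsilon^2(1 - e^{-2at})/(2a)$, so halving $\epsilon$ divides the mean-square error by four.

```python
import math

# Scalar sketch of the epsilon^2 rate (hypothetical parameters):
# for du_eps = -a*u_eps dt + eps dW with u_0 = h, the difference
# v = u_eps - u solves dv = -a*v dt + eps dW, hence
#   E|v_t|^2 = eps^2 * (1 - exp(-2 a t)) / (2 a) <= C_T * eps^2.
a, T = 1.0, 1.0

def mean_square_gap(eps):
    # exact sup_{t <= T} E|u_eps_t - u_t|^2 (attained at t = T)
    return eps**2 * (1.0 - math.exp(-2 * a * T)) / (2 * a)

gaps = [mean_square_gap(e) for e in (0.1, 0.05, 0.025)]
ratios = [gaps[i] / gaps[i + 1] for i in range(2)]  # should be ~4
```

Each halving of $\epsilon$ reduces the gap by exactly a factor of four, as the quadratic scaling demands.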

Let $\mu^\epsilon$ denote the probability measure for $u^\epsilon$ in $C([0,T];H)$. By Theorem 6.1, since $u^\epsilon$ converges to $u$ in mean-square, $\mu^\epsilon$ converges weakly to $\mu^0 = \delta_u$, the Dirac measure concentrated at $u$. We are interested in the rate of convergence, such as an estimate for the probability
\[
\mu^\epsilon(B_\delta^c) = P\{\sup_{0\le t\le T}\|u_t^\epsilon - u_t\| > \delta\}, \tag{7.53}
\]
for small $\epsilon, \delta > 0$, where $B_\delta^c = H \setminus B_\delta$ and
\[
B_\delta = \{v \in C([0,T];H) : \sup_{0\le t\le T}\|v_t - u_t\| \le \delta\}.
\]
Before taking up this question, we need a useful exponential estimate which will be presented as a lemma.

Lemma 6.2  Let $M_t$ be a continuous $H$-valued martingale in $[0,T]$ with the local characteristic operator $Q_t$ such that
\[
\sup_{0\le t\le T} \mathrm{Tr}\,Q_t \le N < \infty, \quad \text{a.s.}, \tag{7.54}
\]
for some $N > 0$. Then the following estimate holds:
\[
P\{\sup_{0\le s\le t}\|M_s\| \ge r\} \le 3\exp\Big\{-\frac{r^2}{4Nt}\Big\}, \tag{7.55}
\]
for any $t \in (0,T]$ and $r > 0$.

Proof.  Introduce a functional $\varphi$ on $H$ as follows:
\[
\varphi(v) = (1 + \alpha\|v\|^2)^{1/2}, \quad v \in H, \tag{7.56}
\]
which depends on a positive parameter $\alpha$. Then the first two derivatives of $\varphi$ are given by $\varphi'(v) = \alpha\varphi^{-1}(v)\,v$ and $\varphi''(v) = \alpha\varphi^{-1}(v)\,I - \alpha^2\varphi^{-3}(v)\,(v\otimes v)$, where $I$ is the identity operator on $H$ and $\otimes$ denotes the tensor product. Apply the Itô formula to $\varphi(M_t)$ to get
\[
\varphi(M_t) = 1 + \alpha\int_0^t \varphi^{-1}(M_s)\,(M_s, dM_s)
+ \frac12\int_0^t \{\alpha\varphi^{-1}(M_s)\,\mathrm{Tr}\,Q_s - \alpha^2\varphi^{-3}(M_s)\,(Q_sM_s, M_s)\}\,ds
\]
\[
\le 1 + \zeta_t + \frac12\int_0^t \{\alpha\varphi^{-1}(M_s)\,\mathrm{Tr}\,Q_s
+ \alpha^2[\varphi^{-2}(M_s) - \varphi^{-3}(M_s)]\,(Q_sM_s, M_s)\}\,ds, \tag{7.57}
\]
where we set
\[
\zeta_t = \alpha\int_0^t \varphi^{-1}(M_s)\,(M_s, dM_s)
- \frac{\alpha^2}{2}\int_0^t \varphi^{-2}(M_s)\,(Q_sM_s, M_s)\,ds. \tag{7.58}
\]
It is easy to check that $\varphi^{-1}(v) \le 1$ and
\[
\alpha^2[\varphi^{-2}(v) - \varphi^{-3}(v)]\,(Q_s v, v) \le \alpha\,\mathrm{Tr}\,Q_s,
\]
for any $s \ge 0$, $\alpha > 0$ and $v \in V$, so that (7.57) yields
\[
\varphi(M_t) \le 1 + \zeta_t + \alpha\int_0^t \mathrm{Tr}\,Q_s\,ds \le (1 + \alpha Nt) + \zeta_t, \tag{7.59}
\]
for $0 < t \le T$ by condition (7.54). Define
\[
Z_t = \exp\{\zeta_t\}, \quad t \in [0,T], \tag{7.60}
\]
which is known to be an exponential martingale [86] so that
\[
EZ_t = EZ_0 = 1. \tag{7.61}
\]
Now, by (7.56) and (7.59),
\[
P\{\sup_{0\le s\le t}\|M_s\| \ge r\} = P\{\sup_{0\le s\le t}\varphi(M_s) \ge (1 + \alpha r^2)^{1/2}\}
\le P\{\sup_{0\le s\le t} Z_s \ge \exp[(1 + \alpha r^2)^{1/2} - (1 + \alpha Nt)]\}. \tag{7.62}
\]
By applying the Chebyshev inequality and Doob's martingale inequality, the above gives rise to
\[
P\{\sup_{0\le s\le t}\|M_s\| \ge r\} \le \exp[-(1 + \alpha r^2)^{1/2} + (1 + \alpha Nt)],
\]
which holds for any $\alpha > 0$. In particular, by choosing
\[
\alpha = \Big(\frac{r}{2Nt}\Big)^2 - \frac{1}{r^2} > 0,
\]
the inequality (7.62) yields the estimate (7.55) when $r^2 > 2Nt$. Since, for $r^2 \le 2Nt$, (7.55) is trivially true, the desired estimate holds for any $r > 0$.
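For a one-dimensional Brownian martingale the bound (7.55) can be compared directly against the reflection principle (a sanity check of mine, not from the text): with $M_t = W_t$ one has $\mathrm{Tr}\,Q_t = 1$, so $N = 1$, while $P\{\sup_{s\le t}|W_s| \ge r\} \le 4P\{W_t \ge r\} = 2\,\mathrm{erfc}(r/\sqrt{2t})$, and the script below verifies that this sharper value never exceeds $3e^{-r^2/4t}$ on a grid of $(t,r)$ values.

```python
import math

# Grid check of Lemma 6.2 for M_t = W_t (N = 1): the reflection-
# principle bound 2*erfc(r / sqrt(2t)) dominates P{sup |W_s| >= r},
# so it suffices that it stays below the lemma's 3*exp(-r^2 / (4t)).
ok = True
for t in (0.25, 0.5, 1.0, 2.0, 5.0):
    for r in (0.1, 0.5, 1.0, 2.0, 4.0, 8.0):
        exact_bound = 2.0 * math.erfc(r / math.sqrt(2.0 * t))
        lemma_bound = 3.0 * math.exp(-r * r / (4.0 * t))
        ok = ok and (exact_bound <= lemma_bound + 1e-12)
```

In fact $2e^{-z^2/2} \le 3e^{-z^2/4}$ for all $z$, so the comparison holds on any grid; the check simply makes that visible.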

Remarks: The exponential inequality was first obtained in [15], where $M_t = \int_0^t \Phi_s\,dW_s$ is assumed to be a stochastic integral in $H$. The extension to a continuous martingale is obvious. Under some suitable conditions, we shall apply this lemma to the randomly perturbed stochastic evolution equation to obtain results for the exponential rate of convergence and the a.s. convergence.

Theorem 6.3  Assume the conditions for Theorem 6-7.5 hold, provided that $\lambda \le 0$ in condition (D.4), so that
\[
\langle A(u-v),\, u-v\rangle + (F(u) - F(v),\, u-v) \le 0, \quad \forall\, u,v\in V, \tag{7.63}
\]
and there is $\sigma_0 > 0$ such that
\[
\|\sigma(v)\|_R^2 \le \sigma_0^2, \quad \forall\, v\in V. \tag{7.64}
\]
Then the solution measure $\mu^\epsilon$ for (7.45) converges weakly to $\mu^0 = \delta_u$ at an exponential rate. More precisely, for any $r > 0$ and $t \in (0,T]$, we have
\[
P\{\sup_{0\le s\le t}\|u_s^\epsilon - u_s\| \ge r\} \le 3\exp\Big\{-\frac{r^2}{4\epsilon^2\sigma_0^2\, t}\Big\}. \tag{7.65}
\]
Furthermore, the perturbed solution $u_t^\epsilon$ converges a.s. to $u_t$ uniformly in $[0,T]$ as $\epsilon \to 0$.

Proof.  The proof is quite similar to that of Theorem 6.1 and Lemma 6.2. Let $v_t^\epsilon = u_t^\epsilon - u_t$ and let $\varphi$ be given by (7.56). Again, by applying the Itô formula, we obtain
\[
\varphi(v_t^\epsilon) = 1 + \alpha\int_0^t \varphi^{-1}(v_s^\epsilon)\,[\langle Av_s^\epsilon, v_s^\epsilon\rangle
+ (F(u_s^\epsilon) - F(u_s),\, v_s^\epsilon)]\,ds
+ \epsilon\alpha\int_0^t \varphi^{-1}(v_s^\epsilon)\,(v_s^\epsilon, \sigma(u_s^\epsilon)\,dW_s)
\]
\[
+ \frac{\epsilon^2}{2}\int_0^t \{\alpha\varphi^{-1}(v_s^\epsilon)\,\|\sigma(u_s^\epsilon)\|_R^2
- \alpha^2\varphi^{-3}(v_s^\epsilon)\,(\sigma(u_s^\epsilon)R\sigma^\star(u_s^\epsilon)v_s^\epsilon, v_s^\epsilon)\}\,ds. \tag{7.66}
\]
By (7.63) and (7.64), we can deduce from (7.66) that
\[
\varphi(v_t^\epsilon) \le 1 + \zeta_t^\epsilon + \frac{\epsilon^2}{2}\int_0^t \{\alpha\varphi^{-1}(v_s^\epsilon)\,\|\sigma(u_s^\epsilon)\|_R^2
+ \alpha^2[\varphi^{-2}(v_s^\epsilon) - \varphi^{-3}(v_s^\epsilon)]\,(\sigma(u_s^\epsilon)R\sigma^\star(u_s^\epsilon)v_s^\epsilon, v_s^\epsilon)\}\,ds
\le (1 + \alpha\epsilon^2\sigma_0^2\, T) + \zeta_t^\epsilon, \tag{7.67}
\]
where
\[
\zeta_t^\epsilon = \epsilon\alpha\int_0^t \varphi^{-1}(v_s^\epsilon)\,(v_s^\epsilon, \sigma(u_s^\epsilon)\,dW_s)
- \frac{\epsilon^2\alpha^2}{2}\int_0^t \varphi^{-2}(v_s^\epsilon)\,(\sigma(u_s^\epsilon)R\sigma^\star(u_s^\epsilon)v_s^\epsilon, v_s^\epsilon)\,ds. \tag{7.68}
\]
Similar to the estimate (7.62), we can get
\[
P\{\sup_{0\le s\le t}\|u_s^\epsilon - u_s\| \ge r\} \le \exp[-(1 + \alpha r^2)^{1/2} + (1 + \alpha\epsilon^2\sigma_0^2\, t)],
\]
which yields the desired result (7.65) after setting $\alpha = \big(\frac{r}{2\epsilon^2\sigma_0^2 t}\big)^2 - \frac{1}{r^2}$.

To show the a.s. convergence, we can choose $\epsilon = \epsilon_n = 1/n$ and $r = r_n = n^{-1/2}$ in (7.65). It follows from the Borel–Cantelli lemma that $u_t^{\epsilon_n} \to u_t$ a.s. in $H$ uniformly over $[0,T]$.

Remarks: The exponential estimates presented in this section are known as large deviations estimates. A precise estimate of the rate of convergence is given by the rate function in the large deviations theory, which will be discussed briefly in the next section.

7.7 Large Deviations Problems

For ease of discussion, we will only consider the special case of equation (7.45) when the noise is additive. Then it becomes
\[
du_t^\epsilon = Au_t^\epsilon\,dt + F(u_t^\epsilon)\,dt + \epsilon\,dW_t, \quad t \ge 0, \qquad u_0^\epsilon = h, \tag{7.69}
\]
where $W_t$ is an $R$-Wiener process in $H$.

Let $\mu = \mathcal{L}\{W\}$ denote the $R$-Wiener measure in $C([0,T];H)$ and let $\mu^\epsilon = \mathcal{L}\{W^\epsilon\}$ with $W^\epsilon = \epsilon W$. Given $\Phi \in C_b([0,T]\times H)$ and $B$ being a Borel set in $C([0,T];H)$, we are interested in an asymptotic evaluation of the integral
\[
J^\epsilon(B) = \int_B \exp\{-\Phi(v)/\epsilon^2\}\,\mu^\epsilon(dv) \tag{7.70}
\]
as $\epsilon \to 0$. To gain some intuitive idea, suppose that $W^\epsilon$ is replaced by a sequence $\{\xi_k\}$ of independent, identically distributed Gaussian random variables with mean zero and variance $\epsilon^2$. Then the integral (7.70) would have taken the form
\[
J_n^\epsilon(B) = C_n(\epsilon)\int_B \exp\Big\{-\frac{I(x)}{\epsilon^2}\Big\}\,dx, \tag{7.71}
\]
where $C_n(\epsilon) = (1/2\pi\epsilon^2)^{n/2}$, $x = (x_1,\dots,x_n)$, $B$ is a Borel set in $\mathbf{R}^n$ and
\[
I(x) = \Phi(x) + \frac12|x|^2.
\]
Then, since the $L^p$-norm tends to the $L^\infty$-norm as $p \to \infty$, we obtain
\[
\lim_{\epsilon\to 0}\epsilon^2\log J_n^\epsilon(B) = \lim_{p\to\infty}\log\Big\{\int_B \{\exp[-I(x)]\}^p\,dx\Big\}^{1/p} = -\inf_{x\in B} I(x),
\]
or $\Lambda(B) = \inf_{x\in B} I(x)$ is the exponential rate of convergence for the integral (7.71). Therefore it is reasonable to call $I$ a rate function. Now let $W_t$ be a standard Brownian motion in one dimension, and let $\{t_0 = 0, t_1, \dots, t_n = T\}$ be a partition of $[0,T]$. Then the joint probability density for $(W_{t_1},\dots,W_{t_n})$ is given by
\[
p(x(t_1),\dots,x(t_n)) = C_n\exp\Big\{-\sum_{k=1}^n \frac{|x(t_k) - x(t_{k-1})|^2}{2\epsilon^2(t_k - t_{k-1})}\Big\},
\]
where $C_n = \prod_{j=1}^n [2\pi\epsilon^2(t_j - t_{j-1})]^{-1/2}$. In spite of the fact that a Brownian path is nowhere differentiable, heuristically, as the mesh size of the partition goes to zero, the exponent of the density tends to $-(1/2\epsilon^2)\int_0^T |\dot x(t)|^2\,dt$, with $\dot x = \frac{d}{dt}x$. Therefore, for a one-dimensional Brownian motion, it seems plausible to guess that the rate function is given by
\[
I(x) = \Phi(x) + \frac12\int_0^T |\dot x(t)|^2\,dt,
\]
if the above integral makes sense. This kind of formal argument can be generalized to the case of an $R$-Wiener process in $H$. However, to make it precise, one needs the large deviations theory [26].
Let $I$ be an extended real-valued functional on $C([0,T];H)$. Then $I$ is said to be a rate function if the following are true:

(1) $I : C([0,T];H) \to [0,\infty]$ is lower semicontinuous.

(2) For any $r > 0$, the set $\{u \in C([0,T];H) : I(u) \le r\}$ is compact.

A family of probability measures $\{P^\epsilon\}$ on $C([0,T];H)$ is said to obey the large deviations principle if there exists a rate function $I$ such that

(1) for each closed set $F$,
\[
\limsup_{\epsilon\to 0}\epsilon^2\log P^\epsilon(F) \le -\inf_{v\in F} I(v), \quad \text{and}
\]

(2) for any open set $G$,
\[
\liminf_{\epsilon\to 0}\epsilon^2\log P^\epsilon(G) \ge -\inf_{v\in G} I(v).
\]

After the digression, let us return to the Wiener measures $\{\mu^\epsilon\}$. We denote by $R(H)$ the range of $R^{1/2}$, which is a Hilbert–Schmidt operator. Introduce the rate function as follows:
\[
I(v) = \frac12\int_0^T \|R^{-1/2}\dot v_t\|^2\,dt, \tag{7.72}
\]
if $v_0 = 0$ and $\dot v_t = \frac{d}{dt}v_t \in R(H)$ is such that $R^{-1/2}\dot v_t \in L^2((0,T);H)$, and set $I(v) = \infty$, otherwise. Then the following lemma holds [21].

Lemma 7.1  The family $\{\mu^\epsilon\}$ of Wiener measures on $C([0,T];H)$ obeys the large deviations principle with the rate function given by (7.72).

Now consider the family $\{\nu^\epsilon\}$ of solution measures for the equation (7.69). A useful tool in dealing with such a problem is Varadhan's contraction principle [91], which says the following:

Lemma 7.2  Let $S$ map $C([0,T];H)$ into itself continuously and let $Q^\epsilon = P^\epsilon\circ S^{-1}$. If $\{P^\epsilon\}$ obeys the large deviations principle with rate function $I$, so does the family $\{Q^\epsilon\}$ with rate function $J(v)$ given by
\[
J(v) = \inf\{I(u) : S(u) = v\}. \tag{7.73}
\]
Instead of (7.69), consider the deterministic equation:
\[
u_t = h + \int_0^t [Au_s + F(u_s)]\,ds + v_t, \tag{7.74}
\]
where $h \in H$ and $v \in C([0,T];H)$ with $\frac{d}{dt}v \in L^2((0,T);V)$. Then, by the conditions on $A$ and $F$, it is known that the integral equation (7.74) has a unique solution $u \in C([0,T];H)$. Moreover, given $h \in H$, the solution $u$ depends continuously on $v$. We write
\[
u_t = S_t(v),
\]
so that the solution operator $S : C([0,T];H) \to C([0,T];H)$ is continuous. Also, from (7.74), we have
\[
v_t = S_t^{-1}(u) = u_t - h - \int_0^t [Au_s + F(u_s)]\,ds. \tag{7.75}
\]
In view of Lemma 7.1 and (7.72)–(7.75), we can apply Lemma 7.2 to conclude the following.

Theorem 7.3  The family $\{\nu^\epsilon\}$ of solution measures on $C([0,T];H)$ for the equation (7.69) obeys the large deviations principle with the rate function given by
\[
J(u) = \frac12\int_0^T \|R^{-1/2}[\,\dot u_t - Au_t - F(u_t)\,]\|^2\,dt, \tag{7.76}
\]
if $u \in C([0,T];H)$ with $u_0 = h$ and $[\dot u_t - Au_t - F(u_t)] \in R(H)$ is such that the above integral converges, and set $J(u) = \infty$, otherwise.

As a simple application, consider the convergence rate $\Lambda(B_\delta^c)$ for $\mu^\epsilon(B_\delta^c)$ given by (7.53). The exact rate can be determined by the equation (7.76) as follows:
\[
\Lambda(B_\delta^c) = \frac12\inf_{u\in B_\delta^c}\int_0^T \|R^{-1/2}[\dot u_t - Au_t - F(u_t)]\|^2\,dt.
\]
Though the exact rate is difficult to compute, the above constrained minimization problem may be solved approximately.

Remarks: Theorem 7.3 treats one of the large deviations problems for stochastic partial differential equations perturbed by a Gaussian noise. When $\sigma(v)$ is nonconstant, we have a multiplicative noise. Even though the basic idea is similar, technically, such problems are more complicated (see [11], [85] and [9], among others). The problem of exit time distributions is treated in [21]. A thorough treatment of the general large deviations theory is given in the books [26], [32], and [25].
8
Further Applications

8.1 Introduction
In the previous two chapters, suggested by the parabolic and hyperbolic Itô equations, we studied the existence, uniqueness, and asymptotic behavior of solutions to some stochastic evolution equations in a Hilbert space. Even though we have given a number of examples to demonstrate some applications of the general results obtained therein, they are relatively simple. In this chapter we will present several more examples to show a wider range of applicability of the general results. A major source of applications for stochastic partial differential equations comes from statistical physics, in particular, the statistical theory of turbulence in fluid dynamics and the related problems. So most of the examples to be presented are given in this area. Recently we have seen some interesting applications of stochastic PDEs to mathematical finance and population biology. Two examples will be given in these areas. For other applications in biology and systems science, see, e.g., the references [7], [24], [30], and [82].
In this chapter we shall present seven applied problems. In Section 8.2 the stochastic Burgers equation, as a simple model in turbulence, is studied by means of a Hopf transformation. It is shown that the random solution can be obtained explicitly by the stochastic Feynman–Kac formula. Then the mild solution of the Schrödinger equation with a random potential is analyzed in Section 8.3. Its connection to the problem of wave propagation in random media is elucidated. The model problem given in Section 8.4 is concerned with the vibration of a nonlinear elastic beam excited by turbulent wind. It will be shown that the existence of a unique strong solution with an $H^2$-regularity holds in any finite time interval. In the following section we consider the linear stability of the Cahn–Hilliard equation with damping under a random perturbation. This equation arises from a dynamical phase transition problem. We obtain the conditions on the physical parameters so that the equilibrium solution is a.s. exponentially stable. Section 8.6 is concerned with the model equations for turbulence, the randomly forced Navier–Stokes equations. As in the deterministic case, the existence and uniqueness questions in three dimensions remain open. We shall sketch a proof of the existence theorem in two dimensions and show that there exists an invariant measure for this problem. In Section 8.7, we consider a stochastic reaction–diffusion equation arising from a spatial population growth model with a nonlinear random growth rate. Under some conditions, it will be shown that the equation has a positive global solution with a unique invariant measure. In the last section we shall present a nonlinear stochastic transport equation, known as the HJMM equation, describing the change of interest rate in a bond market. The existence of a solution to this equation will be discussed briefly.

8.2 Stochastic Burgers and Related Equations

The Burgers equation was first introduced to mimic the Navier–Stokes equations in hydrodynamics. As a simple model of turbulence, a randomly forced Burgers equation in one dimension was proposed [34]. It can be related to the following nonlinear parabolic Itô equation:
\[
\frac{\partial v}{\partial t} = \nu\Delta v + \frac12|\nabla v|^2 + \dot V(x,t), \quad x\in\mathbf{R}^d,\ t\in(0,T), \qquad v(x,0) = h(x), \tag{8.1}
\]
where $\dot V(x,t) = b(x,t) + \sigma(x,t)\,\dot W(x,t)$, or
\[
V(x,t) = \int_0^t b(x,s)\,ds + \int_0^t \sigma(x,s)\,W(x,ds) \tag{8.2}
\]
is a $C^m$-semimartingale. For $d = 2$, this equation was used to model the dynamic growth of a liquid interface under random perturbations [34]. In (8.1), $v(x,t)$ denotes the surface height with respect to a reference plane, $\nu$ the surface tension, $\nabla v$ the gradient of $v$, and $h(x)$ is the initial height. Assume that the solution of (8.1) is smooth. Let $u = -\nabla v$. It follows from (8.1) that $u$ satisfies
\[
\frac{\partial u}{\partial t} = \nu\Delta u - (u\cdot\nabla)u - \nabla\dot V(x,t), \quad x\in\mathbf{R}^d,\ t\in(0,T), \qquad u(x,0) = -\nabla h(x), \tag{8.3}
\]
which is known as a $d$-dimensional Burgers equation perturbed by a conservative random force for the velocity field $u(x,t)$ in $\mathbf{R}^d$. This equation was analyzed as a stochastic evolution equation in a Hilbert space [22]. Here we shall consider its solution as a continuous random field and adopt the method of probabilistic representation discussed in Chapter 3. To this end, we first connect the equation (8.1) to the linear parabolic equation with a multiplicative white noise in the Stratonovich sense:
\[
\frac{\partial\theta}{\partial t} = \nu\Delta\theta + \frac{1}{2\nu}\,\theta\circ\dot V(x,t), \quad x\in\mathbf{R}^d,\ t\in(0,T), \qquad \theta(x,0) = g(x). \tag{8.4}
\]

By applying the Itô formula, it can be shown that the solutions of the equations (8.1) and (8.4) are related by
\[
v(x,t) = 2\nu\log\theta(x,t), \quad v(x,0) = h(x) = 2\nu\log g(x). \tag{8.5}
\]
It follows that the solution of the Burgers equation can be expressed as
\[
u(x,t) = -2\nu\,\frac{\nabla\theta(x,t)}{\theta(x,t)}, \quad u(x,0) = -\nabla h(x). \tag{8.6}
\]
For simplicity, we set $2\nu = 1$. By Theorem 4-4.2, suppose that $g \in C_b^{m,\delta}$, $W(x,t)$ is a $C^m$-Wiener random field, and $b(\cdot,t)$ and $\sigma(\cdot,t)$ are continuous adapted $C^{m,\delta}$-processes such that they satisfy the conditions for Theorem 4-4.2 for $m \ge 3$. Then the equation (8.4) has a unique $C^m$-solution given by the Feynman–Kac formula:
\[
\theta(x,t) = E_z\Big\{g[x + z(t)]\,\exp\int_0^t V[x - z(t) + z(s),\, ds]\Big\}, \tag{8.7}
\]
where $E_z$ denotes the partial expectation with respect to the Brownian motion $z(t)$. Since $g(x) = e^{h(x)} > 0$, by Theorem 4-5.1, the solution $\theta(x,t)$ of (8.4) is strictly positive so that the function $v$ given by (8.5) is well defined. Moreover, $v(\cdot,t)$ is a continuous $C^m$-process for $m \ge 3$, and it satisfies the equation (8.1). This, in turn, implies that $u(\cdot,t) \in C^{m-1}$ is the solution of the Burgers equation (8.3). In view of (8.6) and (8.7), the solution $u$ of (8.3) has the probabilistic representation
\[
u(x,t) = -\,\frac{E_z\{(\nabla g[x + z(t)] + g[x + z(t)]\,\nabla\eta(x,t))\,\exp\eta(x,t)\}}{E_z\{g[x + z(t)]\,\exp\eta(x,t)\}},
\]
where $\eta(x,t) = \int_0^t V[x - z(t) + z(s),\, ds]$.
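The Hopf–Cole relation (8.5) is easy to test numerically in the unforced case $V \equiv 0$ (a finite-difference sketch of mine with arbitrary parameters, not from the text): stepping the heat equation $\theta_t = \nu\theta_{xx}$ on a periodic grid and setting $v = 2\nu\log\theta$ should leave only a discretization-size residual in $v_t = \nu v_{xx} + \frac12|v_x|^2$.

```python
import numpy as np

# Finite-difference check of the Hopf-Cole transform in 1-d, V = 0
# (hypothetical parameters): evolve theta_t = nu * theta_xx and verify
# that v = 2*nu*log(theta) solves v_t = nu*v_xx + 0.5*v_x^2.
nu = 0.5
n = 256
x = 2 * np.pi * np.arange(n) / n
dx = x[1] - x[0]
dt = 0.2 * dx**2 / nu                  # explicit (CFL) stability margin
theta = np.exp(np.cos(x) / (2 * nu))   # theta(x,0) = exp(h(x)/(2 nu))

def lap(f):    # periodic second difference
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

def grad(f):   # periodic central first difference
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

snaps = {}
for step in range(1, 202):
    theta = theta + dt * nu * lap(theta)       # forward-Euler heat step
    if step in (199, 200, 201):
        snaps[step] = 2 * nu * np.log(theta)   # v = 2 nu log(theta)

v_t = (snaps[201] - snaps[199]) / (2 * dt)     # central time difference
v = snaps[200]
residual = float(np.max(np.abs(v_t - nu * lap(v) - 0.5 * grad(v)**2)))
```

Since the transform is exact at the PDE level, the residual measures only the truncation error of the scheme and stays small.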

8.3 Random Schrödinger Equation

In quantum mechanics, the Schrödinger equation plays a prominent role. Here we consider such an equation with a time-dependent random potential:
\[
\frac{\partial u}{\partial t} = i\{\kappa\Delta u + V(x,t)u\}, \quad x\in\mathbf{R}^d,\ t\in(0,T), \qquad u(x,0) = h(x), \tag{8.8}
\]
where $i = \sqrt{-1}$; $\kappa, \sigma$ are real parameters, $h$ is the initial state, and $V$ is a random potential. In particular, we assume that
\[
V(x,t) = b(x) + \sigma\,\dot W(x,t), \tag{8.9}
\]
256 Stochastic Partial Differential Equations

where $b(x)$ is a bounded function on $R^d$ and $W(x,t)$ is a Wiener random field with covariance function $r(x,y)$. To treat the equation (8.8) as a stochastic evolution in a Hilbert space, we let $H$ be the space of complex-valued functions in $L^2(R^d)$ with inner product $(f,g) = \int_{R^d} f(x)\,\bar g(x)\,dx$, where the over-bar denotes the complex conjugate. Then we rewrite (8.8) as a stochastic equation in $H$:

    du_t = Au_t\,dt + i\beta\, u_t\, dW_t,
    u_0 = h,                                                          (8.10)
where we set $A = i(\alpha\Delta + \beta b)$. Suppose that

(1) the function $h \in H$ and $b \in L^p(R^d)$ for $p \geq 2 \vee (d/2)$, and

(2) the covariance function $r(x,y)$, $x, y \in R^d$, is bounded and continuous.

Under condition (1), it is known that $A: H^2 \to H$ generates a group of unitary operators $\{G_t\}$ on $H$ (p. 225, [79]). Therefore, the equation (8.10) can be written as

    u_t = G_t h + i\beta \int_0^t G_{t-s}\, u_s\, dW_s.               (8.11)

As mentioned before, even though Theorem 6-6.5 was proved in a real Hilbert space, the result is also valid for the complex case. Here, for $g \in H$, $F_t(g) \equiv 0$ and $\Sigma_t(g) = i\beta g$ is linear with

    \|\Sigma_t(g)\|_R^2 = \beta^2 \int_{R^d} r(x,x)\,|g(x)|^2\,dx.

In view of condition (2), we have

    \|\Sigma_s(g)\|_R^2 \leq C\|g\|^2, \qquad \|\Sigma_s(g) - \Sigma_s(h)\|_R^2 \leq C\|g - h\|^2,

for any $g, h \in H$, where $C = \beta^2 \sup_x r(x,x)$. Therefore the conditions for the above-mentioned theorem are satisfied and we can conclude that the equation (8.8) has a unique mild solution $u(\cdot,t)$, which is a continuous $H$-valued process such that

    E\{ \sup_{0\leq t\leq T} \|u(\cdot,t)\|^p \} \leq K(p,T)\,(1 + \|h\|^p),

for some constant $K > 0$ and any $p \geq 2$. The stochastic Schrödinger equation also arises from a certain problem in wave propagation through random media. For electromagnetic wave propagation through the atmosphere, by a quasi-static approximation, the time-harmonic wave function $v(y)$, $y \in R^3$, is assumed to satisfy the random Helmholtz equation [48]:

    \Delta v + k^2 \eta(y)\, v = 0,                                   (8.12)

where $k$ is the wave number and $\eta(y)$ is a random field, known as the refractive index. On physical grounds, the wave function must satisfy a radiation condition. Let $y = (x_1, x_2, t)$, where $t$ denotes the third space variable. For an optical
Further Applications 257

beam wave transmission through a random slab $\{y \in R^3 : 0 \leq t \leq T\}$, by a so-called forward-scattering approximation [49], assume $v(x,t) = u(x,t)e^{ikt}$ with a large $k$ and an initial aperture field $v(x,0) = h(x)$. When this product solution is substituted into equation (8.12) and the term $\frac{\partial^2 v}{\partial t^2}$ is neglected in comparison with $2ik\frac{\partial v}{\partial t}$, it yields the random Schrödinger equation:

    \frac{\partial u}{\partial t} = \frac{i}{2k}\{\Delta u + k^2 \eta(x,t)\,u\}, \quad x \in R^2,\ t \in (0,T),
    u(x,0) = h(x).                                                    (8.13)

In the Markovian approximation, let $E\,\eta(x,t) = b(x)$ and $\eta(x,t) - b(x) = \dot W(x,t)$. Then the equation (8.13) is a special case of the stochastic Schrödinger equation for $d = 2$ with $\alpha = 1/2k$ and $\beta = k/2$. This approximation has been used extensively in engineering applications.
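Since $A$ generates a group of unitary operators, the $L^2$ norm of the solution is conserved; a split-step Fourier sketch (in Python, not from the text, with an illustrative choice $b(x) = \cos x$ and a Brownian increment for the noise) makes this visible numerically, because both substeps are pointwise or diagonal phase rotations:

```python
import numpy as np

# Split-step Fourier sketch of du/dt = i(alpha*Laplacian(u) + beta*V u) on a
# periodic 1-D grid.  The free flight is a diagonal phase in Fourier space and
# the potential kick is a pointwise phase, so each substep is unitary and the
# discrete L2 norm is conserved -- mirroring the unitary group {G_t}.
n, L = 256, 2*np.pi
x = np.linspace(0.0, L, n, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(n, d=L/n)
alpha, beta, dt = 0.5, 1.0, 1e-3
rng = np.random.default_rng(0)

u = np.exp(1j*x) * np.exp(-(x - np.pi)**2)        # smooth initial state h(x)
norm0 = np.linalg.norm(u)
for _ in range(500):
    u = np.fft.ifft(np.exp(-1j*alpha*k**2*dt) * np.fft.fft(u))   # free flight
    phase = beta*(np.cos(x)*dt + np.sqrt(dt)*rng.standard_normal(n))
    u *= np.exp(1j*phase)                                        # potential kick
print(abs(np.linalg.norm(u) - norm0))   # norm drift near machine precision
```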

8.4 Nonlinear Stochastic Beam Equations


Semilinear stochastic beam equations arise as mathematical models to describe the nonlinear vibration of an elastic panel excited by, say, an aerodynamic force. In the presence of air turbulence, a simplified model is governed by the following nonlinear stochastic integro-differential equation subject to the given initial-boundary conditions [16]:

    \frac{\partial^2 u}{\partial t^2}(x,t) = \Big[\alpha + \beta \int_0^l \Big|\frac{\partial u}{\partial y}(y,t)\Big|^2 dy\Big]\frac{\partial^2 u}{\partial x^2}(x,t)
        - \gamma \frac{\partial^4 u}{\partial x^4}(x,t) + q(u, \partial_t u, \partial_x u, x, t)
        + \sigma(u, \partial_t u, \partial_x u, x, t)\,\dot W(x,t),   (8.14)

    u(0,t) = u(l,t) = 0, \quad \frac{\partial u}{\partial x}(0,t) = \frac{\partial u}{\partial x}(l,t) = 0,
    u(x,0) = g(x), \quad \frac{\partial u}{\partial t}(x,0) = h(x),

for $0 \leq t < T$, $0 \leq x \leq l$, where $\alpha, \beta, \gamma$ are positive constants, and $g, h$ are given functions. The Wiener random field $W(x,t)$ is continuous for $0 \leq x \leq l$, with covariance function $r(x,y)$, which is bounded and continuous for $x, y \in [0,l]$. The functions $q(\xi, y, z, x, t)$ and $\sigma(\xi, y, z, x, t)$ are continuous for $x \in (0,l)$, $t \in [0,T]$ and $\xi, y, z \in R$, such that they satisfy $q(0,0,0,x,t) = \sigma(0,0,0,x,t) = 0$, and the linear growth and the Lipschitz conditions:

    |q(\xi,y,z,x,t)|^2 + |\sigma(\xi,y,z,x,t)|^2 \leq C(1 + \xi^2 + y^2 + z^2),      (8.15)



and

    |q(\xi,y,z,x,t) - q(\xi',y',z',x,t)|^2 + |\sigma(\xi,y,z,x,t) - \sigma(\xi',y',z',x,t)|^2
        \leq K(|\xi-\xi'|^2 + |y-y'|^2 + |z-z'|^2),                   (8.16)

for any $t \in [0,T]$, $x \in [0,l]$, $\xi, y, z, \xi', y', z' \in R$ and for some positive constants $C$ and $K$.

To regard (8.14) as a second-order stochastic evolution equation, introduce the Hilbert spaces $H = L^2(0,l)$, $H^m = H^m(0,l)$, and $V = H_0^2 \subset H^2$ satisfying the homogeneous boundary conditions. Define the following operators:

    Au = \alpha\frac{\partial^2 u}{\partial x^2} - \gamma\frac{\partial^4 u}{\partial x^4},
    B(u) = \Big[\beta \int_0^l \Big|\frac{\partial u}{\partial y}(y,t)\Big|^2 dy\Big]\frac{\partial^2 u}{\partial x^2},      (8.17)
and

    \Phi_t(u,v) = q(u, v, \partial_x u, \cdot, t),
    \Sigma_t(u,v) = \sigma(u, v, \partial_x u, \cdot, t).             (8.18)

Then $A: V \to V' = H^{-2}$ is a continuous and coercive linear operator, because of the fact that

    \langle A\varphi, \varphi\rangle = -\alpha\Big\|\frac{\partial\varphi}{\partial x}\Big\|^2 - \gamma\Big\|\frac{\partial^2\varphi}{\partial x^2}\Big\|^2 \leq -\gamma\delta\,\|\varphi\|_2^2, \quad \varphi \in V,      (8.19)

where $\delta = \inf_{\varphi \in H_0^2}\{\|\partial^2\varphi/\partial x^2\|^2 / \|\varphi\|_2^2\} > 0$.
Let $H^{1,4}$ denote the $L^4$-Sobolev space of first order, namely, the set of $L^4$-functions on $(0,l)$ with generalized derivatives $\frac{d\varphi}{dx} \in L^4(0,l)$. Then we have $B: H_0^2 \subset H^{1,4} \to H$. Since the imbedding $H_0^2 \subset H^{1,4}$ is continuous, the nonlinear operator $B: V \to H$ is locally bounded. To show it is locally Lipschitz continuous, we consider
    \|B(\varphi) - B(\psi)\| = \Big\| \beta\Big[\int_0^l \big(|\partial_y\varphi(y,t)|^2 - |\partial_y\psi(y,t)|^2\big)\,dy\Big]\,\partial_x^2\varphi
        + \beta\Big[\int_0^l |\partial_y\psi(y,t)|^2\,dy\Big]\big(\partial_x^2\varphi - \partial_x^2\psi\big) \Big\|

    \leq \beta\,\|\partial_x^2\varphi\| \int_0^l \big|\,|\partial_y\varphi(y,t)|^2 - |\partial_y\psi(y,t)|^2\,\big|\,dy
        + \beta\Big[\int_0^l |\partial_y\psi(y,t)|^2\,dy\Big]\,\|\partial_x^2\varphi - \partial_x^2\psi\|,

so that

    \|B(\varphi) - B(\psi)\| \leq \beta\big\{ \|\partial_x^2\varphi\|\,(\|\partial_x\varphi\| + \|\partial_x\psi\|)\,\|\partial_x\varphi - \partial_x\psi\|
        + \|\partial_x\psi\|^2\,\|\partial_x^2\varphi - \partial_x^2\psi\| \big\},

for any $\varphi, \psi \in V$. It follows that there exists a constant $C > 0$ such that

    \|B(\varphi) - B(\psi)\| \leq C(\|\varphi\|_2^2 + \|\psi\|_2^2)\,\|\varphi - \psi\|_2,      (8.20)

which shows that $B$ is locally Lipschitz continuous. By conditions (8.15) and (8.16), we can show that $\Phi_t: V \times H \to H$ and $\Sigma_t: V \times H \to L(H)$ such that, for any $t \in [0,T]$, $\varphi, \varphi' \in V$ and $\psi, \psi' \in H$,

    \|\Phi_t(\varphi,\psi) - \Phi_t(\varphi',\psi')\|^2 \leq C_1(\|\varphi - \varphi'\|_2^2 + \|\psi - \psi'\|^2),      (8.21)

and

    \|\Sigma_t(\varphi,\psi) - \Sigma_t(\varphi',\psi')\|_R^2 \leq C_2(\|\varphi - \varphi'\|_2^2 + \|\psi - \psi'\|^2),      (8.22)

where $C_1, C_2$ are some positive constants.


Now rewrite the equation (8.14) as a first-order system:

dut = vt dt,
dvt = [At ut + Ft (ut , vt )]dt + t (ut , vt )dWt , (8.23)
u0 = g, v0 = h,

where we let
Ft (u, v) = B(u) + t (u, v). (8.24)
For any $\eta \in C([0,T]; V) \cap C^1([0,T]; H)$, by noticing (8.21), (8.22), and (8.24), we can deduce that

    \int_0^t (F_s(\eta_s, \dot\eta_s), \dot\eta_s)\,ds = \int_0^t (B(\eta_s), \dot\eta_s)\,ds + \int_0^t (\Phi_s(\eta_s, \dot\eta_s), \dot\eta_s)\,ds
    \leq -\frac{\beta}{4}\big(\|\partial_x\eta_t\|^4 - \|\partial_x\eta_0\|^4\big) + \frac12\int_0^t \big\{\|\Phi_s(\eta_s, \dot\eta_s)\|^2 + \|\dot\eta_s\|^2\big\}\,ds
    \leq \frac{\beta}{4}\|\partial_x\eta_0\|^4 + \frac12\int_0^t \big\{C_1\|\eta_s\|_2^2 + (1 + C_1)\|\dot\eta_s\|^2\big\}\,ds,

and

    \int_0^t \|\Sigma_s(\eta_s, \dot\eta_s)\|_R^2\,ds \leq \int_0^t C_2(\|\eta_s\|_2^2 + \|\dot\eta_s\|^2)\,ds,

where $\dot\eta_t = \frac{d}{dt}\eta_t$ and $\partial_x = \frac{\partial}{\partial x}$. It follows that there exists a constant $\lambda > 0$ such that

    \int_0^t \big\{(F_s(\eta_s, \dot\eta_s), \dot\eta_s) + \|\Sigma_s(\eta_s, \dot\eta_s)\|_R^2\big\}\,ds \leq \lambda \int_0^t \{1 + e(\eta_s, \dot\eta_s)\}\,ds,      (8.25)

where $e(\varphi,\psi) = \|\varphi\|_2^2 + \|\psi\|^2$ is the energy function.
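The first step of the estimate above rests on the identity $(B(\eta_s), \dot\eta_s) = -\frac{\beta}{4}\frac{d}{ds}\|\partial_x\eta_s\|^4$. As an illustrative check (not from the text), this can be verified symbolically on a separable test function $\eta(x,t) = f(t)\sin(\pi x)$ with $l = 1$:

```python
import sympy as sp

# Symbolic verification of (B(eta), d/dt eta) = -(beta/4) d/dt ||eta_x||^4
# for B(u) = beta * (int_0^1 |u_y|^2 dy) * u_xx, on eta = f(t)*sin(pi*x).
x, t, beta = sp.symbols('x t beta', positive=True)
f = sp.Function('f')(t)
eta = f * sp.sin(sp.pi * x)

ux2 = sp.integrate(sp.diff(eta, x)**2, (x, 0, 1))    # ||eta_x||^2
B = beta * ux2 * sp.diff(eta, x, 2)                  # the operator B applied to eta
lhs = sp.integrate(B * sp.diff(eta, t), (x, 0, 1))   # (B(eta), eta_t)
rhs = -sp.Rational(1, 4) * beta * sp.diff(ux2**2, t) # -(beta/4) d/dt ||eta_x||^4
print(sp.simplify(lhs - rhs))   # 0: the identity holds
```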
In view of (8.19), (8.20), (8.21), (8.22), and (8.25), the conditions for Theorem 6-8.5 are fulfilled, except for the monotonicity condition, which can be verified easily. Therefore, for $g \in H_0^2$, $h \in H$, the equation (8.14) has a unique solution $u \in L^2(\Omega; C([0,T]; H_0^2))$ with $\partial_t u \in L^2(\Omega; C([0,T]; H))$. In fact, from the energy equation, we can deduce that $u \in L^4(\Omega \times (0,T); H^1)$. It is also possible to study the asymptotic properties, such as the ultimate boundedness and stability, of solutions to this equation by the relevant theorems in Chapter 6.

8.5 Stochastic Stability of Cahn–Hilliard Equation


The Cahn–Hilliard equation was proposed as a mathematical model to describe the dynamics of pattern formation in phase transition (p. 147, [89]). In a bounded domain $D \subset R^d$ with a smooth boundary $\partial D$, this equation, with damping, and its initial-boundary conditions read

    \frac{\partial v}{\partial t}(x,t) + \kappa\Delta^2 v + \alpha v + \Delta(\beta v - \gamma v^3) = 0,
    \frac{\partial v}{\partial n}\Big|_{\partial D} = \frac{\partial \Delta v}{\partial n}\Big|_{\partial D} = 0,      (8.26)
    v(x,0) = g(x),
where $\Delta$ and $\Delta^2$ denote the Laplacian and the biharmonic operators, respectively; $\kappa, \alpha, \beta$, and $\gamma$ are positive parameters; $\frac{\partial v}{\partial n}$ denotes the normal derivative of $v$ to the boundary; and $g$ is a given function. Let $H = L^2(D)$ and let

    V = H_0^2 = \{\varphi \in H^2(D) : \tfrac{\partial\varphi}{\partial n}\big|_{\partial D} = 0\}.

Then it is known (p. 151, [89]) that, in the variational formulation, for $g \in H$, the problem (8.26) has a unique strong solution $v \in C([0,T]; H) \cap L^2((0,T); V)$ for any $T > 0$. Suppose that this problem has an equilibrium solution $v = \varphi(x)$ in $V$ satisfying the equation

    \kappa\Delta^2\varphi + \alpha\varphi + \Delta(\beta\varphi - \gamma\varphi^3) = 0,      (8.27)
subject to the homogeneous boundary conditions as in (8.26). We are interested in the linear stability of this pattern perturbed by a state-dependent white noise. To this end, let $v = \varphi + u$, where $u$ is a small disturbance. Then, in view of (8.26) and (8.27), the linearized equation for $u$ satisfies the following system:

    \frac{\partial u}{\partial t}(x,t) + \kappa\Delta^2 u + \alpha u + \Delta(\beta u - 3\gamma\varphi^2 u) = \sigma u \dot W(x,t),
    \frac{\partial u}{\partial n}\Big|_{\partial D} = \frac{\partial \Delta u}{\partial n}\Big|_{\partial D} = 0,      (8.28)
    u(x,0) = h(x),

where the constant > 0 and W (x, t) is Wiener random field with a bounded
continuous covariance function r(x, y), for x, y D. Define
Au = {2 u + u + u} + 3(2 u), (8.29)
and (u) = u. Then we consider (8.28) as a stochastic evolution equation in
V as follows

dut = Aut dt + (ut )dWt ,


(8.30)
u0 = h.
The linear operator $A: V \to V' = H^{-2}$ is continuous. In fact, we will show that, under suitable conditions, it is coercive. For $g \in V$, using integration by parts, we can obtain

    \langle Ag, g\rangle = -\{\kappa\langle\Delta^2 g, g\rangle + \beta\langle\Delta g, g\rangle + \alpha(g,g)\} + 3\gamma(\Delta(\varphi^2 g), g)
        \leq -\kappa\|\Delta g\|^2 + \beta\|g\|\,\|\Delta g\| - \alpha\|g\|^2 + 3\gamma\varphi_0^2\,\|\Delta g\|\,\|g\|,

where we set $\varphi_0 = \sup_{x\in D}|\varphi(x)|$. It follows that, by recalling the inequality $|ab| \leq \varepsilon|a|^2 + (1/4\varepsilon)|b|^2$ for any $\varepsilon > 0$,

    2\langle Ag, g\rangle + \|\Sigma(g)\|_R^2 \leq -2(\kappa - \beta\varepsilon_1 - 3\gamma\varphi_0^2\varepsilon_2)\|\Delta g\|^2
        - 2(\alpha - \beta/4\varepsilon_1 - 3\gamma\varphi_0^2/4\varepsilon_2 - \sigma^2 r_0/2)\|g\|^2,      (8.31)

for any $\varepsilon_1, \varepsilon_2 > 0$, and $r_0 = \sup_{x\in D}|r(x,x)|$. Owing to the fact that the $V$-norm $\|g\|_2$ is equivalent to the norm $\{\|g\|^2 + \|\Delta g\|^2\}^{1/2}$ (p. 150, [89]), the inequality (8.31) implies the coercivity condition (D3) for Theorem 6-7.5 provided that

    (\kappa - \beta\varepsilon_1 - 3\gamma\varphi_0^2\varepsilon_2) > 0.      (8.32)
Moreover, since the equation (8.28) is linear, the monotonicity condition (D.4) holds if the additional condition

    (\alpha - \beta/4\varepsilon_1 - 3\gamma\varphi_0^2/4\varepsilon_2 - \sigma^2 r_0/2) > 0      (8.33)

is satisfied. Then, by choosing $\varepsilon_1, \varepsilon_2$ such that the conditions (8.32) and (8.33) are met, for $h \in H$, Theorem 6-7.5 ensures that the equation (8.28) has a unique solution $u \in L^2(\Omega; C([0,T]; H)) \cap L^2(\Omega \times (0,T); V)$. Furthermore, by appealing to Theorem 7-4.3, it is easy to show that the equilibrium solution $v = \varphi$ of (8.26), or the null solution of (8.28), is a.s. asymptotically stable. We claim that $\Lambda(u) = \|u\|^2$ is a Lyapunov functional for the problem (8.30) under conditions (8.32) and (8.33). This is indeed the case since, by noticing (8.31),

    \mathcal{L}\Lambda(u) = 2\langle Au, u\rangle + \|\Sigma(u)\|_R^2 \leq -\mu\Lambda(u),      (8.34)

for any $u \in V$ and for some $\mu > 0$. For instance, choose $\varepsilon_1 = \kappa/4\beta$ and $\varepsilon_2 = \kappa/6\gamma\varphi_0^2$. Then we can deduce that

    \mathcal{L}\Lambda(u) \leq -\frac{\kappa}{2}\|\Delta u\|^2 - \mu\|u\|^2,

with = ( 2 / 9 2 40 /2 2 r0 /2). Therefore, if

> 2 / + 9 2 40 /2 + 2 r0 /2,

then the conditions (8.32), (8.33), and (8.34) are satisfied and the equilibrium
solution to (8.26) is a.s. exponentially stable.
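A short numerical sanity check (illustrative parameter values, not from the text): with the choices $\varepsilon_1 = \kappa/4\beta$ and $\varepsilon_2 = \kappa/6\gamma\varphi_0^2$ the coefficient in (8.32) reduces exactly to $\kappa/4$, and the stability threshold on $\alpha$ can be computed directly:

```python
# Check that eps1 = kappa/(4*beta), eps2 = kappa/(6*gamma*phi0^2) makes the
# (8.32) coefficient equal kappa/4, and compute the threshold in the final
# condition alpha > beta^2/kappa + 9*gamma^2*phi0^4/(2*kappa) + sigma^2*r0/2.
kappa, beta, gamma, phi0, sigma, r0 = 1.0, 0.5, 0.2, 1.5, 0.3, 2.0  # sample values
eps1 = kappa / (4*beta)
eps2 = kappa / (6*gamma*phi0**2)
coeff_832 = kappa - beta*eps1 - 3*gamma*phi0**2*eps2
alpha_min = beta**2/kappa + 9*gamma**2*phi0**4/(2*kappa) + sigma**2*r0/2
print(coeff_832, alpha_min)  # kappa/4, and the smallest admissible alpha
```

Any $\alpha$ above the printed threshold gives a.s. exponential stability of the pattern $\varphi$.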

8.6 Invariant Measures for Stochastic Navier–Stokes Equations
We will consider randomly perturbed Navier–Stokes equations in two space dimensions, a turbulence model for an incompressible fluid [93]. In particular, one is interested in the question of the existence of an invariant measure. The velocity $u = (u_1, u_2)$ and the pressure $p$ of the turbulent flow in a domain $D \subset R^2$ are governed by the stochastic system:

    \frac{\partial u_i}{\partial t}(x,t) + \sum_{j=1}^2 u_j\frac{\partial u_i}{\partial x_j} = -\frac{1}{\rho}\frac{\partial p}{\partial x_i} + \nu\sum_{j=1}^2 \frac{\partial^2 u_i}{\partial x_j^2} + \dot W_i(x,t),
    \sum_{j=1}^2 \frac{\partial u_j}{\partial x_j} = 0, \quad x \in D,\ t > 0,      (8.35)
    u_i|_{\partial D} = 0, \quad u_i(x,0) = g_i(x), \quad i = 1, 2,

where $\nu$ is the kinematic viscosity, $\rho$ is the fluid density, the initial data $g_i$'s are given functions, and $W_1, W_2$ are Wiener random fields with covariance functions $r_{ij}$, $i, j = 1, 2$. In vectorial notation, the above equations take the form:

    \frac{\partial u}{\partial t}(x,t) + (u\cdot\nabla)u = -\frac{1}{\rho}\nabla p + \nu\Delta u + \dot W(x,t),
    \nabla\cdot u = 0, \quad x \in D,\ t > 0,                          (8.36)
    u|_{\partial D} = 0, \quad u(x,0) = g(x),

where $W = (W_1, W_2)$ and $g = (g_1, g_2)$.
To set up the above system as a stochastic evolution equation, it is customary to introduce the following function spaces [93]. Let $L^p = L^p(D) \times L^p(D) = [L^p(D)]^2$,

    S = \{v \in [C_0^\infty(D)]^2 : \nabla\cdot v = 0\},
    H = \text{the closure of } S \text{ in } L^2,
    V = \text{the closure of } S \text{ in } [H_0^1]^2,
    H^\perp = \{v = \nabla p, \text{ for some } p \in H^1(D)\},

and V denotes the dual space of V. Again, the norms in H and V will be
denoted by k k and k k1 , respectively. It is known that the space L2 has the
direct sum decomposition:
L2 = H H .
Let : L2 H be an orthogonal projection. For v S, define the operators A
and F () by Av = v and F (v) = (v )v. Then they can be extended
to be continuous operators from V to V . After applying the projector to
equation (8.27), we obtain the stochastic evolution equation in V of the form:

dut = [Aut + F (ut )]dt + dVt ,


(8.37)
u0 = g,

where Vt = Wt , and Wt = W (, t) is assumed to be an H-valued Wiener


process with covariance operator Q, and g H.
The existence and uniqueness of a solution to the above equation has been studied by several authors (see, e.g., [93]). Here, we sketch a proof given in [68] by a method of monotone truncation. First, define the bilinear operator $B$ from $V \times V$ to $V'$ as follows:

    \langle B(u,v), w\rangle = \sum_{i,j=1}^2 \int_D u_i \frac{\partial v_j}{\partial x_i} w_j\,dx, \quad u, v, w \in V,      (8.38)

and set $B(u) = B(u,u)$. Then we have

    \langle B(u,v), w\rangle = -\langle B(u,w), v\rangle,              (8.39)

which implies $\langle B(u,v), v\rangle = 0$. First, it is easy to check that $A: V \to V'$ is coercive. Notice that $F(u) = -B(u)$. Making use of (8.39) and some Sobolev inequalities, the following estimates can be obtained [93]:

    |\langle F(u), w\rangle| = |\langle B(u,u), w\rangle| \leq C_1\|u\|_1^2\,\|w\|_1,

and

    |\langle F(u) - F(v), w\rangle| \leq |\langle B(u-v, w), u\rangle| + |\langle B(v, w), u-v\rangle|
        \leq C_2(\|u\|_1 + \|v\|_1)\,\|u-v\|_1\,\|w\|_1.
Hence, the operator $F: V \to V'$ is locally bounded and Lipschitz continuous. For uniqueness, let $u, v \in L^2(\Omega; C([0,T]; H)) \cap L^2(\Omega \times (0,T); V)$ be two solutions of (8.37). As in the deterministic case, the following estimate holds a.s. for some $\eta > 0$:

    \|u_t - v_t\|^2 \leq \|u_0 - v_0\|^2 \exp\Big\{\eta \int_0^t |u_s|_4^4\,ds\Big\}.

Here $|u_s|_4$ denotes the $L^4(D)$-norm of $u_s$, which exists because the imbedding $V \subset L^4$ is continuous for $d = 2$. It follows that $u_t = v_t$ a.s. and the solution is unique.
To show the existence of a solution, we mollify the operator $F$ by using an $L^4$-truncation. Let $B_n = \{u \in L^4(D) : |u|_4 \leq n\}$ be a ball in $L^4$ centered at the origin with radius $n$. It can be shown [68] that the operator $A + F(\cdot)$ is monotone in $B_n$ due to the following estimate:

    \langle A(u-v), u-v\rangle + \langle F(u) - F(v), u-v\rangle \leq \frac{16n^2}{\nu^3}\|u-v\|^2 - \frac{\nu}{2}\|u-v\|_1^2.

Now introduce a mollified operator $F_n$ defined by

    F_n(u) = \Big(\frac{n}{n \vee |u|_4}\Big)^4 F(u),

which is well defined on $V$. In fact, it can be shown that $A + F_n(\cdot)$ is a monotone operator from $V$ into $V'$. Let us approximate the equation (8.37) by the following one:

    du_n(\cdot,t) = [Au_n(\cdot,t) + F_n(u_n(\cdot,t))]\,dt + dV_t,
    u_n(\cdot,0) = g,                                                 (8.40)

where the covariance operator $Q$ of $V_t$ is assumed to satisfy the trace condition

    \mathrm{Tr}\,Q \leq q_0,                                          (8.41)

for some constant $q_0 > 0$. In view of the coercivity and the monotonicity conditions for $A$ and $F_n$ indicated above, we can apply Theorem 6-7.5 to the problem (8.40) and claim that it has a unique strong solution $u_n \in L^2(\Omega; C([0,T]; H)) \cap L^2(\Omega \times (0,T); V)$, for any $T > 0$. Furthermore, noting the property (8.39), the following energy equation holds:

    \|u_t^n\|^2 = \|u_0\|^2 + 2\int_0^t \langle Au_s^n, u_s^n\rangle\,ds + 2\int_0^t (u_s^n, dV_s) + t\,\mathrm{Tr}\,Q.      (8.42)

From this equation, similar to a coercive stochastic evolution equation, we can deduce that

    E\Big\{ \sup_{0\leq t\leq T}\|u_t^n\|^2 + 2\nu\int_0^T \|u_s^n\|_1^2\,ds \Big\} \leq C_T\,\mathrm{Tr}\,Q,

for some constant $C_T > 0$. Therefore the sequence $\{u_n\}$ is bounded in $X_T = L^2(\Omega; C([0,T]; H)) \cap L^2(\Omega \times (0,T); V)$, and there exists a subsequence, still denoted by $\{u_n\}$, that converges weakly to $u$ in $X_T$. As $n \to \infty$, we have $Au_n \to Au$ and $F_n(u_n) \to F_0$ weakly in $L^2(\Omega \times (0,T); V')$. By taking the limit in (8.40), we obtain

    du_t = [Au_t + F_0(t)]\,dt + dV_t,
    u_0 = g.                                                          (8.43)

It remains to show that F0 (t) = F (ut ) for each t (0, T ) a.s. The idea of the
proof is similar to that of Theorem 6-7.5. A complete proof can be found in
[68]. Therefore, we can conclude that the problem (8.35) for stochastic Navier
Stokes equations has a unique strong solution u XT , which is known to be a
Markov process in H with the Feller property. By passing to a limit as n ,
the energy equation (8.42) yields
Z t Z t
kut k2 = ku0 k2 + 2 hAus , us ids + 2 (us , dVs ) + t T r Q. (8.44)
0 0

Finally, by following the approach in [17], we will show that there exists an invariant measure for the problem (8.35). Since $A$ is coercive and it satisfies

    \langle Av, v\rangle \leq -\nu\|v\|_1^2, \quad v \in V,

for some $\nu > 0$, we can deduce from the energy equation (8.44) that

    E\|u_T\|^2 + 2\nu\,E\int_0^T \|u_t\|_1^2\,dt \leq \|g\|^2 + T\,\mathrm{Tr}\,Q.

It follows that, for any $u_0 = g \in H$ and $T_0 > 0$, there exists a constant $M > 0$ such that

    \frac{1}{T}\int_0^T E\|u_t^g\|_1^2\,dt \leq M, \quad \text{for any } T > T_0,      (8.45)

where $u_t^g$ denotes the solution $u_t$ with $u_0 = g$. By means of the Chebyshev inequality and (8.45), we obtain

    \lim_{R\to\infty} \sup_{T>T_0} \Big\{\frac{1}{T}\int_0^T P(\|u_t^g\|_1 > R)\,dt\Big\}
        \leq \lim_{R\to\infty} \sup_{T>T_0} \Big\{\frac{1}{R^2 T}\int_0^T E\|u_t^g\|_1^2\,dt\Big\} = 0.      (8.46)

Therefore, we can apply Theorem 7-5.5 to conclude that there exists an invariant measure $\mu$ on the Borel field of $H$. Moreover, it can be shown that $\mu$ is supported in $V$ and, if $g$ is an $H$-valued random variable with the invariant distribution $\mu$, the solution $u_t$ will be a stationary Markov process. The uniqueness question is not answered here but was treated in [29] by a different approach.

8.7 Spatial Population Growth Model in Random Environment
Reaction–diffusion equations are often used to model population growth in spatial ecology [84] and the evolution of gene frequencies under spatial migration and selection [64]. In a random environment, the population growth rate or the birth rate $g(x,t)$ may fluctuate in time and space. With limited resources, due to competition, the death rate is assumed to be proportional to the square of the population size. Under these considerations, we arrive at the following population growth model described by a stochastic reaction–diffusion equation in a bounded domain $D \subset R^d$ with smooth boundary $\partial D$:

    \frac{\partial u}{\partial t} = \kappa\Delta u + u[g(x,t) - \rho u^2],
    \frac{\partial u}{\partial n}\Big|_{\partial D} = 0, \quad u(x,0) = \xi(x),      (8.47)

for $x \in D$, $t \in (0,T)$, where $u(x,t)$ is the population size at $(x,t)$, $\kappa$ and $\rho$ are positive constants, $\frac{\partial u}{\partial n}|_{\partial D}$ denotes the normal derivative to the boundary, and $\xi(x)$ is the initial population size. The random growth rate $g(x,t)$ is assumed to fluctuate about the mean rate $\alpha$ as follows:

    g(x,t) = \alpha + \sigma(x,t)\dot W(x,t),                          (8.48)

where $\sigma(x,t)$ is a bounded continuous function and $W(x,t)$ is a Wiener random field. Assume that the covariance function $r(x,y)$ is also bounded and continuous for $x, y \in D$.

Let $A = \kappa(\Delta - \beta)$ for some $\beta > 0$ as before and set $\alpha_1 = \alpha + \kappa\beta$. Rewrite the equation (8.47) as a stochastic evolution equation in $H$:

    du_t = [Au_t + f(u_t)]\,dt + \Sigma_t(u_t)\,dW_t,
    u_0 = \xi,                                                         (8.49)

where

    f(u) = u(\alpha_1 - \rho u^2),                                     (8.50)
    \Sigma_t(u)\,dW_t = \sigma_t\, u\, dW_t.                           (8.51)
In view of (8.50), $f(u)$ is locally Lipschitz continuous. The conditions (An 1) and (An 2) of Corollary 3-6.4 are satisfied. To verify condition (A.4), we have

    (u, f(u)) = (u, u(\alpha_1 - \rho u^2)) = \alpha_1\|u\|^2 - \rho\|u^2\|^2
        \leq \alpha_1\|u\|^2 - \frac{\rho}{|D|}\|u\|^4 \leq \frac{\alpha_1^2|D|}{4\rho},

where $|D|$ denotes the volume of $D$ (here $\|u\|^4 \leq |D|\,\|u^2\|^2$ by the Cauchy–Schwarz inequality), and

    \frac12\mathrm{Tr}[\Sigma_t(u)R\Sigma_t^*(u)] = \frac12\int_D r(x,x)\,\sigma^2(x,t)\,u^2(x)\,dx \leq \frac12 r_0\sigma_0^2\|u\|^2,

where $r_0$ and $\sigma_0$ are the upper bounds for $r(x,x)$ and $\sigma(x,t)$, respectively. In view of the above two inequalities, there exists a constant $C_1 > 0$ such that

    (u, f(u)) + \frac12\mathrm{Tr}[\Sigma_t(u)R\Sigma_t^*(u)] \leq C_1(1 + \|u\|^2),      (8.52)

which verifies condition (A.4). Hence, by Corollary 3-6.4, the problem (8.47) has a unique global strong solution.

For $u$ to be a population size, it must be positive. To prove this, we appeal to Theorem 4-5.3. To this end, let $u_t = e^{\lambda t}v_t$ for some $\lambda > 0$. Clearly, $u_t$ is positive if and only if $v_t$ is positive. Choosing $\lambda = \alpha_1$ in equation (8.47), one can obtain the following equation for $v_t$:

    dv_t = [Av_t + \tilde f_t(v_t)]\,dt + \Sigma_t(v_t)\,dW_t,
    v_0 = \xi,                                                         (8.53)

where

    \tilde f_t(v) = -\rho\, e^{2\alpha_1 t}\, v^3.                     (8.54)
As a remark to Theorem 3-5.3, the theorem holds in a bounded domain as well [14]. For the time-dependent nonlinear function $\tilde f_t(v)$, the key condition (3) for the theorem remains unchanged, provided that it holds for all $t \geq 0$. Here we have

    (v^-, \tilde f_t(v)) = -\rho\, e^{2\alpha_1 t}(v^-, v^3) > 0, \quad \text{for } v < 0.

Other conditions are obviously satisfied. Hence, the solution $v_t$ is positive for a given positive initial density $\xi$. This implies that the population size $u_t$ is indeed positive. Finally, we consider the question of the existence of invariant measures. To apply Theorem 7-5.6 to equation (8.49), it is necessary to verify the key condition

    \langle Au, u\rangle + (u, f(u)) + \frac12\mathrm{Tr}[\Sigma_t(u)R\Sigma_t^*(u)] \leq C_1 - C_2\|u\|_1^2.      (8.55)

By the definition of $A$ and equation (8.52), we can get

    \langle Au, u\rangle + (u, f(u)) + \frac12\mathrm{Tr}[\Sigma_t(u)R\Sigma_t^*(u)]
        \leq -\kappa\|\nabla u\|^2 - \kappa\beta\|u\|^2 + C_1(1 + \|u\|^2) \leq C_1 - C_2\|u\|_1^2,

if we choose $\beta$ such that $C_2 = \kappa\beta - C_1 > 0$. Then, by Theorem 7-5.6, there exists a unique invariant measure for the population dynamical model (8.47).
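A crude Euler–Maruyama discretization (an illustrative Python sketch, not from the text; independent Gaussian increments per grid point stand in for the Wiener random field, and the parameter values are hypothetical) shows the qualitative behavior: a positive initial density stays positive and the field settles near the deterministic equilibrium $\sqrt{\alpha/\rho}$:

```python
import numpy as np

# Explicit Euler--Maruyama for u_t = kappa*u_xx + u*(alpha - rho*u^2) + sigma*u*dW
# on D = (0,1) with reflecting (Neumann) boundary conditions.
rng = np.random.default_rng(1)
nx, dx, dt, steps = 50, 1/50, 1e-4, 2000
kappa, alpha, rho, sigma = 0.1, 1.0, 1.0, 0.2
u = 0.5 + 0.1*np.cos(np.pi*np.linspace(0, 1, nx))   # positive initial density xi

for _ in range(steps):
    lap = (np.roll(u, 1) - 2*u + np.roll(u, -1)) / dx**2
    lap[0] = 2*(u[1] - u[0]) / dx**2                 # Neumann end corrections
    lap[-1] = 2*(u[-2] - u[-1]) / dx**2
    dW = np.sqrt(dt) * rng.standard_normal(nx)       # noise increments
    u = u + (kappa*lap + u*(alpha - rho*u**2))*dt + sigma*u*dW

print(u.min() > 0, u.mean())   # positivity preserved; mean drifting toward 1
```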

8.8 HJMM Equation in Finance

A mathematical model in bond market theory was proposed by Heath, Jarrow, and Morton [40] in 1992. Let $P(t,\tau)$, $0 \leq t \leq \tau$, denote the market price at time $t$ of a bond which pays the amount of one unit, say \$1, at the time $\tau$ of maturity. Introduce the random forward rate function $f(t,\tau)$ such that $P(t,\tau)$ can be expressed as

    P(t,\tau) = \exp\Big\{-\int_t^\tau f(t,s)\,ds\Big\}, \quad 0 \leq t \leq \tau.

The model for the interest rate fluctuation is governed by the so-called HJM equation

    df(t,\tau) = \alpha(t,\tau)\,dt + (\sigma(t,\tau), dW_t),          (8.56)

where

    (\sigma(t,\tau), dW_t) = \sum_{i=1}^n \sigma_i(t,\tau)\,dw_i(t);

$W_t = (w_1(t), \ldots, w_n(t))$ is the standard Brownian motion in $R^n$. Assume that $\alpha(t,\tau)$ and $\sigma(t,\tau) = (\sigma_1(t,\tau), \ldots, \sigma_n(t,\tau))$ depend continuously on $\tau$ and, for each $\tau$, they are $\mathcal{F}_t$-adapted stochastic processes. The connection of the HJM equation to a stochastic PDE was introduced by Musiela [73] by parameterizing the maturity time $\tau$. Let $\tau = t + \xi$ with parameter $\xi \geq 0$ and define $r(t)(\xi) = f(t, t+\xi)$, $a(t)(\xi) = \alpha(t, t+\xi)$, and $b(t)(\xi) = \sigma(t, t+\xi)$. The function $r$ is called the forward curve. Then equation (8.56) can be written as

    r(t)(\xi) = f(t, t+\xi)
        = f(0, t+\xi) + \int_0^t \alpha(s, t+\xi)\,ds + \int_0^t (\sigma(s, t+\xi), dW_s)
        = r(0)(t+\xi) + \int_0^t a(s)(t-s+\xi)\,ds + \int_0^t (b(s)(t-s+\xi), dW_s),

which can be formally written in differential form as the stochastic transport equation

    dr(t)(\xi) = \Big[\frac{\partial}{\partial\xi} r(t)(\xi) + a(t)(\xi)\Big]\,dt + (b(t)(\xi), dW_t).      (8.57)

To obtain a certain martingale property in the model, impose the following HJM condition [40]:

    \alpha(t,\tau) = \Big(\sigma(t,\tau), \int_t^\tau \sigma(t,s)\,ds\Big), \quad t \leq \tau \leq T,

where $(\cdot,\cdot)$ denotes the inner product in $R^n$. Then equation (8.57) yields

    dr(t)(\xi) = \Big[\frac{\partial}{\partial\xi} r(t)(\xi) + \Big(b(t)(\xi), \int_0^\xi b(t)(\eta)\,d\eta\Big)\Big]\,dt
        + (b(t)(\xi), dW_t).                                           (8.58)

Now assume that the volatility $b$ depends on the forward curve $r$ in the form $b(t)(\xi) = G(t, r(t)(\xi))$. By substituting this relation into equation (8.58), we obtain

    dr(t)(\xi) = \Big[\frac{\partial}{\partial\xi} r(t)(\xi) + F(t, r(t)(\xi))\Big]\,dt
        + (G(t, r(t)(\xi)), dW_t),                                     (8.59)

where

    F(t, r(t)(\xi)) = \Big(G(t, r(t)(\xi)), \int_0^\xi G(t, r(t)(\eta))\,d\eta\Big).

In contrast with equation (8.57), the equation (8.59) is a nonlinear stochastic transport equation. The detailed derivation of this equation in a more general setting of Lévy noise can be found in the book by Peszat and Zabczyk [78]. The authors also proved that, under some suitable conditions on the function $G$, the HJMM equation has a unique mild solution in the space $H = L^2([0,\infty), e^{-\lambda\xi}d\xi)$ for some $\lambda > 0$.

Remark: Note that the equation (8.59) is a first-order semilinear stochastic PDE considered in Chapter 2. If $G(t,r)$ is differentiable with respect to $r$, then, by Lemma 2-2.2, it can be rewritten in the Stratonovich form

    dr(t)(\xi) = \Big[\frac{\partial}{\partial\xi} r(t)(\xi) + \tilde F(t, r(t)(\xi))\Big]\,dt + (G(t, r(t)(\xi)), \circ\,dW_t),

where $\tilde F$ is equal to $F$ minus the Itô correction term. Then, under suitable differentiability conditions, we can apply Theorem 2-4.2 to assert the existence of a unique continuous local solution in a Hölder space $C^{m,\delta}([0,\infty))$.
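The transport term $\frac{\partial}{\partial\xi}r$ in the Musiela parametrization is simply a left shift of the forward curve: with $a = 0$ and $b = 0$, equation (8.57) gives $r(t)(\xi) = r(0)(t+\xi)$. A first-order upwind scheme (an illustrative Python sketch, with a hypothetical initial curve) reproduces this to within discretization error:

```python
import numpy as np

# Upwind finite differences for dr/dt = dr/dxi (Musiela equation with zero
# drift and volatility); exact solution is the shift r(t)(xi) = r(0)(t + xi).
nx, dxi, dt, T = 400, 0.05, 0.01, 1.0
xi = np.arange(nx) * dxi
r0 = lambda s: 0.02 + 0.01*np.exp(-s)        # hypothetical initial forward curve
r = r0(xi)
for k in range(int(round(T/dt))):
    r[:-1] += dt * (r[1:] - r[:-1]) / dxi    # upwind difference (CFL = dt/dxi = 0.2)
    r[-1] = r0((k + 1)*dt + xi[-1])          # exact inflow at the far end
err = np.max(np.abs(r[:200] - r0(T + xi[:200])))
print(err)   # small first-order discretization error
```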
9
Diffusion Equations in Infinite Dimensions

9.1 Introduction

In finite dimensions, it is well known that the solution of an Itô equation is a diffusion process and the expectation of a smooth function of the solution process satisfies a diffusion equation in $R^d$, known as the Kolmogorov equation. Therefore it is quite natural to explore such a relationship for stochastic partial differential equations. For example, consider the randomly perturbed heat equation (3.34). By means of the eigenfunction expansion, the solution is given by $u(\cdot,t) = \sum_{k=1}^\infty u_t^k e_k$, where $u_t^k$, $k = 1, 2, \ldots$, satisfy the infinite system of Itô equations (3.39) in $[0,T]$:

    du_t^k = -\mu_k u_t^k\,dt + \sigma_k\,dw_t^k, \quad u_0^k = h_k, \quad k = 1, 2, \ldots

Let $u_n(\cdot,t) = \sum_{k=1}^n u_t^k e_k$ be the $n$-term approximation of $u$. Then $u_n(\cdot,t)$ is an Ornstein–Uhlenbeck process, a time-homogeneous diffusion process in $R^n$. Let $(y_1, \ldots, y_n) \in R^n$. Then, for a smooth function $\varphi$ on $R^n$, the expectation function $\Phi(y_1, \ldots, y_n, t) = E\{\varphi(u_t^1, \ldots, u_t^n)\,|\,u_0^1 = y_1, \ldots, u_0^n = y_n\}$ satisfies the Kolmogorov equation:

    \frac{\partial\Phi}{\partial t} = \mathcal{A}_n\Phi, \quad \Phi(y_1, \ldots, y_n, 0) = \varphi(y_1, \ldots, y_n),      (9.1)

where $\mathcal{A}_n$ is the infinitesimal generator of the diffusion process $u_n(\cdot,t)$ given by

    \mathcal{A}_n = \frac12\sum_{k=1}^n \sigma_k^2 \frac{\partial^2}{\partial y_k^2} - \sum_{k=1}^n \mu_k y_k \frac{\partial}{\partial y_k}.

Formally, as $n \to \infty$, we get

    \mathcal{A} = \frac12\sum_{k=1}^\infty \sigma_k^2 \frac{\partial^2}{\partial y_k^2} - \sum_{k=1}^\infty \mu_k y_k \frac{\partial}{\partial y_k}.      (9.2)

The corresponding Kolmogorov equation reads

    \frac{\partial\Phi}{\partial t} = \mathcal{A}\Phi, \quad \Phi(y_1, y_2, \ldots, 0) = \varphi(y_1, y_2, \ldots).      (9.3)
However, the meaning of this infinite-dimensional differential operator A and


the associated Kolmogorov equation is not clear and needs to be clarified.
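Each coordinate above is a one-dimensional Ornstein–Uhlenbeck process, and the generator $\mathcal{A}_n$ predicts the stationary variance $\sigma_k^2/2\mu_k$ for the $k$-th mode (set $\mathcal{A}_n y_k^2 = 0$ in the mean). A quick Euler–Maruyama check (illustrative Python, sample values $\mu = 2$, $\sigma = 0.5$):

```python
import numpy as np

# Simulate dy = -mu*y*dt + sigma*dw over many paths and compare the empirical
# variance with the stationary value sigma^2/(2*mu) predicted by the generator.
rng = np.random.default_rng(0)
mu, sigma = 2.0, 0.5
dt, steps, n_paths = 1e-3, 5000, 20000       # T = 5 >> relaxation time 1/(2*mu)
y = np.zeros(n_paths)
for _ in range(steps):
    y += -mu*y*dt + sigma*np.sqrt(dt)*rng.standard_normal(n_paths)
print(y.var(), sigma**2/(2*mu))   # empirical vs. stationary variance
```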


The early work on the connection between a diffusion process in a Hilbert
space and the infinite-dimensional parabolic and elliptic equations were done
by Baklan [4] and Daleskii [20]. More refined and in-depth studies of such
problems in an abstract Wiener space were initiated by Gross [37] and carried
on by Piech [79], Kuo [56] and some of his former students. Later, in the devel-
opment of the Malliavin calculus, further progress had been made in the area
of analysis in Wiener spaces [95], in particular, the WienerSobolev spaces.
More recently, this subject has been studied systematically in the books [23],
[65] among others.
For the analysis in Wiener spaces, when k = k = 1, k, A will play
the role of the infinite-dimensional Laplacian (or LaplaceBeltrami operator,
and (A) is known as a number operator. In contrast, there are two technical
problems in dealing with the diffusion equation associated with the stochastic
partial differential equations. First, the differential operator A has singular
coefficients. This can be seen from the right-hand side of equation (9.2) in
which the eigenvalues k , while k 0 as k . Second, it is well
known that, in infinite dimension, there exists no analog of Lebesgue measure,
a consequence of the basic fact that a closed unit ball in an infinite-dimensional
space is non-compact. To replace the Lebesgue measure, the Gaussian measure
is a natural choice. But a simple Gaussian measure may not be compatible
with the differential structure of the equation. For a class of problems to
be considered later, by introducing a Sobolev space based on an Gaussian
invariant measure for the linearized stochastic partial differential equation of
interest, the two technical problems raised above can be resolved as first shown
in the paper [12].
This chapter is organized as follows. In Section 9.2, we show that the solutions of stochastic evolution equations are Markov diffusion processes. The properties of the associated semigroups and the Kolmogorov equations are studied in a classical setting. To facilitate an $L^2$ theory, we introduce the Gauss–Sobolev spaces in Section 9.3 based on a reference Gaussian measure $\mu$. For a linear stochastic evolution equation, the associated Ornstein–Uhlenbeck semigroup in $L^2(\mu)$ is analyzed in Section 9.4 by means of Hermite polynomial functionals. Then, by making use of this semigroup, the mild solution of a Kolmogorov equation is considered in Section 9.5. Also the strong solutions of the parabolic and elliptic equations are studied in a Gauss–Sobolev space. The final section is concerned with the Hopf equations for some simple stochastic evolution equations. Similar to a Kolmogorov equation, they are functional differential equations which govern the characteristic functionals of the solution process.

9.2 Diffusion Processes and Kolmogorov Equations

Consider the autonomous stochastic evolution equation as in Chapter 7:

    du_t = Au_t\,dt + F(u_t)\,dt + \Sigma(u_t)\,dW_t, \quad t \geq 0,
    u_0 = \xi,                                                         (9.4)

where $A: V \to V'$, $F: H \to H$, $\Sigma: V \to L_R^2$, $W_t$ is an $R$-Wiener process in $K$, and $\xi$ is an $\mathcal{F}_0$-measurable random variable in $H$. Throughout this chapter, $R$ is assumed to be of trace class and strictly positive definite. Suppose that the following conditions G hold:

(G.1) $A: V \to V'$ is a closed linear operator with domain $\mathcal{D}(A)$ dense in $V$, and there exists a constant $\alpha > 0$ such that $|\langle Av, v\rangle| \leq \alpha\|v\|_V^2$, $\forall v \in V$.

(G.2) There exist positive constants $C_1, C_2 > 0$ such that

    \|F(v)\|^2 + \|\Sigma(v)\|_R^2 \leq C_1(1 + \|v\|_V^2),
    \|F(v) - F(v')\|^2 + \frac12\|\Sigma(v) - \Sigma(v')\|_R^2 \leq C_2\|v - v'\|_V^2, \quad \forall v, v' \in V.

(G.3) There exist constants $\beta > 0$ and $\lambda$ such that

    \langle Av, v\rangle + (F(v), v) + \frac12\|\Sigma(v)\|_R^2 \leq -\beta\|v\|_V^2 + \lambda\|v\|^2, \quad \forall v \in V.

(G.4) There exists a constant $\lambda \geq 0$ such that

    \langle A(v-v'), v-v'\rangle + (F(v) - F(v'), v-v') + \frac12\|\Sigma(v) - \Sigma(v')\|_R^2
        \leq \lambda\|v - v'\|^2, \quad \forall v, v' \in V.

(G.5) $W_t$ is an $R$-Wiener process in $K$ with $\mathrm{Tr}\,R < \infty$. Moreover, $R$ is strictly positive definite.

Theorem 2.1 Under conditions (G.1)–(G.5), the (strong) solution $u_t$ of equation (9.4) is a time-homogeneous Markov process and it depends continuously on the initial data in mean-square. Moreover, for $g, h \in H$, let $u_t^g$ and $u_t^h$ denote the solutions with $\xi = g$ and $\xi = h$, respectively. Then there exists $b > 0$ such that

    E\|u_t^g - u_s^h\|^2 \leq b(|t - s| + \|g - h\|^2),                (9.5)

for any $s, t \in [0,\infty)$.

Proof. The conditions (G.1)–(G.5) are sufficient for Theorem 6-7.4 to hold. Hence, the problem (9.4) has a unique strong solution $u \in L^2(\Omega; C([0,T]; H)) \cap L^2(\Omega_T; V)$ such that the energy equation holds:

    \|u_t\|^2 = \|\xi\|^2 + 2\int_0^t \langle Au_s, u_s\rangle\,ds + 2\int_0^t (F(u_s), u_s)\,ds
        + 2\int_0^t (u_s, \Sigma(u_s)\,dW_s) + \int_0^t \|\Sigma(u_s)\|_R^2\,ds.      (9.6)

Similar to Itô's equation in $R^n$, the fact that $u_t$ is a homogeneous Markov diffusion process is easy to show. We will verify only the inequality (9.5). Clearly we have, for $s < t$,

    E\|u_t^g - u_s^h\|^2 \leq 2\{E\|u_t^g - u_s^g\|^2 + E\|u_s^g - u_s^h\|^2\}.      (9.7)

It follows from (9.4), (9.6), and condition (G.3) that

    E\|u_t^g - u_s^g\|^2 = 2E\int_s^t \{\langle Au_r^g, u_r^g\rangle + (F(u_r^g), u_r^g)
        + \frac12\|\Sigma(u_r^g)\|_R^2\}\,dr \leq 2|\lambda|\int_s^t E\|u_r^g\|^2\,dr.      (9.8)

In the proof of Theorem 6-7.4, we found that

    E\sup_{0\leq t\leq T}\|u_t^g\|^2 \leq K_1,

for some constant $K_1 > 0$, so that (9.8) yields

    E\|u_t^g - u_s^g\|^2 \leq 2|\lambda|K_1|t - s|.                    (9.9)

Similarly, by (9.4), (9.6), and condition (G.4), we have

    E\|u_s^g - u_s^h\|^2 = \|g - h\|^2 + 2E\int_0^s \{\langle A(u_r^g - u_r^h), u_r^g - u_r^h\rangle
        + (F(u_r^g) - F(u_r^h), u_r^g - u_r^h) + \frac12\|\Sigma(u_r^g) - \Sigma(u_r^h)\|_R^2\}\,dr
    \leq \|g - h\|^2 + 2\lambda\int_0^s E\|u_r^g - u_r^h\|^2\,dr,

which, by the Gronwall inequality, implies that

    E\|u_s^g - u_s^h\|^2 \leq K_2\|g - h\|^2,                          (9.10)

for some $K_2 > 0$. Now, in view of (9.7), (9.9), and (9.10), the desired inequality (9.5) follows.

As in Section 7.5, let $P(h, t, B)$, $t \geq 0$, $B \in \mathcal{B}(H)$, be the transition probability function for the time-homogeneous Markov process $u_t$, and let $P_t$ denote the corresponding transition operator. With the aid of this theorem, we can show the following.

Lemma 2.2 The transition probability $P(h, t, \cdot)$ for the solution process has the Feller property and the associated transition operator $P_t: X = C_b(H) \to C_b(H)$ satisfies the following strongly continuous semigroup properties:

(1) $P_0 = I$ ($I$ denotes the identity operator on $X$).

(2) $P_{t+s} = P_t P_s$, for every $s, t \geq 0$ (semigroup property).

(3) $\lim_{t\downarrow 0} P_t\Phi = \Phi$, for any $\Phi \in X$ (strong continuity).

The semigroup $\{P_t, t \geq 0\}$ given above is known as a Markov semigroup. Recall the (strong) Itô functionals introduced in Section 7.2 in connection with a generalized Itô formula. For the sets of time-dependent and time-independent Itô functionals on $H$, let $C_b^2(H)$ denote the subset of Itô functionals $\Phi$ on $C(H)$ such that the Fréchet derivatives $\Phi': V \to V$ and $\Phi'': V \to L(H)$ are bounded. Similarly, $C_b^{2,1}(H)$ denotes the subset of Itô functionals in $C(H \times [0,T])$ such that $\Phi(\cdot,t) \in C_b^2(H)$ and $\partial_t\Phi(\cdot,t)$ is bounded in $V \times [0,T]$. Rename the second-order differential operator $\mathcal{L}$ in (7.8) as $\mathcal{A}$, given by

    \mathcal{A}\Phi = \frac12\mathrm{Tr}[\Sigma(v)R\Sigma^*(v)\Phi''(v)] + \langle Av + F(v), \Phi'(v)\rangle,      (9.11)

for $\Phi \in C_b^2(H)$ and $v \in V$.
Now consider the expectation functional
(v, t) = E (uvt ), v V, (9.12)
where uvt denotes the solution of equation (9.4) with uv0 = v. To derive the
Kolmogorov equation for , we apply the It o formula for a strong solution to
get, for t (0, T ),
Z t Z t
v v
(ut ) = (v) + A(us )ds + ( (uvs ), (uvs )dWs ). (9.13)
0 0

If we can show that
$$E\,\mathcal{A}\Phi(u^v_t) = \mathcal{A}\Psi(v,t), \quad v \in V, \eqno(9.14)$$
then, by taking the expectation of (9.13), one would easily arrive at a diffusion equation for $\Psi$. To show (9.14), we have to impose additional smoothness assumptions on the coefficients as follows.

(H.1) $A: V \to V'$ is a self-adjoint linear operator with domain $\mathcal{D}(A)$ dense in $V$, and there exist constants $\alpha, \beta > 0$, and $\gamma$ such that $|\langle Au, v\rangle| \le \beta\,\|u\|_V\,\|v\|_V$, and
$$\langle Av, v\rangle \le \gamma\,\|v\|^2 - \alpha\,\|v\|_V^2, \quad u, v \in V.$$
(H.2) $F: H \to V$ and $\sigma: H \to \mathcal{L}(K, V)$ are such that
$$\|F(v)\|_V^2 + \|\sigma(v)\|_{V,R}^2 \le c\,(1 + \|v\|_V^2), \quad v \in V,$$
for some constant $c > 0$, where $\|\sigma(v)\|_{V,R}^2 = Tr\,[\sigma(v)\,R\,\sigma^*(v)]_V$ and $Tr\,[\cdot]_V$ denotes the trace in $V$.

(H.3) There is $\kappa > 0$ such that
$$\|F(u) - F(v)\|_V^2 + \frac{1}{2}\,\|\sigma(u) - \sigma(v)\|_{V,R}^2 \le \kappa\,\|u - v\|_V^2, \quad u, v \in V.$$

Then the corresponding solution is also smoother, and the following lemma holds.

Lemma 2.3 Let conditions (G) and (H.1)–(H.3) be satisfied. Then, if $u_0 = v \in V$, the solution $u^v_t$ of the problem (9.4) belongs to $L^2(\Omega; C([0,T]; V)) \cap L^2(\Omega \times [0,T]; \mathcal{D}(A))$.

Proof. We will sketch the proof. Let $B = (-A)^{1/2}$ be the square-root operator. Then it is known that $B: V \to H$ is also a positive self-adjoint operator [79]. Let $P_n: H \to V_n$ and $\Pi_n: V \to V_n$ be the projection operators introduced in the proof of Theorem 6-7.5. Then $B_n = B\,\Pi_n: V \to V$ is bounded, with $B_n A v = A B_n v$ for $v \in V$. Notice that $\Pi_n h = P_n h$ for $h \in H$. Apply $B_n$ to equation (9.4) to get
$$u^n_t = v^n + \int_0^t [\,A u^n_s + B_n F(u_s)\,]\,ds + \int_0^t B_n \sigma(u_s)\,dW_s, \eqno(9.15)$$
where we set $u^n_t = B_n u_t$ and $v^n = B_n v$. Now consider the linear equation:
$$\zeta_t = Bv + \int_0^t [\,A \zeta_s + B F(u_s)\,]\,ds + \int_0^t B \sigma(u_s)\,dW_s. \eqno(9.16)$$
By conditions (G) and (H), we can apply Theorem 6-7.4 to assert that the above equation has a unique strong solution $\zeta \in L^2(\Omega; C([0,T]; H)) \cap L^2(\Omega \times [0,T]; V)$. From (9.15) and (9.16) it follows that $\eta^n_t = (\zeta_t - u^n_t)$ satisfies
$$\eta^n_t = (I - P_n)Bv + \int_0^t [\,A \eta^n_s + (I - P_n)B F(u_s)\,]\,ds + \int_0^t (I - P_n)B \sigma(u_s)\,dW_s. \eqno(9.17)$$
By making use of the energy equation for (9.17), it can be shown that
$$E\,\Big\{ \sup_{0\le t\le T} \|\zeta_t - u^n_t\|^2 + \int_0^T \|\zeta_t - u^n_t\|_V^2\,dt \Big\} \to 0,$$
so that $u^n_t \to B u_t = \zeta_t$ in mean-square, uniformly in $t$, as $n \to \infty$. Since $B$ is a closed linear operator and the $V$-norm $\|\cdot\|_V$ is equivalent to the $B$-norm $\|B\,\cdot\|$, the conclusion of the lemma follows. $\Box$

With the aid of the above lemma, we can show that the expectation functional satisfies a diffusion equation in $V$.

Theorem 2.4 Let the conditions in Lemma 2.3 hold true. Suppose that $\Phi \in \mathbf{C}^2_b(H)$ is an Itô functional on $H$ satisfying the following properties: there exist positive constants $b$ and $c$ such that
$$\|\Phi'(v)\|_V + \|\Phi''(v)\|_{\mathcal{L}} \le b, \eqno(9.18)$$
$$\|\Phi'(u) - \Phi'(v)\|_V + \|\Phi''(u) - \Phi''(v)\|_{\mathcal{L}} \le c\,\|u - v\|_V, \eqno(9.19)$$
for all $u, v \in V$. Then $\mathcal{A}$ defined by (9.11) is the infinitesimal generator of the Markov semigroup $P_t$, and the expectation functional $\Psi$ defined by (9.12) satisfies the Kolmogorov equation:
$$\frac{\partial}{\partial t}\Psi(v,t) = \mathcal{A}\Psi(v,t), \quad v \in V; \qquad \Psi(v,0) = \Phi(v), \quad v \in H. \eqno(9.20)$$
Moreover, the expectation functional $\Psi(v,t)$ given by (9.12) is the unique solution of the equation (9.20).

Proof. We apply the Itô formula for a strong solution to get, for $t \in (0,T)$,
$$\Phi(u^v_t) = \Phi(v) + \int_0^t \mathcal{A}\Phi(u^v_s)\,ds + \int_0^t (\Phi'(u^v_s),\, \sigma(u^v_s)\,dW_s). \eqno(9.21)$$
After taking the expectation of the equation (9.21), we obtain
$$\Psi(v,t) = \Phi(v) + \int_0^t E\,\mathcal{A}\Phi(u^v_s)\,ds. \eqno(9.22)$$
Suppose that $E\,\{\mathcal{A}\Phi(u^v_s)\}$ is bounded. Then $\Psi(v,t)$ is differentiable in $t$ such that
$$\frac{\partial}{\partial t}\Psi(v,t) = E\,\mathcal{A}\Phi(u^v_t). \eqno(9.23)$$
To show this is the case, from equation (9.11), we get
$$|E\,\mathcal{A}\Phi(u^v_t)| \le \frac{1}{2}\,E\,Tr\,[\sigma(u^v_t)\,R\,\sigma^*(u^v_t)\,\Phi''(u^v_t)] + E\,|\langle A u^v_t,\, \Phi'(u^v_t)\rangle| + E\,|(F(u^v_t),\, \Phi'(u^v_t))| = J_1 + J_2 + J_3.$$
We shall estimate $J_1$, $J_2$, and $J_3$ separately as follows. By invoking conditions (G.1), (G.2), and (9.18), we can get
$$J_1 = \frac{1}{2}\,E\,Tr\,[\sigma(u^v_t)\,R\,\sigma^*(u^v_t)\,\Phi''(u^v_t)] \le \frac{1}{2}\,E\,\|\sigma(u^v_t)\|_R^2\,\|\Phi''(u^v_t)\|_{\mathcal{L}} \le C_1\,E\,\{1 + \|u^v_t\|_V^2\}$$
for some constant $C_1 > 0$,
$$J_2 = E\,|\langle A u^v_t,\, \Phi'(u^v_t)\rangle| \le C_2\,E\,\|u^v_t\|_V\,\|\Phi'(u^v_t)\|_V \le b\,C_2\,E\,\|u^v_t\|_V$$
with some positive constant $C_2$, and
$$J_3 = E\,|(F(u^v_t),\, \Phi'(u^v_t))| \le b\,E\,\|F(u^v_t)\| \le C_3\,E\,(1 + \|u^v_t\|^2)^{1/2},$$
for some $C_3 > 0$. In view of the above three inequalities, we can conclude that there exists a constant $C > 0$ such that
$$|E\,\mathcal{A}\Phi(u^v_t)| \le C\,E\,(1 + \|u^v_t\|_V^2) < \infty$$
for each $t \in (0,T)$. Hence $\Psi(v,t)$ is differentiable in $t$ and the equation (9.23) holds.
To verify the equation (9.14), let $G(v) = \Psi(v,t)$ and $E^v\,G(u_t) = E\,\{G(u_t)\,|\,u_0 = v\}$. Then, by the Markov property and (9.12), we have, for $s > 0$,
$$E^v G(u_s) = E^v \Psi(u_s, t) = E^v[\,E^{u_s}\Phi(u_t)\,] = E^v\{E^v[\,\Phi(u_{t+s})\,|\,\mathcal{F}_s\,]\} = E^v \Phi(u_{t+s}).$$
It follows that
$$\lim_{s\to 0}\frac{E^v G(u_s) - G(v)}{s} = \lim_{s\to 0}\frac{\Psi(v, t+s) - \Psi(v, t)}{s} = \frac{\partial}{\partial t}\Psi(v,t).$$
On the other hand, by definition, we have
$$\lim_{s\to 0}\frac{E^v G(u_s) - G(v)}{s} = \lim_{s\to 0}\frac{P_s G(v) - G(v)}{s} = \mathcal{A}\,\Psi(v,t),$$
and $\Psi(v,0) = \Phi(v)$. Hence, the equation (9.20) holds true.


For uniqueness, let (v, t) be another solution of equation (9.20). For a
fixed t, by applying the It
o formula, one obtains
Z s

(uvs , t s) = (v, t) + { (uvr , t r) + A (uvr , t r)}dr
Z s 0 t
+ ( (uvr , t r), (uvr )dWr )
0 Z s
= (v, t) + ( (uvr , t r), (uvr )dWr ).
0

By taking expectation and then setting s = t, the above equation yields


(v, t) = E [(uvt , 0)] = E (uvt ) = (v, t).
Diffusion Equations in Infinite Dimensions 279

Remarks:

(1) In finite dimensions, for $H = \mathbb{R}^n$, the infinitesimal generator $\mathcal{A}$ can be defined everywhere in $H$. In contrast, here in infinite dimensions, $\mathcal{A}$ is only defined on a subset $V$ of $H$. As will be seen, even though $V$ is dense in $H$, it may be a set of measure zero with respect to a reference measure on $(H, \mathcal{B}(H))$.

(2) For partial differential equations in $\mathbb{R}^n$, the technique of integration by parts, such as Green's formula, plays an important role in analysis. Without the standard Lebesgue measure in infinite dimensions, to develop an $L^p$ theory, it is paramount to choose a reference measure which is most suitable for this purpose. As we will show, in some special cases, an invariant measure for the associated stochastic evolution equation seems to be a natural choice.

(3) For the Markov semigroup $P_t$ on $C_b(H)$, clearly the set $\mathbf{C}^2_b(H)$ is in the domain $\mathcal{D}(\mathcal{A})$ of its generator $\mathcal{A}$. Equipped with a proper reference measure $\mu$, a semigroup theory can be developed in an $L^2(\mu)$-Sobolev space.

9.3 Gauss–Sobolev Spaces


In the study of parabolic partial differential equations in $\mathbb{R}^d$, the Lebesgue $L^p$-spaces play a prominent role. To develop an $L^p$ theory for infinite-dimensional parabolic equations, the Kolmogorov equation in particular, an obvious substitute for the Lebesgue measure is a Gaussian measure. For $p = 2$, such spaces will be called Gauss–Sobolev spaces, which will be introduced in what follows.
Let $H$ denote a real separable Hilbert space with inner product $(\cdot,\cdot)$ and norm $\|\cdot\|$ as before. Let $\mu = N(0, \Gamma)$ be a Gaussian measure on $(H, \mathcal{B}(H))$ with mean zero and nuclear covariance operator $\Gamma$. Since, by assumption, $\Gamma: H \to H$ is a positive-definite, self-adjoint operator with $Tr\,\Gamma < \infty$, its square-root operator $\Gamma^{1/2}$ is a positive self-adjoint Hilbert–Schmidt operator on $H$. Introduce the inner product
$$(g, h)_0 = (\Gamma^{-1/2} g,\, \Gamma^{-1/2} h), \quad \text{for } g, h \in \Gamma^{1/2} H. \eqno(9.24)$$
Let $H_0 \subset H$ denote the Hilbert subspace which is the completion of $\Gamma^{1/2} H$ with respect to the norm $\|g\|_0 = (g,g)_0^{1/2}$. Then $H_0$ is dense in $H$ and the inclusion map $i: H_0 \to H$ is compact. The triple $(i, H_0, H)$ forms an abstract Wiener space [57]. In this setting, one will be able to make use of several results in the Wiener functional analysis.
Let $\mathcal{H} = L^2(H, \mu)$ denote the Hilbert space of Borel-measurable functionals on the probability space $(H, \mathcal{B}(H), \mu)$ with inner product $[\cdot,\cdot]$ defined by
$$[\Phi, \Psi] = \int_H \Phi(v)\,\Psi(v)\,\mu(dv), \quad \text{for } \Phi, \Psi \in \mathcal{H}, \eqno(9.25)$$
and norm $|||\Phi||| = [\Phi, \Phi]^{1/2}$.
In $H$ we choose a complete orthonormal system $\{e_k\}$ as its basis. In particular, we will take the $e_k$'s to be the eigenfunctions of $\Gamma$. Introduce a class of simple functionals on $H$ as follows. A functional $\Phi: H \to \mathbb{R}$ is said to be a smooth simple functional, or a cylinder functional, if there exist a $C^\infty$-function $\varphi$ on $\mathbb{R}^n$ and $n$ continuous linear functionals $\ell_1, \cdots, \ell_n$ on $H$ such that
$$\Phi(h) = \varphi(h_1, \cdots, h_n), \eqno(9.26)$$
where $h_i = \ell_i(h)$, $i = 1, \cdots, n$. The set of all such functionals will be denoted by $\mathcal{S}(H)$. Obviously, if $\varphi$ is a polynomial, then the corresponding polynomial functional belongs to $\mathcal{S}(H)$. For the linear functionals, one may take $\ell_i(h) = (h, e_i)$. However, this is not a proper choice, as will be seen.
Now we are going to introduce the Hermite polynomials in $\mathbb{R}$. Let $p_k$ denote the normalized Hermite polynomial of degree $k$ defined by
$$p_k(x) = (-1)^k (k!)^{-1/2}\, e^{x^2/2}\, \frac{d^k}{dx^k}\, e^{-x^2/2}, \quad k = 1, 2, \cdots, \eqno(9.27)$$
with $p_0 = 1$. It is well known that $\{p_k\}$ is a complete orthonormal basis for $L^2(\mathbb{R}, (1/\sqrt{2\pi})\,e^{-x^2/2}\,dx)$. Denote $n = (n_1, n_2, \cdots, n_k, \cdots)$ with $n_k \in \mathbb{Z}_+$, the set of non-negative integers. Let $\mathcal{Z} = \{n = (n_1, n_2, \cdots, n_k, \cdots): n_k \in \mathbb{Z}_+,\ |n| = \sum_{j=1}^\infty n_j < \infty\}$, so that, in $n$, $n_k = 0$ except for a finite number of $n_k$'s. Define Hermite polynomial functionals, or simply Hermite functionals, on $H$ as
$$H_n(h) = \prod_{k=1}^\infty p_{n_k}[\,\ell_k(h)\,], \eqno(9.28)$$
for $n = (n_1, n_2, \cdots, n_k, \cdots) \in \mathcal{Z}$.

Clearly the Hermite functionals $H_n$ are smooth simple functionals. As in one dimension, they span the space $\mathcal{H}$ [58]. For the linear functional, we will denote $\ell_k(h) = (\Gamma^{-1/2} h, e_k)$, which appears to be defined only for $h \in H_0$. However, regarding $h$ as a random variable in $H$, we have $E\,\{\ell_k^2(h)\} = \|e_k\|^2 = 1$. Then $\ell_k(h)$ can be defined for a.e. $h \in H$, similar to defining a stochastic integral.
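The one-dimensional building blocks above can be experimented with directly. The following sketch (not from the book) constructs the probabilists' Hermite polynomials $He_k$, the unnormalized version of the $p_k$ in (9.27), from the three-term recurrence $He_{k+1}(x) = x\,He_k(x) - k\,He_{k-1}(x)$, and verifies by exact integer coefficient arithmetic the identity $He_k'' - x\,He_k' = -k\,He_k$, which is the relation $p_k''(x) - x\,p_k'(x) = -k\,p_k(x)$ used later for the generator computation (the $1/\sqrt{k!}$ normalization cancels from both sides).

```python
# Probabilists' Hermite polynomials as integer coefficient lists (lowest degree first).

def hermite(k):
    """Coefficient list of He_k, built from He_{k+1} = x*He_k - k*He_{k-1}."""
    a, b = [1], [0, 1]            # He_0 = 1, He_1 = x
    if k == 0:
        return a
    for j in range(1, k):
        nxt = [0] + b             # multiply He_j by x
        for i, c in enumerate(a):
            nxt[i] -= j * c       # subtract j * He_{j-1}
        a, b = b, nxt
    return b

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:] or [0]

def mul_x(p):
    return [0] + p

def sub(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p)); q = q + [0] * (n - len(q))
    return [x - y for x, y in zip(p, q)]

for k in range(1, 8):
    he = hermite(k)
    lhs = sub(deriv(deriv(he)), mul_x(deriv(he)))   # He_k'' - x He_k'
    rhs = [-k * c for c in he]                      # -k He_k
    diff = sub(lhs, rhs)
    assert diff == [0] * len(diff), k
print("identity He_k'' - x He_k' = -k He_k verified for k = 1..7")
```

For instance, `hermite(3)` returns `[0, -3, 0, 1]`, i.e. $He_3(x) = x^3 - 3x$.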

Lemma 3.1 Let $\ell_k(h) = (h, \Gamma^{-1/2} e_k)$, $k = 1, 2, \cdots$. Then the set $\{H_n\}$ of all Hermite functionals on $H$ forms a complete orthonormal system for $\mathcal{H} = L^2(H, \mu)$. Hence, the set of all such functionals is dense in $\mathcal{H}$. Moreover, we have the direct sum decomposition:
$$\mathcal{H} = K_0 \oplus K_1 \oplus \cdots \oplus K_j \oplus \cdots = \bigoplus_{j=0}^\infty K_j,$$
where $K_j$ is the subspace of $\mathcal{H}$ spanned by $\{H_n: |n| = j\}$.



Proof. As pointed out earlier, we may regard $\mu$ as a Wiener measure on the Wiener space $(i, H_0, H)$. Similar to the case of a Wiener process in finite dimensions, the Cameron–Martin space $H_0$ is known to be a set of $\mu$-measure zero. Notice that, if we let $\tilde e_k = \Gamma^{1/2} e_k$, $k = 1, 2, \cdots$, then
$$(\tilde e_j, \tilde e_k)_0 = (e_j, e_k) = \delta_{jk},$$
so that $\{\tilde e_k\}$ is a complete orthonormal system for $H_0$. Also we can check that
$$[\ell_j, \ell_k] = (\tilde e_j, \tilde e_k)_0, \quad j, k = 1, 2, \cdots.$$
Therefore, by appealing to the Wiener–Itô decomposition, we can follow a proof in the paper [80] or in (Prop. 1.2, [95]) to reach the desired conclusion. $\Box$

Let $\Phi$ be a smooth simple functional given by (9.26). Then the Fréchet derivatives $D\Phi = \Phi'$ and $D^2\Phi = \Phi''$ in $H$ can be computed as follows:
$$(D\Phi(h), v) = \sum_{k=1}^n [\,\partial_k \varphi(h_1, \cdots, h_n)\,]\,\ell_k(v),$$
$$(D^2\Phi(h)\,u, v) = \sum_{j,k=1}^n [\,\partial_j \partial_k \varphi(h_1, \cdots, h_n)\,]\,\ell_j(u)\,\ell_k(v), \eqno(9.29)$$
for any $u, v \in H$, where $\partial_k = \partial/\partial h_k$. Similarly, for $m > 2$, $D^m\Phi(h)$ is an $m$-linear form on $(H)^m = (H \otimes \cdots \otimes H)$ ($m$-fold) with inner product $(\cdot,\cdot)_m$. We have $[D^m\Phi(h)](v_1, \cdots, v_m) = (D^m\Phi(h),\, v_1 \otimes \cdots \otimes v_m)_m$, for $h, v_1, \cdots, v_m \in H$.
Now, for $v \in H$, let $T_v: H \to H$ be the translation operator defined by
$$T_v h = v + h, \quad \text{for } h \in H.$$
Denote by $\mu_v = \mu \circ T_v^{-1}$ the measure induced by $T_v$. Then the following is known as the Cameron–Martin formula [92].

Lemma 3.2 If $v \in H_0$, the measures $\mu$ and $\mu_v$ are equivalent ($\mu \sim \mu_v$), and the Radon–Nikodym derivative is given by
$$\frac{d\mu_v}{d\mu}(\omega) = \exp\Big\{(\Gamma^{-1/2} v,\, \Gamma^{-1/2} \omega) - \frac{1}{2}\|\Gamma^{-1/2} v\|^2\Big\}, \eqno(9.30)$$
for $\omega \in H$, $\mu$-a.s., where $\Gamma^{-1/2}$ denotes a pseudo-inverse.

Proof. Let $\gamma_k$ denote the eigenvalues of $\Gamma$ with eigenfunctions $e_k$, for $k = 1, 2, \cdots$. For $v \in H_0$, it can be expressed as
$$v = \sum_{k=1}^\infty v_k\, \tilde e_k,$$
where $\tilde e_k = \Gamma^{1/2} e_k$ and $v_k = (v, \tilde e_k)_0$ is such that
$$\|v\|_0^2 = \|\Gamma^{-1/2} v\|^2 = \sum_{k=1}^\infty v_k^2 < \infty.$$
Let $v^n$ be the $n$-term approximation of $v$:
$$v^n = \sum_{k=1}^n v_k\, \tilde e_k,$$
and define
$$h^n = \Gamma^{-1} v^n = \sum_{k=1}^n \gamma_k^{-1/2}\, v_k\, e_k.$$
Let us define a sequence of measures $\mu_n$ by
$$\mu_n(d\omega) = \exp\Big\{(\omega, h^n) - \frac{1}{2}(\Gamma h^n, h^n)\Big\}\,\mu(d\omega).$$
Since $\omega \sim N(0, \Gamma)$ is a centered Gaussian random variable in $H$, for positive integers $m, n$, we have
$$E\,|(\omega, h^m) - (\omega, h^n)|^2 = E\,|(\omega, h^m - h^n)|^2 = (\Gamma(h^m - h^n),\, h^m - h^n) = \|\Gamma^{-1/2}(v^m - v^n)\|^2 = \|v^m - v^n\|_0^2 \to 0,$$
as $m, n \to \infty$. Therefore $(\omega, h^n)$ converges in $L^2(\mu)$ to a random variable $\zeta(\omega)$, which will be written as $(\Gamma^{-1/2} v, \Gamma^{-1/2} \omega)$. Also it can be shown that $(\Gamma h^n, h^n) \to (\Gamma^{-1/2} v, \Gamma^{-1/2} v) = \|\Gamma^{-1/2} v\|^2$. Since all random variables involved are Gaussian, we can deduce that
$$\lim_{n\to\infty} \exp\Big\{(\omega, h^n) - \frac{1}{2}(\Gamma h^n, h^n)\Big\} = \exp\Big\{(\Gamma^{-1/2} v, \Gamma^{-1/2} \omega) - \frac{1}{2}\|\Gamma^{-1/2} v\|^2\Big\}$$
in $L^1(H, \mu)$. It follows that $\mu_n$ converges weakly, and to show the limit is $\mu_v$, it is equivalent to verifying that they have a common characteristic functional. We know that the characteristic functional for $\mu_v$ is given by
$$\hat\mu_v(\xi) = \exp\Big\{i\,(v, \xi) - \frac{1}{2}(\Gamma\xi, \xi)\Big\}, \quad \xi \in H.$$
Let $\hat\mu_n(\xi)$ be the characteristic functional for $\mu_n$, which can be computed as follows:
$$\hat\mu_n(\xi) = \int_H \exp\Big\{i\,(\xi, \omega) + (\omega, h^n) - \frac{1}{2}(\Gamma h^n, h^n)\Big\}\,\mu(d\omega) = \exp\Big\{i\,(\Gamma h^n, \xi) - \frac{1}{2}(\Gamma\xi, \xi)\Big\},$$
which, since $\Gamma h^n = v^n \to v$, converges to the characteristic functional $\hat\mu_v(\xi)$ as $n \to \infty$. $\Box$

To develop an $L^2$-theory for parabolic or elliptic equations in infinite dimensions, we need an analog of Green's formula in finite dimensions. Therefore, it is necessary to perform an integration by parts with respect to the Gaussian measure $\mu$. With the aid of the above lemma, we are able to verify the following integration-by-parts formulas.

Theorem 3.3 Let $\Phi \in \mathcal{S}(H)$ be a smooth simple functional and let $\mu = N(0, \Gamma)$ be a Gaussian measure in $H$. Then, for any $g, h \in H$, the following formulas hold:
$$\int_H (\Gamma^{-1} v, h)\,\Phi(v)\,\mu(dv) = \int_H (D\Phi(v), h)\,\mu(dv), \eqno(9.31)$$
$$\int_H (\Gamma^{-1} v, g)\,(\Gamma^{-1} v, h)\,\Phi(v)\,\mu(dv) = \int_H \Phi(v)\,(\Gamma^{-1} g, h)\,\mu(dv) + \int_H (D^2\Phi(v)\,g, h)\,\mu(dv). \eqno(9.32)$$

Proof. We will only sketch a proof of (9.31); the formula (9.32) can be shown in a similar fashion.
For $u \in H_0$, we have, by Lemma 3.2,
$$\int_H \Phi(v + u)\,\mu(dv) = \int_H \Phi(v)\,\mu_u(dv) = \int_H \Phi(v)\,Z_u(v)\,\mu(dv), \eqno(9.33)$$
where we let $Z_u(v) = \frac{d\mu_u}{d\mu}(v)$. Therefore, by letting $h = \Gamma g$ and making use of (9.30) and (9.33), we can verify (9.31) by the following computations:
$$\int_H (D\Phi(v), h)\,\mu(dv) = \int_H (D\Phi(v), \Gamma g)\,\mu(dv) = \int_H \frac{\partial}{\partial\lambda}\Phi(v + \lambda\Gamma g)\Big|_{\lambda=0}\,\mu(dv) = \frac{\partial}{\partial\lambda}\int_H \Phi(v + \lambda\Gamma g)\,\mu(dv)\Big|_{\lambda=0}$$
$$= \int_H \Phi(v)\,\frac{\partial}{\partial\lambda} Z_{\lambda\Gamma g}(v)\Big|_{\lambda=0}\,\mu(dv) = \int_H (\Gamma^{1/2} g, \Gamma^{-1/2} v)\,\Phi(v)\,\mu(dv) = \int_H (\Gamma^{-1} v, h)\,\Phi(v)\,\mu(dv),$$
where, since $\Phi$ is smooth, the interchanges of integration and differentiation with respect to $\lambda$ can be justified by the dominated convergence theorem. $\Box$
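In one dimension the formula (9.31) can be checked by hand. The following sketch (illustrative, not from the book) takes $H = \mathbb{R}$ and $\Gamma = 1$, so that (9.31) reads $E[v\,\Phi(v)] = E[\Phi'(v)]$ for $v \sim N(0,1)$. For a polynomial $\Phi$ both sides reduce to Gaussian moments $E[v^m]$, which equal $(m-1)!!$ for even $m$ and $0$ for odd $m$, so the identity can be verified exactly in integer arithmetic.

```python
# Exact check of the 1D Gaussian integration-by-parts identity via moments.

def gauss_moment(m):
    """E[v^m] for v ~ N(0,1): the double factorial (m-1)!! if m is even, else 0."""
    if m % 2 == 1:
        return 0
    r = 1
    for j in range(1, m, 2):
        r *= j
    return r

def expect(poly):
    """E[poly(v)] for a polynomial given as a coefficient list, lowest degree first."""
    return sum(c * gauss_moment(m) for m, c in enumerate(poly))

phi = [3, 0, -2, 5, 1]                    # Phi(v) = 3 - 2v^2 + 5v^3 + v^4 (hypothetical example)
lhs = expect([0] + phi)                   # E[v * Phi(v)]
dphi = [m * c for m, c in enumerate(phi)][1:]
rhs = expect(dphi)                        # E[Phi'(v)]
assert lhs == rhs
print("E[v*Phi(v)] =", lhs, "= E[Phi'(v)]")   # both sides equal 15
```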

Remark: In view of the proof, the integration-by-parts formulas (9.31) and (9.32) can actually be shown to hold for $\Phi \in \mathbf{C}^2_b(H)$ as well [23].

As a generalization of the Sobolev space $H^k(D)$ with $D \subset \mathbb{R}^d$, we now introduce Gauss–Sobolev spaces. In the previous chapters, we saw that $H^k(D)$ can be defined either by the $L^2$-integrability of the function and its weak derivatives up to order $k$, or by the boundedness of its Fourier series or integral via the spectral decomposition. This is also true for Wiener–Sobolev spaces, due to a theorem of Meyer [71]. Here, in the spirit of Chapter 3, we will introduce the $L^2(\mu)$-Sobolev spaces based on a Fourier series representation. By Lemma 3.1, any $\Phi \in \mathcal{H}$ can be represented as
$$\Phi(v) = \sum_{n} \phi_n\, H_n(v), \eqno(9.34)$$
where $\phi_n = [\Phi, H_n]$ and the sum is over $n \in \mathcal{Z}$. Let $\lambda = \{\lambda_n\}$ be a sequence of positive numbers such that $\lambda_n \to \infty$ as $|n| \to \infty$. Define
$$|||\Phi|||_{k,\lambda} = \Big\{\sum_n (1 + \lambda_n)^k\, |\phi_n|^2\Big\}^{1/2}, \quad k = 1, 2, \cdots,$$
$$|||\Phi|||_{0,\lambda} = |||\Phi||| = \Big\{\sum_n |\phi_n|^2\Big\}^{1/2}, \eqno(9.35)$$
which is the $L^2(\mu)$-norm of $\Phi$. For the given sequence $\lambda = \{\lambda_n\}$, let $\mathcal{H}_{k,\lambda}$ denote the completion of $\mathcal{S}(H)$ with respect to the norm $|||\cdot|||_{k,\lambda}$. Then $\mathcal{H}_{k,\lambda}$ is called a Gauss–Sobolev space of order $k$ with parameter $\lambda$. The dual space of $\mathcal{H}_{k,\lambda}$ is denoted by $\mathcal{H}_{-k,\lambda}$, with norm $|||\cdot|||_{-k,\lambda}$.
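In coefficient space the norms (9.35) are just weighted $\ell^2$ norms of the Hermite coefficients. The following sketch (hypothetical data, not from the book) computes $|||\Phi|||_{k,\lambda}$ for a functional with three modes; since $\lambda_n \ge 0$, a larger order $k$ weights high modes more heavily, so the norms increase with $k$.

```python
# Gauss-Sobolev norms from Hermite coefficients, as in (9.35).
import math

def gs_norm(coeffs, lambdas, k):
    """|||Phi|||_{k,lambda} = sqrt( sum_n (1 + lambda_n)^k |phi_n|^2 )."""
    return math.sqrt(sum((1.0 + lambdas[n]) ** k * c * c
                         for n, c in coeffs.items()))

coeffs  = {0: 1.0, 1: 0.5, 2: 0.25}      # phi_n (hypothetical)
lambdas = {0: 0.0, 1: 1.0, 2: 4.0}       # lambda_n, increasing

norms = [gs_norm(coeffs, lambdas, k) for k in (0, 1, 2)]
assert norms[0] <= norms[1] <= norms[2]   # the scale of spaces is nested
print([round(x, 6) for x in norms])
```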

Remarks:

(1) In later applications, the sequence $\{\lambda_n\}$ will be fixed and we shall simply denote $\mathcal{H}_{k,\lambda}$ by $\mathcal{H}_k$.

(2) If we choose the sequence $\{\lambda_n\}$ to be a sequence of positive integers, then the $\mathcal{H}_k$'s coincide with the $L^2(\mu)$-Sobolev spaces with the Wiener measure $\mu$.

9.4 Ornstein–Uhlenbeck Semigroup


For simplicity, we shall be concerned mainly with the Markov semigroup associated with the linear stochastic equation:
$$du_t = A u_t\,dt + dW_t, \quad t > 0; \qquad u_0 = h \in H. \eqno(9.36)$$

Here we assume that $A: V \to V'$ is a self-adjoint, strictly negative operator satisfying conditions (B.1)–(B.3) in Section 6.7, and $W_t$ is an $R$-Wiener process in $H$. Then we know that the solution is given by
$$u^h_t = G_t h + \int_0^t G_{t-s}\,dW_s, \eqno(9.37)$$
where $G_t$ is the associated Green's operator; it is a linear operator in the semigroup generated by $A$. The distribution of the solution $u^h_t$ is a Gaussian measure $\mu^h_t$ in $H$ with mean $G_t h$ and covariance operator
$$\Gamma_t = \int_0^t G_s\, R\, G_s\,ds.$$
By Theorem 7-5.4, there exists a unique invariant measure $\mu$ for the equation (9.36), which is a centered Gaussian measure in $H$ with covariance operator given by
$$\Gamma = \int_0^\infty G_t\, R\, G_t\,dt. \eqno(9.38)$$

By Theorem 2.1, the solution of (9.36) is a time-homogeneous Markov process with transition operator $P_t$. For $\Phi \in \mathcal{H}$, we have
$$(P_t\Phi)(h) = \int_H \Phi(v)\,\mu^h_t(dv) = E\,\Phi(u^h_t). \eqno(9.39)$$
Suppose that $(-A)$ and $R$ have the same set of eigenfunctions $\{e_k\}$, with positive eigenvalues $\{\mu_k\}$ and $\{\rho_k\}$, respectively. Then $R$ commutes with $G_t$, and the covariance operator given by (9.38) can be evaluated to give
$$\Gamma v = \int_0^\infty R\, G_t^2\, v\,dt = \frac{1}{2}\, R(-A)^{-1} v, \quad v \in H.$$
Notice that, if $ARv = RAv$ for $v \in \mathcal{D}(A)$, then $A$ and $R$ have a common set of eigenfunctions. Therefore the following lemma holds.
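The diagonal structure of $\Gamma$ is easy to exhibit numerically. The following sketch (assumed spectra, not from the book) takes common eigenfunctions $e_k$ with $\mu_k = k^2$ (as for $A = d^2/dx^2$ on $(0,\pi)$ with Dirichlet conditions) and $\rho_k = 1$; then $\Gamma$ has eigenvalues $\gamma_k = \rho_k/(2\mu_k)$, and its trace $\sum_k 1/(2k^2) = \pi^2/12$ is finite, so $\Gamma$ is nuclear and $\mu = N(0,\Gamma)$ is a genuine Gaussian measure on $H$.

```python
# Eigenvalues and trace of the invariant covariance Gamma = (1/2) R (-A)^{-1}.
import math

K = 100000
mu_eigs  = [k * k for k in range(1, K + 1)]   # eigenvalues mu_k of (-A) (assumed)
rho_eigs = [1.0] * K                          # eigenvalues rho_k of R (assumed)

gamma = [r / (2.0 * m) for r, m in zip(rho_eigs, mu_eigs)]   # eigenvalues of Gamma
trace = sum(gamma)

# Tr Gamma = (1/2) * zeta(2) = pi^2 / 12, up to the truncation error ~ 1/(2K)
assert abs(trace - math.pi ** 2 / 12) < 1e-4
print("Tr Gamma ~", round(trace, 6), "vs pi^2/12 =", round(math.pi ** 2 / 12, 6))
```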

Lemma 4.1 Suppose that $A$ and $R$ satisfy the following:

(1) $A: V \to V'$ is self-adjoint and there is $\alpha > 0$ such that
$$\langle Av, v\rangle \le -\alpha\,\|v\|_V^2, \quad v \in V.$$

(2) $A$ commutes with $R$ in the domain of $A$.

Then the equation (9.36) has a unique invariant measure $\mu$, which is a Gaussian measure on $H$ with zero mean and covariance operator $\Gamma = \frac{1}{2} R(-A)^{-1} = \frac{1}{2}(-A)^{-1} R$.

Remarks: In what follows, this invariant measure $\mu$ will be used for the Gauss–Sobolev spaces. Even though the assumption on the commutativity of $A$ and $R$ is not essential, it simplifies the subsequent analysis. Besides, as a consequence, the explicit form of $\Gamma$ can be obtained. This is necessary for using $\mu$ as a reference measure. Notice that, by condition (1), the eigenvalues of $(-A)$ are strictly positive: $\mu_k \ge \delta$ for some $\delta > 0$. Let the $e_k$'s denote the corresponding eigenfunctions.

Let $\Phi \in \mathcal{S}(H)$ be a smooth simple functional. By setting $\ell_k(h) = (h, \Gamma^{-1/2} e_k)$ in (9.26), it takes the form
$$\Phi(h) = \varphi(\ell_1(h), \cdots, \ell_n(h)).$$
Define a differential operator $\mathcal{A}_0$ on $\mathcal{S}(H)$ by
$$\mathcal{A}_0\Phi(v) = \frac{1}{2}\,Tr\,[R\,D^2\Phi(v)] + \langle Av,\, D\Phi(v)\rangle, \quad v \in H, \eqno(9.40)$$
which is well defined, since $D\Phi \in \mathcal{D}(A)$ and $\langle Av, D\Phi(v)\rangle = (v, A\,D\Phi(v))$.

Lemma 4.2 Let $P_t$ be the transition operator as defined by (9.39). Then the following properties hold:

(1) $P_t: \mathcal{S}(H) \to \mathcal{S}(H)$ for $t \ge 0$.

(2) $\{P_t,\, t \ge 0\}$ is a strongly continuous semigroup on $\mathcal{S}(H)$, so that, for any $\Phi \in \mathcal{S}(H)$, we have $P_0\Phi = \Phi$, $P_{t+s}\Phi = P_t P_s \Phi$ for all $t, s \ge 0$, and $\lim_{t\to 0} P_t\Phi = \Phi$.

(3) $\mathcal{A}_0$ is the infinitesimal generator of $P_t$, so that, for each $\Phi \in \mathcal{S}(H)$,
$$\lim_{t\to 0}\frac{1}{t}(P_t - I)\Phi = \mathcal{A}_0\Phi.$$

Proof. For $\Phi \in \mathcal{S}(H)$, we have
$$(P_t\Phi)(h) = E\,\varphi[\,\ell_1(u^h_t), \cdots, \ell_n(u^h_t)\,] = E\,\varphi[\,\ell_1(G_t h) + \ell_1(v_t), \cdots, \ell_n(G_t h) + \ell_n(v_t)\,], \eqno(9.41)$$
where $v_t = \int_0^t G_{t-s}\,dW_s \sim N(0, \Gamma_t)$ with $\Gamma_t = (1/2)(-A)^{-1} R\,(I - G_t^2)$. Notice that $\xi_i = \ell_i(v_t)$, $i = 1, \cdots, n$, are jointly Gaussian random variables with mean zero and covariance matrix $\Lambda_t = [\Lambda^{ij}_t]_{n\times n}$. By a translation $\xi_i \to \xi_i - \ell_i(h_t)$, where $h_t = G_t h$, and Lemma 3.2, we can rewrite (9.41) as
$$(P_t\Phi)(h) = \int_{\mathbb{R}^n} \varphi(x)\,\exp\Big\{(\Lambda_t^{-1} x,\, \ell(h_t)) - \frac{1}{2}|\Lambda_t^{-1/2}\,\ell(h_t)|^2\Big\}\,\nu^n_t(dx), \eqno(9.42)$$
where $x \in \mathbb{R}^n$, $\ell(h) = (\ell_1(h), \cdots, \ell_n(h))$ for $h \in H$, and $\nu^n_t$ is the joint Gaussian distribution of $(\xi_1, \cdots, \xi_n)$. Since $\varphi \in C^\infty(\mathbb{R}^n)$ is of polynomial growth, we can deduce from (9.42) that $(P_t\Phi) \in \mathcal{S}(H)$.
The semigroup properties (2) follow from the Markov property of the solution $u_t$, as shown in Lemma 2.2. To verify (3), let us first assume that $\Phi \in \mathcal{S}(H)$ is bounded. Then, by applying the Itô formula and then taking expectation, we obtain
$$E\,\{\Phi(u^v_t) - \Phi(v)\} = E \int_0^t \mathcal{A}_0\Phi(u^v_s)\,ds,$$
so that, by boundedness, continuity, and Lebesgue's theorem, we can show that
$$\lim_{t\to 0}\frac{1}{t}(P_t - I)\Phi(v) = \lim_{t\to 0}\frac{1}{t}\,E\,\{\Phi(u^v_t) - \Phi(v)\} = \lim_{t\to 0}\frac{1}{t}\,E \int_0^t \mathcal{A}_0\Phi(u^v_s)\,ds = \mathcal{A}_0\Phi(v).$$
For any $\Phi \in \mathcal{S}(H)$, we can still proceed as before with the aid of a localized stopping time $\tau_N$, replacing $t$ by $(t \wedge \tau_N)$ in the Itô formula, and then letting $N \to \infty$. $\Box$

Lemma 4.3 Let $H_n(h)$ be a Hermite polynomial functional given by (9.28). Then the following hold:
$$\mathcal{A}_0 H_n(h) = -\lambda_n\, H_n(h), \eqno(9.43)$$
and
$$P_t H_n(h) = \exp\{-\lambda_n t\}\, H_n(h), \eqno(9.44)$$
for any $n = (n_1, \cdots, n_r)$ and $h \in H$, where
$$\lambda_n = \sum_{i=1}^r n_i\, \mu_i.$$

Proof. First, let $|n| = n_k$, so that
$$H_n(h) = p_{n_k}(\xi_k), \quad \xi_k = \ell_k(h) = (\Gamma^{-1/2} e_k, h).$$
Then we have
$$D H_n(h) = p'_{n_k}(\xi_k)\,(\Gamma^{-1/2} e_k),$$
$$D^2 H_n(h) = p''_{n_k}(\xi_k)\,(\Gamma^{-1/2} e_k \otimes \Gamma^{-1/2} e_k).$$
Hence, by Lemma 4.1 and the fact that $A$ and $R$ have the same eigenfunctions $e_k$ with eigenvalues $-\mu_k$ and $\rho_k$, respectively, it follows that
$$\langle Ah,\, D H_n(h)\rangle = (h,\, A\,D H_n(h)) = -\mu_k\, \xi_k\, p'_{n_k}(\xi_k), \eqno(9.45)$$
$$Tr\,[R\,D^2 H_n(h)] = \rho_k\,(\Gamma^{-1} e_k, e_k)\,p''_{n_k}(\xi_k) = 2\mu_k\, p''_{n_k}(\xi_k), \eqno(9.46)$$
which imply that, for $n = (0, \cdots, 0, n_k, 0, \cdots)$,
$$\mathcal{A}_0 H_n(h) = \mu_k\,[\,p''_{n_k}(\xi_k) - \xi_k\, p'_{n_k}(\xi_k)\,] = -n_k\, \mu_k\, p_{n_k}(\xi_k), \eqno(9.47)$$
where use was made of the identity for a Hermite polynomial:
$$p''_k(x) - x\,p'_k(x) = -k\,p_k(x).$$
In general, consider
$$H_n(h) = \prod_{k=1}^\infty p_{n_k}(\xi_k), \quad n \in \mathcal{Z}.$$
Since $|n| < \infty$, there is a positive integer $m$ such that $n_k = 0$ for $k > m$. So one can rewrite
$$H_n(h) = \prod_{k=1}^m p_{n_k}(\xi_k).$$
By means of the differentiation formulas (9.29), we can compute
$$D H_n(h) = \sum_{k=1}^m \partial_k H_n(h)\,(\Gamma^{-1/2} e_k) = H_n(h) \sum_{k=1}^m \frac{p'_{n_k}(\xi_k)}{p_{n_k}(\xi_k)}\,(\Gamma^{-1/2} e_k), \eqno(9.48)$$
$$D^2 H_n(h) = \sum_{i,j=1}^m \partial_i\,\partial_j H_n(h)\,(\Gamma^{-1/2} e_i) \otimes (\Gamma^{-1/2} e_j) = H_n(h) \sum_{i=1}^m \frac{p''_{n_i}(\xi_i)}{p_{n_i}(\xi_i)}\,(\Gamma^{-1/2} e_i) \otimes (\Gamma^{-1/2} e_i) + \{\text{terms with } i \ne j\}. \eqno(9.49)$$
We can deduce from (9.46) to (9.49) that
$$\langle Ah,\, D H_n(h)\rangle = -H_n(h) \sum_{k=1}^m \mu_k\, \xi_k\, \frac{p'_{n_k}(\xi_k)}{p_{n_k}(\xi_k)},$$
and
$$Tr\,[R\,D^2 H_n(h)] = 2\,H_n(h) \sum_{k=1}^m \mu_k\, \frac{p''_{n_k}(\xi_k)}{p_{n_k}(\xi_k)},$$
from which we obtain (9.43):
$$\mathcal{A}_0 H_n(h) = -\Big(\sum_{k=1}^m n_k\, \mu_k\Big)\, H_n(h).$$
To show (9.44), since $H_n \in \mathcal{S}(H)$, we can apply Lemma 4.2 to get
$$\frac{d}{dt} P_t H_n = \mathcal{A}_0 P_t H_n = P_t \mathcal{A}_0 H_n = -\lambda_n\, P_t H_n,$$
with $P_0 H_n = H_n$. It follows that
$$P_t H_n = e^{-\lambda_n t}\, H_n,$$
as asserted. $\Box$
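In coefficient space, Lemma 4.3 says the Ornstein–Uhlenbeck semigroup acts diagonally: each Hermite mode $H_n$ is damped by $e^{-\lambda_n t}$ with $\lambda_n = \sum_i n_i \mu_i$. The following sketch (assumed spectrum, not from the book) applies $P_t$ to a truncated Hermite expansion $\Phi = \sum \phi_n H_n$ by scaling its coefficients.

```python
# Diagonal action of P_t on Hermite coefficients, as in (9.44).
import math

mu_eigs = [1.0, 4.0, 9.0]                 # assumed eigenvalues mu_k of (-A)

def lam(n):
    """lambda_n = sum_i n_i * mu_i for a multi-index n."""
    return sum(ni * mi for ni, mi in zip(n, mu_eigs))

def apply_Pt(coeffs, t):
    """Apply the Markov semigroup P_t to {n: phi_n} in coefficient space."""
    return {n: phi * math.exp(-lam(n) * t) for n, phi in coeffs.items()}

# Phi = 2 H_(0,0,0) + 1 H_(1,0,0) - 3 H_(0,2,1)  (hypothetical coefficients)
phi = {(0, 0, 0): 2.0, (1, 0, 0): 1.0, (0, 2, 1): -3.0}
pt_phi = apply_Pt(phi, t=0.5)

assert lam((0, 2, 1)) == 17.0             # 2*4 + 1*9
assert pt_phi[(0, 0, 0)] == 2.0           # the constant mode is invariant
assert abs(pt_phi[(1, 0, 0)] - math.exp(-0.5)) < 1e-12
print({n: round(v, 6) for n, v in pt_phi.items()})
```

The constant mode (the mean with respect to $\mu$) is preserved while every other mode decays, which is exactly the ergodic behavior of the semigroup toward the invariant measure.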

Lemma 4.4 Assume the conditions for Lemma 4.3 hold. Then, for any $\Phi, \Psi \in \mathcal{S}(H)$, the following Green's formula holds:
$$\int_H (\mathcal{A}_0\Phi)\,\Psi\,d\mu = \int_H \Phi\,(\mathcal{A}_0\Psi)\,d\mu = -\frac{1}{2}\int_H (R\,D\Phi,\, D\Psi)\,d\mu. \eqno(9.50)$$

Proof. For $\Phi, \Psi \in \mathcal{S}(H)$, we have
$$Tr\,[R\,D^2\Phi(h)] = \sum_{k=1}^n (D^2\Phi(h)\,e_k,\, R\,e_k) = \sum_{k=1}^n \rho_k\,(D^2\Phi(h)\,e_k,\, e_k), \eqno(9.51)$$
for some $n \ge 1$. Now it is clear that
$$(D^2\Phi(h)\,e_k, e_k) = (D(D\Phi(h), e_k),\, e_k),$$
and
$$(D^2\Phi(h)\,e_k, e_k)\,\Psi(h) = (D[\,\Psi\,(D\Phi(h), e_k)\,],\, e_k) - (D\Phi(h), e_k)(D\Psi(h), e_k).$$
Therefore, by the integration-by-parts formula (9.31) given in Theorem 3.3, we can get
$$\int_H \Psi(h)\,(D^2\Phi(h)\,e_k, e_k)\,\mu(dh) + \int_H (D\Phi(h), e_k)(D\Psi(h), e_k)\,\mu(dh)$$
$$= \int_H (D[\,\Psi\,(D\Phi(h), e_k)\,],\, e_k)\,\mu(dh) = \int_H \Psi(h)\,(\Gamma^{-1} e_k, h)\,(D\Phi(h), e_k)\,\mu(dh), \eqno(9.52)$$
where $\Gamma^{-1}$ is a pseudo-inverse. It follows from (9.51) and (9.52) that
$$\int_H Tr\,[R\,D^2\Phi]\,\Psi\,d\mu = \sum_{k} \rho_k \int_H (D^2\Phi\,e_k, e_k)\,\Psi\,d\mu$$
$$= \sum_{k} \rho_k\,\Big\{ \int_H (\Gamma^{-1} e_k, h)(D\Phi, e_k)\,\Psi\,d\mu - \int_H (D\Phi, e_k)(D\Psi, e_k)\,d\mu \Big\}$$
$$= -2 \int_H \sum_{k} \langle Ah, e_k\rangle\,(D\Phi, e_k)\,\Psi\,d\mu - \int_H \sum_{k} \rho_k\,(D\Phi, e_k)(D\Psi, e_k)\,d\mu$$
$$= -2 \int_H \langle Ah,\, D\Phi\rangle\,\Psi\,d\mu - \int_H (R\,D\Phi,\, D\Psi)\,d\mu,$$
where we used $\rho_k\,(\Gamma^{-1} e_k, h) = 2\mu_k\,(e_k, h) = -2\,\langle Ah, e_k\rangle$. This implies Green's formula (9.50). $\Box$

Now recall the definition of the Gauss–Sobolev space $\mathcal{H}_{k,\lambda}$ in the last section. Let $\lambda_n = \sum_k n_k \mu_k$ be fixed from now on, and denote the corresponding $\mathcal{H}_{k,\lambda}$ simply by $\mathcal{H}_k$, with norm $|||\cdot|||_k$ defined by
$$|||\Phi|||_k = \Big\{\sum_n (1 + \lambda_n)^k\,|\phi_n|^2\Big\}^{1/2}, \quad k = 0, 1, 2, \cdots, \eqno(9.53)$$
with $|||\Phi|||_0 = |||\Phi|||$.

Theorem 4.5 Let the conditions on $A$ and $R$ in Lemma 4.1 hold. Then $P_t: \mathcal{H} \to \mathcal{H}$, for $t \ge 0$, is a contraction semigroup with infinitesimal generator $\bar{\mathcal{A}}$. The domain of $\bar{\mathcal{A}}$ contains $\mathcal{H}_2$, and we have $\bar{\mathcal{A}}\Phi = \mathcal{A}_0\Phi$ for $\Phi \in \mathcal{S}(H)$.

Proof. The semigroup properties are easy to verify. To show that $P_t$ is a contraction map, let $\Phi \in \mathcal{H}$. Since
$$(P_t\Phi)(v) = E\,\Phi(u^v_t) = \int_H \Phi(G_t v + h)\,\mu_t(dh),$$
where $\mu_t$ is the solution measure for $u_t$ with $u_0 = 0$, we have
$$|||P_t\Phi|||^2 = \int_H [P_t\Phi(v)]^2\,\mu(dv) = \int_H \Big[\int_H \Phi(h)\,P(t,v,dh)\Big]^2 \mu(dv) \le \int_H \int_H \Phi^2(h)\,P(t,v,dh)\,\mu(dv) = |||\Phi|||^2,$$
where use was made of the Cauchy–Schwarz inequality and a property of the invariant measure $\mu$. Hence $P_t: \mathcal{H} \to \mathcal{H}$ is a contraction mapping.
Let $\bar{\mathcal{A}}$ denote the infinitesimal generator of $P_t$, with domain $\mathcal{D}(\bar{\mathcal{A}})$ dense in $\mathcal{H}$ [79]. Clearly, by Lemma 4.2, $\bar{\mathcal{A}}\Phi = \mathcal{A}_0\Phi$ for $\Phi \in \mathcal{S}(H)$. For any $\Phi \in \mathcal{H}_2$, by writing $\Phi$ in terms of Hermite polynomials, it can be shown that $\bar{\mathcal{A}}\Phi \in \mathcal{H}$, so that $\mathcal{H}_2 \subset \mathcal{D}(\bar{\mathcal{A}})$. $\Box$

Theorem 4.6 Let the conditions for Theorem 4.5 hold true. The differential operator $\mathcal{A}_0$ defined by (9.40) on $\mathcal{S}(H)$ can be extended to a self-adjoint linear operator $\mathcal{A}$ in $\mathcal{H}$ with domain $\mathcal{H}_2$.

Proof. Let $V_N$ be the closed linear span of $\{H_n,\, |n| \le N\}$, and let $P_N: \mathcal{H} \to V_N$ be the projection operator defined by
$$P_N\Phi = \sum_{|n|\le N} \phi_n\, H_n, \quad \Phi \in \mathcal{H},$$
with $\phi_n = [\Phi, H_n]$. Let $\mathcal{A}_N = \mathcal{A}_0 P_N$. Then, for $\Phi \in \mathcal{H}_2$,
$$\mathcal{A}_N\Phi = -\sum_{|n|\le N} \lambda_n\, \phi_n\, H_n.$$
Thus, for any positive integers $N < N'$,
$$|||\mathcal{A}_N\Phi - \mathcal{A}_{N'}\Phi|||^2 = \sum_{N < |n| \le N'} \lambda_n^2\, |\phi_n|^2 \le |||P_N\Phi - P_{N'}\Phi|||_2^2 \to 0,$$
as $N, N' \to \infty$. Hence, for $\Phi \in \mathcal{H}_2$, $\{\mathcal{A}_N\Phi\}$ is a Cauchy sequence in $\mathcal{H}$; denote its limit by $\mathcal{A}\Phi$, that is,
$$\lim_{N\to\infty} \mathcal{A}_N\Phi = \mathcal{A}\Phi, \quad \Phi \in \mathcal{H}_2.$$
In the meantime, by passing through the sequences $P_N\Phi$ and $P_N\Psi$, one can show, by invoking the integral identity (9.50), that, for $\Phi, \Psi \in \mathcal{H}_2$,
$$[\mathcal{A}\Phi, \Psi] = [\Phi, \mathcal{A}\Psi] = -\frac{1}{2}\int_H (R\,D\Phi, D\Psi)\,d\mu,$$
so that $[\mathcal{A}\Phi, \Phi] \le 0$. Therefore $\mathcal{A}$ is a symmetric, semi-bounded operator in $\mathcal{H}$ with a dense domain $\mathcal{H}_2$. By appealing to the Friedrichs extension theorem (p. 317, [96]), it admits a self-adjoint extension, still denoted by $\mathcal{A}$, with domain $\mathcal{D}(\mathcal{A}) = \mathcal{H}_2$. $\Box$

Remarks: Since both $\mathcal{A}$ and $\bar{\mathcal{A}}$ are extensions of $\mathcal{A}_0$ to a domain containing $\mathcal{H}_2$, they must coincide there. Also it is noted that the above result is a generalization of the case of the Wiener measure. If $K = H$, $-A = R = I$, and $\mu$ is the Wiener measure in $H$, then $\lambda_n = |n|$ is a non-negative integer, and $(-\mathcal{A})$ is known as the number operator [79].

9.5 Parabolic Equations and Related Elliptic Problems


In view of Green's formula (9.50), one naturally associates the differential operator $\mathcal{A}$ with a bilinear (Dirichlet) form, as in finite dimensions. To this end, let $\mathcal{H}_k$ be the Gauss–Sobolev space of order $k$ with norm $|||\cdot|||_k$, and denote its dual space by $\mathcal{H}_{-k}$ with norm $|||\cdot|||_{-k}$. Thus we have
$$\mathcal{H}_k \subset \mathcal{H} \subset \mathcal{H}_{-k}, \quad k \ge 0.$$
The duality between $\mathcal{H}_{-k}$ and $\mathcal{H}_k$ has the pairing
$$\langle\langle \Phi, \Psi\rangle\rangle_k, \quad \Phi \in \mathcal{H}_{-k},\ \Psi \in \mathcal{H}_k.$$
As before, we set $\mathcal{H}_0 = \mathcal{H}$, $|||\cdot|||_0 = |||\cdot|||$, and $\langle\langle\cdot,\cdot\rangle\rangle_1 = \langle\langle\cdot,\cdot\rangle\rangle$, $\langle\langle\cdot,\cdot\rangle\rangle_0 = [\cdot,\cdot]$.
Now consider a nonlinear perturbation of equation (9.36):
$$du_t = A u_t\,dt + F(u_t)\,dt + dW_t, \quad t > 0; \qquad u_0 = h \in H, \eqno(9.54)$$
where $F: H \to H$ is Lipschitz continuous and of linear growth. Then the associated Kolmogorov equation takes the form:
$$\frac{\partial}{\partial t}\Psi(v,t) = \mathcal{A}\Psi(v,t) + (F(v),\, D\Psi(v,t)), \quad \mu\text{-a.e. } v \in H; \qquad \Psi(v,0) = \Phi(v), \eqno(9.55)$$
where, as defined in Theorem 4.5, $\mathcal{A}: \mathcal{H}_2 \to \mathcal{H}$ is given by
$$\mathcal{A}\Psi = \frac{1}{2}\,Tr\,[R\,D^2\Psi(v)] + \langle Av,\, D\Psi(v)\rangle. \eqno(9.56)$$
For a regular $F$, to be specified later, the additional term $(F(v), D\Psi(v,t))$ can be defined for $\mu$-a.e. $v \in H$. It appears that the above differential operator is a special case of $\mathcal{A}$ defined by equation (9.11). However, there is a noticeable difference between them, in that the latter was defined only for $v \in V$, while the operator in (9.55) is defined $\mu$-a.e. in $H$. Moreover, the classical solution of the Kolmogorov equation (9.55) requires a $\mathbf{C}^2_b(H)$-datum $\Phi$ [21]. Here we may allow $\Phi$ to be in $\mathcal{H}$, but the solution will be given in a generalized sense. In particular, we will consider the mild and the strong solutions of equation (9.55), as in finite dimensions.
Let $\nu > 0$ be a parameter. By changing $\Psi_t$ to $e^{-\nu t}\Psi_t$ in (9.55), the new $\Psi_t$ will satisfy the equation:
$$\frac{\partial}{\partial t}\Psi(v,t) = \mathcal{A}_\nu\Psi(v,t) + (F(v),\, D\Psi(v,t)), \quad \mu\text{-a.e. } v \in H; \qquad \Psi(v,0) = \Phi(v), \eqno(9.57)$$
where $\mathcal{A}_\nu = (\mathcal{A} - \nu I)$ and $I$ is the identity operator in $\mathcal{H}$. Clearly, the problems (9.55) and (9.57) are equivalent, as far as the existence and uniqueness questions are concerned. It is more advantageous to consider the transformed problem (9.57).

We first consider the case of a mild solution of (9.57). By the semigroup property given in Theorem 4.5, we rewrite the equation (9.57) in the integral form:
$$\Psi(v,t) = e^{-\nu t}\,(P_t\Phi)(v) + \int_0^t e^{-\nu(t-s)}\,[P_{t-s}(F, D\Psi_s)](v)\,ds, \eqno(9.58)$$
where we denote $\Phi = \Psi(\cdot, 0)$ and $\Psi_s = \Psi(\cdot, s)$. The following lemma will be needed to prove the existence theorem.

Lemma 5.1 Let $\Theta \in L^2((0,T); \mathcal{H}_{-1})$. Then, for any $\nu > 1$, there exists $C_\nu > 0$ such that
$$\Big|\Big|\Big|\int_0^t e^{-\nu(t-s)} P_{t-s}\Theta_s\,ds\Big|\Big|\Big|^2 \le C_\nu \int_0^T |||\Theta_s|||_{-1}^2\,ds. \eqno(9.59)$$
0 0

Proof. By Lemma 3.1, we can expand $\Theta_s$ into a series in Hermite polynomials:
$$\Theta_s = \sum_n \theta_n(s)\,H_n, \eqno(9.60)$$
so that
$$\int_0^t e^{-\nu(t-s)} P_{t-s}\Theta_s\,ds = \sum_n \int_0^t e^{-(\nu + \lambda_n)(t-s)}\,\theta_n(s)\,ds\;H_n, \eqno(9.61)$$
where $\theta_n(s) = [\Theta_s, H_n]$. It follows from (9.61) that
$$\Big|\Big|\Big|\int_0^t e^{-\nu(t-s)} P_{t-s}\Theta_s\,ds\Big|\Big|\Big|^2 = \sum_n \Big|\int_0^t e^{-(\nu+\lambda_n)(t-s)}\,\theta_n(s)\,ds\Big|^2 \le \sum_n \frac{1}{2(\nu + \lambda_n)}\int_0^t |\theta_n(s)|^2\,ds$$
$$\le \frac{1}{2(\nu - 1)} \sum_n \int_0^T \frac{1}{(1 + \lambda_n)}\,|\theta_n(s)|^2\,ds = C_\nu \int_0^T |||\Theta_s|||_{-1}^2\,ds,$$
where $C_\nu = 1/2(\nu - 1)$. This verifies (9.59). $\Box$

Theorem 5.2 Suppose that $F: H \to H$ satisfies the usual Lipschitz continuity and linear growth conditions, such that
$$|||(F(v),\, D\Phi(v))|||_{-1}^2 \le C\,|||\Phi(v)|||^2, \eqno(9.62)$$
for some $C > 0$. Then, for $\Phi \in \mathcal{H}$, the initial-value problem (9.55) has a unique mild solution $\Psi \in C([0,T]; \mathcal{H})$.

Proof. Let $X_T$ denote the Banach space $C([0,T]; \mathcal{H})$ with the sup-norm
$$|||\Psi|||_T = \sup_{0\le t\le T} |||\Psi_t|||. \eqno(9.63)$$
Let $Q$ be an operator in $X_T$ defined by
$$Q\Psi_t = e^{-\nu t} P_t\Phi + \int_0^t e^{-\nu(t-s)} P_{t-s}(F, D\Psi_s)\,ds, \eqno(9.64)$$
for any $\Psi \in X_T$. Then, by Theorem 4.5 and Lemma 5.1, we have
$$|||Q\Psi_t|||^2 \le 2\,\Big\{ |||P_t\Phi|||^2 + \Big|\Big|\Big|\int_0^t e^{-\nu(t-s)} P_{t-s}(F, D\Psi_s)\,ds\Big|\Big|\Big|^2 \Big\} \le 2\,\Big\{ |||\Phi|||^2 + C_\nu \int_0^t |||(F, D\Psi_s)|||_{-1}^2\,ds \Big\}$$
$$\le 2\,|||\Phi|||^2 + C_1 \int_0^t |||\Psi_s|||^2\,ds,$$
for some $C_1 > 0$. Hence
$$|||Q\Psi|||_T \le C_2\,(1 + |||\Psi|||_T),$$
for some positive $C_2$ depending on $\nu$, $C$, and $T$. Therefore, the map $Q: X_T \to X_T$ is well defined. To show the map is a contraction for a small $T$, let $\Psi, \tilde\Psi \in X_T$. Then
$$|||Q\Psi_t - Q\tilde\Psi_t|||^2 = \Big|\Big|\Big|\int_0^t e^{-\nu(t-s)} P_{t-s}[\,(F, D\Psi_s) - (F, D\tilde\Psi_s)\,]\,ds\Big|\Big|\Big|^2 \le C_\nu \int_0^t |||(F,\, D\Psi_s - D\tilde\Psi_s)|||_{-1}^2\,ds \le C_3 \int_0^t |||\Psi_s - \tilde\Psi_s|||^2\,ds,$$
for some $C_3 > 0$. It follows that
$$|||Q\Psi - Q\tilde\Psi|||_T \le \sqrt{C_3\,T}\;|||\Psi - \tilde\Psi|||_T,$$
which shows that $Q$ is a contraction map for a small $T$. Hence the Cauchy problem (9.57) has a unique mild solution. $\Box$
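The fixed-point argument above can be watched in action in a toy setting (not from the book): restricted to a single mode, the mild equation (9.58) collapses to the scalar integral equation $\psi(t) = e^{-\nu t}\varphi + \int_0^t e^{-\nu(t-s)} f(\psi(s))\,ds$ with a Lipschitz nonlinearity $f$. The sketch below runs the Picard iteration $\psi \leftarrow Q\psi$ on a time grid, with hypothetical parameters, and the successive sup-norm differences contract geometrically, as the estimate $|||Q\Psi - Q\tilde\Psi|||_T \le q\,|||\Psi - \tilde\Psi|||_T$ predicts.

```python
# Picard iteration for a scalar mild equation (toy model of (9.58)).
import math

nu, phi0, T, N = 2.0, 1.0, 1.0, 400
dt = T / N
f = lambda x: 0.5 * math.sin(x)           # Lipschitz with constant 1/2 (assumed)

def Q(psi):
    """One application of the integral map, left-endpoint quadrature."""
    out = [phi0]
    for i in range(1, N + 1):
        t = i * dt
        integral = sum(math.exp(-nu * (t - j * dt)) * f(psi[j]) * dt
                       for j in range(i))
        out.append(math.exp(-nu * t) * phi0 + integral)
    return out

psi = [phi0] * (N + 1)                    # initial guess: constant function
gaps = []
for _ in range(6):
    new = Q(psi)
    gaps.append(max(abs(a - b) for a, b in zip(new, psi)))
    psi = new

# successive sup-norm differences shrink, as the contraction estimate predicts
assert all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:]) if g1 > 0)
print([round(g, 6) for g in gaps])
```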

We now consider the strong solutions via a variational formulation. Let us introduce a bilinear form $a$ on $\mathcal{H}_1 \times \mathcal{H}_1$ defined by
$$a(\Phi_1, \Phi_2) = \frac{1}{2}\int_H (R\,D\Phi_1,\, D\Phi_2)\,d\mu, \quad \text{for } \Phi_1, \Phi_2 \in \mathcal{H}_1, \eqno(9.65)$$
which, by Green's formula, has the following property:
$$a(\Phi_1, \Phi_2) = -\langle\langle \mathcal{A}\Phi_1, \Phi_2\rangle\rangle = -\langle\langle \mathcal{A}\Phi_2, \Phi_1\rangle\rangle. \eqno(9.66)$$
In view of (9.65) and (9.66), by setting $\Phi_1 = \Phi_2 = \Phi$ and expanding $\Phi$ in terms of Hermite polynomials, it can be shown that
$$a(\Phi, \Phi) = \frac{1}{2}\,|||R^{1/2} D\Phi|||^2 = |||\Phi|||_1^2 - |||\Phi|||^2. \eqno(9.67)$$
2

From (9.65) and (9.67), it follows that
$$|a(\Phi_1, \Phi_2)| \le \frac{1}{2}\,|||R^{1/2} D\Phi_1|||\;|||R^{1/2} D\Phi_2||| \le |||\Phi_1|||_1\,|||\Phi_2|||_1. \eqno(9.68)$$
The inequalities (9.67) and (9.68) show that the bilinear form is continuous on $\mathcal{H}_1 \times \mathcal{H}_1$ and is $\mathcal{H}_1$-coercive. The following lemma, which is well known in the theory of partial differential equations (p. 41, [33]), is valid in a general Hilbert space setting.

Lemma 5.3 The Dirichlet form (9.65) defines a unique bounded linear operator $\tilde{\mathcal{A}}: \mathcal{H}_1 \to \mathcal{H}_{-1}$ such that
$$a(\Phi_1, \Phi_2) = -\langle\langle \tilde{\mathcal{A}}\Phi_1, \Phi_2\rangle\rangle, \quad \text{for } \Phi_1, \Phi_2 \in \mathcal{H}_1, \eqno(9.69)$$
and the restriction $\tilde{\mathcal{A}}|_{\mathcal{H}_2} = \mathcal{A}$.

For a sufficiently smooth $F$, define $\mathcal{B}: \mathcal{H}_2 \to \mathcal{H}$ as follows:
$$\mathcal{B}\Phi(v) = \mathcal{A}\Phi(v) + (F(v),\, D\Phi(v)), \quad \mu\text{-a.e. } v \in H. \eqno(9.70)$$
To $\mathcal{B}$ we associate a bilinear form $b$ defined by
$$b(\Phi_1, \Phi_2) = a(\Phi_1, \Phi_2) - \int_H (F(v),\, D\Phi_1(v))\,\Phi_2(v)\,\mu(dv), \eqno(9.71)$$
for $\Phi_1, \Phi_2 \in \mathcal{H}_1$. Then, similar to the above lemma, we have

Lemma 5.4 Suppose that $F: H \to H_0$ is essentially bounded, and there is $M > 0$ such that
$$\sup_{v\in H} \|R^{-1/2} F(v)\| \le M, \quad \mu\text{-a.e.} \eqno(9.72)$$
Then the bilinear form $b$ on $\mathcal{H}_1 \times \mathcal{H}_1$ is continuous and $\mathcal{H}_1$-coercive. Moreover, it uniquely defines a bounded linear operator $\tilde{\mathcal{B}}: \mathcal{H}_1 \to \mathcal{H}_{-1}$ such that
$$b(\Phi_1, \Phi_2) = -\langle\langle \tilde{\mathcal{B}}\Phi_1, \Phi_2\rangle\rangle, \quad \text{for } \Phi_1, \Phi_2 \in \mathcal{H}_1, \eqno(9.73)$$
and the restriction $\tilde{\mathcal{B}}|_{\mathcal{H}_2} = \mathcal{B}$.

Proof. Similar to Lemma 5.3, it suffices to show that the form $b$ is continuous and coercive. To this end, consider the integral term in (9.71) and invoke the condition (9.72) to get
$$\Big|\int_H (F, D\Phi_1)\,\Phi_2\,d\mu\Big| \le \int_H \|R^{-1/2} F\|\,\|R^{1/2} D\Phi_1\|\,|\Phi_2|\,d\mu \le M\,|||\Phi_1|||_1\,|||\Phi_2|||.$$
It is also true that, for any $\epsilon > 0$,
$$\Big|\int_H (F, D\Phi)\,\Phi\,d\mu\Big| \le M\,|||\Phi|||_1\,|||\Phi||| \le M\,\Big(\epsilon\,|||\Phi|||_1^2 + \frac{1}{4\epsilon}\,|||\Phi|||^2\Big). \eqno(9.74)$$
The above two upper bounds, together with (9.68), (9.69), and (9.71), show the continuity and the coercivity of $b$, by choosing $\epsilon$ so small that $M\epsilon < \frac{1}{2}$. $\Box$

First, let us consider the associated elliptic problem. Introduce the elliptic operator
$$B_\lambda = B - \lambda I,$$
where $\lambda > 0$ and $I$ is the identity operator in $\mathcal{H}$. For $Q \in \mathcal{H}$, consider the solution of the equation:
$$B_\lambda U = -Q. \qquad (9.75)$$
We associate $B_\lambda$ to the bilinear form on $\mathcal{H}_1 \times \mathcal{H}_1$:
$$b_\lambda(\Phi_1, \Phi_2) = b(\Phi_1, \Phi_2) + \lambda\, [\Phi_1, \Phi_2]. \qquad (9.76)$$
Then a function $U \in \mathcal{H}_1$ is said to be a strong solution to the elliptic equation (9.75) if the function $U$ satisfies
$$\langle\langle -\hat{B}_\lambda U, \Phi \rangle\rangle = [Q, \Phi], \quad \forall\, \Phi \in \mathcal{H}_1. \qquad (9.77)$$

The following existence theorem can be easily proved.

Theorem 5.5  Let the conditions for Lemma 5.4 be satisfied. Then, for $Q \in \mathcal{H}$, there exists $\lambda_0 > 0$ such that, for $\lambda \ge \lambda_0$, the elliptic equation (9.75) has a unique strong solution $U \in \mathcal{H}_1$.

Proof. Consider the equation:
$$b_\lambda(U, \Phi) = [Q, \Phi], \quad \forall\, \Phi \in \mathcal{H}_1. \qquad (9.78)$$
By Lemma 5.4, it is easily seen that $b_\lambda: \mathcal{H}_1 \times \mathcal{H}_1 \to \mathbb{R}$ is continuous. To show the coercivity, we invoke (9.67), (9.74), and (9.76) to get
$$\begin{aligned}
b_\lambda(\Phi, \Phi) &\ge |||\Phi|||_1^2 + (\lambda - 1)\, |||\Phi|||^2 - \int_H |(F(v), D\Phi(v))\, \Phi(v)|\, \mu(dv) \\
&\ge |||\Phi|||_1^2 + (\lambda - 1 - M/4\varepsilon)\, |||\Phi|||^2 - \varepsilon M\, |||\Phi|||_1^2 \\
&\ge \beta\, |||\Phi|||_1^2,
\end{aligned}$$
where we chose $\varepsilon$ small enough so that $\beta = 1 - \varepsilon M > 0$ and $\lambda$ so large that $\lambda \ge \lambda_0 = 1 + M/4\varepsilon$. Hence, the form is also $\mathcal{H}_1$-coercive. Now, in view of (9.77) and (9.78), we can apply the Lax–Milgram theorem (p. 92, [96]) to assert that
Diffusion Equations in Infinite Dimensions 297

there exists a continuous injective linear operator, denoted by $-\hat{B}_\lambda$, from $\mathcal{H}_1$ into $\mathcal{H}_{-1}$, and a unique $U \in \mathcal{H}_1$ such that
$$b_\lambda(U, \Phi) = \langle\langle -\hat{B}_\lambda U, \Phi \rangle\rangle = [Q, \Phi], \quad \forall\, \Phi \in \mathcal{H}_1.$$
This proves that the elliptic equation (9.75) has a unique strong solution.
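In finite dimensions the Lax–Milgram argument is constructive: a continuous, coercive bilinear form corresponds to an invertible matrix, and the weak solution is obtained by solving a linear system. The following sketch is a hypothetical finite-dimensional stand-in (a random symmetric positive semi-definite matrix playing the role of the Dirichlet form, not the Gauss–Sobolev setting itself):

```python
import numpy as np

# Hypothetical finite-dimensional stand-in: K plays the role of the
# (symmetric, positive semi-definite) Dirichlet-form matrix, and
# b_lambda(u, w) = w @ (K + lam*I) @ u is the shifted bilinear form.
rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
K = M @ M.T                      # symmetric PSD "Dirichlet form" matrix
lam = 1.0                        # lambda > 0 restores coercivity
B_lam = K + lam * np.eye(n)

def solve_elliptic(Q):
    """Galerkin solution U of b_lambda(U, phi) = (Q, phi) for all phi."""
    return np.linalg.solve(B_lam, Q)

Q = rng.standard_normal(n)
U = solve_elliptic(Q)

# Coercivity b_lambda(u, u) >= lam * |u|^2 is what makes B_lam invertible.
u = rng.standard_normal(n)
coercive_gap = u @ B_lam @ u - lam * (u @ u)
```

Coercivity gives the uniform lower bound $b_\lambda(u,u) \ge \lambda |u|^2$, which is exactly what guarantees unique solvability of the linear system.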

Next we consider the strong solution of the parabolic equation (9.57) with a nonhomogeneous term:
$$\frac{\partial}{\partial t}\Phi(v,t) = A_\lambda \Phi(v,t) + B(\Phi(v,t)) + Q(v,t), \quad \text{a.e. } v \in H, \qquad \Phi(v,0) = \Phi_0(v), \qquad (9.79)$$
where $B: \mathcal{H}_1 \to \mathcal{H}$, $A_\lambda = (A - \lambda I)$ with $\lambda > 0$, and $Q \in L^2((0,T); \mathcal{H})$, $\Phi_0 \in \mathcal{H}$.

By the definition of a strong solution, we must show that there exists a function $\Phi \in C([0,T]; \mathcal{H}) \cap L^2((0,T); \mathcal{H}_1)$ such that the following equation holds
$$[\Phi_t, \Psi] = [\Phi_0, \Psi] + \int_0^t \langle\langle A_\lambda \Phi_s, \Psi \rangle\rangle\, ds + \int_0^t [B(\Phi_s), \Psi]\, ds + \int_0^t [Q_s, \Psi]\, ds, \qquad (9.80)$$
for any $\Psi \in \mathcal{H}_1$. Similar to the elliptic problem, one could follow Lions' approach [62] by introducing a bilinear form on $X \times X$ and applying the Lax–Milgram theorem, where $X = H^1((0,T); \mathcal{H}_{-1}) \cap L^2((0,T); \mathcal{H}_1)$. But this approach seems more technical. Instead, we shall adopt an alternative approach here. First, we will show that, under suitable conditions, the mild solution has $\mathcal{H}_1$-regularity, and then verify that the mild solution satisfies equation (9.80). To proceed, we rewrite the equation (9.79) in the integral form
$$\Phi_t = G_t \Phi_0 + \int_0^t G_{t-s} B(\Phi_s)\, ds + \int_0^t G_{t-s} Q_s\, ds, \qquad (9.81)$$
where $G_t = e^{-\lambda t} P_t$ is the associated Green's operator for equation (9.79). To prove the existence theorem, we need the following estimates.

Lemma 5.6  For $Q_t \in L^2((0,T); \mathcal{H})$, there exist some constants $a, b > 0$ such that
$$\Big|\Big|\Big| \int_0^t G_{t-s} Q_s\, ds \Big|\Big|\Big|^2 \le a \int_0^t |||Q_s|||_{-1}^2\, ds \qquad (9.82)$$
and
$$\Big|\Big|\Big| \int_0^t G_{t-s} Q_s\, ds \Big|\Big|\Big|_1^2 \le b \int_0^t |||Q_s|||^2\, ds. \qquad (9.83)$$

Proof. To prove the first inequality (9.82), as in Lemma 5.1, we can express $Q_t$ in terms of the Hermite functionals,
$$Q_t = \sum_n [Q_t, H_n]\, H_n$$
and
$$G_{t-s} Q_s = \sum_n e^{-\rho_n (t-s)}\, [Q_s, H_n]\, H_n,$$
where $\rho_n = \lambda + \mu_n$. It follows that
$$\Big|\Big|\Big| \int_0^t G_{t-s} Q_s\, ds \Big|\Big|\Big|^2 = \sum_n \Big\{ \int_0^t e^{-\rho_n(t-s)}\, [Q_s, H_n]\, ds \Big\}^2. \qquad (9.84)$$
Since
$$\Big\{ \int_0^t e^{-\rho_n(t-s)}\, [Q_s, H_n]\, ds \Big\}^2 \le \frac{1}{2\rho_n} \int_0^t [Q_s, H_n]^2\, ds \le \frac{1}{2\lambda_1}\, (1 + \mu_n)^{-1} \int_0^t [Q_s, H_n]^2\, ds$$
with $\lambda_1 = (\lambda \wedge 1)$, the equation (9.84) yields
$$\Big|\Big|\Big| \int_0^t G_{t-s} Q_s\, ds \Big|\Big|\Big|^2 \le a \int_0^t \sum_n (1 + \mu_n)^{-1}\, [Q_s, H_n]^2\, ds = a \int_0^t |||Q_s|||_{-1}^2\, ds,$$
where $a = \frac{1}{2\lambda_1}$. The above result was obtained by interchanging the summation and integration, which can be justified by the bounded convergence theorem. The second inequality (9.83) can be verified in a similar fashion and the proof will be omitted.
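Behind (9.82) is the elementary Cauchy–Schwarz estimate $\big(\int_0^t e^{-\rho(t-s)} q(s)\,ds\big)^2 \le \frac{1}{2\rho}\int_0^t q(s)^2\,ds$, applied to each Hermite mode with $\rho = \rho_n$. A quick numerical sanity check of this scalar inequality (randomly sampled $q$ and trapezoidal quadrature; all parameter choices are illustrative):

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule, written out to stay library-version agnostic."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def convolution_bound_holds(rho, t=1.0, n=2000, seed=1):
    """Check (int_0^t e^{-rho(t-s)} q ds)^2 <= (1/(2 rho)) int_0^t q^2 ds
    for a randomly sampled square-integrable q."""
    rng = np.random.default_rng(seed)
    s = np.linspace(0.0, t, n)
    q = rng.standard_normal(n)
    lhs = trapz(np.exp(-rho * (t - s)) * q, s) ** 2
    rhs = trapz(q ** 2, s) / (2.0 * rho)
    return lhs <= rhs + 1e-12

checks = [convolution_bound_holds(rho) for rho in (0.5, 1.0, 5.0, 25.0)]
```

The bound holds with large slack for generic $q$; equality would require $q$ proportional to the exponential kernel.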

Now assume that $B: \mathcal{H}_1 \to \mathcal{H}$ is continuous and there exist constants $K_1, K_2$ such that

(B.1)  $|||B(\Phi)|||^2 \le K_1 (1 + |||\Phi|||_1^2)$, for $\Phi \in \mathcal{H}_1$, and

(B.2)  $|||B(\Phi) - B(\tilde{\Phi})|||^2 \le K_2\, |||\Phi - \tilde{\Phi}|||_1^2$, for $\Phi, \tilde{\Phi} \in \mathcal{H}_1$.

Theorem 5.7  Let the conditions (B.1) and (B.2) be satisfied. Given $\Phi_0 \in \mathcal{H}$ and $Q \in L^2((0,T); \mathcal{H})$, the Cauchy problem for the parabolic equation (9.79) has a unique strong solution $\Phi \in C([0,T]; \mathcal{H}) \cap L^2((0,T); \mathcal{H}_1)$.

Proof. Under conditions (B.1) and (B.2), we will first show that the integral
equation (9.81) has a unique solution belonging to the space as depicted
in the theorem. This will be shown by the method of contraction mapping.

To proceed, let $Y_T$ denote the Banach space of functions on $H$ with norm defined by
$$|||\Phi|||_T = \Big\{ \sup_{0 \le t \le T} |||\Phi_t|||^2 + \int_0^T |||\Phi_s|||_1^2\, ds \Big\}^{1/2}.$$
Define the operator $\Gamma: Y_T \to Y_T$ by
$$\Gamma_t(\Phi) = G_t \Phi_0 + \int_0^t G_{t-s} B(\Phi_s)\, ds + \int_0^t G_{t-s} Q_s\, ds,$$
for $0 \le t \le T$.

for 0 t T . Then, by equation (9.82) in Lemma 5.6, we have


 Z t Z t 
kkt ()kk2 3 kkGt kk2 + kk Gts B(s ) dskk2 + kk Gts Qs dskk2
 Z t 0 Z t  0
2 2 2
C1 kkkk + kkB(s )kk ds + kkQs kk ds ,
0 0

for some C1 > 0. By condition (B.1), one can obtain from the above that
 Z t Z t 
2 2 2 2
kkt ()kk C2 1 + kkkk + kks kk ds + kkQs kk ds , (9.85)
0 0

where C2 > 0 is a constant depending on T . Similarly, with the aid of equation


(9.83) in Lemma 5.6 and condition (B.2), it can be shown that
 Z t Z t 
2 2 2 2
kkt ()kk1 C3 1 + kkkk + kks kk1 ds + kkQs kk ds , (9.86)
0 0

for a different constant C3 > 0. In view of (9.85) and (9.86), the map :
YT YT is bounded and well defined.
To show that $\Gamma$ is a contraction map, for $\Phi, \tilde{\Phi} \in Y_T$, consider
$$|||\Gamma_t(\Phi) - \Gamma_t(\tilde{\Phi})|||^2 = \Big|\Big|\Big| \int_0^t G_{t-s}\, [B(\Phi_s) - B(\tilde{\Phi}_s)]\, ds \Big|\Big|\Big|^2
\le a \int_0^t |||B(\Phi_s) - B(\tilde{\Phi}_s)|||_{-1}^2\, ds
\le a\, K_2 \int_0^t |||\Phi_s - \tilde{\Phi}_s|||^2\, ds,$$
where use was made of equation (9.82) and condition (B.2). It follows that
$$\sup_{0 \le t \le T} |||\Gamma_t(\Phi) - \Gamma_t(\tilde{\Phi})|||^2 \le a\, K_2\, T \sup_{0 \le t \le T} |||\Phi_t - \tilde{\Phi}_t|||^2. \qquad (9.87)$$
Next, by equation (9.83), we have the estimate
$$|||\Gamma_t(\Phi) - \Gamma_t(\tilde{\Phi})|||_1^2 \le b \int_0^t |||B(\Phi_s) - B(\tilde{\Phi}_s)|||^2\, ds \le b\, K_2 \int_0^t |||\Phi_s - \tilde{\Phi}_s|||_1^2\, ds,$$

which implies that
$$\int_0^T |||\Gamma_t(\Phi) - \Gamma_t(\tilde{\Phi})|||_1^2\, dt \le b\, K_2\, T \int_0^T |||\Phi_s - \tilde{\Phi}_s|||_1^2\, ds. \qquad (9.88)$$
In view of (9.87) and (9.88), one can conclude that
$$|||\Gamma(\Phi) - \Gamma(\tilde{\Phi})|||_T \le C\, \sqrt{T}\; |||\Phi - \tilde{\Phi}|||_T,$$

for some $C > 0$. Hence, for small $T$, $\Gamma$ is a contraction map and the integral equation (9.81) has a unique solution $\Phi \in Y_T$, which can be continued to any $T > 0$. To show that the solution satisfies the variational equation (9.80), for simplicity, we set $\Phi_0 = 0$ and $Q_t = 0$, which yields
$$[\Phi_t, \Psi] = \int_0^t \langle\langle A_\lambda \Phi_s, \Psi \rangle\rangle\, ds + \int_0^t [B(\Phi_s), \Psi]\, ds. \qquad (9.89)$$
The corresponding integral equation (9.81) reduces to
$$\Phi_t = \int_0^t G_{t-s} B(\Phi_s)\, ds. \qquad (9.90)$$

We first assume the test function $\Psi$ belongs to the domain of $A_\lambda$. Suppose $\Phi \in Y_T$ is the solution of (9.90). Then
$$\begin{aligned}
\int_0^t [A_\lambda \Psi, \Phi_s]\, ds &= \int_0^t \int_0^r [A_\lambda \Psi,\, G_{r-s} B(\Phi_s)]\, ds\, dr
= \int_0^t \int_0^r [\Psi,\, A_\lambda G_{r-s} B(\Phi_s)]\, ds\, dr \\
&= \int_0^t \int_0^r \frac{d}{dr}\, [\Psi,\, G_{r-s} B(\Phi_s)]\, ds\, dr
= \int_0^t \int_s^t \frac{d}{dr}\, [\Psi,\, G_{r-s} B(\Phi_s)]\, dr\, ds \\
&= [\Phi_t, \Psi] - \int_0^t [B(\Phi_s), \Psi]\, ds,
\end{aligned}$$
where the interchange of the order of integration can be justified by the Fubini theorem. The above equation yields the variational equation (9.89).
In general, for any $\Psi \in \mathcal{H}_1$, it can be approximated by a sequence $\{\Psi_n\}$ in the domain of $A_\lambda$. It can be shown that the corresponding variational equation with $\Psi$ replaced by $\Psi_n$ will converge to the desired limit. The detail is omitted here.
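The contraction argument is also how the mild solution can be computed: iterate the map $\Gamma$ and watch successive differences shrink geometrically. The sketch below does this for a hypothetical finite-dimensional analogue of (9.81) — a diagonal semigroup and a globally Lipschitz $B$, with $Q = 0$ — not the Gauss–Sobolev problem itself:

```python
import numpy as np

# Picard iteration for phi_t = G_t phi_0 + int_0^t G_{t-s} B(phi_s) ds
# in a hypothetical diagonal finite-dimensional analogue:
# G_t acts as exp(-t * a) componentwise and B is globally Lipschitz.
a = np.array([1.0, 2.0, 3.0])            # stand-in "eigenvalues" of -A_lambda
B = lambda x: 0.5 * np.tanh(x)           # Lipschitz constant 0.5
phi0 = np.array([1.0, -1.0, 0.5])
T, n = 1.0, 200
ts = np.linspace(0.0, T, n)
dt = ts[1] - ts[0]

def Gamma(phi):
    """One application of the integral-equation map on a discretized path."""
    out = np.empty_like(phi)
    for i, t in enumerate(ts):
        acc = np.exp(-t * a) * phi0
        for j in range(i):               # left-endpoint quadrature of the integral
            acc = acc + dt * np.exp(-(t - ts[j]) * a) * B(phi[j])
        out[i] = acc
    return out

phi = np.tile(phi0, (n, 1)).astype(float)
errs = []
for _ in range(6):
    nxt = Gamma(phi)
    errs.append(float(np.max(np.abs(nxt - phi))))
    phi = nxt
```

With Lipschitz constant $\tfrac12$ and a bounded semigroup, each iteration contracts the sup-norm difference by roughly a factor of two or better, mirroring the $C\sqrt{T}$ estimate above.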

Remarks:

(1) As a corollary of the theorem, the Kolmogorov equation (9.55) has a unique solution $U \in C([0,T]; \mathcal{H}) \cap L^2((0,T); \mathcal{H}_1)$, given by $U_t = e^{\lambda t}\, \Phi_t$, where $\Phi$ is the solution of (9.79) with $Q = 0$.

(2) In infinite dimensions, nonlinear parabolic equations can arise, for in-
stance, from stochastic control of partial differential equations in the form
of Hamilton–Jacobi–Bellman equations [23]. The analysis will be more
difficult technically.

As an example, consider a stochastic reaction–diffusion equation in a bounded domain $D \subset \mathbb{R}^d$ as follows:
$$\frac{\partial u}{\partial t} = (\Delta - \eta)\, u + \int_D k(x,y)\, f(u(y,t))\, dy + \dot{W}_t(x), \qquad u|_{\partial D} = 0, \quad u(x,0) = v(x), \qquad (9.91)$$
for $x \in D$, $t \in (0,T)$, where $k(x,y)$, $v(x)$ are given functions, and $W_t(\cdot)$ is a Wiener process in $H = L^2(D)$ with a bounded covariance function $r(x,y)$. Let $A = (\Delta - \eta)$, $F(v) = \int_D k(\cdot, y)\, f(v(y))\, dy$, and denote $(Rv)(x) = \int_D r(x,y)\, v(y)\, dy$. Here we set $V = H_0^1$ and impose the following conditions:
Condition (C): Assume that the covariance operator $R$ commutes with $\Delta$, and $F: H \to R^{1/2}H$ is such that
$$\|R^{-1/2} F(v)\| \le C, \quad v \in H,$$
for some constant $C > 0$.


Under the above condition, it will be shown that the conditions for Theorem 5.7 are satisfied.
Let $\{\alpha_k\}$ and $\{\lambda_k\}$ be the eigenvalues for $(-A)$ and $R$, respectively, with the common eigenfunctions $\{e_k\}$. The invariant measure $\mu$ is a centered Gaussian measure with covariance function $\bar{r}(x,y)$ given by
$$\bar{r}(x,y) = \sum_{k=1}^{\infty} \frac{\lambda_k}{2\alpha_k}\, e_k(x)\, e_k(y).$$
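The truncated series is easy to evaluate numerically. The sketch below assumes, purely for illustration, the one-dimensional domain $D = (0,1)$ with Dirichlet eigenfunctions $e_k(x) = \sqrt{2}\sin(k\pi x)$, eigenvalues $\alpha_k = \eta + (k\pi)^2$, and a hypothetical trace-class decay $\lambda_k = k^{-2}$ for $R$; the truncated kernel comes out symmetric and positive semi-definite, as a covariance function must:

```python
import numpy as np

# Truncated invariant covariance rbar(x,y) = sum_k lam_k/(2 a_k) e_k(x) e_k(y),
# under illustrative assumptions: D = (0, 1), e_k = sqrt(2) sin(k pi x),
# a_k = eta + (k pi)^2, and lam_k = k^{-2} (a hypothetical decay for R).
eta = 1.0
K = 50
x = np.linspace(0.0, 1.0, 101)

def rbar(x, y, K=K):
    k = np.arange(1, K + 1)
    lam = k ** -2.0
    a = eta + (k * np.pi) ** 2
    ex = np.sqrt(2.0) * np.sin(np.pi * np.outer(x, k))   # e_k sampled at x
    ey = np.sqrt(2.0) * np.sin(np.pi * np.outer(y, k))
    return (ex * (lam / (2.0 * a))) @ ey.T

Rbar = rbar(x, x)
sym_err = float(np.max(np.abs(Rbar - Rbar.T)))           # symmetry check
min_eig = float(np.min(np.linalg.eigvalsh(Rbar)))        # PSD check
```

The Gram structure of the series makes positive semi-definiteness automatic; the numerical eigenvalues are nonnegative up to rounding.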

The associated Kolmogorov equation can be written as
$$\frac{\partial}{\partial t}\Phi_t = \frac{1}{2}\operatorname{Tr}[R\, D^2 \Phi_t] + \langle Av, D\Phi_t \rangle + (F(v), D\Phi_t), \qquad \Phi|_{t=0} = \Phi_0. \qquad (9.92)$$
To make it more explicit, we shall use the variational derivative notation [66].

For a smooth function $\Phi$ on $H$, we write
$$(D\Phi(v), h) = \int_D \frac{\delta \Phi(v,t)}{\delta v(x)}\, h(x)\, dx,$$
$$(R\, D^2 \Phi\, g, h) = \int_{D^3} r(x,y)\, \frac{\delta^2 \Phi(v,t)}{\delta v(y)\, \delta v(z)}\, g(x)\, h(z)\, dx\, dy\, dz,$$
for $g, h \in H$. Then the equation (9.92) can be written as
$$\begin{aligned}
\frac{\partial}{\partial t}\Phi(v,t) &= \frac{1}{2} \int_D \int_D r(x,y)\, \frac{\delta^2 \Phi(v,t)}{\delta v(x)\, \delta v(y)}\, dx\, dy
+ \int_D [(\Delta - \eta)\, v(x)]\, \frac{\delta \Phi(v,t)}{\delta v(x)}\, dx \\
&\quad + \int_D \int_D k(x,y)\, f(v(y))\, \frac{\delta \Phi(v,t)}{\delta v(x)}\, dx\, dy, \quad \text{a.e. } v \in H, \\
\Phi(v,0) &= \Phi_0(v).
\end{aligned} \qquad (9.93)$$
We now show that, under condition (C), the conditions (B.1) and (B.2) for Theorem 5.7 are satisfied with $B(\Phi) = (F, D\Phi)$. By condition (C), we can show that
$$|||B(\Phi)|||^2 = |||(R^{-1/2} F,\, R^{1/2} D\Phi)|||^2 \le C^2\, |||R^{1/2} D\Phi|||^2 \le C^2\, |||\Phi|||_1^2,$$
which implies (B.1). Similarly,
$$|||B(\Phi) - B(\Psi)|||^2 = |||(R^{-1/2} F,\, R^{1/2}[D\Phi - D\Psi])|||^2 \le C^2\, |||R^{1/2}[D\Phi - D\Psi]|||^2 \le C^2\, |||\Phi - \Psi|||_1^2,$$
which verifies (B.2). Therefore, by applying Theorem 5.7, we can conclude that the Kolmogorov equation (9.92) has a unique strong solution $\Phi_t$, which is continuous in $L^2(H, \mu)$ for $0 \le t \le T$ and square-integrable over $(0,T)$ in $\mathcal{H}_1$. Moreover, the solution has the probabilistic representation: $\Phi(v,t) = E\{\Phi_0(u_t)\,|\,u_0 = v\}$.
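The probabilistic representation is directly usable for computation: $\Phi(v,t)$ can be estimated by averaging $\Phi_0$ over simulated paths started at $v$. A minimal sketch for a hypothetical one-dimensional stand-in — the Ornstein–Uhlenbeck equation $du = -a\,u\,dt + \sigma\,dW_t$ with $\Phi_0(u) = u^2$, whose expectation is known in closed form — rather than the SPDE (9.91) itself:

```python
import numpy as np

# Monte Carlo estimate of Phi(v, t) = E[Phi_0(u_t) | u_0 = v] for the
# hypothetical scalar OU equation du = -a u dt + sigma dW, Phi_0(u) = u^2.
a, sigma, v0, t = 1.5, 0.8, 2.0, 1.0

def mc_phi(n_paths=100_000, n_steps=200, seed=7):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    u = np.full(n_paths, v0)
    for _ in range(n_steps):          # Euler-Maruyama time stepping
        u = u - a * u * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return float(np.mean(u ** 2))

# Closed-form second moment of the OU process started at v0.
exact = v0**2 * np.exp(-2*a*t) + sigma**2 * (1.0 - np.exp(-2*a*t)) / (2*a)
estimate = mc_phi()
```

The Monte Carlo average converges to the Kolmogorov-equation solution at rate $O(n^{-1/2})$ in the number of paths, plus an Euler discretization bias.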

Remark: An extensive treatment of the Kolmogorov equations and the


parabolic equations in infinite dimensions can be found in the book [23], where
many recent research papers on this subject are given in its bibliography.

9.6 Characteristic Functionals and Hopf Equations


Let us return to the autonomous stochastic evolution equation (9.4), which is recaptured below for convenience:
$$du_t = A u_t\, dt + F(u_t)\, dt + \Sigma(u_t)\, dW_t, \quad t \ge 0, \qquad u_0 = \zeta. \qquad (9.94)$$

Assume the conditions for the existence and uniqueness of a strong solution in Theorem 6-7.5 are met. Given the initial distribution $\mu_0$ for $\zeta$, let $\mu_t$ denote the solution measure for $u_t$ so that
$$\mu_t(S) = \operatorname{Prob}\{\omega: u_t(\omega) \in S\},$$
where $S$ is a Borel subset of $H$. For every $t \in [0,T]$, the characteristic functional for the solution $u_t$ is defined by
$$\varphi_t(\xi) = E\,\{e^{i(u_t, \xi)}\} = \int_H e^{i(v,\xi)}\, \mu_t(dv), \quad \xi \in H, \qquad (9.95)$$
with
$$\varphi_0(\xi) = E\,\{e^{i(\zeta, \xi)}\} = \int_H e^{i(v,\xi)}\, \mu_0(dv), \quad \xi \in H. \qquad (9.96)$$

Here our goal is to study the time evolution of the solution measure $\mu_t$ by means of its characteristic functional $\varphi_t$. In particular, we are interested in the derivation of an evolution equation for $\varphi_t$, if possible. Instead of working with the solution measure, the characteristic functional is much easier to deal with analytically. Moreover, it may serve as the moment-generating functional to compute moments of the solution. To do so, we need the following lemma, which can be proved as in finite dimensions (see, e.g., [27]).

Lemma 6.1  Suppose that $u_t$ has $k$ finite moments such that
$$\sup_{0 \le t \le T} E\, \|u_t\|^k < \infty, \quad k = 1, 2, \ldots, m.$$
Then, for each $t \in [0,T]$, $\varphi_t(\xi)$ is $k$-times continuously differentiable and the Fréchet derivatives $D^k \varphi_t$ are bounded in the sense that
$$\sup_{0 \le t \le T} \|D^k \varphi_t\|_{L_2^{(k)}} < \infty, \quad k = 1, 2, \ldots, m,$$
where $\|\cdot\|_{L_2^{(k)}}$ denotes the norm in $L_2^{(k)}(H \times \cdots \times H; \mathbb{R})$, or the Hilbert–Schmidt norm of a $k$-linear form.
Conversely, if $\varphi_t(\xi)$ is $k$-times continuously differentiable at $\xi = 0$, then $u_t$ has $k$ finite moments and, for any $h_i \in H$, $i = 1, \ldots, k$,
$$E\,\{(u_t, h_1) \cdots (u_t, h_k)\} = (-i)^k\, D^k \varphi_t(0)(h_1 \cdots h_k). \qquad (9.97)$$
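Formula (9.97) can be checked numerically in one dimension: finite-difference derivatives of the empirical characteristic function at $\xi = 0$ reproduce the sample moments. A minimal sketch (the sampled distribution and step size are illustrative choices):

```python
import numpy as np

# Check E[X] = -i * phi'(0) and E[X^2] = -phi''(0) for the empirical
# characteristic function phi(xi) = mean(exp(i X xi)) of a sample X.
rng = np.random.default_rng(3)
X = rng.normal(loc=1.0, scale=0.5, size=200_000)

def phi(xi):
    return np.mean(np.exp(1j * X * xi))

h = 1e-3                                        # finite-difference step
d1 = (phi(h) - phi(-h)) / (2 * h)               # ~ phi'(0) = i E[X]
d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2   # ~ phi''(0) = -E[X^2]
m1 = (-1j * d1).real
m2 = (-d2).real
```

The finite-difference error is $O(h^2)$ against the empirical moments, so the recovered values match the sample mean and second moment to high accuracy.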

By Theorem 7-2.2, the Itô formula holds for a strong solution. By applying the formula to the exponential functional $\exp\{i(u_t, \xi)\}$ and then taking the expectation, we get, for $\xi \in V$,
$$\begin{aligned}
\varphi_t(\xi) &= E\,\Big\{ e^{i(\zeta,\xi)} + \int_0^t e^{i(u_s,\xi)}\, \big[\, i \langle A u_s, \xi \rangle + i (F(u_s), \xi) - \tfrac{1}{2} (Q_R(u_s)\,\xi, \xi) \big]\, ds \Big\} \\
&= \varphi_0(\xi) + \int_0^t E\, e^{i(u_s,\xi)}\, \big\{\, i \langle A u_s + F(u_s), \xi \rangle - \tfrac{1}{2} (Q_R(u_s)\,\xi, \xi) \big\}\, ds,
\end{aligned} \qquad (9.98)$$
where $Q_R(v) = \Sigma(v)\, R\, \Sigma^{*}(v)$.
Referring to (9.95) and (9.97), equation (9.98) can be rewritten as
$$\int_H e^{i(v,\xi)}\, \mu_t(dv) = \int_H e^{i(v,\xi)}\, \mu_0(dv) + \int_0^t \int_H e^{i(v,\xi)}\, G(\xi, v)\, \mu_s(dv)\, ds, \qquad (9.99)$$
for $t \in [0,T]$, $\xi \in V$, where
$$G(\xi, v) = i\, \langle A v + F(v), \xi \rangle - \frac{1}{2}\, (Q_R(v)\,\xi, \xi). \qquad (9.100)$$
This equation is sometimes known as the Hopf equation for equation (9.94). Given the initial distribution $\mu_0$, the equation (9.99) can be regarded as an integral equation for the evolution of the solution measure $\mu_t$. It can be shown that the equation has a unique solution. By a finite-dimensional projection in $V_n$ and making use of some weak convergence results, the following theorem is proved in (pp. 378-381, [93]). Since the proof is somewhat long, it will not be reproduced here.

Theorem 6.2  Assume that the conditions for the existence Theorem 6-7.5 hold. Then, for a given initial probability measure $\mu_0$ on $H$, there exists a unique probability measure $\mu_t$ on $H$ such that it satisfies the equation (9.99) and the condition
$$\sup_{0 \le t \le T} \int_H \|v\|^2\, \mu_t(dv) + \int_0^T \int_H \|v\|_V^2\, \mu_t(dv)\, dt < \infty.$$

It would be of great interest to obtain an equation for the characteristic functional $\varphi_t$ from the Hopf equation (9.98). Unfortunately, this is only possible when the coefficients $F(v)$ and $\Sigma(v)$ are of a polynomial type. For the purpose of illustration, we will consider only the linear equation with an additive noise:
$$du_t = A u_t\, dt + \alpha\, u_t\, dt + dW_t, \quad t \ge 0, \qquad u_0 = \zeta, \qquad (9.101)$$

where $A: V \to V'$ is a self-adjoint operator, $\alpha \in \mathbb{R}$, $W_t$ is an $H$-valued Wiener process with trace-finite covariance operator $R$ commuting with $A$, and the initial distribution $\mathcal{D}\{\zeta\} = \mu_0$. In this case the equation (9.100) becomes
$$G(\xi, v) = i\, \langle A v, \xi \rangle + i\, \alpha\, (v, \xi) - \frac{1}{2}\, (R\xi, \xi). \qquad (9.102)$$

Now we need the following lemma.

Lemma 6.3  The following identities hold:
$$\langle \eta, D\varphi_t(\xi) \rangle = i \int_H \langle \eta, v \rangle\, e^{i(v,\xi)}\, \mu_t(dv), \quad \eta \in V', \qquad (9.103)$$
and
$$\langle D^2 \varphi_t(\xi)\, g, h \rangle = -\int_H (g, v)(h, v)\, e^{i(v,\xi)}\, \mu_t(dv), \quad g, h \in H, \qquad (9.104)$$
where $D\varphi(\xi)$ denotes the Fréchet derivative of $\varphi$ at $\xi \in V$.

Proof. Notice that, by the existence theorem for (9.94), $\mu_t$ is supported in $V$. Similar to the drift term in the Kolmogorov equation, the pairing $\langle \eta, v \rangle$, though undefined for $v \in H$, may be defined $\mu_t$-a.e. in $H$ as a random variable. In (9.103) and (9.104), by differentiating $\varphi_t(\xi)$ in the integral (9.95) and interchanging the differentiation and integration, these two identities can be easily verified. The formal approach can be justified by invoking Lemma 6.1, the Fubini theorem, and the dominated convergence theorem.

By means of this lemma, we can derive a differential equation for the characteristic functional, as stated in the following theorem.

Theorem 6.4  For the linear equation (9.101), let the conditions for Lemma 5.4 hold. Then the characteristic functional $\varphi_t$ for the solution process $u_t$ satisfies the Hopf equation:
$$\varphi_t(\xi) = \psi(\xi) + \int_0^t \Big\{ \frac{1}{2}\operatorname{Tr}[R\, D^2 \varphi_s(\xi)] + \langle A\xi, D\varphi_s(\xi) \rangle + \alpha\, (\xi, D\varphi_s(\xi)) \Big\}\, ds, \qquad (9.105)$$
for $t \in [0,T]$, $\xi \in V$, with $\varphi_0(\xi) = \psi(\xi) = E\,\{e^{i(\zeta,\xi)}\}$.
If $E\,\|\zeta\|^2 < \infty$, then the equation has a unique strong solution, which satisfies the Hopf equation a.s.

Proof. In view of (9.101) and (9.102), by applying Lemma 6.3, the equation (9.98) leads to the Hopf equation (9.105).

Let $\tilde{W}_t$ be a Wiener process in $H$ with covariance operator $R$ and consider the stochastic equation
$$\bar{u}_t = \xi + \int_0^t (A + \alpha I)\, \bar{u}_s\, ds + \tilde{W}_t. \qquad (9.106)$$
It is clear that the Hopf equation (9.105) is in fact the Kolmogorov equation for the solution $\bar{u}_t$. It follows from Theorem 5.7 that there is a unique strong solution. By the condition $E\,\|\zeta\|^2 < \infty$ and $\psi \in C_b^2(H)$, it follows that $\varphi_t(\xi) = E\, \psi(\bar{u}_t)$ is a strong solution, which, by Theorem 2.4, satisfies the Hopf equation a.s.

As a simple example, consider the linear parabolic Itô equation in $\mathbb{R}^d$:
$$\frac{\partial u}{\partial t} = (\Delta + \alpha)\, u + \dot{W}(x,t), \quad x \in \mathbb{R}^d,\ t \in (0,T), \qquad u(x,0) = \zeta(x), \qquad (9.107)$$
where $W_t = W(\cdot, t)$ is an $R$-Wiener process in $H = L^2(\mathbb{R}^d)$, $\Delta$ commutes with $R$ in $H^2$, and $\zeta$ is a random variable in $H$ with moments of all orders. Therefore, the characteristic functional $\psi(\xi)$ is analytic at $\xi = 0$ and it has a Taylor series expansion:
$$\psi(\xi) = 1 + \sum_{n=1}^{\infty} \frac{i^n}{n!}\, \psi^{(n)}(\xi, \ldots, \xi), \qquad (9.108)$$

where
$$\psi^{(n)} = D^n \psi(0)$$
is an $n$-linear form on $(H)^n = H \times \cdots \times H$, and its kernel is denoted by $\psi^{(n)}(x_1, \ldots, x_n)$, for $x_i \in \mathbb{R}^d$, $i = 1, \ldots, n$; $\psi^{(n)}$ is the $n$-point moment function of $\zeta$.
In the variational derivative notation, the Hopf equation (9.105) for equation (9.107) can be written as
$$\frac{\partial}{\partial t}\varphi_t(\xi) = \frac{1}{2} \int\!\!\int r(x,y)\, \frac{\delta^2 \varphi_t(\xi)}{\delta \xi(x)\, \delta \xi(y)}\, dx\, dy + \int [(\Delta + \alpha)\, \xi(x)]\, \frac{\delta \varphi_t(\xi)}{\delta \xi(x)}\, dx, \qquad \varphi_0(\xi) = \psi(\xi), \qquad (9.109)$$

for a.e. $\xi \in H$. Since $\zeta$ has all of its moments, so does the solution $u_t$. By a Taylor series expansion for $\varphi_t(\xi)$, we get
$$\varphi_t(\xi) = 1 + \sum_{n=1}^{\infty} \frac{i^n}{n!}\, \varphi_t^{(n)}(\xi, \ldots, \xi), \qquad (9.110)$$
where $\varphi_t^{(n)}$ is an $n$-linear form as yet to be determined. Let $\mu_t^{(n)}(x_1, \ldots, x_n)$ denote its kernel, which is the moment function of order $n$ for the solution $u_t$, so that
$$\varphi_t^{(n)}(\xi, \ldots, \xi) = (\mu_t^{(n)}, \xi^{(n)}).$$
By substituting the series (9.108) and (9.110) into the equation (9.109), one obtains the sequence of parabolic equations for the moment functions as follows:
$$\frac{\partial \mu_t^{(1)}}{\partial t} = (\Delta + \alpha)\, \mu_t^{(1)}, \qquad \mu_0^{(1)}(x) = \psi^{(1)}(x),$$
$$\frac{\partial \mu_t^{(2)}}{\partial t} = (\Delta_1 + \Delta_2 + 2\alpha)\, \mu_t^{(2)}(x_1, x_2) + r(x_1, x_2), \qquad \mu_0^{(2)}(x_1, x_2) = \psi^{(2)}(x_1, x_2), \qquad (9.111)$$

where $\Delta_i$ denotes the Laplacian in the variable $x_i$. For $n > 2$, the moment function $\mu_t^{(n)}$ satisfies
$$\frac{\partial \mu_t^{(n)}}{\partial t} = \sum_{i=1}^{n} (\Delta_i + \alpha)\, \mu_t^{(n)}(x_1, \ldots, x_n) + \frac{1}{2} \sum_{i \ne j;\, i,j=1}^{n} r(x_i, x_j)\, \mu_t^{(n-2)}(x_1, \ldots, \hat{x}_i, \ldots, \hat{x}_j, \ldots, x_n),$$
$$\mu_0^{(n)}(x_1, \ldots, x_n) = \psi^{(n)}(x_1, \ldots, x_n), \qquad (9.112)$$
where $\hat{x}_i, \hat{x}_j$ mean that the variables $x_i$ and $x_j$ are deleted from the argument of $\mu_t^{(n-2)}$. Suppose that the covariance function $r(x,y)$ is bounded and continuous, and $r(x,x)$ is integrable. Similar to the proof of Theorem 4-6.2 for moment functions, it can be shown that the solutions $\mu_t^{(n)}$ of the moment equations are bounded, so that the series (9.110) for the characteristic functional $\varphi_t(\xi)$ converges uniformly on any bounded subset of $[0,T] \times H$.
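The first moment equation in (9.111) is a heat-type equation and integrates immediately in Fourier variables. A minimal sketch on a hypothetical periodic interval $[0, 2\pi)$, used as a stand-in for $\mathbb{R}^d$: each Fourier mode of $\mu_t^{(1)}$ is multiplied by $e^{(\alpha - k^2)t}$, so in particular the spatial mean grows exactly like $e^{\alpha t}$:

```python
import numpy as np

# Spectral integration of d mu/dt = (Laplacian + alpha) mu on the
# hypothetical periodic interval [0, 2*pi): Fourier mode k is
# multiplied by exp((alpha - k^2) * t).
alpha, t = 0.3, 0.5
N = 256
x = np.linspace(0.0, 2*np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0/N)            # integer wavenumbers 0, 1, ..., -1

mu0 = np.exp(np.cos(x))                   # smooth initial moment function
mu_hat = np.fft.fft(mu0) * np.exp((alpha - k**2) * t)
mu_t = np.real(np.fft.ifft(mu_hat))

mean_growth = mu_t.mean() / mu0.mean()    # k = 0 mode: exactly exp(alpha*t)
```

The same mode-by-mode exponential applies to (9.112), with the lower-order moment entering as a forcing term, as in the equation for $\mu_t^{(2)}$.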

Remarks: The Hopf equation was introduced by its namesake for the statistical solutions of the Navier–Stokes equations [41]. This subject was treated in detail in the book [30]. This type of functional equation has also been applied to other physical problems, such as in quantum field theory [66]. The Hopf equation for the characteristic functional, if it exists, is a reasonable substitute for the Kolmogorov forward equation for the probability density function, which does not exist in infinite dimensions in the Lebesgue sense.
Bibliography

[1] Adams, R.A.: Sobolev Spaces, Academic Press, New York, 1975.

[2] Applebaum, D.: Lévy Processes and Stochastic Calculus, Cambridge Studies in Advanced Mathematics, Second Edition, Cambridge University Press, 2009.

[3] Arnold, L.: Stochastic Differential Equations: Theory and Applications,


John Wiley & Sons, New York, 1974.

[4] Baklan, V.V.: On the existence of solutions of stochastic equations in


Hilbert space, Depov. Akad. Nauk. Ukr. USR., 10 (1963), 1299-1303.

[5] Bensoussan, A., Temam, R.: Equations aux dérivées partielles stochastiques non linéaires, Israel J. Math., 11 (1972), 95-129.

[6] Bensoussan, A., Temam, R.: Equations stochastiques du type Navier
Stokes, J. Funct. Analy., 13 (1973), 195-222.

[7] Bensoussan, A., Viot, M.: Optimal control of stochastic linear distributed
parameter systems, SIAM J. Control Optim., 13 (1975), 904-926.

[8] Billingsley, P.: Convergence of Probability Measures, Wiley, New York, 1968.

[9] Budhiraja, A., Dupuis, P. and Maroulas, V.: Large deviations for infinite
dimensional stochastic dynamical systems, Ann. Probab. 36 (2008), 1390-
1420.

[10] Chow, P-L.: Stochastic partial differential equations in turbulence-related


problems, in Probabilistic Analysis and Related Topics, Vol. I, Academic
Press, New York (1978), 1-43.

[11] Chow, P-L.: Large deviations problem for some parabolic Itô equations, Comm. Pure and Appl. Math., 45 (1992), 97-120.

[12] Chow, P-L.: Infinite-dimensional Kolmogorov equations in Gauss–Sobolev spaces, Stoch. Analy. Applic., 14 (1996), 257-282.

[13] Chow, P-L.: Stochastic wave equations with polynomial nonlinearity, Ann.
Appl. Probab., 12 (2002), 84-96.


[14] Chow, P-L.: Unbounded positive solutions of nonlinear parabolic Itô equations, Comm. Stoch. Analy., 3 (2009), 211-222.
[15] Chow, P-L., Menaldi, J-L.: Exponential estimates in exit probability for
some diffusion processes in Hilbert spaces, Stochastics, 29 (1990), 377-
393.
[16] Chow, P-L., Menaldi, J.L.: Stochastic PDE for nonlinear vibration of
elastic panels, Diff. Integ. Equations, 12 (1999), 419-434.
[17] Chow, P-L., Khasminskii, R.Z.: Stationary solutions of nonlinear
stochastic evolution equations, Stoch. Analy. Applic., 15 (1997), 671-699.
[18] Chung, K-L., Williams, R.J.: Introduction to Stochastic Integration, Birkhäuser, Boston, 1990.
[19] Courant, R., Hilbert, D.: Methods of Mathematical Physics II, Inter-
science, New York, 1962.
[20] Daleskii, Yu. L.: Differential equations with functional derivatives and
stochastic equations for generalized processes, Dokl. Akad. Nauk. SSSR,
166 (1966), 220-223.
[21] Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions,
Cambridge University Press, Cambridge, 1992.
[22] Da Prato, G., Zabczyk, J.: Ergodicity for Infinite Dimensional Systems,
Cambridge Univ. Press, Cambridge, 1996.
[23] Da Prato, G., Zabczyk, J.: Second Order Partial Differential Equations
in Hilbert Spaces, Cambridge Univ. Press, Cambridge, UK, 2002.
[24] Dawson, D.A.: Stochastic evolution equations, Math. Biosci., 15 (1972),
287-316.
[25] Dembo, A., Zeitouni, O.: Large Deviations Techniques and Applications, Springer-Verlag, New York, 1998.
[26] Deuschel, J-D., Stroock, D.W.: Large Deviations, Academic Press,
New York, 1989.
[27] Doob, J.L.: Stochastic Processes, John Wily, New York, 1953.
[28] Evans, L.C.: Partial Differential Equations, Graduate Studies in
Math.,Vol. 19, Amer. Math. Soc., Providence, RI, 2010.
[29] Flandoli, F., Maslowski, B.: Ergodicity of the 2-D NavierStokes equations
under random perturbations, Comm. Math. Phys., 171 (1995), 119-141.
[30] Fleming, W.A., Viot, M.: Some measure-valued processes in population
genetic theory, Indiana Univ. Math. J., 28 (1979), 817-843.

[31] Folland, G.B.: Introduction to Partial Differential Equations, Princeton


Univ. Press, Princeton, 1976.
[32] Freidlin, M.I, Wentzell, A.D: Random Perturbations of Dynamical Sys-
tems, Springer-Verlag, New York, 1984.
[33] Friedman, A.: Partial Differential Equations of Parabolic Type, Prentice
Hall, Englewood Cliffs, N.J., 1964.
[34] Funaki, T., Woyczynski, W.A.: Eds., Nonlinear Stochastic PDEs: Hydrodynamic Limit and Burgers Turbulence, IMA Vol. 77, Springer-Verlag, New York, 1996.
[35] Garabedian, P.R.: Partial Differential Equations, John Wiley, New York,
1967.
[36] Gihman, I.I., Skorohod, A.V.: Stochastic Differential Equations, Springer-
Verlag, New York, 1974.
[37] Gross, L.: Abstract Wiener spaces, Proc. 5th. Berkeley Symp. Math.
Statist. and Probab., 2 (1967), 31-42.
[38] Hale, J.K.: Ordinary Differential Equations, Wiley-Interscience, New
York, 1969.
[39] Hausenblas, E., Seidler, J.: A note on maximal inequality for stochastic convolutions, Czech. Math. J., 51 (2001), 785-790.
[40] Heath, D, Jarrow, R. and Morton, A.: Bond pricing and the term struc-
ture of interest rates: A new methodology, Econometrica, 61 (1992), 77-
105.
[41] Hopf, E: Statistical hydrodynamics and functional calculus, J. Rational
Mech. Analy., 1 (1952), 87-123.
[42] Hörmander, L.: Linear Partial Differential Operators, Springer-Verlag,
Berlin, 1963.
[43] Hutson, V., Pym, J.S.: Applications of Functional Analysis and Operator
Theory, Academic Press, New York, 1980.
[44] Ikeda, N., Watanabe, S.: Stochastic Differential Equations and Diffusion
Processes, North-Holland, Amsterdam, 1989.
[45] Itô, K.: Stochastic integral, Proc. Imper. Acad. Tokyo, 20 (1944), 519-524.

[46] Itô, K.: On a stochastic integral equation, Proc. Imper. Acad. Tokyo, 22 (1946), 32-35.
[47] Khasminskii, R.Z.: Stochastic Stability of Differential Equations, Second
Edition, Springer-Verlag, Berlin-Heidelberg, 2012.

[48] Keller, J.B.: On stochastic equations and wave propagation in random


media, Proc. Symp. Appl. Math., 16 (1964), 145-170.

[49] Klyatskin, V.I. Tatarski, V.I.: The parabolic equation approximation for
wave propagation in a medium with random inhomogeneities, Sov. Phys.
JETP, 31 (1970), 335-339.

[50] Kotelenez, P.: A maximal inequality for stochastic convolution integrals


on Hilbert spaces and space-time regularity of linear stochastic partial
differential equations, Stochastics, 21 (1987), 245-358.

[51] Krylov, N.V: On Lp -theory of stochastic partial differential equations in


the whole space, SIAM J. Math. Analy., 27 (1996), 313-340.

[52] Krylov, N.V., Rozovskii, B.L.: On the Cauchy problem for linear stochas-
tic partial differential equations, Izv. Akad. Nauk. SSSR., 41 (1977), 1267-
1284.

[53] Krylov, N.V., Rozovskii, B.L.: Stochastic evolution equations, J. Soviet


Math., 14 (1981), 1233-1277.

[54] Kunita, H.: Stochastic flows and stochastic partial differential equations,
Proc. Intern. Congr. Math., Berkeley, (1986), 1021-1031.

[55] Kunita, H.: Stochastic Flows and Stochastic Differential Equations, Cam-
bridge University Press, Cambridge, 1990.

[56] Kuo, H-H., Piech, M.A.: Stochastic integrals and parabolic equations in
abstract Wiener space, Bull. Amer. Math. Soc., 79 (1973), 478-482.

[57] Kuo, H-H.: Gaussian Measures in Banach Spaces, Lecture Notes in Math.
No. 463, Springer-Verlag, New York, 1975.

[58] Kuo, H-H.: White Noise Distribution Theory, CRC Press, New York,
1996.

[59] Kuo, H-H.: Introduction to Stochastic Integration, Springer, New York,


2006.

[60] Landau, L., Lifshitz, E.: Fluid Mechanics, Addison-Wesley, New York,
1963.

[61] Lions, J.L.: Equations Différentielles Opérationnelles et Problèmes aux Limites, Springer-Verlag, New York, 1961.

[62] Lions, J.L., Magenes, E.: Nonhomogeneous Boundary-Value Problems and


Applications, Vol. I, Springer-Verlag, New York, 1972.

[63] Liu, K.: Stability of Infinite Dimensional Stochastic Differential


Equations with Applications, Chapman-Hall/CRC, New York, 2006.

[64] Lou, Y., Nagylaki, T., and Ni, W-M.: An introduction to migration-selection models, Disc. and Conti. Dynam. Systems, 33 (2013), 4349-4373.

[65] Malliavin, P.: Stochastic Analysis, Springer-Verlag, New York, 1997.

[66] Martin, T.M., Segal, I.: Eds., Analysis in Function Spaces, The MIT
Press, Cambridge, Mass., 1964.

[67] McKean, H. P. Jr.: Stochastic Integrals, Academic Press, New York, 1969.

[68] Menaldi, J. L., Sritharan, S. S.: Stochastic 2D Navier-Stokes equations,


Appl. Math. and Opitim., 46 (2002), 31-53.

[69] Metevier, M., Pellaumail, J.: Stochastic Integration, Academic Press, New
York, 1980.

[70] Metevier, M.: Semimartingales: A Course in Stochastic Processes, Walter


de Gruyter & Co., Paris, 1982.

[71] Meyer, P.A.: Notes sur les processus d'Ornstein–Uhlenbeck, Séminaire de Prob. XVI, LNM 920, Springer-Verlag, (1982), 95-133.

[72] Mizohata, S.: The Theory of Partial Differential Equations, Cambridge University Press, Cambridge, UK, 1973.

[73] Musiela, M.: Stochastic PDEs and the term structure model, Jour. Internat. de France IGR-AFFI, La Baule, 69 (1993).

[74] Øksendal, B.: Stochastic Differential Equations, Sixth Edition, Springer, Berlin-Heidelberg, New York, 2007.

[75] Pardoux, E.: Equations aux dérivées partielles stochastiques non linéaires monotones, Thèse, Université Paris, 1975.

[76] Pardoux, E.: Stochastic partial differential equations and filtering of dif-
fusion processes, Stochastics, 3 (1979), 127-167.

[77] Pazy, A.: Semigroups of Linear Operators and Applications to Partial


Differential Equations, Springer-Verlag, New York, 1983.

[78] Peszat, S., Zabczyk, J.: Stochastic Partial Differential Equations with Lévy Noise: An Evolution Equation Approach, Encyclop. Math. and Appls. Vol. 113, Cambridge Univ. Press, Cambridge, UK, 2007.

[79] Piech, M.A.: Diffusion semigroups on abstract Wiener space, Trans.


Amer. Math. Soc., 166 (1972), 411-430.

[80] Piech, M.A.: The OrnsteinUhlenbeck semigroups in an infinite dimen-


sional L2 setting, J. Funct. Analy. 18 (1975), 271-285.

[81] Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion,
Springer, Berlin, 1999.
[82] Rozovskii, B.L.: Stochastic Evolution Equations: Linear Theory and Ap-
plications to Non-linear Filtering, Kluwer Academic Pub., Boston, 1990.
[83] Sato, K.I.: Lévy Processes and Infinitely Divisible Distributions, Cambridge University Press, Cambridge, 1999.
[84] Smoller, J.: Shock Waves and ReactionDiffusion Equations, Springer-
Verlag, New York, 1983.
[85] Sowers, R.: Large deviations for the invariance measure for the reaction
diffusion equation with non-Gaussian perturbations, Probab. Theory Re-
lat. Fields, 92 (1992), 393-421.
[86] Stroock, D.W., Varadhan, S.R.S.: Multidimensional Diffusion Processes,
Springer-Verlag, Berlin, 1979.
[87] Sz.-Nagy, B., Foias, C.: Harmonic Analysis of Operators on Hilbert
Spaces, North-Holland, Amsterdam, 1970.
[88] Treves, F.: Basic Linear Partial Differential Equations, Academic Press,
New York, 1975.
[89] Temam, R.: Infinite-Dimensional Dynamical Systems in Mechanics and
Physics, Springer-Verlag, New York, 1988.
[90] Tubaro, L.: An estimate of Burkholder type for stochastic processes defined by stochastic integrals, Stoch. Analy. and Appl., 2 (1984), 187-192.
[91] Varadhan, S.R.S.: Large Deviations and Applications, SIAM Pub.
Philadelphia, 1984.
[92] Varadhan, S.R.S.: Stochastic Processes, Courant Institute Lecture Notes, New York, 1968.
[93] Vishik, M.J., Fursikov, A.V.: Mathematical Problems in Statistical Hy-
drodynamics, Kluwer Academic Pub., Boston, 1988.
[94] Walsh, J.B.: An introduction to stochastic partial differential equations,
Lect. Notes Math., 1180 (1984), Springer-Verlag, Berlin, 265-435.
[95] Watanabe, S.: Stochastic Differential Equations and Malliavin Calculus,
Tata Inst. Fund. Research, Springer-Verlag, Berlin, 1984.
[96] Yosida, K.: Functional Analysis, Springer-Verlag, New York, 1968.
[97] Zakai, M.: On the optimal filtering of diffusion processes, Z. Wahrsch.
Verw. Geb., 11 (1969), 230-243.