
Signed-Measure Valued Stochastic Partial Differential Equations
with Applications in 2D Fluid Dynamics

by
Bradley T. Seadler

Submitted in partial fulfillment of the requirements
for the degree of Doctor of Philosophy

Thesis Advisor: Peter Kotelenez, Ph.D.

Department of Mathematics
CASE WESTERN RESERVE UNIVERSITY

May 2012

CASE WESTERN RESERVE UNIVERSITY
SCHOOL OF GRADUATE STUDIES

We hereby approve the dissertation of Bradley Seadler, candidate for
the Doctor of Philosophy degree.

Prof. Peter Kotelenez (chair of the committee)
Prof. Elizabeth Meckes
Prof. Marshall Leitman
Prof. Manfred Denker

Date of Defense: March 22, 2012

Contents

1 Introduction
2 Completeness and Signed Measures . . . . . . . . . . . . . . 11
  2.1 Definitions of Metrics . . . . . . . . . . . . . . . . . . 11
  2.2 Incompleteness Issues . . . . . . . . . . . . . . . . . . 14
  2.3 New Results . . . . . . . . . . . . . . . . . . . . . . . 17
3 Signed Measure Valued SPDE . . . . . . . . . . . . . . . . . 27
  3.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . 27
  3.2 Existence and Uniqueness of SODE . . . . . . . . . . . . 29
  3.3 SPDE Results . . . . . . . . . . . . . . . . . . . . . . 42
  3.4 Extension by Continuity . . . . . . . . . . . . . . . . . 45
  3.5 Hahn-Jordan Decompositions . . . . . . . . . . . . . . . 50
  3.6 New Results . . . . . . . . . . . . . . . . . . . . . . . 55
4 Smooth Stochastic Navier-Stokes . . . . . . . . . . . . . . . 61
  4.1 Formulation . . . . . . . . . . . . . . . . . . . . . . . 61
  4.2 New Results . . . . . . . . . . . . . . . . . . . . . . . 68
5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 72
6 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . 76
  6.1 Fluid Dynamics . . . . . . . . . . . . . . . . . . . . . 76
  6.2 Stochastics . . . . . . . . . . . . . . . . . . . . . . . 78
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . 97

Acknowledgements

There are several people whom I would like to thank for their support. Without their help and guidance, I certainly would not have been successful during my time here at Case. First and most importantly, I would like to thank my research advisor, Prof. Peter Kotelenez, for his endless support. His patience and guidance throughout the process are something I never took for granted. I was very lucky to have him to turn to for ideas and guidance. He also taught me a lot of life lessons, and he was the reason I decided on the Case math program and my field of study.

Prof. Daniela Calvetti has been a wonderful supporter of the graduate students at Case, but she has been an especially strong advocate for me. I am indebted to her for the innumerable ways that she has benefited my education and career. During my time at Case, Prof. Calvetti has put a great deal of time and energy into creating a welcoming environment for graduate studies. I am very grateful for this, as it fostered very strong friendships with my fellow students and our professors.

Prof. Elizabeth Meckes, Prof. Marshall Leitman, Prof. Manfred Denker, and Prof. Peter Kotelenez put a great deal of energy and time into serving on my thesis committee. I would like to thank them for their effort and time in evaluating my thesis.

Chris Butler started me on this journey through mathematics. He gave me my first exposure to the math department when he hired me as an S.I. in my second year of undergrad. I am very grateful to him for giving me this opportunity.

Diane Robinson, Jeanne Jurkovich, and Gaythreesa Lewis are the unsung heroines of the math department. They have taken care of so many issues for me during the last seven years. I am very appreciative of all their efforts.

The graduate students at Case are so much more than just students in the same program. We are united by the same difficult trials and stresses throughout our years here. They have provided me with friendship and motivation, as well as distraction when I needed it. Thank you for all the memories at Yost.

My family always took the time to listen to my excitement and my frustration. I cannot thank them enough for the support that they provide me.

Finally, my biggest supporter is my girlfriend, Emily. She has been by my side throughout my entire time in the Ph.D. program. She has seen my biggest successes and hardest failures, and she has always provided support and an ear to let me vent. If there is anyone who is more excited for me to be completing my Ph.D., it is she. I will always be indebted to her for the patience she has shown me during the difficult periods of the program. I dedicate this thesis to her.

Abstract

Signed Measure Valued Stochastic Partial Differential Equations

BRADLEY T. SEADLER

We note the interesting phenomenon that the Kantorovich-Rubinstein metric is not complete on the space of signed measures. Consequently, we introduce a new metric with a useful partial completeness property. With this metric, a general result about the Hahn-Jordan decomposition of solutions of stochastic partial differential equations is shown. These general results are applied to the smoothed Stochastic Navier-Stokes equations. As an application, we derive that the vorticity of the fluid is conserved for a solution of the Stochastic Navier-Stokes equations.

Introduction

The behavior of a fluid in motion is a phenomenon that intrigues and fascinates mathematicians and untrained observers alike. Historically, this fascination led to the development of the field of fluid dynamics. From Archimedes, Isaac Newton, and Leonhard Euler to George Stokes, Claude Navier, and Olga Ladyzhenskaya, many of the great mathematical minds contributed to the development of the field. To completely discuss even the major contributions to fluid dynamics would extend well beyond the scope of this work.^1 Rather, the Euler and Navier-Stokes equations shall provide the sufficient fluid dynamic basis for the following analysis.
To introduce these equations, consider an incompressible fluid^2 in motion in R². To understand the evolution of the fluid in time, one examines the behavior of the velocity field and the vorticity. The vorticity is a scalar that represents the tendency of a fluid to rotate at a given point. For a rigid body, the vorticity is twice the angular velocity. Denote the velocity field by U(r, t) = (U₁(r, t), U₂(r, t))ᵀ ∈ R² and the vorticity of the fluid by X(r, t) ∈ R, where T denotes transpose, r ∈ R², and t ≥ 0. Assume that the vorticity satisfies:

    ∂X/∂t (r, t) = ν ΔX(r, t) − ∇ · (U(r, t) X(r, t)),
    X(r, t) = curl U(r, t) = ∂U₂/∂r₁ − ∂U₁/∂r₂,        (1.1)
    ∇ · U ≡ 0,

where ν is the kinematic viscosity of the fluid, Δ is the Laplacian, ∇ is the gradient, and · is the inner product on R². For the case of an ideal or inviscid fluid, ν = 0, and these equations represent the Euler equations in vorticity form. For ν > 0, these equations represent the Navier-Stokes equations in vorticity form.

^1 For a full introduction to the field of mathematical fluid dynamics, refer to the classic texts of [Lad69] or [CM93].
^2 An incompressible fluid is a fluid which has a divergence-free velocity field.


The condition ∇ · U ≡ 0 is known as the incompressibility condition, and it has several ramifications for (1.1). For the Euler equations, the vorticity must be conserved along particle paths in the fluid. That is, if r(t, r₀) is the position at time t ≥ 0 of a fluid particle moving under (1.1) with position r₀ at t = 0, then

    X(r(t, r₀), t) = X(r₀, 0).        (1.2)

This follows as (1.1) represents the differential form of the continuity equation^3 for the vorticity, as remarked in [Lon88].
Another consequence of the incompressibility condition is that one can explicitly express the velocity field in terms of the vorticity distribution as follows:^4

    U(r, t) = ∫ K(r − q) X(q, t) dq.        (1.3)

Here ∫ (·) dq denotes integration over R² with respect to Lebesgue measure, and K(·) is the Biot-Savart kernel, which for r = (r₁, r₂) ∈ R² is given by:

    K(r) = (1/2π) ∇⊥ ln(|r|) = (1/(2π|r|²)) (−r₂, r₁),        (1.4)

where ∇⊥ = (−∂/∂r₂, ∂/∂r₁) and |r|² = r₁² + r₂². Solving the Euler or Navier-Stokes equations encounters several difficulties due in part to the singularity of the Biot-Savart kernel at 0. Rather than explicitly solve the equations, many researchers sought to use (1.1) as a way to simulate flows. To create a numerical method for the Euler equations, Alexandre Chorin developed the regularized point-vortex method in [Cho73]. The methods introduced in [Cho73] followed the introduction of vortex simulation in [Ros31].

^3 See Lemma 6.1 for more details.
^4 Refer to Lemma 6.2 for the derivation.
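The kernel (1.4) and a smoothed replacement can be written down directly. The following sketch (Python with NumPy) uses one illustrative bounded cutoff inside |r| < ε; it is not the particular mollifier employed in [Cho73].

```python
import numpy as np

def biot_savart(r):
    """Biot-Savart kernel K(r) = (-r2, r1) / (2*pi*|r|^2), defined away from 0."""
    r = np.asarray(r, dtype=float)
    norm2 = r[0]**2 + r[1]**2
    return np.array([-r[1], r[0]]) / (2.0 * np.pi * norm2)

def biot_savart_mollified(r, eps):
    """A smoothed kernel K_eps agreeing with K(r) for |r| >= eps.

    Inside the ball |r| < eps, the singular factor 1/|r|^2 is replaced by
    1/eps^2 so that the kernel stays bounded; this cutoff is an
    illustrative choice of mollifier, not the one from [Cho73].
    """
    r = np.asarray(r, dtype=float)
    norm = np.hypot(r[0], r[1])
    if norm >= eps:
        return biot_savart(r)
    return np.array([-r[1], r[0]]) / (2.0 * np.pi * eps**2)
```

Note that K(r) is perpendicular to r, reflecting the rotational character of the induced velocity field.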
The point-vortex method in [Cho73] transforms (1.3) to a more convenient form by implementing two techniques. The first is to use mollifiers to avoid the singularity of the Biot-Savart kernel. That is, one replaces K(·) with K_ε(·), where K_ε(r) is a smooth function agreeing with K(r) for 0 < ε < |r| < 1/ε and ε > 0. Consequently, one has the following smoothed form of the velocity field:

    U_ε(r, t) = ∫ K_ε(r − q) X(q, t) dq.        (1.5)

The second technique assumes the initial vorticity, X(r, 0), has N ∈ ℕ point vortices with nonzero intensities, a_k ∈ R, and positions, r_0^k ∈ R², for k = 1, . . . , N. Thus,

    X(r, 0) = Σ_{k=1}^N a_k δ_{r_0^k},        (1.6)

where δ_x represents the point measure concentrated at x. Assume the positions of the point vortices satisfy the following system of differential equations. For i = 1, . . . , N:

    dr^i/dt = Σ_{k=1}^N a_k K_ε(r^i − r^k),    r^i(0) = r_0^i.        (1.7)

It follows that the empirical process associated with (1.7), given by

    X_N(t) := Σ_{k=1}^N a_k δ_{r^k(t)},        (1.8)

is a weak solution^5 of the Euler equation, (1.1), with the smoothed Biot-Savart kernel.^6 This means that if φ is a smooth function from R² to R with compact support and ⟨·, ·⟩ is the duality between generalized functions and smooth functions, then we have the following differential:

    d⟨X_N(t), φ⟩ = ⟨X_N(t), ∇φ · U_{ε,N}⟩ dt,        (1.9)

where

    U_{ε,N}(r, t) := ∫ K_ε(r − q) X_N(dq, t).        (1.10)

^5 In this article, weak solutions shall always be in the sense of partial differential equations (hereafter PDE).
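The system (1.7) can be sketched numerically. The following minimal implementation uses forward Euler time stepping (not Chorin's actual integrator) and takes K_ε(0) = 0, so the self-interaction term drops out of the sum; the kernel K_ε is passed in as a parameter.

```python
import numpy as np

def velocity(positions, intensities, K_eps):
    """Right-hand side of the point-vortex system (1.7):
    dr^i/dt = sum_k a_k K_eps(r^i - r^k), with the self-term k = i
    omitted (equivalently, K_eps(0) is taken to be 0)."""
    N = len(intensities)
    out = np.zeros((N, 2))
    for i in range(N):
        for k in range(N):
            if k != i:
                out[i] += intensities[k] * K_eps(positions[i] - positions[k])
    return out

def simulate(positions, intensities, K_eps, dt, steps):
    """Forward-Euler time stepping of (1.7); an illustrative sketch only."""
    r = np.array(positions, dtype=float)
    for _ in range(steps):
        r = r + dt * velocity(r, intensities, K_eps)
    return r
```

Because K_ε is odd, the weighted center of vorticity Σ_k a_k r^k is conserved by this scheme, which gives a quick sanity check on an implementation.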

Further, Chorin devises and implements a sampling algorithm to approximate arbitary vortex distributions by discrete vortex distributions of the form (1.6). As
remarked in [Cho73], the algorithm is successful as (1.7) is a rectangular quadrature rule to (1.3). Also, the approximation of initial vortex distributions leads to
approximations in the general vortex distibution.
The techniques developed in [Cho73] generate significant progress in the case of
Eulers equation. However, the Navier-Stokes equation still provides difficulties due
to the viscosity term (the term involving in (1.1)). One approach is to examine
the random point vortex method and the Stochastic Navier-Stokes equations. The
random point vortex method modifies (1.7) to be a stochastic ordinary differential
equation (hereafter SODE) rather than a deterministic differential equation. [MP82],
[GHL90], [Lon88], [Kot95], [Ami00] all consider (1.7) with a stochastic driving term
of the following form. For i = 1, . . . , N ,

dr (t) =

N
X

ak K (ri (t) rk (s))dt +

2dmi (rN (t), t)

(1.11)

k=1
6

Since this property is an essential part of the new analysis presented in this work, a proof is
provided in (3.40)-(3.42)

where (1.11) is written in differential form. The m^i(·) are continuous, square-integrable martingales (with respect to an underlying filtration and probability space) that may depend on the positions of the point vortices, r^N(t) = (r^1(t), . . . , r^N(t))ᵀ. The benefit of such an approach is two-fold.

• From a fluid dynamic viewpoint, one can interpret (1.11) as particle motion with random perturbation. As a result, the SODE system represents a model of turbulence in fluid flows.

• As in [Cho73] for the Euler equation, the empirical process associated with (1.11) for a discrete initial vorticity is formally a weak solution of an SPDE. This SPDE represents a random perturbation of the Navier-Stokes equations.^7 Thus, to find solutions for an arbitrary initial vorticity, it suffices to derive a method to approximate initial vorticities by discrete vorticities.


To show the second remark, one applies Itô's formula^8 to the empirical process associated with (1.11), X_N(t). This yields the following for φ smooth and with compact support:

    d⟨X_N(t), φ⟩ = ⟨X_N(t), ∇φ · U_{ε,N}⟩ dt
        + ν Σ_{i=1}^N a_i Σ_{k,l=1}^2 (∂²_{k,l} φ)(r^i(t)) d⟨⟨m^i_k(r^N, t), m^i_l(r^N, t)⟩⟩        (1.12)
        + √(2ν) Σ_{i=1}^N a_i ∇φ(r^i(t)) · dm^i(r^N, t),

where ∂²_{k,l} denotes the second derivative with respect to the spatial coordinates r_k, r_l and ⟨⟨m^i_k(r^N, t), m^i_l(r^N, t)⟩⟩ denotes the mutual quadratic variation process of the one-dimensional components of m^i(·).^9

^7 We call such an SPDE a Stochastic Navier-Stokes Equation (hereafter SNSE).
^8 For Itô's formula, see Theorem 6.16 in the appendix.
In [MP82], [Lon88], and [GHL90] (as well as numerous other approaches), the authors choose m^i(r^N(t), t) as independent 2-dimensional Brownian motions, β^i(t), for i = 1, . . . , N. This reduces (1.12) to the following:

    d⟨X_N(t), φ⟩ = ⟨X_N(t), ∇φ · U_{ε,N}⟩ dt + ν ⟨X_N(t), Δφ⟩ dt
        + √(2ν) Σ_{i=1}^N a_i ∇φ(r^i(t)) · dβ^i(t).        (1.13)

Note that if the stochastic term did not appear in (1.13), the equation would represent the weak form of the smoothed Navier-Stokes equations. Hence, (1.13) represents an SNSE.
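The Brownian-motion choice above can be sketched as one Euler-Maruyama step of (1.11). The function below is illustrative only: the mollified kernel is a parameter, and the independent β^i increments are drawn as Gaussians with variance dt.

```python
import numpy as np

def random_vortex_step(r, a, K_eps, nu, dt, rng):
    """One Euler-Maruyama step of (1.11) with m^i chosen as independent
    2-d Brownian motions beta^i (the [MP82]/[Lon88] choice):
    dr^i = sum_{k != i} a_k K_eps(r^i - r^k) dt + sqrt(2 nu) d beta^i."""
    N = len(a)
    drift = np.zeros_like(r)
    for i in range(N):
        for k in range(N):
            if k != i:
                drift[i] += a[k] * K_eps(r[i] - r[k])
    # independent Gaussian increments with variance 2*nu*dt per coordinate
    noise = rng.standard_normal(r.shape) * np.sqrt(2.0 * nu * dt)
    return r + dt * drift + noise
```

Setting ν = 0 recovers the deterministic step of (1.7); for a single vortex the drift vanishes and the displacement is pure diffusion with variance 2νdt per coordinate.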
In both [Lon88] and [GHL90], the authors focus on the numerical aspects of the algorithms following from [Cho73]. The authors in [GHL90] show that the point vortex algorithm for the 2D Euler equation is consistent and stable. Further, the authors derive a second order estimate on the error. Long examines the random point vortex method as a numerical algorithm in [Lon88]. Here the author proves a nearly optimal rate of convergence according to the Central Limit Theorem. Yet, in [MP82], the authors establish an interesting advance in the general theory. The authors show the following with some additional hypotheses. As the number of vortices, N, tends to infinity, the empirical process, X_N(t), will approximate weak solutions of (1.1).^10 We call limits where N → ∞ continuum limits, and such limits play a central role in the later analysis.

^9 See (6.14) for the definition.
^10 See Chapter 4 for a more comprehensive review of [MP82].

The results of [MP82], [Lon88], and [GHL90] provide important advances to the theory of SNSE. Yet, the choice of m^i(r^N(t), t) = β^i(t) has several drawbacks. In [Kot95], the author summarizes the limitations of using independent Brownian motions in (1.11) as follows:

• In (1.13), the position of each particle is perturbed by its own fluctuation force, β^i(t). This creates an identification that is preserved in the stochastic term, yet disappears in the deterministic term. Consequently, the identification prevents (1.13) from representing a smoothed signed-measure valued SNSE; (1.13) represents an SNSE, but only in a formal way.

• The choice of m^i(r^N(t), t) = β^i(t) yields fluctuation forces which are state independent (i.e., they do not depend on r^N(t)). From the physical interpretation, it is more desirable that the fluctuation forces be state dependent.

• Choosing the m^i(r^N(t), t) as independent Brownian motions rather than spatially correlated ones increases the singularity of the associated SPDE on signed measures.
In order to address these limitations, [Kot95] introduces correlation functions, Γ_ε(r, q), to describe the fluctuation in the motion of the particles. Here Γ_ε : R² × R² → M_{2×2}, where M_{2×2} is the set of 2 × 2 matrices over R. This leads to the removal of the taggings and to fluctuation forces that are state dependent, spatially correlated, and driven by Brownian sheets. In this method, the following choice for the square-integrable martingales is made:

    m^i(r^N, t) := ∫₀ᵗ ∫ Γ_ε(r^i(s), p) w(dp, ds),        (1.14)

where w(dp, dt) = (w₁(dp, dt), w₂(dp, dt))ᵀ is a standard space-time Gaussian white noise differential.^11 With this choice of m^i(r^N, t), (1.12) becomes the following:

    d⟨X_N(t), φ⟩ = ⟨X_N(t), ∇φ · U_ε⟩ dt + ν ⟨X_N(t), Δφ⟩ dt
        + √(2ν) ⟨X_N(t), ∇φ · ∫ Γ_ε(·, p) w(dp, dt)⟩.        (1.15)

Note that integrating (1.15) by parts in the sense of generalized functions shows that X_N(t) is a weak solution of the following smoothed, signed-measure valued Stochastic Navier-Stokes Equation:

    dX_ε(t) = [ν ΔX_ε − ∇ · (U_ε X_ε)] dt − √(2ν) ∇ · ( X_ε ∫ Γ_ε(·, p) w(dp, dt) ).        (1.16)

As in [Cho73] and [MP82], the author in [Kot95] derives continuum-limit results to generalize the solution of (1.16) from discrete initial vorticities to arbitrary initial vorticities. Recall from (1.2) that the conservation of the vorticity is a natural consequence. Consequently, the author requires the conservation of vorticity in solutions of (1.16) as an additional condition for solving any SNSE. It follows from the construction in [Kot95] that for discrete initial vorticities, the vorticity is conserved. When passing to the continuum limit, [Kot95] employs a product Wasserstein metric. However, restricting the analysis to this metric, the author could not show the preservation of the Hahn-Jordan decomposition in the continuum limit.
In [Ami00], the author extends the results of [Kot95] by analyzing a SODE system similar to (1.11). The author includes a term driven by a Poisson random measure. Such a term is useful as a correction factor to accommodate unusual physical phenomena involving the kinematic viscosity. In [Ami07], the author revisits the stochastic vortex theory. Here, Amirdjanova remarks on the difficulties that incompleteness imposes on the analysis. However, in [Ami07], the author can only show the existence and uniqueness of the SODE for positive measures. As a result of the work in [Kot95] and [Ami07], several questions naturally arise.

^11 See Definition 6.15 for the formulation.

• One of the standard choices for a metric in SPDE theory is not complete on the space of signed measures. Can one alter the definition of the metric to yield completeness on this space, but without destroying the usefulness of this metric in the analysis?

• The continuum limit in [Kot95] does not show conservation of vorticity. With an alternative choice of metric, can one show conservation of vorticity in the continuum limit?

Seeking answers to these questions led to the original results derived in this work. The analysis created new results in several areas, and we divide this work into chapters based on these areas.

Chapter 2: Completeness on the Space of Signed Measures


For a complete, separable metric space, we show that the space of finite, Borel
signed measures is not complete under the Kantorovich-Rubinstein metric. Further, the natural extension to a product metric is shown to be incomplete on
the signed measures. Instead, a new metric is introduced using a quotient-space
inspired approach. Further, a useful partial completeness property is derived
for the metric.

Chapter 3: Hahn-Jordan Decomposition for Signed-Measure Valued SPDEs


We consider a general class of signed-measure valued SPDE and the HahnJordan decomposition of a solution. Similar to the argument in (1.12), the
9

empirical process for an associated SODE-system will satisfy the SPDE. Work in
[Kot10] shows that the Hahn-Jordan decomposition for an initial signed-measure
is preserved provided that either the initial measure is discrete or that the
coefficients of the SODE are sufficiently smooth. Using the partially-complete
metric from Chapter 2, we generalize the Hahn-Jordan result to the case where
the coefficients of the SODE are only assumed to be Lipschitz.

12

Chapter 4: Smooth SNSE


The results of the general signed-measure valued SPDEs from Chapter 3 apply
to the smoothed version of the SNSE, (1.16). From this, we establish that the
vorticity is conserved in the SNSE.

12
13

13

This result is from the work in [KS12b].


This result is from the work in [KS12a].

10

Completeness and Signed Measures

In this chapter, the standard metrics on the signed measures are introduced, most notably the Kantorovich-Rubinstein metric. It is shown that this metric fails completeness on the signed measures despite being complete on the positive measures. Identifying a relationship between signed measures and diagonal sets inspires a new metric similar to the quotient-space metric. Measured against the goal of completeness, this new metric still fails to provide convergence for arbitrary Cauchy sequences of signed measures. However, the metric satisfies a more useful partial-completeness result: a Cauchy sequence of signed measures converges if and only if the Hahn-Jordan decompositions of a subsequence converge to a limit which can be identified as a signed measure. Such a result is very desirable in the subsequent analysis, as one can conclude that a pair of measures represents the Hahn-Jordan decomposition of a signed measure from the convergence properties of the metric.

2.1 Definitions of Metrics

Let (S, ρ̄) be a complete, separable metric space with countable, dense set T. One typically takes S as Rᵈ for d ∈ ℕ and ρ̄ as the Euclidean metric; however, the following results hold in more generality. Define ρ(r, q) := ρ̄(r, q) ∧ 1, where r, q ∈ S and ∧ denotes minimum. By [Mun00], ρ induces the same topology as ρ̄. Consequently, it follows that (S, ρ) is also a complete, separable metric space. Denote by M_f(S) the space of finite, Borel measures on (S, ρ). Many of the later applications require a metric on M_f(S) to derive certain a priori estimates. To introduce such metrics, the Wasserstein distance must be defined. For μ, ν ∈ M_f(S), we define C(μ, ν) as the set of joint representations of μ and ν. This means that Q ∈ C(μ, ν) is a measure on the product space, S × S, that satisfies Q(A × S) = μ(A)ν(S) and Q(S × B) = μ(S)ν(B) for arbitrary Borel sets A, B of S. Define the Wasserstein distance for μ, ν ∈ M_f(S) as

    W₁(μ, ν) := inf_{Q ∈ C(μ,ν)} ∫ ρ(r, q) Q(dr, dq).        (2.1)

By [Dud02], the Wasserstein distance defines a metric on the probability measures in M_f(S), which is denoted by P₁(S).
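For discrete measures on the line, the infimum in (2.1) is a finite linear program over the couplings Q. The following sketch (assuming SciPy's `linprog`) computes W₁ for two discrete probability measures under the truncated metric ρ(r, q) = |r − q| ∧ 1; the function name and encoding are illustrative, not notation from the text.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_primal(x, mu, y, nu):
    """W_1 between discrete probability measures sum mu[i] delta_{x[i]} and
    sum nu[j] delta_{y[j]} on the line, via the coupling formulation (2.1)
    with rho(r, q) = min(|r - q|, 1)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    m, n = len(x), len(y)
    cost = np.minimum(np.abs(x[:, None] - y[None, :]), 1.0).ravel()
    A_eq = []
    # row marginals of the coupling Q must equal mu
    for i in range(m):
        row = np.zeros((m, n)); row[i, :] = 1.0
        A_eq.append(row.ravel())
    # column marginals of Q must equal nu
    for j in range(n):
        col = np.zeros((m, n)); col[:, j] = 1.0
        A_eq.append(col.ravel())
    b_eq = np.concatenate([np.asarray(mu, float), np.asarray(nu, float)])
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun
```

Because ρ is truncated at 1, the distance between far-apart point masses saturates at 1.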
Although widely used, the above definition of the Wasserstein distance is not advantageous in general. In particular, the Wasserstein distance does not define a metric on M_f(S) unless one only considers measures of equal mass. The famous Kantorovich-Rubinstein Theorem provides an alternative representation of the Wasserstein metric in terms of a dual norm. This dual norm is used to define a metric on the set of finite, Borel, signed measures. One needs several definitions prior to stating the theorem. Denote the space of all Lipschitz continuous functions from S into R by C_L(S, R), and denote the space of all uniformly bounded, Lipschitz functions from S into R by C_{L,∞}(S, R). One can endow C_{L,∞}(S, R) with the norm ‖·‖_{L,∞}, where ‖f‖_{L,∞} = ‖f‖_∞ ∨ ‖f‖_L, ∨ denotes maximum, and

    ‖f‖_∞ := sup_{s ∈ S} |f(s)|,    ‖f‖_L := sup_{r,q ∈ S, r ≠ q} |f(r) − f(q)| / ρ(r, q).        (2.2)

Theorem 2.1 (Kantorovich-Rubinstein Theorem) For μ, ν ∈ P₁(S), define

    γ_f(μ, ν) := sup_{‖f‖_{L,∞} ≤ 1} | ∫ f(r) (μ − ν)(dr) |;        (2.3)

then γ_f(μ, ν) = W₁(μ, ν).

γ_f(μ, ν) is often written as γ_f(μ − ν) due to the linearity of the definition. The original statement of the Kantorovich-Rubinstein Theorem has the supremum taken over ‖f‖_L ≤ 1. This case is shown as a step in the proof.

Proof: See Theorem 6.3 for the complete proof.
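For discrete signed measures, the supremum in (2.3) is likewise a finite linear program over the values of f at the support points: any feasible assignment extends to a function on all of S with ‖f‖_{L,∞} ≤ 1 (McShane extension followed by truncation to [−1, 1]). A sketch assuming SciPy; the function name is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def kr_distance(z, weights):
    """Kantorovich-Rubinstein norm gamma_f of the discrete signed measure
    sum_i weights[i] * delta_{z[i]} on the line, via the dual (2.3) with
    rho(r, q) = min(|r - q|, 1): maximize sum_i weights[i] * f_i subject
    to |f_i - f_j| <= rho(z_i, z_j) and |f_i| <= 1."""
    z = np.asarray(z, float)
    n = len(z)
    rho = np.minimum(np.abs(z[:, None] - z[None, :]), 1.0)
    A_ub, b_ub = [], []
    for i in range(n):
        for j in range(n):
            if i != j:
                row = np.zeros(n); row[i], row[j] = 1.0, -1.0
                A_ub.append(row); b_ub.append(rho[i, j])  # f_i - f_j <= rho_ij
    res = linprog(-np.asarray(weights, float),
                  A_ub=np.array(A_ub) if A_ub else None,
                  b_ub=np.array(b_ub) if b_ub else None,
                  bounds=(-1.0, 1.0))
    return -res.fun
```

For unit point masses this reproduces γ_f(δ_a − δ_b) = ρ(a, b), and the sup-norm bound keeps the value finite even when the total masses differ.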
Clearly, one can extend the definition of γ_f to M_f(S) by (2.3). The spaces (P₁(S), W₁) and (M_f(S), γ_f) inherit properties from the underlying metric space, (S, ρ).

Theorem 2.2 For a complete, separable metric space (S, ρ), the spaces (P₁(S), W₁) and (M_f(S), γ_f) are also complete and separable.

Proof: Define

    M_{f,d} := { μ := Σ_{i=1}^N a_i δ_{t_i} : N ∈ ℕ, t_i ∈ T, and a_i ∈ [0, ∞) ∩ ℚ, i = 1, . . . , N }.        (2.4)

By Theorem 6.7, M_{f,d} forms a countable, dense set in M_f(S) for γ_f. Similarly, restricting M_{f,d} to elements with Σ_{i=1}^N a_i = 1 yields a dense set for (P₁(S), W₁).

To show completeness, we follow the argument in [Kot08]. From the definition of γ_f, it is clear that γ_f represents a functional norm on the dual of C_{L,∞}(S, R). Denote this dual by C*_{L,∞}(S, R), and note that M_f(S) is a cone in C*_{L,∞}(S, R). To show that (M_f(S), γ_f) is complete, it suffices to show that it is closed in the norm topology. Suppose μ_n ∈ M_f(S) converges in the norm topology to a limit μ. As C*_{L,∞}(S, R) is a Banach space, convergence in the norm topology implies convergence in the weak-* topology. By the Riesz Representation Theorem, it follows that μ must be a finite, Borel measure, and thus μ ∈ M_f(S). The other case follows by the Kantorovich-Rubinstein Theorem.

2.2 Incompleteness Issues

Denote the space of finite, Borel signed measures on (S, ρ) by M_{f,s}(S). From (2.3), the definition of γ_f(μ, ν) easily extends from M_f(S) to M_{f,s}(S). The extension, (M_{f,s}(S), γ_f), is a normed vector space, and γ_f is called the Kantorovich-Rubinstein distance. Due to its convenient dual form, the Kantorovich-Rubinstein distance is used throughout the analysis. One might expect that a theorem similar to Theorem 2.2 holds for (M_{f,s}(S), γ_f). However, the following result shows that the signed measures under γ_f do not have the same properties.

Theorem 2.3 M_{f,s}(S) is not complete with respect to γ_f, but (M_{f,s}(S), γ_f) is separable.

Proof: Fix s ∈ S, and for each k ∈ ℕ choose t_k ∈ T such that ρ(t_k, s) < 1/k². For all n ∈ ℕ, define μ_n ∈ M_{f,s}(S) by:

    μ_n = Σ_{k=1}^n (−1)^k δ_{t_k}.

We verify that {μ_n}_{n=1}^∞ is Cauchy with respect to γ_f. For m, n ∈ ℕ with m ≤ n and f ∈ C_{L,∞}(S, R) with ‖f‖_{L,∞} ≤ 1:

    | ∫ f(r)(μ_n(dr) − μ_m(dr)) | = | Σ_{k=1}^n (−1)^k f(t_k) − Σ_{k=1}^m (−1)^k f(t_k) |
        = | Σ_{k=m+1}^n (−1)^k f(t_k) |
        ≤ Σ_{k=m+1}^{n−1} |f(t_k) − f(t_{k+1})|
        ≤ Σ_{k=m+1}^{n−1} ρ(t_k, t_{k+1})
        ≤ Σ_{k=m+1}^{n−1} ( ρ(t_k, s) + ρ(s, t_{k+1}) )
        ≤ Σ_{k=m+1}^{n−1} ( 1/k² + 1/(k+1)² )
        ≤ Σ_{k=m+1}^{n−1} 2/k² → 0 as n, m → ∞.

Thus,

    γ_f(μ_n − μ_m) := sup_{‖f‖_{L,∞} ≤ 1} | ∫ f(r)(μ_n − μ_m)(dr) | → 0 as n, m → ∞.

Consequently, {μ_n}_{n=1}^∞ is Cauchy with respect to γ_f. Suppose there exists a finite signed measure, μ, such that γ_f(μ_n − μ) → 0 as n → ∞. Then, for all f ∈ C_{L,∞}(S, R) with ‖f‖_{L,∞} ≤ 1, ∫ f(r)(μ_n − μ)(dr) → 0 as n → ∞. Yet, consider f ≡ 1 ∈ C_{L,∞}(S, R). For this choice of f,

    | ∫ f(r)(μ_n − μ)(dr) | = | ∫ 1(r)(μ_n − μ)(dr) | = | Σ_{k=1}^n (−1)^k − μ(S) |
        = |1 + μ(S)| if n = 2ℓ + 1 for some ℓ ∈ ℕ,
        = |μ(S)|     if n = 2ℓ for some ℓ ∈ ℕ.

Consequently, such a μ ∈ M_{f,s}(S) cannot exist. To show that (M_{f,s}(S), γ_f) is separable, consider the following:

    M_{f,s,d} := { μ := Σ_{i=1}^N a_i δ_{t_i} : N ∈ ℕ, t_i ∈ T, and a_i ∈ ℚ, i = 1, . . . , N }.        (2.5)

Theorem 6.7 shows that M_{f,s,d} is dense in M_{f,s}(S) under γ_f. Since M_{f,s,d} is countable, the claim follows.
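The two ingredients of the proof can be checked numerically for this concrete sequence: the tail bound Σ 2/k² vanishes as m, n grow, while the total masses μ_n(S) oscillate between 0 and −1, so integrating f ≡ 1 against μ_n − μ cannot converge for any candidate limit μ. A small sketch:

```python
def total_mass(n):
    """mu_n(S) = sum_{k=1}^n (-1)^k for mu_n = sum_k (-1)^k delta_{t_k}."""
    return sum((-1) ** k for k in range(1, n + 1))

def tail_bound(m, n):
    """The bound sum_{k=m+1}^{n-1} 2/k^2 from the Cauchy estimate above."""
    return sum(2.0 / k**2 for k in range(m + 1, n))
```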
The Kantorovich-Rubinstein distance, γ_f, fails completeness on M_{f,s}(S), but its restriction to M_f(S) is complete. From this remark, one might believe that using the Hahn-Jordan decomposition of a signed measure may yield completeness on M_{f,s}(S). For convenience, the Hahn-Jordan decomposition is stated in the current notation.

Proposition 2.4 (Hahn-Jordan Decomposition) For μ ∈ M_{f,s}(S), there exist μ⁺, μ⁻ ∈ M_f(S) such that μ = μ⁺ − μ⁻ and μ⁺ ⊥ μ⁻. Further, μ⁺, μ⁻ are unique up to μ-null sets.

Proof: Refer to [Fol99] for the proof.

Using the Hahn-Jordan Decomposition and the completeness of (M_f(S), γ_f), the product space of measures may satisfy completeness for M_{f,s}(S). Define the product spaces of measures and signed measures as:

    M̃_f(S) := { μ̃ = (μ₁, μ₂) : μ₁, μ₂ ∈ M_f(S) },        (2.6)
    M̃_{f,s}(S) := { μ̃ = (μ₁, μ₂) : μ₁, μ₂ ∈ M_{f,s}(S) }.

Call M̃_f(S) (respectively, M̃_{f,s}(S)) the space of measure pairs (respectively, signed measure pairs). Note that M̃_{f,s}(S) is a real vector space with componentwise addition and scalar multiplication. Hence, M̃_f(S) is a cone in M̃_{f,s}(S), as M̃_f(S) is closed under componentwise addition, but only scalar multiplication by positive scalars is well-defined. Define the product metric for μ̃ = (μ₁, μ₂), ν̃ = (ν₁, ν₂) ∈ M̃_{f,s}(S):

    γ̃_f(μ̃, ν̃) := γ_f(μ₁, ν₁) + γ_f(μ₂, ν₂).        (2.7)

From Theorem 2.2 and Theorem 2.3, it follows that (M̃_{f,s}(S), γ̃_f) is a separable metric space, and restricted to M̃_f(S), γ̃_f is complete.

2.3 New Results

Identify a signed measure, μ ∈ M_{f,s}(S), with the measure pair μ̃ := (μ⁺, μ⁻) ∈ M̃_{f,s}(S) by the representation μ = μ⁺ − μ⁻, where μ⁺ − μ⁻ is the Hahn-Jordan decomposition of μ. The following example shows that γ̃_f does not, in general, preserve the Hahn-Jordan decomposition in limits.
Example 2.5 Suppose x, y ∈ S, and there exist two sequences of elements {v_n}_{n≥1} and {u_n}_{n≥1} in S such that u_n ≠ v_n, u_n ≠ x, and v_n ≠ x for all n ∈ ℕ. Further, assume that v_n → x and u_n → x as n → ∞ under the metric ρ. Now, consider the signed measure μ_n = δ_y + δ_{v_n} − δ_{u_n}. For sufficiently large n, one can identify μ_n with μ̃_n := (δ_y + δ_{v_n}, δ_{u_n}). We claim that μ̃_n converges in γ̃_f to (δ_y + δ_x, δ_x). To establish this, it suffices to show that δ_{u_n} → δ_x under γ_f. The other terms follow analogously.

    γ_f(δ_{u_n}, δ_x) = sup_{‖f‖_{L,∞} ≤ 1} | ∫ f(r)(δ_{u_n} − δ_x)(dr) |
        = sup_{‖f‖_{L,∞} ≤ 1} |f(u_n) − f(x)| ≤ ρ(u_n, x) → 0 as n → ∞.

Consequently, we conclude that μ̃_n → (δ_y + δ_x, δ_x) as n → ∞, whereas (δ_y, 0) is the Hahn-Jordan decomposition of the signed measure represented by (δ_y + δ_x, δ_x), and 0 is the measure which assigns 0 to all Borel sets of S.
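The mechanism of the example is elementary enough to check numerically on S = R: for unit point masses, γ_f(δ_a − δ_b) = ρ(a, b), so each component of the pair converges, yet the limit pair carries more mass than the Hahn-Jordan pair. A sketch, with discrete measures encoded as dicts (an illustrative encoding, not notation from the text):

```python
def gamma_point(a, b):
    """gamma_f(delta_a - delta_b) = min(|a - b|, 1) on the line: the
    supremum in (2.3) is attained by f(r) = rho(r, b) for unit masses."""
    return min(abs(a - b), 1.0)

def hahn_jordan(nu):
    """Positive and negative parts of a discrete signed measure
    {point: weight}: split the weights by sign."""
    pos = {p: w for p, w in nu.items() if w > 0}
    neg = {p: -w for p, w in nu.items() if w < 0}
    return pos, neg
```

The limit pair (δ_y + δ_x, δ_x) has component masses (2, 1), while the Hahn-Jordan pair (δ_y, 0) of the same signed measure has masses (1, 0): the common mass at x survives in the product-metric limit but cancels in the decomposition.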
As a result, (M_{f,s}(S), γ̃_f) is not a complete space: one cannot, in general, identify limits under γ̃_f as signed measures through their Hahn-Jordan representations. However, the set of signed measures plays a special role in the product space. One can identify the signed measures with a quotient space under M̃_f(S). Let

    D̃_s := { (μ, ν) ∈ M̃_{f,s}(S) : μ = ν in M_{f,s}(S) }.

D̃_s is a closed subspace of the vector space M̃_{f,s}(S), and one can consider the quotient space M̃_{f,s}(S)/D̃_s. Denote the quotient map that sends a signed measure pair, μ̃, to its equivalence class, π(μ̃), by π : M̃_{f,s}(S) → M̃_{f,s}(S)/D̃_s. With this quotient space, note that if μ̃ = (μ₁, μ₂), ν̃ = (ν₁, ν₂) ∈ M̃_{f,s}(S), then

    π((μ₁, μ₂)) = π((ν₁, ν₂)) ⟺ (μ₁, μ₂) − (ν₁, ν₂) ∈ D̃_s
        ⟺ (μ₁ − ν₁, μ₂ − ν₂) = (η, η) for some η ∈ M_{f,s}(S)        (2.8)
        ⟺ μ₁ − ν₁ = μ₂ − ν₂ in M_{f,s}(S)
        ⟺ μ₁ − μ₂ = ν₁ − ν₂ in M_{f,s}(S),

i.e., μ̃ and ν̃ define the same signed measure.

This important relationship between the signed measures and the quotient space is crucial in the following analysis. Identifying a signed measure with its equivalence class avoids the Hahn-Jordan preservation issues apparent in Example 2.5. Furthermore, the relationship provides the correct setting for a well-defined, Kantorovich-Rubinstein-type metric on the space of signed measures. Prior to defining such a metric, we make one more observation regarding the Hahn-Jordan decomposition in the quotient-space setting. Let μ̂ = (μ⁺, μ⁻) be the Hahn-Jordan decomposition of a signed measure μ. Note that if also μ = μ₁ − μ₂, where (μ₁, μ₂) ∈ M̃_f(S), then we have that for some η ∈ M_{f,s}(S):

    μ = μ₁ − μ₂ = μ⁺ − μ⁻  ⟹  η := μ₂ − μ⁻ = μ₁ − μ⁺
        ⟹  μ₂ = μ⁻ + η  and  μ₁ = μ⁺ + η.        (2.9)

The supports of μ⁺ and μ⁻ are disjoint up to a μ-null set. Thus, η ∈ M_f(S), since otherwise μ₁ or μ₂ would be a signed measure. Extend the Hahn-Jordan decomposition to the case μ₁ = μ₂ = η by setting (μ⁺, μ⁻) = (0, 0). This implies that for all μ̃ ∈ M̃_f(S),

    π(μ̃) = μ̂ + D̃⁺,        (2.10)

where D̃⁺ := { (μ, ν) ∈ M̃_f(S) : μ = ν in M_f(S) } and μ̂ is the Hahn-Jordan pair of the signed measure μ₁ − μ₂. Note that since D̃⁺ is closed in M̃_f(S), so is π(μ̃).
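For discrete measures, the decomposition (2.9) is a pointwise computation: η is the common part of μ₁ and μ₂. A sketch with dict-based discrete measures (an illustrative encoding):

```python
def hahn_jordan(nu):
    """Positive and negative parts of a discrete signed measure
    {point: weight}."""
    pos = {p: w for p, w in nu.items() if w > 0}
    neg = {p: -w for p, w in nu.items() if w < 0}
    return pos, neg

def common_part(mu1, mu2):
    """The measure eta of (2.9), with mu1 = mu+ + eta and mu2 = mu- + eta:
    for discrete measures, eta(p) = min(mu1(p), mu2(p)) pointwise."""
    pts = set(mu1) | set(mu2)
    return {p: min(mu1.get(p, 0.0), mu2.get(p, 0.0)) for p in pts
            if min(mu1.get(p, 0.0), mu2.get(p, 0.0)) > 0}
```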
Proposition 2.6 The following defines a metric on the space of signed measures under their identification with the quotient space M̃_{f,s}(S)/D̃_s. For μ̃, ν̃ ∈ M̃_f(S),

    γ(π(μ̃), π(ν̃)) := inf_{η̃ ∈ D̃⁺} [ γ̃_f(μ̃ − ν̃ − η̃) ∨ γ̃_f(ν̃ − μ̃ − η̃) ].        (2.11)

Proof: We obviously have symmetry: γ(π(μ̃), π(ν̃)) = γ(π(ν̃), π(μ̃)), as interchanging μ̃ and ν̃ interchanges the two terms of the maximum. Now, note that if π(μ̃) = π(ν̃), then μ̃ = ν̃, and the infimum on the right side of (2.11) is 0. If γ(π(μ̃), π(ν̃)) = 0, then there exists a sequence {η̃_n} ⊂ D̃⁺ such that η̃_n → (μ̃ − ν̃) and η̃_n → (ν̃ − μ̃) under γ̃_f, from which it follows that μ̃ = ν̃ and π(μ̃) = π(ν̃).

For the triangle inequality, let ε > 0 and μ̃, ν̃, λ̃ ∈ M̃_f(S), and choose d̃₁, d̃₂ ∈ D̃⁺ such that:

    γ(π(μ̃), π(ν̃)) + ε/2 > γ̃_f(μ̃ − ν̃ − d̃₁) ∨ γ̃_f(ν̃ − μ̃ − d̃₁),
    γ(π(ν̃), π(λ̃)) + ε/2 > γ̃_f(ν̃ − λ̃ − d̃₂) ∨ γ̃_f(λ̃ − ν̃ − d̃₂).

Note that d̃₁ + d̃₂ ∈ D̃⁺. Since for nonnegative constants a, b, c, d we have that

    (a + b) ∨ (c + d) ≤ (a ∨ c) + (b ∨ d),

it follows that:

    γ(π(μ̃), π(λ̃)) ≤ γ̃_f(μ̃ − λ̃ − (d̃₁ + d̃₂)) ∨ γ̃_f(λ̃ − μ̃ − (d̃₁ + d̃₂))
        ≤ [ γ̃_f(μ̃ − ν̃ − d̃₁) + γ̃_f(ν̃ − λ̃ − d̃₂) ] ∨ [ γ̃_f(ν̃ − μ̃ − d̃₁) + γ̃_f(λ̃ − ν̃ − d̃₂) ]
        ≤ [ γ̃_f(μ̃ − ν̃ − d̃₁) ∨ γ̃_f(ν̃ − μ̃ − d̃₁) ] + [ γ̃_f(ν̃ − λ̃ − d̃₂) ∨ γ̃_f(λ̃ − ν̃ − d̃₂) ]
        < γ(π(μ̃), π(ν̃)) + γ(π(ν̃), π(λ̃)) + ε.

Letting ε → 0 yields the triangle inequality.

The Kantorovich-Rubinstein distance, $\gamma_f$, and its associated product metric, $\tilde\gamma_f$, are used to define the quotient-type metric, $\tilde\rho$. Consequently, there are several inequalities relating the metrics. These inequalities are used frequently in understanding the role of completeness for $\tilde\rho$. To state the inequalities, define the mapping $\kappa : \widetilde{\mathcal{M}}_f(S) \to \mathcal{M}_{f,s}(S)$ by $\kappa(\tilde\mu) := \mu_1 - \mu_2$, where $\tilde\mu = (\mu_1, \mu_2) \in \widetilde{\mathcal{M}}_f(S)$.
Proposition 2.7 For $\tilde\mu, \tilde\nu \in \widetilde{\mathcal{M}}_f(S)$,

$$\tilde\rho(\pi(\tilde\mu), \pi(\tilde 0)) = \tilde\gamma_f(\tilde\mu^\pm) \tag{2.12}$$

and

$$\gamma_f(\kappa(\tilde\mu) - \kappa(\tilde\nu)) \le \tilde\rho(\pi(\tilde\mu), \pi(\tilde\nu)) \le \tilde\gamma_f(\tilde\mu^\pm, \tilde\nu^\pm), \qquad \tilde\rho(\pi(\tilde\mu), \pi(\tilde\nu)) \le \tilde\gamma_f(\tilde\mu, \tilde\nu). \tag{2.13}$$

Here $\tilde 0 := (0, 0)$, and for a single argument we abbreviate $\tilde\gamma_f(\tilde\mu) := \tilde\gamma_f(\tilde\mu, \tilde 0)$ and $\gamma_f(\mu) := \gamma_f(\mu, 0)$.

20

+ for and in the definition of yields


D
Proof : Choosing 0
f (
) f (
)] f (
).
((
), (0)) := inf [
+
D

Yet, considering the second term,


f (
) = f (
+ ) = f (+ + ) + f ( + )
Z
=

sup

f (r)( + )(dr) +

kf kL, 1

sup

f (r)( + )(dr)

kf kL, 1

= + (S) + (S) + (S) + (S) = f (


) + 2(S).
The last equalities follow as the supremum is attained if f 1. Finally, we have that
((
), (0)) inf f (
) = inf
+
D

+
D


f (
) + 2(S) = f (
)

which establishes (2.12).


Take an arbitrary f CL, (S, R) with kf kL, 1, and let Mf (Rd ) be
arbitrary.
Z



f (x)((

))(dx)


Z




+


= f (x) ( ) (dx)
Z




+
+


= f (x) ( ) (dx)

21

Z
Z




+
+




f (x)( )(dx) + f (x)( )(dx)

sup
kf kL, 1


 Z


f (x)(+ + )(dx)


Z



+ f (x)( )(dx)

f (
)
where = (, ). Thus, taking the supremum over f CL, (S, R) with kf kL, 1
yields
f ((
) (
)) f (
).
Since Mf (Rd ) was arbitrary, it follows from the preceding inequality that
f ((
) (
)) inf f (
)
+
D

and, consequently, we have that

f ((
) (
)) inf [
f (
) f (
+ )] = ((
), (
)).
+
D

The first inequality in (2.13) now follows as

f ((
) (
)) = f (+ ( + )) f (+ ) + f ( + ) = f (
).

The second inequality follows immediately, and the third follows simply by choosing
= (0, 0) in (2.11).
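For discrete signed measures, the computation behind (2.12) — the Kantorovich-Rubinstein distance of a positive measure to $0$ equals its total mass, because the supremum is attained at $f \equiv 1$, so a pair $\tilde\mu^\pm + (\gamma, \gamma)$ sits $2\gamma(S)$ farther from the zero pair than $\tilde\mu^\pm$ does — can be checked directly. A minimal sketch, assuming discrete measures represented as dicts of atom weights (the helper names are illustrative):

```python
def mass(measure):
    """gamma_f distance of a positive discrete measure to 0: the supremum
    over ||f||_{L,infty} <= 1 is attained at f == 1, so the distance is
    just the total mass measure(S)."""
    return sum(measure.values())

def pair_distance_to_zero(mu1, mu2):
    """Product metric to the zero pair: gamma_f(mu1, 0) + gamma_f(mu2, 0)."""
    return mass(mu1) + mass(mu2)

# Hahn-Jordan pair of mu = 2*delta_0 - delta_1:
mu_plus, mu_minus = {0.0: 2.0}, {1.0: 1.0}
# Another representative of the same class: add gamma = 5*delta_2 to both.
gamma_mass = 5.0
mu1 = {0.0: 2.0, 2.0: gamma_mass}
mu2 = {1.0: 1.0, 2.0: gamma_mass}

hj = pair_distance_to_zero(mu_plus, mu_minus)   # = |mu|(S)
other = pair_distance_to_zero(mu1, mu2)         # = |mu|(S) + 2*gamma(S)
```

Among all representatives of the class, the Hahn-Jordan pair minimizes the distance to the zero pair, which is why the infimum in (2.11) selects it.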

The motivation for this work was to construct a metric that is complete on $\mathcal{M}_{f,s}(S)$. We next examine whether $\tilde\rho$ is a complete metric on the space of signed measures. Suppose $\{\pi(\tilde\mu_n)\}_{n \in \mathbb{N}}$ is a Cauchy sequence for $\tilde\rho$ with a subsequence $\{\pi(\tilde\mu_{n_k})\}$. If $\{\pi(\tilde\mu_{n_k})\}$ converges, then it would follow that $(\mathcal{M}_{f,s}(S), \tilde\rho)$ is complete. However, the following analysis shows that this subsequence does not converge in general. Rather, the following derivation yields something that is, in a way, more powerful than completeness itself.
Theorem 2.8 A Cauchy sequence $\{\pi(\tilde\mu_n)\}_{n \in \mathbb{N}}$ converges in $\tilde\rho$ if and only if a subsequence of measure pairs $\{\tilde\mu_{n_k}\}$ is a Cauchy sequence in $(\widetilde{\mathcal{M}}_f(S), \tilde\gamma_f)$ whose limit, $\tilde\mu$, satisfies $\tilde\mu = \tilde\mu^\pm$. In particular, the limit $\tilde\mu$ satisfies the identification of the Hahn-Jordan decomposition of a signed measure.
Proof: For the reverse implication, note that by Proposition 2.7,

$$\tilde\rho(\pi(\tilde\mu_{n_k}), \pi(\tilde\mu)) \le \tilde\gamma_f(\tilde\mu_{n_k}, \tilde\mu) \to 0 \text{ as } k \to \infty.$$

So, $\{\pi(\tilde\mu_n)\}_{n \in \mathbb{N}}$ must converge, as it is Cauchy.

For the forward implication, choose a subsequence $\{\pi(\tilde\mu_{n_k})\}_{k \in \mathbb{N}}$ of $\{\pi(\tilde\mu_n)\}_{n \in \mathbb{N}}$ such that

$$\tilde\rho(\pi(\tilde\mu_{n_k}), \pi(\tilde\mu_{n_{k+1}})) < 2^{-k} \quad \forall k \in \mathbb{N}.$$

Set $\tilde\gamma_1 := \tilde 0$ and, by the definition of $\tilde\rho$, choose $\tilde\gamma_k \in \widetilde{D}^+$ such that

$$\tilde\gamma_f(\tilde\mu_{n_k} + \tilde\gamma_k, \tilde\mu_{n_{k+1}}) \wedge \tilde\gamma_f(\tilde\mu_{n_k}, \tilde\mu_{n_{k+1}} + \tilde\gamma_k) < 2^{-k} \quad \forall k \in \mathbb{N}.$$

Further, define another sequence in $\widetilde{D}^+$ by $\tilde\sigma_k := \sum_{i=0}^{k-1} \tilde\gamma_i$, $k \in \mathbb{N}$. The first term in the definition of $\tilde\rho$, (2.11), then satisfies

$$\tilde\gamma_f(\tilde\mu_{n_k} + \tilde\sigma_k, \tilde\mu_{n_{k+1}} + \tilde\sigma_{k+1}) < 2^{-k} \quad \forall k \in \mathbb{N}.$$

Setting $\tilde\lambda_k := \tilde\mu_{n_k} + \tilde\sigma_k \in \widetilde{\mathcal{M}}_f(S)$, we obtain

$$\tilde\gamma_f(\tilde\lambda_k, \tilde\lambda_{k+1}) < 2^{-k}.$$

Since $(\widetilde{\mathcal{M}}_f(S), \tilde\gamma_f)$ is complete, and by (2.10), there is a unique limit $\tilde\lambda = \tilde\lambda^\pm + \tilde\gamma$ with $\tilde\gamma \in \widetilde{D}^+$ such that

$$\tilde\lambda_k = \tilde\mu_{n_k} + \tilde\sigma_k \xrightarrow{\;\tilde\gamma_f\;} \tilde\lambda^\pm + \tilde\gamma.$$

Due to this convergence and the argument in Proposition 2.7,

$$\sup_{k \in \mathbb{N}} \tilde\gamma_f(\tilde\mu_{n_k} + \tilde\sigma_k) = \sup_{k \in \mathbb{N}} \big(\mu^1_{n_k}(S) + \mu^2_{n_k}(S) + 2\sigma_k(S)\big) < \infty.$$

Consequently, the monotone limit $\tilde\sigma := \lim_k \tilde\sigma_k = \sum_{i=1}^\infty \tilde\gamma_i$ exists and $\tilde\sigma \in \widetilde{D}^+$. Further, for $m \in \mathbb{N}$,

$$\tilde\gamma_f\Big(\sum_{i > m} \tilde\gamma_i\Big) = 2\,\gamma_f\Big(\sum_{i > m} \gamma_i\Big) \le \sum_{i > m} 2\gamma_i(S) \to 0 \text{ as } m \to \infty.$$

Thus,

$$\tilde\gamma_f\Big(\tilde\sigma - \sum_{i \le m} \tilde\gamma_i\Big) \to 0 \text{ as } m \to \infty.$$

Therefore, we may write $\tilde\mu_{n_k} = \tilde\lambda_k - \tilde\sigma_k \xrightarrow{\tilde\gamma_f} \tilde\lambda^\pm + \tilde\gamma - \tilde\sigma$ and conclude that $\{\tilde\mu_{n_k}\}$ has a $\tilde\gamma_f$-limit in $\widetilde{\mathcal{M}}_f(S)$. By considering the equivalence class of this limit and using (2.10), there are $\tilde\alpha^\pm \in \widetilde{\mathcal{M}}_f(S)$ and $\tilde\delta \in \widetilde{D}^+$ such that the limit equals $\tilde\alpha^\pm + \tilde\delta$. Thus, we obtain

$$\tilde\mu_{n_k} \xrightarrow{\;\tilde\gamma_f\;} \tilde\alpha^\pm + \tilde\delta. \tag{2.14}$$

To complete the proof of Theorem 2.8, it suffices to show the following lemma:

Lemma 2.9 $\tilde\rho(\pi(\tilde\mu_{n_k}), \pi(\tilde\alpha^\pm + \tilde\delta)) \to 0$ if and only if $\tilde\delta = \tilde 0$, where $\tilde 0 = (0, 0)$.
Proof: The reverse implication follows from Proposition 2.7, as

$$\tilde\rho(\pi(\tilde\mu_{n_k}), \pi(\tilde\alpha^\pm)) \le \tilde\gamma_f(\tilde\mu_{n_k}, \tilde\alpha^\pm)$$

and $\tilde\mu_{n_k} \to \tilde\alpha^\pm$ in $\tilde\gamma_f$ as $k \to \infty$ when $\tilde\delta = \tilde 0$.

For the forward implication, it follows that if $\tilde\delta \ne \tilde 0$, then $\tilde\gamma_f(\tilde\delta) > 0$. Thus,

$$\tilde\rho(\pi(\tilde\mu_{n_k}), \pi(\tilde\alpha^\pm)) \ge \inf_{\tilde\gamma \in \widetilde{D}^+} \tilde\gamma_f(\tilde\mu_{n_k} + \tilde\gamma, \tilde\alpha^\pm).$$

Let $\varepsilon > 0$, and estimate this term. For any $\tilde\gamma \in \widetilde{D}^+$, by the triangle inequality and the translation invariance of $\tilde\gamma_f$,

$$\tilde\gamma_f(\tilde\mu_{n_k} + \tilde\gamma, \tilde\alpha^\pm) \ge \tilde\gamma_f(\tilde\alpha^\pm + \tilde\delta + \tilde\gamma, \tilde\alpha^\pm) - \tilde\gamma_f(\tilde\mu_{n_k} + \tilde\gamma, \tilde\alpha^\pm + \tilde\delta + \tilde\gamma)$$

$$= \tilde\gamma_f(\tilde\gamma + \tilde\delta) - \tilde\gamma_f(\tilde\mu_{n_k}, \tilde\alpha^\pm + \tilde\delta) \ge \tilde\gamma_f(\tilde\gamma + \tilde\delta) - \varepsilon \tag{2.15}$$

for sufficiently large $k$, since by (2.14), $\tilde\gamma_f(\tilde\mu_{n_k}, \tilde\alpha^\pm + \tilde\delta) \to 0$. Hence

$$\liminf_{k \to \infty}\, \inf_{\tilde\gamma \in \widetilde{D}^+} \tilde\gamma_f(\tilde\mu_{n_k} + \tilde\gamma, \tilde\alpha^\pm) \ge \inf_{\tilde\gamma \in \widetilde{D}^+} \tilde\gamma_f(\tilde\gamma + \tilde\delta) - \varepsilon = \tilde\gamma_f(\tilde\delta) - \varepsilon.$$

Therefore, letting $\varepsilon \downarrow 0$, we also have

$$\liminf_{k \to \infty}\, \tilde\rho(\pi(\tilde\mu_{n_k}), \pi(\tilde\alpha^\pm)) \ge \tilde\gamma_f(\tilde\delta) > 0. \qquad \square$$

With Theorem 2.8 established, $\tilde\rho$ is a Kantorovich-Rubinstein-type metric that has a useful form of partial completeness. One can use the partial-completeness property to conclude that a limit of measure pairs in $(\widetilde{\mathcal{M}}_f(S), \tilde\gamma_f)$ is the Hahn-Jordan decomposition of a signed measure. This property allows significant results in the field of signed measure valued SPDE: one can infer the Hahn-Jordan decomposition of a solution from properties of $\tilde\rho$. Note that none of the Wasserstein distance, the Kantorovich-Rubinstein distance, or their product metrics can yield such a result directly. Consequently, the new results presented here constitute a significant advancement in the theory of metrics on signed measures.

3 Signed Measure Valued SPDE

In the following, we examine a general class of signed-measure valued SPDE and an associated system of SODE. The approach is to pass a solution of the SODE system to a solution of the SPDE by Itô's formula. For a discrete initial signed measure, it follows that the Hahn-Jordan decomposition of the solution of the SPDE is preserved. However, the extension to arbitrary signed measures requires justification due to incompleteness issues. In [Kot10], the extension holds provided one assumes smoothness of the coefficients of the associated SODE. Using Theorem 2.8, we derive the extension provided the coefficients of the SODE only satisfy Lipschitz conditions. From a stochastic analysis viewpoint, this result is ideal, as Lipschitz conditions form the basis for existence and uniqueness of solutions.

3.1 Definitions

Prior to stating the SODE system or SPDE, we state some of the standard assumptions and notation for the stochastic analysis. Consider a probability space $(\Omega, \mathcal{F}, P)$ and a filtration $\{\mathcal{F}_t\}_{t \ge 0}$ contained in $\mathcal{F}$. Denote by $E(\cdot) = \int (\cdot)\,dP$ the mathematical expectation with respect to the probability measure. To avoid certain pathological issues that may arise, we make the following standard assumptions on the filtration:

- $\mathcal{F}_0$ contains all the $P$-null sets (and hence so does $\mathcal{F}_t$ for all $t \ge 0$).
- The filtration is right continuous (i.e. $\mathcal{F}_t = \mathcal{F}_{t+} := \cap_{s > t} \mathcal{F}_s$).

One reason for making such assumptions on the filtration is convenience when working with stopping times. A stopping time is a mapping $\tau : \Omega \to [s, \infty]$ such that the event $\{\tau \le t\} \in \mathcal{F}_t$ for all $t \ge 0$. Assuming that the filtration is right continuous implies that the events $\{\tau < t\} \in \mathcal{F}_t$ for a stopping time $\tau$.

For notation, we need to define the standard $L^p$ spaces for $p \ge 1$. Let $K$ be a metric space with metric $\varrho$. $L_{0,\mathcal{F}_s}(K)$ is the space of $K$-valued $\mathcal{F}_s$-measurable random variables $\xi$. $L_{p,\mathcal{F}_s}(K) \subset L_{0,\mathcal{F}_s}(K)$ is the subset such that for $\xi \in L_{p,\mathcal{F}_s}(K)$, $E\,\varrho^p(\xi, \chi) < \infty$, where $\chi \in K$ is an arbitrary fixed element.

Denote by $C([s, T]; K)$ the set of continuous functions from $[s, T]$ to $(K, \varrho)$, where $s, T \in [0, \infty)$. $L_{0,\mathcal{F}}(C([s, T]; K))$ is the space of random variables with values in $C([s, T]; K)$ which are adapted to the filtration $\mathcal{F}_t$. $L_{p,\mathcal{F}}(C([s, T]; K)) \subset L_{0,\mathcal{F}}(C([s, T]; K))$ is the space of $p$-th integrable random variables with values in $C([s, T]; K)$. Finally, $L_{\mathrm{loc},p,\mathcal{F}}(C([s, T]; K))$ is the space of processes $\xi(\cdot)$ such that there is a sequence of stopping times $\{\tau_n\}_{n \in \mathbb{N}}$ with $\xi(\cdot \wedge \tau_n) \in L_{p,\mathcal{F}}(C([s, T]; K))$ and $\tau_n \uparrow \infty$. Such a sequence of stopping times is said to be localizing.
To aid in the notation, let $\tilde\gamma_{f,s}(\cdot, \cdot)$ denote the partially-complete metric from Chapter 2.¹⁴ Let $w_l(dp, dt)$ denote i.i.d. standard Gaussian space-time white noise differentials,¹⁵ where $l = 1, \dots, d$, and define $w(dp, dt) := (w_1(dp, dt), \dots, w_d(dp, dt))^T$. Now, let $F : \mathbb{R}^d \times \mathcal{M}_{f,s}(\mathbb{R}^d) \times [0, \infty) \to \mathbb{R}^d$ and $J_\varepsilon : \mathbb{R}^d \times \mathbb{R}^d \times \mathcal{M}_{f,s}(\mathbb{R}^d) \times [0, \infty) \to M_{d \times d}$, where $\varepsilon$ is the correlation length¹⁶ and $M_{d \times d}$ denotes the set of $d \times d$ matrices over $\mathbb{R}$. We can now state the SODE system in differential form.¹⁷ For $i = 1, \dots, N \in \mathbb{N}$, and $s \le t \le T$ with $s, T \in [0, \infty)$,

$$dr^i = F(r^i(t), \mathcal{Y}(t), t)\,dt + \int J_\varepsilon(r^i(t), p, \mathcal{Y}(t), t)\,w(dp, dt) \tag{3.1}$$

with initial conditions

$$r^i(s) = q^i \in \mathbb{R}^d, \tag{3.2}$$

where $\mathcal{Y}(t)$ is a signed-measure valued, adapted process. One may think of the system as representing $N$ particles moving in a random medium with positions given by $r^i$. The signed measure input process is represented by $\mathcal{Y}(t)$ in $\mathbb{R}^d$, and it provides the mathematical interpretation of the random medium.

¹⁴ $\tilde\gamma_{f,s}$ was denoted by $\tilde\rho$ in Chapter 2.
¹⁵ See Definition 6.15 for the formulation.
¹⁶ $\varepsilon$ represents the distance at which the correlation between particles driven by $w(dp, dt)$ can be observed.
¹⁷ See Definition 6.13 for additional background.
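Although the thesis treats (3.1) analytically, the structure of its noise term can be illustrated numerically. The sketch below is a minimal Euler-Maruyama discretization in $d = 1$ under assumptions that are not taken from the thesis: a Gaussian correlation kernel standing in for $J_\varepsilon$, a drift-free $F \equiv 0$, and a truncated $p$-grid replacing $\mathbb{R}$. The structural point it demonstrates is that all particles share the same white-noise increments, so particles within a correlation length $\varepsilon$ of each other move in a correlated way.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation_kernel(r, p, eps=0.5):
    # Illustrative choice standing in for J_eps (not the thesis' kernel):
    # a Gaussian of correlation length eps, square integrable in p.
    return np.exp(-(r - p) ** 2 / (2 * eps ** 2))

def euler_step(r, t, dt, p_grid, dp, drift):
    """One Euler-Maruyama step of dr^i = F dt + int J(r^i, p) w(dp, dt).
    The white-noise integral is discretized on p_grid: each cell carries an
    independent N(0, dp*dt) increment shared by ALL particles, which is
    what correlates the particle motions."""
    xi = rng.normal(0.0, np.sqrt(dp * dt), size=p_grid.size)
    noise = correlation_kernel(r[:, None], p_grid[None, :]) @ xi
    return r + drift(r, t) * dt + noise

# N = 3 particles, zero drift, horizon T = 1 in 100 steps.
r = np.array([-1.0, 0.0, 1.0])
p_grid = np.linspace(-8.0, 8.0, 321)
dp = p_grid[1] - p_grid[0]
for k in range(100):
    r = euler_step(r, k * 0.01, 0.01, p_grid, dp,
                   drift=lambda r, t: np.zeros_like(r))
```

This is only a sketch of the driving mechanism; the existence theory below does not rely on any discretization.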

3.2 Existence and Uniqueness of SODE

To address questions of existence and uniqueness for (3.1), one must specify more conditions on the coefficients $F$ and $J_\varepsilon$. Assume that $F$ and $J_\varepsilon$ are jointly measurable in all arguments. Also, assume that the one-dimensional components of $J_\varepsilon$, denoted by $J_{\varepsilon,ij}$ for $i, j = 1, \dots, d$, are square integrable with respect to Lebesgue measure, $dq$, on $\mathbb{R}^d$. Further, assume the following Lipschitz and growth conditions on the coefficients in terms of $\tilde\gamma_{f,s}$ and $\varrho(r, q) := |r - q| \wedge 1$, where¹⁸ $|\cdot|$ denotes the Euclidean metric on $\mathbb{R}^d$ and $r, q \in \mathbb{R}^d$.

There exists¹⁹ a $c_{F,J} \in (0, \infty)$ such that for all $r, r_i \in \mathbb{R}^d$, $\tilde\mu, \tilde\mu_i \in \widetilde{\mathcal{M}}_{f,s}(\mathbb{R}^d)$, $t \ge s$ and $i = 1, 2$:

$$|F(r_1, \kappa(\tilde\mu_1), t) - F(r_2, \kappa(\tilde\mu_2), t)| \le c_{F,J}\big\{(\tilde\gamma_f(\tilde\mu_1) \wedge \tilde\gamma_f(\tilde\mu_2))\,\varrho(r_1, r_2) + \tilde\gamma_{f,s}(\tilde\mu_1, \tilde\mu_2)\big\} \tag{3.3}$$

$$\sum_{k,\ell=1}^d \int \big(J_{\varepsilon,k\ell}(r_1, p, \kappa(\tilde\mu_1), t) - J_{\varepsilon,k\ell}(r_2, p, \kappa(\tilde\mu_2), t)\big)^2\,dp \le c^2_{F,J}\big\{(\tilde\gamma^2_f(\tilde\mu_1) \wedge \tilde\gamma^2_f(\tilde\mu_2))\,\varrho^2(r_1, r_2) + \tilde\gamma^2_{f,s}(\tilde\mu_1, \tilde\mu_2)\big\} \tag{3.4}$$

$$|F(r, \kappa(\tilde\mu), t)|^2 + \sum_{k,\ell=1}^d \int J^2_{\varepsilon,k\ell}(r, p, \kappa(\tilde\mu), t)\,dp \le c_{F,J}\,\tilde\gamma^2_f(\tilde\mu). \tag{3.5}$$

The constant $c_{F,J}$ in (3.3)-(3.5) may also depend on the space dimension, $d$. Alternatively, if we relax the boundedness assumption on the coefficients in (3.5), we need to impose a linear growth condition. This growth condition is assumed in addition to the corresponding Lipschitz conditions from (3.3)-(3.4), now stated in terms of the Euclidean metric.

There exists a $c_{F,J} \in (0, \infty)$ such that for all $r, r_i \in \mathbb{R}^d$, $\tilde\mu, \tilde\mu_i \in \widetilde{\mathcal{M}}_{f,s}(\mathbb{R}^d)$, $t \ge s$ and $i = 1, 2$:

$$|F(r_1, \kappa(\tilde\mu_1), t) - F(r_2, \kappa(\tilde\mu_2), t)| \le c_{F,J}\big\{(\tilde\gamma_f(\tilde\mu_1) \wedge \tilde\gamma_f(\tilde\mu_2))\,|r_1 - r_2| + \tilde\gamma_{f,s}(\tilde\mu_1, \tilde\mu_2)\big\} \tag{3.6}$$

$$\sum_{k,\ell=1}^d \int \big(J_{\varepsilon,k\ell}(r_1, p, \kappa(\tilde\mu_1), t) - J_{\varepsilon,k\ell}(r_2, p, \kappa(\tilde\mu_2), t)\big)^2\,dp \le c^2_{F,J}\big\{(\tilde\gamma^2_f(\tilde\mu_1) \wedge \tilde\gamma^2_f(\tilde\mu_2))\,|r_1 - r_2|^2 + \tilde\gamma^2_{f,s}(\tilde\mu_1, \tilde\mu_2)\big\} \tag{3.7}$$

$$|F(r, \kappa(\tilde\mu), t)|^2 + \sum_{k,\ell=1}^d \int J^2_{\varepsilon,k\ell}(r, p, \kappa(\tilde\mu), t)\,dp \le c^2_{F,J}(1 + |r|^2)\,\tilde\gamma^2_f(\tilde\mu). \tag{3.8}$$

With the appropriate Lipschitz assumptions on the coefficients, the following existence and uniqueness result follows the standard techniques in the literature.²⁰ However, due to its importance in the later analysis, the complete proof of the result is given. The case for positive measures is established in [Kot08]; the following argument follows the work in [Kot08] and generalizes it to signed measures. In the following propositions, we simply write the localizing stopping times $\{\tau_n\}_{n \ge 1}$ as $\tau$. Furthermore, let $\bar\varrho \in \{\varrho, |\cdot|\}$.

¹⁸ We write $\varrho(q)$ for $\varrho(q, 0)$ and $\varrho(r - q)$ for $\varrho(r, q)$ without further mention.
¹⁹ Throughout this work, the subscripts on a constant indicate dependence on those parameters.
²⁰ See [Pro04], [MP80], or [IW89].
Proposition 3.1 Suppose either (3.3)-(3.5) or (3.6)-(3.8) holds.

i. Let the initial condition $q_s = (q^1_s, \dots, q^N_s)$ have $q^i_s \in L_{2,\mathcal{F}_s}(\mathbb{R}^d)$ for $i = 1, \dots, N$, and let the signed-measure valued input process satisfy $\tilde{\mathcal{Y}} \in L_{\mathrm{loc},2,\mathcal{F}}(C((s, T]; \mathcal{M}_{f,s}(\mathbb{R}^d)))$. There exists a unique, continuous, adapted solution of (3.1):

$$r(t, \tilde{\mathcal{Y}}, q_s, s) = \big(r^1(t, \tilde{\mathcal{Y}}, q^1_s, s), \dots, r^N(t, \tilde{\mathcal{Y}}, q^N_s, s)\big) \in L_{\mathrm{loc},2,\mathcal{F}}(C([s, T]; \mathbb{R}^{dN})).$$

ii. For initial conditions $q^i_j \in L_{2,\mathcal{F}_s}(\mathbb{R}^d)$, $i = 1, \dots, N$, $j = 1, 2$, signed measure valued processes $\tilde{\mathcal{Y}}_j \in L_{\mathrm{loc},2,\mathcal{F}}(C((s, T]; \mathcal{M}_{f,s}(\mathbb{R}^d)))$, $j = 1, 2$, and a localizing stopping time $\tau$:

$$E \sup_{s \le t \le T \wedge \tau} \bar\varrho^2\big(r^i(t, \tilde{\mathcal{Y}}_1, q^i_1, s), r^i(t, \tilde{\mathcal{Y}}_2, q^i_2, s)\big) \le c_{T,F,J,\varepsilon}\Big\{E\,\bar\varrho^2(q^i_1, q^i_2)\,1_{\{\tau \ge s\}} + E \int_s^{T \wedge \tau} \tilde\gamma^2_{f,s}(\tilde{\mathcal{Y}}_1(u), \tilde{\mathcal{Y}}_2(u))\,1_{\{\tau \ge s\}}\,du\Big\}. \tag{3.9}$$

iii. For any $N \in \mathbb{N}$ there is an $\mathbb{R}^{dN}$-valued map in the variables $(t, \omega, \tilde{\mathcal{Y}}, q, s)$, $0 \le s \le t < \infty$, such that for any fixed $s \ge 0$:

$$r(\cdot, \cdot, \cdot, \cdot, s) : [s, T] \times \Omega \times C([s, T]; \mathcal{M}_{f,s}(\mathbb{R}^d)) \times \mathbb{R}^{dN} \to C([s, T]; \mathbb{R}^{dN}), \tag{3.10}$$

such that:

(a) For any $t \in [s, T]$, $r(t, \cdot, \cdot, \cdot, s)$ is $\mathcal{G}_{s,t} \otimes \mathcal{F}_{\mathcal{M}_{f,s}(\mathbb{R}^d),s,t} \otimes \mathcal{B}^{dN}$-$\mathcal{B}^{dN}$-measurable. Here, $\mathcal{G}_{s,t}$ is the completed $\sigma$-algebra generated by $w(dp, du)$ between $s$ and $t$, $\mathcal{F}_{\mathcal{M}_{f,s}(\mathbb{R}^d),s,t}$ denotes the $\sigma$-algebra of cylinder sets on $\mathcal{M}_{f,s}(\mathbb{R}^d)$, and $\mathcal{B}^{dN}$ denotes the Borel sets on $\mathbb{R}^{dN}$.

(b) The $i$th component of $r = (r^1, \dots, r^N)$ depends only on the $i$th initial condition, $w(dq, dt)$, and $\tilde{\mathcal{Y}} \in L_{2,\mathcal{F}}(C([s, T]; \mathcal{M}_{f,s}(\mathbb{R}^d)))$:

$$\big(r(t, \cdot, \tilde{\mathcal{Y}}, q^i_s, s)\big)^i = r^i(t, \tilde{\mathcal{Y}}, q^i_s, s).$$

Proof: (i) Without loss of generality, we prove the result on $[0, T]$. For notational convenience, we suppress the dependence on $\varepsilon$ and the particle index in the argument. Assume (3.3)-(3.5), as the other case follows similarly. Since the stopping times are localizing, for every $\delta > 0$ there are a $b > 0$, a stopping time $\tau$, and an $\mathcal{F}_0$-measurable set $\Omega_b$ such that:

$$P\Big(\sup_{0 \le t \le T \wedge \tau} \big(\tilde\gamma_{f,s}(\tilde{\mathcal{Y}}_1(t, \cdot)) \vee \tilde\gamma_{f,s}(\tilde{\mathcal{Y}}_2(t, \cdot))\big)\,1_{\Omega_b}(\cdot) \le b\Big) \ge 1 - \delta.$$

Therefore, without loss of generality, we may assume that:

$$c_Y := \operatorname{ess\,sup}_{\omega} \sup_{0 \le t \le T \wedge \tau} \big(\tilde\gamma_{f,s}(\tilde{\mathcal{Y}}_1(t, \omega)) \vee \tilde\gamma_{f,s}(\tilde{\mathcal{Y}}_2(t, \omega))\big) < \infty. \tag{3.11}$$

Similarly, the same holds with $\tilde\gamma_f$ replacing $\tilde\gamma_{f,s}$. To show existence, the approach is to use completeness and the Picard-Lindelöf approximation procedure. To use this approach, we need to show that the approximations lie in the Banach space $L_{2,\mathcal{F}}(C([0, T \wedge \tau]; \mathbb{R}^d))$. To this end, assume that $q_j(\cdot \wedge \tau) \in L_{2,\mathcal{F}}(C([0, T \wedge \tau]; \mathbb{R}^d))$ for $j = 1, 2$, and define:

$$\bar q_j(t) := q_j(0) + \int_0^t F(q_j(s), \tilde{\mathcal{Y}}_j(s), s)\,ds + \int_0^t\!\!\int J(q_j(s), p, \tilde{\mathcal{Y}}_j(s), s)\,w(dp, ds). \tag{3.12}$$

For $j = 1, 2$,

$$E \sup_{0 \le t \le T} \bar\varrho^2(\bar q_j(t \wedge \tau)) \le 9 E\,\bar\varrho^2(q_j(0)) + 9 E \sup_{0 \le t \le T}\Big|\int_0^{t \wedge \tau} F(q_j(s), \tilde{\mathcal{Y}}_j(s), s)\,ds\Big|^2 + 9 E \sup_{0 \le t \le T}\Big|\int_0^{t \wedge \tau}\!\!\int J(q_j(s), p, \tilde{\mathcal{Y}}_j(s), s)\,w(dp, ds)\Big|^2. \tag{3.13}$$

By Cauchy-Schwarz,²¹ (3.5), and (3.11), there is a constant $c_{F,J,\tau,T}$ such that:

$$E \sup_{0 \le t \le T}\Big|\int_0^{t \wedge \tau} F(q_j(s), \tilde{\mathcal{Y}}_j(s), s)\,ds\Big|^2 \le T\,E \int_0^{T \wedge \tau} |F(q_j(s), \tilde{\mathcal{Y}}_j(s), s)|^2\,ds \le c_{F,J}\,c^2_Y\,T\,E \int_0^{T \wedge \tau} 1\,ds \le c_{F,J,\tau,T}. \tag{3.14}$$

Applying Doob's inequality,²² (3.5), and (3.11) to the random term yields another constant, $c_{F,J,\tau,T}$, with:

$$E \sup_{0 \le t \le T}\Big|\int_0^{t \wedge \tau}\!\!\int J(q_j(s), p, \tilde{\mathcal{Y}}_j(s), s)\,w(dp, ds)\Big|^2 \le 4 d^2 \sum_{k,i=1}^d E \int_0^{T \wedge \tau}\!\!\int J^2_{ki}(q_j(s), p, \tilde{\mathcal{Y}}_j(s), s)\,dp\,ds \le c_{F,J,\tau,T}, \tag{3.15}$$

where we have used (6.15) in the first inequality. Combining (3.13), (3.14), and (3.15) yields a finite constant $c_{F,J,\tau,T,d,\varepsilon}$ such that:

$$E \sup_{0 \le t \le T} \bar\varrho^2(\bar q_j(t \wedge \tau)) \le 9 E\,\bar\varrho^2(q_j(0)) + c_{F,J,\tau,T,d,\varepsilon}. \tag{3.16}$$

Thus, we have shown that if the initial condition satisfies $q_j(0) \in L_{2,\mathcal{F}_0}(\mathbb{R}^d)$, then $\bar q_j(t \wedge \tau)$ defined by (3.12) is in $L_{2,\mathcal{F}}(C([0, T \wedge \tau]; \mathbb{R}^d))$. Now, we state estimates to compare terms of the type defined by (3.12). We may assume that $c_{F,J} \ge 1$, and note that $a \wedge c \le (a \wedge 1)\,c$ for all $a \ge 0$ and $c \ge 1$. Thus, from (3.3) and (3.11),

$$\varrho\big(F(q_1(s), \tilde{\mathcal{Y}}_1(s), s), F(q_2(s), \tilde{\mathcal{Y}}_2(s), s)\big) \le c_{F,J}\,c_Y\,\big\{\varrho(q_1(s), q_2(s)) + \tilde\gamma_{f,s}(\tilde{\mathcal{Y}}_1(s), \tilde{\mathcal{Y}}_2(s))\big\}. \tag{3.17}$$

Now, by using Cauchy-Schwarz, (3.11), (3.17), and (3.3), we have that

$$E \sup_{0 \le t \le T} \varrho^2\Big(\int_0^{t \wedge \tau} F(q_1(s), \tilde{\mathcal{Y}}_1(s), s)\,ds, \int_0^{t \wedge \tau} F(q_2(s), \tilde{\mathcal{Y}}_2(s), s)\,ds\Big)$$

$$\le T\,E \int_0^{T \wedge \tau} \big|F(q_1(s), \tilde{\mathcal{Y}}_1(s), s) - F(q_2(s), \tilde{\mathcal{Y}}_2(s), s)\big|^2\,ds$$

$$\le c_{F,J,\tau,Y}\,E\Big\{\int_0^{T \wedge \tau} \varrho^2(q_1(s), q_2(s))\,ds + \int_0^{T \wedge \tau} \tilde\gamma^2_{f,s}(\tilde{\mathcal{Y}}_1(s), \tilde{\mathcal{Y}}_2(s))\,ds\Big\}. \tag{3.18}$$

Similarly, by using (3.17), (3.4), and (6.15), we have the following:

$$E \sup_{0 \le t \le T} \varrho^2\Big(\int_0^{t \wedge \tau}\!\!\int J(q_1(s), p, \tilde{\mathcal{Y}}_1(s), s)\,w(dp, ds), \int_0^{t \wedge \tau}\!\!\int J(q_2(s), p, \tilde{\mathcal{Y}}_2(s), s)\,w(dp, ds)\Big)$$

$$\le 4 d \sum_{k,i=1}^d E \int_0^{T \wedge \tau}\!\!\int \big(J_{ki}(q_1(s), p, \tilde{\mathcal{Y}}_1(s), s) - J_{ki}(q_2(s), p, \tilde{\mathcal{Y}}_2(s), s)\big)^2\,dp\,ds$$

$$\le c_{F,J,\tau,Y,d}\,E\Big\{\int_0^{T \wedge \tau} \varrho^2(q_1(s), q_2(s))\,ds + \int_0^{T \wedge \tau} \tilde\gamma^2_{f,s}(\tilde{\mathcal{Y}}_1(s), \tilde{\mathcal{Y}}_2(s))\,ds\Big\}. \tag{3.19}$$

It now follows from (3.18), (3.19), and Doob's inequality that there exists a constant $c_{F,J,\tau,T,\varepsilon,d}$ such that:

$$E \sup_{0 \le t \le T} \varrho^2(\bar q_1(t \wedge \tau), \bar q_2(t \wedge \tau)) \le c_{F,J,\tau,T,\varepsilon,d}\Big\{E\,\varrho^2(q_1(0), q_2(0)) + E \int_0^{T} \varrho^2(q_1(s \wedge \tau), q_2(s \wedge \tau))\,ds + E \int_0^{T \wedge \tau} \tilde\gamma^2_{f,s}(\tilde{\mathcal{Y}}_1(s), \tilde{\mathcal{Y}}_2(s))\,ds\Big\}. \tag{3.20}$$

Following [Arn92], apply the Picard-Lindelöf procedure. With $\tilde{\mathcal{Y}}(\cdot) := \tilde{\mathcal{Y}}_1(\cdot) = \tilde{\mathcal{Y}}_2(\cdot)$, define iteratively for $n \in \mathbb{N}$:

$$\bar q_{n+1}(t) := q(0) + \int_0^t F(\bar q_n(s), \tilde{\mathcal{Y}}(s), s)\,ds + \int_0^t\!\!\int J(\bar q_n(s), p, \tilde{\mathcal{Y}}(s), s)\,w(dp, ds), \qquad \bar q_0(t) :\equiv q(0). \tag{3.21}$$

By (3.20), it follows that with $c_N := c_{F,J,\tau,T,\varepsilon,d}$:

$$E \sup_{0 \le t \le T} \varrho^2(\bar q_{n+1}(t \wedge \tau), \bar q_n(t \wedge \tau)) \le c_N \int_0^T E \sup_{0 \le u \le s} \varrho^2(\bar q_n(u \wedge \tau), \bar q_{n-1}(u \wedge \tau))\,ds. \tag{3.22}$$

Thus, for a constant $\bar c$:

$$E \sup_{0 \le t \le T} \varrho^2(\bar q_{n+1}(t \wedge \tau), \bar q_n(t \wedge \tau)) \le \bar c\,\frac{c^n_N T^n}{n!}. \tag{3.23}$$

From (3.16), each $\bar q_n$ lies in the complete space $L_{2,\mathcal{F}}(C([0, T \wedge \tau]; \mathbb{R}^d))$. So, by (3.23), the sequence converges uniformly, in mean square, to a limit $\bar q_\infty$ on the compact interval $[0, T]$. By the construction, $\bar q_\infty$ is continuous, as each $\bar q_n$ is continuous.

Now, we claim that $\bar q_\infty$ is a solution of (3.1). By Fatou's lemma and (3.23):

$$E \int_0^{T} |\bar q_\infty(s) - \bar q_n(s)|^2\,ds \le \limsup_{m \to \infty} E \int_0^{T} |\bar q_m(s) - \bar q_n(s)|^2\,ds \to 0 \tag{3.24}$$

as $n \to \infty$. Further, by Cauchy-Schwarz and (3.3):

$$E \sup_{0 \le t \le T}\Big|\int_0^{t \wedge \tau} F(\bar q_n(s), \tilde{\mathcal{Y}}(s), s)\,ds - \int_0^{t \wedge \tau} F(\bar q_\infty(s), \tilde{\mathcal{Y}}(s), s)\,ds\Big|^2 \to 0 \tag{3.25}$$

as $n \to \infty$. By Doob's inequality and (3.4):

$$E \sup_{0 \le t \le T}\Big|\int_0^{t \wedge \tau}\!\!\int J(\bar q_n(s), p, \tilde{\mathcal{Y}}(s), s)\,w(dp, ds) - \int_0^{t \wedge \tau}\!\!\int J(\bar q_\infty(s), p, \tilde{\mathcal{Y}}(s), s)\,w(dp, ds)\Big|^2 \to 0 \tag{3.26}$$

as $n \to \infty$. It follows that $\bar q_\infty$ is a solution of (3.1), as claimed. To show uniqueness, suppose that both $\bar q_\infty(\cdot)$ and $\bar r_\infty(\cdot)$ are solutions of (3.1) with initial condition $\bar q_\infty(0) = \bar r_\infty(0) = q_0$. By (3.20),

$$E \sup_{0 \le t \le T} \varrho^2(\bar q_\infty(t \wedge \tau), \bar r_\infty(t \wedge \tau)) \le c_{F,J,\tau,T,\varepsilon,d}\Big\{E \int_0^{T} \varrho^2(\bar q_\infty(s \wedge \tau), \bar r_\infty(s \wedge \tau))\,ds + E \int_0^{T \wedge \tau} \tilde\gamma^2_{f,s}(\tilde{\mathcal{Y}}(s), \tilde{\mathcal{Y}}(s))\,ds\Big\}$$

$$= c_{F,J,\tau,T,\varepsilon,d}\,E \int_0^{T} \varrho^2(\bar q_\infty(s \wedge \tau), \bar r_\infty(s \wedge \tau))\,ds. \tag{3.27}$$

Applying Gronwall's inequality²³ yields that:

$$E \sup_{0 \le t \le T} \varrho^2(\bar q_\infty(t \wedge \tau), \bar r_\infty(t \wedge \tau)) = 0. \tag{3.28}$$

Thus, we have uniqueness as desired, as this clearly implies that $\bar q_\infty(\cdot) = \bar r_\infty(\cdot)$ on $[0, T]$.

(ii) Now, (3.9) follows immediately from the previous arguments. In particular, (3.18) and (3.19) are the critical inequalities in establishing the claim. Applying Gronwall's lemma yields (3.9).

(iii) For a proof of the measurability, refer to [Kot10], where the result for measures is established. In fact, with no additional argument, the result in [Kot10] generalizes to the signed measure case. $\square$

²¹ See Theorem 6.9 for the formulation.
²² See Theorem 6.10 for the formulation.
²³ See Theorem 6.11.
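The contraction scheme (3.21)-(3.23) has a simple deterministic skeleton that can be run numerically: dropping the noise and the measure dependence leaves classical Picard-Lindelöf iteration, whose successive differences decay factorially, exactly as in (3.23). A minimal sketch (the ODE $\dot y = y$ and the grid quadrature are illustrative assumptions, not from the thesis):

```python
import numpy as np

def picard_iterates(f, y0, T, n_steps, n_iter):
    """Picard-Lindelof iteration y_{n+1}(t) = y0 + int_0^t f(y_n(s)) ds,
    the deterministic skeleton of (3.21), realized on a time grid with
    left-endpoint quadrature."""
    t = np.linspace(0.0, T, n_steps + 1)
    h = t[1] - t[0]
    ys = [np.full_like(t, y0)]
    for _ in range(n_iter):
        integrand = f(ys[-1])
        nxt = y0 + np.concatenate(([0.0], np.cumsum(integrand[:-1] * h)))
        ys.append(nxt)
    return t, ys

# f(y) = y (Lipschitz constant 1): the iterates are the Taylor polynomials
# of e^t, so successive differences decay like T^n / n!, cf. (3.23).
t, ys = picard_iterates(lambda y: y, 1.0, 1.0, 2000, 12)
sup_diffs = [np.max(np.abs(ys[n + 1] - ys[n])) for n in range(12)]
```

The stochastic iteration (3.21) replaces the sup norm by the mean-square norm of $L_{2,\mathcal{F}}(C([0, T \wedge \tau]; \mathbb{R}^d))$, but the contraction mechanism is the same.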
The SODE system (3.1) is an $\mathbb{R}^{dN}$-valued decoupled SODE system, in that the signed-measure process $\mathcal{Y}(t)$ does not explicitly depend on the solutions $r^i(t)$, $i = 1, \dots, N$. For a more general framework, one can also consider a coupled SODE system. For $i = 1, \dots, N$ and $s \le t \le T$, where $s, T \in [0, \infty)$,

$$dr^i = F(r^i(t), \mathcal{X}_N(t), t)\,dt + \int J_\varepsilon(r^i(t), p, \mathcal{X}_N(t), t)\,w(dp, dt) \tag{3.29}$$

with

$$\mathcal{X}_N(t) := \sum_{i=1}^N a_i\,\delta_{r^i(t)} \tag{3.30}$$

where $a_i \in \mathbb{R}$ for $i = 1, \dots, N$. Since for each $\omega$ the initial value $\mathcal{X}_N(s)$ is a finite sum of point measures, it must be finite. Due to Proposition 3.1, it follows that $\mathcal{X}_N(s) \in L_{0,\mathcal{F}_s}(\mathcal{M}_{f,s}(\mathbb{R}^d))$. Solving the coupled system (3.29) essentially follows the same lines as showing that (3.1) has a unique solution. However, there is an important distinction that changes the argument: the coupled system (3.29) depends on the number of particles, $N$, whereas the decoupled system (3.1) is independent of $N$. Consequently, following [Kot08], define the metric $\varrho_N$ on $\mathbb{R}^{dN}$ by

$$\varrho_N(r_N, q_N) := \max_{1 \le i \le N} \varrho(r^i, q^i) \tag{3.31}$$

where $r_N := (r^1, \dots, r^N)$, $q_N := (q^1, \dots, q^N) \in \mathbb{R}^{dN}$. The following proposition is claimed in [KS12b], and again follows the method for measures established in [Kot08].

Proposition 3.2 Assume either (3.3)-(3.5) or (3.6)-(3.8) holds, as well as $|\mathcal{X}_N|(s) \in L_{0,\mathcal{F}_s}(\mathcal{M}_f(\mathbb{R}^d))$. Then, for each $q_N(s) \in L_{0,\mathcal{F}_s}(\mathbb{R}^{dN})$, there exists a unique continuous solution of (3.29), $q_N(\cdot, q_N(s)) \in L_{0,\mathcal{F}}(C([s, \infty); \mathbb{R}^{dN}))$.
Proof: As in the case of the decoupled system, assume without loss of generality that $s = 0$. Further, we show the case where (3.3)-(3.5) is assumed. Note that, without loss of generality, we can choose a $b > 0$ such that

$$|\mathcal{X}_N|(0, \omega, \mathbb{R}^d) := \Big(\sum_{i=1}^N |a_i|\,\delta_{r^i(0)}\Big)(\mathbb{R}^d) = \sum_{i=1}^N |a_i| \le b \quad \text{a.s.}$$

Following the argument for the decoupled case, define for $j = 1, 2$, $i = 1, \dots, N$ and $n \in \mathbb{N}$,

$$q^i_{j,n+1}(t) := q^i_{j,n}(0) + \int_0^t F(q^i_{j,n}(s), \mathcal{X}_{j,n}(s), s)\,ds + \int_0^t\!\!\int J(q^i_{j,n}(s), p, \mathcal{X}_{j,n}(s), s)\,w(dp, ds) \tag{3.32}$$

and $q^i_{j,0} :\equiv q^i_N(0)$, where

$$\mathcal{X}_{j,n}(t) := \sum_{i=1}^N a_i\,\delta_{q^i_{j,n}(t)}. \tag{3.33}$$

Note that the arguments from (3.18)-(3.20) hold in the coupled case with $\mathcal{Y}_j(t) = \mathcal{X}_{j,n}(t)$. However, to compare the $q_{j,n}(\cdot)$, one must derive the relationships between the associated empirical measures, (3.33). As these empirical measures are signed measures, we need the following definitions:

$$\mathcal{X}^+_{j,n}(t) := \sum_{i=1}^N a_i\,1_{\{a_i > 0\}}\,\delta_{q^i_{j,n}(t)} \tag{3.34}$$

and

$$\mathcal{X}^-_{j,n}(t) := -\sum_{i=1}^N a_i\,1_{\{a_i < 0\}}\,\delta_{q^i_{j,n}(t)}. \tag{3.35}$$

These definitions correspond to the Hahn-Jordan decomposition of the empirical measures: $\mathcal{X}_{j,n}(t) = \mathcal{X}^+_{j,n}(t) - \mathcal{X}^-_{j,n}(t)$. Now, one can compute the following:

$$\tilde\gamma_{f,s}(\tilde{\mathcal{X}}_{1,n}(t), \tilde{\mathcal{X}}_{2,n}(t)) \le \tilde\gamma_f(\tilde{\mathcal{X}}_{1,n}(t), \tilde{\mathcal{X}}_{2,n}(t)) = \gamma_f(\mathcal{X}^+_{1,n}(t), \mathcal{X}^+_{2,n}(t)) + \gamma_f(\mathcal{X}^-_{1,n}(t), \mathcal{X}^-_{2,n}(t))$$

$$= \sup_{\|f\|_{L,\infty} \le 1} \int f(r)\,d\big(\mathcal{X}^+_{1,n}(t) - \mathcal{X}^+_{2,n}(t)\big)(r) + \sup_{\|f\|_{L,\infty} \le 1} \int f(r)\,d\big(\mathcal{X}^-_{1,n}(t) - \mathcal{X}^-_{2,n}(t)\big)(r)$$

$$= \sup_{\|f\|_{L,\infty} \le 1} \sum_{i=1}^N a_i\,1_{\{a_i > 0\}}\big(f(q^i_{1,n}(t)) - f(q^i_{2,n}(t))\big) + \sup_{\|f\|_{L,\infty} \le 1} \sum_{i=1}^N (-a_i)\,1_{\{a_i < 0\}}\big(f(q^i_{1,n}(t)) - f(q^i_{2,n}(t))\big)$$

$$\le \sum_{i=1}^N a_i\,1_{\{a_i > 0\}}\,\varrho\big(q^i_{1,n}(t), q^i_{2,n}(t)\big) - \sum_{i=1}^N a_i\,1_{\{a_i < 0\}}\,\varrho\big(q^i_{1,n}(t), q^i_{2,n}(t)\big)$$

$$= \sum_{i=1}^N |a_i|\,\varrho\big(q^i_{1,n}(t), q^i_{2,n}(t)\big) \le b\,\varrho_N(q_{1,n}(t), q_{2,n}(t)). \tag{3.36}$$

Consequently, using the estimates (3.18)-(3.20) and (3.36), we have that:

$$E \sup_{0 \le t \le T} \varrho^2\big(q^i_{1,n+1}(t), q^i_{2,n+1}(t)\big) \le c_{F,J,\tau,T,\varepsilon}\,E\,\varrho^2(q^i_1(0), q^i_2(0)) + E \int_0^T c_b\,\varrho^2_N(q_{1,n}(s), q_{2,n}(s))\,ds \tag{3.37}$$

where $c_{F,J,\tau,T,\varepsilon}$ and $c_b$ are nonnegative constants depending on the associated parameters. From this it follows that

$$E \sup_{0 \le t \le T} \varrho^2_N\big(q_{1,n+1}(t), q_{2,n+1}(t)\big) \le c_{F,J,\tau,T}\,E\,\varrho^2_N(q_1(0), q_2(0)) + E \int_0^T c_b\,\varrho^2_N(q_{1,n}(s), q_{2,n}(s))\,ds. \tag{3.38}$$

Consequently, using arguments analogous to (3.16)-(3.28), one can derive the existence of a unique, continuous solution by the contraction mapping principle.²⁴ $\square$

²⁴ See Theorem 6.12.
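The splitting (3.34)-(3.35) and the pairing bound (3.36) are elementary to check for concrete particle configurations. A minimal sketch in $d = 1$ (the helper names are illustrative): the empirical signed measure is split by the sign of the weights $a_i$, and $\sum_i |a_i|\,\varrho(q^i_1, q^i_2)$ dominates $|\int f\,d(\mathcal{X}_1 - \mathcal{X}_2)|$ for any particular $f$ with Lipschitz constant and sup norm at most 1.

```python
import numpy as np

def empirical_parts(weights, positions):
    """Hahn-Jordan parts (3.34)-(3.35) of X_N = sum_i a_i delta_{r_i}:
    atoms with a_i > 0 form X+, atoms with a_i < 0 form X- (with the sign
    flipped so both parts are positive measures)."""
    plus = [(r, a) for a, r in zip(weights, positions) if a > 0]
    minus = [(r, -a) for a, r in zip(weights, positions) if a < 0]
    return plus, minus

def pairing_bound(weights, pos1, pos2):
    """Right side of (3.36): sum_i |a_i| * rho(q1_i, q2_i), with
    rho(x, y) = min(|x - y|, 1)."""
    rho = np.minimum(np.abs(np.asarray(pos1) - np.asarray(pos2)), 1.0)
    return float(np.sum(np.abs(np.asarray(weights)) * rho))

a = [1.0, -2.0, 0.5]
r1, r2 = [0.0, 1.0, 2.0], [0.1, 0.8, 2.0]

# One admissible test function: Lipschitz constant and sup norm <= 1.
f = lambda x: 0.5 * np.minimum(np.abs(x), 1.0)
gap = abs(sum(ai * (f(x) - f(y)) for ai, x, y in zip(a, r1, r2)))
```

Taking the supremum over all such $f$ (which the code does not attempt) gives the Kantorovich-Rubinstein distance itself, which (3.36) then bounds by $b\,\varrho_N$.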

Without further mention, we assume the measurability and integrability conditions on the initial conditions, and the Lipschitz and growth conditions, needed to derive unique solutions of the SODE systems (3.1) and (3.29).

3.3 SPDE Results

With the existence and uniqueness of solutions to (3.1) and (3.29), we can now examine a general type of SPDE. As we will soon derive, the empirical processes of (3.1) and (3.29) automatically yield weak solutions of an SPDE. Prior to stating the SPDE, one also needs to define the one- and two-particle diffusion matrices. These matrices represent a measure of the correlations in the stochastic terms in (3.1) and (3.29). Define the diffusion matrices as follows. For $l, k = 1, \dots, d$, $i, j = 1, \dots, N$, $r^i, r^j \in \mathbb{R}^d$, $\mu \in \mathcal{M}_{f,s}(\mathbb{R}^d)$,

$$\widetilde{D}_{\varepsilon,kl}(r^i, r^j, \mu, t) := \sum_{q=1}^d \int J_{\varepsilon,kq}(r^i, p, \mu, t)\,J_{\varepsilon,lq}(r^j, p, \mu, t)\,dp \tag{3.39}$$

and

$$D_\varepsilon(r^i, \mu, t) := \widetilde{D}_\varepsilon(r^i, r^i, \mu, t).$$

For $m \in \mathbb{N} \cup \{0\}$, let $C^m(\mathbb{R}^d; \mathbb{R})$ be the space of $m$ times continuously differentiable functions from $\mathbb{R}^d$ into $\mathbb{R}$. Further, let $C^m_0(\mathbb{R}^d; \mathbb{R})$ be the subspace of $C^m(\mathbb{R}^d; \mathbb{R})$ whose elements, together with all their derivatives, vanish at infinity. Recall that we denote the inner product on $\mathbb{R}^d$ by $\cdot$, and by $\partial_k$ and $\partial^2_{k\ell}$ the first and second partial derivatives in the spatial coordinates $r_k$ and $r_k, r_\ell$, respectively.
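For a concrete feel for (3.39), one can evaluate the two-particle diffusion coefficient by quadrature in $d = 1$ for an illustrative Gaussian kernel (this specific choice of $J_\varepsilon$ is an assumption for the sketch, not the thesis' kernel). For this choice the integral is explicit, $\widetilde{D}_\varepsilon(r^i, r^j) = \sqrt{\pi}\,\varepsilon\,e^{-(r^i - r^j)^2/(4\varepsilon^2)}$, so the correlation between two particles decays on the scale of the correlation length $\varepsilon$, as footnote 16 describes.

```python
import numpy as np

def J(r, p, eps=0.5):
    # Illustrative Gaussian kernel with correlation length eps -- an
    # assumption for this sketch, not the thesis' J_eps. It is square
    # integrable in p, as the standing assumptions require.
    return np.exp(-(r - p) ** 2 / (2 * eps ** 2))

def D_tilde(ri, rj, eps=0.5, lo=-10.0, hi=10.0, n=4001):
    """Two-particle diffusion coefficient (3.39) in d = 1 by trapezoidal
    quadrature: D_tilde(ri, rj) = int J(ri, p) J(rj, p) dp."""
    p = np.linspace(lo, hi, n)
    y = J(ri, p, eps) * J(rj, p, eps)
    h = p[1] - p[0]
    return float(h * (y.sum() - 0.5 * (y[0] + y[-1])))

def D_exact(ri, rj, eps=0.5):
    # Closed form for this Gaussian choice: correlations decay on the
    # scale of the correlation length eps.
    return np.sqrt(np.pi) * eps * np.exp(-(ri - rj) ** 2 / (4 * eps ** 2))
```

In particular, $D_\varepsilon(r^i, \cdot) = \widetilde{D}_\varepsilon(r^i, r^i, \cdot)$ is the on-diagonal value, and $\widetilde{D}_\varepsilon(r^i, r^j, \cdot) \approx 0$ once $|r^i - r^j| \gg \varepsilon$.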


In the following, $\mathcal{X}_N$ represents the empirical process associated with the SODE system (3.29); the same result holds for (3.1). Applying Itô's formula to $\langle \mathcal{X}_N(t), \varphi \rangle$ for $\varphi \in C^2_0(\mathbb{R}^d; \mathbb{R})$ yields the following:

$$d\langle \mathcal{X}_N(t), \varphi \rangle = d\Big\langle \sum_{i=1}^N a_i\,\delta_{r^i(t)}, \varphi \Big\rangle = d\Big(\sum_{i=1}^N a_i\,\varphi(r^i(t))\Big) = \sum_{i=1}^N a_i\,d\varphi(r^i(t))$$

$$= \sum_{i=1}^N a_i\,(\nabla\varphi)(r^i(t)) \cdot dr^i + \frac{1}{2}\sum_{i=1}^N a_i \sum_{k,l=1}^d (\partial^2_{kl}\varphi)(r^i(t))\,d\langle\langle r^{i,k}(t), r^{i,l}(t)\rangle\rangle$$

(where $\langle\langle \cdot, \cdot \rangle\rangle$ denotes the mutual quadratic variation of the one-dimensional components)

$$= \sum_{i=1}^N a_i\,(\nabla\varphi)(r^i(t)) \cdot F(r^i(t), \mathcal{X}_N(t), t)\,dt + \sum_{i=1}^N a_i\,(\nabla\varphi)(r^i(t)) \cdot \int J_\varepsilon(r^i(t), p, \mathcal{X}_N(t), t)\,w(dp, dt)$$

$$+ \frac{1}{2}\sum_{i=1}^N a_i \sum_{k,l=1}^d (\partial^2_{kl}\varphi)(r^i(t))\,d\langle\langle r^{i,k}(t), r^{i,l}(t)\rangle\rangle. \tag{3.40}$$

Now, note that

$$d\langle\langle r^{i,k}(t), r^{i,l}(t)\rangle\rangle = D_{\varepsilon,kl}(r^i, \mathcal{X}_N, t)\,dt, \tag{3.41}$$

as any terms of finite total variation in (3.29) do not contribute to (3.41). Combining (3.40) and (3.41) yields:

$$d\langle \mathcal{X}_N(t), \varphi \rangle = \big\langle \mathcal{X}_N(t), (\nabla\varphi)(\cdot) \cdot F(\cdot, \mathcal{X}_N(t), t)\big\rangle\,dt + \Big\langle \mathcal{X}_N(t), (\nabla\varphi)(\cdot) \cdot \int J_\varepsilon(\cdot, p, \mathcal{X}_N(t), t)\,w(dp, dt)\Big\rangle$$

$$+ \Big\langle \mathcal{X}_N(t), \frac{1}{2}\sum_{k,l=1}^d (\partial^2_{kl}\varphi)(\cdot)\,D_{\varepsilon,kl}(\cdot, \mathcal{X}_N, t)\Big\rangle\,dt. \tag{3.42}$$

Thus, integrating by parts in the sense of generalized functions, it follows that we have

$$d\langle \mathcal{X}_N, \varphi \rangle = \Big\langle \frac{1}{2}\sum_{k,l=1}^d \partial^2_{kl}\big(D_{\varepsilon,kl}(\cdot, \mathcal{X}_N, t)\,\mathcal{X}_N(t)\big)\,dt, \varphi \Big\rangle - \big\langle \nabla \cdot \big(\mathcal{X}_N(t)\,F(\cdot, \mathcal{X}_N(t), t)\big)\,dt, \varphi \big\rangle$$

$$- \Big\langle \nabla \cdot \Big(\mathcal{X}_N(t)\int J_\varepsilon(\cdot, p, \mathcal{X}_N(t), t)\,w(dp, dt)\Big), \varphi \Big\rangle. \tag{3.43}$$

We can now conclude the following:

Proposition 3.3 $\mathcal{X}_N(\cdot)$ is a weak solution of the signed-measure valued SPDE:

$$d\mathcal{Y} = \Big(\frac{1}{2}\sum_{k,l=1}^d \partial^2_{kl}\big(\mathcal{Y}\,D_{\varepsilon,kl}(\cdot, \mathcal{Y}, t)\big) - \nabla \cdot \big(\mathcal{Y}\,F(\cdot, \mathcal{Y}, t)\big)\Big)\,dt - \nabla \cdot \Big(\mathcal{Y}\int J_\varepsilon(\cdot, p, \mathcal{Y}, t)\,w(dp, dt)\Big) \tag{3.44}$$

with initial condition $\mathcal{Y}_s = \mathcal{X}_s = \sum_{i=1}^N a_i\,\delta_{r^i_s}$ and Hahn-Jordan decomposition $\mathcal{Y}^\pm_s = \mathcal{X}^\pm_s$.

3.4 Extension by Continuity

Proposition 3.3 inspires the following method and representation. The empirical process associated with (3.29) satisfies (3.44). Thus, one can construct a weak solution of (3.44) provided that the initial signed measure, $\mathcal{Y}_s$, is discrete. One approach to constructing solutions for arbitrary signed measures is extension by continuity. This method approximates arbitrary signed measures by discrete signed measures in an appropriate metric. One then shows that the solutions associated with the discrete signed measures converge. Finally, one shows that this limit is the solution of (3.44) with the arbitrary initial signed measure.

The first step in the method's argument is justifying approximations of initial signed measures by discrete signed measures. To show such a statement, it is necessary to derive certain a priori estimates on the empirical processes. These estimates show that the empirical processes associated with solutions of (3.1) or (3.29) can be estimated in terms of the initial signed measures. Again, to simplify notation, we assume that $s = 0$.
Proposition 3.4 Suppose, for $i = 1, 2$, that $\mathcal{X}_i(0) \in L_{2,\mathcal{F}_0}(\mathcal{M}_{f,s}(\mathbb{R}^d))$ are the initial signed measures and $\tilde{\mathcal{Y}} \in L_{\mathrm{loc},2,\mathcal{F}}(C((s, T]; \mathcal{M}_{f,s}(\mathbb{R}^d)))$ is the signed-measure input process. Further, let $\mathcal{Y}_i(\cdot)$ be the empirical processes, given by (3.30), associated with $\mathcal{X}_1(0)$ and $\mathcal{X}_2(0)$ for $i = 1, 2$. Finally, let $\tau$ be a localizing stopping time for $\tilde{\mathcal{Y}}$. Then there is a constant $c_{T,F,J,\varepsilon}$ such that

$$E \sup_{0 \le t \le T \wedge \tau} \tilde\gamma^2_{f,s}(\tilde{\mathcal{Y}}_1(t), \tilde{\mathcal{Y}}_2(t))\,1_{\{\tau > 0\}} \le c_{T,F,J,\varepsilon}\,E\big\{\tilde\gamma^2_f(\tilde{\mathcal{X}}_1(0), \tilde{\mathcal{X}}_2(0))\,1_{\{\tau > 0\}}\big\}. \tag{3.45}$$

Proof: The following proof is claimed in [KS12a] and follows the measure case established in [Kot08]. Suppose first that the initial distributions are non-random; once we establish the result in this case, we can condition on the $\sigma$-algebra $\mathcal{F}_0$ to establish the general claim. Further, assume that $\mathcal{Y}_i(t)$ has Hahn-Jordan decomposition $\mathcal{Y}^\pm_i(t)$ with masses $\mathcal{Y}^\pm_i(0, \mathbb{R}^d) = m^\pm_i \ge 0$ for $i = 1, 2$.

By Proposition 2.7:

$$E \sup_{0 \le t \le T \wedge \tau} \tilde\gamma^2_{f,s}(\tilde{\mathcal{Y}}_1(t), \tilde{\mathcal{Y}}_2(t)) \le 2 E \sup_{0 \le t \le T \wedge \tau} \gamma^2_f(\mathcal{Y}^+_1(t), \mathcal{Y}^+_2(t)) + 2 E \sup_{0 \le t \le T \wedge \tau} \gamma^2_f(\mathcal{Y}^-_1(t), \mathcal{Y}^-_2(t)). \tag{3.46}$$

We focus on the first term, as the second follows by a similar argument. The following argument is based on a relationship between the Kantorovich-Rubinstein distance and the Wasserstein distance; the details are given in Proposition 6.8. By that proposition,

$$E \sup_{0 \le t \le T \wedge \tau} \gamma^2_f(\mathcal{Y}^+_1(t), \mathcal{Y}^+_2(t)) \le c\,E \sup_{0 \le t \le T \wedge \tau}\Big((m^+_1 \wedge m^+_2)\,W_1\Big(\frac{\mathcal{Y}^+_1(t)}{m^+_1}, \frac{\mathcal{Y}^+_2(t)}{m^+_2}\Big) + |m^+_1 - m^+_2|\Big)^2.$$

Focus on the first term on the right side. Let $Q^+ \in C\big(\frac{\mathcal{X}^+_1(0)}{m^+_1}, \frac{\mathcal{X}^+_2(0)}{m^+_2}\big)$ be arbitrary.²⁵ Then

$$E \sup_{0 \le t \le T \wedge \tau}\Big((m^+_1 \wedge m^+_2)\,W_1\Big(\frac{\mathcal{Y}^+_1(t)}{m^+_1}, \frac{\mathcal{Y}^+_2(t)}{m^+_2}\Big)\Big)^2 \le (m^+_1 \wedge m^+_2)^2\,E \sup_{0 \le t \le T \wedge \tau}\Big(\int\!\!\int \varrho\big(r(t, \tilde{\mathcal{Y}}, q), r(t, \tilde{\mathcal{Y}}, \hat q)\big)\,Q^+(dq, d\hat q)\Big)^2$$

$$= (m^+_1 \wedge m^+_2)^2\,E \sup_{0 \le t \le T \wedge \tau} \int\!\!\int\!\!\int\!\!\int \varrho\big(r(t, \tilde{\mathcal{Y}}, q), r(t, \tilde{\mathcal{Y}}, \hat q)\big)\,\varrho\big(r(t, \tilde{\mathcal{Y}}, p), r(t, \tilde{\mathcal{Y}}, \hat p)\big)\,Q^+(dp, d\hat p)\,Q^+(dq, d\hat q).$$

By the Cauchy-Schwarz inequality and Proposition 3.1, we have that:

$$E \sup_{0 \le t \le T \wedge \tau} \int\!\!\int\!\!\int\!\!\int \varrho\big(r(t, \tilde{\mathcal{Y}}, q), r(t, \tilde{\mathcal{Y}}, \hat q)\big)\,\varrho\big(r(t, \tilde{\mathcal{Y}}, p), r(t, \tilde{\mathcal{Y}}, \hat p)\big)\,Q^+(dp, d\hat p)\,Q^+(dq, d\hat q)$$

$$\le \int\!\!\int\!\!\int\!\!\int \sqrt{E \sup_{0 \le t \le T \wedge \tau} \varrho^2\big(r(t, \tilde{\mathcal{Y}}, q), r(t, \tilde{\mathcal{Y}}, \hat q)\big)}\,\sqrt{E \sup_{0 \le t \le T \wedge \tau} \varrho^2\big(r(t, \tilde{\mathcal{Y}}, p), r(t, \tilde{\mathcal{Y}}, \hat p)\big)}\,Q^+(dp, d\hat p)\,Q^+(dq, d\hat q)$$

$$\le c_{T,F,J,\varepsilon} \int\!\!\int\!\!\int\!\!\int \sqrt{\varrho^2(q, \hat q)}\,\sqrt{\varrho^2(p, \hat p)}\,Q^+(dp, d\hat p)\,Q^+(dq, d\hat q).$$

As $Q^+$ was an arbitrary element of $C\big(\frac{\mathcal{X}^+_1(0)}{m^+_1}, \frac{\mathcal{X}^+_2(0)}{m^+_2}\big)$, combining these results yields that

$$E \sup_{0 \le t \le T \wedge \tau}\Big((m^+_1 \wedge m^+_2)\,W_1\Big(\frac{\mathcal{Y}^+_1(t)}{m^+_1}, \frac{\mathcal{Y}^+_2(t)}{m^+_2}\Big)\Big)^2 \le c_{T,F,J,\varepsilon}\,(m^+_1 \wedge m^+_2)^2\,W^2_1\Big(\frac{\mathcal{X}^+_1(0)}{m^+_1}, \frac{\mathcal{X}^+_2(0)}{m^+_2}\Big).$$

Yet, by Proposition 6.8, adding $|m^+_1 - m^+_2|$ to the right side produces a metric that is equivalent to $\gamma_f(\cdot, \cdot)$. Consequently, it follows that:

$$E \sup_{0 \le t \le T \wedge \tau} \gamma^2_f(\mathcal{Y}^+_1(t), \mathcal{Y}^+_2(t)) \le c_{T,F,J,\varepsilon}\,E\,\gamma^2_f(\mathcal{X}^+_1(0), \mathcal{X}^+_2(0)). \tag{3.47}$$

The term for the negative components of the signed measures follows similarly. Combining these terms yields the claim. $\square$

²⁵ Recall the definition of the set of couplings $C(\cdot, \cdot)$ from (2.1).
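The first step of the method — approximating an initial signed measure by discrete signed measures — can be sketched numerically. Below, a signed measure given by a density is discretized into one atom per grid cell; pairing against a fixed smooth test function shows the approximation improving as the grid is refined. (The density, the test function, and the midpoint-rule choice are illustrative assumptions, not from the thesis.)

```python
import numpy as np

def discretize_signed(density, lo, hi, n):
    """Approximate the signed measure density(x) dx on [lo, hi] by a
    discrete signed measure with one atom per cell: the atom sits at the
    cell midpoint and carries the cell's (signed) mass."""
    edges = np.linspace(lo, hi, n + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    weights = density(mids) * (edges[1] - edges[0])
    return mids, weights

def pair(f, mids, weights):
    """<discrete signed measure, f> = sum_i a_i f(q_i)."""
    return float(np.sum(weights * f(mids)))

# Signed density sin(x) on [0, 2*pi], paired against f(x) = cos(x/2);
# the exact value is int_0^{2pi} sin(x) cos(x/2) dx = 8/3.
f = lambda x: np.cos(x / 2.0)
coarse = pair(f, *discretize_signed(np.sin, 0.0, 2.0 * np.pi, 10))
fine = pair(f, *discretize_signed(np.sin, 0.0, 2.0 * np.pi, 2000))
```

Estimate (3.45) is exactly what turns this kind of initial-data convergence into convergence of the associated empirical solution processes.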
The technique of extension by continuity is employed in [Kot95], [KX99], [MP82], and [KK10]. We shall comment on [MP82] and [Kot95] in Chapter 4 as an application of the general results. The other articles have generated general results in signed measure valued SPDE. In [KX99], the authors focus on a more general class of SPDE. In particular, the associated SODE (3.1) (or (3.29)) includes another random term, driven by independent Brownian motions, as an external noise. Furthermore, their setting allows the particle weights, $a_i$, to be time dependent rather than constant. However, there are several drawbacks to the approach in [KX99]:

- The authors use that the initial conditions of the weights and the positions are exchangeable, in addition to square integrable and measurable.
- The authors assume Lipschitz and growth conditions such as (3.3)-(3.5) or (3.6)-(3.8) for every possible representation of a signed measure as the difference of two measures. This assumption is not only difficult to verify; such a strong assumption also indicates that these hypotheses are not the ideal setting for the analysis.

Despite these drawbacks, the authors derive an existence and uniqueness result for their SODE system, and similarly pass the solution to a solution of an SPDE by Itô's formula.

Following the results from [KX99], [KK10] uses the results on exchangeable pairs to derive limit results for deterministic PDE. In particular, the authors examine the same class of SPDE as in this work, but with a different result in mind: they derive macroscopic limit results for solutions of the SPDE. That is, as the correlation length, $\varepsilon$, converges to $0$, the solutions of the SPDE converge to a solution of a deterministic PDE.

3.5 Hahn-Jordan Decompositions

Another important work intimately related to this thesis is [Kot10]. In that work, the author examines the same SODE system and class of SPDE, and derives the following interesting result.

Proposition 3.5 Assume that the coefficients of the SODE systems (3.1) and (3.29) satisfy either the Lipschitz conditions (3.3)-(3.4) or (3.6)-(3.7). Then the particles satisfying (3.1) or (3.29) do not coalesce in finite time, with probability one. That is:

$$\tau := \inf\{t > 0 : r^i(t) = r^j(t) \text{ for some } i \ne j\} = \infty \quad \text{a.s.} \tag{3.48}$$

With this proposition and some additional hypotheses, the author establishes that a solution to the SPDE (3.44) preserves the Hahn-Jordan decomposition of the initial empirical process. An important part of the argument in [Kot10] relates solutions
50

to a certain flow representation. To show this representation, consider the empirical


measures associated with (3.1). For convenience, fix Y Lloc,2,F (C((s, T ]; Mf,s (Rd ))),
and assume that q RdN . Without loss of generality, choose s = 0 in the following.
Consequently, (3.1) has a unique solution:

r(t, , q) := r(t, , Y(t),


q) = (r1 (t, , Y(t),
q), . . . , rN (t, , Y(t),
q)).

(3.49)

By Proposition 3.1, r(t, , q) is measurable in (t, , q). Consequently,

(, q) 7 r(t,,q) , Rd 7 Mf (Rd ),

(3.50)

is Ft B d BMf (Rd ) measurable, where BMf (Rd ) is the Borel algebra on (Mf (Rd ), f ).
The following proposition from [Kot10] shows the definition and the importance of the flow representation.

Proposition 3.6 Suppose $X_0 \in L_{2,\mathcal{F}_0}(\mathcal{M}_{f,s}(\mathbf{R}^d))$ and $Y(t,\omega) \in \mathcal{M}_{f,s}(\mathbf{R}^d)$ is defined by²⁶
$$Y(t,\omega) := Y(t,\omega,\mathcal{Y}(t),X_0(\omega)) = \int \delta_{r(t,\omega,q)}\, X_0(dq,\omega). \qquad (3.51)$$
Then, $Y(t,\omega)$ is a solution to the SPDE, (3.44), where $Y$ replaces $\mathcal{Y}$ in the arguments of $D$, $F$, and $\mathcal{J}$.
Proof: The proof is given in [Kot10]. We reproduce it to fill in the details for the argument in this thesis. To simplify notation, denote $r(t,q) = r(t,\omega,q)$ and
$$m(r(s,q),ds) := \int \mathcal{J}(r(s,q),p,\mathcal{Y}(s),s)\, w(dp,ds).$$
Let $\{\varphi_n\}_{n\in\mathbf{N}}$ be a complete, orthonormal system in $L_2(\mathbf{R}^d)$, and define $\phi_n = \varphi_n \mathrm{Id}$, where $\mathrm{Id}$ is the identity matrix in $\mathbf{M}^{d\times d}$. Set
$$\beta^n(t) := \int_0^t\!\!\int \phi_n(p)\, w(dp,ds).$$
The $\beta^n(\cdot)$ are i.i.d. standard $\mathbf{R}^d$-valued Brownian motions by [Kot08]. Also, note that we have the following:
$$m(r(t,q),dt) = \sum_{n=1}^{\infty} \sigma_n(r(t,q),\mathcal{Y}(t),t)\, \beta^n(dt),$$
where
$$\sigma_n(r,\mu,t) := \int \mathcal{J}(r,p,\mu,t)\, \phi_n(p)\, dp.$$
Let $\varphi \in C_0^2(\mathbf{R}^d,\mathbf{R})$. By Itô's Formula, we obtain that
$$\begin{aligned}
\langle Y(t),\varphi\rangle &= \int \varphi(r)\, Y(t,dr) = \int\!\!\int \varphi(r)\, \delta_{r(t,q)}(dr)\, X_0(dq) \\
&= \int \varphi(q)\, X_0(dq) + \int\!\!\int_0^t (\nabla\varphi)(r(s,q)) \cdot F(r(s,q),\mathcal{Y}(s),s)\, ds\, X_0(dq) \\
&\quad + \int\!\!\int_0^t (\nabla\varphi)(r(s,q)) \cdot m(r(s,q),ds)\, X_0(dq) \\
&\quad + \frac{1}{2}\sum_{k,l=1}^d \int\!\!\int_0^t (\partial^2_{kl}\varphi)(r(s,q))\, D_{kl}(r(s,q),\mathcal{Y}(s),s)\, ds\, X_0(dq).
\end{aligned}$$
Call these terms $I_1$, $I_2$, $I_3$ and $I_4$, respectively.

²⁶ The right side is by definition the image of the initial measure $X_0$ under the flow $q \mapsto r(t,\omega,q)$.


Now, note that
$$m_k(r(s,q),ds) = \sum_{n=1}^{\infty} \sum_{l=1}^{d} \sigma_{n,kl}(r(s,q),\mathcal{Y}(s),s)\, \beta^n_l(ds),$$
which implies that
$$\begin{aligned}
I_3(t) &= \int\!\!\int_0^t (\nabla\varphi)(r(s,q)) \cdot m(r(s,q),ds)\, X_0(dq) \\
&= \sum_{n=1}^{\infty}\sum_{k,l=1}^{d} \int_0^t\!\!\int\!\!\int (\partial_k\varphi)(r)\, \sigma_{n,kl}(r,\mathcal{Y}(s),s)\, \delta_{r(s,q)}(dr)\, X_0(dq)\, \beta^n_l(ds) \\
&= \sum_{n=1}^{\infty}\sum_{k,l=1}^{d} \int_0^t\!\!\int (\partial_k\varphi)(r)\, \sigma_{n,kl}(r,\mathcal{Y}(s),s)\, Y(s,dr)\, \beta^n_l(ds) \\
&= \sum_{n=1}^{\infty}\sum_{k,l=1}^{d} \int_0^t \bigl\langle \sigma_{n,kl}(\cdot,\mathcal{Y}(s),s)\, Y(s),\, (\partial_k\varphi)(\cdot)\bigr\rangle\, \beta^n_l(ds) \\
&= -\sum_{n=1}^{\infty}\sum_{k,l=1}^{d} \int_0^t \bigl\langle \partial_k\bigl(\sigma_{n,kl}(\cdot,\mathcal{Y}(s),s)\, Y(s)\bigr),\, \varphi(\cdot)\bigr\rangle\, \beta^n_l(ds) \\
&\qquad\text{(integrating by parts in the sense of generalized functions)} \\
&= -\Bigl\langle \int_0^t \sum_{k=1}^d \partial_k\Bigl( Y(s) \sum_{n=1}^{\infty}\sum_{l=1}^{d} \sigma_{n,kl}(\cdot,\mathcal{Y}(s),s)\, \beta^n_l(ds)\Bigr),\, \varphi(\cdot)\Bigr\rangle \\
&= -\Bigl\langle \int_0^t \nabla\cdot\Bigl( Y(s) \int \mathcal{J}(\cdot,p,\mathcal{Y}(s),s)\, w(dp,ds)\Bigr),\, \varphi(\cdot)\Bigr\rangle.
\end{aligned}$$

Using similar arguments yields the following:
$$\begin{aligned}
\langle Y(t),\varphi\rangle &= \langle X_0,\varphi\rangle - \Bigl\langle \int_0^t \nabla\cdot\bigl(Y(s)F(\cdot,\mathcal{Y}(s),s)\bigr)\, ds,\, \varphi(\cdot)\Bigr\rangle \\
&\quad - \Bigl\langle \int_0^t \nabla\cdot\Bigl(Y(s)\int \mathcal{J}(\cdot,p,\mathcal{Y}(s),s)\, w(dp,ds)\Bigr),\, \varphi(\cdot)\Bigr\rangle \\
&\quad + \Bigl\langle \frac{1}{2}\sum_{k,l=1}^d \int_0^t \partial^2_{kl}\bigl(Y(s)D_{kl}(\cdot,\mathcal{Y}(s),s)\bigr)\, ds,\, \varphi(\cdot)\Bigr\rangle.
\end{aligned}$$
Yet, note that this is a weak form of (3.44) with $Y$ replacing $\mathcal{Y}$ in $D$, $F$, and $\mathcal{J}$.
The empirical process associated with the SODE system, (3.1), has such a flow representation, given by
$$X_N(t,\omega) = X_N(t,\omega,\mathcal{Y}(t),X_0(\omega)) = \sum_{i=1}^N a_i\, \delta_{r^i(t,\omega,\mathcal{Y}(t),X_0)} = \int \delta_{r(t,\omega,\mathcal{Y}(t),q)}\, X_0(dq,\omega). \qquad (3.52)$$
Further, the empirical process has a Hahn-Jordan decomposition given by the following:
$$X^{\pm}(t,\omega) = X^{\pm}(t,\omega,\mathcal{Y}(t),X_0^{\pm}(\omega)) = \int \delta_{r(t,\omega,\mathcal{Y}(t),q)}\, X_0^{\pm}(dq,\omega). \qquad (3.53)$$
Here $X_0$ is an initial discrete signed measure, with Hahn-Jordan decomposition, $X_0^{\pm}$.


One would like to conclude that the Hahn-Jordan decomposition of $X(t,\omega,X_0)$ also has such a flow representation for general initial signed measures. However, to show such a result for arbitrary initial signed measures, [Kot10] adds the assumption that the coefficients of the SODE, (3.1) and (3.29), must be bounded with bounded derivatives up to an order $m$ for some $m \in \mathbf{N}$.
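For a discrete initial measure, the flow representation (3.53) is just a transport of weighted atoms, and non-coalescence makes the transporting map injective. The following minimal sketch (all helper names are ours, and the flow map is an illustrative stand-in for $q \mapsto r(t,\omega,q)$) shows why injectivity of the flow forces the Hahn-Jordan decomposition of the transported measure to be the transport of $X_0^{\pm}$:

```python
# Sketch: push-forward of a discrete signed measure under an injective flow map.
# The map `flow` stands in for q -> r(t, omega, q); all names are illustrative.

def pushforward(atoms, flow):
    """Transport a discrete signed measure {location: weight} by a map on locations."""
    return {flow(q): w for q, w in atoms.items()}

def hahn_jordan(atoms):
    """Split a discrete signed measure into its positive and negative parts."""
    pos = {q: w for q, w in atoms.items() if w > 0}
    neg = {q: -w for q, w in atoms.items() if w < 0}
    return pos, neg

# Initial signed measure: two positive atoms and one negative atom on the line.
x0 = {0.0: 0.5, 1.0: 0.5, 2.0: -1.0}
flow = lambda q: q + 3.0  # injective: distinct particles stay distinct

x_t = pushforward(x0, flow)
pos_t, neg_t = hahn_jordan(x_t)
pos_0, neg_0 = hahn_jordan(x0)

# Transporting X_0^+ and X_0^- separately yields the same decomposition...
assert pos_t == pushforward(pos_0, flow)
assert neg_t == pushforward(neg_0, flow)
# ...and the total positive and negative masses (a^+, a^-) are conserved.
assert sum(pos_t.values()) == sum(pos_0.values())
```

Non-coalescence (Proposition 3.5) is exactly what guarantees the injectivity assumed here: if two atoms of opposite sign landed on the same point, the weights would cancel and the decomposition would change.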

3.6 New Results

With this approach in mind, we wish to generalize the results of [Kot10]. With
the a priori estimates given in Proposition 3.4, we are now ready to address the role
of the Hahn-Jordan decomposition. Proposition 3.5 implies that the Hahn-Jordan
decomposition of the empirical process is preserved through the flow representation,
(3.53), provided the initial empirical process, X0 , is discrete. If the particles driving
the SODE system never coalesce, then the weights, ai , in the definition of XN (t)
cannot change values. We now extend this result to arbitrary initial signed measures.
Corollary 3.7 The Hahn-Jordan decomposition of the initial condition is preserved in the solution of (3.44) for all $t > 0$ for arbitrary $\mathcal{F}_0$-measurable $X_{0,N}$. That is, for the flow representation,
$$Y(t,\omega) := Y(t,\omega,\mathcal{Y}(t),X_{0,N}(\omega)) = \int \delta_{r(t,\omega,q)}\, X_{0,N}(dq,\omega), \qquad (3.54)$$
the Hahn-Jordan decomposition can be found by applying the flow representation to the Hahn-Jordan decomposition of the initial signed measures:
$$Y^{\pm}(t,\omega) := Y^{\pm}(t,\omega,\mathcal{Y}(t),X_{0,N}^{\pm}(\omega)) = \int \delta_{r(t,\omega,q)}\, X_{0,N}^{\pm}(dq,\omega). \qquad (3.55)$$

Proof: Let $N \in \mathbf{N}$. As mentioned, (3.55) holds for discrete random initial signed measures, $X_{0,N}$. For the general case, choose a sequence, $\{X_{0,N}\}_{N\in\mathbf{N}}$, such that
$$E\gamma_f^2(X_{0,N}, X_0) \to 0.$$
Truncating the initial signed measures if necessary, we may assume that the initial signed measures are square integrable with respect to $\gamma_f$. By (3.45), for $M, N \in \mathbf{N}$,
$$E\gamma_{f,s}^2\bigl(Y(t,X_{0,N}), Y(t,X_{0,M})\bigr) \le c_{T,F,\mathcal{J}}\, E\gamma_f^2(X_{0,N}, X_{0,M}).$$
Hence, as $N, M \to \infty$,
$$E\gamma_{f,s}^2\bigl(Y(t,X_{0,N}), Y(t,X_{0,M})\bigr) \to 0.$$
By passing to a subsequence and relabeling if necessary, we have that $Y(t,X_{0,N})$ is a Cauchy sequence for $\gamma_{f,s}$. Furthermore, the sequence converges in $\gamma_{f,s}$, as the sequence of measure pairs converges in the product space. Employing Theorem 2.8 completes the proof.
With this result on the Hahn-Jordan decomposition, we wish to show there is a solution of the SPDE, (3.44), for arbitrary adapted initial signed measures. However, to establish such a claim, it is necessary to first derive another a priori estimate. This estimate shows how solutions depend on their input signed measure processes, $\mathcal{Y}(\cdot)$.

Proposition 3.8 Suppose $\mathcal{Y}_1, \mathcal{Y}_2 \in L_{loc,2,\mathcal{F}}(C((s,T]; \mathcal{M}_{f,s}(\mathbf{R}^d)))$. Let $Y(t,\mathcal{Y}_1)$ and $Y(t,\mathcal{Y}_2)$ be two solutions of (3.44) with $X(0,\omega) := Y(0,\mathcal{Y}_1) = Y(0,\mathcal{Y}_2)$ and flow representations, (3.54). Then, for all $T > 0$ there is a positive $c_T < \infty$ such that
$$\begin{aligned}
E\sup_{0\le t\le T} \gamma_{f,s}^2\bigl(Y(t,\mathcal{Y}_1), Y(t,\mathcal{Y}_2)\bigr) &\le E\sup_{0\le t\le T} \gamma_f^2\bigl(Y^{\pm}(t,\mathcal{Y}_1), Y^{\pm}(t,\mathcal{Y}_2)\bigr) \\
&\le c_T \int_0^T E\gamma_{f,s}^2\bigl(\mathcal{Y}_1(s),\mathcal{Y}_2(s)\bigr)\, ds \le c_T \int_0^T E\gamma_f^2\bigl(\mathcal{Y}_1(s),\mathcal{Y}_2(s)\bigr)\, ds. \qquad (3.56)
\end{aligned}$$

56

Proof: Truncating the initial measure, $X(0,\omega)$, if necessary, we may without loss of generality assume that
$$\operatorname{ess\,sup}_{\omega} \gamma_f(X(0,\omega)) \le c < \infty. \qquad (3.57)$$
Hence,
$$\begin{aligned}
E\sup_{0\le t\le T} \gamma_{f,s}^2\bigl(Y(t,\mathcal{Y}_1), Y(t,\mathcal{Y}_2)\bigr) &\le E\sup_{0\le t\le T} \gamma_f^2\bigl(Y^{\pm}(t,\mathcal{Y}_1), Y^{\pm}(t,\mathcal{Y}_2)\bigr) \quad \text{(by Proposition 2.7)} \\
&= E\sup_{0\le t\le T}\, \sup_{\|f\|_{L,\infty}\le 1} \Bigl(\int \bigl(f(r(t,\mathcal{Y}_1,q)) - f(r(t,\mathcal{Y}_2,q))\bigr)\, X_0^{\pm}(dq)\Bigr)^2 \\
&\le E\sup_{0\le t\le T} \Bigl(\int \varrho\bigl(r(t,\mathcal{Y}_1,q), r(t,\mathcal{Y}_2,q)\bigr)\, X_0^{\pm}(dq)\Bigr)^2 \\
&\le E\Bigl(\bigl(X_0^+(\mathbf{R}^d) + X_0^-(\mathbf{R}^d)\bigr)^2 \sup_{0\le t\le T} \int \varrho^2\bigl(r(t,\mathcal{Y}_1,q), r(t,\mathcal{Y}_2,q)\bigr)\, X_0^{\pm}(dq)\Bigr) \\
&\le c_T\, E\int_0^T \gamma_{f,s}^2\bigl(\mathcal{Y}_1(u),\mathcal{Y}_2(u)\bigr)\, du \quad \text{(by Proposition 3.1 and the boundedness of the measures)} \\
&\le c_T\, E\int_0^T \gamma_f^2\bigl(\mathcal{Y}_1(s),\mathcal{Y}_2(s)\bigr)\, ds,
\end{aligned}$$
where the last inequality follows from the definition of $\gamma_{f,s}$.


We can now state our general theorem, which shows that for any initial signed measure the SPDE, (3.44), has a solution. Furthermore, the Hahn-Jordan decomposition of the solution of (3.44) is preserved through the flow representation.

Theorem 3.9
i. There is a weak solution of the SPDE, (3.44), with initial condition, $X_0$, and Hahn-Jordan decomposition, $X_0^{\pm}$.
ii. This solution has the representation
$$X(t) := X(t,X,X_0) = \int \delta_{r(t,\cdot,X,q)}\, X_0(dq). \qquad (3.58)$$
Further,
$$X^{\pm}(t) := X^{\pm}(t,X,X_0^{\pm}) = \int \delta_{r(t,\cdot,X,q)}\, X_0^{\pm}(dq). \qquad (3.59)$$

Essentially, the theorem states that
$$\int \delta_{r(t,\cdot,X,q)}\, X_0^{\pm}(dq)$$
is the Hahn-Jordan decomposition $X^{\pm}(t)$ of $X(t)$ for all $t > 0$. Thus, the Hahn-Jordan decomposition is preserved.

Proof: Using the flow representation, one can define recursively
$$Y_0(t) \equiv X_0, \qquad Y_n(t) := \int \delta_{r(t,Y_{n-1},q)}\, X_0(dq). \qquad (3.60)$$
Without loss of generality, assume that the total variation of the initial signed measure is bounded uniformly in $\omega$. Due to the conservation of the intensities by Proposition 3.5, the same holds for the measures $Y_n(t)$. By Proposition 3.8,
$$E\sup_{0\le t\le T} \gamma_f^2\bigl(Y_n^{\pm}(t), Y_m^{\pm}(t)\bigr) \le c_T \int_0^T E\gamma_f^2\bigl(Y_{n-1}^{\pm}(s), Y_{m-1}^{\pm}(s)\bigr)\, ds \le c_T \int_0^T E\sup_{0\le u\le s} \gamma_f^2\bigl(Y_{n-1}^{\pm}(u), Y_{m-1}^{\pm}(u)\bigr)\, ds.$$

The contraction mapping principle and the completeness of $\overline{\mathcal{M}}_f(\mathbf{R}^d)$ yield a unique adapted $\overline{\mathcal{M}}_f(\mathbf{R}^d)$-valued process $X^{\pm}(\cdot) \in C([0,\infty); \overline{\mathcal{M}}_f(\mathbf{R}^d))$ a.s. such that
$$E\sup_{0\le t\le T} \gamma_f^2\bigl(Y_n^{\pm}(t), X^{\pm}(t)\bigr) \to 0, \quad \text{as } n \to \infty. \qquad (3.61)$$
Setting $X := X^+ - X^-$, we now define
$$\tilde{X}(t) := \int \delta_{r(t,X,q)}\, X_0(dq),$$
and by Corollary 3.7, we have the Hahn-Jordan decomposition
$$\tilde{X}^{\pm}(t) = \int \delta_{r(t,X,q)}\, X_0^{\pm}(dq). \qquad (3.62)$$

Applying Proposition 3.8 yields
$$E\sup_{0\le t\le T} \gamma_f^2\bigl(Y_n^{\pm}(t), \tilde{X}^{\pm}(t)\bigr) \le c_T \int_0^T E\gamma_f^2\bigl(Y_{n-1}^{\pm}(s), \tilde{X}^{\pm}(s)\bigr)\, ds \to 0, \quad \text{as } n \to \infty.$$
By the uniqueness of the limit in (3.61), we have $\tilde{X}^{\pm}(t) \equiv X^{\pm}(t)$. Hence, by (3.62), we obtain the desired representation. We note that Proposition 3.6 shows that $X(\cdot)$ is a weak solution of (3.44), as $X(\cdot)$ has a flow representation.
The results derived in this chapter provide an advancement in the understanding of
signed-measure valued SPDE. In particular, Theorem 3.9 yields that there is a solution
of the SPDE, (3.44). Furthermore, the Hahn-Jordan decomposition is preserved from
the initial signed measure. The proofs of these arguments require using only the
minimal hypotheses of Lipschitz coefficients. Such a result is very desirable from an
application viewpoint. In particular, in 2D fluid dynamics, Theorem 3.9 can be used
to show that the vorticity of the fluid is a conserved quantity.


4 Smooth Stochastic Navier-Stokes

In the previous chapter, the analysis established existence and uniqueness results for a general class of SPDE. As an application of Theorem 2.8, we showed that a solution of the SPDE of the form, (3.44), preserves the Hahn-Jordan decomposition of the initial signed measure. This chapter examines a particular case of (3.44) which has applications to fluid dynamics. We focus on the smoothed stochastic Navier-Stokes equations (SNSE) introduced earlier. Previous results in the literature show existence and uniqueness of solutions for the stochastic Navier-Stokes equations in our context. However, with Theorem 3.9, we establish the new result that the vorticity is conserved pathwise for solutions of the SNSE.

4.1 Formulation

We recall the formulation of the SNSE with smoothed Biot-Savart kernel. The smoothed SNSE are random perturbations of the smoothed Navier-Stokes equations:
$$\frac{\partial}{\partial t} X_\varepsilon(r,t) = \nu\Delta X_\varepsilon(r,t) - \nabla\cdot\bigl(U_\varepsilon(r,t)X_\varepsilon(r,t)\bigr), \qquad (4.1)$$
where
$$U_\varepsilon(r,t) = \int K_\varepsilon(r-q)\, X_\varepsilon(q,t)\, dq \qquad (4.2)$$
and $K_\varepsilon(\cdot)$ is the smoothed form of the Biot-Savart kernel. That is, for $\varepsilon > 0$ and $\varepsilon < |r| < \frac{1}{\varepsilon}$,
$$K_\varepsilon(r) = K(r) := \frac{1}{2\pi|r|^2}(-r_2, r_1),$$
where $r = (r_1,r_2) \in \mathbf{R}^2$ and $K_\varepsilon \in C^2(\mathbf{R}^2,\mathbf{R}^2)$ with bounded derivatives up to order two. Without loss of generality, we can also assume that $K_\varepsilon(0) = 0$.
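As a concrete illustration, here is one numerical realization of such a smoothing. The particular core damping below is our own choice (and is only continuous, not $C^2$); the thesis only requires that $K_\varepsilon$ agree with $K$ on $\varepsilon < |r| < 1/\varepsilon$, have bounded derivatives, and satisfy $K_\varepsilon(0) = 0$:

```python
import math

def biot_savart(r1, r2):
    """Unsmoothed 2D Biot-Savart kernel K(r) = (-r2, r1) / (2 pi |r|^2)."""
    s = 2.0 * math.pi * (r1 * r1 + r2 * r2)
    return (-r2 / s, r1 / s)

def biot_savart_smoothed(r1, r2, eps):
    """An illustrative mollification of K: exact for |r| >= eps, damped inside.

    The damping rule is an assumption for this sketch; any C^2 modification
    with bounded derivatives agreeing with K on eps < |r| < 1/eps would do.
    """
    rho = math.hypot(r1, r2)
    if rho == 0.0:
        return (0.0, 0.0)          # K_eps(0) = 0
    if rho >= eps:
        return biot_savart(r1, r2)
    # Inside the core, replace 1/|r|^2 by rho/eps^3, so the field vanishes at 0
    # and matches K continuously at |r| = eps.
    s = 2.0 * math.pi * eps ** 3 / rho
    return (-r2 / s, r1 / s)

assert biot_savart_smoothed(0.5, 0.0, 0.1) == biot_savart(0.5, 0.0)
assert biot_savart_smoothed(0.0, 0.0, 0.1) == (0.0, 0.0)
```

Note that the kernel is always perpendicular to $r$, reflecting the rotational velocity field a point vortex induces.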

There is extensive literature on random perturbations of the Navier-Stokes equations since the work of [Cho73]. The results of [Cho73], [Lon88], and [GHL90] are more numerical analytic than our focus. Consequently, the discussion of these works in the introduction is sufficient for our purposes. However, the works [MP82], [Kot95], [AX06] and [Ami07] form the background to the stochastic Navier-Stokes equations in this work.

Following [Cho73], the authors of [MP82] approach the SNSE by considering a system of SODE of the form:
$$dr^i(t) = \sum_{k=1}^N a_k K_\varepsilon(r^i(t) - r^k(t))\, dt + \sqrt{2\nu}\, d\beta^i(t), \qquad (4.3)$$
where, for $i = 1,\ldots,N$, the $\beta^i(t)$ are independent two-dimensional Brownian motions.
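System (4.3) can be integrated directly with an Euler-Maruyama scheme. The following sketch is ours, not from [MP82]: the step size, vortex configuration, and the crude cutoff standing in for $K_\varepsilon$ are all illustrative choices.

```python
import math, random

def simulate_vortices(positions, weights, eps, nu, dt, steps, rng):
    """Euler-Maruyama for dr^i = sum_k a_k K_eps(r^i - r^k) dt + sqrt(2 nu) dbeta^i.

    positions: list of (x, y) vortex centers; weights: intensities a_k.
    K_eps is taken as the Biot-Savart kernel cut off inside |r| < eps (illustrative).
    """
    pos = list(positions)
    sd = math.sqrt(2.0 * nu * dt)  # std dev of each Brownian increment
    for _ in range(steps):
        new = []
        for i, (xi, yi) in enumerate(pos):
            ux = uy = 0.0
            for k, (xk, yk) in enumerate(pos):
                dx, dy = xi - xk, yi - yk
                r2 = dx * dx + dy * dy
                if r2 < eps * eps:       # smoothed core (also skips self-interaction)
                    continue
                s = 2.0 * math.pi * r2
                ux += weights[k] * (-dy / s)
                uy += weights[k] * (dx / s)
            new.append((xi + ux * dt + sd * rng.gauss(0, 1),
                        yi + uy * dt + sd * rng.gauss(0, 1)))
        pos = new
    return pos

rng = random.Random(0)
# Two equal co-rotating vortices; with nu = 0 the dynamics are deterministic
# and the weighted centroid sum_k a_k r^k is conserved by antisymmetry of K.
out = simulate_vortices([(-0.5, 0.0), (0.5, 0.0)], [1.0, 1.0],
                        eps=1e-3, nu=0.0, dt=1e-3, steps=100, rng=rng)
assert len(out) == 2
```

Setting `nu > 0` adds the independent Brownian kicks of (4.3); as the remark below explains, this state-independent noise averages out as $N \to \infty$.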


The main consequences from [MP82] are the following continuum-limit results.

For the case $\nu = 0$, (4.3) is deterministic. The Euler equation, (4.9), has a unique solution by [Kat67], which we denote as $X$. The authors establish that there is a sequence, $\{\varepsilon_N\} \subset (0,\infty)$, such that if $X_N$ is the empirical process associated with (4.3) and mollifying constant $\varepsilon_N$, then
$$\langle X_N(t),\varphi\rangle \to \langle X(t),\varphi\rangle,$$
where $\varphi \in C_0^2(\mathbf{R}^2;\mathbf{R})$.

For $\nu > 0$, assume half the weights are $a_i = \frac{a}{N} > 0$ and the other half are equal to $-\frac{a}{N}$. By [GMO88], there exists a solution of the Navier-Stokes equation, (4.9), which we denote $X(t)$. Again, let $X_N$ be the empirical process associated with (4.3) and mollifying constant $\varepsilon_N$. If $\langle EX_N(0),\varphi\rangle \to \langle X(0),\varphi\rangle$, then there exists a sequence, $\{\varepsilon_N\} \subset (0,\infty)$, such that
$$\langle EX_N(t),\varphi\rangle \to \langle X(t),\varphi\rangle,$$
where $\varphi \in C_0^2(\mathbf{R}^2;\mathbf{R})$.


Recall that there are several limitations to using independent Brownian motions as the stochastic term for (4.3). In particular, the noise term is state independent rather than depending on the positions, $r^i(t)$. Indeed, [Kot08] makes the following remark about the work in [MP82]. Applying Itô's formula to the empirical process associated with (4.3) and computing the quadratic variation²⁷ yields
$$d[\langle X_N(t),\varphi\rangle] = \Bigl[\frac{2a}{N}\sum_{j=1}^N \bigl\{\sqrt{2\nu}\,(\nabla\varphi)(r^j(t)) \cdot \beta^j(dt)\bigr\}\Bigr] = 2\nu\,\frac{(2a)^2}{N^2}\sum_{j=1}^N \Bigl\{\sum_{k=1}^2 (\partial_k\varphi)^2(r^j(t))\Bigr\}\, dt,$$
where we used the independence of the $\beta^j(\cdot)$. Hence, for $t \le T$,
$$[\langle X_N(t),\varphi\rangle] = O\Bigl(\frac{1}{N}, \nu, T\Bigr). \qquad (4.4)$$
Thus, the empirical vorticity distribution $X_N(t)$ becomes deterministic as $N \to \infty$. Choosing a sequence $K_{\varepsilon(N)}(r) \to K(r)$ and assuming a suitable convergence of the initial conditions towards the initial condition in (4.9), it should follow that
$$\langle X_N(t),\varphi\rangle \to \langle X(t),\varphi\rangle, \qquad (4.5)$$
where $X(t)$ is the solution to (1.1).

²⁷ In the following, the quadratic variation is denoted by $[\cdot]$.


To address the drawbacks of using independent Brownian motions, [Kot95] introduces the following correlation functionals. For $i,j \in \{1,2\}$, $\tilde{\Gamma}_{ij,\varepsilon}: \mathbf{R}^2 \times \mathbf{R}^2 \to \mathbf{R}$ are symmetric, bounded, Borel measurable functions satisfying
$$\int \tilde{\Gamma}^2_{ii,\varepsilon}(r,p)\, dp = 1, \qquad (4.6)$$
and there is a finite positive constant, $c$, such that for all $r, q \in \mathbf{R}^2$,
$$\int \bigl(\tilde{\Gamma}_{ij,\varepsilon}(r,p) - \tilde{\Gamma}_{ij,\varepsilon}(q,p)\bigr)^2\, dp \le c\,\varrho^2(r,q). \qquad (4.7)$$
Two examples of correlation functionals are the following.

Example 4.1 Choose $\tilde{\Gamma}_{\varepsilon,ii}(r,q) := c_\varepsilon\bigl(1 - e^{-|r-q|^2/4\varepsilon}\bigr)$, where $c_\varepsilon > 0$, for $i = 1,2$.

Example 4.2 Choose $\tilde{\Gamma}_{\varepsilon,ii}(r,q) := \dfrac{1}{\sqrt{2\pi\varepsilon}}\exp\Bigl(-\dfrac{|r-q|^2}{4\varepsilon}\Bigr)$ for $i = 1,2$.
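For the Gaussian functional of Example 4.2, the normalization (4.6) holds in closed form, since the square is a 2D Gaussian density. The sketch below verifies this numerically on a midpoint-rule grid (grid size and window are arbitrary choices of ours):

```python
import math

def gamma_gauss(eps, r, p):
    """Gaussian correlation functional of Example 4.2 (diagonal entry)."""
    d2 = (r[0] - p[0]) ** 2 + (r[1] - p[1]) ** 2
    return math.exp(-d2 / (4.0 * eps)) / math.sqrt(2.0 * math.pi * eps)

def check_normalization(eps, r, half_width=6.0, n=400):
    """Midpoint-rule approximation of condition (4.6): int Gamma^2(r, p) dp."""
    h = 2.0 * half_width / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            p = (r[0] - half_width + (i + 0.5) * h,
                 r[1] - half_width + (j + 0.5) * h)
            total += gamma_gauss(eps, r, p) ** 2 * h * h
    return total

val = check_normalization(eps=0.5, r=(0.0, 0.0))
assert abs(val - 1.0) < 1e-3
```

Because the functional depends only on $r - q$, the integral is the same for every $r$, which is why condition (4.7) then reduces to a translation estimate on the Gaussian.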
Kotelenez defines the following matrix of correlation functionals by
$$\tilde{\Gamma}_\varepsilon(r,p) := \begin{pmatrix} \tilde{\Gamma}_{11,\varepsilon}(r,p) & 0 \\ 0 & \tilde{\Gamma}_{22,\varepsilon}(r,p) \end{pmatrix}. \qquad (4.8)$$
The author considers the following stochastic Navier-Stokes equation:
$$dX_\varepsilon(t) = \bigl[\nu\Delta X_\varepsilon - \nabla\cdot(U_\varepsilon X_\varepsilon)\bigr]\, dt - \sqrt{2\nu}\,\nabla\cdot\Bigl(X_\varepsilon \int \tilde{\Gamma}_\varepsilon(\cdot,p)\, w(dp,dt)\Bigr), \qquad (4.9)$$

and the associated SODE system. For $i = 1,\ldots,N$,
$$dr^i(t) = \sum_{k=1}^N a_k K_\varepsilon(r^i(t) - r^k(t))\, dt + \sqrt{2\nu}\int \tilde{\Gamma}_\varepsilon(r^i(t),p)\, w(dp,dt), \qquad (4.10)$$
where $a_k \in \mathbf{R}$ for $k = 1,\ldots,N$. The following conservation condition is taken as a requirement in [Kot95]. Fix $a^+, a^- > 0$, and define $a = a^+ + a^-$. A solution of (4.9) must satisfy
$$\mu \in \mathcal{M}_{f,s} \text{ with } \mu^{\pm}(\mathbf{R}^2) = a^{\pm}, \text{ where } a^+, a^- > 0. \qquad (4.11)$$
With these assumptions, the author in [Kot95] shows that the smoothed stochastic Navier-Stokes equations have a solution for any adapted initial signed measure.
Finally, we discuss the works of [Ami00], [Ami07] and [AX06]. Recall that [Ami00] generalized (4.10) so that the vorticity could include jump processes driven by Poisson random measures. In [Ami07], the author reexamines this work and presents the results in the case of only positive measures, due to completeness issues on the space of signed measures. Finally, Amirdjanova derives a diffusion approximation to the vorticity model with jump processes. In [AX06], the authors verify a form of exponential tightness for the stochastic vorticity. Such a result is then used to derive a macroscopic limit theorem. That is, the authors show that as the magnitude of the stochastic term in (4.10) tends to zero, the solutions of the stochastic equations tend to a solution of a deterministic equation.
With these results in mind, we now state the problem we wish to analyze in this work. Following [Kot95], we use the same SODE system and conditions as in (4.10) to analyze the SNSE. The issue of existence and uniqueness for the smoothed stochastic Navier-Stokes equations follows from general results. The existence and uniqueness results from Chapter 3 yield solutions to the SNSE. What is of greater interest is the issue of conservation of vorticity. Recall that if $X$ satisfies (1.1), it is natural to expect that the vorticity is conserved along particle paths. Thus, one has that
$$X^{\pm}(\mathbf{R}^2, t, X_0) = X^{\pm}(\mathbf{R}^2, 0, X_0) = X_0^{\pm}(\mathbf{R}^2) = a^{\pm},$$
where $X_0$ is the initial signed measure. As in [Kot95], we make the assumption throughout the rest of this chapter that a solution, $\mu$, of the SNSE, (4.9), must satisfy
$$\mu^{\pm}(\mathbf{R}^2) = a^{\pm}, \qquad (4.12)$$
where $a^+, a^- > 0$ and $a = a^+ + a^-$.
To establish the vorticity claim, we must first establish existence and uniqueness for (4.10). However, as we will soon show, (4.9) is a special form of the general SPDE from Chapter 3. Consequently, we apply the results from the previous chapter to the SNSE.

Theorem 4.3 To each $\mathcal{F}_0$-adapted initial condition $q^N(0) \in \mathbf{R}^{2N}$, (4.10) has a unique $\mathcal{F}_t$-adapted solution, $q^N(\cdot) \in C([0,\infty); \mathbf{R}^{2N})$ a.s.

Proof: By Proposition 3.2, it suffices to verify that the coefficients satisfy the desired Lipschitz and boundedness conditions, (3.3)-(3.5) or (3.6)-(3.8). Clearly, the boundedness properties follow from the definition of the correlation functionals and the boundedness of $K_\varepsilon$. For the Lipschitz conditions, if $\{\varphi_n\}_{n\in\mathbf{N}}$ is a complete orthonormal system in $L_2(\mathbf{R}^2)$, then define
$$\phi_n := \begin{pmatrix} \varphi_n & 0 \\ 0 & \varphi_n \end{pmatrix}.$$

We have
$$\int_0^t\!\!\int \tilde{\Gamma}_\varepsilon(r,p)\, w(dp,ds) = \sum_{n=1}^{\infty} \int \tilde{\Gamma}_\varepsilon(r,p)\varphi_n(p)\, dp\, d\beta^n(t),$$
where $\beta^n(t) := \int_0^t\!\int \phi_n(p)\, w(dp,ds)$ are i.i.d. $\mathbf{R}^2$-valued standard Brownian motions by [Kot08]. It follows from the definition of the correlation functionals that
$$\sum_{n=1}^{\infty} \Bigl(\int \bigl(\tilde{\Gamma}_\varepsilon(r,p) - \tilde{\Gamma}_\varepsilon(q,p)\bigr)\varphi_n(p)\, dp\Bigr)^2 = \int \bigl(\tilde{\Gamma}_\varepsilon(r,p) - \tilde{\Gamma}_\varepsilon(q,p)\bigr)^2\, dp \le c\,\varrho^2(r,q).$$
Now, we note that the drift coefficient can be represented by $F(X_N(t),r): \mathcal{M}_{f,s} \times \mathbf{R}^2 \to \mathbf{R}^2$, where $F(\mu,r) := (K_\varepsilon * \mu)(r)$ and $*$ denotes convolution. For $\mu_1, \mu_2 \in \mathcal{M}_{f,s}$ and $r_1, r_2 \in \mathbf{R}^2$, we have the following:
$$\begin{aligned}
|F(\mu_1,r_1) - F(\mu_2,r_2)| &\le |F(\mu_1,r_1) - F(\mu_1,r_2)| + |F(\mu_1,r_2) - F(\mu_2,r_2)| \\
&\le \Bigl|\int K_\varepsilon(r_1-q)\,\bar{\mu}_1(dq) - \int K_\varepsilon(r_2-q)\,\bar{\mu}_1(dq)\Bigr| + \Bigl|\int K_\varepsilon(r_2-q)\,\bigl(\mu_1(dq) - \mu_2(dq)\bigr)\Bigr| \\
&\le c_\varepsilon \int \varrho(r_1-q, r_2-q)\,\bar{\mu}_1(dq) + c_{K_\varepsilon}\,\gamma_f(\bar{\mu}_1,\bar{\mu}_2) \\
&\le c_{K_\varepsilon}\bigl(a\,\varrho(r_1,r_2) + \gamma_f(\bar{\mu}_1,\bar{\mu}_2)\bigr),
\end{aligned}$$
where $\bar{\mu} = (\mu^+,\mu^-)$, $\mu^+, \mu^-$ is the Hahn-Jordan decomposition of $\mu$, and we have used the assumption that $K_\varepsilon$ is bounded with bounded derivatives.

4.2 New Results

The next result is the first step in showing conservation of vorticity for solutions
of (4.9). It shows that the empirical process associated with (4.10) is a solution
of (4.9). To show conservation of vorticity, we provide a similar argument as for
extension by continuity. We show that the vorticity is conserved if the initial signed
measure is discrete. After establishing this result, the general case will follow by an
approximation result.
Theorem 4.4
i. The empirical process, $X_N(t)$, with Hahn-Jordan decomposition $X_N^{\pm}(t)$, is a weak solution of (4.9).
ii. (Conservation of Vorticity for Discrete Initial Signed Measures)
$$X_N^{\pm}(\mathbf{R}^2,t) = X_N^{\pm}(\mathbf{R}^2,0) = a^{\pm} \quad \text{a.s. for all } t \ge 0.$$

Proof:
i. Note that the one-particle diffusion matrix associated with (4.10) is given by the following. For $i,j \in \{1,\ldots,N\}$ and $k,l = 1,2$,
$$D(r^i,\mu,t)_{kl} := \tilde{D}(r^i,r^i,\mu,t)_{kl} = 2\nu\sum_{q=1}^2 \int \tilde{\Gamma}_{kq,\varepsilon}(r^i,p)\,\tilde{\Gamma}_{lq,\varepsilon}(r^i,p)\, dp. \qquad (4.13)$$
Since $\tilde{\Gamma}_\varepsilon$ is diagonal, this expression is $0$ if $k \neq l$. Otherwise, it is given by
$$D(r^i,\mu,t)_{kk} = 2\nu\int \tilde{\Gamma}^2_{kk,\varepsilon}(r^i,p)\, dp. \qquad (4.14)$$
Applying (4.6) yields that
$$D(r^i,\mu,t) = 2\nu I_2,$$
where $I_2 \in \mathbf{M}^{2\times 2}$ is the identity matrix. The statement now follows immediately from Proposition 3.3.
ii. In Theorem 4.3, we verified that the coefficients of the SODE system satisfy the Lipschitz conditions. By Proposition 3.5, it follows that the particles $r^i(t)$, $r^j(t)$ for $i \neq j$ do not collide in finite time. Consequently, the Hahn-Jordan decomposition of the empirical process must remain of the following form:
$$X_N^+(t) = \sum_{a_i>0} a_i\,\delta_{r^i(t)}, \qquad X_N^-(t) = -\sum_{a_i<0} a_i\,\delta_{r^i(t)}.$$
These measures must remain on disjoint supports. Consequently, we have the claim.
We now extend the result of Theorem 4.4 to arbitrary adapted initial conditions. The existence of a weak solution for (4.9) is shown in [Kot95] and [Kot08]. However, due to its importance to the argument for conservation of vorticity, we provide the details according to the previous results in the thesis.

Theorem 4.5
i. There is a weak solution of the SNSE, (4.9), with initial condition, $X_0$, and Hahn-Jordan decomposition, $X_0^{\pm}$.
ii. (Conservation of Total Vorticity)
$$X^{\pm}(\mathbf{R}^2,t,X_0) = X^{\pm}(\mathbf{R}^2,0,X_0) = a^{\pm} \quad \text{a.s.} \qquad (4.15)$$

Proof: In Theorem 4.3, it was shown that the coefficients of (4.10) satisfy the Lipschitz and boundedness properties. By Theorem 4.4, it follows that for discrete initial signed measures, (4.9) has a weak solution. Consequently, by Theorem 3.9, we have a weak solution of (4.9) for arbitrary initial signed measures.

To show conservation of vorticity for arbitrary initial signed measures, we employ the flow representation of the solution, $X(\cdot)$, from Theorem 3.9:
$$X(t) := X(t,X,X_0) = \int \delta_{r(t,\cdot,X,q)}\, X_0(dq). \qquad (4.16)$$
Further,
$$X^{\pm}(t) := X^{\pm}(t,X,X_0^{\pm}) = \int \delta_{r(t,\cdot,X,q)}\, X_0^{\pm}(dq). \qquad (4.17)$$
Choose a sequence of discrete signed measures, $\{X_{N,0}\}_{N\ge 1} \subset \mathcal{M}_{f,s,d}(\mathbf{R}^2)$, such that $X_{N,0} \to X_0$ in $\gamma_f$. Recall that (4.9) is a special case of the general class of SPDE analyzed in the previous chapter. Consequently, one can apply (3.45) to the solutions of the SNSE to conclude:
$$E\sup_{0\le t\le T} \gamma_{f,s}^2\bigl(X(t,X_{M,0}), X(t,X_{N,0})\bigr) \le c_{T,F,\mathcal{J}}\, E\gamma_f^2(X_{N,0}, X_{M,0}). \qquad (4.18)$$
By passing to a subsequence and relabeling indices if necessary, it follows that $X(\cdot,X_{N,0})$ is a Cauchy sequence for $\gamma_{f,s}$. Furthermore, it must converge by the existence of the solution $X(\cdot,X_0)$. By Theorem 2.8, we have that the limit, $X(\cdot,X_0)$, must be in Hahn-Jordan form. The Hahn-Jordan positive and negative vorticities are conserved for the discrete case by Theorem 4.4, and we have conservation of vorticity, (4.15).
Thus, we have established that the vorticity is a conserved quantity for the smoothed stochastic Navier-Stokes equations. The arguments generated by proving this claim also have important applications to understanding Hahn-Jordan decompositions for fluids. Such a conclusion represents an important advancement in fluid dynamics. Furthermore, the results established in Theorem 4.5 also establish the existence and uniqueness of solutions to the smoothed stochastic Navier-Stokes equations.


5 Conclusion

The new research presented in this thesis provides significant advancements to the fields of fluid dynamics and stochastic differential equations. The advancements, while in diverse areas, followed a specific trend of theoretical development and application. We briefly recall the main results.
In chapter two, the analysis focuses on the incompleteness of certain metrics on
the signed measures. The first important result involves identifying a relationship between the Kantorovich-Rubinstein metric and quotient spaces. Recall
that one can identify the signed measures with a quotient space on the space of
product measures. With this identification, we define a quotient-type metric on
the signed measures. The main result involves completeness and this quotient-type metric. Although the metric does not satisfy completeness on the signed
measures, it satisfies a very useful partial-completeness result. One can use the
convergence properties of the metric to conclude a limit is in Hahn-Jordan form.
In chapter three, we introduce a general class of signed-measure valued SPDE
and their associated SODE system. The focus of this thesis was the role of the
Hahn-Jordan decomposition for solutions. The quotient-metric from the previous chapter yields a powerful new result for the Hahn-Jordan decomposition.
Applying the quotient-metric and the product metric in conjunction yields that
the Hahn-Jordan decomposition of the initial signed measure is preserved in solutions. This represents a significant advancement as previous results assumed
smoothness of the coefficients of the SODE. From a stochastic viewpoint, the
result is ideal as the conditions are the same as those required for existence and
uniqueness.


In chapter four, the general results from chapter three apply to the smoothed SNSE. In the general literature, questions of conservation of certain quantities related to fluid dynamics naturally arise. Of great importance is the question of the conservation of vorticity. With the analysis of the Hahn-Jordan decomposition established in the previous chapter, we show that the vorticity of a fluid is conserved. As this question is important from both the fluid dynamic and stochastic analytic approaches, this advancement in the fields is a significant one.
Each of these results provides a significant advancement to the current research in the fields of fluid dynamics and stochastic partial differential equations. However, with these advancements, new questions arise that one would like to answer. Although numerous paths are possible, we discuss only the following, very important, direction of future work.
In order to show that a solution of the SNSE exists, one must solve the singular vortex SODE with unsmoothed kernel:
$$dr^i(t) = \sum_{j=1}^N a_j K(r^i(t) - r^j(t))\, dt + \sqrt{2\nu}\int \tilde{\Gamma}_\varepsilon(r^i(t),p)\, w(dp,dt), \qquad (5.1)$$
with $r^i(s) = r_s^i$ for $i = 1,\ldots,N$. To show that solutions of (5.1) exist on $[s,\infty)$, one needs to show that the point vortices satisfying (5.1) do not coalesce. Several approaches to (5.1) use a stochastic driver of independent Brownian motions instead of the correlation functionals, $\tilde{\Gamma}_\varepsilon$. The works of [Tak85], [Osa85], and [FM07] show that point vortices satisfying (5.1) never coalesce under various conditions on the vortex intensities, $a_j$. Takanobu shows the result when all the intensities, $a_j$, have the same sign in [Tak85]. Osada shows the result for arbitrary intensities, $a_j$, in [Osa85], but must employ abstract techniques from PDE theory that disguise the physical interpretation. Yet, [FM07] employs a technique that preserves the physical interpretation, called path-clustering. To use this technique, [FM07] must have vortex intensities that satisfy the following: for all $I \subseteq \{1,\ldots,N\}$, $\sum_{i\in I} a_i \neq 0$.

The results of [FM07] show potential for extension to the case where correlation
functionals drive (5.1). To generalize the arguments of [FM07], one must address two
issues.
The stochastic flow is the mapping that sends an initial condition, x R2N ,
to the solution of (5.1) with this initial condition. In the case of independent
Brownian motions, the stochastic flow is differentiable, and the Jacobian has
a determinant that is identically 1. In the case of correlation functionals, the
determinant is no longer identically 1, and must be controlled to use the results
of [FM07].
To generate solutions of (5.1), one uses solutions of (4.10) with smoothing
parameter and lets 0 in a certain sense. In the case of independent
Brownian motions as drivers, the diffusion term is independent of . Yet, in the
case of the correlation functionals, the diffusion term depends on . This fact
presents a difficulty in adapting the arguments from [FM07].
Suppose that one can address these issues and prove that a solution of (5.1) exists on $[s,\infty)$. Itô's Formula can then be applied to yield solutions to the stochastic Navier-Stokes equations for discrete initial signed measures. However, as in chapters three and four, it is the continuum limit that poses the most difficulty. A priori estimates must be derived for the singular Biot-Savart kernel, which presents a significant barrier to the general solution of the stochastic Navier-Stokes equations. Consequently, if one can establish these estimates, then one can solve the true stochastic Navier-Stokes equations. This would be a significant advancement for the fields of fluid dynamics and stochastic analysis.


6 Appendix

In this appendix, we collect many of the standard results and notations needed for the development of the new results. We separate the statements into two categories: Fluid Dynamics and Stochastics.

6.1 Fluid Dynamics

Lemma 6.1 Let $r(t,r_0)$ be a path of a fluid particle in $\mathbf{R}^2$ under the Euler equations,
$$\frac{\partial}{\partial t} X(r,t) = -\nabla\cdot\bigl(U(r,t)X(r,t)\bigr), \qquad X(r,t) = \operatorname{curl} U(r,t) = \frac{\partial U_2}{\partial r_1} - \frac{\partial U_1}{\partial r_2}, \qquad \nabla\cdot U \equiv 0, \qquad (6.1)$$
with position $r_0$ at $t = 0$. Here $U(r,t) = (U_1(r,t), U_2(r,t))^T$ is the velocity field, $X(r,t)$ is the vorticity, $\nabla$ is the gradient and $\cdot$ is the inner product on $\mathbf{R}^2$. Then, for all $t \ge 0$, it follows that
$$X(r(t,r_0),t) = X(r_0,0). \qquad (6.2)$$

Proof: Define $f: [0,\infty) \to \mathbf{R}$ by $f(t) = X(r(t,r_0),t)$, and compute the ordinary derivative. As $U(r,t)$ is the velocity field for the path of the fluid particle, it follows that
$$\begin{aligned}
\frac{df}{dt} &= \Bigl[\frac{\partial X}{\partial t} + (\nabla X)\cdot U\Bigr]\Big|_{r(t,r_0)} \\
&= \Bigl[\frac{\partial X}{\partial t} + (\nabla\cdot U)X + (\nabla X)\cdot U\Bigr]\Big|_{r(t,r_0)} \\
&= \Bigl[\frac{\partial X}{\partial t} + \nabla\cdot(UX)\Bigr]\Big|_{r(t,r_0)} = 0.
\end{aligned}$$
Thus, (6.2) follows immediately.
The following result establishes exactly how the Biot-Savart kernel enters into the analysis of the Navier-Stokes equations.

Lemma 6.2 Assume that $U(r,t)$ is the velocity field and $X(r,t)$ is the vorticity associated with an incompressible fluid in $\mathbf{R}^2$ satisfying:
$$\frac{\partial}{\partial t} X(r,t) = \nu\Delta X(r,t) - \nabla\cdot\bigl(U(r,t)X(r,t)\bigr), \qquad X(r,t) = \operatorname{curl} U(r,t) = \frac{\partial U_2}{\partial r_1} - \frac{\partial U_1}{\partial r_2}, \qquad \nabla\cdot U \equiv 0. \qquad (6.3)$$
Then, the velocity can be written explicitly in terms of the vorticity as follows:
$$U(r,t) = \int K(r-q)\, X(q,t)\, dq, \qquad (6.4)$$
where $K(\cdot)$ is the Biot-Savart kernel, which for $r = (r_1,r_2) \in \mathbf{R}^2$ is given by:
$$K(r) = \nabla^{\perp}\frac{1}{2\pi}\ln(|r|) = \frac{1}{2\pi|r|^2}(-r_2, r_1), \qquad (6.5)$$
where $\nabla^{\perp} = \bigl(-\frac{\partial}{\partial r_2}, \frac{\partial}{\partial r_1}\bigr)$ and $|r|^2 = r_1^2 + r_2^2$.
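The identity in (6.5), $K = \nabla^{\perp}\bigl(\frac{1}{2\pi}\ln|r|\bigr)$, can be checked numerically by finite differences (the evaluation points and step size below are arbitrary choices):

```python
import math

def green(r1, r2):
    """2D Green's function G(r) = (1 / 2 pi) ln|r|."""
    return math.log(math.hypot(r1, r2)) / (2.0 * math.pi)

def biot_savart(r1, r2):
    """Biot-Savart kernel K(r) = (-r2, r1) / (2 pi |r|^2)."""
    s = 2.0 * math.pi * (r1 * r1 + r2 * r2)
    return (-r2 / s, r1 / s)

def grad_perp(f, r1, r2, h=1e-6):
    """Central-difference approximation of (-d/dr2, d/dr1) f at (r1, r2)."""
    return (-(f(r1, r2 + h) - f(r1, r2 - h)) / (2 * h),
            (f(r1 + h, r2) - f(r1 - h, r2)) / (2 * h))

k_exact = biot_savart(0.3, -0.7)
k_num = grad_perp(green, 0.3, -0.7)
assert all(abs(a - b) < 1e-6 for a, b in zip(k_exact, k_num))
```

This is exactly the structure used in the proof below: the stream function is the Green's function convolved with the vorticity, and the velocity is its perpendicular gradient.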

Proof: Note that from the incompressibility condition, it follows that $U(r,t)$ is a divergence-free vector field in two dimensions. Consequently, from [Bar11], there must exist a differentiable stream function $\psi: \mathbf{R}^2 \times [0,\infty) \to \mathbf{R}$ so that
$$U(r,t) = \Bigl(-\frac{\partial\psi}{\partial r_2}, \frac{\partial\psi}{\partial r_1}\Bigr).$$
Since $X(r,t) = \operatorname{curl} U(r,t)$, this implies
$$X(r,t) = \Delta\psi.$$
By standard Green's function results, such as [Eva10], $\psi(\cdot) = G * X(\cdot)$, where $G(r) = \frac{1}{2\pi}\ln|r|$ for $r \in \mathbf{R}^2$ and $*$ denotes convolution. Consequently,
$$U(r,t) = \int K(r-q)\, X(q,t)\, dq. \qquad (6.6)$$

6.2 Stochastics

Let $(S,\varrho)$ be a separable metric space with metric bounded by 1. Recall that the definition of the Wasserstein distance is the following:
$$W_1(\mu,\nu) := \inf_{Q\in C(\mu,\nu)} \int \varrho(r,q)\, Q(dr,dq), \qquad (6.7)$$
where $C(\mu,\nu)$ is the set of joint distributions of the probability measures $\mu, \nu \in \mathcal{P}_1(S)$ (the set of probability measures on $S$).
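For empirical measures on the real line, the optimal coupling in (6.7) pairs the sorted samples, so $W_1$ has a simple closed form, and any 1-Lipschitz test function gives the dual lower bound of the Kantorovich-Rubinstein theorem below. A small sketch (the restriction to equally weighted atoms in $[0,1]$ is ours, so the metric $|x-y|$ is bounded by 1):

```python
def w1_empirical(xs, ys):
    """W1 between two empirical measures with equal atom counts on [0, 1].

    In one dimension the optimal coupling pairs the sorted samples, which
    realizes the infimum over joint distributions in (6.7).
    """
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

def dual_lower_bound(xs, ys, f):
    """|int f d(mu - nu)| for a 1-Lipschitz f: a lower bound on W1 by duality."""
    n = len(xs)
    return abs(sum(f(x) for x in xs) / n - sum(f(y) for y in ys) / n)

mu = [0.1, 0.4, 0.9]
nu = [0.2, 0.5, 0.7]
w = w1_empirical(mu, nu)
assert abs(w - 0.4 / 3) < 1e-12

lb = dual_lower_bound(mu, nu, lambda x: abs(x - 0.6))
assert lb <= w + 1e-12   # every 1-Lipschitz f gives a lower bound...
assert abs(lb - w) < 1e-9  # ...and this particular f attains it
```

The equality of the supremum over Lipschitz functions and the coupling infimum is exactly the content of Theorem 6.3.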
We begin with the following important theorem and its proof.


Theorem 6.3 (Kantorovich-Rubinstein Theorem) For $\mu, \nu \in \mathcal{P}_1(S)$, define
$$\gamma_f(\mu,\nu) := \sup_{\|f\|_{L,\infty}\le 1} \Bigl|\int f(r)\, (\mu-\nu)(dr)\Bigr|. \qquad (6.8)$$
Then, $\gamma_f(\mu,\nu) = W_1(\mu,\nu)$.

Proof: The following arguments are those of [Dud02] and [Pan08]. The result is first shown in the case where the supremum in the definition of $\gamma_f$ is taken over $\|f\|_L \le 1$. Use $\gamma$ to denote (6.8) in this case. This is the original statement of the Kantorovich-Rubinstein Theorem. The result is proved by a sequence of intermediate lemmas. Note that the sets in the definitions of $\gamma$ and $\gamma_f$ are invariant under multiplication by $-1$. Consequently, one can remove the absolute value in their definitions.
Lemma 6.4 Define the following term for $\mu, \nu \in \mathcal{P}_1(S)$:
$$m_\varrho(\mu,\nu) := \sup\Bigl\{\int f(x)\, d\mu(x) + \int g(y)\, d\nu(y) : f,g \in C(S,\mathbf{R}),\ f(x) + g(y) < \varrho(x,y)\Bigr\}. \qquad (6.9)$$
Then, $m_\varrho(\mu,\nu) = \gamma(\mu,\nu)$.

Proof: Let $f \in C_L(S,\mathbf{R})$ satisfy $\|f\|_L \le 1$, and pick $\delta > 0$. Define $g(y) = -f(y) - \delta$. Then
$$f(x) + g(y) = f(x) - f(y) - \delta \le \varrho(x,y) - \delta < \varrho(x,y).$$
Thus, $f$ and $g$ satisfy the conditions for $m_\varrho(\mu,\nu)$, and we have the following relation:
$$\int f(x)\,\mu(dx) + \int g(y)\,\nu(dy) = \int f(x)\,\mu(dx) - \int f(y)\,\nu(dy) - \delta.$$
This implies that
$$\int f(x)\,\bigl(\mu(dx) - \nu(dx)\bigr) \le \sup\Bigl\{\int f(x)\,\mu(dx) + \int g(y)\,\nu(dy) : f(x) + g(y) < \varrho(x,y)\Bigr\} + \delta,$$
and, so,
$$\gamma(\mu,\nu) \le m_\varrho(\mu,\nu).$$
Now, let $f, g \in C(S,\mathbf{R})$ be such that $f(x) + g(y) < \varrho(x,y)$. Define
$$e(x) = \inf_y\bigl(\varrho(x,y) - g(y)\bigr) = -\sup_y\bigl(g(y) - \varrho(x,y)\bigr).$$
This implies
$$f(x) \le e(x), \qquad g(x) \le -e(x),$$
and
$$\int f(x)\,\mu(dx) + \int g(y)\,\nu(dy) \le \int e(x)\,\mu(dx) - \int e(y)\,\nu(dy).$$
The function $e(\cdot)$ also satisfies the following:
$$e(x) - e(z) = -\sup_y\bigl(g(y) - \varrho(x,y)\bigr) + \sup_y\bigl(g(y) - \varrho(z,y)\bigr) \le \sup_y\bigl(\varrho(x,y) - \varrho(z,y)\bigr) \le \varrho(x,z).$$
Thus, $\|e\|_L \le 1$, and the other inequality is established.


Lemma 6.5 If $(S,\varrho)$ is a compact metric space and $\mu, \nu \in \mathcal{P}_1(S)$, then $W_1(\mu,\nu) = m_\varrho(\mu,\nu)$.

Proof: Denote by $V = C(S\times S;\mathbf{R})$ the continuous functions equipped with the infinity norm, $\|\cdot\|_\infty$, and set
$$U = \{f \in V : f(x,y) < \varrho(x,y)\}.$$
Note that $U$ is convex and open because $S\times S$ is compact. Define the subspace $E$ of $V$ as
$$E = \{\eta \in V : \eta(x,y) = f(x) + g(y) \text{ where } f,g \in C(S;\mathbf{R})\}.$$
This implies that
$$U \cap E = \{f(x) + g(y) < \varrho(x,y)\}.$$
One can define a linear functional $r$ on $E$ by
$$r(\eta) = \int f(x)\,\mu(dx) + \int g(y)\,\nu(dy), \qquad \text{where } \eta(x,y) = f(x) + g(y).$$
So, by the Hahn-Banach Theorem, one can extend $r$ to $\tilde{r}: V \to \mathbf{R}$ such that $\tilde{r}|_E = r$ and
$$\sup_{U} \tilde{r}(\eta) = \sup_{U\cap E} r(\eta) = m_\varrho(\mu,\nu).$$
Note that if $a(x,y) \ge 0$ and $c \ge 0$, then $\eta(x,y) - ca(x,y) < \varrho(x,y)$ for $\eta \in U$. Thus, for arbitrary $c \ge 0$:
$$\tilde{r}(\eta - ca) = \tilde{r}(\eta) - c\,\tilde{r}(a) \le \sup_U \tilde{r} < \infty.$$
Yet, the above holds for all $c \ge 0$ only if $\tilde{r}(a) \ge 0$. So, $\tilde{r}$ is a positive linear functional on the compact space $S\times S$. By the Riesz Representation Theorem, there exists a Borel measure, $\tilde{\mu} \in \mathcal{M}_f(S\times S)$, such that
$$\tilde{r}(f) = \int f(x,y)\,\tilde{\mu}(dx,dy).$$
Recall that $\tilde{r}|_E = r$, so one has that
$$\int \bigl(f(x)+g(y)\bigr)\,\tilde{\mu}(dx,dy) = \int f(x)\,\mu(dx) + \int g(y)\,\nu(dy).$$
Consequently, $\tilde{\mu} \in C(\mu,\nu)$, and
$$m_\varrho(\mu,\nu) = \sup_U \tilde{r}(\eta) = \sup_{f(x,y)<\varrho(x,y)} \int f(x,y)\,\tilde{\mu}(dx,dy) = \int \varrho(x,y)\,\tilde{\mu}(dx,dy) \ge W_1(\mu,\nu).$$
The other inequality, $m_\varrho \le W_1$, follows immediately, as for any $\tilde{\mu} \in C(\mu,\nu)$,
$$\int f(x)\,\mu(dx) + \int g(y)\,\nu(dy) = \int \bigl(f(x)+g(y)\bigr)\,\tilde{\mu}(dx,dy) \le \int \varrho(x,y)\,\tilde{\mu}(dx,dy).$$
Furthermore, it is clear from the last inequality that $\gamma(\mu,\nu) \le W_1(\mu,\nu)$.


Lemma 6.6 If (S, ρ) is separable and ν ∈ P1(S), then there is a sequence {ν_n} ⊂ P1 such that there are finite sets F_n with ν_n(F_n) = 1, W1(ν_n, ν) → 0 and γ_ρ(ν_n, ν) → 0.

Proof: Since (S, ρ) is separable, for n ≥ 1 there exists a partition of S into {S_{n,k}}_{k≥1} such that diam(S_{n,k}) ≤ 1/n. Further, we may assume each set in the partition is non-empty, as we can remove empty sets. Let x_{n,k} ∈ S_{n,k}, and define the following:

f_{n,k}(x) = x_{n,j}  if x ∈ S_{n,j} for some j ≤ k,  and  f_{n,k}(x) = x_{n,1}  otherwise.

Then, it follows that for large enough k,

∫ ρ(x, f_{n,k}(x)) ν(dx) = Σ_{j≥1} ∫_{S_{n,j}} ρ(x, f_{n,k}(x)) ν(dx)
≤ (1/n) Σ_{j≤k} ν(S_{n,j}) + ∫_{S \ (S_{n,1} ∪ ··· ∪ S_{n,k})} ρ(x, x_{n,1}) ν(dx)
≤ 2/n.

Define ν_n ∈ P1(S) as the push-forward measure of ν under f_{n,k}, which is concentrated on the finite set {x_{n,1}, . . . , x_{n,k}}, and let η_{n,k} be the push-forward measure of ν under the map x ↦ (f_{n,k}(x), x), so that η_{n,k} ∈ C(ν_n, ν):

W1(ν_n, ν) ≤ ∫ ρ(x, y) η_{n,k}(dx, dy) = ∫ ρ(f_{n,k}(x), x) ν(dx) ≤ 2/n.

As we remarked before, γ_ρ(ν_n, ν) ≤ W1(ν_n, ν). Consequently the proof is complete.


Finally, we can prove the Kantorovich-Rubinstein Theorem with the above lemmas. Since (S, ρ) is a separable metric space, for μ, ν ∈ P1(S) there exist μ_n, ν_n concentrated on compact sets such that μ_n → μ, ν_n → ν in both γ_ρ and W1. Since these metrics agree on compact sets, one has that W1(μ_n, ν_n) = γ_ρ(μ_n, ν_n). Thus,

W1(μ, ν) ≤ W1(μ, μ_n) + W1(μ_n, ν_n) + W1(ν_n, ν)
= W1(μ, μ_n) + γ_ρ(μ_n, ν_n) + W1(ν_n, ν)
≤ W1(μ, μ_n) + γ_ρ(μ_n, μ) + W1(ν_n, ν) + γ_ρ(ν, ν_n) + γ_ρ(μ, ν).

Letting n → ∞ shows that W1(μ, ν) ≤ γ_ρ(μ, ν). The other inequality was established previously, so we have equality in the case of γ_ρ.

To show the claim with γ_f, we must show that

γ_f(μ, ν) = sup_{‖f‖_{L,∞} ≤ 1} ∫ f(q) (μ(dq) − ν(dq))

and

γ_ρ(μ, ν) = sup_{‖f‖_L ≤ 1} ∫ f(q) (μ(dq) − ν(dq))

agree. Clearly, γ_f(μ, ν) ≤ γ_ρ(μ, ν). Now, for an arbitrary constant c,

γ_ρ(μ, ν) = sup_{‖f‖_L ≤ 1} [ ∫ (f(q) − c) (μ(dq) − ν(dq)) + c (μ(S) − ν(S)) ] = sup_{‖f‖_L ≤ 1} ∫ (f(q) − c) (μ(dq) − ν(dq)),

as both μ, ν are probability measures. Consequently, one can choose without loss of generality f(x_0) = 0 in the definition of γ_ρ(μ, ν), where x_0 is a fixed origin in S. Yet, a function f with Lipschitz constant bounded by 1 and f(x_0) = 0 satisfies

sup_r |f(r)| ≤ sup_r ρ(r, x_0) ≤ 1,

as ρ is a metric bounded by 1. This implies that ‖f‖_∞ ≤ 1 if ‖f‖_L ≤ 1 and f(x_0) = 0. Consequently, we obtain that γ_f(μ, ν) = γ_ρ(μ, ν) = W1(μ, ν) for all μ, ν ∈ P1(S).
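The equality of the transport (primal) and Lipschitz-dual formulations can be checked by brute force in the simplest non-trivial setting, a two-point metric space, where both the couplings and the dual test functions are one-parameter families. The weights below are illustrative:

```python
# Two-point space {x, y} with rho(x, y) = 1 (a metric bounded by 1).
# mu = (p, 1-p) and nu = (q, 1-q) are probability vectors.
rho_xy = 1.0
p, q = 0.7, 0.25

# Primal side: a coupling is determined by its mass c on (x, x), with
# c in [max(0, p+q-1), min(p, q)]; the cost is rho times the off-diagonal mass.
lo, hi = max(0.0, p + q - 1.0), min(p, q)
grid = [lo + k * (hi - lo) / 1000 for k in range(1001)]
w1 = min(((p - c) + (q - c)) * rho_xy for c in grid)

# Dual side: normalizing f(y) = 0, feasibility reduces to |f(x)| <= rho_xy.
dual_grid = [-rho_xy + k * (2 * rho_xy) / 1000 for k in range(1001)]
gamma = max(u * (p - q) for u in dual_grid)
```

Both sides come out as |p − q| · ρ(x, y), matching the closed form for two-point spaces.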
Theorem 6.7 Assume (S, ρ) is a complete, separable metric space with a bounded metric and a countably dense set T. The set of discrete, finite, Borel measures

M_{f,d} := { μ := Σ_{i=1}^N a_i δ_{t_i} : N ∈ ℕ, t_i ∈ T and a_i ∈ [0, ∞) ∩ ℚ, i = 1, . . . , N }    (6.10)

is dense in (M_f(S), γ_f). Similarly, the set of discrete, finite, Borel, signed measures

M_{f,s,d} := { μ := Σ_{i=1}^N a_i δ_{t_i} : N ∈ ℕ, t_i ∈ T and a_i ∈ ℚ, i = 1, . . . , N }    (6.11)

is dense in (M_{f,s}(S), γ_f).


Proof: We show the claim for the measures as the argument for the other case is analogous. Let ε > 0 and μ ∈ M_f(S). We may assume that μ is not the 0 measure. Since (S, ρ) is separable, there is a countable set {x_n}_{n∈ℕ} ⊂ T such that S = ∪_{n∈ℕ} B_ε(x_n), where B_ε(x_n) is the ball of radius ε centered at x_n. Define B̃_n := B_ε(x_n) \ (∪_{i=1}^{n−1} B_ε(x_i)), which partitions S into disjoint sets. This allows one to create the well-defined map f_ε : S → S which sends each element in B̃_n to x_n.

Now, define the measure ν ∈ M_f(S) as the push-forward of the measure μ under f_ε. Consequently, by writing a_i := μ(B̃_i) ∈ ℝ, we have that

ν(·) = μ(f_ε^{−1}(·)) = Σ_{i=1}^∞ a_i δ_{x_i}(·).

Since μ is a finite measure, we must have that a := Σ_{i=1}^∞ a_i < ∞, and ν(S) = μ(S) = a. By the Kantorovich-Rubinstein Theorem:

γ_f(μ, ν) = a γ_f(μ/a, ν/a) = a W1(μ/a, ν/a).

Note that the push-forward measure η of μ/a under the map x ↦ (x, f_ε(x)) satisfies η ∈ C(μ/a, ν/a). Consequently, by Fubini-Tonelli,

a W1(μ/a, ν/a) ≤ a ∫ ρ(x, y) η(dx, dy) = Σ_{i=1}^∞ ∫_{B̃_i} ρ(x, x_i) μ(dx) ≤ ε μ(S).

Since ε was arbitrary, it follows that one can approximate μ by infinite sums of point masses with arbitrary weights.

Note that for ε > 0, one can pick a discrete measure ν as above such that γ_f(μ, ν) < ε. Choose M ∈ ℕ so that Σ_{i=M+1}^∞ a_i < ε. Define ν̃ ∈ M_f by ν̃ = Σ_{i=1}^M a_i δ_{x_i}. Then

γ_f(ν, ν̃) = sup_{‖g‖_{L,∞} ≤ 1} ∫ g(r) (ν(dr) − ν̃(dr)) ≤ sup_{‖g‖_{L,∞} ≤ 1} Σ_{i=M+1}^∞ a_i g(x_i) ≤ Σ_{i=M+1}^∞ a_i < ε.

Finally, note that ν̃ can be approximated by elements in M_{f,d}. For ε > 0, pick α_i ∈ ℚ ∩ [0, ∞) such that |a_i − α_i| < ε/2^i. Then, by defining ν̂ := Σ_{i=1}^M α_i δ_{x_i}, one has the following:

γ_f(ν̃, ν̂) = sup_{‖g‖_{L,∞} ≤ 1} ∫ g(r) (ν̃(dr) − ν̂(dr)) ≤ sup_{‖g‖_{L,∞} ≤ 1} Σ_{i=1}^M (a_i − α_i) g(x_i) ≤ Σ_{i=1}^M |a_i − α_i| < ε.
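The first approximation step of this proof — pushing μ forward to the centers of an ε-net — is easy to exercise numerically. The sketch below uses an illustrative discrete measure on S = [0, 1] with the Euclidean metric; the atoms and weights are arbitrary choices:

```python
# Push a discrete measure mu on [0, 1] forward to the centers of an eps-net,
# as in the proof: each atom moves to the first ball center within eps of it.
eps = 0.1
centers = [k * eps for k in range(11)]        # the points x_n of the net

def f_eps(x):
    return next(c for c in centers if abs(x - c) < eps)

mu = {0.03: 0.4, 0.47: 0.35, 0.98: 0.25}      # illustrative atoms a_i at x_i

nu = {}                                        # nu = mu o f_eps^{-1}
for x, a in mu.items():
    y = f_eps(x)
    nu[y] = nu.get(y, 0.0) + a

max_displacement = max(abs(x - f_eps(x)) for x in mu)
total_mass_error = abs(sum(nu.values()) - sum(mu.values()))
```

No atom moves more than ε, which is exactly why the coupling in the proof transports at cost at most ε μ(S), and the total mass is preserved.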

Proposition 6.8 Let μ, ν ∈ M_f(S) with total masses m and n respectively. The following metric is equivalent to the Kantorovich-Rubinstein distance γ_f:

γ̄_f(μ, ν) := (n ∧ m) W1(μ/m, ν/n) + |m − n|,    (6.12)

where W1 is the Wasserstein distance.

Proof: We follow the proof given in [Kot08]. Note that by the Kantorovich-Rubinstein Theorem, we have already established the result in the case where m = n = 1. Similarly, if either of μ or ν is the zero measure, we have equality by the results from Chapter 2. We first prove the case where m = n > 0. By normalizing these measures, we have by the Kantorovich-Rubinstein Theorem that

γ_f(μ, ν) = m γ_f(μ/m, ν/m) = m W1(μ/m, ν/m),

which is (6.12).

Now, for the general case, suppose that m > n > 0. Then

γ_f(μ, ν) = sup_{‖f‖_{L,∞} ≤ 1} ∫ f(q) (μ(dq) − ν(dq))
= sup_{‖f‖_{L,∞} ≤ 1} [ ∫ f(q) ((n/m) μ(dq) − ν(dq)) + ∫ f(q) ((m − n)/m) μ(dq) ]
≤ sup_{‖f‖_{L,∞} ≤ 1} ∫ f(q) ((n/m) μ(dq) − ν(dq)) + sup_{‖f‖_{L,∞} ≤ 1} ∫ f(q) ((m − n)/m) μ(dq)
≤ γ_f((n/m) μ, ν) + ∫ 1(q) ((m − n)/m) μ(dq)
= n γ_f(μ/m, ν/n) + m − n
= n W1(μ/m, ν/n) + m − n,

where the last equality follows from the previous case. Consequently, we have shown that γ_f(μ, ν) ≤ γ̄_f(μ, ν). We need to establish the other inequality to show that these metrics are equivalent.
Now, assume that μ/m ≠ ν/n, so that c := γ_f((n/m) μ, ν) > 0. Choose f_i with ‖f_i‖_{L,∞} ≤ 1 such that

c = lim_i ∫ f_i(q) ((n/m) μ(dq) − ν(dq)) = lim_i [ ∫ f_i(q) (n/m) μ(dq) − ∫ f_i(q) ν(dq) ].

Both integrals form bounded sequences, α_i and β_i. Consequently, by compactness we may assume, without loss of generality, that these sequences converge to ᾱ, β̄ ∈ ℝ, respectively.

Consider the case where −β̄ < c. This must imply that ᾱ > 0, and hence α_i > 0 for large enough i. This implies

∫ f_i(q) ((m − n)/m) μ(dq) = α_i (m − n)/n = α_i (m/n − 1) > 0.

This implies that

γ_f((n/m) μ, ν) = c = lim_i ∫ f_i(q) ((n/m) μ(dq) − ν(dq))
≤ lim_i ∫ f_i(q) ((n/m) μ(dq) − ν(dq)) + lim_i ∫ f_i(q) ((m − n)/m) μ(dq)
= lim_i ∫ f_i(q) (μ(dq) − ν(dq)) ≤ γ_f(μ, ν).

Now, assume that −β̄ ≥ c. Since c > 0, we may assume that β_i < 0 for all i. Then

γ_f((n/m) μ, ν) = c = lim_i ∫ f_i(q) ((n/m) μ(dq) − ν(dq))
= (n/m) lim_i ∫ f_i(q) (μ(dq) − (m/n) ν(dq))
= (n/m) lim_i [ ∫ f_i(q) (μ(dq) − ν(dq)) + ((m − n)/n) ∫ (−f_i)(q) ν(dq) ]
≤ γ_f(μ, ν).

Note that for m > n,

(n ∧ m) γ_f(μ/m, ν/n) = n γ_f(μ/m, ν/n) = γ_f((n/m) μ, ν).

The left side is symmetric with respect to m and n. Consequently, in both cases we get that

(n ∧ m) γ_f(μ/m, ν/n) ≤ γ_f(μ, ν).

Now, note that

m − n = ∫ 1 (μ(dq) − ν(dq)) ≤ γ_f(μ, ν).

Combining these inequalities yields

γ_f(μ, ν) ≤ (n ∧ m) γ_f(μ/m, ν/n) + |m − n| ≤ 2 γ_f(μ, ν).

Using the Kantorovich-Rubinstein Theorem yields the result.
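For weighted point masses the modified metric (6.12) is explicit, since the Wasserstein distance between two normalized point masses is just the distance between their locations. A minimal sketch with illustrative masses and the bounded metric ρ(x, y) = min(|x − y|, 1):

```python
# mu = m * delta_0 and nu = n * delta_1 on the line, with the bounded metric
# rho(x, y) = min(|x - y|, 1).
m, n = 2.0, 0.5

rho_01 = min(abs(0.0 - 1.0), 1.0)          # distance between the two atoms
w1_normalized = rho_01                      # W1(delta_0, delta_1) = rho(0, 1)

gamma_bar = min(n, m) * w1_normalized + abs(m - n)   # formula (6.12)
```

Here γ̄_f splits cleanly into a transport part for the common mass n ∧ m and a pure mass-difference part |m − n|.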


In the study of stochastic differential equations, questions of existence and uniqueness arise naturally. The following theorems provide the necessary tools to address these questions. A classical result is the well-known Hölder Inequality. We first need the following definition: for an arbitrary measure space (Ω̃, F̃, μ̃), we denote by L_p(Ω̃, F̃, μ̃) the space of measurable real-valued functions f such that

‖f‖_p := ( ∫ |f(ω)|^p μ̃(dω) )^{1/p} < ∞.
Theorem 6.9 Hölder's Inequality
Let p ∈ (1, ∞) and q := p/(p − 1). If f ∈ L_p(Ω̃, F̃, μ̃) and g ∈ L_q(Ω̃, F̃, μ̃), then fg ∈ L_1(Ω̃, F̃, μ̃) with

‖fg‖_1 ≤ ‖f‖_p ‖g‖_q.

Proof: For a proof of the inequality, we refer to [Fol99].
Of particular importance is the Cauchy-Schwarz Inequality, which is obtained by picking p = q = 2 in Hölder's Inequality. Typically, one uses Cauchy-Schwarz in tandem with Doob's Martingale Inequality to obtain global estimates on behavior.
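A quick numerical check of Hölder's inequality, and of its p = q = 2 special case Cauchy-Schwarz, on a finite measure space (counting measure on five points); the vectors are arbitrary illustrative choices:

```python
p = 3.0
q = p / (p - 1.0)                  # conjugate exponent, 1/p + 1/q = 1

f = [0.3, -1.2, 2.0, 0.7, -0.4]
g = [1.5, 0.2, -0.8, 1.1, 0.9]

# ||fg||_1 versus ||f||_p * ||g||_q under counting measure
norm_fg_1 = sum(abs(a * b) for a, b in zip(f, g))
norm_f_p = sum(abs(a) ** p for a in f) ** (1.0 / p)
norm_g_q = sum(abs(b) ** q for b in g) ** (1.0 / q)

# Cauchy-Schwarz: the p = q = 2 case.
cs_lhs = sum(a * b for a, b in zip(f, g)) ** 2
cs_rhs = sum(a * a for a in f) * sum(b * b for b in g)
```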
Before we state Doob's inequality, we provide the definition of a martingale in the Hilbert space setting. Let (H, ‖·‖_H) be a separable, real Hilbert space with scalar product <·, ·>_H and norm ‖·‖_H. An H-valued martingale is a stochastic process (i.e. a collection of H-valued random variables) {X_t}_{t≥0} such that the following hold:

• {X_t} is adapted to the filtration {F_t}_{t≥0}. That is, X_t is F_t-measurable for all t ≥ 0.
• X_t is integrable for all t ≥ 0 in the sense that E‖X_t‖_H < ∞.
• E(X_t | F_s) = X_s a.s. for t ≥ s ≥ 0.

By [MP80] we note that an H-valued martingale always has a modification that is cadlag (i.e. an H-valued martingale X̃ such that X(t) = X̃(t) a.s. and X̃(·) has sample paths that are continuous from the right and have limits from the left).
Theorem 6.10 Doob's Martingale Inequality
For any stopping time τ and H-valued martingale X_t,

E sup_{0 ≤ t ≤ τ} ‖X_t‖²_H ≤ 4 E ‖X_τ‖²_H.

Proof: For a proof of the inequality, we refer to [EK05].
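A Monte Carlo illustration of Doob's inequality for the simplest real-valued martingale, a simple random walk stopped at a deterministic time. The seed, horizon, and sample size are arbitrary; this is a sanity check of the factor 4, not a proof:

```python
import random

random.seed(0)
T, n_paths = 100, 2000

sum_sup_sq = 0.0
sum_final_sq = 0.0
for _ in range(n_paths):
    x, running_max = 0, 0
    for _ in range(T):
        x += random.choice((-1, 1))          # +/-1 martingale increments
        running_max = max(running_max, abs(x))
    sum_sup_sq += running_max ** 2
    sum_final_sq += x ** 2

mean_sup_sq = sum_sup_sq / n_paths      # estimates E sup_{t<=T} |X_t|^2
mean_final_sq = sum_final_sq / n_paths  # estimates E |X_T|^2 (= T exactly)
```

The empirical ratio of the two means stays well below the universal constant 4.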



Theorem 6.11 Gronwall's Lemma
Suppose u, v, w are ℝ-valued piecewise continuous functions on a ≤ t ≤ b. Suppose u(t) is nonnegative on [a, b] and the following holds for a ≤ t ≤ b:

v(t) ≤ w(t) + ∫_a^t u(s) v(s) ds.

Then,

v(t) ≤ w(t) + ∫_a^t u(s) w(s) exp( ∫_s^t u(x) dx ) ds.

Proof: For the proof, we refer to [Kot08].
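A discrete check of the lemma in the boundary case u ≡ 1, w ≡ 1 on [0, 1], where v(t) = e^t satisfies the hypothesis with equality and the Gronwall bound also collapses to e^t. The Riemann sum below uses left endpoints of the decreasing integrand, so it slightly overestimates the bound:

```python
import math

N = 100000
h = 1.0 / N

# Gronwall bound at t = 1 for u = 1, w = 1:
#   w(1) + int_0^1 u(s) w(s) exp(int_s^1 u(x) dx) ds = 1 + int_0^1 e^{1-s} ds = e
bound = 1.0 + sum(math.exp(1.0 - k * h) * h for k in range(N))

v1 = math.exp(1.0)                 # v(t) = e^t solves v(t) = 1 + int_0^t v(s) ds
```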


Theorem 6.12 Contraction Mapping Principle
Let (S, d) be a complete metric space. Suppose Φ : S → S satisfies the following for all x, y ∈ S:

d(Φ(x), Φ(y)) ≤ λ d(x, y),

where 0 < λ < 1. Then, there exists a unique element z ∈ S such that Φ(z) = z.

Proof: For a proof of the statement, see [GG99].
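A standard illustration: Φ(x) = cos(x) is a contraction on [0, 1] (there |Φ′(x)| = |sin x| ≤ sin 1 < 1), so Picard iteration from any starting point converges to the unique fixed point:

```python
import math

z = 0.5
for _ in range(200):
    z = math.cos(z)            # Picard iteration z_{k+1} = Phi(z_k)

residual = abs(math.cos(z) - z)
```

The iterates converge geometrically, with ratio roughly sin(z*) ≈ 0.67 per step near the fixed point.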
Definition 6.13 Ito Integrals and Quadratic Variation
The quadratic variation processes play a central role in the stochastic integration theory. To define these processes rigorously, we follow [MP80] and [Kot08]. Consider a sequence of partitions of [0, ∞), {t_0^n < t_1^n < . . . < t_k^n < . . .}, such that for all T > 0, max_{t_k^n ≤ T} (t_k^n − t_{k−1}^n) → 0 as n → ∞. An H-valued stochastic process is said to be of finite quadratic variation if there exists a monotone increasing real-valued process <<X(·)>> such that for every t > 0 we have the following:

Σ_{k ≥ 0} ‖X(t_k^n ∧ t) − X(t_{k−1}^n ∧ t)‖²_H → <<X>>(t) in probability, as n → ∞,    (6.13)

where ∧ denotes minimum. We call (<<X>>(t))_{t ≥ 0} a quadratic variation for the process {X(t)}_{t ≥ 0}.

Proposition 6.14 Suppose X(·) is an H-valued square-integrable martingale. Then, there is a unique quadratic variation <<X>>(·) such that (6.13) holds. If X(·) is continuous, then so is its quadratic variation <<X>>(t).
Proof: See [MP80], [Kot08].
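The convergence (6.13) can be observed numerically for a simulated one-dimensional Brownian motion, whose quadratic variation is <<X>>(t) = t. A minimal sketch with an arbitrary seed and mesh:

```python
import math
import random

random.seed(1)
n = 20000
dt = 1.0 / n

qv = 0.0
for _ in range(n):
    dw = random.gauss(0.0, math.sqrt(dt))   # Brownian increment over a step of size dt
    qv += dw * dw                            # sum of squared increments, as in (6.13)
```

The sum qv approximates <<X>>(1) = 1; its standard deviation is sqrt(2 dt), so the approximation tightens as the mesh refines.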
Now, for two H-valued processes with finite quadratic variation, X_1(·) and X_2(·), we define the mutual quadratic variation of X_1(·) and X_2(·) as follows:

<<X_1, X_2>>(·) := (1/4) ( <<X_1 + X_2>>(·) − <<X_1 − X_2>>(·) ).    (6.14)

Note that this expression is well defined, as the existence of the quadratic variation processes <<X_1>> and <<X_2>> implies the existence of both <<X_1 + X_2>> and <<X_1 − X_2>>. Furthermore, we note that the mutual quadratic variation defines a bilinear form by [MP80].

We recall the definition of the Ito integral driven by a continuous square-integrable martingale m(·) = (m_1(·), . . . , m_d(·))^T, where d ∈ ℕ and T denotes transpose. Define M^{d×d} to be the set of d × d matrices over ℝ, and define L_{2,F,loc}([0, ∞) × Ω : M^{d×d}) to be the set of Φ(·, ·) that are M^{d×d}-valued, F_t-adapted, and jointly measurable in (t, ω) with respect to dt ⊗ dP (where dt is the Lebesgue measure on [0, ∞)). Also, such Φ must satisfy the following:

Σ_{k,l=1}^d E ∫_0^T Φ²_{kl}(s, ω) <<m_l>>(ds) < ∞  ∀ T > 0.

Given a sequence {0 = t_0^n < . . . < t_k^n < . . .} and F_{t_k^n}-adapted M^{d×d}-valued random variables Φ(t_k^n) for k = 0, 1, . . ., define

Φ^n(t, ·) = Φ(t_0^n) 1_{{0}}(t) + Σ_{k=1}^∞ Φ(t_{k−1}^n) 1_{(t_{k−1}^n, t_k^n]}(t)  ∀ t a.s.

Such processes are called simple. We define the stochastic integral by:

∫_0^t Φ^n(s) m(ds) := Σ_{k=1}^∞ Φ(t_{k−1}^n) (m(t_k^n ∧ t) − m(t_{k−1}^n ∧ t)).
By [IW89], the simple processes are dense in L_{2,F,loc}([0, ∞) × Ω : M^{d×d}) in the L_2 metric. Consequently, the definition of the stochastic integral can be extended to arbitrary Φ. It can be shown that if Φ ∈ L_{2,F,loc}([0, ∞) × Ω : M^{d×d}) and Φ^n is a sequence of simple processes converging to Φ, then the stochastic integrals converge in probability by [IW89]:

∫_0^t Φ(s) m(ds) = lim_n ∫_0^t Φ^n(s) m(ds).

We note that ∫_0^t Φ(s) m(ds) is a continuous square-integrable ℝ^d-valued martingale and has quadratic variation process given by:

<< ∫_0^· Φ(s) m(ds) >> = Σ_{i,j,k=1}^d ∫_0^· Φ_{ij}(s) Φ_{ik}(s) <<m_j, m_k>>(ds),    (6.15)

where <<·, ·>> denotes the mutual quadratic variation of the components.
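The left-endpoint sums defining the simple-process integral can be exercised numerically for the one-dimensional case Φ = W, i.e. the integral ∫ W dW. The discrete sums satisfy the exact algebraic identity Σ W_{k−1} ΔW_k = (W(1)² − Σ (ΔW_k)²)/2, and the second sum approximates <<W>>(1) = 1. A sketch with an arbitrary seed and mesh:

```python
import math
import random

random.seed(2)
n = 10000
dt = 1.0 / n

w = 0.0          # running value of the driving martingale W
ito_sum = 0.0    # simple-process (left-endpoint) stochastic integral
qv = 0.0         # running sum of squared increments

for _ in range(n):
    dw = random.gauss(0.0, math.sqrt(dt))
    ito_sum += w * dw        # left endpoint: the integrand is previsible
    qv += dw * dw
    w += dw

# exact telescoping identity: sum W_{k-1} dW = (W(1)^2 - sum dW^2) / 2
identity_gap = abs(ito_sum - (w * w - qv) / 2.0)
```

Using the left endpoint (rather than, say, the midpoint) is what makes the approximating sums martingales, and in the limit yields the Ito rather than the Stratonovich integral.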
Generalization of Brownian Motion differentials
Typically, one of the main choices is m(t) = β(t), where β(t) is d-dimensional Brownian motion. However, such a choice will not be sufficient for our analysis. Denote ℝ_+ := [0, ∞).

Definition 6.15 Following [Wal84] and [Kot08], there exists a process called Standard Gaussian Space-Time White Noise, w(dq, dt, ω). It is a finitely additive random signed measure on the Borel sets of ℝ^d × ℝ_+ of finite Lebesgue measure |A| such that the following holds:

• ∫ 1_A(q, t) w(dq, dt, ω) is a normally distributed random variable with mean 0 and variance |A|;
• if A ∩ B = ∅, then ∫ 1_A(q, t) w(dq, dt, ω) and ∫ 1_B(q, t) w(dq, dt, ω) are independent.

Let w_i(dq, dt, ω) be independent Standard Gaussian Space-Time White Noises for i = 1, . . . , d. We define d-dimensional standard Gaussian white noise as

w(dq, dt, ω) = (w_1(dq, dt, ω), . . . , w_d(dq, dt, ω))^T.
Finally, we provide a fundamental theorem from stochastic integration theory, Ito's Formula. We state the theorem as in [Kot08] and [IW89].

Theorem 6.16 Ito's Formula
Let φ(r, t) : ℝ^{d+1} → ℝ be twice continuously differentiable in space and once continuously differentiable in t, such that all partial derivatives are bounded. Further, we let m(·) be a continuous square-integrable ℝ^d-valued martingale and b(·) a continuous process of bounded variation. Set

a(t) := b(t) + m(t).

Then, φ(a(·), ·) is a continuous, locally square-integrable semimartingale and the following holds:

φ(a(t), t) = φ(a(0), 0) + ∫_0^t (∂φ/∂s)(a(s), s) ds + ∫_0^t (∇φ)(a(s), s) · (b(ds) + m(ds))
+ (1/2) Σ_{i,j=1}^d ∫_0^t (∂²_{i,j} φ)(a(s), s) <<m_i, m_j>>(ds),    (6.16)

where <<m_i, m_j>>(·) are the mutual quadratic variations of the one-dimensional components of m(·).
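Ito's formula can be checked numerically in the simplest case φ(r, t) = r², b ≡ 0, and m a one-dimensional Brownian motion, where (6.16) reads W(t)² = 2 ∫_0^t W dW + <<W>>(t). In a left-endpoint discretization the two sides agree up to rounding, because the second-order Taylor term is exactly the squared increment. A sketch with an arbitrary seed:

```python
import math
import random

random.seed(3)
n = 5000
dt = 1.0 / n

w = 0.0
stoch_int = 0.0   # discretization of int (grad phi)(a(s)) m(ds) = int 2 W dW
qv = 0.0          # discretization of <<W>>(t)

for _ in range(n):
    dw = random.gauss(0.0, math.sqrt(dt))
    stoch_int += 2.0 * w * dw
    qv += dw * dw
    w += dw

lhs = w * w                              # phi(a(1), 1)
rhs = 0.0 + stoch_int + 0.5 * 2.0 * qv   # phi(a(0), 0) plus the terms of (6.16)
```

The time-derivative term of (6.16) vanishes here since φ does not depend on t; the correction (1/2)·2·<<W>> is what distinguishes Ito's formula from the classical chain rule.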


References

[Ami00] A. Amirdjanova. Topics in Stochastic Fluid Dynamics: A Vorticity Approach. University of North Carolina at Chapel Hill, 2000.

[Ami07] A. Amirdjanova. Vortex theory approach to stochastic hydrodynamics. Mathematical and Computer Modelling, 45(11-12):1319–1341, 2007.

[Arn92] V.I. Arnold. Ordinary Differential Equations. Springer Textbook. New York, 1992.

[AX06] A. Amirdjanova and J. Xiong. Large deviation principle for a stochastic Navier-Stokes equation in its vorticity form for a two-dimensional incompressible flow. Discrete and Continuous Dynamical Systems Series B, 6(4):651, 2006.

[Bar11] C. Barbarosie. Representation of divergence-free vector fields. Preprint, 2011.

[Cho73] Alexandre Joel Chorin. Numerical study of slightly viscous flow. Journal of Fluid Mechanics, 57(04):785–796, 1973.

[CM93] A.J. Chorin and J.E. Marsden. A Mathematical Introduction to Fluid Mechanics. Texts in Applied Mathematics. Springer-Verlag, 1993.

[Dud02] R.M. Dudley. Real Analysis and Probability. Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2002.

[EK05] S.N. Ethier and T.G. Kurtz. Markov Processes: Characterization and Convergence. Wiley Series in Probability and Statistics. Wiley-Interscience, 2005.

[Eva10] L.C. Evans. Partial Differential Equations. Graduate Studies in Mathematics. American Mathematical Society, 2010.

[FM07] Joaquin Fontbona and Miguel Martinez. Paths clustering and an existence result for stochastic vortex systems. Journal of Statistical Physics, 128(3):699–719, 2007.

[Fol99] G.B. Folland. Real Analysis: Modern Techniques and Their Applications. Pure and Applied Mathematics. Wiley, 1999.

[GG99] T.W. Gamelin and R.E. Greene. Introduction to Topology. Dover Books on Mathematics. Dover Publications, 1999.

[GHL90] Jonathan Goodman, Thomas Y. Hou, and John Lowengrub. Convergence of the point vortex method for the 2-D Euler equations. Communications on Pure and Applied Mathematics, 43(3):415–430, 1990.

[GMO88] Y. Giga, T. Miyakawa, and H. Osada. Two-dimensional Navier-Stokes flow with measures as initial vorticity. Archive for Rational Mechanics and Analysis, 104(3):223–250, 1988.

[IW89] N. Ikeda and S. Watanabe. Stochastic Differential Equations and Diffusion Processes. Kodansha Scientific Books. North-Holland, 1989.

[Kat67] Tosio Kato. On classical solutions of the two-dimensional non-stationary Euler equation. Archive for Rational Mechanics and Analysis, 25:188–200, 1967.

[KK10] P.M. Kotelenez and T.G. Kurtz. Macroscopic limits for stochastic partial differential equations of McKean-Vlasov type. Probability Theory and Related Fields, 146(1):189–222, 2010.

[Kot95] Peter Kotelenez. A stochastic Navier-Stokes equation for the vorticity of a two-dimensional fluid. The Annals of Applied Probability, 5(4):1126–1160, 1995.

[Kot08] P. Kotelenez. Stochastic Ordinary and Stochastic Partial Differential Equations: Transition from Microscopic to Macroscopic Equations. Applications of Mathematics. Springer Science+Business Media, 2008.

[Kot10] Peter Kotelenez. Stochastic flows and signed measure valued stochastic partial differential equations. Theory of Stochastic Processes, 16(2):86–105, 2010.

[KS12a] Peter M. Kotelenez and Bradley T. Seadler. Conservation of total vorticity for a 2D stochastic Navier-Stokes equation. Advances in Mathematical Physics, 2012, 2012.

[KS12b] Peter M. Kotelenez and Bradley T. Seadler. On the Hahn-Jordan decomposition for signed measure valued stochastic partial differential equations. Stochastics and Dynamics, 0(2), 2012.

[KX99] T.G. Kurtz and J. Xiong. Particle representations for a class of nonlinear SPDEs. Stochastic Processes and their Applications, 83(1):103–126, 1999.

[Lad69] O.A. Ladyzhenskaya. The Mathematical Theory of Viscous Incompressible Flow. Mathematics and its Applications. Gordon and Breach, 1969.

[Lon88] Ding-Gwo Long. Convergence of the random vortex method in two dimensions. Journal of the American Mathematical Society, 1(4):779–804, 1988.

[MP80] M. Metivier and J. Pellaumail. Stochastic Integration. Probability and Mathematical Statistics. Academic Press, 1980.

[MP82] C. Marchioro and M. Pulvirenti. Hydrodynamics in two dimensions and vortex theory. Communications in Mathematical Physics, 84:483–503, 1982.

[Mun00] J.R. Munkres. Topology. Prentice Hall, 2000.

[Osa85] H. Osada. A stochastic differential equation arising from the vortex problem. Proceedings of the Japan Academy, Series A, Mathematical Sciences, 61(10):333–336, 1985.

[Pan08] Dmitry Panchenko. Theory of Probability 18.175. (Massachusetts Institute of Technology: MIT OpenCourseWare), http://ocw.mit.edu, 2008.

[Pro04] P.E. Protter. Stochastic Integration and Differential Equations. Applications of Mathematics. Springer, 2004.

[Ros31] L. Rosenhead. The formation of vortices from a surface of discontinuity. Proceedings of the Royal Society of London, Series A, 134(823):170, 1931.

[Tak85] S. Takanobu. On the existence and uniqueness of SDE describing an n-particle system interacting via a singular potential. Proceedings of the Japan Academy, Series A, Mathematical Sciences, 61(9):287–290, 1985.

[Wal84] J.B. Walsh. Introduction to Stochastic Partial Differential Equations. University of British Columbia, 1984.
