
Partial Differential Equations, 2nd Edition, L.C. Evans
Chapter 3 Nonlinear First-Order PDE∗

Yung-Hsiang Huang†

1. Check the definition of complete integral directly.

2. Proof.

3. Proof. (a) Differentiating the equation
$$\sum_i a_i x_i^2 + u(x)^3 = 0 \tag{1}$$
with respect to $x_j$, we have
$$2a_j x_j + 3u(x)^2 D_j u(x) = 0.$$
Using this equation, we can rewrite (1) as
$$-\frac{3}{2}\, u^2(x)\, x \cdot Du(x) + u(x)^3 = 0,$$
which is the desired PDE.

(b) The sphere can be represented by
$$|(x_1, x_2, \cdots, x_n, u(x)) - (a_1, a_2, \cdots, a_n, 0)|^2 - 1 = 0 \quad (x \in \mathbb{R}^n). \tag{2}$$
Differentiating the equation with respect to $x_i$, we have
$$(x_i - a_i) + u(x) D_i u(x) = 0.$$
Using this equation, we can rewrite (2) as
$$u^2(|Du|^2 + 1) - 1 = 0,$$
which is the desired PDE.
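As a numerical sanity check of (b), one can verify by finite differences that the hemisphere function $u(x) = (1 - |x-a|^2)^{1/2}$ satisfies $u^2(|Du|^2 + 1) - 1 = 0$ at a sample point; the center $a$ and evaluation point $x$ below are arbitrary illustrative choices:

```python
import numpy as np

# Arbitrary sample center a and evaluation point x (illustration only).
a = np.array([0.2, -0.1])
x = np.array([0.5, 0.3])

def u(x):
    # Upper hemisphere of the unit sphere centered at (a, 0).
    return np.sqrt(1.0 - np.sum((x - a) ** 2))

# Central-difference approximation of the gradient Du.
h = 1e-6
Du = np.array([(u(x + h * e) - u(x - h * e)) / (2 * h) for e in np.eye(2)])

residual = u(x) ** 2 * (Du @ Du + 1.0) - 1.0
print(residual)  # ~0 up to finite-difference error
```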

4. Proof.

5. Proof.

∗ Last Modified: 2017/03/25.
† Department of Math., National Taiwan University. Email: d04221001@ntu.edu.tw

6. Proof. (a) First, we note the following Jacobi's identity:
$$\frac{d}{ds}\det A(s) = \mathrm{tr}\Big((\mathrm{cof}\,A(s))\,\frac{dA}{ds}(s)\Big).$$

Let $A(s) = \big(X^i_{x_j}(s, x, t)\big)$ and $B(s) = \big(b^i_{x_j}(s)\big)$. Then, by the characteristic equation,
$$\frac{dA}{ds}(s) = B(X)A(s).$$
Therefore
$$J_s = \mathrm{tr}\big((\mathrm{cof}\,A(s))B(X)A(s)\big) = \mathrm{tr}\big(B(X)A(s)(\mathrm{cof}\,A(s))\big) = \mathrm{tr}(B(X))\det A(s) = \mathrm{div}(b)\,J.$$

(b) The characteristic equations of the PDE are
$$\dot{x}(t) = b(x), \quad x(0) = y, \tag{3}$$
$$\dot{z}(t) = -\mathrm{div}\,b\, z, \quad z(0) = g(y).$$
The second ODE is equivalent to
$$\frac{d}{dt}\big(z^{-1}\big)(t) = \mathrm{div}\,b\, z^{-1}, \quad z^{-1}(0) = g(y)^{-1}. \tag{4}$$
According to the hypothesis on $b$, the definition of $X$ and $J$, and the Jacobi identity in (a),
we know (1) these ODEs are uniquely solvable, (2) $y = X(-t, x, 0)$ and $X(s, X(-t, x, 0), 0) = X(s - t, x, 0)$ for each $s, t \in \mathbb{R}$, (3) $J(-t, x, 0) = J(0, x, t)$, and (4) $z(t) = g(y)/J(t, y, 0)$.

By the Euler formula again, we have
$$J(t, y, 0) = e^{\int_0^t \mathrm{div}\,b(X(s,y,0))\,ds} = e^{\int_0^t \mathrm{div}\,b(X(s,X(-t,x,0),0))\,ds} = e^{\int_0^t \mathrm{div}\,b(X(s-t,x,0))\,ds}$$
$$= e^{-\int_0^{-t} \mathrm{div}\,b(X(\tau,x,0))\,d\tau} = J(-t, x, 0)^{-1} = J(0, x, t)^{-1}.$$
These facts imply $u(x, t) = g(X(0, x, t))\,J(0, x, t)$.
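The identity $J_s = \mathrm{div}(b)\,J$ can be checked numerically for a linear field $b(x) = Bx$ (an assumed example, with $B$ chosen arbitrarily): then $A(s)$ solves $dA/ds = BA$, $A(0) = I$, $\mathrm{div}\,b = \mathrm{tr}(B)$ is constant, and the Euler formula predicts $\det A(t) = e^{\mathrm{tr}(B)\,t}$:

```python
import numpy as np

# Arbitrary matrix B for the linear field b(x) = Bx (assumed example).
B = np.array([[0.3, -1.0], [0.5, 0.2]])

# Integrate dA/ds = B A, A(0) = I, with classical RK4 steps.
T, n = 1.0, 1000
h = T / n
A = np.eye(2)
for _ in range(n):
    k1 = B @ A
    k2 = B @ (A + 0.5 * h * k1)
    k3 = B @ (A + 0.5 * h * k2)
    k4 = B @ (A + h * k3)
    A = A + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

J_numeric = np.linalg.det(A)                 # J(T) computed from the flow
J_formula = np.exp(np.trace(B) * T)          # exp( integral of div b )
print(J_numeric, J_formula)  # the two values should agree
```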

7. Proof.

8. Proof.

9. Proof.

10. Compute the Legendre transform of the convex function $H : \mathbb{R}^n \to \mathbb{R}$:

(a) $H(p) = |p|^r$, $1 < r < \infty$. (b) $H(p) = \sum_{i,j=1}^n a_{ij} p_i p_j + \sum_{i=1}^n b_i p_i$, where $(a_{ij})$ is a symmetric, positive definite matrix and $b \in \mathbb{R}^n$.

Proof. (a) For each $v \in \mathbb{R}^n$, by the superlinearity of $H$, the map $p \mapsto v \cdot p - H(p)$ attains its maximum at its unique critical point $p^*$, which satisfies $v = r|p^*|^{r-2}p^*$, so $|p^*| = (|v|/r)^{1/(r-1)}$ and
$$L(v) = v \cdot p^* - |p^*|^r = (r-1)\Big(\frac{|v|}{r}\Big)^{r/(r-1)}.$$

(b) By the ellipticity, we know $H$ is superlinear. For each $v \in \mathbb{R}^n$, the critical point $p^*$ satisfies $v = 2Ap^* + b$. Since $A$ is invertible, there is only one critical point, $p^* = \frac{1}{2}A^{-1}(v - b)$, which is the maximizer, and
$$L(v) = \frac{1}{2}v^T A^{-1}(v-b) - \frac{1}{4}(v-b)^T A^{-T} A A^{-1}(v-b) - \frac{1}{2}b^T A^{-1}(v-b) = \frac{1}{4}(v-b)^T A^{-1}(v-b).$$
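As a quick numerical sanity check of (b), with assumed example data for $A$, $b$, $v$: brute-force maximization of $p \mapsto v \cdot p - H(p)$ over a grid should reproduce the closed form $L(v) = \frac{1}{4}(v-b)^T A^{-1}(v-b)$ (note $DH(p) = 2Ap + b$ for symmetric $A$):

```python
import numpy as np

# Assumed example data: A symmetric positive definite, b and v fixed vectors.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -1.0])
v = np.array([0.7, 2.3])

# Closed-form Legendre transform: L(v) = (1/4)(v - b)^T A^{-1} (v - b).
w = v - b
L_closed = 0.25 * (w @ np.linalg.solve(A, w))

# Brute-force sup_p { v.p - H(p) } over a grid of p's.
ps = np.linspace(-5.0, 5.0, 801)
P1, P2 = np.meshgrid(ps, ps)
H = A[0, 0] * P1**2 + 2 * A[0, 1] * P1 * P2 + A[1, 1] * P2**2 + b[0] * P1 + b[1] * P2
L_grid = np.max(v[0] * P1 + v[1] * P2 - H)

print(L_closed, L_grid)  # the two values should agree closely
```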

11. $H : \mathbb{R}^n \to \mathbb{R}$ is convex. We write $v \in \partial H(p)$ if $H(r) \ge H(p) + v \cdot (r - p)$ for all $r \in \mathbb{R}^n$. Prove that (1) $v \in \partial H(p)$ $\Leftrightarrow$ (2) $p \in \partial L(v)$ $\Leftrightarrow$ (3) $p \cdot v = H(p) + L(v)$, where $L = H^*$.

Proof. First, we note $L = H^*$ is convex on $\mathbb{R}^n$, since for each $v_1, v_2 \in \mathbb{R}^n$ and $0 \le t \le 1$,
$$L(tv_1 + (1-t)v_2) = \sup_{p\in\mathbb{R}^n}\big\{(tv_1 + (1-t)v_2)\cdot p - H(p)\big\} = \sup_{p\in\mathbb{R}^n}\big\{t(v_1\cdot p - H(p)) + (1-t)(v_2\cdot p - H(p))\big\} \le tL(v_1) + (1-t)L(v_2).$$

Next, we prove (1) $\Rightarrow$ (2) $\Rightarrow$ (3) $\Rightarrow$ (1). If (1) is true, then for all $r \in \mathbb{R}^n$,
$$H(r) \ge H(p) + v\cdot(r - p),$$
which implies (2) as follows: for each $r_0 \in \mathbb{R}^n$,
$$L(v) + p\cdot(r_0 - v) = \sup_{r\in\mathbb{R}^n}\big\{v\cdot r - H(r) + p\cdot(r_0 - v)\big\} \le \sup_{r\in\mathbb{R}^n}\big\{v\cdot r - H(p) - v\cdot(r - p) + p\cdot(r_0 - v)\big\} = p\cdot r_0 - H(p) \le L(r_0).$$

If (2) is true, then $L(v) + H(p) \ge p\cdot v - H(p) + H(p) = p\cdot v$.

On the other hand, for each $r \in \mathbb{R}^n$, $L(r) \ge L(v) + p\cdot(r - v)$, that is, $p\cdot v \ge L(v) + p\cdot r - L(r)$. Hence,
$$p\cdot v \ge L(v) + \sup_{r\in\mathbb{R}^n}\big\{p\cdot r - L(r)\big\} \ge L(v) + \sup_{r\in\mathbb{R}^n}\Big\{p\cdot r - \sup_{q\in\mathbb{R}^n}\{r\cdot q - H(q)\}\Big\} = L(v) + \sup_{r\in\mathbb{R}^n}\inf_{q\in\mathbb{R}^n}\big\{H(q) + (p - q)\cdot r\big\}.$$
Since $H$ is convex and finite, it has a subgradient $s$ at $p$; that is, $H(q) \ge H(p) + (q - p)\cdot s$ for all $q \in \mathbb{R}^n$. Taking $r = s$ gives $\inf_{q\in\mathbb{R}^n}\{H(q) + (p - q)\cdot s\} \ge H(p)$, so $p\cdot v \ge L(v) + H(p)$. Together with the previous inequality, (3) holds.

If (3) is true, then (1) is true since for each r ∈ Rn , p·v = L(v)+H(p) ≥ v ·r −H(r)+H(p).

Remark 1. Related to convex duality, the Fenchel–Moreau theorem characterizes when an extended real-valued function on a Hausdorff locally convex space equals its biconjugate (that is, its double Legendre transform).
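The biconjugate identity $H^{**} = H$ for convex $H$ can be illustrated numerically in one dimension, e.g. with $H(p) = |p|^3$ (the case $r = 3$ of Exercise 10(a)); the grids, comparison window, and tolerance below are ad hoc choices:

```python
import numpy as np

def conjugate(f_vals, x_grid, y_grid):
    # f*(y) = sup_x { x*y - f(x) }, approximated by a max over the x-grid.
    return np.max(np.outer(y_grid, x_grid) - f_vals[None, :], axis=1)

p = np.linspace(-10.0, 10.0, 4001)
H = np.abs(p) ** 3

v = np.linspace(-3.0, 3.0, 1201)
L = conjugate(H, p, v)        # L = H*
H_bi = conjugate(L, v, p)     # H** = L*

# Compare H** with H where the maximizing v stays inside the v-grid:
# for |p| <= 1 the maximizer is v = ±3p^2, which lies in [-3, 3].
mask = np.abs(p) <= 1.0
err = np.max(np.abs(H_bi[mask] - H[mask]))
print(err)  # should be small
```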

12. Assume $L_1, L_2 : \mathbb{R}^n \to \mathbb{R}$ are convex, smooth and superlinear. Show that
$$\min_{v\in\mathbb{R}^n}(L_1(v) + L_2(v)) = \max_{p\in\mathbb{R}^n}(-H_1(p) - H_2(-p)),$$
where $H_1 = L_1^*$, $H_2 = L_2^*$.

Proof. By the superlinearity of $L_1$ and $L_2$, we know both extrema are attained.

Given $v \in \mathbb{R}^n$ and $p \in \mathbb{R}^n$, then $L_1(v) + L_2(v) \ge p\cdot v - H_1(p) + (-p)\cdot v - H_2(-p)$. So $L_1(v) + L_2(v) \ge \max_{p\in\mathbb{R}^n}(-H_1(p) - H_2(-p))$ and hence
$$\min_{v\in\mathbb{R}^n}(L_1(v) + L_2(v)) \ge \max_{p\in\mathbb{R}^n}(-H_1(p) - H_2(-p)).$$
For the converse inequality, let $v^*$ be a minimizer of $L_1 + L_2$, so $DL_1(v^*) = -DL_2(v^*) =: p^*$. By Exercise 11, $H_1(p^*) = p^*\cdot v^* - L_1(v^*)$ and $H_2(-p^*) = (-p^*)\cdot v^* - L_2(v^*)$, hence
$$\max_{p\in\mathbb{R}^n}(-H_1(p) - H_2(-p)) \ge -H_1(p^*) - H_2(-p^*) = L_1(v^*) + L_2(v^*) = \min_{v\in\mathbb{R}^n}(L_1(v) + L_2(v)).$$
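For a concrete (assumed) example, one can check the identity numerically with $L_1(v) = v^2$, $L_2(v) = (v-1)^2$, whose conjugates are $H_1(p) = p^2/4$ and $H_2(p) = p^2/4 + p$; both sides equal $1/2$:

```python
import numpy as np

v = np.linspace(-5.0, 5.0, 100001)
p = np.linspace(-5.0, 5.0, 100001)

# min_v (L1(v) + L2(v)) with L1(v) = v^2, L2(v) = (v - 1)^2.
lhs = np.min(v**2 + (v - 1.0) ** 2)

# max_p (-H1(p) - H2(-p)) with H1(p) = p^2/4, H2(-p) = p^2/4 - p.
rhs = np.max(-(p**2 / 4.0) - ((p**2 / 4.0) - p))

print(lhs, rhs)  # both should be 1/2
```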

13. Let $H$ be a smooth convex Hamiltonian and $g$ smooth Lipschitz initial data. Prove that the Hopf–Lax formula reads
$$u(x, t) = \min_{y\in\mathbb{R}^n}\Big\{tL\Big(\frac{x-y}{t}\Big) + g(y)\Big\} = \min_{y\in B(x, Rt)}\Big\{tL\Big(\frac{x-y}{t}\Big) + g(y)\Big\},$$
for $R = \sup_{\mathbb{R}^n}|DH(Dg)|$, $H = L^*$. (This proves finite propagation speed for a Hamilton–Jacobi PDE with convex Hamiltonian and Lipschitz continuous initial data $g$.)

Proof. Part of the proof is based on the same idea as Exercise 11 (suggested by the first edition).

Since
$$tL\Big(\frac{x-y}{t}\Big) + g(y) \ge tL\Big(\frac{x-y}{t}\Big) - [g]_{C^{0,1}}|x-y| - |g(x)| = |x-y|\left(\frac{L\big(\frac{x-y}{t}\big)}{\frac{|x-y|}{t}} - [g]_{C^{0,1}} - \frac{|g(x)|}{|x-y|}\right) \to \infty \ \text{ as } |y| \to \infty,$$
there is a constant $A > 0$ such that $\varphi(y) := tL\big(\frac{x-y}{t}\big) + g(y) \ge \varphi(0)$ provided $|y| \ge A$. The continuity of $\varphi$ and the fact $\inf_{y\in\mathbb{R}^n}\varphi(y) = \inf_{|y|\le A}\varphi(y)$ imply that $\varphi$ has a minimizer $y^*$. Next, we characterize $y^*$. By convex duality,
$$tL\Big(\frac{x-y^*}{t}\Big) = t\sup_{p\in\mathbb{R}^n}\Big\{p\cdot\frac{x-y^*}{t} - H(p)\Big\} = \sup_{p\in\mathbb{R}^n}\big\{p\cdot(x-y^*) - tH(p)\big\} = p^*\cdot(x-y^*) - tH(p^*),$$

where the existence of the maximizer $p^*$ is proved similarly to that of $y^*$. Note that $0 = x - y^* - tDH(p^*)$.

Again, we have for each $y \in \mathbb{R}^n$
$$tL\Big(\frac{x-y^*}{t}\Big) = p^*\cdot(x-y^*) - tH(p^*) \le p^*\cdot(x-y^*) - p^*\cdot(x-y) + tL\Big(\frac{x-y}{t}\Big),$$
that is,
$$\varphi_1(y) := tL\Big(\frac{x-y}{t}\Big) + g(y) \ge tL\Big(\frac{x-y^*}{t}\Big) + p^*\cdot(y^* - y) + g(y) =: \varphi_2(y),$$
with $\varphi_1(y^*) = \varphi_2(y^*)$.

Since $\varphi_1$ has a global minimizer at $y = y^*$, $D\varphi_1(y^*) = 0$. If $D_i\varphi_2(y^*) = a > 0$ for some $i$, then there is $h > 0$ such that $D_i\varphi_1(y) < \frac{a}{2}$ and $D_i\varphi_2(y) > \frac{a}{2}$ for all $|y - y^*| < 2h$, and then $\varphi_2(y^* + he_i) > \varphi_2(y^*) + \frac{a}{2}h = \varphi_1(y^*) + \frac{a}{2}h > \varphi_1(y^* + he_i)$, a contradiction. Considering $\varphi_1(y^* - he_i)$ and $\varphi_2(y^* - he_i)$ for the case $a < 0$, we again reach a contradiction. So $D\varphi_2(y^*) = 0$, that is, $p^* = Dg(y^*)$. Hence $|x - y^*| = t|DH(Dg(y^*))| \le t\sup|DH(Dg)|$.
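The bound $|x - y^*| \le Rt$ can be observed numerically. Take $H(p) = p^2$ (so $L(v) = v^2/4$) and $g(y) = \sin y$, an assumed smooth Lipschitz example; then $R = \sup|DH(Dg)| = \sup|2\cos y| = 2$, and a grid minimization of $tL\big(\frac{x-y}{t}\big) + g(y)$ at an arbitrary sample $(x, t)$ finds a minimizer inside $B(x, Rt)$:

```python
import numpy as np

t, x = 0.8, 1.3          # arbitrary sample time and point
R = 2.0                  # sup |DH(Dg)| = sup |2 cos y| = 2 for this example

y = np.linspace(x - 10.0, x + 10.0, 200001)
phi = (x - y) ** 2 / (4.0 * t) + np.sin(y)   # t L((x-y)/t) + g(y), L(v) = v^2/4
y_star = y[np.argmin(phi)]

print(abs(x - y_star), R * t)  # |x - y*| should not exceed R t
```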

14. Let $E$ be a closed subset of $\mathbb{R}^n$. Show that if the Hopf–Lax formula could be applied to the initial-value problem
$$u_t + |Du|^2 = 0 \ \text{ in } \mathbb{R}^n\times(0,\infty), \qquad u = \begin{cases} 0 & \text{if } x \in E \\ \infty & \text{if } x \notin E \end{cases} \ \text{ on } \mathbb{R}^n\times\{t = 0\},$$
it would give the solution
$$u(x, t) = \frac{1}{4t}\,\mathrm{dist}(x, E)^2.$$

Proof. Let $H(p) = |p|^2$; then $H$ is smooth, convex and superlinear, and $L(v) = H^*(v) = \frac{|v|^2}{4}$ by computing the critical point. So
$$u(x, t) = \min_{y\in\mathbb{R}^n}\Big\{\frac{|x-y|^2}{4t} + g(y)\Big\}.$$
We note that the minimum is attained at some $y \in E$ since $g = \infty$ on $E^c$. So
$$u(x, t) = \min_{y\in E}\frac{|x-y|^2}{4t} = \frac{1}{4t}\,\mathrm{dist}(x, E)^2.$$
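For a concrete (assumed) one-dimensional example, $E = [-1, 0] \cup \{2\}$ discretized as a point cloud, the minimization over $E$ indeed returns $\mathrm{dist}(x, E)^2/(4t)$ at a sample point:

```python
import numpy as np

# Assumed example in R^1: E = [-1, 0] ∪ {2}, discretized as a point cloud.
E = np.concatenate([np.linspace(-1.0, 0.0, 2001), [2.0]])

t, x = 0.7, 1.2   # arbitrary sample time and point

# Hopf-Lax value: min over y in E of |x - y|^2 / (4t), since g = 0 on E
# and g = +inf off E.
u_val = np.min((x - E) ** 2 / (4.0 * t))

d = np.min(np.abs(x - E))   # dist(x, E)
print(u_val, d**2 / (4.0 * t))  # the two values coincide
```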

15. Proof.

16. Assume $u^1, u^2$ are two solutions of the initial-value problems
$$u^i_t + H(Du^i) = 0 \ \text{ in } \mathbb{R}^n\times(0,\infty), \qquad u^i = g^i \ \text{ on } \mathbb{R}^n\times\{t = 0\},$$
given by the Hopf–Lax formula. Prove the $L^\infty$-contraction inequality
$$\sup_{\mathbb{R}^n}|u^1(\cdot, t) - u^2(\cdot, t)| \le \sup_{\mathbb{R}^n}|g^1 - g^2| \quad (t > 0).$$

Proof. Given $t > 0$ and $x \in \mathbb{R}^n$, for some $y_1, y_2 \in \mathbb{R}^n$ we have $u^1(x, t) = tL\big(\frac{x-y_1}{t}\big) + g^1(y_1)$ and $u^2(x, t) = tL\big(\frac{x-y_2}{t}\big) + g^2(y_2)$. We are almost done since
$$u^1(x, t) - u^2(x, t) \le tL\Big(\frac{x-y_2}{t}\Big) + g^1(y_2) - tL\Big(\frac{x-y_2}{t}\Big) - g^2(y_2) \le \sup_{\mathbb{R}^n}|g^1 - g^2|$$
and
$$u^2(x, t) - u^1(x, t) \le tL\Big(\frac{x-y_1}{t}\Big) + g^2(y_1) - tL\Big(\frac{x-y_1}{t}\Big) - g^1(y_1) \le \sup_{\mathbb{R}^n}|g^1 - g^2|.$$

Remark 2. See Exercise 10.4 for an analogous result for viscosity solutions.
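The contraction can also be observed numerically: with $H(p) = p^2$ (so $L(v) = v^2/4$) and two assumed Lipschitz data $g^1, g^2$, a discretized Hopf–Lax formula on a truncated grid satisfies the inequality (the data, grid, and time below are illustrative choices):

```python
import numpy as np

y = np.linspace(-20.0, 20.0, 8001)   # truncated y-grid
x = np.linspace(-2.0, 2.0, 401)      # evaluation points
t = 0.5

# Two assumed Lipschitz initial data.
g1 = np.abs(y)
g2 = np.abs(y - 0.3) + 0.1 * np.sin(y)

def hopf_lax(g):
    # u(x, t) = min_y { t L((x - y)/t) + g(y) } with L(v) = v^2/4.
    return np.min((x[:, None] - y[None, :]) ** 2 / (4.0 * t) + g[None, :], axis=1)

u1, u2 = hopf_lax(g1), hopf_lax(g2)
lhs = np.max(np.abs(u1 - u2))
rhs = np.max(np.abs(g1 - g2))
print(lhs, rhs)  # contraction: lhs <= rhs
```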

17. Show that
$$u(x, t) := \begin{cases} \frac{2}{3}\big(-t + \sqrt{3x + t^2}\,\big) & \text{if } 4x + t^2 > 0 \\ 0 & \text{if } 4x + t^2 < 0 \end{cases} \tag{5}$$
is an (unbounded) entropy solution of $u_t + \big(\frac{u^2}{2}\big)_x = 0$.

Proof.

18. Use the definitions of derivative and convolution.

19. Assume $F(0) = 0$, $u$ is a continuous integral solution of the conservation law
$$u_t + F(u)_x = 0 \ \text{ in } \mathbb{R}\times(0,\infty), \qquad u = g \ \text{ on } \mathbb{R}\times\{t = 0\}, \tag{6}$$
and $u$ has compact support in $\mathbb{R}\times[0, T]$ for each time $T > 0$. Prove for all $t > 0$,
$$\int_{-\infty}^{\infty} u(x, t)\,dx = \int_{-\infty}^{\infty} g(x)\,dx.$$

Proof. For each t > 0, we pick the test function v ∈

20. Proof.
