
TRANSACTIONS OF THE

AMERICAN MATHEMATICAL SOCIETY


Volume 365, Number 8, August 2013, Pages 4229–4255
S 0002-9947(2013)05760-1
Article electronically published on March 11, 2013

QUASICONVEX FUNCTIONS AND NONLINEAR PDES

E. N. BARRON, R. GOEBEL, AND R. R. JENSEN

Abstract. A second order characterization of functions which have convex level sets (quasiconvex functions) results in the operator L0(Du, D²u) = min{v · D²u v^T : |v| = 1, v · Du = 0}. In two dimensions this is the mean curvature operator, and in any dimension L0(Du, D²u)/|Du| is the first principal curvature of the surface S = u^{-1}(c). Our main results include a comparison principle for L0(Du, D²u) = g when g ≥ Cg > 0 and a comparison principle for quasiconvex solutions of L0(Du, D²u) = 0. A more regular version of L0 is introduced, namely Lα(Du, D²u) = min{v · D²u v^T : |v| = 1, |v · Du| ≤ α}, which characterizes functions which remain quasiconvex under small linear perturbations. A comparison principle is proved for Lα. A representation result using stochastic control is also given, and we consider the obstacle problems for L0 and Lα.

1. Introduction
In this paper we consider the operator L0 : Rn × S(n) → R, defined by
L0(p, M) = min{ v · M v^T : v ∈ R^n, |v| = 1, v · p = 0 },

and when p = 0, L0(0, M) = λ1(M) = the first eigenvalue of M. Here S(n) is the set of
symmetric n × n matrices. We will study the nonlinear partial differential equation
L0 (Du, D2 u) = g(x), x ∈ Ω, with u = h on ∂Ω.
Solutions are considered in the viscosity sense. For a given function u : Ω → R
we also write L0 (u) for the operator L0 (Du, D2 u). The motivation for considering
this problem is the connection with differential geometry, generalized convexity,
tug-of-war games and stochastic optimal control.
First, observe that the operator L0 has the nice property that it is of geometric
type:
L0 (λp, λM + μ p ⊗ p) = λL0 (p, M ) ∀λ > 0, μ ∈ R, p ∈ Rn , M ∈ S(n).
In R² it is easy to calculate

L0(u) = (D⊥u · D²u (D⊥u)^T) / |D⊥u|² = Δu − Δ∞u,

where

Δ∞u = (Du · D²u Du^T) / |Du|²

Received by the editors February 16, 2011 and, in revised form, November 23, 2011.
2010 Mathematics Subject Classification. Primary 35D40, 35B51, 35J60, 52A41, 53A10.
Key words and phrases. Quasiconvex, robustly quasiconvex, principal curvature, comparison principles.
The authors were supported by grant DMS-1008602 from the National Science Foundation.

©2013 American Mathematical Society


is the celebrated ∞−Laplacian. Thus, in two dimensions L0 (u) is the much studied
mean curvature operator. In Rn , L0 (Du, D2 u)/|Du| is the first (smallest) principal
curvature of the surface S = u−1 (c) = {x ∈ Rn | u(x) = c}. The problem of
prescribed principal curvature L0 (u) = g|Du| is considerably complicated in full
generality. In this paper we consider the problem L0 (Du, D2 u) = g in order to
determine whether or not there is a solution, its uniqueness, and its properties.
The results for L0 immediately imply analogous results for the largest
principal curvature by considering the operator
Lmax(Du, D²u) = max{ v · D²u v^T : |v| = 1, v · Du = 0 }.

Moreover, L0 (u) = 0 as well as Lmax (u) = 0 imply that u is a solution of the


homogeneous Monge-Ampère equation det(D2 u) = 0.
Now suppose that Ω ⊂ Rn is a convex and open set.
A necessary and sufficient second order condition that a twice differentiable
function be convex is that D2 u(x) is positive semidefinite for all x ∈ Ω. Alvarez,
Lasry and Lions [1] and Oberman, in the recent papers [13] and [14], as well as
Bardi and Dragoni [3] have established the fact that u is convex if and only if
D2 u ≥ 0 in the viscosity sense, or, in equation form, if
λmin (D2 u(x)) = min{vD2 u(x)v T | |v| = 1} ≥ 0, x ∈ Ω,
in the viscosity sense. That is, if u − ϕ achieves a maximum at x0 ∈ Ω, where ϕ is
a smooth function, then λmin (D2 ϕ(x0 )) ≥ 0.
Now we introduce the connection of our operator L0 (u) with quasiconvex func-
tions. Recall that a function u : Ω → [−∞, ∞] is quasiconvex by definition if
u(λx + (1 − λ)y) ≤ max{u(x), u(y)}, ∀ x, y ∈ Ω, 0 < λ < 1,
which is equivalent to the requirement that the sublevel sets of u be convex.
A necessary and sufficient first order condition for a differentiable function u :
Ω → R to be quasiconvex is the following:
• First Order: ∀ x, y ∈ Ω, u(y) ≤ u(x) =⇒ Du(x) · (y − x) ≤ 0.
A necessary, but not sufficient, second order condition for quasiconvexity, for a
twice differentiable function u : Ω → R, is the following:
• Necessary Second Order: ∀ x ∈ Ω, v · Du(x) = 0 =⇒ v · D²u(x)v^T ≥ 0, ∀ 0 ≠ v ∈ R^n.
To see that this second order condition is not sufficient, consider the following
example.
Example 1.1. Let u(x) = −x⁴ on R. If x ≠ 0, then v · Du(x) = 0 implies v = 0,
and then v · D²u(x)v = 0; if x = 0, then D²u(x) = 0. Thus the second order
condition is satisfied. However, u is not quasiconvex. In fact, it is quasiconcave.
A sufficient, but not necessary, condition for quasiconvexity is strict inequality,
i.e.,
• Sufficient Second Order: ∀ x ∈ Ω, v · Du(x) = 0 =⇒ v · D²u(x)v^T > 0, ∀ 0 ≠ v ∈ R^n.
See Boyd and Vandenberghe [6] for details.
We may express the second order quasiconvexity conditions in terms of L0 : if a
twice differentiable function u : Ω → R is quasiconvex, then L0 (u) ≥ 0. Conversely,
if L0 (u) > 0, then u is quasiconvex. It turns out that viscosity versions of these


two statements are valid as well. This is shown in Section 2 as Theorem 2.6 and
Theorem 2.7. It is more interesting that L0 (Du, D2 u) ≥ 0 is sufficient for quasi-
convexity, if, additionally, u does not have any local maxima (cf. Example 1.1).
We prove this in Theorem 2.8. Thus, L0 arises naturally as the generalization of
the condition guaranteeing convexity, namely λ1 (D2 u) ≥ 0, to quasiconvexity. But
we will see that the generalization is far from straightforward as is indicated by the
fact that L0 (u) ≥ 0 is not sufficient for quasiconvexity without other conditions.
One might conjecture that the problem in proving that L0 (u) ≥ 0 implies qua-
siconvexity without extra assumptions is that the constraint set in the definition
of L0 has no thickness. One can thicken it up by considering a slightly different
operator,

L0^−(p, M) = min{ v · M v^T : v ∈ R^n, |v| ≤ 1, v · p = 0 }.

Nevertheless, L0^−(u) ≥ 0 does not characterize quasiconvexity either. In fact,
u(x) = −x⁴ is a counterexample, so this idea doesn't help. However, the operator
L0^− arises in two dimensions as the governing operator for motion by positive
curvature, as pointed out in [12].
A different way to thicken up the constraint set in the definition of L0 (u) is to
allow |v · p | ≤ α rather than v · p = 0. This leads to the definition

Lα (p, M ) = min{v · M v T | v ∈ Rn , |v| = 1, |v · p | ≤ α}.

This turns out to be a very fruitful idea. For a smooth u : Ω → R, Lα (u) ≥ 0


is a necessary and sufficient condition for quasiconvexity of u under all sufficiently
small linear perturbations. It turns out that this result, but without explicitly using
Lα , was shown by An [2] in a somewhat different (nonsmooth) framework. Recall
here that there are many quasiconvex functions which fail to be quasiconvex under
arbitrarily small linear perturbations. For example, on R, the function x → arctan x
is quasiconvex, but x → arctan x − ax is not quasiconvex for arbitrarily small
positive a.
In what follows, given α > 0, we say that u : Ω → [−∞, ∞] is α-robustly
quasiconvex if x → u(x) + ξ · x is quasiconvex on Ω for every ξ ∈ Rn , |ξ| ≤ α, and
denote the class of such functions by Rα (Ω). We call u : Ω → [−∞, ∞] robustly
quasiconvex if it is α-robustly quasiconvex for some α > 0, and denote the class
of such functions by R(Ω) = ⋃_{α>0} Rα(Ω). We note that robustly quasiconvex
functions were named stable-quasiconvex, or just s-quasiconvex, by Phu and An
[17]. See [2] and [17] for more examples and discussions of robust quasiconvexity.
In Section 4 we show that Lα (u) ≥ 0 in the viscosity sense is a necessary and
sufficient condition for u ∈ Rα (Ω). Furthermore, it turns out that Lα is an operator
with very nice properties including the fact that there is a comparison, and therefore
uniqueness, principle for equations of the form Lα (u) = g, as long as g ≥ 0. Thus
we have a complete characterization of the class of robustly quasiconvex functions
and we have a tool to study what happens as we let α → 0. Notice, however, that
Lα is not a geometric operator in the same sense as L0 .
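To see one reason for this (the observation is ours, not from the original), note that rescaling the gradient rescales the constraint set: for λ > 0,

Lα(λp, λM) = λ min{ v · M v^T : |v| = 1, |v · p| ≤ α/λ } = λ L_{α/λ}(p, M),

so the parameter α does not transform along with (p, M), in contrast to the identity displayed earlier for L0.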
We are therefore amply motivated to determine whether or not L0 enjoys a
comparison principle among viscosity sub and supersolutions. Barles and Da Lio
[12, Appendix] proved the following result for a related mean curvature problem, a
sort of weak comparison principle.


Consider the problem


(1.1) Δu − (Du · D²u Du^T)/|Du|² − 1 = 0, x ∈ Ω, u(x) = 0, x ∈ ∂Ω.
In two dimensions this equation arises by considering a tug-of-war game in which
one player wants to minimize the exit time from Ω while the other player is trying
to thwart that goal. Here is what was proved by Barles and Da Lio in the Appendix
of [12].
Theorem 1.2. Assume Ω is starshaped with respect to the origin. Let u be a
bounded and upper semicontinuous function on Ω which is a subsolution of (1.1),
and let v be a bounded and lower semicontinuous supersolution of (1.1) on Ω. Then
u_* ≤ v and v^* ≤ u, where u_* and v^* denote the lower and upper semicontinuous
envelopes of u and v, respectively.
Notice that the solution of (1.1) may be discontinuous and there is no state-
ment made connecting the problem with quasiconvexity. In two dimensions (1.1)
is equivalent to the following equation, which is the inhomogeneous version of our
quasiconvex problem:
(1.2) L0 (u) − 1 = 0, x ∈ Ω, u(x) = 0, x ∈ ∂Ω.
Observe that a subsolution of (1.2) is a strict subsolution of L0 (u) > 0 and
hence is quasiconvex (assuming Ω is convex) by Theorem 2.7. Thus we are led to
consider the problem in Ω ⊂ Rn , L0 (u) = g(x) assuming that g(x) > 0 on Ω. We
prove in Theorem 3.3 that this problem has a comparison principle that an upper
semicontinuous subsolution lies below a lower semicontinuous supersolution if it
holds on ∂Ω. It is critical that g > 0 because we have an example of nonuniqueness
for L0 (u) = 0.
On the other hand, by considering the fact that Lα ↑ L0 as α → 0+, it is natural
to conjecture that using the unique solution (proved in Theorem 5.1) of Lα (uα ) = 0,
the function u = supα>0 uα should be the correct solution of L0 (u) = 0. In fact,
u = supα>0 uα is shown to be the unique quasiconvex solution of L0 (u) = 0. That
is, L0 (u) = 0 may have many solutions, but there is only one quasiconvex solution
(see Theorem 5.5). In Section 6 we also show that the obstacle problem
min{g − u, L0 (Du, D2 u)} = 0, x ∈ Ω, u = g, x ∈ ∂Ω,
has a unique quasiconvex solution and it is g ## , the largest quasiconvex minorant
of g. The obstacle problem for Lα , is also considered and it is shown to have a
unique viscosity solution. This would be a way to calculate the greatest α−robustly
quasiconvex minorant of a given function g.
To conclude the paper, we give a brief introduction to the connections between
our operator L0 and how it arises in stochastic optimal control. The relevant part of
stochastic control here is the new area of control in which the payoff is not the expected
value, but the worst case cost. In other words, one takes an essential supremum
over all the paths of the underlying Brownian motion. Thus, for example, we prove
that a viscosity solution of L0 (u) = 0 with u = g on ∂Ω is given by
u(x) = inf_η ess sup_{ω∈Σ} g(ξ(τx)),

where η : [0, ∞) → S1(0) = {|y| = 1} is a control, ξ(·) is a trajectory given by
dξ = √2 η dW, t > 0, ξ(0) = x, τx is the exit time of ξ(·) from Ω, and W(·) is a


Brownian motion. The essential supremum is over the sample paths ξ(τx , ω), ω ∈ Σ.
Notice here that there is no drift (although this could also be considered) and the
control occurs only in the diffusion coefficient. These types of control problems have
been studied by Soner [21] and Soner et al. [22], [23], and a similar control problem
is constructed to give a representation formula for motion by mean curvature in
[21] and Buckdahn et al. [7].
To see the connection between a tug-of-war game and L0 , consider the rules of
a game between Paul and Carol introduced by Kohn and Serfaty in [12]:
Paul starts at x ∈ Ω. At each time step first Paul chooses a direction, i.e., a
v ∈ S1 (0), with the goal of trying to reach ∂Ω. Next, Carol, who is trying to prevent
Paul from reaching the boundary, chooses either to (i) confirm the direction chosen
by Paul (choose b = +1) or (ii) reverse the direction (choose b = −1). Paul’s goal is
to minimize the exit time from Ω, while Carol’s goal is to maximize the exit time.
If the time step is ε, the value of the game if the position is x ∈ Ω satisfies the
dynamic programming principle

uε(x) = min_{|v|=1} max_{b=±1} uε(x + √2 ε b v) + ε²,

where uε(x) = ε²k if Paul needs k steps to exit starting from x ∈ Ω. In R²,
uε(x) → u(x), and L0(u) − 1 = 0. This is the starting point leading to the connection
between deterministic optimal control and mean curvature developed in [12].
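To make the dynamic programming principle concrete, here is a minimal numerical sketch (ours, not from [12]) of the one-step update it describes, written for a function on R² supplied as a callable; the discretization of the direction set is an approximation we introduce for illustration only.

import numpy as np

def dpp_step(u, x, eps, n_dirs=64):
    # One step of the update u_eps(x) = min_{|v|=1} max_{b=+-1} u_eps(x + sqrt(2)*eps*b*v) + eps^2.
    # Directions v and -v are interchangeable because Carol may reverse them,
    # so it suffices to sample angles in [0, pi).
    x = np.asarray(x, dtype=float)
    best = np.inf
    for t in np.linspace(0.0, np.pi, n_dirs, endpoint=False):
        v = np.array([np.cos(t), np.sin(t)])
        step = np.sqrt(2.0) * eps * v
        worst = max(u(x + step), u(x - step))   # Carol picks the worse of the two moves
        best = min(best, worst)                 # Paul picks the best direction
    return best + eps**2

Iterating this update on a grid, with u held equal to 0 outside Ω (the game ends once Paul exits), produces an approximation of the value uε described above.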

2. Viscosity characterizations of quasiconvex functions


We state first our precise definition of a viscosity solution that we use in this
paper and refer to [9] for the basic results. Throughout this paper, for any locally
bounded function f, we use the notation that f ∗ is the upper semicontinuous en-
velope of f and f∗ is the lower semicontinuous envelope of f, in all the variables in
the function.
Definition 2.1. A locally bounded function u : Ω → R is a viscosity subsolution
of
F (x, u, Du, D2 u) ≥ 0
if, whenever u^* − ϕ has a strict local zero maximum at x0 ∈ Ω for some smooth
function ϕ : Ω → R, we have
F^*(x0, ϕ(x0), Dϕ(x0), D²ϕ(x0)) ≥ 0.
The function u is a viscosity supersolution of
F(x, u, Du, D²u) ≤ 0
if, whenever u_* − ϕ has a strict zero minimum at x0 ∈ Ω for some smooth function
ϕ : Ω → R, we have
F_*(x0, ϕ(x0), Dϕ(x0), D²ϕ(x0)) ≤ 0.
If we have a Dirichlet boundary condition u(x) = g(x), x ∈ ∂Ω, we take this in
the viscosity sense [9], i.e., when x ∈ ∂Ω, u is a subsolution of F ∗ (x, u, Du, D2 u) ∨
(u(x) − g(x)) ≥ 0, and u is a supersolution of F∗ (x, u, Du, D2 u) ∧ (u(x) − g(x)) ≤ 0.
Remark 2.2. The inequalities in the definition are reversed from the usual defini-
tions because we do not want to carry along minus signs throughout the paper.


We begin by defining our operators precisely. Set Γ(p, α) = {v ∈ R^n | |v| = 1, |p · v| ≤ α} and Γ(p, 0) := Γ(p).
Define the Hamiltonian Lα : R^n × S(n) → R by

Lα(p, M) = min{ v M v^T : v ∈ R^n, |v| = 1, |v · p| ≤ α } = min_{v∈Γ(p,α)} v M v^T,

and L0 : R^n × S(n) → R by

L0(p, M) = min{ v M v^T : v ∈ R^n, |v| = 1, |v · p| = 0 } = min_{v∈Γ(p)} v M v^T.

If Γ(p, α) = ∅, by convention, we set Lα (p, M ) = +∞. Notice that Γ(0, α) = Γ(0) =


S1 (0) = {y ∈ Rn | |y| = 1}, and hence
Lα (0, M ) = L0 (0, M ) = λ1 (M ),
where λ1 (M ) is the first eigenvalue of M.
We begin by showing that Lα (p, M ) and L0 (p, M ) are continuous if p = 0 and
a convergence of Lα to L0 as α → 0+. The proof is standard, so only an outline is
provided.
Lemma 2.3. Lα and L0 are continuous in (p, M) ∈ (R^n \ {0}) × S(n), lower semicontinuous
in (p, M) at points where p = 0, and Lα(p, M) ↑ L0(p, M) as α → 0+.
Also, for α ≥ 0,

Lα^*(0, M) = lim_{δ→0+} sup_{|p|≤δ} Lα(p, M).

Proof. The function

f(α, p, M, v) = v M v^T if |v| = 1 and |v · p| ≤ α, and f(α, p, M, v) = +∞ otherwise,

is lower semicontinuous on [0, ∞) × R^n × S(n) × R^n and uniformly level bounded in v.
Because Lα(p, M) = min_v f(α, p, M, v), [19, Theorem 1.17] implies that Lα is lower
semicontinuous, not just in (p, M), but in (α, p, M). To see that Lα is upper semicontinuous
in (α, p, M) at points with p ≠ 0, suppose Lα(p, M) = f(α, p, M, v)
and pick δ > 0. For (α′, p′, M′) sufficiently close to (α, p, M), there are v′ close
to v such that |v′| = 1, |v′ · p′| ≤ α′, and f(α, p, M, v) + δ ≥ f(α′, p′, M′, v′).
Then Lα(p, M) + δ ≥ f(α′, p′, M′, v′) ≥ Lα′(p′, M′), and this gives upper semicontinuity.
Monotonicity of Lα in α is clear from the definition. Now, continuity
implies that Lα(p, M) ↑ L0(p, M) as α → 0+ for p ≠ 0, while when p = 0,
Lα(0, M) = L0(0, M) = λ1(M). The remaining statements are from the definitions
of the envelopes. □
Example 2.4. To see that L0(p, M) is discontinuous at p = 0, let M = diag(0, 1) ∈ S(2).
Then for p = (0, δ), L0(p, M) = min{ v2² | v ∈ Γ(p) } = 0, while for p = (δ, 0),
L0(p, M) = 1 ≠ L0(0, M) = 0.
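As a concrete illustration (ours, not part of the paper), the following short numerical sketch evaluates L0(p, M) by restricting M to an orthonormal basis of the hyperplane p⊥ and taking the smallest eigenvalue, and approximates Lα(p, M) by brute-force sampling of the constraint set Γ(p, α); the function names and the sampling scheme are our own. Applied to Example 2.4, it returns L0((0, δ), M) = 0 and L0((δ, 0), M) = 1.

import numpy as np

def L0(p, M):
    # L0(p, M) = min { v M v^T : |v| = 1, v.p = 0 }; equals lambda_1(M) when p = 0.
    # (Sketch assumes n >= 2.)
    p = np.asarray(p, dtype=float)
    M = np.asarray(M, dtype=float)
    n = len(p)
    if np.allclose(p, 0.0):
        return np.linalg.eigvalsh(M)[0]
    # Orthonormal basis of p-perp: QR of [p, e_1, ..., e_n] gives an orthogonal Q
    # whose first column is p/|p|; the remaining columns span p-perp.
    Q = np.linalg.qr(np.column_stack([p, np.eye(n)]))[0]
    B = Q[:, 1:]
    return np.linalg.eigvalsh(B.T @ M @ B)[0]

def L_alpha(p, M, alpha, samples=100_000, seed=0):
    # Monte-Carlo approximation of min { v M v^T : |v| = 1, |v.p| <= alpha }.
    rng = np.random.default_rng(seed)
    p = np.asarray(p, dtype=float)
    M = np.asarray(M, dtype=float)
    V = rng.normal(size=(samples, len(p)))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    ok = np.abs(V @ p) <= alpha
    if not ok.any():
        return np.inf            # Gamma(p, alpha) empty, by the convention above
    W = V[ok]
    return float(np.min(np.einsum("ij,jk,ik->i", W, M, W)))

M = np.array([[0.0, 0.0], [0.0, 1.0]])
print(L0((0.0, 0.1), M))                   # 0.0, as in Example 2.4
print(L0((0.1, 0.0), M))                   # 1.0, while L0(0, M) = lambda_1(M) = 0
print(L_alpha((0.1, 0.0), M, alpha=0.05))  # approx 0.75: only directions with |v1| <= 0.5 are admissible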
Remark 2.5. Using the lemma we can give a simplified definition of what it means
to be a viscosity solution of L0(u) = 0. If u is locally bounded, and u^* − ϕ achieves
a zero maximum at x0, then u is a subsolution if

L0(Dϕ(x0), D²ϕ(x0)) ≥ 0, if Dϕ(x0) ≠ 0,
L0(p, D²ϕ(x0)) ≥ 0 for some |p| ≤ 1, if Dϕ(x0) = 0.

If u is locally bounded, and u_* − ϕ achieves a zero minimum at x0, then u is a
supersolution if L0(Dϕ(x0), D²ϕ(x0)) ≤ 0. The supersolution condition is simplified


because of the fact that L0 (p, M ) is lower semicontinuous on Rn × S(n). Similar


statements hold for Lα .
Now we begin by showing that a quasiconvex function must be a subsolution of
L0 (u) ≥ 0.
Theorem 2.6. Let Ω be convex. If u : Ω → R is quasiconvex, then u is a viscosity
subsolution of L0 (u) ≥ 0.
Proof. Without loss of generality we may assume u is upper semicontinuous since
otherwise we work with u∗ , which is still quasiconvex. Indeed,
u^*(λx + (1 − λ)y) = lim_{δ→0+} sup_{|z|≤δ} u(λ(x − z) + (1 − λ)(y − z)) ≤ u^*(x) ∨ u^*(y).

Suppose that u is quasiconvex but not a subsolution of L0 (u) ≥ 0. Then, there


is a smooth function ϕ and x ∈ Ω at which u − ϕ has a zero maximum, and
L0 (ϕ(x)) < 0. By definition, there exists v ∈ Γ(Dϕ(x)) with |v| = 1, v · Dϕ(x) = 0,
but vD2 ϕ(x)v T = −γ < 0. Note that this is true even if Dϕ(x) = 0. By Taylor’s
formula, for sufficiently small ρ, we have

u(x+ρv) ≤ ϕ(x+ρv) = ϕ(x) + ρ v·Dϕ(x) + (ρ²/2) vD²ϕ(x)v^T + o(ρ²) ≤ u(x) − γρ²/2 + o(ρ²)

and

u(x−ρv) ≤ ϕ(x−ρv) = ϕ(x) − ρ v·Dϕ(x) + (ρ²/2) vD²ϕ(x)v^T + o(ρ²) ≤ u(x) − γρ²/2 + o(ρ²).

Directly from the definition of quasiconvexity,

u(x) = u( (1/2)(x + ρv) + (1/2)(x − ρv) ) ≤ u(x + ρv) ∨ u(x − ρv) ≤ u(x) − γρ²/2 + o(ρ²).
Dividing by ρ2 and sending ρ → 0 gives a contradiction. 
Next we prove a partial converse, namely, that when L0 (u) > 0, u is quasiconvex.
Theorem 2.7. Let Ω be convex. If u : Ω → [−∞, ∞) is an upper semicontinuous
subsolution of L0 (u) > 0, then u is quasiconvex.
Proof. Suppose that u is not quasiconvex, i.e., there exist y, z ∈ Ω and w = (1 −
α)y + αz for some α ∈ (0, 1) such that u(y) ≤ u(z) < u(w). Without loss of
generality suppose that y = (y1 , 0, . . . , 0), z = (z1 , 0, . . . , 0), with y1 < z1 , and
thus w = (w1 , 0, . . . , 0). Let S ⊂ Rn−1 be a compact neighborhood of 0 such that
T := [y1 , z1 ] × S ⊂ Ω and u(y1 , s) < u(w), u(z1 , s) < u(w) for all s ∈ S.
Let ϕ : Ω → R be given, for x = (x1, x2, . . . , xn), by

ϕ(x) = u(w) + (K/2)(x2² + x3² + · · · + xn²),
where K is large enough to have ϕ(x) > u(x) for all x in [y1 , z1 ] × ∂S. Then
ϕ(x) > u(x) for all x ∈ ∂T , while ϕ(w) = u(w). Consequently, u − ϕ attains its
maximum over T at some point ξ interior to T and L0 (Dϕ(ξ), D2 ϕ(ξ)) = 0. This
contradicts u being a subsolution. 
To weaken the condition L0 (u) > 0 to L0 (u) ≥ 0 we need an additional assump-
tion.
Theorem 2.8. Suppose that u : Ω → [−∞, ∞) is an upper semicontinuous subso-
lution of L0 (u) ≥ 0 and u does not attain a local maximum. Then u is quasiconvex.


Proof. Suppose that u is not quasiconvex, i.e., there exist y, z ∈ Ω such that the
maximum of u((1 − α)y + αz) over α ∈ [0, 1] is attained at α ∈ (0, 1). Set w =
(1 − α)y + αz. Upper semicontinuity of u implies that there exist neighborhoods
of y and z such that u(w) > u(x) for all x in these neighborhoods. Subject to an
affine change of variables, we can assume that y = (−1, 0, . . . , 0), z = (1, 0, . . . , 0),
w = (w1 , 0, . . . , 0) for some w1 ∈ (−1, 1), that the set X = [−1, 1] × [−2, 2] ×
· · · × [−2, 2] is contained in Ω, and, furthermore, u(w) > u (−1, [−1, 1], . . . , [−1, 1]),
u(w) > u (1, [−1, 1], . . . , [−1, 1]).
For even m ∈ N, let

ϕm(x) = (1/m)(2 − x1²)(x2^m + x3^m + · · · + xn^m).
We will show that, for some large enough m, the maximum of u − ϕm is attained at
an interior point ξ of the domain of u, that L0(Dϕm(ξ), D²ϕm(ξ)) < 0, and thus
that a contradiction with u being a subsolution is obtained.
The epigraphical limit ϕ∞ of ϕm, as m → ∞, is given by

ϕ∞(x) = 0 if xk ∈ [−1, 1] for k = 2, 3, . . . , n, and ϕ∞(x) = ∞ otherwise.
Consequently, the epigraphical limit of ϕm + δX − u is ϕ∞ + δX − u. Here, δX is the
indicator of the set X: δX(x) = 0 if x ∈ X, δX(x) = ∞ if x ∉ X. The maximum of
u − (ϕ∞ + δX), equivalently, the maximum of u over [−1, 1] × [−1, 1] × · · · × [−1, 1],
is attained at some point t = (t1, t2, . . . , tn) where t1 ∈ (−1, 1), and it is not the
case that t2 = t3 = · · · = tn = 0. In fact, |ti| = 1 for at least one i ∈ {2, 3, . . . , n},
because otherwise u would have a local maximum. Without loss of generality,
suppose that t2 ≠ 0. By [19, Theorem 7.33], the maximum of u − (ϕm + δX) is
attained at some ξ = (ξ1, ξ2, . . . , ξn) with ξ1 ∈ (−1, 1), ξ2 ≠ 0, and ξk ∈ (−2, 2) for
k = 2, 3, . . . , n. In particular, the maximum is attained at an interior point of X.
We now argue that L0(Dϕm(ξ), D²ϕm(ξ)) < 0. We have

Dϕm(ξ) = ( −2ξ1(ξ2^m + · · · + ξn^m), (2 − ξ1²)mξ2^{m−1}, . . . , (2 − ξ1²)mξn^{m−1} ),

D²ϕm(ξ) =
⎛ −2(ξ2^m + · · · + ξn^m)   −2ξ1 mξ2^{m−1}              · · · ⎞
⎜ −2ξ1 mξ2^{m−1}           (2 − ξ1²)m(m − 1)ξ2^{m−2}    · · · ⎟
⎝        ⋮                        ⋮                      ⋱   ⎠.

The equations v · Dϕm(ξ) = 0, |v| = 1 can be achieved with v = (v1, v2, 0, . . . , 0),
v1 ≠ 0, and

v2 = [ 2ξ1(ξ2^m + · · · + ξn^m) / ((2 − ξ1²)mξ2^{m−1}) ] v1.
For such v,

v · D²ϕm(ξ) v^T = −2v1²(ξ2^m + · · · + ξn^m) [ 1 + 4ξ1²/(2 − ξ1²) + 2(m − 1)ξ1²(ξ2^m + · · · + ξn^m)/(m(2 − ξ1²)ξ2^{m−2}) ].

This quantity is negative when v1 ≠ 0, ξ1 ∈ (−1, 1), ξ2 ≠ 0, which is the case here
(recall that m is large and even). Hence L0(Dϕm(ξ), D²ϕm(ξ)) < 0. □

The assumption about the lack of maxima of u cannot be weakened to exclude


only the global maxima as the following example shows.


Example 2.9. Let u, defined on R, be the odd function given by u(x) = (x − 1)⁴ − 1
for x ≥ 0. It satisfies L0(u) ≥ 0; in fact L0(u(x)) = 0 for x = 1 and x = −1, and
otherwise L0(u(x)) = ∞, since when |x| ≠ 1 we have Du(x) ≠ 0. This function does not have
a global maximum and is not quasiconvex. (It does have a strict local maximum,
though, at x = −1.)

3. A comparison principle for L0


In the previous section we saw that if u is a strict subsolution, i.e. a subsolution of L0(u) > 0,
then u must be quasiconvex. In addition, the Barles and Da Lio theorem
(Theorem 1.2), together with the fact that L0(u) = Δu − Δ∞u in two dimensions, implies that
L0(u) − 1 = 0 has at least a weak comparison principle. This leads one to suspect that there
might be a comparison principle for equations of the form L0(u) = g(x) when
g(x) ≥ C > 0. Indeed this is the case, and we will prove it in this section. First we
give a simple example showing that one cannot expect uniqueness for L0(u) = 0.
Example 3.1. Let Ω = {x = (x1, x2) | |x| < 1} ⊂ R². Consider the function
u(x1, x2) = −x2⁴. It is straightforward to verify that L0(u(x1, x2)) = 0 on Ω with
its own boundary values. Notice that it is immediate that u is not quasiconvex; in
fact, it is quasiconcave.
Now consider the function w(x1, x2) = −(1 − x1²)². If (x1, x2) ∈ ∂Ω, then 1 − x1² = x2²,
and so w(x1, x2) = u(x1, x2) on ∂Ω. One can easily verify that L0(w) = 0, and
hence the problem L0(u) = 0 does not have a unique solution.
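(A quick formal check, ours and not spelled out in the original: for u = −x2⁴ one has Du = (0, −4x2³); if x2 ≠ 0 the only admissible directions are v = (±1, 0) and v · D²u v^T = u_{x1x1} = 0, while at x2 = 0 both Du and D²u vanish, so L0(u) = λ1(0) = 0. Similarly, for w = −(1 − x1²)² one has Dw = (4x1(1 − x1²), 0); if x1 ≠ 0 the admissible directions are v = (0, ±1) and v · D²w v^T = w_{x2x2} = 0, while at x1 = 0 we get Dw = 0 and D²w = diag(4, 0), whose first eigenvalue is 0.)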
Notice also that w(x1 , x2 ) is quasiconvex. Indeed, we check that w(y1 , y2 ) ≤
w(x1 , x2 ) implies that Dw(x1 , x2 ) · (y1 − x1 , y2 − x2 ) ≤ 0. This is equivalent to
y12 ≤ x21 =⇒ 4x1 (1 − x21 )(y1 − x1 ) ≤ 0,
which is a true statement for all (x1 , x2 ), (y1 , y2 ) ∈ Ω. Despite this nonuniqueness
example the question arises about whether or not a quasiconvex solution is unique.
In Section 6 we will prove that while L0 (u) = 0 does not have a unique solution, it
does have a unique quasiconvex solution.
The next theorem will be used in our comparison theorem following, but it is of
interest on its own. This theorem does not require that Ω be convex.
Theorem 3.2. Let Ω ⊂ R^n be a bounded domain. Let g, γ ∈ C(Ω) be such that g ≥ Cg
and γ ≥ Cγ on Ω, for some constants Cg > 0, Cγ > 0. Suppose that
(a) u ∈ C(Ω), with u ≥ Cu > 0, is a semiconvex subsolution of L0(u) − γ u ≥ g on Ω,
(b) v ∈ C(Ω), with v ≥ Cv > 0, is a semiconcave supersolution of L0(v) − γ v ≤ g on Ω,
where Cu, Cv are constants. Then u ≤ v on ∂Ω =⇒ u ≤ v on Ω, i.e.,

min_{x∈∂Ω} [v(x) − u(x)] ≥ 0 =⇒ min_{x∈Ω} [v(x) − u(x)] ≥ 0.

Proof. The proof will proceed by contradiction. Suppose that

(3.1) min_{x∈∂Ω} [v(x) − u(x)] ≥ 0 but min_{x∈Ω} [v(x) − u(x)] = Cm < 0.

Define the set


M = {x ∈ Ω | (v − u)(x) = Cm }.
It is well known ([9], [10]) that since v − u is semiconcave, both v and u are
differentiable at each point of M where the minimum of v − u is achieved.


Three cases will be considered. The first case is:
(1) There exists x0 ∈ M such that Du(x0) ≠ 0.
If Du(x) = 0 for all x ∈ M, and assuming without loss of generality that M is
connected, let
G = {x ∈ Ω : u(x) < u(x̄)}
for some x̄ ∈ M, and observe that G is independent of the choice of x̄ ∈ M. With
these definitions two more cases arise:
(2) Du(x) = 0 for all x ∈ M and M ∩ Ḡ ≠ ∅,
(3) Du(x) = 0 for all x ∈ M and M ∩ Ḡ = ∅.
Here Ḡ denotes the closure of G.
We proceed to consider Case 1.
Let Ku > 0 be such that D2 u ≥ −Ku I (semiconvexity condition) and Kv > 0
be such that D2 v ≤ Kv I (semiconcavity condition).
Case 1. There exists x0 ∈ M with Du(x0) ≠ 0. Pick δ > 0 and let v^δ(x) =
v(x) + δ|x − x0|². Then x0 is the unique minimum of v^δ − u. From the semiconvexity
properties of u, v it is standard in viscosity theory (cf. Lemma 5.2 below) that there
exists εδ > 0 such that for each 0 < ε < εδ , there exists xε ∈ Ω satisfying
(E1) |Dv(xε ) − Du(xε )| < ε.
(E2) D2 v(xε ) and D2 u(xε ) both exist and D2 (v δ − u)(xε ) ≥ 0.
(E3) |Dv(xε ) − Dv(x0 )| < ε.
(E4) |xε − x0 | < ε.
Furthermore Dv(xε ) and Du(xε ) exist. These properties are referred to as dif-
ferentiability at the maximum points and partial continuity of the gradients in
Barles-Busca [4].
Choose zε satisfying |zε| = 1, zε · Dv(xε) = 0 and L0(v(xε)) = zε D²v(xε) zε^T.
Note that L0(v(xε)) − γ(xε)v(xε) ≤ g(xε). Since Du(x0) ≠ 0 we may assume, using
(E1)–(E4), that Du(xε) ≠ 0 and Dv(xε) ≠ 0. Choose a scalar λ and a vector w ∈ R^n
satisfying
w · Dv(xε) = 0 and Du(xε) = λDv(xε) + w.
Then

λ = Du(xε) · Dv(xε) / |Dv(xε)|²

and (E1) implies

|Dv(xε) · (Dv(xε) − Du(xε))| < ε|Dv(xε)| =⇒ |1 − λ| < ε/|Dv(xε)|.
By writing w = Du(xε) − Dv(xε) + (1 − λ)Dv(xε) we see that |w| < 2ε. Now choose
μ and p ∈ R^n such that
p · Du(xε) = 0 and zε = μDu(xε) + p.
Then

μ = zε · Du(xε) / |Du(xε)|² = zε · (λDv(xε) + w) / |Du(xε)|² =⇒ |μ| < 2ε/|Du(xε)|².

Since |zε| = 1, we have 1 = |p|² + μ²|Du(xε)|², which implies that |p|² ≥ 1 − 4ε²/|Du(xε)|².

Semiconvexity of u implies

zε D²u(xε) zε^T ≥ (1 − 4ε²/|Du(xε)|²) L0(u(xε)) − (4ε²/|Du(xε)|²) Ku.


This, combined with assumptions (a) and (b), yields

0 ≤ zε D²(v^δ − u)(xε) zε^T
  = zε D²v(xε) zε^T − zε D²u(xε) zε^T + 2δ
  ≤ (1 − 4ε²/|Du(xε)|²)(L0(v(xε)) − L0(u(xε))) + (4ε²/|Du(xε)|²)(Kv + Ku) + 2δ
  ≤ (1 − 4ε²/|Du(xε)|²) γ(xε)(v(xε) − u(xε)) + (4ε²/|Du(xε)|²)(Kv + Ku) + 2δ.

By first sending ε → 0 and then δ → 0, we get 0 ≤ γ(x0)Cm < 0. This shows that
the first case leads to a contradiction.
Case 2. |Du(x)| = 0 for all x ∈ M and M ∩ Ḡ ≠ ∅. Let x0 ∈ M ∩ Ḡ, so that v(x0) −
u(x0) = Cm. Choose a ball of radius r0 > 0 centered at x0 with B(x0, r0) ⊂ Ω. Set

K0 = {x ∈ R^n : ∃ δ > 0, x0 + δx ∈ G ∩ B(x0, r0)},
uε(x) = (u(x0 + εx) − u(x0))/ε²,

for ε > 0. Using the semiconvexity of u we see that

(3.2) −(1/2)Ku|x|² ≤ uε(x) ≤ (1/2)Ku|x|², D²uε ≥ −Ku I,

and
L0(uε(x)) = L0(u(x0 + εx)) ≥ γ(x0 + εx)u(x0 + εx) + g(x0 + εx) ≥ Cg > 0.
It follows that the family {uε }ε>0 is uniformly bounded and equicontinuous on every
compact subset of Rn . Therefore, some sequence {uεi }∞ i=1 converges uniformly, on
compact subsets, to a continuous function u0 . In view of (3.2), it is clear that u0
must also satisfy
(3.3) −(1/2)Ku|x|² ≤ u0(x) ≤ (1/2)Ku|x|², D²u0 ≥ −Ku I, and L0(u0) ≥ γu0 + g ≥ Cg > 0.
In particular, since L0 (u0 ) > 0, u0 is quasiconvex by Theorem 2.7. In addition, u0
satisfies the properties
(3.4) u0 (x) < 0 =⇒ x ∈ K0 and x ∈ K0 =⇒ u0 (x) ≤ 0,
and hence K0 ⊊ R^n is an open, convex cone.
Let
F0 = {x : u0(x) ≤ 0}
and note that F0 is a closed convex set. In fact, F0 = K̄0. Indeed, note that
K0 ⊂ F0, and so K̄0 ⊂ F0. Now suppose F0 \ K̄0 ≠ ∅. This implies that int(F0 \ K̄0) ≠ ∅.
But then, since u0 = 0 on F0 \ K̄0, we could conclude that L0(u0) = 0 on
int(F0 \ K̄0). This contradicts (3.3). Similarly,
(3.5) G0 := {x : u0(x) < 0} = K0.
Indeed, if this fails, then convexity of G0 implies that F0 \ G0 ≠ ∅ and hence
int(F0 \ G0) ≠ ∅. As before, that would imply that L0(u0) = 0 on the interior of
F0 \ G0, which again contradicts (3.3).


Next, given x = (x1, . . . , xn) ∈ R^n we will write x = (x̃, x′), where x̃ = (x1, . . . , xn−1)
and x′ = xn. Using a rotation if necessary, we may assume that

K0 = {x = (x̃, x′) ∈ R^n : x′ > Ψ(x̃)},

where Ψ : R^{n−1} → R is a convex function, homogeneous of degree one. Thus there
is a point x0 = (x̃0, x0′) ∈ R^n and a smooth function Φ : R^{n−1} → R such that

Φ(x̃) ≥ Ψ(x̃), x0′ = Φ(x̃0), |x̃0| = 1,
Φ(x̃) = Ψ(x̃) if and only if x̃ = x̃0,
(d/dt) Φ(t x̃0) |_{t=1} = DΨ(x̃0) · x̃0 = x0′,
(d²/dt²) Φ(t x̃0) |_{t=1} = 0.

Indeed, it is enough to pick a point where DΨ exists and then rescale. Let Λ :
R^n → R denote the signed distance function from Γ = {x : x′ = Φ(x̃)}, chosen so
that Λ(x) < 0 if and only if x′ > Φ(x̃), which implies x ∈ K0. The fact that Φ is smooth
implies that Λ is smooth near the graph of Φ.
A computation now shows that

(d/dt) Λ(t x0) |_{t=1} = 0 and (d²/dt²) Λ(t x0) |_{t=1} = 0.

That is,

(3.6) DΛ(x0) · x0 = 0 and x0 D²Λ(x0) x0^T = 0.
Now let E > ess sup_{x∈B(x0,R)} |Du0(x)|, where R > 0 is fixed. Construct a family
{τε}ε>0 of smooth and convex functions τε : R → R satisfying τε(0) = 0 and

τε(s) = E s if s > ε, and τε(s) = ε s if s < −ε.

We will use the test functions ϕε := τε ∘ Λ. Note that u0 − ϕ0 has an isolated local
maximum at x0. Consequently, for some ε0 > 0, whenever 0 < ε < ε0 we know that
u0 − ϕε achieves a local maximum at xε and xε → x0 as ε → 0.
We have

Dϕε(xε) = τε′(Λ(xε))DΛ(xε) and
D²ϕε(xε) = τε″(Λ(xε))DΛ(xε) ⊗ DΛ(xε) + τε′(Λ(xε))D²Λ(xε).
Let pε denote the unique vector such that pε · DΛ(xε ) = 0 and pε − x0 = λDΛ(xε ),
and observe that pε → x0 as ε → 0. Now we put all the pieces together in a
computation:
L0(u0(xε)) ≤ L0(ϕε(xε))
  ≤ (pε/|pε|) D²ϕε(xε) (pε/|pε|)^T
  = (τε′(Λ(xε))/|pε|²) pε D²Λ(xε) pε^T
  ≤ (E/|pε|²) pε D²Λ(xε) pε^T.


Since L0(u0(xε)) ≥ Cg > 0, we have reached the inequality

0 < Cg ≤ (E/|pε|²) pε D²Λ(xε) pε^T.

Send ε → 0 to see that 0 < Cg ≤ E x0 D²Λ(x0) x0^T = 0. This contradiction allows us
to conclude that the case M ∩ Ḡ ≠ ∅ cannot occur.
Case 3. Du(x) = 0 for all x ∈ M and M ∩ Ḡ = ∅. Without loss of generality we
may assume there is a μ > 0 so that v satisfies the property
(3.7) Given x, there exists yx such that |x − yx| ≤ μ and v(z) ≤ v(x), ∀ |z − yx| ≤ μ.
Indeed, if v does not satisfy this property we may replace v by vμ(x) =
min_{|y|≤μ} v(x + y) and work with vμ instead.
Let x0 ∈ M and y0 = y_{x0} be such that (3.7) holds. Thus |x0 − y0| ≤ μ, and v(z) ≤
v(x0) for any |z − y0| ≤ μ. Since x0 ∈ M we know that v(z) − u(z) ≥ v(x0) − u(x0),
i.e., v(z) − v(x0) + u(x0) ≥ u(z). Since M ∩ Ḡ = ∅, this implies that there is a δ > 0
such that u(z) ≥ u(x0) for any z such that |z − x0| ≤ δ. The last two statements
imply that u(z) = u(x0) if |z − x0| ≤ δ and |z − y0| ≤ μ. But then u is constant
in B(x0, δ) ∩ B(y0, μ) ≠ ∅, and hence L0(u) = 0 in that set. This contradicts
assumption (a). □
We may now present the main comparison theorem of this section.
Theorem 3.3. Let Ω ⊂ Rn be a bounded domain, h ∈ C(Ω) with ωh (·) as a uniform
modulus of continuity. Let u : Ω → R be an upper semicontinuous subsolution of
L0 (u(x))−h(x) ≥ 0, x ∈ Ω, and v : Ω → R be a lower semicontinuous supersolution
of L0 (v(x)) − h(x) ≤ 0, x ∈ Ω. Assume that there is a constant Ch > 0 so that
h(x) ≥ Ch for all x ∈ Ω. Then inf x∈∂Ω (v(x) − u(x)) ≥ 0 implies that inf x∈Ω (v(x) −
u(x)) ≥ 0. Consequently, L0 (u) = h > 0 has at most one solution.
Remark 3.4. Observe that in view of the fact that L0 is the mean curvature operator
in R2 , Theorem 3.3 extends the Barles and Da Lio result of Theorem 1.2 in at least
two ways. First, the inhomogeneous term h allows spatial dependence. Second, we
do not require that the domain be star shaped.
Proof. We assume that inf x∈∂Ω (v(x) − u(x)) ≥ 0 but that inf x∈Ω (v(x) − u(x)) :=
Cm < 0. Because of the fact that L0 is translation invariant and h is bounded on
Ω, we may assume without loss of generality that u ≥ −k, v ≥ −k for some k > 0.
Let ε > 0 and let uε(x) denote the supremal convolution of u, i.e.,

uε(x) = sup_{y∈Ω} ( u(y) − |x − y|²/(2ε) ).

Also, let vε denote the infimal convolution of v, given by

vε(x) = inf_{y∈Ω} ( v(y) + |x − y|²/(2ε) ).
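For intuition, here is a small numerical sketch (ours, not from the paper) of the supremal and infimal convolutions on a one-dimensional grid; it is a brute-force O(N²) computation, adequate only for illustration.

import numpy as np

def sup_convolution(xs, u, eps):
    # u_eps(x) = sup_y ( u(y) - |x - y|^2 / (2 eps) ), evaluated pointwise on the grid xs.
    D = (xs[:, None] - xs[None, :])**2 / (2.0 * eps)
    return np.max(u[None, :] - D, axis=1)

def inf_convolution(xs, v, eps):
    # v_eps(x) = inf_y ( v(y) + |x - y|^2 / (2 eps) ).
    D = (xs[:, None] - xs[None, :])**2 / (2.0 * eps)
    return np.min(v[None, :] + D, axis=1)

xs = np.linspace(-1.0, 1.0, 401)
u = np.abs(xs)                               # a Lipschitz but nonsmooth example
print(sup_convolution(xs, u, 0.05)[200])     # value at x = 0: about 0.025, slightly above u(0) = 0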
It is well known that uε is semiconvex and vε is semiconcave for each ε > 0. A
calculation shows that uε is a subsolution of
L0 [uε ] ≥ h − ωh (ε), x ∈ Ωε = {x ∈ Ω | dist(x, ∂Ω) > C0 ε1/2 },
and vε is a supersolution of
L0 [vε ] ≤ h + ωh (ε), x ∈ Ωε ,


where C0 is a constant depending only on the bounds on u and v. In addition, Duε
and Dvε are in L∞(Ωε), and

inf_{x∈∂Ωε} (vε − uε) ≥ 0, but inf_{x∈Ωε} (vε − uε) ≤ (1/2)Cm < 0.

The next step involves applying a modified Kruzhkov transform to uε and vε. Set

uε = ln(ũ + γ) and vε = ln(ṽ + γ), i.e., ũ = e^{uε} − γ and ṽ = e^{vε} − γ,

for some γ > k. A calculation shows that

ũ is a subsolution of L0(ũ) − (h − ωh(ε))(ũ + γ) ≥ 0, x ∈ Ωε,

and

ṽ is a supersolution of L0(ṽ) − (h + ωh(ε))(ṽ + γ) ≤ 0, x ∈ Ωε.

Furthermore, inf_{x∈∂Ωε} (ṽ(x) − ũ(x)) ≥ 0 and inf_{x∈Ωε} (ṽ(x) − ũ(x)) ≤ C̃m < 0 for
some constant C̃m < 0. In addition, we know that ũ ≥ e^{−k} − γ, ṽ ≥ e^{−k} − γ, ũ is
semiconvex, and ṽ is semiconcave.
We have

L0(ũ) − h(x)ũ(x) ≥ (h − ωh(ε))(ũ + γ) − hũ = γh − ωh(ε)(ũ + γ)

and

L0(ṽ) − h(x)ṽ(x) ≤ (h + ωh(ε))(ṽ + γ) − hṽ = γh + ωh(ε)(ṽ + γ).

Now set v̄ = ṽ − (1/2)C̃m. We see that

(3.8) L0(v̄) − hv̄ ≤ γh + ωh(ε)(ṽ + γ) + (1/2)C̃m h.

For ε > 0 sufficiently small, since γ > k > 0 and h ≥ Ch > 0, we have

ωh(ε)((ṽ + γ) + (ũ + γ)) ≤ −(1/2)C̃m h,

and from (3.8) we get

L0(v̄) − hv̄ ≤ γh − ωh(ε)(ũ + γ).

Also inf_{x∈∂Ωε} (v̄(x) − ũ(x)) ≥ 0, while inf_{x∈Ωε} (v̄(x) − ũ(x)) ≤ (1/2)C̃m < 0. Now we use
Theorem 3.2 and identify u with ũ, v with v̄, γ with h, and g = γh − ωh(ε)(ũ + γ),
to conclude that inf_{x∈Ωε} (v̄ − ũ) ≥ 0, and that is a contradiction. □

4. Robustly quasiconvex functions


Recall that for each fixed α > 0,
Lα (u) = Lα (Du, D2 u) = min{y · D2 u y T | |y| = 1, |y · Du| ≤ α}.
Quasiconvex functions correspond to a subsolution of L0 (u) ≥ 0. Considering
Lα (u) ≥ 0 leads to a smaller class of functions when α > 0.
Definition 4.1. Let Ω ⊂ Rn be convex, α > 0. A function u : Ω → R is robustly
quasiconvex, with parameter α > 0, α−quasiconvex in abbreviated form, if, for
every ξ ∈ Rn with |ξ| ≤ α, the function x → u(x) + ξ · x is quasiconvex. The class of
robustly quasiconvex functions with parameter α is denoted by Rα (Ω) or just Rα
when the domain is fixed. The class of functions which are robustly quasiconvex
for some α is R(Ω) = ⋃_{α>0} Rα.


Remark 4.2. If Ω is convex and u : Ω → R is in Rα , then u is quasiconvex but the


reverse is false. Indeed, u : R → R given by u(x) = arctan x is quasiconvex, but
not robustly for any α > 0. However, if Ω ⊂ R is bounded, then u(x) = arctan x
is robustly quasiconvex on Ω. Every convex function is robustly quasiconvex (with
any α), and if a function is robustly quasiconvex with parameter α, for arbitrarily
large α, then the function is convex. On the other hand, the function u : R → R
given by u(x) = 2x if x < 0 and u(x) = x if x ≥ 0 is robustly quasiconvex, with
parameter α = 1, but is not convex.

Remark 4.3. It is established in [17] that an equivalent definition is that u is robustly
quasiconvex (stable-quasiconvex in the terminology of [17]) if there is an α > 0 such that for
any |δ| < α, x0, x1 ∈ Ω, and 0 < λ < 1,

(u(x1) − u(x0))/|x1 − x0| ≥ δ =⇒ (u(x1) − u(xλ))/|xλ − x0| ≥ δ,

where xλ = (1 − λ)x0 + λ x1.

That robustly quasiconvex (with parameter α > 0) functions satisfy Lα (u) ≥ 0


follows from what was established in Theorem 2.6. More precisely:

Theorem 4.4. Let α > 0. If u : Ω → [−∞, ∞) is upper semicontinuous and


robustly quasiconvex with parameter α > 0, then u is a subsolution of Lα (u) ≥ 0.

Proof. If u is robustly quasiconvex with parameter α > 0, then Theorem 2.6 implies
that, for every ξ ∈ Rn with |ξ| ≤ α, the function uξ given by uξ (x) = u(x) + ξ · x
is a subsolution of L0 (u) ≥ 0. Let x0 ∈ arg max(u − ϕ) for a smooth ϕ. Then, for
every ξ ∈ Rn with |ξ| ≤ α, x0 ∈ arg max(uξ − ϕξ ), where ϕξ (x) = ϕ(x) + ξ · x.
Hence, for every ξ ∈ Rn with |ξ| ≤ α, Dϕξ (x0 ) = Dϕ(x0 ) + ξ and

(4.1) min{vD2 ϕξ (x0 ) v T : v ∈ Rn , |v| = 1, |v · Dϕ(x0 ) + v · ξ| = 0} ≥ 0.

Let v ∈ Rn be such that |v| = 1 and |v · Dϕ(x0 )| ≤ α. (If Dϕ(x0 ) = 0, any unit
vector v orthogonal to ξ works.) For any such v there exists ξ ∈ Rn with |ξ| ≤ α
such that v · Dϕ(x0 ) + v · ξ = 0. Since (4.1) holds for every ξ ∈ Rn with |ξ| ≤ α we
have

(4.2) Lα (u) = min{vD2 ϕξ (x0 ) v T : v ∈ Rn , |v| = 1, |v · Dϕ(x0 )| ≤ α}


≥ min{vD2 ϕξ (x0 ) v T : v ∈ Rn , |v| = 1, |v · Dϕ(x0 ) + v · ξ| = 0} ≥ 0.

Consequently, u is a subsolution of Lα [u] ≥ 0. 

Showing that Lα (u) ≥ 0 implies robust α−quasiconvexity is similar to what


was done in Theorem 2.8, in proving that L0 (u) and some extra assumptions give
quasiconvexity.

Theorem 4.5. If u : Ω → [−∞, ∞) is an upper semicontinuous subsolution of


Lα (u) ≥ 0, then u is α-robustly quasiconvex.

Proof. Suppose u is not robustly quasiconvex with parameter α′, for some α′ < α.
Then there exists ξ ∈ R^n with |ξ| ≤ α′, y, z ∈ Ω and w = (1 − λ)y + λz for
some λ ∈ (0, 1) such that uξ(y) ≤ uξ(z) < uξ(w), where uξ(x) = u(x) + ξ · x.


Without loss of generality suppose that y = (y1 , 0, . . . , 0), z = (z1 , 0, . . . , 0), with
y1 < z1 , and thus w = (w1 , 0, . . . , 0). Let U ⊂ Rn−1 be a compact neighborhood
of 0 such that T := [y1 , z1 ] × U ⊂ Ω. For i = 1, 2, . . . let ϕi : Ω → R be given, for
x = (x1 , x2 , . . . , xn ), by
ϕi(x) = −(1/(2i)) x1² + (i/2)(x2² + x3² + · · · + xn²).
Pick any convergent sequence {w^i}∞_{i=1}, where w^i ∈ arg max_T (uξ − ϕi), and let
w̄ = lim_{i→∞} w^i. The epigraphical (and pointwise) limit of −(uξ − ϕi) restricted
to T, as i → ∞, is given by x → −uξ(x) when x2 = x3 = · · · = xn = 0 and
x → ∞ otherwise. By [19, Theorem 7.33], w̄ is a minimizer of this function, and in
particular, w̄ = (w̄1, 0, . . . , 0) and w̄1 ∈ (y1, z1). Consequently, for all large enough
i, w^i ∈ int T. At the same time, w^i ∈ arg max_T (u − ϕiξ), where ϕiξ(x) = ϕi(x) − ξ · x.
For large enough i, considering v = (1, 0, . . . , 0) yields |v| = 1, |v · Dϕiξ(w^i)| < α,
and v · D²ϕiξ(w^i)v^T < 0. This contradicts u being a subsolution of Lα[u] ≥ 0.
Consequently, u is α′-quasiconvex for all α′ < α. Since x → u(x) + ξ · x, with
|ξ| = α, is a locally uniform limit of x → u(x) + (1 − 1/i)ξ · x as i → ∞, and
|(1 − 1/i)ξ| < α, u is α-quasiconvex. □

Our next goal is to show that a quasiconvex function may be approximated by


robustly quasiconvex functions. In other words, we will eventually show that a
quasiconvex function can be expressed as the supremum of robustly quasiconvex
functions. This is accomplished starting with the next example, which will be used
in the construction of an approximating robustly quasiconvex function.
Example 4.6. Let f : R → R be given by f(x) = −∞ if x ≤ 0 and f(x) = −a/x
if x > 0, where a > 0. Then f is quasiconvex and is robustly quasiconvex when
restricted to a bounded subset of R. Let ψ : R^n → R be given by

ψ(x) = ( (x1 + r)² + Σ_{i=2}^n xi² )^{1/2} − r, where r ≥ 0.

Consider the function w : R^n → R given by w(x) = f(ψ(x)). The graph of w is
obtained by rotating the graph of f about the point (−r, 0, 0, . . . , 0). Let R > 0.
It turns out that w is robustly quasiconvex, with parameter α = a/√(R³(3R + 2r)), on
the ball of radius R.
To show this, we consider the operator Lα(w) at points x = (x1, 0, 0, . . . , 0), with
0 < x1 < R. Theorem 4.5 concludes the argument. A calculus exercise yields

Dw(x) = ( a/x1², 0, 0, . . . , 0 ),
D²w(x) = (a/x1²) diag( −2/x1, 1/(x1 + r), 1/(x1 + r), . . . , 1/(x1 + r) ),

where diag means a diagonal matrix, with diagonal entries as listed. Then, for
v = (v1, v2, . . . , vn), the inequality |v · Dw(x)| ≤ α reduces to |v1| ≤ αx1²/a. This
bound and the constraint |v| = 1 yield

v · D²w(x)v^T = (a/x1²) [ −(2/x1)v1² + (1/(x1 + r)) Σ_{i=2}^n vi² ]
             = (a/x1²) [ 1/(x1 + r) − (2/x1 + 1/(x1 + r)) v1² ]
             ≥ (a/x1²) [ 1/(x1 + r) − (α²x1⁴/a²)(2/x1 + 1/(x1 + r)) ].

Thus, to have v · D²w(x)v^T ≥ 0, it is sufficient that

1/(x1 + r) ≥ (α²x1⁴/a²)(2/x1 + 1/(x1 + r)),

which, in turn, holds because x1 < R whenever α² ≤ a²/(R³(3R + 2r)).
Lemma 4.7. Let Ω ⊂ Rn be convex and open. Let u : Ω → [−∞, ∞] be lower
semicontinuous and quasiconvex. Then u is the supremum of all functions φ : Ω →
[−∞, ∞) bounded above by u and of the following form: φ(x) = −∞ if x · v ≤ z · v
and φ(x) = α if x · v > z · v, for some z ∈ Ω, v ∈ Rn , α ∈ R.
Proof. Pick x̄ ∈ Ω such that u(x̄) > −∞. Suppose first that x̄ is a local minimum
of u, in the sense that allows for u(x̄) = ∞: a sufficiently small neighborhood of x̄
does not contain points x with u(x) < u(x̄). The convex set S = {x ∈ Ω | u(x) <
u(x̄)} is disjoint from a sufficiently small neighborhood of x̄, and hence there exists
v ∈ R^n such that sup_{s∈S} v · s < v · x̄; see [20, Theorem 11.4]. One can then pick
z ∈ Ω such that sup_{s∈S} v · s < v · z < v · x̄. If u(x̄) < ∞, consider φ given by
φ(x) = −∞ if v · x ≤ v · z and φ(x) = u(x̄) if v · x > v · z. Then u ≥ φ and
u(x̄) = φ(x̄). If u(x̄) = ∞, consider a sequence of functions φi, i = 1, 2, . . . ,
given by φi(x) = −∞ if v · x ≤ v · z and φi(x) = i if v · x > v · z. Then u ≥ φi for all
i = 1, 2, . . . and u(x̄) = sup φi(x̄). Now suppose that x̄ is not a local minimum of
u. Then, there exists a sequence of points xi ∈ Ω, xi → x̄, with u(xi) ↑ u(x̄). This
last property relies on lower semicontinuity of u. For every i = 1, 2, . . . , the convex
set Si = {x ∈ Ω | u(x) ≤ u(xi)} is disjoint from a sufficiently small neighborhood
of x̄, and hence there exists vi ∈ R^n such that sup_{s∈Si} vi · s < vi · x̄. Now, consider
a sequence of functions φi, i = 1, 2, . . . , given by φi(x) = −∞ if vi · x ≤ vi · xi
and φi(x) = u(xi) if vi · x > vi · xi. In particular, φi(x̄) = u(xi). Then u ≥ φi for
all i = 1, 2, . . . and u(x̄) = sup φi(x̄). □
Lemma 4.8. Let Ω ⊂ Rn be bounded, convex, and open. Let u : Ω → [−∞, ∞] be
a lower semicontinuous and quasiconvex function. Then u is the supremum of all
robustly quasiconvex lower semicontinuous functions on Ω which are bounded above
by u.
Proof. The function u is the supremum of functions φ, as in Lemma 4.7, that are
bounded above by u. Any such function φ, when restricted to a bounded set, is the
supremum of functions w, as in Example 4.6. This requires considering arbitrarily
small a and arbitrarily large r in the construction of w. Consequently, u is the
supremum of such functions w, and Example 4.6 showed that they are robustly
quasiconvex. 
Remark 4.9. Lemma 4.8 will be used to prove in Theorem 5.5 that a quasiconvex
solution of L0 (u) = 0 is unique.


5. Comparison principles for Lα


The following theorem gives us a comparison principle for the operator Lα. This
theorem does not require that Ω be convex. Furthermore, in contrast to the equation
L0(u) = g, for the equation Lα(u) = g with α > 0 the theorem only assumes g ≥ 0, and
not that g > 0.
Theorem 5.1. Let Ω ⊂ Rn be an open set and g : Ω → R denote a nonnegative
continuous function. Let u : Ω → R be a bounded upper semicontinuous subsolution of
Lα[u] ≥ g(x) and v : Ω → R a bounded lower semicontinuous supersolution of
Lα[v] ≤ g(x). Suppose also that u ≤ v on ∂Ω. Then u ≤ v in Ω.
Proof. Let R = max_{x∈Ω} |x| and pick any δ > 0 such that 2δR < α. Define
uδ (x) = u(x) + δ(|x|2 − R2 ).
Since |x|2 − R2 ≤ 0 we know that uδ (x) ≤ u(x). Now we set β = α − 2δR > 0 and
calculate
Lβ [uδ ] = min{y(D2 u + 2δIn×n )y T | |y| = 1, |y · Duδ | ≤ β}
≥ min{yD2 uy T + 2δ | |y| = 1, |y · Du| − 2δ|y · x| ≤ β}
≥ min{yD2 uy T + 2δ | |y| = 1, |y · Du| − 2δR ≤ β = α − 2δR}
= Lα [u] + 2δ ≥ g(x) + 2δ.
We conclude that uδ is a subsolution of Lβ [uδ ] = Lα−2δR [uδ ] ≥ g(x) + 2δ.
Let ϑ > 1 and fix δ > 0 such that δ < α(1 − 1/ϑ)/(2R) < α/(2R). Consider the function
wϑ = ϑ uδ. One readily computes that this function is a subsolution of Lβϑ[wϑ] ≥
ϑ(g(x) + 2δ) ≥ g(x) + 2δ.


Let γ > 0 and let wϑ^γ(x) denote the supremal convolution of wϑ, i.e.,

wϑ^γ(x) = sup_{y∈Ω} ( wϑ(y) − |x − y|²/(2γ) ).

To make the notation easier we will denote wϑ^γ simply by wϑ. Also, let vγ denote
the infimal convolution of v, given by

vγ(x) = inf_{y∈Ω} ( v(y) + |x − y|²/(2γ) ).
Define g_γ(x) = min_{y∈Bγ(x)} g(y) and g^γ(x) = max_{y∈Bγ(x)} g(y). By a straightforward
calculation the sup convolution of wϑ is a semiconvex subsolution of

Lβϑ[wϑ] ≥ ϑ min_{y∈Bγ(x)} (g(y) + 2δ) = ϑ(g_γ(x) + 2δ), x ∈ Ωγ = {x ∈ Ω | dist(x, ∂Ω) > C0 γ^{1/2}},

and the inf convolution vγ, denoted simply as v, is a semiconcave supersolution of

Lα[v] ≤ g^γ(x), x ∈ Ωγ.
We know that wϑ is semiconvex, v is semiconcave and that wϑ − v is semiconvex.
Now we need the following lemma.
Lemma 5.2. Let W be semiconvex and V be semiconcave. Suppose that x0 ∈ Ω is
a maximum point of W − V such that

Δ := W(x0) − V(x0) − max_{y∈∂Ω} (W(y) − V(y)) > 0.

Then there is a sequence of points xk → x0, and (pk, Ak) ∈ D^{2+}W(xk), and
(p̃k, Ãk) ∈ D^{2−}V(xk), and a constant K > 0, which may depend on the semiconvexity
constant of W − V, so that
• |pk − p̃k| → 0 as k → ∞,
• Ak − Ãk ≤ −(K/k) I_{n×n},
where

D^{2+}W(x) = {(p, A) ∈ R^n × S(n) | lim sup_{y→0, x+y∈Ω} [W(x + y) − W(x) − p · y − (1/2) y A y^T]/|y|² ≤ 0},
D^{2−}V(x) = {(p, A) ∈ R^n × S(n) | lim inf_{y→0, x+y∈Ω} [V(x + y) − V(x) − p · y − (1/2) y A y^T]/|y|² ≥ 0}.
First we complete the proof of the theorem and then return to the lemma.
For each fixed ϑ > 1, let x̄ ∈ int(Ω) be a point at which wϑ − v achieves a strict
positive maximum, so that wϑ(x̄) − v(x̄) > sup_{y∈∂Ω} (wϑ(y) − v(y)). Using Lemma 5.2
and the definition of viscosity solution, we have for each xk → x̄ and (pk, Ak) ∈
D^{2+}wϑ(xk),

Lβϑ[pk, Ak] = min{η Ak η^T | |η| = 1, |pk · η| ≤ βϑ} ≥ ϑ(g_γ(xk) + 2δ).

For (p̃k, Ãk) ∈ D^{2−}v(xk),

Lα[p̃k, Ãk] = min{η Ãk η^T | |η| = 1, |p̃k · η| ≤ α} ≤ g^γ(xk).

Let η0 ∈ arg min{η Ãk η^T | |η| = 1, |p̃k · η| ≤ α}. Then |η0| = 1, |p̃k · η0| ≤ α, and

g^γ(xk) ≥ η0 Ãk η0^T = min{η Ãk η^T | |η| = 1, |p̃k · η| ≤ α}.
Next,

|pk · η0| = |(pk − p̃k) · η0 + p̃k · η0| ≤ |pk − p̃k| + |p̃k · η0| ≤ α(ϑ − 1) − 2δRϑ + α = βϑ,

for all k sufficiently large (since |pk − p̃k| → 0 and δ < α(1 − 1/ϑ)/(2R)). Consequently, η0
is also in the constraint set for Lβϑ, and so

η0 Ak η0^T ≥ min{η Ak η^T | |η| = 1, |pk · η| ≤ βϑ} = Lβϑ[pk, Ak] ≥ ϑ(g_γ(xk) + 2δ).
Using the fact that Ak − Ãk ≤ −(K/k) I_{n×n}, we have

η0 (Ak + (K/k) I_{n×n}) η0^T = η0 Ak η0^T + K/k ≤ η0 Ãk η0^T = Lα[p̃k, Ãk] ≤ g^γ(xk),

and so,

ϑ(g_γ(xk) + 2δ) + K/k ≤ η0 Ak η0^T + K/k ≤ g^γ(xk),

or, finally,

(5.1) ϑ(g_γ(xk) + 2δ) − g^γ(xk) + K/k ≤ 0.

In general the constant of semiconvexity K = K(γ) → 0 as γ → 0. Sending
γ → 0 in (5.1), since g_γ → g and g^γ → g, we get 0 < (ϑ − 1)g(xk) + 2ϑδ ≤ 0, a
contradiction. Hence wϑ − v cannot achieve a strict positive max in the interior of
Ω and, sending ϑ → 1, the same is true of u − v. □


Proof of Lemma 5.2. The proof of the lemma is sketched for the convenience of the
reader and to make the paper self contained, but this is now a standard result (see
[8], [9], [10]).
Let w : Ω → R be semiconvex with semiconvexity constant M > 0. Recall
that Alexandrov's lemma implies that for a semiconvex function w there is a set of
measure zero so that, off this set, (p, X) ∈ R^n × S(n) exists such that

w(y) = w(x) + p · (y − x) + (1/2)(y − x) X (y − x)^T + o(|y − x|²), and X ≥ −2M I_{n×n}.
Let x0 ∈ Ω and let Br(x0) ⊂ Ω with

Δ := w(x0) − max_{y∈∂Br(x0)} w(y) > 0.

Given any δ > 0 set

Eδ := {x ∈ Br(x0) | ∃ p ∈ R^n with δ ≤ |p| ≤ 2δ such that x ∈ arg max_{y∈Br(x0)} (w(y) − p · y)}.

For each x ∈ Eδ, if D²w(x) exists, then δ ≤ |Dw(x)| ≤ 2δ, D²w(x) ≤ 0, and w(y) ≤
w(x) + Dw(x) · (y − x), ∀ y ∈ Br(x0).
We claim that the Lebesgue measure of Eδ is positive. More precisely, for all
0 < δ < Δ/r,

(5.2) |Eδ| ≥ ωn(2^n − 1)δ^n / (2M)^n, where ωn = |B1(0)|.
To simplify the proof, assume that w ∈ C². Then (5.2) comes from the inequality
derived from the coarea formula

(5.3) ωn((2δ)^n − δ^n) = |B2δ(0) \ Bδ(0)| = |Dw(Eδ)| ≤ ∫_{Eδ} |det(D²w(x))| dx.

The estimate (5.2) follows from the fact that for all x ∈ Eδ, −2M I_{n×n} ≤ D²w(x) ≤
0, which implies |det(D²w(x))| ≤ (2M)^n. Refer to [9] or to Fleming and Soner [10]
for an exposition of these facts.
From (5.3) we see that there is a set Aδ ⊂ Eδ with |Aδ | > 0 so that
0 < Cn δ n ≤ |det(D2 w(x))| = |λ1 (x) · · · λn (x)| ≤ −λmax (2M )n−1 ,
where Cn denotes a generic positive constant depending only on n, λi (x), i =
1, 2, . . . , n, are the eigenvalues of D2 w(x), and λmax (x) ≤ 0 is the largest eigenvalue.
Consequently,
(5.4) λmax (x) ≤ −Cn δ n , implying that D2 w(x) + Cn δ n In×n ≤ 0, ∀ x ∈ Aδ .
For any k = 1, 2, . . . , there exists xk ∈ A1/k . By the definition of A1/k using
(5.4) we have 1/k ≤ |Dw(xk )| ≤ 2/k, D2 w(xk ) + Cn (1/k)n In×n ≤ 0, and xk ∈
arg maxy∈Br (x0 ) (w(y)−Dw(xk )·y). Consequently a subsequence, still denoted {xk },
converges to a point of maximum of w on Br (x0 ), which may be assumed to be
x0 because we may make x0 a unique strict maximum of w by approximation
if necessary. We conclude that {xk } satisfies xk → x0 , and Lemma 5.2 follows
immediately.
Finally, if w is assumed merely semiconvex and not C², then we replace w by a
C^∞ mollification wε. The proof is then carried out with the mollified wε → w as ε → 0,
uniformly. See, for example, Caffarelli and Cabre [8] or Fleming and Soner [10] for
details. 


Next, we ask the question in what sense does an α-quasiconvex solution of


Lα (uα ) = 0 approximate a given quasiconvex function u. The next theorem be-
gins to answer this question in that it shows that the limit as α → 0+ of solutions
of Lα (uα ) = 0, which are robustly quasiconvex with parameter α, is indeed quasi-
convex and a viscosity solution of L0 (u) = 0.
Theorem 5.3. Let h : Ω → R be continuous, and let Ω ⊂ Rn be open, bounded,
and convex. Let uα denote the unique (continuous) viscosity solution of Lα (uα ) =
0, x ∈ Ω, uα = h, x ∈ ∂Ω. Set v = supα>0 uα . Then v is a viscosity solution of
(5.5) L0 (v) = 0, x ∈ Ω, v = h, x ∈ ∂Ω.
Furthermore, v is quasiconvex.
Proof. First, uα is continuous because h is continuous and, from Theorem 5.1, Lα
has a comparison principle, so that (uα)^* ≤ (uα)_*. Now let u0 be any viscosity so-
lution of (5.5) (which exists by Perron’s method). For any α > 0, Lα (u0 ) ≤ L0 (u0 )
= 0, and hence u0 is a supersolution of Lα (u0 ) ≤ 0. By comparison, we then
have u0 ≥ uα , and hence {uα }α is uniformly bounded above by u0 . Consequently,
v = supα>0 uα ≤ u0 , and note that v is at least lower semicontinuous with v = h
on ∂Ω.
Furthermore, if 0 < α1 ≤ α2, we also conclude by comparison that uα1 ≥ uα2,
so that α → uα is monotone increasing as α ↓ 0. Because of the monotone
convergence and Lα ↑ L0, we know from standard results in viscosity solutions
that v is a viscosity solution of L0(v) = 0. In fact, it is the Perron minimal viscosity
supersolution.
Since each uα ∈ Rα is robustly quasiconvex, and hence quasiconvex automati-
cally, and since the supremum of quasiconvex functions is quasiconvex, v is quasi-
convex. 
Remark 5.4. In a sense, v is the correct quasiconvex function which solves L0 (v) = 0
as we will see next. In fact, we will see that there is only one quasiconvex viscosity
solution of L0 (u) = 0, and hence it must be v.
Theorem 5.5. Let Ω be a convex bounded domain in Rn . Let h : ∂Ω → R be a
continuous function. Let v : Ω → R be a viscosity solution of L0 (Dv, D2 v) = 0
in Ω. Assume that v is lower semicontinuous and quasiconvex with v = h on ∂Ω.
Then v is unique.
Proof. Let α > 0 and define uα as the unique (by Theorem 5.1) solution of
(5.6) Lα (Duα , D2 uα ) = 0, x ∈ Ω, uα = h = v, x ∈ ∂Ω.
We will show that any quasiconvex solution of L0 (v) = 0 must satisfy v = supα>0 uα
on Ω, and this immediately implies the uniqueness. First, since Lα (v) ≤ L0 (v) = 0
and uα = v on ∂Ω, by the comparison Theorem 5.1, we know that v ≥ uα for any
α > 0, and hence v ≥ supα>0 uα. Note that in fact uα ↗ v as α → 0+.
We claim that
(5.7)
uα (x) = sup{q | q ∈ Rα (Ω), q = v on ∂Ω} = sup{q | q ∈ Rα (Ω), q ≤ v, x ∈ Ω}.
The first equality follows from the fact that q ∈ Rα if and only if Lα (q) ≥ 0 (by
Theorem 2.6 and Theorem 4.4) and from Perron’s method, which tells us that uα
is the maximal subsolution of Lα (uα ) ≥ 0. For the second equality in (5.7), since

any eligible q in the first supremum satisfies Lα (q) ≥ 0 and q = v, x ∈ ∂Ω, we have
q ≤ uα ≤ v. Consequently, it must be an eligible q in the second supremum and we
have verified (5.7).
It follows from Lemma 4.8, (5.7) and the assumption that v is quasiconvex that
(5.8) v(x) = sup{q(x) | v ≥ q, q ∈ Rα(Ω) for some α > 0} = sup_{α>0} uα(x). □

The next corollary follows immediately from the preceding proof. Notice that it
only requires the subsolution to be quasiconvex.
Corollary 5.6. Let u be a supersolution L0 (u) ≤ 0 and v a subsolution L0 (v) ≥ 0
in Ω. If v is quasiconvex and u ≥ v on ∂Ω, then u ≥ v on Ω.
Proof. Since v is assumed quasiconvex, we know that v = supα>0 vα, where, for each α > 0, vα is the unique solution of Lα(vα) = 0 in Ω with vα = v on ∂Ω. Since
0 ≥ L0 (u) ≥ Lα (u) for each α > 0 and u ≥ v on ∂Ω, we have u ≥ vα for each
α > 0. Hence u ≥ v = supα vα on Ω. 
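The operators L0 and Lα that drive this section are easy to experiment with numerically. The following is a minimal sketch, our own illustration and not taken from the paper: L0(p, M) is evaluated exactly as the smallest eigenvalue of M restricted to the hyperplane p⊥, while Lα(p, M) is approximated by sampling unit vectors; the test matrix, sample size and values of α are arbitrary choices.

```python
import numpy as np

def L0(p, M):
    """L0(p, M) = min{ v.M.v : |v| = 1, v.p = 0 }; for p = 0 it is the
    smallest eigenvalue of M.  Computed as the smallest eigenvalue of M
    restricted to the hyperplane orthogonal to p."""
    p, M = np.asarray(p, float), np.asarray(M, float)
    if np.linalg.norm(p) == 0:
        return np.linalg.eigvalsh(M)[0]
    # Complete p/|p| to an orthonormal basis; the remaining columns span p^perp.
    Q, _ = np.linalg.qr(p.reshape(-1, 1), mode='complete')
    B = Q[:, 1:]
    return np.linalg.eigvalsh(B.T @ M @ B)[0]

def L_alpha(p, M, alpha, samples=200_000, seed=0):
    """Crude sampling approximation of
    L_alpha(p, M) = min{ v.M.v : |v| = 1, |v.p| <= alpha }."""
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((samples, len(p)))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    V = V[np.abs(V @ p) <= alpha]            # keep nearly orthogonal directions
    return np.einsum('ij,jk,ik->i', V, M, V).min()

p = np.array([1.0, 0.0, 0.0])
M = np.diag([-5.0, 2.0, 3.0])
print(L0(p, M))                              # 2.0: the eigenvalue -5 is excluded
for a in (1.0, 0.5, 0.1):
    print(a, L_alpha(p, M, a))
```

As α decreases, the admissible set of directions shrinks, so the printed values of Lα increase toward L0(p, M); this is exactly the monotonicity Lα ↗ L0 used in the proofs above.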

6. Quasiconvex envelopes and obstacle problems


In this section we take Ω ⊂ Rn to be a bounded, convex domain. Given a
function g : Ω → R which is lower semicontinuous (and bounded from below if
Ω is not bounded), for many purposes it is necessary to calculate the quasiconvex
envelope of the function. This is defined to be the greatest quasiconvex minorant
of g.
If g is not quasiconvex, then the function g## : Ω → R defined by
g##(x) = sup_{p∈Rn, γ∈R} [ (p · x − g#(γ, p)) ∧ γ ],   where   g#(γ, p) = sup_{{x | g(x)≤γ}} (p · x − g(x)),
is the greatest quasiconvex minorant of g. That is,
g##(x) = sup{q(x) | q : Ω → R, q ≤ g, q is quasiconvex on Ω}.
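As a sanity check on these conjugate formulas, the following is a brute-force numerical sketch, our own illustration and not part of the paper, which evaluates g## on a one-dimensional grid by discretizing p and γ; the test function and the grids are arbitrary choices.

```python
import numpy as np

def quasiconvex_envelope_1d(xs, gvals, ps, gammas):
    """Brute-force evaluation of
        g##(x) = sup_{p,gamma} min( p*x - g#(gamma,p), gamma ),
        g#(gamma,p) = sup_{ g(y) <= gamma } ( p*y - g(y) ),
    on a grid.  Accuracy is limited by the finite ranges of p and gamma."""
    out = np.full_like(xs, -np.inf, dtype=float)
    for gamma in gammas:
        mask = gvals <= gamma
        if not mask.any():
            continue
        for p in ps:
            gsharp = np.max(p * xs[mask] - gvals[mask])
            out = np.maximum(out, np.minimum(p * xs - gsharp, gamma))
    return out

# A non-quasiconvex double well; the envelope fills in the dip between the wells.
xs = np.linspace(-2.0, 2.0, 401)
g = np.minimum((xs + 1.0) ** 2, (xs - 1.0) ** 2 + 0.5)
env = quasiconvex_envelope_1d(xs, g,
                              ps=np.linspace(-60, 60, 241),
                              gammas=np.linspace(g.min(), g.max(), 121))
assert np.all(env <= g + 1e-8)   # g## is a minorant of g
print(env[200])                  # value at x = 0, close to 0.5, versus g(0) = 1
```

Because the same grid is used inside the definition of g#, each term is bounded by g at the grid points, so the computed function is itself a quasiconvex minorant of the sampled data; the quality of the envelope is limited only by the ranges chosen for p and γ.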
Refer to Penot [15] and [16] for a survey of quasiconvex duality and applications;
however, this form of the conjugates was introduced in [5]. The second order
condition introduced in the previous section for quasiconvexity gives us a way to
calculate the quasiconvex envelope of g as a solution of an obstacle problem. In
particular, we consider the problem in the viscosity sense:
(6.1) min{g − u, L0 (Du, D2 u)} = 0, x ∈ Ω, u = g on ∂Ω.
Formally, u ≤ g and L0 (Du, D2 u) ≥ 0 everywhere, and whenever strict inequal-
ity holds in one of the terms, then equality holds in the other. In the viscosity
sense, the boundary condition means
max{min{g − u, L0 (Du, D2 u)}, g − u} ≥ 0, x ∈ ∂Ω (subsolution on ∂Ω)
and
min{min{g − u, L0 (Du, D2 u)}, g − u} ≤ 0, x ∈ ∂Ω (supersolution on ∂Ω).
We begin by proving that there is only one viscosity solution of the obstacle
problem for Lα ,
(6.2) min{g − u, Lα (u) − h(x)} = 0, x ∈ Ω, u(x) = g(x), x ∈ ∂Ω.
For simplicity we will assume that g : Ω → R is a continuous function.

Theorem 6.1. Assume that h : Ω → R is a nonnegative continuous function. Let


u : Ω → R be a continuous subsolution and v : Ω → R a continuous supersolution
of (6.2). Then u ≤ v on ∂Ω implies that u ≤ v on Ω.
Proof. Since u is a subsolution, min{g − u, Lα(u) − h(x)} ≥ 0, x ∈ Ω, and since v is a supersolution, we have min{g − v, Lα(v) − h(x)} ≤ 0, x ∈ Ω. Define the open
set
Ωg = {x ∈ Ω | v(x) < g(x)}.
On Ωg we have Lα (v) − h(x) ≤ 0. Since Lα (u) − h(x) ≥ 0 on Ω this also holds on
Ωg ⊂ Ω. Furthermore, on ∂Ωg ∩ Ω we have v = g. On ∂Ωg ∩ ∂Ω one verifies that
v = g as in [11, section 1.5], using the convexity of ∂Ω. Then, since g ≥ u on Ω, we
have u ≤ g = v on ∂Ωg . In summary, we have shown that v is a supersolution of
Lα (v) − h ≤ 0 and u is a subsolution of Lα (u) − h ≥ 0 on Ωg and u ≤ v on ∂Ωg .
By the comparison principle for the Lα − h operator we conclude that u ≤ v on all
of Ωg .
On the other hand, on Ω \ Ωg we have v ≥ g ≥ u, and hence u ≤ v on all of
Ω. 

Next we can prove that the unique quasiconvex viscosity solution of the obstacle
problem for L0 with obstacle g must be the quasiconvex envelope of g.
Corollary 6.2. There is a unique quasiconvex viscosity solution of min{g−u, L0 (u)}
= 0, x ∈ Ω, u = g, x ∈ ∂Ω, and it is given by u = g ## .
Proof. The proof that the obstacle problem has a unique quasiconvex solution fol-
lows just as in Theorem 6.1. Now since g ## is the greatest quasiconvex minorant
of g, we have g ≥ g ## and L0 (g ## ) ≥ 0. Hence min{g − g ## , L0 (g ## )} ≥ 0. Since
u is a solution of this problem and g ## is a subsolution, we conclude u ≥ g ## . But
g ≥ u ≥ g ## and u quasiconvex implies that u = g ## . 

The next proposition shows that g ## arises naturally as a limit of convex mino-
rants. We will assume that g : Ω → R is continuous in order to avoid technicalities.
Proposition 6.3. Let Ω ⊂ Rn be an open, bounded, convex set. We have
lim_{p→∞} [(g(x)^p)^{**}]^{1/p} = g##(x),
where, for any function f, f ∗∗ is the largest convex minorant of f. Also, g ## is the
quasiconvex solution of min{g − g ## , L0 (g ## )} = 0 in Ω with g ## = g on ∂Ω.
Proof. By standard results in convex functions [20], the greatest convex minorant
of a given function f is
(f(x))^{**} = min{ Σ_{i=1}^{n+1} λi f(xi) | {λi, xi}, x = Σ_{i=1}^{n+1} λi xi, λi ≥ 0, Σ_{i=1}^{n+1} λi = 1 }.

Hence, for each p ≥ 1,
[(g(x)^p)^{**}]^{1/p} = min{ ( Σ_{i=1}^{n+1} λi g(xi)^p )^{1/p} | {λi, xi}, x = Σ_{i=1}^{n+1} λi xi, λi ≥ 0, Σ_{i=1}^{n+1} λi = 1 }.
Sending p → ∞ we see that
lim_{p→∞} [(g(x)^p)^{**}]^{1/p} = min{ max_{1≤i≤n+1} g(xi) | {λi, xi}, x = Σ_{i=1}^{n+1} λi xi, λi ≥ 0, Σ_{i=1}^{n+1} λi = 1 }
= g##(x).
The last equality is proved in [5].
Now, from [13] we know that the convex envelope of g^p, namely w = (g^p)^{**}, is the viscosity solution of
min{ g^p − w, min_{|v|=1} v · D²w v^T } = 0, x ∈ Ω, w = g^p, x ∈ ∂Ω.

Set w = up^p and calculate that up satisfies
(6.3) min{ g − up, min_{|v|=1} [ v · D²up v^T + ((p − 1)/up)|v · Dup|² ] } = 0, x ∈ Ω, up = g, x ∈ ∂Ω.
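For completeness, here is the formal computation behind (6.3); it assumes g > 0 on Ω, so that up > 0, an assumption we make here only for this calculation. With w = up^p,
\[
Dw = p\,u_p^{\,p-1}\,Du_p, \qquad
D^2w = p\,u_p^{\,p-1}\,D^2u_p + p(p-1)\,u_p^{\,p-2}\,Du_p\otimes Du_p,
\]
so that for any unit vector $v$,
\[
v\cdot D^2w\,v^T \;=\; p\,u_p^{\,p-1}\Bigl(v\cdot D^2u_p\,v^T+\frac{p-1}{u_p}\,|v\cdot Du_p|^2\Bigr).
\]
Since g^p − w has the same sign as g − up, dividing out the positive factor p up^{p−1} turns the equation from [13] into (6.3).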
Since g## = limp→∞ up, using straightforward viscosity theory we now show, using (6.3), that
min{ g − g##, min_{{|v|=1, v·Dg##=0}} v · D²g## v^T } = 0, x ∈ Ω, g## = g, x ∈ ∂Ω. □
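The limit in Proposition 6.3 is easy to observe numerically in one space dimension, where the greatest convex minorant of grid data can be computed directly as a lower convex hull (an exact substitute, in 1-d, for the obstacle-problem characterization of [13]). The following sketch is our own illustration; the test function and grid are arbitrary, and g is taken strictly positive so that the p-th powers are harmless.

```python
import numpy as np

def convex_envelope_1d(xs, gvals):
    """Greatest convex minorant of the grid data (xs, gvals): lower convex
    hull of the points, interpolated back onto the grid."""
    hull = [0]                                   # indices of lower-hull vertices
    for i in range(1, len(xs)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # drop i1 if it lies on or above the chord from i0 to i
            if (gvals[i1] - gvals[i0]) * (xs[i] - xs[i0]) >= \
               (gvals[i] - gvals[i0]) * (xs[i1] - xs[i0]):
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(xs, xs[hull], gvals[hull])

xs = np.linspace(-2.0, 2.0, 401)
g = 1.0 + np.minimum((xs + 1.0) ** 2, (xs - 1.0) ** 2 + 0.5)   # positive, not quasiconvex
for p in (1, 4, 16, 64):
    u_p = convex_envelope_1d(xs, g ** p) ** (1.0 / p)
    print(p, u_p[200])   # the value at x = 0 increases with p toward g##(0) = 1.5
```

The printed values increase with p, as they must, since the power means appearing in the representation above are nondecreasing in p.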


7. The control problem representation


In this section we indicate how the operator L0 arises in optimal stochastic
control in which the control appears in the diffusion term and the payoff involves
minimizing the essential supremum of a function of the trajectory rather than the
usual expected value of the function. This is a worst case analysis rather than an
expected value analysis. Problems of this type were introduced by Soner [21] and Soner and Touzi [22] (see the references there) and were used by these authors, as well as by Buckdahn et al. [7], to represent the solution of the equation for motion by mean curvature ut = Δu − Δ∞u as the value function for one of these control problems. We will show that L0(u) = 0 also has such a representation. Refer as well to the report by Popier [18] for a similar approach to a general problem and more details on using Lp approximations. Note, however, that those representation results apply to the parabolic problem.
Consider the controlled stochastic differential equation

(7.1) dξ(t) = √2 η(t) dW(t), 0 < t ≤ τ, ξ(0) = x,
where τ = τx is the exit time of ξ(t) from Ω. For simplicity we will assume that ∂Ω
is C 2 . We denote the underlying probability space by Σ. The control functions are
η : [0, τ ] → S1 (0), where S1 (0) = {v ∈ Rn | |v| = 1}, assumed to be adapted to the
σ−algebra generated by ξ(·). Denote this class by U. The controls are chosen so as
to minimize the essential sup (over paths) of some given continuous and bounded
function g : Ω → R. In particular we set the value function
u(x) := inf_{η∈U} ess sup_{ω∈Σ} g(ξ(τx)).

Technically, the infimum is also taken over any complete stochastic basis (Σ, F, P,
{Fs , 0 ≤ s}) endowed with an n−dimensional standard Fs -Brownian motion

W = (W(s), s ≥ 0). Doing so is needed to guarantee that an optimal control actually exists.
This problem is approximated by using the fact that, for f ∈ L∞(Σ),
lim_{k→∞} (1/k) ln ∫_Σ e^{kf(ω)} dP(ω) = ess sup_{ω∈Σ} f(ω).
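For completeness, a sketch of this standard fact, written for the probability measure P on Σ as it is used below: on the one hand,
\[
\frac1k \ln \int_\Sigma e^{k f}\,dP \;\le\; \frac1k \ln e^{k\,\operatorname{ess\,sup} f} \;=\; \operatorname{ess\,sup} f,
\]
while for any ε > 0 the set A_ε = {f > ess sup f − ε} has P(A_ε) > 0, so
\[
\frac1k \ln \int_\Sigma e^{k f}\,dP \;\ge\; \frac1k \ln\bigl(P(A_\varepsilon)\,e^{k(\operatorname{ess\,sup} f-\varepsilon)}\bigr) \;\longrightarrow\; \operatorname{ess\,sup} f - \varepsilon \quad\text{as } k\to\infty .
\]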

Set
Wk(x) = inf_{η∈U} (1/k) ln E[e^{kg(ξ(τx))}].
By standard stochastic control theory and the comparison result of Barles and
Busca [4], the function Wk is the unique (possibly discontinuous) viscosity solution
of the problem
(7.2) min_{v:|v|=1} { v D²Wk v^T + k|v · DWk|² } = 0, x ∈ Ω, Wk(x) = g(x), x ∈ ∂Ω.

Recall that the boundary condition is taken in the viscosity sense. Define the set
Γ(p) := {v ∈ Rn : |v| = 1, v · p = 0} ⊂ S1 (0), p ∈ Rn .
It can be shown in a straightforward way (see, for instance, [7], [18] for the similar proof for parabolic problems) that
lim_{k→∞} Wk(x) = u(x), ∀ x ∈ Ω,
and in fact Wk ↑ u.
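To illustrate the representation, here is a small Monte Carlo sketch; it is our own illustration and makes several simplifying assumptions not in the paper: the control is frozen at a constant unit direction η, the noise is taken one-dimensional in that direction (so that the generator of (7.1) is η · D²u η), Ω is the unit disk in R², and the essential supremum is estimated by the maximum over simulated paths and compared with the exponential functional defining Wk. The payoff g, the starting point and all numerical parameters are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
g = lambda y: y[..., 0] + 0.5 * y[..., 1] ** 2     # arbitrary boundary payoff
x0 = np.array([0.2, 0.1])                          # starting point in the unit disk
eta = np.array([np.cos(0.7), np.sin(0.7)])         # frozen control direction
dt, n_paths = 1e-3, 2000

exit_vals = np.empty(n_paths)
for i in range(n_paths):
    xi = x0.copy()
    while np.linalg.norm(xi) < 1.0:
        xi = xi + np.sqrt(2.0 * dt) * rng.standard_normal() * eta   # d xi = sqrt(2) eta dB
    exit_vals[i] = g(xi)

ess_sup_est = exit_vals.max()                      # ess sup over sampled paths
for k in (5.0, 20.0, 80.0):
    # (1/k) ln E exp(k g), computed with a shift for numerical stability
    Wk_est = np.log(np.mean(np.exp(k * (exit_vals - ess_sup_est)))) / k + ess_sup_est
    print(k, Wk_est)                               # increases toward ess_sup_est as k grows
print("ess sup estimate:", ess_sup_est)
```

Optimizing over adapted controls, rather than freezing a single direction, is what produces the value function u itself; the sketch only illustrates the essential-supremum payoff and its exponential approximation for one fixed control.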
We will prove in this section that u is a viscosity solution of L0 (Du, D2 u) = 0.
Theorem 7.1. The function u(x) = inf η∈U ess supω∈Σ g(ξ(τx )) is a viscosity so-
lution of L0 (Du, D2 u) = 0, with u = g on ∂Ω.
Proof. Let x ∈ Ω be a local strict minimum of the function u∗ − ϕ with ϕ ∈ C 2 .
It is not hard to show that there is a sequence {xk } ⊂ Ω such that xk → x,
Wk (xk ) → u(x), and Wk − ϕ achieves a minimum at xk . We know that Wk is a
supersolution of (7.2), and so
(7.3) min_{v∈S1(0)} { v D²ϕ(xk) v^T + k|v · Dϕ(xk)|² } ≤ 0, k = 1, 2, . . . .

For each k, let vk be a point achieving the minimum in (7.3). Thus,


(7.4) vk D2 ϕ(xk )vkT + k|vk · Dϕ(xk )|2 ≤ 0, k = 1, 2, . . . .
There is a subsequence, still denoted {vk }, and a point v ∈ S1 (0) such that vk → v.
From (7.4) it is clear that we must have v · Dϕ(x) = 0, and hence v ∈ Γ(Dϕ(x)),
since otherwise (7.4) could not hold for large enough k. Letting k → ∞ in (7.4) we
conclude that
(7.5) vD2 ϕ(x)v T ≤ 0, which implies L0 (Dϕ(x), D2 ϕ(x)) ≤ 0,
and hence u is a supersolution of L0 (Du, D2 u) ≤ 0.
Now let w be any supersolution of L0 (Dw, D2 w) ≤ 0 in Ω. We will prove that
w ≥ Wk for each integer k = 1, 2, . . . , which will give us the fact that w ≥ u, and
then u will be the smallest supersolution.
Since w is a supersolution of L0 (Dw, D2 w) ≤ 0 we know that
min_{v∈S1(0)} { v D²w v^T + k|v · Dw|² } ≤ min_{v∈Γ(Dw)} { v D²w v^T + k|v · Dw|² } = L0(Dw, D²w) ≤ 0,

for any k = 1, 2, . . . . This says that w is a supersolution of (7.2) for any k. Since Wk is the solution of (7.2), we have w ≥ Wk for all k and hence that w ≥ u.
Finally, the monotone convergence of Wk to u implies that u is a solution, in
fact the Perron solution, of L0 (u) = 0. 
Remark 7.2. Since u is a solution we use Theorem 5.3 to obtain that if Ω is convex
and u is quasiconvex, then Theorem 5.5 allows us to conclude that u is the unique
quasiconvex viscosity solution of L0 (u) = 0 in Ω with u = g on ∂Ω.
In a similar way we now consider the following stochastic control problem with
a running cost, but we will have a stronger conclusion. The controlled stochastic
equation is still given by (7.1), but now we have the following value function:
(7.6) u(x) := inf_{η∈U} ess sup_{ω∈Σ} [ g(ξ(τx)) − ∫_0^{τx} h(ξ(s)) ds ].

It is assumed that h : Ω → R satisfies h ∈ C(Ω), h ≥ Ch > 0. Using an argument


just as above we conclude that u is a viscosity solution of
(7.7) L0 (Du, D2 u) − h(x) = 0, x ∈ Ω, u(x) = g(x), x ∈ ∂Ω.
However, we have seen that (7.7) has a unique viscosity solution, and so it must
be u. Furthermore, we also conclude that u must be continuous. We have shown
the following.
Theorem 7.3. The function u in (7.6) is the unique continuous viscosity solution
of L0 (Du, D2 u) = h, with u = g on ∂Ω.
Remark 7.4. It is also possible to represent the solution of the obstacle problem
min{g − u, L0 (Du, D2 u)} = 0 as the value function of a control problem with
optimal stopping of the stochastic trajectories. Refer to [13] for the convex case.

Acknowledgement
We would like to express our gratitude to the referees for their very careful
reading of the manuscript.

References
1. O. Alvarez, J.-M. Lasry, and P.-L. Lions, Convex viscosity solutions and state constraints, J.
Math. Pures Appl. (9) 76 (1997), no. 3, 265–288. MR1441987 (98k:35045)
2. P.T. An, Stability of generalized monotone maps with respect to their characterizations, Op-
timization 55 (2006), no. 3, 289–299. MR2238417 (2008a:47085)
3. M. Bardi and F. Dragoni, Convexity and semiconvexity along vector fields, Calc. Var. Partial
Differential Equations, 42 (2011), no. 3–4, 405–427. MR2846261 (2012i:49037).
4. G. Barles and J. Busca, Existence and comparison results for fully nonlinear degenerate
elliptic equations without zeroth-order term, Communications in Partial Differential Equations
26 (2001), no. 11, 2323–2337. MR1876420 (2002k:35078)
5. E. N. Barron and W. Liu, Calculus of variations in L∞ , Appl. Math. Optim. 35 (1997), no. 3,
237–263. MR1431800 (98a:49021)
6. S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
MR2061575 (2005d:90002)
7. R. Buckdahn, P. Cardaliaguet, and M. Quincampoix, A representation formula for the mean
curvature motion, SIAM J. Math. Anal. 33 (2001), no. 4, 827–846 (electronic). MR1884724
(2002j:49036)
8. L. A. Caffarelli and X. Cabré, Fully Nonlinear Elliptic Equations, American Mathematical
Society Colloquium Publications, vol. 43, American Mathematical Society, Providence, RI,
1995. MR1351007 (96h:35046)

9. M. G. Crandall, H. Ishii, and P.-L. Lions, User’s guide to viscosity solutions of second or-
der partial differential equations, Bull. Amer. Math. Soc. (N.S.) 27 (1992), no. 1, 1–67.
MR1118699 (92j:35050)
10. W. H. Fleming and H. M. Soner, Controlled Markov Processes and Viscosity Solutions, sec-
ond ed., Stochastic Modelling and Applied Probability, vol. 25, Springer, New York, 2006.
MR2179357 (2006e:93002)
11. C. Gutiérrez, The Monge-Ampère Equation, Birkhäuser, Boston, MA, 2001. MR1829162 (2002e:35075)
12. R. V. Kohn and S. Serfaty, A deterministic-control-based approach to motion by curvature,
Comm. Pure Appl. Math. 59 (2006), no. 3, 344–407. MR2200259 (2007h:53101)
13. A. M. Oberman, The convex envelope is the solution of a nonlinear obstacle problem, Proc.
Amer. Math. Soc. 135 (2007), no. 6, 1689–1694 (electronic). MR2286077 (2007k:35184)
14. A. M. Oberman, Computing the convex envelope using a nonlinear partial differential equation, Math.
Models Methods Appl. Sci. 18 (2008), no. 5, 759–780. MR2413037 (2009d:35102)
15. J.P. Penot, Glimpses upon quasiconvex analysis, ESAIM:Proceedings 20 (2007), 170–194.
MR2402770 (2009e:26018)
16. J.P. Penot and M. Volle, Duality methods for the study of Hamilton-Jacobi equations,
ESAIM:Proceedings 17 (2007), 96–142. MR2362694 (2009a:49052)
17. H.X. Phu and P.T. An, Stable generalization of convex functions, Optimization 38 (1996),
309–318. MR1434252 (98a:90079)
18. A. Popier, Contrôle optimal stochastique en essential supremum, Tech. report, Uni-
versité de Bretagne Occidentale in Brest, available from http://perso.univ-lemans.fr/~apopier/Research.html, 2001.
19. R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Grundlehren der Mathematischen
Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 317, Springer-Verlag,
Berlin, 1998. MR1491362 (98m:49001)
20. R.T. Rockafellar, Convex Analysis, Princeton University Press, 1970. MR0274683 (43:445)
21. H. M. Soner, Stochastic representations for nonlinear parabolic PDEs, Handbook of differen-
tial equations: evolutionary equations. Vol. III, Handb. Differ. Equ., Elsevier/North-Holland,
Amsterdam, 2007, pp. 477–526. MR2549373 (2010m:60204)
22. H. M. Soner and N. Touzi, The dynamic programming equation for second order stochastic tar-
get problems, SIAM J. Control Optim. 48 (2009), no. 4, 2344–2365. MR2556347 (2011c:91252)
23. H.M. Soner and N. Touzi, Dynamic programming for stochastic target problems and geometric
flows, J. European Math. Soc. 4 (2002), 201–236. MR1924400 (2004d:93142)

Department of Mathematics and Statistics, Loyola University Chicago, Chicago, Illinois 60660
E-mail address: ebarron@luc.edu

Department of Mathematics and Statistics, Loyola University Chicago, Chicago, Illinois 60660
E-mail address: rgoebel1@luc.edu

Department of Mathematics and Statistics, Loyola University Chicago, Chicago, Illinois 60660
E-mail address: rjensen@luc.edu
