
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 22, NO. 6, JUNE 2013, p. 2233


Coupled Variational Image Decomposition and
Restoration Model for Blurred Cartoon-Plus-Texture
Images With Missing Pixels
Michael K. Ng, Xiaoming Yuan, and Wenxing Zhang
Abstract—In this paper, we develop a decomposition model to restore blurred images with missing pixels. Our assumption is that the underlying image is the superposition of cartoon and texture components. We use the total variation norm and its dual norm to regularize the cartoon and texture, respectively. We recommend an efficient numerical algorithm based on splitting versions of the augmented Lagrangian method to solve the problem. Theoretically, the existence of a minimizer of the energy function and the convergence of the algorithm are guaranteed. In contrast to recently developed methods for deblurring images, the proposed algorithm not only gives the restored image, but also gives a decomposition into cartoon and texture parts. These two parts can be further used in segmentation and inpainting problems. Numerical comparisons between this algorithm and some state-of-the-art methods are also reported.

Index Terms—Cartoon and texture, deblurring, image decomposition, variable splitting method.
I. INTRODUCTION

Image decomposition is an important problem in image processing [2]–[4], [7], [16], [26], [34]. It plays a significant role in the realm of object recognition, biomedical engineering, astronomical imaging, etc. The target image is to be decomposed into two meaningful components. One is the geometrical part or sketchy approximation of an image, called the cartoon component; the other is the oscillating part or small-scale special patterns of an image, called the texture component. Mathematically, the cartoon component can be described by a piecewise smooth (or piecewise constant) function whilst the texture component is commonly
oscillating. Because of their different properties, it is more efficient and effective to separate them for image processing and image analysis. The main task herein is to extract the cartoon and texture components from a given image with degradation, e.g., blur and/or missing pixels.

Manuscript received January 17, 2012; revised October 29, 2012; accepted January 13, 2013. Date of publication February 11, 2013; date of current version March 29, 2013. The work of M. K. Ng was supported by RGC Grants and HKBU FRG Grants. The work of X. Yuan was supported by the Hong Kong General Research Fund under Grant 203311. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Farhan A. Baqai.
M. K. Ng is with the Centre for Mathematical Imaging and Vision and the Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Hong Kong (e-mail: mng@math.hkbu.edu.hk).
X. Yuan is with the Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Hong Kong (e-mail: xmyuan@math.hkbu.edu.hk).
W. Zhang is with the School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 610051, China (e-mail: wxzh1984@126.com).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIP.2013.2246520
For a target image $f\in\mathbb{R}^n$, image decomposition derives $f=u+v$, where $u$ and $v$ represent the cartoon and texture, respectively. We herein treat a two-dimensional or higher-dimensional image by vectorizing it as a one-dimensional vector, e.g., in lexicographic order. The first model for image decomposition is the popular image denoising model proposed by Rudin et al. [33]:
$$\min_{u\in\mathbb{R}^n,\,v\in\mathbb{R}^n}\ \||\nabla u|\|_1+\lambda\|v\|_2^2\quad\text{subject to } u+v=f \qquad (1)$$
where $\nabla:\mathbb{R}^n\to\mathbb{R}^n\times\mathbb{R}^n$ is the first-order derivative operator, and $\lambda>0$ is a parameter to control the decomposition of $f$ into $u$ and $v$. Hereafter, for any $x=(x_1,x_2,\dots,x_n)^T\in\mathbb{R}^n$,
$$\|x\|_p:=\Big(\sum_{i=1}^{n}|x_i|^p\Big)^{1/p}$$
represents the $p$-norm of $x$, and for any $y=(y_1,y_2)\in\mathbb{R}^n\times\mathbb{R}^n$, $|y|$ denotes a vector in $\mathbb{R}^n$ whose entries are given by
$$|y|_i:=\sqrt{(y_1)_i^2+(y_2)_i^2},\quad i=1,2,\dots,n.$$
It follows that $\||y|\|_p=\big(\sum_{i=1}^{n}|y|_i^p\big)^{1/p}$. The quantity $\||\nabla x|\|_1$ is the well-known total variation (TV) norm of $x$. A substantial advantage of the TV norm in image processing is that it can recover piecewise constant functions without smoothing sharp discontinuities (i.e., it can preserve the edges in images). By exploiting the model in (1), the cartoon part of a given image can be extracted. However, if the original image contains some texture patterns, such as the scene of a meadow or the surface of stone or cloth, the model in (1) may remove such texture patterns (see [26] for instance).
For any $u\in\mathbb{R}^n$, consider the semi-norm in a Sobolev space
$$\|u\|_{1,p}:=\||\nabla u|\|_p,\quad p\ge 1;$$
it follows that the TV norm is just the semi-norm $\|\cdot\|_{1,1}$. The dual norm of $\|\cdot\|_{1,p}$, denoted by $\|\cdot\|_{-1,q}$ with $\frac{1}{p}+\frac{1}{q}=1$ ($q=\infty$ corresponding to $p=1$, and vice versa), is defined as
$$\|v\|_{-1,q}:=\inf\big\{\||g|\|_q\ \big|\ v=\operatorname{div}g,\ g\in\mathbb{R}^n\times\mathbb{R}^n\big\} \qquad (2)$$
where $\operatorname{div}:=-\nabla^T$ is the divergence operator (see [1] for more details).

1057-7149/$31.00 © 2013 IEEE

It is interesting to note that for an image $v$ with oscillating patterns, $\|v\|_{-1,\infty}$ is smaller than $\|v\|_2$ (see [2], [25], [37] for numerical verification). Indeed, Meyer [26] proved that the $\|\cdot\|_{-1,\infty}$-norm can seize texture components (or oscillating patterns) in an image. This property is useful for designing an energy minimization model to extract texture patterns from an image. As a result, Meyer proposed the model
$$\min_{u\in\mathbb{R}^n,\,v\in\mathbb{R}^n}\ \||\nabla u|\|_1+\mu\|v\|_{-1,\infty}\quad\text{subject to } u+v=f \qquad (3)$$
for image decomposition.
Since the corresponding Euler–Lagrange equation of (3) cannot be expressed directly, it is hard to compute a numerical solution. Vese and Osher [37] tackled the model in (3) by solving the following approximation:
$$\min_{u\in\mathbb{R}^n,\,g\in\mathbb{R}^n\times\mathbb{R}^n}\ \||\nabla u|\|_1+\lambda\|u+\operatorname{div}g-f\|_2^2+\mu\||g|\|_p \qquad (4)$$
with $p\ge 1$; here $\lambda>0$ and $\mu>0$ are parameters that balance the three terms in the objective function. In (4), $u$ represents the cartoon part and $v=\operatorname{div}g$ is the texture part. Note that the second term in the objective function of (4) forces $f\approx u+\operatorname{div}g$, and the last term penalizes the texture part $v$ (recall the definition of $\|\cdot\|_{-1,\infty}$ in (2) and the fact $\||g|\|_\infty=\lim_{p\to\infty}\big\|\sqrt{g_1^2+g_2^2}\big\|_p$). Obviously, when $p\to\infty$, the model in (4) approximates the model in (3). In [37], the gradient method is employed to solve the Euler–Lagrange equation of (4), and it has also been demonstrated that the solution of (4) can be used for texture discrimination and segmentation.
In [29], an alternative model for image decomposition was proposed:
$$\min_{u\in\mathbb{R}^n,\,v\in\mathbb{R}^n}\ \||\nabla u|\|_1+\mu\|v\|_{-1,2}\quad\text{subject to } u+v=f \qquad (5)$$
where $\|\cdot\|_{-1,2}$ is used instead of $\|\cdot\|_{-1,\infty}$ to extract the texture component of an image. We note that
$$\|v\|_{-1,2}=\sqrt{\langle v,\,-\Delta^{-1}v\rangle}\quad\forall v\in\mathbb{R}^n \qquad (6)$$
where $\langle\cdot,\cdot\rangle$ denotes the inner product in the Euclidean space and $\Delta$ is the Laplacian operator defined as
$$\Delta v=\operatorname{div}(\nabla v)\quad\forall v\in\mathbb{R}^n.$$
Thus the model in (5) is easier to tackle numerically than the models in (3) and (4).
There are other image decomposition models in the literature. Aujol et al. [4] proposed a constrained model for image decomposition:
$$\min_{u\in\mathbb{R}^n,\,v\in\mathbb{R}^n}\ \||\nabla u|\|_1+\frac{1}{2\lambda}\|f-u-v\|_2^2\quad\text{subject to } \|v\|_{-1,\infty}\le\mu \qquad (7)$$
where $\lambda>0$ and $\mu>0$ are parameters to control the decomposition of $f$ into cartoon and texture parts. They exploited the alternating minimization (AM) scheme for the model in (7); the corresponding $u$- and $v$-subproblems can be solved approximately by Chambolle's projection method [9]. Cai et al. [8] proposed a tight-frame-based method for handling inpainting and image decomposition simultaneously. Their model is given by
$$\min_{\alpha_1\in\mathbb{R}^m,\,\alpha_2\in\mathbb{R}^m}\ \sum_{i=1}^{2}\big\|\operatorname{diag}(u_i)\,\alpha_i\big\|_1+\sum_{i=1}^{2}\frac{\kappa}{2}\big\|(I-A_iA_i^T)\alpha_i\big\|_2^2+\frac{1}{2}\Big\|P_\Lambda\Big(f-\sum_{i=1}^{2}A_i^T\alpha_i\Big)\Big\|_2^2 \qquad (8)$$
where $A_i\in\mathbb{R}^{m\times n}$ ($i=1,2$) are the tight frames under which the cartoon and texture components have sparse approximations in their respective transformed domains; $u_i\in\mathbb{R}^m$ ($i=1,2$) are thresholding parameters; and $\kappa\in[0,+\infty]$ is a trade-off between the regularization and fidelity terms. Therein, the authors suggested applying the proximal forward-backward splitting method to solve (8).
On the other hand, Chan et al. [12] showed the necessity of simultaneous image inpainting and blind deconvolution for total variation models, rather than performing these two tasks sequentially. This was the first work to exploit the additional information provided by coupling the inpainting and deblurring problems. In [14], Daubechies and Teschke studied a variational image restoration model by means of wavelets for performing simultaneous decomposition, deblurring, and denoising. In [24], Kim and Vese used the image decomposition model for image deblurring. Recently, a TV-curvelets image decomposition model was considered and developed in [25].
In this paper, we study an image decomposition model for images with corruptions, e.g., blur and/or missing pixels:
$$\min_{u\in\mathbb{R}^n,\,g\in\mathbb{R}^n\times\mathbb{R}^n}\ \alpha\||\nabla u|\|_1+\frac{1}{2}\|K(u+\operatorname{div}g)-f\|_2^2+\mu\||g|\|_p \qquad (9)$$
with $p\ge 1$. Numerically, we only consider the cases $p=1,2$ and $\infty$, so that the resulting subproblems possess closed-form solutions or can be easily solved to high accuracy. Here, $K:\mathbb{R}^n\to\mathbb{R}^n$ is a linear operator; $\alpha>0$ and $\mu>0$ are trade-offs that balance the decomposition of the image $f$ into the cartoon $u$ and the texture $\operatorname{div}g$, respectively. Different choices of $K$ correspond to observed images with different corruptions: (i) blurred images, i.e., $K=B$ where $B$ is the blurring matrix associated with a spatially invariant point spread function; (ii) images with missing pixels (herein we consider the case of missing pixels with zero values), i.e., $K=S$ where $S$ is a binary matrix (the so-called mask) representing missing pixels; (iii) blurred images with missing pixels, i.e., $K=SB$ where $S$ is a mask and $B$ is a blurring matrix. Therefore, model (9) is more general than the image decomposition model (4). For solving (9), we recommend the alternating direction method of multipliers with Gaussian back substitution recently developed in [23], which originates from the idea of splitting the classical augmented Lagrangian method. The recommended method provides fast and accurate solutions with guaranteed convergence. We will report numerical results to show the effectiveness of the model (9) and the efficiency of the algorithm in [23].
The rest of the paper is organized as follows. In Section II, some preliminaries involving convex programming and the variable splitting method are stated; we first reformulate the model (9) as an optimization problem with separable structure, then elaborate the procedure for solving the resulting subproblems by the algorithm in [23]. In Section III, some numerical examples of image decomposition are tested to demonstrate the effectiveness of the model (9) and the efficiency of the algorithm in [23]. Finally, we conclude the paper in Section IV.
II. VARIABLE SPLITTING METHODS FOR STRUCTURED
CONVEX PROGRAMMING
A. Preliminaries
In this subsection, we summarize some basic concepts and
properties that will be used in subsequent analysis.
For any $x\in\mathbb{R}^n$ and symmetric positive definite matrix $H\in\mathbb{R}^{n\times n}$, $\|x\|_H:=\sqrt{\langle x,Hx\rangle}$ denotes the $H$-norm of $x$; $I$ represents the identity operator. Let $\theta:\mathbb{R}^n\to(-\infty,+\infty]$ be a function. The domain and epigraph of $\theta$ are defined as $\operatorname{dom}\theta:=\{x\in\mathbb{R}^n\mid\theta(x)<+\infty\}$ and $\operatorname{epi}\theta:=\{(x,y)\in\mathbb{R}^n\times\mathbb{R}\mid\theta(x)\le y\}$, respectively. $\theta$ is said to be proper if $\operatorname{dom}\theta$ is nonempty, and is called lower semi-continuous (l.s.c.) if $\operatorname{epi}\theta$ is a closed set in $\mathbb{R}^n\times\mathbb{R}$. The subdifferential of $\theta$ at $x$, denoted by $\partial\theta(x):\mathbb{R}^n\to 2^{\mathbb{R}^n}$, is defined as
$$\partial\theta(x):=\big\{\xi\in\mathbb{R}^n\mid\theta(y)\ge\theta(x)+\langle\xi,\,y-x\rangle\ \forall y\in\mathbb{R}^n\big\}.$$
We refer to [31] for more details.
B. Variable Splitting Methods
We now recall some variable splitting methods for solving separable convex optimization problems with linear constraints. Consider the following convex optimization problem:
$$\begin{aligned}
\min\ &\theta_1(x_1)+\theta_2(x_2)+\theta_3(x_3)\\
\text{subject to}\ &A_1x_1+A_2x_2+A_3x_3=b\\
&x_i\in\mathcal{X}_i,\quad i=1,2,3
\end{aligned} \qquad (10)$$
where $\theta_i:\mathbb{R}^{m_i}\to(-\infty,+\infty]$ ($i=1,2,3$) are l.s.c. proper convex functions; $A_i\in\mathbb{R}^{l\times m_i}$ are given matrices; $\mathcal{X}_i\subseteq\mathbb{R}^{m_i}$ are nonempty closed convex sets; and $b\in\mathbb{R}^l$ is a known vector.
The augmented Lagrangian function of the optimization problem in (10) on the domain $\mathcal{X}_1\times\mathcal{X}_2\times\mathcal{X}_3\times\mathbb{R}^l$ is defined as
$$\mathcal{L}(x_1,x_2,x_3,\lambda)=\sum_{i=1}^{3}\theta_i(x_i)-\Big\langle\lambda,\ \sum_{i=1}^{3}A_ix_i-b\Big\rangle+\frac{\beta}{2}\Big\|\sum_{i=1}^{3}A_ix_i-b\Big\|_2^2 \qquad (11)$$
where $\lambda\in\mathbb{R}^l$ is the Lagrange multiplier and $\beta>0$ is a penalty parameter for the linear constraint. Suppose that $(x_1^*,x_2^*,x_3^*)$ is an optimal solution of the optimization problem in (10) and $\lambda^*$ is an optimal solution of its dual problem. Then, for any $(x_1,x_2,x_3)\in\mathcal{X}_1\times\mathcal{X}_2\times\mathcal{X}_3$ and $\lambda\in\mathbb{R}^l$, we have (see the monograph [31] for details)
$$\mathcal{L}(x_1^*,x_2^*,x_3^*,\lambda)\le\mathcal{L}(x_1^*,x_2^*,x_3^*,\lambda^*)\le\mathcal{L}(x_1,x_2,x_3,\lambda^*).$$
Algorithm 1 The Extension of ADM for the Optimization Problem in (10)
Require: Choose arbitrary $\beta>0$, tolerance $\epsilon>0$, and the initial point $v^0=(x_2^0,x_3^0,\lambda^0)\in\mathbb{R}^{m_2}\times\mathbb{R}^{m_3}\times\mathbb{R}^l$.
1: repeat
2:   $x_1^{k+1}=\arg\min\big\{\mathcal{L}(x_1,x_2^k,x_3^k,\lambda^k)\mid x_1\in\mathcal{X}_1\big\}$
3:   $x_2^{k+1}=\arg\min\big\{\mathcal{L}(x_1^{k+1},x_2,x_3^k,\lambda^k)\mid x_2\in\mathcal{X}_2\big\}$
4:   $x_3^{k+1}=\arg\min\big\{\mathcal{L}(x_1^{k+1},x_2^{k+1},x_3,\lambda^k)\mid x_3\in\mathcal{X}_3\big\}$
5:   $\lambda^{k+1}=\lambda^k-\beta\big(A_1x_1^{k+1}+A_2x_2^{k+1}+A_3x_3^{k+1}-b\big)$
6: until $\|v^k-v^{k+1}\|_H<\epsilon$
Hence, the first-order optimality condition of the optimization problem in (10) can be characterized in the following variational form: find $(x_1^*,x_2^*,x_3^*,\lambda^*)\in\mathcal{X}_1\times\mathcal{X}_2\times\mathcal{X}_3\times\mathbb{R}^l$ and $\xi_i^*\in\partial\theta_i(x_i^*)$ ($i=1,2,3$) such that
$$\begin{cases}
\langle x_1-x_1^*,\ \xi_1^*-A_1^T\lambda^*\rangle\ge 0 & \forall x_1\in\mathcal{X}_1\\
\langle x_2-x_2^*,\ \xi_2^*-A_2^T\lambda^*\rangle\ge 0 & \forall x_2\in\mathcal{X}_2\\
\langle x_3-x_3^*,\ \xi_3^*-A_3^T\lambda^*\rangle\ge 0 & \forall x_3\in\mathcal{X}_3\\
\big\langle\lambda-\lambda^*,\ \sum_{i=1}^{3}A_ix_i^*-b\big\rangle\ge 0 & \forall\lambda\in\mathbb{R}^l.
\end{cases} \qquad (12)$$
Let $\mathcal{W}^*$ denote the set of all $(x_1^*,x_2^*,x_3^*,\lambda^*)$ satisfying (12), and moreover,
$$\mathcal{V}^*:=\big\{(x_2^*,x_3^*,\lambda^*)\mid(x_1^*,x_2^*,x_3^*,\lambda^*)\in\mathcal{W}^*\big\}.$$
Furthermore, for notational convenience, we denote
$$v:=\begin{pmatrix}x_2\\ x_3\\ \lambda\end{pmatrix}\quad\text{and}\quad H:=\begin{pmatrix}\beta A_2^TA_2&0&0\\ 0&\beta A_3^TA_3&0\\ 0&0&\frac{1}{\beta}I\end{pmatrix}. \qquad (13)$$
Evidently, $H$ is positive definite if $A_i$ ($i=2,3$) are of full column rank.
Extending straightforwardly the algorithmic framework of the classical alternating direction method of multipliers (ADM) in [17] and [18] to the problem (10), we obtain the iterative scheme of Algorithm 1.
It has been shown numerically in [30] and [35] that Algorithm 1 performs very well empirically. Unfortunately, to the best of our knowledge, its convergence is still open. This lack of convergence has thus inspired some variants of ADM-based methods, e.g., [21]–[23], [35]. These variants are based on different algorithmic frameworks, and each of them is particularly efficient for certain kinds of applications. In this paper, we focus on applying the most recent one, proposed in [23], i.e., the ADM with Gaussian back substitution, to solve the model in (9).
To elaborate on the application of the algorithm in [23] to the problem in (10), we first define
$$M:=\begin{pmatrix}\beta A_2^TA_2&0&0\\ \beta A_3^TA_2&\beta A_3^TA_3&0\\ 0&0&\frac{1}{\beta}I\end{pmatrix}. \qquad (14)$$
The application of the algorithm in [23] to the problem in (10) is summarized as Algorithm 2.
Algorithm 2 Alternating Direction Method With Gaussian Back Substitution for (10)
Require: Choose arbitrary $\beta>0$ and $\gamma\in[0.5,1)$, tolerance $\epsilon>0$, and the initial point $v^0=(x_2^0,x_3^0,\lambda^0)\in\mathbb{R}^{m_2}\times\mathbb{R}^{m_3}\times\mathbb{R}^l$.
1: repeat
2:   $\tilde{x}_1^k=\arg\min\big\{\mathcal{L}(x_1,x_2^k,x_3^k,\lambda^k)\mid x_1\in\mathcal{X}_1\big\}$
3:   $\tilde{x}_2^k=\arg\min\big\{\mathcal{L}(\tilde{x}_1^k,x_2,x_3^k,\lambda^k)\mid x_2\in\mathcal{X}_2\big\}$
4:   $\tilde{x}_3^k=\arg\min\big\{\mathcal{L}(\tilde{x}_1^k,\tilde{x}_2^k,x_3,\lambda^k)\mid x_3\in\mathcal{X}_3\big\}$
5:   $\tilde{\lambda}^k=\lambda^k-\beta\big(A_1\tilde{x}_1^k+A_2\tilde{x}_2^k+A_3\tilde{x}_3^k-b\big)$
6:   $H^{-1}M^T(v^{k+1}-v^k)=\gamma(\tilde{v}^k-v^k)$
7:   $x_1^{k+1}=\tilde{x}_1^k$
8: until $\|v^k-\tilde{v}^k\|_H<\epsilon$

Remark 1: Compared to Algorithm 1, Algorithm 2 requires an additional step of Gaussian back substitution, i.e., Steps
6–7 in Algorithm 2. As shown in [23], the matrix $H^{-1}M^T$ is an upper-triangular block matrix; thus, Step 6 of Algorithm 2 can be implemented easily. In fact, it can be specified as
$$\begin{cases}
x_2^{k+1}=x_2^k+\gamma(\tilde{x}_2^k-x_2^k)-\gamma(A_2^TA_2)^{-1}A_2^TA_3(\tilde{x}_3^k-x_3^k),\\
x_3^{k+1}=x_3^k+\gamma(\tilde{x}_3^k-x_3^k),\\
\lambda^{k+1}=\lambda^k+\gamma(\tilde{\lambda}^k-\lambda^k).
\end{cases} \qquad (15)$$
The convergence of Algorithm 2 has been analyzed in [23], and we refer to Appendix I for details.
C. Implementation of Algorithm 2 for (9)
Now, we elucidate the implementation details of Algorithm 2 for the model in (9).
First, we show that the model in (9) can be reformulated as a special case of the problem in (10); thus, both Algorithm 1 and Algorithm 2 are applicable. By introducing the auxiliary variables $x\in\mathbb{R}^n\times\mathbb{R}^n$, $y\in\mathbb{R}^n$, and $z\in\mathbb{R}^n\times\mathbb{R}^n$, the model in (9) can be rewritten as
$$\begin{aligned}
\min\ &\alpha\||x|\|_1+\frac{1}{2}\|Ky-f\|_2^2+\mu\||z|\|_p\\
\text{subject to}\ &x=\nabla u\\
&y=u+\operatorname{div}g\\
&z=g
\end{aligned} \qquad (16)$$
with $p\ge 1$. Thus, the problem in (16) is a special case of the problem in (10) with the following specifications:
1) $x_1:=g$, $x_2:=u$, $x_3:=(x,y,z)$, $\mathcal{X}_1:=\mathbb{R}^n\times\mathbb{R}^n$, $\mathcal{X}_2:=\mathbb{R}^n$, and $\mathcal{X}_3:=(\mathbb{R}^n\times\mathbb{R}^n)\times\mathbb{R}^n\times(\mathbb{R}^n\times\mathbb{R}^n)$;
2) $\theta_1(x_1):=0$, $\theta_2(x_2):=0$, and $\theta_3(x_3):=\alpha\||x|\|_1+\frac{1}{2}\|Ky-f\|_2^2+\mu\||z|\|_p$;
3)
$$A_1:=\begin{pmatrix}0\\ \operatorname{div}\\ I\end{pmatrix}\quad A_2:=\begin{pmatrix}\nabla\\ I\\ 0\end{pmatrix}\quad A_3:=-\begin{pmatrix}I&0&0\\ 0&I&0\\ 0&0&I\end{pmatrix}\quad\text{and }b:=0.$$
Therefore, the model in (9) falls into the form of the problem in (10) with the above specifications. Note that the strategy of regrouping $(x,y,z,u,g)$ into three decoupled blocks is not unique. For example, we can also set $x_1:=(u,z)$, $x_2:=(x,g)$, and $x_3:=y$. Accordingly, $\theta_1(x_1):=\mu\||z|\|_p$, $\theta_2(x_2):=\alpha\||x|\|_1$, $\theta_3(x_3):=\frac{1}{2}\|Ky-f\|_2^2$, and the coefficient matrices in the linear constraints are
$$A_1:=\begin{pmatrix}\nabla&0\\ I&0\\ 0&I\end{pmatrix}\quad A_2:=\begin{pmatrix}-I&0\\ 0&\operatorname{div}\\ 0&-I\end{pmatrix}\quad A_3:=-\begin{pmatrix}0\\ I\\ 0\end{pmatrix}\quad\text{and }b:=0.$$
It is, however, easy to show that this different regrouping leads to subproblems that are essentially identical to those of the previous strategy.
According to (11), we specify the augmented Lagrangian function of (16) as
$$\begin{aligned}
\mathcal{L}(x_1,x_2,x_3,\lambda)={}&\alpha\||x|\|_1+\frac{1}{2}\|Ky-f\|_2^2+\mu\||z|\|_p\\
&+\frac{\beta_1}{2}\Big\|\nabla u-x-\frac{\lambda_1}{\beta_1}\Big\|_2^2+\frac{\beta_2}{2}\Big\|u+\operatorname{div}g-y-\frac{\lambda_2}{\beta_2}\Big\|_2^2+\frac{\beta_3}{2}\Big\|z-g-\frac{\lambda_3}{\beta_3}\Big\|_2^2
\end{aligned}$$
where $\lambda=(\lambda_1,\lambda_2,\lambda_3)\in(\mathbb{R}^n\times\mathbb{R}^n)\times\mathbb{R}^n\times(\mathbb{R}^n\times\mathbb{R}^n)$ is the Lagrange multiplier and $\beta_i>0$ ($i=1,2,3$) are penalty parameters. We herein adopt different penalties for the different linear constraints, as in [28]; generally, this strategy renders better numerical performance for Algorithm 2. We list the $x_1$-, $x_2$-, and $x_3$-subproblems (correspondingly, the $g$-, $u$-, and $(x,y,z)$-subproblems) of Algorithm 2 as follows:
1) The $g$-subproblem corresponds to the following optimization problem:
$$\tilde{g}^k=\arg\min_g\ \frac{\beta_2}{2}\Big\|u^k+\operatorname{div}g-y^k-\frac{\lambda_2^k}{\beta_2}\Big\|_2^2+\frac{\beta_3}{2}\Big\|z^k-g-\frac{\lambda_3^k}{\beta_3}\Big\|_2^2$$
and it is equivalent to the linear system
$$\big(\beta_2\operatorname{div}^T\operatorname{div}+\beta_3 I\big)g=\operatorname{div}^T\big(\beta_2(y^k-u^k)+\lambda_2^k\big)+\beta_3 z^k-\lambda_3^k$$
which can be easily handled by the fast Fourier transform (FFT) if the periodic boundary condition is applied to the divergence operator, or by the discrete cosine transform (DCT) if the reflective boundary condition is applied (see, e.g., [20, Chapter 7]).
2) The $u$-subproblem is equivalent to
$$\tilde{u}^k=\arg\min_u\ \frac{\beta_1}{2}\Big\|\nabla u-x^k-\frac{\lambda_1^k}{\beta_1}\Big\|_2^2+\frac{\beta_2}{2}\Big\|u+\operatorname{div}\tilde{g}^k-y^k-\frac{\lambda_2^k}{\beta_2}\Big\|_2^2.$$
It amounts to solving the linear system
$$\big(\beta_1\nabla^T\nabla+\beta_2 I\big)u=\nabla^T\big(\beta_1 x^k+\lambda_1^k\big)+\lambda_2^k+\beta_2\big(y^k-\operatorname{div}\tilde{g}^k\big) \qquad (17)$$
and the FFT (resp. DCT) can be utilized if the periodic (resp. reflective) boundary condition is applied to the first-order derivative operator $\nabla$.
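With the periodic boundary condition, $\nabla^T\nabla$ is the (negative) discrete Laplacian and is diagonal in the Fourier basis, so (17) is solved by two FFTs and one pointwise division. A minimal NumPy sketch (the forward-difference stencil is our assumption):

```python
import numpy as np

def grad(u):
    # Forward differences with periodic wrap-around (so the FFT diagonalizes).
    return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

def gradT(p1, p2):
    # Adjoint of grad under the same periodic convention.
    return (np.roll(p1, 1, axis=1) - p1) + (np.roll(p2, 1, axis=0) - p2)

def solve_u(rhs, b1, b2):
    """Solve (b1 grad^T grad + b2 I) u = rhs via the FFT, as in (17)."""
    n1, n2 = rhs.shape
    # Eigenvalues of grad^T grad: |e^{i theta}-1|^2 = 2 - 2 cos(theta), per axis.
    lap = (2.0 - 2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(n2)))[None, :] \
        + (2.0 - 2.0 * np.cos(2.0 * np.pi * np.fft.fftfreq(n1)))[:, None]
    return np.fft.ifft2(np.fft.fft2(rhs) / (b1 * lap + b2)).real
```

Since $\beta_2>0$, the denominator never vanishes, and the solve is exact up to floating-point error.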
3) The $(x,y,z)$-subproblem corresponds to the following optimization problem, in which each variable can be solved for separately:
$$\begin{aligned}
(\tilde{x}^k,\tilde{y}^k,\tilde{z}^k)=\arg\min_{x,y,z}\ &\alpha\||x|\|_1+\frac{1}{2}\|Ky-f\|_2^2+\mu\||z|\|_p\\
&+\frac{\beta_1}{2}\Big\|\nabla\tilde{u}^k-x-\frac{\lambda_1^k}{\beta_1}\Big\|_2^2+\frac{\beta_2}{2}\Big\|\tilde{u}^k+\operatorname{div}\tilde{g}^k-y-\frac{\lambda_2^k}{\beta_2}\Big\|_2^2+\frac{\beta_3}{2}\Big\|z-\tilde{g}^k-\frac{\lambda_3^k}{\beta_3}\Big\|_2^2.
\end{aligned}$$
a) The $x$-subproblem can be solved explicitly by the soft-thresholding operator:
$$\tilde{x}^k=\arg\min_x\ \alpha\||x|\|_1+\frac{\beta_1}{2}\Big\|x-\nabla\tilde{u}^k+\frac{\lambda_1^k}{\beta_1}\Big\|_2^2=\mathcal{S}_{\alpha/\beta_1}\Big(\nabla\tilde{u}^k-\frac{\lambda_1^k}{\beta_1}\Big)$$
where, for any $c>0$, $\mathcal{S}_c(\cdot)$ is defined as
$$\mathcal{S}_c(g):=g-\min\{c,|g|\}\,\frac{g}{|g|}\quad\forall g\in\mathbb{R}^n\times\mathbb{R}^n \qquad (18)$$
and $\big(\frac{g}{|g|}\big)_i$ should be taken as $0$ if $|g|_i=0$.
b) The $y$-subproblem is equivalent to
$$\tilde{y}^k=\arg\min_y\ \frac{1}{2}\|Ky-f\|_2^2+\frac{\beta_2}{2}\Big\|\tilde{u}^k+\operatorname{div}\tilde{g}^k-y-\frac{\lambda_2^k}{\beta_2}\Big\|_2^2$$
which reduces to the linear system
$$\big(K^TK+\beta_2 I\big)y=K^Tf+\beta_2\big(\tilde{u}^k+\operatorname{div}\tilde{g}^k\big)-\lambda_2^k. \qquad (19)$$
The computational effort of solving (19) depends on the operator $K$. If $K$ is an identity, diagonal, or downsampling matrix, we can solve (19) directly since its coefficient matrix is diagonal. If $K$ is a blurring matrix, (19) can be solved efficiently by the FFT or DCT. More specifically, when the periodic boundary condition is applied to $K$, the FFT can be used to diagonalize the blurring matrix $K$; if the reflective boundary condition is considered and $K$ itself is doubly symmetric, the DCT can be utilized. We refer to [20, Chapter 4] for more details. When $K=SB$, we recommend the preconditioned conjugate gradient (PCG) method [19] or the Barzilai–Borwein (BB) method [5] to compute an approximate solution.
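For $K=SB$ the coefficient matrix of (19) is no longer diagonalizable by a fast transform, so an iterative solver is used; the matrix $K^TK+\beta_2 I$ is symmetric positive definite, so conjugate gradient applies. The sketch below is an unpreconditioned CG in NumPy with a toy dense "blur" and a random stand-in right-hand side (the paper recommends PCG or BB; everything here is illustrative only):

```python
import numpy as np

def cg(apply_A, b, tol=1e-12, maxiter=2000):
    """Plain conjugate gradient for an SPD system A y = b."""
    y = np.zeros_like(b)
    r = b - apply_A(y)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = apply_A(p)
        step = rs / (p @ Ap)
        y += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return y

rng = np.random.default_rng(1)
n = 64
B = np.abs(rng.standard_normal((n, n)))
B /= B.sum(axis=1, keepdims=True)            # toy row-normalized "blur"
s = (rng.random(n) > 0.2).astype(float)      # binary mask, ~20% zeros
K = s[:, None] * B                           # K = S B
beta2 = 0.5
f = rng.standard_normal(n)
rhs = K.T @ f + rng.standard_normal(n)       # stand-in for the right side of (19)
y = cg(lambda v: K.T @ (K @ v) + beta2 * v, rhs)
```

Only matrix-vector products with $K$ and $K^T$ are needed, so in practice $B$ is never formed: the blur is applied by convolution and the mask by elementwise multiplication.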
c) The $z$-subproblem is equivalent to
$$\tilde{z}^k=\arg\min_z\ \frac{\mu}{\beta_3}\||z|\|_p+\frac{1}{2}\Big\|z-\tilde{g}^k-\frac{\lambda_3^k}{\beta_3}\Big\|_2^2=\operatorname{prox}_{\frac{\mu}{\beta_3}\||\cdot|\|_p}\Big(\tilde{g}^k+\frac{\lambda_3^k}{\beta_3}\Big) \qquad (20)$$
where $\operatorname{prox}_{c\||\cdot|\|_p}(\cdot)$ denotes the proximal operator (see, e.g., [13], [27] for more details) of the function $c\||\cdot|\|_p$ for a constant $c>0$. It is easy to verify that $\tilde{z}^k$ can be computed via this formula in closed form when $p=1$ or $2$. When $p=\infty$, we recommend subroutines such as [15], [36] to seek an approximate solution of $\tilde{z}^k$. We refer to Appendix II for details of solving the $z$-subproblem with different values of $p$.
4) The Gaussian back substitution step, i.e., Step 6 of Algorithm 2, can be executed easily. More specifically, the $x_2$-subproblem in (15) is
$$\big(\beta_1\nabla^T\nabla+\beta_2 I\big)\big(u^{k+1}-u^k-\gamma(\tilde{u}^k-u^k)\big)=\gamma\big(\beta_1\nabla^T(\tilde{x}^k-x^k)+\beta_2(\tilde{y}^k-y^k)\big)$$
which can be solved in the same way as the linear system in (17).
III. EXPERIMENTAL RESULTS

In this section, we apply Algorithms 1 and 2 to solve the model (9) with different choices of $K$. More specifically, we test four cases: $K=I$; $K=S$ with $S$ being a binary mask in image inpainting; $K=B$ with $B$ being a blurring matrix; and $K=SB$, the composition of a binary matrix and a blurring matrix. As in the literature [4], we assume that the cartoon and texture parts of an image are not correlated. We thus take the correlation between the cartoon $u$ and the texture $v$, computed by
$$\operatorname{Corr}(u,v):=\frac{\operatorname{cov}(u,v)}{\sqrt{\operatorname{var}(u)\operatorname{var}(v)}}$$
to measure the quality of decomposition, where $\operatorname{var}(\cdot)$ and $\operatorname{cov}(\cdot,\cdot)$ refer to the variance and covariance of the given variables, respectively.
The images to be tested are displayed in Fig. 1. Note that image (c) in Fig. 1 is a synthetic image superposing images (a) and (b) with a ratio of 7:3. In the upcoming experiments, all tested images are re-scaled into $[0,1]$. All the codes were written in MATLAB (R2009b) and run on a personal Lenovo laptop computer with an Intel(R) Core(TM) 2.30 GHz CPU and 8 GB memory.
A. Example 1: K = I
In this setting, the model in (9) reduces to the model in (4), i.e., image decomposition of clean images. We focus on images (d) and (e) in Fig. 1 for this case.
Algorithms 1 and 2 are both applicable to the model in (9). We first test Algorithm 2 ("ADMGB" for short) for
Fig. 1. Test images: (a) 512 × 512 TomJerry image, (b) 512 × 512 Wool image, (c) combined 512 × 512 TomJerry and Wool image, (d) a 256 × 256 part of the Barbara image, (e) 250 × 248 Weave image, (f) 512 × 512 Barbara image, (g) 256 × 256 Wood image, and (h) 256 × 256 Brick image.
Fig. 2. Image decomposition of clean images by ADMGB with different values of p (p = 1, 2, ∞).
the model in (9) with different values of $p$. As analyzed in Subsection II-C, the resulting $z$-subproblems differ for different choices of $p$ (see also Appendix II). When $p=\infty$, we use the algorithm in [36] to obtain an approximate solution of the resulting $z$-subproblem. We take $\alpha=10^{-1}$, $\mu=10^{-3}$, $\beta_1=\beta_2=\beta_3=10$ for the tested values of $p$ in this experiment, and all initial points required by ADMGB are taken as zero vectors (we remark that since the model in (9) is convex and ADMGB converges globally, the numerical results with different initial guesses differ only very slightly up to the
TABLE I
IMAGE DECOMPOSITION OF CLEAN IMAGES

        | Barbara (gray)       | Weave
        | Iter  CPU   Corr     | Iter  CPU   Corr
p = 1   | 29    2.24  0.0284   | 40    10.4  0.0292
p = 2   | 29    2.24  0.0280   | 40    11.2  0.0288
p = ∞   | 29    2.47  0.0280   | 40    12.2  0.0288
Fig. 3. Changes of Corr(u, v) with respect to iterations and computing time for ADMGB with p = 1, 2, ∞.
stopping convergence criterion). The stopping criterion is
$$\text{Tol}=\max\left\{\frac{\|u^{k+1}-u^k\|}{\max\{1,\|u^k\|\}},\ \frac{\|v^{k+1}-v^k\|}{\max\{1,\|v^k\|\}}\right\}\le 10^{-2}. \qquad (21)$$
We report the numerical results in Table I, where the number of iterations (Iter), computing time in seconds (CPU), and the obtained correlations (Corr) of the two decomposed parts are reported when the stopping criterion (21) is reached. The results in Table I show only a tiny difference among the different choices of $p$, which coincides with the conclusion in [37]. In Fig. 2, we display the decomposed cartoons and textures for different values of $p$; and in Fig. 3, we plot the changes of $\operatorname{Corr}(u,v)$ with respect to iterations and computing time for image (e). Neither the decompositions in Fig. 2 nor the curves in Fig. 3 show discernible differences. These results further show that the effectiveness of the model in (9) is not sensitive to the value of $p$.
Now, we compare ADMGB with Algorithm 1 (denoted by ADME), the AM-based method in [3] (denoted by AABC), the proximal forward-backward splitting method
Fig. 4. Changes of Corr(u, v) with respect to iterations and computing time for image (c).
in [8] (denoted by PFBS), and the PDE-based method in [37] (denoted by VO) for image decomposition. For this comparison, we focus on image (c) in Fig. 1. AABC applies an alternating minimization scheme to solve the model in (7), and both of the resulting $u$- and $v$-subproblems must be solved iteratively; we execute 10 iterations of Chambolle's projection method [9] to solve these subproblems at each outer iteration. For the parameters $\lambda$ and $\mu$, the tuned values are $\lambda=10^{-2}$ and $\mu=10$, chosen to obtain a minimal correlation between the cartoon and texture; note that these choices differ from the values suggested in [4]. To implement PFBS, we choose $\kappa=5$ in the model in (8). As in [8], we use the piecewise linear polynomial tight frame (see [32] for its derivation) and the redundant local discrete cosine tight frame to obtain sparse representations of the cartoon and texture, respectively. Specifically, the level of the tight frame for the cartoon is taken as 2; the window size and frequency of the tight frame for the texture are chosen as 64 and 6, respectively. VO is a PDE-based approach: it addresses the model in (4) and solves the corresponding Euler–Lagrange equations. Theoretically, VO approaches the model in (3) as $p\to\infty$; however, a large value of $p$ may cause difficulty in the numerical implementation, and the difference between the parts decomposed with $p=1$ and $p>1$ by VO is not significant, so we choose $p=1$ for VO as suggested in [37]. The parameters in VO are taken as $\lambda=10^{-2}$ and $\mu=15$. For ADME and ADMGB, five parameters must be tuned: the two parameters $\alpha$ and $\mu$ controlling the decomposition into cartoon and texture parts, respectively, and the penalty parameters $\beta_i$ ($i=1,2,3$). We choose $\alpha=10^{-1}$, $\mu=10^{-3}$, $\beta_1=\beta_2=5$, and $\beta_3=30$ for both algorithms. Since the parameter $\gamma$ in ADMGB can be arbitrarily close to 1, we choose $\gamma=1$ empirically. For the
Fig. 5. Image decomposition of the synthetic image with K = I. From left column to right column: decompositions by VO, PFBS, AABC, ADME, and ADMGB.
initial iterates of the above methods, $u^0=v^0=0$ are set in AABC (see also [4]); $u^0=f$, $g_1^0=\frac{\partial_x f}{2\lambda|\nabla f|}$, $g_2^0=\frac{\partial_y f}{2\lambda|\nabla f|}$ are set in VO (see also [37]); and $u^0$, $x^0$, $y^0$, $z^0$, and $\lambda_i^0$ ($i=1,2,3$) are set to zero vectors in ADMGB and ADME.
We plot the changes of $\operatorname{Corr}(u,v)$ with respect to iterations and computing time in Fig. 4. It shows that ADME reaches a relatively lower correlation within fewer iterations and less computing time, and ADMGB is slightly less efficient than ADME. However, as mentioned above, the convergence of ADME is still unclear, while the convergence of ADMGB has been established in [23]; therefore, ADMGB is used as a practical surrogate for ADME in the following experiments. In Fig. 5, we display the cartoons and textures decomposed by the different methods. Some textures remain in the cartoon decomposed by VO, and a blurry part exists in the cartoon decomposed by PFBS; the image decompositions given by AABC, ADME, and ADMGB are very close.
B. Example: K = S
Now we consider image decomposition of an image with missing pixels. For a binary mask $S$, the indices where the entries are zero represent the locations of missing pixels. For this case, we test image (d) in Fig. 1, and comparisons of ADMGB with PFBS are also reported. We generate a 256 × 256 mask with 15.3% missing pixels. The signal-to-noise ratio is defined by
$$\text{SNR}=20\log_{10}\frac{\|x^*\|}{\|x-x^*\|}$$
where $x$ is a reconstructed image and $x^*$ is the true image. The SNR value is used to measure the quality of a reconstructed image. For the corrupted image shown in Fig. 7, the SNR value is 7.94 dB.
We choose $\alpha=10^{-2}$, $\mu=5\times 10^{-3}$, and $\beta_1=\beta_2=\beta_3=5\times 10^{-2}$ for ADMGB. The initial iterates for Algorithm 2 are all zero vectors. The tight frames representing the cartoon and texture required by PFBS are generated in the same way
Fig. 6. Changes of SNR with respect to iterations and computing time for image decomposition with missing pixels.
as in Subsection III-A. The level of the tight frame for the cartoon is taken as 2; the window size and frequency of the tight frame for the texture are chosen to be 64 and 8, respectively. We run both ADMGB and PFBS for 50 iterations and plot the SNR curves of their reconstructed images (i.e., obtained by superposing the decomposed cartoon and texture) with respect to iterations and computing time in Fig. 6. From these curves, we see that ADMGB is faster than PFBS. Finally, we display the decomposed images of the two methods in Fig. 7.
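The SNR measure defined above can be sketched as follows (NumPy; the paper's own experiments used MATLAB):

```python
import numpy as np

def snr_db(x, x_true):
    """SNR = 20 log10(||x*|| / ||x - x*||) in dB, for a reconstruction x."""
    return 20.0 * np.log10(np.linalg.norm(x_true) / np.linalg.norm(x - x_true))
```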
Fig. 7. Image decomposition of an image with missing pixels. Top: original image with missing pixels. Center row: decompositions (u, v, u + v) by PFBS. Bottom row: decompositions (u, v, u + v) by ADMGB.
TABLE II
IMAGE DECOMPOSITION OF BLURRED IMAGES

Image    Blur                SNR₀   Iter  CPU   SNR   Corr
Weave    Gaussian (5,3)      19.3   39    9.6   34.7  0.0767
         Gaussian (9,3)      16.5   40    9.7   26.1  0.0754
         Out-of-focus (3)    18.5   39    9.5   32.9  0.0766
         Out-of-focus (5)    15.9   39    9.6   28.2  0.0708
Barbara  Gaussian (11,3)     16.8   48    39.4  20.5  0.0738
         Gaussian (21,3)     16.5   57    45.3  18.8  0.0971
         Out-of-focus (7)    15.6   44    36.8  24.9  0.0571
         Out-of-focus (11)   14.2   53    41.3  22.4  0.0626
C. Example: K = B
Now we consider the model in (9) with $K=B$, where $B$ is a blurring matrix. We test both out-of-focus blur and Gaussian blur, focusing on images (e)–(f) in Fig. 1. Since the methods VO, AABC, and PFBS of Subsection III-A are not applicable to (9) with $K=B$, we only implement ADMGB for this case. To implement ADMGB, we take $\alpha=5\times 10^{-5}$, $\mu=10^{-5}$, and $\beta_1=\beta_2=\beta_3=10^{-2}$. The initial guess is a zero vector and the stopping criterion is (21).
In Table II, we report the numerical performance of ADMGB for images with different blurs. More specifically, the second column in this table has the following explanation: for example, Gaussian (5,3) means that the blur is generated by the MATLAB function fspecial('gaussian',5,3), and Out-of-focus (7) means that the radius of the out-of-focus blur is 7. Moreover, SNR_0 represents the initial SNR value of the blurred image.
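For readers reimplementing the setup outside MATLAB, fspecial('gaussian', hsize, sigma) returns a truncated 2-D Gaussian normalized to unit sum; a minimal Python sketch of the same construction (an illustrative stand-in, not the authors' code):

```python
import math

def gaussian_kernel(hsize, sigma):
    """Truncated, normalized 2-D Gaussian kernel of size hsize x hsize,
    mirroring MATLAB's fspecial('gaussian', hsize, sigma)."""
    c = (hsize - 1) / 2.0  # center of the kernel
    k = [[math.exp(-((i - c) ** 2 + (j - c) ** 2) / (2.0 * sigma ** 2))
          for j in range(hsize)] for i in range(hsize)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]  # normalize to unit sum

k = gaussian_kernel(5, 3)  # the "Gaussian (5,3)" blur of Table II
```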
In Fig. 8, we display the decomposed cartoons and textures for the cases of Out-of-focus (3) and Out-of-focus (7) in Table II. To view the decomposed cartoon and texture clearly, we also zoom in on the area highlighted by the red rectangle in the Barbara image. The images labeled u + v in Fig. 8 are obtained by superposing u and v.
D. Example: K = SB
Finally, we test the case of the model in (9) with K = SB
where S is a binary matrix and B is a blurring matrix, i.e.,
we consider decomposing images with both blurry and missing
pixels. We focus on images (g)-(h) in Fig. 1 in the experiments,
and we only test ADMGB since, as far as we know, there is no other applicable algorithm for this problem.
Both true images are convolved with the out-of-focus blur with radius 5, and they are further corrupted by 256 × 256 masks. The masks contain 26.4% missing pixels for the images Wood and Brick. The initial SNR values of the corrupted images are 5.35 dB for Wood and 3.24 dB for Brick, respectively. See the first column in Fig. 9.
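The degradation K = SB can be read as "blur first, then drop pixels." A toy sketch of applying such an operator to an image (illustrative only; the paper works with matrix representations of S and B, and the zero-padded boundary handling here is an assumption):

```python
def apply_blur(img, kernel):
    """2-D correlation with zero padding (equals convolution for symmetric
    kernels); a stand-in for multiplying by the blurring matrix B."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    oy, ox = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(kh):
                for dx in range(kw):
                    yy, xx = y + dy - oy, x + dx - ox
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += kernel[dy][dx] * img[yy][xx]
            out[y][x] = acc
    return out

def apply_mask(img, mask):
    """Keep a pixel where mask == 1, zero it where mask == 0 (the matrix S)."""
    return [[p * m for p, m in zip(prow, mrow)] for prow, mrow in zip(img, mask)]

# K = SB: blur the image, then remove pixels.
img = [[1.0, 2.0], [3.0, 4.0]]
kernel = [[1.0]]           # identity "blur" just for illustration
mask = [[1, 0], [1, 1]]    # one missing pixel
print(apply_mask(apply_blur(img, kernel), mask))  # -> [[1.0, 0.0], [3.0, 4.0]]
```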
We take the two model parameters as 5 × 10^{-5} and 10^{-5}, and set the three penalty parameters equal to 10^{-2} for ADMGB; the initial iterates are all zero vectors. Recall that
Fig. 8. Image decomposition on blurred image. Top row: For image Weave. Center row: For image Barbara. Bottom row: Zooms on the decomposed
Barbara.
TABLE III
CORRELATIONS, l2-NORM OF TEXTURE COMPONENT, AND TUNED PARAMETERS FOR DECOMPOSITIONS BY DIFFERENT METHODS

K = S:
              Combined Model              Separated Procedure I           Separated Procedure II
Corr(u, v)    0.0264                      0.0302                          0.0651
||v||_2       23.97                       22.76                           18.13
Parameters    (10^{-2}, 5 × 10^{-3})      Task 1 (see [10])*: (50, 10)    Task 1: (10^{-1}, 10^{-3})
                                          Task 2: (10^{-1}, 10^{-3})      Task 2 (see [10])*: (100, 10)

K = SB:
              Combined Model              Separated Procedure I           Separated Procedure II
Corr(u, v)    0.0397                      0.0420                          0.0935
||v||_2       17.46                       15.92                           11.53
Parameters    (5 × 10^{-5}, 5 × 10^{-3})  Task 1 (see [10])*: (50, 10)    Task 1: (10^{-1}, 10^{-3})
                                          Task 2: (10^{-1}, 10^{-3})      Task 2 (see [10])*: (100, 10)

* Refers to the notations of the regularization parameters used in [10].
the y-subproblem in (19) should be solved iteratively. In our experiments, we call the MATLAB function pcg to obtain an approximate solution of this subproblem up to a prescribed tolerance. We run ADMGB for 50 iterations and plot the evolution of SNR with respect to iterations and computing time for different tolerance values in Fig. 10. The numerical results with tolerance 10^{-4} after 50 ADMGB iterations are illustrated in Fig. 9.
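MATLAB's pcg iterates until the relative residual falls below the given tolerance. As a self-contained illustration of this kind of inexact inner solve, here is a plain conjugate gradient sketch in Python (not the authors' implementation), shown on a small symmetric positive definite system:

```python
import math

def cg(A, b, tol=1e-4, max_iter=1000):
    """Conjugate gradient for A x = b, with A symmetric positive definite
    given as a dense list of rows. Stops when ||r|| / ||b|| <= tol."""
    n = len(b)
    x = [0.0] * n
    r = list(b)          # residual b - A x, with x = 0 initially
    p = list(r)
    rs = sum(v * v for v in r)
    nb = math.sqrt(sum(v * v for v in b))
    for _ in range(max_iter):
        if math.sqrt(rs) <= tol * nb:
            break
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# Solve a 2x2 SPD system; the exact solution is (1/11, 7/11).
x = cg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], tol=1e-12)
```

A looser tolerance cheapens each inner solve at the cost of a less accurate subproblem solution, which is exactly the trade-off explored in Fig. 10.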
E. Numerical Verification of the Model in (9)

The model in (9) considers the tasks of image restoration (inpainting and/or deblurring) and image decomposition together. As analyzed in [10], it is interesting to study how this combined computational model competes with a separated treatment where the tasks of image restoration and image decomposition are tackled individually. In this subsection, we focus on the cases of K = S and K = SB to illustrate
the effectiveness of the combined model in (9). We compare
it with the following two separated computational procedures.
1) Separated Procedure I. We first perform inpainting and/or deblurring of the corrupted image to carry out the image restoration by using the method given in [10], and then decompose the restored image to obtain the cartoon and texture components by using the model in (9) with K = I.
Fig. 9. Image decomposition on blurred image with missing pixels. From left column to right column: blurred images with missing pixels, cartoons, textures,
reconstructed images (cartoon+texture).
Fig. 10. Changes of SNR with respect to iterations and computing time for image decomposition when K = SB with different tolerances (10^{-2}, 5 × 10^{-3}, 10^{-3}, 5 × 10^{-4}, 10^{-4}) in solving the y-subproblem (for the Wood image).
2) Separated Procedure II. We first decompose the corrupted image to derive the cartoon and texture components by the model in (9) with K = I, and then carry out inpainting and/or deblurring on the cartoon and texture components by using the method given in [10].
For the cases of K = S and K = SB, we use a mask with 9.2% missing pixels for S and an out-of-focus blur with radius 3 for B. The corrupted images for both cases are listed in column (a) of Fig. 11. The initial SNR values of the corrupted images are 10.87 dB and 9.08 dB for the cases of K = S and K = SB, respectively. In the numerical experiments, the parameters in the combined model are chosen manually by testing different values of the two model parameters in order to determine the result with the smallest correlation between the decomposed cartoon and texture components. For both separated Procedures I and II, we consider the parameters in the decomposition model in (9) and in the restoration method given in [10] together, and choose them manually by testing different values in order to obtain the smallest correlation between the cartoon component and the texture component. The tuned values of these parameters are given in Table III.
The initial guesses of all the methods are set to be zero vectors. When K = SB, the y-subproblem in (19) for the combined model is solved iteratively by pcg with tolerance 10^{-4}. The decomposed images by the combined model are listed in column (a) of Fig. 12. For separated Procedure I, we first restore the corrupted images by applying the method in [10] to get the restorations in column (b) of Fig. 11; then the restored images are decomposed into the cartoons and textures in column (b) of Fig. 12. For separated Procedure II, we first decompose the corrupted images by the model in (9) with K = I to derive the cartoon components (column (c) of Fig. 11) and texture components (column (d) of Fig. 11); then we respectively restore these cartoon and texture components by applying the method in [10] to get the restored cartoons and textures in column (c) of Fig. 12.
To compare the effectiveness of the decompositions by different approaches, we list in Table III the correlation values of the decomposed cartoon and texture components, together with the l2-norms of the corresponding texture vectors. The data in this table show that the combined model in (9) obtains decomposed images with lower correlations than those by
(a) (b) (c) (d)
Fig. 11. Corrupted images and restored images by separated procedures. Top row: for K = S. Bottom row: for K = SB. (a) Corrupted images. (b) Restored
images after the restoration in separated Procedure I. (c) Cartoon components after the decomposition in separated Procedure II. (d) Texture components after
the decomposition in separated Procedure II.
(a) (b) (c)
Fig. 12. Decomposed images for K = S (top two rows) and K = SB (bottom two rows). (a) Combined model. (b) Separated
Procedure I. (c) Separated Procedure II.
separated Procedures I and II. The decomposition results can
be seen from Fig. 12. In Fig. 13, we superpose the cartoon and texture components decomposed by the combined model and by separated Procedures I and II for the case K = SB. From this figure, we observe that the SNR value of the superposed image from the combined model in (9) is higher than those by separated Procedures I and II. In summary, we verify the necessity of considering the combined model in (9) to
Fig. 13. Superposed images for K = SB from decomposed cartoons and textures. (a) Combined model (SNR = 16.11 dB). (b) Separated Procedure I (SNR = 15.70 dB). (c) Separated Procedure II (SNR = 12.72 dB).
decompose an image with blur and/or missing pixels. This
assertion coincides with the observation in [12].
IV. CONCLUSION

In this paper, we have developed a coupled model for decomposing the cartoon and texture components of an image with blurry and/or missing pixels. An efficient numerical algorithm with guaranteed convergence is applied to solve the proposed model. Experimental results have shown the effectiveness of the new model and the efficiency of the adopted algorithm. In our numerical experiments, we found that the proposed model may not be effective when a large region of pixels is missing. For this case, an alternative strategy is to add a curvature term to the total variation term (e.g., [6], [10], [11]) or to use some nonconvex models (e.g., [25]). These are topics of our future research.
APPENDIX I
CONVERGENCE OF ALGORITHM 2

The convergence of Algorithm 2 has been analyzed in [23], and here we only summarize it in the following theorems without detailed proof.

Theorem 1: If ||v^k - ṽ^k||_H = 0, it follows that the iterate (x̃_1^k, x̃_2^k, x̃_3^k, λ̃^k) satisfies (12), and hence (x̃_1^k, x̃_2^k, x̃_3^k) is a solution of (10).

Theorem 2: Let {(x_1^k, x_2^k, x_3^k, λ^k)} be the sequence generated by Algorithm 2. Then we have:
1) The sequence {v^k} is bounded.
2) lim_{k→∞} ||v^k - ṽ^k||_H = 0.
3) Any cluster point of the sequence {(x̃_1^k, x̃_2^k, x̃_3^k, λ̃^k)} satisfies (12).
4) The sequence {v^k} converges to some point v^∞ ∈ V*.
APPENDIX II
SOLVING THE z-SUBPROBLEM (20) WITH DIFFERENT VALUES OF p

Without loss of generality, we reduce the z-subproblem (20) to the following simple form:

    z^k = arg min_z { τ |z|_p + (1/2) ||z - b||_2^2 }    (22)

where τ is a positive scalar and b = (b_1, b_2) ∈ R^n × R^n. It follows from [13], [27] that:
1) If p = 1, we have z^k = S_τ(b), where S_τ(·) is the soft-thresholding operator defined in (18).
2) If p = 2, we have

    z^k = b - min{|b|, τ} b / |b|.

3) If p = ∞, we have

    z^k = b - P_Ω(b)

where P_Ω(·) is the projection operator onto Ω = {z ∈ R^n × R^n : |z|_1 ≤ τ}. It can be efficiently computed by many existing subroutines, e.g., [15], [36].
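The p = 1 and p = 2 cases are simple closed-form shrinkages. A Python sketch (the positive scalar of (22) is denoted tau here, an assumed symbol; components are treated pixelwise, with the p = 2 case shrinking each pair (b1[i], b2[i]) by its Euclidean length):

```python
import math

def prox_p1(b, tau):
    """p = 1: componentwise soft-thresholding S_tau(b)."""
    return [math.copysign(max(abs(x) - tau, 0.0), x) for x in b]

def prox_p2(b1, b2, tau):
    """p = 2: blockwise shrinkage z = b - min(|b|, tau) * b / |b|,
    applied per pixel with |b| the Euclidean length of (b1[i], b2[i])."""
    out1, out2 = [], []
    for x, y in zip(b1, b2):
        mag = math.hypot(x, y)
        if mag == 0.0:
            out1.append(0.0); out2.append(0.0)
            continue
        scale = min(mag, tau) / mag
        out1.append(x - scale * x)
        out2.append(y - scale * y)
    return out1, out2

print(prox_p1([3.0, -2.0, 0.5], 1.0))  # -> [2.0, -1.0, 0.0]
```

The p = ∞ case needs a projection onto the l1-ball, for which dedicated routines such as those in [15], [36] are preferable.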
REFERENCES
[1] R. Adams, Sobolev Spaces (Pure and Applied Mathematics). San Francisco, CA, USA: Academic, 1975.
[2] J.-F. Aujol and A. Chambolle, "Dual norms and image decomposition models," Int. J. Comput. Vis., vol. 63, no. 1, pp. 85-104, Jun. 2005.
[3] J.-F. Aujol, G. Aubert, L. Blanc-Féraud, and A. Chambolle, "Image decomposition into a bounded variation component and an oscillating component," J. Math. Imaging Vis., vol. 22, no. 1, pp. 71-88, Jan. 2005.
[4] J.-F. Aujol, G. Gilboa, T. Chan, and S. Osher, "Structure-texture image decomposition: Modeling, algorithms, and parameter selection," Int. J. Comput. Vis., vol. 67, no. 1, pp. 111-136, Apr. 2006.
[5] J. Barzilai and J. Borwein, "Two-point step size gradient methods," IMA J. Numer. Anal., vol. 8, no. 1, pp. 141-148, 1988.
[6] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, "Image inpainting," in Proc. 27th Annu. Conf. Comput. Graph. Interact. Tech., 2000, pp. 417-424.
[7] M. Bertalmio, L. Vese, G. Sapiro, and S. Osher, "Simultaneous structure and texture image inpainting," IEEE Trans. Image Process., vol. 12, no. 8, pp. 882-889, Aug. 2003.
[8] J.-F. Cai, R. H. Chan, and Z. Shen, "Simultaneous cartoon and texture inpainting," Inverse Prob. Imaging, vol. 4, no. 3, pp. 379-395, Aug. 2010.
[9] A. Chambolle, "An algorithm for total variation minimization and applications," J. Math. Imaging Vis., vol. 20, nos. 1-2, pp. 89-97, Jan. 2004.
[10] R. H. Chan, J. Yang, and X. Yuan, "Alternating direction method for image inpainting in wavelet domain," SIAM J. Imaging Sci., vol. 4, no. 3, pp. 807-826, Sep. 2011.
[11] T. F. Chan, S. H. Kang, and J. Shen, "Euler's elastica and curvature-based inpainting," SIAM J. Appl. Math., vol. 63, no. 2, pp. 564-592, 2002.
[12] T. F. Chan, A. M. Yip, and F. E. Park, "Simultaneous total variation image inpainting and blind deconvolution," Int. J. Imag. Syst. Tech., vol. 15, no. 1, pp. 92-102, Jul. 2005.
[13] P. L. Combettes and V. R. Wajs, "Signal recovery by proximal forward-backward splitting," SIAM J. Multiscale Model. Simul., vol. 4, no. 4, pp. 1168-1200, 2005.
[14] I. Daubechies and G. Teschke, "Variational image restoration by means of wavelets: Simultaneous decomposition, deblurring and denoising," Appl. Comput. Harmon. Anal., vol. 19, no. 1, pp. 1-16, 2005.
[15] J. Duchi, S. Gould, and D. Koller, "Projected subgradient methods for learning sparse Gaussians," in Proc. Conf. Uncertainty Artif. Intell., 2008, pp. 1-8.
[16] M. J. Fadili, J. L. Starck, J. Bobin, and Y. Moudden, "Image decomposition and separation using sparse representations: An overview," Proc. IEEE, vol. 98, no. 6, pp. 983-994, Jun. 2010.
[17] D. Gabay and B. Mercier, "A dual algorithm for the solution of nonlinear variational problems via finite-element approximations," Comput. Math. Appl., vol. 2, no. 1, pp. 17-40, 1976.
[18] R. Glowinski and A. Marrocco, "Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires," Rev. Française d'Aut. Inf. Rech. Opér., vol. 9, no. 2, pp. 41-76, 1975.
[19] G. H. Golub and C. F. Van Loan, Matrix Computations. Baltimore, MD, USA: Johns Hopkins Univ. Press, 1996.
[20] P. Hansen, J. Nagy, and D. O'Leary, Deblurring Images: Matrices, Spectra, and Filtering. Philadelphia, PA, USA: SIAM, 2006.
[21] B. S. He, "Parallel splitting augmented Lagrangian methods for monotone structured variational inequalities," Comput. Optim. Appl., vol. 42, no. 2, pp. 195-212, Mar. 2009.
[22] B. S. He, M. Tao, and X. M. Yuan, "A splitting method for separable convex programming," IMA J. Numer. Anal., 2011, under revision. [Online]. Available: http://www.optimization-online.org/ARCHIVE-DIGEST/2010-06.html
[23] B. S. He, M. Tao, and X. M. Yuan, "Alternating direction method with Gaussian back substitution for separable convex programming," SIAM J. Optim., vol. 22, no. 2, pp. 313-340, Apr. 2012.
[24] Y. Kim and L. Vese, "Image recovery using functions of bounded variation and Sobolev spaces of negative differentiability," Inverse Prob. Imaging, vol. 3, no. 1, pp. 43-68, Feb. 2009.
[25] P. Maurel, J.-F. Aujol, and G. Peyré, "Locally parallel texture modeling," SIAM J. Imag. Sci., vol. 4, no. 1, pp. 413-447, Mar. 2011.
[26] Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations (University Lecture Series), vol. 22. Providence, RI, USA: AMS, 2002.
[27] J. Moreau, "Proximité et dualité dans un espace Hilbertien," Bulletin de la Société Mathématique de France, vol. 93, pp. 273-299, 1965.
[28] M. K. Ng, P. Weiss, and X. M. Yuan, "Solving constrained total-variation problems via alternating direction methods," SIAM J. Sci. Comput., vol. 32, no. 5, pp. 2710-2736, 2010.
[29] S. Osher, A. Solé, and L. Vese, "Image decomposition and restoration using total variation minimization and the H^{-1} norm," Multiscale Model. Simul., vol. 1, no. 3, pp. 349-370, 2003.
[30] Y. Peng, A. Ganesh, J. Wright, W. Xu, and Y. Ma, "Robust alignment by sparse and low-rank decomposition for linearly correlated images," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2010, pp. 763-770.
[31] T. Rockafellar, Convex Analysis. Princeton, NJ, USA: Princeton University Press, 1970.
[32] A. Ron and Z. Shen, "Affine systems in L_2(R^d): The analysis of the analysis operator," J. Funct. Anal., vol. 148, no. 2, pp. 408-447, Aug. 1997.
[33] L. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Phys. D, vol. 60, no. 1, pp. 259-268, 1992.
[34] J. L. Starck, M. Elad, and D. Donoho, "Image decomposition via the combination of sparse representations and a variational approach," IEEE Trans. Image Process., vol. 14, no. 10, pp. 1570-1582, Oct. 2005.
[35] M. Tao and X. M. Yuan, "Recovering low-rank and sparse components of matrices from incomplete and noisy observations," SIAM J. Optim., vol. 21, no. 1, pp. 57-81, 2011.
[36] E. van den Berg and M. P. Friedlander, "Probing the Pareto frontier for basis pursuit solutions," SIAM J. Sci. Comput., vol. 31, no. 2, pp. 890-912, 2008.
[37] L. Vese and S. Osher, "Modeling textures with total variation minimization and oscillating patterns in image processing," J. Sci. Comput., vol. 19, no. 1, pp. 553-572, 2003.
Michael K. Ng received the B.Sc. and M.Phil.
degrees from the University of Hong Kong, Hong
Kong, in 1990 and 1992, respectively, and the Ph.D.
degree from the Chinese University of Hong Kong,
Hong Kong, in 1995.
He is currently a Professor with the Department of
Mathematics, Hong Kong Baptist University, Hong
Kong. He was a Research Fellow with the Computer
Sciences Laboratory, Australian National University,
from 1995 to 1997, and an Assistant Professor and
an Associate Professor with the University of Hong
Kong from 1997 to 2005. His current research interests include bioinformatics, data mining, image processing, and scientific computing.
Dr. Ng is on the editorial boards of several international journals.
Xiaoming Yuan received the Bachelor's and Master's degrees from Nanjing University, Nanjing, China, and the Ph.D. degree from the City University of Hong Kong, Hong Kong.
He is currently an Assistant Professor with the
Department of Mathematics, Hong Kong Baptist
University, Hong Kong. His current research inter-
ests include numerical optimization, with its appli-
cations in image processing and statistics.
Wenxing Zhang received the B.S. degree in math-
ematics from Shandong Normal University, Jinan,
China, and the Ph.D. degree in computational math-
ematics from Nanjing University, Nanjing, China, in
2006 and 2012, respectively.
He is currently a Lecturer with the University
of Electronic Science and Technology of China,
Chengdu, China. From 2010 to 2011, he was a
Research Assistant with the Department of Mathe-
matics, Hong Kong Baptist University, Hong Kong.
His current research interests include optimization
theory and algorithms, and their applications in image processing.