
ACTIVE CONTOUR WITHOUT EDGES

Last updated 03/23/2009


By Yue Wu
All rights reserved

1. Problem Introduction
2. Mathematical Description
3. Numerical Algorithm
4. Evaluations on real images
5. Differences between methods
6. References

1. Problem Introduction
Simply speaking, the problem is: given an image I, what is its segmentation and how do we find it?
For example, if the following brain image is given, the problem could be described as "what is the image component inside the skull?", i.e. "how do we extract the part of the brain image bounded by the 'white' skull contour?"

fig.1 Original MRI brain image


There are many approaches to obtain a segmentation of the skull shape. Even simple ones, like a Sobel mask, will give you a good detection of the skull contour. For a better edge map, you may use the Canny edge detector. From such an edge map, you can obtain the segmentation below.

fig.2 Skull segmentation of the brain image


Of course, the above description of segmentation for this brain image is not that precise, because we want to know the details of the brain, not everything inside the skull. However, segmenting the brain itself is not an easy problem. Although you could use 'high edge value' and 'low edge value' thresholds to distinguish the contour of the skull from that of the brain, the method used here is called active contours, more precisely the Chan-Vese active contour without edges.

fig.3 Brain segmentation of the brain image

Clearly the latter segmentation conveys much more useful information and gives more details.
In this article, we mainly talk about the idea, the algorithm, and the implementation of this Chan-Vese active contour.

2. Mathematical Description
The Chan-Vese active contour algorithm comes from the segmentation problem formulated by Mumford and Shah. For lack of formula input tools on this site, image files are used for the formulas and Greek symbols.

Mumford and Shah's minimization problem:
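The formula image is not reproduced here; for reference, the Mumford-Shah functional in the standard form quoted in [1] is:

```latex
F^{MS}(u, C) = \mu \,\mathrm{Length}(C)
  + \lambda \int_{\Omega} \left| u_0(x,y) - u(x,y) \right|^2 dx\,dy
  + \int_{\Omega \setminus C} \left| \nabla u(x,y) \right|^2 dx\,dy
```

Here u0 is the observed image, u is a piecewise-smooth approximation of it, and C is the discontinuity set (the segmenting contour).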

Chan-Vese active contours without edges
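The energy functional also appeared as an image; in its standard form from [1], the Chan-Vese model minimizes:

```latex
F(c_1, c_2, C) = \mu \,\mathrm{Length}(C) + \nu \,\mathrm{Area}\bigl(\mathrm{inside}(C)\bigr)
  + \lambda_1 \int_{\mathrm{inside}(C)} \left| u_0(x,y) - c_1 \right|^2 dx\,dy
  + \lambda_2 \int_{\mathrm{outside}(C)} \left| u_0(x,y) - c_2 \right|^2 dx\,dy
```

The last two integrals are the fitting terms F1 and F2 discussed below; c1 and c2 are the average intensities inside and outside the contour C.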

In order to understand what's going on with this idea, let's look at some figures from the original paper.
In the paper Active contours without edges [1], the first two fitting terms are interpreted as two forces. The first term is the force that shrinks the contour; the second term is the force that expands it. These two forces balance when the contour reaches the boundary of the object of interest.

For example, consider the following four cases. Simply define everything in black = -1 and everything in gray = +1. Here c1 and c2 can be interpreted as the mean value of everything inside the contour C and the mean value of everything outside the contour C, respectively. u0 stands for the entire image.
Case one:
The initial contour covers the whole object (-1) and some gray region (+1). Thus c1 is close to zero and c2 is 1. Notice that each integral above is taken only over the inside region or the outside region of u0. Clearly, if we subtract c2 from everything outside the contour, we get zero, so the second term F2 is zero. Because c1 is about zero, when we subtract c1 from everything inside the contour and sum the squares as the formula shows, we get a large positive number, so F1 > 0.
Now F1 > 0 but F2 = 0. Therefore, the contour will shrink in the next step.
You should now be able to explain what happens to the contours in the other cases.

Finally, when our contour C reaches the boundary of the object, as case four shows, F1 = 0 and F2 = 0. As a result, the contour C reaches its equilibrium; we have found the contour of the object and can obtain its segmentation.
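The force-balance reasoning above is easy to check numerically. The sketch below is my own illustration (the function name `fitting_energies` is not from the paper): a gray image with a black square object, a rectangular 'contour' given as a boolean mask, and the two fitting energies computed with λ1 = λ2 = 1.

```python
import numpy as np

def fitting_energies(u0, inside):
    """Compute F1 (inside fit) and F2 (outside fit) for a contour mask.

    u0     : 2D image array
    inside : boolean mask, True for pixels inside the contour C
    """
    c1 = u0[inside].mean()       # mean intensity inside C
    c2 = u0[~inside].mean()      # mean intensity outside C
    f1 = ((u0[inside] - c1) ** 2).sum()    # shrinking force term
    f2 = ((u0[~inside] - c2) ** 2).sum()   # expanding force term
    return f1, f2

# Gray background (+1) with a black square object (-1).
u0 = np.ones((40, 40))
u0[15:25, 15:25] = -1.0

# Case one: the contour covers the whole object plus some gray region.
yy, xx = np.mgrid[0:40, 0:40]
inside = (abs(yy - 20) < 15) & (abs(xx - 20) < 15)

f1, f2 = fitting_energies(u0, inside)
print(f1 > 0, np.isclose(f2, 0.0))   # F1 > 0 but F2 = 0: the contour shrinks
```

Since everything outside the contour is uniform gray, the outside fit is exactly zero, while the mixed inside region gives a large F1, matching the case-one argument.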

Level set formulation
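The level-set formulas were also embedded as images. In the standard formulation from [1], the contour C is replaced by the zero level set of a function φ (φ > 0 inside, φ < 0 outside), using the Heaviside function H and the Dirac delta δ:

```latex
F(c_1, c_2, \phi) = \mu \int_\Omega \delta(\phi)\,|\nabla\phi|\,dx\,dy
  + \nu \int_\Omega H(\phi)\,dx\,dy
  + \lambda_1 \int_\Omega |u_0 - c_1|^2 \, H(\phi)\,dx\,dy
  + \lambda_2 \int_\Omega |u_0 - c_2|^2 \,\bigl(1 - H(\phi)\bigr)\,dx\,dy
```

```latex
c_1(\phi) = \frac{\int_\Omega u_0\, H(\phi)\,dx\,dy}{\int_\Omega H(\phi)\,dx\,dy},
\qquad
c_2(\phi) = \frac{\int_\Omega u_0\,\bigl(1 - H(\phi)\bigr)\,dx\,dy}{\int_\Omega \bigl(1 - H(\phi)\bigr)\,dx\,dy}
```

Gradient descent on this energy gives the evolution equation used in the numerical algorithm below:

```latex
\frac{\partial\phi}{\partial t} = \delta_\varepsilon(\phi)
  \left[\mu\,\mathrm{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right)
  - \nu - \lambda_1 (u_0 - c_1)^2 + \lambda_2 (u_0 - c_2)^2\right]
```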

3. Numerical Algorithms

Analytic approximations for the Heaviside and Dirac delta functions

Note: there are many ways to obtain a 'continuous' version of the Heaviside function and its corresponding delta function.
For more information please see Wikipedia.
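The regularized pair used in [1] (their H_{2,ε} and δ_ε) can be coded in a few lines; the function names below are mine:

```python
import numpy as np

def heaviside_eps(z, eps=1.0):
    """Smoothed Heaviside: H_eps(z) = 1/2 * (1 + (2/pi) * arctan(z/eps))."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(z / eps))

def delta_eps(z, eps=1.0):
    """Smoothed Dirac delta, the derivative of heaviside_eps w.r.t. z."""
    return (1.0 / np.pi) * eps / (eps ** 2 + z ** 2)

print(heaviside_eps(0.0))     # exactly 0.5 at the interface
print(heaviside_eps(100.0))   # close to 1 far inside the contour
print(delta_eps(0.0))         # peak value 1/(pi*eps)
```

Unlike a compactly supported delta, δ_ε is nonzero everywhere, which [1] notes helps the algorithm detect interior contours regardless of the initial curve.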

Analytic approximations for image curvature
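The curvature term div(∇φ/|∇φ|) can be approximated with central finite differences. This is a minimal numpy sketch, not the paper's exact discretization:

```python
import numpy as np

def curvature(phi, eps=1e-8):
    """Approximate div(grad(phi)/|grad(phi)|) with central differences."""
    phi_y, phi_x = np.gradient(phi)           # derivatives along rows, cols
    norm = np.sqrt(phi_x ** 2 + phi_y ** 2) + eps
    nx, ny = phi_x / norm, phi_y / norm       # unit normal components
    _, nxx = np.gradient(nx)                  # d(nx)/dx
    nyy, _ = np.gradient(ny)                  # d(ny)/dy
    return nxx + nyy

# Sanity check: for a circle of radius r, the curvature is 1/r.
y, x = np.mgrid[-20:21, -20:21].astype(float)
phi = np.sqrt(x ** 2 + y ** 2) - 10.0         # signed distance to a radius-10 circle
k = curvature(phi)
print(k[20, 30])   # value at the point (x, y) = (10, 0) on the circle
```

The small `eps` guards against division by zero where the gradient vanishes (e.g. at the center of the distance function).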

Approximation of Phi
There are several ways to approximate this Phi (the level set function, whose zero level set is the contour).
One way is discussed in the paper; for more details, please check the reference.
Briefly speaking, by discretizing Phi, the authors arrive at the following equation. As you can see, this is an implicit equation in Phi(n+1). The good news is that the equation is still linear and thus solvable. The bad news is that it needs more steps and more complicated calculations.

If you want to solve the equation this way, please check the paper; it has all the necessary steps.
All in all, this equation is solvable, so we can compute Phi(n+1) and the problem is solved. (Phi stops evolving once it reaches the boundaries of the object of interest.)
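As a lighter-weight alternative to the paper's semi-implicit scheme, an explicit update can be sketched. This is my own simplified illustration, not the scheme from [1]: it drops the length and area penalties (μ = ν = 0) and replaces δ_ε(φ) by 1, leaving pure region competition.

```python
import numpy as np

def chan_vese_sketch(u0, n_iter=300, dt=1.0, lam1=1.0, lam2=1.0):
    """Simplified explicit Chan-Vese evolution (mu = nu = 0, delta_eps = 1)."""
    h, w = u0.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    # Initialize phi as a signed distance to a centered circle, positive inside.
    phi = 0.4 * min(h, w) - np.sqrt((y - h / 2) ** 2 + (x - w / 2) ** 2)
    for _ in range(n_iter):
        inside = phi > 0
        c1 = u0[inside].mean()        # mean intensity inside the contour
        c2 = u0[~inside].mean()       # mean intensity outside the contour
        # Region-competition force: positive where the pixel fits c1 better.
        force = -lam1 * (u0 - c1) ** 2 + lam2 * (u0 - c2) ** 2
        phi = phi + dt * force
    return phi > 0                    # final segmentation mask

# Bright disk on a dark background.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
disk = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
u0 = disk.astype(float)

mask = chan_vese_sketch(u0)
print((mask == disk).mean())   # → 1.0, the disk is recovered exactly
```

Without the curvature term the contour is not smoothed, so this sketch only behaves well on clean, piecewise-constant images; the full scheme in the paper adds the length penalty and solves a linear system per step.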

4. Evaluation on real images


Chan-Vese active contours without edges [1] on grayscale images

If you are interested in how to get these results and in their Matlab code, please check the page: MATLAB CODES FOR ACTIVE CONTOURS
Here is a video clip on YouTube uploaded by me.
In this clip, you will see how we find the sketch of the Mickey.

Chan-Vese active contours without edges for vector images [2]

Active contours without edges for vector images [2] is similar to active contours without edges [1].
The major difference is:

Here each c(i) is defined per image component. In an RGB image, for example, there are three color components R, G and B, and the c(i) are the mean values inside (or outside) the contour for each color component.
In the previous Chan-Vese active contours without edges, we only apply the 'force balance' to one 2D image. For vector images, we apply the same idea to a three-component image, which can be considered as three 2D images. Because the algorithm is formulated on a single 2D image, it cannot handle the three-component image directly.
You might think we should apply the algorithm to each component separately and reach a nice contour that way. Unfortunately, if you do that, you will get one contour per component (2D image), and these three contours will probably not be identical.
Therefore the right way to apply the 2D algorithm to this problem is different. Recall that our objective is still to get one, and only one, neat contour around the objects of interest.

As you can see in the formula above, the force terms can be perfectly interpreted in dynamics. We are still looking for two forces: one tending to shrink the contour and the other tending to expand it. Instead of one component, we now have three. We first find the force components on the three 2D images, then calculate their resultant force. Finally, the resultant force controls the movement of our contour.
This approach works very well on noisy images, where some or many pixels have been corrupted by noise. Generally speaking, many edge detection algorithms will mistakenly catch that noise, because the noisy pixels are not distinguishable from the informative pixels. With this algorithm, however, unless the noise contaminates all three components at the same pixel, the edge detection can resist its influence.
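The resultant force over the channels can be sketched as follows. This is my own minimal illustration of the vector-valued idea in [2], with equal channel weights and λ = 1 (the name `vector_force` is not from the paper):

```python
import numpy as np

def vector_force(u0, inside):
    """Average region-competition force over all channels of a vector image.

    u0     : array of shape (h, w, n_channels)
    inside : boolean mask of shape (h, w), True inside the contour
    """
    forces = []
    for i in range(u0.shape[2]):
        chan = u0[:, :, i]
        c1 = chan[inside].mean()     # per-channel mean inside the contour
        c2 = chan[~inside].mean()    # per-channel mean outside the contour
        forces.append(-(chan - c1) ** 2 + (chan - c2) ** 2)
    return np.mean(forces, axis=0)   # resultant force, shape (h, w)

# Red square on a blue background, with noise in one channel of one pixel.
u0 = np.zeros((32, 32, 3))
u0[:, :, 2] = 1.0                    # blue background
u0[10:22, 10:22] = [1.0, 0.0, 0.0]   # red object
u0[5, 5, 1] = 1.0                    # single-channel noise outside the object

yy, xx = np.mgrid[0:32, 0:32]
inside = (abs(yy - 16) < 10) & (abs(xx - 16) < 10)
f = vector_force(u0, inside)
print(f[16, 16] > 0, f[5, 5] < 0)    # object pixel pulled in, noisy pixel kept out
```

Because the noisy pixel differs from the background in only one of its three channels, the other two channels outvote it in the resultant force, which is exactly the denoising behavior described above.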
Here is an example: the right image has had two types of noise added in two components and has become really noisy.

We apply Chan-Vese active contours without edges for vector images to this noisy image, and the following is our result.

The result is really nice.

Multiphase Chan-Vese active contours without edges [3]


Simply speaking, in the two approaches discussed above, we only try to find one 'active contour'. As we can see from the results, one active contour can only tell us two things: what is the foreground of the image and what is the background. They fail to give more detail.
Therefore, for any image which has more than two colors, the previous approaches cannot distinguish them all. They will either group two colors into the foreground or catch two colors in the background.
How do we solve this problem? Naturally, we think about getting more contours. If we have N contours, we should be able to distinguish 2^N colors.
I should be able to add more information on this topic later.
If you are interested in this part, please see its reference paper [3].
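The 2^N counting can be illustrated directly: with N level set functions, each pixel gets an N-bit region label from the signs of the φ's. A small sketch with my own naming:

```python
import numpy as np

def phase_labels(phis):
    """Combine the signs of N level set functions into up to 2**N region labels."""
    labels = np.zeros(phis[0].shape, dtype=int)
    for bit, phi in enumerate(phis):
        labels += (phi > 0).astype(int) << bit   # one bit per level set function
    return labels

# Two level set functions on a 2x2 grid give four distinct regions.
phi1 = np.array([[ 1.0, -1.0], [ 1.0, -1.0]])
phi2 = np.array([[ 1.0,  1.0], [-1.0, -1.0]])
print(phase_labels([phi1, phi2]))
# → [[3 2]
#    [1 0]]
```

Each of the four sign combinations (+,+), (-,+), (+,-), (-,-) gets its own label, which is exactly how N = 2 phases can distinguish up to four colors.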
Here are two examples:

The following image has several complicated structures: a triple junction, a quadruple junction, different colors and different shapes. One-phase active contours cannot distinguish all objects in the image, while two-phase active contours can achieve this goal.

The result above shows that multiphase active contours can distinguish many more objects and automatically detect every edge in the input image. This method easily handles complicated topological changes.

In this photo, the background is sky and the foreground is two red and two yellow flowers.
Using multiphase active contours, we get the following segmentation.

Really cool, isn't it?

5. Differences between methods


Although we have already said something about the differences between these methods, you might still wonder: what exactly are the differences? Which one gives the best quality and which one works fastest? This section tries to show their differences.
1. Gray image or color image?
Treating the image as a gray image or as a color image leads to two different approaches.
If you treat the image as a gray one, then you only need to care about the two image forces mentioned in the previous section.
However, if you'd like to treat the image as a color one, then you have to apply the vector-image idea, which calculates all components of the forces.
Generally speaking, treating the image as gray simplifies the segmentation problem and leads to simpler calculation, but loses some image information. Treating the image as color helps to denoise the image, but leads to more calculation.

Therefore, if you have a clean image, meaning no noise, then you can probably treat it as a gray image. Otherwise, I would treat it as a vector image, trading calculation complexity for better denoising ability.
Recall the example in section 4 about denoising. Let's see what happens when we treat the same image as a gray one.

It is not hard to convince yourself why we get the result above, is it?
However, when we treat it as a vector image, the strong denoising ability shows.

We get a very neat segmentation compared to the noisy input image, and this segmentation is an almost perfect match to the original clean image.

2. Single phase or multiphase?

Treating an image as single-phase means you can only 'divide' the original image into two parts (however you interpret them: two colors, background and foreground, etc.).
Treating an image as multiphase means you can 'divide' the original image into more parts, more precisely 2^N parts, where N is the number of phases.

So if the objects you are interested in are not that 'simple' (a single object, a homogeneous region, etc.), you should probably apply multiphase for better results. However, you have to trade time for multiphase: if you do not have a fast machine, waiting for the result will be a really painful experience.
But if you only want to detect a homogeneous region, as in text segmentation, single phase is a good choice for that kind of task.
Here is the result if we apply a single-phase active contour to 'flowers.jpg'.

As mentioned before, a single phase can only 'divide' the image into two parts. In the result above, we see that the algorithm finally recognized every 'bright' thing as background and every 'dark' thing as foreground. Therefore some 'highlights' on the stem and the flowers have been relegated to the background. However, we can still recognize the 'flowers' in the result; at least the algorithm separated the sky and the flowers.
More precisely, there are at least five colors in the original image: blue, white, red, yellow and green. However, we only have one phase, so the algorithm has to group these five regions into two. How does the algorithm do this? Basically, it groups the regions according to their mean values. If two regions have mean values closer to each other than to the other colors, those two will be grouped into one. Clearly, the mean values of white and sky blue are much closer than the others, and thus these two color regions become one in the final segmentation.
But with multiphase, we can get more information: not just dividing the image into 'black' and 'white' regions but into more color regions.

Although you may argue that the result is still not perfect, for me it's already good enough. With two phases, we can only divide the image into four regions (or, you could say, four colors). Now we have successfully distinguished the colors of the flowers, which were grouped into one region in the single-phase case. However, we still failed to distinguish 'white' from 'sky blue'. Why? Because we only have two phases, and consequently we can distinguish four regions at most. There is no way to detect a five-color image with a four-color-detection algorithm.
You may also ask: can we distinguish 'white' and 'sky blue' but not 'yellow' and 'red' or another color pair? In other words, if we run the algorithm differently, is it possible to get a segmentation which distinguishes 'white' and 'sky blue' but leaves two other colors together? The answer is YES.
The reason why it is possible to get a segmentation distinguishing 'white' and 'sky blue' with two phases is that our algorithm is a variance-based algorithm.
Approximate mean value for each color on all three components:

Color       R     G     B     R+G+B
Red         212   3     3     (218)
Green       30    28    20    (78)
Yellow      197   172   1     (370)
White       214   227   241   (682)
Sky Blue    157   190   211   (585)

Clearly, both sky blue and white have all three component means above 120. The other three colors share the feature of a very low 'B' value.
More precisely, because we use equal weights for the RGB image, R+G+B can stand for the overall mean value. Without doubt, if we need to divide the five numbers above into the two groups most different in mean, we should pick white and sky blue for one group and leave the other three colors in the other. This is why, in the single-phase case, the algorithm grouped red, green and yellow into one group and white and sky blue into the other.
Furthermore, in the two-phase case, red, green and yellow have more salient differences in their mean values than white and sky blue. Say red, green and yellow are in one group and white and sky blue in the other; let's see the cost of distinguishing each of them.
Note: the closer the distance to the average, the higher the cost.
Note the closer distance to the average, the higher cost is.
Average(Red+Green+Yellow) = 222

Color     R+G+B   distance from the average
Red       218     4
Green     78      144
Yellow    370     148

Average(White+Sky Blue) = 633.5

Color     R+G+B   distance from the average
White     682     48.5
Sky Blue  585     48.5

As we can see, it costs less to distinguish green and yellow within the (red+green+yellow) group than to distinguish white from sky blue.
Note: the actual outcome might not be exactly the same, because the active contour might also reach a different segmentation as a result of different initial conditions.
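The grouping arithmetic above is easy to verify in a few lines, using the R+G+B values from the table:

```python
# R+G+B overall mean values from the table above.
means = {"Red": 218, "Green": 78, "Yellow": 370, "White": 682, "Sky Blue": 585}

def distances(group):
    """Group average and each color's distance from it."""
    avg = sum(means[c] for c in group) / len(group)
    return avg, {c: abs(means[c] - avg) for c in group}

avg1, d1 = distances(["Red", "Green", "Yellow"])
avg2, d2 = distances(["White", "Sky Blue"])
print(avg1, d1)   # 222.0 {'Red': 4.0, 'Green': 144.0, 'Yellow': 148.0}
print(avg2, d2)   # 633.5 {'White': 48.5, 'Sky Blue': 48.5}
```

Green and yellow sit far (144 and 148) from their group average, while white and sky blue sit close (48.5 each), which is the low-cost versus high-cost comparison made above.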
