IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 30, NO. 12, DECEMBER 2011

Accurate and Efficient Optic Disc Detection and


Segmentation by a Circular Transformation
Shijian Lu

Abstract—Under the framework of computer-aided diagnosis,


this paper presents an accurate and efficient optic disc (OD) detection and segmentation technique. A circular transformation is
designed to capture both the circular shape of the OD and the
image variation across the OD boundary simultaneously. For each
retinal image pixel, it evaluates the image variation along multiple
evenly-oriented radial line segments of specific length. The pixels
with the maximum variation along all radial line segments are determined, which can be further exploited to locate both the OD
center and the OD boundary accurately. Experiments show that
OD detection accuracies of 98.77%, 97.5%, and 99.75% are obtained for the STARE dataset, the ARIA dataset, and the MESSIDOR dataset, respectively, and the OD center error lies around
six pixels for the STARE dataset and the ARIA dataset, which is
much smaller than that of state-of-the-art methods ranging from 14 to 29
pixels. In addition, OD segmentation accuracies of 93.4% and
91.7% are obtained for the STARE dataset and the ARIA dataset, respectively, which consist of many severely degraded images of pathological retinas that state-of-the-art methods cannot segment properly. Furthermore, the algorithm runs in 5 s, which is substantially
faster than many of the state-of-the-art methods.
Index Terms—Circular transformation, ocular image analysis,
optic disc detection, optic disc segmentation.

I. INTRODUCTION

OPTIC disc (OD) detection and OD segmentation refer to
the location of the OD center and the OD boundary, respectively. Accurate OD detection and OD segmentation are
very important in ocular image analysis [1], [2] and computer
aided diagnosis for different types of eye diseases such as diabetic retinopathy and glaucoma [3]–[5]. On the one hand, OD
detection is often a key step for the detection of other anatomical structures such as the retinal vessels and the macula [1],
[6]–[8]. It also helps to establish a retinal frame that can be used
to determine the position of many retinal abnormalities such as
exudates, drusen, microaneurysms, and hemorrhages [9], [10].
On the other hand, OD segmentation often provides additional
diagnostic information for the ophthalmologist. For example,
the OD size derived from the OD segmentation has been widely
used in cup-disc-ratio based glaucoma diagnosis [4].
Many OD detection and segmentation methods have been
reported but most of them have a number of typical common
Manuscript received April 27, 2011; revised July 10, 2011; accepted July 30,
2011. Date of publication August 12, 2011; date of current version December
02, 2011.
The author is with the Institute for Infocomm Research, A*STAR, 138632
Singapore (e-mail: slu@i2r.a-star.edu.sg).
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TMI.2011.2164261

Fig. 1. (a) Example retinal image in the STARE dataset (i.e., im0043.bmp)
[18]. (b) Corresponding intensity image.

limitations. First, most reported OD detection methods define a


correct detection if the detected OD center lies within the OD
boundary, but they often fail to locate the exact
OD center with an OD center error (i.e., the distance between
the detected OD center and the real OD center) of 14–29 pixels
for the STARE dataset. Second, most reported OD segmentation methods assume that the OD has a clear image variation
across the whole OD boundary. But for many images of pathological retinas such as the one in Fig. 1(a) (and many others
shown in Figs. 7 and 8), the OD is often severely degraded by
different types of retinal lesions and imaging artifacts where certain parts of the OD boundary have little image variation across
them. Third, most reported methods consider the OD detection
and the OD segmentation as two separate tasks rather than combine the two tasks under an integrated framework to locate
both the OD center and the OD boundary simultaneously.
In this paper, a circular transformation is presented that is capable of detecting both the OD center and the OD boundary accurately and efficiently. The contributions of the proposed technique can be summarized in several aspects. First, the proposed
technique is capable of detecting the location of the optic disc
center with an error that is substantially smaller than that of
other state-of-the-art methods. Second, the proposed technique
is capable of segmenting the OD from images of pathological
retinas, many of which state-of-the-art methods cannot handle
properly. Third, the proposed technique is substantially faster
than many of the state-of-the-art methods. Fourth, the circular
transformation combines the OD detection and OD segmentation under the same framework and locates both the OD center
and the OD boundary simultaneously. Last but not least, the proposed technique can be applied to detect and segment other circular-shaped objects such as blood cells within microscopy images, although it is used for OD detection and segmentation as
described in this paper.
1http://www.parl.clemson.edu/stare/nerve/

0278-0062/$26.00 © 2011 IEEE

The rest of this paper is organized as follows. Previous work


on OD detection and segmentation is first reviewed in Section II.
The proposed OD detection and segmentation technique is presented in Section III. Experimental results are described and
discussed in Section IV. Finally, some concluding remarks are
drawn in Section V.
II. PREVIOUS WORK
A number of OD detection methods have been reported in the
literature. One category of methods detects the OD based on different types of OD-specific image characteristics. In particular,
some methods [11]–[13] assume that the OD corresponds to
the brightest region within a retinal image. Some other methods
[3], [14], [15] are based on the assumption that the OD has the
highest image variance due to the bright OD pixels and the dark
retinal blood vessel pixels within the OD. The limitation of these
methods is that many images of pathological retinas suffer from
different types of retinal lesions (e.g., drusen, exudates, hemorrhages) and imaging artifacts (e.g., haze, lashes, uneven illumination) that often introduce brighter regions or regions with
higher image variance than the OD. At the same time,
the center of the brightest region or the region with the highest
image variance often does not correspond to the OD center even
though it lies within the OD boundary.
Another typical category of OD detection methods makes
use of the anatomical relations among the OD, the macula,
and the retinal vessels. In particular, some methods make use
of the fact that all major retinal vessels converge into the OD
[16]–[20]. Some authors assume that the relative position between
the OD and the macula varies within a specific range [21]–[23].
Compared with the image characteristics, the anatomical
knowledge is much more reliable in the presence of imaging artifacts and retinal lesions. But anatomy-based methods seldom
locate the exact OD center either. In particular, the retinal
vessels seldom converge to the OD center even though they
always converge into the OD boundary. The relative position
between the macula and the OD also varies within a certain
range and so the OD cannot be located accurately even though
the macula is detected accurately. Besides, automatic detection
of either the retinal vessels or the macula is often a nontrivial
task by itself.
Some OD segmentation methods have also been reported
to determine an OD boundary by either Hough transform
[6], [24], [25] or an active contour model [4], [22], [26]–[28].
In particular, the Hough transform based methods determine
the OD boundary from the gradient image of a retinal image.
The active contour model based methods determine the OD
boundary by iteratively estimating a contour model. Generally
both Hough transform based and active contour model based
methods can segment the OD from images of normal retinas
that usually have a clear and consistent OD boundary. But
for images of pathological retinas, these methods often fail
because both Hough transform and active contour model are
very sensitive to different types of retinal lesions and imaging
artifacts around the OD boundary. For example, the active
contour model based methods [4], [22], [26]–[28] often fail to
control the contour formation process, terminating either far
outside or far inside the OD boundary. In addition, some other

Fig. 2. Retinal image preprocessing. (a) Determined binary template.


(b) Smoothing of the down-sampled example retinal image by a median filter.

OD segmentation methods have also been reported that rely on


Hausdorff distance based template matching [29], OD location
regression [30], and graph search [31], among others.
III. PROPOSED METHODS
This section describes the proposed OD detection and segmentation technique. In particular, it can be divided into four
subsections which deal with retinal image preprocessing, circular transformation, OD detection and segmentation, and parameter value selection, respectively.
A. Retinal Image Preprocessing
Given a retinal image, an intensity image I is first derived. As
the blue color component of many retinal images contains little
OD structural information, the intensity image is determined by
combining the red and green components as follows:

I = αR + (1 − α)G        (1)

where R and G denote the red and green color components of
the retinal image under study. Parameter α controls the weights
of R and G, with α set at 0.75 in the built system. The purpose
of assigning more weight to R is to keep the image variation
across the OD boundary but suppress that across the retinal vessels, which is usually much stronger in G than in R. Besides, a binary template is also determined as shown in Fig. 2(a)
[20] that helps to exclude the surrounding dark region from the
ensuing processing.
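As a concrete sketch, the red/green combination in (1) can be written in a few lines of numpy; the array layout (H x W x 3, channels in RGB order) is an assumption, not from the paper:

```python
import numpy as np

def intensity_image(rgb, alpha=0.75):
    """Weighted red/green combination as in (1); blue is discarded.

    alpha = 0.75 follows the paper's setting, favouring the red channel
    (strong OD boundary contrast) over green (strong vessel contrast).
    """
    r = rgb[..., 0].astype(np.float64)  # red component R
    g = rgb[..., 1].astype(np.float64)  # green component G
    return alpha * r + (1.0 - alpha) * g
```

For an 8-bit RGB pixel (100, 200, 50), this yields 0.75 × 100 + 0.25 × 200 = 125.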
Several preprocessing operations are performed to speed up
the ensuing processing and improve the OD detection accuracy.
First, the retinal image is down-sampled to 0.3 of its original
size to reduce the computation cost. The image down-sampling
has little effect on the OD detection and segmentation because
it does not change the OD structure to be captured by the circular transformation. Next, the down-sampled retinal image is
filtered by a median filter to suppress speckle noise and image
variation across the retinal vessels. The size of the filter window
is set in proportion to R, the diameter of the central circular
retinal region shown in Fig. 2(a), so as to adapt to the resolution variation of retinal images from different sources. For the
retinal image in Fig. 1(a), Fig. 2(b) shows the smoothed intensity image.
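The down-sampling and median filtering can be sketched with numpy alone as below; the fixed window size `win` is a stand-in for the paper's resolution-adaptive fraction of the retinal-region diameter:

```python
import numpy as np

def preprocess(img, scale=0.3, win=5):
    """Nearest-neighbour down-sampling followed by a median filter.

    The 0.3 scale follows the paper; the 5-pixel window is an assumed
    placeholder for the resolution-adaptive window size.
    """
    h, w = img.shape
    # down-sample by index striding (nearest neighbour)
    ys = (np.arange(int(h * scale)) / scale).astype(int)
    xs = (np.arange(int(w * scale)) / scale).astype(int)
    small = img[np.ix_(ys, xs)].astype(np.float64)
    # median filter with edge padding to suppress speckle and vessel variation
    r = win // 2
    padded = np.pad(small, r, mode="edge")
    out = np.empty_like(small)
    for i in range(small.shape[0]):
        for j in range(small.shape[1]):
            out[i, j] = np.median(padded[i:i + win, j:j + win])
    return out
```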
Finally, the OD search space is reduced based on an OD probability map that is derived from Mahfouz's method [16]. In
particular, Mahfouz et al. detect the OD by first projecting the

Fig. 3. Searching space reduction. (a) OD probability map of the example


retinal image in Fig. 1(a). (b) Extracted 20% brightest pixels.

L1-norm image gradient and image intensity to the horizontal
and vertical directions as follows:

H(x) = Σ_y |G_h(x, y)|,   W(y) = Σ_x |G_v(x, y)|        (2)

where G_h(x, y) and G_v(x, y) denote the L1-norm image gradients in the horizontal and vertical directions, respectively, and
the sums run over the M image rows and N image columns. The OD
probability map is then determined as follows:

P(x, y) = H(x) · W(y)        (3)

where the OD center can be detected at the brightest pixel
within P(x, y). For the example retinal image in Fig. 1(a),
Fig. 3(a) shows the determined OD probability map.
In the proposed technique, the OD is detected by searching
from the first 20% brightest pixels within the OD probability
map shown in Fig. 3(b). Though the OD may not lie at the
brightest pixel within the OD probability map (a 92.6% detection
rate over the STARE dataset as reported in [16]), it nearly always lies within
the first 20% brightest pixels of the map.
It should be noted that the search space can also be reduced
using the OD detected by other methods such as Youssif's [17].
In such a case the reduced search space becomes a circle that
is centered at the detected OD center [17] with a radius much
larger than the OD radius, say, 90 pixels. Mahfouz's method
is used because it is ultrafast (0.6 s for the STARE dataset) and
the 20% brightest pixels within the OD probability map reliably include the OD center.
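The search-space reduction can be sketched as below. The exact projection formulas of Mahfouz's method are not fully recoverable here, so the L1 gradient projections, their outer product, and the 80th-percentile cut are assumptions:

```python
import numpy as np

def od_candidates(img, keep_fraction=0.20):
    """Rough OD probability map plus its brightest 20% as candidates."""
    gh = np.abs(np.diff(img, axis=1)).sum(axis=0)  # horizontal-gradient profile
    gv = np.abs(np.diff(img, axis=0)).sum(axis=1)  # vertical-gradient profile
    pmap = np.outer(gv, gh)                        # assumed 2-D combination
    thresh = np.quantile(pmap, 1.0 - keep_fraction)
    return pmap, pmap >= thresh                    # map and candidate mask
```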
B. Circular Transformation
A transformation is designed to capture the OD-specific
image and shape characteristics. The transformation is termed
circular transformation because it performs best when a
perfect circular region is present. The circular transformation
is designed based on the observation that for a point within a
(roughly) circular-shaped image region, the variation of the
distances from the point to the region boundary reaches the
minimum when the point lies exactly at the region centroid.
Specifically, for each retinal image pixel it first detects multiple
pixels with the maximum variation (denoted by PMs in the
ensuing discussion) along multiple evenly-oriented radial line

Fig. 4. Example set of evenly-oriented radial line segments where n is set at


40 and p is set at 10 in pixels.

segments of specific length. The detected PMs are then filtered


based on a pair of OD-specific image and shape constraints.
Finally, the retinal image is converted into an OD map where
the peak with the maximum amplitude lies exactly at the OD
center and many PMs detected for the pixel at the detected OD
center lie exactly along the OD boundary.
A number n of evenly-oriented radial line segments of specific length p are used to detect the PMs. Fig. 4 shows an example set of radial line segments where n is set at 40 and p at 10.
For an image pixel at (x, y), such as the one labeled by a black
cell in Fig. 4, the image variation along each radial line segment
can be evaluated as follows:

V(i, j) = I(x_{i,j−1}, y_{i,j−1}) − I(x_{i,j+1}, y_{i,j+1})        (4)

where (x_{i,j−1}, y_{i,j−1}) and (x_{i,j+1}, y_{i,j+1}) denote the positions
of two image pixels along the ith radial line segment neighboring (x_{i,j}, y_{i,j}). An image variation matrix V of size n × p
can thus be determined for the pixel at (x, y). It should
be noted that the pixels are indexed from (x, y), where
(x_{i,j−1}, y_{i,j−1}) is closer to (x, y) than (x_{i,j+1}, y_{i,j+1}).
Therefore, if the pixel at (x, y) lies within the OD, the
evaluated image variation across the OD boundary is usually
positive as the OD is usually brighter than its surroundings.
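A direct (unoptimized) sketch of the variation matrix in (4) for one candidate pixel; the rounding and border clamping of sample positions are implementation assumptions:

```python
import numpy as np

def variation_matrix(img, x, y, n=40, p=10):
    """V[i, j] = I(closer neighbour) - I(farther neighbour) along the
    ith of n evenly-oriented radial segments of length p, so crossing a
    bright-to-dark OD boundary yields a positive value."""
    h, w = img.shape
    angles = 2 * np.pi * np.arange(n) / n
    V = np.zeros((n, p))
    for i, a in enumerate(angles):
        dx, dy = np.cos(a), np.sin(a)
        for j in range(1, p - 1):
            x0 = np.clip(int(round(x + (j - 1) * dx)), 0, w - 1)
            y0 = np.clip(int(round(y + (j - 1) * dy)), 0, h - 1)
            x1 = np.clip(int(round(x + (j + 1) * dx)), 0, w - 1)
            y1 = np.clip(int(round(y + (j + 1) * dy)), 0, h - 1)
            V[i, j] = img[y0, x0] - img[y1, x1]  # inner minus outer sample
    return V
```

For a bright disc of radius 8 centred at the probed pixel, each row of V peaks near index 8, which is exactly how the PM positions in (5) are read off (row-wise argmax).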
For the retinal image pixel at (x, y), the positions of the
PMs along the n evenly-oriented radial line segments can be
denoted by an index vector as follows:

p = [p_1, p_2, …, p_n]        (5)

where p_i indicates the position of the PM along the ith line
segment, i.e., the index of the maximum of the ith row of
the variation matrix V. The maximum image variation along the radial
line segments can therefore be denoted as follows:

V_m = [V(1, p_1), V(2, p_2), …, V(n, p_n)]        (6)


where V(i, p_i) gives the maximum image variation along
the ith radial line segment. The distances from all PMs to the
pixel at (x, y) under study can thus be determined by a distance vector as follows:

D = [d_1, d_2, …, d_n]        (7)

where d_i is the distance from the pixel at (x, y) to the
PM along the ith line segment at (x_{i,p_i}, y_{i,p_i}):

d_i = √((x − x_{i,p_i})² + (y − y_{i,p_i})²)        (8)
For image pixels within the OD, many detected PMs lie exactly at the OD boundary due to the clear image variation across
the OD boundary. But some detected PMs may lie at other positions due to the retinal vessels/lesions. The PM outliers can be
filtered out based on a pair of OD-specific constraints. First, the
PMs with a zero/negative maximum variation are filtered out because for pixels within the OD, the maximum image variation at
the detected PMs is usually positive. The PMs with a zero maximum variation are often detected for pixels around the border
of the central circular retinal region where the radial line segments quickly probe into the surrounding dark region. The PMs
with a negative maximum variation are often detected for pixels
at some dark region such as the macula center where the image
consistently becomes brighter with its distance from the macula
center.
The remaining PMs are further filtered based on their distances to (x, y). A distance threshold is defined as follows:

Fig. 5. OD map construction and PMs detection. (a) OD map of the example
retinal image in Fig. 1(a). (b) Close-up view of the PMs detected for the pixel
at the located OD center (labeled by the cross).

Fig. 6. OD boundary pixel detection. (a) Close-up view of the remaining PMs
after the two-stage filtering. (b) Close-up view of the OD boundary pixels created based on symmetry of the OD boundary with respect to the OD center: the
black points denote the remaining PMs shown in (a) and the white ones denote
the created symmetry-based OD boundary pixels.

T_i = |d_i − median(D′)|        (9)

where median(·) is a median function and D′ denotes the
subset of D in (7) from which the distance elements
corresponding to the PMs with a zero or negative maximum
variation have been removed in the first-stage filtering. As (9)
shows, T_i evaluates the deviation of d_i from median(D′),
which usually approximates the OD radius
when (x, y) is at the OD center.
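The two-stage filtering can be sketched as follows; the positive-variation test is from the paper, while the 0.3 distance-deviation fraction used against the threshold in (9) is an illustrative assumption:

```python
import numpy as np

def filter_pms(vmax, dist, frac=0.3):
    """Stage 1: drop PMs with zero/negative maximum variation.
    Stage 2: drop PMs whose distance deviates too far from the median
    distance, which approximates the OD radius at the OD centre."""
    keep = vmax > 0
    d = dist[keep]
    med = np.median(d)
    keep2 = np.abs(d - med) <= frac * med  # assumed deviation tolerance
    return vmax[keep][keep2], d[keep2]
```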
The circular transformation finally converts a retinal image
into an OD map as follows:
C(x, y) = Σ_i V″(i) / σ(D″)        (10)

where V″ and D″ refer to the maximum variations and distances
of the PMs, respectively [as defined in (6) and (7)], after the two
steps of filtering, and σ(D″) denotes the standard deviation of
D″ about its mean. Therefore, the numerator in (10) gives the overall
image variation and the denominator gives the spread of the
PM distances. The OD map can usually be enhanced by
incorporating the OD probability map as follows:

C′(x, y) = C(x, y) · P(x, y)        (11)

where P(x, y) denotes the OD probability map in (3)
[shown in Fig. 3(a)]. For the example retinal image in Fig. 1,
Fig. 5(a) shows the final OD map computed.
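The per-pixel score of (10) then reduces to a one-liner; the `eps` guard against a zero distance spread is an assumption:

```python
import numpy as np

def od_score(vmax, dist, eps=1e-6):
    """Total boundary variation over the spread of PM distances: a pixel
    at the centre of a bright circular region (uniform distances, strong
    variation) scores highest."""
    return vmax.sum() / (dist.std() + eps)
```

A centred pixel with uniform PM distances therefore dominates an off-centre pixel whose distances are scattered.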

C. OD Detection and Segmentation


The OD center can thus be located at the global peak within
the converted OD map. As Fig. 5(a) shows, the peak with the
maximum amplitude is located exactly at the OD center of the
retinal image in Fig. 1(a). Fig. 5(b) shows PMs that are detected for the pixel at the located OD center (labeled by a cross).
As Fig. 5(b) shows, many PMs lie exactly at the OD boundary
but some lie at other positions due to retinal lesions. Fig. 6(a)
shows the remaining PMs after the two-stage filtering as described in the last subsection. As Fig. 6(a) shows, most of the
remaining PMs lie exactly along the OD boundary.
It should be noted that for some retinal images certain parts of
the OD boundary may have no boundary pixels detected, i.e., the
PMs are detected at other positions and have been filtered out.
Under such circumstance, the OD segmentation based on the
detected boundary pixels may introduce error especially when
many OD boundary pixels are not detected. This issue is handled by exploiting the rough symmetry of the OD boundary with
respect to the OD center. In particular, if a PM detected is filtered out but its counterpart PM detected from the radial line
segment in the opposite direction is kept after the filtering, an
OD boundary pixel is created that is symmetrical to the counterpart PM with respect to the OD center. For the detected OD
boundary pixels shown in Fig. 6(a), Fig. 6(b) shows the final OD
boundary pixels, where those created using symmetry
are highlighted in white.
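The symmetry-based completion can be sketched as below, assuming the PMs are ordered so that segment i + n/2 points opposite to segment i:

```python
def complete_by_symmetry(center, pms, kept):
    """Mirror a surviving PM through the OD centre whenever its
    opposite-direction counterpart was filtered out.

    pms is a list of (x, y) PM coordinates; kept is a boolean mask of
    the PMs that survived the two-stage filtering.
    """
    n = len(pms)
    cx, cy = center
    points = [tuple(p) for p, k in zip(pms, kept) if k]
    for i in range(n):
        j = (i + n // 2) % n  # the opposite radial segment
        if not kept[i] and kept[j]:
            # reflect the counterpart PM through the centre
            points.append((2 * cx - pms[j][0], 2 * cy - pms[j][1]))
    return points
```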

An OD boundary can thus be determined through B-spline fitting of the final OD boundary pixels. It should be noted that the
detected OD center and OD boundary pixels need to be mapped
back to the retinal images of the original resolution for proper
evaluation. For the example retinal image in Fig. 1(a), the determined OD center and OD boundary are accordingly mapped back to the original resolution.
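The paper fits a B-spline through the final boundary pixels; as a dependency-light stand-in, a truncated Fourier series over the angle-ordered points yields a similarly smooth closed curve (the harmonic count is an assumption):

```python
import numpy as np

def fit_closed_boundary(points, n_out=100, n_harmonics=4):
    """Smooth closed-curve fit of scattered boundary points.

    Points are ordered by angle about their centroid, encoded as a
    complex sequence, and low-pass filtered in the Fourier domain.
    """
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)                      # centroid as ordering centre
    ang = np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0])
    order = np.argsort(ang)
    z = (pts[order, 0] - c[0]) + 1j * (pts[order, 1] - c[1])
    coeff = np.fft.fft(z) / len(z)
    k = np.fft.fftfreq(len(z), 1.0 / len(z))  # harmonic numbers
    t = np.linspace(0.0, 1.0, n_out, endpoint=False)
    keep = np.nonzero(np.abs(k) <= n_harmonics)[0]
    curve = sum(coeff[m] * np.exp(2j * np.pi * k[m] * t) for m in keep)
    return np.column_stack([curve.real + c[0], curve.imag + c[1]])
```

In practice `scipy.interpolate.splprep` with `per=1` would give a true periodic B-spline fit; the Fourier version above only needs numpy.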
D. Parameter Value Selection
The designed circular transformation has several parameters.
The first is the number n of radial line segments as described
in Section III-B. In general, n has little effect on the OD detection when it is larger than 60. But it may affect the OD segmentation because the number of PMs after the filtering may
not be enough to fit an OD boundary smoothly when n is too
small. Experiments show that 180 radial line segments are sufficient for OD boundary fitting and the use of more does not
help much. Therefore, 180 line segments are used in the implemented system.
The second and also the most important parameter is the
length p of the radial line segments. Generally speaking, p
should be larger than the OD radius so that for retinal image
pixels around the OD center, the PMs along the radial line segments can be detected at the OD boundary. As the relative size
between the OD and the corresponding retinal image usually
varies within a specific range, parameter p can be set based
on the diameter R of the central circular region of the retinal
image as illustrated in Fig. 2(a). In the implemented system, p
is set at R/5, which is much larger than the OD radius for most retinal
images. In addition, the use of a large p also reserves the space
that excludes the ending section of each radial line segment (to
be discussed next).
Lastly, the performance of the circular transformation can
often be improved by excluding the starting and ending sections of each radial line segment from the image variation evaluation in (4). By excluding the starting section, PMs will not
be detected at the retinal vessels (lying around the OD center)
for retinal image pixels lying around the OD center. By excluding the ending section, PMs will not be detected around
the border of the central circular retinal region of the retinal
image where large image variation is often present due to different imaging artifacts such as uneven illumination. In the implemented system, the lengths of the starting and ending sections are both set at a small fraction of R that is much smaller than the OD
radius. It should be noted that the ending section does not
always correspond to the last part of each radial line segment.
Instead it refers to the last part of each radial line segment
that lies within the central circular retinal region as illustrated
in Fig. 2(a) (that is how the border of the central circular retinal
region is excluded).
IV. EXPERIMENTAL RESULTS
This section presents experimental results. The three public
datasets used are first described. The performance of the designed circular transformation is then presented and discussed.

A. Dataset and Evaluation Criteria


Three public datasets are tested in the experiments. The first
is the MESSIDOR dataset with 1200 retinal images that was
created for studies of computer-assisted diagnosis of diabetic
retinopathy. The second is the ARIA dataset that consists of
59 images with diabetic retinopathy and 61 control images. The third
is the STARE dataset with 31 images of healthy retinas and 50
images of pathological retinas which is widely used for benchmarking of OD detection in the literature. For each retinal image
in ARIA dataset, an OD boundary is manually traced by trained
image analysis experts and the OD center is directly estimated
as the centroid of the provided OD boundary. For each retinal
image in STARE dataset, an OD center and 4060 OD boundary
pixels are manually labeled by the author under the supervision
of clinical graders. An OD boundary is then determined through
B-spline fitting of the labeled OD boundary pixels.
The proposed technique is evaluated based on three criteria.
First, an OD detection is defined as correct if the detected OD
center lies within the OD boundary. The second zooms in to
measure the OD center error that is defined by the distance between the detected OD center and the labeled one. Therefore, a
correct OD detection may still incur a large OD center error
under the second criterion. The third criterion aims to evaluate
the OD segmentation based on the overlapping area between the
manually labeled OD region and the one determined by the proposed technique as follows:
m = |S_r ∩ S_c| / |S_r ∪ S_c|        (12)

where |S_r| and |S_c| denote the size of the image region enclosed by the reference OD boundary and the size of the image
region enclosed by the OD boundary determined by the circular
transformation, respectively.
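Assuming the overlap criterion in (12) takes the usual intersection-over-union form, it can be computed directly from two boolean region masks:

```python
import numpy as np

def overlap_accuracy(ref_mask, seg_mask):
    """Region overlap of (12): intersection area over union area of the
    reference OD region and the segmented OD region."""
    inter = np.logical_and(ref_mask, seg_mask).sum()
    union = np.logical_or(ref_mask, seg_mask).sum()
    return inter / union
```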
In the experiments, the first dataset is evaluated by the first
criterion only to show that the proposed technique is able to
detect the OD of a large number of retinal images with different
characteristics. As most retinal images in this dataset are normal
and have a clear OD boundary, whether the detected OD center
lies within the OD boundary is determined directly through
visual inspection by the author under the supervision of clinical
graders. The last two datasets are evaluated based on all three
criteria. The STARE dataset is particularly used for comparison
with state-of-the-art methods because it is widely studied in the
literature.
B. OD Detection and Segmentation Results
Qualitative results are first shown in Figs. 7 and 8 by several
test retinal images from the three datasets described in the last
subsection. In particular, the test retinal images in Figs. 7 and 8
suffer from different retinal lesions (e.g., exudates, drusen,
hemorrhages, etc.) and imaging artifacts (e.g., shading, haze,
low-contrast, occlusion, etc.), respectively. In addition, the
three rows in each figure show the test retinal images, the
converted OD maps, and the close-up view of the fitted OD
boundary, respectively. For each test retinal image, the bold
2 http://messidor.crihan.fr/index-en.php
3 http://www.eyecharity.com/aria


Fig. 7. OD detection and segmentation under the presence of retinal lesions: The three rows show example retinal images, the converted OD maps, and the
close-up view of the finally fitted OD boundary, respectively. The bold white boundary in the third row denotes the manually traced reference OD boundary, the
thin black boundary denotes the OD boundary determined by the proposed technique.

Fig. 8. OD detection and segmentation under the presence of imaging artifacts: The three rows show example retinal images, the converted OD maps, and the
close-up view of the finally fitted OD boundary, respectively. The bold white boundary in the third row denotes the manually traced reference OD boundary, the
thin black boundary denotes the OD boundary determined by the proposed technique.

white curves and thin black curves in the third row correspond
to the reference OD boundary and the one determined by the
circular transformation, respectively. As Figs. 7 and 8 show, the
proposed technique is tolerant to retinal lesions and imaging
artifacts and capable of locating both the OD center and the OD
boundary accurately.
Quantitative results show that an average OD detection
accuracy of 99.5% is obtained for 1401 retinal images within
the three datasets (i.e., 1197/1200 for the MESSIDOR dataset,
117/120 for the ARIA dataset, and 80/81 for the STARE
dataset) based on the first criterion. Most failure cases can be
explained by retinal lesions and imaging artifacts that almost
completely impair the OD-specific image and structural characteristics. In particular, the accuracy of the proposed method on


the STARE dataset is 98.77% (i.e., 80/81), which is comparable
to the best state-of-the-art performance as listed in Table I. It
should be noted that most image characteristics based methods
[3], [11], [14] have a much lower OD detection accuracy for
the STARE dataset (e.g., 42% and 58% for methods in [14] and
[11] as reported in [20]) because they cannot handle different
retinal lesions and imaging artifacts properly as illustrated in
Figs. 7 and 8.
In addition, the average OD center error of the proposed technique is around six pixels for all correctly detected retinal images within the ARIA dataset and the STARE dataset. The six-pixel OD center error is small compared with the OD ra-

TABLE I
COMPARISON OF THE OD DETECTION METHODS ON STARE DATASET (THE
SPEEDS OF FORACCHIA et al. [19] AND YOUSSIF et al. [17] ARE BOTH
TAKEN FROM MAHFOUZ et al. [16])

dius that is around 60 pixels for retinal images within the two
datasets. The small OD center error can be explained by the circular transformation that is designed to detect the OD centroid
exactly. At the same time, the six-pixel OD center error is much
smaller than that of state-of-the-art methods [13], [16]–[19] as
shown in Table I where the OD center error ranges from 14 to 29 pixels.
The large OD center error of the state-of-the-art methods can be
explained by the fact that the retinal vessels seldom converge to
the exact OD center. It should be noted that the OD center error
for the proposed method is evaluated on retinal images of the
original resolution.
Furthermore, the proposed technique is capable of segmenting the OD from many images of pathological retinas as
illustrated in Figs. 7 and 8. Based on the region overlapping
criterion, OD segmentation accuracies of 93.4% and 91.7% are
obtained for the 80 and 117 correctly detected retinal images
of the STARE dataset and the ARIA dataset, respectively. In
particular, imaging artifacts often have larger effects on the OD
segmentation compared with retinal lesions. This is because
retinal images with imaging artifacts often have low image
variation across the OD boundary where many PMs are not
detected as illustrated in the first three retinal images in Fig. 8.
On the other hand, retinal images with retinal lesions often
have relatively high image variation across the OD boundary
where PMs can be detected properly as illustrated in Fig. 7.
This also explains the higher OD segmentation accuracy on the
STARE dataset, which is composed of a large number of images
of pathological retinas with retinal lesions.
The direct comparison with state-of-the-art methods is difficult because most reported OD segmentation methods [4],
[22], [26]–[28] are evaluated on private datasets. In the experiments, the active contour model technique [32] is implemented
and tested on the STARE dataset. Experiments show that the
OD segmentation accuracy is just around 70%. On the other
hand, the proposed technique achieves high OD segmentation
accuracy due to its flexibility. In particular, the circular transformation is capable of detecting certain proportion of OD
boundary pixels even though certain OD boundary segments
are severely degraded. But Hough transform and active contour
model both rely heavily on a closed shape model whose behavior varies greatly when certain OD boundary segments are
severely degraded. Second, the circular transformation is more
flexible to incorporate the OD-specific anatomical knowledge.
For example, it is very straightforward for the circular transformation to create the symmetry-based OD boundary pixels as
described in Section III-C.

Fig. 9. OD detection and segmentation performance in relation to the number


n and the length p of the radial line segments. (a) Variation of OD detection and
segmentation accuracies when n changes from 30 to 360 (p is fixed at R/5).
(b) Variation of OD detection and segmentation accuracies when p changes from
R/8 to R/3 while n is fixed at 180.

C. Discussion
The proposed technique is efficient as shown in Table I. For
the STARE dataset, it takes around 5 s (implemented in Matlab
on a Dell PC with a 3.00 GHz CPU and 3.25 GB of RAM) on
average for both OD detection and OD segmentation. But for
most reported methods [13], [17]–[19], it takes 2–4.5 min to
perform the OD detection alone on similar computing platforms.
Mahfouz's method [16] is ultrafast but it only detects the OD
center, with a low accuracy of 92.6% and an OD center error of
14 pixels. The high efficiency of the proposed technique is due
to the circular transformation as well as the two preprocessing
steps described in Section III-D, i.e., the image down-sampling
and the search space reduction.
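The down-sampling preprocessing step can be illustrated with a short sketch. This is an assumption-laden illustration only: the block-averaging scheme and the default factor are illustrative choices, not the paper's exact values.

```python
import numpy as np

def downsample(image, factor=4):
    """Block-average down-sampling: shrink the retinal image before the
    circular transformation so that far fewer pixels need to be scanned,
    trading a small loss of resolution for a large speed-up."""
    h, w = image.shape
    h2, w2 = h - h % factor, w - w % factor  # crop to a multiple of factor
    blocks = image[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))
```

Since the transformation cost grows with the number of candidate pixels, a down-sampling factor of f cuts the candidate count by roughly f squared, which is consistent with the large gap between seconds and minutes reported above.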
In addition, the parameters of the circular transformation are fixed as described in Section III-D for all experimental results described in the last subsection. At the same time, the proposed technique is stable when the parameters vary within a certain range. Fig. 9(a) and (b) show the performance variation with the change of the two key parameters, i.e., the number n and the length p of the radial line segments. As Fig. 9(a) shows, the OD detection accuracy on the three datasets varies little when 30, 60, 90, 120, 180, 270, and 360 radial line segments are implemented (where p is fixed at R/5). The OD segmentation accuracy on the last two datasets instead improves slightly when more radial line segments are implemented, but the improvement saturates when n is bigger than 180. In addition, the OD detection accuracy reduces when p becomes too big or too small, deviating from the relative OD radius, as shown in Fig. 9(b) (where n is fixed at 180). But the OD segmentation accuracy for all correctly detected retinal images varies little with the change of p. It should be noted that the lengths of the skipped starting and ending sections are both fixed, because the OD radius is much larger than the skipped lengths.
Most OD segmentation errors come from three sources at the current stage. First, errors may be introduced for some retinal images that have much larger image variation across the cup boundary than across the OD boundary. Under such circumstances, many PMs are improperly located at the cup boundary instead of the OD boundary. Second, errors may be introduced for some retinal images that have an ultra-low image variation across the OD boundary. For such retinal images, many OD boundary sections have no PM detected, and so the OD boundary pixels cannot be determined even based on symmetry. Third, errors may also be introduced by the PMs that are created based on symmetry, as illustrated by the first two retinal images in Fig. 8, because the OD boundary is often not perfectly symmetric with respect to the OD center. Study of these issues is left for future work.
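The symmetry-based completion whose failure mode is described above can be sketched as follows. Hedged sketch: representing PMs as a mapping from direction index to peak radius, and the function name, are illustrative assumptions, not the paper's data structures.

```python
def fill_missing_by_symmetry(peaks, n):
    """Fill directions with no detected peak (PM) by copying the radius
    found along the diametrically opposite direction, on the assumption
    that the OD boundary is roughly symmetric about the OD center
    (the idea of Section III-C). When the boundary is not symmetric,
    the copied radii are the third error source discussed above.
    peaks: dict mapping direction index k in 0..n-1 to a peak radius."""
    filled = dict(peaks)
    for k in range(n):
        if k not in filled:
            opposite = (k + n // 2) % n
            if opposite in filled:
                filled[k] = filled[opposite]
    return filled
```

If both a direction and its opposite lack a detected PM, as in the second error source, the gap remains, matching the observation that such boundary pixels cannot be determined even based on symmetry.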
V. CONCLUSION
This paper presents an accurate and efficient OD detection and segmentation technique based on a circular transformation. Experiments on three public datasets show that an OD detection accuracy of 99.5% is obtained, and the OD center error is around six pixels, which is much smaller than that of state-of-the-art methods. In addition, average OD segmentation accuracies of 93.4% and 91.7% are obtained for the STARE dataset and the ARIA dataset, within which many images of pathological retinas cannot be segmented properly by most state-of-the-art methods. Furthermore, the proposed technique needs only around 5 s for both OD detection and OD segmentation, whereas most state-of-the-art methods need 2–4.5 min to perform OD detection alone.
REFERENCES
[1] K. Akita and H. Kuga, "A computer method of understanding ocular fundus images," Pattern Recognit., vol. 15, no. 6, pp. 431–443, 1982.
[2] N. Patton, T. M. Aslam, T. MacGillivray, I. J. Deary, B. Dhillon, R. H. Eikelboom, K. Yogesan, and I. J. Constable, "Retinal image analysis: Concepts, applications and potential," Progress Retinal Eye Res., vol. 25, no. 1, pp. 99–127, 2006.
[3] T. Walter, J. C. Klein, P. Massin, and A. Erginay, "A contribution of image processing to the diagnosis of diabetic retinopathy: Detection of exudates in color fundus images of the human retina," IEEE Trans. Med. Imag., vol. 21, no. 10, pp. 1236–1243, Oct. 2002.
[4] R. Chrastek, M. Wolf, K. Donath, H. Niemann, D. Paulus, T. Hothorn, B. Lausen, R. Lammer, C. Y. Mardin, and G. Michelson, "Automated segmentation of the optic nerve head for diagnosis of glaucoma," Med. Image Anal., vol. 9, no. 4, pp. 297–314, 2005.
[5] A. D. Fleming, K. A. Goatman, S. Philip, J. A. Olson, and P. F. Sharp, "Automatic detection of retinal anatomy to assist diabetic retinopathy screening," Phys. Med. Biol., vol. 52, no. 2, pp. 331–345, 2007.
[6] A. Pinz, S. Bernogger, P. Datlinger, and A. Kruger, "Mapping the human retina," IEEE Trans. Med. Imag., vol. 17, no. 4, pp. 606–619, Aug. 1998.
[7] K. W. Tobin, E. Chaum, V. P. Govindasamy, and T. P. Karnowski, "Detection of anatomic structures in human retinal imagery," IEEE Trans. Med. Imag., vol. 26, no. 12, pp. 1729–1739, Dec. 2007.
[8] M. Niemeijer, M. D. Abramoff, and B. van Ginneken, "Segmentation of the optic disc, macula and vascular arch in fundus photographs," IEEE Trans. Med. Imag., vol. 26, no. 1, pp. 116–127, Jan. 2007.
[9] W. Hsu, P. M. D. S. Pallawala, M. L. Lee, and K. A. Eong, "The role of domain knowledge in the detection of retinal hard exudates," in Int. Conf. Comput. Vis. Pattern Recognit., 2001, vol. 2, pp. 246–251.
[10] Z. B. Sbeh, L. D. Cohen, G. Mimoun, and G. Coscas, "A new approach of geodesic reconstruction for drusen segmentation in eye fundus images," IEEE Trans. Med. Imag., vol. 20, no. 12, pp. 1321–1333, Dec. 2001.
[11] T. Walter and J. C. Klein, "Segmentation of color fundus images of the human retina: Detection of the optic disc and the vascular tree using morphological techniques," in Int. Symp. Med. Data Anal., 2001, pp. 282–287.
[12] H. Li and O. Chutatape, "Automatic location of optic disc in retinal images," in Int. Conf. Image Process., 2001, vol. 2, pp. 837–840.
[13] S. Lu and J. H. Lim, "Automatic optic disc detection from retinal images by a line operator," IEEE Trans. Biomed. Eng., vol. 58, no. 1, pp. 88–94, Jan. 2011.
[14] C. Sinthanayothin, J. F. Boyce, H. L. Cook, and T. H. Williamson, "Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images," Br. J. Ophthalmol., vol. 83, no. 8, pp. 902–910, 1999.
[15] S. Sekhar, W. Al-Nuaimy, and A. K. Nandi, "Automated localisation of retinal optic disk using Hough transform," in IEEE Int. Symp. Biomed. Imag.: From Nano to Macro, 2008, pp. 1577–1580.
[16] A. E. Mahfouz and A. S. Fahmy, "Ultrafast localization of the optic disc using dimensionality reduction of the search space," in Int. Conf. Med. Image Computing Computer-Assisted Intervent., 2009, vol. 5762, pp. 985–992.
[17] A. Youssif, A. Z. Ghalwash, and A. Ghoneim, "Optic disc detection from normalized digital fundus images by means of a vessels' direction matched filter," IEEE Trans. Med. Imag., vol. 27, no. 1, pp. 11–18, Jan. 2008.
[18] A. Hoover and M. Goldbaum, "Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels," IEEE Trans. Med. Imag., vol. 22, no. 8, pp. 951–958, Aug. 2003.
[19] M. Foracchia, E. Grisan, and A. Ruggeri, "Detection of optic disc in retinal images by means of a geometrical model of vessel structure," IEEE Trans. Med. Imag., vol. 23, no. 10, pp. 1189–1195, Oct. 2004.
[20] F. ter Haar, "Automatic localization of the optic disc in digital colour images of the human retina," M.S. thesis, Utrecht Univ., Utrecht, The Netherlands, 2005.
[21] M. Niemeijer, M. D. Abramoff, and B. van Ginneken, "Fast detection of the optic disc and fovea in color fundus photographs," Med. Image Anal., vol. 13, no. 6, pp. 859–870, 2009.
[22] H. Li and O. Chutatape, "Automated feature extraction in color retinal images by a model based approach," IEEE Trans. Biomed. Eng., vol. 51, no. 2, pp. 246–254, Feb. 2004.
[23] A. P. Rovira and E. Trucco, "Robust optic disc location via combination of weak detectors," in Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., 2008, pp. 3542–3545.
[24] M. Park, J. S. Jin, and S. Luo, "Locating the optic disc in retinal images," in Int. Conf. Comput. Graphics, Imag. Visualisat., 2006, pp. 141–145.
[25] S. Barrett, E. Naess, and T. Molvik, "Employing the Hough transform to locate the optic disk," Biomed. Sci. Instrum., vol. 37, pp. 81–86, 2001.
[26] R. Bock, J. Meier, L. G. Nyúl, J. Hornegger, and G. Michelson, "Glaucoma risk index: Automated glaucoma detection from color fundus images," Med. Image Anal., vol. 14, no. 3, pp. 471–481, 2010.
[27] F. Mendels, C. Heneghan, and J. P. Thiran, "Identification of the optic disk boundary in retinal images using active contours," in Irish Mach. Vis. Image Process. Conf., 1999, pp. 103–115.
[28] J. Lowell, A. Hunter, D. Steel, A. Basu, R. Ryder, E. Fletcher, and L. Kennedy, "Optic nerve head segmentation," IEEE Trans. Med. Imag., vol. 23, no. 2, pp. 256–264, Feb. 2004.
[29] M. Lalonde, M. Beaulieu, and L. Gagnon, "Fast and robust optic disc detection using pyramidal decomposition and Hausdorff-based template matching," IEEE Trans. Med. Imag., vol. 20, no. 11, pp. 1193–1200, Nov. 2001.
[30] M. D. Abramoff and M. Niemeijer, "The automatic detection of the optic disc location in retinal images using optic disc location regression," in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., 2006, pp. 4432–4435.
[31] M. B. Merickel, M. D. Abramoff, M. Sonka, and X. Wu, "Segmentation of the optic nerve head combining pixel classification and graph search," in Proc. SPIE Med. Imag., 2007.
[32] C. Xu and J. L. Prince, "Gradient vector flow: A new external force for snakes," in IEEE Int. Conf. Comput. Vis. Pattern Recognit., Jun. 1997, pp. 66–71.