
ABSTRACT

Glaucoma is an eye disease that damages the optic nerve and becomes progressively severe over time. It is caused by a build-up of pressure inside the eye. Glaucoma is often acquired and may not become apparent until later in life, so its early recognition is vital in order to enable appropriate monitoring and treatment and to limit the risk of irreversible visual field loss. The detection of glaucomatous progression is one of the most important and most challenging aspects of primary open angle glaucoma (OAG) management. Although advances in ocular imaging offer the potential for earlier diagnosis, the best approach is to combine information from structural and functional tests. The proposed method therefore considers both structural and energy features: the energy distribution over wavelet sub-bands, together with the cup-to-disc ratio, is used to derive the important texture energy features. In the proposed algorithm, each image in the database is passed to the cup-to-disc algorithm, which estimates the degree of glaucoma present in the input image. The cup-to-disc ratio is computed and analysed for several threshold values to improve glaucoma detection. Finally, the extracted energy features are fed to a Multilayer Perceptron (MLP) trained with Back Propagation (BP) for classification, using the energy features extracted from normal subjects as a reference. Naive Bayes classifies the images in the database with an accuracy of 89.6%, while the MLP-BP Artificial Neural Network (ANN) classifies them with an accuracy of 97.6%. Diagnosis of glaucoma depends on several findings, such as elevated intraocular pressure (IOP > 23 mm Hg is considered suspicious for glaucoma), optic nerve cupping and visual field loss. Glaucoma is detected using tests such as tonometry, ophthalmoscopy, perimetry, OCT, gonioscopy and pachymetry. There are three ways to detect glaucoma: 1) assessment of raised intraocular pressure (IOP), 2) assessment of abnormal visual fields and 3) assessment of a damaged optic nerve head. The goal of the proposed work is to develop an algorithm that automatically classifies normal eye images and diseased glaucoma eye images. Features extracted from retinal images are used for classification. Here, the discrete wavelet transform with Daubechies, Symlets and biorthogonal wavelets is used to extract features, and wavelet energy signatures are calculated from these features. SVM, SMO, Random Forest, Naive Bayes and ANN classifiers are used to classify images as normal or abnormal, and their accuracies are compared to determine which classifier best separates glaucomatous from normal retinal images. Finally, segmentation is performed on each glaucomatous retinal image.
CHAPTER 1
Introduction

1. Introduction: Glaucoma is an eye disease that damages the optic nerve and becomes noticeably severe over time. It is caused by raised intraocular pressure acting on the optic nerve of the eye. Because of this, doctors may have difficulty identifying small abnormal conditions in the eye. To address this issue, modified wavelet sub-bands are used to extract the energy levels associated with glaucoma, and MLP-BP ANN algorithms are proposed and analysed for classification. A set of 300 images of both normal and abnormal eyes, gathered from several hospitals and different patients, is stored in a database. An image is taken from the database and subjected to glaucoma detection. Energy features are extracted from the image using wavelet sub-bands (Annu); the sub-bands used in this work are Daubechies, biorthogonal and Symlets, and the wavelet sub-band processing is implemented in Matlab. The average and energy values are computed from the given image (Sumeet Dua, Rajendra Acharya) and the Cup-to-Disc Ratio (CDR) is calculated; this step comprises cropping the grey optical disc, segmentation and calculation of the cup-to-disc ratio (Abirami). The required region of the eye is cropped from the given image, and the selected region is passed to segmentation to isolate the glaucomatous structures (Nikitha). CDR is the ratio of the size of the optic cup to that of the optic disc and is computed as CDR = VCD/VDD, where VCD is the Vertical Cup Diameter and VDD is the Vertical Disc Diameter (Saranya). The optic disc and cup are therefore delineated first in order to compute the CDR. When the CDR is greater than a threshold, the eye is considered glaucomatous; otherwise it is healthy. After the CDR is found, the next stage applies Artificial Neural Networks, which classify the disease according to severity as normal, first stage, second stage or third (advanced) stage.
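As an illustration of this step, the following is a minimal MATLAB sketch of how the vertical cup-to-disc ratio and a severity stage could be computed, assuming the cup and disc have already been segmented into binary masks; the function name, the mask variables and the staging cut-offs are illustrative assumptions, not values prescribed by this work.

```matlab
% Minimal sketch: vertical cup-to-disc ratio (CDR) from binary masks.
% cupMask and discMask are assumed outputs of an earlier segmentation
% step; the staging cut-offs below are illustrative, not clinical values.
function [cdr, stage] = estimateCDR(cupMask, discMask)
    cupBox  = regionprops(cupMask,  'BoundingBox');   % [x y width height]
    discBox = regionprops(discMask, 'BoundingBox');
    vcd = cupBox(1).BoundingBox(4);    % vertical cup diameter (pixels)
    vdd = discBox(1).BoundingBox(4);   % vertical disc diameter (pixels)
    cdr = vcd / vdd;                   % CDR = VCD / VDD

    if cdr <= 0.3                      % hypothetical thresholds
        stage = 'normal';
    elseif cdr <= 0.5
        stage = 'first stage';
    elseif cdr <= 0.7
        stage = 'second stage';
    else
        stage = 'advanced stage';
    end
end
```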
1.1 Motivation: I am interested in the biomedical field, and I chose image processing as the area in which to build my career. I want to build a generic framework that can be applied across image processing problems; for example, a level set model can be used to segment images chosen from an imaging domain. I have a preference for algorithms that are generic and can be applied to images from any domain, and the domain I chose to work on is domain-specific glaucoma image segmentation. The aim of the algorithm is to extract the energy features of glaucoma from the eye image using wavelet sub-bands. The average and energy values are calculated from the given image and the CDR is computed; the CDR output is then passed to Artificial Neural Networks, which classify the disease. The proposed strategy achieves good accuracy for glaucoma detection, the disc and cup segmentation results are superior to existing methods, and the computational complexity can be reduced (Jyotica Pruthvi). The strategy therefore has great potential for large-scale population-based glaucoma screening. This work concentrates on computing the CDR from the disc. Motivated by the observation that similar discs often have very similar CDRs, and by the fact that many discs do not have an obvious boundary between the neuroretinal rim and the optic cup, we segment the disc using a disc segmentation method in which preprocessing such as image filtering and colour contrast enhancement is performed first, followed by a combined approach of image segmentation and classification using texture, thresholding and morphological operations for segmenting the optic cup (Saranya). Multimodal features such as Gabor wavelet transforms are also used. Based on the segmented disc and cup, the CDR is computed for glaucoma screening (Abirami). Assessment of raised intraocular pressure (IOP) is the strategy traditionally used to identify glaucoma. In previous work on classifying glaucoma with image-based features from fundus photographs, the features are usually computed at the image level and used for a binary classification between glaucomatous and healthy subjects. The disadvantages of those earlier methods are that they do not take noise or image normalization of the input retinal image into account, they do not handle high-complexity images well, and manual assessment is subjective, tedious and expensive. In these strategies, the selection of features and the classification procedure are difficult and challenging. In addition, 3D images are not easily available, and the high cost of acquiring them makes them inappropriate for a large-scale screening program. The advantage of the proposed framework is that assessment of the damaged optic nerve head is both more promising than, and superior to, IOP measurement or visual field testing for glaucoma screening.
1.2 Problem statement: Narrow angles are a risk factor for developing acute angle-closure glaucoma, a form of glaucoma that can lead to vision loss very quickly, although regular eye exams can help diagnose narrow angles before glaucoma develops. Glaucoma affects all age groups, including infants, children and the elderly. Because of low contrast and noise in the images, doctors may have difficulty identifying small abnormal conditions in the eye. To address this issue, this work proposes modified wavelet sub-bands to extract the energy levels associated with glaucoma and uses MLP-BP ANN algorithms for classification; the approach is proposed and analysed.

1.3 Structure of the Optic Cup and Disc


Currently, three methods are used to detect glaucoma. One is the assessment of increased pressure inside the eyeball. This method is not sensitive enough to detect glaucoma early and is not specific to the disease, which sometimes occurs without increased pressure. Another is the assessment of abnormal vision. This method requires specialized equipment, rendering it unsuitable for widespread screening. The third method, assessment of the damage to the head of the optic nerve, is the most reliable, but it requires a trained professional and is time-consuming, expensive and highly subjective. Glaucoma is characterized by a vertical elongation of the optic cup, a white area at the centre of the optic nerve head, or optic disc. This elongation alters the cup-to-disc ratio (CDR) but does not normally affect vision. The computerized technique developed here measures the CDR from two-dimensional images of the back of the eye. The technique uses an algorithm that divides the images into hundreds of segments called superpixels and classifies each segment as part of either the optic cup or the optic disc. The cup and disc measurements can then be used to compute the CDR.
Fig 1 Major structure of optic cup and optic disc
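A minimal sketch of the superpixel step mentioned above is given below, assuming a recent MATLAB release with the Image Processing Toolbox; the file name and the number of superpixels are placeholders, and the subsequent cup/disc classification of each segment is not shown.

```matlab
% Minimal sketch: over-segment a fundus photograph into superpixels,
% the first step of the CDR technique described above.
A = imread('fundus.jpg');            % placeholder file name
[L, N] = superpixels(A, 300);        % SLIC-style superpixels (label map L)
B = boundarymask(L);                 % binary mask of superpixel boundaries
imshow(imoverlay(A, B, 'cyan'));     % visualize the segments
% Each of the N labelled segments would then be classified as cup or disc.
```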

1.3.1 Neuroretinal Rim


The region between the cup and the disc is the neuroretinal rim. The neuroretinal rim consists of nerve fibers, while the pale centre is free of nerve fibers. The normal optic disc contains approximately 1.5 million nerve fibers, but in glaucoma the pressure within the eye reduces the blood supply; the consequent lack of nourishment in the retina results in death of the nerve fibers [4]. Thus, thinning of the neuroretinal rim along with enlargement of the cup (cupping) takes place (Figure 1). The size of the cup can be evaluated with respect to the size of the disc as a whole, expressed as the cup-to-disc ratio. The Cup-to-Disc Ratio (CDR) expresses the proportion of the disc occupied by the cup, and it is widely accepted as an index for the appraisal of glaucoma [5]. For a normal eye, the CDR value lies between 0.1 and 0.3 [6]. As the optic nerve degenerates, this ratio increases. Calculation of the cup-to-disc ratio (C/D) helps in classifying the extent of differentiation between normal and glaucomatous cases.

Fig 1. Optic Disc Structure

1.4 High Intraocular Pressure (IOP)


A high level of intraocular pressure (IOP) is one of the major risk factors for glaucoma. The objective of current treatment approaches is to reduce the IOP inside the eye in order to prevent structural damage [3]. Glaucoma has several types, but the main two are open-angle and angle-closure glaucoma, because both are associated with high intraocular pressure inside the eye. Open-angle glaucoma is more common than angle-closure glaucoma. There are no clear symptoms of open-angle glaucoma because it develops gradually, whereas angle-closure glaucoma is very painful and needs immediate treatment [4]. Evaluation of retinal nerve fiber layer (RNFL) thickness and of visual field parameters is important for the detection of glaucoma [5].

1.5 ISNT quadrants


In healthy eyes there is a normal balance between the fluids: one fluid is produced inside the eye, and the other leaves the eye through the eye's drainage system [2]. This balance keeps the intraocular pressure (IOP) within the eye constant, but in glaucoma the balance of fluid produced within the eye is not maintained properly, which in turn increases the IOP and damages the optic nerve. The diagnostic criteria for glaucoma include 1) intraocular pressure measurement, 2) optic nerve head evaluation, 3) retinal nerve fiber layer assessment and 4) visual field defects. Optic nerve head assessment in fundus images is the more promising and advanced option. Observation of the optic nerve head, the cup-to-disc ratio (CDR) and the neural rim configuration is important for the early detection of glaucoma in clinical practice. As the IOP increases, the cup size begins to increase, which consequently increases the CDR. As the cup size increases it also affects the neuroretinal rim (NRR) [2]. The NRR is the region located between the edge of the disc and the physiological cup. In the presence of glaucoma, the area covered by the NRR in the superior and inferior regions becomes thin compared with the area covered by the NRR in the nasal and temporal regions. The digital fundus images of a normal eye and a glaucomatous disc, together with the inferior, superior, nasal and temporal (ISNT) quadrants, are illustrated in Figure 1.

Figure 1: Normal Disc, Glaucomatous Disc, ISNT Quadrants


Optic nerve assessment is thus able to detect glaucoma early, but it is currently performed by a trained glaucoma specialist or with specialized, expensive equipment such as OCT (Optical Coherence Tomography) and HRT (Heidelberg Retinal Tomography) systems [17]. However, optic disc assessment by an ophthalmologist is subjective, and the availability of OCT/HRT is limited because of the cost involved. The 2D digital fundus image is taken by a fundus camera, which photographs the retinal surface of the eye. Compared with OCT/HRT machines, the fundus camera is easier to operate, less costly, and able to assess multiple eye conditions [17]. Many researchers have therefore used fundus images to automatically analyse the optic disc structure.

1.6 DWT Transform


Here a neural network is trained to recognize the parameters for detecting the different stages of glaucoma (Sheeba). The neuron model has been developed using a feed-forward back-propagation network (Sheeba), and the program is developed in Matlab. The images acquired with medical imaging techniques are analysed in Matlab, which provides a variety of image processing functions that allow the required features and information to be extracted from the images. The software can be used to detect the early stages of glaucoma. For the detection and management of glaucoma, recent advances in biomedical imaging offer effective quantitative imaging alternatives (Kullayamma). Manual analysis of eye images is fairly time-consuming, and the accuracy of parameter measurements varies between experts. Wavelet transforms (WT) are used in image processing to obtain texture features; in a WT, the content of the image is represented in the frequency domain. Here, the discrete wavelet transform (DWT) with Daubechies, Symlets and biorthogonal wavelets is used to extract features, and wavelet energy signatures are calculated from these features. A Probabilistic Neural Network is used to automatically analyse (Kullayamma) and classify the images as normal or abnormal, and K-means clustering is finally applied to find the exudates present in the abnormal eye images. This scheme reduces the processing time currently taken by a technologist to analyse patient images (Kullayamma). The impact of feature ranking and normalization is also studied to improve the results. The proposed features are clinically significant and can be used to detect glaucoma accurately. Glaucoma is accompanied by ongoing destruction of the optic nerve head (ONH) caused by an increase in intraocular pressure within the eye. We investigate the discriminatory potential of wavelet features obtained from the Daubechies (db3), Symlets (sym3) and biorthogonal (bio3.3, bio3.5 and bio3.7) wavelet filters. We propose a technique to extract energy signatures obtained using the 2-D discrete wavelet transform, and we subject these signatures to different feature ranking and feature selection strategies. Glaucoma is caused by increased intraocular pressure (IOP) due to malfunction of the drainage structure of the eyes [1]; the prevalent model estimates that approximately 11.1 million patients worldwide will suffer from glaucoma-induced bilateral blindness in 2020 [2]-[3]. If the entire nerve is destroyed, blindness results [15]. We calculate the averages of the detailed horizontal and vertical coefficients and the wavelet energy signature from the detailed vertical coefficients. We subject the extracted features to a range of feature ranking and feature selection schemes to determine the best combination of features to maximize inter-class similarity and aid the convergence of classifiers such as the support vector machine (SVM), sequential minimal optimization (SMO), random forest and naive Bayes techniques.
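The following MATLAB sketch shows one way the wavelet energy signatures described above could be computed with the Wavelet Toolbox; the file name and the exact feature layout are assumptions for illustration.

```matlab
% Minimal sketch: single-level 2-D DWT energy signatures using the
% db3, sym3 and bior3.3/3.5/3.7 filters named in the text.
img = im2double(rgb2gray(imread('fundus.jpg')));   % placeholder file name
filters = {'db3','sym3','bior3.3','bior3.5','bior3.7'};
features = [];
for k = 1:numel(filters)
    [~, cH, cV, ~] = dwt2(img, filters{k});    % approximation/detail sub-bands
    avgH   = mean(abs(cH(:)));                 % average of horizontal detail
    avgV   = mean(abs(cV(:)));                 % average of vertical detail
    energy = sum(cV(:).^2) / numel(cV);        % energy of the vertical detail
    features = [features, avgH, avgV, energy]; %#ok<AGROW>
end
```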
1.7 Dataset
The retinal images used for this study were collected from a web database. All the images were taken and stored in JPEG format.
Fig. 1. Typical fundus images: (a) normal, (b) glaucoma, (c) normal and glaucoma vision, (d) optic disc of the right and left eye of a patient with advanced glaucoma.
In glaucoma, the pressure within the eye's vitreous chamber rises and compromises the blood vessels of the optic nerve head, leading to eventual permanent loss of the axons of the vital ganglion cells. Regarding the suitability of the samples, the ethics committee, consisting of senior doctors, approved the use of the images for this research. All the images were taken at a resolution of 560 × 720 pixels and stored in lossless JPEG format [10]. The dataset contains 60 fundus images: 30 normal and 30 open-angle glaucomatous images from subjects aged 20 to 70 years. A fundus camera, a microscope and a light source were used to acquire the retinal images used to diagnose disease. Fig. 1(a) and (b) present typical normal and glaucoma fundus images, respectively.
1.8 Methodology
The images in the dataset were subjected to standard histogram equalization [16]. The objective of applying histogram equalization was twofold: to reassign the intensity values of the pixels in the input image so that the output image contained a uniform distribution of intensities, and to increase the dynamic range of the histogram of the image. The following detailed procedure was then employed as the feature extraction procedure on all images before proceeding to the feature ranking and feature selection schemes.
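A minimal sketch of this preprocessing step is shown below, assuming the images sit in a local folder; the folder names are placeholders.

```matlab
% Minimal sketch: apply histogram equalization to every image in the
% dataset before feature extraction.
files = dir(fullfile('dataset', '*.jpg'));         % placeholder folder
if ~exist('equalized', 'dir'), mkdir('equalized'); end
for k = 1:numel(files)
    I = imread(fullfile('dataset', files(k).name));
    G = rgb2gray(I);                               % work on intensity values
    E = histeq(G);                                 % flatten the histogram
    imwrite(E, fullfile('equalized', files(k).name));
end
```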
1.8.1 Symptoms and classification
The first and foremost change due to glaucoma is in the intraocular pressure of the eye; in fact, glaucoma is often described as a disease of elevated eye pressure [1]. The usual value of intraocular pressure varies between 10 and 20 mmHg, but in patients with glaucoma it may increase (though not always). Because of this pressure increase, the nerve fibres begin to die. When these fibres die, light falling on the corresponding regions no longer induces any sense of vision, so the spot becomes blind; in fact, the blind spot is called the cup. With glaucoma the cup area may increase and, correspondingly, the rim area of the disc reduces. The process is so slow that a small change may take years. As a result of these changes, the side vision of the patient reduces gradually [2]. The increase in the blind area increases the cup size, while the disc size is assumed to remain constant. The vertical cup-to-disc ratio of the eye is defined as the ratio of the diameter of the cup to the diameter of the optic disc; an area-based version is
Cup-to-disc ratio = Area of the Cup / Area of the Disc
With glaucoma the cup enlarges, so the cup-to-disc ratio increases. The cup may not increase uniformly in all directions, which can create irregularities in the thickness of the Inferior (I), Superior (S), Nasal (N) and Temporal (T) rim. Another parameter that can be taken into consideration is the area of the neuroretinal rim (NRR), the region between the cup and the disc. As the cup enlarges due to glaucoma, the area of the NRR is reduced. As the loss of nerve fibres becomes more severe, the peripheral vision of the patient reduces. The different stages of the disease are classified as normal, mild and severe.
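As a concrete illustration of the ratio above, the following MATLAB fragment computes the area-based cup-to-disc ratio and the NRR area from binary cup and disc masks; the mask variables are assumed to come from an earlier segmentation stage.

```matlab
% Minimal sketch: area-based cup-to-disc ratio and neuroretinal rim area.
cupArea  = sum(cupMask(:));            % number of cup pixels
discArea = sum(discMask(:));           % number of disc pixels
areaCDR  = cupArea / discArea;         % cup-to-disc ratio = cup area / disc area
nrrMask  = discMask & ~cupMask;        % rim = disc minus cup
nrrArea  = sum(nrrMask(:));
% As the cup enlarges, areaCDR rises and nrrArea shrinks, which is what the
% ISNT-based analysis looks for in each quadrant.
```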
1.9 Processing of images: The images are obtained using a fundus camera and preprocessed in Matlab. A number of morphological operations are used for this purpose. First, the images are cropped interactively to select the required disc-cup area; all further processing is done on the cropped image. Lighter objects on the edge are suppressed and the resulting image is then converted into grey scale, on which the morphological operations are performed. Mathematical morphology is an approach to image analysis based on set theory [3]. Here two fundamental morphological operations, dilation and erosion, defined in terms of the intersection of an image with a translated shape, are used for extracting features from an image. Dilation is an operation that "grows" objects in an image; a shape referred to as a structuring element controls the extent of the growing.

1.9.1 Morphological Operation


The erosion of a set by a structuring element is the set of pixel positions for which the structuring element, placed with its reference point at that position, is contained completely within the set. An opening is similar to erosion, except that it consists of all points covered by the structuring element when the structuring element can be placed within the set. Dilation is the erosion of the background of a set, and erosion "shrinks" objects in an image. Dilation and erosion can be used in various combinations: in a morphological opening, erosion removes small objects and the subsequent dilation tends to restore the shape of the objects that remain. The structuring element used here is a disk of size 200. The morphologically opened output is treated as the background image and is subtracted from the grey-scale image [4]. The intensity of the resulting image is adjusted and converted into a binary image by thresholding, representing the cup-disc region. The areas of the cup, disc and NRR are found from the extracted binary regions. Glaucoma is characterized by a particular pattern of progressive damage to the optic nerve that generally begins with a subtle loss of side vision (peripheral vision).
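The morphological pipeline just described might look as follows in MATLAB; the crop rectangle and file name are placeholders, and in practice the crop would be performed interactively.

```matlab
% Minimal sketch of the morphological processing described above.
I   = imread('fundus.jpg');               % placeholder file name
roi = imcrop(I, [150 150 400 400]);       % interactive crop in practice
G   = rgb2gray(roi);                      % grey-scale conversion
se  = strel('disk', 200);                 % disk structuring element of size 200
bg  = imopen(G, se);                      % morphologically opened background
D   = imadjust(imsubtract(G, bg));        % subtract background, adjust intensity
BW  = im2bw(D, graythresh(D));            % threshold to a binary cup/disc map
stats = regionprops(BW, 'Area');          % areas of the extracted regions
```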
1.10 Image Preprocessing
Differences in luminosity, contrast and brightness inside retinal images make it complex to extract retinal features and to distinguish exudates from other bright features in the images. Image preprocessing is therefore required to equalize the irregular illumination associated with retinal images. Each image is subjected to z-score normalization [2], [1], which converts the values to a common scale with an average of zero and a standard deviation of one:
y_new = (y_old - mean) / std    (1)
An average of zero means that no aggregation distortion is introduced. Here, y_old is the original value, y_new is the new value, and mean and std are the mean and standard deviation of the original data range, respectively.
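A minimal sketch of Eq. (1) applied to the pixel values of a single image is given below; the file name is a placeholder.

```matlab
% Minimal sketch: z-score normalization of one retinal image.
Y    = double(rgb2gray(imread('fundus.jpg')));   % placeholder file name
Ynew = (Y - mean(Y(:))) / std(Y(:));             % zero mean, unit std. deviation
```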
Damage to a large number of nerve fibres creates a blind spot, leading to loss of vision. One of the indicators of a glaucomatous eye is a change in the appearance of the optic disc. The optic disc is elliptical in shape with a bright orange-pink colour and a pale centre. Due to the degeneration of nerve fibres, the orange-pink colour disappears and the disc becomes pale, i.e. the depression called the cup enlarges and the neuroretinal rim thins. The pale centre, called the cup, is devoid of neuroretinal tissue. For a normal eye the cup-to-disc ratio is 0.3 to 0.5; for a glaucomatous eye the ratio approaches 0.8. Approximately 5 million people live with a glaucoma risk, while around 800,000 people suffer from glaucomatous damage in Germany [1]. If an optical image of the retina is acquired, then automated early detection of eye disease is possible by performing a series of image processing operations. With the great improvement in the field of medical imaging, image processing techniques help in the early diagnosis of glaucoma and other eye diseases. Retinal fundus images assist trained clinicians in diagnosing any abnormality and any change in the retina; these images are captured using special devices called ophthalmoscopes. Medical image analysis and processing has great significance in non-invasive treatment and clinical study. The information about the optic disc can be used to examine the severity of glaucoma, and the location of the optic disc is an important issue in retinal image analysis, as it is a significant landmark feature. Fig. 2 shows the fundus camera and a retinal fundus image.
Fig. 2 Digital fundus camera and acquired retinal fundus image
Image Processing Techniques
Various image processing techniques used in the automated early diagnosis and analysis of eye diseases are enhancement, registration, fusion, segmentation, feature extraction, pattern matching, classification, morphology, statistical measurement and analysis [2][3].
Image Enhancement - Image enhancement includes varying the brightness and contrast of the image. It also includes filtering and histogram equalization, and is a preprocessing step used to enhance various features of the image.
Image Registration - Image registration is an important technique for change detection in retinal image diagnosis. In this process, two images are aligned onto a common coordinate system. Images may be taken at different times and with different imaging devices; in medical diagnosis it is essential to combine data from different images, and for better analysis and measurement the images are aligned geometrically [4].
Image Fusion - Image fusion is the process of combining information acquired from a number of imaging devices. Its goal is to integrate contemporary, multi-sensor, multi-temporal or multi-view information into a single image containing all the information, so as to reduce the amount of data.
Feature Extraction - Feature extraction is the process of identifying and extracting regions of interest from the image.
Segmentation - Segmentation is the process of dividing an image into its constituent objects and groups of pixels that are homogeneous according to some criteria. Segmentation algorithms are area-oriented rather than pixel-oriented. The main objective of image segmentation is to extract various features of the image, which can be merged or split in order to build the objects of interest on which analysis and interpretation can be performed. It includes clustering, thresholding, etc.
Morphology - Morphology is the science of appearance, shape and organization. Mathematical morphology is a collection of non-linear processes that can be applied to an image to remove details smaller than a certain reference shape. The main morphological operations are erosion, dilation, opening and closing.
Classification - Classification is an important image analysis technique for estimating statistical parameters according to the grey-level intensities of pixels. It includes labelling a pixel or group of pixels based on the grey values and other statistical parameters. Image analysis functions are used to understand the contents of an image [5].
An artificial intelligence system involving an ANN and the analysis of the nerve fibers of the retina from perimetry studies and clinical data was developed [8]. The groups were defined as follows: normal eyes were considered stage 0 and ocular hypertension stage 1.
Early glaucoma was considered stage 2 and established glaucoma stage 3; advanced glaucoma was considered stage 4 and terminal glaucoma stage 5. An MLP trained with the Levenberg-Marquardt technique was used. The 100% specificity and sensitivity obtained indicate that every eye was correctly classified into the corresponding stage of glaucoma. An algorithm to detect glaucoma using morphological image processing was also developed using fundus images.

1.12 Background
Glaucoma is characterized by the degeneration of optic nerve fibers and astrocytes, often accompanied by an increased intraocular pressure. Due to the loss of nerve fibers, the Retinal Nerve Fiber Layer (RNFL) thickness decreases. In the course of the disease, the interconnection between the photoreceptors and the visual cortex is reduced; in the worst case, the visual information from the photoreceptors can no longer be transmitted to the brain, and visual field loss up to blindness is threatening.

Fig. 1. Major structures of the optic nerve head that are visible in color fundus photographs
The optic disk is delimited by the optic disk border and can be divided into two major zones: (i) the neuroretinal rim, composed of astrocytes and nerve fibers, and (ii) the brighter cup or excavation, which consists exclusively of supporting tissue. The disappearance of axons and astrocytes affects the structural appearance of the ONH and causes a reduction of the functional capabilities of the retina. The ONH can be examined by ophthalmoscopy or by stereo fundus photography: in the course of the disease the neuroretinal rim gets thinner while the cup expands due to the loss of nerve fibers and astrocytes (Fig. 1). The qualitative assessment of the ONH structure and of the functional abilities, together with the patient's medical history and intraocular pressure, is the common basis for a reliable glaucoma diagnosis by ophthalmologists. The inherent subjectivity of the resulting conclusion leads to considerable inter- and intra-observer variability in differentiating between normal and glaucomatous ONHs (Varma et al., 1992). However, quantitative parameters can help to make the qualitative assessment more objective and reproducible, reduce the observer variability, and track glaucoma progression in patient follow-up. Such parameters can be gained manually or even by computer-based technologies from several imaging modalities.

Fig. 2. Images of the central retina. The optic nerve head (ONH) centered fundus photograph (a) is used for automated glaucoma detection by the proposed glaucoma risk index, while the glaucoma probability score utilizes HRT 2.5-dimensional topography images (b). The OCT line scan (c) traversing the ONH illustrates different layers of the retina, such as the nerve fiber layer at the top.
Stereoscopic images of the ONH are commonly used for documenting its cup-shaped structure. Important ONH characteristics such as disk area, disk diameter, rim area, cup area or cup diameter can be extracted from the stereo image by planimetry (Betz et al., 1982) in order to obtain the well-established cup-to-disk ratio. In glaucoma, the cup-to-disk ratio measures the decrease of the rim area while the disk area remains constant. Although this ratio is highly influenced by the disk size, it gives a general estimate of whether the ONH shape is within its normal limits or has to be considered conspicuous. Several imaging modalities provide quantitative parameters of the ONH in glaucoma: (i) Confocal Scanning Laser Ophthalmoscopy (CSLO), (ii) Scanning Laser Polarimetry (SLP) and (iii) Optical Coherence Tomography (OCT). CSLO, commercially available as the Heidelberg Retina Tomograph (HRT, Heidelberg Engineering, Heidelberg, Germany), provides a 2.5-dimensional topographic image of the ONH through the undilated pupil (Fig. 2b).

1.13 Artificial Neural Networks


In this work, a multi-layer neural network and principal-component-based performance analysis is explored. The selection of optimal parameters such as the number of hidden layers, the learning rules and the transfer functions is taken into consideration, and the classification results are obtained through rigorous experimentation. Diabetic retinopathy is an eye syndrome caused by the complications of diabetes, and it can be detected early for effective treatment. The vision of a patient may start to deteriorate as diabetes progresses, leading to diabetic retinopathy. In this investigation, sets of parameters describing the EEG eye-state data set are taken, so that classification of the eye status represented by the data sets becomes possible. An automated approach for classifying diabetic retinopathy from images is presented; the outputs are classified as normal or diseased. The test grades were found to agree with the accepted results derived from the physician's direct diagnosis. The stated results verify that the proposed method demonstrates the capability of designing a new intelligent assistance diagnosis system, and they show that the neural network model is more accurate than the other NN models and is effective for the classification of EEG eye states. One of the basic electronic models based on the neural structure of the brain is the Artificial Neural Network. The brain fundamentally learns from experience, which is proof that some problems beyond the scope of present computers are indeed solvable by small, energy-efficient packages. This brain-like model also promises a less technical way to develop machine solutions, and this new approach to computing provides a more graceful degradation during system overload than its more established counterparts.

1.13.1 Neurons
One type of network sees its nodes as 'artificial neurons'; such networks are called artificial neural networks (ANNs). An artificial neuron is a computational model inspired by natural neurons. Natural neurons receive signals through synapses located on the membrane of the neuron. When the signals received are strong enough (they surpass a certain threshold), the neuron is activated and emits a signal through the axon. This signal may be sent to another synapse and may activate other neurons.

Fig.2: Natural neurons (artist’s conception).

The complexity of real neurons is highly abstracted when modelling artificial neurons. An artificial neuron consists of inputs (like synapses), which are multiplied by weights (the strength of the respective signals) and then evaluated by a mathematical function that determines the activation of the neuron. Another function (which may be the identity) computes the output of the artificial neuron (sometimes as a function of a certain threshold). ANNs combine artificial neurons in order to process information.

Fig.3: An artificial neuron

The higher the weight of an artificial neuron, the stronger the input that is multiplied by it. Weights can also be negative, in which case the signal is inhibited by the negative weight. Depending on the weights, the computation of the neuron will be different: by adjusting the weights of an artificial neuron we can obtain the output we want for specific inputs. But when we have an ANN of hundreds or thousands of neurons, it would be quite complicated to find all the required weights by hand. However, there are algorithms that can adjust the weights of the ANN in order to obtain the required output from the network; this process of adjusting the weights is called learning or training. The number of types of ANNs and their uses is very large. Since the first neural model by McCulloch and Pitts (1943), hundreds of different models considered as ANNs have been developed. They may differ in their functions, their topology, the accepted values, the learning algorithms, and so on. There are also many hybrid models in which each neuron has more properties than the ones reviewed here. For reasons of scope, we present only an ANN that learns using the back propagation algorithm (Rumelhart and McClelland, 1986) for learning the appropriate weights, since it is one of the most common models used in ANNs and many others are based on it. Since the function of ANNs is to process information, they are used mainly in fields related to it. There is a wide variety of ANNs used to model real neural networks and to study behaviour and control in animals and machines, but there are also ANNs used for engineering purposes, such as pattern recognition, forecasting and data compression.
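As a small worked example of the weighted-sum-plus-activation behaviour described above, the following MATLAB fragment evaluates a single artificial neuron; the input, weight and bias values are made up purely for illustration.

```matlab
% Minimal sketch: one artificial neuron with a sigmoid activation.
x = [0.9; 0.2; 0.4];        % inputs (like synapses)
w = [0.5; -1.2; 0.8];       % weights; a negative weight inhibits its input
b = 0.1;                    % bias term
a = w' * x + b;             % weighted sum of the inputs
y = 1 / (1 + exp(-a));      % sigmoid activation gives the neuron output
```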
1.13.2 Neural Network

Fig.4: An artificial neural network

An artificial neural network is shown in Figure 4. An ANN is a group of nodes interconnected like the vast network of neurons in a brain. Here, each circular node represents an artificial neuron and an arrow represents a connection from the output of one neuron to the input of another. An artificial neural network is an information processing system that has certain performance characteristics in common with biological neural networks [5]. A neural network is characterized by its pattern of connections between the neurons, its method of determining the weights on the connections, and its activation function. A neural net consists of a large number of simple processing elements called neurons or nodes. Each neuron is connected to other neurons by means of directed communication links, each with an associated weight. The weights represent the information used by the net to solve a problem. Each neuron has an internal state, called its activation or activity level, which is a function of the inputs it has received; a neuron sends its activation as a signal to several other neurons. Artificial neural networks consist of many nodes, processing units analogous to neurons in the brain. The neural net can be a single-layer or multilayer net. In a single-layer net there is a single layer of weighted interconnections, while a multi-layer artificial neural network comprises an input layer, an output layer and one or more hidden (intermediate) layers of neurons. The activity of the neurons in the input layer represents the raw information that is fed into the network. The activity of the neurons in the hidden layer is determined by the activity of the input neurons and the connecting weights between the input and hidden units. Similarly, the behaviour of the output units depends on the activity of the neurons in the hidden layer and the connecting weights between the hidden and output layers [5]. A neural network can be trained to perform a particular function by adjusting the values of the connections (weights) between the elements. Commonly, neural networks are adjusted, or trained, so that a particular input leads to a specific target output; the network is adjusted, based on a comparison of the output and the target, until the network output matches the target. Typically, many such input/target pairs are needed to train a network.
1.13.3 Back Propagation
Back propagation is the generalization of the Widrow-Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions. Input vectors and the corresponding target vectors are used to train a network until it can approximate a function, associate input vectors with specific output vectors, or classify input vectors in an appropriate way as defined by the user. Networks with biases, a sigmoid layer, and a linear output layer are capable of approximating any function with a finite number of discontinuities. Standard back propagation [6] is a gradient descent algorithm, as is the Widrow-Hoff learning rule, in which the network weights are moved along the negative of the gradient of the performance function. The term back propagation refers to the manner in which the gradient is computed for nonlinear multilayer networks. There are a number of variations on the basic algorithm that are based on other standard optimization techniques, such as conjugate gradient and Newton methods. Properly trained back propagation networks tend to give reasonable answers when presented with inputs they have never seen. Typically, a new input leads to an output similar to the correct output for input vectors used in training that are similar to the new input being presented. This generalization property makes it possible to train a network on a representative set of input/target pairs and get good results without training the network on all possible input/output pairs [7]. The simplest implementation of back propagation learning updates the network weights and biases in the direction in which the performance function decreases most rapidly, the negative of the gradient [8].
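The steepest-descent update at the heart of this rule can be illustrated with the toy MATLAB fragment below, which fits a single linear neuron to a handful of input/target pairs; the data, learning rate and epoch count are illustrative only.

```matlab
% Minimal sketch: gradient descent on the mean square error (mse) for a
% single linear neuron, moving the weights along the negative gradient.
x   = [0.1 0.5 0.9; 1 1 1];          % inputs; the second row acts as a bias input
t   = [0.2 0.6 1.0];                 % target outputs
w   = randn(1, 2);                   % initial weights (including bias weight)
eta = 0.1;                           % learning rate
for epoch = 1:100
    y    = w * x;                    % network output
    e    = y - t;                    % output error
    grad = (2 / numel(t)) * e * x';  % gradient of the mse w.r.t. the weights
    w    = w - eta * grad;           % step in the negative gradient direction
end
```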
1.13.4 Training the Network
Once the network weights and biases are initialized, the network is ready for training. The training process requires a set of examples of proper network behaviour: network inputs and target outputs. During training, the weights and biases of the network are iteratively adjusted to minimize the network performance function. The default performance function for feed-forward networks is the mean square error (mse), the average squared error between the network outputs and the target outputs. All of these algorithms use the gradient of the performance function to determine how to adjust the weights to minimize performance [9]. The gradient is determined using back propagation, which involves performing computations backward through the network. There are generally four steps in the training process, illustrated in the sketch below:
1) Assemble the training data.
2) Create the network object.
3) Train the network.
4) Simulate the network response to new inputs.
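A minimal sketch of these four steps with MATLAB's Neural Network Toolbox is shown below; the feature matrix, label vector and hidden-layer size are assumptions carried over from the feature extraction stage, and patternnet's default training algorithm stands in for plain gradient-descent back propagation.

```matlab
% Minimal sketch: the four training steps using the Neural Network Toolbox.
X = features;                 % 1) assemble the training data (one column per image)
T = labels;                   %    targets: 1 = glaucoma, 0 = normal (assumed)
net = patternnet(10);         % 2) create the network object (10 hidden neurons)
net = train(net, X, T);       % 3) train the network
Y = net(X);                   % 4) simulate the network response to (new) inputs
```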
CHAPTER 2
LITERATURE REVIEW

Introduction
Currently, there is increasing interest in establishing automatic systems that screen large numbers of people for vision-threatening diseases such as glaucoma and diabetic retinopathy and provide automated detection of the disease. Image processing is now becoming a practical and useful tool for screening. Digital imaging offers a high-quality permanent record of the fundus, which is used by ophthalmologists to monitor progression or response to therapy, and digital images have the potential to be processed by automated analysis systems. Fundus image analysis is a complicated task because of the variability of fundus images in terms of colour or grey levels, the morphology of the anatomical structures of the retina, and the existence of certain features in different patients that may lead to a wrong interpretation. In the literature, numerous examples of digital imaging techniques applied to the identification of diabetic retinopathy can be found. There have been several research investigations to identify retinal components such as the optic disc and optic cup and lesions such as hard and soft exudates, and major contributions to detecting glaucoma and the severity of diabetic retinopathy using fundus images. Glaucoma detection algorithms are broadly classified into two categories, based on the detection of the optic disc and the detection of the optic cup. Optic disc and optic cup detection methods are compared based on their localization, detection and boundary extraction procedures. Techniques to detect diabetic retinopathy are explained with respect to the various feature extraction and abnormality detection approaches for fundus images. The shortcomings of the existing algorithms are identified, and a method is proposed to detect glaucoma and diabetic retinopathy at an early stage for screening applications.
Literature Review using texture analysis and PCA-PNN
The literature survey covers different methods used for glaucoma detection. They are explained as follows. Automated analysis of glaucoma using texture and higher order spectra energy features has been proposed, using higher order spectra (HOS) features from digital fundus images and an SVM for classification, with an accuracy of 91% (Rajendra Acharya). Glaucoma has been classified as normal or abnormal based on false positives and false negatives, and an optimal filter framework has been proposed for detecting target lesions in retinal images (Gwenole Quellec). A 3-D Gabor wavelet based approach for pixel-based hyperspectral imagery classification achieved maximum accuracies of 96.04% and 95.36% (Linlin Shen and Sen Jia). A spatial-contextual support vector machine for remotely sensed images achieved an overall classification accuracy of 95.5% on the hyperspectral image of the IPS data set with 16 classes; the kappa accuracy is 94.9%, and the average accuracy of each class is up to 94.2% (Cheng-Hsuan Li). An approach that is mathematically optimized for unmixing problems with a priori known information ignores some statistical properties of the extracted samples and leads to a suboptimal solution in real situations (Fereidoun A. Mianji). Different wavelet transforms and a technique to extract energy signatures with the 2D DWT, subjected to feature ranking and feature selection strategies for glaucoma classification, achieved an accuracy of around 93% using ten-fold cross validation (Sumeet Dua). Wavelets, Principal Component Analysis (PCA) and a Probabilistic Neural Network (PNN) achieved 90% with PCA-PNN and 95% with DWT-PNN. Heidelberg Retina Tomography (HRT), a confocal laser scanning system developed by Heidelberg Engineering, is also used for the diagnosis of glaucoma: it analyses 3-D images of the retina, so any changes in the nerve head, called the papilla, can be quantitatively characterized (Annu). RNFL thickness has been estimated based on pixel calculation using OCT images for glaucoma detection (Divyabharati). A technique to obtain a grey-scale image from the original RGB image using optic disc detection based on PCA and mathematical morphology obtained an accuracy of 0.9947 for automatic segmentation of the optic disc (Morales Sandra et al.). Glaucoma recognition and segmentation have been performed using a feed-forward neural network and optical material science (Vijayan T). Glaucoma has been classified by extracting two features from retinal fundus images, (i) the Cup-to-Disc Ratio (CDR) and (ii) the ratio of the neuroretinal rim in the inferior, superior, temporal and nasal (ISNT) quadrants, using Artificial Neural Networks to identify the glaucoma stages (Kurnika). An artificial neural network with forward and backward propagation has been used to detect glaucoma with the help of the cup-to-disc ratio (Ms. Pooja Chaudhari), and glaucoma detection using an artificial neural network has also been reported (Sheeba).

Wavelet based Energy Features and PCA


Annu et al. [1] propose glaucomatous image classification using texture features within images, classified effectively with a Probabilistic Neural Network (PNN). Texture features within images are actively pursued for accurate and efficient glaucoma classification. Energy distribution over wavelet sub-bands and Principal Component Analysis (PCA) were applied to find these important texture features. Features were obtained from the Daubechies (db3), biorthogonal (bio3.3, bio3.5 and bio3.7) and Symlets (sym3) wavelet filters. The method extracts energy signatures obtained using the 2-D Discrete Wavelet Transform, and the energy obtained from the detailed coefficients is used to distinguish between normal and glaucomatous images. Classification accuracies of 90% and 95% were obtained by PCA-PNN and DWT-PNN, respectively, which demonstrates the effectiveness of wavelets as feature extractors and of the PNN as a classifier compared with other recent work.
Texture and Higher Order Spectra Features
Rajendra Acharya et al. [2]: The retinal optic nerve fiber layer can be assessed using optical coherence tomography, scanning laser polarimetry and Heidelberg retina tomography scanning methods. In this paper, the authors present a novel method for glaucoma detection using a combination of texture and higher order spectra (HOS) features from digital fundus images. Support vector machine, sequential minimal optimization, naive Bayes and random-forest classifiers are used to perform supervised classification. Their results demonstrate that the texture and HOS features, after z-score normalization and feature selection, and when combined with a random-forest classifier, perform better than the other classifiers and correctly identify the glaucoma images with an accuracy of more than 91%. The impact of feature ranking and normalization is also studied to improve the results. The proposed features are clinically significant and can be used to detect glaucoma accurately.

Lesions in Retinal Images


Gwenole Quellec et al. [3]: Automated detection of lesions in retinal images is a crucial step towards efficient early detection, or screening, of large at-risk populations. In particular, the detection of microaneurysms, usually the first sign of diabetic retinopathy (DR), and the detection of drusen, the hallmark of age-related macular degeneration (AMD), are of primary importance. In spite of the substantial progress made, detection algorithms still produce 1) false positives, where target lesions are mixed up with other normal or abnormal structures in the eye, and 2) false negatives, where the large variability in the appearance of the lesions causes a subset of these target lesions to be missed. The authors propose a general framework for detecting and characterizing target lesions almost instantaneously. This framework relies on a feature space automatically derived from a set of reference image samples representing target lesions, including atypical target lesions, and those eye structures that are similar-looking but are not target lesions. The reference image samples are obtained either from an expert-driven or a data-driven approach, and factor analysis is used to derive the filters generating this feature space from the reference samples. The entire image processing sequence takes less than a second on a standard PC, compared to minutes in their previous approach, allowing instantaneous detection. Free-response receiver operating characteristic analysis showed the superiority of this approach over a framework where false positives and the atypical lesions are not explicitly modeled. Greater performance was achieved by the expert-driven approach for DR detection, where the designer had sound expert knowledge; however, for both problems a comparable performance was obtained for the expert-driven and data-driven approaches. This indicates that annotation of a limited number of lesions suffices for building a detection system for any type of lesion in retinal images if no expert knowledge is available. The authors are studying whether the optimal filter framework also generalizes to the detection of any structure in other domains.

Gabor Wavelets for Pixel-Based Hyperspectral Imagery Classification


Linlin Shen and Sen Jia [4]: The rich information available in hyperspectral imagery not only poses significant opportunities but also presents big challenges for material classification. Discriminative features are crucial for a system to achieve accurate and robust performance. In this paper, the authors propose a 3-D Gabor-wavelet-based approach for pixel-based hyperspectral imagery classification. A set of complex Gabor wavelets with different frequencies and orientations is first designed to extract signal variances in space, spectrum, and the joint spatial-spectral domain. The magnitude of the response at each sampled location (x, y) for spectral band b contains rich information about the signal variances in the local region, so each pixel can be well represented by the information extracted by the Gabor wavelets. A feature selection and fusion process has also been developed to reduce the redundancy among the Gabor features and make the fused feature more discriminative. The proposed approach was fully tested on two real-world hyperspectral data sets, the widely used Indian Pine site and the Kennedy Space Center. The results show that the method achieves accuracies as high as 96.04% and 95.36%, respectively, even when only a few samples, i.e. 5% of the total samples per class, are labeled.

Support Vector Machine for Remotely Sensed Image Classification


Cheng-Hsuan Li et al. [5]: Recent studies show that hyperspectral image classification techniques that use both spectral and spatial information are more suitable, effective and robust than those that use only spectral information. Using a spatial-contextual term, this study modifies the decision function and constraints of a support vector machine (SVM) and proposes two kinds of spatial-contextual SVMs for hyperspectral image classification. One machine, based on the concept of Markov random fields (MRFs), uses the spatial information in the original space (SCSVM). The other machine uses the spatial information in the feature space (SCSVMF), i.e. the nearest neighbors in the feature space. The SCSVM is better able to classify pixels of different class labels with similar spectral values and to deal with data that have no clear numerical interpretation. To evaluate the effectiveness of SCSVM, the experiments in this study compare the performance of other classifiers: an SVM, a context-sensitive semisupervised SVM, a maximum likelihood (ML) classifier, a Bayesian contextual classifier based on MRFs (ML_MRF), and a k-nearest-neighbor classifier. Experimental results show that the proposed method achieves good classification performance on well-known hyperspectral images (the Indian Pine site (IPS) and the Washington, DC mall data sets). The overall classification accuracy of the hyperspectral image of the IPS data set with 16 classes is 95.5%, the kappa accuracy is up to 94.9%, and the average accuracy of each class is up to 94.2%.

SVM-Based Unmixing-to-Classification
Fereidoun et al. [6]: The need for a priori knowledge of the components comprising each pixel in a scene has made endmember determination, rather than endmember abundance quantification, the primary focus of many unmixing approaches. In the absence of information about the pure signatures present in an image scene, which is often the case, the mean spectra of the pixel vectors directly extracted from the scene are usually used as the pure signatures' spectra. This approach, which is mathematically optimized for unmixing problems with a priori known information, ignores some statistical properties of the extracted samples and leads to a suboptimal solution in real situations. This paper proposes an overlearning-based unmixing-to-classification conversion model to treat the abundance quantification task as a classification problem. A support vector machine, as an efficient classifier, is used to realize this model. It exploits the statistical nature (endmember spectral variability) of the extracted endmember representatives from the hyperspectral scene, rather than solving the problem according to the ideal model in which only the mean spectrum of each training sample set is used. Several experiments are carried out on simulated and real hyperspectral images. The obtained results validate the high performance of the proposed technique in abundance quantification, which is a key subpixel information detection capability.
Wavelet-Based Energy Features
Sumeet Dua et al. [7], Texture features within images are actively pursued for accurate and
efficient glaucoma classification. Energy distribution over wavelet subbands is applied to find
these important texture features. In this paper, we investigate the discriminatory potential of
wavelet features obtained from the daubechies (db3), symlets (sym3), and biorthogonal (bio3.3,
bio3.5, and bio3.7) wavelet filters. We propose a novel technique to extract energy signatures
obtained using the 2-D discrete wavelet transform, and subject these signatures to different feature
ranking and feature selection strategies. We have gauged the effectiveness of the resultant ranked
and selected subsets of features using support vector machine, sequential minimal optimization,
random forest, and naive Bayes classification strategies. We observed an accuracy of around
93% using ten-fold cross-validation, demonstrating the effectiveness of these methods.
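
As a rough illustration of this kind of classifier comparison, the sketch below feeds a precomputed wavelet-energy feature matrix to SVM, random forest and naive Bayes classifiers and reports ten-fold cross-validation accuracy. It is only a minimal sketch under stated assumptions: the feature matrix and labels are random placeholders, scikit-learn is assumed as the library, and SMO is approximated by scikit-learn's SVC, whose solver is SMO-style.

# Minimal sketch: compare classifiers on precomputed wavelet-energy features
# using ten-fold cross-validation (scikit-learn assumed; data are placeholders).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.random((60, 14))               # 60 images x 14 wavelet-energy features (placeholder)
y = np.array([0] * 30 + [1] * 30)      # 0 = normal, 1 = glaucomatous (placeholder labels)

classifiers = {
    "SVM (SMO-style solver)": SVC(kernel="rbf", gamma="scale"),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Naive Bayes": GaussianNB(),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10)   # ten-fold cross-validation
    print(f"{name}: mean accuracy = {scores.mean():.3f}")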

Measurement of RNFL Thickness Using OCT Images for Glaucoma Detection


Dhivyabharathi et al. [8], The thickness of the retinal nerve fiber layer (RNFL) is one of the
important parameters for assessing the disease glaucoma. A substantial amount of vision
can be lost before the patient becomes aware of any defect. Optical Coherence Tomography
(OCT) provides enhanced depth and clarity of viewing tissues with high resolution
compared with other medical imaging devices. It examines the living tissue non-invasively.
This paper presents an automatic method to find the thickness of RNFL using OCT
images. The proposed algorithm first extracts all the layers present in the OCT image by
texture segmentation using Gabor filter method and an algorithm is then developed to
segment the RNFL. The thickness measurement of RNFL is automatically displayed based
on pixel calculation. The calculated thickness values are compared with the original values
obtained from hospital. The result shows that the proposed algorithm is efficient in
segmenting the region of interest without manual intervention. The effectiveness of the
proposed method is proved statistically by the performance analysis.
ANN Glaucoma Detection using Cup-to-Disk Ratio and Neuroretinal Rim
Kurnika Choudhary et al. [9], In this paper glaucoma is classified by extracting two features
from retinal fundus images: (i) Cup to Disc Ratio (CDR) and (ii) the ratio of the neuroretinal rim in the
inferior, superior, temporal and nasal quadrants, that is, the ISNT quadrants. Glaucoma
frequently damages superior and inferior fibers before temporal and nasal optic nerve
fibers, which starts decreasing the superior and inferior rim areas and changes the
order of the ISNT rule. Hence, the detection of rim areas in four directions can assist the
correct verification of the ISNT rule and then improve the correct diagnosis of glaucoma at
early stages. In the end, feed forward back propagation neural network is used for
classification based on the above two features. The tool used to accomplish the objective is
MATLAB R2013a. The average accuracy of the system is around 96%. The method does
not rely on trained glaucoma specialists or specialized and costly OCT/HRT machines.
Several fundus retinal images, both normal and glaucomatous, were applied to the
proposed method for demonstration.
Application of Neural Network for Diagnosing Eye Disease
Gauri Borkhade et al. [10], This paper explores the neural network as an eye disease
classifier. A multilayer neural network and principal-component-based performance
analysis is explored. Selection of optimal parameters such as the number of hidden layers,
learning rules and transfer functions is taken into consideration. The classification results
are obtained through rigorous experimentation. Diabetic retinopathy is an eye syndrome
caused by the complications of diabetes, and it can be detected early for effective treatment.
The vision of the patient may start to deteriorate as diabetes progresses and lead to diabetic
retinopathy. In this investigation, sets of parameters describing the EEG eye-state data set
are taken, so that classification of the eye status represented by the data sets becomes
possible. An automated approach for classification of the disease diabetic retinopathy using
images is presented. The images are classified as normal and diseased. Testing
grades were found to be compliant with the accepted results derived from the
physician's direct diagnosis. The stated results verify that the proposed method could point
out the capability of designing a new intelligent assistance diagnosis system. Results show
that this new neural network model is more accurate than the other NN models. These
results suggest that this model is effective for classification of EEG eye states.
Feed Forward Neural Network and Optical physics
Vijayan et al. [11], This paper proposes a disc and cup segmentation technique for glaucoma
detection using a feed forward network. In optic disc segmentation and optic cup
segmentation, center-surround statistics are used to classify each superpixel as disc or non-
disc. The segmented optic disc and optic cup are then used to compute the cup to disc ratio.
This ratio is compared with a threshold value that has already been calculated from healthy
eyes. The retinal fundus image is put through the algorithm to calculate the CDR, but
calculating the CDR is not the end of the experiment: the stage of glaucoma must also be
known. The feed forward neural network is used to detect whether the glaucoma disease
is in the initial or final stage, and whether the disease can be cured or not. The proposed segmentation
method has been evaluated on a database of 20 images.
Early Stage Glaucoma Detection in Diabetic Patients: A Review
Hussandeep et al. [12], The method proposed performs optic disc and optic cup
segmentation using morphological operations. The aim of this paper is to find the cup to
disc ratio of a glaucoma patient and check the level of disease. If the cup to disc ratio exceeds
0.3 it indicates a high likelihood of glaucoma for the tested patient.
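
A minimal sketch of this decision rule is given below, assuming the cup and disc diameters have already been measured from the segmented image; the 0.3 threshold follows the value quoted above and the function name is hypothetical.

def cdr_decision(cup_diameter: float, disc_diameter: float, threshold: float = 0.3) -> str:
    """Flag an eye as a glaucoma suspect when the cup-to-disc ratio exceeds the threshold."""
    cdr = cup_diameter / disc_diameter
    return "glaucoma suspect" if cdr > threshold else "normal"

# Example: cup diameter 0.35 and disc diameter 1.0 (relative units) give CDR 0.35 -> suspect.
print(cdr_decision(0.35, 1.0))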

Using Artificial Neural Network to Detect Glaucoma with the Help of Cup to Disk Ratio
Ms. Pooja Chaudhari et al. [13], This paper proposes a novel method to detect glaucoma. This
method makes use of a feed forward Artificial Neural Network and the Cup To Disk ratio to
detect glaucoma. It is observed that this method gives more accurate results than prior
methods available. The paper describes the detailed process of the new method, its applications
and future scope.
Automatic Classification of Lymphoma Images With Transform-Based Global Features
Nikita et al. [14], We propose a report on automatic classification of three common types of
malignant lymphoma: chronic lymphocytic leukemia, follicular lymphoma, and mantle
cell lymphoma. The goal was to find patterns indicative of lymphoma malignancies and
allowing these malignancies to be classified by type. We used a computer vision approach for
quantitative characterization of image content. A unique two-stage approach was employed
in this study. At the outer level, raw pixels were transformed with a set of transforms into
spectral planes. Simple (Fourier, Chebyshev, and wavelets) and compound transforms
(Chebyshev of Fourier and wavelets of Fourier) were computed. Raw pixels and spectral
planes were then routed to the second stage (the inner level). At the inner level, the set of
multipurpose global features was computed on each spectral plane by the same feature bank.
All computed features were fused into a single feature vector. The specimens were stained
with hematoxylin (H) and eosin (E) stains. Several color spaces were used: RGB, gray,
CIE-L∗a∗b∗, and also the specific stain-attributed H&E space, and experiments on image
classification were carried out for these sets. The best signal (98%–99% on
earlier unseen images) was found for the HE, H, and E channels of the H&E data set.
DEEP CONVOLUTIONAL NEURAL NETWORK FOR GLAUCOMA SCREENING
Sharanya et al. [15], This paper develops a Deep Learning (DL) architecture with a
convolutional neural network for automated glaucoma diagnosis. Deep learning systems,
such as Convolutional Neural Networks (CNNs), can infer a hierarchical representation
of images to discriminate between glaucoma and non-glaucoma patterns for
diagnostic decisions. This research proposes an image processing technique for the early
detection of glaucoma. Glaucoma is one of the major causes of blindness, but it
is hard to diagnose in its early stages. In this research, a method is proposed for Cup to
Disc Ratio (CDR) assessment using 2-D retinal fundus images. In the proposed method,
the optic disc is first segmented and reconstructed using a novel Sparse Dissimilarity-
Constrained Coding (SDC) approach which considers both the dissimilarity constraint and
the sparsity constraint from a set of reference discs with known CDRs. Subsequently, the
reconstruction coefficients from the SDC are used to compute the CDR for the testing disc.
The segmented optic disc and optic cup are then used to compute the cup to disc ratio for
glaucoma screening. Previous results show an Area Under the Curve (AUC) of the receiver
operating characteristic curve for glaucoma detection of 88% on the databases. The present
results show glaucoma detection of 93% on the databases.
Automated Early Nerve Fiber Layer Defects Detection using Feature Extraction in Retinal
Colored Stereo Fundus Images
Jyotika Pruthvi et al. [16], The onset of glaucoma causes devastation of these
indispensable nerves and eventual vision loss. Hence there is a need for early
detection of the disease and for observing the degeneration of the nerve in a non-invasive mode through
the Retinal Fundus images. In particular, Anisotropic Diffusion Filter for noise removal,
Otsu Thresholding, Canny edge map and Image Inpainting for extraction of retinal blood
vessels, K means Clustering, Multi- thresholding, Active Contour Method, Artificial Neural
Network, Fuzzy C Means Clustering, Morphological Operations have been used and
compared using average measures for detection of boundary of optic disc and cup, modeled
as elliptical objects. Adaptive Neuro Fuzzy Inference System, Support Vector Machine and
Back Propagation Network classifies batch of 20 images obtained from Vitreo Retina Unit,
AIIMS, New Delhi,India and Optos, Scotland, UK as normal and abnormal samples.
Quantitative analysis and comparison of classifiers is performed by computing
Classification Accuracy, Sensitivity and Specificity. Finally, the Cup-to-Disc Ratio is
computed and the values are compared with ground truths obtained from the Heidelberg Retina
Tomograph and from ophthalmologists for clinical validation, bringing forth a model to gain a
perception of the progressive degeneration of the optic nerve and the impact of glaucoma on the
Retinal Nerve Fiber Layer. This automatic retinal image analysis contributes to the field of
ophthalmology by providing a screening tool for the early detection of glaucoma.
Glaucoma Detection And Segmentation Using Retinal Images
Anju Soman et al. [17], Glaucoma is an eye disease. It is detected from retinal images using
classifiers such as support vector machine, random forest, dual sequential minimal
optimization, naive Bayes and artificial neural networks. Features are obtained from
retinal images using the 2D-DWT. These features are used for classification. Different wavelet
features are obtained from the three filters symlets (sym3), daubechies (db3) and
biorthogonal (bio3.3, bio3.5, bio3.7). These features are used for classifying
and detecting normal and glaucomatous retinal images. The glaucomatous retinal image is
then segmented using thresholding to highlight the affected portion.
AUTOMATIC GLAUCOMA DETECTION BASED ON THE TYPE OF FEATURES
USED
ANINDITA SEPTIARINI et al. [18], The characteristics of glaucoma are high eye pressure,
gradual loss of vision which can cause blindness, and damage to the structure of the retina.
The damage which may occur includes, for example, structural changes of the Optic Nerve
Head (ONH) and of the Retinal Nerve Fiber Layer (RNFL) thickness. The observable parts of the
ONH that serve as features of glaucoma include the disc, cup, neuroretinal rim, parapapillary
atrophy and blood vessels. The structure of the retina can be observed through a retinal
image, where the image is produced by several types of equipment such as funduscopy,
Confocal Scanning Laser Ophthalmoscopy (CSLO), the Heidelberg Retina Tomograph (HRT)
and Optical Coherence Tomography (OCT). This paper discusses automatic
feature extraction techniques in retinal fundus images which can be used for detection or
classification of glaucoma. The techniques are divided into two groups, namely morphological
and non-morphological, based on the type of features used. This grouping aims to
determine what type of feature extraction technique can be used to represent the
glaucoma characteristics.
A Novel Approach towards Automatic Glaucoma Assessment
Darsana et al. [19], This paper proposes automatic glaucoma assessment by combined
analysis of fundus eye images and patient data. Fundus image feature extraction and ocular
parameter evaluation are carried out for image-level analysis. The techniques used for
feature extraction include color model analysis, morphological processing, filtering and
thresholding. The ocular parameters considered are Cup to Disc Ratio (CDR), Rim to Disc
Ratio (RDR), cup to disc area ratio and Inferior Superior Nasal Temporal (ISNT) ratio of
blood vessels in the disc region. The CDR, RDR and cup to disc area ratio based on the optic disc,
cup and rim are calculated using image measuring techniques. Mask generation and
feature segmentation based on an Array-Centroid method is proposed for RDR and ISNT
ratio calculation. Image level classification makes use of ocular parameters and SVM is
used to classify the images as normal or glaucoma suspect. Data level analysis uses patient
data and classification is done with the help of risk calculator. Then a combined glaucoma
risk analysis is performed to label a risk class to the patient. MATLAB software is used for
developing user interface for the proposed approach. Performance analysis is carried out
for image level, data level and combined glaucoma analysis
Retinal Fundus Images with Gabor Filter
Apeksha Padaria et al. [20], Automated glaucoma diagnosis is required to detect the
abnormality so that when early signs are observed, corrective treatments can be prescribed
and vision loss can be prevented. A lot of research has been carried out in detecting
glaucoma. In this paper, a new approach for detection and grading of glaucoma is presented,
following three methods: (1) a Gabor filter is used for detection of the optic disc and cup, (2)
Otsu's method is used for thresholding, and (3) an Artificial Neural Network (ANN) is used for
classification of the disease.
DWT Based Energy Features and ANN Classifier
Nitha Rajandran et al. [21], For identification of disease in human eyes, a clinical
decision support system based on retinal image analysis is used to
extract structural, contextual or texture features. Texture features within images give
accurate and efficient glaucoma classification. To find these texture features, the
energy distribution over wavelet subbands is used. This paper focuses on fourteen features
obtained from daubechies (db3), symlets (sym3) and biorthogonal (bio3.3, bio3.5 and
bio3.7) filters. A novel technique is proposed to extract these energy signatures using the 2-D wavelet
transform, and the signatures are passed to different feature ranking and feature selection
strategies. The energy obtained from the detail coefficients is used to classify normal and
glaucomatous images with high accuracy. Classification is performed using support vector
machines, sequential minimal optimization, random forest, naive Bayes and an artificial
neural network. An accuracy of 94% is observed using the ANN classifier. A performance
graph is shown for all classifiers. Finally, the defective region is found by segmentation and
post-processed by a morphological processing technique for smoothing.
A Literature Survey on Glaucoma Detection Techniques using Fundus Images
Nidhi Shah et al. [22], Glaucoma often goes undetected due to lack of awareness, which can
result in blindness. For patients affected by this, mass screening can be the best
solution, helping to extend symptom-free life. Using hybrid feature
extraction from digital fundus images, a novel low-cost automated glaucoma
diagnosis system is proposed. Higher order spectra (HOS), trace transform (TT), and discrete wavelet
transform (DWT) features are used for automated identification of normal and glaucoma
classes. The extracted features are fed to a support vector machine (SVM) classifier with linear, radial
basis function (RBF) and polynomial (order 1, 2, 3) kernels in order to select the best
kernel for automated decision making.
Review of Image Processing Technique for Glaucoma Detection
Preeti et al. [23], This review paper describes the application of various image processing
techniques for automatic detection of glaucoma. Glaucoma is a neurodegenerative disorder
of the optic nerve, which causes partial loss of vision. A large number of people suffer from
eye diseases in rural and semi-urban areas all over the world. Current diagnosis of retinal
disease relies upon examining the retinal fundus image using image processing. The key image
processing techniques to detect eye diseases include image registration, image fusion, image
segmentation, feature extraction, image enhancement, morphology, pattern matching,
image classification, analysis and statistical measurements.
Automated Glaucoma Detection using Cup to Disc Ratio
Chaitali et al. [24], The aim of this paper is to design and implement an automated system for
glaucoma detection using the cup to disc ratio. Glaucoma is a chronic eye disease affecting the
optic nerve which leads to blindness; it is the second leading cause of blindness. Glaucoma
affects the optic nerve head. Methods for glaucoma detection include measurement of
intraocular pressure, assessment of abnormal visual field and assessment of the damaged optic
nerve head. Calculation of the cup to disc ratio is one of the methods for assessing the damaged
optic nerve head. The optic disc and optic cup are parts of the nerve head. First, the retinal fundus
image is acquired. Then the optic disc is localized using thresholding; this is the preprocessing
step. Accurate localization is a very important marker for many computer-aided diagnosis
techniques. Superpixel segmentation (which uses the simple linear iterative clustering
algorithm) is used to extract correct disc and cup boundaries. The number of superpixels is the
only parameter of superpixel segmentation. Center-surround statistics are used for feature
extraction, because the area around the disc (parapapillary atrophy) looks similar to the disc
in color but differs in texture. An artificial neural network is used as the classifier. Then the
vertical cup diameter to vertical disc diameter ratio is calculated. If it is greater than the
normal value, the image is glaucomatous. The objectives of the project are the correct localization of the
optic disc and finding the cup inside the disc. The resulting images show the disc and cup boundaries and the
cup to disc ratio. Sensitivity and accuracy are the two main parameters for evaluation. The
success rate achieved with this method is 89.18%.
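
The superpixel step described above can be sketched as follows, using the SLIC implementation in scikit-image; this is an assumption about tooling rather than the cited system itself, and the input file name is hypothetical. As noted, the number of superpixels is the only parameter that has to be chosen.

# Minimal sketch: SLIC superpixel segmentation of a fundus image (scikit-image assumed).
from skimage import io, segmentation

image = io.imread("fundus.jpg")        # hypothetical input fundus photograph
n_segments = 200                       # the single parameter of superpixel segmentation
labels = segmentation.slic(image, n_segments=n_segments, compactness=10, start_label=1)

# Overlay the superpixel boundaries on the original image and save the result.
boundaries = segmentation.mark_boundaries(image, labels)
io.imsave("fundus_superpixels.png", (boundaries * 255).astype("uint8"))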

Grewal et al. [25], To develop, train, and test an artificial neural network (ANN) for
differentiating among normal subjects, primary open angle glaucoma (POAG) suspects,
and persons with POAG in Asian-Indian eyes using inputs from clinical parameters,
optical coherence tomography (OCT), visual fields, and the GDx nerve fiber analyzer.
METHODS. One hundred eyes were classified using optic disc examination and perimetry
into normal (n=35), POAG suspects (n=30), and POAG (n=35). The EasyNN-plus simulator
was used to develop an ANN model with inputs including age, sex, myopia, intraocular
pressure (IOP), optic nerve head, and retinal nerve fiber layer (RNFL) parameters on
OCT, Octopus 30-2 full threshold visual field, and GDx parameters. RESULTS. With two
outputs (POAG or normal), specificity was 80% and sensitivity was 93.3%. Ninety percent
of POAG suspects were labeled as abnormal in this analysis. ANN assigned the highest
importance to Smax/Imax RNFL on OCT followed by cup-area (OCT) and other RNFL
parameters (OCT) for two outputs. With three outputs (normal, POAG, and POAG
suspect), ANN gave an overall classification rate of 65%, specificity of 60%, and sensitivity
of 71.4% with a target error rate of the training set at 1%. The parameters for three
outputs, in decreasing order of relative importance, were Savg, vertical cup-disc ratio, cup-
volume, and cup-area on OCT. CONCLUSIONS. An ANN taking varied diagnostic
imaging inputs was able to separate POAG eyes from normal subjects and POAG suspects.
The network had reasonable sensitivity with three outputs; however, it had a tendency to
mislabel POAG suspects as POAG.
Image Processing Techniques for Glaucoma Detection Using the Cup-to-Disc Ratio
Chalinee et al. [26], In this paper, the authors propose a method to calculate the CDR
automatically from non-stereographic retinal fundus photographs taken with a NIDEK
AFC-230, which is a non-mydriatic auto fundus camera. To automatically extract the disc,
two methods making use of an edge detection method and variational level-set method are
proposed in the paper. For the cup, color component analysis and threshold level-set
method are evaluated. To reshape the obtained disc and cup boundary from our methods,
ellipse fitting is applied to the obtained image. A set of 44 retinal images obtained from
Mettapracharak Hospital, Nakhon Pathom Thailand is used to assess the performance of
the determined CDR to the clinical CDR, and it is found that the proposed method
provides 89% accuracy in the determined CDR results.
RETINAL IMAGE ANALYSIS USING MORPHOLOGICAL PROCESS AND
CLUSTERING TECHNIQUE
Radha et al. [27], This paper proposes a method for retinal image analysis through
efficient detection of exudates and recognizes the retina as normal or abnormal. The
contrast of the image is enhanced by the curvelet transform. Morphology operators are then
applied to the enhanced image in order to find the retinal image ridges. A simple
thresholding method along with opening and closing operations identifies the remaining
ridges belonging to vessels. A clustering method is used for effective detection of exudates
in the eye. Experimental results prove that the blood vessels and exudates can be effectively
detected by applying this method to retinal images. Fundus images of the retina were
collected from a reputed eye clinic, and 110 images were trained and tested in order to
extract the exudates and blood vessels. In this system, the Probabilistic Neural
Network (PNN) is used for training and testing the pre-processed images. The results show whether the
retina is normal or abnormal, thereby analyzing the retinal image efficiently. There is 98%
accuracy in the detection of exudates in the retina.
NEURAL NETWORK BASED CLASSIFICATION AND DETECTION OF GLAUCOMA
USING OPTIC DISC AND CUP FEATURES
Abirami et al. [28], This paper proposes optic disc and optic cup segmentation using region
growing and the mean shift algorithm for glaucoma screening. A self-assessment reliability
score is computed to evaluate the quality of the automated optic disc segmentation. For
optic cup segmentation, in addition to the histograms and center-surround statistics, the
location information is also included in the feature space to boost the performance. The
proposed segmentation methods have been evaluated on a database consisting of both healthy
and glaucoma images with optic disc and optic cup boundaries manually marked by
trained professionals. Experimental results are expected to show a better performance.
Glaucoma Detection System using Texture Features
Swapna et al. [29], In this paper, a novel method is proposed, making use of a combined
feature set of the Fractal Dimension calculated from extracted texture features of total fundus
images and Local Binary Pattern (LBP) features, along with an efficient Regression
Neural Network decision-making unit. An accuracy of 88.70% has been obtained by the proposed
system, with a sensitivity of 87.20% and a specificity of 90%.
Diagnosis of Glaucomatous Images Using Wavelet Transforms
Sreeja Mole et al. [30], Texture features extracted from the images are highly effective
for accurate and efficient glaucoma classification. Glaucoma is a disease caused by the
increased internal pressure of the eyes. This paper addresses various image processing
techniques to diagnose glaucoma, and the corresponding images are processed for
feature extraction using five wavelet filters: daubechies (db3), symlets (sym3) and
biorthogonal (bio3.3, bio3.5, bio3.7). Classification is done by Bayes' method using a
similarity measure.
Wavelet Based Energy Features
Iyyanarappan et al. [31], Glaucoma is caused by an increase in the intraocular pressure of
the eye. The intraocular pressure increases due to malfunction or malformation of the
drainage system of the eye. The anterior chamber of the eye is the small space in the front
portion of the eye. A clear liquid flows in and out of the chamber, and this fluid is called
aqueous humor. The increased intraocular pressure within the eye damages the optic nerve,
through which the retina sends light to the brain where it is recognized as images and
makes vision possible [1]. The goal of this paper is to develop an algorithm which
automatically analyzes eye ultrasound images and classifies normal eye images and diseased
glaucoma eye images. The two central issues in automatic glaucoma recognition are feature
extraction from the retinal images and classification based on the chosen extracted features.
Features extracted from the images are categorized as either structural features or texture
features. Here, the discrete wavelet transform (DWT) using daubechies, symlets
and biorthogonal wavelets is used to extract features. Wavelet energy signatures
are calculated from these extracted features [2]. A Probabilistic Neural Network is used to
automatically analyze and classify the images as normal or abnormal eye images. Finally,
this classifier can be used to distinguish between normal and glaucomatous images. An
accuracy of around 95% was observed, which demonstrates the effectiveness of these methods.
Unmixing-Based Feature Extraction Techniques for Hyperspectral Image Classification
Inmaculada Dópido et al. [32], Over the last years, many feature extraction techniques have
been integrated in processing chains intended for hyperspectral image classification. In the
context of supervised classification, it has been shown that the good generalization
capability of machine learning techniques such as the support vector machine (SVM) can
still be enhanced by an adequate extraction of features prior to classification, thus
mitigating the curse of dimensionality introduced by the Hughes effect. Recently, a new strategy
for feature extraction prior to classification based on spectral unmixing concepts has been
introduced. This strategy has shown success when the spatial resolution of the hyperspectral
image is not enough to separate different spectral constituents at a sub-pixel level. Another
advantage over statistical transformations such as principal component analysis (PCA) or
the minimum noise fraction (MNF) is that unmixing-based features are physically
meaningful since they can be interpreted as the abundance of spectral constituents. In turn,
previously developed unmixing-based feature extraction chains do not include spatial
information. In this paper, two new contributions are proposed. First, we develop a new
unmixing-based feature extraction technique which integrates the spatial and the spectral
information using a combination of unsupervised clustering and partial spectral
unmixing. Second, we conduct a quantitative and comparative assessment of unmixing-
based versus traditional (supervised and unsupervised) feature extraction techniques in the
context of hyperspectral image classification. Our study, conducted using a
variety of hyperspectral scenes collected by different instruments, provides practical
observations regarding the utility and type of feature extraction techniques needed for
different classification scenarios.
Melanoma Using Border- and Wavelet-Based Texture Analysis
Rahil Garnavi et al. [33], This paper presents a novel computer-aided diagnosis system for
melanoma. The novelty lies in the optimized selection and integration of features derived
from textural, border-based, and geometrical properties of the melanoma lesion. The
texture features are derived using wavelet decomposition, the border features are derived
from constructing a boundary-series model of the lesion border and analyzing it in the spatial
and frequency domains, and the geometry features are derived from shape indexes. The
optimized selection of features is achieved by using the gain-ratio method, which is shown to be
computationally efficient for the melanoma diagnosis application. Classification is done through
the use of four classifiers, namely, support vector machine, random forest, logistic model
tree, and hidden naive Bayes. The proposed diagnostic system is applied to a set of 289
dermoscopy images (114 malignant, 175 benign) partitioned into train, validation, and test
image sets. The system achieves an accuracy of 91.26% and an area under curve value of
0.937 when 23 features are used. Other important findings include 1) the clear advantage
gained in complementing texture with border and geometry features, compared to using
texture information only, and 2) the higher contribution of texture features than border-based
features in the optimized feature set.
Retinal Abnormality Detection Using Artificial Neural Network
Syeda Fazilath Banu et al. [34], Glaucoma is the diagnosis given to a group of ocular
conditions that contribute to the loss of retinal nerve fibers with a corresponding loss of
vision. Glaucoma is the major cause of blindness in people above the age of 40. The Intra
Ocular Pressure (IOP) increases because of the malfunction of the drainage structure of
the eyes leading to Glaucoma. There are several methods to detect Glaucoma from a
human eye in the initial stages. The proposed work automatically detects Glaucoma disease
in human eye from the fundus database images. The feature extraction within images is
done using Gray Level Difference Method (GLDM) and the classification is done by
trained Artificial Neural Network. In this work, we achieved an accuracy of 86.66% with
sensitivity at 93.33% and specificity at 80%.
Convex hull based ellipse optimization algorithm
Zhuo Zhang et al. [35] proposed a convex hull based ellipse optimization algorithm for a more
accurate detection of the neuro-retinal optic cup. Compared with the state-of-the-art ARGALI
system, the new approach achieves a better CDR calculation, which results in a more
accurate glaucoma diagnosis. The good performance of the new approach leads to a large-scale
clinical evaluation involving 15 thousand patients from Australia and Singapore.

Extract features automatically and robustly in color fundus images


Huiqi Li et al. [36] proposed algorithms to extract features automatically and robustly in color
fundus images. PCA is employed to locate optic disk. A modified ASM is proposed in the shape
detection of optic disk. A fundus coordinate system is established based on the fovea
localization. An approach to detect exudates by the combined region growing and edge detection
is proposed. The success rates of disk localization, disk boundary detection, and fovea
localization are 99%, 94%, and 100% respectively. The sensitivity and specificity of the exudates
detection are 100% and 71% correspondingly. The success of the proposed algorithms can be
attributed to the utilization of the model-based methods. The satisfactory feature detection
makes the automatic analysis system more reliable.
Archana Nandibewoor et al. [37] proposed a method for the early detection of glaucoma. An
algorithm is proposed in such a way that, for any disorder found inside the eye with
respect to color, an immediate action is taken. By keeping a standard color as reference, the
patient's eye color is matched. If the patient's eye color is darker than the reference image, the
result is displayed as positive, and the percentage of glaucoma affected is given. The risk
of losing eyesight is decreased due to the early detection, protecting the person from
visual impairment. An increase in color intensity indicates glaucoma.

Screening technique using super pixel classification


Inoue et al. [38] developed a glaucoma screening technique using super pixel classification on
optic disc and optic cup segmentation. In optic disc segmentation, histograms were utilized to
classify each super pixel as disc or non-disc. The quality of the automated optic disc
segmentation is calculated using a self-assessment reliability score. For optic cup segmentation,
along with the histograms, the location information is also included to boost the performance.
In this proposed segmentation approach a database of 650 images was used with optic disc and
optic cup boundaries which had been manually marked by professionals. The results showed an
overlapping error of 9.5% and 24.1% in disc and cup segmentation, respectively. Lastly the cup
to disc ratio for glaucoma screening was computed.
Bock et al. [39] developed an automated glaucoma classification system that does not at all
depend on the segmentation measurements. They had taken a purely data-driven approach which
is very useful in large-scale screening. This algorithm undertakes a standard pattern recognition
approach with a 2-stage classification step. In this study, various image-based features were
analyzed and integrated to capture glaucomatous structures. There are certain disease-
independent variations, such as size differences, illumination inhomogeneities and vessel
structures, which are removed in the preprocessing phase. This system achieved an 86% success rate on a
data set of 200 real images of healthy and glaucomatous eyes.
Automated glaucoma diagnosis system using a combination of HOS, TT, and DWT
M. Muthu Rama Krishnan et al. [40] proposed a new automated glaucoma diagnosis system using
a combination of HOS, TT, and DWT features extracted from digital fundus images. The system,
which uses an SVM classifier (with a polynomial kernel of order 2), was able to detect glaucoma and
normal classes with an accuracy of 91.67%, sensitivity of 90%, and specificity of 93.33%. This
classification efficiency may be further improved using images with a broader range of
disease progression, better features, and robust data mining algorithms. In addition, they propose
an integrated index, the Glaucoma Risk Index (GRI), which is composed of HOS, TT, and DWT features. The GRI is a single
feature which distinguishes normal and glaucoma fundus images; hence it is a highly effective
diagnostic tool which may help clinicians to make faster decisions during mass screening of
retinal images. The proposed system is cost effective, because it integrates seamlessly with
digital medical and administrative processes and incorporates inexpensive general processing
components. Therefore, the glaucoma detection system can be used in mass screening where
even a modest cost reduction, in the individual diagnosis, amounts to considerable cost savings.
Such cost savings may help to eliminate suffering, because the money can be used to increase the
pervasiveness of glaucoma screening or it can be used anywhere else in the health service, where
it is even more effective.
Grau et al. [41] proposed a new segmentation algorithm based on expectation-
maximization. This algorithm used an anisotropic Markov random field (MRF). In this study,
structure tensor had been used to characterize the predominant structure direction as well as
spatial coherence at each point. This algorithm had been tested on an artificial validation dataset
that is similar to ONH datasets. It has shown significant improvement over an isotropic MRF.
This algorithm provides an accurate, spatially consistent segmentation of this structure.
Joshi et al. [42] proposed an automated OD parameterization technique. An OD segmentation
technique is developed which works by integrating the information of local images around each
point of interest in multidimensional feature space. This technique is quite robust against any
form of variations found in the OD region. They utilized a cup segmentation technique
depending on anatomical information such as vessel bends at the cup boundary, which is quite
vital as considered by glaucoma experts. The bends in a vessel can be easily detected by utilizing
a region of support concept, which helps in selecting the right scale for analysis. In this study, a
multi-stage strategy is used to find a reliable subset of vessel bends called r-bends, which is
followed by a local spline fitting in order to find the desired cup boundary.

Glaucoma detection based on RetCam


Cheng et al. [43] proposed a new technique for glaucoma detection based on RetCam, which is
an imaging modality that captures the image of the iridocorneal angle. The manual grading and
analysis of the RetCam image is quite a time-consuming process, although it gives the expected output.
They developed an intelligent system for analysis of iridocorneal angle images, which can
distinguish between open angle glaucoma and closed angle glaucoma automatically, and which
consumes less time while giving the expected result.
Vermeer et al. [44] proposed a model for detecting changes in images. This methodology
is based on an image set of 23 healthy eyes and simulates colored noise, an incomplete cornea and
masking by the retinal blood vessels. The system uses two further methodologies for
tracking progression, by taking one or two follow-up visits into account; these are then
tested on the simulated images. Both of these methods depend on Student's t-tests,
anisotropic filtering and morphological operations. The images simulated by this technique are
visually pleasing and also show statistical properties similar to those of the real images, which
helps in optimizing the detection methods. The results reveal that tracking progression based on
two follow-up visits marks a great improvement in sensitivity without adversely affecting the
specificity.
Huang et al. [45] developed an automated classifier based on an adaptive neuro-fuzzy inference
system (ANFIS). The Stratus optical coherence tomography (OCT) technique was used for the calculation
of glaucoma variables (optic nerve head topography, retinal nerve fiber layer thickness).
Decision making was performed in two stages: feature extraction using the orthogonal array and
the selected variables were treated as the feeder to adaptive neuro-fuzzy inference system
(ANFIS), which was trained with the back-propagation gradient descent method in combination
with the least squares method. With the Stratus OCT parameters used as input, receiver operative
characteristic (ROC) curves were generated by ANFIS to classify eyes as either glaucomatous or
normal. The mean deviation was -0.67 ± 0.62 dB in the normal group and - 5.87 ± 6.48 dB in the
glaucoma group. The inferior quadrant thickness parameter was used for distinguishing between
normal and glaucomatous eyes.
Hatanaka et al. [46] proposed a technique for detection of glaucoma utilizing the vertical cup-to-
disc ratio. The proposed method measures the cup-to-disc ratio using a vertical profile on
the optic disc. First, the blood vessels of the disc were removed from the image. Then a Canny
edge detection filter was used to detect the edge of the optic disc. The edge of the cup area on
the vertical profile was calculated by a threshold method. Finally, the vertical cup-to-
disc ratio was determined. In this study, they also presented a method for recognizing glaucoma by
calculating the C/D ratio. The method correctly identified 80% of glaucoma cases and 85% of
normal cases.
László G. Nyúl [47] devised a novel automated glaucoma classification technique based
on image features from fundus photographs. First of all, size differences, non-uniform
illumination and blood vessels are eliminated from the images. Then extraction of high-
dimensional feature vectors is done. Finally, compression is done using PCA and the features are
combined before classification with SVMs. The Glaucoma Risk Index (GRI) produced by the
proposed system with a 2-stage SVM classification scheme achieved an 86% success rate. This is
comparable to the performance of medical experts in detecting glaucomatous eyes from such
images, since the GRI is computed automatically from fundus images.
Mary et al. [48] devised a technique for glaucoma detection in which optic disc segmentation is
done by pyramidal decomposition with the help of the Hough transformation, which is guaranteed to
converge although it is very sensitive to noise; this was carried out on the retinal images for better
performance than other algorithms. They also proposed a model approach using discriminant
analysis which has shown an improvement over the rest.
Sobi Nazi et al. [49] proposed a system whose main aim is to identify the cup-to-disc
ratio (CDR). The CDR was calculated by taking the ratio between the areas of the optic cup and
disc. A CDR > 0.3 indicates glaucoma, and a CDR ≤ 0.3 is considered a normal image. They
examine the mean square error (MSE), peak signal to noise ratio (PSNR) and signal to noise ratio
(SNR) to quantify the performance of the pre-processing algorithms. An algorithm for the early
identification of glaucoma by estimating the CDR was developed in this paper. The optic disc
segmentation is done using three methods: edge detection, optimal thresholding and manual
threshold analysis. For the cup, a threshold level-set method is evaluated. The performance of the
various methods was evaluated by comparing the CDR. It was found that the manual threshold
method and the edge detection method provide a better estimation of the CDR. The method has
been applied to nearly forty images and the CDR was correctly identified.
Chandrika et al. [50] adopted a technique for automated glaucoma diagnosis. In this technique,
optic disc identification is performed on retinal images to calculate the CDR: thresholding is
performed first, then image segmentation is performed using k-means clustering and the Gabor wavelet
transform. Then optic disc and cup boundary smoothing is performed using different
morphological features. If the CDR exceeds 0.3, it indicates a high likelihood of glaucoma for the tested
patient.
Aquino et al. [51] presented a new Optic disc (OD) detection technique which is an imperative
step for automated diagnosis of Glaucoma. In this paper they developed a new template-based
approach for differentiating the OD from digital retinal images. For circular OD boundary
approximation they use morphological and edge detection techniques followed by the Circular
Hough Transform. A pixel located within the OD is taken as initial information. Glaucoma is
identified by recognizing the changes in depth, shape and color that it produces in the OD. Thus,
this technique used segmentation as well as analysis to detect Glaucoma automatically.
Kumar et al. [52] proposed an algorithm for glaucoma detection. In this algorithm, active
contours are used for segmentation and feature extraction of an image. The initial point of
interested feature is determined accordingly. After that masking is done by cropping the region
of interest. Finally calculation of the open angle of the anterior chamber is conducted. If the open
angle is found to be greater than the threshold angle, the eye is diagnosed as a normal eye. Otherwise,
it is diagnosed as a diseased eye.
Pachiyappan et al. [53] proposed a technique for Glaucoma diagnosis utilizing fundus images of
the eye and optical coherence tomography (OCT). The Retinal Nerve Fiber Layer (RNFL)
can generally be delimited by two boundaries: the anterior boundary (top layer of the RNFL) and the
posterior boundary (bottom layer of the RNFL); the analysis was also based on the distance between
the two boundaries. Glaucomatous and non-glaucomatous classification was done using the thickness of
the nerve fiber layer, which is nearly 105 μm. This approach provided optic disk detection with
97.75% accuracy.
Preeti Kailas Suryawanshi [54] proposed a technique which extracted ROI from retinal images.
Optic disc segmentation is performed on the extracted ROI in order to detect the disc boundary
using optimal color channel. Optic disc boundary smoothing is performed using ellipse fitting
for capturing near perfect shape of the disc. Optic cup segmentation is also performed. After the
cup boundary detection, ellipse fitting is again employed to eliminate some of the cup
boundary’s sudden changes in curvature. Ellipse fitting becomes especially useful when portions
of the blood vessels in the neuro-retinal rim are incorporated within the detected boundary. The
CDR is automatically obtained based on the heights of the detected cup and disc. If the cup to disc
ratio exceeds 0.3, it indicates an abnormal condition, that is, the presence of glaucoma.
K. Narasimhan et al. [55] proposed a new methodology for the detection of glaucoma based
on two imperative features, the CDR and the ISNT ratio. K-means clustering is recursively applied to the
ROI to localize the optic disc and optic cup region. An elliptical fitting technique is used to
calculate the CDR values. The blood vessels inside the optic disc are extracted by local entropy
thresholding, and four different masks are used to determine the ISNT ratio. The CDR and ISNT
ratio are calculated. Then they evaluated the performance of the proposed algorithm with three
different classifiers. Experiments suggest that a maximum classification rate of 95% for
glaucoma can be achieved when using the SVM classifier.
Neelapala Anil Kumar et al. [56] proposed a technique for automated detection of glaucoma in
the eye using angle opening distance (AOD 500) calculations. This technique is a three-step methodology. In the
first step, for effective anterior chamber segmentation, features of the ultrasound
images, i.e., contrast, resolution and clarity, are used. In the second step, classification is performed to
eliminate unwanted images. Then the anterior chamber region is cropped and the reference axis
is located. The third step focuses on the anterior chamber angle to find out whether the eye is
affected by glaucoma or not. This algorithm is able to correctly diagnose glaucoma in
97% of the cases.
Urja Zade et al. [57] proposed a technique for glaucoma assessment which allows derivation of
various geometric parameters of the OD, together with an incremental cup segmentation method using 3D
interpolation. The presented solution for eye disease assessment takes the form of two
segmentation strategies, for the OD and the cup. A novel active contour model is presented to obtain
robust OD segmentation. This has been achieved by enhancing the CV model by including
image information from the support domain around each contour point. An attractive aspect of the
extension is the strengthening of the region-based active contour model by the integration of data
from multiple image feature channels. The obtained results show that the technique captures the OD
boundary in a unified manner for both normal and difficult cases without imposing any shape
constraint on the segmentation result, in contrast to earlier strategies. In cup segmentation,
it is observed that boundary estimation errors are mainly in regions with no depth cues,
which is in keeping with the high inter-observer variability in these regions.
S. Kavitha et al. [58] proposed a K-means clustering technique which focuses on the pallor
information at each pixel, thereby enabling rapid clustering, and achieves very good accuracy in
detecting the optic cup. It is simple and easy to implement an unsupervised method rather than a
supervised approach. Hill climbing technique and k means clustering provides a promising step
for the accurate detection of optic cup boundary. Vertical CDR or superior or inferior rim area
parameters may be more specific in identifying the Neuroretinal rim loss along the optic disc
compared to an overall cup-to-disc diameter ratio. Textural features are considered in this work
in order to effectively detect glaucoma for the pathological subjects. A hybrid method involving
textural features along with CDR, Neuroretinal Rim area calculation provides an efficient means
to detect glaucoma. ANFIS achieves good classification accuracy with a smaller convergence
time compared to Neural network classifiers. Performance of the proposed approach is
comparable to human medical experts in detecting glaucoma. Proposed system combines feature
extraction techniques with segmentation techniques for the diagnosis of the image as normal and
abnormal. The method of considering the neuroretinal rim width for a given disc diameter, together with
the textural features, can be used as an additional feature for distinguishing between normal eyes and
glaucoma or glaucoma suspects. Progressive loss of neuroretinal rim tissue gives an accurate
result for detecting early-stage glaucoma with high sensitivity and specificity.
Asma Mansour et al. [59] proposed a method for automatic detection of the OD with two methodologies:
one to locate the OD based on PCA and another to segment its boundary based on RSF.
Since early detection of exudates is very important in the diagnosis of ocular diseases, they also
proposed an approach which combines coarse and fine segmentation to obtain the final detection of
this kind of abnormality.
Sheeba O. et al. [60] proposed a method for training and simulating artificial neural network to
detect the presence of glaucoma and classify the disease as mild, severe and normal. The various
parameters are easily extracted using Matlab and compared with standard values using neural
network. The artificial neural network makes the Glaucoma detection accurate and adaptive. The
advantage of the system is simplicity of operation. This software is intended to help doctors in
their decision-making process. To make it more user friendly, a graphical user interface is also
provided, which makes the handling of this tool very simple.
Darsana S et al. [61] proposed a technique in which they analyzed three sections. In the first
section the performance of image-based classification is analyzed. A total of 70 images are tested
using the trained SVM which include 25 glaucomatous image and 45 normal images. Out of 70
images 67 images are classified correctly. The performance analysis is done by calculating
sensitivity, specificity and accuracy. The results obtained are 97.7% sensitivity, 92% specificity
and 95.7% accuracy. The performance of risk calculator is analyzed in the second section. Risk
calculator effectively calculates the score for every set of data inputs assuring high accuracy.
Finally the combined glaucoma risk analysis is analyzed in the third section. The classification
accuracy of this stage is a clear reflection of above stages. The risk labeled to each patient at this
classification level will be a valuable reference for the clinicians for their further assessment.
S. Kavitha et al. [62] proposed algorithms for the identification of glaucoma by estimating the CDR.
ROI-based segmentation is proposed to localize the optic disk, which is estimated
exactly by using a contour method when compared with other methods, even when the image
has low contrast. The optic cup was segmented using the component analysis and the threshold
methods separately. The performance of the various methods was evaluated using the proximity of
the calculated CDR to the clinical CDR. It was found that ROI, combined with the component
analysis method, provides the better estimation of the CDR. The C/D ratio does not take into
consideration the diameter of the disc and hence it is prone to give false positive and false
negative impressions. The proposed work focuses on how much the Neuroretinal rim tissue is
present. By categorizing the discs as small, medium or large, the expectation of rim thickness can
be adjusted. This reduces the misclassification based on the disc size. It also takes into
consideration the focal loss of rim tissue. Neuroretinal rim area evaluation may increase the
value in Clinic practice for automatic screening of early diagnosis of Glaucoma. The results
presented in this paper indicate that the features are clinically significant in the detection of
glaucoma.
S. Sekhar et al. [63] proposed a technique that was tested on the DRIVE database of retinal
images, which consists of 40 fundus images of dimensions 768×584, captured by a Canon CR5 non-
mydriatic 3CCD camera with a 45° field of view (FOV). These images contain both normal
(healthy) and abnormal retinas. In this study, 36 of these images were used (4 images were
excluded for not having visually detectable optic disks). The performance of the optic disk
localization was evaluated by comparing the determined optic disk location with that marked by
an expert. The proposed method is capable of localizing the optic disk correctly in 34 of these
images (a success rate of 94.4%), and it is able to detect the fovea in all of these 34 images.
K. Kavitha et al. [64] proposed glaucoma screening based on optic disc and optic cup segmentation,
evaluated by the area under curve (AUC) of the ROC curves for various cup segmentation methods.
The AUC is significantly larger than that of the IOP, threshold, r-bend, ASM, and regression
methods. The results show smaller errors in CDR measurement and a higher AUC in
glaucoma screening by the proposed method. The proposed disc and cup segmentation methods
achieve an AUC of 0.800, which is 0.039 lower than the AUC of 0.839 of the manual CDR computed
from the manual disc and manual cup. In the results for the SCES dataset, the proposed method
achieves an AUC of 0.822 in screening the SCES data, which is much higher than the 0.660 obtained
by the currently used IOP measurement; according to discussions with clinicians, the accuracy is
good enough for large-scale glaucoma screening.
Chalinee Burana-Anusorn et al. [65] proposed a method to calculate the CDR
automatically from fundus images. The optic disc is extracted using an edge detection approach
and a variational level-set approach individually. The optic cup is then segmented using a color
component analysis method and a threshold level-set method. After obtaining the contours, an
ellipse fitting step is introduced to smooth the obtained results. The performance of this
approach is evaluated using the proximity of the calculated CDR to the manually graded CDR.
The results indicate that the approach provides 89% accuracy in glaucoma analysis. As a result,
this study has good prospects for automated screening systems for the early detection of
glaucoma.

Chapter 3 Discrete Wavelet Transform

Introduction
Glaucoma is a leading cause of blindness in the world. It occurs due to damage of the eye's optic nerve.
The damage to the optic nerve is due to the increased pressure inside the eye. The optic nerve carries
information from the retina to the brain. The intraocular pressure arises from malfunction of the drainage
system of the eye. A clear liquid must flow in and out of the eye; this liquid is called aqueous
humour, and if it cannot drain properly, immense pressure arises inside the eye. This pressure damages the
optic nerve. In the proposed method, both structural and energy features are considered and then analyzed to classify an image as
glaucomatous. Energy distribution over wavelet sub-bands is applied to find these important texture energy features.
Finally, the extracted energy features are applied to a Multilayer Perceptron (MLP) with Back Propagation (BP) neural network for
effective classification, by considering normal subjects' extracted energy features. Naive Bayes classifies the images in the database
with an accuracy of 89.6%, while the MLP-BP Artificial Neural Network (ANN) algorithm classifies the images in the database with an
accuracy of 97.6%. The goal of the proposed work is to develop an algorithm that automatically classifies
normal eye images and diseased glaucoma eye images. Features extracted from retinal images are used
for classification. Here the discrete wavelet transform using daubechies, symlets and biorthogonal wavelets
is used to extract features. Wavelet energy signatures are calculated from these extracted
features. SVM, SMO, Random forest, Naive Bayes and ANN classifiers are used to classify images as
normal or abnormal eye images. It is then determined which classifier gives the best accuracy for identifying
glaucomatous or normal retinal images. Finally, segmentation is done on each
glaucomatous retinal image.

I. MATERIAL USED
The retinal images were collected from a web database, which manually curated the images based on the quality and usability of the samples. The images are grouped into a set of normal retina images and a set of images diagnosed with glaucoma. All images were taken at a resolution of 560x720 pixels and stored in JPEG format. The dataset contains 30 fundus retinal images: 15 normal and 15 glaucomatous images collected from the database. A fundus camera, combining a microscope and a light source, is used to capture the retinal images used to diagnose diseases.

Fig.1 (a) Normal Retina Image (b) Glaucomatous Image

The optic nerve is damaged by the elevation of the intraocular pressure inside the eye, causing irreversible damage to the optic nerve and to the retina.

III. METHODOLOGY

Retinal image classification can be based on the following methods; different techniques are used to classify the image and then predict whether it is a glaucomatous or a normal retinal image.

A. Image Acquisition B. Feature Extraction C. Image Classification

A. Image Acquisition

The first stage in fundal digital image analysis is image capture. The image is normally acquired by a fundal camera (mydriatic or non-mydriatic) that has a back-mounted digital camera. Digital cameras use a direct digital image sensor, either a Charge-Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor Active Pixel Sensor (CMOS-APS) (Gonzalez and Woods, 1992) [4]. S. Y. Lee (2004) described an acquisition setup in which RNFL photographs were acquired by a fundus camera system (CF-60UD, Canon Inc., Tokyo) integrated with a digital camera (D60, Canon Inc.). A green filter was used to enhance the RNFL on the fundus photograph during acquisition. Images were stored in 560x720 pixel JPEG format for further analysis [5][6].

B. Feature Extraction
In this work, the effectiveness of different wavelet filters is examined quantitatively on a set of curated glaucomatous images by employing the standard 2-D DWT [7]. In this approach, the discrete wavelet transform (DWT) using three wavelet filters, the Daubechies (db3), the Symlets (sym3), and the biorthogonal (bio3.3) filters, is used to extract features and to analyze discontinuities and abrupt changes contained in signals. The DWT can be performed by iteratively filtering a signal or image through low-pass and high-pass filters and subsequently downsampling the filtered data by two (Polikar, 1999) [8]. This process decomposes the input image into a series of subband images. The wavelet features of an image are obtained through wavelet decomposition. Here the wavelet decomposition is done using the 2-D discrete wavelet transform, which captures both the spatial and frequency information of a signal. The DWT analyzes the image by decomposing it into a coarse approximation via low-pass filtering and into detail information via high-pass filtering [9,10]. Such decomposition is performed recursively on the low-pass approximation coefficients obtained at each level, until the necessary number of iterations is reached. Let each image be represented as a p × q gray-scale matrix I[i,j], where each element of the matrix represents the grayscale intensity of one pixel of the image. Each non-border pixel has eight adjacent neighboring pixel intensities, and these eight neighbors can be used to traverse the matrix.

The resultant 2-D DWT coefficients are the same irrespective of whether the matrix is traversed right-to-left or left-to-right. Hence, it is sufficient to consider the decomposition directions corresponding to the 0° (horizontal, Dh), 90° (vertical, Dv) and diagonal (Dd) orientations. In the decomposition structure for one level, I is the image, g[n] and h[n] are the low-pass and high-pass filters, respectively, and A is the approximation coefficient. As is evident from Figure.1, the first level of decomposition results in four coefficient matrices, namely A1, Dh1, Dv1 and Dd1 [11][12].
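As a rough illustration of this step (a minimal sketch, not the author's exact code), the one-level decomposition can be written in MATLAB, assuming the Wavelet Toolbox; the file name retina.jpg is a placeholder:

% Minimal sketch: one-level 2-D DWT decomposition of a fundus image.
% Requires the Wavelet Toolbox; 'retina.jpg' is a placeholder file name.
I = imread('retina.jpg');
if size(I,3) == 3
    I = rgb2gray(I);          % work on the grayscale intensity image
end
I = im2double(I);
% A1 = approximation (LL); Dh1, Dv1, Dd1 = horizontal, vertical, diagonal details
[A1, Dh1, Dv1, Dd1] = dwt2(I, 'db3');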

Figure.1 DWT subband decomposition: the input retinal image I is filtered along the rows and columns with the low-pass filter g(n) and the high-pass filter h(n) and downsampled by two (2DS1, 1DS2), producing the approximation subband A1 and the detail subbands Dh1, Dv1 and Dd1.

For instance, the result of the filtering process is four subbands: the approximation subband (LL), the horizontal detail subband (LH), the vertical detail subband (HL), and the diagonal detail subband (HH) [13]. The LL, LH, HL and HH subbands contain, respectively, low frequencies in both directions; low frequencies in the horizontal direction and high frequencies in the vertical direction; high frequencies in the horizontal direction and low frequencies in the vertical direction; and high frequencies in both directions [14]. Then the obtained approximation image (LL) is decomposed again to obtain the next level of detail and approximation images. This decomposition process can be represented by the common square scheme of Fig.4, depicted below.
Fig.4 One-level 2D-DWT decomposition of an image into the LL1, HL1, LH1 and HH1 subbands, shown for the LL subband of the R, G and B planes.

Extract the Energy Features

The 2-D DWT is used in order to extract the energy signatures [15]. The DWT is applied with three different filter families, namely the Daubechies (db3), Symlets (sym3) and biorthogonal (bio3.3, bio3.5, bio3.7) filters, and the wavelet coefficients are obtained with the help of these filters. Since the number of elements in these coefficient matrices is high, and only a single number is needed as a representative feature, averaging methods are employed to determine such single-valued features. The definitions of the three features determined from the DWT coefficients are given below. Equations (1) and (2) determine the averages of the corresponding intensity values, whereas (3) is an average of the energy of the intensity values. Thus the wavelet coefficients, subjected to average and energy calculations, yield the extracted features.

\[ \mathrm{Average}_{Dh1} = \frac{1}{p \times q} \sum_{x=1}^{p} \sum_{y=1}^{q} \left| Dh1(x,y) \right| \qquad (1) \]

\[ \mathrm{Average}_{Dv1} = \frac{1}{p \times q} \sum_{x=1}^{p} \sum_{y=1}^{q} \left| Dv1(x,y) \right| \qquad (2) \]

\[ \mathrm{Energy} = \frac{1}{p^{2} \times q^{2}} \sum_{x=1}^{p} \sum_{y=1}^{q} \left( Dv1(x,y) \right)^{2} \qquad (3) \]

The 2-D DWT thus hierarchically decomposes a digital image into a series of successively lower-resolution approximation images and their associated detail images (Van de Wouwer, 1999).
Energy signatures provide a good indication of the total energy contained at specific spatial frequency
levels and orientations [15].The energy-based approach assumes that different texture patterns have
different energy distribution in the space-frequency domain. The energy obtained from the detailed
coefficients can be used to distinguish between normal and glaucomatous images with very high accuracy.
Hence these energy features are highly discriminatory. This approach is very appealing due to its low computational complexity, involving mainly the calculation of the first- and second-order moments of the coefficients.
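A minimal MATLAB sketch of these feature computations is given below; it assumes the detail coefficient matrices Dh1 and Dv1 from a one-level dwt2 decomposition (as in the earlier sketch) and follows equations (1)-(3):

% Minimal sketch: average and energy features of the detail subbands,
% following equations (1)-(3). Dh1 and Dv1 come from a one-level dwt2 call.
[p, q] = size(Dh1);
averageDh1 = sum(abs(Dh1(:))) / (p*q);       % equation (1)
averageDv1 = sum(abs(Dv1(:))) / (p*q);       % equation (2)
energyDv1  = sum(Dv1(:).^2)   / (p^2 * q^2); % equation (3)
features   = [averageDh1, averageDv1, energyDv1];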

C. Image Classification

Classifiers: The ability of each image-based feature extraction method to separate glaucoma and non-
glaucoma cases is quantified by the results of ANN classifiers. Classifiers achieve good results if their
underlying separation model fits well to the distribution of the sample data. As the underlying data distribution
is unknown, we tested different classifiers.
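As an illustrative training setup (not the author's exact implementation), the extracted energy features can be fed to a Naive Bayes classifier and an MLP in MATLAB, assuming the Statistics and Machine Learning Toolbox and the Deep Learning Toolbox; the feature matrix X and label vector y are placeholders:

% Minimal sketch: Naive Bayes and MLP-BP classification of the energy features.
% X is an N-by-F feature matrix, y an N-by-1 label vector (0 = normal,
% 1 = glaucomatous); both are placeholders for the database features.
nbModel = fitcnb(X, y);                       % Naive Bayes classifier
nbPred  = predict(nbModel, X);                % class predicted for each image

net = patternnet(10);                         % MLP with one hidden layer of 10 neurons
net = train(net, X', full(ind2vec(y' + 1)));  % trained with backpropagation
scores = net(X');                             % class scores (2-by-N)
[~, mlpPred] = max(scores, [], 1);
mlpPred = mlpPred' - 1;                       % back to 0/1 labels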

V. EXPERIMENT RESULTS
The following section provides a detailed description of the results obtained from our, feature extraction, and
Classification.

Energy Feature Extraction Based on DWT

The energy-based approach assumes that different texture patterns have different energy distributions in the space-frequency domain. This approach is very appealing due to its low computational complexity, involving mainly the calculation of the first- and second-order moments of the transform coefficients. Table I provides a snapshot of the results obtained from the feature extraction described in the methodology section.

Table I. Energy features extracted using the Discrete Wavelet Transform. Each row corresponds to one retinal image in the database; the columns give the average of the horizontal detail coefficients (Dh1) and the energy of the horizontal (cH), vertical (cV) and diagonal (cD) detail coefficients for the db12, sym12, rbio3.7, rbio3.9 and rbio4.4 wavelet filters. The extracted values lie roughly between 1e-05 and 3.5e-03.

Here one-level wavelet decomposition is performed, and the wavelet filters used were the Daubechies (db3), the Symlets (sym3), and the biorthogonal (bio3.3) filters. The extracted features were used for classification. Fig.6 shows the energy feature extraction using the 2D-DWT.

The implementation is carried out using MATLAB. A graphical user interface (GUI) is created and shown in Figure.9. The image to be analyzed is selected through the GUI and pre-processed using Z-score normalization. The pre-processed image is applied to the Daubechies, Symlets and biorthogonal wavelet filters. The feature extraction option is then selected to obtain the list of energy levels extracted from the image. In the GUI there are two tables: one corresponds to the list of energy levels of the input image, and the second corresponds to the list of selected energy levels of all the images in the database.

Table.1 shows the enlarged version of the wavelet subbands in the GUI. The rows of the table are the individual energy levels of each retinal image in the database, and the columns show the energy levels extracted from the database images for each wavelet filter. The first two columns correspond to the Daubechies filter, the third and fourth columns to the Symlets (sym12) filter, the fifth, sixth and seventh columns to the biorthogonal filter (bio3.7), the eighth, ninth and tenth columns to the biorthogonal filter (bio3.9), and the eleventh, twelfth and thirteenth columns to the biorthogonal filter (bio4.4). Using these energy levels, the Naive Bayes and MLP-BP ANN algorithms classify the images as normal or abnormal. Figure.10 shows a normal case and displays the tables corresponding to the Naive Bayes and MLP-BP ANN algorithms. Naive Bayes classifies the images in the database with an accuracy of 89.6%, and the MLP-BP ANN algorithm with an accuracy of 97.6%. The GUI also shows two popup windows containing the reports generated by the Naive Bayes and MLP-BP ANN algorithms for the selected input image. Figure.11 shows an abnormal (glaucomatous) case and displays the corresponding tables.
Fig.6 Energy feature extraction using 2D-DWT

Fig.4. GUI of Wavelets, energy and classification of glaucoma


Fig. classification of database images

The following section provides the results obtained from feature extraction, classification, performance measurement and segmentation.

E. Classification Results

The extracted features are used for training the classifiers, namely SMO, Random Forest, Naive Bayes, SVM and ANN. Fig.5 and Fig.6 show the classification results, in which the retinal images are classified as glaucoma detected or glaucoma not detected.

Fig.5 shows the GUI and displays the tables corresponding to the Naive Bayes and MLP-BP ANN algorithms. Naive Bayes classifies the images in the database with an accuracy of 89.6%, and the MLP-BP ANN algorithm with an accuracy of 97.6%. This GUI also shows two popup windows containing the reports generated by the Naive Bayes and MLP-BP ANN algorithms for the selected input image. Fig.6 shows the abnormal (glaucomatous) condition.
Fig.5.Wavelets, energy and classification of glaucoma for normal

Fig.6.Wavelets, energy and classification of glaucoma for abnormal

Performance Results

In this approach, we have considered 30 retinal images of both normal and glaucomatous eyes. Of these, 15 were normal retinal images and the remaining 15 were glaucomatous images. The results are presented in Table 1.

CONCLUSION AND FUTURE WORK


For the glaucomatous image, the energy levels extracted using the wavelet subbands of the Daubechies (db4), Symlets (sym4) and biorthogonal filters (bio3.7, bio4.2 and bio4.7) give a clear indication of the difference in energy levels compared with those of a normal retina image. The Naive Bayes and MLP-BP ANN algorithms are trained with normal retina images and classify the input image as normal or abnormal by considering the extracted energy levels. Naive Bayes classifies the images in the database with an accuracy of 89.6%, and the MLP-BP ANN algorithm with an accuracy of 97.6%. The proposed system exhibits better accuracy compared with existing glaucoma classification systems. The system is cost effective and can be readily used in hospitals; it reduces the doctor's burden and overcomes human error. In future, the system can be extended with artificial intelligence to classify other abnormalities of the eye as well. It can also be designed to generate a report by itself, with full information about the patient, so that it can be used for telemedicine purposes.

Chapter 4 Cup to Disc Ratio with Threshold Value


PROPOSED METHOD using morphological operations


The method proposed here is mainly based on mathematical morphology. Mathematical morphology is a nonlinear image processing methodology based on minimum and maximum operations whose aim is to extract the relevant structures of an image [4,10,15]. Dilation and erosion expand the light or dark regions, respectively, according to the size and shape of the structuring element. The morphological operators that complement these are geodesic transformations; using geodesic reconstruction, a close-hole operator can be defined, where a hole in a grey-scale image is any set of connected points surrounded by connected components of value strictly greater than the hole values [6]. This chapter proposes a new method which improves the end results of glaucoma detection. The method makes use of an Artificial Neural Network and the Cup to Disc Ratio to process the fundus image.

Flow chart of the proposed method: input image → RGB2NTSC and RGB2GRAY conversion → median filter → morphological operations → cup area and disc area → cup to disc ratio → CDR > 0.3? → ANN → glaucoma detected.
The image database consists of thirty retinal images stored in JPEG format (.jpg). The first step is to read the image and convert it to the YIQ colour space and to a grayscale image [1,2,14,15]. RGB2NTSC(rgbmap) converts the m-by-3 RGB values in rgbmap to the NTSC colour space; the resulting YIQ map is an m-by-3 matrix that contains the NTSC luminance (Y) and chrominance (I and Q) colour components as columns, equivalent to the colours in the RGB colour map [1,16]. The input image is also converted to a grayscale image: RGB2GRAY(RGB) converts the truecolor image RGB to a grayscale intensity image by eliminating the hue and saturation information while retaining the luminance [4,9]. The converted image is applied to a median filter. The median filter is a non-linear filter used to reduce the effect of noise without blurring sharp edges; it first arranges the neighbourhood pixel values in ascending or descending order and then computes their median. Morphological operations play a key role in applications such as machine vision and automatic object detection, and are used to understand the structure or form of an image [3], which usually means identifying objects or boundaries within the image. The filtered image is applied to the morphological operations to extract the cup and disc areas [14]. The cup to disc ratio is then calculated and compared with the normal value: for a normal eye the cup to disc ratio is about 0.3, and if the cup to disc ratio of the input image exceeds 0.3 the image is considered to have glaucoma.
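As a rough sketch of this pipeline (an illustration under stated assumptions, not the exact implementation), the steps can be written in MATLAB with the Image Processing Toolbox; the file name, filter window, structuring-element radius and threshold values are placeholders:

% Minimal sketch of the preprocessing and CDR pipeline described above.
% 'fundus25.jpg', the 3x3 window, the disk radius and the thresholds are placeholders.
rgb  = imread('fundus25.jpg');
yiq  = rgb2ntsc(rgb);                  % RGB to NTSC (YIQ) colour space
gray = rgb2gray(rgb);                  % grayscale intensity image

filtered = medfilt2(gray, [3 3]);      % median filter to suppress noise

se       = strel('disk', 10);          % morphological smoothing before segmentation
smoothed = imclose(imopen(filtered, se), se);

discMask = imbinarize(smoothed, 0.65); % rough disc mask (illustrative threshold)
cupMask  = imbinarize(smoothed, 0.80); % rough cup mask  (illustrative threshold)

CDR = nnz(cupMask) / nnz(discMask);    % ratio of white-pixel areas
if CDR > 0.3
    disp('Glaucoma suspected (CDR > 0.3)');
else
    disp('Normal (CDR <= 0.3)');
end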

Fig.2. Block diagram of the proposed glaucoma detection method: image database → filtering and morphological operations → cup to disc ratio calculation → segmentation process → glaucoma detected.

Cup to Disc Ratio

The 30 images of both normal and abnormal eyes are collected from hospitals, from different patients, and stored in a database. One of the images is taken from the database and subjected to glaucoma detection. The input image is fed to the median filter to remove noise and to smooth the edges of the image. After filtering, the image is applied to the morphological operations to identify the radii of the cup and disc [14,15]. The Cup-to-Disc Ratio (CDR) computation consists of cropping the gray optic disc, segmentation, and calculation of the cup to disc ratio [2]. From the given image the required area of the eye is cropped, and the selected area is segmented. The CDR is the ratio of the size of the optic cup to that of the optic disc. It is one of the clinical indicators of glaucoma and is determined manually by trained ophthalmologists, which limits its potential in mass screening for early detection. It is computed as

CDR = Optic Cup Area / Optic Disc Area

The CDR is an important indicator of glaucoma. The optic cup and disc are first segmented from each retinal fundus image to calculate the CDR [2,15]. When the CDR is greater than a threshold, the eye is glaucomatous; otherwise it is healthy. Usually, for a healthy eye the ratio is 0.2-0.3.
Fig.3. Flow chart of the Cup to Disc Ratio algorithm: database optic nerve image → cropped gray image → segmented optic disc and segmented optic cup → calculate CDR.

The optic disc is the entry and exit point of the blood vessels of the retina [2,14,15]. A normal optic disc is orange-pink in colour; a pale disc, which varies in colour from pale pink or orange to white, is an indication of a disease condition. The central depression of variable size in the disc is called the optic cup and is usually white in colour [5,11]. In order to extract only this boundary, the edge image is classified into three groups based on the distance from the centre of the optic disc, which is achieved by performing K-means clustering on the edge image. K-means clustering is a method of cluster analysis which aims to partition n observations into k clusters. The group that contains only the edge of the disc boundary is then selected, and noise can be rejected. The vertical cup-to-disc ratio (CDR) is one of the most important risk factors in the diagnosis of glaucoma [9]. It is defined as the ratio of the vertical cup diameter to the vertical disc diameter. The optic disc is the location where the optic nerve connects to the retina; in a typical 2-D fundus image it is an elliptic region brighter than its surroundings. The disc has a deep excavation in the centre called the optic cup, a cup-like area devoid of neural retinal tissue and normally white in colour. The optic cup (OC) of a glaucomatous eye tends to grow over time due to persistently increased intraocular pressure. As the OC grows, the neuroretinal rim located between the edge of the optic disc (OD) and the OC, which contains the optic nerve fibres, becomes smaller in area. If the neuroretinal rim is too thin, vision deteriorates. Thus, quantitative analysis of the optic disc cupping can be used to evaluate the progression of glaucoma [10]. As more and more optic nerve fibres die, the OC becomes larger with respect to the OD, which corresponds to an increased CDR value. For a normal subject, the CDR value is typically around 0.2 to 0.3; subjects with a CDR value greater than 0.6 or 0.7 are suspected of having glaucoma, and further testing is often needed to make the diagnosis [11].
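A minimal sketch of this clustering step, assuming the Statistics and Machine Learning Toolbox; edgeBW (the binary edge image) and the disc centre (cx, cy) are placeholder variables:

% Minimal sketch: group optic-disc edge pixels into three clusters by their
% distance from the disc centre, so that the cluster containing the disc
% boundary can be kept and noise rejected. edgeBW, cx and cy are placeholders.
[y, x] = find(edgeBW);             % coordinates of the edge pixels
d      = hypot(x - cx, y - cy);    % distance of each edge pixel from the centre
idx    = kmeans(d, 3);             % K-means clustering into k = 3 groups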

3. Extraction of Optic Disk and Cup

To suspect glaucoma, evaluation of the CDR is one of the key elements, and it is calculated from the extracted optic disc and cup. First, the original colour fundus image is cropped and resized. In the next step, blood vessels are removed from the image. For this, morphological operations such as dilation and erosion are performed, as defined in equations (1) and (2). Dilation causes objects to grow in size by adding pixels to the boundaries of the objects in the input image. The image is dilated using the structuring element "DISK"; this dilation fills all internal gaps and lightens the blood vessels, but it also increases the size of the optic disc, which would affect the CDR. Therefore, after dilation the image is eroded with the same structuring element and size. Erosion restores the boundary of the object. The result of this operation is a smooth image without any blood vessels.

The dilation of A by B is defined by

\[ A \oplus B = \{\, z \mid (\hat{B})_{z} \cap A \neq \emptyset \,\} \qquad (1) \]

and the erosion of A by B by

\[ A \ominus B = \{\, z \mid (B)_{z} \subseteq A \,\} \qquad (2) \]

where A is the binary image and B is the structuring element.

After that, a number of images were analyzed and it was concluded that the optic disc has a better contrast in the V plane extracted from the HSV image. The mean value of the V-plane image was calculated and set as the threshold for converting it to a binary image. The unwanted objects in the resulting binary image were labelled and removed by applying another morphological operation which removes from a binary image all connected components (objects) that have fewer than a given number of pixels. This helps to remove all the unwanted objects except the optic disc. Finally, a Gaussian filter is applied to the resulting image to smooth the boundaries, as shown in Fig.3.

Fig.3 Extraction of the optic disc: (a) morphological operations, (b) V-plane image, (c) binary image, (d) Gaussian-filtered image

For the extraction of the cup, the green plane is extracted from the eroded image; the cup has a much brighter contrast compared with other regions of the fundus image. In the next step, the green plane is converted into a binary image using a global threshold, which chooses the threshold that minimizes the intra-class variance of the black and white pixels. After extracting the binary optic cup, the morphological operation for the removal of small objects is applied, as for the optic disc but with a smaller pixel limit since the cup is small. To smooth the boundaries of the optic cup, a Gaussian filter is applied to the resulting binary image of the optic cup.
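A rough MATLAB sketch of the disc and cup extraction just described (assuming the Image Processing Toolbox; the structuring-element radius, pixel-count limits and Gaussian sigma are illustrative assumptions):

% Minimal sketch of the optic-disc and optic-cup extraction described above.
% The radius, pixel-count limits and sigma are placeholders.
rgb = imread('fundus.jpg');
se  = strel('disk', 8);

% Optic disc: V plane of the HSV image, vessels suppressed by dilation then erosion
vPlane = rgb2hsv(rgb);
vPlane = imerode(imdilate(vPlane(:,:,3), se), se);
discBW = vPlane > mean(vPlane(:));              % mean of the V plane as threshold
discBW = bwareaopen(discBW, 500);               % remove small unwanted objects
discBW = imgaussfilt(double(discBW), 2) > 0.5;  % smooth the disc boundary

% Optic cup: green plane, global (Otsu) threshold, smaller size limit
greenPlane = imerode(imdilate(rgb(:,:,2), se), se);
cupBW = imbinarize(greenPlane, graythresh(greenPlane));
cupBW = bwareaopen(cupBW, 100);                 % cup is small, so a lower limit
cupBW = imgaussfilt(double(cupBW), 2) > 0.5;    % smooth the cup boundary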

Extraction of Neuroretinal Rim

Extraction of the NRR is another feature used for the detection of glaucoma. Loss of axons in glaucoma is reflected as abnormalities of the neuroretinal rim. Identification of the neuroretinal rim width in all sectors of the optic disc is of fundamental importance for the detection of diffuse and localized rim loss in glaucoma. The rim width is calculated using the ISNT rule [1].
IV. IMPLEMENTATION AND RESULTS

Fig.4. Input image and transferred image


The images are stored in JPEG format (.jpg), as shown in Figure 4. The original (RGB and NTSC) image is transformed into an appropriate colour space for further processing [11]. A filtering technique is then used to reduce the effect of noise, and Canny edge detection and thresholding are applied to the filtered image [13].

Fig.5. Cup segmented area


After thresholding, the regions having uniform pixel values are grouped, which gives the measurement of the segmented cup area [2,15]. The segmented cup portion of the input image is shown in Figure 5.

Fig.6. Disc segmented area

To find the disc area of the input image, thresholding is applied as for the cup area [2,14,15]. The segmented disc portion of the input image is shown in Figure 6.

Table 1 shows the cup and disc areas of the input image. The rows of the table are the individual cup and disc areas of the image in the database. The cup and disc area values are obtained by using artificial neural networks [1,2,14,15,17].
Table 1. Cup to Disc ratios of the input image

Input image    Threshold    Cup Area    Disc Area    Cup to Disc ratio
25 0.62 502.25 980.125 0.5124
25 0.625 495.875 980.125 0.4692
25 0.63 433.375 980.125 0.4422
25 0.635 402.5 980.125 0.4107
25 0.64 383.75 980.125 0.3915
25 0.645 372.125 980.125 0.3797
25 0.65 358.125 980.125 0.3654
25 0.655 344.375 980.125 0.3514
25 0.66 334.75 980.125 0.3415
25 0.665 331.375 980.125 0.3381

Figure.16 (a), (b), (c) and (d): Cup to disc ratio plotted against different threshold values.

Table.3: Cup to Disc Ratio for the normal image dataset

Serial Number    Cup Area    Disc Area    Cup to Disc Area (CTD)    Previous technique (CTD)
1 269.25 835.875 0.3222 0.4010
2 367.75 1133.875 0.3243 0.3822
3 264.875 759.625 0.3487 0.5258
4 261.875 749 0.3298 0.5252
5 262.885 753 0.3258 0.5256
6 286 924.875 0.3092 0.3945
7 287.625 944 0.30469 0.3052
8 284.875 1044.75 0.2727 0.3229
9 273.25 1264.625 0.2161 0.3696
10 216.75 939.625 0.23068 0.4436
11 293.875 1192.5 0.2464 0.3784
12 206.5 982.25 0.2010 0.3298
13 187.625 1107.375 0.1694 0.3320
14 199.75 1057 0.1889 0.2796
15 205.875 707 0.2912 0.2914

Table.4: Cup to Disc Ratio for the abnormal image dataset

Serial Number    Cup Area    Disc Area    Cup to Disc Area (CTD)    Previous technique (CTD)
1 569.25 1134.375 0.5081 0.4932
2 557.75 749.75 0.7439 0.5822
3 558.125 905.375 0.6145 1.0569
4 653.25 808.75 0.8077 0.5432
5 324.75 884.375 0.3672 0.3980
6 363.375 903 0.4024 0.4025
7 328 815.875 0.3850 0.5688
8 476.125 827.875 0.5752 0.6653
9 569.25 1134.375 0.5018 0.4932
10 557.625 749.75 0.7439 0.5822
11 396.625 890.5 0.4454 0.3892
12 285.875 695.625 0.4109 0.3542
13 313.75 800 0.3922 0.3414
14 413.25 767.125 0.5387 0.4438
15 428.625 956.625 0.4480 0.4559

Table.3 and Table.4 show the cup to disc ratios for the normal image dataset and the abnormal image dataset, and compare the present technique with the previous technique. In the previous technique, after thresholding, the regions having uniform pixel values are grouped, which gives the measurement of the segmented cup area [6]. To find the disc area of the input image, different threshold values are applied and related to the cup areas; after finding the disc area, it is kept constant for all the different eye images, while the cup area is varied by applying threshold values to different regions [21]. In the present technique, the cup area and disc area are measured by placing the structuring element on the image and segmenting the required region, which gives a more accurate result than the previous technique [21]. The simulation results therefore give better accuracy compared with the previous work. Figure 8 gives a snapshot of the graphical user interface (GUI) used to identify glaucoma from the fundus pictures.

Figure.8 GUI window for glaucoma detection


Figure.9 GUI window displaying for normal results

A normal image is chosen from the database connected to the GUI window and the outcomes are displayed as shown in Figure.9. The resulting cup area, disc area and cup to disc ratio are 269.25, 836.876 and 0.32212 respectively. A glaucomatous image is then chosen from the database connected to the GUI window, and the corresponding GUI representation is shown in Figure 10 [19].

Figure.10 GUI window displaying for glaucoma results

The resulting cup area, disc area and cup to disc ratio are 569.25, 1134.375 and 0.5081 respectively.

Conclusion

Tables 3 and 4 compare the cup to disc ratios obtained by the present technique with those of the previous technique for the normal and abnormal image datasets. By measuring the cup and disc areas with a structuring element and segmenting the required region, the present technique gives more accurate results than the previous thresholding-based technique [21], and the simulation results show better accuracy compared with the previous work. The GUI shown in Figure 8 allows glaucoma to be identified from the fundus pictures.
VIII. APPLICATIONS
Application areas include system identification and control (vehicle control, trajectory prediction, process control, natural resources management), quantum chemistry, game playing and decision making (backgammon, chess, poker), pattern recognition (radar systems, face identification, object recognition and more), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, financial applications (e.g. automated trading systems), data mining (or knowledge discovery in databases, "KDD"), visualization and e-mail spam filtering.

Chapter 5 Image Preprocessing


DIFFERENT IMAGE PROCESSING METHODS TO DETECT GLAUCOMA

Image preprocessing and neuroretinal rim

Fig.1 Different image processing methods to detect glaucoma

Various image processing techniques used in the automated early diagnosis and analysis of eye diseases are enhancement, registration, fusion, segmentation, feature extraction, pattern matching, classification, morphology, statistical measurements and analysis.

Image Enhancement - Image enhancement includes varying the brightness and contrast of the image. It also includes filtering and histogram equalization, and is a pre-processing step used to enhance various features of the image.

Image Registration - Image registration is an important technique for change detection in retinal image diagnosis. In this process, two images are aligned onto a common coordinate system. The images may be taken at different times and with different imaging devices. In medical diagnosis it is essential to combine data from different images, and for better analysis and measurement the images are aligned geometrically.

Image Fusion - Image fusion is the process of combining information acquired from a number of imaging devices. Its goal is to integrate contemporary, multi-sensor, multi-temporal or multi-view information into a single image containing all the information, so as to reduce the amount of data.

Feature Extraction - Feature extraction is the process of identifying and extracting the region of interest from the image.

Segmentation - Segmentation is the process of dividing an image into its constituent objects and groups of pixels which are homogeneous according to some criterion. Segmentation algorithms are area-oriented rather than pixel-oriented. The main objective of image segmentation is to extract various features of the image which can be merged or split in order to build the objects of interest, on which analysis and interpretation can be performed. It includes clustering, thresholding, etc.

Morphology - Morphology is the science of appearance, shape and organization. Mathematical morphology is a collection of non-linear processes which can be applied to an image to remove details smaller than a certain reference shape. The basic morphological operations are erosion, dilation, opening and closing.

CDR (Cup to Disc Ratio) - The vertical cup-to-disc ratio (CDR) is one of the most important risk factors in the diagnosis of glaucoma [9]. It is defined as the ratio of the vertical cup diameter to the vertical disc diameter. The optic disc is the location where the optic nerve connects to the retina; in a typical 2D fundus image, the optic disc is an elliptic region which is brighter than its surroundings. The disc has a deep excavation in the centre called the optic cup, a cup-like area devoid of neural retinal tissue and normally white in colour. The optic cup (OC) of a glaucomatous eye tends to grow over time due to persistently increased intraocular pressure. As the OC grows, the neuroretinal rim located between the edge of the OD and the OC, which contains the optic nerve fibres, becomes smaller in area. If the neuroretinal rim is too thin, vision deteriorates. Thus, quantitative analysis of the optic disc cupping can be used to evaluate the progression of glaucoma [10]. As more and more optic nerve fibres die, the OC becomes larger with respect to the OD, which corresponds to an increased CDR value. For a normal subject, the CDR value is typically around 0.2 to 0.3; subjects with a CDR value greater than 0.6 or 0.7 are suspected of having glaucoma, and further testing is often needed to make the diagnosis [11].

PROPOSED METHOD

This chapter proposes a new method which improves the end results of detecting glaucoma. The method makes use of an Artificial Neural Network and the Cup to Disc Ratio to process the fundus image.

Retinal Fundus Database

To develop the algorithm for the automatic detection of glaucoma, the first essential step was to obtain an effective database, and for that purpose 90 retinal images were collected in total from various online databases: 30 images from the High Resolution Fundus image database, 20 from the optic-disc.org database, and the rest from other online databases including DROINS. The 2D fundus digital image is taken by a fundus camera, which photographs the retinal surface of the eye. In comparison with OCT/HRT machines, the fundus camera is easier to operate, less costly, and able to assess multiple eye conditions. Many researchers have utilized fundus images to automatically analyze the optic disc structures.

2. Image Preprocessing

In colour retinal images, the optic disc appears as the brightest part, with a pink or light orange colour, and is considered the Region of Interest (ROI). The ROI is the region around the optic disc and must first be delineated, as the optic disc generally occupies less than 5% of the pixels in a typical retinal fundus image. While the disc and cup extraction can be performed on the entire image, localizing the ROI helps to reduce the computational cost as well as to improve the segmentation accuracy. The ROI of each image is cropped and resized to 256×256, as shown in Fig.2 [1].

Fig.2 Resized image
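A small MATLAB sketch of this preprocessing step (the file name and crop rectangle are placeholders; a real system would locate the disc automatically):

% Minimal sketch: crop the region of interest around the optic disc and
% resize it to 256x256. The crop rectangle [x y width height] is a placeholder.
fundus = imread('fundus.jpg');
roi = imcrop(fundus, [600 300 400 400]);   % region around the optic disc
roi = imresize(roi, [256 256]);            % resized ROI used for segmentation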

4. CDR Calculation

The area is calculated by counting the number of white pixels; the area of the cup is then divided by the area of the disc to calculate the CDR [1]:

CDR = Cup Area / Disc Area

After obtaining the disc and cup, various features can be computed. The clinical convention is used to compute the CDR, as the CDR is an important indicator for glaucoma screening. The hole represents the cup and the surrounding area the disc: if the cup fills 1/10 of the disc, the ratio is 0.1; if it fills 7/10 of the disc, the ratio is 0.7. The normal cup-to-disc ratio is about 0.3, and a large cup-to-disc ratio may imply glaucoma. Following the clinical convention, the CDR can also be computed as CDR = CD/DD, where CD is the cup diameter, DD is the disc diameter and CDR is the cup-to-disc ratio. The computed CDR is used for glaucoma screening: when the CDR is greater than the threshold, the image is glaucomatous.

1.1 Screening of Glaucoma

Manual screening of disease is performed as a public health initiative for the management of disease in a community. It refers to a proactive strategy to deal with the disease, the symptoms of which have not yet surfaced or been recognized. Glaucoma screening systems are prevalent in many countries [13][12]. Clinically, a patient is classified as a glaucoma suspect based on various measures such as optic disc topography, visual fields, intraocular pressure, family history, etc. In the screening process, colour retinal imaging is the de facto standard for screening for the presence of different types of retinopathy due to its low cost, non-invasiveness and ease of use [30]. It has resulted in higher penetration into every section of society, thereby reducing the cases of loss of vision. Colour retinal images provide a two-dimensional projection of the retina, yielding structural information about the optic disc (OD) along with other retinal structures such as the cup, rim and blood vessels (shown in Fig.1.2). Since loss of nerve fibres primarily leads to glaucoma, the region of interest is the optic disc and its periphery (Figure 1.2). An optic disc region consists of two structures: the disc, identified by the outer boundary of the disc (marked in white), and the cup, which lies within the inner boundary of the optic disc. Glaucoma primarily leads to structural changes in the OD, resulting in deformation of the cup and disc morphology. A common deformation is the enlargement of the cup with respect to the disc and is referred to as cupping (Fig.1.3). Secondary visual indicators (Fig.1.3 and 1.5) for the disease are the appearance of bright blob-like lesions adjoining the OD (peripapillary atrophy) and subtle darkening in a wedge-shaped region (retinal nerve fibre loss) in the superior and inferior directions around the optic disc.

Figure 1.2 Sample 3D colour image of the retina (left); sample colour retinal image with the optic nerve head and other retinal structures (middle); sample region of interest with varying colour, texture, disc boundary shape and surrounding deformations (right).

1.1.1 Neuroretinal Rim Thinning

In a retinal image, neuroretinal rim thinning is a definite indicator of glaucoma. The rim begins to thin with the onset of the disease as the cup boundary starts approaching the disc, and as a result the cup-to-disc diameter ratio increases. Thinning can be a global or a local phenomenon around the cup boundary. Fig.1.3 shows an example of global rim thinning in the second row; it can be noted that nearly the entire boundary along the cup is extended to the disc in this case. Fig.1.4 shows an example where only a small local region in the rim has undergone thinning. Here, thinning is a subtle variation in the ill-defined cup boundary. While easily visible to a trained human eye, automatic detection of such minor variations across retinal images, already affected by other non-disease variations, is a significant challenge.

1.1.2 Peripapillary Atrophy (PPA)

PPA is another indicator of glaucoma (see Fig.1.4, in yellow). While the cup boundary is not affected by it, a change in intensity adjoining the disc boundary can be observed clearly. PPA occurs due to atrophy of the retinal cells around the optic disc. Some atrophy appears in both normal and glaucomatous eyes, but it is more commonly observed in glaucomatous cases.

Figure 1.4 Glaucomatous case depicting local rim thinning (indicated by the red arrow), PPA (marked in yellow) and RNFL defect (marked in green).

Some suspect cases have a neuroretinal rim similar to that of the confirmed cases in Fig.1.5(b). This is because intensity is not considered a primary indicator of the cup boundary within the optic disc; the vessel bends serve as cues for the depth change. Similarly, normal cases and confirmed cases can have regions which look alike (refer to Figure 1.5) and which may be misinterpreted as atrophy or nerve fibre loss defects.

5. Extraction of Neuroretinal Rim

Extraction of the NRR is another feature used for the detection of glaucoma. Loss of axons in glaucoma is reflected as abnormalities of the neuroretinal rim. Identification of the neuroretinal rim width in all sectors of the optic disc is of fundamental importance for the detection of diffuse and localized rim loss in glaucoma. The rim width is calculated using the ISNT rule [1].

For the extraction of the cup, the green plane is extracted from the eroded image; the cup has a much brighter contrast compared with other regions of the fundus image. In the next step, the green plane is converted into a binary image using a global threshold, which chooses the threshold that minimizes the intra-class variance of the black and white pixels. After extracting the binary optic cup, the morphological operation for the removal of small objects is applied, as for the optic disc but with a smaller pixel limit since the cup is small. To smooth the boundaries of the optic cup, a Gaussian filter is applied to the resulting binary image of the optic cup, as shown in Figure 4.

Figure 4: Extraction of optic cup

3.1.4 CDR Calculation

The area is calculated by counting the number of white pixels; the area of the cup is then divided by the area of the disc to calculate the CDR.

Extraction of the NRR is used for the detection of glaucoma. Loss of axons in glaucoma is reflected as abnormalities of the neuroretinal rim. Identification of the neuroretinal rim width in all sectors of the optic disc is of fundamental importance for the detection of diffuse and localized rim loss in glaucoma. The rim width is calculated using the ISNT rule.

3.1.6 ISNT calculation


The rim area is measured in the ISNT quadrants. Usually the rim thickness must be greater in the superior and inferior regions than in the temporal and nasal regions. To obtain the thickness in all four quadrants, a binary image of the neuroretinal rim is taken and then cropped as in Figure 4. A mask of the cropped image size is used to filter one quadrant; the mask is then rotated by 90° to obtain the other quadrants. Figure 5 shows the masks used for identifying the rim area on the ISNT sides of the optic disc.

Figure 4: NRR Image

Figure 5: Mask used for detecting the Rim area in (a)inferior quadrant; (b)superior quadrant; (c) nasal quadrant; (d)
temporal quadrant of the optic disc

This neuroretinal rim configuration gives rise to a cup shape that is either round or horizontally oval. The neuroretinal rim area is calculated by subtracting the area of the optic cup from the area of the optic disc. Figure 6 shows the area covered by the NRR in the ISNT quadrants; the area covered by white pixels is counted for evaluating the ISNT ratio.
Figure 6: Rim area of ISNT quadrants of the optic disc
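A minimal MATLAB sketch of this ISNT computation is given below. It assumes a binary neuroretinal-rim image rimBW centred on the optic disc (a placeholder variable) and builds the quadrants with simple angular conditions instead of the rotated mask of Figure 5; the nasal/temporal assignment also depends on which eye is imaged:

% Minimal sketch: rim area in the Inferior, Superior, Nasal and Temporal
% quadrants of a binary neuroretinal-rim image rimBW (placeholder), assumed
% centred on the optic disc; nasal/temporal sides depend on eye laterality.
[rows, cols] = size(rimBW);
[X, Y] = meshgrid(1:cols, 1:rows);
cx = cols/2;  cy = rows/2;
theta = atan2d(Y - cy, X - cx);          % pixel angle in degrees (image y axis points down)

superior = rimBW & (theta > -135) & (theta <= -45);
inferior = rimBW & (theta >   45) & (theta <= 135);
temporal = rimBW & (theta >  -45) & (theta <=  45);
nasal    = rimBW & ((theta > 135) | (theta <= -135));

isntAreas = [nnz(inferior), nnz(superior), nnz(nasal), nnz(temporal)];
% ISNT rule: inferior >= superior >= nasal >= temporal is expected for a normal eye
isntOk = issorted(fliplr(isntAreas));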

VII. CONCLUSION

PPA and neuroretinal rim thinning are important secondary indicators of glaucoma, and extraction of the NRR together with the ISNT rule complements the CDR in detecting diffuse and localized rim loss. Following the clinical convention, the CDR is computed as CDR = CD/DD, where CD is the cup diameter and DD is the disc diameter; when the CDR is greater than the threshold, the image is considered glaucomatous. In this work the detection of glaucoma disease has been carried out. MATLAB has been used for training and simulating the ANN to detect glaucoma. The CDR and a feed-forward artificial neural network have been used; these parameters are extracted using MATLAB, compared with the standard values, and the result is given. The ANN provides simplicity of operation in the system. This software will help doctors to easily detect glaucoma disease.

Chapter 6 Superpixel Generation


II. STEPS TO CALCULATE CDR

The segmentation of the optic disc and optic cup is done in a sequence of steps, which can be explained with a flow diagram.
Fig 2 Flow diagram of proposed work

The block diagram is explained as follows; each block has its own importance in solving the problem.

A. Retinal Fundus Image: Retinal fundus images are images of the interior parts of the eye, including the retina, optic disc and fundus. Fundus images are used by ophthalmologists for the detection of eye disease [3]. In this work, fundus images are used to detect glaucoma and also the stage of the disease.

B. Superpixel Generation: Superpixels are larger than normal pixels; the pixels within a superpixel have similar colour and contrast, so the required information can easily be obtained from the image. Superpixels have wide application in defence, traffic and disease detection. In the proposed work, superpixels are generated using the SLIC algorithm (a MATLAB sketch of steps B-F is given after this list).

C. RGB Separation: The RGB model is a colour model formed by the combination of the three primary colours red, green and blue; by mixing these colours, a broad array of other colours is formed [4][6]. The main purpose of this model is to represent the image on digital media such as computers and televisions. In the proposed work the sharpness of the image is increased so that the image can be analyzed easily.

D. Histogram Representation: A histogram is a graphical representation of the tonal distribution of a digital image; it plots the number of pixels for each tonal value, so the viewer can grasp the tonal distribution at a glance. Histograms are widely used in digital cameras and in disease detection. The horizontal axis represents the tonal values and the vertical axis the number of pixels.

E. Histogram Equalization: This method is used to increase the global contrast of an image, especially when the image has close contrast values. It is widely used in X-ray imaging for a better view of bone structure and in thermal imaging. Histogram equalization gives a realistic view to a dull image and helps in extracting the required information.

F. HSV Conversion: Since RGB separation does not give the required brightness or contrast, HSV conversion is applied; it gives a different maturity to the image, and each superpixel becomes clear enough to mark the cup and disc on the fundus image.

G. Cup Segmentation: For the detection of glaucoma, cup segmentation is very important; the optic cup is the central depression of the optic disc, and its diameter is measured to find the CDR.

H. Disc Segmentation: The optic disc is the bright region where the optic nerve connects to the retina; with the help of the disc diameter the CDR can be calculated. Disc and cup segmentation are the important steps in detecting glaucoma [1][2][3].

I. Neural Network: A neural network consists of interconnected neurons which transfer information. After calculating the CDR, the information is fed to a feed-forward neural network, which compares the CDR with the threshold and opens a dialogue box showing the stage of glaucoma.
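As a rough illustration of steps B-F (a sketch assuming the Image Processing Toolbox; the file name and the number of superpixels are placeholders):

% Minimal sketch of steps B-F: superpixel generation, RGB separation,
% histogram representation, histogram equalization and HSV conversion.
fundus = imread('fundus.jpg');

[labels, numLabels] = superpixels(fundus, 500);               % SLIC superpixel generation
boundaries = boundarymask(labels);                            % superpixel boundaries (for display)

R = fundus(:,:,1);  G = fundus(:,:,2);  B = fundus(:,:,3);    % RGB separation

figure; imhist(G); title('Green channel histogram');          % histogram representation

Req = histeq(R);  Geq = histeq(G);  Beq = histeq(B);          % histogram equalization
equalized = cat(3, Req, Geq, Beq);

hsvImage = rgb2hsv(equalized);                                % HSV conversion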

III. OPTIC DISC SEGMENTATION

Segmentation of the disc is very important for the diagnosis of glaucoma. Segmentation involves finding the disc pixels; this can be done by superpixel generation followed by histogram representation to extract useful data from the image and calculate the CDR. Several approaches have been proposed for disc segmentation, among them template based methods and deformable model based methods, and sometimes the circular Hough transform is also used for optic disc segmentation [1][2][4]. Both the template based and the deformable model based methods rely on edge characteristics. These methods are sensitive to poor initialization, so to overcome this problem a superpixel classification based method is proposed. With the help of the superpixel classification method, useful data can be extracted even when the image is not clear.

A. SLIC Superpixel

Many algorithms have been proposed for superpixel generation, such as (1) graph-based algorithms and (2) gradient-ascent-based algorithms; these have proved useful in the segmentation of various images of animals and plants. This work uses the simple linear iterative clustering (SLIC) algorithm [4][5]. The drawback of the above methods is that they are slow and require a large amount of memory. SLIC is fast, memory efficient, and easy to use, with only one parameter, namely the number of desired superpixels k. To produce roughly equally sized superpixels, the initial cluster centres are placed on a regular grid with spacing S = √(N/k), where N is the number of pixels; the centres are then moved to the seed locations corresponding to the lowest gradient position.
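For example, the initial grid spacing for a given image and a desired number of superpixels can be computed as follows (k = 500 and the file name are placeholders):

% Minimal sketch: SLIC grid interval S for k desired superpixels on an
% image with N pixels.
I = imread('fundus.jpg');
N = size(I,1) * size(I,2);   % total number of pixels
k = 500;                     % desired number of superpixels
S = sqrt(N / k);             % approximate spacing of the initial cluster centres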

This work focuses on automatic glaucoma detection using the CDR from 2-D fundus images; the concept can also be used for the detection of brain cancer, lung cancer and for blood vessel segmentation [7]. Centre-surround statistics are computed from the superpixels and unified with histograms for disc and cup segmentation; based on the segmented cup and disc, the CDR is computed for glaucoma screening. This CDR is then compared with a threshold value, after which the stage of glaucoma can easily be detected [17,18].

IV. OPTIC CUP SEGMENTATION

Cup segmentation from a 2-D fundus image without depth information is a tough task; pallor is the cue used to determine the cup region [12][13]. Compared with disc segmentation, fewer methods have been proposed for cup segmentation; the most common are the level-set based approach and the threshold based method, both based on pallor. The main challenge in cup segmentation is to determine the cup boundary when the pallor is weak. Liu et al. proposed a method in which a potential set of pixels belonging to the cup region is first derived based on the reference colour obtained from a manually selected point. Next, an ellipse is fit to this set of pixels to estimate the cup boundary.

A Cup to Disc Ratio

The CDR is an important indicator for glaucoma screening. After computing the disc and cup, various features can be computed; the CDR is compared with the threshold, and if the CDR is greater than the threshold the eye is glaucomatous, otherwise it is healthy. The CDR is computed as

CDR = VCD / VDD

where VCD is the vertical cup diameter and VDD is the vertical disc diameter.
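A small sketch of this vertical CDR computation from the segmented binary masks (cupBW and discBW are placeholder variables assumed to come from the cup and disc segmentation steps):

% Minimal sketch: vertical cup-to-disc ratio from binary cup and disc masks.
% cupBW and discBW are placeholders from the segmentation steps.
cupRows  = any(cupBW, 2);                  % image rows containing cup pixels
discRows = any(discBW, 2);                 % image rows containing disc pixels
VCD = find(cupRows, 1, 'last')  - find(cupRows, 1, 'first')  + 1;   % vertical cup diameter
VDD = find(discRows, 1, 'last') - find(discRows, 1, 'first') + 1;   % vertical disc diameter
CDR = VCD / VDD;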

B. Feed Forward Neural Network

Artificial neural networks are generally presented as systems of interconnected "neurons" which can compute values from inputs and are capable of machine learning as well as pattern recognition thanks to their adaptive nature [8][9]. The outputs of the disc and cup segmentation are passed to the neural network, which, using the CDR and the threshold value, computes the result and shows the stage of glaucoma; if the eye is healthy it opens a dialogue box stating that the eye is healthy, and similarly for a glaucomatous eye.

The word "network" in the term "neural network" refers to the interconnections between the neurons in the different layers of the system. An example system has three layers: the first layer has input neurons which send data via synapses to the second layer of neurons, and then via more synapses to the third layer of output neurons [10]. More complex systems have more layers of neurons, some with larger numbers of input and output neurons. The synapses store parameters called "weights" that manipulate the data in the calculations. Mathematically, the network function f(x) is defined as a composition of other functions g(x).
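A minimal sketch of such a feed-forward network in MATLAB, assuming the Deep Learning Toolbox; cdrValues and stageLabels are placeholder training data:

% Minimal sketch: a small feed-forward network mapping CDR values to a
% glaucoma stage. cdrValues (1-by-N) and stageLabels (1-by-N, e.g. 1 = healthy,
% 2 = moderate, 3 = severe) are placeholder training data.
net = feedforwardnet(5);                    % one hidden layer of 5 neurons
net = train(net, cdrValues, stageLabels);   % trained on the CDR/stage pairs
predictedStage = round(net(0.55));          % stage predicted for a CDR of 0.55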

V. EXPERIMENTAL RESULT

The experiment uses 15 images from 15 different subjects' eyes, for which the IOP has also been measured; the SLIC algorithm is used to generate superpixels. Calculating the CDR with good accuracy is quite difficult, and glaucoma screening with the segmented disc and cup depends on the accuracy of the CDR. Some of the retinal fundus images used in the experiment for glaucoma screening are shown below [19,20].

Fig 4 Input retinal fundus images

A Superpixel generation

Superpixels are generated from the above images using the SLIC algorithm.

Fig 5 Superpixel generation from data base

B RGB Separation
Fig 6 Red, green, blue separation

C Histogram Representation: Once the superpixels are generated, histogram representation is the next step; it helps in extracting useful data from the image and, in case of any change in colour or contrast, it helps in adjusting these [21].

Fig.7 Red, green ,blue histogram representation

D Histogram Equalization

Fig 8 Equalization of red, green, blue channel

E Equalized Histogram Representation

Fig 9 Equalized histogram for RGB


F HSV Conversion

Fig 10 Hue, saturation, value conversion

G Optic Disc and Optic Cup Segmentation

For the calculation of the CDR, optic disc and optic cup segmentation is performed; after segmentation the vertical cup and disc diameters are measured, and the ratio of the vertical cup diameter to the vertical disc diameter is the CDR. [22]

Fig 11 Cup and disc segmentation of retinal fundus image

H Output

The proposed work helps in detecting glaucoma in a large number of patients with 95% accuracy; the method not only detects the disease but also indicates its stage. The experimental results are shown below; the complete experiment is carried out using MATLAB.

Fig 12 CDR calculation, stage of glaucoma disease


Super pixel based optic cup and disc segmentation


II. OPTIC DISC SEGMENTATION

A. INTRODUCTION

Localization and segmentation of the disc are very important in many computer aided diagnosis systems, including glaucoma screening. Localization focuses on finding a disc pixel, very often the centre. Our work focuses on the segmentation problem; the disc is located by our earlier method, which works well on our data set for glaucoma screening. Segmentation estimates the disc boundary, which is a challenging task due to blood vessel occlusions, pathological changes around the disc, variable imaging conditions, etc. Several approaches have been proposed for disc segmentation, which can generally be classified as template based methods, deformable model based methods and pixel classification based methods. Both the template and deformable model based methods rely on edge characteristics. The performance of these methods depends very much on differentiating the edges of the disc from other structures, especially the PPA. The PPA region is often confused with part of the disc for two reasons: 1) it looks similar to the disc; 2) its crescent shape makes it form another ellipse (often stronger) together with the disc. Deformable models are also sensitive to poor initialization. To overcome the problem, a template based approach with PPA elimination has been proposed. This method reduces the chance of mistaking PPA for part of the disc; however, it does not work well when the PPA area is small, or when the texture is not significantly predominant. Pixel classification based methods use various features such as intensity and texture from each pixel and its surroundings to find the disc. The number of pixels is high even at moderate resolutions, which makes optimization at the pixel level intractable. To overcome the limitations of pixel classification based methods and deformable model based methods, a superpixel classification based method is combined with the deformable model based methods. In the proposed method, superpixel classification is used to initialize the disc boundary and the deformable model is used to fine tune it, i.e., a superpixel classification based disc initialization for deformable models. The segmentation comprises: a superpixel generation step to divide the image into superpixels; a feature extraction step to compute features from each superpixel; a classification step to label each superpixel as a disc or non-disc superpixel and estimate the boundary; and a deformation step using deformable models to fine tune the disc boundary.

B SUPERPIXEL GENERATION

Superpixels are local, coherent and provide a convenient primitive for computing local image features. They capture redundancy in the image and reduce the complexity of subsequent processing. A superpixel generation step is used to divide the image into superpixels. Many algorithms have been proposed for superpixel generation. The simple linear iterative clustering (SLIC) algorithm is used to aggregate nearby pixels into superpixels in retinal fundus images. SLIC is fast, memory efficient and has excellent boundary adherence. SLIC is also simple to use, with only one parameter, i.e., the number of desired superpixels.

SLIC ALGORITHM

SLIC is a simple and efficient method to decompose an image into visually homogeneous regions. It is based on a spatially localized version of k-means clustering. Similar to mean shift or quick shift, each pixel is associated with a feature vector. SLIC takes two parameters: the nominal size of the regions (superpixels), regionSize, and the strength of the spatial regularization, regularizer. The image is first divided into a grid with step regionSize. The centre of each grid tile is then used to initialize a corresponding k-means centre (up to a small shift to avoid image edges). Finally, the k-means centres and clusters are refined, yielding the segmented image. As a further restriction and simplification, during the k-means iterations each pixel can be assigned only to the 2 x 2 centres corresponding to grid tiles adjacent to the pixel. After the k-means step, SLIC optionally removes any segment whose area is smaller than a threshold minRegionSize by merging it into a larger one. In SLIC, k initial cluster centres Ck are sampled on a regular grid spaced S pixels apart over an image with N pixels (with S = sqrt(N/k)). The centres are first moved towards the lowest gradient position in a 3 × 3 neighbourhood. Clustering is then applied: for each Ck, SLIC iteratively searches for its best matching pixels in the neighbourhood around Ck based on colour and spatial proximity, and then computes the new cluster centre. The iteration continues until the distance between the new centres and the previous ones is small enough.
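A minimal MATLAB sketch of this step is given below. It uses the Image Processing Toolbox function superpixels (an SLIC-based implementation) as a stand-in for the SLIC code described above; the file name and superpixel count are placeholder assumptions.

```matlab
% Hedged sketch: SLIC-style superpixel generation on a fundus image.
% 'fundus.jpg' and the number of superpixels (600) are illustrative choices.
I = imread('fundus.jpg');                 % RGB retinal fundus image
[L, numLabels] = superpixels(I, 600);     % SLIC-based oversegmentation
BW = boundarymask(L);                     % superpixel boundaries
imshow(imoverlay(I, BW, 'cyan'));         % visualize the superpixels on the image
title(sprintf('%d superpixels', numLabels));
```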

C. FEATURE EXTRACTION
This step computes features from each superpixel. Many features such as colour, appearance, location and texture can be extracted from superpixels for classification; here the contrast enhanced histogram, centre surround statistics and texture are used as the features.

CONTRAST ENHANCED HISTOGRAM:

Many features such as colour, appearance, gist, location and texture can be extracted from superpixels for classification. Since colour is one of the main differences between the disc and non-disc regions, the colour histogram of each superpixel is an intuitive choice. Motivated by the large contrast variation between images and by the use of histogram equalization in biological neural networks, histogram equalization is applied individually to the red, green and blue channels of the RGB colour space to enhance the contrast for easier analysis. Hue and saturation from the HSV colour space are also included, giving five channel maps in total. The histogram feature for the jth superpixel SPj is computed as HISTj = [hj(HE(r)) hj(HE(g)) hj(HE(b)) hj(h) hj(s)], where HE(.) denotes the histogram equalization function and hj(.) denotes the function that computes the histogram of SPj over a channel.
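The following MATLAB fragment sketches how such a contrast-enhanced histogram could be assembled for one superpixel; the number of histogram bins and the variable names are assumptions made for this example.

```matlab
% Hedged sketch: contrast-enhanced histogram feature HISTj for superpixel j.
% I is an RGB fundus image, L the superpixel label image, j the superpixel index.
nBins = 32;                                   % assumed number of histogram bins
R = histeq(I(:,:,1)); G = histeq(I(:,:,2)); B = histeq(I(:,:,3));   % HE on r, g, b
HSV = rgb2hsv(I);  H = HSV(:,:,1);  S = HSV(:,:,2);                 % hue, saturation
mask = (L == j);                              % pixels belonging to superpixel j
histj = @(C) histcounts(double(C(mask)), nBins, 'Normalization', 'probability');
HISTj = [histj(R), histj(G), histj(B), histj(H), histj(S)];   % concatenated 1 x (5*nBins) feature
```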

CENTER SURROUND STATISTICS

It is important to include features that reflect the difference between the PPA region and the disc region. Superpixels from the two regions often appear similar except for their texture: the PPA region contains blob-like structures while the disc region is relatively more homogeneous. The histogram of each superpixel does not work well, because the texture variation in the PPA region often extends over an area larger than the superpixel, and a superpixel often consists of a group of pixels with similar colours. Motivated by these observations, centre surround statistics (CSS) from superpixels are included as a texture feature. To compute CSS, dyadic pyramids with nine spatial scales are generated. The dyadic Gaussian pyramid is a hierarchy of low-pass filtered versions of an image channel, so that successive levels correspond to lower frequencies. It is obtained by convolution with a linearly separable Gaussian filter and decimation by a factor of two. A centre surround operation between centre (finer) levels and surround (coarser) levels is then performed. Denoting the feature map at centre level c as I(c), the feature map at surround level s as I(s), and the interpolation from the surround level to the centre level as f_{s→c}(I(s)), the centre surround difference is computed as |I(c) − f_{s→c}(I(s))|. All the difference maps are resized to the same size as the original image.
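A simplified MATLAB sketch of the centre surround operation is shown below. It builds a dyadic Gaussian pyramid with impyramid and computes |I(c) − f_{s→c}(I(s))| for one centre/surround pair; the number of levels, the choice of channel and the chosen levels are assumptions.

```matlab
% Hedged sketch: one centre-surround difference map from a dyadic Gaussian pyramid.
% I is an RGB fundus image; the red channel is used here purely as an example.
Ichan = im2double(I(:,:,1));
nLevels = 6;                                  % assumed pyramid depth
pyr = cell(1, nLevels);  pyr{1} = Ichan;
for k = 2:nLevels
    pyr{k} = impyramid(pyr{k-1}, 'reduce');   % low-pass filter + decimate by 2
end
c = 2; s = 5;                                 % assumed centre and surround levels
Is_up = imresize(pyr{s}, size(pyr{c}));       % interpolate surround level to centre level
CS = abs(pyr{c} - Is_up);                     % centre-surround difference |I(c) - f(I(s))|
CS = imresize(CS, size(Ichan));               % resize back to the original image size
```

The CSS feature for a superpixel would then be formed from statistics of such difference maps (for example the mean and variance over the superpixel's pixels).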

FINAL FEATURE

The features of neighbouring superpixels are also considered when classifying the current superpixel. Four neighbouring superpixels of SPj are found and denoted SPj1, SPj2, SPj3 and SPj4. SPj1 is the first superpixel reached by moving horizontally to the left from the centre of the current superpixel; similarly, SPj2, SPj3 and SPj4 are determined by moving right, up and down. The CSS feature is then computed as CSSj = [CSSj CSSj1 CSSj2 CSSj3 CSSj4]. HISTj and CSSj are combined to form the proposed feature.

D. INITIALIZATION AND DEFORMATION

In this stage, a classification step determines each superpixel as a disc or non-disc superpixel to estimate the boundary, and a deformation step fine tunes the disc boundary. A support vector machine is used as the classifier; LIBSVM with a linear kernel is used. The output value for each superpixel is used as the decision value for all pixels in that superpixel. A smoothing filter is then applied to the decision values to obtain smoothed decision values. In this implementation, a mean filter and a Gaussian filter were tested, and the mean filter was found to be the better choice. The smoothed decision values are then thresholded to obtain binary decisions for all pixels. In the experiments, +1 and −1 are assigned to positive (disc) and negative (non-disc) samples, and the threshold is their average, i.e., 0. After obtaining the binary decisions for all pixels, we have a matrix of binary values with 1 as object and 0 as background. The largest connected object, i.e., the connected component with the largest number of pixels, is obtained through morphological operations and its boundary is used as the raw estimate of the disc boundary. The best fitted ellipse, computed using an elliptical Hough transform, is taken as the fitted estimate. The active shape model is then employed to fine tune the disc boundary.
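The sketch below illustrates this initialization pipeline in MATLAB. It uses fitcsvm as a stand-in for the LIBSVM linear-kernel classifier mentioned above, and the moment-based ellipse returned by regionprops as a stand-in for the elliptical Hough transform; the feature matrices, labels and filter size are assumptions made for this example.

```matlab
% Hedged sketch: superpixel classification and raw disc-boundary estimation.
% Xtrain: training feature matrix, ytrain: +1 (disc) / -1 (non-disc) labels,
% Xsp: features of the superpixels of the test image, L: superpixel label image.
mdl = fitcsvm(Xtrain, ytrain, 'KernelFunction', 'linear');   % linear SVM (LIBSVM stand-in)
[~, score] = predict(mdl, Xsp);               % decision values, one row per superpixel
dec = score(:, 2);                            % signed decision value of the +1 (disc) class

decMap = dec(L);                              % propagate each value to all pixels of its superpixel
decMap = imfilter(decMap, fspecial('average', 15), 'replicate');   % mean-filter smoothing
discBW = decMap > 0;                          % threshold at 0 (midpoint of +1 / -1)

discBW = bwareafilt(discBW, 1);               % keep the largest connected component
stats  = regionprops(discBW, 'Centroid', 'MajorAxisLength', ...
                     'MinorAxisLength', 'Orientation');   % parameters of a fitted ellipse
```

These region properties define a best-fitting ellipse that can serve as the initial disc boundary before the deformable-model refinement.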
Fig 1. Illustration of neighbouring super pixel

III. OPTIC CUP SEGMENTATION

A. INTRODUCTION
Detecting the cup boundary from 2-D fundus images without depth information is a challenging task, as
depth is the primary indicator for the cup boundary. In 2-D fundus images, one landmark used to determine the
cup region is the pallor, defined as the area of maximum colour contrast inside the disc. The main challenge
in cup segmentation is to determine the cup boundary when the pallor is not obvious or is weak. In such
scenarios, we lack landmarks, such as intensity changes or edges, to estimate the cup boundary reliably.
Although vessel bends are potential landmarks, they can occur at many places within the disc region and
only a subset of these points defines the cup boundary. Besides the difficulty of obtaining these points, it is
also hard to differentiate the vessel bends that mark the cup boundary from other vessel bends without
obvious pallor information. A superpixel classification based method for cup segmentation incorporates prior
knowledge into the training of the superpixel classification.
B. FEATURE EXTRACTION
The feature extraction process can be summarized as follows. After obtaining the disc, the minimum
bounding box of the disc is used for cup segmentation. The histogram feature is computed similarly to that
for disc segmentation, except that the histogram from the red channel is no longer used, because the red
channel carries little information about the cup. It is denoted HISTjc to differentiate it from the disc feature.

C. SUPERPIXEL CLASSIFICATION FOR OPTIC CUP ESTIMATION
LIBSVM with a linear kernel is used for the classification. In the training step, the same number of
superpixels is randomly sampled from the cup and non-cup regions of a set of training images with manual cup
boundaries. As before, the output values of the LIBSVM decision function are used: the output value for each
superpixel is used as the decision value for all pixels in that superpixel. A mean filter is applied to the
decision values to compute smoothed decision values, which are then thresholded to obtain binary decisions
for all pixels. The largest connected object is obtained and its boundary is used as the raw estimate. The best
fitted ellipse is computed as the cup boundary. The ellipse fitting here is beneficial for overcoming the noise
introduced by vessels, especially in the inferior and superior sectors of the cup. Contour deformation is not
applied after the estimated cup boundary is obtained from superpixel classification, because many cases do
not have an obvious or strong contrast between the cup and the rim for the deformable models; a deformation
in these cases often leads to an overestimated cup.

Fig 2:Super pixel based optic cup segmentation.

Each disc image is divided into superpixels. The features are used to classify the superpixels as cup or non-cup, and the decision values from the SVM output are smoothed to determine the cup boundary.

D. CUP TO DISC RATIO

After obtaining the disc and cup, various features can be computed; the clinical convention is followed to compute the CDR, which is an important indicator for glaucoma screening. The hole represents the cup and the surrounding area the disc. If the cup fills 1/10 of the disc, the ratio is 0.1; if it fills 7/10 of the disc, the ratio is 0.7. The normal cup-to-disc ratio is about 0.3, and a large cup-to-disc ratio may imply glaucoma. The CDR is computed as CDR = CD / DD, where CD is the cup diameter, DD is the disc diameter and CDR is the cup-to-disc ratio. The computed CDR is used for glaucoma screening: when the CDR is greater than the threshold, the image is classified as glaucomatous.

VI. CONCLUSION

Cup segmentation has been attempted based on pixel and superpixel based classification strategies. A superpixel classification based optic cup segmentation technique was proposed for glaucoma detection [34], in which each disc image is converted to superpixels over which features are extracted and classified as cup or non-cup. Similarly, pixel level classification, where a class is assigned to each pixel based on features extracted from numeric properties of the pixel and its surroundings, is also used for cup segmentation. In our work we use fundus images to detect glaucoma and also the stage of the disease; the individual processing blocks (superpixel generation, RGB separation, histogram representation and equalization, HSV conversion, cup and disc segmentation, and the neural network) are described in detail in the Results and Discussions chapter.

CHAPTER 8
Neural Network for Classification
The Probabilistic Neural Network was developed by Donald Specht. Classification refers to the analysis of the properties of an image; depending on this analysis, the dataset is divided into different classes. Input features are categorized as 0 and 1. The classification process is divided into two phases: a training phase and a testing phase. In the training phase known data are given, and in the testing phase unknown data are given. Classification is performed by the classifier after the training phase [10]. The Probabilistic Neural Network provides a general solution to pattern classification problems [11]. Classification is an important technique of image analysis for the estimation of statistical parameters according to the grey level intensities of pixels. It includes labelling a pixel or group of pixels based on their grey values and other statistical parameters.

Transfer Function

The behaviour of an ANN (Artificial Neural Network) depends on both the weights and the input-output function (transfer function) specified for the units. This function typically falls into one of three categories: linear (or ramp), threshold and sigmoid. For linear units, the output activity is proportional to the total weighted input. For threshold units, the output is set at one of two levels, depending on whether the total input is greater than or less than some threshold value. For sigmoid units, the output varies continuously but not linearly as the input changes. Sigmoid units bear a greater resemblance to real neurons than do linear or threshold units, but all three must be considered rough approximations (Wilbert Sibanda).

2.2.1 Learning

A learning rule, or learning process, is a method or mathematical logic which improves the artificial neural network's performance; usually this rule is applied repeatedly over the network. It works by updating the weights and bias levels of a network when the network is simulated in a specific data environment [1]. A learning rule may take the existing condition (weights and biases) of the network and compare the expected result with the actual result of the network to give new and improved values for the weights and biases [2]. Depending on the complexity of the model being simulated, the learning rule of the network can be as simple as an XOR gate or mean squared error, or it can be the result of multiple differential equations. The learning rule is one of the factors which decide how fast and how accurately the artificial network can be developed.

Activation Functions

The role of the activation function in a neural network is to produce a non-linear decision boundary via non-linear combinations of the weighted inputs. The activation function of a node defines the output of that node given an input or set of inputs. A standard computer chip circuit can be seen as a digital network of activation functions that can be "ON" (1) or "OFF" (0), depending on the input. This is similar to the behaviour of the linear perceptron in neural networks; however, only nonlinear activation functions allow such networks to compute nontrivial problems using only a small number of nodes. In artificial neural networks this function is also called the transfer function.

Consider a single neuron: its output Y is given as input to the next neuron, or to a number of neurons.

Training set: this data set is used to adjust the weights of the neural network.
Validation set: this data set is used to minimize overfitting. The weights of the network are not adjusted with this data set; it is only used to verify that any increase in accuracy over the training data set actually yields an increase in accuracy over data the network has not been trained on. If the accuracy over the training data set increases but the accuracy over the validation data set stays the same or decreases, the neural network is overfitting and training should be stopped.
Testing set: this data set is used only for testing the final solution, in order to confirm the actual predictive power of the network.
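In MATLAB's Neural Network Toolbox this three-way split can be configured directly on the network object, as in the hedged sketch below; the ratios and the hidden-layer size are illustrative assumptions.

```matlab
% Hedged sketch: configuring training / validation / test subsets for a
% pattern-recognition network. X is features-by-samples, T is targets-by-samples.
net = patternnet(10);                         % one hidden layer with 10 neurons (assumed)
net.divideFcn = 'dividerand';                 % random division of the samples
net.divideParam.trainRatio = 0.70;            % 70% used to adjust the weights
net.divideParam.valRatio   = 0.15;            % 15% used to stop training (early stopping)
net.divideParam.testRatio  = 0.15;            % 15% held out to confirm predictive power
[net, tr] = train(net, X, T);                 % tr records which samples went to which subset
```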

IV. ARTIFICIAL NEURAL NETWORKS

An artificial neural network is an information processing system that has certain performance characteristics in common with biological neural networks [5]. A neural network is characterized by its pattern of connections between the neurons, its method of determining the weights on the connections and its activation function. A neural net consists of a large number of simple processing elements called neurons or nodes. Each neuron is connected to other neurons by means of directed communication links, each with an associated weight. The weights represent the information being used by the net to solve a problem. Each neuron has an internal state, called its activation or activity level, which is a function of the inputs it has received; a neuron sends its activation as a signal to several other neurons. Artificial neural networks consist of many nodes, processing units analogous to neurons in the brain. The neural net can be a single layer or multilayer net. In a single layer net there is a single layer of weighted interconnections. A multilayer artificial neural network comprises an input layer, an output layer and hidden (intermediate) layers of neurons. The activity of the neurons in the input layer represents the raw information that is fed into the network. The activity of the neurons in the hidden layer is determined by the activity of the input neurons and the connecting weights between the input and hidden units. Similarly, the behaviour of the output units depends on the activity of the neurons in the hidden layer and the connecting weights between the hidden and output layers [5]. A neural network can be trained to perform a particular function by adjusting the values of the connections (weights) between elements. Commonly, neural networks are adjusted, or trained, so that a particular input leads to a specific target output. The network is adjusted, based on a comparison of the output and the target, until the network output matches the target. Typically many such input/target pairs are needed to train a network.
V. BACK PROPAGATION

Back propagation is the generalization of the Widrow-Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions. Input vectors and the corresponding target vectors are used to train a network until it can approximate a function, associate input vectors with specific output vectors, or classify input vectors in an appropriate way as defined by the user. Networks with biases, a sigmoid layer and a linear output layer are capable of approximating any function with a finite number of discontinuities. Standard back propagation [6] is a gradient descent algorithm, as is the Widrow-Hoff learning rule, in which the network weights are moved along the negative of the gradient of the performance function. The term back propagation refers to the manner in which the gradient is computed for nonlinear multilayer networks. There are a number of variations on the basic algorithm that are based on other standard optimization techniques, such as conjugate gradient and Newton methods. Properly trained back propagation networks tend to give reasonable answers when presented with inputs that they have never seen. Typically, a new input leads to an output similar to the correct output for training input vectors that are similar to the new input being presented. This generalization property makes it possible to train a network on a representative set of input/target pairs and get good results without training the network on all possible input/output pairs [7]. The simplest implementation of back propagation learning updates the network weights and biases in the direction in which the performance function decreases most rapidly, the negative of the gradient [8].
VI. TRAINING THE NETWORK

Once the network weights and biases are initialized, the network is ready for training. The training process requires a set of examples of proper network behaviour: network inputs and target outputs. During training the weights and biases of the network are iteratively adjusted to minimize the network performance function. The default performance function for feed forward networks is mean squared error (mse), the average squared error between the network outputs and the target outputs. All these algorithms use the gradient of the performance function to determine how to adjust the weights to minimize the error [9]. The gradient is determined using back propagation, which involves performing computations backward through the network. There are generally four steps in the training process: 1) assemble the training data; 2) create the network object; 3) train the network; 4) simulate the network response to new inputs.
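The four steps listed above map onto the toolbox calls in the following hedged sketch; the data variables, network size and training function are assumptions for the example.

```matlab
% Hedged sketch of the four training steps with a feed-forward network.
% 1) Assemble the training data: CTD ratios as inputs, stage labels as targets.
X = cdrValues';                 % 1-by-N row of cup-to-disc ratios (assumed variable)
T = stageTargets';              % 4-by-N one-hot targets for the four stages (assumed)

% 2) Create the network object (one hidden layer, gradient-descent backpropagation).
net = feedforwardnet(10, 'traingd');
net.performFcn = 'mse';         % mean squared error, the default performance function

% 3) Train the network.
[net, tr] = train(net, X, T);

% 4) Simulate the network response to new inputs.
Ynew = net(newCdrValues');      % outputs for previously unseen CDR values (assumed)
```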

Multilayer feedforward ANNs have two different phases: a training phase (sometimes also referred to as the learning phase) and an execution phase. In the training phase the ANN is trained to return a specific output when given a specific input; this is done by continuous training on a set of training data. In the execution phase the ANN returns outputs on the basis of inputs.

The execution of a feedforward ANN works as follows: an input is presented to the input layer and propagated through all the layers (using equation 2.1) until it reaches the output layer, where the output is returned. In a feedforward ANN an input can easily be propagated through the network and evaluated to an output. It is more difficult to compute a clear output from a network in which connections are allowed in all directions (as in the brain), since this creates loops. Feedforward networks are usually a better choice for problems that are not time dependent.

Figure 4: A fully connected multilayer network with hidden layer.

Figure 4 shows a multilayer feedforward ANN where all the neurons in each layer are connected to all the neurons in the next layer. This is called
a fully connected network and although ANNs do not need to be fully connected, they often are.

Apply the pattern x^{(l)} to the input layer shown in Figure 4 and propagate the signal forward through the network until the final outputs x_j^{(L)} have been calculated for each j and for the output layer L:

x_j^{(l)} = s\left( \sum_{i=0}^{D^{(l-1)}} w_{ij}^{(l)} \, x_i^{(l-1)} \right)      (4)

where D^{(l-1)} is the number of neurons in layer (l−1), x_i^{(l-1)} is the output of the ith neuron in the (l−1)th layer, w_{ij}^{(l)} is a synaptic weight of the current neuron, w_{0j}^{(l)} (with x_0^{(l-1)} = 1) is the current neuron's bias weight, and x_j^{(l)} is the output of the current neuron.

Starting with the output layer and moving back towards the input layer, the error terms and gradients are calculated as follows:

e_j^{(l)} =
  t_j - x_j^{(L)}                                   for l = L
  \sum_i w_{ji}^{(l+1)} \, \delta_i^{(l+1)}          for l = 1, 2, 3, ..., L−1      (5)

where e_j^{(l)} is the error term for the jth neuron in the lth layer and t_j is the target output.

\delta_j^{(l)} = e_j^{(l)} \, s'\!\big(v_j^{(l)}\big)      for l = 1, 2, ..., L      (6)

where s'(v_j^{(l)}) is the derivative of the activation function evaluated at the neuron's net input. The changes for all the weights are calculated as:

\Delta w_{ij}^{(l)} = \eta \, \delta_j^{(l)} \, x_i^{(l-1)},      l = 1, 2, ..., L      (7)

where \eta is the learning rate. All the weights are then updated as:

w_{ij}^{(l)}(k+1) = w_{ij}^{(l)}(k) + \Delta w_{ij}^{(l)}(k)      (8)

where l = 1, 2, ..., L and j = 0, 1, ..., D^{(l)}; w_{ij}^{(l)}(k) is the current synaptic weight and w_{ij}^{(l)}(k+1) is the updated synaptic weight to be used in the next feed-forward iteration. Figure 4 shows a complete training cycle; the term epoch (period) is used to describe a complete pass through all of the training patterns. The weights in the neural net may be updated after each pattern is presented to the net, or they may be updated just once at the end of the epoch.
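To make Eqs. (4)-(8) concrete, the following MATLAB sketch performs one forward pass and one weight update for a single-hidden-layer sigmoid network. The layer sizes, learning rate and random data are illustrative assumptions, not the network used in this work.

```matlab
% Minimal sketch of one backpropagation update for a single-hidden-layer
% network with sigmoid activations, following Eqs. (4)-(8).
rng(1);
x = rand(5,1);            % input pattern (5 features, assumed)
t = [1; 0];               % target output (2 classes, assumed)
eta = 0.1;                % learning rate

W1 = randn(4,5); b1 = randn(4,1);   % input -> hidden weights and biases
W2 = randn(2,4); b2 = randn(2,1);   % hidden -> output weights and biases
sig  = @(z) 1./(1 + exp(-z));       % sigmoid activation
dsig = @(a) a.*(1 - a);             % its derivative, expressed via the activation value

% Forward pass, Eq. (4)
a1 = sig(W1*x + b1);
a2 = sig(W2*a1 + b2);

% Backward pass: error terms and deltas, Eqs. (5)-(6)
e2 = t - a2;              delta2 = e2 .* dsig(a2);
e1 = W2' * delta2;        delta1 = e1 .* dsig(a1);

% Weight changes and updates, Eqs. (7)-(8)
W2 = W2 + eta * (delta2 * a1');   b2 = b2 + eta * delta2;
W1 = W1 + eta * (delta1 * x');    b1 = b1 + eta * delta1;
```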

Matlab and Toolbox Neural Networks

Matlab (The MathWorks Inc, Natick, MA) is a high-level technical computing language and interactive environment for algorithm
development, data visualization, data analysis, and numeric computation. It can be used in a wide range of applications, including signal
and image processing, and computational biology. In an easily used environment Matlab integrates numerical analysis, the computation of
matrices, the processing of signals and graphics, where the problems and solutions are expressed in a similar way to how they are expressed
mathematically, avoiding the traditional programming. Matlab is an interactive system whose basic element of data is the matrix, which does
not require to be dimensioned. The Matlab language is a high-level matrix/array language with control flow statements, functions, data
structures, input/output, and object-oriented programming features. It allows both programming in the small to rapidly create quick programs
you do not intend to reuse. You can also do programming in the large to create complex application programs intended for reuse. Neural
Network Toolbox provides tools for designing, implementing, visualizing, and simulating neural networks. Neural Network Toolbox supports
feedforward networks, radial basis networks, dynamic networks, self-organizing maps, and other proven network paradigms.
There are other work environments with neuronal networks. Some of them are Neural Works Professional II (Neural-Ware Inc., Pittsburgh,
PA), SAS Enterprise Miner Software (SAS Institute Inc., Cary, NC), and Neural Connection (SPSS Inc., Chicago, IL).

Methodology

Fig.5: Block diagram

The block diagram of the proposed method is shown in Figure 5. A database of fundus images in JPEG format is created. The first step in the process is pre-processing. Pre-processing denotes operations on images at the lowest level of abstraction; its aim is to improve the image data by suppressing undesired distortions and enhancing image features relevant for further analysis and processing. Here the image is converted to the required colour space and resized to a particular size based on the requirement. The pre-processed output is passed to the segmentation process to extract the cup and disc shapes from the input image. The purpose of image segmentation is to partition an image into meaningful regions with respect to a particular application. The segmentation is based on measurements taken from the image, which might be grey level, colour, texture, depth or motion. The cup-to-disc (CTD) ratio is calculated from the areas of the segmented optic cup and disc. The CTD ratios are categorized into stages: normal, 1st stage, 2nd stage and 3rd stage. The CTD ratios are applied to the Artificial Neural Network for verifying the performance and the error histogram.
Flow Chart

The flow chart of the artificial neural network (ANN) is shown in Figure 7. The cup-to-disc ratios of the fundus database are fed to the ANN. The ANN architecture is three layered, consisting of an input, a hidden and an output layer. The ANN has two inputs and four outputs, i.e., the database is classified into four classes. After executing the ANN architecture, the performance, error histogram and regression plots are obtained. The ANN then classifies the CTD ratios into four stages, i.e., normal, 1st stage, 2nd stage and 3rd (advanced) stage, depending on the CTD ratio. In the normal stage the CTD ratio varies from 0 to 0.36; in the 1st stage from 0.36 to 0.5; in the 2nd stage from 0.5 to 0.7; and in the 3rd or advanced stage from 0.7 to 0.9. The performance plot of the ANN model demonstrates that the mean square error decreases as the number of epochs increases. The test set error and validation set error have comparable characteristics, and no major over-fitting occurs near the epoch where the best validation performance takes place. The error histogram plot provides further verification of the network performance. It points towards outliers, which are data points where the fit is drastically worse than for the best part of the data. The largest part of the data coincides with the zero-error line, which offers a way to examine the outliers and decide whether the data are imperfect or whether those data points are simply unlike the rest of the data set.
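The staging rule described above amounts to a simple threshold comparison on the CTD ratio, sketched below in MATLAB (the function name is ours; in this work the mapping is learned by the ANN).

```matlab
% Hedged sketch: rule-based mapping from a cup-to-disc (CTD) ratio to a stage,
% using the ranges quoted above (0-0.36 normal, 0.36-0.5 first stage,
% 0.5-0.7 second stage, 0.7-0.9 third/advanced stage).
function stage = ctd_to_stage(ctd)
    if ctd < 0.36
        stage = 'Normal';
    elseif ctd < 0.5
        stage = '1st stage';
    elseif ctd < 0.7
        stage = '2nd stage';
    else
        stage = '3rd (advanced) stage';
    end
end
```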

Fig7: Flow chart of ANN

The correlation coefficient (regression R-value) measures the association between the outputs and the targets of the ANN model. A perfect fit indicates that the data fall along a line of slope close to 1, meaning the network output equals the targets. The R value is an indication of the relationship between the outputs and targets: if R = 1, there is an exact linear relationship between outputs and targets; if R is close to zero, there is no linear relationship between outputs and targets.
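For reference, the R value reported in the regression plots can be reproduced with MATLAB's corrcoef, as in this small sketch with assumed output and target vectors.

```matlab
% Hedged sketch: correlation coefficient between network outputs and targets.
% y and t are vectors of network outputs and the corresponding target values.
C = corrcoef(y, t);        % 2-by-2 correlation matrix
R = C(1, 2);               % regression R value; R = 1 means an exact linear relationship
```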

RESULTS


Fig.11: Artificial neural networks architecture

Artificial Neural Network is the class that encapsulates the neural network nonlinearity estimator. A neural network object lets us use networks created with Neural Network Toolbox™ software in nonlinear ARX models. The neural network nonlinearity estimator defines a nonlinear function y = F(x), where F is a multilayer feed-forward (static) neural network as defined in the Neural Network Toolbox software, y is a scalar and x is an m-dimensional row vector. The neural network was executed for 1000 epochs; the estimated time, performance, mu and gradient values are displayed in Figure 6.

Performance plot
Fig.12: Performance plot of ANN

The neural network performance plot is shown in Figure 12. Training stops when the validation error starts to increase, and the best performance is taken from the epoch with the lowest validation error. Here the best validation performance occurred at epoch 2.

Training plot

Fig.13: Training state on ANN

The neural network training plot is displayed in Figure 13. It plots the training state from the training record TR returned by train.

Error histogram plot

Fig.14: Error histogram plot of ANN

The error histogram plot is displayed in Figure 14. It plots a histogram of the error values e between targets and outputs. The blue, green and red bars signify training data, validation data and testing data respectively.

Training, validation, Test and over All plot of ANN


Fig.15: Training, validation, Test and over All plot of ANN

The four plots represent the training, validation, testing data, and overall. The dashed line in each plot represents the perfect result –
outputs = targets. The solid line represents the best fit linear regression line between outputs and targets.

ANN classification output

Fig.16: ANN classification output

The ANN classifies the CTD ratios into four categories, as shown in Figure 16: the disease is classified as normal, 1st stage, 2nd stage or 3rd (advanced) stage.

Conclusion

The proposed method detects glaucoma by extracting the optic disc from retinal fundus images. In this study, we have presented a method to calculate the CDR automatically from fundus images. Image pre-processing is the first step to extract the optic disc and cup. Morphological operations are efficient for detecting the cup to disc ratio in glaucoma patients and normal subjects, after which the level of the disease is checked. Finally, a watershed segmentation process separates the optic cup and disc in the fundus image. If the cup to disc ratio is more than 0.3 the patient is considered glaucomatous, and if it is less the patient is considered normal. These operations have been tested on different fundus images. The CTD ratios served as inputs to the neural network for classification into the four categories, which gives accurate classification. The Artificial Neural Network's performance, error histogram and regression plots are also plotted.

Glaucoma originates from increased pressure within the eyeball, leading to damage of the optic nerve. Here Matlab is used for training and simulating the artificial neural network to detect the presence of glaucoma and classify the disease as mild, severe or normal. The various parameters are easily extracted using Matlab and compared
with standard values using the neural network.
The artificial neural network makes glaucoma detection accurate and adaptive. The advantage of the system is its simplicity of operation; manual segmentation is extremely difficult and its reproducibility is low. This software is intended to help doctors in their decision making process. To make it more user friendly, a graphical user interface is also provided, which makes handling of this tool very simple. In future applications, it can be used to detect more eye diseases by taking more parameters into account.

Chapter 9

Results and Discussions


The energy based approach assumes that different texture patterns have different energy distributions in the
space-frequency domain. This approach is very appealing due to its low computational complexity, involving
mainly the calculation of first and second order moments of the transform coefficients. This section provides a
snapshot of the results obtained from the feature extraction described in the methodology section.

Here a one-level wavelet decomposition is performed; the wavelet filters used are the Daubechies (db3),
Symlets (sym3) and biorthogonal (bior3.3) filters. The extracted features are used for
classification. Fig. 6 shows the energy feature extraction using the 2D-DWT.
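The energy features described here can be computed with a one-level 2-D DWT, as in the hedged sketch below; the file name, the Z-score step applied to the whole image and the choice of sub-bands retained are assumptions made for the example (note that MATLAB names the biorthogonal filter 'bior3.3').

```matlab
% Hedged sketch: one-level 2-D DWT energy features for a grayscale fundus image.
I = im2double(rgb2gray(imread('fundus.jpg')));   % placeholder file name
I = (I - mean(I(:))) / std(I(:));                % Z-score normalization of the image

wavelets = {'db3', 'sym3', 'bior3.3'};           % filters used in this chapter
features = [];
for w = 1:numel(wavelets)
    [cA, cH, cV, cD] = dwt2(I, wavelets{w});     % approximation + detail sub-bands
    % Energy of the detail sub-bands (average squared coefficient).
    eH = mean(cH(:).^2);  eV = mean(cV(:).^2);  eD = mean(cD(:).^2);
    features = [features, eH, eV, eD];           %#ok<AGROW> energy signature vector
end
```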

The implementation is carried out using Matlab. A Graphical User Interface is created and shown in Figure 9.
The image to be analyzed is selected through the GUI. The selected image is pre-processed
using Z-score normalization. The pre-processed image is applied to the Daubechies,
Symlets and Biorthogonal wavelet filters (Anju Soman). The feature extraction option is then selected to obtain the list of
energy levels extracted from the image. In the GUI there are two tables: one corresponds to the list of energy
levels of the input image, and the second corresponds to the list of selected energy levels of all the images in
the database.

Table 1 shows the enlarged version of the wavelet sub-bands in the GUI. The rows of the table are the
individual energy levels of each retina image in the database. The columns of the table show the energy
levels extracted from the database images with respect to each wavelet filter. The first two columns
correspond to the Daubechies filter, the third and fourth columns to the Symlets filter (sym12), the
fifth, sixth and seventh columns to the Biorthogonal filter (bio3.7), the eighth, ninth and tenth columns
to the Biorthogonal filter (bio3.9), and the eleventh, twelfth and thirteenth columns to the Biorthogonal
filter (bio4.4) (Anju Soman). Using these energy levels, the Naive Bayes and MLP-BP ANN algorithms classify the
images as normal or abnormal. Figure 10 shows a normal (non-glaucomatous) case and displays the tables
corresponding to the Naive Bayes and MLP-BP ANN algorithms. Naive Bayes classifies the images in the
database with an accuracy of 89.6%, while the MLP-BP ANN algorithm classifies them with an accuracy of
97.6%. The GUI also shows two popup windows, which are the reports generated by the Naive Bayes and
MLP-BP ANN algorithms for the selected input image (Annu). Figure 11 shows an abnormal (glaucomatous)
case and displays the tables corresponding to the Naive Bayes and MLP-BP ANN algorithms.
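A compact MATLAB sketch of the two classifiers applied to such energy features is given below; fitcnb stands in for the Naive Bayes classifier and patternnet for the MLP-BP network, and the feature matrices and labels are assumed variables.

```matlab
% Hedged sketch: classifying wavelet energy features as normal (0) / glaucomatous (1).
% Xtrain, Xtest: samples-by-features energy matrices; ytrain: 0/1 class labels.

% Naive Bayes classifier.
nb     = fitcnb(Xtrain, ytrain);
yhatNB = predict(nb, Xtest);

% MLP trained with backpropagation (patternnet expects features-by-samples).
Ttrain  = full(ind2vec(ytrain' + 1));       % 2-by-N one-hot targets
net     = patternnet(10);                   % assumed hidden-layer size
net     = train(net, Xtrain', Ttrain);
yhatMLP = vec2ind(net(Xtest')) - 1;         % back to 0/1 labels
```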

E. Classification Results

The extracted features are used for training the classifiers, i.e., SMO, random forest, Naive Bayes, SVM and ANN. Fig. 5 and Fig. 6 show the classification results, in which the retinal images are classified as "glaucoma detected" or "glaucoma not detected".

Fig. 5 shows the GUI displaying the tables corresponding to the Naive Bayes and MLP-BP ANN algorithms. Naive Bayes classifies the images in the database with an accuracy of 89.6%, while the MLP-BP ANN algorithm classifies them with an accuracy of 97.6%. The GUI also shows two popup windows, which are the reports generated by the Naive Bayes and MLP-BP ANN algorithms for the selected input image.

Table 1 and Table 2 show the cup to disc ratio for the normal image dataset and the abnormal image dataset.
Here the present technique is compared with the previous technique. In the previous technique, thresholding
groups regions of uniform pixel values, which gives the measurement of the segmented cup area [6]. To find
the disc area of the input image, different threshold values are applied and related to the cup areas. After
finding the disc area, it is kept constant for all the different eye images, and the cup area is varied by applying
threshold values to different regions [21]. In the present technique the cup area and disc area are measured by
placing the structuring element on the image and segmenting the required region, which gives a more accurate
result compared to the previous one [21]. The simulation results give better accuracy compared to the previous
work. Figure 8 gives a snapshot of the graphical user interface (GUI) used to identify glaucoma from the
fundus pictures.

A normal image is chosen from the database connected to the GUI window and the outcomes are shown in
Figure 9: the cup area, disc area and cup to disc ratio are 269.25, 836.876 and 0.32212 respectively. A
glaucomatous image is then chosen from the database connected to the GUI window; the GUI representation is
shown in Figure 10 [19], and the cup area, disc area and cup to disc ratio are 569.25, 1134.375 and 0.5081
respectively.

CDR (cup to disc ratio): the vertical cup-to-disc ratio (CDR) is one of the most important risk factors in the
diagnosis of glaucoma [9]. It is defined as the ratio of the vertical cup diameter to the vertical disc diameter. The
optic disc is the location where the optic nerve connects to the retina. In a typical 2-D fundus image, the optic disc
is an elliptic region which is brighter than its surroundings. The disc has a deep excavation in the centre called the
optic cup. It is a cup-like area devoid of neural retinal tissue and normally white in colour. The OC of a glaucomatous
eye tends to grow over time due to persistently increased intraocular pressure. As the OC grows, the neuroretinal
rim located between the edge of the OD and the OC, which contains the optic nerve fibres, becomes smaller in area. If
the neuroretinal rim is too thin, vision deteriorates. Thus, quantitative analysis of the optic disc cupping can be
used to evaluate the progression of glaucoma [10]. As more and more optic nerve fibres die, the OC becomes
larger with respect to the OD, which corresponds to an increased CDR value. For a normal subject, the CDR value is
typically around 0.2 to 0.3. Subjects with a CDR value greater than 0.6 or 0.7 are suspected of having
glaucoma, and further testing is often needed to make the diagnosis [11].

Neuroretinal Rim Thinning

In a retinal image, neuroretinal rim thinning is a definite indicator of glaucoma. The rim begins to thin with the
onset of the disease as the cup boundary starts approaching the disc. As a result, the cup-to-disc diameter ratio
increases. Thinning can be a global or local phenomenon around the cup boundary.
Fig. 1.3 shows an example of global rim thinning in the second row; it can be noted that nearly the entire
boundary along the cup is extended to the disc in this case. Fig. 1.4 shows an example where only a small
local region in the rim has undergone thinning. Here, thinning is a subtle variation in the ill-defined cup boundary. While
easily visible to a trained human eye, automatic detection of such minor variations across retinal images, already
affected by other non-disease variations, is a significant challenge.
Peripapillary Atrophy (PPA)

PPA is another indicator for glaucoma (see Fig. 1.4, in yellow). While the cup boundary is not affected by it, a change
in intensity adjoining the disc boundary can be observed clearly. PPA occurs due to atrophy of the retinal cells around the
optic disc. Some atrophy appears in both normal and glaucomatous eyes, but it is more commonly observed in
glaucomatous cases. Suspect cases can have a neuroretinal rim similar to the confirmed cases in
Fig. 1.5(b), because intensity is not considered a primary indicator for the cup boundary within the optic disc;
the vessel bends serve as cues for the depth change. Similarly, normal cases and confirmed cases may have
regions which look alike (refer to Fig. 1.5) and which may be misinterpreted as atrophy or nerve fibre loss defects.

The block diagram is explained as follows; each block has its own importance in solving the problem.

A Retinal Fundus Image: Retinal fundus images are images of the interior parts of the eye, including the retina, optic disc and fundus. Fundus images are used by ophthalmologists for the detection of any eye disease [3]. In our work we use fundus images to detect glaucoma and also the stage of the disease.

B Superpixel Generation: Superpixels are larger than individual pixels; the pixels grouped into a superpixel have similar colour and contrast, so the required information is easily obtained from the image. Superpixels have wide application in defence, traffic and disease detection. In the proposed work, superpixels are generated using the SLIC algorithm.

C RGB Separation: The RGB colour model is formed by the combination of the three primary colours red, green and blue; when these colours are mixed they form a broad array of colours [4][6]. The main purpose of this model is to represent the image on digital media such as computers and television. In the proposed work the sharpness of the image is increased so that it can be analysed more easily.

D Histogram Representation: A histogram is a graphical representation of the tonal distribution of a digital image; it plots the number of pixels at each tonal value, so the viewer can grasp the tonal distribution at a glance. Histograms are widely used in digital cameras and in disease detection. The horizontal axis of the histogram represents the tonal values and the vertical axis represents the number of pixels.

E Histogram Equalization: This method is used to increase the global contrast of an image, especially when the image has close contrast values. It is widely used in X-ray imaging for a better view of bone structure, and also in thermal imaging. Histogram equalization gives a realistic view to a dull image and helps in extracting the required information.

F HSV Conversion: RGB separation alone does not give the required brightness or contrast, so HSV conversion is applied; it gives the image different tonal depth, making each superpixel clear enough to mark the cup and disc on the fundus image.

G Cup Segmentation: For the detection of glaucoma, cup segmentation is very important. The cup is the interior part of the eye through which light travels into the eye; to detect glaucoma, the cup diameter is calculated in order to find the CDR.

H Disc Segmentation: The disc is the bright region of the fundus; as glaucoma spreads, the neuroretinal rim of the disc becomes thinner, and with the help of the disc diameter the CDR can be calculated. Disc and cup segmentation are the important steps in detecting glaucoma [1][2][3].

I Neural Network: Neural networks are interconnected neurons which transfer information. After calculating the CDR, the information is fed to a feed-forward neural network that compares the CDR with the threshold and opens a dialogue box showing the stage of glaucoma.



Final Conclusions

VII. CONCLUSION

In this work, a wavelet-based texture feature set has been used. The texture feature set is made up of the
energies of the wavelet sub-images. The wavelet transform is an efficient tool for feature extraction and has been
used successfully in biomedical image processing. A classification technique is developed to detect whether
glaucoma is present or not, and segmentation is performed to highlight the glaucoma-affected portion. If more
powerful classifiers are used, the classification accuracy may be improved further.

For the glaucomatous image, the cup to disc ratio is used to segment the glaucoma portion; the energy levels of
the segmented image, extracted using the wavelet sub-bands Daubechies (db4), Symlets (sym4) and biorthogonal
filters (bio3.7, bio4.2 and bio4.7), give a clear indication of the difference in energy levels compared to those of a
normal retina image.

Here Matlab is used for training and simulating the artificial neural network to detect the presence of glaucoma and
classify the disease as mild, severe or normal. The various parameters are easily extracted using Matlab and
compared with standard values using the neural network. The artificial neural network makes glaucoma
detection accurate and adaptive. The advantage of the system is its simplicity of operation. Manual segmentation is
extremely difficult and its reproducibility is low. This software is intended to help doctors in their
decision making process. To make it more user friendly, a graphical user interface is also provided, which makes
handling of this tool very simple. In future applications, it can be used to detect more eye diseases by taking more
parameters into account.

The segmentation of the optic disc and optic cup, with smoothing of their boundaries by morphological operations, will be
used. The morphological operations are efficient for detecting the cup to disc ratio in glaucoma patients and normal
subjects, after which the level of the disease is checked. If the cup to disc ratio is more than 0.3 the patient is considered
glaucomatous, and if it is less than 0.3 the patient is considered normal. These operations have been tested on
different images: various threshold values are applied to the images, the cup to disc ratio is identified and a
histogram is plotted.

The classifiers Naive Bayes and MLP-BP ANN are trained on the extracted features. Naive Bayes classifies the images in the database
with an accuracy of 89.6%, while the MLP-BP ANN algorithm classifies the images in the database with an accuracy of
97.6%. The proposed system exhibits better accuracy compared to existing glaucoma classification systems. The
system is cost effective and can be readily used in hospitals; it reduces the doctor's burden and overcomes
human error. In future the system can be extended with artificial intelligence to classify other abnormalities of
the eye as well. It can also be designed to generate a report by itself with the full information of the patient so that it can
be used for telemedicine purposes. In the proposed method, glaucoma is detected by extracting the optic disc from
retinal fundus images. In this study, we have presented a method to calculate the CDR automatically from fundus
images.

Image pre-processing is the first step in extracting the optic disc and cup. Finally, a watershed segmentation process separates the optic cup and disc from the fundus image. These operations have been tested on different fundus images. The resulting CDR values served as inputs to a neural network for classification into four categories, which gives accurate classification. For the artificial neural network, the performance, error-histogram and regression plots are produced.
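
The watershed step could look like the following MATLAB sketch (Image Processing Toolbox), which separates the bright disc/cup region from the background with a marker-controlled watershed. The marker depth and the use of the green channel are assumptions for illustration.

% Marker-controlled watershed around the bright optic disc / cup region (sketch).
rgb   = imread('fundus.jpg');                    % hypothetical input image
gray  = im2double(rgb(:,:,2));                   % green channel

gmag  = imgradient(gray);                        % gradient magnitude (watershed ridges)
fg    = imextendedmin(imcomplement(gray), 0.3);  % bright blobs become marker minima
gmag2 = imimposemin(gmag, fg);                   % force minima at the markers
L     = watershed(gmag2);                        % label matrix of catchment basins

discRegion = (L == mode(L(fg)));                 % basin that contains the markers
imshow(labeloverlay(gray, discRegion));          % visual check of the separation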

Scope for future work

The cup to disc ratio (CDR) is an important indicator of the risk of the presence of glaucoma in an individual. In this
study, we have presented a method to calculate the CDR automatically from fundus images.

Image pre-processing is the first step in localizing the optic disc and cup. The optic disc is extracted using an edge detection approach and a variational level-set approach separately. The optic cup is then segmented using a color component analysis method and a threshold level-set method.
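
The edge-based disc localization could be prototyped as below. This sketch uses a Canny edge map and MATLAB's circular Hough transform (imfindcircles) as a simple stand-in for the variational level-set stage; the radius range and sensitivity are assumed values.

% Rough optic-disc localisation from edges and a circular Hough transform (sketch).
rgb  = imread('fundus.jpg');                    % hypothetical input image
gray = im2double(rgb(:,:,2));

edges = edge(gray, 'canny');                    % edge map of the fundus
figure, imshow(edges), title('Canny edge map'); % visual check of the edges

[centers, radii] = imfindcircles(gray, [40 90], ...   % disc radius range in pixels (assumed)
                                 'ObjectPolarity', 'bright', 'Sensitivity', 0.95);

figure, imshow(rgb); hold on;
viscircles(centers(1,:), radii(1));             % strongest circle ~ disc boundary
plot(centers(1,1), centers(1,2), 'g+');         % estimated disc centre
hold off;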

After obtaining the contours, an ellipse fitting step is introduced to smooth the results. Using images obtained from different hospitals, the performance of our approach is evaluated by the proximity of the calculated CDR to the manually graded CDR. The results indicate that our approach provides 94% accuracy in glaucoma analysis. Consequently, this study has good potential for automated screening systems aimed at the early detection of glaucoma.
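
A minimal sketch of the ellipse fitting and the resulting vertical CDR is shown below. Here discMask and cupMask are assumed to be the binary disc and cup regions obtained earlier, and taking the major axis as the vertical diameter is a simplification.

% Fit ellipses to the disc and cup masks and form a vertical CDR (sketch).
discStats = regionprops(discMask, 'Centroid', 'MajorAxisLength', ...
                        'MinorAxisLength', 'Orientation');
cupStats  = regionprops(cupMask,  'Centroid', 'MajorAxisLength', ...
                        'MinorAxisLength', 'Orientation');

discDiameter = discStats(1).MajorAxisLength;     % simplification: major axis ~ vertical diameter
cupDiameter  = cupStats(1).MajorAxisLength;

verticalCDR = cupDiameter / discDiameter;
fprintf('Vertical CDR from ellipse fitting: %.2f\n', verticalCDR);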

Further development of this study will enhance the performance of the cup segmentation method by including vessel detection and vessel in-painting. In addition, machine-learning techniques will be applied to find suitable parameters for several stages, including the edge detection approach and the threshold level-set approach.

The detection of glaucoma is particularly significant since it allows timely treatment to prevent major visual field loss. Glaucoma can be diagnosed through measurement of the CDR (cup-to-disc ratio). When the CDR is evaluated with the superpixel classification technique, the cup boundary at the nasal side is often difficult to determine because of the presence of blood vessels. With multi-threshold segmentation the CDR can also be detected, but the measurement process is complex because the color texture between the optic cup and optic disc is not clearly defined, and the optic disc boundary is obscured by blood vessels. Owing to the complexity of fundus images and their large number of elements, a perfect segmentation is difficult. In this work, mathematical morphology operations are used to overcome this problem and to obtain a better estimate of the cup-to-disc ratio.
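
One way to realise this, sketched below in MATLAB, is a grayscale morphological closing that suppresses the dark vessels before the disc and cup are thresholded. The structuring-element radius and the threshold levels are assumed values.

% Morphological closing to suppress vessels before CDR estimation (sketch).
rgb    = imread('fundus.jpg');                         % hypothetical input image
green  = im2double(rgb(:,:,2));

se     = strel('disk', 8);                             % radius larger than the vessel width
closed = imclose(green, se);                           % brightens over the dark vessels

discMask = bwareafilt(imbinarize(closed, 0.55), 1);    % keep the largest bright region
cupMask  = bwareafilt(imbinarize(closed, 0.80), 1);

cdr = nnz(cupMask) / max(nnz(discMask), 1);
fprintf('CDR after vessel suppression: %.2f\n', cdr);
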
Future work

In future the existing system can be combined with an SVM to classify the retinal images as normal or abnormal, which in turn would help in the detection of glaucoma. The system can also be extended with artificial intelligence to classify other abnormalities of the eye, and it can be designed to generate a report by itself with full patient information so that it can be used for telemedicine.
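
Such an SVM stage could be prototyped as in the sketch below (MATLAB, Statistics and Machine Learning Toolbox). Here features is assumed to hold one row of wavelet-energy features per image and labels the corresponding 'normal'/'abnormal' strings; the RBF kernel is an illustrative choice.

% Future SVM stage: classify images as normal or abnormal (sketch).
svmModel = fitcsvm(features, labels, ...
                   'KernelFunction', 'rbf', ...        % kernel choice is illustrative
                   'Standardize', true);

cvModel = crossval(svmModel, 'KFold', 5);              % 5-fold cross-validation
fprintf('Cross-validated accuracy: %.1f%%\n', 100 * (1 - kfoldLoss(cvModel)));

predictedLabel = predict(svmModel, features(1,:));     % label for the first image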