
Wavelet Based Brain Tumor Detection Using Haar Algorithm

CHAPTER 1: INTRODUCTION

Digital image processing is a vast field with many applications, including criminal face detection, fingerprint authentication systems, medical imaging, and object recognition. Brain tumor detection plays an important role in the medical field: it is the detection of the tumor-affected part of the brain along with its shape, size and boundary, which makes it useful for diagnosis and treatment planning.

Recent techniques reported in the research literature for the detection of brain tumors can be broadly classified as:

1. Histogram-based methods.

2. Morphological operations applied to MRI images of the brain.

3. Edge-based segmentation and color-based segmentation.

4. Cohesion self-merging based partitional K-means algorithm.

Brain tumor detection can be performed on gray-scale as well as color images; research in this field is still ongoing, but truly remarkable results have not yet been achieved. Accurate measurements in brain diagnosis are quite difficult because of the diverse shapes, sizes and appearances of tumors. Tumors can grow abruptly and cause defects in neighboring tissues, giving an overall abnormal structure to otherwise healthy tissue as well. In this work, we develop a technique for 3D segmentation of a brain tumor by using segmentation in conjunction with morphological operations, as sketched below.
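To illustrate what "segmentation in conjunction with morphological operations" means in practice, a minimal sketch is given below. It is only an illustration under assumed choices (Otsu thresholding, a disk-shaped structuring element of radius 3, and the scikit-image library), not the method developed later in this report.

```python
import numpy as np
from skimage import io, filters, morphology

def rough_tumor_mask(slice_path):
    """Very rough tumor-region mask: global threshold plus morphological cleanup.

    Assumes a single gray-scale MR slice on disk; the Otsu threshold and the
    disk radius of 3 pixels are illustrative choices only.
    """
    img = io.imread(slice_path, as_gray=True)

    # Global threshold keeps the brighter (possibly tumorous) tissue.
    t = filters.threshold_otsu(img)
    mask = img > t

    # Opening removes small spurious blobs; closing fills small holes.
    selem = morphology.disk(3)
    mask = morphology.binary_opening(mask, selem)
    mask = morphology.binary_closing(mask, selem)
    return mask
```

In practice the threshold and the structuring-element size would be tuned to the MR acquisition, and a 3D variant would operate slice by slice or directly on the volume.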

1.1 Types of Tumor

Tumor: The word tumor is a synonym for the word neoplasm, which denotes an abnormal growth of cells. A tumor is not the same thing as cancer.

There are three common types of tumor:

1) Benign

2) Pre-Malignant

3) Malignant

Benign Tumor

A benign tumor is one that does not expand in an abrupt way; it does not invade its neighboring healthy tissues and does not spread to non-adjacent tissues. Moles are a common example of benign tumors.

Pre-Malignant Tumor

A premalignant tumor is a precancerous stage, considered a disease in its own right, which may lead to cancer if it is not properly treated.

Malignant Tumor

Malignancy (mal- = "bad" and -ignis = "fire") refers to the type of tumor that grows worse with the passage of time and can ultimately result in the death of the patient. Malignant is a medical term that describes a severely progressing disease, and the term malignant tumor is typically used to describe cancer.

1.2 Magnetic Resonance Imaging (MRI)

MRI is widely used in biomedicine to detect and visualize fine details of the internal structure of the body. It detects differences between tissues far better than computed tomography, which makes it particularly well suited to brain tumor detection and cancer imaging.

CHAPTER 2: LITERATURE SURVEY

Glotsos et al (2003) proposed the use of support vector machine (SVM) and decision tree (DT) classification as a possible methodology for characterizing the degree of malignancy of brain tumours astrocytomas (ASTs). A two-level hierarchical DT model was constructed for the discrimination of 87 ASTs in accordance with the WHO grading system. The first level concerned the detection of low- versus high-grade tumours and the second level the detection of less aggressive as opposed to highly aggressive tumours. The decision rule at each level was based on an SVM classification methodology comprising three steps: i) from each biopsy, images were digitized and segmented to isolate nuclei from surrounding tissue; ii) descriptive quantitative variables related to chromatin distribution and DNA content were generated to encode the degree of tumour malignancy; iii) an exhaustive search was performed to determine the best feature combination that led to the smallest classification error. SVM classifier training was based on the leave-one-out method. Finally, the SVMs were comparatively evaluated against a Bayesian classifier and a probabilistic neural network.

Othman and Basri (2011) used a Probabilistic Neural Network together with image and data processing techniques to implement automated brain tumor classification. The conventional method for magnetic resonance brain image classification and tumor detection is human inspection. Operator-assisted classification methods are impractical for large amounts of data and are also non-reproducible. Magnetic resonance images contain noise caused by operator performance, which can lead to serious inaccuracies in classification. The use of artificial intelligence techniques such as neural networks and fuzzy logic shows great potential in this field. In this work the Probabilistic Neural Network was applied for that purpose. Decision making was performed in two stages: feature extraction using principal component analysis, followed by classification with the Probabilistic Neural Network (PNN). The performance of the PNN classifier was evaluated in terms of training performance and classification accuracy. The Probabilistic Neural Network gives fast and accurate classification and is a promising tool for tumor classification.

Jing Huo et al (2009) proposed novel methods using diffusion-weighted (DW) MR images as a biomarker to detect early GBM brain tumor response to treatment. The Apparent Diffusion Coefficient (ADC) map, calculated from DW-MR images, can provide unique information on tumour response at the cellular level.

In this study, they investigated whether changes in ADC histograms between two scans, taken 5-7 weeks apart before and after treatment, could predict treatment effectiveness before lesion size changes are observed on later scans. The contribution of the work is to exploit quantitative pattern classification techniques for the prediction. For both pre- and post-treatment scans, the histogram was computed from the ADC values within the tumour. Supervised learning was then applied to features extracted from the histogram for classification. The approach was evaluated on pooled data of 86 patients with GBM under chemotherapy, grouped into those who responded and the 46 who did not respond, based on tumour size reduction. Fisher's linear discriminant analysis, AdaBoost and random forest classifiers were compared using leave-one-out cross validation (LOOCV), with the best accuracy being 67.44%. A small sketch of this histogram-feature idea follows.
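As a rough illustration of the histogram-plus-classifier idea described above, the sketch below computes a normalized ADC histogram restricted to a tumour mask and estimates leave-one-out accuracy with a random forest. The bin count, ADC value range, array layout and the use of scikit-learn are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

def adc_histogram_features(adc_map, tumor_mask, bins=32, value_range=(0.0, 3.0e-3)):
    """Normalized histogram of ADC values restricted to the tumour mask.

    `adc_map` and `tumor_mask` are arrays of the same shape; the bin count
    and ADC value range are illustrative choices.
    """
    values = adc_map[tumor_mask > 0]
    hist, _ = np.histogram(values, bins=bins, range=value_range, density=True)
    return hist

def loocv_accuracy(X, y):
    """X: one histogram feature vector per patient (hypothetical array),
    y: 1 = responder, 0 = non-responder."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
    return scores.mean()
```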

Singh et al (2009) proposed a method to characterize a brain tumor. A tumor characterization technique was developed using the marker-controlled watershed segmentation method and region property functions from an image processing toolbox. The parameters extracted are area, major and minor axis length, eccentricity, orientation, equivalent diameter, solidity and perimeter. The method is versatile, fast and simple to use, and can be applied to all types of 2D MR images of tumors irrespective of their location in the human body and their size. The technique was simulated in Matlab and the results were compared with experimental data obtained from a diagnostic centre. A rough sketch of such a region-property measurement is shown below.
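A comparable set of region properties can be measured with scikit-image's regionprops, as sketched below; this is an illustrative stand-in for the Matlab image processing toolbox functions used by the authors, operating on a binary tumor mask.

```python
from skimage import measure

def tumor_shape_parameters(binary_mask):
    """Shape descriptors of the largest connected region in a binary tumor mask,
    similar in spirit to Matlab's regionprops."""
    labels = measure.label(binary_mask)
    regions = measure.regionprops(labels)
    if not regions:
        return None
    tumor = max(regions, key=lambda r: r.area)  # keep the largest blob
    return {
        "area": tumor.area,
        "major_axis_length": tumor.major_axis_length,
        "minor_axis_length": tumor.minor_axis_length,
        "eccentricity": tumor.eccentricity,
        "orientation": tumor.orientation,
        "equivalent_diameter": tumor.equivalent_diameter,
        "solidity": tumor.solidity,
        "perimeter": tumor.perimeter,
    }
```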

Karnan and Logheshwari (2010) proposed a hybrid of Ant Colony Optimization with fuzzy segmentation. The Ant Colony Optimization (ACO) metaheuristic is a recent population-based approach inspired by the observation of real ant colonies and based upon their collective foraging behavior. In the first step, the MRI brain image is segmented with the ACO hybrid-with-Fuzzy method to extract the suspicious region. The second step deals with the similarity between the proposed segmentation algorithm and the radiologist's report; the tumor position and pixel similarity of the ACO hybrid-with-Fuzzy technique are measured against the radiologist's report.

Wang and Ma (2011) proposed a new general method for segmenting brain tumors in 3D magnetic
resonance images. This method is applicable to different types of tumors. First, the brain is segmented using
a new approach, robust to the presence of tumors. Then tumor detection is performed, based on improved
fuzzy classification. Its result constitutes the initialization of a segmentation method based on a deformable
model, leading to a precise segmentation of the tumors. Imprecision and variability are taken into account at
all levels, using appropriate fuzzy models. The results obtained on different types of tumors have been evaluated by comparison with manual segmentations.

Kharrat et al (2009) introduced an efficient method for detecting brain tumors in cerebral MRI images. The methodology consists of three steps: enhancement, segmentation and classification. To improve the quality of the images and limit the risk of distinct regions being fused in the segmentation phase, an enhancement process is applied first; mathematical morphology is used to increase the contrast in the MRI images. A wavelet transform is then applied in the segmentation process to decompose the MRI images. Finally, the k-means algorithm is used to extract the suspicious regions or tumors. Experimental results on brain images show the feasibility and performance of the proposed approach. A small Haar-wavelet decomposition sketch follows.
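Because the present report is built around the Haar wavelet, a single-level 2D Haar decomposition is sketched below using the PyWavelets package; the package choice is an assumption for illustration, and this is not Kharrat et al's exact procedure.

```python
import pywt  # PyWavelets

def haar_decompose(image):
    """One-level 2D Haar DWT: returns the approximation sub-band and the
    horizontal, vertical and diagonal detail sub-bands."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
    return cA, cH, cV, cD

def haar_reconstruct(cA, cH, cV, cD):
    """Inverse 2D Haar DWT; exact up to floating-point error."""
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')
```

The approximation sub-band cA is a smoothed, downsampled version of the slice, while cH, cV and cD carry horizontal, vertical and diagonal detail that can be thresholded or used as features.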

Akram and Usman (2011) proposed an automatic brain tumor diagnostic system based on MR images. The system consists of three stages to detect and segment a brain tumor. In the first stage, an MR image of the brain is acquired and preprocessing is done to remove noise and to sharpen the image. In the second stage, global threshold segmentation is performed on the sharpened image to segment the brain tumor. In the third stage, the segmented image is post-processed by morphological operations and tumor masking in order to remove falsely segmented pixels. Results and experiments show that this technique accurately identifies and segments the brain tumor in MR images.

Vrji and Jayakumari (2011) investigated the potential use of MRI data for improving brain tumor shape approximation and 2D and 3D visualization for surgical planning and tumor assessment. In medical image processing, segmentation of anatomical regions of the brain is a fundamental problem. Here, a brain tumor segmentation method has been developed and validated using MRI data. In the preprocessing and enhancement stage, the medical image is converted into a standard format. Segmentation subdivides an image into its constituent regions or objects. This method can segment a tumor provided that the desired parameters are set properly.

Ming-Ni Wu et al (2007) proposed a color-based segmentation method that uses the K-means clustering technique to track tumor objects in magnetic resonance (MR) brain images. The key concept in this color-based segmentation algorithm with K-means is to convert a given gray-level MR image into a color-space image and then separate the position of tumor objects from the other items of the MR image by using K-means clustering and histogram clustering. Experiments demonstrate that the method can successfully segment MR brain images and help pathologists determine the exact lesion size and region. A minimal K-means intensity-clustering sketch is given below.
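A minimal version of the K-means idea, clustering pixel intensities of a gray-level MR slice, might look like the sketch below; the cluster count and the use of scikit-learn are illustrative assumptions, and the gray-to-color conversion and histogram clustering used by Wu et al are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(gray_image, n_clusters=4, seed=0):
    """Cluster pixel intensities and return a label image plus cluster centres.

    The cluster whose centre has the highest intensity is often a candidate
    for enhancing tumor tissue, but that interpretation depends on the
    imaging protocol."""
    h, w = gray_image.shape
    features = gray_image.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(features)
    return labels.reshape(h, w), km.cluster_centers_.ravel()
```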

Iftekharuddin et al (2006) proposed the classification of tumor regions from non-tumor regions. Two novel fractal-based texture features are exploited for pediatric brain tumor segmentation and classification in MRI. One of the two texture features uses the Piecewise-Triangular-Prism-Surface-Area (PTPSA) algorithm for fractal feature extraction. The other exploits a novel Fractional Brownian Motion (FBM) framework that combines both fractal and wavelet analyses for fractal-wavelet feature extraction. Three MRI modalities, T1 (gadolinium-enhanced), T2 and fluid-attenuated inversion-recovery (FLAIR), are exploited in this work. The self-organizing map (SOM) algorithm is used for tumor segmentation. For a total of 204 T1 contrast-enhanced, T2 and FLAIR MR images obtained from nine different pediatric patients, the successful tumor segmentation rate is 100%. Two classification methods, a multi-layer feedforward neural network and a support vector machine (SVM), are used to classify the tumor regions from non-tumor regions. For the neural network classifier, at a threshold value of 0.7, the True Positive Fraction (TPF) values range from 75% to 100% for different patients, with an average value of 90%. For the SVM classifier, the average accuracy rate is 95% and 92% when using 1/3 and 1/2 of the data for testing, respectively.

Leung et al (2003) proposed a new approach to detect the boundary of brain tumors based on the Generalized Fuzzy Operator (GFO). Boundary detection in MR images with brain tumors is an important image processing technique applied in radiology for 3D reconstruction. The non-homogeneities in the density of brain tissue with tumor can lead to inaccurate locations in any boundary detection algorithm. Some studies using the contour deformable model with a region-based technique show that the performance is insufficient to obtain fine edges in the tumor, and a considerable error in accuracy exists. Moreover, even in some normal tissue regions, edges created by this method have been encompassed. One typical example is used for evaluating this method against the contour deformable model.

Koley and Majumder (2011) presented the segmentation of brain MRI for the purpose of determining the exact location of a brain tumor using a CSM-based partitional K-means clustering algorithm. CSM has attracted much attention because, as a self-merging algorithm, it gives more efficient results than other merging processes; it is also less affected by noise, and the probability of obtaining the exact location of the tumor is higher. The approach is simpler, computationally less complex, and requires very little computation time.

Badran et al (2010) proposed a computer-based method for defining tumor region in the brain using MRI
images. The brain is first classified as healthy or as containing a tumor, which is then followed by further classification of the tumor as benign or malignant. The algorithm incorporates steps for
preprocessing, image segmentation, feature extraction and image classification using neural network
techniques. Finally the tumor area is specified by region of interest technique as confirmation step. A user
friendly Matlab GUI program has been constructed to test the proposed algorithm.

Shasidhar et al (2011) presented the application of modified FCM algorithm for MR brain tumor detection.
A comprehensive feature vector space is used for the segmentation technique. Comparative analysis in terms
of segmentation efficiency and convergence rate is performed between the conventional FCM and the
modified FCM. The effectiveness of the FCM algorithm in terms of computational rate is improved by
modifying the cluster center and membership value updating criterion.

Phooi and Ozawa (2005) presented an analytical method to detect lesions or tumors in digitized medical images for 3D visualization. A tumor detection method was developed using three parameters: edge (E), gray (G), and contrast (H) values. The method studied the EGH parameters in supervised blocks of the input images. These feature blocks were compared with standardized parameters (derived from a normal template block) to detect abnormal occurrences, e.g. image blocks which contain lesions or tumor cells. The abnormal blocks were transformed into three-dimensional space for visualization and studies of robustness. Experiments were performed on different brain diseases based on single and multiple slices of the MRI dataset. The experimental results illustrate that this conceptually simple technique is able to effectively detect tumor blocks while being computationally efficient.

Dubey et al (2011) observed that accurate segmentation is critical, especially when the tumor morphological changes remain subtle, irregular and difficult to assess by clinical examination. Automated tumor segmentation in MRI brain images poses many challenges with regard to the characteristics of an image. A comparison of three different semi-automated methods, viz., the Modified Gradient Magnitude Region Growing Technique (MGRRGT), the level set method and a marker-controlled watershed method, is undertaken here to evaluate their relative performance in tumor segmentation. A study on 9 samples using MGRRGT reveals that all the errors are within 6 to 23% in comparison to the other two methods.

Pathology identification is performed by image classification, and the treatment is then planned based on the nature of the abnormality. After treatment, it is highly essential to estimate the patient's response to the treatment. In the case of brain tumor abnormalities, the size of the tumor may decrease, which indicates a positive effect, or it may increase, which indicates a negative effect. In either case, it is important to perform a volumetric analysis on MR brain tumor images. Image segmentation serves this objective by extracting the abnormal portion from the image, which is useful for analyzing the size and shape of the abnormal region. This approach is also called "pixel-based classification", since individual pixels are clustered, unlike classification techniques, which categorize the whole image. Several research works have been reported in the area of medical image segmentation. All of them can be classified into two broad categories: (a) Non-AI techniques and (b) AI techniques. A survey of Non-AI techniques is presented first, followed by the report on AI techniques.

2.1 Image Segmentation Based on Non-AI Techniques


Zavaljevski et al (2000) have used the Maximum Likelihood (ML) approach to segment pathological tissues from normal tissues. The drawback of this approach is that the proposed system depends on class probabilities and threshold values. A model-based tumor segmentation technique was implemented by Nathan et al (2002). A modified Expectation Maximization (EM) algorithm is used in this work to differentiate healthy and tumorous tissues. A set of tumor characteristics is presented in this paper which is highly essential for accurate segmentation, but the drawback of this work is the lack of quantitative analysis of the extracted tumor region. Fuping et al (2003) have developed a level set based tumor segmentation technique. This method involves boundary detection starting from a seed point. The watershed algorithm is also used to capture weak edges.

The main problem of this approach is the selection of the seed point: random selection of the seed point may lead to inappropriate results and also requires a long convergence time. A complete analysis of various types of brain tumors and the effect of MR image segmentation techniques on treatment is presented by Sundeep et al (2006). The report concluded that enhanced MR image segmentation techniques play a major role in brain tumor treatment, and the study shows the need for an accurate and fast image segmentation technique. Habib et al (2006) have elaborated the merits and demerits of various statistical segmentation techniques. This work analyzed the performance measures of the histogram-based method, the EM technique and the Statistical Parametric Mapping (SPM2) package in detail. The report aimed at differentiating four different types of brain tissue, and experiments were carried out on simulated brain images. However, the statistical techniques fail in the case of large deformations.

An enhanced version of symmetry analysis which also incorporates deformable models is reported by Hassan et al (2007). The segmentation efficiencies reported in this approach are very low, and the report also concluded that the proposed approach fails in the case of a tumor symmetrical across the mid-sagittal plane. Mathematical morphology based segmentation is implemented by Abdelouahab et al (2007). The experimental results suggested the use of Skeleton by Influence Zones (SKIZ) detection for brain image segmentation. This technique involves the initialization of several parameters, which is one of the demerits of the approach. Withey et al (2007) have reviewed the various software packages available for medical image segmentation. This work also suggested appropriate techniques for various types of segmentation methods. An analysis of evaluation procedures for segmented images is performed by Ranjith et al (2007).

A few conventional algorithms, such as the EM algorithm and the mean-shift filtering algorithm, are experimented with in this work, but a comparative analysis between the various performance measures is not reported. Pierre et al (2007) have implemented a topology-preserving tissue classification on MR brain images. The advantages of statistical techniques and image registration are combined in this technique, which is also suitable for noisy images, but the requirement for a long computational time is the major drawback of this approach. Symmetry-based brain tumor extraction is performed by Nilanjan et al (2008). The ability of this approach is limited since it can detect only densely packed tumor tissues. Chi-Hoon et al (2008) have demonstrated the application of pseudo-conditional random fields for brain tumor segmentation.

This technique is implemented on images of different tumor sizes. The system is also claimed to be highly accurate and much faster than other conventional techniques. The mode of training used in this approach is patient-specific training, which is one of its limitations. Jason et al (2008) have implemented a Bayesian model based tissue segmentation technique for tumor detection. This method proved to be computationally efficient besides yielding improved results over the conventional techniques. K-Nearest Neighbour based MR brain image classification is performed by Petronella et al (2008). An extensive comparative analysis is performed with other techniques. The dependency on threshold values for accurate output is the drawback of this approach. A volumetric image analysis based on mesh and level set methods is illustrated by Aloui et al (2009).

This work concentrated on 3D image processing and was applied to different tumor types, but the results yielded by this approach are inferior to those of the other pixel-based classifiers. Zhen et al (2009) have performed a survey of various medical image segmentation algorithms and analyzed the merits and demerits of these techniques. The drawbacks of several techniques are clearly illustrated in this report, and suitable techniques for tumor segmentation are also suggested.

Barnabas Wilson et al (2001) proposed a study and treatment for Alzheimer's disease. Alzheimer's disease is a progressive and fatal neurodegenerative disorder manifested by cognitive and memory deterioration, progressive impairment of activities of daily living, and a variety of neuropsychiatric symptoms and behavioral disturbances. Alzheimer's disease affects 15 million people worldwide, and it has been estimated that it affects 4.5 million Americans. Rivastigmine is a reversible cholinesterase inhibitor used for the treatment of Alzheimer's disease. Central nervous system drug efficacy depends upon the ability of a drug to cross the blood-brain barrier and reach therapeutic concentrations in the brain following systemic administration. The clinical failure of most potentially effective therapeutics for central nervous system disorders is often due not to a lack of drug potency but rather to shortcomings in the method by which the drug is delivered.

Hence, considering the importance of treating Alzheimer's disease, an attempt has been made to target the
anti-Alzheimer's drug rivastigmine in the brain by using poly(n-butylcyanoacrylate) nanoparticles. The drug
was administered as a free drug, bound to nanoparticles and also bound to nanoparticles coated with
polysorbate 80. In the brain, a significant increase in rivastigmine uptake was observed in the case of poly(n-
butylcyanoacrylate) nanoparticles coated with 1% polysorbate 80 compared to the free drug. The study
demonstrates that the brain concentration of intravenously injected rivastigmine can be enhanced over 3.82
fold by binding to poly(n-butylcyanoacrylate) nanoparticles coated with 1% nonionic surfactant polysorbate
80.

Maria et al (2009) investigated Alzheimer's disease. Currently, drugs approved for AD address symptoms
which are generally manifest after the disease is already well-established. But, there is a growing pipeline of
drugs that may alter the underlying pathology and therefore slow or halt progression of the disease. As these
drugs become available, it will become increasingly imperative that those at risk for AD be detected and
possibly treated early, especially given recent indications that the disease process may start decades before
the first clinical symptoms are recognized. Early detection must go hand-in-hand with qualified tools to
determine the efficacy of drugs in people who may be asymptomatic or who have only very mild
symptoms of the disease. Devising strategies and screening tools to identify and monitor those at risk in
order to perform prevention trials is seen by many as a top public-health priority, made all the more urgent
by an impending growth in the elderly population worldwide. John and Breitner (2003) investigated on the
Alzheimer’s disease and its prevention.

Epidemiologic evidence suggests that nonsteroidal anti-inflammatory drugs (NSAIDs) delay the onset of Alzheimer's dementia, but randomized trials show no benefit from NSAIDs in patients with symptomatic Alzheimer's dementia. The Alzheimer's Disease Anti-inflammatory Prevention Trial (ADAPT) randomized 2528 elderly persons to naproxen or celecoxib versus placebo for 2 years (standard deviation = 11 months) before treatments were terminated. During the treatment interval, 32 cases of AD revealed increased rates in both NSAID-assigned groups. The double-masked ADAPT protocol was continued for 2 additional years to investigate the incidence of Alzheimer's dementia (primary outcome). Cerebrospinal fluid (CSF) was then collected from 117 volunteer participants to assess CSF biomarker ratios. Including 40 new events observed during follow-up of 2071 randomized individuals (92% of participants at treatment cessation), there were 72 Alzheimer's dementia cases. Overall, NSAID-related harm was no longer evident, but secondary analyses showed that increased risk remained notable in the first 2.5 years of observation, especially in 54 persons enrolled with cognitive impairment but no dementia (CIND). These same analyses showed a later reduction in Alzheimer's dementia incidence among asymptomatic enrollees who were given naproxen. CSF biomarker assays suggested that the latter result reflected reduced Alzheimer-type neurodegeneration.

Formichi et al (2006) found that the diagnosis of Alzheimer's dementia is still largely based on the exclusion of secondary causes and of other forms of dementia with similar clinical pictures. The diagnostic accuracy for Alzheimer's dementia is low. Improved methods of early diagnosis are needed, particularly because drug treatment is more effective in the early stages of the disease. Recent research has focused attention on biochemical diagnostic markers (biomarkers), and according to the proposal of a consensus group on biomarkers, three candidate CSF markers reflecting the pathological processes of Alzheimer's dementia have recently been identified: total tau protein (t-tau), amyloid beta(1-42) protein (Abeta42), and tau protein phosphorylated at Alzheimer's dementia-specific epitopes (p-tau). Several articles report reduced CSF levels of Abeta42 and increased CSF levels of t-tau and p-tau in Alzheimer's dementia; the sensitivity and specificity of these data are adequate for discriminating Alzheimer's dementia patients from controls. However, the specificity against other dementias is low.

Boudraa et al (2000) proposed a method for fully automated detection of Multiple Sclerosis (MS) lesions in multispectral magnetic resonance (MR) imaging.

Based on the Fuzzy C-Means (FCM) algorithm, the method starts with segmentation of an MR image to extract an external CSF/lesions mask, preceded by a local image contrast enhancement procedure. This binary mask is then superimposed on the corresponding data set, yielding an image containing only CSF structures and lesions. FCM is then reapplied to this masked image to obtain a mask of lesions and some undesired substructures, which are removed using anatomical knowledge. Any lesion whose size is found to be less than an input bound is eliminated from consideration. Results are presented for test runs of the method on 10 patients. Finally, the potential of the method as well as its limitations are discussed. In a related model-based approach, the geometric data consist of polyhedral objects representing anatomically important structures such as cortical gyri and deep gray matter nuclei. The method consists of iteratively registering the data set to be segmented to the Volumetric Brain Structure Model (VBSM) using deformations based on local image correlation. This segmentation process is performed hierarchically in scale-space. Each step in decreasing levels of scale refines the fit of the previous step and provides input to the next. Results from phantom and real MR data are presented.

Christopher Lisanti et al (2001) presented an article reviewing the normal appearance of CSF, flow physics in relation to CSF flow dynamics, and commonly encountered appearances and artifacts of CSF due to superimposed flow effects. Alperin et al (2005) proposed a work based on Cerebrospinal Fluid (CSF). The
diagnosis of Chiari Malformation (CM) is based on the degree of tonsilar herniation, although this finding
does not necessarily correlate with the presence or absence of symptoms. Intracranial compliance (ICC) and
local craniocervical hydrodynamic parameters derived using magnetic resonance (MR) imaging flow
measurements were assessed in symptomatic patients and control volunteers to evaluate the role of these
factors in the associated pathophysiology. Seventeen healthy volunteers and 34 symptomatic patients with
CM were studied using a 1.5-tesla MR imager. Cine phase-contrast images of blood and Cerebrospinal Fluid
(CSF) flow to and from the cranium were used to quantify local hydrodynamic parameters (for example,
cord displacement and systolic CSF velocity and flow rates) and ICC.

The ICC was derived using a previously described method that measures the small, natural changes in
intracranial volume and pressure with each cardiac cycle. Differences in the average cord displacement and
systolic CSF velocity and flow, comparing healthy volunteers and patients with CM were not statistically
significant. Note, however, that a statistically significant lower ICC (20%) was observed in patients compared with controls. Previous investigators have focused on CSF flow velocities and cord displacement to explain the pathogenesis of CM. Analysis of the results has indicated that ICC is more
sensitive than local hydrodynamic parameters to changes in the craniospinal biomechanical properties in
symptomatic patients. It has been concluded that decreased ICC better explains CM pathophysiology than
local hydrodynamic parameters such as cervical CSF velocities and cord displacement. Low ICC also better
explains the onset of symptoms in adulthood given the decline in ICC with aging.

Riemenschneider et al (2002) investigated CSF tau and Abeta42 concentrations in 34 patients with FTD, 74 patients with Alzheimer's dementia, and 40 cognitively healthy control subjects. CSF levels of tau and Abeta42 were measured by ELISA. Using receiver operating characteristic-derived cutoff points and linear discrimination lines, the diagnostic sensitivity and specificity of both markers were determined. CSF tau concentrations were significantly higher in FTD than in control subjects but were significantly lower than in Alzheimer's dementia. CSF Abeta42 levels were significantly lower in FTD than in control subjects but were significantly higher than in Alzheimer's dementia. In subjects with FTD, neither tau nor Abeta42 levels correlated with the severity of dementia. The best discrimination between the diagnostic groups was obtained by simultaneous measurement of tau and Abeta42, yielding a sensitivity of 90% at a specificity of 77% (FTD vs controls) and a sensitivity of 85% at a specificity of 85% (FTD vs Alzheimer's dementia). In FTD, CSF levels of tau are elevated and Abeta42 levels are decreased. With use of these markers, subjects with FTD can be distinguished from control subjects and from patients with Alzheimer's dementia with reasonable accuracy.

Blennow et al (2001) reviewed the performance of cerebrospinal fluid (CSF) protein biomarkers for
Alzheimer’s dementia. The introduction of acetylcholine esterase (AChE) inhibitors as a symptomatic
treatment of Alzheimer’s disease has made patients seek medical advice at an earlier stage of the disease.
This has highlighted the importance of diagnostic markers for early Alzheimer’s disease. However, there is
no clinical method to determine which of the patients with Mild Cognitive Impairment (MCI) will progress
to Alzheimer’s disease with dementia, and which have a benign form of MCI without progression. The
diagnostic performance of the three biomarkers, total tau, phospho-tau, and the 42 amino acid form of beta-amyloid, has been evaluated in numerous studies, and their ability to identify incipient AD in MCI cases has
also been studied. Some candidate Alzheimer’s disease biomarkers including ubiquitin, neurofilament
proteins, growth-associated protein 43 (neuromodulin), and neuronal thread protein (AD7c) show interesting
results but have been less extensively studied.

It is concluded that CSF biomarkers may have clinical utility in the differentiation between Alzheimer’s
disease and several important differential diagnoses, including normal aging, depression, alcohol dementia,
and Parkinson’s disease, and also in the identification of Creutzfeldt-Jakob disease in cases with rapidly
progressive dementia. Early diagnosis of Alzheimer’s disease is not only of importance to be able to initiate
symptomatic treatment with AChE inhibitors, but will also be the basis for initiating treatment with drugs aimed at slowing down or arresting the degenerative process, such as secretase inhibitors, if these prove to affect Alzheimer's disease pathology and to have a clinical effect.

CHAPTER 3: BASICS OF IMAGE PROCESSING

In electrical engineering and computer science, image processing is any form of signal processing for which the input is an image, such as a photograph or a video frame; the output of image processing can be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.

Image processing usually refers to digital image processing, but optical and analog image processing are also possible. This chapter covers general techniques that apply to all of them. The acquisition of images (producing the input image in the first place) is referred to as imaging.

Image processing is a physical process used to transform an image signal into a physical picture. The image signal may be either digital or analog. The actual output can be a real physical image or the characteristics of an image.

The most common kind of image processing is photography. In this process, an image is captured using a digital camera to create a digital or analog image. In order to produce a physical picture, the image is processed using the appropriate technology based on the input source type.

In digital photography, the image is stored as a computer file. This file is translated using photographic software to generate an actual image. The colors, shading and nuances are all captured at the time the photograph is taken, and the software interprets this data into an image. Typical image-processing operations include:

• Euclidean geometry transformations such as enlargement, reduction, and rotation

• Color corrections such as brightness and contrast adjustments, color mapping, color balancing, quantization, or color translation to a different color space

• Digital compositing or optical compositing (combination of two or more images), which is used in film-making to make a "matte"

• Interpolation, demosaicing, and recovery of a full image from a raw image format using a Bayer filter pattern

• Image registration, the alignment of two or more images

• Image differencing and morphing

• Image recognition, for example extracting text from an image using optical character recognition, or reading checkbox and bubble values using optical mark recognition

• Image segmentation

• High dynamic range imaging by combining multiple images

• Geometric hashing for 2-D object recognition with affine invariance

3.1 Digital Image Processing

Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions (possibly more), digital image processing can be modeled in the form of multidimensional systems.

Many of the techniques of digital image processing were developed in the 1960s at the Jet Propulsion Laboratory, the Massachusetts Institute of Technology, Bell Laboratories, the University of Maryland, and a few other research centers, with applications to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and image enhancement. The cost of processing was fairly high, however, with the computing equipment of that era. That changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated hardware became available. Images could then be processed in real time for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they began to take over the role of dedicated hardware for all but the most specialized and compute-intensive operations.

With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing and is generally used because it is not only the most versatile approach but also the least expensive.

Digital image processing technology for medical applications was inducted into the Space Foundation Space Technology Hall of Fame in 1994.

Digital image processing permits the use of much more complex algorithms and can therefore offer both more sophisticated performance on simple tasks and the implementation of methods that would be impossible by analog means.

In particular, digital image processing is the only practical technology for:

• Classification

• Feature extraction

• Pattern recognition

• Projection

• Multi-scale signal analysis

Some techniques that are used in digital image processing include:

• Pixelization

• Linear filtering

• Principal component analysis

• Independent component analysis

• Hidden Markov models

• Anisotropic diffusion

• Partial differential equations

• Self-organizing maps

• Neural networks

• Wavelets

3.1.1 Feature Extraction

Feature extraction here does not refer to geographical features seen on the image but rather to "statistical" characteristics of the image data, such as individual bands or combinations of band values that carry information about systematic variation within the scene. In multispectral data it therefore helps to portray the essential elements of the image. It also reduces the number of spectral bands that need to be analyzed. After feature extraction is complete, the analyst can work with the desired channels or bands, and the individual bands are in turn enhanced for interpretation. Finally, such pre-processing increases the speed and decreases the cost of the analysis. A simple band-reduction sketch based on principal component analysis is given below.
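One common way to reduce a stack of co-registered spectral bands to a few informative channels is principal component analysis. The sketch below is a plain NumPy illustration over a hypothetical band stack, not a procedure prescribed by this report.

```python
import numpy as np

def pca_band_reduction(bands, n_components=3):
    """Reduce a (num_bands, H, W) stack of co-registered bands to
    `n_components` principal-component images."""
    num_bands, h, w = bands.shape
    X = bands.reshape(num_bands, -1).astype(float)    # one row per band
    X -= X.mean(axis=1, keepdims=True)                # center each band

    # Eigen-decomposition of the band covariance matrix.
    cov = np.cov(X)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]  # strongest components

    components = eigvecs[:, order].T @ X              # project all pixels
    return components.reshape(n_components, h, w)
```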

3.2 Theory of Digital Image Processing

Technically, an image is represented as a two-dimensional function f(x, y), where f denotes the intensity of a pixel and the coordinates (x, y) give the exact location of that pixel in the digital image. It is often said that "an image is not an image without any object in it."

Fig 3.1 A digital image as a two-dimensional function f(x, y) over the x-axis and y-axis

3.2.1 Representation of Digital Image


Generally a digital image is represented in pixels, which are the minute elements of a picture ("picture elements"). A pixel is a combination of 8 bits, composed of most significant bits (MSBs) and least significant bits (LSBs). An interesting point is that the most significant bits show a resistive behaviour while the least significant bits show an accepting behaviour. Whenever an image is exposed to noise or to some other variation in brightness, contrast or resolution, the impact falls mainly on the least significant bits because of this accepting behaviour. The respective bits are arranged as shown in Figure 3.2. The bits in a pixel are arranged in a cascading manner, and the combined intensity of all eight bits determines the final intensity value of the pixel. Digital image intensities depend upon the proper arrangement of these bits so that the image is visualized correctly by the human visual system (HVS).

Fig 3.2 Cascading arrangement of bits in a pixel, from BIT 1 (MSB) to BIT 8 (LSB)

Figure 3.2 shows the cascading arrangement of bits with respect to the most significant and least significant bits, how the eight bit values combine to form the final intensity value, and how that intensity is perceived by the human visual system as the texture of objects in a digital image; technically this is termed human perception of the digital image. The bits are present logically, while the smallest detail actually visualized by the human visual system is the pixel, so all eight bit values must be composed to form each pixel and visualize the digitized content. These bits play a crucial role in security-related applications such as watermarking, steganography and so on. A bit-plane extraction sketch is shown below.
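The MSB/LSB behaviour described above can be made concrete by extracting bit planes from an 8-bit gray-scale image, as in the sketch below; the toy LSB embedding function merely illustrates the watermarking/steganography use mentioned and is not a real scheme.

```python
import numpy as np

def bit_plane(image_u8, bit):
    """Return bit plane `bit` (0 = LSB, 7 = MSB) of an 8-bit image as 0/1 values."""
    return (image_u8 >> bit) & 1

def embed_in_lsb(image_u8, secret_bits):
    """Toy LSB embedding: overwrite the least significant bit plane with a
    0/1 array of the same shape (illustration only, not a real scheme)."""
    return (image_u8 & 0xFE) | (secret_bits & 1)

# Example: the MSB plane carries most of the visible structure,
# while changing the LSB plane is barely noticeable.
img = np.arange(64, dtype=np.uint8).reshape(8, 8) * 4
msb = bit_plane(img, 7)
stego = embed_in_lsb(img, np.random.randint(0, 2, img.shape, dtype=np.uint8))
```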

The need for processing a digital image

Digital image processing plays a prominent role in many application-oriented fields such as the military, biometrics, robotics, genetics, radar image processing, satellite image processing and medical image processing. Whenever an image changes from its regular form to an abnormal form, this indicates a variation in brightness, in the contrast levels, in the resolution and so on; in other situations the behaviour may change because of environmental disturbances, modelled as Gaussian noise, or because of man-made errors such as applying the wrong algorithm or hand jitter. The medical field in particular is an application in which processing plays a vital role in helping the doctor understand the patient's condition, but this is only possible when the data is in a proper form; if processing is not performed, many medical applications fail.

Fig 3.3 Digital image processing applications: medicine, military, radar, biometrics, satellite imaging, industry, security and academia

3.3 Steps in Digital Image Processing

Image Acquisition

Acquiring a digital image is accomplished by using sensors, radars, satellites, cameras and so on. Although acquisition appears to be a simple process, logically it is a difficult task. It mainly includes two important steps: compression of the image and enhancement of the image. Whenever we acquire an object using a digital sensory device, the device first compresses the captured object and then enhances it according to its resolution for better viewing of the image by the human visual system.

Fig 3.4 Digital image acquisition: an object is captured by a digital image sensory device

Image Enhancement

Image processing has associated problems such as the inpainting problem. Due to continuous variation in lighting conditions and variations in other factors, we often acquire low-quality images instead of high-quality ones. To enhance the quality of these low-quality images, we have to improve the several parameters and factors associated with the digital image so as to yield high-quality images in place of low-quality ones.

The factors related to a digital image that can be improved to enhance its quality are contrast, brightness levels, reduction of the noise effect on the image, and so on. Naturally a question arises as to why we need to convert a low-quality digital image into a high-quality one, that is, why we should enhance the quality of a digital image at all. To enhance a low-quality digital image we opt for the better enhancement techniques that already exist; among them, the most successful and most widely used is the contrast enhancement method.

The most important issue considered while enhancing low-quality digital images is that the chosen technique, particularly the contrast enhancement method, must be adaptive to the respective display. In the literature many frameworks and algorithms have been proposed, but most of them are based on enhancement alone; a lot of research has also been done to improve digital image quality based not only on enhancement techniques but also on power saving at the same time. A minimal contrast-stretching sketch is given below.
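As one simple example of contrast enhancement, the sketch below linearly stretches the intensity range of an 8-bit gray-scale image between chosen percentiles; the percentile values are illustrative assumptions, and the histogram equalization described later in this chapter is usually preferred for stronger enhancement.

```python
import numpy as np

def contrast_stretch(image_u8, low_pct=2, high_pct=98):
    """Linearly map the [low_pct, high_pct] percentile range to [0, 255]."""
    lo, hi = np.percentile(image_u8, (low_pct, high_pct))
    if hi <= lo:                        # flat image: nothing to stretch
        return image_u8.copy()
    stretched = (image_u8.astype(float) - lo) * 255.0 / (hi - lo)
    return np.clip(stretched, 0, 255).astype(np.uint8)
```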

Image Compression

Digital image compression plays a distinguished role in many image processing applications. Compressing an image depends upon many important factors such as image energy, which mainly depends on the brightness, contrast levels and so on. This is especially true in applications like steganography and watermarking, in which data is embedded or hidden in the image; after the data has been successfully embedded, the image needs to be compressed before transmission for security reasons.

Compression of a digital image may lose information if it is not done properly, and in some cases, due to a weak encoding algorithm, the retrieval process becomes more difficult and important data may be lost. Popular compression approaches include the DCT compression approach, JPEG compression and so on. A small DCT-based sketch is shown below.

Fig 3.5 Compression of digital image
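As a toy illustration of transform-based compression of the kind mentioned above, the sketch below takes a 2D DCT of an image, keeps only the largest coefficients, and reconstructs an approximation; the keep-fraction and the use of SciPy's DCT routines are assumptions, and this is not the full block-based JPEG pipeline.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_compress(image, keep_fraction=0.1):
    """Zero all but the largest `keep_fraction` of DCT coefficients and
    reconstruct; returns a lossy approximation of `image`."""
    coeffs = dctn(image.astype(float), norm='ortho')

    # Threshold chosen so that roughly keep_fraction of coefficients survive.
    threshold = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
    coeffs[np.abs(coeffs) < threshold] = 0.0

    return idctn(coeffs, norm='ortho')
```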

A digital image may be represented in two different ways. In the first representation one can view the content of the image, such as the objects in it, but cannot see the individual pixels and their values; this is what is usually called the digital image. In the second representation we can view the pixels and their data but not the objects; this is called the histogram. A histogram is technically the graphical representation of the values of the minute picture elements, that is, the pixels. The major advantage of the digital image histogram is that by viewing it one can get a clear estimate of the tonal distribution of the respective digital image.

Fig 3.6 Digital image

In the Histogram Equalization (HE) method, a normalization approach is employed in order to equalize all of the background pixel intensities. When assessing power consumption, however, the Histogram Equalization (HE) technique is mainly affected by the pixel intensities rather than by the backlight intensities, so an enhanced power consumption model can be built based on the Histogram Equalization (HE) term and an index term.

Fig 3.7 Histogram of Respective digital image

Histogram Equalization

Histograms can be manipulated in many different ways, such as histogram rotation, histogram distribution, histogram shifting, and histogram equalization. Histogram equalization plays a crucial role in many digital processing applications: when the important content of a digital image lies at closely spaced contrast values, histogram equalization can increase the global contrast of such images. By using histogram equalization, the pixel intensities of a digital image can be redistributed so that the image is better visualized by the human visual system. The major advantage of digital image histogram equalization is that it equalizes all pixel values so that pixels with low intensities get a better visual appearance; this is accomplished by spreading the most frequent intensity values over the available range. A minimal sketch is given below.
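A minimal CDF-based histogram equalization for an 8-bit gray-scale image might look like the sketch below; library routines (for example scikit-image's exposure.equalize_hist) perform the same task, and this plain NumPy version is only for illustration.

```python
import numpy as np

def equalize_histogram(image_u8):
    """Classic histogram equalization for an 8-bit gray-scale image."""
    hist, _ = np.histogram(image_u8, bins=256, range=(0, 256))

    # Cumulative distribution function, remapped to [0, 255].
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    if cdf[-1] == cdf_min:              # constant image: nothing to equalize
        return image_u8.copy()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)

    return lut[image_u8]                # map each pixel through the lookup table
```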

Object Recognition

An object recognition system finds objects in the real world from an image of the world, using object models that are known a priori. This task is surprisingly difficult. Humans perform object recognition effortlessly and instantaneously, but an algorithmic description of this task for implementation on machines has been very difficult. In this section we discuss the different steps in object recognition and introduce some techniques that have been used for object recognition in many applications. We consider the different types of recognition tasks that a vision system may need to perform, examine the complexity of these tasks, and present techniques useful at different stages of the recognition task. The object recognition problem can be defined as a labeling problem based on models of known objects. Formally, given an image containing one or more objects of interest (and background) and a set of labels corresponding to a set of models known to the system, the system should assign correct labels to regions, or sets of regions, in the image.

The object recognition problem is closely tied to the segmentation problem: without at least a partial recognition of objects, segmentation cannot be done, and without segmentation, object recognition is not possible.

Fig 3.8 Different components of an object recognition system

An object recognition system must select appropriate tools and techniques for the steps mentioned above. Many factors have to be considered in the selection of appropriate techniques for a particular application. The significant issues that should be considered in designing an object recognition system are:

• Object or model representation: How should objects be represented in the model database? What are the important attributes or features of objects that should be captured in these models? For some objects, geometric descriptions may be available and may also be efficient, while for another class one may have to rely on generic or functional features. The representation of an object should capture all relevant information without any redundancies and should organize this information in a form that allows easy access by the different components of the object recognition system.

• Feature extraction: Which features should be detected, and how can they be detected reliably? Most features can be computed in two-dimensional images, but they are related to three-dimensional characteristics of objects. Due to the nature of the image formation process, some features are easy to compute reliably while others are very difficult; feature detection is itself a broad and well-studied problem in image processing.

• Feature-model matching: How can features in images be matched to models in the database? In most object recognition tasks, there are many features and numerous objects. An exhaustive matching approach will solve the recognition problem but may be too slow to be useful. The effectiveness of the features and the efficiency of a matching technique must be considered in developing a matching approach.

• Hypothesis formation: How can a set of likely objects be selected based on the feature matching, and how can probabilities be assigned to each possible object? The hypothesis formation step is basically a heuristic to reduce the size of the search space. This step uses knowledge of the application domain to assign some kind of probability or confidence measure to different objects in the domain. This measure reflects the likelihood of the presence of objects based on the detected features.

• Object verification: How can object models be used to select the most likely object from the set of probable objects in a given image? The presence of each probable object can be verified by using its model. One must examine each plausible hypothesis to verify the presence of the object or to reject it. If the models are geometric, it is easy to verify objects precisely using camera location and other scene parameters. In other cases, it may not be possible to verify a hypothesis.

Depending at the complexity of the problem, one or extra modules in Figure 15.1 may end up trivial. For
example, sample reputation-based totally object popularity structures do not use any characteristic-version
matching or object verification; they at once assign possibilities to objects and choose the item with the very
best chance.

Image restoration

Image restoration is the technique of obtaining the original image from a degraded image, given knowledge of
the degrading factors. Digital image restoration is a field of engineering that studies methods used to recover
the original scene from degraded images and observations. Techniques used for image restoration are oriented
towards modeling the degradations, usually blur and noise, and applying various filters to obtain an
approximation of the original scene [16]. There are a variety of reasons that can cause degradation of an image,
and image restoration is one of the key fields in present-day digital image processing because of its wide range
of applications. Commonly occurring degradations include blurring, motion and noise [3][14]. Blurring can be
caused when an object in the image is outside the camera's depth of field at some time during the exposure,
whereas motion blur can be caused when an object moves relative to the camera during an exposure.
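
As a rough illustration of the blur-plus-noise degradation model described above, the following MATLAB sketch
simulates a Gaussian blur with additive noise and restores the image with Wiener deconvolution. The file name,
kernel size and noise level are placeholder assumptions rather than values used in this project, and the Image
Processing Toolbox is assumed to be available.

% Simulate a blur-and-noise degradation and restore it with Wiener deconvolution.
I   = im2double(imread('mri_slice.png'));    % placeholder file; assumed grayscale
psf = fspecial('gaussian', 9, 2);            % assumed blur kernel (point spread function)
g   = imfilter(I, psf, 'conv', 'circular');  % blurred observation
g   = imnoise(g, 'gaussian', 0, 1e-4);       % add a small amount of Gaussian noise
nsr = 1e-4 / var(I(:));                      % rough noise-to-signal power ratio
f   = deconvwnr(g, psf, nsr);                % Wiener estimate of the original scene
imshowpair(g, f, 'montage');                 % degraded image vs. restored image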

CHAPTER 4: BRAIN TUMOR AND CLUSTERING

A brain tumor is an abnormal growth of cells within the brain, which can be cancerous (malignant) or
non-cancerous (benign). It is defined as any intracranial tumor created by abnormal and uncontrolled cell
division, normally either in the brain itself (neurons, glial cells (astrocytes, oligodendrocytes, ependymal
cells, myelin-producing Schwann cells), lymphatic tissue, blood vessels), in the cranial nerves, in the brain
envelopes (meninges), skull, pituitary and pineal gland, or spread from cancers primarily located in other
organs (metastatic tumors).

4.1 Causes

Aside from exposure to vinyl chloride or ionizing radiation, there are no known environmental factors
associated with brain tumors. Mutations and deletions of so-called tumor suppressor genes are thought to be
the cause of some forms of brain tumors. Patients with various inherited diseases, such as Von Hippel-
Lindau syndrome, multiple endocrine neoplasia, neurofibromatosis type 2 are at high risk of developing
brain tumors. It has been alleged in one report that mobile phone use might be a cause of brain tumors (see the
literature on mobile phone radiation and health). There is an association between brain tumor incidence and malaria,
suggesting that the anopheles mosquito, the carrier of malaria, might transmit a virus or other agent that
could cause a brain tumor. Malignant brain tumor incidence and Alzheimer's disease prevalence are
associated in 19 US states. The two diseases may share a common cause, possibly inflammation.

4.2 Signs and Symptoms

Symptoms of brain tumors may depend on two factors: tumor size (volume) and tumor location. The time of
symptom onset in the course of the disease correlates in many cases with the nature of the tumor ("benign", i.e.
slow-growing with late symptom onset, or malignant, fast-growing with early symptom onset), and the appearance
of symptoms is a frequent reason for seeking medical attention in brain tumor cases.

Large tumors or tumors with extensive perifocal swelling (edema) inevitably lead to elevated
intracranial pressure (intracranial hypertension), which translates clinically into headaches, vomiting
(sometimes without nausea), an altered state of consciousness (somnolence, coma), dilatation of the pupil on
the side of the lesion (anisocoria), and papilledema (a prominent optic disc on funduscopic eye examination).
However, even small tumors obstructing the passage of cerebrospinal fluid (CSF) may cause early signs of
increased intracranial pressure. Increased intracranial pressure may result in herniation (i.e. displacement) of
certain parts of the brain, such as the cerebellar tonsils or the temporal uncus, resulting in lethal brainstem
compression. In young children, elevated intracranial pressure may cause an increase in the diameter of the
skull and bulging of the fontanelles.

Depending on the tumor location and the damage it may have caused to surrounding brain structures,
either through compression or infiltration, any type of focal neurologic symptoms may occur, such as
cognitive and behavioral impairment, personality changes, hemiparesis, hypoesthesia, aphasia, ataxia, visual
field impairment, facial paralysis, double vision, tremor etc. These symptoms are not specific to brain
tumors; they may be caused by a large variety of neurologic conditions (e.g. stroke, traumatic brain injury).
What counts, however, is the location of the lesion and the functional systems (e.g. motor, sensory, visual,
etc.) it affects.

A bilateral temporal visual field defect (bitemporal hemianopia, due to compression of the optic
chiasm), often associated with endocrine dysfunction (either hypopituitarism or hyperproduction of pituitary
hormones and hyperprolactinemia), is suggestive of a pituitary tumor.

4.3 Types of Brain Tumor

• Glioblastoma multiforme
• Medulloblastoma
• Astrocytoma
• CNS lymphoma
• Brainstem glioma
• Germinoma
• Meningioma
• Oligodendroglioma
• Schwannoma
• Craniopharyngioma
• Ependymoma
• Mixed gliomas
• Brain metastasis

4.4 Diagnosis

Although there is no specific clinical symptom or sign for brain tumors, slowly progressive focal
neurologic signs and signs of elevated intracranial pressure, as well as epilepsy in a patient with a negative
history of epilepsy, should raise red flags. However, a sudden onset of symptoms, such as an epileptic
seizure in a patient with no prior history of epilepsy, or sudden intracranial hypertension (which may be due to
bleeding within the tumor, brain swelling or obstruction of the passage of cerebrospinal fluid), is also possible.

Glioblastoma multiforme and anaplastic astrocytoma have been associated in case reports on
PubMed with the genetic acute hepatic porphyrias (PCT, AIP, HCP and VP), including positive testing
associated with drug refractory seizures. Unexplained complications associated with drug treatments with
these tumors should alert physicians to an undiagnosed neurological porphyria.

Imaging plays a central role in the diagnosis of brain tumors. Early imaging methods that were invasive and
sometimes dangerous, such as pneumoencephalography and cerebral angiography, have been abandoned in
recent times in favor of non-invasive, high-resolution modalities, such as computed tomography (CT) and
especially magnetic resonance imaging (MRI). Benign brain tumors often show up as hypodense (darker
than brain tissue) mass lesions on cranial CT-scans. On MRI, they appear either hypo- (darker than brain
tissue) or isointense (same intensity as brain tissue) on T1-weighted scans, or hyperintense (brighter than
brain tissue) on T2-weighted MRI, although the appearance is variable. Perifocal edema also appears
hyperintense on T2-weighted MRI. Contrast agent uptake, sometimes in characteristic patterns, can be
demonstrated on either CT or MRI-scans in most malignant primary and metastatic brain tumors. This is
because these tumors disrupt the normal functioning of the blood-brain barrier and lead to an increase in its
permeability. However, it is not possible to distinguish high-grade from low-grade gliomas based on enhancement
pattern alone.

Electrophysiological exams, such as electroencephalography (EEG), play a marginal role in the
diagnosis of brain tumors.

The definitive diagnosis of brain tumor can only be confirmed by histological examination of
tumor tissue samples obtained either by means of brain biopsy or open surgery. The histological examination
is essential for determining the appropriate treatment and the correct prognosis. This examination, performed
by a pathologist, typically has three stages: intraoperative examination of fresh tissue, preliminary
microscopic examination of prepared tissues, and follow-up examination of prepared tissues after
immunohistochemical staining or genetic analysis.

4.5 Treatment and Prognosis

Many meningiomas, with the exception of some tumors located at the skull base, can be successfully
removed surgically. In more difficult cases, stereotactic radiosurgery, such as Gamma Knife, CyberKnife or
Novalis Tx radiosurgery, remains a viable option.

Most pituitary adenomas can be removed surgically, often using a minimally invasive approach
through the nasal cavity and skull base (trans-nasal, trans-sphenoidal approach). Large pituitary adenomas
require a craniotomy (opening of the skull) for their removal. Radiotherapy, including stereotactic
approaches, is reserved for the inoperable cases.

Although there is no generally accepted therapeutic management for primary brain tumors, a surgical
attempt at tumor removal or at least cytoreduction (that is, removal of as much tumor as possible, in order to
reduce the number of tumor cells available for proliferation) is considered in most cases. However, due to
the infiltrative nature of these lesions, tumor recurrence, even following an apparently complete surgical
removal, is not uncommon. Several current research studies aim to improve the surgical removal of brain
tumors by labeling tumor cells with a chemical (5-aminolevulinic acid) that causes them to fluoresce.
Postoperative radiotherapy and chemotherapy are integral parts of the therapeutic standard for malignant
tumors. Radiotherapy may also be administered in cases of "low-grade" gliomas, when a significant tumor
burden reduction could not be achieved surgically.

Survival rates in primary brain tumors depend on the type of tumor, the patient's age and functional status,
and the extent of surgical tumor removal, among other factors.

UCLA Neuro-Oncology publishes real-time survival data for patients with this diagnosis. They are
the only institution in the United States that shows how brain tumor patients are performing on current
therapies. They also show a listing of chemotherapy agents used to treat high grade glioma tumors.

Patients with benign gliomas may survive for many years, while survival in most cases of
glioblastoma multiforme is limited to a few months after diagnosis if treatment is ignored.

The main treatment option for single metastatic tumors is surgical removal, followed by radiotherapy
and/or chemotherapy. Multiple metastatic tumors are generally treated with radiotherapy and chemotherapy.
Stereotactic radiosurgery (SRS), such as Gamma Knife, CyberKnife or Novalis Tx radiosurgery, remains a
viable option. However, the prognosis in such cases is determined by the primary tumor, and it is generally
poor.

Radiotherapy is the most common treatment for secondary cancer brain tumors. The amount of
radiotherapy depends on the size of the area of the brain affected by cancer. Conventional external beam
whole brain radiotherapy treatment (WBRT) or 'whole brain irradiation' may be suggested if there is a risk
that other secondary tumors will develop in the future. Stereotactic radiotherapy is usually recommended in
cases involving fewer than three small secondary brain tumors.

In 2008 a study published by the University of Texas M. D. Anderson Cancer Center indicated that
cancer patients who receive stereotactic radiosurgery (SRS) and whole brain radiation therapy (WBRT) for
the treatment of metastatic brain tumors have more than twice the risk of developing learning and memory
problems as those treated with SRS alone [15][16].

A shunt operation is used not as a cure but to relieve the symptoms. The hydrocephalus caused by
blocked drainage of the cerebrospinal fluid can be relieved by this operation.

Fig 4.1 A brainstem glioma in a four-year-old (sagittal MRI, without contrast)

In the US, about 2000 children and adolescents younger than 20 years of age are diagnosed with
malignant brain tumors each year. Higher incidence rates were reported in 1975–83 than in 1985–94. There
is some debate as to the reasons; one theory is that the trend is the result of improved diagnosis and
reporting, since the jump occurred at the same time that MRIs became available widely, and there was no
coincident jump in mortality. The CNS cancer survival rate in children is approximately 60%. The rate
varies with the type of cancer and the age of onset: younger patients have higher mortality.

In children under 2, about 70% of brain tumors are medulloblastoma, ependymoma, and low-grade glioma.
Less commonly, and seen usually in infants, are teratoma and atypical teratoid rhabdoid tumor. Germ cell
tumors, including teratoma, make up just 3% of pediatric primary brain tumors, but the worldwide incidence
varies significantly.

Watershed Algorithm

Understanding the watershed transform requires that you think of an image as a surface. For example,
consider the image below:

Fig 4.2 Synthetically generated image of two dark blobs.

If you imagine that bright areas are "high" and dark areas are "low," then the image can be viewed as a surface.
With surfaces, it is natural to think in terms of catchment basins and watershed lines. The Image Processing
Toolbox function watershed can find the catchment basins and watershed lines for any grayscale image. The
key behind using the watershed transform for segmentation is this: Change your image into another image
whose catchment basins are the objects you want to identify.
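
The following MATLAB sketch illustrates this idea for the two-blob example: the negated distance transform of the
thresholded blobs becomes the surface whose catchment basins are the blobs themselves. The file name and the
thresholding step are illustrative assumptions, not the exact segmentation procedure used later in this work.

% Watershed segmentation of dark blobs (illustrative only).
I      = imread('blobs.png');                 % placeholder image with dark blobs on a light background
bw     = imbinarize(imcomplement(I));         % dark blobs become foreground pixels
D      = -bwdist(~bw);                        % negated distance transform: minima at blob centres
D(~bw) = Inf;                                 % background can never be a catchment basin
L      = watershed(D);                        % label matrix; zeros mark the watershed lines
L(~bw) = 0;                                   % keep labels only inside the blobs
imshow(label2rgb(L, 'jet', 'w'));             % one colour per catchment basin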

CHAPTER 5: PROPOSED METHOD

Fig 5.1 Block diagram of the proposed method

Step 1. An input image of dimensions 320×320 is taken and read using MATLAB.

Step 2. That image is then decomposed by the DWT (Discrete Wavelet Transform) into four sub-bands, namely
low-low (LL), low-high (LH), high-low (HL), and high-high (HH).

Step 3. The low-low (LL) sub-band is taken for further processing and for computing the mutual information
matrix. The LL band is chosen because it contains the approximation coefficients, and most of the image
information is concentrated in this band.

Step 4. The same steps are carried out on the database (reference) image: it is decomposed using the DWT into
four sub-bands and its LL part is taken for further processing.

Step 5. The mutual information matrix is calculated by taking the uncommon content between the mutual
information matrices of the input image and the database image.

Step 6. The IDWT is applied to the uncommon information, and the resultant image is the detected tumor, which
initially has low sharpness and brightness.

Step 7. The brightness, sharpness and intensity are corrected at the end to obtain the tumor.

Step 8. The obtained resultant image is the tumor detected from the input image.
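
A minimal MATLAB sketch of Steps 1–8 is given below. It assumes a single-level Haar DWT and a simple absolute
difference of the LL sub-bands as the "uncommon content"; the exact mutual information computation and the
brightness correction used in the project may differ, and the file names are placeholders.

% Sketch of the proposed flow (Fig 5.1) under the assumptions stated above.
in  = im2double(imread('input_mri.png'));        % Step 1: input image, assumed 320x320 grayscale
ref = im2double(imread('reference_mri.png'));    % database (reference) image of the same size
[LLi, LHi, HLi, HHi] = dwt2(in,  'haar');        % Step 2: Haar DWT of the input image
[LLr, ~,   ~,   ~  ] = dwt2(ref, 'haar');        % Step 4: Haar DWT of the database image
diffLL = abs(LLi - LLr);                         % Step 5: uncommon content between the LL bands (assumption)
tumor  = idwt2(diffLL, LHi, HLi, HHi, 'haar');   % Step 6: IDWT back to the image domain
tumor  = imadjust(mat2gray(tumor));              % Step 7: brightness/sharpness/intensity correction
imshow(tumor);                                   % Step 8: resultant detected-tumor image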

Input MRI image

The MRI image is selected by the user from database. Magnetic resonance imaging (MRI) of the head uses a
powerful magnetic field, radio waves and a computer to produce detailed pictures of the brain and other
cranial structures that are clearer and more detailed than other imaging methods. This exam does not use
ionizing radiation and may require an injection of a contrast material called gadolinium, which is less likely
to cause an allergic reaction than iodinated contrast material. Tell your doctor about any health problems,
recent surgeries or allergies and whether there’s a possibility you are pregnant.

The magnetic field is not harmful, but it may cause some medical devices to malfunction. Most orthopedic
implants pose no risk, but you should always tell the technologist if you have any devices or metal in your
body. Guidelines about eating and drinking before your exam vary between facilities. Unless you are told
otherwise, take your regular medications as usual. Leave jewelry at home and wear loose, comfortable
clothing. You may be asked to wear a gown. If you have claustrophobia or anxiety, you may want to ask
your doctor for a mild sedative prior to the exam.

Gray scale conversion

In photography, computing, and colorimetry, a grayscale or greyscale image is one in which the value of
each pixel is a single sample representing only an amount of light, that is, it carries only intensity
information. Images of this sort, also known as black-and-white or monochrome, are composed exclusively
of shades of gray, varying from black at the weakest intensity to white at the strongest. Grayscale images are
distinct from one-bit bi-tonal black-and-white images which, in the context of computer imaging, are images
with only two colors: black and white (also called bilevel or binary images).

Grayscale images have many shades of gray in between. Grayscale images can be the result of measuring the
intensity of light at each pixel according to a particular weighted combination of frequencies (or
wavelengths), and in such cases they are monochromatic proper when only a single frequency (in practice, a
narrow band of frequencies) is captured. The frequencies can in principle be from anywhere in the
electromagnetic spectrum (e.g. infrared, visible light, ultraviolet, etc.). A colorimetric (or more specifically
photometric) grayscale image is an image that has a defined grayscale colorspace, which maps the stored
numeric sample values to the achromatic channel of a standard colorspace, which itself is based on measured
properties of human vision. If the original color image has no defined colorspace, or if the grayscale image is
not intended to have the same human-perceived achromatic intensity as the color image, then there is no
unique mapping from such a color image to a grayscale image.
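
As a small illustration, the sketch below converts an RGB image to grayscale with the common Rec. 601 luminance
weights, which is also the weighting applied by MATLAB's rgb2gray; the file name is a placeholder.

% Weighted grayscale conversion (Rec. 601 luminance weights).
rgb  = im2double(imread('scan_rgb.png'));                         % assumed M-by-N-by-3 colour image
gray = 0.2989*rgb(:,:,1) + 0.5870*rgb(:,:,2) + 0.1140*rgb(:,:,3); % single intensity sample per pixel
% gray = rgb2gray(rgb);                                           % equivalent toolbox call
imshow(gray);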

The intensity of a pixel is expressed within a given range between a minimum and a maximum, inclusive.
This range is represented in an abstract way as a range from 0 (or 0%), total absence (black), to 1 (or 100%),
total presence (white), with any fractional values in between. This notation is used in academic papers, but
this does not define what "black" or "white" is in terms of colorimetry. Sometimes the scale is reversed, as in
printing where the numeric intensity denotes how much ink is employed in halftoning, with 0% representing
the paper white (no ink) and 100% being a solid black (full ink). In computing, although the grayscale can be
computed through rational numbers, image pixels are usually quantized to store them as unsigned integers,
to reduce the required storage and computation.

Some early grayscale monitors can only display up to sixteen different shades, which would be stored in
binary form using 4-bits. But today grayscale images (such as photographs) intended for visual display (both
on screen and printed) are commonly stored with 8 bits per sampled pixel. This pixel depth allows 256
different intensities (i.e., shades of gray) to be recorded, and also simplifies computation as each pixel
sample can be accessed individually as one full byte. However, if these intensities were spaced equally in
proportion to the amount of physical light they represent at that pixel (called a linear encoding or scale), the
differences between adjacent dark shades could be quite noticeable as banding artifacts, while many of the
lighter shades would be "wasted" by encoding a lot of perceptually-indistinguishable increments.

Therefore, the shades are instead typically spread out evenly on a gamma-compressed nonlinear scale, which
better approximates uniform perceptual increments for both dark and light shades, usually making these 256
shades enough (just barely) to avoid noticeable increments. Technical uses (e.g. in medical imaging or
remote sensing applications) often require more levels, to make full use of the sensor accuracy (typically 10
or 12 bits per sample) and to reduce rounding errors in computations. Sixteen bits per sample (65,536 levels)
is often a convenient choice for such uses, as computers manage 16-bit words efficiently. The TIFF and
PNG (among other) image file formats support 16-bit grayscale natively, although browsers and many
imaging programs tend to ignore the low order 8 bits of each pixel. Internally for computation and working
storage, image processing software typically uses integer or floating-point numbers of size 16 or 32 bits.

Apply Median Filtering

Median filtering is a nonlinear method used to remove noise from images. It is widely used as it is very
effective at removing noise while preserving edges. It is particularly effective at removing ‘salt and pepper’
type noise. The median filter works by moving through the image pixel by pixel, replacing each value with
the median value of neighbouring pixels. The pattern of neighbours is called the "window", which slides,
pixel by pixel, over the entire image. The median is calculated by first sorting
all the pixel values from the window into numerical order, and then replacing the pixel being considered
with the middle (median) pixel value.

There are other approaches that have different properties that might be preferred in particular circumstances:

– Avoid processing the boundaries, with or without cropping the signal or image boundary afterwards.

– Fetching entries from other places in the signal.
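
A brief MATLAB sketch of the filter is given below; the 'symmetric' padding option is one built-in answer to the
boundary question listed above, and the file name and noise level are illustrative assumptions.

% 3x3 median filtering of a noisy grayscale image.
I     = im2double(imread('mri_gray.png'));      % placeholder grayscale image
noisy = imnoise(I, 'salt & pepper', 0.02);      % salt-and-pepper noise added for illustration
clean = medfilt2(noisy, [3 3], 'symmetric');    % median of each 3x3 window; mirrored boundary padding
imshowpair(noisy, clean, 'montage');            % noise removed while edges are largely preserved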

DWT DECOMPOSITION and IDWT

In numerical analysis and functional analysis, a discrete wavelet transform (DWT) is any wavelet transform
for which the wavelets are discretely sampled. As with other wavelet transforms, a key advantage it has over
Fourier transforms is temporal resolution: it captures both frequency and location information (location in
time).

Fig.5.2 Example of Wavelet Decomposition

The IDWT recovers the original data exactly; it is the inverse operation of the DWT.
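
The following short MATLAB sketch shows one level of Haar decomposition and its inverse; the reconstruction error
is at floating-point precision, illustrating that the IDWT is the exact inverse of the DWT. The test image is only
an example and the Wavelet Toolbox is assumed.

% Single-level Haar DWT and its inverse (perfect reconstruction check).
X = im2double(imread('cameraman.tif'));     % any grayscale image will do
[LL, LH, HL, HH] = dwt2(X, 'haar');         % four sub-bands: approximation plus three details
Xrec = idwt2(LL, LH, HL, HH, 'haar');       % inverse transform
disp(max(abs(X(:) - Xrec(:))));             % error on the order of machine precision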

ENTROPY

The measure of the information content of a message is termed entropy. The concept comes from communication
theory, where it quantifies how much information is transmitted and how much is received. Hartley proposed an
early theory of entropy in which he defined a measure of the information in a message that forms the basis of
many present-day measures. He considered a message to be a string of symbols, each symbol having s different
possibilities; if the message contains n symbols, the total number of possible messages is s^n. He sought an
information measure that increases with message length. The number of possible messages does increase with
length, but it grows exponentially with the length of the message, which is not realistic as a measure. Hartley
therefore wanted a measure H that increases linearly with n, i.e. H = Kn, where K is a constant depending on
the number of symbols s.
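
For an image, the entropy is usually computed from the normalized gray-level histogram as H = -Σ p_i log2 p_i,
which reduces to Hartley's H = Kn with K = log2(s) when all s symbols are equally likely. A minimal MATLAB sketch
is shown below; the file name is a placeholder, and the same value is returned by the toolbox function entropy().

% Shannon entropy of a grayscale image from its histogram.
I = imread('mri_gray.png');        % placeholder, assumed 8-bit grayscale
p = imhist(I) / numel(I);          % probability of each of the 256 gray levels
p = p(p > 0);                      % drop empty bins (0*log 0 is taken as 0)
H = -sum(p .* log2(p));            % entropy in bits per pixel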

MUTUAL INFORMATION

The research that eventually led to the introduction of mutual information as a registration measure dates
back to the early 1990s. Woods et al. [5, 6] first introduced a registration measure for multimodality images
based on the assumption that regions of similar tissue (and hence similar grey values) in one image would
correspond to regions in the other image that also consist of similar grey values (though probably different
values to those of the first image). Ideally, the ratio of the grey values for all corresponding points in a
certain region in either image varies little. Consequently, the average variance of this ratio for all regions
is minimized to achieve registration.
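
In the commonly used formulation, the mutual information of two images A and B is I(A, B) = H(A) + H(B) − H(A, B),
computed from their joint gray-level histogram. The MATLAB sketch below follows that standard definition; the file
names and the 64-bin histogram are illustrative assumptions, and the "mutual information matrix" of the proposed
method may be computed differently.

% Mutual information of two same-sized images from a joint gray-level histogram.
A = double(imread('input_mri.png'));  B = double(imread('reference_mri.png'));  % placeholders
pAB = histcounts2(A(:), B(:), 64);           % 64x64 joint histogram of corresponding pixels
pAB = pAB / sum(pAB(:));                     % joint probability distribution
pA  = sum(pAB, 2);   pB = sum(pAB, 1);       % marginal distributions
H   = @(p) -sum(p(p > 0) .* log2(p(p > 0))); % entropy helper, ignoring empty bins
MI  = H(pA) + H(pB) - H(pAB);                % mutual information in bits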

Contrast enhancement

Contrast is an important factor in any subjective evaluation of image quality. Contrast is created by the
difference in luminance reflected from two adjacent surfaces. In other words, contrast is the difference in
visual properties that makes an object distinguishable from other objects and the background. In visual
perception, contrast is determined by the difference in the colour and brightness of the object with other
objects. Our visual system is more sensitive to contrast than absolute luminance; therefore, we can perceive
the world similarly regardless of the considerable changes in illumination conditions. Many algorithms for
accomplishing contrast enhancement have been developed and applied to problems in image processing.

The contrast enhancement technique plays a vital role in image processing, bringing out information that exists
within the low dynamic range of a gray-level image. To improve the quality of an image, it is necessary to
perform operations such as contrast enhancement and the reduction or removal of noise. One reported approach
performs contrast enhancement using the global mean of the entire image together with the local mean of 3×3
sub-images. While global contrast-enhancement techniques enhance the overall contrast, their dependence on the
global content of the image limits their ability to enhance local details. They can also result in significant
changes in image brightness and introduce saturation artifacts. Local enhancement methods, on the other hand,
improve image details but can produce block discontinuities, noise amplification and unnatural image
modifications. To remedy these shortcomings, fusion-based contrast-enhancement techniques integrate information
from several methods to overcome the limitations of individual contrast-enhancement algorithms.
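
For illustration, the MATLAB sketch below contrasts a global enhancement (histogram equalization) with a local,
tile-based one (CLAHE); these are standard toolbox operations offered as examples, not necessarily the exact
enhancement stage of the proposed method, and the file name is a placeholder.

% Global versus local contrast enhancement.
I        = im2double(imread('tumor_lowcontrast.png'));  % placeholder low-contrast grayscale image
stretch  = imadjust(I);                                 % linear stretch of the occupied intensity range
globalEq = histeq(I);                                   % global histogram equalization
localEq  = adapthisteq(I, 'ClipLimit', 0.02);           % CLAHE: local detail with limited noise amplification
imshow([stretch, globalEq, localEq]);                   % side-by-side comparison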

Morphological processing

Morphological operators often take a binary image and a structuring element as input and combine them
using a set operator (intersection, union, inclusion, complement). They process objects in the input image
based on characteristics of its shape, which are encoded in the structuring element. The mathematical details
are explained in Mathematical Morphology. Usually, the structuring element is sized 3×3 and has its origin
at the center pixel. It is shifted over the image and at each pixel of the image its elements are compared with
the set of the underlying pixels. If the two sets of elements match the condition defined by the set operator
(e.g. if the set of pixels in the structuring element is a subset of the underlying image pixels), the pixel
underneath the origin of the structuring element is set to a pre-defined value (0 or 1 for binary images).

A morphological operator is therefore defined by its structuring element and the applied set operator. For
the basic morphological operators the structuring element contains only foreground pixels (i.e. ones) and
`don't care's'. These operators, which are all a combination of erosion and dilation, are often used to select or
suppress features of a certain shape, e.g. removing noise from images or selecting objects with a particular
direction. The more sophisticated operators take zeros as well as ones and `don't care's' in the structuring
element.

The most general operator is the hit-and-miss transform; in fact, all the other morphological operators can be
deduced from it. Its variations are often used to simplify the representation of objects in a (binary) image
while preserving their structure, e.g. producing a skeleton of an object using skeletonization and tidying up
the result using thinning. Morphological operators can also be applied to gray-level images, e.g. to reduce
noise or to brighten the image. However, for many applications, other methods such as a more general spatial
filter produce better results.
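
A short MATLAB sketch of the basic operators is given below, applied to a binary mask such as a thresholded tumor
region; the structuring element, area threshold and file name are illustrative assumptions rather than the
project's exact post-processing.

% Basic binary morphology with a 3x3 structuring element.
bw      = imread('tumor_mask.png') > 0;      % placeholder binary mask
se      = strel('square', 3);                % 3x3 structuring element, origin at the centre pixel
opened  = imopen(bw, se);                    % erosion then dilation: removes small bright specks
closed  = imclose(opened, se);               % dilation then erosion: fills small holes and gaps
cleaned = bwareaopen(closed, 50);            % discard connected components smaller than 50 pixels
imshowpair(bw, cleaned, 'montage');          % raw mask vs. cleaned mask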

Segmentation:

Threshold segmentation is the simplest method of image segmentation and also one of the most common
parallel segmentation methods. It is a common segmentation algorithm which directly partitions the image
gray-scale information based on the gray values of different targets. Threshold segmentation can
be divided into the local threshold method and the global threshold method. The global threshold method divides
the image into two regions of the target and the background by a single threshold. The local threshold
method needs to select multiple segmentation thresholds and divides the image into multiple target regions
and backgrounds by multiple thresholds.

The most commonly used threshold segmentation algorithm is the maximum between-class variance method (Otsu's
method), which selects a globally optimal threshold by maximizing the variance between classes. In addition to
this, there are entropy-based threshold segmentation methods, the minimum error method, the co-occurrence matrix
method, the moment-preserving method, the simple statistical method, the probability relaxation method, the
fuzzy set method, and threshold methods combined with other techniques.
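
The sketch below applies Otsu's criterion (the maximum between-class variance mentioned above) as a global
threshold in MATLAB; the file name and the minimum-area value are illustrative assumptions.

% Global thresholding with Otsu's method.
I     = im2double(imread('enhanced_tumor.png'));  % placeholder enhanced grayscale image
level = graythresh(I);                            % Otsu threshold in [0, 1]
bw    = imbinarize(I, level);                     % target (bright tumor region) vs. background
bw    = bwareaopen(bw, 30);                       % remove very small regions
imshowpair(I, bw, 'montage');                     % image and its segmentation mask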

CHAPTER 6: RESULTS

Fig.6.1 Final GUI for proposed work

Fig.6.2 Input image and its reference image are selected

Fig. 6.3 Tumor segmented

Fig.6.4 Type of tumor

Fig.6.5 Accuracy with calculations
