
ABSTRACT

Tuberculosis (TB) is a common disease with high mortality and morbidity rates worldwide. The chest radiograph (CXR) is frequently used in diagnostic algorithms for pulmonary TB. Automatic systems to detect TB on CXRs can improve the efficiency of such diagnostic algorithms. The diverse manifestation of TB on CXRs from different populations requires a system that can be adapted to deal with different types of abnormalities.
A computer aided detection (CAD) system was developed which combines the results of supervised subsystems detecting textural, shape, and focal abnormalities into one TB score. The textural abnormality subsystem provided several subscores analyzing different types of textural abnormalities and different regions in the lung. The shape and focal abnormality subsystems each provided one subscore. A general framework was developed to combine an arbitrary number of subscores: subscores were normalized, collected in a feature vector and then combined using a supervised classifier into one combined TB score.
Two databases, both consisting of 200 digital CXRs, were used for evaluation, acquired
from (A) a Western high-risk group screening and (B) TB suspect screening in Africa. The
subscores and combined TB score were compared to two references: an external, non-radiological reference and a radiological reference determined by a human expert. The area under the Receiver Operating Characteristic (ROC) curve was used as the performance measure.
The combined TB score performed better than the individual subscores and approaches the performance of human observers with respect to both the external and the radiological reference. Supervised combination to compute an overall TB score allows for a necessary adaptation of the CAD system to different settings or different operational requirements.

CHAPTER 1
INTRODUCTION
Image processing operations can be roughly divided into three major categories: Image Compression, Image Enhancement and Restoration, and Measurement Extraction. Image Compression involves reducing the amount of memory needed to store a digital image. Image defects, which could be caused by the digitization process or by faults in the imaging setup (for example, bad lighting), can be corrected using Image Enhancement techniques. Once the image is in good condition, the Measurement Extraction operations can be used to obtain useful information from the image. The Image Enhancement and Measurement Extraction operations described here apply to 256-level grey-scale images. This means that each pixel in the image is stored as a number between 0 and 255, where 0 represents a black pixel, 255 represents a white pixel and values in between represent shades of grey. These operations can be extended to operate on colour images.
1.1 Introduction to Image Processing
Image processing is a method to convert an image into digital form and perform some operations on it, in order to get an enhanced image or to extract some useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or characteristics associated with that image. Usually an image processing system treats images as two-dimensional signals while applying already established signal processing methods to them. Image processing basically includes the following three steps.
Importing the image with an optical scanner or by digital photography.
Analyzing and manipulating the image, which includes data compression, image enhancement and spotting patterns that are not visible to human eyes, as in satellite photographs.
Output, the last stage, in which the result can be an altered image or a report based on the image analysis.

1.1.1 Purpose of Image processing


The purpose of image processing is divided into five groups. They are:
Visualization - Observe objects that are not visible.
Image sharpening and restoration - Create a better image.
Image retrieval - Seek the image of interest.
Measurement of pattern - Measure various objects in an image.
Image recognition - Distinguish the objects in an image.

1.1.2 Types
The two types of methods used for Image Processing are Analog and Digital Image Processing. Analog or visual techniques of image processing can be used for hard copies like printouts and photographs. Image analysts use various fundamentals of interpretation while using these visual techniques. The image processing is not just confined to the area that has to be studied but also depends on the knowledge of the analyst. Association is another important tool in image processing through visual techniques, so analysts apply a combination of personal knowledge and collateral data to image processing.
Digital processing techniques help in the manipulation of digital images by using computers. Raw data from imaging sensors on a satellite platform contains deficiencies; to get over such flaws and to recover the original information, it has to undergo various phases of processing. The three general phases that all types of data have to undergo while using the digital technique are pre-processing, enhancement and display, and information extraction.
There are two general groups of images: vector graphics (or line art) and bitmaps (pixel-based images). Some of the most common file formats are:

GIF - An 8-bit (256 colour), non-destructively compressed bitmap format. This format is commonly used on the Internet. It limits the number of colour tones possible in the image to 256, so it is frequently used for logos, icons or black and white images, and the photo quality is lower.

JPEG - A 24-bit destructively compressed bitmap format. It is commonly used to save photos: the photo quality is better and the size of the file can be limited.

TIFF - The standard 24-bit publication bitmap format. It is used for high-quality photos, and by scanners, digital cameras and printers. Given the superior quality of the image, the file size is also very large.

PS - PostScript, a standard vector format. It has numerous sub-standards and can be difficult to transport across platforms and operating systems.

PSD - A dedicated Photoshop format that keeps all the information in an image, including all the layers.
Pictures are the most common and convenient means of conveying or transmitting information. A picture is worth a thousand words. Pictures concisely convey information about positions, sizes and interrelationships between objects. They portray spatial information that we can recognize as objects. Human beings are good at deriving information from such images because of our innate visual and mental abilities; about 75% of the information received by humans is in pictorial form. An image is digitized to convert it to a form which can be stored in a computer's memory or on some form of storage media such as a hard disk or CD-ROM. This digitization can be done by a scanner, or by a video camera connected to a frame grabber board in a computer. Once the image has been digitized, it can be operated upon by various image processing operations.

1.1.3 RGB Color


The RGB color model is an additive color model in which red, green, and blue light are added together in various ways to reproduce a broad array of colors. RGB uses additive color mixing and is the basic color model used in television or any other medium that projects color with light. It is the basic color model used in computers and for web graphics, but it cannot be used for print production. The secondary colors of RGB, namely cyan, magenta, and yellow, are formed by mixing two of the primary colors (red, green or blue) and excluding the third color. Red and green combine to make yellow, green and blue make cyan, and blue and red form magenta. The combination of red, green, and blue at full intensity makes white.
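As a small illustration of additive mixing, a minimal MATLAB sketch that builds the secondary colors and white from the three primaries (the 1x4 pixel layout is an arbitrary example):

% Build a 1x4 RGB image showing yellow, cyan, magenta and white
% by mixing the red, green and blue channels at full intensity.
px = zeros(1, 4, 3);            % rows x cols x 3 channels
px(1,1,[1 2]) = 1;              % red + green  -> yellow
px(1,2,[2 3]) = 1;              % green + blue -> cyan
px(1,3,[1 3]) = 1;              % blue + red   -> magenta
px(1,4,:)     = 1;              % red + green + blue -> white
imshow(px, 'InitialMagnification', 'fit');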
1.2 Applications
Image processing has an enormous range of applications; almost every area of
science and technology can make use of image processing methods. Here is a short list
just to give some indication of the range of image processing applications.
Medicine
    Inspection and interpretation of images obtained from X-rays, MRI or CAT scans,
    Analysis of cell images.
Agriculture
    Satellite/aerial views of land, for example to determine how much land is being used for different purposes, or to investigate the suitability of different regions for different crops,
    Inspection of fruit and vegetables, distinguishing good and fresh produce from old.
Industry
    Automatic inspection of items on a production line,
    Inspection of paper samples.
Law enforcement
    Fingerprint analysis,
    Sharpening or de-blurring of speed-camera images.
1.3 Aspects of image processing
It is convenient to subdivide different image processing algorithms into broad subclasses. There are different algorithms for different tasks and problems, and one often would like to distinguish the nature of the task at hand.
1.3.1 Image Enhancement
This refers to processing an image so that the result is more suitable for a particular application. Examples include:
sharpening or de-blurring an out-of-focus image,
highlighting edges,
improving image contrast or brightening an image,
removing noise.

1.3.2 Image Restoration


This may be considered as reversing the damage done to an image by a known cause, for example:
removing blur caused by linear motion,
removal of optical distortions,
removing periodic interference.
1.3.3 Image Segmentation
This involves subdividing an image into constituent parts, or isolating certain aspects of an image, for example:
finding lines, circles, or particular shapes in an image,
in an aerial photograph, identifying cars, trees, buildings, or roads.
These classes are not disjoint; a given algorithm may be used both for image enhancement and for image restoration.

1.4 Acquiring the image
A digital image can be acquired using either a CCD camera or a scanner. As a running example for the following steps, consider the task of reading the postcode from an image of an envelope.
1.4.1 Preprocessing
This is the step taken before the major image processing task. The problem here is
to perform some basic tasks in order to render the resulting image more suitable for the
job to follow. In this case it may involve enhancing the contrast, removing noise, or
identifying regions likely to contain the postcode.
1.4.2 Segmentation
Segmentation actually gets the postcode; in other words, it extracts from the image that part which contains just the postcode.
1.4.3 Representation and description
These terms refer to extracting the particular features which allow us to differentiate between objects, that is, the curves, holes and corners which allow us to distinguish the different digits which constitute a postcode.
1.4.4 Recognition and interpretation
This means assigning labels to objects based on their descriptors (from the previous step), and assigning meanings to those labels. Here we identify particular digits, and we interpret a string of four digits at the end of the address as the postcode.
1.4.5 Image processing techniques
Image processing is any form of signal processing for which the input is an image, such as a photograph or video frame. The output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it. Image processing usually refers to digital image processing, but optical and analog image processing are also possible. Image processing is closely related to computer graphics and computer vision.
1.5 Digital Image Processing
Image Processing Toolbox provides a comprehensive set of reference-standard
algorithms, functions, and apps for image processing, analysis, visualization, and
algorithm development. It can perform image analysis, image segmentation, image
enhancement, noise reduction, geometric transformations, and image registration. Many
toolbox functions support multicore processors, GPUs, and C-code generation. Image
Processing Toolbox supports a diverse set of image types, including high dynamic range,
gigapixel resolution, embedded ICC profile, and tomography. Visualization functions and
apps let you explore images and videos, examine a region of pixels, adjust color and
contrast, create contours or histograms, and manipulate regions of interest (ROIs). The
toolbox supports workflows for processing, displaying, and navigating large images.
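As a small illustration of this kind of workflow, a few standard Image Processing Toolbox calls chained together (pout.tif is a sample image shipped with MATLAB):

I = imread('pout.tif');          % sample image shipped with the toolbox
J = imadjust(I);                 % contrast adjustment
K = medfilt2(J, [3 3]);          % median filtering for noise reduction
imshowpair(I, K, 'montage');     % compare original and processed images
figure; imhist(K);               % explore the intensity histogram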
As a fundamental problem in the field of image processing, image restoration has been extensively studied in the past two decades. It aims to reconstruct the original high-quality image x from its degraded observed version y, which is a typical ill-posed linear inverse problem.
Classical regularization terms utilize local structural patterns and are built on the assumption that images are locally smooth except at the edges. Some representative works in the literature are the total variation (TV), half-quadratic formulation, and Mumford-Shah (MS) models. These regularization terms demonstrate high effectiveness in preserving edges and recovering smooth regions. However, they usually smear out image details and cannot deal well with fine structures, since they only exploit local statistics, neglecting nonlocal statistics of images.
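As an illustration of such a local regularization term, a minimal MATLAB sketch of TV-style denoising by gradient descent follows. The weight lambda, step size, iteration count and smoothing constant are arbitrary assumptions, not values from any of the works cited here:

% Minimal gradient-descent sketch of TV-regularized denoising:
% minimize 0.5*||x - y||^2 + lambda*TV(x), with a smoothed TV term.
y = im2double(imread('cameraman.tif'));          % observed image (example input)
x = y; lambda = 0.1; tau = 0.2; eps2 = 1e-6;
for k = 1:100
    gx = [diff(x,1,2), zeros(size(x,1),1)];      % horizontal gradient
    gy = [diff(x,1,1); zeros(1,size(x,2))];      % vertical gradient
    mag = sqrt(gx.^2 + gy.^2 + eps2);            % smoothed gradient magnitude
    nx = gx ./ mag; ny = gy ./ mag;              % normalized gradient field
    div = [nx(:,1), diff(nx,1,2)] + [ny(1,:); diff(ny,1,1)];  % divergence
    x = x - tau * ((x - y) - lambda * div);      % gradient step
end
imshow(x);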

CHAPTER 2
PROBLEM IDENTIFICATION
An SVM is a binary classifier, that is, the class labels can only take two values: ±1. The limitations of the existing approach are:
Cannot predict multiple results with an SVM binary classifier.
This binary classification can classify only the normal and abnormal types.
Not able to classify multiple stages with this classifier.


CHAPTER 3
LITERATURE REVIEW
3.1 Introduction
Cavitation in the lung parenchyma is a hallmark sign of tuberculosis, a common deadly infectious disease. It is defined as a gas-filled space within a pulmonary consolidation, a mass, or a nodule, produced by the expulsion of the necrotic part of the lesion via the bronchial tree. Cavities can also occur in diseases such as primary bronchogenic carcinoma, lung cancer, pulmonary metastasis and other infections. Cavities are quite visible and distinct in CT images but are often barely visible in chest radiographs due to other superimposed 3D lung structures in the 2D projection image. In chest radiographs, the appearance of cavities is hazy, and the cavity walls are often ill-defined or completely invisible. This poses a big problem for radiologists to detect and accurately segment cavities in chest radiographs.
We use a dynamic programming based approach for cavity border segmentation. The center of the cavity is taken as an input to define the region of interest for dynamic programming. A pixel classifier is trained to discriminate between cavity borders and normal lung pixels using texture, Hessian and location based features, constructing a cavity likelihood map. This likelihood map is then used as a cost function in polar space to find an optimal path along the cavity border. The proposed technique is tested on a large cavity dataset and the Jaccard overlap measure is used to calculate the segmentation accuracy of our system.

3.2 SEGMENTATION

Image segmentation refers to the process of partitioning a digital image into multiple segments, i.e. sets of pixels; pixels in a region are similar according to some homogeneity criteria such as colour, intensity or texture, so as to locate and identify objects and boundaries in an image. Practical applications of image segmentation range from filtering of noisy images to medical applications (locating tumors and other pathologies, measuring tissue volumes, computer-guided surgery, diagnosis, treatment planning, study of anatomical structure), locating objects in satellite images (roads, forests, etc.), face recognition, fingerprint recognition, etc.
3.2.1 Cavity segmentation
We present a novel technique to automatically segment cavities based on dynamic programming, which uses the likelihood map output of a pixel classifier as cost function. We have validated our results against those obtained by three human expert readers on a large dataset including prominent as well as subtle cavities. Our results are very encouraging and comparable with the degree of overlap between trained human readers and a chest radiologist. The accuracy of our technique for difficult cavities can be increased by improving the pixel classifier and optimizing the parameters for dynamic programming. It may be possible to develop pixel-based features more specific to cavity borders so as to differentiate them from ribs and other bone structures. Such a tool could be very helpful in treatment monitoring for tuberculosis.

3.3 Review of Papers

An improved fluid vector flow for cavity segmentation in chest radiographs (2010): Xu, T. and Cheng, I. present tuberculosis detection. Assessing the size of a cavity and its variation between temporal scans is important for disease diagnosis and for measuring the response to therapy. Studies have shown the existence of cavitation in post-primary tuberculosis (TB), which is even higher in TB patients having diabetes. The number and the size of cavities is a vital element in tuberculosis scoring systems for chest radiographs. Small agreement (0.55 kappa statistic) has been reported on detection of cavities in 56 chest radiographs obtained from a TB screening database. Automated detection and segmentation of cavities is a less explored research area. A detection system for cavities in chest radiographs for screening of TB has been proposed, based on a supervised learning approach in which candidates are segmented using a mean shift segmentation technique with adaptive thresholding for initial contour placement, followed by segmentation using a snake model. Segmented candidates are then classified as cavity or non-cavity candidates using a Bayesian classifier trained on gradient inverse coefficient of variation and circularity measure features. The technique was tested on only 16 cavity chest radiographs. A threshold on the Tanimoto overlap measure has been used to classify detected cavity regions as true or false positives; the accuracy of contour segmentation of cavities was not mentioned in that work. Cavity segmentation based on an improved edge-based fluid vector flow snake model has also been proposed; this was validated on 20 chest radiographs and resulted in a Jaccard overlap degree of 68.8%.
Clavicle segmentation in chest radiographs: in the year 2012, Laurens Hogeweg [3] presents automated delineation of anatomical structures in chest radiographs, which is difficult due to superimposition of multiple structures. In this work an automated technique to segment the clavicles in posterior-anterior chest radiographs is presented, in which three methods are combined. Pixel classification is applied in two stages and separately for the interior, the border and the head of the clavicle. This is used as input for active shape model segmentation. Finally, dynamic programming is employed with an optimized cost function that combines appearance information of the interior of the clavicle, the border and the head with shape information derived from the active shape model. The 2D projection in a radiograph causes several other structures to overlap the clavicle; notably these are the ribs, the mediastinum and the larger vessels of the pulmonary vessel tree. In this paper the focus of the segmentation algorithm is the part of the clavicle contained inside the projection of the lung fields and the mediastinum; the lateral parts at the acromial end outside the lung fields are not considered. Obtaining an accurate segmentation of the clavicles is useful for a number of applications. The segmentation can be used to digitally subtract the clavicle from the radiograph. Accurate localization of the medial parts of the clavicles can also serve to automatically determine possible rotation of the ribcage, an important quality aspect of chest radiographs. When chest radiographs are rotated, false abnormalities might appear in either or both of the lung fields due to apparent changes in parenchymal density.
In the year 2011, Stefan Jaeger et al. [4] present the detection of TB and other diseases in CXRs as a pattern-recognition problem. The algorithms are developed using X-rays from the Japanese Society of Radiological Technology database. The preprocessing step first enhances the contrast of the image using a histogram equalization technique. The next step is lung field extraction from the other structures in the X-ray, such as the heart, clavicles and ribs, based on an adaptive segmentation method. Deviations from the lung shape and increased lung opacity indicate abnormalities, such as consolidations or nodules. These abnormalities are captured with a bag-of-features approach that includes descriptors for shape and texture. To detect nodules, for example, they first applied a Gaussian filter and computed the eigenvalues of the Hessian matrix, then computed a multi-scale similarity measure that responds to spherical blobs with high curvature. Finally these features are used to train a binary classifier that discriminates between normal and abnormal CXRs. The result is a preliminary system that is capable of detecting some manifestations of disease in CXRs; the algorithms can be implemented on any portable X-ray unit.
In the year 2002, Bram van Ginneken et al. [5] present a fully automatic method to detect abnormalities in frontal chest radiographs, which are aggregated into an overall abnormality score. The method is aimed at finding abnormal signs of a diffuse textural nature, such as are encountered in mass chest screening against tuberculosis (TB). The scheme starts with automatic segmentation of the lung fields, using active shape models. The segmentation is used to subdivide the lung fields into overlapping regions of various sizes. Texture features are extracted from each region, using the moments of responses to a multiscale filter bank. Difference features are obtained by subtracting feature vectors from corresponding regions in the left and right lung fields. A separate training set is constructed for each region. All regions are classified by voting among the nearest neighbors, with leave-one-out. Next, the classification results of each region are combined, using a weighted multiplier in which regions with higher classification reliability weigh more heavily. This produces an abnormality score for each image. The method is evaluated on two databases. The first database was collected from a TB mass chest screening program, from which 147 images with textural abnormalities and 241 normal images were selected. Although this database contains many subtle abnormalities, the classification has a sensitivity of 0.86 at a specificity of 0.50 and an area under the receiver operating characteristic (ROC) curve of 0.820. The second database consists of 100 normal images and 100 abnormal images with interstitial disease. For this database, the results were a sensitivity of 0.97 at a specificity of 0.90 and an area under the ROC curve of 0.986.
In the year 2000, Bram van Ginneken et al. [6] present algorithms for the automatic delineation of lung fields in chest radiographs, developing a rule-based scheme and pixel classification. The rule-based approach builds on the observation that the borders between anatomical structures in chest radiographs largely coincide with edges and ridges in the image. Segmentation can also be treated as a pixel classification problem by calculating a feature vector for each pixel in the input image; the output is the anatomical class. Although different types of classifiers will obviously lead to different results, the performance of these segmentation algorithms depends mostly on the features of the input vector. As features they use pixel location, pixel intensity, entropy, and the corrected location computed from a scaling and translation obtained from the rule-based scheme. A hybrid system combines both approaches. The performance of the hybrid scheme turns out to be accurate and robust; the accuracy is 0.969 ± 0.00803, and above 94% for all 115 test images.

In the year 2013, H.B. Rachana et al. [7] present TB detection based on microscopic sputum examination using the Ziehl-Neelsen stain (ZN-stain) method. The developed algorithm detects TB bacilli automatically. This automated system reduces fatigue by providing images on the screen and avoiding visual inspection of microscopic images. The system has a high degree of accuracy and specificity and better speed in detecting TB bacilli. The method is simple and inexpensive for use in rural/remote areas in the emerging economies. A segmentation algorithm is developed to automate the process of detection of TB using digital microscopic images of different subjects. A performance comparison of clustering and thresholding algorithms for segmenting TB bacilli in ZN-stained tissue slide images is carried out. The results presented showed that a more convincing segmentation performance was achieved by using the clustering methods, as compared to the thresholding method. These results also suggest that k-means clustering is the best method for segmenting the bacilli, as it is highly sensitive to the TB pixels.
In the year 2000, Bram van Ginneken et al. [8] present chest radiography as one of the key techniques for investigating suspected tuberculosis (TB). Computerized reading of chest radiographs (CXRs) is an appealing concept because there is a severe shortage of human experts trained to interpret CXRs in countries with a high prevalence of TB. They describe a comprehensive computerized system for the detection of abnormalities in CXRs and evaluate the system on digital data from a TB prevalence survey. The system contains algorithms to normalize the images, segment the lung fields, analyze the shape of the segmented lungs, detect textural abnormalities, measure bluntness of the costophrenic angles and quantify the asymmetry in the lung fields. These subsystems are combined with a Random Forest classifier into an overall score indicating the abnormality of the radiograph. The results approach the performance of an independent human reader.
In the year 2013, Sema Candemir et al. [9] address, for computer-aided diagnosis of digital CXRs, the automatic detection of the lung regions: a non-rigid registration-driven robust lung segmentation method using image retrieval-based patient-specific adaptive lung models that detects lung boundaries, surpassing state-of-the-art performance. The method consists of three main stages: (i) a content-based image retrieval approach for identifying training images with masks most similar to the patient CXR, using a partial Radon transform and Bhattacharyya shape similarity measure, (ii) creating the initial patient-specific anatomical model of lung shape using SIFT-flow for deformable registration of training masks to the patient CXR, and (iii) extracting refined lung boundaries using a cavity segmentation optimization approach with a customized energy function.
In the year 2002, Bram van Ginneken et al. [10] present an active shape model segmentation scheme that is steered by optimal local features, contrary to the normalized first-order derivative profiles of the original formulation. A nonlinear kNN classifier is used, instead of the linear Mahalanobis distance, to find optimal displacements for landmarks. For each of the landmarks that describe the shape, at each resolution level taken into account during the segmentation optimization procedure, a distinct set of optimal features is determined. The selection of features is automatic, using the training images and sequential forward and backward feature selection. The new approach is tested on synthetic data and in four medical segmentation tasks: segmenting the right and left lung fields in a database of 230 chest radiographs, and segmenting the cerebellum and corpus callosum in a database of 90 slices from MRI brain images. In all cases, the new method produces significantly better results in terms of an overlap error measure (p < 0.001 using a paired T-test) than the original active shape model scheme.
In the year 2011, Haithem Boussaid et al. [11] present a machine learning approach to improve shape detection accuracy in medical images with deformable contour models (DCMs). DCMs can efficiently recover globally optimal solutions that take into account constraints on shape and appearance in the model fitting criterion. The model can also deal with global scale variations by operating in a multi-scale pyramid. The main contribution consists in formulating the task of learning the DCM score function as a large-margin structured prediction problem. The algorithm trains DCMs in a joint manner, where all the parameters are learned simultaneously, while rich local features are used for landmark localization. The method is then evaluated on lung field, heart and clavicle segmentation tasks using 247 standard posterior-anterior (PA) chest radiographs from the Segmentation in Chest Radiographs (SCR) benchmark. DCMs systematically outperform state-of-the-art methods according to a host of validation measures including the overlap coefficient, mean contour distance and pixel error rate.

CHAPTER 4
PROPOSED SYSTEM
Tuberculosis is a major health threat in many regions of the world, and diagnosing it still remains a challenge. When left undiagnosed and thus untreated, mortality rates of patients with tuberculosis are high, yet standard diagnostics still rely on methods developed in the last century. We present an automated approach for detecting tuberculosis in conventional posterior-anterior chest radiographs. The first step is to remove the noise from the images; for filtering the images we use the Wiener filter. In a second step we use a cavity segmentation approach and model the lung boundary detection with an objective function; "cavity segmentation" here is applied specifically to those models which perform a max-flow/min-cut optimization. After lung segmentation we extract three kinds of features: LBP, HOG and HIE features. The image is then classified using a binary classifier.

4.1 Modules
Preprocessing
Cavity Segmentation
LBP Feature Extraction
HOG Feature Extraction
HIE (Hessian Image Enhancement) Feature Extraction
SVM Classifier

Figure 4.1: System architecture. Input images pass through preprocessing, cavity segmentation and feature extraction (LBP, HOG and HIE features), and an SVM classifier compares the extracted features against the database to produce the result.

Figure 4.1 shows the processing pipeline. First the input image is given; it then moves through the preprocessing step, in which noise is removed from the image. After that it is sent to the segmentation stage: using cavity segmentation, the lungs are segmented. Then it goes through the feature extraction part, where there are three types of feature extraction, namely LBP, HOG and HIE. Finally the features are sent to the SVM classifier, which classifies the image by comparison with the database and produces the result: either normal or abnormal.
4.2 Modules Description

4.2.1 Preprocessing

In the pre-processing step we first apply Gaussian filtering to the input image. Gaussian filtering is often used to remove noise from an image. The Gaussian filter is a windowed filter of the linear class; by its nature it is a weighted mean. The Gaussian smoothing operator performs a weighted average of surrounding pixels based on the Gaussian distribution. It is used to remove Gaussian noise and is a realistic model of a defocused lens. Sigma defines the amount of blurring, and the radius controls how large the template is; large values of sigma will only give large blurring for larger template sizes.
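A minimal MATLAB sketch of this smoothing step follows; the input filename, the sigma value and the template size are illustrative choices, not parameters taken from this work (the GUI code later in this report actually uses a Wiener filter for the same purpose):

% Smooth a chest X-ray with a Gaussian kernel built via fspecial.
I = im2double(imread('chest_xray.jpg'));   % hypothetical input file
sigma = 2;                                 % amount of blurring
hsize = 2*ceil(3*sigma) + 1;               % template large enough for this sigma
g = fspecial('gaussian', hsize, sigma);    % Gaussian template (weighted mean)
Ismooth = imfilter(I, g, 'replicate');     % weighted average of surrounding pixels
imshowpair(I, Ismooth, 'montage');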
4.2.2 Contour Segmentation
The accuracy of such techniques is highly dependent on initial contour initialization or seed point localization. Most of these methods assume the foreground object to have a uniform structure which is different from the background pixels. In the case of cavities, only the border is visible, whereas the inside of the cavity shares similar characteristics with other lung tissues due to the 2D projection. To address these drawbacks, we propose a dynamic programming based solution for cavity segmentation. Given a cost image, dynamic programming can be used to find a minimum (or maximum) cost path between two pixels. Since cavities are mostly elliptical in shape, the optimal path calculation is done in polar space. The polar image is constructed by extracting a circular region of interest (ROI) of radius R around the seed point given as input by the user. The start and end points for the path calculation are set to the same location, to ensure a closed contour when the maximum cost path is projected back to the original image space.
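Since the report does not list this routine, here is a minimal MATLAB sketch of the polar-space dynamic programming step described above. The angular sampling, the one-pixel smoothness constraint and taking the simple maximum at the last column are illustrative assumptions; in the actual method the cost image is the pixel-classifier likelihood map and the start and end points coincide:

function border = polar_dp_contour(cost, seed, R)
% Trace a closed contour around 'seed' by dynamic programming in polar space.
% cost: likelihood map (higher = more likely cavity border); seed: [row col].
nTheta = 180; nR = R;                      % angular and radial sampling
theta = linspace(0, 2*pi, nTheta);
P = zeros(nR, nTheta);
for t = 1:nTheta                           % sample the circular ROI into polar space
    r = (1:nR)';
    rows = round(seed(1) + r*sin(theta(t)));
    cols = round(seed(2) + r*cos(theta(t)));
    ok = rows>=1 & rows<=size(cost,1) & cols>=1 & cols<=size(cost,2);
    P(ok, t) = cost(sub2ind(size(cost), rows(ok), cols(ok)));
end
% DP: maximize accumulated border likelihood column by column, letting the
% radius change by at most one pixel per angular step (smoothness).
acc = -inf(nR, nTheta); acc(:,1) = P(:,1);
back = zeros(nR, nTheta);
for t = 2:nTheta
    for r = 1:nR
        rr = max(1,r-1):min(nR,r+1);
        [best, k] = max(acc(rr, t-1));
        acc(r,t) = best + P(r,t);
        back(r,t) = rr(k);
    end
end
[~, r] = max(acc(:,end));                  % backtrack the maximum-cost path
path = zeros(1,nTheta); path(end) = r;
for t = nTheta:-1:2, path(t-1) = back(path(t), t); end
% project the polar path back to image coordinates (row, col)
border = [seed(1) + path'.*sin(theta'), seed(2) + path'.*cos(theta')];
end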


4.2.3 LBP Feature Extraction


Local binary patterns (LBP) are a type of feature used for classification in computer vision. The LBP feature vector is created in the following manner:

Divide the examined window into cells (e.g. 16x16 pixels for each cell).

For each pixel in a cell, compare the pixel to each of its 8 neighbors (on its left-top, left-middle, left-bottom, right-top, etc.). Follow the pixels along a circle, i.e. clockwise or counter-clockwise.

Where the center pixel's value is greater than the neighbor's value, write "1"; otherwise, write "0". This gives an 8-digit binary number (which is usually converted to decimal for convenience).

Compute the histogram, over the cell, of the frequency of each "number" occurring (i.e., each combination of which pixels are smaller and which are greater than the center).

Optionally normalize the histogram.

Concatenate the (normalized) histograms of all cells. This gives the feature vector for the window.

Initially we separate the image into patches, and for each patch we apply the LBP (Local Binary Pattern) operator.

The LBP operator assigns a label to every pixel of a gray-level image. The label mapped to a pixel is determined by the relationship between this pixel and its eight neighbors. If we let the gray-level image be I, and Z0 be one pixel in this image, we can define the operator as a function of Z0 and its neighbors Z1, ..., Z8, written as:

T = t(Z0, Z0 − Z1, Z0 − Z2, ..., Z0 − Z8).

However, the LBP operator is not directly affected by the gray value of Z0, so we can redefine the function as follows:

T ≈ t(Z0 − Z1, Z0 − Z2, ..., Z0 − Z8).

To simplify the function and ignore the scaling of the grey level, we use only the sign of each element instead of the exact value, so the operator function becomes:

T ≈ t(s(Z0 − Z1), s(Z0 − Z2), ..., s(Z0 − Z8)).
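A minimal MATLAB sketch of the basic 3x3 LBP operator described above, following the sign convention stated in the text (the report's own GUI code calls an external LBP helper instead):

function H = lbp_histogram(I)
% Basic 3x3 LBP: write 1 where the center pixel is greater than the
% neighbor, pack the 8 sign bits into a code, and histogram the codes.
I = double(I);
[m, n] = size(I);
C = I(2:m-1, 2:n-1);                 % center pixels Z0
codes = zeros(m-2, n-2);
% neighbor offsets Z1..Z8, traversed along a circle (clockwise)
dr = [-1 -1 -1  0  1  1  1  0];
dc = [-1  0  1  1  1  0 -1 -1];
for k = 1:8
    nb = I(2+dr(k):m-1+dr(k), 2+dc(k):n-1+dc(k));  % k-th neighbor of every center
    codes = codes + (C > nb) * 2^(k-1);            % sign bit -> binary weight
end
H = histcounts(codes(:), 0:256);     % 256-bin LBP histogram (the feature vector)
end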

4.2.4 HOG Feature Extraction


Histogram of Oriented Gradients (HOG) is a feature descriptor used in computer vision and image processing for the purpose of object detection. The technique counts occurrences of gradient orientation in localized portions of an image. This method is similar to edge orientation histograms, scale-invariant feature transform descriptors, and shape contexts, but differs in that it is computed on a dense grid of uniformly spaced cells and uses overlapping local contrast normalization for improved accuracy. The first step of the calculation is the computation of the gradient values. The most common method is to simply apply the 1-D centered, point discrete derivative mask in one or both of the horizontal and vertical directions; specifically, this method requires filtering the color or intensity data of the image. The second step involves creating the cell histograms. Each pixel within the cell casts a weighted vote for an orientation-based histogram channel based on the values found in the gradient computation. The cells themselves can either be rectangular or radial in shape, and the histogram channels are evenly spread over 0 to 180 degrees or 0 to 360 degrees, depending on whether the gradient is unsigned or signed.
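A minimal sketch using the extractHOGFeatures function from MATLAB's Computer Vision Toolbox, assuming that toolbox is available; the input image and cell size are illustrative choices:

% Compute HOG features for an image and overlay the visualization.
I = imread('cameraman.tif');                 % example input
[hogFeat, hogVis] = extractHOGFeatures(I, 'CellSize', [16 16]);
imshow(I); hold on;
plot(hogVis);                                % overlay the gradient histograms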
4.2.5 HIE Feature Extraction
Cavity walls appear like broken line segments in chest radiographs. This line-like structure can be captured using the eigenvalues of the Hessian matrix H of the Gaussian-filtered image.
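A minimal MATLAB sketch of this idea: filter the image with a Gaussian, form the 2x2 Hessian at every pixel from second derivatives, and take its eigenvalues in closed form. The scale sigma and the line-ness ratio test are illustrative assumptions:

% Eigenvalues of the 2x2 Hessian at every pixel of a Gaussian-filtered image.
I = im2double(imread('cameraman.tif'));     % example input
sigma = 2;
Ig = imgaussfilt(I, sigma);                 % Gaussian filtering at scale sigma
[Ix, Iy]   = gradient(Ig);                  % first derivatives
[Ixx, Ixy] = gradient(Ix);                  % second derivatives
[~,   Iyy] = gradient(Iy);
% Closed-form eigenvalues of [Ixx Ixy; Ixy Iyy] per pixel.
tmp = sqrt((Ixx - Iyy).^2 + 4*Ixy.^2);
lambda1 = 0.5*(Ixx + Iyy + tmp);            % larger eigenvalue
lambda2 = 0.5*(Ixx + Iyy - tmp);            % smaller eigenvalue
% Line-like structures: one large-magnitude eigenvalue, one small.
lineness = abs(lambda1) .* (abs(lambda2) < 0.25*abs(lambda1));
imshow(mat2gray(lineness));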

4.2.6 SVM Classifier


(i) Data setup: our dataset contains three classes, each with N samples. The data is 2D; plot the original data for visual inspection.
(ii) SVM with linear kernel (-t 0): find the best parameter value C using 2-fold cross validation (meaning use 1/2 of the data to train and the other 1/2 to test).
(iii) After finding the best parameter value for C, train on the entire data again using this parameter value.
(iv) Plot the support vectors.
(v) Plot the decision area.
An SVM maps input vectors to a higher dimensional vector space where an optimal hyperplane is constructed. Among the many hyperplanes available, there is only one hyperplane that maximizes the distance between itself and the nearest data vectors of each category. This hyperplane, which maximizes the margin, is called the optimal separating hyperplane, and the margin is defined as the sum of the distances of the hyperplane to the closest training vectors of each category.
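The steps above follow a LIBSVM-style workflow; as a minimal sketch, the same train/cross-validate/predict loop can be written with the fitcsvm function from MATLAB's Statistics and Machine Learning Toolbox. The feature matrix and labels below are synthetic placeholders for the LBP/HOG/HIE features and normal/abnormal labels:

% Train a linear binary SVM on feature vectors and classify a test sample.
% X: one row of [LBP HOG HIE] features per training image (placeholder data),
% y: labels, +1 = abnormal (TB), -1 = normal.
X = [randn(20, 10) + 1; randn(20, 10) - 1];        % synthetic features
y = [ones(20,1); -ones(20,1)];
model = fitcsvm(X, y, 'KernelFunction', 'linear', ...
                'Standardize', true);              % normalize each feature
cvmodel = crossval(model, 'KFold', 2);             % 2-fold cross validation
fprintf('CV error: %.3f\n', kfoldLoss(cvmodel));
label = predict(model, randn(1, 10));              % classify a new sample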

CHAPTER 5
RESULT AND IMPLEMENTATION

5.1 SCREEN SHOTS


function varargout = Main(varargin)
% MAIN M-file for Main.fig
%      MAIN, by itself, creates a new MAIN or raises the existing
%      singleton*.
%
%      H = MAIN returns the handle to a new MAIN or the handle to
%      the existing singleton*.
%
%      MAIN('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in MAIN.M with the given input arguments.
%
%      MAIN('Property','Value',...) creates a new MAIN or raises the
%      existing singleton*. Starting from the left, property value pairs are
%      applied to the GUI before Main_OpeningFcn gets called. An
%      unrecognized property name or invalid value makes property application
%      stop. All inputs are passed to Main_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help Main
% Last Modified by GUIDE v2.5 04-Jul-2014 15:08:48

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @Main_OpeningFcn, ...
                   'gui_OutputFcn',  @Main_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
% --- Executes just before Main is made visible.
function Main_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to Main (see VARARGIN)

% Choose default command line output for Main
handles.output = hObject;
% Update handles structure
guidata(hObject, handles);
% UIWAIT makes Main wait for user response (see UIRESUME)
% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = Main_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;
% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
set(handles.text2,'String','Input Image Is Reading....')
global name pathname image
[filename, pathname] = uigetfile('*.jpg','Select An Image');  % let the user pick an input X-ray
[pathstr, name, ext] = fileparts(filename);
image = imread([pathname filename]);
axes(handles.axes1)
imshow(image);
title('Input Image','fontsize',11,'fontname','Cambria');
axis equal; axis off;
% --- Executes on button press in pushbutton2.
function pushbutton2_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
set(handles.text2,'String','Noise Reduction in lung Image....');
global image preimage
[m, n, c] = size(image);
if c == 3
    image = rgb2gray(image);        % reduce RGB input to a single grey channel
end
preimage = wiener2(image,[3 3]);    % filter the image using a Wiener filter
axes(handles.axes2)
imshow(preimage);
title('Filtered Image','fontsize',11,'fontname','Cambria');
% --- Executes on button press in pushbutton3.
function pushbutton3_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
set(handles.text2,'String','In Image Segmentation Process....');
global preimage
global binaryImage3
vesimage = [];
preimage1 = [];
fontSize = 20;
se = strel('disk',4);
preimage1 = adapthisteq(preimage);                 % contrast enhancement
% top-hat/bottom-hat enhancement of vessel-like structures
vesimage = imsubtract(imadd(preimage1, imtophat(preimage1,se)), ...
                      imbothat(preimage1,se));
image = imresize(vesimage,[512,512]);
preimage = imresize(preimage,[512,512]);
[rows, columns, numberOfColorBands] = size(image);
prompt = 'Enter the Seed Value of Lung [0.5-0.7]';
dialogTitle = 'Enter Threshold';
numberOfLines = 1;
defaultResponse = {'0'};
deg = str2double(cell2mat(inputdlg(prompt, dialogTitle, ...
    numberOfLines, defaultResponse)));
if deg > 0.7 || deg < 0.5
    msgbox('Enter correct Threshold');
    prompt = 'Enter the Seed Value of Lung[0.5 0.7]';
    dialogTitle = 'Enter Threshold';
    numberOfLines = 1;
    defaultResponse = {'0'};
    deg = str2double(cell2mat(inputdlg(prompt, dialogTitle, ...
        numberOfLines, defaultResponse)));
else
    binaryimage = im2bw(preimage, deg);
    axes(handles.axes3)
    imshow(~binaryimage);
    title('Segmented Image','fontsize',11,'fontname','Cambria');
    binaryImage3 = (image > 40) & (image < 150);   % threshold to isolate lung tissue
    [labeledImage, numberOfBlobs] = bwlabel(binaryImage3);
    blobMeasurements = regionprops(labeledImage, 'Area');
    allAreas = [blobMeasurements.Area];
    [sortedAreas, sortIndices] = sort(allAreas, 'descend');
    keeperIndexes = [sortIndices(1), sortIndices(2)];  % keep the two largest blobs (the lungs)
    keeperBlobsImage = ismember(labeledImage, keeperIndexes);
    binaryImage3 = imfill(keeperBlobsImage > 0, 'holes');
    labeledImage = bwlabel(keeperBlobsImage, 8);
    boundaries = bwboundaries(binaryImage3);
    numberOfBoundaries = size(boundaries);
    se = strel('disk',5);
    binaryImage3 = imclose(binaryImage3, se);
    % axes(handles.axes3);
    % imshow(binaryImage3);
    % title('Segmented Lungs','fontsize',11,'fontname','Cambria');
    axis equal; axis off;
    Lrgb = label2rgb(binaryImage3, 'jet', 'w', 'shuffle');
    figure('name','Mapping lung with CT Image'), imshow(preimage), hold on
    himage = imshow(Lrgb);
    set(himage, 'AlphaData', 0.3);
    title('lung segmented Image','fontsize',11,'fontname','Cambria');
end
% --- Executes on button press in pushbutton4.
function pushbutton4_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% set(handles.text2,'String','Analysis the Lung Cavity....');
% global binaryImage3 image
% im1 = image;
% prompt = {'Enter the value of Difference of the image[0.2-0.4]:', ...
%           'Enter the value of alpha[0.002-0.005]'};
% name = 'Shape Analysis';
% numlines = 1;
% defaultanswer = {'0.4','0.005'};
% answer = inputdlg(prompt,name,numlines,defaultanswer);
% im1 = imresize(im1,[512 512]);
% iter        = 150;
% dt          = .4;
% alpha       = .005;
% flag_approx = 1;
% figure,
% [result1,pin,pout] = graphcut(im1,iter,dt,alpha,flag_approx,binaryImage3);
% axes(handles.axes4)
% imshow((im1),'InitialMagnification', 200);
% fat_contour(result1);
% title('L*a*b Process','fontsize',11,'fontname','Cambria');
% global image;
% fprintf('%s', image);
% I = imread(image);
% m = zeros(size(I,2),size(I,2));   %-- create initial mask
% m(88:180,28:200) = 1;
% I = imresize(I,.5);               %-- make image smaller
% m = imresize(m,.5);               %   for fast computation
% figure,imshow(m)
% seg = region_seg(I, m, 250);      %-- Run boundary
% figure,imshow(~seg)

% cavity segmentation
global name pathname image
[filename, pathname] = uigetfile('*.jpg','Select An Image');
[pathstr, name, ext] = fileparts(filename);
image = imread([pathname filename]);
axes(handles.axes3)
m = zeros(size(image,1), size(image,2));   %-- create initial mask
m(123:234, 111:200) = 1;
% m(250:350,250:400) = 1;
image = imresize(image, .5);   %-- make image smaller
m = imresize(m, .5);           %   for fast computation
subplot(2,2,1); imshow(image); title('Input Image');
subplot(2,2,2); imshow(m);     title('Initialization');
subplot(2,2,3); title('Segmentation');
seg = region_seg(image, m, 200);   %-- Run segmentation
subplot(2,2,4); imshow(seg); title('Detected Cavity in lungs');
% --- Executes on button press in pushbutton5.
function pushbutton5_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton5 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
Feature_main1

% --- Executes on button press in pushbutton6.
function pushbutton6_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton6 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on button press in pushbutton7.
function pushbutton7_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton7 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on button press in pushbutton8.
function pushbutton8_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton8 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on button press in pushbutton9.
function pushbutton9_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton9 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on button press in pushbutton10.
function pushbutton10_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton10 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on button press in pushbutton11.
function pushbutton11_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton11 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on button press in pushbutton12.
function pushbutton12_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton12 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on button press in pushbutton13.
function pushbutton13_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton13 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on button press in pushbutton17.
function pushbutton17_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton17 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on button press in pushbutton18.
function pushbutton18_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton18 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on button press in pushbutton19.
function pushbutton19_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton19 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on button press in pushbutton20.
function pushbutton20_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton20 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on button press in pushbutton21.
function pushbutton21_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton21 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% --- Executes on button press in pushbutton22.
function pushbutton22_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton22 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
function varargout = Feature_main1(varargin)
% FEATURE_MAIN1 M-file for Feature_main1.fig
%      FEATURE_MAIN1, by itself, creates a new FEATURE_MAIN1 or raises the
%      existing singleton*.
%
%      H = FEATURE_MAIN1 returns the handle to a new FEATURE_MAIN1 or the
%      handle to the existing singleton*.
%
%      FEATURE_MAIN1('CALLBACK',hObject,eventData,handles,...) calls the
%      local function named CALLBACK in FEATURE_MAIN1.M with the given
%      input arguments.
%
%      FEATURE_MAIN1('Property','Value',...) creates a new FEATURE_MAIN1 or
%      raises the existing singleton*. Starting from the left, property
%      value pairs are applied to the GUI before Feature_main1_OpeningFcn
%      gets called. An unrecognized property name or invalid value makes
%      property application stop. All inputs are passed to
%      Feature_main1_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help Feature_main1
% Last Modified by GUIDE v2.5 25-Feb-2015 15:07:04

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @Feature_main1_OpeningFcn, ...
                   'gui_OutputFcn',  @Feature_main1_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before Feature_main1 is made visible.
function Feature_main1_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to Feature_main1 (see VARARGIN)

% Choose default command line output for Feature_main1
handles.output = hObject;
% Update handles structure
guidata(hObject, handles);
% UIWAIT makes Feature_main1 wait for user response (see UIRESUME)
% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = Feature_main1_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;
% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
set(handles.text4,'String','Extracting LBP Features....');
global binaryImage3 image lung
global lbpfea
[m, n, c] = size(binaryImage3);
image = imresize(image,[m n]);
lung = zeros(m,n);
lung(binaryImage3) = image(binaryImage3);   % keep only the pixels inside the lung mask
% feature = imhist(lung);
% figure,plot(feature);
% set(handles.uitable1,'data',feature);
SP = [-1 -1; -1 0; -1 1; 0 -1; 0 1; 1 -1; 1 0; 1 1];   % 8-neighbor sampling points
LBPim1 = LBP(lung,SP,0,'i');
axes(handles.axes2);
imshow(LBPim1);
title('LBP Image','fontsize',11,'fontname','Cambria');
figure('name','Histogram of LBP Image'), imhist(LBPim1);
title('Histogram of Images...');
lbpfea = imhist(LBPim1);                    % LBP histogram used as the feature vector
set(handles.uitable1,'visible','on');
set(handles.text1,'visible','on');
set(handles.uitable1,'data',lbpfea);
% --- Executes on button press in pushbutton2.
function pushbutton2_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
set(handles.text4,'String','Extracting HOG Features....');
global HOGfea
global lung
HOGfea = HOG(lung);            % HOG descriptor of the segmented lung
set(handles.uitable2,'visible','on');
set(handles.text2,'visible','on');
set(handles.uitable2,'data',HOGfea);
% --- Executes on button press in pushbutton3.
function pushbutton3_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
set(handles.text4,'String','Extracting HIE Features....');
global lbpfea HOGfea
global lung
[Fdir, nrm] = tamura_dir(lung);
features = [lbpfea' HOGfea' Fdir nrm];   % concatenate LBP, HOG and HIE features
set(handles.uitable3,'visible','on');
set(handles.text3,'visible','on');
set(handles.uitable3,'data',features');
save features features
% --- Executes on button press in pushbutton4.
function pushbutton4_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
set(handles.text4,'String','Classifying Lungs....');
Result1
% load target
% groups = target;
% figure('Name','Graph for Classification Process')
% ylim([-1 3]);
% hold on
% plot(groups(1:6),'g.');
% hold on
% plot([-3*ones(1,6) groups(7:16)],'r+');
% title('CLASSIFICATION','fontsize',20,'fontname','Baskerville Old face','fontweight','bold');
% xlabel('Gropus','fontsize',12,'fontname','Times New Romen','fontweight','bold')
% ylabel('Test Feature','fontsize',12,'fontname','Times New Romen','fontweight','bold')
% legend('Normal','Tuborclusis');


CHAPTER 6

FUTURE ENHANCEMENT
In future work we are going to implement a multi-class RVM classifier with some additional extracted features. A system framework is presented to recognize multiple kinds of abnormality stages using an RVM multi-class classifier with a binary tree architecture. The idea of hierarchical classification is introduced, and multiple RVMs are aggregated to accomplish the recognition. Each RVM in the multi-class classifier is trained separately to achieve its best classification performance by choosing proper features before the RVMs are aggregated. The main advantage of the multi-class classification is that it divides cases into the normal stage, beginning stage, moderate stage or severe stage.
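As a minimal sketch of the binary-tree aggregation described above: MATLAB has no built-in RVM, so binary SVMs stand in for the RVM nodes here, and the features and stage labels are synthetic placeholders. The structure, one binary decision per tree node, is the point of the example:

% Binary-tree multi-class scheme: the root classifier separates normal from
% abnormal; a second node splits abnormal cases into beginning vs advanced;
% a third splits advanced into moderate vs severe. Each node could be
% trained on its own feature subset, as the text suggests.
X = randn(80, 10);                                  % placeholder features
stage = randi(4, 80, 1);                            % 1=normal 2=beginning 3=moderate 4=severe
node1 = fitcsvm(X, stage > 1);                      % normal vs abnormal
node2 = fitcsvm(X(stage>1,:), stage(stage>1) > 2);  % beginning vs moderate/severe
node3 = fitcsvm(X(stage>2,:), stage(stage>2) > 3);  % moderate vs severe
x = randn(1, 10);                                   % new sample
if ~predict(node1, x),     s = 1;                   % normal
elseif ~predict(node2, x), s = 2;                   % beginning stage
elseif ~predict(node3, x), s = 3;                   % moderate stage
else,                      s = 4;                   % severe stage
end
fprintf('Predicted stage: %d\n', s);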

CHAPTER 7

CONCLUSION
We have proposed a novel technique to automatically segment cavities based on dynamic programming, which uses the likelihood map output of a pixel classifier as cost function. We have validated our results against those obtained by three human expert readers on a large dataset including prominent as well as subtle cavities. Our results are very encouraging and comparable with the degree of overlap between trained human readers and a chest radiologist. Cases with low inter-observer agreement often contain subtle cavities or cavities in diseased regions, which indicates that accurate cavity segmentation is a difficult problem. Our work has a few limitations. In some cases the dynamic programming is attracted to rib borders. The accuracy of our technique for difficult cavities can be increased by improving the pixel classifier and optimizing the parameters for dynamic programming. It may be possible to develop pixel-based features more specific to cavity borders so as to differentiate them from ribs and other bone structures. Alternatively, we could include a rib suppression technique.
The dynamic programming path can be calculated more precisely if a few reference points on the contour are clicked and the path is forced to pass through those points. Providing more than one reference point can be useful for subtle cavities for precise boundary segmentation. Such a tool could be very helpful in treatment monitoring for tuberculosis.


REFERENCES
[1] Antani, S., Candemir, S., Folio, L., Jaeger, S., Karargyris, A., Siegelman, J., & Thoma, G., 2013, "Automatic screening for tuberculosis in chest radiographs: a survey," Quant. Imag. Med. Surg., vol. 3, no. 2, pp. 89-99.
[2] Antani, S., Jaeger, S., Karargyris, A., & Thoma, G., 2012, "Detecting tuberculosis in radiographs using combined lung masks," in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., pp. 4978-4981.
[3] Antani, S., Candemir, S., Jaeger, S., Palaniappan, K., & Thoma, G., 2012, "Graph-cut based automatic lung boundary detection in chest radiographs," in Proc. IEEE Healthcare Technol. Conf.: Translat. Eng. Health Med., pp. 31-34.
[4] Antani, S., Jaeger, S., & Thoma, G., 2011, "Tuberculosis screening of chest radiographs," SPIE Newsroom.
[5] Doi, K., Katsuragawa, S., ter Haar Romeny, B., van Ginneken, B., & Viergever, M., 2002, "Automatic detection of abnormalities in chest radiographs using local texture analysis," IEEE Trans. Med. Imag., vol. 21, no. 2, pp. 139-149.
[6] ter Haar Romeny, B., & van Ginneken, B., 2000, "Automatic segmentation of lung fields in chest radiographs," Med. Phys., vol. 27, no. 10, pp. 2445-2455.
[7] Rachna, H.B., & Mallikarjuna Swamy, 2013, "Detection of Tuberculosis Bacilli using Image Processing Techniques," International Journal of Soft Computing and Engineering, ISSN 2231-2307, vol. 3.
[8] Candemir, S., Jaeger, S., Palaniappan, K., Musco, J., Singh, R., Xue, Z., Karargyris, A., Antani, S., Thoma, G., & McDonald, C., 2013, "Lung segmentation in chest radiographs using anatomical atlases with non-rigid registration," IEEE Trans. Med. Imag.
[9] van Ginneken, B., Frangi, A., Staal, J., ter Haar Romeny, B., & Viergever, M., 2002, "Active shape model segmentation with optimal features," IEEE Trans. Med. Imag., vol. 21, no. 8, pp. 924-933.
[10] van Ginneken, B., Philipsen, R.H.H.M., Hogeweg, L., & Maduskar, P., 2000, "Automated scoring of chest radiographs for tuberculosis prevalence," IEEE Trans. Med. Imag., vol. 21, no. 8, pp. 924-933.
[11] Boussaid, H., Kokkinos, I., & Paragios, N., 2011, "Discriminative Learning of Deformable Contour Models," IEEE Trans. Med. Imag.
