Tuberculosis (TB) is a common disease with high mortality and morbidity rates
worldwide. The chest radiograph (CXR) is frequently used in diagnostic algorithms for
pulmonary TB. Automatic systems to detect TB on CXRs can improve the efficiency of such
diagnostic algorithms. The diverse manifestation of TB on CXRs from different populations
requires a system that can be adapted to deal with different types of abnormalities.
A computer aided detection (CAD) system was developed which combines the results of
supervised subsystems detecting textural, shape, and focal abnormalities into one TB score. The
textural abnormality subsystem provided several subscores analyzing different types of textural
abnormalities and different regions in the lung. The shape and focal abnormality subsystem each
provided one subscore. A general framework was developed to combine an arbitrary number of
subscores: subscores were normalized, collected in a feature vector and then combined using a
supervised classifier into one combined TB score.
Two databases, both consisting of 200 digital CXRs, were used for evaluation, acquired
from (A) a Western high-risk group screening and (B) TB suspect screening in Africa. The
subscores and combined TB score were compared to two references: an external, nonradiological reference and a radiological reference determined by a human expert. The area
under the receiver operating characteristic (ROC) curve was used as the performance measure.
The combined TB score performed better than the individual subscores and approaches
performance of human observers with respect to the external and radiological reference.
Supervised combination to compute an overall TB score allows for a necessary adaptation of the
CAD system to different settings or different operational requirements.
CHAPTER 1
INTRODUCTION
Image processing operations can be roughly divided into three major categories:
Image Compression, Image Enhancement and Restoration, and Measurement Extraction.
Image Compression involves reducing the amount of memory needed to store a digital image. Image
defects, which could be caused by the digitization process or by faults in the imaging setup (for example, bad lighting), can be corrected using Image Enhancement techniques.
Once the image is in good condition, the Measurement Extraction operations can be used
to obtain useful information from the image. The Image Enhancement and Measurement
Extraction operations described here apply to 256-level grey-scale images. This means that each pixel in the image is
stored as a number between 0 and 255, where 0 represents a black pixel, 255 represents a
white pixel, and values in between represent shades of grey. These operations can be
extended to operate on colour images.
1.1 Introduction to Image Processing
Image processing is a method to convert an image into digital form and perform
some operations on it, in order to get an enhanced image or to extract some useful
information from it. It is a type of signal processing in which the input is an image, such as a video
frame or photograph, and the output may be an image or characteristics associated with that
image. Usually an image processing system treats images as two-dimensional
signals while applying established signal processing methods to them. Image processing
basically includes the following three steps.
Importing the image with an optical scanner or by digital photography.
Analyzing and manipulating the image, which includes data compression,
image enhancement, and spotting patterns that are not visible to human eyes, as in satellite
photographs.
Output, the last stage, in which the result can be an altered image or a report based
on the image analysis.
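As a minimal illustration of these three steps, the following Python sketch (not part of the system described in this report; the synthetic array stands in for a scanned image) imports an image as an array, enhances it by a simple contrast stretch, and outputs a report of the result:

```python
import numpy as np

def contrast_stretch(img):
    """Linearly rescale pixel values to span the full 0-255 range."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return img.astype(np.uint8)
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

# Step 1, "importing": a synthetic low-contrast 8-bit image stands in for a scan.
image = (np.arange(64 * 64).reshape(64, 64) % 50 + 100).astype(np.uint8)

# Step 2, "analyzing and manipulating": enhancement by contrast stretching.
enhanced = contrast_stretch(image)

# Step 3, "output": an altered image plus a report based on the analysis.
print("before: min=%d max=%d" % (image.min(), image.max()))
print("after:  min=%d max=%d" % (enhanced.min(), enhanced.max()))
```

The low-contrast input (values 100-149) is spread over the full 0-255 grey-scale range described above.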
1.1.2 Types
There are two types of methods used for image processing: analog and digital
image processing. Analog or visual techniques of image processing can be used for
hard copies such as printouts and photographs. Image analysts use various fundamentals of
interpretation while using these visual techniques. The image processing is not just
confined to the area that has to be studied; it also depends on the knowledge of the analyst. Association is
another important tool in image processing through visual techniques. Analysts therefore apply a
combination of personal knowledge and collateral data to image processing.
Digital processing techniques help in the manipulation of digital images using
computers. Raw data from imaging sensors on a satellite platform contains
deficiencies. To overcome such flaws and recover the original information, the data has to
undergo various phases of processing. The three general phases that all types of data have
to undergo when using digital techniques are pre-processing, enhancement and display, and
information extraction.
There are two general groups of images: vector graphics (or line art) and bitmaps
(pixel-based images). Some of the most common file formats are:
GIF
JPEG
TIFF
PS
PSD
Image processing is closely related to computer graphics and computer vision.
1.5 Digital Image Processing
Image Processing Toolbox provides a comprehensive set of reference-standard
algorithms, functions, and apps for image processing, analysis, visualization, and
algorithm development. It can perform image analysis, image segmentation, image
enhancement, noise reduction, geometric transformations, and image registration. Many
toolbox functions support multicore processors, GPUs, and C-code generation. Image
Processing Toolbox supports a diverse set of image types, including high dynamic range,
gigapixel resolution, embedded ICC profile, and tomography. Visualization functions and
apps let you explore images and videos, examine a region of pixels, adjust color and
contrast, create contours or histograms, and manipulate regions of interest (ROIs). The
toolbox supports workflows for processing, displaying, and navigating large images.
As a fundamental problem in the field of image processing, image restoration has
been extensively studied in the past two decades. It aims to reconstruct the original high-quality image x from its degraded observed version y, which is a typical ill-posed linear
inverse problem.
Classical regularization terms utilize local structural patterns and are built on the
assumption that images are locally smooth except at the edges. Some representative works
in the literature are the total variation (TV), half-quadratic formulation, and Mumford-Shah (MS) models. These regularization terms demonstrate high effectiveness
in preserving edges and recovering smooth regions. However, they usually smear out
image details and cannot deal well with fine structures, since they only exploit local
statistics, neglecting nonlocal statistics of images.
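The regularized restoration idea can be sketched in one dimension. The example below uses a quadratic (Tikhonov) smoothness penalty rather than the TV or MS models named above, because it admits a closed-form solution: minimizing ||x - y||^2 + lambda*||Dx||^2 over x, with D the first-difference operator, gives x = (I + lambda*D'D)^(-1) y. The signal, noise level, and lambda are illustrative:

```python
import numpy as np

def tikhonov_denoise(y, lam):
    """Solve min_x ||x - y||^2 + lam * ||D x||^2, D = first-difference operator."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n matrix: (Dx)_i = x[i+1] - x[i]
    A = np.eye(n) + lam * (D.T @ D)
    return np.linalg.solve(A, y)          # closed-form minimizer

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, np.pi, 100))       # smooth "original" x
noisy = clean + 0.3 * rng.standard_normal(100)   # degraded observation y
restored = tikhonov_denoise(noisy, lam=10.0)

def roughness(x):
    return float(np.sum(np.diff(x) ** 2))        # the ||Dx||^2 penalty

print("roughness noisy   :", roughness(noisy))
print("roughness restored:", roughness(restored))
```

As the text notes for local regularizers, the quadratic penalty smooths noise well but would also smear edges; TV replaces the squared differences with absolute differences to better preserve them.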
CHAPTER 2
PROBLEM IDENTIFICATION
An SVM is a binary classifier; that is, the class labels can only take two values, +1 or -1. The limitations of the existing system are:
An SVM binary classifier cannot predict multiple classes.
This binary classification can distinguish only normal and abnormal cases.
It is not able to classify multiple disease stages.
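This limitation can be seen with a minimal hand-rolled linear classifier (a stand-in for the SVM, not the thesis implementation; the weights and inputs are illustrative): whatever the input, the decision rule sign(w.x + b) can only emit two labels, so disease stages cannot be distinguished without combining several binary classifiers.

```python
import numpy as np

# A toy linear binary classifier: label = sign(w . x + b), as in a linear SVM.
w = np.array([1.0, -0.5])
b = 0.2

def predict(X):
    return np.where(X @ w + b >= 0, 1, -1)   # only two possible labels

X = np.array([[0.0, 0.0], [3.0, 1.0], [-2.0, 4.0], [1.0, 1.0]])
labels = predict(X)
print(labels)            # every prediction is +1 or -1, never a disease "stage"
```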
CHAPTER 3
LITERATURE REVIEW
3.1 Introduction
Cavitation in the lung parenchyma is a hallmark sign of tuberculosis, a common
deadly infectious disease. A cavity is defined as a gas-filled space within a pulmonary
consolidation, a mass, or a nodule, produced by the expulsion of the necrotic part of the
lesion via the bronchial tree. Cavities can also occur in diseases such as primary
bronchogenic carcinoma, lung cancer, pulmonary metastasis, and other infections.
Cavities are quite visible and distinct in CT images but are often barely visible in chest
radiographs due to other superimposed 3D lung structures in the 2D projection image. In
chest radiographs, the appearance of cavities is hazy, and the cavity walls are often ill-defined or completely invisible. This poses a big problem for radiologists seeking to detect and
accurately segment cavities in chest radiographs.
One approach reviewed here is a dynamic programming based method for cavity border segmentation. The center of
the cavity is taken as an input to define the region of interest for dynamic programming.
A pixel classifier is trained to discriminate between cavity borders and normal lung pixels
using texture, Hessian, and location based features, constructing a cavity likelihood map.
This likelihood map is then used as a cost function in polar space to find the optimal path
along the cavity border. The technique was tested on a large cavity dataset, and the
Jaccard overlap measure was used to calculate the segmentation accuracy of the
system.
3.2 SEGMENTATION
Segmentation supports applications such as measuring tissue volumes, computer-guided surgery, and diagnosis. The
number and the size of cavities is a vital element in tuberculosis scoring systems for chest
radiographs. Small agreement (0.55 kappa statistic) has been reported on the detection of
cavities in 56 chest radiographs obtained from a TB screening database. Automated
detection and segmentation of cavities is a less explored research area. One group proposed a
detection system for cavities in chest radiographs for screening of TB. Their system is
based on a supervised learning approach in
which candidates are segmented using a mean shift segmentation technique, with adaptive
thresholding for initial contour placement, followed by segmentation using a snake model.
Segmented candidates are then classified as cavity or non-cavity candidates using a Bayesian
classifier trained on gradient inverse coefficient of variation and circularity measure
features. The technique was tested on only 16 cavity chest radiographs. A threshold on the
Tanimoto overlap measure was used to classify detected cavity regions as true or
false positives. The accuracy of contour segmentation of cavities was not reported
in that work. Another group proposed cavity segmentation based on an improved edge-based fluid vector
flow snake model. This was validated on 20 chest radiographs and resulted in a Jaccard
overlap degree of 68.8%.
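The Jaccard overlap used by these studies (the Tanimoto measure is the same quantity for binary masks) is simply |A intersect B| / |A union B|. A small sketch with hypothetical masks:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard overlap |A & B| / |A | B| of two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0                        # two empty masks agree perfectly
    return np.logical_and(a, b).sum() / union

# Hypothetical masks: an automatic cavity mask and a reference drawn by a reader.
auto = np.zeros((8, 8), dtype=bool); auto[2:6, 2:6] = True   # 16 pixels
ref  = np.zeros((8, 8), dtype=bool); ref[3:7, 3:7]  = True   # 16 pixels
print(jaccard(auto, ref))   # intersection 9, union 23
```

A perfect segmentation scores 1.0; the 68.8% figure above corresponds to a Jaccard value of 0.688.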
Another line of work addresses clavicle segmentation in chest radiographs, restricted to the parts of the clavicles
projected over the lung fields and the mediastinum. The lateral parts at the acromial end
outside the lung fields are not considered. Obtaining an accurate segmentation of the
clavicles is useful for a number of applications. The segmentation can be used to digitally
subtract the clavicle from the radiograph. Accurate localization of the medial parts of the
clavicles can also serve to automatically determine possible rotation of the ribcage, an
important quality aspect of chest radiographs. When chest radiographs are rotated, false
abnormalities might appear in either or both of the lung fields due to apparent changes in
parenchymal density.
In 2011, Stefan Jaeger et al. [4] presented the detection of TB and other
diseases in CXRs as a pattern-recognition problem. The algorithms were developed
using x-rays from the Japanese Society of Radiological Technology database. The
preprocessing step first enhanced the contrast of the image using a histogram equalization
technique. The next step extracts the lung fields from the other structures in the x-ray, such as the heart, clavicles, and ribs, based on an adaptive segmentation method.
Deviations from the lung shape and increased lung opacity indicate abnormalities, such
as consolidations or nodules. These abnormalities are described with a bag-of-features approach that
includes descriptors for shape and texture. To detect nodules, for example, the method first applies a
Gaussian filter and computes the eigenvalues of the Hessian matrix. It then computes a
multi-scale similarity measure that responds to spherical blobs with high
curvature. Finally, these features are used to train a binary classifier that discriminates
between normal and abnormal CXRs. The result is a preliminary system that is
capable of detecting some manifestations of disease in CXRs; the algorithms can be
implemented on any portable x-ray unit.
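The nodule step described in [4], Gaussian smoothing followed by the eigenvalues of the Hessian, can be sketched with NumPy alone (the smoothing scale, blob size, and image are illustrative, not the parameters of the paper). For a bright blob, both Hessian eigenvalues at the center are negative:

```python
import numpy as np

def gaussian_smooth(img, sigma):
    """Separable Gaussian smoothing implemented with 1-D convolutions."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

def hessian_eigenvalues(img):
    """Closed-form eigenvalues of the 2x2 Hessian at every pixel."""
    Iy, Ix = np.gradient(img)
    Iyy, Iyx = np.gradient(Iy)
    Ixy, Ixx = np.gradient(Ix)
    tr = Ixx + Iyy
    det = Ixx * Iyy - Ixy * Iyx
    disc = np.sqrt(np.maximum(tr**2 / 4 - det, 0))
    return tr / 2 - disc, tr / 2 + disc      # lambda1 <= lambda2

# A synthetic bright blob on a dark background stands in for a nodule.
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 4.0 ** 2))
l1, l2 = hessian_eigenvalues(gaussian_smooth(img, sigma=2.0))
print(l1[32, 32], l2[32, 32])   # blob center: both principal curvatures negative
```

Thresholding on the sign and magnitude of these eigenvalue maps is what lets blob-like structures be separated from ridge-like ones such as vessels and ribs.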
In 2002, Bram van Ginneken et al. [5] presented a fully automatic method
to detect abnormalities in frontal chest radiographs, which are aggregated into an
overall abnormality score. The method is aimed at finding abnormal signs of a diffuse
textural nature, such as those encountered in mass chest screening against tuberculosis
(TB). The scheme starts with automatic segmentation of the lung fields using active shape
models. The segmentation is used to subdivide the lung fields into overlapping regions of
various sizes. Texture features are extracted from each region using the moments of
responses to a multiscale filter bank. Difference features are obtained by subtracting
feature vectors from corresponding regions in the left and right lung fields. A separate
training set is constructed for each region. All regions are classified by voting among the
nearest neighbors, with leave-one-out. Next, the classification results of each region
are combined using a weighted multiplier in which regions with higher classification
reliability weigh more heavily. This produces an abnormality score for each image. The
method was evaluated on two databases. The first database was collected from a TB
mass chest screening program, from which 147 images with textural abnormalities and 241
normal images were selected. Although this database contains many subtle abnormalities,
the classification has a sensitivity of 0.86 at a specificity of 0.50 and an area under the
receiver operating characteristic (ROC) curve of 0.820. The second database consists of
100 normal images and 100 abnormal images with interstitial disease. For this database,
the results were a sensitivity of 0.97 at a specificity of 0.90 and an area under the ROC
curve of 0.986.
In 2000, Bram van Ginneken et al. [6] presented algorithms for the
automatic delineation of lung fields in chest radiographs, developing a rule-based scheme
and pixel classification. The rule-based approach rests on the observation that the borders between
anatomical structures in chest radiographs largely coincide with edges and ridges in the
image. Segmentation can also be treated as a pixel classification problem by calculating a
feature vector for each pixel in the input image; the output is the anatomical class. Although
different types of classifiers will obviously lead to different results, the performance of
these segmentation algorithms depends mostly on the features of the input vector. The
features used are pixel location, pixel intensity, entropy, and the corrected location
computed from a scaling and translation derived from the rule-based scheme. A
hybrid system combines both approaches. The performance of the hybrid scheme turns
out to be accurate and robust; the accuracy is 0.969 ± 0.00803, and above 94% for all 115
test images.
More recent work applies deformable contour models (DCMs) to landmark localization. The method was evaluated on lung field, heart, and clavicle
segmentation tasks using 247 standard posterior-anterior (PA) chest radiographs from the
Segmentation in Chest Radiographs (SCR) benchmark. DCMs systematically outperform
state-of-the-art methods according to a host of validation measures, including the
overlap coefficient, mean contour distance, and pixel error rate.
CHAPTER 4
PROPOSED SYSTEM
Tuberculosis is a major health threat in many regions of the world. Diagnosing
tuberculosis still remains a challenge: when left undiagnosed and thus untreated,
mortality rates of patients with tuberculosis are high, yet standard diagnostics still rely on
methods developed in the last century. We present an automated approach for detecting tuberculosis
in conventional posteroanterior chest radiographs. The first step is to remove noise from the
images; for this we use a Wiener filter. In a second step, a
cavity segmentation approach is used and the lung boundary detection is modeled with an
objective function; here "cavity segmentation" refers specifically to those models which
perform a max-flow/min-cut optimization. After lung segmentation, three kinds of
features are extracted: LBP, HOG, and HIE features. Finally, the image is classified
using a binary classifier.
4.1 Modules
Preprocessing
Cavity Segmentation
Feature Extraction (LBP, HOG, and HIE features)
SVM Classification
(Figure 4.1: block diagram of the system. Input Images -> Preprocessing -> Cavity Segmentation -> Feature Extraction (LBP, HOG, HIE) -> SVM Classifier, compared against the Database -> Result.)
Figure 4.1 shows the system flow. First the input image is given; the input image then moves to
the preprocessing step, where the image is filtered. After that it is sent to graph-cut based cavity segmentation, by which the lungs
are segmented. It then goes to the feature extraction part, where three types of features
are extracted: LBP, HOG, and HIE. Finally these are sent to the SVM classifier, which classifies the
image by comparing it with the database, and then produces the result: either normal or
abnormal.
4.2 Modules Description
4.2.1 Preprocessing
Divide the examined window into cells (e.g. 16x16 pixels for each cell).
For each pixel in a cell, compare the pixel to each of its 8 neighbors (on its left-top, left-middle, left-bottom, right-top, etc.). Follow the pixels along a circle,
i.e. clockwise or counter-clockwise.
Where the center pixel's value is greater than the neighbor's value, write "1".
Otherwise, write "0". This gives an 8-digit binary number (which is usually
converted to decimal for convenience).
Compute the histogram, over the cell, of the frequency of each "number"
occurring (i.e., each combination of which pixels are smaller and which are
greater than the center).
Concatenate (normalized) histograms of all cells. This gives the feature vector
for the window.
Initially we separate the image into patches. To each patch of the image we apply
the LBP (Local Binary Pattern).
The LBP operator assigns a label to every pixel of a grey-level image. The label
assigned to a pixel is determined by the relationship between this pixel and its eight
neighbors. If the grey-level image is I, and Z0 is one pixel in this
image, we can define the operator as a function of Z0 and its neighbors Z1, ..., Z8,
written as:
T = t(Z0, Z0-Z1, Z0-Z2, ..., Z0-Z8).
However, the LBP operator is not directly affected by the grey value of Z0, so we
can redefine the function as follows:
T ≈ t(Z0-Z1, Z0-Z2, ..., Z0-Z8).
To simplify the function and ignore the scaling of grey level, we use only the sign
of each element instead of the exact value. So the operator function becomes:
T ≈ t(s(Z0-Z1), s(Z0-Z2), ..., s(Z0-Z8)).
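The steps and the operator T above can be sketched directly (a NumPy illustration, not the thesis MATLAB code, using the convention stated above that a neighbor contributes a 1 where the center pixel Z0 exceeds it):

```python
import numpy as np

def lbp(img):
    """8-neighbor Local Binary Pattern: bit = 1 where center Z0 > neighbor Zi."""
    img = img.astype(int)
    c = img[1:-1, 1:-1]                      # center pixels Z0
    # neighbors Z1..Z8 visited clockwise starting from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        z = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code += (c > z).astype(int) << bit   # s(Z0 - Zi) becomes one bit
    return code

flat = np.full((5, 5), 7)
print(lbp(flat))            # flat patch: no neighbor is smaller, every code is 0
peak = np.zeros((3, 3)); peak[1, 1] = 9
print(lbp(peak))            # single bright center: all 8 bits set, code 255
```

The histogram of these codes over each cell, concatenated across cells, is the LBP feature vector used by the classifier.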
CHAPTER 5
RESULT AND IMPLEMENTATION
5.1 SCREEN SHOTS
%      MAIN('Property','Value',...) creates a new MAIN or raises the
%      existing singleton*. Starting from the left, property value pairs are
%      applied to the GUI before Main_OpeningFcn gets called. An
%      unrecognized property name or invalid value makes property application
%      stop. All inputs are passed to Main_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help Main

% Last Modified by GUIDE v2.5 04-Jul-2014 15:08:48

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @Main_OpeningFcn, ...
                   'gui_OutputFcn',  @Main_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
% --- Executes just before Main is made visible.
% alpha       = .005;
% flag_approx = 1;
% figure,
% [result1,pin,pout] = graphcut(im1,iter,dt,alpha,flag_approx,binaryImage3);
% axes(handles.axes4)
% imshow((im1),'InitialMagnification', 200);
% fat_contour(result1);
% title('L*a*b Process','fontsize',11,'fontname','Cambria');
% global image;
% fprintf('%s', image);
% I = imread(image);
% m = zeros(size(I,2),size(I,2));    %-- create initial mask
% m(88:180,28:200) = 1;
% I = imresize(I,.5);                %-- make image smaller
% m = imresize(m,.5);                %   for fast computation
% figure,imshow(m)
% seg = region_seg(I, m, 250);       %-- Run boundary
% figure,imshow(~seg)

% cavity segmentation
global name pathname image
[filename pathname] = uigetfile('*.jpg','Select An Image');
[pathstr, name, ext] = fileparts(filename);
image = imread([pathname filename]);
axes(handles.axes3)
m = zeros(size(image,1),size(image,2));   %-- create initial mask
m(123:234,111:200) = 1;
% m(250:350,250:400) = 1;
image = imresize(image,.5);               %-- make image smaller
m = imresize(m,.5);                       %   for fast computation
subplot(2,2,1); imshow(image); title('Input Image');
subplot(2,2,2); imshow(m); title('Initialization');
subplot(2,2,3); title('Segmentation');
% handles    structure with handles and user data (see GUIDATA)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help Feature_main1

% Last Modified by GUIDE v2.5 25-Feb-2015 15:07:04
% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @Feature_main1_OpeningFcn, ...
                   'gui_OutputFcn',  @Feature_main1_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
% --- Executes just before Feature_main1 is made visible.
function Feature_main1_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to Feature_main1 (see VARARGIN)
% figure,plot(feature);
% set(handles.uitable1,'data',feature);
SP=[-1 -1; -1 0; -1 1; 0 -1; 0 1; 1 -1; 1 0; 1 1];
LBPim1=LBP(lung,SP,0,'i');
axes(handles.axes2);
imshow(LBPim1);
title('LBP Image','fontsize',11,'fontname','Cambria');
figure('name','Histogram of LBP Image'),imhist(LBPim1);
title('Histogram of Images...');
lbpfea=imhist(LBPim1);
set(handles.uitable1,'visible','on');
set(handles.text1,'visible','on');
set(handles.uitable1,'data',lbpfea);
% --- Executes on button press in pushbutton2.
function pushbutton2_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
set(handles.text4,'String','Extracting HOG Features....');
global HOGfea
global lung
HOGfea=HOG(lung);
set(handles.uitable2,'visible','on');
set(handles.text2,'visible','on');
set(handles.uitable2,'data',HOGfea);
% --- Executes on button press in pushbutton3.
function pushbutton3_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton3 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
set(handles.text4,'String','Extracting HIE Features....');
global lbpfea HOGfea
global lung
[Fdir nrm] = tamura_dir(lung);
features=[lbpfea' HOGfea' Fdir nrm];
set(handles.uitable3,'visible','on');
set(handles.text3,'visible','on');
set(handles.uitable3,'data',features');
save features features
% --- Executes on button press in pushbutton4.
function pushbutton4_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton4 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
set(handles.text4,'String','Classifying Lungs....');
Result1
% load target
% groups=target;
% figure('Name','Graph for Classification Process')
% ylim([-1 3]);
% hold on
% plot(groups(1:6),'g.');
% hold on
% plot([-3*ones(1,6) groups(7:16)],'r+');
% title('CLASSIFICATION','fontsize',20,'fontname','Baskerville Old Face','fontweight','bold');
% xlabel('Groups','fontsize',12,'fontname','Times New Roman','fontweight','bold')
% ylabel('Test Feature','fontsize',12,'fontname','Times New Roman','fontweight','bold')
% legend('Normal','Tuberculosis');
CHAPTER 6
PROPOSED SYSTEM
In the proposed future work we are going to implement a multi-class RVM classifier with some
additional extracted features. A system framework is presented to recognize multiple classes
with an RVM multi-class classifier that has a binary tree architecture. The idea
of hierarchical classification is introduced, and multiple RVMs are aggregated to
accomplish the recognition. Each RVM in the multi-class classifier is trained
separately to achieve its best classification performance by choosing proper features
before they are aggregated. The main advantage of multi-class classification is the ability to divide cases into
the normal, beginning, moderate, or severe stage.
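The binary-tree aggregation can be sketched with simple threshold "classifiers" standing in for the individual RVMs (the feature values, thresholds, and stage encoding below are all illustrative, not from the thesis): the root separates the two milder stages from the two severer ones, and each leaf classifier then splits its own pair.

```python
import numpy as np

class ThresholdClassifier:
    """Stand-in for one binary RVM: trained separately on its own split."""
    def fit(self, x, y):                     # y in {0, 1}
        self.t = (x[y == 0].mean() + x[y == 1].mean()) / 2
        return self
    def predict(self, x):
        return (x >= self.t).astype(int)

# One severity feature per case; stages: 0 normal, 1 beginning, 2 moderate, 3 severe.
x = np.array([0.1, 0.2, 1.0, 1.1, 2.0, 2.1, 3.0, 3.1])
y = np.array([0, 0, 1, 1, 2, 2, 3, 3])

# Binary tree: root splits {normal, beginning} vs {moderate, severe};
# each leaf classifier then separates its own pair of stages.
root  = ThresholdClassifier().fit(x, (y >= 2).astype(int))
left  = ThresholdClassifier().fit(x[y < 2], y[y < 2])                       # 0 vs 1
right = ThresholdClassifier().fit(x[y >= 2], (y[y >= 2] == 3).astype(int))  # 2 vs 3

def predict_stage(x):
    upper = root.predict(x).astype(bool)
    out = np.empty(len(x), dtype=int)
    out[~upper] = left.predict(x[~upper])        # stage 0 or 1
    out[upper] = 2 + right.predict(x[upper])     # stage 2 or 3
    return out

print(predict_stage(x))
```

Each node is trained only on the cases its subtree must separate, mirroring the per-RVM training described above; an actual RVM would replace the threshold rule with a probabilistic kernel classifier.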
CHAPTER 7
CONCLUSION
We have proposed a novel technique to automatically segment cavities based on
dynamic programming, which uses the likelihood map output of a pixel classifier as the cost
function. We have validated our results against those obtained by three human expert
readers on a large dataset including prominent as well as subtle cavities. Our results are
very encouraging and comparable with the degree of overlap between trained human
readers and a chest radiologist. Cases with low inter-observer agreement often contain
subtle cavities or cavities in diseased regions. This indicates that accurate cavity
segmentation is a difficult problem. Our work has a few limitations. In some cases the
dynamic programming is attracted to rib borders. The accuracy of our technique for
difficult cavities can be increased by improving the pixel classifier and optimizing the
parameters for dynamic programming. It may be possible to develop pixel based features
more specific to cavity borders so as to differentiate them from ribs and other bone structures.
Alternatively, we could include a rib suppression technique.
The dynamic programming path can be calculated more precisely if a few reference
points on the contour are clicked and the path is forced to pass through those points.
Providing more than one reference point can be useful for subtle cavities for precise
boundary segmentation. Such a tool could be very helpful in treatment monitoring for
tuberculosis.
REFERENCES
[1] Antani, S., Candemir, S., Folio, L., Jaeger, S., Karargyris, A., Siegelman, J., & Thoma, G., 2013, "Automatic screening for tuberculosis in chest radiographs: a survey," Quant. Imag. Med. Surg., vol. 3, no. 2, pp. 89-99.
[2] Antani, S., Jaeger, S., Karargyris, A., & Thoma, G., 2012, "Detecting tuberculosis in radiographs using combined lung masks," in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., pp. 4978-4981.
[3] Antani, S., Candemir, S., Jaeger, S., Palaniappan, K., & Thoma, G., 2012, "Graph-cut based automatic lung boundary detection in chest radiographs," in Proc. IEEE Healthcare Technol. Conf.: Translat. Eng. Health Med., pp. 31-34.
[4] Antani, S., Jaeger, S., & Thoma, G., 2011, "Tuberculosis screening of chest radiographs," in SPIE Newsroom.
[5] Doi, K., Katsuragawa, S., ter Haar Romeny, B., van Ginneken, B., & Viergever, M., 2002, "Automatic detection of abnormalities in chest radiographs using local texture analysis," IEEE Trans. Med. Imag., vol. 21, no. 2, pp. 139-149.
[6] ter Haar Romeny, B., & van Ginneken, B., 2000, "Automatic segmentation of lung fields in chest radiographs," Med. Phys., vol. 27, no. 10, pp. 2445-2455.
[7] Rachna, H.B., & Mallikarjuna Swamy, 2013, "Detection of tuberculosis bacilli using image processing technique," International Journal of Soft Computing and Engineering, ISSN: 2231-2307, vol. 3.
[8] Candemir, S., Jaeger, S., Palaniappan, K., Musco, J., Singh, R., Xue, Z., Karargyris, A., Antani, S., Thoma, G., & McDonald, C., 2013, "Lung segmentation in chest radiographs using anatomical atlases with non-rigid registration," IEEE Trans. Med. Imag.
[9] van Ginneken, B., Frangi, A., Staal, J., Romeny, B., & Viergever, M., 2002, "Active shape model segmentation with optimal features," IEEE Trans. Med. Imag., vol. 21, no. 8, pp. 924-933.
[10] "Learning of deformable contour models," IEEE Trans. Med. Imag.