
PROJECT TITLE

Lung Tumor Detection

GROUP MEMBERS

Muhammad Jabeer Khan (SP10-BCE-011)
Mian Wisal Ahmad (SP10-BEE-068)
Zeeshan Nazir (SP10-BCE-031)

PROJECT SUPERVISOR & CO-SUPERVISOR

Engr. Atiqa Kayan
Engr. Umairullah Tariq

PRESENTATION LAYOUT
Database (IMBA home public access library & INOR)
Image Acquisition
Pre-Processing
Gray Level Slicing
Connected Components and Labeling
Morphological Operations: Erosion and Dilation
Features
Support Vector Machine

FLOW CHART
Image Acquisition → Pre-Processing → Segmentation → Feature Extraction → Post-Processing

DATABASE

Collection of lung CT images

Conversion from DICOM to JPEG format

Training

Reference:
https://eddie.via.cornell.edu/cgi-bin/datac/signon.cgi
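A minimal MATLAB sketch of the DICOM-to-JPEG conversion step; the file names are placeholders, and the conversion assumes one CT slice per DICOM file:

slice = dicomread('lung_slice.dcm');   % read one CT slice (placeholder file name)
slice = mat2gray(slice);               % rescale the raw intensities to [0, 1]
imwrite(slice, 'lung_slice.jpg');      % save as JPEG for the later stages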

IMAGE ACQUISITION

CT image from the database

Input into MATLAB

PRE-PROCESSING
Grayscale image (elimination of hue and saturation)

Histogram Equalization

Overview:
Used, for instance, to enhance bone structures in X-ray or CT images and under-exposed photographs.

Application:
Contrast adjustment using the image histogram
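A short MATLAB sketch of these two pre-processing steps, assuming the JPEG slice produced during the database stage:

img = imread('lung_slice.jpg');        % slice saved during database conversion
if size(img, 3) == 3
    img = rgb2gray(img);               % eliminate hue and saturation
end
eq = histeq(img);                      % spread the histogram to raise contrast
imshowpair(img, eq, 'montage');        % compare original and equalized slices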

GRAYSCALE IMAGE

HISTOGRAM EQUALIZED IMAGE

GRAY LEVEL SLICING

Highlighting a specific range of gray levels

Enhancing flaws in X-rays and CT scans

Bit-plane slicing

Plane-by-plane information acquisition

Threshold value of the lung tumor

Reference:
Digital Image Processing by S. Jayaraman, S. Esakkirajan, T. Veerakumar

GRAY LEVEL SLICING (CONTD.)

There are two main approaches, sketched in code below:

Highlight a range of intensities while diminishing all others to a constant low level.

Highlight a range of intensities but preserve all others.
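A MATLAB sketch of both approaches on the equalized slice; the intensity band [120, 200] is an illustrative choice, not the project's actual tumor threshold:

lo = 120; hi = 200;                    % hypothetical gray-level band of interest
inBand = eq >= lo & eq <= hi;          % pixels falling inside the band

sliced1 = uint8(inBand) * 255;         % approach 1: all other levels go to 0
sliced2 = eq;                          % approach 2: preserve all other levels...
sliced2(inBand) = 255;                 % ...while highlighting the band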

GRAY LEVEL SLICED IMAGE

CONNECTED COMPONENTS AND LABELING

Finding the total number of connected regions in an image

Assigning a label to each connected region

ALGORITHM (FIRST PASS: ASSIGNING LABELS)

Scan the image pixel by pixel. For each pixel that is not background, check its neighbors:
If a neighbor is already labeled, assign that neighbor's parent label as the pixel's main label.
If no neighbor is labeled, assign a new label to the pixel.

ALGORITHM (SECOND PASS: AGGREGATION)

Scan each pixel. If the pixel is labeled, get its label's parent:
If the parent is already in the pattern list, add the pixel to the existing list.
If not, add the pixel to a new list.

STEP-BY-STEP WALKTHROUGH

In the beginning we have this image; we start with currentLabelCount = 1.

We find our first non-background pixel and get its non-background neighbors.

We set the current pixel to currentLabelCount and increment it.

On to the next pixel: this one has a neighbor which is already labeled, so we assign the neighbor's parent label to the pixel.

We continue on. None of the neighbors of this pixel is labeled, so we increment currentLabelCount and assign it to the pixel; again, its parent is set to itself.

It gets interesting when neighbors have different labels:
1) We choose the main label (the smallest label in the list, here 1).
2) We set it to be the parent of the other labels.

A few more rounds and we should end up with this. Notice the blue number in the upper-right corner: that is the parent label, the de facto one upon which we aggregate later.

That's it. Now all we have to do is pass over the image again, pixel by pixel, getting the root of each labeled pixel and storing it in our patterns list.
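In MATLAB this two-pass labeling is available as the built-in bwlabel; a minimal sketch on the binary mask produced by gray-level slicing:

bw = sliced1 > 0;                      % binary mask from the gray-level slicing
[labels, n] = bwlabel(bw, 8);          % two-pass labeling with 8-connectivity
fprintf('Found %d connected regions\n', n);
imshow(label2rgb(labels));             % color-code each labeled region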

MORPHOLOGICAL OPERATIONS

Erosion

Dilation

Combined, these give Opening and Closing

(Figure: object and background regions)

STRUCTURING ELEMENT

A small set used to probe the image under study

For each SE, define an origin

Shape and size must be adapted to the geometric properties of the objects

EROSION

The contraction of an image (binary or grayscale), a.k.a. region shrinking

Uses a structuring element on the image data to produce a new image

Keeps the positions where the SE pattern fits best on the image

IMAGE OF EROSION

HOW IT WORKS?

A pixel is turned on (1) only when the pixels of the structuring element and the underlying image pixels match each other.

Both ON (1) and OFF (0) pixels should match.

(In the example, the result erodes to the right.)

EROSION EXAMPLE

(Figure: erosion result and its difference from the original)

MATHEMATICAL DEFINITION OF EROSION

1. Erosion is the morphological dual of dilation.

2. It combines two sets using vector subtraction of set elements.

3. Let A ⊖ B denote the erosion of A by B:

A ⊖ B = {x ∈ Z² | for every b ∈ B there exists a ∈ A such that x = a − b}
      = {x ∈ Z² | x + b ∈ A for every b ∈ B}
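In MATLAB, erosion is available as imerode; a sketch on the labeled mask with a 3×3 square structuring element (the SE shape and size here are illustrative):

se = strel('square', 3);               % 3x3 structuring element (illustrative size)
eroded = imerode(bw, se);              % keep pixels where the SE fits entirely
imshowpair(bw, eroded, 'montage');     % eroded mask next to the original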

DILATION

Fills in holes.

Smoothes object boundaries.

Adds an extra outer ring of pixels onto the object boundary, i.e. the object becomes slightly larger.

IMAGE OF DILATION

EXAMPLE OF DILATION

(Figure: dilation result and its difference from the original)

MATHEMATICAL DEFINITION OF DILATION

Dilation: the set of points x = (x1, x2) such that, if we center B on them, the translated B intersects X:

X ⊕ B = {x | Bx ∩ X ≠ ∅}   (where Bx denotes B translated so that its origin lies at x)
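The matching MATLAB sketch for dilation, plus the opening and closing formed by combining the two operations (same structuring element as above):

dilated = imdilate(bw, se);            % grow objects by the SE; fills small holes
opened  = imopen(bw, se);              % erosion then dilation: removes small specks
closed  = imclose(bw, se);             % dilation then erosion: closes small gaps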

IMAGE OF MORPHOLOGICAL OPERATIONS

TRAINING

One-by-one extraction of each labeled region.

Identifying the tumor region.

Supervised learning through a support vector machine.

FEATURES FOR EXTRACTION

Area (305)

Eccentricity (0.5828)

Perimeter (84.7696)

Standard deviation (0.0275)

Mean (7.4599e-4)

Extent (0.6689)
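These features can be computed per labeled region with MATLAB's regionprops; Area, Eccentricity, Perimeter, and Extent are standard regionprops properties, while the mean and standard deviation here are taken over each region's pixel values (an assumption about what the slide's values measure):

stats = regionprops(labels, im2double(eq), ...
    'Area', 'Eccentricity', 'Perimeter', 'Extent', 'PixelValues');
features = zeros(numel(stats), 6);
for k = 1:numel(stats)
    px = stats(k).PixelValues;         % gray values inside region k
    features(k, :) = [stats(k).Area, stats(k).Eccentricity, ...
                      stats(k).Perimeter, stats(k).Extent, mean(px), std(px)];
end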

SUPPORT VECTOR MACHINE

SVMs maximize the margin around the separating hyperplane.

A.k.a. large-margin classifiers.

The decision function is fully specified by a subset of the training samples, the support vectors.

Solving an SVM is a quadratic programming problem.
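A hedged sketch of the training step using fitcsvm (Statistics and Machine Learning Toolbox; older MATLAB releases used svmtrain instead). The label vector isTumor is a hypothetical hand-marked ground truth, not data from the project:

isTumor = false(size(features, 1), 1); % hypothetical hand-marked ground truth
isTumor(1) = true;                     % e.g. the first labeled region is the tumor
model = fitcsvm(features, isTumor, 'KernelFunction', 'polynomial');
pred  = predict(model, features);      % in practice, classify held-out slices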

MAXIMUM MARGIN: FORMALIZATION

w: decision hyperplane normal vector
xi: data point i
yi: class of data point i (+1 or −1)

The classifier is: f(xi) = sign(wTxi + b)

The functional margin of xi is: yi(wTxi + b)

The functional margin of the dataset is twice the minimum functional margin for any point; the factor of 2 comes from measuring the whole width of the margin.

Sec. 15.1

GEOMETRIC MARGIN

Distance from an example to the separator: r = y(wTx + b)/||w||

Examples closest to the hyperplane are support vectors.

The margin ρ of the separator is the width of separation between the support vectors of the classes.

Finding r:
The dotted line from x to x′ is perpendicular to the decision boundary, so it is parallel to w. The unit vector is w/||w||, so the segment is rw/||w||.
Then x′ = x − yrw/||w||, and x′ satisfies wTx′ + b = 0.
So wT(x − yrw/||w||) + b = 0, and since ||w|| = sqrt(wTw),
wTx − yr||w|| + b = 0.
Solving for r gives: r = y(wTx + b)/||w||

Sec. 15.1

LINEAR SVM MATHEMATICALLY: THE LINEARLY SEPARABLE CASE

Assume that all data are at least distance 1 from the hyperplane; then the following two constraints follow for a training set {(xi, yi)}:

wTxi + b ≥ 1 if yi = +1
wTxi + b ≤ −1 if yi = −1

For support vectors, the inequality becomes an equality.

Then, since each example's distance from the hyperplane is r = y(wTx + b)/||w||, the margin is:

ρ = 2/||w||

LINEAR SUPPORT VECTOR MACHINE (SVM)

Hyperplane: wTx + b = 0
wTxa + b = +1 (support vector xa)
wTxb + b = −1 (support vector xb)

Extra scale constraint: min over i = 1,…,n of |wTxi + b| = 1

This implies wT(xa − xb) = 2, so:

ρ = ||xa − xb||2 = 2/||w||2

Sec. 15.1

Sec. 15.1

LINEAR SVMS MATHEMATICALLY (CONT.)

Then we can formulate the quadratic optimization problem:

Find w and b such that ρ = 2/||w|| is maximized and, for all {(xi, yi)}:
wTxi + b ≥ 1 if yi = +1; wTxi + b ≤ −1 if yi = −1

A better formulation (min ||w|| = max 1/||w||):

Find w and b such that Φ(w) = wTw is minimized and, for all {(xi, yi)}:
yi(wTxi + b) ≥ 1

Sec. 15.1

THE OPTIMIZATION PROBLEM SOLUTION

w = Σi αi yi xi

b = yk − wTxk for any xk such that αk ≠ 0

Each non-zero αi indicates that the corresponding xi is a support vector.

The classifying function has the form:

f(x) = Σi αi yi xiTx + b

It relies on an inner product between the test point x and the support vectors xi.

Sec. 15.2.3

NON-LINEAR SVMS

Datasets that are linearly separable (with some noise) work out great.

But what do we do if the dataset is too hard to classify this way?

How about mapping the data to a higher-dimensional space, e.g. x → (x, x²)?

(Figures: 1-D points on the x axis; the same points lifted to the x–x² plane)

Sec. 15.2.3

NON-LINEAR SVMS: FEATURE SPACES

General idea: the original feature space can always be mapped to some higher-dimensional feature space where the training set is separable:

Φ: x → φ(x)

Sec. 15.2.3

THE KERNEL TRICK

The linear classifier relies on an inner product between vectors: K(xi, xj) = xiTxj.

If every data point is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the inner product becomes K(xi, xj) = φ(xi)Tφ(xj).

A kernel function is a function that corresponds to an inner product in some expanded feature space.

Example: for 2-dimensional vectors x = [x1 x2], let K(xi, xj) = (1 + xiTxj)².
We need to show that K(xi, xj) = φ(xi)Tφ(xj):

K(xi, xj) = (1 + xiTxj)²
          = 1 + xi1²xj1² + 2xi1xj1xi2xj2 + xi2²xj2² + 2xi1xj1 + 2xi2xj2
          = [1, xi1², √2·xi1xi2, xi2², √2·xi1, √2·xi2]T [1, xj1², √2·xj1xj2, xj2², √2·xj1, √2·xj2]
          = φ(xi)Tφ(xj)

where φ(x) = [1, x1², √2·x1x2, x2², √2·x1, √2·x2]
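A quick numeric check of this identity in MATLAB, using arbitrary test vectors:

xi = [0.3; -1.2];  xj = [2.0; 0.5];    % arbitrary 2-D test points
phi = @(x) [1; x(1)^2; sqrt(2)*x(1)*x(2); x(2)^2; sqrt(2)*x(1); sqrt(2)*x(2)];
k1 = (1 + xi' * xj)^2;                 % kernel evaluated directly
k2 = phi(xi)' * phi(xj);               % inner product in the mapped feature space
fprintf('K = %.6f, phi inner product = %.6f\n', k1, k2);   % the two values agree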

Sec. 15.2.3

KERNELS

Why use kernels?
Make a non-separable problem separable.
Map data into a better representational space.

Common kernels:
Linear
Polynomial: K(x, z) = (1 + xTz)^d (gives feature conjunctions)
Radial basis function (infinite-dimensional space)

TIMELINE

1st Presentation: Study of the project
2nd Presentation: Image Acquisition and Pre-Processing
3rd Presentation: Gray Level Slicing and Connected Components Labeling
4th Presentation: Feature Extraction and SVM
5th Presentation: Presenting the project to the external examiner
