
Segmentation refers to the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics.

3.1 EDGE DETECTION

An edge is the boundary between two regions with relatively distinct gray-level properties. Edge detection is a term used in image processing and computer vision, particularly in the areas of feature detection and feature extraction, to refer to algorithms that aim to identify points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities.

3.1.1 SOBEL OPERATOR

The Sobel operator is used in image processing, particularly within edge detection algorithms. Technically, it is a discrete differentiation operator, computing an approximation of the gradient of the image intensity function. At each point in the image, the result of the Sobel operator is either the corresponding gradient vector or the norm of this vector. The Sobel operator is based on convolving the image with a small, separable, integer-valued filter in the horizontal and vertical directions and is therefore relatively inexpensive in terms of computation. On the other hand, the gradient approximation it produces is relatively crude, in particular for high-frequency variations in the image. The operator consists of a pair of 3×3 convolution kernels as shown in Figure. One kernel is simply the other rotated by 90°.

These kernels are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, one kernel for each of the two perpendicular orientations. The kernels can be applied separately to the input image to produce separate measurements of the gradient component in each orientation (call these Gx and Gy). These can then be combined to find the absolute magnitude of the gradient at each point and the orientation of that gradient. The gradient magnitude is given by equation 3.1,

$|G| = \sqrt{G_x^2 + G_y^2}$    (3.1)

Typically, an approximate magnitude is computed using equation 3.2,

$|G| = |G_x| + |G_y|$    (3.2)

which is much faster to compute.

The angle of orientation of the edge (relative to the pixel grid) giving rise to the spatial gradient is given by equation 3.3,

$\theta = \arctan\left(\frac{G_y}{G_x}\right)$    (3.3)

Figure 3.1 Original Brain MR Image

Figure 3.2 Output of Edge Detection by Sobel Operator
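
As an illustration of equations 3.1–3.3, a minimal sketch in Python using OpenCV follows; it is not part of the original experiment, and the input file name brain_mr.png is a placeholder:

```python
import cv2
import numpy as np

# Load the MR image as grayscale ("brain_mr.png" is a placeholder path).
img = cv2.imread("brain_mr.png", cv2.IMREAD_GRAYSCALE)

# Convolve with the 3x3 Sobel kernels to get the horizontal (Gx) and
# vertical (Gy) gradient components; a float type avoids clipping.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

# Exact magnitude (equation 3.1) and the faster approximation (equation 3.2).
magnitude = np.sqrt(gx**2 + gy**2)
approx_magnitude = np.abs(gx) + np.abs(gy)

# Edge orientation relative to the pixel grid (equation 3.3);
# arctan2 handles the Gx = 0 case safely.
theta = np.arctan2(gy, gx)

# Scale to 8 bits for display and save the edge map.
cv2.imwrite("sobel_edges.png", cv2.convertScaleAbs(magnitude))
```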

3.1.2 CANNY OPERATOR


Canny (1986) considered the mathematical problem of deriving an optimal
smoothing filter given the criteria of detection, localization and minimizing
multiple responses to a single edge. He showed that the optimal filter given these
assumptions is a sum of four exponential terms. He also showed that this filter can
be well approximated by first-order derivatives of Gaussians. Canny also
introduced the notion of non-maximum suppression, which means that given the
presmoothing filters, edge points are defined as points where the gradient
magnitude assumes a local maximum in the gradient direction.

Although his work was done in the early days of computer vision, the Canny
edge detector (including its variations) is still a state-of-the-art edge detector.
Unless the preconditions are particularly suitable, it is hard to find an edge detector
that performs significantly better than the Canny edge detector.

The Canny-Deriche detector (Deriche 1987) was derived from similar mathematical criteria as the Canny edge detector, although starting from a discrete viewpoint and then leading to a set of recursive filters for image smoothing instead of exponential filters or Gaussian filters.
Figure 3.3 Output of Edge Detection by Canny Operator

Fig 3.3 shows the edge detection output by applying the Canny operator. The Canny operator has detected not only the tumor region but also unwanted artifacts.
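
For reference, a minimal Canny sketch using OpenCV's built-in implementation; the smoothing kernel size and the hysteresis thresholds (50, 150) are illustrative assumptions, not values taken from this work:

```python
import cv2

# Placeholder input path.
img = cv2.imread("brain_mr.png", cv2.IMREAD_GRAYSCALE)

# Gaussian presmoothing suppresses noise before gradient computation;
# the 5x5 kernel size is an assumption.
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# cv2.Canny performs gradient estimation, non-maximum suppression and
# hysteresis thresholding in one call.
edges = cv2.Canny(blurred, 50, 150)
cv2.imwrite("canny_edges.png", edges)
```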

3.1.3 PREWITT’S OPERATOR

Prewitt is an edge detection method in image processing which calculates the maximum response of a set of convolution kernels to find the local edge orientation for each pixel. The Prewitt operator is similar to the Sobel operator and is used for detecting vertical and horizontal edges in images.

Various kernels can be used for this operation. The whole set of 8 kernels is produced by taking one of the kernels and rotating its coefficients circularly. Each of the resulting kernels is sensitive to an edge orientation ranging from 0° to 315° in steps of 45°, where 0° corresponds to a vertical edge.

The maximum response for each pixel is the value of the corresponding
pixel in the output magnitude image. The values for the output orientation image
lie between 1 and 8, depending on which of the 8 kernels produced the maximum
response.

This edge detection method is also called edge template matching, because a set of edge templates is matched to the image, each representing an edge in a certain orientation. The edge magnitude and orientation of a pixel are then determined by the template that best matches the local area of the pixel.

The Prewitt edge detector is an appropriate way to estimate the magnitude and orientation of an edge. Although differential gradient edge detection needs a rather time-consuming calculation to estimate the orientation from the magnitudes in the x- and y-directions, Prewitt edge detection obtains the orientation directly from the kernel with the maximum response. The set of kernels is limited to 8 possible orientations; however, experience shows that most direct orientation estimates are not much more accurate.

On the other hand, the set of kernels needs 8 convolutions for each pixel, whereas the gradient method needs only 2, one kernel being sensitive to edges in the vertical direction and one to the horizontal direction. The result for the edge magnitude image is very similar for both methods, provided the same convolution kernel is used.
Figure 3.4 Output of Edge Detection by Prewitt Operator

Fig 3.4 shows the edge detection output by applying the Prewitt operator. Like the Sobel operator, the Prewitt operator detects only the boundary of the object.
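
A sketch of the 8-kernel template matching variant described above, assuming the commonly used Prewitt compass kernel; SciPy performs the convolutions:

```python
import cv2
import numpy as np
from scipy import ndimage

def rotate_kernel(k):
    # Rotate the 8 border coefficients one step (45 degrees) circularly,
    # leaving the centre coefficient untouched.
    rows = [0, 0, 0, 1, 2, 2, 2, 1]
    cols = [0, 1, 2, 2, 2, 1, 0, 0]
    out = k.copy()
    out[rows, cols] = np.roll(k[rows, cols], 1)
    return out

# Base Prewitt compass kernel (0 degrees, responds to a vertical edge).
base = np.array([[-1.0,  1.0, 1.0],
                 [-1.0, -2.0, 1.0],
                 [-1.0,  1.0, 1.0]])

kernels = [base]
for _ in range(7):                     # the remaining 7 orientations
    kernels.append(rotate_kernel(kernels[-1]))

img = cv2.imread("brain_mr.png", cv2.IMREAD_GRAYSCALE).astype(float)

# Convolve with all 8 templates and stack the responses.
responses = np.stack([ndimage.convolve(img, k) for k in kernels])

# Per-pixel maximum response = edge magnitude; the index of the winning
# kernel (1..8) encodes the orientation in 45-degree steps.
magnitude = responses.max(axis=0)
orientation = responses.argmax(axis=0) + 1
```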

3.1.4 ROBERTS CROSS OPERATOR

The Roberts Cross operator performs a simple, quick-to-compute, 2-D spatial gradient measurement on an image. Pixel values at each point in the output represent the estimated absolute magnitude of the spatial gradient of the input image at that point.

The operator consists of a pair of 2×2 convolution kernels as shown in Figure. One kernel is simply the other rotated by 90°. This is very similar to the Sobel operator.
These kernels are designed to respond maximally to edges running at 45° to the pixel grid, one kernel for each of the two perpendicular orientations. The kernels can be applied separately to the input image to produce separate measurements of the gradient component in each orientation (call these Gx and Gy). These can then be combined to find the absolute magnitude of the gradient at each point and the orientation of that gradient. The gradient magnitude is given by equation 3.4,

$|G| = \sqrt{G_x^2 + G_y^2}$    (3.4)

Typically, an approximate magnitude is computed using equation 3.5,

$|G| = |G_x| + |G_y|$    (3.5)

which is much faster to compute.

The angle of orientation of the edge giving rise to the spatial gradient (relative to the pixel grid orientation) is given by:

$\theta = \arctan\left(\frac{G_y}{G_x}\right) - \frac{3\pi}{4}$
Figure 3.5 Output of Edge Detection by Roberts Operator

Fig 3.5 shows the edge detection output by applying the Roberts operator. From the above outputs, it can be seen that all of the operators have failed to isolate the tumor location.
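
For completeness, a minimal Roberts Cross sketch (the input path is again a placeholder):

```python
import cv2
import numpy as np
from scipy import ndimage

# The pair of 2x2 Roberts Cross kernels; each responds maximally to
# edges running at 45 degrees to the pixel grid.
gx_kernel = np.array([[ 1.0,  0.0],
                      [ 0.0, -1.0]])
gy_kernel = np.array([[ 0.0,  1.0],
                      [-1.0,  0.0]])

img = cv2.imread("brain_mr.png", cv2.IMREAD_GRAYSCALE).astype(float)

gx = ndimage.convolve(img, gx_kernel)
gy = ndimage.convolve(img, gy_kernel)

# Exact magnitude (equation 3.4) and the faster approximation (equation 3.5).
magnitude = np.sqrt(gx**2 + gy**2)
approx_magnitude = np.abs(gx) + np.abs(gy)

cv2.imwrite("roberts_edges.png", cv2.convertScaleAbs(magnitude))
```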

3.2 HISTOGRAM EQUALIZATION

Histogram equalization is a method of contrast adjustment in image processing that uses the image's histogram. This method usually increases the global contrast of many images, especially when the usable data of the image is represented by close contrast values. Through this adjustment, the intensities can be better distributed on the histogram. This allows areas of lower local contrast to gain a higher contrast without affecting the global contrast. Histogram equalization accomplishes this by effectively spreading out the most frequent intensity values.

The method is useful in images with backgrounds and foregrounds that are both bright or both dark. In particular, the method can lead to better views of bone structure in x-ray images, and to better detail in photographs that are over- or under-exposed. A key advantage of the method is that it is a fairly straightforward technique and an invertible operator; in theory, if the histogram equalization function is known, the original histogram can be recovered. The calculation is not computationally intensive. A disadvantage of the method is that it is indiscriminate: it may increase the contrast of background noise while decreasing the usable signal.

Figure 3.6 Histogram

Figure 3.7 Output of Histogram equalized image
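
A minimal sketch of the equalization mapping, shown both with OpenCV's built-in routine and as a manual version built from the cumulative histogram (the input path is a placeholder):

```python
import cv2
import numpy as np

img = cv2.imread("brain_mr.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Library routine: spreads out the most frequent intensity values.
equalized = cv2.equalizeHist(img)

# Equivalent manual version: map each grey level through the normalised
# cumulative histogram (CDF) to redistribute the intensities.
hist = np.bincount(img.ravel(), minlength=256)
cdf = hist.cumsum()
cdf_min = cdf[cdf > 0].min()
lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
equalized_manual = lut[img]
```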


Histogram equalization, a spatial domain enhancement technique, improves the contrast of the MR image by reassigning the brightness values of pixels based on the image histogram. Generally, images have unique brightness histograms. Even images of different areas of the same sample, in which the various structures present have consistent brightness levels wherever they occur, will have different histograms, depending on the area fraction of each structure. Here the pixel intensities are modified by a position-invariant transformation function. The traditional histogram equalization method for MR images suffers from the following drawbacks:

 It lacks a mechanism to adjust the degree of enhancement.

 It often causes unpleasant visual artifacts, such as over-enhancement, level saturation and a raised noise level.

 It can dramatically change the character of the image, e.g., the average luminance (mean) of the image. Changing the overall illumination of an MR image shifts the peaks in the histogram, so there is very little scope to improve contrast by a global transformation.

3.3 THRESHOLDING TECHNIQUES

Thresholding is the simplest method of image segmentation. From a grayscale image, thresholding can be used to create binary images. During the thresholding process, individual pixels in an image are marked as "object" pixels if their value is greater than some threshold value (assuming an object to be brighter than the background) and as "background" pixels otherwise. This convention is known as threshold above. Variants include threshold below, which is the opposite of threshold above; threshold inside, where a pixel is labeled "object" if its value is between two thresholds; and threshold outside, which is the opposite of threshold inside (Shapiro et al. 2001:83). Typically, an object pixel is given a value of "1" while a background pixel is given a value of "0." Finally, a binary image is created by coloring each pixel white or black, depending on the pixel's label.

Figure 3.8 Output for various Threshold values (panels: thresholding between 100–200, 175–200, 200–225, and above 240)

Fig 3.8 shows the output images by applying various threshold values.

The drawbacks of thresholding include:

• Threshold selection is not always straightforward.

• Pixels assigned to a single class need not form coherent regions as the
spatial locations of pixels are completely ignored.
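
A minimal sketch of the threshold above and threshold inside conventions; the threshold values echo those in Figure 3.8 but are otherwise illustrative:

```python
import cv2
import numpy as np

img = cv2.imread("brain_mr.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# "Threshold above": object pixels (label 1) are brighter than T.
T = 240
binary_above = (img > T).astype(np.uint8)

# "Threshold inside": object pixels lie between two thresholds.
binary_inside = ((img >= 100) & (img <= 200)).astype(np.uint8)

# Colour each pixel white (object) or black (background) and save.
cv2.imwrite("threshold_above.png", binary_above * 255)
cv2.imwrite("threshold_inside.png", binary_inside * 255)
```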
CHAPTER 4

PROPOSED TECHNIQUES

4.1 REGION BASED SEGMENTATION

Region-based segmentation methods attempt to partition or group regions according to common image properties. These image properties consist of:

1. Intensity values from original images, or computed values based on an image operator

2. Textures or patterns that are unique to each type of region

3. Spectral profiles that provide multidimensional image data

These methods can be classified into two main classes:

 Merging Algorithms – in which neighboring regions are compared and merged if they are close enough in some property.

 Splitting Algorithms – in which large non-uniform regions are broken up into small areas which may be uniform.

There are also algorithms which combine splitting and merging. In all cases some uniformity criterion must be applied to decide if a region should be split or two regions should be merged. This criterion is based on some region property which is decided by the application and could be one of the measurable image attributes such as mean image intensity, color, etc.
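
A minimal seeded region-growing sketch illustrating such a uniformity criterion; the seed point, the 4-connected neighbourhood and the intensity tolerance are all assumptions for illustration:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Grow a region from seed = (row, col), merging 4-connected
    neighbours whose intensity is within tol of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                # Uniformity criterion: closeness to the region mean.
                if abs(float(img[nr, nc]) - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(img[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return mask
```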
Figure 3.9 Output of region based segmentation

Fig 3.9 shows the segmented image obtained by applying the region-based algorithm. From the output, the tumor regions are segmented exactly, but the drawback of the region-based algorithm is that it is difficult to identify the seed points.
MODULES

1. IMAGE ACQUISITION

2. PREPROCESSING

3. IMAGE RESIZING

4. SEGMENTATION

5. FILTERING

6. FINAL OUTPUT IMAGE

5.2 MODULES DESCRIPTION

1. IMAGE ACQUISITION

Image acquisition in image processing can be broadly defined as the action of retrieving an image from some source, usually a hardware-based source, so it can be passed through whatever processes need to occur afterward. Performing image acquisition is always the first step in the workflow sequence because, without an image, no processing is possible. The image that is acquired is completely unprocessed and is the result of whatever hardware was used to generate it; this can be very important in some fields for providing a consistent baseline from which to work. One of the ultimate goals of this process is to have a source of input that operates within such controlled and measured guidelines that the same image can, if necessary, be nearly perfectly reproduced under the same conditions, so anomalous factors are easier to locate and eliminate.

2. IMAGE PRE-PROCESSING
Image pre-processing can significantly increase the reliability of an optical
inspection. Several filter operations which intensify or reduce certain image details
enable an easier or faster evaluation. Users are able to optimize a camera image
with just a few clicks.

3. IMAGE RESIZING

To resize an image, use the image resize function. When you resize an image, you
specify the image to be resized and the magnification factor. To enlarge an image,
specify a magnification factor greater than 1. To reduce an image, specify a
magnification factor between 0 and 1.
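
A hedged sketch of resizing by a magnification factor, here using OpenCV's cv2.resize (the exact resize function depends on the toolbox in use, and the input path is a placeholder):

```python
import cv2

img = cv2.imread("brain_mr.png")  # placeholder path

factor = 0.5  # > 1 enlarges, between 0 and 1 reduces
resized = cv2.resize(img, None, fx=factor, fy=factor,
                     interpolation=cv2.INTER_AREA)
cv2.imwrite("resized.png", resized)
```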

4. IMAGE SEGMENTATION

Segmentation partitions an image into distinct regions, each containing pixels with similar attributes. To be meaningful and useful for image analysis and interpretation, the regions should strongly relate to depicted objects or features of interest. Meaningful segmentation is the first step from low-level image processing, which transforms a greyscale or colour image into one or more other images, to high-level image description in terms of features, objects, and scenes. The success of image analysis depends on the reliability of segmentation, but an accurate partitioning of an image is generally a very challenging problem.

Segmentation techniques are either contextual or non-contextual. The latter take no account of spatial relationships between features in an image and group pixels together on the basis of some global attribute, e.g. grey level or colour. Contextual techniques additionally exploit these relationships, e.g. they group together pixels with similar grey levels and close spatial locations.

5. IMAGE FILTERING
Image filtering allows you to apply various effects on photos. The type of image
filtering described here uses a 2D filter similar to the one included in Paint Shop
Pro as User Defined Filter and in Photoshop as Custom Filter.
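
A small sketch of such a user-defined 2D filter, here a common 3×3 sharpening kernel applied with OpenCV's cv2.filter2D; the kernel values and input path are illustrative:

```python
import cv2
import numpy as np

img = cv2.imread("photo.png")  # placeholder path

# An illustrative 3x3 sharpening kernel of the kind entered in a
# "User Defined Filter" / "Custom Filter" dialog.
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=np.float32)

filtered = cv2.filter2D(img, -1, kernel)  # -1: keep the input depth
cv2.imwrite("filtered.png", filtered)
```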
