
Implementation of image segmentation algorithms

Chapter 1 Introduction
1.1. Introduction:
In computer vision, segmentation refers to the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyse. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image. Each of the pixels in a region is similar with respect to some characteristic or computed property, such as colour, intensity, or texture, while adjacent regions are significantly different with respect to the same characteristic(s).

1.2 Thresholding
The simplest method of image segmentation is thresholding. This method uses a clip level (or threshold value) to turn a gray-scale image into a binary image. The key to this method is selecting the threshold value (or values, when multiple levels are used). Several popular methods are used in industry, including the maximum entropy method and Otsu's method (maximum between-class variance); k-means clustering can also be used.
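As a minimal sketch of this method, the MATLAB lines below apply Otsu's threshold with graythresh; the image file name is only a placeholder, not an image from this project.

I = imread('coins.png');             % placeholder grayscale test image shipped with the Image Processing Toolbox
if size(I, 3) == 3
    I = rgb2gray(I);                 % ensure a single-channel image
end
level = graythresh(I);               % Otsu's method: threshold maximising between-class variance
BW = im2bw(I, level);                % binarise (imbinarize in newer MATLAB releases)
figure; imshow(BW); title('Thresholded image');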


Fig 1.1 Original image

Fig 1.2 Thresholded image


1.3 Clustering methods


The K-means algorithm is an iterative technique used to partition an image into K clusters. The basic algorithm is:

1. Pick K cluster centres, either randomly or based on some heuristic.
2. Assign each pixel in the image to the cluster that minimizes the distance between the pixel and the cluster centre.
3. Re-compute the cluster centres by averaging all of the pixels in each cluster.
4. Repeat steps 2 and 3 until convergence is attained (e.g. no pixels change clusters).

In this case, distance is the squared or absolute difference between a pixel and a cluster centre. The difference is typically based on pixel colour, intensity, texture, and location, or a weighted combination of these factors. K can be selected manually, randomly, or by a heuristic. The algorithm is guaranteed to converge, but it may not return the optimal solution; the quality of the solution depends on the initial set of clusters and on the value of K.

In statistics and machine learning, k-means is a clustering algorithm that partitions n objects into k clusters, where k < n. It is similar to the expectation-maximization algorithm for mixtures of Gaussians in that both attempt to find the centres of natural clusters in the data. The model requires that the object attributes correspond to elements of a vector space. The objective is to minimize the total intra-cluster variance, i.e. the squared error function. K-means clustering was proposed in 1956. The most common form of the algorithm uses an iterative refinement heuristic known as Lloyd's algorithm. Lloyd's algorithm starts by partitioning the input points into k initial sets, either at random or using some heuristic. It then calculates the mean point, or centroid, of each set, and constructs a new partition by associating each point with the closest centroid. The centroids are then recalculated for the new clusters, and the two steps are repeated alternately until convergence, which is reached when the points no longer switch clusters (or, alternatively, when the centroids no longer change). Lloyd's algorithm and k-means are often used synonymously, but in reality Lloyd's algorithm is a heuristic for solving the k-means problem; with certain combinations of starting points and centroids it can converge to the wrong answer. Other variations exist, but Lloyd's algorithm has remained popular because it converges extremely quickly in practice. In terms of performance, the algorithm is not guaranteed to return a global optimum.



The quality of the final solution depends largely on the initial set of clusters and may, in practice, be much poorer than the global optimum. Since the algorithm is extremely fast, a common method is to run it several times and return the best clustering found. A drawback of the k-means algorithm is that the number of clusters k is an input parameter: an inappropriate choice of k may yield poor results. The algorithm also assumes that the variance is an appropriate measure of cluster scatter. A minimal MATLAB sketch of k-means-based colour segmentation is given below.
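The sketch uses the Statistics Toolbox kmeans function on the pixel colours; the image file name and the choice K = 3 are illustrative assumptions rather than values from this project.

RGB = im2double(imread('hh.jpg'));           % placeholder input image
X = reshape(RGB, [], 3);                     % one row per pixel, columns = R, G, B
K = 3;                                       % number of clusters, chosen manually
[idx, C] = kmeans(X, K, 'Replicates', 3);    % C holds the K cluster centres; replicates reduce sensitivity to initialisation
labels = reshape(idx, size(RGB, 1), size(RGB, 2));
figure; imshow(label2rgb(labels)); title('K-means segmentation');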

Two segmentation algorithms are used in our project:

1. Drop fall algorithm
2. Fuzzy min-max algorithm

1.4 DROP FALL ALGORITHM:
Segmentation is pivotal in character recognition, especially when handwritten characters are connected. Over the past 50 years, many methods have been proposed for segmenting connected characters. The drop fall algorithm is a classical segmentation algorithm often used in character segmentation because of its simplicity and effectiveness in application. First advanced by G. Congedo in 1995, the drop fall algorithm mimics the motion of a falling raindrop: the drop falls from above the characters, rolls along their contour, and cuts through the contour when it cannot fall any further. The raindrop follows a set of movement rules that determine the segmentation trace. Concretely, the drop fall algorithm selects one pixel out of the neighbours of the current pixel as the next pixel of the segmentation trace. Although the Extended Drop Fall algorithm has been advanced to improve the performance of the basic drop fall algorithm, when the raindrop falls into a concave pixel between small convexities on the character contour, these algorithms treat it as a connected stroke and start splitting there. Obviously this can split a single character and result in invalid segmentation. For this case we introduce the Inertial Drop Fall algorithm, which follows the previous direction during segmentation. Furthermore, the Big Inertial Drop Fall algorithm is advanced to increase the size of the raindrop: when there is not enough free space for the big raindrop to fall, it searches for another direction and can thus avoid falling into the concavity.

1.5 TYPES OF DROP FALL ALGORITHM:


1.5.1 Traditional Drop Fall algorithm: The basic idea of the Traditional Drop Fall (TDF) algorithm is to simulate a drop-falling process. The cut trace is defined using the information of the neighbouring pixels (and possibly of further pixels). The algorithm considers only five adjacent pixels: the three pixels below the current pixel and the pixels to its left and right. Upward moves are not considered, because the rules are meant to mimic a falling motion. Here (xi, yi) denotes the coordinates of the current pixel, and (x0, y0), for i equal to zero, is the start point from which the segmentation begins. Wi is a measure of the weight of the raindrop; its value depends on the values Zi of the neighbouring pixels N1, N2, N3, N4, N5 shown below. According to the TDF algorithm, the next pixel of the trace is chosen from N1–N5, and by repeating this rule the segmentation path is obtained.

Fig: Neighbour pixel layout N0–N5 used by the traditional drop fall algorithm
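As an illustration of the movement rules, the sketch below implements one drop-fall style step in MATLAB. The priority order of the five neighbours (down, down-left, down-right, left, right) is an assumption made for this sketch; published variants differ in the exact ordering and weighting.

function [x, y] = tdf_next(bw, x, y)
% One step of a traditional drop-fall style rule (sketch only).
% bw: logical image, true = character stroke; (x, y) = current drop position (column, row).
moves = [0 1; -1 1; 1 1; -1 0; 1 0];           % assumed preference: down, down-left, down-right, left, right
for k = 1:size(moves, 1)
    nx = x + moves(k, 1);
    ny = y + moves(k, 2);
    if nx >= 1 && nx <= size(bw, 2) && ny <= size(bw, 1) && ~bw(ny, nx)
        x = nx; y = ny;                        % move into the first free (background) neighbour
        return
    end
end
y = y + 1;                                     % all five neighbours are stroke pixels: cut straight down
end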

1.5.2 Inertial Drop Fall algorithm (IDF)


The TDF algorithm does not take into account that handwritten numerals, like printed ones, are generally smooth. One can imagine that when a raindrop falls along a smooth surface it acquires inertia, and this behaviour should be embodied in the segmentation trace. We therefore presume that, when all five neighbour pixels are black, which of them becomes the next pixel also depends on how the trace has moved so far, as well as on the gravity of the drop. In other words, the inertia of the drop affects the direction of segmentation together with its weight.
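A small sketch of how such an inertia term might bias the choice is shown below; promoting the diagonal that continues the previous horizontal motion is an assumed rule for illustration, not the exact formulation of the IDF paper.

function [x, y, dxPrev] = idf_next(bw, x, y, dxPrev)
% dxPrev: horizontal direction of the previous step (-1, 0 or +1); encodes the drop's inertia.
moves = [0 1; -1 1; 1 1; -1 0; 1 0];            % down, down-left, down-right, left, right
if dxPrev ~= 0
    inertial = [dxPrev 1];                      % diagonal continuing the previous horizontal motion
    moves = [inertial; moves(~ismember(moves, inertial, 'rows'), :)];
end
for k = 1:size(moves, 1)
    nx = x + moves(k, 1);
    ny = y + moves(k, 2);
    if nx >= 1 && nx <= size(bw, 2) && ny <= size(bw, 1) && ~bw(ny, nx)
        x = nx; y = ny; dxPrev = moves(k, 1);
        return
    end
end
y = y + 1; dxPrev = 0;                          % forced cut straight down through the stroke
end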

1.5.3 Big Inertial Drop Fall algorithm (BIDF)


Inertial Drop Fall can improve the segmentation performance in some cases, but the result is not good enough to overcome the defect of the TDF that incorrect segmentation may occur when the drop falls into a recess between burrs. If, however, the drop is big enough, it is more likely to escape from the recess, just as the water in a river can flow forward without being obstructed by an uneven bank. One can conceive that a rain strip works like a rain ball in dealing with burrs. One of the other techniques used in our segmentation process is the min-max algorithm.

1.6 Min-max algorithm


A simple implementation of double-ended priority queues is presented here. The proposed structure, called a min-max heap, can be built in linear time; in contrast to conventional heaps, it allows both FindMin and FindMax to be performed in constant time, while Insert, DeleteMin, and DeleteMax can be performed in logarithmic time. Min-max heaps can be generalized to support other similar order-statistic operations efficiently (e.g. constant-time FindMedian and logarithmic-time DeleteMedian); furthermore, the notion of min-max ordering can be extended to other heap-ordered structures, such as leftist trees.

Fig 1.3 Original image


Fig 1.4 Blurred Image

Fig 1.5 Deblurred image


Chapter 2 Implementation
2.1 Adaptive colour image segmentation using fuzzy min-max algorithm:

Fig. 2.1. Block diagram of ACISFMC (blocks: input colour image; S/V planes; histogram and FMMN clustering; threshold and target selection; fuzzy entropy calculation and NN tuning; neural network; segmented output).

2.1.1. ACISFMC architecture overview:


Adaptive Colour Image Segmentation using Fuzzy Min-Max Clustering (ACISFMC) is depicted in Fig. 2.1. ACISFMC uses the HSV colour space for colour image segmentation. The HSV colour representation is compatible with the visual psychology of human eyes, and its three components, hue (H), saturation (S), and value/intensity (V), are relatively independent. It is better suited than the RGB representation, since there is a high correlation among the three colour components red (R), green (G), and blue (B), which makes them dependent on each other and strongly associated with intensity. Hence, in RGB colour space it is very difficult to discriminate highlights, shadows and shading in colour images; the HSV colour space can solve this problem.

2.1.2 Advantages:
The HSV colour model has the following advantages: 1. Hue is invariant to certain types of highlights, shading, and shadows. 2. The HSV colour model decouples the intensity component from the colour information (hue and saturation) in a colour image.
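A minimal sketch of this decomposition using MATLAB's standard rgb2hsv conversion (the image file name is a placeholder):

RGB = im2double(imread('hh.jpg'));    % placeholder colour image
HSV = rgb2hsv(RGB);
H = HSV(:, :, 1);                     % hue: largely invariant to highlights and shadows
S = HSV(:, :, 2);                     % saturation
V = HSV(:, :, 3);                     % value/intensity, decoupled from the colour information
figure;
subplot(1, 3, 1); imshow(H); title('H plane');
subplot(1, 3, 2); imshow(S); title('S plane');
subplot(1, 3, 3); imshow(V); title('V plane');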

2.2. System flowchart:

Fig. 2.2. System flowchart of the proposed algorithm (colour image; histogram calculation; FMMN clustering; thresholding; segmented output image).

A general flowchart of the proposed algorithm is depicted in Fig. 2.2. First, clusters and their labels are found automatically by applying the FMMN clustering algorithm to the image histogram of each respective plane. ACISFMC is a histogram multi-thresholding technique, hence it is necessary to find the different thresholds and targets used to segment the objects in the image. Once the clusters are found, the average of two adjacent cluster centres in each plane is taken as a threshold value. After detecting the thresholds, labels for the objects are decided. The information about the labels is used to construct the network's activation function.
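A small sketch of this thresholding step; clusterCentres is an assumed variable holding the cluster centres found by FMMN on one plane's histogram:

centres = sort(clusterCentres(:));                      % cluster centres from FMMN, one plane
targets = centres;                                      % each centre acts as a target label
thresholds = (centres(1:end-1) + centres(2:end)) / 2;   % threshold = average of adjacent centres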



Each neuron uses a multilevel sigmoid function as its activation function. This activation function takes care of thresholding and labelling the pixels during the training process.
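One common way to realise such a multilevel sigmoid is as a sum of shifted sigmoid steps, one per threshold, scaled so that the plateaus land on the target labels. The thresholds, targets and steepness in the usage comment below are illustrative assumptions, not values taken from this report.

function y = multilevel_sigmoid(x, thresholds, targets, beta)
% x: normalised input in [0, 1]
% thresholds: sorted threshold values; targets: plateau (label) values, one more than thresholds
% beta: steepness of each transition
y = targets(1) * ones(size(x));
for k = 1:numel(thresholds)
    step = targets(k + 1) - targets(k);                   % height of the k-th jump between plateaus
    y = y + step ./ (1 + exp(-beta * (x - thresholds(k))));
end
end

% Example: two thresholds give a three-level activation
% y = multilevel_sigmoid(v, [0.3 0.6], [0.1 0.5 0.9], 40);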

2.2.1 Adaptive threshold selection block:


The adaptive threshold selection block consists of the adaptive thresholding system itself, as shown in the figure above. The purpose of this block is to find the number of clusters. To keep the system fully adaptive, an automatic way of determining the number of clusters is needed; in the proposed work, this is done using the FMMN clustering technique. The main aim here is to locate the number of clusters without a priori knowledge of the image. To accomplish this, the histograms of the given colour image's saturation and intensity planes are first computed. Clusters and their labels for the objects are then found by applying the FMMN clustering algorithm to the image histogram in each plane. Threshold and target values are obtained from the clusters: the cluster centres are taken as targets, while the average of two adjacent targets is taken as a threshold. Using this average as a threshold helps to segment the objects with a colour close to their original colour.
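For reference, the sketch below gives the hyperbox membership function of Simpson's fuzzy min-max clustering network, one standard formulation of FMMN; the exact variant used in ACISFMC may differ, and the sensitivity parameter gamma is an illustrative choice. Patterns inside the hyperbox receive membership 1, and membership decays as a pattern moves away from the box.

function b = hyperbox_membership(x, v, w, gamma)
% x: 1-by-n input pattern normalised to [0, 1]
% v, w: 1-by-n min and max points of the hyperbox
% gamma: sensitivity controlling how fast membership decays outside the box
n = numel(x);
b = sum(max(0, 1 - max(0, gamma * min(1, x - w))) + ...
        max(0, 1 - max(0, gamma * min(1, v - x)))) / (2 * n);
end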

2.2.2 Neural network segmentation block:


The neural network segmentation block does the actual segmentation based on the number of objects found by the adaptive threshold selection block. It consists of a fuzzy entropy calculation block and an NN tuning/training block. The proposed ACISFMC system consists of two independent neural networks, one each for the saturation and intensity planes. The proposed network architecture has three layers: the layer where the inputs are presented is known as the input layer, the layer producing the output is called the output layer, and between them there is a third layer called the hidden layer. Each layer has a fixed number of neurons equal to the size (M x N) of the image, and each neuron in a layer represents a single pixel. The input to a neuron in the input layer is normalized to [0, 1], and the output value of each neuron also lies in [0, 1]. Each neuron in a layer is connected to the corresponding neuron in the previous layer and to its neighbours over a given neighbourhood: for a first-order neighbourhood connection scheme a neuron has five links, whereas for a second-order neighbourhood scheme there are nine links associated with every neuron, and so on. Neurons in the same layer have no connections among themselves. The output of the nodes in one layer is transmitted to the nodes in the next layer via links that amplify or inhibit these outputs through weighting factors. Except for the input layer nodes, the total input to each node is the sum of the weighted outputs of the nodes in the previous layer.



Each node is activated in accordance with its total input and the activation function of the node.


Chapter 3 APPLICATIONS
3.1 Medical imaging: measuring tissue volumes, computer-guided surgery, diagnosis, treatment planning, and the study of anatomical structure.

3.2 Locate objects in satellite images

3.2.1 Face recognition: A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is by comparing selected facial features from the image against a facial database. It is typically used in security systems and can be compared to other biometrics such as fingerprint or iris recognition systems.

3.2.2 Iris recognition:


Iris recognition is an automated method of biometric identification that uses mathematical pattern-recognition techniques on video images of the irides of an individual's eyes, whose complex random patterns are unique and can be seen from some distance.

3.2.3 Fingerprint recognition: Fingerprint verification or fingerprint authentication refers to the automated method of verifying a match between two human fingerprints. Fingerprints are one of many forms of biometrics used to identify individuals and verify their identity. Two major classes of algorithms (minutiae-based and pattern-based) and four sensor designs (optical, ultrasonic, passive capacitance, and active capacitance) are in common use.


3.2.4 Machine vision:


Machine vision (MV) is the process of applying a range of technologies and methods to provide imaging-based automatic inspection, process control and robot guidance in industrial applications. While the scope of MV is broad and a comprehensive definition is difficult to distil, a generally accepted definition of machine vision is "... the analysis of images to extract data for controlling a process or activity."


Chapter 4 Software
4.1 Introduction:
MATLAB (matrix laboratory) is a numerical computing environment and fourth-generation programming language. Developed by MathWorks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, and Fortran. Although MATLAB is intended primarily for numerical computing, an optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic computing capabilities. An additional package, Simulink, adds graphical multi-domain simulation and model-based design for dynamic and embedded systems. In 2004, MATLAB had around one million users across industry and academia. MATLAB users come from various backgrounds of engineering, science, and economics, and MATLAB is widely used in academic and research institutions as well as industrial enterprises.

4.2 Coding:

% Min-Max algorithm %


% Read the test image and crop a 256 x 256 region
I = imread('hh.jpg');
I = I(10 + (1:256), 222 + (1:256), :);
figure; imshow(I); title('Original Image');

% Simulate motion blur with a point spread function, then restore with a Wiener filter
LEN = 31; THETA = 11;
PSF = fspecial('motion', LEN, THETA);
blurred = imfilter(I, PSF, 'circular', 'conv');
figure; imshow(blurred); title('Blurred Image');
wnr1 = deconvwnr(blurred, PSF);
figure; imshow(wnr1); title('Restored, True PSF');

% Indexed-colour quantisation without dithering
rgb = imread('hh.jpg');
[X_no_dither, map] = rgb2ind(rgb, 8, 'nodither');
figure; imshow(X_no_dither, map);

% Display repeated copies of the image as a montage
onion = imread('hh.jpg');
onionArray = repmat(onion, [1 1 1 4]);
montage(onionArray);

% Horizontal edge detection with a Sobel-type kernel
RGB = imread('hh.jpg');
I = rgb2gray(RGB);
h = [1 2 1; 0 0 0; -1 -2 -1];
I2 = filter2(h, I);
imshow(I2, 'DisplayRange', []); colorbar
imtool('hh.jpg');

% Show two indexed images side by side
[X1, map1] = imread('hh.jpg');
[X2, map2] = imread('face.jpg');
subplot(1, 2, 1); imshow(X1, map1)
subplot(1, 2, 2); imshow(X2, map2)


Output:

Fig 4.1 Original image


Fig 4.2 Blurred image


Fig 4.3 Deblurred image

Fig 4.4 Segmented images


Fig 4.5 Gray scale image


% Drop fall algorithm %


%% Image segmentation and extraction
%% Read image
imagen = imread('image_a.jpg');

%% Show input image
figure(1)
imshow(imagen);
title('INPUT IMAGE WITH NOISE')

%% Convert to grayscale
if size(imagen, 3) == 3   % RGB image
    imagen = rgb2gray(imagen);
end

%% Convert to binary image
threshold = graythresh(imagen);
imagen = ~im2bw(imagen, threshold);

%% Remove all objects containing fewer than 30 pixels
imagen = bwareaopen(imagen, 30);
pause(1)

%% Show binary image
figure(2)
imshow(~imagen);
title('INPUT IMAGE WITHOUT NOISE')

%% Label connected components
[L, Ne] = bwlabel(imagen);

%% Measure properties of image regions
propied = regionprops(L, 'BoundingBox');
hold on

%% Plot bounding boxes (the loop body was missing from the original listing; drawing each box is the intended step)
for n = 1:size(propied, 1)
    rectangle('Position', propied(n).BoundingBox, 'EdgeColor', 'g', 'LineWidth', 1);
end
hold off
pause(1)

%% Objects extraction
figure

Output:

Fig 4.6 Input image with noise


Fig 4.7 Input image without noise


CONCLUSION
Segmentation is exactly the right process and technique for this task. With the increasing number of image segmentation algorithms, evaluating the performance of these algorithms becomes indispensable in the study of segmentation. We therefore implemented image segmentation in MATLAB using two algorithmic methods: adaptive colour image segmentation using the fuzzy min-max algorithm, and the drop fall algorithm. Our project addresses an advanced technological problem. At present, with an internet speed of 250 KB per minute, transmitting an image of 1 MB requires five or six minutes. To overcome this, we extract the relevant image or data without losing the original information. Similarly, if we want to transmit a video signal of different frequencies in different bandwidth channels, we can extract the sound using the thresholding process in order to transmit the signal more easily. With these advantages, we believe our project can contribute usefully to this technology.


Chapter 6
Bibliography
Digital Image Processing - Rafael C. Gonzalez, Richard E. Woods
Image segmentation and extraction - www.wikipedia.org
www.studentstechnology.com
www.everlight.com
www.alldatasheets.com
www.national.com
www.fairchild.com


Appendix:
TDF : Traditional Drop Fall algorithm
IDF : Inertial Drop Fall algorithm
BIDF : Big Inertial Drop Fall algorithm
ACISFMC : Adaptive Colour Image Segmentation using Fuzzy Min-Max Clustering
HSV : Hue, Saturation, Value (intensity)
imread : reads an image from a file
PSF : point spread function
deconvwnr : deblurs an image using the Wiener filter
repmat : replicates and tiles an array
bwareaopen : morphological opening of a binary image (removes small objects)
