
INTELLIGENT CLASSIFICATION TECHNIQUE OF HUMAN BRAIN MRI WITH EFFICIENT WAVELET BASED FEATURE EXTRACTION
Abstract
Classification of brain images using Magnetic Resonance Imaging (MRI) is a difficult task due to
the variance and complexity of various diseases such as tumor, intracranial bleed, Alzheimer's and
stroke. This study presents Artificial Neural Network (ANN) techniques, linear discriminant
analysis and the Support Vector Machine (SVM) for the classification of magnetic resonance
human brain images. The classification problem was addressed as two-class and four-class
classification cases. The proposed techniques consist of three stages: preprocessing, Discrete
Wavelet Transform (DWT) based feature extraction, and classification. The wavelet transform is used to
decompose the image with the Daubechies-4 (Daub-4) wavelet. In the classification stage the ANN,
linear and SVM classifiers were used to classify subjects as normal, tumor, intracranial bleed (ICB)
or Alzheimer's MRI brain images. The results of the ANN classifier were compared with the results
of the linear classifier. With the SVM classifier, the number of features used was reduced
compared to the linear classifier. An accuracy of 100% with sensitivity and specificity of 100% was
achieved in this study using the SVM classifier.

Keywords: Artificial Neural Network, Linear classifier, MRI, SVM classifier, Wavelet Transform

INTRODUCTION
Magnetic resonance imaging (MRI) is a test that uses a magnetic field and pulses of radio wave
energy to make pictures of organs and structures inside the body. In many cases, MRI gives
different information about structures in the body than can be seen with an X-ray, ultrasound, or
computed tomography (CT) scan. MRI also may show problems that cannot be seen with other
imaging methods. For an MRI test, the area of the body being studied is placed inside a special
machine that contains a strong magnet. Pictures from an MRI scan are digital images that can be
saved and stored on a computer for further study. The images also can be reviewed remotely, such
as in a clinic or an operating room. In some cases, contrast material may be used during the MRI
scan to show certain structures more clearly. Fig. 1 shows an MRI of the human brain.
Classification of tumors in magnetic resonance images is an important task, but it is quite time
consuming when performed manually by experts. MRI is a noninvasive method for producing
three-dimensional (3D) tomographic images of the human body. MRI is most often used for the
classification of normal tissue, tumors, Alzheimer's, intracranial bleed and other abnormalities in
soft tissues such as the brain. Clinically, radiologists qualitatively analyze films produced by MRI
scanners. Classification of brain images using MRI is a difficult task due to the variance and
complexity of disease. The proposed techniques consist of three stages: preprocessing, Discrete
Wavelet Transform based feature extraction, and classification.

Fig. 1: A magnetic resonance image (MRI) of the brain


This work presents Artificial Neural Network techniques, linear discriminant analysis and the
Support Vector Machine for the classification of magnetic resonance human brain images. The
results of the ANN classifier were compared with the results of the linear classifier, showing that
the classification accuracy of the ANN is 100% when using the DWT. The SVM classifier can
reduce the number of features and improve the accuracy of prediction. MRI is a relatively new
medical imaging technology, with foundations dating to 1946. Until the 1970s the underlying
technique was used mainly for chemical and physical analysis. In 1971 it was first shown that the
technique could be used to study disease, followed by the advent of computed tomography (using
computer techniques to develop images from MRI information) by Hounsfield in 1973 and of
echo-planar imaging (a rapid imaging technique) by Mansfield in 1977. Many scientists over the
next 20 years developed MRI into the present technology. Perhaps one of the most exciting
developments was the advent of superconductors, which make the strong magnetic fields used in
MRI possible. The first MRI examination of a human being did not take place until 1977. A
landmark recognition came in 2003, when the Nobel Prize in Physiology or Medicine was awarded
for discoveries concerning MRI. Many methods have been proposed for the classification of MRI
brain images. Features extracted from the images include the angular second moment, inverse
difference moment, contrast ratio and entropy based on the gray-level co-occurrence matrix, as
well as the 60-dimensional Gabor wavelet texture feature. One approach builds a decision tree
from the training sets and classifies the input data set based on the generated tree. In another, the
primary features were obtained using a three-level two-dimensional discrete wavelet transform;
the primary feature vector was high-dimensional, which entails huge computational complexity,
so spectral regression discriminant analysis was used to reduce the dimension and a support vector
machine was used to classify the low-dimensional feature vectors.

Projection images are useful in determining the primary location of tumors. Automating this
process is a challenging task due to the high diversity in the appearance of tumor tissue in
different patients and, in many cases, the similarity between tumor and normal tissues. The images
are in a standard format usable in Digital Imaging and Communications in Medicine (DICOM).
This is the standard format for all medical images; it was developed by the National Electrical
Manufacturers Association (NEMA) and is mainly used for storing, printing and transmitting
information in medical imaging. Many diagnostic imaging techniques can be performed for early
detection of brain tumors, such as Computed Tomography (CT), Positron Emission Tomography
(PET) and Magnetic Resonance Imaging (MRI). Compared to the other imaging techniques, MRI
is more efficient for brain tumor detection and identification, mainly due to the high contrast of
soft tissues and high spatial resolution, and because it does not produce any harmful radiation and
is a non-invasive technique. Fig. 2(a), (b) and (c) show MR images from the BRATS database,
categorized into three distinct classes (normal, Astrocytomas and Meningiomas), which are
considered for the implementation of DWT feature extraction and classification.

Fig. 2: Normal and abnormal MRI images of the brain

CHAPTER 1
Background
Digital Image Processing
Digital image processing is the use of algorithms to perform image processing of digital images.
As a subcategory or field of digital signal processing, digital image processing has many advantages
over analog image processing. It allows a much wider range of algorithms to be applied to the input
data and can avoid problems such as the build-up of noise and signal distortion during processing.
Since images are defined over two dimensions (perhaps more) digital image processing may be
modelled in the form of multidimensional systems.
With the fast computers and signal processors available in the 2000s, digital image processing has
become the most common form of image processing, and is generally used because it is not only
the most versatile method but also the cheapest.

Digital image processing technology for medical applications was inducted into the Space
Foundation Space Technology Hall of Fame in 1994.

Image processing in its broadest sense is an umbrella term for representing and analyzing data
in visual form. More narrowly, image processing is the manipulation of numeric data contained in
a digital image for the purpose of enhancing its visual appearance. Through image processing, faded
pictures can be enhanced, medical images clarified, and satellite photographs calibrated. Image
processing software can also translate numeric information into visual images that can be edited,
enhanced, filtered, or animated in order to reveal relationships previously not apparent. Image
analysis, in contrast, involves collecting data from digital images in the form of measurements that
can then be analyzed and transformed.

Originally developed for space exploration and biomedicine, digital image processing and analysis
are now used in a wide range of industrial, artistic, and educational applications. Software for image
processing and analysis is widely available on all major computer platforms. This software supports
the modern adage that "a picture is worth a thousand words, but an image is worth a thousand
pictures."
Fig 1.1: Standard processing of space borne images

Each of the pixels that represents an image stored inside a computer has a pixel value which
describes how bright that pixel is and/or what color it should be. In the simplest case of binary
images, the pixel value is a 1-bit number indicating either foreground or background. For a
grayscale image, the pixel value is a single number that represents the brightness of the pixel. The
most common pixel format is the byte image, where this number is stored as an 8-bit integer
giving a range of possible values from 0 to 255. Typically zero is taken to be black and 255 is
taken to be white. Values in between make up the different shades of gray.

Although simple 8-bit integers or vectors of 8-bit integers are the most common sorts of pixel values
used, some image formats support different types of value, for instance 32-bit signed integers or
floating point values. Such values are extremely useful in image processing as they allow processing
to be carried out on the image where the resulting pixel values are not necessarily 8-bit integers. If
this approach is used then it is usually necessary to set up a color map which relates particular
ranges of pixel values to particular displayed colors.
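
As a brief illustration of these pixel types, the following MATLAB fragment (a sketch added for this section, not code from the original work) contrasts an 8-bit byte image with a floating-point copy whose processed values fall outside the 0-255 range; 'cameraman.tif' is a demo image shipped with the Image Processing Toolbox and merely stands in for any grayscale image.

% Pixel value types: byte image vs. floating point (MATLAB,
% Image Processing Toolbox assumed for the demo image and imshow).
I8 = imread('cameraman.tif');   % byte image: uint8 values in 0..255
Id = double(I8);                % floating-point copy of the same pixels
J  = Id * 1.5;                  % processing can push values beyond 255
imshow(J, []);                  % '[]' maps min..max to black..white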
1.1 History

Many of the techniques of digital image processing, or digital picture processing as it often was
called, were developed in the 1960s at the Jet Propulsion Laboratory, Massachusetts Institute of
Technology, Bell Laboratories, University of Maryland, and a few other research facilities, with
application to satellite imagery, wire-photo standards conversion, medical imaging, videophone,
character recognition, and photograph enhancement. The cost of processing was fairly high,
however, with the computing equipment of that era. That changed in the 1970s, when digital image
processing proliferated as cheaper computers and dedicated hardware became available. Images
then could be processed in real time, for some dedicated problems such as television standards
conversion. As general-purpose computers became faster, they started to take over the role of
dedicated hardware for all but the most specialized and computer-intensive operations.

1.2 Image Sampling and Quantization

To create a digital image, we need to convert the continuous sensed data into digital form. This
involves two processes:

1) Sampling

2) Quantization

The basic idea behind sampling and quantization is illustrated in Fig. 1.4 and Fig. 1.5. An image
may be continuous with respect to the x- and y-coordinates, and also in amplitude. To convert it to
digital form, we have to sample the function in both coordinates and in amplitude. Digitizing the
coordinate values is called sampling. Digitizing the amplitude values is called quantization.

Fig 1.2 Continuous image.

Fig 1.3 A plot of amplitude values along a line of the continuous image.

Fig 1.4 Sampling of the above continuous image.

Fig 1.5 Quantization of the sampled image.


The one-dimensional function shown in Fig. 1.3 is a plot of amplitude (gray level) values of the
continuous image along the line segment AB in Fig. 1.2. The random variations are due to image
noise. To sample this function, we take equally spaced samples along line AB, as shown in Fig.
1.4. The location of each sample is given by a vertical tick mark in the bottom part of the figure.
The samples are shown as small white squares superimposed on the function.

The set of these discrete locations gives the sampled function. However, the values of the samples
still span (vertically) a continuous range of gray-level values.

In order to form a digital function, the gray-level values also must be converted (quantized) into
discrete quantities. The right side of Fig. 1.4 shows the gray-level scale divided into eight discrete
levels, ranging from black to white. The vertical tick marks indicate the specific value assigned to
each of the eight gray levels. The continuous gray levels are quantized simply by assigning one of
the eight discrete gray levels to each sample. The assignment is made depending on the vertical
proximity of a sample to a vertical tick mark. The digital samples resulting from both sampling and
quantization are shown in Fig. 1.6. Starting at the top of the image and carrying out this procedure
line by line produces a two-dimensional digital image. Sampling in the manner just described
assumes that we have a continuous image in both coordinate directions as well as in amplitude. In
practice, the method of sampling is determined by the sensor arrangement used to generate the
image.

Fig 1.6 a) Continuous image projected onto a sensor array. b) Result of image sampling and
quantization.
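
As an illustrative sketch (not part of the original experiments), the MATLAB fragment below samples a grayscale image on a coarser grid and quantizes its amplitudes to eight gray levels, mirroring the two digitization steps just described; 'cameraman.tif' again stands in for a continuous sensed image.

% Sampling and quantization sketch (MATLAB, Image Processing Toolbox).
f  = im2double(imread('cameraman.tif'));  % stand-in for a continuous image
fs = f(1:4:end, 1:4:end);                 % sampling: keep every 4th pixel
levels = 8;                               % number of discrete gray levels
fq = round(fs*(levels-1)) / (levels-1);   % quantization to 8 levels
figure;
subplot(1,3,1); imshow(f);  title('Original');
subplot(1,3,2); imshow(fs); title('Sampled');
subplot(1,3,3); imshow(fq); title('Quantized (8 levels)');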
1.3 Image Representation

The result of sampling and quantization is a matrix of real numbers. We will use two principal ways
to represent digital images. Assume that an image f(x, y) is sampled so that the resulting digital
image has M rows and N columns.

The values of the coordinates (x, y) now become discrete quantities. For notational clarity and
convenience, we shall use integer values for these discrete coordinates. Thus, the values of the
coordinates at the origin are (x, y) = (0, 0). The next coordinate values along the first row of the
image are represented as (x, y) = (0, 1). It is important to keep in mind that the notation (0, 1) is
used to signify the second sample along the first row. It does not mean that these are the actual
values of physical coordinates when the image was sampled.

Fig 1.7 Coordinate convention to represent digital images

The notation introduced in the preceding paragraph allows us to write the complete M×N digital
image in the following compact matrix form:

$$f(x,y)=\begin{bmatrix} f(0,0) & f(0,1) & \cdots & f(0,N-1)\\ f(1,0) & f(1,1) & \cdots & f(1,N-1)\\ \vdots & \vdots & & \vdots\\ f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1) \end{bmatrix}$$

The right side of this equation is by definition a digital image. Each element of this matrix array is
called an image element, picture element, pixel, or pel. The terms image and pixel will be used
throughout the rest of our discussions to denote a digital image and its elements. In some
discussions, it is advantageous to use a more traditional matrix notation to denote a digital image
and its elements:

$$A=\begin{bmatrix} a_{0,0} & a_{0,1} & \cdots & a_{0,N-1}\\ a_{1,0} & a_{1,1} & \cdots & a_{1,N-1}\\ \vdots & \vdots & & \vdots\\ a_{M-1,0} & a_{M-1,1} & \cdots & a_{M-1,N-1} \end{bmatrix}$$


CHAPTER 2

Image Classification

Pattern Recognition

A physical object is usually represented in image analysis and computer vision by a region in a
segmented image. The set of objects can be divided into disjoint subsets, that, from the
classification point of view, have some common features and are called classes. How objects are
divided into classes is not fixed and depends on the classification goal. Object recognition assigns
classes to objects, and the algorithm that does this is called a classifier. The number of classes is
usually known beforehand, and typically can be derived from the problem specification.
Nevertheless, there are approaches in which the number of classes may not be known. The
classifier (similarly to a human) does not decide about the class from the object itself; rather,
sensed object properties serve this purpose. For example, to distinguish steel from sandstone, we
do not have to determine molecular structures, even though this would describe them well.
Properties such as texture, specific weight, hardness, etc., are used instead. These properties are
called the pattern, and the classifier actually recognizes the patterns and not the objects.

Object recognition and pattern recognition can be considered synonymous. The main pattern
recognition steps are shown in Figure 2.1. The block 'Construction of formal description' is based
on the experience and intuition of the designer. A set of elementary properties is chosen which
describe some characteristics of the object; these properties are measured in an appropriate way
and form the description pattern of the object. These properties can be either quantitative or
qualitative in character and their form can vary (numerical vectors, chains, etc.). The theory of
recognition deals with the problem of designing the classifier for the specific (chosen) set of
elementary object descriptions.
Figure 2.1 Main pattern recognition steps.

Automated Classification

The identification of cells in a histological slide as healthy or abnormal, the separation of
'high-quality' fruit from inferior specimens in a fruit-packing plant and the categorization of
remote-sensing images are just a few simple examples of classification tasks. The first two examples are
fairly typical of binary classification. In these cases, the purpose of the classification is to assign a
given cell or piece of fruit to either of just two possible classes. In the first example above, the two
classes might be ‘healthy’ and ‘abnormal’; in the second example, these might be ‘top quality’
(expensive) or ‘market grade’ (cheap). The third example typically permits a larger number of
classes to be assigned ('forest', 'urban', 'cultivated', 'unknown', etc.). Autonomous classification
is a broad and widely studied field that more properly belongs within the discipline of pattern
recognition than in image processing. However, these fields are closely related and, indeed,
classification is so important in the broader context and applications of image processing that some
basic discussion of classification seems essential. In this chapter we will limit ourselves to some
essential ideas and a discussion of some of the best-known techniques. Classification takes place
in virtually all aspects of life and its basic role as a necessary first step before selection or other
basic forms of decision-making hardly needs elaboration. In the context of image processing, the
goal of classification is to identify characteristic features, patterns or structures within an image
and use these to assign them (or indeed the image itself) to a particular class. As far as images and
visualstimuli go, human observers will often perform certain classification tasks very accurately.
Why then should we attempt to build automatic classification systems? The answer is that the
specific demands or nature of a classification task and the sheer volume of data that need to be
processed often make automated classification the only realistic and cost-effective approach to
addressing a problem. Nonetheless, expert human input into the design and training of automated
classifiers is invariably essential in two key areas:

(1) Task specification: What exactly do we want the classifier to achieve? The designer of a
classification system will need to decide what classes are going to be considered and what
variables or parameters are going to be important in achieving the classification. For example, a
simple classifier designed to make a preliminary medical diagnosis based on image analysis of
histological slides may only aim to classify cells as 'abnormal' or 'normal'. If the classifier
produces an ‘abnormal’ result, a medical expert is typically called upon to investigate further. On
the other hand, it may be that there is sufficient information in the shape, density, size and colour
of cells in typical slides to attempt a more ambitious classification system. Such a system might
assign the patient to one of a number of categories, such as ‘normal’, ‘anaemic’, ‘type A viral
infection’ and so on.

(2) Class labeling: The process of training an automated classifier can often require ‘manual
labelling’ in the initial stage, a process in which an expert human user assigns examples to specific
classes based on selected and salient properties. This forms part of the process in generating so-called
supervised classifiers.

Supervised and unsupervised classification

Classification techniques can be grouped into two main types: supervised and unsupervised.
Supervised classification relies on having example pattern or feature vectors which have already
been assigned to a defined class. Using a sample of such feature vectors as our training data, we
design a classification system with the intention and hope that new examples of feature vectors
which were not used in the design will subsequently be classified accurately. In supervised
classification then, the aim is to use training examples to design a classifier which generalizes well
to new examples. By contrast, unsupervised classification does not rely on possession of existing
examples from a known pattern class. The examples are not labelled, and we seek to identify
groups directly within the overall body of data and features that enable us to distinguish one group
from another. Clustering techniques are an example of unsupervised classification.

Example of Image Classification

Consider the following illustrative problem. Images are taken at a food processing plant in which
three types of object occur: pine-nuts, lentils and pumpkin seeds. We wish to design a classifier
that will enable us to identify the three types of object accurately. A typical image frame is shown
in Figure 2.2. Let us assume that we can process these images to obtain reliable measurements
on the two quantities of circularity and line-fit error. This step (often the most difficult) is called
feature extraction. For each training example of a pine-nut, lentil or pumpkin seed we thus obtain
a 2-D feature vector which we can plot as a point in a 2-D feature space. In Figure 2.3, training
examples of pine-nuts, lentils and pumpkin seeds are respectively identified by squares, triangles
and circles. We can see by direct inspection of Figure 2.3 that the three classes form more or less
distinct clusters in the feature space. This indicates that these two chosen features are broadly
adequate to discriminate between the different classes (i.e. to achieve satisfactory classification).
By contrast, note from Figure 2.3 what happens if we consider the use of just one of these features
in isolation. In this case, the feature space reduces to a single dimension given by the orthogonal
projection of the points either onto the vertical (line-fit error) or horizontal (circularity) axis. In
either case, there is considerable overlap or confusion of classes, indicating that misclassification
occurs in a certain fraction of cases. Now, the real aim of any automatic classification system is to
generalize to new examples, performing accurate classification on objects or structures whose class
is a priori unknown.

Figure 2.2 Three types of object exist in this image: pine-nuts, lentils and pumpkin seeds. The
image on the right has been processed to extract two features: circularity and line-fit error
Figure 2.3 A simple 2-D feature space for discriminating between certain objects. The 1-D bar
plots in the horizontal x and vertical y directions show the projection of the data onto the
respective axes

Supervised Classification

When you browse your email, you can usually tell right away whether a message is spam. Still,
you probably do not enjoy spending your time identifying spam and have come to rely on a filter
to do that task for you, either deleting the spam automatically or filing it in a different mailbox.
An email filter is based on a set of rules applied to each incoming message, tagging it as spam or
“ham” (not spam). Such a filter is an example of a supervised classification algorithm. It is
formulated by studying a training sample of email messages that have been manually classified as
spam or ham. Information in the header and text of each message is converted into a set of
numerical variables such as the size of the email, the domain of the sender, or the presence of the
word “free.” These variables are used to define rules that determine whether an incoming message
is spam or ham. An effective email filter must successfully identify most of the spam without
losing legitimate email messages: That is, it needs to be an accurate classification algorithm. The
filter must also be efficient so that it does not become a bottleneck in the delivery of mail. Knowing
which variables in the training set are useful and using only these helps to relieve the filter of
superfluous computations. Supervised classification forms the core of what we have recently come
to call data mining. The methods originated in statistics in the early twentieth century, under the
moniker discriminant analysis. An increase in the number and size of databases in the late twentieth
century has inspired a growing desire to extract knowledge from data, which has contributed to a
recent burst of research on new methods, especially on algorithms. There are now a multitude of
ways to build classification rules, each with some common elements. A training sample contains
data with known categorical response values for each recorded combination of explanatory
variables. The training sample is used to build the rules to predict the response. The basic steps
involved in the supervised classification technique are shown as a flowchart in Figure 2.4.

Figure 2.4 Simple flowchart explanation of supervised classification

Steps In Supervised Classification

(1) Class definition: Clearly, the definition of the classes is problem specific. For example, an
automated image-processing system that analysed mammograms might ultimately aim to classify
the images into just two categories of interest: normal and abnormal. This would be a binary
classifier or, as it is sometimes called, a dichotomizer. On the other hand, a more ambitious system
might attempt a more detailed diagnosis, classifying the scans into several different classes
according to the preliminary diagnosis and the degree of confidence held.

(2) Data exploration: In this step, a designer will explore the data to identify possible attributes
which will allow discrimination between the classes. There is no fixed or best way to approach
this step, but it will generally rely on a degree of intuition and common sense. The relevant
attributes can relate to absolutely any property of an image or image region that might be helpful
in discriminating one class from another.

Most common measures or attributes will be broadly based on intensity, colour, shape, texture or
some mixture thereof.

Figure 2.5 Flow diagram representing the main steps in supervised classifier design

(3) Feature selection and extraction: Selection of the discriminating features defines the feature
space. This, of course, implicitly assumes that we have established reliable image-processing
procedures to perform the necessary feature extraction. In general, it is crucial to select features
that possess two key properties. The first is that the feature set be as compact as possible, and the
second that they should possess what, for want of a more exact word, we will call discriminatory
power. A compact set of features is basically a small set, and it is of paramount importance, as
larger numbers of selected features require an increasingly large number of training samples to
train the classifier effectively. Second, it obviously makes sense to select (i) attributes whose
distribution over the defined classes is as widely separated as possible and (ii) a set of attributes
that are, as closely as possible, statistically independent of each other. A set of attributes
possessing these latter two properties would have maximum discriminatory power. A simple but
concrete illustration of this idea might be the problem of trying to discriminate between images of
a lion and a leopard. Defining feature vectors which give some measure of the boundary (e.g. a
truncated radial Fourier series) or some gross shape measure like form factor or elongation
constitutes one possible approach. However, it is clearly not a very good one, since lions and
leopards are both big cats of similar physique. A better measure in this case will be one based on
colour, since the distribution in r-g chromatic space is simple to measure and is quite sufficient to
discriminate between them.

(4) Build the classifier using training data: The training stage first requires us to find a sample
of examples which we can reliably assign to each of our selected classes. Let us assume that we
have selected N features which we anticipate will be sufficient to achieve the task of discrimination
and, hence, classification. For each training example we thus record the measurements on the N
selected features as the elements of an N-dimensional feature vector $\mathbf{x}=(x_1, x_2, \ldots, x_N)$. All the
training examples thus provide feature vectors which may be considered to occupy a specific
location in an (abstract) N-dimensional feature space (the jth element $x_j$ specifying the length along
the jth axis of the space). If the feature selection has been performed judiciously, then the feature
vectors for each class of interest will form more or less distinct clusters or groups of points in the
feature space.

(5) Test the classifier: The classifier performance should be assessed on a new sample of feature
vectors to see how well it can generalize to new examples. If the performance is unsatisfactory
(what constitutes an unsatisfactory performance is obviously application dependent), the designer
will return to the drawing board, considering whether the selected classes are adequate, whether
the feature selection needs to be reconsidered or whether the classification algorithm itself needs
to be revised. The entire procedure would then be repeated until a satisfactory performance is
achieved.

The main advantage of supervised classification is that an operator can detect errors and correct
them. The disadvantages of this technique are that it is time consuming and costly. Moreover, the
training data chosen by the analyst may not highlight all the conditions encountered throughout
the image and hence it is prone to human error.

Unsupervised Classification
In the case of unsupervised classification, no prior information is needed and no form of human
intervention is required. This type of algorithm helps in identifying clusters in the data.

The steps in unsupervised classification are:

i. Clustering the data.

ii. All pixels are then classified based on the clusters.

iii. A spectral class map is produced.

iv. Cluster labeling is done by the analyst.

v. The informational class map is produced.

The advantages of the unsupervised technique are that it is faster, free from human errors and
requires no detailed prior knowledge. The main drawback of this technique is that the maximally
separable spectral clusters it finds may not correspond to the informational classes of interest.

Commonly used Unsupervised Classifiers

• K-means clustering
• ISODATA

K-means clustering

K-means clustering is a type of unsupervised learning, which is used when you have
unlabeled data (i.e., data without defined categories or groups). The goal of this algorithm is
to find groups in the data, with the number of groups represented by the variable K. The
algorithm works iteratively to assign each data point to one of K groups based on the features
that are provided. Data points are clustered based on feature similarity. The results of the K-
means clustering algorithm are:

1. The centroids of the K clusters, which can be used to label new data

2. Labels for the training data (each data point is assigned to a single cluster)
Rather than defining groups before looking at the data, clustering allows you to find and
analyze the groups that have formed organically.

Each centroid of a cluster is a collection of feature values which define the resulting groups.
Examining the centroid feature weights can be used to qualitatively interpret what kind of
group each cluster represents.

Algorithm

The Κ-means clustering algorithm uses iterative refinement to produce a final result. The
algorithm inputs are the number of clusters Κ and the data set. The data set is a collection of
features for each data point. The algorithm starts with initial estimates for the Κ centroids,
which can either be randomly generated or randomly selected from the data set. The
algorithm then iterates between two steps:

1. Data assignment step:

Each centroid defines one of the clusters. In this step, each data point is assigned to its nearest
centroid, based on the squared Euclidean distance. More formally, if $c_i$ is a centroid in the set
of centroids $C$, then each data point $x$ is assigned to the cluster

$$\underset{c_i \in C}{\arg\min}\; \mathrm{dist}(c_i, x)^2$$

where $\mathrm{dist}(c_i, x)^2$ is the standard (L2) squared Euclidean distance. Let $S_i$ be the set of data
points assigned to the $i$th cluster centroid.

2. Centroid update step:

In this step, the centroids are recomputed. This is done by taking the mean of all data points
assigned to each centroid's cluster:

$$c_i = \frac{1}{|S_i|}\sum_{x \in S_i} x$$

The algorithm iterates between steps one and two until a stopping criterion is met (i.e., no data
points change clusters, the sum of the distances is minimized, or some maximum number of
iterations is reached).

This algorithm is guaranteed to converge to a result. The result may be a local optimum (i.e.
not necessarily the best possible outcome), meaning that assessing more than one run of the
algorithm with randomized starting centroids may give a better outcome.
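
For concreteness, the following MATLAB sketch (illustrative only; the data are synthetic) runs the built-in kmeans function from the Statistics and Machine Learning Toolbox with several replicates, which is one common guard against the local optima just mentioned.

% K-means sketch (MATLAB, Statistics and Machine Learning Toolbox).
rng(1);                                  % reproducible random initialization
X = [randn(50,2); randn(50,2) + 4];      % two synthetic 2-D clusters
K = 2;
[idx, C] = kmeans(X, K, 'Distance', 'sqeuclidean', 'Replicates', 5);
% idx : cluster label assigned to each data point
% C   : K-by-2 matrix of final cluster centroids
% 'Replicates' reruns with new random centroids and keeps the best run.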

ISODATA

ISODATA is a method of unsupervised classification in which one need not know the number of
clusters in advance. The algorithm splits and merges clusters, with the user defining threshold
values for its parameters. The computer runs the algorithm through many iterations until the
thresholds are reached.

Methodology of ISODATA

1) Cluster centers are randomly placed and pixels are assigned based on the shortest distance to
center method.

2) The standard deviation within each cluster and the distance between cluster centers are
calculated. Clusters are split if one or more standard deviations is greater than the user-defined
threshold, and clusters are merged if the distance between them is less than the user-defined
threshold.

3) A second iteration is performed with the new cluster centers.

4) Further iterations are performed until:

– i) the average inter-center distance falls below the user-defined threshold,

– ii) the average change in the inter-center distance between iterations is less than a threshold, or

– iii) the maximum number of iterations is reached


CHAPTER 3

Types of Supervised Classifiers

A few popular supervised machine learning algorithms for image classification are:

1. Minimum Distance from Mean (MDM)


2. Support vector machines
3. Parallelepiped
4. Maximum Likelihood (ML)
5. Artificial Neural Networks (ANN)

Minimum Distance from Mean (MDM)

The MDM method is commonly used for remote sensing applications, where images captured by
satellites and aircraft are classified. In the figure given below, the axes correspond to the image
spectral bands. Each pixel of the satellite image corresponds to a point in the feature space. The
figure shows three classes, drawn as red, green and blue points. The red point cloud overlaps with
the green and blue ones. There is also a black point cloud that does not belong to any class. After
the image is classified these points will correspond to classified pixels.
An imaginary example of a minimum distance algorithm to be used to distinguish classes

The figure on the left shows a situation where the classification does not include the possibility of
unclassified pixels; the figure on the right, on the contrary, shows a case with unclassified pixels
in the results of the classification. The grey arrows show the distance from the green point A and
the red point B to the centers of the green and red classes. We see that both points are closer to the
green class center. Therefore points A and B will be classified by minimum distance into the green
class. Here we see the principle of determining membership in a class and the source of errors in
the classification. But the number of errors will be less than when we limit the classes to
rectangles, as in classification by the parallelepiped algorithm. That is why, when brightness
values of classes overlap, it is recommended to use a minimum distance algorithm rather than a
parallelepiped algorithm.

If we assume the presence of unclassified pixels, the minimum distance algorithm gets slightly
more complicated. The figure shows a black point marked as C. The closest class center to it is
the center of the red class. To exclude this point from the classification procedure, you need to
limit the search range around the class centers. For this, set the maximum permissible distance
from the center of the class. The figure on the right shows an example of this: maximum distances
from the class centers that limit the search radius are marked with dashed circles. Without this
restriction, most black points would be assigned to the red class, and some to green (left figure).
With the restriction (right figure) they remain unclassified.

You can apply a search restriction of the same value to all classes. This is appropriate when all
classes have a similar spread of values. If the classes have very different spreads of values, then it
is necessary to set a separate search radius for each class. This more complex case is shown in the
right figure, where a greater distance from the center is defined for the red class than for the blue
or the green one.
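
A minimal MATLAB sketch of this minimum-distance rule, including the search-radius restriction, is given below; the training data, the pixel x and the radius rMax are all synthetic placeholders, and the subtraction mu - x relies on implicit expansion (MATLAB R2016b or later).

% Minimum-distance-from-mean sketch (MATLAB, synthetic data).
rng(1);
trainX = [randn(30,2); randn(30,2) + 5];            % two classes in 2-D
trainY = [ones(30,1); 2*ones(30,1)];
classes = unique(trainY);
mu = zeros(numel(classes), size(trainX,2));
for c = 1:numel(classes)
    mu(c,:) = mean(trainX(trainY == classes(c), :), 1);   % class centers
end

x    = [4.8 5.2];                         % pixel feature vector to classify
rMax = 3;                                 % maximum permissible distance
d = sqrt(sum((mu - x).^2, 2));            % distance to each class center
[dMin, cBest] = min(d);
if dMin <= rMax
    label = classes(cBest);               % nearest class within the radius
else
    label = NaN;                          % unclassified pixel
end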

Support Vector Machine (SVM)

In a basic SVM, a set of input data is given and an output is predicted for each input; the two
possible classes determine the output. The SVM model can be viewed as a representation of points
in space, mapped so that the examples of the different categories are divided by a clear gap which
should be as wide as possible. New examples are then mapped into the same space. An SVM
makes use of kernel mapping, which maps the data in the input space onto a high-dimensional
feature space, so that the problem becomes linearly separable. The decision function of an SVM
depends on the number of support vectors (SVs) as well as on their weights.

In the figure given below, the SVM fairly separates the two classes. Any point to the left of the
line falls into the black circle class, and any point to the right falls into the blue square class. That
is what an SVM does: it finds a line (a hyperplane in multidimensional space) that separates the
classes.

Classification of two classes using SVM

Parallelepiped

To perform parallelepiped classification the software requires two parameters for each of the
classes. These are the average brightness value and standard deviation from the mean in all
channels of the image.
The classification process has three stages (Fig. 2). In the first stage the algorithm sets the centers for
each class in the spectral feature space. This is why it is necessary to know the average brightness
of classes in all channels of the image. Figure 2A shows the centers of three classes marked with
red, blue and green dots.

Next, the algorithm finds extreme points of each class. To do this, lines are laid from the class
centres parallel to the feature space axes. Their lengths equal several standard deviations; the
exact value is set by the user, and this setting influences the sensitivity of the classification. This
amount may be the same for all classes, but it may also vary.

Figure 2B shows the second stage of classification. 3 standard deviations are laid for the red class,
and 4 standard deviations for the green and blue classes.

Classification stages

In the final stage (Fig. 2C) the algorithm lays lines parallel to the axes that enclose the point
clouds, crossing the extreme points. All values that fall inside are assigned to the appropriate
class. In Figure 2, we see that the geometric shape that limits a class is a parallelepiped, which is
where the algorithm's name comes from.
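
The decision rule itself reduces to a box test in every band, as in the following MATLAB sketch; the means, standard deviations and the pixel are illustrative numbers, not values from the text, and the comparison uses implicit expansion (R2016b or later).

% Parallelepiped decision rule sketch (MATLAB, synthetic values).
mu = [50 80; 120 40];                 % per-class band means (2 classes x 2 bands)
sd = [5 6; 8 7];                      % per-class band standard deviations
k  = 3;                               % box half-width in standard deviations
x  = [55 85];                         % pixel brightness in the two bands
inBox = all(abs(x - mu) <= k*sd, 2);  % inside the box in every band?
label = find(inBox, 1);               % first matching class; empty if none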
Maximum Likelihood (ML)

In nature the classes that we classify exhibit natural variation in their spectral patterns. Further
variability is added by the effects of haze, topographic shadowing, system noise, and the effects of
mixed pixels. As a result, remote sensing images seldom record spectrally pure classes; more
typically, they display a range of brightnesses in each band. The classification strategies considered
thus far do not consider variation that may be present within spectral categories and do not address
problems that arise when frequency distributions of spectral values from separate categories
overlap. The maximum likelihood (ML) procedure is the most common supervised method used
with remote sensing. It can be described as a statistical approach to pattern recognition where the
probability of a pixel belonging to each of a predefined set of classes is calculated; hence the pixel
is assigned to the class with the highest probability.
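
Assuming Gaussian class-conditional densities (a common choice for ML classification, though the text does not fix one), a minimal MATLAB sketch using mvnpdf from the Statistics and Machine Learning Toolbox looks like this; the means and covariances are placeholders that would normally be estimated from training pixels.

% Maximum-likelihood decision sketch (MATLAB, Statistics Toolbox).
mu1 = [2 2];  S1 = eye(2);            % class 1 mean and covariance
mu2 = [5 5];  S2 = 2*eye(2);          % class 2 mean and covariance
x = [3.2 2.9];                        % pixel feature vector
p = [mvnpdf(x, mu1, S1), mvnpdf(x, mu2, S2)];   % class likelihoods
[~, label] = max(p);                  % assign to the most probable class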
CHAPTER 4
PROPOSED WORK

The general overview of the proposed approach is illustrated in Fig. 3. This approach uses the
standard benchmark BRATS (Brain Tumor Segmentation) dataset [11] for the experiments. The
input tumor images are smoothed by a median filter; it is necessary to pre-process all the tumor
images for robust feature extraction and classification. The BRATS dataset is then divided into
three classes (normal, Astrocytomas and Meningiomas) for the feature extraction process. The
extracted features are modeled using SVM, k-NN and Decision Tree classifiers. The main steps of
a typical image processing system are thus: preprocessing, feature extraction and classification.
MRI images of various diseases were obtained from Moulana Hospital, Perinthalmanna. After the
preprocessing stage, wavelet-based features were extracted from these MRI images. These
extracted feature values were then given to the classifier and the results were analyzed. Accuracy,
sensitivity, and specificity of the classifier were used as performance evaluation metrics. The
image classification problem is addressed as two cases: two-class classification and four-class
classification. In two-class classification, normal and abnormal MRI images were used, whereas
in four-class classification, normal, tumor, intracranial bleed and Alzheimer's images were used.
The method used in this work is depicted in Figures 3 and 4.

Fig.3 Image processing system for normal and abnormal


Fig.4 Image processing system for 4 classes

4.1 Image Acquisition:

The proposed method was applied to analyze the MRI images taken from Moulana Hospital. The
data set consists of two sets of data. The first set has 40 normal images and 40 abnormal images.
The second set consists of 75 brain MR images: 20 normal images, 20 tumor images, 20
intracranial bleed images and 15 Alzheimer's images (Figure 5).

Fig.5 Images of normal, abnormal, tumor, intracranial bleed and Alzheimer’s


4.2 Pre-Processing

Median filtering is a nonlinear method used to remove noise from images. It is widely used
because it is very effective at removing noise while preserving edges; it is particularly effective at
removing 'salt and pepper' type noise. The median filter works by moving through the image pixel
by pixel, replacing each value with the median value of the neighboring pixels. The pattern of
neighbours is called the "window", which slides, pixel by pixel, over the entire image. The median
is calculated by first sorting all the pixel values from the window into numerical order and then
replacing the pixel being considered with the middle (median) value. Median filters do an
excellent job of rejecting certain types of noise, in particular "shot" or impulse noise, in which
some individual pixels have extreme values.
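
A minimal MATLAB sketch of this preprocessing step is shown below, using medfilt2 from the Image Processing Toolbox; since the hospital MRI data are not distributed with this report, the demo image 'cameraman.tif' stands in for a brain slice.

% Median-filter preprocessing sketch (MATLAB, Image Processing Toolbox).
I  = imread('cameraman.tif');               % stand-in for an MRI slice
In = imnoise(I, 'salt & pepper', 0.02);     % simulate impulse noise
If = medfilt2(In, [3 3]);                   % 3x3 sliding-window median
figure;
subplot(1,3,1); imshow(I);  title('Original');
subplot(1,3,2); imshow(In); title('Noisy');
subplot(1,3,3); imshow(If); title('Median filtered');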

4.3 Feature extraction using DWT

Features are extracted for normal, tumor, intracranial bleed, Alzheimer's and abnormal MRI
images. The features are mean, standard deviation (STD), kurtosis, median absolute deviation
(MAD), variance, RMS value, entropy and median. The wavelet transform has properties such as
sub-band coding, multiresolution analysis and time-frequency localization. The wavelet is a
powerful mathematical tool for feature extraction and has been used to extract the wavelet
coefficients from MR images. Wavelets are localized basis functions, which are scaled and shifted
versions of some fixed mother wavelet. The main advantage of wavelets is that they provide
localized frequency information about a function or a signal, which is particularly beneficial for
classification. The prototype wavelet from which the scaled and shifted versions are derived is
called the mother wavelet. Daub-4 belongs to the Daubechies family of orthogonal wavelets
defining a discrete wavelet transform, characterized by a maximal number of vanishing moments
for a given support. Daub-4 is well suited to feature extraction; for this reason we extract the
Daub-4 wavelet coefficients of brain MRI and use these coefficients as the feature vector for
classification. The Daub-4 wavelet enhances low signals which are neglected by other wavelet
transforms, because Daub-4 improves the contrast of an image. The DWT is used as the first step
to extract features from images. The Fourier transform (FT) provides a representation of an image
based only on its frequency content: the FT decomposes a signal into a spectrum of frequencies,
whereas wavelet analysis decomposes a signal into a hierarchy of scales starting from the coarsest
scale.

The continuous wavelet transform of a signal f(t) relative to a real-valued wavelet ψ(t) is defined
as

$$W(a,\tau)=\frac{1}{\sqrt{a}}\int_{-\infty}^{\infty} f(t)\,\psi\!\left(\frac{t-\tau}{a}\right)dt$$

where W(a, τ) is the wavelet transform, τ acts to translate the function across f(t) and the variable
a acts to vary the time scale of the probing function ψ. The equation can be discretized by
restricting a and τ to a discrete lattice (a = 2^j, τ = 2^j k) to give the discrete wavelet transform,
expressed as

$$a_{j,k}=\sum_{n} f(n)\,l_{j}(n-2^{j}k), \qquad d_{j,k}=\sum_{n} f(n)\,h_{j}(n-2^{j}k)$$

Here, $a_{j,k}$ and $d_{j,k}$ refer to the coefficients of the approximation components and detail
components respectively. l(n) and h(n) denote the low-pass and high-pass filters respectively, and
j and k represent the wavelet scale and translation factors respectively. The approximation
component contains the low-frequency components of the image while the detail components
contain the high-frequency components. The original image is processed along the x and y
directions by low-pass and high-pass filters, which gives the row representation of the image. In
this study, a one-level 2D DWT with Daubechies-4 filters is used to extract efficient features from
MRI. The sub-bands obtained during feature extraction are shown in Fig. 6 for a typical image.
Fig.6. (a) Normal brain image, (b, c) obtained sub band in One level 2D DWT
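
The feature extraction step can be sketched in MATLAB as follows (Wavelet, Image Processing and Statistics Toolboxes assumed; the demo image stands in for a preprocessed MRI slice). The eight statistics listed above are computed here from the approximation sub-band; the report does not state exactly which sub-bands feed each feature, so treat this as one plausible reading.

% One-level 2-D DWT feature extraction sketch (MATLAB).
I = im2double(imread('cameraman.tif'));   % stand-in for a filtered MRI slice
[cA, cH, cV, cD] = dwt2(I, 'db4');        % one-level 2-D DWT with Daub-4
a = cA(:);                                % approximation coefficients as a vector
features = [mean(a), std(a), kurtosis(a), mad(a), var(a), ...
            sqrt(mean(a.^2)), entropy(mat2gray(cA)), median(a)];
% mean, STD, kurtosis, MAD, variance, RMS, entropy and median, as in the text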

4.4 Ranking of features using t-test

Ranking features gives the most significant features in sequential order. The t-test criterion used
here is the absolute value of the two-sample t-statistic with a pooled variance estimate.
After ranking the eight features using the t-test class separability criterion, the most significant
features were selected. These features were submitted to different classifiers, and the results were
compared while changing the number of features.
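
A minimal MATLAB sketch of this ranking criterion, using ttest2 from the Statistics and Machine Learning Toolbox (whose default is the pooled-variance two-sample test), is given below with a synthetic feature matrix F and labels y standing in for the extracted features.

% t-test feature ranking sketch (MATLAB, Statistics Toolbox).
rng(1);
F = randn(40,8);  F(1:20,1) = F(1:20,1) + 2;   % synthetic: feature 1 separates
y = [ones(20,1); 2*ones(20,1)];                % class labels
scores = zeros(1, size(F,2));
for j = 1:size(F,2)
    [~, ~, ~, stats] = ttest2(F(y==1, j), F(y==2, j));  % pooled-variance t-test
    scores(j) = abs(stats.tstat);               % class-separability score
end
[~, order] = sort(scores, 'descend');           % most significant features first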

4.5 Classification

4.5.1 Artificial Neural Network classifier

An ANN is based on a large collection of neural units (artificial neurons). In an ANN, the
processing element, weight, summing function, activation function and output nodes correspond
respectively to the neuron, synapse, dendrite, cell body and axon of a biological neural network.

Figure 7 Artificial Neural Network classifier

An artificial neuron is a computational model inspired by natural neurons. Natural neurons
receive signals through synapses located on the dendrites or membrane of the neuron. When the
signals received are strong enough, the neuron is activated and emits a signal through the axon.
This signal might be sent to another synapse and might activate other neurons. The higher the
weight of an artificial neuron's input, the stronger the influence of the input that is multiplied by
it. Depending on the weights, the computation of the neuron will differ. Weights are assigned to
the arrows, which represent the information flow; they are multiplied by the values which pass
along each arrow, giving more or less strength to the signal they transmit. The neurons of this
network simply sum their inputs [1].
The output is some function y = f(v) of the weighted sum v.
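
As an illustrative sketch (MATLAB Deep Learning Toolbox, formerly the Neural Network Toolbox), a small feed-forward pattern-recognition network of this kind can be trained as follows; the data are synthetic placeholders for the eight wavelet features, not the hospital data.

% Feed-forward ANN classifier sketch (MATLAB, Deep Learning Toolbox).
rng(1);
X = [randn(8,40), randn(8,40) + 2];            % 8 features x 80 samples (columns)
T = [repmat([1;0],1,40), repmat([0;1],1,40)];  % one-hot class targets
net = patternnet(10);                          % one hidden layer of 10 neurons
net = train(net, X, T);                        % learn weights from the training set
Y = net(X);                                    % each output is y = f(v), v the weighted sum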

4.5.2 Linear classifier

A linear classifier achieves classification by making a decision based on the value of a linear
combination of the characteristics. Linear classifiers work well for practical problems such as
document classification, and more generally for problems with many variables (features),
reaching accuracy levels comparable to non-linear classifiers while taking less time to train and
use. Such an algorithm makes its classification decision based on a linear predictor function
combining a set of weights with the feature vector:

$$y=f(\mathbf{w}\cdot\mathbf{x})$$

where w is a real vector of weights and f is a function that converts the dot product of the two
vectors into the desired output. The weight vector w is learned from a set of labeled training
samples.
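
A minimal MATLAB sketch of such a linear classifier, using fitcdiscr (linear discriminant analysis, the linear method named in the abstract) from the Statistics and Machine Learning Toolbox on synthetic placeholder features:

% Linear discriminant classifier sketch (MATLAB, Statistics Toolbox).
rng(1);
F = [randn(40,4); randn(40,4) + 2];   % placeholder for the wavelet features
y = [ones(40,1); 2*ones(40,1)];       % class labels
lda  = fitcdiscr(F, y);               % linear discriminant classifier
yhat = predict(lda, F);               % decision from a linear function of x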

4.5.3 Support Vector Machine classifier

Support vector machines (SVMs) are a set of supervised learning methods used for binary
classification, regression and outlier detection. The SVM is strong because of its simple structure
and because it requires a smaller number of features. The SVM is a structural risk minimization
classifier algorithm derived from statistical learning theory. Support vector machines are used to
solve pattern classification and regression problems [8]. An SVM constructs a hyperplane or set
of hyperplanes in a high- or infinite-dimensional space, which can be used for classification,
regression, or other tasks [5]. Given a training dataset of n points, any hyperplane can be written
as the set of points x satisfying

$$\mathbf{w}\cdot\mathbf{x}-b=0$$

where w is the normal vector to the hyperplane. The parameter b determines the offset of the
hyperplane from the origin along the normal vector w.

Fig.8. Linear SVM.

If the training data are linearly separable, we can select two parallel hyperplanes that separate the
two classes of data, so that the distance between them is as large as possible. The region bounded
by these two hyperplanes is called the "margin", and the maximum-margin hyperplane is the
hyperplane that lies halfway between them. These hyperplanes can be described by the equations

$$\mathbf{w}\cdot\mathbf{x}-b=1 \qquad \text{and} \qquad \mathbf{w}\cdot\mathbf{x}-b=-1$$
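
A minimal MATLAB sketch of a linear SVM of this kind, using fitcsvm from the Statistics and Machine Learning Toolbox with synthetic placeholder features and 5-fold cross-validation (the report does not state its exact validation protocol):

% Linear SVM classifier sketch (MATLAB, Statistics Toolbox).
rng(1);
F = [randn(40,4); randn(40,4) + 2];   % placeholder for the ranked features
y = [ones(40,1); 2*ones(40,1)];
mdl = fitcsvm(F, y, 'KernelFunction', 'linear', 'Standardize', true);
cv  = crossval(mdl, 'KFold', 5);      % 5-fold cross-validation
acc = 1 - kfoldLoss(cv);              % estimated classification accuracy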
CHAPTER-5

Literature Review

W. Yu and Y. Xiaowei proposed the application of a decision tree for the classification of MRI
images of premature brain injury. To reduce background noise in the training set, an optimization
classification algorithm is used; features extracted from the images include the angular second
moment and the inverse difference moment. Based on the shortcomings of the classical ID3
algorithm in dealing with the continuous attributes of medical images, the new algorithm selects
the testing feature by comparing the information gain ratio and adds handling methods for filling
null values. It then discretizes the continuous attributes by dividing them into segments to classify
the object. The result shows that the new algorithm can accurately classify the MRI images.

From the literature survey, it can be concluded that various research works have been performed
on classifying MR brain images into normal and abnormal [1], [2]. Priyanka and Balwinder Singh
[3] presented a survey of well-known brain tumor detection algorithms that have been proposed
to detect the location of a tumor. The main concentration is on those techniques which use image
segmentation to detect brain tumors. Image segmentation is the process of partitioning a digital
image into multiple segments.

R. J. Ramteke and Khachane Monali Y. [4] proposed a method for automatic classification of
medical images into two classes, normal and abnormal, based on image features and automatic
abnormality detection. A K-Nearest Neighbour (K-NN) classifier is used for classifying the
images; K-NN is conceptually and computationally the simplest classification technique and
provides good classification accuracy.

Khushboo Singh and Satya Verma [5] proposed sophisticated classification techniques based on
Support Vector Machines (SVM), applied to brain image classification using derived features.

Shweta Jain [6] classified the type of tumor using an Artificial Neural Network (ANN) in MRI
images of different patients with the Astrocytomas type of brain tumor. The extraction of texture
features in the detected tumor has been achieved by using the Gray Level Co-occurrence Matrix
(GLCM).
Statistical texture analysis techniques are constantly being refined by researchers and the range of
applications is increasing [7], [8], [9]. Gray level co-occurrence matrix method is considered to be
one of the important texture analysis techniques used for obtaining statistical properties for further
classification, which is employed in this research work. Probabilistic Neural Network is found to
be superior over other conventional neural networks such as Support Vector Machine and Back
propagation Neural Network in terms of its accuracy in classifying brain tumors [10]. Hence a
wavelet and co-occurrence matrix method based texture feature extraction and Probabilistic Neural
Network for classification has been used in this method of brain tumor classification.
CHAPTER-6

RESULT

Results obtained with SVM

The confusion matrix of the SVM classifier on the BRATS dataset is shown in Table 3, where the
diagonal of the table shows the accurate responses for each tumor type.

The average recognition rate of the SVM is 78.61%. With the SVM, the normal class is classified
well, whereas the Astrocytomas class is confused with the Meningiomas class and vice versa.
Thus, it needs further attention.

Results obtained with k-NN

The confusion matrix of the k-NN classifier on the BRATS dataset is shown in Table 4, where the
diagonal of the table shows the accurate responses for each tumor type. The average recognition
rate of k-NN is 88.89%. With k-NN, the normal and Meningiomas classes are classified well,
whereas the Astrocytomas class is confused with the Meningiomas class at a rate of 33.33%.
Results obtained with Decision Tree

The confusion matrix of the Decision Tree classifier on the BRATS dataset is shown in Table 5,
where the diagonal of the table shows the accurate responses for each tumor type. The average
recognition rate of the DT is 81.48%. With the DT, the normal class is classified well, whereas
the Astrocytomas and Meningiomas classes are confused with each other. Thus, they need further
attention.

The quantitative evaluation results are tabulated in Table 6, which shows that the proposed
approach has higher precision, recall and F-measure for the k-NN classifier on the BRATS
dataset when compared to the SVM and DT classifiers. The overall performance of the proposed
method with the various classifiers on the BRATS dataset is shown in Fig. 9.
Fig 9: Overall accuracy obtained for the BRATS dataset with the SVM, k-NN and DT classifiers
CHAPTER-7

CONCLUSION & FUTURE WORK

This project presents an efficient method of classifying MR brain images into normal and
abnormal (tumor) classes using SVM, k-NN and Decision Tree classifiers. Discrete Wavelet
Transform (DWT) features are extracted from the brain MRI images; these signify the important
texture features of tumor tissue and give very promising results in classifying MR images. From
the experimental results, it is observed that k-NN shows a classification accuracy of 88.89%,
demonstrating that the proposed feature method performs well and achieves good recognition
results for tumor classification. It is also observed from the experiments that the system could not
distinguish the Astrocytomas class with high accuracy, which is of future interest.
REFERENCES

[1] G. S. Raghtate and S. S. Salankar, "Automatic Brain MRI Classification Using Modified Ant
Colony System and Neural Network Classifier," Proc. IEEE International Conference on
Computational Intelligence and Communication Networks, Jabalpur, pp. 1241-1246, 2015.

[2] W. Yu and Y. Xiaowei, "Application of decision tree for MRI images of premature brain injury
classification," Proc. IEEE 11th International Conference on Computer Science & Education,
Nagoya, pp. 792-795, 2016.

[3] B. Mohammad- Jafarzadeh, H. Kalbkhani and M. G. Shayesteh, "Spectral regression


discriminant analysis for brain MRI classification," Iranian Conference on Electrical Engineering,
Tehran, pp. 353-357 , 2015.

[4] R. M. Chen and C. M. Wang, "MRI Brain Tissue Classification Using Unsupervised
Optimized ExtenicsBased Methods," IEEE International Symposium on Computer, Consumer and
Control, Taiwan, pp. 502-505, 2016.

[5] Mubashir Ahmad, Mahmood ul-Hassan, Imran Shafi, Abdelrahman Osman,”Classification of


Tumors in Human Brain MRI using Wavelet and Support Vector Machine”, IOSR Journal of
Computer Engineering, vol.8,pp. 25-31,2012.

[6] [3] V. S. Takate, P. S. Vikhe, “Classification of MRI Brain Images using K-NN and k-means”,
International Journal on Advanced Computer Theory and Engineering, vol.1, pp. 2319–2526,
2012.

[7] Ms. Girja Sahu, Mr. Lalitkumar P. Bhaiya, “Classification of MRI Brain images using GLCM,
Neural Network, Fuzzy Logic & Genetic Algorithm”, International Journal on Recent and
Innovation Trends in Computing and Communication , vol.3, pp. 3498–3504,2015.

[8] Pravada Deshmukh, P. S. Malge, “Classification of Brain MRI using Wavelet Decomposition
and SVM”, International Journal of Computer Applications, vol.154, pp. 29–33, November 2016.
[9] S. Yazdani, R. Yusof, M. Pashna and A. Karimian, "A hybrid method for brain MRI
classification," Proc. 2015 10th Asian Control Conference (ASCC), Kota Kinabalu, pp.1-5, 2015.

[10] A. Kharrat, M. B. Halima and M. Ben Ayed, "MRI brain tumor classification using Support
Vector Machines and meta-heuristic method," Proc. IEEE 15th International Conference on
Intelligent Systems Design and Applications, Marrakech, pp.446-451, 2015.

[11] N. Abdullah, Lee Wee Chuen, U. K. Ngah and Khairul Azman Ahmad, "Improvement of
MRI brain classification using principal component analysis," Proc. 2011 IEEE International
Conference on Control System, Computing and Engineering, Penang, pp.557-561,2011.

[12] M. Saritha, K. Paul Joseph, Abraham T. Mathew.- “Classification of MRI brain images using
combined wavelet entropy based spider web plots and probabilistic neural network”. Pattern
Recognition Letters, vol. 1, pp. 2151–2156, 2013.

Appendix A

MATLAB
A.1 Introduction

MATLAB is a high-performance language for technical computing. It integrates
computation, visualization, and programming in an easy-to-use environment where problems and
solutions are expressed in familiar mathematical notation. MATLAB stands for matrix laboratory,
and was written originally to provide easy access to matrix software developed by the LINPACK
(linear system package) and EISPACK (eigensystem package) projects. MATLAB is therefore
built on a foundation of sophisticated matrix software in which the basic element is an array that
does not require pre-dimensioning, which allows many technical computing problems, especially
those with matrix and vector formulations, to be solved in a fraction of the time.
MATLAB features a family of application-specific solutions called toolboxes. Very
important to most users of MATLAB, toolboxes allow learning and applying specialized
technology. These are comprehensive collections of MATLAB functions (M-files) that extend the
MATLAB environment to solve particular classes of problems. Areas in which toolboxes are
available include signal processing, control systems, neural networks, fuzzy logic, wavelets,
simulation and many others.
Typical uses of MATLAB include: math and computation; algorithm development; data
acquisition; modeling, simulation and prototyping; data analysis, exploration and visualization;
scientific and engineering graphics; and application development, including graphical user
interface building.

A.2 Basic Building Blocks of MATLAB

The basic building block of MATLAB is the matrix. The fundamental data type is the array.
Vectors, scalars, real matrices and complex matrices are handled as specific classes of this basic
data type. The built-in functions are optimized for vector operations. No dimension statements are
required for vectors or arrays.

A.2.1 MATLAB Window

MATLAB provides several windows: the Command window, Workspace window,
Current Directory window, Command History window, Editor window, Graphics window and
Online-help window.

A.2.1.1 Command Window


The command window is where the user types MATLAB commands and expressions at
the prompt (>>) and where the output of those commands is displayed. It is opened when the
application program is launched. All commands, including user-written programs, are typed in this
window at the MATLAB prompt for execution.

A.2.1.2 Work Space Window

MATLAB defines the workspace as the set of variables that the user creates in a work
session. The workspace browser shows these variables and some information about them. Double
clicking on a variable in the workspace browser launches the Array Editor, which can be used to
obtain information.

A.2.1.3 Current Directory Window

The Current Directory tab shows the contents of the current directory, whose path is shown
in the current directory window. For example, in the Windows operating system the path might be
as follows: C:\MATLAB\Work, indicating that the directory “work” is a subdirectory of the main
directory “MATLAB”, which is installed in drive C. Clicking on the arrow in the current directory
window shows a list of recently used paths. MATLAB uses a search path to find M-files and other
MATLAB-related files. Any file run in MATLAB must reside in the current directory or in a
directory that is on the search path.

A.2.1.4 Command History Window

The Command History Window contains a record of the commands a user has entered in
the command window, including both current and previous MATLAB sessions. Previously entered
MATLAB commands can be selected and re-executed from the command history window by right
clicking on a command or sequence of commands. This launches a menu offering various options
in addition to executing the commands, which is a useful feature when experimenting with various
commands in a work session.

A.2.1.5 Editor Window

The MATLAB editor is both a text editor specialized for creating M-files and a graphical
MATLAB debugger. The editor can appear in a window by itself, or it can be a sub window in the
desktop. In this window one can write, edit, create and save programs in files called M-files.
The MATLAB editor window has numerous pull-down menus for tasks such as saving,
viewing, and debugging files. Because it performs some simple checks and also uses color to
differentiate between various elements of code, this text editor is recommended as the tool of
choice for writing and editing M-functions.

A.2.1.6 Graphics or Figure Window

The output of all graphic commands typed in the command window is seen in this window.

A.2.1.7 Online Help Window

MATLAB provides online help for all its built-in functions and programming language
constructs. The principal way to get help online is to use the MATLAB help browser, opened as a
separate window either by clicking on the question mark symbol (?) on the desktop toolbar, or by
typing helpbrowser at the prompt in the command window. The Help Browser is a web browser
integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML)
documents. The Help Browser consists of two panes: the help navigator pane, used to find
information, and the display pane, used to view the information. Self-explanatory tabs other than
the navigator pane are used to perform a search.

A.3 MATLAB Files

MATLAB has two types of files for storing information: M-files and MAT-files.

A.3.1 M-Files

These are standard ASCII text files with a .m extension to the file name. Users can create
their own M-files, which are text files containing MATLAB code, with the MATLAB editor or
another text editor: type the same statements that would be entered at the MATLAB command
line and save the file under a name that ends in .m. There are two types of M-files:

1. Script Files
A script file is an M-file with a set of MATLAB commands in it and is executed by typing
the name of the file on the command line. These files operate on the variables currently present
in the workspace.

2. Function Files

A function file is also an M-file, except that the variables in a function file are all local.
Files of this type begin with a function definition line.
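As a minimal illustration (the file name and contents here are hypothetical), a function file
circlearea.m might contain:

    function a = circlearea(r)
    % CIRCLEAREA returns the area of a circle of radius r
    a = pi * r.^2;

and is then called from the command line, e.g. >> a = circlearea(2). A script file would instead
contain the bare statement a = pi * 2^2 and operate directly on the workspace variables.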

A.3.2 MAT-Files

These are binary data files with a .mat extension to the file name that are created by
MATLAB when data is saved. The data is written in a special format that only MATLAB can read.
These files are loaded into MATLAB with the ‘load’ command.
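For example (variable names hypothetical):

    >> A = magic(4); B = rand(3);
    >> save mydata.mat A B      % writes A and B to mydata.mat in the current directory
    >> clear                    % removes all variables from the workspace
    >> load mydata.mat          % restores A and B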

A.4 The MATLAB System:

The MATLAB system consists of five main parts:

A.4.1 Development Environment:

This is the set of tools and facilities that help you use MATLAB functions and files. Many
of these tools are graphical user interfaces. It includes the MATLAB desktop and Command
Window, a command history, an editor and debugger, and browsers for viewing help, the
workspace, files, and the search path.

A.4.2 The MATLAB Mathematical Function Library:

This is a vast collection of computational algorithms ranging from elementary functions like
sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse,
matrix eigenvalues, Bessel functions, and fast Fourier transforms.
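For instance:

    >> v = [3 1 2];
    >> s = sum(v)          % elementary function: returns 6
    >> e = eig(magic(3))   % more sophisticated: eigenvalues of a 3x3 magic square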

A.4.3 The MATLAB Language:


This is a high-level matrix/array language with control flow statements, functions, data
structures, input/output, and object-oriented programming features. It allows both "programming
in the small" to rapidly create quick and dirty throw-away programs, and "programming in the
large" to create complete large and complex application programs.

A.4.4 Graphics:

MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as
annotating and printing these graphs. It includes high-level functions for two-dimensional and
three-dimensional data visualization, image processing, animation, and presentation graphics. It
also includes low-level functions that allow you to fully customize the appearance of graphics as
well as to build complete graphical user interfaces for your MATLAB applications.

A.4.5 The MATLAB Application Program Interface (API):

This is a library that allows you to write C and FORTRAN programs that interact with
MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling
MATLAB as a computational engine, and for reading and writing MAT-files.

A.5 SOME BASIC COMMANDS:

pwd prints working directory

demo demonstrates what is possible in MATLAB

who lists all of the variables in your MATLAB workspace

whos lists the variables and describes their matrix size

clear erases variables and functions from memory

clear x erases the matrix 'x' from your workspace

close by itself, closes the current figure window

figure creates an empty figure window


hold on holds the current plot and all axis properties so that subsequent graphing

commands add to the existing graph

hold off sets the next plot property of the current axes to "replace"

find find indices of nonzero elements e.g.:

d = find(x>100) returns the indices of the vector x that are greater than 100

break terminate execution of m-file or WHILE or FOR loop

for repeats statements a specific number of times; the general form of a FOR

statement is:

FOR variable = expr, statement, ..., statement END

for n = 1:cc/c
    magn(n,1) = nanmean(a((n-1)*c+1:n*c, 1));
end

diff difference and approximate derivative e.g.:

DIFF(X) for a vector X, is [X(2)-X(1) X(3)-X(2) ... X(n)-X(n-1)].

NaN the arithmetic representation for Not-a-Number, a NaN is obtained as a

result of mathematically undefined operations like 0.0/0.0

Inf the arithmetic representation for positive infinity; infinity is also produced

by operations like dividing by zero, e.g. 1.0/0.0, or from overflow, e.g. exp(1000).

save saves all the matrices defined in the current session into the file,

matlab.mat, located in the current working directory

load loads contents of matlab.mat into current workspace


save filename x y z saves the matrices x, y and z into the file titled filename.mat

save filename.dat x y z -ascii saves the matrices x, y and z into the file titled filename.dat in ASCII format

load filename loads the contents of filename into current workspace; the file can

be a binary (.mat) file

load filename.dat loads the contents of filename.dat into the variable filename

xlabel(' ') allows you to label the x-axis

ylabel(' ') allows you to label the y-axis

title(' ') allows you to give a title to the plot

subplot() allows you to create multiple plots in the same window

A.6 SOME BASIC PLOT COMMANDS:

Kinds of plots:

plot(x,y) creates a Cartesian plot of the vectors x & y

plot(y) creates a plot of y vs. the numerical values of the elements in the y-vector

semilogx(x,y) plots log(x) vs y

semilogy(x,y) plots x vs log(y)

loglog(x,y) plots log(x) vs log(y)

polar(theta,r) creates a polar plot of the vectors r & theta where theta is in radians

bar(x) creates a bar graph of the vector x. (Note also the command stairs(x))

bar(x, y) creates a bar-graph of the elements of the vector y, locating the bars
according to the vector elements of 'x'

Plot description:

grid creates a grid on the graphics plot

title('text') places a title at top of graphics plot

xlabel('text') writes 'text' beneath the x-axis of a plot

ylabel('text') writes 'text' beside the y-axis of a plot

text(x,y,'text') writes 'text' at the location (x,y)

text(x,y,'text','sc') writes 'text' at point x,y assuming lower left corner is (0,0)

and upper right corner is (1,1)

axis([xmin xmax ymin ymax]) sets scaling for the x- and y-axes on the current plot

A.7 ALGEBRAIC OPERATIONS IN MATLAB:

Scalar Calculations:

+ Addition

- Subtraction

* Multiplication

/ Right division (a/b means a ÷ b)

\ left division (a\b means b ÷ a)

^ Exponentiation

For example, 3*4 executed in MATLAB gives ans = 12

4/5 gives ans = 0.8000


Array products: Recall that addition and subtraction of matrices involve addition
or subtraction of the individual elements of the matrices. Sometimes it is desired to simply multiply
or divide each element of a matrix by the corresponding element of another matrix; these are called
'array operations'.

Array or element-by-element operations are executed when the operator is preceded by a '.'
(period):

a .* b multiplies each element of a by the respective element of b

a ./ b divides each element of a by the respective element of b

a .\ b divides each element of b by the respective element of a

a .^ b raises each element of a to the power given by the respective element of b
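For example:

    >> a = [1 2 3]; b = [4 5 6];
    >> a .* b       % element-by-element product: [4 10 18]
    >> a ./ b       % element-by-element division: [0.25 0.4 0.5]
    >> a .^ 2       % squares each element: [1 4 9]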

A.8 MATLAB WORKING ENVIRONMENT:

A.8.1 MATLAB DESKTOP

The MATLAB desktop is the main MATLAB application window. The desktop contains five sub-
windows: the command window, the workspace browser, the current directory window, the
command history window, and one or more figure windows, which are shown only when the user
displays a graphic.

The command window is where the user types MATLAB commands and expressions at the
prompt (>>) and where the output of those commands is displayed. MATLAB defines the
workspace as the set of variables that the user creates in a work session.

The workspace browser shows these variables and some information about them. Double
clicking on a variable in the workspace browser launches the Array Editor, which can be used to
obtain information and in some instances edit certain properties of the variable.

The Current Directory tab above the workspace tab shows the contents of the current
directory, whose path is shown in the current directory window. For example, in the Windows
operating system the path might be as follows: C:\MATLAB\Work, indicating that the directory
“work” is a subdirectory of the main directory “MATLAB”, which is installed in drive C.
Clicking on the arrow in the current directory window shows a list of recently used paths.
Clicking on the button to the right of the window allows the user to change the current directory.

MATLAB uses a search path to find M-files and other MATLAB-related files, which are
organized in directories in the computer file system. Any file run in MATLAB must reside in the
current directory or in a directory that is on the search path. By default, the files supplied with
MATLAB and MathWorks toolboxes are included in the search path. The easiest way to see
which directories are on the search path, or to add or modify the search path, is to select Set Path
from the File menu of the desktop, and then use the Set Path dialog box. It is good practice to add
any commonly used directories to the search path to avoid repeatedly having to change the current
directory.

The Command History Window contains a record of the commands a user has entered in the
command window, including both current and previous MATLAB sessions. Previously entered
MATLAB commands can be selected and re-executed from the command history window by right
clicking on a command or sequence of commands.

This action launches a menu from which various options can be selected in addition to
executing the commands. This is a useful feature when experimenting with various commands
in a work session.

A.8.2 Using the MATLAB Editor to create M-Files:

The MATLAB editor is both a text editor specialized for creating M-files and a graphical
MATLAB debugger. The editor can appear in a window by itself, or it can be a sub window in the
desktop. M-files are denoted by the extension .m, as in pixelup.m.
The MATLAB editor window has numerous pull-down menus for tasks such as saving,
viewing, and debugging files. Because it performs some simple checks and also uses color to
differentiate between various elements of code, this text editor is recommended as the tool of
choice for writing and editing M-functions.

To open the editor, type edit at the prompt; typing edit filename opens the M-file
filename.m in an editor window, ready for editing. As noted earlier, the file must be in the current
directory, or in a directory in the search path.

A.8.3 Getting Help:

The principal way to get help online is to use the MATLAB help browser, opened as a
separate window either by clicking on the question mark symbol (?) on the desktop toolbar, or by
typing helpbrowser at the prompt in the command window. The Help Browser is a web browser
integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML)
documents. The Help Browser consists of two panes: the help navigator pane, used to find
information, and the display pane, used to view the information. Self-explanatory tabs other than
the navigator pane are used to perform a search.
Appendix B

INTRODUCTION TO DIGITAL IMAGE PROCESSING

B.1 What is DIP?

An image may be defined as a two-dimensional function f(x, y), where x and y are
spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the
intensity or gray level of the image at that point. When x, y and the amplitude values of f are
all finite discrete quantities, we call the image a digital image. The field of DIP refers to
processing digital images by means of a digital computer. A digital image is composed of a finite
number of elements, each of which has a particular location and value. These elements are called
pixels.

Vision is the most advanced of our senses, so it is not surprising that images play the
single most important role in human perception. However, unlike humans, who are limited
to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the
entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images
generated by sources that humans are not accustomed to associating with images.

There is no general agreement among authors regarding where image processing stops
and other related areas, such as image analysis and computer vision, start. Sometimes a distinction
is made by defining image processing as a discipline in which both the input and output of a
process are images. This is a limiting and somewhat artificial boundary. The area of image
analysis (image understanding) is in between image processing and computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end
to complete vision at the other. However, one useful paradigm is to consider three types of
computerized processes in this continuum: low-, mid-, and high-level processes. A low-level
process involves primitive operations such as image preprocessing to reduce noise, contrast
enhancement and image sharpening. A low-level process is characterized by the fact that both
its inputs and outputs are images. A mid-level process on images involves tasks such as
segmentation, description of objects to reduce them to a form suitable for computer
processing, and classification of individual objects. A mid-level process is characterized by the
fact that its inputs generally are images but its outputs are attributes extracted from those
images. Finally, higher-level processing involves “making sense” of an ensemble of
recognized objects, as in image analysis, and, at the far end of the continuum, performing the
cognitive functions normally associated with human vision.

Digital image processing, as already defined, is used successfully in a broad range of
areas of exceptional social and economic value.

B.2 What is an image?

An image is represented as a two dimensional function f(x, y) where x and y are spatial
co-ordinates and the amplitude of ‘f’ at any pair of coordinates (x, y) is called the intensity of
the image at that point.

Gray scale image:

A grayscale image is a function I(x, y) of the two spatial coordinates of the image
plane.

I(x, y) is the intensity of the image at the point (x, y) on the image plane.

I(x, y) takes non-negative values; assuming the image is bounded by a rectangle
[0, a] × [0, b], we have I: [0, a] × [0, b] → [0, ∞).

Color image:

A color image can be represented by three functions: R(x, y) for red, G(x, y) for green and
B(x, y) for blue.

An image may be continuous with respect to the x and y coordinates and also in
amplitude. Converting such an image to digital form requires that the coordinates as well as
the amplitude be digitized. Digitizing the coordinate values is called sampling. Digitizing
the amplitude values is called quantization.

B.3 Coordinate convention:

The result of sampling and quantization is a matrix of real numbers. We use two
principal ways to represent digital images. Assume that an image f(x, y) is sampled so that
the resulting image has M rows and N columns. We say that the image is of size M x N. The
values of the coordinates (x, y) are discrete quantities. For notational clarity and
convenience, we use integer values for these discrete coordinates. In many image processing
books, the image origin is defined to be at (x, y) = (0, 0). The next coordinate values along
the first row of the image are (x, y) = (0, 1). It is important to keep in mind that the notation
(0, 1) is used to signify the second sample along the first row. It does not mean that these are
the actual values of physical coordinates when the image was sampled. The following figure
shows the coordinate convention. Note that x ranges from 0 to M-1 and y from 0 to N-1 in
integer increments.

The coordinate convention used in the toolbox to denote arrays is different from the
preceding paragraph in two minor ways. First, instead of using (x, y), the toolbox uses the
notation (r, c) to indicate rows and columns. Note, however, that the order of coordinates
is the same as the order discussed in the previous paragraph, in the sense that the first
element of a coordinate tuple, (a, b), refers to a row and the second to a column. The other
difference is that the origin of the coordinate system is at (r, c) = (1, 1); thus, r ranges from 1
to M and c from 1 to N in integer increments. IPT documentation refers to these as pixel
coordinates. Less frequently, the toolbox also employs another coordinate convention, called
spatial coordinates, which uses x to refer to columns and y to refer to rows. This is the
opposite of our use of the variables x and y.

B.4 Images as Matrices:

The preceding discussion leads to the following representation for a digitized image
function:

f(x, y) = [ f(0, 0)      f(0, 1)      ...   f(0, N-1)
            f(1, 0)      f(1, 1)      ...   f(1, N-1)
            ...          ...                ...
            f(M-1, 0)    f(M-1, 1)    ...   f(M-1, N-1) ]

The right side of this equation is a digital image by definition. Each element of
this array is called an image element, picture element, pixel or pel. The terms image and pixel
are used throughout the rest of our discussions to denote a digital image and its elements.

A digital image can be represented naturally as a MATLAB matrix:

f = [ f(1, 1)    f(1, 2)    ...   f(1, N)
      f(2, 1)    f(2, 2)    ...   f(2, N)
      ...        ...              ...
      f(M, 1)    f(M, 2)    ...   f(M, N) ]

where f(1, 1) = f(0, 0) (note the use of a monospace font to denote MATLAB
quantities). Clearly, the two representations are identical, except for the shift in origin. The
notation f(p, q) denotes the element located in row p and column q. For example, f(6, 2)
is the element in the sixth row and second column of the matrix f. Typically we use the letters
M and N, respectively, to denote the number of rows and columns in a matrix. A 1xN matrix is
called a row vector, whereas an Mx1 matrix is called a column vector. A 1x1 matrix is a scalar.

Matrices in MATLAB are stored in variables with names such as A, a, RGB, real_array and
so on. Variables must begin with a letter and contain only letters, numerals and underscores.
As noted in the previous paragraph, all MATLAB quantities are written using monospace
characters. We use conventional Roman italic notation, such as f(x, y), for mathematical
expressions.

B.5 Reading Images:

Images are read into the MATLAB environment using function imread whose syntax is

imread(‘filename’)

Format name   Description                          Recognized extensions

TIFF          Tagged Image File Format             .tif, .tiff

JPEG          Joint Photographic Experts Group     .jpg, .jpeg

GIF           Graphics Interchange Format          .gif

BMP           Windows Bitmap                       .bmp

PNG           Portable Network Graphics            .png

XWD           X Window Dump                        .xwd

Here filename is a string containing the complete name of the image file (including any
applicable extension). For example, the command line

>> f = imread('chestxray.jpg');


reads the JPEG image (see the table above) chestxray into image array f. Note the use of single
quotes (') to delimit the string filename. The semicolon at the end of a command line is used by
MATLAB for suppressing output. If a semicolon is not included, MATLAB displays the results
of the operation(s) specified in that line. The prompt symbol (>>) designates the beginning
of a command line, as it appears in the MATLAB command window.

When, as in the preceding command line, no path is included in filename, imread reads
the file from the current directory and, if that fails, it tries to find the file in the MATLAB search
path. The simplest way to read an image from a specified directory is to include a full or
relative path to that directory in filename.

For example,

>> f = imread('D:\myimages\chestxray.jpg');

reads the image from a folder called myimages on the D: drive, whereas

>> f = imread('.\myimages\chestxray.jpg');

reads the image from the myimages subdirectory of the current working directory. The current
directory window on the MATLAB desktop toolbar displays MATLAB's current working
directory and provides a simple, manual way to change it. The table above lists some of the most
popular image/graphics formats supported by imread and imwrite.

Function size gives the row and column dimensions of an image:

>> size(f)

ans =
        1024   1024

This function is particularly useful in programming when used in the following form to
determine automatically the size of an image:

>> [M, N] = size(f);

This syntax returns the number of rows (M) and columns (N) in the image.

The whos function displays additional information about an array. For instance, the
statement

>> whos f

gives

Name      Size           Bytes      Class

f         1024x1024      1048576    uint8 array

Grand total is 1048576 elements using 1048576 bytes

The uint8 entry shown refers to one of several MATLAB data classes. A semicolon at the
end of a whos line has no effect, so normally one is not used.

B.6 Displaying Images:

Images are displayed on the MATLAB desktop using function imshow, which has the
basic syntax:

imshow(f, g)

where f is an image array, and g is the number of intensity levels used to display
it. If g is omitted, it defaults to 256 levels. Using the syntax

imshow(f, [low high])

displays as black all values less than or equal to low and as white all values
greater than or equal to high. The values in between are displayed as intermediate intensity
values using the default number of levels. Finally, the syntax

imshow(f, [ ])

sets variable low to the minimum value of array f and high to its maximum value.
This form of imshow is useful for displaying images that have a low dynamic range or that
have positive and negative values.

Function pixval is used frequently to display the intensity values of individual pixels
interactively. This function displays a cursor overlaid on an image. As the cursor is moved
over the image with the mouse, the coordinates of the cursor position and the corresponding
intensity values are shown on a display that appears below the figure window. When
working with color images, the coordinates as well as the red, green and blue components
are displayed. If the left button on the mouse is clicked and then held pressed, pixval displays
the Euclidean distance between the initial and current cursor locations.

The syntax form of interest here is pixval, which shows the cursor on the last image
displayed. Clicking the X button on the cursor window turns it off.

The following statements read from disk an image called rose_512.tif, extract basic
information about the image, and display it using imshow:

>> f = imread('rose_512.tif');

>> whos f

Name      Size         Bytes     Class

f         512x512      262144    uint8 array

Grand total is 262144 elements using 262144 bytes

>> imshow(f)

A semicolon at the end of an imshow line has no effect, so normally one is not used.
If another image, g, is displayed using imshow, MATLAB replaces the image on the screen with
the new image. To keep the first image and output a second image, we use function figure as
follows:

>> figure, imshow(g)

Using the statement

>> imshow(f), figure, imshow(g)

displays both images.

Note that more than one command can be written on a line, as long as the different
commands are properly delimited by commas or semicolons. As mentioned earlier, a
semicolon is used whenever it is desired to suppress screen outputs from a command line.

Suppose that we have just read an image h and find that using imshow produces an image
with a low dynamic range, which can be remedied for display purposes by using the statement

>> imshow(h, [ ])

B.7 Writing Images:

Images are written to disk using function imwrite, which has the following basic syntax:

imwrite(f, 'filename')

With this syntax, the string contained in filename must include a recognized file format
extension. For example, the following command writes f to a TIFF file named
patient10_run1.tif:

>> imwrite(f, 'patient10_run1.tif')

Alternatively, the desired format can be specified explicitly with a third input argument:

>> imwrite(f, 'patient10_run1', 'tif')

If filename contains no path information, then imwrite saves the file in the current
working directory.

The imwrite function can have other parameters, depending on the file format selected.
Most of the work in the following deals either with JPEG or TIFF images, so we focus attention
here on these two formats.

A more general imwrite syntax, applicable only to JPEG images, is

imwrite(f, 'filename.jpg', 'quality', q)

where q is an integer between 0 and 100 (the lower the number, the higher the degradation due
to JPEG compression).

For example, for q = 25 the applicable syntax is

>> imwrite(f, 'bubbles25.jpg', 'quality', 25)

The image for q = 15 has false contouring that is barely visible, but this effect becomes
quite pronounced for q = 5 and q = 0. Thus, an acceptable solution with some margin for error
is to compress the images with q = 25. In order to get an idea of the compression achieved and
to obtain other image file details, we can use function imfinfo, which has the syntax

imfinfo filename

Here filename is the complete file name of the image stored on disk.

For example,

>> imfinfo bubbles25.jpg

outputs the following information (note that some fields contain no information in this
case):

Filename: ‘bubbles25.jpg’
FileModDate: ’04-jan-2003 12:31:26’

FileSize: 13849

Format: ‘jpg’

Format Version: ‘‘

Width: 714

Height: 682

Bit Depth: 8

Color Depth: ‘grayscale’

Format Signature: ‘ ‘

Comment: { }

where file size is in bytes. The number of bytes in the original image is computed simply
by multiplying width by height by bit depth and dividing the result by 8. The result is
486948. Dividing this by the file size gives the compression ratio: (486948/13849) = 35.16. This
compression ratio was achieved while maintaining image quality consistent with the
requirements of the application. In addition to the obvious advantages in storage space, this
reduction allows the transmission of approximately 35 times the amount of uncompressed
data per unit time.

The information fields displayed by imfinfo can be captured into a so-called structure
variable that can be used for subsequent computations. Using the preceding example and
assigning the name K to the structure variable, we use the syntax

>> K = imfinfo('bubbles25.jpg');


to store into variable K all the information generated by command imfinfo. The
information generated by imfinfo is appended to the structure variable by means of fields,
separated from K by a dot. For example, the image height and width are now stored in
structure fields K.Height and K.Width.

As an illustration, consider the following use of structure variable K to compute the
compression ratio for bubbles25.jpg:

>> K = imfinfo('bubbles25.jpg');

>> image_bytes = K.Width * K.Height * K.BitDepth / 8;

>> compressed_bytes = K.FileSize;

>> compression_ratio = image_bytes / compressed_bytes

compression_ratio = 35.162

Note that imfinfo was used in two different ways. The first was to type imfinfo
bubbles25.jpg at the prompt, which resulted in the information being displayed on the
screen. The second was to type K = imfinfo('bubbles25.jpg'), which resulted in the
information generated by imfinfo being stored in K. These two different ways of calling
imfinfo are an example of command-function duality, an important concept that is explained
in more detail in the MATLAB online documentation.

A more general imwrite syntax, applicable only to TIFF images, has the form

imwrite(g, 'filename.tif', 'compression', 'parameter', 'resolution', [colres rowres])

where 'parameter' can have one of the following principal values: 'none' indicates no
compression; 'packbits' indicates packbits compression (the default for nonbinary images);
and 'ccitt' indicates ccitt compression (the default for binary images). The 1x2 array [colres
rowres] contains two integers that give the column resolution and row resolution in dots per
unit. For example, if the image dimensions are in inches, colres is the number of
dots (pixels) per inch (dpi) in the vertical direction, and similarly for rowres in the
horizontal direction. Specifying the resolution by a single scalar, res, is equivalent to writing
[res res].

>> imwrite(f, 'sf.tif', 'compression', 'none', 'resolution', [300 300])

The values of the vector [colres rowres] were determined by multiplying 200 dpi by the ratio
2.25/1.5, which gives 300 dpi. Rather than do the computation manually, we could write

>> res = round(200*2.25/1.5);

>> imwrite(f, 'sf.tif', 'compression', 'none', 'resolution', res)

The function round rounds its argument to the nearest integer. It is important to note
that the number of pixels was not changed by these commands; only the scale of the image
changed. The original 450x450 image at 200 dpi is of size 2.25x2.25 inches. The new 300 dpi
image is identical, except that its 450x450 pixels are distributed over a 1.5x1.5 inch area.
Processes such as this are useful for controlling the size of an image in a printed document
without sacrificing resolution.

Often it is necessary to export images to disk the way they appear on the MATLAB
desktop. This is especially true with plots. The contents of a figure window can be exported
to disk in two ways. The first is to use the File pull-down menu in the figure window and
then choose Export. With this option the user can select a location, filename, and format.
More control over export parameters is obtained by using the print command:

print -fno -dfileformat -rresno filename

where no refers to the figure number of the figure window of interest, fileformat refers to
one of the file formats in the table above, resno is the resolution in dpi, and filename is the name
we wish to assign the file.
If we simply type print at the prompt, MATLAB prints (to the default printer) the
contents of the last figure window displayed. It is also possible to specify other options with
print, such as a specific printing device.
