
CHAPTER 3

SOFTWARE IMPLEMENTATION
MATLAB is used in every facet of computational mathematics. The following are some of the mathematical calculations in which it is most commonly used:
• Dealing with Matrices and Arrays
• 2-D and 3-D Plotting and Graphics
• Linear Algebra
• Algebraic Equations
• Non-linear Functions
• Statistics
• Data Analysis
• Calculus and Differential Equations
• Numerical Calculations
• Integration
• Transforms
• Curve Fitting
• Various other special functions

FEATURES OF MATLAB
• It is a high-level language for numerical computation, visualization and application development.
• It also provides an interactive environment for iterative exploration, design and problem solving.
• It provides a vast library of mathematical functions for linear algebra, statistics, Fourier analysis, filtering, optimization, numerical integration and solving ordinary differential equations.
• It provides built-in graphics for visualizing data and tools for creating custom plots.
• MATLAB's programming interface gives development tools for improving code quality and maintainability and for maximizing performance.
• It provides tools for building applications with custom graphical interfaces.
• It provides functions for integrating MATLAB-based algorithms with external applications and languages such as C, Java, .NET and Microsoft Excel.
USES OF MATLAB
MATLAB is widely used as a computational tool in science and engineering, encompassing the fields of physics, chemistry, math and all engineering streams. It is used in a range of applications including:
• Signal Processing and Communications
• Image and Video Processing
• Control Systems
• Test and Measurement
• Computational Finance
• Computational Biology

MATLAB 2013a

MATLAB Product Description

MATLAB is a high-level language and interactive environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. You can use MATLAB for a range of applications, including signal processing and communications, image and video processing, control systems, test and measurement, computational finance, and computational biology. More than a million engineers and scientists in industry and academia use MATLAB, the language of technical computing.

Desktop Basics

When you start MATLAB, the desktop appears in its default layout.

The desktop includes these panels:

i) Current Folder — Access your files.


ii) Command Window — Enter commands at the command line, indicated by the prompt
(>>).
iii) Workspace — Explore data that you create or import from files.
iv) Command History — View or rerun commands that you entered at the command line.

As you work in MATLAB, you issue commands that create variables and call
functions. For example, create a variable named a by typing this statement at the command
line:
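    a = 1

MATLAB adds the variable a to the workspace and displays the result in the Command Window. (The exact statement is not shown in this copy; a = 1 is just one typical assignment, and any other assignment behaves the same way.)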

(Figure: the MATLAB desktop window.)

MATLAB INTRODUCTION

MATLAB (matrix laboratory) is a numerical computing environment and fourth-generation programming language. Developed by MathWorks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, and FORTRAN.

Although MATLAB is intended primarily for numerical computing, an optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic computing capabilities. An additional package, Simulink, adds graphical multi-domain simulation and Model-Based Design for dynamic and embedded systems.

In 2004, MATLAB had around one million users across industry and academia. MATLAB users come from various backgrounds of engineering, science, and economics. MATLAB is widely used in academic and research institutions as well as industrial enterprises.
Matlab and numerical computing. This chapter introduces Matlab by presenting several programs that investigate elementary, but interesting, mathematical problems. If you already have some experience programming in another language, we hope that you can see how Matlab works by simply studying these programs.
If you want a more comprehensive introduction, there are many resources available.
You can select the Help tab in the tool strip atop the Matlab command window, then select
Documentation, MATLAB and Getting Started. A MathWorks Website, MATLAB Tutorials
and Learning Resources, offers a number of introductory videos and a PDF manual entitled
Getting Started with MATLAB.
An introduction to MATLAB through a collection of mathematical and computational projects is provided by Moler's free online Experiments with MATLAB.
MATLAB is widely used in all areas of applied mathematics, in education and research at universities, and in industry. MATLAB stands for MATrix LABoratory and the software is built up around vectors and matrices. This makes the software particularly useful for linear algebra, but MATLAB is also a great tool for solving algebraic and differential equations and for numerical integration. MATLAB has powerful graphic tools and can produce nice pictures in both 2D and 3D. It is also a programming language, and is one of the easiest programming languages for writing mathematical programs. MATLAB also has some toolboxes useful for signal processing, image processing, optimization, etc.
Goal
The goal of this tutorial is to give a brief introduction to the mathematical software
MATLAB. After completing the worksheet you should know how to start MATLAB, how to
use the elementary functions in MATLAB and how to use MATLAB to plot functions.
The MATLAB environment

From now on an instruction to press a certain key will be denoted by <>, e.g.,
pressing the enter key will be denoted as <enter>. Commands that should be typed at the
prompt will be written in courier font.

The MATLAB environment (on most computer systems) consists of menus, buttons
and a writing area similar to an ordinary word processor. There are plenty of help functions
that you are encouraged to use. The writing area that you will see when you start MATLAB
is called the command window. In this window you give the commands to MATLAB. For
example, when you want to run a program you have written for MATLAB you start the
program in the command window by typing its name at the prompt. The command window is
also useful if you just want to use MATLAB as a scientific calculator or as a graphing tool. If
you write longer programs, you will find it more convenient to write the program code in a
separate window, and then run it in the command window (discussed in Intro to
programming).

In the command window you will see a prompt that looks like >>. You type your
commands immediately after this prompt. Once you have typed the command you wish
MATLAB to perform, press <enter>. If you want to interrupt a command that MATLAB is
running, type <ctrl> + <c>.
The commands you type in the command window are stored by MATLAB and can be
viewed in the Command History window. To repeat a command you have already used, you
can simply double-click on the command in the history window, or use the <up arrow> at the
command prompt to iterate through the commands you have used until you reach the
command you desire to repeat.
MATLAB is a numerical computing environment and programming language. Maintained by MathWorks, MATLAB allows easy matrix manipulation, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs in other languages. Although it is numeric only, an optional toolbox uses the MuPAD symbolic engine, allowing access to computer algebra capabilities. An additional package, Simulink, adds graphical multi-domain simulation and Model-Based Design for dynamic and embedded systems.
MATLAB (meaning "matrix laboratory") was invented in the late 1970s by Cleve Moler, then chairman of the computer science department at the University of New Mexico. He designed it to give his students access to LINPACK and EISPACK without having to learn Fortran. It soon spread to other universities and found a strong audience within the applied mathematics community. Jack Little, an engineer, was exposed to it during a visit Moler made to Stanford University in 1983. Recognizing its commercial potential, he joined with Moler and Steve Bangert. They rewrote MATLAB in C and founded The MathWorks in 1984 to continue its development. These rewritten libraries were known as JACKPAC. In 2000, MATLAB was rewritten to use a newer set of libraries for matrix manipulation, LAPACK.
MATLAB was first adopted by control design engineers, Little's specialty, but quickly spread to many other domains. It is now also used in education, in particular for teaching linear algebra and numerical analysis, and is popular amongst scientists involved with image processing.
MATLAB is built around the MATLAB language, sometimes called M-code or simply M. Code can be typed directly at the prompt in the Command Window, one of the elements of the MATLAB Desktop, or it can be saved in a text file, typically using the MATLAB Editor, as a script or encapsulated into a function, extending the commands available.
Indexing is one-based, which is the usual convention for matrices in mathematics. This is atypical for programming languages, whose arrays more often start with zero.
A square identity matrix of size n can be generated using the function eye, and matrices of any size with zeros or ones can be generated with the functions zeros and ones, respectively.
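For instance, at the command prompt:

    I = eye(3)        % 3-by-3 identity matrix
    Z = zeros(2,3)    % 2-by-3 matrix of zeros
    W = ones(2)       % 2-by-2 matrix of ones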
MATLAB lacks a package system like those found in modern languages such as Java and Python, where classes can be resolved unambiguously (e.g., Java's java.* packages). In MATLAB, all functions share the global namespace, and precedence among functions with the same name is determined by the order of directories on the search path.
MATLAB can execute a sequence of statements stored on disk files. Such files are called
"M-files" because they must have the file type of ".m" as the last part of their filename. Much
of your work with MATLAB will be in creating and refining M-files.
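As a small illustration, the two lines below could be saved in a file named, say, circlearea.m (a hypothetical name) and then run by typing circlearea at the prompt:

    % circlearea.m - compute the area of a circle of radius 2
    r = 2;
    A = pi * r^2      % no semicolon, so the result is displayed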


a. Variables

Variables are defined using the assignment operator, =. MATLAB is a weakly typed programming language because types are implicitly converted. It is also a dynamically typed language because variables can be assigned without declaring their type, except if they are to be treated as symbolic objects, and their type can change. Values can come from constants, from computation involving values of other variables, or from the output of a function.
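A minimal sketch of this dynamic typing:

    x = 17;          % x holds a double
    x = 'hat';       % the same variable now holds a character array (its type changed)
    y = x(2);        % 'a' - a value computed from another variable
    z = sqrt(25);    % a value taken from the output of a function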
b. Vectors/matrices

As suggested by its name (a contraction of "Matrix Laboratory"), MATLAB can create and
manipulate arrays of 1 (vectors), 2 (matrices), or more dimensions. In the MATLAB
vernacular, a vector refers to a one dimensional (1×N or N×1) matrix, commonly referred to
as an array in other programming languages. A matrix generally refers to a 2-dimensional
array, i.e. an m×n array where m and n are greater than 1. Arrays with more than two
dimensions are referred to as multidimensional arrays. Arrays are a fundamental type and
many standard functions natively support array operations allowing work on arrays without
explicit loops.
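For example:

    v = [1 2 3 4];      % a 1-by-4 vector (one-dimensional array)
    A = [1 2; 3 4];     % a 2-by-2 matrix
    B = A';             % transpose
    C = A .* B;         % element-wise multiplication, no explicit loop needed
    s = sum(v);         % many built-in functions operate on whole arrays at once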
Structures

MATLAB has structure data types. Since all variables in MATLAB are arrays, a more adequate name is "structure array", where each element of the array has the same field names. In addition, MATLAB supports dynamic field names (field look-ups by name, field manipulations, etc.). Unfortunately, the MATLAB JIT compiler does not support MATLAB structures, so even a simple bundling of various variables into a structure comes at a cost.
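A short sketch (the field and variable names are only illustrative):

    p.name    = 'sample1';     % create a structure with two fields
    p.score   = 0.87;
    f         = 'score';       % dynamic field name
    v         = p.(f);         % field look-up by name gives 0.87
    p(2).name = 'sample2';     % adding a second element makes p a structure array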
CHAPTER 4
METHODOLOGY
IMAGE PROCESSING

Digital signal processing is the methodology used to obtain fast and accurate results about plant leaf diseases. It reduces the effort in many agricultural tasks and improves productivity by detecting the appropriate diseases. For disease detection, the image of an infected leaf is examined through a set of procedures: the input image is pre-processed, its features are extracted according to the dataset, and then classifier techniques are used to classify the disease according to the specific dataset. Image acquisition is the process in which an image is acquired and converted to the desired output format. For this application an analog image is first captured and then converted to a digital image for further processing.

Pre-processing and segmentation involve image segmentation, image enhancement and color space conversion. First, the digital image is enhanced by a filter and the leaf image is filtered from the background image. Then the filtered image's RGB colors are converted into color space parameters; Hue Saturation Value (HSV) is a good representation for color perception. The image is then segmented into meaningful parts that are easier to analyze. Any of the model-based, threshold-based, edge-based, region-based or feature-based segmentation methods can be applied to the images.
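A minimal MATLAB sketch of this chain, assuming the Image Processing Toolbox and a hypothetical input file leaf.jpg (the thresholding rule is only one simple choice):

    rgb  = imread('leaf.jpg');                 % hypothetical input image
    rgb  = imadjust(rgb, stretchlim(rgb));     % image enhancement (contrast stretch)
    hsv  = rgb2hsv(rgb);                       % RGB to HSV color space conversion
    sat  = hsv(:,:,2);                         % saturation channel
    mask = sat > graythresh(sat);              % threshold-based segmentation
    seg  = rgb;
    seg(repmat(~mask, [1 1 3])) = 0;           % keep only the segmented leaf region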

Feature extraction is the process performed after segmentation. According to the segmented information and a predefined dataset, some features of the image are extracted. This extraction can be statistical, structural, fractal or signal-processing based. The color co-occurrence method, Grey Level Co-occurrence Matrices (GLCM), the Spatial Gray-level Dependence Matrices (SGDM) method, Gabor filters, the wavelet transform and principal component analysis are some methods used for feature extraction.
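As one hedged example, GLCM texture features could be computed roughly as follows (Image Processing Toolbox assumed; the file name is hypothetical):

    gray  = rgb2gray(imread('leaf.jpg'));                    % grey-scale version of the image
    glcm  = graycomatrix(gray, 'Offset', [0 1]);             % grey level co-occurrence matrix
    stats = graycoprops(glcm, {'Contrast','Correlation','Energy','Homogeneity'});
    featureVector = [stats.Contrast, stats.Correlation, stats.Energy, stats.Homogeneity];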

EDGE DETECTION
Edge detection is an important image processing task, both as a process in itself and as a component in other processes. The purpose of edge detection in medical images is to identify the areas of an image where a large change in intensity occurs. The edges in the images characterize the object boundaries and are useful for registration, segmentation and identification of objects in a scene. The Canny edge detector is one of the standard edge detection methods used to find the real edge points by maximizing the signal-to-noise ratio in medical images.
Introduction to canny edge detection

The Canny edge detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges while suppressing noise at the same time. It is a technique to extract useful structural information from different vision objects and dramatically reduce the amount of data to be processed. The conversion of the input image to gray-scale is necessary in order to limit the computational requirements of edge detection.
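A minimal sketch (Image Processing Toolbox; the file name is hypothetical):

    img   = imread('leaf.jpg');        % hypothetical input image
    gray  = rgb2gray(img);             % gray-scale conversion limits the computation
    edges = edge(gray, 'canny');       % multi-stage Canny edge detection
    imshow(edges)                      % display the detected edge map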

IMAGE PROCESSING

Image processing is the processing of images using mathematical operations, applying any form of signal processing for which the input is an image, a series of images, or a video, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Images can also be processed as three-dimensional signals, with the third dimension being time or the z-axis.

BLOCK DIAGRAM

Radial Basis Function Network (RBFN) Tutorial


A Radial Basis Function Network (RBFN) is a particular type of neural network. In this article, I'll be describing its use as a non-linear classifier.
Generally, when people talk about neural networks or “Artificial Neural Networks” they are
referring to the Multilayer Perceptron (MLP). Each neuron in an MLP takes the weighted
sum of its input values. That is, each input value is multiplied by a coefficient, and the results
are all summed together. A single MLP neuron is a simple linear classifier, but complex non-
linear classifiers can be built by combining these neurons into a network.

To me, the RBFN approach is more intuitive than the MLP. An RBFN performs classification
by measuring the input’s similarity to examples from the training set. Each RBFN neuron
stores a “prototype”, which is just one of the examples from the training set. When we want
to classify a new input, each neuron computes the Euclidean distance between the input and
its prototype. Roughly speaking, if the input more closely resembles the class A prototypes
than the class B prototypes, it is classified as class A.

RBF Network Architecture

The above illustration shows the typical architecture of an RBF Network. It consists of an
input vector, a layer of RBF neurons, and an output layer with one node per category or class
of data.
The Input Vector

The input vector is the n-dimensional vector that you are trying to classify. The entire input
vector is shown to each of the RBF neurons.

The RBF Neurons

Each RBF neuron stores a “prototype” vector which is just one of the vectors from the
training set. Each RBF neuron compares the input vector to its prototype, and outputs a value
between 0 and 1 which is a measure of similarity. If the input is equal to the prototype, then
the output of that RBF neuron will be 1. As the distance between the input and prototype
grows, the response falls off exponentially towards 0. The shape of the RBF neuron’s
response is a bell curve, as illustrated in the network architecture diagram.

The neuron’s response value is also called its “activation” value.

The prototype vector is also often called the neuron’s “center”, since it’s the value at the
center of the bell curve.

The Output Nodes

The output of the network consists of a set of nodes, one per category that we are trying to
classify. Each output node computes a sort of score for the associated category. Typically, a
classification decision is made by assigning the input to the category with the highest score.

The score is computed by taking a weighted sum of the activation values from every RBF
neuron. By weighted sum we mean that an output node associates a weight value with each of
the RBF neurons, and multiplies the neuron’s activation by this weight before adding it to the
total response.

Because each output node is computing the score for a different category, every output node
has its own set of weights. The output node will typically give a positive weight to the RBF
neurons that belong to its category, and a negative weight to the others.

RBF Neuron Activation Function


Each RBF neuron computes a measure of the similarity between the input and its prototype
vector (taken from the training set). Input vectors which are more similar to the prototype
return a result closer to 1. There are different possible choices of similarity functions, but the
most popular is based on the Gaussian. Below is the equation for a Gaussian with a one-
dimensional input.
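\[ f(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}} \]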

where x is the input, mu is the mean, and sigma is the standard deviation. This produces the familiar bell curve, which is centered at the mean, mu (in the example plot the mean is 5 and sigma is 1).

The RBF neuron activation function is slightly different, and is typically written as:
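\[ \varphi(x) = e^{-\beta \, \lVert x - \mu \rVert^{2}} \]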

In the Gaussian distribution, mu refers to the mean of the distribution. Here, it is the
prototype vector which is at the center of the bell curve.

For the activation function, phi, we aren’t directly interested in the value of the standard
deviation, sigma, so we make a couple simplifying modifications.

The first change is that we’ve removed the outer coefficient, 1 / (sigma * sqrt(2 * pi)). This
term normally controls the height of the Gaussian. Here, though, it is redundant with the
weights applied by the output nodes. During training, the output nodes will learn the correct
coefficient or “weight” to apply to the neuron’s response.

The second change is that we’ve replaced the inner coefficient, 1 / (2 * sigma^2), with a
single parameter ‘beta’. This beta coefficient controls the width of the bell curve. Again, in
this context, we don’t care about the value of sigma, we just care that there’s some coefficient
which is controlling the width of the bell curve. So we simplify the equation by replacing the
term with a single variable.

RBF Neuron activation for different values of beta

There is also a slight change in notation here when we apply the equation to n-dimensional
vectors. The double bar notation in the activation equation indicates that we are taking the
Euclidean distance between x and mu, and squaring the result. For the 1-dimensional
Gaussian, this simplifies to just (x - mu)^2.

It’s important to note that the underlying metric here for evaluating the similarity between an
input vector and a prototype is the Euclidean distance between the two vectors.

Also, each RBF neuron will produce its largest response when the input is equal to the prototype vector. This allows us to use it as a measure of similarity and to sum the results from all of the RBF neurons.

As we move out from the prototype vector, the response falls off exponentially. Recall from
the RBFN architecture illustration that the output node for each category takes the weighted
sum of every RBF neuron in the network–in other words, every neuron in the network will
have some influence over the classification decision. The exponential fall off of the activation
function, however, means that the neurons whose prototypes are far from the input vector will
actually contribute very little to the result.

If you are interested in gaining a deeper understanding of how the Gaussian equation
produces this bell curve shape, check out my post on the Gaussian Kernel.

Example Dataset
Before going into the details on training an RBFN, let’s look at a fully trained example.

In the below dataset, we have two dimensional data points which belong to one of two
classes, indicated by the blue x’s and red circles. I’ve trained an RBF Network with 20 RBF
neurons on this data set. The prototypes selected are marked by black asterisks.

We can also visualize the category 1 (red circle) score over the input space. We could do this
with a 3D mesh, or a contour plot like the one below. The contour plot is like a topographical
map.
The areas where the category 1 score is highest are colored dark red, and the areas where the
score is lowest are dark blue. The values range from -0.2 to 1.38.

I’ve included the positions of the prototypes again as black asterisks. You can see how the
hills in the output values are centered around these prototypes.

It’s also interesting to look at the weights used by output nodes to remove some of the
mystery.

Finally, we can plot an approximation of the decision boundary (the line where the category 1
and category 2 scores are equal).
To plot the decision boundary, I’ve computed the scores over a finite grid. As a result, the
decision boundary is jagged. I believe the true decision boundary would be smoother.

Training The RBFN


The training process for an RBFN consists of selecting three sets of parameters: the
prototypes (mu) and beta coefficient for each of the RBF neurons, and the matrix of output
weights between the RBF neurons and the output nodes.

There are many possible approaches to selecting the prototypes and their variances. The
following paper provides an overview of common approaches to training RBFNs. I read
through it to familiarize myself with some of the details of RBF training, and chose specific
approaches from it that made the most sense to me.

Selecting the Prototypes

It seems like there’s pretty much no “wrong” way to select the prototypes for the RBF
neurons. In fact, two possible approaches are to create an RBF neuron for every training
example, or to just randomly select k prototypes from the training data. The reason the
requirements are so loose is that, given enough RBF neurons, an RBFN can define any
arbitrarily complex decision boundary. In other words, you can always improve its accuracy
by using more RBF neurons.
What it really comes down to is a question of efficiency–more RBF neurons means more
compute time, so it’s ideal if we can achieve good accuracy using as few RBF neurons as
possible.

One of the approaches for making an intelligent selection of prototypes is to perform k-means clustering on your training set and to use the cluster centers as the prototypes. I won't describe k-means clustering in detail here, but it's a fairly straightforward algorithm that you can find good tutorials for.

When applying k-means, we first want to separate the training examples by category–we
don’t want the clusters to include data points from multiple classes.

Here again is the example data set with the selected prototypes. I ran k-means clustering with
a k of 10 twice, once for the first class, and again for the second class, giving me a total of 20
clusters. Again, the cluster centers are marked with a black asterisk ‘*’.

I’ve been claiming that the prototypes are just examples from the training set–here you can
see that’s not technically true. The cluster centers are computed as the average of all of the
points in the cluster.

How many clusters to pick per class has to be determined “heuristically”. Higher values of k
mean more prototypes, which enables a more complex decision boundary but also means
more computations to evaluate the network.
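A rough MATLAB sketch of this per-class clustering (kmeans is in the Statistics and Machine Learning Toolbox; X1 and X2 are assumed to hold the training points of class 1 and class 2, one point per row):

    k = 10;                               % prototypes per class, chosen heuristically
    [~, proto1] = kmeans(X1, k);          % cluster centers for class 1
    [~, proto2] = kmeans(X2, k);          % cluster centers for class 2
    prototypes  = [proto1; proto2];       % 20 RBF prototype vectors in total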
Selecting Beta Values
If you use k-means clustering to select your prototypes, then one simple method for
specifying the beta coefficients is to set sigma equal to the average distance between all
points in the cluster and the cluster center.
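\[ \sigma = \frac{1}{m} \sum_{i=1}^{m} \lVert x_i - \mu \rVert \]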

Here, mu is the cluster centroid, m is the number of training samples belonging to this cluster,
and x_i is the ith training sample in the cluster.

Once we have the sigma value for the cluster, we compute beta as:
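\[ \beta = \frac{1}{2\sigma^{2}} \]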

Output Weights
The final set of parameters to train are the output weights. These can be trained using
gradient descent (also known as least mean squares).

First, for every data point in your training set, compute the activation values of the RBF
neurons. These activation values become the training inputs to gradient descent.

The linear equation needs a bias term, so we always add a fixed value of ‘1’ to the beginning
of the vector of activation values.

Gradient descent must be run separately for each output node (that is, for each class in your
data set).

For the output labels, use the value ‘1’ for samples that belong to the same category as the
output node, and ‘0’ for all other samples. For example, if our data set has three classes, and
we’re learning the weights for output node 3, then all category 3 examples should be labeled
as ‘1’ and all category 1 and 2 examples should be labeled as 0.
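As a hedged shortcut, the same least-squares problem can be solved directly in MATLAB instead of running iterative gradient descent. Here Phi is assumed to hold the RBF activations of the training samples (one sample per row) and Y the 0/1 label matrix (one column per output node):

    Phi_b  = [ones(size(Phi,1),1), Phi];   % prepend the fixed bias value of 1
    W      = Phi_b \ Y;                    % least-squares solution for all output weights at once
    scores = Phi_b * W;                    % weighted sums = category scores for the training set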
RBFN as a Neural Network
So far, I’ve avoided using some of the typical neural network nomenclature to describe
RBFNs. Since most papers do use neural network terminology when talking about RBFNs, I
thought I’d provide some explanation on that here. Below is another version of the RBFN
architecture diagram.

Here the RBFN is viewed as a “3-layer network” where the input vector is the first layer, the
second “hidden” layer is the RBF neurons, and the third layer is the output layer containing
linear combination neurons.

One bit of terminology that really had me confused for a while is that the prototype vectors
used by the RBFN neurons are sometimes referred to as the “input weights”. I generally think
of weights as being coefficients, meaning that the weights will be multiplied against an input
value. Here, though, we’re computing the distance between the input vector and the “input
weights” (the prototype vector).
DISCRETE WAVELET TRANSFORM
In numerical analysis and functional analysis, a discrete wavelet transform (DWT) is
any wavelet transform for which the wavelets are discretely sampled. As with other wavelet
transforms, a key advantage it has over Fourier transforms is temporal resolution: it captures
both frequency and location information (location in time).

TYPES OF DWT
1. Haar wavelets
The first DWT was invented by Hungarian mathematician Alfréd Haar. For an input represented by a list of numbers, the Haar wavelet transform may be considered to pair up input values, storing the difference and passing the sum. This process is repeated recursively, pairing up the sums to provide the next scale, which finally leads to differences and one final sum.
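One level of this pairing can be written directly (an unnormalized sketch; the Wavelet Toolbox function dwt(x,'haar') gives the properly scaled version):

    x = [4 6 10 12 8 6 5 5];          % example input list
    s = x(1:2:end) + x(2:2:end);      % pairwise sums, passed on to the next scale
    d = x(1:2:end) - x(2:2:end);      % pairwise differences, stored
    % repeating the same pairing on s gives the next scale, down to one final sum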

2. Daubechies wavelets
The most commonly used set of discrete wavelet transforms was formulated by the Belgian
mathematician Ingrid Daubechies in 1988. This formulation is based on the use of recurrence
relations to generate progressively finer discrete samplings of an implicit mother wavelet
function; each resolution is twice that of the previous scale. In her seminal paper, Daubechies
derives a family of wavelets, the first of which is the Haar wavelet. Interest in this field has
exploded since then, and many variations of Daubechies' original wavelets were developed.
3. The dual-tree complex wavelet transform (DℂWT)
The dual-tree complex wavelet transform (ℂWT) is a relatively recent enhancement to the discrete wavelet transform (DWT), with important additional properties: it is nearly shift invariant and directionally selective in two and higher dimensions. It achieves this with a redundancy factor that is substantially lower than that of the undecimated DWT. The multidimensional (M-D) dual-tree ℂWT is nonseparable but is based on a computationally efficient, separable filter bank (FB).[2]
PROPERTIES OF DWT
The Haar DWT illustrates the desirable properties of wavelets in general. First, it can be performed in O(n) operations; second, it captures not only a notion of the frequency content of the input, by examining it at different scales, but also temporal content, i.e. the times at which these frequencies occur. Combined, these two properties make the fast wavelet transform (FWT) an alternative to the conventional fast Fourier transform (FFT).
1. TIME ISSUES

Due to the rate-change operators in the filter bank, the discrete WT is not time-invariant but
actually very sensitive to the alignment of the signal in time. To address the time-varying
problem of wavelet transforms, Mallat and Zhong proposed a new algorithm for wavelet
representation of a signal, which is invariant to time shifts. According to this algorithm,
which is called a TI-DWT, only the scale parameter is sampled along the dyadic sequence 2^j
(j∈Z) and the wavelet transform is calculated for each point in time.
APPLICATIONS OF DWT
The discrete wavelet transform has a huge number of applications in science, engineering,
mathematics and computer science. Most notably, it is used for signal coding, to represent a
discrete signal in a more redundant form, often as a preconditioning for data compression.
Practical applications can also be found in signal processing of accelerations for gait analysis,
image processing,[7] in digital communications and many others.
It is shown that discrete wavelet transform (discrete in scale and shift, and continuous in
time) is successfully implemented as analog filter bank in biomedical signal processing for
design of low-power pacemakers and also in ultra-wideband (UWB) wireless
communications.
Example in Image Processing
(Figures: an image with Gaussian noise, and the same image with the Gaussian noise removed.)

Wavelets are often used to denoise two dimensional signals, such as images. The
following example provides three steps to remove unwanted white Gaussian noise from the
noisy image shown. Matlab was used to import and filter the image.
The first step is to choose a wavelet type and a level N of decomposition. In this case biorthogonal 3.5 wavelets were chosen with a level N of 10. Biorthogonal wavelets are commonly used in image processing to detect and filter white Gaussian noise, due to their high contrast of neighboring pixel intensity values. Using these wavelets, a wavelet transformation is performed on the two-dimensional image.
Following the decomposition of the image file, the next step is to determine threshold values for each level from 1 to N. The Birgé-Massart strategy[13] is a fairly common method for selecting these thresholds. Using this process, individual thresholds are made for the N = 10 levels. Applying these thresholds accomplishes the majority of the actual filtering of the signal.
The final step is to reconstruct the image from the modified levels. This is accomplished using an inverse wavelet transform. The resulting image, with the white Gaussian noise removed, is shown below the original image. When filtering any form of data it is important to quantify the signal-to-noise ratio of the result. In this case, the SNR of the noisy image in comparison to the original was 30.4958%, and the SNR of the denoised image is 32.5525%. The resulting improvement of the wavelet filtering is an SNR gain of 2.0567%.
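A minimal MATLAB sketch of these three steps, assuming the Image Processing and Wavelet Toolboxes and a hypothetical gray-scale file noisy.png (parameter values follow the description above; the image must be large enough for a level-10 decomposition):

    X     = im2double(imread('noisy.png'));             % noisy gray-scale image
    N     = 10;                                         % decomposition level
    wname = 'bior3.5';                                  % biorthogonal 3.5 wavelet
    [C,S] = wavedec2(X, N, wname);                      % step 1: wavelet decomposition
    thr   = wdcbm2(C, S, 3);                            % step 2: Birge-Massart level-dependent thresholds
    Xden  = wdencmp('lvd', C, S, wname, N, thr, 's');   % step 3: soft-threshold and reconstruct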
It is important to note that choosing other wavelets, levels, and thresholding strategies can result in different types of filtering. In this example, white Gaussian noise was chosen to be removed; with different thresholding, it could just as easily have been amplified.
CHAPTER 5
MODULE DESCRIPTION

1) IMAGE PRE-PROCESSING

Pre-processing techniques are used to improve the image quality without changing the information content hidden in the image. Pre-processing is needed to improve the simulation percentage, to reduce distortion, to obtain higher assurance, etc. The basic steps involved in pre-processing are image resizing and image enhancement.
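A hedged sketch of these two steps (Image Processing Toolbox; the file name and target size are only illustrative):

    img      = imread('input.jpg');                    % hypothetical input image
    resized  = imresize(img, [256 256]);               % image resize to a fixed size
    enhanced = imadjust(resized, stretchlim(resized)); % image enhancement (contrast stretch)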

2) SEGMENTATION

In the segmentation process, the DWT (Discrete Wavelet Transform) is used because of its high potential for finding the properties of melanoma. Discrete wavelet transforms (DWTs), including the maximal overlap discrete wavelet transform (MODWT), analyse signals and images into progressively finer octave bands. Wavelet packets provide a family of transforms that partition the frequency content of signals and images into progressively finer equal-width intervals. The wavelets are used to find the features of the image.
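For example, a single-level 2-D DWT of the pre-processed image could be taken as follows (Wavelet Toolbox assumed; the file name is hypothetical):

    gray = rgb2gray(imread('input.jpg'));     % hypothetical pre-processed image
    [cA, cH, cV, cD] = dwt2(gray, 'haar');    % one-level 2-D DWT
    % cA is the coarse approximation; cH, cV, cD hold horizontal, vertical and diagonal detail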

3) FEATURE EXTRACTION

This process is required to concentrate on the salient features. Entropy, standard deviation, maximum pixel values and mean values are used to describe segments or picture qualities, which helps to capture the information needed for examination and detection of the disease. The histogram is a graphical representation of the image. The technique used for feature extraction is the Histogram of Oriented Gradients (HOG).
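A rough sketch of these descriptors (entropy/std2/mean2 require the Image Processing Toolbox and extractHOGFeatures the Computer Vision Toolbox; the file name is hypothetical):

    gray = rgb2gray(imread('input.jpg'));     % hypothetical segmented image
    e    = entropy(gray);                     % entropy
    sd   = std2(gray);                        % standard deviation
    mx   = double(max(gray(:)));              % maximum pixel value
    mn   = mean2(gray);                       % mean value
    hog  = extractHOGFeatures(gray);          % Histogram of Oriented Gradients descriptor
    featureVector = [e sd mx mn hog];         % combined feature vector for the classifier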

4) CLASSIFICATION USING RBFN

RBFN is a robust supervised machine learning method used for efficient and accurate classification in various applications such as object detection, speech recognition, bioinformatics, image classification, medical diagnosis and others. The supervised learning method is normally composed of two main phases: training/learning and classification. RBFN is based on the concept of a decision boundary, which separates two different classes of data in order to discriminate between the classes with high accuracy.
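As a hedged illustration, MATLAB's newrb function (Deep Learning / Neural Network Toolbox) builds a radial basis network; Xtrain (feature vectors in columns), Ttrain (0/1 targets, one row per class) and Xtest are assumed to be prepared beforehand:

    spread = 1.0;                                 % width of the radial basis functions
    net    = newrb(Xtrain, Ttrain, 0.0, spread);  % build and train the radial basis network
    scores = net(Xtest);                          % class scores for unseen samples
    [~, predictedClass] = max(scores, [], 1);     % assign each sample to the top-scoring class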
CHAPTER 6
RESULT SNAP
