All rights reserved. Portions of this manuscript may be reproduced with proper
referencing and due acknowledgement of the authors.
Real-Time Vehicle Parking Logging System with the use of Multi-Layered Artificial
Neural Network
A Thesis
Presented to the Faculty of the College of Engineering and Technology
Pamantasan ng Lungsod ng Maynila
Intramuros, Manila
Proponents:
Adviser:
March 2018
CERTIFICATION
This thesis entitled REAL-TIME VEHICLE PARKING LOGGING SYSTEM
WITH THE USE OF MULTI-LAYERED ARTIFICIAL NEURAL NETWORK prepared
and submitted by VINCE HARROLD A. ALTURA, BONIFACIO S. MACABANTI,
FRANCIS NICKO D. RICARDO and MICAH ELLA R. UY in partial fulfillment of the
requirements for the degree BACHELOR OF SCIENCE IN ELECTRONICS
ENGINEERING has been examined and recommended for Oral Examination.
Evaluation Committee
APPROVAL
Approved by the Panel on Oral Examination on February 23, 2018 with the grade
of ________.
CERTIFICATION OF ORIGINALITY
This is to certify that the research work presented in this thesis entitled REAL-TIME VEHICLE PARKING LOGGING SYSTEM WITH THE USE OF MULTI-LAYERED ARTIFICIAL NEURAL NETWORK is a product of original and scholarly work carried out by the undersigned. This thesis does not contain words or ideas taken from published sources or written works that have been accepted as basis for the award of a degree from any higher education institution, except where due acknowledgement is made.
ACKNOWLEDGEMENTS
Their thesis adviser, Engr. Reynaldo Ted L. Peñas II, for his valuable technical guidance, and for accommodating each and every need of the researchers during the conduct of the study.
The course adviser, Engr. Jeremeh Dela Cruz, for providing encouragement,
enthusiasm and support for the researchers’ aim to produce a competitive research in
the program.
The panel members for all defenses, Engr. Jeremeh Dela Cruz and Dr. Carlos Sison, headed by Engr. Charles Juarizo, for providing constructive criticism of the study.
To their classmates, who provided unending enthusiasm and support during the entire course.
To their research team composed of third year students, Nyna Dominic D. Chan and Camille Ann C. Dizon, for helping the team during oral presentations.
To the family of all the researchers, who provided all the needs which enabled them to finish this study.
Most especially to the Almighty Lord Jesus, the creator and source of guidance and strength.
ABSTRACT
Due to the growing need for surveillance and license plate identification for security purposes, a license plate recognition system (LPRS) is proposed in this work. This study aims to create automatic license plate recognition without human intervention and to evaluate its overall processing time and accuracy. To this end, the proponents proposed suitable algorithms for the four main stages after image acquisition, namely, license plate pre-processing, plate localization, character segmentation, and character recognition, in recognizing old and new license plates of private vehicles for a specific parking lot. Non-linear filtering of the gray-scaled image is used for enhancement. The plate localization is carried out through histogram-based edge processing, where vertical and horizontal histograms are calculated. In order to avoid loss of important information in the image, a digital low-pass filter is used. Both histograms are then passed through a band-pass filter to remove unnecessary histogram values. Localized plates are converted into a binary image for character segmentation. Characters are separated from each other using connected component analysis. Characters are then prepared for the character recognition stage, in which a multi-layered artificial neural network determines the characters. For network training, a mean square error of 0.0001 is set to stop the training. Upon recognizing the plate, the system checks in the database whether the vehicle is registered or not. A scale model of a parking lot is made in order to evaluate the proposed system. It is found that the overall accuracy of the system is 86% and that recognition takes an average time of about 2.57 seconds. However, the proposed system has some limitations, which will be studied in the future in order to make the techniques applicable under broader conditions.
TABLE OF CONTENTS
Page
TITLE PAGE ............................................................................................................. i
CERTIFICATE AND APPROVAL SHEET ................................................................ ii
CERTIFICATION OF ORIGINALITY ....................................................................... iii
ACKNOWLEDGEMENTS........................................................................................ iv
ABSTRACT ............................................................................................................ vi
TABLE OF CONTENTS ......................................................................................... viii
LIST OF FIGURES .................................................................................................. x
LIST OF TABLES ................................................................................................... xv
Bibliography ..........................................................................................................110
Appendices
Appendix I: Computations............................................................................. 116
Appendix II: Source Codes ........................................................................... 121
Gantt chart ............................................................................................................153
Costing ................................................................................................................. 154
About the Proponents ...........................................................................................155
Proofread Certification ..........................................................................................153
LIST OF FIGURES
2.2 (a) Gray image (b) edge detected (c) localized plate region ................ 21
eight-connectivity ................................................................................ 33
4.10 Images captured at 40cm horizontal distance (a & b), Failure Detection
of plate region at 40cm horizontal distance (c & d) .............................. 87
LIST OF TABLES
2.6 Different font styles of printed text and the results ............................... 39
4.1 Percentage Success of recognizing the license plate region varying the
distance of camera to the vehicle........................................................ 83
4.2 Percentage Success of recognizing the license plate region varying the
horizontal distance of camera to the vehicle ........................................ 86
Chapter I
Introduction
With electronic technology and machines being produced and improved all the time, everything is becoming convenient and accessible for every human being.
As the automotive industry moves toward autonomous driving and smart roads, the conflicts encountered involving vehicles continue to increase. These conflicts include security in parking areas, stolen vehicle verification, and traffic flow control in some national roads.
An October 2016 online article written by Anne Marie pointed out another form
of a hit-and-run issue. She said that “if you hit a parked car and leave the scene without
making an effort to contact the owner of the car, it can also be considered a hit-and-
run”. This scenario mostly occurs in parking areas where cars leave unnoticed. In such a situation, it is difficult for the owner of the damaged vehicle to trace the culprit of the incident. Though there are some CCTV cameras installed in nearby areas that capture the incident, recognizing the identity of the culprit is a major task for security personnel. One effective way to address this kind of conflict is by identifying the information about one’s vehicle. Though it can be done by manual human observation, there is a probability of mistakes due to reading errors or failed eyesight, caused by
blurred images or lack of concentration. With that, automatic license plate recognition
is introduced.
Automatic license plate recognition is a technology which lies under the computer vision field (Thanin, 2008). It has been a special area of interest due to its many applications, such as its potential to be used by police forces around the world for law enforcement purposes, re-acquisition of stolen cars, and other security applications.
Automatic license plate recognition systems commonly have two types of basic
approaches. First, the process is entirely performed at the designated lane location in
real-time, while the second one performs the process at the remote computer location
where all the images from many lanes are transmitted (Habineza, 2016). When the process is done at the lane site, the whole process is completed in approximately 250 milliseconds. This includes the capturing and identifying of license plate characters, date-time, lane identification, and any other information required. If further processing is required, the data are transmitted to a remote computer to finish the process. On the other hand, if the process doesn't require anything further, the information could be stored at the lane for later retrieval. Furthermore, handling high workloads typically demands large numbers of PCs used in a server farm.
License plate recognition typically consists of four stages. The first stage is image acquisition. At this stage, the image of a vehicle’s front
or rear is being captured by the camera. However, there are some parameters of the
camera that have to be considered such as the type of camera, camera resolution,
shutter speed, orientation, and light. The second stage is license plate detection. In
this stage, the system is going to determine if the input image contains the region of
a license plate and will find its location on that picture. The extraction of the license
plate from the image will be based on some features, such as the boundary, the color,
or the existence of the characters. The third stage is character segmentation. In this
stage, the system takes the image of a plate and separates the characters from each
other. The last stage of the process is character recognition. This stage is about identifying the segmented characters using classifiers such as neural networks, SVMs and fuzzy classifiers (Azad et. al, 2014). Every stage of license plate recognition plays a significant role in the overall success rate. The higher the accuracy of each of these stages, the higher the success rate that can be obtained.
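As a rough sketch of how these four stages connect, the chain below is written in MATLAB; localizePlate, segmentCharacters and recognizeCharacters are hypothetical placeholder functions standing in for the stage implementations discussed later, not built-in routines.

    % Minimal sketch of the recognition pipeline (placeholder helpers).
    img   = imread('vehicle.jpg');       % stage 1: image acquisition (from file here)
    gray  = rgb2gray(img);               % grayscale copy used by later stages
    plate = localizePlate(gray);         % stage 2: license plate detection (hypothetical)
    chars = segmentCharacters(plate);    % stage 3: character segmentation (hypothetical)
    text  = recognizeCharacters(chars);  % stage 4: character recognition (hypothetical)
    disp(text)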
For the past two decades, a lot of techniques have been implemented in license plate recognition. Each stage has its own specific methods that acquired significant results. For license plate detection, there are four major processes considered for processing the input image: (1) Binary Image Processing, (2) Gray-Level Processing, (3) Color Processing, and (4) Classifiers. Among the four processes, binary-image processing shows the highest rate of reliability with a relatively fast extraction rate compared to the others. For character segmentation, horizontal projection is the most common method, but connected component analysis is regarded as more reliable than the projections. Hidden Markov Model and Template Matching are said to be effective techniques for the character recognition phase, but the Artificial Neural Network can be trained accordingly to the guiding parameters of a specific country's traffic norms and standards.
In the Philippines, there is no further study about the automatic license plate recognition system based on international and national patent websites (Dalida et.
al, 2016). With that, the proponents desired to conduct a research study about
automatic license plate recognition system with higher accuracy in terms of the
mentioned stages. The proponents also aim to develop a real-time automatic license
plate recognition system that takes less processing time, uses low computing power,
and has better recognition rates under fewer restrictions using ANNs and various image processing techniques.
Theoretical Framework
According to Ying Wen and M. von Deneen in their study entitled “An Algorithm for License Plate Recognition Applied to Intelligent Transportation System”, the system first uses a series of image manipulation techniques to detect, normalize and enhance the image of the number plate, and then optical character recognition (OCR) to extract the alphanumerics of the license plate. Figure 1.2 shows the functional block diagram of the system.
Figure 1.2. Functional block diagram of the system: a serial camera interfaced to a microcontroller unit (Arduino Plus with ATMEGA644) transmits the captured image to a server, which performs image pre-processing, morphological operations, edge statistics, character segmentation, character recognition, and data mining; the recognized characters are shown as text in a graphic user interface.
The serial camera is used to capture the front/rear view of the vehicles where
the license plate is located. Then, the image captured is transmitted using the
microcontroller that is interfaced in the serial camera. For the image processing in the
server, if the license plate can be detected exactly then character segmentation and
character recognition can yield twice the result with half the effort. However, it needs
to consider the various luminance conditions of the environment. With that, an image
pre-processing approach was used because it has the advantage of being simple and thus faster. Its process involves RGB to Grayscale conversion, median filtering, and dilation.
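A minimal MATLAB sketch of this pre-processing chain is shown below, assuming the Image Processing Toolbox; the file name and structuring-element size are illustrative assumptions, not values from the study.

    % RGB to grayscale, median filtering, then dilation (illustrative parameters).
    rgb = imread('capture.jpg');      % hypothetical captured frame
    g   = rgb2gray(rgb);              % drop color information
    g   = medfilt2(g, [3 3]);         % 3x3 median filter suppresses impulse noise
    se  = strel('square', 3);         % small structuring element (assumed size)
    gd  = imdilate(g, se);            % dilation sharpens edges, fills small gaps
    imshowpair(g, gd, 'montage')      % compare filtered and dilated results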
For the extraction of the number plate, vertical and horizontal edge processing was
used along with the histogram approach to show the trends of pixel components of
the whole image. When the presence of a license plate is detected, the characters on
the plate have to be separated from each other. The character segmentation system
has to acquire the full image of a license plate and to deliver a set of separated
characters to the next process which is the character recognition. For this, Connected
Component Analysis (CCA) is implemented. By applying the CCA to the image of the
license plate, each character will be seen as individual images. All characters will be
acquired by filtering all of the elements through a threshold value such as the size of
a component (Jing, 2016). Next is character recognition, the most integral part of
license plate recognition. Using the Artificial Neural Network, both the accuracy and the real-time performance of the system are supported. Finally, the recognized
characters will undergo matching with the stored characters in the database. This matching step is a form of data mining: the analysis of large data sets involving methods at the intersection of machine learning, statistics, and database systems, applied to extract data patterns. Sometimes this analysis step is referred to as knowledge discovery in databases.
The block diagram in Figure 1.2 demonstrates the structural model of how the characters of a license plate in the input image will be recognized. The success of the system depends on its robustness against varying backgrounds, defects on the plate surface, and a range of distances and viewpoints.
Conceptual Framework
Figure 1.3 shows the conceptual framework of this study. An image acquired by the serial camera serves as the input to the system. There would be five major processes. The first step consists of the following: image pre-processing, morphological operations, and edge statistics. These have a function to detect the location of the license plate in the image. Then, the characters are separated from the image, which is the main function of connected component analysis. Next, the algorithm that will recognize the characters from the license plate image is the three-layer feed-forward neural network with back-propagation algorithm. And lastly, the match of the recognized characters is determined against the database records.
Figure 1.3. Conceptual framework of the study (input-process-output with feedback):
Input - Hardware: serial camera, server (laptop), microcontroller (Arduino); Software: MATLAB 2016a, graphic user interface.
Process - Implementation of image pre-processing, morphological operations, and edge statistics; implementation of connected component analysis; implementation of a three-layer feed-forward neural network with back-propagation algorithm; implementation of data mining.
Output - Recognized license plate characters from the vehicle's front/rear view image, displayed in the graphic user interface.
Feedback - Unmatched characters are stored as training set data.
The reliability of the system was determined by its accuracy and processing time.
Statement of the Problem
This study aims to develop a real-time license plate recognition system that can be used for parking management systems and security of establishments. Specifically, it seeks to answer the following questions:
1. What is the effect on the accuracy of the designed system when the input is varied in terms of:
a. Distance?
b. Viewing Angle?
c. Luminosity?
2. What is the necessary average difference between the histogram values of the original gray-scaled image and the processed image in order for the system to completely segment the license plate region?
3. How fast and accurate is the performance of the designed system for license plate recognition?
Objectives
The main objective of this study is to create a real-time system that will fully recognize the characters in the license plate using artificial neural network. Specifically, the study aims to:
1. Determine the effect on the accuracy of the designed system when the input is varied in terms of:
a. Distance
b. Viewing Angle
c. Luminosity
2. Determine the necessary average difference between the histogram values of the original gray-scaled image and the processed image such that the system can completely segment the license plate region.
3. Determine the accuracy and processing time of the proposed license plate recognition system.
Significance of the Study
This study promotes the development of automatic license plate recognition here in the Philippines. The Philippines’ license plate has its own distinct format and style, and finding ways to be able to recognize it will have a great impact on the security and safety of the society. First, (1) it could lessen crime in the streets, such as car-napping and hit-and-run incidents, by providing an automatic alert when any of the vehicles on a watch list passes by an establishment that has a license plate recognition system installed. Second, (2) it could help to solve traffic-related problems by monitoring the traffic flow of the vehicles. Lastly, (3) it could enhance the security in some restricted areas through access control. For example, only authorized vehicles and registered visiting vehicles would be allowed to enter the establishment. This study aims to produce more accurate and more reliable results than existing local systems. The capability of the system to fully recognize the characters in the license plate under various luminance conditions and even in a tilted position is also demonstrated in this study. With this kind of system, it is possible to have better parking management and security.
The beneficiaries and their gains from this study are as follows:
1. Parking area operators and owners, since the system facilitates or even automates the payment, entry and exit of the parking areas.
2. Private establishments who need a tight security system for their areas.
Scope and Delimitations
This study aims to provide a system that automatically converts a license plate image into text. Vehicles were held stationary during image-taking, with angle and distance having very little variation. The images taken all have resolutions of 640x480 pixels and are in JPEG format. The images being put through the Artificial Neural Network for recognition are all binarized images. Only registered Philippine license plates were considered for the study. Lastly, the software used for simulation was MatLab 2016a. The host machine has the following specifications: an i5 dual-core processor operating at 1.70 and 2.40 GHz, with 6 GB of RAM installed. The Artificial Neural Network has the following specifications: 351 input neurons and 7 output neurons, while the number of hidden neurons was determined during the testing phase. The database contains a number of license plates along with the following data: owner, model, color, and the violations made by the owner.
Definition of Terms
The following terminologies were used in the study and were listed accordingly:
Artificial Neural Network (ANN) – a computing model based on the structure and functions of biological neural networks. Information that flows through the network affects its structure.
Binarization - is the conversion of a picture to only black and white or Logic “0” or
Logic “1”.
Character Recognition – the ability of a machine to identify printed characters such as numbers or letters and to change them into a form that the computer can process.
Color to Gray conversion – this process removes all color information, leaving only the luminance of each pixel.
Image Processing – any form of signal processing for which the input is an image, a series of images, or a video; the output may be either an image or a set of characteristics related to the image.
Image Resolution – is the detail an image holds. The term applies to raster digital images, film images, and other types of images. Higher resolution means more image detail.
License Plate Region – is the area inside the edges of the license plate including all the characters.
Viewing Angle – the angle made by the x-axis and the varying position of the camera with respect to the vehicle.
Chapter II
REVIEW OF RELATED LITERATURE AND STUDIES
This chapter presents the review and relevant information about license plate
recognition and artificial neural network that was used as basis for this research. In
this section, the related algorithms for image processing toward recognition process
are discussed and further explained through the works of previous researchers.
The rapid increase in the number of vehicles across the world has resulted in vehicle license plate recognition becoming one of the most relevant and important image processing systems used for many traffic control and surveillance systems, security systems, toll collection at toll plazas, parking assistance systems, etc. Because of the wide use of such technology, the Intelligent Transport System has emerged.
Plenty of research in the field of vehicle plate number or VPN recognition has been carried out since the early 2000s. License plate localization methods are classified broadly into morphology-based methods, edge statistics, neural networks, fuzzy-based methods, template-based methods, and so on. The color, variance, aspect ratio, and edge density are some of the image features used by the above methods. Various license plate recognition systems built around these features have been proposed. According to Anand Sumatilal Jain and Jayshree Kundargi in their paper on license plate recognition using Artificial Neural Network published in 2015, the typical License Plate Recognition (LPR) system is comprised of four main sections: image pre-processing, license plate extraction, character segmentation, and character recognition.
Normally, the license plate has a rectangular shape with a known aspect ratio, so a common method used to find these rectangles is edge detection. In a paper published in 2003 conducted by M. Sarfraz, M. J. Ahmed, and S. A. Ghazi with the title “Saudi Arabian license plate recognition system”, edges were detected in their proposed system. Candidate regions are generated by matching between vertical edges only. In their study, the vertical edges are matched to obtain some candidate rectangles. Rectangles that are considered as candidates are those that have the same aspect ratio as the license plate. The reported success rate of this method was 96.2% on the tested images.
Jain and Kundargi indicated the block diagram of their system as shown in
Figure 2.1. They also indicated in their study that the parameters of the camera to be
used in acquiring the image shall be considered, such as the camera type, resolution
of the camera, orientation, light, and the speed of the camera. The next step which is
the extraction of license plate from the acquired image, is based on some image
features such as the color, various illuminations, boundary, or the existence of the
characters. It was also discussed that in the third stage, segmentation of characters
in the extracted license plate, it was suggested to be done by projecting their color
information, labelling them, or matching their positions with templates. Finally, for the
last stage, the recognition of the characters, Artificial Neural Network (ANN) was used.
There are also other methods that were written in this paper for the recognition of characters. These are template matching, or using other classifiers related to ANN.
For the process of license plate detection and extraction from the acquired image, edge statistics combined with mathematical morphology featured very good results according to a published IEEE paper. Each method has some advantages and disadvantages of its own. As an example, the disadvantage discussed in this paper is that edge-based methods alone can hardly be applied to complex images, because they are too sensitive to unwanted edges, which may also show high edge magnitude or variance; for example, the radiator region in the front view of the vehicle could be detected by the system instead of the plate number region. In spite of this, when combined with morphological steps that eliminate unwanted edges in the processed images, the license plate extraction rate is relatively higher and faster compared to other methods.
This study also discussed the differences in the success rates and processing times obtained by previous researchers, wherein 20 of them were snipped from its original table and presented here. A research cited in this paper that used the combination of edge statistics and morphology on approximately 10000 images had a recognition rate of 99.6%, and the average processing time was recorded as approximately 0.1 second. This approach was labelled as a morphology-based method; morphological operations are techniques that process images based on shapes. That is why this is suggested for the extraction
of the boundary of license plate based on the research conducted by Khaled Mahmud,
et. al. published on March 2012 entitled “Bangla Automatic Number Plate Recognition
System using Artificial Neural Network.” These researchers also regarded other
methods used by previous studies for number plate detection and came up with the
conclusion that edge analysis method combined with mathematical morphology gives
a better result. According to this study, this technique provides strong edge information
where the plates contain dark characters on light background, which can be used as
an indication to detect the number plate. This condition matches the color scheme of the plates considered. In the paper entitled “License Plate Recognition Using Convolutional Neural Network” conducted by Shrutika Saunshi et. al., the edge detection method was used for
license plate detection and extraction. They indicated the importance of keeping in mind the boundaries of the plate in the image, thus resulting in the need for this edge
detection method. This method helped these researchers find out the actual shapes
in an image by the grouping method of intersection points of the shapes. Hence, they
could finally tell whether a shape is a rectangle or not depending upon the number of
points in that obtained group. Figure 2.2 shows 3 images wherein the first one shows the Gray image, the second one shows the edge-detected image, and the third one shows the localized plate region.
Figure 2.2. (a) Gray image (b) edge detected (c) localized plate region
Image Pre-processing
Based on the study conducted in the University of Rome by R. Parisi, et. al. entitled “Car Plate Recognition By Neural Networks And Image Processing”, the image pre-processing stage may take several forms, and the authors proposed alternatives, such as edge enhancement, for better robustness and suitability for the next processing stage. In a paper entitled “Block-Based Neural Network for Automatic Number Plate Recognition” published in 2014, Deepti Sagar and Maitreyee Dutta showed the steps they performed for improving the image quality, as described in what follows:
Gray scale processing, according to Sagar and Dutta, is a very important step
in image pre-processing because its results will be the foundation of later steps. For
the binarization, this study used the Otsu’s method of binarization. The image of various Gray-level intensities is converted into a binary image. In the matrix equivalent of a binary image, it will only be composed of ones and zeros, with one representing white and zero representing black. This was done using a threshold value that will decide whether each pixel becomes a one or a zero. Image quality matters here, according to Shrutika Saunshi, et. al. in their paper entitled “License Plate Recognition Using Convolutional Neural Network”: a noisy or degraded input will highly affect the later stages of recognition and produce faulty results. The image has to be enhanced through pre-processing.
One important step for pre-processing the image that was done in this study was binarization, which produces an image with only two pixel values: either 0, for a background pixel, or 1, for a foreground pixel. In a binary image, edges will be clearer, which makes detecting the license plate easier; thus, the binarization process is performed before license plate detection and extraction.
-Median Filter
For eliminating the unwanted noisy regions, Median filter was used in a study
conducted in Turkey by H.Erdinc Kocer and K.Kursat Cevik entitled “Artificial Neural
Networks-based Vehicle License Plate Recognition”. They used 3x3 matrices for this
filtering method, which was passed around the image. The dimension of these
matrices can be adjusted depending on the noise level. The researchers used the
process explained in a book entitled Computer vision and Image processing written
by S. Umbaugh on New Jersey published 1999. The process was explained and
worked as follows: First, a pixel was assigned as the center pixel of the 3x3 matrices.
Second, the neighborhood pixels were labelled; these are the pixels around the center pixel within the matrix. Then, a sorting process was employed on these nine pixels from the smallest to the biggest. After this, the median element was assigned, which was the fifth element. Finally, these processes were implemented on all the pixels in the plate image. An example of a median-filtered image is shown below in Figure 2.5. This helped the plate region extraction process attain a success rate of 98.45%.
Figure 2.5. (a) Original Gray-processed image (b) Contrast extended image (c) Median-filtered image
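The sorting procedure described above can be transcribed almost directly into MATLAB; the sketch below is a naive 3x3 implementation for illustration (in practice medfilt2 performs the same operation), with border pixels left unfiltered for brevity and the input file name assumed.

    % Naive 3x3 median filter: sort the nine neighborhood pixels, keep the fifth.
    g = double(imread('plate_gray.png'));  % hypothetical gray plate image
    out = g;                               % border pixels kept as-is for brevity
    [rows, cols] = size(g);
    for r = 2:rows-1
        for c = 2:cols-1
            win = g(r-1:r+1, c-1:c+1);     % 3x3 neighborhood around center pixel
            s   = sort(win(:));            % sort the nine values ascending
            out(r, c) = s(5);              % the median is the fifth element
        end
    end
    out = uint8(out);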
-Binarization
In a paper published in 2014, different binarization techniques for extracting objects from their background were compared. Binarization is a simple but effective tool and is now widely used. Different binarization methods have already been performed to evaluate different kinds of data. Niblack’s method was found better for thresholding Gray-scale images with low contrast, a variety of background intensities, and the presence of noise. In this paper, the researchers computed a local threshold by sliding a rectangular window over the Gray-level image. The size of this rectangular window may vary. Their proposed efficient implementation resulted in a sharper and more accurate object area. This study then concluded that this binarization method is effective for images such as MRI brain images or others that require a clearer view of the object in the image.
Figure 2.6. (a) Original Gray-processed image (b) Binary image produced by Niblack’s algorithm
Binarization methods were also compared in a study conducted by Youngwoo Yoon, Kyu-Dae Ban, Hosub Yoon, and Jaehong Kim. Three binarization methods were differentiated in this study, namely Otsu’s method, Niblack’s algorithm, and Sauvola’s algorithm. These were a few of the most commonly used methods because of their promising results. These methods were compared in this study using license plate images. The two local thresholding methods had almost similar performance, but Niblack’s algorithm produced better results in most cases. The characters were clearer, and the separation of the foreground color from the background was more distinct.
-Dilation
The dilation process was further explained and used in another study, where it was applied for expanding an element in an image, sharpening the edges of the objects, joining broken lines, and increasing the brightness of an image. Noise present in the image can also be removed using dilation. It was also stated in this paper that by making the edges sharper, the difference in Gray value between neighboring pixels at the edge of an object can be increased, and this enhances the edge detection. Since different environments and various illumination and shade are expected in the captured images, the given image has to be converted from RGB to Gray form. The dilation process is usually used for probing and expanding the shapes contained in an image. According to the paper entitled “A Fast Algorithm for License Plate Detection” published in 2007, the techniques based on
edge statistics yield promising results because of the sufficient information in the plate
region. The simplicity and speed of this method is an advantage over other license
extraction methods. However, the rate of success is low when edge-statistics method
alone is used.
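How dilation strengthens edge information can be sketched in MATLAB as follows, assuming the Image Processing Toolbox; subtracting the original from the dilated image emphasizes boundary pixels (a half morphological gradient). The structuring element size is an assumption.

    % Dilation-based edge emphasis (illustrative).
    g     = imread('vehicle_gray.png');  % hypothetical grayscale input
    se    = strel('square', 3);          % 3x3 structuring element (assumed)
    gd    = imdilate(g, se);             % expand bright regions, join broken lines
    edges = imsubtract(gd, g);           % dilated minus original highlights edges
    imshow(edges, [])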
Character Segmentation
Character segmentation is the process of extracting objects, such as characters and numbers, from the plate’s background. It is preferable to isolate each character from the extracted image of the plate number to make the process of recognition simpler. This stage separates the objects, namely numbers and letters, in the extracted license plate image from one another and from its background.
Connected Component Analysis (CCA) is a well-known technique in image processing that scans an image and labels its pixels into components based upon pixel connectivity. It was also explained that CCA works on binary or Gray-level images, and different measures of connectivity are possible, wherein in this proposed method the researchers used searching in eight-connectivity.
In the paper entitled “Block-Based Neural Network for Automatic Number Plate Recognition”, the researchers, Deepti Sagar and Maitreyee Dutta, also used CCA, labelling each 8-connected component in the binary license plate image with a unique number to make an indexed image. This study was conducted in India. Indian license plates have unnecessary textual details mostly found at the bottom of the license plate, therefore the researchers decided to remove such details and retain the character components. Blob extraction was used after this, processing the image vertically and horizontally to find the starting and ending position of each blob using maximum and minimum parameters, an example of which is shown in figure 2.9 below. This proposed method produced properly isolated character images.
Another study, by P.Jeyakumar et. al., showed how CCA is an important technique that scans and labels the
pixels of a binarized image into components based on its pixel connectivity. Shown in
the figure 2.10 below is the orientation or process how to label pixels through four-
Figure 2.10. Process how to label pixels through four-connectivity and eight-connectivity.
Then, these connected components are analyzed to be able to filter out overly long and wide components and only leave the components within the defined values. In a paper published in September 2011, conducted by Ying Wen, et. al., character segmentation using CCA was likewise adopted.
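A minimal MATLAB sketch of this kind of segmentation is given below: components are labelled with 8-connectivity and filtered by size, then cropped one by one. The area bounds are illustrative assumptions.

    % Connected component analysis on a binarized plate image (8-connectivity).
    bw = logical(imread('plate_binary.png'));     % hypothetical binary plate image
    cc = bwconncomp(bw, 8);                       % label 8-connected components
    st = regionprops(cc, 'Area', 'BoundingBox');  % size and location of each blob
    keep = find([st.Area] > 50 & [st.Area] < 2000);  % assumed size thresholds
    chars = cell(1, numel(keep));
    for k = 1:numel(keep)
        bb = st(keep(k)).BoundingBox;             % [x y width height] of one blob
        chars{k} = imcrop(bw, bb);                % crop one candidate character
    end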
Artificial intelligence has been used in many applications as of today. It has been
successful in many areas, such as image recognition and sound recognition. Optical
Character Recognition (OCR) programs have the capability to read printed texts. This
could be a text scanned from a document or hand-written text that was drawn to a
hand-held device such as a personal digital assistant, and then converted to an image.
OCR has been applied in computer-guided traffic systems for the past few years. For example, an intelligent traffic system has been continually developed which can work without human intervention. The Artificial Neural Network is a technique in computer science which has been widely used in signal processing, pattern recognition, and related fields, based on a paper published last June 2012 entitled “A Neuroplasticity (Brain Plasticity) Approach to Use in Artificial Neural Network” by Yusuf Perwej and Firoj Perwej.
According to the authors of the study entitled “Text recognition from image using Artificial Neural Network and Genetic Algorithm” published on 2015, Artificial neural networks (ANN) are a family of statistical learning models inspired by biological neural networks (the central nervous systems of animals, in particular the brain). ANNs are used to approximate functions that can depend on a large number of inputs and are generally unknown. An ANN is presented as a system of interconnected neurons, as we can notice in Figure 2.15. These neurons send messages to each other. These connections pass through a hidden layer and have weights which the user could modify and adjust based on experience, making the network adaptive to inputs and capable of learning; the network can then distinguish a piece of datum from one category upon presentation of data from another category.
According to the survey by Christos-Nikolaos Anagnostopoulos, et. al., including Ioannis D. Psoroulas and Vassili Loumos, entitled “License Plate Recognition From Still Images and Video Sequences: A Survey” published last September 2008, multi-layered feed forward neural networks are used for license plate character recognition in many of the surveyed systems. The most common training method for feed forward neural networks is error back propagation. The network has to be trained for many training cycles to attain good performance. In the table shown below (Table 2.5), labelled as Table II in the survey, various multi-layered neural
Table 2.5. Multilayered Feedforward Neural Networks for Character Recognition Details
network topologies are cited along with specific details as reported in a few works, most specially the systems’ performance or success rate, which is noticeably high. One cited system had difficulty in recognizing some letters and numbers which were labelled as problematic pairs, such as the letter 'I’ and number ‘1’, ‘B’ and ‘8’, ‘O’ and ‘0’, etc. This system therefore gave special attention to the training of its three-layered MLP for the correct identification of such pairs. In order to give this minimal problem a solution, the researchers trained the network on these problematic pairs more often. After this kind of special training, the rate of correct recognition of these pairs increased.
There are two basic methods used for OCR, according to a study entitled “Recognition of Text Image Using Multilayer Perceptron”, namely Matrix Matching and Feature Extraction. While matrix matching is the simpler and more common way, feature extraction produces much more robust and accurate results and is much more versatile. The Multilayer Perceptron (MLP) neural network that was implemented in this study was composed of three layers, namely the input, hidden, and output layers. The input layer consisted of 180 neurons which receive printed image data from a 30x20 symbol pixel matrix. The hidden layer had 256 neurons, wherein this number was decided on the basis of optimal results on a trial-and-error basis. The output layer is composed of 16 neurons.
This set-up was illustrated in the figure shown above (Figure 2.13). The Feed Forward Neural Network approach is used to combine all the unique features, which are taken as inputs; one hidden layer is used to integrate and collaborate similar features and, if required, adjust the inputs by adding or subtracting weight values; finally, one output layer is used to find the overall matching score of the network. On the other hand, learning was implemented using the back-propagation algorithm with an adaptive learning rate. The performance index for the training of this ANN was given in terms of its mean square error (MSE). The tolerance limit for the MSE was set to 0.001. When the number of iterations reached 350, the training set became stable at 0.0007.
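A sketch of such an MLP in MATLAB's Neural Network Toolbox is shown below; the layer sizes and training limits follow the figures cited above (180-256-16, MSE goal 0.001, 350 epochs), while the data matrices are random placeholders rather than the study's dataset.

    % Three-layer MLP: 180 inputs, 256 hidden neurons, 16 outputs (cited topology).
    X = rand(180, 500);                        % placeholder 30x20-pixel symbol data
    T = eye(16); T = T(:, randi(16, 1, 500));  % placeholder one-hot target columns
    net = feedforwardnet(256, 'traingd');      % backpropagation (gradient descent)
    net.layers{1}.transferFcn = 'logsig';      % sigmoid in the hidden layer
    net.layers{2}.transferFcn = 'logsig';      % sigmoid in the output layer
    net.trainParam.goal   = 0.001;             % MSE tolerance from the study
    net.trainParam.epochs = 350;               % iteration cap from the study
    net = train(net, X, T);                    % supervised training run
    Y = net(X);                                % network outputs for the inputs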
For this system’s data set, the researchers applied their system with different numbers of neurons in the hidden layer. They used different font styles of printed text, and the results are summarized in Table 2.6.
Table 2.6. Different font styles of printed text and the results
A study conducted by V. Koval et. al. entitled “Smart License Plate Recognition System Based on Image Processing Using Neural Network”, published in 2003, used a feed-forward neural network for their proposed license plate recognition system. The structure of this neural network consisted of 366 inputs on the input layer, 50 neurons on its one hidden layer, and an output layer with 46 neurons. The back propagation method with momentum and adaptive learning rate was used for neural network training. The neural network was trained on good-quality images, and the trained network recognized the characters in the number plates used in this experiment with a high success rate.
In the proposed approach by H.Erdinc Kocer and K.Kursat Cevik in their study entitled “Artificial Neural Networks-based Vehicle License Plate Recognition”, published in 2011, a multi-layered perceptron ANN model was used for character recognition. The processing units in the MLP were composed of three layers, namely the input layer, which includes the information used in making the decision; the hidden layer, which helps the network compute more complicated associations; and finally, the output layer, which includes the resulting decision. What makes this study different from many other researches already conducted was the use of two separate ANNs. The numbers and letters were classified separately, thus the use of two ANN models. This was done to increase the success rate of the recognition phase, where both numbers and letters have the same architecture but different input numbers. The researchers tried to prevent the complexity of recognizing similar numbers and letters such as ‘0’ and ‘O’, ‘8’ and ‘B’, and also ‘2’ and ‘Z’, etc.
The researchers trained the ANN and used the mean square error (MSE) function for measuring the performance of the network in terms of training. This value of MSE was used to determine how well the network output fits the desired output. This system was tested using 259 vehicles, and a maximum of 500 iterations were performed for each input set; when the minimum error rate, defined by the user, is reached, the iteration stops.
Figure 2.14 showed the relationship of the number of iterations to the MSE. The first graph shows that the training reached 1180 iterations for the letters before it reached the minimum error rate, while for the numbers, the training had 4457 iterations. The success rate for this character recognition stage was recorded to be 98.17%, combining the performance of the letter and number recognition by this proposed system: 344 out of 347 letters were successfully recognized, along with 1000 out of 1022 numbers.
In the same study entitled “Block-Based Neural Network for Automatic Number Plate Recognition” conducted by Deepti Sagar and Maitreyee Dutta, the neural network proposed has a simple architecture which classifies the input characters. 1,000 license plates were used as the dataset for this research and were divided into 3 sets. The first set contains a dataset of 3450 character images, whose recognition rate is shown in Figure 2.15.
Figure 2.15. First set contains dataset of 3450 character images recognition rate
On the other hand, the second set contains 6071 character images, whose recognition rate we can see in figure 2.16. Finally, the third set contains 8699 character images, whose recognition rate is shown in figure 2.17.
Figure 2.16. Second set contains 6071 character images recognition rate
The proposed block-based network achieved a high recognition rate at high processing speed. This system was found to have promising results even with degraded conditions of the license plate, such as joined or broken characters, dirty images, and bad illumination.
Figure 2.17. The third set contains 8699 character recognition images recognition rate
Table 2.7 shows the summary of the recognition rates of this proposed system. This system had a convincing result of 98.2% compared to other neural network-based systems previously cited in this paper, one of which attained 97.3% for 3700 character images. This system had a total processing time of 115.006 seconds for 3399 characters, meaning only 3.39 milliseconds per character as reported, compared to the cited study in this paper which averaged 8.4 milliseconds.
Table 2.7. Summary of results of the recognition rates of this proposed system

S.No | Character Images | Match Cases      | Unmatch Cases  | Recog. Rate | Process Time
1    | 3450 characters  | 3399 characters  | 51 characters  | 98.521%     | 115.006 s
2    | 6071 characters  | 5955 characters  | 116 characters | 98.089%     | 256.451 s
3    | 8699 characters  | 8523 characters  | 167 characters | 98.080%     | 379.374 s
Average Recognition Rate: 98.2%
In the Bangla number plate recognition study published in March 2012, the segmented characters in the detected plate number were recognized using an MLP neural network. The designed three-layer feed-forward supervised neural network is composed of one input layer, one hidden layer, and one output layer. The hidden layer consisted of 158 neurons, and the output layer has 40 neurons since there are 40 outputs of the network. This network also used back-propagation for the learning algorithm. The Log-Sigmoid function was used as the transfer function of the layers.
The MLP network used in this study was trained with six different Bangla fonts. Bangla is the alphabet used by the people in Bangladesh. Having different kinds of fonts in license plates would be a factor in having a high percentage of success rate for character recognition; that is why this system was trained in six different fonts for it to be versatile in whatever fonts the characters would have. The experimental results of the application of this system for character and word recognition are provided in Table 2.8 below, showing the success rate of the recognition process for each font.
Table 2.8. Experimental results of the application of this system for character and word recognition

Set | Font name    | Input type | Input samples | Recognized | Success rate (%)
1   | SutonnyMJ    | character  | 20            | 19         | 95
    |              | word       | 12            | 11         | 91.67
2   | ChondonaMJ   | character  | 20            | 18         | 90
    |              | word       | 12            | 11         | 91.67
3   | AtraiMJ      | character  | 20            | 16         | 80
    |              | word       | 12            | 9          | 75
4   | DhanshirhiMJ | character  | 20            | 17         | 85
    |              | word       | 12            | 10         | 83.3
5   | BhagirathiMJ | character  | 20            | 17         | 85
    |              | word       | 12            | 10         | 83.3
6   | GoomtiMJ     | character  | 20            | 15         | 75
    |              | word       | 12            | 9          | 75
A lot of methods have been used for different automatic license plate recognition systems. In the review entitled “Automatic License Plate Recognition (ALPR): A State of the Art Review”, the authors compared the methods in terms of accuracy and processing speed, categorizing them according to the features they used for each stage of the system. The output of their research was summarized in 4 tables, shown below.
Table 2.9. Summary of the pros and cons of various plate extraction methods classified through their used features

Using boundary features: Rationale - the boundary of the license plate is rectangular. Pros - simplest, fast and straightforward. Cons - can hardly be applied to complex images since it is too sensitive to unwanted edges. References: [5][8]-[16]
Using global image features: Rationale - find a connected object whose dimension is like a license plate. Pros - straightforward; independent of the license plate position. Cons - may generate broken objects. References: [27][28][29][30]
Using texture features: Rationale - frequent color transitions on the license plate. Pros - able to detect even if the boundary is deformed. Cons - computationally complex when there are many edges. References: [31][39][40][41]
Using color features: Rationale - specific color on the license plate. Pros - able to detect inclined and deformed license plates. Cons - RGB is limited by illumination conditions; HLS is sensitive to noise. References: [50][51][52]
Using character features: Rationale - there must be characters on the license plate. Pros - robust to rotation. Cons - time consuming (processing all binary objects); produces detection errors when other text is in the image. References: [63][64]
Using two or more features: Rationale - combining features is more effective. Pros - more reliable. Cons - computationally complex. References: [70]-[72][74][81]
This table shows the pros and cons of different license plate extraction methods.
Table 2.10. Summary of the pros and cons of various character segmentation methods classified through their used features

Using pixel connectivity [12][30]: Pros - simple and straightforward; robust to license plate rotation. Cons - fails to extract all the characters when there are joined or broken characters.
Using projection profiles [21][24][51][101]: Pros - independent of character positions; able to deal with some rotation. Cons - noise affects the projection value; requires prior knowledge of the number of license plate characters.
Using prior knowledge of characters [6][14][105][106]: Pros - simple. Cons - limited by the prior knowledge; any change may result in errors.
Using character contours [107][108]: Pros - can get exact character boundaries. Cons - slow and may generate incomplete or distorted contours.
Using combined features [111][112]: Pros - more reliable. Cons - computationally complex.
This table, from the same source as the table above, shows the pros and cons of various character segmentation methods. The recognition rate of each method, processing time, qualification as real-time or not, etc., are shown in the remaining tables of the review. While a system is usually qualified as real-time when its processing time is less than or equal to 100 ms, this could be reconsidered depending on the use of the ALPR system. Most of these systems were used in traffic-related situations; that is why a system considered real-time has to be a very fast system.
Local Studies
Here in the Philippines, there are only a few studies conducted about license plate recognition systems. One of the first papers published in the country, a study on license plate recognition using artificial neural networks, was presented in the 2012 International Electrical and Electronics Engineering conference. The system recognizes plates through images captured from the entrance of a parking area. These images are processed to extract the license plates of any vehicle entering the parking area. Extracted plate images are then converted into numerical forms devised by the researchers to fit the requirements of the artificial neural network. Table 2.14 shows the equivalent character of the binary output of the artificial neural networks block.
Table 2.14. Equivalent character of the binary output of the artificial neural networks block
Binary Output | Equivalent Character | Binary Output | Equivalent Character
000001 A 010011 S
000010 B 010100 T
000011 C 010101 U
000100 D 010110 V
000101 E 010111 W
000110 F 011000 X
000111 G 011001 Y
001000 H 011010 Z
001001 I 011011 0
001010 J 011100 1
001011 K 011101 2
001100 L 011110 3
001101 M 011111 4
001110 N 100000 5
001111 O 100001 6
010000 P 100010 7
010001 Q 100011 8
010010 R 100100 9
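A small MATLAB sketch of how a thresholded 6-bit network output maps back to its character under this coding follows; the look-up string mirrors Table 2.14 (codes 1 to 26 give A to Z, codes 27 to 36 give 0 to 9).

    % Decode a 6-bit ANN output vector into its character per Table 2.14.
    lookup = ['A':'Z', '0':'9'];        % code 1..26 -> A..Z, 27..36 -> 0..9
    out    = [0 1 0 0 1 1];             % example thresholded output bits
    code   = sum(out .* 2.^(5:-1:0));   % MSB-first binary to decimal: 19
    ch     = lookup(code)               % 19th entry -> 'S'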
The system used in the study is a feed-forward neural network, trained using 5860 sets of training data, yielding a system with 0.0001645724% error.
Early in 2017, another study of a license plate recognition system was published. The proponents of the said study are from the University of Santo Tomas. Their study on Philippine license plate recognition was proposed by the following individuals: J.P Dalida, A.J Galiza, A.G Godoy, M. Nakaegawa, J.M. Vallesters and A. Dela Cruz. It focuses on adopting accurate and suitable algorithms for the following phases, namely, plate pre-processing, plate localization, character segmentation, and character recognition, in recognizing old and new license plates of both private and public vehicles and motorcycles in the Philippines. The proposed system was discussed
based on its structure and how it works. For the license plate pre-processing, they have observed that the output figures of the raw captures of license plates have significant loss of information in the plates due to shadow and uneven illumination. With this, an Improved Bernsen Algorithm was used because it properly solved the shadow and overexposure conflicts. For plate localization, this phase aims to isolate the plate area from the rest of the image. Connected Component Analysis is utilized to label regions and preserve the characters by removing the unwanted information. A correction step was then applied for tilt correction of the extracted plate. After the plate correction, the horizontal and
vertical projections were used to locate the characters and the boundaries of the
characters. Lastly, the type of Character recognition used in the system involves two
phases, namely, feature extraction using Dual-Tree Complex Wavelet Transform and
classification using Artificial Neural Networks. Based on their gathered data, the
system acquired 85.6667% accuracy for plate character detection, while for practical license plate recognition, or on actual tests, only 72.83% accuracy was gathered.
In summary, license plate recognition involves four main processes, which are (1) image acquisition, (2) license plate detection and extraction, (3) character segmentation, and (4) character recognition. Pre-processing methods were also used for the image that was obtained in order to have a higher success rate and faster recognition at the last part of the proposed system.
Multiple methods were already proposed and used by previous researchers, as stated in this chapter. However, different success rates and recognition times were obtained because of the varying illumination and environmental effects seen in the images, or primarily because of the
methods themselves that were used. All methods have their advantages and
disadvantages. A specific method must be chosen depending on how and where the
system is going to be used. Another factor that affects the results of artificial license
plate recognition systems is the visual graphics found in the background of the
characters in a license plate that varies depending on which country the license is
from. This is why the researchers focused on using the proposed method in Philippine
license plate, which used to have a Rizal monument in its background. However,
recently, the Philippines has changed the design of its national license plate, which now features a plain white background.
Only a few studies have been conducted for the license plate recognition here
in the Philippines and used different methods for each process. The proponents
wanted to improve the license plate recognition system established in the country by
considering some relevant factors that haven’t been considered in the previous local
studies such as the camera position, lighting condition and processing time in the
system.
In the first local study on license plate recognition through artificial neural networks, the authors didn’t include the set-up of how they would acquire the input image in their system. They took the input image manually and then put it into their system for the processing part. The same is true of the study conducted by the students of the University of Santo Tomas for Philippine license plate recognition: they also skipped the part of image acquisition and focused on how they would gather a higher success rate for the other stages of license plate recognition. In terms of the lighting conditions of the actual set-up, both studies also didn’t consider this factor since it is a part of image acquisition. Lastly, the processing time of the whole system wasn’t mentioned in the respective studies. Therefore, it implies that both studies didn’t emphasize the processing time. This study addresses the above-mentioned gaps in the local studies. With this study, creating a complete parking logging system, from image acquisition up to database logging, becomes possible. As well, it could help in constructing a reliable real-time license plate recognition system by considering the processing time of each stage. In Chapter 3, the methodology of the study is presented.
Chapter III
RESEARCH METHODOLOGY
This chapter presents the method of research employed in the study, and the statistical processes and tools used for analysis, interpretation and validation of the data to prove the feasibility of the system. All of these are for the purpose of producing a reliable and accurate recognition system.
Methods of Research
This study conducted by the researchers was considered quantitative. The goal of quantitative research is to determine the relationship between one thing and another within a population. Quantitative research focuses on numeric and unchanging data, and detailed, convergent reasoning rather than divergent reasoning. This approach was applied in this study in order to achieve promising results to answer the stated problems. Statistical treatment was then used to mathematically show the results of this study. This will help to analyze the results more accurately.
Instrumentation
Hardware
The list of the different components along with their usage and specifications used in developing the system is presented in this part. The system assembly and circuitry are also discussed.
• Materials Used:
-Serial Camera
The serial camera was used in the actual test to capture the vehicles entering the parking area. The module has a few features built in, such as automatic image adjustment and motion detection. The output format is a standard JPEG. The lens resolution is up to VGA 640x480 with a 30 fps frame speed. The operating voltage is 5 V DC. The SNR and dynamic range are 45 dB and 60 dB respectively. For communication, the camera is connected to the microcontroller through a serial interface.
The Gizduino Plus with ATMEGA644 and SD/MMC Card Shield is used to store the captured image into the SD Card and upload the image to the server for processing.
Figure 3.2. (a) Gizduino Plus with ATMEGA644 (b) SD/MMC Card Shield
-Triggering System
The microcontroller will accept input from any number of sources that will
measure the distance of the vehicle that will enter. The serial camera will be
automatically triggered when the sonar sensor detects an object 75 centimeters from
the boom.
-Servo Motor
The servo motor rotates the boom 90 degrees upon entry when the vehicle is recognized and registered in the database; otherwise, the servo will not rotate.
• Circuitry
The circuitry of the system basically consists of the MCU, SD/MMC card shield, serial camera, ultrasonic sensor, and servo motor.
Software
All mathematical models used in developing the system are presented in this section.
-MATLAB
-Arduino IDE
a. License Plate Extraction
Gray-scale image processing is used in order to extract the license plate location from the captured image.
b. Character Segmentation
To segment each of the characters in the license plate, binarization of the license plate image must be done first. To determine the threshold for Niblack’s binarization, equation 3.2 is used:
$$T(x,y) = m(x,y) + k\sqrt{\frac{\sum_i p_i^2}{NP} - m(x,y)^2} = m(x,y) + k\,\mathrm{std}(x,y) \qquad (3.2)$$
where NP represents the number of pixels in the gray image, and m and std denote the mean and standard deviation values, respectively, of the pixels pi.
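Equation 3.2 can be sketched in MATLAB with local means computed by convolution; the window size and the weight k below are typical Niblack choices and are assumptions here, not values fixed by the study.

    % Niblack local thresholding per Eq. 3.2: T = m + k*std over a local window.
    g  = double(imread('plate_gray.png'));  % hypothetical gray plate image
    k  = -0.2;                              % typical Niblack weight (assumption)
    w  = 15;                                % local window size (assumption)
    m  = conv2(g, ones(w)/w^2, 'same');     % local mean m(x,y)
    m2 = conv2(g.^2, ones(w)/w^2, 'same');  % local mean of squared pixels
    s  = sqrt(max(m2 - m.^2, 0));           % local standard deviation std(x,y)
    T  = m + k*s;                           % per-pixel threshold of Eq. 3.2
    bw = g > T;                             % foreground where pixel exceeds T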
c. Character Recognition
For character recognition, especially in training the neural network, each neuron uses a transfer function in order to determine its output based on its input. The hidden and output layers must have a transfer function. The logarithmic-sigmoidal transfer function is used in this system for both the hidden and output layers. This function takes an input valued between negative infinity and positive infinity and outputs a value between zero and one.
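The logarithmic-sigmoid transfer function referred to here is conventionally written as

$$f(n) = \mathrm{logsig}(n) = \frac{1}{1 + e^{-n}}$$

which maps any real input n to the open interval (0, 1).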
In order to determine the number of iterations that must be done for training the
neural network, a threshold of mean squared error must be set. This stopping criterion
is defined as:
$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - f(x_i)\right)^2 \qquad (3.4)$$
where yi is the i-th value of the variable to be predicted, xi is the i-th input value, and f(xi) is the predicted value for xi.
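A direct MATLAB transcription of Eq. 3.4, used as the stopping check, is given below with placeholder vectors standing in for the training targets and network outputs.

    % Mean squared error (Eq. 3.4) as the training stopping criterion.
    y    = [1 0 0 1 0];             % placeholder desired outputs y_i
    f    = [0.9 0.1 0.2 0.8 0.05];  % placeholder network outputs f(x_i)
    mse  = mean((y - f).^2);        % (1/n) * sum_i (y_i - f(x_i))^2
    stop = mse <= 1e-4;             % halt training once MSE reaches 0.0001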
A generalized and specified algorithm used in developing the system and its subsystems are presented in this section.
• General Procedure
The general procedures used in the study, Real-Time Vehicle Parking Logging
System with the use of Multi-Layered Artificial Neural Network, are stated in the figure
below:
➢ Image Acquisition
Figure 3.7 Scale model of the ALPR
For the scale model shown in Figure 3.7, one of the most relevant data of the
system will be provided by the serial camera. Again, the serial camera is interfaced
with the microcontroller, and the latter has a function to transmit the captured image
to the distant server in order to process the input data. Regarding image acquisition,
the distance and viewing angle of the serial camera from the vehicles will be one of
the biggest issues in the study. This is because some of the parameters in the designed
system are already fixed; for instance, the trained data of the neural network has an
assigned resolution, and if the input image does not meet the minimum or exceeds the
required resolution of the system, a higher percentage of error will occur.
As a result, the proponents considered the following scenarios to test the capability of
the designed system to respond to the input data, considering the distance and
viewing angle of the camera to the vehicles. The proponents also fixed the distance
of the camera to the vehicles and to the ground. Using a three-dimensional coordinate
system, the scenarios are as follows:
Scenario 1: X-axis position of the camera is varied while the Y and Z axis coordinates
are held constant.
Figure 3.8 displays the position of the camera in terms of the X-axis, given that the Y
and Z axis coordinates are held constant. The proponents aim to identify the effect of
changes in the resolution of the license plate region on the accuracy of the system.
Scenario 2: Y-axis and Z-axis are varied while the X-axis is held constant.
Figure 3.9 shows the variations in camera position along the other two axes; by doing
so, the effect of camera height and depth on the system can be observed.
Scenario 3: Captured image under varying light conditions with position at the center.
The camera, placed at the origin of the three varying axes, will be subjected to
varying light conditions. Light condition is measured through the Y signal of an image,
or the luminosity, which can be computed from the gray-scaled image. To obtain the
image's luminosity, the mean2 function will be used.
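A minimal sketch of this computation (the file name is a placeholder):

% Luminosity as the mean gray level (the Y signal) of a gray-scaled image.
grayImg = rgb2gray(imread('vehicle.jpg'));   % placeholder input image
luminosity = mean2(grayImg)                  % average of all pixel values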
For the second problem, with a set number of images, the gray-scaled image and the
processed image will be compared with one another with the use of the Edge Statistics
Method, specifically their horizontal and vertical histogram values. This is done to
obtain the necessary average difference between the original and processed images
that can completely segment the image. The process of edge statistics can be seen in
the figure below.
Lastly, the accuracy will be obtained by dividing the total number of successes by
the total number of trials. Processing time will be obtained with the use of the MATLAB
functions tic and toc; for the Arduino, the millis function is used. The sum of the
processing times of all methods applied will be the basis for considering the designed
system as real-time.
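A sketch of these two measurements (the counts are example values; on the Arduino side the analogous measurement uses millis()):

% Accuracy and per-stage timing sketch.
successes = 13; trials = 25;
accuracy = successes/trials*100;   % e.g. 52% for the 75 cm set-up

tic;                               % start MATLAB's stopwatch
pause(0.1);                        % stands in for one pipeline stage
elapsed = toc;                     % elapsed time in seconds for that stage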
• Specified Algorithms
For image acquisition, the first step is to obtain a value from the ultrasonic sonar
sensor. When the distance from the vehicle is 75 cm, the serial camera is triggered to
take an image of the vehicle. The image is then stored to an SD card inserted in the
SD/MMC shield. Lastly, the image is transferred to the server through serial processing.
The next part of the procedure is License Plate Extraction. The first step is to
convert the image from a colored image to a gray-scale image. Then, noise will be
removed with the use of non-linear filtering to smoothen out the image. Next is
dilation morphology to further sharpen the edges and to fill holes and gaps in the image.
Next, all of the pixels in both the x and y axes will be evaluated. Vertical edges and
horizontal edges will go through Vertical Edge Processing and Horizontal Edge
Processing, respectively.
After all of the pixels are processed and the total differences are obtained, a digital
low-pass filter will be used in order to avoid loss of important information in the image.
This is where the left and right sides of a histogram value are averaged to obtain the
new histogram value. Next, both histograms are passed through a band-pass filter to
filter out unnecessary histogram values. With the obtained histogram values, detecting
probable LP regions becomes possible, and using both horizontal and vertical data, the
license plate region can be extracted, as sketched below.
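A condensed sketch of the vertical-edge histogram and its smoothing, following the thresholds in the implementation (edge differences above 20 are accumulated, a 41-point moving average acts as the low-pass filter, and zeroing values below the mean acts as the band-pass stage); the image here is a synthetic placeholder for the dilated gray image:

b = 255*rand(160, 320);                          % placeholder dilated image
[rows, cols] = size(b);
vertical = zeros(1, rows);
for x = 2:rows
    s = 0;
    for y = 2:cols
        d = abs(b(x,y) - b(x,y-1));              % neighboring-pixel difference
        if d > 20, s = s + d; end                % keep only strong edges
    end
    vertical(x) = s;
end
smoothed = vertical;
for x = 21:(rows-21)
    smoothed(x) = mean(vertical(x-20:x+20));     % digital low-pass filter
end
smoothed(smoothed < mean(vertical)) = 0;         % filter out weak histogram rows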
After extracting the license plate region, the image is binarized, wherein the
license plate characters become Logic 1 and the background becomes Logic 0. The
Niblack method possesses advantages over global thresholding methods. The constants
k and b are set to -0.2 and 18, respectively, where k is a constant which determines
how much of the total print object edge is retained. The value of k and the size b of
the sliding window define the quality of binarization. Evaluating the image by a set size
per iteration, the mean and standard deviation are computed, since they are
necessary for computing the threshold value of each rectangle.
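A compact sketch of this sliding-window thresholding with the study's constants (k = -0.2, window size b = 18); the plate image is a synthetic placeholder, and dark characters on a light background are assumed:

a = 255*rand(40, 120);                       % placeholder gray-scale plate image
k = -0.2; b = 18;
[my, mx] = size(a);
bin = zeros(my, mx);
for x = (b/2+1):(mx-b/2-1)
    for y = (b/2+1):(my-b/2-1)
        win = a(y-b/2:y+b/2, x-b/2:x+b/2);   % local b-by-b neighborhood
        T = mean(win(:)) + k*std(win(:));    % Niblack threshold, eq. (3.2)
        bin(y,x) = a(y,x) < T;               % dark characters become Logic 1
    end
end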
c. Character Segmentation
To separate the characters from each other, connected component analysis
is used. 8-connectivity will be used for image evaluation, where each non-background
pixel is compared with its neighbors: if a neighbor is already labeled, the pixel takes
that label, and when two labels meet, the lesser value has higher priority. The possible
license plate characters are then obtained by taking the highest and lowest pixel
coordinates of both the rows and columns of each labeled segment, with the use of the
MATLAB function find. To filter out segments of non-character sizes, Pixel Ratio
Analysis will be used, wherein character regions are only saved when both the row and
column sizes pass the criteria; this is repeated until all segments are processed, as
sketched below.
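A sketch of this segmentation step, using the size criteria from the implementation (character blobs of 12-35 rows by 8-18 columns); the binary plate is a synthetic placeholder:

bin = rand(40, 120) > 0.7;                       % placeholder binary plate image
[L, Ne] = bwlabel(bin, 8);                       % 8-connectivity labeling
chars = {};
for n = 1:Ne
    [r, c] = find(L == n);                       % pixels of the n-th segment
    blob = bin(min(r):max(r), min(c):max(c));
    [h, w] = size(blob);
    if h >= 12 && h <= 35 && w >= 8 && w <= 18   % pixel-ratio criteria
        chars{end+1} = imresize(blob, [24 12]);  % normalized for the network
    end
end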
d. Character Recognition
➢ Network Training
The back-propagation algorithm teaches the network how to perform a given task.
The algorithm has two phases. First, a training input pattern is presented to the network
input layer. The network propagates the pattern from layer to layer until the output
pattern is generated by the output layer. Second, if this pattern is different from the
desired output, an error is calculated and propagated backwards from the output layer
to the input layer. The weights are modified as the error is propagated. These two
phases are repeated until the stopping criterion is satisfied.
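A minimal sketch of one such training epoch for the 288-60-36 log-sigmoid network used here (weights, input, and target below are random placeholders; the learning rate is the value from the training parameters):

logsig = @(n) 1./(1 + exp(-n));   % logarithmic-sigmoidal transfer function
eta = 0.001;                      % learning rate used in this study
W1 = 0.1*randn(60, 288);          % input-to-hidden weights (placeholder init)
W2 = 0.1*randn(36, 60);           % hidden-to-output weights (placeholder init)
X = double(rand(288,1) > 0.5);    % placeholder 24x12 character, reshaped 288x1
T = zeros(36,1); T(1) = 1;        % placeholder one-hot target

% Phase 1: propagate the input pattern forward, layer by layer.
A1 = logsig(W1*X);
A2 = logsig(W2*A1);

% Phase 2: propagate the error backwards and update the weights.
e  = T - A2;                      % output error
d2 = e .* A2 .* (1 - A2);         % output delta (logsig derivative)
d1 = (W2'*d2) .* A1 .* (1 - A1);  % hidden delta
W2 = W2 + eta*d2*A1';
W1 = W1 + eta*d1*X';
mse_now = mean(e.^2);             % compared against the 0.0001 goal each epoch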
➢ Feed-forward network
A feed-forward network consists of an input layer, at least one hidden layer of
computational neurons, and an output layer of computational neurons.
Data Mining
The last part of the processing is the data mining. After getting the
recognized characters from the sample license plate through the neural network, the
recognized characters are matched against the database of the system.
The database of the system consists of 40 samples of license plates as trained data.
All the alphanumeric characters gained from the sample plates are considered
as individual trained data of the system. Since the covered license plates are
composed of three letters and three numbers, the process of data mining starts at
the first three letters. By scanning the whole database, the system searches for all
possible matches of the first three letters, with the sequence of scanning going from
left to right. In the process, all unmatched characters are eliminated from the search,
and the most-matched trained data is displayed. The process continues until all
recognized characters from the subject test find their matched characters in the
database; otherwise, the vehicle is reported as not registered, as sketched below.
The data gathered from the MCU and after data processing will be statistically
treated so that the necessary analysis can be carried out throughout the analysis and
experimentation. The mean and standard deviation will be used to numerically analyze
the data.
The mean will be used to obtain the average processing time and average
accuracy of the designed process. The standard deviation will be used to
measure the stability of the designed system, to see whether there are significant
drops or spikes in performance:
σ = √( Σ( xᵢ − x̄ )² / n )        (3.6)
The accuracy of the system will be determined by dividing the total number of
successes by the total number of trials.
A set of inputs will be used upon conducting the simulation and scale model
testing of the designed system to gather relevant data addressing the problems
formulated in the study. For the mathematical model, Table 3.1 displays the parameters
used. For the scale model testing, the following parameters will be used in the
experiment:
Parameters                      Inputs
Type of message                 Image (JPEG Format)
Number of scenarios             3
Number of trials per scenario   5
Testing time per scenario       3 minutes
For the network training, the following parameters will be used in MATLAB:
Parameters                 Inputs
Mean square error goal     <= 0.0001
Number of hidden neurons   60
Maximum epochs             5,000,000
Learning rate              0.001
Training set sizes         36, 71, 100 and 240
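As a sketch (assuming the Neural Network Toolbox), these parameters map onto a 60-hidden-neuron feed-forward network as follows; the training function, and the X and T placeholders for the 288 x N character images and 36 x N one-hot targets, are assumptions:

net = feedforwardnet(60, 'traingd'); % gradient-descent training (assumed)
net.trainParam.goal   = 1e-4;        % mean square error goal
net.trainParam.epochs = 5000000;     % maximum epochs
net.trainParam.lr     = 0.001;       % learning rate
net.layers{1}.transferFcn = 'logsig';
net.layers{2}.transferFcn = 'logsig';
% [net, tr] = train(net, X, T);      % X: 288xN inputs, T: 36xN targets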
Chapter IV
This chapter presents the result and analysis of the proposed system in the
study. The input image was taken by the serial camera with a resolution of 320 X 240
pixels, and the system was implemented in MATLAB. Upon taking photos of 25 vehicles,
the following results were gathered.
A. Image Localization
(d) Dilated Image  (e) Horizontal Scan Image  (f) Vertical Scan Image
Images a-h show the step-by-step process of the license plate localization.
The camera was placed at the center of the car with a height of 60cm and a distance
of 75cm with respect to the vehicle. The result of the set-up shows significant success
rate for plate localization. Hence, the height of the camera to the ground was fixed at
60 cm in the succeeding tests.
With the distance varying and with constant height (60 cm) of the camera to the
vehicle, the following are the computed percentage successes of recognizing the
license plate region:
Table 4.1: Percentage Success of recognizing the license plate region varying the
distance of camera to the vehicle
Distance (cm)   Percentage success (%)
50              28
75              52
100             44
125             68
150             48
Based on the given results, the 125 cm distance from the vehicle has a higher
success rate of identifying the plate region than the others, while the 50 cm distance
has the least. However, the results from the 125 cm distance include not only the
license plate region but also the surroundings of the plate region itself. On the other
hand, the 75 cm distance seems more reliable than the 125 cm distance because all
the captured images contained the accurate plate region only. Figures 4.2 to 4.5 show
sample results of this set-up; in some cases, another region was localized instead of
the plate region (see Figures 4.6 and 4.7). This is because those regions have a higher
difference between the magnitudes of neighboring pixels.
- X-axis Variation
Setting the camera at the center of the vehicle with a fixed distance of 150 cm from
it, the camera was moved horizontally by 40 cm and by 80 cm from the center. The
percentage success of recognizing the license plate region while varying the horizontal
distance is shown in Table 4.2.
Table 4.2: Percentage Success of recognizing the license plate region varying the
horizontal distance of camera to the vehicle
Horizontal distance (cm)   Percentage success (%)
40                         48
80                         36
The 40 cm horizontal distance from the center shows a higher success rate of
recognizing the plate region than the 80 cm horizontal distance. Nevertheless, more
than half of the recognitions are failures. This is because the same problem as in the
first set-up (varying the distance of the camera to the vehicle) was experienced, which
is that some other regions have a higher difference between the magnitudes of
neighboring pixels. The image results of the 40 cm and 80 cm horizontal distances are
shown in the following figures.
Since the majority of reliable results came from the 75 cm distance, the proponents
utilized the 75 cm distance in determining the range of luminosity values that gives a
high probability of successful localization. The results are shown in Table 4.3.
Table 4.3: Luminosity of images captured at 75 cm and the localization result

Vehicle   Luminosity   Result
1         122.1627     Success
2         127.7624     Success
3         137.2689     Success
4         121.9346     Success
5         138.917      Success
6         140.556      Success
7         123.229      Success
8         132.7774     Success
9         154.0718     Success
10        139.5105     Success
11        133.1734     Success
12        141.6663     Success
13        119.0135     Success
14        136.2395     Success
15        160.9698     Success
16        39.53302     Fail
17        71.99651     Fail
18        47.36117     Fail
19        14.72848     Fail
20        128.9205     Success
21        138.2576     Success
22        131.9526     Success
23        123.9636     Success
24        120.9085     Success
25        120.5023     Success
Based on Table 4.3, the range of luminosity values that shows a higher probability
of localizing the license plate region was between 110 and 140.
B. Character Segmentation
After the localization of the plate region, the next step is character segmentation.
The following results were gathered.
Figures 4.14 and 4.15 show the success and failure of segmentation
processing in the system. One of the factors that affects the result of character
segmentation is the physical condition of the license plate. If the license plate is in
poor condition, it is likely that the results for character segmentation will have a lower
success rate. Most of the failures in segmented characters have this issue. Observing
the results, either the segmented character is incomplete or the character is distorted.
These characters may be covered in dirt or displaced from their position. However,
these results are not considered failures yet. After the process of connected component
analysis, the results of this method undergo data mining, which reprocesses the failed
results and can afterwards still successfully match the characters.
The proponents trained four different sets of training data. Each set has a different
number of character samples. The x-axis in the graph pertains to the denomination
of the characters (where the range of 0 to 9 represents the numbers, while the range
of 10 to 36 represents the letters). On the other hand, the y-axis represents the
frequency of the character itself. The following figures show the characters used to
train the data set and the four different sets of training data.
Each set of training data, namely 36, 71, 100, and 240 characters, was tested
in order to determine the accuracy of recognition. Each training set was tested against
240 characters. Based on the given results, the first data set recognized 121 out of
240 characters, equivalent to 50.42%. The second data set recognized 165 out of 240
characters, equivalent to 68.75%. The third data set recognized 185 out of 240
characters, equivalent to 77.08%. Lastly, the fourth data set recognized 237 out of 240
characters, equivalent to 98.75%.
Among the four sets of training data, the 240-character set shows the highest
success rate in recognizing characters. The weights generated in this network training
are used in the recognition stage of the system. Figure 4.22 shows the actual training
curve of the mean square error, where the goal was set to 0.0001.
C. Character Recognition
After segmenting the characters of the license plate, character recognition is
performed. The tests consist of two data sets - old and new plates. The first data set
consists of purely new plates, while the second data set consists of a combination of
old and new plates.
For the first set of data, which is purely new plates, the system identified 43
images out of 50 images, earning an 86% success rate. On the other hand, when the 82
old plates were combined with the 50 new plates, the system accumulated a
percentage success rate of 38%. Based on the given results for the combination of
plates, the system was able to recognize only 7 of the 82 old plates (roughly 9%) added
to the system. The results for the second set of data drastically decrease compared to
the first.
The system scans the image per character, and those adjacent to it, to see if it
matches license plates in the database; a counter exists wherein, if it exceeds a certain
number, the license plate is removed from the list and the system moves to the next
candidate. Out of the 50 new plates, 43 vehicles were correctly identified. However,
out of these 43 correctly identified vehicles, only 7 vehicles had every character
correctly recognized. The remaining 36 vehicles had minimal errors in character
recognition. Fortunately, the data mining stage was highly successful. Shown below
are figures of a few vehicles which were correctly identified despite such errors.
The license plate was still successfully identified since most of the characters
were successfully recognized, so when the image was scanned, the plate was not
eliminated from the candidate list. The reason why some characters are classified
wrongly is that the distinguishing features of some similar characters are not present
or are very minimal, diminished enough that the artificial neural network cannot
identify them. For example, the small line at the lower right of the letter "Q", or a serif,
is nearly indistinguishable. Another factor is the frequency of characters in training:
there are only 3 Q images trained compared to several 0's.
In the table shown below, the proponents summarized the number of successful
recognitions made by the system over the total number of occurrences of the
characters concerned.
Character   Recognized correctly   Recognized wrongly
5           P(5|5) = 14/15         P(S|5) = 1/15
Z           P(Z|Z) = 0/3           P(2|Z) = 3/3
2           P(2|2) = 18/18         P(Z|2) = 0/18
The first column indicates the characters that tend to be classified wrongly by the
system, and the succeeding columns indicate the corresponding recognized
characters. Shown below are a few examples of the failed recognitions with these
characters. Despite these failed recognitions, the system was still able to identify the
vehicle through data mining.
In order to determine the localization of the plate region, the proponents used the
vertical and horizontal differences of the neighboring pixels. The following results were
gathered:
Vehicle No.   Vertical Difference   Horizontal Difference
1 1.7813 4.4875
2 1.6125 1.6313
3 0 0
4 0 10.5188
5 0 11.1125
6 0 20.9563
7 1.9813 3.9594
8 1.1625 4.7031
9 0.4938 14.0906
10 0 13.5625
11 0 0.4313
12 2.6 0
13 0 5.6594
14 0.025 15
15 0.0063 10.9969
16 0 8.4281
17 0 12.7625
18 0.2875 3.525
19 0.8125 1.2906
20 0.5938 3.0063
21 0 3.35
22 0 0.0219
23 0 0
24 0 5.3531
25 0 7.1531
Average 0.4542 6.4800
Based on the given results, a plate can be localized if the average vertical and
horizontal differences of the neighboring pixels are around 0.4542 and 6.4800,
respectively.
The results are displayed in a GUI. This is the final outcome of the system,
displaying all the results of pre-processing and recognition.
As seen in the table above, the average total processing time for the software
processes is 0.763622 seconds, while the average total time with interfacing added is
around 2.56709 seconds, which can be considered nearly real-time, with a standard
deviation of 0.1533, which means there is little variation in the system's processing
time. It also implies that there is no overshoot or sudden spike in the processing time.
Chapter V
This chapter discusses the summary of the findings of the study, the conclusions
drawn, and the corresponding recommendations.
Summary of Research
This study aimed to design a system which can translate and recognize images
into readable text to be utilized in parking applications. To produce the system, the
study was subdivided into four sections, namely image localization, image
segmentation, image recognition, and data mining. Furthermore, machine learning
was utilized in the study, giving the system an adaptive method of translating the
image into text.
Findings
A. License Plate Resolution
The license plate resolution was varied by adjusting the distance of the vehicle
from the camera, which results in the LP image having a different number of pixels or
details. The proponents used the range of 50-150 cm with a step size of 25 cm. In
Table 4.1 the accuracy rates of localization were shown, with 125 cm having the
highest accuracy. However, as observed by the proponents, images that were not
segmented properly had objects at both sides, which may cause problems in the
recognition subsection.
B. Viewing Angle
Viewing angle was varied by changing the position of the camera horizontally.
Using the center of the license plate as reference, the camera was adjusted 40cm
and 80cm to the left and right of the center. Table 4.2 showed that as the camera
lens goes farther from the center of the license plate, the accuracy of localization
decreases.
C. Luminosity
Since old models of license plates, which use non-reflective plates, are still being
used, the luminosity of the captured images was also evaluated. As shown in Table 4.3,
most license plates that possess a luminosity of 110 to 140 were localized successfully.
D. Edge Difference
Most localized plates had vertical differences within the range of 0-2, with an
average of 0.4542, and an average horizontal difference of 6.48.
E. Processing Time
On average, the system took 2.56709 seconds to finish all computations, as seen in
Table 4.7. According to the same table, localization of the license plate took most of
the elapsed time while data mining took the least. However, if the system is to be
applied on a larger scale, data mining might have the biggest impact on the
processing time since the database would be significantly larger. For accuracy, the
system can perform very well with new plates with around 86% accuracy, but only
38% with old plates.
Conclusion
1. A. In conclusion, varying the distance of the camera from the license plate had a
significant effect on the accuracy of the system. It can also be concluded that 75 cm
and 125 cm are the only acceptable distances of the camera. However, due to other
objects being included in the image when the camera is significantly far from the
plate, 75 cm is the more reliable distance.
B. Placing the camera farther from the center of the license plate would have a
negative effect on the localization accuracy.
2. In conclusion, the average vertical difference and horizontal difference for localized
plates are 0.4542 and 6.48, respectively.
3. The average processing time is 0.763622 seconds for the software processes,
which include image localization, image recognition and data mining. Additionally, the
average total time with interfacing added is around 2.56709 seconds. The system can
perform more accurately if the license plate is the new model. Unfortunately, when
the license plate is the old model, particularly the one with the Rizal Shrine in the
middle, the system performs poorly.
Recommendations
Based on the preceding findings and the corresponding conclusions drawn, the
following recommendations are presented:
1. The researchers used a 240x320 resolution in order to achieve a real-time system
that is fit for the processor used. Future researchers may use cameras with a higher
resolution than the one used by the proponents without greatly affecting the
processing time of the system, provided the hardware specifications are also improved,
which will lead to higher accuracy.
2. The proponents' neural network method is not embedded in the localization and
segmentation processes but only in the recognition part. It is suggested that the
machine learning method also be extended to those stages.
3. The proponents used a personal computer to simulate the data and to complete
the system. However, it may be possible for future researchers to implement the
system on an embedded platform, such as a microcomputer or Raspberry Pi.
4. For this study, the proponents used 240 characters for the training data set.
Though it produced a high success rate, this data set may still be increased to
further improve the recognition accuracy.
5. The researchers obtained a lower success rate when recognizing old license
plates than new ones. One of the main reasons for this is the presence of the Rizal
statue in the center part of the old license plate, which was being localized and
recognized as a character. A possible solution is to train the system using a segmented
image of the Rizal statue so that, when recognized, it can be discarded.
6. This study is limited only to stationary vehicles. One way to improve the reliability
of the system would be to let the system recognize license plates of vehicles
in motion.
Appendix I
Computations
I. Percentage Success of recognizing the license plate region varying the distance
of camera to the vehicle:
Accuracy = (Success trials / Total trials) x 100

At 50 cm:  Accuracy = (7/25) x 100 = 28%
At 75 cm:  Accuracy = (13/25) x 100 = 52%
At 100 cm: Accuracy = (11/25) x 100 = 44%
At 125 cm: Accuracy = (17/25) x 100 = 68%
At 150 cm: Accuracy = (12/25) x 100 = 48%
II. Percentage Success of recognizing the license plate region varying the
horizontal distance of camera to the vehicle:
Accuracy = (Success trials / Total trials) x 100

At 40 cm: Accuracy = (12/25) x 100 = 48%
At 80 cm: Accuracy = (9/25) x 100 = 36%
III. Percentage success of character recognition for each training set:

Accuracy = (Success trials / Total trials) x 100

a) The first data set trained 36 characters. When tested, it recognized 121 out of
240 characters.
   Accuracy = (121/240) x 100 = 50.42%

b) The second data set trained 71 characters. When tested, it recognized 165 out
of 240 characters.
   Accuracy = (165/240) x 100 = 68.75%

c) The third data set trained 100 characters. When tested, it recognized 185 out
of 240 characters.
   Accuracy = (185/240) x 100 = 77.08%

d) The fourth data set trained 240 characters. When tested, it recognized 237 out
of 240 characters.
   Accuracy = (237/240) x 100 = 98.75%
IV. Percentage success of license plate recognition:

Accuracy = (Success trials / Total trials) x 100

a) For the new plates: Accuracy = (43/50) x 100 = 86%
b) For the old plates: Accuracy = (7/82) x 100 = 9%
c) For all plates:     Accuracy = (50/132) x 100 = 37.88%
V. Average vertical and horizontal difference of neighboring pixels:

a) For Vertical Difference:
   mean = (Sum of Vertical Differences) / (Total number of vehicles)
        = 11.3565 / 25 = 0.45426

b) For Horizontal Difference:
   mean = (Sum of Horizontal Differences) / (Total number of vehicles)
        = 162 / 25 = 6.4800
VI. Average processing time per stage (15 trials):

a) Interfacing Time:
   mean = 27.0525 / 15 = 1.80350
b) Localization Time:
   mean = 6.13152 / 15 = 0.408768
c) Recognition Time:
   mean = 4.9131 / 15 = 0.32754
d) Data Mining Time:
   mean = 0.06141 / 15 = 0.004094
e) Arduino Servo Time:
   mean = 0.348315 / 15 = 0.023221
Appendix II
Source Codes:
Arduino Code:
#include <SPI.h>
#include <arduino.h>
#include <SD.h>
#define PIC_PKT_LEN 128 // data length of each read; don't set this too big because RAM is limited
#define PIC_FMT_VGA 7
#define PIC_FMT_CIF 5
#define PIC_FMT_OCIF 3
#define CAM_ADDR 0
#define CAM_SERIAL Serial
#define PIC_FMT PIC_FMT_CIF // 320x240 capture, matching the image size reported in Chapter IV (assumed)
File myFile;
// The declarations below were lost in extraction and are reconstructed so the
// sketch compiles; the actual pin numbers used by the proponents may differ.
const int utrigger = 6;              // ultrasonic trigger pin (assumed)
const int uecho = 7;                 // ultrasonic echo pin (assumed)
const int buttonPin = 2;             // pushbutton pin (assumed)
const byte cameraAddr = (CAM_ADDR << 5);
long duration, inches, cm;           // ultrasonic measurement variables
unsigned long picTotalLen = 0;       // total JPEG length reported by the camera
char picName[] = "pic00.jpg";        // file name on the SD card (assumed)
/*********************************************************************/
void setup()
{
Serial.begin(115200);
pinMode(utrigger, OUTPUT);
pinMode(uecho, INPUT);
pinMode(buttonPin, INPUT); // initialize the pushbutton pin as an input
digitalWrite(buttonPin, HIGH);
Serial.println("Initializing SD card....");
pinMode(10,OUTPUT); // CS pin of SD Card Shield
if (!SD.begin(10))
{
Serial.print("initialization failed");
return;
}
Serial.println("initialization done.");
initialize();
}
/*********************************************************************/
void loop()
{
int n = 0;
while(1){
Serial.println("\r\nPress the button to take a picture");
digitalWrite(utrigger, LOW);
delayMicroseconds(2);
digitalWrite(utrigger, HIGH); //the Ultrasonic sensor will transmit 40
kHz ultrasonic burst
delayMicroseconds(10);
digitalWrite(utrigger, LOW);
duration = pulseIn(uecho, HIGH); //getting the time of echo or the
reflected ultrasonic burst
inches = microsecondsToInches(duration); // converting duration to Inches
cm = microsecondsToCentimeters(duration); // result is converted to
Centimeter
Serial.println(cm);
if (cm <= 75) {              // vehicle within the 75 cm trigger distance
delay(20);                   // debounce: settle before re-checking
if (cm <= 75)
{
Serial.println("\r\nCOPYING JPG TO CARD, PLS WAIT...");
delay(200);
if (n == 0) preCapture();
Capture();
GetData();
}
Serial.print("\r\nTaking pictures success ,number : ");
Serial.println(n);
n++ ;
}
}
}
/*********************************************************************/
// Reconstructed helpers (originals lost in extraction): convert the echo
// pulse width to distance; 74 us/inch and 29 us/cm round-trip constants.
long microsecondsToInches(long microseconds) { return microseconds / 74 / 2; }
long microsecondsToCentimeters(long microseconds) { return microseconds / 29 / 2; }
/*********************************************************************/
void clearRxBuf()
{
while (Serial.available())
{
Serial.read();
}
}
/*********************************************************************/
void sendCmd(char cmd[], int cmd_len)
{
for (char i = 0; i < cmd_len; i++) Serial.print(cmd[i]);
}
/*********************************************************************/
void initialize()
{
char cmd[] = {0xaa,0x0d|cameraAddr,0x00,0x00,0x00,0x00} ;
unsigned char resp[6];
Serial.setTimeout(500);
while (1)
{
//clearRxBuf();
sendCmd(cmd,6);
if (Serial.readBytes((char *)resp, 6) != 6)
{
continue;
}
if (resp[0] == 0xaa && resp[1] == (0x0e | cameraAddr) && resp[2] == 0x0d &&
resp[4] == 0 && resp[5] == 0)
{
if (Serial.readBytes((char *)resp, 6) != 6) continue;
if (resp[0] == 0xaa && resp[1] == (0x0d | cameraAddr) && resp[2] == 0 &&
resp[3] == 0 && resp[4] == 0 && resp[5] == 0) break;
}
}
cmd[1] = 0x0e | cameraAddr;
cmd[2] = 0x0d;
sendCmd(cmd, 6);
Serial.println("\nCamera Ready.");
}
/*********************************************************************/
void preCapture()
{
char cmd[] = { 0xaa, 0x01 | cameraAddr, 0x00, 0x07, 0x00, PIC_FMT };
unsigned char resp[6];
Serial.setTimeout(100);
while (1)
{
clearRxBuf();
sendCmd(cmd, 6);
if (Serial.readBytes((char *)resp, 6) != 6) continue;
if (resp[0] == 0xaa && resp[1] == (0x0e | cameraAddr) && resp[2] == 0x01 &&
resp[4] == 0 && resp[5] == 0) break;
}
}
void Capture()
{
char cmd[] = { 0xaa, 0x06 | cameraAddr, 0x08, PIC_PKT_LEN & 0xff,
(PIC_PKT_LEN>>8) & 0xff ,0};
unsigned char resp[6];
Serial.setTimeout(100);
while (1)
{
clearRxBuf();
sendCmd(cmd, 6);
if (Serial.readBytes((char *)resp, 6) != 6) continue;
if (resp[0] == 0xaa && resp[1] == (0x0e | cameraAddr) && resp[2] == 0x06 &&
resp[4] == 0 && resp[5] == 0) break;
}
cmd[1] = 0x05 | cameraAddr;
cmd[2] = 0;
cmd[3] = 0;
cmd[4] = 0;
cmd[5] = 0;
while (1)
{
clearRxBuf();
sendCmd(cmd, 6);
// Reconstructed: read the DATA reply carrying the JPEG length (the original
// lines were lost in extraction; format follows the standard Grove
// serial-camera example).
if (Serial.readBytes((char *)resp, 6) != 6) continue;
if (resp[0] == 0xaa && resp[1] == (0x0a | cameraAddr) && resp[2] == 0x01)
{
picTotalLen = (resp[3]) | (resp[4] << 8) | ((unsigned long)resp[5] << 16);
break;
}
}
}
/*********************************************************************/
void GetData()
{
// Reconstructed locals (lost in extraction): the per-packet ACK command and
// the packet buffer.
char cmd[] = { 0xaa, 0x0e | cameraAddr, 0x00, 0x00, 0x00, 0x00 };
unsigned char pkt[PIC_PKT_LEN];
unsigned int pktCnt = (picTotalLen) / (PIC_PKT_LEN - 6);
if ((picTotalLen % (PIC_PKT_LEN-6)) != 0) pktCnt += 1;
if (SD.exists(picName))
{
SD.remove(picName);
}
int retry_cnt = 0;
retry:
delay(10);
clearRxBuf();
sendCmd(cmd, 6);
uint16_t cnt = Serial.readBytes((char *)pkt, PIC_PKT_LEN);
// ... the remainder of GetData() (per-packet checksum verification, writing
// the payload to the SD card, and the final ACK) was lost in extraction.
}
Processing Code:
import processing.serial.*;
Serial myPort;
OutputStream output;
void setup() {
size(320, 240);
//println( Serial.list() );
myPort = new Serial( this, Serial.list()[0], 115200);
myPort.clear();
output = createOutput("pic06.jpg");
}
void draw() {
try {
while ( myPort.available () > 0 ) {
output.write(myPort.read());
}
}
catch (IOException e) {
e.printStackTrace();
}
}
void keyPressed() {
try {
output.flush(); // Writes the remaining data to the file
output.close(); // Finishes the file
}
catch (IOException e) {
e.printStackTrace();
}
}
Matlab Code:
if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
delete(instrfind({'Port'},{'COM7'}));
clear s;
global s;
s=serial('COM7','BAUD', 9600); % Make sure the baud rate and COM port is
% same as in Arduino IDE
fopen(s);
GUIcounter=0;
while 1
n=num2str(GUIcounter);
if GUIcounter>=10
A=[num2str(n),'.JPG'];
A=['PIC' A];
else
A=[num2str(n),'.JPG'];
A=['PIC0' A];
end
checker=exist(A, 'file');
if checker==2
a=ones(240,320);
axes(handles.axes1);
imshow(a);
a=ones(240,320);
axes(handles.axes2);
imshow(a);
a=ones(240,320);
axes(handles.axes3);
imshow(a);
a=ones(24,12);
axes(handles.axes4);
imshow(a);
a=ones(24,12);
axes(handles.axes5);
imshow(a);
a=ones(24,12);
axes(handles.axes6);
imshow(a);
a=ones(24,12);
axes(handles.axes7);
imshow(a);
a=ones(24,12);
axes(handles.axes8);
imshow(a);
a=ones(24,12);
axes(handles.axes9);
imshow(a);
a=ones(24,12);
axes(handles.axes10);
imshow(a);
a=ones(24,12);
axes(handles.axes11);
imshow(a);
%% Initialize variables.
% (The CSV-import boilerplate generated by MATLAB's Import Tool was truncated
% in extraction; it reads the authorized-plate list into per-plate variables.)
filename = 'C:\Users\Vince Harrold\Desktop\standalone\authorized cars.csv';
delimiter = '';
set(handles.listbox2,'String',UIK633);
handles.UIK633=UIK633;
load('legit12.mat')
%%Localize
a=imread(A);
axes(handles.axes1);
imshow(a);
drawnow;
GUIcounter=GUIcounter+1;
a= 0.2989 * a(:,:,1) + 0.5870 * a(:,:,2) + 0.1140 * a(:,:,3);
a=medfilt2(a);
[row,col]=size(a);
a=a((row/3)+1:row,1:col);
[row,col]=size(a);
b=a;
for (x=1:row)
for(y=2:col-1)
temp=max(a(x,y-1),a(x,y));
b(x,y)=max(temp,a(x,y+1));
end
end
difference=0;
totalsum=0;
difference=uint32(difference);
maximum=0;
maxvertical=0;
for x=2:row
sum=0;
for y=2:col
if(b(x,y)>b(x,y-1))
difference=uint32(b(x,y)-b(x,y-1));
end
if(b(x,y)<=b(x,y-1))
difference=uint32(b(x,y-1)-b(x,y));
end
if(difference >20)
sum=sum+difference;
end
end
vertical(x)=sum;
if(sum>maximum)
maxvertical=x;
maximum=sum;
end
totalsum=totalsum+sum;
end
average=totalsum/row;
sum=0;
newvertical=vertical;
for x=21:(row-21)
sum=0;
for y=(x-20):(x+20)
sum=sum+vertical(y);
end
newvertical(x)=sum/41;
end
for x=1:row
if(newvertical(x)<(average))
newvertical(x)=0;
for y=1:col
b(x,y)=0;
end
end
end
sum=0;
totalsum=0;
difference=0;
difference=uint32(difference);
maxhorizontal=0;
maximum=0;
for x=2:col
sum=0;
for y=2:row
if(b(y,x) > b(y-1,x))
difference=uint32(b(y,x)-b(y-1,x));
else
difference=uint32(b(y-1,x)-b(y,x));
end
if(difference >20)
sum=sum+difference;
end
end
horizontal(x)=sum;
if(sum>maximum)
maxhorizontal=x;
maximum=sum;
end
totalsum=totalsum+sum;
end
average=totalsum/col;
sum=0;
newhorizontal=horizontal;
for x=21:(col-21)
sum=0;
for y=(x-20):(x+20)
sum=sum+horizontal(y);
end
newhorizontal(x)=sum/41;
end
for x=1:col
if(newhorizontal(x)<average)
newhorizontal(x)=0;
for y=1:row
b(y,x)=0;
end
end
end
y=1;
for x=2:col-2
if(newhorizontal(x)~= 0 && newhorizontal(x-1) == 0 && newhorizontal(x+1) ==0)
column(y)=x;
column(y+1)=x;
y=y+2;
elseif((newhorizontal(x)~=0 && newhorizontal(x-1) ==0) || (newhorizontal(x)~=0
&& newhorizontal(x+1) ==0))
column(y)=x;
y=y+1;
end
end
y=1;
for x=2:row-2
if(newvertical(x)~= 0 && newvertical(x-1) == 0 && newvertical(x+1)==0)
row(y)=x;
row(y+1)=x;
y=y+2;
elseif((newvertical(x)~=0 && newvertical(x-1)==0) || (newvertical(x) ~= 0 &&
newvertical(x+1) ==0))
row(y)=x;
y=y+1;
end
end
[temp columnsize]=size(column);
if(mod(columnsize,2))
column(columnsize+1)=col;
end
[temp rowsize]=size(row);
if(mod(rowsize,2))
row(rowsize+1)=size(b,1); % fix: 'row' is shadowed by the boundary array, so use the image height
end
for x=1:2:rowsize
for y=1:2:columnsize
if(~((maxhorizontal >=column(y) && maxhorizontal <= column(y+1)) &&
(maxvertical >=row(x) && maxvertical<=row(x+1))))
for m=row(x):row(x+1)
for n=column(y):column(y+1)
b(m,n)=0;
end
end
end
end
end
c=b;
[row,col]=size(c);
for x=1:row
for y=1:col
if c(x,y)==0
a(x,y)=0;
else
a(x,y)=a(x,y);
end
end
end
a( ~any(a,2), : ) = [];
a( :, ~any(a,1) ) = [];
axes(handles.axes2);
imshow(a);
drawnow;
%%Binarization
k=-0.2;
b=18;
[my,mx]=size(a);
thresh=zeros(my,mx);
result=zeros(my,mx);
for x=(b/2+1):(mx-b/2-1)
for y=(b/2+1):(my-b/2-1)
% (reconstructed) Niblack threshold over a local b-by-b window; the
% original inner loop was lost in extraction
win = double(a(y-b/2:y+b/2, x-b/2:x+b/2));
thresh(y,x) = mean(win(:)) + k*std(win(:));
result(y,x) = double(a(y,x)) < thresh(y,x); % dark characters become logic 1
end
end
d = result; % binarized plate image used for labeling
[L Ne]=bwlabel(d);
propied=regionprops(L,'BoundingBox');
for n=1:size(propied,1)
rectangle('Position',propied(n).BoundingBox,'EdgeColor','g','LineWidth',2);
end
m=1;
handles.hAxes=[handles.axes4, handles.axes5, handles.axes6, handles.axes7,
handles.axes8, handles.axes9, handles.axes10, handles.axes11];
image=0;
for n=1:Ne
[r,c] = find(L==n);
n1=d(min(r):max(r),min(c):max(c));
[a,b]=size(n1);
if a>=12 && a<=35
if b>=8 && b<=18
axes(handles.hAxes(m));
imshow(n1);
drawnow;
m=m+1;
image=image+1;
newn1=imresize(n1,[24 12]);
newn1=reshape(newn1,[288 1]);
n11=W1*newn1;
A11=logsig(n11);
n21=W2*A11;
A21=(logsig(n21));
threshold=max(A21);
for c=1:36
if A21(c)==threshold
A21(c)=1;
else
A21(c)=0;
end
end
ad=1;
for n=1:36
if A21(n)==0
ad=ad+1;
else
break
end
end
switch ad
case 1
plate(image)='0';
case 2
plate(image)='1';
case 3
plate(image)='2';
case 4
plate(image)='3';
case 5
plate(image)='4';
case 6
plate(image)='5';
case 7
plate(image)='6';
case 8
plate(image)='7';
case 9
plate(image)='8';
case 10
plate(image)='9';
case 11
plate(image)='A';
case 12
plate(image)='B';
case 13
plate(image)='C';
case 14
plate(image)='D';
case 15
plate(image)='E';
case 16
plate(image)='F';
case 17
plate(image)='G';
case 18
plate(image)='H';
case 19
plate(image)='I';
case 20
plate(image)='J';
case 21
plate(image)='K';
case 22
plate(image)='L';
case 23
plate(image)='M';
case 24
plate(image)='N';
case 25
plate(image)='O';
case 26
plate(image)='P';
case 27
plate(image)='Q';
case 28
plate(image)='R';
case 29
plate(image)='S';
case 30
plate(image)='T';
case 31
plate(image)='U';
case 32
plate(image)='V';
case 33
plate(image)='W';
case 34
plate(image)='X';
case 35
plate(image)='Y';
case 36
plate(image)='Z';
end
else
continue
end
else
continue
end
end
plate;
set(handles.edit1,'String',plate);
%% Initialize variables.
filename = 'C:\Users\Vince Harrold\Desktop\standalone\Datadata.csv';
delimiter = ',';
% (reconstructed) the file-open and format string below are assumptions
% standing in for Import Tool boilerplate lost in extraction. If an error
% occurs for a different file, try regenerating the code from the Import Tool.
fileID = fopen(filename,'r');
formatSpec = '%s%s%s%s%[^\n\r]';
dataArray = textscan(fileID, formatSpec, 'Delimiter', delimiter, 'ReturnOnError',
false);
fclose(fileID);
%%Data Mining
celldata = cellstr(UIK633);
chr = char(celldata);
size(chr)
BB=cellstr(ToyotaHiace);CC=cellstr(ThirdyRavena);DD=cellstr(CUTTINGANOVER
TAKENVEHICLE);
BBB=char(BB);CCC=char(CC);DDD=char(DD);
output=plate;[aa,bb]=size(output);[cc,dd]=size(chr);
E=[chr BBB CCC DDD];
if length(output)<7
for n=1:7-length(output)
output=[output ' '];
end
end
n=1;d=length(chr);errorcounter=0;le2=0;
while 1
le=length(chr);
if d>1
counter=0;
for b=2:6
if output(b-1)==chr(n,b) || output(b)==chr(n,b)
else
counter=counter+1;
end
end
if counter >=2
chr(n,:)=[];BBB(n,:)=[];CCC(n,:)=[];DDD(n,:)=[];
n=n-1;
end
else
break
end
[d,col]=size(chr);
n=n+1;
errorcounter=errorcounter+1;
le2=length(chr);
if le==le2
[erow,ecol]=size(chr);
for y=1:erow
counter2=0;
for x=2:ecol-1
if output(x-1)==chr(y,x) || output(x)==chr(y,x)
counter2=counter2+1;
end
end
counter3(y)=counter2;
end
[mag,ind]=max(counter3);
chr=chr(ind,:);BBB=BBB(ind,:);CCC=CCC(ind,:);DDD=DDD(ind,:);
break
end
end
counter=0;
for b=2:6
if output(b-1)==chr(1,b) || output(b)==chr(1,b) || output(b+1)==chr(1,b)
else
counter=counter+1;
end
end
if counter >=2
chr(1,:)=[];BBB(1,:)=[];CCC(1,:)=[];DDD(1,:)=[];
end
% (reconstructed) the branch below was lost in extraction: if a candidate
% plate survived the elimination, display it and grant access
if ~isempty(chr)
set(handles.listbox1,'String',chr(1,:));
drawnow;
access=1; %% 1 or 0 depend if registered or not
fprintf(s,access);
else
d='Vehicle is not registered'
set(handles.listbox1,'String',d);
drawnow;
access=0; %% 1 or 0 depend if registered or not
fprintf(s,access); %This command will send entered value to Arduino
end
else
GUIcounter=GUIcounter;
end
A=[];
end
% --- Outputs from this function are returned to the command line.
function varargout = standalonegui_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
Histogram Code:
for aa=1:30
n=num2str(aa);
I=imread([num2str(n),'.jpg']);
GrayI = 0.2989 * I(:,:,1) + 0.5870 * I(:,:,2) + 0.1140 * I(:,:,3);
NMedFilteredI=medfilt2(GrayI);
[row,col]=size(NMedFilteredI);
NMedFilteredI=NMedFilteredI((row/3)+1:row,1:col);
GrayI=GrayI((row/3)+1:row,1:col);
[row,col]=size(NMedFilteredI);
DilatedI=NMedFilteredI;
for (x=1:row)
for(y=2:col-1)
temp=max(NMedFilteredI(x,y-1),NMedFilteredI(x,y));
DilatedI(x,y)=max(temp,NMedFilteredI(x,y+1));
end
end
difference=0;
totalsum=0;
difference=uint32(difference);
maximum=0;
maxvertical=0;
for x=2:row
sum1=0;
for y=2:col
if(DilatedI(x,y)>DilatedI(x,y-1))
difference=uint32(DilatedI(x,y)-DilatedI(x,y-1));
end
if(DilatedI(x,y)<=DilatedI(x,y-1))
difference=uint32(DilatedI(x,y-1)-DilatedI(x,y));
end
if(difference >20)
sum1=sum1+difference;
end
end
vertical(x)=sum1;
if(sum1>maximum)
maxvertical=x;
maximum=sum1;
end
totalsum=totalsum+sum1;
end
average=totalsum/row;
sum1=0;
newvertical=vertical;
for x=21:(row-21)
sum1=0;
for y=(x-20):(x+20)
sum1=sum1+vertical(y);
end
newvertical(x)=sum1/41;
end
for x=1:row
if(newvertical(x)<(average))
newvertical(x)=0;
for y=1:col
DilatedI(x,y)=0;
end
end
end
sum1=0;
totalsum=0;
difference=0;
difference=uint32(difference);
maxhorizontal=0;
maximum=0;
for x=2:col
sum1=0;
for y=2:row
if(DilatedI(y,x) > DilatedI(y-1,x))
difference=uint32(DilatedI(y,x)-DilatedI(y-1,x));
else
difference=uint32(DilatedI(y-1,x)-DilatedI(y,x));
end
if(difference >20)
sum1=sum1+difference;
end
end
horizontal(x)=sum1;
if(sum1>maximum)
maxhorizontal=x;
maximum=sum1;
end
totalsum=totalsum+sum1;
end
average=totalsum/col;
sum1=0;
newhorizontal=horizontal;
for x=21:(col-21)
sum1=0;
for y=(x-20):(x+20)
sum1=sum1+horizontal(y);
end
newhorizontal(x)=sum1/41;
end
for x=1:col
if(newhorizontal(x)<average)
newhorizontal(x)=0;
for y=1:row
DilatedI(y,x)=0;
end
end
end
y=1;
for x=2:col-2
if(newhorizontal(x)~= 0 && newhorizontal(x-1) == 0 && newhorizontal(x+1) ==0)
column(y)=x;
column(y+1)=x;
y=y+2;
elseif((newhorizontal(x)~=0 && newhorizontal(x-1) ==0) || (newhorizontal(x)~=0
&& newhorizontal(x+1) ==0))
column(y)=x;
y=y+1;
end
end
y=1;
for x=2:row-2
if(newvertical(x)~= 0 && newvertical(x-1) == 0 && newvertical(x+1)==0)
row(y)=x;
row(y+1)=x;
y=y+2;
elseif((newvertical(x)~=0 && newvertical(x-1)==0) || (newvertical(x) ~= 0 &&
newvertical(x+1) ==0))
row(y)=x;
y=y+1;
end
end
[temp columnsize]=size(column);
if(mod(columnsize,2))
column(columnsize+1)=col;
end
[temp rowsize]=size(row);
if(mod(rowsize,2))
row(rowsize+1)=size(DilatedI,1); % fix: 'row' is shadowed by the boundary array, so use the image height
end
for x=1:2:rowsize
for y=1:2:columnsize
if(~((maxhorizontal >=column(y) && maxhorizontal <= column(y+1)) &&
(maxvertical >=row(x) && maxvertical<=row(x+1))))
for m=row(x):row(x+1)
for n=column(y):column(y+1)
DilatedI(m,n)=0;
GrayI(m,n)=0;
end
end
end
end
end
LPI=DilatedI;[rowrow,colcol]=size(LPI);
GrayI( ~any(LPI,2), : ) = [];
GrayI( :, ~any(LPI,1) ) = [];
LPI( ~any(LPI,2), : ) = [];
LPI( :, ~any(LPI,1) ) = [];
difference=0;
totalsum1=0;
difference=uint32(difference);
maximum=0;
maxvertical=0;
[row,col]=size(LPI);
for x=2:row
sum1=0;
for y=2:col
if(LPI(x,y)>LPI(x,y-1))
difference=uint32(LPI(x,y)-LPI(x,y-1));
end
if(LPI(x,y)<=LPI(x,y-1))
difference=uint32(LPI(x,y-1)-LPI(x,y));
end
if(difference >20)
sum1=sum1+difference;
end
end
vertical(x)=sum1;
if(sum1>maximum)
maxvertical=x;
maximum=sum1;
end
totalsum1=totalsum1+sum1;
end
average=totalsum1/row;
sum1=0;
newvertical=vertical;
for x=21:(row-21)
sum1=0;
for y=(x-20):(x+20)
sum1=sum1+vertical(y);
end
newvertical(x)=sum1/41;
end
for x=1:row
if(newvertical(x)<(average))
newvertical(x)=0;
for y=1:col
LPI(x,y)=0;
end
end
end
sum1=0;
totalsum1=0;
difference=0;
difference=uint32(difference);
maxhorizontal=0;
maximum=0;
for x=2:col
sum1=0;
for y=2:row
if(LPI(y,x) > LPI(y-1,x))
difference=uint32(LPI(y,x)-LPI(y-1,x));
else
difference=uint32(LPI(y-1,x)-LPI(y,x));
end
if(difference >20)
sum1=sum1+difference;
end
end
horizontal(x)=sum1;
if(sum1>maximum)
maxhorizontal=x;
maximum=sum1;
end
totalsum1=totalsum1+sum1;
end
vertLPI=vertical;horzLPI=horizontal;
difference=0;
totalsum1=0;
difference=uint32(difference);
maximum=0;
maxvertical=0;
for x=2:row
sum1=0;
for y=2:col
if(GrayI(x,y)>GrayI(x,y-1))
difference=uint32(GrayI(x,y)-GrayI(x,y-1));
end
if(GrayI(x,y)<=GrayI(x,y-1))
difference=uint32(GrayI(x,y-1)-GrayI(x,y));
end
if(difference >20)
sum1=sum1+difference;
end
end
vertical(x)=sum1;
if(sum1>maximum)
maxvertical=x;
maximum=sum1;
end
totalsum1=totalsum1+sum1;
end
average=totalsum1/row;
sum1=0;
newvertical=vertical;
for x=21:(row-21)
sum1=0;
for y=(x-20):(x+20)
sum1=sum1+vertical(y);
end
newvertical(x)=sum1/41;
end
for x=1:row
if(newvertical(x)<(average))
newvertical(x)=0;
for y=1:col
GrayI(x,y)=0;
end
end
end
sum1=0;
totalsum1=0;
difference=0;
difference=uint32(difference);
maxhorizontal=0;
maximum=0;
for x=2:col
sum1=0;
for y=2:row
if(GrayI(y,x) > GrayI(y-1,x))
difference=uint32(GrayI(y,x)-GrayI(y-1,x));
else
difference=uint32(GrayI(y-1,x)-GrayI(y,x));
end
if(difference >20)
sum1=sum1+difference;
end
end
horizontal(x)=sum1;
if(sum1>maximum)
maxhorizontal=x;
maximum=sum1;
end
totalsum1=totalsum1+sum1;
end
vertGray=vertical;horzGray=horizontal;
differenceV(aa)=(sum(vertLPI-vertGray))/length(vertGray)
differenceH(aa)=(sum(horzLPI-horzGray))/length(horzGray)
end
differenceV=differenceV'
differenceH=differenceH'
Luminosity:
[row,col]=size(image);
X=sum(sum(image)); % fix: was 'sumsum', which is not a MATLAB function
X=X/(row*col);     %% X = luminosity (equivalent to mean2(image))
GANTT CHART
COSTING
TOTAL Php2430.00
OBJECTIVES:

PLM - MICROCONTROLLERS AND ROBOTICS SOCIETY (MICROS)
Board of Director (2016-Present)
Secretary (2016-Present)

PROJECTS:
Construction of Chebyshev Type I Band-Stop Digital Filter using Bilinear
Transformation for an Audio Input Signal with a 44100 Hz Sampling Frequency
(October 2016)
- Created a code for a digital filter using MATLAB and verified the waveform
  of different input audio signals using Audacity.
SUMOBOT (October 2016)
- Programmed a sumobot in Arduino using a Gizduino microcontroller.
Estimation of Resistance for a Specific Temperature using Direct Method of
Polynomial Interpolation (March 2017)
- Obtained temperature and resistance data using an LM35 sensor and a
  thermistor with Arduino, and simulated the designed system in MATLAB.

OTHER SKILLS:
Soft skills: able to listen, accepts feedback, adaptable, attentive, communication

SEMINARS ATTENDED:
Microcontrollers and Robotics Society (MICROS) Summer Camp
April 16 - May 21, 2016, Pamantasan ng Lungsod ng Maynila

HOBBIES:
Reading pocket books, playing computer games, watching TV series,
listening to music, Internet surfing

I hereby certify that the above information is true and correct to the best
of my knowledge and belief.

Vince Harrold A. Altura
BONIFACIO S. MACABANTI
2117 P.Gil Street, Sta. Ana, Manila
bsmacabanti@gmail.com
554-04-34/09363941327
Looking for a job where I can learn in a professional working environment to
acquire significant knowledge and advance my skills. In addition, I am willing to
impart all the knowledge and skills that I have learned in my academics to achieve
exceptional progress.
Qualifications
• Capable of writing clearly and concisely and is able to speak effectively. An attentive listener and
provides appropriate feedback. Works well with others but can be independent.
• Has knowledge in designing and assembling basic circuits.
• Has knowledge in Turbo C++, Dev C++, AutoCAD, Multisim, Arduino IDE, MATLAB, Fritzing and
basic Cisco Packet Tracer programs.
• Proficient with Microsoft Office applications such as Word, Excel, PowerPoint, and Publisher.
Education
Bachelor of Science in Electronics and Communication Engineering 2012 – 2018
Pamantasan ng Lungsod ng Maynila
Intramuros, Manila
Experience
Seminar Attended
Encounter Weekend (2014)
Sampaguita Christian Church, Sta. Ana, Manila
Thesis Project
• Real-time Vehicle Parking Logging System With The Use Of Multi-layered Artificial Neural Network
Character Reference
Engr. James M. Miedes Engr. Carlos C. Sison, DEM, PECE
IT Officer, Oriental Inc. Faculty, Pamantasan ng Lungsod ng Maynila
#09178412521 #09995212011
RICARDO, FRANCIS NICKO D.
Objective: An Electronics Engineering student applying to obtain work and experience in the field to
apply and further develop the skills that I have acquired.
Educational Background:
Tertiary:
Bachelor of Electronics Engineering (BSEcE)
College of Engineering and Technology
Pamantasan ng Lungsod ng Maynila
2013-Present
Secondary:
Immaculate Conception Academy of Manila
2212 S. del Rosario Street Gagalangin, Tondo, Manila
2009-2013
Primary:
Immaculate Conception Academy of Manila
2212 S. del Rosario Street Gagalangin, Tondo, Manila
2003-2009
Seminars:
Microcontrollers and Robotics Society Summer Camp (2016)
Pamantasan ng Lungsod ng Maynila
April 16-May 21, 2016
Technical Skills
PCB designing
PCB etching
Microsoft Office
Programming Skills
Basic C++
MatLAB
R-Programming
Arduino IDE
Projects:
Microcontroller and Robotics Society MiniCamp 2016
Lecturer, Pamantasan ng Lungsod ng Maynila
Reference:
Engr. Carlos Sison, DEM, PECE
Faculty, Electronics Engineering Department
Pamantasan ng Lungsod ng Maynila
Contact No: 09995212011
I hereby certify that all of the above information is true and correct to the best of my knowledge and belief.
EDUCATION
COLLEGE
Pamantasan ng Lungsod ng Maynila (2013 - Present)
SECONDARY
Caloocan High School (Engineering and Science Education Program) (2009 –
2013)
ELEMENTARY
Maypajo Integrated School (2003 – 2009)
Graduated as Class Valedictorian