
PAMANTASAN NG LUNGSOD NG MAYNILA

Philippine Copyright 2018


by
Vince Harrold A. Altura
Bonifacio S. Macabanti
Francis Nicko D. Ricardo
Micah Ella R. Uy
Pamantasan ng Lungsod ng Maynila

All rights reserved. Portions of this manuscript may be reproduced with proper
referencing and due acknowledgement of the authors.
PAMANTASAN NG LUNGSOD NG MAYNILA

Real-Time Vehicle Parking Logging System with the use of Multi-Layered Artificial
Neural Network

A Thesis
Presented to the Faculty of the College of Engineering and Technology
Pamantasan ng Lungsod ng Maynila
Intramuros, Manila

In Partial Fulfillment of the Requirements for the Degree
Bachelor of Science in Electronics Engineering (BSECE)

Proponents:

Altura, Vince Harrold A.


Macabanti, Bonifacio S.
Ricardo, Francis Nicko D.
Uy, Micah Ella R.

Adviser:

Engr. Reynaldo Ted L. Peñas II

March 2018

CERTIFICATION
This thesis entitled REAL-TIME VEHICLE PARKING LOGGING SYSTEM
WITH THE USE OF MULTI-LAYERED ARTIFICIAL NEURAL NETWORK prepared
and submitted by VINCE HARROLD A. ALTURA, BONIFACIO S. MACABANTI,
FRANCIS NICKO D. RICARDO and MICAH ELLA R. UY in partial fulfillment of the
requirements for the degree BACHELOR OF SCIENCE IN ELECTRONICS
ENGINEERING has been examined and recommended for Oral Examination.

Evaluation Committee

REYNALDO TED L. PEÑAS II, MScEngg


Adviser

JEREMEH M. DELA CRUZ CHARLES G. JUARIZO, PECE


Member Member

APPROVAL

Approved by the Panel on Oral Examination on February 23, 2018 with the grade
of ________.

CHARLES G. JUARIZO, PECE


Chairman

JEREMEH M. DELA CRUZ CARLOS C. SISON, DEM, PECE


Member Member

Accepted in partial fulfillment of the requirements for the degree Bachelor of


Science in Electronics Engineering.

CHARLES G. JUARIZO, PECE


Chairperson, ECE Department

CERTIFICATION OF ORIGINALITY

This is to certify that the research work presented in this thesis entitled REAL-TIME VEHICLE PARKING LOGGING SYSTEM WITH THE USE OF MULTI-LAYERED ARTIFICIAL NEURAL NETWORK for the degree Bachelor of Science in Electronics Engineering at Pamantasan ng Lungsod ng Maynila embodies the result of original and scholarly work carried out by the undersigned. This thesis does not contain words or ideas taken from published sources or written works that have been accepted as basis for the award of a degree from any higher education institution, except where proper referencing and acknowledgement were made.

Vince Harrold A. Altura Bonifacio S. Macabanti


Researcher Researcher

Francis Nicko D. Ricardo Micah Ella R. Uy


Researcher Researcher

March 16, 2018



ACKNOWLEDGEMENTS

The researchers would like to acknowledge the following people, who were instrumental in completing the study.

Their thesis adviser, Engr. Reynaldo Ted L. Peñas II, for his valuable technical guidance, and for accommodating each and every need of the researchers during the entire course of their Thesis Design program.

The course adviser, Engr. Jeremeh Dela Cruz, for providing encouragement, enthusiasm, and support for the researchers' aim to produce competitive research in the program.

The panel members for all defenses, Engr. Jeremeh Dela Cruz and Dr. Carlos Sison, headed by Engr. Charles Juarizo, for providing constructive criticism of the study, which enabled the researchers to further improve the research.

Their classmates, who provided unending enthusiasm and support during the entire course.

Their research team of third-year students, Nyna Dominic D. Chan and Camille Ann C. Dizon, for helping the team during oral presentations.

The families of all the researchers, who provided all the needs that enabled the researchers to complete their study.

Most especially the Almighty Lord Jesus, the creator and source of guidance, wisdom, knowledge, understanding, and blessings. Thank you.



ABSTRACT

Due to the growing need for surveillance and license plate identification for security purposes in the country, a high-performance Philippine license plate recognition system (LPRS) is proposed in this work. This study aims to perform automatic license plate recognition without human intervention and to evaluate its overall processing time and accuracy. To this end, the proponents proposed suitable algorithms for the four main stages after image acquisition, namely license plate preprocessing, plate localization, character segmentation, and character recognition, in recognizing old and new license plates of private vehicles in a specific parking lot. Preprocessing is based on the median filter, a non-linear filter, and dilation morphology of the gray-scaled image is used for enhancement. Plate localization is carried out based on edge processing, where vertical and horizontal histograms are calculated. To avoid loss of important information in the image, a digital low-pass filter is applied; both histograms are then passed through a band-pass filter to remove unnecessary histogram values. Localized plates are converted into a binary image for character segmentation. Characters are separated from each other using connected component analysis and are prepared for the character recognition stage by a thinning process. At the character recognition stage, a three-layer feedforward artificial neural network using a backpropagation learning algorithm is constructed and the characters are determined. For network training, a mean square error of 0.0001 is set as the stopping criterion. Upon recognizing the plate, the system checks in the database whether the vehicle is registered or not. A scale model of a parking lot was built in order to evaluate the proposed system. The overall accuracy of the system was found to be 86%, with an average processing time of about 2.57 seconds. However, the proposed system has some limitations, which will be studied in the future in order to make the techniques applicable in less restrictive working conditions.

Keywords - license plate recognition, preprocessing, plate detection, character segmentation, character recognition, median filter, dilation, histogram, connected component analysis, thinning process, artificial neural network.



TABLE OF CONTENTS

Page
TITLE PAGE ............................................................................................................. i
CERTIFICATE AND APPROVAL SHEET ................................................................ ii
CERTIFICATION OF ORIGINALITY ....................................................................... iii
ACKNOWLEDGEMENTS........................................................................................ iv
ABSTRACT ............................................................................................................ vi
TABLE OF CONTENTS ......................................................................................... viii
LIST OF FIGURES .................................................................................................. x
LIST OF TABLES ................................................................................................... xv

CHAPTER 1: THE PROBLEM AND ITS BACKGROUND


Introduction ....................................................................................................... 1
Theoretical Framework .....................................................................................5
Conceptual Framework .................................................................................... 7
Statement of the Problem ............................................................... 8
Objectives ......................................................................................................... 9
Significance of the Study ............................................................................... 10
Scope and Limitations of the Study ............................................................... 11
Definition of Terms ......................................................................................... 12

CHAPTER 2: REVIEW OF RELATED LITERATURE AND STUDIES


Foreign Literature and Studies ....................................................................... 15
Local Studies ................................................................................................. 50
Synthesis of Related Literature and Studies to the Research ........................ 52

CHAPTER 3: RESEARCH METHODOLOGY


Methods of Research ..................................................................................... 55
Instrumentation ...............................................................................................56
Hardware ............................................................................................. 56
Software............................................................................................... 59
Data Gathering Procedure .............................................................................. 61
Specified Algorithms ....................................................................................... 65
Data mining .................................................................................................... 79
Statistical Treatment of Data .......................................................................... 79
Table of Inputs Subjected to the Experimentation .......................................... 80

CHAPTER 4: PRESENTATION, ANALYSIS AND INTERPRETATION OF DATA


Image Localization .......................................................................................... 82
Character Segmentation ................................................................................. 90
Data Set Training............................................................................................ 92
Character Recognition .................................................................................... 96
Data mining .................................................................................................... 98
Localization of the Plate region..................................................................... 103
Graphic User Interface (GUI) ........................................................................ 104
Time Elapsed for System Processing ........................................................... 105

CHAPTER 5: SUMMARY OF FINDINGS, CONCLUSION, AND


RECOMMENDATIONS
Summary of Research .................................................................................. 106
Findings ....................................................................................................... 106
Conclusion .................................................................................................... 108
Recommendations........................................................................................ 109

Bibliography ..........................................................................................................110
Appendices
Appendix I: Computations............................................................................. 116
Appendix II: Source Codes ........................................................................... 121
Gantt chart ............................................................................................................153
Costing ................................................................................................................. 154
About the Proponents ...........................................................................................155
Proofread Certification ..........................................................................................153

LIST OF FIGURES

Figure Title Page

1.1 A sample set-up of License Plate Recognition .................................... 3

1.2 Functional block diagram of a License Plate Recognition system . . . 5

1.3 The Conceptual Framework of the study .............................................7

2.1 APNR system ...................................................................................... 17

2.2 (a) Gray image (b) edge detected (c) localized plate region ................ 21

2.3 Steps for Image Processing ................................................................. 22

2.4 Binarized Image of a License plate ...................................................... 23

2.5 (a) Original Gray-processed image (b) Contrast extended


image (c) Median-filtered image........................................................... 24

2.6 (a) Original Gray-processed image (b) Binary image


produced by Niblack’s algorithm (c) Efficient implementation
of Niblack’s algorithm ........................................................................... 25

2.7 Comparison of the three binarization methods.................................... 26

2.8 Horizontal and Vertical tilting............................................................... 28

2.9 Blob Extraction process ...................................................................... 32

2.10 Process of labeling pixels through four-connectivity and
eight-connectivity ................................................................ 33

2.11 Vertical and Horizontal Coordinate . . . . . . . . . . . . . . . . . . . . . . . . . . 33

2.12 Hetero-associative neural network . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

2.13 Feed Forward Neural Network approach . . . . . . . . . . . . . . . . . . . . . . 38

2.14 Relationship of the number of iterations to the MSE . . . . . . . . . . . . . 40

2.15 First set with 3450 character images recognition rate . . . . . . . . . . . . 42

2.16 Second set with 6071 character images recognition rate . . . . . . . . . 42

2.17 Third set with 8699 character images recognition rate . . . . . . . . . . . 43

3.1 TTL Serial Camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

3.2 Gizduino Plus with ATMEGA644 . . . . . . . . . . . . .. . . . . . . . . . . . . . . 57

3.3 Ultrasonic Sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

3.4 Circuit of the system’s hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

3.5 General Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

3.6 Scale model of the ALPR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

3.7 X-axis position of the camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

3.8 (a) Y-axis position (b) Z-axis position . . . . . . . . . . . . . . . . . . . . . . . . 63

3.9 Flowchart of Image Capturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

3.10a Flowchart of License Plate Extraction . . . . . . . . . . . . . . . . . . . . . . . . 67

3.10b Flowchart of License Plate Extraction . . . . . . . . . . . . . . . . . . . . . . . . 68



3.11 Flowchart of Niblack’s Method of binarization ...................................... 70

3.12 a Flowchart of Character Segmentation.................................................. 72

3.12 b Flowchart of Character Segmentation.................................................. 73

3.13 a Flowchart of Backpropagation Algorithm ............................................ 75

3.13 b Flowchart of Backpropagation Algorithm ............................................ 76

3.14 Flowchart of Feedforward Neural Network ........................................... 77

4.1 Pre-processing image results for localization ....................................... 82

4.2 Images captured at 125 cm distance ...................................................84

4.3 Detected license plate region at 125cm distance ................................. 84

4.4 Images captured at 75 cm distance .....................................................85

4.5 Detected license plate region at 75 cm distance ................................. 85

4.6 Images captured at 75 cm distance ..................................................... 85

4.7 Detected plate region at 75 cm distance .............................................. 85

4.8 Images captured at 40cm horizontal distance ..................................... 87

4.9 Successful Detection of plate region at 40cm horizontal distance . . . 87

4.10 Images captured at 40cm horizontal distance (a & b), Failure Detection
of plate region at 40cm horizontal distance (c & d) .............................. 87

4.11 Images captured at 80cm horizontal distance ...................................... 88

4.12 Successful Detection of plate region at 80cm horizontal distance . . . 88

4.13 Images captured at 80cm horizontal distance (Left), Failure Detection of


plate region at 80cm horizontal distance (Right) .................................. 88

4.14 Successful Segmented Characters of the license plate ....................... 90

4.15 Failure Segmented Characters of the license plate ............................. 91

4.16 The Characters used to train the data sets .......................................... 92

4.17 The first data set contains 36 characters ............................................. 93

4.18 The second data set contains 71 characters....................................... 93

4.19 The third data set contains 100 characters .......................................... 94

4.20 The fourth data set contains 240 characters ........................................ 94

4.21 Accuracy of the set of training data ...................................................... 95

4.22 Actual Training Data of MSE (MSE vs Iterations) ................................ 96

4.23 Successful Plate Localization, Character Segmentation and


Recognition, and Vehicle Identification ................................................ 97

4.24 Successful Plate Localization, Character Segmentation and


Recognition, and Vehicle Identification ................................................ 98

4.25 Successful Vehicle Identification with Error in Recognition .................. 99

4.26 Successful Vehicle Identification with Error in Recognition .................. 99



4.27 Comparison of serif in letter “Q” ......................................................... 100

4.28 Failed Recognition of “7” as “1”.......................................................... 101

4.29 Failed Recognition of “Q” as “O” ........................................................ 102

4.30 Failed Recognition of “Z” as “2” ......................................................... 102

4.31 Graphic User Interface of the whole system ...................................... 104



LIST OF TABLES

Table Title Page

2.1 Performance of the LPR System in Literature ...................................... 19

2.2 Summary of Horizontal Tilt Correction results ...................................... 29

2.3 Summary of Vertical Tilt Correction results ......................................... 30

2.4 Summary of the results of five methods tested ................................... 30

2.5 Various multi-layered neural network topologies ............................... 36

2.6 Different font styles of printed text and the results ............................... 39

2.7 Summary of results of the recognition rates of this


proposed system ................................................................................. 43

2.8 Experimental results of the application of this system for


character and word recognition ............................................................ 45

2.9 Summary of the pros and cons of various plate extraction


methods classified through their used features .................................... 46

2.10 Summary of the pros and cons of various character segmentation


methods classified through their used features .................................... 46

2.11 Summary of the pros and cons of various character recognition


methods classified through their used features .................................... 47

2.12 Summary of the overall performance of different ALPR systems ........ 48



2.13 (continuation of Table 2.12) Summary of the overall performance
of different ALPR systems .................................................... 49

2.14 Equivalent character of the binary output of the artificial


neural networks block ......................................................................... 50

3.1 Input parameters for mathematical model simulation .......................... 79

3.2 Programmed parameters in the Arduino .............................................. 80

4.1 Percentage Success of recognizing the license plate region, varying the
distance of the camera to the vehicle ................................................ 83

4.2 Percentage Success of recognizing the license plate region, varying the
horizontal distance of the camera to the vehicle .................................. 86

4.3 Luminosity Value at 75 cm distance ..................................................... 89

4.4 Vehicle Identification Success Rate ..................................................... 97

4.5 Misrecognized Character ...................................................................100

4.6 Vertical and Horizontal Difference of 25 vehicles ............................... 103

4.7 Summary of Elapsed Time ................................................................. 105



Chapter I

THE PROBLEM AND ITS BACKGROUND

Introduction

The advancement of technology has been exceptionally fast in the 21st century. With electronic technology and machines being produced and improved all the time, everything is becoming convenient and accessible for every human being. No doubt that even in transportation, innovations have already made their way.

As the automotive industry moves toward autonomous driving and smart roads, the conflicts encountered involving vehicles continue to increase. These conflicts include ineffective enforcement of parking management systems, security control of restricted areas, stolen vehicle verification, and traffic flow control on some national roads.

An October 2016 online article written by Anne Marie pointed out another form of the hit-and-run issue. She said that "if you hit a parked car and leave the scene without making an effort to contact the owner of the car, it can also be considered a hit-and-run". This scenario mostly occurs in parking areas where cars are left unattended. In such situations, it is difficult for the vehicle owner to trace the culprit of the incident. Though there may be CCTV cameras installed in nearby areas that captured the incident, recognizing the identity of the culprit remains a major task for security personnel. One effective way to address this kind of conflict is by identifying the vehicle's license plate.


License plate identification plays an important role in recognizing information about one's vehicle. Though it can be done by manual human observation, there is a probability of mistakes due to reading errors or failing eyesight, caused by blurred images or lack of concentration. For this reason, automatic license plate recognition is introduced.

An automatic license plate recognition system is an image processing system, which lies under the computer vision field (Thanin, 2008). It has been a special area of interest due to its many applications, such as its potential to be used by police forces around the world for law enforcement purposes, recovery of stolen cars, and electronic toll collection on pay-per-use roads.

Automatic license plate recognition systems commonly follow one of two basic approaches. In the first, the process is performed entirely at the designated lane location in real time, while in the second, the process is performed at a remote computer location to which the images from many lanes are transmitted (Habineza, 2016). When the process is done at the lane site, the whole process is completed in approximately 250 milliseconds. This includes capturing and identifying the license plate characters, date and time, lane identification, and any other required information. If further processing is necessary, this information is transmitted to a remote server to finish the process; otherwise, the information can be stored at the lane for later retrieval. Furthermore, handling high workloads typically demands a large number of PCs in a server farm. Often in such systems, higher-bandwidth transmission media are necessary to forward images to the remote server (Mudra et al., 2015).

Figure 1.1. A sample set-up of License Plate Recognition

The recognition software is generally composed of four main stages. The first stage is image acquisition. At this stage, the image of a vehicle's front or rear is captured by the camera. However, some parameters of the camera have to be considered, such as the type of camera, camera resolution, shutter speed, orientation, and light. The second stage is license plate detection. In this stage, the system determines whether the input image contains the region of a license plate and finds its location in that picture. The extraction of the license plate from the image is based on features such as the boundary, the color, or the existence of the characters. The third stage is character segmentation. In this stage, the system takes the image of a plate and separates the characters from each other. The last stage of the process is character recognition. This stage recognizes the extracted characters by template matching or by using classifiers, such as neural networks, SVMs, and fuzzy classifiers (Azad et al., 2014). Every stage of license plate recognition plays a significant role in the overall success rate: the higher the accuracy of each of these stages, the higher the success rate that can be obtained. A minimal sketch of this four-stage flow is given below.
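To make the flow of these four stages concrete, the following is a minimal MATLAB sketch of such a pipeline. The stage functions locatePlate, segmentCharacters, and recognizeCharacter are hypothetical placeholders for the algorithms detailed in Chapter 3, not the proponents' actual source code.

    % Hypothetical four-stage LPR pipeline; stage functions are placeholders.
    img  = imread('vehicle.jpg');             % Stage 1: image acquisition
    gray = rgb2gray(img);                     % reduce the capture to grayscale

    plateRegion = locatePlate(gray);          % Stage 2: license plate detection
    chars = segmentCharacters(plateRegion);   % Stage 3: character segmentation

    plateText = '';                           % Stage 4: character recognition
    for k = 1:numel(chars)
        plateText(end+1) = recognizeCharacter(chars{k}); %#ok<SAGROW>
    end
    fprintf('Recognized plate: %s\n', plateText);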

For the past two decades, many techniques have been implemented in license plate recognition, and each stage has its own specific methods that have achieved significant results. For license plate detection, four major classes of processes are applied to the input image: (1) binary image processing, (2) gray-level processing, (3) color processing, and (4) classifiers. Among the four, binary image processing shows the highest reliability with a relatively fast extraction rate compared to the other methods (Anagnostopoulos et al., 2008). In character segmentation, vertical and horizontal projection is the most commonly used method, but connected component analysis (CCA) appears more accurate in separating characters than the projections. The Hidden Markov Model and template matching are said to be effective techniques for the character recognition phase, but the artificial neural network is more efficient and faster than either.

Current methods of automatic license plate recognition work according to the guiding parameters of a specific country's traffic norms and standards. In the Philippines, there is no prior study on automatic license plate recognition systems based on international and national patent websites (Dalida et al., 2016). With that, the proponents desired to conduct a research study on an automatic license plate recognition system with higher accuracy in the mentioned stages. The proponents also aim to develop a real-time automatic license plate recognition system that takes less processing time, uses low computing power, and has better recognition rates under fewer restrictions using ANNs and various image processing techniques.

Theoretical Framework

According to Ying Wen and M. von Deneen in their study entitled "An Algorithm for License Plate Recognition Applied to Intelligent Transportation System", automatic license plate recognition is a form of Intelligent Transportation System (ITS). It first uses a series of image manipulation techniques to detect, normalize, and enhance the image of the number plate, and then optical character recognition (OCR) to extract the alphanumerics of the license plate. Figure 1.2 shows the functional block diagram of a license plate recognition system.

[Figure 1.2 block diagram: serial camera → microcontroller unit (Arduino Plus with ATMEGA644) → server (image pre-processing, morphological operations, edge statistics, character segmentation, character recognition, data mining) → graphic user interface displaying the recognized characters as text.]

Figure 1.2. Functional block diagram of a License Plate Recognition system


The serial camera is used to capture the front/rear view of the vehicle where the license plate is located. The captured image is then transmitted by the microcontroller that is interfaced with the serial camera. For the image processing in the server, if the license plate can be detected exactly, then character segmentation and character recognition can yield twice the result with half the effort. However, the various luminance conditions of the environment need to be considered. With that, an image pre-processing approach was used because it has the advantage of being simple and thus faster. The process involves RGB-to-grayscale conversion, median filtering, and dilation. These operations improve the image quality for edge detection; a minimal sketch is shown below.
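As an illustration of this chain, here is a minimal MATLAB sketch using standard Image Processing Toolbox functions; the 3x3 median window and the disk-shaped structuring element are assumptions chosen for illustration, not parameters taken from the thesis.

    % Pre-processing sketch: grayscale -> median filter -> dilation.
    rgb  = imread('vehicle.jpg');
    gray = rgb2gray(rgb);              % drop color, keep luminance only
    den  = medfilt2(gray, [3 3]);      % non-linear median filter removes noise
    se   = strel('disk', 1);           % small assumed structuring element
    dil  = imdilate(den, se);          % dilation sharpens and thickens edges
    imshowpair(gray, dil, 'montage');  % compare before and after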

For the extraction of the number plate, vertical and horizontal edge processing was used along with the histogram approach to show the trends of pixel components across the whole image. When the presence of a license plate is detected, the characters on the plate have to be separated from each other. The character segmentation stage has to take the full image of the license plate and deliver a set of separated characters to the next process, character recognition. For this, Connected Component Analysis (CCA) is implemented. By applying CCA to the image of the license plate, each character is treated as an individual image. All characters are acquired by filtering the elements against a threshold value such as the size of a component (Jing, 2016). Next is character recognition, the most integral part of license plate recognition. Using the artificial neural network, the accuracy and real-time efficiency of the system is ensured. Finally, the recognized characters undergo matching against the stored characters in the database. This process is called data mining. It is the computing process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. It is an essential process in which intelligent methods are applied to extract data patterns. This analysis step is sometimes referred to as the "knowledge discovery in databases" process. A sketch of the histogram-based plate localization idea follows.
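The edge-histogram localization described above can be sketched in MATLAB as follows; the moving-average window length and the 50% threshold are illustrative assumptions, not the proponents' tuned parameters.

    % Plate localization sketch via vertical/horizontal edge histograms.
    E = edge(gray, 'sobel');                      % binary edge map
    colHist = sum(E, 1);                          % vertical-edge histogram (columns)
    rowHist = sum(E, 2)';                         % horizontal-edge histogram (rows)

    w = 15;                                       % assumed low-pass window length
    colSm = conv(colHist, ones(1, w)/w, 'same');  % digital low-pass filter
    rowSm = conv(rowHist, ones(1, w)/w, 'same');

    cols = find(colSm > 0.5 * max(colSm));        % keep dense edge bands only
    rows = find(rowSm > 0.5 * max(rowSm));
    plateRegion = gray(min(rows):max(rows), min(cols):max(cols));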

The block diagram in Figure 1.2 demonstrates the structural model of how the characters of a license plate in the input image are recognized. The success of such systems is based on their robustness to environmental conditions, complex backgrounds, defects on the plate surface, and a range of distances and viewpoints between the vehicle and camera.
between the vehicle and camera.

Conceptual Framework

Figure 1.3 shows the conceptual framework of this study. An image acquired from the camera is transmitted to the remote server.

There are five major processes. The first step consists of the implementation of image pre-processing, morphological operations, and edge statistics, whose function is to detect the location of the license plate in the image. Then, the characters are separated from the image, which is the main function of connected component analysis. Next, the algorithm that recognizes the characters from the license plate image is a three-layer feedforward neural network with a backpropagation algorithm. Lastly, determining the match of the recognized characters against the database is the main concern of data mining.


[Figure 1.3 contents:]
INPUT - Hardware: serial camera, server (laptop), microcontroller (Arduino); Software: MATLAB 2016a, graphic user interface.
PROCESS - Implementation of image pre-processing, morphological operation, and edge statistics; implementation of connected component analysis; implementation of a three-layer feedforward neural network with backpropagation algorithm; implementation of data mining.
OUTPUT - Recognized license plate characters from the vehicle's front/rear view image, displayed in the graphic user interface.
FEEDBACK - Unmatched characters will be stored as training set data.

Figure 1.3. The Conceptual Framework of the study

The reliability of the system was determined by the accuracy and the processing time of each method.

Statement of the Problem

This study aims to develop a real-time automatic license plate recognition system that can be used for parking management systems and for the security of restricted areas. Specifically, it aims to answer the following problems:

1. What is the effect on the accuracy of the designed system when the input is varied in terms of:

a. License Plate Region Resolution?

b. Viewing Angle?

c. Luminosity?

2. What is the necessary average difference between the histogram values of the original gray-scaled image and the processed image in order for the system to completely segment the license plate region?

3. How fast and accurate is the performance of the designed system for license plate extraction, character segmentation, and recognition?

Objectives

The main objective of this study is to create a real-time system that fully recognizes the characters on the license plate using an artificial neural network. Specifically, this study aims to:

1. Determine the changes in accuracy of the system in image-to-text conversion in terms of:

a. License Plate Region Resolution

b. Viewing Angle

c. Luminosity

2. Determine the average difference between the histogram values of the original gray-scaled image and the processed image at which the system can completely recognize the license plate region.

3. Determine the accuracy and processing time of the proposed license plate recognition system.

Significance of the Study

The importance of this study is focused on improving automatic license plate recognition here in the Philippines. The Philippine license plate has its own distinct format and style, and finding ways to recognize it will have a great impact on the security and safety of society. First, (1) it could lessen street crimes such as carnapping and hit-and-run incidents by providing an automatic alert when any vehicle on a watch list passes an establishment that has a license plate recognition system installed. Second, (2) it could help solve traffic-related problems by monitoring the traffic flow of vehicles. Lastly, (3) it could enhance security in restricted areas through access control. For example, only authorized vehicles and visiting vehicles that are automatically registered would be allowed access to a particular private establishment. This study aims to produce more accurate and more reliable results than the previous studies in the country.

The capability of the system to fully recognize the characters on the license plate under various luminance conditions and even in tilted positions is also demonstrated in this study. With this kind of system, it is possible to have better security and monitoring in vehicular management systems.

Furthermore, this study also aims to determine the minimum system requirements needed for hardware implementation.

The beneficiaries and their gains from this study are as follows:

1. Parking managers of different establishments, who could use the system to facilitate or even automate payment, entry, and exit in parking areas.

2. Private establishments that need a tight security system for their areas.

3. Airport traffic management, to ensure that only authorized vehicles can access the taxi / public transport lanes.

4. Transportation engineers and urban planners who are tasked to solve vehicular traffic problems.

5. Law enforcers chasing "hot cars".

6. Students and researchers who are interested in taking up license plate recognition for thesis or research work.

Scope and Limitations of the Study

This study aims to provide a system that automatically converts a license plate image into text. Vehicles were held stationary during image capture, with very little variation in angle and distance. The images taken all have resolutions of 640x480 pixels and are in JPEG format. The images put through the artificial neural network for recognition are all binarized images. Only registered Philippine license plates were considered for the study. Lastly, the software used for simulation was MATLAB 2016a. The host machine has an Intel Core i5 dual-core processor operating at 1.70 GHz (up to 2.40 GHz) and 6 GB of RAM. The artificial neural network has 351 input neurons and 7 output neurons, while the number of hidden neurons was determined during the testing phase. The database contains a number of license plates along with the following data: owner, model, color, and the violations made by the owner. A sketch of a network with this topology is given below.
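For illustration, a network with this input/output topology can be sketched with MATLAB's Neural Network Toolbox as follows. The hidden-layer size of 40 is a placeholder (the thesis determined this value experimentally during testing), and the 0.0001 MSE goal is taken from the abstract.

    % Recognition network sketch: 351 inputs, one hidden layer, 7 outputs.
    hiddenSize = 40;                              % placeholder value
    net = feedforwardnet(hiddenSize, 'traingd');  % backpropagation training
    net.trainParam.goal = 1e-4;                   % stop at MSE = 0.0001

    % X: 351-by-N binarized character pixels; T: 7-by-N binary target codes.
    [net, tr] = train(net, X, T);
    Y = net(X);                                   % 7-bit code per character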

Definition of Terms

The following terminologies were used in the study and are listed accordingly:

Artificial Neural Network – a computational model based on the structure and functions of biological neural networks. Information that flows through the network affects the structure of the ANN, because a neural network changes - or learns, in a sense - based on that input and output.

Automatic License Plate Recognition – a technology that uses optical character recognition on images to read vehicle registration plates. It can use existing closed-circuit television, road-rule enforcement cameras, or cameras specifically designed for the task.

Binarization – the conversion of a picture to only black and white, Logic "0" or Logic "1".

Character Segmentation – a technique which partitions images of lines or words into individual characters. It is an operation that seeks to decompose an image of a sequence of characters into sub-images of individual symbols.

Character Recognition – a process which allows computers to recognize written or printed characters, such as numbers or letters, and to change them into a form that the computer can use.

Color to Gray conversion – this process removes all color information, leaving only the luminance of each pixel.

Feed Forward Neural Network – an artificial neural network wherein connections between the units do not form a cycle.

Image Processing – the processing of images using mathematical operations, applying any form of signal processing for which the input is an image, a series of images, or a video, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image.

Image Resolution – the detail an image holds. The term applies to raster digital images, film images, and other types of images. Higher resolution means more image detail. Image resolution can be measured in various ways.

License Plate Recognition – a technology that uses optical character recognition on images to read vehicle registration plates.

License Plate Region – the area inside the edges of the license plate, including all the alphanumeric characters.

MATLAB® – a high-level language and interactive environment that enables one to perform computationally intensive tasks faster than with traditional programming languages such as C, C++, and Fortran.

Morphology – a broad set of image processing operations that process images based on shapes. Morphological operations apply a structuring element to an input image, creating an output image of the same size.

Viewing Angle – the angle made by the x-axis and the varying position with respect to both the y- and z-axes.



Chapter II

REVIEW OF RELATED LITERATURE AND STUDIES

This chapter presents the review and relevant information about license plate recognition and artificial neural networks that were used as the basis for this research. In this section, the related algorithms for image processing through the recognition process are discussed and further explained through the works of previous researchers.

Foreign Literature and Studies

The rapid increase in the number of vehicles across the world has made vehicle license plate recognition one of the most relevant and important image processing systems, used for traffic control and surveillance systems, security systems, toll collection at toll plazas, parking assistance systems, and so on. Because of the wide use of technology, the Intelligent Transport System has developed and grown continually, and it has become more important in traffic management. Human intervention needs to be minimized in order to have a system that reduces the load on human operators.

Plenty of research in the field of vehicle plate number (VPN) recognition has been carried out since the early 2000s. License plate localization methods are classified broadly into morphology-based methods, edge statistics, neural networks, fuzzy-based methods, template-based methods, and so on. The color, variance, aspect ratio, and edge density are some of the image features used by these methods. The high contrast between characters and background in a license plate is a strong feature considered in edge analysis.

In previous papers, many techniques for developing automatic license plate recognition systems were proposed. According to Anand Sumatilal Jain and Jayshree M. Kundargi in their study entitled "Automatic Number Plate Recognition Using Artificial Neural Network" published in 2015, the typical License Plate Recognition (LPR) system is comprised of four main sections: image pre-processing, license plate extraction, character segmentation, and character recognition.

Normally, the license plate has a rectangular shape with a known aspect ratio, making it possible to extract it by finding all possible rectangles in the image. A common method used to find these rectangles is edge detection. In a paper published in 2003 by M. Sarfraz, M. J. Ahmed, and S. A. Ghazi entitled "Saudi Arabian license plate recognition system", edges were detected in the proposed system and candidate regions were generated by matching vertical edges only. In their study, the vertical edges are matched to obtain candidate rectangles; the rectangles considered as candidates are those with the same aspect ratio as the license plate. This method achieved 96.2% on images under various illumination conditions.



Figure 2.1. APNR system

Jain and Kundargi presented the block diagram of their system as shown in Figure 2.1. They also indicated in their study that the parameters of the camera used in acquiring the image should be considered, such as the camera type, camera resolution, orientation, light, and camera speed. The next step, the extraction of the license plate from the acquired image, is based on image features such as the color, various illuminations, the boundary, or the existence of the characters. It was also discussed that the third stage, segmentation of characters in the extracted license plate, may be done by projecting their color information, labelling them, or matching their positions with templates. Finally, for the last stage, the recognition of the characters, an Artificial Neural Network (ANN) was used. Other methods described in this paper for the recognition of characters are template matching and classifiers related to the ANN, such as fuzzy classifiers and SVMs (Support Vector Machines).

License Plate Extraction

For the process of license plate detection and extraction from the acquired image, techniques based upon combinations of edge statistics and mathematical morphology yield very good results, according to the paper published in IEEE in 2006 by Christos Nikolaos E. Anagnostopoulos, et al. entitled "A License Plate-Recognition Algorithm for Intelligent Transportation System Applications". Every method has advantages and disadvantages of its own. One disadvantage discussed in this paper is that edge-based methods alone can hardly be applied to complex images, because they are too sensitive to unwanted edges, which may also show high edge magnitude or variance; for example, the radiator region in the front view of the vehicle could be detected by the system instead of the plate number region. In spite of this, when combined with morphological steps that eliminate unwanted edges in the processed images, the license plate extraction rate is relatively higher and faster compared to other methods, as shown in Table 2.1 below.


Table 2.1. Performance of the LPR Systems in the Literature
(pl: plate detection; ch: character recognition; ov: overall success; P: license plate detection method; R: plate character recognition method. Columns: Ref. | Recognition rate | Platform & processor | Time (sec) | Scientific background)

[2] 100% (pl), 92.5% (ch) | Matlab 6.0, P II 300 MHz | 4 | P: connected component analysis; R: two PNNs (one for letter and one for numeral recognition). Test: 40 images.
[3] 80.4% (ov) | AMD K6 350 MHz | 5 | P: mathematical morphology; R: Hausdorff distance.
[4] 99.6% (pl) | P IV 1.7 GHz | 0.1 | P: edge statistics and morphology. Test: 10000 images.
[5] see scientific background | P IV 2.4 GHz | 0.5 | P: horizontal-vertical edge statistics; 96.8% (first set), 99.6% (second set), 99.7% (third set).
[6] 94.2% (pl), 98.6% (ch) | C++, P II 300 MHz | not reported | P: magnitude of the vertical gradients and geometrical features. Test: 104 images.
[7] 92.3% (pl), 95.7% (ch) | not reported | 1.6 | P: block-based technique between a processing and a reference image. Motorcycle vehicles are also identified.
[8] 93.1% (ov) | not reported | not reported | P: spatial gray-scale processing; R: dynamic projection warping.
[9] 90% (ov) | P III 1 GHz | 0.3 | P: combination of color and shape information of the plate.
[10] 97% (ov at day), 90% (ov at night) | not reported | 0.1-1.5 | High Performance Computing Unit for LPR.
[11] 91% (ov) | RS/6000 mod 560 | 1.1 | P: contrast information; R: template matching. Test: 3200 images.
[12] 80% (ov) | Intel 486/66 MHz | 15 | P: grey-scale processing; R: feedforward ANN.
[13] 89% (pl) | C++, P III 800 MHz | 1.3 | P: Support Vector Machine (SVM), continuously adaptive meanshift algorithm (CAMShift) in color measurements.
[14] 98.5% (pl) | AMD Athlon 1.2 GHz | 34 | P: a supervised classifier trained on texture features; R: "kd-tree" data structure and "approximate nearest neighbour". Test: 131 images.
[15] 95% (ov) | Visual C++, P IV | 2 | P: multiple interlacing method; R: ANN.
[16] 97% (pl) | Workstation SG-INDIGO 2 | 2 | P: fuzzy logic rules applied for plate location only. Test: 100 images.
[17] 75.4% (pl), 98.5% (ch) | not reported | not reported | P: fuzzy logic; R: neural network (Multi-Layer Perceptron). Test: 10000 images.
[18] 97.6% (pl) | P IV 1.6 GHz | 2.5 | P: fuzzy logic color rules; R: Self-Organized Neural Network. Test: 1065 images of 71 vehicles.
[19] 98% (pl) | P IV 1.4 GHz | 3.12 | P: Gabor transform and vector quantization. Performs only plate location and character segmentation (94.2%).
[20] not reported | P III 600 MHz | 0.15 | P: sub-machine-code genetic programming.
[21] 80.6% (pl) average | P III 500 MHz | 0.18 | P: real-coded genetic algorithm (RGA).

This study also discussed the differences in success rate, processing time, and computational power of the different methods used by at least 37 previous research works; 20 of these entries were excerpted from the original table and are presented here. A research work cited in this paper that used the combination of edge statistics and morphology as the license plate detection method, tested on approximately 10000 images, had a recognition rate of 99.6% with an average processing time of approximately 0.1 second. This approach was labelled superior in terms of plate segmentation by the paper.

Morphological operation refers to a broad set of image processing operations that process images based on shapes, which is why it is suggested for the extraction of the boundary of the license plate, based on the research conducted by Khaled Mahmud, et al. published in March 2012 entitled "Bangla Automatic Number Plate Recognition System using Artificial Neural Network." These researchers also reviewed other methods used in previous studies for number plate detection and came to the conclusion that the edge analysis method combined with mathematical morphology gives a better result. According to this study, this technique provides strong edge information where the plates contain dark characters on a light background, which can be used as an indication to detect the number plate. This condition matches the coloring of license plates here in the Philippines.

In a study entitled "License Plate Recognition Using Convolutional Neural Network" conducted by Shrutika Saunshi et al., the edge detection method was used for license plate detection and extraction. They indicated the importance of keeping in mind the boundaries of the plate in the image, hence the need for the edge detection method. This method helped the researchers find the actual shapes in an image by grouping the intersection points of the shapes. Hence, they could finally tell whether a shape is a rectangle or not, depending on the number of points in the obtained group. Figure 2.2 shows three images: the first shows the gray image, the second the detected edges, and the third the localized plate region.

Figure 2.2. (a) Gray image (b) edge detected (c) localized plate region

Image Pre-processing

Based on the study conducted at the University of Rome by R. Parisi, et al. entitled "Car Plate Recognition By Neural Networks And Image Processing", the captured image should be pre-processed by tone equalization and contrast reduction, or a proposed alternative such as edge enhancement, for better robustness and suitability for the next processing stage. In a paper entitled "Block-Based Neural Network for Automatic Number Plate Recognition" published in 2014, Deepti Sagar and Maitreyee Dutta showed the steps they performed for improving the image quality, as shown below:

Figure 2.3. Steps for Image Processing

Gray-scale processing, according to Sagar and Dutta, is a very important step in image pre-processing because its results are the foundation of later steps. For binarization, their study used Otsu's method. The image of various gray-level intensities is converted into a binary image. In the matrix equivalent of a binary image, the matrix is composed only of ones and zeros, with one representing white and zero representing black. This is done using a threshold value that decides whether a given pixel should be marked as one or zero; a minimal sketch follows.
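A minimal MATLAB sketch of the Otsu binarization described here, using the toolbox's built-in graythresh:

    % Otsu binarization sketch: graythresh picks the threshold that best
    % separates foreground from background; the output is a 0/1 logical image.
    gray  = rgb2gray(imread('plate.jpg'));
    level = graythresh(gray);         % Otsu's method, level in [0, 1]
    bw    = im2bw(gray, level);       % ones = white, zeros = black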

Image pre-processing is an important step for a license plate recognition system according to Shrutika Saunshi, et al. in their paper entitled "License Plate Recognition Using Convolutional Neural Network". If an image is not properly pre-processed, the latter stages of the proposed system are highly affected, resulting in ineffective recognition and faulty results. The image has to be enhanced through pre-processing. One important pre-processing step done in this study was binarization. Binarization is the process of converting an RGB image into an image with only two pixel values: either 0, for a background pixel, or 1, for a foreground pixel. In a binary image, edges are clearer, which makes detecting the license plate easier; thus, performing binarization before license plate detection and extraction is a big help.

Figure 2.4. Binarized Image of a License plate

-Median Filter

To eliminate unwanted noisy regions, a median filter was used in a study conducted in Turkey by H. Erdinc Kocer and K. Kursat Cevik entitled "Artificial Neural Networks-based Vehicle License Plate Recognition". They used a 3x3 window for this filtering method, which was passed over the image; the dimension of the window can be adjusted depending on the noise level. The researchers used the process explained in the book Computer Vision and Image Processing, written by S. Umbaugh and published in New Jersey in 1999. The process works as follows: first, a pixel is assigned as the center pixel of the 3x3 window. Second, the neighborhood pixels, the pixels surrounding the center within the window, are labelled. Then these nine pixels are sorted from smallest to biggest, and the median element, the fifth element, is assigned to the center. Finally, this process is applied to all the pixels in the plate image. An example of a median-filtered image is shown below, in Figure 2.5. This helped the plate region extraction process achieve a success rate of 98.45%, or 255 correct results over 259 samples. A direct sketch of this neighborhood-sorting procedure is given after the figure.

Figure 2.5. (a) Original Gray-processed image (b) Contrast extended image (c) Median-filtered image
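The sort-and-take-the-fifth procedure described above can be written directly in MATLAB as follows; in practice the built-in medfilt2(I, [3 3]) does the same job, and this explicit loop is purely illustrative.

    % Manual 3x3 median filter following the described procedure.
    I = double(gray);
    out = I;
    [h, w] = size(I);
    for r = 2:h-1
        for c = 2:w-1
            win = I(r-1:r+1, c-1:c+1);   % 3x3 neighborhood of the center pixel
            s = sort(win(:));            % sort the nine values ascending
            out(r, c) = s(5);            % the fifth element is the median
        end
    end
    out = uint8(out);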

-Binarization

Based on the study conducted by Senthilkumaran N and Kirubakaran C entitled "Efficient Implementation of Niblack Thresholding for MRI Brain Image Segmentation" in 2014, different techniques have been used to successfully extract objects from the background by pre-processing the image beforehand. Thresholding is a simple but effective tool that is now widely used, and different binarization methods have already been evaluated on different kinds of data. Niblack's method was found better for thresholding gray-scale images with low contrast, varying background intensity, and the presence of noise. In this paper, the researchers modified this method for better results.


Niblack's algorithm determines a threshold value pixel-wise by sliding a rectangular window over the gray-level image; the size of this rectangular window may vary. The proposed efficient implementation resulted in a sharper and more accurate object area. The study concluded that the effective implementation of the Niblack algorithm it proposed is suitable for applications such as MRI brain images, or others that require a clearer view of the object in the image. A sketch of the classic Niblack rule is shown below.
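For reference, the classic Niblack rule thresholds each pixel at the local mean plus k times the local standard deviation. A minimal MATLAB sketch follows; the window size of 25 and k = -0.2 are conventional values assumed here, not parameters reported by the paper.

    % Niblack local thresholding sketch: T(x,y) = m(x,y) + k*s(x,y).
    I = im2double(gray);
    w = 25; k = -0.2;                                      % assumed values
    m  = imfilter(I, fspecial('average', w), 'replicate'); % local mean
    s  = stdfilt(I, true(w));                              % local std deviation
    bw = I > (m + k .* s);                                 % per-pixel threshold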

Figure 2.6. (a) Original Gray-processed image (b) Binary image produced by Niblack’s algorithm

(c) Efficient implementation of Niblack’s algorithm


The image binarization step is considered very important before character segmentation in a license plate recognition system, according to a study published in IEEE in 2011 entitled "Blob Extraction based Character Segmentation Method for Automatic License Plate Recognition System" conducted by Youngwoo Yoon, Kyu-Dae Ban, Hosub Yoon, and Jaehong Kim. Three binarization methods were compared in this study, namely Otsu's method, Niblack's algorithm, and Sauvola's algorithm. These are among the most commonly used methods because of their promising results. The methods were compared in this study using license plate images, as shown in the figure below.

Figure 2.7. Comparison of the three binarization methods

The two local thresholding methods had almost similar performance, but Niblack's algorithm produced better results in most cases: the characters were clearer, and the separation between the foreground and background colors was further enhanced.



-Dilation

The dilation process was explained further and used in the study conducted by G. Madhuri Latha and G. Chakravarthy entitled "An Improved Bernsen Algorithm Approaches for License Plate Recognition" in September-October 2012. According to them, dilation is a process of improving a given image by filling holes, sharpening the edges of objects, joining broken lines, and increasing the brightness of the image. Noise present in the image can also be removed using dilation. The paper also states that by making the edges sharper, the difference in gray value between neighboring pixels at the edge of an object can be increased, which enhances edge detection. Since different environments and various illumination and shade are expected in the captured images, the captured image has to be converted from RGB to gray form. The dilation process is usually used for probing and expanding the shapes contained in the input image.
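As a minimal sketch (illustrative, not the cited authors' code; the structuring element size is an assumption), morphological dilation of a binary plate image can be performed in MATLAB as follows:

% Illustrative dilation: thicken strokes, join broken lines, fill small gaps.
bw  = imread('plate_binary.png');   % hypothetical binary plate image
se  = strel('square', 3);           % 3x3 structuring element that probes the image
bwd = imdilate(bw, se);             % expand the foreground shapes
imshowpair(bw, bwd, 'montage');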

According to Vahid Abolghasemi and Alireza Ahmadyfard in their study entitled "A Fast Algorithm for License Plate Detection" in 2007, techniques based on edge statistics yield promising results because of the sufficient information in the plate region. The simplicity and speed of this method are an advantage over other license plate extraction methods; however, the success rate is low when the edge-statistics method is used alone.
Character Segmentation

Character segmentation is the process of extracting the plate's elements, its characters and numbers, from the plate's background. It is preferable to isolate each character in the extracted image of the plate number to make the subsequent character recognition easier; segmentation separates the characters, numbers and letters, from one another and from the background.

Based on a survey entitled "A License Plate-Recognition Algorithm for Intelligent Transportation System Applications" conducted by Christos Nikolaos E. Anagnostopoulos, et al. in September 2006, Connected Component Analysis (CCA) is an internationally acknowledged and widely used technique in image processing that scans an image and labels its pixels into components based upon the connectivity of the pixels, using either four-connectivity or eight-connectivity. It was also explained that CCA works on binary or gray-level images and that different measures of connectivity are possible; in the proposed method, the authors searched in eight-connectivity.

In a paper published in September 2014 entitled "Block-Based Neural Network for Automatic Number Plate Recognition", the researchers, Deepti Sagar and Maitreyee Dutta, also used CCA, labelling each 8-connected component in the binary license plate image with a unique number to make an indexed image. This study was conducted in India. Indian license plates have unnecessary textual details, mostly found at the bottom of the license plate, so the researchers used the Centre-line Rule to eliminate the other unwanted connected components. Blob extraction was then applied by processing the image vertically and horizontally to find the starting and ending position of each blob using maximum and minimum parameters; an example is shown in Figure 2.9 below. This proposed method provides an overall success rate of 98.2%.

Figure 2.9 Blob Extraction process

A paper entitled “License Plate Segmentation Using Connected Component

Analysis” conducted on January 2013 further emphasized the promising effect of

connected component analysis in character segmentation process of license plate

recognition system. The researchers namely V.Karthikeyan, V.J.Vijayalakshmi, and

P.Jeyakumar, showed how CCA is an important technique that scans and labels the

pixels of a binarized image into components based on its pixel connectivity. Shown in

the figure 2.10 below is the orientation or process how to label pixels through four-

connectivity and eight-connectivity.


Figure 2.10. Process of labelling pixels through four-connectivity and eight-connectivity.

Then, these connected components are analyzed to filter out overly long and wide components, keeping only the components that fit the defined values.

Figure 2.11. Vertical and Horizontal Coordinate
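A minimal MATLAB sketch of this labelling-and-filtering idea (illustrative only; the size limits are hypothetical) is:

% Illustrative connected component analysis on a binarized plate image.
bw = imread('plate_binary.png');            % hypothetical binary image
cc = bwconncomp(bw, 8);                     % label components with 8-connectivity
                                            % (bwconncomp(bw, 4) uses 4-connectivity)
props = regionprops(cc, 'BoundingBox');
keep = false(cc.NumObjects, 1);
for i = 1:cc.NumObjects
    box = props(i).BoundingBox;             % [x y width height]
    keep(i) = box(3) < 50 && box(4) < 80;   % hypothetical limits: drop long/wide blobs
end
filtered = ismember(labelmatrix(cc), find(keep));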

Based on the research entitled "An Algorithm for License Plate Recognition Applied to Intelligent Transportation System", conducted by Ying Wen, et al. and published in September 2011, character segmentation using CCA with eight-connectivity searching resulted in an average success rate of 98.34% on a dataset of 7,940 samples.

Character Recognition (Feed-forward ANN)


Artificial intelligence is used in many applications today and has been successful in areas such as image recognition and sound recognition. Optical Character Recognition (OCR) programs have the capability to read printed text, whether scanned from a document or hand-written on a hand-held device such as a personal digital assistant and then converted to an image. OCR has been applied in computer-guided traffic systems for the past few years; for example, intelligent traffic systems have been continually developed that can work on their own with little or no human intervention.

Artificial Neural Network (ANN) is an interdisciplinary study of biology and computer science which has been widely used in signal processing, pattern recognition, computer vision, intelligent control, non-linear optimization, and so on, based on a paper published in June 2012 entitled "A Neuroplasticity (Brain Plasticity) Approach to Use in Artificial Neural Network" by Yusuf Perwej and Firoj Perwej.

According to the researchers Mohit Agarwal and Baijnath Kaushik in their study entitled "Text recognition from image using Artificial Neural Network and Genetic Algorithm", published in 2015, artificial neural networks (ANN) are a family of statistical learning models inspired by biological neural networks (the central nervous systems of animals, in particular the brain). ANNs are used to approximate functions that can depend on a large number of inputs and are generally unknown. Artificial neural networks are generally presented as systems of interconnected neurons that send messages to each other. These connections carry weights that the user can modify and adjust based on experience, making the network adaptive to inputs and capable of learning. In this study, a hetero-associative neural network was presented, as shown in Figure 2.12 below. Hetero-associative memories can recall an associated piece of datum from one category upon presentation of data from another category, which is a big help in image classification.

Figure 2.12. Hetero-associative neural network


According to a survey conducted by Christos-Nikolaos E. Anagnostopoulos, Ioannis D. Psoroulas and Vassili Loumos entitled "License Plate Recognition From Still Images and Video Sequences: A Survey", published in September 2008, multi-layered feed-forward neural networks are used for license plate character recognition in many previous works, usually following a common methodology. The classical training method for feed-forward neural networks is error back-propagation, and the network has to be trained for many training cycles to attain good performance. In Table 2.5 below (labelled Table II in the survey), various multi-layered neural network topologies are cited along with specific details as reported in the literature, most notably the systems' performance or success rate, which is noticeably high, with 98.5% as the highest and 81% as the lowest.

Table 2.5. Multilayered Feedforward Neural Networks for Character Recognition Details

Ref.        Topology       Performance   No. of output classes
[15]        16-20-9        95%           9 numbers
[19,29]     24-15-36       98.5%         26 letters + 10 numerals
[80]        209-104-36     95%           26 letters + 10 numerals
[126]       15-10-8        96%           8 (binary ASCII code of 36 characters)
[127]       108-50-35      95%           25 letters + 10 numerals
[128]       50-60-100-36   92.5%         26 letters + 10 numerals
[133,136]   *not stated*   81%           *not stated*
[134]       420-26-38      98.2%         36 alphanumeric classes + 1 class for border + 1 class for stamps
Previous research cited in this survey used a similar approach for character recognition and arrived at a system that had difficulty recognizing certain letters and numbers, labelled as problematic pairs, such as the letter 'I' and the number '1', 'B' and '8', 'O' and '0', etc. That system therefore gave special attention to the training of its three-layered MLP for the correct identification of such pairs: to solve this minor problem, the problematic pairs were trained more often. After this special training, the rate of correct classification was reported to be 98.2%.

There are two basic methods used for OCR according to a study conducted in India by Vijendra Singh, Hemjyotsana Parashar and Nisha Vasudeva entitled "Recognition of Text Image Using Multilayer Perceptron", namely matrix matching and feature extraction. While matrix matching is the simpler and more common way, feature extraction produces much more robust and accurate results and is much more versatile. The Multilayer Perceptron (MLP) neural network implemented in this study was composed of three layers, namely the input, hidden, and output layers. The input layer consisted of 180 neurons, which receive printed image data from a 30x20 symbol pixel matrix. The hidden layer had 256 neurons, a number decided on the basis of optimal results by trial and error. The output layer is composed of 16 neurons.

Figure 2.13. Feed Forward Neural Network approach

This set-up is illustrated in Figure 2.13 above. The feed-forward neural network approach is used to combine all the unique features, which are taken as inputs; one hidden layer is used to integrate and collaborate similar features and, if required, adjust the inputs by adding or subtracting weight values; finally, one output layer is used to find the overall matching score of the network. Learning was implemented using the back-propagation algorithm with a learning rate. The performance index for the training of this ANN was given in terms of its mean square error (MSE); the tolerance limit for the MSE was set to 0.001. When the number of iterations reached 350, the training set became stable at 0.0007. For this system's data set, the researchers applied their system with different numbers of neurons in the hidden layer. They used different font styles of printed text, and the results are shown in Table 2.6 below.

Table 2.6. Different font styles of printed text and the results

A study conducted by V. Koval et al. entitled "Smart License Plate Recognition System Based on Image Processing Using Neural Network", published in September 2003, used a feed-forward neural network for its proposed license plate recognition system. The structure of this neural network consisted of 366 inputs on the input layer, 50 neurons on its one hidden layer, and an output layer with 46 neurons. The back-propagation method with momentum and an adaptive learning rate was used for neural network training. The network was trained on quality images of alphabetic characters with supervised outputs. The neural network successfully recognized the characters in the number plates used in the experiment with a success rate of 95% in the presence of 50% noise density.

In the approach proposed by H. Erdinc Kocer and K. Kursat Cevik in their study entitled "Artificial Neural Networks-based Vehicle License Plate Recognition", published in 2011, a multi-layered perceptron ANN model was used for character recognition. The processing units of the MLP were arranged in three layers, namely the input layer, which includes the information used in making a decision, the hidden layer, which helps the network compute more complicated associations, and the output layer, which includes the resulting decision. What makes this study different from many other studies is the use of two separate ANNs: the numbers and the letters were classified separately. This was done to increase the success rate of the recognition phase, where both networks have the same architecture but different numbers of inputs. The researchers thus tried to prevent confusion between similar numbers and letters such as '0' and 'O', '8' and 'B', and '2' and 'Z'.
This research used the feed-forward back-propagation algorithm for training the ANN, and used the mean square error (MSE) function to measure the training performance of the network; the MSE value determines how well the network output fits the desired output. The system was tested using 259 vehicles, a maximum of 500 iterations was performed for each input set, and the iteration stops once the user-defined minimum error rate is reached. For this proposed system, the proponents defined a minimum error rate of 0.001.

Figure 2.14. Relationship of the number of iterations to the MSE

Figure 2.14 shows the relationship of the number of iterations to the MSE. The first graph shows that the training reached 1,180 iterations for the letters before reaching the minimum error rate, while for the numbers the training took 4,457 iterations. The success rate recorded for this character recognition stage was 98.17%, combining the performance of the letter and number recognition: 344 of 347 letters and 1,000 of 1,022 numbers were successfully recognized.
Based on a study previously cited in this paper entitled "Block-Based Neural Network for Automatic Number Plate Recognition", conducted by Deepti Sagar and Maitreyee Dutta and published in September 2014, the two-layer feed-forward neural network proposed in that system has a simple architecture which classifies the input into a set of target categories. For this research, 1,000 license plates were used as the dataset, divided into three sets. The first set contains 3,450 character images, whose recognition rate is shown below in Figure 2.15.

Figure 2.15. First set contains dataset of 3450 character images recognition rate

On the other hand, the second set contains 6,071 character images, whose recognition rate can be seen in Figure 2.16. Finally, the third set contains 8,699 character images, whose recognition rate is illustrated in Figure 2.17.



Figure 2.16. Second set contains 6071 character images recognition rate

This proposed system was successful in obtaining a high character recognition rate at high processing speed. The system was found to have promising results even under different outdoor conditions and unusual appearances of the license plate, such as joined or broken characters, dirty or bad-quality images, and some degree of inclination.

Figure 2.17. Third set contains dataset of 8699 character images recognition rate
Table 2.7 shows the summary of the recognition rates of this proposed system. The system had a convincing average result of 98.2% compared to other neural network-based systems previously cited in this paper, one of which obtained 97.3% for 3,700 character images. This system had a total processing time of 115.006 seconds for 3,399 characters, meaning only 3.39 milliseconds per character, compared to a cited study which averaged 8.4 milliseconds.

Table 2.7. Summary of results of the recognition rates of this proposed system

S.No   Character Images   Match Cases       Unmatch Cases    Recog. Rate   Process Time
1      3450 characters    3399 characters   51 characters    98.521%       115.006 s
2      6071 characters    5955 characters   116 characters   98.089%       256.451 s
3      8699 characters    8523 characters   167 characters   98.080%       379.374 s
Average Recognition Rate: 98.2%

In a study entitled “Bangla Automatic Number Plate Recognition System using

Artificial Neural Network” conducted by Tasnuva Ahmed, et. al.and published on

March 2012, the segmented characters in the detected plate number were recognized

using MLP neural network. The designed three layers feed forward supervised neural

network is composed of one input layer, hidden layer, and then the output layer. The

hidden layer was consisted of 158 neurons and the output layer has 40 neurons since

there are 40 output of the network. This network also used back-propagation for the

learning algorithm. Log-Sigmoid function was used for the transfer function of the

hidden neurons and the output neurons.


The MLP network used in this study was trained with six different Bangla fonts. Bangla is the alphabet used by the people of Bangladesh. Having different kinds of fonts on license plates is a factor in achieving a high success rate in character recognition, which is why this system was trained on six different fonts, so that it would be versatile in whatever font the characters may have. The experimental results of the application of this system for character and word recognition are shown in Table 2.8 below. The average success rate of the recognition process was 84.16%, with a recognition time of 1.3 seconds.

Table 2.8. Experimental results of the application of this system for character and word recognition

Set   Font name      Input type   Number of input samples   Recognized   Success rate (%)
1     SutonnyMJ      character    20                        19           95
                     word         12                        11           91.67
2     ChondonaMJ     character    20                        18           90
                     word         12                        11           91.67
3     AtraiMJ        character    20                        16           80
                     word         12                        9            75
4     DhanshirhiMJ   character    20                        17           85
                     word         12                        10           83.3
5     BhagirathiMJ   character    20                        17           85
                     word         12                        10           83.3
6     GoomtiMJ       character    20                        15           75
                     word         12                        9            75

Automatic License Plate Recognition

A lot of methods have been used in different automatic license plate recognition systems. To help compare these methods, the researchers Shan Du, Mahmoud Ibrahim, Mohamed Shehata, and Wael Badawy, in a paper entitled "Automatic License Plate Recognition (ALPR): A State of the Art Review", presented a comparison of these techniques in terms of pros, cons, recognition accuracy, and processing speed, categorizing them according to the features used in each stage of the system. The output of their review is summarized in the four tables shown below.

Table 2.9. Summary of the pros and cons of various plate extraction methods classified through their used features

Using boundary features
  Rationale: the boundary of a license plate is rectangular.
  Pros: simplest, fast and straightforward.
  Cons: can hardly be applied to complex images, since it is too sensitive to unwanted edges.
  References: [5][8]-[16]

Using global image features
  Rationale: find a connected object whose dimension is like a license plate.
  Pros: straightforward; independent of the license plate position.
  Cons: may generate broken objects.
  References: [27][28][29][30]

Using texture features
  Rationale: frequent color transition on the license plate.
  Pros: able to detect even if the boundary is deformed.
  Cons: computationally complex when there are many edges.
  References: [31][39][40][41]

Using color features
  Rationale: specific color on the license plate.
  Pros: able to detect inclined and deformed license plates.
  Cons: RGB is limited by illumination conditions; HLS is sensitive to noise.
  References: [50][51][52]

Using character features
  Rationale: there must be characters on the license plate.
  Pros: robust to rotation.
  Cons: time consuming (processes all binary objects); produces detection errors when there is other text in the image.
  References: [63][64]

Using two or more features
  Rationale: combining features is more effective.
  Pros: more reliable.
  Cons: computationally complex.
  References: [70]-[72][74][81]

This table shows the pros and cons of the different license plate extraction methods used by previous researchers, as cited in the study mentioned above.

Table 2.10. Summary of the pros and cons of various character segmentation methods classified through their used features

Using pixel connectivity [12][30]
  Pros: simple and straightforward; robust to license plate rotation.
  Cons: fails to extract all the characters when there are joined or broken characters.

Using projection profiles [21][24][51][101]
  Pros: independent of character positions; able to deal with some rotation.
  Cons: noise affects the projection value; requires prior knowledge of the number of license plate characters.

Using prior knowledge of characters [6][14][105][106]
  Pros: simple.
  Cons: limited by the prior knowledge; any change may result in errors.

Using character contours [107][108]
  Pros: can get exact character boundaries.
  Cons: slow and may generate incomplete or distorted contours.

Using combined features [111][112]
  Pros: more reliable.
  Cons: computationally complex.

This table, from the same source as the table above, shows the pros and cons of various character segmentation methods.


Table 2.11. Summary of the pros and cons of various character recognition methods classified through their used features

Using pixel values
  Template matching: simple and straightforward, but processes non-important pixels, is slow, and is vulnerable to any font change, rotation, noise and thickness change.
  Several templates for each character: able to recognize tilted characters, but requires more processing time.

Using extracted features
  Reported features include: horizontal and vertical projections; the Hotelling transform; the number of black pixels in each 3x3 pixel block; counting the number of elements that have certain degrees of inclination; the number of transitions from character to background and the spacing between them; sampling the character contour all around; Gabor filters; Kirsch edge detection; converting the direction of the character strokes into one code; the pixel values of 11 sub-blocks; non-overlapping 5x5 blocks; contour-crossing counts (CCs), directional counts (DCs) and peripheral background area (PBA); and topological features of characters, including the number of holes, endpoints, three-way nodes, and four-way nodes.
  Pros: able to extract salient features; robust to any distortion; fast recognition, since the number of features is smaller than the number of pixels.
  Cons: feature extraction takes time; non-robust features will degrade the recognition.

Table 2.11 shows the pros and cons of the character recognition methods used in the different studies cited in the study mentioned above.


Table 2.12. Summary of the overall performance of different ALPR systems

[2]
  LPE: matching of vertical edges; LPS: vertical projection; OCR: template matching on Hamming distance.
  Database size: 610 images.
  Image condition: 650x480 pixels; various illumination conditions, some angles of view and distance, dirty plates.

[8]
  LPE: vertical edges; LPS: ---; OCR: ---.
  Database size: 1165 images.
  Image condition: 384x288 pixels.

[18]
  LPE: block-based processing; LPS: ---; OCR: template matching.
  Database size: 180 pairs of images.
  Image condition: multiple plates with occlusion and different sizes.

[20]
  LPE: Hough transform and contour algorithm; LPS: vertical and horizontal projections; OCR: hidden Markov model (HMM).
  Database size: 805 images.
  Image condition: 800x600 pixels; different rotation and lighting conditions.

[21]
  LPE: generalized symmetry transform (GST); LPS: ---; OCR: ---.
  Database size: 330 images.
  Image condition: various viewing directions.

[22]
  LPE: edge detection and vertical and horizontal projection; LPS: vertical and horizontal projections; OCR: back-propagation neural network (BPNN).
  Database size: 12 s of video.
  Image condition: 320x240 pixels.

[5]
  LPE: edge statistics and morphology; LPS: ---; OCR: ---.
  Database size: 9825 images.
  Image condition: 768x534 pixels; different lighting conditions.

[26]
  LPE: Connected Component Analysis (CCA); LPS: ---; OCR: ---.
  Database size: more than 4 hours of video.
  Image condition: 320x240 pixels; degraded low-resolution video.

[49]
  LPE: vector quantization; LPS: ---; OCR: ---.
  Database size: 300+ images.
  Image condition: 768x256 pixels; different brightness and sensor positions.

[50]
  LPE: sliding concentric window (SCW); LPS: SCW; OCR: two-layer probabilistic neural network (PNN).
  Database size: 1334 images.
  Image condition: different background and illumination.

[52]
  LPE: Gabor filter; LPS: ---; OCR: local vector quantization.
  Database size: 300 images.
  Image condition: fixed angle and different illumination.

The overall performance comparison of the different methods, indicating the success rate of each method, the processing time, qualification as real-time or not, etc., is shown in Table 2.12 and continued in Table 2.13.


Table 2.13. (continuation of Table 2.12) Summary of the overall performance of different ALPR systems

Method   LPE Rate   LPS Rate   OCR Rate   Total Rate                         Processing Time                              Real Time   Plate Format
[2]      96.2%      ---        ---        95%                                ---                                          ---         Saudi Arabic plates
[8]      ~100%      ---        ---        ---                                47.9 ms                                      Yes         Chinese plates
[18]     94.4%      ---        95.7%      ---                                75 ms for LPE                                Yes         Taiwanese plates
[20]     98.8%      97.6%      97.5%      92.9%                              0.65 s for LPE and 0.1 s for OCR             No          Vietnamese plates
[21]     93.6%      N/A        N/A        N/A                                1.3 s                                        No          Korean plates
[22]     ---        ---        ---        85.5% and ~100% after retraining   100 ms                                       Yes         Taiwanese plates
[5]      99.6%      ---        ---        ---                                100 ms                                       Yes         Chinese plates
[26]     96.6%      ---        ---        ---                                30 ms                                        Yes         Taiwanese plates
[49]     98%        ---        ---        ---                                200 ms                                       No          Italian plates
[50]     96.5%      ---        89.1%      86%                                276 ms (111 ms LPE, 37 ms LPS, 128 ms OCR)   Yes         Greek plates
[52]     98%        ---        94.2%      ---                                3.12 s                                       No          Multi-national plates

The researchers considered an ALPR system a real-time system if its processing time is less than or equal to 100 ms; this could be reconsidered depending on the use of the ALPR system. Most of these systems were used in traffic-related situations, which is why the criterion for a real-time system has to be a very fast one.

Local Studies

Here in the Philippines, only a few studies have been conducted on license plate recognition systems. One of the first papers published in the country was proposed by students of De La Salle University. Their study, entitled "Vehicle Parking Inventory System Utilizing Image Recognition through Artificial Neural Networks", was presented at the 2012 Institute of Electrical and Electronics Engineers (IEEE) Region 10 Conference. The study focuses on creating and developing an automated vehicle logging system that applies character recognition to images captured at the entrance of a parking area. These images are processed to extract the license plates of any vehicle entering the parking area. The extracted plate images are then converted into a numerical form devised by the researchers to fit the requirements of the artificial neural network. Table 2.14 shows the equivalent character of the binary output of the artificial neural network block.

Table 2.14. Equivalent character of the binary output of the artificial neural networks block
Binary Equivalent Binary Equivalent
Output Character Output Character
000001 A 010011 S
000010 B 010100 T
000011 C 010101 U
000100 D 010110 V
000101 E 010111 W
000110 F 011000 X
000111 G 011001 Y
001000 H 011010 Z
001001 I 011011 0
001010 J 011100 1
001011 K 011101 2
001100 L 011110 3
001101 M 011111 4
001110 N 100000 5

001111 O 100001 6
010000 P 100010 7
010001 Q 100011 8
010010 R 100100 9

The system used in the study is a feed-forward neural network, trained using 5,860 sets of training data, yielding a system with a 0.0001645724% error and 99.98% character recognition accuracy over 200 samples.
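As a minimal sketch of how such a 6-bit output maps back to a character (an illustration of Table 2.14, not the cited authors' code):

% Illustrative decoding of the 6-bit ANN output of Table 2.14.
% Indices 1..26 map to 'A'..'Z' and indices 27..36 map to '0'..'9'.
alphabet = ['A':'Z', '0':'9'];
bits = [0 1 0 0 1 1];                    % example network output (MSB first)
idx  = bits * 2.^(numel(bits)-1:-1:0)';  % binary vector to decimal index (19)
recognizedChar = alphabet(idx);          % yields 'S', as in the table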

Early in 2017, another study of a license plate recognition system was published in the Institute of Electrical and Electronics Engineers (IEEE). The proponents of the said study are from the University of Santo Tomas. The study, entitled "Development of Intelligent Transportation System for Philippine License Plate Recognition", was proposed by the following individuals: J.P. Dalida, A.J. Galiza, A.G. Godoy, M. Nakaegawa, J.M. Vallesters and A. Dela Cruz. Their study focuses on adopting accurate and suitable algorithms for the following phases, namely license plate pre-processing, plate localization, character segmentation, and character recognition, in recognizing old and new license plates of both private and public vehicles and motorcycles in the Philippines. The proposed system was discussed based on its structure and how it works. For license plate pre-processing, they observed that raw captures of license plates suffer significant loss of information due to shadow and uneven illumination; an Improved Bernsen Algorithm was therefore used because it properly solves the shadow and overexposure conflicts. Plate localization aims to isolate the plate area from the rest of the image; Connected Component Analysis is utilized to label regions and preserve the characters by removing unwanted information. Prior to character segmentation, the proponents used Principal Component Analysis for tilt correction of the extracted plate. After plate correction, horizontal and vertical projections were used to locate the characters and their boundaries. Lastly, character recognition in the system involves two phases, namely feature extraction using the Dual-Tree Complex Wavelet Transform and classification using Artificial Neural Networks. Based on their gathered data, the system acquired 85.6667% accuracy for plate character detection, while for character recognition an accuracy of 94.0183% was gathered under ideal conditions. Meanwhile, on a practical license plate recognition or actual test, only 72.83% accuracy was gathered.

Synthesis of Related Literature and Studies to the Research

Vehicle license plate recognition is generally composed of four major processes: (1) image acquisition, (2) license plate detection and extraction, (3) character segmentation, and (4) character recognition. Various pre-processing methods were also applied to the obtained image in order to achieve a higher success rate and faster recognition in the last stage of the proposed system.

Multiple methods have already been proposed and used in previous research, as stated in this chapter. However, different success rates and recognition times were recorded because of the orientation of the obtained image, the various illumination and environmental effects seen in the image, or, primarily, the methods themselves. All methods have their advantages and disadvantages, and a specific method must be chosen depending on how and where the system is going to be used. Another factor that affects the results of license plate recognition systems is the visual graphics in the background of the characters, which vary depending on the country the license plate is from. This is why the researchers focused on applying the proposed method to the Philippine license plate, which used to have the Rizal monument in its background. Recently, however, the Philippines changed the design of its national license plate, which now has only a white background and black characters.

Only a few studies on license plate recognition have been conducted here in the Philippines, and they used different methods for each process. The proponents wanted to improve the license plate recognition systems established in the country by considering relevant factors that have not been considered in previous local studies, such as camera position, lighting condition, and the processing time of the system.

In the study conducted by the students of De La Salle University entitled "Vehicle Parking Inventory System Utilizing Image Recognition through Artificial Neural Networks", the set-up for acquiring the input image was not included in the system: the input image was taken manually and then fed to the system for processing. The same is true of the study conducted by the students of the University of Santo Tomas entitled "Development of Intelligent Transportation System for Philippine License Plate Recognition", which also skipped image acquisition and focused on achieving higher success rates in the other stages of license plate recognition. In terms of lighting conditions in the actual set-up, neither study considered this factor, since it is part of image acquisition. Lastly, the processing time of the whole system was not mentioned in either study, which implies that neither emphasized the processing time accumulated by the whole system.

The proponents wanted to design a license plate recognition system addressing the above-mentioned gaps in the local studies. With this study, hardware implementation will not be difficult to construct in terms of camera placement, and it can also help in constructing a reliable real-time license plate recognition system by considering the processing time of each stage. In Chapter 3, the proposed algorithms addressing these gaps are discussed thoroughly.



Chapter III

RESEARCH METHODOLOGY

This chapter presents the method of research employed in the study, and

algorithms and instrumentation used in developing the system. It also includes

statistical processes and tools used for analysis, interpretation and validation of the

data to prove the feasibility of the system. All of these are for the purpose of

answering the problem considered in the study.

Methods of Research

This study conducted by the researchers was considered quantitative. The goal of a quantitative research study is to determine the relationship between one thing and another within a population. Quantitative research focuses on numeric and unchanging data and on detailed, convergent reasoning rather than divergent reasoning. The principal advantage of experimental designs is that they provide the opportunity to identify cause-and-effect relationships, which was found useful in this study in order to achieve promising results answering the stated problem.

According to Earl Babbie (2010), quantitative methods emphasize objective measurements and the statistical, mathematical, or numerical analysis of data collected through polls, questionnaires, and surveys, or by manipulating pre-existing statistical data using computational techniques. Quantitative research focuses on gathering numerical data and generalizing it across the population or explaining a particular phenomenon. This is why the proponents used this method to present the results of the study mathematically, which helps analyze the results more accurately.

Instrumentation

This section presents the assembly of hardware as well as software

programming for both Arduino and MATLAB.

Hardware

The list of the different components used in developing the system, along with their usage and specifications, is presented in this part. The system assembly and circuit implementation are also presented here.

• Materials Used:

-Serial Camera

The serial camera was used in the actual test to capture the vehicles entering the parking area. The module has a few built-in features, such as the ability to change the brightness, saturation, and hue of images, auto-contrast and auto-brightness adjustment, and motion detection. The output format is standard JPEG. The lens resolution is up to VGA 640x480 at a 30 fps frame rate, and the operating voltage is 5V DC. The SNR and dynamic range are 45 dB and 60 dB, respectively. For communication, 3.3V TTL is required (three wires: TX, RX, GND).



Figure 3.1. TTL Serial Camera

-Microcontroller unit (MCU) and SD/MMC Card Shield

The Gizduino Plus with ATMEGA644 and the SD/MMC Card Shield are used to store the captured image on the SD card and upload the image to the server for recognition processing. It has an operating voltage of 3.3V, and its data interface is a Gizduino-compatible UART connection.

Figure 3.2. (a) Gizduino Plus with ATMEGA644 (b) SD/MMC Card Shield

-Triggering System

The microcontroller accepts input from any number of sources that indicate when to capture an image. The proponents used an ultrasonic sensor to measure the distance of the vehicle that will enter; the serial camera is automatically triggered when the sonar sensor detects an object 75 centimeters from the boom.

Figure 3.3. Ultrasonic Sonar Sensor

-Servo Motor

The servo motor rotates the boom 90 degrees upon entry when the vehicle is recognized and registered in the database; otherwise, the servo will not rotate.

Figure 3.4. Servo Motor



• Circuitry

Figure 3.5 Circuit of the system’s hardware

The circuitry of the system basically consists of the MCU, SD/MMC card shield,

serial camera, ultrasonic sonar sensor and servo motor.

Software

All the mathematical models used in developing the system are presented in this section.

-MATLAB

-Arduino IDE

a. License Plate Localization

Gray-scale image processing is used to extract the license plate location in the image. To do this, conversion from an RGB picture to gray-scale is necessary, which is given as:

Gray = 0.2989 \cdot Red + 0.5870 \cdot Green + 0.1140 \cdot Blue \qquad (3.1)
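A minimal sketch of Eq. (3.1) in MATLAB (the built-in rgb2gray uses these same coefficients; the file name is hypothetical):

% Weighted RGB-to-gray conversion per Eq. (3.1).
img  = im2double(imread('vehicle.jpg'));
gray = 0.2989*img(:,:,1) + 0.5870*img(:,:,2) + 0.1140*img(:,:,3);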


b. Character Segmentation

Before undergoing the connected component analysis method, which is needed to segment each of the characters in the license plate, binarization of the license plate image must be done first. To determine the threshold for Niblack's binarization method, the formula below is used:

T(x, y) = m(x, y) + k \sqrt{ \frac{\sum_i p_i^2}{NP} - m(x, y)^2 } = m(x, y) + k \cdot \mathrm{std}(x, y) \qquad (3.2)

where NP represents the number of pixels in the gray image, m and std denote the mean and standard deviation, respectively, of the pixels p_i, and the parameter k is fixed by the proponents to the value of -0.2.

c. Character Recognition

For character recognition, especially in training the neural network, each neuron uses a transfer function to determine its output based on its input; both the hidden and the output layer must have a transfer function. The logarithmic-sigmoid transfer function is used in this system for both the hidden and output layers. This function takes an input valued between negative infinity and positive infinity and outputs a value between zero and one, and is defined as:

logsig(x) = \frac{1}{1 + e^{-x}} \qquad (3.3)

where x represents the input of the specific layer.


To determine the number of iterations that must be done in training the neural network, a threshold on the mean squared error must be set. This stopping criterion is defined as:

MSE = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - f(x_i) \right)^2 \qquad (3.4)

where y_i is the i-th value of the variable to be predicted, x_i is the i-th value of the explanatory variable, and f(x_i) is the predicted value of y_i (also termed \hat{y}_i).

Data Gathering Procedure

The generalized and specific algorithms used in developing the system and their comprehensive flowcharts are presented in this section.

• General Procedure

The general procedures used in the study, Real-Time Vehicle Parking Logging

System with the use of Multi-Layered Artificial Neural Network, are stated in the figure

below:

Figure 3.6 General Procedures



➢ Image Acquisition

Figure 3.7 Scale model of the ALPR

For the scale model shown in Figure 3.7, some of the most relevant data of the system will be provided by the serial camera. Again, the serial camera is interfaced with the microcontroller, and the latter has the function of transmitting the captured image to the distant server in order to process the input data. Regarding image acquisition, the distance and viewing angle of the serial camera with respect to the vehicles will be one of the biggest issues in the study. This is because some of the parameters of the designed system are already fixed; for example, the trained data of the neural network has an assigned resolution, and if the input image does not meet the minimum or exceeds the required resolution of the system, a higher percentage of error will occur. As a result, the proponents considered the following scenarios to test the capability of the designed system to respond to the input data, considering the distance and viewing angle of the camera relative to the vehicles. The proponents also set the distance of the camera to the vehicles and to the ground. Using the three-dimensional axes of the system as the position identifier, the following scenarios are presented:

Scenario 1: The X-axis position of the camera is varied while the Y and Z axis coordinates are held constant.

Figure 3.8 X-axis position of the camera

Figure 3.8 displays the position of the camera along the X-axis, given that the Y and Z axis coordinates are held constant. The proponents aim to identify the effect of changes in the resolution of the license plate region on the accuracy of the system.

Scenario 2: The Y-axis and Z-axis positions are varied while the remaining coordinates are held constant.

Figure 3.9 (a) Y-axis position (b) Z-axis position

Figure 3.9 shows the variations of the camera position along the other two axes; by doing this, the effect of viewing-angle variations can be determined.

Scenario 3: Captured image under varying light conditions with the camera positioned at the center.

The camera, placed at the origin of the three axes, will be subjected to varying light intensity. The parameter to be considered in this experiment is the Y signal, or luminosity, of the image, which can be computed from the gray-scaled image; the mean2 function will then be used to calculate the two-dimensional mean of the image to obtain its luminosity.
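A minimal sketch of this luminosity measurement (the file name is hypothetical):

% Luminosity as the two-dimensional mean of the gray-scaled image.
img  = imread('vehicle.jpg');
gray = rgb2gray(img);
Y    = mean2(gray);     % mean2 computes the mean over all pixels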

For the second problem, with a set number of images, the gray-scaled image and the processed image will be compared with one another using the edge statistics method, specifically their horizontal and vertical histogram values. This is done to obtain the average difference between the original and processed images that can completely segment the image. The edge statistics process can be seen in the specified algorithms section.

Lastly, the accuracy will be obtained by dividing the total number of successes by the total number of trials. Processing time will be obtained using the MATLAB functions tic and toc and, for Arduino, the millis function. The sum of the processing times of all the applied methods will be the basis for considering the designed system real-time.
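A minimal sketch of the tic/toc timing on the MATLAB side (the stage shown is hypothetical):

% Timing one processing stage; the per-stage times are summed for the
% real-time assessment described above.
tic;                                                % start the stopwatch
bw = imbinarize(rgb2gray(imread('vehicle.jpg')));   % example stage under test
stageTime = toc;                                    % elapsed seconds for this stage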

• Specified Algorithms

Here are the specific procedures for conducting the study:

a. Image Capturing and Transferring

The first part of the experimentation procedure is to obtain samples. The first step is to obtain a value from the ultrasonic sonar sensor; when the distance to the vehicle is 75 cm, the serial camera is triggered to take an image of the vehicle. The image is then stored on the SD card inserted in the SD/MMC shield. Lastly, the image is transferred to the server through serial processing.

Figure 3.10. Flowchart of Image Capturing



b. License Plate Extraction

The next part of the procedure is license plate extraction. The first step is to convert the image from a colored image to a gray-scale image. Then, noise is removed with non-linear filtering to smoothen the image. Next is dilation morphology, to further sharpen the edges, fill holes and gaps in the image, and make edge processing easier.

Next, all of the pixels are evaluated along both the x and y axes. The vertical and horizontal edges go through vertical edge processing and horizontal edge processing respectively, wherein a pixel's histogram value is evaluated by computing the difference between the magnitudes of neighboring pixels.

After all the pixels are processed and the total difference obtained, a digital low-pass filter is applied in order to avoid loss of important information in the image: the left and right sides of each histogram value are averaged to obtain the new histogram value. Next, both histograms are passed through a band-pass filter to remove unnecessary histogram values. With the obtained histogram values, detecting probable license plate regions becomes possible, and using both the horizontal and vertical data, the true plate region is extracted from the image.



Figure 3.11a. Flowchart of License Plate Extraction



Figure 3.11b. Flowchart of License Plate Extraction
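A minimal sketch of the edge-projection idea described above (illustrative only; the exact filters of the designed system are given in the flowcharts, and the 0.5 band threshold is an assumption):

% Horizontal and vertical edge histograms for plate localization.
gray = im2double(imread('vehicle_gray.png'));   % hypothetical pre-processed image
horzHist = sum(abs(diff(gray, 1, 2)), 2);       % row-wise neighboring-pixel differences
vertHist = sum(abs(diff(gray, 1, 1)), 1);       % column-wise differences
horzHist = movmean(horzHist, 5);                % smoothing in place of the low-pass filter
vertHist = movmean(vertHist, 5);
rowBand = horzHist > 0.5 * max(horzHist);       % rows with high edge activity
colBand = vertHist > 0.5 * max(vertHist);       % columns with high edge activity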



Niblack’s Binarization Method

After extracting the license plate region, the image is binarized so that the license plate characters become logic 1 and the background logic 0. The algorithm used for binarization is Niblack's method, since adaptive thresholding possesses advantages over global thresholding methods. The constants k and b are set to -0.2 and 18 respectively, where k determines how much of the total print object edge is retained; the value of k and the size b of the sliding window define the quality of the binarization. Evaluating the image one window of the set size per iteration, the mean and standard deviation are computed, since they are necessary for computing the threshold value of that rectangle.

Figure 3.12. Flowchart of Niblack’s method of binarization
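A minimal sketch of this thresholding loop (a sketch under the stated k = -0.2 and b = 18, not the exact thesis code):

% Niblack binarization per Eq. (3.2).
gray = im2double(imread('plate_region.png'));   % hypothetical extracted plate
k = -0.2;  b = 18;
m = imfilter(gray, ones(b)/b^2, 'replicate');   % local mean over each b-by-b window
s = stdfilt(gray, ones(b+1));                   % local standard deviation
                                                % (stdfilt needs an odd size, so b+1 = 19)
T = m + k .* s;                                 % per-pixel Niblack threshold
bw = gray > T;                                  % characters -> 1, background -> 0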



c. Character Segmentation

The next part of pre-processing is character segmentation, where the CCA method is used. Eight-connectivity is used for the image evaluation, where each non-background pixel is evaluated. If a non-background pixel has no neighboring pixel that is already labelled, it is given the current label value; otherwise, the smaller label value takes priority as the current pixel's value. The possible license plate characters are then obtained by taking the highest and lowest pixel coordinates of both the rows and columns of each labelled segment, using the MATLAB function regionprops('BoundingBox'). Since license plate characters have similar matrix sizes, pixel ratio analysis is used, wherein character regions are only saved when both their row and column sizes pass the criteria; this is repeated until all segments are processed.

Figure 3.13a. Flowchart of Character Segmentation



Figure 3.13b. Flowchart of Character Segmentation
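A minimal sketch of this segmentation step (the ratio criteria are hypothetical placeholders for the pixel ratio analysis):

% Character segmentation with CCA and a size-ratio check.
bw = imread('plate_binary.png');             % hypothetical binarized plate
cc = bwconncomp(bw, 8);                      % 8-connectivity labelling
boxes = regionprops(cc, 'BoundingBox');
chars = {};
for i = 1:cc.NumObjects
    box = boxes(i).BoundingBox;              % [x y width height]
    ratio = box(4) / box(3);                 % height-to-width ratio
    if ratio > 1 && ratio < 5 && box(4) > 10 % plausible character proportions
        chars{end+1} = imcrop(bw, box);      % save this segment as a character
    end
end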



d. Character Recognition

➢ Network Training

The back-propagation algorithm is a common method of teaching artificial neural networks how to perform a given task. The algorithm has two phases. First, a training input pattern is presented to the network input layer, and the network propagates the pattern from layer to layer until the output pattern is generated by the output layer. Second, if this pattern is different from the desired output, an error is calculated and propagated backwards from the output layer to the input layer, with the weights modified as the error is propagated. These phases are repeated until the stopping criterion is satisfied.

Figure 3.14a. Flowchart of Back Propagation Algorithm



Figure 3.14b. Flowchart of Back Propagation Algorithm
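A minimal Neural Network Toolbox sketch of this training procedure, assuming the parameter values of Table 3.3 (the actual thesis code and data file may differ):

% Back-propagation training using the parameters listed in Table 3.3.
% X holds one segmented character per column; T holds one-hot targets.
load('training_set.mat', 'X', 'T');      % hypothetical prepared training data
net = feedforwardnet(60, 'traingd');     % 60 hidden neurons, gradient-descent backprop
net.layers{1}.transferFcn = 'logsig';    % log-sigmoid hidden layer, per Eq. (3.3)
net.layers{2}.transferFcn = 'logsig';    % log-sigmoid output layer
net.trainParam.goal   = 1e-4;            % MSE goal
net.trainParam.lr     = 0.001;           % learning rate
net.trainParam.epochs = 5000000;         % maximum epochs
net = train(net, X, T);                  % weights updated as the error propagates back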



➢ Feed-forward network

Feed-forward neural networks consist of an input layer of source neurons, at least one hidden layer of computational neurons, and an output layer of computational neurons. The input signals are propagated in a forward direction on a layer-by-layer basis, that is, from input to output.

Figure 3.15. Flowchart of Feed forward neural network
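A minimal sketch of one such forward pass with log-sigmoid activations (all sizes and weights here are hypothetical stand-ins for trained values):

% Layer-by-layer forward propagation, from input to output.
logsig = @(x) 1 ./ (1 + exp(-x));        % Eq. (3.3)
x  = rand(180, 1);                       % hypothetical input feature vector
W1 = randn(60, 180);  b1 = randn(60, 1); % hypothetical hidden-layer parameters
W2 = randn(36, 60);   b2 = randn(36, 1); % hypothetical output-layer parameters
h  = logsig(W1 * x + b1);                % hidden layer output
y  = logsig(W2 * h + b2);                % network output, one score per class
[~, predictedClass] = max(y);            % winning class index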



Data Mining

The last part of the pre-processing is data mining. After getting the recognized characters of the sample license plate from the neural network, the recognized characters are matched against the database of the system. The database consists of 40 sample license plates as trained data, and all the alphanumeric characters gained from the sample plates are considered individual trained data of the system. Since the covered license plates are composed of three letters and three numbers, data mining starts with the first three letters. Scanning the whole database from left to right, the system searches for all possible matches of the first three letters; in the process, all unmatched entries are eliminated from the search, and the most-matched trained data is displayed. The process continues until all the recognized characters of the test subject find their matching characters in the database; otherwise, the recognized characters are stored in the database as a new trained data set.
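A minimal sketch of this left-to-right matching (the plate strings are hypothetical):

% Left-to-right elimination against the trained database.
database   = {'ABC123', 'XYZ789', 'NKD456'};  % hypothetical registered plates
recognized = 'ABC128';                        % hypothetical ANN output
candidates = database;
for pos = 1:numel(recognized)                 % scan characters left to right
    hit = cellfun(@(p) pos <= numel(p) && p(pos) == recognized(pos), candidates);
    if ~any(hit), break; end                  % no candidate matches this position
    candidates = candidates(hit);             % eliminate unmatched entries
end
% The surviving candidates are the most-matched trained data; a plate with
% no full match would be stored in the database as a new trained data set.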

Statistical Treatment of Data

The data gathered from the MCU and after data processing will be statistically treated so that the necessary analysis can be carried out throughout the analysis and experimentation. The mean and standard deviation will be used to analyze the data numerically.
The formula for the mean is:

\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i \qquad (3.5)

The mean will be used to obtain the average processing time and the average accuracy of the designed process. The standard deviation will be used to measure the stability of the designed system, to see whether there are significant drop-offs or overshoots in its performance.

\sigma = \sqrt{ \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n} } \qquad (3.6)

The accuracy of the system will be determined by dividing the total number of successes by the total number of trials.

Accuracy = \frac{\text{Successful Trials}}{\text{Total Trials}} \qquad (3.7)
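A minimal sketch of Eqs. (3.5) to (3.7) on hypothetical trial data:

% Mean, standard deviation, and accuracy of a set of trials.
times   = [0.92 1.05 0.98 1.10 0.95];        % processing time per trial, in seconds
success = [1 1 0 1 1];                       % 1 = successful trial, 0 = failed trial
avgTime  = mean(times);                      % Eq. (3.5)
stdTime  = std(times, 1);                    % Eq. (3.6), dividing by n
accuracy = sum(success) / numel(success);    % Eq. (3.7)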

Table of Inputs Subjected to the Experimentation

A set of inputs will be used in conducting the simulation and scale-model testing of the designed system, to gather relevant data addressing the problems formulated in the study. For the mathematical model, Table 3.1 displays the parameters that will be used in the simulation on the server.

Table 3.1 Input parameters for mathematical model simulation

Parameters          Input
Camera resolution   240 x 320

For the scale model testing, the following parameters will be used in the hardware testing, as shown in Table 3.2.

Table 3.2 Testing parameters

Parameters                      Inputs
Type of message                 Image (JPEG format)
Number of pixels                240 x 320
Number of scenarios             3
Number of trials per scenario   5
Testing time per scenario       3 minutes

For the network training, the following parameters will be coded in Arduino and MATLAB, as shown in Table 3.3.

Table 3.3 Network training parameters

Parameters                 Inputs
Mean square error goal     <= 0.0001
Number of hidden neurons   60
Maximum epochs             5,000,000
Learning rate              0.001
Training set sizes         36, 71, 100 and 240

Chapter IV

PRESENTATION, ANALYSIS AND INTERPRETATION OF DATA

This chapter presents the results and analysis of the proposed system in the study. The input image was taken by the serial camera at a resolution of 320 x 240 pixels, and the system was implemented in MATLAB. After taking photos of 25 vehicles, the following are the (partial) results of the respective processes.

A. Image Localization

Figure 4.1: Pre-processing image results for localization: (a) Original Image, (b) Gray-Scale Image, (c) Filtered Image, (d) Dilated Image, (e) Horizontal Scan Image, (f) Vertical Scan Image, (g) License Plate Region, (h) Binarized Image

Images (a) to (h) show the step-by-step process of license plate localization. The camera was placed at the center of the car, at a height of 60 cm and a distance of 75 cm with respect to the vehicle. The results of this set-up show a significant success rate for plate localization. Hence, the height of the camera from the ground was kept fixed while the distance of the camera to the vehicle was varied accordingly.

- Varying the distance

With the distance varying and the height of the camera constant (60 cm), the following are the computed percentages of success in recognizing the license plate region for the 25 vehicles.

Table 4.1: Percentage success of recognizing the license plate region, varying the distance of the camera to the vehicle

Distance from the vehicle (cm)   Percentage of success (%)
50                               28
75                               52
100                              44
125                              68
150                              48

Based on the given results, the 125 cm distance from the vehicle has a higher success rate of identifying the plate region than the others, while the 50 cm distance has the least. However, the results at 125 cm are less reliable than those at 75 cm: at 125 cm, the captures contained not only the license plate region but also the surroundings of the plate region, whereas at 75 cm all the captured regions were accurately the plate region only. Figures 4.2 to 4.5 show the comparison of image results between the 125 cm and 75 cm distances.

Figure 4.2: Images captured at 125 cm distance

Figure 4.3: Detected license plate region at 125cm distance



Figure 4.4: Images captured at 75 cm distance

Figure 4.5: Detected license plate region at 75 cm distance


However, there are some instances where the 75 cm distance image identifies another region instead of the plate region (see Figures 4.6 and 4.7). This is because those regions have a higher difference between the magnitudes of neighboring pixels than the license plate region.

Figure 4.6: Images captured at 75 cm distance

Figure 4.7: Detected plate region at 75 cm distance



- X-axis Variation

Setting the camera at the center of the vehicle at a fixed distance of 150 cm, the camera was moved horizontally by 40 cm and by 80 cm from the center. The percentage success of recognizing the license plate region while varying the horizontal distance of the camera to the vehicle is shown in Table 4.2.

Table 4.2: Percentage success of recognizing the license plate region, varying the horizontal distance of the camera to the vehicle

Distance from the center (cm)   Percentage of success (%)
40                              48
80                              36

The 40 cm horizontal distance from the center shows a higher success rate of recognizing the plate region than the 80 cm horizontal distance. Nevertheless, more than half of the recognitions failed. This is because the same problem as in the first set-up (varying the distance of the camera to the vehicle) was encountered: some other regions have a higher difference between the magnitudes of neighboring pixels than the license plate region. To show the image results for the 40 cm and 80 cm horizontal distances, the following success and failure images are displayed.



- For 40 cm horizontal distance

Figure 4.8: Images captured at 40 cm horizontal distance

Figure 4.9: Successful detection of the plate region at 40 cm horizontal distance

Figure 4.10: Images captured at 40 cm horizontal distance (a & b); failed detection of the plate region at 40 cm horizontal distance (c & d)

- For 80 cm horizontal distance

Figure 4.11: Images captured at 80 cm horizontal distance

Figure 4.12: Successful detection of the plate region at 80 cm horizontal distance

Figure 4.13: Images captured at 80 cm horizontal distance (left); failed detection of the plate region at 80 cm horizontal distance (right)
- Luminosity

Since the majority of reliable results came from the 75 cm distance, the proponents used that distance to determine the range of luminosity values that gives a higher percentage success in recognizing the plate region.

Table 4.3: Luminosity value at 75 cm distance

Vehicle    Luminosity at 75 cm    Success    Fail
1          122.1627               ✓
2          127.7624               ✓
3          137.2689               ✓
4          121.9346               ✓
5          138.917                ✓
6          140.556                ✓
7          123.229                ✓
8          132.7774               ✓
9          154.0718               ✓
10         139.5105               ✓
11         133.1734               ✓
12         141.6663               ✓
13         119.0135               ✓
14         136.2395               ✓
15         160.9698               ✓
16         39.53302                          ✓
17         71.99651                          ✓
18         47.36117                          ✓
19         14.72848                          ✓
20         128.9205                          ✓
21         138.2576                          ✓
22         131.9526                          ✓
23         123.9636                          ✓
24         120.9085                          ✓
25         120.5023                          ✓

Based on Table 4.3, the range of luminosity values that showed a higher probability of localizing the license plate region was between 110 and 140.

B. Character Segmentation

After the localization of the plate region, the next step is character segmentation. Using connected component analysis, the following images were gathered.

Figure 4.14: Successfully segmented characters of the license plate

Figure 4.15: Unsuccessfully segmented characters of the license plate



Figures 4.14 and 4.15 show successful and failed segmentation in the system. One factor that affects the result of character segmentation is the physical condition of the license plate: if the plate is in poor condition, the segmentation success rate is likely to be lower, and most of the segmentation failures had this issue. In the failed results, the segmented character is either incomplete or distorted; such characters may be covered in dirt or displaced from their position. However, these results are not yet considered failures: after connected component analysis, the results proceed to data mining, which re-processes the failed results and can still arrive at a correct match.

Data Set Training

The proponents trained four different sets of training data, each with a different number of character samples. In the frequency graphs, the x-axis is the denomination of the characters (0 to 9 represents the digits, while 10 to 35 represents the letters), and the y-axis is the frequency of each character. The following figures show the characters used to train the data sets and the four different sets of training data.

Figure 4.16: The Characters used to train the data sets

Figure 4.17: The first data set contains 36 characters



Figure 4.18: The second data set contains 71 characters

Figure 4.19: The third data set contains 100 characters



Figure 4.20: The fourth data set contains 240 characters

Each set of training data, namely 36, 71, 100, and 240 characters, was tested in order to determine the recognition accuracy. The test set consisted of 240 characters. Figure 4.21 displays the results:

Figure 4.21: Accuracy of the set of training data



Based on the given results, the first data set recognized 121 of 240 characters (50.42%), the second recognized 165 of 240 (68.75%), the third recognized 185 of 240 (77.08%), and the fourth recognized 237 of 240 (98.75%).

Among the four sets of training data, the 240-character set shows the highest success rate in recognizing characters. The weights generated by this network training were used for the final simulation of the study.

Figure 4.22: Actual Training Data of MSE (MSE vs Iterations)



Figure 4.22 shows the actual training curve of the mean squared error; the goal was set to 0.0001, and training stops once the MSE reaches it.

C. Character Recognition

After segmenting the plate region, character recognition is implemented using the multi-layered artificial neural network. The results gathered consist of two data sets: the first consists purely of new plates, while the second combines old and new plates. Table 4.4 shows the results:

Table 4.4: Vehicle Identification Success Rate

Plate        Successfully Recognized    Total No. of Images    Success Rate
New Plate    43                         50                     86%
All Plates   50                         132                    38%


For the first data set, which is purely new plates, the system identified 43 of 50 images, an 86% success rate. On the other hand, when the 82 old plates were combined with the 50 new plates, the success rate dropped to 38%: of the 82 old plates added to the system, only 9 were recognized. The results for the second data set thus decreased drastically compared to the first.

Figure 4.23: Successful Plate Localization, Character Segmentation and Recognition, and Vehicle Identification

Figure 4.24: Successful Plate Localization, Character Segmentation and Recognition, and Vehicle Identification
D. Data Mining

The system scans the recognized string character by character, comparing each character and its neighbors against the license plates in the database. A mismatch counter is kept for each candidate plate; if the counter exceeds a set threshold, that plate is removed from the list and the system moves on to the next plate.

The overall system had a success rate of 86%, with 43 out of 50 vehicles identified. However, of these 43 correctly identified vehicles, only 7 had every character recognized correctly; the remaining 36 had minor errors in character recognition. Fortunately, the data mining stage proved highly effective at absorbing these errors. Shown below are figures of a few vehicles that were correctly identified despite a small error in character recognition.



Figure 4.25: Successful Vehicle Identification with Error in Recognition

Figure 4.26: Successful Vehicle Identification with Error in Recognition

The license plate was still successfully identified because most of the characters were recognized correctly, so when the recognized string was scanned against the database, the mismatch count did not exceed the threshold.



Some characters are misclassified because they share similar features with other characters. Owing to the limitations of the camera, the distinctive feature that differentiates similar characters is absent or very faint, diminished enough that the artificial neural network cannot detect it. For example, the small line at the lower right of the letter "Q", or serif, is nearly indistinguishable. Another factor is the frequency of each character in the training data: only 3 "Q" images were trained, compared with numerous "0"s and "O"s. The image below shows this comparison.

Figure 4.27: Comparison of serif in letter “Q”

In the table below, the proponents summarize, for each character listed in the first column, the number of successful recognitions over the total number of license plates containing that character.

Table 4.5: Misrecognized Characters

CHARACTER    RECOGNIZED AS:
1            P(1|1) 17/18    P(I|1) 0/18    P(7|1) 1/18
7            P(1|7) 1/7      P(I|7) 0/7     P(7|7) 6/7
0            P(0|0) 6/6      P(O|0) 0/6     P(Q|0) 0/6
O            P(0|O) 1/7      P(O|O) 6/7     P(Q|O) 0/7
Q            P(0|Q) 1/2      P(O|Q) 0/2     P(Q|Q) 1/2
5            P(5|5) 14/15    P(S|5) 1/15
Z            P(Z|Z) 0/3      P(2|Z) 3/3
2            P(Z|2) 0/18     P(2|2) 18/18

The first column lists the characters that were misclassified by the system, and the succeeding columns give the probability of each recognition outcome for that character. Shown below are a few examples of the failed recognitions of these characters.

Figure 4.28: Failed Recognition of “7” as “1”



Figure 4.29: Failed Recognition of “Q” as “O”

Figure 4.30: Failed Recognition of “Z” as “2”

Despite these failed recognitions, the system was still able to identify the vehicles correctly, as seen in the lower part of Figures 4.28 to 4.30.



Localization of the Plate Region

To localize the plate region, the proponents used the vertical and horizontal differences of neighboring pixels. The following results were gathered for the license plates of 25 vehicles.

Table 4.6: Vertical and Horizontal Difference of 25 Vehicles

Vehicle No.    Vertical Difference    Horizontal Difference
1              1.7813                 4.4875
2              1.6125                 1.6313
3              0                      0
4              0                      10.5188
5              0                      11.1125
6              0                      20.9563
7              1.9813                 3.9594
8              1.1625                 4.7031
9              0.4938                 14.0906
10             0                      13.5625
11             0                      0.4313
12             2.6                    0
13             0                      5.6594
14             0.025                  15
15             0.0063                 10.9969
16             0                      8.4281
17             0                      12.7625
18             0.2875                 3.525
19             0.8125                 1.2906
20             0.5938                 3.0063
21             0                      3.35
22             0                      0.0219
23             0                      0
24             0                      5.3531
25             0                      7.1531
Average        0.4542                 6.4800

Based on the given results, a plate can be localized when the average vertical and horizontal differences of the neighboring pixels are around 0.4542 and 6.4800, respectively.

E. Graphical User Interface (GUI)

Figure 4.31: Graphical user interface of the whole system

Every process in the system is presented in a graphical user interface (GUI). This is the final front end of the system, displaying all the pre-processing results and the database.



Time Elapsed for System Processing

Table 4.7: Summary of Elapsed Time

Vehicle    Arduino-to-MATLAB     Localization    Recognition    Data Mining    Arduino Servo    Total Software    Total Time
           Interfacing Time (s)  Time (s)        Time (s)       Time (s)       Time (s)         Time (s)          (s)
1          1.57094               0.550302        0.356966       0.019297       0.048481         0.975046          2.54598
2          1.89610               0.453337        0.274469       0.008401       0.049597         0.785804          2.68190
3          1.51785               0.730655        0.370793       0.007113       0.033531         1.142092          2.65994
4          1.83936               0.768532        0.346883       0.001963       0.016436         1.133814          2.97317
5          1.69611               0.37354         0.364757       0.00188        0.028178         0.768355          2.46446
6          1.71088               0.367995        0.377169       0.001584       0.019774         0.766522          2.47740
7          1.97974               0.280098        0.327508       0.008181       0.021750         0.637537          2.61727
8          1.92456               0.317368        0.347616       0.002002       0.020896         0.687882          2.61244
9          1.87887               0.326254        0.321398       0.001293       0.018032         0.666977          2.54584
10         1.82773               0.252798        0.216779       0.00139        0.009741         0.480708          2.3084
11         1.95786               0.232504        0.269649       0.003636       0.020914         0.526703          2.48456
12         1.82787               0.296633        0.316259       0.001215       0.019265         0.633372          2.46124
13         1.96699               0.333817        0.340108       0.00092        0.018792         0.693637          2.66062
14         1.87156               0.378439        0.336946       0.001122       0.011505         0.728012          2.59957
15         1.58559               0.469242        0.345803       0.001406       0.011421         0.827872          2.41346
Avg        1.8035                0.408768        0.32754        0.004094       0.023221         0.763622          2.56709

Standard deviation of total time: 0.1533

As seen in the table above, the average total processing time for the software alone is only 0.763622 s, which can be considered real-time. With the interfacing added, the average total time is around 2.56709 s, which can be considered nearly real-time, and the standard deviation of 0.1533 s indicates little variation in the system's processing time. It also implies that there is no overshoot or significant drop-off in the system's performance.



Chapter V

SUMMARY OF FINDINGS, CONCLUSION, AND RECOMMENDATIONS

This chapter discusses the summary of the findings of the study, the

researchers’ conclusions, and recommendations.

Summary of Research

This study aimed to design a system that translates and recognizes license plate images as readable text for use in parking applications. The system was divided into four stages: image localization, image segmentation, image recognition, and data mining. Machine learning was employed to give the system an adaptive method of translating the image into text, and parameters such as accuracy and processing time were measured to evaluate the performance of the system.

Findings

1. A. License Plate Resolution

The license plate resolution was varied by adjusting the distance of the vehicle from the camera, which changes the number of pixels, and hence the detail, in the LP image. The proponents used distances of 50-150 cm in steps of 25 cm. Table 4.1 shows the localization accuracy rates, with 125 cm having the highest accuracy. However, as the proponents observed, images that were not segmented properly had objects on both sides of the plate, which may cause problems in the recognition stage.

B. Viewing Angle

The viewing angle was varied by changing the camera position horizontally. Using the center of the license plate as reference, the camera was offset 40 cm and 80 cm to the left and right of the center. Table 4.2 shows that as the camera lens moves farther from the center of the license plate, the localization accuracy decreases.

C. Luminosity

Since old models of license plates, which use non-reflective materials, are still in circulation, luminosity is another parameter the proponents wished to evaluate. As shown in Table 4.3, most license plates with a luminosity of 110 to 140 were properly localized by the system.

2. Most un-localized plates had a markedly different horizontal or vertical difference. In addition, most localized plates had a vertical difference within the range 0-2, averaging 0.4542, and an average horizontal difference of 6.48.

3. On average, the software processes finish all computations in well under a second (0.76 s), as seen in Table 4.7. According to the same table, localization of the license plate took most of the elapsed time while data mining took the least. However, if the system were applied on a larger scale, data mining might have the biggest impact on processing time, since the database would be significantly larger. For accuracy, the system performs very well on new plates, at around 86%, but only at 38% when old plates are included.

Conclusion

1. A. In conclusion, varying the distance of the camera from the license plate had a significant effect on the accuracy of the system. It can also be concluded that 75 cm and 125 cm are the only acceptable camera distances; however, because other objects enter the frame when the camera is significantly far from the vehicle, 75 cm is the more optimal choice.

B. Placing the camera farther from the center of the license plate has a significant effect on accuracy, dropping the system's accuracy below 50%.

C. Of the 15 vehicles with a luminosity in the range 110-140, 10 were successfully localized, compared with 5 of the 10 vehicles at other luminosity values. Therefore, for actual applications with sufficient lighting, an image luminosity within the said range is needed for improved system performance.

2. In conclusion, the averages for the horizontal and vertical differences are 6.48 and 0.4542, respectively.

3. In conclusion, the system can be classified as real-time, with an average of 0.763622 seconds for the software processes, which include image localization, image recognition, and data mining; with interfacing added, the average total time is around 2.56709 seconds. The system performs more accurately on new-model license plates. Unfortunately, on old-model plates, particularly the design with the Rizal Shrine in the middle, the system performs poorly.

Recommendations

Based on the preceding findings and the corresponding conclusions, the researchers make the following recommendations:

1. The researchers used a 240x320 resolution in order to achieve a real-time system suited to the processor used. Future researchers may use cameras with a higher resolution, provided the hardware specifications are improved accordingly so that processing time is not greatly affected; this should lead to higher accuracy.

2. The proponents' neural network is applied only to the recognition stage, not to localization and segmentation. It is suggested that machine learning also be integrated into plate localization and segmentation to obtain a wholly adaptive system that can achieve maximum performance.

3. The system developed by the researchers is highly dependent on a computer to run the processing and complete the system. Future researchers may be able to implement the system as a standalone design on an embedded platform such as an Arduino or a Raspberry Pi.

4. For this study, the proponents used 240 characters for the training data set. Although this produced a high success rate, the data set may still be enlarged, and the effect of doing so on the system's accuracy observed.


5. The researchers obtained a lower success rate when recognizing old license plates than new ones. One of the main reasons is the presence of the Rizal statue in the center of the old plate design: it was localized and recognized as an extra character, resulting in failures in data mining. The researchers suggest training the system on a segmented image of the Rizal statue and, when it is recognized, letting the system disregard that character (a brief sketch follows).

6. This study is limited to stationary vehicles. One way to improve the usefulness of the system would be to enable it to recognize license plates on vehicles in motion.

Bibliography

Abolghasemi, V., & Ahmadyfard, A. (2007). A Fast Algorithm for License Plate Detection.

International Conference on Advances in Visual Information Systems, 468-477.

Agarwal, M., & Kaushik, B. (2015). Text recognition from image using Artificial Neural

Network and Genetic Algorithm. IEEE International Conference on Green Computing

and Internet of Things, 1610-1617.

Anagnostopoulos, C., Anagnostopoulos, I., Loumos, V., & Kayafas, E. (September 2006). A

License Plate-Recognition Algorithm for Intelligent Transportation System Applications.

IEEE Transactions On Intelligent Transportation Systems, Vol. 7, No. 3, 377-392.

doi:10.1109/TITS.2006.880641

Anagnostopoulos, C., Anagnostopoulos, I., Psoroulas, I., Loumos, V., & Kayafas, E.

(September 2008). License Plate Recognition From Still Images and Video

Sequences: A Survey. IEEE Transactions On Intelligent Transportation Systems, Vol.

9, No. 3, 377-391. doi:10.1109/TITS.2008.922938

Azad, B., Azad, R., & Mogharreb, I. (September 2014). Iranian License Plate Character

Recognition Using Classification Ensemble. International journal of Computer Science

& Network Solutions, Vol. 2, No.9.

Babbie, E. (2010). The Practice of Social Research (12th ed.). Belmont, CA: Wadsworth

Cengage.

Dalida, J. P., Galiza, A. J., Godoy, A. G., Nakaegawa, M., Vallester, J., & Dela Cruz, A.

(2016). Development of Intelligent Transportation System for Philippine License Plate


Recognition. IEEE Region 10 Conference (TENCON), 3762-3766.

doi:10.1109/tencon.2016.7848764

Du, S., Ibrahim, M., Shehata, M., & Badawy, W. (2011). Automatic License Plate

Recognition (ALPR): A State of the Art Review. IEEE TCSVT 5656, 1-14.

Fault Determination in a Parking Lot Accident. (2017, February 1). Retrieved July 19, 2017,

from https://www.insurancehotline.com/fault-determination-in-a-parking-lot-accident/

Gohil, N. B., & Jing, P. (2010). Car License Plate detection.

Jain, A. S., & Kundargi, J. (July 2015). Automatic Number Plate Recognition Using Artificial

Neural Network. International Research Journal of Engineering and Technology

(IRJET) Vol. 02, Issue 04, 1072-1078.

Jing, Y., Youssefi, B., Mirhassani, M., & Muscedere, R. (2017). An Efficient FPGA

Implementation of Optical Character Recognition System for License Plate

Recognition. IEEE 30th Canadian Conference on Electrical and Computer Engineering

(CCECE), 32-58. doi:10.1109/ccece.2017.7946734

Joarder, M. A., Mahmud, K., Ahmed, T., Kawser, M., & Ahamed, B. (March 2012). Bangla

Automatic Number Plate Recognition System using Artificial Neural Network. Asian

Transactions on Science & Technology, Vol.02, Issue 01, 1-10. doi:10.30224/014

Karthikeyan, V., Vijayalakshmi, V. J., & Jeyakumar, P. (Jan-Feb 2013). License Plate

Segmentation Using Connected Component Analysis. IOSR Journal of Electronics and

Communication Engineering, Vol.4, Issue 5, 18-24.


Kocer, H. E., & Cevik, K. (2011). Artificial Neural Networks Based Vehicle License Plate

Recognition. Procedia Computer Science 3, 1033-1037.

doi:10.1016/j.procs.2010.12.169

Koval, V., Turchenko, V., Kochan, V., Sachenko, A., & Markowsky, G. (September 8, 2003).

Smart License Plate Recognition System Based on Image Processing. IEEE

International Workshop on Intelligent Data Acquisition and Advanced Computing

Systems: Technology and Applications, 123-127.

Lazrus, A., & Choubey, S. (2011). A Robust Method of License Plate Recognition using

ANN. International Journal of Computer Science and Information

Technologies(IJCSIT), Vol. 2 (4), 1494-1497.

Madhuri Latha, G., & Chakravarthy, G. (Sep-Oct. 2012). An Improved Bernsen Algorithm

Approaches For License Plate Recognition. IOSR Journal of Electronics and

Communication Engineering, Vol.3, Issue 4, 1-5.

Mahgoub, A., Talab, A., Huang, Z., & Junfei, W. (2014). An Enhanced Bernsen Algorithm

Approaches for Vehicle Logo Detection. International Journal of Signal Processing

(IJSIP) Image Processing and Pattern Recognition, Vol.07, No.04, 203-210.

doi:10.14257/ijsip.2014.7.4.20

Misra, C., Swain, P., & Mantri, J. (February 2012). Text Extraction and Recognition from

Image using Neural Network. International Journal of Computer Applications, Vol.40-

No.2, 13-19. doi:10.1109@ICCCSP.2017.7944086


Mudra, S., V., N., & Nijhawan, S. (June 2015). A Model for License Plate Recognition. 2015

International Journal for Research in Applied Science & Engineering Technology,

Volume 3 Issue VI.

Muijs, D. (2010). Doing Quantitative Research in Education with SPSS (2nd ed.). London:

SAGE Publications.

Öz, C., & Koker, R. (2003). Vehicle Licence Plate Recognition Using Artificial Neural

Networks. Sakarya, Turkey, accepted paper, 1-5.

Pardeshi, C., & Rege, P. (2017). Morphology Based Approach for Number Plate Extraction.

International Conference on Data Engineering and Communication Technology,

Advances in Intelligent Systems and Computing, 11-18.

Perwej, Y., & Perwej, F. (June 2012). A Neuroplasticity (Brain Plasticity) Approach to Use in

Artificial Neural Network. International Journal of Scientific and Engineering Research

(IJSER), France, Vol.03, Issue 6, 1-9.

Perwej, Y., Akhtar, N., & Parwej, F. (July 2014). The Kingdom of Saudi Arabia Vehicle

License Plate Recognition using Learning Vector Quantization Artificial Neural

Network. International Journal of Computer Applications, Vol.98, No.11, 32-38.

Poovizhichelvi, C., Jeevanantham, S., & Rukkumani, V. (2017). Automatic License Plate

Recognition (LPR) System Using Cascade Forward Neural Network. Journal of

Embedded Systems and Processing, Vol.02, Issue 01, 1-8.

Rahman, C., Badawy, W., & Radmanesh, A. (2003). A Real Time Vehicle’s License Plate

Recognition System. IEEE Conference on Advanced Video and Signal Based

Surveillance (AVSS’03) , 1971-1974.


Rani, S., & Kaur, N. (September 2016). OCR using the Artificial Neural Network with

Character Localization using Combined PCA and MSER Features. International

Journal of Computer Applications Volume 150 – No.4, 16-19.

Sagar, D., & Dutta, M. (September 2014). Block-Based Neural Network for Automatic

Number Plate Recognition. International Journal of Scientific and Research

Publications, Vol. 4, Issue 9, 1-7.

Saunshi, S., Sahani, V., Patil, J., Yadav, A., & Rathi, S. (2017). License Plate Recognition

Using Convolutional Neural Network. IOSR Journal of Computer Engineering

International Conference on Computing and Virtualization, 28-33.

Senthilkumaran, N., & Kirubakaran, C. (2014). Efficient Implementation of Niblack

Thresholding for MRI Brain Image Segmentation. (IJCSIT) International Journal of

Computer Science and Information Technologies, Vol. 5 (2) , 2173-2176.

Shafii, M. (2014). Optical Character Recognition of Printed Persian/Arabic Documents.

Electronic Theses and Dissertations, Paper 5179.

Singh, V., Parashar, H., & Vasudeva, N. (2012). Recognition of Text Image Using Multilayer

Perceptron. Mody Institute of Technology & Science, Lakshmangarh, Sikar, Rajasthan,

India , 1-4.

Thanin, K., Mashohor, S., & Al-Faqheri, W. (n.d.). An improved Malaysian Automatic

License Plate Recognition (M-ALPR) system using hybrid fuzzy in C++ environment.

International Conference on Innovative Technologies in Intelligent Systems and

Industrial Applications, 51-55.

Umbaugh, S. (1999). Computer vision and Image processing. New Jersey.


Wen, Y., Lu, Y., Yan, J., Zhou, Z., Deneen, K., & Shi, P. (September 2011). An Algorithm

for License Plate Recognition Applied to Intelligent Transportation System. IEEE

Transactions On Intelligent Transportation Systems, Vol. 12, No. 3, 830-845.

Yoon, Y., Ban, K. D., Yoon, H., & Kim, J. (2011). Blob Extraction based Character

Segmentation Method for Automatic License Plate Recognition System. IEEE, 2192-

2196.

Zheng, L., Samali, B., He, X., & Yang, L. (2010). Accuracy Enhancement for License Plate

Recognition. 10th IEEE International Conference on Computer and Information

Technology (CIT 2010), 511-516. doi:10.1109/CIT.2010.111



Appendix I

Computations

I. Percentage Success of recognizing the license plate region varying the distance
of camera to the vehicle:

Accuracy = (Success Trials / Total Trials) × 100

At 50 cm: Accuracy = (7/25) × 100 = 28%

At 75 cm: Accuracy = (13/25) × 100 = 52%

At 100 cm: Accuracy = (11/25) × 100 = 44%

At 125 cm: Accuracy = (17/25) × 100 = 68%

At 150 cm: Accuracy = (12/25) × 100 = 48%

II. Percentage Success of recognizing the license plate region varying the
horizontal distance of camera to the vehicle:

Accuracy = (Success Trials / Total Trials) × 100

At 40 cm: Accuracy = (12/25) × 100 = 48%

At 80 cm: Accuracy = (9/25) × 100 = 36%

III. Data Set Training:

Accuracy = (Success Trials / Total Trials) × 100

a) The first data set trained 36 characters; in testing it recognized 121 of 240 characters:
Accuracy = (121/240) × 100 = 50.42%

b) The second data set trained 71 characters; in testing it recognized 165 of 240 characters:
Accuracy = (165/240) × 100 = 68.75%

c) The third data set trained 100 characters; in testing it recognized 185 of 240 characters:
Accuracy = (185/240) × 100 = 77.08%

d) The fourth data set trained 240 characters; in testing it recognized 237 of 240 characters:
Accuracy = (237/240) × 100 = 98.75%

IV. Vehicle Identification Success Rate

Accuracy = (Success Trials / Total Trials) × 100

a) For the new plates: Accuracy = (43/50) × 100 = 86%

b) For the old plates: Accuracy = (7/82) × 100 ≈ 9%

c) For all plates: Accuracy = (50/132) × 100 = 37.88%

V. Vertical and Horizontal Difference of 25 vehicles:


a) For the vertical difference:

mean = Σ(Vertical Difference) / Total Number of Vehicles = 11.3565 / 25 = 0.45426

b) For the horizontal difference:

mean = Σ(Horizontal Difference) / Total Number of Vehicles = 162 / 25 = 6.4800

VI. Time Elapsed for System Processing:

mean = Σ(Total Time Accumulated) / Total Number of Vehicles

a) Arduino-to-MATLAB interfacing time: mean = 27.0525 / 15 = 1.80350

b) Localization time: mean = 6.13152 / 15 = 0.408768

c) Recognition time: mean = 4.9131 / 15 = 0.32754

d) Data mining: mean = 0.06141 / 15 = 0.004094

e) Arduino servo time: mean = 0.348315 / 15 = 0.023221

Appendix II

Source Codes:

Arduino Code:

Serial Camera with Sonar triggering

#include <SPI.h>
#include <Arduino.h>
#include <SD.h>

#define utrigger 13 // digital pin 13 of the Arduino drives the trigger (transmitter) of the ultrasonic sensor
#define uecho 3 // digital pin 3 of the Arduino reads the echo (receiver) of the ultrasonic sensor

#define PIC_PKT_LEN 128 //data length of each read, dont set this too
big because ram is limited
#define PIC_FMT_VGA 7
#define PIC_FMT_CIF 5
#define PIC_FMT_OCIF 3
#define CAM_ADDR 0
#define CAM_SERIAL Serial

#define PIC_FMT PIC_FMT_VGA

File myFile;

const byte cameraAddr = (CAM_ADDR << 5); // addr


const int buttonPin = 19; // the number of the pushbutton pin
unsigned long picTotalLen = 0; // picture length
int picNameNum = 0;
long duration, inches, cm;

/*********************************************************************/
void setup()
{
Serial.begin(115200);

pinMode(utrigger, OUTPUT);
pinMode(uecho, INPUT);
pinMode(buttonPin, INPUT); // initialize the pushbutton pin as an input
digitalWrite(buttonPin, HIGH);
Serial.println("Initializing SD card....");
pinMode(10,OUTPUT); // CS pin of SD Card Shield

if (!SD.begin(10))
{
Serial.print("initialization failed");
return;
}
Serial.println("initialization done.");
initialize();
}
/*********************************************************************/
void loop()
{
int n = 0;
while(1){
Serial.println("\r\nPress the button to take a picture");
digitalWrite(utrigger, LOW);
delayMicroseconds(2);
digitalWrite(utrigger, HIGH); //the Ultrasonic sensor will transmit 40
kHz ultrasonic burst
delayMicroseconds(10);
digitalWrite(utrigger, LOW);
duration = pulseIn(uecho, HIGH); //getting the time of echo or the
reflected ultrasonic burst
inches = microsecondsToInches(duration); // converting duration to Inches
cm = microsecondsToCentimeters(duration); // result is converted to
Centimeter

Serial.println(cm);
if (cm == 75) { // capture when the vehicle is detected at the 75 cm mark
delay(20); // brief settle delay before re-checking the reading
if (cm == 75)
{
Serial.println("\r\nCOPYING JPG TO CARD, PLS WAIT...");
delay(200);
if (n == 0) preCapture();
Capture();
GetData();

}
Serial.print("\r\nTaking pictures success ,number : ");
Serial.println(n);
n++ ;
}
}
}
/*********************************************************************/
void clearRxBuf()
{
while (Serial.available())
{
Serial.read();
}
}
/*********************************************************************/
void sendCmd(char cmd[], int cmd_len)
{
for (char i = 0; i < cmd_len; i++) Serial.print(cmd[i]);
}
/*********************************************************************/
void initialize()
{
char cmd[] = {0xaa,0x0d|cameraAddr,0x00,0x00,0x00,0x00} ;
unsigned char resp[6];

Serial.setTimeout(500);
while (1)
{
//clearRxBuf();
sendCmd(cmd,6);
if (Serial.readBytes((char *)resp, 6) != 6)
{
continue;
}
if (resp[0] == 0xaa && resp[1] == (0x0e | cameraAddr) && resp[2] == 0x0d &&
resp[4] == 0 && resp[5] == 0)
{
if (Serial.readBytes((char *)resp, 6) != 6) continue;
if (resp[0] == 0xaa && resp[1] == (0x0d | cameraAddr) && resp[2] == 0 &&
resp[3] == 0 && resp[4] == 0 && resp[5] == 0) break;
}
}
cmd[1] = 0x0e | cameraAddr;

cmd[2] = 0x0d;
sendCmd(cmd, 6);
Serial.println("\nCamera Ready.");
}
/*********************************************************************/
void preCapture()
{
char cmd[] = { 0xaa, 0x01 | cameraAddr, 0x00, 0x07, 0x00, PIC_FMT };
unsigned char resp[6];

Serial.setTimeout(100);
while (1)
{
clearRxBuf();
sendCmd(cmd, 6);
if (Serial.readBytes((char *)resp, 6) != 6) continue;
if (resp[0] == 0xaa && resp[1] == (0x0e | cameraAddr) && resp[2] == 0x01 &&
resp[4] == 0 && resp[5] == 0) break;
}
}
void Capture()
{
char cmd[] = { 0xaa, 0x06 | cameraAddr, 0x08, PIC_PKT_LEN & 0xff,
(PIC_PKT_LEN>>8) & 0xff ,0};
unsigned char resp[6];

Serial.setTimeout(100);
while (1)
{
clearRxBuf();
sendCmd(cmd, 6);
if (Serial.readBytes((char *)resp, 6) != 6) continue;
if (resp[0] == 0xaa && resp[1] == (0x0e | cameraAddr) && resp[2] == 0x06 &&
resp[4] == 0 && resp[5] == 0) break;
}
cmd[1] = 0x05 | cameraAddr;
cmd[2] = 0;
cmd[3] = 0;
cmd[4] = 0;
cmd[5] = 0;
while (1)
{
clearRxBuf();
sendCmd(cmd, 6);

if (Serial.readBytes((char *)resp, 6) != 6) continue;


if (resp[0] == 0xaa && resp[1] == (0x0e | cameraAddr) && resp[2] == 0x05 &&
resp[4] == 0 && resp[5] == 0) break;
}
cmd[1] = 0x04 | cameraAddr;
cmd[2] = 0x1;
while (1)
{
clearRxBuf();
sendCmd(cmd, 6);
if (Serial.readBytes((char *)resp, 6) != 6) continue;
if (resp[0] == 0xaa && resp[1] == (0x0e | cameraAddr) && resp[2] == 0x04 &&
resp[4] == 0 && resp[5] == 0)
{
Serial.setTimeout(1000);
if (Serial.readBytes((char *)resp, 6) != 6)
{
continue;
}
if (resp[0] == 0xaa && resp[1] == (0x0a | cameraAddr) && resp[2] == 0x01)
{
picTotalLen = (resp[3]) | (resp[4] << 8) | (resp[5] << 16);
Serial.print("picTotalLen:");
Serial.println(picTotalLen);
break;
}
}
}

}
/*********************************************************************/
void GetData()
{
unsigned int pktCnt = (picTotalLen) / (PIC_PKT_LEN - 6);
if ((picTotalLen % (PIC_PKT_LEN-6)) != 0) pktCnt += 1;

char cmd[] = { 0xaa, 0x0e | cameraAddr, 0x00, 0x00, 0x00, 0x00 };


unsigned char pkt[PIC_PKT_LEN];

char picName[] = "pic00.jpg";


picName[3] = picNameNum/10 + '0';
picName[4] = picNameNum%10 + '0';

if (SD.exists(picName))

{
SD.remove(picName);
}

myFile = SD.open(picName, FILE_WRITE);


if(!myFile){
Serial.println("myFile open fail...");
}
else{
Serial.setTimeout(1000);
for (unsigned int i = 0; i < pktCnt; i++)
{
cmd[4] = i & 0xff;
cmd[5] = (i >> 8) & 0xff;

int retry_cnt = 0;
retry:
delay(10);
clearRxBuf();
sendCmd(cmd, 6);
uint16_t cnt = Serial.readBytes((char *)pkt, PIC_PKT_LEN);

unsigned char sum = 0;


for (int y = 0; y < cnt - 2; y++)
{
sum += pkt[y];
}
if (sum != pkt[cnt-2])
{
if (++retry_cnt < 100) goto retry;
else break;
}

myFile.write((const uint8_t *)&pkt[4], cnt-6);


//if (cnt != PIC_PKT_LEN) break;
}
cmd[4] = 0xf0;
cmd[5] = 0xf0;
sendCmd(cmd, 6);
}
myFile.close();
picNameNum ++;
}

long microsecondsToInches(long microseconds)


{
return microseconds / 74 / 2;
}
long microsecondsToCentimeters(long microseconds)
{
return microseconds / 29 / 2;
}

Processing Code:
import processing.serial.*;

Serial myPort;
OutputStream output;

void setup() {

size(320, 240);

//println( Serial.list() );
myPort = new Serial( this, Serial.list()[0], 115200);
myPort.clear();

output = createOutput("pic06.jpg");
}

void draw() {

try {
while ( myPort.available () > 0 ) {
output.write(myPort.read());
}
}
catch (IOException e) {
e.printStackTrace();
}
}

void keyPressed() {

try {
output.flush(); // Writes the remaining data to the file
output.close(); // Finishes the file
}

catch (IOException e) {
e.printStackTrace();
}
}

Matlab Code:

Stand-Alone GUI code


function varargout = c(varargin)
% STANDALONEGUI MATLAB code for standalonegui.fig
% STANDALONEGUI, by itself, creates a new STANDALONEGUI or raises the
existing
% singleton*.
%
% H = STANDALONEGUI returns the handle to a new STANDALONEGUI or
the handle to
% the existing singleton*.
%
% STANDALONEGUI('CALLBACK',hObject,eventData,handles,...) calls the
local
% function named CALLBACK in STANDALONEGUI.M with the given input
arguments.
%
% STANDALONEGUI('Property','Value',...) creates a new STANDALONEGUI
or raises the
% existing singleton*. Starting from the left, property value pairs are
% applied to the GUI before standalonegui_OpeningFcn gets called. An
% unrecognized property name or invalid value makes property application
% stop. All inputs are passed to standalonegui_OpeningFcn via varargin.
%
% *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
% instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help standalonegui

% Last Modified by GUIDE v2.5 21-Feb-2018 21:41:31



% Begin initialization code - DO NOT EDIT


gui_Singleton = 1;
gui_State = struct('gui_Name', mfilename, ...
'gui_Singleton', gui_Singleton, ...
'gui_OpeningFcn', @standalonegui_OpeningFcn, ...
'gui_OutputFcn', @standalonegui_OutputFcn, ...
'gui_LayoutFcn', [] , ...
'gui_Callback', []);
if nargin && ischar(varargin{1})
gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before standalonegui is made visible.


function standalonegui_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% varargin command line arguments to standalonegui (see VARARGIN)

% Choose default command line output for standalonegui


handles.output = hObject;

delete(instrfind({'Port'},{'COM7'}));
clear s;
global s;
s=serial('COM7','BAUD', 9600); % Make sure the baud rate and COM port is
% same as in Arduino IDE
fopen(s);

GUIcounter=0;
while 1
n=num2str(GUIcounter);
if GUIcounter>=10

A=[num2str(n),'.JPG'];
A=['PIC' A];
else
A=[num2str(n),'.JPG'];
A=['PIC0' A];
end
checker=exist(A, 'file');
if checker==2

a=ones(240,320);
axes(handles.axes1);
imshow(a);
a=ones(240,320);
axes(handles.axes2);
imshow(a);
a=ones(240,320);
axes(handles.axes3);
imshow(a);
a=ones(24,12);
axes(handles.axes4);
imshow(a);
a=ones(24,12);
axes(handles.axes5);
imshow(a);
a=ones(24,12);
axes(handles.axes6);
imshow(a);
a=ones(24,12);
axes(handles.axes7);
imshow(a);
a=ones(24,12);
axes(handles.axes8);
imshow(a);
a=ones(24,12);
axes(handles.axes9);
imshow(a);
a=ones(24,12);
axes(handles.axes10);
imshow(a);
a=ones(24,12);
axes(handles.axes11);
imshow(a);

%% Import data from text file.


% Script for importing data from the following text file:
%
% C:\Users\Vince Harrold\Desktop\standalone\authorized cars.csv
%
% To extend the code to different selected data or a different text file,
% generate a function instead of a script.

% Auto-generated by MATLAB on 2018/02/21 21:50:55

%% Initialize variables.
filename = 'C:\Users\Vince Harrold\Desktop\standalone\authorized cars.csv';
delimiter = '';

%% Format string for each line of text:


% column1: text (%s)
% For more information, see the TEXTSCAN documentation.
formatSpec = '%s%[^\n\r]';

%% Open the text file.


fileID = fopen(filename,'r');

%% Read columns of data according to format string.


% This call is based on the structure of the file used to generate this
% code. If an error occurs for a different file, try regenerating the code
% from the Import Tool.
dataArray = textscan(fileID, formatSpec, 'Delimiter', delimiter, 'ReturnOnError',
false);

%% Close the text file.


fclose(fileID);

%% Post processing for unimportable data.


% No unimportable data rules were applied during the import, so no post
% processing code is included. To generate code which works for
% unimportable data, select unimportable cells in a file and regenerate the
% script.

%% Allocate imported array to column variable names


UIK633 = dataArray{:, 1};

set(handles.listbox2,'String',UIK633);
handles.UIK633=UIK633;

%% Clear temporary variables


clearvars filename delimiter formatSpec fileID dataArray ans;

load('legit12.mat')

%%Localize
a=imread(A);
axes(handles.axes1);
imshow(a);
drawnow;
GUIcounter=GUIcounter+1;
a= 0.2989 * a(:,:,1) + 0.5870 * a(:,:,2) + 0.1140 * a(:,:,3);
a=medfilt2(a);
[row,col]=size(a);
a=a((row/3)+1:row,1:col);
[row,col]=size(a);
b=a;

for (x=1:row)
for(y=2:col-1)
temp=max(a(x,y-1),a(x,y));
b(x,y)=max(temp,a(x,y+1));
end
end

difference=0;
totalsum=0;
difference=uint32(difference);
maximum=0;
maxvertical=0;
for x=2:row
sum=0;
for y=2:col
if(b(x,y)>b(x,y-1))
difference=uint32(b(x,y)-b(x,y-1));
end
if(b(x,y)<=b(x,y-1))
difference=uint32(b(x,y-1)-b(x,y));
end
if(difference >20)
sum=sum+difference;
end
end
vertical(x)=sum;

if(sum>maximum)
maxvertical=x;
maximum=sum;
end
totalsum=totalsum+sum;
end
average=totalsum/row;
sum=0;
newvertical=vertical;
for x=21:(row-21)
sum=0;
for y=(x-20):(x+20)
sum=sum+vertical(y);
end
newvertical(x)=sum/41;
end
for x=1:row
if(newvertical(x)<(average))
newvertical(x)=0;
for y=1:col
b(x,y)=0;
end
end
end
sum=0;
totalsum=0;
difference=0;
difference=uint32(difference);
maxhorizontal=0;
maximum=0;
for x=2:col
sum=0;
for y=2:row
if(b(y,x) > b(y-1,x))
difference=uint32(b(y,x)-b(y-1,x));
else
difference=uint32(b(y-1,x)-b(y,x));
end
if(difference >20)
sum=sum+difference;
end
end
horizontal(x)=sum;
if(sum>maximum)

maxhorizontal=x;
maximum=sum;
end
totalsum=totalsum+sum;
end
average=totalsum/col;
sum=0;
newhorizontal=horizontal;
for x=21:(col-21)
sum=0;
for y=(x-20):(x+20)
sum=sum+horizontal(y);
end
newhorizontal(x)=sum/41;
end
for x=1:col
if(newhorizontal(x)<average)
newhorizontal(x)=0;
for y=1:row
b(y,x)=0;
end
end
end
y=1;
for x=2:col-2
if(newhorizontal(x)~= 0 && newhorizontal(x-1) == 0 && newhorizontal(x+1) ==0)
column(y)=x;
column(y+1)=x;
y=y+2;
elseif((newhorizontal(x)~=0 && newhorizontal(x-1) ==0) || (newhorizontal(x)~=0
&& newhorizontal(x+1) ==0))
column(y)=x;
y=y+1;
end
end
y=1;
for x=2:row-2
if(newvertical(x)~= 0 && newvertical(x-1) == 0 && newvertical(x+1)==0)
row(y)=x;
row(y+1)=x;
y=y+2;
elseif((newvertical(x)~=0 && newvertical(x-1)==0) || (newvertical(x) ~= 0 &&
newvertical(x+1) ==0))
row(y)=x;

y=y+1;
end
end
[temp columnsize]=size(column);
if(mod(columnsize,2))
column(columnsize+1)=col;
end
[temp rowsize]=size(row);
if(mod(rowsize,2))
row(rowsize+1)=row;
end
for x=1:2:rowsize
for y=1:2:columnsize
if(~((maxhorizontal >=column(y) && maxhorizontal <= column(y+1)) &&
(maxvertical >=row(x) && maxvertical<=row(x+1))))
for m=row(x):row(x+1)
for n=column(y):column(y+1)
b(m,n)=0;
end
end
end
end
end
c=b;
[row,col]=size(c);
for x=1:row
for y=1:col
if c(x,y)==0
a(x,y)=0;
else
a(x,y)=a(x,y);
end
end
end
a( ~any(a,2), : ) = [];
a( :, ~any(a,1) ) = [];
axes(handles.axes2);
imshow(a);
drawnow;
%%Binarization

k=-0.2;
b=18;
[my,mx]=size(a);

thresh=zeros(my,mx);
result=zeros(my,mx);

for x=(b/2+1):(mx-b/2-1)
for y=(b/2+1):(my-b/2-1)

box=a( (y-(b/2)):(y+(b/2)), (x-(b/2)):(x+(b/2)) );


thresh(y,x)=mean(mean(double(box))) + k*std(std(double(box)));
if a(y,x)>thresh(y,x)
result(y,x)=1;
end
end
end
result( ~any(result,2), : ) = [];
result( :, ~any(result,1) ) = [];
result=~result;
c=result;
axes(handles.axes3);
imshow(c);
drawnow;
d=c;

[L Ne]=bwlabel(d);
propied=regionprops(L,'BoundingBox');
for n=1:size(propied,1)
rectangle('Position',propied(n).BoundingBox,'EdgeColor','g','LineWidth',2);
end

m=1;
handles.hAxes=[handles.axes4, handles.axes5, handles.axes6, handles.axes7,
handles.axes8, handles.axes9, handles.axes10, handles.axes11];
image=0;
for n=1:Ne
[r,c] = find(L==n);
n1=d(min(r):max(r),min(c):max(c));
[a,b]=size(n1);
if a>=12 && a<=35
if b>=8 && b<=18
axes(handles.hAxes(m));
imshow(n1);
drawnow;
m=m+1;
image=image+1;
newn1=imresize(n1,[24 12]);

newn1=reshape(newn1,[288 1]);
n11=W1*newn1;
A11=logsig(n11);
n21=W2*A11;
A21=(logsig(n21));
threshold=max(A21);
for c=1:36
if A21(c)==threshold
A21(c)=1;
else
A21(c)=0;
end
end
ad=1;
for n=1:36
if A21(n)==0
ad=ad+1;
else
break
end
end
switch ad
case 1
plate(image)='0';
case 2
plate(image)='1';
case 3
plate(image)='2';
case 4
plate(image)='3';
case 5
plate(image)='4';
case 6
plate(image)='5';
case 7
plate(image)='6';
case 8
plate(image)='7';
case 9
plate(image)='8';
case 10
plate(image)='9';
case 11
plate(image)='A';

case 12
plate(image)='B';
case 13
plate(image)='C';
case 14
plate(image)='D';
case 15
plate(image)='E';
case 16
plate(image)='F';
case 17
plate(image)='G';
case 18
plate(image)='H';
case 19
plate(image)='I';
case 20
plate(image)='J';
case 21
plate(image)='K';
case 22
plate(image)='L';
case 23
plate(image)='M';
case 24
plate(image)='N';
case 25
plate(image)='O';
case 26
plate(image)='P';
case 27
plate(image)='Q';
case 28
plate(image)='R';
case 29
plate(image)='S';
case 30
plate(image)='T';
case 31
plate(image)='U';
case 32
plate(image)='V';
case 33
plate(image)='W';

case 34
plate(image)='X';
case 35
plate(image)='Y';
case 36
plate(image)='Z';
end
else
continue
end
else
continue
end
end
plate;
set(handles.edit1,'String',plate);

%% Import data from text file.


% Script for importing data from the following text file:
%
% C:\Users\Vince Harrold\Desktop\standalone\Datadata.csv
%
% To extend the code to different selected data or a different text file,
% generate a function instead of a script.

% Auto-generated by MATLAB on 2018/02/21 21:52:46

%% Initialize variables.
filename = 'C:\Users\Vince Harrold\Desktop\standalone\Datadata.csv';
delimiter = ',';

%% Format string for each line of text:


% column1: text (%s)
% column2: text (%s)
% column3: text (%s)
% column4: text (%s)
% For more information, see the TEXTSCAN documentation.
formatSpec = '%s%s%s%s%[^\n\r]';

%% Open the text file.


fileID = fopen(filename,'r');

%% Read columns of data according to format string.


% This call is based on the structure of the file used to generate this

% code. If an error occurs for a different file, try regenerating the code
% from the Import Tool.
dataArray = textscan(fileID, formatSpec, 'Delimiter', delimiter, 'ReturnOnError',
false);

%% Close the text file.


fclose(fileID);

%% Post processing for unimportable data.


% No unimportable data rules were applied during the import, so no post
% processing code is included. To generate code which works for
% unimportable data, select unimportable cells in a file and regenerate the
% script.

%% Allocate imported array to column variable names


UIK633 = dataArray{:, 1};
ToyotaHiace = dataArray{:, 2};
ThirdyRavena = dataArray{:, 3};
CUTTINGANOVERTAKENVEHICLE = dataArray{:, 4};

%% Clear temporary variables


clearvars filename delimiter formatSpec fileID dataArray ans;

%%Data Mining
celldata = cellstr(UIK633);
chr = char(celldata);
size(chr)
BB=cellstr(ToyotaHiace);CC=cellstr(ThirdyRavena);DD=cellstr(CUTTINGANOVERTAKENVEHICLE);
BBB=char(BB);CCC=char(CC);DDD=char(DD);
output=plate;[aa,bb]=size(output);[cc,dd]=size(chr);
E=[chr BBB CCC DDD];
if length(output)<7
for n=1:7-length(output)
output=[output ' '];
end

end
n=1;d=length(chr);errorcounter=0;le2=0;
while 1
le=length(chr);

if d>1

counter=0;
for b=2:6
if output(b-1)==chr(n,b) || output(b)==chr(n,b)
else
counter=counter+1;
end
end
if counter >=2
chr(n,:)=[];BBB(n,:)=[];CCC(n,:)=[];DDD(n,:)=[];
n=n-1;
end
else
break
end
[d,col]=size(chr);
n=n+1;
errorcounter=errorcounter+1;
le2=length(chr);

if le==le2
[erow,ecol]=size(chr);
for y=1:erow
counter2=0;
for x=2:ecol-1
if output(x-1)==chr(y,x) || output(x)==chr(y,x)
counter2=counter2+1;
end
end
counter3(y)=counter2;
end
[mag,ind]=max(counter3);
chr=chr(ind,:);BBB=BBB(ind,:);CCC=CCC(ind,:);DDD=DDD(ind,:);
break
end

end

counter=0;
for b=2:6
if output(b-1)==chr(1,b) || output(b)==chr(1,b) || output(b+1)==chr(1,b)
else
counter=counter+1;

end
end
if counter >=2
chr(1,:)=[];BBB(1,:)=[];CCC(1,:)=[];DDD(1,:)=[];
end

if length(chr)>0 || chr=='1' || chr=='2' || chr=='3' || chr=='4'


d=sprintf(' Plate number: %s \n \n Owner: %s \n \n Car Model: %s \n \n Violations
Committed: %s \n \n \n', chr, CCC, BBB, DDD);
set(handles.listbox1,'String',d);
drawnow;
access=1; %% 1 or 0 depend if registered or not
fprintf(s,access); %This command will send entered value to Arduino

else
d='Vehicle is not registered'
set(handles.listbox1,'String',d);
drawnow;
access=0; %% 1 or 0 depend if registered or not
fprintf(s,access); %This command will send entered value to Arduino
end

else
GUIcounter=GUIcounter;
end
A=[];
end

% Update handles structure


guidata(hObject, handles);

% UIWAIT makes standalonegui wait for user response (see UIRESUME)


% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = standalonegui_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure


varargout{1} = handles.output;

% --- Executes on selection change in listbox1.


function listbox1_Callback(hObject, eventdata, handles)
% hObject handle to listbox1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Hints: contents = cellstr(get(hObject,'String')) returns listbox1 contents as cell


array
% contents{get(hObject,'Value')} returns selected item from listbox1

% --- Executes during object creation, after setting all properties.


function listbox1_CreateFcn(hObject, eventdata, handles)
% hObject handle to listbox1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: listbox controls usually have a white background on Windows.


% See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'),
get(0,'defaultUicontrolBackgroundColor'))
set(hObject,'BackgroundColor','white');
end

% --- Executes on button press in pushbutton2.


function pushbutton2_Callback(hObject, eventdata, handles)
% hObject handle to pushbutton2 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% --- Executes on selection change in listbox2.


function listbox2_Callback(hObject, eventdata, handles)
% hObject handle to listbox2 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Hints: contents = cellstr(get(hObject,'String')) returns listbox2 contents as cell


array
% contents{get(hObject,'Value')} returns selected item from listbox2

% --- Executes during object creation, after setting all properties.


function listbox2_CreateFcn(hObject, eventdata, handles)
% hObject handle to listbox2 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: listbox controls usually have a white background on Windows.


% See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'),
get(0,'defaultUicontrolBackgroundColor'))
set(hObject,'BackgroundColor','white');
end

% --- Executes on button press in pushbutton1.


function pushbutton1_Callback(hObject, eventdata, handles)
% hObject handle to pushbutton1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

function edit1_Callback(hObject, eventdata, handles)


% hObject handle to edit1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of edit1 as text


% str2double(get(hObject,'String')) returns contents of edit1 as a double

% --- Executes during object creation, after setting all properties.


function edit1_CreateFcn(hObject, eventdata, handles)
% hObject handle to edit1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.


% See ISPC and COMPUTER.

if ispc && isequal(get(hObject,'BackgroundColor'),


get(0,'defaultUicontrolBackgroundColor'))
set(hObject,'BackgroundColor','white');
end

Histogram Code:

for aa=1:30

n=num2str(aa);
I=imread([num2str(n),'.jpg']);
GrayI = 0.2989 * I(:,:,1) + 0.5870 * I(:,:,2) + 0.1140 * I(:,:,3);
NMedFilteredI=medfilt2(GrayI);
[row,col]=size(NMedFilteredI);
NMedFilteredI=NMedFilteredI((row/3)+1:row,1:col);
GrayI=GrayI((row/3)+1:row,1:col);
[row,col]=size(NMedFilteredI);
DilatedI=NMedFilteredI;

for (x=1:row)
for(y=2:col-1)
temp=max(NMedFilteredI(x,y-1),NMedFilteredI(x,y));
DilatedI(x,y)=max(temp,NMedFilteredI(x,y+1));
end
end

difference=0;
totalsum=0;
difference=uint32(difference);
maximum=0;
maxvertical=0;
for x=2:row
sum1=0;
for y=2:col
if(DilatedI(x,y)>DilatedI(x,y-1))
difference=uint32(DilatedI(x,y)-DilatedI(x,y-1));
end
if(DilatedI(x,y)<=DilatedI(x,y-1))
difference=uint32(DilatedI(x,y-1)-DilatedI(x,y));
end
if(difference >20)
sum1=sum1+difference;
end
end

vertical(x)=sum1;
if(sum1>maximum)
maxvertical=x;
maximum=sum1;
end
totalsum=totalsum+sum1;
end
average=totalsum/row;
sum1=0;
newvertical=vertical;
for x=21:(row-21)
sum1=0;
for y=(x-20):(x+20)
sum1=sum1+vertical(y);
end
newvertical(x)=sum1/41;
end
for x=1:row
if(newvertical(x)<(average))
newvertical(x)=0;
for y=1:col
DilatedI(x,y)=0;
end
end
end
sum1=0;
totalsum=0;
difference=0;
difference=uint32(difference);
maxhorizontal=0;
maximum=0;
for x=2:col
sum1=0;
for y=2:row
if(DilatedI(y,x) > DilatedI(y-1,x))
difference=uint32(DilatedI(y,x)-DilatedI(y-1,x));
else
difference=uint32(DilatedI(y-1,x)-DilatedI(y,x));
end
if(difference >20)
sum1=sum1+difference;
end
end
horizontal(x)=sum1;

if(sum1>maximum)
maxhorizontal=x;
maximum=sum1;
end
totalsum=totalsum+sum1;
end
average=totalsum/col;
sum1=0;
newhorizontal=horizontal;
for x=21:(col-21)
sum1=0;
for y=(x-20):(x+20)
sum1=sum1+horizontal(y);
end
newhorizontal(x)=sum1/41;
end
for x=1:col
if(newhorizontal(x)<average)
newhorizontal(x)=0;
for y=1:row
DilatedI(y,x)=0;
end
end
end
y=1;
for x=2:col-2
if(newhorizontal(x)~= 0 && newhorizontal(x-1) == 0 && newhorizontal(x+1) ==0)
column(y)=x;
column(y+1)=x;
y=y+2;
elseif((newhorizontal(x)~=0 && newhorizontal(x-1) ==0) || (newhorizontal(x)~=0
&& newhorizontal(x+1) ==0))
column(y)=x;
y=y+1;
end
end
y=1;
for x=2:row-2
if(newvertical(x)~= 0 && newvertical(x-1) == 0 && newvertical(x+1)==0)
row(y)=x;
row(y+1)=x;
y=y+2;
elseif((newvertical(x)~=0 && newvertical(x-1)==0) || (newvertical(x) ~= 0 &&
newvertical(x+1) ==0))

row(y)=x;
y=y+1;
end
end
[temp columnsize]=size(column);
if(mod(columnsize,2))
column(columnsize+1)=col;
end
[temp rowsize]=size(row);
if(mod(rowsize,2))
row(rowsize+1)=row;
end
for x=1:2:rowsize
for y=1:2:columnsize
if(~((maxhorizontal >=column(y) && maxhorizontal <= column(y+1)) &&
(maxvertical >=row(x) && maxvertical<=row(x+1))))
for m=row(x):row(x+1)
for n=column(y):column(y+1)
DilatedI(m,n)=0;
GrayI(m,n)=0;
end
end
end
end
end
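% Every candidate band that does not contain both histogram peaks
% (maxhorizontal, maxvertical) is blacked out, leaving only the densest-edge
% region - the presumed plate.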
LPI=DilatedI;
[rowrow,colcol]=size(LPI);
GrayI( ~any(LPI,2), : ) = [];
GrayI( :, ~any(LPI,1) ) = [];
LPI( ~any(LPI,2), : ) = [];
LPI( :, ~any(LPI,1) ) = [];
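% Removing the all-zero rows and columns crops GrayI and LPI tightly to the
% surviving plate region.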

difference=0;
totalsum1=0;
difference=uint32(difference);
maximum=0;
maxvertical=0;
[row,col]=size(LPI);
for x=2:row
sum1=0;

for y=2:col
if(LPI(x,y)>LPI(x,y-1))
difference=uint32(LPI(x,y)-LPI(x,y-1));
end
if(LPI(x,y)<=LPI(x,y-1))
difference=uint32(LPI(x,y-1)-LPI(x,y));
end
if(difference >20)
sum1=sum1+difference;
end
end
vertical(x)=sum1;
if(sum1>maximum)
maxvertical=x;
maximum=sum1;
end
totalsum1=totalsum1+sum1;
end
average=totalsum1/row;
sum1=0;
newvertical=vertical;
for x=21:(row-21)
sum1=0;
for y=(x-20):(x+20)
sum1=sum1+vertical(y);
end
newvertical(x)=sum1/41;
end
for x=1:row
if(newvertical(x)<(average))
newvertical(x)=0;
for y=1:col
LPI(x,y)=0;
end
end
end
sum1=0;
totalsum1=0;
difference=0;
difference=uint32(difference);
maxhorizontal=0;
maximum=0;
for x=2:col
sum1=0;

for y=2:row
if(LPI(y,x) > LPI(y-1,x))
difference=uint32(LPI(y,x)-LPI(y-1,x));
else
difference=uint32(LPI(y-1,x)-LPI(y,x));
end
if(difference >20)
sum1=sum1+difference;
end
end
horizontal(x)=sum1;
if(sum1>maximum)
maxhorizontal=x;
maximum=sum1;
end
totalsum1=totalsum1+sum1;
end
vertLPI=vertical;
horzLPI=horizontal;
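% vertLPI/horzLPI keep the edge histograms of the dilated plate crop; the same
% statistics are recomputed below on the grayscale crop GrayI for comparison.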

difference=0;
totalsum1=0;
difference=uint32(difference);
maximum=0;
maxvertical=0;
for x=2:row
sum1=0;
for y=2:col
if(GrayI(x,y)>GrayI(x,y-1))
difference=uint32(GrayI(x,y)-GrayI(x,y-1));
end
if(GrayI(x,y)<=GrayI(x,y-1))
difference=uint32(GrayI(x,y-1)-GrayI(x,y));
end
if(difference >20)
sum1=sum1+difference;
end
end
vertical(x)=sum1;
if(sum1>maximum)
maxvertical=x;
maximum=sum1;
end
totalsum1=totalsum1+sum1;
end

average=totalsum1/row;
sum1=0;
newvertical=vertical;
for x=21:(row-21)
sum1=0;
for y=(x-20):(x+20)
sum1=sum1+vertical(y);
end
newvertical(x)=sum1/41;
end
for x=1:row
if(newvertical(x)<(average))
newvertical(x)=0;
for y=1:col
GrayI(x,y)=0;
end
end
end
sum1=0;
totalsum1=0;
difference=0;
difference=uint32(difference);
maxhorizontal=0;
maximum=0;
for x=2:col
sum1=0;
for y=2:row
if(GrayI(y,x) > GrayI(y-1,x))
difference=uint32(GrayI(y,x)-GrayI(y-1,x));
else
difference=uint32(GrayI(y-1,x)-GrayI(y,x));
end
if(difference >20)
sum1=sum1+difference;
end
end
horizontal(x)=sum1;
if(sum1>maximum)
maxhorizontal=x;
maximum=sum1;
end
totalsum1=totalsum1+sum1;
end
vertGray=vertical;
horzGray=horizontal;

differenceV(aa)=(sum(vertLPI-vertGray))/length(vertGray)
differenceH(aa)=(sum(horzLPI-horzGray))/length(horzGray)
end
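% Editor's note: the histograms are uint32, so negative terms in vertLPI-vertGray
% and horzLPI-horzGray saturate to zero, and the semicolon-free assignments above
% print each value as it is computed.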

differenceV=differenceV'

differenceH=differenceH'

Luminosity:

% Mean luminosity of a grayscale image img (note that the name "image"
% shadows a MATLAB built-in function, so img is used instead).
[row,col]=size(img);
X=sum(sum(double(img)));   % total pixel intensity
X=X/(row*col);             % X = luminosity (mean pixel value)
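A short usage sketch follows, tying the snippet to the rest of the appendix; the file name '1.jpg' is assumed from the histogram script's naming and is illustrative only.

% Usage sketch (hypothetical input file).
I = imread('1.jpg');
img = rgb2gray(I);                      % grayscale test image
[row, col] = size(img);
X = sum(sum(double(img)))/(row*col);    % X = mean luminosity on a 0-255 scale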

GANTT CHART

COSTING

ITEM                      QUANTITY   PRICE
SD/MMC card shield        1          Php 220.00
Ultrasonic sonar sensor   1          Php 350.00
Servo motor               1          Php 360.00
Serial camera             1          Php 1,500.00
TOTAL                                Php 2,430.00

ABOUT THE PROPONENTS


2650 Ilang-Ilang St. Goodwill Village Tambo, Parañaque City
Mobile No. 09064128296
Landline Residential No. 854-83-93
vhaaltura@gmail.com

OBJECTIVES:

 To continuously strive for higher achievement in life and be part of the company’s growth and success through hard work and perseverance.
 To enhance my capabilities and qualifications so that I have the opportunity to prove myself as a competent employee.
 To share knowledge and skills among colleagues.
VINCE HARROLD A. ALTURA

Personal Information:
Nickname: Bim
Sex: Male
Nationality: Filipino
Birth Date: July 22, 1996
Birthplace: Manila
Citizenship: Filipino
Religion: Roman Catholic
Civil Status: Single
Languages Spoken: English and Filipino

EDUCATIONAL QUALIFICATIONS:

TERTIARY LEVEL: PAMANTASAN NG LUNGSOD NG MAYNILA (June 2013 – Present)
BACHELOR OF SCIENCE IN ELECTRONICS ENGINEERING
 Dean’s Lister (1st year – Present)
 Academic Excellence Award for being Top 1 in BS ECE (1st year – Present)

SECONDARY LEVEL: MANILA SCIENCE HIGH SCHOOL (2009 – 2013)
 Silver Medalist

PRIMARY LEVEL: MALATE CATHOLIC SCHOOL (2003 – 2009)
 Valedictorian
SKILLS:

DESIGNING AND SIMULATION SKILLS:
 ProgeCAD           Livewire
 Multisim           Basic Cisco Packet Tracer
 PCB Wizard         Audacity

PROGRAMMING SKILLS:
 C++                Arduino
 MatLab

TECHNICAL SKILLS:
 PCB Designing
 PCB Etching

CAREER VISION:
“To work in a challenging environment demanding all my skills and efforts to explore and adapt myself in different fields and realize my potential where I get the opportunity for continuous learning.”

AFFILIATIONS:
CHARITY FIRST FOUNDATION, INC.
 Scholar (2014 – Present)
INSTITUTE OF ELECTRONICS ENGINEERS OF THE PHILIPPINES – MSC
 Board of Director (2016 – Present)
PLM – MICROCONTROLLERS AND ROBOTICS SOCIETY
 Secretary (2016 – Present)

ACADEMIC PROJECTS AND RESEARCHES:

Paste processed from Cucurbita moschata seeds as an alternative electrolyte in a dry cell (March 13, 2015)
- Proved the ability, efficiency and sufficiency of Cucurbita moschata seeds as alternative electrolytes for dry cells.

Clock Pulse Generator (September 29, 2015)
- Produced a clock pulse generator circuit using ExpressPCB and etched it onto a printed circuit board.

Multistage Class A BJT Amplifier (March 29, 2016)
- Used 2N3904 transistors to construct a two-stage common-emitter amplifier on a printed circuit board designed in PCB Wizard and Livewire.

Arduino-Based Robot Car Collision Safety Mechanism Powered by Wind Generator (October 2016)
- Programmed a safety mechanism for a robot car using a Gizduino microcontroller and Arduino, supplied by a wind generator.

Construction of Chebyshev Type I Band-Stop Digital Filter using Bilinear Transformation for an Audio Input Signal with a 44100 Hz Sampling Frequency (October 2016)
- Created the digital filter code in MatLab and verified the waveforms of the different input audio signals using Audacity.

SUMOBOT (October 2016)
- Programmed a sumobot in Arduino using a Gizduino microcontroller.

Estimation of Resistance for a Specific Temperature using Direct Method of Polynomial Interpolation (March 2017)
- Obtained temperature and resistance data from an LM35 sensor and a thermistor through Arduino and simulated the designed system in MATLAB.

SEMINARS ATTENDED:

 Microcontrollers and Robotics Society (MICROS) Summer Camp, Pamantasan ng Lungsod ng Maynila – April 16 – May 21, 2016
 MICROS Mini Robotics Camp, Pamantasan ng Lungsod ng Maynila – October 26 – 28, 2016
 Life Skills Motivational Seminar, Philippine Buddhacare Academy – 1st Sunday of the Month

OTHER SKILLS:

SOFT SKILLS: Able to listen, accepts feedback, adaptable, attentive, communication
COMPUTER SKILLS: Microsoft Office applications
PERSONAL SKILLS: Creative thinking, developing rapport, diversity, empathy, encouraging, flexible

HOBBIES: Reading pocket books, playing computer games, watching TV series, listening to music, Internet surfing

CHARACTER REFERENCE:

Engr. Charles Juarizo, PECE
Chairperson, Electronics Engineering Department, CET
Contact No.: 09158836249

Dr. Carlos C. Sison, DEM, PECE
Faculty, Electronics Engineering Department, CET
Contact No.: 09995212011

I hereby certify that the above information is true and correct to the best of my knowledge and belief.

Vince Harrold A. Altura
BONIFACIO S. MACABANTI
2117 P. Gil Street, Sta. Ana, Manila
bsmacabanti@gmail.com
554-04-34 / 09363941327

Looking for a job where I could learn in a professional working environment to acquire significant knowledge and advance my skills. In addition, I am willing to impart all the knowledge and skills I have learned in my academic career in order to make exceptional progress.

Qualifications
• Capable of writing clearly and concisely and of speaking effectively. An attentive listener who provides appropriate feedback. Works well with others but can also work independently.
• Has knowledge in designing and assembling basic circuits.
• Has knowledge of Turbo C++, Dev-C++, AutoCAD, Multisim, Arduino IDE, MatLab, Fritzing and basic Cisco Packet Tracer.
• Proficient with Microsoft Office applications such as Word, Excel, PowerPoint and Publisher.

Education
Bachelor of Science in Electronics and Communication Engineering 2012 – 2018
Pamantasan ng Lungsod ng Maynila
Intramuros, Manila

Manuel L. Quezon High School 2008 – 2012
Blumentritt St., Sta. Cruz, Manila
2nd Honorable Mention

Lucas R. Pascual Memorial Elementary School 2002 – 2008


Quezon City

Experience

Special Program for Employment of Students (SPES)
Student Assistant, Inventory Section
Manila Social Welfare Development, City Hall, Manila
March – May 2015

On the Job Training


Student Engineer
Product Application Department
Analog Devices Inc., Gen. Trias, Cavite
April – May 2016

Seminars Attended

Encounter Weekend (2014)
Sampaguita Christian Church, Sta. Ana, Manila

ISDB-T: The Sole Digital Terrestrial Television Standard of the Philippines (2016)
Pamantasan ng Lungsod ng Maynila, Intramuros, Manila

Thesis Project
• Real-time Vehicle Parking Logging System With The Use Of Multi-layered Artificial Neural Network

Character Reference
Engr. James M. Miedes Engr. Carlos C. Sison, DEM, PECE
IT Officer, Oriental Inc. Faculty, Pamantasan ng Lungsod ng Maynila
#09178412521 #09995212011
RICARDO, FRANCIS NICKO D.

031 Magsaysay St., Tondo, Manila
09953320494 / 254-07-52
nickoricardo23@gmail.com

Objective: An Electronics Engineering student seeking work and experience in the field in order to apply and further develop the skills that I have acquired.

Educational Background:
Tertiary:
Bachelor of Science in Electronics Engineering (BSECE)
College of Engineering and Technology
Pamantasan ng Lungsod ng Maynila
2013-Present

Secondary:
Immaculate Conception Academy of Manila
2212 S. del Rosario Street Gagalangin, Tondo, Manila
2009-2013

Primary:
Immaculate Conception Academy of Manila
2212 S. del Rosario Street Gagalangin, Tondo, Manila
2003-2009
Seminars:
Microcontrollers and Robotics Society Summer Camp (2016)
Pamantasan ng Lungsod ng Maynila
April 16-May 21, 2016

ECE Roadshow 2017: “Setting Goals & Money Management”
TIP Manila
Organizations:

Microcontrollers and Robotics Society (MICROS)


Vice-President Internal
2016-2017
Institute of Electronics Engineer of the Philippines-MSC (IECEP-MSC)
Board of Director
2016-2017

Microcontrollers and Robotics Society (MICROS)


Treasurer
2016-2017

Engineering Mathematical Society (EMS)


Member
2016-2017
Skills:
Simulation and Design
Basic AutoCAD & ProgeCAD
Arduino
Multisim
PCB Wizard and Livewire
Basic Cisco Packet Tracer

Technical Skills
PCB designing
PCB etching
Microsoft Office
Programming Skills
Basic C++
MatLAB
R-Programming
Arduino IDE
Projects:
Microcontroller and Robotics Society MiniCamp 2016
Lecturer, Pamantasan ng Lungsod ng Maynila

Microcontroller and Robotics Society Organizational Fair 2016


Organizer, Pamantasan ng Lungsod ng Maynila

Reference:
Engr. Carlos Sison, DEM, PECE
Faculty, Electronics Engineering Department
Pamantasan ng Lungsod ng Maynila
Contact No: 09995212011

Engr. Ronel Jandi


Faculty, Electronics Engineering Department
Pamantasan ng Lungsod ng Maynila
Contact No.: 09293795741

I hereby certify that all of the above information is true and correct to the best of my knowledge and belief.

Ricardo, Francis Nicko D.


Micah Ella R. Uy
Electronics and Communication Engineering Student
Km. 28 Aguinaldo Highway, Diana Compound, Salitran 2,
Dasmariñas, Cavite
micahuy24@gmail.com / 09953306385

OBJECTIVE
To find an internship program where I could learn under working professionals to gain more knowledge and enhance my technical skills while at the same time contributing to the company.

EDUCATION
 COLLEGE: Pamantasan ng Lungsod ng Maynila (2013 – Present)
 SECONDARY: Caloocan High School (Engineering and Science Education Program) (2009 – 2013)
 ELEMENTARY: Maypajo Integrated School (2003 – 2009)
   Graduated as Class Valedictorian

ORGANIZATIONS
 Institute of Electronics Engineers of the Philippines – Manila Student Chapter
   Board of Director (2016-2017)
 Institute of Electronics Engineers of the Philippines – Manila Student Chapter
   Member (2014-2016)
 PLM – Engineering Mathematical Society
   Assistant Secretary (2016-2017)
 PLM – Engineering Mathematical Society
   Business Manager (2015-2016)
 PLM – Micro Controllers and Robotics Society
   Member (2015-2017)
 PLM – Electronics Engineering Student Society
   Member (2013-2017)
ACHIEVEMENTS AND SEMINARS
 College of Engineering and Technology Recognition – Top 1 (2014-2015)
 College of Engineering and Technology Recognition – Top 2 (2015-2016)
 Dean’s Lister S.Y. 2014-2015
 Dean’s Lister S.Y. 2015-2016
 MATHira MATHibay university-wide competition
   3rd Place – Team Competition (S.Y. 2015-2016)
   Finalist – Individual Category (S.Y. 2015-2016)
 UltiMathum Annual Quiz Bee
   3rd Place (S.Y. 2013-2014)
   2nd Place (S.Y. 2014-2015)
 ECE Roadshow seminar – Technological Institute of the Philippines
 PLM – Microcontrollers and Robotics Society Summer Camp (2016)

SKILLS AND ABILITIES
 Knowledgeable in engineering software tools such as Arduino, MatLab, AutoCAD and MultiSim, and in programming languages such as Dev-C++, Turbo C and C++.
 Cognitive and analytical skills

PERSONAL INFORMATION
Nickname: Micah
Date of Birth: February 24, 1998
Place of Birth: Manila
Age: 19
Sex: Female
Status: Single
Religion: Apostolic Doctrine
Citizenship: Filipino
Height: 5’

CHARACTER REFERENCE
 Dr. Carlos Sison
   Instructor, ECE Department – PLM
   09995212011
 Engr. Ronel Jandi
   Instructor, ECE Department – PLM
   09293795741
 Engr. Jeremeh Dela Cruz
   Instructor, ECE Department – PLM
   jhaydelacruz46915@gmail.com
   09337203878