
International Journal of Research in Computer & Information Technology (IJRCIT)

http://www.garph.org
Aim & Scope:
International Journal of Research in Computer & Information Technology (IJRCIT) is an online journal in English published for academicians, scientists, engineers, and research scholars involved in Computer Science and Information Technology to publish high quality, refereed papers. Papers reporting original research and innovative applications from all parts of the world are invited. Papers for publication in IJRCIT are selected through peer review to ensure originality, relevance and readability. The aim of IJRCIT is to rapidly publish peer reviewed research and review articles in the developing field of Computer Science and Information Technology.
The core vision of IJRCIT is to publish new knowledge and technology for the benefit of everyone, ranging from academic and professional research communities to industry practitioners, across a range of topics in Computer Science and Information Technology. It also provides a venue for high caliber research scholars, PhD students and professionals to submit on-going research and developments in these areas.

Frequency of Publication: One Volume with Four issues per year


Subject Category: All areas of Computer Science and Information Technology
Submission of Manuscripts:
Authors are strongly urged to communicate with the Editor-in-Chief of the journal through graph.editor@gmail.com only.
The final decision on publication is made by the Editor-in-Chief upon the recommendation of an Associate Editor and/or an Editorial Board Member.

Regular Subscription Price:


Within India: Annual INR 2500
Outside India: Annual 500 USD
Published by
Global Advanced Research Publication House
Meherbaba colony, Dastur nagar,
Chatri Talav road,
Amravati-444606
Maharashtra, India
© 2015 Global Advanced Research Publication House, India

No part of the material protected by this copyright may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without prior written permission from the publisher.

International Journal of Research in Computer & Information Technology (IJRCIT)


Editorial Board
Editor-in-Chief

Dr. Shrinivas P. Deshpande


================================================================
Associate Editorial Board Members
Dr. V. M. Thakare
Department of Computer Science and Engineering,
Sant Gadge Baba Amravati University, Amravati
Maharashtra, India.

Dr. Sandeep R. Shirsat


Department of Computer Science,
Shri Shivaji Science College Chikhali,
Maharashtra, India.

Dr. D. N. Chaudhari
Department of Computer Science and Engineering,
Jawaharlal Darda Institute of Engineering &
Technology, Yavatmal
Maharashtra, India.

Dr. Anjali B. Raut


Department of Computer Science & Engineering,
H.V.P.M. College of Engineering & Technology,
Amravati,
Maharashtra, India.

Dr. S. E. Yedey
P. G. Department of Computer Science &
Technology, D.C.P.E. H.V.P.M., Amravati,
Maharashtra, India.

Prof. Sachin Deshpande


Department of Computer Engineering,
Vidyalankar Institute of Technology, Mumbai,
Maharashtra, India.

Prof. P. L. Ramteke
Department of Information Technology,
H.V.P.M College of Engineering & Technology,
Amravati,
Maharashtra, India

Prof. Ritesh V. Patil


Department of Computer Engineering,
Pune District Education Associations College of
Engineering, Manjari, Hadapsar, Pune,
Maharashtra, India

Prof. Anand B. Deshmukh


Department of Information Technology,
Sipna College of Engineering & Technology,
Amravati,
Maharashtra, India.

Prof. Ramanand S. Samdekar


Department of Computer Science & Engineering,
S. B. Jain Institute of Technology Management &
Research, Nagpur,
Maharashtra, India

Prof. Vinay A. Rajgure


Department of Computer Science & Engineering,
Prof. Ram Meghe College of Engineering &
Management, Badnera,
Maharashtra, India

Prof. Amitkumar S. Manekar


Department of Computer Science and Engineering,
Shri Sant Gajanan Maharaj College of Engineering,
Shegaon,
Maharashtra, India

Prof. Bharat S. Kankate


Department of Computer Engineering,
Pune District Education Associations College of
Engineering, Manjari, Hadapsar, Pune,
Maharashtra, India

Prof. Pritam H. Gohatre


Department of Computer Technology,
Laxminarayan Agrawal Memorial Institute of
Technology, Dhamangaon,
Maharashtra, India

Editorial
Research is a creative work undertaken by applying a systematic approach to establish new knowledge or extend existing knowledge. Research activity includes confirming existing facts, verifying and endorsing results, and establishing new theories, methods and approaches. Research undertaken by any individual or group requires publication and affirmation by peers; publishing research work in journals and conferences authenticates the work done and the efforts taken by the researcher.
There are several journals available in the areas of Computer Science and Information Technology, each with a different strategy. IJRCIT aims at providing an international forum for scientists, researchers, engineers and developers from a wide range of information science areas to exchange ideas and approaches in this evolving field. High quality papers in computer science and information technology areas are solicited, and original papers exploring research challenges receive especially careful attention from reviewers. Papers that have already been accepted or are currently under review for other conferences or journals are not considered for publication.
This journal publication would not have been possible without the help of several individuals who in one way or another contributed and extended their valuable assistance in the preparation and completion of the journal. My utmost gratitude goes to the Editorial Board members and reviewers for their sincerity and encouragement. IJRCIT is strongly supported by a dedicated Editorial Board consisting of renowned scientists. Thus we ensure the highest quality standards of the journal and provide prompt, detailed, rigorous assessments that allow rapid editorial decisions and result in significantly improved manuscripts.
We request experts from academia, industry and research groups to participate actively in this publication as editorial committee members, reviewers and promoters.

IJRCIT Editorial Board


November 2015
www.garph.org

INDEX

1. An Approach For Prediction Of Driver Fatigue.
   Pritam H. Gohatre (pp. 1-7)
2. An Improved Image Fusion Algorithm Based On Wavelet Transforms Using Particle Swarm Optimization.
   Hrishikesh S. Holey (pp. 8-14)
3. SAP Hana Database With Network.
   Prof. Anup P. Date, Miss. Namrata S. Mahajan (pp. 15-19)
4. Performance Evaluation Of Database Client Engine Using Modular Approach.
   Prof. Ramanand Samadekar (pp. 20-22)
5. Implementation Image Mosaic Using Phase Correlation And Harris Operator.
   Ms. Kanchan S. Tidke, Pritam H. Gohatre (pp. 23-28)
6. To Study The Types Of Open-Source Applications Of Routing Software.
   Hemant Gadbail, Roshan Kalinkar (pp. 29-32)
7. Universal Gate Application For Floating Point Arithmetic Logic Unit.
   Prof. Vishwajit K. Barbudhe (pp. 33-37)
8. Review Of Independent Component Analysis Algorithms And Its Application.
   Naresh Nimje (pp. 38-42)
9. A Review On Parallel Programming Models In High Performance Computing.
   Aniket Yawalkar, Ashish Pawar, Amitkumar Manekar (pp. 43-47)
10. Automated Parking Slot Allotter Using RFID And NFC Technology.
    Harshal Phuse, Sumit Bajare, Ranjit Joshi, Amitkumar Manekar (pp. 48-52)
11. Review: Automated Students Attendance Management System Using Raspberry-Pi And NFC.
    Nikhil P. Shegokar, Kaustubh S. Jaipuria, Amitkumar Manekar (pp. 53-56)
12. A Survey Of Biometric Authentication Techniques.
    Ankush Deshmukh, Poonam Hajare, Rajeshri Kachole, Amitkumar Manekar (pp. 57-65)
13. A Comprehensive Survey On Load Balancing Algorithms In IaaS.
    A. Gawande, S. Jain and K. Raut (pp. 66-75)
14. Proposed Automated Students Attendance Management System Using Raspberry Pi And NFC.
    Mahesh P. Sangewar, Shubham R. Waychol, Amitkumar Manekar (pp. 76-79)
15. My Moments: An Android Based Diary Application.
    Chetan Patil, Kavita Chaudhari, Snehal Deshmukh, Amitkumar Manekar (pp. 80-84)
16. Comparison Of Particle Swarm Optimization And Genetic Algorithm For Load Balancing In Cloud Computing Environment.
    K. Pathak, G. Vahinde (pp. 85-96)
17. A Survey Paper On Tracking System By Using Smart Phone.
    Poonam Hajare, Rajeshri Kachole, Ankush Deshmukh, Amitkumar Manekar (pp. 97-100)
18. Review On Security And Authentication System In Accessing Data.
    Mitali Lakade, Ruchi Kela, Ashwini, Amitkumar Manekar (pp. 101-105)


AN APPROACH FOR PREDICTION OF DRIVER FATIGUE


PRITAM H. GOHATRE
Technocrats Institute of Technology, Bhopal
pritamgohatre@gmail.com
ABSTRACT: This paper presents an approach to a driver-fatigue monitor. It uses remotely located charge-coupled-device cameras equipped with active infrared illuminators to acquire video images of the driver. Various visual cues that typically characterize the level of alertness of a person are extracted in real time and systematically combined to infer the fatigue level of the driver. The visual cues employed characterize eyelid movement, gaze movement, head movement, and facial expression. The eyes are one of the most salient features of the human face, playing a critical role in understanding a person's desires, needs and emotional states. Robust eye detection and tracking is therefore essential not only for human-computer interaction but also for attentive user interfaces (like driver assistance systems), since the eyes contain a lot of information about the driver's condition: gaze, attention level, fatigue. Furthermore, due to their unique physical properties (shape, size, reflectivity), the eyes represent very useful cues in more complex tasks, such as face detection and face recognition. A probabilistic model is developed to model human fatigue and to predict fatigue based on the visual cues obtained. The simultaneous use of multiple visual cues and their systematic combination yields a much more robust and accurate fatigue characterization than using a single visual cue. This system was validated under real-life fatigue conditions with human subjects of different ethnic backgrounds, genders, and ages; with and without glasses; and under different illumination conditions. It was found to be reasonably robust, reliable, and accurate in fatigue characterization.
Keywords: driver fatigue, eyelid movement, gaze movement, head movement, visual cues, fatigue characterization.
1. INTRODUCTION

Fatigue is a dormant physical condition that can be witnessed right before one falls asleep. Fatigue adversely affects one's reaction time, ability, concentration and general understanding, particularly while driving on the road. This approach is primarily based on the movement of the human eyelid, which distinguishes the level of alertness. Various visual cues that generally characterize the level of alertness of a person are extracted and systematically combined to check the fatigue level of the person. A probabilistic model is developed to model human fatigue and to predict fatigue based on the visual cues obtained. The simultaneous use of multiple visual cues and their systematic combination yields a much more robust and accurate fatigue characterization than using a single visual cue.
The system uses a camera that points directly toward the person's face and monitors the person's eyes in order to detect fatigue. When fatigue is detected, a warning signal is issued to alert the driver. This system describes how to detect the eyes and how to determine whether the eyes are open or closed. The system uses information obtained from the binary version of the image to find the edges of the face, which narrows the area where the eyes may exist. Once the face area is found, the eyes are located by computing the horizontal averages in that area. After the eyes are found, eye movement is monitored in real time by capturing video from the camera in specific consecutive frames, from 10 up to 200 frames. If the eyes are open, the eyes are in the normal condition, meaning fatigue is not predicted. If the eyes open and close in some consecutive pattern, possible fatigue is indicated. If the eyes stay continuously closed for a while, fatigue is detected and a warning signal is issued by the system to alert the user and avoid an accident.
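As an illustration of this rule, the following minimal MATLAB sketch counts consecutive closed-eye frames; the eyesClosed vector and the 15-frame threshold are assumptions for the example, not values taken from the paper.

% Minimal sketch of the frame-based fatigue rule described above.
% eyesClosed is assumed to be a logical vector with one entry per
% captured frame (true when the eye is classified as closed); the
% threshold of consecutive closed frames is an illustrative choice.
closedRun = 0;
fatigueThreshold = 15;              % consecutive closed frames taken as fatigue
for k = 1:numel(eyesClosed)
    if eyesClosed(k)
        closedRun = closedRun + 1;
    else
        closedRun = 0;              % any open frame resets the run
    end
    if closedRun >= fatigueThreshold
        disp('Fatigue detected: warning signal issued');
        break
    end
end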
Driver operation and vehicle behavior can be monitored through the steering wheel movement, accelerator or brake patterns, vehicle speed, lateral acceleration, and lateral displacement. These too are non-intrusive ways of detecting drowsiness, but they are limited by vehicle type and driver conditions. The final technique for detecting drowsiness is monitoring the response of the driver, which involves periodically requesting the driver to send a response to the system to indicate alertness. The problem with this technique is that it eventually becomes tiresome and annoying to the driver.
2. LITERATURE SURVEY & BACKGROUND

[1] In this paper the authors developed fatigue detection techniques based on computer vision. Fatigue is detected from the face and facial features of the driver. A hybrid method is used for face and facial feature detection, which not only increases the accuracy of the system but also decreases the processing time.
[2] In this paper the authors proposed a real-time machine-vision based system designed for the detection of driver fatigue, which can detect driver fatigue and issue a warning early enough to avoid an accident. First the face is located by a machine-vision based object detection algorithm, which then detects the eyes and eyebrows.
[3] In this paper the authors developed driver fatigue detection based on eye tracking, which comes under active safety systems, using an ordinary color web camera for face detection, eye location and eye tracking.


[4] In this paper the authors present an approach for real-time detection of driver fatigue. The system consists of sensors pointed directly towards the driver's face; the input to the system is a continuous stream of signals from the sensors.
[5] In this paper the authors developed a real-time driver fatigue detection system based on eye tracking and dynamic template matching, using two new matching functions, edge map overlapping (EMO) and edge pixel count (EPC), to enhance matching accuracy.
[6] In this paper an artificial neural network is used to detect the driver's drowsiness level. The ever-increasing number of traffic accidents due to a diminished vigilance level of drivers has become a problem of serious concern to society. Drivers with a diminished vigilance level suffer from a marked decline in their perception, recognition, and vehicle-control abilities.
[7] In this paper the authors note that eye tracking systems have many potential applications, such as learning-emotion monitoring and driver fatigue detection, and show how an eye tracking system can be used to implement an eye mouse that provides computer access for people with severe disabilities. The eye mouse allows people with severe disabilities to manipulate computers with their eye movements. It requires only one low-cost Web camera and a personal computer, and a five-stage algorithm is developed to estimate the directions of eye movements and then use the direction information to manipulate the computer. Several experiments were conducted to test the performance of the eye tracking system.
[8] In this paper the authors proposed a system for skin, face and eye detection, which together can be used for detecting human presence in video. The system is built so that it can be applied both to real-time data, although with a lower detection rate, and to static data, i.e. images and video in offline processing, with a higher detection rate.
[9] In this paper the authors survey face recognition techniques (FRT) that face the initialization issue and rarely state the assumptions they make: many simply skip the feature extraction step and assume perfect localization by relying upon manual annotations of the facial feature positions.
3. PROPOSED METHODOLOGY AND PROPOSED ARCHITECTURE

In this section we discuss the proposed methodology. The aim of this work is to detect closed eyes and simultaneously observe and alert the driver on fatigue detection. This is done by mounting a camera in front of the driver and continuously capturing real-time video, using skin detection, eye detection and the Hough Transform algorithm. Conventional fatigue detectors are most efficient and successful only on frontal images of faces; such a system can barely cope with 45 degrees of face rotation, around both the vertical and the horizontal axis. Another shortcoming of these processes is that they are sensitive to lighting conditions, and in a few cases the designed system produced multiple detections of the same face due to overlapping sub-windows.

The proposed design takes test cases of up to 200 frames. This system describes a method to track the eyes and detect whether the eyes are closed or open. The next item to be considered in image acquisition is the video camera; to demonstrate the project we used a simple laptop camera, and the video frames are created using the Computer Vision System Toolbox. The function vision.CascadeObjectDetector creates a System object, detector, that detects objects using the Viola-Jones algorithm. The ClassificationModel property controls the type of object to detect; by default, the detector is configured to detect faces. The Computer Vision System Toolbox provides algorithms, functions, and apps for the design and simulation of computer vision and video processing systems. Using this toolbox the system can perform object detection and tracking, feature detection and extraction, feature matching, stereo vision, camera calibration, and motion detection tasks. The system toolbox also provides tools for video processing, including video file I/O, video display, object annotation, drawing graphics, and compositing. Algorithms are available as MATLAB functions, System objects, and Simulink blocks.

The MATLAB script is supposed to do the following:
Step 1. It captures the video and opens a facial view where the user has to point his face properly in front of the camera.
Step 2. The MATLAB script detects the face, displays the image, and lets the user place a bounding box around the face.
Step 3. Afterwards, once the eye and mouth portions of the frame are recognized, it rescales the eye and mouth portions to 24*24 pixels.

The camera uses the Viola-Jones algorithm to scan a sub-window capable of detecting faces across a given input frame. The standard image processing approach would be to rescale the input image to different sizes and then run the fixed-size detector through these frames.

PROPOSED MODEL

Figure 1: Proposed model

4. RESULT ANALYSIS

The system was tested on 15 people and was successful with 12 of them, resulting in 80% accuracy. The figures referenced below show an example of the step-by-step results of finding the face and eyes and of detecting the fatigue level of the person using eyelid movement.

Outputs:
i) Input video from the camera for processing in MATLAB when the eyes of the person are open.
Figure 2: GUI implementation of the process when the eyes of the person are open and the eye area is sliced.
Figure 3: The person places their face in front of the camera so that the head portion, eye region, nose region and mouth region are located for processing.
ii) Input frames captured from the camera for processing when the eyes of the person are opened and closed.
Figure 4: Eyes of the person in normal condition, so no fatigue is predicted.
iii) Recognition of whether the eyes of the person are open or closed.
Figure 5: Recognizing whether the eyes of the person are open or closed.
iv) Recognition of whether the eyes of the person are open or closed.
Figure 6: Fatigue detection process initiated when the eyes of the person are open and the eye area is sliced, indicating possible fatigue detection.
Figure 7: Recognizing that the eyes of the person are closed, so fatigue is detected.

5. COMPARISON WITH OTHER TECHNIQUES

The simplest method for driver fatigue detection is based on applying a threshold to each extracted symptom. In the systems presented, driver drowsiness was detected by applying a constant threshold on PERCLOS (the percentage of frames in which the eyes are closed). In the first stage the driver's face was identified, and then an appropriate threshold was chosen for the system based on the physical and psychological characteristics of the identified driver. Here we list a few techniques and make a comparison of them.
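The PERCLOS rule reduces to a simple fraction-of-frames computation; the sketch below assumes a per-frame eyesClosed logical vector, and the 0.4 threshold is an illustrative value rather than one reported in the paper.

% Minimal sketch of a constant-threshold PERCLOS check.
perclos = sum(eyesClosed) / numel(eyesClosed);  % fraction of frames with closed eyes
drowsy  = perclos > 0.4;                        % illustrative constant threshold
fprintf('PERCLOS = %.2f, drowsy = %d\n', perclos, drowsy);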
5.1 METHODS BASED ON THRESHOLD

In this system fatigue detection was carried out with 5 different persons in the age group of 30 to 35 years. Out of a total of 200 frames, each person had their eyes open for roughly 100 frames and closed for roughly 100 frames; the monitored results are discussed in the tables below.

For Open Eye:

No. of Person   Total Frames   Eyes Open   Eyes Open Detected   Correct Detection Rate
Person 1        200            94          94                   100%
Person 2        200            92          92                   100%
Person 3        200            91          91                   100%
Person 4        200            85          85                   100%
Person 5        200            86          86                   100%

Table 1: Open eye detection rate



For Closed Eye:

No. of Person   Total Frames   Eyes Closed   Eyes Closed Detected   Correct Detection Rate
Person 1        200            91            91                     100%
Person 2        200            83            83                     100%
Person 3        200            88            88                     100%
Person 4        200            92            92                     100%
Person 5        200            85            85                     100%

Table 2: Closed eye detection rate


6. CONCLUSION

A noninvasive system was developed to locate the eyes and face and to monitor fatigue as it occurs. The detectors were tested on different faces and eyes under the same lighting conditions and obtained very good results, considering the amount of parameter adjustment done during testing. While monitoring for fatigue, if the eyes open and close in several consecutive frames, a warning signal is issued. The system is able to automatically detect eye-localization errors that may occur; in case of this type of error the system recovers and properly localizes the eyes. The following conclusions were made:
- Image processing achieves highly accurate and reliable detection of fatigue.
- Image processing offers a non-invasive approach to detecting fatigue without annoyance and interference.
- A fatigue detection system developed around the principle of image processing judges the driver's alertness level on the basis of continuous eye closures.
7. LIMITATIONS

With 60% accuracy it is observed that there are limitations to the system.
- The most significant limitation is that it will not work with people who have very dark skin. This is apparent since the core of the algorithm behind the system is based on binarization, and for dark-skinned people the binarization does not work.
- Reflective objects behind the person cause problems; the more uniform the background is, the more robust the system becomes.
- For testing, rapid head movement was not allowed. This may be acceptable, since it can be equivalent to simulating a tired person. For small head movements the system rarely loses track of the eyes; when the head is turned too far sideways there were some false alarms.

8. FUTURE WORK

The system does not work on dark-skinned individuals. This can be corrected by having an adaptive light source: the adaptive light source would measure the amount of light being reflected back, and if little light is being reflected, the intensity of the light is increased. Darker-skinned individuals need much more light, so that when the binary image is constructed the face is white and the background is black.
Another big improvement would be to include other salient features of the human face (i.e. the nose or the mouth). This could introduce new geometrical constraints, but it might provide much better accuracy overall. In the long run, by properly adjusting the parameters and using a parallel implementation, this method could actually provide good results for real-time fatigue detection schemes.
9. REFERENCES

[1] Ijaz Khan, Hadi Abdullah, Mohd Shamian Zainal, Shipun Anuar, Hazwaj Mhd, Mohamad Md, "Vision Based Composite Approach for Lethargy Detection", IEEE CSPA 2014, 7-9 March 2014, Kuala Lumpur, Malaysia.
[2] Amandeep Singh, Jaspreet Kaur, "Driver Fatigue Detection Using Machine Vision Approach", Robotics and Autonomous Systems, 978-1-4673-4529-3/12, IEEE, 2012.
[3] D. J. M. Bomriver, "Vision-based Real-time Driver Fatigue Detection System for Efficient Vehicle Control", International Journal of Engineering and Advanced Technology (IJEAT), ISSN: 2249-8958, Volume 2, Issue 1, October 2012.
[4] Narendra Kumar, N. C. Barwar, "Analysis of Real Time Driver Fatigue Detection Based on Eye and Yawning", International Journal of Computer Science and Information Technologies (IJCSIT), Vol. 5 (6), 2014, pp. 7821-7826.
[5] Narendra Kumar, N. C. Barwar, "Detecting of Eye Blinking and Yawning for Monitoring Driver's Drowsiness in Real Time", International Journal of Application or Innovation in Engineering & Management (IJAIEM), Volume 3, Issue 11, November 2014.
[6] Swapnil V. Deshmukh, Dipeeka P. Radake, Kapil N. Hande, "Driver Fatigue Detection Using Sensor Network", International Journal of Engineering Science and Technology (IJEST), NCICT Special Issue, February 2011.
[7] Yang M. H., Kriegman J. and Ahuja N., "Detecting Faces in Images: A Survey", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, pp. 34-58, 2002.
[8] Wen-Bing Horng, Chih-Yuan Chen, Jian-Wen Peng, Chen-Hsiang Chen, "Improvements of Driver Fatigue Detection System Based on Eye Tracking and Dynamic Template Matching", Department of Computer Science and Information Engineering, Tamkang University, E-ISSN: 2224-3402, Issue 1, Volume 9, January 2012.
[9] Er. Manoram Vats and Er. Anil Garg, "Detection and Security System for Drowsy by Using Artificial Neural Network Technique", International Journal of Applied Science and Advance Technology, Vol. 1, No. 1, January-June 2012, pp. 39-43.
[10] Mu-Chun Su, Kuo-Chung Wang, Gwo-Dong Chen, "An Eye Tracking System and Its Application in Aids for People with Severe Disabilities", Department of Computer Science and Information Engineering, National Central University, Chung Li, Taiwan, Vol. 18, No. 6, December 2006.
[11] Dmitry Golomidov, "Human Detection in Video", dmitrygo@seas.upenn.edu, Advisor: Jianbo Shi, April 18, 2008.
[12] Paola Campadelli, Raffaella Lanzarotti and Giuseppe Lipori, "Automatic Facial Feature Extraction for Face Recognition", ISBN 978-3-902613-03-5, p. 558, I-Tech, Vienna, Austria, June 2007.
[13] Jung-Ming Wang, "Detecting Driver's Eyes During Driving", 18th IPPR Conference on Computer Vision, Graphics and Image Processing (CVGIP 2005).
[14] Qiang Ji, Zhiwei Zhu and Peilin Lan, "Real-Time Nonintrusive Monitoring and Prediction of Driver Fatigue", IEEE Transactions on Vehicular Technology, Vol. 53, No. 4, July 2004.
[15] Yang M. H., Kriegman J. and Ahuja N., "Detecting Faces in Images: A Survey", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, pp. 34-58, 2002.
[16] Fleck M., Forsyth D. A. and Bregler C., "Finding Naked People", in Proceedings of the ECCV, Vol. 2, pp. 592-602, 2002.
[17] Brand J. and Mason J., "A Comparative Assessment of Three Approaches to Pixel-Level Human Skin Detection", in Proceedings of the International Conference on Pattern Recognition, Vol. 1, pp. 1056-1059, 2000.
[18] Zarit B. D., Super B. J. and Quek F. K. H., "Comparison of Five Color Models in Skin Pixel Classification", in ICCV'99 Int'l Workshop on Recognition, Analysis and Tracking of Faces and Gestures in Real-Time Systems, pp. 58-63, 1999.
[19] Ahlberg J., "A System for Face Localization and Facial Feature Extraction", Tech. Rep. LiTH-ISY-R-2172, Linkoping University, 1999.
[20] Z. Zhu, Q. Ji, K. Fujimura and K. C. Lee, "Combining Kalman Filtering and Mean Shift for Real Time Eye Tracking under Active IR Illumination", presented at the Int. Conf. Pattern Recognition, Quebec, PQ, Canada, 2002.
[21] MathWorks, http://www.mathworks.com
[22] http://www.cse.iitk.ac.in/users
[23] Duda R. O. and Hart P. E., "Use of the Hough Transformation to Detect Lines and Curves in Pictures", Comm. ACM, Vol. 15, pp. 11-15, January 1972.
[24] The Hough Transform, http://planetmath.org/encyclopedia/HoughTransform.html

10. AUTHOR PROFILE

Pritam H. Gohatre received the Master of Technology in System Software from Rajiv Gandhi Technical University, Bhopal. Currently he is an Assistant Professor at LAMIT, Dhamangaon, India. He has published two papers in international journals. He has 7 years of teaching experience, and his fields of specialization are software development, image processing and networking.


AN IMPROVED IMAGE FUSION ALGORITHM BASED ON WAVELET TRANSFORMS USING PARTICLE SWARM OPTIMIZATION

HRISHIKESH S. HOLEY
Patel Institute of Technology, Bhopal
hrishikeshsholey@gmail.com
ABSTRACT: Feature based image fusion is a new area of research in the field of image fusion. Image fusion uses lower-level image features such as color, texture and dimension. Texture features are a very important component of an image; their processing and extraction use various transform functions such as the wavelet transform, the Gabor transform and many other signal-based transform functions. The process of image fusion involves two or more images. The fused image retains the quality of the original image as well as new features and improved areas adopted from the reference image. In this paper we propose a feature based image fusion technique. The technique uses a feature selection and feature optimization process based on particle swarm optimization, which selects the optimal texture features of both the original image and the reference image; from the original and reference image an optimal feature subset is found for the estimation of feature correlation.
Keywords: image fusion, wavelet transform function, swarm optimization technique, optimal texture.
1. INTRODUCTION

Computers have been widely used in our daily lives, since they can handle data and computation more efficiently and more accurately than humans. Therefore, it is natural to further exploit their capabilities for more intelligent tasks, for example the analysis of visual scenes (images or videos) or speech (audio), followed by logical inference and reasoning. For us humans, such tasks are performed hundreds of times every day, so easily and subconsciously that we are sometimes not even aware of them. In computer vision applications, one of the challenging problems is combining relevant information from various images of the same scene without introducing artifacts in the resultant image, since images are captured by different devices which may have different sensors. Because of the different types of sensors used in image capturing devices and their principles of sensing, and also due to the limited depth of focus of the optical lenses used in cameras, it is possible to get several images of the same scene carrying different information. Image registration is the process of systematically placing separate images in a common frame of reference so that the information they contain can be optimally integrated or compared; it is becoming the central tool for image analysis, understanding, and visualization in both medical and scientific applications. There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution panchromatic image and low-resolution multispectral images. Starting from the physical principle of image formation, neural networks and fuzzy theory are the two main methods of intelligence; an image fusion system based on these two methods can simulate intelligent human behavior and does not need a lot of background knowledge about the research subjects or a precise mathematical model, but finds the rules to resolve complex and uncertain issues on the basis of the input and output data of objects. From these characteristics and advantages, it can be seen that an approach combining neural networks and fuzzy theory can better accomplish pervasive multi-sensor image fusion.

The goal of the proposed system, and the object of image fusion, is to obtain a better visual understanding of certain phenomena, and to introduce or enhance intelligence and system control functions. Many advantages of multi-sensor data fusion can be achieved, such as improved system performance (improved detection, tracking and identification, improved situation assessment and awareness), improved robustness (redundancy and graceful degradation), improved spatial and temporal coverage, shorter response time, and reduced communication and computing.
2. LITERATURE SURVEY & BACKGROUND

[1] In this paper the authors proposed a pixel-level image fusion scheme using a multi-resolution steerable pyramid wavelet transform. Wavelet coefficients at different decomposition levels are fused using an absolute maximum fusion rule. Two important properties of the steerable pyramid wavelet transform, shift invariance and self reversibility, are advantageous for image fusion because they are capable of preserving edge information and hence reducing the distortion in the fused image. Experimental results show that the proposed method improves fusion quality by reducing the loss of relevant information present in the individual images; for quantitative evaluation, fusion factor, fusion symmetry, entropy and standard deviation are used as fusion metrics. In the proposed method two main steps are followed: first, the source images are decomposed into low-pass and high-pass sub-bands of different scales using the steerable pyramid, and second, the low-pass sub-band is divided into a set of oriented band-pass sub-bands and a low-pass sub-band. The suitability of the proposed method is tested on multi-focus and medical images, for which two pairs of images and their fusion results are presented. The results are also tested under two different conditions: when the images are free from any noise, and when they are corrupted with zero-mean white Gaussian noise. The experiments show that the proposed method performs better in all of the cases, with performance evaluated on the basis of qualitative and quantitative criteria.
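To make the absolute-maximum rule concrete, the following MATLAB sketch applies it to the detail sub-bands of a single-level dwt2 decomposition (the work in [1] uses a steerable pyramid instead; the file names are illustrative and the Wavelet Toolbox is assumed).

% Minimal sketch of the absolute-maximum fusion rule on wavelet coefficients.
A = im2double(imread('source1.png'));            % illustrative source images
B = im2double(imread('source2.png'));
[a1, h1, v1, d1] = dwt2(A, 'db1');               % single-level decomposition
[a2, h2, v2, d2] = dwt2(B, 'db1');
pick = @(c1, c2) c1 .* (abs(c1) >= abs(c2)) + c2 .* (abs(c1) < abs(c2));
fused = idwt2((a1 + a2) / 2, ...                 % average the approximation band
              pick(h1, h2), pick(v1, v2), pick(d1, d2), 'db1');
imshow(fused); title('Absolute-maximum wavelet fusion');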
[2] In this paper a fusion framework based on data assimilation and a genetic algorithm for multispectral and panchromatic images was presented. Data assimilation can combine the advantages of a model operator and an observation operator. The proposed method integrates the advantages of DWT and IHS and constructs an objective function through successive application, to satisfy the aim of adaptively adjusting the fusion parameters. Standard deviation and average gradient are chosen as the objective function; in general, the higher the value, the better the texture information. Two experiments (Spot, QuickBird) validate this framework, and the experimental results show that the proposed fusion framework is feasible.
[3] This paper presents a comprehensive framework, the general image fusion (GIF) method, which makes it possible to categorize, compare, and evaluate the existing image fusion methods. Using the GIF method, it is shown that the pixel values of the high-resolution multispectral images are determined by the corresponding pixel values of the low-resolution panchromatic image, the approximation of the high-resolution panchromatic image at the low-resolution level. Under different assumptions on how the LRPI is computed and how the modulation coefficients are set, many existing image fusion methods, including, but not limited to, IHS, BT, HPF, HPM, PCA, ATW, and MRAIM, are shown to be particular cases of the GIF method. The performance of each method is determined by two factors: how the LRPI is computed and how the modulation coefficients are defined. If the LRPI is approximated from the LRMIs, it usually has a weak correlation with the HRPI, leading to color distortion in the fused image. If the LRPI is a low-pass filtered HRPI, it usually shows less spectral distortion. If the modulation coefficient is set to a constant value, the reflectance differences between the panchromatic bands and the multispectral bands are not taken into consideration, and the fused images bias the color of the pixel toward gray. Methods in which the modulation coefficients are set following the GIF method can preserve the ratios between the respective bands, give more emphasis to slight signature variations, and maintain the radiometric integrity of the data while increasing spatial resolution.
[4] This paper addresses the image registration problem by applying genetic algorithms. The objective of image registration is to define a mapping that best matches two sets of points or images. In this work the point matching problem was addressed employing a method based on nearest neighbors, and the mapping was handled by affine transformations. The paper presents a genetic algorithm approach to the stated problem of mis-registration. The genetic algorithm is an iterative process which repeatedly modifies a population of individual solutions. At each step, the genetic algorithm selects individuals at random from the current population to be parents and uses them to produce the children for the next generation. In each generation, the fitness of every individual in the population is evaluated; multiple individuals are stochastically selected from the current population (based on their fitness) and modified (recombined and possibly randomly mutated) to form a new population. The new population is then used in the next iteration of the algorithm. Over successive generations the population evolves toward an optimal solution, and the algorithm terminates when either a maximum number of generations has been produced or a satisfactory fitness level has been reached for the population. The focus here is on genetic algorithms for medical image registration. The genetic algorithm is an evolutionary algorithm; there are other methods such as simulated annealing and mutual information theory for image registration, as well as graph algorithms and sequence algorithms. These algorithms can be implemented and compared to find the most suitable one for medical applications.
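The generation loop described above can be sketched generically in MATLAB as follows; the population size, toy fitness function, crossover point and mutation rate are all illustrative assumptions, not the registration objective of [4].

% Minimal generic genetic-algorithm sketch of the select/crossover/mutate loop.
rng(0);                                          % reproducible run
popSize = 40; nGenes = 8; nGen = 100;
pop = rand(popSize, nGenes);                     % random initial population
fitness = @(x) -sum((x - 0.5).^2, 2);            % toy objective to maximize
for g = 1:nGen
    f = fitness(pop);
    [~, order] = sort(f, 'descend');
    parents = pop(order(1:popSize/2), :);        % selection: keep the fitter half
    cut  = randi(nGenes - 1);                    % one-point crossover
    kids = [parents(1:2:end, 1:cut), parents(2:2:end, cut+1:end); ...
            parents(2:2:end, 1:cut), parents(1:2:end, cut+1:end)];
    mutate = rand(size(kids)) < 0.05;            % 5% per-gene mutation rate
    kids(mutate) = rand(nnz(mutate), 1);
    pop = [parents; kids];                       % next generation
end
fprintf('Best fitness: %.4f\n', max(fitness(pop)));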
3. PROPOSED METHODOLOGY AND ARCHITECTURE

In this section we discuss the proposed methodology of the feature based image fusion technique, which is based on the wavelet transform function and particle swarm optimization. The features of the transform function pass through a feature selection process, which uses the particle swarm optimization technique to select the optimal features of the given texture feature matrix. If the correlation coefficient factor estimates the value of the correlation to be zero, the fusion process is done. The proposed model is divided into two sections: the first section initially takes the host image and the reference image and passes them through the wavelet transform function for feature extraction; after the feature extraction, the optimization task is done by particle swarm optimization.


Step 1: Feature extraction

a. Input the host image and the reference image.
b. Apply the wavelet transform function separately to each image for feature extraction:
   F(x) = I(x, y) is the host image; F1(x) = I1(x1, y1) is the reference image.

M(F) = F(x) * G(x)

The convolution of the host image with the transform function is performed; M(F) stores the texture feature matrix of the host image. A feature vector is then constructed using the moments mu_mn and sigma_mn as feature components:

f = [mu_00 sigma_00 mu_01 sigma_01 ... mu_mn sigma_mn]   (1)

We obtain a numerical vector of 60 dimensions for 10 orientations and 6 scale changes; these moment feature values are stored in the M(F) matrix. Similarly, the relative feature of the reference image is created:

N(F) = F1(x) * G(x)

The convolution of the reference image with the transform function is performed; N(F) stores the texture feature matrix of the reference image, and a feature vector is constructed using mu'_mn and sigma'_mn as feature components:

f' = [mu'_00 sigma'_00 mu'_01 sigma'_01 ... mu'_mn sigma'_mn]   (2)

Again a 60-dimensional numerical vector is obtained for 10 orientations and 6 scale changes, stored in the N(F) matrix.

Step 2: Feature optimization and fusion

1. Both feature matrices are converted into feature vectors and passed through particle swarm optimization.
2. Particle swarm optimization is used for classification of patterns. The data are transformed to the format of an SVM: X is the original data, transformed so that each Xi lies in R^d, where d is the dimension of the data. Scaling is conducted on the data, with a scaling factor over the m data points and k instances, so that similar points lie close together. The RBF kernel K(x, y) = exp(-gamma * ||x - y||^2) is considered as the kernel equation of the plane. Cross-validation is used to find the best parameters C and gamma, and the best parameters are then used to train the whole training set, with Ro the learning parameter of the kernel function.
3. Generate patterns of similar and dissimilar regions of both images, and estimate the correlation coefficient of the two patterns using Pearson's coefficient:

   r = sum((a_i - mean(a)) * (b_i - mean(b))) / sqrt(sum((a_i - mean(a))^2) * sum((b_i - mean(b))^2))

   where a and b are the patterns of the host image and the reference image. The estimated correlation coefficient is checked against the total value of the MSE, and the relative feature difference value is created.
4. If the relative pattern difference value is 0, the fusion process is done.
5. Calculate the PSNR value of the fused image.
6. Calculate the IQI value of the fused image.
7. Calculate the fusion MSER of the fused image.

PROPOSED MODEL

Figure 1: Proposed model of the image fusion technique based on feature optimization
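A minimal MATLAB sketch of the extraction-and-correlation stage follows, assuming the Wavelet Toolbox; it uses a single-level dwt2 with mean/standard-deviation sub-band features in place of the paper's 60-dimensional multi-orientation feature set, and the file names are illustrative.

% Minimal sketch: wavelet texture features and their Pearson correlation.
host = im2double(imread('host.png'));
ref  = im2double(imread('reference.png'));
[cA1, cH1, cV1, cD1] = dwt2(host, 'db1');   % host sub-bands
[cA2, cH2, cV2, cD2] = dwt2(ref,  'db1');   % reference sub-bands
% Feature vectors of sub-band means and standard deviations, f = [mu sigma ...].
f1 = [mean(cA1(:)) std(cA1(:)) mean(cH1(:)) std(cH1(:)) ...
      mean(cV1(:)) std(cV1(:)) mean(cD1(:)) std(cD1(:))];
f2 = [mean(cA2(:)) std(cA2(:)) mean(cH2(:)) std(cH2(:)) ...
      mean(cV2(:)) std(cV2(:)) mean(cD2(:)) std(cD2(:))];
R = corrcoef(f1, f2);                       % Pearson correlation of the features
fprintf('Feature correlation r = %.3f\n', R(1, 2));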


DESCRIPTION OF MODEL

This section describes the process of the proposed model. The proposed model works with the wavelet transform function and particle swarm optimization, where the swarm optimization is used for the feature optimization process. The steps of the proposed model are discussed below.


Step 1. Initially supply the original image and the reference image for the processing of feature extraction.
Step 2. The discrete wavelet transform function is applied for texture feature extraction.
Step 3. After the texture feature extraction, calculate the maximum value of the features using the mean standard formula.
Step 4. The maximum value of the feature set is the global fitness value for the constraints of particle swarm optimization.
Step 5. Particle swarm optimization treats all features as particles, measures the difference value, and moves along the feature direction in search of the optimum.
Step 6. From the optimal features selected in both images, estimate the correlation coefficient value R.
Step 7. If the value of R is 0, the images proceed to the image fusion process.
Step 8. If the value of R is not equal to 0, processing goes back to the estimation function.
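A generic particle swarm optimization loop of the kind used in Steps 4-5 can be sketched as follows; f1 and f2 are assumed to be equal-length feature vectors from the previous stage, and the swarm size, coefficients and correlation-based fitness are illustrative assumptions rather than the paper's exact formulation.

% Minimal generic PSO sketch searching feature weights in [0, 1]^d.
rng(0);
d = numel(f1); n = 20; iters = 50;
wPos = rand(n, d); vel = zeros(n, d);        % particle positions and velocities
pBest = wPos; pBestFit = inf(n, 1);          % personal bests
gBest = wPos(1, :); gBestFit = inf;          % global best
c1 = 1.5; c2 = 1.5; inertia = 0.7;           % standard PSO coefficients
for it = 1:iters
    for i = 1:n
        R  = corrcoef(wPos(i, :) .* f1, wPos(i, :) .* f2);
        fx = -abs(R(1, 2));                  % maximize |correlation| of weighted features
        if fx < pBestFit(i), pBestFit(i) = fx; pBest(i, :) = wPos(i, :); end
        if fx < gBestFit,    gBestFit = fx;    gBest = wPos(i, :);       end
    end
    vel  = inertia * vel + c1 * rand(n, d) .* (pBest - wPos) ...
                         + c2 * rand(n, d) .* (gBest - wPos);
    wPos = min(max(wPos + vel, 0), 1);       % keep weights inside [0, 1]
end
fprintf('Best |correlation| found: %.3f\n', -gBestFit);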
4. RESULT ANALYSIS

To investigate the effectiveness of the proposed method for image fusion based on the wavelet transform function and particle swarm optimization, we used MATLAB 7.14.0 and several well-known images for the experimental task: a Head image, Head CT image, Head MRI image, Heart image and Hand image. All of these are gray scale images of size 512 * 512. The performance measuring parameters are MSER, PSNR and IQI. Here we compare image fusion using the wavelet transform alone (DWT) and the wavelet transform with particle swarm optimization (DWT-POS).
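The quality measures can be computed directly; the sketch below evaluates PSNR and an image quality index in the Wang-Bovik form, which may differ in detail from the paper's implementation (refImg and fusedImg are assumed to be same-size uint8 images).

% Minimal sketch of the PSNR and IQI quality measures for a fused image.
x = double(refImg(:)); y = double(fusedImg(:));
mse  = mean((x - y).^2);
PSNR = 10 * log10(255^2 / mse);              % peak signal-to-noise ratio in dB
mx = mean(x); my = mean(y);
vx = var(x);  vy = var(y);
cxy = mean((x - mx) .* (y - my));            % cross-covariance
IQI = 4 * cxy * mx * my / ((vx + vy) * (mx^2 + my^2));
fprintf('PSNR = %.2f dB, IQI = %.3f\n', PSNR, IQI);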

IMAGE NAME   Name of Method   MSER    PSNR    IQI
Head         DWT              22.03   18.30   0.955
Head         DWT-POS          26.18   20.17   0.947

Table 1: Comparative result analysis for the Head image using the DWT and DWT-POS methods, with the values of MSER, PSNR and IQI.

IMAGE NAME   Name of Method   MSER    PSNR    IQI
Head CT      DWT              17.38   15.82   1.96
Head CT      DWT-POS          23.67   18.29   0.953

Table 2: Comparative result analysis for the Head CT image using the DWT and DWT-POS methods, with the values of MSER, PSNR and IQI.

IMAGE NAME   Name of Method   MSER    PSNR    IQI
Head MRI     DWT              15.89   14.43   1.964
Head MRI     DWT-POS          22.15   16.84   0.957

Table 3: Comparative result analysis for the Head MRI image using the DWT and DWT-POS methods, with the values of MSER, PSNR and IQI.

IMAGE NAME   Name of Method   MSER    PSNR    IQI
Heart        DWT              22.17   18.43   0.954
Heart        DWT-POS          26.53   21.01   0.943

Table 4: Comparative result analysis for the Heart image using the DWT and DWT-POS methods, with the values of MSER, PSNR and IQI.

IMAGE NAME   Name of Method   MSER    PSNR    IQI
Hand         DWT              24.76   20.86   0.948
Hand         DWT-POS          29.06   23.41   0.940

Table 5: Comparative result analysis for the Hand image using the DWT and DWT-POS methods, with the values of MSER, PSNR and IQI.

COMPARATIVE RESULT GRAPHS

Figure 2: Comparative result graph for the Head image using the DWT and DWT-POS image fusion methods, showing the values of MSER, PSNR and IQI.
Figure 3: Comparative result graph for the Head CT image using the DWT and DWT-POS image fusion methods, showing the values of MSER, PSNR and IQI.
Figure 4: Comparative result graph for the Head MRI image using the DWT and DWT-POS image fusion methods, showing the values of MSER, PSNR and IQI.
Figure 5: Comparative result graph for the Heart image using the DWT and DWT-POS image fusion methods, showing the values of MSER, PSNR and IQI.


Figure 6: Comparative result graph for the Hand image using the DWT and DWT-POS image fusion methods, showing the values of MSER, PSNR and IQI.

5. CONCLUSION

In this dissertation a feature based image fusion technique was proposed for improving the quality of distorted and damaged images. The proposed algorithm uses the wavelet transform function for the feature extraction process; the wavelet transform extracts the lower content of texture features, which is used in the feature optimization process. The feature optimization is done by particle swarm optimization, a dynamic population-based optimization technique. The correlation coefficient factor estimates the relation between the original image and the reference image: if the value of the correlation is 0 the images are fused, and if it is not equal to zero the estimation factor is recalled. To measure the quality of the fused image, quality measures are considered; these measures play an important role in various image processing applications. The goal of image quality assessment is to supply quality metrics that can predict perceived image quality automatically. While visual inspection has limitations due to human judgment, a quantitative approach based on the evaluation of the distortion in the resulting fused image is more desirable for mathematical modeling. Quantitative measures are normally used in place of visual inspection because of the limitations of the human eye, and one can develop quantitative measures to predict perceived image quality. In this dissertation PSNR, IQI and MSER are used for the estimation of image quality.

6. FUTURE WORK

The proposed method of image fusion is very efficient for the process of image quality improvement, and the fusion process produces good results in terms of quantitative analysis, but it still needs some improvement in the IQI parameter. The maximum value of IQI is 1, whereas in this dissertation only 97-98% of the quality factor was reached. In future work the value of IQI should be improved up to 1; for this, two or more features can be combined with the texture feature.

7. REFERENCES

[1] Om Prakash, Arvind Kumar, Ashish Khare, "Pixel-level image fusion scheme based on steerable pyramid wavelet transform using absolute maximum selection fusion rule", IEEE, 2014, pp. 765-771.
[2] Liang Hong, Kun Yang, Xianchun Pan, "Multispectral and panchromatic image fusion based on genetic algorithm and data assimilation", IEEE, 2011, pp. 156-160.
[3] Zhijun Wang, Djemel Ziou, Costas Armenakis, Deren Li, and Qingquan Li, "A Comparative Analysis of Image Fusion Methods", IEEE Transactions on Geoscience and Remote Sensing, Vol. 43, 2005, pp. 1391-1402.
[4] V. T. Ingole, C. N. Deshmukh, Anjali Joshi, Deepak Shete, "Medical Image Registration Using Genetic Algorithm", Second International Conference on Emerging Trends in Engineering and Technology, 2009, pp. 63-67.
[5] Won Hee Lee, Kyuha Choi, Jong Beom Ra, "Frame Rate Up Conversion Based on Variational Image Fusion", IEEE Transactions on Image Processing, Vol. 23, 2014, pp. 399-413.
[6] De-gan Zhang, Chao Li, Dong Wang, Xiao-li Zhang, "A New Method of Image Data Fusion Based on FNN", Sixth International Conference on Natural Computation, 2010, pp. 3729-3733.
[7] Chaunt W. Lacewell, Mohamed Gebril, Ruben Buaba, and Abdollah Homaifar, "Optimization of Image Fusion Using Genetic Algorithms and Discrete Wavelet Transform", IEEE, 2010, pp. 116-121.
[8] Ling Tao, Zhi-Yu Qian, "An Improved Medical Image Fusion Algorithm Based on Wavelet Transform", Seventh International Conference on Natural Computation, IEEE, 2011, pp. 76-80.
[9] Lu Xiaoqi, Zhang Baohua, Gu Yong, "Medical Image Fusion Algorithm Based on Clustering Neural Network", IEEE, 2007, pp. 637-640.
[10] Vikas Kumar Mishra, Shobhit Kumar, Ram Kailash Gupta, "Design and Implementation of Image Fusion System", International Journal of Computer Science & Engineering, Vol. 2, 2014, pp. 182-186.
[11] S. Bedi, Rati Khandelwal, "Comprehensive and Comparative Study of Image Fusion Techniques", International Journal of Soft Computing and Engineering, Vol. 3, 2013, pp. 300-304.
[12] Jamal Saeedi, Karim Faez, "Infrared and visible image fusion using fuzzy logic and population-based optimization", Applied Soft Computing, Elsevier Ltd., 2012, pp. 1041-1054.
[13] P. Shah, S. N. Merchant, U. B. Desai, "Fusion of surveillance images in infrared and visible band using curvelet, wavelet and wavelet packet transform", International Journal of Wavelets, Multiresolution and Information Processing, Vol. 8, No. 2, 2010, pp. 271-292.
[14] P. Shah, S. N. Merchant, U. B. Desai, "An efficient spatial domain fusion scheme for multifocus images using statistical properties of neighborhood", Multimedia and Expo (ICME), 2011, pp. 1-6.
[15] V. P. S. Naidu, R. Raol, "Pixel-level image fusion using wavelets and principal component analysis", Defence Science Journal, Vol. 58, No. 3, 2008, pp. 338-352.

8. AUTHOR PROFILE

Prof. Hrishikesh S. Holey received the Master of Technology in Software Engineering from Rajiv Gandhi Technical University, Bhopal. Currently he is an Assistant Professor at HVPM College of Engineering & Technology, Amravati, India. He has published two papers in international journals. He has 6 years of teaching experience, and his fields of specialization are Image Processing, Digital Signal Processing, Software Engineering, Computer Networks and Network Security.


SAP HANA DATABASE WITH NETWORK

PROF. ANUP P. DATE 1, MISS. NAMRATA S. MAHAJAN 2
1 Assistant Professor, Department of CSE, KGIET Darapur, Sant Gadge Baba Amravati University, Maharashtra, India. dateap@gmail.com
2 Department of CSE, KGIET Darapur, Sant Gadge Baba Amravati University, Maharashtra, India. namratasmahajan11@gmail.com

ABSTRACT: SAP HANA Database stands for a system for application and processing on a database with a High-performance ANalytic Appliance. The SAP HANA core database can serve real-time, complex queries and multi-structured data needs. Enterprise requirements increase day by day, and in the SAP HANA database a number of users operate on the same data simultaneously. The goal of the SAP HANA database is to reduce the problems related to storage and workload within the management system. For this reason it runs on modern hardware with multiple processors, large main memory, and caches. The SAP HANA database has various methods for compression of database content [1].
Keywords: SAP HANA, real time, multiple processor, cache, main memory.
1. INTRODUCTION

SAP HANA is a combination of hardware and software made to handle both high transaction rates and complex query processing. SAP HANA was previously called SAP High-Performance Analytic Appliance. In addition to the database engine, HANA includes an embedded web server and TREX, an in-memory column-oriented search engine [2]. SAP HANA has a cloud platform for data storage, and hence the data is stored in a secure and efficient way [3].

2. THE SAP HANA DATABASE WITH NETWORK CONNECTIVITY

A network is defined as a set of two or more computers that share information and resources with each other. SAP HANA has two types of connection to the network:
1) Inbound connections.
2) Outbound connections.
Inbound connections: For database clients and data provisioning, inbound connections use the SQLDBC protocols on port numbers 3xx15 and 3xx17; SAP HANA Studio operates administrative functions using port numbers 5xx13, 5xx14, 1128 and 1129; web-based access to SAP HANA is done using HTTP and HTTPS on port numbers 80xx and 43xx [3].
Outbound connections: Outbound connections are used by the solution manager for the purposes of the diagnostics agent installed on each system.

An SAP HANA data centre database can run on anything ranging from a single host to a complex distributed system with hosts in multiple locations.

2.1 High availability and disaster recovery in SAP HANA: SAP HANA is fully designed for high availability. It supports recovery mechanisms for the detection of errors related to software and other faults. High availability in SAP HANA is derived from a set of techniques, engineering practices and design principles that fulfil the requirement of business continuity [4]. To achieve high availability, SAP HANA first eliminates single points of failure as quickly as possible and provides the ability to repair faults. Fault recovery is the method of detecting and correcting faults. In disaster recovery a backup of the data is taken; the backup is used when data is lost for reasons such as viruses or power supply faults.

2.2 Connectivity to Network: The SAP HANA database has a client-server model of connectivity. SAP HANA is a transaction-oriented database because it uses replication services. The setup of an SAP HANA system depends on the following things.

- Support for traditional database clients, web-based users, and administrators.
- The number of clients using the SAP HANA system, ranging from a single-client system to a complex distributed system, which is responsible for providing an informational framework that uses the most efficient transmission channels, with a low error rate, across multiple clients.
- Support for high availability through the use of multiple techniques, and support for disaster-recovery datacenters with recovery methods.
SAP HANA has a number of network communication channels for the different SAP HANA setups. Separate channels are used for external access to SAP HANA functions by end-user clients, administration clients and application servers, and for data provisioning via the SQL or HTTP protocols.
To separate external and internal communication, SAP HANA hosts use separate network adapters. A network adapter is a board or PCMCIA card that has RAM, DSP chips and a link interface, with a separate logical address for each of the different networks. In addition, SAP HANA can be configured to use the SSL (Secure Sockets Layer) protocol for secure communication.
3. NETWORK ZONES
3.1 Client zone: In simple terms, a client is a person or program that uses the facilities provided by a server. The client zone is used by SAP application servers, by clients such as SAP HANA Studio, by web applications running with the help of the SAP HANA Extended Services server, and by the data warehouse used for storing users' historical data [8].
3.2 Internal zone: This zone covers the inter-host network between hosts in a distributed system and the SAP HANA system replication technique.
3.3 Storage zone: This zone is responsible for data backup.


3.4 Connections used by database clients and web clients to connect to SAP HANA. The connections are as follows:
- Connections for administrative purposes.
- Data provisioning (providing) connections.
- Connections from database clients that access the database via the SQL/MDX interface.

4. PROTOCOLS USED BY THE DATABASE FOR CONNECTING TO THE NETWORK
Protocols: A protocol is the set of rules and regulations that govern communication; without protocols there is no communication. For database connectivity, the SAP HANA database uses the following protocols.
4.1 SQLDBC: It stands for SQL Database Connectivity. Using this protocol we can connect the database to the network. This protocol in turn has two types, JDBC and ODBC, and is used by clients and administrators to connect to the database.
JDBC: Stands for Java Database Connectivity. This protocol is responsible for client connections and access to the database. It is used for updating database contents, and stored procedures are invoked over a JDBC connection.
ODBC: Stands for Open Database Connectivity. An application written using ODBC can be ported to both the client and server side. It uses the concept of DSNs, i.e. data source names; a DSN collects additional information about a particular source connection.
4.2 HTTP/HTTPS: HTTP stands for Hypertext Transfer Protocol, and the S is for Secure. SAP HANA uses these protocols for connecting web applications to the database. HTTP provides reliable data transfer, uses ports 80xx, and runs over a TCP connection [9].
4.3 SSL: SSL stands for the Secure Sockets Layer protocol. It is used for secure communication between the SAP HANA database and its clients. This protocol provides server authentication and data encryption. SSL uses ports 5xx13 and 5xx14.

Figure 1: Network Zone.
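As an illustration of the client connectivity just described, the sketch below opens a SQL connection on the standard SQL port of instance 00 (the 3xx15 scheme gives 30015) using SAP's hdbcli Python driver. The host name and credentials are placeholders, not values from this paper; this is a minimal sketch, not a complete client.

from hdbcli import dbapi  # SAP's Python database client

# Hypothetical host and credentials; port 30015 follows the 3xx15 scheme
# described above, with instance number xx = 00.
conn = dbapi.connect(
    address="hana-host.example.com",
    port=30015,
    user="SYSTEM",
    password="secret",
)
cursor = conn.cursor()
cursor.execute("SELECT * FROM DUMMY")  # DUMMY is HANA's one-row test table
print(cursor.fetchone())
conn.close()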


Database Client Access:
- Clients: 1) a client that uses the SAP HANA database as an application (example: Business Warehouse); 2) a user that accesses SAP HANA directly (example: Microsoft Excel); 3) SAP HANA Studio, the administrative block.
- Protocol used for database client access: SQLDBC.
- TCP port numbers: 3xx15 and 3xx17.

Administrative Tasks:
- Clients: 1) SAP support, which uses the internal SAP protocol on port 3xx09; 2) SAP HANA Studio, which uses SQLDBC on ports 5xx13 and 5xx14 (SSL); here the connection agent plays the role of administrator.

5. SAP HANA SERVER
Server: A server is any computer program or machine that accepts client requests and returns responses. The main motto of a server is to share resources and hardware among the clients. SAP HANA has the following servers:
Index Server: It is responsible for storing the actual data and for the engines that process the data. It also acts as the main server; all the other servers are coordinated by it.
Name Server: It is responsible for storing the topology of a distributed SAP HANA database. SAP HANA is deployed as a distributed system by using instance numbers, and the name server holds information about the running components of the SAP HANA database.
Statistics Server: It holds status information about performance and resource consumption from all the other server components. The Statistics Server is accessed through SAP HANA Studio, and it monitors the working of the entire system.
Pre-processor Server: It is used to analyse text data and extract the information on which the text search capabilities are based [2].

6. NETWORK SECURITY

6.1 Authentication and Authorization: SAP HANA supports authentication methods. Authentication provides a way of identifying a user. The most basic is the username/password combination for database users, created and maintained through the SAP HANA Studio [4].
6.2 Encryption: Encryption is a method of hiding data on the network. The Secure Sockets Layer (SSL) protocol can be used to encrypt client-server traffic and internal communications in SAP HANA. SSL is not invulnerable, however: SSL proxies are widely available and can be used to encrypt and decrypt packets passed between endpoints within a network. Root encryption keys are stored using the SAP NetWeaver secure storage file system (SSFS). Keys should be changed periodically using the SQL command ALTER SYSTEM PERSISTENCE ENCRYPTION CREATE NEW KEY followed by ALTER SYSTEM PERSISTENCE ENCRYPTION APPLY CURRENT KEY [5].
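The key rotation described above can be scripted; the sketch below simply issues the two SQL statements quoted in the text through an hdbcli connection like the one shown earlier. The connection details are placeholders, the statements are reproduced as quoted above, running them requires administrative privileges, and the exact statement form should be verified against the SAP HANA documentation.

from hdbcli import dbapi

# Placeholder connection details, as in the earlier sketch.
conn = dbapi.connect(address="hana-host.example.com", port=30015,
                     user="SYSTEM", password="secret")
cursor = conn.cursor()
# Rotate the persistence root encryption key: create a new key, then
# apply it (the two commands quoted in the text above).
cursor.execute("ALTER SYSTEM PERSISTENCE ENCRYPTION CREATE NEW KEY")
cursor.execute("ALTER SYSTEM PERSISTENCE ENCRYPTION APPLY CURRENT KEY")
conn.close()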
6.3 Auditing and Logging: An audit trail is also called an audit log; auditing means recording to the log file. Once enabled, audit policies should be configured to log actions including SELECT, INSERT, UPDATE, DELETE and EXECUTE, as well as other queries when combined with specific conditions. Policies can be configured for specific users, tables, views and procedures. It is recommended to audit all actions performed by privileged users, including the SYSTEM user, and actions that impact sensitive database objects. The SAP HANA database also supports recovery: the database keeps one copy of the original data, and if anyone makes changes, the data is matched against that copy [5].
7. CONCLUSION
The requirement of every organization is to have a powerful database for storage, and SAP HANA provides the things that help an organization perform at its best. SAP HANA has the speciality that thousands of users may read or update records of the same data. SAP HANA provides strong security for data. As all organizations give importance to having a powerful database network, SAP HANA fulfils these organizational requirements; the SAP HANA database with network connectivity gives clear benefits to the organization.
8. REFERENCES

[1] F. Färber, N. May, W. Lehner, P. Große, "The SAP HANA Database - An Architecture Overview," IEEE Data Eng. Bull., 35(1), 2012.
[2] T. Patel, P. Gupta, N. Khatri, "Distributed SAP HANA Database for Efficient Processing," International Journal of Advanced Research in Computer and Communication Engineering, vol. 2, issue 6, June 2013.
[3] T. Benson, A. Akella, A. Shaikh, and S. Sahu, "CloudNaaS: a cloud networking platform for enterprise applications," in SOCC, 2011.
[4] H. Plattner, "A common database approach for OLTP and OLAP using an in-memory column database," in Proceedings of the 35th SIGMOD International Conference on Management of Data, SIGMOD '09.
[5] SUSE Linux Enterprise Server 11 SP3 Security and Hardening, June 2013, SUSE LLC; SUSE Linux Enterprise Server 11 SP3 Security Guide, July 2013, SUSE LLC.
[6] F. Färber, S. K. Cha, J. Primsch, C. Bornhövd, S. Sigg, and W. Lehner, "SAP HANA Database - Data Management for Modern Business Applications," SIGMOD Record, 40(4):45-51, 2011.
[7] T. Legler, W. Lehner, and A. Ross, "Data mining with the SAP NetWeaver BI accelerator," in VLDB, pages 1059-1068, 2006.
[8] P. Costa, M. Migliavacca, P. Pietzuch, et al., "NaaS: Network-as-a-Service in the cloud," in HotICE, 2012.
[9] D. D. Clark, J. Romkey, and H. C. Salwen, "An analysis of TCP processing overhead," in LCN, 1988.
[10] SAP AG, SAP HANA Introduction, Participant Handbook, 2013, pp. 2-18.

9. AUTHOR PROFILE
Prof. Anup P. Date received the MBA in Computer Management from the University of Pune. Currently he is an Assistant Professor at KGIET Darapur, India. He has published three papers in international journals and two in national conferences. He has 6 years of teaching experience, and his fields of specialization are software engineering, software management, computer networking, database management, artificial intelligence, web-based engineering and image processing.
Miss. Namrata S. Mahajan is currently pursuing the Bachelor of Engineering in Computer Science and Engineering at Kamalatai Gawai Institute of Engineering & Technology, Amravati, India.


PERFORMANCE EVALUATION OF DATABASE CLIENT ENGINE USING MODULAR APPROACH
PROF. RAMANAND SAMDEKAR
S.B. Jain Institute of Technology Management & Research, Nagpur, India
samdekar.ram@gmail.com
ABSTRACT: Materialized view creation is an important aspect of large data-centric applications. Materialized views create an abstraction over the actual database tables for the users. Materialized view (MV) creation and selection is based on various parameters such as access frequency, base update frequency, etc. The database client engine uses a cluster-based approach to create materialized views to reduce query execution time.
Keywords: Database client engine, cluster base, Data Warehouse, Threshold, materialized views.
1. INTRODUCTION
Data warehouses are designed to facilitate reporting and analysis of heterogeneous data, with a focus on data storage. The data warehouse is intended to provide decision-support services for large volumes of data, so responding rapidly to query requests is a great challenge in a data warehouse. Quick response time and accuracy are important factors in the success of any database.
This paper proposes an approach of grouping, in a broader sense clustering, similar queries depending on certain parameters such as access frequency, to find the result from an MV. The proposed work explores a) query clustering for the selection of materialized views to decrease the response time and storage space in the deployment environment, b) easing network goals, c) enabling data sub-setting, and d) enabling disconnected computing. To achieve these benefits, a methodology is proposed in this paper to quantitatively optimize total query response time under a disk-space constraint for data warehouse applications, as presented in [1] [3].
2. RELATED WORKS
Ordinary views are loaded with data every time they are called. Thus, in real-life applications materialized views are found to be more suitable for reducing query execution time. Materialized view creation involves several issues to consider; however, the main concern is to ensure the availability of a high proportion of user-requested data directly from materialized views. Automated selection [13] of materialized views in large data-oriented applications is desirable for dynamic changes. Very little research work has been done on the selection of materialized views using a clustering approach. A significant work on dynamic clustering of materialized views is done by [1]. Paper [5] proposes a greedy algorithm, BPUS, based on the lattice model, and paper [6] discusses the issue of materialized view selection with the B-tree index. Paper [7] proposes the PBS algorithm, which makes the size of the materialized view the selection criterion. Paper [8] proposes a preprocessor for materialized view selection, which reduces the search-space cost and time complexity of static materialized view selection algorithms. These algorithms are based on a known query distribution, or on the premise of a uniform distribution, and essentially are static algorithms. However, queries are random in an actual OLAP system, so the materialized view set which a static algorithm generates cannot maximally enhance the query response performance of the data warehouse. In order to further improve query response performance in the data warehouse, paper [9] proposes a dynamic materialized view selection algorithm, the FPUS algorithm, which is based on query frequency in unit space. The relationship among several attributes is captured in the form of a quantitative metric using a robust mathematical model, which is implemented here using a line-fitting algorithm. This quantitative measure guides the construction of the materialized views.
3. PROPOSED METHODOLOGY
Our solution is an approach based on user behavior and users' interactions with the system, particularly the distribution of their queries, to create the set of views to materialize. Materialized views are able to provide better performance for DW queries. However, these views have a maintenance cost, so materialization of all views is not possible. An important challenge of the DW environment is materialized view selection, because we have to realize the trade-off between performance and view maintenance. It is necessary to consider the following things:
1) Classification of queries: determine the categories of data in which the user is interested. [11]
2) Classification of attribute groups: determine the groups of attributes for each class.
3) Merging classes: merge the data classes to make the classes that are most compact.
A clustering method is suggested in which similar queries are clustered according to their query access frequency to select the materialized views that will reduce execution time and storage space. When a query is posed, it is compared with the already clustered, existing queries, and the precomputed MV is returned as the result, which reduces the execution time of the query. In this approach, a framework is created which reduces the execution time of any query posed to it. [9]
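A minimal sketch of the frequency-threshold grouping described above follows; the query log, the threshold value, and the use of the raw query string as the cluster key are illustrative assumptions, not the paper's implementation.

from collections import Counter

query_log = ["q1", "q2", "q1", "q3", "q1", "q2"]  # hypothetical posed queries
THRESHOLD = 2                                      # access-frequency cutoff

freq = Counter(query_log)
# Queries at or above the threshold become candidates for a precomputed MV;
# similar high-frequency queries would then be clustered together.
mv_candidates = [q for q, n in freq.items() if n >= THRESHOLD]
print(mv_candidates)  # ['q1', 'q2']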
Algorithm for Database client view
The steps of the algorithm are as below:
I) Generation of a random set of records for the given tables in the database by a record generator.
II) Extraction or generation of all possible sets of queries resolved by the system on the above created records.
III) Optimization of the above set of queries according to their access frequency.
IV) Creation of MVs according to query access frequency, called the Threshold Value, and according to the Maximum Cluster Area Threshold %. According to step IV of MV creation, three types of MV are created, as follows:
1) Single query to multi-table MV: the response of a single query is obtained from multiple MV tables.
2) Single query to single-table MV: the response of a single query is obtained from a single MV table. [10]
3) Multiple queries to single-table MV: the response of multiple similar queries is obtained from a single MV table.
4) After creation of these three different types of MV, we store them.
Creating candidate views for materialization: in our approach, we assume that a data pattern is present in user queries, i.e. certain categories of data will be queried more frequently than others. Thus, it is very useful to extract these patterns, on the basis of which we create the candidate views for materialization. Extracting the attributes of interest: generally, in a mediation system, a global schema representing the domain of use is provided, and it is in terms of this schema that user queries are expressed. We analyze these queries to determine, among all the attributes of this schema, those in which users are interested, i.e. the most frequent attributes.

4. Algorithm Materialized_View_Creation
Begin
Step 1.
/* In this step, construct a (2m) matrix called the Important Attribute and Affinity Matrix (IAAM) from the array Total_Use and the matrix AAM to compute the degree of importance of attributes. */
Call method IAAM_Computation. [6]
Step 2.
/* In this step, the views are created one after another by taking the attributes from IAAM in descending order of importance. It takes as input the number of views the user wants to create and the corresponding size of each view. */
Call method Materialized_View_Creation.
End

5. AN ILLUSTRATIVE EXAMPLE
Consider an example of a query set in which 10 queries participate and 10 attributes are used in these queries; say the attributes are named A1, A2, ..., A10. Execution of the algorithm Attribute_Affinity_Scale is shown below.
A. Attribute_Affinity_Scale:

6. Selection of views to materialize
The views created in the first phase of our approach cannot all be materialized. Indeed, the space available for materialization, the frequency of updates and the cost of access to sources are critical:
- The frequency of change: views that rarely change are good candidates for materialization.
- The size of views: views of small size are favored for materialization over large ones.
- The availability of sources: views whose data resides in sources that are rarely available should be materialized.
- The cost of access: materializing views whose data resides in sources with a high access cost will improve system performance.
Thus, a view will be materialized if it satisfies at least two criteria, as illustrated in the sketch below. [7]
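In this rough illustration, each candidate view is scored on the four criteria and selected when two or more are satisfied; the candidate data, thresholds and field names are hypothetical.

# Hypothetical candidate views with the four properties discussed above.
candidates = [
    {"name": "v_sales", "update_freq": 0.1, "size_mb": 20,
     "source_availability": 0.4, "access_cost": 9.0},
    {"name": "v_stock", "update_freq": 5.0, "size_mb": 900,
     "source_availability": 0.99, "access_cost": 1.0},
]

def criteria_met(view):
    """Count how many of the four selection criteria the view satisfies
    (the thresholds are illustrative)."""
    met = 0
    met += view["update_freq"] < 1.0          # rarely changes
    met += view["size_mb"] < 100              # small view
    met += view["source_availability"] < 0.5  # source rarely available
    met += view["access_cost"] > 5.0          # expensive source access
    return met

to_materialize = [v["name"] for v in candidates if criteria_met(v) >= 2]
print(to_materialize)  # ['v_sales']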


7. CONCLUSIONS
Thus the paper proposes an algorithm for the materialized view design problem, i.e., how to select the set of views to be materialized so that the cost of processing a set of queries and the storage space required to store the data for the materialized views are minimized. This approach relies on analyzing the queries so as to derive common intermediate results, which saves time and eliminates the need to create the same MV repeatedly for a query. The proposed algorithm for determining a set of materialized views is based on the idea of reusing temporary results from the execution of the global queries. The cost model takes into consideration both query access frequencies and the % threshold. The work presented here is the first stage of research in selecting queries with high access frequencies, clustering them, and creating materialized views for the same. These high-access-frequency queries are further analyzed for the required cluster area to create MVs.
8. REFERENCES
[1] A. Gong, Weijing Zhao, "Clustering-based Dynamic Materialized View Selection Algorithm," Proceedings of the Fifth International Conference on Fuzzy Systems and Knowledge Discovery, China, 2008, pp. 391-395.
[2] Jian Yang, Kamalakar Karlapalem, Qing Li, "Algorithms for materialized view design in data warehousing environment."
[3] K. Aouiche, P.-E. Jouve, and J. Darmont, "Clustering-Based Materialized View Selection in Data Warehouses," Technical Report, University of Lyon, 2007.
[4] Shukla A, Deshpande PM, Naughton JF, "Materialized view selection for multidimensional datasets," Proceedings of the 24th International
[5] Conference on VLDB, Morgan Kaufmann Publishers, San Francisco, 1996, pp. 51.
[6] Shukla A, Deshpande PM, Naughton JF, "Materialized View Selection for Multidimensional Datasets," Proceedings of the 24th VLDB Conference, 1998, pp. 488-499.
[7] "Dynamic Materialized View Selection Algorithm," Proceedings of the Fifth International Conference on Fuzzy Systems and Knowledge Discovery, China, 2008, pp. 391-395.
[8] Hadj Mahboubi, Kamel Aouiche and Jérôme Darmont, "Materialized View Selection by Query Clustering in XML Data Warehouses," Fourth International Conference on Computer Science and Information Technology, Jordan.
[9] Gupta H, Harinarayan V, Rajaraman A, et al., "Index Selection for OLAP," Proceedings of the International Conference on Data Engineering, 1997, pp. 208-219.
[10] Mistry H, Roy P, Sudarshan S, et al., "Materialized view selection and maintenance using multi-query optimization," Proceedings of SIGMOD '01, 2001, pp. 307-3; S. Agarawal, S. Chaudhuri, V. Narasayya, "Automated Selection of Materialized Views and Indexes for SQL Databases," Proceedings of the 26th International Conference on Very Large Databases, Cairo, Egypt, 2000.
[11] P.-A. Larson, Jingren Zhou, "Efficient Maintenance of Materialized Outer-Join Views," 23rd International Conference on Data Engineering (ICDE 2007), Istanbul, Turkey, April 2007.
[12] Xia Sun, Wang Ziqiang, "An Efficient Materialized Views Selection Algorithm Based on PSO," Proceedings of the International Workshop on Intelligent Systems and Applications, ISA 2009, Wuhan, China, May 2009.
[13] S.H. Talebian, S.A. Kareem, "Using Genetic Algorithm to Select Materialized Views Subject to Dual Constraints," International Conference on Signal Processing Systems, Singapore, May 2000.
9. AUTHOR PROFILE
Prof. Ramanand Samdekar received the Master of Technology in Computer Science and Engineering from Rashtrasant Tukadoji Maharaj Nagpur University, Nagpur. Currently he is an Assistant Professor at S.B. Jain Institute of Technology Management & Research, Nagpur, India. He has published three papers in international journals, two papers in national journals and one in a national conference. He has 7 years of teaching experience, and his fields of specialization are software development, data mining, software systems and network security.


IMPLEMENTATION OF IMAGE MOSAIC USING PHASE CORRELATION AND HARRIS OPERATOR
1 MS. KANCHAN S. TIDKE, 2 PRITAM H. GOHATRE
1 GH Raisoni, Amravati, India. k10tidke@gmail.com
2 LAMIT, Dhamangaon (Rly), India. pritamgohatre@gmail.com

ABSTRACT: Image mosaicing is a technique used to composite two or more overlapped images into a seamless wide-angle image through a series of processing steps, and it is widely used in remote sensing, military applications, etc. When taking such photos, it is difficult to make a precise registration due to differences in rotation, exposure and location. Image mosaic techniques are widely used in remote sensing, medical imaging, military purposes and so on. Nowadays, many smartphones are equipped with a mosaicing application which helps the user in many different ways. Image mosaicing techniques can be broadly classified into feature-based and frequency-based techniques. Feature-based methods use the most-similarity principle among images to get the parameters with the help of calculating a cost function. Methods based on the frequency domain transform the image from the spatial domain to the frequency domain, and get the relationships of translation, rotation and zoom factor through the Fourier transformation. In the frequency domain there are methods like phase correlation, the Walsh transform, etc.
Keywords: Image mosaic, remote sensing, medical imaging, feature-based method, frequency domain transforms, phase correlation, Walsh transform.

1. INTRODUCTION
Algorithms for processing images and mosaicing them into seamless photo-mosaics are among the oldest and most widely used in computer vision. Frame-rate image alignment is used in every camcorder that has an image stabilization feature. Image stitching algorithms create the higher-resolution photo-mosaics used to produce today's digital maps and satellite photos. They also come bundled with most digital cameras currently being sold, and can be used to create beautiful ultra wide-angle panoramas.
In day-to-day life and work there is sometimes a need for wide-angle, high-resolution panoramic images, which ordinary camera equipment cannot produce. However, capturing the whole scene directly is often not feasible, given issues such as the need for professional photographic equipment, the high price of maintenance, convenience of operation, the lack of technical personnel and the unsuitability for general use; hence the use of image mosaicing techniques has been put forward.
2. MOTIVATION
Currently the image mosaicing technique has become a popular computer graphics research topic. Image mosaics have been efficiently and precisely applied to areas such as industry, the military and health care. The technique of image mosaic for restoring images with a larger visual angle and more reality plays an essential role in detecting more information from an image. In fact, due to the limits of objective conditions, i.e. equipment or weather, images are usually unable to reflect the full scene, which makes further processing of those images more difficult. The general task of image mosaic is to build the images by aligning a series of them that overlap in space. Compared with single images, scene images built in this way are usually of higher resolution and larger vision.
Image mosaic aims to combine a set of images, normally overlapped, to form a single image, as shown in the following figures.
Figure 1: An overview of Image Mosaicing. (a) First input image (b) Second input image (c) & (d) Mosaiced images by different mosaic techniques.
Figures (a) and (b) are the input images, while Figures (c) and (d) are the mosaiced images.
3. OBJECTIVE
In this work we have used a technique which combines both the feature-based method and the frequency-domain method for image mosaicing. The feature-based method used is Harris corner detection, and the frequency-domain method used is the Fourier-transform-based cross-correlation, or phase correlation, method.
4. LITERATURE SURVEY
The original image alignment algorithm was the Lucas-Kanade algorithm. The goal of Lucas-Kanade is to align a template image T(x) to an input image I(x), where x is a column vector containing the pixel coordinates. If the Lucas-Kanade algorithm is being used to compute optical flow or to track an image patch from time to time, the template is an extracted sub-region (a window, perhaps) of the image [1].


Algorithms for aligning images and stitching them into seamless photo-mosaics are among the oldest and most widely used in computer vision. Frame-rate image alignment is used in every camcorder that has an image stabilization feature. Image stitching algorithms create the high-resolution photo-mosaics used to produce today's digital maps and satellite photos. They also come bundled with most digital cameras currently being sold, and can be used to create beautiful ultra wide-angle panoramas.
An early example of a widely used image registration algorithm is the patch-based translational alignment (optical flow) technique developed by Lucas and Kanade [1]. Variants of this algorithm are used in almost all motion-compensated video compression schemes such as MPEG [3]. Similar parametric motion estimation algorithms have found a wide variety of applications, including video summarization [4][5], video stabilization [8], and video compression [9][10]. More sophisticated image registration algorithms have also been developed for medical imaging and remote sensing. In the photogrammetric community, more manually intensive methods based on surveyed ground control points or manually registered tie points have long been used to register aerial photos into large-scale photo-mosaics [11]. One of the key advances in this community was the development of bundle adjustment algorithms that could simultaneously solve for the locations of all of the camera positions, thus yielding globally consistent solutions [12]. One of the recurring problems in creating photo-mosaics is the elimination of visible seams, for which a variety of techniques have been developed over the years [13]-[17].
In film photography, special cameras were developed at the turn of the century to take ultra wide-angle panoramas, often by exposing the film through a vertical slit as the camera rotated on its axis [18]. In the mid-1990s, image alignment techniques started being applied to the construction of wide-angle seamless panoramas from regular hand-held cameras [19]-[22]. More recent work in this area has addressed the need to compute globally consistent alignments [23]-[25], the removal of ghosts due to parallax and object movement [26][27], and dealing with varying exposures. These techniques have spawned a large number of commercial stitching products, for which reviews and comparisons can be found on the Web.
While most of the above techniques work by directly minimizing pixel-to-pixel dissimilarities, a different class of algorithms works by extracting a sparse set of features and then matching these to each other. Feature-based approaches have the advantage of being more robust against scene movement and are potentially faster, if implemented the right way. Their biggest advantage, however, is the ability to recognize panoramas, i.e., to automatically discover the adjacency (overlap) relationships among an unordered set of images, which makes them ideally suited for fully automated stitching of panoramas taken by casual users.
In 2011, at the University of Victoria, Canada, in the Department of Electrical and Computer Engineering, Ioana S. Sevcenco, Peter J. Hampton and Pan Agathoklis proposed a method for seamless stitching of images based on Haar wavelet 2D integration [28].
Recently, Chengcheng Liu and Yong Shi proposed the SIFT algorithm for image registration. SIFT works by judging the feature points of local extrema, combined with neighbourhood information describing the feature points to form a feature vector, in order to build the matching relationship between the feature points.
According to the comparison and analysis above, and aiming at mosaics between images that have a larger scale difference, we try to synthesize the advantages of both frequency-domain processing and feature-based registration: a new robust method combining phase correlation and the Harris corner is proposed. We can get the factors of translation and zoom from the cross-power spectrum in order to optimize the Harris detection. The feature detection can then be restricted to the overlapped area, to avoid wasting resources on irrelevant areas when we do the search work. More importantly, this method can eliminate the non-adaptive weakness caused by scale change. It is superior to SIFT and the original Harris algorithm in terms of calculation speed and applicability.
5. PROBLEM DEFINITION
Keeping the following objectives in mind, we expect the best results from this approach to mosaicing:
- To propose a better mosaicing method which can stitch scattered images of the same scene (or target) together, so as to restore an image (or target) without losing the prior information in it.
- To increase accuracy and reduce the time needed to mosaic the images, which shows better efficiency compared to other mosaicing techniques.
6. METHODOLOGY
In order to improve on the Harris corner method, we present an auto-adjusted algorithm of image size based on phase correlation. First, we detect the zoom relationship and translation coefficients between the images and modulate the unregistered image's scale to the same level as the original image. We obtain the Region of Interest (ROI) according to the translation parameter, then pre-treat the images and mark the interest points in that area using the improved Harris corner operator. Secondly, we adopt Normalized Cross-Correlation (NCC) to wipe out the mismatched points preliminarily after the edging process, and get the final precise transformation matrix. At last, we use a method of weighted averaging to obtain a smooth mosaic image. The experimental results have shown that the setting of the ROI and the handling of the edges could cut the time down to about only half of the time consumed by SIFT. Besides, the scale difference between the images could grow from 1.8 to 4.7 and still eventually yield a clear and stable mosaic result.
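A small sketch of the NCC test used to discard mismatched point pairs follows, assuming we compare fixed-size grayscale patches around each tentative correspondence; the acceptance threshold is illustrative.

import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized patches;
    values near 1 indicate a good match."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

# A tentative pair would be kept only if, e.g., ncc(pa, pb) > 0.8.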
The translation, scale and rotation in the available set of images are handled in the following way. Initially, the phase correlation algorithm is used to calculate the cross-power spectrum for registration of the images, and this is used to get the translation factor. For images that have relative relationships in location and scale, we can also get the zoom factor and rotation angle through a series of coordinate transforms.
Feature extraction method:
The original Harris corner detection method has the disadvantage that, even though it is robust to illumination changes and rotations, it is very sensitive to variations of image size. In addition, by doing a direct corner check on images whose textures are dense or which have abundant details, we would surely get duplicate features in a local area. Inevitably, we must do extra work to extract and register the points, including the useless ones. So additionally preprocessing the image before extraction offers a possibility to get more stable features. The improvement is done in the following way:

Figure 2: Flowchart to compute the factors

Step 1: Get the shift and zoom factors with the help of the phase correlation calculation.
Step 2: Modulate the unregistered image according to the zoom factor obtained from step 1 to get a pair of images with the same size.
Step 3: Ascertain the ROI (Region of Interest) between the images.
Step 4: Preprocess the image before further work. Edge detection can reduce the search area and greatly cut the matching time down.
7. IMPLEMENTATION
The phase correlation algorithm uses the cross-power spectrum to register images and is used to get the translation factor initially. Imagine there are two images $I_1$ and $I_2$, where $I_2$ is a translated copy of $I_1$:

$I_2(x, y) = I_1(x - x_0, y - y_0)$  (1)

Taking the Fourier transformation, where $F_1$ and $F_2$ are the Fourier transforms of $I_1$ and $I_2$:

$F_2(u, v) = F_1(u, v)\, e^{-j 2\pi (u x_0 + v y_0)}$  (2)

The cross-power spectrum is:

$\dfrac{F_1(u, v) F_2^{*}(u, v)}{\lvert F_1(u, v) F_2^{*}(u, v) \rvert} = e^{j 2\pi (u x_0 + v y_0)}$  (3)

We can get an impulse function $\delta(x - x_0, y - y_0)$ for the translation invariants $x_0$ and $y_0$ by applying the inverse Fourier transform to equation (3).
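Equations (1)-(3) translate almost directly into a few lines of NumPy. This is an illustrative sketch, not the authors' MATLAB code; note that peak coordinates larger than half the image size correspond to negative shifts because of the FFT's wrap-around.

import numpy as np

def phase_correlation(img1, img2):
    """Estimate the translation between two equally sized grayscale
    images from the cross-power spectrum of equations (1)-(3)."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12  # normalize the magnitude
    impulse = np.fft.ifft2(cross_power).real    # peak at the translation
    dy, dx = np.unravel_index(np.argmax(impulse), impulse.shape)
    return dx, dy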


Feature Extraction: Extraction of Harris Corners
In this paper, we obtain the corner response function from the ratio of the determinant and trace of the matrix M, which avoids the randomness of choosing the scale factor, compared to the method based on the difference of the two values. Besides, as the experiments show, we obtain much more stable features along with a speed-up of the procedure.
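The det/trace response can be sketched from the standard structure tensor M as below; the gradient operator, window size and smoothing are illustrative choices, not the authors' implementation.

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_ratio(img, sigma=1.0, eps=1e-9):
    """Corner response det(M)/trace(M) per pixel, the ratio variant that
    avoids choosing the scale factor k of the classic det - k*trace^2."""
    Ix = sobel(img.astype(float), axis=1)  # horizontal gradient
    Iy = sobel(img.astype(float), axis=0)  # vertical gradient
    # Structure tensor entries, smoothed over a Gaussian window
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det / (trace + eps)

# Interest points are local maxima of harris_ratio(img) above a threshold.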
A set of tentative correspondences (pairs of matched points between the two images) is then produced.

8. RESULTS
Figure 3: Results of mosaic (MATLAB output).
Fig. 3 shows an experiment to get the range shift of the given input images, Image A and Image B, using the phase correlation algorithm. The preset shift between image A and image B is (58, 28), so we can get the shift value via the location of the maximum value.
Experimental result: we load three distinct images one by one for the mosaic (first, second and third input images); the final image after mosaicing is then produced.

9. CONCLUSION

An approach for image mosaic based on phase correlation and the Harris operator is obtained through this project. First, the scaling and translation relationship is gained according to the correlation method known as phase correlation. Then the unregistered image is adjusted and the ROI scope of matching is kept limited, all according to the derived factors. Finally, the feature points are detected and matched just in this area, based on the improved Harris corner. We comprehensively apply the advantages of the spatial and frequency domains to conquer Harris's biggest inadequacy, not possessing the scale-invariant quality, and we have also enhanced robustness. As a result, the setting of the ROI and the adoption of preprocessing avoid useless extraction and registration, which leads to additional speed-ups and improvement of the precision.

10. FUTURE SCOPE
While image stitching is by now a fairly mature field with a variety of commercial products, there remain a large number of challenges and open extensions. One of these is to increase the reliability of fully automated stitching algorithms. As discussed earlier, it is difficult to simultaneously avoid matching spurious features or repeated patterns while also being tolerant to large outliers such as moving people. Advances in semantic scene understanding could help resolve some of these problems, as could better machine learning techniques for feature matching and validation. The problem of parallax has also not been adequately solved. For small amounts of parallax, deghosting techniques can often adequately disguise these effects through local warping and careful seam selection. For high-overlap panoramas, concentric mosaics (Shum and He 1999), panoramas with parallax (Li et al. 2004b) and careful seam selection (with potential user guidance) (Agarwala et al. 2004) can be used. The most challenging case is limited-overlap panoramas with large parallax, since the depth estimates needed to compensate for the parallax are only available in the overlap regions.

11. REFERENCES
[1] Lucas, B. D. and Kanade, T. (1981). An iterative image registration technique with an application in stereo vision. In Seventh International Joint Conference on Artificial Intelligence (IJCAI-81), pages 674-679.
[2] Brown, L. G. (1992). A survey of image registration techniques. Computing Surveys, 24(4), 325-376.
[3] D. Le Gall, "MPEG: A video compression standard for multimedia applications," Communications of the ACM, vol. 34, no. 4, pp. 44-58, April 1991.
[4] J. R. Bergen, P. Anandan, K. J. Hanna, and R. Hingorani, "Hierarchical model-based motion estimation," in Second European Conference on Computer Vision (ECCV'92), (Santa Margherita Ligure, Italy), pp. 237-252, Springer-Verlag, May 1992.
[5] M. Irani and P. Anandan, "Video indexing based on mosaic representations," Proceedings of the IEEE, vol. 86, no. 5, pp. 905-921, May 1998.
[6] R. Kumar, P. Anandan, M. Irani, J. Bergen, and K. Hanna, "Representation of scenes from collections of images," in IEEE Workshop on Representations of Visual Scenes, (Cambridge, Massachusetts), pp. 10-17, June 1995.
[7] L. Teodosio and W. Bender, "Salient video stills: Content and context preserved," in ACM Multimedia 93, (Anaheim, California), pp. 39-46, August 1993.
[8] M. Hansen, P. Anandan, K. Dana, G. van der Wal, and P. Burt, "Real-time scene stabilization and mosaic construction," in IEEE Workshop on Applications of Computer Vision (WACV'94), (Sarasota), pp. 54-62, December 1994. IEEE Computer Society.
[9] M. Irani, S. Hsu, and P. Anandan, "Video compression using mosaic representations," Signal Processing: Image Communication, vol. 7, pp. 529-552, 1995.
[10] M.-C. Lee et al., "A layered video object coding system using sprite and affine motion model," IEEE Transactions on Circuits and Systems for Video Technology, vol. 7, no. 1, pp. 130-145, February 1997.
[11] C. C. Slama, ed., Manual of Photogrammetry. Fourth Edition, Falls Church, Virginia, 1980. American Society of Photogrammetry.
[12] B. Triggs et al., "Bundle adjustment - a modern synthesis," in International Workshop on Vision Algorithms, (Kerkyra, Greece), pp. 298-372, Springer, September 1999.
[13] A. Agarwala et al., "Interactive digital photomontage," ACM Transactions on Graphics, vol. 23, no. 3, pp. 292-300, August 2004.
[14] J. Davis, "Mosaics of scenes with moving objects," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'98), (Santa Barbara), pp. 354-360, June 1998.
[15] D. L. Milgram, "Computer methods for creating photomosaics," IEEE Transactions on Computers, vol. C-24, no. 11, pp. 1113-1119, November 1975.
[16] D. L. Milgram, "Adaptive techniques for photomosaicking," IEEE Transactions on Computers, vol. C-26, no. 11, pp. 1175-1180, November 1977.
[17] S. Peleg, "Elimination of seams from photomosaics," Computer Vision, Graphics, and Image Processing, vol. 16, pp. 1206-1210, 1981.
[18] J. Meehan, Panoramic Photography. Watson-Guptill, 1990.


12. AUTHOR PROFILE
Ms. Kanchan S. Tidke received the Master of Engineering in Electronics and Tele-communication from Sant Gadge Baba Amravati University, Amravati. Currently she is an Assistant Professor at Government Polytechnic, Amravati, India. She has published two papers in international journals. She has 4 years of teaching experience, and her fields of specialization are image processing, VLSI, embedded systems and networking.
Pritam H. Gohatre received the Master of Technology in System Software from Rajiv Gandhi Technical University, Bhopal. Currently he is an Assistant Professor at LAMIT, Dhamangaon, India. He has published two papers in international journals. He has 7 years of teaching experience, and his fields of specialization are software development, image processing and networking.


TO STUDY THE TYPES OF OPEN-SOURCE APPLICATIONS OF ROUTING SOFTWARE
HEMANT GADBAIL 1, ROSHAN KALINKAR 2
Department of Electronics & Tele-Communication Engineering 1, 2
HVPM College of Engineering & Technology, Amravati, India
ABSTRACT: Routing protocols are used by routers to dynamically learn the remote paths to various sets of networks and to send data between the networks. These protocols include the Routing Information Protocol, Enhanced Interior Gateway Routing Protocol, Open Shortest Path First, and Border Gateway Protocol. In other words, a routing protocol describes how routers communicate with each other, spreading the information that enables them to select routes between any two nodes on a network. Routing algorithms determine the specific choice of route. Each router has a priori knowledge only of the networks attached to it directly. A routing protocol shares this information first among immediate neighbors, and then throughout the network. Many routing software implementations exist for most of the common routing protocols. This open-source application software is used for rapidly developing well-designed networks. In this paper, we discuss various types of open-source routing software applications.
Keywords: routing protocol, router, Routing Information Protocol, Enhanced Interior Gateway Routing Protocol, Open Shortest Path First, Border Gateway Protocol, routing algorithms

1. INTRODUCTION
A routing protocol is a generic term that refers to a protocol used by a router to calculate the appropriate path over which data is transmitted. The routing protocol also specifies how routers in a network share information with each other and report changes. The routing protocol enables a network to make dynamic adjustments to its conditions, so routing decisions do not have to be predetermined and static. A routing protocol is used to dynamically learn routing information so routers know where to send packets; the only other option is to manually define all routes within a network, which would be very impractical. What is needed in networking is a stable, feature-rich routing platform fostering innovation and the fast development and deployment of routing protocol innovations in trial and production networks, without the bottleneck of incumbent equipment vendors. There are several open-source applications used for establishing a network; these software suites help and guide the use of routing protocols in a network. In this paper, we discuss various types of open-source routing software applications.
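As a toy illustration of how a distance-vector protocol such as RIP converges on routes, the sketch below runs a Bellman-Ford relaxation over a small hypothetical topology; it is not taken from any of the routing suites discussed in this paper.

import math

links = {  # cost of each direct link (hypothetical topology)
    ("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 5, ("C", "D"): 1,
}
nodes = {n for edge in links for n in edge}
cost = {}  # symmetric neighbor table built from the links
for (u, v), c in links.items():
    cost.setdefault(u, {})[v] = c
    cost.setdefault(v, {})[u] = c

def distance_vector(source):
    """Each router starts knowing only its attached links and repeatedly
    relaxes routes advertised by neighbors (Bellman-Ford)."""
    dist = {n: math.inf for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):        # at most |V|-1 rounds
        for u, neighbors in cost.items():
            for v, c in neighbors.items():
                if dist[u] + c < dist[v]:  # neighbor advertises a better path
                    dist[v] = dist[u] + c
    return dist

print(distance_vector("A"))  # costs: A=0, B=1, C=3, D=4 (key order may vary)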
2. TYPES OF OPEN-SOURCE APPLICATIONS OF ROUTING SOFTWARE
In this section we discuss some types of open-source routing software applications.
[1] BIRD INTERNET ROUTING DAEMON (BIRD): BIRD is an open-source implementation of an Internet protocol suite routing daemon for UNIX-like systems. BIRD was developed as a school project at the Faculty of Mathematics and Physics, Charles University, Prague, with major contributions from developers Martin Mares, Pavel Machek and Ondrej Filip [1]. BIRD supports IPv4 and IPv6 (as separate daemons), multiple routing tables, and the BGP, RIP and OSPF routing protocols, as well as statically defined routes. Its design differs significantly from the better-known routing daemons, GNU Zebra and Quagga [1]. BIRD is included in many Linux distributions such as Debian, Ubuntu and Fedora. BIRD implements an internal routing table to which the supported protocols connect. Most of these protocols import network routes into this internal routing table and also export network routes from it to the given protocol [2]. In this way, information about network routes is exchanged among different routing protocols. BIRD also supports multiple internal routing tables and multiple instances of the supported protocol types. Protocols may be connected to different internal routing tables; these internal routing tables may exchange information about the network routes they contain (controlled by filters), and each of them may be connected to a different kernel routing table, thus allowing for policy routing [2].
Configuration is done by editing the configuration file and telling BIRD to reconfigure itself. BIRD changes to the new configuration without the need to restart the daemon itself and restarts reconfigured protocols only if necessary. There is also an option to do a soft reconfiguration, which doesn't restart protocols but may leave some stale information, such as changed filters not filtering out already exported network routes.
[2] QUAGGA: Quagga is a network routing software suite providing implementations of Open Shortest Path First (OSPF), Routing Information Protocol (RIP), Border Gateway Protocol (BGP) and IS-IS for Unix-like platforms, particularly Linux, Solaris, FreeBSD and NetBSD [4]. Quagga is distributed under the terms of the GNU General Public License (GPL) [4]. The Quagga architecture consists of a core daemon (zebra) which acts as an abstraction layer to the underlying Unix kernel and presents the Zserv API over a Unix-domain socket or TCP socket to Quagga clients [4]. The Zserv clients typically implement a routing protocol and communicate routing updates to the zebra daemon [3].
[3] GNU ZEBRA: Zebra is a routing software package that provides TCP/IP-based routing services with support for routing protocols such as RIP, OSPF and BGP [6]. Zebra also supports the special BGP Route Reflector and Route Server behaviors [5][6]. In addition to traditional IPv4 routing protocols, Zebra also supports IPv6 routing protocols. With an SNMP daemon that supports the SMUX protocol, Zebra provides routing protocol management information bases. Zebra uses an advanced software architecture to provide a high-quality, multi-server routing engine [5]. Zebra has an interactive user interface for each routing protocol and supports common client commands [6]. Due to this design, new protocol daemons can easily be added. The Zebra library can also be used as a program's client user interface. Zebra is distributed under the GNU General Public License [5]. The idea for Zebra originally came from Kunihiro Ishiguro, after he realized the need for quality routing software [5].
[4] OpenBGPD: OpenBGPD is a server software program that allows general-purpose computers to be used as routers. It is a UNIX system daemon that provides a free, open-source implementation of the Border Gateway Protocol version 4 [9]. This allows a machine to exchange routes with other systems that speak BGP. OpenBGPD is developed by Henning Brauer and Claudio Jeker as part of the OpenBSD project [9]. OpenOSPFD, developed by Esben Nørby, is a companion daemon of OpenBGPD that implements the Open Shortest Path First protocol [8]. The suite was developed as an alternative to packages such as Quagga, a Linux-focused routing suite which is licensed under the GPL [8][9]. The design goals of OpenBGPD include being secure, reliable, and lean enough for most users, both in size and memory usage [8].
[5] OPENOSPFD: OpenOSPFD is a BSD-licensed implementation of the Open Shortest Path First protocol [9]. It is network routing software which allows ordinary general-purpose computers to be used as routers exchanging routes with other computer systems speaking the OSPF protocol [9].
[6] XORP: XORP is an open-source Internet Protocol routing software suite. The name is derived from eXtensible Open Router Platform [11]. It supports OSPF, BGP, RIP, PIM, IGMP and OLSR. The product is designed on principles of software modularity and extensibility, and aims at exhibiting stability and providing the feature requirements for production use while also supporting networking research [10][12].


3. COMPARATIVE ANALYSIS OF TYPES OF OPEN-SOURCE APPLICATIONS OF ROUTING SOFTWARE
In this section we show a comparative analysis of the types of open-source routing software.

1. BIRD Internet Routing Daemon (BIRD)
   Developed at: Charles University in Prague
   Released: April 20, 2015
   License type: GNU General Public License
   Platform: Unix, Linux
   Type: Routing
   Protocols used: RIP, RIPv2, RIPng, OSPFv2, OSPFv3, BGPv4, BGPv6
   Website: www.bird.network

2. Quagga
   Developed at: International Computer Science Institute in Berkeley, California
   Released: March 7, 2015
   License type: GNU General Public License
   Platform: Unix, Linux, Solaris, FreeBSD
   Type: Routing
   Protocols used: RIP, RIPv2, RIPng, OSPFv2, OSPFv3, IS-IS (v4 only), BGPv4, BGPv6, Babel, SNMP
   Website: www.quagga.org

3. GNU Zebra
   Developed by: Kunihiro Ishiguro
   Released: September 8, 2005
   License type: GNU General Public License
   Platform: Linux
   Type: Routing
   Protocols used: IPv4, RIP, OSPF, IPv6, SNMP, SMUX
   Website: www.gnu.org/software/zebra

4. OpenBGPD
   Developed at: The OpenBSD Project
   Released: 4.6 / November 1, 2009
   License type: ISC
   Platform: OpenBSD, FreeBSD
   Type: Border Gateway Protocol
   Protocols used: BGPv4, BGPv6
   Website: www.openbgpd.org

5. OpenOSPFD
   Developed at: The OpenBSD Project
   Released: 4.6 / November 1, 2009
   License type: ISC
   Platform: OpenBSD, FreeBSD
   Type: Open Shortest Path First
   Protocols used: OSPF
   Website: www.openbgp.org

6. XORP
   Developed at: International Computer Science Institute in Berkeley, California
   Released: July 2004
   License type: GNU GPLv2, GNU LGPLv2
   Platform: Linux, BSD, Windows
   Type: Routing
   Protocols used: RIP, RIPv2, RIPng, OSPFv2, OSPFv3, BGPv4, BGPv6, IGMP, MLD, PIM-SM, OLSR
   Website: www.xorp.org


4. CONCLUSION
A routing protocol is used by a router to determine the appropriate path over which data is transmitted. The routing protocol also specifies how routers in a network share information with each other and report changes. In this paper, we discussed various types of open-source routing software. With the help of this routing software we established a network, and the software also helped to guide how routing protocols are used in the network.
5. REFERENCES
[1] www.bird.network.
[2] Davidson, Andy (2009-05-28). "LONAP's Route Servers" (PDF). UKNOF13. Retrieved 30 July 2011.
[3] Benedikt Stockebrand. IPv6 in Practice. Springer.
[4] www.quagga.org.
[5] IP Infusion, "IP Infusion was founded in 1999 by Kunihiro Ishiguro and Yoshinari Yoshikawa [...] Mr. Ishiguro's background as the co-founder and developer of open source Zebra." http://www.ipinfusion.com/about.
[6] www.gnu.org/software/zebra.
[7] Citrix, "The NetScaler routing suite is based on ZebOS, the commercial version of GNU Zebra."
[8] A Secure BGP Implementation.
[9] www.openbgpd.org.
[10] Mark Handley (2000-11-30). "Proposal to Develop an Extensible Open Router Platform".
[11] www.xorp.org.
[12] "ICSI Spins out Venture-Backed XORP, Inc.". International Computer Science Institute. 2008-07-24.

6. AUTHOR PROFILE
Hemant Gadbail received the Bachelor of Engineering in Electronics and Tele-Communication from HVPM College of Engineering & Technology, Amravati, India.
Roshan Kalinkar is pursuing the Bachelor of Engineering in Electronics and Tele-Communication at HVPM College of Engineering & Technology, Amravati, India.


UNIVERSAL GATE APPLICATION FOR FLOATING POINT ARITHMETIC LOGIC UNIT
PROF. VISHWAJIT K. BARBUDHE
Jagdambha College of Engineering and Technology, Yavatmal, India
vishwajit.k.barbudhe@gmail.com
ABSTRACT: A floating point arithmetic and logic unit design using pipelining is proposed. By using a pipeline in the ALU design, the ALU provides high performance. With pipelining plus the parallel processing concept, the ALU executes multiple instructions simultaneously. The floating point ALU unit is formed by a combination of arithmetic modules (addition, subtraction, multiplication, division) and a universal gate module. Each module is divided into sub-modules. Bit selection determines which operation takes place at a particular time. The design is implemented and validated using VHDL simulation in the Xilinx 13.1i software.
Keywords: ALU - Arithmetic Logic Unit, Top-Down Design, Validation, Floating Point, Test-Vector.

1. INTRODUCTION

Floating point describes a system for representing numbers that would be too large or too small to be represented as integers. Floating point representation is able to retain its resolution and accuracy compared with fixed point representation. The IEEE specified a standard for floating-point representation, known as IEEE 754, in 1985.
The IEEE 754 single precision floating point format consists of three fields:
Sign bit: 1 bit. It is 1 for a negative number and 0 for a positive number.
Exponent: 8 bits. The exponent represents a power of two.
Mantissa: the final portion of the word (23 bits) is the significand, also called the mantissa; it is the fractional part.
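The split into these three fields can be made concrete with a few lines of Python; the sketch below (the function name is ours) unpacks the raw bit pattern of a single precision number:

import struct

def decode_ieee754(value):
    # Pack the float into its raw 32-bit single precision pattern
    (bits,) = struct.unpack(">I", struct.pack(">f", value))
    sign = bits >> 31                 # 1 bit: 1 for negative, 0 for positive
    exponent = (bits >> 23) & 0xFF    # 8 bits, biased by 127 (power of two)
    mantissa = bits & 0x7FFFFF        # 23 bit fractional part (significand)
    return sign, exponent, mantissa

# -6.25 = -1.5625 x 2^2: sign 1, exponent 127 + 2 = 129, mantissa 0x480000
print(decode_ieee754(-6.25))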
An arithmetic logic unit is a combinational network that performs arithmetic and logical operations on data. For computation, input data is given to the A.L.U. along with an operation code from the control unit; according to that code it computes the result. An ALU with floating point operations is called an FPU.
Pipelining plus parallel processing is used to execute multiple instructions simultaneously. The cycle time of the processor is reduced: if pipelining is used, the CPU arithmetic logic unit can be designed to run faster, which increases the overall performance of the system.
In the floating point ALU with universal logic gates we can perform addition, subtraction, multiplication and division operations as well as logical operations with less delay and less area.

2. LITERATURE REVIEW
In the first paper, a 16 bit floating point ALU is designed using pipelining, which is used to execute multiple instructions simultaneously. The top-down design approach is used: four arithmetic modules, addition, subtraction, multiplication and division, are combined to form a floating point ALU unit, and each module is divided into sub-modules. Two selection bits are combined to select a particular operation. The modules are independent of each other; all modules in the ALU design are realized using VHDL, the design functionality is validated through VHDL simulation, and all components and modules are successfully run through synthesis and simulation in Xilinx. The problem with this ALU is that the hardware complexity in terms of synthesis is high. [1]
A 32 bit floating point ALU is also designed using pipelining [2]. The design approach is the same as that of the 16 bit ALU: the concept of pipelining is used to execute multiple instructions simultaneously and the top-down approach is also used. It performs four operations: addition, subtraction, multiplication and division. The modules are independent of each other, all modules in the ALU design are realized using VHDL, design functionality is validated through VHDL simulation, and all components and modules are successfully run through synthesis and simulation in Xilinx. The problem in this ALU is again that the hardware complexity in terms of synthesis is high. [2]
A floating point arithmetic and logic unit design using pipelining is presented in [3]. By using a pipeline with the ALU design, the ALU provides high performance; with the pipelining concept the ALU executes multiple instructions simultaneously. The top-down design approach is used. The floating point ALU unit is formed by a combination of arithmetic modules (addition, subtraction, multiplication, division) and a logical operation module (AND, OR, NOT). Each module is divided into sub-modules, as shown in Fig. 1. Bit selection determines which operation takes place at a particular time. The design is validated using VHDL simulation in the Xilinx 12.1i software. The problem in this ALU is that the hardware complexity in terms of synthesis is high. [3]
A 32 bit floating point ALU is also designed in [4]. Floating point operations are hard to implement on Field Programmable Gate Arrays (FPGAs) because of the complexity of the algorithms. On the other hand, many scientific applications require floating point arithmetic because of the high accuracy of their calculations. In this paper an efficient implementation of an IEEE 754 single precision floating point arithmetic unit is designed in a Xilinx SPARTAN 3E FPGA. A VHDL environment for floating point arithmetic unit design using pipelining is used, which provides high performance. Pipelining is used to execute multiple instructions simultaneously. In the top-down design approach, four arithmetic modules, addition/subtraction, multiplication and division, are combined to form a floating point arithmetic unit. FP addition is implemented using Leading-One-Detector (LOD), Leading-One-Predictor (LOP) and two-path algorithms. In this ALU the clock period for the adder module is 33.159 ns (LOD), 28.358 ns (LOP) and 22.313 ns (two-path); for the FP multiplier it is 10.402 ns and for the FP divider 7.058 ns. The area in slices for the adder module is 694 (LOD), 731 (LOP) and 1020 (two-path); for the FP multiplier it is 272 and for the FP divider 185. Synthesis and simulation results are obtained using the Xilinx 13.1i platform. [4]

3. OVERALL ANALYSIS OF REPORTED WORK


From the overall analysis we can say that floating point ALUs were designed that performed arithmetic operations including addition, subtraction, multiplication and division. Those ALUs were designed using pipelining, because of which the speed of the ALU increases and it executes multiple instructions simultaneously, and they followed the top-down approach. The problem with those ALUs is more delay, more area, and greater hardware complexity in terms of synthesis. This conclusion is drawn in [1], [2] and [4]. After that, an ALU was designed that performed addition, subtraction, multiplication, division, AND, OR and NOT operations [3]; the same design method was used, and the same problem is present in this ALU as well. Our approach is to design a 32 bit floating point ALU with universal logic gates, which can perform addition, subtraction, multiplication, division and logical operations with less delay and less area.
4. PROBLEM DEFINITION
Earlier A.L.U.s performed addition, subtraction, multiplication and division; later, logical operations comprising the AND, OR and NOT gates were added, but the hardware complexity in terms of synthesis, delay and area is high, with less accuracy. To avoid this problem we are designing a 32 bit floating point A.L.U. that performs the arithmetic operations of addition, subtraction, multiplication and division together with a universal logic gate design, with high accuracy, high speed, less delay and less area.
5. PROPOSED METHODOLOGY
1) Designing of the multiplication module.
2) Designing of the addition/subtraction module.
3) Designing of the division module.
4) Designing of the universal gate module.
5) Study of the concepts of pipelining and parallel processing, which are used to execute multiple instructions simultaneously.
6) Comparison and study of the results.
6. PROBABLE OUTCOME
The proposed work will result in a 32 bit floating point A.L.U. with universal logic gates, which performs addition, subtraction, multiplication and division along with the universal logic gate design. It will meet the following specifications:
Reduced area
Less delay
7. IMPLEMENTED WORK
7.1 Pipelined Floating Point Multiplication Module
In this work a 32 bit pipelined floating point A.L.U. is designed with four stage pipelining. It takes two 32 bit numbers as input, and the result is a 32 bit number which is the product of those two numbers.
In floating point numbers, multiplication of two numbers is basically performed by adding their exponents and multiplying their mantissas. Multiplier Module: the floating point multiplier unit block diagram is illustrated in Fig. 1.

Figure 1: Block Diagram of Multiplier


1) Zero Detect: It is used to set a zero flag when either of the input operands is zero. This avoids unnecessary calculations throughout the multiplier module when a zero input is applied. The sign, exponent and mantissa of each input operand are separated to be manipulated differently throughout the multiplier module. To prepare the mantissas for the multiplication operation, the dropped implied bit of each mantissa is set to 1.
2) Add Exponents: To determine the resultant exponent, the exponents are added and a 127 bias is subtracted. The bias subtraction compensates for the bias having been added in both exponents. The result is fit into a 10 bit exponent to allow checking for overflow or underflow in the post multiply normalization step. Since the 8 bit exponent of a single precision floating point number is an unsigned number, the 10th bit set to '1' indicates an underflow; an overflow is detected when the 10th bit is '0' and the 9th bit is '1'.
3) Block Multiplier: Multiplies the two 24 bit mantissas of operands A and B. The bottleneck of the design is the 24*24 bit multiplier used to calculate the resulting 48 bit mantissa. To increase the maximum operating speed of the multiplier, the proposed design breaks up the 24*24 bit multiplication of operands A and B into nine 8*8 bit multiplications, where each mantissa is sliced into three 8-bit slices such that A=A2A1A0 and B=B2B1B0. Then B0 is multiplied with A2, A1 and A0; each of these three 8*8 bit multiplications gives a 16 bit result, and the three 16 bit results are properly combined to give the 32 bit result R0 of the 24*8 bit multiplication operation (i.e. A*B0). In a similar manner B1 and B2 are multiplied with A2, A1 and A0 to give R1 and R2. R1 and R2 are properly shifted and added to R0, giving the 48 bit result mantissa. The result sign is determined through a simple XOR operation.
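The slicing scheme can be modeled behaviorally in a few lines of Python. The sketch below is ours (in the actual hardware the nine 8*8 products are computed by parallel multiplier blocks rather than a loop); it reproduces the A=A2A1A0, B=B2B1B0 decomposition:

def mul24(a, b):
    # Slice each 24 bit mantissa into three 8 bit pieces: A2 A1 A0, B2 B1 B0
    A = [(a >> (8 * i)) & 0xFF for i in range(3)]
    B = [(b >> (8 * i)) & 0xFF for i in range(3)]
    result = 0
    for j in range(3):                # R0 = A*B0, R1 = A*B1, R2 = A*B2
        r = 0
        for i in range(3):            # three 8*8 bit products per slice of B
            r += (A[i] * B[j]) << (8 * i)
        result += r << (8 * j)        # shift R1 and R2 before adding to R0
    return result                     # the 48 bit product mantissa

# The decomposition agrees with a direct 24*24 bit multiplication:
assert mul24(0xC00000, 0xA00001) == 0xC00000 * 0xA00001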
4) Post Normalize: The implied bit is located and dropped. The 48 bit mantissa from the multiplication operation is then truncated to a 26 bit mantissa, which is the 23 bits assigned for the mantissa along with three extra bits to increase the accuracy of the rounding process. These three extra bits are the guard, round and sticky bits. The guard and round bits are just an extension added to the mantissa to allow for extra precision; the sticky bit is the logical OR-ing of all the truncated bits. The exponent is adjusted and checked for overflow or underflow, and the appropriate flag is set accordingly.
5) Rounding: The 26 bit resultant mantissa is rounded to 23
bits using the REN technique. After rounding, the exponent
is checked again for possible overflow. Finally, the sign,
exponent and mantissa are appended together to give the
single precision floating point multiplication result in the
IEEE format along with the overflow and underflow flags.
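Reading REN as round-to-nearest-even on the guard, round and sticky bits, the rounding decision can be sketched as follows (a minimal model; the function name and bit packing convention are ours):

def round_mantissa(man26):
    # man26 holds 23 mantissa bits followed by guard, round and sticky bits
    keep = man26 >> 3                    # the 23 bits to be kept
    guard = (man26 >> 2) & 1
    round_sticky = man26 & 0b11          # round and sticky bits together
    # Round up when more than half way, or on a tie if the kept LSB is odd
    if guard and (round_sticky or (keep & 1)):
        keep += 1                        # a carry here triggers the re-check
    return keep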


7.2 Adder/Subtractor Module:

In floating point addition (or subtraction), the exponents of the two floating point operands must be equalized. So in the adder/subtractor unit the exponent of the smaller number is incremented until both exponents are equal, and the mantissa of the smaller number is shifted right 'n' times, where 'n' is the difference between the large and small exponents. After the addition/subtraction operation is performed, the resultant mantissa is normalized using the LOD method. The LOD detects the most significant '1' by counting the number of zeros (nz) before the most significant '1'; the mantissa is then shifted left 'nz' times. Excess shifting is required before and after the addition/subtraction operation, in the pre-normalization of the smaller number and the post-normalization of the resultant mantissa respectively. To increase the maximum operating speed of the adder/subtractor unit, all the shift operations in the pre-normalization and post-normalization steps were performed through barrel shifting. Barrel shifting has the advantage of shifting the data by any number of bits in one operation, which makes it suitable for the shifting operations required in the adder/subtractor unit, which can be a shift by any number between 1 and 253 depending on the difference between the exponents of the input operands. The general block diagram of the LOD floating point adder/subtractor module is illustrated in Fig. 5.
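A minimal Python model of this alignment and LOD path is sketched below, assuming unsigned, already-unpacked operands whose 24 bit mantissas carry the implied leading 1; signs, rounding bits and IEEE special cases are left out, and the names are ours:

def fp_addsub(exp_a, man_a, exp_b, man_b, subtract=False):
    # Swap so operand A holds the larger magnitude (the swap step)
    if (exp_a, man_a) < (exp_b, man_b):
        exp_a, man_a, exp_b, man_b = exp_b, man_b, exp_a, man_a
    # Pre-normalize: barrel-shift the smaller mantissa right by the
    # exponent difference 'n' (a single operation in hardware)
    man_b >>= exp_a - exp_b
    man = man_a - man_b if subtract else man_a + man_b
    if man == 0:
        return 0, 0                      # the zero-output flag case
    # Post-normalize: the LOD counts zeros before the most significant '1';
    # shift left by 'nz' (negative nz means the addition carried out)
    nz = 24 - man.bit_length()
    man = man << nz if nz >= 0 else man >> -nz
    return exp_a - nz, man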

Figure 5: Block Diagram of Adder/Subtractor

Figure 2: RTL Schematic of Multiplier

Figure 3: RTL Schematic of Multiplier

1) Unpack: The sign, exponent and mantissa of both operands are separated. A flag, the a-equal-b flag, is set if both inputs are equal. The a-equal-b flag will be used, if the effective operation determined in the adder/subtractor module was subtraction, to set a flag indicating a zero output. This prevents unnecessary addition/subtraction and pre-normalization operations from taking place.
2) Swap: Inputs are swapped if necessary such that operand A carries the larger floating point number and operand B is the smaller operand to be pre-normalized. A swap flag is set if the operands were swapped, to be used in determining the effective operation in the adder/subtractor module.
3) Zero Detect: An appropriate flag is set if one or both input operands are zero. This helps avoid unnecessary calculations and normalizations when a zero operand is detected. The resultant exponent and the difference between the two exponents are determined here.
4) Pre-normalize: The smaller mantissa, of operand B, is pre-normalized, that is, it is shifted by the difference between the two input exponents. Three extra bits, the guard bit, the round bit and the sticky bit, are added to both mantissas to increase the accuracy of the performed operation (addition or subtraction) and to be used in the rounding process. The sticky bit is the logical OR-ing of any bits that are dropped during the pre-normalization of operand B.
5) Adder/Subtractor: The effective operation to be performed is calculated according to the signs of operands A and B, the input operation and the swap flag. The effective operation is performed, and the zero flag is updated if the effective operation is subtraction and the a-equal-b flag is set.
6) Post Normalize: The resultant mantissa is normalized after the leading one is detected using the LOD method. The resultant exponent is adjusted accordingly.
7) Rounding: The resultant mantissa is rounded using the REN technique. The final output is given in IEEE format.
In the 32 bit floating point adder/subtractor we give two 32 bit numbers, and a clock signal is also given for synchronization purposes. An ADD_SUB signal is given: when this signal is 1 it performs addition, and when the ADD_SUB signal is 0 it performs subtraction.

Figure 6: RTL Schematic of Adder/Subtractor


8. CONCLUSION
A high speed 32 bit floating point multiplier based on the IEEE 754 single precision format was developed using a four stage pipeline technique, and a 32 bit adder/subtractor module was also designed. Next the division module, the universal gate module and then the complete A.L.U. will be designed.
9. REFERENCES
[1] Rajit Ram Singh, Vinay Kumar Singh, Poornima Shrivastav, Dr. G. S. Tomar, "VHDL Environment for Floating Point Arithmetic Logic Unit (ALU) Design and Simulation".
[2] L. F. Rahman, Md. Mamun, M. S. Amin, "VHDL Environment for Pipeline Floating Point Arithmetic Logic Unit Design and Simulation", Department of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor, Malaysia, Journal of Applied Sciences Research, 8(1): 611-619, 2012.
[3] Shuchita Pare, Dr. Rita Jain, "32 Bit Floating Point Arithmetic Logic Unit (ALU) Design and Simulation", International Journal of Emerging Trends in Electronics and Computer Science, Volume 1, Issue 1, 2012.
[4] Yedukondala Rao Veeranki, R. Nakkeeran, "Spartan 3E Synthesizable FPGA Based Floating-Point Arithmetic Unit", International Journal of Computer Trends and Technology (IJCTT), Volume 4, Issue 4, April 2013.
[5] Prashanth B. U. V., P. Anil Kumar, G. Sreenivasulu, "Design & Implementation of Floating Point ALU on a FPGA Processor", International Conference on Computing, Electronics and Electrical Technologies (ICCEET), 2012.
[6] Mamun Bin Ibne Reaz, Md. Shabiul Islam, Mohd. S. Sulaiman, "Pipeline Floating Point ALU Design using VHDL", Faculty of Engineering, Multimedia University, 63100 Cyberjaya, Selangor, Malaysia, ICSE 2002 Proc., 2002.
[7] Mamun Bin Ibne Reaz, Md. Shabiul Islam, Mohd. S. Sulaiman, "Pipeline Floating Point ALU Design using VHDL", ICSE 2002 Proc., Penang, Malaysia, 2002.
10. AUTHOR PROFILE

Prof. Vishwajit K. Barbudhe received the Master of Technology in Electronics and Communication from Rajiv Gandhi Technical University, Bhopal. Currently he is an Assistant Professor at Jagdambha College of Engineering and Technology, Yavatmal, India. He has published 28 papers in international journals, 6 papers in international conferences and 6 papers in national conferences. He has 7 years of teaching experience, and his fields of specialization are digital signal processing, VLSI, digital communication and image processing.

REVIEW OF INDEPENDENT COMPONENT ANALYSIS ALGORITHMS AND ITS APPLICATION

NARESH NIMJE
Department of Electronics & Tele-Communication Engineering
Dr. Rajendra Gode College of Engineering & Technology, Amravati, India
ABSTRACT: The independent component analysis of a random vector consists of finding a linear transformation that minimizes the statistical dependence between its components. In order to define suitable search criteria, the expansion of mutual information is utilized as a function of cumulants of increasing orders. An efficient algorithm is proposed, which allows the computation of the ICA of a data matrix within polynomial time. The concept of ICA may actually be seen as an extension of principal component analysis, which can only impose independence up to the second order and, consequently, defines directions that are orthogonal. Potential applications of ICA include data analysis and compression, Bayesian detection, localization of sources, and blind identification and deconvolution.
Keywords: independent component analysis, data analysis and compression, Bayesian detection, localization of sources, blind identification and deconvolution.
1. INTRODUCTION

Nowadays, performing statistical analysis is only a few clicks away. However, before anyone carries out the desired analysis, some assumptions must be met. Of all the assumptions required, one of the most frequently encountered is the normality of the distribution.
ICA is very closely related to the method called blind source separation (BSS) or blind signal separation. A "source" means here an original signal, i.e. an independent component, like a speaker in the cocktail party problem. "Blind" means that we know very little, if anything, about the mixing matrix, and make few assumptions about the source signals. ICA is one method, perhaps the most widely used, for performing blind source separation.
An example application of ICA is to solve the "cocktail party problem". This problem states that we have multiple people talking at a party, but alas, the party is very crowded. We try to record the happenings of the party with a microphone, but so many people are talking that no one can be understood. However, if we record the party with multiple microphones, we can reconstruct individual voices. Ignoring delays due to distance to the microphones, ICA can solve this problem.
This paper presents an introduction to independent component analysis (ICA). Unlike principal component analysis, which is based on the assumptions of uncorrelatedness and normality, ICA is rooted in the assumption of statistical independence.
2. ALGORITHMS OF INDEPENDENT COMPONENT ANALYSIS

In this section we discuss some basic independent component analysis algorithms.
[1] FastICA FOR ONE UNIT:
To begin with, we shall show the one-unit version of FastICA. By a "unit" we refer to a computational unit, eventually an artificial neuron, having a weight vector $w$ that the neuron is able to update by a learning rule [1]. The FastICA learning rule finds a direction, i.e. a unit vector $w$, such that the projection $w^T x$ maximizes non-gaussianity. Non-gaussianity is here measured by the approximation of negentropy $J(w^T x)$. The variance of $w^T x$ must here be constrained to unity; for whitened data this is equivalent to constraining the norm of $w$ to be unity. FastICA is based on a fixed-point iteration scheme for finding a maximum of the non-gaussianity of $w^T x$, as so measured. It can also be derived as an approximative Newton iteration [1]. Denote by $g$ the derivative of the non-quadratic function $G$. The basic form of the FastICA algorithm is as follows:
1. Choose an initial (e.g. random) weight vector $w$.
2. Let $w^+ = E\{x\, g(w^T x)\} - E\{g'(w^T x)\}\, w$.
3. Let $w = w^+ / \|w^+\|$.
4. If not converged, go back to 2.
Convergence means that the old and new values of $w$ point in the same direction, i.e. their dot product is (almost) equal to 1 [1][2]. It is not necessary that the vector converge to a single point, since $w$ and $-w$ define the same direction. This is again because the independent components can be defined only up to a multiplicative sign. Note also that it is here assumed that the data is prewhitened.
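Under the common choice $G(u) = \log \cosh u$, so that $g = \tanh$ and $g' = 1 - \tanh^2$, the one-unit iteration can be sketched in NumPy as follows (a minimal illustration of steps 1-4 above on prewhitened data; the names are ours):

import numpy as np

def fastica_one_unit(X, max_iter=200, tol=1e-6):
    # X: prewhitened data, one row per mixture, one column per sample
    n, m = X.shape
    w = np.random.randn(n)
    w /= np.linalg.norm(w)                      # step 1: random unit vector
    for _ in range(max_iter):
        wx = w @ X                              # projections w^T x
        g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
        w_new = (X * g).mean(axis=1) - g_prime.mean() * w   # step 2
        w_new /= np.linalg.norm(w_new)          # step 3: renormalize
        if abs(abs(w_new @ w) - 1.0) < tol:     # step 4: same direction?
            return w_new
        w = w_new
    return w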
[2] FastICA FOR SEVERAL UNITS:
The one-unit algorithm of the preceding subsection estimates just one of the independent components, or one projection pursuit direction. To estimate several independent components, we need to run the one-unit FastICA algorithm using several units (e.g. neurons) with weight vectors $w_1, \dots, w_n$. To prevent different vectors from converging to the same maxima we must decorrelate the outputs $w_1^T x, \dots, w_n^T x$ after every iteration [1][2]. A simple way of achieving decorrelation is a deflation scheme based on a Gram-Schmidt-like decorrelation: we estimate the independent components one by one. When we have estimated $p$ independent components, i.e. $p$ vectors $w_1, \dots, w_p$, we run the one-unit fixed-point algorithm for $w_{p+1}$, and after every iteration step subtract from $w_{p+1}$ the "projections" of the previously estimated $p$ vectors and then renormalize $w_{p+1}$:

1. $w_{p+1} \leftarrow w_{p+1} - \sum_{j=1}^{p} (w_{p+1}^T w_j)\, w_j$
2. $w_{p+1} \leftarrow w_{p+1} / \sqrt{w_{p+1}^T w_{p+1}}$
In certain applications, however, it may be desired to use a symmetric decorrelation, in which no vectors are "privileged" over others. This can be accomplished, e.g., by the classical method involving matrix square roots,

$W \leftarrow (W W^T)^{-1/2}\, W$,

where $W$ is the matrix $(w_1, \dots, w_n)^T$ of the vectors, and the inverse square root $(W W^T)^{-1/2}$ is obtained from the eigenvalue decomposition of $W W^T = F D F^T$ as $(W W^T)^{-1/2} = F D^{-1/2} F^T$. A simpler alternative is the following iterative algorithm:

1. $W \leftarrow W / \sqrt{\|W W^T\|}$
2. $W \leftarrow \frac{3}{2} W - \frac{1}{2} W W^T W$, repeated until convergence.
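The classical symmetric decorrelation step follows directly from the eigenvalue decomposition; a minimal NumPy sketch (the function name is ours):

import numpy as np

def symmetric_decorrelation(W):
    # W W^T = F diag(d) F^T, so (W W^T)^(-1/2) = F diag(1/sqrt(d)) F^T
    d, F = np.linalg.eigh(W @ W.T)
    return F @ np.diag(1.0 / np.sqrt(d)) @ F.T @ W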
[3] FastICA AND MAXIMUM LIKELIHOOD:
This algorithm gives a new version of FastICA that shows explicitly the connection to the well-known infomax or maximum likelihood algorithm. If we express FastICA using the intermediate formula and write it in matrix form, FastICA takes the following form [3][4]:

$W \leftarrow W + \mathrm{diag}(\alpha_i)\left[\mathrm{diag}(\beta_i) + E\{g(y)\,y^T\}\right] W$,

where $y = Wx$, $\beta_i = -E\{y_i\, g(y_i)\}$ and $\alpha_i = -1/(\beta_i - E\{g'(y_i)\})$. The matrix $W$ needs to be orthogonalized after every step; in this matrix version, it is natural to orthogonalize $W$ symmetrically. This version of FastICA can be compared with the stochastic gradient method for maximizing likelihood,

$W \leftarrow W + \mu\left[I + g(y)\,y^T\right] W$,

where $\mu$ is the learning rate, not necessarily constant in time. FastICA can thus be considered as a fixed-point algorithm for maximum likelihood estimation of the ICA data model [3][4]. In FastICA, convergence speed is optimized by the choice of the matrices $\mathrm{diag}(\alpha_i)$ and $\mathrm{diag}(\beta_i)$. Another advantage of FastICA is that it can estimate both sub- and super-gaussian independent components, in contrast to ordinary ML algorithms, which only work for a given class of distributions.
[4] JADE (ICA): Joint Approximation Diagonalisation of Eigenmatrices (JADE) is an algorithm for independent component analysis that separates observed mixed signals into latent source signals by exploiting fourth order moments. The fourth order moments are a measure of non-Gaussianity, which is used as a proxy for defining independence between the source signals [5]. The motivation for this measure is that Gaussian distributions possess zero excess kurtosis, and with non-Gaussianity being a canonical assumption of ICA, JADE seeks an orthogonal rotation of the observed mixed vectors to estimate source vectors which possess high values of excess kurtosis.
Let $X \in \mathbb{R}^{n \times m}$ denote an observed data matrix whose $m$ columns correspond to observations of $n$-variate mixed vectors. It is assumed that $X$ is prewhitened, that is, its rows have a sample mean equal to zero and the sample covariance is the $n$-dimensional identity matrix, $\frac{1}{m} X X^T = I_n$. Applying JADE to $X$ entails
1. computing fourth-order cumulant matrices of $X$, and then
2. optimizing a contrast function to obtain a rotation matrix $O$,
so as to estimate the source components, given by the rows of the $n$-dimensional matrix $S = O^T X$ [5].
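The prewhitening that both JADE and FastICA assume can be sketched as follows (a minimal NumPy illustration; the function name is ours):

import numpy as np

def prewhiten(X):
    # Center each row (each mixed signal) to zero sample mean
    Xc = X - X.mean(axis=1, keepdims=True)
    # Eigenvalue decomposition of the sample covariance: C = E diag(d) E^T
    d, E = np.linalg.eigh(np.cov(Xc))
    # Whitening matrix V = E diag(d^(-1/2)) E^T gives cov(V Xc) = I
    V = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    return V @ Xc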
[5] KernelICA: Kernel independent component analysis (Kernel ICA) is an efficient algorithm for independent component analysis which estimates source components by optimizing a generalized variance contrast function [6], which is based on representations in a reproducing kernel Hilbert space. These contrast functions use the notion of mutual information as a measure of statistical independence.
3. APPLICATION OF INDEPENDENT COMPONENT ANALYSIS

[1] Independent Component Analysis used for Decision Trees: Decision trees are typically used as computationally efficient representations in classification or regression. For example, binary trees allow one to reach a leaf node in log2 n decisions when the tree contains n nodes in total. If the decisions are easy to compute, this may result in a very light computational process [7]. Typical problems involving decision trees deal with multivariate data where the components are usually attribute values with a clear interpretation. Decisions can be simply implemented as threshold values on one of the attributes. An example is an attribute height(k), which is the height of person k; a decision could be a threshold height(k) > 175 cm, which would split the data into two classes based on height.
One of the key properties of Independent Component Analysis is the ability to find linear combinations of multivariate data that have certain information-theoretic properties [7]. Often these linear combinations represent underlying sources which cannot be directly observed. If the data contains such hidden sources, decisions based on thresholding observed data components may not be very effective. In [7], single-component ICA was used to implement binary decisions in a decision tree. The key idea is that the independent components have more structure than the observed components, and therefore can be expected to be better candidates for linear threshold decisions. Since only a single ICA component needs to be computed at each tree node, the computational complexity resulting from a large number of ICA components can be avoided.
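As an illustration of such a node, a single estimated IC direction w yields a binary decision by thresholding the projection; a minimal sketch (the names and the zero default threshold are ours):

import numpy as np

def ica_split(X, w, threshold=0.0):
    # Project every observation (column of X) on the single IC direction w
    scores = w @ X
    # The boolean mask defines the two branches of the binary decision
    return scores > threshold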
[2] ICA for Text Mining: Independent component analysis (ICA) was originally developed for signal processing applications. Recently it has been found that ICA is a powerful tool for analyzing text document data as well, if the text documents are presented in a suitable numerical form. This opens up new possibilities for automatic analysis of large textual databases: finding the topics of documents and grouping them accordingly [8].
The first approaches to using ICA in the context of text data considered the data static. In our recent study, we concentrated on text data whose topic changes over time. Examples of dynamically evolving text are chat line discussions or newsgroup documents. The dynamical text stream can be seen as a time series, and methods of time series processing may be used to extract the underlying characteristics, here the topics, of the data [8].
[3] Face Recognition using Independent Component Analysis: A number of current face recognition algorithms use face representations found by unsupervised statistical methods [9]. Typically these methods find a set of basis images and represent faces as a linear combination of those images. Principal component analysis (PCA) is a popular example of such methods. The basis images found by PCA depend only on pairwise relationships between pixels in the image database. In a task such as face recognition, in which important information may be contained in the high-order relationships among pixels, it seems reasonable to expect that better basis images may be found by methods sensitive to these high-order statistics. Independent component analysis (ICA), a generalization of PCA, is one such method [9]. We used a version of ICA derived from the principle of optimal information transfer through sigmoidal neurons. ICA was performed on face images in the FERET database under two different architectures: one which treated the images as random variables and the pixels as outcomes, and a second which treated the pixels as random variables and the images as outcomes. The first architecture found spatially local basis images for the faces; the second architecture produced a factorial face code. Both ICA representations were superior to representations based on PCA for recognizing faces across days and changes in expression. A classifier that combined the two ICA representations gave the best performance [9].
[4] Mobile Phone Communications Using Independent Component Analysis: In commercial cellular networks, like the systems based on direct sequence code division multiple access (DS-CDMA), many types of interference can appear, from multi-user interference inside each sector of a cell to inter-operator interference [11]. Unintentional jamming can also be present due to co-existing systems at the same band, whereas intentional jamming arises mainly in military applications. Independent Component Analysis (ICA) is used as an advanced pre-processing tool for blind suppression of interfering signals in direct sequence spread spectrum communication systems utilizing antenna arrays. The role of ICA is to provide an interference-mitigated signal to the conventional detection. Several ICA algorithms exist for performing Blind Source Separation (BSS) [11]. ICA has been used to extract interference signals, but very little literature is available on the performance, that is, how does it behave in a communication environment? This needs an evaluation of its performance in the communication environment. This work evaluates the performance of some major ICA algorithms, like Bell and Sejnowski's infomax algorithm, Cardoso's Joint Approximate Diagonalization of Eigenmatrices (JADE), Pearson-ICA, and Comon's algorithm, in a communication blind source separation problem. Independent signals representing sub-Gaussian, super-Gaussian and mixed users are generated and then mixed linearly to simulate communication signals.

Separation performance of the ICA algorithms is measured by a performance index.
[5] Predicting Stock Market Prices Using Independent Component Analysis: In developing a stock price forecasting model, the first step is usually feature extraction. Nonlinear independent component analysis (NLICA) is a novel feature extraction technique for finding independent sources given only observed data that are mixtures of the unknown sources, without prior knowledge of the mixing mechanisms [10]. It assumes that the observed mixtures are nonlinear combinations of latent source signals [10]. This study proposes a stock price forecasting model which first uses NLICA as preprocessing to extract features from forecasting variables. The features, called independent components (ICs), serve as the inputs of support vector regression (SVR) to build the prediction model. Experimental results on the Nikkei 225 closing cash index show that the proposed method produces the best prediction performance compared to SVR models that use linear ICA, principal component analysis (PCA) and kernel PCA for feature extraction, and to a single SVR model without feature extraction.
[6] Removing artifacts, such as eye blinks, from EEG data using independent component analysis: Independent Component Analysis is a powerful tool for eliminating several important types of non-brain artifacts from EEG data [12]. EEGLAB allows the user to reject many such artifacts in an efficient and user-friendly manner. The quality of the data is critical for obtaining a good ICA decomposition. ICA can separate out only certain types of artifacts: those associated with fixed scalp-map projections. These include eye movements and eye blinks, temporal muscle activity and line noise [12]. ICA may not be used to efficiently reject other types of artifacts: those associated with a series of one-of-a-kind scalp maps. For example, if the subject were to scratch their EEG cap for several seconds, the result would be a long series of slightly different scalp maps associated with the channel and wire movements, etc. Therefore, such

types of "non-stereotyped" or "paroxysmal" noise need to be removed by the user before performing ICA decomposition [12].
4. CONCLUSION
This paper surveyed contrast functions and algorithms for ICA. ICA is a general concept with a wide range of applications in neural computing, signal processing and statistics. ICA gives a representation, or transformation, of multidimensional data that seems to be well suited for subsequent information processing.
5. REFERENCES
[1] Hyvärinen A., Oja E. (2000), Independent Component Analysis: Algorithms and Applications, Neural Networks 13(4-5): 411-430.
[2] Aapo Hyvärinen, Survey on Independent Component Analysis, Neural Computing Surveys 2: 94-128, 1999.
[3] Petr Tichavský, Zbyněk Koldovský and Erkki Oja, Performance Analysis of the FastICA Algorithm and Cramér-Rao Bounds for Linear Independent Component Analysis, IEEE Transactions on Signal Processing, Vol. 54, No. 4, April 2006.
[4] Aapo Hyvärinen, Independent Component Analysis: Recent Advances, Phil. Trans. R. Soc. A 371: 20110534, 2013.
[5] D. N. Rutledge, D. Jouan-Rimbaud Bouveresse, Corrigendum to "Independent Components Analysis with the JADE Algorithm", TrAC Trends in Analytical Chemistry, Volume 67, April 2015, Page 220.
[6] Francis R. Bach, Michael I. Jordan, Kernel Independent Component Analysis, Journal of Machine Learning Research 3 (2002): 1-48.
[7] P. Pajunen and M. Girolami, Implementing Decisions in Binary Decision Trees using Independent Component Analysis, in Proceedings of the 2nd Int. Workshop on Independent Component Analysis and Blind Source Separation, pages 483-487, Espoo, Finland, June 2000.
[8] Bingham, E., Topic Identification in Dynamical Text by Extracting Minimum Complexity Time Components, in Proc. 3rd International Conference on Independent Component Analysis and Blind Signal Separation (ICA2001), December 9-13, 2001, San Diego, CA, USA, pp. 546-551.
[9] Marian Stewart Bartlett, Javier R. Movellan, Terrence J. Sejnowski, Face Recognition by Independent Component Analysis, IEEE Trans. Neural Netw. 2002; 13(6): 1450-1464.
[10] Chi-Jie Lu, Jui-Yu Wu, Cheng-Ruei Fan, Chih-Chou Chiu, Forecasting Stock Price Using Nonlinear Independent Component Analysis and Support Vector Regression, Industrial Engineering and Engineering Management (IEEM 2009), IEEE International Conference on, 8-11 Dec. 2009.
[11] Sargam Parmar, Bhuvan Unhelkar, Independent Component Analysis Algorithms in Wireless Communication Systems, Handbook of Research in Mobile Business, Second Edition: Technical, Methodological and Social Perspectives.
[12] Quick Tutorial on Rejecting Data Artifacts with EEGLAB.


6. AUTHOR PROFILE

Naresh Nimje is currently pursuing the Bachelor of Engineering in Electronics and Tele-Communication from Dr. Rajendra Gode Institute of Technology & Research, Amravati, India.


A REVIEW ON PARALLEL PROGRAMMING MODELS IN HIGH PERFORMANCE COMPUTING

ANIKET YAWALKAR1, ASHISH PAWAR2, AMITKUMAR MANEKAR3
1,2,3 Department of IT, Shri Sant Gajanan Maharaj College of Engineering, Shegaon,
Dist. Buldhana, Maharashtra, India
1 yawalkaraniket93@gmail.com
2 kscorpion297@gmail.com
3 asmanekar24@gmail.com

ABSTRACT: There are three main approaches to parallel programming: implicit, explicit and systematic. Parallel programs are used for high performance computing and mainly depend on PGAS and hybrid models. Parallel programming is used in processors to increase computing speed. Work on GIS raster and vector data has produced algorithms that use the GDAL library and its OGR component. Parallel programs are used on parallel computers with multiprocessor and multicore architectures. The role of parallel processing in increasing performance is based on MPI, which combines distributed memory, shared memory and hybrid memory models. The power flow problem found in power systems is solved using OpenMP and thread based programming, which works in a shared memory programming environment; OpenMP is based on open threading, shared memory and multicore techniques. Parallel computing also supports the virtual reality concept, which depends on objects; objects do not have a deterministic shape and cannot be handled using geometric equations, for which the convex hull algorithm is implemented. Parallel programming is also used to support signal processing algorithms for peak detection; implementation of PAMM and LC-MS detection algorithms requires Gibbs sampling. Parallel code gives speedup but raises fault tolerance issues. Simulation and virtual reality depend on vector processing for geometric projection. In this paper we review the different parallel programming models available in different approaches.
Keywords: GAMMA, VLSI, Multithreading, PGAS, GIS, GDAL, OGR, Shared Memory, Parallelism, MPI (Message Passing Interface), OpenMP, Heterogeneous (hybrid) systems, Virtual Reality, PAMM, LC-MS, MLP, Vector Processing, CUDA

1. INTRODUCTION

The work of parallel computing is totally based on parallel programming. GAMMA acts as a backbone of parallel programming, which is possible due to VLSI [1]. There are several reasons behind GAMMA in parallel programming; the concept of GAMMA is based on Java multithreading. Parallel programming is used to obtain high computing performance. The shared and distributed memory concepts are closely related to parallel programming, and multicore CPUs use the parallel programming approach. In this work our goal is to explore the concept of parallel programming in its different approaches. The hybrid concept is mainly used across shared and distributed memory; the purpose of a hybrid parallel program is to increase the capacity of computers with multicore nodes. Heterogeneous programs are used on CPU and GPU systems, and many open industry standards (OpenMP, MPI and OpenCL) exist to support them and increase parallelism. Microprocessors contain multicore models (GPGPU), which give parallel programs an arena to act in. Parallel programs use many algorithms written in C++ and MPI [5]. MPI covers the distributed memory, shared memory and hybrid memory models [2]. Polygon vectorization is based on MPI and is part of the GDAL library [2]; such algorithms are free to read the geographical data formats published by the Open Source Geospatial Foundation, and the library contains command line utilities for data translation and processing. High performance computing involves multiple computing resources, which can be allocated with the help of threads. The power flow problem is solved using threads; power flow was tested on a shared memory machine plus cache coherent non-uniform memory access (ccNUMA) [5]. This mainly depends on OpenMP, which contains a set of compiler directives that requires minimal knowledge of parallel programming. In the future, PCs and servers will contain multicore technology and hard coded parallel programming. Parallel programming is used to support signal processing algorithms for peak detection; we added PAMM, but fault tolerance issues occur. Simulation and virtual reality mainly depend on parallel programs that contain vector data processing for geometric projection. Objects do not have a deterministic shape and size and cannot be expressed by geometric equations, for which we use the convex hull [8]. MPP and MLP are added in digital images, which work on the polygon concept added to the convex hull. Concurrency using several cores can overcome these issues; many- and multi-core processors exploiting parallelism in this way are also called the concurrency revolution.


2. RELATED WORK

The collected survey work provides an overview of different approaches to parallel programming models with their pros and cons.
Auto parallelism: Using instruction level parallelism (ILP) or parallelizing compilers, sequential programs are automatically parallelized. Certain programs can be recompiled using these parallelizing compilers without modification; the limitation is that the amount of parallelism obtained is very small, due to the complexity of automatic transformation of code [1].
Parallel programming approach: Applications are tuned to exploit parallelism by partitioning the total work into small tasks. It provides high parallelism. Besides these two, some archetypal parallelism is also present in computer programs, i.e. implicit and explicit. The four main phases of parallelism are finding concurrency, algorithm building, supporting structure and implementation mechanism [2].
1. GAMMA: It is hard to imagine that a programmer can mentally manage the decomposition of a program into a great number of individual tasks and harmonize them well. Hence the necessity of a high level programming paradigm arises, which allows describing an algorithm with as few sequential constraints as possible. All this can be provided by the GAMMA paradigm [1].
2. VLSI: Very-large-scale integration (VLSI) is the process of creating an IC by combining thousands of transistors into a single chip. Parallel programming is used to increase the performance of VLSI chips.
3. Multithreading: It is the concept in which a single set of code can be used by several processors at different stages of execution. Multithreading is the ability of a program or an operating system to serve more than one user at a time, and to handle multiple requests by the same user, without having to run multiple copies of the program on the computer. A multithreaded program is parallel, but a parallel program is not necessarily multithreaded [5].
4. PGAS: The Partitioned Global Address Space model assumes a global memory address space that is logically partitioned. There is an array of memory blocks, and each block is local to a thread. PGAS implementations usually follow the SPMD style for distributed memory, which also underlies MPI. Multiple PGAS blocks are linked together, thereby exploiting locality of reference. This is the basis of Unified Parallel C, Coarray Fortran, Fortress, Chapel, X10 and Global Arrays.
5. GIS: The Geographic Information System (GIS) is a computer-assisted system for the acquisition, storage, analysis and display of geographic data. GIS allows creating, maintaining and querying electronic databases of information normally displayed on maps [3].
a) Vector: In vector form, data is represented in its original form. Its output is usually aesthetically pleasing since no data conversion is required, so accurate geographic information is maintained. The disadvantages are that the location of each vertex needs to be stored explicitly, and for efficiency the data must be converted into a topological structure, which requires extensive data cleaning. Any change in a vector layer may lead to changes in the whole topology; because of this, raster data is employed to improve functionality.
b) Raster: Due to the nature of the data storage technique, data analysis is usually easy to program and quick to perform. Raster data has an inherently convenient nature: an attributed map is ideally suited for mathematical modeling, since each location is defined as a cell in a matrix. Its disadvantages are that the cell size determines the resolution of the data represented, it is difficult to adequately represent linear features, and network linkages are difficult to establish.
6. GDAL: The GDAL (Geospatial Data Abstraction Library) is a library for reading and writing raster geospatial data formats, released under the permissive X/MIT style free software license by the Open Source Geospatial Foundation. As a library, it presents a single abstract data model to the calling application for all supported formats. It may also be built with a variety of useful command-line utilities for data translation and processing [3].
7. OGR: The related OGR library (OGR Simple Features Library), which is part of the GDAL source tree, provides a similar capability for simple features vector data. GDAL was primarily developed by Frank Warmerdam until the release of version 1.3.2, when maintainership was officially transferred to the Project Management Committee under the Open Source Geospatial Foundation. OGR is considered a major free software project for its "extensive capabilities of data exchange", and also in the commercial GIS community due to its widespread use and comprehensive set of functionalities.
8. Shared Memory: The concept of shared memory defines a global memory which is accessible to all processors. It is mostly used in architectures in which multiple processing units access a single memory. The parallel programming concept is mostly used to increase its performance [1].

Figure 1: The shared memory architecture


9. Distributed Memory: In the distributed memory model each computer has its own private memory blocks. These models work over a network or a grid of computers. Fig. 2 shows the distributed memory architecture [2].

Figure 2: The distributed-memory architecture
10. LC-MS: Liquid Chromatography-Mass Spectrometry (LC-MS) is a high-throughput method for protein identification and quantification.
Advantages:
a) Maximizes the number of peptide identifications.
b) Parallelism is possible.
Limitations:
a) Needs long processing time.
b) High complexity.
11. MLP: Multi-Level Parallelism combines various approaches (e.g. MPI + OpenMP + CUDA). With their combined use we achieve the shared memory concept, distributed memory and data parallelism. This may create some problems, which can in turn be solved with the help of the OpenCL framework.
Advantages:
a) Improves productivity and performance.
b) Increases portability.
Limitations:
a) Integrating nested data.
b) Task parallelism.
c) Compatible language constructs.
12. OpenMP allows for implicit intra-node communication and is a shared memory paradigm. It provides for efficient utilization of shared memory SMP systems and facilitates relatively easy threaded programming. It does not incur the overhead of message passing, since communication among threads is implicit. It is the de facto standard, and is supported by most major compilers (Intel, IBM, gcc, etc.). The process goes to the data. Program correctness is an issue, since all threads can update shared memory locations.
Advantages:
a) Permits both coarse-grain and fine-grain parallelism.
b) Each thread sees the same global memory.
c) Implicit messaging.
d) Uses the fork-join model for parallel computation.
Limitations:
a) OpenMP codes will only run on shared memory machines, so they are not portable.
b) Limited scalability, not much speedup.
c) Threads are executed in a non-deterministic order.
d) OpenMP requires explicit synchronization.
13. MPI (Message Passing Interface): MPI is a message passing library standard based on the consensus of the MPI Forum, which has over 40 participating organizations, including vendors, researchers, software library developers and users. The goal of the Message Passing Interface is to establish a portable, efficient and flexible standard for message passing that will be widely used for writing message passing programs. As such, MPI is the first standardized, vendor independent message passing library. The advantages of developing message passing software using MPI closely match the design goals of portability, efficiency and flexibility. MPI is not an IEEE or ISO standard, but it has in effect become the "industry standard" for writing message passing programs on HPC platforms. A minimal example is sketched after the lists below.
Advantages:
a) MPI runs on both distributed and shared memory models.
b) Portable.
c) Best for parallelism.
d) Each process has its own memory.
Limitations:
a) Structure is large.
b) Global operations can be very expensive.
c) Significant change to the code is often required, making transfer between the serial and parallel code difficult.
d) In MPI, dynamic changes are often difficult.
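As a minimal illustration of explicit message passing (using the mpi4py Python bindings for consistency with the other examples in this review; the payload and tag are arbitrary):

# Run with, e.g.: mpiexec -n 2 python demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Explicit, tagged message passing between private address spaces
    comm.send({"samples": [1, 2, 3]}, dest=1, tag=11)
elif rank == 1:
    data = comm.recv(source=0, tag=11)
    print("rank 1 received", data)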
14. Heterogeneous (hybrid) systems (MPI + OpenMP): the use of inherently different models of programming in a complementary manner, in order to achieve some benefit not possible otherwise; a way to use different models of parallelization that takes advantage of the good points of each. Introducing MPI into OpenMP applications can help scale across multiple SMP nodes. Introducing OpenMP [1] into MPI applications can help make more efficient use of the shared memory on SMP nodes, thus removing the need for explicit intra-node communication. Introducing MPI and OpenMP during the design/coding of a new application can help maximize efficiency, performance and scaling. All this serves the implementation of a single user program as a multiuser program.
15. Virtual reality: The following are the rewards of parallel programming for virtual reality.
Advantages:
a) Improves performance.
b) Increases the validity of redundant frames.
c) Allows the implementation of a single user program as a multiuser program.


16. PAMM: We describe a new programming model, referred to as the Parallel Markovian Messaging (PAMM) model. PAMM is developed following the same philosophy as MapReduce: the programming model should be simple and clean, and constraints are introduced only if they provide a good trade-off between robustness and applicability.
Advantages:
a) PAMM allows message passing between the processes, called workers. However, the messages in PAMM must have the Markovian property: the state of a worker can be completely determined by the latest messages from the worker and the latest messages to the worker; in addition, the order of the messages should be irrelevant.
b) PAMM supports fault tolerance at a very low cost.
c) To run PAMM, the computing task should be well partitioned.

17. CUDA Architecture: CUDA is NVIDIA's parallel computing architecture that enables dramatic increases in computing performance by harnessing the power of the GPU (graphics processing unit). With millions of CUDA-enabled GPUs sold to date, software developers, scientists and academics are discovering broad-ranging uses for CUDA, including image and video processing, computational biology and chemistry, fluid dynamics simulation, CT image reconstruction, seismic analysis, ray tracing, and much more [9].
Advantages:
a) Provides acceleration to the video application market.
b) CUDA has been well received in the area of scientific research, e.g. in molecular dynamics simulation programs.
c) Provides simple functionality for the programming approach.
d) Provides thread parallelism.
e) Familiar to users of standard programming languages like C.
f) Provides the shared memory concept and memory synchronization.
Limitations:
a) Does not support the full C standard.
b) Unlike OpenCL, CUDA-enabled GPUs are only available from Nvidia.
c) Exception handling is not available in CUDA, unlike OpenMP and others.
3. CONCLUSION

Our work is based on modern parallel programming models. From this study it is clear that the available multicore and manycore models with efficient parallelism provide an arena for current computer science curricula, and that to increase parallelism many distributed memory models are used. GAMMA based parallel programming platforms provide separation of complexity through multithreading; they give good performance, the only exception being a lack of refinement. Overall, parallel programming is based on MPI and GDAL and is used in multicore CPUs.
In our study we observed that MPI, i.e. the distributed memory model, provides the main contribution to parallelism. Power system based studies depend on expensive parallel computers and provide extensive experience in their programming. Parallelism can also be used for 3D objects made using mathematical equations containing the convex hull algorithm; collision detection between an object and the virtual world is constructed using the convex hull. There is a new convex hull algorithm which uses a linear time algorithm and provides low time complexity.
4. REFERENCES

[1] Mr. Amitkumar S. Manekar, Prof. Pankaj Kawadkar, Prof. Malati Nagle, "A Review on New Paradigms of Parallel Programming Models in High Performance Computing", Volume 1, Issue 4, August 2012, www.ijcsn.org, ISSN 2277-5420.
[2] Salim Ghanemi, "A High Level Abstract Parallel Programming Platform: Application to GIS Image Recognition".
[3] Jinbiao Wei, Manchun Li, Yafei Wang, Chong Chen, Wuyang Hong, Zhenjie Chen, "Parallel Algorithm Designed for Polygon Vectorization", ACM Queue, vol. 3, no. 7, pp. 54-62, 2005.
[4] Zhenghao Zhang, Youting Sun, Ulisses Braga-Neto, Edward Dougherty, Jianqiu Zhang, "Survey on Parallel Programming Model", Proc. of the IFIP Int. Conf. on Network and Parallel Computing, vol. 5245, pp. 266-275, Oct. 2008.
[5] Hasan DAG, Gurkan SOYKAN, "Power Flow Using Thread Programming", Istanbul, 34083.
[6] Fadi Yaacoub, Yskandar Hamam, Antoine Abche, Charbel Fares, "Convex Hull in Medical Simulation: A New Hybrid Approach", IEEE, 2006.
[7] Zhenghao Zhang, Youting Sun, Ulisses Braga-Neto, Edward Dougherty, Jianqiu Zhang, "A Parallel Programming Framework with Markovian Messaging for LC-MS Peptide Identification".
[8] Chao-Tung Yang, Chih-Lin Huang, "CUDA Programming for Multicore CPU Clusters", IEEE, 2006.
[9] Gisela Klette, "A Recursive Algorithm for Calculating the Convex Hull", GPU CUDA Programming (Parallel Approach), Proc. of the IFIP Int. Conf. on Network and Parallel Computing, vol. 5243, pp. 266-453, Oct. 2008.



5. AUTHOR PROFILE

Ashish Namdev Pawar is pursuing the 3rd year of the B.E. degree in Information Technology from Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Dist. Buldhana, Maharashtra, India.

Prof. Amitkumar S. Manekar is working as an Assistant Professor in the IT Department, SSGMCE, Shegaon. His research areas are Big Data analysis and High Performance Computing. He has guided many undergraduate and postgraduate students.

Aniket Dilip Yawalkar is pursuing the 3rd year of the B.E. degree in Information Technology from Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Dist. Buldhana, Maharashtra, India.


AUTOMATED PARKING SLOT ALLOTTER USING RFID AND NFC TECHNOLOGY

HARSHAL PHUSE1, SUMIT BAJARE2, RANJIT JOSHI3, AMITKUMAR MANEKAR4
1,2,3,4 Shri Sant Gajanan Maharaj College of Engineering, Shegaon,
Dist. Buldhana, Maharashtra, India
Sant Gadge Baba Amravati University
1 harshphuse40@gmail.com
2 sumitbajare20@gmail.com
3 ranjitjoshi9@gmail.com
4 asmanekar24@gmail.com
ABSTRACT: In this modern era, technology is imposing its relevance on people, and nowadays owning a car has become mainstream. Every day we come across huge traffic and congestion problems in public places as well as in organizations. From employees of an organization to students going to college, most people have cars, so managing the parking of these cars in big organizations or colleges where traffic is congested needs continuous manual intervention, and it becomes a tedious job to keep information about all cars that are checking in and checking out. With the proposed system, drivers will not have to stop for a long time, and the whole process will become digitalized, thereby eliminating human intervention. So, to manage the cars in the parking area, we are developing an automated system for car parking. In the proposed system we are going to develop an Automated Parking Slot Allotter, so that continuous human intervention in parking problems is no longer needed and the parking of cars can be done quickly and efficiently. We are accomplishing this system by using the technologies called RFID, i.e. Radio Frequency Identification, NFC, i.e. Near Field Communication, and OCR, i.e. Optical Character Recognition. This system will also keep track of allotted parking lots and available ones. So the advancement over the current system can reduce the time requirement and efforts considerably.
Keywords: RFID (Radio Frequency Identification), NFC (Near Field Communication), OCR (Optical Character Recognition), Automated Parking Slot Allotter.

1. INTRODUCTION
In this system we use RFID, i.e. Radio Frequency Identification, NFC, i.e. Near Field Communication, and OCR, i.e. Optical Character Recognition.
Radio frequency identification, or RFID, is a standard term for technologies that use radio waves to automatically identify objects. The most commonly adopted method of using RFID technology is to identify an RFID tag by the serial number of the tag; this serial number is read by an RFID reader whenever the tag comes into the reader's range [1]. In the proposed system we use passive RFID tags. The operating distance of RFID ranges up to 100 meters, and tags can operate solely under the reader coverage region [2]. We are using RFID technology because of the following features:
- Manual workloads will be decreased [3].
- It is universally useful and efficient [4].
- RFID is much more secure [5].
The other technology, NFC, is used for short-range transmission of data. NFC is designed to support existing RFID transactions, including contactless payments and some ticketing systems, as well as being a generally programmable platform [6]; here we use this technology for the payment of parking. Image processing and character recognition is one of the crucial factors in the proposed system, so to ensure the optimization of the system we use OCR, i.e. Optical Character Recognition. OCR is the most commonly used technique by researchers due to its accuracy [7]-[8], and it is being adopted because of its ability to detect alphanumeric data as well [7]-[8]. The users of this system can be of three categories:
1.1 Frequently visiting user
RFID is used for the users who frequently visit the organization. The users under this category can be employees of that organization.
1.2 Less frequent user
The NFC technology is used for the users who visit the organization less frequently. The users under this category can be clients or trainees. The payment for the parking is done by using an NFC MIFARE card.
1.3 Visitor
Character recognition of the vehicle number plate comes into the picture when the vehicle only rarely comes to the organization. The user can be, for example, the parents of a student coming to visit the college.
For the convenience of the user, the allotted parking lot will be displayed on the screen near the boom barrier.
2. RELATED WORK

Alternative way for cars without RFID: finding a vacant parking space during the rush hours is a common problem in most cities, and it is just as common in large organizations and colleges. This not only wastes the time and fuel of drivers looking for parking, but also increases air pollution and driver frustration.
In the systems currently used at most places, parking is managed in the conventional way, i.e. manually. Such systems require continuous human intervention, and thus the work is not performed quickly. The automated parking systems developed using RFID technology allow only cars that have a valid RFID tag, and cars without RFID have no provision. The OCR technology that we are going to use provides a provision for cars that do not have an RFID tag. In addition, the use of NFC technology for the payment of parking fees brings a convenient and quick way to pay digitally. A camera called ANPR (Automatic Number Plate Recognition) can be used to recognize the vehicle number [9]. The main feature we provide through this system is that cars of multiple categories of users are managed.
3. METHODOLOGY

The processing of this system is prominently categorized into the following parts:
1. To operate for users having an RFID tag.
2. To operate for users having an NFC MIFARE card.
3. To recognize the plate number of the visitor's car.
4. To display the allotted parking lot.

In the proposed system we will use three databases:
1. Database for the users having an RFID tag.
2. Database for the users having an NFC card.
3. Database for daily check-ins and check-outs.

The general arrangement of the system can be done as shown in figure 1.

Figure 1: Arrangement of system
The users in all categories will be informed about the allotted parking lot on a common LED display near the boom barrier. The software will keep track of the allotted parking lots and the lots which are available.
To achieve robustness it is mandatory that the system be capable of performing all operations efficiently, so to reduce timing overheads we use RFID tags for the frequently visiting users.

Every RFID tag has a unique identifier, so by using this amenity we can embed an RFID tag in the frequently visiting cars. This eliminates the overhead of recognizing the plate characters every time. The system has a separate database for users with an RFID tag, and all the information regarding such a car is stored in that database. Figure 2 shows the operating steps for vehicles having a valid RFID tag.



Figure 2: Processing on car with RFID


As shown in figure 2, when a car having a valid RFID tag arrives at the entrance, the scanner scans the tag and searches for its details in the RFID database. The details may contain the vehicle number, the tag number and the car owner's name. An entry is made in the check-in and check-out database, and the driver is then informed about the allotted parking lot on the LED display. While checking out, another scanner at the exit point scans the tag and makes the related entries in the database. A minimal sketch of this check-in/check-out bookkeeping is given below.
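The following is only an illustrative sketch of the bookkeeping just described; the data and the names rfid_db and log are hypothetical stand-ins for the RFID and check-in/check-out databases, not the actual system.

# Minimal sketch (hypothetical data) of the RFID check-in/check-out flow.
from datetime import datetime

rfid_db = {"TAG-001": {"vehicle": "MH30 AB 1234", "owner": "A. Driver"}}
log = {}  # tag -> open check-in record

def check_in(tag, lot):
    details = rfid_db.get(tag)
    if details is None:
        return None                     # unknown tag: handled via the OCR path
    log[tag] = {"in": datetime.now(), "lot": lot, **details}
    return lot                          # lot number shown on the LED display

def check_out(tag):
    record = log.pop(tag, None)
    if record:
        record["out"] = datetime.now()  # closed by the scanner at the exit
    return record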

Figure 3 shows how the processing is done for users having the NFC card:

Figure 3: Processing on car with NFC


As shown in figure 3, when a car having a valid NFC card arrives at the entrance, the reader scans the card and searches for its details in the NFC database. Each NFC card has a unique number. The MIFARE card is an NFC card which can store a balance, i.e. money, and can be used to make payments; such a card can be given to the less frequent visitor. The NFC database is checked for a valid entry and, if one is found, the payment is made. A check-in entry is made in the database, and similarly a check-out entry is made at the exit, but the payment is not charged twice, as the sketch below shows. This kind of system can be used broadly in cities having congested traffic, and the user can be allowed to make the payment by using the same NFC card.
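Below is a minimal sketch of the pay-once behaviour described above; nfc_db, PARKING_FEE and the field names are hypothetical stand-ins, not the actual system's schema or the MIFARE protocol.

# Minimal sketch (hypothetical data): charge the parking fee once per visit.
nfc_db = {"CARD-42": {"balance": 500, "paid": False}}
PARKING_FEE = 20

def nfc_entry(card_id):
    card = nfc_db.get(card_id)
    if card is None or card["balance"] < PARKING_FEE:
        return False                    # invalid card or insufficient balance
    card["balance"] -= PARKING_FEE      # charge once, at entry
    card["paid"] = True
    return True

def nfc_exit(card_id):
    card = nfc_db.get(card_id)
    if card and card["paid"]:
        card["paid"] = False            # reset for the next visit; no second charge
        return True
    return False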

As depicted in figure 1, the ANPR camera is mounted on the boom barrier. This camera is used to capture an image of the visitor's car number plate, and the number is recognized by the special technique called OCR. The processing of the image should be performed quickly to reduce time overheads. Figure 4 shows the processing of the system in accordance with the OCR technique:

Figure 4: Processing on visitor's car using OCR


After recognizing the car's plate number, the next task is to make the check-in entry in the database along with the entry time. The parking lot number is then displayed on the display near the barriers so that the driver gets to know where to park the car. After allotting the parking lot number, the barrier is raised and the car is allowed to check in; the database entries made at check-in contain the car number, the check-in time and the allotted parking lot. While the car is checking out, the camera at the exit gate performs the same task as the check-in camera: once the number is recognized, the database is searched for the respective entry, the check-out is marked, and the lot which was allotted to the car is moved back to the available list, as sketched below.
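The following sketch illustrates the lot allotment and release logic keyed by the recognized plate number; free_lots, occupied and the function names are hypothetical, and the ANPR/OCR recognition step is assumed to have already produced the plate string.

# Minimal sketch: allot a lot at check-in and release it at check-out.
from datetime import datetime

free_lots = ["L1", "L2", "L3"]          # available parking lots
occupied = {}                           # plate -> (lot, check-in time)

def visitor_check_in(plate):
    if not free_lots:
        return None                     # parking full
    lot = free_lots.pop(0)
    occupied[plate] = (lot, datetime.now())
    return lot                          # displayed near the boom barrier

def visitor_check_out(plate):
    entry = occupied.pop(plate, None)
    if entry:
        free_lots.append(entry[0])      # return the lot to the available list
    return entry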


4. IMPLICATIONS
The Automated Parking Allotter can be used at places where large numbers of cars check in and check out, such as organizations, colleges, hotels, theaters and malls. Continuous manual intervention is eliminated, management of all the data becomes easy, and technologies like RFID, NFC and OCR are used at an optimum level. Thus the Automated Parking Slot Allotter system is reliable and well suited for service.
5. CONCLUSION
The Automated Parking Allotter will enable working with less human intervention. As we are using RFID, regular visitors will be rid of waiting long for parking; this is achieved by using the RFID tag. The use of NFC allows less frequent users to make payments using the MIFARE card. Visitors who come rarely go through OCR for recognizing the number plate. Thus this system is fully functional for multiple kinds of users, and the traffic congestion problem in cities, as well as in parks and organizations, can be overcome to a remarkable level.
6. REFERENCES

[1] Zeydin Pala, Nihat Inanc, "Utilizing RFID for Smart Parking Applications", International Journal of Advanced Manufacturing Technology, 31 (1-2): 121-127.
[2] L. Mainetti, L. Palano, L. Patrono, M. L. Stefanizzi, R. Vergallo, Dept. of Innovation Engineering, University of Salento, Lecce, Italy.
[3] Penttila, K., Keskilammi, M., Sydanheimo, L., Kivikoski, M., 2006. "Radio frequency technology for automated manufacturing and logistics control". International Journal of Advanced Manufacturing Technology, 31 (1-2): 116-124.
[4] Zhang, L., 2005. "An Improved Approach to Security and Privacy of RFID Application System". Wireless Communications, Networking and Mobile Computing, International Conference. (2): 1195-1198.
[5] Xiao, Y., Yu, S., Wu, K., Ni, Q., Janecek, C., Nordstad, J., 2006. "Radio frequency identification: technologies, applications, and research issues". Wiley Journal of Wireless Communications and Mobile Computing. (Accepted for publication.)
[6] Giuliano Benelli, Alessandro Pozzebon, University of Siena, Italy. "An Automated Payment System for Car Parks Based on Near Field Communication Technology".
[7] S. Rasheed, A. Naeem, and O. Ishaq, "Automated Number Plate Recognition using Hough Lines and Template Matching", Proceedings of the World Congress on Engineering and Computer Sciences, vol. 1, 2012, pp. 1-5.

ISSN: 2455-3743

[8] M. Tahir and M. Asif, "Automatic Number Plate Recognition System for Vehicle Identification Using Optical Character Recognition", Proceedings of the International Conference on Education Technology and Computer, 2009, pp. 335-338.
[9] Mohammed Y. Aalsalem, Wazir Zada Khan, Khalid Mohammed Dhabbah, "An Automated Vehicle Parking Monitoring and Management System Using ANPR Cameras", Faculty of Computer Science & Information System, Jazan University, Kingdom of Saudi Arabia.
7. AUTHOR PROFILE
Harshal Sudhakar Phuse is pursuing the 3rd year B.E. degree in Information Technology from Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Dist. Buldhana, Maharashtra, India.

Sumit Raosaheb Bajare is pursuing the 3rd year B.E. degree in Information Technology from Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Dist. Buldhana, Maharashtra, India.

Ranjit Surendra Joshi is pursuing the 3rd year B.E. degree in Information Technology from Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Dist. Buldhana, Maharashtra, India.

Prof. A. S. Manekar: Amitkumar S. Manekar is working as Assistant Professor in the IT Department, SSGMCE, Shegaon. His research area is Big Data analysis and High Performance Computing. He has guided many undergraduate and postgraduate students.


REVIEW: AUTOMATED STUDENTS' ATTENDANCE MANAGEMENT SYSTEM USING RASPBERRY PI AND NFC

NIKHIL P. SHEGOKAR1, KAUSTUBH S. JAIPURIA2, AMITKUMAR MANEKAR3
1,2,3 Shri Sant Gajanan Maharaj College of Engineering, Shegaon-444203
1 nikhilshegokar12@gmail.co
2 kaustubhj@gmail.com
3 asmanekar24@gmail.com
ABSTRACT: Nowadays, speed and efficiency are what is needed to reduce the time of work and improve the performance of an individual or a system. Hence we present an automated attendance management system using Raspberry Pi and NFC, which is a smarter and more efficient way. This will help in speeding up attendance monitoring in schools, colleges and universities, and also in small businesses, thereby making attendance-taking faster and easier. The main active components of the paper are a Raspberry Pi, which is a single-board micro-computer, and an NFC tag reader. NFC (Near Field Communication) technology allows information to be read, written and exchanged over short distances using wireless communication.
Keywords: Automated Attendance, Raspberry Pi, Facial recognition, NFC.

1. INTRODUCTION
Attendance monitoring is a very important process in almost all organizations and institutions. Nowadays two types of systems are available: manual and automated. The most used method for taking attendance is totally manual, i.e. using sheets of paper or books. The attendance sheet can be lost, and this method may easily allow impersonation. It is also time consuming, so to remove these weaknesses there is a need for an automated and reliable system. With a manual attendance system a lot of time is wasted calling each student's number and name, the teacher has to do a lot of paperwork, and maintaining the paper sheets is quite difficult; sometimes the problem of illegal attendance is also faced. So it becomes necessary to do all this online and automated. Biometric authentication is one of the most popular and accurate technologies: we can do this using biometric ideas or with NFC tags, with a unique tag provided to each student. Day by day the world becomes more digitized, meaning we use automatic systems which save time; similarly, we are changing attendance monitoring from a manual to an automated system. An automated attendance system may use barcodes, electronic tags, biometrics and touch screens in place of paper.
2. RELATED WORKS
Current networked and mobile technologies provide more methods supporting children in their transitions between home and school [3]; for example, children can travel securely by using a location system [4]. Giuliano Benelli and Alessandro Pozzebon designed and developed an automatic payment system for car parking based on NFC [12]. We saw that face recognition is a technique that can effectively replace fingerprint-style biometric systems: in face detection we can store the captured images of the students in the database of the system. We are not using thumb or iris biometric attendance because it is costly, requires extra human work and consumes more time; the usual biometric attendance management systems use iris recognition and thumb scanning. Facial recognition needs only the support of a camera, requires no other accessory, and attendance is simply marked; this entire task is performed by algorithms [2]. In 2014, G. Senthilkumar, K. Gopalakrishnan and V. Sathish Kumar developed an embedded image capturing system using the Raspberry Pi [10]. L. Arunkumar and A. Arun Raja, in May 2015, proposed a paper discussing a standardized authentication model that is capable of extracting the fingerprints of individuals and storing them [7]. In 2012, Mohammad Umair Yaqub proposed a module in which students have to register their NFC-enabled phone with the teacher's phone before the lecture, and after the lecture the teacher transfers the attendance to the server [8]. In 2013, Unnati A. Patel proposed a system in which she uses RFID technology for the attendance of students in the classroom [6]. In 2014, Dhiraj R. Wani, Tushar J. Khubani and Prof. Naresh Thoutam proposed a system in which automatic attendance management is done with Raspberry Pi and NFC together with face recognition: if both the NFC tag and the face recognition result are accepted, the attendance is reported, otherwise it is not [11]. Jomon Joseph and K. P. Zacharia proposed a paper in which they use face recognition for the validation

purpose, particularly in the case of student attendance. Today's attendance systems are monotonous and time consuming; their system stores all recognized data into the database, is very useful, reduces time, and detects the face of a person from two feet or more away from the camera position [9]. In 2014, Pritish Sachdeva and Shrutik Katchii proposed a review paper on the Raspberry Pi in which they discuss the board and the projects that can be implemented using it [5]. A minimal sketch of the combined NFC-plus-face check described in [11] is given below.
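The sketch below is illustrative only: the student data, the match_face heuristic and its threshold are hypothetical stand-ins for the real NFC read and face recognition steps on the Raspberry Pi.

# Minimal sketch: attendance is marked only when BOTH the NFC tag and the
# face comparison agree, as in the dual check described in [11].
students = {"NFC-7": {"roll": 17, "face_template": [0.12, 0.80, 0.33]}}
attendance = set()

def match_face(captured, template, tol=0.1):
    # toy comparison: mean absolute difference of feature vectors
    return sum(abs(c - t) for c, t in zip(captured, template)) / len(template) < tol

def mark_attendance(nfc_uid, captured_features):
    record = students.get(nfc_uid)
    if record and match_face(captured_features, record["face_template"]):
        attendance.add(record["roll"])  # both factors passed
        return True
    return False                        # attendance not reported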
3. ANALYSIS

Several techniques are available, and table 1 below shows the strengths and weaknesses of each technique so that they can be compared with each other.

Table 1: Strengths and weaknesses of the techniques

Domain: Biometric - Fingerprinting
Advantages:
1. Fingerprinting is an excellent technique for checking backgrounds and is widely accepted in forensic labs.
2. Fingerprinting has a relatively low false rejection rate and a low false acceptance rate when used with a low incidence of outliers; sometimes gender is an issue in a large group.
3. Many vendors and solutions are available.
4. The fingerprinting technique has the ability to scan and verify a number of fingers at the same time.
Limitations:
1. Fingerprint recognition is not as exact as iris recognition: the fingerprint false accept rate varies by vendor and is roughly 1 in 100,000, whereas the iris false accept rate is statistically 1 in 1.2 million.
2. Eye recognition performs well in a high-speed environment with a single task, whereas fingerprint searches take more time and the false rate is higher.
3. The long association of fingerprints with criminals has made this biometric a troublesome ritual of authentication for some people.
4. Most systems require physical contact with a scanner device that needs to be kept clean.

Domain: Biometric - Eye/iris recognition
Advantages:
1. Highest accuracy: iris recognition produced no false matches over two million cross-comparisons, according to the Biometric Device Test Final Report (19 March 2001, Centre for Mathematics and Scientific Computing, National Physical Laboratory, U.K.).
2. Capacity to handle large populations at high speed.
3. Convenient: people only need to look into a camera for a few seconds; a video image is taken and stored, which is non-invasive and inherently secure.
4. The iris itself is steady throughout a person's life; the physical characteristics of the iris do not change with age.
Limitations:
1. As technology gets better, many "James Bond" gadgets such as duplicate eye lenses are available.
2. As the eye is the most sensitive part, scanning may damage the retina.
3. Transplantation of eyes is possible, which may lead to security issues.

Domain: Biometric - Facial recognition (e.g. with OpenCV)
Advantages:
1. Can implement a first-level scan in an extremely large, low-security situation.
2. Easy to deploy; typical CCTV appliances can be integrated with face recognition software.
3. Passive technology that does not require user cooperation and works at a distance.
4. May be adequate with high-quality images in an existing database.
Limitations:
1. Lighting, age, glasses, along with head/face cover, all impact false reject rates.
2. Even in surveillance applications, lower accuracy results in multiple candidate occurrences in large populations; as a result, secondary processing is required in surveillance operations.
3. Privacy concerns: people do not always know when their picture/image is being captured and searched against a database or, worse, being enrolled in a database.
4. Can be used without explicit opt-in permission.

Domain: NFC/RFID technology - Tags
Advantages:
1. Great convenience to the user, because the data exchange is completed by bringing two NFC-enabled devices together.
2. Versatility.
3. Reduces the cost of electronic issuance.
4. Secure communication.
5. No special software.
6. No search-and-pair procedure.
Limitations:
1. The system can be operated only with devices in short range, around 10 cm.
2. The data transfer rate is very low, about 106 kbps, 212 kbps to 424 kbps.
3. Can be costly for merchant companies to adopt initially.

4. CONCLUSION

With an attendance management system using Raspberry Pi and NFC, attendance is automated and monitored without any wastage of time, i.e. the time used for roll calls and pronouncing each student's name properly can be eliminated. It provides an implementation of new technology in schools, colleges, etc., where attendance is necessary, achieving the overall attendance of an individual student automatically without any calculation from attendance sheets. It is a highly secure authentication system because of the NFC tag and facial recognition: no illegal attendance can be given by other individuals because of the face authorization. Hence, this system should be implemented in every sector where attendance plays an important role. This is the latest and smartest technology in the world of automation.

5. REFERENCES

[1] Anurag K, "Near Field Communication", 2010.
[2] Mashhood Sajid, Rubab Hussain, Muhammad Usman, "A conceptual model for Automated Attendance Marking System using Facial Recognition".


[3] R. Edwards and M. David, "Children's understanding of parental involvement in education", Final report: ESRC Research Grant, Award No. L129251012, 1999.
[4] San Francisco Chronicle: "Students kept under supervision at school - some parents angry over radio device". Last access: 27.04.2009.
[5] Pritish Sachdeva and Shrutik Katchii, "A Review Paper on Raspberry Pi".
[6] Unnati A. Patel, "Student Management System based on RFID Technology".
[7] L. Arunkumar and A. Arun Raja, "Biometrics Authentication Using Raspberry Pi".
[8] Mohammad Umair Yaqub, Umair Ahmad Shaikh and Mohamed Mohandes, Near Field Communication Technology Division, King Fahd University of Petroleum & Minerals, 2012.
[9] Jimson Joseph, K. P. Zacharia, "Automated Attendance Management System using Face Recognition".
[10] G. Senthilkumar, K. Gopalakrishnan, V. Sathish Kumar, "Embedded image capturing system using Raspberry Pi System".
[11] Dhiraj R. Wani, Tushar J. Khubani, Prof. Naresh Thoutam, "NFC Based Attendance Monitoring System with Facial Authorization".


[12] Giuliano Benelli, Alessandro Pozzebon, "An Automated Payment System for Car Parks Based on Near Field Communication Technology".
[13] Karthik Vignesh E, Shanmuganathan S, A. Sumithra, S. Kishore and P. Karthikeyan, "A Foolproof Biometric Attendance Management System".

6. AUTHOR PROFILE

Nikhil P. Shegokar is studying B.E. at Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Maharashtra, India.

Kaustubh S. Jaipuria is studying B.E. at Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Maharashtra, India.

Prof. A. S. Manekar: Amitkumar S. Manekar is working as Assistant Professor in the IT Department, SSGMCE, Shegaon. His research area is Big Data analysis and High Performance Computing. He has guided many undergraduate and postgraduate students.


SURVEY OF BIOMETRIC AUTHENTICATION TECHNIQUES


ANKUSH DESHMUKH1, POONAM HAJARE2, RAJESHRI KACHOLE3, AMITKUMAR MANEKAR4
1,2,3,4
Department of Computer Science And Engineering SSGMCE Shegaon, Maharashtra, India.
deshmukhank@gmail.com1
hajarepv123@gmail.com2
rajeshrikachole2012@gmail.com3
asmanekar24@gmail.com4
ABSTRACT: Security is not a single-layer issue. Nowadays passwords, cards, or other keys alone are not sufficient to provide a high level of security, so we need to upgrade our security standards: we need something that cannot be stolen or copied. To resolve this problem another level of security comes into the picture, namely biometric security. Biometric security identifies an individual on the basis of distinctive biometric characteristics, which may be a fingerprint, face, DNA, or any other unique trait. Biometric identification has become an important part of today's security systems. Many countries are adopting biometry-based personal identification of their employees in various departments like the armed forces and national security departments to keep important national information secure; this increases the importance of biometrics in the field of security. Nowadays IT companies are also moving towards biometric security to avoid unauthorized access to their data and commercial software.
Keywords: Security, Biometric, Authentication, Fingerprint.
1. INTRODUCTION
The biometric identification and/or verification system works on the basis of template matching. At the time of registration a biometric system takes the biometric characteristics as input and generates a template from them; at the time of verification it takes the biometric characteristics as input, again generates the template, and matches it with the template generated at the time of registration. Depending on the result of template matching, the verification of an individual is done. The biometric characteristics may be physiological or behavioural: the shape of the body, such as the pattern of the fingerprint, face, palms, eyes and DNA, are considered physiological characteristics of humans, while behavioural characteristics are nothing but the way of typing, signature, voice, etc.
Today's technology uses behavioural and/or physiological biometric characteristics for identification and verification of users, such as fingerprint, DNA, face detection, identification on the basis of the iris of the user, signature identification (digital signature), voice-based identification, and ECG- and body-odour-based identification systems. These technologies are upgrading the standards of today's security systems in all directions [1]. The reason behind the fast adoption of biometric authentication systems in large numbers is the set of features a biometric system provides: universality, uniqueness, performance, measurability and acceptability. Universality means the system identifies each of its users uniquely, sufficiently distinguishing one person from the remaining population. Uniqueness is nothing but how uniquely the system identifies each user. Performance is related to speed of operation and robustness in operation. Measurability is nothing but the ease of measuring the traits. Acceptability means how eagerly users accept it [2].
2. WORKING OF BIOMETRIC AUTHENTICATION TECHNIQUES
All biometric authentication techniques use the same pattern, shown in figure (1). The whole biometric system is divided into three sub-modules: registration, verification and result. At the very first stage of the system, registration, a new user needs to be registered into the database: the system asks for the biometric characteristics of the user, applies a certain algorithm to the input features, converts them into a template, and stores the template in the database. In the second sub-module the verification is done: the user who wants to be identified provides his biometric characteristics, and the system applies exactly the same algorithm that was applied at the time of registration to generate a template from them. Templates never contain all the information; they contain only the minimum information required to identify the user uniquely. In the third and final sub-module, the authentication decision is made depending on the result of matching the previously stored template against the newly generated one [3].

Figure 1: Biometric Authentication Process [7]

A minimal sketch of this enroll/verify pattern is given below.
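The sketch below illustrates the generic registration/verification pattern of figure (1); extract_features and the distance threshold are hypothetical stand-ins for whatever modality-specific algorithm a real system would use.

# Minimal sketch: the same feature extraction runs at registration and at
# verification, and only a compact template is stored, never the raw sample.
templates = {}                          # user id -> stored template

def extract_features(raw_sample):
    # toy "algorithm": normalize the sample to a fixed-length vector
    total = sum(raw_sample) or 1.0
    return [v / total for v in raw_sample]

def register(user_id, raw_sample):
    templates[user_id] = extract_features(raw_sample)

def verify(user_id, raw_sample, threshold=0.05):
    stored = templates.get(user_id)
    if stored is None:
        return False
    fresh = extract_features(raw_sample)
    distance = sum(abs(a - b) for a, b in zip(stored, fresh)) / len(stored)
    return distance < threshold         # match decision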




3. OVERVIEW OF TODAY'S AUTOMATED BIOMETRIC AUTHENTICATION TECHNIQUES
There are various biometric characteristics (physiological and behavioural) that can be separately used to develop a biometric identification system, like fingerprints, face detection, iris-based identification, voice detection, DNA matching and many more. Each system has its own advantages and disadvantages, and each is supposed to efficiently meet all the requirements of the user. The selection of a system depends upon the needs of the application and the security level required [4].
3.1 FINGERPRINT BASED AUTHENTICATION
Fingerprint-based authentication is the most well known and widely used biometric identification and verification system. Due to the uniqueness and persistence of an individual's fingerprints, fingerprint-based authentication is a highly reliable, secure and long-lasting biometric authentication technology. This technique uses the patterns on an individual's fingers (as shown in figure (2)) to identify them uniquely. A pattern mainly consists of ridges and minutiae positions in the fingerprint image. The three crucial patterns of fingerprint ridges are the arch, loop and whorl. In an arch, the ridges start from one side, rise in the middle forming an arch, and exit on the other side of the finger. In a loop pattern, the ridges start from one side of the finger, form a curve, and exit on the same side. In a whorl, the ridges form circles around a central point of the finger.

Figure 2: Fingerprint Patterns


To calculate minutiae points from a fingerprint image, the two most important characteristics of ridges are the ridge ending and the ridge bifurcation. A ridge ending is the end or start of a ridge, and a ridge bifurcation is a point where a ridge divides into branches. These points, where a ridge finishes or splits, are precisely the minutiae points, and verification between the saved image and the currently scanned fingerprint image of an individual is done on the basis of the arrangement of the minutiae points, as sketched below.
a) FACE DETECTION BASED AUTHENTICATION
Face detection is one of the biometric identification techniques which uses facial features such as the position and shape of the eyes, nose, mouth, etc. Figure (3) shows the landmarks on a human face. Facial identification is one of the simplest and easiest techniques for biometric identification. Generally a face detection algorithm uses one of two main strategies for identification and verification of an individual. One is the geometric strategy, which works on the geometry, i.e. the position and shape of the chin, lips, eyes and other parts of the face, and the interrelationships between them. The second strategy is the photometric algorithm, which converts the image into pixel values and then generates the template from those values; this algorithm is statistical in nature.

Figure 3: Landmarks Used in Face Detection Method

b) HAND GEOMETRY BASED IDENTIFICATION
Hand-based identification is another biometric identification method, which identifies an individual on the basis of the geometric features of the hand, like the height and width of the fingers, the diameter of the palm and the total perimeter of the hand. A pictorial view of this system is shown in figure (4). This technique is simple and very easy to use, but it does not provide a very high level of security: changing weather and skin problems can affect the accuracy of hand-geometry-based identification, and the accuracy of the system can also be affected by changes in the shape of the hand and other physiological changes in the body. This technology is mainly accepted in low-level security applications, since the geometry of the hand alone is unable to identify a mass of users uniquely, and the system therefore has many drawbacks [6].


Figure 4: Hand geometry identification system

c) IRIS BASED IDENTIFICATION

During the prenatal development of a human child, a thin circular structure starts to develop around the pupil of the eye. The purpose of this thin circular network, the iris, is to control the thickness and the dimensions of the central part of the eye, i.e. the pupil, which indirectly controls the amount of light reaching the retina. The development of the iris starts during prenatal development and ends at about the age of 2 years, and the structure of the iris never changes during the whole life of an individual. It is nearly impossible to change the iris by surgery, and it is very easy to identify a duplicate iris. For these reasons, iris identification is one of the best biometric identification systems.

Figure 6: Structure of Human Eye [7]

The human eye contains the parts shown in the figure above. An iris scanning algorithm needs to eliminate the reflections from various parts, such as the eyelids and eyelashes, along with other fake reflections, and identify only the pixels of the iris. After identifying the iris pixels, the algorithm creates the bit pattern for template generation, and this template is then stored in the database and used at the time of verification.

d) RETINA BASED IDENTIFICATION
The retina is the posterior part of the human eye. The complex structure of the eye's retina is able to distinguish each individual from every other: the retina consists of a vast and complex network of capillaries that makes one retina different from another. Figure (5) shows the complex structure of the retina. Capillaries are like pipes that supply blood to the eye, and each individual has a unique capillary structure; retina-based identification uses this network to identify an individual. The technique is very complex and is used where very high security is required. The structure of the retina remains unchanged from birth to death; the only exceptions are diseases like diabetes, glaucoma and other retinal degenerative disorders, which can make some changes in its structure. The capillary network of the retina is so unique that even twins have different structures.

Figure 5: Structure of Eye Retina [7]

To capture the capillary network of the eye shown in figure (5), the user needs to peep into an eyepiece and concentrate on a particular point for a specific period of time. The image captured in this process cannot be directly used for template generation: it must go through various image enhancement algorithms. Due to this kind of complex mechanism, this technology is not widely adopted in today's biometric security systems.

e) VOICE BASED IDENTIFICATION
Voice is also one of the biometric characteristics of human beings. A voice-based identification system uses various characteristics of the voice, called voice biometrics. Two voices are differentiated on the basis of the auditory characteristics of the two voices. These auditory characteristics consist of two parts: the first is anatomy and the second is the learned behavioural pattern. The anatomical characteristics deal with the shape and size of the throat and mouth, while the learned behavioural patterns deal with voice pitch and speaking style.
The physiological part of the human voice remains constant, but the behavioural patterns of speech change with environmental conditions, the emotional condition of the speaker, the age of the speaker and many other medical conditions. For these reasons, voice-based identification is not as reliable as the other biometric identification systems present today: the voice alone is not able to identify a person uniquely in a mass of people. One of the main disadvantages of voice-based identification is background noise [4].
f) DNA BASED IDENTIFICATION
Deoxyribonucleic acid (DNA) identification and/or verification provides the highest level of biometric security. The possibility of DNA duplication is about 1 person in 6 billion, which puts DNA identification at the highest level. DNA encodes the basic instructions used in the growth of all living organisms as well as viruses. It is an almost distinct biometric identifier for human beings, except for monozygotic twins; monozygotic twins are born when a single fertilized egg creates one zygote which then divides into two separate embryos [9]. Another advantage of DNA is that it never changes throughout the life of a human being. Figure (7) shows the structure of DNA.
Current DNA collection methods need blood or tissue from some part of the body, such as DNA in blood, semen, skin, saliva or hair. In DNA profiling, the lengths of variable sections of repetitive DNA, such as short tandem repeats and minisatellites, are compared between people. DNA-based identification is not adopted on a large scale for various reasons: DNA matching cannot be performed at run time, DNA samples stored in a lab need extra security, and it is too costly.

g) SIGNATURE BASED IDENTIFICATION
Signature-based identification is a widely used identification system today. This technique is accepted in various departments like government departments and in commercial transactions such as banking and many others. The system uses the way in which a person signs his/her name for identification; today's signature recognition systems are also able to measure the pressure and velocity of the point of the stylus. A signature recognition system can work in two ways, static and dynamic. In the static mode, the user signs on paper and the signature is digitalized using a camera or scanner, and the biometric system analyzes its shape and size; in the dynamic mode, the user signs directly on a digital writing pad and the result is given at run time.
h) KEYSTROKE BASED IDENTIFICATION
An identification system which uses the way of typing, i.e. the speed of typing, the pressure on the keys and the rhythm of typing, is known as keystroke-based identification. That much information alone is not sufficient to identify an individual among the mass public, but this technique offers enough discriminating information to permit verification. The technology also uses the pressure on the keys and the timing information of key up/hold/down events.
i) PALM PRINT BASED IDENTIFICATION
Similar to the pattern on a fingerprint, the palm of a human also contains a unique pattern which can be used for identification and verification purposes. The area of the palm is larger than the area of a fingerprint, so the capacity of the palm for identification is also greater; since the area is larger, a palm cannot be scanned using a fingerprint scanner and needs a special palm scanner device. The human palm contains features which are unique to the individual: it contains principal lines and wrinkles which are helpful in identifying a person [11]. Using a high-resolution scanner, the geometric features of the human palm, like its height, width and length, can be calculated. The palm also contains ridges and valleys from which minutiae points can be calculated. Using all this information together, a system with a high level of security can be developed [4].


COMPARISON OF VARIOUS BIOMETRIC AUTHENTICATION SYSTEMS (H: HIGH, M: MEDIUM, L: LOW) [4]

[Table: fingerprint, palm print, hand geometry, iris, retina, face, voice, signature, keystroke and DNA, each rated H/M/L on universality, uniqueness, permanence, measurability, performance, acceptability and circumvention; the individual ratings are not recoverable from the source.]

4. CONCLUSION
In today's digital world the acceptance of biometric identification systems is increasing more and more. All the features provided by a biometric system, such as uniqueness, persistence and universality, make it more powerful, and biometric security provides a higher level of security than systems based on passwords, cards or other keys. The main objective of this paper is to provide an abstract overview of the biometric identification and/or verification systems currently used in today's society. This paper also concludes that the security level required by today's society can be fulfilled by the various biometric security systems.
5. REFERENCES
[1] James Wayman, Anil Jain, Davide Maltoni and Dario Maio, "An Introduction to Biometric Authentication Systems", in Biometrics: Technology, Design and Performance Evaluation, Springer Publications. ISBN 978-0-7923-8345-1.
[2] Jain, A. K.; Bolle, R.; Pankanti, S., eds. (1999). Biometrics: Personal Identification in Networked Society. Kluwer Academic Publications. ISBN 978-0-7923-8345-1.
[3] Qinghan Xiao, "Biometrics - Technology, Application, Challenge and Computational Intelligence Solutions", IEEE Computational Intelligence Magazine, May 2007.
[4] A. K. Jain, A. Ross, S. Prabhakar, "An Introduction to Biometric Recognition", IEEE Trans. on Circuits and Systems for Video Technology, Vol. 14, No. 1, pp. 4-19, January 2004.
[5] www.extremetech.com
[6] Kresimir Delac, Mislav Grgic, "A survey of biometric recognition methods", 46th International Symposium Electronics in Marine, ELMAR-2004, 16-18 June 2004, Zadar, Croatia.
[7] The human eye and adaptive optics, http://www.intechopen.com
[8] mehrchilakalapudi.wordpress.com
[9] http://en.wikipedia.org
[10] www.teacherweb.com


[11] D. Zhang and W. Shu, "Two novel characteristics in palm print verification: datum point invariance and line feature matching", Pattern Recognition, vol. 32, no. 4, pp. 691-702, 1999.
[12] Hand geometry, Eter Biometric Technology, http://www.eter.it

6. AUTHOR PROFILE

Ankush S. Deshmukh is pursuing the 3rd year B.E. at SSGMCE, Shegaon.

Poonam V. Hajare is pursuing the 3rd year B.E. at SSGMCE, Shegaon.

Rajeshri V. Kachole is pursuing the 3rd year B.E. at SSGMCE, Shegaon.

Prof. A. S. Manekar: Amitkumar S. Manekar is working as Assistant Professor in the IT Department, SSGMCE, Shegaon. His research area is Big Data analysis and High Performance Computing. He has guided many undergraduate and postgraduate students.


A COMPREHENSIVE SURVEY ON LOAD BALANCING ALGORITHMS IN IAAS


A GAWANDE1, S JAIN2 AND K RAUT3
1,2,3 SSGMCE, Dept. of Information Technology, Shegaon (MH)
gawandeakshay3@gmail.com, gawandeakshay3@gmail.com, kiran.raut10@gmail.com

ABSTRACT: Cloud computing is a recent advancement in which IT infrastructure and applications are provided as services to end-users. Cloud computing is the delivery of computing services over the internet. It is always required to share the workload among the various nodes of a distributed system to improve resource utilization and system performance. There may be load on the VMs, and for balancing the load we have two levers: allocation of resources and scheduling of tasks. In this paper we survey load balancing and task scheduling algorithms for effective utilization of the system and proper servicing of client requests. For creating the cloud environment we use the CloudSim simulator, and the makespan parameter is used for comparing the results of the different algorithms.
Keywords: Cloud computing, task scheduling, load balancing, CloudSim, IaaS
1. INTRODUCTION
Cloud computing is defined as a type of computing that depends on sharing computing resources rather than having local servers or personal devices handle applications. As the 90s came to an end, cloud computing progressed quite rapidly, and its evolution can be explained in three phases:
a. Mainframe computing: in a mainframe computing system, a single system stored the bulk of the data and every other user accessed the data from that same single system. This computing was inefficient and expensive.
b. Distributed system: in a distributed system the data was stored in various systems with enough memory, and users could access the data, placed at different locations, from the different systems as per their requirements.
c. Cloud computing: the cloud is a shared network of computers through which people and companies store data and run software. Cloud computing is comparable to distributed computing (it is a collection of computer resources), and it is a totally internet-based approach where all the applications and files are hosted on a cloud consisting of various computers interconnected with each other in a complex manner. Cloud computing elaborates the concepts of parallel as well as distributed computing to provide shared resources, hardware, software and information to computers or other devices on demand.
These systems follow a "pay as you use" model: the customers are no longer buying the software or the computational platform. With an internet facility, the customer can use the computation power or software resources by paying only for the duration for which the resource has been used. The customer is interested in reducing the overall execution time of tasks on the machines.
The processing units in cloud environments are called virtual machines (VMs). The business view is that a virtual machine should execute its tasks as early as possible, and these VMs run in parallel. This leads to problems in scheduling the customer tasks within the available resources. The scheduler does the scheduling process efficiently and makes full use of the available resources. We can assign more than one task to one or more virtual machines that run the tasks simultaneously; in this kind of environment it must be made sure that the load is balanced across all VMs, i.e., that tasks are not loaded heavily on one VM while other VMs remain idle and/or underloaded. Cloud computing delivers infrastructure, platform, and software as services, which are made available as subscription-based services in a pay-as-you-go model to consumers. These services are referred to as:
a. Infrastructure as a Service (IaaS)
b. Platform as a Service (PaaS)
c. Software as a Service (SaaS)
IaaS refers to a combination of hosting, hardware provisioning and basic services needed to run a cloud. PaaS refers to the provision of a computing platform and the


provision and deployment of the associated set of software applications to an enterprise by a cloud provider. Software as a Service (SaaS) is a software distribution model in which applications are hosted by a service provider and made available to customers over a network.
In an IaaS model, a third-party provider hosts hardware, software, servers, storage and other infrastructure components on behalf of its users. IaaS providers also host users' applications and handle tasks including system maintenance, backup and flexibility planning. IaaS provides highly scalable resources that can be adjusted on demand, which makes it well suited for workloads that are temporary, experimental or change unexpectedly. It is not possible to perform benchmarking experiments in a repeatable, dependable and scalable environment using real-world cloud environments; the alternative is to use simulation tools. Some of the stakeholders of cloud computing are:
a. End users: the end users are the customers who demand services in cloud computing.
b. Cloud provider: cloud providers are responsible for building the cloud; they can offer users a public, private or hybrid cloud.
c. Cloud developer: this entity lies between the provider and the end user, and it has the responsibility of taking both into consideration; the cloud developer should have enough technical knowledge of the cloud.
Here in this paper the CloudSim simulator is used for implementing the load balancing algorithms.

2. INTRODUCTION TO CLOUDSIM
CloudSim's goal is to provide a generalized and extensible simulation framework that enables modeling, simulation and experimentation of emerging cloud computing infrastructures and application services, allowing its users to focus on the specific system design issues that they want to investigate, without getting concerned about the low-level details of cloud-based infrastructures and services. It supports user-defined policies for the allocation of hosts to virtual machines and policies for the allocation of host resources to virtual machines.

2.1 Architecture of CloudSim
CloudSim is implemented using existing simulation libraries and frameworks such as GridSim and SimJava in order to handle the low-level requirements of the system, as shown in Figure 1. In particular, the components for handling events and message passing in SimJava can be reused in CloudSim; similarly, using GridSim simplifies the implementation of networking, information services, files, users, etc. We now discuss the main components of a cloud computing infrastructure which are implemented in the simulator CloudSim; the figure shows the complete components of CloudSim.
a. Data center: in CloudSim, the data center is used to model the core services at the system level of a cloud infrastructure. It consists of a set of hosts which manage a set of virtual machines whose tasks are to handle "low level" processing, and at least one data center must be created to start the simulation.
b. Host: this component is used to assign processing capabilities (specified in the millions of instructions per second that the processor can perform), memory, and a scheduling policy that allocates processing cores to the multiple virtual machines in the list of virtual machines managed by the host.
c. Virtual machine: this component manages the allocation of different virtual machines to different hosts, so that processing cores can be scheduled (by the host) to virtual machines. The configuration depends on the particular application, and the default policy for the allocation of virtual machines is "first-come, first-serve".
d. Datacenter broker: the responsibility of a broker is to mediate between users and service providers, depending on the quality-of-service requirements that the user specifies. In other words, the broker identifies which service provider is suitable for the user based on the information it has from the Cloud Information Service, and negotiates with the providers about the resources that meet the requirements of the user. The user of CloudSim needs to extend this class in order to specify requirements in their experiments.
e. Cloudlet: this component represents the application service, whose complexity is modeled in CloudSim in terms of its computational requirements.
f. Cloud Coordinator: this component manages the communication between other Cloud Coordinator services and brokers, and also monitors the internal state of a data center periodically in terms of the simulation time.
All the above are the entities related to CloudSim which help us implement the load balancing algorithms; a toy sketch of these entity relationships is given below. Load balancing must take into account two major tasks: one is resource allocation and the other is task scheduling in a distributed environment.
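As a conceptual illustration only, and in Python rather than CloudSim's actual Java API, the following toy sketch mirrors the entity relationships just described; all class and method names here are hypothetical, not part of CloudSim.

# Toy sketch of the CloudSim-style hierarchy: a Datacenter aggregates Hosts,
# Hosts aggregate VMs, and Cloudlets are the tasks eventually run on VMs.
class Vm:
    def __init__(self, vm_id, mips):
        self.vm_id, self.mips, self.cloudlets = vm_id, mips, []

class Host:
    def __init__(self, host_id, mips):
        self.host_id, self.mips, self.vms = host_id, mips, []

    def can_place(self, vm):
        used = sum(v.mips for v in self.vms)
        return used + vm.mips <= self.mips   # simple capacity check

class Datacenter:
    def __init__(self, hosts):
        self.hosts = hosts

    def place_vm(self, vm):
        # "first-come, first-serve" style placement, echoing the default policy
        for host in self.hosts:
            if host.can_place(vm):
                host.vms.append(vm)
                return host
        return None

class Cloudlet:
    def __init__(self, cid, length):
        self.cid, self.length = cid, length  # length in million instructions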
3. LOAD BALANCING IN CLOUD COMPUTING

Load balancing provides a solution to a number of issues present in the cloud computing environment. Load balancing must take into consideration two major tasks: resource allocation and task scheduling in a distributed environment. Efficient allocation and scheduling of resources will ensure the following major points:


a. Resources must be easily available on demand.
b. Resources must be utilized efficiently under both high and low load conditions.
c. Energy must be saved in case of low load (i.e. when usage of cloud resources is below a certain threshold).
d. The cost of using resources must be reduced.
For measuring the efficiency and effectiveness of load balancing algorithms, a simulation environment is required. CloudSim [12] is the most efficient tool that can be used for modeling the cloud: during the lifecycle of a cloud, CloudSim allows VMs to be managed by hosts, which in turn are managed by datacenters. Nowadays cloud computing technology is growing rapidly and users are continuously demanding more data, so load balancing is used to improve the overall performance of the cloud system. In this paper we investigate the algorithms used to solve the issues of load balancing and task scheduling in cloud computing [1].

load of overloaded matching to the under loaded machine


and thus access the data.
CloudSim provides an architecture with four basic entities. These entities allow the user to set up a basic cloud computing environment and measure the effectiveness of load balancing algorithms. A typical cloud modeled using CloudSim consists of the following four entities: datacenters, hosts, virtual machines, and application as well as system software. The Datacenter entity has the responsibility of providing infrastructure-level services to cloud users. It acts as a home to several host entities; that is, several instances of the host entity aggregate to form a single datacenter entity. Hosts in the cloud are physical servers that have pre-configured processing capabilities. The host is responsible for providing software-level services to cloud users. Hosts have their own storage and memory, and their processing capability is expressed in MIPS (million instructions per second). They act as a home to virtual machines; several instances of the virtual machine entity aggregate to form a host entity. A virtual machine allows development as well as deployment of custom application service models. VMs are mapped to a host that matches their critical characteristics such as storage, processing, memory, software, and availability requirements. Thus, similar instances of a virtual machine are mapped to the same instance of a host based upon its availability. Application and system software are executed on a virtual machine on demand.
3.1 Resource Allocation
Resource provisioning is the task of mapping resources to different entities of the cloud on an on-demand basis [11]. Resources must be allocated in such a manner that no node in the cloud is overloaded and none of the available resources in the cloud undergo any kind of wastage (wastage of bandwidth, processing cores, memory, etc.). Mapping of resources to cloud entities is done at two levels, described in Sections 3.2 and 3.3:


3.2 VM Mapping onto the Host
Virtual machines reside on hosts (physical servers). More than one instance of a VM can be mapped onto a single host, subject to its availability and capabilities. The host is responsible for assigning processing cores to VMs. The provisioning policy defines the basis for allocating processing cores to VMs on demand. The allocation policy or algorithm must ensure that the critical characteristics of the host and VM do not mismatch.
3.3 Application or Task Mapping onto VM
Applications or tasks are actually executed on VMs. Each application requires a certain amount of processing power for its completion. The VM must provide the required processing power to the tasks mapped onto it. Tasks must be mapped onto an appropriate VM based upon its configuration and availability.
3.4 Task Scheduling
Task scheduling is done after the resources are allocated to all cloud entities. Scheduling defines the manner in which different entities are provisioned. Resource provisioning defines which resource will be available to meet user requirements, whereas task scheduling defines the manner in which the allocated resource is available to the end user (i.e., whether the resource is fully available until task completion or is available on a sharing basis). Task scheduling provides multiprogramming capabilities in the cloud computing environment [11].
Task scheduling can be done in two modes:
a. Space shared
b. Time shared
Both hosts and VMs can be provisioned to users in either space-shared or time-shared mode. In space-shared mode, resources are allocated until the task completes its execution (i.e., resources are not preempted), whereas in time-shared mode resources are continuously preempted until the task completes.
Based on resource provisioning and scheduling, four cases can be examined under different performance criteria so as to obtain an efficient load balancing scheme; a CloudSim sketch of these pairings is given after the list.
Case 1: Hosts and VMs are both provisioned in a space-shared manner.
Case 2: Hosts and VMs are both provisioned, to VMs and tasks respectively, in a time-shared manner.
Case 3: Hosts are provisioned to VMs in a space-shared manner and VMs are provisioned to tasks in a time-shared manner.
Case 4: Hosts are provisioned to VMs in a time-shared manner and VMs are provisioned to tasks in a space-shared manner.
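In CloudSim terms, these four cases map onto the scheduler classes chosen when constructing hosts and VMs. The fragment below is a small illustrative sketch, assuming the CloudSim 3.x classes VmSchedulerSpaceShared/VmSchedulerTimeShared and CloudletSchedulerSpaceShared/CloudletSchedulerTimeShared; it is not code from this paper, and it extends the scenario shown earlier.

// Case 3 as an example: the host provisions VMs space-shared, while each
// VM provisions its cloudlets (tasks) time-shared. Swapping the scheduler
// classes yields the other three cases.
List<Pe> peList = new ArrayList<Pe>();
peList.add(new Pe(0, new PeProvisionerSimple(1000)));

Host host = new Host(0, new RamProvisionerSimple(2048),
        new BwProvisionerSimple(10000), 1000000, peList,
        new VmSchedulerSpaceShared(peList));       // host-to-VM policy (space shared)

int brokerId = 0;  // stands in for DatacenterBroker.getId()
Vm vm = new Vm(0, brokerId, 1000, 1, 512, 1000, 10000,
        "Xen", new CloudletSchedulerTimeShared()); // VM-to-task policy (time shared)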
4. LOAD BALANCING ALGORITHMS
In this section some load balancing algorithms are discussed, where load balancing is treated as an NP-complete problem.
4.1 Genetic Algorithm
A genetic algorithm is a search technique used in computing to find true or approximate solutions to optimization and search problems. Genetic algorithms are categorized as global search heuristics. They are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover (also called recombination).
The genetic algorithm was the primary organic technique that supported the principle of survival of the fittest and Mendel's laws of inheritance. The genetic algorithm has various benefits over alternative techniques for computationally intensive issues, if supplied with properly set operators and fitness functions. A genetic algorithm defines a collection of solutions (chromosomes) that are jointly referred to as a population. The method then performs crossover, mutation, and selection operations iteratively until the stopping criterion is satisfied [1]. The resultant set is the set of solutions.
Its three basic operations are:
Step 1: Selection. We randomly choose a set of individuals based on their fitness.
Step 2: Crossover
Step 3: Mutation
Algorithm for the Genetic Algorithm:
Step 1: Create a population of a fixed range of random chromosomes.
Step 2: Calculate the fitness values for all chromosomes.
Step 3: Choose the 2 chromosomes having the best fitness as parents (selection step).
Step 4: Perform crossover among the parents to produce offspring using the crossover ratio (crossover step).
Step 5: Perform mutation, if required, at each position using the mutation ratio (mutation step).
Step 6: Add the offspring chromosomes to the initial population.
Step 7: If the termination condition is satisfied, stop and return the best chromosome as the answer; otherwise repeat from Step 2 (a compact sketch of these steps follows below).
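The sketch below renders these steps in Java for the load balancing setting. The encoding is our assumption, not the paper's: a chromosome is an int[] giving the VM index for each task, fitness is the inverse of the resulting makespan, and it assumes at least two tasks and two VMs.

import java.util.*;

/** Compact sketch of the GA steps above for task-to-VM mapping.
 *  Chromosome: int[] with the VM index chosen for each task. */
public class GaScheduler {
    static final Random RNG = new Random();

    // Makespan of a mapping: the largest per-VM finish time.
    static double makespan(int[] chrom, double[] taskLen, double[] vmMips) {
        double[] load = new double[vmMips.length];
        for (int t = 0; t < chrom.length; t++)
            load[chrom[t]] += taskLen[t] / vmMips[chrom[t]];
        return Arrays.stream(load).max().orElse(0);
    }

    public static int[] evolve(double[] taskLen, double[] vmMips,
                               int popSize, int generations) {
        int n = taskLen.length, m = vmMips.length;
        // Step 1: random initial population.
        List<int[]> pop = new ArrayList<int[]>();
        for (int i = 0; i < popSize; i++) {
            int[] c = new int[n];
            for (int t = 0; t < n; t++) c[t] = RNG.nextInt(m);
            pop.add(c);
        }
        Comparator<int[]> byMakespan =
                Comparator.comparingDouble(c -> makespan(c, taskLen, vmMips));
        for (int g = 0; g < generations; g++) {
            pop.sort(byMakespan);                      // Steps 2-3: fitness + selection
            int[] p1 = pop.get(0), p2 = pop.get(1);    // two fittest parents
            int cut = 1 + RNG.nextInt(n - 1);          // Step 4: one-point crossover
            int[] child = new int[n];
            for (int t = 0; t < n; t++) child[t] = (t < cut) ? p1[t] : p2[t];
            if (RNG.nextDouble() < 0.1)                // Step 5: mutation
                child[RNG.nextInt(n)] = RNG.nextInt(m);
            pop.set(pop.size() - 1, child);            // Step 6: replace the worst
        }
        pop.sort(byMakespan);                          // Step 7: return the best
        return pop.get(0);
    }
}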
Flow Chart for Genetic Algorithm:



[Flow chart for the Genetic Algorithm: initialization of population; evaluation of fitness values; selection of parents; crossover and mutation operators; offspring; update generation; repeat until termination.]

4.3 Improved Max-Min Algorithm
The Max-Min algorithm allocates tasks to resources such that large tasks have higher priority than smaller tasks. For instance, if we have one long task, Max-Min may execute several short tasks concurrently while executing the large one. The total makespan in this case is determined by the execution of the long task. However, if the meta-task contains tasks with comparatively different completion and execution times, the makespan is not determined by any single submitted task; it may then be similar to the Min-Min makespan.
1. For all submitted tasks Ti in the meta-task MT
2. For all resources Rj
3. CTij = rtj + ETij
4. While the meta-task MT is not empty
5. Find the task Tk that consumes the maximum completion time
6. Assign Tk to the resource Rj which gives the minimum execution time
7. Remove Tk from the meta-task set
8. Update rtj for the selected Rj
9. Update CTij for all j
Figure 2: Max-Min Algorithm

In the Max-Min algorithm, shown in Figure 2, rtj represents the ready time of resource Rj to execute a task, whereas CTij and ETij represent the expected completion time and execution time respectively [1]. As shown, the task Tk with the maximum expected completion time is chosen and assigned to the corresponding resource Rj that offers the minimum execution time.
1. For all submitted tasks Ti in the meta-task MT
2. For all resources Rj
3. CTij = rtj + ETij
4. While the meta-task MT is not empty
5. Find the task Tk that consumes the maximum completion time
6. Assign Tk to the resource Rj which gives the minimum execution time
7. Remove Tk from the meta-task set
Where:
MT = the set of tasks, called a meta-task
Rj = the resources
CTij = completion time of task Ti on resource Rj
ETij = expected execution time of task Ti on resource Rj
Figure 3: Improved Max-Min Algorithm
A Java sketch of this loop follows.
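The sketch below follows the pseudocode's wording (pick the task whose minimum completion time is largest, then place it on the resource with the minimum execution time for that task). The array names et (execution-time matrix, task by resource) and rt (per-resource ready time) are ours, matching the legend above.

/** Max-Min sketch per the pseudocode above.
 *  et[i][j] = execution time of task i on resource j; rt[j] = ready time. */
static int[] maxMin(double[][] et, double[] rt) {
    int n = et.length, m = rt.length;
    int[] assignment = new int[n];
    boolean[] done = new boolean[n];
    for (int scheduled = 0; scheduled < n; scheduled++) {
        int pick = -1;
        double pickCt = -1;
        for (int i = 0; i < n; i++) {
            if (done[i]) continue;
            double ctMin = Double.MAX_VALUE;           // CTij = rtj + ETij
            for (int j = 0; j < m; j++)
                ctMin = Math.min(ctMin, rt[j] + et[i][j]);
            if (ctMin > pickCt) { pickCt = ctMin; pick = i; } // max completion time
        }
        int res = 0;                                   // resource with min execution time
        for (int j = 1; j < m; j++)
            if (et[pick][j] < et[pick][res]) res = j;
        assignment[pick] = res;
        rt[res] += et[pick][res];                      // update the ready time
        done[pick] = true;
    }
    return assignment;
}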


4.4 Min-Min Algorithm
The Min-Min algorithm is simple and is still the basis of present cloud scheduling algorithms [14]. It starts with a set S of all unmapped tasks. Then the resource R which has the minimum completion time for all tasks is found. Next, the task T with the minimum size is selected and assigned to the corresponding resource R (hence the name Min-Min). Last, the task T is removed from set S, and the same procedure is repeated by Min-Min until all tasks are assigned (i.e., set S is empty). The pseudocode of the Min-Min algorithm is given below, assuming we have a set of n tasks (T1, T2, T3, ..., Tn) that need to be scheduled onto m available resources (R1, R2, R3, ..., Rm). We denote the expected completion time of task i (1 <= i <= n) on resource j (1 <= j <= m) as CTij, calculated as in (1):
CTij = rtj + ETij    (1)
where rtj represents the ready time of resource Rj and ETij represents the execution time of task Ti on resource Rj.
1. For all submitted tasks Ti in the set
2. For all resources Rj
3. CTij = rtj + ETij; End For; End For
4. Do while the task set is not empty
5. Find the task Tk that costs the minimum execution time
6. Assign Tk to the resource Rj which gives the minimum expected completion time
7. Remove Tk from the task set
8. Update the ready time rtj for the selected Rj
9. Update CTij for all j
10. End Do
A small worked example follows.
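As a worked illustration (our numbers, not the paper's): take two resources with ready times rt1 = rt2 = 0 and three tasks whose execution times on (R1, R2) are T1: (4, 6), T2: (2, 3), T3: (8, 5). The minimum completion times are 4, 2, and 5 respectively, so Min-Min assigns T2 to R1 first (rt1 becomes 2), then T3 to R2 (rt2 becomes 5), and finally T1 to R1 (rt1 becomes 6), giving a makespan of 6. Max-Min would instead start with T3, the task with the largest of these minimum completion times.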
4.5 Bee Colony Optimization
The BCO algorithm is based on the activities of bees while searching for nectar and sharing that knowledge with other bees. There are three types of agents: the employed bee, the onlooker bee, and the scout. The employed bee stays on a food source and keeps its surroundings in memory; the onlooker acquires this knowledge from the employed bees and selects one of the food sources to forage; and the scout performs the task of finding new nectar sources.
The procedure for BCO is as follows:


Step 1. Initialization: Randomly distribute the population over the solution space and evaluate the fitness values, with the ratio ne representing the share of employed bees in the total population.
Step 2. Movement of Onlookers: Calculate the probability of selecting a food source by the equation
Pi = F(thetai) / (sum over k = 1..N of F(thetak))    (1)
where F(thetai) is the fitness value of the solution i evaluated by the employed bee, N represents the number of employed bees, and Pi is the probability of selecting the i-th employed bee. A target food source is then chosen for the onlookers to move to, and the nectar amounts are determined, by equation (2).
Step 3. Movement of Scouts: If the fitness values are not improved by successive iterations, those food sources are abandoned, and the corresponding employed bees become scouts and are moved by the scout-movement equation, where r is a random number and r is in [1, 2].
Step 4. Updating of the Best Food Source: Memorize the best fitness value and its position.
Step 5. Termination Checking: Check the termination condition; if satisfied, terminate the program and output the results; otherwise repeat from Step 2. A sketch of the onlooker selection of Step 2 is given below.
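Step 2 amounts to roulette-wheel selection over the employed bees' fitness values. A minimal sketch of that selection in Java, assuming a fitness array supplied by the caller, is:

import java.util.Random;

/** Roulette-wheel selection as used by onlooker bees: source i is chosen
 *  with probability Pi = F(i) / sum(F), per equation (1). */
static int selectFoodSource(double[] fitness, Random rng) {
    double total = 0;
    for (double f : fitness) total += f;       // denominator of equation (1)
    double r = rng.nextDouble() * total;       // spin the wheel
    for (int i = 0; i < fitness.length; i++) {
        r -= fitness[i];
        if (r <= 0) return i;                  // landed on source i
    }
    return fitness.length - 1;                 // numeric-edge fallback
}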
4.6 Ant Colony Optimization
Individual ants are behaviorally rather unsophisticated insects. They have a very limited memory and exhibit individual behavior that appears to have a large random component. Acting as a collective, however, ants manage to perform a variety of complicated tasks with great reliability and consistency.
ACO is an algorithm based on the behavior of real ants in finding the shortest path from a source to food. It is inspired by ant colonies that work together in foraging behavior. The ants work together in search of new sources of food and simultaneously use the existing food sources to move food back to the nest. The ants leave a pheromone trail when moving from one node to another.


Figure 4: Steps of Bee Colony Optimization
[Flow chart for Ant Colony Optimization: initialize ants; find solutions; evaluate solutions; update pheromone; probabilistically find new solutions based on pheromone values; repeat until the termination criteria are met; then stop.]

With the help of pheromone trails, ants subsequently find their way to the food sources. The intensity of the pheromone [10] can vary with various factors such as the quality of the food sources, the distance of the food, etc. The ants use these pheromone trails to select the next node, and they can even modify their paths upon encountering obstacles. This phenomenon has been used in many optimization algorithms in which the ants follow each other through a network of pheromone paths. Upon traversal from one node to another, an ant updates the pheromone trail of that path, so a path becomes more attractive the more ants traverse it. Paths with the highest pheromone intensity have the shortest distance between the starting point and the best food source.
In the ACO algorithm [10], when a request is initiated the ant starts its movement. The movement of an ant happens in two ways:
Forward Movement [10]: the ant moves continuously from one node to another and checks whether each node is overloaded or under-loaded; if the ant finds an overloaded node, it keeps moving in the forward direction and checks each node.
Backward Movement: if an ant finds an overloaded node, it uses backward movement to return to the previous node; if the ant finds the target node, it commits suicide (terminates). This algorithm reduces unnecessary backward movement, overcomes heterogeneity, and is excellent in fault tolerance [10].
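The pheromone mechanics described above can be sketched as follows. This is a generic illustration of pheromone-biased node choice with evaporation-plus-deposit updating, under our own parameter names (alpha, rho); it is not the specific algorithm of any cited paper.

import java.util.Random;

/** Generic pheromone trail: the next node is picked with probability
 *  proportional to tau[node]^alpha, and trails are updated by
 *  evaporation plus a deposit on the visited node. */
class PheromoneTrail {
    final double[] tau;          // pheromone per candidate node
    final double alpha = 1.0;    // pheromone influence exponent
    final double rho = 0.1;      // evaporation rate
    final Random rng = new Random();

    PheromoneTrail(int nodes) {
        tau = new double[nodes];
        java.util.Arrays.fill(tau, 1.0);   // uniform initial trail
    }

    int nextNode() {
        double total = 0;
        for (double t : tau) total += Math.pow(t, alpha);
        double r = rng.nextDouble() * total;
        for (int i = 0; i < tau.length; i++) {
            r -= Math.pow(tau[i], alpha);
            if (r <= 0) return i;          // pheromone-biased choice
        }
        return tau.length - 1;
    }

    void update(int visited, double deposit) {
        for (int i = 0; i < tau.length; i++) tau[i] *= (1 - rho); // evaporate
        tau[visited] += deposit;                                  // reinforce
    }
}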
Mayanka Katyal et al. [11] discussed various load balancing schemes such as static load balancing, distributed and non-distributed dynamic load balancing, and centralized and hierarchical load balancing. On the one hand, static load balancing schemes provide the easiest simulation and monitoring of the environment but fail to model the heterogeneous nature of the cloud. On the other hand, dynamic load balancing algorithms


are difficult to simulate but are best suited to the heterogeneous environment of cloud computing.
Shagufta Khan et al. [13] implemented the SALB algorithm. They first studied existing ACO variants and then developed an effective load balancing algorithm using ant colony optimization. The main contribution of the work is to balance the entire system load while trying to maximize or minimize different parameters.
Soumya Banerjee et al. [14] presented an initial heuristic algorithm applying a modified ant colony optimization approach to the diversified service allocation and scheduling mechanism in the cloud paradigm. The pheromone update mechanism and coefficient of ACO are modified. This modification helps to minimize the makespan of cloud computing based services, and the probability of servicing a request has also been improved using the modified scheduling. Cloud task scheduling is an NP-hard optimization problem, and many meta-heuristic algorithms have been proposed to solve it. A good task scheduler should adapt its scheduling strategy to the changing environment and the types of tasks.
Kun Li et al. [15] proposed a cloud task scheduling policy based on a Load Balancing Ant Colony Optimization algorithm. The main contribution of the work is to balance the entire system load while trying to minimize the makespan of a given task set.
Al-Dahoud Ali et al. [16] proposed an ACO algorithm for load balancing in distributed systems. The algorithm is fully distributed, with information dynamically updated at each ant movement. A multiple-colonies paradigm is adopted: each node sends a colored colony throughout the network, which prevents ants of the same nest from following the same route and hence forces them to be distributed over all the nodes in the system; each ant acts like a mobile agent that carries newly updated load balancing information to the next visited node.
Tushar Desai et al. [17] discussed various load balancing techniques for cloud computing. The main purpose of load balancing is to satisfy customer requirements by distributing load dynamically among the nodes and to maximize resource utilization by reassigning the total load across individual nodes. This ensures that every resource is used efficiently, so the performance of the system is increased.
Ratan Mishra et al. [18] developed an effective load balancing algorithm using the ant colony optimization technique to maximize or minimize different performance parameters such as CPU load, memory capacity, delay, and network load for clouds of different sizes.


5. CONCLUSION
In this paper we have surveyed various load balancing techniques for cloud computing. The main purpose of load balancing is to satisfy customer requirements by distributing load dynamically among the nodes and to maximize resource utilization by reassigning the total load across individual nodes. This ensures that every resource is used efficiently and evenly, so the performance of the system is increased. We have also discussed the architecture of the CloudSim simulator and the qualitative metrics required for load balancing.
6. REFERENCES
[1] K. A. Nuaimi et al., "A Survey of Load Balancing in Cloud Computing: Challenges and Algorithms", Second Symposium on Cloud Computing, Dec. 2012.
[2] Rajwinder Kaur and Pawan Luthra, "Load Balancing in Cloud Computing", Second Symposium on Cloud Computing, Dec. 2012.
[3] Abbas Noon, Ali Kalakech, Seifedine Kadry, "A New Round Robin Based Scheduling Algorithm for Operating Systems: Dynamic Quantum Using the Mean Average", IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 3, No. 1, May 2011, ISSN (Online): 1694-0814, www.IJCSI.org.
[4] Rajveer Kaur, Supriya Kinger, "Analysis of Job Scheduling Algorithms in Cloud Computing", International Journal of Computer Trends and Technology (IJCTT), Volume 9, Number 7, Mar. 2014.
[5] Swachil Patel, Upendra Bhoi, "Priority Based Job Scheduling Techniques in Cloud Computing: A Systematic Review", International Journal of Scientific & Technology Research, Volume 2, Issue 11, November 2013.
[6] Geeta, Charanjit Singh, "Load Balancing in Distributed System Using FCFS Algorithm with RBAC Concept and Priority Scheduling", International Journal of Recent Development in Engineering and Technology, Volume 3, Issue 6, December 2014.
[7] Saurabh Bilgaiyan, Santwana Sagnika, Madhabananda Das, "An Analysis of Task Scheduling in Cloud Computing using Evolutionary and Swarm-based Algorithms", School of Computer Engineering, KIIT University, Bhubaneswar, Odisha, India, 751024.
[8] Rajveer Kaur, Supriya Kinger, "Optimized Scheduling Algorithm", International Conference on Computer Communication and Networks, CSI-COMNET-2011.
[9] M. A. Elsoud, O. M. Elzeki, "Improved Max-Min Algorithm in Cloud Computing", International Journal of Computer Applications (0975-8887), Volume 50, No. 12, July 2012; "User-Priority Guided Min-Min Scheduling Algorithm for Load Balancing in Cloud Computing".


[10] Saurabh Bilgaiyan, Santwana Sagnika, "An Analysis of Task Scheduling in Cloud Computing using Evolutionary and Swarm-based Algorithms", International Journal of Computer Applications (0975-8887), Volume 89, No. 2, March 2014.
[11] Mayanka Katyal, Atul Mishra, "A Comparative Study of Load Balancing Algorithms in Cloud Computing Environment".
[12] Ashwini L, Nivedha G, Mrs. A. Chitra, "Improving Efficiency by Balancing the Load Using Enhanced Ant Colony Optimization Algorithm in Cloud Environment", IJREAT International Journal of Research in Engineering & Advanced Technology, Volume 2, Issue 2, Apr-May 2014.
[13] Shagufta Khan, Niresh Sharma, "Effective Scheduling Algorithm for Load Balancing using Ant Colony Optimization in Cloud Computing", International Journal of Advanced Research in Computer Science and Software Engineering, Volume 4, Issue 2, February 2014.
[14] Soumya Banerjee, Indrajit Mukherjee, and P. K. Mahanti, "Cloud Computing Initiative using Modified ACO Framework", World Academy of Science, Engineering and Technology, Vol. 3, 2009-08-27.
[15] Kun Li, Gaochao Xu, Guangyu Zhao, Yushuang Dong, Dan Wang, "Cloud Task Scheduling Based on Load Balancing Ant Colony Optimization", 2011 Sixth Annual ChinaGrid Conference, IEEE, 2011.
[16] Al-Dahoud Ali, Mohamed A. Belal and Mohd Belal Al-Zoubi, "Load Balancing of Distributed Systems Based on Multiple Ant Colonies Optimization", American Journal of Applied Sciences 7 (3): 428-433, 2010, ISSN 1546-9239.
[17] Tushar Desai, Jignesh Prajapati, "A Survey of Various Load Balancing Techniques and Challenges in Cloud Computing", International Journal of Scientific & Technology Research, Volume 2, Issue 11, November 2013.
[18] Ratan Mishra and Anant Jaiswal, "Ant Colony Optimization: A Solution of Load Balancing in Cloud", International Journal of Web & Semantic Technology (IJWesT), Vol. 3, No. 2, April 2012.
7. AUTHOR PROFILE
A. Gawande, pursuing Bachelor of Engineering in Information Technology from SSGMCE, Shegaon, India.
K. Raut, pursuing Bachelor of Engineering in Information Technology from SSGMCE, Shegaon, India.
S. Jain, pursuing Bachelor of Engineering in Information Technology from SSGMCE, Shegaon, India.


PROPOSED AUTOMATED STUDENTS ATTENDANCE MANAGEMENT SYSTEM USING RASPBERRY PI AND NFC
MAHESH P. SANGEWAR1, SHUBHAM R. WAYCHOL2, AMITKUMAR MANEKAR3
1,2,3 Shri Sant Gajanan Maharaj College of Engineering, Shegaon-444203
maheshcoengg@gmail.com1, waycholshubham@gmail.com2, asmanekar24@gmail.com3
ABSTRACT: The traditional approach to attendance takes quite long, calling out and properly pronouncing each and every name in a large class of students. In a changing world we must update our attendance system to a smarter one, with the speed and efficiency to reduce the time taken by the traditional approach. Here we present an automated attendance management system using Raspberry Pi and NFC, which is a smarter and more efficient approach. With the help of such a system, attendance management in schools/colleges/universities becomes faster, reducing the time required for attendance in class. This system is applicable not only to students but also to teachers, employees, and workers.
Keywords: Automated Attendance, Raspberry Pi, Facial Recognition, NFC.

1. INTRODUCTION
Attendance is necessary nowadays to maintain discipline in schools and colleges. The Automated Students Attendance Management system using Raspberry Pi and NFC is a modern, easy, and cheap way to take attendance and deliver results accurately. To build this attendance management system we require a Raspberry Pi, a student ID tag (NFC tag), and an RFID reader. NFC is a technology that has been with us for a couple of years and carries a lot of advantages. The NFC used in today's phones is based on a wireless communication interface, and NFC tags are available for storing and identifying a particular identity. RFID (Radio Frequency Identification) is based on magnetic field induction, which is used for communication between two electronic devices; here RFID is used to identify a particular NFC tag. A Raspberry Pi is a low-cost, business-card-sized computer with an ARM processor, able to play 1080p video with its VideoCore IV GPU, with 512 MB of RAM, an SD card slot, USB slots, and a 10/100 Ethernet port. This attendance system can be used by schools, colleges, offices, and universities, using NFC on a Linux-based Raspberry Pi device. The student just has to tap their NFC tag on the RFID reader placed alongside the Raspberry Pi, and after identification, face recognition is performed. After this, the system generates the student's report and calculates the number of students attending that class; this also avoids the problem of illegal marking of attendance. The hardware requirements are a Raspberry Pi, NFC tags, an RFID reader, and a camera module. The overall cost of this system will not be more than 5,000 INR. The main advantage of this type of system is that it totally avoids the illegal-attendance problem and saves a lot of the time that is otherwise wasted in the lecture, a smarter way in a smarter world full of technologies.

Figure 1: The Sequence Diagram
2. RELATED WORK

In most universities and colleges, student attendance is important for checking and managing student participation. Some colleges use a paper sheet for student attendance and afterwards enter all this information manually into the college server. A design proposed by Marianne Kinnula and team in 2010 was based on students' NFC tags and an NFC reader. Students need to tap their NFC cards to

their teacher's mobile phone, which acts as an NFC reader. As soon as the tag comes in contact with the reader, the attendance is marked at the back-end. This still wastes lecture time: nearly 5-10 minutes are lost in a lecture of an hour [2]. A system proposed in 2012 by Mohammad Umair Yaqub requires students to tap their NFC-enabled phone against the teacher's phone before the lecture, after which the teacher uploads the attendance to the server [3]. Unnati A. Patel designed a system in 2013 in which she uses RFID technology for the attendance of students in the classroom [4]. In 2014, Dhiraj R. Wani, Tushar J. Khubani, and their guide Prof. Naresh Thoutam designed a system in which automated attendance management is done with Raspberry Pi and NFC with facial recognition; if both the NFC tag and the facial recognition result are accepted, the attendance is recorded, otherwise not [5]. A paper by Jomon Joseph and K. P. Zacharia uses face recognition for validation, especially in the case of student attendance. Today's attendance taking is monotonous and time consuming. Face recognition observes the facial characteristics of a person to identify that person; it stores all recognized data in the system's database, is very useful, reduces time, and can detect the face of a person from two feet or more away from the camera position [6]. In 2014, Ajinkya Patil and Mrudang Shukla proposed a paper in which they implemented a classroom attendance management system based on face recognition by image processing. Face detection differentiates faces from non-face objects and is therefore essential for accurate attendance; the face of a student can be detected individually or within a group of students [7]. In December 2013, Vishal More and Surabhi Nayak proposed an attendance monitoring system; it replaces the system proposed in 2010 by Mari Ervasti, Marianne Kinnula, and Minna Isomursu by making some advanced changes in the time span. All the data is stored in the database and a final report can be generated [8]. A few years back an attendance monitoring system was proposed by Jakub Dvorak using Raspberry Pi and NFC. The user initially has to perform some actions regarding students and is then asked to tap the tag on the NFC reader; all of this is stored in a MySQL database, which can be retrieved later [9]. G. Senthilkumar, K. Gopalakrishnan, and V. Sathish Kumar designed an embedded image capturing system using the latest Raspberry Pi system with the camera module in 2014; they proposed an image capturing technique in an embedded system supported by the Raspberry Pi board [10].
3. METHODOLOGY
3.1 COMPONENTS
3.1.1 NFC/RFID: Near Field Communication, also popular as NFC, is a short-range wireless technology and a subset of the RFID family. Near Field Communication is one of the most significant technologies in the field of personal communication. NFC is based on High-Frequency Radio Frequency Identification (HF-RFID) technology that uses magnetic field induction for communication between electronic devices in its active area. Both operate at 13.56 MHz; the operating distance of NFC technology is typically 10 cm and the data exchange rate is typically 424 kb/s.
3.1.2 Raspberry Pi: The Raspberry Pi is a low-cost, credit-card-sized computer with 512 MB of RAM and a memory card slot; the memory card is used for booting, and a Linux OS is used. USB and Ethernet slots are provided along with a 40-pin GPIO header used to interface with the RFID reader, and a camera module can be attached to the Raspberry Pi for face recognition. It also contains a standard mobile charger slot.

Figure 2: The Raspberry Pi


3.1.3 OpenCV: Open Source Computer Vision, popular as OpenCV, processes data in the form of still images or video from a camera to obtain the desired goals. It is used for motion analysis and object tracking, image analysis, object recognition, and structural analysis. There are different algorithms, such as the Fisherface algorithm and AdaBoost, with which we are going to work. It has libraries such as IPL (Image Processing Libraries). In OpenCV we can use frameworks for working with different tasks such as action recognition, gesture recognition, object recognition, and text recognition.
3.2 WORKING
The method to implement this type of system is simple. All we have to do is make the hardware implementation correct; an NFC tag, which is a unique ID, is provided to every single student. Whenever the user taps their NFC tag on the RFID reader, the tag is read through magnetic field induction between the RFID reader and the NFC tag. If the card belongs to that particular department, the validation is successful; otherwise the card is invalid or not registered. If the card validation is successful, face recognition of that user is done with the camera module connected to the Raspberry Pi. If the NFC tag recognition is valid


and the face recognition is valid, then the attendance is recorded. The admin can change students' information, generate reports for a particular period, and change the database if required. If the result is invalid, then either illegal use of an NFC tag is being made or the user must register properly. The NFC tag and face recognition data are already stored in the database, and the Raspberry Pi is connected to the server to access that database. The most important advantage of this type of system is cost effectiveness; in this way the automated attendance management system replaces the manual and traditional approach to taking attendance and maintaining attendance records.
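The tap-validate-recognize-record flow described above can be summarized in code. The Java sketch below is purely illustrative: NfcReader, FaceRecognizer, and AttendanceDb are hypothetical interfaces standing in for the RFID/NFC driver, the OpenCV pipeline, and the server database, none of which are specified in the paper.

/** Hypothetical interfaces standing in for the real hardware and database. */
interface NfcReader      { String waitForTag(); }                  // blocks until a tag is tapped
interface FaceRecognizer { boolean matches(String studentId); }    // camera + OpenCV check
interface AttendanceDb {
    boolean isRegistered(String studentId);
    void markPresent(String studentId);
}

/** Attendance loop following the described flow:
 *  tap tag -> validate registration -> face recognition -> record. */
class AttendanceStation {
    private final NfcReader reader;
    private final FaceRecognizer faces;
    private final AttendanceDb db;

    AttendanceStation(NfcReader r, FaceRecognizer f, AttendanceDb d) {
        reader = r; faces = f; db = d;
    }

    void run() {
        while (true) {
            String id = reader.waitForTag();           // student taps the NFC tag
            if (!db.isRegistered(id)) {
                System.out.println("Card not registered: " + id);
                continue;                               // invalid or foreign card
            }
            if (faces.matches(id)) {                    // second factor: face check
                db.markPresent(id);
                System.out.println("Attendance recorded for " + id);
            } else {
                System.out.println("Face mismatch, possible illegal use: " + id);
            }
        }
    }
}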

Figure 3: The actual implementation


4. FUTURE SCOPE
As time is the most valuable thing, by implementing this technique we can save the valuable time that has been wasted until now: in a lecture of an hour, nearly 5-10 minutes are lost. This system is quite cost effective and provides a smarter way of taking attendance. The main advantages are that it saves time and avoids illegal attendance. This system may also be used for security purposes: only an authorized or registered person can access a particular high-security area, decided by whether their tag and face authorization are valid. So we should replace the traditional way with a smarter, automated way in a smarter world.
5. CONCLUSION
The attendance management system using Raspberry Pi and NFC is automated and monitored without any wastage of time; that is, the time used for roll calls and pronouncing each student's name properly can be eliminated. It brings new technology to schools, colleges, etc., where attendance is necessary, computing the overall attendance of an individual student automatically without any calculations from attendance sheets. It is a highly secure authentication system because of the NFC tag and facial recognition; no illegal attendance by other individuals is possible because of the face authorization. Hence, this system should be implemented in every field or department where attendance plays an important role. This is the latest and smartest technology in the world of automation.
6. REFERENCES

[1] Anurag K, "Near Field Communication", 2010.
[2] Minna Isomursu, Marianne Kinnula, Mari Ervasti, International Journal on Advances in Life Sciences, Vol. 2, No. 1 & 2, 2010, http://www.iariajournals.org/life_sciences.
[3] Mohammad Umair Yaqub, Umair Ahmad Shaikh and Mohamed Mohandes, Near Field Communication Technology Division, King Fahd University of Petroleum & Minerals, 2012.
[4] Unnati A. Patel, "Student Management System based on RFID Technology".
[5] Dhiraj R. Wani, Tushar J. Khubani, Prof. Naresh Thoutam, "NFC Based Attendance Monitoring System with Facial Authorization".
[6] Jomon Joseph, K. P. Zacharia, "Automated Attendance Management System using Face Recognition".
[7] Ajinkya Patil, Mrudang Shukla, "Implementation of Classroom Attendance System Based on Face Recognition in Class".
[8] Vishal More, Surabhi Nayak, "Attendance Automation using Near Field Communication (NFC) Technology", International Journal of Scientific & Engineering Research, Volume 4, Issue 12, December 2013.
[9] Jakub Dvorak, "Attendance system using Raspberry Pi and NFC Tag reader", http://www.instructables.com/id/Attendance-system-usingRaspberry-Pi-and-NFC-Tag-r/
[10] G. Senthilkumar, K. Gopalakrishnan, V. Sathish Kumar, "Embedded Image Capturing System using Raspberry Pi System".

7. AUTHOR PROFILE
Mahesh P. Sangewar, studying BE at Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Maharashtra, India.
Shubham R. Waychol, studying BE at Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Maharashtra, India.


Prof. A. S. Manekar
Amitkumar S. Manekar is working as an Assistant Professor in the IT Department, SSGMCE, Shegaon. His research areas are Big Data analysis and High Performance Computing. He has guided many undergraduate and postgraduate students.


MY MOMENTS: AN ANDROID BASED DIARY APPLICATION
CHETAN PATIL1, KAVITA CHAUDHARI2, SNEHAL DESHMUKH3, AMITKUMAR MANEKAR4
1,2,3,4 Shri Sant Gajanan Maharaj College of Engineering, Khamgaon Road, Shegaon, Dist. Buldhana, Maharashtra, India
cvpatil1995@gmail.com1, kavitachaudhari722@gmail.com2, Snehaldeshmukh.200@gmail.com3, asmanekar24@gmail.com4
ABSTRACT: Android applications have proved their importance in many aspects of human life, and they are becoming one of the reliable sources of getting information from the internet through application-specific tasks. We all must have decided to write a diary at least once. What is new in it? We may take our pen and book out and write the diary, but does that maintain consistency? No. My Moments is an application which allows you to write your personal as well as official diary in four ways: you may write text, add audio, or capture video and images, and you can retrieve a memorable moment whenever you want to access it and enjoy your happy moment. This Android application takes care that you are consistent in writing your diary by scheduling reminders according to your choice. This diary application can also be used to write a professional diary, since in many jobs we need to maintain a diary of the work. This application provides a convenient way to maintain work-related information and share it with higher authorities. It allows bosses to keep track of their employees who are sent on field duty, and it provides the easiest way to capture your moments, for personal as well as official use.
Keywords: Android application, Personal diary, Work diary, tracking employees' field work, sharing work diary, RAD model, four-way input.

1. INTRODUCTION
A diary is the best way to note down your day-to-day activities so that in the future you can look back at your achievements and happy moments. Diaries can be of two types: a personal diary and a work diary (professional diary). A personal diary is kept secret, whereas a work diary can be shared with the organization to which you are bound. The My Moments application allows you to keep your personal diary and professional diary in the same place but with different privacy menus, and lets you note down your memories on your cell phone, which is much more secure than the traditional way, i.e., a book. The user should write the diary on a daily basis; for that, proper scheduling with reminders and notifications is a focus of this application. When it comes to writing a diary, we usually think of writing text, but we can express our thoughts in a more colorful way through pictures, videos, or audio, and that makes this application different from others. In this application all four input methods, i.e., text, image, audio, and video, are implemented, which makes it easier for users and does not force them to type everything as text. An Android application is a way to make life easy and convenient and to make your smartphone smarter; this application helps in keeping memories of your memorable moments, and watching those moments later gives a different feeling.
This diary is an audiovisual diary, and you may enjoy your joyful moments in the future whenever you want. My Moments will change the current scenario of diary writing from the old way to the digital one. Some jobs involve field duty, where you need to take notes about your work and share them with your organization. This application allows you to keep such work-related records.
A work diary may need to be shared with the organization so that your work is shown to the higher authorities. The diary will be helpful for students as well as professionals to take notes about their lives, and that is the important point.
2. RELATED WORK
The traditional method of writing a diary is to take pen and paper, write in a personal book, and keep that book in a safe place. Another method is an Android application; current systems offer diaries with text input or with images and provide a method to retrieve them. The Play Store has many diary applications, such as Memoires: The Diary, Journey Diary, and My Diary; all these applications provide similar ways to write a diary or daily journal.


Some of the apps on the Play Store:
Memoires: The Diary
This application from the Play Store gives a way to quickly enter your moments, thoughts, memories, or notes; capture photos or audio; or insert images from the gallery. It also accepts text and images from other apps [3].
Journey Diary:
Capture life moments, from everyday thoughts to useful travel logs; Journey helps you re-experience the fun of journaling through the warmth and humanity of its interface. The Journey application keeps your moments beautifully mounted in calendar, photo, and atlas views, allowing you to recall the greatest moments anytime, anywhere, with online and Wear app support [3].
Diary application:
Diary is a simple application which lets you write daily memories with a simple user interface, adding a visual touch to your memories by attaching a photo [3].
3. METHODOLOGY
The diary application works in two modules, Personal Diary and Work Diary; the control flow diagram is shown in Figure 1. In the personal diary as well as in the work diary, data can be input in four ways, i.e., text, audio, video, and images; these inputs are taken from four different activities, as shown in Figure 1.
Figure 1: Control flow diagram of the application.
Control of these activities is provided by the buttons present in the main activity. This project is developed using the RAD (Rapid Application Development) model.
RAD (Rapid Application Development) is a model used in software development and one of the most useful models for Android application development: instead of starting from scratch, we start building our project from existing modules, using existing open-source code to implement the ideas in the beginner's mind.
All the modules can be built within a week, and the only thing needed is the idea to implement [2]. The RAD model is most convenient, as all the business logic is available and there is no need to create or develop new logic. A comparison of this model with the traditional model is shown in Figure 2.

Fig 2: RAD model vs. Traditional model. [4]


Capturing Video:
To capture video using an Android application we need to have permission, which is granted by declaring it in the Android manifest file. To advertise that the application depends on having a camera, put a <uses-feature> tag in the AndroidManifest.xml file, as shown in Figure 3:


Figure 3: Hardware permission in the Android manifest file [1].
We may then write Java code to access the camera. The method dispatchTakeVideoIntent() is used to capture or record the video; this can be done by applying code as shown in Figure 4.

Figure 4: Java Code to record video [1].
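Since the Figure 4 screenshot does not reproduce here, the following is the standard intent-based recording snippet from the Android developer documentation the paper cites [1], shown as a sketch of what such a method typically looks like (to be placed inside an Activity), not the authors' exact code.

static final int REQUEST_VIDEO_CAPTURE = 1;

// Launches the device camera app to record a video; the result is
// delivered to onActivityResult() with the REQUEST_VIDEO_CAPTURE code.
private void dispatchTakeVideoIntent() {
    Intent takeVideoIntent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
    if (takeVideoIntent.resolveActivity(getPackageManager()) != null) {
        startActivityForResult(takeVideoIntent, REQUEST_VIDEO_CAPTURE);
    }
}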


Capturing Image:
Images can be captured using the Android application; we simply need to declare permission for the hardware in the Android manifest, as with recording videos. The following Java code can be applied to capture a photo: the method dispatchTakePictureIntent() is used, as shown in Figure 5:

Figure 5: Java code to capture image. [1]
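Again, as the Figure 5 screenshot is not reproduced here, this is the corresponding still-image snippet from the Android documentation [1], shown as an illustrative sketch (inside an Activity):

static final int REQUEST_IMAGE_CAPTURE = 1;

// Launches the camera app to take a photo; a thumbnail comes back
// in the Intent extras delivered to onActivityResult().
private void dispatchTakePictureIntent() {
    Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    if (takePictureIntent.resolveActivity(getPackageManager()) != null) {
        startActivityForResult(takePictureIntent, REQUEST_IMAGE_CAPTURE);
    }
}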


Capturing Audio:
You can record audio using the MediaRecorder, provided the hardware supports the Android APIs. Using Java code we may start, pause, resume, and stop the audio recording.
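A minimal MediaRecorder sketch using the standard Android API is shown below; the output path is an illustrative placeholder supplied by the caller.

import android.media.MediaRecorder;
import java.io.IOException;

// Minimal audio capture with MediaRecorder: configure source, format,
// and encoder, then start; stop() and release() end the recording.
public class AudioNote {
    private MediaRecorder recorder;

    public void start(String outputFile) throws IOException {
        recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        recorder.setOutputFile(outputFile);  // e.g., a path on external storage
        recorder.prepare();
        recorder.start();
    }

    public void stop() {
        recorder.stop();
        recorder.release();
        recorder = null;
    }
}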
Scheduling Time:
Scheduling is important for writing the diary: if it is not written daily, it will not be a daily diary and consistency is not maintained. So in the My Moments app we provide menus in the settings through which a time is scheduled; if the writer or owner of the diary did not write anything that day, the user gets a reminder that today's entry is pending, and consistency in writing is maintained. A sketch of such a daily reminder follows.
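Such a reminder can be realized with the standard AlarmManager plus a PendingIntent. The sketch below assumes a hypothetical ReminderReceiver (a BroadcastReceiver that would post the notification) and an arbitrary 8 PM trigger time; neither detail comes from the paper.

import android.app.AlarmManager;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import java.util.Calendar;

// Schedules a repeating daily alarm; ReminderReceiver is a hypothetical
// BroadcastReceiver that would post the "today's entry is pending" notification.
public class ReminderScheduler {
    public static void scheduleDaily(Context context) {
        Calendar at = Calendar.getInstance();
        at.set(Calendar.HOUR_OF_DAY, 20);   // illustrative 8 PM reminder time
        at.set(Calendar.MINUTE, 0);
        at.set(Calendar.SECOND, 0);

        Intent intent = new Intent(context, ReminderReceiver.class);
        PendingIntent pi = PendingIntent.getBroadcast(context, 0, intent, 0);

        AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        am.setRepeating(AlarmManager.RTC_WAKEUP, at.getTimeInMillis(),
                AlarmManager.INTERVAL_DAY, pi);
    }
}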
Sharing of Data:
Work diary information can be shared with the organization. Since we attach audio, video, and images to the diary, these attachments can be shared with anyone you want: they are stored on the SD card like other media files, so they can be shared using Bluetooth, Xender, or other Android applications.

Performance Analysis:
Android applications are making life easy, and mobile-based applications are convenient to use. The My Moments diary is an application which can be used as a personal diary as well as for business note taking. The other applications available in the market provide text and image input, but the My Moments diary provides methods to take input in four different ways, as the following figures show.
Login Page:
Figure 6: Login Activity
Figure 6 shows the login activity, which is the first activity the user interacts with. The user enters a password so that the security of the data is maintained.
Personal Diary:
Figure 7: Personal Diary Activity
Figure 7 shows the personal diary activity. It is one shelf of the diary application, which the user can access after entering the correct password, if the user is authenticated.
Work Diary:



Figure 8: Work Diary Activity
Figure 8 shows the work diary activity, in which the user can enter data related to work; the methods to input the data are the same as in the personal diary.
4. FEATURES
In the market there are many applications available for the same purpose, but the My Moments application stands apart in many ways. It has all four types of methods to input data, making the diary audio-visual. Many applications provide a feature to take notes while attending meetings; this can be done using the Work Diary, and notes can be taken using the audio recorder or as text.
Some of the features are listed below:
1. Provides four ways to input data: audio, video, text, and images.
2. Password protected.
3. Audio recordings as attachments.
4. Video recordings as attachments.
5. Configurable fonts and font sizes.
6. Rich text editor.
7. Time scheduling.
8. Retrieves data as per date and time.
9. Direct access to any moment in your life by searching date and time.
10. Two diaries: one for personal use and the other for work.
11. Data sharing from the work diary to the organization.
12. Allows capturing images and attaching them to the diary.
13. Swipe to navigate between pages.
5. CONCLUSION
In this paper we have implemented the proposed application for writing a diary in an interesting way, changing the conventional diary-writing method. Android applications make the cell phone smart, and this application will be one of the smartest. As the proposed system has features to maintain your personal as well as your work diary, it will be useful for professionals as well as for common people who are willing to write a diary on a daily basis.
6. REFERENCES
[1] https://developer.android.com/guide/topics/media.
[2] Netta M. Shani, Shiri Davidson, "Model Based Rapid Application Development", IBM Haifa Research Lab.
[3] https://play.google.com/store/apps.
[4] https://www.slideshare.net.

7. AUTHOR PROFILE
Chetan Vijayrao Patil, pursuing the 3rd year B.E. in Information Technology at Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Dist. Buldhana, Maharashtra, India.
Miss Kavita Ashok Chaudhari, pursuing the 3rd year B.E. in Information Technology at Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Dist. Buldhana, Maharashtra, India.


Miss Snehal Diliprao Deshmukh, pursuing the 3rd year B.E. in Information Technology at Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Dist. Buldhana, Maharashtra, India.
Prof. A. S. Manekar
Amitkumar S. Manekar is working as an Assistant Professor in the IT Department, SSGMCE, Shegaon. His research areas are Big Data analysis and High Performance Computing. He has guided many undergraduate and postgraduate students.


COMPARISON OF PARTICLE SWARM OPTIMIZATION AND GENETIC ALGORITHM FOR LOAD BALANCING IN CLOUD COMPUTING ENVIRONMENT
K PATHAK1, G VAHINDE2
1,2 SSGMCE, Dept. of Information Technology, Shegaon (MH)
kashmirapathak54@gmail.com, ganeshcoengg@gmail.com

ABSTRACT: Cloud computing is a recent advancement wherein IT infrastructure and applications are provided as services to end-users. Cloud computing is the delivery of computing services over the internet. It is always required to share the workload among the various nodes of a distributed system to improve resource utilization and for better performance of the system. So there may be load on VMs; for balancing the load we have two approaches, one is allocation of resources and the second is scheduling the tasks. In this paper we compare two well-known non-linear algorithms for load balancing in a cloud environment for effective utilization of the system and proper servicing of client requests. We conducted our experiments on the CloudSim simulator, taking makespan as the parameter for comparing the results of the different algorithms.
Keywords: Cloud computing, task scheduling, load balancing, PSO, Swarm optimization, Genetic algorithm, CloudSim, IaaS.

1. INTRODUCTION
Cloud computing is not a new thought in the technical world; rather, it is an upcoming innovation. Grid computing, utility computing, and distributed systems have a direct association with cloud computing; it can be said that grid computing acts as the backbone of cloud computing. Cloud computing provides virtual resources and services with the goal of reducing cost. Cloud computing is widely adopted largely because of its properties of virtualization and abstraction [1][2].
In operating systems, various strategies have been proposed for the purpose of job scheduling. The algorithms or strategies proposed for job scheduling include SJF (Shortest Job First), FIFO (First In First Out), LIFO (Last In First Out), priority-based scheduling, and greedy algorithms. The basic aim of these algorithms is minimization of the total execution time of all jobs; they are easily understandable and implementable [3]. In a cloud environment, there is no restriction on the number of jobs requesting scheduling at a time, which raises the issue of the efficiency of existing operating system scheduling algorithms. These algorithms may produce unwanted results in a cloud computing environment and thus are not feasible to implement directly.
To make use of existing algorithms, they need to be optimized to generate better results. One more issue in job scheduling and load balancing is that the response time for various tasks is very high, and the load on a processor becomes a threat of processor failure. This leads to a need for an algorithm which can optimize the load balancing process [4]. In this paper, both the Genetic Algorithm [5] and the Particle Swarm Optimization approach [6] have been implemented for scheduling and load balancing, and a

comparison is drawn on the basis of defined parameters to find the better approach for scheduling in a cloud computing environment.
The paper's organization is as follows:
Section 2 describes the architecture of CloudSim.
Section 3 describes the overview of the literature.
Section 4 describes the comparison of PSO and GA for load balancing.
Section 5 describes the conclusion and analysis.
2. INTRODUCTION TO CLOUDSIM
CloudSim's goal is to provide a generalized and extensible simulation framework that enables modeling, simulation, and experimentation of emerging cloud computing infrastructures and application services, allowing its users to focus on the specific system design issues that they want to investigate without getting concerned about the low-level details of cloud-based infrastructures and services. It supports user-defined policies for the allocation of hosts to virtual machines and policies for the allocation of host resources to virtual machines.
2.1 Architecture of CloudSim:
CloudSim is implemented using existing simulation libraries and frameworks such as GridSim and SimJava in order to handle the low-level requirements of the system, as shown in Figure 1. In particular, the components for handling events and message passing in SimJava are reused in CloudSim. Similarly, using GridSim simplifies the implementation of networking, information services, files, users, etc. We now discuss the main components of a cloud computing infrastructure which are implemented in the CloudSim simulator. The figure below shows the complete components of CloudSim.


Figure 1. Layered CloudSim architecture
g. Data center: In CloudSim, the data center is used to model the core system-level services of a cloud infrastructure. It consists of a set of hosts which manage a set of virtual machines and handle their "low level" processing; at least one data center must be created to start the simulation.
h. Host: This component is used to assign processing capabilities (specified in the millions of instructions per second that the processor can perform), memory, and a scheduling policy to allocate different processing cores to the multiple virtual machines in the list of virtual machines managed by the host.
i. Virtual machine: This component manages the allocation of different virtual machines to different hosts, so that processing cores can be scheduled (by the host) to virtual machines. This configuration depends on the particular application, and the default allocation policy for virtual machines is "first-come, first-served".
j. Datacenter broker: The responsibility of a broker is to mediate between users and service providers, depending on the quality-of-service requirements that the user specifies. In other words, the broker identifies which service provider is suitable for the user based on the information it has from the Cloud Information Service, and negotiates with the providers about the resources that meet the user's requirements. The user of CloudSim needs to extend this class in order to specify requirements in their experiments.

k. Cloudlet: This component represents the application service whose complexity is modeled in CloudSim in terms of its computational requirements.
l. Cloud Coordinator: This component manages the communication between other Cloud Coordinator services and brokers, and also monitors the internal state of a data center periodically in terms of simulation time. All of the above are the entities of CloudSim that help us implement a load balancing algorithm. Load balancing must take into account two major tasks: one is resource allocation and the other is task scheduling in a distributed environment.
3. LITERATURE SURVEY
Radojevic and Zagar [4] proposed an algorithm for load balancing called CLBDM (Central Load Balancing Decision Module). The scheme was proposed with the purpose of communicating with all parts of the computing structure, including workload balancers and application servers. CLBDM makes forwarding decisions on the load balancers based on the collected data and internal measurements. It determined that the performance of the composed structure depends basically on dividing work effectively over the participating nodes in a distributed computing system.
Eberhart and Kennedy [6] introduced the particle swarm optimizer in an innovative form. It was noted that the genetic algorithm is much similar to particle swarm optimization.


Like the genetic algorithm, particle swarm optimization also begins with the generation of a population, but unlike GA it assumes particles and initializes them with some random position and velocity, allowing them to move freely in the search space. The technique has been applied to various applications, and PSO was found to give better performance than other techniques. For example, due to the crossover operator in GA, immigration among subspecies of robots can be a serious issue; this issue is not present with particle swarms. The sketch after this paragraph illustrates the particle update.
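For reference, the classic PSO velocity and position update (our notation; the inertia weight w and acceleration coefficients c1, c2 are conventional constants, not values from this paper) is v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x), followed by x <- x + v. A Java sketch:

import java.util.Random;

/** One PSO iteration step for a single particle in d dimensions,
 *  using the standard inertia-weight update rule. */
class Particle {
    double[] x, v, pbest;          // position, velocity, personal best
    double pbestScore = Double.MAX_VALUE;

    Particle(double[] x0, double[] v0) {
        x = x0; v = v0; pbest = x0.clone();
    }

    void step(double[] gbest, Random rng) {
        final double w = 0.7, c1 = 1.5, c2 = 1.5;    // conventional constants
        for (int d = 0; d < x.length; d++) {
            double r1 = rng.nextDouble(), r2 = rng.nextDouble();
            v[d] = w * v[d]
                 + c1 * r1 * (pbest[d] - x[d])       // pull toward personal best
                 + c2 * r2 * (gbest[d] - x[d]);      // pull toward global best
            x[d] += v[d];                            // move the particle
        }
    }
}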
A. V. Karthick et al. [7] proposed a multi-queue scheduling scheme which can increase client satisfaction and make good use of the energy of the system. Scheduling is the most complex part of cloud computing, and the major goal of a global scheduler is to share resources to the maximum possible level. Researchers have given much importance to building job scheduling algorithms that are well suited and appropriate to the cloud computing situation. Also, in cloud computing the user has to pay for services based on usage time, which is why job scheduling is one of the critical activities in cloud computing. Swachil J. Patel et al. [8] raised the issue of priority in job scheduling, because some job requests need to be scheduled before all other remaining jobs, which can tolerate longer waiting times. With this aim, the authors presented an improvement in the job scheduling scheme based on priority in cloud computing. This algorithm needs to be improved further for the minimization of makespan.
Daji Ergu et al. [9] proposed a task-oriented resource allocation model for the cloud computing environment. Due to different computers with varying capacities in the cloud environment, allocation of resources becomes complex and difficult. The proposed scheme forms a pairwise matrix of tasks, and comparison is made on the basis of network bandwidth, the cost of a task, its reliability, and its completion time. The main motive is to improve the consistency ratio when allocation is based on the weights of tasks.
Tingting Wang et al. [10] discussed that the load balancing issue is critical in the cloud scenario due to the huge number of users and large data volumes; the requirements of resource sharing and reuse are becoming more and more imperative. With the purpose of an efficient task scheduling strategy, the authors implemented load balancing using genetic algorithms so as to fulfill user requirements and get better resource utilization. This strategy not only resulted in a task scheduling sequence with shorter job and average-job makespan, but also satisfied inter-node load balancing. However, this strategy assumed that there is no priority among jobs, which is unavoidable in a real cloud computing environment.
Ayed Salman et al. [11] presented a task assignment algorithm based on the principles of PSO for distributed and parallel computing systems. PSO follows a population-based search that mimics the social behavior of flocking birds and schooling fish. The PSO framework combines local search methods with global search strategies; through self-experience and neighboring knowledge, it attempts to balance exploration and exploitation. Every individual element of the population is called a particle, which flies around in a multidimensional search space looking for the best solution. Particles update their positions according to their own and their neighboring particles' positions, moving toward their own best position or their neighbor's best position.
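To make the PSO mechanics described above concrete, the following minimal Python sketch shows the canonical velocity and position update driven by a particle's personal best and the global best. It is illustrative only, not the implementation of any surveyed paper; the fitness function, inertia weight w, and acceleration coefficients c1 and c2 are assumed parameters.

import random

def pso(fitness, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    # Initialize particles with random positions and zero velocities.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # personal best positions
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # global best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:                  # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                 # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the sphere function.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
print(best, best_val)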
4. COMPARISON OF PSO AND GA FOR LOAD
BALANCING.
4.1 Genetic Algorithm
A Genetic Algorithm is a search technique used in computing to find true or approximate solutions to optimization and search problems. Genetic Algorithms are categorized as global search heuristics. They are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover (also called recombination).
The genetic algorithm was the first biologically inspired technique built on the principle of survival of the fittest and Mendel's laws of inheritance. The Genetic Algorithm has various benefits over alternative techniques for computationally intensive problems, if supplied with properly set operators and fitness functions. A Genetic Algorithm maintains a collection of solutions (chromosomes) that are jointly referred to as a population. The method then performs crossover, mutation and selection operations iteratively until the stopping criterion is satisfied [1]. The resultant set is the set of solutions.
Its three basic operators are:
Step 1: Selection. We randomly choose a set of individuals based on their fitness.
Step 2: Crossover
Step 3: Mutation
Algorithm for Genetic Algorithm
Step 1: Create a population of a fixed number of random chromosomes.
Step 2: Calculate the fitness values of all chromosomes.
Step 3: Choose the two chromosomes with the best fitness as parents (Selection step).
Step 4: Perform crossover between the parents to produce offspring, using the crossover ratio (Crossover step).
Step 5: Perform mutation, if required, at each position using the mutation ratio (Mutation step).
Step 6: Add the offspring chromosomes to the initial population.
Step 7: If the termination condition is satisfied, stop and return the best chromosome as the answer; otherwise repeat from Step 2.
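The seven steps above can be expressed as a short program. The sketch below is a minimal, illustrative GA in Python assuming a bit-string encoding and a user-supplied fitness function; the population size, crossover ratio, and mutation ratio are assumed parameters, not values prescribed by the surveyed papers.

import random

def genetic_algorithm(fitness, n_bits, pop_size=20, generations=100,
                      crossover_ratio=0.9, mutation_ratio=0.01):
    # Step 1: create a population of random chromosomes (bit strings).
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Step 2: rank all chromosomes by fitness.
        scored = sorted(pop, key=fitness, reverse=True)
        # Step 3: choose the two fittest chromosomes as parents.
        p1, p2 = scored[0], scored[1]
        # Step 4: single-point crossover with the given crossover ratio.
        if random.random() < crossover_ratio:
            cut = random.randint(1, n_bits - 1)
            child1 = p1[:cut] + p2[cut:]
            child2 = p2[:cut] + p1[cut:]
        else:
            child1, child2 = p1[:], p2[:]
        # Step 5: mutate each position with the given mutation ratio.
        for child in (child1, child2):
            for i in range(n_bits):
                if random.random() < mutation_ratio:
                    child[i] = 1 - child[i]          # toggle the bit
        # Step 6: add the offspring, dropping the weakest to keep size fixed.
        pop = scored[:-2] + [child1, child2]
    # Step 7: return the best chromosome as the answer.
    return max(pop, key=fitness)

# Example: maximize the number of 1-bits (the "one-max" problem).
best = genetic_algorithm(sum, n_bits=16)
print(best, sum(best))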
Figure 2: Flow Chart for Genetic Algorithm

4.2 Bee Colony Optimization
The BCO algorithm is based on the activities of bees while looking for nectar and sharing that knowledge with other bees. There are three types of agents: the employed bee, the onlooker bee, and the scout. The employed bee stays on a food source and keeps its surroundings in memory; the onlooker acquires this knowledge from the employed bees and selects one of the food sources to forage; and the scout performs the task of finding new nectar sources.
The procedure for BCO is as follows:
Step 1. Initialization: Distribute the population randomly over the solution space and evaluate the fitness values, with the ratio ne representing the share of employed bees in the total population.
Step 2. Movement of Onlookers: Calculate the probability of selecting a food source by the equation

P_i = F_i / (F_1 + F_2 + ... + F_N)    (1)

where F_i is the fitness value of solution i evaluated by the employed bee, N represents the number of employed bees, and P_i is the probability of selecting the i-th employed bee. An onlooker then chooses a target food source to move to, and the nectar amount is determined, by equation (2).
Step 3. Movement of Scouts: If the fitness values are not improved by continuous iterations, those food sources are abandoned, and their employed bees become scouts, which are moved by a relocation equation in which r is a random number, r in [1, 2].
Step 4. Updating of the Best Food Source: Memorize the best fitness value and its position.
Step 5. Termination Checking: Check the termination condition; if it is satisfied, terminate the program and output the results; otherwise repeat from Step 2.


Figure 6: Steps for Bee Colony Optimization
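A compact sketch of the five BCO steps, in the spirit of the artificial bee colony method, is given below. It is illustrative only: the fitness function, population size, and abandonment limit are assumptions, and onlooker selection follows the probability P_i = F_i / (F_1 + ... + F_N) from equation (1), with costs inverted so that lower cost means higher fitness.

import random

def bee_colony(fitness, dim, n_sources=10, iters=100, limit=10, lo=-5.0, hi=5.0):
    # Step 1: initialization - scatter food sources randomly in the solution space.
    src = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    fit = [fitness(s) for s in src]          # lower is better here
    trials = [0] * n_sources
    best = min(src, key=fitness)

    def try_neighbor(i):
        # Perturb one dimension toward a randomly chosen other source.
        k = random.randrange(n_sources)
        d = random.randrange(dim)
        cand = src[i][:]
        cand[d] += random.uniform(-1, 1) * (src[i][d] - src[k][d])
        if fitness(cand) < fit[i]:
            src[i], fit[i], trials[i] = cand, fitness(cand), 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):           # employed bees refine their sources
            try_neighbor(i)
        # Step 2: onlookers pick sources proportionally to fitness (equation 1).
        weights = [1.0 / (1.0 + f) for f in fit]
        total = sum(weights)
        for _ in range(n_sources):
            r, acc = random.uniform(0, total), 0.0
            for i, w in enumerate(weights):
                acc += w
                if acc >= r:
                    try_neighbor(i)
                    break
        # Step 3: scouts replace sources that failed to improve `limit` times.
        for i in range(n_sources):
            if trials[i] > limit:
                src[i] = [random.uniform(lo, hi) for _ in range(dim)]
                fit[i], trials[i] = fitness(src[i]), 0
        # Step 4: memorize the best food source found so far.
        cur = min(src, key=fitness)
        if fitness(cur) < fitness(best):
            best = cur[:]
    # Step 5: terminate after the fixed number of iterations.
    return best

print(bee_colony(lambda x: sum(v * v for v in x), dim=3))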

Table 1: Comparison of PSO algorithms for Load Balancing in Cloud Environment
(Columns: Ref. work; Abstract; Encoding scheme; Initial population generation; Optimization criteria; Nature of tasks; Environment; Highlights; Conclusion)

[12]
Abstract: A PSO algorithm embedded with crossover, mutation and local search; the experimental results show the PSO algorithm not only converges faster but also runs faster than the other two algorithms at large scale.
Encoding scheme: 1 x n vector representation
Initial population generation: Random
Optimization criteria: Communication time and execution cost
Nature of tasks: Independent
Environment: Cloud simulation environment
Conclusion: The PSO algorithm both gains the optimal solution and converges faster on large task sets. It is obvious that PSO is more suitable for cloud computing.

[13]
Abstract: A heuristic approach based on the particle swarm optimization algorithm is adapted to solve the task scheduling problem in a grid environment. Each particle represents a possible solution, and the position vector is transformed from continuous variables to discrete variables.
Encoding scheme: 1 x n vector representation
Initial population generation: Random
Optimization criteria: Makespan
Nature of tasks: Independent
Environment: Grid simulation environment
Highlights: Local search based on VNS applied after each permutation
Conclusion: Simulation results demonstrate that the PSO algorithm can obtain better results for a large-scale optimization problem. A task scheduling algorithm based on PSO can be applied in the computational grid environment.

[14]
Abstract: User applications may incur large data retrieval and execution costs when they are scheduled taking into account only the execution time. In addition to optimizing execution time, the cost arising from data transfers between resources as well as execution costs must also be taken into account. We compare the cost savings when using PSO and the existing Best Resource Selection (BRS) algorithm. Our results show that PSO can achieve: a) as much as 3 times cost savings as compared to BRS, and b) good distribution of workload onto resources.
Encoding scheme: 1 x n vector representation
Initial population generation: Random
Optimization criteria: Communication cost and execution cost
Nature of tasks: Workflow
Environment: Cloud simulation environment (Swarm package)
Highlights: Combined with heuristic
Conclusion: PSO-based task-resource mapping can achieve at least three times cost savings as compared to BRS-based mapping for our application workflow. In addition, PSO balances the load on compute resources by distributing tasks to available resources.

[15]
Abstract: Here we propose a hybrid scheduling algorithm to solve the independent task scheduling problem in grid computing. We have combined PSO with the gravitational emulation local search (GELS) algorithm to form a new method, PSO-GELS.
Encoding scheme: 1 x n vector representation
Initial population generation: Random, while considering constraints
Optimization criteria: Execution cost with deadline constraint
Nature of tasks: Workflow
Environment: Cloud simulation environment (Java)
Highlights: Used hill climbing after each iteration
Conclusion: The hybrid PSO algorithm performs better for local searches. Because GELS is used for the local search rather than other local search algorithms such as hill climbing or SA, the hybrid algorithm finds better solutions than other algorithms. PSO-GELS performs better than the other algorithms.

[16]
Abstract: Here we present a Hybrid Particle Swarm Optimization (HPSO) based scheduling heuristic to balance the load across the entire system while trying to minimize the makespan of a given task set.
Encoding scheme: 1 x n vector representation
Initial population generation: Random
Optimization criteria: Makespan; number of tasks that miss their deadline
Nature of tasks: Independent
Environment: Java simulation environment
Highlights: Local search based on GELS applied to the results obtained from PSO
Conclusion: It is found that HPSO-based task scheduling can achieve better load balancing as compared to PSO-based scheduling.

[17]
Abstract: Here the representations of the position and velocity of the particles in conventional PSO are extended from real vectors to fuzzy matrices. The proposed approach dynamically generates an optimal schedule so as to complete the tasks within a minimum period of time as well as utilizing the resources in an efficient way. We evaluate the performance of the proposed PSO algorithm against a Genetic Algorithm (GA) and a Simulated Annealing (SA) approach.
Encoding scheme: Fuzzy matrices
Initial population generation: Random
Optimization criteria: Makespan
Nature of tasks: Independent
Environment: Grid simulation environment
Highlights: Applying the LJFP-SJFP heuristics alternately after allocation of a batch of jobs to nodes
Conclusion: We evaluated the performance of a fuzzy PSO for grid job scheduling and compared the performance with GA and SA. Empirical results revealed that the proposed approach can be applied for job scheduling. When compared to GA and SA, an important advantage of the PSO algorithm is its speed of convergence and the ability to obtain faster and feasible schedules.

Comparison of GA algorithms for Load Balancing in Cloud Environment
(Columns: Ref. work; Abstract; Encoding scheme; Initial population generation; Optimization criteria; Selection operator; Crossover operator; Mutation operator; Nature of tasks and environment; Conclusion)

[18]
Abstract: This paper proposes a novel load balancing strategy using a Genetic Algorithm (GA). The algorithm strives to balance the load of the cloud infrastructure while trying to minimize the makespan of a given task set.
Encoding scheme: Fixed bit string representation
Initial population generation: Random
Optimization criteria: Load balancing
Selection operator: Random
Crossover operator: Single point crossover
Mutation operator: Bits are toggled (0 to 1 or 1 to 0)
Nature of tasks and environment: Independent; CloudAnalyst
Conclusion: Analysis of the results indicates that the proposed strategy for load balancing not only outperforms a few existing techniques but also guarantees the QoS requirements of customer jobs. Variation of the crossover and selection strategies could be applied as future work to obtain more efficient and tuned results.

[19]
Abstract: This paper presents a decentralized scheduling algorithm based on genetic algorithms for the problem of DAG scheduling, and an integration platform for the proposed algorithm in a Grid system.
Encoding scheme: Permutation based representation
Initial population generation: Random
Optimization criteria: Makespan; load balancing; resource utilization; time taken to obtain the solution
Selection operator: Roulette wheel
Crossover operator: No operator
Mutation operator: Adaptive mutation operator
Nature of tasks and environment: Workflow; Grid simulation
Conclusion: The DAG scheduling algorithm has been demonstrated to offer the best schedule length over all resources in a slightly shorter time than the other scheduling algorithms. The distribution of the tasks to the available resources has proved to accomplish good load balancing and efficient resource allocation by minimizing the idle times on the processing elements.

[20]
Abstract: This paper proposes a new scheduler which makes a scheduling decision by evaluating the entire group of tasks in the job queue. A genetic algorithm is designed as the optimization method for the new scheduler.
Encoding scheme: Direct
Initial population generation: A resource holding some data for the task is assigned to that task
Optimization criteria: Makespan
Selection operator: Roulette wheel
Crossover operator: One-point crossover
Mutation operator: Flip mutator
Nature of tasks and environment: Independent; Cloud simulation environment (Hadoop MapReduce)
Conclusion: Simulation experiments show the effectiveness of the GA-based load balancing technique compared to FIFO and delay scheduling.

[21]
Abstract: To make the most efficient use of the resources, we propose an optimized scheduling algorithm to achieve the optimization or sub-optimization of cloud scheduling problems. The scheduling policy is achieved by a Parallel Genetic Algorithm, which is much faster than the traditional Genetic Algorithm.
Encoding scheme: Direct
Initial population generation: Not mentioned
Optimization criteria: Resource utilization; speed of resource allocation
Selection operator: Not mentioned
Crossover operator: Not mentioned
Mutation operator: Not mentioned
Nature of tasks and environment: Independent; Cloud simulation environment (Java with the Java Genetic Algorithm Package)
Conclusion: Mathematically, we consider the scheduling problem as an Unbalanced Assignment Problem. Compared to the GA, the parallel GA improved the speed of finding the best allocation, and the utilization rate is better than the Round Robin algorithm and the Greedy algorithm.

[22]
Abstract: In this paper we propose a hybrid heuristic method (HSGA) to find a suitable scheduling for a workflow graph, based on a genetic algorithm, in order to obtain a response quickly while optimizing makespan, load balancing on resources, and speedup ratio.
Encoding scheme: Direct
Initial population generation: New method based on the best-fit and round-robin methods
Optimization criteria: Makespan; load balancing on resources; speedup ratio
Selection operator: No operator
Crossover operator: Random gene selection crossover
Mutation operator: Selecting a random gene and replacing its resource with a resource that has a better failure rate and is not overloaded
Nature of tasks and environment: Workflow; Cloud simulation environment
Conclusion: This paper is based on GA and uses the optimizing characteristics of the Best-Fit and Round-Robin algorithms. For making the initial population it uses a two-stage evaluation. In the first stage, all tasks of the application are ordered by a priority method with respect to their influence on each other based on the graph topology. In the second stage, the candidate resources are assigned by combining features of the two methods, Best-Fit and Round-Robin, to select good candidate resources.

[23]
Abstract: In order to solve this problem, considering the new characteristics of cloud computing and the original adaptive genetic algorithm (AGA), a new scheduling algorithm based on a double-fitness adaptive approach, the job spanning time and load balancing genetic algorithm (JLGA), is established.
Encoding scheme: Direct
Initial population generation: Greedy algorithm
Optimization criteria: Makespan; load balancing
Selection operator: Rotating selection strategy
Crossover operator: One point crossover
Mutation operator: Local search
Nature of tasks and environment: Independent; MATLAB
Conclusion: In this paper we have proposed the JLGA algorithm to achieve task scheduling with least makespan and load balancing. We have experimentally evaluated the performance of the JLGA algorithm and the DGA with 30 jobs. The experimental results show that JLGA takes little time in both total and average job consumption, and it balances the entire system load effectively.

[24]
Abstract: This paper presents a scheduling strategy for load balancing of VM resources based on a genetic algorithm. This strategy computes in advance the influence the deployment of the needed VM resources will have on the system, and then chooses the least-affecting solution, through which it achieves the best load balancing and reduces or avoids dynamic migration.
Encoding scheme: Tree representation
Initial population generation: Spanning tree based method
Optimization criteria: Load balancing
Selection operator: Rotating selection strategy
Crossover operator: Newly developed method
Mutation operator: Newly developed method
Nature of tasks and environment: Independent; Cloud experimental environment
Conclusion: The method achieves the best load balancing and reduces or avoids dynamic migration, thus resolving the problems of load unbalance and high migration cost caused by traditional scheduling algorithms. The experimental results show that this method can better realize load balancing and proper resource utilization.

[25]
Abstract: The multi-agent genetic algorithm (MAGA) is a hybrid algorithm of GA whose performance is far superior to that of the traditional GA. This paper demonstrates the advantage of MAGA over traditional GA, and then exploits multi-agent genetic algorithms to solve the load balancing problem in cloud computing.
Encoding scheme: Fixed bit string representation
Initial population generation: Random
Optimization criteria: Load balancing
Selection operator: Neighborhood competition operator for agents
Crossover operator: Neighborhood orthogonal crossover operator for agents
Mutation operator: Mutation operator for agents
Nature of tasks and environment: Independent; Cloud simulation environment
Conclusion: This paper experimentally proves that MAGA is more appropriate than GA for handling high-dimensional function optimization problems. Then, establishing a cloud computing load balancing model, the Min-min and MAGA algorithms were applied for resource scheduling respectively. This method, used for solving a load balancing strategy based on virtualized cloud computing, is feasible and effective.

[26]
Abstract: In this paper we present a hybrid approach called FUGE that is based on fuzzy theory and a genetic algorithm (GA) and aims to perform optimal load balancing considering execution time and cost. The FUGE algorithm assigns jobs to resources by considering virtual machine (VM) processing speed, VM memory, VM bandwidth, and the job lengths.
Encoding scheme: Direct
Initial population generation: Random
Optimization criteria: Makespan; cost; load balancing
Selection operator: Based on fitness value
Crossover operator: Fuzzy based crossover approach
Nature of tasks and environment: Independent; CloudSim, MATLAB
Conclusion: We compared the performance of our approach to several other cloud job scheduling models; the results of the experiments proved the efficiency of our FUGE approach in terms of execution time, execution cost, and average degree of imbalance. The FUGE approach modifies the SGA with the use of fuzzy theory and improves on the SGA performance in terms of execution cost by about 45% and in terms of total execution time by about 50%, which are the main goals of this research.

[27]
Abstract: This paper mainly focuses on how to improve the energy efficiency of servers in a data center through appropriate task scheduling strategies. Based on MapReduce, Google's massive data processing framework, a new energy-efficient task scheduling model is proposed in this paper. To solve this model, we put forward an effective genetic algorithm with practical encoding and decoding methods and specially designed genetic operators.
Encoding scheme: Direct
Initial population generation: Random
Optimization criteria: Energy conservation
Selection operator: Random
Crossover operator: Multipoint crossover
Mutation operator: Single-point mutation operator
Nature of tasks and environment: VM placement; Cloud simulation environment
Conclusion: We design a practical encoding and decoding method for the individuals, and construct an overall energy efficiency function of the servers as the fitness value of the individual. Also, in order to accelerate the convergence speed and enhance the searching ability of our algorithm, a local search operator is introduced. Finally, the experiments show that the proposed algorithm is effective and efficient.

8. CONCLUSION
In this paper, we have surveyed GA and PSO methods for load balancing in cloud computing. The main purpose of load balancing is to satisfy customer requirements by distributing load dynamically among the nodes and to maximize resource utilization by reassigning the total load across individual nodes. This ensures that every resource is used efficiently and evenly, so the performance of the system increases. We have also discussed the architecture of the CloudSim simulator and the qualitative metrics required for load balancing.
9. REFERENCES
1. Dinh, Hoang T., et al. "A survey of mobile cloud computing: architecture, applications, and approaches." Wireless Communications and Mobile Computing 13.18 (2013): 1587-1611.
2. Zhang, Shufen, Hongcan Yan, and Xuebin Chen. "Research on key technologies of cloud computing." Physics Procedia 33 (2012): 1791-1797.
3. Komal et al., International Journal of Advanced Research in Computer Science and Software Engineering 5(6), June 2015, pp. 502-507.
4. Silberschatz, Abraham, Peter Baer Galvin, and Greg Gagne. Operating System Principles. John Wiley & Sons, 2006.
5. Radojevic, Branko, and Mario Zagar. "Analysis of issues with load balancing algorithms in hosted (cloud) environments." MIPRO, 2011 Proceedings of the 34th International Convention. IEEE, 2011.
6. Gu, Jianhua, et al. "A new resource scheduling strategy based on genetic algorithm in cloud computing environment." Journal of Computers 7.1 (2012): 42-52.
7. Eberhart, Russ C., and James Kennedy. "A new optimizer using particle swarm theory." Proceedings of the Sixth International Symposium on Micro Machine and Human Science. Vol. 1. 1995.
8. A.V. Karthick, "An Efficient Multi Queue Job Scheduling for Cloud Computing," International Conference on Computing and Communication Technologies (WCCCT), March 2014, pp. 164-166.
9. Patel, Swachil J., and Upendra R. Bhoi. "Improved Priority Based Job Scheduling Algorithm in Cloud Computing Using Iterative Method." Advances in Computing and Communications (ICACC), 2014 Fourth International Conference on. IEEE, 2014.
10. Ergu, Daji, et al. "The analytic hierarchy process: task scheduling and resource allocation in cloud computing environment." The Journal of Supercomputing 64.3 (2013): 835-848.
11. Wang, Tingting, et al. "Load Balancing Task Scheduling Based on Genetic Algorithm in Cloud Computing."
12. Dependable, Autonomic and Secure Computing (DASC), 2014 IEEE 12th International Conference on. IEEE, 2014.
13. Salman, Ayed, Imtiaz Ahmad, and Sabah Al-Madani. "Particle swarm optimization for task assignment problem." Microprocessors and Microsystems 26.8 (2002): 363-371.
14. Lizheng Guo et al., "Task Scheduling Optimization in Cloud Computing Based on Heuristic Algorithm," Journal of Networks, Vol. 7, No. 3, March 2012.
15. Lei Zhang et al., "A Task Scheduling Algorithm Based on PSO for Grid Computing," International Journal of Computational Intelligence Research, ISSN 0973-1873, Vol. 4, No. 1, 2008.
16. Suraj Pandey et al., "A Particle Swarm Optimization-based Heuristic for Scheduling Workflow Applications in Cloud Computing Environments," 24th IEEE International Conference on Advanced Information Networking and Applications, 2010.
17. Zhangjun Wu et al., "A Revised Discrete Particle Swarm Optimization for Cloud Workflow Scheduling," IEEE Transactions on Cloud Computing, Vol. 2, April-June 2014.
18. Zahra Pooranian, "An efficient meta-heuristic algorithm for grid computing," TELKOMNIKA Indonesian Journal of Electrical Engineering, 2012.
19. Gomathi B. et al., "Task scheduling algorithm based on hybrid particle swarm optimization in cloud computing environment," Journal of Theoretical and Applied Information Technology, 10 September 2013.
20. Kousik Dasgupta, Brototi Mandal, et al., "A Genetic Algorithm (GA) based Load Balancing Strategy for Cloud Computing," International Conference on Computational Intelligence: Modeling Techniques and Applications (CIMTA), 2013.
21. Florin Pop, Ciprian Dobre, et al., "Genetic Algorithm for DAG Scheduling in Grid Environments," IEEE Conference, 2009.
22. Yujia Ge, Guiyi Wei, "GA-Based Task Scheduler for the Cloud Computing Systems," International Conference on Web Information Systems and Mining, 2010.
23. Zhongni Zheng, Rui Wang, et al., "An Approach for Cloud Resource Scheduling Based on Parallel Genetic Algorithm," IEEE, 2011.
24. Arash Ghorbannia Delavar, Yalda Aryan, "HSGA: a hybrid heuristic algorithm for workflow scheduling in cloud systems," Springer Science+Business Media New York, 2013.
25. Tingting Wang, Zhaobin Liu, et al., "Load Balancing Task Scheduling based on Genetic Algorithm in Cloud Computing," 12th International Conference on Dependable, Autonomic and Secure Computing, IEEE, 2014.
26. Jianhua Gu, Jinhua Hu, Tianhai Zhao, Guofei Sun, "A New Resource Scheduling Strategy Based on Genetic Algorithm in Cloud Computing Environment," Journal of Computers, Vol. 7, No. 1, January 2012.
27. Kai Zhu, Huaguang Song, et al., "Hybrid Genetic Algorithm for Cloud Computing Applications," Asia-Pacific Services Computing Conference, IEEE, 2011.
28. Mohammad Shojafar, Saeed Javanmardi, et al., "FUGE: A joint meta-heuristic approach to cloud job scheduling algorithm using fuzzy theory and a genetic method," Springer Science+Business Media New York, 2015.
29. Xiaoli Wang, Yuping Wang, Hai Zhu, "Energy-efficient task scheduling model based on MapReduce for cloud computing using genetic Algorithm," Journal of Computers, Vol. 7, No. 12, December 2012.

10. AUTHOR PROFILE
K. PATHAK: Pursuing the 3rd year B.E. in Information Technology at Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Dist. Buldhana, Maharashtra, India.
G. VAHINDE: Pursuing the 3rd year B.E. in Information Technology at Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Dist. Buldhana, Maharashtra, India.


A SURVEY PAPER ON TRACKING SYSTEM BY USING SMART PHONE


POONAM HAJARE1, RAJESHRI KACHOLE2, ANKUSH DESHMUKH3, AMITKUMAR MANEKAR4
1,2,3,4 Department of Computer Science and Engineering, SSGMCE Shegaon, Maharashtra, India.
1 hajarepv123@gmail.com, 2 rajeshrikachole2012@gmail.com, 3 deshmukhank@gmail.com, 4 asmanekar24@gmail.com
ABSTRACT: In today's world, everything from humans to machines has to work fast but with regularity. For regularity, it is necessary to keep an eye on employees. But who will keep that eye, given that a human cannot keep track of thousands of employees? Alongside this there has been prompt growth in smart phones, so we are trying to keep track of employees by using smart phones. In this survey paper, we give an overview of The Master Spy, an Android application used to monitor employees through their Android smart phones. It is needed to improve technical growth as well as the standards of a company or organization. This Android application is user friendly, and its only requirement is a smart phone. Every employee is given a unique ID and password. Using the ID an employee can enter the organization, and using the ID and password an employee can check his or her profile. We can send memos to employees as well as urgent meeting calls on their IDs. The list of employees is stored in a database. This project helps improve accuracy in management and saves human time. With this system, job performance improves because employees know someone is keeping an eye on them; thus we can deliver the project to the customer on deadline and meet customer satisfaction. It is not costly, as the employee just has to install the application on a mobile phone, and we can also track the employee's mobile. A specialty of this project is that it is not harmful to anyone, as it does not emit any rays. It can be used in shopping malls, schools and colleges, organizations, companies, etc. This project can be implemented offline.
Keywords: Tracking, Spy, Master spy
1. INTRODUCTION
Tracking something is nothing but monitoring or observing the activities of anything or of a person. Tracking is one of the ways to provide security. Nowadays various companies have started tracking their employees during official time for security reasons. Tracking restricts the activities of employees during official time and is used to ensure the presence of a particular person in an authorised area only. Tracking can be done in two ways, i.e. manual tracking and automated tracking. An example of manual tracking is tracking by assigning a special person to keep track. For automated tracking, various tracking systems are available in the market, such as tracking using GPS or tracking on the basis of mobile phones. This paper gives an overall view of the tracking systems currently present. Various strategies used for both manual and automated tracking are discussed in this paper.
2. SURVEY ON EXISTING SYSTEMS AND SOME RELATED TERMS
ANDROID
Android is the most popular open source operating system. It can be used in mobiles as well as in tablets. Nowadays, Android smart phones are used by most people, ranging from rookies to professionals. Android is a Linux based operating system and a platform whose applications are available in its Play Store; it allows users to create, install and use their own applications using its framework. The Android framework is licensed under the Apache license, which means that we can easily distribute our Android applications under a customized license.
GPS
GPS is the Global Positioning System, which is used to transmit signals showing accurate location, speed and time information to the end user via a constellation of satellites orbiting the earth. GPS can provide accurate positioning 24 hours a day, 7 days a week, anywhere in the world. The use of GPS has expanded as it has become less expensive. It is used by pilots to avoid collisions, by farmers to guide equipment and control the accurate placement of chemicals, by boaters, by hunters, etc.
Tracking falls into two categories:
Manual: Tracking employees manually is not an efficient way because it requires more time to arrange records and assess the behaviour of employees. It can also be costly, as the organization requires people and paper.
Automated: An automated tracking system can keep track of employees efficiently, but existing systems have faced some problems.
Some Manual Tracking Systems
Team of investigators posing as friends
In some organizations, employees are hired to keep an eye on working employees in the same organization. For example, Kmart hired a team of investigators to pose as friends of working employees; they had lunch, beer and dinner with employees after finishing work [1]. Kmart claimed that they were investigating a possible theft and drug ring, but federal law prohibits monitoring the union activity of employees, and in 1995 the company instructed its investigators, and posted notice, that they should not observe the union activities of employees [4].

a) Undercover Operatives
Some companies use undercover operatives to collect more information about employees, but this may disturb the personal life of an employee. For example, in Portland a 24-year-old legal assistant filed a worker's compensation claim, and in the process news of an abortion became public. This affected the lady's image and left her deeply stressed. [1]
b) Spying
Spying is the practice of one management assigning another to keep an eye on other management or employees. The assigned management secretly observes the employees, and the management under investigation does not know about the investigation. For example, in California a guard was asked to spy on two managers suspected of having homosexual liaisons on company time. The guard lost his job because spying was not a part of it. With this technique there may be a loss of employees and wastage of time, and it also affects the personal lives of employees.
c) Swipe Card System
There are many employee tracking systems, and the swipe card is one of them. But with a swipe card, if an employee swipes the card too slowly or too quickly it can create problems, and a bigger problem arises when the card goes missing: in a crowd anyone, including an unauthorized person, can gain entry, which is very dangerous from a security point of view.
Some Automated Tracking Systems
a) Phone Tapping
Phone tapping and eavesdropping are the most commonly used employee monitoring techniques. The incoming as well as outgoing calls of employees are tapped. Using this, if any employee shares secret information about the company with an unauthorized user, we can know about it. The disadvantage is that private calls are also tapped and the length of calls is recorded, so privacy may be affected. [5]
b) E-mail and Voice Mail
Here, the electronic and voice mail of employees are checked by an employer. E-mail provides the options 'Receipt Request' and 'Priority Category'. Using these, an employer can easily check an employee's sent messages even after the mails are deleted, because a backup of each employee's mail is kept on magnetic tape and there is no law to stop employers from reading email [6]. However, a large amount of space is required for the backups [7].
c) Computerized System
There are several types of computerized employee tracking systems. In 2008, Nucleus Research called for the use of an automated attendance system, which can be used for employees as well as students. It eliminated repetitive human work and can store a larger amount of data compared to a manual tracking system. However, a human has to enter employee data into the computerized system, so the problem of data entry inaccuracy may occur [3].

d) Video Surveillance
In this system, video surveillance is used to monitor employee behaviour. Cameras are placed in hidden as well as noticeable areas, and these tiny cameras provide surveillance information. The main reasons for this system were to track employee pilferage, horseplay and safety hazards. It was stated that 40% of respondents believe it is an employer's right to use video surveillance [2]. But it is very costly to maintain cameras across a large campus.
e) Active Badge
This is a credit-card-sized badge which sticks to the clothing of an employee. The system requires a transmitting device. A sensor picks up the signal from the device and monitors the activity of the employee via the network to a server. The server translates the signal into information and shares it on the LAN, i.e. Local Area Network [8]. It requires a lithium battery to activate the badge, which is an energy-consuming process.
f) Biometric Based Tracking System
In a manual tracking system, it is difficult to keep track of a large number of employees using only pen and paper. In a computerized tracking system, there may be chances of duplication, i.e. proxy. To avoid these problems a new system was invented, called the biometric based tracking system. In a biometric based tracking system, we can uniquely identify an employee on the basis of his or her voice, eye (i.e. retina), fingerprint, or face recognition.
For tracking, the employee has to put his or her finger on a fingerprint device. The device scans the fingerprint and sends a unique matriculation number to the database. If the number matches, the employee is considered authorized; otherwise the employee cannot enter the organization [9]. A simple architecture is shown below.
Figure 1: Architecture of biometric (fingerprint) based attendance system [11]
But for this process we need hardware, which is costly and can be damaged by the touch of multiple users. Also, a person's voice can change with age or due to throat infection, and the fingerprints of people working in chemical industries or as labourers are often affected; thus a biometric system cannot be used by these companies. The eyes of diabetic patients can be affected differently, which creates problems in identifying the authorized user correctly. This system is unreliable and expensive.
g) RFID based Attendance System


It is only an attendance system, because it cannot keep track of employees; it can only store employee data. RFID stands for Radio Frequency Identification. It is an automatic identification method used to store and retrieve data using RFID tags, which can be called transponders. The main aim of this system is to take the attendance of employees or students. Each employee has an RFID tag and has to touch the tag to the RFID device located at the entrance. Data is sent to the BISAM server placed at the company. Using this system we can send notifications about meetings to employees through the BISAM SMS gateway server. An RFID project requires components such as a transceiver chip, serial communication integrated circuit, microcontroller, liquid crystal display, universal serial bus interface, power supply module, etc.
Figure 2: RFID based System [11]
The employee's ID is stored in the microcontroller's memory. When an employee touches the RFID device, it reads the ID and sends it to the microcontroller. The microcontroller compares the ID, and if a match is found the employee's name is displayed on the LCD through the PC via the RS232 port [10]. Another proposed system stores the data on an online server so that dynamic changes are possible [11]. But verification is not done in the RFID project, hence there may be chances of duplication, i.e. proxy. This means an unauthorized person can also enter the organization, which is very harmful from a security point of view.
h) Bluetooth Based System
This is one of the latest systems, first designed by Mr. Vishal Bhalla in 2003 [11]. In this system, employees can be identified using a Bluetooth network. It was originally designed for student attendance, but small companies used it for employee attendance. Attendance is recorded on the manager's mobile: application software installed on the manager's mobile queries the employee's mobile via the Bluetooth network and transports the employee's mobile MAC (Media Access Control) address to the manager's mobile, after which the employee is considered authorized as well as present.
Figure 3: Bluetooth Based System [11]
The difficulty in the Bluetooth based system is that only the employee's own cell phone must be used, because if an absent employee's mobile is given to another employee, the absent employee can be marked as present.
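The common core of the automated systems above (read an identifier, compare it against stored IDs, then admit or reject) can be summarized in a few lines. This is an illustrative Python sketch assuming a simple in-memory ID table; a real deployment would read from the RFID or Bluetooth device and a server-side database.

# Hypothetical in-memory employee table standing in for the server database.
EMPLOYEES = {"04A1B2C3": "Poonam", "04D4E5F6": "Rajeshri"}

def on_tag_read(tag_id, log):
    """Compare the scanned tag ID against stored IDs, as the microcontroller does."""
    name = EMPLOYEES.get(tag_id)
    if name is None:
        return "ACCESS DENIED"          # no match found
    log.append((tag_id, name))          # mark the employee present
    return "WELCOME " + name            # shown on the LCD in the real system

attendance_log = []
print(on_tag_read("04A1B2C3", attendance_log))  # WELCOME Poonam
print(on_tag_read("FFFFFFFF", attendance_log))  # ACCESS DENIED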

3. CONCLUSION
This work surveys existing attendance and tracking systems. We observed some advantages and disadvantages of those systems, and by using the good properties of the existing systems we are developing 'The Master Spy'. With this system, job performance will improve because employees know someone is keeping an eye on them; thus we can deliver the project to the customer on deadline and meet customer satisfaction. It is not costly, as the employee just has to install the application on a mobile phone, and we can also track the employee's mobile. A specialty of this project is that it is not harmful to anyone, as it does not emit any rays. It can be used in shopping malls, schools and colleges, organizations, companies, etc. We can also assess the behaviour of employees using the k-means clustering algorithm. This project can be implemented offline using HTML5.
4. REFERENCES

[1] Schultz, E. E. (1994, July 29). Employee beware: the boss may be listening. The Wall Street Journal, Sec. C, p. 1.
[2] SHRM survey; Losey, M. R. (1994, September). Workplace privacy: issues and implications. USA Today, 76-78.
[3] Research note, "Automating time and attendance: low hanging ROI," Nucleus Research, January 2008.
[4] Johnson, B. (1995). Technological surveillance in the workplace. Available at http://www.fwlaw.com/techsurv.html
[5] Privacy Rights Clearinghouse (1997). Employee monitoring: is there privacy in the workplace? Available at http://www.privacyrights.org
[6] Alderman, L. (1994, December). Safeguard your secrets from your boss. Money, 31-32.
[7] Pillar, C. (1993, July). Bosses with X-ray eyes. Macworld, 118-123.
[8] Pountain, D. (1993, December). Track people with active badges. Byte, 57.
[9] O. Shoewu and O. A. Idowu, "Development of attendance management system using biometrics," Department of Computer Science, Lagos State University, Epe campus, Nigeria.
[10] T. S. Lim, S. C. Sim, and M. M. Mansor, "RFID based attendance system," IEEE Symposium on Industrial Electronics & Applications 2009 (ISIEA 2009), vol. 2, pp. 778-782, 4-6 Oct. 2009, doi: 10.1109/ISIEA.2009.5356360.
[11] D. M. Kassim, H. Mazlan, N. Zaini, and M. K. Salleh, "Web-based student attendance system using RFID technology," in Proc. IEEE Control and System Graduate Research Colloquium (ICSGRC 2012), pp. 213-218, 16-17 July 2012, doi: 10.1109/ICSGRC.2012.628716.
[12] Vishal Bhalla, Tapodhan Singla, Ankit Gahlot and Vijay Gupta, "Bluetooth Based Attendance Management System," International Journal of Innovations in Engineering and Technology (IJIET), Vol. 3, Issue 1, October 2013, ISSN: 2319-1058.

5. AUTHORS PROFILE
Poonam V. Hajare: Pursuing 3rd Year BE at SSGMCE Shegaon.
Rajeshri V. Kachole: Pursuing 3rd Year BE at SSGMCE Shegaon.
Ankush S. Deshmukh: Pursuing 3rd Year BE at SSGMCE Shegaon.
Prof. A. S. Manekar: Amitkumar S. Manekar is working as an Assistant Professor in the IT Department, SSGMCE, Shegaon. His research area is Big Data analysis and High Performance Computing. He has guided many undergraduate and postgraduate students.


REVIEW ON SECURITY AND AUTHENTICATION SYSTEM IN ACCESSING DATA


MITALI LAKADE1, RUCHI KELA2, ASHWINI3, AMITKUMAR MANEKAR4
1,2,3,4 Shri Sant Gajanan Maharaj College of Engineering, Amravati University
1 mitalilakade286@gmail.com, 2 ruchi.kela1995@gmail.com, 3 ashwinigugle@email.com, 4 asmanekar24@gmail.com

ABSTRACT: Various password authentication methods are available nowadays, including complex and lightweight methods used in various online applications and e-commerce sites. Some methods are based on encryption, which provides real-time security and is basically the last alternative. Some methods are based on biometric data, where the human eye and fingerprints are the key values. In this work we take a tour of and collect information about all the authentication methods available in today's context. While collecting this information, their pros and cons are measured and analysed, and a conclusion is drawn on the basis of these methods. A future scope is canvassed, and a conclusion is drawn on the basis of some parameters which will help to decide the best method for online transaction systems.
Keywords: Authentication, Security, Online Data Accessing.
1. INTRODUCTION
Spyware is a most dangerous kind of malware that gets installed on a computer without the authority or permission of the owner in order to collect the owner's information. The owner's information can be private or public; spyware collects details such as personal data, credit card numbers, and passwords saved in Chrome cookies, all of which are personal to the owner. Spyware also collects information such as user keystrokes and internet activities, sometimes slows down or crashes the computer, and takes up space. Spyware is thus becoming a serious issue which needs to be handled, and many different software tools exist to protect against it. Our review shows that many methods for password security exist to date. The most widely used is the textual password technique. A textual password is a combination of alphabets (A-Z, a-z), digits (0-9) and special symbols (e.g. @, *, -). This type of password is also called an alphanumeric password, but many different security issues arise with it. As an alternative to the textual password there is another technique, the graphical password. To reduce these problems of traditional methods, a textual-graphical password scheme using colour combinations has been developed as a possible alternative solution to the old traditional system. Textual password authentication is not secure and has a high failure rate compared to the others, because shoulder surfing is very easy for textual passwords. To overcome this, the primary design is improved without adding any extra complexity to the authentication process.
2. RELATED WORK
Various types of techniques are available for authentication. The alphanumeric password is a traditional technique which is widely used; it consists of a secret series of characters. The user ID and password act as user identification and authentication to access required resources. This alphanumeric password technique secures resources, but it has many disadvantages. A user can pick a password which can be guessed easily and is vulnerable to shoulder surfing; if the user selects a password which is difficult to guess, then it is hard to remember. The user can also become the victim of dictionary attacks, brute force attacks, spyware, etc. A system-generated password is difficult to remember [1]. Some researchers developed authentication methods that use graphical passwords, which can overcome the problems related to the traditional text password method [2]. These graphical passwords are more difficult to break using traditional attack methods such as brute force search, dictionary attack [3], or spyware.
Problems with textual characters:
Textual passwords [4] are the most popular user authentication method but have security and usability problems. The common human tendency is to create memorable passwords which are small and not too lengthy, as strong system-assigned passwords are difficult to remember. Users tend to provide the same password for different accounts, and a large number of passwords increases interference and leads to confusion.
Some of the textual based password techniques are:
a) Encryption: Encryption of data [5] has become an important way to protect data resources, especially on the internet. Encryption is the process of applying special algorithms and keys to transform data into cipher code before transmission, and decryption involves the application of mathematical algorithms and keys to get back the original data from the cipher code [6].
b) Cryptography: Cryptography is the major element used to secure data or information while sharing confidential data. There are many techniques available to secure data, but improvements and the establishment of new techniques are still required. In cryptography, a plain text message gets converted into cipher text, also known as human-unreadable form [8] [9] [10] [11] [12] [13].
c) One-Time Password (OTP): Xuguang Ren and Xin-Wen Wu proposed the generation of dynamic OTPs. A one-time password (OTP) is a password which is valid for only one login session or transaction. OTPs avoid a number of shortcomings associated with static text passwords, the most important being that, in contrast to static passwords, they are not vulnerable to replay attacks. They considered the user's password, the authentication time, as well as a unique property that the user possesses at the moment of authentication (for example, the MAC address of the machine that the user uses for authentication) to generate the OTP [14]. This system effectively protects a user's account against various attacks such as phishing attacks, replay attacks, and perfect man-in-the-middle attacks [15].
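As a rough illustration of this idea, the sketch below derives a one-time password by hashing together the user's password, the current time window, and a machine identifier. It is an assumption-laden simplification in Python, not the authors' actual construction from [14] [15]; the window length and digit count are illustrative.

import hashlib, time, uuid

def generate_otp(password, digits=6):
    """Derive a short-lived OTP from password + time window + machine MAC."""
    window = int(time.time() // 30)          # 30-second validity window
    mac = uuid.getnode()                     # this machine's MAC address as an int
    material = (password + "|" + str(window) + "|" + str(mac)).encode()
    digest = hashlib.sha256(material).hexdigest()
    return str(int(digest, 16))[-digits:]    # last N decimal digits as the OTP

print(generate_otp("s3cret"))  # differs every 30 seconds and per machine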
d) Two One-Way Hash Functions: Huiyi L. and Yuegong Z. proposed a scheme which uses two one-way hash functions: one is a hash chain and the other secures the hash chain. The hash chain is the core of the authentication scheme, and the other function is used for information transmission between the user and the server. This scheme presents higher security and lower computational cost [16], as well as functions of bidirectional identity authentication.
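A hash chain of the kind at the core of such schemes can be sketched as follows: the server stores the chain tip h^n(seed), and each login reveals the previous chain element, which the server verifies by hashing it once. This is a generic Lamport-style sketch, not the specific protocol of [16].

import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def build_chain(seed, n):
    """Return [h^1(seed), h^2(seed), ..., h^n(seed)]."""
    chain, cur = [], seed
    for _ in range(n):
        cur = h(cur)
        chain.append(cur)
    return chain

chain = build_chain(b"seed", 100)
server_state = chain[-1]          # the server stores only the chain tip h^100

candidate = chain[-2]             # client reveals h^99 at login
assert h(candidate) == server_state   # server hashes once to verify
server_state = candidate          # server rolls back one step for the next login
print("login ok")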
e) Two Factor Face Authentication Scheme: Jeonil Kang et al. [17] suggested a two-factor face authentication scheme that combines a user password with matrix transformations.
f) Salt (cryptography): In cryptography, a salt is random data or a string that is used as additional input to a function that hashes a password [18]. Salt is used to protect against dictionary attacks and rainbow table attacks. A randomly generated string (the salt) and the user's password are concatenated and processed with a cryptographic hash function, and the result is stored with the salt in a database. The generated salt is appended to the user's original password, the salted password is passed to the hash function to generate the salted hash password, and finally both the salted hash password and the salt are stored in the database. An iteration count is used, which refers to the number of times the hash function is applied to its own output: once we generate a salt, concatenate it with the password and apply the hash function, we take the result and pass it again as input to the same hash function, and this process is repeated a number of times. The minimum number of iterations should be 1000 for more security.
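The salting-and-iteration procedure just described can be written directly. A minimal Python sketch follows, assuming SHA-256 as the hash and a 16-byte random salt; these parameters are illustrative.

import hashlib, os

def hash_password(password, salt=None, iterations=1000):
    """Concatenate salt and password, then hash repeatedly (>= 1000 iterations)."""
    if salt is None:
        salt = os.urandom(16)                      # randomly generated salt
    digest = salt + password.encode()
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()   # feed the result back into the hash
    return salt, digest                            # store both in the database

def verify(password, salt, stored):
    return hash_password(password, salt)[1] == stored

salt, stored = hash_password("hunter2")
print(verify("hunter2", salt, stored))   # True
print(verify("wrong", salt, stored))     # False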
g) PBKDF2 (Password-Based Key Derivation Function 2): A pseudorandom function such as a cryptographic hash or cipher is applied by PBKDF2 to the user's password along with a salt value. This process is repeated multiple times to produce a derived key, which can then be used as a cryptographic key in subsequent operations. The PBKDF2 algorithm takes a salt produced by a random function, the user's original password, and an iteration count, and by applying the pseudorandom function again and again it generates the derived key as the final output. Because deriving the login passkey requires many iterations, cracking or hacking of the password becomes quite slow, which makes the password more secure and protected.
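Python's standard library exposes PBKDF2 directly, so the derived-key generation described above can be demonstrated with no assumptions beyond the parameter choices:

import hashlib, os

salt = os.urandom(16)
# Derive a 32-byte key from the password and salt using 100,000 iterations
# of HMAC-SHA256; the repetition is what makes brute-force cracking slow.
derived_key = hashlib.pbkdf2_hmac("sha256", b"my password", salt, 100_000, dklen=32)
print(derived_key.hex())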


h) Session Password Using a Grid (pair-based authentication scheme)
Grid: an 8x8 matrix is used to present alphabets, digits and special symbols to the user, and the user derives a session password from the grid, where I is the row element and J is the column element. For example, suppose the length of the password is 8 and the password is ABCDEQRS. We make the pairs AB, CD, EQ, RS, so the session password length is 4. We select the row containing character A and the column containing character B, and the character at the intersection of that row and column is inserted into the password field; likewise all the pairs are looked up on the grid and the password is entered. This new password is valid only for that session and is stored in the user log on the server. The session password and the 8x8 grid are sent to the server, where the session password is cross-checked with the user's original password. [18]
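The pair-based lookup is easy to express in code. The Python sketch below assumes an 8x8 grid of characters supplied as a list of strings; the grid contents and the demo password are illustrative.

import string

def session_password(secret, grid):
    """For each pair (a, b): take a's row and b's column; use the intersection."""
    pos = {grid[r][c]: (r, c) for r in range(8) for c in range(8)}
    out = []
    for i in range(0, len(secret), 2):
        a, b = secret[i], secret[i + 1]
        row, col = pos[a][0], pos[b][1]   # row of first char, column of second
        out.append(grid[row][col])
    return "".join(out)

# Illustrative 8x8 grid of 64 characters (alphabets, digits, symbols).
chars = string.ascii_letters + string.digits + "@*"   # 52 + 10 + 2 = 64
grid = [chars[r * 8:(r + 1) * 8] for r in range(8)]

print(session_password("ABCDEQRS", grid))  # 4-character session password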
Biometric based authentication techniques:
Biometric based authentication techniques, such as fingerprints, iris scans, facial recognition and other more exotic or futuristic biometrics [19] [20] such as gait and smell, are not adopted to the full extent. The major drawback of the biometric based approach is that such a system is costly, and the identification process can be slow and often undependable [21] [22]. Jacobson M. et al. introduced the notion of implicit authentication, which consists of authenticating users based on behavioural patterns [23].
Token based authentication techniques:
A security token is a physical device that can be easily carried. A security token can be a bankcard or a smartcard containing passwords, with a PIN to protect a lost or stolen token. The drawback of a metal key is that, if it gets lost, it enables its finder to enter the house. A physical object used as an authenticator has the distinct advantage that, if it is lost, the owner can have proof of this and can act accordingly [24].
Multiple password interference in text:
The problems relating to the usability and security of multiple passwords have not been widely evaluated. However, we know that people generally have difficulty remembering multiple passwords, and since users reuse the same password for different systems when they log in [25], this reduces security.
Graphical based:
As an alternative to the textual password, the graphical password technique has been proposed [26]. In this technique images or shapes are used, because people can remember images more easily than text; it is easy for human beings to remember the places they visit, the things they have seen, and the faces of different people. In addition, if the images used in a graphical password technique are numerous enough, the password space of the technique may exceed that of text-based passwords and thus can offer resistance to all the usual attacks on text-based passwords. In this way graphical passwords are difficult to guess and easy to remember.


3. Graphical password techniques are categorized as follows:
1) Recognition Based System: In this system, for registration the user selects a certain number of images, in order, from a set of random images as a password, and for authentication the user has to identify (recognize) those images in the same order.
There are some specific techniques under this system:
i) Dhamija and Perrig [27] proposed a graphical authentication scheme where the user has to identify pre-defined images to prove the user's authenticity. In this system, the user selects a certain number of images from a set of random pictures during registration. Later, during login, the user has to identify the pre-selected images for authentication from a set of images. This system is vulnerable to shoulder surfing.
ii) Passface [28] is a technique where the user sees a grid of nine faces and selects one face previously chosen by the user. Here, the user chooses four images of human faces as the password, and each pass image has to be selected from among eight decoy images; since there are four user-selected images, this is done four times.
iii) Wiedenbeck et al. [29] describe a graphical password entry scheme using the convex hull method to resist shoulder surfing attacks. A user needs to recognize pass-objects and click inside the convex hull formed by all the pass-objects. In order to make the password hard to guess, a large number of objects can be used, but this makes the display very crowded and the objects almost indistinguishable, while using fewer objects may lead to a smaller password space, since the resulting convex hull can be large.
iv) Blonder [30] designed a graphical password scheme where the user must click on the approximate areas of pre-defined locations.
v) Passlogix [31] [32] extended this scheme by allowing the user to click on various items in the correct sequence to prove their authenticity.
2) Recall-Based System
In this system a user is asked to reproduce something that he created or selected earlier during the registration stage. There are some techniques under this system:
i) Jermyn et al. [33] proposed a new technique called Draw-a-Secret (DAS), where the user is required to re-draw a pre-defined picture on a 2D grid. If the drawing touches the same grid cells in the same sequence, then the user is authenticated. This authentication scheme is vulnerable to shoulder surfing.
ii) Haichang et al. [34] proposed a new shoulder-surfing-resistant scheme, where the user is required to draw a curve across their password images in order rather than clicking on them directly. This graphical scheme combines the DAS and Story schemes to provide authenticity to the user.
iii) Syukri [35] developed a technique where authentication is done by drawing the user's signature with a mouse. This technique includes two stages, registration and verification. At the registration stage the user draws his signature with a mouse, after which the system extracts the signature area. In the verification stage it takes the user's signature as input, performs normalization, and then extracts the parameters of the signature. The disadvantage of this technique is the forgery of signatures; drawing with a mouse is not familiar to many people, and it is difficult to draw the signature within the same perimeters as at the time of registration.
iv) In the PassPoints system, though complex, one can make more passwords. To get access, the user has to click close to the selected click point, within a specific distance, i.e. 0.25 to 0.50 cm, of the user's click point. Sometimes the user is unable to identify the exact pixel of the click point, so without flexibility authentication would fail; therefore a specific tolerance area is defined around the click point, and if the user clicks within that range access is granted (a minimal proximity check is sketched after this list).
v) Cued Click Points: Cued Click Points [37] [38] is an alternative to PassPoints. In this technique the user clicks on one point on each image rather than on multiple points on a single image. If the user makes a mistake while clicking the latest click point, the user can cancel the attempt and retry from the beginning.
vi) Soon-Nyean Cheong et al. presented a secure two-factor authentication NFC smartphone access control system using a digital key and the proposed Encrypted Steganography Graphical Password [39].
4. CONCLUSION
Various password techniques, such as textual passwords,
graphical passwords, behavioral schemes, and combinations of
graphical with textual passwords, have been discussed along
with their pros and cons. The best alternative to the textual
password is the graphical password. Graphical passwords can
reduce the burden on human memory, as humans tend to
remember graphics and images better. Overall, it is more
difficult to break graphical passwords using attacks such as
brute force, dictionary attacks, and social engineering. We
have tried our best to change the phrase "a small password is
not secure and easy to guess" into "a small password is secure
and hard to guess." However, graphical passwords are
vulnerable to shoulder-surfing and spyware attacks, so the best
alternative is a combination of graphical and textual
techniques (color code, grid, ciphering). With such a
technique, a small password can give authentication that is
both easy to remember and secure.
5. FUTURE SCOPE
The future scope of this technique is that, since it provides
more security than existing systems, a more secure login for
users becomes possible. The technique is therefore not limited
to PDAs (personal digital assistants); it is also very useful for
providing protection against hacking, dictionary attacks, etc.
In future it can be used for banking applications, military
security, scientific research security, and mobile phone
applications, where security is most important.
6. REFERENCES
[1] S. Wiedenbeck, J. Waters, J. C. Birget, A. Brodskiy, and N. Memon, "PassPoints: Design and longitudinal evaluation of a graphical password system," International Journal of Human-Computer Studies, 63 (2005), pp. 102-127.
[2] S. Wiedenbeck, J. Waters, J. C. Birget, A. Brodskiy, and N. Memon, "Authentication using graphical passwords: Basic results," in Human-Computer Interaction International (HCII 2005), Las Vegas, NV, 2005.
[3] P. C. van Oorschot, A. Salehi-Abari, and J. Thorpe, "Purely automated attacks on PassPoints-style graphical passwords," IEEE Trans. Information Forensics & Security, 5(3), pp. 393-405, 2010.
[4] S. Chiasson, A. Forget, E. Stobert, P. van Oorschot, and R. Biddle, "Multiple password interference in text and click-based graphical passwords," Proc. ACM Conf. Computer and Communications Security (CCS), Nov. 2009.
[5] S. Liao, A. K. Jain, and S. Z. Li, "Partial face recognition: Alignment-free approach," IEEE, 2011.
[6] K. Pi et al., IJAER, Vol. 8, Issue III, Sep. 2014, ISSN 2231-5152.
[7] International Journal of Information & Computation Technology, ISSN 0974-2239, Vol. 4, No. 13 (2014), pp. 1305-1314, International Research Publications House, http://www.irphouse.com.
[8] G. C. Kessler, "An overview of cryptography," http://www.garykessler.net/library/crypto.html, 2014.
[9] https://encryptedtbn3.gstatic.com/images?q=tbn:and9gcr12qomdzbjetouvqfwyigk3w_M2vvcrby7wgmp77uqaic3arpmq
[10] S. Alabady, "Design and implementation of a network security model for cooperative network," 2009.
[11] S. Kaushik and A. Singhal, "Network security using cryptographic techniques," 2012.
[12] V. Gupta, G. Singh, and R. Gupta, "Advance cryptography algorithm for improving data security," 2012.
[13] University of Alabama at Birmingham, "Ease and security of password protections improved," 2014.
[14] International Journal of Network Security & Its Applications (IJNSA), Vol. 4, No. 2, March 2012.
[15] X. Ren and X.-W. Wu, "A novel dynamic user authentication scheme," International Symposium on Communications and Information Technologies, pp. 713-717, 2012.
[16] Huiyi L. and Yuegong Z., "An improved one-time password authentication scheme," Proceedings of ICCT, pp. 1-5, 2013.
[17] J. Kang, D. Nyang, and K. Lee, "Two-factor face authentication using matrix permutation transformation and a user password," Information Sciences, 269, pp. 120, 2014.
[18] Multidisciplinary Journal of Research in Engineering and Technology, Vol. 1, Issue 2, pp. 175-182.
[19] M. Mosam et al., International Journal of Network Security & Its Applications (IJNSA), Vol. 3, No. 3.
[20] X. Xia and L. O'Gorman, "Innovations in fingerprint capture devices," Veridicom Inc., 31 Scotto Pl., Dayton, NJ 08810, USA, received 21 December 2001.
[21] S. Pankanti, R. M. Bolle, and A. Jain, "Biometrics: The future of identification," special issue of Computer, Vol. 33, No. 2, Feb. 2000.
[22] L. O'Gorman, "Comparing passwords, tokens, and biometrics for user authentication," Proc. IEEE, Vol. 91, No. 12, pp. 2019-2020, Dec. 2003.
[23] J. Wayman, A. K. Jain, D. Maltoni, and D. Maio (Eds.), Biometric Systems: Technology, Design and Performance Evaluation, New York: Springer, 2004.
[24] L. Catuogno and C. Galdi, "A graphical PIN authentication mechanism with applications to smart cards and low-cost devices," Information Security Theory and Practices: Smart Devices, Convergence and Next Generation Networks, LNCS, Vol. 5019, pp. 16-35, 2008.
[25] S. Chiasson, A. Forget, E. Stobert, P. C. van Oorschot, and R. Biddle, "Multiple password interference in text passwords and click-based graphical passwords," School of Computer Science and Department of Psychology, Carleton University, Ottawa, Canada, November 2009.
[26] A. Adams, M. A. Sasse, and P. Lunt, "Making passwords secure and usable," in HCI '97: Proceedings of HCI on People and Computers, pp. 1-19, London, UK, 1997, Springer-Verlag.
[27] R. Dhamija and A. Perrig, "Déjà Vu: A user study using images for authentication," in 9th USENIX Security Symposium, 2000.
[28] Real User Corporation: Passfaces, www.passfaces.com.
[29] S. Wiedenbeck, J. Waters, J. C. Birget, A. Brodskiy, and N. Memon, "Design and longitudinal evaluation of a graphical password system," International Journal of Human-Computer Studies, 63 (2005), pp. 102-127.
[30] G. E. Blonder, "Graphical passwords," Lucent Technologies, Inc., Murray Hill, NJ, U.S. Patent, 1996.
[31] Passlogix, http://www.passlogix.com.
[32] A. Forget, S. Chiasson, P. van Oorschot, and R. Biddle, "Improving text passwords through persuasion," Proc. Fourth Symp. Usable Privacy and Security (SOUPS), July 2008.
[33] I. Jermyn, A. Mayer, F. Monrose, M. Reiter, and A. Rubin, "The design and analysis of graphical passwords," in Proceedings of the USENIX Security Symposium, August 1999.
[34] H. Gao, Z. Ren, X. Chang, X. Liu, and U. Aickelin, "A new graphical password scheme resistant to shoulder-surfing."
[35] F. Syukri, E. Okamoto, and M. Mambo, "A user identification system using signature written with mouse," in Third Australasian Conference on Information Security and Privacy (ACISP), Springer-Verlag Lecture Notes in Computer Science (1438), 1998, pp. 403-441.
[36] X. S. Zhou and T. S. Huang, "Relevance feedback for image retrieval: A comprehensive review," Multimedia Systems, Vol. 8, No. 6, Apr. 2003.
[37] S. Chiasson, P. van Oorschot, and R. Biddle, "Graphical password authentication using cued click points," Proc. European Symp. Research in Computer Security (ESORICS), pp. 359-374, Sept. 2007.
[38] J. Thorpe and P. C. van Oorschot, "Human-seeded attacks and exploiting hot-spots in graphical passwords," USENIX Security Symposium, 2007.
[39] S.-N. Cheong, H.-C. Ling, and P.-L. Teh, "Secure encrypted steganography graphical password scheme for near field communication smartphone access control system," Expert Systems with Applications, 41(7), pp. 3561-3568, 2014.
7. AUTHOR PROFILE
Mitali Lakade is pursuing the 3rd year of the B.E. in Information Technology at Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Dist. Buldhana, Maharashtra, India.

Ruchi Kela is pursuing the 3rd year of the B.E. in Information Technology at Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Dist. Buldhana, Maharashtra, India.

Ashwini is pursuing the 3rd year of the B.E. in Information Technology at Shri Sant Gajanan Maharaj College of Engineering, Shegaon, Dist. Buldhana, Maharashtra, India.

Prof. A. S. Manekar: Amitkumar S. Manekar is working as an Assistant Professor in the IT Department, SSGMCE, Shegaon. His research areas are Big Data analysis and High Performance Computing. He has guided many undergraduate and postgraduate students.

Page 104

International Journal of Research in Computer & Information Technology (IJRCIT)

