
Hussein Ali*, Dr. Attila Fazekas*
Faculty of Informatics, University of Debrecen
Kassai ut 26, 4028 Debrecen, Hungary
E-mail: smart_programer@yahoo.com, attila.fazekas@unideb.hu

Abstract: The purpose of this paper is to track the gaze and plot a diagram that gives information about the patient's state, for example whether he or she is angry or under stress, so that giving the medicine can be postponed until the patient's condition improves. Such a system can help doctors a great deal, makes it easy to deal with patients without problems, and improves the patients' health condition. We use a regular low-resolution web camera, which keeps development economical. To achieve the system's goal we use the OpenCV (Open Source Computer Vision) library, which is open source and therefore makes the system very economical.

Keywords: Haar-Like Features, Gaze Detection, Viola-Jones Algorithm, ROI.

INTRODUCTION

Gaze estimation is the process of measuring either the point of gaze or the motion of an eye relative to the head. The device used to measure gaze is called an eye tracker [1]. Tracking the eye gaze of an individual has numerous applications in many domains. Gaze tracking is a powerful tool for the study of real-time cognitive processing and information transfer, as human attention can in many cases be deduced by following gaze to the object of interest. The latest human-computer interaction devices use a person's gaze as input to the computer in the same way as a mouse. Gaze estimation has also found use in the auto industry for monitoring driver vigilance based on the driver's gaze patterns [2]. Eye gaze also has applications in the medical field, for example in studying the behavior of patients suffering from neurological, communication and vision disorders such as pervasive developmental disorder and autism spectrum disorder. Studies show that during a typical interaction with another person, rather than focusing on the eyes, children with ASD (Autism Spectrum Disorder) tend to focus more on the mouth, bodies and other objects in the scene [3-5]. Thus the gaze pattern could serve as a useful tool for early detection of ASD. Most of the accurate eye tracking devices available today are expensive because they require high-quality cameras and specially designed equipment such as custom-designed glasses, high-speed machines, etc. Some of the devices require trained individuals to collect the raw data and conduct the experiments. The techniques they use mostly concentrate on the reflection of IR light on the pupil and track its position using mathematical analysis. In this paper we show that the iris and sclera can also be used for gaze estimation and that they can be detected easily. The objective of this paper is to devise a simple webcam-based eye tracking algorithm that can provide reasonably good accuracy.

Gaze tracking has been widely used in a variety of applications, from tracking consumers' gaze fixation on advertisements and controlling human-computer devices to understanding the behavior of patients with various types of visual and/or neurological disorders such as autism. We address the cost problem by using simple and cheap hardware that is affordable for every person who wants to create a gaze tracking system. In this system we try to prove that it is possible to create a gaze tracking system using a regular web camera and the free, open-source computer vision library OpenCV.

Previous work:

The most effective methods for detecting, locating and tracking the gaze precisely involve expensive tools. In the method mentioned in (1), the author used a contact lens fixed in place on the eye. Tracking something attached to the lens simplifies the problem, but while this method is accurate, it is invasive and uncomfortable to use, which makes it impractical as a solution for this application. The goal of our research is to create an easily attainable and user-friendly device that is not invasive and could become commercially viable as a method of computer interfacing that can replace the mouse and the keyboard.
Fig (1) Smart contact lens for eye tracking

Other authors used electro-oculography (EOG). In this method electrodes are placed on the skin around the eyes; these electrodes measure the changes in the electrical field as the eyes move, and the eye movements can be calculated from these data. This method performs better when measuring relative eye movements rather than absolute movements, and it is limited in accuracy and very susceptible to noise.

Fig (2) EOG (electro-oculography)

The third method is video image analysis. A video camera is used to observe the user's eyes, and image processing techniques are then used to analyze the image and track the eye features. Based on a calibration, the system determines where the user is currently looking. The following figure shows the high-level design of our system.

Fig (3) High-level design of the system: webcam input -> image processing -> drawing the diagram
IMPLEMENTATION:

Fig (4) Low-level design of the system: webcam input -> grayscale image -> histogram equalization -> eye detection using Haar-like features -> center of the eye (actual ROI position) -> transforming to window coordinates -> drawing the diagram

Input Webcam

The first step of our project is to get a single frame from the camera. The camera provides an input stream consisting of a sequence of frames; the system deals with each frame separately and scans it for the special and important features that define a face, based on the functions defined in our project. The system treats every image captured from the webcam as an array of data: the more detail the image contains, the more exact information the computer can extract from it. The webcam also determines the number of frames per second, which is especially important in our system because it determines how much the computer lags behind the user's movements.
Fig (5) Capturing a frame from the webcam
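As an illustration of this step, the following Python/OpenCV sketch grabs frames from a regular webcam; the device index 0 and the display loop are our assumptions, not the exact code used in the paper.

import cv2

cap = cv2.VideoCapture(0)          # open the default webcam (device index assumed)
fps = cap.get(cv2.CAP_PROP_FPS)    # frames per second reported by the camera
print("camera reports", fps, "fps")

while True:
    ok, frame = cap.read()         # each frame is an array of pixel data
    if not ok:
        break                      # camera disconnected or stream ended
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):   # press 'q' to stop
        break

cap.release()
cv2.destroyAllWindows()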

IMAGE PROCESSING

The video obtained from the webcam needs to be filtered in order to extract an exact and accurate position from it. This is done by the following image processing sub-steps:

1- Obtaining a grayscale image by transforming the colored image

The colored image captured from the webcam is put through the first step of image processing to obtain a grayscale image (gray frame). The gray frame has only one channel (8 bit), whereas the colored image had three channels representing the three major colors (RGB), which makes it easier to process and manipulate. An example of the transformation of the color image is shown in the following figure.

Fig (6) Transformation of the color image to a gray image
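A minimal sketch of this conversion with OpenCV's cvtColor, assuming a frame grabbed from the webcam as in the previous step:

import cv2

cap = cv2.VideoCapture(0)                 # webcam, device index assumed to be 0
ok, frame = cap.read()                    # BGR color frame (3 channels)
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # single 8-bit channel
    print(frame.shape, "->", gray.shape)             # e.g. (480, 640, 3) -> (480, 640)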

2-Histogram Equalization

The grayscale image obtained above is then equalized using its histogram to improve the quality of the picture by making use of all pixel intensities equally. To do this, consider a discrete grayscale image {x} and let n_i be the number of occurrences of gray level i. The probability of an occurrence of a pixel of level i in the image is
p_x(i) = p(x = i) = n_i / n,   0 <= i < L        (1)

L being the total number of gray levels in the image (typically 256), n being the total number of pixels in the image, and p_x(i) being in fact the image's histogram for pixel value i, normalized to [0, 1].

Fig (7) Histogram equalization (picture taken from https://www.packtpub.com/mapt/book/hardware_and_creative/9781849697200/2/ch02lvl1sec24/histogram-equalization-for-contrast-enhancement)
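A hedged sketch of this step is given below: the probabilities of Eq. (1) and the cumulative distribution are computed explicitly (in a simplified form) and compared with OpenCV's built-in equalizeHist; the input file name is a placeholder, not a file from the paper.

import cv2
import numpy as np

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)    # placeholder input image

# Probabilities p_x(i) = n_i / n from Eq. (1), computed explicitly.
hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
p = hist / gray.size                                   # normalized histogram
cdf = np.cumsum(p)                                     # cumulative distribution

# Simplified remapping of each gray level through the CDF ...
equalized_manual = np.uint8(255 * cdf[gray])

# ... which is essentially what OpenCV's built-in routine does for us.
equalized = cv2.equalizeHist(gray)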

Face and Eye detection

Object detection is the first and most important step in our project. When the program starts running, the camera starts working. The video obtained from the camera is treated as separate frames, and each frame is analyzed in search of an object (in our project, the face) to be detected using Haar classifiers.

Haar-like features are digital image features used in object recognition. They owe their name to their intuitive similarity to Haar wavelets (a sequence of rescaled "square-shaped" functions which together form a wavelet family or basis; wavelet analysis is similar to Fourier analysis in that it allows a target function over an interval to be represented in terms of an orthonormal function basis, and the Haar sequence is now recognized as the first known wavelet basis and is extensively used as a teaching example). They were used in the first real-time face detector (Haar-Like Features, http://en.wikipedia.org/wiki/Haar-like_features, June 23, 2013).
Fig Haar-like Features
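As a toy illustration (not taken from the paper), the value of a two-rectangle Haar-like feature can be computed in constant time from an integral image; the 24x24 random patch below is purely synthetic.

import numpy as np

img = np.random.randint(0, 256, (24, 24)).astype(np.int64)   # synthetic 24x24 patch
ii = img.cumsum(axis=0).cumsum(axis=1)                        # integral image

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y), width w, height h."""
    total = ii[y + h - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0 and y > 0:
        total += ii[y - 1, x - 1]
    return total

# Horizontal two-rectangle feature: sum of the left half minus sum of the right half.
feature = rect_sum(ii, 0, 0, 12, 24) - rect_sum(ii, 12, 0, 12, 24)
print("feature value:", feature)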

Haar classifiers changed the conventional way of working with image intensities only (RGB pixel values) and suggested working with an alternative feature set based on Haar wavelets [10]. The Haar classifier contains a ready set of functions used for detecting objects belonging to a person, such as a face, a pair of eyes, a single eye or a nose; the ones used here are the Haar face function and the Haar eye function. The word "cascade" in the classifier name means that the resultant classifier consists of several simpler classifiers (stages) that are applied subsequently to a region of interest until at some stage the candidate is rejected or all the stages are passed. The classifiers at every stage of the cascade are complex themselves and are built out of basic classifiers using one of four different boosting techniques (weighted voting). The basic classifiers are decision-tree classifiers with at least 2 leaves. Haar-like features are the input to the basic classifiers, and the feature used in a particular classifier is specified by its shape and/or position within the region of interest and its scale (Face Detection using OpenCV, http://opencv.willowgarage.com/wiki/FaceDetection, July 16, 2013).
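One possible way to load such pre-trained cascade classifiers in OpenCV is sketched below; the XML file names are the ones shipped with the library, since the paper does not state which cascade files were used.

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

# empty() is True when the XML file could not be loaded.
assert not face_cascade.empty() and not eye_cascade.empty(), "cascade files not found"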

Haar Face classifier

The face is detected by the Haar classifier function that uses Haar-like features. The classifier uses built-in standards for detecting and identifying the face and starts tracking it while it stays within the range of the webcam. The following figure shows face detection using the Haar classifier.

Fig (11) Haar face classifier
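The detection step might look like the following sketch; the scaleFactor and minNeighbors values and the test image path are illustrative assumptions, not the parameters reported in the paper.

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)      # placeholder test image
gray = cv2.equalizeHist(gray)                            # equalized as in the previous step

# detectMultiScale returns an (x, y, w, h) rectangle for every detected face.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(gray, (x, y), (x + w, y + h), 255, 2)  # mark the detected face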


Haar Eye classifier

In our project we use the same algorithm that we used to detect the face to detect the eyes (Haar-like features), as shown in the following figure.

Fig (12) Haar eye classifier (picture taken from https://dzone.com/articles/face-and-eyes-detection-opencv)

Finding the region of interest (ROI)

The region of interest in our paper is found in two sequential steps, as sketched below (5):

1- The face ROI is found from the face detected by the Haar face classifier.

2- The eye ROI is detected inside the face ROI using the Haar eye classifier.

The eye region of interest determines the width and height of the ROI.
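A minimal sketch of this two-step search, assuming the cascades and the equalized grayscale image from the previous steps (the image path is again a placeholder):

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

gray = cv2.equalizeHist(cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE))  # placeholder

# Step 1: face ROI from the Haar face classifier.
for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.1, 5):
    face_roi = gray[fy:fy + fh, fx:fx + fw]
    # Step 2: eye ROIs searched only inside the face ROI.
    eyes = eye_cascade.detectMultiScale(face_roi, 1.1, 5)
    for (ex, ey, ew, eh) in eyes:
        print("eye ROI (relative to the face ROI):", ex, ey, ew, eh)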

The first and second features selected by AdaBoost are shown in the top row and then overlaid on a typical training face in the bottom row. The first feature measures the difference in intensity between the region of the eyes and a region across the upper cheeks; it capitalizes on the observation that the eye region is often darker than the cheeks. The second feature compares the intensities in the eye regions to the intensity across the bridge of the nose.

Measuring the center of the eye


In our paper we measure the center of the eyes from the equalized ROI edges. To measure the center of the eyes, we start for the right eye from the left edge of the face and for the left eye from the right edge of the face, beginning with the X-coordinates for both eyes and then moving to the Y-coordinates for both the left and the right eye. The measurements use values common for people with an average-sized face, as shown in the following figure.

Fig (13) Eye center calculation
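As a simplified sketch (the paper measures from the face edges for an average-sized face, which we approximate here by the geometric center of the detected eye ROI, expressed in full-frame coordinates):

def eye_center(face_rect, eye_rect):
    fx, fy, fw, fh = face_rect
    ex, ey, ew, eh = eye_rect           # eye rectangle relative to the face ROI
    cx = fx + ex + ew // 2              # X-coordinate of the eye center
    cy = fy + ey + eh // 2              # Y-coordinate of the eye center
    return cx, cy

print(eye_center((100, 80, 200, 200), (30, 50, 40, 40)))   # -> (150, 150)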

Calculating the actual position

When the center of the eye is detected, we consider this point to be the actual position of the eye for the first frame captured from the camera. This point is stored in a matrix whose first element is the coordinates of the current position. The X-coordinate of the point is considered to be the X-position of the pupil, while the Y-coordinate is considered to be the Y-position of the pupil.

Transfer to screen coordinates


For each detected position, the new position must be compared with the previous one and stored in the matrix. It is then subtracted from the previous position; if any difference occurred, this difference is scaled to the screen dimensions and added to the previously stored position to get the actual mapping on the screen.
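A hedged sketch of this mapping: the difference between the current and previous eye-center positions is scaled to the screen size and accumulated. The screen resolution, the ROI size used for scaling and the clamping are our assumptions.

SCREEN_W, SCREEN_H = 1920, 1080   # assumed screen resolution
ROI_W, ROI_H = 40, 40             # assumed eye-ROI size in pixels

def update_screen_position(prev_screen, prev_eye, curr_eye):
    dx = curr_eye[0] - prev_eye[0]                # difference from the previous position
    dy = curr_eye[1] - prev_eye[1]
    sx = prev_screen[0] + dx * SCREEN_W / ROI_W   # scale the difference to the screen
    sy = prev_screen[1] + dy * SCREEN_H / ROI_H
    # clamp the result to the screen
    return min(max(sx, 0), SCREEN_W - 1), min(max(sy, 0), SCREEN_H - 1)

print(update_screen_position((960, 540), (150, 150), (152, 149)))   # -> (1056.0, 513.0)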

Gaze pointer test for persons

Below is an example of a patient diagram; it shows, for instance, that one patient was a little nervous.

Name          Quality of program    State
Patient (x)   90%                   Angry
Patient (y)   80%                   Nervous
Patient (z)   95%                   Calm
Patient (m)   70%                   Afraid
Patient (n)   80%                   Bored

Table 1: The test persons and their states
Fig. Quality of the program for each state (angry, calm, afraid, bored)

Conclusion

The functionality of the eye tracking system is to detect the face and the eyes, track them, get the position of the gaze and draw a diagram of these points to determine where the user is looking. In this paper we successfully detected and tracked the gaze using a low-resolution camera, cheap equipment and a simple algorithm, and obtained good and almost exact results that help doctors know the patient's current state so they can work with him or her smoothly. This saves time and cost with cheaper and simpler pieces of equipment.

Acknowledgements

The work of H.AlHAMZAWI was supported by the Stipendium Hungaricum Scholarship.
