
The system operates on grayscale images and consists of the following stages:

1- Face and eye detection

2- Pupil location

3- Corner detection

4- Calibration

5- Transforming to Window Coordinates

1- Face and Eye Detection

This is the first step in implementing our system, and we decided to use Haar-like features. These features use the changes in contrast values among adjacent rectangles of pixels to determine relative dark and light areas. Two or three rectangles with relative contrast differences form a single Haar-like feature. The features, shown in Figure 2.2 below, are then used to detect objects in an image. These features can be scaled up and down easily by increasing or decreasing the size of the pixel group, which allows Haar-like features to detect objects of various sizes with relative ease (Wilson, P. I., and Fernandez, J. Facial feature detection using Haar classifiers. J. Comput. Sci. Coll. 21, 4 (Apr. 2006), 127-133).

Figure 2.2: Example of Haar features. Taken from (Wilson, P. I., and Fernandez, J. Facial feature detection using Haar classifiers. J. Comput. Sci. Coll. 21, 4 (Apr. 2006), 127-133).

The rectangles themselves are calculated using an intermediate image called the integral image. At each position, the integral image contains the sum of the intensities of all the pixels to the left of and above the current pixel. The equation for this is shown below (Wilson, P. I., and Fernandez, J. Facial feature detection using Haar classifiers. J. Comput. Sci. Coll. 21, 4 (Apr. 2006), 127-133):

$$AI[x,y] = \sum_{x' \le x,\; y' \le y} A(x', y')$$
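As an illustration (not code from the paper), the integral image can be computed with OpenCV and used to sum any rectangle in constant time; the file name is an assumption:

```python
import cv2

# Load a frame as grayscale (file name is illustrative).
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# cv2.integral returns an (h+1) x (w+1) array where entry [y, x] holds the
# sum of all pixels above and to the left of (x, y), i.e. AI in the text.
ai = cv2.integral(img)

# Sum of the rectangle with top-left (x1, y1) and bottom-right (x2, y2),
# obtained from four table lookups regardless of the rectangle's size.
x1, y1, x2, y2 = 10, 10, 50, 30
rect_sum = ai[y2 + 1, x2 + 1] - ai[y1, x2 + 1] - ai[y2 + 1, x1] + ai[y1, x1]

# A Haar-like feature is then just the difference between two or three
# such rectangle sums.
```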

The rotated features in Figure 2.2 require a different integral image, called the rotated integral image, computed with the following equation (Wilson, P. I., and Fernandez, J. Facial feature detection using Haar classifiers. J. Comput. Sci. Coll. 21, 4 (Apr. 2006), 127-133):

$$AR[x,y] = \sum_{x' \le x,\; x' \le x - |y - y'|} A(x', y')$$

Using the integral image, or the rotated integral image as needed, and taking the difference between two or three connected rectangles, it is possible to create a feature of any scale. Calculating features of various sizes requires the same effort as calculating a feature of only two or three pixels; this is the advantage of Haar-like features and makes them fast and efficient to use (Wilson, P. I., and Fernandez, J. Facial feature detection using Haar classifiers. J. Comput. Sci. Coll. 21, 4 (Apr. 2006), 127-133). For the second part, eye detection, we use the same Haar-cascade method as for face detection, applied to the detected face but with different classifiers. The classifiers used for right and left eye detection are haarcascade_righteye_2splits.xml and haarcascade_lefteye_2splits.xml respectively; the eye detection is done with each eye's Haar classifier separately.
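As a minimal sketch of this two-stage detection with OpenCV (the two eye cascades are the ones named above; the frontal-face cascade file name and detection parameters are assumptions):

```python
import cv2

# Load the Haar cascades from OpenCV's bundled data directory.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
right_eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_righteye_2splits.xml")
left_eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_lefteye_2splits.xml")

frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect the face first, then run each eye classifier separately inside it.
for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
    face_roi = gray[y:y + h, x:x + w]
    right_eyes = right_eye_cascade.detectMultiScale(face_roi)
    left_eyes = left_eye_cascade.detectMultiScale(face_roi)
```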
2- Pupil Location

For pupil detection we use Edge Analysis (EA), a method based on the work of S. Asteriadis et al. (S. Asteriadis, N. Nikolaidis, A. Hajdu, I. Pitas, An Eye Detection Algorithm Using Pixel to Edge Information).

Eye location in a picture of a human face makes use of edge pixel information. The input frame is processed by the well-known edge detection algorithm for digital images developed by Canny (Canny, A Computational Approach to Edge Detection, http://www.limsi.fr/Individu/vezien/PAPIERS_ACS/canny1986.pdf). Before edge detection, a Gaussian blur filter is applied to remove unwanted noise. The Canny method is based on two threshold values (an upper and a lower value). The upper threshold is the minimum gradient needed to classify a pixel as an edge pixel; such a pixel is called a strong edge pixel. An edge also contains pixels whose gradient lies between the lower and upper thresholds, provided they have at least one strong edge pixel as a neighbour. The lower threshold protects against splitting edges in low-contrast regions. In our paper the upper and lower thresholds are set to 2.0 and 1.5 times the mean brightness, respectively. The output of the Canny method is a binary picture with edges marked white, as in the following figure.

Input image (A) and the result of the Canny algorithm (B); edges are coloured white.

The second step in detecting the pupil is to find the horizontal and the vertical line that share the highest number of points with the edge map. The pupil centre is then detected as the intersection of these lines (Canny, A Computational Approach to Edge Detection, http://www.limsi.fr/Individu/vezien/PAPIERS_ACS/canny1986.pdf).
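A minimal sketch of this projection-based pupil localisation (an illustration, not the paper's code), assuming the input is a grayscale crop of one eye region produced by the eye classifier:

```python
import cv2
import numpy as np

# Assumed input: a grayscale crop of one eye region from the eye detector.
eye = cv2.imread("eye_region.png", cv2.IMREAD_GRAYSCALE)

# Gaussian blur first to remove noise, then Canny with thresholds tied to
# the mean brightness (1.5x lower, 2.0x upper) as described above.
blurred = cv2.GaussianBlur(eye, (5, 5), 0)
mean = float(blurred.mean())
edges = cv2.Canny(blurred, 1.5 * mean, 2.0 * mean)

# Project the binary edge map onto each axis; the row and the column that
# cross the most edge pixels intersect at the estimated pupil centre.
row_counts = (edges > 0).sum(axis=1)
col_counts = (edges > 0).sum(axis=0)
pupil_x, pupil_y = int(np.argmax(col_counts)), int(np.argmax(row_counts))
print("Estimated pupil centre:", (pupil_x, pupil_y))
```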

3- Corner Detection

One early attempt to find these corners was made by Chris Harris and Mike Stephens in their 1988 paper A Combined Corner and Edge Detector, so the method is now called the Harris Corner Detector. They took this simple idea to a mathematical form: it finds the difference in intensity for a displacement of $(u,v)$ in all directions. This is expressed as:

$$E(u,v) = \sum_{x,y} w(x,y)\,\left[ I(x+u, y+v) - I(x,y) \right]^2$$

The window function $w(x,y)$ is either a rectangular window or a Gaussian window which gives weights to the pixels underneath.

We have to maximise this function for corner detection; that means we have to maximise the second term. Applying a Taylor expansion to the above equation and using some mathematical steps (please refer to any standard textbook for the full derivation), we get the final equation:

$$E(u,v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}$$

where

$$M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x I_x & I_x I_y \\ I_x I_y & I_y I_y \end{bmatrix}$$

Here $I_x$ and $I_y$ are the image derivatives in the x and y directions respectively.

A score $R$ is then computed which decides whether a window contains a corner or not:

$$R = \det(M) - k\,(\operatorname{trace}(M))^2$$

where $\det(M) = \lambda_1 \lambda_2$ and $\operatorname{trace}(M) = \lambda_1 + \lambda_2$, and $\lambda_1$ and $\lambda_2$ are the eigenvalues of $M$.

So the values of these eigenvalues decide whether a region is a corner, an edge, or flat:

When $|R|$ is small, which happens when $\lambda_1$ and $\lambda_2$ are small, the region is flat.

When $R < 0$, which happens when $\lambda_1 \gg \lambda_2$ or vice versa, the region is an edge.

When $R$ is large, which happens when $\lambda_1$ and $\lambda_2$ are large and $\lambda_1 \approx \lambda_2$, the region is a corner.

This can be represented as a picture in the $(\lambda_1, \lambda_2)$ plane, with the flat, edge, and corner regions marked.

So the result of Harris corner detection is a grayscale image of these scores; thresholding at a suitable value gives the corners in the image. In our paper we use OpenCV's function cv2.cornerHarris() for this purpose. Its arguments are listed below, followed by a usage sketch:

img - input image; it should be grayscale and of float32 type.

blockSize - the size of the neighbourhood considered for corner detection.

ksize - aperture parameter of the Sobel derivative used.

k - Harris detector free parameter in the equation.
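A minimal usage sketch (the file name and parameter values are illustrative assumptions):

```python
import cv2
import numpy as np

img = cv2.imread("frame.png")
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

# Compute the Harris response R for every pixel.
dst = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Threshold at a fraction of the maximum response and mark corners in red.
img[dst > 0.01 * dst.max()] = (0, 0, 255)
cv2.imwrite("corners.png", img)
```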

4- Calibration

The aim of calibration in our program is to record reference points that can be used later to translate the pupil and corner positions into a position on the screen. We used the calibration technique from [Ciesla Michal, K. P. Eye Pupil Location Using Webcam. Online. Accessed 31 October 2012. Available from: http://arxiv.org/ftp/arxiv/papers/1202/1202.6517.pdf], which presents a black screen with a single green dot drawn on it. The user must look at the dot and click to record the coordinates. The next dot is then displayed, until twelve dot positions are recorded (Frieslaar, I. Moving the mouse pointer using eye gazing. Published through Department of Computer Science, University of the Western Cape, 2011).
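A minimal sketch of this calibration loop, assuming a hypothetical get_pupil_offset() helper that returns the current pupil-to-corner vector from the steps above; the screen resolution, the 4 x 3 dot layout, and the key press in place of a mouse click are all assumptions, since the cited works only specify twelve positions shown one at a time:

```python
import cv2
import numpy as np

SCREEN_W, SCREEN_H = 1280, 1024  # assumed screen resolution

# Twelve dot positions on an assumed 4 x 3 grid.
dots = [(int(SCREEN_W * (i + 0.5) / 4), int(SCREEN_H * (j + 0.5) / 3))
        for j in range(3) for i in range(4)]

samples = []  # (pupil-to-corner offset, screen point) pairs
for dot in dots:
    screen = np.zeros((SCREEN_H, SCREEN_W, 3), np.uint8)  # black screen
    cv2.circle(screen, dot, 10, (0, 255, 0), -1)          # single green dot
    cv2.imshow("calibration", screen)
    cv2.waitKey(0)  # the user fixates the dot and presses a key to record
    samples.append((get_pupil_offset(), dot))  # hypothetical helper
```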
2- Gaze Calculation and Tracking

A gaze tracking system is a combination of several different techniques, of which eye tracking is only a part.
