
KONGU ENGINEERING COLLEGE, Perundurai, Erode.

Department of Mechatronics

ADVANCED FATIGUE DETECTION AND ACCIDENT PREVENTION SYSTEM

1) A.V. Hari Raja (II-Mechatronics), E-mail: avhariraja@gmail.com, Mobile: 9150422617

2) Kingsley Anand Shanmugam (II-Mechatronics), E-mail: kingsleyanandshanmugam@rocketmail.com, Mobile: 918754123915


ABSTRACT: In recent days traffic accidents have increased on a great scale, and among the major factors are driver drowsiness and fatigue. The purpose of this paper is to develop a system that detects fatigue symptoms in drivers and produces timely warnings that could prevent accidents. The system also analyses the speed and distance of the neighboring cars and responds so as to prevent accidents by applying the brakes. It is advantageous as well during emergency cases like heart attack and unconsciousness.

INTRODUCTION: Driver fatigue resulting from sleep deprivation or sleep disorders is an important factor in the increasing number of accidents on today's roads. The main purpose of this work was to develop a system to detect fatigue symptoms in drivers and produce timely warnings that could prevent accidents. In the trucking industry, 57% of fatal truck accidents are due to driver fatigue, and driver fatigue is a significant factor in a large number of vehicle accidents generally. Recent statistics estimate that annually 1,200 deaths and 76,000 injuries can be attributed to fatigue-related crashes. The main components of the system are a remotely located video CCD camera, a specially designed hardware system for real-time image acquisition and for controlling the illuminator and the alarm system, and various computer vision algorithms for simultaneously, in real time and non-intrusively, monitoring the visual bio-behaviors that typically characterize a driver's level of vigilance.

This paper presents a real-time approach for detection of driver fatigue. The algorithm developed is distinct from any currently published, which was a primary objective of this paper. The eyes are located by a camera, and the intensity changes in the eye area (including the effect of the red pupil) determine whether the eyes are open or closed. If the eyes are found closed for 5 consecutive frames, the system concludes that the driver is falling asleep and issues a warning signal. The system is also capable of working under reasonable lighting and climatic conditions: it acquires images with a consistent photometric property under different climatic/ambient conditions using a near-infrared (NIR) illuminator and CCD camera, which is an advantage of this system. For detecting a neighboring car's index, the techniques used are the homographic function, the homomorphic function and radar sensing. In case the driver is still unconscious, the system provides automatic braking to avoid a crash with the adjacent cars.

FUNCTIONAL DESCRIPTION:

The inputs to the system are images from a video camera mounted in front of the car; the system analyzes each frame to detect the face region. The face is detected by searching for skin-color-like pixels in the image. A blob separation performed on the grayscale image then helps obtain just the face region. In the eye-tracking phase, the face region obtained from the previous stage is searched to localize the eyes using a pattern-matching method. Templates, obtained by subtracting two frames and performing a blob analysis on the difference grayscale image, are used for localizing the driver's eyes. The eyes are then analyzed to detect whether they are open or closed. If the eyes remain

closed continuously for more than a certain number of frames, the system decides that the eyes are closed and gives a fatigue alert. It also checks continuously for tracking errors; after detecting errors in tracking, the system starts all over again from face detection. The main focus is on the detection of micro-sleep symptoms. This is achieved by monitoring the eyes of the driver

throughout the entire video sequence. The three phases involved in achieving this are: (i) localization of the face, (ii) tracking of the eyes in each frame, and (iii) detection of failure of tracking.

Outline of the System
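As a minimal sketch of this outline, the loop below wires the three phases together. The helper functions passed in are placeholders for the stages detailed in the following sections, and the 5-consecutive-frame rule follows the introduction; all names here are illustrative assumptions.

```python
# Minimal sketch of the processing outline; helpers are placeholders.
CLOSED_FRAMES_FOR_ALERT = 5

def monitor(frames, detect_face, classify_eyes, alert):
    face_box, closed_count = None, 0
    for frame in frames:
        if face_box is None:
            face_box = detect_face(frame)          # (i) localization of the face
            continue
        state = classify_eyes(frame, face_box)     # (ii) eye tracking: "open"/"closed"/"failed"
        if state == "failed":                      # (iii) tracking failure: start over
            face_box, closed_count = None, 0
            continue
        closed_count = closed_count + 1 if state == "closed" else 0
        if closed_count >= CLOSED_FRAMES_FOR_ALERT:
            alert()                                # fatigue alert (e.g., warning signal)
```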

1) FACE DETECTION: Normalized chromatic color representations are defined as the normalized r and g components of the RGB color space. This representation removes the brightness information from the RGB signal while preserving its color. Further, the complexity of the RGB color space is reduced by the dimensional reduction to a simple RG color space. Skin color models vary with the skin color of the people, the video cameras used, and the lighting conditions. Using the skin color model, the system filters the incoming video frames to allow only those pixels with a high likelihood of being skin pixels. The system uses a threshold to separate the skin-like pixels from the rest of the image. The filtered image is then binarized, and a blob operation is performed to detect the face region within the rest of the image space. In order to reduce the computational cost and speed up the processing, each incoming frame is subsampled to a 160x120 frame.
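The following is a rough sketch of this stage, assuming a simple box-shaped skin model in normalized rg-space; a real deployment would train the model for the camera and lighting in use, and the bounds and threshold here are illustrative assumptions.

```python
# Skin-color face localization sketch: rg chromaticity + threshold + largest blob.
import cv2
import numpy as np

def detect_face_region(frame_bgr):
    """Return the bounding box (x, y, w, h) of the largest skin-colored blob, or None."""
    small = cv2.resize(frame_bgr, (160, 120))       # subsample as in the text
    b, g, r = cv2.split(small.astype(np.float32))
    total = b + g + r + 1e-6                        # avoid division by zero
    rn, gn = r / total, g / total                   # brightness-free rg chromaticity
    mask = ((rn > 0.40) & (rn < 0.60) &
            (gn > 0.25) & (gn < 0.40)).astype(np.uint8) * 255   # assumed skin box
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return tuple(stats[largest, :4])                # coordinates in the 160x120 frame
```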

2) TRACKING OF EYES: The reference eye patterns for each user are recovered beforehand by taking the difference of two images. The eye blink is used to estimate the position of the eye: the eye templates are recovered by taking the difference of the two images and employing blob (area) operations to isolate the eye regions. For correct detection of the eye templates, it is required that there is no motion of the face other than the eye blinks. Each eye pattern consists of the eye centered at the center of the user's iris. The system searches for the open eyes, starting with the left eye first and then looking for the right eye. If the scores for the open eyes are reasonably higher than the acceptance level, the system decides that the eyes are open and does not search for the closed eye patterns in the image.
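A sketch of the template-acquisition step just described: subtracting two frames around a blink (the face otherwise still) leaves blobs at the eye positions. The threshold and padding values are illustrative assumptions.

```python
# Eye-template extraction from a blink difference image.
import cv2
import numpy as np

def extract_eye_templates(frame_a, frame_b, pad=8):
    """frame_a, frame_b: grayscale frames before and after an eye blink."""
    diff = cv2.absdiff(frame_a, frame_b)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 3:                      # need at least two blobs (plus background)
        return []
    largest_two = np.argsort(stats[1:, cv2.CC_STAT_AREA])[::-1][:2] + 1
    templates = []
    for i in largest_two:
        x, y, w, h = stats[i, :4]
        templates.append(frame_a[max(0, y - pad):y + h + pad,
                                 max(0, x - pad):x + w + pad])
    return templates               # patches roughly centered on the eyes
```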

3) DETECTION OF FAILURE: The threshold scores fixed for the open eyes include a minimum above which the system decides that the eyes are probably open. When the scores are above the maximum threshold, the system decides for sure that the eyes are open and does not search for the closed eyes in the image. But if the scores are between the minimum and the maximum limits, the system searches the image for closed eyes too, in order to remove any mismatches. In case the eyes of the subject remain closed for unusually long periods of time, the system gives a fatigue alert; the alert persists as long as the person does not open his eyes. In case all the matches fail, the system decides that there is a tracking failure and switches back to the face localization stage. As the face of the driver does not move much between frames, the same region can be used for searching the eyes in the next frame.
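A sketch of this two-threshold decision and the failure fallback, using normalized template matching; both limits are assumed values, not the paper's calibrated thresholds.

```python
# Open/closed/failed decision with a minimum and a maximum acceptance threshold.
import cv2

OPEN_MIN, OPEN_MAX = 0.55, 0.80    # assumed thresholds

def eye_state(face_gray, open_tmpl, closed_tmpl):
    s_open = cv2.matchTemplate(face_gray, open_tmpl, cv2.TM_CCOEFF_NORMED).max()
    if s_open >= OPEN_MAX:
        return "open"              # surely open: skip the closed-eye search
    s_closed = cv2.matchTemplate(face_gray, closed_tmpl, cv2.TM_CCOEFF_NORMED).max()
    if s_open >= OPEN_MIN and s_open >= s_closed:
        return "open"              # ambiguous band: closed eyes checked as well
    if s_closed >= OPEN_MIN:
        return "closed"
    return "failed"                # all matches failed: redo face localization
```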

DISADVANTAGES: (i) There were mismatches, especially in the case of closed eyes, as the system would find some other part of the skin region as the eye; thus there were misses due to incorrect matching with facial hair for the open eyes and with other parts of the face for the closed eyes. (ii) The system could not track the eyes when the subject wears glasses while driving. (iii) The system also could not track the eyes when the subject's head rotation is above 45 degrees or the head is tilted up.

ADVANCED FATIGUE DETECTION SYSTEM: People in fatigue exhibit certain visual behaviors that are easily observable from changes in facial features like the eyes, head and face. This system can simultaneously, and in real time, monitor several visual behaviors that typically characterize a person's level of alertness while driving. These visual cues include eyelid movement, pupil movement and face orientation. The fatigue parameters computed from these visual cues are subsequently combined to form a composite fatigue index that can robustly and accurately characterize one's vigilance level.

IMAGE ACQUISITION SYSTEM: The purpose of image acquisition is to acquire the video images of the driver's face in real time. The acquired images should have a relatively consistent photometric property under different climatic/ambient conditions and should produce distinguishable features that can facilitate the subsequent image processing. To this end, the person's face is illuminated using a

near-infrared (NIR) illuminator. The use of an infrared (IR) illuminator serves three purposes: first, it minimizes the impact of different ambient light conditions, ensuring image quality under varying real-world conditions including poor illumination, day and night; second, it produces the bright pupil effect, which constitutes the foundation for detection and tracking of the proposed visual cues; third, since NIR is barely visible to the driver, it minimizes any interference with the driver's driving.

Principle of Bright and Dark Pupil Effects

A bright pupil can be obtained if the eyes are illuminated with a NIR illuminator beaming light along the camera optical axis at a certain wavelength. At the NIR wavelength, pupils reflect almost all the IR light they receive back along the path to the camera, producing the bright pupil effect, very similar to the red-eye effect in photography. If illuminated off the camera optical axis, the pupils appear dark, since the reflected light does not enter the camera lens; this produces the so-called dark pupil effect. The IR illuminator consists of two sets of IR LEDs, distributed evenly and symmetrically along the circumference of two coplanar concentric rings. The center

of both rings coincides with the camera optical axis. The use of multiple IR LEDs generates light strong enough that the IR illumination from the illuminator dominates the IR radiation reaching the driver's face, greatly minimizing the IR effect from other sources. This ensures the bright pupil effect under different climatic conditions. The use of more than one LED also makes it possible to produce the bright pupil effect for subjects far away (3 ft) from the camera. To further minimize interference from light sources beyond IR light, and to maintain uniform illumination under different climatic conditions, a narrow band-pass NIR filter is attached to the front of the lens.

Acquired Image with Desired Bright Pupil Effect and Sharp Pupil Spot

PUPIL DETECTION AND TRACKING:

Pupil Detection And Tracking System Flowchart

ILLUMINATION INTERFERENCE REMOVAL VIA IMAGE SUBTRACTION:

The detection algorithm starts with a preprocessing step to minimize interference from illumination sources other than the IR illuminator; this includes sunlight and ambient light interference. The system uses a video decoder that detects, from each interlaced image frame (the camera output), the even and odd field signals, which are then used to alternately turn on the outer and inner IR rings to produce the dark and bright pupil image fields. The system then separates each frame into two image fields (even and odd), representing the bright and dark pupil images separately. The even image field is then digitally subtracted from the odd image field to produce the difference image. The background clusters removal is accomplished by subtracting the image with only ambient light illumination from the one illuminated by both the IR illuminator and the ambient light. The resultant image contains the illumination effect from only the IR illuminator, with bright pupils and a relatively dark background. To uniquely detect pupils, other bright areas in the image must be removed, or they may adversely affect pupil detection. This method has been found very effective in improving the robustness and accuracy of the eye tracking.

Background illumination interference removal: (a) the image field obtained with both ambient and IR light; (b) the odd image field obtained with only ambient light; (c) the image resulting from subtraction of (b) from (a).
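A minimal sketch of the field-separation and subtraction step, assuming the two fields can be recovered by row parity from the interlaced frame; the paper uses dedicated video-decoder hardware for this, and the threshold value is assumed.

```python
# Even/odd field subtraction leaving bright pupils on a dark background.
import cv2

def bright_pupil_difference(interlaced_gray):
    rows = interlaced_gray.shape[0] // 2 * 2      # drop a trailing row if odd
    odd = interlaced_gray[0:rows:2, :]            # field lit by the inner ring (bright pupils)
    even = interlaced_gray[1:rows:2, :]           # field lit by the outer ring (dark pupils)
    diff = cv2.subtract(odd, even)                # saturating subtraction, stays >= 0
    _, pupil_mask = cv2.threshold(diff, 50, 255, cv2.THRESH_BINARY)
    return pupil_mask
```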

Block Diagram of Image Subtraction Circuitry

For determining the initial pupil position, given the image resulting from the removal of the external illumination disturbance, pupils may be detected by searching the entire image to locate two bright regions that satisfy certain size, shape and distance constraints. To do so, a search window scans through the image; at each location, the portion of the image covered by the window is examined to determine its intensity distribution. Each candidate blob is then validated based on its shape, size, distance to the other detected pupil, and motion characteristics, to ensure it is a pupil.
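A hedged sketch of this pupil search: scan the bright blobs of the difference image and accept a pair satisfying size, shape and distance constraints. All numeric bounds are illustrative assumptions.

```python
# Pupil-pair search over blobs of the thresholded difference image.
import itertools
import cv2
import numpy as np

def find_pupil_pair(pupil_mask):
    n, _, stats, centroids = cv2.connectedComponentsWithStats(pupil_mask)
    candidates = []
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        if 10 <= area <= 400 and 0.5 <= w / float(h) <= 2.0:   # size and roundness
            candidates.append(centroids[i])
    for a, b in itertools.combinations(candidates, 2):
        if 20 <= np.hypot(*(a - b)) <= 120:                    # plausible inter-pupil distance
            return a, b
    return None
```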

COMPUTATION OF EYELID MOVEMENT PARAMETERS: Eyelid movement is one of the visual behaviors that reflect a person's level of fatigue. There are several ocular measures to characterize eyelid movement, such as eye blink frequency, eye closure duration, eye closure speed, and the recently developed parameter PERCLOS. PERCLOS measures the percentage of eye closure over time, excluding the time spent on normal closure. Another ocular parameter that could potentially be a good indicator of fatigue is eye closure/opening speed, i.e., the amount of time needed to fully close the eyes and to fully open the eyes. An eye closure occurs when the size of the detected pupil shrinks to a fraction (say 20%) of its nominal size. As shown in the figure below, an individual eye closure duration is defined as the time difference between two consecutive time instants, t2 and t3, between which the pupil size is 20% or less of the maximum pupil size. An individual eye closure (or opening) speed is defined as the time period from t1 to t2 (or from t3 to t4), during which the pupil size is between 20% and 80% of the nominal pupil size.
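The sketch below computes the measures just defined over a timestamped series of pupil sizes. The 20% fraction follows the text; the exclusion of normal closures from PERCLOS is omitted here for brevity and would be an additional filtering step.

```python
# PERCLOS-style measures from a timestamped pupil-size series.
def eyelid_parameters(times, pupil_sizes):
    nominal = max(pupil_sizes)
    closed = [s <= 0.2 * nominal for s in pupil_sizes]
    # PERCLOS: fraction of total time with the pupil at 20% or less of nominal size.
    closed_time = sum(t2 - t1 for t1, t2, c in zip(times, times[1:], closed) if c)
    perclos = closed_time / (times[-1] - times[0])
    # Individual closure durations: spans such as [t2, t3] in the figure below.
    durations, start = [], None
    for t, c in zip(times, closed):
        if c and start is None:
            start = t
        elif not c and start is not None:
            durations.append(t - start)
            start = None
    return perclos, durations
```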

Definition of Eye Closure Duration and Eye Open/Close Speed

FACE ORIENTATION DETERMINATION:

The system recovers the 3D face pose from a monocular view of the face with full perspective projection. There is a direct correlation between the 3D face pose and properties of the pupils, such as pupil size, inter-pupil distance and pupil shape. The following observations are apparent from the images above: (i) the inter-pupil distance decreases as the face rotates away from the frontal orientation; (ii) the ratio between the average intensities of the two pupils either increases to over one or decreases to less than one as the face rotates away or rotates up/down; (iii) the shapes of the two pupils become more elliptical as the face rotates away or rotates up/down; (iv) the sizes of the pupils also decrease as the face rotates away or rotates up/down.
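As an illustration, the snippet below extracts cues (i)-(iv) from a detected pupil pair; the input dictionary layout and the frontal baseline are assumed placeholders, not the paper's actual pose-recovery method.

```python
# Coarse face-pose cues from pupil properties.
import numpy as np

def pose_cues(left, right, frontal_ipd):
    """left, right: dicts with 'center' (x, y), 'intensity', 'axes' (major, minor)."""
    ipd = np.hypot(*np.subtract(left["center"], right["center"]))
    return {
        "ipd_ratio": ipd / frontal_ipd,                              # (i) drops below 1 off-frontal
        "intensity_ratio": left["intensity"] / right["intensity"],   # (ii) drifts away from 1
        "ellipticity": (left["axes"][0] / left["axes"][1]
                        + right["axes"][0] / right["axes"][1]) / 2,  # (iii) grows off-frontal
        "mean_size": np.pi * (left["axes"][0] * left["axes"][1]
                              + right["axes"][0] * right["axes"][1]) / 2,  # (iv) shrinks
    }
```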

WHAT'S AFTER THE DRIVER'S FATIGUE DETECTION? Accidents can take place in a fraction of a second: when a person becomes drowsy, he loses his control of driving. We have therefore implemented a method to prevent this. The main aim of this method is to wake the driver from his drowsiness and to prevent the host vehicle from an accident.

ALERTING THE DRIVER: The first step in this method is to provide a vibrating seat for the driver. The vibration of the seat is produced with the help of a stepper motor, to wake the driver from his drowsiness.

ACCIDENT PREVENTION: Even though the driver recovers from his drowsiness, he might have lost his driving control, so it is necessary to provide automatic braking. The surrounding vehicles must be analyzed for automatic braking, which is done by a CCD (Charge-Coupled Device) camera and RADAR sensors. They are placed at specific positions for identifying the neighboring cars.

The main aims are the verification of obstacles and the detection of obstacle boundaries; this allows the system to analyze the situation for carrying out emergency braking. The verification of obstacles is done by analyzing the scaling of the obstacles as they get closer to the camera. The CCD camera detects the obstacles within an 8-degree span, but in times of snowfall it is inefficient. To overcome this, the CCD camera uses the homomorphic function: during varying weather conditions the camera works based on the homomorphic function, a frequency-domain procedure that improves the appearance of an image by simultaneous gray-level range compression and contrast enhancement, using a Butterworth high-pass filter on the camera output for more reliable pictures. This helps in the amplification of the low-intensity waveforms from the obstacles.

To start with a simple example, consider the two trucks in the figure as a radar hypothesis. Our goal is to verify or discard this hypothesis by means of computer vision. The feature we are going to use is image scale. As the vehicle approaches the obstacles, the image of the obstacles taken by a forward-looking camera will grow in size. This principle is well known to humans, as it is simply based on the perspective transformation of our eye or, in the computer vision context, of the camera lens. The principle of distance estimation by relative scale in camera sequences is well illustrated by the theorem on intersecting lines.

The quantity that relates scale to distance is the covered distance of the ego-vehicle. If the car travels half the distance to any obstacle, the size of the imaged obstacle will double. On the other hand, if the scale and the traveled distance are known, the obstacle distance can be computed. Surely we don't want to wait for the scale factor to double to estimate distances; but since scale can be computed efficiently in images, small scale changes already allow for distance estimates. In this paper we measure scale changes by automatic tracking of template regions.
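A worked sketch of this relation: image width scales as 1/Z, so if a tracked template grows by factor s while the ego-vehicle covers distance d toward the obstacle, then Z1/Z2 = s and Z1 - Z2 = d, giving the remaining distance directly.

```python
# Obstacle distance from relative scale: Z2 = d / (s - 1).
def obstacle_distance(scale_factor, covered_distance):
    if scale_factor <= 1.0:
        raise ValueError("obstacle must grow in the image (s > 1)")
    return covered_distance / (scale_factor - 1.0)

# Example: a template grows 5% while the car covers 2 m -> obstacle about 40 m ahead.
# Doubling check: s = 2 gives Z2 = d, i.e., half the distance covered, size doubled.
print(obstacle_distance(1.05, 2.0))   # 40.0
```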

CCD camera image: the movable obstacles detected are marked in red color

INDEX DETERMINATION IN NEIGHBOR VEHICLES: The obstacle distance is measured with the help of the CCD camera and the radar sensors. In the RADAR-reflected image, the moving obstacles are represented by white templates. A problem arises if the scale of such a template region does not originate from obstacles and therefore leads to false results. Such a problem can truly arise if no obstacle is contained in the image and, for instance, figures painted on the street are tracked. This leads to the question of how one can distinguish such template regions on the street from others on obstacles. Both a distant obstacle and the street are, to a first approximation, planes (planar surfaces). Under perspective transformation, planes in images undergo homographic transformations (eight degrees of freedom). Homographies contain the normal of the mapped planes, and therefore the homography of the street is different from that of the obstacles. Because the visible surface of an obstacle is approximately parallel to the camera plane, its transformation can be modeled by translation and scale (a similarity transformation). The key to distinguishing template regions on the street from others on obstacles lies in checking whether the transformation is described by a similarity transformation or by the homography generated by the street.

Homography image, represented by dots

Simultaneously, both the CCD camera image and the homographic image are captured; distortion is easily detected by comparing these two images, while slight variations such as the movement of leaves are left out. The distance variation is determined by checking two consecutive images for the obstacle's position, and the radar sensors recognize the obstacles in the observed sequences. The CCD camera detects obstacles up to 100 m. The distortion due to distance variation changes with the frame number, as shown in the graph below. The CCD camera and radar are placed at the front and back of the vehicle; any vehicle at the side of the car is detected by an optical sensor from the temperature variation of the neighboring vehicle's engine.
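A hedged sketch of this similarity-vs-homography test: fit both a 4-DOF similarity and an 8-DOF homography to tracked point correspondences (at least four points) and compare the fit errors; the margin factor is an assumed tuning value.

```python
# Distinguish fronto-parallel obstacle templates from street texture.
import cv2
import numpy as np

def on_obstacle(pts_prev, pts_curr, margin=1.5):
    """pts_prev, pts_curr: Nx2 float32 arrays of tracked template points."""
    sim, _ = cv2.estimateAffinePartial2D(pts_prev, pts_curr)   # translation + rotation + scale
    hom, _ = cv2.findHomography(pts_prev, pts_curr)            # full planar homography
    proj_sim = cv2.transform(pts_prev[:, None, :], sim)[:, 0, :]
    proj_hom = cv2.perspectiveTransform(pts_prev[:, None, :], hom)[:, 0, :]
    err_sim = np.linalg.norm(proj_sim - pts_curr, axis=1).mean()
    err_hom = np.linalg.norm(proj_hom - pts_curr, axis=1).mean()
    # If the similarity explains the motion nearly as well as the homography,
    # the region is likely a fronto-parallel obstacle rather than street texture.
    return err_sim <= margin * err_hom
```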

ABS BRAKING SYSTEM FOR STOPPING VEHICLE: ABS braking is applied according to the index of the neighboring vehicles, using the wheel speed sensors. Apart from the wheel speed sensors, the ABS braking system consists of an electronic control unit and a hydraulic control unit. The electronic control unit monitors and compares the signals from the wheel-speed sensors. If the electronic control unit senses rapid deceleration (impending lock-up) at a given wheel, it commands the hydraulic control unit to reduce the hydraulic pressure to that wheel. Further, as an indication of braking, the side lights are activated along with the horn to alert the neighboring vehicles.
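A minimal sketch of this lock-up check: compare each wheel's deceleration against a threshold and command a pressure reduction on that wheel. The signal names and the threshold are illustrative assumptions.

```python
# Per-wheel ABS decision step for the hydraulic control unit.
LOCKUP_DECEL = 12.0   # m/s^2, assumed impending lock-up threshold

def abs_control_step(speeds_prev, speeds_now, dt):
    """Return one command per wheel for the hydraulic control unit."""
    commands = []
    for v_prev, v_now in zip(speeds_prev, speeds_now):
        decel = (v_prev - v_now) / dt
        commands.append("reduce_pressure" if decel > LOCKUP_DECEL else "hold")
    return commands
```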

CONCLUSION: This fatigue monitoring system was tested in a simulated environment with subjects of different ethnic backgrounds, genders and ages, and under different illumination conditions. The system was found to be very robust, reliable and accurate. Image processing achieves highly reliable and accurate detection of drowsiness; it offers a non-invasive approach to detecting drowsiness without annoyance and interference. Further, the system provides a driver-awakening alarm and accident prevention.

REFERENCES:
1) Gee, A. & Cipolla, R. (1994). Determining the gaze of faces in images. Image and Vision Computing.
2) Lillesand, T.M. & Kiefer, R.W. Remote Sensing and Image Interpretation.
3) Wierwille, W.W. (1994). Overview of research on driver drowsiness definition and driver drowsiness detection. ESV, Munich.
