
Second International Conference on Computer Research and Development

Real-time Lane Detection Based on Extended Edge-linking Algorithm


Qing Lin, Youngjoon Han, Hernsoo Hahn
Department of Electronic Engineering, Soongsil University, Seoul, South Korea
lqsdust@163.com, young@ssu.ac.kr, hahn@ssu.ac.kr


Abstract—Lane detection can provide important information for safe driving. In this paper, a real-time vision-based lane detection method is presented to find the position and type of lanes in each video frame. In the proposed lane detection method, lane hypotheses are generated and verified based on an effective combination of lane-mark edge-link features. First, lane-mark candidates are searched inside a region of interest (ROI). During this search, an extended edge-linking algorithm with directional edge-gap closing is used to produce more complete edge-links, and features such as lane-mark edge orientation and lane-mark width are used to select candidate lane-mark edge-link pairs. For the verification of lane-mark candidates, color is checked inside the region enclosed by candidate edge-link pairs in the YUV color space. Additionally, the continuity of the lane is estimated employing a Bayesian probability model based on lane-mark color and edge-link length ratio. Finally, a simple lane departure model is built to detect lane departures based on lane locations in the image. Experimental results show that the proposed lane detection method works robustly in real time, achieving an average speed of 30~50 ms per frame for a 180×120 image size, with a correct detection rate over 92%.

Keywords—lane detection; extended edge-linking; edge-link pair scan; lane departure warning



I. INTRODUCTION

Lane departure detection can provide useful information for many applications such as driver assistance systems and self-guided vehicles. This paper mainly deals with the problem of lane departure detection based on a real-time lane detection algorithm.

Many approaches have been applied to lane detection, and they can be classified as either feature-based or model-based [1][5]. Feature-based methods detect lanes by using low-level features like lane-mark edges [2-4]. These methods depend heavily on clear lane-marks and suffer from weak lane-marks, noise, and occlusions. Model-based methods represent lanes as a kind of curve model that can be determined by a few critical geometric parameters [5-9]. Compared with the feature-based methods, the model-based methods are less sensitive to weak lane appearance features and noise. However, the model-based methods usually require a complex modeling process that involves much prior knowledge. Moreover, a model constructed for one scene may not work in another scene, which makes the method less adaptive. Additionally, for the best estimation of the model parameters, an iterative error minimization algorithm should be applied, which is comparatively time-consuming.

Based on the previous work done in the lane detection field, a linear-model based method is developed in this paper to detect lane-marks in real time. To make the linear model estimation more efficient and robust, lane-mark features like edges, colors, width, and orientation are combined to search for lane-marks, and the linear-model parameters used to represent the lanes are calculated from the result. The lane position can then be determined from these linear-model parameters, and lane departure can be estimated. An overview flowchart of the proposed lane departure detection method is shown in Fig.1.

[Figure 1 flowchart: input video frame → (1) ROI setting → lane detection, consisting of (2) lane hypothesis generation and (3) lane hypothesis verification, followed by lane model fitting and continuity checking → lane parameters → lane departure detection → next frame]
Figure 1. Overview of the lane detection algorithm.



II. LANE HYPOTHESIS GENERATION

The goal of lane hypothesis generation is to find candidate lane-mark regions inside the region of interest (ROI). Candidate lane-mark regions are identified based on hypothesized lane-mark edges, which are selected using edge-link features including the length, orientation, and width of edge-link pairs. An extended edge-linking algorithm with directional edge-gap closing plays a key role in this step.

A. ROI Initialization and Edge Detection


Edge detection must be done before the edge-linking step. In order to reduce processing time while keeping good performance, a region of interest (ROI) is first initialized, and a Sobel operator with non-local maximum suppression (NLMS) is used to find edge-pixels inside the ROI. The initialization of the ROI is illustrated in Fig.2.

Figure 2. Initialization of regions of interest (ROI).


Before edge detection, the input image I is first smoothed with a 5×5 Gaussian filter G to remove background noise:

I' = G * I.  (1)

On the smoothed image I', a Sobel operator is applied to compute the edge amplitude |∇(G * I)| and to find the local edge normal direction:

n = ∇(G * I) / |∇(G * I)|.  (2)

Hysteresis thresholds are used to eliminate isolated weak edge-pixels. Finally, NLMS is applied to deal with the localization issue inherent in the Sobel operator: edge-pixels are localized by finding the zero-crossings along the edge normal directions,

∂²(G * I) / ∂n² = 0.  (3)

Only an edge-pixel that is a local maximum along its gradient direction is kept as a final edge-pixel. This helps produce clearer edge contours. Edge detection results are shown in Fig.3.

[Figure 3: (a) Sobel edges; (b) Sobel edges after NLMS]
Figure 3. Example of edge detection result.
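To make this front end concrete, the following C++ sketch (using OpenCV for the image containers and filters) chains the Gaussian smoothing, Sobel gradients, and suppression of non-maxima described above. The function name, the 4-direction quantization, and the threshold value are illustrative assumptions rather than the authors' implementation; hysteresis is omitted for brevity.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Sobel edge detection with suppression of non-maxima along the gradient
// direction, as in Section II-A (hysteresis thresholding omitted).
cv::Mat sobelEdgesNMS(const cv::Mat& gray, float minMagnitude)
{
    cv::Mat smoothed, dx, dy, mag;
    cv::GaussianBlur(gray, smoothed, cv::Size(5, 5), 0);   // I' = G * I, Eq. (1)
    cv::Sobel(smoothed, dx, CV_32F, 1, 0);
    cv::Sobel(smoothed, dy, CV_32F, 0, 1);
    cv::magnitude(dx, dy, mag);                            // |grad(G * I)|

    cv::Mat edges = cv::Mat::zeros(gray.size(), CV_8U);
    for (int y = 1; y < mag.rows - 1; ++y)
        for (int x = 1; x < mag.cols - 1; ++x) {
            float m = mag.at<float>(y, x);
            if (m < minMagnitude) continue;
            // Quantize the edge normal n (Eq. 2) to one of four directions.
            float a = std::atan2(dy.at<float>(y, x), dx.at<float>(y, x));
            if (a < 0) a += (float)CV_PI;                  // fold into [0, pi)
            int sx, sy;
            if      (a < CV_PI / 8 || a >= 7 * CV_PI / 8) { sx = 1; sy = 0; }
            else if (a < 3 * CV_PI / 8)                   { sx = 1; sy = 1; }
            else if (a < 5 * CV_PI / 8)                   { sx = 0; sy = 1; }
            else                                          { sx = 1; sy = -1; }
            // Keep only pixels that are local maxima along that direction.
            if (m >= mag.at<float>(y + sy, x + sx) &&
                m >= mag.at<float>(y - sy, x - sx))
                edges.at<uchar>(y, x) = 255;
        }
    return edges;
}

int main(int argc, char** argv)
{
    if (argc < 2) return 1;
    cv::Mat gray = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;
    cv::imwrite("edges.png", sobelEdgesNMS(gray, 40.0f));  // threshold assumed
    return 0;
}
```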

B. Edge-Linking with Directional Edge Gap Closing

After the edge image is obtained, an extended edge-linking algorithm with directional edge-gap closing is proposed in this paper to produce more complete edge-links that possibly belong to lane-marks.

The traditional edge-linking algorithm has previously been used in the area of lane detection [10]. It works well on edge images obtained from an original-resolution image with little noise. However, due to noise and pixel loss caused by the image down-sampling operation, on down-sampled images some dashed lane-mark edges that appear in the far field of the camera's sight are broken up into many small disconnected segments. Since the lengths of these small links are usually under the edge-link length threshold, they can easily be filtered out as noise edge-links, which causes the loss of real lane-mark edges. This case is illustrated in Fig.6: in Fig.6(d), the lane-mark located at the right side of the lane is lost in the final edge-link image.
To deal with this problem, an extended edge-linking algorithm with directional edge-gap closing is developed. The extended edge-linking algorithm is composed of three major steps: starting-point scan, edge tracing, and edge-gap closing. The algorithm flowchart is shown in Fig.4.

[Figure 4 flowchart: read the edge image and scan for unlabeled edge pixels whose gradient exceeds MinMagnitude (starting-point scan); label each starting point with a new link number, then scan connected neighbors, check for junction points, and add points to the edge-link, labeling them with the link number, until no connected neighbor is found (edge tracing); finally, on the gradient image, check the neighbors of the link end points in the link direction, select new edge points and add them to the edge-link, merging two links when a point of another link is met, and stopping when Num(NewPts) > MaxGapLength (edge-gap closing); repeat until all edge-links are finished.]
Figure 4. Extended edge-linking flowchart.

At first, a raster scan is performed starting from the bottom of the image, looking for the starting points of a new edge-link. When a starting point is found, edge tracing starts to trace all the edge-points associated with it.

In the edge-tracing process, the next edge-pixel to be added to the link is the one within the 8-connected neighborhood of the current edge-pixel. Given a starting point, edge-pixels are tracked in one orientation, then stored in the edge-link array and labeled with their edge-link number. This continues until no more connected edge-points are found. If the orientation of an edge-link is not within the range [20°, 160°], it is not likely to be a lane-mark edge and is discarded as a noise edge-link. The orientation of an edge-link is estimated by calculating the mean slope value between adjacent edge-points, as illustrated in Fig.5(a).
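A minimal C++/OpenCV sketch of this tracing step is given below. The single-direction, first-neighbor-wins policy and the label-matrix layout are assumptions of this sketch; the junction-point handling shown in the flowchart is omitted.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// 8-connected edge tracing from one starting point. 'edges' is the binary
// NLMS edge image (CV_8U); 'labels' (CV_32S) is 0 where no link has claimed
// a pixel yet. Points are appended in one orientation and labeled with
// their edge-link number, as described above.
std::vector<cv::Point> traceLink(const cv::Mat& edges, cv::Mat& labels,
                                 cv::Point start, int linkId)
{
    std::vector<cv::Point> link;
    cv::Point cur = start;
    labels.at<int>(cur) = linkId;
    link.push_back(cur);
    bool grew = true;
    while (grew) {                        // stop when no connected point remains
        grew = false;
        for (int dy = -1; dy <= 1 && !grew; ++dy)
            for (int dx = -1; dx <= 1 && !grew; ++dx) {
                if (dx == 0 && dy == 0) continue;
                cv::Point n(cur.x + dx, cur.y + dy);
                if (n.x < 0 || n.y < 0 || n.x >= edges.cols || n.y >= edges.rows)
                    continue;
                if (edges.at<uchar>(n) && labels.at<int>(n) == 0) {
                    labels.at<int>(n) = linkId;   // claim the pixel
                    link.push_back(n);
                    cur = n;                      // continue from the new point
                    grew = true;
                }
            }
    }
    return link;
}
```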

(a) Edge-link orientation (b) Edge-link gap closing
Figure 5. Edge-link orientation estimation and edge-gap closing.

After edge-links with valid orientations are obtained, the directional edge-gap closing step begins. In this step, edge-links are extended by adding new edge-pixels along the edge-link orientation to fill the gaps. The maximum number of added points is determined by a user-defined value; generally, 5 pixels are enough to fill the gaps between split edge-links.

New edge-points are selected from the neighboring points of the starting point and end point of an edge-link, along the edge-link orientation. As Fig.5(b) illustrates, on the gradient image, for a given edge-link, three points along the direction of this edge-link are considered as new candidate edge-points around the end-point of the edge-link, provided the gradient magnitudes of these points are above a user-defined minimum value. For each of these three candidate points, the sum of its gradient and the maximum gradient among the point's three neighbors is calculated as a measure to determine the new edge-point, and the point with the maximum sum value is selected. This process continues until one of the following three conditions is satisfied:
(1) The newly added edge-points meet a point in another edge-link. In this case, the two edge-links are merged into one complete edge-link.
(2) The maximum number of new points is reached.
(3) No more points with a gradient value larger than the minimum threshold are found.
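The following C++/OpenCV sketch shows one way to implement this gap-closing loop under the three stopping conditions. The text does not fully specify how the three candidate points and their neighbors are chosen geometrically, so the forwardPts helper below is a hypothetical reading of Fig.5(b), and the default arguments mirror the 5-pixel gap limit mentioned above.

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <array>
#include <vector>

// Three points in front of 'p' along the (unit) link direction 'dir':
// the pixel straight ahead and its two side neighbors. This choice of
// candidates is an assumption of the sketch.
static std::array<cv::Point, 3> forwardPts(cv::Point p, cv::Point2f dir)
{
    cv::Point a(p.x + cvRound(dir.x), p.y + cvRound(dir.y));
    return {{ a, cv::Point(a.x - cvRound(dir.y), a.y + cvRound(dir.x)),
                 cv::Point(a.x + cvRound(dir.y), a.y - cvRound(dir.x)) }};
}

// Directional edge-gap closing for one end of an edge-link. 'grad' is the
// gradient-magnitude image (CV_32F); 'labels' (CV_32S) holds the link id of
// each traced pixel (0 = none). Returns the id of another link that was
// reached (to be merged), or 0 if the gap stayed open.
int closeGap(const cv::Mat& grad, cv::Mat& labels, std::vector<cv::Point>& link,
             cv::Point2f dir, int linkId, int maxGap = 5, float minMag = 20.f)
{
    cv::Point cur = link.back();
    for (int step = 0; step < maxGap; ++step) {              // condition (2)
        cv::Point best(-1, -1);
        float bestScore = -1.f;
        for (const cv::Point& c : forwardPts(cur, dir)) {
            if (c.x < 1 || c.y < 1 || c.x >= grad.cols - 1 || c.y >= grad.rows - 1)
                continue;
            float g = grad.at<float>(c);
            if (g < minMag) continue;                        // condition (3)
            float nmax = 0.f;                                // strongest of the
            for (const cv::Point& q : forwardPts(c, dir))    // point's own three
                if (q.x >= 0 && q.y >= 0 && q.x < grad.cols && q.y < grad.rows)
                    nmax = std::max(nmax, grad.at<float>(q));
            if (g + nmax > bestScore) { bestScore = g + nmax; best = c; }
        }
        if (best.x < 0) return 0;                            // gap stays open
        int hit = labels.at<int>(best);
        if (hit == linkId) return 0;                         // looped onto itself
        if (hit != 0) return hit;                            // condition (1): merge
        labels.at<int>(best) = linkId;
        link.push_back(best);                                // extend the link
        cur = best;
    }
    return 0;
}
```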


After this directional edge-gap closing step, the disconnected small edge segments that belong to one lane-mark can be linked together as one edge-link. The edge-link length is then checked to discard small edge-links with lengths below 15 pixels. The result of this extended edge-linking algorithm is shown in Fig.6.

[Figure 6: (a) input down-sampled image; (b) edge image; (c) edge-linking without gap closing; (d) edge-linking with gap closing]
Figure 6. Result of extended edge-linking algorithm.

C. Edge-link pair scanning

After edge-links are obtained, edge-link pair scanning is carried out to check the distance between adjacent edge-links. If the distance satisfies the width of a lane-mark, the region enclosed by this pair of edge-links is regarded as a candidate lane-mark region.

(a) Edge-link pair scan (b) Candidate lane-mark edges
Figure 7. Edge-link pair scanning result.

As illustrated in Fig.7, the scan starts from the bottom middle point of the image, and scans the edge-link image from middle to side and from bottom to top. Since the road image captured by the on-board camera shows a projective view of the road scene, the lane-mark width appears larger near the image bottom and smaller near the image top. In order to reflect the width of the lane-mark at a certain height of the image, a look-up table is used to store the acceptable lane-mark width, measured in pixels, at each image height. In addition, an edge-point counter is assigned to each edge-link to record the number of points that satisfy the width condition with the adjacent link in the scan direction. The flowchart is shown in Fig.8.

[Figure 8 flowchart: read the edge-link image; for each row i, search for an edge point and then for the next edge point in the row; calculate the distance between them; if the width condition is satisfied, increase the edge-point counter; after all rows are finished, keep the edge-link as a candidate if Counter/Total > 0.8, otherwise discard it as noise.]
Figure 8. Flowchart of edge-link pair scanning.

During scanning, the distance between two adjacent edge-points located in the same row is calculated and compared with the corresponding value recorded in the look-up table. If the width condition is satisfied, the corresponding edge-point counter is increased by one. When the scan is finished, the ratio between the edge-point counter value and the edge-link length is calculated as a measure to determine whether this edge-link should be kept as a candidate lane-mark edge or not.

By scanning edge-links with the lane-mark width, some edge noise caused by road signs or guardrails, which have orientations similar to lane-mark edges, can be removed. This is shown in Fig.7(b), in which a piece of edge caused by a guardrail on the left road side is discarded.
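A plain C++ sketch of this row-wise width test is shown below. The linear expectedWidth ramp is only an assumed stand-in for the calibrated look-up table, and normalizing by the number of rows the two links share is an assumption; the paper normalizes by the edge-link length.

```cpp
#include <cstdlib>
#include <map>
#include <vector>

struct Pt { int x, y; };

// Expected lane-mark width (in pixels) at image row y. A real system would
// read this from the calibrated look-up table described above.
int expectedWidth(int y, int imageHeight)
{
    return 2 + (14 * y) / imageHeight;   // ~2 px near the top, ~16 px at bottom
}

// Pair scan for one candidate edge-link pair: for every row shared by the
// two links, compare the horizontal distance of their edge-points with the
// look-up table, and keep the pair if more than 80% of the rows match.
bool isCandidateLaneMark(const std::vector<Pt>& left,
                         const std::vector<Pt>& right,
                         int imageHeight, int tolerance = 2)
{
    std::map<int, int> rightX;            // row -> x of the right link
    for (const Pt& p : right) rightX[p.y] = p.x;

    int counter = 0, total = 0;
    for (const Pt& p : left) {
        auto it = rightX.find(p.y);
        if (it == rightX.end()) continue; // links do not share this row
        ++total;
        int width = std::abs(it->second - p.x);
        if (std::abs(width - expectedWidth(p.y, imageHeight)) <= tolerance)
            ++counter;                    // row satisfies the width condition
    }
    return total > 0 && counter > 0.8 * total;
}
```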
III. LANE HYPOTHESIS VERIFICATION

In the lane hypothesis generation step, lane-mark edge features like edge orientation, edge length, and edge-pair width are checked to filter out noise edges and select candidate lane-mark edges. For lane hypothesis verification, lane-mark colors are checked inside the regions enclosed by candidate lane-mark edge-pairs.

In Korea, there are generally three kinds of lane-mark colors: white, yellow, and blue. These three colors are much easier to identify in the YUV color space than in the commonly used RGB color space. Therefore, the color checking step is done in the YUV color space, as part of the overall flowchart in Fig.10. The first step is to transform the RGB color space into the YUV color space using (4). The yellow-checking image is then produced by subtracting V from U, the blue-checking image is generated by subtracting U from V, and white is checked using the Y channel. The resulting channels are shown in Fig.9.

[Y]   [ 0.299  0.587  0.114] [R]
[U] = [-0.169 -0.331  0.500] [G]   (4)
[V]   [ 0.500 -0.419 -0.081] [B]

[Figure 9: (a) ROI; (b) yellow-checking channel; (c) blue-checking channel; (d) white-checking channel]
Figure 9. Channels used for color checking.

By using adaptive thresholds, yellow, blue, and white dominant regions can be extracted from the corresponding color-checking channels. The candidate lane-mark regions are scanned on the yellow-checking channel first, and the ratio Ryi/Ri is calculated, where Ryi indicates the number of yellow pixels inside the candidate lane-mark region and Ri is the total number of pixels inside that region. If this ratio is larger than 0.5, it is assumed that the region contains yellow lane-marks. After yellow is checked, blue and white are checked sequentially. The flowchart of the whole lane detection algorithm is shown in Fig.10.
[Figure 10 flowchart: read input image → ROI generation → edge detection and edge-linking → edge-link pair scanning → transform into YUV space and extract specific color regions → color checking inside edge-link pairs: if Ryi/Ri exceeds its threshold, a yellow lane is detected; else if Rbi/Ri exceeds its threshold, a blue lane is detected; else if Rwi/Ri exceeds its threshold, a white lane is detected; otherwise no lane is detected → line fitting based on edge points → lane continuity checking → lane departure detection.]
Figure 10. Lane detection flowchart.
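A compact C++/OpenCV sketch of this verification step might look as follows. OpenCV's built-in YUV conversion is close to, but not exactly, the matrix in (4); the fixed channel thresholds stand in for the paper's adaptive ones, and the difference-image roles follow the text (with another U/V convention the yellow and blue channels may swap).

```cpp
#include <opencv2/opencv.hpp>

// Color verification for one candidate lane-mark region (sketch).
// 'regionMask' marks the pixels enclosed by a candidate edge-link pair.
enum LaneColor { NO_LANE, YELLOW_LANE, BLUE_LANE, WHITE_LANE };

LaneColor checkLaneColor(const cv::Mat& bgr, const cv::Mat& regionMask)
{
    cv::Mat yuv, ch[3];
    cv::cvtColor(bgr, yuv, cv::COLOR_BGR2YUV);  // close to Eq. (4), plus offsets
    cv::split(yuv, ch);                         // ch[0]=Y, ch[1]=U, ch[2]=V

    cv::Mat yellow, blue;
    cv::subtract(ch[1], ch[2], yellow);         // yellow-checking: U - V
    cv::subtract(ch[2], ch[1], blue);           // blue-checking:   V - U
    const cv::Mat& white = ch[0];               // white is checked on Y

    double total = cv::countNonZero(regionMask);
    if (total == 0) return NO_LANE;

    // R_yi / R_i, R_bi / R_i, R_wi / R_i inside the candidate region.
    double ry = cv::countNonZero((yellow > 30)  & regionMask) / total;
    double rb = cv::countNonZero((blue   > 30)  & regionMask) / total;
    double rw = cv::countNonZero((white  > 180) & regionMask) / total;

    if (ry > 0.5) return YELLOW_LANE;           // checked in this order,
    if (rb > 0.5) return BLUE_LANE;             // as in Fig.10
    if (rw > 0.5) return WHITE_LANE;
    return NO_LANE;
}
```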

IV. LANE DEPARTURE DETECTION

A. Line Fitting and Continuity Checking

After color verification, the edges that belong to lane-marks can finally be decided. Before lane departure detection, the lane position and the continuity of the lane should be provided. The lane position is found by fitting a straight-line model to the lane-mark edge-points using the Hough transform. The normal equation of a line is used as the line model, as shown in (5):

x cos θ + y sin θ = ρ.  (5)

Two parameters, θ and ρ, should be estimated to fit the straight-line model. Their values are finally determined by max-voting in the Hough parameter space. Since many noise edges have been filtered out in the previous edge processing and color verification steps, only a limited number of edge-points vote in the Hough space, which greatly reduces the processing time of the Hough transform, so that line fitting can be done robustly and efficiently.

[Figure 11: (a) line fitting on the image plane; (b) line parameters in Hough space]
Figure 11. Line fitting using Hough transform.
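For illustration, a brute-force version of this voting scheme in plain C++ is sketched below; the bin resolutions and the single-peak extraction are assumptions sized for a 180×120 image.

```cpp
#include <cmath>
#include <utility>
#include <vector>

struct Pt { int x, y; };

// Fit one straight line x*cos(theta) + y*sin(theta) = rho (Eq. 5) to the
// surviving lane-mark edge-points by max-voting in a discretized Hough space.
std::pair<double, double> fitLineHough(const std::vector<Pt>& pts)
{
    const double PI = 3.14159265358979323846;
    const int THETA_BINS = 180;                        // 1-degree resolution
    const int RHO_MAX    = 220;                        // > 180x120 image diagonal
    std::vector<int> votes(THETA_BINS * 2 * RHO_MAX, 0);

    for (const Pt& p : pts)
        for (int t = 0; t < THETA_BINS; ++t) {
            double theta = t * PI / THETA_BINS;
            long rho = std::lround(p.x * std::cos(theta) + p.y * std::sin(theta));
            ++votes[t * 2 * RHO_MAX + (rho + RHO_MAX)]; // shift rho to >= 0
        }

    int best = 0;                                      // max-voting peak
    for (std::size_t i = 1; i < votes.size(); ++i)
        if (votes[i] > votes[best]) best = (int)i;

    double theta = (best / (2 * RHO_MAX)) * PI / THETA_BINS;
    double rho   = best % (2 * RHO_MAX) - RHO_MAX;
    return std::make_pair(theta, rho);                 // (theta, rho) of Eq. 5
}
```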
The continuity of a lane can be judged from the type of its lane-mark: solid lane-marks represent continuous lanes, while dashed lane-marks indicate discontinuous lanes. In order to classify solid and dashed lane-marks, a Bayesian probability model is employed. Given a lane-mark edge-link pair Li, two posterior probability functions, (6) and (7), are used to estimate the type of lane-mark, where P(Li|Solid) and P(Li|Dash) are the likelihood probabilities of solid and dashed lane-marks, while P(Solid) and P(Dash) are the corresponding prior probabilities:

P(Solid|Li) ∝ P(Solid) P(Li|Solid),  (6)
P(Dash|Li) ∝ P(Dash) P(Li|Dash).  (7)

Therefore, a given lane-mark edge-link pair Li can be classified as solid or dashed using the Bayes decision rule:

Li is solid if P(Solid|Li) ≥ P(Dash|Li), and dashed if P(Solid|Li) < P(Dash|Li).  (8)

The prior probabilities P(Solid) and P(Dash) are estimated from lane-mark colors. This is based on the prior knowledge that, in Korea, most yellow and blue lane-marks that appear on city roads or highways are solid lane-marks, while white lane-marks are usually dashed. Based on experimental experience, the P(Solid) values for different lane-mark colors are shown in Table I.

TABLE I. P(Solid) VALUES FOR DIFFERENT COLORS.

Lane-mark color | Yellow | Blue | White
P(Solid)        | 0.8    | 0.7  | 0.5

The likelihood is calculated based on the ratio between the lane-mark edge-link length and the length of the fitted line. As shown in (9), ei indicates lane-mark edge-link points, and lj represents the points contained in the fitted line model:

P(Li|Solid) = Σi ei / Σj lj.  (9)

Up to this step, all the lane parameters needed for lane departure detection can be determined and stored in a vector (θ, ρ, c, t), where θ and ρ indicate the lane position in Hough space, c is the lane color, and t is the type of lane (dashed or solid).
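Taken together, (6)-(9) and Table I reduce to a few lines of C++. Treating P(Li|Dash) as the complement of (9) is an assumption of this sketch, since the paper does not spell out the dashed likelihood.

```cpp
#include <string>

// Solid/dash classification of one verified lane-mark edge-link pair,
// following Eqs. (6)-(9) with the color priors of Table I.
bool isSolidLaneMark(const std::string& color,   // "yellow", "blue", "white"
                     int edgeLinkPoints,         // sum of e_i in Eq. (9)
                     int fittedLinePoints)       // sum of l_j in Eq. (9)
{
    if (fittedLinePoints <= 0) return false;

    double pSolid = 0.5;                         // Table I: white
    if (color == "yellow") pSolid = 0.8;         // Table I: yellow
    else if (color == "blue") pSolid = 0.7;      // Table I: blue
    double pDash = 1.0 - pSolid;

    // Eq. (9): likelihood of a solid mark; the dashed likelihood is taken
    // as its complement (an assumption of this sketch).
    double likSolid = (double)edgeLinkPoints / fittedLinePoints;
    double likDash  = 1.0 - likSolid;

    // Eqs. (6)-(8): compare the unnormalized posteriors.
    return pSolid * likSolid >= pDash * likDash;
}
```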
B. Lane Departure Detection

[Figure 12: (a) ideal case; (b) left lane departure; (c) right lane departure]
Figure 12. Lane departure detection.

After the lane parameter set (θ, ρ, c, t) is determined in the previous steps, lane departure detection can be carried out. The lane parameter θ is used to define a lane departure measure T = θl/θr. In the simple lane departure model illustrated in Fig.12, it can be observed that in the ideal driving case θl ≈ θr; when left lane departure happens, θl > θr, while when right lane departure happens, θl < θr. Therefore, lane departure can be estimated from the value of T using a predefined threshold λ (λ > 1): if T > λ, left lane departure happens; if T < 1/λ, right lane departure occurs. Moreover, the lane color parameter c and the lane continuity parameter t can provide further information on lane types, so that more detailed information from these parameters can be included when lane departure warning instructions are generated.
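This decision rule is trivial to state in code. In the C++ sketch below, the threshold value 1.2 is an arbitrary placeholder for the paper's predefined threshold; any value greater than 1 fits the description.

```cpp
enum Departure { KEEPING_LANE, LEFT_DEPARTURE, RIGHT_DEPARTURE };

// Lane departure decision from the theta parameters of the fitted left and
// right lane lines, using the measure T = theta_l / theta_r.
Departure detectDeparture(double thetaLeft, double thetaRight,
                          double threshold = 1.2)   // assumed value, > 1
{
    double T = thetaLeft / thetaRight;
    if (T > threshold)       return LEFT_DEPARTURE;   // theta_l > theta_r
    if (T < 1.0 / threshold) return RIGHT_DEPARTURE;  // theta_l < theta_r
    return KEEPING_LANE;                              // theta_l ~ theta_r
}
```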
V. EXPERIMENTAL RESULTS

The proposed lane departure detection algorithm has been implemented in Visual C++ 6.0. For video clips with 180×120 image resolution, the processing time was about 30~50 ms per frame on an Intel Core2 1.86 GHz processor, and the algorithm achieves an average speed of 10 frames per second. The actual resolution of the video images on which the detection algorithm works is up to 720×480. However, since image down-sampling is used to reduce the image size to 180×120, the speed performance of the algorithm remains more or less the same. Although image down-sampling causes pixel loss, this does not affect the detection algorithm very much, as complete edge-links can be restored by the proposed extended edge-linking algorithm.

To evaluate the performance of the proposed lane detection algorithm, the detection results on a challenging city-road video clip are presented here. The video clip contains different lane types, all kinds of lane variation situations, complex road surfaces, and significant obstacles. The test results on this video clip are shown in Table II; 5000 frames from the clip are used for testing. Here, Correct Detection means that all lane parameters, including the number of lanes, lane position, color, and continuity, are correctly detected. If one parameter is not correctly detected, the result is counted as False Detection; cases where non-lane objects are detected as lanes are also included in False Detection. On the other hand, if existing lanes are not detected, these cases are counted as Miss-detection. Fig.13 lists some examples of the detection results.
TABLE II. TEST RESULTS ON VIDEO CLIP.

Frames | Correct Detection | False Detection | Miss-detection
5000   | 4615 (92.3%)      | 241 (4.82%)     | 144 (2.88%)

Figure 13. Detection result examples on video frames.

VI. CONCLUSIONS

In this paper, a real-time lane departure detection system is presented. The major part of the system is a lane detection module based on an extended edge-linking algorithm. Compared with other model-based lane detection methods, the proposed algorithm takes advantage of feature-based techniques to simplify the modeling process. In the lane hypothesis generation step, effective combinations of lane-mark edge features are checked to select the edge-link pairs that belong to lane-marks. Most of the noise can be removed at this step, and only edge-points that belong to lane-marks are used to estimate the line parameters, which greatly reduces the processing time of lane parameter estimation. Additionally, the proposed algorithm has no special requirements for camera parameters, background models, or any other road surface models, which makes it more adaptive to various road environments.
ACKNOWLEDGMENT

This work was supported by the Korea Research Foundation Grant funded by the Korean Government (MOEHRD). This work was also supported by the MKE (The Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the IITA (Institute for Information Technology Advancement) (IITA-2009-(C1090-0902-0007)).

REFERENCES

[1] Joel C. McCall and Mohan M. Trivedi, "Video-based Lane Estimation and Tracking for Driver Assistance: Survey, System, and Evaluation," IEEE Transactions on Intelligent Transportation Systems, vol. 7, 2006, pp. 20-37, doi:10.1109/TITS.2006.869595.
[2] A. Broggi and S. Berte, "Vision-based Road Detection in Automotive Systems: A Real-time Expectation-driven Approach," Journal of Artificial Intelligence Research, vol. 3, 1995, pp. 325-348.
[3] M. Bertozzi and A. Broggi, "GOLD: A Parallel Real-time Stereo Vision System for Generic Obstacle and Lane Detection," IEEE Transactions on Image Processing, 1998, pp. 62-81.
[4] S.G. Jeong, C.S. Kim, K.S. Yoon, J.N. Lee, J.I. Bae, and M.H. Lee, "Real-time Lane Detection for Autonomous Navigation," Proc. IEEE Intelligent Transportation Systems (ITSC'01), 2001, pp. 508-513.
[5] Yue Wang, Eam Khwang Teoh, and Dinggang Shen, "Lane Detection and Tracking Using B-snake," Image and Vision Computing, vol. 22, 2004, pp. 269-280, doi:10.1016/j.imavis.2003.10.003.
[6] C.R. Jung and C.R. Kelber, "A Lane Departure Warning System Using Lateral Offset with Uncalibrated Camera," Proc. IEEE Conf. on Intelligent Transportation Systems (ITSC'05), 2005, pp. 102-107, doi:10.1109/ITSC.2005.1520073.
[7] D. Jung Kang, J. Won Choi, and I.S. Kweon, "Finding and Tracking Road Lanes Using Line-snakes," Proc. Conference on Intelligent Vehicles, 1996, pp. 189-194, doi:10.1109/IVS.1996.566336.
[8] Zu Kim, "Realtime Lane Tracking of Curved Local Road," Proc. IEEE Conf. on Intelligent Transportation Systems, 2006, pp. 1149-1155.
[9] Y. Wang, D. Shen, and E.K. Teoh, "Lane Detection Using Spline Model," Pattern Recognition Letters, vol. 21, no. 8, 2000, pp. 677-689.
[10] Jaehyoung Yu, Youngjoon Han, and Hernsoo Hahn, "An Efficient Extraction of On-road Object and Lane Information Using Representation Method," IEEE International Conference on Signal Image Technology and Internet Based Systems, 2008, pp. 327-332, doi:10.1109/SITIS.2008.19.
