
Automatic Lane Identification Using the Roadside LiDAR Sensors
Jianqing Wu and Hao Xu
Department of Civil and Environmental Engineering, University of Nevada, Reno,
United States, E-mail: jianqingwu2015@gmail.com; haox@unr.edu
Junxuan Zhao
Department of Civil, Environmental and Construction Engineering, Texas Tech University,
Lubbock, United States, E-mail: junxuan.zhao@ttu.edu


Abstract—How to collect real-time information from unconnected vehicles has been a challenge for connected vehicle technologies. LiDAR sensors deployed along the roadside and at intersections provide a solution to fill the data gap during the transition from traditional traffic to fully connected traffic. Roadside LiDAR sensors can record the movement of all road users over a relatively long detection range. Lane detection serves as a fundamental but important step for LiDAR data processing: the location (which lane is occupied) of vehicles can be used for lane change detection, lane departure warning, and wrong-way alerts. However, there is currently no effective method to identify lane boundaries using roadside LiDAR sensors. This paper presents a systematic procedure for lane detection based on the trajectories of vehicles collected on the road with a roadside LiDAR sensor. The whole procedure includes two major parts: background filtering and road boundary identification. This robust procedure can release engineers from the manual lane identification task and can avoid errors caused by manual work. Two case studies confirmed the effectiveness of the proposed method. Compared to previous lane detection methods, this procedure is not affected by the presence of pedestrians, and it can also detect lane boundaries on curved roads with limited time cost.

Digital Object Identifier 10.1109/MITS.2018.2876559


Date of publication: 26 October 2018


I. Introduction

Connected vehicle technologies enable many applications that improve traffic safety, mobility, and fuel efficiency [1]. The full benefits of current connected vehicle deployment require all vehicles and other road users to be equipped with connected-vehicle devices and to broadcast information in real time. But it will take time to equip all vehicles, especially older ones, with connected-vehicle devices. It is a challenge for existing deployments to achieve all expected connected-vehicle advantages in the real world, because the real-time information of unconnected vehicles cannot be obtained. It is therefore urgent to find an approach to collect high-resolution traffic data (HRTD) of unconnected vehicles during the transition from traditional traffic to fully connected traffic. Traditional traffic sensors such as loop detectors, video detectors, and Bluetooth sensors mainly provide macro traffic data such as traffic flow rates, average speeds, and occupancy. A previous study [2] showed that inductive loop detectors can also be used for vehicle re-identification and can provide micro data, but that algorithm required speed estimation and the provided data still did not meet the high requirements of HRTD. Camera/video detectors can provide HRTD, but their accuracy can be dramatically reduced by varying light conditions and severe weather (rain, snow, fog).

The new Light Detection and Ranging (LiDAR) technology can detect surrounding objects in 360 degrees with high accuracy. During each scan, the LiDAR sensor collects a cloud of points with x, y, and z coordinate values of surrounding objects. LiDAR sensors work day and night without being influenced by light or weather conditions. This accuracy and reliability is the major reason that LiDAR sensors have been used as a critical component of self-driving vehicle sensor systems. LiDAR sensors have been employed in advanced autonomous vehicles for several years to detect road boundaries and lane marks. In onboard detection systems, LiDAR sensors often work with other sensors such as onboard cameras to detect and track other vehicles and pedestrians. Different from traditional onboard LiDAR sensors, roadside LiDAR sensors are deployed at static locations around intersections or along roads. Roadside LiDAR sensors are expected to offer HRTD independently, without high-resolution camera sensors, considering the cost and the limitations of infrastructure and right of way. Until now, LiDAR sensors have rarely been used for roadside deployment because of their historically high cost. A 360° LiDAR unit used to cost more than $50,000, which made it impossible to deploy the sensors at intersections and along highways. However, the unit cost has dropped dramatically to several thousand dollars. The reduced price allows deployment of LiDAR sensors along a road network to provide high-resolution micro data of all traffic, which will significantly change the current connected vehicle deployment and other traffic engineering applications. With LiDAR sensors deployed at intersections and along roads, the HRTD of each individual road user can be extracted and shared with traffic facilities and other roadway users through wireless communication technologies such as Dedicated Short Range Communications (DSRC) or cellular communications. A diagram demonstrating the principle of the new connected infrastructure is shown in Fig. 1.

Fig. 1. Overview of roadside LiDAR sensors.

For roadside LiDAR sensors, lane detection technology can be used to report the position of each object on the road, which is very important information for connected vehicles. Lane identification is very useful for connected-vehicle applications such as lane departure warning, lane change detection, and wrong-way alerts. Though it is possible to manually identify the boundaries of traffic lanes by analyzing vehicle trajectories, this manual method is time-consuming and error-prone. Many studies have aimed to identify lanes. Most existing lane recognition methods are based on detecting lane markings and are widely used on autonomous vehicles [3]. Laugier et al. [4] developed an approach for the perception of dynamic environments based on sensor fusion and temporal filtering in occupancy grids using a probabilistic framework. This method has been used to perform risk estimation and could serve many intelligent vehicle applications such as emergency braking, obstacle avoidance, or automatic driving. Ieng et al. [5] presented an algorithm for lane-marking feature extraction and robust shape estimation of lane markings, based only on the lane-marking geometry, using onboard cameras.


The proposed lane-marking feature extractor is combined with a linear fitting algorithm of curves that can be used inside a Kalman filter to track the lane markings. This detector can be used with cameras mounted in many positions on the vehicle, with few parameter modifications such as the degree of the curve model. Yang et al. [6] presented a method that can automatically extract road markings from mobile LiDAR point clouds based on the reflection intensity of the laser points. Melo et al. [7] applied K-means clustering for lane identification using uncalibrated traffic surveillance cameras; their case study shows that this method can identify the location of lanes very well. Wang et al. [8] developed a lane identification method using a spline model. The spline used in that research is claimed to form arbitrary shapes from different sets of control points, which can describe a wider range of lane structures than other lane models. But most of the methods mentioned above currently serve autonomous or semi-autonomous vehicles. For onboard LiDAR sensors serving autonomous vehicles, the location of lanes must be learned in real time since the vehicles are moving on the road, and algorithms based on road-marking identification cannot work well if the road markings are not obvious. To the authors' best knowledge, no studies have used roadside LiDAR sensors for lane identification to serve connected vehicles. For roadside LiDAR sensors, the location of lanes can be identified in advance and used for a long time, since lanes are stationary at a specific location. On the other hand, roadside LiDAR sensors need to collect the locations of all lanes in their scanned range, while onboard LiDAR sensors only focus on obtaining the location of lanes in front of the autonomous vehicle. Since the working environment and data collection requirements are very different between onboard and roadside LiDAR sensors, existing lane identification methods serving autonomous vehicles cannot be directly used for connected vehicles. It is therefore necessary to find an approach to identify traffic lanes for roadside LiDAR sensors.

This paper develops an innovative procedure for lane identification using roadside LiDAR sensors. The method is based on the distribution of vehicle trajectories in the space. The whole procedure includes two major parts: background filtering and road boundary identification. The obtained lane locations can be used to detect vehicle positions and serve connected vehicles. This paper is structured as follows: Section II introduces the LiDAR sensor and the raw LiDAR data stream. Section III presents the background filtering algorithm. In Section IV, the detailed road boundary detection method is illustrated. Section V presents two case studies of lane identification using the procedure developed in this research. Finally, Section VI summarizes the major contributions of this research and future extensions.

II. LiDAR Sensor and Data Stream

The lane identification procedure was developed based on the VLP-16 LiDAR sensor, a 360° 3D LiDAR sensor manufactured by Velodyne LiDAR. The VLP-16 creates a 360° 3D point cloud using 16 laser/detector pairs mounted in a compact housing. The housing spins rapidly to scan the surrounding environment at a rotational speed of 5–20 rotations per second, which can be adjusted by customizing the sensor configuration. It covers a 360° horizontal field of view and a 30° vertical field of view (±15° up and down). Though the manufacturer claims that the VLP-16 can detect objects within a radius of 100 m, practice shows that vehicles can only be detected within a radius of about 60 m around the LiDAR sensor; beyond that range, the points of vehicles are lost in the point cloud.
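Each VLP-16 return is reported as a measured range, a rotation (azimuth) angle, and one of 16 fixed elevation angles, and the x, y, z values used throughout this paper follow from a spherical-to-Cartesian conversion. A minimal sketch is given below, assuming the angle convention of the VLP-16 user manual (azimuth measured from the y-axis); the function and constant names are illustrative.

```python
import numpy as np

# Fixed elevation angles (degrees) of the 16 laser channels, in firing order.
VLP16_ELEVATIONS = np.array([-15, 1, -13, 3, -11, 5, -9, 7,
                             -7, 9, -5, 11, -3, 13, -1, 15], dtype=float)

def to_cartesian(r, azimuth_deg, channel):
    """Convert one return (range in m, azimuth in degrees, laser channel
    0-15) to x, y, z coordinates centered on the sensor."""
    omega = np.radians(VLP16_ELEVATIONS[channel])  # elevation angle
    alpha = np.radians(azimuth_deg)                # rotation angle
    x = r * np.cos(omega) * np.sin(alpha)
    y = r * np.cos(omega) * np.cos(alpha)
    z = r * np.sin(omega)
    return x, y, z
```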
The lane identification in this research used data from a LiDAR sensor temporarily installed on the roadside of Kietzke Lane (north of the intersection with Miami Way) in Reno, Nevada. Kietzke Lane is a two-way arterial road serving the major north-south urban traffic in the east area of Reno. The speed limit is 40 mph (65 km/h). There are two lanes in each direction and a two-way left-turn lane for turning traffic; a Google satellite map is shown in Fig. 2(a). The rotation speed of the LiDAR sensor was set at 10 Hz. A frame of visualized 3D points collected by the LiDAR sensor is shown as an example in Fig. 2(b). A total of 30 minutes of LiDAR data was collected at that site. The data stream has been uploaded onto YouTube at the following link: https://youtu.be/X_5f4OQGpRE.
As can be seen in Fig. 2(b), the raw LiDAR data contain vehicles, buildings, trees, and the ground surface. Surrounded by those buildings, trees, and ground surface, the vehicle points are not obvious in the point cloud, meaning the trajectories of the vehicles cannot be extracted with the background present in the space. So those background points (buildings, trees, and ground surface) should be excluded before lane detection.

Fig. 2. LiDAR data collection. (a) LiDAR data collection site, (b) raw LiDAR data.

III. Background Filtering


The background can be buildings and the ground surface, as well as moving objects such as waving trees and bushes, which means the locations of these dynamic background points are not stationary [9]. Even for still background points, the locations are not the same in different frames because of LiDAR vibration or blocking by moving objects. The background filtering algorithm should exclude this background as much as possible while keeping the vehicle points. Many background filtering algorithms have been developed based on the statistics or change of pixels in pictures taken by cameras, such as the mixture of Gaussians (MoG) method [10] and statistical background modeling [11]. These algorithms for vision-based data cannot be used for LiDAR data directly, because the roadside data are a series of points instead of pixel information, but a similar idea can be borrowed from the statistical methods used for picture processing. Point density is one of the most important parameters for LiDAR data processing [12]. The irregular distribution pattern of LiDAR points in space increases the difficulty of object identification. By aggregating many frames, the number of vehicle points in the space becomes much smaller than the abundant background points, so the impact of the irregular pattern can be reduced by frame aggregation. A point density-based background filtering algorithm named 3D-DSF, first developed by Wu et al. [9], is then applied for background filtering. A brief introduction to 3D-DSF follows.

The 3D-DSF method consists of four major parts: frame aggregation, point statistics, threshold (TD) learning, and real-time filtering. The algorithm first collects raw data over a period as the initial input for background learning. To aggregate enough background points, the data frames in that period are overlapped into one space based on the locations of the point clouds collected by the LiDAR sensor. Theoretically, the more initial frames used for background identification, the better the achievable accuracy. However, more frames require more computer memory and increase the calculation time for background identification; in contrast, the algorithm is unable to identify vibrating background points if the number of frames is too low. The recommended number of frames for overlapping is between 1,500 and 3,000, considering both accuracy and time cost. After overlapping enough frames, the 3D space is chopped into many small contiguous cubes, each of which can be identified as background space or not. A corresponding 3D matrix is built to record the number of points in each cube, from which the density of overlapped 3D points in each cube can be calculated. In general, the density of the background should be higher than the density of moving vehicles or pedestrians, because background points are relatively static. A threshold of point density in the cube (TD) then needs to be identified to distinguish background and non-background cubes. A low TD may take slow-moving vehicles or pedestrians as background, while a high TD may fail to exclude the background far away from the LiDAR. For the same object, the number of points in the cloud varies with the distance between the object and the LiDAR sensor; a previous study [13] showed that the point density of the LiDAR can be fitted with a power function. Fig. 3 shows an example of the LiDAR point distribution.

Fig. 3. Distribution of vehicle points (number of points scanned for one vehicle versus distance between the vehicle and the LiDAR sensor).

As can be seen in Fig. 3, the number of points for one vehicle scanned by the VLP-16 dramatically decreases with increasing distance to the sensor. The background points follow the same trend: points decrease with increasing distance to the VLP-16. This indicates that a fixed threshold value cannot exclude the background effectively, so a dynamic threshold should be applied to the cubes based on their distance to the VLP-16. The whole scan range was therefore divided into six parts for background filtering based on distance to the LiDAR sensor: 0–5 m, 5–10 m, 10–20 m, 20–30 m, 30–40 m, and more than 40 m, and a threshold is defined for each subarea. Besides distance, the threshold is affected by the pedestrians and vehicles that appear in the overlapped frames, which should not be considered background. To identify this effect, the frequency of the number of points in each cube was counted for two scenarios (no vehicles; vehicles present) at the same site. Fig. 4 shows the results.
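To make the frame-aggregation and cube-statistics steps concrete, here is a minimal sketch of the cube bookkeeping and the real-time filtering step, assuming the overlapped frames are stacked into one (N, 3) array; the 0.5 m cube size and the function names are illustrative choices, not values specified by 3D-DSF.

```python
import numpy as np

CUBE = 0.5            # cube edge length in meters (illustrative)
ORIGIN = np.zeros(3)  # shared grid origin for learning and filtering

def cube_indices(points):
    """Map (N, 3) x, y, z points to integer cube indices on a fixed grid."""
    return np.floor((points - ORIGIN) / CUBE).astype(int)

def learn_background(aggregated, td):
    """Return the set of cubes whose aggregated point count reaches TD."""
    cubes, counts = np.unique(cube_indices(aggregated), axis=0,
                              return_counts=True)
    return {tuple(c) for c, n in zip(cubes, counts) if n >= td}

def filter_frame(frame, background):
    """Real-time step: drop points that fall into learned background cubes."""
    keep = np.array([tuple(i) not in background for i in cube_indices(frame)])
    return frame[keep]
```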


Fig. 4. Frequency distribution of cubes (number of points per cube versus frequency, with and without vehicles).

As shown in Fig. 4, when there are vehicles on the road, the frequency of cubes with a low number of points is much higher than when there are no vehicles. There are also fluctuations in the trend of the with-vehicles curve. The slope can be used for threshold identification, as given in Equation (1):

Slope = (F_i − F_{i−1}) / (N_i − N_{i−1}),    (1)

where N_i is the ith number of points per cube (from lowest to highest) and F_i is the frequency of the ith number of points per cube.

When the slope first becomes 0 or positive, the corresponding number of points per cube in Equation (1) is used as the threshold. In Fig. 4, the threshold should be 2 when there are no vehicles and 19 when there are vehicles on the road. If there are no vehicles (or very few vehicles), the variation of the frequency is due to fluctuation; it is recommended to smooth the curve with a sliding window to eliminate the fluctuation, and a window width of 3 is recommended based on practice. The locations of the cubes identified as background are then stored as a background array. During the real-time background filtering task, the algorithm uses the background array to match the locations of the points in each frame, and points located in the background array are excluded from the LiDAR data. Fig. 5 shows two examples of background filtering results (free traffic flow and congested traffic flow).
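A compact sketch of this threshold search, assuming the per-cube counts produced by the learning step above: smooth the frequency curve with the width-3 sliding window, then take the number of points per cube at which the slope of Equation (1) first becomes zero or positive. The names are illustrative.

```python
import numpy as np

def learn_threshold(counts_per_cube, window=3):
    """Find TD from per-cube point counts using Equation (1).

    counts_per_cube: 1D array with one entry per cube (points in that cube).
    Returns the number of points per cube at which the smoothed frequency
    curve first stops decreasing.
    """
    n_vals, freqs = np.unique(counts_per_cube, return_counts=True)
    # Width-3 sliding window to suppress fluctuation in the curve.
    smoothed = np.convolve(freqs, np.ones(window) / window, mode="same")
    # Slope between consecutive (N_i, F_i) pairs, Equation (1).
    slopes = np.diff(smoothed) / np.diff(n_vals)
    nonneg = np.where(slopes >= 0)[0]
    return n_vals[nonneg[0] + 1] if nonneg.size else n_vals[-1]
```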
Fig. 5. Background filtering using 3D-DSF. (a) Before background filtering (free flow), (b) after background filtering (free flow), (c) before background filtering (congested), and (d) after background filtering (congested).


Fig. 5(a) shows a frame collected while one pedestrian was crossing the intersection; there were three vehicles and one pedestrian in that frame. After background filtering, all vehicles and the pedestrian were kept, as shown in Fig. 5(b). Fig. 5(c) shows a frame collected when the traffic was congested (collected at the peak hour). The results show that all vehicles (9) and all pedestrians (3) were left in the space after background filtering, indicating that the algorithm still works well even when traffic is congested. Compared to the free-flow situation (Fig. 5(a) and (b)), however, more noise points are left when the traffic is congested. This noise comes from dynamic background points: some cubes with a point density below the threshold may contain dynamic background points that cannot be excluded from the space. The results show that this algorithm can filter more than 97% of background points while keeping about 98% of vehicle points. Though some noise remains after background filtering, those points are very limited and randomly distributed in the space, which has limited influence on lane identification or vehicle tracking. Since the background can change because of parked vehicles or LiDAR vibration, it is recommended to rerun the algorithm in the background frequently to update the background locations. The updating frequency varies with the traffic features of the road and the computing power available for data processing.

IV. Road Boundary Detection

After background filtering, the vehicle points can easily be distinguished from the noise using traditional clustering methods; the purpose of clustering in this step is to exclude the noise left by background filtering. Many different algorithms are available for object clustering, including k-means clustering, fuzzy clustering, density-based clustering, and many other clustering methods serving different data [14], [15]. Density-based clustering is very suitable for vehicle clustering in LiDAR data, since the point density of vehicles is much higher than in other areas of the space. Density-based spatial clustering of applications with noise (DBSCAN) is very effective for clustering density-related points in space [15]. Another advantage of DBSCAN is that the algorithm does not need to know how many vehicles are on the road in one frame; it learns the number of clusters automatically [16]. The DBSCAN algorithm requires two parameters: epsilon (ε), which specifies how close points should be to each other to be considered part of a cluster, and the minimum number of points (MinPts), which specifies how many neighbors a point should have to be included in a cluster. This paper applied the DBSCAN algorithm for vehicle clustering considering its powerful ability to process points in space. If ε is too big, two vehicles with a small headway may be clustered into one vehicle; if ε is too small, one vehicle may be split into several clusters. For MinPts, if the value is too big, a vehicle with few points may be treated as noise; if it is too small, the noise left by background filtering may be clustered into a vehicle. The recommended value for ε is 1.2 m and the recommended value for MinPts is 10, based on the regression method introduced by Wu et al. [9]. Fig. 6 shows an example of the clustering results.
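For illustration, this clustering step maps directly onto an off-the-shelf DBSCAN implementation. The sketch below uses scikit-learn with the recommended parameters (ε = 1.2 m, MinPts = 10); treating the problem in 2D (x, y) is a simplifying assumption here, not the authors' stated choice.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_vehicles(points_xy, eps=1.2, min_pts=10):
    """Cluster background-filtered points; DBSCAN labels noise as -1.

    points_xy: (N, 2) array of x, y coordinates after background filtering.
    Returns a list of per-cluster point arrays, noise excluded.
    """
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(points_xy)
    return [points_xy[labels == k] for k in set(labels) if k != -1]
```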
Fig. 6. Vehicle clustering using DBSCAN.

There is, however, another common situation: pedestrians may crowd the street, and the lane borders cannot be detected properly if pedestrians are clustered as vehicles. A method based on an artificial neural network (ANN) was therefore developed to distinguish pedestrians from vehicles with a high detection accuracy over a long detection range. Three features are extracted from each cluster as the inputs to the ANN model for vehicle/pedestrian identification; a sketch of the feature extraction follows the list.

1) Total Number of Points
In general, vehicle clusters include more points than pedestrian clusters at the same distance to the sensor.

2) Distance
For both pedestrian and vehicle clusters, the distance to the LiDAR sensor influences the number of cluster points. As the laser beams scan in the horizontal direction, the 2D distance is enough to describe object locations; the distance of the position reference point of each cluster to the sensor is calculated from the X and Y values.

3) Direction of Cluster Point Distribution
Analysis of cluster point distributions in 3D space revealed that pedestrian cluster points are spread mainly along the vertical direction (z-axis), while vehicle cluster points are spread primarily along the horizontal direction (parallel to the x-y plane). With the least-squares linear regression method, a linear function can be generated to describe the main distribution direction of each cluster.
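Below is a minimal sketch of the three-feature extraction for one cluster, assuming the cluster is an (N, 3) array of x, y, z points with the sensor at the origin. The centroid reference point and the eigenvector-based direction measure are illustrative stand-ins; the paper itself fits the main direction with least-squares linear regression.

```python
import numpy as np

def cluster_features(cluster_xyz):
    """Extract the three ANN input features from one cluster.

    cluster_xyz: (N, 3) array of x, y, z points in one cluster.
    Returns (total points, 2D distance to sensor, direction measure).
    """
    n_points = len(cluster_xyz)
    centroid = cluster_xyz.mean(axis=0)           # position reference point
    dist_2d = np.hypot(centroid[0], centroid[1])  # sensor at the origin
    # Dominant spread direction: eigenvector of the covariance matrix with
    # the largest eigenvalue. |z| near 1 means vertical (pedestrian-like)
    # spread; |z| near 0 means horizontal (vehicle-like) spread.
    eigvals, eigvecs = np.linalg.eigh(np.cov(cluster_xyz.T))
    direction = abs(eigvecs[:, np.argmax(eigvals)][2])
    return n_points, dist_2d, direction
```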


The pedestrian/vehicle identification method utilizes a backpropagation (BP) ANN model to distinguish pedestrian and vehicle objects. The BP neural network is one of the most popular neural network models [17]. It is a multilayer feed-forward neural network composed of an input layer, a hidden layer, an output layer, and neurons in each layer, as shown in Fig. 7; the number of hidden layers can be more than one. The input data are fed into the input layer. The activity of each hidden layer is then determined by the inputs and the weights that connect the input layer and the hidden layer, and a similar process takes place between the hidden layer and the output layer. The transmission from one neuron in one layer to another neuron in the next layer is independent. The output layer produces the estimated outcomes. The comparison information (error) between the target outputs and the estimated outputs is fed back as a guide to adjust the weights in the next training round. Through this iterative process, the neural network gradually learns the inner relationship between input and output by adjusting the weights of each neuron in each layer to reach the best accuracy. When the minimum error is reached, or the number of iterations exceeds the predefined value, the training process is terminated with fixed weights. The training results of the determined ANN model are shown in Table 1.

The following reasons were found to cause the classification failures:
1) a very low number of LiDAR points when pedestrians/vehicles are far away from the sensor (>25 m);
2) part of a vehicle or pedestrian being blocked by other pedestrians/vehicles or facilities (this issue can be fixed by deploying multiple sensors in different directions on the road);
3) several pedestrians close to each other and far from the sensor.

After distinguishing vehicles and pedestrians, only the vehicles were kept in each frame. Multiple frames are then aggregated to enhance the trajectories of the vehicles. The number of frames used for aggregation depends on the traffic volume; the criterion is that at least one vehicle should be covered in the outside lane. At this specific site, 1,000 frames were aggregated. Ideally, the boundary of the lanes could be identified directly by finding the boundaries of the vehicles on the road, but the space after data aggregation contains about 254,562 points. Since DBSCAN requires calculating the distance of each point pair, processing so many points requires a lot of memory and time. To reduce the calculation load, the number of points should be reduced: only the points nearest to the LiDAR sensor in each cluster were kept. After point reduction, the points in the space were reduced to 2,679, and the boundary of the representative points can be identified, as shown in Fig. 8. As the vehicle points decrease with increasing distance to the LiDAR sensor, the group having the most points should be the vehicles in the lane nearest to the LiDAR sensor. The extracted road edge is shown as red lines in Fig. 9.
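The point-reduction step keeps, for each aggregated cluster, only the returns closest to the sensor. A minimal sketch follows, assuming the clusters come from the DBSCAN step above and the sensor sits at the origin; the per-cluster keep fraction is an illustrative parameter, not a value from the paper.

```python
import numpy as np

def reduce_to_near_edge(clusters, keep_fraction=0.05):
    """Keep only each cluster's points nearest to the LiDAR sensor.

    clusters: list of (N, 2) x, y arrays, one per aggregated vehicle cluster.
    Returns a single array of representative near-edge points.
    """
    kept = []
    for pts in clusters:
        d = np.hypot(pts[:, 0], pts[:, 1])        # 2D range to the sensor
        n_keep = max(1, int(len(pts) * keep_fraction))
        kept.append(pts[np.argsort(d)[:n_keep]])  # nearest points only
    return np.vstack(kept)
```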

Fig. 7. Example of an artificial neural network (inputs: total number of points, 2D distance, direction; outputs: pedestrian, vehicle).

Table 1. Evaluation of the ANN model.

Dataset                        | Training Set | Validation Set | Testing Set
Total Clusters                 | 600          | 600            | 1,000
Pedestrian Clusters            | 413          | 392            | 284
Vehicle Clusters               | 187          | 208            | 716
Identified Pedestrian Clusters | 400          | 375            | 284
Identified Vehicle Clusters    | 183          | 204            | 652
Detection Rate (%)             | 97.2         | 96.5           | 93.6
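For readers who want to reproduce the classifier, the three features feed a small feed-forward network. The sketch below uses scikit-learn's MLPClassifier as a stand-in for the authors' BP network; the hidden-layer size, feature scaling, and training setup are illustrative assumptions, not values reported in the paper.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_classifier(X, y):
    """Train a small feed-forward network on the three cluster features.

    X: (n_clusters, 3) array [total points, 2D distance, direction];
    y: labels, 0 = pedestrian, 1 = vehicle.
    """
    # Feature scaling matters: point counts (~10^3) dwarf the 0-1 direction
    # feature. One hidden layer mirrors Fig. 7; 10 neurons is illustrative.
    model = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(10,),
                                        max_iter=2000, random_state=0))
    model.fit(X, y)
    return model

# Usage: labels = train_classifier(X_train, y_train).predict(X_new)
```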
Fig. 8. Clustering result after data reduction.


In this step, the road boundary nearest to the LiDAR sensor is identified. The locations of the other lanes can then be calculated by shifting the red line in Fig. 9 in parallel by the widths of the lanes and the median. It should be noted that the other boundary of the road cannot be detected from vehicle trajectories, since the LiDAR sensor can only scan part of each vehicle, and the scanned parts are those nearest the LiDAR sensor. This problem is further illustrated in Fig. 10. The red vehicle (Cluster 1) is moving in the lane nearest to the LiDAR sensor, and the blue vehicle (Cluster 3) is moving in the lane farthest from it. If the trajectory of the blue vehicle were used to identify the road boundary far from the LiDAR, Red Line 1 would be taken as the road boundary. But since vehicle lengths are not accurate in the LiDAR data (shorter than normal), that boundary should actually lie near the green dotted line, not Red Line 1. So only Red Line 2 in Fig. 10 can be used for lane identification.

Fig. 9. Road boundary.

Fig. 10. Lane identification issue.
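Since the far boundary is derived rather than detected, the remaining lane lines follow from parallel offsets of the detected near boundary. A minimal sketch for a straight-line boundary is given below, assuming known lane and median widths; a curved boundary would instead be offset point-by-point along the local normal.

```python
import numpy as np

def offset_boundary(boundary_xy, offsets):
    """Shift a fitted straight boundary line laterally by given distances.

    boundary_xy: (N, 2) points on the detected near-side road boundary.
    offsets: lateral distances in meters (e.g., multiples of lane width).
    Returns a list of (slope, intercept) pairs, one per offset line.
    """
    m, b = np.polyfit(boundary_xy[:, 0], boundary_xy[:, 1], 1)
    # A line y = m x + b shifted perpendicularly by d has intercept
    # b + d * sqrt(1 + m^2).
    return [(m, b + d * np.sqrt(1 + m * m)) for d in offsets]

# Usage, with hypothetical 3.6 m lanes and a 4 m median:
# lane_lines = offset_boundary(edge_points, [3.6, 7.2, 11.2, 14.8, 18.4])
```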
V. Case Study

This procedure was evaluated using two LiDAR datasets collected at two different sites. The first site is located near a wildlife overpass on the I-80 freeway in a rural area of Elko, Nevada, United States. This road segment has two lanes in the west-to-east (WE) direction and three lanes in the east-to-west (EW) direction, and the speed limit is 70 mph. The second data collection site is near the intersection of N Virginia St at Talus Way in the urban area of Reno, Nevada, United States. N Virginia St is a major arterial serving the traffic in the northwest area of Reno; there are two through lanes in each direction and one left-turn lane in the middle of the two traffic directions, and the speed limit is 45 mph. Fig. 11 shows a map of the two data collection sites. The data collection video has been uploaded onto YouTube at the following link: https://youtu.be/V66l1X9RCb0.

The algorithm was implemented in Matlab and deployed on a Dell desktop equipped with an Intel Core i7-4790 CPU (3.60 GHz) and 16 GB of RAM to process the data collected at the two sites. The border detection relies on dense vehicle points on the road. Site 1 is located on the I-80 freeway; since the traffic volume in the left lane was relatively low, the density of vehicle points was not high enough for the algorithm to detect the second border. In that case, therefore, the algorithm is semi-automatic: the second border could not be directly detected and had to be calculated using the width of the road. For the lane identification on N Virginia St, since the road is not straight, the road boundary was identified by connecting all the points on the boundary of the outside lane, so the curve information is kept. Fig. 12 shows the results of the clustering.

The results show that the lanes can be successfully detected, even for a road with a curve. The time used for lane detection is about 2 minutes after background filtering; since this task can be conducted before vehicle tracking, a two-minute time cost is acceptable. The whole detection lengths of the lanes at the two sites are 87.9 m and 68.5 m, respectively, while one previous method could only detect the range of lanes up to 20 m [18]. So this procedure has a longer detection range compared to previous algorithms.

VI. Conclusion

This paper presents a very simple but effective method for lane identification serving roadside LiDAR sensors. The road boundary is recognized based on the trajectories of vehicles on the road after background filtering. Though some noise remains after applying 3D-DSF, the results showed that the remaining noise did not affect the accuracy of lane identification. The precondition for applying this method is that the geometry of the road (such as the number of lanes, lane width, and median width) is known in advance. The results of the case studies show that this procedure can successfully identify the location of lanes with a very limited time cost of about 2 minutes for each site. It is also shown that this procedure can identify the location of lanes on non-straight roads.


Fig. 11. Location of the case study sites (Site 1: I-80, Elko, Nevada; Site 2: N Virginia St, Reno, Nevada).

Fig. 12. Results of lane identification. (a) Lane identification at Site 1, (b) lane identification at Site 2.


The lane identification can be conducted before real-time vehicle tracking, so the locations of vehicles can also be successfully identified even under severe weather conditions, such as heavy rain and snow.

This research is an important component of a system applying roadside LiDAR sensors to serve connected vehicles and other traffic engineering practices. The current procedure can identify the road boundary more than 65 m from the LiDAR sensor, a range limited by the geometry of the roads and the scan range of the LiDAR sensor. The detection range is expected to be extended by setting up several LiDAR sensors along the roadside. This setup can compensate for the point loss far away from the sensors and can be used to detect both road borders. However, extra efforts are needed to merge the points collected from different LiDAR sensors. The proposed procedure cannot detect the lanes at intersections, since the turning movements (left-turn, right-turn, U-turn) largely impact the distribution of vehicle trajectories, which increases the difficulty of lane identification. Further studies should be conducted to explore effective methods for lane identification at intersections.

About the Authors

Jianqing Wu received the M.S. degree in Civil Engineering from Shandong University, Jinan, China, in 2015. He is currently working toward the Ph.D. degree with the Department of Civil and Environmental Engineering at the University of Nevada, Reno, Nevada, United States. His research interests include intelligent transportation systems and traffic safety.

Hao Xu received his Ph.D. in Civil Engineering from Texas Tech University. He joined the faculty at the University of Nevada, Reno in the Department of Civil & Environmental Engineering as an Assistant Professor in July 2013. His main areas of research include intelligent transportation systems, traffic operation and control, and development of applications for transportation engineering.

Junxuan Zhao received her M.S. in Electrical Engineering from Colorado State University in 2015 and B.E. in Electrical Engineering and Automation from Anhui Agricultural University, China, in 2013. She is a Ph.D. student in the Civil, Environmental, and Construction Engineering Department at Texas Tech University. Her research interests include intelligent transportation systems, GIS in transportation, and GNSS software receiver algorithms.

References
[1] S. Andrews, "Vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications and cooperative driving," Handbook Intell. Veh., pp. 1121–1144, 2012. doi: 10.1007/978-0-85729-085-4_46.
[2] D. Guilbert, S. S. Ieng, C. Le Bastard, and Y. Wang, "Robust blind deconvolution process for vehicle reidentification by an inductive loop detector," IEEE Sensors J., vol. 14, no. 12, pp. 4315–4322, 2014.
[3] M. Bertozzi and A. Broggi, "GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection," IEEE Trans. Image Process., vol. 7, no. 1, pp. 62–81, 1998. doi: 10.1109/83.650851.
[4] C. Laugier and J. Chartre, "Intelligent perception and situation awareness for automated vehicles," in Proc. Conf. GTC Europe, Amsterdam, The Netherlands, Sept. 2016.
[5] S. S. Ieng, J. P. Tarel, and R. Labayrade, "On the design of a single lane-markings detectors regardless the on-board camera's position," in Proc. IEEE Intelligent Vehicles Symp., 2003, pp. 564–569.
[6] B. Yang, L. Fang, Q. Li, and J. Li, "Automated extraction of road markings from mobile Lidar point clouds," Photogramm. Eng. Remote Sens., vol. 78, no. 4, pp. 331–338, 2012. doi: 10.14358/pers.78.4.331.
[7] J. Melo, A. Naftel, A. Bernardino, and J. Santos-Victor, "Viewpoint independent detection of vehicle trajectories and lane geometry from uncalibrated traffic surveillance cameras," Image Anal. Recog., pp. 454–462, 2004.
[8] Y. Wang, D. Shen, and E. K. Teoh, "Lane detection using spline model," Pattern Recog. Lett., vol. 21, no. 8, pp. 677–689, 2000. doi: 10.1016/s0167-8655(00)00021-0.
[9] J. Wu, H. Xu, and J. Zheng, "Automatic background filtering and lane identification with roadside Lidar data," in Proc. 20th Int. Conf. Intelligent Transportation Systems, 2017.
[10] C. Stauffer and W. E. L. Grimson, "Learning patterns of activity using real-time tracking," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 8, pp. 747–757, 2000.
[11] D. Z. Wang, I. Posner, and P. Newman, "What could move? Finding cars, pedestrians and bicyclists in 3D laser data," in Proc. IEEE Int. Conf. Robotics and Automation, 2012, pp. 4038–4044.
[12] J. Balsa-Barreiro, J. P. Avariento, and J. L. Lerma, "Airborne light detection and ranging (LiDAR) point density analysis," Sci. Res. Essays, vol. 7, no. 33, pp. 3010–3019, 2012.
[13] J. Balsa-Barreiro and J. L. Lerma, "A new methodology to estimate the discrete-return point density on airborne Lidar surveys," Int. J. Remote Sens., vol. 35, no. 4, pp. 1496–1510, 2014.
[14] N. R. Pal, K. Pal, J. M. Keller, and J. C. Bezdek, "A possibilistic fuzzy c-means clustering algorithm," IEEE Trans. Fuzzy Syst., vol. 13, no. 4, pp. 517–530, 2005.
[15] M. Ester, H. P. Kriegel, J. Sander, and X. Xu, "A density-based algorithm for discovering clusters in large spatial databases with noise," in Proc. 2nd Int. Conf. Knowledge Discovery and Data Mining (KDD), 1996, pp. 226–231.
[16] T. N. Tran, K. Drab, and M. Daszykowski, "Revised DBSCAN algorithm to cluster data with dense adjacent clusters," Chemometrics Intell. Lab. Syst., vol. 120, pp. 92–96, 2013.
[17] K. Yetilmezsoy and S. Demirel, "Artificial neural network (ANN) approach for modeling of Pb (II) adsorption from aqueous solution by Antep pistachio (Pistacia vera L.) shells," J. Hazardous Mater., vol. 153, no. 3, pp. 1288–1300, 2008.
[18] V. Popescu, M. Bace, and S. Nedevschi, "Lane identification and ego-vehicle accurate global positioning in intersections," in Proc. IEEE Intelligent Vehicles Symp., 2011, pp. 870–875.


Keywords—Roadside LiDAR, background extraction, lane identification, connected vehicles

Name: Hao Xu
Assistant Professor
Department of Civil and Environmental Engineering
University of Nevada, Reno
Mailing address: 1664 N Virginia St, MS 258
Reno, NV 89557
Tel: (775) 784-6909
Fax: (775) 784-1390
Email: haox@unr.edu
