properties of the road environment. After this two-staged extraction process, a decision for a road terrain type is taken by applying a boosting classifier that is trained on annotated ground truth data. The proposed approach learns the visuospatial properties of driving scenes as the features implicitly represent both local visual properties and their spatial layout.

The proposed two-stage analysis process considering the spatial layout of a scene is useful for any classification task exhibiting a clear spatial correspondence between visual properties at different metric locations. For the task of road terrain detection, this allows the separation of visually similar parts of the road area into the ego-lane and the remaining part of the road. The main novelties of the presented approach are:
• Proposal of spatial ray features for analyzing spatial properties in metric space and over arbitrary distances.
• Individual modeling of visual and spatial characteristics allows separating weather/lighting influences from size variations in road types (highway vs. city).
• The approach can be optimized for different types of road terrain (independent of the road delimiter type) by providing appropriately labeled ground truth.

The article is organized as follows: In Section II we review related work on road segmentation and ego-lane detection. Section III provides an overview of the proposed system approach. Section IV describes the base classification of local visual properties and Section V introduces the extraction of spatial properties from the scene using the proposed SPRAY features. The classification of road terrain based on the resulting feature vectors is outlined in Section VI. The evaluation of the proposed approach for road area and ego-lane detection on real-world data from a publicly available dataset is presented in Section VII. The contribution is concluded in Section VIII.

II. RELATED WORK

For the task of vision-based road segmentation a variety of approaches have been developed, making different assumptions about the road terrain:

A. Extraction of Road Delimiters

Identifying the lane by detecting lane markings has a long history (see, e.g., [14]). Today's state-of-the-art approaches extract the delimiting elements of the driving space either for the detection of the ego-lane (see, e.g., [4], [5], [15], [6], [16], [7]) or for the detection of the complete road area (see, e.g., [17], [18], [19]). Features for these models are extracted from longitudinal road structures like lane markings or road boundary obstacles (e.g., curbstones, barriers) by visual processing. This is mainly based on color and edge appearance [5], [15], [6], [7], 3D information from stereo processing [5], [15], [17], [19], or Structure from Motion [18]. From the extracted features, the road/lane shape can be tracked using different road shape models (see, e.g., [20]). However, especially in the inner city the applicability of these approaches is limited because road delimiters cannot be detected that easily (missing/bad lane markings, parked cars occluding curbstones, very low curbstones, ...).

B. Segmentation of Road Area

Instead of delimiter-based road/lane detection, the properties of the complete road surface can be applied for the detection process. Complementary to using 3D information for delimiter detection, the physical property of road flatness has been used in a variety of approaches [17], [20], [21], [22]. However, this requires the road area to be limited by sufficiently elevated structures.

Consequently, many recent approaches are based on visual properties like, e.g., the mainly gray and untextured asphalt region [9], [10], [11], [12], [13], [23], [24], [25]. These visual properties of the road area have been used for estimating the overall road shape [25] or for segmenting the complete road area [11], [13]. Pixel-based classifications, using Conditional Random Fields (CRF), can also be used to segment the complete field of view into individual elements, including the road surface [23]. However, classifying the visual appearance on a local scale only can lead to many ambiguities. Therefore, in [12] it was shown that incorporating a pixel's larger visual context by using multi-scale grid histograms increases the detection quality of all classes.

This contribution introduces a related idea where the focus lies on capturing specific spatial configurations of visual features in a larger metric representation instead of the directly surrounding image parts. Including such a bottom-up spatial context allows for detecting semantic categories like the ego-lane, which has previously only been done with delimiter-based approaches.

C. Inclusion of Scene Context

In order to further enhance the robustness of (bottom-up) classification decisions, top-down scene context is often included as an explicit prior. This context can be extracted directly from the image data like, e.g., the vanishing point [26], the location of the horizon line, and the scene category [27]. Additionally, external information sources like map data [18] or data from other sensors [28] can be incorporated. The proposed approach can take advantage of such context information indicating different road types with distinct spatial properties (e.g., the width of a city road vs. a highway). This can be realized by model switching in the training/classification stage, and we assume in the following that such symbolic context information on the road type is available.

III. SYSTEM OVERVIEW

The overall system depicted in Fig. 2 consists of two stages for achieving road terrain detection:
• Several base classifiers capturing local visual appearance properties
• A road terrain classifier based on SPRAY features capturing visuospatial properties

RGB images from the camera are fed into each of the base classifiers. Each base classifier provides one metric confidence map in the 2D BEV driving space for one specific visual property, as outlined in Section IV. We propose to use three base classifiers: 'base road classifier', 'base boundary classifier', and 'base lane marking classifier'.
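The two-stage organization above can be sketched as follows. The function names and the dummy "classifiers" are illustrative placeholders (a real base classifier is learned from annotated data; here one color channel is simply normalized), not the trained models from the paper:

```python
import numpy as np

def base_classifier(rgb_bev, channel):
    """Placeholder base classifier: returns a confidence map in [0, 1]
    over the 2D BEV grid for one visual property."""
    # Dummy stand-in: normalize one color channel to [0, 1].
    c = rgb_bev[..., channel].astype(float)
    return (c - c.min()) / (c.max() - c.min() + 1e-9)

def road_terrain_pipeline(rgb_bev):
    # Stage 1: one metric confidence map per visual property.
    confidence_maps = {
        "base_road": base_classifier(rgb_bev, 0),
        "base_boundary": base_classifier(rgb_bev, 1),
        "base_lane_marking": base_classifier(rgb_bev, 2),
    }
    # Stage 2 would compute SPRAY features on these maps and feed them
    # to a boosting classifier; here we only return the maps.
    return confidence_maps

bev = np.random.randint(0, 255, (400, 800, 3), dtype=np.uint8)
maps = road_terrain_pipeline(bev)
print(sorted(maps))  # ['base_boundary', 'base_lane_marking', 'base_road']
```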
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS 3
Fig. 2. System block diagram showing the overall system architecture. Note
that all base classifiers include preprocessing and inverse perspective mapping
to provide metric confidence maps to the spatial layout computation. Training
has to be done separately to capture appearance variations and road geometry
characteristics.
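The inverse perspective mapping mentioned in the caption can be sketched as follows under a flat-road assumption: a 3×3 homography H maps metric ground-plane points (x, z, 1) to image pixels (u, v, 1). The matrix H below encodes an invented example camera (focal length 800 px, principal point (640, 100), mounting height 1.2 m), not the calibration used in the paper:

```python
import numpy as np

def ipm_sample(image, H, x_range=(-10.0, 10.0), z_range=(6.0, 46.0),
               bev_shape=(400, 800)):
    """Resample a perspective image into a metric BEV grid."""
    rows, cols = bev_shape
    x = np.linspace(*x_range, cols)          # lateral position, meters
    z = np.linspace(*z_range, rows)          # longitudinal position, meters
    X, Z = np.meshgrid(x, z)
    pts = np.stack([X.ravel(), Z.ravel(), np.ones(X.size)])
    uvw = H @ pts                            # project to the image plane
    u = (uvw[0] / uvw[2]).round().astype(int)
    v = (uvw[1] / uvw[2]).round().astype(int)
    bev = np.zeros(bev_shape)
    inside = (u >= 0) & (u < image.shape[1]) & (v >= 0) & (v < image.shape[0])
    bev.ravel()[inside] = image[v[inside], u[inside]]
    return bev

H = np.array([[800.0, 640.0,   0.0],   # u*w = f*x + cx*z
              [  0.0, 100.0, 960.0],   # v*w = cy*z + f*h
              [  0.0,   1.0,   0.0]])  # w = z
img = np.ones((271, 1280))             # cropped image size from Sec. VII-A
bev = ipm_sample(img, H)
```

Near ground points that project outside the image remain zero in the BEV grid, which is why the base classifiers can only fill the part of the metric driving space actually observed by the camera.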
TABLE I
PERFORMANCE OF INDIVIDUAL FEATURES ON BENCHMARK

                 Q^P_train  Q^P_test  Q^M_train  Q^M_test
Baseline             -        73.0       -         40.7
# SFA-features
  3                59.7       53.6      52.5       50.1
  10               76.2       71.6      62.4       59.4
  20               78.0       73.0      63.5       59.6
  30               78.8       73.3      64.2       59.9
  40               79.2       73.9      64.1       59.9
# WH-features
  16               73.5       71.2      57.0       54.5
  36               76.0       73.5      59.7       56.5
  64               76.8       74.2      60.5       57.2
  100              77.7       75.0      61.1       57.4
# RGB-features
  6                79.0       74.9      62.8       60.0
  18               81.7       77.4      65.4       62.1

Fig. 7. Distribution of base points in metric space (left), the SPRAY feature generation procedure illustrated for one base point (middle), and the ego SPRAY feature (right).
TABLE II
FEATURE COMBINATIONS ON BENCHMARK DATASET.

Fig. 9. Integral over the confidences (absorption) for the third ray from Fig. 7. Two SPRAY features AD3(t1) and AD3(t2) are obtained, which reflect in this case the distance to the lane marking and the left road border.

f_SPRAY,α(t_i) = AD_α(t_i) = argmin_ρ (ρ | A_α(ρ) > t_i)    (8)

Fig. 10. Example of ground truth for road area (blue) and ego-lane (green) terrain types annotated in perspective space (left) and transformed in metric space (right).
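The absorption distance of Eq. (8) can be sketched for a single ray: the base-classifier confidences sampled along the ray are integrated, and for each threshold t_i the feature is the smallest distance ρ at which the integral A(ρ) exceeds t_i. Ray extraction from the confidence map is omitted here; `conf` stands for confidences already sampled along one ray at spacing `step` (meters):

```python
import numpy as np

def absorption_distances(conf, thresholds, step=0.05):
    absorption = np.cumsum(conf) * step      # discretized A(rho)
    rho = np.arange(1, conf.size + 1) * step # distance of each sample
    feats = []
    for t in thresholds:
        idx = np.argmax(absorption > t)      # first index exceeding t
        if absorption[idx] > t:
            feats.append(rho[idx])
        else:                                # never exceeded: saturate
            feats.append(rho[-1])
    return np.array(feats)

# Toy ray: low confidence, then a high-confidence structure
# (e.g. a lane marking) starting at 1.0 m.
ray = np.concatenate([np.full(20, 0.05), np.full(40, 0.9)])
print(absorption_distances(ray, thresholds=(0.1, 1.0)))  # [1.1 2.1]
```

A small threshold reacts to the first confident structure along the ray, while a larger threshold measures how far the ray must travel through confident regions, which is what lets the classifier distinguish a nearby lane marking from the more distant road border.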
Fig. 11. Result of the road terrain classification showing the BEV representation of Fig. 10 (left) and the classification result for road area (middle) and ego-lane (right). The corresponding base classifier results are those shown in Fig. 6.

VII. EVALUATION

For the evaluation of the overall approach we measure the road area and ego-lane detection performance on inner-city streams recorded while driving through the inner city of Offenbach, Germany. In order to judge the quality of the proposed approach for use in an automotive application, we will carry out evaluations in the perspective space as well as in the metric bird's-eye view. For a comparison to the state of the art, the reader is referred to the evaluation¹ on the KITTI-ROAD benchmark [43].

A. Input Data & Parameter Settings

Input data consists of three rounds of driving in three different weather conditions (overcast, sunny, and mixed), giving a total of nine rounds. The round track covers a mixture of main roads and small streets present in a German city (Offenbach). The images (1280 × 1024, camera focal length 5.4 mm) were recorded at a 20 Hz frame rate. For training and evaluation the images are cropped to exclude the sky and the hood of the ego-vehicle (see Fig. 1), resulting in images with 1280 × 271 pixels. Depending on traffic, each round took 9-15 minutes and resulted in a total of 10000-18000 frames. Annotations of road area and ego-lane have been done manually, labeling one frame every 8 seconds². This resulted in a total of 742 annotated frames (66-114 frames per round). The metric representation covers −10 m to 10 m in the lateral (x) direction and 6 m to 46 m in the longitudinal (z) direction (see Fig. 11), resulting in a BEV with 800 × 400 px.

The whole system is implemented to run on a GPU using OpenCL. On an NVidia GTX 580, the base classifiers process the RGB images and extract confidence maps (∼12 ms), which are handed to the SPRAY feature calculation (∼16 ms) and the final road terrain classification (∼17 ms). The complete processing pipeline requires ∼45 ms, sufficiently fast for achieving 20 fps.

B. Evaluation Measures

For the evaluation of road area and ego-lane detection results a variety of evaluation measures can be used. Similar to [12], [27], [44], we employ the F-measure derived from the precision and recall values (10)-(12) for the pixel-based evaluation. As there is no concrete application, we make use of the harmonic mean (F1-measure, β = 1), while an unbalanced F-measure using a different weighting of precision and recall could also be applied. Furthermore, the accuracy (13) is a measure frequently used in road segmentation tasks [11].

Precision = TP / (TP + FP)    (10)
Recall = TP / (TP + FN)    (11)
F-measure = (1 + β²) · (Precision · Recall) / (β² · Precision + Recall)    (12)
Accuracy = (TP + TN) / (TP + FP + TN + FN)    (13)

The threshold TH for SPRAY feature classification (i.e., for obtaining TP, FP, TN, and FN) is chosen such that a maximal F-measure (F_max) is obtained (14).

F_max = argmax_TH F-measure    (14)

Furthermore, in order to provide insights into the performance over the full recall range, the average precision (AP) as defined in [45] is computed for different recall values r:

AP = (1/11) Σ_{r ∈ {0, 0.1, ..., 1}} max_{r̃ : r̃ ≥ r} Precision(r̃)    (15)

Considering both measures provides insights into an algorithm's optimal (F_max) and overall (AP) performance. A graphical impression of the overall performance can be gained using precision-recall curves. Traditionally, such evaluations have been carried out in the complete perspective space [11], [12], [44]. In order to focus the evaluation on the driving task, we limit the evaluation of the perspective space to the area covering the BEV road space (ignoring the sky, etc.) and, more importantly, we also apply all metrics in BEV space.

C. Evaluation Results

For each weather condition, the first round is used for training the base classifiers and the second round is used for generating confidence maps of unseen data with the trained base classifiers. On these confidence values the SPRAY features are trained. The third round is then used for testing the full road terrain classification on unseen data. By iterating through the three rounds of each weather condition we perform a three-fold cross validation.

In a first experiment, the contribution of the individual base classifiers combined with SPRAY features for road area detection is evaluated (see Table III). Similar to Section IV, we generate the baseline from ground truth. For comparison, the performance of the best base classifier feature combination from Section IV is shown; this represents the performance level of a purely appearance-based classification approach.

¹ Results are available at http://www.cvlibs.net/datasets/kitti/eval_road.php
² The used image data and annotations can be obtained by sending an e-mail with the subject 'InnerCity dataset' to Jannik.Fritsch@honda-ri.de.
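The pixel-based measures (10)-(15) can be sketched as follows; TP/FP/TN/FN are counted from a binary ground-truth map and a confidence map thresholded at TH, F_max (14) is found by searching over candidate thresholds, and AP uses the 11-point interpolation of [45]. The confidence and ground-truth maps below are toy data, not results from the paper:

```python
import numpy as np

def counts(conf, gt, th):
    pred = conf > th
    return (np.sum(pred & gt), np.sum(pred & ~gt),
            np.sum(~pred & ~gt), np.sum(~pred & gt))  # TP, FP, TN, FN

def f_measure(tp, fp, fn, beta=1.0):
    precision = tp / (tp + fp) if tp + fp else 0.0    # (10)
    recall = tp / (tp + fn) if tp + fn else 0.0       # (11)
    denom = beta ** 2 * precision + recall
    return (1 + beta ** 2) * precision * recall / denom if denom else 0.0  # (12)

def accuracy(tp, fp, tn, fn):                         # (13)
    return (tp + tn) / (tp + fp + tn + fn)

def f_max(conf, gt, thresholds):                      # (14)
    return max(f_measure(tp, fp, fn)
               for tp, fp, tn, fn in (counts(conf, gt, th) for th in thresholds))

def average_precision(precisions, recalls):           # (15), 11-point interp.
    precisions, recalls = np.asarray(precisions), np.asarray(recalls)
    return sum(precisions[recalls >= r].max() if (recalls >= r).any() else 0.0
               for r in np.arange(0, 1.01, 0.1)) / 11.0

conf = np.array([[0.9, 0.8], [0.3, 0.1]])
gt = np.array([[True, True], [False, False]])
print(f_max(conf, gt, thresholds=[0.05, 0.5, 0.95]))  # 1.0, reached at TH=0.5
```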
TABLE III
RESULTS OF ROAD AREA DETECTION ON INNER CITY DATASET.

                                 Q^M_train  Q^M_test
Baseline                             -        43.0
base classifier C18WH64SF20        65.6       62.1
SPRAY features trained on
  base road                        80.0       70.2
  base boundary                    79.4       71.6
  lane markings                    45.9       40.3
  base road + base boundary        84.3       73.6
  all base classifiers             85.4       74.5
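The combinations in the lower rows of Table III can be sketched by stacking the SPRAY feature vectors computed on the selected confidence maps; a boosting classifier [36] is then trained on the stacked vectors. The per-map SPRAY extraction is abstracted here by a placeholder that returns one dummy feature vector per base point instead of the absorption distances of Eq. (8):

```python
import numpy as np

def spray_features(conf_map, base_points, n_rays=8, n_thresholds=2):
    """Placeholder: one feature per ray/threshold for each base point."""
    rng = np.random.default_rng(0)   # dummy values instead of Eq. (8)
    return rng.random((len(base_points), n_rays * n_thresholds))

# Base points on a regular grid in the metric driving space.
base_points = [(x, z) for x in range(-8, 9, 4) for z in range(10, 41, 10)]
maps = {name: np.zeros((400, 800)) for name in
        ("base_road", "base_boundary", "base_lane_marking")}

# Concatenate the SPRAY vectors of the selected base classifiers.
selected = ("base_road", "base_boundary")
X = np.hstack([spray_features(maps[m], base_points) for m in selected])
print(X.shape)  # (20, 32): one row per base point, 16 features per map
```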
Fig. 15. Exemplary results for the ego-lane for the images from Fig. 1.

ACKNOWLEDGMENT

The authors gratefully acknowledge the anonymous reviewers for their comments as well as the colleagues at the Honda Research Institute Europe GmbH for many fruitful discussions.
raise the quality of the overall system. This stems from the fact that the base lane marking classifier cannot detect road signs well as it is optimized for lane structures. This results in road signs partly getting high base boundary confidences that are not counteracted by base road confidences, as road signs are visually different from typical road appearance. As a result, road signs are often classified as non-road although they make up part of the road.

From the pixel-based evaluation results obtained on individual images we argue that the metric performance of road area is sufficient for using it as a context cue. For example, it can be used to support/boost car detection results at positions close to or on the road area [1]. Besides using the pixel-level results directly as a kind of support map, more abstract representations of the extracted information are also required for real applications. For example, the pixel-based evaluation is not suitable for discussing the benefit for a lane-oriented ADAS application, which classically requires the lane boundary positions instead of a segmented area. In order to achieve such a representation, the confidence maps in BEV can be filtered. Subsequently, some kind of clothoid lane model (see Section II) or a behavior-oriented driving model [43] can be fitted to the data. This abstraction from the low-level pixel classification is then also well suited for performing temporal integration based on the application requirements.

VIII. CONCLUSION

In this paper, we introduced a novel approach combining visual and spatial information to enhance local classification decisions. The proposed SPRAY features capture the geometric characteristics of road environments over larger spatial areas. Through applying machine learning techniques for training the classifiers, the approach can be tuned for extracting different road terrain types from road scenes. The evaluation of road area and ego-lane detection on a dataset with several weather conditions has shown that this approach can handle various scenes such as roads without lane markings and varying asphalt appearances. The pixel-based evaluation carried out in the metric BEV space provides the first step towards bridging the gap between image processing and vehicle control. The second step is the identification of suitable algorithms for generating parametric representations that can be used for vehicle control in ADAS, which we are currently investigating. We plan to extend our approach for application in different road scenarios (e.g., highway and inner-city) and to perform

REFERENCES

[1] T. Weisswange, B. Bolder, J. Fritsch, S. Hasler, and C. Goerick, "An integrated ADAS for assessing risky situations in urban driving," in Proc. IEEE Intell. Veh. Symp., 2013.
[2] S. Ishida and J. Gayko, "Development, evaluation and introduction of a lane keeping assistance system," in Proc. IEEE Intell. Veh. Symp., 2004, pp. 943-944.
[3] C. Guo, J. Meguro, Z. Kojima, and T. Naito, "CADAS: a multimodal advanced driver assistance system for normal urban streets based on road context understanding," in Proc. IEEE Intell. Veh. Symp., 2013, pp. 228-235.
[4] J. McCall and M. M. Trivedi, "Video-based lane estimation and tracking for driver assistance: survey, system, and evaluation," IEEE Transactions on Intelligent Transportation Systems, vol. 7, no. 1, pp. 20-37, 2006.
[5] R. Danescu and S. Nedevschi, "New results in stereovision based lane tracking," in Proc. IEEE Intell. Veh. Symp., 2011, pp. 230-235.
[6] R. Gopalan, T. Hong, M. Shneier, and R. Chellappa, "A learning approach towards detection and tracking of lane markings," IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 3, pp. 1088-1098, 2012.
[7] A. Linarth and E. Angelopoulou, "On feature templates for particle filter based lane detection," in Proc. IEEE Intell. Transp. Syst. Conf., 2011, pp. 1721-1726.
[8] G. K. Siogkas and E. S. Dermatas, "Random-walker monocular road detection in adverse conditions using automated spatiotemporal seed selection," IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 2, pp. 527-538, 2013.
[9] J. M. Alvarez and A. M. Lopez, "Road detection based on illuminant invariance," IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 1, pp. 184-193, 2011.
[10] C. Guo, S. Mita, and D. McAllester, "Robust road detection and tracking in challenging scenarios based on Markov random fields with unsupervised learning," IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 3, pp. 1338-1354, 2012.
[11] J. M. Alvarez, T. Gevers, Y. LeCun, and A. M. Lopez, "Road scene segmentation from a single image," in ECCV 2012, ser. Lecture Notes in Computer Science, vol. 7578. Springer Berlin Heidelberg, 2012, pp. 376-389.
[12] Y. Kang, K. Yamaguchi, T. Naito, and Y. Ninomiya, "Multiband image segmentation and object recognition for understanding road scenes," IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 4, pp. 1423-1433, 2011.
[13] T. Kuehnl, F. Kummert, and J. Fritsch, "Monocular road segmentation using slow feature analysis," in Proc. IEEE Intell. Veh. Symp., 2011, pp. 800-806.
[14] M. Bertozzi and A. Broggi, "Real-time lane and obstacle detection on the GOLD system," in Proc. IEEE Intell. Veh. Symp., 1996, pp. 213-218.
[15] R. Danescu and S. Nedevschi, "Probabilistic lane tracking in difficult road scenarios using stereovision," IEEE Transactions on Intelligent Transportation Systems, vol. 10, no. 2, pp. 272-282, 2009.
[16] J. Choi, J. Lee, D. Kim, G. Soprani, P. Cerri, A. Broggi, and K. Yi, "Environment-detection-and-mapping algorithm for autonomous driving in rural or off-road environment," IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 2, pp. 974-982, 2012.
[17] J. Siegemund, U. Franke, and W. Forstner, "A temporal filter approach for detection and reconstruction of curbs and road surfaces based on conditional random fields," in Proc. IEEE Intell. Veh. Symp., 2011, pp. 637-642.
[18] M. Darms, M. Komar, and S. Lueke, "Map based road boundary estimation," in Proc. IEEE Intell. Veh. Symp., 2010, pp. 609-614.
[19] C. Guo, T. Yamabe, and S. Mita, "Robust road boundary estimation for intelligent vehicles in challenging scenarios based on a semantic graph," in Proc. IEEE Intell. Veh. Symp., 2012, pp. 37-44.
[20] M. Konrad, M. Szczot, and K. Dietmayer, "Road course estimation in occupancy grids," in Proc. IEEE Intell. Veh. Symp., 2010, pp. 412-417.
[21] A. Wedel, U. Franke, H. Badino, and D. Cremers, "B-spline modeling of road surfaces for freespace estimation," in Proc. IEEE Intell. Veh. Symp., 2008, pp. 828-833.
[22] M. Okutomi and S. Noguchi, "Extraction of road region using stereo images," in Proc. Fourteenth Int. Pattern Recognition Conf., vol. 1, 1998, pp. 853-856.
[23] C. Wojek and B. Schiele, "A dynamic CRF model for joint labeling of object and scene classes," in European Conference on Computer Vision, vol. 5305, 2008, pp. 733-747.
[24] T. Gumpp, D. Nienhuser, and J. M. Zollner, "Lane confidence fusion for visual occupancy estimation," in Proc. IEEE Intell. Veh. Symp., 2011, pp. 1043-1048.
[25] U. Franke, H. Loose, and C. Knoeppel, "Lane recognition on country roads," in Proc. IEEE Intell. Veh. Symp., 2007, pp. 99-104.
[26] Q. Wu, W. Zhang, and B. V. K. V. Kumar, "Example-based clear path detection assisted by vanishing point estimation," in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), 2011, pp. 1615-1620.
[27] J. M. Alvarez, T. Gevers, and A. M. Lopez, "3D scene priors for road detection," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2010, pp. 57-64.
[28] M. Serfling, R. Schweiger, and W. Ritter, "Road course estimation in a night vision application using a digital map, a camera sensor and a prototypical imaging radar system," in Proc. IEEE Intell. Veh. Symp., 2008, pp. 810-815.
[29] T. Veit, J.-P. Tarel, P. Nicolle, and P. Charbonnier, "Evaluation of road marking feature extraction," in Proc. 11th Int. IEEE Conf. Intelligent Transportation Systems (ITSC), 2008, pp. 174-181.
[30] H. A. Mallot, H. H. Bulthoff, J. Little, and S. Bohrer, "Inverse perspective mapping simplifies optical flow computation and obstacle detection," Biological Cybernetics, vol. 64, pp. 177-185, 1991.
[31] L. Wiskott and T. Sejnowski, "Slow feature analysis: Unsupervised learning of invariances," Neural Computation, vol. 14, no. 4, pp. 715-770, 2002.
[32] M. Franzius, N. Wilbert, and L. Wiskott, "Invariant object recognition with slow feature analysis," in Proc. Conf. on Artificial Neural Networks (ICANN), ser. Lecture Notes in Computer Science, vol. 5163. Springer, 2008, pp. 961-970.
[33] P. Berkes, "SFA-TK: Slow feature analysis toolkit for Matlab (v.1.0.1)," 2003. [Online]. Available: http://itb.biologie.hu-berlin.de/~berkes/software/sfa-tk/sfa-tk.shtml
[34] B. Fino and V. Algazi, "Unified matrix treatment of the fast Walsh-Hadamard transform," IEEE Transactions on Computers, vol. 100, no. 11, pp. 1142-1146, 1976.
[35] Y. Alon, A. Ferencz, and A. Shashua, "Off-road path following using region classification and geometric projection constraints," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2006, pp. 689-696.
[36] Y. Freund and R. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, vol. 55, pp. 119-139, 1997.
[37] Y. Sha, X. Yu, and G. Zhang, "A feature selection algorithm based on boosting for road detection," in Proc. Conf. Fuzzy Systems and Knowledge Discovery, vol. 2, 2008, pp. 257-261.
[38] T. Kuehnl, "Road terrain detection for advanced driver assistance systems," Ph.D. dissertation, University of Bielefeld, 2013.
[39] T. Michalke, R. Kastner, M. Herbert, J. Fritsch, and C. Goerick, "Adaptive multi-cue fusion for robust detection of unmarked inner-city streets," in Proc. IEEE Intell. Veh. Symp., 2009, pp. 1-8.
[40] K. Smith, A. Carleton, and V. Lepetit, "Fast ray features for learning irregular shapes," in Proc. IEEE Int. Conf. Computer Vision, 2009, pp. 397-404.
[41] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake, "Real-time human pose recognition in parts from single depth images," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2011, pp. 1297-1304.
[42] T. Kuehnl, F. Kummert, and J. Fritsch, "Spatial ray features for real-time ego-lane extraction," in Proc. IEEE Intell. Transp. Syst. Conf., 2012, pp. 288-293.
[43] J. Fritsch, T. Kuehnl, and A. Geiger, "A new performance measure and evaluation benchmark for road detection algorithms," in Proc. IEEE Intell. Transp. Syst. Conf., 2013, accepted.
[44] J. M. Alvarez and A. Lopez, "Novel index for objective evaluation of road detection algorithms," in Proc. IEEE Intell. Transp. Syst. Conf., 2008, pp. 815-820.
[45] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, "The PASCAL visual object classes (VOC) challenge," Int. J. of Computer Vision, vol. 88, no. 2, pp. 303-338, Jun. 2010.
[46] T. Michalke, R. Kastner, J. Fritsch, and C. Goerick, "A generic temporal integration approach for enhancing feature-based road-detection systems," in Proc. IEEE Intell. Transp. Syst. Conf., 2008, pp. 657-663.
[47] T. Wu and A. Ranganathan, "A practical system for road marking detection and recognition," in Proc. IEEE Intell. Veh. Symp., 2012, pp. 25-30.

Jannik Fritsch (M'09) received the Dipl.-Ing. degree in electrical engineering from Ruhr University Bochum, Bochum, Germany, in 1996 and the Ph.D. degree and the venia legendi (Habilitation) in computer science from Bielefeld University, Bielefeld, Germany, in 2003 and 2012, respectively. In 1998, he joined the Applied Computer Science Group, Bielefeld University. In 2004, he joined the EU Project COGNIRON (The Cognitive Robot Companion), where he headed the integration efforts for the Key Experiment Robot Home-Tour. Since 2006, he has been a Principal Scientist with the Honda Research Institute Europe GmbH, Offenbach am Main, Germany. His research interests include image processing methods for environment perception, spatial representations, and cognitive system concepts for intelligent automotive systems.

Tobias Kühnl received the Dipl.-Ing. degree in electrical engineering and information technologies from the Technical University of Darmstadt, Darmstadt, Germany, in 2010 and the Dr.-Ing. degree in computer science from Bielefeld University, Bielefeld, Germany, in 2013. Since 2010, he has been with the Research Institute for Cognition and Robotics (CoR-Lab), Bielefeld University, working in the field of road terrain detection for advanced driver assistance systems, in collaboration with the Honda Research Institute Europe GmbH, Offenbach am Main, Germany. His further research interests include computer vision systems, image processing, and machine learning.

Franz Kummert (M'91) received the Dipl.-Ing. and Ph.D. (Dr.-Ing.) degrees in computer science from the University of Erlangen-Nürnberg, Erlangen, Germany, in 1987 and 1991, respectively, and the venia legendi (Habilitation) in computer science from Bielefeld University, Bielefeld, Germany, in 1996. From 1987 to 1990, he was with the Pattern Recognition Group (Institut für Informatik, Mustererkennung), University of Erlangen-Nürnberg. Since 1991, he has been with the Applied Informatics Group (Angewandte Informatik), Bielefeld University, where he has been an Applied Professor in pattern recognition since 2002, and the Dean of Studies of the Faculty of Technology since 2003. He has published various papers in these fields, and he is the author of a book on the control of a speech-understanding system and on the automatic interpretation of speech and image signals. His fields of research include speech and image understanding and the applications of pattern understanding methods to natural science domains. Prof. Kummert has been a member of the Senat of Bielefeld University since 2004.