Multi-Lane Perception Using Feature Fusion Based on GraphSLAM

Alexey Abramov, Christopher Bayer, Claudio Heller, Claudia Loy (these authors contributed equally to this work). The authors are with Continental Teves AG, Chassis & Safety Division, Advanced Engineering, Guerickestrasse 7, DE-60488, Frankfurt am Main, Germany. {alexey.abramov, christopher.bayer, claudio.heller, claudia.loy}@continental-corporation.com
Abstract

An extensive, precise and robust recognition and modeling of the environment is a key factor for the next generation of Advanced Driver Assistance Systems and for the development of autonomous vehicles. In this paper, a real-time approach for the perception of multiple lanes on highways is proposed. Lane markings detected by camera systems and observations of other traffic participants provide the input data for the algorithm. The information is accumulated and fused using GraphSLAM and the result constitutes the basis for a multi-lane clothoid model. To allow the incorporation of additional information sources, input data is processed in a generic format. The method is evaluated by comparing real data, collected with an experimental vehicle on highways, to a ground truth map. The results show that ego and adjacent lanes are robustly detected with high quality up to a distance of 120 m. In comparison to serial lane detection, an increase in the detection range of the ego lane and a continuous perception of neighboring lanes are achieved. The method can potentially be utilized for the longitudinal and lateral control of self-driving vehicles.

I Introduction and Related Work

Nowadays, modern cars are equipped with Advanced Driver Assistance Systems (ADAS) to increase the comfort and safety of drivers and passengers. Systems like lane departure warning, lane keeping assist, adaptive cruise control, emergency brake assist and blind spot monitoring help drivers to keep the car within the lane and to avoid collisions with other traffic participants [1]. For the next generation of ADAS, and especially regarding the prospect of autonomous vehicles, comprehensive and extensive knowledge about the environment is required. One fundamental part is a continuous and robust perception of the road. This includes an extension of both the longitudinal and lateral detection range, with perception of all traffic lanes relevant for the analysis of the current driving situation.

For sensing the ego lane, state-of-the-art ADAS rely on camera systems to detect the left and right markings [2, 3, 4, 5]. Despite recent developments in vision-based lane detection techniques [6, 7, 8, 9], the performance of camera systems remains limited by the camera's aperture angle, variations in illumination, the quality of lane markings and weather conditions. To detect other traffic participants, serial vehicles utilize camera, radar, ultrasonic and lidar sensors.

Recent research demonstrates that the fusion of measurements from diverse sensors yields a more sophisticated representation of the environment [10, 11, 12]. This includes precise and robust lane estimation, which is essential for automated driving at high speeds and for automated lane changes. Common multi-sensor fusion techniques combine camera sensors with radar or lidar measurements, or both, in order to compensate for each other's drawbacks [13, 11].

Existing real-time multi-lane detection systems rely on optical sensors. Aly [6] introduced a camera-based approach which detects and models multiple lanes in still images. However, this technique works well only on free roads, as it does not take other traffic participants into account (passing vehicles can cause diagonal lines in the Inverse Perspective Mapping (IPM) that eventually lead to wrong lane markings). In addition, lines with prominent contrast oriented parallel to the road direction (e.g. street writing, shadows of guardrails, curbs, etc.) can lead to false detections because the real width of lane markings is not considered [14]. A camera-based system proposed by Kang et al. [9] detects up to six lanes on highways. Nevertheless, this method also neglects information about other traffic participants and, thereby, lanes covered by other cars cannot be detected. Furthermore, it employs a second order polynomial with constant curvature to model lane markings. Huang et al. [13] detect multiple lanes using camera and lidar data, where the lidar serves as a complementary sensor for eliminating outliers: objects that could generate wrong lane detections are found by the lidar-based obstacle detection, and all 3D obstacle detections are projected onto the camera image to explicitly suppress road line candidates in those areas. A high detection performance is achieved with a setup of 5 cameras and 13 lidars. Nevertheless, typical problems of optical sensors, such as perception of hidden or poorly visible lane markings, remain unresolved.

In this paper, a novel real-time multi-lane perception approach for highways is presented, which provides a continuous and robust estimation of ego and adjacent lanes up to 120 m. The main advantage of the proposed technique is the detection of multiple lanes by combining data from various input sources. For this purpose a generic feature description is introduced which allows processing and fusion of measurements regardless of the sensor type. In the present study, lane markings detected by two camera systems are fused with tracked trajectories of other traffic participants. The fusion result is utilized for the estimation of a multi-lane clothoid model, which can serve as an input for the longitudinal and lateral control of self-driving vehicles. The system performance is evaluated by comparing real data collected on highways to ground truth. The multi-lane perception approach operates without prior knowledge about the course of the road and the number of existing traffic lanes. Another important advantage over existing methods is that the demonstrated application relies on a production-oriented sensor setup and requires neither a digital map nor GPS localization [15].

A description of the lane estimation algorithm is given in section II; the evaluation methods and results are presented in section III, followed by the conclusion in section IV.

II Feature Based Lane Estimation

Fig. 1: Architectural overview of the multi-lane perception method, consisting of the lane feature fusion and the multi-lane modeling. The feature sets $F_1, \dots, F_n$ are obtained from the different input sources $I_1, \dots, I_n$; $F^{fus}$ represents the set of fused lane features.

The principal ideas of the multi-lane perception method are described in the following. An overview of the main modules is shown in fig. 1, where $I_1, \dots, I_n$ represent different input sources which allow deriving information about the road course (see sections II-B and II-C). The core module is the lane feature fusion algorithm that accumulates and fuses the feature sets $F_1, \dots, F_n$ obtained from the input sources. This method combines the advantages of every input data type and is explained in detail in sections II-D and II-E. On top of the fusion result $F^{fus}$, a clothoid based multi-lane model representing the road course is estimated (see section II-F).

The fusion operates on individual lane features rather than on sensor-specific lane models; hence the lane feature fusion is free of geometric model assumptions. To ensure a generic representation in the lane feature fusion, a uniform interface is utilized for the inputs $F_1, \dots, F_n$ and the output $F^{fus}$. The interface is specified as a list of features

$F = (f_1, \dots, f_N), \qquad f_i = (x_i, y_i, \varphi_i)^\top$   (1)

where $x_i$ and $y_i$ constitute the position of a feature $f_i$ and $\varphi_i$ constitutes its heading. The measurement uncertainty of each feature is given by a confidence value $c_i$ and a covariance matrix $\Sigma_i$ with respect to $x_i$, $y_i$ and $\varphi_i$. Note that this is a two-dimensional representation which omits height information about the road course.
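As a minimal sketch of how such a generic feature interface could be realized in code, the following Python data structure mirrors the description above; the class and field names are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class LaneFeature:
    """Generic lane feature exchanged between input sources and the fusion.

    Positions and heading are given in the vehicle coordinate system
    (x: driving direction, y: left); height information is omitted.
    """
    x: float                  # longitudinal position [m]
    y: float                  # lateral position [m]
    heading: float            # orientation of the marking [rad]
    confidence: float = 1.0   # measurement confidence in [0, 1]
    # 3x3 covariance with respect to x, y and heading
    cov: np.ndarray = field(default_factory=lambda: np.eye(3))

# A feature list F = (f_1, ..., f_N) from any source is then simply:
features = [LaneFeature(x=10.0, y=1.8, heading=0.01, confidence=0.9)]
```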

II-A Experimental vehicle and sensor setup

The experimental vehicle used in the current study is equipped with the following sensors for environmental perception:

  • Serial Mono Camera (SMC) mounted behind the windshield above the rear-view mirror (here the right camera of the Stereo Multi Functional Camera (SMFC) 300 series by Continental is used).

  • High Resolution Camera (HRC) mounted close to the SMC behind the windshield (USB 3.0 camera UI-3580CP, IDS Imaging Development Systems GmbH).

  • Long Range Radar (LRR) mounted inside the front bumper below the license plate (ARS 300, Continental).

  • Short Range Radars (SRR) mounted inside the front and rear bumpers at each corner (SRR, Continental).

The vehicle coordinate system is a right-handed coordinate system whose origin lies in the middle of the front axle at road height; the $x$-axis is identical to the driving direction, the $y$-axis points to the left and the $z$-axis points upwards.

Ii-B Camera-based lane detection and feature extraction

The SMC is a serial automotive camera with a resolution of px and an aperture angle of °. It performs multiple functions such as lane detection, traffic sign and pedestrian recognition and detection of other vehicles. The SMC lane detection is a combination of several image processing operations followed by a Kalman filter used for tracking [5]. The inner sides of the left and right lane markings of the ego lane are represented by clothoids (see fig. 2, top). The algorithm has a detection range up to under ideal conditions (clearly visible markings and appropriate weather and lighting).

The HRC is an experimental camera with a resolution of px and an aperture angle of °. In this work it is used to increase the detection range of the ego lane and, in addition, to extract lane markings of neighboring lanes. The algorithm processing the HRC images is based on the serial SMC lane detection and finds features at lane markings taking their physical size into account (shown in fig. 2 at the bottom). Note that no feature tracking or lane modeling is performed based on the HRC images alone. Detected features directly constitute the input for the lane feature fusion. Under good conditions the HRC features can be generated up to a longitudinal distance of .

Both cameras use online calibration algorithms to transform the SMC ego lane and the HRC marking features to the vehicle coordinate system [16].

Fig. 2: Perception of lane markings and moving objects (red boxes) by the experimental vehicle. The ego lane estimated by the SMC is shown at the top (lane clothoids in yellow, adaptive measurement windows in green), while lane marking features detected by the HRC are shown at the bottom (yellow points).

II-C Traffic participant detection

In the present work, the measured position and orientation of other traffic participants are used as an additional information source to improve the road course estimation. Therefore, robust and precise tracking of other traffic participants in the vicinity of the experimental vehicle is required. For this task, a model-based data fusion algorithm as presented in [17] is applied. Each tracked object is represented by a Kalman filter whose state vector holds the position and velocity of the traffic participant in the vehicle coordinate system. The LRR, SRR and SMC sensors of the experimental vehicle provide the input for the tracking module. At each time step the existing tracked objects are updated with the new sensor data, while new tracked objects are created if applicable. For the lane feature fusion only tracked objects driving in front of the experimental vehicle that are verified by at least one radar and one camera measurement are used, to ensure high existence probabilities. Fig. 2 shows tracked objects as red boxes projected onto the images of the SMC and the HRC.
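For readers unfamiliar with this kind of object tracking, the sketch below shows one predict/update cycle of a constant-velocity Kalman filter over a position-only measurement. The matrices, noise values and cycle time are placeholders for illustration, not the parameters of the fusion system in [17].

```python
import numpy as np

dt = 0.05                                   # cycle time [s] (assumed)
F = np.array([[1, 0, dt, 0],                # constant-velocity motion model
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                 # sensors measure position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.1 * np.eye(4)                         # process noise (placeholder)
R = 0.5 * np.eye(2)                         # measurement noise (placeholder)

x = np.array([20.0, 3.5, 25.0, 0.0])        # state: [x, y, vx, vy]
P = np.eye(4)

# predict
x = F @ x
P = F @ P @ F.T + Q

# update with a new position measurement z
z = np.array([21.3, 3.4])
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (z - H @ x)
P = (np.eye(4) - K @ H) @ P
```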

II-D Lane feature fusion using GraphSLAM

The aim of the lane feature fusion algorithm is to obtain a representation of the environment as a fused lane feature set $F^{fus}$, resulting from the different lane feature inputs $F_1, \dots, F_n$. In general this problem can be described as the estimation of the posterior

$p(x_t, m \mid z_{1:t}, u_{1:t})$   (2)

where $x_t$ is the current vehicle pose and $m$ is a description of the environment, given the various measurements $z_{1:t}$. The control vectors $u_{1:t}$ describe the movement of the vehicle at the corresponding times. This is known as the Simultaneous Localization and Mapping (SLAM) problem, which can be solved, for example, by an extended Kalman filter or a particle filter [18].

In this work, the GraphSLAM algorithm [19] is used, which estimates the environment $m$ and not only the current vehicle pose $x_t$, but the whole trajectory $x_{0:t}$. This also allows expressing dependencies between measurements, which will be shown in detail later.

In GraphSLAM, eq. 2 is described by a sparse graph. The vehicle poses and the environment features are represented as vertices $v_i$. The measurements and control vectors define constraints, represented as edges connecting the corresponding vertices. The graph is formulated as the sum of constraints

$\sum_{\langle i,j \rangle} e(v_i, v_j, z_{ij})^\top\, \Omega_{ij}\, e(v_i, v_j, z_{ij})$   (3)

where $e(v_i, v_j, z_{ij})$ is an error function [19]. This error function returns the discrepancy between the measurement $z_{ij}$ and the pose difference of the vertices $v_i$ and $v_j$. This discrepancy is weighted by the measurement covariance in information matrix form $\Omega_{ij}$. The minimization of this sum of non-linear quadratic terms can be solved by the Gauss-Newton algorithm. The resulting configuration of vertices equals the poses of the estimated environment features and the vehicle poses.
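To make the formulation concrete, the following sketch evaluates the quadratic objective of eq. 3 for a toy two-edge graph and performs Gauss-Newton steps with numerical Jacobians. It is a didactic approximation under simplifying assumptions (plain heading differences, first vertex fixed), not the solver used by the authors.

```python
import numpy as np

# Toy graph: vertices are (x, y, phi); v0 is the anchored vehicle pose,
# v1 a later vehicle pose, v2 a lane feature.
vertices = np.array([[0.0, 0.0, 0.0],
                     [2.1, 0.1, 0.0],
                     [5.0, 1.9, 0.0]])
# Edges (i, j, z_ij, Omega_ij): z_ij is the desired pose difference v_j - v_i.
edges = [(0, 1, np.array([2.0, 0.00, 0.0]), np.eye(3)),   # odometry edge
         (1, 2, np.array([3.0, 1.75, 0.0]), np.eye(3))]   # feature measurement

def stacked_error(x_flat):
    """e_ij = z_ij - (v_j - v_i) for all edges (headings treated as plain
    differences to keep the example short)."""
    v = x_flat.reshape(-1, 3)
    return np.concatenate([z - (v[j] - v[i]) for i, j, z, _ in edges])

def gauss_newton_step(x_flat, n_fixed=3, eps=1e-6):
    """One Gauss-Newton step on the objective of eq. 3; the first vertex is
    kept fixed to remove the gauge freedom of the graph."""
    e = stacked_error(x_flat)
    J = np.zeros((e.size, x_flat.size))
    for k in range(n_fixed, x_flat.size):                  # numerical Jacobian
        dx = np.zeros_like(x_flat)
        dx[k] = eps
        J[:, k] = (stacked_error(x_flat + dx) - e) / eps
    Omega = np.zeros((e.size, e.size))                     # block-diagonal information
    for k, (_, _, _, om) in enumerate(edges):
        Omega[3 * k:3 * k + 3, 3 * k:3 * k + 3] = om
    Jf = J[:, n_fixed:]
    H = Jf.T @ Omega @ Jf
    b = Jf.T @ Omega @ e
    x_new = x_flat.copy()
    x_new[n_fixed:] -= np.linalg.solve(H, b)
    return x_new

x = vertices.flatten()
for _ in range(3):                                          # linear case: converges immediately
    x = gauss_newton_step(x)
print(x.reshape(-1, 3))
```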

II-E Building the graph

The environment and the measured lane features are represented as nodes in the graph $G$. Since only the environment in front of the ego vehicle is of interest, the corresponding vehicle pose set is reduced to the most recent poses $x_{t-T}, \dots, x_t$.

Thus, the graph $G$ contains the successive vehicle poses and the lane features as vertices $v_i$. Note that all poses of the graph vertices are given with respect to the current vehicle pose coordinate system. The measurement constraints defining the edges of the graph result from the input lane features and the control vectors, which are described in the following sections.

II-E1 Adding odometry to the graph

The current control vector $u_t$ is added to the previous graph $G_{t-1}$. The control vector is composed of the yaw rate and the speed of the vehicle and is used to calculate the pose difference $\Delta x_t$ between the previous pose $x_{t-1}$ and the current pose $x_t$, with the corresponding information matrix $\Omega_t^{odo}$. At first, all vertices are transformed from the previous vehicle coordinate system to the current vehicle pose using $\Delta x_t$. After this transformation all vertices arising from measurements that lie more than a fixed distance behind the ego vehicle are removed from $G$. Subsequently, past vehicle poses which are no longer connected to measurement vertices are also removed. The odometry edge is inserted into the graph between two successive poses as the constraint

$e(x_{t-1}, x_t, \Delta x_t)^\top\, \Omega_t^{odo}\, e(x_{t-1}, x_t, \Delta x_t)$   (4)

with the error function $e(x_{t-1}, x_t, \Delta x_t) = \Delta x_t - (x_t \ominus x_{t-1})$, where $\ominus$ denotes the pose difference. In the exemplary graph in fig. 3 the odometry constraint is shown as a dashed black edge.
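For illustration, the sketch below shows how such an odometry step could look: the ego motion predicted from yaw rate and speed shifts all stored vertices into the current vehicle frame, and vertices far behind the vehicle are pruned. The motion model, helper names and the pruning distance are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def motion_delta(yaw_rate, speed, dt):
    """Pose difference (dx, dy, dphi) of the vehicle over one cycle,
    using a simple constant-turn-rate-and-velocity assumption."""
    dphi = yaw_rate * dt
    if abs(yaw_rate) < 1e-6:
        return np.array([speed * dt, 0.0, dphi])
    r = speed / yaw_rate
    return np.array([r * np.sin(dphi), r * (1.0 - np.cos(dphi)), dphi])

def to_current_frame(points, delta):
    """Transform feature positions from the previous vehicle frame into the
    current one, i.e. undo the ego motion described by delta = (dx, dy, dphi)."""
    dx, dy, dphi = delta
    c, s = np.cos(-dphi), np.sin(-dphi)
    R = np.array([[c, -s], [s, c]])
    return (points - np.array([dx, dy])) @ R.T

# Example: shift stored graph vertices by one odometry step and prune
# everything that fell more than `max_behind` metres behind the vehicle.
delta = motion_delta(yaw_rate=0.02, speed=30.0, dt=0.05)
vertices = np.array([[12.0, 1.8], [-28.0, 1.9], [55.0, -1.7]])
vertices = to_current_frame(vertices, delta)
max_behind = 25.0                      # pruning distance (assumed value)
vertices = vertices[vertices[:, 0] > -max_behind]
```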

II-E2 Adding SMC clothoids to the graph

The SMC clothoids are sampled every two meters to compute poses and information matrices of the corresponding features in the vehicle coordinate system. These features are associated with the existing lane features of the graph. If no feature within an association distance is found, a new vertex is added to the graph. The constraint is described as

$e(x_t, f_j, z_{t,i}^{SMC})^\top\, \Omega_{t,i}^{SMC}\, e(x_t, f_j, z_{t,i}^{SMC})$   (5)

where the measurement $z_{t,i}^{SMC}$ is the desired pose difference between the vertex of the current vehicle pose $x_t$ and the vertex of the new or associated feature $f_j$. The measurement of an SMC feature associated to an existing feature is shown in fig. 3 as a green edge.
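The following sketch illustrates the two steps described above, sampling an SMC clothoid into feature poses and associating each sample with the closest existing graph feature within a gating distance. The gate value and function names are illustrative assumptions.

```python
import numpy as np

def sample_clothoid(y0, phi0, c0, c1, x_max, step=2.0):
    """Sample poses (x, y, heading) along a clothoid approximated by the
    third order polynomial of eq. 11, every `step` metres."""
    xs = np.arange(0.0, x_max + 1e-9, step)
    ys = y0 + phi0 * xs + c0 / 2.0 * xs**2 + c1 / 6.0 * xs**3
    headings = phi0 + c0 * xs + c1 / 2.0 * xs**2          # derivative of eq. 11
    return np.stack([xs, ys, headings], axis=1)

def associate(feature, graph_features, gate=0.5):
    """Return the index of the closest existing graph feature within `gate`
    metres, or None if a new vertex has to be created (gate is assumed)."""
    if len(graph_features) == 0:
        return None
    d = np.linalg.norm(graph_features[:, :2] - feature[:2], axis=1)
    i = int(np.argmin(d))
    return i if d[i] < gate else None

smc_left = sample_clothoid(y0=1.8, phi0=0.0, c0=1e-4, c1=0.0, x_max=60.0)
graph_features = np.empty((0, 3))
for f in smc_left:
    idx = associate(f, graph_features)
    if idx is None:
        graph_features = np.vstack([graph_features, f])   # new vertex
    # else: add a measurement edge between the current pose and vertex idx
```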

II-E3 Adding HRC features to the graph

Since features in the HRC images are extracted directly at lane markings, the corresponding features in vehicle coordinates are directly associated with an existing feature or inserted as a new vertex with the corresponding measurement constraint

$e(x_t, f_j, z_{t,i}^{HRC})^\top\, \Omega_{t,i}^{HRC}\, e(x_t, f_j, z_{t,i}^{HRC})$   (6)

which is shown in fig. 3 as a magenta colored edge.

II-E4 Adding dynamic object features to the graph

The idea is to use the positions and movement of other traffic participants to derive information about the lanes. In most cases drivers of other vehicles tend to drive near the middle of the lane. Based on this assumption, lane features are generated from tracked dynamic objects. Two features perpendicular to the object heading are generated on the left and right side of each dynamic object at a lateral distance of half the lane width $w/2$, representing potential lane markings. The parameter $w$ is an estimate of the current lane width that is taken from the clothoids if available, or otherwise set to the most common lane width on German highways.
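A minimal sketch of this feature generation is given below; the helper name and the default lane width are placeholders (the paper uses an estimated width, falling back to a typical German highway value).

```python
import numpy as np

def object_lane_features(obj_x, obj_y, obj_heading, lane_width=3.75):
    """Place two potential lane-marking features half a lane width to the
    left and right of a tracked object, perpendicular to its heading
    (the default width is an assumed placeholder)."""
    normal = np.array([-np.sin(obj_heading), np.cos(obj_heading)])  # left normal
    center = np.array([obj_x, obj_y])
    left = np.append(center + 0.5 * lane_width * normal, obj_heading)
    right = np.append(center - 0.5 * lane_width * normal, obj_heading)
    return left, right   # each is (x, y, heading)

f_left, f_right = object_lane_features(45.0, 0.2, 0.01)
```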

Fig. 3: Example graph for measuring an object over two successive time steps $t-1$ and $t$ and the features obtained from the SMC and the HRC, with the odometry edge (dashed black), left and right object feature edges (black), width edges (blue), smoothing edges (red), the SMC measurement edge (green) and the HRC measurement edge (magenta).

The corresponding feature covariance equals the sum of the object covariance and a covariance matrix representing the lateral standard deviation of traffic participants with respect to the middle of a lane. The resulting features are associated with existing features or added as a new vertex with the measurement constraint

$e(x_t, f_j, z_t^{i,l/r})^\top\, \Omega_t^{i}\, e(x_t, f_j, z_t^{i,l/r})$   (7)

where $z_t^{i,l/r}$ is the left or right feature of the $i$-th tracked object at time stamp $t$. An example of measurements of one tracked object over two successive time steps is shown in fig. 3.

A deficiency of this model is that left and right features are decoupled, which means that an improvement of the position of the left feature does not influence the right one and vice versa. Therefore, the assumption on the lane width is expressed as a constraint between the left and right feature:

$e(f_t^{i,l}, f_t^{i,r}, z^{w})^\top\, \Omega^{w}\, e(f_t^{i,l}, f_t^{i,r}, z^{w})$   (8)

The desired pose difference between the left and right feature of the same object is defined as $z^{w} = (0, w, 0)^\top$, with the lane width $w$ as lateral distance. The angle difference is set to zero, since the headings of the features are supposed to be equal. The information matrix $\Omega^{w}$ corresponds to the variance of the current lane width estimation. The corresponding constraints are shown in fig. 3 as blue edges.

Furthermore, one additional dependency has to be considered: in the current model, two successive features on the same side of a tracked object are decoupled, which means that $f_{t-1}^{i,l/r}$ has no direct influence on $f_t^{i,l/r}$. If a feature is corrected by other measurements, a large discrepancy to the succeeding feature can occur, which needs to be minimized. Therefore, a smoothing constraint

$e(f_{t-1}^{i,l/r}, f_t^{i,l/r}, z^{s})^\top\, \Omega^{s}\, e(f_{t-1}^{i,l/r}, f_t^{i,l/r}, z^{s})$   (9)

is added between the two features. The lateral displacement between successive features can then be reduced by increasing the entries of $\Omega^{s}$ corresponding to $y$ and $\varphi$, while the entry corresponding to the longitudinal direction is set to zero, since the longitudinal distance is not supposed to be altered. In fig. 3 this constraint is shown as a red edge. If a traffic participant performs a lane change this constraint is strongly violated, since at some point the feature of time step $t-1$ belongs to one lane and the feature of time step $t$ to the neighboring lane. Here GraphSLAM provides the utility to multiply eq. 9 with a switch variable $s$. If this variable is set to zero, the edge is disabled; if it equals one, it is fully activated. As in [20], where this method is used for false loop closures, a prior constraint on $s$ is added during the optimization of GraphSLAM. This forces the edge to stay enabled until the error of the edge gets too large and deactivating the edge becomes more optimal.
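To give an intuition for the switchable smoothing constraint, the sketch below evaluates such a term for two successive object features; the weights, the quadratic prior on the switch variable and the function name are illustrative assumptions in the spirit of [20], not the exact formulation of the paper.

```python
import numpy as np

def smoothing_cost(f_prev, f_curr, s, omega_y=4.0, omega_phi=10.0):
    """Switchable smoothing term between two successive object features
    f = (x, y, heading). The longitudinal difference is not penalized;
    `s` is the switch variable, the weights are illustrative."""
    dy = f_curr[1] - f_prev[1]
    dphi = f_curr[2] - f_prev[2]
    smooth_cost = s**2 * (omega_y * dy**2 + omega_phi * dphi**2)
    prior_cost = (1.0 - s)**2 * 1.0      # prior keeping the edge enabled [20]
    return smooth_cost + prior_cost

# During a lane change the optimizer can drive s towards 0, which disables
# the smoothing edge at the (smaller) cost of the prior penalty.
cost_on = smoothing_cost(np.array([30.0, 1.8, 0.0]),
                         np.array([35.0, 1.75, 0.0]), s=1.0)
cost_lane_change = smoothing_cost(np.array([30.0, 1.8, 0.0]),
                                  np.array([35.0, -1.6, 0.0]), s=0.1)
```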

II-E5 Solving the graph

In summary, the graph consists of the constraints

$\sum_{t} e_{odo}^\top \Omega_{odo}\, e_{odo} \;+\; \sum_{t} \sum_{i} \big( e_{SMC}^\top \Omega_{SMC}\, e_{SMC} + e_{HRC}^\top \Omega_{HRC}\, e_{HRC} + e_{obj}^\top \Omega_{obj}\, e_{obj} + e_{w}^\top \Omega_{w}\, e_{w} + e_{s}^\top \Omega_{s}\, e_{s} \big)$   (10)

(indices and arguments of the individual error terms are omitted for brevity)

where the first sum runs over all relevant time steps $t$ and the second over all sensor features $i$ at the corresponding time step.

Fig. 4: Example of the fusion of SMC and HRC lane features (green) with the features generated on the left and right side of a tracked object (black), which performs a lane change to the right neighboring lane. (a) Features are associated during the graph building (gray ellipse). (b) Resulting features after solving GraphSLAM without any lane width and smoothing edges, together with the fused features (gray). (c) Solution with lane width edges (blue) between object features. Here the feature distances still equal the lane width after solving the graph. (d) Result of solving the graph with smoothing edges (red). During the optimization two smoothing edges were disabled (dashed red) by the switch variables.

A configuration of optimal vertex poses is obtained by solving the graph. The result of the algorithm is represented as a set of fused lane features which correspond directly to these optimal vertex poses. Note that the confidence values of the resulting fused features are updated whenever measurement features are associated. The influence of the smoothing and width constraints is illustrated in fig. 4. Note that an ego lane change has no impact on these constraints, since it only influences the odometry edges.

Fig. 5: Estimation of the base clothoid (orange) representing the road course and the lane modeling (light green) based on the fused features. Valid features of the base clothoid fit are shown in blue while outliers are depicted in magenta. In this snapshot features are generated from three tracked objects. Features of the adjacent lane to the right (negative $y$-direction) at larger distances are provided solely by the HRC.

II-F Multi-lane modeling

The resulting fused lane features are used to extract the parameters of a mathematical model representing the ego lane and the neighboring lanes. Such a model should be able to handle curved roads and is supposed to be robust against measurement noise, missing data and outliers. Various models have been proposed for representing the road and traffic lanes: straight lines [21], parabolic curves [8], Euler spirals, also called clothoids [2, 5], and splines [3, 6, 7].

Although simple models, such as straight lines, are very robust against noise, they cannot model real roads with desired distance and accuracy. On the contrary, more complex techniques, such as parabolic curves, clothoids or splines, can accurately model complicated road shapes, but these models are also more sensitive to uncertainties in the measured data and require sophisticated methods for noise reduction and filtering.

In the present work, the clothoid model is used for modeling the course of the road and the traffic lanes, since this model is also used in the design and construction of highways. Clothoids are defined by a linear change of curvature over the arc length. For small heading angles, clothoids can be approximated with sufficient precision by a third order polynomial [2]:

$y(x) = y_0 + \varphi_0\, x + \frac{c_0}{2}\, x^2 + \frac{c_1}{6}\, x^3$   (11)

where $y_0$ is the lateral offset of the clothoid, $\varphi_0$ and $c_0$ are heading and curvature at $x = 0$, and $c_1$ describes the change of curvature.

The first step towards modeling traffic lanes is the estimation of the road course. For this purpose a base clothoid is computed by fitting the headings of the fused lane features using the derivative of eq. 11. The parameters $\varphi_0$, $c_0$ and $c_1$ of the base clothoid are estimated using Robust Linear Least Squares regression [22]. This is an iterative approach which detects and eliminates outliers from the fitting procedure. The confidence values of the features serve as weights in the fit. Fig. 5 shows the estimated base clothoid (orange) for a set of input features. The features considered for modeling are depicted in dark blue, while features classified as outliers are drawn in magenta.
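The sketch below shows one possible, simplified stand-in for this robust fit: the feature headings are fitted against the derivative of eq. 11, with outliers removed iteratively by a fixed residual threshold. The threshold, iteration count and helper name are assumptions, not the exact regression of [22].

```python
import numpy as np

def fit_base_clothoid(features, weights, n_iter=5, outlier_thresh=0.05):
    """Fit phi0, c0, c1 of the base clothoid to the feature headings.
    features: array of rows (x, y, heading); weights: confidence values."""
    x, phi = features[:, 0], features[:, 2]
    inliers = np.ones(len(x), dtype=bool)
    params = np.zeros(3)
    for _ in range(n_iter):
        if inliers.sum() < 3:
            break
        # Design matrix of the derivative of eq. 11:
        # phi(x) = phi0 + c0*x + (c1/2)*x^2
        A = np.stack([np.ones_like(x), x, 0.5 * x**2], axis=1)
        w = weights * inliers                       # confidences as fit weights
        params, *_ = np.linalg.lstsq(A * w[:, None], phi * w, rcond=None)
        residuals = np.abs(A @ params - phi)
        inliers = residuals < outlier_thresh        # drop outliers and refit
    phi0, c0, c1 = params
    return phi0, c0, c1, inliers
```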

Once the course of the road is known, the offsets of the lane clothoids need to be determined. To obtain the offset of a lane clothoid, it is necessary to know which of the fused lane features belong to that lane. If there already was an estimated lane model at the previous time step, that information is used to group the features. New clothoids are created by combining the offsets of clothoids from the previous time step with the parameters of the current base clothoid:

$y_i(x) = y_{0,i}^{t-1} + \varphi_0\, x + \frac{c_0}{2}\, x^2 + \frac{c_1}{6}\, x^3$   (12)

Each feature is associated to the closest of these clothoids, resulting in several feature groups. Usually some features (all features in the first algorithm loop) cannot be associated using this clustering approach, since no corresponding clothoid from the previous time step exists. Therefore, the remaining features in the near range of the experimental vehicle are projected onto the $y$-axis. If enough feature projections are in close proximity to each other, their mean lateral position is computed and used as offset in eq. 12 instead of $y_{0,i}^{t-1}$ to group the remaining features. In the last step, a Least Squares fit is performed for each group of features, which yields the offset of a lane clothoid. Thus the lane clothoids of the current time step are composed of the fitted offsets in conjunction with the course parameters of the base clothoid. To obtain a stable and continuous description of the lanes, a Kalman filter is used to filter each lane clothoid over time. An example of the resulting lane clothoids projected onto the SMC image is illustrated in yellow in the upper part of fig. 6. The lower part shows a top view diagram of the same clothoids plotted in green. Fig. 7 shows a scenario where lane markers are missing on the right side of the current lane. The lack of visual information is compensated by utilizing the trajectories of other traffic participants, resulting in a stable lane estimation. (A video sequence illustrating the results is available as supplementary material.)
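As a rough illustration of the grouping and per-lane offset fit described above, the following sketch assigns fused features to the clothoid predicted from the previous offsets and re-fits one lateral offset per lane; the gating distance and the closed-form offset fit (a weighted mean of the per-feature residuals) are simplifying assumptions of this sketch.

```python
import numpy as np

def lane_offsets(features, weights, prev_offsets, phi0, c0, c1, gate=1.0):
    """Group features by the closest clothoid from the previous time step
    and re-fit one lateral offset per lane (gate is an assumed value)."""
    x, y = features[:, 0], features[:, 1]
    course = phi0 * x + c0 / 2.0 * x**2 + c1 / 6.0 * x**3   # eq. 11 without y0
    residual = y - course                                   # per-feature offset
    new_offsets = []
    for y0_prev in prev_offsets:
        in_lane = np.abs(residual - y0_prev) < gate
        if np.any(in_lane):
            w = weights[in_lane]
            # A least-squares fit of the offset alone reduces to a weighted mean.
            new_offsets.append(float(np.sum(w * residual[in_lane]) / np.sum(w)))
        else:
            new_offsets.append(y0_prev)                     # keep previous offset
    return new_offsets

feats = np.array([[10.0, 1.9, 0.0], [20.0, 2.0, 0.0], [10.0, -1.7, 0.0]])
print(lane_offsets(feats, np.ones(3), prev_offsets=[1.8, -1.8],
                   phi0=0.0, c0=0.0, c1=0.0))
```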

Fig. 6: Exemplary result of the modeled lane clothoids projected in yellow onto the camera image (top). The same result is shown in green in the top view diagram (bottom). The red curves represent the ground truth data needed for comparison and evaluation of the result.
Fig. 7: Estimated lane clothoids (yellow) projected onto the camera image in a scenario with three drivable lanes, an emergency lane and several vehicles. Despite missing markers on the right side of the currently driven lane the presented method is able to obtain a stable result. Note that in this scene a successful estimation of the current lane is only accomplished due to the usage of the tracked trajectory of the truck driving in front of the ego vehicle.

III Experimental Evaluation

The performance of the lane feature fusion and the subsequent lane clothoid modeling is evaluated using real data collected with the experimental vehicle on highways. For comparison, a ground truth map containing the global positions of lane markings of a highway route with multiple traffic lanes and left and right curves is used. The ground truth map is generated by driving on the highway on all traffic lanes using an RT4000 GPS system together with the odometry of the experimental vehicle and the lane markings detected by the serial camera. The RT4000 family products are advanced precision inertial and GPS navigation systems for measuring motion, position and orientation (Oxford Technical Solutions, RT4000 family, http://www.oxts.com/products/rt4000-family/). The separate data sets collected from each lane are merged offline and the RT4000 GPS system is used again for localization in the ground truth map. A snapshot of the ground truth map (red) and the modeled lane clothoids (green) is shown in the lower part of fig. 6. Another snapshot of estimated lane clothoids and ground truth data plotted onto a map extract is shown in fig. 9. To compare the estimated lanes to the ground truth map, the clothoids are sampled at fixed distances and the lateral deviation to the ground truth map is computed at each sampled distance. This is done after every algorithm loop, resulting in a distribution of measurements at each sample distance. For the evaluation, the data collected during several drives on the highway with average traffic density is analyzed. As the measure of performance the mean, standard deviation and Root Mean Square Error (RMSE) of the distributions at each sampled distance, accumulated over the total data, are computed. The evaluation is conducted up to a maximum distance of 120 m, separately for ego and adjacent lanes, as they are expected to show different results.
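As a small illustration of how such per-distance statistics can be accumulated, the following sketch computes mean, standard deviation and RMSE for each sample distance; the deviation values are toy numbers, not the measured data behind table I.

```python
import numpy as np

def lateral_deviation_stats(deviations):
    """Mean, standard deviation and RMSE of the lateral deviations collected
    at one sample distance over all algorithm cycles."""
    d = np.asarray(deviations)
    return d.mean(), d.std(), np.sqrt(np.mean(d**2))

# deviations[k] collects, for sample distance k metres, the lateral differences
# between the estimated lane and the ground truth over all evaluated cycles.
deviations = {0: [0.05, -0.08, 0.02], 20: [0.09, -0.12, 0.04]}   # toy values
for dist, devs in deviations.items():
    mean, sigma, rmse = lateral_deviation_stats(devs)
    print(f"{dist:3d} m: mean={mean:+.2f}  sigma={sigma:.2f}  rmse={rmse:.2f}")
```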

distance          ego lane                adjacent lane
  [m]       mean   sigma   RMSE       mean   sigma   RMSE
             [m]    [m]     [m]        [m]    [m]     [m]
    0       0.00    0.10    0.10      -0.04   0.20    0.21
   20      -0.02    0.10    0.11      -0.07   0.20    0.21
   40      -0.06    0.17    0.18      -0.11   0.25    0.27
   60      -0.11    0.26    0.28      -0.17   0.32    0.37
   80      -0.20    0.38    0.42      -0.24   0.44    0.50
  100      -0.26    0.48    0.55      -0.29   0.59    0.66
  120      -0.26    0.58    0.64      -0.47   0.87    0.99

TABLE I: Mean, standard deviation (sigma) and RMSE of the lateral deviation for ego and adjacent lane.

Results of the evaluation are presented in table I, and the RMSE for ego (red) and adjacent lanes (blue) is plotted in fig. 8. While the mean lateral deviation between estimated lanes and ground truth data is below one sigma for all analyzed distances, the quality decreases with distance. Due to data fusion over several time steps, lane features near the ego vehicle have been measured more often and therefore have a higher accuracy. The results also show that the ego lane is estimated with better performance than the adjacent lanes. One reason for this is that the SMC only contributes features to the lane currently driven on. At larger distances the estimation of the ego lane is at all times dominated by the tracked object right in front of the experimental vehicle and by HRC data up to that object. The adjacent lanes, on the other hand, are often estimated using either tracked objects or HRC data, depending on the current traffic situation.

In summary, the lane feature fusion and the subsequent road modeling provide an estimation of lanes on highways in real-time. Using the SMC, the HRC and tracked objects as input, an increase in the detection range of the ego lane compared to the serial lane detection of the SMC is achieved. At 120 m the ego lane is estimated with an RMSE of 0.64 m (cf. table I). Due to the combination of HRC data and tracked objects, neighboring lanes are estimated at all times. At a distance of 120 m the RMSE obtained for neighboring lanes is less than 1 m.

Fig. 8: RMSE of the difference between estimated lanes and ground truth for ego (red) and neighbor lane (blue).

It needs to be noted that evaluation on publicly available datasets such as KITTI [23], DARPA Urban Challenge [24], Caltech [25], ROMA [26] is not reasonable for the presented method. None of the datasets contains all of the following items that would be necessary for a meaningful comparison: camera images, ground truth data of multi-lane roads, tracked trajectories of other traffic participants and odometry of the ego vehicle.

Fig. 9: Aerial view of a short extract of Germany's A9 highway (declared a test track for self-driving cars by Germany's ministry of transport [27]), taken from Google Maps. The modeled lane clothoids (red) and ground truth data (blue) are plotted onto the map and the current position of the experimental vehicle is denoted by the black cross. Note that the vehicles visible in the map are not correlated to the traffic participants used in the estimation of the lane clothoids.

IV Conclusion

In the present work, a real-time perception method for highways estimating ego and neighboring lanes has been presented. In addition to the serial camera system of the vehicle, a high resolution camera and information deduced from the tracking of other traffic participants serve as input to the GraphSLAM based feature fusion algorithm. The fusion result is utilized to model traffic lanes on highways, represented as clothoids. No prior knowledge about the road or the lanes is needed in the feature fusion and lane modeling procedure. The evaluation of the approach is conducted by comparing the estimated lanes to a ground truth map of a several kilometer long route on a highway. In comparison to the serial lane detection, the method provides an increased detection range for the ego lane and detection of the neighboring lanes at all times.

The presented method works especially well in scenarios where many traffic participants are present. Hidden lane markings are compensated by using tracked trajectories of other vehicles in combination with camera data.

The results show that up to a distance of 120 m ahead of the vehicle good estimates of ego and adjacent lanes are obtained. Precision, stability and robustness of the derived multi-lane model suffice for its use in the development and testing of self-driving cars on highways. Up to now, several hundred kilometers have been driven autonomously on highways applying the presented methods in the experimental vehicle.

Enhancements to the presented lane perception method could be achieved by increasing the quality of the input data, e.g. by improving outlier suppression in the lane feature extraction from the HRC images at large distances. Furthermore, the addition of new data sources such as vehicle-to-vehicle communication could lead to an increase in performance and robustness. The proposed method for multi-lane modeling yields a continuous and robust lane description on highways by initially estimating a base clothoid that characterizes the general road course. However, a drawback of this approach is that highway exit and entrance ramps as well as highway junctions cannot be represented in full detail. Development of a spline based lane model could pose a solution to this issue, as splines provide a more flexible description of the lanes and might therefore be able to cover more complex scenarios.

References

  • [1] K. Bengler, K. Dietmayer, B. Färber, M. Maurer, C. Stiller, and H. Winner, “Three decades of driver assistance systems: Review and future perspectives,” IEEE Intelligent Transportation Systems Magazine, vol. 6, no. 4, pp. 6–22, 2014.
  • [2] E. D. Dickmanns and B. D. Mysliwetz, “Recursive 3-d road and relative ego-state recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 199–213, 1992.
  • [3] Y. Wang, D. Shen, and E. K. Teoh, “Lane detection using spline model,” Pattern Recognition Letters, vol. 21, no. 8, pp. 677–689, 2000.
  • [4] J. C. McCall and M. M. Trivedi, “Video-based lane estimation and tracking for driver assistance: survey, system, and evaluation,” IEEE Transactions on Intelligent Transportation Systems, vol. 7, no. 1, pp. 20–37, 2006.
  • [5] U. Meis, W. Klein, and C. Wiedemann, “A new method for robust far-distance road course estimation in advanced driver assistance systems,” in 13th International IEEE Conference on Intelligent Transportation Systems (ITSC), 2010.
  • [6] M. Aly, “Real time detection of lane markers in urban streets,” in IEEE Intelligent Vehicles Symposium, 2008.
  • [7] Z. Kim, “Robust lane detection and tracking in challenging scenarios,” IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 1, pp. 16–26, 2008.
  • [8] G. Liu, F. Wörgötter, and I. Markelić, “Lane shape estimation using a partitioned particle filter for autonomous driving,” in IEEE International Conference on Robotics and Automation (ICRA), vol. 9-13, 2011, pp. 1627–1633.
  • [9] S. Kang, S. Lee, J. Hur, and S. Seo, “Multi-lane detection based on accurate geometric lane estimation in highway scenarios,” in IEEE Intelligent Vehicles Symposium, 2014, pp. 221–226.
  • [10] J. Leonard, J. How, S. Teller, M. Berger, S. Campbell, G. Fiore, L. Fletcher, E. Frazzoli, A. Huang, S. Karaman, et al., “A perception-driven autonomous urban vehicle,” Journal of Field Robotics, vol. 25, no. 10, pp. 727–774, 2008.
  • [11] M. Fatemi, L. Hammarstrand, L. Svensson, and Ángel F. García-Fernández, “Road geometry estimation using a precise clothoid road model and observations of moving vehicles,” in 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), 2014.
  • [12] H. Cho, Y. Seo, B. V. K. V. Kumar, and R. Rajkumar, “A multi-sensor fusion system for moving object detection and tracking in urban driving environments,” in IEEE International Conference on Robotics and Automation (ICRA), 2014, pp. 1836–1843.
  • [13] A. S. Huang, D. Moore, M. Antone, E. Olson, and S. Teller, “Finding multiple lanes in urban road networks with vision and lidar,” Autonomous Robots, vol. 26, no. 2-3, pp. 103–122, 2009.
  • [14] M. Aly. (2008) Real time lane detection in urban streets, Model-all video clip, detecting all lane markers on the street. [Online]. Available: http://www.vision.caltech.edu/malaa/research/lane-detection/
  • [15] M. A. Brubaker, A. Geiger, and R. Urtasun, “Map-based probabilistic visual self-localization,” Pattern Analysis and Machine Intelligence (PAMI), 2015.
  • [16] Y.-M. Liang, H.-R. Tyan, S.-L. Chang, H.-Y. M. Liao, and S.-W. Chen, “Video stabilization for a camcorder mounted on a moving vehicle,” IEEE Transactions on Vehicular Technology, vol. 53, no. 6, pp. 1636–1648, 2004.
  • [17] R. Grewe, M. Komar, A. Hohm, S. Hegemann, and S. Lüke, “Environment modelling for future adas functions,” in 19th ITS World Congress, 2012.
  • [18] S. Thrun, “Probabilistic robotics,” Communications of the ACM, vol. 45, no. 3, pp. 52–57, 2002.
  • [19] G. Grisetti, R. Kuemmerle, C. Stachniss, and W. Burgard, “A tutorial on graph-based SLAM,” IEEE Intelligent Transportation Systems Magazine, vol. 2, no. 4, pp. 31–43, 2010.
  • [20] N. Sünderhauf and P. Protzel, “Switchable constraints for robust pose graph slam,” in Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, 2012, pp. 1879–1884.
  • [21] S. Lakshmanan and D. Grimmer, “A deformable template approach to detecting straight edges in radar images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 4, pp. 438–443, 1996.
  • [22] C. L. Lawson and R. J. Hanson, Solving Least Squares Problems (Classics in Applied Mathematics).    Society for Industrial Mathematics, 1987.
  • [23] J. Fritsch, T. Kuehnl, and A. Geiger, “A new performance measure and evaluation benchmark for road detection algorithms,” in 16th International IEEE Conference on Intelligent Transportation Systems (ITSC), 2013.
  • [24] A. S. Huang, M. Antone, E. Olson, L. Fletcher, D. Moore, S. Teller, and J. Leonard, “A High-rate, Heterogeneous Data Set from the DARPA Urban Challenge,” International Journal of Robotics Research, vol. 29, no. 13, pp. 1595–1601, 2010.
  • [25] M. Aly. (2011) Caltech lanes dataset. [Online]. Available: http://www.vision.caltech.edu/malaa/datasets/caltech-lanes/
  • [26] T. Veit, J.-P. Tarel, P. Nicolle, and P. Charbonnier, “Evaluation of road marking feature extraction,” in 11th International IEEE Conference on Intelligent Transportation Systems (ITSC), 2008, pp. 174–181.
  • [27] German Federal Ministry of Transport and Digital Infrastructure, “Strategy for automated and connected driving,” 2015.