Detecting Lane and Road Markings at a Distance with Perspective Transformer Layers


Accurate detection of lane and road markings is a task of great importance for intelligent vehicles. In existing approaches, the detection accuracy often degrades with increasing distance. This is because distant lane and road markings occupy a small number of pixels in the image, and the scales of lane and road markings are inconsistent across distances and perspectives. The Inverse Perspective Mapping (IPM) can be used to eliminate the perspective distortion, but the inherent interpolation can lead to artifacts, especially around distant lane and road markings, and thus harms the accuracy of lane marking detection and segmentation. To solve this problem, we adopt the encoder-decoder architecture of Fully Convolutional Networks and leverage the idea of Spatial Transformer Networks to introduce a novel semantic segmentation neural network. This approach decomposes the IPM process into multiple consecutive differentiable homography transform layers, called "Perspective Transformer Layers". Furthermore, the interpolated feature maps are refined by subsequent convolutional layers, reducing the artifacts and improving the accuracy. The effectiveness of the proposed method in lane marking detection is validated on two public datasets: TuSimple and ApolloScape.



I Introduction

Lane and road markings are critical elements in traffic scenes. Lane lines and road signs such as arrows provide valuable information for the planning and control of self-driving cars.

Current lane marking detection methods mostly utilize the segmentation technique [1][2][3], which is based on fully convolutional deep neural networks (FCNs). The segmentation networks rely on local features which are extracted from the raw RGB pattern and mapped into semantic spaces for pixel-level classification. However, such an architecture often suffers from accuracy degradation for lane and road markings far away from the ego-vehicle, because these markings occupy a small number of pixels in the image and their features become inconsistent across distances and perspectives, as shown in Fig. 1, even though both distant and close lane marking information is important for control and planning tasks.

Fig. 1: View of the lane markings at different distances. Similar lane markings in the bird's-eye view show very different shape and scale features in the original view.

An intuitive solution is to transform the original image to a bird's-eye view (BEV) using the Inverse Perspective Mapping (IPM). In principle, this can solve the problem of inconsistent scales of road markings at different distances. However, the IPM is typically implemented by interpolation, which reduces the resolution of the distant road surface and creates unnatural blurring and stretching (Fig. 1). To tackle this issue, we adopt the encoder-decoder architecture of Fully Convolutional Networks [4] and leverage the idea of Spatial Transformer Networks [5] to build a semantic segmentation neural network. As shown in Fig. 2, fully convolutional layers are interleaved with a series of differentiable homography transform layers called "Perspective Transformer Layers" (PTLs), which transform the feature maps from the original view to the bird's-eye view during the encoding process. Afterwards, the network transforms the feature maps back to the original perspective in the decoding process, where subsequent convolutional layers are employed to refine each interpolated feature map. Therefore, this network can still use labels in the original view for end-to-end training.

In this work, our contributions can be summarized as follows:

  • We propose a lane marking detection network based on the FCN, which integrates novel PTLs to reduce the perspective distortion at a distance.

  • We build a mathematical model to derive the parameters of consecutive PTLs, enabling the mutual conversion between the original view and the bird's-eye view in a step-wise manner.

  • The effectiveness of the proposed method in both instance segmentation and semantic segmentation of lane and road markings is demonstrated on two public datasets: TuSimple [6] and ApolloScape [7].

Fig. 2: PTSeg architecture. The semantic segmentation network is the main body of PTSeg. It is based on a standard encoder-decoder network introduced in [4], of which the encoder is implemented with a ResNet-34 network [8]. Each Perspective Transformer Layer (PTL) follows a ResBlock or a transposed convolution layer, gradually warping feature maps into a bird's-eye view or back to the front view, respectively. The process of perspective transform is visualized qualitatively using the color images above the main network. For instance segmentation, following [2], an instance embedding branch is added. It shares previous layers with the semantic segmentation network and outputs an N-dimensional embedding per lane pixel, which is also visualized as a color map.

II Related Works

Lane marking detection has been intensively explored, and recent progress has mainly focused on semantic segmentation-based and instance segmentation-based methods.

II-A Lane Marking Detection by Semantic Segmentation

The work [9] proposed both a road marking dataset and a segmentation network using ResNet with Pyramid Pooling. Lee et al. [1] proposed a unified end-to-end trainable multi-task network that jointly handles lane marking detection and road marking recognition under adverse weather conditions with the guidance of a vanishing point. Zhang et al. [11] proposed a segmentation-by-detection method for road marking extraction, which delivers outstanding cross-dataset performance. In this method, a lightweight network is dedicatedly designed for road marking detection; however, the segmentation is mainly based on conventional image morphological algorithms.

II-B Lane Marking Detection by Instance Segmentation

Semantic segmentation is essentially a pixel-level classification problem: it can neither distinguish different instances within the same category, nor interpret separated parts of the same marking (dashed lines, zebra lines, etc.) as a unit. Therefore, researchers' attention has gradually shifted to the problem of instance segmentation.

Pan et al. [3] proposed the Spatial CNN (SCNN), which generalizes traditional spatial convolutions to slice-wise convolutions within feature maps, thus enabling message passing between pixels across rows and columns in a layer. This is particularly suitable for linear shaped traffic lanes. Hsu et al. [12] proposed a novel learning objective function to train the deep neural network to perform an end-to-end image pixel clustering and applied this approach on instance segmentation. Neven et al. [2] went beyond the modeling limitation by pre-defined number of lanes, in their instance segmentation method each lane forms its own instance and the network can be trained end-to-end. They further proposed H-Net to parameterize the segmented results.

II-C Perspective Transform in CNNs

To compensate for the perspective distortion, a spatial transform, such as the IPM, should be incorporated into the network. A typical work is the Spatial Transformer Network [5]. It introduces a learnable module which explicitly allows the spatial manipulation of data within the network. This differentiable module can be applied to existing convolutional architectures, enabling active spatial transforms of feature maps.

The most similar work to ours is [13], in which an adversarial learning approach is proposed for generating an improved IPM using the STN. The generated BEV images contain sharper features than those produced by the traditional IPM. The main difference from our work is that they take a ground-truth BEV image (obtained by visual odometry) for supervision. Their target is to generate a high-resolution IPM, while ours is to improve the segmentation accuracy.

III Proposed Method

In this work, we boost the performance of lane marking detection by inserting differentiable PTLs into the standard encoder-decoder architecture. One challenge in designing the transformer layers lies in dividing and distributing the integral transform into several even steps. Another is how to determine the proper cropping range for the intermediate views. In this section, we first describe the improved backbone in Section III-A. Then, we address how to apply the transformer layers and how to solve the above difficulties in Section III-B. Finally, we illustrate the deployment of the backbone in both semantic and instance segmentation contexts, with details about the detection heads and loss functions, in Section III-C.

III-A Network Structure

As shown in Fig. 2, the overall semantic segmentation network is based on a standard encoder-decoder network [4], in which the encoder is implemented with a ResNet-34 network [8] and PTLs interleave with the convolutional layers. We refer to our network as PTSeg (Perspective Transformer Segmentor). In this network, images are down-sampled by the encoder to a feature map through five stride-2 down-sampling operations, and the feature map is gradually warped into a pseudo BEV. Afterwards, the decoder reverts the previous transforms by up-sampling and back-projecting the feature map into its original size and perspective, while keeping the accumulated high-level semantic information of lane and road markings. The process of perspective transform is visualized qualitatively using the color images above the main network.

As the sampling procedure does not affect the overall differentiability, the network with PTL can be trained end-to-end. Feature maps in the middle section of the network have perspective transform relationships with the input image, and the effect of these transforms in encoding process is equivalent to warping the front-view feature maps to a BEV, thus solving the problem of inconsistent scales of lane and road markings due to different distances. Meanwhile, the subsequent refinement reduces blur and artifacts caused by interpolation. Similar to the FCN, skip-connections can compensate for the information loss during the down-sampling, resulting in clear boundaries for detected lane and road markings.
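The differentiable warping at the heart of a PTL can be sketched with PyTorch's `grid_sample`, as in Spatial Transformer Networks [5]. The following is a minimal sketch, not the paper's implementation; the function name and the convention that the homography maps output pixels to input pixels are our assumptions.

```python
import torch
import torch.nn.functional as F

def perspective_transformer_layer(feat, H, out_hw):
    """Warp a feature map by a homography H (which maps output-view pixel
    coordinates to input-view pixel coordinates) using differentiable
    bilinear sampling. Minimal sketch of a PTL-style warp."""
    B, C, Hin, Win = feat.shape
    Hout, Wout = out_hw
    # Pixel grid of the output view, in homogeneous coordinates.
    ys, xs = torch.meshgrid(torch.arange(Hout, dtype=torch.float32),
                            torch.arange(Wout, dtype=torch.float32),
                            indexing="ij")
    pts = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).reshape(-1, 3)
    src = pts @ H.T                              # project into the input view
    src = src[:, :2] / src[:, 2:3].clamp(min=1e-8)  # dehomogenize
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    gx = 2.0 * src[:, 0] / (Win - 1) - 1.0
    gy = 2.0 * src[:, 1] / (Hin - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1).reshape(1, Hout, Wout, 2)
    grid = grid.expand(B, -1, -1, -1)
    return F.grid_sample(feat, grid, align_corners=True)
```

Because bilinear sampling is differentiable with respect to the feature map, gradients flow through the warp during training, which is the property that makes end-to-end training with PTLs possible.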

III-B Consecutive Perspective Mapping

In order to map the front-view image into a bird's-eye view smoothly, we adopt an approach differing from the standard IPM method. Here we decompose the integral transform $H$ into a series of consecutive transforms $H_{i \to i+1}$ ($H_i$ for short) that project view $i$ into view $i+1$. This procedure is interpreted as

$$H_i = K_{i+1} \left( R_{i \to i+1} - \frac{t_{i \to i+1}\, n^T}{d} \right) K_i^{-1} \qquad (1)$$
where $R_{i \to i+1}$ (denoted as $R_i$ for short) is the rotation matrix by which virtual camera $i+1$ is rotated in relation to virtual camera $i$; $t_{i \to i+1}$ ($t_i$ for short) is the translation vector from camera $i$ to camera $i+1$; $n$ and $d$ are the normal vector of the ground plane and the distance to the plane, respectively; $K_i$ and $K_{i+1}$ are the cameras' intrinsic parameter matrices.
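The plane-induced homography between two virtual cameras can be composed directly from these quantities. Below is a minimal numpy sketch; the function name and argument names are our own.

```python
import numpy as np

def planar_homography(K_src, K_dst, R, t, n, d):
    """Homography induced by the ground plane between two virtual cameras:
    H = K_dst (R - t n^T / d) K_src^{-1}. Minimal sketch of Eq. (1)'s
    structure; R rotates, t translates, n is the plane normal, d the
    camera-to-plane distance."""
    return K_dst @ (R - np.outer(t, n) / d) @ np.linalg.inv(K_src)
```

With zero translation and identical intrinsics, the homography reduces to a pure rotation mapping, which is exactly the simplification the pure-rotation camera model exploits below.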

However, to control the transform process, the values of the internal parameters, i.e., $R_i$, $t_i$ and $K_{i+1}$, would have to be selected for each $H_i$ by trial and error, which is a tedious job. To simplify this process, we use a pure-rotation virtual camera model to eliminate $t_i$, $n$ and $d$, and use a Key-Point Bounding-Box Trick to estimate $K_{i+1}$ for the optimal viewports of the intermediate feature maps (as shown in Algorithm 1). Whereas the traditional IPM uses at least 4 pairs of pre-calibrated correspondences on each view to estimate the integral $H$ directly, we estimate the integral rotation $R$ from the horizon line specified on the image, which can be obtained by horizon line detection models, e.g., HLW [14]. By representing the rotation in axis-angle form, it is much easier to divide the rotation into sections by dividing the angle while keeping the axis direction unchanged. In this way, all internal parameters of each $H_i$ are determined. Details of the above procedure are given as follows.

Pure Rotation Virtual Cameras

It can be proven that a translated camera with an unchanged intrinsic matrix can produce the same image as a fixed camera with an accordingly modified intrinsic matrix. Thus, the consecutive perspective transform is modeled as synthesizing the ground plane image captured by a purely rotating camera, and (1) is simplified as

$$H_i = K_{i+1}\, R_i\, K_i^{-1} \qquad (2)$$

and only the rotation matrix $R$ should be decomposed as

$$R = R_N \cdots R_2 R_1. \qquad (3)$$
Estimating Integral Extrinsic Rotation by the Horizon Line

As extrinsic matrices with respect to the ground plane are not provided in the TuSimple and ApolloScape datasets, we roughly estimate the integral rotation by the horizon line. Given two horizon points in the camera coordinates, $p_1$ and $p_2$, the normal vector of the ground plane (facing the ground) is calculated by a cross product, i.e.,

$$n = \frac{p_1 \times p_2}{\lVert p_1 \times p_2 \rVert}. \qquad (4)$$

In order to rotate the camera to face the ground, its $z$-axis should be rotated to align with the normal vector $n$. Hence, the rotation in axis-angle form is calculated as

$$\omega = \frac{z \times n}{\lVert z \times n \rVert}, \quad \theta = \arccos(z \cdot n), \qquad (5)$$

where $z$ is a unit vector along the $z$-axis.
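The horizon-to-rotation step can be sketched in a few lines of numpy. This is a sketch under our assumptions: the two horizon points are given in camera coordinates, the sign of the cross product is taken to make the normal face the ground, and the camera is not already facing the ground (so the cross product with the z-axis is nonzero).

```python
import numpy as np

def rotation_to_ground(p1, p2):
    """Estimate the axis-angle rotation aligning the camera z-axis with
    the ground-plane normal derived from two horizon points p1, p2 in
    camera coordinates. Returns (unit axis, angle in radians)."""
    n = np.cross(p1, p2)              # plane normal from the horizon points
    n = n / np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])     # camera optical axis
    axis = np.cross(z, n)             # rotation axis perpendicular to both
    s = np.linalg.norm(axis)          # assumes z and n are not parallel
    angle = np.arctan2(s, np.dot(z, n))
    return axis / s, angle
```

For example, horizon points $(1,1,1)$ and $(-1,1,1)$ yield a rotation of 45 degrees about the camera x-axis, i.e., a pure downward pitch.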

Decomposing the Extrinsic Rotation

Here we use the axis-angle representation for decomposing the rotation. We simply divide the integral angle $\theta$ into several even parts $\theta / N$, and then convert each part to the corresponding rotation matrix $R_i$.
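The even split can be verified with the Rodrigues formula: converting the divided angle back to a matrix and composing the steps recovers the full rotation (the steps commute here since they share one axis). A minimal sketch; function names are our own.

```python
import numpy as np

def rodrigues(axis, angle):
    """Axis-angle to rotation matrix via the Rodrigues formula:
    R = I + sin(theta) K + (1 - cos(theta)) K^2."""
    k = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def decompose_rotation(axis, angle, n_steps):
    """Split an axis-angle rotation into n_steps even parts sharing the
    same axis; composing the parts recovers the full rotation."""
    step = rodrigues(axis, angle / n_steps)
    return [step] * n_steps
```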

Optimal Viewports by Key-Point Bounding Boxes

While conducting the IPM, image pixels at the edge often need to be cropped to prevent the target view from becoming too large. In order to preserve as many informative pixels as possible, we roughly annotate the ground region by a set of border points in the front view. The points are projected to the new view during each perspective transform, and we use a bounding box in the new view to determine the minimal available viewport which does not crop any projected key point. Thus, given a desired target view width $W$, the corresponding intrinsics $K_{i+1}$ and target view height are determined, as shown in Algorithm 1.

Require: intrinsics $K_i$, rotation $R_i$, key points $P$ (pixel coordinates), target view width $W$
1:  Convert points from image to camera coordinates: $X = K_i^{-1} P$
2:  Rotate points to view $i+1$: $X' = R_i X$
3:  Normalize by the Z-dimension: $x = X'_x / X'_z$, $y = X'_y / X'_z$
4:  Get the bounding box: $[x_{min}, x_{max}, y_{min}, y_{max}]$
5:  Estimate the focal length as a scale ratio: $f = W / (x_{max} - x_{min})$
6:  Estimate the target view height with the same scale ratio: $H = f\,(y_{max} - y_{min})$
7:  Estimate the translation by aligning the left-top corner of the target image view with the bounding box in the target camera coordinates: $c_x = -f\,x_{min}$, $c_y = -f\,y_{min}$
8:  Compose the target intrinsic matrix: $K_{i+1} = [[f, 0, c_x], [0, f, c_y], [0, 0, 1]]$
9:  return $K_{i+1}$, $H$
Algorithm 1 Determine the Optimal Viewports through Key-Point Bounding Boxes
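The steps of Algorithm 1 can be sketched directly in numpy. This is a sketch, not the paper's code; the function and symbol names are assumptions consistent with the algorithm's prose.

```python
import numpy as np

def optimal_viewport(K_i, R, points_uv, target_w):
    """Sketch of Algorithm 1: derive the next view's intrinsics and the
    target view height so that no projected key point is cropped.
    points_uv is an (N, 2) array of pixel coordinates."""
    uv1 = np.hstack([points_uv, np.ones((len(points_uv), 1))])
    rays = np.linalg.inv(K_i) @ uv1.T          # 1: image -> camera coords
    rot = R @ rays                             # 2: rotate to view i+1
    norm = rot[:2] / rot[2]                    # 3: normalize by Z
    x_min, x_max = norm[0].min(), norm[0].max()   # 4: bounding box
    y_min, y_max = norm[1].min(), norm[1].max()
    f = target_w / (x_max - x_min)             # 5: focal length as scale ratio
    target_h = f * (y_max - y_min)             # 6: height, same ratio
    cx, cy = -f * x_min, -f * y_min            # 7: align left-top corner
    K_next = np.array([[f, 0.0, cx],           # 8: compose target intrinsics
                       [0.0, f, cy],
                       [0.0, 0.0, 1.0]])
    return K_next, target_h                    # 9
```

For instance, with identity intrinsics and rotation, border points spanning $x \in [0,4]$, $y \in [0,2]$ and a target width of 8 give a focal length of 2, a view height of 4, and zero principal-point offset.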

III-C Segmentation Heads

Semantic Segmentation

The lane and road marking detection problems are often cast as a semantic segmentation task [1] [9] [11]. By representing label classes as one-hot vectors, we predict the logits of each class at each pixel location. Then, we use the classic cross-entropy loss function to train this semantic segmentation branch.

Instance Segmentation

We follow the work of LaneNet [2] to interpret the lane detection problem as an instance segmentation task. The network contains two branches. The semantic branch outputs a binary mask, while the instance embedding branch outputs an N-dimensional embedding vector for each pixel. In the embedding space, pixels can be easily clustered by a one-shot method based on distance metric learning [15]. For details of the loss function, please refer to [2].
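The pull/push structure of the discriminative loss [15] that trains this embedding space can be sketched as follows. This is an illustrative sketch, not the exact formulation of [2] or [15]; the margin values and function name are our own.

```python
import numpy as np

def discriminative_loss(emb, labels, delta_v=0.5, delta_d=3.0):
    """Sketch of the pull (variance) and push (distance) terms of a
    discriminative loss: pixels are pulled toward their lane's mean
    embedding, and lane means are pushed apart beyond a margin."""
    ids = np.unique(labels)
    means = {i: emb[labels == i].mean(axis=0) for i in ids}
    # Pull term: hinged squared distance of each pixel to its lane mean.
    pull = np.mean([max(np.linalg.norm(e - means[l]) - delta_v, 0.0) ** 2
                    for e, l in zip(emb, labels)])
    # Push term: hinged squared margin between every pair of lane means.
    push, pairs = 0.0, 0
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            d = np.linalg.norm(means[ids[a]] - means[ids[b]])
            push += max(2 * delta_d - d, 0.0) ** 2
            pairs += 1
    return pull + (push / pairs if pairs else 0.0)
```

When each lane's pixels lie within the pull margin of their mean and the means are farther apart than the push margin, the loss is zero and clustering the embeddings by distance becomes trivial.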

IV Experiments

We evaluate our network on the TuSimple [6] and ApolloScape [7] datasets for the instance segmentation and semantic segmentation tasks, respectively. Our network is implemented in the PyTorch [16] framework.

IV-A TuSimple Benchmark


The TuSimple Benchmark is a dedicated dataset for lane detection and consists of 3626 training and 2782 testing images. The annotation includes the $x$-positions of the lane points at a number of discretized $y$-positions.


The detection accuracy is calculated as the average number of correct points per image:

$$\mathrm{acc} = \frac{\sum_{im} C_{im}}{\sum_{im} S_{im}}$$

where $C_{im}$ denotes the number of correct points and $S_{im}$ is the number of ground-truth points. A point is regarded as correctly detected when the error is smaller than a predefined threshold. Besides, the false positive and false negative scores can also be calculated by

$$FP = \frac{F_{pred}}{N_{pred}}, \quad FN = \frac{M_{pred}}{N_{gt}}$$

where $F_{pred}$ denotes the number of mispredicted lanes, $N_{pred}$ indicates the number of predicted lanes, $M_{pred}$ is the number of missed ground-truth lanes and $N_{gt}$ represents the number of all ground-truth lanes.
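These three ratios are straightforward to compute once the point and lane counts are tallied. A minimal sketch with hypothetical argument names:

```python
def tusimple_metrics(n_correct_pts, n_gt_pts, n_wrong_lanes, n_pred_lanes,
                     n_missed_lanes, n_gt_lanes):
    """Sketch of the TuSimple metrics described above: point accuracy,
    false-positive lane rate, and false-negative lane rate."""
    acc = n_correct_pts / n_gt_pts        # correct points / ground-truth points
    fp = n_wrong_lanes / n_pred_lanes     # mispredicted lanes / predicted lanes
    fn = n_missed_lanes / n_gt_lanes      # missed lanes / ground-truth lanes
    return acc, fp, fn
```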

Training Details

Here we train the instance segmentation network as shown in Fig. 2. During the training process we use the Adam [18] optimizer, with a weight decay of 0.0005, a momentum of 0.95, a learning rate of 0.00004, and a batch size of 2. When the accuracy does not improve for 60 epochs, the learning rate drops to 10% of its value. The model converges after 220 epochs.
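This schedule maps naturally onto PyTorch's plateau scheduler. A minimal sketch with a hypothetical one-layer model standing in for PTSeg; we assume the stated "momentum of 0.95" refers to Adam's beta1.

```python
import torch

# Adam with lr 4e-5 and weight decay 5e-4; the lr drops to 10% when the
# monitored validation accuracy fails to improve for 60 epochs.
model = torch.nn.Conv2d(3, 2, kernel_size=1)  # placeholder for PTSeg
optimizer = torch.optim.Adam(model.parameters(), lr=4e-5,
                             betas=(0.95, 0.999), weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.1, patience=60)

# Per epoch: train, validate, then step the scheduler on the accuracy:
#   scheduler.step(val_accuracy)
```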

Method | Acc (%) | FP | FN | extra data | Acc u.h. (%)
Xingang Pan [3] | 96.53 | 0.0617 | 0.0180 | yes | N/A
Yen-Chang Hsu [12] | 96.50 | 0.0851 | 0.0269 | no | N/A
Davy Neven [2] | 96.40 | 0.0780 | 0.0244 | no | N/A
ResNet34-FCN | 96.24 | 0.0746 | 0.0347 | no | 95.67
ResNet34-PTL-FCN (ours) | 96.15 | 0.0818 | 0.0314 | no | 95.72
TABLE I: Test results on the TuSimple Lane Detection Benchmark (u.h. is short for the region under the horizon line).
(a) Lane points accuracy vs. distance in pixels.
(b) Lane points accuracy vs. distance in meters.
Fig. 3: TuSimple Lane Detection Benchmark Results.
Fig. 4: Visualization of the comparison among the baseline, our method and the ground truth on the TuSimple dataset. Each row contains three submaps. From left to right: results w/o PTLs, results w/ PTLs, GT. The areas inside the red wireframes deserve closer attention.

Evaluation Results

In comparison with other state-of-the-art methods [2] [3] [12], we show the test results in Table I, from which we can see that our detection accuracy is on par with the state of the art. It is worth mentioning that all evaluation results above are in strict accordance with the metric defined by TuSimple. However, in our method the feature maps are warped into a bird's-eye view of the ground, which forces the part of the image above the horizon to be ignored by our method. A number of lane segments above the horizon are visible to other methods but invisible to ours, which leads to a slight decrease in our results. In order to make a fair comparison, we re-evaluated only those sample points below the horizon and ignored the null sample points marked in the annotation1. The new evaluation result is listed in the u.h. column of Table I. We also plot the accuracy versus the distance from the ego-vehicle in the line charts. Fig. 3 (a) shows the accuracy as a function of the pixel distance to the image bottom. Fig. 3 (b) shows the accuracy as a function of the real distance to the ego-vehicle. Both charts imply that our method can improve the detection accuracy of lane and road markings at longer distances. The qualitative comparison is shown in Fig. 4.

IV-B ApolloScape Benchmark


The Lane Segmentation branch of the ApolloScape dataset contains more than 110,000 frames with high-quality pixel-level annotations. The annotation includes 35 kinds of lane and road markings from daily traffic scenarios, including but not limited to lanes, turning arrows, stop lines and zebra crossings. To the best of the authors' knowledge, no related works have been trained on the ApolloScape Lane Segmentation dataset. Therefore, we only show the ablation results of our own method.


The evaluation follows the recommendation of ApolloScape which uses the mean-IOU (mIOU) as the evaluation metric just like in [19].
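The mIOU metric can be computed from a confusion matrix over predicted and ground-truth labels. A minimal sketch that ignores void-label handling; the function name is our own.

```python
import numpy as np

def mean_iou(pred, gt, n_classes):
    """Per-class IoU and their mean from flat integer label arrays.
    IoU_c = intersection_c / union_c, taken from a confusion matrix."""
    conf = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(conf, (gt, pred), 1)                 # rows: gt, cols: pred
    inter = np.diag(conf).astype(float)
    union = conf.sum(0) + conf.sum(1) - np.diag(conf)
    iou = inter / np.maximum(union, 1)             # guard empty classes
    return iou, iou.mean()
```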

(a) mean-IOU vs. distance in pixels
(b) mean-IOU vs. distance in meters
Fig. 5: Apollo Road Marking Semantic Segmentation Results.

Training Details

Since ApolloScape only provides pixel-level semantic annotations, we train the semantic segmentation network as shown in Fig. 2. During the training process we use the Adam [18] optimizer, with a weight decay of 0.0005, a momentum of 0.95, a learning rate of 0.00004, and a batch size of 2. The model converges after 25 epochs.

Evaluation Results

For a fair comparison, we again evaluate only the image part below the horizon. Fig. 5 shows the mean-IOU accuracy at different distances. Table II shows the mIOU value and the IOU values of some common types of lane and road markings. We ignored the remaining classes, each of which accounts for less than 0.1% of the data.

category | class | ResNet18-FCN | ResNet18-PTL-FCN (ours)
arrow | thru | 0.611 | 0.692
arrow | thru & left turn | 0.768 | 0.800
arrow | thru & right turn | 0.824 | 0.808
arrow | left turn | 0.767 | 0.768
stopping | stop line | 0.665 | 0.747
zebra | crosswalk | 0.859 | 0.858
lane | white solid | 0.832 | 0.800
lane | yellow solid | 0.813 | 0.803
lane | yellow double solid | 0.886 | 0.893
lane | white broken | 0.791 | 0.790
diamond | zebra attention | 0.749 | 0.775
rectangle | no parking | 0.652 | 0.724
 | mIOU | 0.768 | 0.788
TABLE II: Per-class IOU results on the ApolloScape Lane Segmentation Benchmark.

According to the experimental results, our method can effectively improve the detection accuracy at farther distances, especially for road markings with richer structural features such as turning arrows. The qualitative comparison is shown in Fig. 6.

Fig. 6: Visualization of the comparison among the baseline, our method and the ground truth on ApolloScape. Each row contains three submaps. From left to right: results w/o PTLs, results w/ PTLs, GT; an enlarged view of the area inside the red wireframe is placed on top of each submap.

V Conclusion

In this paper, we introduced a segmentation network architecture improved by consecutive homography transforms for road marking detection. The parameters of the consecutive transforms are derived in closed form from a purely rotating camera model and a key-point bounding-box trick. The proposed method is shown to be beneficial for distant lane and road marking detection. In future research, we plan to incorporate an online extrinsic-estimation scheme into this structure. Handling non-flat ground surfaces, where the ground normal vector and horizon line are not well defined, is another interesting topic.


Acknowledgment

This work is supported by the National Key Research and Development Program of China (No. 2018YFB0105103, No. 2017YFA0603104), the National Natural Science Foundation of China (No. U1764261, No. 41801335, No. 41871370), the Natural Science Foundation of Shanghai (No. kz170020173571, No. 16DZ1100701) and the Fundamental Research Funds for the Central Universities (No. 22120180095).


  1. See the labeling protocol in TuSimple.


References

  1. Seokju Lee, Junsik Kim, Jae Shin Yoon, Seunghak Shin, Oleksandr Bailo, Namil Kim, Tae-Hee Lee, Hyun Seok Hong, Seung-Hoon Han, and In So Kweon. Vpgnet: Vanishing point guided network for lane and road marking detection and recognition. arXiv:1710.06288 [cs], Oct 2017.
  2. Davy Neven, Bert De Brabandere, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. Towards end-to-end lane detection: an instance segmentation approach. arXiv:1802.05591 [cs], Feb 2018. arXiv: 1802.05591.
  3. Xingang Pan, Jianping Shi, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Spatial as deep: Spatial cnn for traffic scene understanding. 2018.
  4. Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431–3440, 2015.
  5. Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. In Advances in Neural Information Processing Systems, 2015.
  6. Tusimple lane detection benchmark.
  7. Xinyu Huang, Xinjing Cheng, Qichuan Geng, Binbin Cao, Dingfu Zhou, Peng Wang, Yuanqing Lin, and Ruigang Yang. The apolloscape dataset for autonomous driving. arXiv: 1803.06184, 2018.
  8. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2016.
  9. Xiaolong Liu, Zhidong Deng, Hongchao Lu, and Lele Cao. Benchmark for road marking detection: Dataset specification and performance baseline. In 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), pages 1–6. IEEE, 2017.
  10. Y. Wu, T. Yang, J. Zhao, L. Guan, and W. Jiang. Vh-hfcn based parking slot and lane markings segmentation on panoramic surround view. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 1767–1772, June 2018.
  11. Weiwei Zhang, Zeyang Mi, Yaocheng Zheng, Qiaoming Gao, and Wenjing Li. Road marking segmentation based on siamese attention module and maximum stable external region. IEEE Access, 7:143710–143720, 2019.
  12. Yen-Chang Hsu, Zheng Xu, Zsolt Kira, and Jiawei Huang. Learning to cluster for proposal-free instance segmentation. arXiv:1803.06459 [cs], Mar 2018. arXiv: 1803.06459.
  13. Tom Bruls, Horia Porav, Lars Kunze, and Paul Newman. The right (angled) perspective: Improving the understanding of road scenes using boosted inverse perspective mapping. In 2019 IEEE Intelligent Vehicles Symposium (IV), page 302–309, Jun 2019.
  14. Scott Workman, Menghua Zhai, and Nathan Jacobs. Horizon lines in the wild. In Procedings of the British Machine Vision Conference 2016, pages 20.1–20.12. British Machine Vision Association, 2016.
  15. Bert De Brabandere, Davy Neven, and Luc Van Gool. Semantic instance segmentation with a discriminative loss function, 2017.
  16. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.
  17. Adam Paszke, Abhishek Chaurasia, Sangpil Kim, and Eugenio Culurciello. Enet: A deep neural network architecture for real-time semantic segmentation, 2016.
  18. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  19. Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.