Complex-YOLO: Real-time 3D Object Detection on Point Clouds
Abstract
Lidar-based 3D object detection is inevitable for autonomous driving, because it directly links to environmental understanding and therefore builds the base for prediction and motion planning. The capability of inferring highly sparse 3D data in real time is an ill-posed problem for many other application areas besides automated vehicles, e.g. augmented reality, personal robotics or industrial automation. We introduce Complex-YOLO, a state-of-the-art real-time 3D object detection network that operates on point clouds only. In this work, we describe a network that expands YOLOv2, a fast 2D standard object detector for RGB images, by a specific complex regression strategy to estimate multi-class 3D boxes in Cartesian space. To this end, we propose a specific Euler-Region-Proposal Network (E-RPN) that estimates the pose of an object by adding an imaginary and a real fraction to the regression network. This yields a closed complex space and avoids the singularities that occur with single-angle estimation. The E-RPN helps the network generalize well during training. Our experiments on the KITTI benchmark suite show that we outperform current leading methods for 3D object detection, specifically in terms of efficiency. We achieve state-of-the-art results for cars, pedestrians and cyclists while being more than five times faster than the fastest competitor. Further, our model is capable of estimating all eight KITTI classes, including vans, trucks and sitting pedestrians, simultaneously with high accuracy.
Keywords:
3D Object Detection, Point Cloud Processing, Lidar, Autonomous Driving
1 Introduction
Point cloud processing is becoming increasingly important for autonomous driving due to the strong improvement of automotive Lidar sensors in recent years. The sensors of suppliers are capable of delivering 3D points of the surrounding environment in real time. Their advantage is a direct measurement of the distance to surrounding objects [1]. This allows us to develop object detection algorithms for autonomous driving that accurately estimate the position and heading of different objects in 3D [2] [3] [4] [5] [6] [7] [8] [9]. Compared to images, Lidar point clouds are sparse, with a varying density distributed across the measurement area. These points are unordered, interact locally and generally cannot be analyzed in isolation. Point cloud processing should always be invariant to basic transformations [10] [11].
In general, object detection and classification based on deep learning is a well-known task and widely established for 2D bounding box regression on images [12] [13] [14] [15] [16] [17] [18] [19] [20] [21]. Research has mainly focused on the trade-off between accuracy and efficiency. With regard to automated driving, efficiency is much more important. Therefore, the best object detectors use region proposal networks (RPN) [3] [22] [15] or a similar grid-based RPN approach [13]. Those networks are extremely efficient and accurate, and are even capable of running on dedicated hardware or embedded devices. Object detection on point clouds is still rare, but increasingly important. Such applications need to be capable of predicting 3D bounding boxes. Currently, there exist mainly three different approaches using deep learning [3]: (i) direct point cloud processing using multi-layer perceptrons; (ii) translation of point clouds into voxels or image stacks that are processed with convolutional neural networks (CNNs); (iii) combined fusion approaches.
1.1 Related Work
Recently, frustum-based networks [5] have shown high performance on the KITTI benchmark suite. The model is ranked among the top entries for both 3D object detection and bird's-eye-view detection.
In contrast, Zhou et al. [3] proposed a model that operates only on Lidar data. In this regard, it is the best-ranked model on KITTI for 3D and bird's-eye-view detection using Lidar data only. The basic idea is end-to-end learning that operates on grid cells without using hand-crafted features. The features inside each grid cell are learned during training using a PointNet approach [10]. On top, a CNN predicts the 3D bounding boxes. Despite the high accuracy, the model suffers from a low inference speed of about 4 fps on a Titan X GPU [3].
Another highly ranked approach was reported by Chen et al. [2]. The basic idea is the projection of Lidar point clouds into voxel-based RGB-maps using hand-crafted features, like point density, maximum height and a representative point intensity [9]. To achieve highly accurate results, they use a multi-view approach based on a Lidar bird's-eye-view map, a Lidar-based front-view map and a camera-based front-view image. This fusion results in a high processing time, yielding only 4 fps on an NVIDIA GTX 1080 Ti GPU. Another drawback is the need for a secondary sensor input (camera).
1.2 Contribution
To our surprise, no approach so far achieves real-time efficiency in terms of autonomous driving. Hence, we introduce the first slim and accurate model that is capable of running faster than 50 fps on an NVIDIA Titan X GPU. We use the multi-view idea (MV3D) [2] for point cloud preprocessing and feature extraction. However, we neglect the multi-view fusion and generate one single bird's-eye-view RGB-map (see Fig. 1) that is based on Lidar only, to ensure efficiency. On top, we present Complex-YOLO, a 3D version of YOLOv2, one of the fastest state-of-the-art image object detectors [13]. Complex-YOLO is supported by our specific E-RPN, which estimates the orientation of objects encoded by an imaginary and a real part for each box. The idea is to have a closed mathematical space without singularities for accurate angle generalization. Our model predicts exact 3D boxes with localization and an exact heading of the objects in real time, even if an object is based on only a few points (e.g. pedestrians). Therefore, we designed special anchor boxes. Further, it predicts all eight KITTI classes using only Lidar input data. We evaluated our model on the KITTI benchmark suite. In terms of accuracy, we achieved on-par results for cars, pedestrians and cyclists; in terms of efficiency, we outperform current leaders by a minimum factor of five. The main contributions of this paper are:

- This work introduces Complex-YOLO, using a new E-RPN for reliable angle regression in 3D box estimation.
- We present real-time performance with high accuracy on the KITTI benchmark suite, being more than five times faster than the current leading models.
- We estimate an exact heading for each 3D box, supported by the E-RPN, which enables predicting the trajectories of surrounding objects.
- Compared to other Lidar-based methods (e.g. [3]), our model efficiently estimates all classes simultaneously in one forward pass.
2 Complex-YOLO
This section describes the grid-based preprocessing of the point clouds, the specific network architecture, the derived loss function for training and our efficiency design to ensure real-time performance.
2.1 Point Cloud Preprocessing
The 3D point cloud of a single frame, acquired by a Velodyne HDL64 laser scanner [1], is converted into a single bird's-eye-view RGB-map covering an area of 80 m x 40 m (see Fig. 4) directly in front of the origin of the sensor. Inspired by Chen et al. (MV3D) [2], the RGB-map is encoded by height, intensity and density. The size of the grid map is defined with $n = 1024$ and $m = 512$, matching the network input in Tab. 1. Therefore, we projected and discretized the 3D point clouds into a 2D grid with a resolution of about $g \approx 8\,\mathrm{cm}$. Compared to MV3D, we slightly decreased the cell size to achieve fewer quantization errors, accompanied by a higher input resolution. For efficiency and performance reasons, we use only one height map instead of multiple. Consequently, all three feature channels ($z_r$, $z_g$, $z_b$ with $z_{r,g,b} \in \mathbb{R}^{m \times n}$) are calculated for the whole point cloud inside the covered area $\Omega$. We consider the Velodyne within the origin of $\mathcal{P}_{\Omega}$ and define:
$\mathcal{P}_{\Omega} = \left\{ \mathcal{P} = [x, y, z]^{T} \mid x \in [0\,\mathrm{m}, 40\,\mathrm{m}],\; y \in [-40\,\mathrm{m}, 40\,\mathrm{m}],\; z \in [-2\,\mathrm{m}, 1.25\,\mathrm{m}] \right\}$  (1)
We choose $z \in [-2\,\mathrm{m}, 1.25\,\mathrm{m}]$, considering the Velodyne mounting height of 1.73 m [1], to cover an area from the ground up to about 3 m height, expecting trucks to be the highest objects. With the aid of the calibration [1], we define a mapping function $S_j = f_{PS}(\mathcal{P}_{\Omega i}, g)$ mapping each point $\mathcal{P}_{\Omega i}$ with index $i$ into a specific grid cell $S_j$ of our RGB-map. A set describes all points mapped into a specific grid cell:
$\mathcal{P}_{\Omega i \to j} = \left\{ \mathcal{P}_{\Omega i} = [x, y, z]^{T} \mid S_{j} = f_{PS}(\mathcal{P}_{\Omega i}, g) \right\}$  (2)
Hence, we can calculate the channels of each pixel, considering the Velodyne intensity as $I(\mathcal{P}_{\Omega})$:
$z_{g}(S_{j}) = \max\!\left(\mathcal{P}_{\Omega i \to j} \cdot [0, 0, 1]^{T}\right)$
$z_{b}(S_{j}) = \max\!\left(I(\mathcal{P}_{\Omega i \to j})\right)$
$z_{r}(S_{j}) = \min\!\left(1.0,\; \log(N + 1)/\log(64)\right), \quad N = |\mathcal{P}_{\Omega i \to j}|$  (3)
Here, $N$ describes the number of points mapped from $\mathcal{P}_{\Omega i}$ to $S_j$, and $g$ is the parameter for the grid cell size. Hence, $z_g$ encodes the maximum height, $z_b$ the maximum intensity and $z_r$ the normalized density of all points mapped into $S_j$ (see Fig. 2).
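As an illustration of this encoding, the following is a minimal NumPy sketch; the grid shape, coordinate ranges and the function name are our own choices for readability, not the authors' implementation:

```python
import numpy as np

def birdseye_rgb_map(points, intensity, shape=(1024, 512),
                     y_range=(-40.0, 40.0), x_range=(0.0, 40.0)):
    """Encode a Lidar frame as a bird's-eye-view RGB-map.

    points:    (N, 3) array of x (forward), y (left), z (up) coordinates
    intensity: (N,) array of reflectance values, normalized to [0, 1]
    """
    rows, cols = shape
    rgb = np.zeros((rows, cols, 3), dtype=np.float32)
    # discretize y/x into grid indices (cell size = extent / number of cells)
    r = ((points[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * rows).astype(int)
    c = ((points[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * cols).astype(int)
    ok = (r >= 0) & (r < rows) & (c >= 0) & (c < cols)
    counts = np.zeros((rows, cols), dtype=np.int64)
    for rr, cc, z, i in zip(r[ok], c[ok], points[ok, 2], intensity[ok]):
        counts[rr, cc] += 1
        rgb[rr, cc, 1] = max(rgb[rr, cc, 1], z)  # z_g: maximum height per cell
        rgb[rr, cc, 2] = max(rgb[rr, cc, 2], i)  # z_b: maximum intensity per cell
    # z_r: normalized density min(1, log(N + 1) / log(64)), as in MV3D;
    # cells without points keep the zero initialization
    rgb[:, :, 0] = np.minimum(1.0, np.log1p(counts) / np.log(64))
    return rgb
```

The per-cell maximum and the log-normalized point count correspond directly to the three channels defined above.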
2.2 Architecture
The Complex-YOLO network takes a bird's-eye-view RGB-map (see Section 2.1) as input. It uses a simplified YOLOv2 [13] CNN architecture (see Tab. 1), extended by a complex angle regression and the E-RPN, to detect accurate, multi-class oriented 3D objects while still operating in real time.
Euler-Region-Proposal.
Our E-RPN parses the 3D position $b_{x,y}$, the object dimensions (width $b_w$ and length $b_l$), a probability $p_0$, the class scores $p_1 \ldots p_n$ and finally the orientation $b_{\phi}$ from the incoming feature map. In order to obtain a proper orientation, we modified the commonly used grid-RPN approach by adding a complex angle to it:
$b_{x} = \sigma(t_{x}) + c_{x}$
$b_{y} = \sigma(t_{y}) + c_{y}$
$b_{w} = p_{w} e^{t_{w}}$
$b_{l} = p_{l} e^{t_{l}}$
$b_{\phi} = \arg(|z| e^{i b_{\phi}}) = \arctan_{2}(t_{\mathrm{Im}}, t_{\mathrm{Re}})$  (4)
With the help of this extension, the E-RPN estimates accurate object orientations based on an imaginary and a real fraction directly embedded into the network. For each grid cell (32x16, see Tab. 1) we predict five objects, each with a probability score and class scores, resulting in 75 features per cell (five boxes times six regression parameters, one objectness score and eight class scores), visualized in Fig. 2.
| layer | filters | size  | input       | output      |
|-------|---------|-------|-------------|-------------|
| conv  | 24      | 3x3/1 | 1024x512x3  | 1024x512x24 |
| max   |         | 2x2/2 | 1024x512x24 | 512x256x24  |
| conv  | 48      | 3x3/1 | 512x256x24  | 512x256x48  |
| max   |         | 2x2/2 | 512x256x48  | 256x128x48  |
| conv  | 64      | 3x3/1 | 256x128x48  | 256x128x64  |
| conv  | 32      | 1x1/1 | 256x128x64  | 256x128x32  |
| conv  | 64      | 3x3/1 | 256x128x32  | 256x128x64  |
| max   |         | 2x2/2 | 256x128x64  | 128x64x64   |
| conv  | 128     | 3x3/1 | 128x64x64   | 128x64x128  |
| conv  | 64      | 3x3/1 | 128x64x128  | 128x64x64   |
| conv  | 128     | 3x3/1 | 128x64x64   | 128x64x128  |
| max   |         | 2x2/2 | 128x64x128  | 64x32x128   |
| conv  | 256     | 3x3/1 | 64x32x128   | 64x32x256   |
| conv  | 256     | 1x1/1 | 64x32x256   | 64x32x256   |
| conv  | 512     | 3x3/1 | 64x32x256   | 64x32x512   |
| max   |         | 2x2/2 | 64x32x512   | 32x16x512   |
| conv  | 512     | 3x3/1 | 32x16x512   | 32x16x512   |
| conv  | 512     | 1x1/1 | 32x16x512   | 32x16x512   |
| conv  | 1024    | 3x3/1 | 32x16x512   | 32x16x1024  |
| conv  | 1024    | 3x3/1 | 32x16x1024  | 32x16x1024  |
| conv  | 1024    | 3x3/1 | 32x16x1024  | 32x16x1024  |
| route | 12      |       |             |             |
| reorg |         | /2    | 64x32x256   | 32x16x1024  |
| route | 22 20   |       |             |             |
| conv  | 1024    | 3x3/1 | 32x16x2048  | 32x16x1024  |
| conv  | 75      | 1x1/1 | 32x16x1024  | 32x16x75    |
| E-RPN |         |       |             | 32x16x75    |
Anchor Box Design.
The YOLOv2 object detector [13] predicts five boxes per grid cell. All are initialized with beneficial priors, i.e. anchor boxes, for better convergence during training. Due to the angle regression, the degrees of freedom, i.e. the number of possible priors, increased, but we did not enlarge the number of predictions for efficiency reasons. Hence, we defined only three different sizes and two angle directions as priors, based on the distribution of boxes within the KITTI dataset: i) vehicle size (heading up); ii) vehicle size (heading down); iii) cyclist size (heading up); iv) cyclist size (heading down); v) pedestrian size (heading left).
Complex Angle Regression.
The orientation angle $b_{\phi}$ for each object can be computed from the responsible regression parameters $t_{\mathrm{Im}}$ and $t_{\mathrm{Re}}$, which correspond to the phase of a complex number, similar to [27]. The angle is given simply by $b_{\phi} = \arctan_{2}(t_{\mathrm{Im}}, t_{\mathrm{Re}})$. On the one hand, this avoids singularities; on the other hand, it results in a closed mathematical space, which has an advantageous impact on the generalization of the model. We can link our regression parameters directly into the loss function (7).
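A minimal decode of one predicted box, following the YOLOv2-style parameterization described above (sigmoid cell offsets, exponential size scaling relative to an anchor prior, arctan2 over the complex pair); the function and parameter names are illustrative, not taken from the authors' code:

```python
import numpy as np

def decode_box(t, prior_w, prior_l, cell_x, cell_y):
    """Decode one box from raw regression features.

    t = (t_x, t_y, t_w, t_l, t_im, t_re); cell_x/cell_y are the grid cell
    offsets c_x, c_y; prior_w/prior_l are the anchor priors p_w, p_l.
    """
    t_x, t_y, t_w, t_l, t_im, t_re = t
    b_x = 1.0 / (1.0 + np.exp(-t_x)) + cell_x   # sigmoid offset inside the cell
    b_y = 1.0 / (1.0 + np.exp(-t_y)) + cell_y
    b_w = prior_w * np.exp(t_w)                 # size relative to the anchor prior
    b_l = prior_l * np.exp(t_l)
    b_phi = np.arctan2(t_im, t_re)              # orientation from the complex pair
    return b_x, b_y, b_w, b_l, b_phi
```

Because arctan2 uses both the imaginary and the real part, the full range of orientations is recovered without the wrap-around discontinuity of a single-angle regression.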
2.3 Loss Function
Our network is optimized with a loss function $\mathcal{L}$ based on the concepts from YOLO [12] and YOLOv2 [13], which define $\mathcal{L}_{\mathrm{YOLO}}$ as the sum of squared errors using the introduced multi-part loss. We extend this approach by an Euler regression part $\mathcal{L}_{\mathrm{Euler}}$ to make use of complex numbers, which form a closed mathematical space for angle comparisons. This avoids the singularities that are common for single-angle estimations:
$\mathcal{L} = \mathcal{L}_{\mathrm{YOLO}} + \mathcal{L}_{\mathrm{Euler}}$  (5)
The Euler regression part of the loss function is defined with the aid of the Euler-Region-Proposal (see Fig. 3). Assuming that the difference between the complex numbers of prediction $e^{ib_{\phi}}$ and ground truth $e^{i\tilde{b}_{\phi}}$ is always located on the unit circle, with $|e^{ib_{\phi}}| = 1$ and $|e^{i\tilde{b}_{\phi}}| = 1$, we minimize the absolute value of the squared error to obtain a real loss:
$\mathcal{L}_{\mathrm{Euler}} = \lambda_{\mathrm{coord}} \sum_{i=0}^{S^{2}} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}} \left| e^{ib_{\phi}} - e^{i\tilde{b}_{\phi}} \right|^{2}$  (6)
$= \lambda_{\mathrm{coord}} \sum_{i=0}^{S^{2}} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}} \left[ (t_{\mathrm{Im}} - \tilde{t}_{\mathrm{Im}})^{2} + (t_{\mathrm{Re}} - \tilde{t}_{\mathrm{Re}})^{2} \right]$  (7)
Here, $\lambda_{\mathrm{coord}}$ is a scaling factor to prevent the model from diverging in early training phases, and $\mathbb{1}_{ij}^{\mathrm{obj}}$ denotes that the $j$-th bounding box predictor in cell $i$ has the highest intersection over union (IoU) with the ground truth for that prediction. Furthermore, the comparison between the predicted box $b_p$ and the ground truth $b_g$ with $\mathrm{IoU} = \frac{b_p \cap b_g}{b_p \cup b_g}$ is adjusted to handle rotated boxes as well. This is realized via the intersection and union of two 2D polygon geometries, generated from the corresponding box parameters $b_x$, $b_y$, $b_w$, $b_l$ and $b_{\phi}$.
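The polygon-based IoU for rotated boxes can be computed with standard convex polygon clipping; the following self-contained sketch (Sutherland-Hodgman clipping plus the shoelace area formula, with illustrative function names) shows the idea:

```python
import math

def box_corners(cx, cy, w, l, phi):
    """Counter-clockwise corners of a rotated rectangle (b_x, b_y, b_w, b_l, b_phi)."""
    c, s = math.cos(phi), math.sin(phi)
    return [(cx + dx * c - dy * s, cy + dx * s + dy * c)
            for dx, dy in [(-l / 2, -w / 2), (l / 2, -w / 2),
                           (l / 2, w / 2), (-l / 2, w / 2)]]

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _line_intersect(p, q, a, b):
    """Intersection of segment pq with the infinite line through a and b."""
    d = (p[0] - q[0]) * (a[1] - b[1]) - (p[1] - q[1]) * (a[0] - b[0])
    t = ((p[0] - a[0]) * (a[1] - b[1]) - (p[1] - a[1]) * (a[0] - b[0])) / d
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

def clip_polygon(subject, clipper):
    """Sutherland-Hodgman: clip `subject` against a convex CCW `clipper`."""
    out = subject
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        inp, out = out, []
        for j in range(len(inp)):
            p, q = inp[j], inp[(j + 1) % len(inp)]
            p_in, q_in = _cross(a, b, p) >= 0, _cross(a, b, q) >= 0
            if q_in:
                if not p_in:
                    out.append(_line_intersect(p, q, a, b))
                out.append(q)
            elif p_in:
                out.append(_line_intersect(p, q, a, b))
        if not out:
            return []
    return out

def shoelace_area(pts):
    return 0.5 * abs(sum(pts[i][0] * pts[(i + 1) % len(pts)][1]
                         - pts[(i + 1) % len(pts)][0] * pts[i][1]
                         for i in range(len(pts))))

def rotated_iou(box_a, box_b):
    """IoU of two rotated boxes given as (cx, cy, w, l, phi)."""
    pa, pb = box_corners(*box_a), box_corners(*box_b)
    inter_pts = clip_polygon(pa, pb)
    inter = shoelace_area(inter_pts) if len(inter_pts) >= 3 else 0.0
    union = shoelace_area(pa) + shoelace_area(pb) - inter
    return inter / union if union > 0.0 else 0.0
```

For axis-aligned boxes this reduces to the familiar rectangle IoU, while correctly handling arbitrary headings $b_{\phi}$.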
2.4 Efficiency Design
The main advantage of the chosen network design is the prediction of all bounding boxes in one inference pass. The E-RPN is part of the network and uses the output of the last convolutional layer to predict all bounding boxes. Hence, we have only one network, which can be trained end-to-end without specific training approaches. Due to this, our model has a lower runtime than models that generate region proposals in a sliding-window manner [22], predicting offsets and a class for every proposal (e.g. Faster R-CNN [15]). In Fig. 5 we compare our architecture with some of the leading models on the KITTI benchmark. Our approach achieves a considerably higher frame rate while keeping a comparable mAP (mean average precision). The frame rates were taken directly from the respective papers, all measured on a Titan X or Titan Xp. We tested our model on a Titan X and an NVIDIA TX2 board to emphasize its real-time capability (see Fig. 5).
3 Training & Experiments
We evaluated Complex-YOLO on the challenging KITTI object detection benchmark [1], which is divided into three subcategories: 2D, 3D and bird's-eye-view object detection for cars, pedestrians and cyclists. Each class is evaluated at three difficulty levels (easy, moderate, hard) considering object size, distance, occlusion and truncation. This public dataset provides 7,481 training samples including annotated ground truth and 7,518 test samples with point clouds taken from a Velodyne laser scanner, whose annotations are not public. Note that we focused on bird's-eye-view detection and did not run the 2D object detection benchmark, since our input is Lidar-based only.
3.1 Training Details
We trained our model from scratch via stochastic gradient descent with a weight decay of 0.0005 and momentum of 0.9. Our implementation is based on a modified version of the Darknet neural network framework [28]. First, we applied our preprocessing (see Section 2.1) to generate the bird's-eye-view RGB-maps from the Velodyne samples. Following the principles of [2] [3] [29], we subdivided the training set with publicly available ground truth, but used ratios of 85% for training and 15% for validation, because we trained from scratch and aimed for a model capable of multi-class predictions. In contrast, e.g. VoxelNet [3] modified and optimized the model for different classes. We suffered from the available ground truth data, because it was designed for camera-based detection first. The class distribution, with more than 75% cars, less than 4% cyclists and less than 15% pedestrians, is disadvantageous. In addition, more than 90% of all annotated objects are facing in the recording car's direction of travel, facing towards the recording car, or have similar orientations. On top of that, Fig. 4 shows a 2D spatial histogram of object locations from the bird's-eye-view perspective, where dense points indicate more objects at exactly that position. It reveals two blind spots in the bird's-eye-view map. Nevertheless, we saw surprisingly good results on the validation set and on other recorded, unlabeled KITTI sequences covering several scenarios, like urban, highway or inner city.
For the first epochs, we started with a small learning rate to prevent divergence. After some epochs, we scaled the learning rate up and continued to gradually decrease it for up to 1,000 epochs. Due to the fine-grained requirements of a bird's-eye-view approach, slight changes in predicted features have a strong impact on the resulting box predictions. We used batch normalization for regularization and a linear activation for the last layer of our CNN; apart from that, we used the leaky rectified linear activation:
$\phi(x) = \begin{cases} x, & x > 0 \\ 0.1\,x, & \text{otherwise} \end{cases}$  (8)
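The leaky activation above translates to a one-liner; a NumPy sketch (the function name is ours):

```python
import numpy as np

def leaky_relu(x):
    """Identity for positive inputs, slope 0.1 otherwise."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(x > 0, x, 0.1 * x)
```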
3.2 Evaluation on KITTI
We adapted our experimental setup to follow the official KITTI evaluation protocol, where the IoU thresholds are 0.7 for the class Car and 0.5 for the classes Pedestrian and Cyclist. Detections that are not visible on the image plane are filtered out, because the ground truth is only available for objects that also appear on the image plane of the camera recording [1] (see Fig. 4). We used the average precision (AP) metric to compare the results. Note that we ignore a small number of objects that lie outside our bird's-eye-view map boundaries, more than 40 m to the front, to keep the input dimensions as small as possible for efficiency reasons.
BirdsEyeView.
Our evaluation results for the bird's-eye-view detection are presented in Tab. 2. This benchmark uses bounding box overlap for comparison. For a better overview and to rank the results, similar current leading methods are listed as well, evaluated on the official KITTI test set. Complex-YOLO consistently outperforms all competitors in terms of runtime and efficiency, while still achieving comparable accuracy. With about 0.02 s runtime on a Titan X GPU, we are five times faster than AVOD [7], considering their use of a more powerful GPU (Titan Xp). Compared to VoxelNet [3], which is also Lidar-only, we are more than ten times faster, and MV3D [2], the slowest competitor, takes eighteen times as long.
| Method | Modality | FPS | Car Easy | Car Mod. | Car Hard | Ped. Easy | Ped. Mod. | Ped. Hard | Cyc. Easy | Cyc. Mod. | Cyc. Hard |
|--------|----------|-----|----------|----------|----------|-----------|-----------|-----------|-----------|-----------|-----------|
| MV3D [2] | Lidar+Mono | 2.8 | 86.02 | 76.90 | 68.49 | – | – | – | – | – | – |
| F-PointNet [5] | Lidar+Mono | 5.9 | 88.70 | 84.00 | 75.33 | 58.09 | 50.22 | 47.20 | 75.38 | 61.96 | 54.68 |
| AVOD [7] | Lidar+Mono | 12.5 | 86.80 | 85.44 | 77.73 | 42.51 | 35.24 | 33.97 | 63.66 | 47.74 | 46.55 |
| AVOD-FPN [7] | Lidar+Mono | 10.0 | 88.53 | 83.79 | 77.90 | 50.66 | 44.75 | 40.83 | 62.39 | 52.02 | 47.87 |
| VoxelNet [3] | Lidar | 4.3 | 89.35 | 79.26 | 77.39 | 46.13 | 40.74 | 38.11 | 66.70 | 54.76 | 50.55 |
| Complex-YOLO | Lidar | 50.4 | 85.89 | 77.40 | 77.33 | 46.08 | 45.90 | 44.20 | 72.37 | 63.36 | 60.27 |
3D Object Detection.
Tab. 3 shows our results for the 3D bounding box overlap. Since we do not estimate the height information directly via regression, we ran this benchmark with a fixed spatial height location extracted from the ground truth, similar to MV3D [2]. Additionally, as mentioned, we simply injected a predefined height for every object based on its class, calculated as the mean over all ground truth objects of that class. This reduces the precision for all classes, but it confirms the good results measured on the bird's-eye-view benchmark.
| Method | Modality | FPS | Car Easy | Car Mod. | Car Hard | Ped. Easy | Ped. Mod. | Ped. Hard | Cyc. Easy | Cyc. Mod. | Cyc. Hard |
|--------|----------|-----|----------|----------|----------|-----------|-----------|-----------|-----------|-----------|-----------|
| MV3D [2] | Lidar+Mono | 2.8 | 71.09 | 62.35 | 55.12 | – | – | – | – | – | – |
| F-PointNet [5] | Lidar+Mono | 5.9 | 81.20 | 70.39 | 62.19 | 51.21 | 44.89 | 40.23 | 71.96 | 56.77 | 50.39 |
| AVOD [7] | Lidar+Mono | 12.5 | 73.59 | 65.78 | 58.38 | 38.28 | 31.51 | 26.98 | 60.11 | 44.90 | 38.80 |
| AVOD-FPN [7] | Lidar+Mono | 10.0 | 81.94 | 71.88 | 66.38 | 46.35 | 39.00 | 36.58 | 59.97 | 46.12 | 42.36 |
| VoxelNet [3] | Lidar | 4.3 | 77.47 | 65.11 | 57.73 | 39.48 | 33.69 | 31.51 | 61.22 | 48.36 | 44.37 |
| Complex-YOLO | Lidar | 50.4 | 67.72 | 64.00 | 63.01 | 41.79 | 39.70 | 35.92 | 68.17 | 58.32 | 54.30 |
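The class-wise height injection described above amounts to computing a mean over the ground truth annotations; a small sketch (function name and label format are illustrative):

```python
def class_height_priors(labels):
    """Mean ground-truth box height per class.

    labels: iterable of (class_name, height) pairs, e.g. ("Car", 1.53).
    Returns a dict mapping each class to the height injected at inference.
    """
    sums, counts = {}, {}
    for cls, h in labels:
        sums[cls] = sums.get(cls, 0.0) + h
        counts[cls] = counts.get(cls, 0) + 1
    return {cls: sums[cls] / counts[cls] for cls in sums}
```

At inference, each detected box is simply assigned the prior of its predicted class instead of a regressed height.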
4 Conclusion
In this paper we present the first real-time efficient deep learning model for 3D object detection on Lidar-based point clouds. We highlight our state-of-the-art results in terms of accuracy (see Fig. 5) on the KITTI benchmark suite with an outstanding efficiency of more than 50 fps (NVIDIA Titan X). We do not need additional sensors, e.g. a camera, like most of the leading approaches. This breakthrough is achieved by the introduction of the new E-RPN, an Euler regression approach for estimating orientations with the aid of complex numbers. The closed mathematical space without singularities allows robust angle prediction.
Our approach is able to detect objects of multiple classes (e.g. cars, vans, pedestrians, cyclists, trucks, trams, sitting pedestrians, misc) simultaneously in one forward pass. This novelty enables deployment for real usage in self-driving cars and clearly differentiates our model from others. We show the real-time capability even on a dedicated embedded platform (NVIDIA TX2, 4 fps). In future work, we plan to add height information to the regression, enabling truly independent 3D object detection in space, and to use spatio-temporal dependencies within point cloud preprocessing for better class distinction and improved accuracy.
Footnotes
 email: {martin.simon,stefan.milz,karl.amende}@valeo.com
 email: horstmichael.gross@tuilmenau.de
 The ranking refers to the time of submission: 14th of March, 2018
References
 Geiger, A.: Are we ready for autonomous driving? the kitti vision benchmark suite. In: Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). CVPR ’12, Washington, DC, USA, IEEE Computer Society (2012) 3354–3361
 Chen, X., Ma, H., Wan, J., Li, B., Xia, T.: Multi-view 3d object detection network for autonomous driving. CoRR abs/1611.07759 (2016)
 Zhou, Y., Tuzel, O.: Voxelnet: End-to-end learning for point cloud based 3d object detection. CoRR abs/1711.06396 (2017)
 Engelcke, M., Rao, D., Wang, D.Z., Tong, C.H., Posner, I.: Vote3deep: Fast object detection in 3d point clouds using efficient convolutional neural networks. CoRR abs/1609.06666 (2016)
 Qi, C.R., Liu, W., Wu, C., Su, H., Guibas, L.J.: Frustum pointnets for 3d object detection from RGB-D data. CoRR abs/1711.08488 (2017)
 Wang, D.Z., Posner, I.: Voting for voting in online point cloud object detection. In: Proceedings of Robotics: Science and Systems, Rome, Italy (July 2015)
 Ku, J., Mozifian, M., Lee, J., Harakeh, A., Waslander, S.: Joint 3d proposal generation and object detection from view aggregation. arXiv preprint arXiv:1712.02294 (2017)
 Li, B., Zhang, T., Xia, T.: Vehicle detection from 3d lidar using fully convolutional network. CoRR abs/1608.07916 (2016)
 Li, B.: 3d fully convolutional network for vehicle detection in point cloud. CoRR abs/1611.08069 (2016)
 Qi, C.R., Su, H., Mo, K., Guibas, L.J.: Pointnet: Deep learning on point sets for 3d classification and segmentation. CoRR abs/1612.00593 (2016)
 Qi, C.R., Yi, L., Su, H., Guibas, L.J.: Pointnet++: Deep hierarchical feature learning on point sets in a metric space. CoRR abs/1706.02413 (2017)
 Redmon, J., Divvala, S.K., Girshick, R.B., Farhadi, A.: You only look once: Unified, real-time object detection. CoRR abs/1506.02640 (2015)
 Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. CoRR abs/1612.08242 (2016)
 Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S.E., Fu, C., Berg, A.C.: SSD: single shot multibox detector. CoRR abs/1512.02325 (2015)
 Ren, S., He, K., Girshick, R.B., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. CoRR abs/1506.01497 (2015)
 Cai, Z., Fan, Q., Feris, R.S., Vasconcelos, N.: A unified multi-scale deep convolutional neural network for fast object detection. CoRR abs/1607.07155 (2016)
 Ren, J.S.J., Chen, X., Liu, J., Sun, W., Pang, J., Yan, Q., Tai, Y., Xu, L.: Accurate single stage detector using recurrent rolling convolution. CoRR abs/1704.05776 (2017)
 Chen, X., Kundu, K., Zhang, Z., Ma, H., Fidler, S., Urtasun, R.: Monocular 3d object detection for autonomous driving. In: IEEE CVPR. (2016)
 Girshick, R.B., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. CoRR abs/1311.2524 (2013)
 He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. CoRR abs/1512.03385 (2015)
 Chen, X., Kundu, K., Zhu, Y., Ma, H., Fidler, S., Urtasun, R.: 3d object proposals using stereo imagery for accurate object class detection. CoRR abs/1608.07711 (2016)
 Girshick, R.B.: Fast R-CNN. CoRR abs/1504.08083 (2015)
 Li, Y., Bu, R., Sun, M., Chen, B.: Pointcnn (2018)
 Wang, Y., Sun, Y., Liu, Z., Sarma, S.E., Bronstein, M.M., Solomon, J.M.: Dynamic graph cnn for learning on point clouds (2018)
 Xiang, Y., Choi, W., Lin, Y., Savarese, S.: Data-driven 3d voxel patterns for object category recognition. In: Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition. (2015)
 Wu, Z., Song, S., Khosla, A., Tang, X., Xiao, J.: 3d shapenets for 2.5d object recognition and next-best-view prediction. CoRR abs/1406.5670 (2014)
 Beyer, L., Hermans, A., Leibe, B.: Biternion nets: Continuous head pose regression from discrete training labels. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 9358 (2015) 157–168
 Redmon, J.: Darknet: Open source neural networks in c. http://pjreddie.com/darknet/ (2013–2016)
 Chen, X., Kundu, K., Zhu, Y., Berneshawi, A., Ma, H., Fidler, S., Urtasun, R.: 3d object proposals for accurate object class detection. In: NIPS. (2015)