A Two-Stage Data Association Approach for 3D Multi-Object Tracking

Abstract

Multi-object tracking (MOT) is an integral part of any autonomous driving pipeline because it produces the trajectories taken by other moving objects in the scene and helps predict their future motion. Thanks to recent advances in 3D object detection enabled by deep learning, track-by-detection has become the dominant paradigm in 3D MOT. In this paradigm, an MOT system is essentially made of an object detector and a data association algorithm which establishes track-to-detection correspondence. While 3D object detection has been actively researched, association algorithms for 3D MOT seem to have settled on bipartite matching formulated as a linear assignment problem (LAP) and solved by the Hungarian algorithm. In this paper, we adapt a two-stage data association method, which was successful in image-based tracking, to the 3D setting, thus providing an alternative data association method for 3D MOT. Our method outperforms the baseline using one-stage bipartite matching for data association, achieving 0.587 AMOTA on the NuScenes validation set.

1 Introduction

Multi-object tracking has been a long-standing problem in the computer vision and robotics communities since it is a crucial part of autonomous systems. Starting from early work on tracking with hand-crafted features, the deep learning revolution, which has produced highly accurate object detection models [22, 18, 21], has shifted the focus of the field to the track-by-detection paradigm [5, 23]. In this paradigm, tracking algorithms receive a set of object detections, usually in the form of bounding boxes, at each time step, and they aim to link detections of the same object across time to form trajectories.

While image-based methods in this paradigm have reached a certain maturity, 3D tracking is still in its early phase, where most published approaches originate from successful 2D exemplars. The most popular attempt at 3D tracking with an established 2D tracking method is [26], which is an extension of [5] into 3D space. In these works, the tracking algorithm is made of the Hungarian algorithm [14] and the Kalman filter. While the former finds track-to-measurement correspondences by solving a linear assignment problem, the latter performs prediction and correction of the tracks' states. An improvement of [26] is proposed by [8], which replaces 3D IoU [29] with the Mahalanobis distance as the cost function of the assignment problem. The idea of handling tracking as a matching problem is also used in the context of end-to-end learning [17, 28, 19]. [17] solves the tracking task in the same fashion as [26]; however, this work trains a sub-network to calculate the cost function of the assignment problem, and the correction step is carried out by another sub-network instead of the Kalman filter. [28, 19] train deep models to predict tracks' positions in the following frames along with generating detections, and the track-to-detection correspondences are found by greedy matching.

Even though 3D tracking has progressed rapidly thanks to the availability of standardized large-scale benchmarks such as KITTI [13], NuScenes [6], and the Waymo Open Dataset [25], the focus of the field is on developing better object detection models rather than better tracking algorithms, as evidenced in Table 1. Two trends can be observed in this table. First, tracking performance experiences a significant boost when a better object detection model is introduced. Second, the method of AB3DMOT [26] is favored by most recent 3D tracking systems.

| Dataset | Method Name | Tracking Method | AMOTA | Object Detector | mAP |
|---|---|---|---|---|---|
| NuScenes | CenterPoint [28] | Greedy closest-point matching | 0.650 | CenterPoint | 0.603 |
| NuScenes | PMBM* | Poisson Multi-Bernoulli Mixture filter [10] | 0.626 | CenterPoint | 0.603 |
| NuScenes | StanfordIPRL-TRI [8] | Hungarian algorithm with Mahalanobis distance as cost function and Kalman filter | 0.550 | MEGVII [31] | 0.519 |
| NuScenes | AB3DMOT [26] | Hungarian algorithm with 3D IoU as cost function and Kalman filter | 0.151 | MEGVII | 0.519 |
| NuScenes | CenterTrack | Greedy closest-point matching | 0.108 | CenterNet [30] | 0.388 |
| Waymo | HorizonMOT [9] | 3-stage data association, each stage an assignment problem solved by the Hungarian algorithm | 0.6345 | AFDet [11] | 0.7711 |
| Waymo | CenterPoint | Greedy closest-point matching | 0.5867 | CenterPoint | 0.7193 |
| Waymo | PV-RCNN-KF | Hungarian algorithm and Kalman filter | 0.5553 | PV-RCNN [24] | 0.7152 |
| Waymo | PPBA AB3DMOT | Hungarian algorithm with 3D IoU as cost function and Kalman filter | 0.2914 | PointPillars with PPBA [7] | 0.3530 |
Table 1: Summary of tracking methods whose details are published on the leaderboards of NuScenes and the Waymo Open Dataset

The reason for AB3DMOT's popularity is that, despite its simplicity, it achieves competitive results on challenging datasets at a remarkably high frame rate (more than 200 FPS). However, such simplicity comes at the cost of the MOT system being vulnerable to false associations due to occlusion or imperfect detections, which is the case for objects in clutter or far away from the ego vehicle.

Aware of the lack of a generic 3D tracking algorithm that better handles occlusion and imperfect detections, so as to limit false track-to-detection correspondences, while remaining relatively simple, we adapt the image-based tracking method proposed by [2] to the 3D setting. Specifically, this method is a two-stage data association scheme. In this scheme, each tracked trajectory is called a tracklet and is assigned a confidence score computed based on how well the associated detections match the tracklet. The first association stage establishes the correspondence between high-confidence tracklets and detections. The second stage matches the left-over detections with the low-confidence tracklets, and also links low-confidence tracklets to high-confidence ones if they meet a certain criterion.

In this paper, we make two contributions:

  • Our main contribution is the adaptation of an image-based tracking method to the 3D setting. In detail, we exploit a kinematically feasible motion model, which is unavailable in 2D, to facilitate object pose prediction. This model in turn defines the minimal state vector that needs to be tracked.

  • Extensive experiments carried out on various datasets prove the effectiveness of our approach. In fact, our better performance compared to AB3DMOT-style models shows that adding a certain degree of re-identification can improve tracking performance while keeping the added complexity to a minimum.

2 Related work

A multi-object tracking system in the track-by-detection paradigm consists of an object detection model, a data association algorithm, and a filtering method. While the last two components are domain agnostic, object detection models, especially learning-based ones, are tailored to their operating domain (e.g. images or point clouds). This paper targets autonomous driving, where object poses are required, and is therefore interested in 3D object detection models. However, developing such a model is not within the scope of this paper; instead, we use the detection results provided by the baseline models of the benchmarks (e.g. PointPillars for NuScenes) to focus on the data association algorithm and to allow a fair comparison. Interested readers are referred to [1] for a review of 3D object detection.

Data association via the Hungarian algorithm was explored early in [12], where a two-stage tracking scheme was proposed for offline 2D tracking. First, detections are linked frame-by-frame to form tracklets. The affinity matrix of the Hungarian algorithm is built from geometric and appearance cues: the geometric cue is the 2D Intersection over Union (IoU), while the appearance cue is the correlation between two bounding boxes. Second, tracklets are associated with each other to compensate for trajectory fragmentation and ID switches due to occlusion. This association is also carried out by the Hungarian algorithm.

Due to its batch-processing nature, [12] cannot be applied to online tracking. [5] overcomes this by eliminating the second stage and letting objects which temporarily leave the sensor's field of view reenter with new IDs. Despite its simplicity, SORT, the method proposed by [5], achieves competitive results on MOT15 [15] at lightning-fast inference speed (260 Hz). The success of SORT inspired [26] to adapt it to the 3D setting by using 3D IoU as the affinity function. The performance of SORT in the 3D setting is later improved by [8], which shows that the Mahalanobis distance is superior to 3D IoU. [20] integrates the 3D version of SORT into a complete perception pipeline for autonomous vehicles.

The two-stage association scheme was adapted to online tracking in [2], which proposes a confidence score to quantify tracklet quality. Based on this score, tracklets are associated with detections or other tracklets, or terminated. The appearance model learned by ILDA in [2] is improved with deep learning in the follow-up work [3]. Recently, this association scheme was revisited in the context of image-based pedestrian tracking by [27], which proposed using the rank of the Hankel matrix as the tracklets' motion affinity.

Differing from [2] and its related works, this paper applies the two-stage association scheme to online 3D tracking. In addition, we provide competitive results despite relying solely on geometric cues to compute tracklet affinity, thanks to the Constant Turning Rate and Velocity (CTRV) motion model, which can accurately predict objects' positions in 3D space by exploiting their kinematics.

3 Method

3.1 Problem Formulation

Online multi-object tracking (MOT) in the track-by-detection sense aims to gradually grow a set of tracklets by establishing correspondences with the set of detections received at every time step and updating the tracklets' states accordingly. A tracklet $\mathcal{T}^i = \{\mathbf{x}^i_t\}_{t=t^i_s}^{t^i_e}$ is a collection of state vectors corresponding to the same object $i$; here $t^i_s$ and $t^i_e$ are respectively the starting and ending time of the tracklet. A detection $d^j_t$ at time step $t$ encapsulates the information of a 3D bounding box, including the position of its center in a common reference frame $(x, y, z)$, heading $\theta$, and size $(w, l, h)$. It is worth noting that in the context of autonomous driving, objects are assumed to remain in contact with the ground; therefore, their detections are upright bounding boxes whose orientation is described by a single number, the heading angle.
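To make this notation concrete, the sketch below shows one possible in-code representation of detections and tracklets. The class and field names (`Detection`, `Tracklet`, etc.) are illustrative choices for this paper's data, not identifiers from the original implementation.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Detection:
    """A 3D upright bounding box produced by the detector at one time step."""
    center: np.ndarray   # (x, y, z) in the common (world) reference frame
    heading: float       # yaw angle around the vertical axis, in radians
    size: np.ndarray     # (width, length, height)
    score: float         # detection confidence from the object detector

@dataclass
class Tracklet:
    """A collection of state vectors belonging to the same object."""
    track_id: int
    states: List[np.ndarray] = field(default_factory=list)  # one state per time step
    t_start: int = 0     # time step at which the tracklet was born
    t_end: int = 0       # time step of the most recent state
```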

The correspondence between the tracklets $\mathcal{T}_{1:t}$ and the detections $\mathcal{D}_{1:t}$ can be formally defined as finding the set $\hat{\mathcal{T}}_{1:t}$ that maximizes its likelihood given $\mathcal{D}_{1:t}$:

$$\hat{\mathcal{T}}_{1:t} = \operatorname*{arg\,max}_{\mathcal{T}_{1:t}} \; p\left(\mathcal{T}_{1:t} \mid \mathcal{D}_{1:t}\right) \qquad (1)$$

Due to the exponential growth of the number of possible associations between $\mathcal{T}_{1:t}$ and $\mathcal{D}_{1:t}$, Equation (1) becomes computationally intractable after a few time steps. In this paper, such a correspondence is approximated by the two-stage data association proposed by [2], as described in the following.

3.2 Two-stage Data Association

Tracklet Confidence Score

The reliability of a tracklet is quantified by a confidence score which is computed based on how well the associated detections match its states across its life span and how long its corresponding object has been undetected:

$$\operatorname{conf}\left(\mathcal{T}\right) = \left(\frac{1}{L}\sum_{t \in [t_s,\, t_e],\; v_t = 1} \Lambda\left(\mathcal{T}, d_t\right)\right) \times \exp\left(-\beta\, W\right) \qquad (2)$$

where $v_t$ is a binary indicator which takes the value 1 if the tracklet has a detection associated with it at time step $t$, and 0 otherwise. $L$ is the number of time steps at which the tracklet gets associated with a detection. $\Lambda$ is the affinity function, whose details are presented later. $\beta$ is a tuning parameter which takes a high value if the object detection model is accurate. $W$ is the number of time steps at which the tracklet was undetected (i.e. did not have an associated detection), counted from its birth to the current time step $t$.

Applying a threshold to this confidence score divides the set of tracklets into a subset of high-confidence tracklets and a subset of low-confidence tracklets. These two subsets are the fundamental elements of the two-stage association pipeline shown in Figure 1.
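As a rough sketch of the confidence score described above, the function below averages the affinity of the associated detections and penalizes missed time steps exponentially. It assumes the affinity is a similarity (higher means a better match); a distance-style affinity such as the Mahalanobis-based one defined later would need to be converted first (e.g. via `exp(-distance)`). The exact weighting is our reading of Equation (2), not a verbatim reimplementation.

```python
import numpy as np

def tracklet_confidence(affinities, associated_mask, beta):
    """Confidence of a tracklet given its per-step affinities.

    affinities      -- similarity Lambda between the tracklet and its associated
                       detection at each time step of its life span
    associated_mask -- binary indicator v_t (1 if a detection was associated
                       at that time step, 0 otherwise)
    beta            -- tuning parameter; larger when the detector is accurate
    """
    affinities = np.asarray(affinities, dtype=float)
    v = np.asarray(associated_mask, dtype=bool)
    L = max(int(v.sum()), 1)     # number of associated time steps
    W = int((~v).sum())          # number of undetected time steps since birth
    matched_term = affinities[v].sum() / L
    return matched_term * np.exp(-beta * W)
```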

Figure 1: The pipeline of the two-stage data association. The first stage, local association, establishes the correspondences between the detections at the current time step and the high-confidence tracklets. Then, the global association stage matches each low-confidence tracklet with either a high-confidence tracklet or a left-over detection, or terminates it.

Local Association

In this association stage, high-confidence tracklets are extended by their correspondences in the set of detections. The tracklet-to-detection correspondence is found by solving a linear assignment problem (LAP) characterized by the cost matrix

$$C^{L} \in \mathbb{R}^{N_h \times M}, \qquad C^{L}_{ij} = \Lambda\left(\mathcal{T}^{i}, d_{j}\right) \qquad (3)$$

where $N_h$ and $M$ are respectively the number of high-confidence tracklets and the number of detections. The intuition of this association stage is that because tracklets with high confidence have been tracked accurately for a number of time steps, the affinity function can identify with high accuracy whether a detection belongs to the same object as the tracklet, thus limiting the possibility of false correspondences. In addition, since low-confidence tracklets usually result from fragmented trajectories or noisy detections, excluding them from this association stage helps reduce ambiguity.
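A minimal sketch of the local association stage, assuming the affinity $\Lambda$ behaves like a cost (smaller means a better match, as with the Mahalanobis-distance-based affinity described in the Affinity Function paragraph below). It uses SciPy's Hungarian solver, although the greedy alternative described later works as well; the `gate` threshold is a hypothetical parameter used to reject implausible pairs.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def local_association(high_conf_tracklets, detections, affinity_fn, gate=10.0):
    """Match high-confidence tracklets to detections with a linear assignment.

    affinity_fn(tracklet, detection) is assumed to return a cost-like affinity
    (smaller = better match).  Pairs whose cost exceeds `gate` are rejected.
    Returns (matches, unmatched_tracklet_indices, unmatched_detection_indices).
    """
    n, m = len(high_conf_tracklets), len(detections)
    if n == 0 or m == 0:
        return [], list(range(n)), list(range(m))

    cost = np.empty((n, m))
    for i, trk in enumerate(high_conf_tracklets):
        for j, det in enumerate(detections):
            cost[i, j] = affinity_fn(trk, det)

    rows, cols = linear_sum_assignment(cost)       # Hungarian algorithm
    matches = [(int(i), int(j)) for i, j in zip(rows, cols) if cost[i, j] < gate]
    matched_t = {i for i, _ in matches}
    matched_d = {j for _, j in matches}
    unmatched_t = [i for i in range(n) if i not in matched_t]
    unmatched_d = [j for j in range(m) if j not in matched_d]
    return matches, unmatched_t, unmatched_d
```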

Global Association

As shown in Figure 1, the global association stage carries out the following tasks:

  • Matching low-confidence tracklets with high-confidence ones

  • Matching low-confidence tracklets with detections left over by the local association stage

  • Deciding whether to terminate low-confidence tracklets

These tasks are solved simultaneously as a LAP formulated by the following block cost matrix

$$C^{G} = \begin{bmatrix} A & B & C \end{bmatrix} \qquad (4)$$

whose rows correspond to the low-confidence tracklets. Here, $N_l$ and $M'$ are respectively the number of low-confidence tracklets and the number of detections left over by the local association stage; recall that $N_h$ is the number of high-confidence tracklets. The submatrix $A \in \mathbb{R}^{N_l \times N_h}$ is the cost matrix of the event where low-confidence tracklets are matched with high-confidence ones:

$$A_{ij} = \Lambda\left(\mathcal{T}^{i}_{l},\, \mathcal{T}^{j}_{h}\right) \qquad (5)$$

The submatrix $B \in \mathbb{R}^{N_l \times N_l}$ represents the event where low-confidence tracklets are terminated; only its diagonal entries, which carry the termination cost, correspond to feasible assignments:

$$B_{ij} = \begin{cases} c_{term} & \text{if } i = j \\ \infty & \text{otherwise} \end{cases} \qquad (6)$$

Finally, the submatrix $C \in \mathbb{R}^{N_l \times M'}$ is the cost of associating low-confidence tracklets with detections left over by the local association stage:

$$C_{ij} = \Lambda\left(\mathcal{T}^{i}_{l},\, d_{j}\right) \qquad (7)$$
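A sketch of how such a global cost matrix could be assembled. The horizontal block layout, the diagonal termination cost, and the use of a large constant to forbid infeasible pairings follow our reconstruction above and are assumptions rather than details taken from the original work.

```python
import numpy as np

INF = 1e9  # large constant that effectively forbids an assignment

def build_global_cost(low_trks, high_trks, leftover_dets,
                      trk_trk_affinity, trk_det_affinity, term_cost):
    """Assemble the global-association cost matrix.

    Rows: low-confidence tracklets.
    Columns: [high-confidence tracklets | termination slots | left-over detections].
    """
    n_lo, n_hi, m = len(low_trks), len(high_trks), len(leftover_dets)

    # A: cost of linking a low-confidence tracklet to a high-confidence one.
    A = np.array([[trk_trk_affinity(lo, hi) for hi in high_trks]
                  for lo in low_trks]).reshape(n_lo, n_hi)

    # B: termination of a low-confidence tracklet; only the diagonal is allowed.
    B = np.full((n_lo, n_lo), INF)
    np.fill_diagonal(B, term_cost)

    # C: cost of linking a low-confidence tracklet to a left-over detection.
    C = np.array([[trk_det_affinity(lo, det) for det in leftover_dets]
                  for lo in low_trks]).reshape(n_lo, m)

    return np.hstack([A, B, C])   # shape: (n_lo, n_hi + n_lo + m)
```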

The solution to the LAP in this stage and in the local association stage is the association that minimizes the total cost. It can be found either by the Hungarian algorithm, for the optimal solution, or by a greedy algorithm which iteratively picks and removes the correspondence pair with the smallest cost until no pair has a cost less than a threshold (the details of this greedy algorithm can be found in [8]).
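For reference, a sketch of the greedy alternative in the spirit of the procedure described in [8]: repeatedly take the cheapest remaining pair and remove its row and column, stopping when the best remaining pair exceeds the threshold.

```python
import numpy as np

def greedy_match(cost, threshold):
    """Greedy matching: iteratively take the lowest-cost (row, col) pair,
    remove both from consideration, and stop when the best remaining pair
    exceeds `threshold`.  Returns a list of (row, col) matches."""
    cost = np.asarray(cost, dtype=float).copy()
    matches = []
    while cost.size and cost.min() < threshold:
        i, j = np.unravel_index(np.argmin(cost), cost.shape)
        matches.append((int(i), int(j)))
        cost[i, :] = np.inf   # row i is now taken
        cost[:, j] = np.inf   # column j is now taken
    return matches
```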

Once a detection is associated with a tracklet, its position and heading are used to update the tracklet's state according to the Kalman filter equations, while its size is averaged with the tracklet's sizes over the past few time steps to produce the updated size. Detections that do not get associated in the global association stage are used to initialize new tracklets.
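The size handling described above can be sketched as follows; the window length and the plain arithmetic mean are assumptions, and the pose update itself is delegated to a standard Kalman filter correction step not shown here.

```python
import numpy as np
from collections import deque

class SizeSmoother:
    """Keeps a short history of associated detection sizes and returns their mean,
    as a stand-in for the 'average over the past few time steps' described above."""
    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def update(self, detection_size):
        self.history.append(np.asarray(detection_size, dtype=float))
        return np.mean(self.history, axis=0)   # smoothed (w, l, h)
```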

Affinity Function

The affinity function computes how similar a detection is to a tracklet, or one tracklet to another. As mentioned earlier, due to the lack of color and texture in point clouds, the affinity function used in this work is comprised of geometric cues only. Specifically, it is the sum of a position affinity and a size affinity:

$$\Lambda\left(\cdot, \cdot\right) = \Lambda_{pos}\left(\cdot, \cdot\right) + \Lambda_{size}\left(\cdot, \cdot\right) \qquad (8)$$

The scheme for computing the position affinity between a tracklet and a detection, or between two tracklets, is shown in Figure 2.

Figure 2: The computational scheme of position affinity. The filled triangles (or rectangles) are subsequent states of a tracklet. The colored arrow represents the time order: the closer to the tip, the more recent the state. The triangle (or rectangle) in dashed lines is the state propagated forward (or backward) in time. The covariances of these propagated states are denoted by ellipses of the same color. The two-headed arrows indicate the Mahalanobis distance. In subfigure (a), the blue circle denotes a detection.

As shown in Figure 2a, the position affinity between a tracklet and a detection is defined as the Mahalanobis distance between the tracklet's last state propagated to the current time step and the measurement vector extracted from the detection:

$$\Lambda_{pos}\left(\mathcal{T}, d\right) = \sqrt{\left(z_d - h\left(\hat{\mathbf{x}}\right)\right)^{T} S^{-1} \left(z_d - h\left(\hat{\mathbf{x}}\right)\right)} \qquad (9)$$

where $\hat{\mathbf{x}}$ is the last state of the tracklet propagated to the current time step using the motion model presented below, $h(\cdot)$ is the measurement model computing the expected measurement from the input state, and $z_d$ is the measurement vector extracted from the detection $d$. The matrix $S$ is the covariance matrix of the innovation (i.e. the difference between the expected measurement and its true value):

$$S = H P H^{T} + R \qquad (10)$$

where $H$ is the Jacobian of the measurement model, and $P$ and $R$ are the covariance matrices of $\hat{\mathbf{x}}$ and $z_d$, respectively.
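A sketch of Equations (9) and (10) for the tracklet-to-detection case, assuming the propagated state covariance P, the measurement noise covariance R, and the measurement Jacobian H are available from the Kalman filter.

```python
import numpy as np

def position_affinity(z_det, z_pred, H, P, R):
    """Mahalanobis distance between a detection measurement and the expected
    measurement of a tracklet propagated to the current time step.

    z_det  -- measurement vector extracted from the detection (e.g. x, y, z, heading)
    z_pred -- expected measurement h(x_hat) of the propagated state
    H      -- Jacobian of the measurement model at the propagated state
    P      -- covariance of the propagated state
    R      -- covariance of the measurement noise
    """
    innovation = z_det - z_pred
    S = H @ P @ H.T + R                     # innovation covariance, Eq. (10)
    return float(np.sqrt(innovation @ np.linalg.inv(S) @ innovation))
```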

In the case of two tracklets $\mathcal{T}^1$ and $\mathcal{T}^2$, assuming $\mathcal{T}^2$ starts after $\mathcal{T}^1$ ends, their motion affinity is, according to Figure 2b, the sum of

  • the Mahalanobis distance between the last state of $\mathcal{T}^1$ propagated forward in time and the first state of $\mathcal{T}^2$, and

  • the Mahalanobis distance between the first state of $\mathcal{T}^2$ propagated backward in time and the last state of $\mathcal{T}^1$.

$$\Lambda_{pos}\left(\mathcal{T}^1, \mathcal{T}^2\right) = d_M\left(\hat{\mathbf{x}}^{1 \rightarrow},\, \mathbf{x}^{2}_{t_s}\right) + d_M\left(\hat{\mathbf{x}}^{2 \leftarrow},\, \mathbf{x}^{1}_{t_e}\right) \qquad (11)$$

where $\hat{\mathbf{x}}^{1 \rightarrow}$ is the last state of tracklet $\mathcal{T}^1$ propagated forward in time to the first time step of tracklet $\mathcal{T}^2$, $\hat{\mathbf{x}}^{2 \leftarrow}$ is the first state of tracklet $\mathcal{T}^2$ propagated backward in time to the last time step of tracklet $\mathcal{T}^1$, and $d_M(\cdot, \cdot)$ denotes the Mahalanobis distance computed as in Equation (9).

The size affinity is computed as follows:

(12)

here, the sizes (width, length, height) in the first set of terms are those of the last state of the tracklet, while the second set are the sizes of the detection. In the case of two tracklets $\mathcal{T}^1$ and $\mathcal{T}^2$, assuming $\mathcal{T}^2$ starts after $\mathcal{T}^1$ ends, their size affinity is

(13)

The subscripts in Equation (13) respectively denote the ending and starting state of a tracklet.

3.3 Motion Model and State Vector

Exploiting the fact that objects are tracked in the 3D space of a common static reference frame, which can be referred to as the world frame, the motion of objects can be described by kinematically more accurate models than the commonly used Constant Velocity (CV) model. In this work, we use the Constant Turning Rate and Velocity (CTRV) model to predict the motion of car-like vehicles (e.g. cars, buses, trucks), while keeping the CV model for pedestrians.

For a car-like vehicle, the state can be described by

$$\mathbf{x} = \left[x,\, y,\, z,\, \theta,\, v,\, \dot{z},\, \dot{\theta}\right]^{T} \qquad (14)$$

where $(x, y, z)$ is the location in the world frame of the center of the bounding box represented by the state vector, $\theta$ is the heading angle, $v$ is the longitudinal velocity (i.e. the velocity along the heading direction), and $\dot{z}$ and $\dot{\theta}$ are respectively the velocities of $z$ and $\theta$.

The motion on the x-y plane of a car-like vehicle can be predicted using CTRV as follows:

$$\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ \theta_{k+1} \end{bmatrix} = \begin{bmatrix} x_k + \frac{v_k}{\dot{\theta}_k}\left(\sin\left(\theta_k + \dot{\theta}_k \Delta t\right) - \sin\theta_k\right) \\ y_k + \frac{v_k}{\dot{\theta}_k}\left(-\cos\left(\theta_k + \dot{\theta}_k \Delta t\right) + \cos\theta_k\right) \\ \theta_k + \dot{\theta}_k \Delta t \end{bmatrix} \qquad (15)$$

where $\Delta t$ is the sampling time. Note that in Equation (15), $z$ is assumed to evolve with constant velocity. In the case of zero turning rate (i.e. $\dot{\theta} = 0$),

$$\begin{bmatrix} x_{k+1} \\ y_{k+1} \end{bmatrix} = \begin{bmatrix} x_k + v_k \cos\left(\theta_k\right) \Delta t \\ y_k + v_k \sin\left(\theta_k\right) \Delta t \end{bmatrix} \qquad (16)$$
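A sketch of the CTRV prediction of Equations (15) and (16), using the state layout assumed in Equation (14); the small-turning-rate test value `eps` is an implementation detail of this sketch.

```python
import numpy as np

def ctrv_predict(state, dt, eps=1e-4):
    """Propagate a car-like state [x, y, z, theta, v, vz, omega] by dt seconds
    with the Constant Turning Rate and Velocity model."""
    x, y, z, theta, v, vz, omega = state
    if abs(omega) > eps:
        # Eq. (15): motion along a circular arc with turning rate omega.
        x_new = x + (v / omega) * (np.sin(theta + omega * dt) - np.sin(theta))
        y_new = y + (v / omega) * (-np.cos(theta + omega * dt) + np.cos(theta))
    else:
        # Eq. (16): zero turning rate degenerates to straight-line motion.
        x_new = x + v * np.cos(theta) * dt
        y_new = y + v * np.sin(theta) * dt
    # z evolves with constant velocity; v, vz, omega stay constant.
    return np.array([x_new, y_new, z + vz * dt, theta + omega * dt, v, vz, omega])
```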

The state vector of a pedestrian is

$$\mathbf{x} = \left[x,\, y,\, z,\, \dot{x},\, \dot{y},\, \dot{z}\right]^{T} \qquad (17)$$

The motion of pedestrians is predicted according to the CV model:

$$\begin{bmatrix} x_{k+1} \\ y_{k+1} \\ z_{k+1} \end{bmatrix} = \begin{bmatrix} x_k + \dot{x}_k \Delta t \\ y_k + \dot{y}_k \Delta t \\ z_k + \dot{z}_k \Delta t \end{bmatrix} \qquad (18)$$
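For completeness, the corresponding Constant Velocity prediction for pedestrians, under the state layout assumed in Equation (17):

```python
import numpy as np

def cv_predict(state, dt):
    """Propagate a pedestrian state [x, y, z, vx, vy, vz] by dt seconds with the
    Constant Velocity model (Eq. (18)): positions advance, velocities stay fixed."""
    x, y, z, vx, vy, vz = state
    return np.array([x + vx * dt, y + vy * dt, z + vz * dt, vx, vy, vz])
```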

4 Experiments

The effectiveness of our method is demonstrated by benchmarking it against the SORT-style baseline model on three large-scale datasets: KITTI, NuScenes, and Waymo. In addition, we perform an ablation study on the NuScenes dataset to better understand the impact of each component on the system's overall performance.

4.1 Tracking Results

Evaluation Metrics: Classically, MOT systems are evaluated with the CLEAR MOT metrics [4]. As pointed out by [16] and later by [26], there is a linear relation between MOTA and the object detector's recall rate; as a result, MOTA does not provide a well-rounded evaluation of tracker performance. To remedy this, [26] proposes averaging MOTA and MOTP over a range of recall rates, resulting in two integral metrics, AMOTA and AMOTP, which have become the norm in recent benchmarks.

Datasets: To verify the effectiveness of our method, we benchmark it on three popular autonomous driving datasets which offer a 3D MOT benchmark: KITTI, NuScenes, and Waymo. These datasets are collections of driving sequences recorded in various environments using a multi-modal sensor suite including LiDAR. The KITTI tracking benchmark is interested in two object classes: cars and pedestrians. Initially, KITTI tracking was designed for MOT in 2D images; recently [26] adapted it to 3D MOT. NuScenes concerns a larger set of objects comprising cars, bicycles, buses, trucks, pedestrians, motorcycles, and trailers. Waymo shares the same interest as NuScenes but groups car-like vehicles into a single meta class.

Public Detections: As can be seen in Table 1, AMOTA depends strongly on the precision of the object detector. Therefore, to have a fair comparison, the baseline detection results made publicly available by the benchmarks are used as the input to our tracking system. Specifically, we use the MEGVII detections for NuScenes and the PointPillars-with-PPBA detections for Waymo.

The performance of our model compared to the SORT-style baseline models on these benchmarks is shown in Table 2. As can be seen, our model consistently outperforms the baselines in terms of the primary metric, AMOTA. The main reason for this is our lower number of ID switches and trajectory fragments, which shows a better ability to establish track-to-detection correspondences compared to SORT-style algorithms.

| Dataset | Method | AMOTA | AMOTP | MT | ML | FP | FN | IDS | FRAG |
|---|---|---|---|---|---|---|---|---|---|
| KITTI | Ours | 0.415 | 0.691 | N/A | N/A | 766 | 3721 | 10 | 259 |
| KITTI | AB3DMOT [26] | 0.377 | 0.648 | N/A | N/A | 696 | 3713 | 1 | 93 |
| NuScenes | Ours | 0.583 | 0.748 | 3617 | 1885 | 13439 | 28119 | 512 | 511 |
| NuScenes | StanfordIPRL-TRI [8] | 0.561 | 0.800 | 3432 | 1857 | 12140 | 28387 | 679 | 606 |
Table 2: Quantitative performance of our model on the KITTI, NuScenes, and Waymo validation sets.

4.2 Ablation Study

In this ablation study, the default method is the method presented in Section 3, which has:

  • Two stages of data association (local and global). Each stage is formulated as a LAP and solved by a greedy matching algorithm [8].

  • The affinity function is the sum of the position affinity and the size affinity (as in Equation (8)).

  • The motion model is Constant Turning Rate and Velocity (CTRV) for car-like objects (cars, buses, trucks, trailers, bicycles) and Constant Velocity (CV) for pedestrians.

To understand the effect of each component on the system's overall performance, we modify or remove each of them and carry out experiments with the rest of the system kept the same as the default method, using the same hyperparameters. The changes and the resulting performance are shown in Table 3.

| Method | AMOTA | AMOTP | MT | ML | FP | FN | IDS | FRAG |
|---|---|---|---|---|---|---|---|---|
| Default | 0.583 | 0.748 | 3617 | 1885 | 13439 | 28119 | 512 | 511 |
| Hungarian for LAP | 0.587 | 0.743 | 3609 | 1880 | 13667 | 28070 | 596 | 573 |
| No ReID | 0.583 | 0.748 | 3616 | 1882 | 13429 | 28100 | 504 | 510 |
| Global assoc. only | 0.327 | 0.924 | 2575 | 2244 | 26244 | 38315 | 4215 | 3038 |
| Constant Velocity only | 0.567 | 0.781 | 3483 | 1966 | 12649 | 29427 | 718 | 606 |
| No size affinity | 0.581 | 0.748 | 3595 | 1904 | 13423 | 28448 | 512 | 508 |
Table 3: Ablation study on the NuScenes dataset.

References

  1. E. Arnold, O. Y. Al-Jarrah, M. Dianati, S. Fallah, D. Oxtoby and A. Mouzakitis (2019) A survey on 3D object detection methods for autonomous driving applications. IEEE Transactions on Intelligent Transportation Systems 20 (10), pp. 3782–3795.
  2. S. Bae and K. Yoon (2014) Robust online multi-object tracking based on tracklet confidence and online discriminative appearance learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1218–1225.
  3. S. Bae and K. Yoon (2017) Confidence-based data association and discriminative deep appearance learning for robust online multi-object tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (3), pp. 595–610.
  4. K. Bernardin and R. Stiefelhagen (2008) Evaluating multiple object tracking performance: the CLEAR MOT metrics. EURASIP Journal on Image and Video Processing 2008, pp. 1–10.
  5. A. Bewley, Z. Ge, L. Ott, F. Ramos and B. Upcroft (2016) Simple online and realtime tracking. In 2016 IEEE International Conference on Image Processing (ICIP), pp. 3464–3468.
  6. H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan and O. Beijbom (2020) nuScenes: a multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11621–11631.
  7. S. Cheng, Z. Leng, E. D. Cubuk, B. Zoph, C. Bai, J. Ngiam, Y. Song, B. Caine, V. Vasudevan and C. Li (2020) Improving 3D object detection through progressive population based augmentation. arXiv preprint arXiv:2004.00831.
  8. H. Chiu, A. Prioletti, J. Li and J. Bohg (2020) Probabilistic 3D multi-object tracking for autonomous driving. arXiv preprint arXiv:2001.05673.
  9. Z. Ding, Y. Hu, R. Ge, L. Huang, S. Chen, Y. Wang and J. Liao (2020) 1st place solution for Waymo Open Dataset challenge – 3D detection and domain adaptation. arXiv preprint arXiv:2006.15505.
  10. A. F. Garcia-Fernandez, J. L. Williams, K. Granstrom and L. Svensson (2018) Poisson multi-Bernoulli mixture filter: direct derivation and implementation. IEEE Transactions on Aerospace and Electronic Systems 54 (4), pp. 1883–1901.
  11. R. Ge, Z. Ding, Y. Hu, Y. Wang, S. Chen, L. Huang and Y. Li (2020) AFDet: anchor free one stage 3D object detection. arXiv preprint arXiv:2006.12671.
  12. A. Geiger, M. Lauer, C. Wojek, C. Stiller and R. Urtasun (2013) 3D traffic scene understanding from movable platforms. IEEE Transactions on Pattern Analysis and Machine Intelligence 36 (5), pp. 1012–1025.
  13. A. Geiger, P. Lenz, C. Stiller and R. Urtasun (2013) Vision meets robotics: the KITTI dataset. The International Journal of Robotics Research 32 (11), pp. 1231–1237.
  14. H. W. Kuhn (1955) The Hungarian method for the assignment problem. Naval Research Logistics Quarterly 2 (1-2), pp. 83–97.
  15. L. Leal-Taixé, A. Milan, I. Reid, S. Roth and K. Schindler (2015) MOTChallenge 2015: towards a benchmark for multi-target tracking. arXiv preprint arXiv:1504.01942.
  16. L. Leal-Taixé, A. Milan, K. Schindler, D. Cremers, I. Reid and S. Roth (2017) Tracking the trackers: an analysis of the state of the art in multiple object tracking. arXiv preprint arXiv:1704.02781.
  17. M. Liang, B. Yang, W. Zeng, Y. Chen, R. Hu, S. Casas and R. Urtasun (2020) PnPNet: end-to-end perception and prediction with tracking in the loop. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11553–11562.
  18. W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Fu and A. C. Berg (2016) SSD: single shot multibox detector. In European Conference on Computer Vision, pp. 21–37.
  19. W. Luo, B. Yang and R. Urtasun (2018) Fast and furious: real time end-to-end 3D detection, tracking and motion forecasting with a single convolutional net. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3569–3577.
  20. A. Mauri, R. Khemmar, B. Decoux, N. Ragot, R. Rossi, R. Trabelsi, R. Boutteau, J. Ertaud and X. Savatier (2020) Deep learning for real-time 3D multi-object detection, localisation, and tracking: application to smart mobility. Sensors 20 (2), pp. 532.
  21. J. Redmon, S. Divvala, R. Girshick and A. Farhadi (2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788.
  22. S. Ren, K. He, R. Girshick and J. Sun (2016) Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (6), pp. 1137–1149.
  23. S. Scheidegger, J. Benjaminsson, E. Rosenberg, A. Krishnan and K. Granström (2018) Mono-camera 3D multi-object tracking using deep learning detections and PMBM filtering. In 2018 IEEE Intelligent Vehicles Symposium (IV), pp. 433–440.
  24. S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang and H. Li (2020) PV-RCNN: point-voxel feature set abstraction for 3D object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10529–10538.
  25. P. Sun, H. Kretzschmar, X. Dotiwalla, A. Chouard, V. Patnaik, P. Tsui, J. Guo, Y. Zhou, Y. Chai and B. Caine (2020) Scalability in perception for autonomous driving: Waymo Open Dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2446–2454.
  26. X. Weng, J. Wang, D. Held and K. Kitani (2020) AB3DMOT: a baseline for 3D multi-object tracking and new evaluation metrics. arXiv preprint arXiv:2008.08063.
  27. H. Yang, J. Wen, X. Wu, L. He and S. Mumtaz (2019) An efficient edge artificial intelligence multi-pedestrian tracking method with rank constraint. IEEE Transactions on Industrial Informatics 15 (7), pp. 4178–4188.
  28. T. Yin, X. Zhou and P. Krähenbühl (2020) Center-based 3D object detection and tracking. arXiv preprint arXiv:2006.11275.
  29. D. Zhou, J. Fang, X. Song, C. Guan, J. Yin, Y. Dai and R. Yang (2019) IoU loss for 2D/3D object detection. In 2019 International Conference on 3D Vision (3DV), pp. 85–94.
  30. X. Zhou, D. Wang and P. Krähenbühl (2019) Objects as points. arXiv preprint arXiv:1904.07850.
  31. B. Zhu, Z. Jiang, X. Zhou, Z. Li and G. Yu (2019) Class-balanced grouping and sampling for point cloud 3D object detection. arXiv preprint arXiv:1908.09492.