BayesOD: A Bayesian Approach for Uncertainty Estimation in Deep Object Detectors
Abstract
When incorporating deep neural networks (DNNs) into robotic systems, a major challenge is the lack of uncertainty measures associated with their output predictions. Methods for uncertainty estimation in the output of deep object detectors have been proposed in recent works, but have had limited success due to 1) information loss at the detector's non-maximum suppression (NMS) stage, and 2) failure to take into account the multi-task, many-to-one nature of anchor-based object detection. To that end, we introduce BayesOD, an uncertainty estimation approach that reformulates the standard object detector inference and non-maximum suppression components from a Bayesian perspective. Experiments performed on four common object detection datasets show that BayesOD provides uncertainty estimates that are better correlated with the accuracy of detections, manifesting as a significant reduction of 9.77%-13.13% on the minimum Gaussian uncertainty error metric and a reduction of 1.63%-5.23% on the minimum Categorical uncertainty error metric. Code will be released at https://github.com/asharakeh/bayesodrc.
I Introduction
Due to their high level of performance, deep object detectors have become standard components of perception stacks for safety-critical tasks such as autonomous driving [1, 2, 3] and automated surveillance [4]. Therefore, quantifying how trustworthy these detectors are for subsequent modules, especially in safety-critical systems, is of utmost importance. To encode the level of confidence in an estimate, a meaningful and consistent measure of uncertainty should be provided for every detection instance (see Fig. 1).
Two important goals must be met to create a meaningful uncertainty measure. First, the robotic system should be capable of using the uncertainty measure to fuse an object detector's output with prior information from different sources [5], to connect sequences of detections over time, and to increase detection and tracking performance as a result. Second, and most importantly, the robotic system should be able to use its own estimates of detection uncertainty to reliably identify incorrect detections, including those resulting from out-of-distribution instances, where object categories, scenarios, textures, or environmental conditions have not been seen during the training phase [5].
Two sources of uncertainty can be identified in any machine learning model. Epistemic or model uncertainty is the uncertainty in the model’s parameters, usually as a result of the confusion about which model generated the training data, and can be explained away given enough representative training data points [6]. Aleatoric or observation uncertainty results from the stochastic nature of the observed input, and persists in network output despite expanded training on additional data [7].
Methods to estimate both uncertainty types in DNNs have been recently proposed by Kendall and Gal [7], with applications to pixel-wise perception tasks. Recent methods [9, 10, 11, 12, 13, 14, 15] extended this work [7] to object detection, but fail to consider the multi-task, many-to-one nature of the object detection task. To that end, we introduce BayesOD, a framework designed to estimate the uncertainty in both the bounding box and the category of detected object instances. This paper offers the following contributions:

We provide a Bayesian treatment for every step of the neural network inference procedure, allowing the incorporation of anchor-level and object-level priors in closed form.

We replace standard non-maximum suppression (NMS) with Bayesian inference, allowing the detector to retain all predicted information for both the bounding box and the category of a detected object instance.

We perform comprehensive experiments to quantify the quality of the estimated uncertainty on four commonly used 2D object detection datasets: COCO, Pascal VOC, Berkeley Deep Drive (BDD), and KITTI. We show that BayesOD provides a significant reduction of 9.77%-13.13% on the minimum Gaussian uncertainty error metric, a reduction of 1.63%-5.23% on the minimum Categorical uncertainty error metric, and an increase on the probabilistic detection quality metric over the next best method from the current state of the art.
II Related Work
II-A Deep Neural Networks For Object Detection
The object detection problem requires the estimation of both the category to which an object belongs, and its spatial location and extent, often expressed as the tightest fitting bounding box. The majority of state-of-the-art object detectors in 2D [17] or in 3D [1, 2, 3] follow a standard algorithm, which maps a scene representation to object instances. Since the number of object instances in the scene is usually unknown a priori, the procedure begins with a densely sampled grid of prior object bounding boxes, referred to as anchors [18, 19], where the object detector provides a category and a bounding box estimate for each anchor element. Since multiple anchors can be mapped to a single bounding box in space, redundant outputs are eliminated through non-maximum suppression. BayesOD builds on the RetinaNet 2D object detector [19].
II-B Uncertainty Estimation In Deep Object Detectors
To account for epistemic uncertainty, Bayesian Neural Networks [20] usually apply a prior distribution $p(\mathbf{W})$ over their parameters $\mathbf{W}$ to compute a posterior distribution $p(\mathbf{W}|\mathcal{D})$ over the set of all possible parameters given the training dataset $\mathcal{D}$. A marginal distribution can then be computed for any prediction as:
(1) $p(\mathbf{y}|\mathbf{x}, \mathcal{D}) = \int p(\mathbf{y}|\mathbf{x}, \mathbf{W})\, p(\mathbf{W}|\mathcal{D})\, d\mathbf{W}$
where $\mathbf{x}$ is the input, and $\mathbf{y}$ is the output of the neural network. Unfortunately, the calculation of the integral in Eq. (1) is usually intractable due to the nonlinear activation functions between consecutive layers [21]. Tractable approximations can be derived through Monte Carlo integration by using ensemble methods [22] or Monte Carlo (MC) Dropout [6].
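The Monte Carlo approximation of Eq. (1) can be sketched with a toy one-parameter "network": each stochastic forward pass acts like a draw of weights from $p(\mathbf{W}|\mathcal{D})$, and the samples are summarized by their mean and variance. The regressor below is a hypothetical stand-in, not the detector used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, rng):
    # Stand-in for one forward pass with dropout left active at test
    # time; a real detector head would be evaluated here instead.
    w = 2.0 + 0.1 * rng.standard_normal()  # acts like a draw W ~ p(W|D)
    return w * x

def mc_dropout_marginal(x, T=2000, rng=rng):
    # Monte Carlo estimate of the intractable integral in Eq. (1):
    # run T stochastic passes and summarize them with mean/variance.
    samples = np.array([stochastic_forward(x, rng) for _ in range(T)])
    return samples.mean(), samples.var()

mean, var = mc_dropout_marginal(3.0)
```

With the toy weight noise above, the marginal mean concentrates near 2.0 x 3.0 = 6.0 as T grows.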
To estimate the epistemic uncertainty in the output of deep object detectors, Miller et al. [9] directly apply MC Dropout, treating the deep object detector as a black box. Uncertainty is then estimated as sample statistics from spatially correlated detector outputs. Subsequent work [10] studied the effect of various correlation and merging algorithms on the quality of the estimated uncertainty measures from the black box method in [9]. The black box method is shown to provide weakly correlated estimates for bounding box uncertainty, mainly because it observes the output bounding box after NMS, where most of the information from redundant predictions has already been removed.
Kendall and Gal [7] provide one of the first works to address the estimation of aleatoric uncertainty for computer vision tasks. For regression tasks, a log likelihood loss is used to estimate heteroscedastic aleatoric uncertainty, written for every regression target as:
(2) $L(\boldsymbol{\theta}) = \frac{\|\mathbf{y} - f_{\boldsymbol{\theta}}(\mathbf{x})\|^2}{2\sigma(\mathbf{x})^2} + \frac{1}{2}\log \sigma(\mathbf{x})^2$
where $\mathbf{x}$ is the input to, and $f_{\boldsymbol{\theta}}(\mathbf{x})$ is the output from, the neural network. Furthermore, $\mathbf{y}$ is the ground truth regression target, $\|\cdot\|$ is the $L_2$ norm, $\boldsymbol{\theta}$ are the neural network parameters, and $\sigma(\mathbf{x})^2$ is the estimated output variance.
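The per-target loss of Eq. (2) is small enough to write out directly. A common trick (an implementation choice, not stated in the text) is to have the network predict $\log \sigma^2$ so the variance stays positive:

```python
import numpy as np

def heteroscedastic_nll(y, y_hat, log_var):
    # Loss of Eq. (2) for a single regression target. Predicting
    # log(sigma^2) instead of sigma^2 keeps the variance positive and
    # the exponent numerically well-behaved.
    return 0.5 * np.exp(-log_var) * (y - y_hat) ** 2 + 0.5 * log_var
```

At a residual $r$, the loss is minimized when the predicted variance equals $r^2$, which is what lets the network learn larger variances for noisier targets.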
Le et al. [15] directly apply the formulation in Eq. (2) to estimate the diagonal elements of the covariance matrix of the bounding box output from object detectors. Such methods are referred to as sampling-free, and require only a single run of the deep object detector to estimate uncertainty. The estimated variance in Eq. (2) has also been used in [11, 12, 14] to increase average precision by incorporating it in the non-maximum suppression stage, while disregarding the quality of the output uncertainty. The proposed sampling-free methods assume a diagonal covariance matrix and still use NMS to eliminate low scoring predictions, reducing the quality of their estimated uncertainty for both objects' bounding boxes and categories.
Le et al. [15] also estimate aleatoric uncertainty in deep object detectors by exploiting anchor redundancy, where multiple per-anchor predictions map to the same object. These predictions are clustered using spatial affinity before NMS, and uncertainty measures are estimated using the cluster associated with every output prediction. Finally, a straightforward extension of [7] is typically used to perform joint estimation of epistemic and aleatoric uncertainty in deep object detectors [13, 23], while still employing NMS to eliminate rather than fuse information from redundant anchors.
Unlike existing methods, BayesOD replaces NMS with Bayesian inference, significantly improving the quality of its uncertainty estimates. In addition, BayesOD is the first method to tackle fusion of the category from redundant output anchors, as well as to provide a multivariate extension of Eq. (2) to estimate the aleatoric uncertainty of objects' bounding boxes.
III A Bayesian Formulation For Object Detection:
Throughout this section, the bounding box of an object, represented by its top left and bottom right corners, is denoted as $\mathbf{x}$, whereas its category, represented by a one-hot vector, is denoted as $\mathbf{s}$. The index $i$ is used to signify a variable related to the $i^{th}$ anchor in the anchor grid. Variables not indexed with $i$ represent inference output clustered over several anchors. Finally, predictions provided by the neural network are denoted with a $\hat{(\cdot)}$ operator.
III-A Computing The Per-Anchor Gaussian Posterior:
Computing the uncertainty in the estimated per-anchor bounding box: Following [7] and using MC-Dropout as a tractable approximation of the integral in Eq. (1), the sufficient statistics of the Gaussian marginal probability distribution describing the estimated per-anchor bounding box can be derived as:
(3) $\hat{\mathbf{x}}_i = \frac{1}{T} \sum_{t=1}^{T} \hat{\mathbf{x}}_{i,t}$
(4) $\Sigma_i^{(e)} = \frac{1}{T} \sum_{t=1}^{T} (\hat{\mathbf{x}}_{i,t} - \hat{\mathbf{x}}_i)(\hat{\mathbf{x}}_{i,t} - \hat{\mathbf{x}}_i)^{\top}$
where $T$ is the number of times MC-Dropout sampling is performed, and $\hat{\mathbf{x}}_{i,t}$ is the bounding box regression output of the neural network for the $t^{th}$ MC-Dropout run. The covariance matrix $\Sigma_i^{(e)}$ captures the epistemic uncertainty in the estimated bounding box $\hat{\mathbf{x}}_i$.
Eq. (3) is sufficient to compute the output mean of the per-anchor bounding box. However, Eq. (4) still needs to account for the aleatoric component of uncertainty, where the final per-anchor output covariance can be approximated as:
(5) $\Sigma_i = \Sigma_i^{(e)} + \frac{1}{T} \sum_{t=1}^{T} \hat{\Sigma}_{i,t}^{(a)}$, where $\hat{\Sigma}_{i,t}^{(a)}$ is the aleatoric covariance predicted at the $t^{th}$ MC-Dropout run.
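Eqs. (3)-(5) amount to a sample mean, a sample covariance, and an average of predicted aleatoric covariances. A minimal sketch, with the array shapes as our assumption about the detector's output layout:

```python
import numpy as np

def per_anchor_statistics(box_samples, aleatoric_covs):
    # box_samples: (T, 4) regressed box corners from T MC-Dropout runs.
    # aleatoric_covs: (T, 4, 4) covariances predicted at each run.
    mean = box_samples.mean(axis=0)                      # Eq. (3)
    diffs = box_samples - mean
    epistemic = diffs.T @ diffs / box_samples.shape[0]   # Eq. (4)
    total = epistemic + aleatoric_covs.mean(axis=0)      # Eq. (5)
    return mean, total
```

When the T runs agree exactly, the epistemic term vanishes and the total covariance reduces to the averaged aleatoric covariance, matching the intuition that sample spread measures model uncertainty.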
To estimate the full covariance matrix $\hat{\Sigma}_i^{(a)}$, a novel multivariate log likelihood regression loss is derived as:
(6) $L(\boldsymbol{\theta}) = \frac{1}{2}(\mathbf{y}_i - \hat{\mathbf{x}}_i)^{\top} \hat{\Sigma}_i^{(a)^{-1}} (\mathbf{y}_i - \hat{\mathbf{x}}_i) + \frac{1}{2}\log\det \hat{\Sigma}_i^{(a)}$
where $\hat{\Sigma}_i^{(a)}$ is the predicted per-anchor aleatoric covariance matrix, $\hat{\mathbf{x}}_i$ is the predicted per-anchor bounding box, and $\mathbf{y}_i$ is the associated regression target. However, the loss in Eq. (6) is found to be numerically unstable. Furthermore, there are no guarantees on the positive definiteness of the predicted covariance matrix $\hat{\Sigma}_i^{(a)}$. Using the $LDL^{\top}$ decomposition of $\hat{\Sigma}_i^{(a)}$, in conjunction with the Cauchy-Schwarz inequality, a numerically stable surrogate loss function is derived as:
(7) $L(\boldsymbol{\theta}) = \frac{1}{2}\big\| D^{-\frac{1}{2}} L^{-1} (\mathbf{y}_i - \hat{\mathbf{x}}_i) \big\|^2 + \frac{1}{2}\log\det D$
where $L$ is a lower triangular matrix with ones for its diagonal entries, and $D$ is a diagonal matrix. The loss function in Eq. (7) is a numerically stable upper bound of the one in Eq. (6), and can guarantee the positive definiteness of $\hat{\Sigma}_i^{(a)}$ by predicting positive values for the diagonal elements of $D$ through standard activation functions. The final output distributions after incorporating both epistemic and aleatoric covariance estimates are plotted as bounding boxes in the middle image of Fig. 2.
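One plausible parameterization of this $LDL^{\top}$ loss (a sketch of the idea, not necessarily the exact surrogate derived in the paper) has the network emit the strictly lower triangle of the unit-diagonal $L$ and the log-diagonal of $D$, so positive definiteness is free and no explicit determinant of the full covariance is needed:

```python
import numpy as np

def multivariate_nll(y, y_hat, l_params, log_d):
    # l_params (6 values for a 4x4 box covariance) fills the strictly
    # lower triangle of the unit-diagonal L; exp(log_d) gives the
    # positive diagonal of D, so Sigma = L D L^T is positive definite.
    k = y.shape[0]
    L = np.eye(k)
    L[np.tril_indices(k, -1)] = l_params
    r = np.linalg.solve(L, y - y_hat)  # L^{-1}(y - y_hat), a stable triangular solve
    # log det(Sigma) = sum(log_d), since det(L) = 1
    return 0.5 * np.sum(r ** 2 * np.exp(-log_d)) + 0.5 * np.sum(log_d)
```

With $L = I$ and $D = I$ the loss reduces to half the squared residual, matching the isotropic special case of Eq. (6).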
Incorporating per-anchor bounding box priors: The per-anchor bounding box prior is usually defined based on the training dataset as a Gaussian distribution $p(\mathbf{x}_i) = \mathcal{N}(\boldsymbol{\mu}_0, \Sigma_0)$. The per-anchor posterior distribution describing the bounding box can then be written as:
(8) $p(\mathbf{x}_i | \hat{\mathbf{x}}_i) \propto p(\hat{\mathbf{x}}_i | \mathbf{x}_i)\, p(\mathbf{x}_i)$
$p(\hat{\mathbf{x}}_i | \mathbf{x}_i)$ is a Gaussian likelihood function described by the sufficient statistics in Eq. (3) and Eq. (5). The sufficient statistics of the posterior can be computed through the multivariate Gaussian conjugate update as:
(9) $\Sigma_{i,\mathrm{post}} = \big(\Sigma_0^{-1} + \Sigma_i^{-1}\big)^{-1}$
(10) $\boldsymbol{\mu}_{i,\mathrm{post}} = \Sigma_{i,\mathrm{post}} \big(\Sigma_0^{-1} \boldsymbol{\mu}_0 + \Sigma_i^{-1} \hat{\mathbf{x}}_i\big)$
The choice of anchor priors depends on the application, and on whether object information is actually available a priori. Since no useful bounding box information is available from our 2D training datasets, a non-informative prior, visually shown in the left image of Fig. 2, is chosen for $p(\mathbf{x}_i)$, following [24].
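The conjugate update of Eqs. (9)-(10) is a precision-weighted average of prior and measurement. A minimal sketch:

```python
import numpy as np

def gaussian_conjugate_update(mu0, Sigma0, x_hat, Sigma_hat):
    # Multivariate Gaussian conjugate update of Eqs. (9)-(10):
    # precisions add, and the posterior mean weights each source by
    # its precision.
    P0, P = np.linalg.inv(Sigma0), np.linalg.inv(Sigma_hat)
    Sigma_post = np.linalg.inv(P0 + P)
    mu_post = Sigma_post @ (P0 @ mu0 + P @ x_hat)
    return mu_post, Sigma_post
```

With a very wide (near non-informative) prior, the posterior collapses onto the measurement, which is why the non-informative choice above leaves the network's estimate effectively untouched while still yielding a proper posterior for later fusion.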
III-B Computing The Per-Anchor Categorical Posterior:
Computing the uncertainty in the estimated per-anchor category: Since the neural network outputs the parameters of a Categorical distribution rather than one-hot categorical samples, the parameters $\hat{\mathbf{s}}_i$ of the Categorical marginal conditional probability distribution can be computed as:
(11) $\hat{s}_{i,c} = \frac{1}{T} \sum_{t=1}^{T} \mathrm{softmax}(\mathbf{z}_{i,t})_c$
where $\mathrm{softmax}(\cdot)$ is the softmax function, and $z_{i,t,c}$ is the output logit of the $c^{th}$ category, estimated at the $t^{th}$ MC-Dropout run of the neural network. No explicit treatment of the aleatoric classification uncertainty is performed, since it is already contained within the estimated parameters $\hat{\mathbf{s}}_i$ [14].
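Eq. (11) is the mean of per-run softmax outputs. A sketch, with the `(T, C)` logit layout as our assumption:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift logits for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mc_categorical_params(logits):
    # logits: (T, C) class logits from T MC-Dropout runs. Eq. (11)
    # averages the per-run softmax outputs into a single Categorical.
    return softmax(logits).mean(axis=0)
```

Because each per-run softmax is a valid probability vector, the average is one as well, so no renormalization is needed.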
Incorporating per-anchor category priors: For the object category, a Dirichlet distribution is set as a prior over the parameters $\boldsymbol{\pi}_i$ of the categorical distribution generating $\mathbf{s}_i$, instead of incorporating a prior distribution directly over the category $\mathbf{s}_i$. The posterior distribution of the categorical parameters can be written as:
(12) $p(\boldsymbol{\pi}_i | \mathbf{s}_i^{(1)}, \dots, \mathbf{s}_i^{(N)}) = \mathrm{Dir}(\boldsymbol{\alpha}_i^{+}) \propto p(\mathbf{s}_i^{(1)}, \dots, \mathbf{s}_i^{(N)} | \boldsymbol{\pi}_i)\, p(\boldsymbol{\pi}_i)$
where $\boldsymbol{\alpha}_i^{+}$ is the set of updated parameters, and $\mathbf{s}_i^{(1)}, \dots, \mathbf{s}_i^{(N)}$ are i.i.d. samples from $\mathrm{Cat}(\hat{\mathbf{s}}_i)$. Since the likelihood function is a categorical distribution, the prior distribution is chosen to be a Dirichlet distribution, allowing a Dirichlet posterior to be computed in closed form as:
(13) $\alpha_{i,c}^{+} = \alpha_{i,c} + \sum_{n=1}^{N} s_{i,c}^{(n)}$
where $s_{i,c}^{(n)}$ is the element in instance $\mathbf{s}_i^{(n)}$ corresponding to category $c$, and $\boldsymbol{\alpha}_i^{+}$ are the inferred parameters of the Dirichlet posterior distribution. The per-anchor categorical posterior distribution can be written as:
(14) $p(\mathbf{s}_i | \mathbf{s}_i^{(1)}, \dots, \mathbf{s}_i^{(N)}) = \mathrm{Cat}(\bar{\boldsymbol{\pi}}_i)$
where $\bar{\boldsymbol{\pi}}_i$ is the mean of the Dirichlet posterior distribution [24] in Eq. (13), written as $\bar{\pi}_{i,c} = \frac{\alpha_{i,c}^{+}}{\sum_{k} \alpha_{i,k}^{+}}$.
Similar to the prior used for the per-anchor bounding box, we choose a non-informative Dirichlet prior for the per-anchor category, following [24]. Although non-informative, the prior still serves an essential purpose by allowing the derivation of the Dirichlet posterior in Eq. (13), which in turn allows the fusion of information from multiple clustered categorical variables in the next section.
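The Dirichlet-Categorical update of Eq. (13) and the posterior mean are one-liners; a sketch assuming the samples arrive as one-hot rows:

```python
import numpy as np

def dirichlet_posterior(alpha_prior, s_samples):
    # Conjugate update of Eq. (13): add the per-category counts of the
    # N one-hot categorical samples to the Dirichlet prior parameters.
    return alpha_prior + s_samples.sum(axis=0)

def dirichlet_mean(alpha):
    # Mean of the Dirichlet posterior, used as the fused Categorical
    # parameters in Eq. (14).
    return alpha / alpha.sum()
```

With a uniform prior of ones, ten samples of class 0 out of three classes yield parameters (11, 1, 1) and a posterior mean of roughly (0.85, 0.08, 0.08), so the prior's pull shrinks as evidence accumulates.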
III-C Bayesian Inference as a Replacement for NMS:
Similar to NMS, BayesOD clusters per-anchor outputs from the neural network using spatial affinity. However, all elements in the cluster are then combined during inference, regardless of their classification score. Greedy clustering is chosen as it provides adequate performance when compared to standard NMS, while maintaining computational efficiency. For better performing but slower clustering algorithms, see [10].
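The difference from greedy NMS is only what happens to the overlapping boxes: they are grouped rather than discarded. A sketch of such a greedy IOU clustering step (the 0.5 threshold here is an illustrative value, not the paper's setting):

```python
import numpy as np

def iou(a, b):
    # a, b: boxes as [x1, y1, x2, y2] corners.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def greedy_cluster(boxes, scores, iou_thresh=0.5):
    # Like greedy NMS, but instead of suppressing overlapping boxes,
    # return the index groups so every member can be fused later.
    order = np.argsort(scores)[::-1]          # highest score first
    clusters, used = [], np.zeros(len(boxes), dtype=bool)
    for i in order:
        if used[i]:
            continue
        members = [j for j in order
                   if not used[j] and iou(boxes[i], boxes[j]) >= iou_thresh]
        used[members] = True
        clusters.append(members)
    return clusters
```

Each cluster's first index is its highest-scoring member, which plays the role of the cluster center in the derivation that follows.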
For the remainder of this section, we will continue the derivation for a single anchor cluster containing $m$ anchors. The anchor with the highest categorical score is considered the cluster's center, is indexed by $i$, and is described with the posterior distributions in Eq. (8) and Eq. (12). The rest of the cluster members are assumed to be measurement outputs from the neural network, described by the states $\hat{\mathbf{x}}_j$ and $\hat{\mathbf{s}}_j$ for $j \neq i$, and are used to update the bounding box and category of the cluster center. Specifically, the final posterior distribution describing an object's bounding box is:
(15) $p(\mathbf{x} | \hat{\mathbf{x}}_1, \dots, \hat{\mathbf{x}}_m) \propto p(\mathbf{x} | \hat{\mathbf{x}}_i) \prod_{j \neq i} p(\hat{\mathbf{x}}_j | \mathbf{x})$
where $\{\hat{\mathbf{x}}_j\}_{j \neq i}$ is the set of outputs corresponding to the cluster members, $p(\mathbf{x} | \hat{\mathbf{x}}_i)$ is the per-anchor posterior distribution of the cluster center, and $\prod_{j \neq i} p(\hat{\mathbf{x}}_j | \mathbf{x})$ is the likelihood derived through a conditional independence assumption of the cluster members given $\mathbf{x}$. The sufficient statistics of Eq. (15) can be estimated in closed form as:
(16) $\Sigma^{-1} = \sum_{j=1}^{m} \Sigma_j^{-1}$
(17) $\boldsymbol{\mu} = \Sigma \sum_{j=1}^{m} \Sigma_j^{-1} \boldsymbol{\mu}_j$
where $\boldsymbol{\mu}_j, \Sigma_j$ are the sufficient statistics of the per-anchor posterior distribution derived in Eq. (8).
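The fusion of Eqs. (16)-(17) can be sketched directly: precisions add, so each redundant anchor tightens the fused box estimate rather than being thrown away.

```python
import numpy as np

def fuse_cluster_boxes(means, covs):
    # means: (m, 4) posterior box means of all cluster members,
    # covs: (m, 4, 4) their covariances. Eqs. (16)-(17): sum the
    # member precisions and precision-weight the member means.
    precisions = np.linalg.inv(covs)
    Sigma = np.linalg.inv(precisions.sum(axis=0))
    mu = Sigma @ np.einsum('mij,mj->i', precisions, means)
    return mu, Sigma
```

Two members with identity covariances fuse to their average with half the covariance, illustrating why the fused uncertainty shrinks with cluster size.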
To arrive at the final posterior distribution describing the category $\mathbf{s}$, a similar analysis can be performed to update the sufficient statistics of the cluster center with categorical measurements from the rest of the cluster members. Specifically, the posterior probability of the categorical parameters can be derived as:
(18) $p(\boldsymbol{\pi} | \hat{\mathbf{s}}_1, \dots, \hat{\mathbf{s}}_m) = \mathrm{Dir}(\boldsymbol{\alpha}^{+}) \propto \Big[\prod_{j \neq i} p(\hat{\mathbf{s}}_j | \boldsymbol{\pi})\Big]\, \mathrm{Dir}(\boldsymbol{\alpha}_i^{+})$
where the categorical measurements $\hat{\mathbf{s}}_j$ are assumed to be i.i.d. In summary, the Dirichlet posterior with parameters $\boldsymbol{\alpha}^{+}$ is derived by updating the per-anchor Dirichlet posterior distribution in Eq. (12) of the cluster center, indexed by $i$, with categorical measurements from all cluster members. The final categorical distribution describing the state is then:
(19) $p(\mathbf{s} | \hat{\mathbf{s}}_1, \dots, \hat{\mathbf{s}}_m) = \mathrm{Cat}(\bar{\boldsymbol{\pi}})$
where $\bar{\boldsymbol{\pi}}$ is computed as the mean of the posterior distribution in Eq. (18):
(20) $\bar{\pi}_{c} = \frac{\alpha_{c}^{+}}{\sum_{k} \alpha_{k}^{+}}$
Note that every member of the cluster contributes to the estimation of the final bounding box and category states of the object. Furthermore, the output distributions for both the category and bounding box can be updated with object-level priors using the same equations presented in Sections III-A and III-B. The final output from BayesOD is shown as the rightmost image in Fig. 2.
Training Dataset | Testing Dataset | Method | mAP (%) | PDQ Score (%) | mGMUE (%) | mCMUE (%)
BDD | BDD | Sampling Free | 36.59 | 33.97 | 44.19 | 28.46
BDD | BDD | Black Box | 36.43 | 32.46 | 47.63 | 30.45
BDD | BDD | Anchor Redundancy | 32.92 | 29.57 | 48.56 | 35.58
BDD | BDD | Joint Aleatoric-Epistemic | 36.84 | 29.57 | 46.35 | 28.28
BDD | BDD | BayesOD | 38.14 | 36.79 | 34.42 | 24.85
BDD | KITTI | Sampling Free | 64.78 | 29.24 | 46.70 | 20.67
BDD | KITTI | Black Box | 62.96 | 32.26 | 49.23 | 22.27
BDD | KITTI | Anchor Redundancy | 64.83 | 29.57 | 48.56 | 35.58
BDD | KITTI | Joint Aleatoric-Epistemic | 62.96 | 29.57 | 46.35 | 28.28
BDD | KITTI | BayesOD | 63.34 | 35.26 | 30.06 | 15.58
COCO | COCO | Sampling Free | 31.89 | 22.43 | 40.39 | 25.76
COCO | COCO | Black Box | 33.71 | 21.87 | 45.26 | 28.68
COCO | COCO | Anchor Redundancy | 29.94 | 17.63 | 43.74 | 31.13
COCO | COCO | Joint Aleatoric-Epistemic | 32.68 | 23.08 | 42.90 | 26.51
COCO | COCO | BayesOD | 35.41 | 23.15 | 30.23 | 24.13
COCO | Pascal VOC | Sampling Free | 54.94 | 14.18 | 49.49 | 29.63
COCO | Pascal VOC | Black Box | 54.67 | 12.77 | 48.90 | 29.42
COCO | Pascal VOC | Anchor Redundancy | 51.56 | 13.06 | 48.67 | 39.64
COCO | Pascal VOC | Joint Aleatoric-Epistemic | 55.43 | 11.62 | 49.99 | 30.14
COCO | Pascal VOC | BayesOD | 56.00 | 13.23 | 36.36 | 24.19
IV Experiments and Results
To show the effectiveness of BayesOD in comparison to the state of the art, it is applied to the problem of 2D object detection in image space. The evaluation is based on four commonly used datasets: COCO [25], Pascal VOC [26], Berkeley Deep Drive (BDD) [16], and KITTI [8]. Models used for testing are not allowed to observe instances from the KITTI or Pascal VOC datasets.
All baseline uncertainty estimation methods used in comparison are integrated into the inference process of RetinaNet [19], trained using the regression loss function in Eq. (2) to estimate a diagonal bounding box covariance matrix. Full aleatoric covariance matrix results are provided through a second RetinaNet model, trained using the proposed regression loss in Eq. (7). For additional information on RetinaNet's training procedure and hyperparameters, see [19].
IV-A Evaluation Metrics
Three evaluation metrics are used to quantify the performance of uncertainty estimation methods in comparison to BayesOD. For performance on the detection task, we use the Mean Average Precision (mAP) [25, 26, 16, 8] at a fixed IOU threshold. The maximum mean average precision achievable by a detector is 100%.
The Minimum Uncertainty Error (MUE) [10] at the same IOU threshold is used to determine the ability of the detector's estimated uncertainty to discriminate true positives from false positives. The lowest MUE achievable by a detector is 0%. We define the Gaussian MUE (GMUE) when the Gaussian entropy is used, and the Categorical MUE (CMUE) when the Categorical entropy is used. Finally, we average the GMUE and CMUE over all categories in a testing dataset to arrive at a single value, the Mean (Gaussian or Categorical) MUE (mGMUE or mCMUE).
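Under our reading of the metric's definition in [10], MUE can be sketched as a threshold sweep over entropy values: detections above the threshold are flagged as incorrect, and the reported value is the smallest achievable mean of the two misclassification rates.

```python
import numpy as np

def minimum_uncertainty_error(entropy_tp, entropy_fp):
    # Sketch of MUE: sweep a threshold over all observed entropies and
    # report the best achievable mean of the two error rates. This is
    # an illustrative reconstruction, not the reference implementation.
    thresholds = np.unique(np.concatenate([entropy_tp, entropy_fp]))
    best = 1.0
    for t in thresholds:
        err_tp = np.mean(entropy_tp > t)   # true positives wrongly flagged
        err_fp = np.mean(entropy_fp <= t)  # false positives wrongly kept
        best = min(best, 0.5 * (err_tp + err_fp))
    return best
```

Perfectly separated entropy distributions give an MUE of 0%, while completely overlapping ones give 50%, which is what makes the metric a measure of discriminative power.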
Finally, we use the newly proposed Probability Based Detection Quality (PDQ) [27] to jointly quantify the bounding box and category probability assigned to true positives by the detector. The highest PDQ achievable by a detector is 100%, where the PDQ increases as the distributions assigned to a detection better match those of the ground truth instance. For detailed information on the three evaluation metrics, we refer the reader to [26, 10, 27].
IV-B Comparison With State-of-the-Art Methods:
BayesOD is compared against four approaches representing the state-of-the-art uncertainty estimation methods used for object detection. The four approaches are referred to as: Black Box [9, 10], Sampling Free [15, 14], Anchor Redundancy [15], and Joint Aleatoric-Epistemic [13]. BayesOD, Black Box, and Joint Aleatoric-Epistemic use multiple stochastic runs of MC-Dropout, while Sampling Free and Anchor Redundancy use only one non-stochastic run. As such, BayesOD, Black Box, and Joint Aleatoric-Epistemic run at a similar frame rate, slower than Sampling Free and Anchor Redundancy. The affinity threshold used for clustering in all methods was set to the same IOU threshold used for NMS in RetinaNet. The number of categorical samples $N$ in Eq. (12) is set empirically.
Table I shows the results of evaluating the four methods in comparison to BayesOD, on the four testing datasets. BayesOD is seen to outperform all four methods on mAP when tested on the BDD, COCO and Pascal VOC datasets, but is outperformed on the KITTI dataset by the Sampling Free and Anchor Redundancy methods. Such a reduction in performance on KITTI is noted with all methods using MC-Dropout, implying that MC-Dropout might hurt mAP performance in cases where the testing dataset is semantically different from the training dataset.
Similarly, BayesOD also outperforms all four methods on PDQ when tested on the BDD, KITTI and COCO datasets, but is outperformed on the Pascal VOC dataset by the Sampling Free method. Considering performance only on PDQ, it cannot be determined whether a method is assigning lower probability values to false positives.
On the other hand, the mGMUE/mCMUE are capable of providing a quantitative measure of how well the estimated uncertainty can be used to separate correct and incorrect detections [10]. BayesOD provides a significant reduction of 9.77%-13.13% in mGMUE over the next best method on all four testing datasets. Combined with BayesOD's performance on the PDQ metric, it can be inferred that BayesOD not only assigns adequate probability to true positives, but also assigns a lower probability to false positives when compared to true positives. Finally, when comparing mCMUE, BayesOD provides a reduction of 1.63%-5.23% over the next best method on all four datasets.
Experiment | Description | mAP (%) | PDQ Score (%) | mGMUE (%) | mCMUE (%)
1 | Full System | 35.41 | 23.15 | 30.23 | 24.13
2 | Diagonal Covariance | 34.77 | 22.64 | 30.69 | 25.25
3 | Epistemic Only | 34.15 | 22.62 | 35.88 | 26.47
4 | Aleatoric Only | 34.12 | 22.67 | 28.95 | 25.60
5 | Standard NMS | 34.70 | 22.65 | 43.19 | 25.10
IV-C Ablation Studies:
Table II shows the results of the mAP, PDQ, mGMUE, and mCMUE for the ablation studies performed on the COCO dataset. The results of the full BayesOD framework can be seen in experiment 1. By analyzing the results of the ablation studies, the following claims are put forth:
Learning the off-diagonal elements of the covariance matrix provides slightly better uncertainty estimates for the objects' bounding box. To support this claim, RetinaNet is trained using the original log likelihood loss in Eq. (2) instead of the proposed multivariate loss in Eq. (7). The results of BayesOD using this original loss formulation are shown in experiment 2. When compared to the full system, an increase of 0.46% is observed in mGMUE. Although the improvement is not substantial, the new proposed loss avoids an explicit independence assumption and allows the neural network to learn to drive the off-diagonal elements of the covariance matrix towards zero if needed.
Aleatoric uncertainty provides a more discriminative uncertainty estimate for the objects' bounding box than epistemic uncertainty estimated from MC-Dropout. To support this claim, BayesOD is implemented without the update step in Eq. (5), using only the per-anchor sample variance computed from multiple stochastic runs of MC-Dropout. The results, presented in experiment 3, show increases of 5.65% and 2.34% in the mGMUE and mCMUE, respectively. Note however that this conclusion is specific to MC-Dropout, and might not be valid for alternative epistemic uncertainty estimation mechanisms.
To provide better insight on the effect of epistemic uncertainty from MC-Dropout on the full system, experiment 4 is performed by using BayesOD with a single inference run, and without any epistemic uncertainty estimation mechanism. The results show a decrease in mGMUE of 6.93% over experiment 3, and of 1.28% over the full system, further cementing the conclusion that MC-Dropout might not be a good method to estimate epistemic uncertainty in deep object detectors.
Greedy non-maximum suppression is detrimental to the discriminative power of the uncertainty in the objects' bounding box. To support this claim, the elimination scheme of NMS is selected to retain only cluster centers, while discarding the remaining cluster members. The results presented in experiment 5 show a large increase of 12.96% in mGMUE when compared to the full system. We conclude that merging information from all cluster members into the final object estimate is essential for proper quantification of bounding box uncertainty by a neural network.
V Conclusion
This paper presents BayesOD, a Bayesian approach for estimating the uncertainty in the output of deep object detectors. Experiments using BayesOD show that replacing NMS with Bayesian inference and explicitly incorporating full aleatoric covariance matrix estimation allow for much more meaningful category and bounding box uncertainty estimates in deep object detectors. This work aims to pave the path for future research directions that would use BayesOD for active learning, exploration, as well as object tracking. Future work will study the effect of informative priors originating from multiple detectors, temporal information, and different sensors on the perception capabilities of a robotic system.
References
 [1] Jason Ku, Melissa Mozifian, Jungwook Lee, Ali Harakeh, and Steven Waslander. Joint 3d proposal generation and object detection from view aggregation. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018.
 [2] Yin Zhou and Oncel Tuzel. Voxelnet: End-to-end learning for point cloud based 3d object detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
 [3] Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li, and Tian Xia. Multi-view 3d object detection network for autonomous driving. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
 [4] Trupti M Pandit, PM Jadhav, and AC Phadke. Suspicious object detection in surveillance videos for security applications. In Inventive Computation Technologies (ICICT), International Conference on, 2016.
 [5] Niko Sünderhauf, Oliver Brock, Walter Scheirer, Raia Hadsell, Dieter Fox, Jürgen Leitner, Ben Upcroft, Pieter Abbeel, Wolfram Burgard, Michael Milford, et al. The limits and potentials of deep learning for robotics. The International Journal of Robotics Research, 37(45):405–420, 2018.
 [6] Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning (ICML), 2016.
 [7] Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision? In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, 2017.
 [8] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
 [9] Dimity Miller, Lachlan Nicholson, Feras Dayoub, and Niko Sünderhauf. Dropout sampling for robust object detection in open-set conditions. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 1–7. IEEE, 2018.
 [10] Dimity Miller, Feras Dayoub, Michael Milford, and Niko Sünderhauf. Evaluating merging strategies for sampling-based uncertainty techniques in object detection. arXiv preprint arXiv:1809.06006, 2018.
 [11] Gregory P Meyer, Ankit Laddha, Eric Kee, Carlos Vallespi-Gonzalez, and Carl K Wellington. Lasernet: An efficient probabilistic 3d object detector for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 12677–12686, 2019.
 [12] Yihui He, Chenchen Zhu, Jianren Wang, Marios Savvides, and Xiangyu Zhang. Bounding box regression with uncertainty for accurate object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2888–2897, 2019.
 [13] Di Feng, Lars Rosenbaum, and Klaus Dietmayer. Towards safe autonomous driving: Capture uncertainty in the deep neural network for lidar 3d vehicle detection. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 2018.
 [14] Di Feng, Lars Rosenbaum, and Klaus Dietmayer. Leveraging heteroscedastic aleatoric uncertainties for robust real-time lidar 3d object detection. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 2018.
 [15] Michael Truong Le, Frederik Diehl, Thomas Brunner, and Alois Knoll. Uncertainty estimation for deep neural object detectors in safety-critical applications. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2018.
 [16] Fisher Yu, Wenqi Xian, Yingying Chen, Fangchen Liu, Mike Liao, Vashisht Madhavan, and Trevor Darrell. Bdd100k: A diverse driving video database with scalable annotation tooling. arXiv preprint arXiv:1805.04687, 2018.
 [17] Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, and Kevin Murphy. Speed/accuracy tradeoffs for modern convolutional object detectors. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
 [18] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems 28, 2015.
 [19] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollar. Focal loss for dense object detection. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
 [20] David JC MacKay. Probable networks and plausible predictions—a review of practical bayesian methods for supervised neural networks. Network: computation in neural systems, 6(3):469–505, 1995.
 [21] Murat Sensoy, Lance Kaplan, and Melih Kandemir. Evidential deep learning to quantify classification uncertainty. In Advances in Neural Information Processing Systems, pages 3179–3189, 2018.
 [22] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 6402–6413. Curran Associates, Inc., 2017.
 [23] Florian Kraus and Klaus Dietmayer. Uncertainty estimation in onestage object detection. arXiv preprint arXiv:1905.10296, 2019.
 [24] Andrew Gelman, Hal S Stern, John B Carlin, David B Dunson, Aki Vehtari, and Donald B Rubin. Bayesian data analysis. Chapman and Hall/CRC, 2013.
 [25] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
 [26] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2):303–338, 2010.
 [27] David Hall, Feras Dayoub, John Skinner, Peter Corke, Gustavo Carneiro, and Niko Sünderhauf. Probabilitybased detection quality (PDQ): A probabilistic approach to detection evaluation. CoRR, abs/1811.10800, 2018.