Fine-grained Uncertainty Modeling in Neural Networks

Existing uncertainty modeling approaches try to detect an out-of-distribution point given the in-distribution dataset. We extend this argument to detect finer-grained uncertainty that distinguishes between (a) certain points, (b) uncertain points that lie within the data distribution, and (c) out-of-distribution points. Our method corrects overconfident NN decisions, detects outlier points and learns to say “I don’t know” when uncertain about a critical point between the top two predictions. In addition, we provide a mechanism to quantify the overlap between class distributions in the decision manifold and investigate its implications for model interpretability.

Our method is two-step: in the first step, the proposed method builds a class distribution using Kernel Activation Vectors (kav) extracted from the network. In the second step, the algorithm determines the confidence of a test point by a hierarchical decision rule based on the distribution of squared Mahalanobis distances.

Our method sits on top of a given Neural Network, requires a single scan of the training data to estimate class distribution statistics, and is highly scalable to deep networks and wider pre-softmax layers. As a positive side effect, our method helps to prevent adversarial attacks without requiring any additional training. This is achieved directly by substituting the Softmax layer with our robust uncertainty layer at the evaluation phase.

1 Introduction

Deep Neural Networks [26, 9, 28] are very good estimators for prediction tasks. However, achieving a trustworthy model output has been of increasing interest recently, especially in critical areas such as autonomous vehicles or medical diagnostics. One way to establish trust in the model is to make its decision process interpretable [31, 23]. In recent years, numerous methods [21, 29, 12, 18, 13, 1, 22, 27, 32, 24] have been proposed to explain model predictions; for example, [21] uses an auxiliary binary input to understand model predictions, [29] proposes an inverse convolution operation using DeconvNets [30], and [24] uses the back-propagated gradient of the last convolution layer.

Most approaches to model explanation are gradient based and operate on the input without regard to the model's prediction confidence. While the input explanations may still be meaningful, the Softmax still produces overconfident scores that are not in line with those explanations. For example, the pretrained VGG19 model is more confident about the top-1 prediction than the second top prediction even though both predictions have equal prior, as shown in Fig.[1].

Figure 1: Prediction confidence of our method versus the Softmax and previous works [16, 25, 11, 19]. Our method: (a) (top) is detected as a certain “great white shark”, and (bottom) is detected as a certain “digit-9”; (b) (top) is detected as an uncertain point between “great white shark” and “scuba diver”, and (bottom) is detected as an uncertain point between “digit-5” and “digit-8”; (c) both (top) and (bottom) are detected as out-of-distribution points. Softmax and previous works: (a) (top) is detected as a certain “great white shark”, and (bottom) is detected as a certain “digit-9”; (b) and (c) both (top) and (bottom) are detected as out-of-distribution points.


Although significant work has been done on explanations at the input level, estimating the uncertainty in a prediction is still a challenge and an active area of research [5, 10, 17, 25, 11, 15]. One of the first approaches to uncertainty calibration [6] was to scale the logit values in the Softmax (also called temperature scaling); however, temperature scaling alone is insufficient to detect outlier points at the lower end of the probabilities. [5] aims to prevent adversarial attacks by estimating the manifold of each class. [10] employs a simple mechanism that separates the test data into correctly and incorrectly classified examples and then plots an AUCROC curve to summarize the performance of a binary classifier discriminating on the score. [17] uses temperature scaling and adds a small perturbation to the input image to separate in- and out-of-distribution samples. [25] and [11] are based on a Dirichlet distribution [2] over class probabilities to form subjective opinions (however, a small change in the uncertainty threshold can significantly affect the model confidence). [15] augments the training step by adding a confidence score to the loss function to remove out-of-distribution examples. [19] only detects outlier points based on a fixed threshold. [16] is similar to our work in that it uses the pre-softmax layer to estimate the prior distribution of classes and then detects out-of-distribution points by thresholding Mahalanobis distances. However, this approach is not scalable to wide pre-softmax layers due to the joint covariance matrix computation. Besides, the proposed threshold-based separation is insufficient to separate uncertain points within the distribution from out-of-distribution points (thus lacking finer-grained uncertainty).

Contribution: (a) We propose a robust framework - an extension of [16] - to separate out-of-distribution points from uncertain points within the distribution, (b) propose kernel activation vectors (kav) that capture the low-level and high-level activations, (c) retrieve relevant training points in the active learning framework, (d) propose a diagonal covariance matrix assumption on the class distribution that makes the framework scalable to very deep networks and/or very wide pre-softmax layers without compromising prediction performance, (e) quantify class distribution overlap to understand concept affinity and provide model interpretability, and (f) define a new baseline to measure the outlier-detection capacity of the model.

2 Uncertainty Modelling

Given a neural network pre-trained for a specific target, for example, multi-class classification, we aim to evaluate the predictive uncertainty of the top prediction. At the training step, we first model the prior distribution of each class as a multi-variate normal distribution capturing the distribution of input activations. In order to provide a general procedure, we refer to these activations as the Kernel Activation Vector (kav), which captures the low-level and high-level activations of the manifold for each class. At the training step, we also compute and store the top-k nearest kav members (k being a hyperparameter) for each training class. These member vectors are then used in the ensemble voting for uncertainty prediction as discussed later in this section. The overall training step is summarized in Algorithm 1. At the evaluation step, the uncertainty score is assigned based on a hierarchical decision process as outlined in detail in the following two subsections.

  Input: Training instances {(x_i, y_i)}, trained Neural Network estimator f
  for c = 1 to C do
     for each training instance x_i with y_i = c do
        extract the kernel activation vector v_i from f
     end for
  end for
  for c = 1 to C do
     compute class mean μ_c and diagonal covariance Σ_c over {v_i : y_i = c}
  end for
  for c = 1 to C do
     store the top-k members of class c nearest to μ_c (by Mahalanobis distance)
  end for
Algorithm 1 Compute statistics (μ_c, Σ_c) of each class-specific prior distribution. Compute nearest members of that distribution
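A minimal NumPy sketch of this single-scan statistics step, assuming the kernel activation vectors have already been extracted from the network (the function name, the variance regularizer, and the dictionary layout are our own illustrative choices, not the paper's):

```python
import numpy as np

def fit_class_statistics(kavs, labels, num_classes, k=5):
    """Estimate per-class mean and diagonal variance of kernel activation
    vectors, and keep the k nearest members of each class distribution.

    kavs:   (N, d) array of kernel activation vectors
    labels: (N,) integer class labels
    """
    stats = {}
    for c in range(num_classes):
        v = kavs[labels == c]                 # members of class c
        mu = v.mean(axis=0)                   # class mean
        var = v.var(axis=0) + 1e-8            # diagonal covariance (regularized)
        # squared Mahalanobis distance under the diagonal assumption
        d2 = (((v - mu) ** 2) / var).sum(axis=1)
        nearest = v[np.argsort(d2)[:k]]       # top-k nearest members
        stats[c] = {"mu": mu, "var": var, "members": nearest}
    return stats
```

Each class requires only one pass over its members, which is what keeps the training step a single scan of the data.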

2.1 Distance Estimates

Given an observation point, x, we extract the kernel activation vector, v, and compute its squared Mahalanobis distance from the multi-variate Gaussian distribution of class c as:

D_c^2(x) = (v − μ_c)^T Σ_c^{−1} (v − μ_c)     (1)

where v is the kernel activation vector extracted from the pre-trained NN. Its belongingness to class c is determined by the Maximum Likelihood Estimate (MLE) on the pre-softmax layer of the NN estimator. Note that computing the dense covariance matrix, Σ_c, in a high-dimensional space is an expensive operation, requiring O(d^2) space and O(d^3) time for the inversion. We assume that the elements in v are linearly independent and uncorrelated, which reduces the covariance matrix to a diagonal matrix. This assumption reduces both time and space complexity to O(d) and makes the training procedure scalable to large datasets and wider Softmax layers. The Mahalanobis distance is therefore modified as:

D_c^2(x) = Σ_j ((v_j − μ_{c,j}) / σ_{c,j})^2     (2)

where the summand defines a transformed kernel activation vector which is normalized to the class, i.e., centered by the mean (μ_c) and scaled by the standard deviations (σ_c) element-wise. Empirically, we validate that this constraint still gives superior results compared to the baseline [10] and previous works [16, 17].
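Under the diagonal assumption, Eq.[2] reduces to standardizing each coordinate and summing the squares, which can be sketched as:

```python
import numpy as np

def mahalanobis_sq(v, mu, var):
    """Squared Mahalanobis distance under the diagonal covariance assumption:
    standardize each coordinate, then sum the squares (O(d) time and space)."""
    z = (v - mu) / np.sqrt(var)   # normalized kernel activation vector
    return float(np.sum(z ** 2))
```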

We also define an overlap between the prior distributions of different classes using the Bhattacharyya distance to better understand the decision manifold. The Bhattacharyya distance, D_B, between classes c1 and c2 is given as:

D_B(c1, c2) = (1/8)(μ_1 − μ_2)^T Σ^{−1}(μ_1 − μ_2) + (1/2) ln( det Σ / sqrt(det Σ_1 det Σ_2) ),  where Σ = (Σ_1 + Σ_2)/2     (3)

Since the length of v scales with the depth of a CNN, the second term in Eq.[3] becomes numerically unstable at large dimensions; for example, the determinants for VGG16's kernel activation vectors over/underflow. We use the following optimizations in Eq.[3] to achieve numerical stability:

D_B(c1, c2) = (1/8) Σ_j (μ_{1,j} − μ_{2,j})^2 / σ_j^2 + (1/2) [ Σ_j ln σ_j^2 − (1/2) Σ_j ln σ_{1,j}^2 − (1/2) Σ_j ln σ_{2,j}^2 ],  where σ_j^2 = (σ_{1,j}^2 + σ_{2,j}^2)/2     (4)

where the second step uses the proposed uncorrelation assumption (diagonal covariance matrix) and sums log-variances instead of multiplying determinants. We now define the hierarchical uncertainty decision process.
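A sketch of Eq.[4], assuming diagonal covariances stored as per-coordinate variance vectors (the function name is ours):

```python
import numpy as np

def bhattacharyya_diag(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two Gaussians with diagonal covariances,
    computed with per-coordinate log-variances for numerical stability at
    high dimension (no determinant products are ever formed)."""
    var = 0.5 * (var1 + var2)                       # averaged diagonal covariance
    term1 = 0.125 * np.sum((mu1 - mu2) ** 2 / var)  # Mahalanobis-like mean term
    # log-determinant term as a sum of logs instead of a product
    term2 = 0.5 * (np.sum(np.log(var))
                   - 0.5 * (np.sum(np.log(var1)) + np.sum(np.log(var2))))
    return float(term1 + term2)
```

Identical distributions give a distance of zero, and the distance grows as the class means separate or the variances diverge.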

2.2 Hierarchical Uncertainty Modelling

For every test point, the proposed approach computes the Mahalanobis distance (Eq.[2]) of the kernel activation vector from the distributions of top-2 classes. A hierarchical decision function takes these distances and categorizes the test point as outlier, uncertain, or certain.

We define a point as an out-of-distribution point if it lies outside a high-probability range of the distribution. To determine this range, we note that the squared Mahalanobis distances follow a χ² distribution [8] whose degrees of freedom equal the dimension of the distribution. Since the χ² distribution does not have a closed-form Cumulative Distribution Function (CDF), determining its quantiles, although possible under approximations [20], is still a complicated procedure. We observe that, at a very large degree of freedom k, the χ²(k) distribution can be approximated as a normal distribution with mean k and variance 2k. We refer to this as the distance distribution to distinguish it from the activation distributions (of v).

We use the out-of-distribution criteria described above to achieve finer-grained uncertainty: the separation of outlier points from uncertain points. We make a distinction between outlier points and uncertain points in the sense that the uncertain points lie within the joint distribution of the top-k classes but outside each marginal distribution. Outlier points lie outside the joint distribution.

Step-1: Detecting an Outlier point

Let N(μ_1, Σ_1) and N(μ_2, Σ_2) be the distributions of the top-2 classes respectively. Since they are independent Gaussians, their joint distribution is also Gaussian, with mean the concatenation [μ_1; μ_2] and block-diagonal covariance diag(Σ_1, Σ_2).

We define a test point as an outlier if its kernel activation vector lies outside the high-probability range of the joint squared Mahalanobis distribution of the top-2 classes. To achieve this, we first compute the squared Mahalanobis distance from the joint distribution of the top-2 classes, which, by independence, decomposes into the sum of the per-class squared distances:

D_joint^2(x) = D_1^2(x) + D_2^2(x)     (5)

and use the following decision rule (with k the joint degrees of freedom and α a fixed multiplier) to detect an outlier point:

outlier(x) = 1 if D_joint^2(x) > k + α sqrt(2k), and 0 otherwise     (6)

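The outlier rule of Eq.[6] can be sketched as follows; α is a hypothetical stand-in for the paper's threshold multiplier, which is not specified here:

```python
import math

def is_outlier(d2_joint, df, alpha=3.0):
    """Outlier rule sketch: the joint squared Mahalanobis distance follows a
    chi-square with df = d1 + d2 degrees of freedom, approximated at large df
    by a normal N(df, 2*df); flag points beyond alpha standard deviations."""
    mean, std = df, math.sqrt(2.0 * df)
    return d2_joint > mean + alpha * std
```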
Step-2: Detecting Uncertain points

We make a distinction between uncertain points and outlier points in the sense that an uncertain point lies within the joint distribution of the top-k prediction classes but outside each of their marginal distributions. An outlier point (for example, random noise) lies outside such a joint distribution. To obtain the belongingness of a point (be it outlier or uncertain) to a multi-variate Gaussian distribution, we note that the squared Mahalanobis distances from such a distribution follow a χ² distribution.

For a test point, x, let D_1 and D_2 be the squared Mahalanobis distances from the top-2 classes respectively. If D_1 and D_2 are within a margin of each other (learned empirically), then it is an uncertain point. If the point is not detected as an outlier or uncertain, we tag it as a certain point with respect to the top-2 predictions.
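The full hierarchical decision can then be sketched in a few lines; `eps` (the closeness margin) and `alpha` (the outlier multiplier) are hypothetical thresholds standing in for the empirically learned values:

```python
import math

def classify_confidence(d2_top1, d2_top2, d2_joint, df_joint, eps, alpha=3.0):
    """Hierarchical decision sketch: first test against the joint distribution
    for outliers, then mark points whose top-2 squared distances are within
    eps of each other as uncertain; everything else is certain."""
    if d2_joint > df_joint + alpha * math.sqrt(2.0 * df_joint):
        return "outlier"
    if abs(d2_top1 - d2_top2) <= eps:
        return "uncertain"
    return "certain"
```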

In order to increase the confidence score and make the certain and uncertain points more separable, we add a controlled noise along the direction of the gradient for every test point, similar to previous methods [16, 17]:

x̃ = x − ε · sign(−∇_x log f̂(x))

where f̂ is a modified neural network estimator whose Softmax predictions are replaced by the proposed uncertainty layer.
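A sketch of this perturbation step, assuming an ODIN-style [16, 17] sign-of-gradient update; the gradient of the log-confidence is assumed to be precomputed by the network's autodiff, and the step size is illustrative:

```python
import numpy as np

def perturb_input(x, grad, eps=0.002):
    """Add a small controlled perturbation along the sign of the gradient of
    the log-confidence; grad is the precomputed gradient w.r.t. the input."""
    return x - eps * np.sign(-grad)
```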

3 Experiments

In this section, we investigate the effectiveness of the proposed method in detecting uncertain points within the data distribution (Section 3.2), understand class distribution overlap in the decision manifold (Section 3.3), discuss robustness against digit rotation, and plot uncertainty confidence with increasing image occlusion. All tests are done on a single-GPU machine using pretrained networks wherever possible; otherwise we fine-tuned networks for accuracy (ResNet34 on CIFAR10 and SVHN).

3.1 New Baseline

We define a new baseline that distinguishes between uncertain and out-of-distribution (outlier) points without sacrificing prediction performance. For demonstration purposes and to assess the proposed framework, we first establish that our method does not incur a loss of prediction performance compared to recent methods such as [16, 17]. For this we compare out-of-distribution detection on the TinyImageNet test set while using the CIFAR10 test set as in-distribution data. We note that our method outperforms Softmax and Euclidean distance and gives the same performance as [16]. In addition, the prediction accuracy using the decision thresholds in the proposed framework is on par with previous works. Although fitting the proposed method on top of [16] would be an interesting test, we leave it as a future investigation.

Figure 2: (a). AUCROC curve of in-distribution CIFAR10 (True Positive Rate) versus the out-of-distribution TinyImageNet dataset (False Positive Rate) using ResNet34 finetuned for CIFAR10, (b). Classification accuracy of CIFAR10 and SVHN on Softmax, [16] and the proposed method


3.2 Detecting Uncertain points within data distribution

With Fig.[2] as a benchmark, we now provide an assessment of the proposed method in detecting uncertain points that still remain in the data distribution. To generate uncertain points in the data distribution, we take a certain point and adversarially [4] modify the image until it just misses the true top prediction. For example, an image of a plane under adversary could be classified as a bird.

For assessment, we now have an in-distribution dataset containing images for which the model is intended to provide correct predictions, and an uncertain dataset for which the model has a confused top prediction. As before, we plot the AUCROC curve of TPR (in-distribution) versus FPR (uncertain) using the baseline proposed above and also compare the same with Softmax scores. As highlighted in Fig.[3], we see a significant performance increment compared to Softmax.

Next, to demonstrate that our finer-grained uncertainty metric can distinguish uncertain points from noise points, we take random noise points and define this collection as an out-of-distribution dataset. The proposed method detects these images as outliers, establishing a new baseline for future works.

Figure 3: Baseline for uncertain image detection. Plot of TPR of in-distribution dataset against the FPR of (its adversarially generated) uncertain dataset.

3.3 Class Distribution Overlap

We quantify and investigate class distribution overlaps using the Bhattacharyya distance as outlined in Section 2.1. One implication of this investigation is to understand whether a Neural Network has learned a concept or is simply memorizing the training data. To substantiate our claim, we reason on the following two cases based on Fig.[4-(a)]. Case 1: we observe that certain distribution pairs have a significantly higher overlap (similarity) than any other pair. Case 2: we observe that other distribution pairs, although having high similarity compared to the remaining pairs, are still significantly more separated than those in Case 1. Next, in the ImageNet training data, many instances of one class in a Case 1 pair also contain the other class, or one of its variant categories, in the image. This is not the case with the Case 2 pairs, as their instances are fairly separated. This observation suggests that the model is learning concepts (which is why the Case 1 pairs are close) and not memorizing the training data (otherwise the overlap would be higher). Similar trends are observed in other distribution pairs in ImageNet as well as in other datasets (CIFAR10, MNIST, SVHN), as outlined in Fig.[4].

To the best of our knowledge, this finding is a unique contribution with profound implications for understanding the decision manifold and providing model interpretability.

Figure 4: Class distribution overlap for ImageNet (pretrained VGG16 [26], sampled classes), CIFAR10 (pretrained VGG16 with the top layer modified to 10 classes and finetuned), MNIST (trained on LeNet [14]) and SVHN (pretrained VGG16 with the top layer modified to 10 classes and finetuned). Numbers in each cell represent the Bhattacharyya distance, where a smaller number indicates higher overlap and vice versa.


3.4 Image Rotation Test

We use the proposed approach to investigate the robustness of neural network confidence against image rotation, similar to [25], and compare against Softmax confidence. We observe in Fig.[5] that a slight rotation of digit-1 leads to a significant drop in the Softmax score (from column 2 to column 3). In addition, Softmax produces an overly confident score for a confused prediction; for example, in Fig.[5], a rotated digit-1 is detected as digit-7 with an overly confident score, whereas the proposed method detects it as a highly uncertain point between digit-7 and digit-4. We conclude that the proposed method robustly detects uncertainty as the degree of rotation increases.

Figure 5: Uncertainty score of the proposed method on MNIST image rotation (digits 1, 7, 6 and 9) using the LeNet architecture. The columns at the bottom of the figure show the top-2 Softmax scores along with the prediction class in parentheses.

3.5 Occlusion Test

To quantify the robustness in confidence of the model prediction, we assess empirically how the confidence degrades with increasing noise in the input space. For a given test image, we iteratively generate noisy versions using salt-and-pepper noise [3] with an increasing degree of noise. In Fig.[6], we note that the proposed method is more confident about the target prediction compared to Softmax.
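The corruption step can be sketched in NumPy (the function name and the even salt/pepper split are our own illustrative choices):

```python
import numpy as np

def salt_and_pepper(img, amount=0.1, rng=None):
    """Corrupt a fraction `amount` of pixels with salt (image max) or pepper
    (image min) noise, half each, chosen at random without replacement."""
    rng = np.random.default_rng(rng)
    noisy = img.copy()
    n = int(amount * img.size)
    idx = rng.choice(img.size, size=n, replace=False)
    flat = noisy.reshape(-1)              # view into the copy
    flat[idx[: n // 2]] = img.max()       # salt
    flat[idx[n // 2:]] = img.min()        # pepper
    return noisy
```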

To perform a reliable experiment that covers the entire dataset, we compute occluded images (with increasing degrees of noise) for every image in the test dataset and determine the average area under the curve for occlusion. We compare the AUC of our method against Softmax, and as shown in Table.[1], our method significantly outperforms Softmax.

Figure 6: Plot of uncertainty score with increasing salt-and-pepper noise for four example images taken from the test-set data in Table.[1]. We note that the Softmax confidence decays faster than our method's for the fixed target prediction, which makes our method more robust to noise.

Noise (%) | CIFAR10 Softmax | CIFAR10 Ours | SVHN Softmax | SVHN Ours | Imagenet20 Softmax | Imagenet20 Ours
0         | 97.03           | 97.52        | 98.14        | 98.99     | 89.63              | 91.57
10        | 87.77           | 92.13        | 89.13        | 94.56     | 81.74              | 88.39
20        | 71.02           | 80.94        | 75.82        | 85.07     | 69.58              | 74.92
30        | 60.37           | 70.77        | 63.71        | 73.82     | 62.81              | 70.14
40        | 57.81           | 65.89        | 60.66        | 68.38     | 55.24              | 61.33
50        | 60.66           | 65.90        | 59.83        | 67.81     | 54.13              | 60.78
Table 1: Average AUC (%) for the occluded test set constructed using salt-and-pepper noise. Higher is better.

4 Conclusion

The proposed method presents a finer-grained uncertainty measure that distinguishes outlier points from uncertain points. Our approach is based on a simple framework of kernel activations, is scalable to deep networks and wider Softmax layers, sits on top of any neural network, and requires minimal disk storage. Confident model predictions are imperative for use in critical real-world applications, and we believe that our work is a meaningful contribution toward achieving such confidence and robustly preventing adversarial attacks. In the future, we would like to investigate the effects of the proposed method in multi-label classification tasks and in language modeling, as well as kernel Mahalanobis distances [7] to achieve prior separation in higher-dimensional manifolds.


  1. Or Biran and Kathleen McKeown. Human-centric justification of machine learning predictions. IJCAI, Melbourne, Australia, 2017.
  2. David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993–1022, 2003.
  3. Alan C Bovik. Handbook of image and video processing. Academic press, 2010.
  4. Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. Adversarial attacks and defences: A survey. arXiv preprint arXiv:1810.00069, 2018.
  5. Reuben Feinman, Ryan R Curtin, Saurabh Shintre, and Andrew B Gardner. Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410, 2017.
  6. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1321–1330. JMLR. org, 2017.
  7. Bernard Haasdonk and Elżbieta Pękalska. Classification with kernel mahalanobis distance classifiers. In Advances in Data Analysis, Data Handling and Business Intelligence, pages 351–361. Springer, 2009.
  8. Johanna Hardin and David M Rocke. The distribution of robust distances. Journal of Computational and Graphical Statistics, 14(4):928–946, 2005.
  9. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  10. Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136, 2016.
  11. Lance Kaplan, Federico Cerutti, Murat Sensoy, Alun Preece, and Paul Sullivan. Uncertainty aware ai ml: Why and how. arXiv preprint arXiv:1809.07882, 2018.
  12. Been Kim, Rajiv Khanna, and Oluwasanmi O Koyejo. Examples are not enough, learn to criticize! criticism for interpretability. In Advances in Neural Information Processing Systems, pages 2280–2288, 2016.
  13. Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. arXiv preprint arXiv:1703.04730, 2017.
  14. Yann LeCun et al. Lenet-5, convolutional neural networks. URL: http://yann.lecun.com/exdb/lenet, 20:5, 2015.
  15. Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. Training confidence-calibrated classifiers for detecting out-of-distribution samples. arXiv preprint arXiv:1711.09325, 2017.
  16. Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In Advances in Neural Information Processing Systems, pages 7167–7177, 2018.
  17. Shiyu Liang, Yixuan Li, and R Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. arXiv preprint arXiv:1706.02690, 2017.
  18. Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, pages 4768–4777, 2017.
  19. Patric Nader, Paul Honeine, and Pierre Beauseroy. Mahalanobis-based one-class classification. In 2014 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), pages 1–6. IEEE, 2014.
  20. Hilary I Okagbue, Muminu O Adamu, and Timothy A Anake. Quantile approximation of the chi–square distribution using the quantile mechanics. Proceedings of the World Congress on Engineering and Computer Science 2017 Vol I WCECS 2017, October 25-27, 2017, San Francisco, USA, 2017.
  21. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Why should i trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144. ACM, 2016.
  22. Andrew Slavin Ross, Michael C Hughes, and Finale Doshi-Velez. Right for the right reasons: Training differentiable models by constraining their explanations. arXiv preprint arXiv:1703.03717, 2017.
  23. Wojciech Samek, Thomas Wiegand, and Klaus-Robert Müller. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296, 2017.
  24. Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 618–626, 2017.
  25. Murat Sensoy, Lance Kaplan, and Melih Kandemir. Evidential deep learning to quantify classification uncertainty. In Advances in Neural Information Processing Systems, pages 3179–3189, 2018.
  26. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  27. Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
  28. Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
  29. Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European conference on computer vision, pages 818–833. Springer, 2014.
  30. Matthew D Zeiler, Graham W Taylor, and Rob Fergus. Adaptive deconvolutional networks for mid and high level feature learning. In 2011 International Conference on Computer Vision, pages 2018–2025. IEEE, 2011.
  31. Quanshi Zhang and Song-Chun Zhu. Visual interpretability for deep learning: a survey. arXiv preprint arXiv:1802.00614, 2018.
  32. Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, pages 2921–2929. IEEE, 2016.