We describe a limitation in the expressiveness of the predictive uncertainty estimate given by mean-field variational inference (MFVI), a popular approximate inference method for Bayesian neural networks. In particular, MFVI fails to give calibrated uncertainty estimates in between separated regions of observations. This can lead to catastrophically overconfident predictions when testing on out-of-distribution data. Avoiding such overconfidence is critical for active learning, Bayesian optimisation and out-of-distribution robustness. We instead find that a classical technique, the linearised Laplace approximation, can handle ‘in-between’ uncertainty much better for small network architectures.



‘In-Between’ Uncertainty in Bayesian Neural Networks


Andrew Y. K. Foong, Yingzhen Li, José Miguel Hernández-Lobato, Richard E. Turner

Presented at the ICML 2019 Workshop on Uncertainty and Robustness in Deep Learning. Copyright 2019 by the author(s).

Neural networks have been shown to be extremely successful for supervised learning. However, they are known to underestimate their uncertainty when trained by maximum likelihood or maximum a posteriori (MAP) methods. A neural network that returns reliable uncertainty estimates whilst maintaining the computational and statistical efficiency of standard networks would have numerous applications in active learning, reinforcement learning and critical decision-making tasks (Gal et al., 2017; Chua et al., 2018; Gal, 2016).

A variety of techniques have been proposed to obtain uncertainty estimates for neural networks in computationally efficient ways (MacKay, 1992; Hinton & Van Camp, 1993; Barber & Bishop, 1998; Hernández-Lobato & Adams, 2015; Gal & Ghahramani, 2016; Lakshminarayanan et al., 2017). Among these, mean-field variational inference (MFVI) is a widely used approximate inference method that gives state-of-the-art performance in non-linear regression. On the commonly used UCI regression benchmark, MFVI with the reparameterisation trick (Blundell et al., 2015; Kingma et al., 2015) often outperforms Stochastic Gradient Langevin Dynamics, Probabilistic Back-Propagation and ensemble methods (Bui et al., 2016; Tomczak et al., 2018), and is competitive with Monte Carlo Dropout (see Mukhoti et al., 2018, for a recent strong baseline for Monte Carlo Dropout on the UCI regression datasets).

Performance on the UCI datasets is usually measured by held out log-likelihood. This represents both accuracy and uncertainty quantification, since it penalises methods that are overconfident in addition to being inaccurate. It is therefore perhaps surprising that MFVI performs poorly on sequential decision making tasks that require good uncertainty quantification, such as contextual bandits (Riquelme et al., 2018). To perform well, a method must ‘know what it knows, and what it doesn’t know’: it should have high confidence near clusters of observations, and low confidence elsewhere. More specifically, a well-calibrated network should predict with high uncertainty far from data, as well as in regions between separated clusters of observations. However, the current UCI benchmark is not suitable for evaluating ‘in-between’ uncertainty, as the test set is obtained by uniformly subsampling the full dataset. We therefore design another UCI benchmark to test for in-between uncertainty, by taking the ‘middle region’ of the full dataset as the test set. We find that although MFVI performs well on the standard UCI benchmark, it can fail catastrophically on the in-between version, showing the detrimental effect the mean-field approximation has on in-between uncertainty. In contrast, a classical technique, the linearised Laplace approximation (MacKay, 1992), performs well on both.


This paper focuses on two approximate inference techniques for Bayesian Neural Networks (BNNs): Variational Inference (VI) and the Laplace approximation. We consider networks whose output $f(x;\theta)$ given input $x$ and parameters $\theta$ is interpreted as the mean of a Gaussian distribution with homoscedastic output noise variance $\sigma^2$. We place a diagonal Gaussian prior over $\theta$, here written as a column vector of all weights and biases in the network.

Variational Inference. Let the posterior distribution over $\theta$ given a dataset $\mathcal{D}$ be $p(\theta|\mathcal{D})$. VI approximates this posterior with a simpler distribution $q_\phi(\theta)$. The variational parameters $\phi$ are learned by optimising a simple Monte Carlo estimate of the Evidence Lower Bound (ELBO):

$$\mathcal{L}(\phi) = \frac{1}{K}\sum_{k=1}^{K}\log p(\mathcal{D}|\theta_k) - \mathrm{KL}\left[q_\phi(\theta)\,\|\,p(\theta)\right],$$

where $\theta_1,\dots,\theta_K$ are sampled from $q_\phi(\theta)$. Optimising the ELBO minimises the KL-divergence between $q_\phi(\theta)$ and the true posterior. Once $\phi$ is learned, we can make predictions by Monte Carlo sampling from $q_\phi(\theta)$:

$$p(y^*|x^*,\mathcal{D}) \approx \frac{1}{K}\sum_{k=1}^{K} p(y^*|x^*,\theta_k), \qquad \theta_k \sim q_\phi(\theta). \quad (1)$$

A common and scalable choice for the form of is the mean-field Gaussian approximation (MFVI), which is a fully factorised Gaussian distribution. Another choice is to let be a full covariance Gaussian (FCVI). This is more flexible, but the number of variational parameters is now quadratic in the number of parameters in the network.
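As a concrete illustration, the following is a minimal numpy sketch of the mean-field Gaussian approximation with the reparameterisation trick for a small regression network. The 1-hidden-layer tanh architecture, sample count and noise level are illustrative choices, not the paper's exact setup.

```python
import numpy as np

N_HIDDEN = 5
N_PARAMS = 3 * N_HIDDEN + 1  # W1, b1, W2 and a scalar output bias

def forward(theta, x):
    # Unpack a 1-hidden-layer tanh network from the flat parameter vector.
    W1 = theta[:N_HIDDEN].reshape(N_HIDDEN, 1)
    b1 = theta[N_HIDDEN:2 * N_HIDDEN]
    W2 = theta[2 * N_HIDDEN:3 * N_HIDDEN].reshape(1, N_HIDDEN)
    b2 = theta[3 * N_HIDDEN]
    h = np.tanh(x @ W1.T + b1)
    return (h @ W2.T + b2).ravel()

def kl_to_standard_normal(mu, log_std):
    # KL[q_phi || N(0, I)] for a fully factorised Gaussian q_phi.
    var = np.exp(2 * log_std)
    return 0.5 * np.sum(var + mu**2 - 1.0 - 2 * log_std)

def elbo_estimate(phi, x, y, sigma=0.1, n_samples=32, seed=0):
    # Simple Monte Carlo estimate of the ELBO; phi stacks means and log-stds.
    rng = np.random.default_rng(seed)
    mu, log_std = np.split(phi, 2)
    ll = 0.0
    for _ in range(n_samples):
        # Reparameterisation trick: theta = mu + std * eps, eps ~ N(0, I).
        theta = mu + np.exp(log_std) * rng.standard_normal(mu.shape)
        resid = y - forward(theta, x)
        ll += -0.5 * np.sum(resid**2) / sigma**2 \
              - 0.5 * len(y) * np.log(2 * np.pi * sigma**2)
    return ll / n_samples - kl_to_standard_normal(mu, log_std)
```

Note that the number of variational parameters is only twice the number of network parameters (a mean and a log-standard-deviation each), which is what makes the mean-field choice scalable compared to a full covariance.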

Laplace Approximation. The Laplace approximation (Denker & LeCun, 1991; MacKay, 1992) finds a mode $\theta_{\text{MAP}}$ of the posterior, and sets the approximate posterior to $q(\theta) = \mathcal{N}(\theta;\theta_{\text{MAP}},\Sigma)$. $\Sigma$ is set such that the curvature of $\log p(\theta|\mathcal{D})$ matches the curvature of the logarithm of the Gaussian approximation at $\theta_{\text{MAP}}$, that is:

$$\Sigma^{-1} = -\nabla_\theta\nabla_\theta \log p(\theta|\mathcal{D})\big|_{\theta=\theta_{\text{MAP}}}.$$

In words, $\Sigma$ is the negative inverse Hessian evaluated at the MAP solution. In practice we use the Gauss-Newton matrix, which is guaranteed to be positive semi-definite, and can be evaluated using only first derivatives:

$$\Sigma^{-1} \approx \frac{1}{\sigma^2}\sum_{n=1}^{N} g_n g_n^\top + \mathrm{diag}(\lambda).$$

Here $g_n = \nabla_\theta f(x_n;\theta)\big|_{\theta=\theta_{\text{MAP}}}$ and $\lambda$ is a vector whose $i$th element is $1/\sigma_{p,i}^2$, where $\sigma_{p,i}^2$ is the prior variance of $\theta_i$.

Once $\theta_{\text{MAP}}$ and $\Sigma$ are obtained, there are two different ways to make predictions. The first is to Monte Carlo sample from the approximate posterior as in equation (1). We refer to this method as Sampled Laplace (SL). Unfortunately, the Laplace approximation is known to cause severe underfitting (Lawrence, 2001). An alternative procedure which empirically alleviates this is to linearise the output of the network about $\theta_{\text{MAP}}$. This leads to a linear Gaussian model that can be solved exactly for the predictive distribution:

$$p(y^*|x^*,\mathcal{D}) \approx \mathcal{N}\!\left(y^*;\, f(x^*;\theta_{\text{MAP}}),\; g(x^*)^\top\Sigma\, g(x^*) + \sigma^2\right), \quad (2)$$

where $g(x^*) = \nabla_\theta f(x^*;\theta)\big|_{\theta=\theta_{\text{MAP}}}$; see the appendix for details. We refer to this method as Linearised Laplace (LL).

Finding $\theta_{\text{MAP}}$ is identical to standard neural network training. Once at a mode, calculating the Gauss-Newton matrix requires one backward pass for each element of the dataset, which has a cost that scales linearly in the number of observations. Lastly, the Laplace approximation requires inverting this matrix, which has cubic cost in the number of parameters. This is still tractable for the smaller networks typically considered for regression on UCI datasets. Recent work has applied Kronecker-Factored Approximate Curvature (K-FAC) to obtain a scalable method; however, the authors used sampling instead of linearisation and had to take steps to mitigate the underfitting problem inherent in the Laplace approximation (Ritter et al., 2018).
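The Gauss-Newton covariance and linearised Laplace predictive described above can be sketched as follows for a generic scalar model `f(theta, x)`. This is an illustrative implementation, not the paper's code: gradients are taken by central finite differences for simplicity (a real implementation would use autodiff), and `sigma` and `prior_var` are placeholder values.

```python
import numpy as np

def grad_f(f, theta, x, eps=1e-5):
    # d f(theta, x) / d theta, one coordinate at a time (central differences).
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        g[i] = (f(tp, x) - f(tm, x)) / (2 * eps)
    return g

def linearised_laplace_predictive(f, theta_map, X_train, x_star,
                                  sigma=0.1, prior_var=1.0):
    d = len(theta_map)
    # Gauss-Newton precision: (1/sigma^2) sum_n g_n g_n^T + prior precision.
    prec = np.eye(d) / prior_var
    for x_n in X_train:
        g_n = grad_f(f, theta_map, x_n)
        prec += np.outer(g_n, g_n) / sigma**2
    cov = np.linalg.inv(prec)
    # Predictive mean is the MAP fit; variance comes from the linearisation.
    g_star = grad_f(f, theta_map, x_star)
    mean = f(theta_map, x_star)
    var = g_star @ cov @ g_star + sigma**2
    return mean, var
```

For a model that is already linear in its parameters the linearisation is exact, and the predictive variance grows as the test input moves away from the observed inputs.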


To test for in-between uncertainty, we compare these methods on two tasks. The first is a synthetic 1D regression dataset formed by adding Gaussian noise to a sine wave and observing two separated clusters of input points. The second uses the UCI regression datasets.

Figure 1: Mean and two standard deviation bars of the predictive distribution for $f(x)$ (without output noise).

Synthetic 1D Dataset. We plot results for four inference methods on the 1D dataset in Figure 1: MFVI, FCVI, LL and Hamiltonian Monte Carlo (HMC). (SL is not shown as the fit is so poor that the error bars completely fill the figure; see Lawrence (2001), pp. 88–91.) We use a single hidden layer network with tanh non-linearities (ReLUs caused problems with LL; see the appendix) and 50 hidden units. Diagonal Gaussian priors are used for all networks, and the observation noise is fixed to the true value. Details and additional results are in the appendix. We see that MFVI fails to represent in-between uncertainty: its error bars are of similar magnitude in the data region and the in-between region. FCVI has larger uncertainty in the middle, but is slightly underconfident in the data region. LL and HMC show high confidence in the data region and increased uncertainty in between, showing that MFVI’s failure is rooted in approximate inference, not the model class.

There are several reasons for MFVI’s overconfidence. First, we show in the appendix that a single hidden layer BNN with ReLU activations and with deterministic input weights and mean-field (possibly non-Gaussian) output weights must have an output variance that is convex as a function of its input. Such a BNN is incapable of expressing increased uncertainty between regions of low uncertainty. This would not be the case if the output weights had an unrestricted distribution. Although this insight does not immediately apply to BNNs with tanh activations and mean-field input weights, it shows that the mean-field assumption can in some cases severely restrict the complexity of uncertainty estimates a BNN can express in function space.

Second, MFVI fails to express increased in-between uncertainty because fitting data in the outer region whilst having increased uncertainty in-between requires strong dependencies in the approximate posterior. This is because in a mean-field distribution, any parameter uncertainty used to express increased in-between uncertainty leads to uncontrolled variations in the fit in the data region. The only way to have a good fit and increased in-between uncertainty is to have variations in one parameter compensated for by variations in others, such that the resulting function still passes through the data points. This is explained in detail in the appendix via a synthetic example.

Figure 2: Average test log-likelihoods on the standard splits for BNNs with one hidden layer (top) and two hidden layers (bottom). There are 50 hidden units in each layer.

Figure 3: Average test log-likelihoods on the gap splits for BNNs with one hidden layer (top) and two hidden layers (bottom). Note the scale on energy and naval, where MAP and MFVI fail catastrophically. There are 50 hidden units in each layer.

UCI Regression Datasets. We now investigate the uncertainty quality of BNNs on real-world datasets. The UCI datasets are usually split into training and test sets by subsampling the dataset uniformly. In the first experiment, we use the standard splits also used in (Hernández-Lobato & Adams, 2015; Bui et al., 2016; Mukhoti et al., 2018). In our second experiment, we create custom splits to test for in-between uncertainty. For each input dimension of the dataset, we sort the datapoints in increasing order in that dimension. We then remove the middle region of these datapoints for use as a test set; the outer datapoints form the training set, a portion of which is held out as a validation set. Thus a dataset with $D$ input dimensions has $D$ such splits. We refer to these as the ‘gap splits’. A satisfactory method would achieve good results on the standard splits (showing an ability to fit the data well) and avoid catastrophically poor results on the gap splits (showing increased in-between uncertainty). The results are shown in Figures 2 and 3. (FCVI was only run with one hidden layer due to its long training times, and only tanh was used for Laplace, as ReLUs caused problems with linearisation; see Figure 6. Full UCI results and experimental details are in the appendix.)
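The gap-split construction can be sketched as follows. One split is produced per input dimension; the held-out fraction `test_frac` is an illustrative assumption, not necessarily the paper's exact value.

```python
import numpy as np

def gap_splits(X, test_frac=1/3):
    """For each input dimension, sort the datapoints by that dimension and
    hold out the middle region as the test set."""
    n, d = X.shape
    splits = []
    for dim in range(d):
        order = np.argsort(X[:, dim])              # sort by this dimension
        n_test = int(round(n * test_frac))
        start = (n - n_test) // 2
        test_idx = order[start:start + n_test]     # the 'middle region'
        train_idx = np.concatenate([order[:start], order[start + n_test:]])
        splits.append((train_idx, test_idx))
    return splits                                  # one split per dimension
```

Unlike uniform subsampling, every test point in such a split lies between two clusters of training points along the chosen dimension, so held-out log-likelihood directly probes in-between uncertainty.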

For the standard splits MFVI and LL perform best. FCVI does poorly, likely due to optimisation difficulties. LL does much better than SL, which is surprising given that linearisation adds another approximation. However, linearisation appears to compensate for the poor Gaussian approximation (Lawrence, 2001). For the gap splits, MAP is competitive with Bayesian methods on power, protein and wine. However, on energy and naval MAP fails catastrophically, doing dozens or hundreds of nats worse than LL. The test sets of energy and naval thus show very different behaviour from their training sets, and good in-between uncertainty is required to prevent overconfident extrapolations. In this situation we would expect Bayesian methods to outperform MAP. However, MAP and MFVI perform similarly poorly on energy and naval, showing that MFVI is overconfident. The only method that performs well on the standard splits and avoids any catastrophic results on the gap splits is LL.


We have shown that MFVI fails to provide calibrated in-between uncertainty, and that the standard UCI splits fail to adequately test for it. However, the decades-old LL approximation performs far better in this regard. Although recent advances in variational inference have allowed BNNs to scale to larger architectures than ever before, in terms of uncertainty quality the mean-field approximation loses crucial expressiveness compared to the less scalable LL approximation. It is therefore key for the field of approximate inference to consider how the approximation of posteriors in parameter space affects the expressiveness of uncertainties in function space. Future work will investigate the conditions an approximate posterior must satisfy to reliably capture in-between uncertainty. It would also be natural to see if combining K-FAC Laplace (Ritter et al., 2018) with linearisation leads to improved results.



We thank David R. Burt, Sebastian W. Ober and Ross Clarke for helpful discussions. AYKF gratefully acknowledges the Trinity Hall Research Studentship and the George and Lilian Schiff Foundation for funding his studies.


  • Barber & Bishop (1998) Barber, D. and Bishop, C. M. Ensemble learning in Bayesian neural networks. Nato ASI Series F Computer and Systems Sciences, 168:215–238, 1998.
  • Blundell et al. (2015) Blundell, C., Cornebise, J., Kavukcuoglu, K., and Wierstra, D. Weight uncertainty in neural network. In International Conference on Machine Learning, pp. 1613–1622, 2015.
  • Bui et al. (2016) Bui, T., Hernández-Lobato, D., Hernández-Lobato, J. M., Li, Y., and Turner, R. Deep Gaussian processes for regression using approximate expectation propagation. In International Conference on Machine Learning, pp. 1472–1481, 2016.
  • Cho & Saul (2009) Cho, Y. and Saul, L. K. Kernel methods for deep learning. In Advances in Neural Information Processing Systems, pp. 342–350, 2009.
  • Chua et al. (2018) Chua, K., Calandra, R., McAllister, R., and Levine, S. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems, pp. 4754–4765, 2018.
  • Denker & LeCun (1991) Denker, J. S. and LeCun, Y. Transforming neural-net output levels to probability distributions. In Advances in Neural Information Processing Systems, pp. 853–859, 1991.
  • Duvenaud & Adams (2015) Duvenaud, D. and Adams, R. P. Black-box stochastic variational inference in five lines of Python. In NIPS Workshop on Black-box Learning and Inference, 2015.
  • Gal (2016) Gal, Y. Uncertainty in deep learning. PhD thesis, University of Cambridge, 2016.
  • Gal & Ghahramani (2016) Gal, Y. and Ghahramani, Z. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pp. 1050–1059, 2016.
  • Gal et al. (2017) Gal, Y., Islam, R., and Ghahramani, Z. Deep Bayesian active learning with image data. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1183–1192. JMLR. org, 2017.
  • Hernández-Lobato & Adams (2015) Hernández-Lobato, J. M. and Adams, R. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning, pp. 1861–1869, 2015.
  • Hinton & Van Camp (1993) Hinton, G. and Van Camp, D. Keeping neural networks simple by minimizing the description length of the weights. In Proceedings of the 6th Annual ACM Conference on Computational Learning Theory, 1993.
  • Kingma & Ba (2014) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Kingma et al. (2015) Kingma, D. P., Salimans, T., and Welling, M. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pp. 2575–2583, 2015.
  • Lakshminarayanan et al. (2017) Lakshminarayanan, B., Pritzel, A., and Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, pp. 6402–6413, 2017.
  • Lawrence (2001) Lawrence, N. D. Variational inference in probabilistic models. PhD thesis, University of Cambridge, 2001.
  • Lee et al. (2017) Lee, J., Bahri, Y., Novak, R., Schoenholz, S. S., Pennington, J., and Sohl-Dickstein, J. Deep neural networks as Gaussian processes. arXiv preprint arXiv:1711.00165, 2017.
  • MacKay (1992) MacKay, D. J. C. A practical Bayesian framework for backpropagation networks. Neural computation, 4(3):448–472, 1992.
  • Mukhoti et al. (2018) Mukhoti, J., Stenetorp, P., and Gal, Y. On the importance of strong baselines in Bayesian deep learning. arXiv preprint arXiv:1811.09385, 2018.
  • Neal (2012) Neal, R. M. Bayesian learning for neural networks, volume 118. Springer Science & Business Media, 2012.
  • Riquelme et al. (2018) Riquelme, C., Tucker, G., and Snoek, J. Deep Bayesian bandits showdown: An empirical comparison of Bayesian deep networks for Thompson sampling. arXiv preprint arXiv:1802.09127, 2018.
  • Ritter et al. (2018) Ritter, H., Botev, A., and Barber, D. A scalable Laplace approximation for neural networks. In International Conference on Learning Representations, 2018.
  • Snoek et al. (2015) Snoek, J., Rippel, O., Swersky, K., Kiros, R., Satish, N., Sundaram, N., Patwary, M., Prabhat, M., and Adams, R. Scalable Bayesian optimization using deep neural networks. In International Conference on Machine Learning, pp. 2171–2180, 2015.
  • Tomczak et al. (2018) Tomczak, M. B., Swaroop, S., and Turner, R. E. Neural network ensembles and variational inference revisited. In 1st Symposium on Advances in Approximate Bayesian Inference, pp. 1–11, 2018.
  • Trippe & Turner (2017) Trippe, B. and Turner, R. E. Overpruning in variational Bayesian neural networks. In NIPS Workshop on Advances in Approximate Bayesian Inference, 2017.

To obtain the linearised Laplace approximation, we linearise the output of the network about $\theta_{\text{MAP}}$:

$$f(x;\theta) \approx f(x;\theta_{\text{MAP}}) + g(x)^\top(\theta - \theta_{\text{MAP}}), \qquad g(x) = \nabla_\theta f(x;\theta)\big|_{\theta=\theta_{\text{MAP}}}.$$

We now have the following approximating distributions:

$$q(\theta) = \mathcal{N}(\theta;\theta_{\text{MAP}},\Sigma), \qquad p(y|x,\theta) \approx \mathcal{N}\!\left(y;\, f(x;\theta_{\text{MAP}}) + g(x)^\top(\theta-\theta_{\text{MAP}}),\; \sigma^2\right).$$

Since this is now a linear-Gaussian model, we can use standard formulas to obtain:

$$p(y^*|x^*,\mathcal{D}) \approx \mathcal{N}\!\left(y^*;\, f(x^*;\theta_{\text{MAP}}),\; g(x^*)^\top\Sigma\, g(x^*) + \sigma^2\right).$$

Consider a single hidden layer BNN with input $x \in \mathbb{R}^D$ and output $f(x) \in \mathbb{R}$, with a mean-field distribution over the output weights and biases but a point estimate for the input weights and biases. In detail:

$$f(x) = \sum_i w_i\, \varphi(u_i^\top x + a_i) + b,$$

where $w_i$ and $b$ are the output weights and bias, $u_i$ and $a_i$ are the input weights and biases, and $\varphi$ is the non-linearity. We assume a fully factorised approximating distribution for the output weights such that:

$$q(w, b) = q(b)\prod_i q(w_i).$$

We further assume that the $u_i$ and $a_i$ are deterministic constants. Consider the variance of the output under this distribution:

$$\mathrm{Var}[f(x)] = \sum_i \mathrm{Var}[w_i]\, \varphi(u_i^\top x + a_i)^2 + \mathrm{Var}[b]. \quad (3)$$

Equation (3) is justified since each weight is independent under $q$. This variance is a measure of the uncertainty in the output at $x$ represented by the approximate posterior $q$. Consider the Hessian of this variance, $H(x) = \nabla_x\nabla_x \mathrm{Var}[f(x)]$. Taking derivatives, with $z_i = u_i^\top x + a_i$, we have:

$$H(x) = 2\sum_i \mathrm{Var}[w_i]\left[\varphi'(z_i)^2 + \varphi(z_i)\varphi''(z_i)\right] u_i u_i^\top, \quad (4)$$

where $u_i$ is the column vector whose elements are the $i$th row of the input weight matrix. Since $H(x)$ is a sum of outer products, it will be positive semi-definite (PSD) if $\varphi'(z)^2 + \varphi(z)\varphi''(z) \ge 0$ for all $z$. This is the case for ReLU nonlinearities. The first and second derivatives of the ReLU do not exist at $z = 0$. However, if we consider $\varphi''$ to be a bump function of arbitrarily small width and area 1, then all these derivatives exist and the bracketed term is non-negative. Since the Hessian of $\mathrm{Var}[f(x)]$ is PSD, it follows that $\mathrm{Var}[f(x)]$ is a convex function of $x$. (This argument can be made rigorous by constructing a sequence of networks with non-linearities $\varphi_\epsilon$ such that $\varphi_\epsilon''$ is a triangular function at zero with area 1 and width $\epsilon$. Each network will have a convex output variance, and the variance of these networks converges pointwise to the variance of a ReLU network. Since a pointwise limit of convex functions is convex, the result holds for ReLU networks.) Therefore it is impossible for this kind of posterior to exhibit greater uncertainty in between two regions of low uncertainty.
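The convexity result can also be checked numerically. The sketch below evaluates the output variance of a 1D ReLU network with fixed input weights and mean-field output weights, and tests midpoint convexity on a grid; all weight values are arbitrary random draws, not fitted parameters.

```python
import numpy as np

# Fixed (deterministic) input weights/biases and mean-field output-weight
# variances for a 1D ReLU network with 20 hidden units. Values are arbitrary.
rng = np.random.default_rng(1)
u = rng.standard_normal(20)
a = rng.standard_normal(20)
w_var = rng.random(20)   # Var[w_i] for each mean-field output weight
b_var = 0.3              # Var[b] for the output bias

def output_variance(x):
    # Var[f(x)] = sum_i Var[w_i] * relu(u_i x + a_i)^2 + Var[b]
    phi = np.maximum(u * x + a, 0.0)   # ReLU features at scalar input x
    return np.sum(w_var * phi**2) + b_var

xs = np.linspace(-5, 5, 201)
v = np.array([output_variance(x) for x in xs])
# Midpoint convexity on the grid: v[i] <= (v[i-1] + v[i+1]) / 2.
assert np.all(v[1:-1] <= 0.5 * (v[:-2] + v[2:]) + 1e-9)
```

Since each term $\mathrm{Var}[w_i]\,\mathrm{relu}(u_i x + a_i)^2$ is a convex function of $x$ composed with an affine map, the check passes for any random draw of the fixed parameters.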

To investigate the relevance of this result to the standard case where the input parameters are not deterministic but are also mean-field, we train three ReLU BNNs on the 1D dataset in Figure 1: (i) mean-field VI on all parameters (MFVI), (ii) maximum-likelihood on the input parameters and mean-field on the output parameters (MFVI-output) and (iii) maximum-likelihood on the input parameters followed by Bayesian linear regression on the output parameters (BLR). Results are shown in Figure 4. MFVI-output has convex variance, as predicted. MFVI also has convex variance, even though its input parameters are mean-field. BLR with its full-covariance Gaussian posterior shows increased in-between uncertainty even though its input weights are deterministic, showing that it is the mean-field assumption on the output parameters that is responsible for severely restricting the expressiveness of the predictive uncertainty. Further work is required to characterise the expressiveness of mean-field distributions with deeper networks.

Figure 4: Predictive variances (without observation noise) on the 1D dataset. Black lines show -locations of the data.

To gain intuition for why MFVI fails to provide in-between uncertainty, we consider a toy example involving a single hidden layer network with two ReLU hidden units mapping $\mathbb{R} \to \mathbb{R}$:

$$f(x) = w_1\,\mathrm{ReLU}(u_1 x + a_1) + w_2\,\mathrm{ReLU}(u_2 x + a_2) + b.$$

Here $w_1, w_2$ and $b$ are the output weights and bias, and $u_1, u_2$ and $a_1, a_2$ are the input weights and biases. Consider the case where $u_1, u_2, w_1, w_2$ are all deterministic and positive so that $f$ is non-decreasing. Then:

$$f(x) = \begin{cases} b & \text{(I)} \\ w_1(u_1 x + a_1) + b & \text{(II)} \\ w_1(u_1 x + a_1) + w_2(u_2 x + a_2) + b & \text{(III)}, \end{cases}$$

where $x < -a_1/u_1$ in region (I), $-a_1/u_1 \le x < -a_2/u_2$ in region (II) and $x \ge -a_2/u_2$ in region (III). Consider a simple observed dataset with many points at $(x_1, y_1)$ and $(x_2, y_2)$ where $x_1 < x_2$ and $y_1 < y_2$. A reasonable Bayesian posterior predictive would have low uncertainty around $x_1$ and $x_2$, but large uncertainty in between. To first fit this dataset with deterministic weights, we could place $x_1$ in region (I) and $x_2$ in region (III). Then to fit the $y$-values we must set

$$b = y_1, \qquad w_1(u_1 x_2 + a_1) + w_2(u_2 x_2 + a_2) + b = y_2. \quad (5)$$
Figure 5: Samples from a 2-hidden unit neural network obtained by HMC. Notice how the position of the kinks varies between samples, leading to larger uncertainty in between the two datapoints $x_1$ and $x_2$, marked by black crosses. (For some of these samples, only one kink is between $x_1$ and $x_2$; the other is to the left of $x_1$.)

There are many settings of $w_1, w_2, a_1$ and $a_2$ that satisfy Equation 5. Consider choosing one such setting as a point estimate. To obtain a Bayesian method, we would now like to increase our uncertainty in the parameters. In particular, we should have relatively large uncertainty in the position of the ‘kinks’ $-a_1/u_1$ and $-a_2/u_2$, since they can take any values between $x_1$ and $x_2$ and fit the data equally well. (Here we assume a reasonably broad prior such that the prior probabilities of the kink locations are roughly uniform over the range $[x_1, x_2]$.) This corresponds to having large uncertainty between two regions of low uncertainty (around $x_1$ and $x_2$), as in Figure 5. To express this, we could relax the distribution over, say, $a_1$ from a delta function to a Gaussian with positive variance. However, injecting randomness in $a_1$ jeopardises the fit in region (III), since $a_1$ is involved in Equation 5. The only way to express predictive uncertainty between $x_1$ and $x_2$ and still fit the data is to have the values of the other parameters compensate for any change in $a_1$ such that Equation 5 still holds. In other words, we need strong dependencies between the parameters to simultaneously fit the data regions and express predictive uncertainty in the in-between region. (The in-between uncertainty seen in Figure 3 of Duvenaud & Adams (2015) is seemingly an exception. However, in that case radial basis function non-linearities were used. Since these have only local effects, the argument here does not apply.)

The mean-field approximation assumes that there are no dependencies. Therefore any parameter randomness used to express increased in-between uncertainty leads to uncontrolled variations of the fitted function in the data region, and Equation 5 is not satisfied. There are two possibilities: either the data fit will be poor, or the variances will be minimised (leading to a large penalty in the KL term in the ELBO). In practice MFVI finds a solution that prunes out hidden units, allowing it to fit the data with the minimum number of variances set to zero (Trippe & Turner, 2017).


Figure 6: Mean and two standard deviation bars of the predictive distribution for $f(x)$ (without output noise) using ReLU activations.

Synthetic 1D Dataset. We use single hidden layer BNNs with 50 hidden units. We include results for ReLU activations in Figure 6. To verify that MFVI’s lack of in-between uncertainty is due to approximate inference and not the model class, we include a Gaussian Process (GP) using the kernel for a BNN with infinitely many ReLU hidden units (Lee et al., 2017; Cho & Saul, 2009). Note LL shows strange discontinuous behaviour in its uncertainty. This is because the non-smooth ReLU function makes the gradient in equation (2) discontinuous. Similar behaviour is seen in (Snoek et al., 2015).

We use independent $\mathcal{N}(0,1)$ priors on the biases and $\mathcal{N}(0, 1/n_{\text{in}})$ priors on the weights, where $n_{\text{in}}$ is the number of inputs to the weight matrix. This scaling is chosen so that the GP limit exists (Neal, 2012). The observation noise is fixed as in the main text. To optimise Laplace, MFVI and FCVI we use ADAM (Kingma & Ba, 2014) with learning rate 0.001 and 20,000 epochs. We use the entire dataset for each batch. For MFVI, weight means were initialised randomly and all variances were initialised to a small constant. Bias means were initialised to zero. The local reparameterisation trick (Kingma et al., 2015) was used. For FCVI, the Cholesky decomposition of the covariance matrix was parameterised as a lower triangular matrix, with the diagonal entries made positive by exponentiating them. The diagonal entries were initialised to a small constant and the off-diagonals were initialised to 0. The mean vector was initialised randomly. For both MFVI and FCVI we approximate the ELBO during training with 32 samples. For HMC, the number of leapfrog steps was chosen uniformly between 5 and 10, and the step size was sampled uniformly at random. The chain was burned in for 10,000 iterations and samples were collected during the next 20,000 iterations. For MFVI, FCVI and HMC, the error bars in Figures 1 and 6 were estimated with 100 samples.

UCI Datasets. All BNNs had 50 neurons per hidden layer. Inputs and outputs were normalised to zero mean and unit variance. Hyperparameters were optimised by grid search on a validation set that consisted of of the training set. The best hyperparameters were used to train again on the training set with validation set combined. This was repeated for each split. Minibatches were randomly selected from the training set with replacement. For MAP and Laplace, all parameters had independent priors. For Laplace, minibatch size was . The hyperparameters optimised were: : , learning rate: , number of epochs: . For MAP, the same ranges were searched, except the number of epochs was , since we expected MAP to favour early stopping. For both methods, was initialised to and learned by maximum likelihood. For FCVI and MFVI, we used independent priors on all parameters. The hyperparameters searched for MFVI were: minibatch size: , learning rate: , number of epochs: for smaller datasets (boston, concrete, energy, wine, yacht) and for larger ones (kin8nm, naval, power, protein). The same ranges were used for FCVI except the learning rate was fixed to . The ELBO was approximated with samples. For MFVI, weight means were initialised from and all variances initialised to . Bias means were initialised to zero. For FCVI, the mean vector and covariance matrix of all the parameters in the network were optimised to maximise the ELBO. The Cholesky decomposition of the covariance matrix was parameterised directly as a lower triangular matrix, with the diagonal entries constrained to be positive by exponentiating them. The diagonal entries were initialised to and the off-diagonal entries were initialised to . The mean vector was initialised randomly from . For MFVI, FCVI and sampled Laplace, test log-likelihoods were computed by sampling times from the approximate posterior. was initialised to and learned by optimising the ELBO.

Full results are given in Tables 1 and 2. We also provide pairwise comparisons of LL versus MFVI-ReLU on the standard splits and the gap splits in Figures 7, 8 & 9. Each point corresponds to one test log-likelihood. Each colour represents a different dataset. The histogram shows the log-likelihood of the method on the $y$-axis minus that of the method on the $x$-axis. The dotted blue line is the line $y = x$.

model/dataset boston concrete energy kin8nm naval power protein wine yacht
MAP 1HL relu
MAP 1HL tanh
MAP 2HL relu
MAP 2HL tanh
MFVI 1HL relu
MFVI 1HL tanh
MFVI 2HL relu
MFVI 2HL tanh
FCVI 1HL relu
FCVI 1HL tanh
LL 1HL tanh
SL 1HL tanh
LL 2HL tanh
SL 2HL tanh
Table 1: Average test log likelihoods for standard splits. Best results within one standard error in bold.
model/dataset boston concrete energy kin8nm naval power protein wine yacht
MAP 1HL relu
MAP 1HL tanh
MAP 2HL relu
MAP 2HL tanh
MFVI 1HL relu
MFVI 1HL tanh
MFVI 2HL relu
MFVI 2HL tanh
FCVI 1HL relu
FCVI 1HL tanh
LL 1HL tanh
SL 1HL tanh
LL 2HL tanh
SL 2HL tanh
Table 2: Average test log likelihoods for gap splits. Best results within one standard error in bold.

Figure 7: Comparison of MFVI-ReLU and linearised Laplace tanh on the standard splits. Positive difference means Laplace performs better than MFVI.

Figure 8: Comparison of MFVI-ReLU and linearised Laplace tanh on the gap splits. Positive difference means Laplace performs better than MFVI. MFVI fails catastrophically on energy and naval.

Figure 9: Same as Figure 8 but with energy and naval removed. Positive difference means Laplace performs better than MFVI. The two methods now perform comparably.