Noisy Natural Gradient as Variational Inference


Guodong Zhang*     Shengyang Sun*     David Duvenaud    Roger Grosse
University of Toronto
Vector Institute
{gdzhang, ssy, duvenaud, rgrosse}@cs.toronto.edu
*Equal contribution.
Abstract

Combining the flexibility of deep learning with Bayesian uncertainty estimation has long been a goal in our field, and many modern approaches are based on variational Bayes. Unfortunately, one is forced to choose between overly simplistic variational families (e.g. fully factorized) or expensive and complicated inference procedures. We show that natural gradient ascent with adaptive weight noise can be interpreted as fitting a variational posterior to maximize the evidence lower bound (ELBO). This insight allows us to train full covariance, fully factorized, and matrix variate Gaussian variational posteriors using noisy versions of natural gradient, Adam, and K-FAC, respectively. On standard regression benchmarks, our noisy K-FAC algorithm makes better predictions and matches HMC’s predictive variances better than existing methods. Its improved uncertainty estimates lead to more efficient exploration in the settings of active learning and intrinsic motivation for reinforcement learning.

Second workshop on Bayesian Deep Learning (NIPS 2017), Long Beach, CA, USA.

1 Introduction

Combining deep learning with Bayesian uncertainty estimation has the potential to fit flexible and scalable models that are resistant to overfitting (mackay1992practical; neal1995bayesian; hinton1993keeping). Stochastic variational inference is especially appealing because it closely resembles ordinary backprop (graves2011practical; blundell2015weight), but such methods typically impose restrictive factorization assumptions on the approximate posterior, such as fully independent weights. There have been attempts to fit more expressive approximating distributions which capture correlations, such as matrix-variate Gaussians (louizos2016structured; sun2017learning) or multiplicative normalizing flows (louizos2017multiplicative), but fitting such models can be expensive without further approximations.

In this work, we introduce and exploit a surprising connection between natural gradient descent (amari1998natural) and variational inference. In particular, several approximate natural gradient optimizers have been proposed which fit tractable approximations to the Fisher matrix to gradients sampled during training (kingma2014adam; martens2015optimizing). While these procedures were described as natural gradient descent on the weights using an approximate Fisher matrix, we reinterpret these algorithms as natural gradient on a variational posterior using the exact Fisher matrix. Both the weight updates and the Fisher matrix estimation can be seen as natural gradient ascent on a unified evidence lower bound (ELBO), analogously to how Neal and Hinton (neal1998view) interpreted the E and M steps of Expectation-Maximization (E-M) as coordinate ascent on a single objective.

Using this insight, we give an alternative training method for variational Bayesian neural networks. For a factorial Gaussian posterior, it corresponds to a diagonal natural gradient method with weight noise, and matches the performance of Bayes By Backprop (blundell2015weight), but converges faster. We also present noisy K-FAC, an efficient and GPU-friendly method for fitting a full matrix-variate Gaussian posterior, using a variant of Kronecker-Factored Approximate Curvature (K-FAC) (martens2015optimizing) with correlated weight noise.

Figure 1: Normalized precision matrices for Gaussian variational posteriors trained using noisy natural gradient: (a) fully factorized Gaussian, (b) matrix variate Gaussian, (c) block tridiagonal, (d) full covariance Gaussian. We used a network with 2 hidden layers of 15 units each, trained on the Boston housing dataset.

2 Background

2.1 Variational Inference for Bayesian Neural Networks

Assume we are given a dataset $\mathcal{D}$ and a neural network architecture with a set of parameters $\mathbf{w}$. A Bayesian neural network (BNN) is defined in terms of a prior $p(\mathbf{w})$ on the weights, as well as the likelihood $p(\mathcal{D}\,|\,\mathbf{w})$. Performing inference on the BNN requires integrating over the intractable posterior distribution $p(\mathbf{w}\,|\,\mathcal{D})$. Variational Bayesian methods (hinton1993keeping; graves2011practical; blundell2015weight) attempt to fit an approximate posterior $q(\mathbf{w})$ to maximize the evidence lower bound (ELBO):

$\mathcal{L}[q] = \mathbb{E}_{q(\mathbf{w})}\!\left[\log p(\mathcal{D}\,|\,\mathbf{w})\right] - \lambda\,\mathrm{D}_{\mathrm{KL}}\!\left(q(\mathbf{w})\,\|\,p(\mathbf{w})\right) \qquad (1)$

where $\lambda$ is a regularization parameter and $\boldsymbol{\phi}$ are the parameters of the variational posterior $q_{\boldsymbol{\phi}}(\mathbf{w})$. (Proper Bayesian inference corresponds to $\lambda = 1$, but other values may work better in practice on some problems.)

The most commonly used variational BNN training method is Bayes By Backprop (BBB) (blundell2015weight), which uses a fully factorized Gaussian approximation to the posterior, i.e. $q(\mathbf{w}) = \prod_i \mathcal{N}(w_i;\, \mu_i, \sigma_i^2)$. The variational parameters $(\boldsymbol{\mu}, \boldsymbol{\sigma})$ are adapted using stochastic gradients of $\mathcal{L}$ obtained using the reparameterization trick (kingma2013auto). The resulting updates for $\boldsymbol{\mu}$ look like ordinary backpropagation updates for the weights of a non-Bayesian neural network, except that $\mathbf{w}$ is sampled from the variational posterior. Another interpretation of BBB regards $\boldsymbol{\mu}$ as a point estimate of the weights and $\boldsymbol{\sigma}$ as the standard deviations of Gaussian noise added independently for each training example. This similarity to ordinary neural net training is a big part of the appeal of BBB and other variational BNN approaches.
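To make the reparameterization-trick updates concrete, here is a minimal sketch (our own toy example, not the authors' code) of BBB-style gradient ascent on the ELBO for Bayesian linear regression with a fully factorized Gaussian posterior; the parameterization sigma = exp(rho), the learning rate, and the noise variance are arbitrary choices of this sketch:

```python
# Sketch: one Bayes By Backprop-style loop for Bayesian linear regression
# with a fully factorized Gaussian posterior q(w) = N(mu, diag(exp(2*rho))).
import numpy as np

rng = np.random.default_rng(0)
N, D = 50, 3
X = rng.normal(size=(N, D))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.5 * rng.normal(size=N)

mu, rho = np.zeros(D), np.full(D, -3.0)   # variational parameters
lam, noise_var, lr = 1.0, 0.25, 1e-3      # KL weight, likelihood noise, step size

for step in range(2000):
    sigma = np.exp(rho)
    eps = rng.normal(size=D)
    w = mu + sigma * eps                          # reparameterization: w ~ q(w)

    grad_w = X.T @ (y - X @ w) / noise_var        # d log p(D|w) / dw
    dKL_dmu, dKL_drho = mu, sigma ** 2 - 1.0      # KL(q || N(0, I)) gradients, closed form

    grad_mu = grad_w - lam * dKL_dmu              # chain rule: dw/dmu = 1
    grad_rho = grad_w * eps * sigma - lam * dKL_drho  # chain rule: dw/drho = eps * sigma

    mu += lr * grad_mu                            # stochastic gradient ascent on the ELBO
    rho += lr * grad_rho

print("posterior mean:", mu.round(2), " posterior std:", np.exp(rho).round(3))
```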

2.2 Natural Gradient

Natural gradient descent is a second-order optimization method originally proposed by Amari (amari1997neural). There are two variants of natural gradient commonly used in machine learning, which do not have standard names, but which we refer to as natural gradient for point estimation (NGPE) and natural gradient for variational inference (NGVI). While both methods are broadly applicable, we limit the present discussion to neural networks for simplicity.

In natural gradient for point estimation (NGPE), we assume the neural network computes a predictive distribution $p(y\,|\,\mathbf{x},\mathbf{w})$ and we wish to maximize a cost function $h(\mathbf{w})$, which may be the data log-likelihood. The natural gradient is the direction of steepest ascent in the Fisher information norm, and is given by $\tilde{\nabla}_{\mathbf{w}} h = \mathbf{F}^{-1}\nabla_{\mathbf{w}} h$, where $\mathbf{F} = \mathrm{Cov}_{\mathbf{x},y}\!\left(\nabla_{\mathbf{w}}\log p(y\,|\,\mathbf{x},\mathbf{w})\right)$, and the covariance is with respect to $\mathbf{x}$ sampled from the data distribution and $y$ sampled from the model's predictions. NGPE is typically justified as a way to speed up optimization; the details are inessential to this paper, but see (martens2014new) for a comprehensive overview. Because the dimension of $\mathbf{F}$ is the number of parameters, which can be in the tens of millions for modern neural networks, exact computation of the natural gradient is typically infeasible, and approximations are required (see below).

We now describe natural gradient for variational inference (NGVI) in the context of BNNs. We wish to fit the parameters $\boldsymbol{\phi}$ of a variational posterior $q_{\boldsymbol{\phi}}(\mathbf{w})$ to maximize the ELBO (Eqn. 1). Analogously to the point estimation setting, the natural gradient is defined as $\tilde{\nabla}_{\boldsymbol{\phi}}\mathcal{L} = \mathbf{F}_{\boldsymbol{\phi}}^{-1}\nabla_{\boldsymbol{\phi}}\mathcal{L}$; but in this case, $\mathbf{F}_{\boldsymbol{\phi}}$ is the Fisher matrix of $q_{\boldsymbol{\phi}}(\mathbf{w})$, i.e. $\mathbf{F}_{\boldsymbol{\phi}} = \mathbb{E}_{q}\!\left[\nabla_{\boldsymbol{\phi}}\log q_{\boldsymbol{\phi}}(\mathbf{w})\,\nabla_{\boldsymbol{\phi}}\log q_{\boldsymbol{\phi}}(\mathbf{w})^{\top}\right]$. Note that in contrast with point estimation, $\mathbf{F}_{\boldsymbol{\phi}}$ is a metric on $\boldsymbol{\phi}$, rather than $\mathbf{w}$, and its definition doesn't directly involve the data. Interestingly, because $q$ is chosen to be tractable, the natural gradient can be computed exactly, and in many cases it is even simpler than the ordinary gradient. As an important example, Hoffman et al. (hoffman2013stochastic) used natural gradient to scale up variational Bayes for latent variable models such as LDA, and found that the natural gradient ascent updates closely resemble the EM-like updates used in variational Bayes.
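As a small worked example of ours (not from the paper): for a one-dimensional Gaussian $q_{\phi}(w) = \mathcal{N}(w;\,\mu,\sigma^2)$ with $\phi = (\mu, \sigma)$, the Fisher matrix is diagonal and available in closed form, so the natural gradient is just an elementwise rescaling of the ordinary gradient:

$\mathbf{F}_{\phi} = \begin{pmatrix} 1/\sigma^{2} & 0 \\ 0 & 2/\sigma^{2} \end{pmatrix}, \qquad \tilde{\nabla}_{\phi}\mathcal{L} = \mathbf{F}_{\phi}^{-1}\nabla_{\phi}\mathcal{L} = \left(\sigma^{2}\,\frac{\partial\mathcal{L}}{\partial\mu},\;\; \frac{\sigma^{2}}{2}\,\frac{\partial\mathcal{L}}{\partial\sigma}\right).$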

In general, NGPE and NGVI need not behave similarly; however, in Section 3, we show that in the case of Gaussian variational posteriors, the two are closely related.

2.3 Kronecker-Factored Approximate Curvature

As modern neural networks may contain millions of parameters, computing and storing the exact Fisher matrix and its inverse is impractical. Kronecker-factored approximate curvature (K-FAC) (martens2015optimizing) uses a Kronecker-factored approximation to the Fisher to perform efficient approximate natural gradient updates. Consider the $l$-th layer of the neural network, whose input activations are $\mathbf{a}_{l-1}$, weight matrix $\mathbf{W}_l$, and pre-activation output $\mathbf{s}_l$, so that $\mathbf{s}_l = \mathbf{W}_l^{\top}\mathbf{a}_{l-1}$. Therefore, the weight gradient is $\nabla_{\mathbf{W}_l}\log p(y\,|\,\mathbf{x},\mathbf{w}) = \mathbf{a}_{l-1}\mathbf{g}_l^{\top}$, where $\mathbf{g}_l = \nabla_{\mathbf{s}_l}\log p(y\,|\,\mathbf{x},\mathbf{w})$. With this gradient formula, K-FAC decouples this layer's Fisher matrix using mild approximations,

$\mathbf{F}_l = \mathbb{E}\!\left[\mathrm{vec}\{\nabla_{\mathbf{W}_l}\}\,\mathrm{vec}\{\nabla_{\mathbf{W}_l}\}^{\top}\right] \approx \mathbb{E}\!\left[\mathbf{a}_{l-1}\mathbf{a}_{l-1}^{\top}\right] \otimes \mathbb{E}\!\left[\mathbf{g}_l\mathbf{g}_l^{\top}\right] = \mathbf{A}_{l-1}\otimes\mathbf{S}_l \qquad (2)$

where $\mathbf{A}_{l-1} = \mathbb{E}\!\left[\mathbf{a}_{l-1}\mathbf{a}_{l-1}^{\top}\right]$ and $\mathbf{S}_l = \mathbb{E}\!\left[\mathbf{g}_l\mathbf{g}_l^{\top}\right]$. The approximation above assumes independence between $\mathbf{a}_{l-1}$ and $\mathbf{g}_l$, which proves to be accurate in practice. Further, assuming between-layer independence, the whole Fisher matrix $\mathbf{F}$ can be approximated as block diagonal, consisting of the layerwise Fisher matrices $\mathbf{F}_l$. Decoupling $\mathbf{F}_l$ into $\mathbf{A}_{l-1}$ and $\mathbf{S}_l$ not only avoids the memory cost of storing $\mathbf{F}_l$ explicitly, but also yields efficient natural gradient computation:

$\mathbf{F}_l^{-1}\,\mathrm{vec}\{\nabla_{\mathbf{W}_l} h\} = \left(\mathbf{A}_{l-1}\otimes\mathbf{S}_l\right)^{-1}\mathrm{vec}\{\nabla_{\mathbf{W}_l} h\} = \mathrm{vec}\!\left[\mathbf{A}_{l-1}^{-1}\,(\nabla_{\mathbf{W}_l} h)\,\mathbf{S}_l^{-1}\right] \qquad (3)$

As shown by Eqn. 3, computing the natural gradient using K-FAC only involves matrix transformations of the same size as $\mathbf{W}_l$, making it very efficient.
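As a concrete illustration of Eqns. 2 and 3, the sketch below (ours, not the reference K-FAC implementation) estimates the Kronecker factors for a single fully connected layer from a batch of activations and pre-activation gradients, and applies the resulting approximate natural gradient; the damping constant and shapes are assumptions of this example:

```python
# Sketch: Kronecker-factored natural gradient for one fully connected layer.
# a: layer inputs (batch x n_in), g: gradients w.r.t. pre-activations (batch x n_out).
import numpy as np

def kfac_natural_gradient(a, g, grad_W, damping=1e-3):
    batch = a.shape[0]
    A = a.T @ a / batch                    # A = E[a a^T]   (n_in  x n_in)
    S = g.T @ g / batch                    # S = E[g g^T]   (n_out x n_out)
    A_damped = A + damping * np.eye(A.shape[0])
    S_damped = S + damping * np.eye(S.shape[0])
    # Apply (A x S)^{-1} to vec(grad_W) without forming the Kronecker product,
    # matching Eqn. 3: A^{-1} grad_W S^{-1}.
    return np.linalg.solve(A_damped, grad_W) @ np.linalg.inv(S_damped)

rng = np.random.default_rng(0)
a = rng.normal(size=(128, 20))             # activations feeding the layer
g = rng.normal(size=(128, 10))             # backpropagated pre-activation gradients
grad_W = a.T @ g / 128                     # gradient w.r.t. the (20 x 10) weight matrix
update = kfac_natural_gradient(a, g, grad_W)
print(update.shape)                        # (20, 10)
```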

3 Variational Inference using Noisy Natural Gradient

In this section, we draw a surprising relationship between natural gradient for point estimation (NGPE) of the weights of a neural net, and natural gradient for variational inference (NGVI) of a Gaussian posterior. (These terms are explained in Section 2.2.) In particular, we show that the NGVI updates can be approximated with a variant of NGPE with adaptive weight noise which we term Noisy Natural Gradient (NNG). This insight allows us to train variational posteriors with a variety of structures using noisy versions of existing optimization algorithms (see Fig. 1).

In NGVI, our goal is to maximize the ELBO (Eqn. 1) with respect to the parameters $\boldsymbol{\phi}$ of a variational posterior distribution $q_{\boldsymbol{\phi}}(\mathbf{w})$. We assume $q$ is a multivariate Gaussian parameterized by $\boldsymbol{\phi} = (\boldsymbol{\mu}, \boldsymbol{\Sigma})$. Building on the analysis of (opper2009variational), we determine the natural gradient of the ELBO with respect to $\boldsymbol{\mu}$ and the precision matrix $\boldsymbol{\Lambda} = \boldsymbol{\Sigma}^{-1}$ (see Appendix A for details):

$\tilde{\nabla}_{\boldsymbol{\mu}}\mathcal{L} = \boldsymbol{\Lambda}^{-1}\,\mathbb{E}_{q}\!\left[\nabla_{\mathbf{w}}\log p(\mathcal{D}\,|\,\mathbf{w}) + \lambda\,\nabla_{\mathbf{w}}\log p(\mathbf{w})\right] \qquad (4)$
$\tilde{\nabla}_{\boldsymbol{\Lambda}}\mathcal{L} = -\,\mathbb{E}_{q}\!\left[\nabla^2_{\mathbf{w}}\log p(\mathcal{D}\,|\,\mathbf{w}) + \lambda\,\nabla^2_{\mathbf{w}}\log p(\mathbf{w})\right] - \lambda\boldsymbol{\Lambda} \qquad (5)$

Here, $\lambda$ is the weighting of the KL term in Eqn. 1. We make several observations. First, the term inside the expectation in Eqn. 4 is the gradient for MAP estimation of $\mathbf{w}$. Second, the update for $\boldsymbol{\mu}$ is preconditioned by $\boldsymbol{\Lambda}^{-1}$, which encourages faster movement in directions of higher posterior uncertainty. Finally, the fixed point equation for $\boldsymbol{\Lambda}$ is given by

$\boldsymbol{\Lambda} = -\tfrac{1}{\lambda}\,\mathbb{E}_{q}\!\left[\nabla^2_{\mathbf{w}}\log p(\mathcal{D}\,|\,\mathbf{w})\right] - \mathbb{E}_{q}\!\left[\nabla^2_{\mathbf{w}}\log p(\mathbf{w})\right].$

Hence, if $\lambda = 1$ (as in proper variational inference), $\boldsymbol{\Lambda}$ will tend towards the expected Hessian of $-\log p(\mathbf{w}, \mathcal{D})$, so the update rule for $\boldsymbol{\mu}$ will somewhat resemble a Newton-Raphson update. Smaller values of $\lambda$ result in increased weighting of the log-likelihood Hessian, and hence a more concentrated posterior. (We note that, in independent work, Khan et al. (khan2017variational) recently derived a similar stochastic Newton update; see Section 4.)

Based on these formulas, we derive the following stochastic natural gradient ascent updates. In each iteration, we sample a data batch from $\mathcal{D}$ and weights $\mathbf{w} \sim q(\mathbf{w})$:

$\boldsymbol{\mu} \leftarrow \boldsymbol{\mu} + \alpha\,\boldsymbol{\Lambda}^{-1}\!\left[\nabla_{\mathbf{w}}\log p(\mathcal{D}\,|\,\mathbf{w}) + \lambda\,\nabla_{\mathbf{w}}\log p(\mathbf{w})\right], \qquad \boldsymbol{\Lambda} \leftarrow (1 - \beta\lambda)\,\boldsymbol{\Lambda} - \beta\!\left[\nabla^2_{\mathbf{w}}\log p(\mathcal{D}\,|\,\mathbf{w}) + \lambda\,\nabla^2_{\mathbf{w}}\log p(\mathbf{w})\right] \qquad (6)$

where $\alpha$ and $\beta$ are separate learning rates for $\boldsymbol{\mu}$ and $\boldsymbol{\Lambda}$. Roughly speaking, the update rule for $\boldsymbol{\Lambda}$ corresponds to an exponential moving average of the Hessian, and the update rule for $\boldsymbol{\mu}$ is a stochastic Newton step using $\boldsymbol{\Lambda}$.

This update rule has two problems. First, the log-likelihood Hessian may be hard to compute, and is undefined for neural net architectures which use non-differentiable activation functions such as ReLU. Second, if the negative log-likelihood is non-convex (as is the case for multilayer neural nets), the Hessian could have negative eigenvalues, so without further constraints, the update may result in a $\boldsymbol{\Lambda}$ which is not positive semidefinite. We circumvent both of these problems by approximating the negative log-likelihood Hessian with the NGPE Fisher matrix $\mathbf{F} = \mathrm{Cov}_{\mathbf{x},y}\!\left(\nabla_{\mathbf{w}}\log p(y\,|\,\mathbf{x},\mathbf{w})\right)$, giving the update in Eqn. 7. This approximation guarantees that $\boldsymbol{\Lambda}$ is positive semidefinite, and it allows for tractable approximations such as K-FAC (see below). In the context of BNNs, approximating the log-likelihood Hessian with the Fisher was first proposed by Graves (graves2011practical), so we refer to it henceforth as the Graves approximation. (Footnote 1: Eqn. 7 leaves ambiguous what distribution the gradients are sampled from. One option is to use the same gradients used for optimization, as is done by Graves's method (graves2011practical) and by Adam (kingma2014adam). The Fisher matrix estimated using the empirical gradients is known as the empirical Fisher. Alternatively, one can sample the targets from the model's predictions, as done in K-FAC (martens2015optimizing). The resulting $\mathbf{F}$ is known as the true Fisher. The true Fisher is a better approximation to the Hessian (martens2014new), and this is what we use throughout our experiments.)

$\boldsymbol{\Lambda} \leftarrow (1 - \beta\lambda)\,\boldsymbol{\Lambda} + \beta\!\left[\mathbf{F}(\mathbf{w}) - \lambda\,\nabla^2_{\mathbf{w}}\log p(\mathbf{w})\right] \qquad (7)$

In the case where the output layer of the network represents the natural parameters of an exponential family distribution (as is typical in regression or classification), the Graves approximation can be justified in terms of the generalized Gauss-Newton approximation to the Hessian; see (martens2014new) for details.
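To illustrate the footnote's distinction between the empirical Fisher and the true Fisher, the following sketch (an illustrative example of ours, not code from the paper) estimates a diagonal Fisher for linear regression with a unit-variance Gaussian likelihood by sampling targets from the model's own predictive distribution rather than using the training targets:

```python
# Sketch: diagonal "true Fisher" estimate for linear regression with a
# unit-variance Gaussian likelihood, using targets sampled from the model.
import numpy as np

def diag_true_fisher(X, w, rng, n_samples=10):
    f = np.zeros_like(w)
    for _ in range(n_samples):
        y_sampled = X @ w + rng.normal(size=X.shape[0])   # y ~ p(y | x, w)
        # Per-example gradient of log N(y | x^T w, 1) w.r.t. w is (y - x^T w) x.
        per_example_grads = (y_sampled - X @ w)[:, None] * X
        f += (per_example_grads ** 2).mean(axis=0)        # E[(d log p / dw)^2]
    return f / n_samples

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5))
w = rng.normal(size=5)
print(diag_true_fisher(X, w, rng))   # approaches the diagonal of E[x x^T] here
```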

3.1 Simplifying the Update Rules

We have now derived a stochastic natural gradient update rule for Gaussian variational posteriors. In this section, we rewrite the update rules in order to disentangle hyperparameters and highlight relationships with NGPE.

First, by separating out the two terms in the moving average, we can rewrite the update Eqn. 7 in terms of exponential moving averages of the Fisher matrix and the prior Hessian:

$\bar{\mathbf{F}} \leftarrow (1 - \beta\lambda)\,\bar{\mathbf{F}} + \beta\lambda\,\mathbf{F}(\mathbf{w}), \qquad \bar{\mathbf{H}} \leftarrow (1 - \beta\lambda)\,\bar{\mathbf{H}} + \beta\lambda\,\big(-\nabla^2_{\mathbf{w}}\log p(\mathbf{w})\big), \qquad \boldsymbol{\Lambda} = \tfrac{1}{\lambda}\,\bar{\mathbf{F}} + \bar{\mathbf{H}} \qquad (8)$

This update rule highlights an awkward interaction between the KL weight $\lambda$ and the learning rates $\alpha$ and $\beta$. We can fix this by writing the update rules in terms of alternative learning rates $\tilde{\alpha} = \alpha\lambda$ and $\tilde{\beta} = \beta\lambda$.

$\bar{\mathbf{F}} \leftarrow (1 - \tilde{\beta})\,\bar{\mathbf{F}} + \tilde{\beta}\,\mathbf{F}(\mathbf{w}), \qquad \boldsymbol{\mu} \leftarrow \boldsymbol{\mu} + \tilde{\alpha}\,\big(\bar{\mathbf{F}} + \lambda\bar{\mathbf{H}}\big)^{-1}\!\left[\nabla_{\mathbf{w}}\log p(\mathcal{D}\,|\,\mathbf{w}) + \lambda\,\nabla_{\mathbf{w}}\log p(\mathbf{w})\right], \qquad \mathbf{w} \sim \mathcal{N}\!\left(\boldsymbol{\mu},\,\lambda\big(\bar{\mathbf{F}} + \lambda\bar{\mathbf{H}}\big)^{-1}\right) \qquad (9)$

Observe that if $\boldsymbol{\mu}$ is viewed as a point estimate of the weights, this update rule resembles NGPE with an exponential moving average of the Fisher matrix. The differences are that the Fisher matrix is damped by adding $\lambda\bar{\mathbf{H}}$ (see below), and that the weights are sampled from $q(\mathbf{w})$, which is a Gaussian with covariance $\boldsymbol{\Sigma} = \lambda\big(\bar{\mathbf{F}} + \lambda\bar{\mathbf{H}}\big)^{-1}$. Because our update rule so closely resembles NGPE with correlated weight noise, we refer to this method as Noisy Natural Gradient (NNG).

3.2 Damping

In the special case of a spherical Gaussian prior $p(\mathbf{w}) = \mathcal{N}(\mathbf{0}, \eta\mathbf{I})$, we have that $-\nabla^2_{\mathbf{w}}\log p(\mathbf{w}) = \eta^{-1}\mathbf{I}$, and therefore the prior term in Eqn. 8 is a multiple of the identity. Therefore, $\bar{\mathbf{H}} = \bar{\tau}\,\mathbf{I}$, where $\bar{\tau}$ is an exponential moving average of the prior precision $\eta^{-1}$. (If $\eta$ is fixed, then we have simply $\bar{\mathbf{H}} = \eta^{-1}\mathbf{I}$.) Interestingly, in second-order optimization, it is very common to dampen the updates by adding a multiple of the identity matrix to the curvature before inversion in order to compensate for errors in the quadratic approximation to the cost. NNG automatically achieves this effect, with the strength of the damping being $\lambda\bar{\tau}$ (i.e. $\lambda/\eta$ for a fixed prior). In practice, it may be advantageous to add additional damping for purposes of optimization. However, we found we did not need to do this in any of our experiments. (Footnote 2: We speculate that because the precision is fit using variational inference rather than a Taylor approximation, it is encouraged to reflect the global shape of the local mode of the distribution, helping to stabilize the update.)
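In display form, using the moving averages from Eqn. 8 as reconstructed above (the notation is ours), the posterior covariance for a fixed spherical prior is

$\boldsymbol{\Sigma} = \lambda\big(\bar{\mathbf{F}} + \lambda\bar{\mathbf{H}}\big)^{-1} = \lambda\left(\bar{\mathbf{F}} + \tfrac{\lambda}{\eta}\,\mathbf{I}\right)^{-1},$

so the KL term contributes exactly the kind of isotropic damping that second-order optimizers normally add by hand.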

The formula above demonstrates an interesting connection between the posterior covariance and the Fisher information: imposing different structures on the Fisher corresponds to different families of posterior distributions. Since a full covariance multivariate Gaussian posterior is computationally impractical for all but the smallest networks, we next describe several simplified Fisher structures and connect them to different posterior families.

3.3 Fitting Fully Factorized Gaussian Posteriors with Noisy Adam

The discussion so far has concerned NGVI updates for a full covariance Gaussian posterior. Unfortunately, the number of parameters needed to represent a full covariance Gaussian is of order $(\dim \mathbf{w})^2$. Since $\dim \mathbf{w}$ can be in the millions even for a relatively small network, representing a full covariance Gaussian is impractical. There has been much work on tractable approximations to second-order optimization. Perhaps the simplest approach is to approximate $\mathbf{F}$ with a diagonal matrix, as done by Adagrad (duchi2011adaptive) and Adam (kingma2014adam). For our NNG approach, this yields the following updates, where division and squaring are applied elementwise:

$\mathbf{f} \leftarrow (1 - \tilde{\beta})\,\mathbf{f} + \tilde{\beta}\,\big(\nabla_{\mathbf{w}}\log p(\mathcal{D}\,|\,\mathbf{w})\big)^2, \qquad \boldsymbol{\mu} \leftarrow \boldsymbol{\mu} + \tilde{\alpha}\,\frac{\nabla_{\mathbf{w}}\log p(\mathcal{D}\,|\,\mathbf{w}) + \lambda\,\nabla_{\mathbf{w}}\log p(\mathbf{w})}{\mathbf{f} + \lambda/\eta}, \qquad \mathbf{w} \sim \mathcal{N}\!\left(\boldsymbol{\mu},\,\lambda\,\mathrm{diag}\big(\mathbf{f} + \lambda/\eta\big)^{-1}\right) \qquad (10)$

These update rules are similar in spirit to methods such as Adam, but with the addition of adaptive weight noise. We note that these update rules also differ from Adam in some details: (1) Adam keeps exponential moving averages of the gradients, which is equivalent to momentum, and (2) Adam applies the square root to the entries of in the denominator. We regard these differences as inessential. We define noisy Adam by modifying the above update rules to be consistent with Adam in these two respects; the full procedure is given in Alg. 1. We note that these modifications may affect optimization performance, but they don’t change the fixed points, i.e. they are fitting the same functional form of the variational posterior using the same variational objective.

Require: α: stepsize
Require: β1, β2: exponential decay rates for updating μ and the Fisher f
Require: λ, η: KL weighting, prior variance
   μ ← 0, m ← 0, f ← 0
   k ← 0
   Calculate the damping term γ = λ / (Nη) (Note: in standard Adam, the analogous constant ε is typically set to a small value such as 1e−8)
   while stopping criterion not met do
      k ← k + 1
      w ∼ N(μ, (λ/N) diag(f + γ)⁻¹)
      v ← ∇_w log p(y | x, w) − γ w
      m ← β1 m + (1 − β1) v      (Update momentum)
      f ← β2 f + (1 − β2) (∇_w log p(y | x, w))²
      m̂ ← m / (1 − β1ᵏ)
      m̃ ← m̂ / (f + γ)
      μ ← μ + α m̃      (Update parameters)
   end while
Algorithm 1: Noisy Adam. Superscript k denotes the k-th iteration. Differences from standard Adam are shown in blue.
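The following is a minimal NumPy sketch of a noisy-Adam-style update on Bayesian linear regression, written from the update rules described in the text; the scaling conventions (per-example gradients, damping gamma = lam / (N * prior_var)), the omission of bias correction, and all constants are simplifying assumptions of this sketch rather than details taken from the paper:

```python
# Sketch: noisy Adam on Bayesian linear regression with a fully factorized
# Gaussian posterior.
import numpy as np

rng = np.random.default_rng(0)
N, D, B = 500, 4, 32
X = rng.normal(size=(N, D))
w_true = rng.normal(size=D)
y = X @ w_true + 0.3 * rng.normal(size=N)

lam, prior_var, noise_var = 1.0, 1.0, 0.3 ** 2
alpha, beta1, beta2 = 1e-2, 0.9, 0.999
gamma = lam / (N * prior_var)                      # damping from the prior

mu, m, f = np.zeros(D), np.zeros(D), np.ones(D)
for step in range(5000):
    idx = rng.choice(N, size=B, replace=False)
    Xb, yb = X[idx], y[idx]

    w = mu + rng.normal(size=D) * np.sqrt(lam / (N * (f + gamma)))   # w ~ q(w)
    g = Xb.T @ (yb - Xb @ w) / (noise_var * B)     # avg per-example log-lik gradient
    v = g - gamma * w                              # add the (scaled) log-prior gradient

    m = beta1 * m + (1 - beta1) * v                # momentum
    f = beta2 * f + (1 - beta2) * g ** 2           # squared gradients approximate the
                                                   # diagonal curvature, as in Adam
    mu = mu + alpha * m / (f + gamma)              # preconditioned update

print("mu:", mu.round(2), " w_true:", w_true.round(2))
```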

3.4 Fitting Matrix Variate Gaussian Posteriors with Noisy K-FAC

There has been much interest in fitting BNNs with matrix variate Gaussian (MVG) posteriors in order to compactly capture posterior correlations between different weights (louizos2016structured; sun2017learning). Let $\mathbf{W} \in \mathbb{R}^{n_1 \times n_2}$ denote the weights for one layer of a fully connected network. An MVG distribution is a Gaussian distribution over $\mathbf{W}$ whose covariance is a Kronecker product, i.e. $\mathrm{Cov}(\mathrm{vec}(\mathbf{W})) = \boldsymbol{\Sigma}_2 \otimes \boldsymbol{\Sigma}_1$. (When we refer to a BNN with an "MVG posterior", we mean that the weights in different layers are independent, and the weights for each layer follow an MVG distribution.) One can sample from an MVG distribution by sampling a matrix $\mathbf{E}$ of i.i.d. standard Gaussians and then taking $\mathbf{W} = \mathbf{M} + \boldsymbol{\Sigma}_1^{1/2}\mathbf{E}\,\boldsymbol{\Sigma}_2^{1/2}$, where $\mathbf{M}$ is the mean. If $\mathbf{W}$ is of size $n_1 \times n_2$, then the MVG covariance requires approximately $n_1^2 + n_2^2$ parameters to represent, in contrast with a full covariance matrix over $\mathrm{vec}(\mathbf{W})$, which would require $n_1^2 n_2^2$. Therefore, MVGs are potentially powerful due to their compact representation of posterior covariances between weights. However, training MVG posteriors is very difficult, since computing the gradients and enforcing the positive semidefinite constraint for $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$ typically requires expensive matrix operations such as inversion. Therefore, existing methods for fitting MVG posteriors typically impose additional structure such as diagonal covariance (louizos2016structured) or products of Householder transformations (sun2017learning) to ensure efficient updates.

We observe that K-FAC (martens2015optimizing) uses a Kronecker-factored approximation to the Fisher matrix for each layer's weights, as in Eqn. 2. By plugging this approximation into Eqn. 8, we obtain an MVG posterior. In more detail, each block obeys the Kronecker factorization $\mathbf{F}_l = \mathbf{A}_{l-1} \otimes \mathbf{S}_l$, where $\mathbf{A}_{l-1}$ and $\mathbf{S}_l$ are the covariance matrices of the activations and the pre-activation gradients, respectively. K-FAC estimates $\mathbf{A}_{l-1}$ and $\mathbf{S}_l$ online using exponential moving averages which, conveniently for our purposes, are closely analogous to the exponential moving averages defining $\bar{\mathbf{F}}$ in Eqn. 9:

$\bar{\mathbf{A}}_{l-1} \leftarrow (1 - \tilde{\beta})\,\bar{\mathbf{A}}_{l-1} + \tilde{\beta}\,\mathbf{a}_{l-1}\mathbf{a}_{l-1}^{\top}, \qquad \bar{\mathbf{S}}_l \leftarrow (1 - \tilde{\beta})\,\bar{\mathbf{S}}_l + \tilde{\beta}\,\mathbf{g}_l\mathbf{g}_l^{\top} \qquad (11)$

We note that our scheme is not equivalent to performing natural gradient ascent directly on $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$, so it is best regarded as a tractable approximation to Eqn. 9. Conveniently, because these factors are estimated from the empirical covariances, they (and hence also the layerwise precision) are automatically positive semidefinite.

Plugging the above formulas into Eqn. 8 does not quite yield an MVG posterior due to the addition of the prior Hessian term. For a general prior, there may be no compact representation of the resulting covariance. However, for spherical Gaussian priors (Footnote 3: We consider spherical Gaussian priors for simplicity, but this trick can be extended to any prior whose Hessian is Kronecker-factored, such as group sparsity.), we can approximate $\boldsymbol{\Sigma}_l$ using a trick proposed by (martens2015optimizing) in the context of damping. In particular, we add $\pi_l\sqrt{\gamma}\,\mathbf{I}$ and $\pi_l^{-1}\sqrt{\gamma}\,\mathbf{I}$, for scalar constants $\gamma$ and $\pi_l$, to the individual Kronecker factors $\bar{\mathbf{A}}_{l-1}$ and $\bar{\mathbf{S}}_l$. In this way, the covariance decomposes as the Kronecker product of two terms:

$\boldsymbol{\Sigma}_l = \lambda\left(\bar{\mathbf{A}}_{l-1} + \pi_l\sqrt{\gamma}\,\mathbf{I}\right)^{-1} \otimes \left(\bar{\mathbf{S}}_l + \pi_l^{-1}\sqrt{\gamma}\,\mathbf{I}\right)^{-1} \qquad (12)$

This factorization corresponds to a matrix variate Gaussian posterior whose two covariance factors are the inverses of the two damped Kronecker factors, where the $\lambda$ factor is arbitrarily assigned to the first factor. We refer to this BNN training method as noisy K-FAC. The full algorithm is given as Alg. 2.

K-FAC is a very efficient optimizer, and has been observed to yield significant speedups over standard methods for training convolutional networks grosse2016kronecker () and improved sample efficiency in reinforcement learning wu2017scalable (). The algorithm is GPU-friendly, and efficient implementations typically only introduce small (e.g. 1.5–2x) overhead compared with ordinary SGD. However, our focus here is not optimization speed, but rather the ability of noisy K-FAC to fit flexible MVG posteriors, in order to obtain improved uncertainty estimates compared with fully factorized Gaussians.

Require: α: stepsize
Require: β: exponential moving average parameter for the covariance factors
Require: λ, η: KL weighting, prior variance
Require: Tstats, Tinv: stats and inverse update intervals
   k ← 0
   Initialize {M_l, Ā_{l−1}, S̄_l} for each layer l
   Calculate the damping term γ = λ / (Nη) (Note: in standard K-FAC, γ is chosen manually)
   while stopping criterion not met do
      k ← k + 1
      W_l ∼ MN(M_l, (λ/N)[Ā_{l−1} + π_l√γ I]⁻¹, [S̄_l + π_l⁻¹√γ I]⁻¹) for each layer l
      if k ≡ 0 (mod Tstats) then
         Sample the targets from the model’s predictive distribution.
         Update the factors {Ā_{l−1}, S̄_l} using Eqn. 11 with decay rate β
      end if
      if k ≡ 0 (mod Tinv) then
         Calculate the inverses using Eqn. 12.
      end if
      V_l ← ∇_{W_l} log p(y | x, w) − γ W_l
      M_l ← M_l + α [Ā_{l−1} + π_l√γ I]⁻¹ V_l [S̄_l + π_l⁻¹√γ I]⁻¹      (Update parameters)
   end while
Algorithm 2: Noisy K-FAC. Subscript l denotes the layer, w_l = vec(W_l), and μ_l = vec(M_l). We assume zero momentum for simplicity. Differences from standard K-FAC are shown in blue.
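To make the per-layer computations concrete, here is a sketch (ours, not the authors' implementation) of how a single noisy K-FAC layer might sample weights from its matrix variate Gaussian posterior and apply a preconditioned mean update; the fixed damping split pi = 1, the lam/N scaling convention, the absence of momentum, and the function signature are all assumptions of this example:

```python
# Sketch: one noisy K-FAC layer. A_bar (n_in x n_in) and S_bar (n_out x n_out) are
# exponential moving averages of the Kronecker factors; M (n_in x n_out) is the mean.
import numpy as np

def sample_and_update(M, A_bar, S_bar, a, g, grad_M, lam, N, eta,
                      alpha=1e-3, beta=0.99, pi=1.0, rng=np.random.default_rng()):
    gamma = lam / (N * eta)                        # damping from the spherical prior
    A_d = A_bar + (np.sqrt(gamma) / pi) * np.eye(A_bar.shape[0])
    S_d = S_bar + (np.sqrt(gamma) * pi) * np.eye(S_bar.shape[0])

    # Sample W = M + sqrt(lam/N) * L1 E L2^T, so the row covariance is
    # proportional to A_d^{-1} and the column covariance to S_d^{-1}.
    L1 = np.linalg.cholesky(np.linalg.inv(A_d))
    L2 = np.linalg.cholesky(np.linalg.inv(S_d))
    E = rng.normal(size=M.shape)
    W = M + np.sqrt(lam / N) * L1 @ E @ L2.T

    # Refresh the Kronecker factors from the current minibatch statistics.
    A_bar = beta * A_bar + (1 - beta) * (a.T @ a / a.shape[0])
    S_bar = beta * S_bar + (1 - beta) * (g.T @ g / g.shape[0])

    # Preconditioned update of the mean (gradient plus the scaled prior term).
    V = grad_M - gamma * W
    M = M + alpha * np.linalg.solve(A_d, V) @ np.linalg.inv(S_d)
    return M, A_bar, S_bar, W
```

In the full algorithm, the factor statistics and their inverses would only be refreshed every Tstats and Tinv iterations, as in Alg. 2.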

3.5 Block Tridiagonal Covariance

Both the fully factorized and MVG posteriors assume independence between layers. However, in practice the weights in different layers can be tightly coupled. To better capture these dependencies, we propose to approximate $\mathbf{F}$ using the block tridiagonal approximation from (martens2015optimizing). The resulting posterior covariance is block tridiagonal, so it accounts for dependencies between adjacent layers. The noisy version of block tridiagonal K-FAC is completely analogous to the block diagonal version, but since the approximation is rather complicated, we refer the reader to (martens2015optimizing) for the details.

4 Related Work

Variational inference was first applied to neural networks by (peterson1987mean) and (hinton1993keeping). More recently, (graves2011practical) proposed a practical method for variational inference with fully factorized Gaussian posteriors which used a simple (but biased) gradient estimator. Improving on that work, (blundell2015weight) proposed an unbiased gradient estimator using the reparameterization trick of (kingma2013auto). In place of the ELBO, (hernandez2015probabilistic) proposed to use expectation propagation with fully factorized posterior distributions and found a closed-form approximation which avoided sampling the weights. (kingma2015variational) observed that the variance of stochastic gradients can be significantly reduced by the local reparameterization trick, where global uncertainty in the weights is translated into local uncertainty in the activations.

There has also been much work on modeling the correlations between weights using more complex Gaussian variational posteriors. (louizos2016structured) introduced the matrix variate Gaussian posterior as well as a Gaussian process approximation. (sun2017learning) decoupled the correlations of a matrix variate Gaussian posterior into unitary transformations and a factorial Gaussian. Inspired by the idea of normalizing flows in latent variable models (rezende2015variational), (louizos2017multiplicative) applied normalizing flows to auxiliary latent variables to greatly enhance the posterior approximation. However, the introduction of auxiliary variables results in a looser variational lower bound.

Since natural gradient was proposed by Amari (amari1998natural), there has been much work on tractable approximations. (hoffman2013stochastic) observed that for exponential family posteriors, the exact natural gradient could be tractably computed using stochastic versions of variational Bayes E-M updates. (martens2015optimizing) proposed K-FAC for performing efficient natural gradient optimization in deep neural networks. Following on that work, K-FAC has been adopted in many tasks to gain optimization benefits, including convolutional networks (grosse2016kronecker) and reinforcement learning (wu2017scalable), and was shown to be amenable to distributed computation (ba2017distributed).

In independent work, Khan et al. (khan2017variational) derived a stochastic Newton update similar to Eqns. 4 and 5. As their focus was on optimization rather than variational inference, they did not include the KL term, and as a result, their formula for $\boldsymbol{\Lambda}$ involved a running sum of the individual Hessians, rather than an exponential moving average.

5 Experiments

In this section, we conducted a series of experiments to investigate the following questions: (1) How does noisy natural gradient (NNG) compare with existing methods in terms of prediction performance and convergence speed? (2) Can NNG achieve better uncertainty estimates? (3) Does it enable more efficient exploration in active learning and reinforcement learning?

To evaluate the effectiveness of our method, we compared it with Bayes By Backprop (BBB) (blundell2015weight), probabilistic backpropagation (PBP) with a factorial Gaussian posterior (hernandez2015probabilistic), and PBP with a matrix variate Gaussian posterior (PBP_MV) (sun2017learning). Our methods with full covariance multivariate Gaussian, fully factorized Gaussian, matrix variate Gaussian, and block tridiagonal posteriors are denoted NNG-full, NNG-FFG, NNG-MVG, and NNG-BlkTri, respectively.

5.1 Regression Benchmarks

Test RMSE | Test log-likelihood
Dataset BBB PBP PBP_MV NNG-MVG | BBB PBP PBP_MV NNG-MVG
Boston 2.517±0.022 3.014±0.180 3.137±0.155 2.296±0.029 | -2.500±0.004 -2.574±0.089 -2.666±0.081 -2.336±0.005
Concrete 5.770±0.066 5.667±0.093 5.397±0.130 5.173±0.070 | -3.169±0.011 -3.161±0.019 -3.059±0.029 -3.073±0.014
Energy 0.499±0.019 1.804±0.048 0.556±0.016 0.438±0.003 | -1.552±0.006 -2.042±0.019 -1.151±0.016 -1.411±0.002
Kin8nm 0.079±0.001 0.098±0.001 0.088±0.001 0.076±0.000 | 1.118±0.004 0.896±0.006 1.053±0.012 1.151±0.006
Naval 0.000±0.000 0.006±0.000 0.002±0.000 0.000±0.000 | 6.431±0.082 3.731±0.006 4.935±0.051 7.182±0.057
Pow. Plant 4.224±0.007 4.124±0.035 4.030±0.036 4.085±0.006 | -2.851±0.001 -2.837±0.009 -2.830±0.008 -2.818±0.002
Protein 4.390±0.009 4.732±0.013 4.490±0.012 4.058±0.006 | -2.900±0.002 -2.973±0.003 -2.917±0.003 -2.820±0.002
Wine 0.639±0.002 0.635±0.008 0.641±0.006 0.634±0.001 | -0.971±0.003 -0.968±0.014 -0.969±0.013 -0.961±0.001
Yacht 0.983±0.055 1.015±0.054 0.676±0.054 0.827±0.017 | -2.380±0.004 -1.634±0.016 -1.024±0.025 -2.274±0.003
Year 9.076±NA 8.879±NA 9.450±NA 8.885±NA | -3.614±NA -3.603±NA -3.392±NA -3.595±NA
Table 1: Averaged test RMSE and log-likelihood (with standard errors) for the regression benchmarks.

We first experimented with regression datasets from the UCI collection (asuncion2007uci), which were introduced as a standard BNN benchmark by (hernandez2015probabilistic). All experiments used networks with one hidden layer unless stated otherwise (see Appendix C for experimental details). Following previous work (hernandez2015probabilistic; louizos2016structured), we report the standard metrics: root mean square error (RMSE) and test log-likelihood. The results are summarized in Table 1. As we can see from the results, our NNG-MVG method achieved substantially better RMSE and log-likelihoods than BBB due to the more flexible posterior. NNG-MVG also outperformed PBP_MV, which parameterizes matrix variate Gaussians with orthogonal matrices. A further comparison of the prediction performance on two-layer neural networks can be found in Appendix D.

Figure 2: Training curves for all three methods on two regression datasets. For each method, we tuned the learning rate for updating the posterior mean. Note that BBB and NNG-FFG use the same form of variational posterior, while NNG-MVG uses a more flexible distribution.

While optimization was not the primary focus of this work, we compared NNG with the baseline BBB in terms of convergence. Training curves for two regression datasets are shown in Fig. 2. We found that NNG-FFG trained in fewer iterations than BBB, while leveling off to similar ELBO values, even though our BBB implementation used Adam, and hence itself exploited diagonal curvature. Furthermore, despite the increased flexibility and larger number of parameters, NNG-MVG took roughly half as many iterations to converge, while at the same time surpassing BBB by a significant margin in terms of the ELBO.

5.2 Active Learning

One particularly promising application of uncertainty estimation is guiding an agent's exploration towards the parts of a space with which it is least familiar. We have evaluated our BNN algorithms in two instances of this general approach: active learning, and intrinsic motivation for reinforcement learning. The next two sections present experiments in these two domains, respectively.

In the simplest active learning setting (settles2010active), an algorithm is given a set of unlabeled examples and, in each round, chooses one unlabeled example to have labeled. A classic Bayesian approach to active learning is the information gain criterion (mackay1992information), which in each step attempts to achieve the maximum reduction in posterior entropy. Under the assumption of i.i.d. Gaussian noise (as assumed by all models under consideration), this is equivalent to choosing the unlabeled example with the largest predictive variance. Active learning using the information gain criterion was introduced as a BNN benchmark by (hernandez2015probabilistic); our experiments are based on their protocol.
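As a concrete version of this selection rule, the sketch below (our own toy example) scores a pool of unlabeled inputs by Monte Carlo predictive variance under weights sampled from the posterior and picks the maximizer; the linear predictive model and the helper sample_weights are hypothetical:

```python
# Sketch: information-gain-style selection = pick the pool point with the
# largest Monte Carlo predictive variance under posterior weight samples.
import numpy as np

def select_next_point(X_pool, sample_weights, n_samples=100):
    # sample_weights() is assumed to return one weight vector drawn from q(w).
    preds = np.stack([X_pool @ sample_weights() for _ in range(n_samples)])
    predictive_var = preds.var(axis=0)             # variance over posterior samples
    return int(np.argmax(predictive_var))

# Toy usage with a factorized Gaussian posterior q(w) = N(mu, diag(sigma^2)).
rng = np.random.default_rng(0)
mu, sigma = np.array([1.0, -0.5]), np.array([0.3, 0.1])
X_pool = rng.normal(size=(100, 2))
idx = select_next_point(X_pool, lambda: mu + sigma * rng.normal(size=2))
print("query index:", idx)
```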

All methods under consideration use the same neural network architecture and prior, and differ only in the inference procedure. We first investigated how accurately each of the algorithms could estimate predictive variances. In each trial, we randomly selected 20 labeled training examples and 100 unlabeled examples; we then computed each algorithm’s posterior predictive variances for the unlabeled examples. 10 independent trials were run. As is common practice, we treated the outputs of HMC as the “ground truth” predictive variance. Table 3 reports the average and standard error of Pearson correlations between the predictive variances of each algorithm and those of HMC. In all of the datasets, our two methods NNG-MVG and NNG-BlkTri match the HMC predictive variances significantly better than the other approaches, and NNG-BlkTri consistently matches them slightly better than NNG-MVG due to the more flexible variational posterior.

Next, we evaluated the performance of all methods on active learning, following the protocol of (hernandez2015probabilistic), which selects the next data point with the highest predictive variance. As a control, we also evaluated each algorithm with labeled examples selected uniformly at random; this is denoted with the _R suffix. Active learning results are denoted with the _A suffix. The average test RMSE for all methods is reported in Table 2. These results show that NNG-MVG_A performs better than NNG-MVG_R on most datasets and is closer to HMC_A than PBP_A is. However, we note that better predictive variance estimates do not reliably yield better active learning results, and in fact, active learning methods sometimes perform worse than random selection. Therefore, while information gain is a useful criterion for benchmarking purposes, it is important to explore other uncertainty-based active learning criteria.

Dataset PBP_R PBP_A NNG-MVG_R NNG-MVG_A HMC_R HMC_A
Boston 6.716±0.500 5.480±0.175 6.364±0.287 5.139±0.033 5.750±0.222 5.156±0.150
Concrete 12.417±0.392 11.894±0.254 11.869±0.187 11.592±0.117 10.564±0.198 11.484±0.191
Energy 3.743±0.121 3.399±0.064 3.419±0.054 3.035±0.038 3.264±0.067 3.118±0.062
Kin8nm 0.259±0.006 0.254±0.005 0.247±0.002 0.236±0.002 0.226±0.004 0.223±0.003
Naval 0.015±0.000 0.016±0.000 0.010±0.000 0.009±0.000 0.013±0.000 0.012±0.000
Pow. Plant 5.312±0.108 5.068±0.082 5.821±0.056 5.421±0.028 5.229±0.097 4.800±0.074
Wine 0.945±0.044 0.809±0.011 0.773±0.009 0.841±0.007 0.740±0.011 0.749±0.010
Yacht 5.388±0.339 4.508±0.158 7.142±0.171 6.245±0.068 4.644±0.237 3.211±0.120
Table 2: Average test RMSE in active learning. The suffix _R denotes the random selection control, and the suffix _A denotes active learning.
Dataset BBB PBP NNG-MVG NNG-BlkTri
Boston 0.733±0.021 0.761±0.032 0.891±0.021 0.889±0.024
Concrete 0.809±0.027 0.817±0.028 0.913±0.010 0.922±0.006
Energy 0.414±0.074 0.471±0.076 0.617±0.087 0.646±0.088
Kin8nm 0.681±0.018 0.587±0.021 0.731±0.021 0.759±0.023
Naval 0.316±0.094 0.270±0.098 0.596±0.073 0.598±0.070
Pow. Plant 0.633±0.046 0.509±0.068 0.829±0.020 0.853±0.020
Wine 0.915±0.011 0.883±0.042 0.957±0.009 0.964±0.006
Yacht 0.617±0.053 0.620±0.053 0.717±0.072 0.727±0.070
Table 3: Pearson correlation of each algorithm’s predictive variances with those of HMC, which we consider as the “ground truth.”

5.3 Reinforcement Learning

We next experimented with using uncertainty to provide intrinsic motivation in reinforcement learning. Reinforcement learning problems with sparse rewards can be particularly challenging, since the agent may need to execute a complex behavior even to obtain a single nonzero reward. Houthooft et al. (houthooft2016vime) presented an intrinsic motivation mechanism called Variational Information Maximizing Exploration (VIME), which encourages the agent to seek novelty through an information gain criterion. VIME involves training a separate BNN to predict the dynamics, i.e. to model the distribution $p(s_{t+1}\,|\,s_t, a_t)$. With the idea that surprising states lead to larger updates to the dynamics network, the reward function is augmented with an "intrinsic term" corresponding to the information gain for the BNN. If the history of the agent up until time step $t$ is denoted as $\xi_t$, then the modified reward can be written in the following form:

$r'(s_t, a_t, s_{t+1}) = r(s_t, a_t) + \eta\,\mathrm{D}_{\mathrm{KL}}\!\left(p(\mathbf{w}\,|\,\xi_t, a_t, s_{t+1})\,\|\,p(\mathbf{w}\,|\,\xi_t)\right) \qquad (13)$

where $r(s_t, a_t)$ is the original reward function or external reward, and $\eta$ is a hyperparameter controlling the urge to explore. In the above formulation, the true posterior is generally intractable. (houthooft2016vime) approximated it using Bayes By Backprop (BBB), i.e. a fully factorized Gaussian variational posterior (blundell2015weight). We experimented with replacing the fully factorized posterior with our NNG-MVG model.
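For intuition about the information gain term, the sketch below (ours) computes the closed-form KL divergence between fully factorized Gaussian posteriors over the dynamics-model weights before and after an update on a new transition, and adds it to the external reward; the posterior "update" in the usage example is just a placeholder for refitting the BNN:

```python
# Sketch: augmenting the external reward with an information gain (KL) bonus
# for a dynamics model with a fully factorized Gaussian posterior over weights.
import numpy as np

def gaussian_kl(mu_new, sigma_new, mu_old, sigma_old):
    # KL( N(mu_new, diag(sigma_new^2)) || N(mu_old, diag(sigma_old^2)) ).
    return np.sum(np.log(sigma_old / sigma_new)
                  + (sigma_new ** 2 + (mu_new - mu_old) ** 2) / (2 * sigma_old ** 2) - 0.5)

def augmented_reward(r_external, posterior_old, posterior_new, eta=0.01):
    mu_old, sigma_old = posterior_old
    mu_new, sigma_new = posterior_new
    return r_external + eta * gaussian_kl(mu_new, sigma_new, mu_old, sigma_old)

# Toy usage: a hypothetical small shift of the posterior after seeing a transition.
mu, sigma = np.zeros(4), np.ones(4)
mu_new, sigma_new = mu + 0.1, sigma * 0.9
print(augmented_reward(1.0, (mu, sigma), (mu_new, sigma_new)))
```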

Following the experimental setup of (houthooft2016vime), we tested our method in three continuous control tasks with sparsified rewards: a nonzero reward is given only when the pole is close to upright in CartPoleSwingup, when the car escapes the valley in MountainCar, and when the pendulum tip is close to the target in DoublePendulum. We compared our NNG-MVG dynamics model with a Gaussian noise baseline, as well as the original VIME formulation using BBB. All experiments are based on the rllab (duan2016benchmarking) benchmark code base and used TRPO to optimize the policy itself (schulman2015trust). Because all the methods used the same policy gradient method and differed only in the dynamics model, the performance differences should reflect the effectiveness of the BNN's uncertainty measure for detecting novelty.

Figure 3: Performance on (a) CartPoleSwingup, (b) MountainCar, and (c) DoublePendulum for [TRPO] the TRPO baseline with Gaussian control noise, [TRPO+BBB] the VIME baseline with a BBB dynamics network, and [TRPO+NNG-MVG] VIME with an NNG-MVG dynamics network (ours). The darker-colored lines represent the median performance over 10 different random seeds, while the shaded areas show the interquartile range.

Performance is measured by the average return (under the original MDP's rewards, not including the intrinsic term) at each iteration. Fig. 3 shows the performance results on the three tasks. Consistently with Houthooft et al. (houthooft2016vime), we observed that the Gaussian noise baseline completely breaks down and rarely achieves the goal, while VIME significantly improves performance. Moreover, replacing the BBB dynamics network with NNG-MVG considerably improved the exploration efficiency on all three tasks. Since the policy search algorithm was shared between all three conditions, we attribute this improvement to the improved uncertainty modeling by the dynamics network.

6 Conclusion

We drew a surprising connection between two different types of natural gradient ascent: for point estimation and for variational inference. We exploited this connection to derive surprisingly simple variational BNN training procedures which can be instantiated as noisy versions of widely used optimization algorithms for point estimation. This let us efficiently fit MVG variational posteriors, which capture correlations between different weights. Our variational BNNs with MVG posteriors matched the predictive variances of HMC much better than fully factorized posteriors, and led to more efficient exploration in the settings of active learning and reinforcement learning with intrinsic motivation.

References

  • (1) Shun-ichi Amari. Neural learning in structured parameter spaces - natural Riemannian gradient. In Advances in Neural Information Processing Systems, pages 127–133, 1997.
  • (2) Shun-ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
  • (3) Arthur Asuncion and David Newman. UCI machine learning repository, 2007.
  • (4) Jimmy Ba, James Martens, and Roger Grosse. Distributed second-order optimization using Kronecker-factored approximations. In International Conference on Learning Representations, 2017.
  • (5) Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424, 2015.
  • (6) Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In International Conference on Machine Learning, pages 1329–1338, 2016.
  • (7) John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
  • (8) Alex Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems, pages 2348–2356, 2011.
  • (9) Roger Grosse and James Martens. A Kronecker-factored approximate Fisher matrix for convolution layers. In International Conference on Machine Learning, pages 573–582, 2016.
  • (10) José Miguel Hernández-Lobato and Ryan Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning, pages 1861–1869, 2015.
  • (11) Geoffrey E. Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pages 5–13. ACM, 1993.
  • (12) Matthew D. Hoffman, David M. Blei, Chong Wang, and John Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303–1347, 2013.
  • (13) Rein Houthooft, Xi Chen, Yan Duan, John Schulman, Filip De Turck, and Pieter Abbeel. VIME: Variational information maximizing exploration. In Advances in Neural Information Processing Systems, pages 1109–1117, 2016.
  • (14) Mohammad Emtiyaz Khan, Wu Lin, Voot Tangkaratt, Zouzhu Liu, and Didrik Nielsen. Variational adaptive-Newton method for explorative learning. arXiv preprint arXiv:1711.05560, 2017.
  • (15) Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • (16) Diederik P. Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pages 2575–2583, 2015.
  • (17) Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
  • (18) Christos Louizos and Max Welling. Structured and efficient variational deep learning with matrix Gaussian posteriors. In International Conference on Machine Learning, pages 1708–1716, 2016.
  • (19) Christos Louizos and Max Welling. Multiplicative normalizing flows for variational Bayesian neural networks. arXiv preprint arXiv:1703.01961, 2017.
  • (20) David J. C. MacKay. Information-based objective functions for active data selection. Neural Computation, 4(4):590–604, 1992.
  • (21) David J. C. MacKay. A practical Bayesian framework for backpropagation networks. Neural Computation, 4:448–472, 1992.
  • (22) James Martens. New insights and perspectives on the natural gradient method. arXiv preprint arXiv:1412.1193, 2014.
  • (23) James Martens and Roger Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In International Conference on Machine Learning, pages 2408–2417, 2015.
  • (24) Radford M. Neal. Bayesian Learning for Neural Networks. PhD thesis, University of Toronto, 1995.
  • (25) Radford M. Neal and Geoffrey E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in Graphical Models, pages 355–368. Springer, 1998.
  • (26) Manfred Opper and Cédric Archambeau. The variational Gaussian approximation revisited. Neural Computation, 21(3):786–792, 2009.
  • (27) Carsten Peterson. A mean field theory learning algorithm for neural networks. Complex Systems, 1:995–1019, 1987.
  • (28) Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
  • (29) John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1889–1897, 2015.
  • (30) Burr Settles. Active learning literature survey. University of Wisconsin, Madison, 52(55-66):11, 2010.
  • (31) Shengyang Sun, Changyou Chen, and Lawrence Carin. Learning structured weight uncertainty in Bayesian neural networks. In Artificial Intelligence and Statistics, pages 1283–1292, 2017.
  • (32) Yuhuai Wu, Elman Mansimov, Shun Liao, Roger Grosse, and Jimmy Ba. Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation. arXiv preprint arXiv:1708.05144, 2017.

Appendix A Natural Gradient for Multivariate Gaussian

Suppose we have a model parameterized by $\boldsymbol{\theta}$, which may live in a subspace of the full parameter space (such as the set of symmetric matrices). The natural gradient is motivated in terms of a trust region optimization problem: find the update $\delta\boldsymbol{\theta}$ maximizing the objective in a neighborhood of $\boldsymbol{\theta}$ defined with the KL divergence,

$\max_{\delta\boldsymbol{\theta}}\;\mathcal{L}(\boldsymbol{\theta} + \delta\boldsymbol{\theta}) \quad \text{s.t.}\quad \mathrm{D}_{\mathrm{KL}}\!\left(q_{\boldsymbol{\theta}}\,\|\,q_{\boldsymbol{\theta}+\delta\boldsymbol{\theta}}\right) \le \epsilon.$

In the limit of a small trust region, the optimal solution to this optimization problem is given by $\delta\boldsymbol{\theta} \propto \mathbf{F}^{-1}\nabla_{\boldsymbol{\theta}}\mathcal{L}$, i.e. the update $\boldsymbol{\theta} \leftarrow \boldsymbol{\theta} + \alpha\,\mathbf{F}^{-1}\nabla_{\boldsymbol{\theta}}\mathcal{L}$. Here $\mathbf{F}$ is the Fisher matrix and $\alpha$ is the learning rate. Note that $\mathcal{L}$ and $\mathbf{F}$ are defined only for $\boldsymbol{\theta}$ in the subspace, but these can be extended to the full space however we wish without changing the optimal solution.

Now assume the model is a multivariate Gaussian parameterized by $\boldsymbol{\theta} = (\boldsymbol{\mu}, \boldsymbol{\Sigma})$. The KL divergence between two multivariate Gaussians $\mathcal{N}(\boldsymbol{\mu}_1, \boldsymbol{\Sigma}_1)$ and $\mathcal{N}(\boldsymbol{\mu}_2, \boldsymbol{\Sigma}_2)$ in $\mathbb{R}^k$ is:

$\mathrm{D}_{\mathrm{KL}}\!\left(\mathcal{N}(\boldsymbol{\mu}_1,\boldsymbol{\Sigma}_1)\,\|\,\mathcal{N}(\boldsymbol{\mu}_2,\boldsymbol{\Sigma}_2)\right) = \tfrac{1}{2}\left[\mathrm{tr}\!\left(\boldsymbol{\Sigma}_2^{-1}\boldsymbol{\Sigma}_1\right) + (\boldsymbol{\mu}_2-\boldsymbol{\mu}_1)^{\top}\boldsymbol{\Sigma}_2^{-1}(\boldsymbol{\mu}_2-\boldsymbol{\mu}_1) - k + \ln\tfrac{\det\boldsymbol{\Sigma}_2}{\det\boldsymbol{\Sigma}_1}\right] \qquad (14)$

Hence, the Fisher matrices with respect to $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ are

$\mathbf{F}_{\boldsymbol{\mu}} = \boldsymbol{\Sigma}^{-1}, \qquad \mathbf{F}_{\mathrm{vec}(\boldsymbol{\Sigma})} = \tfrac{1}{2}\,\boldsymbol{\Sigma}^{-1}\otimes\boldsymbol{\Sigma}^{-1} \qquad (15)$

Then, by the property of the vec operator, $\mathrm{vec}(\mathbf{A}\mathbf{X}\mathbf{B}) = (\mathbf{B}^{\top}\otimes\mathbf{A})\,\mathrm{vec}(\mathbf{X})$, we get the natural gradient updates

$\tilde{\nabla}_{\boldsymbol{\mu}}\mathcal{L} = \boldsymbol{\Sigma}\,\nabla_{\boldsymbol{\mu}}\mathcal{L}, \qquad \tilde{\nabla}_{\boldsymbol{\Sigma}}\mathcal{L} = 2\,\boldsymbol{\Sigma}\,(\nabla_{\boldsymbol{\Sigma}}\mathcal{L})\,\boldsymbol{\Sigma} \qquad (16)$

An analogous derivation gives us $\tilde{\nabla}_{\boldsymbol{\Lambda}}\mathcal{L} = 2\,\boldsymbol{\Lambda}\,(\nabla_{\boldsymbol{\Lambda}}\mathcal{L})\,\boldsymbol{\Lambda}$. Considering $\boldsymbol{\Lambda} = \boldsymbol{\Sigma}^{-1}$, we have $\nabla_{\boldsymbol{\Lambda}}\mathcal{L} = -\boldsymbol{\Sigma}\,(\nabla_{\boldsymbol{\Sigma}}\mathcal{L})\,\boldsymbol{\Sigma}$, which gives us the convenient formulas

$\tilde{\nabla}_{\boldsymbol{\mu}}\mathcal{L} = \boldsymbol{\Sigma}\,\nabla_{\boldsymbol{\mu}}\mathcal{L}, \qquad \tilde{\nabla}_{\boldsymbol{\Lambda}}\mathcal{L} = -2\,\nabla_{\boldsymbol{\Sigma}}\mathcal{L} \qquad (17)$

Recall that in variational inference, the gradients of the ELBO with respect to $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ are given by (opper2009variational):

$\nabla_{\boldsymbol{\mu}}\mathcal{L} = \mathbb{E}_{q}\!\left[\nabla_{\mathbf{w}}\log p(\mathcal{D}\,|\,\mathbf{w}) + \lambda\,\nabla_{\mathbf{w}}\log p(\mathbf{w})\right], \qquad \nabla_{\boldsymbol{\Sigma}}\mathcal{L} = \tfrac{1}{2}\,\mathbb{E}_{q}\!\left[\nabla^2_{\mathbf{w}}\log p(\mathcal{D}\,|\,\mathbf{w}) + \lambda\,\nabla^2_{\mathbf{w}}\log p(\mathbf{w})\right] + \tfrac{\lambda}{2}\,\boldsymbol{\Sigma}^{-1} \qquad (18)$

Based on Eqn. 18 and Eqn. 17, the natural gradient is given by:

$\tilde{\nabla}_{\boldsymbol{\mu}}\mathcal{L} = \boldsymbol{\Lambda}^{-1}\,\mathbb{E}_{q}\!\left[\nabla_{\mathbf{w}}\log p(\mathcal{D}\,|\,\mathbf{w}) + \lambda\,\nabla_{\mathbf{w}}\log p(\mathbf{w})\right], \qquad \tilde{\nabla}_{\boldsymbol{\Lambda}}\mathcal{L} = -\,\mathbb{E}_{q}\!\left[\nabla^2_{\mathbf{w}}\log p(\mathcal{D}\,|\,\mathbf{w}) + \lambda\,\nabla^2_{\mathbf{w}}\log p(\mathbf{w})\right] - \lambda\boldsymbol{\Lambda} \qquad (19)$

Appendix B Matrix Variate Gaussian

Recently, matrix variate Gaussian (MVG) distributions have also been used in Bayesian neural networks [18] [31]. A matrix variate Gaussian distribution models a Gaussian distribution over a matrix $\mathbf{W} \in \mathbb{R}^{n_1\times n_2}$,

$p(\mathbf{W}\,|\,\mathbf{M},\mathbf{U},\mathbf{V}) = \frac{\exp\!\left(-\tfrac{1}{2}\,\mathrm{tr}\!\left[\mathbf{V}^{-1}(\mathbf{W}-\mathbf{M})^{\top}\mathbf{U}^{-1}(\mathbf{W}-\mathbf{M})\right]\right)}{(2\pi)^{n_1 n_2/2}\,|\mathbf{U}|^{n_2/2}\,|\mathbf{V}|^{n_1/2}} \qquad (20)$

in which $\mathbf{M}$ is the mean, $\mathbf{U}$ is the covariance matrix among rows, and $\mathbf{V}$ is the covariance matrix among columns. Both $\mathbf{U}$ and $\mathbf{V}$ must be positive definite to be valid covariance matrices. In terms of the ordinary multivariate Gaussian, the vectorization of $\mathbf{W}$ follows a Gaussian distribution whose covariance matrix is the Kronecker product of $\mathbf{V}$ and $\mathbf{U}$:

$\mathrm{vec}(\mathbf{W}) \sim \mathcal{N}\!\left(\mathrm{vec}(\mathbf{M}),\;\mathbf{V}\otimes\mathbf{U}\right) \qquad (21)$
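The following short check (written by us) samples from an MVG using W = M + U^{1/2} E V^{1/2} and verifies numerically that the covariance of vec(W) is approximately the Kronecker product V ⊗ U; the column-major flatten (order='F') matches the vec convention in Eqn. 21:

```python
# Sketch: Monte Carlo check that vec(W) ~ N(vec(M), V kron U) when
# W = M + U^{1/2} E V^{1/2} with E a matrix of i.i.d. standard normals.
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n_samples = 3, 2, 50_000

M = rng.normal(size=(n1, n2))
U = np.array([[2.0, 0.3, 0.0], [0.3, 1.0, 0.2], [0.0, 0.2, 0.5]])   # row covariance
V = np.array([[1.5, -0.4], [-0.4, 0.8]])                             # column covariance
Lu, Lv = np.linalg.cholesky(U), np.linalg.cholesky(V)

samples = np.empty((n_samples, n1 * n2))
for s in range(n_samples):
    E = rng.normal(size=(n1, n2))
    W = M + Lu @ E @ Lv.T
    samples[s] = W.flatten(order="F")              # column-major vec

empirical_cov = np.cov(samples, rowvar=False)
print(np.max(np.abs(empirical_cov - np.kron(V, U))))   # small, up to Monte Carlo error
```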

Appendix C Experimental Settings

C.1 Experimental settings for regression

The datasets are randomly split into training and test sets, with 90% of the data for training and the remainder for testing. To reduce the randomness, we repeat the splitting process 20 times (except for the two largest datasets, i.e., "Year" and "Protein", where we repeat 5 times and 1 time, respectively). For all datasets except the two largest ones, we use neural networks with 50 hidden units; for the two largest datasets, we use 100 hidden units. We also place a Gamma prior on the precision of the Gaussian likelihood and include its posterior in the variational objective. During training, the input features and training targets are normalized to be zero mean and unit variance. We remove the normalization on the targets at test time.

For each dataset, we use the same default hyperparameter settings unless stated otherwise. All models are run for 1000 epochs, except for "Protein" and "Year", where we use 500. We set the batch size to 10 for the 5 small datasets with fewer than 2000 data points, 500 for "Year", and 100 for the other four. In addition, we decay the learning rate during the second half of the epochs.

C.2 Experimental settings for active learning

Following the experimental protocol of PBP [10], we split each dataset into training and test sets with 20 and 100 data points, respectively. All remaining data are included in the pool set. In all experiments, we use a neural network with one hidden layer and 10 hidden units.

After fitting our model on the training data, we evaluate its performance on the test data and then add one data point from the pool set into the training set. The selection is based on the information gain criterion described above, which is equivalent to choosing the point with the highest predictive variance. This process is repeated 10 times; that is, we collect 9 points from the pool set. For each iteration, we re-train the whole model from scratch.

C.3 Experimental settings for reinforcement learning

In all three tasks (CartPoleSwingup, MountainCar, and DoublePendulum), we use a one-layer Bayesian neural network with 32 hidden units for both BBB and NNG-MVG, with the rectified linear unit (ReLU) as the activation function. The number of samples drawn from the variational posterior is fixed to 10 during training. For TRPO, the batch size is set to 5000 and the replay pool has a fixed size of 100,000 samples. In both BBB and NNG-MVG, the dynamics model is updated in each epoch with 500 iterations and a batch size of 10. For the policy network, a one-layer network with 32 tanh units is used.

Appendix D Results for two-layer nets on UCI datasets

Test RMSE | Test log-likelihood
Dataset BBB PBP PBP_MV NNG-MVG | BBB PBP PBP_MV NNG-MVG
Boston 2.982±0.077 3.134±0.139 3.112±0.151 2.451±0.022 | -2.803±0.010 -2.527±0.046 -2.539±0.076 -2.383±0.012
Concrete 6.010±0.067 5.170±0.125 5.075±0.138 4.976±0.057 | -3.216±0.008 -3.061±0.026 -3.043±0.031 -2.993±0.014
Energy 0.596±0.014 0.791±0.046 0.448±0.013 0.462±0.011 | -1.589±0.006 -1.724±0.012 -1.014±0.011 -1.452±0.003
Kin8nm 0.069±0.000 0.072±0.000 0.069±0.003 0.068±0.001 | 1.265±0.004 1.219±0.003 1.259±0.008 1.273±0.001
Naval 0.000±0.000 0.005±0.000 0.002±0.000 0.000±0.000 | 6.651±0.040 3.172±0.047 4.853±0.059 7.071±0.013
Pow. Plant 4.077±0.011 4.011±0.034 3.908±0.039 3.881±0.009 | -2.815±0.002 -2.804±0.007 -2.781±0.008 -2.762±0.003
Protein 3.866±0.010 4.553±0.020 3.939±0.019 3.713±0.014 | -2.767±0.002 -2.933±0.005 -2.772±0.008 -2.713±0.004
Wine 0.646±0.004 0.636±0.008 0.642±0.007 0.631±0.004 | -0.988±0.005 -0.952±0.013 -0.972±0.009 -0.954±0.008
Yacht 1.590±0.078 0.864±0.042 0.806±0.061 0.783±0.013 | -2.681±0.019 -1.778±0.015 -1.642±0.021 -2.249±0.014
Year 8.791±NA 8.787±NA 8.721±NA 8.617±NA | -3.570±NA -3.365±NA -3.332±NA -3.569±NA
Table 4: Averaged predictions with standard errors in terms of RMSE and log-likelihood for the regression datasets using two-layer neural networks. PBP_MV denotes the method proposed by [10].