Uncertainty propagation in neural networks for sparse coding

Danil Kuzin, Olga Isupova, Lyudmila Mihaylova
Department of Automatic Control and System Engineering, University of Sheffield, UK
Department of Engineering Science, University of Oxford, UK
{dkuzin1,l.s.mihaylova}@sheffield.ac.uk,olga.isupova@eng.ox.ac.uk

1 Introduction

The idea of Bayesian learning in neural networks (NNs) [1] has recently gained attention with the development of distributed approximate inference techniques [2, 3] and the general boost in popularity of deep learning. Several techniques [4, 5] have recently been proposed to handle specific types of NNs with efficient Bayesian inference, for example, feed-forward networks with the rectified linear unit nonlinearity [6], networks with discrete distributions [7], and recurrent networks [8].

In this paper, we consider the area of sparse coding. The sparse coding problem can be viewed as a linear regression problem with the additional assumption that the majority of the basis representation coefficients should be zero. This sparsity assumption may be represented as an $\ell_1$ penalty [9] or, in the Bayesian interpretation, as a prior that has a sharp peak at zero [10]. One of the modern approaches to sparse coding utilises NNs with the soft-thresholding nonlinearity [11, 12]. Sparse coding is widely used in different applications, such as compressive sensing [13], image and video processing [14, 15], and neuroscience [16, 17].

A novel method to propagate uncertainty through the soft-thresholding nonlinearity is proposed in this paper. At every layer the current distribution of the target vector is represented as a spike and slab distribution [18], which captures the probability of each variable being zero or Gaussian-distributed. Using the proposed method of uncertainty propagation, the gradients of the logarithms of normalisation constants are derived, which can be used to update the weight distributions. A novel Bayesian NN for sparse coding is designed utilising both the proposed method of uncertainty propagation and a Bayesian inference algorithm.

The main contributions of this paper are: (i) for the first time a method for uncertainty propagation through the soft-thresholding nonlinearity is proposed for a Bayesian NN; (ii) an efficient posterior inference algorithm for weights and outputs of NNs with the soft-thresholding nonlinearity is developed; (iii) a novel Bayesian NN for sparse coding is designed.

The rest of the paper is organised as follows. A NN approach for sparse coding is described in Section 2.1. The Bayesian formulation is introduced in Section 2.2. Section 3 provides the experimental results. The proposed forward uncertainty propagation and probabilistic backpropagation methods are given in Appendices A and B.

2 Neural networks for sparse coding

This section presents background knowledge about networks for sparse coding and then describes the novel Bayesian neural network.

2.1 Frequentist neural networks

The NN approach to sparse coding is based on the earlier Iterative Shrinkage and Thresholding Algorithm (ISTA) [19]. It addresses the sparse coding problem as a linear regression problem with a penalty that promotes sparsity. For the linear regression model with observations $\mathbf{y} \in \mathbb{R}^K$, the design matrix $\mathbf{X} \in \mathbb{R}^{K \times D}$, and the sparse unknown vector of weights $\boldsymbol{\beta} \in \mathbb{R}^D$, ISTA minimises

$$\frac{1}{2}\left\lVert \mathbf{y} - \mathbf{X}\boldsymbol{\beta} \right\rVert_2^2 + \alpha \left\lVert \boldsymbol{\beta} \right\rVert_1, \qquad (1)$$

where $\alpha$ is a regularisation parameter.

0:  observations $\mathbf{y}$, weights $\mathbf{W}$, $\mathbf{S}$, number of layers $L$
1:  Dense layer $\mathbf{b} \leftarrow \mathbf{W}\mathbf{y}$
2:  Soft-thresholding function $\widehat{\boldsymbol{\beta}}_0 \leftarrow h_\lambda(\mathbf{b})$
3:  for $l = 1$ to $L$ do
4:     Dense layer $\mathbf{c}_l \leftarrow \mathbf{b} + \mathbf{S}\widehat{\boldsymbol{\beta}}_{l-1}$
5:     Soft-thresholding function $\widehat{\boldsymbol{\beta}}_l \leftarrow h_\lambda(\mathbf{c}_l)$
6:  end for
7:  Output: $\widehat{\boldsymbol{\beta}}_L$
Algorithm 1 LISTA forward propagation

At every iteration $l$, ISTA obtains the new estimate $\widehat{\boldsymbol{\beta}}_l$ of the target vector as the linear transformation $\mathbf{W}\mathbf{y} + \mathbf{S}\widehat{\boldsymbol{\beta}}_{l-1}$ propagated through the soft-thresholding function

$$\widehat{\boldsymbol{\beta}}_l = h_\lambda\!\left(\mathbf{W}\mathbf{y} + \mathbf{S}\widehat{\boldsymbol{\beta}}_{l-1}\right), \qquad h_\lambda(x) = \operatorname{sgn}(x)\max(|x| - \lambda, 0), \qquad (2)$$

where $h_\lambda$ is applied component-wise and $\lambda$ is a shrinkage parameter. In ISTA, the weights $\mathbf{W}$ and $\mathbf{S}$ of the linear transformation are assumed fixed.

In contrast to ISTA, Learned ISTA (LISTA) [11] learns the values of the matrices $\mathbf{W}$ and $\mathbf{S}$ from a set of pairs $\{(\mathbf{y}_n, \boldsymbol{\beta}_n)\}_{n=1}^N$, where $N$ is the number of these pairs. To achieve this, ISTA is limited to a fixed number of iterations $L$ and interpreted as a recurrent NN: every iteration of ISTA corresponds to a layer of LISTA. A sparse vector $\widehat{\boldsymbol{\beta}}$ for an observation $\mathbf{y}$ is predicted by Algorithm 1.
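As an illustration of Algorithm 1, a minimal NumPy sketch of the LISTA forward pass is given below; the toy dimensions, random weight values, and shrinkage parameter are assumptions made for the example only, not values from the paper.

import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding h_lambda(x) = sgn(x) * max(|x| - lambda, 0), cf. (2)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def lista_forward(y, W, S, lam, n_layers):
    """LISTA forward propagation (Algorithm 1): predict a sparse vector from y."""
    b = W @ y                              # step 1: dense layer
    beta_hat = soft_threshold(b, lam)      # step 2: soft-thresholding
    for _ in range(n_layers):              # steps 3-6
        c = b + S @ beta_hat               # step 4: dense layer
        beta_hat = soft_threshold(c, lam)  # step 5: soft-thresholding
    return beta_hat                        # step 7: output

# Toy usage with random (untrained) weights, for illustration only.
rng = np.random.default_rng(0)
D, K = 20, 10
W, S = rng.normal(size=(D, K)), 0.1 * rng.normal(size=(D, D))
y = rng.normal(size=K)
print(lista_forward(y, W, S, lam=0.1, n_layers=4))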

2.2 BayesLISTA

This section introduces the proposed Bayesian version of LISTA (BayesLISTA). The prior distributions are imposed on the unknown weights $\mathbf{W}$ and $\mathbf{S}$:

$$p(\mathbf{W}) = \prod_{d=1}^{D}\prod_{k=1}^{K} \mathcal{N}(w_{dk};\, 0,\, \eta^{-1}), \qquad p(\mathbf{S}) = \prod_{d=1}^{D}\prod_{d'=1}^{D} \mathcal{N}(s_{dd'};\, 0,\, \eta^{-1}), \qquad (3)$$

where $\eta$ is the precision of the Gaussian distribution.

For every layer $l$ of BayesLISTA, $\widehat{\boldsymbol{\beta}}_l$ is assumed to have the spike and slab distribution with the spike probability $\boldsymbol{\omega}_l$, the slab mean $\mathbf{m}_l$, and the slab variance $\mathbf{v}_l$:

$$p(\widehat{\beta}_{ld}) = \omega_{ld}\,\delta_0(\widehat{\beta}_{ld}) + (1 - \omega_{ld})\,\mathcal{N}(\widehat{\beta}_{ld};\, m_{ld},\, v_{ld}), \qquad (4)$$

where $\delta_0$ is the delta-function that represents a spike at zero and the subscript $d$ denotes the $d$-th component of a vector. In the appendix we show that the output of the next layer can be approximated with the spike and slab distribution and, therefore, the output of the BayesLISTA network has the spike and slab distribution.
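To make the representation in (4) concrete, the short sketch below draws samples from a spike and slab distribution; the per-component parameters are hypothetical.

import numpy as np

def sample_spike_and_slab(omega, m, v, rng):
    """Each component is 0 with probability omega, otherwise drawn from N(m, v)."""
    spike = rng.random(omega.shape) < omega
    slab = rng.normal(m, np.sqrt(v))
    return np.where(spike, 0.0, slab)

rng = np.random.default_rng(1)
omega = np.full(5, 0.8)              # high spike probability gives sparse samples
m, v = np.zeros(5), np.ones(5)
print(sample_spike_and_slab(omega, m, v, rng))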

To introduce the uncertainty of predictions, we assume that the true $\boldsymbol{\beta}$ is the output of the BayesLISTA network corrupted by additive Gaussian zero-mean noise with the precision $\gamma$. Then the likelihood of $\boldsymbol{\beta}$ is defined as

$$p(\boldsymbol{\beta} \mid \mathbf{y}, \mathbf{W}, \mathbf{S}, \gamma) = \prod_{d=1}^{D} \mathcal{N}\!\left(\beta_d;\, [\widehat{\boldsymbol{\beta}}_L]_d,\, \gamma^{-1}\right). \qquad (5)$$

Gamma prior distributions with parameters $a^{\gamma}, b^{\gamma}$ and $a^{\eta}, b^{\eta}$ are specified on the introduced Gaussian precisions $\gamma$ and $\eta$:

$$p(\gamma) = \operatorname{Gam}(\gamma;\, a^{\gamma}, b^{\gamma}), \qquad p(\eta) = \operatorname{Gam}(\eta;\, a^{\eta}, b^{\eta}). \qquad (6)$$

The posterior distribution over the training data $\mathcal{B} = \{\boldsymbol{\beta}_n\}_{n=1}^N$, $\mathbf{Y} = \{\mathbf{y}_n\}_{n=1}^N$ is then

$$p(\mathbf{W}, \mathbf{S}, \gamma, \eta \mid \mathcal{B}, \mathbf{Y}) = \frac{p(\mathcal{B} \mid \mathbf{Y}, \mathbf{W}, \mathbf{S}, \gamma)\, p(\mathbf{W} \mid \eta)\, p(\mathbf{S} \mid \eta)\, p(\gamma)\, p(\eta)}{p(\mathcal{B} \mid \mathbf{Y})}. \qquad (7)$$

The shrinkage parameter $\lambda$ is a hyperparameter of the model.

In the appendix we describe a modification of the LISTA forward propagation (Algorithm 1) that includes the probability distributions of the random variables introduced in this section, as well as an efficient Bayesian inference algorithm.

3 Experiments

Figure 1: NMSE results. (a) Synthetic data for different numbers of layers; (b) synthetic data for different observation sizes; (c) the active learning example on the MNIST data.

The proposed BayesLISTA is evaluated on sparse coding problems and compared with LISTA [11], ISTA [19], and Fast ISTA (FISTA) [20]. The number of iterations in ISTA and FISTA is set equal to the number of layers in the NNs. For quantitative comparison the normalised mean square error (NMSE) is used.

3.1 Predictive performance on synthetic data

First, performance is analysed on synthetic data. Sparse vectors are generated from the spike and slab distribution with the truncated slab: each component is zero with a fixed spike probability or is otherwise sampled from the standard Gaussian distribution with an interval around zero excluded. The design matrix is random Gaussian. The observations are generated as in (1) with additive zero-mean Gaussian noise. The shrinkage parameter is kept fixed. The algorithms are trained on the training set and evaluated on a held-out test set.
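The sketch below illustrates this data-generation process; the dimensions, dataset size, spike probability, exclusion interval and noise level are placeholder assumptions, not the values used in the experiments.

import numpy as np

rng = np.random.default_rng(2)
D, K, N = 100, 50, 1000                 # hypothetical dimensions and dataset size
spike_prob, noise_std, gap = 0.8, 0.1, 0.1

# sparse targets from a spike and slab distribution with a truncated slab
beta = rng.standard_normal((N, D))
mask = np.abs(beta) < gap
beta[mask] += np.sign(beta[mask]) * gap          # push slab values outside an interval around zero
beta[rng.random((N, D)) < spike_prob] = 0.0      # spikes at zero

X = rng.standard_normal((K, D))                  # random Gaussian design matrix
Y = beta @ X.T + noise_std * rng.standard_normal((N, K))   # noisy observations as in (1)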

Figure 1(a) presents NMSE for different numbers of layers (or iterations) at a fixed observation size. BayesLISTA outperforms the competitors. Figure 1(b) gives NMSE for different observation sizes at a fixed number of layers (iterations). In the previous experiment, Bayesian and classic LISTA show similar results for this number of layers, and Figure 1(b) confirms this competitive behaviour between the two LISTAs. ISTA and FISTA underperform the NNs.

3.2 Active learning

To demonstrate a potential scenario that can benefit from the uncertainty estimates of BayesLISTA, we consider an active learning example [21]. Active learning studies how to select new training subsets so as to reduce the total amount of required supervision. One of the popular approaches in active learning is uncertainty sampling, in which the data points with the least certain predictions are chosen for labelling. We use the variance of the spike and slab distributed prediction as the measure of uncertainty, as sketched below.
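A minimal sketch of this acquisition rule follows; predict_spike_and_slab is a hypothetical interface that returns the per-component spike and slab parameters of the BayesLISTA prediction for an observation.

import numpy as np

def spike_and_slab_variance(omega, m, v):
    """Marginal variance of a spike and slab variable, cf. (8) in Appendix A."""
    return (1.0 - omega) * (v + omega * m ** 2)

def select_next_point(Y_pool, predict_spike_and_slab):
    """Uncertainty sampling: pick the pool point with the largest total predictive variance."""
    uncertainties = []
    for y in Y_pool:
        omega, m, v = predict_spike_and_slab(y)   # hypothetical BayesLISTA predictive interface
        uncertainties.append(spike_and_slab_variance(omega, m, v).sum())
    return int(np.argmax(uncertainties))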

The MNIST dataset [22] is utilised. The dataset contains images of handwritten digits of size $28 \times 28$. The design matrix $\mathbf{X}$ is standard random Gaussian. Observations are generated as $\mathbf{y} = \mathbf{X}\boldsymbol{\beta}$, where $\boldsymbol{\beta}$ are the flattened images. The shrinkage parameter and the observation size are kept fixed.

We use a small initial training set, a pool set, and a test set. The algorithm learns on the training data and is evaluated on the test data. To actively collect the next data point from the pool, the algorithm selects the pool point with the highest predictive uncertainty. The selected point is moved from the pool to the training data and the algorithm learns on the updated training data. A fixed number of pool additions is performed, and after every addition the performance is measured on the test data. We compare this active approach of adding new points from the pool with a random approach that picks a new data point from the pool at random. The whole procedure is repeated several times.

Figure 1(c) demonstrates the performance of the active and non-active update strategies with BayesLISTA. The active approach with uncertainty sampling steadily demonstrates better results. This suggests that the posterior distribution learnt by BayesLISTA is an adequate estimate of the true posterior.

Appendix C provides additional results on predictive performance on the MNIST data.

References

  • Neal [1994] Radford M Neal. Bayesian learning for neural networks. PhD thesis, University of Toronto, 1994.
  • Li et al. [2015] Yingzhen Li, José Miguel Hernández-Lobato, and Richard E Turner. Stochastic expectation propagation. In Advances in Neural Information Processing Systems, pages 2323–2331, 2015.
  • Hoffman et al. [2013] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.
  • Ranganath et al. [2015] Rajesh Ranganath, Linpeng Tang, Laurent Charlin, and David Blei. Deep exponential families. In Artificial Intelligence and Statistics, pages 762–771, 2015.
  • Gal and Ghahramani [2016] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pages 1050–1059, 2016.
  • Hernández-Lobato and Adams [2015] José Miguel Hernández-Lobato and Ryan Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning, pages 1861–1869, 2015.
  • Soudry et al. [2014] Daniel Soudry, Itay Hubara, and Ron Meir. Expectation backpropagation: Parameter-free training of multilayer neural networks with continuous or discrete weights. In Advances in Neural Information Processing Systems, pages 963–971, 2014.
  • McDermott and Wikle [2017] Patrick L McDermott and Christopher K Wikle. Bayesian recurrent neural network models for forecasting and quantifying uncertainty in spatial-temporal data. arXiv preprint arXiv:1711.00636, 2017.
  • Tibshirani [1996] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996.
  • Tipping [2001] Michael E Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1(Jun):211–244, 2001.
  • Gregor and LeCun [2010] Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In International Conference on Machine Learning, pages 399–406, 2010.
  • Sprechmann et al. [2015] Pablo Sprechmann, Alexander M Bronstein, and Guillermo Sapiro. Learning efficient sparse and low rank models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9):1821–1833, 2015.
  • Candès and Wakin [2008] Emmanuel J Candès and Michael B Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine, 25(2):21–30, 2008.
  • Mairal et al. [2014] Julien Mairal, Francis Bach, Jean Ponce, et al. Sparse modeling for image and vision processing. Foundations and Trends in Computer Graphics and Vision, 8(2-3):85–283, 2014.
  • Wang et al. [2015] Zhaowen Wang, Ding Liu, Jianchao Yang, Wei Han, and Thomas Huang. Deep networks for image super-resolution with sparse prior. In Proceedings of the IEEE International Conference on Computer Vision, pages 370–378, 2015.
  • Baillet and Garnero [1997] Sylvain Baillet and Line Garnero. A Bayesian approach to introducing anatomo-functional priors in the EEG/MEG inverse problem. IEEE Transactions on Biomedical Engineering, 44(5):374–385, 1997.
  • Jas et al. [2017] Mainak Jas, Tom Dupré La Tour, Umut Simsekli, and Alexandre Gramfort. Learning the morphology of brain signals using alpha-stable convolutional sparse coding. In Advances in Neural Information Processing Systems, pages 1099–1108, 2017.
  • Mitchell and Beauchamp [1988] Toby J Mitchell and John J Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988.
  • Daubechies et al. [2004] Ingrid Daubechies, Michel Defrise, and Christine De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics, 57(11):1413–1457, 2004.
  • Beck and Teboulle [2009] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm with application to wavelet-based image deblurring. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 693–696. IEEE, 2009.
  • Settles [2009] Burr Settles. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin–Madison, 2009.
  • LeCun et al. [1998] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • Minka [2001] Thomas Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT, 2001.

Appendix A Appendix: Uncertainty propagation through soft-thresholding

This section describes the modification of the LISTA forward propagation (Algorithm 1) to include the probability distributions of the random variables introduced in Section 2.2.

Initialisation

At step 1 of LISTA (Algorithm 1) the matrix $\mathbf{W}$ consists of Gaussian-distributed components $w_{dk} \sim \mathcal{N}(m^{w}_{dk}, v^{w}_{dk})$, and $\mathbf{y}$ is a deterministic vector. Then the output $\mathbf{b} = \mathbf{W}\mathbf{y}$ is a vector of Gaussian-distributed components $b_d \sim \mathcal{N}(m^{b}_d, v^{b}_d)$, where $m^{b}_d = \sum_{k=1}^{K} m^{w}_{dk}\, y_k$ and $v^{b}_d = \sum_{k=1}^{K} v^{w}_{dk}\, y_k^2$.

At step 2 of LISTA (Algorithm 1) the Gaussian vector $\mathbf{b}$ is taken as an input of the soft-thresholding function. When a Gaussian random variable $b \sim \mathcal{N}(m, v)$ is propagated through the soft-thresholding function $\widehat{\beta} = h_\lambda(b)$, the probability mass of the resulting random variable is split into two parts. The values of $b$ from the interval $[-\lambda, \lambda]$ are converted to $0$ by the soft-thresholding operator. Therefore, the probability mass of the original distribution that lies in $[-\lambda, \lambda]$ is squeezed into the probability of $\widehat{\beta}$ being zero. The values of $b$ from outside of the interval $[-\lambda, \lambda]$ are shifted towards $0$. The distribution of $\widehat{\beta}$ then represents the tails of the original Gaussian distribution. The distribution of $\widehat{\beta}$ can therefore be parametrised by the probability of being zero, the mean and the variance of the truncated Gaussian distribution. We approximate the distribution of $\widehat{\beta}$ at step 2 with a spike and slab distribution whose parameters are the spike probability $\omega^* = \Pr(b \in [-\lambda, \lambda])$ and the slab mean $m^*$ and variance $v^*$ of the shifted truncated Gaussian.
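The paper uses the closed-form moments of the truncated Gaussian; purely as an illustration of the approximation, the sketch below estimates the same three spike and slab parameters by Monte Carlo sampling.

import numpy as np

def propagate_gaussian_mc(m, v, lam, n_samples=100_000, seed=0):
    """Empirical spike probability, slab mean and slab variance of h_lambda(b), b ~ N(m, v)."""
    rng = np.random.default_rng(seed)
    b = rng.normal(m, np.sqrt(v), size=n_samples)
    beta = np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)   # soft-thresholding
    spike_prob = np.mean(beta == 0.0)                      # mass squeezed into zero
    slab = beta[beta != 0.0]                               # shifted tails of the Gaussian
    return spike_prob, slab.mean(), slab.var()

print(propagate_gaussian_mc(m=0.5, v=1.0, lam=0.3))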

Main layers

At step 4 of LISTA (Algorithm 1) the vector $\mathbf{b}$ and the matrix $\mathbf{S}$ consist of Gaussian components, $b_d \sim \mathcal{N}(m^{b}_d, v^{b}_d)$ and $s_{dd'} \sim \mathcal{N}(m^{s}_{dd'}, v^{s}_{dd'})$, and $\widehat{\boldsymbol{\beta}}_{l-1}$ is a vector of spike and slab random variables with parameters $\omega_{(l-1)d}$, $m_{(l-1)d}$, $v_{(l-1)d}$.

It can be shown that the expected value and variance of a spike and slab distributed variable $\xi$ with the probability of spike $\omega$, the slab mean $m$ and the slab variance $v$ are

$$\mathbb{E}\,\xi = (1 - \omega)\,m, \qquad \operatorname{Var}\xi = (1 - \omega)\,(v + \omega m^2). \qquad (8)$$

It can also be shown that if the components of the matrix $\mathbf{S}$ and the vector $\widehat{\boldsymbol{\beta}}_{l-1}$ are mutually independent, then the components of their product $\mathbf{z} = \mathbf{S}\widehat{\boldsymbol{\beta}}_{l-1}$ have the marginal means and variances

$$\mathbb{E}\,z_d = \sum_{d'=1}^{D} \mathbb{E}\,s_{dd'}\; \mathbb{E}\,\widehat{\beta}_{(l-1)d'}, \qquad (9a)$$
$$\operatorname{Var} z_d = \sum_{d'=1}^{D} \left[ \operatorname{Var} s_{dd'} \operatorname{Var}\widehat{\beta}_{(l-1)d'} + \left(\mathbb{E}\,s_{dd'}\right)^2 \operatorname{Var}\widehat{\beta}_{(l-1)d'} + \operatorname{Var} s_{dd'} \left(\mathbb{E}\,\widehat{\beta}_{(l-1)d'}\right)^2 \right]. \qquad (9b)$$

According to the Central Limit Theorem, $z_d$ can be approximated as a Gaussian-distributed variable when $D$ is sufficiently large. The parameters of this Gaussian distribution are the marginal mean and variance given in (9).
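The sketch below mirrors the moment computations (8) and (9): spike and slab moments for each component of $\widehat{\boldsymbol{\beta}}_{l-1}$, followed by the marginal mean and variance of $\mathbf{z} = \mathbf{S}\widehat{\boldsymbol{\beta}}_{l-1}$ under the independence assumption; the toy parameters are hypothetical.

import numpy as np

def spike_and_slab_moments(omega, m, v):
    """Mean and variance of spike and slab variables, cf. (8)."""
    mean = (1.0 - omega) * m
    var = (1.0 - omega) * (v + omega * m ** 2)
    return mean, var

def product_moments(m_s, v_s, mean_beta, var_beta):
    """Marginal mean and variance of z = S @ beta_hat, cf. (9a) and (9b)."""
    mean_z = m_s @ mean_beta
    var_z = v_s @ var_beta + (m_s ** 2) @ var_beta + v_s @ (mean_beta ** 2)
    return mean_z, var_z

# toy usage with hypothetical parameters
rng = np.random.default_rng(3)
D = 4
mean_beta, var_beta = spike_and_slab_moments(np.full(D, 0.5), rng.normal(size=D), np.ones(D))
print(product_moments(rng.normal(size=(D, D)), np.full((D, D), 0.1), mean_beta, var_beta))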

The output at step 4 is then represented as the sum of two Gaussian-distributed vectors, $\mathbf{b}$ and $\mathbf{z}$, i.e. it is a Gaussian-distributed vector with components $c_{ld} \sim \mathcal{N}(m^{c}_{ld}, v^{c}_{ld})$, where $m^{c}_{ld} = m^{b}_d + \mathbb{E}\,z_d$ and $v^{c}_{ld} = v^{b}_d + \operatorname{Var} z_d$.

Then at step 5 of LISTA (Algorithm 1) $\widehat{\boldsymbol{\beta}}_l$ is the result of soft-thresholding of a Gaussian variable, which is approximated with the spike and slab distribution, similarly to step 2 of the initialisation above. Thus, all the steps of BayesLISTA are covered and the distributions of the outputs of these steps are derived.

Appendix B Appendix: Backpropagation

The exact intractable posterior (7) is approximated with a factorised distribution

$$q(\mathbf{W}, \mathbf{S}, \gamma, \eta) = \prod_{d=1}^{D}\prod_{k=1}^{K} \mathcal{N}(w_{dk};\, m^{w}_{dk}, v^{w}_{dk}) \prod_{d=1}^{D}\prod_{d'=1}^{D} \mathcal{N}(s_{dd'};\, m^{s}_{dd'}, v^{s}_{dd'})\; \operatorname{Gam}(\gamma;\, a^{\gamma}, b^{\gamma})\; \operatorname{Gam}(\eta;\, a^{\eta}, b^{\eta}). \qquad (10)$$

The parameters of the approximating distributions are updated with the assumed density filtering (ADF) and expectation propagation (EP) algorithms, which are derived from the derivatives of the logarithm of a normalisation constant (based on [6]). ADF iteratively incorporates factors from the true posterior $p$ in (7) into the factorised approximating distribution $q$ in (10), whereas EP iteratively replaces factors in $q$ by factors from $p$.

When a factor from $p$ is incorporated into $q$, the updated approximation, viewed as a function of an individual weight $w$ (a component of $\mathbf{W}$ or $\mathbf{S}$), has the form $\widehat{q}(w) = Z^{-1} f(w)\,\mathcal{N}(w;\, m, v)$, where $Z$ is the normalisation constant and $f$ is an arbitrary function. The new parameters of the Gaussian distribution for $w$ can be computed as [23]

$$m^{\text{new}} = m + v\,\frac{\partial \log Z}{\partial m}, \qquad v^{\text{new}} = v - v^2\left[\left(\frac{\partial \log Z}{\partial m}\right)^2 - 2\,\frac{\partial \log Z}{\partial v}\right]. \qquad (11)$$

Therefore, to obtain the new values of $m$ and $v$, the derivatives of the logarithm of $Z$ are required when a factor of $p$ is incorporated into $q$.
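A small sketch of the update (11) follows, together with a sanity check on a conjugate Gaussian likelihood for which $Z$ is available in closed form and the update recovers the exact posterior; the numerical values are illustrative.

def update_gaussian(m, v, dlogZ_dm, dlogZ_dv):
    """Update of a Gaussian factor from the derivatives of log Z, cf. (11)."""
    m_new = m + v * dlogZ_dm
    v_new = v - v ** 2 * (dlogZ_dm ** 2 - 2.0 * dlogZ_dv)
    return m_new, v_new

# Sanity check: prior N(w; m, v) and likelihood N(x; w, s) give Z = N(x; m, v + s).
m, v, x, s = 0.0, 2.0, 1.0, 1.0
dlogZ_dm = (x - m) / (v + s)
dlogZ_dv = -0.5 / (v + s) + (x - m) ** 2 / (2.0 * (v + s) ** 2)
print(update_gaussian(m, v, dlogZ_dm, dlogZ_dv))   # (2/3, 2/3): the exact posterior mean and variance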

The likelihood factors (5) of $p$ are incorporated into $q$ with the ADF approach. The normalisation constant of $q$ with the likelihood term for the $n$-th data point incorporated is (to simplify notation the superscript $n$ is omitted)

$$Z = \int p(\boldsymbol{\beta} \mid \mathbf{y}, \mathbf{W}, \mathbf{S}, \gamma)\; q(\mathbf{W}, \mathbf{S}, \gamma, \eta)\; \mathrm{d}\mathbf{W}\, \mathrm{d}\mathbf{S}\, \mathrm{d}\gamma\, \mathrm{d}\eta. \qquad (12)$$

Assuming the spike and slab distribution for the network output $\widehat{\boldsymbol{\beta}}_L$, the normalisation constant can be approximated as

$$Z \approx \prod_{d=1}^{D}\left[\omega_{Ld}\,\mathcal{N}\!\left(\beta_d;\, 0,\, \gamma^{-1}\right) + (1 - \omega_{Ld})\,\mathcal{N}\!\left(\beta_d;\, m_{Ld},\, v_{Ld} + \gamma^{-1}\right)\right], \qquad (13)$$

where $\omega_{Ld}$, $m_{Ld}$, $v_{Ld}$ are the parameters of the spike and slab distribution for $\widehat{\boldsymbol{\beta}}_L$. The parameters of $q$ are then updated with the derivatives of $\log Z$ according to (11).
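As an illustration of (13), the sketch below evaluates the approximate log normalisation constant for a fixed value of the noise precision $\gamma$; handling the Gamma-distributed $\gamma$ and the remaining factors of $q$ follows [6] and is omitted here.

import numpy as np
from scipy.stats import norm

def log_Z_spike_slab(beta, omega, m, v, gamma):
    """log of (13): the likelihood of beta marginalised over the spike and slab output."""
    noise_var = 1.0 / gamma
    spike_term = omega * norm.pdf(beta, loc=0.0, scale=np.sqrt(noise_var))
    slab_term = (1.0 - omega) * norm.pdf(beta, loc=m, scale=np.sqrt(v + noise_var))
    return np.sum(np.log(spike_term + slab_term))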

The prior factors (3) and (6) of $p$ are incorporated into $q$ with the EP algorithm [6], i.e. they replace the corresponding approximating factors of $q$, and then $q$ is updated to minimise the Kullback–Leibler divergence.

Figure 2: NMSE results on the MNIST data for an increasing number of iterations, with observation sizes (a) 100 and (b) 250.

Appendix C Appendix: Predictive performance on MNIST data

In this experiment, the methods are evaluated on the MNIST dataset in terms of predictive performance. We use one subset of the images for training and a disjoint subset for testing.

Figures 2(a) and 2(b) present NMSE with observation sizes 100 and 250. The experiment with the observation size of 100 presents severe conditions for the algorithms: the limited size of the training dataset is combined with the small dimensionality of the observations. BayesLISTA is able to learn under these conditions, significantly outperforming LISTA. Under the better conditions of the second experiment with the observation size of 250, both NNs converge to similar results. However, BayesLISTA demonstrates a remarkably better convergence rate. ISTA and FISTA are unable to perform well in these experiments.
