Sparse coding and autoencoders

Akshay Rangamani* (rangamani.akshay@jhu.edu), ECE Department, Johns Hopkins University
Anirbit Mukherjee (amukhe14@jhu.edu), AMS Department, Johns Hopkins University
Amitabh Basu (basu.amitabh@jhu.edu), AMS Department, Johns Hopkins University
Tejaswini Ganapathy (tganapathi@salesforce.com), Salesforce, San Francisco Bay Area
Ashish Arora (aarora8@jhu.edu), ECE Department, Johns Hopkins University
Sang (Peter) Chin (spchin@cs.bu.edu) and Trac D. Tran (trac@jhu.edu), ECE Department, Johns Hopkins University

*Equal Contribution
Abstract

In Dictionary Learning one tries to recover incoherent matrices $A^* \in \mathbb{R}^{n \times h}$ (typically overcomplete, i.e. $h > n$, and whose columns are assumed to be normalized) and sparse vectors $x^* \in \mathbb{R}^h$ with a small support of size $h^p$ for some $0 < p < 1$, while having access to observations $y \in \mathbb{R}^n$ where $y = A^* x^*$. In this work we undertake a rigorous analysis of whether gradient descent on the squared loss of an autoencoder can solve the dictionary learning problem. The autoencoder architecture we consider is a $\mathbb{R}^n \rightarrow \mathbb{R}^n$ mapping with a single ReLU activation layer of size $h$.

Under very mild distributional assumptions on $x^*$, we prove that the norm of the expected gradient of the standard squared loss function is asymptotically (in sparse code dimension $h$) negligible for all points in a small neighborhood of $A^*$. This is supported with experimental evidence using synthetic data. We also conduct experiments to suggest that $A^*$ is a local minimum. Along the way we prove that a layer of ReLU gates can be set up to automatically recover the support of the sparse codes. This property holds independent of the loss function. We believe that it could be of independent interest.

1 Introduction

One of the fundamental themes in learning theory is to consider data being sampled from a generative model and to provide efficient methods to recover the original model parameters exactly or with tight approximation guarantees. Classic examples include learning a mixture of gaussians [28], certain graphical models [5], full rank square dictionaries [35, 13] and overcomplete dictionaries [2, 7, 8, 9]. The problem is usually distilled down to a non-convex optimization problem whose solution can be used to obtain the model parameters. With these hard non-convex problems it has been difficult to find any universal view as to why gradient descent sometimes gives very good and sometimes even exact recovery. In recent times progress has been made towards achieving a geometric understanding of the landscape of such non-convex optimization problems [18, 27, 42]. The corresponding question of parameter recovery for neural nets with one layer of activation has been solved in some special cases [17, 4, 21, 34, 24, 36, 43]. Almost all of these cases are in the supervised setting, where it has also been assumed that the labels are being generated from a net of the same architecture as is being trained. In contrast to these works we address an unsupervised learning problem, and, possibly more realistically, we do not tie the data generation model (sensing of sparse vectors by an overcomplete incoherent dictionary) to the neural architecture being analyzed, except for assuming knowledge of a few parameters about the ground truth. In a related development, it has been shown by two of the authors here in a previous work [6] that for two layer deep nets even the exact global minima can be found deterministically in time polynomial in the data size. This work continues that line of investigation, now making use of generative model assumptions to probe the power of a class of two layer deep nets with ReLU activation.

Here we specialize to the generative model of dictionary learning/sparse coding, where one receives samples of vectors $y \in \mathbb{R}^n$ that have been generated as $y = A^* x^*$, where $A^* \in \mathbb{R}^{n \times h}$ and $x^* \in \mathbb{R}^h$. We typically assume that the number of non-zero entries in $x^*$ is no larger than some function of the dimension $h$ and that $A^*$ satisfies certain incoherence properties. The question now is to recover $A^*$ from samples of $y$. There have been renewed investigations into the hardness of this problem [38] and many former results have recently been reviewed in these lectures [19]. This question has been a cornerstone of learning theory ever since the ground-breaking paper by Olshausen and Field [31] (a recent review by the same authors can be found in [32]). Over the years many algorithms have been developed to solve this problem and a detailed comparison among these various approaches can be found in [13].

An autoencoder is a neural network that maps $\mathbb{R}^n \rightarrow \mathbb{R}^n$ with a single hidden layer of Rectified Linear Unit (ReLU) activations. These networks have been used extensively ([11, 12, 33, 40, 41]) in the past for unsupervised feature learning tasks, and have been found to be successful in generating discriminative features [15]. A number of different autoencoder architectures and regularizers have been proposed which purportedly induce sparsity at the hidden layer [10, 16, 23, 29]. There has also been some investigation into what autoencoders learn about the data distribution [3].

Olshausen and Field had, as early as 1996, already made the connection between sparse coding and training neural architectures, and in today's terminology this problem is very naturally reminiscent of the architecture of an autoencoder [30]. However, to the best of our knowledge, there has not been sufficient progress to rigorously establish whether autoencoders can do sparse coding. In this work, we present our progress towards bridging the above mentioned mathematical gap. To the best of our knowledge, there is no theoretical evidence (even under the usual generative assumptions of sparse coding) that the stationary points of any of the usual squared loss functions (with or without any of the usual regularizers) have any resemblance to the original dictionary that is being sought to be learned. The main point of this paper is to rigorously prove that for autoencoders with ReLU activation, the standard squared loss function has a neighborhood around the dictionary $A^*$ where the norm of the expected gradient is very small (for large enough sparse code dimension $h$). Thus, all points in a neighborhood of $A^*$, including $A^*$ itself, are asymptotic critical points of this standard squared loss. We supplement our theoretical result with experimental evidence in Section 6, which also strongly suggests that the standard squared loss function has a local minimum in a neighborhood around $A^*$. We believe that our results provide theoretical and experimental evidence that the sparse coding problem can be tackled by training autoencoders.

1.1 A motivating experiment on MNIST using TensorFlow

We used TensorFlow [1] to train two ReLU autoencoders mapping $\mathbb{R}^{784} \rightarrow \mathbb{R}^{784}$. These networks were trained on a subset of the MNIST dataset of handwritten digits. One of the nets had a single hidden layer and the other one had two hidden layers (with a fixed identity matrix giving the output from the second layer of activations). In both cases the weights of the encoder and decoder were maintained as transposes of each other. We trained the autoencoders on the standard squared loss function using RMSProp [37]. The training was done on images of two of the digit classes from the MNIST dataset. In the following panel we show four pairs (two for each net) of "reconstructed" images, i.e., the output of the trained net when the "actual" photograph is given as input.
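For concreteness, the following is a minimal TensorFlow 2 sketch of a tied-weight ReLU autoencoder trained on MNIST with RMSProp and the squared loss; the hidden width, learning rate, batch size and number of steps are illustrative placeholders and not the settings of the experiment reported above.

    import tensorflow as tf

    # The experiment above used only a subset of the digit classes; for simplicity
    # this sketch uses all of MNIST.
    (x_train, _), _ = tf.keras.datasets.mnist.load_data()
    data = tf.constant(x_train.reshape(-1, 784).astype("float32") / 255.0)

    n, h = 784, 2048                                          # hidden width is an assumed placeholder
    W = tf.Variable(tf.random.normal([h, n], stddev=0.01))    # encoder weights; decoder is W^T (tied)
    b = tf.Variable(tf.zeros([h]))
    opt = tf.keras.optimizers.RMSprop(learning_rate=1e-3)

    def reconstruct(batch):
        r = tf.nn.relu(tf.matmul(batch, W, transpose_b=True) + b)   # hidden ReLU layer
        return tf.matmul(r, W)                                      # decode with the transposed weights

    for step in range(1000):
        idx = tf.random.uniform([256], 0, tf.shape(data)[0], dtype=tf.int32)
        batch = tf.gather(data, idx)
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(tf.reduce_sum((reconstruct(batch) - batch) ** 2, axis=1))
        grads = tape.gradient(loss, [W, b])
        opt.apply_gradients(zip(grads, [W, b]))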


In our opinion, the above figures add support to the belief that single and double hidden layer ReLU activated networks can learn an implicit high dimensional structure of the handwritten digits dataset. In particular, this demonstrates that though adding more hidden layers obviously helps enhance the reconstruction ability, single hidden layer autoencoders do hold within them significant power for unsupervised learning of representations. Unfortunately, analyzing the RMSProp update rule used in the above experiment is currently beyond our analytic means. However, we take inspiration from these experiments to devise a different mathematical set-up which is much more amenable to analysis, taking us towards a better understanding of the power of autoencoders.

2 Introducing the neural architecture and the distributional assumptions

For any $n, h \in \mathbb{N}$, an autoencoder is a fully connected neural network with a single hidden layer of $h$ activations. We focus on networks that use the Rectified Linear Unit (ReLU) activation, which is the function mapping $\mathbb{R} \ni x \mapsto \max(0, x)$, applied coordinate-wise. In this case, the autoencoder can be seen as computing the following function:

$$r = \mathrm{ReLU}(W y + b), \qquad \hat{y} = W^\top r. \qquad (1)$$

Here $y \in \mathbb{R}^n$ is the input to the autoencoder, $W \in \mathbb{R}^{h \times n}$ is the linear transformation implemented by the first layer, $r \in \mathbb{R}^h$ is the output of the layer of ReLU activations, $b \in \mathbb{R}^h$ is the bias vector, and $\hat{y} \in \mathbb{R}^n$ is the output of the autoencoder. Note that we impose the condition that the second layer of weights is simply the transpose of the first layer. We shall denote the columns of $W^\top$ (rows of $W$) by $W_i$ for $i = 1, \ldots, h$.
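As a quick illustration of (1), the forward pass can be written in a few lines of NumPy; the shapes follow the conventions above ($W \in \mathbb{R}^{h \times n}$, decoder tied to $W^\top$) and the function name is ours.

    import numpy as np

    def autoencoder_forward(W, b, y):
        """Compute r = ReLU(W y + b) and y_hat = W^T r, as in (1)."""
        r = np.maximum(W @ y + b, 0.0)   # hidden layer of ReLU activations, in R^h
        y_hat = W.T @ r                  # tied-weight reconstruction, in R^n
        return r, y_hat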

Assumptions on the dictionary and the sparse code.

We assume that our signal $y$ is generated using sparse linear combinations of atoms/vectors of an overcomplete dictionary, i.e., $y = A^* x^*$, where $A^* \in \mathbb{R}^{n \times h}$ is a dictionary and $x^* \in \mathbb{R}^h$ is a non-negative sparse vector with at most $h^p$ (for some $0 < p < 1$) non-zero elements. The columns of the original dictionary (labeled $A_1^*, \ldots, A_h^*$) are assumed to be normalized and also satisfy the incoherence property that $|\langle A_i^*, A_j^* \rangle| \le \frac{\mu}{\sqrt{n}}$ for all $i \neq j$, for some $\mu > 0$.

We assume that the sparse code $x^*$ is sampled from a distribution with the following properties. We fix a set of possible supports of $x^*$, denoted by $\mathcal{S}$, where each element of $\mathcal{S}$ has at most $h^p$ elements. We consider any arbitrary discrete probability distribution $\mathcal{D}_{\mathcal{S}}$ on $\mathcal{S}$ such that the probability $\Pr[i \in S]$ is independent of $i$, and the probability $\Pr[\{i, j\} \subseteq S]$ is independent of $i$ and $j$. A special case is when $\mathcal{S}$ is the set of all subsets of size $h^p$, and $\mathcal{D}_{\mathcal{S}}$ is the uniform distribution on $\mathcal{S}$. For every $S \in \mathcal{S}$ there is a distribution, say $\mathcal{D}_S$, on $\mathbb{R}^h$ which is supported on vectors whose support is contained in $S$ and which is uncorrelated for pairs of coordinates $i, j \in S$. Further, we assume that the distributions $\mathcal{D}_S$ are such that each coordinate $x_i^*$ (for $i \in S$) is compactly supported over an interval $[a_1, a_2]$, where $a_1$ and $a_2$ are independent of both $i$ and $S$ but will be functions of $h$. Moreover, the conditional expectation $m := \mathbb{E}[x_i^* \mid i \in S]$ is assumed to be independent of both $i$ and $S$ but allowed to depend on $h$. For ease of notation, henceforth we will keep the dependence of these variables on $h$ implicit and refer to them as $a_1$, $a_2$ and $m$. All of our results will hold in the special case when $a_1$, $a_2$ and $m$ are constants (no dependence on $h$).
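This generative model is easy to simulate; the sketch below draws a Gaussian dictionary with normalized columns and a non-negative sparse code with a uniformly random support of size $h^p$. Drawing the non-zero entries uniformly from an interval $[a_1, a_2]$ is just one admissible instance of the assumptions above, and the sizes used at the bottom are illustrative.

    import numpy as np

    def sample_dictionary(n, h, rng):
        A = rng.standard_normal((n, h))
        return A / np.linalg.norm(A, axis=0)           # unit-norm columns, incoherent w.h.p.

    def sample_sparse_code(h, p, a1, a2, rng):
        k = max(1, int(round(h ** p)))                 # support size h^p
        x = np.zeros(h)
        S = rng.choice(h, size=k, replace=False)       # uniformly random support
        x[S] = rng.uniform(a1, a2, size=k)             # non-negative entries in [a1, a2]
        return x

    rng = np.random.default_rng(0)
    A_star = sample_dictionary(n=100, h=1024, rng=rng)
    x_star = sample_sparse_code(h=1024, p=0.1, a1=0.5, a2=1.0, rng=rng)
    y = A_star @ x_star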

3 Main Results

3.1 Recovery of the support of the sparse code by a layer of ReLUs

First we prove the following theorem which precisely quantifies the sense in which a layer of ReLU gates is able to recover the support of the sparse code when the weight matrix of the deep net is close to the original dictionary. We recall that the size of the support of the sparse vector $x^*$ is at most $h^p$ for some $0 < p < 1$. We also recall that the parameters $a_1, a_2$ define the support of the marginal distribution of each coordinate of $x^*$ (conditioned on that coordinate lying in the support) and $m$ is the expected value of this marginal distribution (recall that none of these depend on the coordinate or the actual support). These parameters will be referenced in the results below.

Theorem 3.1.

Let each column of $W^\top$ be within a $\delta$-ball of the corresponding column of $A^*$, where $\delta$ is a sufficiently small function of $h$ chosen compatibly with the coherence parameter $\frac{\mu}{\sqrt{n}}$ and the distribution parameters $a_1$ and $a_2$. We further assume a compatible upper bound on the sparsity $h^p$. Let the bias of the hidden layer of the autoencoder, as defined in (1), be $b = -\epsilon \mathbf{1}$ (where $\mathbf{1}$ is the all-ones vector) for a suitably chosen threshold $\epsilon > 0$. Let $r$ be the vector defined in (1). Then $r_i > 0$ if $i \in \mathrm{supp}(x^*)$, and $r_i = 0$ if $i \notin \mathrm{supp}(x^*)$ with high probability (with respect to the distribution on $x^*$).


As long as the quantity controlling this failure probability is large, i.e., an increasing function of $h$, we can interpret this as saying that the probability of the adverse event is small, and we have successfully achieved support recovery at the hidden layer in the limit of large sparse code dimension $h$.
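Theorem 3.1 lends itself to a direct numerical sanity check: perturb each column of $A^{*\top}$ by at most $\delta$, threshold at a small bias $\epsilon$, and compare the set of active ReLU units with the true support. The sketch below reuses the sampling helpers from the previous snippet; the values of $\delta$ and $\epsilon$ are illustrative rather than the thresholds of the theorem.

    import numpy as np

    def support_recovery_check(A_star, x_star, delta, eps, rng):
        n, h = A_star.shape
        E = rng.standard_normal((h, n))
        E *= delta / np.linalg.norm(E, axis=1, keepdims=True)   # each row has norm exactly delta
        W = A_star.T + E                                        # rows W_i within delta of A*_i
        r = np.maximum(W @ (A_star @ x_star) - eps, 0.0)        # hidden layer output
        return set(np.flatnonzero(r)) == set(np.flatnonzero(x_star))

    ok = support_recovery_check(A_star, x_star, delta=0.01, eps=0.1, rng=rng)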

3.2 Asymptotic Criticality of the Autoencoder around $A^*$

In this work we analyze the following standard squared loss function for the autoencoder,

$$L = \frac{1}{2}\,\big\| \hat{y} - y \big\|^2 = \frac{1}{2}\,\big\| W^\top \mathrm{ReLU}(W y + b) - y \big\|^2. \qquad (2)$$

In the above we continue to use the variables as defined in equation (1). If we consider a generative model in which $A^*$ is a square orthogonal matrix and $x^*$ is a non-negative vector (not necessarily sparse), it is easily seen that the standard squared reconstruction error loss function for the autoencoder has a global minimum at $W = A^{*\top}$ (and zero bias). In our generative model, however, $A^*$ is an incoherent and overcomplete dictionary.
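For completeness, here is the one-line verification of the square-orthogonal claim just made (our own calculation, assuming the bias is set to zero):

$$\hat{y} = W^\top \mathrm{ReLU}(W y)\big|_{W = A^{*\top}} = A^* \,\mathrm{ReLU}\big(A^{*\top} A^* x^*\big) = A^*\, \mathrm{ReLU}(x^*) = A^* x^* = y,$$

since $A^{*\top} A^* = I$ for a square orthogonal $A^*$ and $\mathrm{ReLU}(x^*) = x^*$ for a non-negative $x^*$. The reconstruction error, and hence the loss, is therefore zero at $W = A^{*\top}$, which is thus a global minimum.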

Theorem 3.2.

(The Main Theorem) Assume that the hypotheses of Theorem 3.1 hold. Further, assume that the distribution parameters are such that the relevant combination of $a_1$, $a_2$ and $m$ (specified in the Supplementary material) is superpolynomial in $h$, which holds, for example, when $a_1$, $a_2$ and $m$ are constants independent of $h$. Then, for every $i$ indexing the columns of $W^\top$ and every $W$ in the columnwise $\delta$-neighborhood of $A^*$ described above, the norm of the expected gradient $\big\| \mathbb{E}\big[\nabla_{W_i} L\big] \big\|$ is asymptotically (in $h$) negligible.

Roadmap.

We present the proof of the support recovery result, i.e., Theorem 3.1, in Section 4. Section 5 gives the proof of our main result, Theorem 3.2. The argument rests on two critical lemmas (Lemmas 5.1 and 5.2), whose proofs appear in the Supplementary material. In Section 6, we run simulations to verify Theorem 3.2. We also run experiments that strongly suggest that the standard squared loss function has a local minimum in a neighborhood around $A^*$.

4 A Layer of ReLU Gates can Recover the Support of the Sparse Code (Proof of Theorem 3.1)

Most sparse coding algorithms are based on an alternating minimization approach, where one iteratively finds a sparse code based on the current estimate of the dictionary, and then uses the estimated sparse code to update the dictionary. The analogue of the sparse coding step in an autoencoder is the passing of an affine transformation of the input vectors (in which $W$ behaves as the current estimate of the dictionary) through the hidden layer of ReLU activations. We show that under certain stochastic assumptions, the hidden layer of ReLU gates in an autoencoder recovers, with high probability, the support of the sparse vector which corresponds to the present input.

Proof of Theorem 3.1.

From the model assumptions, we know that the dictionary $A^*$ is incoherent and has unit norm columns. So, $\langle A_i^*, A_i^* \rangle = 1$ for all $i$, and $|\langle A_i^*, A_j^* \rangle| \le \frac{\mu}{\sqrt{n}}$ for all $i \neq j$. This means that, for every $i$,

$$\langle W_i, A_i^* \rangle = \langle A_i^*, A_i^* \rangle + \langle W_i - A_i^*, A_i^* \rangle \ge 1 - \delta. \qquad (3)$$

Otherwise, for $j \neq i$,

$$\langle W_i, A_j^* \rangle = \langle A_i^*, A_j^* \rangle + \langle W_i - A_i^*, A_j^* \rangle,$$

and thus,

$$|\langle W_i, A_j^* \rangle| \le \frac{\mu}{\sqrt{n}} + \delta, \qquad (4)$$

where we use the fact that $\|W_i - A_i^*\| \le \delta$ and $\|A_j^*\| = 1$.

Let $y = A^* x^*$ and let $S$ be the support of $x^*$. Then we define the input to the ReLU activation as
$$Q_i = \sum_{j \in S} \langle W_i, A^*_j \rangle x^*_j = \langle W_i, A^*_i \rangle x^*_i \mathbf{1}_{i \in S} + \sum_{j \in S \setminus \{i\}} \langle W_i, A^*_j \rangle x^*_j = \langle W_i, A^*_i \rangle x^*_i \mathbf{1}_{i \in S} + Z_i.$$
First we try to get bounds on $Q_i$ when $i \in S$. From our assumptions on the distribution of $x^*$ we have $a_1 \le x_j^* \le a_2$ for all $j$ in the support of $x^*$. For $j \in S \setminus \{i\}$, each term satisfies $|\langle W_i, A_j^* \rangle x_j^*| \le \left(\frac{\mu}{\sqrt{n}} + \delta\right) a_2$, where we use (4). Summing over the at most $h^p$ such terms, $Z_i$ has the following bound:

$$|Z_i| \;\le\; \left(\frac{\mu}{\sqrt{n}} + \delta\right) a_2\, h^p.$$

Plugging in the lower bound (3) for $\langle W_i, A_i^* \rangle$ together with the above bound on $Z_i$, we get

$$Q_i \;\ge\; (1 - \delta)\, a_1 \;-\; \left(\frac{\mu}{\sqrt{n}} + \delta\right) a_2\, h^p.$$

For $r_i = \mathrm{ReLU}(Q_i - \epsilon)$ to be strictly positive, we need $Q_i > \epsilon$.

Now plugging in the proposed value for the bias and the hypotheses on $\delta$, $\mu$ and $h^p$, the right hand side above exceeds $\epsilon$, and hence $r_i > 0$ for every $i \in \mathrm{supp}(x^*)$.


Now, for $i \notin \mathrm{supp}(x^*)$ we would like to analyze the probability that the unit is active, i.e., $\Pr[Q_i \ge \epsilon \mid i \notin \mathrm{supp}(x^*)]$. We first simplify this quantity as follows:

$$\Pr\big[\, Q_i \ge \epsilon \mid i \notin \mathrm{supp}(x^*) \,\big] \;=\; \Pr\big[\, Z_i \ge \epsilon \,\big] \;=\; \Pr\Big[\, \textstyle\sum_{j \in S \setminus \{i\}} \langle W_i, A_j^* \rangle x_j^* \ge \epsilon \,\Big].$$
Using Chernoff's bound, we can obtain an exponentially small upper bound on this tail probability; in the resulting chain of inequalities, the second inequality follows from (4) and the non-negativity of the $x_j^*$, and the third inequality follows from Hoeffding's Lemma. Next, we also bound the remaining terms appearing in this estimate.

Finally, since the hypotheses of the theorem control $\delta$ and $\epsilon$, we conclude that this probability is small, i.e., $r_i = 0$ with high probability when $i \notin \mathrm{supp}(x^*)$. This completes the proof. ∎

5 Criticality of a neighborhood of $A^*$ (Proof of Theorem 3.2)

It turns out that the expectation of the full gradient of the loss function (2) is difficult to analyze directly. Hence, corresponding to the true gradient with respect to the $i$-th column of $W^\top$, we create a proxy, denoted by $\widehat{\nabla_{W_i} L}$, by replacing in the expression for the true expectation every occurrence of the ReLU-gate indicator random variable $\mathbf{1}_{\{\langle W_i, y \rangle > \epsilon\}}$ by the indicator random variable $\mathbf{1}_{\{i \in \mathrm{supp}(x^*)\}}$. This proxy is shown to be a good approximant of the expected gradient in the following lemma.
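To make the substitution concrete, the sketch below computes the gradient of the squared loss in (2) with the ReLU-gate indicator supplied as an argument: passing the true gate $\mathbf{1}_{\{\langle W_i, y\rangle > \epsilon\}}$ gives the exact gradient, while passing $\mathbf{1}_{\{i \in \mathrm{supp}(x^*)\}}$ gives the proxy. The closed-form expression is our own derivation of the gradient of (2) (with bias $-\epsilon\mathbf{1}$) and is meant only to illustrate the substitution.

    import numpy as np

    def gradient_with_gate(W, y, eps, gate):
        # Gradient of 0.5 * ||W^T ReLU(W y - eps) - y||^2 with respect to W,
        # where the 0/1 vector `gate` of length h stands in for the ReLU indicator.
        # gate = 1{W y - eps > 0} recovers the true gradient; gate = 1{x* != 0}
        # gives the proxy gradient, with the indicator swapped at every occurrence.
        pre = W @ y - eps
        r = gate * pre                   # equals ReLU(pre) when gate is the true indicator
        e = W.T @ r - y                  # reconstruction error
        return np.outer(r, e) + np.outer(gate * (W @ e), y)

For the true gradient one would pass gate = (W @ y - eps > 0).astype(float); for the proxy, gate = (x_star != 0).astype(float).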

Lemma 5.1.

Assume that the hypotheses of Theorem 3.1 hold and additionally let the norm of the training vectors $y$ be bounded by a polynomial in $h$. Then we have, for each $i$ (indexing the columns of $W^\top$), that the norm of the difference between the expectation of the true gradient and the proxy gradient, $\big\| \mathbb{E}\big[\nabla_{W_i} L\big] - \mathbb{E}\big[\widehat{\nabla_{W_i} L}\big] \big\|$, is asymptotically (in $h$) negligible.

Proof.

This lemma has been proven in Section A of the Appendix. ∎


Lemma 5.2.

Assume that the hypotheses of Theorem 3.1 hold. Then for each $i$ indexing the columns of $W^\top$, there exist real valued functions $\alpha_i$ and $\beta_i$ and a vector $e_i$ (all depending on $W$) in terms of which the expectation of the proxy gradient $\mathbb{E}\big[\widehat{\nabla_{W_i} L}\big]$ can be written, with the asymptotic behaviour (in $h$) of $\alpha_i$, $\beta_i$ and $\|e_i\|$ controlled as detailed in Section B of the Appendix.

Proof.

This lemma has been proven in Section B of the Appendix.∎



With the above asymptotic results, we are in a position to assemble the proof of Theorem 3.2.

Proof of Theorem 3.2.

Consider any $i$ indexing the columns of $W^\top$. Recall the definition of the proxy gradient $\widehat{\nabla_{W_i} L}$ at the beginning of this section. Using the functions $\alpha_i$, $\beta_i$ and the vector $e_i$ as defined in Lemma 5.2, we can write the expectation of the true gradient as the decomposition of the proxy gradient given there plus an error term, and by Lemma 5.1 this error term is asymptotically negligible.

Since the relevant combination of the distribution parameters is superpolynomial in $h$, we obtain that $\big\| \mathbb{E}\big[\nabla_{W_i} L\big] \big\|$ is asymptotically (in $h$) negligible, which proves Theorem 3.2. ∎

6 Simulations

We conduct some experiments on synthetic data in order to check whether the gradient norm is indeed small within the columnwise $\delta$-ball of $A^*$. We also make some observations about the landscape of the squared loss function, which have implications for being able to recover the ground-truth dictionary $A^*$.

Data Generation Model:

We generate random dictionaries ($A^*$) of size $n \times h$, where $h$ ranges over the values listed in Table 1. The dictionary entries are drawn from a standard Gaussian, and the columns of the dictionary are then normalized. These dictionaries are incoherent with high probability. For each $h$, we generate a dataset containing sparse vectors with $h^p$ non-zero entries. In our experiments, the coherence parameter of the generated dictionaries was small compared to the largest perturbation radius $\delta$ that we consider. We conduct experiments for small values of $p$. Here $h$ is the hidden layer dimension of the autoencoder and $p$ controls the sparsity of the data used to train the autoencoder. The support of each sparse vector is drawn uniformly from all sets of indices of size $h^p$, and the non-zero entries in the sparse vectors are drawn from a uniform distribution on a fixed positive interval $[a_1, a_2]$. Once we have generated the sparse vectors, we collect them in a matrix $X^*$ and then compute the signals $Y = A^* X^*$.

We set up the autoencoder as defined in equation (1). The threshold $\epsilon$ in the hidden layer bias $b = -\epsilon \mathbf{1}$ is set to a small positive value; this choice does not violate Theorem 3.1, and it was made so as to have the ReLU layer of the autoencoder recover a large fraction of the support of $x^*$. We analyze the squared loss function in (2) and its gradient with respect to a column of $W^\top$ through their empirical averages over the signals in $Y$.

h \ δ     0.01               0.02               0.05               0.1
256       (0.0137, 0.0041)   (0.0138, 0.0044)   (0.0126, 0.0052)   (0.0095, 0.0068)
512       (0.0058, 0.0021)   (0.0058, 0.0022)   (0.0054, 0.0027)   (0.0071, 0.0036)
1024      (0.0025, 0.0010)   (0.0024, 0.0011)   (0.0026, 0.0014)   (0.0079, 0.0020)
2048      (0.0011, 0.0005)   (0.0012, 0.0006)   (0.0025, 0.0007)   (0.0031, 0.0010)
4096      (0.0006, 0.0003)   (0.0012, 0.0003)   (0.0013, 0.0004)   (0.0026, 0.0006)

Table 1: Average gradient norm for points that are columnwise $\delta$ away from $A^*$. For each $h$ and $\delta$ we report a pair whose first entry is the average gradient column norm and whose second entry is the comparison quantity discussed in the text. We note that the gradient norm and the comparison quantity are of the same order, and for any fixed $\delta$ the gradient norm is decreasing with $h$, as expected from Theorem 3.2.

Results:

Once we have generated the data, we compute the empirical average of the gradient of the loss function in (2) at random points $W^\top$ which are columnwise $\delta$ away from $A^*$. We average the gradient over points which are all at the same distance from $A^*$, and compare the average column norm of the gradient to the comparison quantity reported alongside it in Table 1. Our experiments show that the average column norm of the gradient is of the order of this quantity (and thus falls with $h$ for any fixed $\delta$), as expected from Theorem 3.2. Results for points sampled at the values of $\delta$ listed in Table 1 are shown there.
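A minimal version of this experiment, reusing sample_sparse_code, gradient_with_gate, A_star and rng from the earlier sketches, perturbs every column of $A^{*\top}$ by exactly $\delta$, averages the (true) gradient over a batch of generated signals, and reports the mean column norm; the dataset size and parameter values below are placeholders.

    import numpy as np

    def avg_gradient_norm(A_star, delta, eps, p, a1, a2, num_samples, rng):
        n, h = A_star.shape
        E = rng.standard_normal((h, n))
        E *= delta / np.linalg.norm(E, axis=1, keepdims=True)
        W = A_star.T + E                                   # columnwise delta away from A*
        G = np.zeros_like(W)
        for _ in range(num_samples):
            x = sample_sparse_code(h, p, a1, a2, rng)
            y = A_star @ x
            gate = (W @ y - eps > 0).astype(float)         # true ReLU gate
            G += gradient_with_gate(W, y, eps, gate)
        G /= num_samples
        return np.mean(np.linalg.norm(G, axis=1))          # average norm over the rows W_i

    print(avg_gradient_norm(A_star, delta=0.05, eps=0.1, p=0.1,
                            a1=0.5, a2=1.0, num_samples=5000, rng=rng))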

Figures 1, 3 and 5: loss function plots along a randomly chosen direction through $A^*$. Figures 2, 4 and 6: the corresponding average gradient norm plots. The three pairs correspond to the three parameter settings discussed in the text.


We also plot the squared loss of the autoencoder along a randomly chosen direction to see if $A^*$ is possibly a local minimum. More precisely, we draw a matrix $\Delta$ of the same dimensions as $A^*$ from a standard normal distribution and normalize its columns. We then plot the empirical squared loss at $A^* + t\Delta$ as a function of $t$, as well as the gradient norm averaged over all the columns. For purposes of illustration, we show these plots for two parameter settings, in Figures 1 and 2 and in Figures 3 and 4 respectively.
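The slice just described can be generated as below (reusing A_star, rng and sample_sparse_code from the earlier sketches): draw a random direction $\Delta$ with unit-norm columns, evaluate the empirical loss at $A^* + t\Delta$ over a grid of $t$, and plot on a log scale. The grid, bias and sample count are illustrative.

    import numpy as np
    import matplotlib.pyplot as plt

    Delta = rng.standard_normal(A_star.shape)
    Delta /= np.linalg.norm(Delta, axis=0)                  # unit-norm columns

    def empirical_loss(W, Y, eps):
        R = np.maximum(Y @ W.T - eps, 0.0)                  # hidden activations for all samples
        return 0.5 * np.mean(np.sum((R @ W - Y) ** 2, axis=1))

    h = A_star.shape[1]
    Y = np.stack([A_star @ sample_sparse_code(h, 0.1, 0.5, 1.0, rng) for _ in range(2000)])
    ts = np.linspace(-0.5, 0.5, 41)
    losses = [empirical_loss((A_star + t * Delta).T, Y, eps=0.1) for t in ts]
    plt.semilogy(ts, losses); plt.xlabel("t"); plt.ylabel("empirical squared loss"); plt.show()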

From the first four plots, we can observe that the loss function value and the gradient norm keep decreasing as we get close to $A^*$. Since $\Delta$ is a randomly chosen direction, this suggests that $A^*$ is a local minimum for the squared loss function. The plots we show here use a log scale along the y-axis, which is why it seems as though there is a sharp decrease in the function value. Viewed on a normal scale, the function seems to decrease smoothly to a local minimum at $A^*$.

In Figures 5 and 6 we plot the function and gradient norm for a third parameter setting, in which the relevant perturbation scale is much larger than the coherence parameter and hence outside the region where the support recovery result, Theorem 3.1, is valid. We suspect that $A^*$ now lies in a region where the hidden ReLU units are largely inactive, which means the function is flat in a small neighborhood of $A^*$.

7 Conclusion

In this paper we have undertaken a rigorous analysis of the squared loss function of an autoencoder when the data is assumed to be generated by sensing of sparse high dimensional vectors by an overcomplete dictionary. We have shown that the expected gradient of this loss function is very close to zero in a neighborhood of the generating overcomplete dictionary.

Our simulations complement this theoretical result by providing further empirical support. Firstly, they show that the gradient norm in this $\delta$-ball of $A^*$ indeed falls with $h$ and is of the same order as the quantity expected from our proof. Secondly, the experiments also strongly suggest ranges of values of $h$ and $\delta$ where $A^*$ is a local minimum of this loss function and where it has a neighborhood in which the reconstruction error is low.

This suggests that sparse coding problems can be solved by training autoencoders using gradient descent based algorithms. Further, recent investigations have led to the conjecture/belief that many important unsupervised learning tasks, e.g. recognizing handwritten digits, are sparse coding problems in disguise [25, 26]. Thus, our results could shed some light on the observed phenomenon that gradient descent based algorithms train autoencoders to low reconstruction error for natural data sets, like MNIST.

It remains to rigorously show whether a gradient descent algorithm can be initialized randomly (possibly far away from $A^*$) and still be shown to converge to this neighborhood of critical points around the dictionary. Towards that, it might be helpful to understand the structure of the Hessian outside this neighborhood. Since our analysis applies to the expected gradient, it also remains to analyze the sample complexity at which these results become prominent.

The possibility also remains open that this standard loss, or some other loss function for the autoencoder, has the provable property of a global minimum at the ground truth dictionary. We have mentioned one example of such a case in a special setting (when $A^*$ is square orthogonal and $x^*$ is non-negative), and even in this special case it remains open to find a provable optimization algorithm.

On the simulation front we have a couple of open challenges yet to be tackled. Firstly, it is left to find efficient implementations of the iterative update rule based on the exact gradient of the proposed loss function, which has been given in (2). This would open up avenues for testing the power of this loss function on real data rather than the synthetic data used here. Secondly, a simulation of the main Theorem 3.2 that can probe deeper into its claim would need to be able to sample dictionaries $A^*$ for different $h$ at a fixed value of the incoherence parameter $\mu$. This question of sampling $A^*$ under such constraints is an unresolved one that is left for future work.

Autoencoders with more than one hidden layer have been used for unsupervised feature learning [22], and recently there have been analyses of the sparse coding performance of convolutional neural networks with one layer [20] and with two layers of nonlinearities [39]. The connections between neural networks and sparse coding have also been recently explored in [14]. It remains an exciting open avenue of research to try to do a similar study as in this work to determine if and how deeper architectures under the same generative model might provide better means of doing sparse coding.

Acknowledgements

Akshay Rangamani and Peter Chin are supported by the AFOSR grant FA9550-12-1-0136. Amitabh Basu and Anirbit Mukherjee gratefully acknowledge support from the NSF grant CMMI1452820. We would like to thank Raman Arora (JHU) and Siva Theja Maguluri (Georgia Institute of Technology) for illuminating comments and discussions.

References

  • [1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
  • [2] A. Agarwal, A. Anandkumar, P. Jain, P. Netrapalli, and R. Tandon. Learning sparsely used overcomplete dictionaries. In COLT, pages 123–137, 2014.
  • [3] G. Alain and Y. Bengio. What regularized auto-encoders learn from the data-generating distribution. Journal of Machine Learning Research, 15(1):3563–3593, 2014.
  • [4] Z. Allen-Zhu. Natasha 2: Faster non-convex optimization than SGD. arXiv preprint arXiv:1708.08694, 2017.
  • [5] A. Anandkumar, R. Ge, D. J. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. Journal of Machine Learning Research, 15(1):2773–2832, 2014.
  • [6] R. Arora, A. Basu, P. Mianjy, and A. Mukherjee. Understanding deep neural networks with rectified linear units. arXiv preprint arXiv:1611.01491, 2016.
  • [7] S. Arora, A. Bhaskara, R. Ge, and T. Ma. More algorithms for provable dictionary learning. arXiv:1401.0579, 2014.
  • [8] S. Arora, R. Ge, T. Ma, and A. Moitra. Simple, efficient, and neural algorithms for sparse coding. In COLT, pages 113–149, 2015.
  • [9] S. Arora, R. Ge, and A. Moitra. New algorithms for learning incoherent and overcomplete dictionaries. In COLT, pages 779–806, 2014.
  • [10] D. Arpit, Y. Zhou, H. Ngo, and V. Govindaraju. Why regularized auto-encoders learn sparse representation? In International Conference on Machine Learning, pages 136–144, 2016.
  • [11] P. Baldi. Autoencoders, unsupervised learning, and deep architectures. In Proceedings of ICML Workshop on Unsupervised and Transfer Learning, pages 37–49, 2012.
  • [12] Y. Bengio, L. Yao, G. Alain, and P. Vincent. Generalized denoising auto-encoders as generative models. In Advances in Neural Information Processing Systems, pages 899–907, 2013.
  • [13] J. Błasiok and J. Nelson. An improved analysis of the er-spud dictionary learning algorithm. arXiv:1602.05719, 2016.
  • [14] A. Bora, A. Jalal, E. Price, and A. G. Dimakis. Compressed sensing using generative models. arXiv preprint arXiv:1703.03208, 2017.
  • [15] A. Coates, A. Ng, and H. Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 215–223, 2011.
  • [16] A. Coates and A. Y. Ng. The importance of encoding versus training with sparse coding and vector quantization. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 921–928, 2011.
  • [17] S. S. Du, J. D. Lee, and Y. Tian. When is a convolutional filter easy to learn? arXiv preprint arXiv:1709.06129, 2017.
  • [18] R. Ge, C. Jin, and Y. Zheng. No spurious local minima in nonconvex low rank problems: A unified geometric analysis. arXiv preprint arXiv:1704.00708, 2017.
  • [19] A. Gilbert. CBMS conference on sparse approximation and signal recovery algorithms, May 22-26, 2017 and 16th New Mexico analysis seminar, May 21. https://www.math.nmsu.edu/~jlakey/cbms2017/cbms_lecture_notes.html.
  • [20] A. C. Gilbert, Y. Zhang, K. Lee, Y. Zhang, and H. Lee. Towards understanding the invertibility of convolutional neural networks. arXiv preprint arXiv:1705.08664, 2017.
  • [21] M. Janzamin, H. Sedghi, and A. Anandkumar. Beating the perils of non-convexity: Guaranteed training of neural networks using tensor methods. arXiv preprint arXiv:1506.08473, 2015.
  • [22] Q. V. Le. Building high-level features using large scale unsupervised learning. In 2013 IEEE international conference on acoustics, speech and signal processing, pages 8595–8598. IEEE, 2013.
  • [23] J. Li, T. Zhang, W. Luo, J. Yang, X.-T. Yuan, and J. Zhang. Sparseness analysis in the pretraining of deep neural networks. IEEE transactions on neural networks and learning systems, 2016.
  • [24] Y. Li and Y. Yuan. Convergence analysis of two-layer neural networks with relu activation. arXiv preprint arXiv:1705.09886, 2017.
  • [25] A. Makhzani and B. Frey. K-sparse autoencoders. arXiv preprint arXiv:1312.5663, 2013.
  • [26] A. Makhzani and B. J. Frey. Winner-take-all autoencoders. In Advances in Neural Information Processing Systems, pages 2791–2799, 2015.
  • [27] S. Mei, Y. Bai, and A. Montanari. The landscape of empirical risk for non-convex losses. arXiv preprint arXiv:1607.06534, 2016.
  • [28] A. Moitra and G. Valiant. Settling the polynomial learnability of mixtures of gaussians. In Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, pages 93–102. IEEE, 2010.
  • [29] A. Ng. Sparse autoencoder. 2011.
  • [30] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607, 1996.
  • [31] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by v1? Vision research, 37(23):3311–3325, 1997.
  • [32] B. A. Olshausen and D. J. Field. How close are we to understanding v1? Neural computation, 17(8):1665–1699, 2005.
  • [33] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th international conference on machine learning (ICML-11), pages 833–840, 2011.
  • [34] H. Sedghi and A. Anandkumar. Provable methods for training neural networks with sparse connectivity. arXiv preprint arXiv:1412.2693, 2014.
  • [35] D. A. Spielman, H. Wang, and J. Wright. Exact recovery of sparsely-used dictionaries. In COLT, pages 37–1, 2012.
  • [36] Y. Tian. An analytical formula of population gradient for two-layered relu network and its applications in convergence and critical point analysis. arXiv preprint arXiv:1703.00560, 2017.
  • [37] T. Tieleman and G. Hinton. Lecture 6.5 - RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
  • [38] A. M. Tillmann. On the computational intractability of exact and approximate dictionary learning. IEEE Signal Processing Letters, 22(1):45–49, 2015.
  • [39] V. Papyan, Y. Romano, and M. Elad. Convolutional neural networks analyzed via convolutional sparse coding. arXiv preprint arXiv:1607.08194, 2016.
  • [40] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pages 1096–1103. ACM, 2008.
  • [41] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371–3408, 2010.
  • [42] L. Wu, Z. Zhu, et al. Towards understanding generalization of deep learning: Perspective of loss landscapes. arXiv preprint arXiv:1706.10239, 2017.
  • [43] Q. Zhang, R. Panigrahy, S. Sachdeva, and A. Rahimi. Electron-proton dynamics in deep learning. arXiv preprint arXiv:1702.00458, 2017.

Appendix

Appendix A The proxy gradient is a good approximation of the true expectation of the gradient (Proof of Lemma 5.1)

Proof.

To make it easy to present this argument, let us abstractly think of the gradient term (defined for any $W$) as a function of $W$, of the sparse code $x^*$, and of the ReLU-gate indicator random variable. It is to be noted that, because of the ReLU term and its derivative, this function has a dependency on $x^*$ even outside its dependency through this indicator. Let us define another random variable obtained by replacing the ReLU-gate indicator by the support indicator $\mathbf{1}_{\{i \in \mathrm{supp}(x^*)\}}$. Then we have,


In the last step above we have used the Cauchy-Schwarz inequality for random variables. We recognize that the second expectation is precisely what we defined as the proxy gradient $\widehat{\nabla_{W_i} L}$. Further, for $W$ such as in this lemma, the support recovery theorem (Theorem 3.1) holds, and that is precisely the statement that the probability of the ReLU gate and the support indicator disagreeing is small. So we can rewrite the above inequality as,


We remember that the remaining factor is bounded by a polynomial in $h$ because its dependency is through Frobenius norms of submatrices of $W$ and norms of projections of $y$. But the norm of the training vectors (that is, $\|y\|$) has been assumed to be bounded by a polynomial in $h$. Also we have the assumption that the columns of $W^\top$ are within a $\delta$-ball of the corresponding columns of $A^*$, which in turn is an $n \times h$ matrix of bounded norm because all its columns are normalized. So, summarizing, we have,


The above inequality immediately implies the claimed lemma. ∎

Appendix B The asymptotics of the coefficients of the gradient of the squared loss (Proof of Lemma 5.2)

To recap, we imagine being given as input signals $y \in \mathbb{R}^n$ (imagined as column vectors), which are generated from an overcomplete dictionary $A^*$ of a fixed incoherence. Let $x^* \in \mathbb{R}^h$ (imagined as column vectors) be the sparse code that generates $y$, i.e., $y = A^* x^*$. The model of the autoencoder that we now have is $\hat{y} = W^\top \mathrm{ReLU}(W y + b)$. Here $W$ is an $h \times n$ matrix and the $i$-th column of $W^\top$ is denoted by the column vector $W_i$.

B.1 Derivative of the standard squared loss of a ReLU autoencoder

Using the above notation, the squared loss of the autoencoder is $\frac{1}{2}\|\hat{y} - y\|^2$. But we introduce a dummy constant, say $\beta$, to be multiplied to $y$, because this helps read the complicated equations that now follow. This marker helps easily spot those terms which depend on the sensing of $x^*$ (those with a factor of $\beta$) as opposed to the terms which are "purely" dependent on the neural net (those without the factor of $\beta$). Thus we think of the squared loss of our autoencoder as,


where we have defined the quantity appearing above as,

Then we have,

In the form of a derivative matrix this means,


This helps us write,


Now, going over to the proxy gradient corresponding to this term, we define the following vector:


Thus we have,


Now we invoke the distributional assumption about the i.i.d. sampling of the coordinates of $x^*$ for a fixed support, together with the definitions of the distribution parameters, to evaluate the relevant conditional expectations. Thus we get,