# Modeling Sparse Deviations for Compressed Sensing using Generative Models

Manik Dhar    Aditya Grover    Stefano Ermon
###### Abstract

In compressed sensing, a small number of linear measurements can be used to reconstruct an unknown signal. Existing approaches leverage assumptions on the structure of these signals, such as sparsity or the availability of a generative model. A domain-specific generative model can provide a stronger prior and thus allow for recovery with far fewer measurements. However, unlike sparsity-based approaches, existing methods based on generative models guarantee exact recovery only over their support, which is typically only a small subset of the space on which the signals are defined. We propose Sparse-Gen, a framework that allows for sparse deviations from the support set, thereby achieving the best of both worlds by using a domain-specific prior and allowing reconstruction over the full space of signals. Theoretically, our framework provides a new class of signals that can be acquired using compressed sensing, reducing classic sparse vector recovery to a special case and avoiding the restrictive support due to a generative model prior. Empirically, we observe consistent improvements in reconstruction accuracy over competing approaches, especially in the more practical setting of transfer compressed sensing where a generative model for a data-rich, source domain aids sensing on a data-scarce, target domain.

Machine Learning, ICML

## 1 Introduction

In many real-world domains, data acquisition is costly. For instance, magnetic resonance imaging (MRI) requires scan times proportional to the number of measurements, which can be significant for patients (Lustig et al., 2008). Geophysical applications like oil drilling require expensive simulation of seismic waves (Qaisar et al., 2013). Such applications, among many others, can benefit significantly from compressed sensing techniques to acquire signals efficiently (Candès & Tao, 2005; Donoho, 2006; Candès et al., 2006).

In compressed sensing, we wish to acquire an n-dimensional signal x ∈ R^n using m measurements linear in x. The measurements could potentially be noisy, but even in the absence of any noise we need to impose additional structure on the signal to guarantee unique recovery. Classical results on compressed sensing impose structure by assuming the underlying signal to be approximately l-sparse in some known basis, i.e., the l largest entries dominate the rest. For instance, images and audio signals are typically sparse in the wavelet and Fourier basis respectively (Mallat, 2008). If the measurement matrix A ∈ R^{m×n} relating the signal and the measurements satisfies certain mild conditions, then one can provably recover x with only m = O(l log(n/l)) measurements using LASSO (Tibshirani, 1996; Candès & Tao, 2005; Donoho, 2006; Candès et al., 2006; Bickel et al., 2009).

Alternatively, structural assumptions on the signals being sensed can be learned from data, e.g., using a dataset of typical signals (Baraniuk et al., 2010; Peyre, 2010; Chen et al., 2010; Yu & Sapiro, 2011). Particularly relevant to this work, Bora et al. (2017) proposed an approach where structure is provided by a deep generative model learned from data. Specifically, the underlying signal x being sensed is assumed to be close to the range of a deterministic function G : R^k → R^n expressed by a pretrained latent variable model, i.e., x ≈ G(z), where z ∈ R^k denotes the latent variables. Consequently, the signal is recovered by optimizing for a latent vector z that minimizes the distance between the measurements corresponding to G(z) and the actual ones. Even though the objective being optimized in this case is non-convex, empirical results suggest that the reconstruction error decreases much faster than LASSO-based recovery as we increase the number of measurements.

A limitation of the above approach is that the recovered signal is constrained to be in the range of the generator function G. Hence, if the true signal being sensed is not in the range of G, the algorithm cannot drive the reconstruction error to zero even when m = n (even if we ignore error due to measurement noise and non-convex optimization). This is also observed empirically: the reconstruction error of generative model-based recovery saturates as we keep increasing the number of measurements m. On the other hand, LASSO-based recovery continues to shrink the error with an increasing number of measurements, eventually outperforming generative model-based recovery.

To overcome this limitation, we propose a framework that allows recovery of signals with sparse deviations from the set defined by the range of the generator function. The recovered signals have the general form x = G(z) + ν, where ν ∈ R^n is a sparse vector. This allows the recovery algorithm to consider signals away from the range of the generator function. Similar to LASSO, we relax the hardness of optimizing for sparse vectors by minimizing the ℓ1 norm of the deviations. Unlike LASSO-based recovery, we can exploit the rich structure imposed by a (deep) generative model (at the expense of solving a hard optimization problem if G is non-convex). In fact, we show that LASSO-based recovery is a special case of our framework in which the generator function maps all z to the origin. Unlike generative model-based recovery, the signals recovered by our algorithm are not constrained to be in the range of the generator function.

Our proposed algorithm, referred to as Sparse-Gen, has desirable theoretical properties and empirical performance. Theoretically, we derive upper bounds on the reconstruction error for an optimal decoder with respect to the proposed model and show that recovery is guaranteed with roughly O(k + l log(n/l)) measurements. We confirm our theory empirically: recovery using Sparse-Gen with variational autoencoders (Kingma & Welling, 2014) as the underlying generative model outperforms both LASSO-based and generative model-based recovery in terms of reconstruction error for the same number of measurements on the MNIST and Omniglot datasets. Additionally, we observe significant improvements in the more practical and novel task of transfer compressed sensing, where a generative model on a data-rich, source domain provides a prior for sensing a data-scarce, target domain.

## 2 Preliminaries

In this section, we review the necessary background and prior work in modeling domain specific structure in compressed sensing. We are interested in solving the following system of equations,

 y =Ax (1)

where x ∈ R^n is the signal of interest being sensed through measurements y ∈ R^m, and A ∈ R^{m×n} is a measurement matrix. For efficient acquisition of signals, we will design measurement matrices such that m ≪ n. However, the system is under-determined whenever m < n. Hence, unique recovery requires additional assumptions on x. We now discuss two ways to model the structure of x.

Sparsity. Sparsity in a well-chosen basis is natural in many domains. For instance, natural images are sparse in the wavelet basis, whereas audio signals exhibit sparsity in the Fourier basis (Mallat, 2008). Hence, it is natural to assume that the signals we are interested in recovering lie in the set

 Sl(0)={x:∥x−0∥0≤l}. (2)

This is the set of l-sparse vectors, with the distance measured from the origin. Such assumptions dominate the prior literature in compressed sensing and can be further relaxed to recover approximately sparse signals (Candès & Tao, 2005; Donoho, 2006; Candès et al., 2006).
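
As a small illustration (a numpy sketch of our own, not from the paper), projecting a vector onto the set of l-sparse vectors simply keeps its l largest-magnitude entries:

```python
import numpy as np

def best_l_sparse(x, l):
    """Project x onto the set of l-sparse vectors: keep the l largest-magnitude
    entries of x and zero out the rest."""
    out = np.zeros_like(x)
    if l > 0:
        idx = np.argsort(np.abs(x))[-l:]  # indices of the l largest-magnitude entries
        out[idx] = x[idx]
    return out

x = np.array([0.1, -3.0, 0.05, 2.0, 0.0])
print(best_l_sparse(x, 2))  # keeps only -3.0 and 2.0
```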

Latent variable generative models. A latent variable model specifies a joint distribution p(x, z) over the observed data x (e.g., images) and a set of latent variables z (e.g., features). Given a training set of signals, we can learn the parameters θ of such a model, e.g., via maximum likelihood. When the model is parameterized using deep neural networks, such generative models can effectively model complex, high-dimensional signal distributions for modalities such as images and audio (Kingma & Welling, 2014; Goodfellow et al., 2014).

Given a pretrained latent variable generative model with parameters θ, we can associate with it a generator function G : R^k → R^n mapping a latent vector z to the mean of the conditional distribution p(x | z). Thereafter, the space of signals that can be recovered with such a model is given by the range of the generator function,

 SG={G(z):z∈Rk}. (3)

Note that the set S_G is defined with respect to the latent vectors z ∈ R^k, and we omit the dependence of G on the parameters θ (which are fixed for a pretrained model) for brevity.

### 2.1 Recovery algorithms

Signal recovery in compressed sensing typically involves solving an optimization problem consistent with the modeling assumptions on the domain of the signals being sensed.

Sparse vector recovery using LASSO. Under the assumption of sparsity, the signal can be recovered by solving an ℓ0-minimization problem (Candès & Tao, 2005; Donoho, 2006; Candès et al., 2006),

 minx∥x∥0 s.t. Ax=y. (4)

The objective above is however NP-hard to optimize, and hence, it is standard to consider a convex relaxation,

 minx∥x∥1 s.t. Ax=y. (5)

In practice, it is common to solve the Lagrangian of the above problem. We refer to this method as LASSO-based recovery due to the similarity of the objective in Eq. (5) to the LASSO regularization used broadly in machine learning (Tibshirani, 1996). LASSO-based recovery is the predominant technique for recovering sparse signals since it involves solving a tractable convex optimization problem.
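
For concreteness, the Lagrangian form of Eq. (5) can be minimized with the iterative soft-thresholding algorithm (ISTA), a standard proximal-gradient method. The sketch below is our own illustration on a synthetic l-sparse signal (problem sizes, seed, and regularization weight are arbitrary choices), not the paper's implementation:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.01, iters=2000):
    """Minimize 0.5 * ||A x - y||_2^2 + lam * ||x||_1 by proximal gradient (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

rng = np.random.default_rng(0)
n, m, l = 100, 40, 3
x_true = np.zeros(n)
x_true[rng.choice(n, l, replace=False)] = rng.standard_normal(l)
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian measurement matrix
x_hat = ista(A, A @ x_true, lam=0.005)
print(np.linalg.norm(x_hat - x_true))  # reconstruction error; small when recovery succeeds
```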

In order to guarantee unique recovery for the underdetermined system in Eq. (1), the measurement matrix A is designed to satisfy the Restricted Isometry Property (RIP) or the Restricted Eigenvalue Condition (REC) for l-sparse vectors with high probability (Candès & Tao, 2005; Bickel et al., 2009). We define these conditions below.

###### Definition 1.

Let S_l ⊆ R^n be the set of l-sparse vectors. For some parameter α ∈ (0, 1), a matrix A ∈ R^{m×n} is said to satisfy RIP(l, α) if, ∀ x ∈ S_l,

 (1−α)∥x∥2 ≤ ∥Ax∥2 ≤ (1+α)∥x∥2.
###### Definition 2.

Let S_l ⊆ R^n be the set of l-sparse vectors. For some parameter γ > 0, a matrix A ∈ R^{m×n} is said to satisfy REC(l, γ) if, ∀ x ∈ S_l,

 ∥Ax∥2 ≥ γ∥x∥2.

Intuitively, RIP implies that A approximately preserves Euclidean norms of sparse vectors, and REC implies that sparse vectors are far from the nullspace of A. Many classes of matrices satisfy these conditions with high probability, including random Gaussian and Bernoulli matrices, where every entry of the matrix is sampled i.i.d. from a standard normal or uniform Bernoulli distribution respectively (Baraniuk et al., 2008).
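
This norm-preservation behavior is easy to observe empirically (a quick check, not a proof): for a random Gaussian matrix with entries of variance 1/m, the norms of sparse vectors are approximately preserved. Sizes and seed below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, l = 1000, 200, 10
A = rng.standard_normal((m, n)) / np.sqrt(n / n) / np.sqrt(m)  # i.i.d. N(0, 1/m) entries

ratios = []
for _ in range(100):
    x = np.zeros(n)
    x[rng.choice(n, l, replace=False)] = rng.standard_normal(l)  # random l-sparse vector
    ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))

print(min(ratios), max(ratios))  # ratios concentrate near 1, as RIP suggests
```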

Generative model vector recovery using gradient descent. If the signals being sensed are assumed to lie close to the range of a generator function G as defined in Eq. (3), then we can recover the best approximation to the true signal by ℓ2-minimization over z,

 minz∥AG(z)−y∥22. (6)

The function G is typically expressed as a deep neural network, which makes the overall objective non-convex but differentiable almost everywhere w.r.t. z. In practice, good reconstructions can be recovered by gradient-based optimization methods. We refer to this method, proposed by Bora et al. (2017), as generative model-based recovery.
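
To make the optimization in Eq. (6) concrete without a trained network, the sketch below (our own toy example) uses a linear generator G(z) = Wz, for which the gradient of ∥AG(z) − y∥₂² is available in closed form; the paper itself uses deep VAE/GAN generators and autodiff:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 100, 30, 5
W = rng.standard_normal((n, k))              # toy linear "generator": G(z) = W z
A = rng.standard_normal((m, n)) / np.sqrt(m)

z_true = rng.standard_normal(k)
y = A @ (W @ z_true)                         # noiseless measurements of a signal in range(G)

# Gradient descent on f(z) = ||A G(z) - y||_2^2, with grad f = 2 W^T A^T (A W z - y).
z = np.zeros(k)
M = A @ W
step = 0.5 / np.linalg.norm(M, 2) ** 2
for _ in range(2000):
    z -= step * 2 * M.T @ (M @ z - y)
print(np.linalg.norm(W @ z - W @ z_true))    # ~0 here: the signal lies in range(G)
```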

To guarantee unique recovery, generative model-based recovery makes two key assumptions. First, the generator function G is assumed to be L-Lipschitz, i.e., ∀ z1, z2 ∈ R^k,

 ∥G(z1)−G(z2)∥2≤L∥z1−z2∥2.

Secondly, the measurement matrix A is designed to satisfy the Set-Restricted Eigenvalue Condition (S-REC) with high probability (Bora et al., 2017).

###### Definition 3.

Let S ⊆ R^n. For some parameters γ > 0, δ ≥ 0, a matrix A ∈ R^{m×n} is said to satisfy S-REC(S, γ, δ) if, ∀ x1, x2 ∈ S,

 ∥A(x1−x2)∥2≥γ∥x1−x2∥2−δ.

S-REC generalizes REC to an arbitrary set S of vectors, as opposed to considering only the set of approximately sparse vectors, while allowing an additional slack term δ. In particular, S is chosen to be the range of the generator function G for generative model-based recovery.

## 3 The Sparse-Gen framework

The modeling assumptions based on sparsity and generative modeling discussed in the previous section can be limiting in many cases. On one hand, sparsity assumes a relatively weak prior over the signals being sensed. Empirically, we observe that the recovered signals have large reconstruction error, especially when the number of measurements is small. On the other hand, generative models impose a very strong but rigid prior, which works well when the number of measurements is small. However, the performance of the corresponding recovery methods saturates with increasing measurements since the recovered signal is constrained to lie in the range of the generator function G. If ẑ is the optimum returned by an optimization procedure for Eq. (6), then the reconstruction error ∥G(ẑ) − x∥2 is limited by the dimensionality of the latent space and the quality of the generator function.

To sidestep the above limitations, we consider a strictly more expressive class of signals by allowing sparse deviations from the range of a generator function. Formally, the domain of the recovered signals is given by,

 Sl,G=∪z∈Dom(G)Sl(G(z)) (7)

where S_l(G(z)) denotes the set of l-sparse vectors centered at G(z), and z varies over the domain of G (typically R^k). We refer to this modeling assumption and the consequent algorithmic framework for recovery as Sparse-Gen.

Based on this modeling assumption, we will recover signals of the form x = G(z) + ν for a deviation vector ν that is preferably sparse. Specifically, we consider the optimization of a hybrid objective,

 minz,ν∥ν∥0 s.t. A(G(z)+ν)=y. (8)

In the above optimization problem the objective is non-convex and non-differentiable, while the constraint is non-convex (for general G), making the problem hard to solve. To ease the optimization, we propose two modifications. First, we relax the ℓ0 minimization to an ℓ1 minimization similar to LASSO,

 minz,ν∥ν∥1 s.t. A(G(z)+ν)=y. (9)

Next, we replace the non-convex constraint with a squared ℓ2 penalty and consider the Lagrangian of the above problem to obtain the final unconstrained optimization problem for Sparse-Gen,

 minz,ν∥ν∥1+λ∥A(G(z)+ν)−y∥22 (10)

where λ > 0 is the Lagrange multiplier.

The above optimization problem is non-differentiable w.r.t. ν and non-convex w.r.t. z (if G is non-convex). In practice, it can be solved using gradient descent (since the non-differentiability occurs only at a finite number of points) or using sequential convex programming (SCP). SCP is an effective heuristic for non-convex problems where the convex portions of the problem are solved using a standard convex optimization technique (Boyd & Vandenberghe, 2004). In the case of Eq. (10), the optimization w.r.t. ν (for fixed z) is a convex optimization problem, whereas the non-convexity involves terms that are differentiable w.r.t. z if G is a deep neural network. Empirically, we find excellent recovery using standard first-order gradient-based methods (Duchi et al., 2011; Tieleman & Hinton, 2012; Kingma & Ba, 2015).
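
As a deliberately simplified sketch of this joint optimization, the code below runs proximal-gradient descent on Eq. (10) with a toy linear generator G(z) = Wz standing in for the deep generative model (in this linear case the problem is convex, so plain proximal-gradient converges; the step size, seed, and problem sizes are our own illustrative choices):

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(4)
n, m, k, l, lam = 100, 60, 5, 3, 100.0
W = rng.standard_normal((n, k)) / np.sqrt(n)   # toy linear "generator": G(z) = W z
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Ground truth: a point in range(G) plus an l-sparse deviation nu.
z_true = rng.standard_normal(k)
nu_true = np.zeros(n)
nu_true[rng.choice(n, l, replace=False)] = rng.standard_normal(l)
x_true = W @ z_true + nu_true
y = A @ x_true

# Proximal gradient descent on ||nu||_1 + lam * ||A (G(z) + nu) - y||_2^2  (Eq. 10).
M = A @ W
L = 2 * lam * np.linalg.norm(np.hstack([M, A]), 2) ** 2  # Lipschitz const. of the smooth part
step = 1.0 / L
z, nu = np.zeros(k), np.zeros(n)
for _ in range(20000):
    r = M @ z + A @ nu - y                    # residual of the measurement fit
    z = z - step * 2 * lam * M.T @ r          # plain gradient step on z
    nu = soft_threshold(nu - step * 2 * lam * A.T @ r, step)  # prox (l1) step on nu
x_hat = W @ z + nu
print(np.linalg.norm(x_hat - x_true))
```

Note how the signal outside range(G) is absorbed by ν, which the ℓ1 prox keeps sparse.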

Unlike LASSO-based recovery, which recovers only sparse signals, Sparse-Gen can impose a stronger domain-specific prior using a generative model. If we fix the generator function to map all z to the origin, we recover LASSO-based recovery as a special case of Sparse-Gen. Additionally, Sparse-Gen is not constrained to recover signals over the range of G, as in the case of generative model-based recovery. In fact, it can recover signals with sparse deviations from the range of G. Note that the sparse deviations can be defined in a basis different from the canonical basis. In such cases, we consider the following optimization problem,

 minz,ν∥Bν∥1+λ∥A(G(z)+ν)−y∥22 (11)

where B is a change-of-basis matrix chosen such that Bν is sparse. Figure 1 illustrates the differences in modeling assumptions between Sparse-Gen and other frameworks.

## 4 Theoretical Analysis

The proofs for all results in this section are given in the Appendix. Our analysis and experiments account for measurement noise ϵ in compressed sensing, i.e.,

 y =Ax+ϵ. (12)

Let Δ : R^m → R^n denote an arbitrary decoding function used to recover the true signal x from the measurements y. Our analysis will upper bound the ℓ2 error in recovery incurred by our proposed framework using mixed norm guarantees (in particular, ℓ2/ℓ1). To this end, we first state some key definitions. Define the least possible error for recovering x under the Sparse-Gen modeling as,

 σ_{S_{l,G}}(x) = inf_{x̂∈S_{l,G}} ∥x − x̂∥1

where the optimal x̂ is the closest point to x in the allowed domain S_{l,G}. We now state the main lemma guiding the theoretical analysis.

###### Lemma 1.

Given a function G : R^k → R^n and measurement noise ϵ with ∥ϵ∥2 ≤ ϵ_max, let A be any matrix that satisfies the S-REC and RIP conditions for some γ > 0, δ ≥ 0, α ∈ (0, 1). Then, there exists a decoder Δ such that,

 ∥x − Δ(Ax + ϵ)∥2 ≤ (2l)^{−1/2} C0 σ_{S_{l,G}}(x) + C1 ϵ_max + δ′

for all x, where C0, C1 are constants and δ′ is a scaled version of the S-REC slack δ (the exact expressions, in terms of α and γ, are given in the Appendix).

The above lemma shows that there exists a decoder such that the error in recovery can be upper bounded for measurement matrices satisfying S-REC and RIP. Note that Lemma 1 only guarantees the existence of such a decoder and does not prescribe an optimization algorithm for recovery. Apart from the error due to the bounded measurement noise and the scaled slack term δ′ inherited from the S-REC condition, the major term in the upper bound corresponds (up to constants) to the minimum possible error incurred by the best possible recovery vector in S_{l,G}, given by σ_{S_{l,G}}(x). Similar terms appear invariably in the compressed sensing literature and are directly related to the modeling assumptions regarding x (for example, Theorem 8.3 in Cohen et al. (2009)).

Our next lemma shows that random Gaussian matrices satisfy the S-REC (over the range of Lipschitz generative model functions) and RIP conditions with high probability for G with bounded domain, which together are sufficient conditions for Lemma 1 to hold.

###### Lemma 2.

Let G : B_k(r) → R^n be an L-Lipschitz function, where B_k(r) is the ℓ2-norm ball of radius r in R^k. For α ∈ (0, 1), if

 m = Ω( (1/α²) ( k log(Lr/δ) + l log(n/l) ) )

then a random matrix A ∈ R^{m×n} with i.i.d. Gaussian entries scaled such that A_{ij} ∼ N(0, 1/m) satisfies the S-REC and RIP conditions with high probability.

Using Lemma 1 and Lemma 2, we can bound the error due to decoding with generative models and random Gaussian measurement matrices in the following result.

###### Theorem 1.

Let G : B_k(r) → R^n be an L-Lipschitz function. For any α ∈ (0, 1), δ > 0, let A ∈ R^{m×n} be a random Gaussian matrix with

 m = Ω( (1/α²) ( k log(Lr/δ) + l log(n/l) ) )

rows of i.i.d. entries scaled such that A_{ij} ∼ N(0, 1/m). Let Δ be the decoder satisfying Lemma 1. Then, with high probability we have,

 ∥x − Δ(Ax + ϵ)∥2 ≤ (2l)^{−1/2} C0 σ_{S_{l,G}}(x) + C1 ϵ_max + δ′

for all x, where C0, C1, δ′ are as defined in Lemma 1.

From the above theorem, we see that the number of measurements needed to guarantee upper bounds on the reconstruction error of any signal with high probability depends on two terms. The first term includes the dependence on the Lipschitz constant of the generator function G: a high Lipschitz constant makes recovery harder (by requiring a larger number of measurements), but it only contributes logarithmically. The second term, typical of results in sparse vector recovery, shows a logarithmic growth in the dimensionality n of the signals. Ignoring logarithmic dependencies and constants, recovery using Sparse-Gen requires roughly O(k + l) measurements. Note that Theorem 1 assumes access to an optimization oracle for decoding. In practice, we consider the solutions returned by gradient-based optimization methods applied to the non-convex objective defined in Eq. (11), which are not guaranteed to correspond to the optimal decoding in general.

Finally, we obtain tighter bounds for the special case where G is expressed using a neural network with only ReLU activations. These bounds do not rely explicitly on the Lipschitz constant L or require the domain of G to be bounded.

###### Theorem 2.

If G is a neural network of depth d with only ReLU activations and at most c nodes in each layer, then the guarantees of Theorem 1 hold for

 m = Ω( (1/α²) ( (k+l) d log c + (k+l) log(n/l) ) ).

Our theoretical analysis formalizes the key properties of recovering signals using Sparse-Gen. As shown in Lemma 1, there exists a decoder for recovery based on such modeling assumptions that extends recovery guarantees for vanilla sparse vector recovery and generative model-based recovery. Such recovery requires measurement matrices that satisfy both the RIP and S-REC conditions over the set of vectors that deviate in sparse directions from the range of a generator function. In Theorems 1-2, we observed that the number of measurements required to guarantee recovery with high probability grows almost linearly (with some logarithmic terms) with the latent space dimensionality k of the generative model and the permissible sparsity l for deviating from the range of the generative model.

## 5 Experimental Evaluation

We evaluated Sparse-Gen for compressed sensing of high-dimensional signals from benchmark image datasets. Specifically, we considered the MNIST dataset of handwritten digits (LeCun et al., 2010) and the Omniglot dataset of handwritten characters (Lake et al., 2015). Both datasets have the same data dimensionality (28 × 28 images) but significantly different characteristics. The MNIST dataset has fewer classes (10 digits, 0-9), whereas Omniglot shows greater diversity (1623 characters across 50 alphabets). Additional experiments with generative adversarial networks on the CelebA dataset are reported in the Appendix.

Baselines. We considered methods based on sparse vector recovery using LASSO (Tibshirani, 1996; Candès & Tao, 2005) and generative model-based recovery using variational autoencoders (VAE) (Kingma & Welling, 2014; Bora et al., 2017). For VAE training, we used the standard train/held-out splits of both datasets. The compressed sensing experiments we report were performed on the entire test set of images. The architecture and other hyperparameter details are given in the Appendix.

Experimental setup. For the held-out set of instances, we artificially generated measurements through a random matrix A with entries sampled i.i.d. from a Gaussian with zero mean and variance 1/m. Measurement noise is sampled from a Gaussian with zero mean and a diagonal scalar covariance matrix with entries 0.01. For evaluation, we report the reconstruction error measured as ∥x̂ − x∥, where x̂ is the recovered signal and ∥·∥ is a norm of interest, varying the number of measurements m up to the signal dimensionality n. We report results for multiple norms.
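
The measurement pipeline above can be sketched as follows (a minimal numpy mock-up with a random stand-in image; the seed, measurement count, and helper name are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 784, 100                     # flattened 28 x 28 image, m measurements
x = rng.random(n)                   # stand-in for a test image with pixels in [0, 1]

A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))  # entries i.i.d. with variance 1/m
noise = rng.normal(0.0, np.sqrt(0.01), size=m)      # zero mean, covariance 0.01 * I
y = A @ x + noise                   # noisy measurements, as in Eq. (12)

def reconstruction_error(x_hat, x, p=2):
    """Evaluation metric: ||x_hat - x|| in the norm of interest."""
    return np.linalg.norm(x_hat - x, ord=p)

print(y.shape, reconstruction_error(x, x))
```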

We evaluated sensing of both continuous signals (MNIST), with pixel values in the range [0, 1], and discrete signals (Omniglot), with binary pixel values in {0, 1}. For all algorithms considered, recovery was performed by optimizing over a continuous space. In the case of sparse recovery methods (including Sparse-Gen), it is possible that unconstrained optimization returns signals outside the domain of interest, in which case they are projected to the required domain by simple clipping, i.e., any pixel value less than zero is clipped to 0 and any value greater than one is clipped to 1.

Results and Discussion. The reconstruction errors for varying numbers of measurements are given in Figure 2. Consistent with the theory, in the regime of few measurements, algorithms that incorporate the strong generative model prior outperform methods modeling sparsity using LASSO. The performance of plain generative model-based methods, however, saturates with increasing measurements, unlike Sparse-Gen and LASSO, which continue to shrink the error. The trends are consistent for both MNIST and Omniglot, although we observe that the relative magnitudes of the errors in the case of Omniglot are much higher than those of MNIST. This is expected due to the increased diversity and variation in the structure of the signals being sensed in the case of Omniglot. We also observe the trends to be consistent across the various norms considered.

### 5.1 Transfer compressed sensing

One of the primary motivations for compressed sensing is to directly acquire signals using few measurements. On the contrary, learning a deep generative model requires access to large amounts of training data. In several applications, getting the data to train a generative model on the target domain might not be feasible. Hence, we test generative model-based recovery on the novel task of transfer compressed sensing.

Experimental setup. We train the generative model on a source domain that is assumed to be data-rich and related to a data-scarce target domain we wish to sense. Given the matching dimensions of MNIST and Omniglot, we conduct experiments transferring from MNIST (source) to Omniglot (target) and vice versa.

Results and Discussion. The reconstruction errors for the norms considered are given in Figure 3. For both source-target pairs, we observe that Sparse-Gen consistently performs well. Vanilla generative model-based recovery shows hardly any improvement with increasing measurements. We can see this phenomenon qualitatively for transferring from MNIST (source) to Omniglot (target) in Figure 4. With few measurements, all models perform poorly, and generative model-based methods in particular continue to sense images similar to MNIST. On the other hand, there is a noticeable transition as measurements increase for Sparse-VAE, where it adapts better to the domain being sensed than plain generative model-based recovery and achieves lower reconstruction error.

## 6 Related Work

Since the introduction of compressed sensing over a decade ago, there has been a vast body of research studying various extensions and applications (Candès & Tao, 2005; Donoho, 2006; Candès et al., 2006). This work explores the effect of modeling different structural assumptions on signals in theory and practice.

Themes around sparsity in a well-chosen basis have driven much of the research in this direction. For instance, the paradigm of model-based compressed sensing accounts for the interdependencies between the dimensions of a sparse data signal (Baraniuk et al., 2010; Duarte & Eldar, 2011; Gilbert et al., 2017). Alternatively, adaptive selection of basis vectors from a dictionary that best capture the structure of the particular signal being sensed has also been explored (Peyre, 2010; Tang et al., 2013). Many of these methods have been extended to the recovery of structured tensors (Zhang et al., 2013, 2014). In another prominent line of research involving Bayesian compressed sensing, the sparseness assumption is formalized by placing sparseness-promoting priors on the signals (Ji et al., 2008; He & Carin, 2009; Babacan et al., 2010; Baron et al., 2010).

Research exploring structure beyond sparsity is relatively scarce. Early works in this direction can be traced to Baraniuk & Wakin (2009), who proposed algorithms for recovering signals lying on a smooth manifold. Generative model-based recovery methods consider functions that do not necessarily define manifolds, since the range of a generator function could intersect with itself. Yu & Sapiro (2011) coined the term statistical compressed sensing and proposed algorithms for efficient sensing of signals from a mixture of Gaussians. The recent work on deep generative model-based recovery differs in key theoretical aspects as well as in its use of a more expressive family of models based on neural networks. A related recent work by Hand & Voroninski (2017) provides theoretical guarantees on the solution recovered when solving non-convex linear inverse problems with deep generative priors. Empirical advances based on well-designed deep neural network architectures that sacrifice many of the theoretical guarantees have been proposed for applications such as MRI (Mardani et al., 2017, 2018). Many recent methods propose to learn mappings of signals to measurements using neural networks, instead of restricting them to be linear, random matrices (Mousavi et al., 2015; Kulkarni et al., 2016; Chang et al., 2017; Lu et al., 2018).

Our proposed framework bridges the gap between algorithms that model structure using sparsity, which enjoy good theoretical properties, and recent advances in deep generative models, in particular their use for compressed sensing.

## 7 Conclusion and Future Work

The use of deep generative models as priors for compressed sensing presents a new outlook on algorithms for inexpensive data acquisition. In this work, we showed that these priors can be used in conjunction with classical modeling assumptions based on sparsity. Our proposed framework, Sparse-Gen, generalizes both sparse vector recovery and recovery using generative models by allowing for sparse deviations from the range of a generative model function. The benefits of using such modeling assumptions are observed both theoretically and empirically.

In the future, we would like to design algorithms that can better model the structure within sparse deviations. Follow-up work in this direction can benefit from the vast body of prior work in structured sparse vector recovery (Duarte & Eldar, 2011). From a theoretical perspective, a better understanding of the non-convexity resulting from generative model-based recovery can lead to stronger guarantees and consequently better optimization algorithms for recovery. Finally, it would be interesting to extend Sparse-Gen for compressed sensing of other data modalities such as graphs for applications in network tomography and reconstruction (Xu et al., 2011). Real-world graph networks are typically sparse in the canonical basis and can be modeled effectively using deep generative models (Grover et al., 2018), which is consistent with the modeling assumptions of the Sparse-Gen framework.

### Acknowledgements

We are thankful to Tri Dao, Jonathan Kuck, Daniel Levy, Aditi Raghunathan, and Yang Song for helpful comments on early drafts. This research was supported by Intel Corporation, TRI, a Hellman Faculty Fellowship, ONR, NSF (#1651565, #1522054, #1733686 ) and FLI (#2017-158687). AG is supported by a Microsoft Research PhD Fellowship.

## References

• Abadi et al. (2016) Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
• Andersen et al. (2013) Andersen, M. S., Dahl, J., and Vandenberghe, L. CVXOPT: A Python package for convex optimization, version 1.1.6. Available at cvxopt.org, 2013.
• Babacan et al. (2010) Babacan, S. D., Molina, R., and Katsaggelos, A. K. Bayesian compressive sensing using Laplace priors. IEEE Transactions on Image Processing, 19(1):53–63, 2010.
• Baraniuk et al. (2008) Baraniuk, R., Davenport, M., DeVore, R., and Wakin, M. A simple proof of the restricted isometry property for random matrices. Constructive Approximation, 28(3):253–263, 2008.
• Baraniuk & Wakin (2009) Baraniuk, R. G. and Wakin, M. B. Random projections of smooth manifolds. Foundations of computational mathematics, 9(1):51–77, 2009.
• Baraniuk et al. (2010) Baraniuk, R. G., Cevher, V., Duarte, M. F., and Hegde, C. Model-based compressive sensing. IEEE Transactions on Information Theory, 56(4):1982–2001, 2010.
• Baron et al. (2010) Baron, D., Sarvotham, S., and Baraniuk, R. G. Bayesian compressive sensing via belief propagation. IEEE Transactions on Signal Processing, 58(1):269–280, 2010.
• Bickel et al. (2009) Bickel, P. J., Ritov, Y., and Tsybakov, A. B. Simultaneous analysis of lasso and Dantzig selector. The Annals of Statistics, pp. 1705–1732, 2009.
• Bora et al. (2017) Bora, A., Jalal, A., Price, E., and Dimakis, A. G. Compressed sensing using generative models. In International Conference on Machine Learning, 2017.
• Boyd & Vandenberghe (2004) Boyd, S. and Vandenberghe, L. Convex optimization. Cambridge university press, 2004.
• Candès & Tao (2005) Candès, E. J. and Tao, T. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203–4215, 2005.
• Candès et al. (2006) Candès, E. J., Romberg, J., and Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, 2006.
• Chang et al. (2017) Chang, J. R., Li, C.-L., Poczos, B., Kumar, B. V., and Sankaranarayanan, A. C. One network to solve them all – solving linear inverse problems using deep projection models. arXiv preprint, 2017.
• Chen et al. (2010) Chen, M., Silva, J., Paisley, J., Wang, C., Dunson, D., and Carin, L. Compressive sensing on manifolds using a nonparametric mixture of factor analyzers: Algorithm and performance bounds. IEEE Transactions on Signal Processing, 58(12):6140–6155, 2010.
• Cohen et al. (2009) Cohen, A., Dahmen, W., and DeVore, R. Compressed sensing and best k-term approximation. Journal of the American Mathematical Society, 22(1):211–231, 2009.
• Donoho (2006) Donoho, D. L. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.
• Duarte & Eldar (2011) Duarte, M. F. and Eldar, Y. C. Structured compressed sensing: From theory to applications. IEEE Transactions on Signal Processing, 59(9):4053–4085, 2011.
• Duchi et al. (2011) Duchi, J., Hazan, E., and Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
• Gilbert et al. (2017) Gilbert, A. C., Zhang, Y., Lee, K., Zhang, Y., and Lee, H. Towards understanding the invertibility of convolutional neural networks. arXiv preprint arXiv:1705.08664, 2017.
• Goodfellow et al. (2014) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2014.
• Grover et al. (2018) Grover, A., Zweig, A., and Ermon, S. Graphite: Iterative generative modeling of graphs. arXiv preprint arXiv:1803.10459, 2018.
• Hand & Voroninski (2017) Hand, P. and Voroninski, V. Global guarantees for enforcing deep generative priors by empirical risk. arXiv preprint arXiv:1705.07576, 2017.
• He & Carin (2009) He, L. and Carin, L. Exploiting structure in wavelet-based bayesian compressive sensing. IEEE Transactions on Signal Processing, 57(9):3488–3497, 2009.
• Ji et al. (2008) Ji, S., Xue, Y., and Carin, L. Bayesian compressive sensing. IEEE Transactions on Signal Processing, 56(6):2346–2356, 2008.
• Kingma & Ba (2015) Kingma, D. and Ba, J. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
• Kingma & Welling (2014) Kingma, D. and Welling, M. Auto-encoding variational Bayes. In International Conference on Learning Representations, 2014.
• Kulkarni et al. (2016) Kulkarni, K., Lohit, S., Turaga, P., Kerviche, R., and Ashok, A. Reconnet: Non-iterative reconstruction of images from compressively sensed measurements. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
• Lake et al. (2015) Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
• LeCun et al. (2010) LeCun, Y., Cortes, C., and Burges, C. J. MNIST handwritten digit database. http://yann.lecun.com/exdb/mnist, 2010.
• Lu et al. (2018) Lu, X., Dong, W., Wang, P., Shi, G., and Xie, X. Convcsnet: A convolutional compressive sensing framework based on deep learning. arXiv preprint arXiv:1801.10342, 2018.
• Lustig et al. (2008) Lustig, M., Donoho, D. L., Santos, J. M., and Pauly, J. M. Compressed sensing MRI. IEEE signal processing magazine, 25(2):72–82, 2008.
• Mallat (2008) Mallat, S. A wavelet tour of signal processing: the sparse way. Academic press, 2008.
• Mardani et al. (2017) Mardani, M., Gong, E., Cheng, J. Y., Vasanawala, S., Zaharchuk, G., Alley, M., Thakur, N., Han, S., Dally, W., Pauly, J. M., et al. Deep generative adversarial networks for compressed sensing automates MRI. arXiv preprint arXiv:1706.00051, 2017.
• Mardani et al. (2018) Mardani, M., Monajemi, H., Papyan, V., Vasanawala, S., Donoho, D., and Pauly, J. Recurrent generative adversarial networks for proximal learning and automated compressive image recovery. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
• Matoušek (2002) Matoušek, J. Lectures on Discrete Geometry. Springer Science and Business Media, 2002.
• Mousavi et al. (2015) Mousavi, A., Patel, A. B., and Baraniuk, R. G. A deep learning approach to structured signal recovery. In Annual Allerton Conference on Communication, Control, and Computing, 2015.
• Peyre (2010) Peyre, G. Best basis compressed sensing. IEEE Transactions on Signal Processing, 58(5):2613–2622, 2010.
• Qaisar et al. (2013) Qaisar, S., Bilal, R. M., Iqbal, W., Naureen, M., and Lee, S. Compressive sensing: From theory to applications, a survey. Journal of Communications and networks, 15(5):443–456, 2013.
• Radford et al. (2015) Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
• Tang et al. (2013) Tang, G., Bhaskar, B. N., Shah, P., and Recht, B. Compressed sensing off the grid. IEEE Transactions on Information Theory, 59(11):7465–7490, 2013.
• Tibshirani (1996) Tibshirani, R. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pp. 267–288, 1996.
• Tieleman & Hinton (2012) Tieleman, T. and Hinton, G. Lecture 6.5-RMSProp: Divide the gradient by a running average of its recent magnitude. Coursera: Neural networks for machine learning, 4(2):26–31, 2012.
• Xu et al. (2011) Xu, W., Mallada, E., and Tang, A. Compressive sensing over graphs. In IEEE INFOCOM, 2011.
• Yu & Sapiro (2011) Yu, G. and Sapiro, G. Statistical compressed sensing of Gaussian mixture models. IEEE Transactions on Signal Processing, 59(12):5842–5858, 2011.
• Zhang et al. (2013) Zhang, X., Wang, D., Zhou, Z., and Ma, Y. Simultaneous rectification and alignment via robust recovery of low-rank tensors. In Advances in Neural Information Processing Systems, 2013.
• Zhang et al. (2014) Zhang, X., Zhou, Z., Wang, D., and Ma, Y. Hybrid singular value thresholding for tensor completion. In AAAI Conference on Artificial Intelligence, 2014.

## Appendix A Proofs of theoretical results

### A.1 Lemma 1

To account for measurement noise in our analysis, we define the $\epsilon$-tube set of a matrix $A$ as,

$$T_A(\epsilon) = \{w : \|Aw\|_2 \leq \epsilon\}.$$

Note that in the absence of noise, $T_A(0)$ corresponds to the nullspace of $A$. Next, we define a difference function $G'$ such that $G'(z_1, z_2) = G(z_1) - G(z_2)$. Consequently, we obtain a difference set $S_{l,G'}$ as the Minkowski sum of $S_l$ (the space of $l$-sparse vectors) and the range of $G'$,

$$S_{l,G'} = \bigcup_{z_1, z_2} \big( S_l + G'(z_1, z_2) \big).$$

This allows us to define $\sigma_{l,G'}$ as,

$$\sigma_{l,G'}(x) = \inf_{\hat{x} \in S_{l,G'}} \|x - \hat{x}\|_1.$$
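For intuition (illustrative only, not part of the paper's formal development), the two purely sparse ingredients above are straightforward to compute: the best $l$-term $\ell_1$ approximation error, and membership in the tube $T_A(\epsilon)$. A minimal sketch, with all names hypothetical:

```python
import numpy as np

def sigma_l(x, l):
    """Best l-term approximation error in the l1 norm: the l1 mass
    of x that remains after keeping its l largest-magnitude entries."""
    mags = np.sort(np.abs(x))[::-1]  # magnitudes, descending
    return mags[l:].sum()

def in_tube(A, w, eps):
    """Check membership in the tube T_A(eps) = {w : ||A w||_2 <= eps}."""
    return np.linalg.norm(A @ w) <= eps

# keeping the two largest-magnitude entries (3 and -2) leaves l1 mass 1 + 0.5
x = np.array([3.0, 1.0, -2.0, 0.5])
print(sigma_l(x, 2))  # 1.5
```

Keeping the top-$l$ magnitudes is the exact minimizer over $S_l$ in the $\ell_1$ norm, which is why the truncation above computes the infimum.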

Now, in order to prove Lemma 1, we state and derive a couple of lemmas. The proofs of the next two lemmas (3 and 4) are modeled along the theory developed in Cohen et al. (2009) for the sensing of $l$-sparse vectors; we extend it to the case of the set $S_{l,G}$ of vectors within an $l$-sparse deviation of the range of $G$. Lemma 3 encodes the idea that for sensing to be successful, no two points in $S_{l,G}$ should be very close when acted upon by the measurement matrix $A$. This can be equivalently stated as requiring that no point in the nullspace of $A$ is approximated very well by points in $S_{2l,G'}$. Because we are working with bounded noise, we need these results on the tube $T_A(\epsilon)$ instead of just the nullspace. A point of interest is that, informally, the next lemma provides a sufficient condition for a good decoder to exist, and also provides a similar (but not identical) necessary condition for good decoding.

###### Lemma 3.

Given a measurement matrix $A$, measurement noise $\epsilon$ such that $\|\epsilon\|_2 \leq \epsilon_{\max}$, and a generative model function $G: \mathbb{R}^k \to \mathbb{R}^n$, we want a decoder $\Delta$ which provides the following $(\ell_2, \ell_1)$-mixed norm approximation guarantee with respect to the set $S_{l,G}$ of vectors within $l$-sparse deviations of the range of $G$,

$$\|x - \Delta(Ax + \epsilon)\|_2 \leq C_0\, l^{-t} \sigma_{l,G}(x) + C_1 \epsilon_{\max} + \delta$$

for some constants $C_0, C_1 > 0$, where $\sigma_{l,G}(x) = \inf_{\hat{x} \in S_{l,G}} \|x - \hat{x}\|_1$.

A sufficient condition for such a decoder to exist is given by,

$$\|\eta\|_2 \leq \frac{C_0}{2}\, l^{-t} \sigma_{2l,G'}(\eta) + C_1 \epsilon_{\max} + \delta, \quad \forall \eta \in T_A(2\epsilon_{\max}).$$

We call this the $(\ell_2, \ell_1)$-mixed norm null space property.

A necessary condition for the same follows,

$$\|\eta\|_2 \leq C_0\, l^{-t} \sigma_{2l,G'}(\eta) + 2C_1 \epsilon_{\max} + 2\delta, \quad \forall \eta \in T_A(\epsilon_{\max}).$$
###### Proof.

To prove the sufficiency of the null space condition, we define a decoder as follows,

$$\Delta(y) = \operatorname*{argmin}_{x:\, \|Ax - y\|_2 \leq \epsilon_{\max}} \sigma_{l,G}(x).$$

We will prove this decoder satisfies the mixed norm guarantee given the $(\ell_2, \ell_1)$-mixed norm null space property. Using the definition of $\Delta$ (feasibility of its output) together with the triangle inequality, we have,

$$\|A(x - \Delta(Ax + \epsilon))\|_2 \leq \|Ax + \epsilon - A\Delta(Ax + \epsilon)\|_2 + \epsilon_{\max} \leq 2\epsilon_{\max}.$$

This implies $x - \Delta(Ax + \epsilon) \in T_A(2\epsilon_{\max})$. Combining with the null space property, we have,

$$\|x - \Delta(Ax + \epsilon)\|_2 - \delta - C_1\epsilon_{\max} \leq \frac{C_0}{2}\, l^{-t} \sigma_{2l,G'}(x - \Delta(Ax + \epsilon)) \leq \frac{C_0}{2}\, l^{-t} \big( \sigma_{l,G}(x) + \sigma_{l,G}(\Delta(Ax + \epsilon)) \big) \leq C_0\, l^{-t} \sigma_{l,G}(x).$$

The second last step follows from the triangle inequality, whereby $\sigma_{2l,G'}(x_1 - x_2) \leq \sigma_{l,G}(x_1) + \sigma_{l,G}(x_2)$ for any $x_1, x_2$, and the last step uses the fact that the decoder output is the minimizer of $\sigma_{l,G}$ over a feasible set containing $x$, so $\sigma_{l,G}(\Delta(Ax + \epsilon)) \leq \sigma_{l,G}(x)$.
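The subadditivity used in the second-to-last step can be sanity-checked numerically in the purely sparse case ($G \equiv 0$), where it reduces to $\sigma_{2l}(x_1 - x_2) \leq \sigma_l(x_1) + \sigma_l(x_2)$. The sketch below (illustrative only, not from the paper) tests this on random vectors:

```python
import numpy as np

def sigma(x, l):
    """Best l-term l1 approximation error: l1 mass outside the top-l entries."""
    return np.sort(np.abs(x))[::-1][l:].sum()

rng = np.random.default_rng(0)
l = 3
for _ in range(1000):
    x1, x2 = rng.standard_normal(20), rng.standard_normal(20)
    # subtracting the best l-term approximations of x1 and x2 yields a
    # 2l-sparse candidate for x1 - x2, so the inequality must hold
    assert sigma(x1 - x2, 2 * l) <= sigma(x1, l) + sigma(x2, l) + 1e-9
print("subadditivity holds on all random trials")
```

The check mirrors the proof of the inequality: the difference of two $l$-sparse approximants is a (generally suboptimal) $2l$-sparse approximant of the difference.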

For the necessary condition, consider any decoder $\Delta$ which provides the needed guarantee. Consider $\eta \in T_A(\epsilon_{\max})$ and now pick $z_0, z_1 \in \mathbb{R}^k$ and $\eta_0 \in S_{2l}$ such that the following inequality is satisfied,

$$\|\eta - (G(z_0) - G(z_1) + \eta_0)\|_1 - \varepsilon' \leq \sigma_{2l,G'}(\eta), \tag{13}$$

where $\varepsilon' > 0$. We can find such a choice for any arbitrarily small and positive $\varepsilon'$. This is the case because we have,

$$\sigma_{2l,G'}(\eta) = \inf_{\hat{\eta} \in S_{2l,G'}} \|\eta - \hat{\eta}\|_1 = \inf_{\hat{z}_0, \hat{z}_1 \in \mathbb{R}^k,\, \hat{\eta}_0 \in S_{2l}} \|\eta - (G(\hat{z}_0) - G(\hat{z}_1) + \hat{\eta}_0)\|_1,$$

which we obtain by parameterizing $\hat{\eta}$ as $G(\hat{z}_0) - G(\hat{z}_1) + \hat{\eta}_0$ for $\hat{z}_0, \hat{z}_1 \in \mathbb{R}^k$ and $\hat{\eta}_0 \in S_{2l}$. We cannot necessarily find $\hat{\eta}$ such that $\|\eta - \hat{\eta}\|_1 = \sigma_{2l,G'}(\eta)$ because $S_{2l,G'}$ may not be a closed set. For convenience, we let $G_0 = G(z_0)$ and $G_1 = G(z_1)$. We can split $\eta_0$ as $\eta_1 + \eta_2$ for some $\eta_1, \eta_2 \in S_l$, and for convenience define $\eta_3 = \eta - (G_0 - G_1 + \eta_0)$, which means $\eta = G_0 - G_1 + \eta_1 + \eta_2 + \eta_3$. Note, we can now rewrite (13) as,

$$\|\eta_3\|_1 \leq \sigma_{2l,G'}(\eta) + \varepsilon'. \tag{14}$$

Since $\eta_1 \in S_l$, we have $\sigma_{l,G}(G_0 + \eta_1) = 0$. This simplifies the $(\ell_2, \ell_1)$-mixed norm guarantee of our decoder when applied to $G_0 + \eta_1$ with zero measurement noise,

$$\|G_0 + \eta_1 - \Delta(A(G_0 + \eta_1))\|_2 \leq \delta + C_1\epsilon_{\max}. \tag{15}$$

Plugging in all the above, we have:

$$\begin{aligned}
\|\eta\|_2 &= \|\eta_2 + \eta_1 + \eta_3 + G_0 - G_1\|_2 \\
&\leq \|\eta_1 + G_0 - \Delta(A(\eta_1 + G_0))\|_2 + \|\eta_3 + \eta_2 - G_1 + \Delta(A(\eta_1 + G_0))\|_2 \\
&\leq \delta + C_1\epsilon_{\max} + \|-\eta_3 - \eta_2 + G_1 - \Delta(A\eta + A(G_1 - \eta_2 - \eta_3))\|_2 && \text{(from Eq. (15) and } A(\eta_1 + G_0) = A\eta + A(G_1 - \eta_2 - \eta_3)\text{)} \\
&\leq 2\delta + 2C_1\epsilon_{\max} + C_0\, l^{-t} \sigma_{l,G}(G_1 - \eta_2 - \eta_3) && \text{(since } \eta \in T_A(\epsilon_{\max}) \text{ and the } (\ell_2, \ell_1)\text{-guarantee)} \\
&\leq 2\delta + 2C_1\epsilon_{\max} + C_0\, l^{-t} \|\eta_3\|_1 && \text{(since } G_1 - \eta_2 \in S_{l,G}\text{)} \\
&\leq C_0\, l^{-t} \sigma_{2l,G'}(\eta) + 2\delta + 2C_1\epsilon_{\max} + C_0\, l^{-t} \varepsilon'. && \text{(from Eq. (14))}
\end{aligned}$$

As we can make $\varepsilon'$ arbitrarily small, the term $C_0\, l^{-t} \varepsilon'$ tends to $0$, providing us with the required result. ∎

The next lemma shows that if the measurement matrix $A$ satisfies the S-REC and RIP conditions, then we operate in the constraint regime required by the previous lemma.

###### Lemma 4.

If the measurement matrix $A$ satisfies S-REC$(S_{(a+b)l,G}, 1-\alpha, \delta)$ and RIP$(bl, \alpha)$ for integers $a, b$ and function $G$, then for any vector $\eta \in T_A(\epsilon)$ we have,

$$\|\eta\|_2 \leq (bl)^{-1/2}(C_0 + 1)\, \sigma_{al,G'}(\eta) + C_1\epsilon + \delta'$$

where $C_0 = \frac{1+\alpha}{1-\alpha}$, $C_1 = (1-\alpha)^{-1}$, and $\delta' = (1-\alpha)^{-1}\delta$.

###### Proof.

For any choice of $z_1, z_2 \in \mathbb{R}^k$, let $\hat{\eta}$ be the minimizer of $\inf_{\hat{\eta} \in S_{al}} \|\eta - G(z_1) + G(z_2) - \hat{\eta}\|_1$. We can find this minimizer because $S_{al}$ is a closed set; concretely, we can construct it by taking the $n$-dimensional vector $\eta - G(z_1) + G(z_2)$ and zeroing out everything but its top $al$ magnitude components. As the choice of $z_1$ and $z_2$ is arbitrary, it suffices to prove the statement for $\sigma_{al}(\eta - G(z_1) + G(z_2))$, the best $al$-term $\ell_1$ approximation error, instead of $\sigma_{al,G'}(\eta)$.

Given a set of indices $S$ for an $n$-dimensional vector, we use $S^c$ to denote the set of indices not in $S$. Let $\eta' = \eta - G(z_1) + G(z_2)$, and note that $\hat{\eta}$ corresponds to the $al$ largest (in magnitude) coordinates of $\eta'$. Let the indices corresponding to those coordinates be $T_0$. We take $T_1$ to be the indices of the next $bl$ (and not $al$) largest coordinates. Similarly, define $T_2, T_3, \ldots, T_s$ to be the subsequent index sets for the next $bl$ largest coordinates each. The final set $T_s$ can contain indices of fewer than $bl$ coordinates. Let $T = T_0 \cup T_1$. We will use $x_S$ to denote the vector obtained by zeroing out the values in $x$ for all indices in the set $S^c$. We can write $\eta_T$ as $\eta'_T + (G(z_1) - G(z_2))_T$. We can write $\eta'_T$ as $s_1 - s_2$, where $s_1$ and $s_2$ are supported on $T$ and hence are $(a+b)l$-sparse. This allows us to write $\eta_T + (G(z_1) - G(z_2))_{T^c}$ as $G(z_1) + s_1 - (G(z_2) + s_2)$. Now we use the fact that $A$ satisfies S-REC to get,

$$\begin{aligned}
\|\eta_T + (G(z_1) - G(z_2))_{T^c}\|_2 &= \|G(z_1) + s_1 - (G(z_2) + s_2)\|_2 \\
&\leq (1-\alpha)^{-1}\|A(G(z_1) + s_1 - (G(z_2) + s_2))\|_2 + (1-\alpha)^{-1}\delta && \text{(using S-REC)} \\
&\leq (1-\alpha)^{-1}\|A(\eta_T + (G(z_1) - G(z_2))_{T^c})\|_2 + (1-\alpha)^{-1}\delta.
\end{aligned} \tag{16}$$

We can write $\eta_T + (G(z_1) - G(z_2))_{T^c} = \eta - \eta'_{T^c} = \eta - (\eta'_{T_2} + \cdots + \eta'_{T_s})$. As $\eta \in T_A(\epsilon)$, we can write $A\eta = \gamma$ where $\|\gamma\|_2 \leq \epsilon$. Hence,

$$\begin{aligned}
\|A(\eta_T + (G(z_1) - G(z_2))_{T^c})\|_2 &= \|A\big((\eta - G(z_1) + G(z_2))_{T_2} + \cdots + (\eta - G(z_1) + G(z_2))_{T_s}\big) - \gamma\|_2 \\
&= \|A\eta'_{T_2} + \cdots + A\eta'_{T_s} - \gamma\|_2 \\
&\leq \sum_{j=2}^{s}\|A\eta'_{T_j}\|_2 + \|\gamma\|_2 \\
&\leq (1+\alpha)\sum_{j=2}^{s}\|\eta'_{T_j}\|_2 + \epsilon. && \text{(using RIP)}
\end{aligned} \tag{17}$$

From Eq. (16) and Eq. (17), we get,

$$\|\eta_T + (G(z_1) - G(z_2))_{T^c}\|_2 - \delta' \leq (1-\alpha)^{-1}(1+\alpha)\sum_{j=2}^{s}\|\eta'_{T_j}\|_2 + (1-\alpha)^{-1}\epsilon,$$

where $\delta' = (1-\alpha)^{-1}\delta$. Adding $\|\eta'_{T^c}\|_2$ on both sides and applying the triangle inequality, together with $\eta = \eta_T + (G(z_1) - G(z_2))_{T^c} + \eta'_{T^c}$ and $\|\eta'_{T^c}\|_2 \leq \sum_{j=2}^{s}\|\eta'_{T_j}\|_2$, we get,

$$\|\eta\|_2 \leq \|\eta_T + (G(z_1) - G(z_2))_{T^c}\|_2 + \|\eta'_{T^c}\|_2 \leq \big((1-\alpha)^{-1}(1+\alpha) + 1\big)\sum_{j=2}^{s}\|\eta'_{T_j}\|_2 + \delta' + C_1\epsilon, \tag{18}$$

with $C_1 = (1-\alpha)^{-1}$.

For any $j \in \{1, \ldots, s-1\}$, $i \in T_{j+1}$, and $i' \in T_j$, we have $|\eta'_i| \leq |\eta'_{i'}|$, which in turn implies that $|\eta'_i| \leq (bl)^{-1}\|\eta'_{T_j}\|_1$. Squaring and adding these inequalities over all indices $i \in T_{j+1}$, we get,

$$\|\eta'_{T_{j+1}}\|_2 \leq (bl)^{-1/2}\|\eta'_{T_j}\|_1.$$
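This standard block-decomposition inequality can be checked numerically. The sketch below (illustrative only, not from the paper) sorts a random vector by magnitude into consecutive blocks of size $bl$ and verifies $\|\eta'_{T_{j+1}}\|_2 \leq (bl)^{-1/2}\|\eta'_{T_j}\|_1$ for every adjacent pair of blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
bl = 5                       # block size, playing the role of b*l
eta = rng.standard_normal(40)

# sort coordinates by decreasing magnitude and cut into blocks T_1, T_2, ...
blocks = np.sort(np.abs(eta))[::-1].reshape(-1, bl)
for j in range(len(blocks) - 1):
    lhs = np.linalg.norm(blocks[j + 1])                    # ||eta'_{T_{j+1}}||_2
    rhs = np.linalg.norm(blocks[j], ord=1) / np.sqrt(bl)   # (bl)^{-1/2} ||eta'_{T_j}||_1
    assert lhs <= rhs + 1e-12
print("block inequality verified for all adjacent blocks")
```

The inequality is deterministic, not probabilistic: every entry of block $j+1$ is bounded by the average magnitude in block $j$, which is exactly what the code exercises.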

Substituting the result we obtained above in Eq. (18), we get,

$$\|\eta\|_2 - \delta' - C_1\epsilon \leq (bl)^{-1/2}\big((1-\alpha)^{-1}(1+\alpha) + 1\big)\sum_{j=1}^{s}\|\eta'_{T_j}\|_1 = (bl)^{-1/2}(C_0 + 1)\|\eta'_{T_0^c}\|_1,$$

and since $\|\eta'_{T_0^c}\|_1 = \sigma_{al}(\eta - G(z_1) + G(z_2))$ by the construction of $T_0$, this finishes the proof. ∎

Lemma 1 follows directly from Lemma 3 and Lemma 4 after substituting $a = 2$ and $t = 1/2$, so that the $(bl)^{-1/2}$ factor of Lemma 4 matches the $l^{-t}$ factor of Lemma 3 up to the constant $b^{-1/2}$.

### A.2 Lemma 2

Recall that random Gaussian matrices satisfy RIP and S-REC properties with high probability (Candès & Tao, 2005; Bora et al., 2017). For completeness and notation, we restate these facts before proving Lemma 2.

###### Fact 1.

Let $A \in \mathbb{R}^{m \times n}$ be a random Gaussian matrix with each entry sampled i.i.d. from $\mathcal{N}(0, 1/m)$. For

$$m = \Omega\left(\frac{l}{\alpha^2}\log\frac{n}{l}\right),$$

$A$ satisfies RIP$(l, \alpha)$ with probability at least $1 - e^{-\Omega(m)}$.
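As an empirical illustration of this fact (with sizes chosen ad hoc rather than derived from the bound), one can draw a Gaussian matrix with $\mathcal{N}(0, 1/m)$ entries and check how often it preserves the norm of random $l$-sparse vectors up to a factor $1 \pm \alpha$. Note that sampling random vectors illustrates concentration for typical inputs, not the uniform-over-all-sparse-vectors RIP guarantee itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n, l, m, alpha = 200, 5, 80, 0.5        # ad hoc sizes for illustration
A = rng.standard_normal((m, n)) / np.sqrt(m)  # entries ~ N(0, 1/m)

trials, ok = 500, 0
for _ in range(trials):
    x = np.zeros(n)
    support = rng.choice(n, size=l, replace=False)
    x[support] = rng.standard_normal(l)
    ratio = np.linalg.norm(A @ x) / np.linalg.norm(x)
    ok += (1 - alpha) <= ratio <= (1 + alpha)
print(ok / trials)  # very close to 1 for these sizes
```

For a fixed $x$, the squared ratio is a scaled chi-squared variable with $m$ degrees of freedom, which concentrates sharply around $1$ as $m$ grows; the union bound over sparse supports is what drives the $\log(n/l)$ term in the measurement count.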

###### Fact 2.

Let $A \in \mathbb{R}^{m \times n}$ be a random Gaussian matrix with each entry sampled i.i.d. from $\mathcal{N}(0, 1/m)$. Let $G: \mathbb{R}^k \to \mathbb{R}^n$ be an $L$-Lipschitz function and define $B^k(r)$ to be the $\ell_2$-norm ball of radius $r$ in $\mathbb{R}^k$. For

$$m = \Omega\left(\frac{k}{\alpha^2}\log\frac{Lr}{\delta}\right),$$

$A$ satisfies S-REC$(G(B^k(r)), 1-\alpha, \delta)$ with probability at least $1 - e^{-\Omega(\alpha^2 m)}$.

Note that the proofs of the next two results involve small modifications, at a few key places, of the proofs presented in Bora et al. (2017), extending them from the range of the generative model to the set $S_{l,G}$.

###### Proof.

We will use the mathematical constructs of -nets for proving the lemma. Let be a -net for . Then there exists a net such that,

 log(|M|)≤klog(Lrδ).

As this net is -cover for , we will have that is a -cover of .
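The net-size bound follows from a standard volume (covering-number) argument; a sketch, with the absolute constant absorbed into the $k \log(Lr/\delta)$ bound:

```latex
% Covering number of the radius-r ball in R^k at resolution delta/L:
% compare the volume of B^k(r) with that of disjoint balls of radius
% delta/(2L) centered at the net points,
|M| \;\le\; \left(1 + \frac{2Lr}{\delta}\right)^{k} \;\le\; \left(\frac{3Lr}{\delta}\right)^{k}
\quad \text{for } \delta/L \le r,
\qquad\text{so}\qquad
\log(|M|) \;\le\; k \log\frac{3Lr}{\delta} \;=\; O\!\left(k \log\frac{Lr}{\delta}\right).
```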

For any two points $z_1, z_2 \in B^k(r)$ we can find points $z'_1, z'_2 \in M$ such that the distance in the $\ell_2$ norm between $z_1$ and $z'_1$ is less than $\delta/L$ (similarly for $z_2$ and $z'_2$), and hence $\|G(z_1) - G(z'_1)\|_2 \leq \delta$ by Lipschitzness. Now consider some set $T$ of indices of size $l$, and let $\nu$ be an $l$-sparse vector with support $T$ (that is, all elements outside the indices in $T$ are zero). Using the triangle inequality, we get,

$$\|G(z_1) - G(z_2) + \nu\|_2 \leq \|G(z_1) - G(z'_1)\|_2 + \|G(z'_1) - G(z'_2) + \nu\|_2 + \|G(z'_2) - G(z_2)\|_2 \leq \|G(z'_1) - G(z'_2) + \nu\|_2 + 2\delta.$$

Again using the triangle inequality, we have,

$$\|AG(z'_1) - AG(z'_2) + A\nu\|_2 \leq \|AG(z'_1) - AG(z_1)\|_2 + \|AG(z_1) - AG(z_2) + A\nu\|_2 + \|AG(z_2) - AG(z'_2)\|_2.$$

From Lemma 8.3 in Bora et al. (2017), we have $\|AG(z'_1) - AG(z_1)\|_2 = O(\delta)$ and $\|AG(z_2) - AG(z'_2)\|_2 = O(\delta)$ with probability $1 - e^{-\Omega(m)}$. Applying this to the previous inequality gives us,

$$\|AG(z'_1) - AG(z'_2) + A\nu\|_2 \leq \|AG(z_1) - AG(z_2) + A\nu\|_2 + O(\delta).$$

We note that for fixed $z'_1$ and $z'_2$, with $\nu$ varying over points with support $T$,