Instance Optimal Decoding and the Restricted Isometry Property
Abstract
In this paper, we address the question of information preservation in ill-posed, nonlinear inverse problems, assuming that the measured data is close to a low-dimensional model set. We provide necessary and sufficient conditions for the existence of a so-called instance optimal decoder that is robust to noise and modelling error. Inspired by existing results in compressive sensing, our analysis is based on a (Lower) Restricted Isometry Property (LRIP), formulated in a nonlinear fashion. We also provide sufficient conditions for nonuniform recovery with random measurement operators, with a new formulation of the LRIP. We finish by describing typical strategies to prove the LRIP in both linear and nonlinear cases, and illustrate our results by studying the invertibility of a one-layer neural net with random weights.

École Normale Supérieure. 45 rue d’Ulm, 75005 Paris, France.

Email: nicolas.keriven@ens.fr

Université Rennes 1, Inria, CNRS, IRISA. F35000 Rennes, France.

Email: remi.gribonval@inria.fr
1 Introduction
Inverse problems are ubiquitous in all areas of data science. While linear inverse problems have arguably been far more studied in the literature, some frameworks are intrinsically nonlinear [14]. In this paper, we aim to give a characterization of the preservation of information in ill-posed inverse problems, regularized by the introduction of a "low-dimensional" model set close to which the data of interest is assumed to live. We consider a very general context that includes, in particular, measurement operators that are possibly nonlinear, and an ambient space that can be any pseudometric set. Our main results show that the existence of a decoder that is robust to noise and modelling error is equivalent to a modified Restricted Isometry Property (RIP), which is a classical property in compressive sensing [9]. We thus outline the fundamental nature of the RIP in settings that are more general than previously studied.
The problem is formulated as follows. Let $(E, d)$ be a set equipped with a pseudometric $d$ (a pseudometric satisfies all the requirements of a metric except that $d(x, x') = 0$ does not imply $x = x'$), the set of data, and $(F, \|\cdot\|)$ a seminormed vector space (similarly, a seminorm satisfies the requirements of a norm except that $\|y\| = 0$ does not imply $y = 0$), the space of measurements. Consider a (possibly nonlinear) measurement map $\Phi : E \to F$. The measured vector is:
$$y = \Phi(x) + e \qquad (1)$$
where $e \in F$ is measurement noise and $x \in E$ is the true signal. Our goal is to characterize the existence of any procedure that would allow us to approximately recover the data $x$ from $y$.
Regularization.
In most interesting problems, the "dimension" of the measurement space $F$ is far lower than that of the set $E$ (in a loose sense: we recall that here neither is required to be finite-dimensional, and $E$ is not necessarily a vector space), which makes the problem ill-posed, meaning that there are information-theoretic limits that prevent us from recovering the underlying signal from the measurements, even in the noiseless case. A classical regularization technique is to introduce prior knowledge about the true signal $x$: here we consider a model set $\mathcal{S} \subseteq E$ of "simple" signals, such that $x$ is likely to be close to $\mathcal{S}$. For instance, sparsity, the assumption that the true signal is a linear combination of a few elements in a well-chosen dictionary, is a widely studied prior in modern signal processing, in particular in compressive sensing [15].
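To make the notion of distance to a model set concrete, consider the familiar model of $k$-sparse vectors in $\mathbb{R}^n$ with the Euclidean metric: the distance from $x$ to the model is the norm of its $n - k$ smallest-magnitude entries, since the best $k$-sparse approximation keeps the $k$ largest. A minimal sketch (the vectors and dimensions are illustrative, not from the paper):

```python
import math

def dist_to_ksparse(x, k):
    """Euclidean distance from x to the k-sparse model set:
    the norm of the n - k smallest-magnitude entries of x."""
    tail = sorted(abs(v) for v in x)[:len(x) - k]
    return math.sqrt(sum(v * v for v in tail))

x = [3.0, -0.1, 2.0, 0.2]
print(dist_to_ksparse(x, 2))  # tail entries are 0.1 and 0.2
```

A signal already in the model has distance zero, which is exactly the regime where an instance optimal decoder must recover it exactly in the noiseless case.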
Instance Optimal Decoding.
Ideally, a decoder must be able to exactly retrieve $x$ from $y$ when the modelling is exact ($x \in \mathcal{S}$) and the noise is zero ($e = 0$). However, as these conditions are highly unrealistic in practice, it is desirable for this decoding process to be both robust to noise and stable to modelling error. In the literature, such a decoder is said to be instance optimal [12] (see Def. 1). In this paper, our goal is to characterize necessary and sufficient conditions for the existence of an instance optimal decoder for the problem (1). Note that we will not study the existence of efficient algorithms to solve (1), which is another significant achievement of compressive sensing [15], but only the preservation of information of the encoding process.
In [12, 8], the authors outlined the crucial role played by the Restricted Isometry Property (RIP), and more precisely by the Lower-RIP (LRIP), for the existence of instance optimal decoders in the linear case. In this paper, we extend these results to the nonlinear case and to nonuniform probabilistic recovery.
Outline of the paper.
The structure of the paper is as follows. In Section 2 we briefly outline some relevant references, keeping in mind that the field is large and we do not pretend to be exhaustive in this short paper. In Section 3 we state our main results relating instance optimal decoders (Def. 1) and the LRIP (Def. 2). In Section 4 we outline how one might typically prove the LRIP by extending a classical proof [2], and illustrate it on a simple example.
2 Related Work
Classical Compressive Sensing: the linear case.
Instance optimal decoding and the RIP are well-known notions in compressive sensing [11, 10, 13]. We refer to the book of Foucart and Rauhut [15] for a review of the field, in particular to Chapters 6 and 11 for the topics of interest here. The interplay between the two notions was in particular studied in [12] in the finite-dimensional case. These results were later extended to more general models in [21], and to arbitrary linear measurement operators in [8], which is the main inspiration behind the present work.
Nonuniform decoding.
In compressed sensing the measurement operator is often designed at random. Typical recovery results are therefore given with high probability. One can then distinguish between uniform guarantees, meaning that, with high probability on the draw of $\Phi$, all signals close to $\mathcal{S}$ can be stably recovered, and nonuniform guarantees, where, for one fixed signal close to $\mathcal{S}$, the decoding is successful with high probability on the draw of $\Phi$. In [12] the authors study nonuniform instance optimality, but only under the light of the classical uniform RIP. In this paper we introduce a nonuniform version of the LRIP and prove that it is sufficient for nonuniform instance optimality.
Nonlinear inverse problems.
Nonlinear inverse problems can be found in many areas of signal processing, see e.g. [14] for a review of some applications. They have also been considered by the compressive sensing community, often when quantization occurs [19, 7], in the so-called "1-bit" compressed sensing line of work [6]. Another focus is the development of efficient algorithms inspired by the linear case [3, 4]. In [4], the author assumes that a locally linearized version of the measurement operator satisfies the classical RIP. In this paper we consider a different, "fully" nonlinear RIP. We note that one notion does not imply the other.
3 Equivalence between IOP and LRIP
In this section we state our main results on instance optimal decoders and the LRIP. We distinguish the case where the operator is deterministic (or, equivalently, random but with so-called uniform recovery guarantees sought), and the case of nonuniform recovery.
3.1 Deterministic operator
Recall that we consider a pseudometric set $(E, d)$, a seminormed vector space $F$, and measurements of the form $y = \Phi(x) + e$ where $\Phi : E \to F$. We consider a model set $\mathcal{S} \subseteq E$, and are interested in characterizing the existence of a good decoder $\Delta$ that takes $y$ and $\Phi$ as inputs and returns a signal $x^\star$ that is close to the true signal $x$. We want this decoder to be stable to modelling error and robust to noise, which is characterized by the notion of instance optimality.
Definition 1 (Instance Optimality Property (IOP)).
A decoder $\Delta : F \to E$ satisfies the Instance Optimality Property for the operator $\Phi$ and model $\mathcal{S}$ with constants $A, B \geq 0$, pseudometrics $d, d'$ on $E$ and error $\eta \geq 0$ if: for all signals $x \in E$ and noise $e \in F$, denoting $x^\star = \Delta(\Phi(x) + e)$ the recovered signal, it holds that:
$$d(x, x^\star) \leq A \|e\| + B\, d'(x, \mathcal{S}) + \eta \qquad (2)$$
where $d'(x, \mathcal{S}) = \inf_{x_{\mathcal{S}} \in \mathcal{S}} d'(x, x_{\mathcal{S}})$.
As indicated by the r.h.s. of (2), the decoding error between the true signal $x$ and the recovered one $x^\star$ is bounded by the amplitude of the noise and the distance from $x$ to the model set, which indicates how well $x$ is modelled by $\mathcal{S}$. An instance optimal decoder is therefore robust to noise and stable even if $x$ is not exactly in the model set. We also include a possible fixed additional error $\eta$, which may be unavoidable in some cases (due to algorithmic precision for instance). Ideally, one has $\eta = 0$.
Let us now turn to the proposed nonlinear version of the LRIP. As described in [8], the LRIP is just one side of the classical RIP, which states that the measurement operator approximately preserves distances between elements of the model $\mathcal{S}$.
Definition 2 (Lower Restricted Isometry Property (LRIP)).
The operator $\Phi$ satisfies the Lower Restricted Isometry Property for the model $\mathcal{S}$ with constant $A > 0$, pseudometric $d$ and error $\eta \geq 0$ if: for all $x, x' \in \mathcal{S}$ it holds that
$$d(x, x') \leq A \left\| \Phi(x) - \Phi(x') \right\| + \eta. \qquad (3)$$
The LRIP expresses the fact that $\Phi$ must not collapse two elements of the model together. Like the IOP, we allow for a possible additional fixed error $\eta$ in the LRIP. Note that this type of error is often considered when introducing quantization [7, 19]. Ideally, one has $\eta = 0$; however, in some cases it can be considerably simpler to prove that the LRIP holds with a nonzero $\eta$ [20]. Note also that the classical RIP is often expressed with constants of the form $1 \pm \delta$, where $\delta$ is as small as possible.
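For intuition, when the model is a finite set and $\Phi$ is an explicit linear map, the best LRIP constant with $\eta = 0$ can be computed exhaustively as the largest ratio $d(x, x') / \|\Phi(x) - \Phi(x')\|$ over distinct pairs. A toy sketch (the Gaussian map, model and sizes are illustrative assumptions):

```python
import itertools, math, random

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def lrip_constant(model, Phi):
    """Smallest A such that ||x - x'|| <= A * ||Phi(x) - Phi(x')|| over the model."""
    A = 0.0
    for x, xp in itertools.combinations(model, 2):
        num = norm([a - b for a, b in zip(x, xp)])
        den = norm([a - b for a, b in zip(Phi(x), Phi(xp))])
        if den == 0.0:
            return float("inf")  # Phi collapses two model elements: no LRIP
        A = max(A, num / den)
    return A

random.seed(0)
m, n = 3, 5
M = [[random.gauss(0.0, 1.0 / math.sqrt(m)) for _ in range(n)] for _ in range(m)]
Phi = lambda x: [sum(row[i] * x[i] for i in range(n)) for row in M]
model = [[float(i == j) for i in range(n)] for j in range(n)]  # canonical basis vectors
A = lrip_constant(model, Phi)
print(A)
```

A finite value of `A` certifies that no two model elements are collapsed by this particular draw of the map.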
We now state our main result. The proof, rather direct, can be found in Appendix A.
Theorem 1 (Equivalence between IOP and LRIP.).
Consider an operator $\Phi : E \to F$ and a model $\mathcal{S} \subseteq E$.

If there exists a decoder $\Delta$ which satisfies the Instance Optimality Property for $\Phi$ and $\mathcal{S}$ with constants $A, B$, pseudometrics $d, d'$ and error $\eta$, then the operator satisfies the LRIP for $\mathcal{S}$ with constant $A$, pseudometric $d$ and error $2\eta$.

If the operator satisfies the LRIP for the model $\mathcal{S}$ with constant $A$, pseudometric $d$ and error $\eta$, then the decoder defined as (in this paper we assume, for simplicity, that the minimization problem has at least one solution; ties can be broken arbitrarily. When this is not the case, it is possible to consider a decoder that returns any element that approaches the infimum with a fixed precision, at the expense of having this precision in the decoding error $\eta$, as in [8])
$$\Delta(y) \in \underset{x \in \mathcal{S}}{\arg\min}\ \|\Phi(x) - y\| \qquad (4)$$
satisfies the Instance Optimality Property for the operator $\Phi$ and model $\mathcal{S}$ with constants $2A$ and $1$, pseudometrics $d$ and $d_\Phi$, where $d_\Phi$ is defined by $d_\Phi(x, x') = d(x, x') + 2A \|\Phi(x) - \Phi(x')\|$, and error $\eta$.
Theorem 1 states that if the LRIP is satisfied, then the decoder that returns the element in the model that best matches the measurement is instance optimal, with a special metric $d_\Phi$. On the other hand, if some instance optimal decoder exists, then the LRIP must be satisfied. In other words, when the LRIP is satisfied, we know that a negligible amount of information is lost when encoding a signal well-modeled by $\mathcal{S}$. Conversely, if the LRIP is not satisfied, one has no hope of deriving an instance optimal decoder.
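When the model is finite, the ideal decoder of Theorem 1, which returns the model element whose image best matches the measurement, can be implemented by exhaustive search. A toy sketch (the Gaussian map and canonical-basis model are illustrative assumptions, not the paper's setting):

```python
import math, random

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def decode(y, model, Phi):
    """Ideal decoder (4): argmin over the model of ||Phi(x) - y||."""
    return min(model, key=lambda x: norm([a - b for a, b in zip(Phi(x), y)]))

random.seed(1)
m, n = 6, 4
M = [[random.gauss(0.0, 1.0 / math.sqrt(m)) for _ in range(n)] for _ in range(m)]
Phi = lambda x: [sum(row[i] * x[i] for i in range(n)) for row in M]
model = [[float(i == j) for i in range(n)] for j in range(n)]  # canonical basis
x_true = model[2]
x_hat = decode(Phi(x_true), model, Phi)  # noiseless decoding
print(x_hat == x_true)
```

With zero noise and a signal exactly in the model, the decoder attains zero cost at the true signal, illustrating exact recovery in the ideal regime.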
3.2 Random operator, from uniform recovery to nonuniform recovery
In the vast majority of the compressive sensing literature, the measurement process is drawn at random: for instance, in the finite-dimensional case, it is an open problem to find deterministic matrices that satisfy the RIP with an optimal number of measurements ([15], p. 27), while on the contrary many classes of random matrices satisfy the RIP with high probability [2].
A well-studied concept is that of uniform recovery guarantees, where one shows that, with high probability on the draw of $\Phi$, the LRIP holds. It follows by Theorem 1 that there is a decoder such that, with high probability on the draw of $\Phi$, all signals from $\mathcal{S}$ can be stably recovered. There is also a notion of nonuniform recovery, where one considers a decoder and wonders if, given an arbitrary signal $x$ close to $\mathcal{S}$, this signal is stably recovered (with high probability on the draw of $\Phi$) from $y = \Phi(x) + e$. In this section we introduce a nonuniform version of the LRIP, and show that it is a sufficient condition for the existence of a nonuniform instance optimal decoder. We start by discussing a notion of projection on the model.
Remark 1 (Approximate projection.).
As we will see, in nonuniform recovery the distance from $x$ to $\mathcal{S}$ is replaced by the distance from $x$ to a particular element $P(x) \in \mathcal{S}$, where $P$ is a "projection" function with respect to some metric $d$. In full generality, it is not guaranteed that there exists $P(x) \in \mathcal{S}$ such that $d(x, P(x)) = d(x, \mathcal{S})$, but one can always define it such that $d(x, P(x)) \leq d(x, \mathcal{S}) + \epsilon$ for all $x$, for an arbitrary small $\epsilon > 0$.
Let us now introduce the proposed nonuniform IOP and LRIP.
Definition 3 (Nonuniform IOP).
A decoder $\Delta$ satisfies the nonuniform Instance Optimality Property for the (random) mapping $\Phi$, model $\mathcal{S}$ and projection function $P$, with constants $A, B \geq 0$, pseudometrics $d, d'$, probability $1 - \rho$ and error $\eta \geq 0$ if: for all $x \in E$,
$$\mathbb{P}_\Phi\left( \forall e \in F:\quad d(x, x^\star) \leq A \|e\| + B\, d'(x, P(x)) + \eta \right) \geq 1 - \rho \qquad (5)$$
where $\Delta(\Phi(x) + e)$ is denoted by $x^\star$.
Note that in this definition the IOP is nonuniform with respect to the data $x$ but uniform with respect to the noise $e$, meaning that, with high probability on the draw of $\Phi$, the (fixed) data $x$ can be stably recovered from a measurement vector with any additive noise.
Definition 4 (Nonuniform LRIP).
The operator $\Phi$ satisfies the nonuniform LRIP for the model $\mathcal{S}$ with constant $A > 0$, pseudometric $d$, probability $1 - \rho$ and error $\eta \geq 0$ if: for all $x \in \mathcal{S}$,
$$\mathbb{P}_\Phi\left( \forall x' \in \mathcal{S}:\quad d(x, x') \leq A \left\| \Phi(x) - \Phi(x') \right\| + \eta \right) \geq 1 - \rho. \qquad (6)$$
This LRIP is in fact "semi"-uniform: it is nonuniform with respect to one element $x$ but uniform with respect to $x'$. A "fully" nonuniform LRIP would, in fact, be almost always valid for many operators (see Section 4), and thus probably too weak to yield recovery guarantees.
Before stating our result, let us remark that the definition of the metric $d_\Phi$ in Theorem 1 involves the operator $\Phi$, which is potentially problematic when it is random. To solve this, [12] introduces a so-called Boundedness Property (BP) in the classical sparse setting in finite dimension. We extend this notion to the context considered here.
Definition 5 (Boundedness property (BP)).
The operator $\Phi$ satisfies the Boundedness Property with constant $C > 0$, pseudometric $d$ and probability $1 - \rho$ if: for all $x, x' \in E$,
$$\mathbb{P}_\Phi\left( \left\| \Phi(x) - \Phi(x') \right\| \leq C\, d(x, x') \right) \geq 1 - \rho. \qquad (7)$$
We then have the following result, proved in Appendix B.
Theorem 2 (The nonuniform LRIP and BP imply the nonuniform IOP).
Consider a random operator $\Phi$. Assume that:

the operator satisfies the nonuniform LRIP for the model $\mathcal{S}$ with constant $A$, pseudometric $d$, probability $1 - \rho_1$ and error $\eta$;

the operator satisfies the nonuniform Boundedness Property with constant $C$, pseudometric $d$ and probability $1 - \rho_2$.
Then, the decoder defined by (4) satisfies the nonuniform Instance Optimality Property for the operator $\Phi$, model $\mathcal{S}$ and any projection function $P$ with constants $2A$ and $1 + 2AC$, pseudometrics $d$ and $d$, probability $1 - \rho_1 - \rho_2$ and error $\eta$.
Compared with the result in [12], which proves nonuniform recovery under a uniform LRIP and the BP in the finitedimensional case, our result holds under weaker hypotheses.
4 A typical proof of the LRIP
In this section, we outline a possible strategy to prove the LRIP, inspired by the proof for random matrices in [2]. This relatively simple proof has two steps: first, a pointwise concentration result, and second, an extension by covering arguments. For a set $\mathcal{Z}$, a metric $\varrho$ and a radius $\epsilon > 0$, we denote by $N(\mathcal{Z}, \varrho, \epsilon)$ the minimum number of balls of radius $\epsilon$, with centers that belong to $\mathcal{Z}$, required to cover $\mathcal{Z}$.
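For a finite point set, an upper bound on this covering number can be obtained by a simple greedy pass: take each point as a new center only if no existing center already covers it. A small sketch (the four-point set is an arbitrary illustration):

```python
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def covering_number(points, eps):
    """Greedy upper bound on N(points, ||.||, eps), centers taken in the set."""
    centers = []
    for p in points:
        if all(norm([a - b for a, b in zip(p, c)]) > eps for c in centers):
            centers.append(p)
    return len(centers)

# four corners of the unit square: one ball of radius 2 suffices,
# but radius 0.5 leaves every corner in its own ball
square = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(covering_number(square, 2.0), covering_number(square, 0.5))
```

The greedy count only upper-bounds the true covering number, which is all that the union-bound arguments below require.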
4.1 Linear case
We start with the linear case, which follows closely the proof in [2]. We treat the uniform case (Def. 2), with no error ($\eta = 0$). Assume $E$ and $F$ are both vector spaces, and that we have a random linear operator $\Phi$ such that the following concentration result holds for any fixed $u \in E$:
$$\mathbb{P}\left( \big|\, \|\Phi u\| - \|u\| \,\big| \geq t \|u\| \right) \leq 2 e^{-c(t)} \qquad (8)$$
for an increasing concentration function $c(\cdot)$. Typically, the "bigger" the space $F$ is (i.e. the more measurements we collect), the higher the concentration function is: often, for $m$ measurements (i.e. $F = \mathbb{R}^m$ or $\mathbb{C}^m$), classical concentration inequalities yield $c(t) = \mathcal{O}(m t^2)$.
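This concentration is easy to observe numerically: for a matrix with i.i.d. $\mathcal{N}(0, 1/m)$ entries, $\|\Phi x\|$ stays within a few percent of $\|x\|$ once $m$ is a few hundred. A seeded Monte Carlo sketch (the sizes are arbitrary):

```python
import math, random

random.seed(2)
n, m, trials = 10, 400, 100
x = [1.0] * n
nx = math.sqrt(n)  # ||x||
max_dev = 0.0
for _ in range(trials):
    # one draw of a linear map with i.i.d. N(0, 1/m) entries, applied to x
    Phix = [sum(random.gauss(0.0, 1.0 / math.sqrt(m)) * v for v in x) for _ in range(m)]
    dev = abs(math.sqrt(sum(c * c for c in Phix)) / nx - 1.0)
    max_dev = max(max_dev, dev)
print(max_dev)  # worst relative deviation of ||Phi x|| from ||x|| over all trials
```

Increasing `m` shrinks the worst-case deviation roughly like $1/\sqrt{m}$, which is the behaviour the concentration function captures.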
This property proves a "pointwise" (or "fully" nonuniform) LRIP: for two given $x, x' \in \mathcal{S}$, the quantity $\|\Phi x - \Phi x'\|$ is a good approximation of $\|x - x'\|$ with high probability. We now invert the quantifiers by covering arguments. From the formulation of the concentration (8) we see that a particular set of interest is the so-called normalized secant set [22]:
$$\mathcal{Z} = \left\{ \frac{x - x'}{\|x - x'\|} \ :\ x, x' \in \mathcal{S},\ x \neq x' \right\}. \qquad (9)$$
The proof of the following result is in Appendix C.
Proposition 1.
Consider $0 < \delta < 1$. Assume that the concentration property (8) holds, that the normalized secant set $\mathcal{Z}$ has finite covering numbers $N(\mathcal{Z}, \|\cdot\|, \epsilon)$, and that for any draw of $\Phi$ and any $u \in E$ we have $\|\Phi u\| \leq C \|u\|$. Set $A = (1 - \delta)^{-1}$. Define the probability of failure
$$\rho = 2\, N\!\left(\mathcal{Z}, \|\cdot\|, \tfrac{\delta}{2C}\right) e^{-c(\delta/2)}.$$
Then the operator $\Phi$ satisfies the uniform LRIP for the model $\mathcal{S}$ with constant $A$, metric $d(x, x') = \|x - x'\|$, probability $1 - \rho$ and error $\eta = 0$.
4.2 Nonlinear case
It is possible to adapt the previous proof to nonlinear operators, by distinguishing the case where $x$ and $x'$ are "close", for which we resort to a linearization of $\Phi$ and properties of the normalized secant set, and the case where $x$ and $x'$ are distant from each other, for which we use directly the covering numbers of the model. We treat here the nonuniform case (Def. 4).
Assume again that $E$ and $F$ are vector spaces. Assume that we have a random map $\Phi$ such that the concentration property (8) holds. Next, suppose that there exist constants such that, for any fixed $x \in \mathcal{S}$:

for all $x' \in E$ and any draw of $\Phi$, $\|\Phi(x) - \Phi(x')\| \leq L \|x - x'\|$,

the model $\mathcal{S}$ has finite covering numbers with respect to $\|\cdot\|$, and in particular it also has finite diameter $D$.

for all $r > 0$, the following version of the normalized secant set has finite covering numbers: $\mathcal{Z}_{x} = \left\{ \frac{x - x'}{\|x - x'\|} \ :\ x' \in \mathcal{S},\ x' \neq x \right\}$.

for all $x' \in \mathcal{S}$ such that $\|x - x'\| \leq r$, and any draw of $\Phi$, we have $\|\Phi(x) - \Phi(x') - \Phi_x(x - x')\| \leq \epsilon_0 \|x - x'\|$, where $\Phi_x$ is a linear map such that, for all fixed $u$, the concentration property (8) holds for $\Phi_x$.
The following result is proved in Appendix D.
Proposition 2.
Assume that the properties above are satisfied. Set , and . Define the probability of failure
Then the operator satisfies the nonuniform LRIP for the model with constant , metric , probability and error .
4.3 Illustration
In this section we illustrate the nonlinear LRIP on a simple example: that of recovering a vector from a random features embedding, which is a random map initially designed for kernel approximation, see [23, 24]. Such a random embedding can be seen as a one-layer neural network with random weights, for which invertibility and preservation of information have recently been topics of interest [16, 17].
Consider $E = \mathbb{R}^d$ and define $\mathcal{S}$ to be a Union of Subspaces, which is a popular model in compressed sensing [5], with controlled norm: $\mathcal{S} = \{ x \in V_1 \cup \ldots \cup V_N \,:\, \|x\|_2 \leq R \}$, where each $V_i$ is a $k$-dimensional subspace of $\mathbb{R}^d$. As in [18], we choose a sampling that is a reweighted version of the original Fourier sampling for kernel approximation [23], for the Gaussian kernel with bandwidth $\sigma$. It is defined as follows: for a number of measurements $m$, the measurement space is $F = \mathbb{C}^m$, and the random map is defined as $\Phi(x) = \frac{1}{\sqrt{m}} \left( w_j\, e^{\mathsf{i} \omega_j^\top x} \right)_{j = 1, \ldots, m}$, where the frequencies $\omega_j$ are drawn i.i.d. from a reweighted version of the Gaussian distribution used in [23] (where the unweighted case corresponds to $\omega \sim \mathcal{N}(0, \sigma^{-2}\mathrm{Id})$), with weights $w_j$ chosen accordingly. One can verify that the reweighted density is a valid probability distribution. The metric $d$ is here the kernel metric associated to the Gaussian kernel with bandwidth $\sigma$:
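For intuition on such embeddings, the classical unweighted random Fourier features of [23] can be sketched in a few lines: with frequencies $\omega_j \sim \mathcal{N}(0, \sigma^{-2}\mathrm{Id})$ and real features $\sqrt{2/m}\,\cos(\omega_j^\top x + b_j)$, the empirical inner product approximates the Gaussian kernel $e^{-\|x - x'\|^2/(2\sigma^2)}$. This is a plain sketch of the standard construction; the reweighted sampling of [18] used above differs:

```python
import math, random

random.seed(3)
d, m, sigma = 2, 5000, 1.0

# frequencies omega_j ~ N(0, Id / sigma^2), phases b_j ~ U[0, 2*pi)
omegas = [[random.gauss(0.0, 1.0 / sigma) for _ in range(d)] for _ in range(m)]
phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(m)]

def features(x):
    """Random Fourier features sqrt(2/m) * cos(omega.x + b)."""
    return [math.sqrt(2.0 / m) * math.cos(sum(w * xi for w, xi in zip(om, x)) + b)
            for om, b in zip(omegas, phases)]

x, xp = [0.3, -0.5], [0.8, 0.1]
approx = sum(a * b for a, b in zip(features(x), features(xp)))
sq = sum((a - b) ** 2 for a, b in zip(x, xp))
exact = math.exp(-sq / (2.0 * sigma ** 2))
print(abs(approx - exact))  # approximation error, small for large m
```

The kernel metric below is precisely the metric that such inner products approximate, which is why the LRIP for this map is naturally stated with respect to it.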
We have the following result, whose proof is in Appendix E.
Proposition 3.
If the number of measurements is such that
then the operator satisfies the nonuniform LRIP for the model with constant , metric , probability and error , as well as the BP (Def. 5) with constant , metric and probability .
Hence, using Theorem 2, we have shown that a reduced number of random features preserves all information when encoding signals that are (in this case) well-modeled by a union of subspaces, with respect to the associated kernel metric. This preliminary analysis may have consequences for classical random feature bounds in a learning context [1, 25].
5 Conclusion
In this paper we generalized a classical property, the equivalence between the existence of an instance-optimal decoder and the LRIP, to nonlinear inverse problems with possible quantization error or limited algorithmic precision, and data that live in any pseudometric set. We also formulated a version of the result for nonuniform recovery, by introducing a nonuniform version of the LRIP. To further illustrate this principle, we provided a typical proof strategy for the LRIP that one might use in practice, and gave an example of nonlinear LRIP on random features for kernel approximation.
Although relatively simple in their proofs, these results may have important consequences for a large class of linear or nonlinear inverse problems, where one seeks stable and robust recovery. Naturally, once the LRIP guarantees (or disproves) the existence of an instance optimal decoder, an outstanding question is the existence of efficient algorithms that provide equivalent guarantees, as in classical compressive sensing [15] or some of its recent extensions [26].
References
 [1] Francis Bach. On the Equivalence between Kernel Quadrature Rules and Random Feature Expansions. Journal of Machine Learning Research, pages 1–38, 2017.
 [2] Richard Baraniuk, Mark Davenport, Ronald A Devore, and Michael Wakin. A simple proof of the restricted isometry property for random matrices. Constructive Approximation, 28(3):253–263, 2008.
 [3] Amir Beck and Yonina C. Eldar. Sparsity Constrained Nonlinear Optimization: Optimality Conditions and Algorithms. SIAM Journal on Optimization, 23(3):1480–1509, 2013.
 [4] Thomas Blumensath. Compressed sensing with nonlinear observations and related nonlinear optimization problems. Information Theory, IEEE Transactions on, pages 1–19, 2013.
 [5] Thomas Blumensath and Mike E. Davies. Sampling theorems for signals from the union of finite-dimensional linear subspaces. IEEE Transactions on Information Theory, 55(4):1872–1882, 2009.
 [6] Petros T. Boufounos and Richard G. Baraniuk. 1-Bit Compressive Sensing. In Information Sciences and Systems, pages 16–21, 2008.
 [7] Petros T. Boufounos, Shantanu Rane, and Hassan Mansour. Representation and Coding of Signal Geometry. Information and Inference: a Journal of the IMA, 6(4):349–388, 2017.
 [8] Anthony Bourrier, Mike E. Davies, Tomer Peleg, and Rémi Gribonval. Fundamental performance limits for ideal decoders in highdimensional linear inverse problems. IEEE Transactions on Information Theory, 60(12):7928–7946, 2014.
 [9] Emmanuel J. Candès. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique, 346(9–10):589–592, 2008.
 [10] Emmanuel J. Candès, Justin K. Romberg, and Terrence Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):480–509, 2006.
 [11] Emmanuel J. Candès and Terrence Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203–4215, 2005.
 [12] Albert Cohen, Wolfgang Dahmen, and Ronald A. DeVore. Compressed sensing and best k-term approximation. Journal of the American Mathematical Society, 22(1):211–231, 2009.
 [13] David L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.
 [14] Heinz W Engl and Philipp Kügler. Nonlinear inverse problems: theoretical aspects and some industrial applications. In Multidisciplinary Methods for Analysis Optimization and Control of Complex Systems, volume 06, pages 3–47. 2005.
 [15] Simon Foucart and Holger Rauhut. A Mathematical Introduction to Compressive Sensing. Applied and Numerical Harmonic Analysis. Springer New York, NY, 2013.
 [16] Anna C. Gilbert, Yi Zhang, Kibok Lee, Yuting Zhang, and Honglak Lee. Towards understanding the invertibility of convolutional neural networks. IJCAI International Joint Conference on Artificial Intelligence, pages 1703–1710, 2017.
 [17] Raja Giryes, Guillermo Sapiro, and Alex M. Bronstein. Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy? IEEE Transactions on Signal Processing, 64(13):3444–3457, 2016.
 [18] Rémi Gribonval, Gilles Blanchard, Nicolas Keriven, and Yann Traonmilin. Compressive Statistical Learning with Random Feature Moments. arXiv:1706.07180, 2017.
 [19] Laurent Jacques. Small width, low distortions: quasi-isometric embeddings with quantized sub-Gaussian random projections. pages 1–26, 2015.
 [20] Nicolas Keriven, Anthony Bourrier, Rémi Gribonval, and Patrick Pérez. Sketching for Large-Scale Learning of Mixture Models. Information and Inference: A Journal of the IMA, pages 1–62, 2017.
 [21] Tomer Peleg, Rémi Gribonval, and Mike E. Davies. Compressed Sensing and Best Approximation from Unions of Subspaces: Beyond Dictionaries. European Signal Processing Conference (EUSIPCO), 2013.
 [22] Gilles Puy, Mike E. Davies, and Rémi Gribonval. Linear embeddings of low-dimensional subsets of a Hilbert space to R^m. In European Signal Processing Conference (EUSIPCO), pages 469–473, 2015.
 [23] Ali Rahimi and Benjamin Recht. Random Features for Large Scale Kernel Machines. Advances in Neural Information Processing Systems (NIPS), 2007.
 [24] Ali Rahimi and Benjamin Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. Advances in Neural Information Processing Systems (NIPS), 2009.
 [25] Alessandro Rudi and Lorenzo Rosasco. Generalization Properties of Learning with Random Features. In Advances in Neural Information Processing System (NIPS), 2017.
 [26] Yann Traonmilin and Rémi Gribonval. Stable recovery of low-dimensional cones in Hilbert spaces: One RIP to rule them all. Applied and Computational Harmonic Analysis, 2016.
Appendix A Proof of Theorem 1

Consider $x, x' \in \mathcal{S}$. By the triangle inequality we have
$$d(x, x') \leq d(x, \Delta(\Phi(x'))) + d(\Delta(\Phi(x')), x').$$
Then, by applying the Instance Optimality Property to $x$ with noise $e = \Phi(x') - \Phi(x)$ we get $d(x, \Delta(\Phi(x'))) \leq A \|\Phi(x) - \Phi(x')\| + \eta$, and by applying again the Instance Optimality Property to $x'$ with noise $e = 0$ it holds that $d(\Delta(\Phi(x')), x') \leq \eta$, hence the result.

Consider any signal $x \in E$ and noise $e \in F$, denote $y = \Phi(x) + e$ and $x^\star = \Delta(y)$. Let $x_{\mathcal{S}}$ be any element of the model. We have:
$$d(x^\star, x) \leq d(x^\star, x_{\mathcal{S}}) + d(x_{\mathcal{S}}, x) \leq A \|\Phi(x^\star) - \Phi(x_{\mathcal{S}})\| + \eta + d(x_{\mathcal{S}}, x).$$
By definition of the decoder (4) we have $\|\Phi(x^\star) - y\| \leq \|\Phi(x_{\mathcal{S}}) - y\|$ and therefore
$$\|\Phi(x^\star) - \Phi(x_{\mathcal{S}})\| \leq \|\Phi(x^\star) - y\| + \|y - \Phi(x_{\mathcal{S}})\| \leq 2 \|y - \Phi(x_{\mathcal{S}})\| \leq 2\|e\| + 2\|\Phi(x) - \Phi(x_{\mathcal{S}})\|,$$
which yields $d(x^\star, x) \leq 2A\|e\| + d_\Phi(x, x_{\mathcal{S}}) + \eta$, where $d_\Phi(x, x') = d(x, x') + 2A\|\Phi(x) - \Phi(x')\|$. Since the result is valid for all $x_{\mathcal{S}} \in \mathcal{S}$, we can take the infimum with respect to $x_{\mathcal{S}}$.
Appendix B Proof of Theorem 2
Let $x \in E$ be a fixed signal and denote $x_{\mathcal{S}} = P(x)$.
Applying the nonuniform LRIP at $x_{\mathcal{S}} \in \mathcal{S}$, with probability at least $1 - \rho_1$ on the draw of the operator we have, for all $x' \in \mathcal{S}$,
$$d(x_{\mathcal{S}}, x') \leq A \|\Phi(x_{\mathcal{S}}) - \Phi(x')\| + \eta. \qquad \text{(B.1)}$$
In the same fashion, applying the Boundedness Property to the pair $(x, x_{\mathcal{S}})$, with probability at least $1 - \rho_2$ on the draw of the operator we have
$$\|\Phi(x) - \Phi(x_{\mathcal{S}})\| \leq C\, d(x, x_{\mathcal{S}}). \qquad \text{(B.2)}$$
By a union bound, (B.1) and (B.2) hold simultaneously with probability at least $1 - \rho_1 - \rho_2$. On this event, for any noise $e$, the argument of the second part of the proof of Theorem 1, applied with $x_{\mathcal{S}}$ in place of the infimum over the model, yields $d(x^\star, x) \leq 2A\|e\| + (1 + 2AC)\, d(x, x_{\mathcal{S}}) + \eta$, hence the result.
Appendix C Proof of Proposition 1
From the definition of the normalized secant set, our goal is to prove that, with high probability on the draw of $\Phi$, for all $z \in \mathcal{Z}$ we have $\|\Phi z\| \geq 1 - \delta$.
Let $t, \epsilon > 0$ be small constants whose values we shall define later and $N = N(\mathcal{Z}, \|\cdot\|, \epsilon)$ be the covering numbers of the normalized secant set. Let $z_1, \ldots, z_N$ be an $\epsilon$-covering of the normalized secant set $\mathcal{Z}$.
By the concentration property, it holds that, with probability at least $1 - 2N e^{-c(t)}$, for all $i$ we have
$$\left|\, \|\Phi z_i\| - 1 \,\right| \leq t. \qquad \text{(C.1)}$$
Now, given any element $z$ of the normalized secant set $\mathcal{Z}$, one can find an element $z_i$ of the covering such that $\|z - z_i\| \leq \epsilon$. Assuming (C.1) holds, we have
$$\|\Phi z\| \geq \|\Phi z_i\| - \|\Phi(z - z_i)\| \geq 1 - t - C\epsilon,$$
and therefore by choosing $t = \delta/2$ and $\epsilon = \delta/(2C)$ we obtain the desired result.
Appendix D Proof of Proposition 2
Fix any $x \in \mathcal{S}$. Let be small constants whose values we shall define later, such that . Define the model with a ball around $x$ removed, and , . Let and be, respectively, a covering of and a covering of , with defined such that and .
By the concentration property, it holds that, with probability at least , for all or for all indices , we have
(D.1) 
Our goal is to extend this property to any element in the model .
Let $x'$ be any element of the model. We distinguish two cases. If $x'$ is far from $x$, we consider an element $x_i$ of the covering of the reduced model such that $\|x' - x_i\| \leq \epsilon$. We have:
and therefore
(D.2) 
Now, when $x'$ is close to $x$, we define the normalized secant element $z = (x - x')/\|x - x'\|$ and note that $z$ belongs to the normalized secant set. We approximate it by an element $z_i$ of the covering of the normalized secant set (meaning that $z_i$ also belongs to it) that verifies $\|z - z_i\| \leq \epsilon'$. Then, we have
and therefore
(D.3) 
To conclude, we set , , to obtain the desired result.
Appendix E Proof of Proposition 3
Concentration property.
The concentration result is based on the definition of the random features. Using elementary function arguments and a Bernstein concentration inequality with a control on moments of all orders, it is possible to show (see [18], eq. (160) then Prop. 6.11) that the concentration result (8) is valid with a concentration function growing linearly in the number of measurements $m$ (we do not reproduce the detailed proof here for brevity).
Since this Bernstein inequality is in fact valid for all vectors (not necessarily in the model), as a consequence the Boundedness Property is also satisfied with the corresponding constant, metric and probability.
We now check hypotheses (i)–(iv) in Proposition 2. We are going to repeatedly use the fact that, for $x, x' \in \mathcal{S}$, we have
(E.1) 
where and .
(i)
Using a first-order Taylor expansion combined with (E.1), hypothesis (i) is valid.
(ii)
It is immediate that . Then, using the well-known fact that for each we have , by a union bound and (E.1) we have .
(iii)
(iv)
Finally, by a Taylor expansion we have
where . Hence hypothesis (iv) is satisfied, which concludes the proof.