Exact Reconstruction using Beurling Minimal Extrapolation

Yohann de Castro  and  Fabrice Gamboa Institut de Mathématiques de Toulouse (CNRS UMR 5219). Université Paul Sabatier, 118 route de Narbonne, 31062 Toulouse, France. yohann.decastro@math.univ-toulouse.fr fabrice.gamboa@math.univ-toulouse.fr
Abstract.

We show that measures with finite support on the real line are the unique solution to an algorithm, named generalized minimal extrapolation, involving only a finite number of generalized moments (which encompass the standard moments, the Laplace transform, the Stieltjes transformation, etc.).

Generalized minimal extrapolation shares geometric properties with the basis pursuit of Chen, Donoho and Saunders [CDS98]. Indeed, we also extend some standard results of compressed sensing (the dual polynomial, the nullspace property) to the signed measure framework.

We express exact reconstruction in terms of a simple interpolation problem. We prove that every nonnegative measure, supported by a set containing points, can be exactly recovered from only generalized moments. This result leads to a new construction of deterministic sensing matrices for compressed sensing.

Key words and phrases:
Beurling Minimal Extrapolation, Basis Pursuit, Compressed Sensing, Convex optimization.

Introduction

In the last decade much emphasis has been put on the exact reconstruction of sparse finite-dimensional vectors using the basis pursuit algorithm. The pioneering paper of Chen, Donoho and Saunders [CDS01] brought this method to the statistics community. Note that the seminal ideas on the subject appeared in earlier work of Donoho and Stark [DS89], which mainly considered the discrete Fourier transform. Similarly, P. Doukhan, E. Gassiat and one author of the present paper [DG96, GG96] considered the exact reconstruction of a nonnegative measure. More precisely, they derived results when one only knows the values of a finite number of linear functionals at the target measure. Moreover, they studied stability with respect to a metric for weak convergence, which is not the case here.

In this paper, we are concerned with the measure framework. We show that the exact reconstruction of a signed measure is still possible when one only knows a finite number of non-adaptive linear measurements. Surprisingly our method, called generalized minimal extrapolation, appears to uncover exact reconstruction results related to basis pursuit.

Let us explain more precisely what is done here. Consider a signed discrete measure on a set . Unless otherwise specified, assume that . Note that all our results easily extend to any real bounded set. Consider the Jordan decomposition,

and denote by (resp. ) the support of (resp. ). Let us define the Jordan support of the measure as the pair . Assume further that is finite and has cardinality . Moreover suppose that belongs to a family of pairs of subsets of (see Definition 1 for more details). We call a Jordan support family. The measure can be written as

where , are nonzero real numbers, and denotes the Dirac measure at point .

Let be any family of continuous functions on , where the set denotes the closure of (this statement is meant to be general and encompasses the case where is not closed). Let be a signed measure on . The -th generalized moment of is defined by

(1)

for all the indices .
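Writing the family as (f_k) and the measure as μ (these symbols are our choice, not necessarily the paper's), the k-th generalized moment of definition (1) takes the standard form:

```latex
m_k(\mu) \;=\; \int_{\overline{I}} f_k \,\mathrm{d}\mu ,
\qquad k = 0, 1, 2, \dots
```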

Our main issue

We are concerned with the reconstruction of the target measure from the observation of , i.e. its first generalized moments. We assume that both the support and the weights of the target measure are unknown. We investigate if it is possible to recover uniquely from the observation of . More precisely, does an algorithm fitting among all the signed measures of recover the measure ?

Note that a finite number of assigned standard moments does not define a unique signed measure. In fact one can check that for each signed measure and for each integer there exists a measure having the same first moments. Hence there seems to be no hope of recovering discrete measures from a finite number of their generalized moments. Surprisingly, we show that every extrema Jordan type measure (see Definition 1 and the examples that follow) is the unique solution of a total variation minimizing algorithm, generalized minimal extrapolation.

Basis pursuit

In [CDS98] Chen, Donoho and Saunders introduced basis pursuit. It is the process of reconstructing a target vector from the observation by finding a sparse solution to an under-determined system of equations:

()

where is the design matrix. This program is one of the first steps [CRT06a, Don06] of the remarkable theory known as compressed sensing. It is well suited to the reconstruction of sparse vectors (i.e. vectors with small support [Don06]). In this paper we develop a related program that recovers all measures with a sufficiently structured Jordan support (which can be seen as the sparsity-related measures).
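Basis pursuit can be solved as a linear program. The following sketch (our own illustration, not the authors' code; sizes, seed, and tolerance are arbitrary choices) recovers a sparse vector from Gaussian measurements via the standard split x = u − v with u, v ≥ 0:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 subject to Ax = y, via the LP reformulation x = u - v."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])          # equality constraint A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Toy example: a 3-sparse vector recovered from 25 Gaussian measurements.
rng = np.random.default_rng(0)
n, m = 60, 25
x0 = np.zeros(n)
x0[[5, 17, 42]] = [1.5, -2.0, 0.7]
A = rng.standard_normal((m, n))
x_hat = basis_pursuit(A, A @ x0)
print(np.linalg.norm(x_hat - x0) < 1e-4)
```

With Gaussian measurements and this level of sparsity, exact recovery is the generic outcome, matching the compressed sensing results cited above.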

Generalized minimal extrapolation

Denote by the set of finite signed measures on and by the total variation norm. We recall that for all ,

where the supremum is taken over all partitions of into a finite number of disjoint measurable subsets. By analogy with basis pursuit, generalized minimal extrapolation is the process of reconstructing a target measure from the observation of its first generalized moments by finding a solution of the problem

()

On the one hand, basis pursuit minimizes the -norm subject to linear constraints. On the other hand, generalized minimal extrapolation naturally substitutes the -norm (the total variation norm) for the -norm. In the case of Fourier coefficients, () is simply Beurling Minimal Extrapolation [Beu38]. The program () is named after this remark.
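In a unified notation (the symbols x⋆, A, σ, m_k, and the index range are our choices, not necessarily the paper's), the two programs can be displayed side by side:

```latex
% Basis pursuit (finite-dimensional):
\min_{x \in \mathbb{R}^N} \|x\|_{\ell_1}
  \quad \text{s.t.} \quad A x = A x^{\star};
% Generalized minimal extrapolation (over signed measures):
\min_{\mu \in \mathcal{M}(I)} \|\mu\|_{\mathrm{TV}}
  \quad \text{s.t.} \quad m_k(\mu) = m_k(\sigma), \quad k = 0, \dots, m.
```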

Let us emphasize that generalized minimal extrapolation looks for a minimizer among all signed measures on . Nevertheless, the target measure is assumed to be of extrema Jordan type.

Extrema Jordan type measures

Let us define more precisely what we understand by the Jordan support family .

Definition 1 (Extrema Jordan type measure) —

We say that a signed measure is of extrema Jordan type with respect to a family if and only if its Jordan decomposition satisfies

where is defined as the support of the measure , and

  • denotes any linear combination of elements of ,

  • is not constant and ,

  • resp.  is the set of all points such that resp. .

In the following, we give some examples of extrema Jordan type measures with respect to the family

These measures can be seen as "interesting" target measures for () given observation of the first standard moments.

Examples with respect to the family

For the sake of readability, let be an even integer. We present three important examples.

Nonnegative measures:

The nonnegative measures whose support has size not greater than are extrema Jordan type measures. Indeed, let be a nonnegative measure and be its support. Set

Then, for a sufficiently small value of the parameter , the polynomial has supremum norm not greater than . The existence of such a polynomial shows that the measure is an extrema Jordan type measure.
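This construction can be checked numerically. The formula P(x) = 1 − ε ∏ᵢ (x − xᵢ)² below is our guess at the intended polynomial, and the interval I = [−1, 1], the support points, and the value of ε are illustrative assumptions:

```python
import numpy as np

# Illustrative support points of a nonnegative discrete measure on [-1, 1].
support = np.array([-0.6, 0.1, 0.8])

def P(x, eps):
    # P(x) = 1 - eps * prod_i (x - x_i)^2 : equals 1 on the support,
    # and stays below 1 elsewhere since the product is nonnegative.
    return 1.0 - eps * np.prod([(x - xi) ** 2 for xi in support], axis=0)

grid = np.linspace(-1.0, 1.0, 20001)
eps = 0.1                                # small enough for this support
vals = P(grid, eps)
print(np.max(np.abs(vals)) <= 1.0 + 1e-12)   # sup norm not greater than 1
print(np.allclose(P(support, eps), 1.0))     # equals 1 exactly on the support
```

Any ε small enough that ε·max|∏(x − xᵢ)²| ≤ 2 on I keeps P within [−1, 1], which is the only requirement of the construction.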

In Section 2 we extend this notion to any homogeneous -system.

Chebyshev measures:

The -th Chebyshev polynomial of the first kind is defined by

(2)

It is well known that it has supremum norm not greater than , and that

  • ,

  • ,

whenever . Then, any measure such that

for some , is an extrema Jordan type measure.
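The equioscillation behind this example is easy to verify numerically: |T_m| ≤ 1 on [−1, 1] and T_m alternates between +1 and −1 at the points cos(kπ/m). The degree below is an illustrative choice:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

m = 8
T = Chebyshev.basis(m)                         # the Chebyshev polynomial T_m
extrema = np.cos(np.arange(m + 1) * np.pi / m) # alternation points
print(np.allclose(T(extrema), (-1.0) ** np.arange(m + 1)))  # signs alternate
grid = np.linspace(-1.0, 1.0, 10001)
print(np.max(np.abs(T(grid))) <= 1 + 1e-12)    # supremum norm equals 1
```

The sets of points where T_m equals +1 and −1 are exactly the Jordan support pairs that make a measure "Chebyshev" in the sense above.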

Further examples are presented in Section 3.

-spaced out type measures:

Let be a positive real and be the set of all pairs of subsets of such that

In Lemma 4.2, we prove that, for all , there exists a polynomial such that

  • has degree not greater than a bound depending only on ,

  • is equal to on the set ,

  • is equal to on the set ,

  • and .

This shows that any measure with Jordan support included in is an extrema Jordan type measure.

In this paper, we give exact reconstruction results for these three kinds of extrema Jordan type measures. In fact, our results extend to others families . Roughly, they can be stated as follows:

Nonnegative measures:

Assume that is a homogeneous -system (see 2.1.3). Theorem 2.1 shows that any nonnegative measure is the unique solution of generalized minimal extrapolation given the observation , where is not less than twice the size of the support of .

Generalized Chebyshev measures:

Assume that is an -system (see 2.1.2). Proposition 3.3 shows the following result: Let be a signed measure having Jordan support included in , for some , where denotes the -th generalized Chebyshev polynomial (see 3.3.1). Then is the unique solution to generalized minimal extrapolation () given , i.e. its first generalized moments.

-interpolation:

Considering the standard family , Proposition 4.3 shows that generalized minimal extrapolation exactly recovers any -spaced out type measure from the observation , where is greater than a bound depending only on .

These results are closely related to standard results of basis pursuit [Don06]. In fact, further analogies with compressed sensing can be emphasized.

Analogy with compressed sensing

Our estimator follows the spirit of the recent breakthroughs [CDS98, CRT06a] in compressed sensing.

In the past decade E. J. Candès, J. Romberg, and T. Tao have shown [CRT06b] that it is possible to exactly recover all sparse vectors from few linear measurements. They considered a matrix with i.i.d. entries (centered Gaussian, Bernoulli, random Fourier sampling) and an -sparse vector (i.e. a vector with support of size at most ). They pointed out that, with very high probability, the vector is the only point of contact between the -ball of radius and the affine space . This result holds as soon as , where is a universal constant. In our framework we uncover the same geometric property:

Let be an extrema Jordan type measure. Then is a point of contact between the ball of radius and the affine space , where is greater than a bound depending only on the structure of the Jordan support of . For instance, in the nonnegative measure case, if has support of size at most , then suffices (see Theorem 2.1).

Actually the reader can check that the above property is equivalent to the fact that the measure is a solution of generalized minimal extrapolation (more details can be found in Section 1.2). Accordingly, generalized minimal extrapolation () minimizes the total variation in order to pursue the support of the target measure.

Organization

This paper falls into four parts. The next section introduces generalized dual polynomials and shows that exact recovery can be understood in terms of an interpolation problem. Section 2 studies the exact reconstruction of nonnegative measures, and gives explicit construction of design matrices for basis pursuit. Section 3 focuses on generalized Chebyshev polynomials and shows that it is possible to reconstruct signed measures from very few generalized moments. The last section uncovers a property related to the nullspace property of compressed sensing.

1. Generalized dual polynomials

In this section we introduce generalized dual polynomials. In particular we are concerned with a sufficient condition that guarantees the exact reconstruction of the measure . In fact, this condition relies on an interpolation problem.

1.1. An interpolation problem

An insight into exact reconstruction is given by Lemma 1.1. Roughly, the existence of a generalized dual polynomial is a sufficient condition for the exact reconstruction of a signed measure with finite support.

As usual, the following result holds for any family of continuous functions on . Throughout, denotes the sign of the real .

Lemma 1.1 (The generalized dual polynomials) —

Let be a positive integer. Let be a subset of size and . If there exists a linear combination such that

  1. the generalized Vandermonde system

    has full column rank,

  2. ,

  3. ,

then every measure , such that , is the unique solution of generalized minimal extrapolation given the observation .

Proof.

See A.1. ∎

The linear combination considered in Lemma 1.1 is called a generalized dual polynomial. This name is inherited from the original article [CRT06a] of Candès, Romberg and Tao, and from the dual certificate introduced by Candès and Plan [CP10].

1.2. Reconstruction of a cone

Given a subset and a sign sequence , Lemma 1.1 shows that if the generalized interpolation problem defined by , and has a solution then generalized minimal extrapolation recovers exactly all measures with support and such that .

Let us emphasize that the result is slightly stronger. Indeed the proof in A.1 remains unchanged if some coefficients are zero. Consequently () exactly recovers all the measures whose support is included in and such that for all nonzero .

Let us denote this set by . It is exactly the cone defined by

Thus the existence of implies the exact reconstruction of all measures in this cone. The cone is the conic span of an -dimensional face of the -unit ball, that is

Furthermore, the affine space is tangent to the -unit ball at any point , as shown in the following remark.

Remark.

From a convex optimization point of view, the dual certificates [CP10] and the generalized dual polynomials are deeply related: the existence of a generalized dual polynomial implies that, for all , a subgradient of the -norm at the point is perpendicular to the set of the feasible points, that is

where denotes the nullspace. A proof of this remark can be found in A.2.

1.3. On condition (i) in Lemma 1.1

Obviously, when for , conditions and imply that and so condition . Nevertheless, this implication is not true for a general set of functions . Moreover, Lemma 1.1 can fail if condition is not satisfied. For example, set and consider a continuous function satisfying the two conditions and . In this case, if the target belongs to (where and are given by and ), then every measure is a solution of generalized minimal extrapolation given the observation . Indeed,

for all . This example shows that condition is necessary. As the proof in A.1 shows, conditions and ensure that the solutions to generalized minimal extrapolation belong to the cone , whereas condition gives uniqueness.

1.4. The extrema Jordan type measures

Lemma 1.1 shows that Definition 1 is well-founded. In fact, we have the following corollary.

Corollary —

Let be an extrema Jordan type measure. Then the measure is a solution to generalized minimal extrapolation given the observation .

Furthermore, if the Vandermonde system given by in Lemma 1.1 has full column rank where denotes the support of , then the measure is the unique solution to generalized minimal extrapolation given the observation .

This corollary shows that the "extrema Jordan type" notion is appropriate to exact reconstruction using generalized minimal extrapolation.

2. Exact reconstruction of the nonnegative measures

In this section we show that if the underlying family is a homogeneous -system then () exactly recovers each finitely supported nonnegative measure from the observation of surprisingly few generalized moments. We begin with the definition of homogeneous -systems.

2.1. Markov systems

Markov systems were introduced in approximation theory [KN77, BE95, KS66]. They deal with the problem of finding the best approximation, in terms of the -norm, of a given continuous function. We begin with the definition of Chebyshev systems (the so-called -systems). They can be seen as a natural extension of algebraic monomials. Thus a finite combination of elements of a -system is called a generalized polynomial.

2.1.1. -systems of order

Denote by a set of continuous real (or complex) functions on . This set is a -system of degree if and only if every generalized polynomial

where , has at most zeros in .

This definition is equivalent to each of the two following conditions:

  • For all distinct elements of and all real (or complex) numbers, there exists a unique generalized polynomial (i.e. ) such that , for all .

  • For all distinct elements of , the generalized Vandermonde system

    has full rank.
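For the monomial system {1, t, …, t^m}, the generalized Vandermonde matrix is the classical one, and its full rank at distinct points can be checked directly (the points below are arbitrary illustrative choices):

```python
import numpy as np

# Distinct interpolation points in [-1, 1].
x = np.array([-0.9, -0.3, 0.2, 0.55, 0.9])
m = len(x) - 1
# Rows (1, x_j, x_j^2, ..., x_j^m): the classical Vandermonde matrix.
V = np.vander(x, m + 1, increasing=True)
print(np.linalg.matrix_rank(V) == m + 1)   # full rank at distinct points
```

Full rank of this system is exactly the unique-interpolation characterization stated in the first bullet.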

2.1.2. -systems

We say that the family is an -system if and only if it is a -system of degree for all . Actually, -systems are common objects (see [KN77]). We mention some examples below.

In this paper, we are concerned with target measures on . Usually -systems are defined on general Hausdorff spaces (see [BEZ94] for instance). For the sake of readability, we present examples with different values of . In each case, our results easily extend to target measures with finite support included in the corresponding . As usual, if not specified, the set is assumed to be .

Real polynomials:

The family is an -system. The real polynomials give the standard moments.

Müntz polynomials:

Let be any real numbers. The family is an -system on .

Trigonometric functions:

The family is an -system on .

Characteristic function:

The family is an -system on . The moments are the characteristic function of at points , . It yields

In this case, the underlying scalar field is .

Stieltjes transformation:

The family , where none of the ’s belongs to , is an -system. The corresponding moments are the Stieltjes transformation of , namely

Laplace transform:

The family is an -system. The moments are the Laplace transform at integer points, namely

A broad variety of common families can be considered in our framework. The above list is not meant to be exhaustive.

Consider the family . Note that no linear combination of its elements gives the constant function . Thus the constant function is not a generalized polynomial of this system. To treat such cases, we introduce homogeneous -systems.

2.1.3. Homogeneous -systems

We say that a family is a homogeneous -system if and only if it is an -system and is a constant function. In this case, all constant functions , with (or ), are generalized polynomials. Hence the field (or ) is naturally embedded in the generalized polynomials. The adjective homogeneous refers to this property.

From any -system we can always construct a homogeneous -system. Indeed, let be an -system. In particular the family is a -system of order . Thus the continuous function does not vanish in . In fact the family is a homogeneous -system.

All the previous examples of -systems (see 2.1.2) are homogeneous, even Stieltjes transformation:

Using homogeneous -systems, we show that one can exactly recover all nonnegative measures from a few generalized moments.

2.2. An important theorem

The following result is one of the main theorems of our paper. It states that the generalized minimal extrapolation () recovers all nonnegative measures whose support is of size from only generalized moments.

Theorem 2.1 —

Let be a homogeneous -system on . Consider a nonnegative measure with finite support included in . Then the measure is the unique solution to generalized minimal extrapolation given the observation , where is not less than twice the size of the support of .

Proof.

The complete proof can be found in B.1 but some key points from the theory of approximation are presented in 2.2.1. For further insights about Markov systems, we recommend the books [KN77, KS66]. ∎

In addition, this result is sharp in the following sense. Every measure with support size depends on parameters ( for its support and for its weights). Surprisingly, this information can be recovered from only of its generalized moments. Furthermore the program () does not use the fact that the target is nonnegative. It recovers among all signed measures with finite support.

2.2.1. Nonnegative interpolation

An important property of -systems is the existence of a nonnegative generalized polynomial that vanishes exactly at a prescribed set of points , where for all . Indeed, define the index as

(3)

where if belongs to (the interior of ) and otherwise. The next lemma guarantees the existence of nonnegative generalized polynomials.

Lemma 2.2 (Nonnegative generalized polynomial) —

Consider an -system and points in . These points are the only zeros of a nonnegative generalized polynomial of degree at most if and only if .

A proof of this lemma is in [KN77]. Note that this lemma holds for all -systems. However our main theorem needs a homogeneous -system.

2.2.2. Is homogeneous necessary?

If one considers non-homogeneous -systems then it is possible to give counterexamples to Theorem 2.1 for all . Indeed, we have the next result.

Proposition 2.3 —

Let be a nonnegative measure supported by points. Let be an integer such that . Then there exists an -system and a measure such that and .

Proof.

See B.2. ∎

Theorem 2.1 gives us the opportunity to build a large family of deterministic matrices for compressed sensing in the case of nonnegative signals.

2.3. Deterministic matrices for compressed sensing

The heart of this article lies in the next theorem. It gives deterministic matrices for compressed sensing. We begin with some state-of-the-art results in compressed sensing. In the following, denotes the number of predictors (or, from a signal processing viewpoint, the length of the signal).

Deterministic Design:

As far as we know, for

there exists [BGI08] a deterministic matrix such that basis pursuit () recovers all -sparse vectors from the observation .

Random Design:

If

where is a universal constant, then there exists (with high probability) a random matrix such that basis pursuit recovers all -sparse vectors from the observation .

The deterministic result holds for large values of , and . For readability we do not specify the meaning of large here. The reader may find an abundant literature in the respective references (see for example [BGI08, Don06]).

Considering nonnegative sparse vectors, it is possible to drop the bound on to

Unlike the above examples, this result holds for all values of the parameters (as soon as ). In addition it gives explicit design matrices for basis pursuit. Last but not least, this bound on does not depend on . In special cases, this result was previously developed in [DJHS92, Fuc96, DT05, DT10]. Using Theorem 2.1, it is possible to generalize this result to a broad range of measurement matrices:

Theorem 2.4 (Deterministic Design Matrices) —

Let be integers such that

Let be a homogeneous -system on . Let be distinct reals of . Let be the generalized Vandermonde system defined by

Then basis pursuit () exactly recovers all nonnegative -sparse vectors from the observation .

Proof.

See B.3. ∎

Remark.

The purely analytical components of this result trace back to the theory of neighborly polytopes (see for instance [DT05]) and, in some sense, to the theory of moment problems, which essentially follows from Carathéodory's work [Car07, Car11]. Other relevant works include [KS53, Der56, Stu88]. This list is not meant to be exhaustive.

Although the predictors could be highly correlated, basis pursuit exactly recovers the target vector . Of course, this result is theoretical. In practice, the sensing matrix can be very ill-conditioned. In this case, basis pursuit behaves poorly.

Numerical experiments

Our numerical experiments illustrate Theorem 2.4. They are of the following form:

  1. Choose constants (sparsity), (number of known moments), and (length of the vector). Choose the family (cosine, polynomial, Laplace, Stieltjes,…).

  2. Select the subset (of size ) uniformly at random.

  3. Randomly generate an -sparse vector of support whose nonzero entries have the chi-square distribution with degree of freedom.

  4. Compute the observation .

  5. Solve (), and compare with the target vector .
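The steps above can be sketched in a few lines. We assume the cosine family f_k(t) = cos(kt) on [0, π]; the grid of points, the sizes s and N, the seed, and the chi-square degrees of freedom are our illustrative choices, not the paper's:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
s, N = 3, 50                         # sparsity and signal length
m = 2 * s                            # moments k = 0, ..., 2s suffice here
t = np.pi * np.arange(1, N + 1) / (N + 1)          # distinct points in (0, pi)
A = np.cos(np.outer(np.arange(m + 1), t))          # (m+1) x N sensing matrix

S = rng.choice(N, size=s, replace=False)           # random support
x0 = np.zeros(N)
x0[S] = rng.chisquare(df=2, size=s)                # nonnegative weights
y = A @ x0                                         # observed cosine moments

# Basis pursuit as a linear program via the split x = u - v, u, v >= 0.
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]
print(np.linalg.norm(x_hat - x0) < 1e-4)
```

The cosine family is a homogeneous T-system on [0, π], so recovery of a nonnegative s-sparse vector from 2s + 1 cosine moments is the situation covered by the theorem.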

Figure 1. Consider the family on and the points , for . The blue circles represent the target vector (a -sparse vector), while the black crosses represent the solution of () from the observation of cosine moments. In this example , , and . More numerical results can be found in Appendix D. This example shows that the reconstruction is excellent.

The program () can be recast as a linear program (see [CDS01] for instance). Then we use an interior point method to solve ().

The entries of the target signal are distributed according to a chi-square distribution with degrees of freedom. We chose this distribution to ensure that the entries are nonnegative. Let us emphasize that the actual values of can be arbitrary; only the sign matters. The result remains the same if we take the nonzero entries to be , say.

Let us denote . The columns of are the values of this map at points . For large , the vectors can be highly correlated. In fact, the matrix can be ill-conditioned. To avoid such a case, we chose a family such that the map has a large derivative. It appears that the cosine family gives very good numerical results (see Figure 1).

We investigate the reconstruction error between the numerical result of the program () and the target vector . Our experiment is of the following form:

  1. Choose (length of the vector) and (number of numerical experiments).

  2. Let satisfy .

  3. Set and solve the program (). Let be the numerical result.

  4. Compute the -error .

  5. Repeat times the steps and , and compute , the arithmetic mean of the -errors.

  6. Return , the maximal value of .

For and , we find that

Note that all experiments were done for . This is the smallest value of such that Theorem 2.4 holds.

3. Exact reconstruction for generalized Chebyshev measures

In this section we give some examples of extremal polynomials as they appear in Definition 1. Considering -systems, the corollary of Lemma 1.1 shows that every measure with Jordan support included in is the only solution to (). Indeed, condition of Lemma 1.1 is clearly satisfied when the underlying family is an -system.

3.1. Trigonometric families

In the context of -systems we can exhibit some very particular dual polynomials. The global extrema of these polynomials give families of supports for which the results of Lemma 1.1 hold.

The cosine family

First, consider the -dimensional cosine system

on . Obviously, extremal polynomials

for , satisfy and , for . According to Definition 1, let us denote

  • ,

  • .

The corollary that follows Lemma 1.1 asserts the following result.

Consider a signed measure having Jordan support such that and , for some . Then the measure can be exactly reconstructed from the observation of

(4)

Moreover, since the family is an -system, condition in Lemma 1.1 is satisfied. Hence, the measure is the only solution of () given the observations (4).

Using the classical mapping

the system of functions can be pushed forward to the system of functions , where is the so-called Chebyshev polynomial of the first kind of order (see 3.2).
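The classical identity behind this push-forward is the following (standard, stated in our notation):

```latex
x = \cos\theta, \qquad
\cos(m\theta) = T_m(\cos\theta) = T_m(x),
\qquad \theta \in [0,\pi], \; x \in [-1,1].
```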

The characteristic function

By the same token, consider the complex valued -system defined by

on . In this case, one can check that

where and , is a generalized polynomial. Following the previous example, we set

  • ,

  • .

Hence Lemma 1.1 can be applied. It yields the following:

Any signed measure having Jordan support included in , for some and , is the unique solution of () given the observation

where has been defined in the previous section (see 2.1.2).

Note that the study of basis pursuit with this kind of trigonometric moments has been considered in the pioneering work of Donoho and Stark [DS89].

3.2. Chebyshev polynomials

As mentioned in the introduction, the -th Chebyshev polynomial of the first kind is defined by

We give some well known properties of Chebyshev polynomials. The -th Chebyshev polynomial satisfies the equioscillation property on . In fact, there exist points with such that

where the supremum norm is taken over . Moreover, the Chebyshev polynomial satisfies the following extremal property.

Theorem 3.1 ([Riv90, BE95]) —

We have

where denotes the set of complex polynomials of degree less than , and the supremum norm is taken over . Moreover, the minimum is uniquely attained by .
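In the standard formulation (our notation; the constant is the classical one), the extremal property reads:

```latex
\min_{\substack{p \ \text{monic} \\ \deg p = m}} \;
  \sup_{x \in [-1,1]} |p(x)| \;=\; 2^{1-m},
\qquad \text{attained uniquely by } p = 2^{1-m}\, T_m .
```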

These two properties, namely the equioscillation property and the extremal property, will be useful when we define generalized Chebyshev polynomials.

Using Lemma 1.1 we uncover an exact reconstruction result. Consider the family

on . Set

  • ,

  • .

The following result holds:

Consider a signed measure having Jordan support included in , for some . Then the measure is the only solution to () given its first standard moments.

Note that this result is restrictive regarding the location of the support points: they are not sparse in the usual sense, because they must be precisely located. Nevertheless, it can be extended to any -system with the help of generalized Chebyshev polynomials.

3.3. Generalized Chebyshev polynomials

Following [BE95], we define generalized Chebyshev polynomials as follows. Let be an -system on .

3.3.1. Definition

The generalized Chebyshev polynomial

where , is defined by the following three properties:

  • is a generalized polynomial of degree , i.e. ,

  • there exists such that

    (5)

    for ,

  • and

    (6)

The existence and the uniqueness of such are proved in [BE95]. Moreover, the following theorem shows that the extremal property implies the equioscillation property (5).

Theorem 3.2 ([Riv90, BE95]) —

The -th generalized Chebyshev polynomial exists and can be written as

where are chosen to minimize

and the normalization constant can be chosen so that satisfies property (6).
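Theorem 3.2 suggests a direct numerical route: minimize the supremum norm over the coefficients, which on a finite grid becomes a linear program in the coefficients and one slack variable. The sketch below (our own; grid size, degree, and tolerance are illustrative) does this for the monomial system and recovers the classical extremal value 2^{1−n} of Theorem 3.1:

```python
import numpy as np
from scipy.optimize import linprog

# Minimize sup_x |x^n - sum_k c_k x^k| over c, as an LP in (c, t):
# minimize t subject to -t <= x^n - sum_k c_k x^k <= t on a grid.
n = 5
x = np.linspace(-1.0, 1.0, 2001)
B = np.vander(x, n, increasing=True)        # basis 1, x, ..., x^{n-1}
f = x ** n

ones = np.ones((len(x), 1))
A_ub = np.block([[-B, -ones],               #  f - Bc <= t
                 [ B, -ones]])              #  Bc - f <= t
b_ub = np.concatenate([-f, f])
cost = np.zeros(n + 1)
cost[-1] = 1.0                              # minimize the slack t
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=(None, None), method="highs")
print(abs(res.fun - 2.0 ** (1 - n)) < 1e-4) # extremal value 2^{1-n}
```

For a general T-system one replaces the Vandermonde columns by the values of the f_k on the grid; the LP structure is unchanged.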

Generalized Chebyshev polynomials give a new family of extrema Jordan type measures (see Definition 1). The corresponding target measures are named Chebyshev measures.

3.3.2. Exact reconstruction of Chebyshev measures

Considering the equioscillation property (5), set

  • as the set of alternation points such that ,

  • as the set of alternation points such that .

A direct consequence of the last definition is the following proposition.

Proposition 3.3 —

Let be a signed measure having Jordan support included in , for some . Then is the unique solution to generalized minimal extrapolation () given , i.e. its first generalized moments.

In the special case , Proposition 3.3 shows that () recovers all signed measures with Jordan support included in from the first generalized moments. Note that has size . Hence, this proposition shows that, among all signed measures on , () can recover a signed measure of support size from only generalized moments. In fact, any measure with Jordan support included in is uniquely determined by only generalized moments.

As far as we know, it is difficult to give the corresponding generalized Chebyshev polynomials for a given family . Nevertheless, Borwein, Erdélyi, and Zhang [BEZ94] give the explicit form of for rational spaces (i.e. the Stieltjes transformation in our framework). See also [DS89, HSS96] for some applications in optimal design.

3.3.3. Construction of Chebyshev polynomials for Stieltjes transformation

We consider the case of the Stieltjes transformation described in Section 2. In this case, Chebyshev polynomials can be precisely described. Consider the homogeneous -system on defined by

where .

Reproducing [BE95], we can construct generalized Chebyshev polynomials of the first kind. It yields

where is uniquely defined by and , and is a known analytic function in a neighborhood of the closed unit disk. Moreover this analytic function can be expressed in terms of only . We refer to [BE95] for further details.

4. The nullspace property for measures

In this section we consider any countable family of continuous functions on . In particular we do not assume that is a non-homogeneous -system. We aim at deriving a sufficient condition for the exact reconstruction of signed measures. More precisely, we are concerned with giving a property related to the nullspace property [CDD09] of compressed sensing.

Note that the solutions to program () depend only on the first elements of and on the target measure . We investigate the condition that the family must satisfy to ensure exact reconstruction. In the meantime, Cohen, Dahmen and DeVore introduced [CDD09] a relevant condition, the nullspace property. Their property relates the geometry of the nullspace of to the best -term approximation of the target given the observation . This well-known property can be stated as follows.

4.1. The nullspace property in compressed sensing

Let