The authors are with the Technion—Israel Institute of Technology, Haifa, Israel. Email: moshiko@tx.technion.ac.il, yonina@ee.technion.ac.il.

# Blind Multi-Band Signal Reconstruction: Compressed Sensing for Analog Signals

## Abstract

We address the problem of reconstructing a multi-band signal from its sub-Nyquist point-wise samples. To date, all reconstruction methods proposed for this class of signals assumed knowledge of the band locations. In this paper, we develop a non-linear blind perfect reconstruction scheme for multi-band signals which does not require the band locations. Our approach assumes an existing blind multi-coset sampling method. The sparse structure of multi-band signals in the continuous frequency domain is used to replace the continuous reconstruction with a single finite dimensional problem without the need for discretization. The resulting problem can be formulated within the framework of compressed sensing, and thus can be solved efficiently using known tractable algorithms from this emerging area. We also develop a theoretical lower bound on the average sampling rate required for blind signal reconstruction, which is twice the minimal rate of known-spectrum recovery. Our method ensures perfect reconstruction for a wide class of signals sampled at the minimal rate. Numerical experiments are presented demonstrating blind sampling and reconstruction with minimal sampling rate.


**Keywords:** Kruskal-rank, Landau-Nyquist rate, multiband, multiple measurement vectors (MMV), nonuniform periodic sampling, orthogonal matching pursuit (OMP), signal representation, sparsity.

## I Introduction


The well known Whittaker, Kotelnikov, and Shannon (WKS) theorem links analog signals with a discrete representation, allowing the transfer of signal processing to a digital framework. The theorem states that a real-valued signal bandlimited to B Hertz can be perfectly reconstructed from its uniform samples if the sampling rate is at least 2B samples per second. This minimal rate is called the Nyquist rate of the signal.

Multi-band signals are bandlimited signals that possess an additional structure in the frequency domain. The spectral support of a multi-band signal is restricted to several continuous intervals. Each of these intervals is called a band, and it is assumed that no information resides outside the bands. The design of sampling and reconstruction systems for these signals involves three major considerations. One is the sampling rate. The other is the set of multi-band signals that the system can perfectly reconstruct. The last is blindness, namely a design that does not assume knowledge of the band locations. Blindness is a desirable property, as signals with different band locations are then processed in the same way. Landau [1] derived a minimal sampling rate for an arbitrary sampling method that allows perfect reconstruction. For multi-band signals, the Landau rate is the sum of the band widths, which is below the corresponding Nyquist rate.

Uniform sampling of a real bandpass signal with a total width of B Hertz on both sides of the spectrum was studied in [2]. It was shown that only special cases of bandpass signals can be perfectly reconstructed from their uniform samples at the minimal rate of 2B samples/sec. Kohlenberg [3] suggested periodic non-uniform sampling with an average sampling rate of 2B. He also provided a reconstruction scheme that recovers any bandpass signal exactly. Lin and Vaidyanathan [4] extended his work to multi-band signals. Their method ensures perfect reconstruction from periodic non-uniform sampling with an average sampling rate equal to the Landau rate. Both of these works lack the blindness property, as the information about the band locations is used in the design of both the sampling and the reconstruction stages.

Herley and Wong [5] and Venkataramani and Bresler [8] suggested a blind multi-coset sampling strategy that is called universal in [8]. The authors of [8] also developed a detailed reconstruction scheme for this sampling strategy, which is not blind as its design requires information about the spectral support of the signal. Blind multi-coset sampling renders the reconstruction applicable to a wide set of multi-band signals but not to all of them.

Although spectrum-blind reconstruction was mentioned in two conference papers in 1996 [6],[7], a full spectrum-blind reconstruction scheme was not developed in these papers. It appears that spectrum-blind reconstruction has not been handled since then.

We begin by developing a lower bound on the minimal sampling rate required for blind perfect reconstruction with arbitrary sampling and reconstruction. As we show, the lower bound is twice the Landau rate and no more than the Nyquist rate. This result is based on recent work of Lu and Do [20] on sampling signals from a union of subspaces.

The heart of this paper is the development of a spectrum-blind reconstruction (SBR) scheme for multi-band signals. We assume a blind multi-coset sampling satisfying the minimal rate requirement. Theoretical tools are developed in order to transform the continuous nature of the reconstruction problem into a finite dimensional problem without any discretization. We then prove that the solution can be obtained by finding the unique sparsest solution matrix of a Multiple-Measurement-Vectors (MMV) system. This set of operations is grouped under a block we name Continuous to Finite (CTF). This block is the cornerstone of two SBR algorithms we develop to reconstruct the signal. One is entitled SBR4 and enables perfect reconstruction using only one instance of the CTF block, but requires twice the minimal sampling rate. The other is referred to as SBR2 and allows for sampling at the minimal rate, but involves a bisection process and several uses of the CTF block. Other differences between the algorithms are also discussed. Both SBR4 and SBR2 can easily be implemented in DSP processors or in software environments.

Our proposed reconstruction approach is applicable to a broad class of multi-band signals. This class is the blind version of the set of signals considered in [8]. In particular, we characterize a subset ℳ of this class by the maximal number of bands N and the width B of the widest band. We then show how to choose the parameters of the multi-coset stage so that perfect reconstruction is possible for every signal in ℳ. This parameter selection is also valid for known-spectrum reconstruction with half the sampling rate. The set ℳ represents a natural characterization of multi-band signals based on their intrinsic parameters, which are usually known in advance. We prove that the SBR4 algorithm ensures perfect reconstruction for all signals in ℳ. The SBR2 approach works for almost all signals in ℳ but may fail in some very special cases (which typically will not occur). As our strategy is applicable also to signals that do not lie in ℳ, our algorithms provide an indication of successful recovery. Thus, if a signal cannot be recovered, this indication prevents further processing of invalid data.

The CTF block requires finding a sparsest solution matrix which is an NP-hard problem [12]. Several sub-optimal efficient methods have been developed for this problem in the compressed sensing (CS) literature [15],[16]. In our algorithms, any of these techniques can be used. Numerical experiments on random constructions of multi-band signals show that both SBR4 and SBR2 maintain a satisfactory exact recovery rate when the average sampling rate approaches their theoretical minimum rate requirement and sub-optimal implementations of the CTF block are used. Moreover, the average runtime is shown to be fast enough for practical usage.

Our work differs from other mainstream CS papers in two aspects. The first is that we aim to recover a continuous signal, while the classical problem addressed in the CS literature is the recovery of discrete and finite vectors. An adaptation of CS results to continuous signals was also considered in a set of conference papers (see [21],[22] and the references therein). However, these papers did not address the case of multi-band signals. In [22] an underlying discrete model was assumed, so that the signal is a linear combination of a finite number of known functions. Here, there is no discrete model, as the signals are treated in a continuous framework without any discretization. The second aspect is that we assume a deterministic sampling stage, and our theorems and results do not involve any probability model. In contrast, the common approach in compressed sensing assumes random sampling operators, and typical results are valid with some probability less than 1 [13],[19],[21],[22].

The paper is organized as follows. In Section II we formulate our reconstruction problem. The minimal density theorem for blind reconstruction is stated and proved in Section III. A brief overview of multi-coset sampling is presented in Section IV. We develop our main theoretical results on spectrum-blind reconstruction and present the CTF block in Section V. Based on these results, in Section VI, we design and compare the SBR4 and the SBR2 algorithms. Numerical experiments are described in Section VII.

## II Preliminaries and Problem Formulation

### II-A Notation

Common notation, as summarized in Table I, is used throughout the paper. Exceptions to this notation are indicated in the text.

In addition, the following abbreviations are used. The ℓ_p norm of a vector v is defined as

 ∥v∥_p^p = ∑_i |v_i|^p, p ≥ 0.

The default value for p is 2, so that ∥v∥ denotes the ℓ₂ norm of v. The standard L² norm is used for continuous signals. The ith column of a matrix A is written as (A)_i, and the kth row, treated as a column vector, is written as (Aᵀ)_k.

Indicator sets for vectors and matrices are defined respectively as

 I(v) = {k | v(k) ≠ 0},  I(A) = {k | (Aᵀ)_k ≠ 0}.

The set I(v) contains the indices of the non-zero values in the vector v. The set I(A) contains the indices of the rows of A that are not identically zero.

Finally, A_S is the matrix that contains the columns of A with indices belonging to the set S. The matrix A_S is referred to as the (columns) restriction of A to S. Formally,

 (A_S)_i = (A)_{S_i}, 1 ≤ i ≤ |S|.

Similarly, the matrix whose rows are the rows of A with indices in S is referred to as the rows restriction of A to S.
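As a concrete illustration of this notation, the indicator sets and the column restriction can be sketched in a few lines of Python (the function names are ours, for illustration only):

```python
import numpy as np

def indicator_vec(v):
    # I(v): indices of the non-zero entries of the vector v
    return {k for k in range(len(v)) if v[k] != 0}

def indicator_mat(A):
    # I(A): indices of the rows of A that are not identically zero
    return {k for k in range(A.shape[0]) if np.any(A[k] != 0)}

def restrict_cols(A, S):
    # A_S: the columns of A whose indices belong to S, in increasing order
    return A[:, sorted(S)]

v = np.array([0.0, 3.0, 0.0, -1.0])
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 0.0, 0.0],
              [4.0, 0.0, 5.0]])
print(indicator_vec(v))                # {1, 3}
print(indicator_mat(A))                # {0, 2}
print(restrict_cols(A, {0, 2}).shape)  # (3, 2)
```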

### II-B Multi-band signals

In this work our prime focus is on the set ℳ of all complex-valued multi-band signals bandlimited to ℱ = [0, 1/T) with no more than N bands, where each band width is upper bounded by B. Fig. 1 depicts a typical spectral support for a signal in ℳ.

The Nyquist rate corresponding to any x(t) ∈ ℳ is 1/T. The Fourier transform of a multi-band signal has support on a finite union of disjoint intervals in ℱ. Each interval is called a band and is uniquely represented by its edges. Without loss of generality it is assumed that the bands are not overlapping.

Although our interest is mainly in signals x(t) ∈ ℳ, our results are applicable to a broader class of signals, as explained in the relevant sections. In addition, the results of the paper are easily adapted to real-valued signals. The required modifications are explained in Appendix A and are based on the equations derived in Section IV-A.

### II-C Problem formulation

We wish to perfectly reconstruct x(t) ∈ ℳ from its point-wise samples under two constraints. One is blindness, so that the information about the band locations is not used while acquiring the samples, nor can it be used in the reconstruction process. The other is that the sampling rate required to guarantee perfect reconstruction should be minimal.

This problem is solved if either of its constraints is removed. Without the rate constraint, the WKS theorem allows perfect blind reconstruction of every signal bandlimited to ℱ from its uniform samples at the Nyquist rate 1/T. Alternatively, if the exact number of bands and their locations are known, then the method of [4] allows perfect reconstruction of every multi-band signal at the minimal sampling rate provided by Landau’s theorem [1].

In this paper, we first develop the minimal sampling rate required for blind reconstruction. We then use a multi-coset sampling strategy to acquire the samples at an average sampling rate satisfying the minimal requirement. The design of this sampling method does not require knowledge of the band locations. We provide a spectrum-blind reconstruction scheme for this sampling strategy in the form of two different algorithms, named SBR4 and SBR2. It is shown that if the sampling rate is twice the minimal rate then algorithm SBR4 guarantees perfect reconstruction for every x(t) ∈ ℳ. The SBR2 algorithm requires only the minimal sampling rate and guarantees perfect reconstruction for most signals in ℳ. However, some special signals from ℳ, discussed in Section VI-B, cannot be perfectly reconstructed by this approach. Excluding these special cases, our proposed method satisfies both constraints of the problem formulation.

## III Minimal sampling rate

We begin by quoting Landau’s theorem for the minimal sampling rate of an arbitrary sampling method that allows known-spectrum perfect reconstruction. It is then proved that blind perfect-reconstruction requires a minimal sampling rate that is twice the Landau rate.

### III-A Known spectrum support

Consider the space B_T of bandlimited functions restricted to a known support T ⊆ ℱ:

 B_T = {x(t) ∈ L²(ℝ) | supp X(f) ⊆ T}. (1)

A classical sampling scheme takes the values of x(t) on a known countable set of locations R. The set R is called a sampling set for B_T if x(t) can be perfectly reconstructed in a stable way from the sequence of samples x_R = {x(t)}_{t∈R}. The stability constraint requires the existence of constants α > 0 and β < ∞ such that:

 α∥x−y∥2≤∥xR−yR∥2≤β∥x−y∥2,∀x,y∈BT. (2)

Landau [1] proved that if R is a sampling set for B_T then it must have a density D⁻(R) ≥ λ(T), where

 D⁻(R) = lim_{r→∞} inf_{y∈ℝ} |R ∩ [y, y+r]| / r (3)

is the lower Beurling density, and λ(T) is the Lebesgue measure of T. The numerator in (3) counts the number of points from R in every interval of width r on the real axis. This result is usually interpreted as a minimal average sampling rate requirement for B_T, and λ(T) is called the Landau rate.
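For a periodic sampling set, the infimum in (3) is approached by the average number of points per unit length. A small numerical check with hypothetical parameters (L = 5, two samples kept per block, T = 1) agrees with the average-rate value used later for multi-coset sets:

```python
import numpy as np

# Multi-coset sampling set R = {(m*L + c)*T : m integer, c in C}, with
# hypothetical parameters L = 5, C = {0, 2}, T = 1
L, C, T = 5, [0, 2], 1.0
R = np.sort(np.array([(m * L + c) * T for m in range(2000) for c in C]))

def lower_beurling_density(R, r):
    # Approximate (3): infimum over window starts y of |R ∩ [y, y+r]| / r
    starts = np.arange(0.0, R[-1] - r, r / 10)
    counts = [np.count_nonzero((R >= y) & (R <= y + r)) for y in starts]
    return min(counts) / r

# For this periodic set the density approaches p/(L*T) = 2/5 as r grows
print(lower_beurling_density(R, 500.0))
```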

### III-B Unknown spectrum support

Consider the set N_Ω of signals bandlimited to ℱ with bandwidth occupation no more than Ω ∈ [0, 1], so that

 λ(supp X(f)) ≤ Ω/T, ∀x(t) ∈ N_Ω.

The Nyquist rate for N_Ω is 1/T. Note that N_Ω is not a subspace, so the Landau theorem is not valid here. Nevertheless, it is intuitive to argue that the minimal sampling rate for N_Ω cannot be below Ω/T, as this value would be the Landau rate had the spectrum support been known.

A blind sampling set R for N_Ω is a sampling set whose design does not assume knowledge of supp X(f). Similarly to (2), the stability of R requires the existence of α > 0 and β < ∞ such that:

 α∥x−y∥2≤∥xR−yR∥2≤β∥x−y∥2,∀x,y∈NΩ. (4)
###### Theorem 1 (Minimal sampling rate)

Let R be a blind sampling set for N_Ω. Then,

 D⁻(R) ≥ min{2Ω/T, 1/T}. (5)
*Proof:*

The set N_Ω is of the form

 N_Ω = ⋃_{T∈Γ} B_T, (6)

where

 Γ = {T | T ⊆ ℱ, λ(T) ≤ Ω/T}. (7)

Clearly, N_Ω is a non-countable union of subspaces. Sampling signals that lie in a union of subspaces has been recently treated in [20]. For every γ, θ ∈ Γ define the subspaces

 Bγ,θ=Bγ+Bθ={x+y|x∈Bγ,y∈Bθ}. (8)

Since R is a sampling set for N_Ω, (4) holds for some constants α, β. It was proved in [20, Proposition 2] that (4) is valid if and only if

 α∥x−y∥2≤∥xR−yR∥2≤β∥x−y∥2,∀x,y∈Bγ,θ (9)

holds for every γ, θ ∈ Γ. In particular, R is a sampling set for every B_{γ,θ} with γ, θ ∈ Γ.

Observe that the space B_{γ,θ} is of the form (1) with T = γ ∪ θ. Applying Landau’s density theorem for each B_{γ,θ} results in

 D⁻(R) ≥ λ(γ ∪ θ), ∀γ, θ ∈ Γ. (10)

Choosing

 γ = [0, Ω/T], θ = [(1−Ω)/T, 1/T],

we have that for Ω ≤ 1/2,

 D⁻(R) ≥ λ(γ ∪ θ) = λ(γ) + λ(θ) = 2Ω/T. (11)

If Ω > 1/2 then γ ∪ θ = [0, 1/T] and

 D⁻(R) ≥ λ(γ ∪ θ) = 1/T. (12)

Combining (11) and (12) completes the proof.

In [20], the authors consider minimal sampling requirements for a union of shift-invariant subspaces, with a particular structure of sampling functions. Specifically, they view the samples as inner products with shifted versions of a set of generating functions, a structure which includes multi-coset sampling. Theorem 1 extends this result to an arbitrary point-wise sampling operator. In particular, it is valid for non-periodic sampling sets that are not covered by [20].

An immediate corollary of Theorem 1 is that if Ω ≥ 1/2 then uniform sampling at the Nyquist rate 1/T with an ideal low-pass filter satisfies the requirements of our problem formulation. Namely, both the sampling and the reconstruction do not use the information about the band locations, and the sampling rate is minimal according to Theorem 1. As N_Ω is contained in the space of signals bandlimited to ℱ, this choice also provides perfect reconstruction for every x(t) ∈ N_Ω. Therefore, in the sequel we assume that Ω < 1/2, so that the minimal sampling rate of Theorem 1 is exactly twice the Landau rate.

It is easy to see that ℳ ⊆ N_Ω for Ω = NBT. Therefore, for known spectral support, the Landau rate is NB. Despite the fact that ℳ is a true subset of N_Ω, the proof of Theorem 1 can be adapted to show that a minimal density of 2NB is required so that stable perfect reconstruction is possible for signals from ℳ.

We point out that both Landau’s theorem and Theorem 1 state a lower bound but do not provide a method to achieve it. The rest of the paper is devoted to developing a reconstruction method that approaches the minimal sampling rate of Theorem 1.

## IV Universal Sampling

This section reviews multi-coset sampling which is used in our development. We also briefly explain the fundamentals of known-spectrum reconstruction as derived in [8].

### IV-A Multi-coset sampling

Uniform sampling of x(t) at the Nyquist rate 1/T results in samples x(nT) that contain all the information about x(t). Multi-coset sampling is a selection of certain samples from this grid. The uniform grid is divided into blocks of L consecutive samples. A constant set C of length p describes the indices of the samples that are kept in each block, while the rest are zeroed out. The set C = {c_i} is referred to as the sampling pattern, where

 0 ≤ c_1 < c_2 < ⋯ < c_p ≤ L − 1. (13)

Define the ith sampling sequence for 1 ≤ i ≤ p as

 x_{c_i}[n] = x(nT), n = mL + c_i for some m ∈ ℤ,
 x_{c_i}[n] = 0, otherwise. (14)

The sampling stage is implemented by p uniform sampling sequences with period LT, where the ith sampling sequence is shifted by c_i T from the origin. Therefore, a multi-coset system is uniquely characterized by the parameters L, p, T and the sampling pattern C.
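The zero-filled coset sequences of (14) are straightforward to form from the Nyquist-grid samples; a Python sketch with hypothetical parameter values:

```python
import numpy as np

def coset_sequences(x_nyquist, L, pattern):
    # Build the sequences x_{c_i}[n] of (14): keep the samples at
    # n = m*L + c_i and zero out the rest of the Nyquist grid
    seqs = np.zeros((len(pattern), len(x_nyquist)), dtype=x_nyquist.dtype)
    for i, c in enumerate(pattern):
        seqs[i, c::L] = x_nyquist[c::L]
    return seqs

x = np.arange(12, dtype=float)   # stand-in for x(nT) on the Nyquist grid
seqs = coset_sequences(x, L=4, pattern=[0, 2])
print(seqs[0])   # samples kept at n = 0, 4, 8
print(seqs[1])   # samples kept at n = 2, 6, 10
```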

Direct calculations show that [8]

 X_{c_i}(e^{j2πfT}) = (1/LT) ∑_{r=0}^{L−1} exp(j(2π/L) c_i r) X(f + r/LT), (15)
 ∀f ∈ ℱ₀ = [0, 1/LT), 1 ≤ i ≤ p,

where X_{c_i}(e^{j2πfT}) is the discrete-time Fourier transform (DTFT) of x_{c_i}[n]. Thus, the goal is to choose parameters L, p, C such that X(f) can be recovered from (15).

For our purposes it is convenient to express (15) in a matrix form as

 y(f)=Ax(f),∀f∈F0, (16)

where y(f) is a vector of length p whose ith element is X_{c_i}(e^{j2πfT}), and the vector x(f) contains L unknowns for each f ∈ ℱ₀:

 x_i(f) = X(f + i/LT), 0 ≤ i ≤ L−1, f ∈ ℱ₀. (17)

The matrix A depends on the parameters L, p, T and the set C, but not on the signal, and is defined by

 A_{ik} = (1/LT) exp(j(2π/L) c_i k). (18)
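Since A depends only on the multi-coset parameters, it can be precomputed once; a minimal sketch of (18) with hypothetical parameter values:

```python
import numpy as np

def multicoset_matrix(L, pattern, T=1.0):
    # A_{ik} = (1/(L*T)) * exp(j*2*pi*c_i*k/L), size p x L, as in (18)
    c = np.asarray(pattern).reshape(-1, 1)   # p x 1 column of pattern indices
    k = np.arange(L).reshape(1, -1)          # 1 x L row of spectral-slice indices
    return np.exp(2j * np.pi * c * k / L) / (L * T)

# Hypothetical parameters: L = 7, pattern of p = 4 kept samples per block
A = multicoset_matrix(L=7, pattern=[0, 1, 3, 5])
print(A.shape)   # (4, 7)
# The average sampling rate (19) is then p/(L*T) = 4/7 of the Nyquist rate 1/T
```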

Dealing with real-valued multi-band signals requires simple modifications to (16). These adjustments are detailed in Appendix A.

The Beurling lower density (i.e. the average sampling rate) of a multi-coset sampling set is

 1/T_AVG = p/(LT), (19)

which is lower than the Nyquist rate 1/T whenever p < L. However, an average sampling rate above the Landau rate is not sufficient for known-spectrum reconstruction. Additional conditions are needed, as explained in the next section.

### IV-B Known-spectrum reconstruction and universality

The presentation of the reconstruction is simplified using CS sparsity notation. A vector v is called K-sparse if the number of non-zero values in v is no greater than K. Using the ℓ₀ pseudo-norm, the sparsity of v is expressed as ∥v∥₀ ≤ K. We use the following definition of the Kruskal-rank of a matrix [14]:

###### Definition 1

The Kruskal-rank of A, denoted σ(A), is the maximal number q such that every set of q columns of A is linearly independent.
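Definition 1 can be checked by brute force for small matrices; the following sketch (our own helper, exponential in the worst case) makes the definition concrete:

```python
import numpy as np
from itertools import combinations

def kruskal_rank(A, tol=1e-10):
    # sigma(A): largest q such that EVERY set of q columns is independent
    p, L = A.shape
    for q in range(1, min(p, L) + 1):
        for cols in combinations(range(L), q):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < q:
                return q - 1
    return min(p, L)

# A Vandermonde matrix with distinct positive nodes has full Kruskal-rank
V = np.vander(np.array([1.0, 2.0, 3.0]), N=5, increasing=True)  # 3 x 5
print(kruskal_rank(V))   # 3

# Duplicating a column drops the Kruskal-rank to 1
B = np.column_stack([V, V[:, 0]])
print(kruskal_rank(B))   # 1
```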

Observe that for every f ∈ ℱ₀ the system of (16) has fewer equations (p) than unknowns (L). Therefore, a prior on x(f) must be used to allow for recovery. In [8] it is assumed that the information about the band locations is available in the reconstruction stage. This information supplies the set I(x(f)) for every f ∈ ℱ₀. Without any additional prior, the following condition is necessary for known-spectrum perfect reconstruction:

 x(f) is p-sparse, ∀f ∈ ℱ₀. (20)

Using the Kruskal-rank of A, a sufficient condition is formulated as

 x(f) is σ(A)-sparse, ∀f ∈ ℱ₀. (21)

The known-spectrum reconstruction of [8] basically restricts the columns of A to I(x(f)) and inverts the resulting matrix in order to recover x(f).

A sampling pattern that yields a matrix A with full Kruskal-rank is called universal and corresponds to σ(A) = p. Therefore, the set of signals that are consistent with (21) is the broadest possible if a universal sampling pattern is used. As we show later, choosing L ≤ 1/BT, p ≥ 2N, and a universal pattern makes (21) valid for every signal x(t) ∈ ℳ.

Finding a universal pattern C, namely one that results in a matrix A with full Kruskal-rank, is a combinatorial process. Several specific constructions of sampling patterns that are proved to be universal are given in [8],[10]. In particular, choosing L to be prime renders every pattern C universal [10].

To summarize, choosing a universal pattern allows recovery of any x(t) satisfying (20) when the band locations are known in the reconstruction. We next consider blind signal recovery using universal sampling patterns.

## V Spectrum-Blind Reconstruction

In this section we develop the theory needed for SBR. These results are then used in the next section to construct two efficient algorithms for blind signal reconstruction.

The theoretical results are developed in the following steps. We first note that when considering blind reconstruction, we cannot use the prior of [8]. In Section V-A we present a different prior that does not assume knowledge of the band locations. Using this prior we develop a sufficient condition for blind perfect reconstruction which is very similar to (21). Furthermore, we prove that under certain conditions on L, p, C, perfect reconstruction is possible for every signal in ℳ. We then present the basic SBR paradigm in Section V-B. The main result of the paper is transforming the continuous system of (16) into a finite dimensional problem without using discretization. In Section V-C we develop two propositions for this purpose, and present the CTF block.

### V-A Conditions for blind perfect reconstruction

Recall that for every f ∈ ℱ₀ the system of (16) is underdetermined, since there are fewer equations than unknowns. The prior assumed in this paper is that for every f ∈ ℱ₀ the vector x(f) is sparse, but in contrast to [8] the locations of the non-zero values are unknown. Clearly, in this case (20) is still necessary for blind perfect reconstruction. The following theorem from the CS literature is used to provide a sufficient condition.

###### Theorem 2

Suppose x is a solution of y = Ax. If ∥x∥₀ ≤ σ(A)/2 then x is the unique sparsest solution of the system.

Theorem 2 and its proof are given in [11],[15] with the slightly different notation of spark(A) instead of the Kruskal-rank σ(A). Note that the condition of the theorem is not necessary, as there are examples in which the sparsest solution of y = Ax is unique while ∥x∥₀ > σ(A)/2.

Using Theorem 2, it is evident that perfect reconstruction is possible for every signal satisfying

 x(f) is σ(A)/2-sparse, ∀f ∈ ℱ₀. (22)

As before, choosing a universal pattern makes the set of signals that conform with (22) the widest possible. Note that a factor of two distinguishes between the sufficient conditions (21) and (22), resulting from the fact that here we do not know the locations of the non-zero values in x(f).

Note that (22) provides a condition under which perfect reconstruction is possible; however, it is still unclear how to find the original signal. Although the problem is similar to that described in the CS literature, here the unique sparse vector must be found for each f in the continuous interval ℱ₀, which clearly cannot be implemented directly.

In practice, conditions (21) and (22) are hard to verify, since they require knowledge of x(f) and depend on the parameters of the multi-coset sampling. We therefore prefer to develop conditions on the class ℳ, which characterizes multi-band signals based on their intrinsic properties: the number of bands and their widths. It is more likely to know the values of N and B in advance than to know whether the signals to be sampled satisfy (21) or (22). The following theorem describes how to choose the parameters L, p and C so that the sufficient conditions for perfect reconstruction hold true for every x(t) ∈ ℳ, namely x(f) is a unique solution of (16). The theorem is valid for both known and blind reconstruction, with a slight difference resulting from the factor of two in the sufficient conditions.

###### Theorem 3 (Uniqueness)

Let x(t) ∈ ℳ be a multi-band signal. If:

1. The value of L is limited by

 L ≤ 1/BT, (23)

2. p ≥ 2N for known reconstruction or p ≥ 4N for blind,

3. C is a universal pattern,

then, for every f ∈ ℱ₀, the vector x(f) is the unique solution of (16).

*Proof:*

If L is limited by (23) then for the ith band T_i we have

 λ(T_i) ≤ B ≤ 1/LT, 1 ≤ i ≤ N.

Therefore, f′ ∈ T_i implies

 f′ + k/LT ∉ T_i, ∀k ≠ 0.

According to (17), for every f ∈ ℱ₀ the vector x(f) takes the values of X(f) on a set of L points spaced by 1/LT. Consequently, the number of non-zero values in x(f) is no greater than the number of bands, namely x(f) is N-sparse.

Since C is a universal pattern, σ(A) = p. Together with the condition on p, this implies that conditions (21) and (22) are satisfied.

Note that the condition on the value of L implies the minimal sampling rate requirement. To see this, substitute (23) into (19):

 1/T_AVG = p/(LT) ≥ pB. (24)

As pointed out at the end of Section III-B, if the signals are known to lie in ℳ then the Landau rate is NB, which is implied by p ≥ N. Theorem 1 requires an average sampling rate of 2NB, which can be guaranteed if p ≥ 2N.

### V-B Reconstruction paradigm

The goal of our reconstruction scheme is to recover the signal x(t) from the set of sequences x_{c_i}[n], 1 ≤ i ≤ p. Equivalently, the aim is to reconstruct x(f) of (16) for every f ∈ ℱ₀ from the input data y(f).

A straightforward approach is to find the sparsest solution of (16) on a dense grid of f ∈ ℱ₀. However, this discretization strategy cannot guarantee perfect reconstruction. In contrast, our approach is exact and does not require discretization.

Our reconstruction paradigm is targeted at finding the diversity set S, which depends on x(t) and is defined as

 S=⋃f∈F0I(x(f)). (25)

The SBR algorithms we develop in Section VI are aimed at recovering the set S. With the knowledge of S, perfect reconstruction of x(f) is possible for every f ∈ ℱ₀ by noting that (16) can be written as

 y(f)=ASxS(f). (26)

If the diversity set S of x(t) satisfies

 |S|≤σ(A), (27)

then

 (A_S)† A_S = I, (28)

where I is the identity matrix of size |S| × |S|. Multiplying both sides of (26) by (A_S)† results in:

 xS(f)=(AS)†y(f),∀f∈F0. (29)

From (25),

 xi(f)=0,∀f∈F0,i∉S. (30)

Thus, once S is known, and as long as (27) holds, perfect reconstruction can be obtained by (29)-(30).
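A small numerical sketch of the recovery steps (26)-(30), with a synthetic diversity set S and L prime so that every pattern is universal (the parameter values are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
L, pattern = 11, [0, 1, 2, 4, 7, 8]      # hypothetical; L prime => universal
A = np.exp(2j * np.pi * np.outer(pattern, np.arange(L)) / L) / L

S = [1, 5, 9]                            # diversity set, |S| <= sigma(A) = p
x = np.zeros(L, dtype=complex)
x[S] = rng.standard_normal(3) + 1j * rng.standard_normal(3)

y = A @ x                                # (16) at one fixed f
x_rec = np.zeros(L, dtype=complex)
x_rec[S] = np.linalg.pinv(A[:, S]) @ y   # (29): pseudo-invert the restriction
print(np.allclose(x_rec, x))             # True; entries off S stay zero, (30)
```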

As we shall see later on, (27) is implied by the condition required to transform the problem into a finite dimensional one. Furthermore, the following proposition shows that for x(t) ∈ ℳ, (27) is implied by the parameter selection of Theorem 3.

###### Proposition 1

If L is limited by (23) then |S| ≤ 2N. If in addition p ≥ 2N and C is universal, then for every x(t) ∈ ℳ the set S satisfies (27).

*Proof:*

The bands are continuous intervals whose widths are upper bounded by B. From (17) it follows that x(f) is constructed by dividing ℱ into L equal intervals of length 1/LT. Therefore, if L is limited by (23), then each band can either be fully contained in one of these intervals or be split between two consecutive intervals. Since the number of bands is no more than N, it follows that |S| ≤ 2N. With the additional conditions we have that |S| ≤ 2N ≤ p = σ(A).

As we described, our general strategy is to determine the diversity set S and then recover x(f) via (29)-(30). In the non-blind setting, S is known, and therefore if it satisfies (27) then the same equations can be used to recover x(f). However, note that when the band locations are known, we may use a smaller value of p, since the sampling rate can be reduced. Therefore, (27) may not hold. Nonetheless, it is shown in [8] that the frequency axis can be divided into intervals such that this approach can be used over each frequency interval. Therefore, once the set S is recovered, there is no essential difference between known and blind reconstruction.

### V-C Formulation of a finite dimensional problem

The set of equations (16) consists of an infinite number of linear systems, because of the continuous variable f. Furthermore, the expression for the diversity set S given in (25) involves a union over the same continuous variable. The main result of this paper is that S can be recovered exactly by solving only one finite dimensional problem. In this section we develop the underlying theoretical results that are used for this purpose.

Consider a given interval T ⊆ ℱ₀. Multiplying each side of (16) by its conjugate transpose we have

 y(f) yᴴ(f) = A x(f) xᴴ(f) Aᴴ. (31)

Integrating both sides over f ∈ T gives

 Q=AZ0AH, (32)

with the matrix

 Q=∫f∈Ty(f)yH(f)df⪰0, (33)

and the matrix

 Z0=∫f∈Tx(f)xH(f)df⪰0. (34)

Define the diversity set of the interval T as

 ST=⋃f∈TI(x(f)). (35)

Now,

 (Z₀)_{ii} = ∫_{f∈T} |x_i(f)|² df.

This means that (Z₀)_{ii} = 0 if and only if x_i(f) = 0 for every f ∈ T, which implies that I(Z₀) = S_T.
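The identity I(Z₀) = S_T can be verified numerically by approximating the integral in (34) with a Riemann sum over a grid of f (all signals here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
L, grid = 8, 200
S_T = {2, 5}                          # true diversity set of the interval T
X = np.zeros((L, grid), dtype=complex)
for i in S_T:                         # synthetic x(f) on a grid of f in T
    X[i] = rng.standard_normal(grid) + 1j * rng.standard_normal(grid)

# Z0 = integral over T of x(f) x^H(f) df, approximated by a Riemann sum
Z0 = (X @ X.conj().T) / grid
nonzero_rows = {k for k in range(L) if np.linalg.norm(Z0[k]) > 1e-12}
print(nonzero_rows == S_T)            # True
```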

The next proposition is used to determine whether S_T can be found by solving a finite dimensional problem. The proposition is stated for general matrices A, Q.

###### Proposition 2

Suppose A of size p × L and Q of size p × p are given matrices. Let Z be any L × L matrix satisfying

 Q = A Z Aᴴ, (36a)
 Z ⪰ 0, (36b)
 |I(Z)| ≤ σ(A). (36c)

Then, rank(Z) = rank(Q). If, in addition,

 |I(Z)| ≤ σ(A)/2, (36d)

then Z is the unique solution of (36a)-(36d).

*Proof:*

Let Z satisfy (36a)-(36c), and define r = rank(Z). Since Z ⪰ 0, it can be decomposed as Z = P Pᴴ with P of size L × r having orthogonal columns. From (36a),

 Q = (AP)(AP)ᴴ. (37)

It can easily be concluded that I(P) = I(Z), and thus |I(P)| ≤ σ(A). The following lemma, whose proof is given in Appendix B, ensures that the matrix AP of size p × r also has full column rank.

###### Lemma 1

For every two matrices A and P, if |I(P)| ≤ σ(A) then rank(AP) = rank(P).

Since for every matrix B it is true that rank(B Bᴴ) = rank(B), (37) implies rank(Q) = rank(AP) = r = rank(Z).

For the second part of Proposition 2, suppose that both Z and Z̃ satisfy (36a),(36b),(36d). From the first part,

 rank(Z) = rank(Z̃) = r_Q,

where r_Q = rank(Q). Following the earlier decompositions we write

 Z = P Pᴴ, I(Z) = I(P),
 Z̃ = P̃ P̃ᴴ, I(Z̃) = I(P̃), (38)

so that (36d) gives

 |I(P)| ≤ σ(A)/2, |I(P̃)| ≤ σ(A)/2. (39)

From (36a),

 Q=(AP)(AP)H=(A~P)(A~P)H, (40)

which implies that

 A(P−~PR)=0, (41)

for some unitary matrix R. It is easy to see that (39) results in |I(P − P̃R)| ≤ σ(A). Therefore, the matrix P − P̃R has at most σ(A) rows that are not identically zero. Applying Lemma 1 to (41) results in P = P̃R. Substituting this into (38), we have that Z = P Pᴴ = P̃ R Rᴴ P̃ᴴ = P̃ P̃ᴴ = Z̃.

The following proposition shows how to recover the set I(Z₀) by finding the sparsest solution matrix of a linear system.

###### Proposition 3

Consider the setting of Proposition 2 and assume Z₀ satisfies (36d). Let r_Q = rank(Q) and define a matrix V of size p × r_Q using the decomposition Q = V Vᴴ, such that V has orthogonal columns. Then the linear system

 V = AU (42)

has a unique sparsest solution matrix U₀. Namely, V = A U₀ and |I(U₀)| is minimal. Moreover, I(U₀) = I(Z₀).

*Proof:*

Substitute the decomposition Z₀ = P₀ P₀ᴴ into (36a). The result is Q = (A P₀)(A P₀)ᴴ, so that V = A P₀ R for some unitary R. Therefore, the linear system (42) has a solution Ū = P₀ R. It is easy to see that I(Ū) = I(P₀), and thus (36d) results in |I(Ū)| ≤ σ(A)/2. Applying Theorem 2 to each of the columns of V provides the uniqueness of U₀ = Ū. It is trivial that I(U₀) = I(P₀) = I(Z₀).

Using the same arguments as in the proof, it is easy to conclude that I(U₀) = S_T, so that S_T can be found directly from the solution matrix U₀. In particular, we develop the Continuous to Finite (CTF) block, which determines the diversity set S_T of a given frequency interval T. Fig. 2 presents the CTF block, which contains the flow of transforming the continuous linear system of (16) on the interval T into the finite dimensional problem of (42), and then recovering S_T. The role of Propositions 2 and 3 is also illustrated. The CTF block is the heart of the SBR scheme, which we discuss next.
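A toy end-to-end run of the CTF chain: y(f) is simulated on a grid, Q is accumulated as in (33), decomposed as Q = VVᴴ, and the MMV system (42) is solved by brute force over supports (feasible only because this example is tiny; the sketch and its parameter values are ours):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
L, pattern, grid = 7, [0, 1, 3, 5], 300      # L prime => pattern universal
A = np.exp(2j * np.pi * np.outer(pattern, np.arange(L)) / L) / L

S_true = [2, 6]                              # |S| = 2 <= sigma(A)/2 = p/2
X = np.zeros((L, grid), dtype=complex)
X[S_true] = rng.standard_normal((2, grid)) + 1j * rng.standard_normal((2, grid))
Y = A @ X                                    # y(f) on a grid of F0

# Q = integral over T of y(f) y^H(f) df, approximated by a Riemann sum (33)
Q = (Y @ Y.conj().T) / grid
w, E = np.linalg.eigh(Q)
keep = w > 1e-10 * w.max()
V = E[:, keep] * np.sqrt(w[keep])            # Q = V V^H, orthogonal columns

def sparsest_rows(A, V, k_max):
    # Brute-force MMV solver for (42): smallest row support S with V = A_S U_S
    for q in range(1, k_max + 1):
        for S in combinations(range(A.shape[1]), q):
            cols = list(S)
            U_S, *_ = np.linalg.lstsq(A[:, cols], V, rcond=None)
            if np.allclose(A[:, cols] @ U_S, V, atol=1e-8):
                return cols
    return None

print(sparsest_rows(A, V, len(pattern)))     # [2, 6] = S_true
```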

In the CS literature, the linear system (42) is referred to as an MMV system. Theoretical results regarding the sparsest solution matrix of an MMV system are given in [15]. Finding this solution matrix is known to be NP-hard [12]. Several sub-optimal efficient algorithms for finding U₀ are given in [16]. Some of them can indicate a successful recovery of U₀. We explain which class of algorithms has this property in Section VI-A.
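For intuition, here is a minimal simultaneous-OMP sketch, one greedy representative of the sub-optimal MMV solvers discussed in [16] (this toy implementation is ours, not the paper's):

```python
import numpy as np

def somp(A, V, k_max):
    # Simultaneous OMP: greedily pick the column of A most correlated with
    # the residual across ALL columns of V, then re-fit by least squares.
    support, R = [], V.astype(float)
    for _ in range(k_max):
        corr = np.linalg.norm(A.T @ R, axis=1)
        support.append(int(np.argmax(corr)))
        U_S, *_ = np.linalg.lstsq(A[:, support], V, rcond=None)
        R = V - A[:, support] @ U_S
    return sorted(support)

s3 = 1 / np.sqrt(3)
A = np.array([[1.0, 0.0, 0.0, s3],
              [0.0, 1.0, 0.0, s3],
              [0.0, 0.0, 1.0, s3]])
V = np.array([[2.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])       # = A @ U for a U with row support {0, 1}
print(somp(A, V, 2))             # [0, 1]
```

Like OMP in the single-vector case, this greedy search is not guaranteed to find the sparsest solution, which is why a success indication is valuable.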

## Vi SBR algorithms

The theoretical results developed in the previous section are now used to construct the diversity set S, which enables the recovery of the signal via (29)-(30).

We begin by defining a class of signals A_K. The SBR4 algorithm is then presented and is proved to guarantee perfect reconstruction for signals in A_K. We then show that in order to ensure that every multi-band signal lies in A_K, the sampling rate must be at least 4NB, which is twice the minimal rate stated in Theorem 1. To improve on this result, we define a class of signals B_K, and introduce a conceptual method to perfectly reconstruct this class. The SBR2 algorithm is developed so that it ensures exact recovery for a subset of B_K. We then prove that the multi-band class is contained in this subset even for sampling at the minimal rate. However, the computational complexity of SBR2 is higher than that of SBR4. Since universal patterns lead to the largest sets A_K and B_K, we assume throughout this section that universal patterns are used, which results in σ(A) = p.

### Vi-a The SBR4 algorithm

Define the class of signals

 A_K = {x(t) : supp X(f) ⊆ F and |S| ≤ K}, (43)

with S given by (25). Let K ≤ σ(A)/2, and observe that a multi-coset system with p ≥ 2K ensures that all the conditions of Proposition 2 are valid for every x(t) ∈ A_K. Thus, applying the CTF block on [0, 1/LT) results in a unique sparsest solution ~U, with I(~U) = S. The reconstruction of the signal is then carried out by (29)-(30). We note that (27) is valid as it represents a class that contains A_K.

Algorithm 1, named SBR4, follows the steps of the CTF block in Fig. 2 to recover the diversity set S from the sample sequences, for any x(t) ∈ A_K. The algorithm also outputs an indication flag which we discuss later on.
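The CTF flow underlying SBR4 can be sketched as follows, under the assumption that the matrix Q has already been computed from the sample sequences (the time-domain computation is in Appendix C) and that some MMV solver is available. Here `solve_mmv` is a stand-in for any such solver returning the sparsest solution and an indication flag; all names are illustrative.

```python
import numpy as np

def sbr4(Q, A, solve_mmv):
    """Sketch of the SBR4 / CTF flow. Q is the p x p matrix computed from
    the sample sequences, A is the p x L sampling matrix, and solve_mmv(A, V)
    returns (U, flag) for the finite MMV system V = A U."""
    # decompose Q = V V^H with V having orthogonal columns,
    # via an eigendecomposition of the (PSD) matrix Q
    w, E = np.linalg.eigh(Q)
    keep = w > 1e-10 * max(w.max(), 1.0)   # drop numerically zero eigenvalues
    V = E[:, keep] * np.sqrt(w[keep])
    # solve the finite-dimensional MMV system V = A U for the sparsest U
    U, flag = solve_mmv(A, V)
    # the diversity set S is the row support of the solution
    S = [i for i in range(A.shape[1]) if np.linalg.norm(U[i]) > 1e-10]
    return S, flag
```

Once S is known, the signal itself is recovered by the linear reconstruction of (29)-(30); the nonlinear part of the method is entirely contained in the support-recovery step above.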

The SBR4 algorithm guarantees perfect reconstruction of signals in A_K from samples at twice the Landau rate, which is also the lower bound stated in Theorem 1. To see this, observe that (25) implies that every x(t) ∈ A_K must satisfy

 λ(supp X(f)) ≤ K/LT. (44)

Although A_K is not a subspace, we use (44) to say that the Landau rate for A_K is K/LT, as it contains subspaces whose widest support is K/LT. As we proved, p ≥ 2K ensures perfect reconstruction for A_K. Substituting the smallest possible value p = 2K into (19) results in an average sampling rate of 2K/LT.

It is easy to see that flag is equal to 1 for every signal in A_K. However, when a sub-optimal algorithm is used to solve the MMV in step 1 we cannot guarantee a correct solution ~U. Thus, flag = 0 indicates that the particular MMV method we used failed, and we may try a different MMV approach.

Existing algorithms for MMV systems can be classified into two groups. The first group contains algorithms that seek the sparsest solution matrix, e.g. Basis Pursuit [17] or Matching Pursuit [18] with a termination criterion based on the residual. The second group contains methods that approximate a sparse solution according to a user specification, e.g. Matching Pursuit with a predetermined number of iterations. Using a technique from the latter group neutralizes the indication flag, as the approximation is always sparse. Therefore, this set of algorithms should be avoided if an indication is desired.

An important advantage of the SBR4 algorithm is that the matrix Q can be computed in the time domain from the known sample sequences. The computation involves a set of digital filters that do not depend on the signal and thus can be designed in advance. The exact details are given in Appendix C.

The drawback of the set A_K is that typically we do not know the value of K. Moreover, even if K is known, we usually do not know in advance whether x(t) ∈ A_K, as A_K does not characterize signals according to the number of bands and their widths. Therefore, we would like to determine conditions that ensure that every multi-band signal with N bands of individual width at most B lies in A_K. Proposition 1 shows that such a signal satisfies |S| ≤ 2N if L ≤ 1/(BT). Thus, under this condition on L the multi-band class is contained in A_{2N}, which in turn implies 4N as a minimal value for p. Consequently, SBR4 guarantees perfect reconstruction for the multi-band class under the restrictions L ≤ 1/(BT) and p ≥ 4N. However, the Landau rate for this class is NB, while p ≥ 4N implies a minimal sampling rate of 4NB. Indeed, substituting p = 4N and L = 1/(BT) into (19) we have

 p/(LT) ≥ 4N/(LT) = 4NB. (45)

In contrast, it follows from Theorem 3 that p ≥ 2N is sufficient for uniqueness of the solution. The reason for the factor of two in the sampling rate is that S(f) is N-sparse for each specific f; however, when combining the frequencies, the maximal size of S is 2N. The SBR2 algorithm, developed in the next section, capitalizes on this difference to regain the factor of two in the sampling rate, and thus achieves the minimal rate, at the expense of a more complicated reconstruction method.

### Vi-B The SBR2 algorithm

We now would like to reduce the sampling rate required for multi-band signals to its minimum, i.e. twice the Landau rate. To this end, we introduce a set B_K for which SBR2 guarantees perfect reconstruction, and then prove that the multi-band class is contained in B_N if the parameters are chosen properly.

Consider a partition of [0, 1/LT) into M consecutive intervals defined by

 0 = d̄_1 < d̄_2 < ⋯ < d̄_{M+1} = 1/LT.

For a given partition set D̄ we define the set of signals

 B_{K,D̄} = {x(t) : supp X(f) ⊆ F and |S[d̄_i, d̄_{i+1}]| ≤ K, 1 ≤ i ≤ M}.

Clearly, if K ≤ σ(A)/2 then we can perfectly reconstruct every x(t) ∈ B_{K,D̄} by applying the CTF block to each of the intervals [d̄_i, d̄_{i+1}]. We now define the set B_K as

 B_K = ⋃_{D̄} B_{K,D̄}, (46)

which is the union of B_{K,D̄} over all choices of partition sets D̄ and integers M. Note that neither B_{K,D̄} nor B_K is a subspace. If we are able to find a partition D̄ such that x(t) ∈ B_{K,D̄}, then x(t) can be perfectly reconstructed by applying the CTF block on each interval. Since the Landau rate for B_K is K/LT, this approach requires the minimal sampling rate3.

The following proposition shows that if the parameters are chosen properly, then every multi-band signal lies in B_N. Thus, membership in B_N together with a method to find a suitable partition D̄ is sufficient for perfect reconstruction of the multi-band class.

###### Proposition 4

If the parameters are selected according to Theorem 3, then every multi-band signal with N bands lies in B_N.

{proof}

In the proof of Theorem 3 we showed that under the conditions of the theorem, S(f) is N-sparse for every f. The proof of the proposition then follows from the following lemma [8]:

###### Lemma 2

If x(t) is a multi-band signal with N bands sampled by a multi-coset system, then there exists a partition set D̄ with M intervals such that S(f) is a constant set over the interval [d̄_i, d̄_{i+1}) for every 1 ≤ i ≤ M.

Lemma 2 implies that |S[d̄_i, d̄_{i+1}]| ≤ N for every 1 ≤ i ≤ M, which means that x(t) ∈ B_N.

So far we showed that the multi-band class is contained in B_N; however, to recover x(t) we need a method to find the partition in practice, since Lemma 2 only ensures its existence. Given the data, our strategy is aimed at finding any partition set D such that

 Ŝ = ⋃_{i=0}^{|D|−1} S[d_i, d_{i+1}] (47)

is equal to S, and such that |S[d_i, d_{i+1}]| ≤ σ(A)/2 for every i. As long as (27) holds, once we find such a partition the solution is exactly recovered via (29)-(30). To find Ŝ, we apply the CTF block on each interval [d_i, d_{i+1}]. If |S[d_i, d_{i+1}]| ≤ σ(A)/2, then the conditions of Proposition 2 are valid, and a unique solution is guaranteed for each interval. Since (27) is valid for signals in B_K with K ≤ σ(A)/2, our method guarantees perfect reconstruction of signals in this set. As always, using a universal pattern makes the set of signals the largest. Since the Landau rate for B_K is K/LT, this approach allows for the minimal sampling rate when p = 2K.

In order to find Ŝ we suggest a bi-section process on [0, 1/LT). We initialize the interval to [0, 1/LT) and seek its diversity set. If the interval does not satisfy the conditions explained below, then we halve it into two equal sub-intervals and determine the diversity set of each half. The bi-section process is repeated until the conditions are met, or until the interval width reaches no more than a small constant ε. The set Ŝ is then determined according to (47).

We now describe the conditions under which a given interval is halved. The matrix Z of (34) satisfies the constraints (36a)-(36b). Since a universal pattern is used, (36c) is also valid. However, the last constraint (36d) of Proposition 2 is not guaranteed, as it requires the stronger condition |I(Z)| ≤ σ(A)/2. Note that this condition is satisfied immediately if |I(Z)| ≤ K, since K ≤ σ(A)/2. We suggest to approximate the value |I(Z)| by rank(Q), and solve the MMV system for the sparsest solution only if rank(Q) ≤ σ(A)/2. This approximation is motivated by the fact that for any Z it is true that rank(AZ) ≤ |I(Z)|. From Proposition 2 we have that Q = (AZ)(AZ)^H, which results in

 rank(Q) ≤ |I(Z)|. (48)

However, only special multi-band signals result in strict inequality in (48). Therefore, an interval that produces rank(Q) > σ(A)/2 is halved. Otherwise, we apply the CTF block to this interval, assuming that (48) holds with equality. As in SBR4, the flag indicates a correct solution for the interval. Therefore, if the flag is 0 we halve the interval. These reconstruction steps are detailed in Algorithm 2, named SBR2.
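The bi-section logic above can be sketched as a short recursion. This is an illustrative skeleton only: `compute_Q` and `ctf` are hypothetical stand-ins for the interval-dependent computation of Q and for the CTF block, and the rank test mirrors the halving condition just described.

```python
import numpy as np

def sbr2(a, b, compute_Q, ctf, kruskal_rank, min_width):
    """Recursive bi-section of the interval [a, b), as in the SBR2 sketch.
    compute_Q(a, b) returns the matrix Q for the interval, ctf(Q) returns
    (S, flag), kruskal_rank is sigma(A), and min_width plays the role of
    the smallest interval width the process may reach."""
    if b - a <= min_width:
        return set()  # termination: give up on this interval (empty set)
    Q = compute_Q(a, b)
    if np.linalg.matrix_rank(Q) <= kruskal_rank / 2:
        S, flag = ctf(Q)
        if flag:
            return set(S)  # CTF succeeded on this interval
    # rank too high, or the MMV solver failed: halve and recurse
    mid = (a + b) / 2
    return (sbr2(a, mid, compute_Q, ctf, kruskal_rank, min_width)
            | sbr2(mid, b, compute_Q, ctf, kruskal_rank, min_width))
```

The final Ŝ is the union of the per-interval diversity sets, matching (47); note how a failed flag simply refines the partition rather than aborting the reconstruction.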

It is important to note that SBR2 is sub-optimal, since the final output Ŝ of the algorithm may not be equal to S even for x(t) ∈ B_K. One reason this can happen is if strict inequality holds in (48) for some interval. In this scenario step 2 is executed even though Z does not satisfy (36d). For example, consider a signal with two equal width bands such that

 (49)

If the signal also satisfies

 X(f−a1)=X(f−a2),∀f∈[0,W], (50)

then it can be verified that rank(Q) is strictly smaller than |I(Z)| on the relevant interval. This is of course a rare special case. Another reason is a signal for which the algorithm reaches the termination step for some small enough interval. This scenario can happen if two or more points of the partition reside in an interval of width ε. As an empty set is returned for such an interval, the final output may be missing some of the elements of S. Clearly, the value of ε influences the number of cases of this type. We note that since Ŝ is assembled from all of the intervals, the missing values are typically recovered from other intervals. Thus, both of these sources of error are very uncommon.

The most common case in which SBR2 can fail is due to the use of sub-optimal algorithms to find the sparsest solution; this issue also occurs in SBR4. As explained before, we assume that flag = 0 means an incorrect solution, and we halve the interval. An interesting behavior of MMV methods is that even if the sparsest solution cannot be found for an interval, the algorithm may still find a sparse solution for each of its subsections. Thus, the indication flag is also a way to partially overcome the practical limitations of MMV techniques. Note that the indication property is crucial for SBR2, as it helps to refine the partition and reduce the sub-optimality resulting from the MMV algorithm.

We point out that Proposition 4 shows that every multi-band signal lies in B_N. We also have that |S| ≤ 2N from Proposition 1, which motivates our approach. The SBR2 algorithm itself does not impose any additional limitations on the parameters other than those of Theorem 3 required to ensure the uniqueness of the solution. Therefore, theoretically, perfect reconstruction of multi-band signals is guaranteed if the samples are acquired at the minimal rate, with the exception of the special cases discussed before.

The complexity of SBR2 is dictated by the number of iterations of the bi-section process, which is also affected by the behavior of the MMV algorithm that is used. Numerical experiments in Section VII show that empirically SBR2 converges sufficiently fast for practical usage.

Finally, we emphasize that SBR2 does not provide an indication of successful recovery of S even for multi-band signals, since there is no way to know in advance whether x(t) is a signal of the special type that SBR2 cannot recover.

### Vi-C Comparison between SBR4 and SBR2

Table II compares the properties of SBR4 and SBR2. We added the WKS theorem as it also offers spectrum-blind reconstruction. Both the SBR4 and SBR2 algorithms recover the set S according to the paradigm stated in Section V-B. Observe that an indication property is available only for SBR4, and only if the signals are known to lie in A_K. Although both SBR4 and SBR2 can operate at the minimal sampling rate, SBR2 guarantees perfect reconstruction for a wider set of signals, as A_K is a true subset of B_K.

Considering multi-band signals, we have to restrict the parameter selection. The specific behavior of SBR4 and SBR2 for this scenario is compared in Table III. In particular, SBR4 requires twice the minimal rate.

In the tables, perfect reconstruction refers to reconstruction with a brute-force MMV method that finds the correct solution. In practice, sub-optimal MMV algorithms may result in failure of recovery even when the other requirements are met. The indication flag is intended to discover these cases.

The entire reconstruction scheme is presented in Fig. 3. The scheme, together with the tables, allows for an informed decision on the particular implementation of the system. Clearly, when the spectral support occupies most of the Nyquist range, it is preferable to sample at the Nyquist rate and to reconstruct with an ideal low-pass filter. Otherwise, we have to choose between SBR4 and SBR2 according to our prior on the signal. Typically, it is natural to assume