Efficient Sampling of Sparse Wideband Analog Signals
This work was
supported in part by the Israel Science Foundation under Grant no.
1081/07 and by the European Commission in the framework of the FP7
Network of Excellence in Wireless COMmunications NEWCOM++
(contract no. 216715). JAT was supported in part by DARPA and ONR. This work has been submitted to the IEEE for
possible publication. Copyright may be transferred without notice,
after which this version may no longer be accessible.
Moshe Mishali and Yonina C. Eldar are with the Technion-Israel
Institute of Technology, Haifa, Israel. Emails:
moshiko@tx.technion.ac.il, yonina@ee.technion.ac.il. Joel A. Tropp
is with the Applied & Computational Mathematics department of the
California Institute of Technology, Pasadena, CA 91125-5000.
Email: jtropp@acm.caltech.edu.
Abstract
Periodic nonuniform sampling is a known method to sample spectrally sparse signals below the Nyquist rate. This strategy relies on the implicit assumption that the individual samplers are exposed to the entire frequency range. This assumption becomes impractical for wideband sparse signals. The current paper proposes an alternative sampling stage that does not require a full-band front end. Instead, signals are captured with an analog front end that consists of a bank of multipliers and lowpass filters whose cutoff is much lower than the Nyquist rate. The problem of recovering the original signal from the low-rate samples can be studied within the framework of compressive sampling. An appropriate parameter selection ensures that the samples uniquely determine the analog input. Moreover, the analog input can be stably reconstructed with digital algorithms. Numerical experiments support the theoretical analysis.
I Introduction
Radio frequency (RF) technology enables the modulation of a narrowband signal by a high carrier frequency. As a consequence, man-made radio signals are often sparse. That is, they consist of a relatively small number of narrowband transmissions spread across a wide territory of spectrum. A convenient description for these signals is the multiband model, in which the frequency support of a signal resides within several continuous intervals of a wide spectrum but vanishes elsewhere.
It has become prohibitive to sample modern multiband signals because their Nyquist rates may exceed the specifications of the best analog-to-digital converters (ADCs) by orders of magnitude. As a result, any attempt to acquire a multiband signal must exploit its structure in an intelligent way.
Previous work on multiband signals has shown that it is possible to reduce the sampling rate by acquiring samples on a periodic but nonuniform grid [1]. Multi-coset sampling, a specific strategy of this type, was analyzed in [2], which established that exact recovery is possible when the band locations are known. The blind case, in which the band locations are unknown, was studied extensively in [3]. Unfortunately, the sampling front ends proposed in [1, 2, 3] are impractical for wideband applications because they require ADCs whose sampling rate is matched to the Nyquist rate of the input signal, even when the average sampling rate is much lower. Other limitations are described in Section II-B. Another recent work [4] has partially overcome these shortcomings using a hybrid optic–electronic system, at the expense of size and cost.
In this paper, we analyze a practical sampling system inspired by the recent work on the random demodulator [5]. That system multiplies the input signal by a random square wave alternating at the Nyquist rate, lowpass-filters the product, and samples it at a low rate. Our system consists of a bank of random demodulators running in parallel. We show that, for an appropriate choice of parameters, our system uniquely and stably determines a multiband input signal. Moreover, we describe digital algorithms for reconstructing the signal from the parallel samples.
We continue with an outline of the paper. Section II reviews essential background material. In Section III, we describe the system design and a frequency-domain analysis that leads to an infinite measurement vectors (IMV) system. Applying ideas from [6], we reduce the problem of locating the frequency bands to a finite-dimensional compressive sampling problem. We then derive an appropriate choice of parameters for the sampling system. Section IV presents our numerical experiments, which demonstrate that the system permits stable signal recovery in the presence of noise.
II Formulation and Background
II-A Design Goals for Efficient Sampling
Let x(t) be a real-valued, finite-energy, continuous-time signal, and let X(f) be its Fourier transform. We treat a multiband signal model ℳ in which x(t) is bandlimited to ℱ = [−f_NYQ/2, +f_NYQ/2] and the support of X(f) consists of at most N frequency intervals whose widths do not exceed B. Fig. 1 depicts a typical communication application that obeys this signal model.
We wish to design a sampling system for signals from our model that satisfies the following properties:

the sampling rate should be as low as possible;

the system has no prior knowledge of band locations; and

the system can be implemented with existing devices.
We will call this type of sampling stage efficient.
The set ℳ is a union of subspaces corresponding to all possible signal supports. Every x(t) ∈ ℳ lies in one of these subspaces. Detecting the exact subspace prior to sampling may be impossible or too expensive to implement. An efficient system should therefore be blind, in the sense that the band locations are not assumed to be known.
The lowest (average) sampling rate that allows blind perfect reconstruction for all signals in ℳ is proportional to NB, the effective bandwidth of x(t) [3]; it is typically far less than the Nyquist rate f_NYQ, which depends only on the maximal frequency of ℱ. See Section III-C for more discussion.
Our previous work describes blind reconstruction of x(t) from multi-coset samples taken at the minimal rate [3]. The next section details the practical limitations of the multi-coset strategy, which render it inefficient for wideband signals.
II-B Practical Limitations of Multi-Coset Sampling
Multi-coset sampling involves periodic nonuniform sampling of the Nyquist-rate sequence x(nT), where T = 1/f_NYQ. The c_i-th coset takes the c_i-th value in every block of L consecutive samples. Retaining only p cosets, indexed by the set C = {c_i}, 1 ≤ i ≤ p, gives the sequences

x_{c_i}[n] = x(nT) for n = mL + c_i, m ∈ ℤ, and x_{c_i}[n] = 0 otherwise,   (1)

with an average sampling rate of (p/L) f_NYQ, which is lower than the Nyquist rate.
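As a toy illustration, the coset selection of (1) can be sketched in Python (NumPy); the sequence values, block length, and coset indices below are hypothetical:

```python
import numpy as np

def multicoset_sequences(x, L, cosets):
    """Split a Nyquist-rate sequence x into coset sequences as in (1):
    coset c keeps the c-th sample of every block of L consecutive
    samples and is zero elsewhere."""
    seqs = {}
    for c in cosets:
        s = np.zeros_like(x)
        s[c::L] = x[c::L]   # retain x[mL + c] for every m, zero otherwise
        seqs[c] = s
    return seqs

# Toy Nyquist-rate samples; any p of the L cosets may be retained
x = np.arange(1.0, 13.0)
seqs = multicoset_sequences(x, L=4, cosets=[0, 2])
```

Retaining p = 2 of the L = 4 cosets keeps one quarter of the samples per retained coset, matching the (p/L) f_NYQ average rate above.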
To explain the practical limitations of this strategy, we observe that standard ADC devices have a specified maximal conversion rate r, and manufacturers require a preceding lowpass filter with cutoff r/2. Distortions occur if the anti-aliasing filter is not used, since the design is tailored to bandlimited signals and has an internal parasitic response to frequencies above r/2. To avoid these distortions, an ADC whose maximal rate r matches the Nyquist rate of the input signal must be used, even if the actual sampling rate is below r. In multi-coset sampling, each sequence x_{c_i}[n] corresponds to uniform sampling at rate f_NYQ/L, whereas the input contains frequencies up to f_NYQ/2. Acquiring x_{c_i}[n] is therefore only possible with an ADC that has r = f_NYQ, which runs L times slower than its maximal rate. Besides the waste of resources, this renders multi-coset sampling impractical in wideband applications, where f_NYQ is higher (typically by orders of magnitude) than the rate of available devices.
One recent paper [4] developed a nonconventional ADC design for wideband applications by means of high-rate optical devices. The hybrid optic–electronic system allows sampling at a low rate with minimal attenuation of higher frequencies, up to the Nyquist frequency. Unfortunately, at present, this performance cannot be achieved with purely electronic technology. Thus, for wideband applications that cannot afford the size or expense of an optical system, multi-coset sampling becomes impractical.
Another limitation of multicoset sampling, which also exists in the optical implementation, is maintaining accurate time delays between the ADCs of different cosets. Any uncertainty in these delays hobbles the recovery from the sampled sequences.
Before describing the way our proposed sampling stage overcomes these limitations, we briefly review the mechanism underlying the blind reconstruction of [3].
II-C IMV System
Let x(Λ) = {x(λ) : λ ∈ Λ} be a collection of M-dimensional vectors indexed by a fixed set Λ that may be infinite. The support of a vector is the set supp(x) = {i : x_i ≠ 0}, and we define supp(x(Λ)) = ∪_{λ∈Λ} supp(x(λ)). We will assume that the vectors in x(Λ) are jointly K-sparse in the sense that |supp(x(Λ))| ≤ K. In words, the nonzero entries of each vector lie within a set of at most K indices.
Let A be an m × M matrix with m < M, and consider the parameterized family of linear systems

y(λ) = A x(λ), λ ∈ Λ.   (2)
When the support S = supp(x(Λ)) is known, recovering x(Λ) from the known vector set y(Λ) is possible if the submatrix A_S, consisting of the columns of A indexed by S, has full column rank. In this case,

x_S(λ) = A_S^† y(λ) = (A_S^H A_S)^{−1} A_S^H y(λ),   (3a)
x_i(λ) = 0 for every i ∉ S,   (3b)

where x_S(λ) contains only the entries of x(λ) indexed by S, where A_S^H denotes the conjugate transpose of A_S, and where A_S^† is the Moore–Penrose pseudoinverse. For unknown support, (2) is still invertible if the sparsity level K is known and every set of 2K columns from A is linearly independent [7, 8, 6]. In general, solving (2) for the sparsest solution is NP-hard because it may require a combinatorial search. Nevertheless, recent advances in compressive sampling and sparse approximation delineate situations in which polynomial-time recovery algorithms correctly identify x(Λ) for finite Λ. This challenge is sometimes referred to as a multiple measurement vectors (MMV) problem [9, 10, 11, 8, 12, 13].
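The known-support recovery of (3) amounts to applying a pseudoinverse to the retained columns. A minimal sketch, with hypothetical dimensions and a randomly drawn matrix standing in for A:

```python
import numpy as np

# Hypothetical sizes: m measurements, M unknowns, known support S with |S| < m
rng = np.random.default_rng(0)
m, M = 4, 8
A = rng.standard_normal((m, M))     # stand-in for the sampling matrix
S = [1, 5, 6]                       # known support set
x_true = np.zeros(M)
x_true[S] = [2.0, -1.0, 0.5]
y = A @ x_true                      # one measurement vector as in (2)

# (3a): restrict A to the columns in S and apply the Moore-Penrose pseudoinverse
A_S = A[:, S]
x_hat = np.zeros(M)
x_hat[S] = np.linalg.pinv(A_S) @ y  # equals (A_S^H A_S)^{-1} A_S^H y here
# (3b): entries outside S remain zero
```

Because A_S has full column rank (with probability one for a Gaussian draw), the noiseless measurement is inverted exactly.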
Recovering a multiband signal from a set of multi-coset samples can be reduced to a certain infinite measurement vectors (IMV) problem, in which Λ is infinite. When the band locations are known, the support set S is determined, and reconstruction can be performed via (3) [1, 2]. In a blind scenario, the support of the unknown vectors can be recovered in two steps [3, 6]. First, construct a (finite) frame V for the measurement set y(Λ). Then, find the (unique) solution Ū of the MMV system V = A U that has the fewest nonzero rows. It holds that supp(x(Λ)) equals the union of the supports of the columns of Ū, namely the row support of Ū. Fig. 2 summarizes these recovery steps.
In the next section, we describe and analyze the proposed sampling system. In contrast to the multi-coset strategy, our system uses standard low-rate ADCs, and the analog input of each ADC is matched to its maximal conversion rate. The system also avoids time offsets between devices. As in multi-coset sampling, the sample sequences generated by our system are related to x(t) via an IMV system, albeit a different one from that based on the sequences (1). Consequently, the recovery of x(t) can be performed via the steps described in Fig. 2 and (3).
III Efficient Sampling
III-A Description
Let us present the proposed system in more detail. A block diagram appears in Fig. 3. We discuss the choice of system parameters in the sequel.
The signal x(t) enters m channels simultaneously. In the ith channel, x(t) is multiplied by a mixing function p_i(t), which is a periodic, piecewise-constant function that alternates between the levels ±1 on each of M equal time intervals per period. Formally,

p_i(t) = α_{ik} for k T_p/M ≤ t < (k+1) T_p/M, 0 ≤ k ≤ M−1,   (4)

with α_{ik} ∈ {+1, −1}, and p_i(t + n T_p) = p_i(t) for every n ∈ ℤ.
After mixing, the output is converted to digital using the standard approach: in each channel, the spectrum of the product x(t) p_i(t) is truncated by a lowpass filter with cutoff f_s/2, and the filtered signal is sampled at rate f_s = 1/T_s.
Note that the cutoff and the sampling rate match, and each channel operates independently of the others. Since there are m channels, the average sampling rate is m f_s samples/sec. A further advantage of this type of system is that samples are produced at a constant rate, so they may be fed to a digital processor operating at the same frequency, whereas multi-coset sampling requires an additional hardware buffer to synchronize the nonuniform sequences.
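The mix–filter–sample chain of one channel can be mimicked in discrete time. The sketch below (NumPy/SciPy) represents the "analog" input on a dense Nyquist-rate grid and uses an FIR filter in place of the ideal lowpass; all sizes and the test input are hypothetical:

```python
import numpy as np
from scipy.signal import firwin, lfilter

rng = np.random.default_rng(1)
M = 51                                 # signs per period; the grid has M points per period
n_periods = 40
alpha = rng.choice([-1.0, 1.0], size=M)
p = np.tile(alpha, n_periods)          # mixing waveform sampled on the Nyquist-rate grid

n = np.arange(p.size)
x = np.cos(2 * np.pi * 0.11 * n)       # stand-in input on the same grid

mixed = x * p                          # the analog mixer: pointwise multiplication
h = firwin(51, 1.0 / M)                # FIR lowpass with cutoff near f_s/2 = f_NYQ/(2M)
y = lfilter(h, 1.0, mixed)[::M]        # decimate by M: sample at rate f_s = f_NYQ/M
```

The decimation step yields one output sample per mixing period, which is the constant-rate sequence fed to the digital processor.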
III-B Analysis
To ease the exposition, we choose an odd M, f_p = f_NYQ/M, and f_s = f_p, where f_p = 1/T_p is the frequency of the mixing period. These choices are relaxed in the sequel. Consider the ith channel. Since p_i(t) is T_p-periodic, it has a Fourier expansion

p_i(t) = Σ_{l=−∞}^{∞} c_{il} e^{j(2π/T_p) l t},   (5)

where

c_{il} = (1/T_p) ∫_0^{T_p} p_i(t) e^{−j(2π/T_p) l t} dt   (6)
= (1/T_p) Σ_{k=0}^{M−1} α_{ik} ∫_{kT_p/M}^{(k+1)T_p/M} e^{−j(2π/T_p) l t} dt   (7)
= d_l Σ_{k=0}^{M−1} α_{ik} θ^{lk}.   (8)

Evaluating the integral gives

d_l = 1/M for l = 0, and d_l = (1 − θ^l)/(j2πl) for l ≠ 0,   (9)
where θ = e^{−j2π/M}. Expressing the Fourier transform of p_i(t) in terms of its Fourier series coefficients leads to

P_i(f) = Σ_{l=−∞}^{∞} c_{il} δ(f − l f_p),   (10)

with δ(·) denoting the Dirac delta function. The analog multiplication translates to convolution in the frequency domain,

X̃_i(f) = (X ∗ P_i)(f) = Σ_{l=−∞}^{∞} c_{il} X(f − l f_p).   (11)

Therefore, the spectrum of the mixed signal is a linear combination of f_p-shifted copies of X(f).
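The closed form (8)–(9) is easy to sanity-check numerically: the coefficients d_l Σ_k α_{ik} θ^{lk} should agree with a direct Riemann-sum approximation of the integral in (6). The sketch below uses hypothetical sizes and T_p = 1 for convenience:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 7                                       # hypothetical odd number of sign intervals
alpha = rng.choice([-1.0, 1.0], size=M)
theta = np.exp(-2j * np.pi / M)

def c_closed(l):
    """Fourier coefficient via (8)-(9): c_l = d_l * sum_k alpha_k theta^{lk}."""
    d = 1.0 / M if l == 0 else (1 - theta**l) / (2j * np.pi * l)
    return d * sum(alpha[k] * theta**(l * k) for k in range(M))

def c_numeric(l, n=200000):
    """Midpoint Riemann-sum approximation of (6) with T_p = 1."""
    t = (np.arange(n) + 0.5) / n
    p = alpha[np.minimum((t * M).astype(int), M - 1)]   # piecewise-constant p(t)
    return np.mean(p * np.exp(-2j * np.pi * l * t))
```

Agreement of the two routines, for several harmonics l, confirms the evaluation of the integral term by term.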
Filtering by h(t), whose frequency response is an ideal rect function on the interval ℱ_s = [−f_s/2, +f_s/2], results in

Y_i(f) = Σ_{l=−L_0}^{L_0} c_{il} X(f − l f_p), f ∈ ℱ_s,   (12)

where L_0 is the smallest integer satisfying

L_0 ≥ (f_NYQ + f_s)/(2 f_p) − 1.   (13)

Under the choices above, L_0 = (M−1)/2. The discrete-time Fourier transform of the sample sequence y_i[n] is

Y_i(e^{j2πfT_s}) = Σ_{n=−∞}^{∞} y_i[n] e^{−j2πfnT_s}   (14)
= Σ_{l=−L_0}^{L_0} c_{il} X(f − l f_p), f ∈ ℱ_s.   (15)
Substituting (8) into (15) leads to the system

y(f) = A z(f) = S F D z(f), f ∈ ℱ_s,   (16)

where y(f) = [Y_1(e^{j2πfT_s}), …, Y_m(e^{j2πfT_s})]^T, S is the m × M matrix of signs with (S)_{ik} = α_{ik}, and F is an M × M matrix whose (k,l)th entry is θ^{kl}. The matrix F is a certain cyclic column shift of the discrete Fourier transform matrix of order M. The square diagonal matrix D = diag(d_{−L_0}, …, d_{L_0}) applies the scalars d_l of (9) to the entries of

z_l(f) = X(f + (l − L_0 − 1) f_p), 1 ≤ l ≤ M,   (17)

according to the last term in (8). Since D has nonzero diagonal entries, it can be absorbed into z(f) while keeping supp(D z(f)) = supp(z(f)). Thus, (16) is an IMV system, with ℱ_s replacing the index set Λ of (2).
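Under the simplifying choices (odd M, f_s = f_p), the sensing matrix of (16) can be assembled as the product of the sign matrix, the exponential matrix, and the diagonal of d_l values. The sketch below uses hypothetical sizes and one of several equivalent column orderings, and double-checks the vectorized construction against a direct loop:

```python
import numpy as np

rng = np.random.default_rng(3)
m, M = 4, 7                                # hypothetical: m channels, M signs per period (odd)
L0 = (M - 1) // 2
S = rng.choice([-1.0, 1.0], size=(m, M))   # sign patterns alpha_{ik}
theta = np.exp(-2j * np.pi / M)

l = np.arange(-L0, L0 + 1)                 # shift indices -L0..L0
F = theta ** np.outer(np.arange(M), l)     # M x M; column for shift l holds theta^{kl}
d = np.where(l == 0, 1.0 / M,
             (1 - theta**l) / (2j * np.pi * np.where(l == 0, 1, l)))
A = S @ F @ np.diag(d)                     # m x M sensing matrix, as in (16)
```

The diagonal factor can equally be absorbed into the unknown vector, which leaves the support of the solution unchanged.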
We now explain the choices assumed at the beginning of the section.

The choice f_s = f_p justifies the transition from (14) to (15). For f_s < f_p, (15) does not contain the entire information about X(f), so this regime should be avoided. In contrast, the selection f_s = q f_p has the following benefit for hardware implementation. Suppose f_s = q f_p for some integer q > 1. Then, it can be verified that, under technical conditions on the sign patterns, each sequence y_i[n] corresponds to q channels designed with f_s = f_p. Consequently, the number of mixers, filters, and ADCs is reduced by a factor of q. This choice requires an ADC with rate q f_p and is thus limited by the sampling rates of available devices.

Choosing f_p = f_NYQ/M with an odd M ensures that the right-hand side of (13) is an integer. Other choices are possible, but they imply a sampling rate higher than the minimum.
III-C Parameter Selection and Stable Recovery
The following theorem suggests a parameter selection for which the infinite sample sequences determine a unique x(t) ∈ ℳ. When the band locations are known, the same selection works with half as many sampling channels. Thus, the system appearing in Fig. 3 can also replace the multi-coset stage of [2].
Theorem 1 (Uniqueness)
Let x(t) be a multiband signal from ℳ, and assume the choices f_p = f_NYQ/M for an integer M (not necessarily odd) and f_s = f_p. If

f_s = f_p ≥ B;

m ≥ 2N for non-blind reconstruction, or m ≥ 4N for blind; and

A is such that every 2N columns are linearly independent,

then, for every f ∈ ℱ_s, the vector z(f) is the unique N-sparse solution of (16). In addition, under these choices, z(ℱ_s) is 2N jointly sparse.
Proof. The proof goes along the lines of [3]. The relation (17) can be thought of as slicing the spectrum ℱ into M pieces of length f_p and then rearranging them in the vector z(f). Fig. 4 visualizes this relation for even and odd M.

The choice f_s = f_p ≥ B ensures that every band can contribute only a single nonzero value to z(f). As a consequence, z(f) is N-sparse for every f. In addition, this choice and the continuity of the bands guarantee that each band can occupy at most two spectrum pieces. Therefore, when aggregating the frequencies to compute S = supp(z(ℱ_s)), we have |S| ≤ 2N.

In the non-blind setting, the band locations imply the support set S. The other two conditions, on m and A, ensure the existence of A_S^†, and thus (3) provides the uniqueness of z(f).

In blind recovery, S is unknown, and the following CS result is used to ensure uniqueness. An N-sparse vector is the unique sparsest solution of y = A z if every 2N columns of A are linearly independent [7]. Clearly, this condition translates to m ≥ 2N and to the condition on A in the theorem.
The parameter selection of Theorem 1 guarantees an average sampling rate of m f_s. Depending on whether f_NYQ/B is an integer, this selection allows approaching the minimal rate when taking the extreme values for m and f_s. Note that each z(f) is N-sparse, while z(ℱ_s) is 2N jointly sparse under the parameter selection of the theorem. As detailed in [3], this factor of two requires doubling m in order to use Fig. 2 and (3). Gaining back this factor, at the expense of a higher recovery complexity, is also described in [3].
Verifying that a given set of signs satisfies the requirement of the theorem is computationally difficult, because one must check the rank of every set of 2N columns of A. It is known that a random choice of signs will work, except with probability exponentially small in m [14].
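Since checking every subset of columns is combinatorial, a Monte-Carlo spot check is a practical surrogate. The sketch below draws a random sign matrix of hypothetical size and reports the fraction of randomly chosen column subsets that are full rank:

```python
import numpy as np

rng = np.random.default_rng(4)
m, M, k = 12, 24, 4            # k plays the role of the theorem's column count
S = rng.choice([-1.0, 1.0], size=(m, M))

# Checking all C(M, k) column subsets is combinatorial, so spot-check
# random subsets and report the fraction that are full rank.
trials = 300
full_rank = sum(
    int(np.linalg.matrix_rank(S[:, rng.choice(M, size=k, replace=False)]) == k)
    for _ in range(trials)
)
frac = full_rank / trials
```

A fraction near one is consistent with the cited result that random sign patterns work with overwhelming probability, though the spot check is evidence rather than a certificate.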
In fact, recent work on compressive sampling shows that a random choice of signs ensures that signal acquisition is stable [9]. A matrix A is said to have the restricted isometry property (RIP) of order K if there exists a constant δ_K ∈ (0, 1) such that

(1 − δ_K) ‖z‖² ≤ ‖A z‖² ≤ (1 + δ_K) ‖z‖²   (18)

for every K-sparse vector z [9]. When A satisfies the RIP of order K, the submatrices A_S and the Gram matrices A_S^H A_S are well conditioned for every possible frequency subset S with |S| ≤ K. It was proved in [15] that basis pursuit recovers the K-sparse solution of y = A z for an underdetermined A if A has δ_{2K} < √2 − 1. The mean squared error of the recovery in the presence of noise or model mismatch is bounded under the same condition. Similar conditions were shown to hold for other recovery algorithms. In particular, [16] proved an analogous result for a mixed-norm program in the MMV setting. Thus, if A has an RIP of order 4N with a sufficiently small constant, stability of the recovery using Fig. 2 is implied when the mixed-norm program is utilized to find the sparsest solution of the MMV system V = A U.
It remains to quantify when stable recovery is possible for specific choices of m and M. Let Ψ be an M × M unitary matrix (such as a normalized version of F in (16)), and suppose that S is an m × M random matrix whose entries are equally likely to be ±1. The RIP of order K holds with high probability for the matrix S Ψ when m ≥ C K log(M/K), where C is a positive constant independent of everything [17]. The log factor is necessary [18]. In practice, we evaluate the stability of the system empirically, since the RIP cannot be verified computationally.
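Although the RIP constant itself cannot be computed, a crude empirical lower bound on it follows from evaluating ‖Az‖²/‖z‖² over many random sparse vectors; all sizes below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)
m, M, k = 40, 128, 4                       # hypothetical sizes; k is the sparsity
A = rng.choice([-1.0, 1.0], size=(m, M)) / np.sqrt(m)   # normalized sign matrix

ratios = []
for _ in range(500):
    z = np.zeros(M)
    idx = rng.choice(M, size=k, replace=False)
    z[idx] = rng.standard_normal(k)        # random k-sparse test vector
    ratios.append(np.linalg.norm(A @ z)**2 / np.linalg.norm(z)**2)

# The worst observed deviation from an isometry lower-bounds delta_k in (18)
delta_hat = max(1 - min(ratios), max(ratios) - 1)
```

Random probing can only certify a lower bound on δ_k; this matches the remark above that the RIP cannot be verified computationally.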
IV Numerical Evaluation
To evaluate the empirical performance of the proposed system (see Fig. 3), we simulate the action of the system on test signals contaminated with white Gaussian noise. To recover the signals from the sequences of samples, we apply the reduction from an IMV system to an MMV system, as described in Fig. 2. We solve the resulting MMV systems using simultaneous orthogonal matching pursuit [11, 12].
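A bare-bones version of simultaneous orthogonal matching pursuit, simplified from the algorithms of [11, 12], can be sketched as follows; the greedy selection aggregates correlations across all measurement vectors before choosing a column, and the instance below is hypothetical:

```python
import numpy as np

def somp(A, Y, k):
    """Simultaneous OMP (simplified): greedily pick the column of A most
    correlated with the current residual, aggregated over all columns of Y."""
    residual = Y.copy()
    support = []
    for _ in range(k):
        corr = np.abs(A.conj().T @ residual).sum(axis=1)
        corr[support] = 0                   # do not re-select chosen columns
        support.append(int(np.argmax(corr)))
        proj = A[:, support] @ np.linalg.pinv(A[:, support])
        residual = Y - proj @ Y             # project Y off the chosen columns
    return sorted(support)

# Hypothetical jointly sparse MMV instance
rng = np.random.default_rng(6)
m, M, k, q = 12, 30, 3, 5
A = rng.standard_normal((m, M))
X = np.zeros((M, q))
X[[2, 11, 25], :] = rng.standard_normal((3, q))
Y = A @ X
support_hat = somp(A, Y, k)
```

The returned index set plays the role of the estimated row support of the MMV solution.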
More precisely, we evaluate the performance on 100 noisy test signals of the form x(t) + w(t), where x(t) is a multiband signal and w(t) is a white Gaussian noise process. The multiband signals consist of N/2 pairs of bands, each of width B, constructed using a formula of the form

x(t) = Σ_{i=1}^{N/2} √(E_i B) sinc(B(t − τ_i)) cos(2π f_i (t − τ_i)),

where sinc(t) = sin(πt)/(πt). The energy coefficients E_i are fixed, whereas for every signal the carriers f_i are chosen uniformly at random in [−f_NYQ/2, +f_NYQ/2], with f_NYQ in the gigahertz range. To represent the continuous signals in simulation, we place a dense grid of 4000 equispaced points in the observed time interval. The Gaussian noise is added and scaled so that the test signal has the desired signal-to-noise ratio (SNR), defined here as the energy ratio ‖x‖²/‖w‖².
We simulate the proposed system with m channels, where each mixing function alternates sign at most M times per period. The sampling rate parameters are chosen so that f_s = f_p = f_NYQ/M. Each sign α_{ik} is chosen uniformly at random and fixed for the duration of the experiment. To simulate the analog lowpass filter, we use an order-50 digital FIR filter, designed with the MATLAB command h=fir1(50,1/M). The output of the filter is decimated to produce the sample sequences y_i[n].
The input signal is reconstructed from the m channels. We follow the procedure described in Fig. 2 to obtain an estimated support set Ŝ. When Ŝ = S, the true support set, we declare that the system has recovered the signal. Fig. 5 reports the percentage of correct recoveries for various numbers of channels and various SNRs.
To construct the frame V, we begin by computing the sample correlation matrix of the measurement sequences. We then perform an eigenvalue decomposition and discard the eigenvectors of the noise space [3]; the retained signal-space eigenvectors, scaled appropriately, form V.
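This frame construction can be sketched as follows, with synthetic low-rank measurements standing in for the sampled sequences and a hypothetical threshold separating signal and noise eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n_samples, rank = 6, 200, 2
# Synthetic measurements confined (up to tiny noise) to a rank-2 subspace
basis = rng.standard_normal((m, rank))
Y = basis @ rng.standard_normal((rank, n_samples)) \
    + 1e-6 * rng.standard_normal((m, n_samples))

Q = Y @ Y.conj().T / n_samples            # sample correlation of the measurements
w, U = np.linalg.eigh(Q)                  # eigenvalues in ascending order
keep = w > 1e-3 * w.max()                 # hypothetical threshold: drop noise space
V = U[:, keep] * np.sqrt(w[keep])         # frame satisfying V V^H ~= Q
```

The columns of V then serve as the finite MMV input V = A U in the recovery procedure of Fig. 2.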
V Conclusions
We developed an efficient sampling stage for analog multiband signals. In the proposed system, analog mixers and standard ADCs replace the impractical nonuniform sampling of the multi-coset strategy. Analog mixers for wideband applications are an existing RF technology, though selecting the exact devices may require expertise in analog design.
The proposed system has a set of parameters which, when selected according to the conditions we derived, determine the signal uniquely. Analyzing our system in the frequency domain leads to an IMV system, which allows the use of existing reconstruction stages with only minor modifications. In addition, based on the IMV system and recent results in the CS literature, we deduced the rate requirements for stable blind recovery, which are in general higher than the rate required merely to determine the signal from its samples.
A preliminary computer evaluation of our system shows promise for stable blind recovery at sub-Nyquist sampling rates, although further work is required to quantify the optimal working point in the trade-off between sampling rate, blindness, and practical implementation.
References
 [1] Y.P. Lin and P. P. Vaidyanathan, “Periodically nonuniform sampling of bandpass signals,” IEEE Trans. Circuits Syst. II, vol. 45, no. 3, pp. 340–351, Mar. 1998.
 [2] R. Venkataramani and Y. Bresler, “Perfect reconstruction formulas and bounds on aliasing error in sub-Nyquist nonuniform sampling of multiband signals,” IEEE Trans. Inform. Theory, vol. 46, no. 6, pp. 2173–2183, Sep. 2000.
 [3] M. Mishali and Y. C. Eldar, “Blind multiband signal reconstruction: Compressed sensing for analog signals,” CCIT Report no. 639, EE Dept., Technion-Israel Institute of Technology; submitted to IEEE Trans. Signal Processing, Sep. 2007.
 [4] M. Fleyer, A. Rosenthal, A. Linden, and M. Horowitz, “Multirate synchronous sampling of sparse multiband signals,” Arxiv preprint arXiv:0806.0579, 2008.
 [5] J. N. Laska, S. Kirolos, M. F. Duarte, T. S. Ragheb, R. G. Baraniuk, and Y. Massoud, “Theory and implementation of an analog-to-information converter using random demodulation,” in Proc. IEEE ISCAS, 2007, pp. 1959–1962.
 [6] M. Mishali and Y. C. Eldar, “Reduce and boost: Recovering arbitrary sets of jointly sparse vectors,” IEEE Trans. Signal Processing, vol. 56, no. 10, pp. 4692–4702, Oct. 2008.
 [7] D. L. Donoho and M. Elad, “Maximal sparsity representation via ℓ1 minimization,” Proc. Natl. Acad. Sci., vol. 100, pp. 2197–2202, Mar. 2003.
 [8] J. Chen and X. Huo, “Theoretical results on sparse representations of multiplemeasurement vectors,” IEEE Trans. Signal Processing, vol. 54, no. 12, pp. 4634–4643, Dec. 2006.
 [9] E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inform. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.
 [10] D. L. Donoho, “Compressed sensing,” IEEE Trans. Inform. Theory, vol. 52, no. 4, pp. 1289–1306, April 2006.
 [11] S. F. Cotter, B. D. Rao, K. Engan, and K. Kreutz-Delgado, “Sparse solutions to linear inverse problems with multiple measurement vectors,” IEEE Trans. Signal Processing, vol. 53, no. 7, pp. 2477–2488, July 2005.
 [12] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, “Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit,” Signal Process., vol. 86, pp. 572–588, Apr. 2006.
 [13] J. A. Tropp, “Algorithms for simultaneous sparse approximation. Part II: Convex relaxation,” Signal Process., vol. 86, pp. 589–602, Apr. 2006.
 [14] T. Tao and V. Vu, “On random ±1 matrices: Singularity and determinant,” Random Structures Algorithms, vol. 28, no. 1, pp. 1–23, 2006.
 [15] E. Candès, “The restricted isometry property and its implications for compressed sensing,” C. R. Acad. Sci. Paris, Ser. I, vol. 346, pp. 589–592, 2008.
 [16] Y. C. Eldar and M. Mishali, “Robust recovery of signals from a union of subspaces,” arXiv.org 0807.4581; submitted to IEEE Trans. Inform. Theory, Jul. 2008.
 [17] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, “A simple proof of the restricted isometry property for random matrices,” Constr. Approx., 2007.
 [18] T. Blumensath and M. E. Davies, “Sampling theorems for signals from the union of linear subspaces,” preprint, 2007.