# Recovery guarantees for multifrequency chirp waveforms in compressed radar sensing

Nithin Sugavanam and Emre Ertin
The Ohio State University
###### Abstract

Radar imaging systems transmit modulated wideband waveforms to achieve high range resolution, resulting in high sampling rates at the receiver proportional to the bandwidth of the transmit waveform. Analog processing techniques can be used on receive to reduce the number of measurements to $N$, the number of potential delay bins. If the scene interrogated by the radar is assumed to be sparse, consisting of $K$ point targets, results from compressive sensing suggest that the number of measurements can be further reduced to scale with the sparsity $K$, up to logarithmic factors, for stable recovery of a sparse scene from measurements with additive noise. While unstructured random projections guarantee successful recovery under sparsity constraints, they cannot be implemented in radar hardware in practice. Recently, structured random Toeplitz and circulant matrices, which arise from using stochastic waveforms in the time-delay estimation setting, have been shown to yield recovery guarantees similar to unstructured sensing matrices. However, the corresponding transmitter and receiver structures have high complexity and large storage requirements. In this paper, we propose an alternative low-complexity compressive wideband radar sensor which combines a multi-tone chirp waveform on transmit with a receiver that utilizes an analog mixer followed by a uniform sub-Nyquist sampling stage. We derive recovery guarantees for the resulting structured measurement matrix and sufficient conditions on the number of tones. The only random component of our design is the sparse tone spectrum, which can be implemented efficiently in hardware. Our analytical and empirical results show that the performance of our scheme is on par with unstructured random sensing matrices and structured Toeplitz and circulant matrices with random entries.

Compressive sensing, mutual coherence, structured measurement matrices, linear frequency modulated waveforms, radar.

## I Introduction

If a linear frequency modulated (LFM) waveform is used on transmit, matched filtering can be approximately implemented by mixing the received signal with a reference LFM waveform and low-pass filtering the mixer output. At the mixer output, each copy of the waveform delayed by $n\Delta$ appears as a tone whose frequency is given by $n\beta\Delta$, where $\beta$ is the chirp rate and $\Delta$ is the delay resolution. This pre-processing step is termed stretch processing [1, 2] and can result in a substantially reduced sampling rate for the ADC used at the mixer output if the delay support is smaller than the pulse length. Specifically, the received signal at the stretch processor's output can be written as:

$$y(t)=\sum_{n=1}^{N}x_n\,e^{j(n\beta\Delta)t}.\tag{1}$$

In essence, stretch processing converts the range profile estimation problem into a frequency spectrum estimation problem, with Nyquist-rate samples in time obtained after analog processing. If the scene can be assumed to have fewer targets $K$ than the number of delay bins $N$, well-known results [3, 4] from compressive sensing (CS) show that successful reconstruction from sub-Nyquist samples is possible with the number of measurements scaling with the sparsity $K$ up to logarithmic factors, if appropriate measurement operators can be implemented. Furthermore, there are numerous tractable algorithms with provable performance, based either on convex relaxation [5, 6, 7] or on greedy methods [8, 9], to solve the reconstruction problem. Motivated by these advances, compressed sensing techniques have been applied to a variety of problems in radar [10]: range profile estimation [11], single-pulse systems for range-Doppler estimation in [12], single-pulse multiple-transmit and receive systems for range-Doppler and azimuth estimation and target detection in [13, 14], remote sensing in [15], and direction of arrival estimation in [16]. CS-based radar sensors based on pure random waveforms [17], the Xampling framework [18], and the Random Modulator Pre-Integrator (RMPI) framework [19] using the receiver structure from [20] have also been implemented. A common theme in most of the CS literature has been randomization, as it leads to measurement matrices that have provable recovery guarantees. Implementation of randomness in compressive radar systems has proven to be a challenging task in practice: uncorrelated random signals with high peak-to-average power ratio are mismatched to the nonlinear power amplifiers used in radar systems, and the system bandwidth (and, as a result, the range resolution) is limited because digital-to-analog converters (DACs) have to be employed to generate precise random signals for transmit and receive mixing.

The LFM pulse model in (1) provides an alternative strategy, as it converts the range estimation problem into an equivalent sparse frequency spectrum estimation problem. Uniform subsampling in this setting has poor performance [21]. Non-uniform random sub-sampling can be used to obtain measurements with low mutual coherence [22, 21]. However, non-uniform sampling with commercially available ADCs still requires the converter to be rated at the Nyquist rate to accommodate closely spaced samples. We propose instead to use low-speed uniform sub-sampling with a high analog bandwidth ADC in the sparse frequency spectrum estimation setting, and to push the randomness to the transmit signal structure to obtain compressive measurements. ADCs whose analog bandwidth exceeds their maximum sampling rate by several factors are readily available and used routinely in pass-band sampling. The compressive radar structure proposed in [23] uses a linear combination of LFM waveforms with randomly selected center frequencies at the transmitter, while maintaining the simple standard stretch processing receiver structure. The output of the stretch processor receiver is given by

$$y(t)=\sum_{n=1}^{N}x_n\sum_{k=1}^{N_c}e^{j\phi_{n,k}}e^{j(n\beta\Delta+\omega_k)t},$$

where $\phi_{n,k}$ is a predetermined known complex phase and $N_c$ is the number of tones modulating the LFM waveform. We observe that under the proposed compressive sensor design, each delayed copy of the transmitted waveform is mapped to a multi-tone spectrum with known structure. As shown in this paper, this known multi-tone frequency structure enables recovery from aliased time samples with provable guarantees. These results complement previous work, which has shown good empirical performance in simulation and measurements [24].

The contribution of this paper is a theoretical analysis of this multi-frequency LFM system. We show that a system with a relatively small number of LFM waveforms has performance guarantees similar to a matrix with independent random entries, provided the number of tones modulating the LFM waveform is sufficiently large. We also present a numerical analysis comparing our system with other measurement schemes.

### Notation and Preliminaries

We denote a vector in the $N$-dimensional complex domain as $\mathbf{x}\in\mathbb{C}^{N}$. $\|\mathbf{x}\|_{0}$ is called the $\ell_0$ norm, given as the number of non-zero elements in a vector. Clearly, this is not a valid norm, but it is used in formulating the fundamental problem in compressed sensing. We denote by $\|\mathbf{x}\|_{1}$ the $\ell_1$ norm. The Euclidean or $\ell_2$ norm is given as $\|\mathbf{x}\|_{2}=\sqrt{\sum_i|x_i|^2}$. We denote a matrix as $\mathbf{A}$, its conjugate transpose as $\mathbf{A}^{*}$, and the identity matrix as $\mathbf{I}$, with dimensions dependent on the context. The spectral or operator norm $\|\mathbf{A}\|_{op}$ of the matrix is given as its largest singular value. The Frobenius norm of a matrix is given as $\|\mathbf{A}\|_F=\sqrt{\sum_{i,j}|A_{i,j}|^2}$. Another important quantity of interest is the mutual coherence $\mu(\mathbf{A})$, which is a measure of the correlation between the columns of the matrix $\mathbf{A}$. The mutual coherence is given as $\mu(\mathbf{A})=\max_{m_1\neq m_2}\frac{|\langle A(m_1),A(m_2)\rangle|}{\|A(m_1)\|_2\|A(m_2)\|_2}$, where $A(m)$ is a column of matrix $\mathbf{A}$. Another fundamental property of measurement matrices is the Restricted Isometry Property (RIP). A measurement matrix is said to satisfy RIP of order $K$ with constant $\delta_K\le\delta$ if for any K-sparse vector $\mathbf{x}$, $(1-\delta_K)\|\mathbf{x}\|_2^2\le\|\mathbf{A}\mathbf{x}\|_2^2\le(1+\delta_K)\|\mathbf{x}\|_2^2$, or

$$\text{equivalently,}\qquad\delta_K=\max_{\Gamma:\,\mathrm{card}(\Gamma)\le K}\left\|\mathbf{A}_{\Gamma}^{*}\mathbf{A}_{\Gamma}-\mathbf{I}\right\|\le\delta,\tag{2}$$

where $\Gamma$ is an index set that selects the columns of $\mathbf{A}$, $\mathrm{card}(\Gamma)$ refers to the number of elements in the set, and $\mathbf{A}_\Gamma$ is the restriction of $\mathbf{A}$ to the columns indexed by $\Gamma$. We denote the expectation operator as $\mathbb{E}$. The circularly-symmetric complex Gaussian distribution with mean $\mu$ and variance $\sigma^2$ is denoted as $\mathcal{CN}(\mu,\sigma^2)$.
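The mutual coherence defined above can be computed directly from the normalized Gram matrix; this small sketch checks it on a toy matrix (the example matrix is our illustration, not from the paper):

```python
import numpy as np

def mutual_coherence(A: np.ndarray) -> float:
    """Largest normalized inner product between distinct columns of A."""
    An = A / np.linalg.norm(A, axis=0)   # unit-normalize each column
    G = np.abs(An.conj().T @ An)         # magnitudes of the Gram matrix
    np.fill_diagonal(G, 0.0)             # exclude self-correlations
    return float(G.max())

# Columns e1 and (e1 + e2)/sqrt(2): their coherence is 1/sqrt(2).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(mutual_coherence(A))  # 0.7071...
```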

### I-A Relation to other works

Related results in the literature on recovery guarantees for compressed radar sensing can be broadly split in two categories: results on signal reconstruction establish uniform recovery guarantees for successful reconstruction of all K-sparse signals, whereas results on support recovery are concerned with detecting the non-zero locations of a K-sparse signal that is assumed to be sampled from a generic statistical model, such as uniform sampling from all possible support sets of size $K$ [26].

In this paper we show that randomly sampled K-sparse signals can be recovered with high probability using LASSO for the structured measurement matrix of the proposed compressive radar sensing scheme. Next, we show that the estimates of mutual coherence and column norms we obtain can be used to provide a uniform recovery guarantee following a standard argument.

Table I summarizes related results on support recovery for different measurement matrices. The upper bound on the sparsity level that guarantees successful support recovery for our scheme has an additional penalty compared to the unstructured Gaussian matrix analyzed by Candes and Plan in [26] and to block Toeplitz matrices with entries sampled from the Rademacher distribution analyzed by Bajwa in [28]. We note that a random matrix with independent entries is not realizable in the radar setting, but it is included in the table to provide a baseline.

Uniform recovery guarantees are often formulated in terms of satisfying the RIP with high probability, since if a measurement matrix satisfies RIP of order 2K with a sufficiently small constant $\delta_{2K}$, then all K-sparse vectors are successfully recovered with a reconstruction error comparable to that of an oracle estimator that knows the support of the sparse vector, or the support of its K largest elements [29].
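The RIP constant in (2) can be evaluated exactly for tiny matrices by enumerating all supports; the sketch below (our illustration, not the paper's code) confirms that a unitary DFT matrix, whose columns are orthonormal, has $\delta_K=0$:

```python
import numpy as np
from itertools import combinations

def rip_constant(A: np.ndarray, K: int) -> float:
    """Exact delta_K from (2) by brute force over all size-K supports;
    only feasible for very small N."""
    N = A.shape[1]
    delta = 0.0
    for gamma in combinations(range(N), K):
        sub = A[:, gamma]
        delta = max(delta, np.linalg.norm(sub.conj().T @ sub - np.eye(K), 2))
    return delta

# A unitary DFT matrix has orthonormal columns, so delta_K = 0.
F = np.fft.fft(np.eye(8)) / np.sqrt(8)
print(rip_constant(F, 2))  # ~0 up to round-off
```

The exponential number of supports is exactly why RIP is certified probabilistically, as in the results discussed next, rather than computed.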

Baraniuk et al. in [30] have shown that random matrices with i.i.d. entries from Gaussian or sub-Gaussian probability distributions satisfy the RIP condition for any $\delta\in(0,1)$ if $M\ge C_\delta K\log(N/K)$, where $C_\delta$ is dependent on $\delta$ and on the sub-Gaussian norm of the random variables. Although these unstructured random matrices have remarkable recovery guarantees, they do not represent any practical measurement scheme, which leads us to consider classical linear time-invariant (LTI) systems.

Typically, an active imaging system transmits a signal that interacts with a scene of interest, and the acquired measurements are used to estimate characteristics of the scene. The unknown environment is modeled as an LTI system whose transfer function has to be estimated using compressed measurements from the data acquisition step. It is assumed that there exists a sparse or compressible representation of the transfer function in some domain, and the goal is to solve the sparse estimation problem with the fewest possible measurements. This leads to a structured measurement matrix that is either a partial or sub-sampled Toeplitz or circulant matrix. The RIP condition of order $K$ for partial Toeplitz matrices in the context of channel estimation was established by Haupt et al. in [31]. They showed that $\delta_K\le\delta$ holds with high probability when the number of measurements scales as $M=O(K^2\log N)$. This quadratic scaling of the number of measurements with respect to sparsity was improved in [32, 33, 34]. Romberg in [32] considered an active imaging system that used a waveform with a random symmetric frequency spectrum and acquired compressed measurements using a random sub-sampler or random demodulator at the receiver to estimate the sparse scene. The resultant system is a randomly sub-sampled circulant matrix representing the convolution and compression process, and recovery of a K-sparse scene is guaranteed if the number of measurements scales as $K$ times logarithmic factors in $N$, with a universal constant independent of the size of the problem. This was extended by Rauhut et al. in [33]. They consider a deterministically sub-sampled random waveform in the time domain, with samples following the Rademacher distribution, which is modeled as a sub-sampled Toeplitz or circulant matrix with Rademacher entries. It was shown that, for a given sparsity level K, $\delta_K\le\delta$ holds with high probability if the number of measurements scales as $M=O((K\log N)^{3/2})$, with a universal constant. In the subsequent work by Krahmer et al. in [34], the relation between the sparsity level and the number of measurements is improved, and more general random variables, such as vectors following sub-Gaussian distributions, are considered to generate the Toeplitz or circulant matrix. It is shown that, for a given sparsity level K, the RIP condition is satisfied if the number of measurements scales as $M=O(K\log^2K\,\log^2N)$, where the constant is a function only of the sub-Gaussian norm of the random variables generating the matrix. We adopt a method similar to [31] and establish the RIP condition of order K, obtaining a similar result: $\delta_K\le\delta$ holds with high probability if $M=O(K^2\log N)$, with a constant independent of $N$.

The rest of the paper is organized as follows. Section II gives the mathematical model for the multi-frequency chirp system and the statistical model considered for the target. Section III states the main result about the measurement scheme employed for sparse recovery. Section IV contains detailed simulation results for the proposed multi-frequency chirp waveform. Section V contains the detailed proof of the main recovery result. We conclude with some future directions in Section VI.

## II Signal model and problem statement

### II-A Multi-frequency chirp model

We consider a radar sensor with collocated transmitter and receiver antennas, employing the compressive illumination framework proposed in [23] and [24] for estimating the range and complex reflectivity of reflectors in the scene. The chirp rate of all the transmitted linear frequency modulated (LFM) waveforms is fixed at $\beta/\tau$, where $\beta$ is the bandwidth of each transmitted waveform and $\tau$ is the pulse duration. We denote the unambiguous time interval as $T=2(R_{\max}-R_{\min})/v_c$, where $R_{\max}$, $R_{\min}$ are the maximum and minimum range in the area of interest, respectively, while $v_c$ is the velocity of light in vacuum. The whole range space is discretized based on the radar's resolution, which yields $N$ grid points. The system frequency band is divided into $N$ grid points with spacing $\beta/M$, which are used as center frequencies for the chirp waveforms. From the $N$ possible waveforms, a subset of size $N_c$ is chosen at random for transmission. We simplify this selection model by considering independent Bernoulli random variables as indicator variables to select the LFM waveforms, such that $N_c$ waveforms are selected on average. Let $\gamma_i$ be the random variable indicating that the $i$-th waveform is part of the selected subset. It can be seen that

$$\gamma_i=\begin{cases}0 & \text{with probability (w.p.) } 1-\frac{N_c}{N}\\[2pt] 1 & \text{w.p. } \frac{N_c}{N}.\end{cases}\tag{3}$$

The chosen LFM waveforms are then scaled by independent random variables, in one of two ways:

1. by a sequence of independent and identically distributed complex phases $\Phi_i$, uniformly distributed on $[0,2\pi)$,

$$c_i=\gamma_i\exp(j\Phi_i);\tag{4}$$

2. by a sequence of scaling variables $\xi_i$ following the Rademacher distribution, given by

$$\xi_i=\begin{cases}-1 & \text{w.p. } 0.5\\[2pt] 1 & \text{w.p. } 0.5,\end{cases}\qquad c_i=\gamma_i\xi_i.\tag{5}$$

We use the random-sign model in (5), in which the chirp waveforms are scaled by random signs, in our analysis, but use the random-phase model in (4) for the simulation results.
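A short sketch of this selection-and-scaling model (the sizes $N$ and $N_c$ below are example values of our choosing) is:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Nc = 512, 64   # example sizes: delay bins and average number of chirps

gamma = (rng.random(N) < Nc / N).astype(float)   # Bernoulli selection, (3)
xi = rng.choice([-1.0, 1.0], size=N)             # Rademacher signs, (5)
phi = rng.uniform(0.0, 2.0 * np.pi, size=N)      # uniform phases, (4)

c_sign = gamma * xi                  # random-sign model used in the analysis
c_phase = gamma * np.exp(1j * phi)   # random-phase model used in simulations

# Both models activate Nc waveforms on average and give E|c_i|^2 = Nc/N.
print(gamma.sum(), np.mean(np.abs(c_phase) ** 2))
```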

The transmitted signal can be written as

$$s(t)=\frac{1}{\sqrt{MN_c}}\sum_{i=0}^{N-1}c_i\exp\left(j2\pi\left(\left(f_c+\frac{i\beta}{M}\right)t+\frac{\beta}{2\tau}t^2\right)\right),$$

where $f_c$ is the carrier frequency. The receiver utilizes stretch processing at the same chirp rate as the transmitter and a fixed reference frequency to demodulate the carrier frequency and estimate the round-trip delay. The overall duration of the de-chirping waveform is $\tau+T$. The sampling rate employed at the receiver is $M/\tau$, so the total number of samples in the pulse duration is $M$. The output samples of the stretch processor due to targets at the different delay bins are

$$y(k)=\frac{1}{\sqrt{MN_c}}\sum_{i=0}^{N-1}\sum_{m=0}^{N-1}c_i\exp\left(-\frac{j2\pi im}{N}\right)\exp\left(2\pi j\left(\frac{ip}{M}-\frac{m}{N}\right)k\right)x(m)+w(k),$$

where $k=0,\dots,M-1$, $p=N/M$, $w(k)$ is a measurement noise process with zero mean and variance $\sigma^2$, and $x(m)$ is the complex scattering coefficient due to a target at delay bin $m$. This can be compactly written as

$$\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{w},\tag{6}$$

where $\mathbf{y},\mathbf{w}\in\mathbb{C}^{M}$, $\mathbf{x}\in\mathbb{C}^{N}$, and $\mathbf{A}\in\mathbb{C}^{M\times N}$. The sensing matrix can be represented as a series of deterministic matrices, corresponding to the responses to each of the chirp waveforms, scaled by zero-mean random coefficients, as shown below:

$$\mathbf{A}=\sum_{i=0}^{N-1}c_i\,\mathbf{H}_i\bar{\mathbf{A}}\mathbf{D}_i.\tag{7}$$

The individual components are as follows

$$\bar{\mathbf{A}}=\frac{1}{\sqrt{MN_c}}\left[\bar{A}(0)\ \cdots\ \bar{A}(N-1)\right],\qquad \bar{A}(r)=\left[1,\ e^{-\frac{2\pi jr}{N}},\ \dots,\ e^{-\frac{2\pi jr(M-1)}{N}}\right]^T,$$
$$\mathbf{D}_i=\mathrm{diag}\left[1,\ e^{-\frac{j2\pi i}{N}},\ \dots,\ e^{-\frac{j2\pi i(N-1)}{N}}\right],\qquad \mathbf{H}_i=\mathrm{diag}\left[1,\ e^{\frac{j2\pi ip}{M}},\ \dots,\ e^{\frac{j2\pi ip(M-1)}{M}}\right],\tag{8}$$

where $p=N/M$; the columns of $\bar{\mathbf{A}}$ are the samples of the tones that correspond to each delay bin, generated as a result of the de-chirping process in the case of a single-chirp system; $\mathbf{H}_i$ is the frequency shift due to the $i$-th chirp waveform; and $\mathbf{D}_i$ is the phase term associated with the different delay bins due to the $i$-th chirp. Each column of the sensing matrix can also be represented as

$$A(m)=\mathbf{E}_m\mathbf{F}\mathbf{G}_m\mathbf{c},\quad\text{where}$$
$$\mathbf{E}_m=\mathrm{diag}\left[1,\ e^{-\frac{j2\pi m}{N}},\ \dots,\ e^{-\frac{j2\pi m(M-1)}{N}}\right],$$
$$\mathbf{F}=\frac{1}{\sqrt{MN_c}}\left[F(0)\ \cdots\ F(N-1)\right],\qquad F(r)=\left[1,\ e^{\frac{2\pi jrp}{M}},\ \dots,\ e^{\frac{2\pi jrp(M-1)}{M}}\right]^T,$$
$$\mathbf{G}_m=\mathrm{diag}\left[1,\ e^{-\frac{j2\pi m}{N}},\ \dots,\ e^{-\frac{j2\pi m(N-1)}{N}}\right],\tag{9}$$

where $m=0,\dots,N-1$; $F(r)$ represents the tone generated by the $r$-th chirp center frequency; $\mathbf{E}_m$ is the tone due to a target present at delay bin $m$; $\mathbf{G}_m$ is the phase term due to the different chirp frequencies for a particular delay bin $m$; and $\mathbf{c}$ is the random vector that selects the chirp waveforms and scales them. A closer inspection of the matrix $\mathbf{F}$ reveals that each of the center frequencies used to shift the chirp waveforms is aliased into a lower-frequency tone, as we are sampling at a sub-Nyquist rate. We assumed $N=pM$ for integer $p$ in order to simplify the analysis, as we then get sub-sampled Discrete Fourier Transform (DFT) matrices. We impose the additional condition that $p$ should be co-prime with $M$ in order for the $N$ frequency tones to be uniformly mapped onto the $M$ frequency bins. A simple example is $p=M+1$, which makes $p$ co-prime with $M$ and circularly maps the N possible frequencies into M bins.
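The per-chirp form (7)-(8) and the per-column form (9) describe the same matrix; the following sketch (with example sizes of our choosing and $p=M+1$) builds $\mathbf{A}$ both ways and checks that they agree:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 8
p = M + 1      # co-prime with M, as suggested in the text
N = p * M      # N = pM so that the N tones alias uniformly onto M bins
Nc = 16

# Random selection and Rademacher scaling of the chirps.
c = (rng.random(N) < Nc / N) * rng.choice([-1.0, 1.0], N)

k = np.arange(M)          # fast-time sample index
m = np.arange(N)          # delay-bin / chirp index

# Construction 1: A = sum_i c_i H_i Abar D_i, per (7)-(8).
Abar = np.exp(-2j * np.pi * np.outer(k, m) / N) / np.sqrt(M * Nc)
A1 = np.zeros((M, N), dtype=complex)
for i in range(N):
    Hi = np.exp(2j * np.pi * i * p * k / M)   # frequency shift of chirp i
    Di = np.exp(-2j * np.pi * i * m / N)      # per-delay phase of chirp i
    A1 += c[i] * (Hi[:, None] * Abar * Di[None, :])

# Construction 2: columns A(m) = E_m F G_m c, per (9).
F = np.exp(2j * np.pi * p * np.outer(k, m) / M) / np.sqrt(M * Nc)
A2 = np.empty((M, N), dtype=complex)
for mm in range(N):
    Em = np.exp(-2j * np.pi * mm * k / N)
    Gm = np.exp(-2j * np.pi * mm * m / N)
    A2[:, mm] = Em * (F @ (Gm * c))

print(np.abs(A1 - A2).max())  # the two constructions coincide
```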

### II-B Target model

We consider a statistical model similar to Strohmer and Friedlander in [13] for the sparse range profile of the targets. We assume that the targets are located at the N discrete locations corresponding to the different delay bins. The support of the K-sparse range profile is chosen uniformly from all possible subsets of size $K$. The complex amplitudes of the non-zero components are assumed to have arbitrary magnitudes and phases uniformly distributed in $[0,2\pi)$.

### II-C Problem statement

Given a sparse scene with targets following the statistical model discussed in the previous section, and the measurement scheme in (6) with $M<N$ and sparsity level $K$, the goal of compressed sensing [3] is to recover the sparse or compressible vector $\mathbf{x}$ using the minimum number of measurements constructed using random linear projections. The search for the sparsest solution can be formulated as the optimization problem given below:

$$\min_{\mathbf{x}}\|\mathbf{x}\|_0\quad\text{subject to}\quad\|\mathbf{A}\mathbf{x}-\mathbf{y}\|_2\le\eta,\tag{10}$$

where $\eta$ bounds the noise level. This problem was shown to be NP-hard and hence intractable [35], and many approximate solutions have been found. One particular approach is a convex relaxation that replaces the non-convex $\ell_0$ norm objective with $\ell_1$ norm minimization:

$$\min_{\mathbf{x}}\|\mathbf{x}\|_1\quad\text{subject to}\quad\|\mathbf{A}\mathbf{x}-\mathbf{y}\|_2\le\eta.\tag{11}$$

This approach has been shown to successfully recover sparse or compressible vectors [6, 29], given that the sub-matrices formed by the columns of the sensing matrix are well conditioned. Our analysis is based on LASSO [7], a related method that solves the optimization problem in (11). It has been shown in [26] that, for an appropriate choice of $\lambda$ and suitable conditions on the measurement matrix, the support of the solution of the following optimization problem coincides with the support of the solution of the intractable problem in (10):

$$\min_{\mathbf{x}}\ \lambda\|\mathbf{x}\|_1+\frac{1}{2}\|\mathbf{A}\mathbf{x}-\mathbf{y}\|_2^2.\tag{12}$$
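The composite problem (12) can be solved with proximal gradient descent (ISTA); below is a minimal sketch on a toy complex Gaussian problem. The sizes, seed, and $\lambda$ are example choices of ours, and ISTA stands in for the SPGL1 solver used in the paper's simulations.

```python
import numpy as np

def soft(z, t):
    """Complex soft-thresholding: shrink magnitudes by t, keep phases."""
    mag = np.abs(z)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-30)) * z, 0.0)

def ista(A, y, lam, iters=2000):
    """Proximal gradient for min_x lam*||x||_1 + 0.5*||Ax - y||_2^2."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(iters):
        x = soft(x - A.conj().T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(2)
M, N, K = 64, 128, 3
A = (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(2 * M)
support = rng.choice(N, K, replace=False)
x0 = np.zeros(N, dtype=complex)
x0[support] = np.exp(1j * rng.uniform(0, 2 * np.pi, K))  # unit-modulus targets
y = A @ x0 + 1e-3 * (rng.normal(size=M) + 1j * rng.normal(size=M))

x_hat = ista(A, y, lam=0.05)
print(sorted(np.argsort(np.abs(x_hat))[-K:]), sorted(support))
```

With a well-conditioned $\mathbf{A}$ and targets well above the noise floor, the $K$ largest entries of the ISTA solution land on the true support, illustrating the support-recovery behavior analyzed below.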

The goal of our analysis is to show that the measurement model given in (7) satisfies the conditions on mutual coherence given in [26], and to find a bound on the sparsity level of the range profile that guarantees successful support recovery of almost all sparse signals using LASSO, with high probability, from noisy measurements. In addition, we provide an estimate of the number of measurements required for the sensing matrix representing our scheme to satisfy the RIP condition. The next section presents the main results of our analysis.

## III Recovery guarantees

In order to obtain non-asymptotic recovery guarantees for our system employing multiple chirps, we estimate tail bounds for the mutual coherence and the spectral (operator) norm of our measurement matrix. Using these estimates, we also provide conditions for the RIP condition of order $K$ to hold.
We make use of the Matrix Bernstein inequality given in lemma 6 to get a tail bound on the operator norm of the measurement matrix given in (7).

###### Lemma 1

Given the measurement matrix model in (7), if $N_c>\frac{4}{9}\log(N+M)$, then we can bound the tail probability for the operator norm as follows:

$$P\left(\|\mathbf{A}\|_{op}\ge 2\sqrt{\frac{N\log(N+M)}{M}}\right)\le\left(\frac{1}{N+M}\right)^{\alpha_2-1},\quad\text{where}\tag{13}$$
$$\alpha_2=\frac{2}{1+\frac{2}{3}\sqrt{\frac{\log(N+M)}{N_c}}}.$$

In addition, we also obtain an estimate of the expected value of the operator norm of measurement matrix given as

$$\mathbb{E}\left(\|\mathbf{A}\|_{op}\right)\le\sqrt{\frac{2N\log(N+M)}{M}}+\frac{\log(N+M)}{3}\sqrt{\frac{N}{MN_c}}.\tag{14}$$

The following results on the Euclidean norms of the columns and on the mutual coherence are obtained using concentration inequalities for quadratic forms of sub-Gaussian random vectors given in [36], which are extended to the complex domain in lemma 12.

###### Lemma 2

The concentration inequality for the minimum Euclidean norm over the columns of $\mathbf{A}$ is given as follows:

$$P\left(\min_m\|A(m)\|_2^2\le 1-\epsilon\right)\le 4N\exp\left(-Md\left(\frac{\epsilon q^*}{\left(\frac{N_c}{N}\right)^{\frac{2}{q^*}-1}}\right)^2\right),\tag{15}$$

where $d$ is a universal constant, $\epsilon\in(0,1)$, and

$$q^*=\max\left(1,\ 2\log\frac{N}{N_c}\right).$$
###### Lemma 3

If $M\ge(\log N)^3$, then there exists a constant $\alpha_3>0$ such that the mutual coherence of our sensing matrix is bounded by

$$P\left(\mu(\mathbf{A})\ge\frac{\alpha_3}{1-\epsilon}\sqrt{\frac{\log N}{M}}\right)\le\begin{cases}2N^{-(u_1-2)}+4N\exp(-Md\bar\epsilon^2), & \text{if }\log N>\dfrac{q^*\alpha_3}{(N_c/N)^{\frac{2}{q^*}-1}}\\[8pt] 2N^{-(u_2-2)}+4N\exp(-Md\bar\epsilon^2), & \text{otherwise,}\end{cases}\tag{16}$$

where $d$ is a universal constant, $\epsilon\in(0,1)$ and $\alpha_3>0$ are arbitrary constants, and

$$u_1=d\left(\frac{q^*\alpha_3}{(N_c/N)^{\frac{2}{q^*}-1}}\right)^2,\qquad u_2=\frac{q^*\alpha_3}{(N_c/N)^{\frac{2}{q^*}-1}}\,d\log N,$$
$$q^*=\max\left(1,\ 2\log\frac{N}{N_c}\right),\qquad \bar\epsilon=\frac{\epsilon q^*}{(N_c/N)^{\frac{2}{q^*}-1}}.$$
###### Theorem 1

For a measurement model $\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{w}$, where $\mathbf{A}$ is defined in (7), $\mathbf{x}$ is drawn from the K-sparse target model in the complex domain, and $\mathbf{w}$ is circularly-symmetric complex Gaussian noise with variance $\sigma^2$, the following conditions guarantee successful support recovery from solving (12) with regularizer $\lambda=2\sigma\sqrt{2\log N}$:

$$K\le K_{\max}=\frac{(1-\epsilon_1)\,\alpha_1 M}{\log N\,\log(N+M)},\tag{17}$$
$$M\ge(\log N)^3,\qquad \log N\ge\frac{q^*\alpha_3}{(N_c/N)^{\frac{2}{q^*}-1}},\tag{18}$$
$$N_c\ge\max\left(\tfrac{4}{9}\log(N+M),\ \nu N\right),\tag{19}$$
$$\min_{k\in I}|x_k|>\frac{8}{\sqrt{1-\epsilon}}\,\sigma\sqrt{2\log N},\tag{20}$$

with probability exceeding $\bar p_4-\bar p_1-\bar p_2-\bar p_3$ for some $\epsilon,\epsilon_1,\nu\in(0,1)$, where $\alpha_1$ is a universal constant independent of N, M, and where

$$\bar p_1=2N^{-(u_1-2)}+4N\exp(-dM\bar\epsilon^2),\qquad \bar p_2=\left(\frac{1}{N+M}\right)^{\alpha_2-1}+4N\exp(-dM\bar\epsilon^2),$$
$$\bar p_3=4N\exp(-dM\bar\epsilon^2),\qquad \bar p_4=1-2N^{-1}\left((2\pi\log N)^{-1/2}+KN^{-1}\right)-O(N^{-2\log 2}),$$
$$\alpha_2=\frac{2}{1+\frac{2}{3}\sqrt{\frac{\log(N+M)}{N_c}}},\qquad u_1=d\left(\frac{q^*\alpha_3}{(N_c/N)^{\frac{2}{q^*}-1}}\right)^2,$$
$$q^*=\max\left(1,\ 2\log\frac{N}{N_c}\right),\qquad \bar\epsilon=\frac{\epsilon q^*}{(N_c/N)^{\frac{2}{q^*}-1}}.$$

The proof in section V follows from a direct application of lemma 5 and uses the estimates of the spectral norm and the mutual coherence of the measurement matrix.

###### Theorem 2

For the measurement matrix given in (7) and any $\delta\in(0,1)$, the RIP condition $\delta_K\le\delta$ is satisfied with probability at least $1-p_5-p_6$ if the number of measurements scales as $M=O(K^2\log N)$, where

$$p_5=\left(\frac{1}{N}\right)^{u_3-2},\qquad p_6=4N\exp\left(-d\left(\frac{\epsilon q^*}{(N_c/N)^{\frac{2}{q^*}-1}}\right)^2 M\right),\qquad u_3=a\left(\frac{q^*}{(N_c/N)^{\frac{2}{q^*}-1}}\right)^2,$$

and $a$ is a constant independent of $N$ and $M$.

We adopt an approach similar to Haupt et al. in [31] and utilize the estimates of the inner products between columns of the sensing matrix and of their norms to obtain a simple bound on the number of measurements required to guarantee RIP of order K.

### Discussion

The support recovery guarantee stated in Theorem 1 is satisfied for almost all K-sparse vectors sampled from the generic sparse signal model discussed earlier; i.e., given a measurement matrix, one could still find a particular K-sparse vector for which recovery fails, although the probability of drawing such a vector can be made arbitrarily small by varying $\epsilon$ and $\epsilon_1$. This differs from worst-case guarantees, as well as from the reconstruction error bounds that depend on the Restricted Isometry Property (RIP) given in Theorem 2. The exponents in the probability tail bounds for quantities such as the mutual coherence $\mu(\mathbf{A})$ and the spectral norm of the measurement matrix are controlled by the number of chirp waveforms employed, $N_c$. Specifically, it can be seen that the upper bounds on the tail probabilities of the above estimates decrease as $N_c$ increases, up to $N_c=N$. We also show empirically in section IV that the expected value of the mutual coherence decreases as the number of chirp waveforms increases; specifically, it converges in mean to the mutual coherence of an unstructured random matrix with independent Gaussian entries, and thereby converges in probability as well. Typically, smaller values of $\mu(\mathbf{A})$ are desirable for robust recovery, as shown by Candes and Plan in [26], since this ensures that the Gram matrices of the sub-matrices formed using subsets of columns of the sensing matrix are well conditioned, as shown by Tropp in [37]. Since the minimum value of the signal has to be above the noise floor (20) for successful recovery, we get a condition on the signal-to-noise ratio (SNR) for a target located at a fixed range bin $r$, below which the recovery guarantee does not hold, given by

$$\mathrm{SNR}_r=\frac{|x_r|^2}{\sigma^2}\ge 128\,\kappa\log N,\qquad \kappa=\frac{1}{1-\epsilon}.$$

The authors in [13, 26] also show that the threshold on the SNR scales with $\log N$, with the constant determining the probability of successful recovery. In section IV we study the effect of SNR on the reconstruction error using simulations.

## IV Simulation examples

For our simulation studies, we consider a system with bandwidth $B$ from which we choose the center frequency of each chirp waveform at random, with each chirp sweeping a fraction $\beta$ of the system bandwidth. We note that a wideband multi-tone signal modulating an LFM waveform of bandwidth $\beta$ results in a system bandwidth of $B$ Hz, but we seek to resolve targets with a range resolution that corresponds to the full bandwidth $B$ Hz common to all modulated chirps. The fractional bandwidth, defined as the ratio $\beta/B$, represents the under-sampling ratio, as the stretch processor output is uniformly sub-sampled at that rate. We assume minimum and maximum ranges of the area of interest $R_{\min}$ and $R_{\max}$, respectively. The pulse duration is chosen such that the ratio $p=N/M$ is co-prime with the number of samples $M$. We make use of the model in (4) to select a subset of chirp waveforms and scale them with random phase terms to obtain the simulation results. The other measurement matrices with which we compare the performance of our scheme are:

1. a matrix $\mathbf{G}$ with i.i.d. complex Gaussian random entries,

2. a matrix $\mathbf{S}\mathbf{T}_1$, which is a partial Toeplitz matrix, where $\mathbf{S}$ is a uniform sub-sampling operator and

$$\mathbf{T}_1=\begin{bmatrix}t_N & t_{N-1} & \cdots & t_1\\ t_{N+1} & t_N & \cdots & t_2\\ \vdots & & & \vdots\\ t_{2N-1} & t_{2N-2} & \cdots & t_N\end{bmatrix},\qquad t_i\sim\mathcal{CN}(0,1),\ i=1,\dots,2N-1.$$

We generate 100 realizations of the matrices $\mathbf{G}$, $\mathbf{S}\mathbf{T}_1$, and $\mathbf{A}$ in order to observe the effect of under-sampling on the mutual coherence. It can be seen empirically from figure 1 that the mutual coherence of the sensing matrix representing our system converges in mean to the mutual coherence of the Gaussian sensing matrix $\mathbf{G}$ as the number of chirps increases. We observe that, even for small values of $N_c$, the mean mutual coherence of our measurement matrix is quite close to that of the matrix $\mathbf{G}$. In addition, we also evaluate the coherence of the Toeplitz matrix $\mathbf{S}\mathbf{T}_1$, which is representative of active imaging schemes using stochastic waveforms that are modeled as linear time-invariant systems.
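The empirical comparison described above can be reproduced in outline as follows; this is a small-scale sketch with our own sizes and trial count, comparing only the Gaussian and sub-sampled Toeplitz baselines:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, trials = 64, 256, 20   # example sizes, smaller than the paper's

def coherence(A):
    An = A / np.linalg.norm(A, axis=0)
    G = np.abs(An.conj().T @ An)
    np.fill_diagonal(G, 0.0)
    return float(G.max())

mu_gauss, mu_toep = [], []
idx = (N - 1) + np.arange(N)[:, None] - np.arange(N)[None, :]  # Toeplitz index
for _ in range(trials):
    G = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))
    mu_gauss.append(coherence(G))
    t = rng.normal(size=2 * N - 1) + 1j * rng.normal(size=2 * N - 1)
    T = t[idx]                       # N x N Toeplitz matrix T_1
    S_T = T[:: N // M, :][:M]        # uniform row sub-sampling S*T_1
    mu_toep.append(coherence(S_T))

print(np.mean(mu_gauss), np.mean(mu_toep))
```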

Next, we consider the recovery performance of our measurement system used in conjunction with basis pursuit denoising, using the SPGL1 solver developed by Van den Berg and Friedlander [38, 39], to estimate the unknown target locations and their amplitudes in the area of interest. For each realization of the measurement matrix, we generate multiple samples of the target range profile with a specified sparsity level, with the scattering coefficients at the specified locations sampled from a complex Gaussian distribution. The overall target range profile is normalized to obtain a fixed SNR. As a performance measure we consider a function of the mean square error, specifically a thresholding function at a fixed SNR of 25dB. We apply a threshold to the mean squared error and vary both the number of targets in the scene and the bandwidth of the chirp waveform, which in turn influences the sampling rate at the receiver. Again, it is clear from figure 2 that the performance becomes similar to that of the random Gaussian sensing matrix as the number of chirps increases. We also observe that the recovery performance of the Toeplitz matrix is similar to that of our system at lower values of the fractional bandwidth, but is better at higher fractional bandwidths.

Next, we examine the influence of noise on sparse target recovery using basis pursuit denoising with our measurement scheme. We fix the fractional bandwidth and vary the noise variance as well as the number of targets in the scene. In figure 3, the intensity of the image represents the mean square error on a dB scale. Figure 3 shows that, as the number of chirps increases, the mean square error achieved by our scheme approaches that achieved by the random Gaussian matrix $\mathbf{G}$. The reconstruction error for the Toeplitz matrix is marginally better at lower values of $N_c$ compared to our measurement scheme.

## V Proofs

###### Proof:

The result is obtained by a direct application of lemma 6, using the values of the matrix variance statistic and the uniform bound on the summands obtained from lemma 8 and lemma 9, respectively. The upper bound on the tail probability is given as

$$P\left(\|\mathbf{A}\|_{op}\ge t\right)\le(N+M)\exp\left(\frac{-t^2/2}{\sqrt{\frac{N}{MN_c}}\frac{t}{3}+\frac{N}{M}}\right).$$

By plugging in $t=2\sqrt{N\log(N+M)/M}$, we get the result in (13). For the tail probability to decay, we require that $\alpha_2>1$. This gives the condition $\frac{2}{3}\sqrt{\log(N+M)/N_c}<1$, which implies $N_c>\frac{4}{9}\log(N+M)$. Similarly, using the same estimates of the variance statistic and the uniform bound, we can bound the expected value of the operator norm of $\mathbf{A}$, as given in lemma 6, yielding (14). \qed

###### Proof:

The norm of a column $m$ of the sensing matrix can be written as follows:

$$\|A(m)\|_2^2=\mathbf{c}^*\mathbf{B}\mathbf{c},$$

where $\mathbf{B}=\mathbf{G}_m^*\mathbf{F}^*\mathbf{E}_m^*\mathbf{E}_m\mathbf{F}\mathbf{G}_m=\mathbf{G}_m^*\mathbf{F}^*\mathbf{F}\mathbf{G}_m$, and $\mathbf{c}$ is the sequence of random variables that selects and scales a subset of the chirp waveforms. It can be verified that the diagonal elements of the matrix B are as follows:

$$B_{i,i}=\frac{1}{N_c},\qquad i=1,\dots,N.$$

Since the random variables $c_i$ are independent with $\mathbb{E}(c_i^*c_j)=0$ for $i\ne j$ and $\mathbb{E}(|c_i|^2)=N_c/N$, the off-diagonal terms vanish in expectation and we get

$$\mathbb{E}\left(\|A(m)\|_2^2\right)=\sum_{i=1}^{N}\mathbb{E}\left(|c_i|^2\right)B_{i,i}=\frac{1}{N}\sum_{i=1}^{N}1=1.$$

Using results from lemma 10 and lemma 12 along with the result on sub-Gaussian norm from lemma 11, we have

$$P\left(\left|\|A(m)\|_2^2-1\right|>t\right)\le 4\exp\left(-d\,\frac{\left(\frac{N_c}{N}\right)^{\frac{2}{q^*}}}{\frac{q^*}{N_c}\left\lceil\frac{N}{M}\right\rceil}\min\left(\frac{t^2\left(\frac{N_c}{N}\right)^{\frac{2}{q^*}}M}{\frac{q^*}{N_c}\left\lceil\frac{N}{M}\right\rceil},\ t\right)\right),\tag{21}$$
and the same bound holds for the one-sided probability $P\left(\|A(m)\|_2^2<1-t\right)$.

The concentration inequality for the minimum norm over the columns follows from a union bound: for any $t\in(0,1)$,

$$P\left(\min_m\|A(m)\|_2^2\ge 1-t\right)\ge 1-N\,P\left(\|A(m)\|_2^2\le 1-t\right),\qquad P\left(\min_m\|A(m)\|_2^2\le 1-t\right)\le N\,P\left(\|A(m)\|_2^2\le 1-t\right).$$

Let $t=\epsilon$ for any $\epsilon\in(0,1)$. Using the approximation $\lceil N/M\rceil\approx N/M$, we get the required result. \qed

###### Proof:

We can express the inner product between any two columns $A(m_1)$ and $A(m_2)$ of the sensing matrix as

$$\langle A(m_1),A(m_2)\rangle=\mathbf{c}^T\bar{\mathbf{B}}\mathbf{c},$$

where $\bar{\mathbf{B}}=\mathbf{G}_{m_2}^*\mathbf{F}^*\mathbf{E}_{m_2}^*\mathbf{E}_{m_1}\mathbf{F}\mathbf{G}_{m_1}$. The diagonal terms of the matrix are given as

$$\bar B_{i,i}=D_M\left(\frac{m_1-m_2}{N}\right)\frac{1}{N_c}\exp\left(\frac{2\pi j(m_1-m_2)(i-1)}{N}\right),\qquad D_M\left(\frac{m_1-m_2}{N}\right)=\frac{1}{M}\sum_{t=0}^{M-1}\exp\left(2\pi j\left(\frac{m_1-m_2}{N}\right)t\right),$$

where $D_M(\cdot)$ denotes the normalized Dirichlet kernel. Since the $c_i$ are zero-mean independent random variables, the off-diagonal terms vanish in expectation; the remaining diagonal phases sum over a full period of $N$-th roots of unity for $m_1\ne m_2$, so we obtain the following expression for the expected value:

$$\mathbb{E}\left(\mathbf{c}^T\bar{\mathbf{B}}\mathbf{c}\right)=\sum_i\mathbb{E}\left(c_i^2\right)\bar B_{i,i}=0.$$
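The vanishing expectation hinges on the diagonal phases summing over a full period of $N$-th roots of unity; a quick numerical check (with arbitrary example values for $N$, $m_1$, $m_2$):

```python
import numpy as np

# For m1 != m2 the diagonal of B-bar carries the phases
# exp(2*pi*j*(m1-m2)*(i-1)/N), i = 1..N, which sum to zero.
N, m1, m2 = 32, 7, 19   # example values
i = np.arange(N)
total = np.exp(2j * np.pi * (m1 - m2) * i / N).sum()
print(abs(total))  # a full period of roots of unity sums to ~0
```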

Using the results from lemma 10 and lemma 12 along with the sub-Gaussian norm result from lemma 11, and making the approximation $\lceil N/M\rceil\approx N/M$, we see that there exists $\alpha_3>0$ such that

$$P\left(|\langle A(m_1),A(m_2)\rangle|>\alpha_3\sqrt{\frac{\log N}{M}}\right)\le 4\exp\left(-\frac{q^*d\alpha_3\log N}{\left(\frac{N_c}{N}\right)^{\frac{2}{q^*}-1}}\,h(N)\right),$$

where $h(N)=\min\left(\frac{q^*\alpha_3}{(N_c/N)^{2/q^*-1}},\ \log N\right)$. Distinguishing the two cases of this minimum, we get

$$P\left(|\langle A(m_1),A(m_2)\rangle|>\alpha_3\sqrt{\frac{\log N}{M}}\right)\le\begin{cases}4N^{-u_1} & \text{if }\log N>\dfrac{q^*\alpha_3}{(N_c/N)^{\frac{2}{q^*}-1}}\\[8pt] 4N^{-u_2} & \text{otherwise,}\end{cases}\tag{22}$$

where

$$u_1=d\left(\frac{q^*\alpha_3}{(N_c/N)^{\frac{2}{q^*}-1}}\right)^2,\qquad u_2=\frac{q^*\alpha_3}{(N_c/N)^{\frac{2}{q^*}-1}}\,d\log N.$$

The concentration inequality for the coherence of the matrix $\mathbf{A}$ can be obtained by using the following inequality:

$$\mu(\mathbf{A})\le\frac{\max_{m_1\ne m_2}|\langle A(m_1),A(m_2)\rangle|}{\min_{m}\|A(m)\|_2^2},$$
$$\begin{aligned}P\left(\mu(\mathbf{A})\ge\frac{\alpha_3}{1-\epsilon}\sqrt{\frac{\log N}{M}}\right)&\le P\left(\max_{m_1\ne m_2}|\langle A(m_1),A(m_2)\rangle|\ge\alpha_3\sqrt{\frac{\log N}{M}}\right)+P\left(\min_m\|A(m)\|_2^2\le 1-\epsilon\right)\\&\le\frac{N^2}{2}P\left(|\langle A(m_1),A(m_2)\rangle|\ge\alpha_3\sqrt{\frac{\log N}{M}}\right)+P\left(\min_m\|A(m)\|_2^2\le 1-\epsilon\right).\end{aligned}$$

Using (22) and (15) in the above expression, we get the result in (16). \qed

###### Proof:

Using the conditions (18) and (19) in (16), the coherence condition given in [26] is satisfied with high probability, as shown below:

$$\mu(\mathbf{A})=O\left(\frac{1}{\log N}\right)\quad\text{w.p.}\quad p_1\ge 1-2N^{-(u_1-2)}-4N\exp(-dM\bar\epsilon^2),\tag{23}$$

where $d$ is a constant independent of N and M,

$$u_1=d\left(\frac{q^*\alpha_3}{(N_c/N)^{\frac{2}{q^*}-1}}\right)^2,$$

and $\bar\epsilon$ is as defined in lemma 3. This establishes the condition in (18). We also note that the exponent in the probability tail bound in (23) depends on $N_c$, as the function $u_1$ is increasing in $N_c$ but decreasing in $N$.