# Compressed Remote Sensing of Sparse Objects

Albert C. Fannjiang, Department of Mathematics, University of California, Davis, CA 95616-8633; Pengchong Yan, Applied and Computational Mathematics, California Institute of Technology, Pasadena, CA 91125; and Thomas Strohmer, Department of Mathematics, University of California, Davis, CA 95616-8633
###### Abstract.

The linear inverse source and scattering problems are studied from the perspective of compressed sensing, in particular the idea that sufficient incoherence and sparsity guarantee uniqueness of the solution. By introducing the sensor as well as target ensembles, the maximum number of recoverable targets (MNRT) is proved to be at least proportional to the number of measurement data modulo a log-square factor with overwhelming probability.

Important contributions include the discoveries of the threshold aperture, consistent with the classical Rayleigh criterion, and the decoherence effect induced by random antenna locations.

The predictions of the theorems are confirmed by numerical simulations.

This research was supported in part by DARPA Grant N00014-02-1-0603 and NSF Grant DMS-0811169.

## 1. Introduction

We consider the imaging problem in the form of inverse source or scattering problem which has wide-range applications such as radar, sonar and computed tomography. The imaging problem is typically plagued by nonuniqueness and instability and hence mathematically challenging. Traditional methods such as matched field processing  [25] are limited in the number of targets that can be reliably recovered at high resolution. They often fail to detect a substantial number of targets, while at the same time they tend to produce artifacts obscuring the real target images. These limitations are due to the presence of noise and the fact that the imaging problem is in practice underdetermined. The standard regularization methods can handle to some extent the problem with noise but are inadequate to remedy the issue of nonuniqueness of the solution.

In this paper we utilize the fact that in many imaging applications the targets are sparse, in the sense that they typically occupy a small fraction of the overall region of interest (the target domain). This sparsity suggests approaching the imaging problem within the framework of compressed sensing.

At the core of compressed sensing lies the following problem (here we focus, as is common in the compressed sensing community, on the discrete setting). Assume $X \in \mathbb{C}^m$ is a signal that is sparse, i.e., the number of its non-zero components (measured by the $\ell_0$-quasinorm $\|X\|_0$, which is simply the number of non-zero entries of $X$) satisfies $\|X\|_0 \leq s \ll m$. Let $Y \in \mathbb{C}^n$ be the measurement data vector. We explore in this paper the linear inverse problem which can be formulated as $Y = AX$, where $A$ is an $n \times m$ matrix with $n < m$. The goal is to recover $X$, given the data vector $Y$ and the sensing matrix $A$ of full rank. As $n < m$, the system is severely underdetermined and unique reconstruction of $X$ is in general impossible.

However, due to the sparsity of $X$, one can attempt to compute $X$ by solving the optimization problem

 (L0) $\min \|X\|_0 \quad \text{s.t.} \quad AX = Y.$

Since (L0) is NP-hard and thus computationally infeasible, we consider instead its convex relaxation, also known as Basis Pursuit (BP),

 (L1) $\min \|X\|_1 \quad \text{s.t.} \quad AX = Y,$

which can be solved by linear and quadratic programming techniques. The amazing discovery due to David Donoho was that under certain conditions on the matrix and the sparsity of , both (L1) and (L0) have the same unique solution [14]. One such condition is the Restricted Isometry Property (RIP) due to Candes and Tao [7], which requires essentially that any submatrix of is an approximate isometry. This property is satisfied by a number of matrices such as Gaussian random matrices or random partial Fourier matrices [7, 5, 24]. In that case, as long as , with high probability the solution of (L1) will indeed coincide with the solution of (L0). Another conditon for which equivalence between (L0) and (L1) can be proven is based on the incoherence of the columns of , which refers to the property that the inner product of any two columns of is small [12, 17, 26]. Moreover, the performance of BP is stable w.r.t. the presence of noise and error [6, 13, 27]. Finally the computational complexity of BP can be significantly reduced by using the various greedy algorithms in place of the linear programming technique [9, 21, 22, 26, 27]. The most basic greedy algorithm relevant here is Orthogonal Matching Pursuit (OMP) which has been thoroughly analyzed in [26].

For the imaging problem, the sensing matrix represents a physical process (typically wave propagation) and thus its entries cannot be arbitrarily chosen at our convenience. Therefore we cannot simply assume that satisfies any of the conditions that make compressed sensing work. The few physical parameters that we have control over are the wavelength of the probe wave, the locations and number of sensors and the aperture of the probe array. This is one of the reasons that make the practical realization of compressed sensing a challenging task.

The paper is organized as follows. In Section 2 we describe the physical setup, formulate the imaging problem in the framework of compressed sensing and make qualitative statements of our main results. In Section 3 we prove the main result for the inverse source problem, in particular the coherence estimate (Section 3.1) and the spectral norm bound (Section 3.2). In Section 4, we prove the main result for the inverse Born scattering problem for the response matrix imaging (Section 4.1) and the synthetic aperture imaging (Section 4.2). In Section 5 and Appendix B, we discuss the numerical method and present simulation results that confirm qualitatively the predictions of our theorems. In Appendix A we discuss the RIP approach to our problems.

## 2. Problem formulations and main results

In this paper, we study the inverse source and scattering problems, both in the linear regime, to suit the current framework of compressed sensing. For simplicity and definiteness we consider the three dimensional space and assume that all targets are in the transverse plane $\{z = 0\}$ and all sensors are in another transverse plane $\{z = z_0\}$. The exact Green function for the Helmholtz equation, which governs the monochromatic wave propagation, is

 (1) $G(\mathbf{r}, \mathbf{a}) = \frac{e^{i\omega|\mathbf{r} - \mathbf{a}|}}{4\pi|\mathbf{r} - \mathbf{a}|}.$

We assume that the phase speed is unity so that the frequency $\omega$ equals the wavenumber.

We consider the Fresnel diffraction regime where the distance $z_0$ between the targets and the sensors is much larger than the wavelength $\lambda$ of the probe wave and the linear dimensions of the domains [3]:

 (2) $z_0 \gg A + L, \quad z_0 \gg \lambda,$

where $A$ is the aperture of the sensor array and $L$ is the linear dimension of the target domain. This is the remote sensing regime.

Under (2) the Green function (1) can be approximated by the universal parabolic form [3]

 (3) $G(\mathbf{r}, \mathbf{a}) = \frac{e^{i\omega z_0}}{4\pi z_0}\, e^{i\omega|x-\xi|^2/(2z_0)}\, e^{i\omega|y-\eta|^2/(2z_0)},$

which is called the paraxial Green function. This follows from the truncated Taylor expansion

 $|\mathbf{r} - \mathbf{a}| \approx z_0 + \frac{|x - \xi|^2}{2z_0} + \frac{|y - \eta|^2}{2z_0}$

under (2).
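The quality of this expansion is easy to check numerically. The sketch below (Python/NumPy, with hypothetical values $\lambda = 0.1$, $z_0 = 10^4$ chosen to satisfy (2)) compares the exact and paraxial path lengths through the phase error, the quantity that matters for the Green function:

```python
import numpy as np

# Hypothetical parameters deep in the Fresnel regime (2): z0 >> A + L >> lam.
lam = 0.1                # wavelength
z0 = 1.0e4               # target-to-sensor distance
omega = 2 * np.pi / lam  # wavenumber (unit phase speed)

# A target at r = (x, y, 0) and a sensor at a = (xi, eta, z0).
x, y = 50.0, -20.0
xi, eta = -40.0, 25.0

exact_dist = np.sqrt(z0**2 + (x - xi)**2 + (y - eta)**2)
parax_dist = z0 + (x - xi)**2 / (2 * z0) + (y - eta)**2 / (2 * z0)

# Phase error omega * |exact - paraxial|: a small fraction of a radian here.
phase_err = omega * abs(exact_dist - parax_dist)
print(phase_err)
```

The leading neglected Taylor term scales like $\omega\,|\mathbf{r} - \mathbf{a}|_\perp^4 / (8 z_0^3)$, so the phase error shrinks rapidly as $z_0$ grows relative to the transverse offsets.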

In the case of the inverse source problem, the corresponding sensing matrix is essentially made of the paraxial Green function evaluated at various points in the sensor array and the target domain. In this set-up, the entries (3) of the paraxial sensing matrix have the same magnitude, so without loss of generality the column vectors of $A$ are assumed to have unit $\ell^2$-norm.

A key idea in our construction of a suitable sensing matrix is to randomize the locations of the sensors within a fixed aperture (a square of side $A$, for example). Specifically, we assume that the antenna locations $(\xi_l, \eta_l)$, $l = 1, \ldots, n$, are independently and uniformly distributed in a square array $[0, A]^2$ in the plane $\{z = z_0\}$. Define the sensor ensemble to be the sample space of $n$ i.i.d. uniformly distributed points in $[0, A]^2$.

We consider the idealized situation where the locations of the targets are a subset of a square lattice. More precisely, let the target domain be a regular square sub-lattice of $m$ points with mesh size $\ell$ in the transverse plane $\{z = 0\}$. Hence the total number $m$ of grid points is a perfect square. We defer the discussion on extended targets to the concluding section.

Let $\{\mathbf{r}_{j_1}, \ldots, \mathbf{r}_{j_s}\}$ be the set of target locations and $\sigma_{j_1}, \ldots, \sigma_{j_s}$ be the (source or scattering) amplitudes of the targets. Set $\sigma_j = 0$ for $j \notin \{j_1, \ldots, j_s\}$. Define the target vector to be $X = (\sigma_j) \in \mathbb{C}^m$. We consider the target ensemble consisting of target vectors with at most $s$ non-zero entries whose phases are independently, uniformly distributed in $[0, 2\pi)$ and whose support indices are independently and randomly selected from the index set $\{1, \ldots, m\}$. The number $s$ is called the sparsity of the target vector.

For source inversion the targets emit the paraxial waves described by (3) which are then recorded by the sensors. The measurement vector can be written as

 (4) $Y = AX,$

where the $n \times m$ matrix $A$ has the entries

 (5) $A_{ij} = G(\mathbf{a}_i, \mathbf{r}_j), \quad i = 1, \ldots, n, \quad j = 1, \ldots, m.$
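A minimal sketch of this measurement model in Python/NumPy follows; the grid size, wavelength, and range are hypothetical, with the aperture set, for illustration, at $\lambda z_0/\ell$ (the threshold value discussed below):

```python
import numpy as np

rng = np.random.default_rng(1)

lam, z0, ell = 0.1, 1.0e4, 10.0    # hypothetical wavelength, range, mesh size
omega = 2 * np.pi / lam
n, sqrt_m = 50, 10                 # 50 sensors, 10 x 10 target lattice
m = sqrt_m**2
A_ap = lam * z0 / ell              # threshold aperture (rho = 1)

# Random sensor locations (xi, eta) i.i.d. uniform in the square [0, A]^2.
xi, eta = rng.uniform(0, A_ap, (2, n))

# Square target lattice of mesh size ell in the plane z = 0.
xs, ys = np.meshgrid(ell * np.arange(sqrt_m), ell * np.arange(sqrt_m))
xs, ys = xs.ravel(), ys.ravel()

# Paraxial Green function (3); the constant prefactor is dropped since the
# columns are normalized to unit l2-norm anyway.
A = np.exp(1j * omega * ((xs - xi[:, None])**2
                         + (ys - eta[:, None])**2) / (2 * z0))
A /= np.linalg.norm(A, axis=0)

print(A.shape)  # (50, 100): an n x m sensing matrix with unit-norm columns
```

Every entry has the same magnitude, so the normalization affects only a global constant, consistent with the convention adopted above.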

The first main result proved in this paper can be stated roughly as follows (see Theorem 2 and Remark 3 for the precise statement).

Result A. Suppose

 (6) $\frac{\ell A}{\lambda z_0} \equiv \frac{1}{\rho} \in \mathbb{N}.$

For the product ensemble of targets and sensors, sources of sparsity up to $\mathcal{O}(n/\ln^2{m})$ can be exactly recovered by BP with overwhelming probability.

When only the sensor ensemble is considered, all sources of sparsity up to $\mathcal{O}(\sqrt{n})$ can be exactly recovered by BP and OMP with overwhelming probability.

The relation (6) indicates the existence of the threshold, optimal aperture given by $A = \lambda z_0/\ell$, corresponding to $\rho = 1$ (see Remark 2 for more discussion on this point). Since the mesh size $\ell$ has the meaning of resolution, this is consistent with the classical Rayleigh criterion [3]

 (7) $\ell \geq \frac{\lambda z_0}{A}.$

Our numerical simulations (Figure 1) indeed indicate that (7) is sufficient to realize the performance stated in Result A.

Next we consider two imaging settings where the targets are scatterers instead of sources. For point scatterers of amplitudes $\sigma_{j_1}, \ldots, \sigma_{j_s}$ located at $\mathbf{r}_{j_1}, \ldots, \mathbf{r}_{j_s}$, the resulting Green function $\widetilde{G}$, including the multiple scattering effect, obeys the Lippmann-Schwinger equation

 $\widetilde{G}(\mathbf{r}, \mathbf{a}_i) = G(\mathbf{r}, \mathbf{a}_i) + \sum_{l=1}^{s}\sigma_{j_l}\, G(\mathbf{r}, \mathbf{r}_{j_l})\, \widetilde{G}(\mathbf{r}_{j_l}, \mathbf{a}_i), \quad i = 1, \ldots, n.$

The exciting field $\widetilde{G}(\mathbf{r}_{j_l}, \mathbf{a}_i)$ is part of the unknown and can be solved for from the so-called Foldy-Lax equation (see e.g. [15] for details).

Hence, the inverse scattering problem is intrinsically nonlinear. However, a linear scattering model is often a good approximation and is widely used in, e.g., radar imaging in the regimes of physical optics and geometric optics [2, 8] (see [29] for a precise formulation of the condition).

One such model is the Born approximation (also known as Rayleigh-Gans scattering in optics) in which the unknown exciting field is replaced by the incident field resulting in

 (8) $\widetilde{G}(\mathbf{r}, \mathbf{a}_i) - G(\mathbf{r}, \mathbf{a}_i) = \sum_{l=1}^{s}\sigma_{j_l}\, G(\mathbf{r}, \mathbf{r}_{j_l})\, G(\mathbf{r}_{j_l}, \mathbf{a}_i), \quad i = 1, \ldots, n.$

The left hand side of (8) is precisely the scattered field when the incident field is emitted from a point source at $\mathbf{a}_i$. The Born approximation linearizes the relation between the scatterers and the scattered field. The goal of inverse scattering is to reconstruct the targets given the measurements of the scattered field.

For the response matrix (RM) imaging [15, 16], we use the real array aperture as in the inverse source problem discussed above, except the array is also the source of probe waves. One by one, each antenna of the array emits an impulse and the entire array receives the echo. Each transmitter-receiver pair gives rise to a datum, and there are altogether $n^2$ data forming an $n \times n$ data matrix called the response matrix. These data represent the responses of the targets to the interrogating waves.

From (8) we see that the corresponding $n^2 \times m$ sensing matrix $A^{\rm RM}$ has the entries

 $A^{\rm RM}_{lj} = G(\mathbf{a}_i, \mathbf{r}_j)\, G(\mathbf{r}_j, \mathbf{a}_k), \quad l = 1, \ldots, n^2, \quad j = 1, \ldots, m,$

where $l$ is related to $(i, k)$ as

 $l = (i-1)n + k.$

In the second setting, called the synthetic aperture (SA) imaging, the real, physical array consists of only one antenna. The imaging aperture is synthesized by the antenna taking different transmit-receive positions [16].

The SA imaging considered here is motivated by synthetic aperture radar (SAR) imaging. SAR is a technique where a substantial aperture can be synthesized by moving a transmit-receive antenna along a trajectory and repeatedly interrogating a search area by firing repeated pulses from the antenna and measuring the responses. This can greatly leverage a limited probe resource and has many applications in remote sensing. The image formation is typically obtained via the matched filter technique and analyzed in the Born approximation [8].

Here we consider a simplified set-up, neglecting the Doppler effect associated with the relative motion between the antenna and targets. In this case, the sensing matrix has the entries

 (9) $A^{\rm SA}_{ij} = G^2(\mathbf{a}_i, \mathbf{r}_j), \quad i = 1, \ldots, n, \quad j = 1, \ldots, m.$

A crucial observation about SA imaging is that

 (10) $G^2(\mathbf{a}_i, \mathbf{r}_j; \omega) \sim G(\mathbf{a}_i, \mathbf{r}_j; 2\omega)$

modulo a $z_0$-dependent factor which does not matter.
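This relation is immediate from (3): squaring doubles every phase, which is the same as replacing $\omega$ by $2\omega$, leaving only a constant $1/(4\pi z_0)$ mismatch. A quick numerical confirmation (Python/NumPy, hypothetical parameters):

```python
import numpy as np

lam, z0 = 0.1, 1.0e4   # hypothetical wavelength and range

def parax_green(x, y, xi, eta, omega):
    """Paraxial Green function (3) for target (x, y, 0) and sensor (xi, eta, z0)."""
    return (np.exp(1j * omega * z0) / (4 * np.pi * z0)
            * np.exp(1j * omega * ((x - xi)**2 + (y - eta)**2) / (2 * z0)))

omega = 2 * np.pi / lam
pts = [(3.0, -7.0, 40.0, 5.0), (0.0, 12.0, -9.0, 60.0)]
ratios = [parax_green(*p, omega)**2 / parax_green(*p, 2 * omega) for p in pts]

# G^2(omega) / G(2*omega) equals the same constant 1/(4*pi*z0) at every point.
print(np.allclose(ratios, 1 / (4 * np.pi * z0)))
```

Since the mismatch is position-independent, it cancels out in the column normalization and in the coherence of the sensing matrix.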

The following is a rough statement for inverse Born scattering (Theorems 7, 8 and Remarks 4, 5) proved in Section 4.

Result B. (i) For RM imaging, assume the aperture condition (6).

For the product ensemble of sensor and target, scatterers of sparsity up to $\mathcal{O}(n^2/\ln^2{m})$ can be reconstructed exactly by BP with overwhelming probability.

When only the sensor ensemble is considered, all scatterers of sparsity up to $\mathcal{O}(n)$ can be exactly recovered by BP and OMP with overwhelming probability.

(ii) For SA imaging, assume the aperture condition

 (11) $2/\rho \in \mathbb{N},$

which is weaker than (6).

For the product ensemble of sensor and target, scatterers of sparsity up to $\mathcal{O}(n/\ln^2{m})$ can be reconstructed exactly by BP with overwhelming probability.

When only the sensor ensemble is considered, all scatterers of sparsity up to $\mathcal{O}(\sqrt{n})$ can be exactly recovered by BP and OMP with overwhelming probability.

As a result of the SA aperture condition (11), the corresponding optimal aperture is half of that for the inverse source and RM imaging. In other words, SA can produce the qualitatively optimal performance with half of the aperture. This two-fold enhancement of resolving power in SA imaging has been previously established for the matched-field imaging technique [16].

Our numerical simulations (Section 5) confirm qualitatively the predictions of Result A and B, in particular the threshold aperture and the asymptotic number of recoverable targets.

Currently there are two avenues to compressed sensing [4]: the incoherence approach and the RIP (restricted isometry property) approach. When the RIP approach works, the results are typically superior in that all targets under a slightly weaker sparsity constraint can be uniquely determined by BP without introducing the target ensemble. We demonstrate the strength of the RIP approach for our problems in Appendix A (see Theorem 11 and Theorem 12 for stronger results than Result A and Result B (ii), respectively). However, Result B(i) seems unattainable by the RIP approach at present, particularly the quadratic-in-$n$ behavior of the sparsity constraint. On the other hand, the incoherence approach gives a unified treatment to all three results and is therefore adopted in the main text of the paper.

## 3. Source inversion

Let $G(\mathbf{r}, \mathbf{a})$ be the Green function of the time-invariant medium and let $\mathbf{G}(\mathbf{r})$ be the Green vector

 (12) $\mathbf{G}(\mathbf{r}) = [G(\mathbf{r}, \mathbf{a}_1), G(\mathbf{r}, \mathbf{a}_2), \ldots, G(\mathbf{r}, \mathbf{a}_n)]^t,$

where $t$ denotes the transpose. For the matrix (5) define the coherence $\mu(A)$ of the matrix by

 $\mu(A) = \max_{i \neq j}\frac{\left|\mathbf{G}^*(\mathbf{p}_i)\,\mathbf{G}(\mathbf{p}_j)\right|}{\|\mathbf{G}(\mathbf{p}_i)\|\,\|\mathbf{G}(\mathbf{p}_j)\|},$

The following theorem is a reformulation of results due to Tropp [28] and the foundation of the imaging techniques developed in this paper.

###### Theorem 1.

Let $X$ be drawn from the target ensemble. Assume that

 (13) $\mu^2 s \leq \left(8\ln\frac{m}{\epsilon}\right)^{-1}, \quad \epsilon \in (0, 1),$

and that, for some $p > 1$,

 (14) $3\left(\frac{p\ln{s}}{2\ln\frac{m}{\epsilon}}\right)^{1/2} + \frac{s}{m}\|A\|_2^2 \leq \frac{1}{4e^{1/4}}.$

Then $X$ is the unique solution of BP with probability at least $1 - 2\epsilon - s^{-p}$. Here $\|A\|_2$ denotes the spectral norm of $A$.

We explain the connection of the theorem with [28] in Appendix B.

###### Theorem 2.

Let the target vector be randomly drawn from the target ensemble and the antenna array be randomly drawn from the sensor ensemble and suppose

 (15) $\frac{\ell A}{\lambda z_0} \equiv \frac{1}{\rho} \in \mathbb{N}.$

If

 (16) $m \leq \frac{\delta}{2}\, e^{K^2/2}, \quad \delta, K > 0,$

then the targets of sparsity up to

 (17) $s \leq \frac{n}{64\ln\frac{2m}{\delta}\,\ln\frac{m}{\epsilon}}$

can be recovered exactly by BP with probability greater than or equal to

 (18) $\left[1 - 2\delta - \frac{\rho\, n(n-1)^{3/2}}{m^{1/2}}\right] \times \left[1 - 2\epsilon - s^{-p}\right], \quad p = \frac{\ln{m} - \ln{\epsilon}}{288\sqrt{e}\,\ln{s}}.$
###### Proof.

The proof of the theorem hinges on the following two estimates.

###### Theorem 3.

Assume (15) and

 (19) $m \leq \frac{\delta}{2}\, e^{K^2/2}$

for some positive $\delta$ and $K$. Then the coherence of $A$ satisfies

 (20) $\mu(A) \leq \sqrt{2}K/\sqrt{n}$

with probability greater than $1 - 2\delta$.

###### Remark 1.

The general lower bound for coherence [10, 30]

 $\sqrt{\frac{m - n}{n(m - 1)}} \leq \mu \leq 1$

implies that the coherence bound (20) is optimal modulo a constant factor.
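Both the coherence and this lower bound are easy to evaluate numerically; here is a sketch (Python/NumPy, hypothetical dimensions, with a random complex matrix standing in for the sensing matrix):

```python
import numpy as np

def coherence(A):
    """Maximum normalized inner product between distinct columns of A."""
    B = A / np.linalg.norm(A, axis=0)   # unit-norm columns
    gram = np.abs(B.conj().T @ B)       # |G*(p_i) G(p_j)| matrix
    np.fill_diagonal(gram, 0.0)         # ignore the diagonal (i = j)
    return gram.max()

rng = np.random.default_rng(2)
n, m = 30, 90
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

mu = coherence(A)
welch = np.sqrt((m - n) / (n * (m - 1)))   # general lower bound [10, 30]
print(welch <= mu <= 1.0)
```

For this random example the coherence sits well above the lower bound, consistent with the bound (20) being optimal only modulo a constant factor.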

###### Remark 2.

Since the coherence of the sensing matrix should decrease as the aperture increases, and since the analysis in Section 3.1 shows that the coherence is of the same order of magnitude as $1/\sqrt{n}$ whenever (15) holds, simple interpolation leads to the conclusion that the coherence should be roughly constant for

 (21) $A \geq \frac{\lambda z_0}{\ell},$

corresponding to $\rho \leq 1$. The right hand side of (21), corresponding to $\rho = 1$, defines the optimal aperture.

###### Theorem 4.

The matrix $A$ has full rank and its spectral norm satisfies the bound

 (22) $\|A\|_2^2 \leq 2m/n$

with probability greater than

 (23) $1 - \frac{\rho\, n(n-1)^{3/2}}{m^{1/2}}, \quad \rho = \frac{\lambda z_0}{\ell A}.$
###### Remark 3.

By the theorems of Donoho and Elad [12] and Tropp [26], the targets of sparsity

 $s < \frac{1}{2}\left(1 + \frac{1}{\mu(A)}\right)$

can be recovered exactly by BP as well as by Orthogonal Matching Pursuit (OMP).

Theorems 3 and 4 imply that with probability greater than

 $1 - 2\delta - \frac{\rho\, n(n-1)^{3/2}}{m^{1/2}}$

of the sensor ensemble, all targets of sparsity

 $s < \frac{1}{2}\left(1 + \frac{\sqrt{n}}{\sqrt{2}K}\right)$

can be recovered exactly by BP as well as OMP.

Condition (17) implies the existence of $K$ such that

 (24) $2\ln\frac{2m}{\delta} \leq K^2 \leq \frac{n}{32\, s\ln\frac{m}{\epsilon}}.$

As a consequence, (19) and (13) are satisfied with probability greater than $1 - 2\delta$ by Theorem 3.

Now the norm bound (22) implies (14) if

 (25) $3\left(\frac{p\ln{s}}{2\ln\frac{m}{\epsilon}}\right)^{1/2} + \frac{2s}{n} \leq \frac{1}{4e^{1/4}}, \quad p > 1,$

which in turn follows from (17) and the condition

 $\ln\frac{2m}{\delta}\,\ln\frac{m}{\epsilon} \geq \frac{1}{96}\left(\frac{1}{2e^{1/4}} - \left(\frac{p\ln{s}}{2\ln\frac{m}{\epsilon}}\right)^{1/2}\right)^{-1}.$

Hence, for sufficiently large $s$ (and hence $m$), we can choose $p$ in (14) to be

 $p = \frac{\ln{m} - \ln{\epsilon}}{72\sqrt{e}\,\ln{s}}.$

Since Theorems 3 and 4 hold with probability greater than

 $1 - 2\delta - \frac{\rho\, n(n-1)^{3/2}}{m^{1/2}},$

and since the target ensemble is independent of the sensor ensemble, we have the bound (18) for the probability of exact recovery. ∎

### 3.1. Proof of Theorem 3: coherence estimate

###### Proof.

Summing over the sensor index $l$ we obtain

 (26) $\sum_{l=1}^{n} A^*_{li}A_{lj} = e^{i\omega(x_j^2 + y_j^2 - x_i^2 - y_i^2)/(2z_0)}\, \frac{1}{n}\sum_{l=1}^{n} e^{i\xi_l\omega(x_i - x_j)/z_0}\, e^{i\eta_l\omega(y_i - y_j)/z_0}.$

Define the random variables $X_l$, $Y_l$ as

 (27) $X_l = \cos\left[(\xi_l(x_i - x_j) + \eta_l(y_i - y_j))\,\omega/z_0\right]$
 (28) $Y_l = \sin\left[(\xi_l(x_i - x_j) + \eta_l(y_i - y_j))\,\omega/z_0\right]$

and their respective sums

 $S_n = \sum_{l=1}^{n} X_l, \quad T_n = \sum_{l=1}^{n} Y_l.$

Then the absolute value of the right hand side of (26) is bounded by

 (29) $\frac{1}{n}|S_n + iT_n| \leq \frac{1}{n}\left(|S_n - \mathbb{E}S_n| + |T_n - \mathbb{E}T_n| + |\mathbb{E}(S_n + iT_n)|\right).$

To estimate the right hand side of (29), we recall the Hoeffding inequality [20].

###### Proposition 1.

Let $X_1, \ldots, X_n$ be independent random variables with sum $S_n = \sum_{l=1}^n X_l$. Assume that $X_l \in [a_l, b_l]$ almost surely. Then we have

 (30) $P\left[|S_n - \mathbb{E}S_n| \geq nt\right] \leq 2\exp\left[\frac{-2n^2t^2}{\sum_{l=1}^{n}(b_l - a_l)^2}\right]$

for all positive values of $t$.

We apply the Hoeffding inequality to both $S_n$ and $T_n$. To this end, we have $b_l - a_l \leq 2$ and set

 $t = K/\sqrt{n}, \quad K > 0.$

Then we obtain

 (31) $P\left[n^{-1}|S_n - \mathbb{E}S_n| \geq K/\sqrt{n}\right] \leq 2e^{-K^2/2}$
 (32) $P\left[n^{-1}|T_n - \mathbb{E}T_n| \geq K/\sqrt{n}\right] \leq 2e^{-K^2/2}.$

Note that the quantities $S_n, T_n$ depend on the differences $x_i - x_j$, $y_i - y_j$, i.e.

 $S_n = S_n(x_i - x_j, y_i - y_j), \quad T_n = T_n(x_i - x_j, y_i - y_j).$

We use (31)-(32) and the union bound to obtain

 (33) $P\left[\max_{i \neq j} n^{-1}\left|S_n(x_i - x_j, y_i - y_j) - \mathbb{E}S_n(x_i - x_j, y_i - y_j)\right| \geq K/\sqrt{n}\right] \leq 2(m-1)e^{-K^2/2}$
 (34) $P\left[\max_{i \neq j} n^{-1}\left|T_n(x_i - x_j, y_i - y_j) - \mathbb{E}T_n(x_i - x_j, y_i - y_j)\right| \geq K/\sqrt{n}\right] \leq 2(m-1)e^{-K^2/2}.$

Hence, if (19) holds for some small number $\delta$, then the right hand side of each of (33)-(34) is less than $\delta$.

The third term on the right hand side of (29) can be calculated as follows. By the mutual independence of $\xi_l$ and $\eta_l$ we have

 $\frac{1}{n}|\mathbb{E}(S_n + iT_n)| = \frac{1}{n}\left|\sum_{l=1}^{n}\mathbb{E}(X_l + iY_l)\right| = \frac{1}{n}\left|\sum_{l=1}^{n}\mathbb{E}\left(e^{i\xi_l\omega(x_i - x_j)/z_0}\right)\mathbb{E}\left(e^{i\eta_l\omega(y_i - y_j)/z_0}\right)\right| = \left|\mathbb{E}\, e^{i\xi_1\omega(x_i - x_j)/z_0}\right|\left|\mathbb{E}\, e^{i\eta_1\omega(y_i - y_j)/z_0}\right|,$

since the $\xi_l, \eta_l$ are independently identically distributed.

Simple calculation with the uniform distribution on $[0, A]$ yields

 (35) $\left|\mathbb{E}\, e^{i\xi_1\omega(x_i - x_j)/z_0}\right|\left|\mathbb{E}\, e^{i\eta_1\omega(y_i - y_j)/z_0}\right| = \left|\frac{e^{i\phi_{ij}} - 1}{\phi_{ij}}\right|\left|\frac{e^{i\psi_{ij}} - 1}{\psi_{ij}}\right| = 4\left|\frac{\sin(\phi_{ij}/2)}{\phi_{ij}}\right|\left|\frac{\sin(\psi_{ij}/2)}{\psi_{ij}}\right|$

with

 $\phi_{ij} = A\omega(x_i - x_j)/z_0, \quad \psi_{ij} = A\omega(y_i - y_j)/z_0.$

The optimal condition is to choose the aperture $A$ such that

 (36) $\phi_{ij}, \psi_{ij} \in 2\pi\mathbb{Z},$

under which (35) vanishes. Condition (36) can be fulfilled for an equally spaced grid as is assumed here. Let

 $\ell = \min_{i \neq j}|x_i - x_j| = \min_{i \neq j}|y_i - y_j|.$

The smallest $\ell$ satisfying condition (36) is given by

 (37) $\ell = \frac{z_0\lambda}{A}, \quad \lambda = 2\pi/\omega,$

which can be interpreted as the resolution of the imaging system and is equivalent to the classical Rayleigh criterion.

In this case, $\mathbb{E}(S_n + iT_n) = 0$ and hence

 $\mu(A) \leq \sqrt{2}K/\sqrt{n}$

with probability greater than $1 - 2\delta$ under the condition (19). ∎

### 3.2. Proof of Theorem 4: spectral norm bound

###### Proof.

For the proof, it suffices to show that the matrix $\frac{n}{m}AA^*$ satisfies

 (38) $\left\|\frac{n}{m}AA^* - I_n\right\|_2 < 1$

where $I_n$ is the $n \times n$ identity matrix, with the corresponding probability bound. By the Gershgorin circle theorem, (38) would in turn follow from

 (39) $\mu\left(\sqrt{\tfrac{n}{m}}\, A^*\right) < \frac{1}{n - 1},$

since the diagonal elements of $\frac{n}{m}AA^*$ are unity.

Since the $\xi_l, \eta_l$ are uniformly distributed in $[0, A]$, we have $\xi_i \neq \xi_j$ and $\eta_i \neq \eta_j$ for $i \neq j$ with probability one.

Summing over the lattice points we obtain

 (40) $\frac{n}{m}\sum_{l=1}^{m} A_{jl}A^*_{li} = \frac{1}{m}\, e^{i\omega(\xi_j^2 + \eta_j^2 - \xi_i^2 - \eta_i^2)/(2z_0)}\, \frac{e^{i\omega(\xi_i - \xi_j)(x_1 + \sqrt{m}\ell)/z_0} - e^{i\omega(\xi_i - \xi_j)x_1/z_0}}{1 - e^{i\omega(\xi_i - \xi_j)\ell/z_0}} \times \frac{e^{i\omega(\eta_i - \eta_j)(y_1 + \sqrt{m}\ell)/z_0} - e^{i\omega(\eta_i - \eta_j)y_1/z_0}}{1 - e^{i\omega(\eta_i - \eta_j)\ell/z_0}}.$

Thus,

 (41) $\frac{n}{m}\left|\sum_{l=1}^{m} A_{jl}A^*_{li}\right| \leq \frac{1}{m}\left|\frac{\sin\frac{\sqrt{m}\,\omega(\xi_i - \xi_j)\ell}{2z_0}}{\sin\frac{\omega(\xi_i - \xi_j)\ell}{2z_0}}\right| \left|\frac{\sin\frac{\sqrt{m}\,\omega(\eta_i - \eta_j)\ell}{2z_0}}{\sin\frac{\omega(\eta_i - \eta_j)\ell}{2z_0}}\right|,$

where we have used the identity

 (42) $\left|1 - e^{i\theta}\right| = 2\left|\sin\frac{\theta}{2}\right|.$

Let

 (43) $\kappa = \min_{i \neq j}\min_{k \in \mathbb{Z}}\left\{\left|\frac{\ell(\xi_i - \xi_j)}{\lambda z_0} - k\right|, \left|\frac{\ell(\eta_i - \eta_j)}{\lambda z_0} - k\right|\right\} \leq \frac{1}{2},$

which is nonzero with probability one. The random variables

 $\frac{\ell(\xi_i - \xi_j)}{\lambda z_0}, \quad \frac{\ell(\eta_i - \eta_j)}{\lambda z_0}$

have the symmetric triangular distribution supported on $[-1/\rho, 1/\rho]$ with height $\rho$. Note that $1/\rho$ is an integer by the choice (36). Hence the probability that $\kappa > \alpha$ for small $\alpha$ is larger than

 $(1 - 2\rho\alpha)^{n(n-1)} > 1 - 2\rho\, n(n-1)\alpha, \quad \rho = \frac{\lambda z_0}{\ell A},$

where the power $n(n-1)$ accounts for the number of different pairs of random variables involved in (43).

Using the inequality that

 $\sin(\pi\kappa) > 2\kappa, \quad \kappa \in (0, 1/2),$

in (41), together with the choice

 $\alpha = \frac{1}{2}\sqrt{\frac{n-1}{m}},$

we deduce with probability larger than

 $1 - 2\rho\, n(n-1)\alpha = 1 - \frac{\rho\, n(n-1)^{3/2}}{m^{1/2}},$

the decoherence estimate

 $\mu\left(\sqrt{\tfrac{n}{m}}\, A^*\right) < \frac{1}{4m\alpha^2} = \frac{1}{n-1},$

implying (39). ∎

## 4. Inverse Born scattering

In this section, we consider two imaging settings where the targets are scatterers instead of sources under the Born approximation (8).

### 4.1. Response matrix (RM) imaging

For the coherence calculation, we have

 $\sum_{l=1}^{n^2} A^{{\rm RM}*}_{li} A^{\rm RM}_{lj} = \sum_{p,q=1}^{n} G(\mathbf{a}_p, \mathbf{r}_j)\, G(\mathbf{r}_j, \mathbf{a}_q)\, G^*(\mathbf{a}_p, \mathbf{r}_i)\, G^*(\mathbf{r}_i, \mathbf{a}_q) = \left[\sum_{p=1}^{n} G(\mathbf{a}_p, \mathbf{r}_j)\, G^*(\mathbf{a}_p, \mathbf{r}_i)\right]^2,$

and thus

 (44) $\mu(A^{\rm RM}) = \mu^2(A).$

In view of (44) and Theorem 3, the following theorem is automatic.
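Since column $j$ of $A^{\rm RM}$ is, up to ordering of the rows, the Kronecker square of column $j$ of $A$, the identity $\mu(A^{\rm RM}) = \mu^2(A)$ is easy to check directly; a sketch in Python/NumPy (a random complex matrix stands in for $A$, dimensions hypothetical):

```python
import numpy as np

def coherence(A):
    """Maximum normalized inner product between distinct columns."""
    B = A / np.linalg.norm(A, axis=0)
    gram = np.abs(B.conj().T @ B)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

rng = np.random.default_rng(3)
n, m = 20, 40
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

# Column j of the RM matrix is the Kronecker square of column j of A
# (column-wise Khatri-Rao product of A with itself): entries A[i,j] * A[k,j],
# mirroring the products G(a_i, r_j) G(r_j, a_k).
A_RM = np.einsum('ij,kj->ikj', A, A).reshape(n * n, m)

print(np.isclose(coherence(A_RM), coherence(A)**2))
```

The normalized inner product between two such Kronecker-square columns is the square of the normalized inner product of the underlying columns, which is exactly the squaring of the coherence.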

###### Theorem 5.

Under the assumptions (15) and (19), the coherence of $A^{\rm RM}$ satisfies

 $\mu(A^{\rm RM}) \leq 2K^2/n$

with probability greater than $1 - 2\delta$.

We now proceed to establish the counterpart of Theorem 4.

###### Theorem 6.

The matrix $A^{\rm RM}$ has full rank and its spectral norm satisfies the bound

 (45) $\|A^{\rm RM}\|_2^2 \leq 2m/n^2$

with probability greater than or equal to

 $1 - \frac{\rho\, n^2(n^2-1)^{3/2}}{m^{1/2}}, \quad \rho = \frac{\lambda z_0}{\ell A}.$
###### Remark 4.

As in Remark 3, Theorems 5 and 6 imply that with probability greater than

 $1 - 2\delta - \frac{\rho\, n^2(n^2-1)^{3/2}}{m^{1/2}}$

of the sensor ensemble, all targets of sparsity

 $s < \frac{1}{2}\left(1 + \frac{n}{2K^2}\right)$

can be recovered exactly by BP as well as OMP.

###### Proof.

We proceed as in the proof of Theorem 4. As before, we seek to prove

 (46) $\mu\left(\frac{n}{\sqrt{m}}\, A^{{\rm RM}*}\right) < \frac{1}{n^2 - 1}.$

For the RM setting, (40) becomes

 (47) $\frac{n^2}{m}\sum_{j=1}^{m} A^{\rm RM}_{lj} A^{{\rm RM}*}_{jl'} = \frac{1}{m}\, e^{i\omega(\xi_k^2 + \eta_k^2 + \xi_i^2 + \eta_i^2 - \xi_{k'}^2 - \eta_{k'}^2 - \xi_{i'}^2 - \eta_{i'}^2)/(2z_0)} \times \frac{e^{i\omega(\xi_i + \xi_k - \xi_{i'} - \xi_{k'})(x_1 + \sqrt{m}\ell)/z_0} - e^{i\omega(\xi_i + \xi_k - \xi_{i'} - \xi_{k'})x_1/z_0}}{1 - e^{i\omega(\xi_i + \xi_k - \xi_{i'} - \xi_{k'})\ell/z_0}} \times \frac{e^{i\omega(\eta_i + \eta_k - \eta_{i'} - \eta_{k'})(y_1 + \sqrt{m}\ell)/z_0} - e^{i\omega(\eta_i + \eta_k - \eta_{i'} - \eta_{k'})y_1/z_0}}{1 - e^{i\omega(\eta_i + \eta_k - \eta_{i'} - \eta_{k'})\ell/z_0}},$

where $l$ and $l'$ correspond to the antenna pairs $(i, k)$ and $(i', k')$, respectively.

We apply the same analysis as for (40). Let

 (48) $\kappa = \min_{l \neq l'}\min_{k \in \mathbb{Z}}\left\{\left|\frac{\ell(\xi_i + \xi_k - \xi_{i'} - \xi_{k'})}{\lambda z_0} - k\right|, \left|\frac{\ell(\eta_i + \eta_k - \eta_{i'} - \eta_{k'})}{\lambda z_0} - k\right|\right\},$

which is nonzero with probability one. The probability density functions (PDF) of the random variables

 $\frac{\ell(\xi_i + \xi_k - \xi_{i'} - \xi_{k'})}{\lambda z_0}, \quad \frac{\ell(\eta_i + \eta_k - \eta_{i'} - \eta_{k'})}{\lambda z_0}$

are either the symmetric triangular distribution or its self-convolution, supported on $[-2/\rho, 2/\rho]$. In either case, their PDFs are bounded by $\rho$ (indeed, by $2\rho/3$ in the self-convolution case). Hence the probability that $\kappa > \alpha$ for small $\alpha$ is larger than

 $(1 - 2\rho\alpha)^{n^2(n^2-1)} > 1 - 2\rho\, n^2(n^2-1)\alpha.$

With the choice

 $\alpha = \frac{1}{2}\sqrt{\frac{n^2-1}{m}},$

we deduce that

 $\mu\left(\frac{n}{\sqrt{m}}\, A^{{\rm RM}*}\right) < \frac{1}{n^2 - 1}$

with probability larger than

 $1 - 2\rho\, n^2(n^2-1)\alpha = 1 - \frac{\rho\, n^2(n^2-1)^{3/2}}{m^{1/2}}.$ ∎

As before, the above estimates yield the following result.

###### Theorem 7.

Consider response matrix imaging with the target vector $X$ randomly drawn from the target ensemble and the antenna array randomly drawn from the sensor ensemble. If (15) and (16) hold, then the targets of sparsity up to

 (49) $s \leq \frac{n^2}{64\ln\frac{2m}{\delta}\,\ln\frac{m}{\epsilon}}$

can be recovered exactly by BP with probability greater than or equal to (18).

### 4.2. Synthetic aperture (SA) imaging

In view of (10), we obtain

 $\mu(A^{\rm SA}(\omega)) = \mu(A(2\omega)).$

The following result is an immediate consequence of the correspondence (9)-(10) between SA imaging and inverse source setting.

###### Theorem 8.

Let the target vector be randomly drawn from the target ensemble and the antenna array be randomly drawn from the sensor ensemble. If

 (50) $2/\rho \in \mathbb{N}$

and (16) hold then the targets of sparsity up to

 (51) $s \leq \frac{n}{64\ln\frac{2m}{\delta}\,\ln\frac{m}{\epsilon}}$

can be recovered exactly by BP with probability greater than or equal to (18).

###### Remark 5.

As in Remark 3, conditions (19) and (50) imply that with probability greater than

 $1 - 2\delta - \frac{\rho\, n(n-1)^{3/2}}{m^{1/2}}$

of the sensor ensemble, all targets of sparsity

 $s < \frac{1}{2}\left(1 + \frac{\sqrt{n}}{\sqrt{2}K}\right)$

can be recovered exactly by BP as well as OMP.

## 5. Numerical simulations

In the simulations, we choose the wavelength $\lambda$ and the distance $z_0$ so as to enforce the second condition of the paraxial regime (2). The computational domain is a square lattice with mesh size $\ell$. The threshold, optimal aperture according to Theorem 3 is $\lambda z_0/\ell$. As a result, the first condition of the paraxial regime (2) is also enforced. Note that the Fresnel number for this setting is

 $\frac{(A+L)^2}{z_0\lambda} = 360 \gg 1,$

indicating that this is not the Fraunhofer diffraction regime and the Fourier approximation of the paraxial Green function is not appropriate [3].

We use the true Green function (1) in the direct simulations and its paraxial approximation for inversion. In other words, we allow model mismatch between the propagation and inversion steps. The degradation in performance can be seen in the figures but is still manageable as the simulations are firmly in the Fresnel diffraction regime. The stability of BP with linear model mismatch has been analyzed in [19] for the case when the matrix satisfies the Restricted Isometry Property (RIP), see Appendix A.

In the left plot of Figure 1, the coherence is calculated as a function of the aperture for the sensing matrices with the exact Green function (red-solid curve) and its paraxial approximation (black-asterisk curve) as entries. The coherence of the exact sensing matrix at the borderline of the paraxial regime, with a smaller $z_0$, is also calculated (blue-dashed curve). All three curves track one another closely and flatten near and beyond the threshold aperture, in agreement with the theory (Theorem 3), indicating the validity of the optimal aperture throughout the paraxial regime.

Figure 1 (right plot) displays the numerically found maximum number of recoverable source points as a function of the number of antennas $n$, using the exact (red-solid curve) and paraxial (black-asterisk curve) sensing matrices. The maximum number of recoverable targets (MNRT) is in principle a random variable, as our theory is formulated in terms of the target and sensor ensembles. To compute the MNRT, we start with one target point and apply the sensing scheme. If the recovery is (nearly) perfect, a new target vector with one additional support point is randomly drawn and the sensing scheme is rerun. We iterate this process until the sensing scheme fails to recover the targets, and we then record the target support in the previous iterate as the MNRT. This is a one-trial test and no averaging is applied. The linear profile in the right plot of Figure 1 is consistent with the prediction (17) of Theorem 2.
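The one-trial MNRT search can be sketched as follows (Python/NumPy; a Gaussian matrix and an OMP solver stand in for the paraxial sensing matrix and SP, so the numbers are purely illustrative):

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal Matching Pursuit used as the recovery scheme."""
    support, residual = [], y.astype(complex)
    for _ in range(s):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coef
    return x

def mnrt(A, rng, tol=1e-6):
    """Add one random unit-phase target at a time until recovery fails."""
    m = A.shape[1]
    x = np.zeros(m, dtype=complex)
    free = list(range(m))
    s = 0
    while free:
        j = free.pop(rng.integers(len(free)))       # new random support index
        x[j] = np.exp(2j * np.pi * rng.uniform())   # random phase, unit amplitude
        s += 1
        x_hat = omp(A, A @ x, s)
        if np.linalg.norm(x_hat - x) > tol * np.linalg.norm(x):
            return s - 1   # record the previous (successful) sparsity
    return s

rng = np.random.default_rng(4)
n, m = 40, 160
A = rng.standard_normal((n, m)) / np.sqrt(n)
result = mnrt(A, rng)
print(result)   # one-trial estimate; a random variable by construction
```

Rerunning with different seeds illustrates why the MNRT is treated as a random variable and why the theory is stated for the target and sensor ensembles.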

To reduce the computational complexity of the compressed sensing step, we use an iterative scheme called Subspace Pursuit (SP) [9]. It has been shown to yield the BP solution under the RIP [9] (see Appendix A).

In the scattering simulation, we use the Foldy-Lax formulation, accounting for all multiple scattering effects [15]. Hence there are two model mismatches (the paraxial approximation and the Born approximation) in the simulation.

In the left plot of Figure 2 the compressed sensing image with the RM set-up is shown; the targets are (nearly) exactly recovered. For comparison, the image obtained by the linear processor of traditional matched field processing is shown on the right. In Appendix C, we outline the rudiments of matched field processing.

In Figure 3 the numerically found maximum number of recoverable scatterers is depicted as a function of the number of antennas for both the RM and SA imaging set-ups, using the paraxial and exact sensing matrices. Clearly, both curves are qualitatively consistent with the predictions (49) and (51).

## 6. Conclusions

In this paper, we have studied the imaging problem from the perspective of compressed sensing, in particular the idea that sufficient incoherence and sparsity guarantee uniqueness of the solution. Moreover, by adopting the target ensemble following [28] and the sensor ensemble, the maximum number of recoverable targets is proved to be at least proportional to the number of measurement data modulo a log-square factor with overwhelming probability.

We have analyzed three imaging settings: the inverse source problem, and the inverse scattering problem with the response matrix and with the synthetic aperture. Important contributions of our analysis include the discoveries of the decoherence effect induced by random antenna locations and the threshold aperture, defined by $\rho = 1$ for source and RM imaging and by $\rho = 2$ for SA imaging, where $\rho = \lambda z_0/(\ell A)$.

In this paper we have considered the localization of point targets and the determination of their amplitudes. A natural next step is to consider extended targets. However, our approach does not extend in a straightforward manner to imaging of extended targets, as can be easily seen. Assume that we model an extended target approximately as an ensemble of point targets spaced very close together. Clearly, this requires the mesh size $\ell$ to be so small as to render the threshold aperture $\lambda z_0/\ell$ enormous. To apply our theorems would then require that the aperture and the number of antennas increase without bound. Clearly this is not a feasible way to image extended targets via compressed sensing. Therefore a somewhat different approach, on which we plan to report in future work, is required for extended targets.

## Appendix A Restricted isometry property (RIP)

A fundamental notion in compressed sensing, under which BP yields the unique exact solution, is the restricted isometry property due to Candès and Tao [7]. Precisely, let the sparsity $s$ of a vector $Z$ be the number of its nonzero components and define the restricted isometry constant $\delta_s$ to be the smallest positive number such that the inequality

 $(1 - \delta_s)\|Z\|_2^2 \leq \|AZ\|_2^2 \leq (1 + \delta_s)\|Z\|_2^2$

holds for all $Z$ of sparsity at most $s$.

Now we state the fundamental result of the RIP approach.

###### Theorem 9.

[7] Suppose the true target vector $X$ has sparsity at most $s$. Suppose the restricted isometry constant of $A$ satisfies the inequality

 (52) $\delta_{3s} + 3\delta_{4s} < 2.$

Then $X$ is the unique solution of BP.

###### Remark 6.

Greedy algorithms have significantly lower computational complexity than linear programming and have provable performance under various conditions. For example, under the condition $\delta_{3s} < 0.165$, the Subspace Pursuit (SP) algorithm is guaranteed to exactly recover $X$ in a finite number of iterations [9].

In this appendix we show that the sensing matrix for source inversion satisfies the RIP. This can be readily seen by rewriting the paraxial Green function (3) as

 (53) $G(\mathbf{r}, \mathbf{a}) = \frac{e^{i\omega z_0}}{4\pi z_0}\, e^{i\omega(x^2 + y^2)/(2z_0)}\, e^{-i\omega x\xi/z_0}\, e^{-i\omega y\eta/z_0}\, e^{i\omega(\xi^2 + \eta^2)/(2z_0)},$

for $\mathbf{r} = (x, y, 0)$ and $\mathbf{a} = (\xi, \eta, z_0)$.

Now the sensing matrix (5) can be written as the product of three matrices

 $A = D_1 \Phi D_2,$

where

 D1=diag(eiω(ξ2j+