Two-Dimensional Super-Resolution via Convex Relaxation

Iman Valiulahi, Sajad Daei, Farzan Haddadi and Farzad Parvaresh

In this paper, we address the problem of recovering point sources from two-dimensional (2-D) low-pass measurements, known as the super-resolution problem. This is a fundamental concern in many applications, such as electronic imaging, optics, microscopy, and line spectral estimation. We assume that the point sources lie in the unit square with unknown locations and complex amplitudes. The only available information is a set of low-pass Fourier measurements band-limited to an integer square with cut-off frequency $f_c$ in each direction. The signal is estimated by minimizing the total variation (TV) norm, which leads to a convex optimization problem. It is shown that if the sources are separated by at least $1.68/f_c$, there exists a dual certificate that is sufficient for exact recovery.

Super-Resolution, continuous dictionary, convex optimization, Dirichlet kernel, dual certificate.

I Introduction

Two-dimensional (2-D) super-resolution refers to recovering 2-D point sources from their low-resolution measurements. One may think of this problem as recovering a high-resolution image of stars from a photo captured by a low-resolution telescope. Various other fields also involve this problem. In direction of arrival (DOA) estimation, far-field point sources are to be located in terms of their 2-D directions using measurements from a two-dimensionally dispersed array of sensors. Applications are vast, from radar and sonar to cellular communication systems. The main performance measure for any DOA estimation algorithm is its ability to resolve two closely-spaced sources, which leads to the term super-resolution methods [1]. High-dimensional super-resolution also has important applications in off-the-grid multiple-input multiple-output (MIMO) radar, where the aim is to estimate the continuous angle-delay-Doppler triplets from the reflections recorded at the receiver antennas [2]. Another example is high-dimensional medical imaging, a diagnostic method to determine the presence of certain diseases [3].

Consider high-frequency 2-D signals in the form of a superposition of Dirac delta functions. The signal is observed after convolution with a low-pass kernel. In some applications, there is no exact information about the kernel; in this case, joint estimation of the signal and the kernel is required, which is known as blind super-resolution [4], [5]. In many scenarios, however, the low-pass kernel is known beforehand. In this paper, we assume that the signal is observed through convolution with a 2-D sinc kernel band-limited to an integer square with cut-off frequency $f_c$ in each direction. With this assumption, the measurements take the form of a superposition of sinusoids with arbitrary complex amplitudes. This model is closely related to 2-D line spectral estimation.

Conventional parametric approaches to super-resolve sparse 2-D point sources are based on decomposing the measurement space into orthogonal signal and noise subspaces, such as 2-D MUSIC [6], 2-D unitary ESPRIT [7], and the Matrix Enhancement Matrix Pencil (MEMP) method [8]. However, these techniques are sensitive to noise and outliers, and they also depend on the model order. Discrete 2-D super-resolution suggests that one can recover the sparse signal by solving an $\ell_1$ minimization problem [9, 10, 11, 12]. This method assumes all the point sources to lie on a grid, which is not realistic in practice. When the true point sources do not lie on the grid, basis mismatch occurs, which degrades performance. One can achieve better reconstruction using finer grids, but this imposes higher computational complexity [13, 14]. To overcome grid mismatch, [15] presented a new method based on convex optimization that recovers the infinite-dimensional signal from low-resolution measurements by minimizing a continuous version of the $\ell_1$ norm known as the total variation (TV) norm. Similar to compressed sensing, a sufficient condition for exact recovery is the existence of a dual certificate orthogonal to the null space of the measurement operator, with the sign pattern of the signal on the support and magnitude less than one at off-support locations. [15] constructed this dual certificate as a linear combination of shifted copies of the fourth power of the Dirichlet kernel (and its derivatives). They prove that the existence of such a linear combination imposes a minimum separation of $2/f_c$ between the point sources in the 1-D situation, where $f_c$ is the cut-off frequency. In the 2-D case, the sources must be separated by at least $2.38/f_c$ to construct a dual polynomial band-limited to the integer square.

Implementing the TV norm minimization problem may seem intractable because of the infinite dimensionality of the primal variable. To handle this, one can convert the dual problem to a tractable semidefinite program (SDP) using Positive Trigonometric Polynomial (PTP) theory. In fact, PTP theory provides conditions to control the magnitude of trigonometric polynomials in the signal domain through linear matrix inequalities (LMIs) [16, 17, 18]. Moreover, it is possible to control the magnitude of a trigonometric polynomial on any partition of the signal domain, which can be translated into prior information [19, 20].

The approach of [15] was extended to off-the-grid spectral estimation in the compressed sensing (CS) regime [21]. It shows that atomic norm minimization can recover a 1-D continuous spectrally sparse signal from partial time-domain samples as long as the frequency sources are separated by at least $4/n$, where $n$ is the number of Nyquist samples. The proof is based on constructing a random dual certificate that guarantees exact recovery with high probability. Similar to this work, [18] presented a 2-D random dual polynomial for estimating the true off-the-grid 2-D frequencies under the condition that they satisfy a minimum separation. [22] investigated multi-dimensional frequency reconstruction by minimizing the nuclear norm of a Hankel matrix subject to time-domain constraints, known as Enhanced Matrix Completion (EMaC). Recently, Fernandez-Granda [23] has shown that the guarantee for exact recovery of 1-D point sources using TV norm minimization can be improved to the minimum separation $1.26/f_c$ by constructing a dual certificate that interpolates the sign pattern of closer point sources. The main idea of his work is to use a product of Dirichlet kernels with different bandwidths instead of the fourth power of a single Dirichlet kernel that was used in [15] for the 1-D case. This leads to a better trade-off between the spikiness of the dual polynomial at the support locations and the decay of its tail.

The main result of this paper is a guarantee that TV norm minimization achieves the exact solution as long as the 2-D sources are separated by at least $1.68/f_c$. Specifically, we extend the approach of Fernandez-Granda to the recovery of 2-D point sources which lie in the unit square with arbitrary locations and complex amplitudes. For this purpose, we first propose a 2-D low-pass kernel capped by the integer square, obtained by tensorizing the 1-D kernel used by Fernandez-Granda [23]. In comparison with the 2-D kernel used by [15], our kernel strikes a better balance between spikiness at the origin and the decay of its tail. Then we construct a 2-D low-pass dual polynomial by linearly combining shifted copies of the kernel and its partial derivatives. Our theoretical guarantee requires bounds on the 1-D and 2-D kernels, which are verified by the numerical simulations given in Section V.
The rest of the paper is organized as follows. The problem is formulated in Section II. Section III presents TV norm minimization and the proposed uniqueness guarantee. Implementation of the dual problem is given in Section IV. In Section V, the results are validated via numerical experiments. Finally, we conclude the paper and introduce future directions in Section VI.

Notation. Throughout the paper, scalars are denoted by lowercase letters, vectors by lowercase boldface letters, and matrices by uppercase boldface letters. The $i$th element of a vector $\boldsymbol{x}$ and the $(i,j)$th element of a matrix $\boldsymbol{X}$ are denoted by $x_i$ and $X_{i,j}$, respectively. $|\cdot|$ denotes cardinality for sets and absolute value for scalars. For a function and a matrix, $\|\cdot\|_\infty$ denotes the corresponding infinity norm. The null space of a linear operator is denoted by $\mathrm{null}(\cdot)$. Superscripts in parentheses denote the $i$th derivative of a 1-D function and the partial derivatives of a 2-D function, respectively. $(\cdot)^T$ and $(\cdot)^H$ denote the transpose and Hermitian (conjugate transpose) of a vector, respectively. $\mathrm{sign}(\boldsymbol{x})$ denotes the element-wise sign of the vector $\boldsymbol{x}$. Also, $\mathrm{vec}(\cdot)$ denotes the columns of a matrix stacked on top of each other. The inner product between two functions is defined in the usual $L_2$ sense. $\otimes$ denotes the Kronecker (tensor) product. The adjoint of a linear operator $\mathcal{A}$ is denoted by $\mathcal{A}^*$.

II Problem Formulation

We consider a mixture of 2-D Dirac delta functions on a continuous support $T \subset [0,1]^2$:

$x(t) = \sum_{j} a_j\, \delta(t - t_j),$

where $a_j$ is an arbitrary complex amplitude of each point source, with $t_j = (t_{1j}, t_{2j})$ in the continuous square $[0,1]^2$, and $\delta(\cdot)$ denotes the 2-D Dirac delta function. Assume that the only available information about $x$ is its 2-D Fourier transform band-limited to the integer square:

$y(k_1, k_2) = \sum_{j} a_j\, e^{-i2\pi(k_1 t_{1j} + k_2 t_{2j})},$

where $(k_1, k_2) \in \{-f_c, \ldots, f_c\}^2$ ranges over all indices of the observed spectrum ($f_c$ is an integer). It is beneficial to consider each observation as an element of a matrix as below:


where $\mathbf{Y}$ collects the measurements $y(k_1, k_2)$ and $\mathcal{F}_n$ is the 2-D linear operator that maps a continuous function to its lowest 2-D Fourier coefficients up to the integer square. The problem is then to estimate $x$ from the observation matrix $\mathbf{Y}$.
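To make the measurement model concrete, the following stdlib-only Python sketch assembles the matrix of low-pass 2-D Fourier coefficients of a sparse atomic measure. The function name and the sign convention of the exponent are our own assumptions, not taken verbatim from the paper:

```python
import cmath

def fourier_measurements(sources, fc):
    """2-D Fourier coefficients y(k1, k2) of a sparse atomic measure,
    band-limited to the integer square {-fc, ..., fc}^2.

    sources: list of (amplitude, t1, t2) with (t1, t2) in [0, 1)^2.
    Returns an n x n matrix (n = 2*fc + 1) as a list of lists.
    """
    n = 2 * fc + 1
    Y = [[0j] * n for _ in range(n)]
    for k1 in range(-fc, fc + 1):
        for k2 in range(-fc, fc + 1):
            acc = 0j
            for a, t1, t2 in sources:
                # one complex sinusoid e^{-i 2 pi <k, t_j>} per point source
                acc += a * cmath.exp(-2j * cmath.pi * (k1 * t1 + k2 * t2))
            Y[k1 + fc][k2 + fc] = acc
    return Y
```

For a single unit-amplitude source at the origin every entry of the matrix equals one, and in general each entry is a superposition of sinusoids, as described above.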

III Total Variation Minimization for 2-D Sources

To super-resolve the point sources from the Fourier measurements, one can solve the following optimization problem:

$\min_{\tilde{x}}\; \|\tilde{x}\|_{TV} \quad \text{subject to} \quad \mathcal{F}_n\, \tilde{x} = \mathbf{Y},$
where the TV norm promotes sparse atomic measures and is defined as:

$\|\tilde{x}\|_{TV} = \sup \sum_{j} |\tilde{x}(B_j)|,$

in which the supremum is taken over all partitions $\{B_j\}$ of $[0,1]^2$ into a finite number of disjoint measurable 2-D subsets. In particular, for the atomic measure $x$ above, $\|x\|_{TV} = \sum_j |a_j|$.
The main goal of this paper is to show that this program exactly recovers $x$ if the sources satisfy a mild separation condition. In the following, we define the minimum separation of a 2-D set of points.

Definition III.1

Let $\mathbb{T}^2$ be the product space of two circles, obtained by identifying the endpoints of $[0,1]$. For a set of points $T \subset \mathbb{T}^2$, the minimum separation is defined as:

$\Delta(T) = \min_{t_i, t_j \in T,\, i \neq j} \max\big(|t_{1i} - t_{1j}|,\, |t_{2i} - t_{2j}|\big),$

where $|t_{1i} - t_{1j}|$ and $|t_{2i} - t_{2j}|$ denote wrap-around distances on the unit circle.
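Definition III.1 translates directly into code. The following stdlib-only Python sketch (function names are ours) computes the minimum separation using the wrap-around distance in each coordinate and the maximum over the two coordinates:

```python
def wrap_dist(a, b):
    """Wrap-around distance on the unit circle [0, 1)."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def min_separation(points):
    """Minimum separation of a set of points on the torus T^2:
    the smallest, over distinct pairs, of the larger of the two
    coordinate-wise wrap-around distances."""
    best = float("inf")
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (a1, a2), (b1, b2) = points[i], points[j]
            d = max(wrap_dist(a1, b1), wrap_dist(a2, b2))
            best = min(best, d)
    return best
```

Note that, because of the wrap-around metric, points near opposite edges of the unit square can still be close on the torus.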

It has been shown that TV minimization achieves exact recovery as long as the elements of the support are separated by at least $2.38/f_c$ [15]. The following theorem, which is the main result of this paper, states that under a milder separation condition on the support, TV minimization reaches the exact solution.

Theorem III.1

Let $T$ be the support of $x$. If $f_c$ is sufficiently large and the minimum separation obeys

$\Delta(T) \geq \frac{1.68}{f_c},$

then the solution of the TV minimization problem is unique.

III-A Uniqueness Guarantee

To prove uniqueness of the solution, it is sufficient to find a function, known as a dual certificate, which is orthogonal to the null space of $\mathcal{F}_n$ and belongs to the relative interior of the sub-differential of the TV norm at the original signal $x$ [15].

Proposition III.2

If the conditions of Theorem III.1 hold, then for any sign pattern $\boldsymbol{u}$ with $|u_j| = 1$ there exists a low-pass function

$q(t) = \sum_{k \in \{-f_c, \ldots, f_c\}^2} c_k\, e^{i2\pi \langle k, t \rangle}$

such that

$q(t_j) = u_j \;\; \forall\, t_j \in T, \qquad |q(t)| < 1 \;\; \forall\, t \notin T.$

The conditions (8), (9), and (10) refer to the fact that $q$ belongs to the relative interior of the sub-differential of the TV norm at $x$. The proof of the proposition is given in Appendix B.
It is worth emphasizing that if the signs of closely-spaced sources differ from each other, then interpolating the sign pattern with a low-pass trigonometric polynomial is ill-posed. This is why the minimum separation condition is required. The condition can be relaxed if the sources all share the same sign, as in [24, 25].

III-B Construction of the dual certificate

To construct the dual certificate in Proposition III.2 under the conditions of Theorem III.1, we first propose the following 2-D low-pass kernel:

$\bar{K}(t_1, t_2) = K(t_1)\, K(t_2),$

where $K(t)$ is the product of three Dirichlet kernels with different cut-off frequencies, defined as below:




in which $f_c$ is the cut-off frequency and the Fourier coefficient sequence of $K$ is the convolution of the Fourier coefficient sequences of the three Dirichlet kernels. Consequently,


Fernandez-Granda [23] proposed this kernel for the 1-D situation instead of the fourth power of the Dirichlet kernel that was previously used in [15]. This kernel provides a better trade-off between spikiness at the origin and the order of tail decay, which motivates the proposed 2-D kernel in the form of (11).
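The 1-D kernel and its 2-D tensorization can be evaluated with a few lines of stdlib Python. The three cut-off values below are illustrative placeholders: the specific fractions of $f_c$ used in [23] are elided in the text, so this sketch only assumes that the cut-offs sum to the overall band-limit:

```python
import math

def dirichlet(t, f):
    """Normalized Dirichlet kernel with cut-off frequency f; D(0) = 1."""
    n = 2 * f + 1
    if abs(math.sin(math.pi * t)) < 1e-12:
        return 1.0  # removable singularity at integer t
    return math.sin(n * math.pi * t) / (n * math.sin(math.pi * t))

def kernel_1d(t, cutoffs=(10, 12, 18)):
    """Product of three Dirichlet kernels with different cut-offs,
    band-limited to f_c = sum(cutoffs). The cut-off split is an
    illustrative assumption, not the exact choice of [23]."""
    p = 1.0
    for f in cutoffs:
        p *= dirichlet(t, f)
    return p

def kernel_2d(t1, t2, cutoffs=(10, 12, 18)):
    """Tensorized 2-D kernel: K_bar(t1, t2) = K(t1) * K(t2)."""
    return kernel_1d(t1, cutoffs) * kernel_1d(t2, cutoffs)
```

Since each normalized Dirichlet factor is bounded by one in magnitude, so is the product, and multiplying three factors makes the tail decay faster than any single factor.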

If $q$ were constructed such that only (9) is satisfied, the magnitude of the resulting polynomial might exceed one near the elements of the support $T$. To handle this, we force the partial derivatives of the polynomial to vanish on the support of $x$. We construct $q$ as


to better control $q$ and its derivatives, where $\bar{K}^{(1,0)}$ and $\bar{K}^{(0,1)}$ denote the partial derivatives of $\bar{K}$ with respect to the first and second variables, respectively. Therefore, instead of (9) and (10), the following conditions are considered.


In Appendix B, it is shown that one can always find interpolation coefficients under the conditions of Theorem III.1.
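The construction just described — interpolation of the sign pattern with vanishing partial derivatives on the support — can be summarized schematically as follows (the coefficient labels $\alpha_j$, $\beta_j$, $\gamma_j$ are our own notation, not necessarily that of the paper):

```latex
q(t) \;=\; \sum_{t_j \in T} \Big[ \alpha_j\, \bar{K}(t - t_j)
      + \beta_j\, \bar{K}^{(1,0)}(t - t_j)
      + \gamma_j\, \bar{K}^{(0,1)}(t - t_j) \Big],
\qquad
\begin{cases}
q(t_k) = u_k, \\
q^{(1,0)}(t_k) = 0, \\
q^{(0,1)}(t_k) = 0,
\end{cases}
\qquad \forall\, t_k \in T .
```

Solving this square linear system for the coefficients is what Lemma B.1 in Appendix B makes rigorous.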

Fig. 1: (a) Bounds on the 1-D kernel $K$. The bounds are calculated on a fine grid covering the interval of interest; all bounds are monotone. (b), (c) Bounds on the 2-D kernel $\bar{K}$ and its partial derivatives, calculated on a fine grid covering the interval of interest.

Fig. 2: Bounds on the quantity defined in (58). The bounds are calculated on a fine grid covering the interval of interest.
Fig. 3: 2-D point-source recovery using the dual polynomial (22). The true sources are estimated by localizing the points where the modulus of the dual polynomial equals one.

Fig. 4: Graphs showing the phase transition of successful recovery over 10 trials. In (a) and (b), the cut-off frequencies are $f_c = 8$ and $f_c = 10$, respectively.

IV Implementation

It may seem challenging to find an exact solution of the TV minimization problem since the variable lies in a continuous domain. Since Slater's condition holds and the problem is convex, strong duality holds. Therefore, one can consider the following dual problem:


where $\mathrm{Re}\langle \cdot, \cdot \rangle$ denotes the real part of the Frobenius inner product, $\mathbf{C}$ is the dual variable, and the inequality constraint implies that the modulus of the following trigonometric polynomial is uniformly bounded by one:


This inequality can be converted to linear matrix inequalities using PTP theory, and the problem can then be cast as an SDP which is solvable in polynomial time [17]. Hence, (18) is equivalent to


where $\mathbf{Q}$ is a positive semidefinite Hermitian matrix and $\boldsymbol{\Theta}_k$ is an elementary Toeplitz matrix with ones on its $k$th diagonal and zeros elsewhere; $\delta_k = 1$ if $k = 0$, and zero otherwise.
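In the standard PTP (bounded-real-lemma) form used in [17] and [21], such an SDP can be sketched as below; the symbols $\mathbf{Q}$ and $\boldsymbol{\Theta}_k$ follow the description above, but the exact indexing is our reconstruction rather than a verbatim copy of (20):

```latex
\max_{\mathbf{C},\,\mathbf{Q}} \;\; \operatorname{Re} \langle \mathbf{Y}, \mathbf{C} \rangle
\quad \text{subject to} \quad
\begin{bmatrix}
\mathbf{Q} & \operatorname{vec}(\mathbf{C}) \\
\operatorname{vec}(\mathbf{C})^{H} & 1
\end{bmatrix} \succeq 0,
\qquad
\operatorname{Tr}\!\big( (\boldsymbol{\Theta}_{k_2} \otimes \boldsymbol{\Theta}_{k_1})\, \mathbf{Q} \big)
= \delta_{k_1}\,\delta_{k_2},
\quad -2f_c \le k_1, k_2 \le 2f_c .
```

The positive semidefinite block constraint enforces $|$polynomial$| \le 1$ on the whole torus, while the trace conditions force $\mathbf{Q}$ to act like a (2-D) Toeplitz Gram matrix.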

By strong duality, for any solutions of the primal problem and of (20), respectively, we have:


Therefore, the modulus of the dual polynomial equals one on the support of the solution. This suggests that one can find the support by looking for points $t$ such that $|q(t)| = 1$ (see Fig. 3).
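To illustrate the localization step, here is a small stdlib-only Python sketch: we build a toy band-limited polynomial whose modulus peaks at a known location (standing in for the SDP output, which we do not solve here) and then scan a grid for points where the modulus is numerically one. Function names and the grid resolution are our own choices:

```python
import cmath

def toy_dual_poly(t0, fc):
    """A toy band-limited trigonometric polynomial whose modulus
    attains 1 exactly at the known point t0 (a stand-in for the
    dual polynomial returned by the SDP)."""
    n = 2 * fc + 1
    def q(t1, t2):
        acc = 0j
        for k1 in range(-fc, fc + 1):
            for k2 in range(-fc, fc + 1):
                acc += cmath.exp(
                    2j * cmath.pi * (k1 * (t1 - t0[0]) + k2 * (t2 - t0[1]))
                )
        return acc / n**2  # normalized so |q| <= 1 with equality at t0
    return q

def localize(q, grid=64, tol=1e-6):
    """Return grid points where |q| is within tol of one."""
    hits = []
    for i in range(grid):
        for j in range(grid):
            t1, t2 = i / grid, j / grid
            if abs(q(t1, t2)) >= 1 - tol:
                hits.append((t1, t2))
    return hits
```

In practice the grid search would be followed by a local refinement, since the true support need not lie on the grid.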

V Experiment

In this section, we numerically provide bounds on $K$ and its derivatives to show their monotonicity, which is necessary in Lemma B.2. These bounds are shown in Fig. 1 (they are evaluated using [23] and the MATLAB code therein). The proof of Theorem III.1 makes essential use of the bounds on $\bar{K}$, its partial derivatives, and the quantity defined in (58); these bounds are shown numerically in Fig. 1 and Fig. 2, respectively. It may seem surprising at first sight that the 2-D kernels are plotted versus a single variable. Precisely, the proof of Lemma B.2 requires bounds on the 2-D kernels in a neighborhood of the origin, and the positive definiteness of the Hessian matrix in (52) is the reason the particular radius is chosen. Since the bounds are monotonic in each coordinate on the relevant interval (see Appendix D and Fig. 1), it is sufficient to evaluate them along a single line. Moreover, a small grid size in the full 2-D case would impose considerable computational complexity. This idea, first used in [15], leads to simple computations. Fig. 1 also demonstrates bounds on $\bar{K}$ and its partial derivatives.

We further generate 2-D points uniformly at random in the unit square with random complex coefficients and build 2-D Fourier measurements in the form of (2). To reconstruct the locations of the point sources from the measurements, we first implement the SDP problem (20) using CVX [26]. Then, the following dual polynomial is obtained from the dual solution:


Based on the strong-duality relation of Section IV, one can localize the points where the modulus of the dual polynomial equals one to find the signal support, as shown in Fig. 3.

In the last experiment, we evaluate how the success rate scales with the number of spikes and the minimum separation for different numbers of samples. Fig. 4 shows that the phase transition occurs near the minimum separation predicted by Theorem III.1.

VI Conclusion And Future Directions

This paper is concerned with the recovery of 2-D point sources from low-resolution measurements capped to an integer square. We show that TV norm minimization achieves exact recovery when the sources are separated by at least $1.68/f_c$. The proof is based on the construction of a 2-D low-pass dual polynomial that can interpolate any sign pattern on the signal support.

There are several interesting future directions to be explored. [18] has shown that off-the-grid 2-D point sources can be recovered from partially observed Fourier coefficients as long as a minimum separation condition holds. That bound may be improved using the proposed 2-D dual polynomial in Proposition III.2. It would also be of interest to consider our 2-D problem in line spectral estimation when the measurements are corrupted by sparse noise, using the approach of [27].

Appendix A Useful Lemmas

The proof of our 2-D low-pass polynomial construction is based on bounds on the 1-D kernel in (12) and its derivatives. These bounds are derived in [23, Section 4.1] based on the Taylor series expansion of the Dirichlet kernel and its derivatives around the origin.

Lemma A.1

[23, Lemma 4.4] For any admissible $f_c$ and $t$, we have the following non-asymptotic bounds on $K$ and its derivatives:


where the bounding functions are defined in [23, Section B.1].

Consequently, one has upper bounds on the magnitudes of $K$ and its derivatives as follows.

Lemma A.2

[23, Lemma 4.6] For $t$ in the stated range, the following decreasing bound is established:


where the bounding function is defined in [23, Section B.2]. Also, there is a global upper bound:


Since an upper bound on the sum of $K$ and its shifted copies is required in the 2-D situation, the result of [23, Lemma 4.7] is given below with minor changes.

Lemma A.3

Following [23, Lemma 4.7], suppose the assumptions of that lemma hold, with the constants adapted to our setting. Then, for all points satisfying the minimum separation condition,




where the grid covers the interval with equispaced steps, and


where the remaining constants are defined as in [23, Lemma 4.7].
The following bounds are beneficial in the proofs of Lemmas B.2 and B.3:


Also, the two bounding functions are strictly increasing and strictly decreasing, respectively.

Appendix B Proof of Proposition III.2

To meet (16) and (17) and obtain upper bounds on the interpolation coefficients, we present the following lemma.

Lemma B.1

If the conditions of Theorem III.1 hold, then in (III-B) there exist coefficient vectors satisfying


Moreover, under the same conditions, the following holds:


To control the magnitude of $q$ near the elements of the support $T$, we present the following lemmas. Without loss of generality, we can assume that the first element of the support is located at the origin.

Lemma B.2

Assume, without loss of generality, that the first support element is at the origin. Then, if the conditions of Theorem III.1 hold, $|q(t)| < 1$ for any $t$ other than the origin in a neighborhood of the origin.

Lemma B.3

Assume, without loss of generality, that the first support element is at the origin. Then, if the conditions of Theorem III.1 hold, $|q(t)| < 1$ for any $t$ outside that neighborhood and away from the support.

The proof of our main result requires numerical upper bounds on (23) and (14), which are shown in Fig. 1.

Appendix C Proof of Lemma B.1

To prove this lemma, the approach of [15] is followed. Without loss of generality, we assume the unit square is identified with the torus. First, (16) and (17) are written in matrix form as:




The interpolation coefficients are calculated from the above equations. Since $K$ and its second derivative are even while its first derivative is odd, the corresponding sub-matrices are symmetric and antisymmetric, respectively. With suitable changes of variables, the above matrix system is converted to:


To bound the infinity norms of the sub-matrices in (33), the 1-D results can be used as below:


where the inequality follows from (11). To apply the 1-D results discussed in Appendix A to the 2-D case, we divide the set of support points into regions according to their distance from the origin in each coordinate. With this assumption, we reach:


where the last inequality stems from (27) together with the separation condition, and the first one results from the minimum separation between the sources and the union bound.
For the last region, we have:


where the last inequality stems from the minimum separation condition and (27). Consequently,


By applying the same approach, we reach:


where the bound follows from Fig. 1 and (27). The same bound holds for the remaining sub-matrix. Similarly,




where the above inequalities are obtained from (27) and the fact that the maximum occurs at the origin (see Fig. 1).
To ease notation, consider:


where is the Schur’s complement of . Also by the definition of inverse of Schur’s complement, we have

Since this matrix is itself the Schur complement of the reduced system, the solution of the linear system can be written as

Using the values shown in Fig. 1, we reach:


By using the bounds above,


Similarly,


Next, the preceding bounds allow us to bound


Using the numerical bounds above, we achieve


With this bound, we have


One can bound the interpolation vector with the above results as below:


where the last inequality holds under the minimum separation condition. The upper bound computations for