# Modified Papoulis-Gerchberg algorithm for sparse signal recovery

## Abstract

Motivated by the well-known Papoulis-Gerchberg algorithm, an iterative thresholding algorithm for the recovery of sparse signals from few observations is proposed. The sequence of iterates turns out to be similar, though not identical, to that of the thresholded Landweber iterations. The performance of the proposed algorithm is experimentally evaluated and compared to other state-of-the-art methods.

## 1 Introduction and Problem Definition

In many real-world problems, instead of the complete signal, we have only some observations of the signal of interest, from which we want to reconstruct the original signal. In the simplest case, which is fortunately applicable to many practical situations, the observation process can be approximated by a linear operator:

 o=Kf (1)

where f is the original signal of interest, o is the vector of observed features, i.e. samples, and K is the (linear) observation operator. We are interested in the case where the number of observations is much smaller than the length of the original signal; that is, K has many fewer rows than columns. Furthermore, in practice the observation process is not exact, and typically we can only obtain a distorted version of the observed features. This distortion is usually modeled by an additive error term, so the observation process can be modeled as

 g=o+e=Kf+e (2)

in which e represents the (additive) observation error, e.g. noise.

The objective is to find the original signal, f, from the set of available observations, g, where K is also known; that is, to solve the linear inverse problem (2). In order to do so one might minimize the discrepancy ‖Kf−g‖². However, except for the very special case in which the operator K has a trivial null space, the minimizer is not unique. In order to address this problem, i.e. to regularize the inverse problem, one might make a priori assumptions and impose constraints on the solution, which is usually taken into account by adding a penalization term to the discrepancy. In this letter we are especially interested in the case of a sparsity constraint on the solution. To be more precise, suppose there exists an orthonormal basis (ψi) in which the signal of interest can be expanded with expansion coefficients θi=⟨f,ψi⟩, a large number of which are zero or negligibly small. In order to make use of this constraint for solving the inverse problem (2), define the ℓp-norm ‖θ‖p=(∑i|θi|^p)^(1/p). One might then find the minimizer of the following functional as the solution of (2) under the aforementioned sparsity constraint:

 Φμ(f)=∥Kf−g∥2+μ∥θ∥p (3)

where 1 ≤ p ≤ 2, and μ > 0 is the regularization parameter, which can be chosen based on the application.

This is a well-known problem, and different approaches to solving it have been proposed. Here we will not go through the details of the problem, which have been widely studied by other authors; the reader is referred to the literature for a comprehensive discussion. For the sake of consistency, we follow the mathematical notation commonly used in that literature. Furthermore, it is worth mentioning that besides the ℓp-norm introduced above, some authors have also used the total variation (TV) norm as the constraint.

There are several methods of recovery available in the literature, among which iterative thresholding algorithms are an important class. Iterative thresholding algorithms are, more or less, based on a thresholded version of the Landweber iteration. That is, the sequence of iterates has the general form

 fn=Sγ(fn−1+K∗(g−Kfn−1)) (4)

where K∗ denotes the adjoint of K, and Sγ is a thresholding operator. It has been proved that the above iteration converges to the (unique) minimizer of (3) when Sγ is the soft thresholding operator (see the definition of soft thresholding in section 2). Noticeable effort has also been put into accelerating the original algorithm; proposed accelerations include, among others, a method based on alternating subspace corrections. Although the use of soft thresholding is more common, some authors have used hard thresholding to address the above inverse problem as well.
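As a concrete toy illustration of iteration (4), the following sketch runs the thresholded Landweber iteration on a randomly generated sparse-recovery problem. The helper names (`soft`, `thresholded_landweber`) are ours, and for simplicity the sparsity basis is taken to be the canonical basis, so the thresholding acts directly on the signal entries:

```python
import numpy as np

def soft(x, gamma):
    # Elementwise soft thresholding: shrink each entry toward zero by gamma.
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def thresholded_landweber(K, g, gamma, n_iter=1000):
    # Iteration (4): f_n = S_gamma(f_{n-1} + K^T (g - K f_{n-1})),
    # with the sparsity basis taken as the canonical basis.
    f = np.zeros(K.shape[1])
    for _ in range(n_iter):
        f = soft(f + K.T @ (g - K @ f), gamma)
    return f

# Toy problem: a 5-sparse length-100 signal observed through a
# random 40x100 matrix, scaled so that ||K|| < 1.
rng = np.random.default_rng(0)
n, m = 100, 40
f_true = np.zeros(n)
f_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
K = rng.standard_normal((m, n))
K /= 1.01 * np.linalg.norm(K, 2)   # keep the operator norm below 1
g = K @ f_true
f_hat = thresholded_landweber(K, g, gamma=0.02)
```

The scaling of K keeps its operator norm below 1, which the convergence analysis of the thresholded Landweber iteration assumes; with a small threshold and enough iterations the iterate approaches the sparse signal.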

## 2 Description of the proposed algorithm

In order to explain the underlying idea of the proposed method, let us begin with the following problem. Papoulis introduced an iterative method for the reconstruction of a band-limited signal from a known segment. Suppose f(t) is a signal of which we only know a small segment, g(t), i.e. g(t)=f(t) for |t| ≤ τ. Also suppose F(ω) is the Fourier transform of f(t) and F(ω)=0 for |ω| > σ (band-limitedness). The objective is to reconstruct f(t) from g(t).

In order to solve this problem, we begin with G(ω), the Fourier transform of g(t), and form H1(ω)=G(ω)pσ(ω), where pσ(ω) equals 1 for |ω| ≤ σ and 0 otherwise; i.e. we truncate G(ω) for |ω| > σ. In other words, we change g(t) so that it satisfies the constraint on the original signal (band-limitedness in this case). h1(t), the inverse transform of H1(ω), is then used to form f1(t)=g(t)+h1(t)−h1(t)pτ(t), where pτ(t) equals 1 for |t| ≤ τ and 0 otherwise; this recovers the known segment of f(t). f1(t) is supposed to be a better estimate of the desired signal, f(t), than g(t). This estimate can be further improved by repeating the above procedure in an iterative manner. That is, in the nth iteration, we form the function:

 Hn(ω)=Fn−1(ω)pσ(ω) (5)

compute its inverse transform, hn(t), and recover the known segment of the original signal

 fn(t)=g(t)+hn(t)−hn(t)pτ(t) (6)

It can be proved that fn(t) tends to f(t) as n → ∞.

In brief, in each iteration, we change the latest estimate of the desired signal, i.e. the output of the previous iteration, so that it satisfies the constraint (bandlimitedness in this case). Since this process might affect the entire signal, including the known segment, the known segment is then recovered before further progress.
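The two-step structure above can be sketched in a discrete setting, where the Fourier transform becomes a DFT and band-limitedness means that only low-frequency bins are nonzero. The function below is our own illustration of the alternating procedure, not Papoulis's original continuous-time formulation:

```python
import numpy as np

def pg_extrapolate(f_known, known_idx, n, band, n_iter=1000):
    # Discrete sketch of the Papoulis-Gerchberg iteration:
    # alternately (a) zero out DFT bins outside the band (cf. eq. (5)),
    # and (b) restore the known samples (cf. eq. (6)).
    f = np.zeros(n)
    f[known_idx] = f_known
    k = np.arange(n)
    in_band = np.minimum(k, n - k) <= band   # |frequency index| <= band
    for _ in range(n_iter):
        F = np.fft.fft(f)
        F[~in_band] = 0.0            # enforce band-limitedness
        f = np.fft.ifft(F).real
        f[known_idx] = f_known       # recover the known segment
    return f

# Band-limited test signal: a constant plus one cosine (band = 1).
n = 128
t = np.arange(n)
f_true = 0.3 + np.cos(2 * np.pi * t / n)
known_idx = np.arange(100)           # first 100 of 128 samples are known
f_hat = pg_extrapolate(f_true[known_idx], known_idx, n, band=1)
```

Since the test signal is genuinely band-limited and the known segment is long enough to determine it uniquely, the alternating procedure converges to the original signal.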

This problem is obviously different from our original problem stated in (3), because, firstly, it concentrates on the special case of recovering a continuous signal from a known segment and, secondly, the constraint on the signal is bandlimitedness while the constraint of (3) is sparsity. Nevertheless, we will implement the above idea to solve our own problem as explained below.

Based on the above algorithm, our iterative algorithm involves two main operations in each iteration, namely, an operation to maintain the constraint followed by an operation to recover the original observations. Since we are interested in problems with sparsity constraint, a thresholding operation can maintain this constraint for us, i.e.

 hn=Sγ(fn−1) (7)

where fn−1 is the latest estimate of the original signal, obtained in the previous iteration, and Sγ is the soft thresholding operator, defined as

 Sγ(g)=∑i sγ(⟨g,ψi⟩)ψi (8)

where sγ(x)=sign(x)·max(|x|−γ, 0) is the scalar soft-thresholding function, which shrinks its argument toward zero by γ and sets coefficients smaller than γ in magnitude exactly to zero.
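For concreteness, the scalar rule sγ can be exercised on a few values (a toy illustration in the canonical basis; in our experiments the thresholding is applied to wavelet coefficients):

```python
import numpy as np

def soft_threshold(x, gamma):
    # s_gamma applied elementwise: entries with magnitude below gamma
    # vanish, and the remaining entries shrink toward zero by gamma.
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

x = np.array([-2.0, -0.3, 0.0, 0.5, 1.5])
y = soft_threshold(x, 0.5)   # -> [-1.5, 0.0, 0.0, 0.0, 1.0]
```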

Analogous to (6), the original observations are then recovered by

 fn=hn+K∗(g−Khn) (9)

The sequence of iterates can, thus, be expressed in the following form:

 fn=Sγ(fn−1)+K∗(g−KSγ(fn−1)) (10)

with the initial estimate f0=K∗g.

Although (10) is not exactly a sequence of Landweber iterations, it can still be viewed as a modified version of the thresholded Landweber iterations. Note, especially, the analogy between (10) and (4).
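Substituting (7) into (9) yields one pass of the proposed iteration. The sketch below (with helper names of our own, and the canonical basis standing in for the wavelet basis used later in the experiments) implements (7) and (9) directly:

```python
import numpy as np

def soft(x, gamma):
    # Elementwise soft thresholding, cf. eq. (8) in the canonical basis.
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def pg_sparse(K, g, gamma, n_iter=300):
    # Proposed iteration: threshold first (eq. (7)), then restore the
    # observations via a Landweber-type correction (eq. (9)).
    f = K.T @ g                       # initial estimate f0 = K^T g
    for _ in range(n_iter):
        h = soft(f, gamma)            # maintain the sparsity constraint
        f = h + K.T @ (g - K @ h)     # recover the observations
    return f

# Toy problem with a row-orthonormal K (K K^T = I), e.g. a partial
# orthogonal transform.
rng = np.random.default_rng(1)
n, m = 100, 40
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
K = Q[:m]                             # m orthonormal rows
f_true = np.zeros(n)
f_true[rng.choice(n, 5, replace=False)] = 1.0 + rng.random(5)
g = K @ f_true
f_hat = pg_sparse(K, g, gamma=0.05)
```

Note that when K has orthonormal rows (K K∗ = I), the correction step (9) reproduces the observations exactly, i.e. K fn = g after every iteration, mirroring the way the Papoulis-Gerchberg iteration restores the known segment.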

In this letter we only introduce the algorithm and experimentally evaluate its performance, compared to similar state-of-the-art algorithms. A detailed discussion of the convergence of the iterative algorithm and its relation to the thresholded Landweber iterations is beyond the scope of the current letter and will be postponed to future publications. The motivation behind the proposed algorithm was briefly discussed, though.

## 3 Experiments

In all the experiments described below, the thresholding operator is applied to stationary wavelet transform (SWT) coefficients, obtained using the DB1 (Haar) mother wavelet with 1 level of decomposition. All thresholds are obtained using the well-known Birgé-Massart strategy. The iterative algorithm continues until a convergence criterion, e.g. a negligible change between consecutive iterates, is met. For the sake of comparison, the results are compared with those obtained by ℓ1-norm minimization and total-variation (TV) norm minimization, two well-known state-of-the-art methods of sparse signal recovery.

Due to space constraints, the results of the experiments are included very concisely. More comprehensive results can be found at http://mkayvan.googlepages.com/sparsesignalrecovery.

### 3.1 Recovery of 1D signals

First, we consider the ideal case of sampling with no distortion, i.e. we assume e=0 in (2). The HeaviSine test signal (figure 1), from the well-known Donoho-Johnstone collection of synthetic test signals, is reconstructed from different numbers of randomly selected samples. Table 1 shows the mean squared error (MSE) between the reconstructed and the original signal for reconstruction by (10) as well as by ℓ1 and TV norm minimization. As is evident from the results, reconstruction by (10) outperforms the other two methods in almost all cases.

### 3.2 Recovery of 2D Signals

Here we consider the classical problem of reconstructing images from highly incomplete frequency-domain observations, which has received considerable attention because of its important applications in medical imaging, especially in MRI. In particular, acquiring MR images involves acquisition of 2-dimensional Fourier-domain data of the image. Due to the physics of the imaging device, this process is usually carried out by taking 1-dimensional slices of the 2-dimensional Fourier-domain data. This process is often too time-consuming, though, so for rapid MR imaging it is desirable to take only a subset of these slices, e.g. a reduced number of Fourier-domain samples taken over radial lines.

Radial sampling is a common way of sampling over the 2-dimensional Fourier domain, and several authors have addressed this problem, especially as an important application of compressive sampling. For the sake of comparison of the proposed method with other state-of-the-art reconstruction methods, here we will also address the same problem. The reconstruction problem might, however, seem a bit different here, since sampling is confined to radial lines, instead of being random. Nevertheless, what we are essentially doing is reconstruction of the signal from its projections onto a lower dimensional subspace, which is exactly what was being done in the previous cases.
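As a minimal sketch of such a sampling pattern, the mask below selects DFT samples along a given number of radial lines through the center of the spectrum. This is only a cartoon of radial sampling; the exact pattern used in the literature may differ in details such as line placement and centering:

```python
import numpy as np

def radial_mask(n, n_lines):
    # Binary DFT-domain sampling mask: n_lines radial lines through
    # the center of an n x n (fftshifted) spectrum.
    mask = np.zeros((n, n), dtype=bool)
    c = n // 2
    for angle in np.linspace(0.0, np.pi, n_lines, endpoint=False):
        for r in range(-c, c):
            i = c + int(round(r * np.cos(angle)))
            j = c + int(round(r * np.sin(angle)))
            if 0 <= i < n and 0 <= j < n:
                mask[i, j] = True   # keep this Fourier sample
    return mask

mask = radial_mask(64, 9)
sampling_rate = mask.sum() / mask.size   # fraction of Fourier samples kept
```

For 9 lines on a 64×64 grid this keeps well under 20% of the Fourier samples, which is the highly incomplete regime of interest.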

The test image used is the 256×256 Shepp-Logan phantom (figure 1). The pixels in this image take values between 0 and 1, and the image has a nonzero gradient at 2184 pixels. The setup of our experiments is the same as that of earlier work on recovery from radial Fourier samples, which has also been adopted by several other authors as a framework for evaluating the performance of their methods.

Table 2 shows the PSNR (peak signal-to-noise ratio) values of the images reconstructed from samples taken along 9, 11, 15, and 21 radial lines in the 2-dimensional discrete Fourier transform (DFT) domain, compared to those obtained by ℓ1 and TV norm minimization. The images reconstructed by (10), as well as by minimizing the ℓ1 and TV norms, from samples taken over 9 radial lines are shown in figure 2.

Figure 2: Reconstruction of the 256×256 Shepp-Logan phantom from samples taken along K=9 radial lines in the Fourier domain. From left to right: (a) Original 256×256 Shepp-Logan phantom (b) Reconstruction by (10) (c) Reconstruction by TV norm minimization (d) Reconstruction by l1 norm minimization.
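The PSNR figures reported here can be computed as follows (the standard definition, with the phantom's peak value of 1 used as the reference peak):

```python
import numpy as np

def psnr(x, ref, peak=1.0):
    # Peak signal-to-noise ratio in dB: 10 log10(peak^2 / MSE).
    mse = np.mean((x - ref) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 0.1 against a peak of 1 gives 20 dB.
ref = np.zeros((8, 8))
noisy = ref + 0.1        # constant error of 0.1 -> MSE = 0.01
value = psnr(noisy, ref)  # -> 20.0 dB
```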

In the next experiment we consider the more realistic case of distorted observations. In particular, samples are taken from noisy versions of the Shepp-Logan phantom test image, affected by additive white Gaussian noise (AWGN). The results are shown in table 3. Note that, especially when the noise is significant, the reconstructed image exhibits considerably less error than the noisy image from which the samples were taken.

## 4 Conclusion

Motivated by the Papoulis-Gerchberg algorithm, a method for recovery of sparse signals from very limited numbers of observations was proposed. Iterative thresholding algorithms have been widely used to address this problem. Our algorithm also takes advantage of thresholding to maintain the sparsity constraint in each iteration. The signal is then reconstructed by iteratively going through a constraint-maintaining operation followed by recovery of the known features. The performance of the method was experimentally evaluated and compared to other state-of-the-art methods.

### References

1. A. Papoulis, Signal Analysis, McGraw-Hill, 1977.
2. I. Daubechies, M. Defrise, C. De Mol, “An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint,” Communications on Pure and Applied Mathematics, vol.57, pp.1413-1457, 2004.
3. E. J. Candes, M. B. Wakin, “An Introduction To Compressive Sampling [A sensing/sampling paradigm that goes against the common knowledge in data acquisition],” IEEE Signal Processing Magazine, Vol.25, pp.21-30, 2008.
4. A. Kirsch, “An Introduction to the Mathematical Theory of Inverse Problems,” Springer, 1996.
5. M. Fornasier, “Domain Decomposition Methods for Linear Inverse Problems with Sparsity Constraints,” Journal of Inverse problems, vol.23, no.6, 2007.
6. I. Daubechies, M. Fornasier, I. Loris, “Acceleration of the projected gradient method for linear inverse problems with sparsity constraints,” Preprint, 2008.
7. J. M. Bioucas-Dias, M. A. T. Figueiredo, “A New TwIST: Two-Step Iterative Shrinkage/Thresholding Algorithms for Image Restoration,” IEEE Trans. on Image Processing, vol.16, No.12, pp.2992-3004, 2007.
8. K. Bredies, D. A. Lorenz, “Iterated Hard Shrinkage for Minimization Problems With Sparsity Constraints,” SIAM Journal on Scientific Computing, vol.30, no.2, pp.657-683, 2008.
9. T. Blumensath and M. E. Davies, “Iterative hard thresholding for compressed sensing,” Preprint, 2008.
10. “Iterative thresholding algorithms,” Preprint, 2007.
11. R. Coifman and D. Donoho, “Translation invariant de-noising,” Lecture Notes in Statistics: Wavelets and Statistics, Springer-Verlag, pp. 125-150, 1995.
12. L. Birgé and P. Massart, “From model selection to adaptive estimation,” Research Papers in Probability and Statistics: Festschrift for Lucien Le Cam, (D. Pollard, E. Torgersen and G. Yang, eds.), Springer, New York, 1996.
13. E. J. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, vol.52, 2006.
14. D. Donoho and I. Johnstone, “Ideal spatial adaptation via wavelet shrinkage,” Biometrika, vol.81, pp.425-455, 1994.
15. T. Blumensath, M. E. Davies, “Gradient Pursuits,” IEEE Trans. on Signal Processing, vol.56, 2008.
16. K. Egiazarian, A. Foi, V. Katkovnik, “Compressed sensing image reconstruction via recursive spatially adaptive filtering,” IEEE Int. Conf. on Image Processing, vol.1, pp.549-552, 2007.
17. E. J. Candes, M. B. Wakin, and S. Boyd, “Enhancing Sparsity by Reweighted Minimization,” Journal of Fourier Analysis and Applications, special issue on sparsity, 2008.
18. T. Blumensath and M. E. Davies, “Stagewise Weak Gradient Pursuits Part I: Fundamentals and Numerical Studies,” Preprint 2008.