
Linear inverse problems with noise: primal and primal-dual splitting¹

Abstract

In this paper, we propose two algorithms for solving linear inverse problems when the observations are corrupted by noise. A proper data fidelity term (log-likelihood) is introduced to reflect the statistics of the noise (e.g. Gaussian, Poisson). On the other hand, as a prior, the images to restore are assumed to be positive and sparsely represented in a dictionary of waveforms. Piecing together the data fidelity and the prior terms, the solution to the inverse problem is cast as the minimization of a non-smooth convex functional. We establish the well-posedness of the optimization problem, characterize the corresponding minimizers, and solve it by means of primal and primal-dual proximal splitting algorithms originating from the field of non-smooth convex optimization theory. Experimental results on deconvolution, inpainting and denoising with some comparison to prior methods are also reported.

1 Introduction

Many works have already been dedicated to linear inverse problems with Gaussian noise (see [16] for a comprehensive review), while linear inverse problems in the presence of other kinds of noise, such as Poisson noise, have attracted less interest, presumably because the noise properties are more complicated to handle. Such inverse problems nevertheless have important applications in imaging, such as restoration (e.g. deconvolution in medical and astronomical imaging) or reconstruction (e.g. computerized tomography).

Since the pioneering work of [9] for Gaussian noise, many other methods have appeared for solving linear inverse problems with sparsity regularization, but they are limited to the Gaussian case. In the context of Poisson linear inverse problems with sparsity-promoting regularization, a few recent algorithms have been proposed. For example, [10] stabilize the noise and propose a family of nested schemes relying upon proximal splitting algorithms (Forward-Backward and Douglas-Rachford) to solve the corresponding optimization problem. The work of [4] is in the same vein. These methods may be extended to other kinds of noise. However, nested algorithms are time-consuming since they require sub-iterations. Using the augmented Lagrangian method with the alternating direction method of multipliers (ADMM), which is nothing but the Douglas-Rachford splitting applied to the Fenchel-Rockafellar dual problem, [13] presented a deconvolution algorithm with TV and sparsity regularization, and [1] a denoising algorithm for multiplicative noise. This scheme, however, requires solving a least-squares problem, which can be done explicitly only in some cases.

In this paper, we propose a framework for solving linear inverse problems when the observations are corrupted by noise. In order to form the data fidelity term, we take the exact likelihood. As a prior, the images to restore are assumed to be positive and sparsely represented in a dictionary of atoms. The solution to the inverse problem is cast as the minimization of a non-smooth convex functional, for which we prove well-posedness of the optimization problem, characterize the corresponding minimizers, and solve it by means of primal and primal-dual proximal splitting algorithms originating from the realm of non-smooth convex optimization theory. Convergence of the algorithms is also shown. Experimental results, with comparisons to other algorithms, are finally reported on deconvolution, inpainting and denoising.

Notation and terminology

Let $\mathcal{H}$ be a real Hilbert space, here a finite-dimensional vector space such as $\mathbb{R}^n$. We denote by $\|\cdot\|$ the norm associated with the inner product in $\mathcal{H}$, and by $\mathbf{I}$ the identity operator on $\mathcal{H}$. $\|\cdot\|_p$ is the $\ell_p$ norm. $x$ and $\alpha$ are respectively reordered vectors of image samples and transform coefficients. We denote by $\mathrm{ri}(\mathcal{C})$ the relative interior of a convex set $\mathcal{C}$. A real-valued function $f$ is coercive if $\lim_{\|x\| \to +\infty} f(x) = +\infty$, and is proper if its domain $\mathrm{dom}\, f = \{x : f(x) < +\infty\}$ is non-empty. $\Gamma_0(\mathcal{H})$ is the class of all proper lower semicontinuous (lsc) convex functions from $\mathcal{H}$ to $(-\infty, +\infty]$. We denote by $\|A\|$ the spectral norm of a linear operator $A$, and by $\ker(A)$ its kernel.

Let $x \in \mathbb{R}^n$ be an image. $x$ can be written as the superposition of elementary atoms $\varphi_\gamma$ parameterized by $\gamma \in \mathcal{I}$, $|\mathcal{I}| = L$, such that $x = \sum_{\gamma \in \mathcal{I}} \alpha_\gamma \varphi_\gamma = \Phi\alpha$. We denote by $\Phi$ the dictionary (typically a frame of $\mathbb{R}^n$), whose columns are the atoms, all normalized to unit $\ell_2$-norm.

2 Problem statement

Consider the image formation model where an input image $x$ of $n$ pixels is indirectly observed through the action of a bounded linear operator $H$, and contaminated by a noise $\varepsilon$ through a composition operation $\odot$ (e.g. addition):

$$y = (Hx) \odot \varepsilon.$$

The linear inverse problem at hand is to reconstruct $x$ from the observed image $y$.

A natural way to attack this problem would be to adopt a maximum a posteriori (MAP) Bayesian framework with an appropriate likelihood function (the distribution of the observed data $y$ given an original image $x$) reflecting the statistics of the noise. As a prior, the image is supposed to be economically (sparsely) represented in a pre-chosen dictionary $\Phi$, as measured by a sparsity-promoting penalty $\Psi$, supposed throughout to be convex but possibly non-smooth, e.g. the $\ell_1$ norm.

2.1 Gaussian noise case

For Gaussian noise, we consider the following formation model:

$$y = Hx + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2 \mathbf{I}).$$

From the probability density function, the negative log-likelihood writes (up to an additive constant):

$$f_1 : z \mapsto \frac{1}{2\sigma^2}\,\|y - z\|_2^2,$$

so that the data fidelity term is $f_1(Hx)$.

From this function, we can directly derive that $f_1$ is proper, convex and lsc, i.e. $f_1 \in \Gamma_0$.

2.2 Poisson noise case

The observed image is then a discrete collection of counts $y = (y_i)_{1 \le i \le n}$, each of which is bounded, i.e. $y_i < +\infty$. Each count $y_i$ is a realization of an independent Poisson random variable with mean $(Hx)_i$. Formally, this writes in vector form as $y \sim \mathcal{P}(Hx)$.

From the probability density function of a Poisson random variable, the likelihood writes:

$$\mathbb{P}(y \mid x) = \prod_{i=1}^{n} \frac{\big((Hx)_i\big)^{y_i}\, e^{-(Hx)_i}}{y_i!}.$$

Taking the negative log-likelihood, we arrive at the following data fidelity term (up to an additive constant independent of $x$):

$$f_1 : z \mapsto \sum_{i=1}^{n} f_{\mathrm{P}}(z_i),\qquad f_{\mathrm{P}}(\eta) = \begin{cases} \eta - y_i \log \eta & \text{if } \eta > 0,\\ 0 & \text{if } \eta = 0 \text{ and } y_i = 0,\\ +\infty & \text{otherwise.} \end{cases}$$

Using classical results from convex analysis, we can show that this $f_1$ is also in $\Gamma_0$.

2.3 Multiplicative noise

We consider the case without a linear operator, i.e. $H = \mathbf{I}$, and, as in [1], an $M$-look fully developed speckle noise:

$$y_i = x_i\, \varepsilon_i,$$

where the $\varepsilon_i$ are i.i.d. Gamma random variables with mean 1 and shape parameter $M$.

In order to simplify the problem, the logarithm of the observation is considered, $z = \log y = \log x + \log \varepsilon$. In [1], the authors prove that the negative log-likelihood, written in terms of $u = \log x$, yields (up to an additive constant):

$$f_1 : u \mapsto M \sum_{i=1}^{n} \left( u_i + e^{z_i - u_i} \right).$$

Using classical results from convex analysis, we can directly derive that this $f_1$ belongs to $\Gamma_0$ as well.

2.4 Optimization problem

Our aim is then to solve the following optimization problem, under a synthesis-type sparsity prior²:

$$\min_{\alpha \in \mathbb{R}^L} \; f_1(H\Phi\alpha) + \lambda \Psi(\alpha) + \imath_{\mathcal{C}}(\Phi\alpha). \qquad (\mathrm{P})$$

The data fidelity term $f_1$ reflects the noise statistics, the penalty function $\Psi$ is positive, additive and chosen to enforce sparsity, $\lambda > 0$ is a regularization parameter, and $\imath_{\mathcal{C}}$ is the indicator function of a convex set $\mathcal{C}$ (e.g. the positive orthant for Poissonian data).

For the rest of the paper, we assume that $f_1$ is a proper, convex and lsc function, i.e. $f_1 \in \Gamma_0$. This is true for many kinds of noise, including Poisson, Gaussian and Laplacian (see [3] for other examples).

From the objective in (P), a characterization of the corresponding minimizers can be derived.

2.5 Well-posedness of (P)

Let $\mathcal{M}$ be the set of minimizers of problem (P). Suppose that $\Psi$ is coercive; then the objective of (P) is coercive. Therefore, $\mathcal{M}$ is a non-empty compact convex set.

3 Iterative Minimization Algorithms

3.1 Proximal calculus

We are now ready to describe the proximal splitting algorithms to solve (P). At the heart of the splitting framework is the notion of proximity operator, introduced by Moreau [14]: for $f \in \Gamma_0(\mathcal{H})$, the proximity operator of $f$ is

$$\mathrm{prox}_f(x) = \underset{y \in \mathcal{H}}{\operatorname{argmin}} \; \frac{1}{2}\|x - y\|^2 + f(y).$$

Then, the proximity operator of the indicator function of a convex set is merely its orthogonal projector. One important property of this operator is separability: if $f(x) = \sum_i f_i(x_i)$, then $\mathrm{prox}_f(x) = \big(\mathrm{prox}_{f_i}(x_i)\big)_i$.
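As a small illustration (in Python, our choice for sketches throughout; the paper itself gives no code), here are the projectors for the two constraint sets used later, the positive orthant and a closed interval. Function names are ours:

```python
import numpy as np

# Prox of the indicator of a convex set = orthogonal projector onto it.

def proj_positive_orthant(x):
    """Projection onto {x : x >= 0}, used e.g. for Poissonian data."""
    return np.maximum(x, 0.0)

def proj_interval(x, a, b):
    """Component-wise projection onto the closed interval [a, b]."""
    return np.clip(x, a, b)
```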

For Gaussian noise, we can easily prove that, with $f_1$ as defined in Section 2.1,

$$\mathrm{prox}_{\beta f_1}(z) = \frac{z + \frac{\beta}{\sigma^2}\, y}{1 + \frac{\beta}{\sigma^2}}.$$
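A minimal sketch of this closed form, assuming $f_1(z) = \|y - z\|_2^2/(2\sigma^2)$ as in Section 2.1 (the function name is ours):

```python
import numpy as np

def prox_gaussian(z, y, beta, sigma):
    """prox of beta*f1 with f1(z) = ||y - z||^2 / (2 sigma^2).
    Setting the gradient of 0.5*||u - z||^2 + beta/(2 sigma^2)*||y - u||^2
    to zero gives the closed form below."""
    w = beta / sigma ** 2
    return (z + w * y) / (1.0 + w)
```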

The following result can be proved easily by solving the proximal optimization problem above with $f_1$ as defined in Section 2.2 (see also [5]): component-wise,

$$\mathrm{prox}_{\beta f_{\mathrm{P}}}(z_i) = \frac{z_i - \beta + \sqrt{(z_i - \beta)^2 + 4\beta y_i}}{2}.$$
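A minimal sketch of this component-wise prox, assuming the Poisson data term $f_{\mathrm{P}}(\eta) = \eta - y_i \log\eta$ of Section 2.2 (function name ours); it returns the positive root of the quadratic optimality condition:

```python
import numpy as np

def prox_poisson(z, y, beta):
    """Component-wise prox of beta*f_P, f_P(u) = u - y*log(u) for u > 0.
    Optimality: u - z + beta - beta*y/u = 0, i.e. the quadratic
    u^2 + (beta - z)*u - beta*y = 0; keep its positive root."""
    return 0.5 * (z - beta + np.sqrt((z - beta) ** 2 + 4.0 * beta * y))
```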

As $f_1$ for multiplicative noise involves the exponential, we need the Lambert W function [8] in order to derive a closed form of the proximity operator: component-wise, with $a = \beta M$,

$$\mathrm{prox}_{\beta f_1}(v)_i = v_i - a + W\!\left(a\, e^{z_i - v_i + a}\right).$$
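A minimal sketch under the log-domain data term $f_1(u) = M\sum_i (u_i + e^{z_i - u_i})$ of Section 2.3; the closed form follows from solving the scalar optimality condition with the Lambert W function (scipy's `lambertw`); the function name is ours:

```python
import numpy as np
from scipy.special import lambertw

def prox_multiplicative(v, z, beta, M):
    """Component-wise prox of beta*f1, f1(u) = M*(u + exp(z - u)).
    Optimality: u - v + beta*M - beta*M*exp(z - u) = 0, solved by
    u = v - a + W(a * exp(z - v + a)) with a = beta*M (principal branch).
    Illustrative only: the exponential may overflow for extreme inputs."""
    a = beta * M
    return v - a + lambertw(a * np.exp(z - v + a)).real
```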

We now turn to $\mathrm{prox}_{\lambda\Psi}$. Since $\Psi$ is additive, its proximity operator follows component-wise from the separability property of Section 3.1 and the following:

Among the most popular penalty functions satisfying the above requirements, we have the $\ell_1$-norm $\Psi(\alpha) = \|\alpha\|_1$, in which case the associated proximity operator is component-wise soft-thresholding, denoted $\mathrm{ST}_\lambda$ in the sequel: $\mathrm{ST}_\lambda(\alpha)_i = \mathrm{sign}(\alpha_i)\max(|\alpha_i| - \lambda, 0)$.
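The corresponding prox is one line; a minimal sketch (name ours):

```python
import numpy as np

def soft_threshold(alpha, lam):
    """prox of lam*||.||_1: component-wise soft-thresholding ST_lam."""
    return np.sign(alpha) * np.maximum(np.abs(alpha) - lam, 0.0)
```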

3.2 Splitting on the primal problem

Splitting for sums of convex functions

Suppose that the objective to be minimized can be expressed as the sum of $K$ functions in $\Gamma_0(\mathcal{H})$ verifying domain qualification conditions:

$$\min_{x \in \mathcal{H}} \; \sum_{k=1}^{K} f_k(x).$$

Proximal splitting methods for solving such a problem are iterative algorithms which may evaluate the individual proximity operators $\mathrm{prox}_{f_k}$, supposed to have an explicit convenient structure, but never the proximity operators of sums of the $f_k$.

Splitting algorithms have an extensive literature since the 1970's, where the case $K = 2$ predominates. Usually, splitting algorithms handling $K > 2$ have either explicitly or implicitly relied on a reduction to the case $K = 2$ in the product space $\mathcal{H}^K$. For instance, applying the Douglas-Rachford splitting to the reduced form produces Spingarn's method, which performs independent proximal steps on each $f_k$ and then computes the next iterate by essentially averaging the individual results; a minimal sketch is given below. The scheme described in [6] is very similar in spirit to Spingarn's method, with some refinements.
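To fix ideas, here is a minimal sketch of this product-space reduction (Douglas-Rachford in $\mathcal{H}^K$, i.e. Spingarn's method); it illustrates the principle only and is not the refined scheme of [6]. `proxes[k](v, gamma)` is assumed to return $\mathrm{prox}_{\gamma f_k}(v)$:

```python
import numpy as np

def parallel_proximal(proxes, x0, gamma=1.0, n_iter=200):
    """Spingarn's method: Douglas-Rachford applied in the product space
    H^K to min_x sum_k f_k(x), recast with the separable sum of the f_k
    and the indicator of the diagonal {(x, ..., x)}."""
    K = len(proxes)
    z = np.tile(x0, (K, 1))  # one copy of the variable per function
    for _ in range(n_iter):
        # Independent proximal steps on each f_k (parallelizable).
        p = np.stack([proxes[k](z[k], gamma) for k in range(K)])
        # Projection onto the diagonal = averaging the K reflected copies.
        avg = (2.0 * p - z).mean(axis=0)
        z = z + avg[None, :] - p
    return p.mean(axis=0)
```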

Application to noisy inverse problems

Problem (P) is amenable to the form above by wisely introducing auxiliary variables. As (P) involves two linear operators ($H$ and $\Phi$), we need two of them, which we define as $u_1 = \Phi\alpha$ and $u_2 = H u_1$; the idea is to get rid of the composition of $H$ and $\Phi$. These constraints are the graphs of the two linear operators $\Phi$ and $H$, where $\mathcal{G}_A = \{(v, w) : w = Av\}$ denotes the graph of $A$. Then, the optimization problem (P) can be equivalently written:

$$\min_{\alpha, u_1, u_2} \; f_1(u_2) + \lambda\Psi(\alpha) + \imath_{\mathcal{C}}(u_1) + \imath_{\mathcal{G}_\Phi}(\alpha, u_1) + \imath_{\mathcal{G}_H}(u_1, u_2).$$

Notice that, in our case, the proximity operator of the sum of the first three terms is, by virtue of separability in $(\alpha, u_1, u_2)$, given by $\mathrm{prox}_{\lambda\Psi}$, $P_{\mathcal{C}}$ and $\mathrm{prox}_{f_1}$ applied block-wise; see the separability property of Section 3.1.

The proximity operators of $f_1$ and $\lambda\Psi$ are easily accessible through the closed forms of Section 3.1. The projector onto $\mathcal{C}$ is trivial in most cases (e.g. positive orthant, closed interval). It remains now to compute the projectors onto $\mathcal{G}_\Phi$ and $\mathcal{G}_H$, which, by well-known linear algebra arguments, are obtained from the projector onto the image of the corresponding operator.

The inverse $(\mathbf{I} + \Phi\Phi^*)^{-1}$ appearing in the expression of $P_{\mathcal{G}_\Phi}$ can be computed efficiently when $\Phi$ is a tight frame. Similarly, for $P_{\mathcal{G}_H}$, the inverse writes $(\mathbf{I} + HH^*)^{-1}$, and its computation can be done in a domain where $H$ is diagonal; e.g. the Fourier domain for convolution, or the pixel domain for a mask.
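To make this step concrete, here is a minimal sketch of the projector onto the graph $\{(\alpha, u) : u = \Phi\alpha\}$, specialized to a tight frame with $\Phi\Phi^* = c\,\mathbf{I}$; `Phi` and `Phi_t` are assumed routines applying $\Phi$ and $\Phi^*$:

```python
import numpy as np

def proj_graph_tight_frame(alpha0, u0, Phi, Phi_t, c):
    """Orthogonal projection of (alpha0, u0) onto {(alpha, u): u = Phi(alpha)}.
    Normal equations: alpha = (I + Phi^* Phi)^{-1} (alpha0 + Phi^*(u0));
    by the Woodbury identity and Phi Phi^* = c*I, this inverse reduces
    to I - Phi^* Phi / (1 + c), so no linear system needs solving."""
    b = alpha0 + Phi_t(u0)
    alpha = b - Phi_t(Phi(b)) / (1.0 + c)
    return alpha, Phi(alpha)
```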

Finally, the main steps of our primal scheme are summarized in Algorithm 1. Its convergence is a corollary of [6, Theorem 3.4].

Splitting on the dual: Primal-dual algorithm

Our problem can also be rewritten in the form

$$\min_{\alpha} \; F(K\alpha) + G(\alpha),$$

where now $K : \alpha \mapsto (\Phi\alpha, H\Phi\alpha)$, $F : (u_1, u_2) \mapsto \imath_{\mathcal{C}}(u_1) + f_1(u_2)$ and $G = \lambda\Psi$. Again, one may notice that the proximity operator of $F$ can be directly computed using the separability in $u_1$ and $u_2$.

Recently, a primal-dual scheme, which turns out to be a preconditioned version of ADMM, was proposed in [2] to minimize objectives of this form. Transposed to our setting, this scheme gives the steps summarized in Algorithm 2.

Adapting the arguments of [2], convergence of the sequence generated by Algorithm 2 is ensured.
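For concreteness, a minimal sketch of the primal-dual iteration of [2] applied to $\min_\alpha F(K\alpha) + G(\alpha)$; it requires $\tau\sigma\|K\|^2 < 1$, and the prox of the conjugate $F^*$ is obtained from that of $F$ via Moreau's identity. All function names and the calling convention (`prox(v, step)` returning $\mathrm{prox}_{\mathrm{step}\cdot f}(v)$) are ours:

```python
import numpy as np

def chambolle_pock(K, K_t, prox_G, prox_F, alpha0, tau, sigma, n_iter=300):
    """First-order primal-dual scheme of [2] for min_a F(K a) + G(a).
    Moreau's identity: prox_{s F*}(v) = v - s * prox_{F/s}(v / s)."""
    alpha = alpha0.copy()
    alpha_bar = alpha0.copy()
    xi = K(alpha0)  # dual variable
    for _ in range(n_iter):
        # Dual ascent step on F* (via Moreau's identity).
        v = xi + sigma * K(alpha_bar)
        xi = v - sigma * prox_F(v / sigma, 1.0 / sigma)
        # Primal descent step on G.
        alpha_old = alpha
        alpha = prox_G(alpha - tau * K_t(xi), tau)
        # Over-relaxation of the primal variable.
        alpha_bar = 2.0 * alpha - alpha_old
    return alpha
```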

3.3 Discussion

Algorithms 1 and 2 share some similarities, but also exhibit important differences. For instance, the primal-dual algorithm enjoys a convergence rate that is not known for the primal one. Furthermore, the latter necessitates two operator inversions that can only be done efficiently for some $\Phi$ and $H$, while the former involves only applications of these linear operators and their adjoints. Consequently, Algorithm 2 can virtually handle any inverse problem with a bounded linear $H$. In cases where the inverses can be computed efficiently, e.g. deconvolution with a tight frame, both algorithms have a comparable computational burden. In general, if other regularizations or constraints are imposed on the solution, in the form of additional proper lsc convex terms added to (P), both algorithms still apply by introducing wisely chosen auxiliary variables.

4 Experimental results

4.1 Deconvolution under Poisson noise

Our algorithms were applied to deconvolution. In all experiments, $\Psi$ was the $\ell_1$-norm. Table 1 summarizes the mean absolute error (MAE) and the execution times for an astronomical image, where the dictionary consisted of the wavelet transform and the PSF was that of the Hubble telescope. Our algorithms were compared to state-of-the-art alternatives from the literature. In summary, the flexibility of our framework and the fact that Poisson noise is handled properly demonstrate the capabilities of our approach and allow our algorithms to compare very favorably with their competitors. The computational burden of our approaches is also among the lowest; both are notably faster than the PIDAL algorithm. Figure 1 displays the objective as a function of the iteration number and of time (in s). We can clearly see that Algorithm 2 converges faster than Algorithm 1.

Table 1: MAE and execution times for the deconvolution of the sky image.

         RL-MRS   RL-TV   StabG   PIDAL-FS   Alg. 1   Alg. 2
MAE      63.5     52.8    43      43.6       46       43.6
Times    230s     4.3s    311s    342s       183s     154s
Figure 1: Objective function for deconvolution under Poisson noise as a function of iterations (left) and time in seconds (right).

4.2 Inpainting with Gaussian noise

We also applied our algorithms to inpainting with Gaussian noise. In all experiments, $\Psi$ was the $\ell_1$-norm. The figure below summarizes the results, with PSNR values and execution times, for the Cameraman image, where the dictionary consisted of the wavelet transform and the mask was created by a random process (here with about 34% of missing pixels). Notice that both algorithms lead to essentially the same solution, which gives a good reconstruction of the image. Figure 2 displays the objective as a function of the iteration number and of time (in s). Again, we can clearly see that Algorithm 2 converges faster than Algorithm 1.

Inpainting results for the Cameraman using our two algorithms: original; masked and noisy (PSNR = 11.1); Alg. 1 (PSNR = 25.8); Alg. 2 (PSNR = 25.8).
Figure 2: Objective function for inpainting with Gaussian noise as a function of iterations (left) and time in seconds (right).

4.3 Denoising with multiplicative noise

As we work on the logarithm of the problem (see Section 2.3), the final estimate for each algorithm is obtained by taking the exponential of the result. In all experiments, $\Psi$ was the $\ell_1$-norm. The Barbara image was scaled to a maximal intensity of 30, with the minimal intensity set to a non-zero value in order to avoid issues with the logarithm. The noise was generated following the model of Section 2.3, leading to a medium level of noise. The figure below summarizes the results, with MAE values and execution times, for Barbara, where the dictionary consisted of the curvelet transform. Our methods give a good reconstruction of the image. Figure 3 displays the objective as a function of the iteration number and of time (in s). Again, we can clearly see that Algorithm 2 converges faster than Algorithm 1.

Denoising results for Barbara using our two algorithms: original; noisy (MAE = 3.6); Alg. 1 (MAE = 3.2); Alg. 2 (MAE = 2.3).
Figure 3: Objective function for denoising with multiplicative noise as a function of iterations (left) and time in seconds (right).

5 Conclusion

In this paper, we proposed two provably convergent algorithms for solving linear inverse problems with a sparsity prior. The primal-dual proximal splitting algorithm seems to perform better in terms of convergence speed than the primal one. Moreover, its computational burden is lower than that of most comparable state-of-the-art methods. Inverse problems under multiplicative noise with a non-trivial linear operator do not currently enter this framework; we will consider its adaptation to such problems in future work.

Footnotes

  1. Submitted to NCMIP 2011 on 02/20/11.
  2. Our framework and algorithms extend to an analysis-type prior just as well.

References

  1. Multiplicative noise removal using variable splitting and constrained optimization.
    J. Bioucas-Dias and M. Figueiredo. IEEE Transactions on Image Processing, 19(7):1720–1730, 2010.
  2. A first-order primal-dual algorithm for convex problems with applications to imaging.
    A. Chambolle and T. Pock. Technical report, CMAP, Ecole Polytechnique, 2010.
  3. A variational formulation for frame-based inverse problems.
    C. Chaux, P. L. Combettes, J.-C. Pesquet, and V. R. Wajs. Inv. Prob., 23:1495–1518, 2007.
  4. Nested iterative algorithms for convex constrained image recovery problems.
    C. Chaux, J.-C. Pesquet, and N. Pustelnik. SIAM Journal on Imaging Sciences, 2(2):730–762, 2009.
  5. A Douglas-Rachford splitting approach to nonsmooth convex variational signal recovery.
    P. L. Combettes and J.-C. Pesquet. IEEE J. Selec. Top. Sig. Pro., 1(4):564–574, 2007.
  6. A proximal decomposition method for solving convex variational inverse problems.
    P. L. Combettes and J.-C. Pesquet. Inv. Prob., 24(6), 2008.
  7. Signal recovery by proximal forward-backward splitting.
    P. L. Combettes and V. R. Wajs. SIAM Multiscale Model. Simul., 4(4):1168–1200, 2005.
  8. On the Lambert W function.
    R. Corless, G. Gonnet, D. Hare, D. Jeffrey, and D. Knuth. Advances in Computational Mathematics, 5:329–359, 1996.
  9. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint.
    I. Daubechies, M. Defrise, and C. De Mol. Comm. Pure Appl. Math., 57:1413–1457, 2004.
  10. A proximal iteration for deconvolving Poisson noisy images using sparse representations.
    F.-X. Dupé, M. Fadili, and J.-L. Starck. IEEE Trans. on Im. Pro., 18(2):310–321, 2009.
  11. A deconvolution method for confocal microscopy with total variation regularization.
    N. Dey et al. In ISBI 2004, pages 1223–1226. IEEE, 2004.
  12. Inpainting and zooming using sparse representations.
    M. Fadili, J.-L. Starck, and F. Murtagh. The Computer Journal, 2006.
  13. Restoration of Poissonian images using alternating direction optimization.
    M. Figueiredo and J. Bioucas-Dias. IEEE Transactions on Image Processing, 19(12), 2010.
  14. Fonctions convexes duales et points proximaux dans un espace hilbertien.
    J.-J. Moreau. CRAS Sér. A Math., 255:2897–2899, 1962.
  15. Astronomical Image and Data Analysis.
    J.-L. Starck and F. Murtagh. Springer, 2006.
  16. Sparse Image and Signal Processing.
    J.-L. Starck, F. Murtagh, and M. Fadili. Cambridge University Press, 2010.