Linear inverse problems with noise: primal and primal-dual splitting†

†Submitted to NCMIP 2011 on 02/20/11.

François-Xavier Dupé
AIM UMR CNRS - CEA
91191 Gif-sur-Yvette, France
Jalal M. Fadili
GREYC-ENSICAEN-Université de Caen
14050 Caen, France
Jean-Luc Starck
AIM UMR CNRS - CEA
91191 Gif-sur-Yvette, France
francois-xavier.dupe@cea.fr, jfadili@greyc.ensicaen.fr, jean-luc.starck@cea.fr
20 February 2011
Abstract

In this paper, we propose two algorithms for solving linear inverse problems when the observations are corrupted by noise. A proper data fidelity term (log-likelihood) is introduced to reflect the statistics of the noise (e.g. Gaussian, Poisson). As a prior, the images to restore are assumed to be positive and sparsely represented in a dictionary of waveforms. Piecing together the data fidelity and the prior terms, the solution to the inverse problem is cast as the minimization of a non-smooth convex functional. We establish the well-posedness of the optimization problem, characterize the corresponding minimizers, and solve it by means of primal and primal-dual proximal splitting algorithms originating from the field of non-smooth convex optimization. Experimental results on deconvolution, inpainting and denoising, with comparisons to prior methods, are also reported.

Keywords: Inverse Problems, Poisson noise, Gaussian noise, Multiplicative noise, Duality, Proximity operator, Sparsity

1 Introduction

Much work has already been dedicated to linear inverse problems with Gaussian noise (see [16] for a comprehensive review), while linear inverse problems in the presence of other kinds of noise, such as Poisson noise, have attracted less interest, presumably because the noise properties are more complicated to handle. Such inverse problems nevertheless have important applications in imaging, such as restoration (e.g. deconvolution in medical and astronomical imaging) or reconstruction (e.g. computerized tomography).

Since the pioneering work of [9] for Gaussian noise, many other methods have appeared for handling linear inverse problems with sparsity regularization, but they are limited to the Gaussian case. In the context of Poisson linear inverse problems with sparsity-promoting regularization, a few recent algorithms have been proposed. For example, [10] stabilize the noise and propose a family of nested schemes relying upon proximal splitting algorithms (Forward-Backward and Douglas-Rachford) to solve the corresponding optimization problem. The work of [4] is in the same vein. These methods may be extended to other kinds of noise. However, nested algorithms are time-consuming since they require sub-iterations. Using the augmented Lagrangian method with the alternating direction method of multipliers (ADMM), which is nothing but the Douglas-Rachford splitting applied to the Fenchel-Rockafellar dual problem, [13] presented a deconvolution algorithm with TV and sparsity regularization, and [1] a denoising algorithm for multiplicative noise. This scheme however requires solving a least-squares problem, which can be done explicitly only in some cases.

In this paper, we propose a framework for solving linear inverse problems when the observations are corrupted by noise. In order to form the data fidelity term, we take the exact likelihood. As a prior, the images to restore are assumed to be positive and sparsely represented in a dictionary of atoms. The solution to the inverse problem is cast as the minimization of a non-smooth convex functional, for which we prove well-posedness of the optimization problem, characterize the corresponding minimizers, and solve it by means of primal and primal-dual proximal splitting algorithms originating from the realm of non-smooth convex optimization theory. Convergence of the algorithms is also shown. Experimental results and comparisons to other algorithms on deconvolution are finally reported.

Notation and terminology

Let $\mathcal{H}$ be a real Hilbert space, here a finite-dimensional vector subspace of $\mathbb{R}^n$. We denote by $\|\cdot\|_2$ the norm associated with the inner product in $\mathcal{H}$, and by $\mathbf{I}$ the identity operator on $\mathcal{H}$. $\|\cdot\|_p$ is the $\ell_p$ norm. $x$ and $\alpha$ are respectively reordered vectors of image samples and transform coefficients. We denote by $\mathrm{ri}(\mathcal{C})$ the relative interior of a convex set $\mathcal{C}$. A real-valued function $f$ is coercive if $\lim_{\|x\|_2 \to +\infty} f(x) = +\infty$, and is proper if its domain is non-empty, $\mathrm{dom}\, f = \{x : f(x) < +\infty\} \neq \emptyset$. $\Gamma_0(\mathcal{H})$ is the class of all proper lower semicontinuous (lsc) convex functions from $\mathcal{H}$ to $(-\infty, +\infty]$. We denote by $\|\mathbf{A}\|$ the spectral norm of the linear operator $\mathbf{A}$, and by $\ker(\mathbf{A})$ its kernel.

Let $x \in \mathcal{H}$ be an image. $x$ can be written as the superposition of elementary atoms $\varphi_\gamma$ parameterized by $\gamma \in \mathcal{I}$ such that $x = \sum_{\gamma \in \mathcal{I}} \alpha_\gamma \varphi_\gamma = \mathbf{\Phi}\alpha$. We denote by $\mathbf{\Phi}$ the dictionary (typically a frame of $\mathcal{H}$), whose columns are the atoms, all normalized to a unit $\ell_2$-norm.
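
To make the synthesis model concrete, here is a minimal numpy sketch in which the orthonormal DCT basis plays the role of $\mathbf{\Phi}$; this basis is a hypothetical stand-in, the paper itself uses wavelet and curvelet frames.

```python
import numpy as np
from scipy.fftpack import dct, idct

# Synthesis model x = Phi @ alpha, with the orthonormal DCT basis
# standing in for the dictionary Phi (hypothetical choice).
n = 64
alpha = np.zeros(n)
alpha[[3, 17]] = [2.0, -1.0]       # a 2-sparse coefficient vector
x = idct(alpha, norm="ortho")      # synthesis: x = Phi alpha
# For an orthobasis, analysis recovers the coefficients exactly.
assert np.allclose(dct(x, norm="ortho"), alpha)
```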

2 Problem statement

Consider the image formation model where an input image $x_0$ of $n$ pixels is indirectly observed through the action of a bounded linear operator $\mathbf{H}$, and contaminated by a noise $\varepsilon$ through a composition operator $\odot$ (e.g. addition),

$y = (\mathbf{H} x_0) \odot \varepsilon .$  (1)

The linear inverse problem at hand is to reconstruct $x_0$ from the observed image $y$.

A natural way to attack this problem would be to adopt a maximum a posteriori (MAP) Bayesian framework with an appropriate likelihood function — the distribution of the observed data $y$ given an original $x_0$ — reflecting the statistics of the noise. As a prior, the image is supposed to be economically (sparsely) represented in a pre-chosen dictionary $\mathbf{\Phi}$, as measured by a sparsity-promoting penalty $\Psi$, supposed throughout to be convex but non-smooth, e.g. the $\ell_1$ norm.

2.1 Gaussian noise case

For Gaussian noise, we consider the following formation model,

$y = \mathbf{H} x_0 + \varepsilon,$  (2)

where $\varepsilon \sim \mathcal{N}(0, \sigma^2 \mathbf{I})$.

From the probability density function, the negative log-likelihood writes:

$F_1 : u \mapsto \frac{1}{2\sigma^2} \|y - u\|_2^2 .$  (3)

From this function, we can directly derive the following result,

Proposition 1

$F_1$ is a proper, strictly convex and lsc function.
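
As an illustration, here is a short Python sketch of this data fidelity term, up to the additive constant independent of $u$.

```python
import numpy as np

def gaussian_nll(u, y, sigma):
    """Gaussian anti log-likelihood F1(u) = ||y - u||^2 / (2 sigma^2),
    up to an additive constant independent of u."""
    return 0.5 * np.sum((y - u) ** 2) / sigma ** 2
```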

2.2 Poisson noise case

The observed image is then a discrete collection of counts $y = (y[i])_{1 \le i \le n}$ which are bounded, i.e. $y[i] < +\infty$. Each count $y[i]$ is a realization of an independent Poisson random variable with mean $(\mathbf{H}x_0)[i]$. Formally, this writes in a vector form as

$y \sim \mathcal{P}(\mathbf{H} x_0) .$  (4)

From the probability density function of a Poisson random variable, the likelihood writes:

$p(y \mid x_0) = \prod_{i} \frac{\big((\mathbf{H}x_0)[i]\big)^{y[i]} \exp\!\big(-(\mathbf{H}x_0)[i]\big)}{y[i]!} .$  (5)

Taking the negative log-likelihood, we arrive at the following data fidelity term:

$F_2 : u \mapsto \sum_{i=1}^{n} f_{\mathrm{P}}\big(u[i];\, y[i]\big), \qquad f_{\mathrm{P}}(t; b) = \begin{cases} t - b \log t & \text{if } t > 0, \\ 0 & \text{if } t = 0 \text{ and } b = 0, \\ +\infty & \text{otherwise.} \end{cases}$  (6)

Using classical results from convex theory, we can show that,

Proposition 2

$F_2$ is a proper, convex and lsc function. $F_2$ is strictly convex if and only if $y[i] > 0$ for all $i$.
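
A hedged Python sketch of this Poisson data fidelity follows, with the domain conventions spelled out in code; the handling of zero counts reflects the convention $0 \log 0 = 0$.

```python
import numpy as np

def poisson_nll(u, y):
    """Poisson anti log-likelihood F2(u), +inf outside the domain.
    Convention: for y[i] = 0 the i-th term reduces to u[i] (0*log 0 = 0)."""
    u = np.asarray(u, dtype=float)
    y = np.asarray(y, dtype=float)
    if np.any(u < 0) or np.any((u == 0) & (y > 0)):
        return np.inf
    out = np.sum(u)
    pos = y > 0
    out -= np.sum(y[pos] * np.log(u[pos]))
    return out
```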

2.3 Multiplicative noise

We consider the case without linear operator ($\mathbf{H} = \mathbf{I}$) and, as in [1], a $K$-look fully developed speckle noise,

$y[i] = x_0[i]\,\varepsilon[i], \qquad p_\varepsilon(t) = \frac{K^K t^{K-1} e^{-Kt}}{\Gamma(K)}, \quad t \ge 0 .$  (7)

In order to simplify the problem, the logarithm of the observation is considered, $z = \log y$. In [1], the authors prove that the anti log-likelihood yields, up to an additive constant,

$F_3 : u \mapsto K \sum_{i=1}^{n} \big(u[i] + e^{z[i] - u[i]}\big) .$  (8)

Using classical results from convex theory, we can directly derive,

Proposition 3

$F_3$ is a proper, strictly convex and lsc function.
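
For illustration, a small Python sketch of this log-domain fidelity, with $z = \log y$ assumed precomputed and constants independent of $u$ dropped.

```python
import numpy as np

def speckle_nll(u, z, K):
    """Anti log-likelihood of K-look speckle noise after the log
    transform z = log(y): F3(u) = K * sum(u + exp(z - u))."""
    return K * np.sum(u + np.exp(z - u))
```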

2.4 Optimization problem

Our aim is then to solve the following optimization problem, under a synthesis-type sparsity prior (our framework and algorithms extend to an analysis-type prior just as well):

$\min_{\alpha \in \mathbb{R}^L} \; F(\mathbf{H}\mathbf{\Phi}\alpha) + \lambda\Psi(\alpha) + \imath_{\mathcal{C}}(\mathbf{\Phi}\alpha) .$  (P)

The data fidelity term $F$ reflects the noise statistics; the penalty function $\Psi : \alpha \mapsto \sum_{i=1}^{L} \psi(\alpha_i)$ is positive, additive and chosen to enforce sparsity; $\lambda > 0$ is a regularization parameter; and $\imath_{\mathcal{C}}$ is the indicator function of the convex set $\mathcal{C}$ (e.g. the positive orthant for Poissonian data).

For the rest of the paper, we assume that $F$ is a proper, convex and lsc function, i.e. $F \in \Gamma_0$. This is true for many kinds of noise, including Poisson, Gaussian, Laplacian… (see [3] for other examples).

From the objective in (P), we get the following,

Proposition 4

  1. $F \circ \mathbf{H}\mathbf{\Phi}$ is a convex function, and so are $\lambda\Psi$ and $\imath_{\mathcal{C}} \circ \mathbf{\Phi}$.

  2. Suppose that $F$ is strictly convex. Then the objective of (P) remains strictly convex if $\mathbf{\Phi}$ is an orthobasis and $\ker(\mathbf{H}) = \{0\}$.

  3. Suppose that $\mathrm{dom}(F \circ \mathbf{H}\mathbf{\Phi}) \cap \mathrm{dom}(\Psi) \cap \mathbf{\Phi}^{-1}(\mathcal{C}) \neq \emptyset$. Then the objective of (P) belongs to $\Gamma_0(\mathbb{R}^L)$.

2.5 Well-posedness of ()

Let $\mathcal{M}$ be the set of minimizers of problem (P). Suppose that $\psi$ is coercive. Thus the objective of (P) is coercive. Therefore, the following holds:

Proposition 5

  1. Existence: (P) has at least one solution, i.e. $\mathcal{M} \neq \emptyset$.

  2. Uniqueness: (P) has a unique solution if $F \circ \mathbf{H}\mathbf{\Phi}$ is strictly convex, or under (ii) of Proposition 4.

3 Iterative Minimization Algorithms

3.1 Proximal calculus

We are now ready to describe the proximal splitting algorithms to solve (P). At the heart of the splitting framework is the notion of a proximity operator.

Definition 6 ([14])

Let $f \in \Gamma_0(\mathcal{H})$. Then, for every $u \in \mathcal{H}$, the function $v \mapsto \frac{1}{2}\|u - v\|_2^2 + f(v)$ achieves its infimum at a unique point denoted by $\mathrm{prox}_f(u)$. The operator $\mathrm{prox}_f : \mathcal{H} \to \mathcal{H}$ thus defined is the proximity operator of $f$.
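
Since every closed form given below can be checked against this definition, here is a brute-force numerical evaluation of the proximity operator, useful only as a sanity check in small dimensions; Nelder-Mead is used so that non-smooth $f$ can be handled.

```python
import numpy as np
from scipy.optimize import minimize

def prox_numeric(f, u, beta=1.0):
    """Brute-force evaluation of prox_{beta f}(u) straight from
    Definition 6: minimize 0.5*||v - u||^2 + beta*f(v).
    A sanity check against the closed forms below, nothing more."""
    u = np.asarray(u, dtype=float)
    obj = lambda v: 0.5 * np.sum((v - u) ** 2) + beta * f(v)
    return minimize(obj, x0=u, method="Nelder-Mead").x
```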

Then, the proximity operator of the indicator function $\imath_{\mathcal{C}}$ of a convex set $\mathcal{C}$ is merely its orthogonal projector $P_{\mathcal{C}}$. One important property of this operator is the separability property:

Lemma 7 ([7])

Let $(f_i)_{1 \le i \le m}$ with $f_i \in \Gamma_0(\mathcal{H}_i)$, and let $f : (x_1, \dots, x_m) \mapsto \sum_{i=1}^{m} f_i(x_i)$. Then $\mathrm{prox}_f = \big(\mathrm{prox}_{f_1}, \dots, \mathrm{prox}_{f_m}\big)$.

For Gaussian noise, with $F_1$ as defined in (3), we can easily prove the following.

Lemma 8

Let $y$ be the observation. The proximity operator associated to $\beta F_1$ (i.e. the Gaussian anti log-likelihood) is,

$\mathrm{prox}_{\beta F_1}(u) = \frac{\sigma^2 u + \beta y}{\sigma^2 + \beta} .$  (9)

The following result can be proved easily by solving the proximal optimization problem in Definition 6 with $F_2$ as defined in (6); see also [5].

Lemma 9

Let $y$ be the count map (i.e. the observations). The proximity operator associated to $\beta F_2$ (i.e. the Poisson anti log-likelihood) is, coordinatewise,

$\mathrm{prox}_{\beta F_2}(u)[i] = \frac{u[i] - \beta + \sqrt{\big(u[i] - \beta\big)^2 + 4\beta\, y[i]}}{2} .$  (10)

As $F_3$ for multiplicative noise involves the exponential, we need the Lambert W function [8] in order to derive a closed form of the proximity operator.

Lemma 10

Let $z = \log y$ be the log-transformed observations. The proximity operator associated to $\beta F_3$ is, coordinatewise,

$\mathrm{prox}_{\beta F_3}(u)[i] = u[i] - \beta K + W\!\big(\beta K\, e^{z[i] - u[i] + \beta K}\big),$  (11)

where $W$ is the Lambert W function.

We now turn to $\mathrm{prox}_{\lambda\Psi}$, which is given by Lemma 7 and the following result:

Theorem 11 ([12])

Suppose that: (i) $\psi$ is convex, even-symmetric, non-negative and non-decreasing on $[0, +\infty)$, with $\psi(0) = 0$; (ii) $\psi$ is twice differentiable on $\mathbb{R} \setminus \{0\}$; (iii) $\psi$ is continuous on $\mathbb{R}$, and admits a positive right derivative at zero, $\psi'(0^+) > 0$. Then, the proximity operator of $\beta\psi$ has exactly one continuous solution decoupled in each coordinate $\alpha_i$:

$\mathrm{prox}_{\beta\psi}(\alpha)_i = \begin{cases} 0 & \text{if } |\alpha_i| \le \beta\,\psi'(0^+), \\ \mathrm{sign}(\alpha_i)\, t_i & \text{otherwise, where } t_i > 0 \text{ solves } t_i + \beta\,\psi'(t_i) = |\alpha_i| . \end{cases}$  (12)

Among the most popular penalty functions satisfying the above requirements is $\psi = |\cdot|$, in which case the associated proximity operator is soft-thresholding, $\mathrm{ST}_\beta(\alpha_i) = \mathrm{sign}(\alpha_i)\max(|\alpha_i| - \beta, 0)$, denoted $\mathrm{ST}_\beta$ in the sequel.
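
In code:

```python
import numpy as np

def soft_threshold(alpha, beta):
    """Prox of beta*|.|: coordinatewise soft-thresholding ST_beta."""
    return np.sign(alpha) * np.maximum(np.abs(alpha) - beta, 0.0)
```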

3.2 Splitting on the primal problem

3.2.1 Splitting for sums of convex functions

Suppose that the objective to be minimized can be expressed as the sum of $m$ functions in $\Gamma_0(\mathcal{H})$, verifying domain qualification conditions:

$\min_{x \in \mathcal{H}} \; \sum_{i=1}^{m} f_i(x) .$  (13)

Proximal splitting methods for solving (13) are iterative algorithms which may evaluate the individual proximity operators $\mathrm{prox}_{f_i}$, supposed to have an explicit convenient structure, but never proximity operators of sums of the $f_i$.

Splitting algorithms have an extensive literature since the 1970's, where the case $m = 2$ predominates. Usually, splitting algorithms handling $m > 2$ have either explicitly or implicitly relied on a reduction of (13) to the case $m = 2$ in the product space $\mathcal{H}^m$. For instance, applying the Douglas-Rachford splitting to the reduced form produces Spingarn's method, which performs independent proximal steps on each $f_i$, and then computes the next iterate by essentially averaging the individual proximity operators. The scheme described in [6] is very similar in spirit to Spingarn's method, with some refinements.

3.2.2 Application to noisy inverse problems

Problem (P) is amenable to the form (13) by wisely introducing auxiliary variables. As (P) involves two linear operators ($\mathbf{\Phi}$ and $\mathbf{H}$), we need two of them, which we define as $\theta_1 = \mathbf{\Phi}\alpha$ and $\theta_2 = \mathbf{H}\theta_1$. The idea is to get rid of the composition of $\mathbf{H}$ and $\mathbf{\Phi}$. Let the two linear operators $\mathbf{T}_1 : (\alpha, \theta_1, \theta_2) \mapsto \mathbf{\Phi}\alpha - \theta_1$ and $\mathbf{T}_2 : (\alpha, \theta_1, \theta_2) \mapsto \mathbf{H}\theta_1 - \theta_2$. Then, the optimization problem (P) can be equivalently written:

$\min_{(\alpha, \theta_1, \theta_2)} \; \lambda\Psi(\alpha) + \imath_{\mathcal{C}}(\theta_1) + F(\theta_2) + \imath_{\mathcal{S}}(\alpha, \theta_1, \theta_2),$  (14)

$\mathcal{S} = \ker(\mathbf{T}_1) \cap \ker(\mathbf{T}_2) = \big\{(\alpha, \theta_1, \theta_2) : \theta_1 = \mathbf{\Phi}\alpha, \; \theta_2 = \mathbf{H}\theta_1\big\} .$  (15)

Notice that the proximity operator of the first three terms is directly accessible in our case by virtue of the separability of the proximity operator in $\alpha$, $\theta_1$ and $\theta_2$; see Lemma 7.

Parameters: The observed image $y$, the dictionary $\mathbf{\Phi}$, the number of iterations $N_{\mathrm{iter}}$, the proximal step $\beta > 0$ and the regularization parameter $\lambda$.
Initialization:
Set the components $z_i^{(0)}$ (e.g. to zero) and $(\alpha^{(0)}, \theta_1^{(0)}, \theta_2^{(0)})$ to their average.
Main iteration:
For $t = 0$ to $N_{\mathrm{iter}} - 1$,
  • Data fidelity (Lemmas 8, 9 and 10): apply $\mathrm{prox}_{\beta F}$ to the $\theta_2$-component.

  • Sparsity penalty (Theorem 11): apply $\mathrm{ST}_{\beta\lambda}$ to the $\alpha$-component.

  • Positivity constraint: apply $P_{\mathcal{C}}$ to the $\theta_1$-component.

  • Auxiliary constraints with $\theta_1 = \mathbf{\Phi}\alpha$ and $\theta_2 = \mathbf{H}\theta_1$ (Lemma 12): project onto $\ker(\mathbf{T}_1)$ and $\ker(\mathbf{T}_2)$.

  • Average the proximity operator outputs.

  • Choose the relaxation parameter $\mu_t \in (0, 2)$.

  • Update the components $z_i^{(t+1)}$.

  • Update the coefficients estimate $(\alpha^{(t+1)}, \theta_1^{(t+1)}, \theta_2^{(t+1)})$.

End main iteration
Output: Reconstructed image $x^{(N_{\mathrm{iter}})} = \mathbf{\Phi}\alpha^{(N_{\mathrm{iter}})}$.
Algorithm 1: Primal scheme for solving (P).

The proximity operators of $F$, $\lambda\Psi$ and $\imath_{\mathcal{C}}$ are easily accessible through Lemmas 8, 9, 10 and Theorem 11. The projector onto $\mathcal{C}$ is trivial in most cases (e.g. positive orthant, closed interval). It remains now to compute the projectors onto $\ker(\mathbf{T}_1)$ and $\ker(\mathbf{T}_2)$ which, by well-known linear algebra arguments, are obtained from the projector onto the image of the associated operator.

Lemma 12

The proximity operator associated to $\imath_{\ker(\mathbf{T})}$, with $\mathbf{T} : (x, \theta) \mapsto \mathbf{L}x - \theta$, is the orthogonal projector

$P_{\ker(\mathbf{T})}(x, \theta) = (u, \mathbf{L}u), \qquad u = (\mathbf{I} + \mathbf{L}^*\mathbf{L})^{-1}(x + \mathbf{L}^*\theta) .$  (16)

The inverse in the expression of $u$ can be computed efficiently when $\mathbf{L} = \mathbf{\Phi}$ is a tight frame. Similarly, for $\mathbf{L} = \mathbf{H}$, the inverse writes $(\mathbf{I} + \mathbf{H}^*\mathbf{H})^{-1}$, and its computation can be done in the domain where $\mathbf{H}$ is diagonal; e.g. Fourier for convolution or pixel domain for a mask.
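
A dense-matrix sketch of this projector follows; for the structured operators of the paper (tight frame $\mathbf{\Phi}$, diagonalizable $\mathbf{H}$) the linear solve below would be replaced by the fast closed forms just mentioned.

```python
import numpy as np

def project_graph(x, theta, L):
    """Orthogonal projector onto {(u, L u)} (Lemma 12):
    u = (I + L^T L)^{-1} (x + L^T theta), then return (u, L u).
    L is a dense matrix here, for illustration only."""
    n = L.shape[1]
    u = np.linalg.solve(np.eye(n) + L.T @ L, x + L.T @ theta)
    return u, L @ u
```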

Finally, the main steps of our primal scheme are summarized in Algorithm 1. Its convergence is a corollary of [6, Theorem 3.4].

Proposition 13

Let $(\alpha^{(t)})_{t \in \mathbb{N}}$ be a sequence generated by Algorithm 1. Suppose that Proposition 4-(iii) is verified, and $\sum_{t} \mu_t (2 - \mu_t) = +\infty$. Then $(\alpha^{(t)})_{t \in \mathbb{N}}$ converges to a (non-strict) global minimizer of (P).
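
To fix ideas, here is a self-contained toy instance of the primal scheme in Python for pure denoising ($\mathbf{H} = \mathbf{\Phi} = \mathbf{I}$), where no auxiliary variables are needed: a PPXA-style averaging of the three proximity operators in the spirit of [6], with relaxation $\mu_t = 1$. This is a sketch under these simplifying assumptions, not the exact iteration of Algorithm 1.

```python
import numpy as np

def primal_denoise(y, sigma, lam, n_iter=200, gamma=1.0):
    """Toy primal (product-space) scheme for
    min (1/(2 sigma^2))||y - x||^2 + lam*||x||_1 + i_{x >= 0}(x)."""
    prox_gauss = lambda u, b: (sigma**2 * u + b * y) / (sigma**2 + b)
    soft = lambda u, b: np.sign(u) * np.maximum(np.abs(u) - b, 0.0)
    m = 3                                 # three functions, weights 1/m
    z = [y.astype(float).copy() for _ in range(m)]
    x = sum(z) / m
    for _ in range(n_iter):
        g = gamma * m                     # effective step gamma / omega_i
        p = [prox_gauss(z[0], g),         # Gaussian fidelity (Lemma 8)
             soft(z[1], g * lam),         # l1 penalty (soft-thresholding)
             np.maximum(z[2], 0.0)]       # positivity projection
        pbar = sum(p) / m
        for i in range(m):                # relaxed component updates
            z[i] += 2.0 * pbar - x - p[i]
        x = pbar
    return x

# Usage: denoise a sparse positive signal.
rng = np.random.default_rng(0)
x0 = np.maximum(rng.standard_normal(256), 0) * (rng.random(256) < 0.1)
x_hat = primal_denoise(x0 + 0.1 * rng.standard_normal(256), 0.1, 0.05)
```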

3.2.3 Splitting on the dual: Primal-dual algorithm

Our problem (P) can also be rewritten in the form,

$\min_{\alpha \in \mathbb{R}^L} \; g(\mathbf{K}\alpha) + \lambda\Psi(\alpha),$  (17)

where now $\mathbf{K} : \alpha \mapsto (\mathbf{\Phi}\alpha, \mathbf{H}\mathbf{\Phi}\alpha)$ and $g(\theta_1, \theta_2) = \imath_{\mathcal{C}}(\theta_1) + F(\theta_2)$. Again, one may notice that the proximity operator of $g$ can be directly computed using the separability in $\theta_1$ and $\theta_2$.

Recently, a primal-dual scheme to minimize objectives of the form (17), which turns out to be a pre-conditioned version of ADMM, was proposed in [2]. Transposed to our setting, this scheme gives the steps summarized in Algorithm 2.

Adapting the arguments of [2], convergence of the sequence generated by Algorithm 2 is ensured.

Proposition 14

Suppose that Proposition 4-(iii) holds. Choose $\tau > 0$ and $\varsigma > 0$ such that $\tau\varsigma\|\mathbf{K}\|^2 < 1$, and let $(\alpha^{(t)})_{t \in \mathbb{N}}$ be as defined by Algorithm 2. Then, $(\alpha^{(t)})_{t \in \mathbb{N}}$ converges to a (non-strict) global minimizer of (P) at the rate $O(1/t)$ on the restricted duality gap.

3.3 Discussion

Algorithms 1 and 2 share some similarities, but also exhibit important differences. For instance, the primal-dual algorithm enjoys a convergence rate that is not known for the primal algorithm. Furthermore, the latter necessitates two operator inversions that can only be done efficiently for some $\mathbf{\Phi}$ and $\mathbf{H}$, while the former involves only applications of these linear operators and their adjoints. Consequently, Algorithm 2 can virtually handle any inverse problem with a bounded linear $\mathbf{H}$. In cases where the inverses can be computed efficiently, e.g. deconvolution with a tight frame, both algorithms have a comparable computational burden. In general, if other regularizations/constraints are imposed on the solution, in the form of additional proper lsc convex terms that would appear in (P), both algorithms still apply by introducing wisely chosen auxiliary variables.

Parameters: The observed image $y$, the dictionary $\mathbf{\Phi}$, the number of iterations $N_{\mathrm{iter}}$, the proximal steps $\tau$ and $\varsigma$, and the regularization parameter $\lambda$.
Initialization:
$\alpha^{(0)} = \bar{\alpha}^{(0)} = 0$, dual variables $(u_1^{(0)}, u_2^{(0)}) = (0, 0)$.
Main iteration:
For $t = 0$ to $N_{\mathrm{iter}} - 1$,
  • Data fidelity (Lemmas 8, 9 and 10): $u_2^{(t+1)} = v - \varsigma\,\mathrm{prox}_{F/\varsigma}(v/\varsigma)$ with $v = u_2^{(t)} + \varsigma\,\mathbf{H}\mathbf{\Phi}\bar{\alpha}^{(t)}$.

  • Positivity constraint: $u_1^{(t+1)} = w - \varsigma\,P_{\mathcal{C}}(w/\varsigma)$ with $w = u_1^{(t)} + \varsigma\,\mathbf{\Phi}\bar{\alpha}^{(t)}$.

  • Sparsity penalty (Theorem 11): $\alpha^{(t+1)} = \mathrm{ST}_{\tau\lambda}\big(\alpha^{(t)} - \tau(\mathbf{\Phi}^* u_1^{(t+1)} + (\mathbf{H}\mathbf{\Phi})^* u_2^{(t+1)})\big)$.

  • Update the coefficients estimate: $\bar{\alpha}^{(t+1)} = 2\alpha^{(t+1)} - \alpha^{(t)}$.

End main iteration
Output: Reconstructed image $x^{(N_{\mathrm{iter}})} = \mathbf{\Phi}\alpha^{(N_{\mathrm{iter}})}$.
Algorithm 2: Primal-dual scheme for solving (P).
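
Similarly, a self-contained toy instance of the primal-dual scheme for inpainting under Gaussian noise, with $\mathbf{\Phi} = \mathbf{I}$ and $\mathbf{K} = \mathbf{H}$ a binary mask (so $\|\mathbf{K}\| \le 1$); the positivity constraint is dropped to keep the sketch short, and the dual step uses Moreau's identity to evaluate the prox of the conjugate.

```python
import numpy as np

def primal_dual_inpaint(y, mask, sigma, lam, n_iter=300):
    """Toy Chambolle-Pock iteration for inpainting with Gaussian noise:
    min_alpha (1/(2 sigma^2))||y - mask*alpha||^2 + lam*||alpha||_1.
    y is assumed to be zero on the missing pixels."""
    prox_gauss = lambda u, b: (sigma**2 * u + b * y) / (sigma**2 + b)
    soft = lambda u, b: np.sign(u) * np.maximum(np.abs(u) - b, 0.0)
    tau = sig = 0.9                       # tau * sig * ||K||^2 < 1
    alpha = np.zeros_like(y, dtype=float)
    alpha_bar = alpha.copy()
    u = np.zeros_like(alpha)
    for _ in range(n_iter):
        # Dual step: prox of the conjugate of F via Moreau's identity.
        v = u + sig * mask * alpha_bar
        u = v - sig * prox_gauss(v / sig, 1.0 / sig)
        # Primal step: soft-thresholding (Theorem 11).
        alpha_new = soft(alpha - tau * mask * u, tau * lam)
        # Over-relaxation step.
        alpha_bar = 2.0 * alpha_new - alpha
        alpha = alpha_new
    return alpha
```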

4 Experimental results

4.1 Deconvolution under Poisson noise

Our algorithms were applied to deconvolution. In all experiments, $\Psi$ was the $\ell_1$-norm. Table 1 summarizes the mean absolute error (MAE) and the execution times for an astronomical image, where the dictionary consisted of the wavelet transform and the PSF was that of the Hubble telescope. Our algorithms were compared to state-of-the-art alternatives from the literature. In summary, the flexibility of our framework and the fact that Poisson noise is handled properly demonstrate the capabilities of our approach, and allow our algorithms to compare very favorably with their competitors. The computational burden of our approaches is also among the lowest, typically faster than the PIDAL algorithm. Fig. 1 displays the objective as a function of the iteration number and of time (in s). We can clearly see that Algorithm 2 converges faster than Algorithm 1.

Method   RL-MRS [15]   RL-TV [11]   StabG [10]   PIDAL-FS [13]   Alg. 1   Alg. 2
MAE      63.5          52.8         43           43.6            46       43.6
Time     230s          4.3s         311s         342s            183s     154s
Table 1: MAE and execution times for the deconvolution of the sky image.
Figure 1: Objective function for deconvolution under Poisson noise as a function of iterations (left) and time (right).

4.2 Inpainting with Gaussian noise

We also applied our algorithms to inpainting with Gaussian noise. In all experiments, $\Psi$ was the $\ell_1$-norm. Fig. 2 summarizes the results with the PSNR and the execution times for the Cameraman image, where the dictionary consisted of the wavelet transform and the mask was created from a random process (here with about 34% of missing pixels). Notice that both algorithms lead to the same solution, which gives a good reconstruction of the image. Fig. 3 displays the objective as a function of the iteration number and of time (in s). Again, we can clearly see that Algorithm 2 converges faster than Algorithm 1.

Original Masked and noisy (PSNR = 11.1)
Alg. 1 (PSNR = 25.8) Alg. 2 (PSNR = 25.8)
Figure 2: Inpainting results for the Cameraman using our two algorithms.
Figure 3: Objective function for inpainting with Gaussian noise as a function of iterations (left) and time (right).

4.3 Denoising with Multiplicative noise

As we work on the logarithm of the problem (see Section 2.3), the final estimate for each algorithm is obtained by taking the exponential of the result. In all experiments, $\Psi$ was the $\ell_1$-norm. The Barbara image was scaled to a maximal intensity of 30, and its minimum set to a non-zero value in order to avoid issues with the logarithm. The noise was added with a number of looks corresponding to a medium level of noise. Fig. 4 summarizes the results with the MAE and the execution times for Barbara, where the dictionary consisted of the curvelet transform. Our methods give a good reconstruction of the image. Fig. 5 displays the objective as a function of the iteration number and of time (in s). Again, we can clearly see that Algorithm 2 converges faster than Algorithm 1.

Original Noisy (MAE = 3.6)
Alg. 1 (MAE = 3.2) Alg. 2 (MAE = 2.3)
Figure 4: Denoising results for Barbara using our two algorithms.
Figure 5: Objective function for denoising with multiplicative noise as a function of iterations (left) and time (right).

5 Conclusion

In this paper, we proposed two provably convergent algorithms for solving linear inverse problems with a sparsity prior. The primal-dual proximal splitting algorithm seems to perform better in terms of convergence speed than the primal one. Moreover, its computational burden is lower than that of most comparable state-of-the-art methods. Inverse problems with multiplicative noise and a non-trivial linear operator do not currently enter this framework; we will consider its adaptation to such problems in future work.

References

  • [1] J. Bioucas-Dias and M. Figueiredo. Multiplicative noise removal using variable splitting and constrained optimization. IEEE Transactions on Image Processing, 19(7):1720–1730, 2010.
  • [2] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Technical report, CMAP, Ecole Polytechnique, 2010.
  • [3] C. Chaux, P. L. Combettes, J.-C. Pesquet, and V. R. Wajs. A variational formulation for frame-based inverse problems. Inv. Prob., 23:1495–1518, 2007.
  • [4] C. Chaux, J.-C. Pesquet, and N. Pustelnik. Nested iterative algorithms for convex constrained image recovery problems. SIAM Journal on Imaging Sciences, 2(2):730–762, 2009.
  • [5] P. L. Combettes and J.-C. Pesquet. A Douglas-Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Selec. Top. Sig. Pro., 1(4):564–574, 2007.
  • [6] P. L. Combettes and J.-C. Pesquet. A proximal decomposition method for solving convex variational inverse problems. Inv. Prob., 24(6), 2008.
  • [7] P. L. Combettes and V. R. Wajs. Signal recovery by proximal forward-backward splitting. SIAM Multiscale Model. Simul., 4(4):1168–1200, 2005.
  • [8] R. Corless, G. Gonnet, D. Hare, D. Jeffrey, and D. Knuth. On the Lambert W function. Advances in Computational Mathematics, 5:329–359, 1996. doi:10.1007/BF02124750.
  • [9] I. Daubechies, M. Defrise, and C. De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Comm. Pure Appl. Math., 57:1413–1457, 2004.
  • [10] F.-X. Dupé, M. Fadili, and J.-L. Starck. A proximal iteration for deconvolving Poisson noisy images using sparse representations. IEEE Trans. on Im. Pro., 18(2):310–321, 2009.
  • [11] N. Dey et al. A deconvolution method for confocal microscopy with total variation regularization. In ISBI 2004, pages 1223–1226. IEEE, 2004.
  • [12] M. Fadili, J.-L. Starck, and F. Murtagh. Inpainting and zooming using sparse representations. The Computer Journal, 2006.
  • [13] M. Figueiredo and J. Bioucas-Dias. Restoration of Poissonian images using alternating direction optimization. IEEE Transactions on Image Processing, 19(12), 2010.
  • [14] J.-J. Moreau. Fonctions convexes duales et points proximaux dans un espace hilbertien. CRAS Sér. A Math., 255:2897–2899, 1962.
  • [15] J.-L. Starck and F. Murtagh. Astronomical Image and Data Analysis. Springer, 2006.
  • [16] J.-L. Starck, F. Murtagh, and M. Fadili. Sparse Image and Signal Processing. Cambridge University Press, 2010.