Learning $\ell_1$-based analysis and synthesis sparsity priors using bi-level optimization


Yunjin Chen    Thomas Pock    Horst Bischof
Institute for Computer Graphics and Vision, Graz University of Technology
Graz A-8010, Austria
{cheny, pock, bischof}@icg.tugraz.at

We consider the analysis operator and synthesis dictionary learning problems based on the $\ell_1$-regularized sparse representation model. We reveal the internal relations between the $\ell_1$-based analysis model and synthesis model. We then introduce an approach to learn both the analysis operator and the synthesis dictionary simultaneously, using a unified framework of bi-level optimization. Our aim is to learn a meaningful operator (dictionary) such that the minimum-energy solution of the analysis (synthesis)-prior based model is as close as possible to the ground truth. We solve the bi-level optimization problem using the implicit differentiation technique. Moreover, we demonstrate the effectiveness of our learning approach by applying the learned analysis operator (dictionary) to the image denoising task and comparing its performance with state-of-the-art methods. Under this unified framework, we can also compare the performance of the two types of priors.



1 Introduction

Maximum a Posteriori (MAP) inference under the Bayesian framework is a popular method for solving various inverse problems in image processing. The MAP estimator is equivalent to an energy minimization problem consisting of a data fidelity term and a signal prior term (also known as the regularization term). Roughly speaking, priors fall into two main types: the analysis-based prior and the synthesis-based one.

Notation: In this paper our model presents a global prior over the entire image, in contrast to the common patch-based one. In order to distinguish between a patch and an image, we write patches as vectors of size $m^2$ (patch size $m \times m$, with $m$ odd) and images as vectors of size $N$ (the number of pixels). We denote by $D$ and $\Omega$ the patch-based synthesis dictionary and analysis operator respectively. Furthermore, when the analysis operator $\Omega$ is applied to an entire image $u$, we use the common sliding-window fashion to compute the coefficients for all patches of the 2-D image. This is equivalent to a multiplication of a sparse matrix $K$ with $u$, i.e., $Ku$. We can group $K$ into separable sparse matrices $K_i$, where $K_i$ is associated with the $i$-th row of $\Omega$. If we regard that row as a 2-D filter of size $m \times m$, then $K_i u$ is equivalent to the result of filtering the image $u$ with that filter. Finally, we use $K$, expanded from the patch-based analysis operator $\Omega$ in this way, to denote the global analysis operator associated with an entire image.
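The sliding-window equivalence described above is easy to verify numerically. The following sketch is not from the paper; the image size, the random filter, and the periodic boundary handling are our own assumptions. It checks that applying one row of a patch-based operator to every patch of an image coincides with a single 2-D correlation of the image with that row viewed as a filter:

```python
import numpy as np
from scipy.ndimage import correlate

rng = np.random.default_rng(0)
u = rng.standard_normal((8, 8))   # toy "image"
w = rng.standard_normal((3, 3))   # one row of the patch operator, viewed as a 3x3 filter

# Apply the row to every (periodically extended) 3x3 patch, sliding-window style.
manual = np.zeros_like(u)
for i in range(8):
    for j in range(8):
        rows = [(i + a - 1) % 8 for a in range(3)]
        cols = [(j + b - 1) % 8 for b in range(3)]
        manual[i, j] = np.sum(w * u[np.ix_(rows, cols)])

# The same coefficients arise from a single 2-D correlation, i.e. one sparse
# matrix-vector product K_i u (convolution with the flipped filter).
conv = correlate(u, w, mode='wrap')
assert np.allclose(manual, conv)
```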

Patch-based analysis and synthesis model: Under the framework of MAP, the patch-based analysis model is given as the following minimization problem

$$\hat{u} = \arg\min_{u} \frac{\lambda}{2}\|u - f\|_2^2 + \phi(\Omega u),$$

where $f$ is the observed noisy patch and $\Omega$ is called the analysis operator. The form of the penalty function $\phi$ depends on the prior utilized; for sparse representation it can be the $\ell_0$ or the $\ell_1$ norm. The second type of prior is the so-called synthesis prior. Basically, in the synthesis-based sparse representation model, a signal is called sparse over a given dictionary $D$ when it can be approximated as a linear combination of a few atoms from $D$. Using the MAP estimator, this is formulated as the following minimization problem over the coefficient vector $\alpha$, with the recovered patch given by $\hat{u} = D\hat{\alpha}$:

$$\hat{\alpha} = \arg\min_{\alpha} \frac{\lambda}{2}\|D\alpha - f\|_2^2 + \phi(\alpha).$$

When we concentrate on the convex sparse prior, the penalty function is normally chosen as the $\ell_1$ norm.
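As a concrete illustration of the patch-based synthesis model, the following sketch (our own toy example, not the paper's solver; the dictionary, data, and $\lambda$ are arbitrary) minimizes the $\ell_1$-regularized synthesis objective with plain ISTA, i.e. iterative soft-thresholding:

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((25, 50))   # toy over-complete dictionary (atoms as columns)
f = rng.standard_normal(25)         # observed patch
lam = 10.0

def objective(alpha):
    return 0.5 * lam * np.sum((D @ alpha - f) ** 2) + np.abs(alpha).sum()

t = 1.0 / (lam * np.linalg.norm(D, 2) ** 2)   # step size 1/L for the smooth part
alpha = np.zeros(50)
for _ in range(500):
    z = alpha - t * lam * (D.T @ (D @ alpha - f))       # gradient step
    alpha = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)  # soft-thresholding

u_hat = D @ alpha   # synthesized (denoised) patch
assert objective(alpha) < objective(np.zeros(50))
```

The soft-thresholding step is the proximal operator of the $\ell_1$ norm, which is what makes this simple scheme applicable to the convex sparse prior.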
Learning patch-based analysis and synthesis priors: In order to pursue better performance, an intuitive possibility is to make a better choice of the analysis operator and dictionary based on training. Indeed, there exist several typical and successful training algorithms for over-complete dictionary learning: (i) the K-SVD algorithm [7, 1], (ii) the on-line dictionary learning algorithm [13], and (iii) efficient sparse coding algorithms [11]. However, compared to the extensive study of synthesis dictionary training, the analysis operator learning problem has received much less attention in the past decade, although the analysis model is the counterpart of the celebrated synthesis sparse model. Fortunately, it has been gaining more and more attention in the last two years, and different algorithms for analysis operator learning have appeared [14, 17, 19, 21, 20, 9, 15]. Among existing analysis operator learning algorithms, the approach proposed by Peyré and Fadili [15] is very appealing, since they consider the problem from a novel point of view: they interpret the action of the analysis operator as convolution with a set of finite impulse response filters, and they formulate the analysis operator learning task as a bi-level optimization problem [5], which is solved using a gradient descent algorithm.

Contributions: Based on our investigation of existing dictionary and analysis operator learning algorithms, we find that (1) all the training approaches are based on patch priors; (2) the study of the latter is immature, since so far only a few prior works have been tested on natural images [21, 20, 9]; and (3) most analysis operator learning algorithms have to impose non-convex constraints on the operator, which makes the corresponding optimization problems relatively complex and difficult to solve. Thus three questions arise: (1) can we formulate an image-based model using the patch priors? (2) is it possible to formulate the analysis operator learning problem in a relatively easy way? (3) can we compare the two types of priors under a unified framework? We give answers to these questions in this paper.

2 Analysis operator and dictionary learning via bi-level optimization

From patch-based model to image-based one: In this paper, we concentrate on convex sparse representation. In the case of the analysis model, following the filter-based MRF model for image restoration, it is straightforward to extend the patch-based analysis model to the image-based one, which is given as:

$$\hat{u} = \arg\min_{u} \frac{\lambda}{2}\|u - f\|_2^2 + \|K u\|_1, \tag{2.1}$$

where $K$ is the global analysis operator constructed from the local patch-based analysis operator $\Omega$, and $u, f \in \mathbb{R}^N$ are images ($f$ being the noisy observation). However, extending the patch-based synthesis model to the image-based one is not as easy as the analysis case. Considering the common strategy of averaging overlapping patches, we can make explicit use of this patch-averaging strategy to reconstruct the recovered image; we then arrive at our image-based synthesis model


$$\{\hat{\alpha}_i\} = \arg\min_{\{\alpha_i\}} \frac{\lambda}{2}\Big\|\frac{1}{m^2}\sum_{i=1}^{N} P_i^{\mathrm T} D \alpha_i - f\Big\|_2^2 + \sum_{i=1}^{N} \|\alpha_i\|_1, \tag{2.2}$$

where the image has $N$ pixels, the patch size is $m \times m$, $P_i$ is an $m^2 \times N$ matrix that extracts the $i$-th patch from the image, and $\alpha_i$ is the coefficient vector of that patch. We explicitly average all the overlapping patches by the factor $1/m^2$, because every pixel is covered by exactly $m^2$ patches (the number of patches is equal to the number of pixels using the symmetrical boundary condition). Note that in our formulation the $\alpha_i$ are no longer independent, in contrast to their independence in [7]. If we stack all the $\alpha_i$ into one long column vector $\alpha$ and construct the corresponding matrix $\bar{D}$ from the blocks $P_i^{\mathrm T} D$ of the dictionary $D$, (2.2) can be rewritten as
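The factor $1/m^2$ in the patch-averaging construction can be checked numerically. The sketch below is our own toy example (it uses periodic rather than symmetric boundary handling for simplicity); it verifies that extracting one patch per pixel, pasting all patches back, and dividing by $m^2$ reproduces the image exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 3                        # 6x6 image, 3x3 patches, one patch per pixel
u = rng.standard_normal((n, n))

# Accumulate sum_i R_i^T R_i u: extract each (periodically extended) patch
# and paste it back at the same location.
acc = np.zeros_like(u)
for i in range(n):
    for j in range(n):
        rows = [(i + a - m // 2) % n for a in range(m)]
        cols = [(j + b - m // 2) % n for b in range(m)]
        acc[np.ix_(rows, cols)] += u[np.ix_(rows, cols)]

# Every pixel is covered by exactly m^2 patches, so averaging is exact.
assert np.allclose(acc / m**2, u)
```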


$$\hat{\alpha} = \arg\min_{\alpha} \frac{\lambda}{2}\|\bar{D}\alpha - f\|_2^2 + \|\alpha\|_1, \tag{2.3}$$

where $\bar{D} = \frac{1}{m^2}\big(P_1^{\mathrm T} D, \dots, P_N^{\mathrm T} D\big)$. Now we can see that the image-based model takes the same form as the patch-based one, and it retains a nice MAP interpretation. However, this formulation involves far more unknown variables (the length of $\alpha$) than the $N$ unknowns of the analysis model. This is a big drawback for our training scheme; we would prefer a formulation with only $N$ variables. Indeed, we succeed by considering its dual problem. Replacing the $\ell_1$ norm by its convex conjugate, with a dual variable $v$ playing the role of a Lagrange multiplier [3], we arrive at


$$\hat{v} = \arg\min_{v} \frac{\lambda}{2}\|v - f\|_2^2 + \sum_{j} \mathcal{I}_{[-1/\lambda,\,1/\lambda]}\big((\bar{D}^{\mathrm T} v)_j\big), \tag{2.4}$$

where $\mathcal{I}_{[-1/\lambda,1/\lambda]}$ denotes the indicator function of the interval $[-1/\lambda, 1/\lambda]$. After having a closer look at the connection between the primal variable $\alpha$ and the dual variable $v$, we find that $\hat{v}$ is exactly the additive noise itself, because the recovered image is given by $\hat{u} = f - \hat{v}$, with $\hat{u} = \bar{D}\hat{\alpha}$. More interestingly, after expanding $\bar{D}^{\mathrm T} v$, we find that this is exactly the set of filter responses obtained when the transposed dictionary $D^{\mathrm T}$ is applied, patch by patch, as an analysis operator to the image $v$. We therefore conclude that the synthesis dictionary $D$ can also be interpreted as an analysis operator $D^{\mathrm T}$. If we use $K_D$ to denote the corresponding global analysis operator, constructed in the same sliding-window fashion as the operator $K$ of the aforementioned image-based analysis model, we can present the similarity between the $\ell_1$-based analysis and synthesis models as

$$\hat{u} = \arg\min_{u} \frac{\lambda}{2}\|u - f\|_2^2 + \|K u\|_1 \ \ \text{(analysis)}, \qquad \hat{v} = \arg\min_{v} \frac{\lambda}{2}\|v - f\|_2^2 + \sum_{j} \mathcal{I}_{[-1/\lambda,1/\lambda]}\big((K_D v)_j\big) \ \ \text{(synthesis)}. \tag{2.5}$$

Both energies consist of a quadratic data term plus a term in the responses of a global analysis-type operator; they differ only in the penalty (the $\ell_1$ norm versus an indicator function) and in whether the recovered image is the minimizer itself or $f$ minus the minimizer.
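The primal-dual relation above can be sanity-checked numerically: after solving the synthesis problem, the residual $f - \hat{u}$ must yield "analysis" responses under $D^{\mathrm T}$ that all lie in the interval $[-1/\lambda, 1/\lambda]$. The sketch below is our own toy example (ISTA is used as a generic solver, and the problem sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
D = rng.standard_normal((16, 32))   # toy dictionary standing in for \bar{D}
f = rng.standard_normal(16)         # observed signal
lam = 5.0

# Solve min_a lam/2 ||D a - f||^2 + ||a||_1 with ISTA.
t = 1.0 / (lam * np.linalg.norm(D, 2) ** 2)
a = np.zeros(32)
for _ in range(20000):
    z = a - t * lam * (D.T @ (D @ a - f))
    a = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

u = D @ a       # recovered signal
v = f - u       # residual, i.e. the "additive noise" the dual problem recovers

# Dual feasibility: D^T acts as an analysis operator on v, and every
# response must lie in [-1/lam, 1/lam] (up to solver tolerance).
assert np.max(np.abs(D.T @ v)) <= 1.0 / lam + 1e-2
```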


Bi-level framework for synthesis dictionary and analysis operator learning: Motivated by the work presented in [15], and by the successful instance of training optimized parameters of an MRF model [18], we propose our analysis operator (dictionary) learning approach based on a unified bi-level optimization framework. Equation (2.5) is the so-called lower-level problem in our bi-level framework, and we need to define an upper-level problem, also known as the loss function. Following the work of [18], we use the differentiable loss function


$$L(u^*, g) = \frac{1}{2}\|u^* - g\|_2^2, \tag{2.6}$$

where $g$ is the ground-truth image and $u^*$ is the minimizer of the energy function (2.5). Given training samples $(g_k, f_k)$, where $g_k$ and $f_k$ are the $k$-th clean image and its noisy version respectively, our bi-level model aims to learn a meaningful analysis operator (dictionary) such that the overall loss function over all samples is as small as possible. Therefore, our learning model is formally formulated as the following unconstrained bi-level optimization problem (we take the analysis operator learning model as an instance; the dictionary learning model is similar):

$$\min_{\Omega} \sum_{k} L(u_k^*, g_k) \quad \text{s.t.} \quad u_k^* = \arg\min_{u} \frac{\lambda}{2}\|u - f_k\|_2^2 + \|K_\Omega u\|_1, \tag{2.7}$$

where $K_\Omega$ is the global analysis operator constructed from the patch-based operator $\Omega$.


Advantage of our model: The most appealing property of our approach is that it is not necessary to impose any constraint set on the analysis operator. Our training model avoids trivial solutions naturally: e.g., for the all-zero operator, the optimal solution of the lower-level energy in (2.7) is simply the noisy input itself, which leaves the loss function large; thus this trivial solution is not acceptable, since the target of our model is to minimize the loss function. Therefore, the learned operator must contain meaningful filters such that the minimizer of the lower-level problem is close to the ground truth.

Solving the bi-level problem using implicit differentiation: Following the work of [18], we can compute the gradient of the loss function w.r.t. the operator parameters by using implicit differentiation. In order to employ the implicit differentiation rule, we require a twice differentiable penalty function, so in training we replace the $\ell_1$ norm by a smooth approximation $\phi$.


In our training, we concentrate on zero-mean filters to stay consistent with the findings in [10]; therefore, we express each filter (row of $\Omega$) as a linear combination of a set of zero-mean basis filters $\{b_j\}$, i.e., $\omega_i = \sum_j \beta_{ij} b_j$. The derivatives of the loss function with respect to the parameters $\beta_{ij}$ are then given as

$$\frac{\partial L}{\partial \beta_{ij}} = -\Big(B_j^{\mathrm T}\,\phi'(K_i u^*) + K_i^{\mathrm T}\,\mathrm{diag}\big(\phi''(K_i u^*)\big)\,B_j u^*\Big)^{\mathrm T} \Big(I + \sum_{r} K_r^{\mathrm T}\,\mathrm{diag}\big(\phi''(K_r u^*)\big)\,K_r\Big)^{-1} (u^* - g), \tag{2.9}$$

where $K_i$ and $B_j$ are the global sparse matrices associated with the filter $\omega_i$ and the basis filter $b_j$ respectively, $\phi'(K_i u^*)$ is an $N \times 1$ vector obtained by applying the derivative $\phi'$ element-wise to the vector $K_i u^*$, and $\mathrm{diag}(\phi''(K_i u^*))$ is an $N \times N$ diagonal matrix whose entries are given by applying the second derivative $\phi''$ element-wise to $K_i u^*$. In this formulation we eliminate the parameter $\lambda$ for simplicity, since it can be incorporated into the norm of the analysis operator. As given by (2.9), we have collected all the information needed to compute the required gradients, and we can thus employ gradient-based algorithms for optimization. In this paper, we make use of an efficient quasi-Newton method, L-BFGS [12].

3 Learning experiments and application results for image denoising

We conducted our training experiments using training images from the BSDS300 [2] image segmentation database. We used all 200 training images and randomly sampled one patch from each, giving us a total of 200 training samples. We then generated the noisy versions by adding Gaussian noise of fixed standard deviation. In our experiments, we learned an analysis operator and a synthesis dictionary from the given training samples. In order to guarantee the zero-mean property, each atom of the operator (dictionary) is expressed as a linear combination of the DCT-7 basis (the $7 \times 7$ DCT filters), excluding the first filter with uniform entries.
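Constructing zero-mean filters from a DCT basis can be sketched as follows (our own illustration; the scaling is the standard orthonormal DCT-II convention): building the $7 \times 7$ 2-D DCT basis as outer products of 1-D DCT vectors and dropping the first (constant) atom leaves 48 zero-mean basis filters:

```python
import numpy as np

m = 7
# Rows of C are the orthonormal 1-D DCT-II basis vectors.
C = np.array([[np.cos(np.pi * k * (2 * i + 1) / (2 * m)) for i in range(m)]
              for k in range(m)])
C[0] /= np.sqrt(m)
C[1:] *= np.sqrt(2.0 / m)
assert np.allclose(C @ C.T, np.eye(m))   # orthonormality

# 2-D basis filters are outer products; drop the first (constant) one.
basis = [np.outer(C[p], C[q]) for p in range(m) for q in range(m)][1:]
assert len(basis) == m * m - 1           # 48 filters remain
assert all(abs(b.sum()) < 1e-10 for b in basis)   # all remaining filters are zero-mean
```

Any linear combination of these 48 filters is automatically zero-mean, which is why the constraint never needs to be enforced explicitly during training.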

After learning a meaningful operator and dictionary, we applied them to the image denoising problem on the same 68 test images used in [16]. Tab. 1 presents the comparison of the average denoising results achieved by our $\ell_1$-based analysis and synthesis models with (i) a state-of-the-art denoising method, BM3D [6], (ii) the K-SVD approach [7], and (iii) the total variation (TV)-based ROF denoising model [4]. We would like to point out that the TV-based approach is the most commonly used $\ell_1$-based analysis prior; the K-SVD approach is a synthesis sparse representation model based on $\ell_0$ optimization; and BM3D is a current state-of-the-art approach which is an image-based rather than generic-prior-based method, and is a specialized denoising algorithm. Fig. 1 presents a detailed comparison between our $\ell_1$-based analysis model, our $\ell_1$-based synthesis model, and the three considered denoising methods over the 68 test images. A point above the diagonal line means better performance than our $\ell_1$-based analysis model. (Due to space limits, we cannot present this figure at a larger scale; please refer to the digital version for better visibility.)

Figure 1: Scatter-plots of the per-image denoising results.

  Model      TV     K-SVD   BM3D    Analysis   Synthesis
  avg. PSNR  26.72  27.93   28.35   27.78      27.32

Table 1: Averages of denoising results (PSNR in dB) for the 68 test images.

4 Conclusions and future work

From Tab. 1 and Fig. 1 we can draw the following conclusions: (i) the $\ell_1$-based analysis model is significantly superior to the $\ell_1$-based synthesis model, which is coherent with the findings in [8]. We believe the essential reason lies in the ineffectual way the $\ell_1$-based synthesis model characterizes natural images, since, as noted above, it effectively models the noise signal rather than the natural image itself. This inferiority also appeared during training. (ii) Our analysis model is comparable to the $\ell_0$-based synthesis model K-SVD, as can be seen in Fig. 1. Compared with methods specialized for the image denoising task, such as BM3D, our $\ell_1$-based analysis model still cannot compete; however, its denoising performance is always significantly better than that of the TV-based approach.

It is well known that the probability density function (PDF) of the responses of zero-mean linear filters on natural images is heavy-tailed [10]. Therefore, our future work will concentrate on non-convex penalty functions. According to our preliminary experience with the analysis model using a non-convex penalty function, it clearly outperforms the $\ell_0$-based synthesis model K-SVD and is already on par with BM3D. (We will present this result in future work.) However, in the non-convex case the Fenchel duality we used in this paper is no longer available, so how to handle the synthesis model becomes a problem. This will be the subject of our future work.


  • [1] M. Aharon, M. Elad, and A. Bruckstein. K-SVD: An algorithm for designing over-complete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54(11):4311–4322, 2006.
  • [2] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, May 2011.
  • [3] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press New York, NY, USA, 2004.
  • [4] Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011.
  • [5] B. Colson, P. Marcotte, and G. Savard. An overview of bilevel optimization. Annals OR, 153(1):235–256, 2007.
  • [6] K. Dabov, A. Foi, V. Katkovnik, and K. O. Egiazarian. Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16(8):2080–2095, 2007.
  • [7] M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing, 15(12):3736–3745, 2006.
  • [8] M. Elad, P. Milanfar, and R. Rubinstein. Analysis versus synthesis in signal priors. Inverse Problems, 23(3):947–968, 2007.
  • [9] S. Hawe, M. Kleinsteuber, and K. Diepold. Analysis operator learning and its application to image reconstruction. Technical Report, Technical University Munich, 2012.
  • [10] J. Huang and D. Mumford. Statistics of natural images and models. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR1999), pages 541–547, Fort Collins, CO, USA, 1999.
  • [11] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. In NIPS, pages 801–808, 2006.
  • [12] D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1):503–528, 1989.
  • [13] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online dictionary learning for sparse coding. In ICML, pages 689–696, 2009.
  • [14] B. Ophir, M. Elad, N. Bertin, and M. D. Plumbley. Sequential minimal eigenvalues – an approach to analysis dictionary learning. In European Signal Processing Conference, pages 1465–1469, 2011.
  • [15] G. Peyré and J. Fadili. Learning analysis sparsity priors. In Proc. of Sampta’11, 2011.
  • [16] Stefan Roth and Michael J. Black. Fields of experts. International Journal of Computer Vision, 82(2):205–229, 2009.
  • [17] R. Rubinstein, T. Faktor, and M. Elad. K-SVD dictionary-learning for the analysis sparse model. In IEEE International Conference on Acoustics, Speech, and Signal Processing, 2012.
  • [18] K. G. G. Samuel and M.F. Tappen. Learning optimized map estimates in continuously-valued mrf models. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR2009), 2009.
  • [19] M. Yaghoobi, S. Nam, R. Gribonval, and M. E. Davies. Analysis operator learning for over-complete co-sparse representations. In European Signal Processing Conference, 2011.
  • [20] M. Yaghoobi, S. Nam, R. Gribonval, and M. E. Davies. Constrained overcomplete analysis operator learning for cosparse signal modelling. preprint, 2012.
  • [21] M. Yaghoobi, S. Nam, R. Gribonval, and M. E. Davies. Noise aware analysis operator learning for approximately cosparse signals. In IEEE International Conference on Acoustics, Speech, and Signal Processing, 2012.