An Empirical Study of ADMM for Nonconvex Problems

Zheng Xu1, Soham De1, Mário A. T. Figueiredo2, Christoph Studer3, Tom Goldstein1
1Department of Computer Science, University of Maryland, College Park, MD
2Instituto de Telecomunicações, Instituto Superior Técnico, Universidade de Lisboa, Portugal
3Department of Electrical and Computer Engineering, Cornell University, Ithaca, NY
Abstract

The alternating direction method of multipliers (ADMM) is a common optimization tool for solving constrained and non-differentiable problems. We provide an empirical study of the practical performance of ADMM on several nonconvex applications, including ℓ0 regularized linear regression, ℓ0 regularized image denoising, phase retrieval, and eigenvector computation. Our experiments suggest that ADMM performs well on a broad class of nonconvex problems. Moreover, recently proposed adaptive ADMM methods, which automatically tune penalty parameters as the method runs, can improve algorithm efficiency and solution quality compared to ADMM with a non-tuned penalty.

 


ZX, SD, and TG were supported by US NSF grant CCF-1535902 and by US ONR grant N00014-15-1-2676.
CS was supported in part by Xilinx Inc., and by the US NSF under grants ECCS-1408006 and CCF-1535897.

1 Introduction

The alternating direction method of multipliers (ADMM) has been applied to solve a wide range of constrained convex and nonconvex optimization problems. ADMM decomposes complex optimization problems into sequences of simpler subproblems that are often solvable in closed form. Furthermore, these subproblems are often amenable to large-scale distributed computing environments Goldstein et al. (2016); Taylor et al. (2016). ADMM solves the problem

    \min_{u,v} \; H(u) + G(v) \quad \text{subject to} \quad Au + Bv = b,    (1)

where H: \mathbb{R}^n \to \mathbb{R}, G: \mathbb{R}^m \to \mathbb{R}, A \in \mathbb{R}^{p \times n}, B \in \mathbb{R}^{p \times m}, and b \in \mathbb{R}^p, by the following steps,

    u_{k+1} = \arg\min_u \; H(u) + \tfrac{\tau_k}{2}\|b - Au - Bv_k + \lambda_k/\tau_k\|_2^2    (2)
    v_{k+1} = \arg\min_v \; G(v) + \tfrac{\tau_k}{2}\|b - Au_{k+1} - Bv + \lambda_k/\tau_k\|_2^2    (3)
    \lambda_{k+1} = \lambda_k + \tau_k (b - Au_{k+1} - Bv_{k+1})    (4)

where \lambda_k \in \mathbb{R}^p is a vector of dual variables (Lagrange multipliers), and \tau_k > 0 is a scalar penalty parameter.

The convergence of the algorithm can be monitored using primal and dual “residuals,” both of which approach zero as the iterates become more accurate, and which are defined as

    r_k = b - Au_k - Bv_k \quad \text{and} \quad d_k = \tau_k A^\top B (v_k - v_{k-1}),    (5)

respectively Boyd et al. (2011). The iteration is generally stopped when

    \|r_k\|_2 \le \epsilon_{\mathrm{tol}} \max\{\|Au_k\|_2, \|Bv_k\|_2, \|b\|_2\} \quad \text{and} \quad \|d_k\|_2 \le \epsilon_{\mathrm{tol}} \|A^\top \lambda_k\|_2,    (6)

where \epsilon_{\mathrm{tol}} > 0 is the stopping tolerance.
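
To make the notation concrete, the following is a minimal Python/NumPy sketch of the loop (2)–(4) with the residuals (5) and stopping rule (6). The subproblem solvers min_u and min_v are problem-specific placeholders (closed-form instances appear in Appendix B); the function and argument names are ours, not from the original implementation.

```python
import numpy as np

def admm(min_u, min_v, A, B, b, v0, lam0, tau=1.0, tol=1e-3, max_iter=2000):
    """Generic ADMM loop for min_{u,v} H(u) + G(v) s.t. Au + Bv = b.

    min_u(v, lam, tau) returns argmin_u H(u) + (tau/2)||b - Au - Bv + lam/tau||^2,
    min_v(u, lam, tau) returns argmin_v G(v) + (tau/2)||b - Au - Bv + lam/tau||^2.
    """
    u, v, lam = None, v0, lam0
    for _ in range(max_iter):
        u = min_u(v, lam, tau)                # step (2): smooth term first
        v_old = v
        v = min_v(u, lam, tau)                # step (3)
        r = b - A @ u - B @ v                 # primal residual, eq. (5)
        d = tau * (A.T @ (B @ (v - v_old)))   # dual residual, eq. (5)
        lam = lam + tau * r                   # step (4): dual update
        # stopping rule (6): relative primal and dual residuals
        if (np.linalg.norm(r) <= tol * max(np.linalg.norm(A @ u),
                                           np.linalg.norm(B @ v),
                                           np.linalg.norm(b))
                and np.linalg.norm(d) <= tol * np.linalg.norm(A.T @ lam)):
            break
    return u, v, lam
```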

ADMM was introduced by Glowinski and Marroco (1975) and Gabay and Mercier (1976), and its convergence has been proved under mild conditions for convex problems Gabay (1983); Eckstein and Bertsekas (1992); He and Yuan (2015). The practical performance of ADMM on convex problems has been extensively studied; see Boyd et al. (2011); Goldstein et al. (2014); Xu et al. (2016a) and references therein. For nonconvex problems, the convergence of ADMM under certain assumptions is studied in Wang et al. (2014); Li and Pong (2015); Hong et al. (2016); Wang et al. (2015). The weakest assumptions to date are given in Wang et al. (2015), which still requires a number of strict conditions on the objective, including a Lipschitz differentiable objective term. In practice, ADMM has been applied to various nonconvex problems, including nonnegative matrix factorization Xu et al. (2012), ℓp-norm regularization (0 ≤ p < 1) Bouaziz et al. (2013); Chartrand and Wohlberg (2013), tensor factorization Liavas and Sidiropoulos (2015); Xu et al. (2016b), phase retrieval Wen et al. (2012), manifold optimization Lai and Osher (2014); Kovnatsky et al. (2015), random fields Miksik et al. (2014), and deep neural networks Taylor et al. (2016).

The penalty parameter τ is the only free parameter in ADMM, and it plays an important role in the practical performance of the method. Adaptive methods have been proposed to automatically tune this parameter as the algorithm runs. The residual balancing method He et al. (2000) automatically increases or decreases the penalty so that the primal and dual residuals have approximately similar magnitudes. The more recent AADMM method Xu et al. (2016a) uses a spectral (Barzilai-Borwein) rule for tuning the penalty parameter. These methods achieve impressive practical performance for convex problems and are guaranteed to converge under moderate conditions (such as when adaptivity is stopped after a finite number of iterations).
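
For concreteness, a minimal sketch of one residual balancing step is shown below; the thresholds mu and eta are conventional choices from Boyd et al. (2011) (μ = 10, η = 2), not values prescribed here. It would be called once per iteration, e.g. tau = residual_balance(tau, np.linalg.norm(r), np.linalg.norm(d)), inside the loop above.

```python
def residual_balance(tau, r_norm, d_norm, mu=10.0, eta=2.0):
    """One residual balancing step (He et al., 2000; Boyd et al., 2011):
    grow the penalty when the primal residual dominates, shrink it when
    the dual residual dominates, and leave it unchanged otherwise."""
    if r_norm > mu * d_norm:
        return tau * eta
    if d_norm > mu * r_norm:
        return tau / eta
    return tau
```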

In this manuscript, we study the practical performance of ADMM on several nonconvex applications: ℓ0 regularized linear regression, ℓ0 regularized image denoising, phase retrieval, and eigenvector computation. While convergence for these applications may not be guaranteed by current theory, ADMM is a popular choice for solving these nonconvex problems. The following questions are addressed using these model problems: (i) does ADMM converge in practice, (ii) does the update order of u and v matter, (iii) is the local optimal solution good, (iv) does the penalty parameter matter, and (v) is an adaptive penalty choice effective?

2 Nonconvex applications

ℓ0 regularized linear regression. Sparse linear regression can be achieved using the nonconvex ℓ0 regularized problem

    \min_x \; \tfrac{1}{2}\|Dx - c\|_2^2 + \rho \|x\|_0,    (7)

where D \in \mathbb{R}^{n \times m} is the data matrix, c \in \mathbb{R}^n is a measurement vector, and x \in \mathbb{R}^m is the vector of regression coefficients. ADMM is applied to solve problem (7) using the equivalent formulation

    \min_{u,v} \; \tfrac{1}{2}\|Du - c\|_2^2 + \rho \|v\|_0 \quad \text{subject to} \quad u - v = 0.    (8)

ℓ0 regularized image denoising. The ℓ0 regularizer Dong and Zhang (2013) can be substituted for the ℓ1 regularizer when computing total variation for image denoising. This results in the formulation Chartrand (2007)

    \min_x \; \tfrac{1}{2}\|x - c\|_2^2 + \rho \|\nabla x\|_0,    (9)

where c represents a given noisy image, \nabla is the linear discrete gradient operator, and \|\cdot\|_0 is the ℓ0 norm. We solve the equivalent problem

    \min_{u,v} \; \tfrac{1}{2}\|u - c\|_2^2 + \rho \|v\|_0 \quad \text{subject to} \quad \nabla u - v = 0.    (10)

The resulting ADMM subproblems can be solved in closed form using fast Fourier transforms Goldstein and Osher (2009).

Phase retrieval. Ptychographic phase retrieval Yang et al. (2011); Wen et al. (2012) solves the problem

    \min_x \; \tfrac{1}{2}\,\| |Ax| - c \|_2^2,    (11)

where A \in \mathbb{C}^{n \times m}, c \in \mathbb{R}^n, and |\cdot| denotes the elementwise magnitude of a complex vector. ADMM is applied to the equivalent problem

    \min_{u,v} \; \tfrac{1}{2}\,\| |v| - c \|_2^2 \quad \text{subject to} \quad Au - v = 0.    (12)

Eigenvector problem. The eigenvector problem is a fundamental problem in numerical linear algebra. The leading eigenvalue of a symmetric matrix T \in \mathbb{R}^{n \times n} is found by computing

    \max_{\|x\|_2 = 1} \; x^\top T x.    (13)

ADMM is applied to the equivalent problem

    \min_{u,v} \; -u^\top T u + \iota_S(v) \quad \text{subject to} \quad u - v = 0,    (14)

where \iota_S is the characteristic function of the set S = \{v : \|v\|_2 = 1\}, defined by \iota_S(v) = 0, if v \in S, and \iota_S(v) = \infty, otherwise.

3 Experiments & Observations

Experimental setting. We implemented “vanilla ADMM” (ADMM with constant penalty), and fast ADMM with Nesterov acceleration and restart Goldstein et al. (2014). We also implemented two methods for automatically selecting penalty parameters: residual balancing He et al. (2000), and the spectral adaptive method Xu et al. (2016a). For ℓ0 regularized linear regression, the synthetic problem in Zou and Hastie (2005); Goldstein et al. (2014); Xu et al. (2016a) and the realistic problems in Efron et al. (2004); Zou and Hastie (2005); Xu et al. (2016a) are investigated. For ℓ0 regularized image denoising, a one-dimensional synthetic problem was created by the process described in Zou and Hastie (2005), and is shown in Fig. 3. For the total-variation experiments, the "Barbara", "Cameraman", and "Lena" images are investigated, where Gaussian noise with zero mean and standard deviation 20 was added to each image (Fig. 4). Different values of the regularization parameter ρ are used for the synthetic problem and the image problems. For phase retrieval, a synthetic problem is constructed with a random complex matrix A and a corresponding measurement vector c. The three images in Fig. 4 are also used, where each image is measured with 21 octanary pattern filters as described in Candes et al. (2015). For the eigenvector problem, a random symmetric matrix is used.

Figure 1: Sensitivity to the (initial) penalty parameter for ℓ0 regularized linear regression, eigenvector computation, "cameraman" denoising, and phase retrieval. (top) Number of iterations needed as a function of the initial penalty parameter. (bottom) The objective or PSNR of the minimizer found for each nonconvex problem.

Does ADMM converge in practice? The convergence of vanilla ADMM is quite sensitive to the choice of penalty parameter. For vanilla ADMM, the iterates may oscillate, and if convergence occurs it may be very slow when the penalty parameter is not properly tuned. The residual balancing method converges more often than vanilla ADMM, and the spectral adaptive ADMM converges the most often. However, none of these methods uniformly beats all others, and it appears that vanilla ADMM with a highly tuned stepsize can sometimes outperform adaptive variants.

Does the update order of u and v matter? In Fig. 1, ADMM is performed by first minimizing with respect to the smooth objective term, and then the nonsmooth term. We repeat the experiments with the update order swapped, and report the results in Fig. 2 of the appendix. When updating the non-smooth term first, the convergence of ADMM for the phase retrieval problem becomes less reliable. However, for some problems (like image denoising), convergence happened a bit faster than with the original update order. Although the behavior of ADMM changes, there is no predictable difference between the two update orderings.

Is the local optimal solution good? The bottom row of Fig. 1 presents the objective/PSNR achieved by the ADMM variants when varying the (initial) penalty parameter. In general, the quality of the solution depends strongly on the penalty parameter chosen. There does not appear to be a predictable relationship between the best penalty for convergence speed and the best penalty for solution quality.

Does the adaptive penalty work? In Table 1, we see that adaptivity not only speeds up convergence, but for most problem instances it also results in better minimizers. This behavior is not uniform across all experiments though, and for some problems a slightly lower objective value can be achieved using a finely tuned constant stepsize.

| Application | Dataset | #samples × #features¹ | Vanilla ADMM | Residual balance He et al. (2000) | Adaptive ADMM Xu et al. (2016a) |
|---|---|---|---|---|---|
| ℓ0 regularized linear regression | Synthetic | 50 × 40 | 2000+ (.621) / 1.71e4 | 2000+ (.604) / 1.71e4 | 39 (.018) / 15.2 |
| | Boston | 506 × 13 | 2000+ (.598) / 1.50e5 | 2000+ (.570) / 1.50e5 | 1039 (.342) / 1.34e5 |
| | Diabetes | 768 × 8 | 2000+ (.751) / 384 | 2000+ (.708) / 648 | 28 (.014) / 285 |
| | Leukemia | 38 × 7129 | 2000+ (15.3) / 19.0 | 78 (.578) / 19.0 | 63 (.477) / 19.0 |
| | Prostate | 97 × 8 | 2000+ (.413) / 1.14e3 | 2000+ (.466) / 380 | 29 (.013) / 324 |
| | Servo | 130 × 4 | 2000+ (.426) / 267 | 2000+ (.471) / 267 | 45 (.014) / 198 |
| ℓ0 regularized image restoration | Synthetic1D | 100 × 1 | 2000+ (.701) / 40.6 | 1171 (.409) / 45.4 | 866 (.319) / 45.4 |
| | Barbara | 512 × 512 | 200+ (35.5) / 24.7 | 200+ (35.1) / 24.7 | 18 (3.33) / 24.7 |
| | Cameraman | 256 × 256 | 200+ (5.75) / 25.9 | 200+ (5.60) / 25.9 | 6 (.190) / 27.8 |
| | Lena | 512 × 512 | 200+ (35.5) / 25.9 | 200+ (35.8) / 25.9 | 11 (1.98) / 27.9 |
| Phase retrieval | Synthetic | | 200+ (19.4) / — | 94 (9.01) / — | 46 (4.45) / — |
| | Barbara | 512 × 512 × 21 | 59 (91.1) / 81.5 | 59 (89.6) / 81.5 | 50 (88.1) / 81.5 |
| | Cameraman | 256 × 256 × 21 | 59 (29.6) / 75.7 | 55 (19.4) / 75.7 | 48 (20.8) / 75.7 |
| | Lena | 512 × 512 × 21 | 59 (90.1) / 81.4 | 57 (87.4) / 81.5 | 52 (92.0) / 81.5 |

  • ¹ width × height for image restoration; width × height × #filters for phase retrieval

Table 1: Iterations (with runtime in seconds) and objective value (or PSNR) for the various algorithms and applications described in the text; each cell reports iterations (runtime) / objective or PSNR. Absence of convergence within the maximum number of iterations is indicated as 2000+ (or 200+).

4 Conclusion

We provide a detailed discussion of the performance of ADMM on several nonconvex applications, including ℓ0 regularized linear regression, ℓ0 regularized image denoising, phase retrieval, and eigenvector computation. In practice, ADMM usually converges for these applications, and the penalty parameter choice has a significant effect on both convergence speed and solution quality. Adaptive penalty methods such as AADMM Xu et al. (2016a) automatically select the penalty parameter, and perform optimization with little user oversight. For most problems, adaptive stepsize methods result in faster convergence or better minimizers than vanilla ADMM with a constant non-tuned penalty parameter. However, for some difficult nonconvex problems, the best results can still be obtained by fine-tuning the penalty parameter.

References

  • Bouaziz et al. [2013] S. Bouaziz, A. Tagliasacchi, and M. Pauly. Sparse iterative closest point. In Computer graphics forum, volume 32, pages 113–123. Wiley Online Library, 2013.
  • Boyd et al. [2011] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. and Trends in Mach. Learning, 3:1–122, 2011.
  • Candes et al. [2015] E. J. Candes, X. Li, and M. Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, 2015.
  • Chartrand [2007] R. Chartrand. Exact reconstruction of sparse signals via nonconvex minimization. IEEE Signal Processing Letters, 14(10):707–710, 2007.
  • Chartrand and Wohlberg [2013] R. Chartrand and B. Wohlberg. A nonconvex ADMM algorithm for group sparsity with sparse groups. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6009–6013. IEEE, 2013.
  • Dong and Zhang [2013] B. Dong and Y. Zhang. An efficient algorithm for ℓ0 minimization in wavelet frame based image restoration. Journal of Scientific Computing, 54(2-3):350–368, 2013.
  • Eckstein and Bertsekas [1992] J. Eckstein and D. Bertsekas. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Mathematical Programming, 55(1-3):293–318, 1992.
  • Efron et al. [2004] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. The Annals of Statistics, 32(2):407–499, 2004.
  • Gabay [1983] D. Gabay. Applications of the method of multipliers to variational inequalities. Studies in mathematics and its applications, 15:299–331, 1983.
  • Gabay and Mercier [1976] D. Gabay and B. Mercier. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Computers & Mathematics with Applications, 2(1):17–40, 1976.
  • Glowinski and Marroco [1975] R. Glowinski and A. Marroco. Sur l’approximation, par éléments finis d’ordre un, et la résolution, par pénalisation-dualité d’une classe de problèmes de Dirichlet non linéaires. ESAIM: Modélisation Mathématique et Analyse Numérique, 9:41–76, 1975.
  • Goldstein and Osher [2009] T. Goldstein and S. Osher. The split Bregman method for L1-regularized problems. SIAM Journal on Imaging Sciences, 2(2):323–343, 2009.
  • Goldstein et al. [2014] T. Goldstein, B. O’Donoghue, S. Setzer, and R. Baraniuk. Fast alternating direction optimization methods. SIAM Journal on Imaging Sciences, 7(3):1588–1623, 2014.
  • Goldstein et al. [2016] T. Goldstein, G. Taylor, K. Barabin, and K. Sayre. Unwrapping ADMM: efficient distributed computing via transpose reduction. In AISTATS, 2016.
  • He and Yuan [2015] B. He and X. Yuan. On non-ergodic convergence rate of Douglas-Rachford alternating direction method of multipliers. Numerische Mathematik, 130:567–577, 2015.
  • He et al. [2000] B. He, H. Yang, and S. Wang. Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities. Jour. Optim. Theory and Appl., 106(2):337–356, 2000.
  • Hong et al. [2016] M. Hong, Z.-Q. Luo, and M. Razaviyayn. Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems. SIAM Journal on Optimization, 26(1):337–364, 2016.
  • Kovnatsky et al. [2015] A. Kovnatsky, K. Glashoff, and M. M. Bronstein. MADMM: a generic algorithm for non-smooth optimization on manifolds. arXiv preprint arXiv:1505.07676, 2015.
  • Lai and Osher [2014] R. Lai and S. Osher. A splitting method for orthogonality constrained problems. Journal of Scientific Computing, 58(2):431–449, 2014.
  • Li and Pong [2015] G. Li and T. K. Pong. Global convergence of splitting methods for nonconvex composite optimization. SIAM Journal on Optimization, 25(4):2434–2460, 2015.
  • Liavas and Sidiropoulos [2015] A. P. Liavas and N. D. Sidiropoulos. Parallel algorithms for constrained tensor factorization via alternating direction method of multipliers. IEEE Transactions on Signal Processing, 63(20):5450–5463, 2015.
  • Miksik et al. [2014] O. Miksik, V. Vineet, P. Pérez, and P. H. Torr. Distributed non-convex ADMM-inference in large-scale random fields. In British Machine Vision Conference, BMVC, 2014.
  • Taylor et al. [2016] G. Taylor, R. Burmeister, Z. Xu, B. Singh, A. Patel, and T. Goldstein. Training neural networks without gradients: A scalable ADMM approach. arXiv preprint arXiv:1605.02026, 2016.
  • Wang et al. [2014] F. Wang, Z. Xu, and H.-K. Xu. Convergence of Bregman alternating direction method with multipliers for nonconvex composite problems. arXiv preprint arXiv:1410.8625, 2014.
  • Wang et al. [2015] Y. Wang, W. Yin, and J. Zeng. Global convergence of ADMM in nonconvex nonsmooth optimization. arXiv preprint arXiv:1511.06324, 2015.
  • Wen et al. [2012] Z. Wen, C. Yang, X. Liu, and S. Marchesini. Alternating direction methods for classical and ptychographic phase retrieval. Inverse Problems, 28(11):115010, 2012.
  • Xu et al. [2012] Y. Xu, W. Yin, Z. Wen, and Y. Zhang. An alternating direction algorithm for matrix completion with nonnegative factors. Frontiers of Mathematics in China, 7(2):365–384, 2012.
  • Xu et al. [2016a] Z. Xu, M. A. Figueiredo, and T. Goldstein. Adaptive ADMM with spectral penalty parameter selection. arXiv preprint arXiv:1605.07246, 2016a.
  • Xu et al. [2016b] Z. Xu, F. Huang, L. Raschid, and T. Goldstein. Non-negative factorization of the occurrence tensor from financial contracts. NIPS tensor workshop, 2016b.
  • Yang et al. [2011] C. Yang, J. Qian, A. Schirotzek, F. Maia, and S. Marchesini. Iterative algorithms for ptychographic phase retrieval. arXiv preprint arXiv:1105.5628, 2011.
  • Zou and Hastie [2005] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320, 2005.

Appendix A Appendix: more experimental results

Figure 2: Convergence results when the non-smooth objective term is updated first, and the smooth term is updated second. Sensitivity to the (initial) penalty parameter is shown for the synthetic problem of ℓ0 regularized linear regression, eigenvector computation, the "cameraman" denoising problem, and phase retrieval. The top row shows the convergence speed in iterations. The bottom row shows the objective/PSNR achieved by the final iterates.

Appendix B Appendix: implementation details

B.1 ℓ0 regularized linear regression

ℓ0 regularized linear regression is the nonconvex problem

    \min_x \; \tfrac{1}{2}\|Dx - c\|_2^2 + \rho \|x\|_0,    (15)

where D \in \mathbb{R}^{n \times m} is the data matrix, c \in \mathbb{R}^n is the measurement vector, and x \in \mathbb{R}^m is the vector of regression coefficients. ADMM is applied to solve problem (15) by solving the equivalent problem

    \min_{u,v} \; \tfrac{1}{2}\|Du - c\|_2^2 + \rho \|v\|_0 \quad \text{subject to} \quad u - v = 0.    (16)

The proximal operator of the ℓ0 norm is hard-thresholding,

    \mathrm{prox}(z, \rho/\tau) = z \odot 1_{\{|z| > \sqrt{2\rho/\tau}\}},    (17)

where \odot represents element-wise multiplication, and 1_S is the indicator function of the set S: 1_S(z) = 1, if z \in S, and 1_S(z) = 0, otherwise. Then the steps of ADMM can be written

    u_{k+1} = \arg\min_u \; \tfrac{1}{2}\|Du - c\|_2^2 + \tfrac{\tau_k}{2}\|u - v_k - \lambda_k/\tau_k\|_2^2    (18)
            = (D^\top D + \tau_k I)^{-1}(D^\top c + \tau_k v_k + \lambda_k)    (19)
    v_{k+1} = \mathrm{prox}(u_{k+1} - \lambda_k/\tau_k, \; \rho/\tau_k)    (20)
    \lambda_{k+1} = \lambda_k + \tau_k (v_{k+1} - u_{k+1})    (21)
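
A direct NumPy transcription of steps (18)–(21) might look as follows; the fixed penalty tau, iteration budget, and function names are illustrative assumptions, not the original implementation.

```python
import numpy as np

def hard_threshold(z, kappa):
    """Prox of kappa * ||.||_0, eq. (17): keep entries with |z_i| > sqrt(2*kappa)."""
    return z * (np.abs(z) > np.sqrt(2.0 * kappa))

def admm_l0_regression(D, c, rho, tau=1.0, max_iter=2000):
    """Sketch of steps (18)-(21) with a constant penalty tau."""
    n, m = D.shape
    v = np.zeros(m)
    lam = np.zeros(m)
    M = D.T @ D + tau * np.eye(m)   # system matrix of (19); constant since tau is fixed
    Dtc = D.T @ c
    for _ in range(max_iter):
        u = np.linalg.solve(M, Dtc + tau * v + lam)      # (18)-(19)
        v = hard_threshold(u - lam / tau, rho / tau)     # (20)
        lam = lam + tau * (v - u)                        # (21)
    return v
```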

B.2 ℓ0 regularized image denoising

The ℓ0 regularizer Dong and Zhang [2013] is an alternative to the ℓ1 regularizer when computing total variation Goldstein and Osher [2009], Goldstein et al. [2014]. ℓ0 regularized image denoising solves the nonconvex problem

    \min_x \; \tfrac{1}{2}\|x - c\|_2^2 + \rho \|\nabla x\|_0,    (22)

where c represents a given noisy image, \nabla is the linear gradient operator, and \|\cdot\|_2 / \|\cdot\|_0 denote the ℓ2/ℓ0 norms of vectors. The steps of ADMM for this problem are

    u_{k+1} = \arg\min_u \; \tfrac{1}{2}\|u - c\|_2^2 + \tfrac{\tau_k}{2}\|\nabla u - v_k - \lambda_k/\tau_k\|_2^2    (23)
            = (I + \tau_k \nabla^\top \nabla)^{-1}\big(c + \nabla^\top(\tau_k v_k + \lambda_k)\big)    (24)
    v_{k+1} = \mathrm{prox}(\nabla u_{k+1} - \lambda_k/\tau_k, \; \rho/\tau_k)    (25)
    \lambda_{k+1} = \lambda_k + \tau_k (v_{k+1} - \nabla u_{k+1})    (26)

where the linear systems can be solved using fast Fourier transforms.
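
The sketch below implements (23)–(26) for a two-dimensional image, assuming periodic-boundary finite differences so that the linear system (24) diagonalizes under the 2D FFT, and an anisotropic (elementwise) ℓ0 gradient penalty; both assumptions are ours for illustration.

```python
import numpy as np

def admm_l0_denoise(c, rho, tau=1.0, max_iter=200):
    """Sketch of steps (23)-(26) with periodic-boundary finite differences."""
    grad = lambda x: np.stack([np.roll(x, -1, 0) - x, np.roll(x, -1, 1) - x])
    gradT = lambda p: (np.roll(p[0], 1, 0) - p[0]) + (np.roll(p[1], 1, 1) - p[1])
    n1, n2 = c.shape
    f1 = np.fft.fftfreq(n1)[:, None]
    f2 = np.fft.fftfreq(n2)[None, :]
    # eigenvalues of grad^T grad under periodic boundary conditions
    lap = (2 - 2 * np.cos(2 * np.pi * f1)) + (2 - 2 * np.cos(2 * np.pi * f2))
    v = np.zeros((2, n1, n2))
    lam = np.zeros((2, n1, n2))
    for _ in range(max_iter):
        rhs = c + gradT(tau * v + lam)                                  # RHS of (24)
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (1 + tau * lap)))   # (23)-(24) via FFT
        z = grad(u) - lam / tau
        v = z * (np.abs(z) > np.sqrt(2 * rho / tau))                    # (25): hard threshold
        lam = lam + tau * (v - grad(u))                                 # (26)
    return u
```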

B.3 Phase retrieval

Ptychographic phase retrieval Yang et al. [2011], Wen et al. [2012] solves the problem

    \min_x \; \tfrac{1}{2}\,\| |Ax| - c \|_2^2,    (27)

where A \in \mathbb{C}^{n \times m}, c \in \mathbb{R}^n, and |\cdot| denotes the elementwise magnitude of a complex-valued vector. ADMM is applied to the equivalent problem

    \min_{u,v} \; \tfrac{1}{2}\,\| |v| - c \|_2^2 \quad \text{subject to} \quad Au - v = 0.    (28)

Define the projection operator of a complex-valued vector z onto the set \{v : |v| = c\} as

    P(z) = c \odot \mathrm{phase}(z),    (29)

where \mathrm{phase}(\cdot) denotes the elementwise phase of a complex-valued vector. In the following ADMM steps, notice that the dual variable \lambda_k is complex, and the penalty parameter \tau_k is a real non-negative scalar,

    u_{k+1} = \arg\min_u \; \|Au - v_k - \lambda_k/\tau_k\|_2^2    (30)
    v_{k+1} = \frac{P(z_k) + \tau_k z_k}{1 + \tau_k}, \quad \text{where } z_k = Au_{k+1} - \lambda_k/\tau_k    (31)
    \lambda_{k+1} = \lambda_k + \tau_k (v_{k+1} - Au_{k+1})    (32)
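
A sketch of steps (30)–(32) follows; the pseudoinverse-based least-squares solve and the magnitude-only initialization are illustrative simplifications of ours, not the pipeline of Wen et al. [2012].

```python
import numpy as np

def admm_phase_retrieval(A, c, tau=1.0, max_iter=200):
    """Sketch of steps (30)-(32): A is a complex n x m matrix, c holds the
    measured magnitudes, and the dual variable lam is complex."""
    A_pinv = np.linalg.pinv(A)                   # precompute the LS solve of (30)
    v = c.astype(complex)                        # crude start: correct magnitudes, zero phase
    lam = np.zeros(A.shape[0], dtype=complex)
    for _ in range(max_iter):
        u = A_pinv @ (v + lam / tau)             # (30): least squares
        z = A @ u - lam / tau
        phase = z / np.where(np.abs(z) > 0, np.abs(z), 1.0)
        v = (c * phase + tau * z) / (1.0 + tau)  # (31): prox of (1/2)|| |v| - c ||^2
        lam = lam + tau * (v - A @ u)            # (32)
    return u
```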

B.4 Eigenvector problem

The eigenvector problem is a fundamental problem in numerical linear algebra. The leading eigenvector of a symmetric matrix T \in \mathbb{R}^{n \times n} can be recovered by solving the Rayleigh quotient maximization problem

    \max_{\|x\|_2 = 1} \; x^\top T x.    (33)

ADMM is applied to the equivalent problem

    \min_{u,v} \; -u^\top T u + \iota_S(v) \quad \text{subject to} \quad u - v = 0,    (34)

where \iota_S is the characteristic function of the set S = \{v : \|v\|_2 = 1\}: \iota_S(v) = 0, if v \in S, and \iota_S(v) = \infty, otherwise. The ADMM steps are

    u_{k+1} = \arg\min_u \; -u^\top T u + \tfrac{\tau_k}{2}\|u - v_k - \lambda_k/\tau_k\|_2^2 = (\tau_k I - 2T)^{-1}(\tau_k v_k + \lambda_k)    (35)
    v_{k+1} = z_k / \|z_k\|_2, \quad \text{where } z_k = u_{k+1} - \lambda_k/\tau_k    (36)
    \lambda_{k+1} = \lambda_k + \tau_k (v_{k+1} - u_{k+1})    (37)

Note that the u-update (35) is a well-posed minimization when \tau_k I - 2T is positive definite.
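
A sketch of steps (35)–(37) is given below; the random initialization and the assumption that tau*I - 2T is positive definite (so the u-update is well-posed) are ours.

```python
import numpy as np

def admm_leading_eigvec(T, tau, max_iter=500, seed=0):
    """Sketch of steps (35)-(37); assumes tau*I - 2T is positive definite."""
    n = T.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    lam = np.zeros(n)
    M = tau * np.eye(n) - 2.0 * T              # constant system matrix of (35)
    for _ in range(max_iter):
        u = np.linalg.solve(M, tau * v + lam)  # (35)
        z = u - lam / tau
        v = z / np.linalg.norm(z)              # (36): project onto the unit sphere
        lam = lam + tau * (v - u)              # (37)
    return v
```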

Appendix C Appendix: synthetic and realistic datasets

We provide the detailed construction of the synthetic dataset for our linear regression experiments. The same synthetic dataset has been used in Zou and Hastie [2005], Goldstein et al. [2014], Xu et al. [2016a]. Based on three random normal vectors z_1, z_2, z_3 \in \mathbb{R}^{50}, the data matrix D = [d_1, \dots, d_{40}] \in \mathbb{R}^{50 \times 40} is defined column-wise as

    d_i = z_1 + e_i, \; i = 1,\dots,5; \quad d_i = z_2 + e_i, \; i = 6,\dots,10; \quad d_i = z_3 + e_i, \; i = 11,\dots,15; \quad d_i = e_i, \; i = 16,\dots,40,    (38)

where e_i are random normal vectors from \mathcal{N}(0, I). The problem is to recover the vector

    \hat{x} = (\underbrace{3,\dots,3}_{15}, \; \underbrace{0,\dots,0}_{25})    (39)

from noisy measurements of the form c = D\hat{x} + e, with e a random normal noise vector.
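
Under the construction (38)–(39) as reconstructed above, the dataset can be generated as follows; the noise scale sigma and the random seed are illustrative assumptions.

```python
import numpy as np

def synthetic_regression(sigma=1.0, seed=0):
    """Sketch of the synthetic dataset (38)-(39); noise scales are illustrative."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((3, 50))            # the three shared factors z_1, z_2, z_3
    D = rng.standard_normal((50, 40))           # columns 16..40 remain pure noise e_i
    for g in range(3):                          # columns 1-5, 6-10, 11-15 share a factor
        D[:, 5 * g:5 * (g + 1)] += z[g][:, None]
    x_true = np.concatenate([3 * np.ones(15), np.zeros(25)])
    c = D @ x_true + sigma * rng.standard_normal(50)
    return D, c, x_true
```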

Figure 3: The synthetic one-dimensional signal for ℓ0 regularized image denoising. The ground-truth signal, the noisy signal (PSNR = 37.8), and the signal recovered by AADMM (PSNR = 45.4) are shown.

Figure 4: The ground-truth image (left), noisy image (middle), and image recovered by AADMM (right) for ℓ0 regularized image denoising. The PSNRs of the noisy/recovered images are 21.9/24.7 for "Barbara", 22.4/27.8 for "Cameraman", and 21.9/27.9 for "Lena".