FAASTA: A fast solver for total-variation regularization of ill-conditioned problems with application to brain imaging
Gaël Varoquaux, Michael Eickenberg, Elvis Dohmatob, Bertrand Thirion
INRIA Saclay, Équipe Parietal
Neurospin, Bât 145, CEA Saclay, 91191 Gif sur Yvette, France
Résumé – There is no analytical formula for solving denoising problems with a total-variation penalty, as for many other analysis-sparsity problems. Consequently, using it to regularize an inverse problem leads to challenging optimization problems, which are often solved with first-order methods. However, when the data-fit term is very ill-conditioned and has no simple structure, as in brain imaging, its optimization is costly; it is then best to minimize the total number of iterations. For this purpose we present fAASTA, a variant of FISTA that relies on an inner optimization of the proximal operator with an adaptive tolerance. We illustrate its benefits on an empirical study of a “brain decoding” problem.
Abstract – The total variation (TV) penalty, as many other analysis-sparsity problems, does not lead to separable factors or a proximal operator with a closed-form expression, such as soft thresholding for the $\ell_1$ penalty. As a result, in a variational formulation of an inverse problem or statistical learning estimation, it leads to challenging non-smooth optimization problems that are often solved with elaborate single-step first-order methods. When the data-fit term arises from empirical measurements, as in brain imaging, it is often very ill-conditioned and without simple structure. In this situation, in proximal splitting methods, the computational cost of the gradient step can easily dominate each iteration. Thus it is beneficial to minimize the number of gradient steps. We present fAASTA, a variant of FISTA, that relies on an internal solver for the TV proximal operator, and refines its tolerance to balance the computational cost of the gradient and the proximal steps. We give benchmarks and illustrations on “brain decoding”: recovering brain maps from noisy measurements to predict observed behavior. The algorithm, as well as the empirical study of convergence speed, is valuable for any non-exact proximal operator, in particular analysis-sparsity problems.
1 Introduction: problem setting
Minimizing a functional using the Total Variation (TV) of an image was originally introduced for denoising purposes [rudin1992nonlinear], but it is useful in more general problems, as a regularization for ill-posed inverse problems: for instance for image reconstruction in computed tomography [sidky2008image], or in regression or classification settings for brain imaging [michel2011tv]. In image-recovery applications, unlike in pure prediction problems as in machine learning, optimizing the corresponding cost function to a high tolerance is important (see e.g. [dohmatob2014benchmarking] in brain imaging). Indeed, the penalty is most relevant in the ill-conditioned region of the data-fit term. However, this optimization is particularly challenging, as the TV penalty introduces a long-distance coupling across the image in the final solution. This paper studies optimization algorithms for TV-penalized inverse problems with an ill-conditioned and computationally costly data-fit term, as in brain imaging. In particular, we introduce a new variant of FISTA that is well suited to inexact proximal operators.
The TV penalty can be seen as an instance of a wider set of regularizations, analysis-sparsity problems [vaiter2014low], that impose sparsity on a linear transformation of the image, as e.g. in TV-$\ell_1$ used in brain imaging [gramfort2013]. In general, an analysis-sparsity risk-minimization problem is written as
$$\hat{\mathbf{w}} = \operatorname*{argmin}_{\mathbf{w}} \; \mathcal{L}(\mathbf{w}) + \lambda\, \|K \mathbf{w}\| \qquad\qquad (1)$$
where $\mathcal{L}$ is the data-fit term, $K$ is the “analysis operator”, and $\|\cdot\|$ is a simple sparsity-inducing norm, such as the $\ell_1$ or $\ell_{21}$ norm. Often, $K$ is rank-deficient, as in over-complete dictionaries, and $\mathcal{L}$ is ill-conditioned. Problem (1) is thus a challenging ill-conditioned non-smooth optimization.
With linear models, $\mathcal{L}(\mathbf{w})$ measures the misfit between the target $\mathbf{y}$ and $X\,\mathbf{w}$, where $X$ is the design matrix. It is ill-conditioned when different features are heavily correlated. These unfortunate settings often arise when the design matrix results from experimental data, as in brain imaging or genotyping, or for many measurement operators that are nearly blind to some aspects of the weights $\mathbf{w}$, e.g. the high frequencies in tomography or image-deblurring applications. Analysis sparsity is then particularly interesting because, through the choice of $K$, it can impose sparsity specifically along those aspects of the weights. For instance TV is edge-preserving: it sharpens some high-frequency features.
2 State of the art optimizers
As the sparsity-inducing norm in (1) is not smooth, standard gradient-based optimization methods cannot be readily used to solve (1). Proximal-gradient methods [parikh2013] generalize the gradient step with an implicit subgradient step using the proximal operator, defined by $\text{prox}_g(\mathbf{z}) = \operatorname*{argmin}_{\mathbf{x}} \tfrac{1}{2}\|\mathbf{x} - \mathbf{z}\|_2^2 + g(\mathbf{x})$.
If $f$ is the data-fit term, which we assume smooth, while $g$ is the non-smooth penalty, the simplest method is the Iterative Shrinkage-Thresholding Algorithm (ISTA) [daubechies2004iterative, combettes2011proximalsplitting]: alternating gradient descent on the smooth part and application of the proximal map of the non-smooth part, with iterations of
$$\mathbf{w}^{(k+1)} = \text{prox}_{g/L}\Big(\mathbf{w}^{(k)} - \tfrac{1}{L}\, \nabla f\big(\mathbf{w}^{(k)}\big)\Big) \qquad\qquad (2)$$
where $L$ is the Lipschitz constant of $\nabla f$. An accelerated gradient method, that is multi-step, can speed up convergence for ill-conditioned problems by adding a momentum term: in the fast iterative shrinkage-thresholding algorithm (FISTA) [beck09afast] the gradient steps are applied to a combination of $\mathbf{w}^{(k)}$ and $\mathbf{w}^{(k-1)}$. The drawback is that there is no guarantee that each step of FISTA decreases the energy, and large rebounds are common. This non-monotone behavior can be remedied by switching to ISTA iterations whenever an increase in cost is detected, as in monotone FISTA (mFISTA) [beck09fastgradient-based].
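To make the iterations concrete, here is a minimal NumPy sketch of ISTA and FISTA for a generic smooth gradient `grad_f` and proximal operator `prox_g`; the function names and interfaces are ours, for illustration, not those of a specific library.

```python
import numpy as np

def ista(grad_f, prox_g, w0, lipschitz, n_iter=100):
    """Minimal ISTA: gradient step on the smooth part f,
    proximal step on the non-smooth part g (eq. 2)."""
    w = w0.copy()
    for _ in range(n_iter):
        w = prox_g(w - grad_f(w) / lipschitz, 1.0 / lipschitz)
    return w

def fista(grad_f, prox_g, w0, lipschitz, n_iter=100):
    """Minimal FISTA: the same step, applied at a momentum point z
    combining the two previous iterates."""
    w, z, t = w0.copy(), w0.copy(), 1.0
    for _ in range(n_iter):
        w_new = prox_g(z - grad_f(z) / lipschitz, 1.0 / lipschitz)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
        z = w_new + ((t - 1.0) / t_new) * (w_new - w)  # momentum
        w, t = w_new, t_new
    return w
```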
Proximal algorithms for analysis sparsity
The success of ISTA-type algorithms hinges upon the computation of the proximal operator, which is very efficient for synthesis sparsity as in lasso and elastic-net like problems ($\ell_1$ and $\ell_1 + \ell_2^2$ penalties). However, for many analysis-sparsity proximal operators there is no analytical formula. If the norm used in the analysis sparsity is an $\ell_1$ or an $\ell_{21}$, writing the dual problem leads to a constrained formulation amenable to an accelerated projected gradient [beck09fastgradient-based], as the dual norms are respectively an $\ell_\infty$ and an $\ell_{2\infty}$, with easy projections onto the unit ball. Importantly, a monotone scheme can also be used, and the tolerance of the optimization can be controlled via the dual gap [michel2011tv].
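As an illustration of this dual approach, here is a sketch of the $\ell_1$-analysis proximal operator computed by plain projected gradient on the dual (an accelerated and monotone variant follows the same pattern); it assumes a small dense matrix `K` for simplicity, whereas actual TV code would use fast finite-difference operators.

```python
import numpy as np

def prox_l1_analysis(z, K, lam, n_iter=200):
    """Proximal operator of w -> lam * ||K w||_1 at z, by projected
    gradient on the dual:  min_u 0.5 * ||z - K.T u||^2  s.t. |u| <= lam,
    where the l-inf constraint makes the projection a simple clip."""
    u = np.zeros(K.shape[0])                     # dual variable
    step = 1.0 / np.linalg.norm(K, 2) ** 2       # 1 / ||K||_2^2
    for _ in range(n_iter):
        grad = K @ (K.T @ u - z)                 # gradient of the dual energy
        u = np.clip(u - step * grad, -lam, lam)  # project onto l-inf ball
    return z - K.T @ u                           # recover the primal solution
```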
While the computation of the proximal operator is no longer exact, proximal-gradient algorithms can be shown to converge as long as the error on the proximal operation decreases sufficiently with the iteration number $k$ of the outer loop [schmidt2011convergence]. To achieve the best convergence rate in $O(1/k^2)$, the error may be required to decrease as fast as $O(1/k^{4+\delta})$. Using an accelerated algorithm to compute the proximal operator, achieving a tolerance of $\varepsilon$ costs $O(1/\sqrt{\varepsilon})$ inner iterations. Thus, in such a scheme the cost of each outer iteration increases roughly as $O(k^2)$, which renders optimization to a tight tolerance prohibitive.
Operator splitting algorithms
The costly inner loop is required in ISTA-type algorithms because the proximal operator cannot be easily computed on the primal variables of the optimization. Another family of proximal algorithms relies on “splitting”: introducing auxiliary variables in the analysis space, $\mathbf{z} = K\mathbf{w}$. Optimizing on these is then a simple $\ell_1$ or $\ell_{21}$ proximal problem. The alternating direction method of multipliers (ADMM) is such an approach, popular in signal-processing applications for its versatility [boyd2011]. Note that it relies on the choice of a hyper-parameter, controlling the tightness of the split, that can have an impact on convergence. Similarly, the primal-dual algorithm of [chambolle-pock:2011] relies on two hyper-parameters and can be written as a preconditioned version of ADMM.
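For reference, a scaled-form ADMM for problem (1) with the split $\mathbf{z} = K\mathbf{w}$ alternates the following standard updates, where $\gamma$ denotes the control parameter mentioned above:
$$\begin{aligned}
\mathbf{w}^{(k+1)} &= \operatorname*{argmin}_{\mathbf{w}} \; \mathcal{L}(\mathbf{w}) + \tfrac{\gamma}{2}\, \|K\mathbf{w} - \mathbf{z}^{(k)} + \mathbf{u}^{(k)}\|_2^2 \\
\mathbf{z}^{(k+1)} &= \text{prox}_{\lambda \|\cdot\| / \gamma}\big(K\mathbf{w}^{(k+1)} + \mathbf{u}^{(k)}\big) \\
\mathbf{u}^{(k+1)} &= \mathbf{u}^{(k)} + K\mathbf{w}^{(k+1)} - \mathbf{z}^{(k+1)}
\end{aligned}$$
The $\mathbf{z}$-update is a simple $\ell_1$ (or $\ell_{21}$) proximal with a closed form, but the $\mathbf{w}$-update involves the data-fit term and is itself an inner optimization when $\mathcal{L}$ has no exploitable structure.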
Optimizing a smooth surrogate
Newton or quasi-Newton algorithms are excellent optimizers for smooth problems that achieve quadratic convergence rates under weak conditions and are somewhat unaffected by ill-conditioning [nocedal2006numerical]. In particular, the limited-memory BFGS (LBFGS) [liu1989limited] quasi-Newton scheme is well suited to high-dimensional problems. An approach to minimizing a convex non-smooth function relies on minimizing a series of smooth upper bounds, tightening the approximation as the algorithm progresses [dohmatob2014benchmarking, dubois2014predictive]. Intuitively, the benefit of such an approach is that it should be well suited to strong ill-conditioning of the smooth part of the energy, at the cost of more difficulty in optimizing the non-smooth part.
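A minimal sketch of this strategy follows, assuming a pseudo-Huber surrogate $\sqrt{x^2 + \mu^2} \geq |x|$ for the absolute value and SciPy's L-BFGS; the surrogate choice and function names are illustrative, not the exact scheme of [dohmatob2014benchmarking].

```python
import numpy as np
from scipy.optimize import minimize

def smoothed_abs(x, mu):
    """Pseudo-Huber smooth upper bound of |x|, with its derivative."""
    v = np.sqrt(x ** 2 + mu ** 2)
    return v.sum(), x / v

def lbfgs_on_surrogates(loss_and_grad, K, lam, w0, mus=(1.0, 0.1, 0.01)):
    """Minimize loss(w) + lam * ||K w||_1 with L-BFGS applied to a
    series of smooth surrogates, tightening the smoothing mu at each
    stage and warm-starting from the previous solution."""
    w = w0
    for mu in mus:
        def objective(w):
            e, g = loss_and_grad(w)
            p, dp = smoothed_abs(K @ w, mu)
            return e + lam * p, g + lam * (K.T @ dp)
        w = minimize(objective, w, jac=True, method='L-BFGS-B').x
    return w
```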
When the design matrix $X$ is large and dense, as in brain imaging or genomics, the computational cost of multiplying it with a vector or matrix is by far the most expensive elementary operation, as $X$ does not fit in CPU cache. Thus it is important to limit these operations as much as possible; they typically come into play when computing the gradient or the energy of the smooth data-fit part of the problem¹.

¹ Note that this situation differs from many signal-processing applications, in which the design matrix is a sparse or structured operator, such as a convolution or a wavelet transform, and can thus be applied very fast.
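For example, with a squared loss (used here for simplicity; a logistic loss factors similarly), the residual $X\mathbf{w} - \mathbf{y}$ can be computed once and shared between the energy and gradient evaluations, halving the number of passes over $X$ compared to separate calls:

```python
import numpy as np

def make_squared_loss(X, y):
    """Return a function computing energy and gradient of the smooth
    data-fit 0.5 * ||X w - y||^2 while sharing the costly X @ w
    product between the two."""
    def energy_and_grad(w):
        residual = X @ w - y        # first pass over X
        energy = 0.5 * residual @ residual
        grad = X.T @ residual       # second pass over X
        return energy, grad
    return energy_and_grad
```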
3 FAASTA: adapting inner tolerance
ISTA-type schemes are parameter-free algorithms with a good convergence rate. However, their optimal use in analysis-sparsity problems requires balancing the computational cost of the gradient step with that of the proximal operator. Indeed, while a tight tolerance on the proximal operator may guarantee the optimal convergence speed in terms of the number of outer-loop iterations, it implies an increasing computing time for each iteration. Thus, when only a loose tolerance on the global energy is required, it may be beneficial to set a lax tolerance on the dual gap of the proximal operator, and thus achieve fast iterates. Conversely, optimizing the global problem to a stringent tolerance requires very high precision on the proximal operator, which in turn slows down the iterations of the outer loop.
We introduce a new variant of ISTA, the fast Adaptively Accurate Shrinkage Thresholding Algorithm (fAASTA). The core idea is to adapt the tolerance of the proximal operator as needed for convergence. For this, we rely on the fact that in an ISTA iteration (eq. 2) the global energy must decrease [daubechies2004iterative]. Thus we adapt the tolerance of the proximal-operator solver to ensure this decrease. In practice, in the inner loop, we control that tolerance by checking the dual gap of the proximal optimization problem, which is much cheaper to compute than the global energy, as it does not involve accessing the large, dense design matrix $X$. We decrease this dual-gap setting by a factor of 5 when we observe that an ISTA iteration did not result in an energy decrease. In addition, to benefit from accelerated schemes as in FISTA, we rely on an mFISTA strategy: we apply FISTA iterations but, when the energy increases, switch to an ISTA step, in which we can then, if needed, decrease the tolerance on the inner dual gap.
The actual procedure is described in alg. 1; a condensed sketch of the resulting loop is given below. One difficulty is implementing the back and forth between ISTA and FISTA without recomputing intermediate variables. With speed in mind, another important implementation aspect is to factor out expressions common to the gradient and energy computations, in order to access the large matrix $X$ as little as possible. These factorizations are not detailed here.
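The sketch below is our illustrative reading of the procedure, not a verbatim transcription of alg. 1 (names and the exact bookkeeping are ours); `energy` evaluates the full penalized objective and `prox_g` solves the TV proximal problem to a given dual-gap tolerance.

```python
import numpy as np

def faasta(energy, grad_f, prox_g, w0, lipschitz, n_iter=100,
           dgap_tol=0.1, shrink=5.0):
    """Sketch of the fAASTA loop: FISTA steps with an inexact proximal
    operator; `prox_g(v, step, tol)` solves the proximal problem to
    dual gap `tol`. The tolerance is divided by `shrink` whenever even
    a monotone ISTA step fails to decrease the energy, which an exact
    proximal step would guarantee."""
    w, z, t = w0.copy(), w0.copy(), 1.0
    e = energy(w)
    for _ in range(n_iter):
        w_new = prox_g(z - grad_f(z) / lipschitz, 1.0 / lipschitz, dgap_tol)
        e_new = energy(w_new)
        if e_new > e:  # non-monotone FISTA step: retry an ISTA step from w
            w_new = prox_g(w - grad_f(w) / lipschitz, 1.0 / lipschitz,
                           dgap_tol)
            e_new = energy(w_new)
            if e_new > e:           # ISTA also failed: the proximal was
                dgap_tol /= shrink  # too inexact, tighten and retry
                continue
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
        z = w_new + ((t - 1.0) / t_new) * (w_new - w)
        w, e, t = w_new, e_new, t_new
    return w
```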
4 Empirical study: TV- for decoding
Here, we benchmark the various algorithms on a brain-decoding application: the task is to predict the subject's behavior, encoded as a categorical variable $y \in \{-1, 1\}^n$, from fMRI images that constitute the design matrix $X$. For this purpose, we use a TV-$\ell_1$-penalized logistic regression, which is written as an analysis-sparsity problem in the notations of eq. (1):
$$\mathcal{L}(\mathbf{w}) = \sum_{i=1}^{n} \log\big(1 + e^{-y_i\, (X\mathbf{w})_i}\big), \qquad \lambda\, \|K\mathbf{w}\| = \lambda \big( \rho\, \|\mathbf{w}\|_1 + (1 - \rho)\, \|\nabla \mathbf{w}\|_{21} \big)$$
where $\nabla$ is the image-domain finite-difference operator and the $\ell_{21}$ norm groups the different image directions for an isotropic TV formulation. We set $\rho = 0.5$. We investigate the discrimination of whether the subject is presented with images of shoes or scrambled pictures in a study of human vision, using the data from [haxby2001]. We rely on the nilearn library for data download, loading, and cleaning [abraham2014machine].
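As an illustration of this penalty, here is a NumPy sketch of its value on a 3D weight image (our own helper, using `np.gradient` as a stand-in for the forward finite differences of actual TV implementations):

```python
import numpy as np

def tv_l1_penalty(w_img, rho=0.5):
    """Value of the TV-l1 penalty on a 3D weight image:
    rho * ||w||_1 + (1 - rho) * ||grad w||_21, the l21 norm taking the
    l2 norm over the three image directions at each voxel, then summing."""
    grads = np.array(np.gradient(w_img))           # finite differences
    tv = np.sqrt((grads ** 2).sum(axis=0)).sum()   # isotropic TV (l21)
    return rho * np.abs(w_img).sum() + (1.0 - rho) * tv
```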
We study the convergence of the following approaches: ISTA and mFISTA schemes with i) a lax dual-gap tolerance on the inner problem, ii) a tight dual-gap tolerance, iii) the adaptive control of the dual-gap tolerance exposed above, and iv) the $O(1/k^{4+\delta})$ decrease of the tolerance motivated theoretically in [schmidt2011convergence]; an LBFGS optimizer on a series of smooth upper bounds, following the approach of [dohmatob2014benchmarking]; and an ADMM with different values of the control parameter (implemented as in [dohmatob2014benchmarking]).
For all algorithms, we investigate convergence for 3 different values of the regularization parameter $\lambda$, centered around the value minimizing prediction error as measured by cross-validation. In fig. 1, we report energy as a function of time. Importantly, we report time, and not iteration number: not only is computing time the most relevant quantity for the user, it is also the only meaningful basis for comparing algorithms whose iterations have very different costs.
We find that, as expected, ISTA or FISTA with a lax tolerance on the inner problem reaches an energy plateau before convergence, although, for weak to medium regularization, it progresses fast before hitting this limitation. Conversely, setting a stringent tolerance leads to a slow but steady decrease of the energy (though it also eventually reaches a plateau before complete convergence in the high-regularization case). The theoretical decrease of the tolerance has an intermediate behavior; in particular, it does not stagnate. The convergence of the ADMM approach depends strongly on its control parameter: on the one hand, no control parameter achieves satisfying convergence for all regularization values; on the other hand, ADMM does not converge at all for a bad choice of the parameter. LBFGS reaches a plateau, but gets there fast in the case of weak regularization. Finally, our adaptive approach, whether in its accelerated form or simply in an ISTA loop, reaches the lowest energy value.
5 Discussion and conclusions
Optimizing an analysis-sparsity functional with an ill-conditioned data-fit term is a very challenging problem, as the non-smooth part cannot be readily written as a proximal problem with an analytic solution. Splitting approaches with auxiliary variables in the analysis space are often put forward as a solution, as the non-smooth contribution is easy to solve on these variables. However, when the computational cost of the smooth data-fit part is high, e.g. when the design matrix is dense and large as in brain imaging, our experience is that these approaches are sub-optimal (see also [dohmatob2014benchmarking], which explored the preconditioned variant of [chambolle-pock:2011]). We conjecture that in these situations the choice of the control parameter becomes crucial to accelerate convergence².

² We evaluated the heuristics described in [boyd2011] to evolve the control parameter, but with no conclusive success.
Our preferred solution relies on using an inexact proximal operator and adapting ISTA-type algorithms to set the corresponding tolerance. Experiments show that this approach is robust and always converges to a tight tolerance. For very low regularization, the smooth data-fit part of the functional dominates, and careful use of smooth optimization methods can achieve a quicker initial convergence, but is eventually limited. We currently lack theory on the optimal strategy to set the tolerance of the inner problem. Indeed, on the experiment studied in this contribution, the accelerated-gradient variant of the algorithm did not outperform the single-step variant.
Analysis sparsity can capture structure in a much wider set of situations than the simpler, more common synthesis sparsity. However, it is seldom used in practice, likely because of the computational cost it entails. Many problems outside of brain imaging can benefit from faster solvers, for instance tomography image recovery [sidky2008image].