Forward-backward truncated Newton methods for convex composite optimization
Abstract.
This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line search strategy, whereas the second one combines the global efficiency estimates of the corresponding first-order methods with fast asymptotic convergence rates. Furthermore, they are computationally attractive since each Newton iteration requires the approximate solution of a linear system of usually small dimension.
1. Introduction
The focus of this work is on efficient Newton-like algorithms for convex optimization problems in composite form, i.e.,
(1.1)  $\mathrm{minimize}_{x\in\mathbb{R}^n}\ F(x) = f(x) + g(x),$
where $f$ is twice continuously differentiable and strongly convex with modulus of strong convexity $\mu_f$ and with $\nabla f$ Lipschitz continuous with constant $L_f$, and $g$ is a proper, lower semicontinuous, convex function from $\mathbb{R}^n$ to $\mathbb{R}\cup\{+\infty\}$ with a cheaply computable proximal mapping [2]. Problems of the form (1.1) are abundant in many scientific areas such as control, signal processing, system identification, machine learning and image analysis, to name a few. For example, when $g$ is the indicator of a convex set, (1.1) becomes a constrained optimization problem, while for $f(x)=\tfrac12\|Ax-b\|^2$ and $g(x)=\lambda\|x\|_1$ it becomes the $\ell_1$-regularized least-squares problem which is the main building block of compressed sensing. When $g$ is equal to the nuclear norm, problem (1.1) can model low-rank matrix recovery problems. Finally, conic optimization problems such as LPs, SOCPs and SDPs can be brought into the form of (1.1), see [3].
Perhaps the most well-known algorithm for problems in the form (1.1) is the forward-backward splitting (FBS) or proximal gradient method [4, 5], a generalization of the classical gradient and gradient projection methods to problems involving a nonsmooth term. Accelerated versions of FBS, based on the work of Nesterov [6, 7, 8], have also gained popularity. Although these algorithms share favorable global convergence rate estimates of order $O(1/\epsilon)$ or $O(1/\sqrt{\epsilon})$ (where $\epsilon$ is the solution accuracy), they are first-order methods and therefore usually effective at computing solutions of low or medium accuracy only. An evident remedy is to include second-order information by replacing the Euclidean norm in the proximal mapping with the norm $\|\cdot\|_Q$, where $Q$ is the Hessian of $f$ at $x$ or some approximation of it, mimicking Newton or quasi-Newton methods for unconstrained problems. This route is followed in the recent work of [9, 10]. However, a severe limitation of the approach is that, unless $Q$ has a special structure, the linearized subproblem is very hard to solve. For example, if (1.1) models a QP, the corresponding subproblem is as hard as the original problem.
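To make the forward-backward splitting iteration concrete, the following is a minimal Python sketch (our own illustration, not code from the paper) of FBS applied to the $\ell_1$-regularized least-squares problem mentioned above; the instance ($A$, $b$, the regularization weight) is invented for demonstration purposes:

```python
import numpy as np

def soft_threshold(u, t):
    """Proximal mapping of t*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def fbs(A, b, lam, gamma, x0, iters=500):
    """Forward-backward splitting for min 0.5*||Ax-b||^2 + lam*||x||_1."""
    x = x0.copy()
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                           # forward (gradient) step
        x = soft_threshold(x - gamma * grad, gamma * lam)  # backward (prox) step
    return x

# small illustrative instance
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.5
gamma = 1.0 / np.linalg.norm(A.T @ A, 2)  # step size 1/L_f
x_star = fbs(A, b, lam, gamma, np.zeros(5), iters=2000)
```

At convergence the iterate is a fixed point of the forward-backward map, which is exactly the optimality condition exploited throughout the paper.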
In this paper we follow a different approach by defining a function, which we call forward-backward envelope (FBE), that has favorable properties and can serve as a real-valued, smooth, exact penalty function for the original problem. Our approach combines and extends ideas stemming from the literature on merit functions for variational inequalities (VIs) and complementarity problems (CPs), specifically the reformulation of a VI as a constrained continuously differentiable optimization problem via the regularized gap function [11] and as an unconstrained continuously differentiable optimization problem via the D-gap function [12] (see [13, Ch. 10] for a survey and [14], [15] for applications to constrained optimization and model predictive control of dynamical systems).
Next, we show that one can design Newton-like methods to minimize the FBE by using tools from nonsmooth analysis. Unlike the approaches of [9, 10], where the corresponding subproblems are expensive to solve, our algorithms require only the solution of a usually small linear system to compute the Newton direction. This work focuses on devising algorithms that have good complexity guarantees provided by a global (non-asymptotic) convergence rate, while achieving superlinear or quadratic asymptotic convergence rates in the nondegenerate cases. (A sequence $\{x^k\}$ converges to $x_\star$ with superlinear rate if $\|x^{k+1}-x_\star\|/\|x^k-x_\star\|\to 0$; it converges to $x_\star$ with quadratic rate if there exists a $c>0$ such that $\|x^{k+1}-x_\star\|\le c\|x^k-x_\star\|^2$ for all $k$.) We show that one can achieve this goal by interleaving Newton-like iterations on the FBE and FBS iterations. This is possible by relating directions of descent for the considered penalty function with those for the original nonsmooth function.
The main contributions of the paper can be summarized as follows. We show how Problem (1.1) can be reformulated as the unconstrained minimization of a real-valued, continuously differentiable function, the FBE, providing a framework that allows us to extend classical algorithms for smooth unconstrained optimization to nonsmooth or constrained problems in composite form (1.1). Moreover, based on this framework, we devise efficient proximal Newton algorithms with superlinear or quadratic asymptotic convergence rates to solve (1.1), with global complexity bounds. The conjugate gradient (CG) method is employed to efficiently compute an approximate Newton direction at every iteration. Therefore our algorithms are able to handle large-scale problems, since they require only the calculation of matrix-vector products and there is no need to form the generalized Hessian matrix explicitly.
The outline of the paper is as follows. In Section 2 we introduce the FBE, a continuously differentiable penalty function for (1.1), and discuss some of its properties. In Section 3 we discuss the generalized differentiability properties of the gradient of the FBE and introduce a linear Newton approximation (LNA) for it, which plays a role similar to that of the Hessian in the classical Newton method. Section 4 is the core of the paper: it presents two algorithms for solving Problem (1.1) and discusses their local and global convergence properties. In Section 5 we consider some examples of $g$ and discuss the generalized Jacobians of their proximal operators, on which the LNA is based. Finally, in Section 6, we consider some practical problems and show how the proposed methods perform in solving them.
2. Forward-backward envelope
In the following we indicate by $X_\star$ and $F_\star$, respectively, the set of solutions of problem (1.1) and its optimal objective value. Forward-backward splitting for solving (1.1) relies on computing, at every iteration, the following update
(2.1)  $x^{k+1} = \mathrm{prox}_{\gamma g}\big(x^k - \gamma\nabla f(x^k)\big),$
where the proximal mapping [2] of $g$ is defined by
(2.2)  $\mathrm{prox}_{\gamma g}(x) \triangleq \arg\min_{z\in\mathbb{R}^n}\big\{ g(z) + \tfrac{1}{2\gamma}\|z-x\|^2 \big\}.$
The value function of the optimization problem (2.2) defining the proximal mapping is called the Moreau envelope and is denoted by $g^\gamma$, i.e.,
(2.3)  $g^\gamma(x) \triangleq \min_{z\in\mathbb{R}^n}\big\{ g(z) + \tfrac{1}{2\gamma}\|z-x\|^2 \big\}.$
Properties of the Moreau envelope and the proximal mapping are well documented in the literature [16, 17, 18, 5]. For example, the proximal mapping is single-valued, continuous and nonexpansive (Lipschitz continuous with Lipschitz constant 1) and the envelope function $g^\gamma$ is convex, continuously differentiable, with $\gamma^{-1}$-Lipschitz continuous gradient
(2.4)  $\nabla g^\gamma(x) = \gamma^{-1}\big(x - \mathrm{prox}_{\gamma g}(x)\big).$
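As a quick numerical illustration (our own sketch, not from the paper), the gradient identity (2.4) can be checked against finite differences for $g = \|\cdot\|_1$, whose proximal mapping is soft-thresholding:

```python
import numpy as np

def prox_l1(u, t):
    """Proximal mapping of t*||.||_1 (soft-thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def moreau_env_l1(x, gamma):
    """Moreau envelope g^gamma(x) = min_z ||z||_1 + ||z-x||^2/(2*gamma)."""
    z = prox_l1(x, gamma)
    return np.sum(np.abs(z)) + np.sum((z - x) ** 2) / (2 * gamma)

gamma = 0.3
x = np.array([1.5, -0.2, 0.7])

# (2.4): gradient of the envelope via the prox residual
g_analytic = (x - prox_l1(x, gamma)) / gamma

# central finite-difference gradient of the envelope
eps = 1e-6
g_fd = np.array([
    (moreau_env_l1(x + eps * e, gamma) - moreau_env_l1(x - eps * e, gamma)) / (2 * eps)
    for e in np.eye(3)
])
```

The two gradients agree to finite-difference accuracy, even at coordinates where $g$ itself is nondifferentiable, since the envelope is $C^1$ everywhere.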
We will next proceed to the reformulation of (1.1) as the unconstrained minimization of a continuously differentiable function. It is well known [16] that an optimality condition for (1.1) is
(2.5)  $x - \mathrm{prox}_{\gamma g}\big(x - \gamma\nabla f(x)\big) = 0.$
Since $f$ is twice continuously differentiable, we have $\mu_f I \preceq \nabla^2 f(x) \preceq L_f I$ [19, Lem. 1.2.2], therefore $I - \gamma\nabla^2 f(x)$ is symmetric and positive definite whenever $\gamma\in(0,1/L_f)$. Premultiplying both sides of (2.5) by $\gamma^{-1}\big(I - \gamma\nabla^2 f(x)\big)$, $\gamma\in(0,1/L_f)$, one obtains the equivalent condition
(2.6)  $\gamma^{-1}\big(I - \gamma\nabla^2 f(x)\big)\big(x - \mathrm{prox}_{\gamma g}(x - \gamma\nabla f(x))\big) = 0.$
The left-hand side of equation (2.6) is the gradient of the function that we call forward-backward envelope, indicated by $F_\gamma$. Using (2.4) to integrate (2.6), one obtains the following definition.
Definition 2.1.
Let $\gamma\in(0,1/L_f)$. The forward-backward envelope of $F$ is given by
(2.7)  $F_\gamma(x) \triangleq f(x) - \tfrac{\gamma}{2}\|\nabla f(x)\|^2 + g^\gamma\big(x - \gamma\nabla f(x)\big).$
Alternatively, one can express $F_\gamma$ as the value function of the minimization problem that yields forward-backward splitting. In fact
(2.8a)  $F_\gamma(x) = \min_{z\in\mathbb{R}^n}\big\{ f(x) + \nabla f(x)^\top(z-x) + \tfrac{1}{2\gamma}\|z-x\|^2 + g(z) \big\}$
(2.8b)  $\phantom{F_\gamma(x)} = f(x) + \nabla f(x)^\top\big(P_\gamma(x)-x\big) + \tfrac{1}{2\gamma}\|P_\gamma(x)-x\|^2 + g\big(P_\gamma(x)\big),$
where $P_\gamma(x) \triangleq \mathrm{prox}_{\gamma g}\big(x - \gamma\nabla f(x)\big)$.
One distinctive feature of $F_\gamma$ is the fact that it is real-valued despite the fact that $F$ can be extended-real-valued. In addition, $F_\gamma$ enjoys favorable properties, summarized in the next theorem.
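The agreement between the closed-form expression (2.7) and the value-function form (2.8) can be verified numerically. The sketch below is our own illustration, on a made-up least-squares instance with $f(x)=\tfrac12\|Ax-b\|^2$ and $g=\lambda\|\cdot\|_1$; it evaluates the FBE both ways:

```python
import numpy as np

def prox_l1(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

# hypothetical instance: f(x) = 0.5*||Ax-b||^2, g(x) = lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
b = rng.standard_normal(8)
lam = 0.4
gamma = 0.9 / np.linalg.norm(A.T @ A, 2)
x = rng.standard_normal(5)

grad = A.T @ (A @ x - b)
u = x - gamma * grad                 # forward step
z = prox_l1(u, gamma * lam)          # P_gamma(x), the backward step

# (2.7): f(x) - (gamma/2)*||grad f(x)||^2 + Moreau envelope of g at u
g_env = lam * np.sum(np.abs(z)) + np.sum((z - u) ** 2) / (2 * gamma)
fbe_closed = 0.5 * np.sum((A @ x - b) ** 2) - (gamma / 2) * np.sum(grad ** 2) + g_env

# (2.8): value of the partially linearized subproblem at its minimizer z
fbe_minform = (0.5 * np.sum((A @ x - b) ** 2) + grad @ (z - x)
               + np.sum((z - x) ** 2) / (2 * gamma) + lam * np.sum(np.abs(z)))
```

The two values coincide up to rounding, confirming the algebra relating (2.7) and (2.8).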
Theorem 2.2.
The following properties of $F_\gamma$ hold:

(i) $F_\gamma$ is continuously differentiable with

(2.9)  $\nabla F_\gamma(x) = \gamma^{-1}\big(I - \gamma\nabla^2 f(x)\big)\big(x - P_\gamma(x)\big),$

where $P_\gamma(x) = \mathrm{prox}_{\gamma g}\big(x - \gamma\nabla f(x)\big)$. If $\gamma\in(0,1/L_f)$ then the set of stationary points of $F_\gamma$ equals $X_\star$.

(ii) For any $x\in\mathbb{R}^n$,

(2.10)  $F_\gamma(x) \le F(x) - \tfrac{1}{2\gamma}\|x - P_\gamma(x)\|^2.$

(iii) For any $x\in\mathbb{R}^n$,

(2.11)  $F\big(P_\gamma(x)\big) \le F_\gamma(x) - \tfrac{1}{2\gamma}(1-\gamma L_f)\|x - P_\gamma(x)\|^2.$

In particular, if $\gamma\in(0,1/L_f]$ then

(2.12)  $F\big(P_\gamma(x)\big) \le F_\gamma(x).$

(iv) If $\gamma\in(0,1/L_f]$ then $\inf F_\gamma = F_\star$ and $\arg\min F_\gamma = X_\star$.
Proof.
Part (i) has already been proven. Regarding (ii), from the optimality condition for the problem defining the proximal mapping we have
$\gamma^{-1}\big(x - \gamma\nabla f(x) - P_\gamma(x)\big) \in \partial g\big(P_\gamma(x)\big),$
i.e., the left-hand side is a subgradient of $g$ at $P_\gamma(x)$. From the subgradient inequality,
$g(x) \ge g\big(P_\gamma(x)\big) + \gamma^{-1}\big(x - \gamma\nabla f(x) - P_\gamma(x)\big)^\top\big(x - P_\gamma(x)\big).$
Adding $f(x)$ to both sides proves the claim. For part (iii), we have
$F\big(P_\gamma(x)\big) \le f(x) + \nabla f(x)^\top\big(P_\gamma(x) - x\big) + \tfrac{L_f}{2}\|P_\gamma(x) - x\|^2 + g\big(P_\gamma(x)\big) = F_\gamma(x) - \tfrac{1}{2\gamma}(1-\gamma L_f)\|x - P_\gamma(x)\|^2,$
where the inequality follows by Lipschitz continuity of $\nabla f$ and the descent lemma, see e.g. [20, Prop. A.24]. For part (iv), putting $x = x_\star\in X_\star$ in (2.10) and (2.11) and using $P_\gamma(x_\star)=x_\star$ we obtain $F_\gamma(x_\star)=F(x_\star)$. Now, for any $x\in\mathbb{R}^n$ we have $F_\gamma(x_\star) = F(x_\star) \le F\big(P_\gamma(x)\big) \le F_\gamma(x)$, where the first inequality follows by optimality of $x_\star$ for $F$, while the second inequality follows by (2.11). This shows that every $x_\star\in X_\star$ is also a (global) minimizer of $F_\gamma$. The proof finishes by recalling that the set of minimizers of $F_\gamma$ is a subset of the set of its stationary points, which by (i) is equal to $X_\star$. ∎
Parts (i) and (iv) of Theorem 2.2 show that, if $\gamma\in(0,1/L_f]$, the nonsmooth problem (1.1) is completely equivalent to the unconstrained minimization of the continuously differentiable function $F_\gamma$, in the sense that the sets of minimizers and the optimal values are equal. In other words we have
$F_\star = \min_{x\in\mathbb{R}^n} F_\gamma(x), \qquad X_\star = \arg\min_{x\in\mathbb{R}^n} F_\gamma(x).$
Part (ii) shows that an optimal solution of (1.1) is automatically optimal for $F_\gamma$, while part (iii) implies that from an optimal $x$ for $F_\gamma$ we can directly obtain an optimal solution for (1.1) if $\gamma$ is chosen sufficiently small, i.e., $P_\gamma(x)\in X_\star$.
Notice that part (iv) of Theorem 2.2 states that if $\gamma\in(0,1/L_f]$, then not only do the stationary points of $F_\gamma$ agree with $X_\star$ (cf. Theorem 2.2(i)), but also its set of minimizers agrees with $X_\star$; i.e., although $F_\gamma$ may not be convex, its set of stationary points turns out to be equal to the set of its minimizers. In the particular but important case where $f$ is convex quadratic, the FBE is convex with Lipschitz continuous gradient, as the following theorem shows.
Theorem 2.3.
If $f(x)=\tfrac12 x^\top Qx + q^\top x$ and $\gamma\in(0,1/L_f)$, then $F_\gamma$ is strongly convex with Lipschitz continuous gradient, with modulus of strong convexity $\mu_{F_\gamma}$ and Lipschitz constant $L_{F_\gamma}$ given by
(2.13a)  
(2.13b) 
and , .
Proof.
Let
Due to Lemma A.1 (in the Appendix), is strongly convex with modulus . Function is convex, as the composition of the convex function with the linear mapping . Therefore, is strongly convex with convexity parameter . On the other hand, for every
where the second inequality is due to Lemma A.2 in the Appendix. ∎
Notice that if and we choose , then and , so . In other words the condition number of is roughly double compared to that of .
2.1. Interpretations
It is apparent from (2.1) and (2.5) that FBS is a Picard iteration for computing a fixed point of the nonexpansive mapping $P_\gamma$. It is well known that fixed-point iterations may exhibit slow asymptotic convergence. On the other hand, Newton methods achieve much faster asymptotic convergence rates. However, in order to devise globally convergent Newton-like methods one needs a merit function on which to perform a line search, in order to determine a step size that guarantees sufficient decrease and damps the Newton steps when far from the solution. This is exactly the role that the FBE plays in this paper.
Another interesting observation is that the FBE provides a link between gradient methods and FBS, just like the Moreau envelope (2.3) does for the proximal point algorithm [21]. To see this, consider the problem
(2.14)  $\mathrm{minimize}_{x\in\mathbb{R}^n}\ g(x),$
where $g$ is a proper, lower semicontinuous, convex function.
(2.15)  $x^{k+1} = \mathrm{prox}_{\gamma g}(x^k).$
It is well known that the proximal point algorithm can be interpreted as a gradient method for minimizing the Moreau envelope of $g$, cf. (2.3). Indeed, due to (2.4), iteration (2.15) can be expressed as
$x^{k+1} = x^k - \gamma\nabla g^\gamma(x^k).$
This simple idea provides a link between nonsmooth and smooth optimization and has led to the discovery of a variety of algorithms for problem (2.14), such as semismooth Newton methods [22], variable-metric [23] and quasi-Newton methods [24], and trust-region methods [25], to name a few. However, when dealing with composite problems, even if the proximal mappings of $f$ and $g$ are cheaply computable, computing the proximal mapping and Moreau envelope of $F$ is usually as hard as solving (1.1) itself. On the other hand, forward-backward splitting takes advantage of the structure of the problem by operating separately on the two summands, cf. (2.1). The question that naturally arises is the following:
Is there a continuously differentiable function that provides an interpretation of FBS as a gradient method, just like the Moreau envelope does for the proximal point algorithm and problem (2.14)?
The forward-backward envelope provides an affirmative answer. Specifically, FBS can be interpreted as the following (variable-metric) gradient method on the FBE:
$x^{k+1} = x^k - \gamma\big(I - \gamma\nabla^2 f(x^k)\big)^{-1}\nabla F_\gamma(x^k).$
Furthermore, the properties
$g\big(\mathrm{prox}_{\gamma g}(x)\big) \le g^\gamma(x), \qquad \inf g^\gamma = \inf g, \qquad \arg\min g^\gamma = \arg\min g,$
holding for the Moreau envelope, correspond to Theorem 2.2(iii) and Theorem 2.2(iv) for the FBE. The relationship between the Moreau envelope and the forward-backward envelope is then apparent. This opens the possibility of extending FBS and devising new algorithms for problem (1.1) by simply reconsidering and appropriately adjusting methods for the unconstrained minimization of continuously differentiable functions, the most well studied problem in optimization. In this work we exploit one of the numerous alternatives, by devising Newton-like algorithms that are able to achieve fast asymptotic convergence rates. The next section deals with the other obstacle that needs to be overcome, i.e., constructing a second-order expansion for the $C^1$ (but not $C^2$) function $F_\gamma$ around any optimal solution, which behaves similarly to the Hessian for $C^2$ functions and allows us to devise algorithms with fast local convergence.
3. Second-order Analysis of $F_\gamma$
As was shown in Section 2, $F_\gamma$ is continuously differentiable over $\mathbb{R}^n$. However, $F_\gamma$ fails to be $C^2$ in most cases: since $g$ is nonsmooth, its Moreau envelope $g^\gamma$ is hardly ever $C^2$. For example, if $g$ is real-valued then $g^\gamma$ is $C^1$, and it is $C^2$ if and only if $g$ is $C^2$ [26]. Therefore, we hardly ever have the luxury of assuming continuous differentiability of $\nabla F_\gamma$ and we must resort to generalized notions of differentiability stemming from nonsmooth analysis. Specifically, our analysis is largely based upon generalized differentiability properties of $\mathrm{prox}_{\gamma g}$, which we study next.
3.1. Generalized Jacobians of proximal mappings
Since $\mathrm{prox}_{\gamma g}$ is globally Lipschitz continuous, by Rademacher's theorem [17, Th. 9.60] it is almost everywhere differentiable. Recall that Rademacher's theorem asserts that if a mapping $G$ is locally Lipschitz continuous on $\mathbb{R}^n$, then it is almost everywhere differentiable, i.e., the set $\mathbb{R}^n\setminus C_G$ has measure zero, where $C_G$ is the subset of points in $\mathbb{R}^n$ for which $G$ is differentiable. Hence, although the Jacobian of $\mathrm{prox}_{\gamma g}$ in the classical sense might not exist everywhere, generalized differentiability notions, such as the B-subdifferential and the generalized Jacobian of Clarke, can be employed to provide a local first-order approximation of $\mathrm{prox}_{\gamma g}$.
Definition 3.1.
Let $G:\mathbb{R}^n\to\mathbb{R}^m$ be locally Lipschitz continuous at $x$. The B-subdifferential (or limiting Jacobian) of $G$ at $x$ is
$\partial_B G(x) \triangleq \big\{ H\in\mathbb{R}^{m\times n} : \exists\{x^k\}\subseteq C_G \text{ with } x^k\to x,\ \nabla G(x^k)\to H \big\},$
whereas the (Clarke) generalized Jacobian of $G$ at $x$ is
$\partial_C G(x) \triangleq \mathrm{conv}\big(\partial_B G(x)\big).$
If $G$ is locally Lipschitz on $\mathbb{R}^n$ then $\partial_C G(x)$ is a nonempty, convex and compact set of $m$ by $n$ matrices, and as a set-valued mapping it is outer semicontinuous at every $x\in\mathbb{R}^n$. The next theorem shows that the elements of the generalized Jacobian of the proximal mapping are symmetric and positive semidefinite. Furthermore, it provides a bound on the magnitude of their eigenvalues.
Theorem 3.2.
Suppose that $g$ is proper, lower semicontinuous and convex, and $\gamma>0$. Every $P\in\partial_C(\mathrm{prox}_{\gamma g})(x)$ is a symmetric positive semidefinite matrix that satisfies $\|P\|\le 1$.
Proof.
Since $g$ is convex, its Moreau envelope $g^\gamma$ is a convex function as well, therefore every element of $\partial_C(\nabla g^\gamma)(x)$ is a symmetric positive semidefinite matrix (see e.g. [13, Sec. 8.3.3]). Due to (2.4), we have $\mathrm{prox}_{\gamma g}(x) = x - \gamma\nabla g^\gamma(x)$, therefore
(3.1)  $\partial_C(\mathrm{prox}_{\gamma g})(x) = \big\{ I - \gamma H : H\in\partial_C(\nabla g^\gamma)(x) \big\}.$
The last relation holds with equality (as opposed to inclusion in the general case) due to the fact that one of the summands is continuously differentiable. Now from (3.1) we easily infer that every element of $\partial_C(\mathrm{prox}_{\gamma g})(x)$ is a symmetric matrix. Since $\nabla g^\gamma$ is Lipschitz continuous with Lipschitz constant $\gamma^{-1}$, using [27, Prop. 2.6.2(d)], we infer that every $H\in\partial_C(\nabla g^\gamma)(x)$ satisfies $\|H\|\le\gamma^{-1}$, i.e., $0 \preceq \gamma H \preceq I$. Therefore, every $P = I - \gamma H \in \partial_C(\mathrm{prox}_{\gamma g})(x)$ is positive semidefinite. On the other hand, since $\mathrm{prox}_{\gamma g}$ is Lipschitz continuous with Lipschitz constant 1, using [27, Prop. 2.6.2(d)] we obtain that $\|P\|\le 1$ for all $P\in\partial_C(\mathrm{prox}_{\gamma g})(x)$. ∎
An interesting property of $\partial_C(\mathrm{prox}_{\gamma g})$, documented in the following proposition, is useful whenever $g$ is (block) separable, i.e., $g(x) = \sum_{i=1}^{N} g_i(x_i)$ with $x = (x_1,\ldots,x_N)$. In such cases every element of $\partial_C(\mathrm{prox}_{\gamma g})(x)$ is a (block) diagonal matrix. This has favorable computational implications, especially for large-scale problems. For example, if $g$ is the $\ell_1$ norm or the indicator function of a box, then the elements of $\partial_B(\mathrm{prox}_{\gamma g})(x)$ (or of $\partial_C(\mathrm{prox}_{\gamma g})(x)$) are diagonal matrices with diagonal elements in $\{0,1\}$ (or in $[0,1]$).
Proposition 3.3 (separability).
If $g$ is (block) separable then every element of $\partial_B(\mathrm{prox}_{\gamma g})(x)$ and $\partial_C(\mathrm{prox}_{\gamma g})(x)$ is (block) diagonal.
Proof.
Since $g$ is block separable, its proximal mapping has the form
$\mathrm{prox}_{\gamma g}(x) = \big(\mathrm{prox}_{\gamma g_1}(x_1),\ldots,\mathrm{prox}_{\gamma g_N}(x_N)\big).$
The result follows directly by Definition 3.1. ∎
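For instance, for $g=\lambda\|\cdot\|_1$ the proximal mapping is componentwise soft-thresholding, and a valid element of its generalized Jacobian is the 0/1 diagonal matrix constructed below. This is our own illustrative sketch; the value chosen at kink points ($|u_i| = t$) is arbitrary within $[0,1]$:

```python
import numpy as np

def prox_l1(u, t):
    """Proximal mapping of t*||.||_1 (soft-thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def jac_prox_l1(u, t):
    """One element of the generalized Jacobian of prox_l1 at u:
    diagonal, with entry 1 where |u_i| > t and 0 elsewhere.
    (Where |u_i| == t, any value in [0, 1] would also be valid.)"""
    return np.diag((np.abs(u) > t).astype(float))

u = np.array([2.0, -0.1, 0.5, -3.0])
t = 0.4
P = jac_prox_l1(u, t)
```

Away from the kinks the soft-thresholding map is affine, so this diagonal matrix is its exact Jacobian there, which is easy to confirm with a directional finite difference.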
The following proposition provides a connection between the generalized Jacobian of the proximal mapping for a convex function and that of its conjugate, stemming from the celebrated Moreau’s decomposition [16, Th. 14.3].
Proposition 3.4 (Moreau’s decomposition).
Suppose that $g$ is proper, lower semicontinuous and convex, and $\gamma>0$. Then
$\partial_B(\mathrm{prox}_{\gamma g})(x) = \big\{ I - P : P\in\partial_B(\mathrm{prox}_{\gamma^{-1}g^*})(\gamma^{-1}x) \big\},$
and the same relation holds with $\partial_B$ replaced by $\partial_C$.
Proof.
Using Moreau's decomposition we have
$\mathrm{prox}_{\gamma g}(x) = x - \gamma\,\mathrm{prox}_{\gamma^{-1}g^*}(\gamma^{-1}x).$
The first result follows directly by Definition 3.1, since $\mathrm{prox}_{\gamma g}$ is expressed as the difference of two functions, one of which is continuously differentiable. The second result follows from the fact that, with a slight abuse of notation,
$\mathrm{conv}\big\{ I - P : P\in\partial_B(\mathrm{prox}_{\gamma^{-1}g^*})(\gamma^{-1}x) \big\} = \big\{ I - P : P\in\partial_C(\mathrm{prox}_{\gamma^{-1}g^*})(\gamma^{-1}x) \big\}.$
∎
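Moreau's decomposition is easy to test numerically. In the sketch below (our own illustration) $g$ is the $\ell_1$ norm, whose conjugate is the indicator of the unit $\ell_\infty$ ball, so the prox of the conjugate is a projection, i.e., componentwise clipping:

```python
import numpy as np

def prox_l1(u, t):
    """Proximal mapping of t*||.||_1 (soft-thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def proj_linf_ball(u):
    """Prox of the conjugate g* = indicator of the unit l_inf ball:
    projection onto [-1, 1]^n."""
    return np.clip(u, -1.0, 1.0)

gamma = 0.7
x = np.array([1.3, -0.2, 0.05, -2.4])

# Moreau's decomposition: prox_{gamma g}(x) + gamma * prox_{g*/gamma}(x/gamma) = x
lhs = prox_l1(x, gamma) + gamma * proj_linf_ball(x / gamma)
```

The identity holds componentwise: coordinates shrunk to zero by soft-thresholding are exactly those where the scaled projection is the identity, and vice versa.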
Semismooth mappings [28] are precisely the Lipschitz continuous mappings for which the generalized Jacobian (and consequently the B-subdifferential) furnishes a first-order approximation.
Definition 3.5.
Let $G:\mathbb{R}^n\to\mathbb{R}^m$ be locally Lipschitz continuous at $x$. We say that $G$ is semismooth at $x$ if
$\sup_{H\in\partial_C G(x+d)} \big\| G(x+d) - G(x) - Hd \big\| = o(\|d\|) \quad \text{as } d\to 0,$
whereas $G$ is said to be strongly semismooth at $x$ if $o(\|d\|)$ can be replaced with $O(\|d\|^2)$.
We remark that the original definition of semismoothness given by [29] requires $G$ to be directionally differentiable at $x$. The definition given here is the one employed by [30]. Another remark worth making is that $\partial_C G$ can be replaced with the smaller set $\partial_B G$ in Definition 3.5.
Fortunately, the class of semismooth mappings is rich enough to include the proximal mappings of most of the functions arising in interesting applications. For example, piecewise smooth ($PC^1$) mappings are semismooth everywhere. Recall that a continuous mapping $G:\mathbb{R}^n\to\mathbb{R}^m$ is $PC^1$ if there exists a finite collection of smooth mappings $G_1,\ldots,G_N$ such that
$G(x) \in \{G_1(x),\ldots,G_N(x)\} \quad \text{for all } x\in\mathbb{R}^n.$
The definition of $PC^1$ mappings given here is less general than that of, e.g., [31, Ch. 4], but it suffices for our purposes. For every $x\in\mathbb{R}^n$ we introduce the set of essentially active indices
$I^e_G(x) \triangleq \big\{ i : x \in \mathrm{cl}\,\mathrm{int}\,\{z : G(z) = G_i(z)\} \big\}.$
In other words, $I^e_G(x)$ contains only the indices of pieces $G_i$ for which there exists a full-dimensional set on which $G$ agrees with $G_i$. In accordance with Definition 3.1, the generalized Jacobian of $G$ at $x$ is the convex hull of the Jacobians of the essentially active pieces, i.e., [31, Prop. 4.3.1]
(3.2)  $\partial_C G(x) = \mathrm{conv}\big\{ \nabla G_i(x) : i\in I^e_G(x) \big\}.$
As will become clear in Section 5, in many interesting cases $\mathrm{prox}_{\gamma g}$ is $PC^1$ and thus semismooth. Furthermore, through (3.2) an element of $\partial_C(\mathrm{prox}_{\gamma g})(x)$ can be easily computed once $\mathrm{prox}_{\gamma g}(x)$ has been computed.
A special but important class of convex functions whose proximal mapping is $PC^1$ are the piecewise quadratic (PWQ) functions. A convex function $g$ is called PWQ if its domain can be represented as the union of finitely many polyhedral sets, relative to each of which $g$ is given by an expression of the form $\tfrac12 x^\top Qx + q^\top x + c$ (where $Q$ must necessarily be symmetric positive semidefinite) [17, Def. 10.20]. The class of PWQ functions is quite general since it includes e.g. polyhedral norms, indicators and support functions of polyhedral sets, and it is closed under addition, composition with affine mappings, conjugation, inf-convolution and inf-projection [17, Prop. 10.22, Prop. 11.32]. It turns out that the proximal mapping of a PWQ function is piecewise affine (PWA) [17, 12.30] ($\mathbb{R}^n$ is partitioned into polyhedral sets relative to each of which $\mathrm{prox}_{\gamma g}$ is an affine mapping), hence strongly semismooth [13, Prop. 7.4.7]. Another example of a proximal mapping that is strongly semismooth is the projection operator onto symmetric cones [32]. We refer the reader to [33, 34, 35, 36] for conditions that guarantee semismoothness of the proximal mapping for more general convex functions.
3.2. Approximate generalized Hessian for
Having established properties of the generalized Jacobians of proximal mappings, we are now in a position to construct a generalized Hessian for $F_\gamma$ that will allow the development of Newton-like methods with fast asymptotic convergence rates. The obvious route to follow is to assume that $\nabla F_\gamma$ is semismooth and employ $\partial_C(\nabla F_\gamma)$ as a generalized Hessian for $F_\gamma$. However, semismoothness would require extra assumptions on $f$. Furthermore, the form of the elements of $\partial_C(\nabla F_\gamma)$ is quite complicated, involving third-order partial derivatives of $f$. On the other hand, what is really needed to devise Newton-like algorithms with fast local convergence rates is a linear Newton approximation (LNA), cf. Definition 3.6, at some stationary point of $F_\gamma$, which by Theorem 2.2(iv) is also a minimizer of $F_\gamma$, provided that $\gamma\in(0,1/L_f]$. The approach we follow is largely based on [37], [13, Prop. 10.4.4]. The following definition is taken from [13, Def. 7.5.13].
Definition 3.6.
Let $G:\mathbb{R}^n\to\mathbb{R}^m$ be continuous on $\mathbb{R}^n$. We say that $G$ admits a linear Newton approximation at a vector $\bar x\in\mathbb{R}^n$ if there exists a set-valued mapping $\mathcal{T}:\mathbb{R}^n\rightrightarrows\mathbb{R}^{m\times n}$ that has nonempty compact images, is upper semicontinuous at $\bar x$, and such that for any $H\in\mathcal{T}(x)$
$\|G(x) + H(\bar x - x) - G(\bar x)\| = o(\|x-\bar x\|) \quad \text{as } x\to\bar x.$
If instead
$\|G(x) + H(\bar x - x) - G(\bar x)\| = O(\|x-\bar x\|^2) \quad \text{as } x\to\bar x,$
then we say that $G$ admits a strong linear Newton approximation at $\bar x$.
Arguably the most notable example of a LNA for semismooth mappings is the generalized Jacobian, cf. Definition 3.1. However, semismooth mappings can admit LNAs different from the generalized Jacobian. More importantly, mappings that are not semismooth may also admit a LNA. It turns out that we can define a LNA for $\nabla F_\gamma$ at any stationary point, whose elements have a simpler form than those of $\partial_C(\nabla F_\gamma)$, without assuming semismoothness of $\nabla^2 f$. We call it approximate generalized Hessian and it is given by
$\hat\partial^2 F_\gamma(x) \triangleq \Big\{ \gamma^{-1}\big(I-\gamma\nabla^2 f(x)\big)\big(I - P\,(I-\gamma\nabla^2 f(x))\big) : P\in\partial_C(\mathrm{prox}_{\gamma g})\big(x-\gamma\nabla f(x)\big) \Big\}.$
The key idea in the definition of $\hat\partial^2 F_\gamma$, reminiscent of the Gauss-Newton method for nonlinear least-squares problems, is to omit terms vanishing at $x_\star$ that contain third-order derivatives of $f$. The following proposition shows that $\hat\partial^2 F_\gamma$ is indeed a LNA of $\nabla F_\gamma$ at any $x_\star\in X_\star$.
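For a quadratic $f$ (constant Hessian, so no third-order terms are omitted) and $g=\lambda\|\cdot\|_1$, an element of the approximate generalized Hessian can be assembled explicitly. The sketch below is our own illustration on an invented instance; it also checks symmetry and positive semidefiniteness, and agreement with a finite-difference Jacobian of $\nabla F_\gamma$:

```python
import numpy as np

def prox_l1(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def grad_fbe(x, gamma, Q, q, lam):
    """(2.9): grad F_gamma(x) = (1/gamma)(I - gamma*Q)(x - P_gamma(x)),
    for f(x) = 0.5 x'Qx + q'x and g = lam*||.||_1."""
    u = x - gamma * (Q @ x + q)
    return (np.eye(len(x)) - gamma * Q) @ (x - prox_l1(u, gamma * lam)) / gamma

def lna_element(x, gamma, Q, q, lam):
    """One element of the approximate generalized Hessian of F_gamma:
    (1/gamma)(I - gamma*Q)(I - P(I - gamma*Q)), with P a diagonal 0/1
    element of the generalized Jacobian of the prox."""
    n = len(x)
    u = x - gamma * (Q @ x + q)
    P = np.diag((np.abs(u) > gamma * lam).astype(float))
    IgQ = np.eye(n) - gamma * Q
    return IgQ @ (np.eye(n) - P @ IgQ) / gamma

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n))
Q = M.T @ M + np.eye(n)
q = rng.standard_normal(n)
lam = 0.5
gamma = 0.9 / np.linalg.norm(Q, 2)
x = rng.standard_normal(n)
H = lna_element(x, gamma, Q, q, lam)
```

Note that only matrix-vector products with $H$ are ever needed by the algorithms of Section 4; the dense matrix is formed here purely for checking purposes.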
Proposition 3.7.
Let $x_\star\in X_\star$ and $\gamma\in(0,1/L_f)$. Then

(i) if $\mathrm{prox}_{\gamma g}$ is semismooth at $x_\star-\gamma\nabla f(x_\star)$, then $\hat\partial^2 F_\gamma$ is a LNA for $\nabla F_\gamma$ at $x_\star$;

(ii) if $\mathrm{prox}_{\gamma g}$ is strongly semismooth at $x_\star-\gamma\nabla f(x_\star)$, and $\nabla^2 f$ is locally Lipschitz around $x_\star$, then $\hat\partial^2 F_\gamma$ is a strong LNA for $\nabla F_\gamma$ at $x_\star$.
Proof.
See Appendix. ∎
The next proposition shows that every element of $\hat\partial^2 F_\gamma(x)$ is a symmetric positive semidefinite matrix, whose eigenvalues are lower and upper bounded uniformly over all $x\in\mathbb{R}^n$.
Proposition 3.8.
Any $H\in\hat\partial^2 F_\gamma(x)$ is symmetric positive semidefinite and satisfies
(3.3) 
where , .
Proof.
See Appendix. ∎
The next lemma shows uniqueness of the solution of (1.1) under a nonsingularity assumption on the elements of $\hat\partial^2 F_\gamma(x_\star)$. Its proof is similar to [13, Lem. 7.2.10]; however, $\nabla^2 f$ is not required to be locally Lipschitz around $x_\star$.
Lemma 3.9.
Let $\gamma\in(0,1/L_f)$. Suppose that $x_\star\in X_\star$, $\mathrm{prox}_{\gamma g}$ is semismooth at $x_\star-\gamma\nabla f(x_\star)$ and every element of $\hat\partial^2 F_\gamma(x_\star)$ is nonsingular. Then $x_\star$ is the unique solution of (1.1). In fact, there exist positive constants $\delta$ and $c$ such that
$\|\nabla F_\gamma(x)\| \ge c\,\|x - x_\star\| \quad \text{for all } x \text{ with } \|x-x_\star\|\le\delta.$
Proof.
See Appendix. ∎
4. Forward-Backward Newton-CG Methods
Having established the equivalence between minimizing $F$ and $F_\gamma$, as well as a LNA for $\nabla F_\gamma$, it is now very easy to design globally convergent Newton-like algorithms with fast asymptotic convergence rates for computing an element of $X_\star$. Algorithm 1 is a standard line-search method for minimizing $F_\gamma$, where a conjugate gradient method is employed to solve (approximately) the corresponding regularized Newton system. Therefore our algorithm does not require forming an element of the generalized Hessian of $F_\gamma$ explicitly. It only requires the computation of the corresponding matrix-vector product and is thus suitable for large-scale problems. Similarly, there is no need to form the Hessian of $f$ explicitly in order to compute the directional derivative needed in the backtracking procedure for computing the stepsize (4.4); only matrix-vector products with $\nabla^2 f$ are required. Under nonsingularity of the elements of $\hat\partial^2 F_\gamma(x_\star)$, the stepsize eventually becomes equal to 1 and Algorithm 1 reduces to a regularized version of the (undamped) linear Newton method [13, Alg. 7.5.14] for solving $\nabla F_\gamma(x) = 0$.
(4.1) 
(4.2) 
(4.3a)  
(4.3b) 
(4.4) 
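The sketch below (our own illustration, not the paper's Algorithm 1) captures the overall structure of such a method for a strongly convex quadratic $f$ and $g=\lambda\|\cdot\|_1$: an element of the approximate generalized Hessian is used in a regularized Newton system solved by CG, with a backtracking line search on the FBE. The regularization $\delta$, the Armijo constant and the CG tolerance are illustrative choices:

```python
import numpy as np

def prox_l1(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def fbe_pieces(x, gamma, Q, q, lam):
    """FBE value, gradient (2.9) and one element of the approximate
    generalized Hessian, for f(x) = 0.5 x'Qx + q'x and g = lam*||.||_1."""
    n = len(x)
    gx = Q @ x + q
    u = x - gamma * gx
    z = prox_l1(u, gamma * lam)
    val = (0.5 * x @ Q @ x + q @ x - (gamma / 2) * gx @ gx
           + lam * np.sum(np.abs(z)) + np.sum((z - u) ** 2) / (2 * gamma))
    IgQ = np.eye(n) - gamma * Q
    grad = IgQ @ (x - z) / gamma
    P = np.diag((np.abs(u) > gamma * lam).astype(float))
    H = IgQ @ (np.eye(n) - P @ IgQ) / gamma
    return val, grad, H

def cg(H, b, tol=1e-12, maxit=100):
    """Plain conjugate gradient for H d = b (H symmetric positive definite)."""
    d = np.zeros_like(b)
    r = b.copy(); p = r.copy(); rs = r @ r
    if rs == 0.0:
        return d
    for _ in range(maxit):
        Hp = H @ p
        alpha = rs / (p @ Hp)
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d

def fb_newton_cg(Q, q, lam, gamma, x0, iters=50, delta=1e-8):
    """Line-search Newton-CG on the FBE; returns P_gamma(x) at the final iterate."""
    x = x0.copy()
    n = len(x)
    for _ in range(iters):
        val, g, H = fbe_pieces(x, gamma, Q, q, lam)
        if np.linalg.norm(g) < 1e-12:
            break
        d = cg(H + delta * np.eye(n), -g)  # regularized Newton direction
        tau = 1.0                          # backtracking (Armijo) line search
        while fbe_pieces(x + tau * d, gamma, Q, q, lam)[0] > val + 1e-4 * tau * (g @ d):
            tau *= 0.5
            if tau < 1e-12:
                break
        x = x + tau * d
    return prox_l1(x - gamma * (Q @ x + q), gamma * lam)

rng = np.random.default_rng(3)
n = 6
M = rng.standard_normal((n, n))
Q = M.T @ M + np.eye(n)
q = rng.standard_normal(n)
lam = 0.5
gamma = 0.9 / np.linalg.norm(Q, 2)
x_sol = fb_newton_cg(Q, q, lam, gamma, np.zeros(n))
```

The returned point satisfies the fixed-point optimality condition (2.5) to high accuracy; on this small instance the dense Hessian element is formed for clarity, whereas a large-scale implementation would supply only the matrix-vector product to CG.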
The next theorem delineates the basic convergence properties of Algorithm 1.
Theorem 4.1.
Every accumulation point of the sequence $\{x^k\}$ generated by Algorithm 1 belongs to $X_\star$.
Proof.
We will first show that the sequence of directions $\{d^k\}$ is gradient related to $\{x^k\}$ [20, Sec. 1.2]. That is, for any subsequence $\{x^k\}_{k\in K}$ that converges to a nonstationary point of $F_\gamma$, i.e.,
(4.5) 
the corresponding subsequence $\{d^k\}_{k\in K}$ is bounded and satisfies
(4.6) 
Without loss of generality we can restrict to subsequences for which , for all . Suppose that is one such subsequence. Due to (4.3a), we have for all . Matrix is positive semidefinite due to Proposition 3.8, therefore is nonsingular for all and
Now, direction satisfies
where . Therefore
(4.7)  
proving that is bounded. According to [38, Lemma A.2], when CG is applied to (4.1) we have that
(4.8) 
Using (4.3a) and Proposition 3.8, we have that
therefore
(4.9) 
As $k\to\infty$, $k\in K$, the right-hand side of (4.9) converges to a limit which is either a finite negative number or $-\infty$. In any case, this together with (4.9) confirms that (4.6) is valid as well, proving that $\{d^k\}$ is gradient related to $\{x^k\}$. All the assumptions of [20, Prop. 1.2.1] hold, therefore every accumulation point of $\{x^k\}$ is a stationary point of $F_\gamma$, which by Theorem 2.2(iv) is also a minimizer of $F$. ∎
The next theorem shows that under a nonsingularity assumption on the elements of $\hat\partial^2 F_\gamma(x_\star)$, the asymptotic rate of convergence of the sequence generated by Algorithm 1 is at least superlinear.
Theorem 4.2.
Suppose that $x_\star$ is an accumulation point of the sequence $\{x^k\}$ generated by Algorithm 1. If $\mathrm{prox}_{\gamma g}$ is semismooth at $x_\star-\gamma\nabla f(x_\star)$ and every element of $\hat\partial^2 F_\gamma(x_\star)$ is nonsingular, then the entire sequence converges to $x_\star$ and the convergence rate is Q-superlinear. Furthermore, if $\mathrm{prox}_{\gamma g}$ is strongly semismooth at $x_\star-\gamma\nabla f(x_\star)$ and $\nabla^2 f$ is locally Lipschitz continuous around $x_\star$, then $\{x^k\}$ converges to $x_\star$ with Q-order at least .
Proof.
Theorem 4.1 asserts that $x_\star$ must be a stationary point of $F_\gamma$. Due to Proposition 3.7, $\hat\partial^2 F_\gamma$ is a LNA of $\nabla F_\gamma$ at $x_\star$. Due to Lemma 3.9, $x_\star$ is the globally unique minimizer of $F_\gamma$. Therefore, by Theorem 4.1 every subsequence must converge to this unique accumulation point, implying that the entire sequence converges to $x_\star$. Furthermore, for any
(4.10) 
where the second inequality follows from and Lemma A.2 (in the Appendix).
We know that satisfies