A globally convergent and locally quadratically convergent modified B-semismooth Newton method for penalized minimization
Abstract
We consider the efficient minimization of a nonlinear, strictly convex functional with penalty term. Such minimization problems appear in a wide range of applications like Tikhonov regularization of (non)linear inverse problems with sparsity constraints. In (2015 Inverse Problems 31 025005), a globalized Bouligand-semismooth Newton method was presented for Tikhonov regularization of linear inverse problems. However, a technical assumption on the accumulation point of the sequence of iterates was necessary to prove global convergence. Here, we generalize this method to general nonlinear problems and present a modified semismooth Newton method for which global convergence is proven without any additional requirements. Moreover, under a technical assumption, full Newton steps are eventually accepted and locally quadratic convergence is achieved. Numerical examples from image deblurring and robust regression demonstrate the performance of the method.
Keywords: global convergence, semismooth Newton method, Tikhonov regularization, inverse problems, sparsity constraints, quadratic convergence
Mathematics Subject Classification: 49M15, 49N45, 90C56
1 Introduction
We are concerned with the efficient minimization of
(1) 
where is a twice Lipschitz continuously differentiable and strictly convex functional, and is a positive weight sequence with . Minimization problems of the form (1) appear in various applications from engineering and the natural sciences. A well-known example is Tikhonov regularization for inverse problems with sparsity constraints, e.g. medical imaging, geophysics, non-destructive testing or compressed sensing, see e.g. [16, 20, 26, 55]. Here, one aims to solve a possibly nonlinear ill-posed operator equation , . In practice, one has to reconstruct from noisy measurement data . In the presence of perturbed data, regularization strategies are required for the stable computation of a numerical solution to an inverse problem [16, 49]. Applying Tikhonov regularization with sparsity constraints, one minimizes a functional consisting of a suitable discrepancy term and a sparsity-promoting penalty term, see e.g. [13] and the references therein. Sparsity here refers to the a priori assumption that the unknown solution is sparse, i.e. has only few nonzero entries. As an example, in the special case of a linear discrete ill-posed operator equation , linear, bounded and injective, , one may choose the discrepancy term [16]. For nonlinear inverse problems like parameter identification problems, convex discrepancy terms from energy functional approaches may be considered, see e.g. [31, 35, 38, 40]. Sparsity of the Tikhonov minimizer with respect to a given basis can be enforced by using the penalty term in (1), where the weights act as regularization parameters, see e.g. [25, 26, 34] and the references therein.
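For orientation, the generic structure of the penalized problem (1) with a quadratic discrepancy can be sketched as follows; the symbols $\Phi$, $w_k$, $A$ and $y^{\delta}$ are placeholder notation chosen for this illustration:

```latex
% Sparsity-promoting penalized minimization (placeholder notation):
\min_{x \in \mathbb{R}^n} \; \Phi(x) + \sum_{k=1}^{n} w_k \, \lvert x_k \rvert ,
\qquad w_k \geq w_{\min} > 0,
% e.g. with the quadratic discrepancy of a linear ill-posed problem:
\qquad \Phi(x) = \tfrac{1}{2} \, \lVert A x - y^{\delta} \rVert_2^2 .
```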
In current research, sparsity-promoting regularization techniques are widely used, see e.g. [6, 13, 24, 26, 33, 34, 38, 39, 40] and the references therein. Such recovery schemes usually outperform classical Tikhonov regularization with coefficient penalties in terms of reconstruction quality if the unknown solution is sparse w.r.t. some basis. This is the case in many parameter identification problems for partial differential equations with piecewise smooth solutions, like electrical impedance tomography [24, 33] or inverse heat conduction scenarios [7].
There exists a variety of approaches for the numerical minimization of (1) in the literature. In the special case of a quadratic functional , iterative soft-thresholding [13] as well as related approaches for general functionals are well-studied, see e.g. [6, 8, 48]. Accelerated methods and gradient-based methods introduced in [4, 20, 38, 41, 54] often gain from clever step-size choices. Homotopy-type solvers [42] and alternating direction methods of multipliers [55], among many others, are also state of the art.
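As a point of reference for these first-order schemes, classical iterative soft-thresholding for a quadratic discrepancy can be sketched in a few lines; the function names and the fixed step size below are choices made for this illustration, not the paper's notation:

```python
import numpy as np

def soft_threshold(x, w):
    # Componentwise soft thresholding with nonnegative weights w:
    # S_w(x)_k = sign(x_k) * max(|x_k| - w_k, 0).
    return np.sign(x) * np.maximum(np.abs(x) - w, 0.0)

def ista(A, y, w, step, n_iter=200):
    # Iterative soft-thresholding for min 0.5*||Ax - y||^2 + sum_k w_k |x_k|;
    # converges for step sizes below 2 / ||A^T A||.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), step * w)
    return x
```

For the trivial case A = I with unit step, a single iteration already yields the minimizer, namely the soft-thresholded data.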
Other popular approaches for the solution of (1) are semismooth Newton methods [9, 52]. A semismooth Newton method and a quasi-Newton method for the minimization of (1) were proposed by Muoi et al. in the infinite-dimensional setting [40], inspired by previous work of Herzog and Lorenz [26]. If is convex and smooth, it was shown, e.g. in [11, 26, 39], that is a minimizer of (1) if and only if is a solution to the zero-finding problem ,
(2) 
for any fixed , where denotes the componentwise soft thresholding of with respect to a positive weight sequence , denotes the gradient of and . In [40], from (2) was shown to be Newton differentiable, i.e. under a suitable assumption on there exists a family of slanting functions with
(3) 
see also [9, 26, 52] for the definition of Newton derivatives. A local semismooth Newton method was defined in [40] by
(4)  
(5) 
with a specially chosen , cf. [9, 52]. In [40], locally superlinear convergence was proven under suitable assumptions on the functional .
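To illustrate the zero-finding reformulation (2), the residual map can be evaluated as below; the scaling of the thresholding weights by the constant c and all function names are assumptions of this sketch, following the usual fixed-point form of the optimality condition:

```python
import numpy as np

def soft_threshold(x, w):
    # Componentwise soft thresholding with nonnegative weights w.
    return np.sign(x) * np.maximum(np.abs(x) - w, 0.0)

def residual(x, grad_phi, w, c=1.0):
    # F(x) = x - S_{c w}(x - c * grad Phi(x));
    # x minimizes the penalized functional iff F(x) = 0.
    return x - soft_threshold(x - c * grad_phi(x), c * w)
```

For instance, with the quadratic discrepancy 0.5*||x - y||^2 the minimizer is the soft-thresholded data, and the residual vanishes there.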
Nevertheless, the above-mentioned semismooth Newton methods are in general only locally convergent. In [39], a semismooth Newton method with filter globalization was presented, where semismooth Newton steps are combined with damped shrinkage steps. Another globalized semismooth Newton method was developed in [28]. In loc. cit., inspired by [27, 32, 43, 45], the method from [26] was globalized in a finite-dimensional setting for the special case of a quadratic discrepancy term
(6) 
where is injective and . In [28], was shown to be Lipschitz continuous and directionally differentiable, i.e. Bouligand differentiable [17, 43, 50]. For such nonlinearities, a B(ouligand)-Newton method can be defined [43], replacing (4) by the generalized Newton equation
(7) 
In [28], the system (7) was shown to be equivalent to a uniquely solvable mixed linear complementarity problem [12]. By the choice (7), automatically is a descent direction with respect to the merit functional ,
(8) 
cf. [43]. Additionally, this Bouligand-Newton method can be interpreted as a semismooth Newton method with a specially chosen slanting function and is therefore called a B-semismooth Newton method, cf. [46]. By introducing suitable damping parameters, the method can be shown to be globally convergent under a technical assumption on the (in practice unknown) accumulation point of the sequence of iterates, see also [27, 32, 43, 45]. Indeed, if the chosen Armijo step sizes tend to zero, the merit functional has to fulfill the condition
(9) 
at to ensure global convergence.
In this work, we present a modified, globally convergent semismooth Newton method for the minimization problem (1) in the finite-dimensional setting
(10) 
for general (not necessarily quadratic) strictly convex functionals . Our work is inspired by Pang [44], where a globally and locally quadratically convergent modified Bouligand-Newton method was presented for the solution of variational inequalities, nonlinear complementarity problems and nonlinear programs. We exploit the similarities between nonlinear complementarity problems and the zero-finding problem (2) to propose a modified method similar to [44]. Starting out from [28, 40], we develop a globalized B-semismooth Newton method for general, possibly non-quadratic discrepancy functionals . In order to achieve global convergence without any requirements on the a priori unknown accumulation point of the iterates, inspired by [44], we propose a special modification of the Newton directions from (7), retaining the descent property w.r.t. . The resulting generalized Newton equation is again shown to be equivalent to a uniquely solvable mixed linear complementarity problem. Moreover, in our proposed scheme, under a technical assumption, full Newton steps are accepted in the vicinity of the zero of . As a consequence, under an additional regularity assumption, locally quadratic convergence is achieved. Additionally, the resulting modified method can be interpreted as a generalized Newton method as proposed by Han, Pang and Rangaraj [27]. In a neighborhood of the zero of , the modified method, under a technical assumption, coincides with the B-semismooth Newton method from [28] reformulated for non-quadratic . If is a quadratic functional, it was shown in [28] that, in a neighborhood of the zero, the B-semismooth Newton method finds the exact zero of within finitely many iterations.
Alternatively, one may consider other globalization strategies such as trust-region methods or path-search methods instead of the line-search damping strategy considered here, see e.g. [52, 18] and the references therein. The path-search globalization strategy proposed by the authors of [47, 14] could be a promising, albeit conceptually different, alternative. These approaches go beyond the scope of this paper and are left for future work.
For the rest of the paper, we require the following assumption on the smoothness of , similar to [40, Assumption 3.1, Example 3.4]. In Section 3, we will need a further assumption regarding the locally quadratic convergence of the method.
Assumption 1.

The function is twice Lipschitz continuously differentiable and the Hessian is positive definite for all . Moreover, there exist constants with
uniformly for all .

The level sets of are compact.
The compactness of the level sets in the case of a quadratic functional , injective, , was shown in [28]. Note that the positive definiteness of the Hessian implies strict convexity of the functional and ensures unique solvability of (10).
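The uniform bounds in Assumption 1 amount to two-sided spectral estimates on the Hessian; the constant names $m$ and $M$ below are placeholders for this sketch:

```latex
% Uniform positive definiteness of the Hessian (placeholder constants m, M):
m \,\lVert h \rVert^2 \;\leq\; \langle \nabla^2 \Phi(x)\, h,\, h \rangle
\;\leq\; M \,\lVert h \rVert^2
\qquad \text{for all } x,\, h \in \mathbb{R}^n, \quad 0 < m \leq M .
```

The lower bound yields strong convexity and hence unique solvability of (10); the upper bound is the uniform boundedness of the Hessian.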
The paper is organized as follows. Section 2 treats the proposed Bsemismooth Newton method and its modification as well as their feasibility. Section 3 addresses the global convergence and the local convergence speed of the methods. Numerical examples demonstrate the performance of the proposed algorithms in Section 4.
2 A B-semismooth Newton method and its modification
In this section, we present the algorithm of the B-semismooth Newton method from [28], generalized to the minimization problem (10), as well as a modified version, and discuss their feasibility. Additionally, we suggest a hybrid method. We start with the modified algorithm because the generalized B-semismooth Newton method can immediately be deduced from the modified method.
2.1 A modified B-semismooth Newton method and its feasibility
In the following, we introduce a modified Bsemismooth Newton method for the solution of (10). We denote the active set by , where
(11)  
(12) 
and the inactive set by , where
(13)  
(14)  
(15) 
Below, we drop the argument if there is no risk of confusion.
For defined by (2), we then have
(16)  
(17) 
By Assumption 1, is Lipschitz continuous and directionally differentiable. The directional derivative of can be easily deduced.
Lemma 2.
The directional derivative of at in the direction is given elementwise by
(18) 
Proof.
The directional derivative of the merit functional from (8) at in the direction is given by , where denotes the Euclidean scalar product, see e.g. [28, Lemma 3.2].
To introduce the modified semismooth Newton method, we define the subsets
(19)  
(20)  
(21)  
(22) 
Inspired by [44], we define the modified index sets
(23)  
(24)  
(25)  
(26)  
(27) 
We denote and respectively. The subsets (19)–(22) fulfill , , and if .
In the following lemma, we consider a linear complementarity problem which is important for all further discussions, cf. [28].
Lemma 3.
Let and . The linear complementarity problem
(28) 
with
(29) 
and
(30) 
has a unique solution.
Proof.
Now we can define the generalized Newton equation for , cf. [28]. Let and
(31) 
where is the unique solution to the linear complementarity problem (28). Then, by defining the generalized derivative blockwise
(32) 
the modified semismooth Newton method is given by
(33)  
(34) 
with suitably chosen damping parameters .
Remark 4.
In [40], Muoi et al. chose the slanting function
(35) 
blocked according to the active and inactive sets, to define the local semismooth Newton method (4), (5). The key difference of (32) compared to (35) is the modification of the index sets. Note that from (32) is not a slanting function in general because in regions where is smooth, does not coincide with the Fréchet derivative of .
Let and . Then solves (33) if and only if
(36)  
(37) 
and
(38)  
where , solve the linear complementarity problem (28), cf. [28, Lemma 3.4].
We summarize the above observations in the following lemma, cf. [28, Theorem 3.5].
Lemma 5.
Before proceeding, we prove some useful identities similar to [44, Lemma 2].
Lemma 6.
Let , the unique solution to (39) and . For , we have the following identities
(40)  
(41)  
(42) 
Additionally, for the inequality
(43) 
holds, for we have
(44) 
and for we have
(45) 
Proof.
Lemma 7.
Let with and . Let be the solution to (39). Then, we have
(46) 
i.e. is a true descent direction of at in the direction .
Proof.
We choose the step sizes in (34) by the well-known Armijo rule
where and , see also [27, 28, 32, 43, 44, 45]. These step sizes can be computed in finitely many iterations. We cite the following lemma from [28, Proposition 4.1].
Lemma 8.
Let , . Let with and let be computed by (39). Then, there exists a finite index with
(47) 
Proof.
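A generic backtracking implementation of such an Armijo rule might look as follows; the acceptance test is written with the merit value and a supplied (negative) directional-derivative bound, and the parameter names beta and sigma are placeholders for this sketch:

```python
def armijo_step(theta, x, d, dtheta, beta=0.5, sigma=1e-4, max_backtracks=50):
    # Backtracking Armijo rule: return the largest t = beta**i, i = 0, 1, ...,
    # with  theta(x + t*d) <= theta(x) + sigma * t * dtheta,
    # where dtheta < 0 bounds the directional derivative of theta at x along d.
    theta0 = theta(x)
    t = 1.0
    for _ in range(max_backtracks):
        if theta(x + t * d) <= theta0 + sigma * t * dtheta:
            return t
        t *= beta
    raise RuntimeError("no acceptable Armijo step found")
```

Since d is a descent direction, the loop terminates after finitely many backtracking steps, mirroring the finite-index statement of the lemma above.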
The algorithm of the modified B-semismooth Newton method, in the following denoted by modBSSN, is stated in Algorithm 1. The feasibility of Algorithm modBSSN is guaranteed by the lemmata stated above.
Remark 9.
Pang [44] introduced a modified B-Newton method for a nonlinear complementarity problem. Han, Pang and Rangaraj [27] interpreted this iteration as a generalized Newton method
where fulfills the assumption that is surjective for each fixed , and
for all , , see [27, Section 2.3]. In the very same way, our Algorithm modBSSN can be interpreted as a generalized Newton method with and from (32), cf. Lemma 7.
2.2 A B-semismooth Newton method and its feasibility
The generalized formulation of the B-semismooth Newton method (5), (7) from [28] for the setting (10), in the following denoted by BSSN, is identical to Algorithm modBSSN with the modified index sets (23)–(27) replaced by the original index sets (11)–(15) in (28)–(30) and (39), cf. Algorithm 1 and [28]. Analogously to the proofs in Section 2.1, the Newton directions can be shown to be uniquely determined, and the Armijo step sizes are well-defined because the Newton directions are descent directions w.r.t. the merit functional . Thus, the feasibility of Algorithm BSSN is guaranteed.
Remark 10.
The modification of the index sets in Algorithm modBSSN is needed to prove global convergence without any additional requirements, see Section 3. Let be the unique zero of and let , i.e. is smooth at . Then, there exists a neighborhood of where the index subsets (19)–(22) are empty for all , i.e. the modified index sets (23)–(27) match the original index sets (11)–(15). Therefore, Algorithm modBSSN locally coincides with Algorithm BSSN in a neighborhood of the zero of if , and hence is a semismooth Newton method there.
2.3 A globally convergent hybrid method
The B-semismooth Newton method (Algorithm BSSN) from Section 2.2 is efficient in practice because the index sets in step are usually empty, so that the generalized Newton equation simplifies to a system of linear equations of size . The size of this system of linear equations usually decreases in the course of the iteration. Nevertheless, the method may fail to converge, see Remark 10 and Theorem 15. In contrast, the global convergence of Algorithm modBSSN from Section 2.1 is ensured by Theorem 12, but here a mixed linear complementarity problem has to be solved in each iteration, see (39). Additionally, in order to set up the matrix and the vector from (29) and (30), systems of linear equations of size with the same matrix have to be solved if . Note that in (36) resp. (39), no additional system of linear equations has to be solved for the computation of . Nevertheless, Algorithm modBSSN is usually less efficient than Algorithm BSSN.
We suggest a hybrid method, starting with Algorithm BSSN and switching to Algorithm modBSSN when Algorithm BSSN begins to stagnate, i.e. by replacing the original index sets (11)–(15) by the modified index sets (23)–(27) in (28)–(30) and (39). In our numerical experiments, we switch to Algorithm modBSSN if the number of Newton steps exceeds a limit and if the chosen step size is smaller than a threshold , i.e. if and . In the sequel, this hybrid method is called hybridBSSN. An overview of the proposed methods is given in Algorithm 1. Similar hybrid methods, combining the fast local convergence of a local semismooth Newton method with the globally convergent generalized Newton method from [27], were proposed by Qi [45] and Ito and Kunisch [32].
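The switching test just described reduces to a simple predicate on the iteration counter and the last accepted step size; the default threshold values below are illustrative placeholders, not the values used in the experiments:

```python
def should_switch(newton_steps, last_stepsize, max_steps=10, min_stepsize=1e-3):
    # Switch from Algorithm BSSN to Algorithm modBSSN once the number of
    # Newton steps exceeds max_steps AND the accepted Armijo step size has
    # dropped below min_stepsize (a sign that BSSN is stagnating).
    return newton_steps > max_steps and last_stepsize < min_stepsize
```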
3 Global convergence and local convergence speed
In this section, we consider the convergence properties of the algorithms from Section 2.
3.1 Convergence of the modified B-semismooth Newton method
In the following, we address the global convergence of Algorithm modBSSN and its convergence speed in a neighborhood of the zero of . Concerning the boundedness of the sequence of Newton directions , we cite [28, Proposition 4.6].
Lemma 11.
Let and be the solution to (39). Then, there exists a constant independent of , with
(48) 
Proof.
In the following theorem, we present our main result on the global convergence of Algorithm modBSSN.
Theorem 12.
Let be an accumulation point of the sequence of iterates produced by Algorithm modBSSN. Then, we have .