A globally convergent and locally quadratically convergent modified B-semismooth Newton method for $\ell^1$-penalized minimization
We consider the efficient minimization of a nonlinear, strictly convex functional with $\ell^1$-penalty term. Such minimization problems appear in a wide range of applications like Tikhonov regularization of (non)linear inverse problems with sparsity constraints. In (2015 Inverse Problems 31 025005), a globalized Bouligand-semismooth Newton method was presented for $\ell^1$-Tikhonov regularization of linear inverse problems. However, a technical assumption on the accumulation point of the sequence of iterates was necessary to prove global convergence. Here, we generalize this method to general nonlinear problems and present a modified semismooth Newton method for which global convergence is proven without any additional requirements. Moreover, under a technical assumption, full Newton steps are eventually accepted and locally quadratic convergence is achieved. Numerical examples from image deblurring and robust regression demonstrate the performance of the method.
Keywords: global convergence, semismooth Newton method, $\ell^1$-Tikhonov regularization, inverse problems, sparsity constraints, quadratic convergence
Mathematics Subject Classification: 49M15, 49N45, 90C56
We are concerned with the efficient minimization of
$$T(x) := F(x) + \sum_j \omega_j |x_j| \to \min_x, \qquad (1)$$
where $F$ is a twice Lipschitz-continuously differentiable and strictly convex functional, and $w = (\omega_j)_j$ is a positive weight sequence with $\omega_j \ge \omega_{\min} > 0$. Minimization problems of the form (1) appear in various applications from engineering and natural sciences. A well-known example is Tikhonov regularization for inverse problems with sparsity constraints, e.g. medical imaging, geophysics, nondestructive testing or compressed sensing, see e.g. [16, 20, 26, 55]. Here, one aims to solve a possibly nonlinear ill-posed operator equation $A(x) = y$. In practice, one has to reconstruct $x$ from noisy measurement data $y^\delta$. In the presence of perturbed data, regularization strategies are required for the stable computation of a numerical solution to an inverse problem [16, 49]. Applying Tikhonov regularization with sparsity constraints, one minimizes a functional consisting of a suitable discrepancy term and a sparsity-promoting penalty term. Sparsity here means the a priori assumption that the unknown solution is sparse, i.e. has only few nonzero entries. As an example, in the special case of a linear discrete ill-posed operator equation $Ax = y$ with $A$ linear, bounded and injective, one may choose the discrepancy term $F(x) = \frac{1}{2}\|Ax - y^\delta\|^2$. For nonlinear inverse problems like parameter identification problems, convex discrepancy terms from energy functional approaches may be considered, see e.g. [31, 35, 38, 40]. Sparsity of the Tikhonov minimizer with respect to a given basis can be enforced by using the penalty term $\sum_j \omega_j |x_j|$ in (1), where the weights $\omega_j$ act as regularization parameters, see e.g. [25, 26, 34] and the references therein.
In current research, sparsity-promoting regularization techniques are widely used, see e.g. [6, 13, 24, 26, 33, 34, 38, 39, 40] and the references therein. Such recovery schemes usually outperform classical Tikhonov regularization with $\ell^2$ coefficient penalties in terms of reconstruction quality if the unknown solution is sparse w.r.t. some basis. This is the case in many parameter identification problems for partial differential equations with piecewise smooth solutions, like electrical impedance tomography [24, 33] or inverse heat conduction scenarios.
There exists a variety of approaches for the numerical minimization of (1) in the literature. In the special case of a quadratic functional $F$, iterative soft-thresholding as well as related approaches for general functionals $F$ are well-studied, see e.g. [6, 8, 48]. Accelerated methods and gradient-based methods introduced in [4, 20, 38, 41, 54] often benefit from clever stepsize choices. Homotopy-type solvers and alternating direction methods of multipliers, besides many others, are also state of the art.
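To make the baseline concrete, the following is a minimal sketch of iterative soft-thresholding for the quadratic case $F(x) = \frac{1}{2}\|Ax - y\|^2$; it is not the method proposed in this paper. The function names and the constant stepsize $\tau = 1/\|A\|_2^2$ are illustrative choices, not taken from the original text.

```python
import numpy as np

def soft_threshold(z, t):
    # Componentwise soft thresholding S_t(z) with (vector or scalar) weights t
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, w, n_iter=500):
    # Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + sum_j w_j |x_j|.
    tau = 1.0 / np.linalg.norm(A, 2) ** 2  # constant stepsize 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the quadratic part
        x = soft_threshold(x - tau * grad, tau * w)
    return x
```

For $A$ the identity, a single iteration already yields the soft-thresholded data, which illustrates why such first-order schemes are a natural reference point for the Newton-type methods discussed below.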
Other popular approaches for the solution of (1) are semismooth Newton methods [9, 52]. A semismooth Newton method and a quasi-Newton method for the minimization of (1) were proposed by Muoi et al. in the infinite-dimensional setting, inspired by previous work of Herzog and Lorenz. If $F$ is convex and smooth, it was shown e.g. in [11, 26, 39] that $\bar{x}$ is a minimizer of (1) if and only if $\bar{x}$ is a solution to the zero-finding problem $G(\bar{x}) = 0$,
$$G(x) := x - S_{\tau w}\bigl(x - \tau \nabla F(x)\bigr), \qquad (2)$$
for any fixed $\tau > 0$, where $S_{\tau w}$ denotes the componentwise soft thresholding with respect to the positive weight sequence $\tau w$ and $\nabla F$ denotes the gradient of $F$. The mapping $G$ from (2) was shown to be Newton differentiable, i.e. under a suitable assumption on $F$ there exists a family of slanting functions $G_s$ with
$$\|G(x+h) - G(x) - G_s(x+h)h\| = o(\|h\|) \quad \text{as } \|h\| \to 0.$$
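The optimality condition $G(x) = 0$ can be evaluated numerically as a residual. The sketch below assumes the soft-thresholding form of (2) reconstructed above; the function names and the choice $\tau = 1$ are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    # Componentwise soft thresholding S_t(z)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def residual_G(x, grad_F, w, tau=1.0):
    # G(x) = x - S_{tau*w}(x - tau * grad F(x)); G(x) = 0 iff x minimizes (1)
    return x - soft_threshold(x - tau * grad_F(x), tau * w)
```

For instance, with $F(x) = \frac{1}{2}\|x - y\|^2$ the minimizer of (1) is the soft-thresholded data, and the residual vanishes exactly there, which is convenient as a stopping criterion.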
Nevertheless, the above-mentioned semismooth Newton methods are only locally convergent in general. In , a semismooth Newton method with filter globalization was presented where semismooth Newton steps are combined with damped shrinkage steps. Another globalized semismooth Newton method was developed in . In loc. cit., inspired by [27, 32, 43, 45], the method was globalized in a finite-dimensional setting for the special case of a quadratic discrepancy term
$$F(x) = \tfrac{1}{2}\|Ax - y^\delta\|^2,$$
where $A$ is injective. In this setting, $G$ was shown to be Lipschitz continuous and directionally differentiable, i.e. Bouligand differentiable [17, 43, 50]. For such nonlinearities a B(ouligand)-Newton method can be defined, replacing (4) by the generalized Newton equation
$$G'(x^k; d^k) = -G(x^k). \qquad (7)$$
In , the system (7) was shown to be equivalent to a uniquely solvable mixed linear complementarity problem. By the choice (7), $d^k$ automatically is a descent direction with respect to the merit functional
$$\theta(x) := \tfrac{1}{2}\|G(x)\|^2,$$
cf. . Additionally, this Bouligand-Newton method can be interpreted as a semismooth Newton method with a specially chosen slanting function and is therefore called a B-semismooth Newton method, cf. . By introducing suitable damping parameters, the method can be shown to be globally convergent under a technical assumption on the in practice unknown accumulation point of the sequence of iterates, see also [27, 32, 43, 45]. Indeed, if the chosen Armijo stepsizes tend to zero, the merit functional $\theta$ has to fulfill a regularity condition at the accumulation point to ensure global convergence.
In this work, we present a modified, globally convergent semismooth Newton method for the minimization problem (1) in the finite-dimensional setting
$$T(x) = F(x) + \sum_{j=1}^{n} \omega_j |x_j| \to \min_{x \in \mathbb{R}^n} \qquad (10)$$
for general (not necessarily quadratic) strictly convex functionals $F$. Our work is inspired by Pang , where a globally and locally quadratically convergent modified Bouligand-Newton method was presented for the solution of variational inequalities, nonlinear complementarity problems and nonlinear programs. We take advantage of similarities between nonlinear complementarity problems and the zero-finding problem (2) to propose a similarly modified method. Starting out from [28, 40], we develop a globalized B-semismooth Newton method for general, possibly nonquadratic discrepancy functionals $F$. In order to achieve global convergence without any requirements on the a priori unknown accumulation point of the iterates, we propose a special modification of the Newton directions from (7), retaining the descent property w.r.t. $\theta$. The resulting generalized Newton equation is again shown to be equivalent to a uniquely solvable mixed linear complementarity problem. Moreover, in our proposed scheme, under a technical assumption, full Newton steps are accepted in the vicinity of the zero of $G$. As a consequence, under an additional regularity assumption, locally quadratic convergence is achieved. Additionally, the resulting modified method can be interpreted as a generalized Newton method in the sense of Han, Pang and Rangaraj . In a neighborhood of the zero of $G$, the modified method, under a technical assumption, coincides with the B-semismooth Newton method reformulated for nonquadratic $F$. If $F$ is a quadratic functional, it was shown in  that in a neighborhood of the zero, the B-semismooth Newton method finds the exact zero of $G$ within finitely many iterations.
Alternatively, one may consider other globalization strategies such as trust-region methods or path-search methods instead of the considered line-search damping strategy, see e.g. [52, 18] and the references therein. The path-search globalization strategy proposed by the authors of [47, 14] could be a promising, albeit conceptually different, alternative. These approaches go beyond the scope of this paper and are left for future work.
For the rest of the paper, we require the following assumption on the smoothness of $F$, similar to [40, Assumption 3.1, Example 3.4]. In Section 3, we will need a further assumption to prove the locally quadratic convergence of the method.
The function $F$ is twice Lipschitz-continuously differentiable and the Hessian $\nabla^2 F(x)$ is positive definite for all $x$. Moreover, there exist constants $0 < c \le C$ with
$$c\|d\|^2 \le d^\top \nabla^2 F(x)\, d \le C\|d\|^2$$
uniformly for all $x$ and all directions $d$.
The level sets of $T$ are compact.
The compactness of the level sets in the case of a quadratic functional $F(x) = \frac{1}{2}\|Ax - y^\delta\|^2$ with $A$ injective was shown in . Note that the positive definiteness of the Hessian implies strict convexity of the functional and ensures unique solvability of (10).
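For a given symmetric Hessian, the uniform bounds in Assumption 1 can be checked numerically via its extreme eigenvalues. This is only an illustrative sketch (function name and test matrix are not from the paper); in the quadratic case $\nabla^2 F = A^\top A$, the bounds are the squared extreme singular values of $A$.

```python
import numpy as np

def hessian_bounds(hess):
    # Smallest and largest eigenvalue of a symmetric Hessian: candidates for
    # the constants c, C with c*||d||^2 <= d^T H d <= C*||d||^2.
    ev = np.linalg.eigvalsh(hess)  # eigenvalues in ascending order
    return ev[0], ev[-1]
```

Positive definiteness corresponds to the returned lower bound being strictly positive.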
The paper is organized as follows. Section 2 treats the proposed B-semismooth Newton method and its modification as well as their feasibility. Section 3 addresses the global convergence and the local convergence speed of the methods. Numerical examples demonstrate the performance of the proposed algorithms in Section 4.
2 A B-semismooth Newton method and its modification
In this section, we present the algorithm of the B-semismooth Newton method from  generalized to the minimization problem (10) as well as a modified version and discuss their feasibility. Additionally, we suggest a hybrid method. We start with the modified algorithm because the generalized B-semismooth Newton method can immediately be deduced from the modified method.
2.1 A modified B-semismooth Newton method and its feasibility
In the following, we introduce a modified B-semismooth Newton method for the solution of (10). We denote the active set by , where
and the inactive set by , where
Below, we drop the argument $x$ if there is no risk of confusion.
For defined by (2), we then have
By Assumption 1, $G$ is Lipschitz continuous and directionally differentiable. The directional derivative of $G$ can easily be deduced.
The directional derivative of $G$ at $x$ in the direction $d$ is given elementwise by
To introduce the modified semismooth Newton method, we define the subsets
Inspired by , we define the modified index sets
In the following lemma, we consider a linear complementarity problem which is important for all further discussions, cf. .
Let and . The linear complementarity problem
has a unique solution.
Now we can define the generalized Newton equation for , cf. . Let and
where is the unique solution to the linear complementarity problem (28). Then, by defining the generalized derivative blockwise
the modified semismooth Newton method is given by the generalized Newton equation (33) together with the update
$$x^{k+1} = x^k + s_k d^k \qquad (34)$$
with suitably chosen damping parameters $s_k \in (0, 1]$.
In , Muoi et al. chose the slanting function
blocked according to the active and inactive sets, to define the local semismooth Newton method (4), (5). The key difference of (32) compared to (35) is the modification of the index sets. Note that the generalized derivative from (32) is not a slanting function in general because, in regions where $G$ is smooth, it does not coincide with the Fréchet derivative of $G$.
Let and . Then solves (33) if and only if
We summarize the above observations in the following lemma, cf. [28, Theorem 3.5].
Before proceeding, we prove some useful identities similar to [44, Lemma 2].
Let , the unique solution to (39) and . For , we have the following identities
Additionally, for the inequality
holds, for we have
and for we have
Equations (40), (41) and (42) immediately follow from (36) and (37). For , we have by definition and with (28) and (38) we have implying (43). For , we have by definition and with (28) and (38) we have , implying (44).
Let with and . Let be the solution to (39). Then, we have
i.e. $d$ is a descent direction of the merit functional $\theta$ at $x$.
We choose the stepsizes $s_k$ in (34) by the well-known Armijo rule
Let , . Let with and let be computed by (39). Then, there exists a finite index with
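A generic Armijo backtracking routine for a merit functional such as $\theta(x) = \frac{1}{2}\|G(x)\|^2$ can be sketched as follows. The parameter names (`beta`, `sigma`) and defaults are illustrative, not the paper's actual constants; `slope` stands for the (negative) directional derivative of the merit functional along the Newton direction.

```python
import numpy as np

def armijo_step(theta, x, d, slope, beta=0.5, sigma=1e-4, max_iter=50):
    # Backtracking: find the first l with
    #   theta(x + beta^l * d) <= theta(x) + sigma * beta^l * slope,
    # where slope < 0 approximates the directional derivative theta'(x; d).
    t0 = theta(x)
    s = 1.0
    for _ in range(max_iter):
        if theta(x + s * d) <= t0 + sigma * s * slope:
            return s
        s *= beta
    return s  # fallback after max_iter bisections
```

Because the Newton directions are descent directions w.r.t. the merit functional, such a finite backtracking loop terminates with an admissible stepsize, mirroring the lemma above.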
The algorithm of the modified B-semismooth Newton method, in the following denoted by modBSSN, is stated in Algorithm 1. The feasibility of Algorithm modBSSN is guaranteed because of the lemmata stated above.
where fulfills the assumption that is surjective for each fixed , and
2.2 A B-semismooth Newton method and its feasibility
The generalized formulation of the B-semismooth Newton method (5), (7) from  for the setting (10), in the following denoted by BSSN, is identical to Algorithm modBSSN with the modified index sets (23)–(27) replaced by the original index sets (11)–(15) in (28)–(30) and (39), cf. Algorithm 1 and . Analogously to the proofs in Section 2.1, the Newton directions can be shown to be uniquely determined, and the Armijo stepsizes are well-defined because the Newton directions are descent directions w.r.t. the merit functional $\theta$. Thus, the feasibility of Algorithm BSSN is guaranteed.
The modification of the index sets in Algorithm modBSSN is needed to prove global convergence without any additional requirements, see Section 3. Let $\bar{x}$ be the unique zero of $G$ and suppose $G$ is smooth at $\bar{x}$. Then, there exists a neighborhood of $\bar{x}$ where the index subsets (19)–(22) are empty, i.e. the modified index sets (23)–(27) match the original index sets (11)–(15). Therefore, Algorithm modBSSN locally coincides with Algorithm BSSN in a neighborhood of the zero of $G$ and hence is a semismooth Newton method there.
2.3 A globally convergent hybrid method
The B-semismooth Newton method (Algorithm BSSN) from Section 2.2 is efficient in practice because the relevant index sets are usually empty, so that the generalized Newton equation simplifies to a system of linear equations. The size of this system usually decreases in the course of the iteration. Nevertheless, the method may fail to converge, see Remark 10 and Theorem 15. In contrast, the global convergence of Algorithm modBSSN from Section 2.1 is ensured by Theorem 12, but here a mixed linear complementarity problem has to be solved in each iteration, see (39). Additionally, in order to set up the matrix and the vector from (3) and (30), systems of linear equations with the same matrix have to be solved. Note that in (36) resp. (39) no additional system of linear equations has to be solved for the computation of the Newton direction. Overall, Algorithm modBSSN is usually less efficient than Algorithm BSSN.
We suggest a hybrid method: start with Algorithm BSSN and switch to Algorithm modBSSN when Algorithm BSSN begins to stagnate, i.e. replace the original index sets (11)–(15) by the modified index sets (23)–(27) in (28)–(30) and (39). In our numerical experiments, we switch to Algorithm modBSSN if the number of Newton steps exceeds a limit and the chosen stepsize falls below a threshold. In the sequel, this hybrid method is called hybridBSSN. An overview of the proposed methods is given in Algorithm 1. Similar hybrid methods, combining the fast local convergence properties of a local semismooth Newton method with the globally convergent generalized Newton method from , were proposed by Qi  and by Ito and Kunisch .
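The stagnation test driving the switch can be sketched in a few lines. The function name and the default values for the iteration limit and stepsize threshold are placeholders; the paper's actual numerical values are not specified here.

```python
def should_switch(k, s_k, k_max=20, s_min=1e-4):
    # Switch from Algorithm BSSN to Algorithm modBSSN once the Newton
    # iteration count exceeds k_max AND the accepted Armijo stepsize s_k
    # has dropped below s_min, indicating stagnation of BSSN.
    return k > k_max and s_k < s_min
```

Requiring both conditions avoids switching prematurely when a single small stepsize occurs early in the iteration.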
3 Global convergence and local convergence speed
In this section, we consider the convergence properties of the algorithms from Section 2.
3.1 Convergence of the modified B-semismooth Newton method
In the following, we address the global convergence of Algorithm modBSSN and its convergence speed in a neighborhood of the zero of $G$. Concerning the boundedness of the sequence of Newton directions $(d^k)_k$, we cite [28, Proposition 4.6].
Let $d^k$ be the solution to (39). Then, there exists a constant independent of $k$ with
In the following theorem, we present our main result on the global convergence of Algorithm modBSSN.
Let $\bar{x}$ be an accumulation point of the sequence of iterates produced by Algorithm modBSSN. Then, we have $G(\bar{x}) = 0$.