Steepest Descent Preconditioning for Nonlinear GMRES Optimization


Abstract

Steepest descent preconditioning is considered for the recently proposed nonlinear generalized minimal residual (N-GMRES) optimization algorithm for unconstrained nonlinear optimization. Two steepest descent preconditioning variants are proposed. The first employs a line search, while the second employs a predefined small step. A simple global convergence proof is provided for the N-GMRES optimization algorithm with the first steepest descent preconditioner (with line search), under mild standard conditions on the objective function and the line search processes. Steepest descent preconditioning for N-GMRES optimization is also motivated by relating it to standard non-preconditioned GMRES for linear systems in the case of a standard quadratic optimization problem with symmetric positive definite operator. Numerical tests on a variety of model problems show that the N-GMRES optimization algorithm is able to very significantly accelerate convergence of stand-alone steepest descent optimization. Moreover, performance of steepest-descent preconditioned N-GMRES is shown to be competitive with standard nonlinear conjugate gradient and limited-memory Broyden-Fletcher-Goldfarb-Shanno methods for the model problems considered. These results serve to theoretically and numerically establish steepest-descent preconditioned N-GMRES as a general optimization method for unconstrained nonlinear optimization, with performance that appears promising compared to established techniques. In addition, it is argued that the real potential of the N-GMRES optimization framework lies in the fact that it can make use of problem-dependent nonlinear preconditioners that are more powerful than steepest descent (or, equivalently, N-GMRES can be used as a simple wrapper around any other iterative optimization process to seek acceleration of that process), and this potential is illustrated with a further application example.

Keywords: nonlinear optimization, GMRES, steepest descent

AMS subject classifications: 65K10 (Optimization), 65F08 (Preconditioners for iterative methods), 65F10 (Iterative methods)

1 Introduction

In recent work on canonical tensor approximation [3], we have proposed an algorithm that accelerates convergence of the alternating least squares (ALS) optimization method for the canonical tensor approximation problem considered there. The algorithm proceeds by linearly recombining previous iterates in a way that approximately minimizes the residual (the gradient of the objective function), using a nonlinear generalized minimal residual (GMRES) approach. The recombination step is followed by a line search step for globalization, and the resulting three-step nonlinear GMRES (N-GMRES) optimization algorithm is shown in [3] to significantly speed up the convergence of ALS for the canonical tensor approximation problem considered.

As explained in [3] (which we refer to as Paper I in what follows), for the tensor approximation problem considered there, ALS can also be interpreted as a preconditioner for the N-GMRES optimization algorithm. The question then arises what other types of preconditioners can be considered for the N-GMRES optimization algorithm proposed in Paper I, and whether there are universal preconditioning approaches that can make the N-GMRES optimization algorithm applicable to nonlinear optimization problems more generally. In the present paper, we propose such a universal preconditioning approach for the N-GMRES optimization algorithm proposed in Paper I, namely, steepest descent preconditioning. We explain how updates in the steepest descent direction can indeed naturally be used as a preconditioning process for the N-GMRES optimization algorithm. In fact, we show that steepest descent preconditioning can be seen as the most basic preconditioning process for the N-GMRES optimization method, in the sense that applying N-GMRES to a quadratic objective function with symmetric positive definite (SPD) operator corresponds mathematically to applying standard non-preconditioned GMRES for linear systems to the linear system corresponding to the quadratic objective function. We propose two variants of steepest descent preconditioning, one with a line search and one with a predefined small step. We give a simple global convergence proof for the N-GMRES optimization algorithm with our first proposed variant of steepest descent preconditioning (with line search), under standard mild conditions on the objective function and for line searches satisfying the Wolfe conditions. The second preconditioning approach, without line search, is of interest because it is more efficient in numerical tests, but there is no convergence guarantee. Numerical results for a variety of test problems demonstrate that N-GMRES optimization can significantly speed up stand-alone steepest descent optimization. We also compare steepest-descent preconditioned N-GMRES with a standard nonlinear conjugate gradient (N-CG) method and with a standard limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method for all our test problems.

We consider the following unconstrained nonlinear optimization problem with associated first-order optimality equations:

optimization problem I:

\min_{x \in \mathbb{R}^n} f(x),   (1)

first-order optimality equations I:

g(x) = \nabla f(x) = 0.   (2)

The N-GMRES optimization algorithm proposed in Paper I for accelerating ALS for canonical tensor approximation consists of three steps that can be summarized as follows. (Fig. 1 gives a schematic representation of the algorithm, and it is described in pseudo-code in Algorithm 1.) In the first step, a preliminary new iterate \bar{x}_{i+1} is generated from the last iterate x_i using a one-step iterative update process M, which can be interpreted as a preconditioning process (see Paper I and below). ALS preconditioning is used for M in Paper I. In the second step, an accelerated iterate \hat{x}_{i+1} is obtained by linearly recombining previous iterates in a window of size w, x_{i-w+1}, \ldots, x_i, using a nonlinear GMRES approach. (The details of this step will be recalled in Section 2 below.) In the third step, a line search is performed that minimizes objective function f on the half line starting at preliminary iterate \bar{x}_{i+1}, which was generated in Step I, and connecting it with accelerated iterate \hat{x}_{i+1}, which was generated in Step II, to obtain the new iterate x_{i+1}.

The second step in the N-GMRES optimization algorithm (Step II in Algorithm 1) uses the nonlinear extension of GMRES for solving nonlinear systems of equations that was proposed by Washio and Oosterlee in [18] in the context of nonlinear partial differential equation (PDE) systems (see also [11] and [12] for further applications to PDE systems). It is a nonlinear extension of the celebrated GMRES method for iteratively solving systems of linear equations [15, 14]. Washio and Oosterlee’s nonlinear extension is related to Flexible GMRES as described in [13], and is also related to the reduced rank extrapolation method [16]. An early description of this type of nonlinear iterate acceleration for solving nonlinear equation systems appears in so-called Anderson mixing, see, e.g., [5, 17]. More recent applications of these ideas to nonlinear equation systems and fixed-point problems are discussed in [5, 17]. In Paper I we formulated a nonlinear GMRES optimization algorithm for canonical tensor decomposition that uses this type of acceleration as one of its steps, combined with an ALS preconditioning step and a line search for globalization. The type of nonlinear iterate acceleration in Step II of Algorithm 1 has thus been considered several times before in the context of solving nonlinear systems of equations, but we believe that its combination with a line search to obtain a general preconditioned nonlinear optimization method as in Algorithm 1 (see Paper I) is new in the optimization context. In the present paper we show how this N-GMRES optimization approach can be applied to a broad class of sufficiently smooth nonlinear optimization problems by using steepest descent preconditioning. We establish theoretical convergence properties for this approach and demonstrate its effectiveness in numerical tests.

Algorithm 1: N-GMRES optimization algorithm (window size w)

Input: initial iterates x_0, \ldots, x_{w-1}.
repeat (for i = w-1, w, w+1, \ldots)
    Step I (generate preliminary iterate by one-step update process M):
        \bar{x}_{i+1} = M(x_i)
    Step II (generate accelerated iterate by nonlinear GMRES step):
        \hat{x}_{i+1} = gmres(\bar{x}_{i+1}; x_i, \ldots, x_{i-w+1})
    Step III (generate new iterate by line search process):
        if \hat{x}_{i+1} - \bar{x}_{i+1} is a descent direction at \bar{x}_{i+1}
            x_{i+1} = linesearch(\bar{x}_{i+1}, \hat{x}_{i+1} - \bar{x}_{i+1})
        else
            x_{i+1} = \bar{x}_{i+1}
        end
until convergence criterion satisfied

(Note that the w initial iterates required in Algorithm 1 can naturally be generated by applying the algorithm with a window size that gradually increases from one up to w, starting from a single initial guess. Also, as in [3], we perform a restart and reset the window size back to 1 whenever \hat{x}_{i+1} - \bar{x}_{i+1} is not a descent direction.)

Figure 1: Schematic representation of one iteration of the N-GMRES optimization algorithm (from [3]). Given previous iterates x_{i-w+1}, \ldots, x_{i-1}, and x_i, new iterate x_{i+1} is generated as follows. In Step I, preliminary iterate \bar{x}_{i+1} is generated by the one-step update process M: \bar{x}_{i+1} = M(x_i). In Step II, the nonlinear GMRES step, accelerated iterate \hat{x}_{i+1} is obtained by determining the coefficients \alpha_j in \hat{x}_{i+1} = \bar{x}_{i+1} + \sum_{j=i-w+1}^{i} \alpha_j (\bar{x}_{i+1} - x_j) such that the gradient of the objective function in \hat{x}_{i+1} is approximately minimized. In Step III, the new iterate, x_{i+1}, is finally generated by a line search that minimizes the objective function f.
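To make the control flow of Algorithm 1 concrete, the following Python sketch outlines one possible implementation of the three-step iteration. It is only an illustrative skeleton (the experiments in this paper use Matlab with the Poblano toolbox), and the callables precondition, ngmres_combine and line_search are hypothetical placeholders for Steps I-III, whose details are discussed in Section 2 and in [3].

```python
import numpy as np

def n_gmres(f, grad, x0, precondition, ngmres_combine, line_search,
            w=20, max_iter=100, tol=1e-8):
    """Illustrative skeleton of the three-step N-GMRES iteration (Algorithm 1).

    precondition, ngmres_combine and line_search are user-supplied callables
    (hypothetical placeholders for Steps I-III described in the text)."""
    X = [np.asarray(x0, dtype=float)]       # window of previous iterates
    G = [grad(X[0])]                        # and their gradients
    x = X[0]
    for _ in range(max_iter):
        x_bar = precondition(f, grad, x)                        # Step I
        x_hat = ngmres_combine(grad, x_bar, X[-w:], G[-w:])     # Step II
        p = x_hat - x_bar
        if grad(x_bar) @ p < 0:                                 # descent direction?
            x = line_search(f, grad, x_bar, p)                  # Step III
        else:
            x = x_bar       # restart: fall back to the preliminary iterate
            X, G = [], []   # and reset the window (as noted after Algorithm 1)
        X.append(x)
        G.append(grad(x))
        if np.linalg.norm(G[-1]) < tol:
            break
    return x
```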

The rest of this paper is structured as follows. In Section 2 we propose two types of steepest descent preconditioners for N-GMRES Optimization Algorithm 1. We briefly recall the details of the nonlinear GMRES optimization step, give a motivation and interpretation for steepest descent preconditioning that relate it to non-preconditioned GMRES for SPD linear systems, and give a simple proof for global convergence of the N-GMRES optimization algorithm using steepest descent preconditioning with line search. In Section 3 we present extensive numerical results for N-GMRES optimization with the two proposed steepest descent preconditioners, applied to a variety of nonlinear optimization problems, and compare with stand-alone steepest descent, N-CG and L-BFGS. Finally, Section 4 concludes.

2 Steepest Descent Preconditioning for N-GMRES Optimization

In this section, we first propose two variants of steepest descent preconditioning. We then briefly recall the details of the nonlinear GMRES recombination step (Step II in Algorithm 1), and relate N-GMRES optimization to standard non-preconditioned GMRES for linear systems in the case of a simple quadratic optimization problem with SPD operator. Finally, we give a simple global convergence proof for the N-GMRES optimization algorithm using steepest descent preconditioning with line search.

2.1 Steepest Descent Preconditioning Process

We propose a general steepest descent preconditioning process for Step I of N-GMRES Optimization Algorithm 1 with the following two variants:

Steepest Descent Preconditioning Process:

option A: \bar{x}_{i+1} = x_i - \beta_i \nabla f(x_i), with \beta_i obtained by a line search,   (3)
option B: \bar{x}_{i+1} = x_i - \beta_i \nabla f(x_i), with \beta_i = \min(\delta, \|\nabla f(x_i)\|).   (4)

For Option A, \beta_i is the step length obtained by a line search procedure. For definiteness, we consider a line search procedure that satisfies the Wolfe conditions (see below). We refer to the steepest descent preconditioning process with line search (3) as the sdls preconditioner. For Option B, we predefine the step length as the minimum of a small positive constant \delta and the norm of the gradient. In the numerical results presented further on in the paper we use a fixed small value of \delta, except where noted. We refer to the steepest descent preconditioning process with predefined step (4) as the sd preconditioner. These two options are quite different, and some discussion is in order.

Preconditioning process A can be employed as a stand-alone optimization method (it can converge by itself), and N-GMRES can be considered as a wrapper that accelerates this stand-alone process. We will show below that N-GMRES with preconditioning process A has strong convergence properties, but it may be expensive because the line search may require a significant number of function and gradient evaluations. However, the situation is very different for preconditioning process B. Here, no additional evaluations are required, but convergence appears questionable. It is clear that preconditioning process B cannot be used as a stand-alone optimization algorithm; in most cases it would not converge. It can, however, still be used as a preconditioning process for N-GMRES. As is well known and will be further illustrated below, preconditioners used by GMRES for linear systems do not need to be convergent by themselves, and this suggests that it may be interesting to consider this for N-GMRES optimization as well. As will be motivated further below, the role of the N-GMRES preconditioning process is to provide new 'useful' directions for the nonlinear generalization of the Krylov space, and the iteration can be driven to convergence by the N-GMRES minimization, even if the preconditioner is not convergent by itself. However, for this to happen in the three-step N-GMRES optimization algorithm with preconditioning process B, it is required that \bar{x}_{i+1} eventually approaches x_i, that is, the step length must approach 0. For this reason, we select \beta_i = \|\nabla f(x_i)\| as soon as \|\nabla f(x_i)\| \le \delta. The step length is chosen to be not larger than a small constant \delta because the linear case (see below) suggests that a small step is sufficient to provide a new direction for the Krylov space, and because the minimization of the residual is based on a linearization argument (see also below), and small steps tend to lead to small linearization errors.
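As a concrete illustration, the two preconditioning options could be implemented as follows. This is only a sketch, not the Matlab code used for the experiments; wolfe_line_search stands for any line search routine returning a step length that satisfies the Wolfe conditions (such as Moré-Thuente), and the default value of delta is an assumed example.

```python
import numpy as np

def sdls_precondition(f, grad, x, wolfe_line_search):
    """Option A (sdls): steepest descent step with a Wolfe line search."""
    d = -grad(x)                              # steepest descent direction
    beta = wolfe_line_search(f, grad, x, d)   # step length satisfying (13)-(14)
    return x + beta * d

def sd_precondition(grad, x, delta=1e-4):
    """Option B (sd): steepest descent step with predefined small step length.

    The step length is min(delta, ||grad f(x)||), so it shrinks to zero as a
    stationary point is approached.  (delta = 1e-4 is an assumed illustrative
    value, not necessarily the constant used in the paper's experiments.)"""
    g = grad(x)
    beta = min(delta, np.linalg.norm(g))
    return x - beta * g
```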

2.2 N-GMRES Recombination Step

Before relating steepest-descent preconditioned N-GMRES to non-preconditioned GMRES for linear systems, we first recall from [3] some details of the N-GMRES recombination step, Step II in Algorithm 1. In this step, we find an accelerated iterate \hat{x}_{i+1} that is obtained by recombining previous iterates as follows:

\hat{x}_{i+1} = \bar{x}_{i+1} + \sum_{j=i-w+1}^{i} \alpha_j \, (\bar{x}_{i+1} - x_j).   (5)

The unknown coefficients \alpha_j are determined by the N-GMRES algorithm in such a way that the two-norm of the gradient of the objective function evaluated at the accelerated iterate, \|g(\hat{x}_{i+1})\|_2, is small. In general, g(\hat{x}_{i+1}) is a nonlinear function of the \alpha_j, and linearization is used to allow for inexpensive computation of coefficients that may approximately minimize \|g(\hat{x}_{i+1})\|_2. Using the following approximations

g(\hat{x}_{i+1}) \approx g(\bar{x}_{i+1}) + \sum_{j=i-w+1}^{i} \alpha_j \, \frac{\partial g}{\partial x}\Big|_{\bar{x}_{i+1}} (\bar{x}_{i+1} - x_j) \approx g(\bar{x}_{i+1}) + \sum_{j=i-w+1}^{i} \alpha_j \, \big( g(\bar{x}_{i+1}) - g(x_j) \big),   (6)

one arrives at minimization problem

find coefficients \alpha_{i-w+1}, \ldots, \alpha_i that minimize
\Big\| \, g(\bar{x}_{i+1}) + \sum_{j=i-w+1}^{i} \alpha_j \, \big( g(\bar{x}_{i+1}) - g(x_j) \big) \, \Big\|_2.   (7)

This is a standard least-squares problem that can be solved, for example, by using the normal equations, as explained in [18, 3]. (In this paper, we solve the least-squares problem as described in [3].)

In a windowed implementation with window size w, the memory cost incurred by N-GMRES acceleration is the storage of w previous approximations and residuals. The dominant parts of the CPU cost for each acceleration step are the cost of building and solving the least-squares system (which requires O(wn) work for the new inner products plus the solution of a small w-by-w system if the normal equations are used and some previous inner products are stored, see [18]), and O(wn) flops to compute the accelerated iterate. For problems with expensive objective functions, this cost is often negligible compared to the cost of the function and gradient evaluations in the line searches [3].
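A minimal sketch of the recombination step is given below. It assembles the linearized residuals of (6), solves the least-squares problem (7) (here with a generic least-squares routine rather than the normal-equations approach of [18, 3]), and forms the accelerated iterate (5). The function name and interface are illustrative assumptions; combined with one of the preconditioners above and a Wolfe line search, the routine can be dropped into the skeleton of Algorithm 1 shown earlier.

```python
import numpy as np

def ngmres_combine(grad, x_bar, X_prev, G_prev):
    """N-GMRES recombination (Step II): accelerated iterate from (5)-(7).

    X_prev, G_prev hold the previous iterates x_j and their gradients g(x_j)
    in the current window; x_bar is the preliminary iterate from Step I."""
    g_bar = grad(x_bar)
    # Columns g(x_bar) - g(x_j): the linearized effect of each coefficient alpha_j.
    E = np.column_stack([g_bar - gj for gj in G_prev])
    # Minimize || g_bar + E @ alpha ||_2 over the coefficients alpha (problem (7)).
    alpha, *_ = np.linalg.lstsq(E, -g_bar, rcond=None)
    # Accelerated iterate (5): x_hat = x_bar + sum_j alpha_j (x_bar - x_j).
    D = np.column_stack([x_bar - xj for xj in X_prev])
    return x_bar + D @ alpha
```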

2.3 Motivation and Interpretation for Steepest Descent Preconditioning

Consider a standard quadratic minimization problem with objective function

f(x) = \tfrac{1}{2}\, x^T A x - b^T x,   (8)

where A is SPD. It is well-known that its unique minimizer x^* satisfies A x^* = b. Now consider applying the N-GMRES optimization algorithm with steepest descent preconditioner to the quadratic minimization problem. The gradient of f at approximation x_i is given by

g(x_i) = \nabla f(x_i) = A x_i - b = -r_i,   (9)

where r_i = b - A x_i is defined as the residual of the linear system A x = b in x_i. N-GMRES steepest descent preconditioner (3)-(4) then reduces to the form

\bar{x}_{i+1} = x_i + \beta_i \, r_i,   (10)

and it can easily be shown that this corresponds to the stationary iterative method that generates the Krylov space in non-preconditioned linear GMRES applied to A x = b. We now briefly show this because it provides further insight (recalling parts of the discussion in [18, 3]).

We first explain how preconditioned GMRES for A x = b works. Consider so-called stationary iterative methods for A x = b of the following form:

x_{i+1} = x_i + M^{-1} r_i = x_i + M^{-1} (b - A x_i).   (11)

Here, matrix M is an approximation of A that has an easily computable inverse, i.e., M \approx A. For example, M can be chosen to correspond to Gauss-Seidel or Jacobi iteration, or to a multigrid cycle [18].

Consider a sequence of iterates x_0, x_1, \ldots, x_i generated by update formula (11), starting from some initial guess x_0. Note that the residuals r_j = b - A x_j of these iterates are related as r_{j+1} = (I - A M^{-1})\, r_j. This motivates the definition of the following vector spaces:

\mathcal{K}_i = \mathrm{span}\{ r_0, (A M^{-1})\, r_0, \ldots, (A M^{-1})^{i-1} r_0 \} \quad \text{and} \quad \mathrm{span}\{ r_0, r_1, \ldots, r_{i-1} \}.

Vector space \mathcal{K}_i is the so-called Krylov space of order i, generated by matrix A M^{-1} and vector r_0. It is easy to show that these two vector spaces are equal (see, e.g., [18, 3]).

Expression (11) shows that x_i \in x_0 + M^{-1} \mathcal{K}_i. The GMRES procedure can be seen as a way to accelerate stationary iterative method (11), by recombining iterates (or, equivalently, by reusing residuals). In particular, we seek a better approximation \hat{x}_i, with \hat{x}_i - x_0 in the space M^{-1} \mathcal{K}_i, such that the residual b - A \hat{x}_i has minimal two-norm. In other words, we seek optimal coefficients \gamma_j in

\hat{x}_i = x_0 + \sum_{j=0}^{i-1} \gamma_j \, M^{-1} r_j,

and it is easy to show that this corresponds to seeking optimal coefficients \alpha_j in

\hat{x}_i = x_i + \sum_{j=0}^{i-1} \alpha_j \, (x_i - x_j)   (12)

such that \| b - A \hat{x}_i \|_2 is minimized (which leads to a small least-squares problem equivalent to (7)). Note that the operator A M^{-1} and the Krylov space \mathcal{K}_i do not easily generalize to the nonlinear case, but the image of \mathcal{K}_i under M^{-1}, which is spanned by the differences x_i - x_j, does generalize naturally and is taken as the 'generalized Krylov space' that is used to seek the accelerated approximation in the nonlinear case.
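The linear mechanism described above can be illustrated in a few lines. The sketch below (an illustration with assumed test data, not code from the paper) runs a damped Richardson iteration of the form (11) with M^{-1} = omega * I and then recombines the iterates as in (12), choosing the coefficients by a least-squares minimization of the residual norm; by construction the accelerated residual is never larger than that of the stationary iteration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0, 100.0, n)) @ Q.T   # assumed SPD test matrix
b = rng.standard_normal(n)
omega = 1.0 / np.linalg.norm(A, 2)   # damping that makes the iteration convergent for SPD A
Minv = omega * np.eye(n)             # simplest choice M^{-1} = omega * I (Richardson iteration)

# Stationary iteration (11): x_{i+1} = x_i + M^{-1} (b - A x_i).
X = [np.zeros(n)]
R = [b - A @ X[0]]
for i in range(10):
    X.append(X[i] + Minv @ R[i])
    R.append(b - A @ X[i + 1])

# GMRES-type acceleration (12): x_hat = x_i + sum_j alpha_j (x_i - x_j),
# with the alpha_j chosen to minimize || b - A x_hat ||_2.
i = len(X) - 1
D = np.column_stack([X[i] - X[j] for j in range(i)])   # spans M^{-1} K_i
alpha, *_ = np.linalg.lstsq(A @ D, R[i], rcond=None)
x_hat = X[i] + D @ alpha
print(np.linalg.norm(R[i]), np.linalg.norm(b - A @ x_hat))  # accelerated residual is smaller
```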

Up to this point, we have presented GMRES as a way to accelerate one-step stationary iterative method (11). A more customary way, however, to view GMRES is in terms of preconditioning. The approach described above reduces to 'non-preconditioned' GMRES when one sets M = I. Applying non-preconditioned GMRES to the right-preconditioned linear equation system A M^{-1} y = b (with x = M^{-1} y) also results in the expressions for preconditioned GMRES derived above. In this viewpoint, the matrix M is called the preconditioner matrix, because its role is viewed as to pre-condition the spectrum of the linear system operator A such that the (non-preconditioned) GMRES method applied to A M^{-1} becomes more effective. It is also customary to say that the stationary iterative process (11) preconditions GMRES (for example, Gauss-Seidel or Jacobi can precondition GMRES). We can summarize that the role of the stationary iterative method is to generate preconditioned residuals M^{-1} r_j that build the preconditioned Krylov space.

In the presentation above, all iterates x_j for j \le i (for instance, in the right-hand side of (12)) refer to the unaccelerated iterates generated by stationary iterative method (11). However, the formulas remain valid when accelerated iterates are used instead; this does change the values of the coefficients \alpha_j, but leads to the same accelerated iterates [18]. This is so because the Krylov spaces generated in the two cases are identical due to linearity, and consequently GMRES selects the same optimal improved iterate.

This brings us to the point where we can compare steepest-descent preconditioned N-GMRES applied to quadratic objective function (8) with SPD operator A, to non-preconditioned linear GMRES applied to A x = b. Assume we have previous iterates x_0, \ldots, x_i and residuals r_0, \ldots, r_i. Stationary iterative process (11) without preconditioner (M = I) would add a vector to the Krylov space that has the same direction as the vector added to it by the steepest descent preconditioning process (10). This means that the accelerated iterate produced by N-GMRES with steepest descent preconditioner applied to quadratic objective function (8) with SPD operator A is the same as the accelerated iterate produced by linear GMRES with identity preconditioner applied to A x = b. This motivates our proposal to use steepest descent preconditioning as the natural and most basic preconditioning process for the N-GMRES optimization algorithm applied to general nonlinear optimization problems.

Note that, in the case of linear systems, the efficiency of GMRES as an acceleration technique for stationary iterative methods can be understood in terms of how optimal polynomials can damp modes that are slow to converge [18, 14]. In the case of N-GMRES for nonlinear optimization, if the approximation is close to a stationary point and the nonlinear residual vector function g(x) can be approximated well by linearization, then it can be expected that the use of the subspace spanned by the differences \bar{x}_{i+1} - x_j for acceleration may give efficiency similar to the linear case [18]. Note finally that the above also explains why a small step is allowed in the preconditioner of (4) (basically, in the linear case, the size of the coefficient \beta_i does not matter for the Krylov space), and the linearization argument of (6) indicates that a small step may be beneficial.

2.4 Convergence Theory for N-GMRES Optimization with Steepest Descent Preconditioning

We now formulate and prove a convergence theorem for N-GMRES Optimization Algorithm 1 using steepest descent preconditioning with line search (3). We assume that all line searches provide step lengths that satisfy the Wolfe conditions [10]:

sufficient decrease condition:
f(x_k + \alpha_k p_k) \le f(x_k) + c_1 \alpha_k \nabla f(x_k)^T p_k,   (13)
curvature condition:
\nabla f(x_k + \alpha_k p_k)^T p_k \ge c_2 \nabla f(x_k)^T p_k,   (14)

with 0 < c_1 < c_2 < 1. Condition (13) ensures that large steps are taken only if they lead to a proportionally large decrease in f. Condition (14) ensures that a step is taken that is large enough to sufficiently increase the gradient of f in the line search direction (make it less negative). Global convergence (in the sense of convergence to a stationary point from any initial guess) can then be proved easily using standard approaches [6, 10].
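For reference, the two conditions are easy to check for a candidate step length. The helper below is an illustrative sketch (not part of the paper's code); the default values of c1 and c2 are assumed examples satisfying 0 < c1 < c2 < 1.

```python
import numpy as np

def satisfies_wolfe(f, grad, x, p, alpha, c1=1e-4, c2=0.9):
    """Check the Wolfe conditions (13)-(14) for step length alpha along
    descent direction p at x.  c1 and c2 are assumed illustrative values."""
    g0_p = grad(x) @ p                   # directional derivative at x (negative for descent)
    x_new = x + alpha * p
    sufficient_decrease = f(x_new) <= f(x) + c1 * alpha * g0_p    # condition (13)
    curvature = grad(x_new) @ p >= c2 * g0_p                      # condition (14)
    return sufficient_decrease and curvature
```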
Theorem 2.4 (Global convergence of the N-GMRES optimization algorithm with steepest descent line search preconditioning). Consider N-GMRES Optimization Algorithm 1 with steepest descent line search preconditioning (3) for Optimization Problem I, and assume that all line search solutions satisfy the Wolfe conditions, (13) and (14). Assume that objective function f is bounded below in \mathbb{R}^n and that f is continuously differentiable in an open set \mathcal{N} containing the level set \mathcal{L} = \{ x : f(x) \le f(x_0) \}, where x_0 is the starting point of the iteration. Assume also that the gradient \nabla f is Lipschitz continuous on \mathcal{N}, that is, there exists a constant L > 0 such that \| \nabla f(x) - \nabla f(\tilde{x}) \| \le L \| x - \tilde{x} \| for all x, \tilde{x} \in \mathcal{N}. Then the sequence of N-GMRES iterates \{x_i\} is convergent to a fixed point of Optimization Problem I in the sense that

\lim_{i \to \infty} \| \nabla f(x_i) \| = 0.   (15)
Proof. Consider the sequence \{z_k\} formed by the iterates x_0, \bar{x}_1, x_1, \bar{x}_2, x_2, \ldots of Algorithm 1, but with x_{i+1} removed if \hat{x}_{i+1} - \bar{x}_{i+1} is not a descent direction in Step III of the algorithm. Then all iterates are of the form z_{k+1} = z_k + \alpha_k p_k, with p_k a descent direction and \alpha_k such that the Wolfe conditions are satisfied. According to Theorem 3.2 of [10] (p. 38, Zoutendijk's theorem), we have that

\sum_{k \ge 0} \cos^2 \theta_k \, \| \nabla f(z_k) \|^2 < \infty,   (16)

with

\cos \theta_k = \frac{ - \nabla f(z_k)^T p_k }{ \| \nabla f(z_k) \| \, \| p_k \| },   (17)

which implies that

\lim_{k \to \infty} \cos^2 \theta_k \, \| \nabla f(z_k) \|^2 = 0.   (18)

Consider the subsequence of \{z_k\} formed by the iterates x_i. Since all the x_i are followed by a steepest descent step in the algorithm, the \cos \theta_k corresponding to all the elements of this subsequence satisfy \cos \theta_k = 1. Therefore, it follows from (18) that \lim_{i \to \infty} \| \nabla f(x_i) \| = 0, which concludes the proof.

Note that the notion of convergence (15) we prove in Theorem 2.4 for N-GMRES optimization with steepest descent line search preconditioning is stronger than the type of convergence that can be proved for some N-CG methods [6, 10], namely,

\liminf_{k \to \infty} \| \nabla f(x_k) \| = 0.   (19)

Also, it appears that, in the proof of Theorem 2.4, we cannot guarantee that the sequence \{ \| \nabla f(z_k) \| \} over the full combined sequence converges to 0. We know that the sequence \{ f(z_k) \} converges to a value since it is nonincreasing and bounded below, but it appears that the properties of the line searches do not guarantee that the sequence \{ \| \nabla f(z_k) \| \} converges to 0. They do guarantee that the subsequence \{ \| \nabla f(x_i) \| \} converges to 0, but it cannot be ruled out that, as the f(x_i) approach their limiting value and the \| \nabla f(x_i) \| approach 0, large steps with very small decrease in f may still be made from each x_i to the next \bar{x}_{i+1} (large steps with small decrease are allowed in this case since the x_i approach stationarity), while, at the same time, large steps with very small decrease in f may be made from the \bar{x}_{i+1} to the next x_{i+1} (large steps with small decrease are allowed in this case if the search direction from \bar{x}_{i+1} is such that \cos \theta_k is very close to 0). These large steps may in principle preclude \{ \| \nabla f(\bar{x}_{i+1}) \| \} from converging to 0 (but we do not observe such pathological cases in our numerical tests). Nevertheless, we are able to prove the strong convergence result (15) for the iterates of N-GMRES optimization with steepest descent line search preconditioning: the sequence \{ \| \nabla f(x_i) \| \} converges to 0.

3 Numerical Results

We now present extensive numerical results for the N-GMRES optimization algorithm with steepest descent preconditioners (3) and (4), compared with stand-alone steepest descent optimization, N-CG and L-BFGS.

In all tests, we utilize the Moré-Thuente line search method [8] and the N-CG and L-BFGS optimization methods as implemented in the Poblano toolbox for Matlab [4]. For all experiments, the Moré-Thuente line search parameters used were as follows: a function value tolerance c_1 for (13), a gradient norm tolerance c_2 for (14), a fixed starting search step length, and a maximum of 20 function and gradient evaluations per line search. These values were also used for the N-CG and L-BFGS comparison runs. We use the N-CG variant with the Polak-Ribière update formula, and the two-loop recursion version of L-BFGS [10]. We normally choose the N-GMRES window size w equal to 20, which is confirmed to be a good choice in numerical tests described below. The L-BFGS window size is chosen equal to 5 (we found that larger window sizes tend to harm L-BFGS performance for the tests we considered). All initial guesses are determined uniformly randomly with components drawn from a fixed interval, and when we compare different methods they are given the same random initial guess. All numerical tests were run on a laptop with a dual-core 2.53 GHz Intel Core i5 processor and 4 GB of 1067 MHz DDR3 memory. Matlab version 7.11.0.584 (R2010b) 64-bit (maci64) was used for all tests.

3.1 Test Problem Description

We first describe the seven test problems we consider. In what follows, all vectors are chosen in \mathbb{R}^n, and all matrices in \mathbb{R}^{n \times n}.

Problem A. (Quadratic objective function with SPD diagonal matrix.)

(20)

This problem has a unique minimizer, since the diagonal matrix D is symmetric positive definite, and the difficulty of the problem for gradient-based methods is governed by the condition number of D. It is well-known that for problems of this type large condition numbers tend to lead to slow convergence of the steepest descent method due to a zig-zag effect. Problem A can be used to show how methods like N-CG and N-GMRES improve over steepest descent and mitigate this zig-zag effect.
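For illustration, a quadratic test problem of this type can be set up as follows. The diagonal used here (entries equally spaced between 1 and a target condition number kappa) is an assumed example and not necessarily the choice made for the experiments in this paper.

```python
import numpy as np

def make_problem_A(n=100, kappa=100.0):
    """Illustrative quadratic f(x) = 0.5 * x^T D x with SPD diagonal D.

    The diagonal entries are equally spaced in [1, kappa], an assumed choice
    giving condition number kappa; large kappa makes steepest descent zig-zag."""
    d = np.linspace(1.0, kappa, n)
    f = lambda x: 0.5 * np.dot(d * x, x)
    grad = lambda x: d * x
    return f, grad

f, grad = make_problem_A()
x0 = np.random.default_rng(0).uniform(size=100)
print(f(x0), np.linalg.norm(grad(x0)))
```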

Problem B. (Problem A with paraboloid coordinate transformation.)

(21)

This modification of Problem A still has a unique minimizer, and the gradient of f follows from that of Problem A by the chain rule. The paraboloid coordinate transformation increases nonlinearity (the objective function is now quartic in the components of x) and changes the level surfaces from ellipsoids into parabolically skewed ellipsoids. As such, the problem is more difficult for nonlinear optimization methods. In two dimensions, the level curves are modified from elliptic to 'banana-shaped'. In fact, the objective function of Problem B is a multi-dimensional generalization of Rosenbrock's 'banana' function.

Problem C. (Problem B with a random non-diagonal matrix.)

(22)

This modification of Problem B still has a unique minimizer, and the gradient of f again follows by the chain rule. The random matrix is the orthogonal factor obtained from a QR-factorization of a random matrix with elements drawn uniformly from a fixed interval. This modification of Problem B introduces nonlinear 'mixing' of the coordinates (cross-terms) and further increases the difficulty of the problem.

Figure 2: Problem A. Convergence histories of the 10-logarithms of the objective function and the gradient norm as a function of iterations and function and gradient evaluations. N-GMRES-sdls is the N-GMRES optimization algorithm using steepest descent preconditioning with line search, N-GMRES-sd is the N-GMRES optimization algorithm using steepest descent preconditioning with predefined step, N-CG is the Polak-Ribière nonlinear conjugate gradient method, L-BFGS is the limited-memory Broyden-Fletcher-Goldfarb-Shanno method, and sdls is the stand-alone steepest descent method with line search.
Figure 3: Problem A. Effect of varying the window size w on objective function and gradient norm convergence for N-GMRES-sdls and N-GMRES-sd optimization as a function of function and gradient evaluations. Window size w = 20 emerges as a suitable choice, leading to rapid convergence. These results give some general indication that, if sufficient memory is available, w = 20 may be a good choice. However, if memory is scarce, much smaller window sizes already provide good results, especially for N-GMRES-sd.

Problem D. (Extended Rosenbrock function, problem (21) from [9].)

Note that the gradient can easily be computed analytically for this problem.
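For reference, the extended Rosenbrock function of [9] and its analytic gradient can be written as follows (n even); this is the standard formulation of problem (21) in [9], shown here only as an illustration of the test problem's structure.

```python
import numpy as np

def ext_rosenbrock(x):
    """Extended Rosenbrock objective (problem (21) in [9]); n must be even.
    f(x) = sum_i [ 100 (x_{2i} - x_{2i-1}^2)^2 + (1 - x_{2i-1})^2 ],
    with minimum value 0 at x = (1, ..., 1)."""
    xo, xe = x[0::2], x[1::2]                  # odd- and even-indexed components
    return np.sum(100.0 * (xe - xo**2) ** 2 + (1.0 - xo) ** 2)

def ext_rosenbrock_grad(x):
    """Analytic gradient of the extended Rosenbrock objective."""
    xo, xe = x[0::2], x[1::2]
    g = np.zeros_like(x)
    g[0::2] = -400.0 * xo * (xe - xo**2) - 2.0 * (1.0 - xo)
    g[1::2] = 200.0 * (xe - xo**2)
    return g
```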

Problem E. (Brown almost-linear function, problem (27) from [9].)

Problem F. (Trigonometric function, problem (26) from [9].)

Problem G. (Penalty function I, problem (23) from [9].)

3.2 Numerical Results for Problems A–C

Figure 4: Problem B. Convergence comparison.
Figure 5: Problem C. Convergence comparison.

We first present some convergence plots for instances of Problems A–C. Fig. 2 shows results for an instance of Problem A. We see that stand-alone steepest descent with line search (sdls) converges slowly, which is expected because the condition number of matrix D is large. Both N-GMRES optimization using steepest descent preconditioning with line search (3) (N-GMRES-sdls) and N-GMRES optimization using steepest descent preconditioning with predefined step (4) (N-GMRES-sd) are significantly faster than stand-alone sdls, in terms of both iterations and function and gradient evaluations, confirming that the N-GMRES acceleration mechanism is effective and that steepest descent is an effective preconditioner for it. As could be expected, the preconditioning line searches of N-GMRES-sdls add significantly to its evaluation cost, and N-GMRES-sd is more efficient. N-GMRES accelerates steepest descent up to a point where performance becomes competitive with N-CG and L-BFGS. It is important to note that convergence profiles like the ones presented in Fig. 2 tend to show significant variation depending on the random initial guess. The instances presented are arbitrary and not hand-picked with a special purpose in mind (they simply correspond to seed 0 in our Matlab code), and we show them because they provide interesting illustrations and show patterns that we have verified to be quite general over many random instances. However, they cannot reliably be used to draw conclusions on the detailed relative performance of the various methods. For this purpose, we provide tables below that compare performance averaged over a set of random trials.

Fig. 3 shows the effect of varying the window size w on objective function and gradient norm convergence for N-GMRES-sdls and N-GMRES-sd optimization as a function of function and gradient evaluations, for an instance of Problem A. Window size w = 20 emerges as a suitable choice if sufficient memory is available, leading to rapid convergence. However, much smaller window sizes already provide good results, especially for N-GMRES-sd. This indicates that satisfactory results can be obtained with small windows, which may be useful if memory is scarce. We use window size w = 20 for all numerical results in this paper.

Fig. 4 shows results for an instance of Problem B, which is a modification of Problem A introducing more nonlinearity, and Fig. 5 shows results for the even more difficult Problem C, with random nonlinear mixing of the coordinate directions. Both figures show that stand-alone sdls is very slow, and confirm that N-GMRES-sdls and N-GMRES-sd significantly speed up steepest descent. For Problem B, N-GMRES-sdls, N-GMRES-sd, N-CG and L-BFGS perform similarly, but for the more difficult Problem C N-GMRES-sdls, N-GMRES-sd and L-BFGS perform much better than N-CG.

problem       N-GMRES-sdls   N-GMRES-sd   N-CG        L-BFGS
A, n = 100    242            111          84          73
A, n = 200    406            171          127         104
B, n = 100    1200           395          198         170
B, n = 200    1338           752          606         321
C, n = 100    926 (1)        443          13156 (7)   151
C, n = 200    1447           461          26861 (9)   204
Table 1: Average number of function and gradient evaluations needed to reach the convergence tolerance for 10 instances of Problems A–C with random initial guesses and different problem sizes n. Numbers in brackets give the number of random trials (out of 10) that did not converge to the required tolerance within 1500 iterations (if any).
Figure 6: Problem D. Convergence comparison.

Table 1 confirms the trends that were already present in the specific instances of test problems A–C that were shown in Figures 2, 4 and 5. The table gives the average number of function and gradient evaluations that were needed to reach the convergence tolerance for 10 random instances of Problems A–C with different problem sizes. For Problems A and B, N-GMRES-sdls and N-GMRES-sd consistently give evaluation counts that are of the same order of magnitude as N-CG, and N-GMRES-sd comes close to being competitive with N-CG. L-BFGS is the fastest method for all problems in Table 1. For the more difficult Problem C, N-GMRES-sdls, N-GMRES-sd and L-BFGS are all significantly faster than N-CG, which appears to have convergence difficulties for this problem. N-GMRES-sd is clearly faster than N-GMRES-sdls for all tests.

3.3 Numerical Results for Problems D–G

Figure 6 gives convergence plots for a single instance of Problem D. It confirms the observations from Figures 2, 4 and 5: for this standard test problem from [9], stand-alone sdls is again very slow, and N-GMRES-sdls and N-GMRES-sd significantly speed up steepest descent convergence. N-GMRES-sdls and N-GMRES-sd have iteration and evaluation counts that are of the same order of magnitude as N-CG and L-BFGS, and N-GMRES-sd in particular is competitive with N-CG and L-BFGS. Convergence plots for instances of Problems E–G show similar behaviour and are not presented.

problem       N-GMRES-sdls   N-GMRES-sd   N-CG   L-BFGS
D, n = 500    525            172          222    166
D, n = 1000   445            211          223    170
E, n = 100    294            259          243    358
E, n = 200    317            243          240    394
F, n = 200    140            102 (1)      102    92
F, n = 500    206 (1)        175 (1)      135    118
G, n = 100    1008 (2)       152          181    358
G, n = 200    629 (1)        181          137    240
Table 2: Average number of function and gradient evaluations needed to reach the convergence tolerance for 10 instances of Problems D–G with random initial guesses and different problem sizes n. Numbers in brackets give the number of random trials (out of 10) that did not converge to the required tolerance within 500 iterations (if any).

Table 2, on evaluation counts for Problems D–G, again confirms the trends that were observed before. N-GMRES-sdls and N-GMRES-sd give evaluation counts that are of the same order of magnitude as N-CG and L-BFGS, and N-GMRES-sd in particular is competitive with N-CG and L-BFGS.

4 Conclusion

Figure 7: Convergence histories, on a 10-logarithmic scale, as a function of function and gradient evaluations, for the canonical tensor approximation problem of Figures 1.2 and 1.3 in [3]. Panel (a) shows that stand-alone sdls is very slow for this problem, and that N-GMRES-sdls and N-GMRES-sd significantly speed up steepest descent. However, for this difficult problem, it is beneficial to use a more powerful nonlinear preconditioner: using the ALS preconditioner in stand-alone fashion already provides faster convergence than N-GMRES-sdls and N-GMRES-sd. The zoomed view in Panel (b) shows that N-CG and L-BFGS are faster than stand-alone ALS when high accuracy is required, but N-GMRES preconditioned with the powerful ALS preconditioner is the fastest method by far, beating N-CG and L-BFGS by a factor of 2 to 3. This illustrates that the real power of the N-GMRES optimization algorithm may lie in its ability to employ powerful problem-dependent nonlinear preconditioners (ALS in this case).

In this paper, we have proposed and studied steepest descent preconditioning as a universal preconditioning approach for the N-GMRES optimization algorithm that we recently introduced in the context of a canonical tensor approximation problem and ALS preconditioning [3] (Paper I). We have considered two steepest descent preconditioning process variants, one with a line search, and the other one with a predefined step length. The first variant is significant because we showed that it leads to a globally convergent optimization method, but the second variant proved more efficient in numerical tests, with no apparent degradation in convergence robustness. Numerical tests showed that the two steepest-descent preconditioned N-GMRES methods both speed up stand-alone steepest descent optimization very significantly, and are competitive with standard N-CG and L-BFGS methods, for a variety of test problems. These results serve to theoretically and numerically establish steepest-descent preconditioned N-GMRES as a general optimization method for unconstrained nonlinear optimization, with performance that appears promising compared to established techniques.

However, we would like to argue that the real potential of the N-GMRES optimization framework lies in the fact that it can use problem-dependent nonlinear preconditioners that are more powerful than steepest descent. Preconditioning of N-CG in the form of (linear) variable transformations is an area of active research [7]. However, it is interesting to note that our N-GMRES optimization framework naturally allows for a more general type of preconditioning: any nonlinear optimization process can potentially be used as a nonlinear preconditioner in the framework, or, equivalently, N-GMRES can be used as a simple wrapper around any other iterative optimization process to seek acceleration of that process. This can be illustrated with the following example, in which we first apply N-GMRES with the steepest descent preconditioners proposed in this paper to a canonical tensor approximation problem from [3]. (In particular, we consider the canonical tensor approximation problem of Figures 1.2 and 1.3 in [3], in which a rank-three canonical tensor approximation (with 450 variables) is sought for a three-way data tensor.) Panel (a) of Fig. 7 shows how stand-alone steepest descent (sdls) is very slow for this problem: it requires more than 30,000 function and gradient evaluations. (The tensor calculations are performed in Matlab using the Tensor Toolbox [2]. For this problem, we use a different value of \delta in (4).) The N-GMRES-sdls and N-GMRES-sd convergence profiles confirm once more one of the main messages of this paper: steepest-descent preconditioned N-GMRES speeds up stand-alone steepest descent very significantly. However, steepest descent preconditioning (which we have argued is in some sense equivalent to non-preconditioned GMRES for linear systems) is not powerful enough for this difficult problem, and a more advanced preconditioner is required. Indeed, Panel (a) of Fig. 7 shows that the stand-alone ALS process is already more efficient than steepest-descent preconditioned N-GMRES. Panel (b) indicates, however, that N-GMRES preconditioned by ALS is a very effective method for this problem: it speeds up ALS very significantly, and is much faster than N-CG and L-BFGS, by a factor of 2 to 3. (Panel (b) of Fig. 7 illustrates the findings from extensive tests comparing ALS, N-CG and ALS-preconditioned N-GMRES that were reported in Paper I and [1].)

In the case of GMRES for linear systems, non-preconditioned GMRES (or: GMRES with the identity preconditioner) is often just a starting point. For many difficult problems it converges too slowly, and there is a very extensive and ever expanding research literature on developing advanced problem-dependent preconditioners that in many cases speed up convergence very significantly. In the same way, the present paper is likely not more than a starting point in theoretically and numerically establishing the N-GMRES optimization method with general steepest descent preconditioning process. As the results shown in Fig. 7 already indicate, we expect that the real power of the N-GMRES optimization framework will turn out to lie in its ability to use powerful problem-dependent nonlinear preconditioners. This suggests that further exploring N-GMRES optimization with advanced preconditioners may lead to efficient numerical methods for a variety of nonlinear optimization problems.

Acknowledgments

This work was sponsored by the Natural Sciences and Engineering Research Council of Canada and by Lawrence Livermore National Laboratory under subcontract B594099. The research was conducted during a sabbatical visit at the Algorithms and Complexity Department of the Max Planck Institute for Informatics in Saarbruecken, whose hospitality is gratefully acknowledged.

Footnotes

  1. Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario, Canada
  2. hdesterck@uwaterloo.ca

References

  1. E. Acar, D.M. Dunlavy, and T.G. Kolda, A Scalable Optimization Approach for Fitting Canonical Tensor Decompositions, Journal of Chemometrics, 25 (2011), pp. 67–86.
  2. B.W. Bader and T.G. Kolda, MATLAB Tensor Toolbox Version 2.4, http://csmr.ca.sandia.gov/~tgkolda/TensorToolbox/, March 2010.
  3. H. De Sterck, A Nonlinear GMRES Optimization Algorithm for Canonical Tensor Decomposition, submitted to SIAM J. Sci. Comp., 2011, arXiv:1105.5331.
  4. D.M. Dunlavy, T.G. Kolda, and E. Acar, Poblano v1.0: A Matlab Toolbox for Gradient-Based Optimization, Technical Report SAND2010-1422, Sandia National Laboratories, Albuquerque, NM and Livermore, CA, March 2010.
  5. H. Fang and Y. Saad, Two classes of multisecant methods for nonlinear acceleration, Numerical Linear Algebra with Applications, 16 (2009), pp. 197–221.
  6. J.C. Gilbert and J. Nocedal, Global Convergence Properties of Conjugate Gradient Methods for Optimization, SIAM J. Optim., 2 (1992), pp. 21–42.
  7. W.W. Hager and H. Zhang, A Survey of Nonlinear Conjugate Gradient Methods, Pacific Journal of Optimization, 2 (2006), pp. 35–58.
  8. J.J. Moré and D.J. Thuente, Line search algorithms with guaranteed sufficient decrease, ACM Transactions on Mathematical Software, 20 (1994), pp. 286–307.
  9. J.J. Moré, B.S. Garbow, and K.E. Hillstrom, Testing Unconstrained Optimization Software, ACM Trans. Math. Softw., 7 (1981), pp. 17–41.
  10. J. Nocedal and S.J. Wright, Numerical optimization, Second Edition, Springer, Berlin, 2006.
  11. C.W. Oosterlee, On multigrid for linear complementarity problems with application to American-style options, Electronic Transactions on Numerical Analysis, 15 (2003), pp. 165–185.
  12. C.W. Oosterlee and T. Washio, Krylov Subspace Acceleration of Nonlinear Multigrid with Application to Recirculating Flows, SIAM J. Sci. Comput., 21 (2000), pp. 1670–1690.
  13. Y. Saad, A flexible inner-outer preconditioned GMRES algorithm, SIAM J. Sci. Comp., 14 (1993), pp. 461–469.
  14. Y. Saad, Iterative Methods for Sparse Linear Systems, Second Edition, SIAM, Philadelphia, 2003.
  15. Y. Saad and M.H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Comp., 7 (1986), pp. 856–869.
  16. D.A. Smith, W.F. Ford, and A. Sidi, Extrapolation methods for vector sequences, SIAM Rev., 29 (1987), pp. 199–234.
  17. H. Walker and P. Ni, Anderson acceleration for fixed-point iterations, to appear in SIAM J. Numer. Anal (2011).
  18. T. Washio and C.W. Oosterlee, Krylov subspace acceleration for nonlinear multigrid schemes, Electronic Transactions on Numerical Analysis, 6 (1997), pp. 271–290.