A proof that Anderson acceleration increases the convergence rate in linearly converging fixed point methods (but not in quadratically converging ones)
This paper provides the first proof that Anderson acceleration (AA) increases the convergence rate of general fixed point iterations. AA has been used for decades to speed up nonlinear solvers in many applications; however, a rigorous mathematical justification of the improved convergence rate has remained lacking. The key ideas of the proof are relating errors to residuals, using results arising from the optimization, and explicitly defining the gain in the optimization stage to be the ratio of improvement over a step of the unaccelerated fixed point iteration. The main result we prove is that AA improves the convergence rate of a fixed point iteration to first order by a factor of the gain at each step. In addition to improving the convergence rate, our results also show that AA increases the radius of convergence (even beyond a set where the fixed point operator is contractive). Lastly, our estimate shows that while the linear convergence rate is improved, additional quadratic terms arise in the estimate, which explains why AA does not typically improve convergence in quadratically converging fixed point iterations. Results of several numerical tests are given which illustrate the theory.
We study an acceleration technique for fixed point problems called Anderson acceleration, in which a history of search directions is used to increase the rate of convergence of fixed-point iterations. The method was originally introduced by D.G. Anderson in 1965 in the context of integral equations. It has recently been used in many applications, including multisecant methods for fixed-point iterations in electronic structure computations, geometry optimization problems, various types of flow problems [10, 12], radiation diffusion and nuclear physics [1, 15], molecular interaction, machine learning, improving the alternating projections method for computing nearest correlation matrices, and a wide range of nonlinear problems in the context of generalized minimal residual (GMRES) methods. We further refer readers to [7, 9, 10, 16] and references therein for detailed discussions of both practical implementation and the history of the method and its applications.
Despite a long history of use and strong recent interest, the first mathematical convergence results for Anderson acceleration (for both linear and nonlinear problems) appeared in 2015, under the usual local assumptions for convergence of Newton iterations. However, this theory does not prove that Anderson acceleration improves the convergence of a fixed point iteration, or in other words that it accelerates convergence in the usual sense. Rather, it proves that Anderson accelerated fixed point iterations converge in a neighborhood of a fixed point, and an upper bound on the convergence rate is shown to approach, from above, the convergence rate of the underlying fixed point iteration. While an important stage in the developing theory, this does not explain the efficacy of the method, which has gained popularity as practitioners have continued to observe a dramatic speedup and increase in robustness from Anderson acceleration over a wide range of problems.
The purpose of this paper is to address this gap in the theory by proving a rigorous estimate for Anderson acceleration that shows a guaranteed increase in the convergence rate for fixed point iterations (for general functions) that converge linearly with rate $\kappa < 1$. By explicitly defining the gain $\theta_k$ of the optimization stage at iteration $k$ to be the ratio of the optimized objective function to that of the usual fixed point method, we prove the new convergence rate at step $k$ is $\theta_k\big((1-\beta_{k-1}) + \kappa\beta_{k-1}\big)$, where $\beta_{k-1}$ is a damping parameter and $\beta_{k-1}=1$ produces the undamped iteration. The key ideas of the proof are an expansion of the residual errors, developing expressions relating the errors and residuals, and explicitly factoring in the gain from the optimization stage. A somewhat similar approach was used by the authors to prove that Anderson acceleration speeds up the convergence of the Picard iteration for finite element discretizations of the steady Navier-Stokes equations (without the contraction assumption on the fixed-point operator), and herein we extend these ideas to general fixed point iterations.
In addition to the improved linear convergence rate, our analysis also indicates that Anderson acceleration introduces quadratic error terms, which is consistent with known results that Anderson acceleration does not accelerate quadratically converging fixed point methods (see the numerical experiments section below); these quadratic terms form a barrier that theoretically prevents proving an improved convergence rate for fixed-point iterations that already converge quadratically. A third important result we show is that both Anderson acceleration and the use of damping can extend the radius of convergence of the method, i.e. Anderson acceleration can allow the iteration to converge even outside the domain where the fixed point function is contractive.
This paper is arranged as follows. In Section 2, we review Anderson acceleration, describe the problem setting, and give some basic definitions and notation. Section 3 gives several important technical results, to make the later analysis cleaner and simpler. Section 4 gives the main result of the paper, proving that the linear convergence rate is reduced by Anderson acceleration, but additional quadratic error terms arise. Section 5 gives results from numerical tests, with the intent of illustrating the theory proven herein. Conclusions are given in the final section.
2 Anderson acceleration
In what follows, we will consider a fixed-point operator $g: H \rightarrow H$, where $H$ is a Hilbert space with norm $\|\cdot\|$ and inner product $(\cdot,\cdot)$. The Anderson acceleration algorithm with depth $m$ applied to the fixed-point problem $x = g(x)$ reads as follows.
Algorithm 2.1 (Anderson iteration).
The Anderson acceleration with depth $m \ge 1$ and damping factors $0 < \beta_k \le 1$.
Step 0: Choose $x_0 \in H$.
Step 1: Find $w_1 \in H$ such that $w_1 = g(x_0) - x_0$. Set $x_1 = x_0 + \beta_0 w_1$.
Step $k$: For $k = 2, 3, \ldots$, set $m_k = \min\{k-1, m\}$.
[a.] Find $w_k = g(x_{k-1}) - x_{k-1}$.
[b.] Solve the minimization problem for the coefficients $\{\alpha_j^k\}_{j=k-m_k}^{k}$:
$\min \Big\| \sum_{j=k-m_k}^{k} \alpha_j^k w_j \Big\| \quad \text{subject to} \quad \sum_{j=k-m_k}^{k} \alpha_j^k = 1. \qquad (2.1)$
[c.] For damping factor $0 < \beta_k \le 1$, set
$x_k = \sum_{j=k-m_k}^{k} \alpha_j^k x_{j-1} + \beta_k \sum_{j=k-m_k}^{k} \alpha_j^k w_j. \qquad (2.2)$
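For concreteness, the iteration above can be sketched in code. The following is a minimal NumPy sketch, assuming the $l^2$ norm, a constant damping factor, and the constrained problem (2.1) solved through an equivalent unconstrained least-squares formulation over consecutive residual differences; the function names and defaults are illustrative only, not part of the paper.

```python
import numpy as np

def anderson(g, x0, m=2, beta=1.0, tol=1e-10, max_iter=50):
    """Minimal sketch of Algorithm 2.1 (l2 norm, constant damping beta)."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    X = [x]                      # iterates x_0, x_1, ...
    W = []                       # residuals w_k = g(x_{k-1}) - x_{k-1}
    for k in range(1, max_iter + 1):
        w = g(X[-1]) - X[-1]     # step [a.]
        W.append(w)
        if np.linalg.norm(w) < tol:
            break
        mk = min(k - 1, m)
        if mk == 0:              # Step 1: plain damped fixed-point update
            x_new = X[-1] + beta * w
        else:
            # step [b.]: min || sum_j alpha_j w_j ||, sum_j alpha_j = 1, solved
            # unconstrained: gamma minimizes || w_k - F gamma ||, with F's
            # columns the consecutive residual differences w_j - w_{j-1}.
            F = np.column_stack([W[-j] - W[-j - 1] for j in range(1, mk + 1)])
            gamma, *_ = np.linalg.lstsq(F, w, rcond=None)
            # recover the constrained coefficients alpha from gamma;
            # alpha[0] multiplies w_k, alpha[mk] multiplies w_{k-mk}
            alpha = np.zeros(mk + 1)
            alpha[0] = 1.0 - gamma[0]
            for j in range(1, mk):
                alpha[j] = gamma[j - 1] - gamma[j]
            alpha[mk] = gamma[mk - 1]
            # step [c.]: x_k = sum_j alpha_j x_{j-1} + beta * sum_j alpha_j w_j
            xs = [X[-1 - j] for j in range(mk + 1)]
            ws = [W[-1 - j] for j in range(mk + 1)]
            x_new = sum(a * (xi + beta * wi) for a, xi, wi in zip(alpha, xs, ws))
        X.append(x_new)
    return X[-1], k
```

On an affine problem the optimization is solved exactly, so the sketch recovers the fixed point of, e.g., $g(x) = 0.5x + 1$ after a single accelerated step.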
We will use throughout this work the stage-$k$ residual and error terms
$w_k = g(x_{k-1}) - x_{k-1}, \qquad e_k = x_k - x^*. \qquad (2.4)$
Define the following averages given by the solution to the optimization problem (2.1):
$x_k^{\alpha} = \sum_{j=k-m_k}^{k} \alpha_j^k x_{j-1}, \qquad w_k^{\alpha} = \sum_{j=k-m_k}^{k} \alpha_j^k w_j. \qquad (2.5)$
Then the update (2.2) can be written in terms of the averages $x_k^{\alpha}$ and $w_k^{\alpha}$,
$x_k = x_k^{\alpha} + \beta_k w_k^{\alpha},$
and the stage-$k$ gain can be defined by
$\theta_k = \frac{\|w_k^{\alpha}\|}{\|w_k\|}. \qquad (2.6)$
The key to analyzing the acceleration technique, defined by taking a linear combination of a history of steps with the coefficients from the optimization problem (2.1), is connecting the gain $\theta_k$ given by (2.6) to the error and residual terms in (2.4). As such, the success (or failure) of the algorithm in reducing the residual is coupled to the success of the optimization problem at each stage of the algorithm. As $\alpha^k = (0, \ldots, 0, 1)$ is an admissible solution to (2.1), it follows immediately that $\theta_k \le 1$. As discussed in the remainder of the paper, the improvement in the contraction rate of the fixed-point iteration is characterized by $\theta_k$.
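As a concrete numerical illustration (not from the paper), the gain can be computed for an arbitrary residual history by solving the equivalent unconstrained least-squares problem over consecutive residual differences; since $\alpha = (0,\ldots,0,1)$ is admissible, the computed $\theta_k$ never exceeds one. All names below are illustrative, and the residuals are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 3))          # columns: residuals w_{k-2}, w_{k-1}, w_k
w_k = W[:, -1]

# unconstrained form: gamma minimizes || w_k - F gamma ||, where F holds
# the consecutive residual differences
F = np.column_stack([W[:, 2] - W[:, 1], W[:, 1] - W[:, 0]])
gamma, *_ = np.linalg.lstsq(F, w_k, rcond=None)
w_alpha = w_k - F @ gamma                # the optimized average sum_j alpha_j w_j
theta = np.linalg.norm(w_alpha) / np.linalg.norm(w_k)
assert theta <= 1.0                      # alpha = (0, ..., 0, 1) is always admissible
```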
The two main components of the proof of residual convergence at an accelerated rate are the expansion of the residual $w_{k+1}$ into the optimized average $w_k^{\alpha}$ and error terms, and the control of the $e_j$'s in terms of the corresponding $w_j$'s. In the next section, the first of these is established for general $m$, and the second for the particular cases of depth $m=1$ and $m=2$, with the result then extrapolated to general $m$.
3 Technical preliminaries
There are several important technical results that our theory utilizes. We separate them out into this section to allow for cleaner proofs of the main results in Section 4.
In earlier work, the operator $g$ was assumed Lipschitz continuous and contractive. Here, we make the stronger assumption that $g$ is continuously differentiable, to allow for Taylor expansions of the error terms.
Assume $g$ has a fixed point $x^*$, and that there are positive constants $C_0$ and $C_1$ with
1. $\|g'(x)\| \le C_0$ for each $x \in H$;
2. $\|g'(x) - g'(y)\| \le C_1 \|x - y\|$ for each $x, y \in H$.
Reducing the assumptions from global to local is possible, but would make the analysis below significantly more technical.
where is on the line segment joining and . A second application of Taylor’s Theorem provides
with on the line segment connecting with .
3.1 Expansion of the residual
Expanding the first term on the right hand side of (3.3) yields
It is worth noting that $\sum_{j=k-m_k}^{k} \alpha_j^k = 1$ by the constraint in (2.1). Expanding the second term on the right hand side of (3.3), we get
The next calculation shows that the sum multiplying is equal to . First observe that and . Separating the first term of the sum and using ,
3.2 Relating errors to residuals
We now derive estimates to bound (in norm) the $e_j$'s from the right hand side of (3.11) by the corresponding $w_j$'s. For the bounds in this section, we require $g$ to be a contractive operator. In fact, the estimates of this section hold for contractive but not necessarily differentiable operators. However, the application of these results in the current setting requires $g$ to be continuously differentiable, so we state the next assumption as follows.
Let $H$ be a Hilbert space and $g: H \rightarrow H$. Assume $\|g(x) - g(y)\| \le \kappa \|x - y\|$ with $\kappa < 1$, for each $x, y \in H$.
The next lemma establishes a bound for the errors in terms of the corresponding residuals in the case of depth $m=1$. The subsequent lemma generalizes the same idea to general $m$.
Under the conditions of Assumption 3.3, the following bounds hold true:
Begin by rewriting the optimization problem (2.1) in the equivalent form
The use of this quantity as the second parameter in the proof above is not purely coincidental, as it agrees with that used in Section 3.1. The same essential technique yields the necessary bounds for $m=2$. The estimate for general $m$ is given in the lemma below, with the particular estimate for $m=2$ given as a proposition.
As in the case above, two forms of the optimization problem are used. The $\alpha$-formulation is used to bound the terms that appear from the expansion (3.11), whereas the $\gamma$-formulation is used to bound the terms that appear in the numerator without leading optimization coefficients. It is then of particular importance that estimates of this form have the property that the leading coefficient is bounded away from zero. This is a reasonable assumption on the leading coefficient $\alpha_k^k$ for each $k$, as some nonvanishing component in the latest search direction is necessary for progress. It is also a reasonable assumption in the $\gamma$-formulation, meaning the coefficient of the earliest search direction considered is bounded away from unity. Presumably this is a reasonable assumption to make, although it is not explicitly required.
The optimization problem (2.1) at level $k$ is to minimize
Differencing from the left and right respectively, this can be posed as the following unconstrained optimization problems:
Recombining the terms inside the sum and noting that the coefficients sum to one, we obtain
Applying Cauchy-Schwarz and triangle inequalities then yields
Following the same idea for estimate (3.17), we are now concerned with bounding in norm the final difference term. Again expanding (3.19) as an inner product and seeking the critical point, this time for the last coefficient, yields
Recombining terms noting
Applying Cauchy-Schwarz and triangle inequalities then yields
Proposition 3.4 (Depth $m=2$).
The approach taken in previous work is to reduce the right hand side of (3.22) and (3.23) to two terms each by relating their expansion to that of (3.21) and (3.24), respectively. Here the terms are left as they are to emphasize the direct generalization to greater depths $m$.
3.3 Explicit computation of the optimization gain
The stage-$k$ gain $\theta_k$ has a simple description assuming the optimization is performed in a norm induced by an inner product $(\cdot,\cdot)$, in other words in a Hilbert space setting.
Consider the unconstrained form of the optimization problem (3.20) at iteration $k$ with depth $m$: find the vector of coefficients $\gamma$ that minimizes
$\|w_k - F_k \gamma\|, \qquad (3.25)$
where $F_k$ is the matrix whose columns are the consecutive residual differences $w_j - w_{j-1}$, $j = k-m_k+1, \ldots, k$, and $\gamma$ is the corresponding vector of coefficients $\gamma_j$. Indeed, (3.25) (or an equivalent reindexing) is the preferred way to state the optimization problem, particularly in the case where $\|\cdot\|$ is the $l^2$ norm and a fast QR algorithm can be used.
This is also the preferred statement of the problem for understanding the gain $\theta_k$ from (2.6), which satisfies $\theta_k \le 1$. Define the unique orthogonal decomposition $w_k = \hat{w}_k + \bar{w}_k$, with $\hat{w}_k \in \operatorname{Range}(F_k)$ and $\bar{w}_k \perp \operatorname{Range}(F_k)$. Then $\bar{w}_k$ is the least-squares residual satisfying $w_k^{\alpha} = w_k - F_k \gamma = \bar{w}_k$, meaning
$\theta_k = \frac{\|\bar{w}_k\|}{\|w_k\|},$
and $\theta_k$ has the interpretation of the direction-sine between $w_k$ and the subspace spanned by the columns of $F_k$. This is particularly clear in the case $m=1$, where solving for the critical point of (3.15) yields
Expanding and using the particular value of above yields
with the clear interpretation of the direction cosine between $w_k$ and $w_k - w_{k-1}$, hence $\theta_k$ is the direction-sine.
If indeed an (economy) QR algorithm is used to solve the optimization problem, then $\theta_k$ is available directly from the computed factors, which can be used to predict whether an accelerated step would be (sufficiently) beneficial. This explicit computation of $\theta_k$ is used in Section 5.3 to propose an adaptive damping strategy based on the gain at each step. Finally, it is noted that the improvement in the gain as the depth $m$ is increased depends on sufficient linear independence, or small direction cosines, between the columns of $F_k$ as information from earlier in the history is added. This is discussed in greater depth in the references.
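A short sketch of this computation (illustrative only, depth $m=1$, $l^2$ norm, random stand-in residuals): the gain is the norm of the component of $w_k$ orthogonal to the range of $F_k$, obtained here from an economy QR factorization, and it agrees with the direction-sine formula.

```python
import numpy as np

rng = np.random.default_rng(1)
w_prev, w_k = rng.standard_normal(4), rng.standard_normal(4)

# depth m = 1: F has the single column w_k - w_{k-1}
F = (w_k - w_prev).reshape(-1, 1)
Q, R = np.linalg.qr(F)                       # economy QR
w_bar = w_k - Q @ (Q.T @ w_k)                # component orthogonal to Range(F)
theta = np.linalg.norm(w_bar) / np.linalg.norm(w_k)

# direction-sine interpretation: theta = sin(angle between w_k and w_k - w_{k-1})
cos = abs(F[:, 0] @ w_k) / (np.linalg.norm(F) * np.linalg.norm(w_k))
assert np.isclose(theta, np.sqrt(1.0 - cos**2))
```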
4 Convergence rates for depths $m=1$ and $m=2$
Theorem 4.1 (Convergence of the residual with depth $m=1$).
In this case the residual expansion (3.11) reduces to
Taking norms of both sides and applying (2.6) and the triangle inequality,
Notably, (4.2) holds regardless of whether $g$ is a contractive operator; hence, for error terms small enough, convergence can be observed even when $\kappa \ge 1$, particularly if a damping factor $\beta_k < 1$ is applied and if the gain $\theta_k$ is sufficiently less than one. This justifies the observation that Anderson acceleration can enlarge the effective domain of convergence of a fixed point iteration.
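To make the enlarged convergence domain concrete, consider a scalar affine example (illustrative, not one of the paper's test problems): for an affine $g$ the depth-one optimization is solved exactly, so a single undamped accelerated step recovers the fixed point even though $|g'| > 1$ makes the plain iteration diverge. This exactness is a best-case artifact of affinity; for nonlinear $g$ the quadratic terms in (4.2) remain.

```python
g = lambda x: -1.8 * x + 1.0        # |g'| = 1.8 > 1: plain iteration diverges
x_star = 1.0 / 2.8                  # the fixed point of g

# plain fixed-point iteration blows up
x = 0.0
for _ in range(20):
    x = g(x)
assert abs(x - x_star) > 1e3

# depth-1 Anderson step: minimize |alpha w_k + (1 - alpha) w_{k-1}| over alpha
x0 = 0.0
x1 = g(x0)
w1 = g(x0) - x0
w2 = g(x1) - x1
gamma = (w2 * (w2 - w1)) / (w2 - w1) ** 2        # critical point of the 1-D least squares
x2 = x1 + w2 - gamma * ((x1 + w2) - (x0 + w1))   # undamped accelerated update
assert abs(x2 - x_star) < 1e-10
```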
As discussed in Section 3.2, the two coefficients appearing here are each the leading coefficients in their respective optimization problems, multiplying the most recent iterate. As such, these coefficients may reasonably be considered bounded away from zero.
Theorem 4.2 (Convergence of the residual with depth $m=2$).
For depth $m=2$ the residual expansion (3.11) reduces to