A Stochastic Levenberg-Marquardt Method Using Random Models with Application to Data Assimilation
Abstract
Globally convergent variants of the Gauss-Newton algorithm are often the preferred methods to tackle nonlinear least squares problems. Among such frameworks, the Levenberg-Marquardt and the trust-region methods are two well-established paradigms, and their similarities have often made it possible to derive similar analyses of these schemes. Both algorithms have indeed been successfully studied when the Gauss-Newton model is replaced by a random model that is only accurate with a given probability. Meanwhile, problems where even the objective value is subject to noise have gained interest, driven by the need for efficient methods in fields such as data assimilation.
In this paper, we describe a stochastic Levenberg-Marquardt algorithm that can handle noisy objective function values as well as random models, provided sufficient accuracy is achieved in probability. Our method relies on a specific scaling of the regularization parameter, which further clarifies the correspondences between the two classes of methods, and allows us to leverage existing theory for trust-region algorithms. Provided the probability of accurate function estimates and models is sufficiently large, we establish that the proposed algorithm converges globally to a first-order stationary point of the objective function with probability one. Furthermore, we derive a bound on the expected number of iterations needed to reach an approximate stationary point. We finally describe an application of our method to variational data assimilation, where stochastic models are computed by the so-called ensemble methods.
Keywords: Levenberg-Marquardt method, nonlinear least squares, regularization, random models, noisy functions, data assimilation.
1 Introduction
Minimizing a nonlinear least-squares function is one of the most classical problems in numerical optimization, and arises in a variety of fields. In many applications, the objective function to be optimized can only be accessed through noisy estimates. Typical occurrences of such a formulation can be found when solving inverse problems [16, 27, 28] or while minimizing the error of a model in the context of machine learning [9]. In such cases, the presence of noise is often due to the estimation of the objective function via cheaper, less accurate calculations: this is for instance true when part of the data is left aside while computing this estimate. In fact, in data-fitting problems such as those coming from machine learning, a huge amount of data is available, and considering the entire data set throughout the optimization process can be extremely costly. Moreover, the measurements can be redundant and possibly corrupted: in that context, a full evaluation of the function or the gradient may be unnecessary.
Such concerns have motivated the development of optimization frameworks that cope with inexactness in the objective function or its derivatives. In particular, the field of derivative-free optimization [15], where it is assumed that the derivatives exist but are unavailable for use in an algorithm, has expanded in recent years with the introduction of random models. One seminal work in this respect is [1], where the authors applied arguments from compressed sensing to guarantee accuracy of quadratic models whenever the Hessian had a certain (unknown) sparsity pattern. Trust-region methods based on general probabilistic models were then proposed in [2], where convergence to first- and second-order stationary points was established under appropriate accuracy assumptions on the models. Global convergence rates were derived for this approach in [19], in expectation and with high probability. Of particular interest to us is the extension of trust-region methods with probabilistic models to the case of noisy function values [13]: the corresponding algorithm considers two sources of randomness, respectively arising from the noisy function estimates and the random construction of the models. A global convergence rate in expectation for this method was derived in [7], where a bound was established on the expected number of iterations needed to drive the gradient norm below a given threshold.
In the context of derivative-free least-squares problems where exact function values are available, various deterministic approaches based on globalization of the Gauss-Newton method have been studied. The algorithms developed in the derivative-free community are mostly of trust-region type, and rely on building models that satisfy the so-called fully linear property, which requires the introduction of a so-called criticality step to guarantee its satisfaction throughout the algorithmic process [12, 31, 32, 29]. The recent DFO-GN algorithm [12] was equipped with a complexity result, showing a bound of the same order as that of derivative-free trust-region methods for generic functions [18]. As for general problems, considering random models is a possible way of relaxing the need for accuracy at every iteration. A Levenberg-Marquardt algorithm based on this idea was proposed in [6], motivated by problems from data assimilation. The authors of [6] proposed an extension of the classical LM algorithm that replaces the gradient of the objective function by a noisy estimate, which is accurate only with a certain probability. Using arguments similar to those of the trust-region case [2], almost-sure global convergence to a first-order stationary point was established.
The case of noisy least squares has also been examined. A very recent preprint [10] proposed an efficient approach for handling noisy values in practice, but did not provide theoretical guarantees. A Levenberg-Marquardt framework for noisy optimization without derivatives was proposed in [4], with goals similar to those of this paper. The method proposed in [4] assumes that function values can be estimated to a prescribed accuracy level, and explicitly maintains a sequence of these accuracies throughout the algorithm. Although such an approach is relevant when any accuracy level can be used (for instance, when all the data can be utilized to estimate the function), it does not allow for arbitrarily bad estimations on any iteration: moreover, the noise level must be small compared to the norm of the upcoming Levenberg-Marquardt step, a condition that may force the algorithm to reduce this noise level, and that resembles the criticality step of derivative-free model-based methods. By contrast, the use of random models and estimates with properties only guaranteed in probability allows for arbitrarily bad estimates, which can be more economical at the iteration level; moreover, a bad estimate does not necessarily prevent a good step from being computed. Probabilistic properties thus emerge as an interesting alternative, particularly when it is expensive to compute accurate estimates, and one can then think of exploiting the connection between Levenberg-Marquardt and trust-region methods [23] to analyze the former in the case of noisy problems.
In this paper, we propose a stochastic framework that builds upon the approach developed in [6] to handle both random models and noise in the function evaluations. This new algorithm is also inspired by a recently proposed variant of the Levenberg-Marquardt framework [5], where a specific scaling of the regularization parameter enabled the derivation of worst-case complexity results. We adapt the analysis of the stochastic trust-region framework using random models proposed in [7, 13] to prove that our framework enjoys comparable convergence and complexity guarantees. Unlike [4], our setup allows for arbitrarily inaccurate models or function estimates, as long as such inaccuracy occurs with small probability. Our method is particularly suited for applications in data assimilation, which we illustrate in the context of ensemble methods.
The remainder of the paper is organized as follows. Section 2 presents our Levenberg-Marquardt framework; Section 3 establishes the accuracy requirements we make on the function values and the models, as well as their probabilistic counterparts. Global convergence and worst-case complexity of the method are analyzed in Sections 4 and 5, respectively. Finally, Section 6 describes an application of our method in data assimilation.
2 A Levenberg-Marquardt algorithm based on estimated values
In this paper, we consider the following nonlinear least squares problem:
(1) $\min_{x \in \mathbb{R}^n} f(x) := \frac{1}{2}\,\|F(x)\|^2,$
where $F: \mathbb{R}^n \rightarrow \mathbb{R}^m$ is the residual vector-valued function, assumed to be continuously differentiable, with most likely $m \ge n$. During the minimization process, the optimizer can only access estimates of $f$, referred to as $\tilde{f}$. This estimate is assumed to be noisy, i.e., one has $\tilde{f}(x) = f(x) + \varepsilon$ for all $x$, where the noise $\varepsilon$ is a random variable.
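To make the setting concrete, the following sketch shows one common way such noisy estimates arise in data fitting: the objective is estimated from a random subsample of the residual components. The estimator below is an illustrative assumption, not the estimator analyzed in this paper.

```python
import numpy as np

def f(x, residual):
    """Exact least-squares objective f(x) = 1/2 ||F(x)||^2."""
    r = residual(x)
    return 0.5 * r @ r

def noisy_estimate(x, residual, m, batch, rng):
    """Estimate f(x) from a random subsample of the m residual components.
    The subsampling error plays the role of the random perturbation of the
    objective value.  (Illustrative sketch only.)"""
    idx = rng.choice(m, size=batch, replace=False)
    r = residual(x)[idx]
    # Rescale so that the estimate is unbiased: E[estimate] = f(x).
    return 0.5 * (m / batch) * r @ r
```

With `batch = m`, the estimate coincides with the exact value; smaller batches are cheaper but noisy, which is precisely the trade-off motivating probabilistic accuracy requirements.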
This section recalls the main features of the Levenberg-Marquardt method, then describes our extension of this algorithm to handle noisy function values and gradients.
2.1 Deterministic LevenbergMarquardt algorithm
Whenever the function $F$ and its Jacobian can be accessed, one possible approach for solving problem (1) is based on the Gauss-Newton model. More precisely, at a given iterate $x_k$, a step is computed as a solution of the linearized least squares subproblem
$$\min_{s \in \mathbb{R}^n} \frac{1}{2}\,\|F_k + J_k s\|^2,$$
where $F_k = F(x_k)$ and $J_k$ denotes the Jacobian of $F$ at $x_k$. The subproblem has a unique solution if $J_k$ has full column rank, and in that case the step is a descent direction for $f$. When $J_k$ is not of full column rank, the introduction of a regularization parameter can lead to similar properties. This is the underlying idea behind the Levenberg-Marquardt algorithm [21, 22, 24], a globally convergent method based upon the Gauss-Newton model. At each iteration, one considers a step of the form $s_k = -(J_k^\top J_k + \gamma_k I)^{-1} J_k^\top F_k$, corresponding to the unique solution of
(2) $\min_{s \in \mathbb{R}^n} \frac{1}{2}\,\|F_k + J_k s\|^2 + \frac{\gamma_k}{2}\,\|s\|^2,$
where $\gamma_k > 0$ is an appropriately chosen regularization parameter, typically updated at each iteration in the spirit of the classical trust-region radius update. Several approaches have considered scaling this parameter using the norm of the gradient of the Gauss-Newton model [5, 33]; a similar choice will be adopted in this paper.
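The regularized subproblem (2) has a closed-form solution via the normal equations, which the following sketch computes by a dense solve (a practical code would use a factorization or an iterative solver):

```python
import numpy as np

def lm_step(J, F, gamma):
    """Solve the regularized Gauss-Newton subproblem
        min_s 1/2 ||F + J s||^2 + gamma/2 ||s||^2,
    whose unique solution is s = -(J^T J + gamma I)^{-1} J^T F."""
    n = J.shape[1]
    g = J.T @ F                                      # gradient of the model at s = 0
    return np.linalg.solve(J.T @ J + gamma * np.eye(n), -g)
```

For any $\gamma > 0$ the matrix $J^\top J + \gamma I$ is positive definite even when $J$ is rank deficient, so the step is always well defined and is a descent direction for the model.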
2.2 Algorithmic framework based on estimates
In this work, we are interested in the case where $f$ and its derivatives cannot be directly accessed, but noisy estimates are available. As a result, we will consider a variant of the Levenberg-Marquardt algorithm in which both the function and gradient values are approximated.
Algorithm 1 presents a description of our method. At every iteration, estimates of the values of $f$ and its derivative at the current iterate are obtained, and serve to define a regularized Gauss-Newton model (3), whose regularization parameter is defined through a specific scaling formula involving the norm of the model gradient. The model is then approximately minimized, yielding a trial step $s_k$. The resulting new point is accepted only if the ratio between the estimated decrease ($f$ is again estimated at the new trial point) and the model decrease is sufficiently high.
The Levenberg-Marquardt parameter is updated depending on the value of this ratio, and also on a condition involving the model gradient. Such updates have been widely used in derivative-free model-based methods based on random estimates [2, 6, 13, 19].

Compute an estimate $f_k^0$ of $f(x_k)$.

Compute $g_k$ and $J_k$, the gradient and Jacobian estimates at $x_k$, set $\gamma_k = \mu_k \|g_k\|$, and define the model $m_k$ of $f$ around $x_k$ by:
(3) $m_k(x_k + s) = f_k^0 + g_k^\top s + \frac{1}{2}\, s^\top \left(J_k^\top J_k + \gamma_k I\right) s.$
Compute an approximate solution $s_k$ of the subproblem
(4) $\min_{s \in \mathbb{R}^n} m_k(x_k + s).$
Compute an estimate $f_k^s$ of $f(x_k + s_k)$, then compute
$$\rho_k = \frac{f_k^0 - f_k^s}{m_k(x_k) - m_k(x_k + s_k)}.$$

If $\rho_k \ge \eta_1$ and $\mu_k \|g_k\| \ge \eta_2$, set $x_{k+1} = x_k + s_k$ and $\mu_{k+1} = \max\{\mu_{\min}, \mu_k / \lambda\}$ (successful iteration); otherwise, set $x_{k+1} = x_k$ and $\mu_{k+1} = \lambda \mu_k$ (unsuccessful iteration).
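The overall loop can be sketched as follows. The scaling `gamma = mu * ||g||`, the acceptance test, and the update constants are illustrative assumptions chosen for this sketch, not the paper's precise rules, and the subproblem is solved exactly rather than approximately:

```python
import numpy as np

def stochastic_lm(x0, estimate_f, estimate_grad_jac, mu=1.0, eta1=0.1,
                  eta2=0.01, lam=2.0, mu_min=1e-8, iters=100, tol=1e-10):
    """Hedged sketch of a stochastic Levenberg-Marquardt iteration.
    estimate_f returns a (possibly noisy) value of f; estimate_grad_jac
    returns estimates (g, J) of the gradient and the Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        f0 = estimate_f(x)                      # estimate of f(x_k)
        g, J = estimate_grad_jac(x)             # model gradient and Jacobian
        if np.linalg.norm(g) <= tol:
            break
        gamma = mu * np.linalg.norm(g)          # regularization scaling (assumed)
        H = J.T @ J + gamma * np.eye(len(x))
        s = np.linalg.solve(H, -g)              # exact subproblem solve
        pred = -(g @ s + 0.5 * s @ H @ s)       # model decrease m(x_k) - m(x_k + s)
        fs = estimate_f(x + s)                  # estimate of f(x_k + s_k)
        rho = (f0 - fs) / pred
        if rho >= eta1 and mu * np.linalg.norm(g) >= eta2:
            x = x + s                           # successful iteration
            mu = max(mu_min, mu / lam)
        else:
            mu = lam * mu                       # unsuccessful: increase regularization
    return x
```

On a noise-free linear least-squares problem the estimates are exact, every acceptance test with a sufficiently large model gradient succeeds, and the iterates converge to the least-squares solution.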
3 Probabilistic properties for the Levenberg-Marquardt method
We are interested in the case where the objective function values, the gradient and the Jacobian are noisy, and we only have their approximations.
3.1 Gradient and function estimates
We begin by describing our accuracy requirements for the models computed based on sampled values, of the form given in (3). Following previous work on derivative-free Levenberg-Marquardt methods [6], we propose the following accuracy definition, and further motivate its use below.
Definition 3.1
Remark 3.1
The accuracy requirement for the model gradient (5) is similar to the first-order accuracy property introduced by Bergou, Gratton and Vicente [6]. However, it is not exactly equivalent, as we use the parameter $\mu_k$ instead of $\gamma_k$. The purpose of this new parametrization is twofold. First, it allows us to measure the accuracy in formulas (5) and (6) through a parameter that is updated in an explicit fashion throughout the algorithmic run: this is a key property for performing a probabilistic analysis of optimization methods. Secondly, we believe this choice to be a better reflection of the relationship between the Levenberg-Marquardt and the trust-region parameters. Indeed, for a realization of the method, the Levenberg-Marquardt direction minimizing the model is given by
(7) $s_k = -\left(J_k^\top J_k + \gamma_k I\right)^{-1} g_k,$
which is also the solution of the trust-region subproblem
(8) $\min_{\|s\| \le \delta_k} g_k^\top s + \frac{1}{2}\, s^\top J_k^\top J_k\, s, \qquad \text{with } \delta_k = \|s_k\|.$
As a result, we see that for a large value of $\gamma_k$, one would have:
(9) $\|s_k\| \approx \frac{\|g_k\|}{\gamma_k}, \qquad \text{i.e.,} \qquad \gamma_k \approx \frac{\|g_k\|}{\delta_k},$
which suggests that $\gamma_k$ is not exactly equivalent to the inverse of the trust-region radius, as suggested in [6], but rather to the ratio $\|g_k\|/\delta_k$. Still, this relation implies that $\mu_k$ can be seen as an equivalent of $1/\delta_k$: in that sense, (5) matches the gradient assumption for fully linear models [15].
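The correspondence between the Levenberg-Marquardt step and the trust-region subproblem it implicitly solves can be checked numerically: the LM step for a given regularization is the global minimizer of the Gauss-Newton model over the ball whose radius equals the step length. The sketch below verifies this by random sampling (the specific matrices are arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((5, 3))
F = rng.standard_normal(5)
gamma = 2.0

g = J.T @ F
B = J.T @ J
# Levenberg-Marquardt direction for regularization parameter gamma:
s_lm = np.linalg.solve(B + gamma * np.eye(3), -g)
# Induced trust-region radius:
delta = np.linalg.norm(s_lm)

def gn_model(s):
    """Gauss-Newton model value 1/2 ||F + J s||^2."""
    r = F + J @ s
    return 0.5 * r @ r

# Sample points in the ball ||s|| <= delta; none should improve on s_lm,
# since (B + gamma I) s_lm = -g with gamma >= 0 is exactly the optimality
# condition of the trust-region subproblem with radius delta.
U = rng.standard_normal((2000, 3))
U = U / np.linalg.norm(U, axis=1, keepdims=True)
radii = delta * rng.uniform(size=(2000, 1)) ** (1.0 / 3.0)
best_sampled = min(gn_model(s) for s in radii * U)
```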
Note that Definition 3.1 contains two requirements: in the absence of noise, (6) is trivially satisfied by using the exact function values. In this work, we consider that even function values can only be accessed inexactly; thus (6) appears to be necessary.
In the case of noisy function values, we also expect the estimates computed by Algorithm 1 to be sufficiently accurate with a suitable probability. This is formalized in the following definitions.
Definition 3.2
Given $\varepsilon_f > 0$, we say that two values $f_k^0$ and $f_k^s$ are $\varepsilon_f$-accurate estimates of $f(x_k)$ and $f(x_k + s_k)$, respectively, for a given $\mu_k$, if
(10) $\left|f_k^0 - f(x_k)\right| \le \frac{\varepsilon_f}{\mu_k^2} \qquad \text{and} \qquad \left|f_k^s - f(x_k + s_k)\right| \le \frac{\varepsilon_f}{\mu_k^2}.$
3.2 Probabilistic accuracy of model gradients and function estimates
We are further interested in the case where the models are built in some random fashion. We will thus consider random models $M_k$, and use the notation $m_k$ for their realizations. Correspondingly, random variables denote the estimates of the gradient and the Jacobian, with realizations $g_k$ and $J_k$.
Note that the randomness of the models implies the randomness of the iterates, parameters, and steps; $x_k$, $\gamma_k$, $\mu_k$ and $s_k$ will denote their respective realizations.
As described in the introduction, another source of randomness in our problem is that the objective function is accessed through a randomized estimator. For a given iteration index $k$, we consider random estimates of $f(x_k)$ and $f(x_k + s_k)$; their realizations (taken over the randomness of the estimator as well as that of the iterate) will be denoted by $f_k^0$ and $f_k^s$.
Definition 3.3
Given appropriate accuracy constants and a probability $p \in (0, 1]$, a sequence of random models is said to be probabilistically first-order accurate with respect to the sequence of iterates if the events
satisfy the following condition
(11) 
where $\mathcal{F}_{k-1}$ is the $\sigma$-algebra generated by the random models and estimates up to iteration $k-1$.
Definition 3.4
Given constants $\varepsilon_f > 0$ and $q \in (0, 1]$, the sequences of random function estimates are called probabilistically $\varepsilon_f$-accurate, with respect to the corresponding sequences of iterates, steps and parameters, if the events
satisfy the following condition
(12) 
where $\mathcal{F}_{k-1}$ is the $\sigma$-algebra generated by the past models and estimates.
Here again, we point out that the parameter $\mu_k$ plays the role of a reciprocal of the trust-region radius. In that sense, the previous definitions are consistent with the definitions of sufficient accuracy presented in the case of stochastic trust-region methods [13].
4 Global convergence to first-order critical points
In this section, we aim at establishing convergence of Algorithm 1 when the function estimates and the models satisfy the probabilistic properties described in Section 3. Our analysis bears strong similarities with that of the STORM algorithm [13], but possesses significant differences induced by the use of probabilistic gradients rather than probabilistic fully linear models.
4.1 Assumptions and deterministic results
We will analyze Algorithm 1 under the following assumptions.
Assumption 4.1
The function $f$ is continuously differentiable on an open set containing the level set $\{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$, with Lipschitz continuous gradient of Lipschitz constant $L > 0$.
We also require that the model Jacobians are uniformly bounded. Note that this bound is assumed to hold for every realization of the algorithm, and is therefore valid in both deterministic and random contexts.
Assumption 4.2
There exists $\kappa_J > 0$ such that, for all $k$ and all realizations $J_k$ of the $k$th model Jacobian, one has $\|J_k\| \le \kappa_J$.
Additionally, we assume that the subproblem is approximately solved so that a fraction of a Cauchy decrease is satisfied for the model.
Assumption 4.3
There exists $\theta \in (0, 1)$ such that, for every iteration of every realization of the algorithm,
(13) $m_k(x_k) - m_k(x_k + s_k) \ge \frac{\theta}{2}\, \frac{\|g_k\|^2}{\left\|J_k^\top J_k + \gamma_k I\right\|},$
We will also assume the following bounds hold.
Assumption 4.4
At each iteration and for each realization of the algorithm, the step $s_k$ satisfies
(14) 
and there exists such that
(15) 
Several choices for the approximate minimization of $m_k$ that satisfy (13), (14) and (15) can be proposed; in particular, these properties hold for steps computed via a truncated conjugate gradient algorithm (initialized with the null vector) applied to the quadratic model (3) [6, Lemma 5.1].
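For illustration, a minimal truncated conjugate gradient solver for the regularized normal equations can be sketched as follows. It is a generic CG sketch started from the origin (forming the Hessian explicitly for clarity, where a practical code would use Jacobian-vector products), not the paper's specific implementation; started at zero on a strictly convex quadratic, every CG iterate achieves at least a fraction of the Cauchy decrease, which is why the iteration may be truncated early.

```python
import numpy as np

def truncated_cg(J, g, gamma, max_iter=None, tol=1e-12):
    """Conjugate gradient on (J^T J + gamma I) s = -g, started from s = 0."""
    n = len(g)
    H = J.T @ J + gamma * np.eye(n)       # model Hessian (explicit for clarity)
    s = np.zeros(n)
    r = -g.astype(float)                  # residual -g - H s at s = 0
    if np.linalg.norm(r) <= tol:
        return s
    p = r.copy()
    for _ in range(max_iter if max_iter is not None else n):
        Hp = H @ p
        alpha = (r @ r) / (p @ Hp)
        s = s + alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) <= tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return s
```

Running the loop to completion recovers the exact regularized step, while stopping after one iteration already yields a model decrease, since the first CG step is a steepest-descent step with exact line search.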
Lemma 4.1
Proof. Using Assumptions 4.1, 4.2, and 4.4 within a Taylor expansion of the function $f$ around $x_k$, we obtain:
hence the result.
Lemma 4.1 illustrates that our accuracy requirements are enough to guarantee accuracy of any computed step.
We now state various results holding for a realization of Algorithm 1 that do not make direct use of the probabilistic nature of the method. These will be instrumental in proving Theorem 4.1.
Lemma 4.2
Proof. Since the model is first-order accurate, we have:
where we used the result of Lemma 4.1 and Assumption 4.3. Using the first part of (17), we have and thus
where the second part of the maximum in (17) was used in the last line, yielding the expected result.
The next result is a consequence of Lemma 4.2.
Lemma 4.3
Let the assumptions of Lemma 4.2 hold. If the model $m_k$ is first-order accurate and
(19) 
then the trial step $s_k$ satisfies
(20) 
where .
Proof. Since the model is first-order accurate, we have:
(21) 
Using (19) to bound the lefthand side, we obtain:
which gives . We are thus in the assumptions of Lemma 4.2, and (18) holds.
Using again the fact that the model is first-order accurate together with (17) and (21), we have:
leading to
Lemma 4.4
Proof. To simplify notation, we omit the iteration indices in the proof.
We look in more detail at the first term arising in the numerator; we have:
Thus, we obtain:
As a result, we have
Since the right-hand side is a second-order polynomial in , this gives
But this contradicts (22), from which we conclude that we necessarily have , and thus . Since as a direct consequence of (22), the iteration is a successful one, and the parameter is not increased.
We point out that Lemma 4.4 only involves the accuracy requirements on the model gradient, thanks to the accuracy of the function estimates.
Lemma 4.5
Proof. By definition of a successful iteration and using the accuracy properties of the models and the estimates, we have