Convergence of the stochastic Euler scheme for locally Lipschitz coefficients
Stochastic differential equations are often simulated with the Monte Carlo Euler method. Convergence of this method is well understood in the case of globally Lipschitz continuous coefficients of the stochastic differential equation. The important case of superlinearly growing coefficients, however, has remained an open question. The main difficulty is that numerically weak convergence fails to hold in many cases of superlinearly growing coefficients. In this paper we overcome this difficulty and establish convergence of the Monte Carlo Euler method for a large class of one-dimensional stochastic differential equations whose drift functions have at most polynomial growth.
Many applications require the numerical approximation of moments or expectations of other functionals of the solution of a stochastic differential equation (SDE) whose coefficients are superlinearly growing. Moments are often approximated by discretizing time using the stochastic Euler scheme (see e.g. , , ) (a.k.a. the Euler–Maruyama scheme) and by approximating expectations with the Monte Carlo method. This Monte Carlo Euler method has been shown to converge in the case of globally Lipschitz continuous coefficients of the SDE (see e.g. Section 14.1 in  and Section 12 in ). The important case of superlinearly growing coefficients, however, has remained an open problem. The main difficulty is that numerically weak convergence fails to hold in many cases of superlinearly growing coefficients; see . In this paper we overcome this difficulty and establish convergence of the Monte Carlo Euler method for a large class of one-dimensional SDEs with at most polynomially growing drift functions and with globally Lipschitz continuous diffusion functions; see Section 2 for the exact statement.
For clarity of exposition, we concentrate in this introductory section on the following prominent example. Let be fixed and let be the unique strong solution of the one-dimensional SDE
for all , where is a one-dimensional standard Brownian motion with continuous sample paths and where and are given constants. Our goal is then to solve the cubature approximation problem of the SDE (1). More formally, we want to compute moments and, more generally, the deterministic real number
for a given smooth function whose derivatives have at most polynomial growth.
A frequently used scheme for solving this problem is the Monte Carlo Euler method. In this method, time is discretized through the stochastic Euler scheme and expectations are approximated by the Monte Carlo method. More formally, the Euler approximation of the solution of the SDE (1) is defined recursively through and
for every and every . Moreover, let , , , for be independent copies of the Euler approximation defined in (3). The Monte Carlo Euler approximation of (2) with time steps and Monte Carlo runs is then the random real number
In order to balance the error due to the Euler method and the error due to the Monte Carlo method, it turns out to be optimal to have increasing at the order of ; see . We say that the Monte Carlo Euler method converges if
holds almost surely for every smooth function whose derivatives have at most polynomial growth (see also Appendix A.1 in ).
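The method just described, time discretization by Euler's scheme combined with Monte Carlo averaging, can be sketched in a few lines. The drift, diffusion and test function used below (a negative cubic drift with unit additive noise, and f(x) = x^2) are illustrative assumptions in the spirit of the introductory example, not the paper's exact setting.

```python
import math
import random

def monte_carlo_euler(mu, sigma, x0, T, N, M, f, rng):
    """Monte Carlo Euler estimate of E[f(X_T)]: M independent
    Euler paths, each with N uniform time steps of size h = T/N."""
    h = T / N
    total = 0.0
    for _ in range(M):
        y = x0
        for _ in range(N):
            dW = rng.gauss(0.0, math.sqrt(h))  # Brownian increment
            y = y + mu(y) * h + sigma(y) * dW  # one Euler step
        total += f(y)
    return total / M  # arithmetic average over the M independent copies

rng = random.Random(0)
# Illustrative coefficients (an assumption, mirroring the negative
# cubic drift discussed in the introduction): dX = -X^3 dt + dW.
est = monte_carlo_euler(mu=lambda x: -x**3, sigma=lambda x: 1.0,
                        x0=1.0, T=1.0, N=100, M=2000,
                        f=lambda x: x**2, rng=rng)
print(est)
```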
In the literature, convergence of the Monte Carlo Euler method is usually established by estimating the bias and by estimating the statistical error (see e.g. Section 3.2 in ). More formally, the triangle inequality yields
for every and every smooth function whose derivatives have at most polynomial growth. The first summand on the right-hand side of (6) is the absolute value of the bias due to approximating the exact solution with Euler’s method. The second summand on the right-hand side of (6) is the statistical error which is due to approximating an expectation with the arithmetic average over independent copies. The bias is usually the more difficult part to estimate. This is why the concept of numerically weak convergence, which concentrates on that error part, has been studied intensively in the literature (see for instance , , , , , , ,  or Part VI in ). To give a definition, we say that the stochastic Euler scheme converges in the numerically weak sense (not to be confused with stochastic weak convergence) if the bias of the Monte Carlo Euler method converges to zero, i.e., if
holds for every smooth function whose derivatives have at most polynomial growth. If the coefficients of the SDE are globally Lipschitz continuous, then numerically weak convergence of Euler's method and convergence of the Monte Carlo Euler method are well established; see e.g. Theorem 14.1.5 in  and Section 12 in .
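In symbols, with $X_T$ the exact solution, $Y^N_T$ the Euler approximation with $N$ time steps and $Y^{1,N}_T,\dots,Y^{M,N}_T$ its $M$ independent copies (notation assumed here for illustration), the splitting (6) and the numerically weak convergence (7) read:

```latex
\Big| \mathbb{E}\big[f(X_T)\big] - \tfrac{1}{M}\textstyle\sum_{m=1}^{M} f\big(Y^{m,N}_T\big) \Big|
\;\le\;
\underbrace{\Big| \mathbb{E}\big[f(X_T)\big] - \mathbb{E}\big[f(Y^{N}_T)\big] \Big|}_{\text{bias}}
\;+\;
\underbrace{\Big| \mathbb{E}\big[f(Y^{N}_T)\big] - \tfrac{1}{M}\textstyle\sum_{m=1}^{M} f\big(Y^{m,N}_T\big) \Big|}_{\text{statistical error}},
\qquad
\lim_{N \to \infty} \Big| \mathbb{E}\big[f(X_T)\big] - \mathbb{E}\big[f(Y^{N}_T)\big] \Big| = 0 .
```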
The case of superlinearly growing coefficients is more subtle. The main difficulty in that case is that numerically weak convergence usually fails to hold; see  for a large class of examples. In particular, the sequence , , of second moments of the Euler approximations (3) diverges to infinity if , although the second moment of the exact solution of the SDE (1) is finite; hence, we have
instead of (7). The absolute value of the bias thus diverges to infinity in case of SDEs with superlinearly growing coefficients. This in turn implies divergence of the Monte Carlo Euler method in the mean square sense, i.e.,
as . Clearly, the mean square divergence (9) does not exclude the almost sure convergence (5). Indeed, the main result of this article proves the convergence (5) of the Monte Carlo Euler method. To prove this result, we first need to understand why Euler’s method does not converge in the sense of numerically weak convergence. In the deterministic case, that is, (1) and (3) with , the Euler approximation diverges if the starting point is sufficiently large. This divergence has been estimated in  and turns out to be at least double-exponentially fast. Now in the presence of noise (), the Brownian motion has an exponentially small chance to push the Euler approximation outside of . On this event, the Euler approximation grows at least double-exponentially fast due to the deterministic dynamics. Consequently, since double-exponential growth on this event over-compensates its exponentially small probability, the -norm of the Euler approximation diverges to infinity and, hence, numerically weak convergence fails to hold.
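The double-exponential speed of this deterministic divergence is easy to observe numerically. The step size and starting point below are arbitrary illustrative choices; the recursion is the noise-free Euler scheme with a cubic drift, as discussed above.

```python
# Deterministic Euler recursion x_{n+1} = x_n - h * x_n**3 (noise off).
# For |x_n| large the drift term dominates: |x_{n+1}| ~ h * |x_n|**3,
# so log|x_n| roughly triples per step (double-exponential growth),
# with the sign flipping at each step.
h = 0.01
x = 20.0          # a sufficiently large starting point
magnitudes = []
for n in range(10):
    x = x - h * x**3
    magnitudes.append(abs(x))
    if abs(x) > 1e100:   # stop before floating-point overflow
        break
print(magnitudes)
```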
Now we indicate for example (1) with and why the Monte Carlo Euler method converges although the stochastic Euler scheme fails to converge in the sense of numerically weak convergence. Consider the event and note that the probability of is exponentially small in . The key step in our proof is to show that the Euler approximation does not diverge on as goes to infinity. More precisely, one can show that the Euler approximations (3) are uniformly dominated on by twice the supremum of the Brownian motion, i.e.,
holds. Consequently, the restricted absolute moments are uniformly bounded
for all . This estimate complements the divergence
for all , which has been established in . Now once the restricted absolute moments are uniformly bounded, an adaptation of the arguments of the globally Lipschitz case leads to the modified numerically weak convergence
for every smooth function whose derivatives have at most polynomial growth; see Lemma 4.6. By substituting this into an inequality analogous to (6) and by using the exponential decay of the probability of in , one can establish convergence of the Monte Carlo Euler method. Note that a domination as strong as (10) holds for more general non-increasing drift functions if the diffusion function is identically equal to . For more general drift and diffusion functions, however, both and the dominating process are more complicated in that they depend on the Euler approximation. Nevertheless, the dominating process can be shown to have uniformly bounded absolute moments; see Section 4 for the details.
Our main result, Theorem 2.1 below, establishes convergence of the Monte Carlo Euler method for SDEs with globally one-sided Lipschitz continuous drift functions and with globally Lipschitz continuous diffusion functions. Moreover, the coefficients of the SDE are assumed to have continuous fourth derivatives with at most polynomial growth; see Section 2 for the exact statement. The order of convergence turns out to be as in the globally Lipschitz case. In that case, the stochastic Euler scheme converges in the sense of numerically weak convergence with order . The Monte Carlo simulation of with independent Euler approximations has convergence order . For a real number , we write for the convergence order if the convergence order is better than for every arbitrarily small . We therefore choose in order to balance the error arising from Euler’s approximation and the error arising from the Monte Carlo approximation. Both error terms are then bounded by a random multiple of with . Since function evaluations, arithmetical operations and random variables are needed to compute the Monte Carlo Euler approximation (4), the Monte Carlo Euler method converges with order with respect to the computational effort in the case of globally Lipschitz coefficients of the SDE (see ). Theorem 2.1 shows that is also the convergence order in the case of superlinearly growing coefficients of the SDE. Simulations support this result; see Section 3.
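The balancing argument can be summarized in symbols. Writing $N$ for the number of time steps and $M_N$ for the number of Monte Carlo runs (notation assumed here for illustration), and taking the weak order of Euler's scheme to be one, as in the globally Lipschitz case:

```latex
% Bias of weak order one versus Monte Carlo error of order M^{-1/2}:
N^{-1} \,\approx\, M_N^{-1/2}
\quad\Longrightarrow\quad
M_N \approx N^2 .
% The computational effort is then of order N \cdot M_N \approx N^3,
% so the overall error satisfies
\text{error} \,\approx\, N^{-1} \,=\, \big(N^3\big)^{-1/3}
\,\approx\, (\text{effort})^{-1/3}.
```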
Let us reconsider the standard splitting (6) of the approximation error into bias and statistical error. Theorem 2.1 of  implies that the absolute value of the bias diverges to infinity as . This together with our Theorem 2.1 below yields that the statistical error also diverges to infinity. More formally, we see that
-a.s. as for every smooth function with at most polynomially growing derivatives and with for all and some . This emphasizes that the standard splitting of the approximation error of the Monte Carlo Euler method into bias and statistical error is not appropriate in the case of SDEs with superlinearly growing coefficients.
2 Main result
We establish convergence of the Monte Carlo Euler method for more general one-dimensional diffusions than our introductory example (1). More precisely, we pose the following assumptions on the coefficients. The drift function is assumed to be globally one-sided Lipschitz continuous and the diffusion function is assumed to be globally Lipschitz continuous. Additionally, both the drift function and the diffusion function are assumed to have a continuous fourth derivative with at most polynomial growth.
We introduce further notation for the formulation of our main result. Fix and let be a probability space with a normal filtration . Let , , be a sequence of independent, identically distributed /-measurable mappings with for every and let , , be a sequence of independent scalar standard -Brownian motions with continuous sample paths. Furthermore, let be four times continuously differentiable functions. Generalizing (1), let be a one-dimensional diffusion with drift function and diffusion function . More precisely, let be an (up to indistinguishability unique) adapted stochastic process with continuous sample paths which satisfies
for every . The functions and are the infinitesimal mean and the infinitesimal variance, respectively.
Next we introduce independent versions of the Euler approximation. Define -measurable mappings , , , , by and by
for every , and every . Now we formulate the main result of this article.
Suppose that are four times continuously differentiable functions with
for every , where and are fixed constants. Moreover, assume that the drift coefficient is globally one-sided Lipschitz continuous
and that the diffusion coefficient is globally Lipschitz continuous
Then there are -measurable mappings , , and a set with such that
holds for every , and every .
The proof is deferred to Section 4. For further numerical approximation results for SDEs with superlinearly growing coefficients, see e.g. , , ,  and the references in the introductory section of .
Since function evaluations, arithmetical operations and random variables are needed to compute the expression in (20) for , Theorem 2.1 shows that the Monte Carlo Euler method converges under the above assumptions with order with respect to the computational effort. This is the standard convergence order as in the global Lipschitz case (see e.g. ).
In this section, we numerically approximate the second moment of the solutions of two stochastic differential equations. First we simulate the stochastic Ginzburg-Landau equation with multiplicative noise, which we choose because an explicit solution of this SDE is available. Let be the solution of
for all . The exact solution at time is known explicitly (see e.g. Section 4.4 in ) and is given by
The exact value of the second moment is not known. Instead we use the exact solution (23) at time to approximate the second moment. For this, we approximate the Lebesgue integral in the denominator of (23) with a Riemann sum with summands. Moreover, we approximate the second moment at time by a Monte Carlo simulation with independent approximations of . This results in the approximate value .
Next we approximate the second moment at time with the Monte Carlo Euler method. We will sample one random and calculate the Monte Carlo Euler approximations for this realization for different discretization step sizes. More precisely, Table 1
shows the Monte Carlo Euler approximation
In Figure 1, the approximation error of these Monte Carlo Euler approximations, i.e., the quantity
is plotted against for every . Note that is the computational effort up to a constant. The three order lines in Figure 1 correspond to the convergence orders , and . Hence, Figure 1 indicates that the Monte Carlo Euler method converges in the case of the stochastic Ginzburg-Landau equation (22) with its theoretically predicted order .
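The order lines in such a figure correspond to slopes in a log-log plot of error against computational effort. As a sketch of how these slopes are read, the following fits a least-squares line through synthetic errors decaying like $N^{-1}$ (a stand-in for real data, not the paper's numbers) plotted against an effort of order $N^3$:

```python
import math

# Hypothetical error data decaying like err ~ C * N^{-1} (the weak
# order of Euler's scheme); the computational effort is of order N^3
# when the number of Monte Carlo runs grows like N^2.
Ns = [2**k for k in range(4, 12)]
errors = [3.0 / n for n in Ns]          # synthetic stand-in data
efforts = [n**3 for n in Ns]

# Least-squares slope of log(error) versus log(effort):
xs = [math.log(e) for e in efforts]
ys = [math.log(err) for err in errors]
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx)**2 for x in xs))
print(slope)   # close to -1/3
```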
for all . Here there exists no explicit expression for the solution or its second moments. As an approximation of the exact value , we now take a Monte Carlo Euler approximation with a larger . We choose and obtain the value as an approximation of . Table 2 shows the value of
the Monte Carlo Euler approximation
In Figure 2, the approximation error of these Monte Carlo Euler approximations, i.e., the quantity
is plotted against for every . Note that is the computational effort up to a constant. The three order lines in Figure 2 correspond to the convergence orders , and . Therefore, Figure 2 suggests that the Monte Carlo Euler method converges in the case of the SDE (26) with its theoretically predicted order .
First we introduce more notation. Recall the standard Brownian motion and the Euler approximations , , , from Section 2. Throughout this section, we use the stochastic process and the /-measurable mappings , , , given by
for every , , and every .
For general drift and diffusion functions, our proof of Theorem 2.1 somewhat buries the main new ideas. To explain these ideas, we now give a very rough outline (the precise estimates and assertions can be found in Subsections 4.2-4.7 below). The main step will be to establish uniform boundedness of the restricted absolute moments of the Euler approximations
where is a sequence of events whose probabilities converge to one sufficiently fast. From here, one can then adapt the arguments of the globally Lipschitz case to derive the modified numerically weak convergence (13) and to obtain Theorem 2.1.
for all and all . Fix and assume that the Euler approximation does not change sign until and including the -th approximation step for some which we fix for now, that is, for all . Then, using for all , we have
and iterating this inequality shows
for all . Thus the Euler approximation is bounded above by the dominating process . This dominating process has uniformly bounded absolute moments. So the absolute moments of the Euler approximation can only be unbounded if the Euler approximation changes its sign. Now if happens to be very large for one , then is negative with very large absolute value because of the negative cubic drift. Through a sequence of changes in sign, it could happen that the absolute value of the Euler approximation keeps increasing. To avoid this, we restrict the Euler approximation to an event on which the drift alone cannot change the sign of the Euler approximation. On , the Euler approximation changes sign only due to the diffusive part. As the diffusion function is at most linearly growing, these changes of sign can be controlled. In between consecutive changes of sign, the Euler approximation is again bounded by a dominating process as above. Through this pathwise comparison with a dominating process, we will establish the uniform boundedness (30) of the restricted absolute moments. For the details, we refer to Lemma 4.2, which is the key result in our proof of Theorem 2.1.
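The pathwise comparison can be checked on simulated paths. The sketch below uses a cubic-drift example with additive noise (an assumption, in the spirit of the introduction) and verifies the elementary invariant behind the argument: as long as the Euler approximation keeps its sign, the drift term only pushes it towards zero, so its magnitude is bounded by its value at the last change of sign plus the Brownian increments accumulated since then.

```python
import math
import random

rng = random.Random(1)
h, x0 = 0.01, 1.0
y = x0
w_sum = 0.0      # Brownian increments accumulated since the last sign change
anchor = x0      # value of the approximation at the last sign change (or start)
violations = 0
for n in range(10_000):
    dW = rng.gauss(0.0, math.sqrt(h))
    y_next = y + (-y**3) * h + dW      # Euler step with drift mu(x) = -x^3
    if y_next * y < 0:                 # sign change: reset the anchor
        anchor, w_sum = y_next, 0.0
    else:
        w_sum += dW
        # Since sign(y_next) = sign(y), the drift term -h*y**3 pushes the
        # approximation towards 0, so |y_next| <= |anchor| + |accumulated noise|:
        if abs(y_next) > abs(anchor) + abs(w_sum) + 1e-12:
            violations += 1
    y = y_next
print(violations)
```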
4.2 Notation and auxiliary lemmas
In order to show Theorem 2.1, the following objects are needed. First of all, define for every and every and let the /-measurable mapping be given by
for all . Moreover, let the -measurable mappings and be given by
for every , and every . For simplicity we also use given by and for every , and every . Using these ingredients, we now define the dominating process. Let the -measurable mapping be given by
for every , and every , where is given by for every and for every . As usual, for every , , and every . Note that only depends on the Brownian motion and the initial random variable for every and every . Therefore, , , is a sequence of independent random variables for every and every . We will show that the Euler approximation is dominated by the dominating process since the last change of sign. More formally, let the /-measurable mapping be given by
for every , and every . The random time is the last time of a change of sign of , , for every and every . Lemma 4.2 below shows that is bounded by on a certain event for every and every . Next we define these events , , , such that the drift alone cannot cause a change of sign and such that the increment of the Brownian motion is not extraordinarily large. Let the real number be given by
for every . Now define sets by
for every and every . Finally, define by
Note that is indeed in . Moreover, we write for all and all /-measurable mappings . Our proof of Theorem 2.1 uses the following lemmas.
Lemma 4.1 (Burkholder-Davis-Gundy inequality).
Let and let be /-measurable mappings with for all and with for all . Then we obtain
for every , where , , are universal constants.
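For orientation, the standard discrete-time form of the Burkholder-Davis-Gundy inequality reads as follows (the normalization of the constants in Lemma 4.1 may differ):

```latex
% Discrete-time Burkholder--Davis--Gundy inequality (standard form):
% for every p \in [1,\infty) there exist constants c_p, C_p > 0 such
% that for every martingale (M_k)_{k=0,\dots,n} with M_0 = 0 and
% increments Z_j = M_j - M_{j-1},
c_p \,\Big\| \Big( \sum_{j=1}^{n} Z_j^2 \Big)^{1/2} \Big\|_{L^p}
\;\le\; \Big\| \max_{0 \le k \le n} |M_k| \Big\|_{L^p}
\;\le\; C_p \,\Big\| \Big( \sum_{j=1}^{n} Z_j^2 \Big)^{1/2} \Big\|_{L^p}.
```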
The following lemma is the key result in our proof of Theorem 2.1.
Lemma 4.2 (Dominator Lemma).
The domination (45) might not look helpful at first sight since depends on the Euler approximation and since is in general not a stopping time for and . However, the dependence of the dominating process on the Euler approximation can be controlled as is bounded, and the dependence of on for all and all poses no problem as can be controlled uniformly in and . This is the subject of the following lemma.
Lemma 4.3 (Uniformly bounded absolute moments of the dominator).
Let for and be given by (37). Then we have
for all .
Corollary 4.4 (Bounded moments of the Euler approximation).
Let the Euler approximation for and be given by (29). Then we have