On arbitrarily slow convergence rates for strong numerical approximations of Cox–Ingersoll–Ross processes and squared Bessel processes
Abstract
Cox–Ingersoll–Ross (CIR) processes are extensively used in state-of-the-art models for the approximate pricing of financial derivatives. In particular, CIR processes are routinely employed to model instantaneous variances (squared volatilities) of foreign exchange rates and stock prices in Heston-type models, and they are also intensively used to model short-rate interest rates. The prices of the financial derivatives in the above-mentioned models are very often computed approximately by means of explicit or implicit Euler- or Milstein-type discretization methods based on equidistant evaluations of the driving noise processes. In this article we study the strong convergence speeds of all such discretization methods. More specifically, the main result of this article reveals that each such discretization method achieves at most a strong convergence order of δ/2, where δ is the dimension of the squared Bessel process associated to the considered CIR process. In particular, we thereby reveal that discretization methods currently employed in the financial industry may converge with arbitrarily slow strong convergence rates to the solution of the considered CIR process. This lays open the need for the development of more sophisticated approximation methods which are capable of solving CIR processes in the strong sense in a reasonable computational time and which therefore cannot belong to the class of algorithms that use equidistant evaluations of the driving noise processes.
Contents
 1 Introduction
 2 Basics of Cox–Ingersoll–Ross (CIR) processes and squared Bessel processes
 3 Basics of general SDEs
 4 Lower error bounds for CIR processes and squared Bessel processes in the case of a special choice of the parameters
  4.1 Setting
  4.2 Properties of the constructed random objects
   4.2.1 The Feller boundary condition revisited
   4.2.2 One step in the construction of the Brownian motions
   4.2.3 Properties of the constructed random times
   4.2.4 Properties of the constructed Brownian motions
   4.2.5 Properties of the constructed squared Bessel processes
   4.2.6 On conditional distributions of the considered random objects
  4.3 Lower bounds for strong distances between the constructed squared Bessel processes
   4.3.1 A first very rough lower bound for strong distances between the constructed squared Bessel processes
   4.3.2 On conditional distances between the constructed squared Bessel processes
   4.3.3 A lower bound for hitting time probabilities
   4.3.4 A refined lower bound for strong distances between the constructed squared Bessel processes
  4.4 Proofs for the lower error bounds
 5 Lower error bounds for CIR processes and squared Bessel processes in the general case
1 Introduction
Stochastic differential equations (SDEs) are a key ingredient in a number of models from economics and the natural sciences. In particular, SDE-based models are used day after day in the financial engineering industry to approximately compute prices of financial derivatives. The SDEs appearing in such models are typically highly nonlinear and contain non-Lipschitz nonlinearities in the drift or diffusion coefficient. Such SDEs can in almost all cases not be solved explicitly, and it has been and still is a very active topic of research to approximate SDEs with non-Lipschitz nonlinearities; see, e.g., Hu [24], Gyöngy [14], Higham, Mao, & Stuart [21], Hutzenthaler, Jentzen, & Kloeden [27], Hutzenthaler & Jentzen [26], Sabanis [37, 38], and the references mentioned therein. In particular, in roughly the last five years several results have been obtained which demonstrate that approximation schemes may converge arbitrarily slowly; see Hairer, Hutzenthaler, & Jentzen [16], Jentzen, Müller-Gronbach, & Yaroslavtseva [28], Yaroslavtseva & Müller-Gronbach [40], Yaroslavtseva [39], and Gerencsér, Jentzen, & Salimova [12]. For example, Theorem 1.2 in [28] demonstrates that there exists an SDE whose solutions have all moments bounded but for which every approximation scheme that uses only finitely many evaluation points of the driving Brownian motion converges in the strong sense with an arbitrarily slow rate; see also [16, Theorem 1.3], [40, Theorem 3], [39, Theorem 1], and [12, Theorem 1.2] for related results. All the SDEs in the above examples are, however, purely academic, with no connection to applications. The key contribution of this work is to reveal that such slow convergence phenomena also arise in concrete models from applications.
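The discretization methods in question can be illustrated with a minimal sketch. The following Python snippet implements a truncated Euler scheme for a CIR process, assuming the common mean-reversion parametrization dX_t = κ(θ − X_t) dt + σ√(X_t) dW_t; the parameter names κ, θ, σ, the parameter values, and the choice of truncation variant are illustrative assumptions, not taken from this article.

```python
import numpy as np

def cir_truncated_euler(x0, kappa, theta, sigma, T, n, rng):
    """Truncated Euler scheme for dX = kappa*(theta - X) dt + sigma*sqrt(X) dW,
    based on n equidistant evaluations of the driving Brownian motion on [0, T]."""
    h = T / n
    dW = rng.normal(0.0, np.sqrt(h), size=n)  # equidistant Brownian increments
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        xk = np.maximum(x[k], 0.0)  # truncate at zero before sqrt and drift
        x[k + 1] = x[k] + kappa * (theta - xk) * h + sigma * np.sqrt(xk) * dW[k]
    return x

rng = np.random.default_rng(0)
path = cir_truncated_euler(x0=0.04, kappa=2.0, theta=0.04, sigma=0.3, T=1.0, n=200, rng=rng)
```

Note that the scheme uses the Brownian motion only through its values at the equidistant grid points — exactly the class of methods covered by the lower bounds discussed in this article.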
To be more specific, in this work we reveal that Cox–Ingersoll–Ross (CIR) processes and squared Bessel processes can, in general, not be solved approximately in the strong sense in a reasonable computational time by means of schemes using equidistant evaluations of the driving Brownian motion. The precise formulation of our result is the subject of the following theorem.
Theorem 1 (Cox–Ingersoll–Ross processes).
Let , satisfy , let be a probability space with a normal filtration , let be a Brownian motion, let be a adapted stochastic process with continuous sample paths which satisfies for all a.s. that
(1) 
Then there exists a real number such that for all it holds that
(2) 
Theorem 1 is an immediate consequence of Theorem 34 in Section 5 below. Upper error bounds for the strong approximation of CIR processes and squared Bessel processes, i.e., the question opposite to that of Theorem 1, have been intensively studied in the literature; see, e.g., Deelstra & Delbaen [10], Alfonsi [1], Higham & Mao [22], Berkaoui, Bossy, & Diop [3], Gyöngy & Rásonyi [15], Dereich, Neuenkirch, & Szpruch [11], Alfonsi [2], Hutzenthaler, Jentzen, & Noll [25], Neuenkirch & Szpruch [35], Bossy & Olivero Quinteros [5], Hutzenthaler & Jentzen [26], Chassagneux, Jacquier, & Mihaylov [6], Hefter & Herzwurm [17], and Hefter & Herzwurm [18] (for further approximation results, see, e.g., Milstein & Schoenmakers [33], Cozma & Reisinger [9], and Kelly & Lord [31]). In the following we relate our result to these results.
Using the truncated Milstein scheme with the corresponding error bound from Hefter & Herzwurm [18], we get that the lower bound obtained in (2) is essentially sharp. The precise formulation of this observation is the subject of the following corollary.
Corollary 2 (Cox–Ingersoll–Ross processes).
Let , satisfy , let be a probability space with a normal filtration , let be a Brownian motion, let be a adapted stochastic process with continuous sample paths which satisfies for all a.s. that
(3) 
Then there exist real numbers such that for all it holds that
(4) 
The lower bound in (4) is an immediate consequence of Theorem 1 and the upper bound in (4) is an immediate consequence of Hefter & Herzwurm [18, Theorem 2] applied to the truncated Milstein scheme. We conjecture that in the full parameter range the convergence order in (4) is equal to , since for scalar SDEs with coefficients satisfying standard assumptions a convergence order of one is optimal; see, e.g., Hofmann, Müller-Gronbach, & Ritter [23] and Müller-Gronbach [34]. Upper and lower error bounds for CIR processes are crucial due to the fact that CIR processes are a key ingredient in several models for the approximate pricing of financial derivatives on stocks (see, e.g., Heston [20]), interest rates (see, e.g., Cox, Ingersoll, & Ross [7]), and foreign exchange markets (see, e.g., Cozma & Reisinger [8]).
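Strong convergence orders such as those in (2) and (4) are commonly estimated numerically by coupling coarse and fine grids through the same Brownian path and using a very fine approximation as a proxy for the exact solution. The sketch below does this for a truncated Euler scheme; the parametrization dX = κ(θ − X)dt + σ√X dW, the scheme, and all parameter values are illustrative assumptions, not the method analyzed in this article.

```python
import numpy as np

def truncated_euler_terminal(dW, h, x0, kappa, theta, sigma):
    """Vectorized truncated Euler scheme; dW has shape (npaths, nsteps).
    Returns the approximation at the terminal time for every path."""
    x = np.full(dW.shape[0], x0)
    for k in range(dW.shape[1]):
        xp = np.maximum(x, 0.0)
        x = x + kappa * (theta - xp) * h + sigma * np.sqrt(xp) * dW[:, k]
    return x

rng = np.random.default_rng(1)
T, x0, kappa, theta, sigma = 1.0, 0.04, 2.0, 0.04, 0.3
npaths, nfine = 2000, 1024

# fine-grid increments; the fine approximation serves as a proxy solution
dW_fine = rng.normal(0.0, np.sqrt(T / nfine), size=(npaths, nfine))
proxy = truncated_euler_terminal(dW_fine, T / nfine, x0, kappa, theta, sigma)

errors = {}
for n in (8, 32, 128):
    # coarsen the SAME Brownian path by summing consecutive fine increments
    dW = dW_fine.reshape(npaths, n, nfine // n).sum(axis=2)
    approx = truncated_euler_terminal(dW, T / n, x0, kappa, theta, sigma)
    errors[n] = np.mean(np.abs(approx - proxy))  # strong (L^1) error estimate
```

The observed decay of `errors[n]` in `n` yields an empirical strong convergence rate; the point of this article is that, for CIR parameters with small associated squared Bessel dimension, no scheme of this type can beat the rate in (2).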
The remainder of this article is organized as follows. In Section 2 we review a few elementary properties of CIR processes and squared Bessel processes. In Section 3 we present some basic results for general SDEs. In Section 4 we prove the lower error bound for a specific parameter range, which is then generalized in Section 5.
2 Basics of Cox–Ingersoll–Ross (CIR) processes and squared Bessel processes
2.1 Setting
Let be a complete probability space, let be a Brownian motion, and for every , and every Brownian motion let be a adapted stochastic process with continuous sample paths which satisfies that for all it holds a.s. that
(5) 
2.2 A comparison principle
Lemma 3.
Assume the setting in Section 2.1 and let , satisfy and . Then
(6) 
2.3 A priori moment bounds
Lemma 4.
Assume the setting in Section 2.1 and let , , . Then
(7) 
2.4 Lipschitz continuity in the initial value
In the next result, Lemma 5, we recall a well-known explicit formula for the first moments of CIR processes and squared Bessel processes (cf., e.g., Cox, Ingersoll, & Ross [7, Equation (19)]).
Lemma 5 (An explicit formula for the first moment).
Assume the setting in Section 2.1 and let , . Then
(8) 
Proof of Lemma 5.
Throughout this proof let be the function which satisfies for all that
(9) 
Observe that Lemma 4, the fact that , , is a stochastic process with continuous sample paths, and Lebesgue’s dominated convergence theorem ensure that is a continuous function. This and (5) show that for all it holds that
(10) 
This demonstrates that is continuously differentiable and that for all it holds that
(11) 
Hence, we obtain that for all it holds that
(12) 
This and the fact that
(13) 
complete the proof of Lemma 5. ∎
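The proof above derives a linear ODE for the first moment and solves it. In the standard parametrization dX_t = (δ − γX_t) dt + β√(X_t) dW_t (an assumption here, since the symbols of Lemma 5 are not fixed above), the resulting closed form is E[X_t] = x e^{−γt} + (δ/γ)(1 − e^{−γt}). The snippet checks this closed form against a direct numerical integration of the moment ODE m′(t) = δ − γ m(t), m(0) = x.

```python
import math

def closed_form_mean(x, delta, gamma, t):
    # candidate first-moment formula for dX = (delta - gamma*X) dt + beta*sqrt(X) dW
    return x * math.exp(-gamma * t) + (delta / gamma) * (1.0 - math.exp(-gamma * t))

def ode_mean(x, delta, gamma, t, steps=10_000):
    # integrate m'(t) = delta - gamma*m(t), m(0) = x, with the classical RK4 method
    h = t / steps
    m = x
    for _ in range(steps):
        k1 = delta - gamma * m
        k2 = delta - gamma * (m + 0.5 * h * k1)
        k3 = delta - gamma * (m + 0.5 * h * k2)
        k4 = delta - gamma * (m + h * k3)
        m += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return m

x, delta, gamma, t = 0.1, 0.08, 2.0, 1.0
cf = closed_form_mean(x, delta, gamma, t)
num = ode_mean(x, delta, gamma, t)
```

Note that the diffusion coefficient β does not enter the first moment, consistent with the fact that the stochastic integral in the proof has expectation zero.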
Lemma 6 (Lipschitz continuity).
Assume the setting in Section 2.1 and let , . Then
(14) 
2.5 The scaling property
Lemma 7.
Assume the setting in Section 2.1 and let , . Then
(16) 
2.6 Hitting times
Lemma 8 (The Feller boundary condition).
Assume the setting in Section 2.1 and let , . Then
(17) 
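For orientation, the classical Feller boundary dichotomy that Lemma 8 refers to can be summarized as follows; the parametrization below is a standard one and serves only as an illustration.

```latex
% Classical Feller boundary dichotomy for
%   dX_t = (\delta - \gamma X_t)\,dt + \beta \sqrt{X_t}\,dW_t,
%   X_0 = x \in (0,\infty), \ \delta, \gamma \in [0,\infty), \ \beta \in (0,\infty):
\[
  2\delta \ge \beta^2
  \quad \Longleftrightarrow \quad
  \mathbb{P}\bigl[ \forall\, t \in [0,\infty) \colon X_t > 0 \bigr] = 1 .
\]
% Equivalently: the dimension 4\delta/\beta^2 of the associated squared Bessel
% process is at least 2 if and only if X almost surely never reaches 0.
```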
Proof of Lemma 8.
Lemma 9 (Bounds for hitting times).
Assume the setting in Section 2.1 and let , , . Then there exists a real number such that for every it holds that
(19) 
Proof of Lemma 9.
Throughout this proof let be the real number given by , let be the function which satisfies for all that
(20) 
(Gamma function), and let be the function which satisfies for all , that
(21) 
There exists a real number which satisfies for every that
(22) 
see, e.g., Borodin & Salminen [4, Part I, Chapter IV, Section 6, last equation in 46 on page 79] (with , in the notation of [4, Part I, Chapter IV, Section 6, last equation in 46 on page 79]). This and the fact that imply that for every it holds that
(23) 
In the next step we note that for every it holds that the random variable is distributed with degrees of freedom (see, e.g., Revuz & Yor [36, Corollary XI.1.4]). Hence, we obtain that for all it holds that
(24) 
This and (23) imply that for all , it holds that
(25) 
Therefore, we obtain that for all it holds that
(26) 
This and Lemma 3 show that for all it holds that
(27) 
Hence, we obtain that
(28) 
This assures that
(29) 
The proof of Lemma 9 is thus completed. ∎
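The chi-square identity used in the proof above can be illustrated numerically for integer dimension: a squared Bessel process of dimension δ started at 0 satisfies B_t = ‖W_t‖² for a δ-dimensional Brownian motion W, so B_t/t is χ²_δ-distributed (cf. Revuz & Yor as cited above). A seeded Monte Carlo sanity check for δ = 3:

```python
import numpy as np

rng = np.random.default_rng(2)
delta, t, nsamples = 3, 0.25, 200_000

# squared Bessel process of integer dimension delta, started at 0, at time t:
# the squared Euclidean norm of a delta-dimensional Brownian motion at time t
W_t = rng.normal(0.0, np.sqrt(t), size=(nsamples, delta))
B_t = np.sum(W_t**2, axis=1)

mean_est = B_t.mean()  # chi-square scaling: E[B_t] = delta * t
var_est = B_t.var()    # and Var[B_t] = 2 * delta * t**2
```

For non-integer δ the same distributional statement holds with a general χ²_δ law, which is what the proof uses.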
3 Basics of general SDEs
3.1 Setting
Let be a Borel-measurable and universally adapted function (see Kallenberg [29, page 423] for the notion of a universally adapted function), let be continuous functions, assume that for every complete probability space , every normal filtration on , every Brownian motion , all adapted stochastic processes with continuous sample paths satisfying , and every it holds that
(30) 
assume that for every complete probability space , every normal filtration on , every Brownian motion , every /measurable function , and every it holds that
(31) 
and let be a complete probability space.
3.2 Brownian motion shifted by a stopping time
Lemma 10.
Assume the setting in Section 3.1, let be a normal filtration on , let be a Brownian motion, let be a stopping time, let be a /measurable function, let be the stochastic process which satisfies for all that , and let be the random variable given by . Then

it holds that is a Brownian motion,

it holds that and are independent, and

it holds that
(32)
Proof of Lemma 10.
Throughout this proof let be the normal filtration on which satisfies for all that . Observe that the fact that the function is /measurable ensures that is /measurable. In addition, note that, e.g., Kallenberg [29, Theorem 13.11] demonstrates that is a Brownian motion. This and the fact that is /measurable show that and are independent. Next observe that the stochastic process has continuous sample paths, is adapted, and satisfies that for all it holds a.s. that
(33) 
This establishes (32). The proof of Lemma 10 is thus completed. ∎
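Lemma 10's assertion that the process shifted by a stopping time is again a Brownian motion (the strong Markov property) can be probed numerically. In the sketch below, the random-walk discretization, the hitting level, and all parameter values are illustrative assumptions; the shifted increment should have mean 0 and variance equal to the elapsed time.

```python
import numpy as np

rng = np.random.default_rng(3)
npaths, nsteps, h, level = 5_000, 1_000, 1e-3, 0.3

# discretized Brownian paths on [0, nsteps*h]
increments = rng.normal(0.0, np.sqrt(h), size=(npaths, nsteps))
W = np.cumsum(increments, axis=1)

# stopping time: first grid time at which W >= level, capped at mid-path
hit = np.argmax(W >= level, axis=1)
hit = np.where((W >= level).any(axis=1), hit, nsteps // 2)
hit = np.minimum(hit, nsteps // 2)  # leave room for the shifted increment

# shifted increment B_s = W_{tau+s} - W_tau over s = 300 grid steps
s_steps = 300
idx = np.arange(npaths)
B_s = W[idx, hit + s_steps] - W[idx, hit]

mean_est = B_s.mean()  # strong Markov property: should be close to 0
var_est = B_s.var()    # and close to s_steps * h
```

The capped hitting time is itself a stopping time, so the statistics of `B_s` should be indistinguishable (up to discretization and Monte Carlo error) from those of a fresh Brownian increment.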
Lemma 11.
Assume the setting in Section 3.1, let be Brownian motions, let be a random variable, assume for all that , let be a random variable, assume that and are independent, and assume that and are independent. Then
(34) 
Proof of Lemma 11.
Observe that it holds that
(35) 
The fact that and have continuous sample paths hence shows that
(36) 
The assumption that is universally adapted therefore proves that
(37) 
This and the fact that the stochastic process , , has left-continuous sample paths establish (34). The proof of Lemma 11 is thus completed. ∎
Lemma 12.
Assume the setting in Section 3.1, for every let be a normal filtration on , assume that , for every let be a Brownian motion, for every let be a stopping time, let be the normal filtration on which satisfies for all that
(38) 
let be the random variable given by , let be the stochastic process which satisfies for all that
(39) 
let be a random variable which is /measurable, and let be the random variable given by . Then

it holds that ,

it holds that ,

it holds that is a stopping time,

it holds that is a stopping time,

it holds that is a Brownian motion,

it holds that and are independent,

it holds that and are independent, and

it holds that
(40)
Proof of Lemma 12.
Throughout this proof let be the function which satisfies for all that (distribution function of the standard normal distribution) and for every let be the random variable given by . Observe that for every it holds that is a stopping time. The fact that ensures that it holds for every that and
(41) 
This proves item (i). Next observe that for every , it holds that
(42) 
and
(43) 
Hence, we obtain for every , that
(44) 
This proves item (ii). Observe that for every it holds that
(45) 
and
(46) 
This proves item (iii). In the next step we note that for every it holds that
(47) 
and
(48) 
This proves item (iv). The strong Markov property of Brownian motion (see, e.g., Kallenberg [29, Theorem 13.11]) implies that it holds for every that is a Brownian motion independent of . This and the fact that for every , it holds that demonstrate that for every , , , it holds that