
# Error distributions for random grid approximations of multidimensional stochastic integrals

## Abstract

This paper proves joint convergence of the approximation error for several stochastic integrals with respect to local Brownian semimartingales, for nonequidistant and random grids. The conditions needed for convergence are that the Lebesgue integrals of the integrands tend uniformly to zero and that the squared variation and covariation processes converge. The paper also provides tools which simplify checking these conditions and which extend the range for the results. These results are used to prove an explicit limit theorem for random grid approximations of integrals based on solutions of multidimensional SDEs, and to find ways to “design” and optimize the distribution of the approximation error. As examples we briefly discuss strategies for discrete option hedging.

DOI: 10.1214/12-AAP858. Volume 23, Issue 2 (2013), pages 834–857.

Running title: Error distributions.

Supported by the Swedish Foundation for Strategic Research through the Gothenburg Mathematical Modelling Centre.

Carl Lindberg (carl.lindberg@alumni.chalmers.se) and Holger Rootzén (hrootzen@chalmers.se, http://www.math.chalmers.se/~rootzen/)

AMS subject classifications: Primary 60F05, 60H05, 91G20; secondary 60G44, 60H35.

Keywords: approximation error, random grid, joint weak convergence, multidimensional stochastic differential equation, stochastic integrals, random evaluation times, discrete option hedging, portfolio tracking error.

## 1 Introduction

The error in numerical approximations of stochastic integrals is a random variable or, if one is also interested in the development of the error over time, a stochastic process. Hence the most precise evaluation of the error that can be obtained is the distribution of the error. The prototype example is the Euler method for the stochastic integral $\int_0^t f(B(s),s)\,dB(s)$, for a Brownian motion $B$. The Euler method approximates the integrand with a step function which is constant between the "evaluation times" (or, in finance terminology, "intervention times") $t_k^n$ of the grid. This leads to the approximation $\int_0^t f_n(s)\,dB(s)$, with $f_n(s)=f(B(t_k^n),t_k^n)$ on the intervals $[t_k^n,t_{k+1}^n)$. In Rootzén (1980) it is shown that the normalized approximation error $U_n=\sqrt{n}\,\bigl(\int_0^t f(B(s),s)\,dB(s)-\int_0^t f_n(s)\,dB(s)\bigr)$ converges stably in distribution,

$$U_n \Rightarrow_s \frac{1}{\sqrt{2}}\int_0^t f'(B(s),s)\,dW(s),$$

where $W$ is a Brownian motion independent of $B$, $f'$ denotes the derivative of $f$ with respect to its first argument, and where Rényi's quite useful concept of stable convergence means that $U_n$ converges jointly with any sequence which converges in probability.
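A quick numerical sanity check of this limit is possible in the special case $f(x,s)=x$, where $\int_0^1 B\,dB=(B(1)^2-1)/2$ is available in closed form and the limit reduces to $W(1)/\sqrt{2}$, which has variance $1/2$. The sketch below assumes the equidistant grid $t_k^n=k/n$ on $[0,1]$ and the normalization $U_n=\sqrt{n}\,(\text{integral}-\text{approximation})$; these are illustrative choices made to match the stated limit, not quoted from Rootzén (1980).

```python
import numpy as np

# Monte Carlo check of the Euler error of int_0^1 B dB on the grid k/n.
# For f(x, s) = x the stable limit above is W(1)/sqrt(2), variance 1/2,
# so sqrt(n) * (true integral - Euler sum) should have variance near 1/2.
rng = np.random.default_rng(0)
n, paths = 200, 20000
dB = rng.normal(0.0, np.sqrt(1.0 / n), size=(paths, n))   # increments on [0, 1]
B = np.cumsum(dB, axis=1)                                 # B(t_1), ..., B(t_n)
B_left = np.hstack([np.zeros((paths, 1)), B[:, :-1]])     # B(t_k) at left endpoints
euler = np.sum(B_left * dB, axis=1)                       # Euler approximation
exact = 0.5 * (B[:, -1] ** 2 - 1.0)                       # int_0^1 B dB = (B(1)^2 - 1)/2
U_n = np.sqrt(n) * (exact - euler)
print(round(float(U_n.var()), 3))  # close to 0.5 (the asymptotic variance is 1/2)
```

For this particular integrand the variance of $U_n$ is in fact exactly $1/2$ for every $n$, so the Monte Carlo estimate only carries sampling noise.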

The intuition behind this result is that “the small wiggles of a Brownian path are asymptotically independent of the global behavior of the path.” The result has seen much further development, in particular, to the error in numerical solution schemes for SDEs, and has recently found significant application in measuring the risks associated with discrete hedging. A brief overview of some of this literature is given below.

The present paper generalizes this result in three ways: to joint convergence of the approximation errors for several stochastic integrals, to local Brownian semimartingales instead of Brownian motions, and to nonequidistant and random evaluation times. The tools which help us quantify the intuition given above are Girsanov's theorem, which shows how a multidimensional Brownian motion is affected by a change of measure, and Lévy's characterization of a multidimensional Brownian motion in terms of its square variation processes.

The conditions needed for convergence apply more generally than to approximation schemes. They are that the Lebesgue integrals of the integrands tend uniformly to zero in probability and that the square variation and covariation processes converge in probability. We additionally provide tools which simplify checking these conditions and which extend the range of the results. Further we apply these results to prove an explicit limit theorem for approximations of integrals based on solutions of multidimensional SDEs.

One center of interest for this paper is the possibility to improve the approximation by using variable and random grids. In particular we study approximation schemes where the deterministic evaluation times are replaced by random time points given by the recursion $\tau_0^n=0$ and

$$\tau_{k+1}^n=\tau_k^n+\frac{1}{n}\,\theta(\tau_k^n)$$

for a positive adapted process $\theta$. We also study how $\theta$ can be chosen to design the approximation error so that it has desirable properties. For example, these could be homogeneous evolution of risk, or minimal standard deviation of the approximation error.
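A minimal sketch of the recursion, with a deterministic and purely illustrative choice of $\theta$ (the paper only requires $\theta$ to be positive and adapted; the function name `random_grid` and the specific $\theta$ below are our own hypothetical choices):

```python
import numpy as np

def random_grid(theta, n, T):
    """Grid on [0, T] from the recursion tau_{k+1} = tau_k + theta(tau_k) / n."""
    taus = [0.0]
    while taus[-1] < T:
        taus.append(taus[-1] + theta(taus[-1]) / n)
    taus[-1] = T  # truncate the final step at the horizon
    return np.array(taus)

# theta(t) = 0.5 + t: small steps early, larger steps later, so the grid
# is denser near t = 0. Any positive (adapted) theta fits the recursion.
grid = random_grid(lambda t: 0.5 + t, n=100, T=1.0)
```

With this $\theta$ the step sizes grow from $0.005$ near $t=0$ to about $0.015$ near $t=1$, which illustrates how $\theta$ controls where evaluation times are concentrated.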

A main motivation for writing this paper is to provide tools to study discrete hedging which uses random intervention times. We exemplify these possibilities by using the general results to exhibit a “no bad days” strategy and a minimum standard deviation strategy for the Black–Scholes model.

Weak convergence theory for approximations of stochastic integrals and solutions to stochastic differential equations is developed in Rootzén (1980), Kurtz and Protter (1991a, 1991b, 1996) and, in particular, an extensive study of the Euler method for SDEs is provided by Jacod and Protter (1998). This theory has been used and extended to solve and analyze various aspects of approximation and hedging error problems in mathematical finance. As examples we mention Duffie and Protter (1992), Bertsimas, Kogan and Lo (2000), Hayashi and Mykland (2005), Tankov and Voltchkova (2009), Brodén and Wiktorsson (2010) and Fukasawa (2011). A Malliavin calculus approach to discrete hedging is used in Gobet and Temam (2001) and in a number of papers, which also consider variable but deterministic grids, by Geiss and coworkers; see Geiss and Toivola (2009) and the references therein. The main theoretical tool of Hayashi and Mykland (2005) is related to our general result, as discussed further below. The quite interesting paper Fukasawa (2011) also studies random grid approximations, for one-dimensional processes. The setting of Fukasawa's paper is more or less in the middle between our Theorems 2.1 and 3.2. The conditions used by Fukasawa are rather different from ours, and there do not seem to be any simple relations between his results and ours.

Now a brief overview of the paper. The next section, Section 2, contains the basic general theorem on multidimensional convergence for stochastic integrals with respect to local multidimensional Brownian semimartingales, and the tools to check conditions and extend the result. In Section 3 we give the explicit result for random grid approximations of stochastic integrals based on the solution of a multidimensional SDE. Section 4 investigates ways to design and optimize approximation errors, and in Section 5 this is applied to discrete financial hedging.

## 2 General results

This section contains two main results. The first one gives a means to establish multidimensional convergence of the distribution of stochastic integrals with more and more rapidly varying integrands, and the second one shows how convergence of integrals with simple integrands can be extended to more general integrands. In addition, Lemma 2.3 provides tools to check the assumptions of the theorems. Our main aim is the error in approximations of stochastic integrals, but the results may also be of more general use.

Let $C^d[0,\infty)$ be the space of continuous $\mathbb{R}^d$-valued functions defined on $[0,\infty)$, define $B$ by $B(x,t)=x(t)$, let $P$ be the probability measure which makes $B$ a Brownian motion starting at $0$ and let $\mathcal{F}_t$ be the completion of the $\sigma$-algebra generated by $\{B(s);\,0\le s\le t\}$. Further write $\mathcal{F}$ for the smallest $\sigma$-algebra which contains all the $\mathcal{F}_t$. Until further notice is given, all random variables we consider are defined on the filtered probability space $(C^d[0,\infty),\mathcal{F},\{\mathcal{F}_t\},P)$. Weak convergence will be for random variables (or "processes") with values in $C^{d'}[0,T]$, the space of continuous $d'$-dimensional functions defined on the time interval $[0,T]$, and with respect to the uniform metric. Usually the dimension of the processes will be clear from the context, and then, for brevity, we write $C[0,T]$ instead of $C^{d'}[0,T]$, and just write $\Rightarrow$ for weak convergence.

Weak convergence is stable (or "Rényi-stable") if it holds on any subset of the sample space which has positive probability, and the convergence is mixing (or "Rényi-mixing") if, in addition, the limit is the same on any such subset. In the present setting this is specified by the definition which follows below. To appreciate part (ii) of the definition, recall that convergence in distribution often is written as $X_n\Rightarrow X$, but that in this notation $X$ is not a random variable defined on some probability space. It is just a convenient notation for the limiting distribution of $X_n$. However, one can, of course, construct a random variable with this distribution, to give $X$ a life of its own.

###### Definition

(i) Let $X_n$ be a sequence of random variables defined on the same probability space and with values in $C[0,T]$. Then $X_n$ converges stably if $E[Uf(X_n)]$ converges for any bounded continuous function $f$ and any bounded measurable random variable $U$. If, in addition,

$$\lim_n E[Uf(X_n)]=E[U]\lim_n E[f(X_n)],\tag{1}$$

then the convergence is mixing.

(ii) If $X_n$ converges stably, then it is always possible to enlarge the probability space and construct a new random variable $X$ on the enlarged probability space such that $E[Uf(X_n)]\to E[Uf(X)]$ for all bounded random variables $U$; see Aldous and Eagleson (1978). Thus, with this construction we can write stable convergence as $X_n\Rightarrow_s X$. If the convergence, in addition, is mixing, then $X$ is independent of $\mathcal{F}$.

It is straightforward to see that to establish stable or mixing convergence it is enough to prove convergence of $E[Uf(X_n)]$ for strictly positive $U$ with $EU=1$. Further, see Aldous and Eagleson (1978), $X_n\Rightarrow_s X$ if and only if $(X_n,Y_n)\Rightarrow(X,Y)$ for any sequence of random variables $Y_n$ which converges in probability to $Y$, if and only if $X_n\Rightarrow X$ with respect to $P(\cdot\mid A)$ for any set $A$ with $P(A)>0$. (In the middle statement, convergence is with respect to the product topology.) Finally, if stability (or mixing) holds with respect to a sigma-algebra, and another sigma-algebra is independent of it, then stability (or mixing) also holds with respect to the sigma-algebra generated by the two.

Let $X=(X_1,\ldots,X_d)$ be a continuous $d$-dimensional Brownian semimartingale defined on the space by

$$X_j(t)=\sum_{k=1}^d\int_0^t G_{j,k}(s)\,dB_k(s)+\int_0^t a_j(s)\,ds\tag{2}$$

with $G_{j,k}$ and $a_j$ adapted, and with $\int_0^T G_{j,k}(s)^2\,ds<\infty$ and $\int_0^T|a_j(s)|\,ds<\infty$ a.s. for all $j,k$. Further let $\{H^n_{i,j}\}$ be a $d\times d$-dimensional array of $\{\mathcal{F}_t\}$-adapted processes such that $\int_0^T H^n_{i,j}(s)^2\,ds<\infty$ a.s. for each $n$, and write

$$\{H^n_{i,j}\cdot X_j\} = \{H^n_{i,j}\cdot X_j;\,1\le i,j\le d\} = \biggl\{\int_0^t H^n_{i,j}(s)\,dX_j(s);\,1\le i,j\le d\biggr\}_{0\le t\le T}.$$

Thus $\{H^n_{i,j}\cdot X_j\}$ takes values in $C^{d\times d}[0,T]$. In the following we let $\to_p$ denote convergence in probability and take "positive" to mean the same as "nonnegative."

The form of the second condition, equation (5) of the following theorem, requires some explanation. Suppressing the index $n$ for simplicity of exposition, it says that $\int_0^t H^n_{i,j}G_{j,k}H^n_{l,m}G_{m,k}\,ds$ converges in probability to some absolutely continuous limit, which we temporarily write as $\int_0^t K^k_{(i,j),(l,m)}\,ds$. Since limits of positive variables are positive, we further assume that for each $s$ and $k$ the array $\{K^k_{(i,j),(l,m)}\}$ is "positive definite," that is, equivalently, that it can be obtained as the covariances of some array of random variables. The diagonal elements of the array are obtained from the limits of $\int_0^t(H^n_{i,j}G_{j,k})^2\,ds$ and hence it is natural to write them as $(H_{i,j}G_{j,k})^2$. Further, taking positive square roots we may then more generally write $K^k_{(i,j),(l,m)}=H_{i,j}G_{j,k}H_{l,m}G_{m,k}\,\rho^k_{(i,j),(l,m)}$. The array $\{\rho^k_{(i,j),(l,m)}\}$ then is the "correlation array" corresponding to the covariances $\{K^k_{(i,j),(l,m)}\}$. This gives the formulation (5). (If some diagonal element is zero, we just set the corresponding $H$'s and off-diagonal elements of $\rho^k$ to zero, and the diagonal elements to 1.)

Further, it is possible to find a "root" of $\rho^k$, that is, an array $\sigma^k$ such that $\rho^k_{(i,j),(l,m)}=\sum_{r,s}\sigma^k_{(i,j),(r,s)}\sigma^k_{(l,m),(r,s)}$. This can be seen by ordering the index set $\{(i,j);\,1\le i,j\le d\}$ linearly, say lexicographically, making the corresponding reordering of $\rho^k$ into a matrix, which then is positive definite, finding a root of this matrix, and then making the identification back to the array ordering.
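Numerically, the matrix step of this construction can be sketched as follows. The symmetric square root via an eigendecomposition is one convenient choice of root (a Cholesky factor would do equally well, since any root works), and the example correlation matrix is made up for illustration:

```python
import numpy as np

def array_root(rho):
    """Symmetric square root sigma of a positive semidefinite matrix rho,
    so that sigma @ sigma.T == rho."""
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return v @ np.diag(np.sqrt(w)) @ v.T

# A made-up 3x3 correlation matrix standing in for the lexicographically
# reordered array rho^k.
rho = np.array([[1.0, 0.3, 0.1],
                [0.3, 1.0, 0.2],
                [0.1, 0.2, 1.0]])
sigma = array_root(rho)
# sigma @ sigma.T recovers rho; mapping indices back to pairs (i, j)
# gives the array sigma^k used in the theorem.
```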

###### Theorem 2.1

Suppose that $\{H^n_{i,j}\}$ satisfies

$$\sup_{0\le t\le T}\biggl|\int_0^t H^n_{i,j}\,ds\biggr|\to_p 0,\qquad n\to\infty,\ 1\le i,j\le d,\tag{4}$$

and that

$$\int_0^t H^n_{i,j}G_{j,k}H^n_{l,m}G_{m,k}\,ds\to_p\int_0^t H_{i,j}G_{j,k}H_{l,m}G_{m,k}\,\rho^k_{(i,j),(l,m)}\,ds\tag{5}$$

as $n\to\infty$, for $1\le i,j,l,m,k\le d$ and $0\le t\le T$, and for some correlation array processes $\{\rho^k_{(i,j),(l,m)}\}$ and processes $H_{i,j}$ such that all $H_{i,j}G_{j,k}$ are positive. Let $\sigma^k$ be an arbitrary root of $\rho^k$; see the discussion just before the theorem. Then, for $X$ given by (2),

$$\{H^n_{i,j}\cdot X_j\}\Rightarrow_s\Biggl\{\sum_{r,s,k=1}^d H_{i,j}G_{j,k}\,\sigma^k_{(i,j),(r,s)}\cdot W_{r,s,k}\Biggr\}\tag{6}$$

as $n\to\infty$, where $\{W_{r,s,k}\}$ is a $d^3$-dimensional Brownian motion which is independent of $\mathcal{F}$.

This result simplifies in the special case when $X$ is just a Brownian motion $B$; see the following corollary. The corollary is close to Theorem A.1 of Hayashi and Mykland (2005). The differences are that the corollary makes the basic condition (4) explicit, gives a more detailed description of the limit distribution and has the more powerful conclusion of stable convergence.

In Theorem 2.1 we, for simplicity of notation, considered a square array $\{H^n_{i,j}\}$. This does not involve any loss of generality, but still, for later use in the proof of Theorem 2.1, it is convenient to formulate the corollary for a rectangular array.

###### Corollary 2.2

Suppose that (4) is satisfied for $1\le i\le d_1$, $1\le k\le d$, and that

$$\int_0^t H^n_{i,k}H^n_{j,k}\,ds\to_p\int_0^t H_{i,k}H_{j,k}\,\rho^k_{i,j}\,ds,\qquad n\to\infty,\tag{7}$$

for some correlation matrix processes $\{\rho^k_{i,j}\}$, where $1\le i,j\le d_1$, and positive processes $H_{i,k}$, and for $1\le k\le d$. Let $\sigma^k$ be a root of $\rho^k$. Then

$$\{H^n_{i,k}\cdot B_k\}\Rightarrow_s\Biggl\{\sum_{j=1}^{d_1}H_{i,k}\,\sigma^k_{i,j}\cdot W_{j,k}\Biggr\}\tag{8}$$

as $n\to\infty$, where $\{W_{j,k}\}$ is a $d_1\times d$-dimensional Brownian motion which is independent of $\mathcal{F}$.

The following lemma plays an important role in the proofs.

###### Lemma 2.3

Suppose that $H^n$ and $\eta$ are real-valued random processes with $\int_0^S(H^n)^2\,dt\le c$ a.s. for some positive constant $c$, and with $\int_0^S\eta^2\,dt<\infty$ a.s. Suppose further that

$$\sup_{0\le t\le S}\biggl|\int_0^t H^n\,ds\biggr|\to_p 0,\qquad n\to\infty.$$

Then

$$\sup_{0\le t\le S}\biggl|\int_0^t H^n\eta\,ds\biggr|\to_p 0,\qquad n\to\infty.\tag{9}$$

Proof.

Suppose first that there exists a sequence of processes $\{\eta_k\}$ such that

$$\int_0^S(\eta(t)-\eta_k(t))^2\,dt\to_p 0\qquad\text{as }k\to\infty,$$

$$\sup_{0\le t\le S}\biggl|\int_0^t H^n\eta_k(s)\,ds\biggr|\to_p 0\qquad\text{as }n\to\infty\text{ for each }k.$$

Then, by the Cauchy–Schwarz inequality,

$$\limsup_n\sup_{0\le t\le S}\biggl|\int_0^t H^n\eta\,ds\biggr|\le\limsup_n\sup_{0\le t\le S}\biggl|\int_0^t H^n\eta_k\,ds\biggr|+\limsup_n\sup_{0\le t\le S}\biggl|\int_0^t H^n(\eta-\eta_k)\,ds\biggr|\le 0+\sqrt{\limsup_n\int_0^S(H^n)^2\,dt}\,\sqrt{\int_0^S(\eta-\eta_k)^2\,dt},$$

which tends to 0 as $k\to\infty$, so that (9) holds.

Thus the lemma follows if there exists a sequence $\{\eta_k\}$ which satisfies the two requirements above.

Now, for each $k$ there exists a continuous process $\tilde\eta_k$, measurable in $t$ and $\omega$, such that $\int_0^S(\eta(t)-\tilde\eta_k(t))^2\,dt\to_p 0$ as $k\to\infty$. Briefly, to see this, note that if $\eta$ is approximated by convolving it with a sequence of "approximate $\delta$-functions," for example, with a sequence of centered normal densities with variance parameters tending to $0$, then the convolutions are measurable in $t$ and $\omega$ and for almost all $\omega$ converge to $\eta$ in $L^2[0,S]$. The existence of the sequence $\{\tilde\eta_k\}$ follows at once from this, since convergence a.s. implies convergence in probability.
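The convolution argument can be illustrated numerically: smoothing a discontinuous path with centered normal densities of shrinking variance produces continuous approximations whose $L^2[0,S]$ distance to the path decreases to zero. The path and the variance parameters below are our own illustrative choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

S, m = 1.0, 4000
t = np.linspace(0.0, S, m, endpoint=False)
dt = S / m
eta = np.where(t < 0.5, 1.0, -1.0)   # a path with a jump, so not continuous

def mollify(x, eps):
    """Convolve x with a centered normal density of standard deviation eps."""
    return gaussian_filter1d(x, sigma=eps / dt, mode='nearest')

# L^2[0, S] error for shrinking smoothing scales: should decrease towards 0.
errs = [dt * np.sum((eta - mollify(eta, eps)) ** 2) for eps in (0.1, 0.01, 0.001)]
```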

Next, with $\mathbf 1_A$ denoting the indicator function of a set $A$, for $\tilde\eta_{k,m}=\tilde\eta_k\mathbf 1_{\{|\tilde\eta_k|\le m\}}$ it follows that

$$\int_0^S(\tilde\eta_k(t)-\tilde\eta_{k,m}(t))^2\,dt\to_{\mathrm{a.s.}}0\qquad\text{as }m\to\infty$$

and thus, choosing $m=m(k)$ suitably, $\eta_k=\tilde\eta_{k,m(k)}$ satisfies the first one of the two relations above. Furthermore, the second one is easily seen to hold for $\eta_k$ of this form.

Proof of Theorem 2.1 and Corollary 2.2. We do this in reverse order, and first prove Corollary 2.2. For simplicity of notation we only prove the corollary for a two-dimensional Brownian motion, that is, for the case $d=2$. The general case is the same.

By Rootzén [(1980), Theorem 1.2], each marginal process $H^n_{i,k}\cdot B_k$ is tight in $C[0,T]$, and then also the entire $d_1\times d$-dimensional sequence is tight in $C^{d_1\times d}[0,T]$, so only stable finite-dimensional convergence remains to be proved. We prove this in two steps, where the first one follows along the lines of Rootzén (1980) and the second step uses the Cramér–Wold device. A final third step uses Corollary 2.2 to prove Theorem 2.1.

Step 1: Let $\psi^n_1,\psi^n_2$ be adapted processes such that, for $i=1,2$,

$$\sup_{0\le t\le T}\biggl|\int_0^t\psi^n_i\,ds\biggr|\to_p 0\tag{10}$$

and such that

$$\int_0^t(\psi^n_i)^2\,ds\to_p\int_0^t(\psi_i)^2\,ds\tag{11}$$

for some processes $\psi_i$. To make inverses well defined, we, without loss of generality, can assume that the $\psi^n_i$ and $\psi_i$ are defined also for $t>T$, and such that equations (10) and (11) hold with $T$ replaced by $S$ for any $S>T$, and with $\psi_i=1$ for $t>T$ and $i=1,2$. This affects neither the result to be proved nor the assumptions, and hence can be done without loss of generality.

Let $C[0,\infty)$ be the space of continuous real-valued functions defined on $[0,\infty)$ and endowed with the topology of uniform convergence on compact sets; see Whitt (1970). Let the random variable $U$ be strictly positive with $EU=1$, and assume the functional $f$ on $C[0,\infty)$ is bounded and continuous. Further, set $\tau_n(t)=\int_0^t\{(\psi^n_1)^2+(\psi^n_2)^2\}\,ds$, let $\tau(t)=\int_0^t\{\psi_1^2+\psi_2^2\}\,ds$ and define $\tau_n^{-1}$ by $\tau_n^{-1}(t)=\inf\{s;\,\tau_n(s)>t\}$. Additionally let $\tilde W$ be a one-dimensional Brownian motion which is independent of $\mathcal{F}$. We first prove that

$$EUf\biggl(\int_0^{\tau_n^{-1}(\cdot)}\psi^n_1\,dB_1+\int_0^{\tau_n^{-1}(\cdot)}\psi^n_2\,dB_2\biggr)\to Ef(\tilde W(\cdot)),\tag{12}$$

for each such $f$ and $U$, so that $\int_0^{\tau_n^{-1}(\cdot)}\psi^n_1\,dB_1+\int_0^{\tau_n^{-1}(\cdot)}\psi^n_2\,dB_2\Rightarrow_s\tilde W$, mixing, on $C[0,\infty)$.

Now, define a new probability measure $Q$ by $dQ=U\,dP$, and write $E_Q$ for expectation taken with respect to $Q$. Then, by Girsanov's theorem [Rogers and Williams (2000), Theorem IV 38.5] there exists an adapted square integrable process $c=(c_1,c_2)$ such that $\tilde B=(B_1-\int_0^\cdot c_1\,ds,\ B_2-\int_0^\cdot c_2\,ds)$ is a Brownian motion under $Q$.

Hence,

$$EUf\biggl(\int_0^{\tau_n^{-1}(\cdot)}\psi^n_1\,dB_1+\int_0^{\tau_n^{-1}(\cdot)}\psi^n_2\,dB_2\biggr)=E_Qf\biggl(\int_0^{\tau_n^{-1}(\cdot)}\psi^n_1\,d\tilde B_1+\int_0^{\tau_n^{-1}(\cdot)}\psi^n_2\,d\tilde B_2+\int_0^{\tau_n^{-1}(\cdot)}\psi^n_1c_1\,ds+\int_0^{\tau_n^{-1}(\cdot)}\psi^n_2c_2\,ds\biggr).\tag{13}$$

Under $Q$ the process $\int_0^{\tau_n^{-1}(\cdot)}\psi^n_1\,d\tilde B_1+\int_0^{\tau_n^{-1}(\cdot)}\psi^n_2\,d\tilde B_2$ has the same distribution as $\tilde W$ [Rogers and Williams (2000), Theorem IV 34.1]. Further, by Lemma 2.3, we have that $\sup_{0\le t\le S}|\int_0^{\tau_n^{-1}(t)}\psi^n_ic_i\,ds|\to_p 0$, for any fixed $S$. Since $f$ is bounded and continuous on $C[0,\infty)$, these two facts prove (12), and hence mixing convergence on $C[0,\infty)$.

It thus follows from (11) that $\tau_n(t)\to_p\tau(t)$, and hence, by composing with $\tau_n$ [cf. Billingsley (1999), page 145], that

$$\int_0^t\psi^n_1\,dB_1+\int_0^t\psi^n_2\,dB_2\Rightarrow_s\tilde W(\tau(t))\tag{14}$$

in $C[0,\infty)$, and hence, in particular, in $C[0,T]$.

Step 2: Finite-dimensional stable convergence now follows by standard but notationally complicated Cramér–Wold arguments. To lessen complications we here only consider two basic cases, and leave the general argument to the reader. Thus, first, let $\psi^n_i=b_iH^n_{1,i}\mathbf 1_{[0,t_i]}$ for $i=1,2$, with $0\le t_1,t_2\le T$. Equation (7) implies that

$$\tau_n(t)\to_p\tau(t)=b_1^2\int_0^{t\wedge t_1}(H_{1,1})^2\,ds+b_2^2\int_0^{t\wedge t_2}(H_{1,2})^2\,ds$$

so that by (14),

$$b_1\int_0^{t\wedge t_1}H^n_{1,1}\,dB_1+b_2\int_0^{t\wedge t_2}H^n_{1,2}\,dB_2\Rightarrow_s\tilde W\biggl(b_1^2\int_0^{t\wedge t_1}(H_{1,1})^2\,ds+b_2^2\int_0^{t\wedge t_2}(H_{1,2})^2\,ds\biggr).$$

Now, using elementary properties of Brownian motion together with Rogers and Williams [(2000), Theorem IV 34.1] we have that $\tilde W(\tau(t))$ has the same distribution, and the same dependency with any $\mathcal{F}$-measurable variable, as

$$b_1\int_0^{t\wedge t_1}H_{1,1}\,dW_{1,1}+b_2\int_0^{t\wedge t_2}H_{1,2}\,dW_{1,2}$$

for independent Brownian motions $W_{1,1},W_{1,2}$, so that we by (14) have established that $b_1\int_0^{t_1}H^n_{1,1}\,dB_1+b_2\int_0^{t_2}H^n_{1,2}\,dB_2\Rightarrow_s b_1\int_0^{t_1}H_{1,1}\,dW_{1,1}+b_2\int_0^{t_2}H_{1,2}\,dW_{1,2}$, for any real numbers $b_1,b_2$. In particular stable two-dimensional convergence of $(H^n_{1,1}\cdot B_1,H^n_{1,2}\cdot B_2)$ to $(H_{1,1}\cdot W_{1,1},H_{1,2}\cdot W_{1,2})$ follows by Cramér–Wold.

If we instead take $\psi^n_1=b_1H^n_{1,1}\mathbf 1_{[0,t_1]}+b_2H^n_{2,1}\mathbf 1_{[0,t_2]}$ and $\psi^n_2=0$ then, by (7),

$$\tau_n(t)\to_p\tau(t)=b_1^2\int_0^{t\wedge t_1}(H_{1,1})^2\,ds+2b_1b_2\int_0^{t\wedge t_1\wedge t_2}H_{1,1}H_{2,1}\rho^1_{1,2}\,ds+b_2^2\int_0^{t\wedge t_2}(H_{2,1})^2\,ds.$$

Furthermore, similarly as before and recalling that the matrix $\sigma^1$ is a root of the correlation matrix $\rho^1$, it can be seen that $\tilde W(\tau(t))$ then has the same distribution, and the same dependency with any $\mathcal{F}$-measurable variable, as

$$b_1\biggl(\int_0^{t\wedge t_1}H_{1,1}\sigma^1_{1,1}\,dW_{1,1}+\int_0^{t\wedge t_1}H_{1,1}\sigma^1_{1,2}\,dW_{2,1}\biggr)+b_2\biggl(\int_0^{t\wedge t_2}H_{2,1}\sigma^1_{2,1}\,dW_{1,1}+\int_0^{t\wedge t_2}H_{2,1}\sigma^1_{2,2}\,dW_{2,1}\biggr).$$

Reasoning as above we get that

$$b_1\int_0^{t_1}H^n_{1,1}\,dB_1+b_2\int_0^{t_2}H^n_{2,1}\,dB_1\Rightarrow_s b_1\int_0^{t_1}H_{1,1}\sigma^1_{1,1}\,dW_{1,1}+b_1\int_0^{t_1}H_{1,1}\sigma^1_{1,2}\,dW_{2,1}+b_2\int_0^{t_2}H_{2,1}\sigma^1_{2,1}\,dW_{1,1}+b_2\int_0^{t_2}H_{2,1}\sigma^1_{2,2}\,dW_{2,1}$$

for independent Brownian motions $W_{1,1},W_{2,1}$. Since $b_1$ and $b_2$ are arbitrary, this proves stable two-dimensional convergence of $(H^n_{1,1}\cdot B_1,H^n_{2,1}\cdot B_1)$. A general proof of Corollary 2.2 is only notationally more complicated.

We next use Corollary 2.2 to obtain the conclusion of Theorem 2.1.

Step 3: By Lemma 2.3, if $\{H^n_{i,j}\}$ satisfies (4), then $\sup_{0\le t\le T}|\int_0^t H^n_{i,j}a_j\,ds|\to_p 0$, for all $i,j$, and hence the general result follows if we can prove that the result of the theorem holds for the case when all the $a_j$ are identically zero. Thus, to find the limit of $\{H^n_{i,j}\cdot X_j\}$ one only has to consider

$$\Biggl\{\sum_{k=1}^d H^n_{i,j}G_{j,k}\cdot B_k\Biggr\}.$$

Again by Lemma 2.3, if $\{H^n_{i,j}\}$ satisfies (4), then

$$\sup_{0\le t\le T}\biggl|\int_0^t H^n_{i,j}G_{j,k}\,ds\biggr|\to_p 0.\tag{15}$$

Now, making the definition $\tilde H^n_{(i,j),k}=H^n_{i,j}G_{j,k}$ and replacing the index $i$ in (8) by the "multiindex" $(i,j)$, convergence of the array follows from Corollary 2.2 with $d_1=d^2$. The result (6) then follows by summing over $k$ and writing $W_{r,s,k}$ for $W_{(r,s),k}$.

We now change to a more general setup, from Brownian semimartingales to general processes which are defined on filtered probability spaces $(\Omega,\mathcal{F},\{\mathcal{F}_t\},P)$. Here $\mathcal{F}$ is a $P$-complete $\sigma$-algebra and $\{\mathcal{F}_t\}$ is a filtration which satisfies the usual hypotheses (but which is not necessarily generated by a Brownian motion). The following definition is key to our goal. We give it for vector valued processes. The definition for matrix valued processes is analogous.

###### Definition

Let $\{Z^n\}$ be a sequence of continuous $\mathbb{R}^d$-valued semimartingales defined on $(\Omega,\mathcal{F},\{\mathcal{F}_t\},P)$, and assume that $Z^n\Rightarrow Z$. The sequence $\{Z^n\}$ is good if for any sequence $\{H^n\}$ of $\mathbb{R}^d$-valued adapted càdlàg stochastic processes defined on $(\Omega,\mathcal{F},\{\mathcal{F}_t\},P)$ such that $(H^n,Z^n)\Rightarrow(H,Z)$, there exists a filtration $\{\mathcal{H}_t\}$ such that $Z$ is an $\{\mathcal{H}_t\}$-semimartingale and $H$ is an $\{\mathcal{H}_t\}$-adapted càdlàg process, and $\int H^n\,dZ^n\Rightarrow\int H\,dZ$.

The following criterion is sufficient for goodness; see, for example, Theorem 2.2 in Kurtz and Protter (1991a).

###### Definition

A sequence $\{Z^n\}$ of continuous $\mathbb{R}^d$-valued semimartingales is said to have uniformly controlled variations (UCV) if for each $n$ there exist decompositions $Z^n=M^n+A^n$, into a local martingale and a process of finite variation, such that

$$\sup_n E_n\biggl\{[M^n,M^n]_T+\int_0^T|dA^n_s|\biggr\}<\infty.$$

The next theorem combined with Theorem 2.1 will give the asymptotic distributions of approximation errors for stochastic integrals. If, in addition to the conditions of the theorem, $f$ is bounded, then the result follows from Theorem 3.5 in Kurtz and Protter (1991b). However, in the present setting the result holds also without the boundedness condition, and it is further possible to give a quite simple proof. In the theorem, the $\tau^n_k$ are $\{\mathcal{F}_t\}$-stopping times, and $\eta_n$ is defined by $\eta_n(s)=\tau^n_k$, for $\tau^n_k\le s<\tau^n_{k+1}$.

###### Theorem 2.4

Let $Y$ be a continuous $\mathbb{R}^d$-valued $\{\mathcal{F}_t\}$-semimartingale, and suppose that $f=(f_1,\ldots,f_d)$ is continuously differentiable. Assume that $\eta_n$ tends to the identity in probability, and let $\{\lambda_n\}$ be a positive sequence converging to infinity. Further, set

$$U_n=\lambda_n\int\bigl(f(Y)-f(Y\circ\eta_n)\bigr)\,dY:=\lambda_n\sum_{i=1}^d\int\bigl(f_i(Y)-f_i(Y\circ\eta_n)\bigr)\,dY_i$$

and define

$$Z^n_{ij}(t)=\lambda_n\int_0^t\bigl(Y_i(s)-Y_i\circ\eta_n(s)\bigr)\,dY_j(s).\tag{16}$$

Suppose that $\{Z^n\}$ is good, and that $Z^n\Rightarrow_s Z$. Then $U_n\Rightarrow_s U$, where

$$U=\sum_{i,j=1}^d\int\frac{\partial f_j(Y)}{\partial y_i}\,dZ_{ij}.$$

Since $\eta_n$ is nondecreasing, pointwise convergence in probability in $[0,T]$, as assumed in the theorem, is equivalent to uniform convergence in probability in $[0,T]$. Below we will use this without further comment.

Proof of Theorem 2.4. For simplicity of exposition, we assume that $d=1$. By the continuous mapping theorem we have that $(Y,Z^n)\Rightarrow_s(Y,Z)$. Since $Y$ is continuous, and $\eta_n$ converges uniformly in probability to the identity, this in turn can be seen to imply that $(Y\circ\eta_n,Z^n)\Rightarrow_s(Y,Z)$, for example, by using the Skorokhod translation of convergence in distribution to convergence a.s.

We now define

$$g(x,y)=\frac{f(x)-f(y)}{x-y},$$

where we make the continuous choice $g(x,x)=f'(x)$ when the denominator vanishes. The function $g$ is uniformly continuous on compact sets, so the continuous mapping theorem applies to it. Now,

 Un=λ