An SDE approximation for stochastic differential delay equations with state-dependent colored noise


Abstract

We consider a general multidimensional stochastic differential delay equation (SDDE) with state-dependent colored noises. We approximate it by a stochastic differential equation (SDE) system and calculate the limit of the approximating system as the time delays and the correlation times of the noises go to zero. The main result is proven using a theorem on convergence of stochastic integrals by Kurtz and Protter. It formalizes and extends a result obtained in the analysis of a noisy electrical circuit with delayed state-dependent noise, and it may be used as a working SDE approximation of an SDDE modeling a real system in which the noises are correlated in time and the response to the noise sources depends on the system's state at a previous time.

Keywords: Stochastic differential equations, stochastic differential delay equations, colored noise, noise-induced drift

AMS Subject Classification: 60H10, 34F05

1 Introduction

Stochastic differential equations (SDEs) are widely employed to describe the time evolution of systems encountered in physics, biology, and economics, among others [1, 2, 3]. It is often natural to introduce a delay into the equations in order to account for the fact that the system’s response to changes in its environment is not instantaneous. We are, therefore, led to consider stochastic differential delay equations (SDDEs). A survey of the theory of SDDEs, including theorems on existence and uniqueness of solutions as well as stochastic stability, can be found in Ref. [4]. In addition to numerous other results, a treatment of the (appropriately defined) Markov property and the concept of a generator are contained in Ref. [5]. Numerical aspects of SDDEs are treated in Ref. [6]. For other aspects of the theory see Ref. [7].

Since the theory of SDDEs is much less developed than the theory of SDEs [1, 2, 3], it is useful to introduce working approximations of SDDEs by SDEs. For example, such an approximation was applied in Ref. [8] to a physical system with one dynamical degree of freedom (the output voltage of a noisy electrical circuit). It was used there to show that the experimental system shifts from obeying Stratonovich calculus to obeying Itô calculus as the ratio between the driving noise correlation time and the feedback delay time changes (see [9] for related work). In this article we employ the systematic and rigorous method developed in Ref. [10] to obtain much more general results which are applicable to systems with an arbitrary number of degrees of freedom, driven by several colored noises, and involving several time delays. More precisely, we derive an approximation of SDDEs driven by colored noise (or noises) in the limit in which the correlation times of the noises and the response delays go to zero at the same rate. The approximating equation contains noise-induced drift terms which depend on the ratios of the delay times to the noise correlation times.

An equation related to, but simpler than, the one considered here was studied in a different context in Ref. [11]. There, the limit that the authors derive is analogous to our Theorem 1. Results on small delay approximations for SDDEs of a different type than the one considered here are contained in Ref. [12]; see also Ref. [13]. We are not aware of any previous studies addressing the question of the effective equation in the limit as the time delays and correlation times of the noises go to zero, other than a less mathematical and less general treatment in our previous work [8]. In fact, the present paper was motivated by [8] and can be seen as its mathematically formal extension.

2 Mathematical Model

We consider the multidimensional SDDE system

(1)

where $x_t = (x_t^1, \dots, x_t^m)^T$ is the state vector (the superscript $T$ denotes transposition), where $f$ is a vector-valued function describing the deterministic part of the dynamical system,

where $g$ is a matrix-valued function, $x_t^{\delta} = (x_{t-\delta_1}^1, \dots, x_{t-\delta_m}^m)^T$ is the delayed state vector (note that each component $x^i$ is delayed by a possibly different amount $\delta_i$), and $\eta_t = (\eta_t^1, \dots, \eta_t^n)^T$ is a vector of independent noises, where the $\eta^j$ are colored (harmonic) noises with characteristic correlation times $\tau_j$. These stochastic processes (defined precisely in equation (5)) have continuously differentiable realizations, which makes the realizations of the solution process $x_t$ twice continuously differentiable under the natural assumptions on $f$ and $g$ that are made in the statement of Theorem 1.
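
For orientation, a minimal scalar illustration of such a system, with a single delay $\delta$, a single noise $\eta_t$ with correlation time $\tau$, and purely illustrative functions $f$ and $g$ (none of this is taken from a specific application), is

\[
dx_t = f(x_t)\,dt + g(x_{t-\delta})\,\eta_t\,dt ,
\]

in which the deterministic part is evaluated at the current state while the noise coefficient is evaluated at the state a time $\delta$ in the past.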

Equation (1) is written componentwise as

(2)

For each $i$, we introduce a separate process representing the delayed component $x_{t-\delta_i}^i$. In terms of these variables, equation (2) becomes

(3)

Expanding to first order in the delays $\delta_i$, we have $x_{t-\delta_i}^i \approx x_t^i - \delta_i\,\dot{x}_t^i$ and a corresponding first-order expansion of the coefficients $g_{ij}$ evaluated at the delayed state.

Substituting these approximations into equation (3), we obtain a new (approximate) system

where . We write these equations as the first order system

(4)

Supplemented by the equations defining the noise processes (see equation (5)), these equations become the SDE system we study in this article.
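
To illustrate the reduction described above, consider again the scalar example with a single delay $\delta$ and illustrative functions $f$ and $g$. Expanding the delayed quantities to first order in $\delta$ gives

\[
x_{t-\delta} \approx x_t - \delta\,\dot{x}_t , \qquad
g(x_{t-\delta}) \approx g(x_t) - \delta\, g'(x_t)\,\dot{x}_t ,
\]

and substituting these approximations into $\dot{x}_t = f(x_t) + g(x_{t-\delta})\,\eta_t$ yields the delay-free approximate equation

\[
\dot{x}_t = f(x_t) + g(x_t)\,\eta_t - \delta\, g'(x_t)\,\dot{x}_t\,\eta_t ,
\]

which, once solved for $\dot{x}_t$, is a first order differential equation driven by the colored noise $\eta_t$; the system (4) is the multidimensional analogue of this step.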

3 Derivation of Limiting Equation

We study the limit of the system (4) as the time delays and the correlation times of the colored noises go to zero. We take each noise $\eta_t^j$ to be a stationary harmonic noise process [14] defined as the stationary solution of the SDE

(5)

where the coefficients appearing in (5) are constants, $W_t$ is an $n$-dimensional Wiener process, and $\tau_j$ is the correlation time of the Ornstein-Uhlenbeck process obtained from (5) in an appropriate limit of these constants. The system (5) has a unique stationary measure. The distribution of the system's solution with an arbitrary (nonrandom) initial condition converges to this stationary measure as $t \to \infty$. The solution with the initial condition distributed according to the stationary measure defines a stationary process, whose realizations will play the role of colored noise in the SDE system (4). We note that as $\tau_j \to 0$, the $\eta^j$ component of the solution of equation (5) converges to a white noise (see the Appendix for details).
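
For reference, a harmonic noise process is commonly constructed (see, e.g., Ref. [14]) as the stationary solution of a damped, noisy oscillator system of the form

\[
d\eta_t = z_t\,dt , \qquad
dz_t = -\gamma\, z_t\,dt - \omega_0^2\, \eta_t\,dt + \sigma\, dW_t ,
\]

where $\gamma$, $\omega_0$ and $\sigma$ are positive constants; this particular parametrization is given only for illustration and need not coincide with the one used in equation (5). Since $d\eta_t = z_t\,dt$ with $z_t$ continuous, the realizations of $\eta_t$ are continuously differentiable, in contrast to those of an Ornstein-Uhlenbeck process, and an overdamped limit of such a system reduces the equation for $\eta_t$ to an Ornstein-Uhlenbeck equation.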

In taking the limit as the delay times and the noise correlation times go to zero, we assume that all the $\delta_i$ and $\tau_j$ stay proportional to a single characteristic time $\varepsilon$. That is, we let each delay and each correlation time be a constant multiple of $\varepsilon$, where the constants of proportionality remain fixed in the limit $\varepsilon \to 0$.
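
Written out with proportionality constants (the names $c_i$ and $a_j$ are chosen here only for illustration), this scaling assumption reads

\[
\delta_i = c_i\,\varepsilon , \qquad \tau_j = a_j\,\varepsilon , \qquad \varepsilon \to 0 ,
\]

with $c_i, a_j > 0$ fixed, so that every ratio $\delta_i/\tau_j = c_i/a_j$ of a delay time to a noise correlation time stays constant in the limit; as noted in the Introduction, the noise-induced drift terms in the limiting equation depend on precisely these ratios.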

We consider the solution to equations (4) and (5) on a bounded time interval $[0,T]$. We let $(\Omega, \mathcal{F}, P)$ denote the underlying probability space. We will use the filtration $(\mathcal{F}_t)_{t \in [0,T]}$ on $(\Omega, \mathcal{F}, P)$, where $\mathcal{F}_t$ is (the usual augmentation of) $\sigma(W_s : s \le t)$, i.e. the $\sigma$-algebra generated by the Wiener process up to time $t$.

Throughout this article, for an arbitrary vector $v$, $|v|$ will denote its Euclidean norm, and for a matrix $A$, $\|A\|$ will denote the matrix norm induced by the Euclidean norm.
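
Explicitly, with these conventions,

\[
|v| = \Big( \sum_i v_i^2 \Big)^{1/2} , \qquad
\|A\| = \sup_{|v| = 1} |A v| ,
\]

the standard Euclidean norm and the matrix (operator) norm it induces.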

Theorem 1.

Suppose that the $g_{ij}$ are bounded functions with bounded, continuous first derivatives and bounded second derivatives, and that the $f_i$ are bounded functions with bounded, continuous first derivatives. Let $(x_t^\varepsilon, \eta_t)$ solve equations (4) and (5) (which depend on $\varepsilon$ through the delays $\delta_i$ and the correlation times $\tau_j$) on $[0,T]$ with initial conditions $x_0$ and $\eta_0$, where $x_0$ is the same for every $\varepsilon$ and $\eta_0$ is distributed according to the stationary distribution corresponding to equation (5). Let $x_t$ solve

(6)

on $[0,T]$ with the same initial condition $x_0$, and suppose strong uniqueness holds on $[0,T]$ for (6) with the initial condition $x_0$ (strong uniqueness is implied, for example, by an additional boundedness assumption on the second derivatives of the coefficients). Then

(7)

for every .
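
Schematically, writing $x_t^\varepsilon$ for the solution of (4)-(5) and $x_t$ for the solution of (6) as above, a convergence statement of the type delivered by Lemma 1 below, namely convergence in probability uniformly on the time interval $[0,T]$, reads

\[
\lim_{\varepsilon \to 0} P\Big( \sup_{0 \le t \le T} \big| x_t^\varepsilon - x_t \big| > a \Big) = 0
\qquad \text{for every } a > 0 .
\]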

Remark 1.

Taking an additional limit of the constants in equation (6), while keeping an appropriate combination of them fixed, we get the simpler limiting equation

(8)
Remark 2.

Our choice of the distribution of the initial condition $\eta_0$ is the only one that makes the noise process stationary, which is physically a very natural assumption. However, the proof of Theorem 1 applies to any choice of the initial distribution for which the moments of the initial noise variables do not grow too fast as $\varepsilon \to 0$.


Outline of the proof of Theorem 1. The proof uses the method of Hottovy et al. [10]. The main tool that we use is a theorem by Kurtz and Protter about convergence of stochastic integrals. In Section 3.1 we write equations (4) and (5) together in the matrix form that is used in the Kurtz-Protter theorem. The theorem itself is stated in Section 3.2. In Section 3.3 we use it to derive the limiting equations (6) and (8). The key steps are integrating by parts and then rewriting a certain differential by solving a Lyapunov matrix equation. In Section 4 we verify that the assumptions of the Kurtz-Protter theorem are satisfied, thus completing the proof of Theorem 1.

3.1 Matrix form

We introduce the vector process

where, as in the statement of the theorem, solves equations (4) and (5), where , and where . We let , so that . Equations (4) and (5) can be written in terms of the processes and as

(9)

where is the vector of length that is given, in block form, by

where ; is the matrix that is given, in block form, by

(10)

where

and

is the matrix that is given, in block form, by

where

is the matrix that is given, in block form, by

and is the -dimensional Wiener process in equation (5). Using the introduced notation, we obtain the desired matrix form of equations (4) and (5). The equation for becomes

By Lemma 2 in Section 4, for sufficiently small, is invertible. Thus, for sufficiently small, we can solve for , rewriting the equation for as

In integral form, this equation is

(11)

where the initial value is independent of $\varepsilon$ due to our assumption that the initial condition is the same for every $\varepsilon$.

Remark 3.

The equations in (9) have a structure similar to equations studied in Ref. [10], except for the additional term . The method of Ref. [10] will be suitably adapted to treat this term and to account for the structure of the other terms in the second equation in (9).

3.2 Convergence of stochastic integrals

We use a theorem of Kurtz and Protter [15] which, for greater clarity, we state here in a less general but sufficient form. Let $(\mathcal{F}_t)$ be a filtration on a probability space $(\Omega, \mathcal{F}, P)$. In our case $\mathcal{F}_t$ will be the usual augmentation of $\sigma(W_s : s \le t)$ (the $\sigma$-algebra generated by the Wiener process up to time $t$) introduced earlier. The processes we consider below are assumed to be adapted to this filtration. We consider a family of pairs of processes $(U_\varepsilon, Y_\varepsilon)$, where $U_\varepsilon$ has paths in $C([0,T], \mathbb{R}^{d_1})$ (i.e. the space of continuous functions from $[0,T]$ to $\mathbb{R}^{d_1}$) and where $Y_\varepsilon$ is a semimartingale with paths in $C([0,T], \mathbb{R}^{d_2})$. Let $Y_\varepsilon = M_\varepsilon + A_\varepsilon$ be the Doob-Meyer decomposition of $Y_\varepsilon$, so that $M_\varepsilon$ is a local martingale and $A_\varepsilon$ is a process of locally bounded variation [16]. We denote the total variation of $A_\varepsilon$ on $[0,t]$ by $T_t(A_\varepsilon)$. Let $F_\varepsilon$ and $F$, $\varepsilon > 0$, be a family of matrix-valued functions. Suppose that the process $X_\varepsilon$, with paths in $C([0,T], \mathbb{R}^{d_1})$, satisfies the stochastic integral equation

(12)

with $X_\varepsilon(0)$ independent of $\varepsilon$. Let $Y$ be a semimartingale with paths in $C([0,T], \mathbb{R}^{d_2})$ and let $X$, with paths in $C([0,T], \mathbb{R}^{d_1})$, satisfy the stochastic integral equation

(13)
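
Written schematically, in one standard formulation of this type of result (the precise form of (12) and (13) may differ in inessential details), the two integral equations are

\[
X_\varepsilon(t) = X_\varepsilon(0) + U_\varepsilon(t) + \int_0^t F_\varepsilon\big(X_\varepsilon(s)\big)\, dY_\varepsilon(s) ,
\qquad
X(t) = X(0) + U(t) + \int_0^t F\big(X(s)\big)\, dY(s) ,
\]

so that the content of Lemma 1 below is that convergence of the pairs $(U_\varepsilon, Y_\varepsilon)$, together with Conditions 1 and 2 and strong uniqueness for the limiting equation, implies convergence of the solutions $X_\varepsilon$ to $X$.
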
Lemma 1 ([15, Theorem 5.4 and Corollary 5.6]).

Suppose that $(U_\varepsilon, Y_\varepsilon) \to (U, Y)$ in probability with respect to the supremum norm on $[0,T]$, i.e. for all $a > 0$,

(14)

as $\varepsilon \to 0$, and the following conditions are satisfied:

Condition 1.

For every $t \in [0,T]$, the family of total variations evaluated at $t$, $\{T_t(A_\varepsilon)\}_{\varepsilon > 0}$, is stochastically bounded, i.e. $P(T_t(A_\varepsilon) \ge K) \to 0$ as $K \to \infty$, uniformly in $\varepsilon$.

Condition 2.
  1. $F_\varepsilon \to F$ as $\varepsilon \to 0$

  2. $F$ is continuous (see [15, Example 5.3])

Suppose that there exists a strongly unique global solution to equation (13). Then, as $\varepsilon \to 0$, $X_\varepsilon \to X$ in probability with respect to the supremum norm on $[0,T]$.
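
Spelled out, convergence in probability with respect to the supremum norm means

\[
\lim_{\varepsilon \to 0} P\Big( \sup_{0 \le t \le T} \big| X_\varepsilon(t) - X(t) \big| > a \Big) = 0
\qquad \text{for every } a > 0 .
\]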

3.3 Proof of Theorem 1

We cannot apply Lemma 1 directly to equation (3.1) because Condition 1 is not satisfied. Instead, we integrate by parts each component of the last integral in equation (3.1):

(15)

where . Note that

because the noise process is continuously differentiable. The Itô term in the integration by parts formula is zero for a similar reason.
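
The general fact being used here is the following integration by parts identity, stated for a generic continuous semimartingale $h_t$ and a generic process $\eta_t$ with continuously differentiable realizations (the symbols are generic, not tied to the specific processes above):

\[
\int_0^t h_s \, d\eta_s = h_t\,\eta_t - h_0\,\eta_0 - \int_0^t \eta_s \, dh_s ,
\]

with no Itô correction term, because the quadratic covariation $[h, \eta]_t$ vanishes when $\eta$ has continuously differentiable, hence locally bounded variation, realizations.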

Since , we can write the last integral in equation (3.3) as

The product that appears in the above integral is an entry of an outer product matrix. Our next step is to express this matrix as the solution of a certain equation. We start by using the Itô product formula to calculate

so that, using equation (9),

(16)
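
The identity being used is the Itô product formula which, for a generic continuous $\mathbb{R}^d$-valued semimartingale $\xi_t$ (generic notation, introduced here only to display the formula), reads entrywise and in matrix form

\[
d\big(\xi_t^i \xi_t^j\big) = \xi_t^i\, d\xi_t^j + \xi_t^j\, d\xi_t^i + d[\xi^i, \xi^j]_t ,
\qquad
d\big(\xi_t \xi_t^T\big) = \xi_t\, (d\xi_t)^T + (d\xi_t)\, \xi_t^T + d[\xi, \xi^T]_t ,
\]

where $[\xi^i, \xi^j]_t$ denotes the quadratic covariation; substituting the dynamics (9) into the right-hand side is what leads to equation (16).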

Defining

(17)

and combining (3.3) and (17), we obtain

(18)

Our goal is to write the differential in another form and substitute it back into equation (3.3). Letting , we integrate (3.3) to obtain

(19)

Defining

we write (3.3) as

(20)

Letting , , and

equation (3.3) becomes

An equation of this form (to be solved for the unknown matrix) is called Lyapunov's equation [17, 18]. By Ref. [18, Theorem 6.4.2], if the real parts of all eigenvalues of the coefficient matrix are negative, it has a unique solution

for any . The eigenvalues of are

(21)

in particular, they do not depend on and have positive real parts (since and for ). Thus, all eigenvalues of have negative real parts, so we have