Impulse Control of Multi-dimensional Jump Diffusions in Finite Time Horizon


Yann-Shin Aaron Chen, Department of Mathematics, University of California at Berkeley, CA 94720-3840.    Xin Guo, Department of Industrial Engineering and Operations Research, University of California at Berkeley, CA 94720-1777.

This paper analyzes a class of impulse control problems for multi-dimensional jump diffusions in the finite time horizon. Following the basic mathematical setup of Stroock and Varadhan [38], this paper first establishes rigorously an appropriate form of the Dynamic Programming Principle (DPP). It then shows that the value function is a viscosity solution of the associated Hamilton-Jacobi-Bellman (HJB) equation involving integro-differential operators. Finally, under the additional assumptions that the jumps are of infinite activity but of finite variation and that the diffusion is uniformly elliptic, it proves that the value function is the unique viscosity solution and establishes its regularity.


Stochastic Impulse Control, Viscosity Solution, Parabolic Partial Differential Equations


49J20, 49N25, 49N60

1 Introduction

This paper considers a class of impulse control problems for a multi-dimensional jump diffusion process in the form of Equation (1). The objective is to choose an appropriate impulse control so that a cost functional in the form of (2) is minimized.

Impulse control, in contrast to regular and singular controls, allows the controlled state process to be discontinuous and is a more natural mathematical framework for many applied problems in engineering and economics. Examples in financial mathematics include portfolio management with transaction costs [6, 24, 25, 13, 32, 34], insurance models [21, 9], liquidity risk [27], optimal control of exchange rates [22, 33, 10], and real options [40, 28]. As with their regular and singular control counterparts, impulse control problems can be analyzed via various approaches. One approach is to solve for the value function via the associated (quasi-)variational inequalities or Hamilton-Jacobi-Bellman (HJB) integro-differential equations, and then to establish the optimality of the solution by a verification theorem. (See Øksendal and Sulem [35].) Another approach is to characterize the value function of the control problem as the (unique) viscosity solution of the associated PDEs, and/or to study its regularity. In both approaches, in order to connect the PDEs with the original control problems, some form of the Dynamic Programming Principle (DPP) is usually implicitly or explicitly assumed.

Compared to regular and singular controls, the main difficulty with impulse controls is the associated non-local operator, which is harder to analyze with classical PDE tools. When jumps are added to the diffusion, one also has to deal with an integro-differential operator instead of a differential operator. The earliest mathematical literature on impulse controls is the well-known book by Bensoussan and Lions [5], where value functions of control problems for diffusions without jumps were shown to satisfy quasi-variational inequalities, and where their regularity properties were established when the control is strictly positive and the state space is a bounded region. See also the work by Menaldi [30], [29] and [31] for jump diffusions with degeneracy. Recently, Barles, Chasseigne, and Imbert [1] provided a general framework and a careful analysis of the unique viscosity solution of second order nonlinear elliptic/parabolic partial integro-differential equations. However, in all these PDE-focused papers, the DPP, the essential link between the PDEs and the control problems, is missing. On the other hand, Tang and Yong [39] and Ishikawa [20] established versions of the DPP and the uniqueness of the viscosity solution for diffusions without jumps. More recently, Seydel [37] used a version of the DPP for Markov controls to study the viscosity solution of control problems on jump diffusions; this Markovian assumption simplifies the proof of the DPP significantly. Based on the Markovian setup of [37], Davis, Guo and Wu [12] focused on the regularity of the viscosity solution associated with control problems on jump diffusions in an infinite time horizon; [12] extended techniques developed in Guo and Wu [19] and used the key connection between the non-local operator and the differential operator developed in [19].

In essence, there are three aspects to studying impulse control problems: the DPP, the HJB equation, and the regularity of the value function. However, previous works addressed only one or two of these aspects, under quite different setups and assumptions, making it difficult to see to what extent all the relevant properties hold in a given framework. This is the motivation for our paper.

Our Results.

This paper studies the finite time horizon impulse control problem of Eq. (2) for multi-dimensional jump diffusions of Eq. (1). Within a single mathematical framework, this paper studies three aspects of the control problem: the DPP, the viscosity solution, and its uniqueness and regularity.


First, it takes the classical setup of Stroock and Varadhan [38], assumes the natural filtration of the underlying Brownian motion and the Poisson process, and establishes a general form of the DPP. This natural filtration is different from the “usual hypothesis”, i.e., the completed right continuous filtration assumed a priori in most existing works. This specification ensures the existence, and certain essential properties, of the regular conditional probability, which are crucial for rigorously establishing the DPP. (See Lemma 4.3.3 of Stroock & Varadhan [38].) Combined with appropriate estimates for the value function, the DPP is then proved.

We remark that there is some previous work on the DPP for impulse controls. For instance, [39] proved a form of the DPP for diffusions without jumps, and [37] restricted the controls to Markov controls. Because of the inclusion of both jumps and non-Markov controls, there are essential mathematical difficulties in establishing the DPP, hence the necessity of adopting the framework of [38].

Note that an alternative approach would be to adopt the general weak DPP formulation by Bouchard and Touzi [8] and Bouchard and Nutz [7], or the classical work by El Karoui [14]. However, verifying the key assumptions of the weak DPP, especially the “flow property” in a controlled jump diffusion framework, does not appear simpler than directly establishing the regular conditional probability.

Second, it shows that the value function is a viscosity solution in the sense of [1]. This form of viscosity solution is convenient for the HJB equations involving integro-differential operators, which is the key for analyzing control problems on jump diffusions.

Again, special cases have been studied in [37] for the Markov controls and [39] for diffusions without jumps.

Third, under the additional assumption that the jumps are of finite variation with possibly infinite activity, it proves the regularity of the value function and shows that it is the unique viscosity solution. Note that the uniqueness of the viscosity solution in our paper is a “local” uniqueness, which is appropriate for studying the regularity property.

Compared to [19] without jumps and especially [12] with jumps, both for infinite horizon problems, this paper treats a finite time horizon, which requires different techniques. First, it is more difficult in a parabolic setting to obtain a priori estimates for the value function of the stochastic control problem, especially under the relaxed assumption of the Hölder growth condition (Outstanding Assumption 4 in our paper). Our estimates extend the earlier work of [39] to diffusions with jumps. Second, from a PDE perspective, we introduce the notion of Hölder continuity of the measure (Assumption 11). We believe that Assumptions 10 and 11 are more general than those in [12], and are consistent with the approach in [38] in focusing on the integro-differential operator itself. Finally, neither [19] nor [12] studies the DPP or the uniqueness of the viscosity solution. There are also studies by Xing and Bayraktar [3] and Pham [36] on value functions of optimal stopping problems for jump diffusions; their work, however, does not involve controls.¹

¹One of the referees brought to our attention some very recent and nice work by [4] and [2] on the regularity analysis of optimal stopping and impulse control problems with infinite variation jumps.

2 Problem Formulation and Main Results

2.1 Problem formulation


Fix a time . For each , let be a probability space that supports a Brownian motion starting at , and an independent Poisson point process on with intensity . Here is the Lebesgue measure on and is a measure defined on . For each , define to be the natural filtration of the Brownian motion and the Poisson process , and define to be restricted to the interval .

Throughout the paper, we will use this uncompleted natural filtration .

Now we can define the impulse control problem mathematically, starting with the set of admissible controls.

Definition. The set of admissible impulse controls consists of pairs of sequences such that

  1. are stopping times with respect to the filtration ,

  2. for all ,

  3. is a random variable such that .

Now, given an admissible impulse control , a stochastic process follows a stochastic differential equation with jumps,


Here , , , and . For each , and , denote .
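To make the controlled dynamics concrete, the following is a minimal simulation sketch, not the paper's construction: a one-dimensional Euler scheme for a jump diffusion shifted by impulses at prescribed times. The coefficients `b`, `sigma`, the jump-size map `gamma`, the jump rate `lam`, and the impulse schedule are all illustrative assumptions; in particular the jumps here are compound Poisson (finite activity), whereas the paper allows infinite-activity jumps.

```python
import numpy as np

def simulate_impulse_jump_diffusion(x0, T, n_steps, b, sigma, gamma, lam,
                                    impulses, rng):
    """Euler scheme for dX = b dt + sigma dW + (compound Poisson jumps),
    with the state shifted by xi at each intervention time in `impulses`
    (a list of (time, xi) pairs)."""
    dt = T / n_steps
    ts = np.linspace(0.0, T, n_steps + 1)
    xs = np.empty(n_steps + 1)
    xs[0] = x0
    imp = sorted(impulses)
    k = 0
    for i in range(n_steps):
        x = xs[i]
        # apply any impulse scheduled within this time step
        while k < len(imp) and imp[k][0] <= ts[i + 1]:
            x = x + imp[k][1]
            k += 1
        dw = rng.normal(0.0, np.sqrt(dt))
        n_jumps = rng.poisson(lam * dt)   # finite-activity approximation
        jump = sum(gamma(x, rng.standard_normal()) for _ in range(n_jumps))
        xs[i + 1] = x + b(x) * dt + sigma(x) * dw + jump
    return ts, xs

rng = np.random.default_rng(0)
ts, xs = simulate_impulse_jump_diffusion(
    x0=1.0, T=1.0, n_steps=200,
    b=lambda x: -0.5 * x, sigma=lambda x: 0.2,
    gamma=lambda x, z: 0.1 * z, lam=3.0,
    impulses=[(0.5, -0.4)], rng=rng,
)
```

Between interventions the path follows the jump-diffusion dynamics; at each intervention time the state is displaced by the impulse, producing the controlled discontinuities discussed above.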

The stochastic control problem is to


subject to Eqn. (1) with


Here we denote for the associated value function


In order for and to be well defined, and for the Brownian motion and the Poisson process as well as the controlled jump process to be unique at least in a distribution sense, we shall specify some assumptions in Section 2.2.

The focus of the paper is to analyze the following HJB equation associated with the value function



Main result.

Our main result states that the value function is the unique viscosity solution to the (HJB) equation with . In particular, for each , for any .

The main result is established in three steps.

  • First, in order to connect the (HJB) equation with the value function, we prove an appropriate form of the DPP. (Theorem 3.1).

  • Then, we show that the value function is a continuous viscosity solution to the (HJB) equation in the sense of [1]. (Theorem 4).

  • Finally, with additional assumptions, we show that the value function is for , and in fact a unique viscosity solution to the (HJB) equation. (Theorem 5.2).

    All the results in this paper, unless otherwise specified, hold under the assumptions specified in Section 2.2.
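The structure of the HJB equation, a parabolic quasi-variational inequality min(-V_t - LV - f, V - MV) = 0 with terminal data, can be illustrated by a crude backward finite-difference sketch. Everything below is an illustrative assumption, not the paper's scheme: one spatial dimension, the integro (jump) term omitted, and fixed-plus-proportional intervention costs K and c.

```python
import numpy as np

def solve_qvi(xs, T, n_t, b, sig, f, g, K, c):
    """Explicit backward scheme for min(-V_t - L V - f, V - M V) = 0 with
    V(T, x) = g(x), where M V(x) = min_y [V(y) + K + c|y - x|].
    Illustrative only: 1-D, no jump term, crude Neumann boundaries."""
    dx = xs[1] - xs[0]
    dt = T / n_t
    V = g(xs)                                   # terminal condition
    for _ in range(n_t):
        Vx = np.gradient(V, dx)
        Vxx = np.gradient(Vx, dx)
        # one backward-Euler step of the continuation (PDE) part
        cont = V + dt * (b(xs) * Vx + 0.5 * sig(xs) ** 2 * Vxx + f(xs))
        # intervention operator M V
        MV = np.min(V[None, :] + K + c * np.abs(xs[None, :] - xs[:, None]),
                    axis=1)
        V = np.minimum(cont, MV)                # enforce the obstacle V <= M V
        V[0], V[-1] = V[1], V[-2]               # crude Neumann boundary
    return V

xs = np.linspace(-2.0, 2.0, 81)
V0 = solve_qvi(xs, T=0.5, n_t=2000,
               b=lambda x: 0.0 * x, sig=lambda x: 0.3 + 0.0 * x,
               f=lambda x: x ** 2, g=lambda x: x ** 2, K=0.5, c=0.1)
```

The pointwise minimum with MV is exactly the obstacle constraint that distinguishes the impulse control HJB from an unconstrained parabolic equation: wherever MV binds, intervening is (numerically) optimal.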

2.2 Outstanding assumptions

Assumption 1

Given , assume that

such that the projection map is the Brownian motion and the Poisson point process with intensity under , and for ,

Assumption 2

(Lipschitz Continuity.) The functions , , and are deterministic measurable functions for which there exists a constant , independent of , such that

Assumption 3

(Growth Condition.) There exist constants , , such that for any ,

Assumption 4

(Hölder Continuity.) and are measurable functions such that there exist , , such that

for all , .

Assumption 5

(Lower Boundedness) There exists an and such that

for all , , .

Assumption 6

(Monotonicity and Subadditivity) is a continuous function such that for any , , and for in a fixed compact subset of , there exists a constant such that
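A standard example of a transaction-cost function with the required monotonicity and subadditivity, ours rather than the paper's, is the fixed-plus-proportional cost

```latex
B(\xi) = K_0 + c\,\lvert \xi \rvert, \qquad K_0 > 0,\ c \ge 0,
```

which is nondecreasing in $\lvert\xi\rvert$ and subadditive, since $B(\xi_1 + \xi_2) = K_0 + c\,\lvert\xi_1 + \xi_2\rvert \le 2K_0 + c\,\lvert\xi_1\rvert + c\,\lvert\xi_2\rvert = B(\xi_1) + B(\xi_2)$. The strictly positive fixed cost $K_0$ is what rules out continuous-time intervention and makes impulse (rather than singular) control the natural formulation.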

Assumption 7

(Dominance) The growth of exceeds the growth of the cost functions and so that

Assumption 8

(No Terminal Impulse) For any ,

Assumption 9

Suppose that there exists a measurable map , in which is the set of locally finite measures on , such that one has the following representation of the integro-differential operator:

And assume that for in some compact subset of , there exists such that


Throughout the paper, unless otherwise specified, we will use the following notations.

  • .

  • is the set of points at which achieves its minimum, i.e.,

  • The continuation region and the action region are

  • Let be a bounded open set in . Denote to be the parabolic boundary of , which is the set of points such that for all , . Here .

    Note that is the closure of the open set in . In the special case of a cylinder, , the parabolic boundary .

  • Function spaces for being a bounded open set,

3 Dynamic Programming Principle and Some Preliminary Results

3.1 Dynamic Programming Principle


(Dynamic Programming Principle) Under Assumptions 1-7, for , , and for any stopping time on , we have


In order to establish the DPP, the first key issue is how the martingale property and the stochastic integral change under the regular conditional probability distribution . The second key issue is the continuity of the value function, which ensures that a countable selection suffices, without appealing to an abstract measurable selection theorem. (See [16].)

To start, let us first introduce a new function that concatenates two Brownian paths, starting from the origin at different times, into a single Brownian path. This function also combines two Poisson measures on different intervals into a single Poisson measure.


For each , define a map such that

Note that this is an -measurable bijection. Therefore, for fixed , the map from defined by

is -measurable for each .
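The pasting of paths can be sketched in discrete time. The following toy implementation, ours and purely illustrative, concatenates a path on [0, s] with a second path started at the origin, shifted so that the combined trajectory is continuous at the splice time s.

```python
import numpy as np

def concat_paths(t1, w1, t2, w2):
    """Paste a path w1 on [0, s], s = t1[-1], with a path w2 on [0, T - s]
    started at the origin: the combined path follows w1 up to s, then
    continues as w1[-1] + w2(t - s).  Discrete-time sketch only."""
    s = t1[-1]
    t = np.concatenate([t1, s + t2[1:]])
    w = np.concatenate([w1, w1[-1] + w2[1:]])
    return t, w

rng = np.random.default_rng(1)
dt = 0.01
t1 = np.linspace(0.0, 1.0, 101)
w1 = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(dt), 100))])
t2 = np.linspace(0.0, 0.5, 51)
w2 = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(dt), 50))])
t, w = concat_paths(t1, w1, t2, w2)
```

By independence and stationarity of Brownian increments, pasting an independent Brownian path after time s in this way again yields (a discretization of) a Brownian path, which is the property the measurability statement above exploits.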

Next, we need two technical lemmas regarding . Specifically, the first lemma states that the local martingale property is preserved, and the second one ensures that the stochastic integration is well defined under .

According to Theorem 1.2.10 of [38], we have the following.

Lemma. Given a filtered space , and an associated martingale . Let be an -stopping time. Assume exists. Then, for -a.e. , is a local martingale under .


Given a filtered space , a stopping time , a previsible process , a local martingale such that

-almost surely, and (a version of the stochastic integral that is right-continuous on all paths). Assume that exists. Then, for -a.e. , is also the stochastic integral under the new probability measure .

The proof is elementary and is listed in the Appendix for completeness.

Now, we establish the first step of the Dynamic Programming Principle.


Let be a stopping time defined on some setup . For any impulse control ,


Here are defined as follows. For , for each ,

And for each ,


Consider on . Since we are working with canonical spaces, the sample space is in fact a Polish space (see [23], Theorems A2.1 and A2.3), and the regular conditional probability exists by Theorem 6.3 of [23]. Since Polish spaces are complete separable metric spaces with countably generated Borel -algebras, is countably generated. By Lemma 1.3.3 of Stroock & Varadhan [38], there exists a null set such that if , then

Therefore, for , , and almost surely.

Moreover, by Lemma 3.1, the stochastic integrals are preserved. Therefore, for , the solution to Eq. (1) remains a solution to the same equation on the interval with . So on the interval has the same distribution as for under for .

Now, to obtain the Dynamic Programming Principle, one takes the infimum on both sides of Eq. (9). The “” part is immediate, but the opposite direction is more delicate: at the stopping time , for each , one needs to choose a good control so that the cost is close to the optimal . To do this, one needs to show that the cost functional is continuous in a suitable sense, so that a countable selection suffices.
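The countable-selection idea can be illustrated in a toy setting, with a cost function and control set that are our own illustrative assumptions: if the cost is uniformly continuous in the state, then choosing, on each cell of a countable partition, a control that is optimal at the cell's center produces a piecewise-constant (hence measurable) control that is near-optimal at every state in the cell.

```python
import numpy as np

def piecewise_selection(J, controls, centers):
    """For each cell center, pick the index of a control minimizing the
    cost at that center; the selection is constant on each cell."""
    return [min(range(len(controls)), key=lambda i: J(c, controls[i]))
            for c in centers]

# toy cost J(x, a) = (x - a)^2 on [0, 1], controls on a grid
J = lambda x, a: (x - a) ** 2
controls = np.linspace(0.0, 1.0, 21)
n_cells = 50
centers = (np.arange(n_cells) + 0.5) / n_cells
sel = piecewise_selection(J, controls, centers)

# worst-case suboptimality of the piecewise-constant selection
xs = np.linspace(0.0, 1.0, 501)
cells = np.minimum((xs * n_cells).astype(int), n_cells - 1)
gap = max(J(x, controls[sel[c]]) - min(J(x, a) for a in controls)
          for x, c in zip(xs, cells))
```

Because the cells and control grid are countable, no measurable selection theorem is needed: the continuity modulus of the cost bounds `gap`, mirroring how the Hölder continuity of the value function below justifies a countable selection in the proof of the DPP.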

The following result, the Hölder continuity of the value function, is essentially Theorem 3.1 of Tang & Yong [39]. The major difference is that their work is for diffusions without jumps; therefore some modifications in the estimates and in adaptedness are needed, as outlined in the proof.


There exists a constant such that for all , ,


To include the jump terms, it suffices to note the following inequalities,

Moreover, in our framework, and would not be in , because they are adapted to the filtration instead of . To fix this, consider, for each ,

and consequently use instead of .

Given that the value function is continuous, we can prove Theorem 3.1.


(Dynamic Programming Principle) Without loss of generality, assume that .

Taking infimum on both sides, we get