
# An Elementary Method for the Explicit Solution of Multidimensional Optimal Stopping Problems

Sören Christensen (Universität Hamburg, Department of Mathematics, Research Group Statistics and Stochastic Processes, Bundesstr. 55 (Geomatikum), 20146 Hamburg, Germany) and Albrecht Irle (Christian-Albrechts-Universität, Mathematisches Seminar, Ludewig-Meyn-Str. 4, 24098 Kiel, Germany)
July 2, 2019
###### Abstract

We study explicitly solvable multidimensional optimal stopping problems. Our approach is based on the notion of monotone stopping problems in discrete and continuous time. The method is illustrated with a variety of examples including multidimensional versions of the house-selling and burglar’s problem, the Poisson disorder problem, and an optimal investment problem.

Keywords: Monotone Stopping Rules; Optimal Stopping; Explicit Solutions; Multidimensional; Doob Decomposition; Doob-Meyer Decomposition; House-Selling Problem; Burglar’s Problem; Poisson Disorder Problem; Optimal Investment Problem

Mathematics Subject Classification: 60G40; 62L10; 91G80

## 1 Introduction

In multidimensional problems, optimal stopping theory reaches its limits when one tries to find explicit solutions for problems with a finite time horizon or an underlying (Markovian) process in dimension $\geq 2$. In the one-dimensional case with infinite time horizon, the optimal continuation set is usually an interval of the real line, bounded or unbounded, so it remains to determine the boundary of that interval, which boils down to finding equations for one or two real numbers, resp. A wealth of techniques has been developed to achieve this, see Salminen (1985) and Dayanik and Karatzas (2003) for one-dimensional diffusions, or Mordecki and Salminen (2007) and Christensen et al. (2013) for jump processes, to name but a few.

In the multidimensional case, the optimal continuation set is an open subset of a $d$-dimensional space, so its boundary is usually described by some $(d-1)$-dimensional surface. There are a few problems, see Dubins et al. (1994), Margrabe (1978), Gerber and Shiu (1996), Shepp and Shiryaev (1995), which, by some transformation method, may be transferred to a one-dimensional problem. Let us call a multidimensional problem truly multidimensional if such a transformation does not seem to be possible, or at least is not available in the current literature. We are not aware of any such problems in the literature that feature an explicit solution, but, of course, many techniques have been developed to tackle them, either semi-explicitly using nonlinear integral equations, see the monograph Peskir and Shiryaev (2006) for an overview, or numerically, see Chapter 8 in Detemple (2006) and Glasserman (2004).

The purpose of this note is to provide some examples of seemingly truly multidimensional problems with an explicit solution. The key for this is the notion of monotone stopping problems. The class of monotone stopping problems has been used extensively in the solution of optimal stopping problems, in particular in the first decades starting with Chow and Robbins (1961, 1963). A long list of examples can be found in Chow et al. (1971) and, more recently, in Ferguson (2008). The extension to continuous time problems is not straightforward. This was developed in Ross (1971), Irle (1979), Irle (1983), and Jensen (1989). Although these references are not very recent, it is interesting to note that the solution to certain “modern” optimal stopping problems is directly based on the notion of monotone case problems, see, e.g., the odds-algorithm initiated in Bruss (2000), or can be interpreted this way, see Christensen (2017) and also the discussion at the end of Appendix B.

The structure of this paper is as follows: We start by reviewing monotone case problems in terms of the Doob(-Meyer) decomposition in Section 2 and state criteria for optimality of the myopic stopping time in two appendices. We then present implications for multidimensional optimal stopping problems in Section 3. We observe that, under certain assumptions, the monotonicity property of the individual underlying processes carries over to sum- and product-type problems, which makes these solvable as well. To convince the reader that this elementary line of argument is nonetheless useful, we discuss a variety of examples in Section 4. We start with multidimensional versions of the classical house-selling and burglar’s problem. Here, the original one-dimensional problems are well-known to be solvable using the theory of monotone stopping. The last two examples are multidimensional extensions of continuous-time problems which are traditionally solved using other arguments: the Poisson disorder problem and the optimal investment problem.

## 2 Monotone Stopping Problems

### 2.1 Monotone stopping problems in discrete time and the Doob decomposition

Let us first stay in the realm of discrete time problems with infinite time horizon. For a sequence $(X_n)_{n\geq 1}$ of integrable random variables, adapted to a given filtration $(\mathcal{A}_n)_{n\geq 1}$, we want to find a stopping time $\tau^*$ such that

$$EX_{\tau^*}=\sup_\tau EX_\tau. \tag{1}$$

Here $\tau$ runs through all stopping times such that $EX_\tau$ exists. We include a random variable $X_\infty$, so that the stopping times may assume the value $\infty$; for a natural choice of $X_\infty$ in our problems below, see the discussion in Subsection A.3.

There is a certain class of such problems for which we can easily find a solution. Call the above problem a monotone case problem iff for all $n$ it holds that

$$E(X_{n+1}\mid\mathcal{A}_n)\leq X_n\;\Longrightarrow\;E(X_{n+2}\mid\mathcal{A}_{n+1})\leq X_{n+1}.$$

A particularly simple sufficient condition for the monotone case is that the differences

$$Y_k=E(X_{k+1}\mid\mathcal{A}_k)-X_k\quad\text{are non-increasing in }k.$$

This condition turns out to be fulfilled in many examples of interest and allows for the treatment of multidimensional problems of sum type discussed in the following section.

For monotone case problems, it is natural to consider the myopic stopping time, defined as

$$\tau^*=\inf\{n:X_n\geq E(X_{n+1}\mid\mathcal{A}_n)\}.$$

The discussion of the optimality of $\tau^*$ is a well-known topic, see the references mentioned in the introduction. We, however, find it enlightening to provide a short review using the Doob decomposition, which leads to a shortcut to optimality results without the usual machinery of optimal stopping theory. This approach also provides a unifying line of argument for both discrete and continuous time. For every $n$ let

$$M_n=\sum_{k=2}^{n}\bigl(X_k-E(X_k\mid\mathcal{A}_{k-1})\bigr),\qquad A_n=\sum_{k=1}^{n-1}\bigl(E(X_{k+1}\mid\mathcal{A}_k)-X_k\bigr)=\sum_{k=1}^{n-1}Y_k,\quad A_1=0,$$

so that

$$X_n=X_1+M_n+A_n$$

with a zero-mean martingale $(M_n)_{n\geq 1}$.

For the myopic stopping time we have

$$E(X_{k+1}\mid\mathcal{A}_k)-X_k>0\quad\text{for }k=1,\dots,\tau^*-1$$

– valid for all $k$ if $\tau^*=\infty$ – and in the monotone case

$$E(X_{k+1}\mid\mathcal{A}_k)-X_k\leq 0\quad\text{for }k=\tau^*,\tau^*+1,\dots.$$

Thus we have

$$A_{\tau^*}=\sup_n A_n$$

and, writing $\tau^*_L=\min\{\tau^*,L\}$ for $L\in\mathbb{N}$,

$$A_{\tau^*_L}=\sup_{n\leq L}A_n.$$

So, for any stopping time $\tau$, not necessarily finite a.s.,

$$A_\tau\leq A_{\tau^*},\qquad A_{\min\{\tau,L\}}\leq A_{\tau^*_L}.$$

Essentially, these simple inequalities lead to the optimality properties of the myopic stopping time. We provide sufficient conditions, called (V1) and (V2), which are suitable for our classes of problems, in Appendix A.
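
The pathwise identity $A_{\tau^*}=\sup_n A_n$ is easy to check numerically. The following sketch (our own illustration, not part of the paper; all function names are ours) draws non-increasing sequences $(Y_k)$, builds the compensator $A_n=\sum_{k<n}Y_k$, and verifies that the myopic time attains the pathwise maximum:

```python
import numpy as np

def doob_compensator(Y):
    # A_1 = 0, A_n = Y_1 + ... + Y_{n-1}; entry n-1 of the array holds A_n
    return np.concatenate(([0.0], np.cumsum(Y)))

def myopic_time(Y):
    # first 1-based index k with Y_k <= 0; len(Y) + 1 if no such index exists
    idx = np.nonzero(Y <= 0)[0]
    return int(idx[0]) + 1 if idx.size else len(Y) + 1

rng = np.random.default_rng(0)
for _ in range(100):
    Y = np.sort(rng.normal(size=50))[::-1]   # a non-increasing sequence (Y_k)
    A = doob_compensator(Y)
    tau = myopic_time(Y)
    assert A[tau - 1] == A.max()             # A_{tau*} = sup_n A_n
```

The check also covers the boundary case where no $Y_k\leq 0$ occurs in the sample; there the full sum is maximal.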

###### Remark 2.1.

It is remarkable that the myopic stopping rule immediately provides optimal stopping times for all possible time horizons in the monotone case, see Appendix A for the details. The same observation also holds true in the continuous time case discussed below. See also Ferguson (2008) and Irle (2017). This is in strong contrast to most Markovian-type optimal stopping problems, where infinite time horizon problems are often easier to solve, as the stopping boundary is not time dependent.

### 2.2 Monotone stopping problems in continuous time and the Doob-Meyer decomposition

To extend the discrete time case to continuous time processes, we may use the Doob-Meyer decomposition. Under regularity assumptions, not discussed here, we have

$$X_t=X_0+M_t+A_t,$$

where $M$ is a zero-mean martingale and $A$ is of locally bounded variation. Now assume that we may write

$$A_t=\int_0^t Y_s\,dV_s,$$

where $V$ is increasing. Then the myopic stopping time – here often called the infinitesimal look ahead rule – becomes

$$\tau^*=\inf\{t:Y_t\leq 0\}.$$

In this situation, we say that the monotone case holds if

$$Y_t\leq 0\quad\text{for }t>\tau^*.$$

If $Y$ is non-increasing in $t$, then again the monotone case property is immediate. The discussion of optimality is essentially the same as in the discrete time case and is therefore omitted. As no confusion can occur, we keep the notations (V1) and (V2) for the continuous-time versions of the optimality conditions.

### 2.3 Monotone stopping problems in discrete time and the multiplicative Doob decomposition

For processes $(X_n)_{n\geq 1}$ with $X_n>0$ we may also consider the multiplicative Doob decomposition

$$X_n=M_nA_n,$$

where, with $X_0:=1$,

$$M_n=\prod_{k=1}^n\frac{X_k}{E(X_k\mid\mathcal{A}_{k-1})},\;n\geq 1,\text{ is a mean-1 martingale},\qquad A_n=\prod_{k=1}^n\frac{E(X_k\mid\mathcal{A}_{k-1})}{X_{k-1}},\;n\geq 1.$$

Optimality of the myopic stopping time may also be inferred from this multiplicative decomposition in the monotone case. As in Subsection 2.1, we have for any stopping time $\tau$

$$A_\tau\leq A_{\tau^*}=\sup_n A_n,\qquad A_{\min\{\tau,L\}}\leq A_{\tau^*_L}.$$

The multiplicative decomposition leads, however, to different sufficient conditions for optimality. There is also a connection to a change of measure approach. Both are discussed in Appendix B.

We furthermore observe that we have a monotone case problem in particular if

$$E\Bigl(\frac{X_{n+1}}{X_n}\Bigm|\mathcal{A}_n\Bigr)\quad\text{is non-increasing in }n,$$

which turns out to be a basis for the treatment of product-type problems in the following section.
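
For a small exact sanity check (our own construction, not from the paper), take $X_n=Z_1\cdots Z_n$ with i.i.d. positive multipliers $Z_k$, so that $E(X_k\mid\mathcal{A}_{k-1})=X_{k-1}\,EZ_1$ and hence $M_n=\prod_{k\leq n}Z_k/EZ_1$. Enumerating all paths of a two-point distribution verifies $EM_n=1$:

```python
import itertools
import math

# two-point distribution for the i.i.d. multipliers Z_k (hypothetical choice)
values, probs = (0.5, 2.0), (0.5, 0.5)
mean_z = sum(v * p for v, p in zip(values, probs))   # E(Z_1) = 1.25
n = 5

# enumerate all Z-paths of length n and compute E(M_n) exactly
EM = 0.0
for path in itertools.product(range(len(values)), repeat=n):
    prob = math.prod(probs[j] for j in path)
    M = math.prod(values[j] / mean_z for j in path)  # M_n = prod_k Z_k / E(Z_1)
    EM += prob * M

assert abs(EM - 1.0) < 1e-12   # M_n is a mean-1 martingale
```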

### 2.4 Monotone stopping problems in continuous time and the multiplicative Doob-Meyer decomposition

Also in continuous time, a multiplicative Doob-Meyer-type decomposition of the form

$$X_t=M_tA_t$$

can be found in the case of a positive special semimartingale $X$, see Jamshidian (2007). For ease of exposition, we now concentrate on the case of continuous semimartingales in order to have more explicit formulas. By Theorem 4.2 ibid., $M$ is a local martingale, and if $Y$ denotes the process in the additive Doob-Meyer decomposition as in Subsection 2.2, the process $A$ here is given by

$$A_t=\exp\Bigl(\int_0^t\frac{Y_s}{X_s}\,dV_s\Bigr).$$

The optimality may be discussed as in the discrete time case.

We again remark that the problem can be identified as monotone in particular if the process

$$\Bigl(\frac{Y_t}{X_t}\Bigr)_{t\in[0,\infty)}\quad\text{is non-increasing}.$$

## 3 Multidimensional Monotone Case Problems

We now come to the main point of this paper: Can we use the monotone case approach to find truly multidimensional stopping problems with explicit solutions? The answer is yes insofar as we can present nontrivial examples in the following section.

### 3.1 The sum problem

It belongs to the basic knowledge of any student of mathematics that

$$\sup_x(a_x+b_x)\leq\sup_x a_x+\sup_x b_x,$$

with strict inequality as a rule. For optimal stopping, this means that being able to solve the stopping problems

$$\sup_\tau EX^1_\tau\quad\text{and}\quad\sup_\tau EX^2_\tau$$

does not imply that we are able to solve the stopping problem

$$\sup_\tau E(X^1_\tau+X^2_\tau).$$

#### 3.1.1 Discrete time case

Now let us look at sequences $(X^i_n)_{n\geq 1}$, $i=1,\dots,m$, adapted to a common filtration $(\mathcal{A}_n)_{n\geq 1}$, with Doob decompositions

$$X^i_n=X^i_1+M^i_n+A^i_n,\quad i=1,\dots,m,$$

where $A^i_n=\sum_{k=1}^{n-1}Y^i_k$ as in Subsection 2.1. Then, the Doob decomposition for the sum process is

$$\sum_{i=1}^m X^i_n=\sum_{i=1}^m X^i_1+\sum_{i=1}^m M^i_n+\sum_{k=1}^{n-1}\sum_{i=1}^m Y^i_k.$$

Now, if for each $i$ the stopping problem for $(X^i_n)_n$ is a monotone case problem, it does not necessarily follow that we have a monotone case problem for the sum, see the burglar's problem in Subsection 4.2.1 below. But in the special case that all the sequences $(Y^i_k)_k$ are non-increasing in $k$, the monotone case property holds. We formulate this as a simple proposition:

###### Proposition 3.1.

Assume that the processes $(X^i_n)_{n\geq 1}$ have Doob decompositions

$$X^i_n=X^i_1+M^i_n+\sum_{k=1}^{n-1}Y^i_k,\quad i=1,\dots,m,$$

such that all the sequences $(Y^i_k)_k$ are non-increasing in $k$. Then:

1. The sum problem for $\bigl(\sum_{i=1}^m X^i_n\bigr)_n$ is a monotone case problem with myopic rule

   $$\tau^*=\inf\Bigl\{k:\sum_{i=1}^m Y^i_k\leq 0\Bigr\}.$$
2. If (V1) and (V2) hold for the sum process, then the myopic rule is optimal.

###### Proof.

For the Doob decomposition

$$\sum_{i=1}^m X^i_n=\sum_{i=1}^m X^i_1+\sum_{i=1}^m M^i_n+\sum_{k=1}^{n-1}\sum_{i=1}^m Y^i_k,$$

the sequence $\bigl(\sum_{i=1}^m Y^i_k\bigr)_k$ is non-increasing in $k$ by assumption, yielding 1. Assertion 2 now follows using the discussion in Appendix A. ∎

#### 3.1.2 Continuous time case

Now let us look at continuous time processes $(X^i_t)_{t\geq 0}$, $i=1,\dots,m$, adapted to a common filtration $(\mathcal{A}_t)_{t\geq 0}$, with Doob-Meyer decompositions

$$X^i_t=X^i_0+M^i_t+A^i_t,\quad i=1,\dots,m,$$

where $A^i_t=\int_0^t Y^i_s\,dV_s$ for an increasing process $V$ independent of $i$. (The typical case is $V_t=t$.) Then, the Doob-Meyer decomposition for the sum process is

$$\sum_{i=1}^m X^i_t=\sum_{i=1}^m X^i_0+\sum_{i=1}^m M^i_t+\int_0^t\sum_{i=1}^m Y^i_s\,dV_s,$$

so that for non-increasing $\bigl(\sum_{i=1}^m Y^i_t\bigr)_t$ the monotone case property holds. We obtain as in the discrete time case:

###### Proposition 3.2.

Assume that the processes $(X^i_t)_{t\geq 0}$ have Doob-Meyer decompositions

$$X^i_t=X^i_0+M^i_t+\int_0^t Y^i_s\,dV_s,\quad i=1,\dots,m,$$

such that all the processes $(Y^i_t)_t$ are non-increasing in $t$. Then:

1. The sum problem for $\bigl(\sum_{i=1}^m X^i_t\bigr)_t$ is a monotone case problem with myopic rule

   $$\tau^*=\inf\Bigl\{t:\sum_{i=1}^m Y^i_t\leq 0\Bigr\}.$$
2. If (V1) and (V2) hold for the sum process, then the myopic rule is optimal.

###### Remark 3.3.

The assumptions of the previous propositions can obviously be relaxed by assuming that the processes $Y^i$ are of the form

$$Y^i=B\,\tilde Y^i,$$

where $B$ is a nonnegative process not depending on $i$ and the processes $\tilde Y^i$ are non-increasing.

### 3.2 The product problem

#### 3.2.1 Discrete time case

Let us now again consider positive sequences $(X^i_n)_{n\geq 1}$, $i=1,\dots,m$, which we now assume to be independent. We are interested in the product problem with gain $X_n=\prod_{i=1}^m X^i_n$. In this case, it holds that

$$E\Bigl(\frac{X_{n+1}}{X_n}\Bigm|\mathcal{A}_n\Bigr)=\prod_{i=1}^m E\Bigl(\frac{X^i_{n+1}}{X^i_n}\Bigm|\mathcal{A}_n\Bigr).$$

So, if in the individual monotone case problems it holds that

$$E\Bigl(\frac{X^i_{n+1}}{X^i_n}\Bigm|\mathcal{A}_n\Bigr)\quad\text{is non-increasing in }n,\tag{2}$$

then this also holds for the product. By noting that (V2) is fulfilled automatically by the positivity, we obtain:

###### Proposition 3.4.

Assume that the processes $(X^i_n)_{n\geq 1}$ are nonnegative, independent, and have multiplicative Doob decompositions

$$X^i_n=M^i_n\prod_{j=1}^n\frac{E(X^i_j\mid\mathcal{A}_{j-1})}{X^i_{j-1}},\quad i=1,\dots,m,$$

such that (2) holds true. Then:

1. The product problem for $\bigl(\prod_{i=1}^m X^i_n\bigr)_n$ is a monotone case problem with myopic rule

   $$\tau^*=\inf\Bigl\{n:\prod_{i=1}^m E\Bigl(\frac{X^i_{n+1}}{X^i_n}\Bigm|\mathcal{A}_n\Bigr)\leq 1\Bigr\}.$$
2. If (V1) holds for the product process, then the myopic rule is optimal.

#### 3.2.2 Continuous time case

The same argument as in the discrete case yields

###### Proposition 3.5.

Assume that the processes $(X^i_t)_{t\geq 0}$ are nonnegative, independent semimartingales and have multiplicative Doob-Meyer decompositions

$$X^i_t=M^i_t\exp\Bigl(\int_0^t\frac{Y^i_s}{X^i_s}\,dV_s\Bigr),\quad i=1,\dots,m,$$

such that $Y^i_t/X^i_t$ is non-increasing in $t$ for $i=1,\dots,m$. Then:

1. The product problem for $\bigl(\prod_{i=1}^m X^i_t\bigr)_t$ is a monotone case problem with myopic rule

   $$\tau^*=\inf\Bigl\{t:\sum_{i=1}^m\frac{Y^i_t}{X^i_t}\leq 0\Bigr\}.$$
2. If (V1) holds for the product process, then the myopic rule is optimal.

## 4 Examples

### 4.1 The multidimensional house-selling problem

#### 4.1.1 Sum Problem

Consider independent i.i.d. integrable sequences $(Z^i_n)_{n\geq 1}$, $i=1,\dots,m$, and let

$$X^i_n=\max\{Z^i_1,\dots,Z^i_n\}-cn,\quad c>0.$$

In the terminology of the house-selling problem, we have $m$ houses to sell, with gain $X^i_n$ when selling house $i$ at time $n$ in a seller's market. It is well-known, see Chow et al. (1971), that this is a monotone case problem with

$$Y^i_k=E(\max\{Z^i_1,\dots,Z^i_{k+1}\}\mid\mathcal{A}_k)-\max\{Z^i_1,\dots,Z^i_k\}-c=f_i(S^i_k)-c,$$

where

$$S^i_k=\max\{Z^i_1,\dots,Z^i_k\},\qquad f_i(z)=E\bigl((Z^i_1-z)^+\bigr).$$

The filtration is, of course, the one generated by the independent sequences. Since the $f_i$ are non-increasing and the $S^i_k$ are non-decreasing in $k$, the processes $(Y^i_k)_k$ are non-increasing in $k$. Proposition 3.1 yields that the multidimensional sum problem with reward

$$X_n=\sum_{i=1}^m\max\{Z^i_1,\dots,Z^i_n\}-cn$$

is a monotone case problem, and the myopic stopping time

$$\tau^*=\inf\Bigl\{k:\sum_{i=1}^m f_i(S^i_k)\leq c\Bigr\}$$

is optimal under the additional condition of finite variance for the $Z^i_1$. The validity of (V1) and (V2) follows as in the univariate case, see Ferguson (2008), Appendix to Chapter 4.

Now, we discuss some explicit examples.

1. If all $Z^i_n$ are uniformly distributed on $[0,1]$, then

$$f_i(z)=f(z)=\int_0^1(u-z)^+\,du=\frac{(1-z)^2}{2},\quad z\in[0,1],$$

so

$$\tau^*=\inf\{k:(S^1_k,\dots,S^m_k)\in\mathcal{S}_m\},$$

where

$$\mathcal{S}_m=\Bigl\{(z_1,\dots,z_m):\sum_{i=1}^m(1-z_i)^2\leq 2c\Bigr\}.$$

So, the optimal stopping set is a ball around $(1,\dots,1)$ with radius $\sqrt{2c}$, intersected with the support $[0,1]^m$, see Figure 1 for an illustration.

2. Let all $Z^i_n$ be exponentially distributed with mean 1. Then

$$f_i(z)=f(z)=\int_0^\infty(u-z)^+e^{-u}\,du=e^{-z},$$

so that

$$\tau^*=\inf\{k:(S^1_k,\dots,S^m_k)\in\mathcal{S}_m\}$$

is optimal, where

$$\mathcal{S}_m=\Bigl\{(z_1,\dots,z_m):\sum_{i=1}^m e^{-z_i}\leq c\Bigr\},$$

see Figure 2.

3. Consider values $\bar z_1<\dots<\bar z_r$ and probabilities $p_1,\dots,p_r$ with $\sum_{l=1}^r p_l=1$ such that for all $i,n$

$$P(Z^i_n=\bar z_l)=p_l,\quad l=1,\dots,r.$$

Then

$$f_i(z)=f(z)=\sum_{l:\bar z_l>z}(\bar z_l-z)p_l$$

is piecewise linear, and

$$\mathcal{S}_m=\Bigl\{(z_1,\dots,z_m):\sum_{i=1}^m\sum_{l:\bar z_l>z_i}(\bar z_l-z_i)p_l\leq c\Bigr\}$$

is a polyhedron.
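
The myopic rule of Example 1 is easy to simulate. The following sketch (our own illustration, not from the paper; the parameter values $m=2$, $c=0.05$ are arbitrary) tracks the running maxima of uniform offers and stops at the first entrance into the ball $\sum_i(1-z_i)^2\leq 2c$:

```python
import numpy as np

def simulate_myopic_sum(m=2, c=0.05, horizon=10_000, seed=1):
    # running maxima S^i_k of i.i.d. Uniform(0,1) offers for m houses
    rng = np.random.default_rng(seed)
    S = rng.uniform(size=m)                        # S^i_1
    for k in range(1, horizon + 1):
        if np.sum((1.0 - S) ** 2) <= 2.0 * c:      # entrance into the ball S_m
            return k, S
        S = np.maximum(S, rng.uniform(size=m))
    return horizon, S

tau, S = simulate_myopic_sum()
assert np.sum((1.0 - S) ** 2) <= 2.0 * 0.05       # stopped inside the stopping set
```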

###### Remark 4.1.

In the house-selling problem, the functions $f_i$ are decreasing and convex. In the examples above, we assumed identical distributions, so that the $f_i$ are independent of $i$ and the stopping sets are symmetric. Of course, this symmetry disappears for non-identical distributions, see Figure 3.

#### 4.1.2 Product problem

Another multidimensional version of the house-selling problem is the product problem with constant costs, that is, with reward

$$\prod_{i=1}^m\max\{Z^i_1,\dots,Z^i_n\}-cn.$$

It can, however, straightforwardly be checked that this does not lead to a monotone case problem. We now modify the classical problem by using a discounting factor $\rho\in(0,1)$. More precisely, for $(Z^i_n)_{n\geq 1}$ as above with $Z^i_1>0$ a.s., let

$$X^i_n=\rho^n\max\{Z^i_1,\dots,Z^i_n\}.$$

Then,

$$E\Bigl(\frac{X^i_{k+1}}{X^i_k}\Bigm|\mathcal{A}_k\Bigr)=\rho\,g_i(S^i_k),$$

where

$$S^i_k=\max\{Z^i_1,\dots,Z^i_k\},\qquad g_i(z)=E\Bigl(\max\Bigl\{1,\frac{Z^i_1}{z}\Bigr\}\Bigr).$$

Similarly to the sum problem, the $g_i$ are decreasing and the $S^i_k$ are non-decreasing in $k$, so that the processes $\bigl(g_i(S^i_k)\bigr)_k$ are non-increasing in $k$. Therefore, the multidimensional product problem with gain

$$X_n=\prod_{i=1}^m X^i_n$$

is a monotone case problem and the myopic stopping time reads as

$$\tau^*=\inf\Bigl\{k:\prod_{i=1}^m g_i(S^i_k)\leq\rho^{-m}\Bigr\}.$$

It is not difficult to see that $\tau^*$ is optimal according to Proposition 3.4. Indeed, (V2) is clear due to the non-negativity, and (V1) can be checked using arguments similar to Ferguson (2008), Appendix to Chapter 4.

As for the sum problem, we obtain an explicit optimal stopping rule when considering concrete distributions. For example, if all $Z^i_n$ are uniformly distributed on $[0,1]$, then

$$g_i(z)=g(z)=\int_0^1\max\Bigl\{1,\frac{u}{z}\Bigr\}\,du=\frac{1+z^2}{2z},$$

so

$$\tau^*=\inf\{k:(S^1_k,\dots,S^m_k)\in\mathcal{S}_m\},$$

where

$$\mathcal{S}_m=\Bigl\{(z_1,\dots,z_m):\prod_{i=1}^m\frac{1+z_i^2}{z_i}\leq\Bigl(\frac{\rho}{2}\Bigr)^{-m}\Bigr\}.$$

See Figure 4 for an illustration.
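
The discounted product rule can be sketched analogously (again our own illustration with arbitrary parameters): with $g(z)=(1+z^2)/(2z)$, one stops once $\prod_i g(S^i_k)\leq\rho^{-m}$:

```python
import numpy as np

def g(z):
    # g(z) = E max{1, U/z} for U ~ Uniform(0,1), z in (0,1]
    return (1.0 + z ** 2) / (2.0 * z)

def simulate_myopic_product(m=2, rho=0.9, horizon=10_000, seed=2):
    rng = np.random.default_rng(seed)
    S = rng.uniform(size=m)                        # running maxima S^i_k
    for k in range(1, horizon + 1):
        if np.prod(g(S)) <= rho ** (-m):           # myopic condition
            return k, S
        S = np.maximum(S, rng.uniform(size=m))
    return horizon, S

# sanity checks: g decreases on (0,1] with g(1) = 1,
# so the threshold rho^{-m} > 1 is eventually reached
zs = np.linspace(0.05, 1.0, 200)
assert np.all(np.diff(g(zs)) < 0) and g(1.0) == 1.0

tau, S = simulate_myopic_product()
assert np.prod(g(S)) <= 0.9 ** (-2)
```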

###### Remark 4.2.

In the house-selling problem, as well as in other stopping problems, one might also want to study a problem of max-type, that is, the problem with gain

$$X_n=\max_{i=1,\dots,m}\max\{Z^i_1,\dots,Z^i_n\}-cn=\max_{l=1,\dots,n}\,\max_{i=1,\dots,m}Z^i_l-cn.$$

So, this is not a truly multidimensional problem, as it boils down to a one-dimensional problem for $\tilde Z_l=\max_{i=1,\dots,m}Z^i_l$. But in general, it seems harder to work with max-type problems than with sum problems due to the nonlinearity of the max function.

### 4.2 The multidimensional burglar’s problem

#### 4.2.1 Sum problem

Here, we have, for $i=1,\dots,m$, independent i.i.d. sequences $(Z^i_n)_{n\geq 1}$ and $(\delta^i_n)_{n\geq 1}$, where $Z^i_n\geq 0$ describes the burglar's gain and $\delta^i_n$ equals $0$ or $1$ when getting caught or not caught, resp. Then, we look at

$$X^i_n=\Bigl(\sum_{j=1}^n Z^i_j\Bigr)\prod_{j=1}^n\delta^i_j$$

with the obvious interpretation. The sum problem corresponds to the question when a burglar gang should stop their work. It is well-known that for each $i$ we have a monotone case problem. Indeed, writing $p_i=P(\delta^i_1=1)$ and $a_i=EZ^i_1$, it holds that

$$Y^i_k=E\Bigl(\Bigl(\sum_{j=1}^k Z^i_j+Z^i_{k+1}\Bigr)\prod_{j=1}^k\delta^i_j\,\delta^i_{k+1}\Bigm|\mathcal{A}_k\Bigr)-X^i_k=X^i_kp_i+\Bigl(\prod_{j=1}^k\delta^i_j\Bigr)a_ip_i-X^i_k=X^i_k(p_i-1)+\Bigl(\prod_{j=1}^k\delta^i_j\Bigr)a_ip_i,$$

hence $Y^i_k\leq 0$ iff

$$\prod_{j=1}^k\delta^i_j=0\quad\text{or}\quad\sum_{j=1}^k Z^i_j\geq\frac{a_ip_i}{1-p_i}.$$

Let us first look at the sum problem for $m=2$ with constant $p_1=p_2=p$ and $a_1=a_2=a$. Then,

$$Y^1_k+Y^2_k=(X^1_k+X^2_k)(p-1)+\Bigl(\prod_{j=1}^k\delta^1_j+\prod_{j=1}^k\delta^2_j\Bigr)ap.$$

If $\prod_{j=1}^k\delta^1_j=\prod_{j=1}^k\delta^2_j=1$, this becomes

$$\sum_{j=1}^k(Z^1_j+Z^2_j)(p-1)+2ap,$$

hence

$$Y^1_k+Y^2_k\leq 0\quad\text{iff}\quad\sum_{j=1}^k(Z^1_j+Z^2_j)\geq\frac{2ap}{1-p}.$$

But if, e.g., $\delta^2_{k+1}=0$, then

$$Y^1_{k+1}+Y^2_{k+1}=\Bigl(\sum_{j=1}^{k+1}Z^1_j\Bigr)(p-1)+ap,$$

and $Y^1_{k+1}+Y^2_{k+1}\leq 0$ does not hold true in general. So, the sum problem is not monotone in general.

In the case that $\delta_j:=\delta^i_j$ is independent of $i$ – that is, the police take away all stolen goods when catching one member of the gang – the problem for

$$\sum_{i=1}^m X^i_n=\Bigl(\sum_{j=1}^n\sum_{i=1}^m Z^i_j\Bigr)\prod_{j=1}^n\delta_j$$

is simply the one-dimensional case for

$$\tilde Z_j=\sum_{i=1}^m Z^i_j.$$

#### 4.2.2 Product problem

We now consider the product version of the multidimensional burglar’s problem. We could directly apply Proposition 3.4, but we want to cover a slightly more general case including geometric averages of the gains:

$$X_n=\prod_{i=1}^m(S^i_n\rho^i_n)^{\alpha_i},\quad\alpha_i>0,$$

with

$$S^i_n=\sum_{j=1}^n Z^i_j,\qquad\rho^i_n=\prod_{j=1}^n\delta^i_j,$$

so

$$X_n=\prod_{i=1}^m(S^i_n)^{\alpha_i}\prod_{i=1}^m\rho^i_n,\qquad X_{n+1}=\prod_{i=1}^m(S^i_n+Z^i_{n+1})^{\alpha_i}\prod_{i=1}^m\rho^i_n\delta^i_{n+1}.$$

Writing $\lambda:=\prod_{i=1}^m p_i$, it follows that

$$E(X_{n+1}\mid\mathcal{A}_n)=\lambda\prod_{i=1}^m\rho^i_n\prod_{i=1}^m\int(S^i_n+z)^{\alpha_i}\,P^{Z^i_1}(dz),$$

so that $E(X_{n+1}\mid\mathcal{A}_n)\leq X_n$ holds iff

$$\prod_{i=1}^m\rho^i_n=0\quad\text{or}\quad\lambda\prod_{i=1}^m\int(S^i_n+z)^{\alpha_i}\,P^{Z^i_1}(dz)\leq\prod_{i=1}^m(S^i_n)^{\alpha_i}.$$

So on $\{\prod_{i=1}^m\rho^i_n=1\}$, the inequality to be considered becomes

$$\prod_{i=1}^m\int\Bigl(1+\frac{z}{S^i_n}\Bigr)^{\alpha_i}\,P^{Z^i_1}(dz)\leq\frac{1}{\lambda}.$$

Since $S^i_n$ is non-decreasing in $n$, we have a monotone case problem, and the myopic stopping time is the first entrance time of the $m$-dimensional random walk $(S^1_n,\dots,S^m_n)$ into the set

$$\mathcal{S}_m=\Bigl\{(y_1,\dots,y_m):\prod_{i=1}^m h_i(y_i)\leq\frac{1}{\lambda}\Bigr\}$$

with

$$h_i(y)=\int\Bigl(1+\frac{z}{y}\Bigr)^{\alpha_i}\,P^{Z^i_1}(dz).$$

The optimality of the myopic stopping time follows as in the univariate case; see Proposition 3.4 and Ferguson (2008), Section 5.4.
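
As a hedged sketch of the entrance condition (the two-point gain distribution and the exponent $\alpha$ below are hypothetical choices of ours), the following snippet evaluates $h(y)=E(1+Z/y)^\alpha$ and confirms that it decreases in $y$, which is what drives the monotone case here:

```python
import math

def h(y, alpha, values, probs):
    # h(y) = E (1 + Z/y)^alpha for a discrete gain distribution
    return sum(p * (1.0 + z / y) ** alpha for z, p in zip(values, probs))

# hypothetical two-point gain distribution and exponent
values, probs, alpha = (1.0, 3.0), (0.5, 0.5), 0.5

# h decreases in y, so prod_i h_i(S^i_n) decreases along the random walk
ys = [0.5 * k for k in range(1, 40)]
hs = [h(y, alpha, values, probs) for y in ys]
assert all(a > b for a, b in zip(hs, hs[1:]))

# with alpha = 1, h(y) = 1 + E(Z)/y exactly; here E(Z) = 2
assert abs(h(10.0, 1.0, values, probs) - 1.2) < 1e-12
```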

### 4.3 The multidimensional Poisson disorder problem

The classical Poisson disorder problem is a change point-detection problem where the goal is to determine a stopping time which is as close as possible to the unobservable time when the intensity of an observed Poisson process changes its value. Early treatments include Galcuk and Rozovskiĭ (1971), Davis (1976), and a complete solution was obtained in Peskir and Shiryaev (2002). Further calculations can be found in Bayraktar et al. (2005).

Our multidimensional version of this problem is based on observing $m$ such independent processes with different change points $\sigma_1,\dots,\sigma_m$. The aim is now to find one time $\tau$ which is as close as possible to the unobservable times $\sigma_1,\dots,\sigma_m$. We now give a precise formulation. For each $i=1,\dots,m$, the unobservable random time $\sigma_i$ is assumed to be exponentially distributed with parameter $\lambda_i$, and the corresponding observable process $N^i$ is a counting process whose intensity switches from a constant $\mu^i_0$ to $\mu^i_1$ at $\sigma_i$. Furthermore, all random variables are independent for different $i$. We denote by $(\mathcal{F}_t)_{t\geq 0}$ the filtration given by

$$\mathcal{F}_t=\sigma\bigl(N^i_s,\,1_{\{\sigma_i\leq s\}}:s\leq t,\,i=1,\dots,m\bigr).$$

As $\sigma_i$ is not observable, we have to work under the subfiltration $(\mathcal{A}_t)_{t\geq 0}$ generated by $N^1,\dots,N^m$ only. If we stop the processes at $\tau$, a measure often used in the literature to describe the distance of $\tau$ and $\sigma_i$ is

$$Z^i_t=1_{\{\sigma_i\geq t\}}+c_i(t-\sigma_i)^+$$

for some constants $c_i>0$. We also stay in this setting, although a similar line of reasoning could be applied for other gain functions as well. As $Z^i$ is not adapted to the observable information $(\mathcal{A}_t)_{t\geq 0}$, we introduce the processes $X^i$ by conditioning as

$$X^i_t=E(Z^i_t\mid\mathcal{A}_t).$$

The classical Poisson disorder problem for $m=1$ is the optimal stopping problem for $X^1$ over all $(\mathcal{A}_t)$-stopping times $\tau$. Here, of course, we want to minimize (and not maximize) the expected distance, so that we have to make the obvious minor changes in the theory.

We now study the corresponding problem for the sum process

$$\sum_{i=1}^m X^i_t=E\Bigl(\sum_{i=1}^m\bigl(1_{\{\sigma_i\geq t\}}+c_i(t-\sigma_i)^+\bigr)\Bigm|\mathcal{A}_t\Bigr),\quad t\in[0,\infty).$$

Here, the conditional expectation is taken of the number of processes without a change before $t$ plus a weighted sum of the cumulated times that have passed since the other processes changed their intensity.

A possible application is a technical system consisting of $m$ components. Component $i$ changes its characteristics at a random time $\sigma_i$. After this change, the component produces additional costs of $c_i$ per time unit. $\tau$ denotes a time for maintenance; inspecting component $i$ before $\sigma_i$ produces (standardized) costs 1. Then, the optimal stopping problem corresponds to the following question: What is the best time for maintenance in this technical system?

The Doob-Meyer decomposition for $X^i$ can be found explicitly in Peskir and Shiryaev (2002), (2.14), and is given by

$$X^i_t=X^i_0+M^i_t+\int_0^t Y^i_s\,ds,$$

where

$$Y^i_t=-\lambda_i+(c_i+\lambda_i)\pi^i_t$$

and $\pi^i$ denotes the posterior probability process

$$\pi^i_t=P(\sigma_i\leq t\mid\mathcal{A}_t).$$

The process $\pi^i$ can be calculated in terms of $N^i$ in this case, see Peskir and Shiryaev (2002), (2.8), (2.9). Indeed,

$$\pi^i_t=\frac{\phi^i_t}{1+\phi^i_t},$$

where

$$\phi^i_t=\lambda_ie^{(\lambda_i+\mu^i_0-\mu^i_1)t}e^{N^i_t\log(\mu^i_1/\mu^i_0)}\int_0^te^{-(\lambda_i+\mu^i_0-\mu^i_1)s}e^{-N^i_s\log(\mu^i_1/\mu^i_0)}\,ds.$$

In particular, it can be seen that the process $\phi^i$ is increasing in the case $\mu^i_0\leq\mu^i_1\leq\lambda_i+\mu^i_0$, and therefore so is $\pi^i$. It is furthermore easily seen that the integrability assumptions in Proposition 3.2 are fulfilled. Therefore, we obtain that the optimal stopping time in the multidimensional Poisson disorder problem is – under this assumption for all $i$ – given by

$$\tau^*=\inf\Bigl\{t:\sum_{i=1}^m Y^i_t\geq 0\Bigr\}=\inf\Bigl\{t:\sum_{i=1}^m(c_i+\lambda_i)\pi^i_t\geq\sum_{i=1}^m\lambda_i\Bigr\},$$

so that the optimal stopping time is a first entrance time of the posterior probability process $(\pi^1_t,\dots,\pi^m_t)$ into the half space

$$\mathcal{S}_m=\Bigl\{(y_1,\dots,y_m):\sum_{i=1}^m(c_i+\lambda_i)y_i\geq\sum_{i=1}^m\lambda_i\Bigr\}.$$
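
A direct discretization of the formula for $\phi^i_t$ (our own numerical sketch; the step size and parameter values are assumptions, not from the paper) lets one compute $\pi^i_t$ from an observed counting path and check the monotonicity underlying the monotone case:

```python
import math
import random

def pi_path(jump_times, lam, mu0, mu1, T=10.0, dt=1e-3):
    # discretized version of the explicit formula for phi^i_t above
    a, logr = lam + mu0 - mu1, math.log(mu1 / mu0)
    integral, out = 0.0, []
    steps = int(T / dt) + 1
    for j in range(steps):
        t = j * dt
        N = sum(1 for s in jump_times if s <= t)        # N_t
        integral += math.exp(-a * t - N * logr) * dt    # running Riemann sum
        phi = lam * math.exp(a * t + N * logr) * integral
        out.append(phi / (1.0 + phi))                   # pi_t = phi/(1+phi)
    return out

random.seed(3)
jumps = sorted(random.uniform(0.0, 10.0) for _ in range(8))  # an observed counting path
pis = pi_path(jumps, lam=1.0, mu0=1.0, mu1=1.5)              # mu0 <= mu1 <= lam + mu0

assert all(0.0 <= p < 1.0 for p in pis)
assert all(b >= a - 1e-12 for a, b in zip(pis, pis[1:]))     # pi is increasing here
```

With $m$ such paths, the myopic time is then the first grid point with $\sum_i(c_i+\lambda_i)\pi^i_t\geq\sum_i\lambda_i$.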