An Elementary Method for the Explicit Solution of Multidimensional Optimal Stopping Problems
Abstract
We study explicitly solvable multidimensional optimal stopping problems. Our approach is based on the notion of monotone stopping problems in discrete and continuous time. The method is illustrated with a variety of examples including multidimensional versions of the house-selling and burglar's problems, the Poisson disorder problem, and an optimal investment problem.
Keywords: Monotone Stopping Rules; Optimal Stopping; Explicit Solutions; Multidimensional; Doob Decomposition; Doob–Meyer Decomposition; House-Selling Problem; Burglar's Problem; Poisson Disorder Problem; Optimal Investment Problem
Mathematics Subject Classification: 60G40; 62L10; 91G80
1 Introduction
In multidimensional problems, optimal stopping theory reaches its limits when trying to find explicit solutions for problems with a finite time horizon or an underlying (Markovian) process in dimension two or higher. In the one-dimensional case with infinite time horizon, the optimal continuation set is usually an interval of the real line, bounded or unbounded, so it remains to determine the boundary of that interval, which boils down to finding equations for one or, respectively, two real numbers. A wealth of techniques has been developed to achieve this, see Salminen (1985), Dayanik and Karatzas (2003) for one-dimensional diffusions, or Mordecki and Salminen (2007) and Christensen et al. (2013) for jump processes, to name but a few.
In the multidimensional case, the optimal continuation set is an open subset of a $d$-dimensional space, so its boundary is usually described by some $(d-1)$-dimensional surface. There are a few problems, see Dubins et al. (1994), Margrabe (1978), Gerber and Shiu (1996), Shepp and Shiryaev (1995), which, by some transformation method, may be transferred to a one-dimensional problem. Let us call a multidimensional problem truly multidimensional if such a transformation does not seem to be possible, or at least is not available in the current literature. We are not aware of any such problems in the literature which feature an explicit solution, but, of course, many techniques have been developed to tackle such problems, either semi-explicitly using nonlinear integral equations, see the monograph Peskir and Shiryaev (2006) for an overview, or numerically, see Chapter 8 in Detemple (2006) and Glasserman (2004).
The purpose of this note is to provide some examples of seemingly truly multidimensional problems with an explicit solution. The key for this is the notion of monotone stopping problems. The class of monotone stopping problems has been used extensively in the solution of optimal stopping problems, in particular in the first decades starting with Chow and Robbins (1961, 1963). A long list of examples can be found in Chow et al. (1971) and, more recently, in Ferguson (2008). The extension to continuous-time problems is not straightforward. This was developed in Ross (1971), Irle (1979), Irle (1983), and Jensen (1989). Although these references are not very recent, it is interesting to note that the solution to certain “modern” optimal stopping problems is directly based on the notion of monotone case problems, see, e.g., the odds algorithm initiated in Bruss (2000), or can be interpreted this way, see Christensen (2017) and also the discussion at the end of Appendix B.
The structure of this paper is as follows: We start by reviewing monotone case problems in terms of the Doob (resp. Doob–Meyer) decomposition in Section 2 and state criteria for optimality of the myopic stopping time in two appendices. We then present implications for multidimensional optimal stopping problems in Section 3. We observe that, under certain assumptions, the monotonicity property of the individual underlying processes carries over to sum- and product-type problems, which makes these solvable as well. To convince the reader that this elementary line of argument is nonetheless useful, we discuss a variety of examples in Section 4. We start with multidimensional versions of the classical house-selling and burglar's problems. Here, the original one-dimensional problems are well known to be solvable using the theory of monotone stopping. The last two examples are multidimensional extensions of continuous-time problems which are traditionally solved using other arguments: the Poisson disorder problem and the optimal investment problem.
2 Monotone Stopping Problems
2.1 Monotone stopping problems in discrete time and the Doob decomposition
Let us first stay in the realm of discrete time problems with infinite time horizon. For a sequence $(Z_n)_{n\ge 0}$ of integrable random variables, adapted to a given filtration $(\mathcal F_n)_{n\ge 0}$, we want to find a stopping time $\tau^*$ such that
\[ E Z_{\tau^*} \;=\; \sup_{\tau} E Z_{\tau} . \tag{1} \]
Here $\tau$ runs through all stopping times such that $E Z_\tau$ exists. We include a random variable $Z_\infty$, so that the stopping times may assume the value $\infty$. A natural choice for our problems below is $Z_\infty = \limsup_{n\to\infty} Z_n$; see the discussion in Subsection A.3.
There is a certain class of such problems for which we can easily solve this. Call the above problem a monotone case problem iff for all $n$ it holds that
\[ E\big(Z_{n+1} - Z_n \,\big|\, \mathcal F_n\big) \le 0 \;\Longrightarrow\; E\big(Z_{n+2} - Z_{n+1} \,\big|\, \mathcal F_{n+1}\big) \le 0 . \]
A particularly simple sufficient condition for the monotone case is that the differences $E(Z_{n+1} - Z_n \mid \mathcal F_n)$ are nonincreasing in $n$.
This condition turns out to be fulfilled in many examples of interest and allows for the treatment of multidimensional problems of sum type discussed in the following section.
For monotone case problems, it is natural to consider the myopic stopping time, defined as
\[ \tau_0 \;=\; \inf\big\{ n \ge 0 : E(Z_{n+1} - Z_n \mid \mathcal F_n) \le 0 \big\} . \]
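To make the rule concrete, here is a minimal sketch (our own illustration, not part of the theory above): given the realized one-step drifts $f_n = E(Z_{n+1}-Z_n \mid \mathcal F_n)$ along a sample path, the myopic time is simply the first index where the drift is nonpositive. The function name and interface are hypothetical.

```python
def myopic_time(drifts):
    """First index n with f_n <= 0, where drifts[n] is the realized
    conditional one-step gain f_n = E(Z_{n+1} - Z_n | F_n) along one path.

    Returns len(drifts) if the drift never becomes nonpositive, i.e.
    the myopic rule never stops within the observed horizon.
    """
    for n, f in enumerate(drifts):
        if f <= 0:
            return n
    return len(drifts)
```

In the monotone case the drift sequence changes sign at most once, so this first crossing already determines the whole stopping region.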
The discussion of the optimality of $\tau_0$ is a well-known topic, see the references mentioned in the introduction. We, however, find it enlightening to provide a short review using the Doob decomposition, which leads to a shortcut to optimality results without the usual machinery of optimal stopping theory. This approach also provides a unifying line of argument for both discrete and continuous time. For every $n$ let
\[ f_n \;=\; E\big( Z_{n+1} - Z_n \,\big|\, \mathcal F_n \big) , \]
so that
\[ Z_n \;=\; Z_0 + \sum_{k=0}^{n-1} f_k + M_n \]
with a zero-mean martingale $(M_n)_{n\ge 0}$.
For the myopic stopping time $\tau_0$ we have
\[ f_k > 0 \quad \text{for all } k < \tau_0 \]
– valid for all $k$ if $\tau_0 = \infty$ – and in the monotone case
\[ f_k \le 0 \quad \text{for all } k \ge \tau_0 . \]
Thus we have, pathwise for every $n$,
\[ \sum_{k=0}^{n-1} f_k \;\le\; \sum_{k=0}^{(\tau_0 \wedge n)-1} f_k \]
and, using $E M_\rho = 0$ for bounded stopping times $\rho$,
\[ E Z_{\tau \wedge n} \;=\; E\Big( Z_0 + \sum_{k=0}^{(\tau \wedge n)-1} f_k \Big) . \]
So, for any stopping time $\tau$, not necessarily finite a.s.,
\[ E Z_{\tau \wedge n} \;\le\; E\Big( Z_0 + \sum_{k=0}^{(\tau_0 \wedge n)-1} f_k \Big) \;=\; E Z_{\tau_0 \wedge n} . \]
Basically these simple inequalities lead to the optimality properties of the myopic stopping time. We provide sufficient conditions, called (V1) and (V2), which are suitable for our classes of problems, in Appendix A.
Remark 2.1.
It is remarkable that the myopic stopping rule immediately provides optimal stopping times for all possible time horizons in the monotone case, see Appendix A for the details. The same observation also holds true in the continuous time case discussed below. See also Ferguson (2008) and Irle (2017). This is in strong contrast to most Markovian-type optimal stopping problems, where infinite-time problems are often easier to solve as the stopping boundary is not time-dependent.
2.2 Monotone stopping problems in continuous time and the Doob–Meyer decomposition
To find the extension of the discrete-time case to continuous-time processes we may use the Doob–Meyer decomposition. Under regularity assumptions, not discussed here, we have
\[ Z_t \;=\; Z_0 + A_t + M_t , \]
where $(M_t)$ is a zero-mean martingale and $(A_t)$ is of locally bounded variation. Now assume that we may write
\[ A_t \;=\; \int_0^t f_s \, d\mu_s , \]
where $(\mu_t)$ is increasing. Then the myopic stopping time – here often called the infinitesimal look-ahead rule – becomes
\[ \tau_0 \;=\; \inf\big\{ t \ge 0 : f_t \le 0 \big\} . \]
In this situation, we say that the monotone case holds if $f_s \le 0$ for all $s \ge \tau_0$.
If $t \mapsto f_t$ is nonincreasing, then again the monotone case property is immediate. The discussion of optimality is essentially the same as in the discrete-time case and is therefore omitted. As no confusion can occur, we keep the notation (V1) and (V2) for the continuous-time versions of the optimality conditions.
2.3 Monotone stopping problems in discrete time and the multiplicative Doob decomposition
For processes with $Z_n > 0$ we may also consider the multiplicative Doob decomposition
\[ Z_n \;=\; Z_0\, A_n\, M_n , \]
where, with $g_k = E\big(Z_{k+1}/Z_k \,\big|\, \mathcal F_k\big)$,
\[ A_n \;=\; \prod_{k=0}^{n-1} g_k \]
and $(M_n)$ is a positive martingale with $M_0 = 1$.
Optimality of the myopic stopping time may also be inferred from this multiplicative decomposition in the monotone case. As in Subsection 2.1, stopping is myopically indicated at time $n$ iff $E(Z_{n+1}/Z_n \mid \mathcal F_n) \le 1$. The multiplicative decomposition leads, however, to different sufficient conditions for optimality. There is also a connection to a change of measure approach. Both are discussed in Appendix B.
We furthermore observe that we have a monotone case problem in particular if the ratios $E(Z_{n+1}/Z_n \mid \mathcal F_n)$ are nonincreasing in $n$,
which turns out to be a basis for the treatment of producttype problems in the following section.
2.4 Monotone stopping problems in continuous time and the multiplicative Doob–Meyer decomposition
Also in continuous time, a multiplicative Doob–Meyer-type decomposition of the form
\[ Z_t \;=\; Z_0\, A_t\, M_t \]
can be found in the case of a positive special semimartingale $(Z_t)$, see Jamshidian (2007). For ease of exposition, we now concentrate on the case of continuous semimartingales to have more explicit formulas. Using ibid., Theorem 4.2, $(M_t)$ is a local martingale and, if $(B_t)$ denotes the finite-variation process in the additive Doob–Meyer decomposition as in Subsection 2.2, the process $(A_t)$ here is given by
\[ A_t \;=\; \exp\Big( \int_0^t \frac{dB_s}{Z_s} \Big) . \]
The optimality may be discussed as in the discrete time case.
We again remark that the problem can be identified as monotone in particular if the density of the process $t \mapsto \int_0^t Z_s^{-1} \, dB_s$ is nonincreasing.
3 Multidimensional Monotone Case Problems
We now come to the main point of this paper: Can we use the monotone case approach to find truly multidimensional stopping problems with explicit solutions? The answer is yes insofar as we can present nontrivial examples in the following section.
3.1 The sum problem
It belongs to the basic knowledge of any student of mathematics that
\[ \sup_\tau E\Big( \sum_{i=1}^d Z^i_\tau \Big) \;\le\; \sum_{i=1}^d \sup_\tau E Z^i_\tau , \]
with strict inequality as a rule. For optimal stopping, this means that being able to solve the stopping problems
\[ \sup_\tau E Z^i_\tau , \qquad i = 1, \dots, d , \]
does not imply that we are able to solve the stopping problem
\[ \sup_\tau E\Big( \sum_{i=1}^d Z^i_\tau \Big) . \]
3.1.1 Discrete time case
Now let us look at sequences $(Z^i_n)_{n\ge 0}$, $i = 1,\dots,d$, adapted to a common filtration $(\mathcal F_n)_{n\ge 0}$, with Doob decompositions
\[ Z^i_n \;=\; Z^i_0 + \sum_{k=0}^{n-1} f^i_k + M^i_n , \]
where $f^i_k = E(Z^i_{k+1} - Z^i_k \mid \mathcal F_k)$ as in Subsection 2.1. Then, the Doob decomposition for the sum process is
\[ \sum_{i=1}^d Z^i_n \;=\; \sum_{i=1}^d Z^i_0 + \sum_{k=0}^{n-1} \sum_{i=1}^d f^i_k + \sum_{i=1}^d M^i_n . \]
Now, if for each $i$ the stopping problem for $(Z^i_n)$ is a monotone case problem, it does not necessarily follow that we have a monotone case problem for the sum, see Example 4.2 below. But in the special case that all the $f^i_n$ are nonincreasing in $n$, the monotone case property holds. We formulate this as a simple proposition:
Proposition 3.1.
Assume that the drift sequences $f^i_n$, $i = 1,\dots,d$, are nonincreasing in $n$. Then the sum problem for $\sum_{i=1}^d Z^i_n$ is a monotone case problem, and under (V1) and (V2) the myopic stopping time is optimal.
Proof.
For the Doob decomposition of the sum process, the drift sequence $\sum_{i=1}^d f^i_n$ is nonincreasing in $n$ by assumption, yielding the monotone case property. Optimality of the myopic stopping time now follows using the discussion in Appendix A.
∎
3.1.2 Continuous time case
Now let us look at continuous-time processes $(Z^i_t)$, $i = 1,\dots,d$, adapted to a common filtration $(\mathcal F_t)$, with Doob–Meyer decompositions
\[ Z^i_t \;=\; Z^i_0 + \int_0^t f^i_s \, d\mu_s + M^i_t , \]
where $(\mu_t)$ is an increasing process independent of $i$. (The typical case is $\mu_t = t$.) Then, the Doob–Meyer decomposition for the sum process is
\[ \sum_{i=1}^d Z^i_t \;=\; \sum_{i=1}^d Z^i_0 + \int_0^t \sum_{i=1}^d f^i_s \, d\mu_s + \sum_{i=1}^d M^i_t , \]
so that for nonincreasing $t \mapsto \sum_{i=1}^d f^i_t$ the monotone case property holds. We obtain as in the discrete time case:
Proposition 3.2.
Assume that $t \mapsto \sum_{i=1}^d f^i_t$ is nonincreasing. Then the sum problem for $\sum_{i=1}^d Z^i_t$ is a monotone case problem, and under (V1) and (V2) the myopic stopping time is optimal.
Remark 3.3.
The assumptions of the previous propositions can obviously be relaxed by assuming that the drift processes are of the form
\[ f^i_t \;=\; g_t\, h^i_t , \]
where $h^i$ is a nonincreasing process and $g$ is a nonnegative process independent of $i$.
3.2 The product problem
3.2.1 Discrete time case
Now let us again consider positive sequences $(Z^i_n)_{n\ge 0}$, $i = 1,\dots,d$, which we now assume to be independent. We are interested in the product problem with gain $\prod_{i=1}^d Z^i_n$. In this case, it holds that
\[ E\Big( \prod_{i=1}^d Z^i_{n+1} \Big/ \prod_{i=1}^d Z^i_n \,\Big|\, \mathcal F_n \Big) \;=\; \prod_{i=1}^d E\big( Z^i_{n+1}/Z^i_n \,\big|\, \mathcal F_n \big) . \]
So, if in the individual monotone case problems it holds that
\[ E\big( Z^i_{n+1}/Z^i_n \,\big|\, \mathcal F_n \big) \text{ is nonincreasing in } n , \tag{2} \]
then this also holds for the product. By noting that (V2) is fulfilled automatically by the positivity, we obtain
Proposition 3.4.
Assume that the sequences $(Z^i_n)$, $i = 1,\dots,d$, are positive, independent, and satisfy (2). Then the product problem for $\prod_{i=1}^d Z^i_n$ is a monotone case problem with myopic rule $\tau_0$; if (V1) holds, then $\tau_0$ is optimal.
3.2.2 Continuous time case
The same argument as in the discrete case yields
Proposition 3.5.
Assume that the processes $(Z^i_t)$, $i = 1,\dots,d$, are nonnegative, independent semimartingales and have multiplicative Doob–Meyer decompositions
\[ Z^i_t \;=\; Z^i_0\, A^i_t\, M^i_t \]
such that the density of $\log A^i_t$ is nonincreasing in $t$ for $i = 1,\dots,d$. Then:
The product problem for $\prod_{i=1}^d Z^i_t$ is a monotone case problem with myopic rule $\tau_0$.
If (V1) holds for the product, then the myopic rule $\tau_0$ is optimal.
4 Examples
4.1 The multidimensional houseselling problem
4.1.1 Sum problem
Consider $d$ independent i.i.d. integrable sequences $(X^i_n)_{n\ge 1}$, $i = 1,\dots,d$, and let for $n \ge 1$
\[ Z^i_n \;=\; V^i_n - n c_i , \qquad V^i_n = \max(X^i_1, \dots, X^i_n) , \]
with constants $c_i > 0$. In the terminology of the house-selling problem, we have $d$ houses to sell with gain $Z^i_n$ when selling house $i$ at time $n$, the offers $X^i_n$ describing a seller's market. It is well known, see Chow et al. (1971), that for each $i$ this is a monotone case problem with
\[ f^i_n \;=\; E\big( (X^i_{n+1} - V^i_n)^+ \,\big|\, \mathcal F_n \big) - c_i \;=\; g_i(V^i_n) - c_i , \]
where
\[ g_i(v) \;=\; E\big( (X^i_1 - v)^+ \big) . \]
The filtration is, of course, the one generated by the independent sequences. Since the $g_i$ are nonincreasing and the $V^i_n$ are nondecreasing in $n$, the processes $f^i_n$ are nonincreasing in $n$. Proposition 3.1 yields that the multidimensional sum problem with reward
\[ Z_n \;=\; \sum_{i=1}^d Z^i_n \;=\; \sum_{i=1}^d V^i_n - n \sum_{i=1}^d c_i \]
is a monotone case problem, and the myopic stopping time
\[ \tau_0 \;=\; \inf\Big\{ n \ge 1 : \sum_{i=1}^d g_i(V^i_n) \le \sum_{i=1}^d c_i \Big\} \]
is optimal under the additional condition of finite variance for the $X^i_n$. The validity of (V1) and (V2) follows as in the univariate case, see Ferguson (2008), Appendix to Chapter 4.
Now, we discuss some explicit examples.

If all $X^i_n$ are uniformly distributed on $[0,1]$, then
\[ g_i(v) \;=\; \tfrac{1}{2}(1 - v)^2 , \]
so
\[ \tau_0 \;=\; \inf\big\{ n : (V^1_n, \dots, V^d_n) \in S \big\} , \]
where
\[ S \;=\; \Big\{ v \in [0,1]^d : \sum_{i=1}^d (1 - v_i)^2 \le 2 \sum_{i=1}^d c_i \Big\} . \]
So, the optimal stopping set is a ball around $(1,\dots,1)$ with radius $\big(2\sum_{i=1}^d c_i\big)^{1/2}$ intersected with the support $[0,1]^d$, see Figure 1 for an illustration in dimension $d = 2$.

Consider offers whose distributions are concentrated on finitely many points. Then
\[ v \mapsto E\big( (X^i_1 - v)^+ \big) \]
is piecewise linear, and
\[ S \;=\; \Big\{ v : \sum_{i=1}^d E\big( (X^i_1 - v_i)^+ \big) \le \sum_{i=1}^d c_i \Big\} \]
is a polyhedron.
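As a quick numerical illustration (our own sketch under the uniform assumption, not taken from the paper): for offers uniform on $[0,1]$ with cost $c$ per period, $E\big((X-v)^+\big) = (1-v)^2/2$, so the one-dimensional myopic rule stops once the running maximum reaches $1 - \sqrt{2c}$, and the multidimensional rule stops once the summed residual gains drop below the summed costs. The function names are illustrative.

```python
import math
import random

def univariate_threshold(c):
    """Offers ~ U[0,1], cost c per period: stop once the running maximum m
    satisfies E[(X - m)^+] = (1 - m)**2 / 2 <= c, i.e. m >= 1 - sqrt(2c)."""
    return 1.0 - math.sqrt(2.0 * c)

def myopic_stop_sum(maxima, costs):
    """Multidimensional sum rule for U[0,1] offers: stop iff
    sum_i (1 - V_i)**2 / 2 <= sum_i c_i  (a ball around (1, ..., 1))."""
    return sum((1.0 - v) ** 2 / 2.0 for v in maxima) <= sum(costs)

def simulate_one_house(c, rng):
    """Run the univariate myopic rule on one simulated path and
    return the realized gain: max-so-far minus n * c."""
    b, m, n = univariate_threshold(c), 0.0, 0
    while m < b:
        n += 1
        m = max(m, rng.random())
    return m - n * c
```

For example, with $c = 0.02$ the threshold is $1 - \sqrt{0.04} \approx 0.8$, and Monte Carlo averages of `simulate_one_house` concentrate near this value, consistent with the known fact that the value of the univariate problem equals the threshold.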
Remark 4.1.
In the house-selling problem, the functions $v \mapsto E\big((X^i_1 - v)^+\big)$ are decreasing and convex. In the examples above, we assumed identical distributions, so that these functions are independent of $i$ and the stopping sets are symmetric. Of course, this symmetry disappears for nonidentical distributions, see Figure 3.
4.1.2 Product problem
Another multidimensional version of the house-selling problem is the product problem with constant costs, that is, with reward
\[ \prod_{i=1}^d V^i_n - n c . \]
It can, however, straightforwardly be checked that this does not lead to a monotone case problem. We now modify the classical problem by using a discounting factor $\beta \in (0,1)$. More precisely, for $X^i_n$ as above with $X^i_n > 0$ a.s., let for $n \ge 1$
\[ Z^i_n \;=\; \beta^n V^i_n , \qquad V^i_n = \max(X^i_1, \dots, X^i_n) . \]
Then,
\[ E\big( Z^i_{n+1}/Z^i_n \,\big|\, \mathcal F_n \big) \;=\; \beta\, \frac{E\big( \max(V^i_n, X^i_{n+1}) \,\big|\, \mathcal F_n \big)}{V^i_n} \;=\; \beta\, h_i(V^i_n) , \]
where
\[ h_i(v) \;=\; \frac{E\big( \max(v, X^i_1) \big)}{v} \;=\; 1 + \frac{E\big( (X^i_1 - v)^+ \big)}{v} . \]
Similarly to the sum problem, the $h_i$ are decreasing and the $V^i_n$ are nondecreasing in $n$, so that the ratio processes $\beta\, h_i(V^i_n)$ are nonincreasing in $n$. Therefore, the multidimensional product problem with gain
\[ Z_n \;=\; \prod_{i=1}^d Z^i_n \;=\; \beta^{dn} \prod_{i=1}^d V^i_n \]
is a monotone case problem and the myopic stopping time reads as
\[ \tau_0 \;=\; \inf\Big\{ n \ge 1 : \beta^d \prod_{i=1}^d h_i(V^i_n) \le 1 \Big\} . \]
It is not difficult to see that $\tau_0$ is optimal according to Proposition 3.4. Indeed, (V2) is clear due to the nonnegativity, and for (V1) the required integrability condition can be checked for all integrable $X^i_n$ using arguments similar to Ferguson (2008), Appendix to Chapter 4.
As for the sum problem, we obtain an explicit optimal stopping rule when considering concrete distributions. For example, consider again the case that all $X^i_n$ are uniformly distributed on $[0,1]$; then
\[ E\big( (X^i_1 - v)^+ \big) \;=\; \tfrac{1}{2}(1 - v)^2 , \]
so
\[ \tau_0 \;=\; \inf\big\{ n : (V^1_n, \dots, V^d_n) \in S \big\} , \]
where
\[ S \;=\; \Big\{ v \in (0,1]^d : \beta^d \prod_{i=1}^d \Big( 1 + \frac{(1 - v_i)^2}{2 v_i} \Big) \le 1 \Big\} . \]
See Figure 4 for an illustration.
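Again purely as an illustrative check (our own computation under the uniform assumption; names are hypothetical): with $E\max(X, v) = (1 + v^2)/2$ for $X \sim U[0,1]$, the univariate discounted myopic rule stops once $\beta\,(1 + v^2)/2 \le v$, which gives the threshold $(1 - \sqrt{1 - \beta^2})/\beta$; the $d$-dimensional product rule compares $\beta^d \prod_i \big(1 + (1 - v_i)^2/(2 v_i)\big)$ with 1.

```python
import math

def discounted_threshold(beta):
    """Offers ~ U[0,1], discount beta in (0,1): the univariate myopic rule
    stops once the running maximum v satisfies
    beta * E[max(X, v)] = beta * (1 + v**2) / 2 <= v."""
    return (1.0 - math.sqrt(1.0 - beta * beta)) / beta

def myopic_stop_product(maxima, beta):
    """d-dimensional product rule for U[0,1] offers: stop iff
    beta**d * prod_i (1 + (1 - v_i)**2 / (2 * v_i)) <= 1."""
    ratio = beta ** len(maxima)
    for v in maxima:
        ratio *= 1.0 + (1.0 - v) ** 2 / (2.0 * v)
    return ratio <= 1.0
```

For $\beta = 0.8$ the univariate threshold is $(1 - 0.6)/0.8 = 0.5$; at the threshold the one-step ratio equals one, marking the boundary of the stopping set.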
Remark 4.2.
In the house-selling problem, as well as in other stopping problems, one might also want to study a problem of max type, that is, the problem with gain
\[ \max_{i=1,\dots,d} V^i_n - n c . \]
Since $\max_i V^i_n$ is the running maximum of the pooled offers $\tilde X_n = \max_i X^i_n$, this is not a truly multidimensional problem as it boils down to a one-dimensional problem for $(\tilde X_n)$. But in general, it seems harder to work with max-type problems than with sum-type problems due to the nonlinearity of the max function.
4.2 The multidimensional burglar’s problem
4.2.1 Sum problem
Here, we have, for independent i.i.d. sequences $(Y^i_n)_{n\ge 1}$ and $(X^i_n)_{n\ge 1}$, $i = 1,\dots,d$, where $Y^i_n \ge 0$ describes the $i$th burglar's gain and $X^i_n = 0$ or $1$ when getting caught or not caught, resp.,
\[ Z^i_n \;=\; \Big( \prod_{k=1}^n X^i_k \Big) \sum_{k=1}^n Y^i_k \;=\; W^i_n\, S^i_n \]
with the obvious interpretation. The sum problem corresponds to the question of when a burglar gang should stop their work. It is well known that for each $i$ we have a monotone case problem. Indeed, writing $p_i = P(X^i_1 = 1)$, it holds that
\[ E\big( Z^i_{n+1} - Z^i_n \,\big|\, \mathcal F_n \big) \;=\; W^i_n \big( p_i\, (S^i_n + E Y^i_1) - S^i_n \big) , \]
hence $E(Z^i_{n+1} - Z^i_n \mid \mathcal F_n) \le 0$ iff
\[ W^i_n = 0 \quad\text{or}\quad S^i_n \;\ge\; \frac{p_i\, E Y^i_1}{1 - p_i} . \]
Let us first look at the sum problem for $d = 2$ and constant gains $Y^i_n \equiv 1$, so that $S^i_n = n$. Then,
\[ E\big( Z_{n+1} - Z_n \,\big|\, \mathcal F_n \big) \;=\; \sum_{i=1}^2 W^i_n \big( p_i (S^i_n + 1) - S^i_n \big) . \]
If $W^1_n = W^2_n = 1$, this becomes
\[ (p_1 + p_2)(n + 1) - 2n , \]
hence
\[ E\big( Z_{n+1} - Z_n \,\big|\, \mathcal F_n \big) \le 0 \iff n \;\ge\; \frac{p_1 + p_2}{2 - (p_1 + p_2)} . \]
But if, e.g., the next $X^1_{n+1} = 0$, then
\[ E\big( Z_{n+2} - Z_{n+1} \,\big|\, \mathcal F_{n+1} \big) \;=\; p_2 (n + 2) - (n + 1) , \]
and $\le 0$ does not hold true in general. So, the sum problem is not monotone in general.
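The failure of monotonicity can be spot-checked numerically (our own illustration; the helper and the parameter values are hypothetical): with constant gains, the aggregate drift is nonpositive while both burglars are active, but turns positive again once the less successful burglar is caught.

```python
def sum_drift(active, loot, p, mean_gain=1.0):
    """Aggregate one-step drift  sum_i W_i * (p_i * (s_i + E[Y]) - s_i)
    for the burglar sum problem; active[i] is the indicator W_i that
    burglar i has not been caught, loot[i] the accumulated gain s_i."""
    return sum(
        w * (pi * (s + mean_gain) - s)
        for w, s, pi in zip(active, loot, p)
    )
```

With success probabilities $p = (0.1, 0.95)$ and both burglars active at loot $(3, 3)$, the aggregate drift is negative (the myopic rule indicates stopping), yet after burglar 1 is caught the drift of the remaining burglar is positive again, so the sign does not stay nonpositive.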
In the case that $X^i_n = X_n$ is independent of $i$ – that is, the police take away all stolen goods when catching one member of the gang – the problem for
\[ Z_n \;=\; \Big( \prod_{k=1}^n X_k \Big) \sum_{i=1}^d \sum_{k=1}^n Y^i_k \]
is simply the one-dimensional case for the aggregated gains $Y_n = \sum_{i=1}^d Y^i_n$.
4.2.2 Product problem
We now consider the product version of the multidimensional burglar’s problem. We could directly apply Proposition 3.4, but we want to cover a slightly more general case including geometric averages of the gains:
with
so
Using this, it follows
so that holds iff
So under , the inequality to be considered becomes
Since the left-hand side is nondecreasing in $n$, we have a monotone case problem and the myopic stopping time is the first entrance time for the $d$-dimensional random walk into the set
with
The optimality of the myopic stopping time follows as in the univariate case; see Proposition 3.4 and Ferguson (2008), 5.4.
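For the single burglar, the drift computed above gives a simple sketch (our own illustration, with $p = P(X_1 = 1)$ and illustrative function names): continuing is favorable while $p\,(s + EY_1) > s$, i.e., while the accumulated loot $s$ is below $p\, EY_1/(1-p)$.

```python
import random

def burglar_threshold(p, mean_gain):
    """Single burglar: stop once the accumulated loot s satisfies
    p * (s + E[Y]) <= s, i.e. s >= p * E[Y] / (1 - p)."""
    return p * mean_gain / (1.0 - p)

def run_myopic_burglar(p, rng, max_steps=10_000):
    """Simulate the myopic rule with gains Y ~ U[0,1] (E[Y] = 1/2):
    keep burgling while alive and below the threshold.
    Returns the final loot (0.0 if caught before stopping)."""
    threshold, loot = burglar_threshold(p, 0.5), 0.0
    for _ in range(max_steps):
        if loot >= threshold:
            return loot          # stop voluntarily, loot is safe
        if rng.random() > p:
            return 0.0           # caught: everything is lost
        loot += rng.random()
    return loot
```

For example, with survival probability $p = 0.9$ and $EY_1 = 1$ the threshold is $0.9/0.1 = 9$; each simulated path either ends in confiscation or stops with loot at or above the threshold.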
4.3 The multidimensional Poisson disorder problem
The classical Poisson disorder problem is a change-point detection problem where the goal is to determine a stopping time which is as close as possible to the unobservable time at which the intensity of an observed Poisson process changes its value. Early treatments include Galcuk and Rozovskiĭ (1971), Davis (1976), and a complete solution was obtained in Peskir and Shiryaev (2002). Further calculations can be found in Bayraktar et al. (2005).
Our multidimensional version of this problem is based on observing $d$ such independent processes with different change points $\theta_1,\dots,\theta_d$. The aim is now to find one time which is as close as possible to the unobservable times $\theta_1,\dots,\theta_d$. We now give a precise formulation. For each $i$, the unobservable random time $\theta_i$ is assumed to be exponentially distributed with parameter $\lambda_i$, and the corresponding observable process $N^i$ is a counting process whose intensity switches from one constant value to another at $\theta_i$. Furthermore, all random variables are independent for different $i$. We denote by $(\mathcal G_t)$ the filtration generated by the observations together with the change points.
As $\theta_i$ is not observable, we have to work under the subfiltration $(\mathcal F_t)$ generated by the observations $N^1,\dots,N^d$ only. If we stop the process at $\tau$, a measure to describe the distance of $\tau$ and $\theta_i$ often used in the literature is
\[ P(\tau < \theta_i) + c_i\, E(\tau - \theta_i)^+ \]
for some constant $c_i > 0$. We also stay in this setting, although a similar line of reasoning could be applied to other gain functions as well. As this criterion is not adapted to the observable information $(\mathcal F_t)$, we introduce the processes $(Z^i_t)$ by conditioning as
\[ Z^i_t \;=\; P\big( \theta_i > t \,\big|\, \mathcal F_t \big) + c_i\, E\big( (t - \theta_i)^+ \,\big|\, \mathcal F_t \big) . \]
The classical Poisson disorder problem for a single process is the optimal stopping problem of minimizing $E Z^i_\tau$ over all $(\mathcal F_t)$-stopping times $\tau$. Here, of course, we want to minimize (and not maximize) the expected distance, so that we have to make the obvious minor changes in the theory.
We now study the corresponding problem for the sum process
\[ Z_t \;=\; \sum_{i=1}^d Z^i_t . \]
Here, $Z_t$ is the conditional expected number of processes without a change before $t$ plus a weighted sum of the conditional expected cumulated times that have passed since the other processes changed their intensity.
A possible application is a technical system consisting of $d$ components. Component $i$ changes its characteristics at a random time $\theta_i$. After this change, the component produces additional costs of $c_i$ per time unit. The stopping time $\tau$ denotes a time for maintenance. Inspecting component $i$ before $\theta_i$ produces (standardized) costs 1. Then, the optimal stopping problem corresponds to the following question: What is the best time for maintenance in this technical system?
The Doob–Meyer decomposition for $Z^i$ can explicitly be found in Peskir and Shiryaev (2002), (2.14), and is given by
\[ Z^i_t \;=\; Z^i_0 + \int_0^t f^i_s \, ds + M^i_t , \]
where
\[ f^i_s \;=\; c_i\, \pi^i_s - \lambda_i\, (1 - \pi^i_s) \]
and $\pi^i$ denotes the posterior probability process
\[ \pi^i_t \;=\; P\big( \theta_i \le t \,\big|\, \mathcal F_t \big) . \]
The process $\pi^i$ can be calculated in terms of the observations $N^i$ in this case, see Peskir and Shiryaev (2002), (2.8), (2.9). Indeed,
\[ \pi^i_t \;=\; \frac{\varphi^i_t}{1 + \varphi^i_t} , \]
where $\varphi^i$ is a likelihood-ratio process given explicitly as a functional of $N^i$.
In particular, it can be seen that the process $\pi^i$ is increasing in the case considered here, and therefore so is the drift $f^i$. It is furthermore easily seen that the integrability assumptions in Proposition 3.2 are fulfilled. Therefore, we obtain that the optimal stopping time in the multidimensional Poisson disorder problem is – under this assumption – given by the myopic rule,
so that the optimal stopping time is a first entrance time of the posterior probability process into a half space.
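For orientation, with the drift written in terms of the posterior probabilities as in Peskir and Shiryaev (2002), the myopic rule for the sum problem can be sketched as follows (a sketch under the notation assumed above; $c_i$ and $\lambda_i$ are the cost rates and prior parameters):

```latex
% Myopic (infinitesimal look-ahead) rule for the sum problem, sketched:
% for the minimization problem, stop as soon as the aggregated drift
% becomes nonnegative.
\tau_0 \;=\; \inf\Big\{\, t \ge 0 \;:\;
  \sum_{i=1}^{d} \big( c_i\, \pi^i_t - \lambda_i\, (1 - \pi^i_t) \big) \;\ge\; 0 \,\Big\}
\;=\; \inf\Big\{\, t \ge 0 \;:\;
  \sum_{i=1}^{d} (c_i + \lambda_i)\, \pi^i_t \;\ge\; \sum_{i=1}^{d} \lambda_i \,\Big\},
```

which is indeed a first entrance time of $(\pi^1_t,\dots,\pi^d_t)$ into a half space of $[0,1]^d$.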