Optimal decision under ambiguity for diffusion processes
Abstract
In this paper we consider stochastic optimization problems for an ambiguity-averse decision maker who is uncertain about the parameters of the underlying process. In the first part we consider problems of optimal stopping under drift ambiguity for one-dimensional diffusion processes. Analogously to the case of ordinary optimal stopping problems for one-dimensional Brownian motions, we reduce the problem to the geometric problem of finding the smallest majorant of the reward function in a two-parameter function space. In the second part we solve optimal stopping problems when the underlying process may crash down. These problems are reduced to one optimal stopping problem and one Dynkin game. Examples are discussed.
Keywords: optimal stopping; drift ambiguity; crash scenario; Dynkin games; diffusion processes
Subject Classifications: 60G40; 62L15
1 Introduction
In most articles dealing with stochastic optimization problems, one major assumption is that the decision maker has full knowledge of the parameters of the underlying stochastic process. This does not seem to be a realistic assumption in many real-world situations. Therefore, different multiple prior models have been studied in the economic literature in recent years. Here, we want to mention [DE92] and [ES03], and refer to [CR10] for an economic discussion and further references.
In this setting it is assumed that the decision maker deals with the uncertainty via a worst-case approach, that is, she optimizes her reward under the assumption that the “market” chooses the worst possible prior. This is a natural assumption, and we also pursue this approach.
A very important class of stochastic optimization problems is given by optimal stopping problems. These problems arise in many different fields, e.g., in pricing American-style options, in portfolio optimization, and in sequential statistics. Discrete-time problems of optimal stopping in a multiple prior setting were first discussed in [Rie09], and results analogous to the classical ones were proved. In this setting a generalization of the classical best choice problem was treated in detail in [CR09]. In continuous time the case of an underlying diffusion with uncertainty about the drift is of special interest. The general theory (including adjusted Hamilton-Jacobi-Bellman equations) is developed in [CR10]. Some explicit examples are given there, but no systematic way of finding an analytical solution is described. In [Alv07] the case of monotonic reward functions for one-dimensional diffusion processes is considered. The restriction to monotonic reward functions simplifies the problem since only two different worst-case measures can arise.
Another class of stochastic optimization problems under uncertainty was dealt with in a series of papers starting with [KW02]: Portfolio optimization problems are considered under the assumption that the underlying asset price process may crash down at a certain (unknown) time point. The decision maker is again assumed to be ambiguity averse in the sense that she chooses the policy that is optimal under the worst possible realization of the crash date. See [KS09] for an overview of existing results.
The aim of this article is to treat optimal stopping problems under uncertainty for underlying one-dimensional diffusion processes. These kinds of problems are of special interest since they arise in many situations and often allow for an explicit solution.
The structure of this article is as follows: In Section 2 we first review some well-known facts about the solution of ordinary optimal stopping problems for an underlying Brownian motion. These problems can be solved graphically by characterizing the value function as the smallest concave majorant of the reward function. Then we treat the optimal stopping problem under ambiguity about the drift in a similar way: The result is that the value function can be characterized as the smallest majorant of the reward function in a two-parameter class of functions. The main tool is the use of generalized harmonic functions. After giving an example and characterizing the worst-case measure, we generalize the results to general one-dimensional diffusion processes.
In Section 3 we introduce the optimal stopping problem under ambiguity about crashes of the underlying process in the spirit of [KS09]. In this situation the optimal strategy can be described by two simple strategies: one pre-crash and one post-crash strategy. These strategies can be found as solutions of a one-dimensional Dynkin game and an ordinary optimal stopping problem, both of which can be solved using standard methods. We want to point out that this model is a natural situation in which Dynkin games arise and the theory developed in recent years can be used fruitfully. As an explicit example we study the valuation of American call options in the model with crashes. Here, the post-crash strategy is the well-known threshold strategy in the standard Black-Scholes setting. The pre-crash strategy is of the same type, but the optimal threshold is lower.
2 Optimal stopping under drift ambiguity
2.1 Graphical solution of ordinary optimal stopping problems
Problems of optimal stopping in continuous time are well-studied and the general theory is well-developed. Nonetheless, the explicit solution to such problems is often hard to find and the class of explicit examples is rather limited. Most of them are generalizations of the following situation, which allows for an easy geometric solution:
Let be a standard Brownian motion on a compact interval with absorbing boundary points and . We consider the problem of optimal stopping given by the value function
where the reward function is continuous and the supremum is taken over all stopping times w.r.t. the natural filtration for . Here and in the following, denotes taking the expectation for the process conditioned to start in . In this case it is well-known that the value function can be characterized as the smallest concave majorant of , see [DY69]. This means that the problem of optimal stopping can be reduced to finding the smallest majorant of in a simple class of functions. To find the smallest concave majorant of a function, one only has to consider affine functions, i.e., for each fixed point the value of the smallest concave majorant is given by
where is an element of the two-parameter class of affine functions of the form . This problem can be solved geometrically, see Figure 1. We want to remark that this problem is indeed a semi-infinite linear programming problem: one minimizes the value of the affine function at the fixed point subject to the constraint that it dominates the reward function on the whole interval.
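Written out in generic notation (reward $g$ on the interval $[a,b]$, coefficients $c_1, c_2$; these symbol names are chosen here for concreteness), the semi-infinite linear program reads:

```latex
\min_{c_1,\,c_2 \in \mathbb{R}} \; c_1 + c_2 x
\qquad \text{s.t.} \qquad c_1 + c_2 y \,\ge\, g(y) \quad \text{for all } y \in [a,b].
```

There are finitely many variables but one constraint per point of the interval, which is what makes the program semi-infinite.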
This gives rise to an efficient method for solving these problems, which can be generalized in an appropriate way, see [HS10] for an analytical method and [Chr12] for a numerical point of view.
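The geometric characterization can be sketched numerically as a fixed-point iteration: on a grid, the smallest concave majorant satisfies v(x) = max(g(x), average of the two neighbours), which is also the dynamic programming equation of a simple random walk approximation of the Brownian motion. The grid, reward, and iteration count below are illustrative choices, not taken from the text.

```python
import numpy as np

# Smallest concave majorant of a reward g on [0, 1], computed on a grid.
# Grid size, reward, and iteration count are illustrative choices.
n = 101
x = np.linspace(0.0, 1.0, n)
g = np.maximum(x - 0.5, 0.0)   # a convex, call-type reward

# Fixed-point iteration: v(x) = max(g(x), average of the two neighbours),
# the discrete analogue of the smallest concave majorant (absorbing boundary).
v = g.copy()
for _ in range(60000):
    v[1:-1] = np.maximum(g[1:-1], 0.5 * (v[:-2] + v[2:]))

# For a convex reward the smallest concave majorant is the chord through the
# boundary values, here the line x / 2, so v(0.5) is close to 0.25.
```

On this grid the iteration converges to the chord, so stopping is optimal only at the boundary points, in line with the graphical solution described above.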
The example described above is important both for theory and applications of optimal stopping since by studying it one can obtain an intuition for more complex situations such as finite time horizon problems and multidimensional driving processes, where numerical methods have to be used in most situations of interest.
The goal of this section is to handle optimal stopping problems with drift ambiguity for diffusion processes in a way similar to the ordinary case discussed above. This gives rise to an easy-to-handle geometric method for solving optimal stopping problems under drift ambiguity explicitly.
2.2 Special case: Brownian motion
In the following we use the notation of [CR10]: Let be a Brownian motion under the measure , fix and denote by the set of all probability measures that are equivalent to with density process of the form
for a progressively measurable process with for all . We want to find the value function
for some fixed discount rate and a measurable reward function , where means taking the expectation under the measure when the process is started in . Instead of taking affine functions as in Subsection 2.1, we construct another class of appropriate functions based on the minimal harmonic functions (introduced below) for the Brownian motion with drift resp. as follows:
Denote the roots of the equation
by and the roots of
by . Then are the minimal harmonic functions for a Brownian motion with drift , and the corresponding functions for a Brownian motion with drift . Note that and and . For all define the functions via
and
For , the function is constructed by smoothly merging harmonic functions for the Brownian motion with drift (for ) and (for ) at their minimum in . By taking derivatives and taking into account that and , one sees that the function is indeed .
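Under the usual notation for this construction (discount rate $\alpha > 0$, ambiguity level $\kappa$, roots $\gamma_1 < 0 < \gamma_2$ of $\tfrac12 z^2 + \kappa z = \alpha$ and $\beta_1 < 0 < \beta_2$ of $\tfrac12 z^2 - \kappa z = \alpha$; all symbol names here are assumptions made for concreteness), a $C^1$-merge at the minimum point $y$ with standardization $h_y(y) = 1$, $h_y'(y) = 0$ can be written as

```latex
h_y(x) =
\begin{cases}
\dfrac{\gamma_2\, e^{\gamma_1 (x-y)} - \gamma_1\, e^{\gamma_2 (x-y)}}{\gamma_2 - \gamma_1}, & x \le y,\\[2ex]
\dfrac{\beta_2\, e^{\beta_1 (x-y)} - \beta_1\, e^{\beta_2 (x-y)}}{\beta_2 - \beta_1}, & x \ge y.
\end{cases}
```

Both pieces have value $1$ and derivative $0$ at $y$, and the one-sided second derivatives at $y$ equal $-\gamma_1\gamma_2 = -\beta_1\beta_2 = 2\alpha$ by the root-product identities, which is the smooth-fit argument behind the $C^2$-property.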
The set does not form a convex cone for . This is the main difference from the case without drift ambiguity. Therefore, the standard techniques for optimal stopping are not immediately applicable. Nonetheless, the construction leads to the right supermartingales to work with:
Lemma 2.1.

For all with , , and it holds that
where the measure is such that
for a Brownian motion under .

For all and all stopping times it holds that

For all with , , and , it holds that
and
Proof.

For with density process , by Girsanov’s theorem, we may write
where is a Brownian motion under . Since we can apply Itô’s lemma and obtain
By construction of , it holds that
hence
Noting that iff , we obtain that the process is a bounded submartingale. Therefore, by the optional sampling theorem,
Under we see that is actually a local martingale that is bounded, hence a true martingale. Therefore, the optional sampling theorem yields equality.

By the calculation in , the process is a positive local martingale and hence also a supermartingale. Thus the optional sampling theorem for nonnegative supermartingales is applicable.

By noting that is decreasing and is increasing, the same arguments as in apply.
∎
The following theorem shows that the geometric solution described in Subsection 2.1 can indeed be generalized to the drift ambiguity case. Moreover, we characterize the optimal stopping set via maximum points of explicitly given functions.
Theorem 2.2.

It holds that
Furthermore, the infimum in is indeed a minimum.

A point is in the optimal stopping set if and only if there exists such that
Proof.
For each , and each stopping time we obtain using Lemma 2.1 (ii)
Since holds if and only if we obtain that
For the other inequality consider the following cases:
Case 1:
Take a sequence with such that . Then for using Lemma 2.1 (iii) we obtain
Therefore, .
Moreover, if is in the stopping set, i.e. , then we see that , that is, is a maximum point of the function , i.e. .
Case 2: The case can be handled the same way.
Case 3:
First we show that there exists such that
To this end, write
By construction of it holds that . Therefore,
Since the functions
and
are continuous as concave functions, we obtain that the function is continuous. By the same argument, the function is also continuous. By the intermediate value theorem applied to the function
there exists with as desired.
Now take sequences and with such that
Using we obtain by Lemma 2.1 (i)
This yields the result . As above we furthermore see that if is in the optimal stopping set, then it is a maximum point of , i.e. . ∎
Remark 2.3.

We would like to emphasize that we do not need any continuity assumptions on . This is remarkable because, even for the easy case described at the beginning of this section, most standard techniques do not lead to such a general result.

The previous proof is inspired by the ideas first described in [BL97]. It seems that other standard methods for dealing with optimal stopping problems for diffusions without drift ambiguity (such as Martin boundary theory as in [Sal85], generalized concavity methods as in [DK03], or linear programming arguments as in [HS10]) are not applicable with only minor modifications, due to the nonlinear structure coming from the drift ambiguity. A characterization of the optimal stopping points as in Theorem 2.2 (ii) for the problem without ambiguity can be found in [CI11].
2.3 Worst-case prior
Theorem 2.2 gives the value of the optimal stopping problem with drift ambiguity and also provides an easy way to find the optimal stopping time. Another important task is to determine the worst-case measure for a process started in a point , i.e., we would like to determine the measure such that . Using the results described above, the worst-case measure can also be found immediately:
Theorem 2.4.
Let and let be a minimizer as in Theorem 2.2 (i). Then is a worst-case measure for the process started in .
Proof.
This is immediate from the proof of Theorem 2.2. ∎
2.4 Example: American straddle in the Bachelier market
Because it is simple and instructive, we consider the example discussed in [CR10] in the light of our method:
We consider a variant of the American straddle option in a Bachelier market model as follows: As the driving process we consider a standard Brownian motion under with reward function . Our aim is to find the value at 0 of the optimal stopping problem
Using Theorem 2.2, we have to find the smallest majorant of in the set
One immediately sees that if , then and furthermore . Therefore, we only have to consider majorants of in the set
This one-dimensional problem can be solved immediately. For one obtains .
In fact, if denote the maximum points of we obtain that for . Moreover, for one immediately sees that there exists such that is a maximum point of and we obtain
Moreover, the worst-case measure is , i.e. the process has positive drift on and drift on .
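The worst-case structure of this example can be checked with a small finite-difference sketch. At each grid point the adversary picks the drift $\pm\kappa$ that minimizes the continuation value, which amounts to subtracting a term proportional to $\kappa\,|v'|$ from the mean. All numerical choices below (payoff $|x|$, parameter values, interval, grid, iteration count) are illustrative assumptions, not values from the text.

```python
import numpy as np

# Optimal stopping of a Brownian motion under kappa-ignorance with reward |x|,
# solved by explicit value iteration on a grid (illustrative parameters).
alpha, kappa = 1.0, 0.5          # discount rate and ambiguity level (assumed)
n = 200
x = np.linspace(-2.0, 2.0, n + 1)
dx = x[1] - x[0]
dt = dx * dx                      # time step of the random-walk approximation
g = np.abs(x)                     # straddle-type reward
disc = np.exp(-alpha * dt)

v = g.copy()
for _ in range(60000):
    up, down = v[2:], v[:-2]
    # Worst case over drifts +/- kappa: the adversary tilts the walk against
    # the slope of v, which subtracts kappa * |v'| * dt from the mean.
    worst = 0.5 * (up + down) - 0.5 * kappa * dx * np.abs(up - down)
    v[1:-1] = np.maximum(g[1:-1], disc * worst)

# v dominates g, is symmetric, and is strictly positive at 0, so it is
# optimal to continue on a neighbourhood of 0.
```

The computed continuation region is an interval around 0, and the minimizing drift points toward 0 on both sides, which matches the worst-case measure found analytically in this example.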
2.5 General diffusion processes
The results obtained above can be generalized to general one-dimensional diffusion processes. The only difficulty is to choose the appropriate functions carefully. Once these functions are constructed, the same arguments as in the previous subsections apply.
Let be a regular one-dimensional diffusion process on some interval with boundary points , characterized by its generator
for some continuous functions . For convenience we furthermore assume that the boundary points of are natural. For a generalization to other boundary behaviors, see the discussion in [BL00, Section 6]. Again denote by the set of all probability measures that are equivalent to with density process of the form
for a progressively measurable process with for all . We denote the fundamental solutions of the equation
by resp. for the increasing resp. decreasing positive solution, cf. [BS02, II.10] for a discussion and further references. Analogously, denote the fundamental solutions of
by resp. . Note that for each positive solution of one of the above ODEs it holds that
(1) 
hence all extremal points are minima, so that has at most one minimum. Therefore, for each the function has a unique minimum point, and each arises as such a minimum point. Hence, for each we can find constants such that the function
is with a unique minimum point in with the standardization . More explicitly, are given by
(2) 
where
Furthermore, write and . First, we show that the functions are always .
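On each side of the minimum point $y$, the two conditions of value one and vanishing derivative at $y$ determine the constants. Writing $\psi, \varphi$ for the fundamental solutions used on the respective side and $a, b$ for the constants (the explicit formulas below are a reconstruction under this standardization), one obtains the linear system and its solution

```latex
a\,\psi(y) + b\,\varphi(y) = 1, \qquad a\,\psi'(y) + b\,\varphi'(y) = 0,
\quad\Longrightarrow\quad
a = \frac{\varphi'(y)}{\psi(y)\varphi'(y) - \psi'(y)\varphi(y)}, \qquad
b = \frac{-\psi'(y)}{\psi(y)\varphi'(y) - \psi'(y)\varphi(y)}.
```

The denominator is a Wronskian-type expression of the two fundamental solutions and does not vanish, since one solution is increasing and the other is decreasing.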
Lemma 2.5.
For each , the function is .
Proof.
For the claim obviously holds. Let . We only have to prove that . Using equation (1), we obtain that
By the choice of , we obtain
and analogously
This proves the claim. ∎
Now all the arguments given in Subsections 2.2 and 2.3 apply, and we again obtain the following results (compare Theorem 2.2 and Theorem 2.4):
Theorem 2.6.

It holds that

A point is in the optimal stopping set if and only if there exists such that
Theorem 2.7.
Let and let be a minimizer as in Theorem 2.6 (i). Then is a worst-case measure for the process started in .
2.6 Example: An optimal decision problem for Brownian motions with drift
The following example illustrates that our method also works in the case of a discontinuous reward function , where differential equation techniques cannot be applied immediately. Furthermore, we see that our approach can be used for all parameters in the parameter space, although the structure of the solution changes.
Let denote a Brownian motion with drift and volatility under , and let
The fundamental solutions are given by
where and are the roots of
Using equation (2) we obtain
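The quadratic equations determining these roots can be spelled out as follows (drift $\mu$, volatility $\sigma$, ambiguity level $\kappa$, discount rate $\alpha$; the symbol names are assumptions made for concreteness):

```latex
\frac{\sigma^2}{2}\, z^2 + (\mu + \kappa)\, z = \alpha
\qquad \text{and} \qquad
\frac{\sigma^2}{2}\, z^2 + (\mu - \kappa)\, z = \alpha,
```

each quadratic having one negative and one positive root since $\alpha > 0$.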
We consider
and , where denotes the unique maximum point of .
We first consider the case . By Theorem 2.6 (ii), we obtain that is in the optimal stopping set as a maximizer of . Furthermore, by decreasing to , we see that . Since and for , there exists a unique such that . Therefore, by Theorem 2.6 (ii) again, and by increasing to , we obtain that . Theorem 2.6 (i) yields
see Figure 3 below.
By Theorem 2.7 we furthermore obtain that is a worst-case measure for the process started in . That is, under the worst-case measure, the process has drift on and drift on .
Now we consider the case . By similar reasoning as in the first case, we see that there exists such that . Write