Constructing Time-Homogeneous Generalised Diffusions Consistent with Optimal Stopping Values
Abstract
Consider a set of discounted optimal stopping problems for a one-parameter family of objective functions and a fixed diffusion process, started at a fixed point. A standard problem in stochastic control/optimal stopping is to solve for the problem value in this setting.
In this article we consider an inverse problem; given the set of problem values for a family of objective functions, we aim to recover the diffusion. Under a natural assumption on the family of objective functions we can characterise existence and uniqueness of a diffusion for which the optimal stopping problems have the specified values. The solution of the problem relies on techniques from generalised convexity theory.
Keywords: optimal stopping, generalised convexity, generalised diffusions, inverse American option problem
1 Introduction
Consider a classical optimal stopping problem in which we are given a discount parameter, an objective function and a time-homogeneous diffusion process started at a fixed point, and we are asked to maximise the expected discounted payoff. Here the payoff is the objective function evaluated at the value of the diffusion at a suitably chosen stopping time. We call this problem the forward optimal stopping problem, and the expected payoff under the optimal stopping rule the (forward) problem value.
The setup can be generalised to a one-parameter family of objective functions to give a one-parameter family of problem values. In this article we are interested in an associated inverse problem. The inverse problem is, given a one-parameter family of objective functions and associated optimal values, to recover the underlying diffusion, or family of diffusions, for which the family of forward stopping problems yield the given values.
The approach of this article is to exploit the structure of the optimal control problem and the theory of generalised convexity from convex analysis to obtain a duality relation between the Laplace transform of the first hitting time and the set of problem values. The Laplace transform can then be inverted to give the diffusion process.
The generalised convexity approach sets this article apart from previous work on this problem, see [alfonsi3, alfonsi2, hobson]. All these papers are set in the realm of mathematical finance where the values of the stopping problems can be identified with the prices of perpetual American options, and the diffusion process is the underlying stock process. In that context, it is a natural question to ask: Given a set of perpetual American option prices from the market, parameterised by the strike, is it possible to identify a model consistent with all those prices simultaneously? In this article we abstract from the finance setting and ask a more general question: When can we identify a time-homogeneous diffusion for which the values of a parameterised family of optimal stopping problems coincide with a pre-specified function of the parameter?
Under restrictive smoothness assumptions on the volatility coefficients, Alfonsi and Jourdain [alfonsi3] develop a ‘put-call parity’ which relates the prices of perpetual American puts (as a function of strike) under one model to the prices of perpetual American calls (as a function of the initial value of the underlying asset) under another model. This correspondence is extended to other payoffs in [alfonsi2]. The result is then applied to solve the inverse problem described above. In both papers the idea is to find a coupled pair of free-boundary problems, the solutions of which can be used to give a relationship between the pair of model volatilities.
In contrast, in Ekström and Hobson [hobson] the idea is to solve the inverse problem by exploiting a duality between the put price and the Laplace transform of the first hitting time. This duality gives a direct approach to the inverse problem. It is based on a convex duality which requires no smoothness on the volatilities or option prices.
In this article we consider a general inverse problem of how to recover a diffusion which is consistent with a given set of values for a family of optimal stopping problems. The solution requires the use of generalised, or u-, convexity (Carlier [carlier], Villani [villani], Rachev and Rüschendorf [rachev]). The log-value function is the u-convex dual of the log-eigenfunction of the generator (and vice versa) and the subdifferential corresponds to the optimal stopping threshold. These simple concepts give a direct and probabilistic approach to the inverse problem which contrasts with the involved calculations in [alfonsi3, alfonsi2] in which PDEs play a key role.
A major advantage of the dual approach is that there are no smoothness conditions on the value function or on the diffusion. In particular, it is convenient to work with generalised diffusions which are specified by the speed measure (which may have atoms, and intervals which have zero mass).
Acknowledgement: DGH would like to thank Nizar Touzi for suggesting generalised convexity as an approach for this problem.
2 The Forward and the Inverse Problems
Let be a class of diffusion processes, let be a discount parameter, and let be a family of nonnegative objective functions, parameterised by a real parameter which lies in an interval . The forward problem, which is standard in optimal stopping, is for a given , to calculate for each , the problem value
(2.1) 
where the supremum is taken over finite stopping times , and denotes the fact that . The inverse problem is, given a fixed and the family , to determine whether could have arisen as a solution to the family of problems (2.1) and if so, to characterise those elements which would lead to the value function . The inverse problem, which is the main object of our analysis, is much less standard than the forward problem, but has recently been the subject of some studies ([alfonsi3, alfonsi2, hobson]) in the context of perpetual American options. In these papers the space of candidate diffusions is , where is the set of price processes which, when discounted, are martingales and is the put option payoff (slightly more general payoffs are considered in [alfonsi2]). The aim is to find a stochastic model which is consistent with an observed continuum of perpetual put prices.
In fact it will be convenient in this article to extend the set to include the set of generalised diffusions in the sense of Itô and McKean [mckean]. These diffusions are generalised in the sense that the speed measure may include atoms, or regions with zero or infinite mass. Generalised diffusions can be constructed as time changes of Brownian Motion, see Section 5.1 below, and also [mckean], [watanabe], [rogers], and for a setup related to the one considered here, [hobson].
We will concentrate on the set of generalised diffusions started and reflected at , which are local martingales (at least when away from zero). We denote this class . (Alternatively we can think of an element as the modulus of a local martingale whose characteristics are symmetric about the initial point zero.) The twin reasons for focusing on rather than , are that the optimal stopping problem is guaranteed to become one-sided rather than two-sided, and that within there is some hope of finding a unique solution to the inverse problem. The former reason is more fundamental (we will comment in Section 6.2 below on other plausible choices of subsets of for which a similar approach is equally fruitful). For , 0 is a reflecting boundary and we assume a natural right boundary but we do not exclude the possibility that it is absorbing. Away from zero the process is in natural scale and can be characterised by its speed measure, and in the case of a classical diffusion by the diffusion coefficient . In that case we may consider to be a solution of the SDE (with reflection)
where is the local time at zero.
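These reflected dynamics can be sketched numerically. The sketch below is illustrative only: it assumes the SDE has the form dX = sigma(X) dW + dL with L the local time at zero, and imposes the reflection in the Euler scheme by taking absolute values (all function names and parameter choices here are ours, not the paper's).

```python
import numpy as np

def simulate_reflected(sigma, T=1.0, n_steps=1000, n_paths=20000, seed=0):
    """Euler scheme for a natural-scale diffusion reflected at zero.

    Reflection is imposed by taking the absolute value after each
    increment, mimicking X = |local martingale| started at 0."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        x = np.abs(x + sigma(x) * rng.normal(0.0, np.sqrt(dt), n_paths))
    return x

# Sanity check: sigma == 1 gives reflected Brownian motion, for which
# E[X_1] = E|N(0, 1)| = sqrt(2/pi) ~ 0.7979.
x1 = simulate_reflected(lambda x: 1.0)
print(x1.mean())
```

For non-constant sigma the same scheme gives a crude approximation of the reflected diffusion in natural scale described above.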
We return to the (forward) optimal stopping problem: For fixed define , where is the first hitting time of level . Let
(2.2) 
Clearly . Indeed, as the following lemma shows, there is equality and for the forward problem (2.1), the search over all stopping times can be reduced to a search over first hitting times.
Lemma 2.1.
and coincide.
Proof.
See Appendix. ∎
The first step in our approach will be to take logarithms which converts a multiplicative problem into an additive one. Introduce the notation
Then the equivalent transformed problem (compare (2.2)) is
(2.3) 
where the supremum is taken over those for which is finite. To each of these quantities we may attach the superscript if we wish to associate the solution of the forward problem to a particular diffusion. For reasons which will become apparent, see Equation (2.5) below, we call the eigenfunction (and the log-eigenfunction) associated with .
In the case where , and are convex duals. More generally the relationship between and is that of u-convexity ([carlier], [villani], [rachev]). (In Section 3 we give the definition of the u-convex dual of a function , and derive those properties that we will need.) For our setting, and under mild regularity assumptions on the functions , see Assumption 3.6 below, we will show that there is a duality relation between and via the payoff function which can be exploited to solve both the forward and inverse problems. In particular our main results (see Proposition 4.4 and Theorems 5.1 and 5.4 for precise statements) include:
Forward Problem: Given a diffusion , let and . Set . Then the solution to the forward problem is given by , at least for those for which there is an optimal, finite stopping rule. We also find that is locally Lipschitz over the same range of .
Inverse Problem: For to be the logarithm of the solution of (2.1) for some it is sufficient that the u-convex dual (given by ) satisfies , is convex and increasing, and for all .
Note that in stating the result for the inverse problem we have assumed that contains its endpoints, but this is not necessary, and our theory will allow for to be open and/or unbounded at either end.
If is a solution of the inverse problem then we will say that is consistent with . By abuse of notation we will say that (or ) is consistent with (or ) if, when solving the optimal stopping problem (2.1) for the diffusion with eigenfunction , we obtain the problem values for each .
The main technique in the proofs of these results is to exploit (2.3) to relate the fundamental solution with . Then there is a second part of the problem which is to relate to an element of . In the case where we restrict attention to , each increasing convex with is associated with a unique generalised diffusion . Other choices of subclasses of may or may not have this uniqueness property. See the discussion in Section 5.6.
The following examples give an idea of the scope of the problem:
Example 2.2.
Forward Problem: Suppose . Let and suppose that solves for . For such a diffusion . Then for , .
Example 2.3.
Forward Problem: Let be reflecting Brownian Motion on the positive half-line with a natural boundary at . Then . Let so that u-convexity is standard convexity, and suppose . Then
It is easy to ascertain that the supremum is attained at where
(2.4) 
for . Hence, for
with limits and . For we have .
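The eigenfunction of reflecting Brownian motion can be checked numerically. It is standard that for reflecting Brownian motion on the half-line the increasing eigenfunction is phi(x) = cosh(x * sqrt(2 * beta)), so the Laplace transform of the first hitting time H_y of level y satisfies E[exp(-beta * H_y)] = 1/cosh(y * sqrt(2 * beta)). The Monte Carlo sketch below verifies this identity for the illustrative choices beta = 1/2, y = 1 (these numerical values are ours, not taken from the example).

```python
import numpy as np

beta, y = 0.5, 1.0
rng = np.random.default_rng(1)
dt, n_paths, n_steps = 2e-3, 5000, 5000       # horizon capped at t = 10
b = np.zeros(n_paths)                          # driving Brownian motion
hit = np.full(n_paths, np.inf)                 # first time |B| reaches y
for i in range(1, n_steps + 1):
    b += rng.normal(0.0, np.sqrt(dt), n_paths)
    newly = (np.abs(b) >= y) & np.isinf(hit)
    hit[newly] = i * dt
mc = np.mean(np.exp(-beta * hit))              # exp(-inf) = 0 for non-hitters
exact = 1.0 / np.cosh(y * np.sqrt(2.0 * beta))
print(mc, exact)
```

The Monte Carlo estimate agrees with 1/cosh(y * sqrt(2 * beta)) up to discretisation and sampling error.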
Example 2.4.
Inverse Problem: Suppose that and . Suppose also that for
Then is reflecting Brownian Motion.
Note that is uniquely determined, and its diffusion coefficient is specified on . In particular, if we expand the domain of definition of to then for consistency we must have for .
Example 2.5.
Inverse Problem: Suppose and . Then for and, at least whilst , solves the SDE . In particular, does not contain enough information to determine a unique consistent diffusion in since there is some indeterminacy of the diffusion coefficient on .
Example 2.6.
Inverse Problem: Suppose , and . Then the dual of is given by , and is a candidate for . However is not convex. There is no diffusion in consistent with .
Example 2.7.
Forward and Inverse Problem: In special cases, the optimal strategy in the forward problem may be to ‘stop at the first hitting time of infinity’ or to ‘wait forever’. Nonetheless, it is possible to solve the forward and inverse problems.
Let be an increasing, differentiable function on with , such that is convex; let be a positive, increasing, differentiable function on such that ; and let be a nonnegative, increasing and differentiable function on with .
Suppose that
Note that the cross-derivative is nonnegative.
Consider the forward problem. Suppose we are given a diffusion in with log-eigenfunction . Then the log-problem value is given by
Conversely, suppose we are given the value function on . Then
is the log-eigenfunction of a diffusion which solves the inverse problem.
A generalised diffusion can be identified by its speed measure . Let be a nonnegative, nondecreasing and right-continuous function which defines a measure on , and let be identically zero on . We call a point of growth of if whenever and denote the closed set of points of growth by . Then may assign mass to 0 or not, but in either case we assume . We also assume that if then . If then either is an absorbing endpoint, or does not reach in finite time.
The diffusion with speed measure is defined on and is constructed via a time-change of Brownian motion as follows.
Let be a filtration supporting a Brownian Motion started at with a local time process . Define to be the left-continuous, increasing, additive functional
and define its right-continuous inverse by
If we set then is a generalised diffusion which is a local martingale away from 0, and which is absorbed the first time that hits .
For a given diffusion recall that is defined via . It is well known (see for example [rogers, V.50] and [dym, pp 147152]) that is the unique increasing, convex solution to the differential equation
(2.5) 
Conversely, given an increasing convex function with and , (2.5) can be used to define a measure which in turn is the speed measure of a generalised diffusion .
If then the process spends a positive amount of time at . If is an isolated point, then there is a positive holding time at ; conversely, if for each neighbourhood of , also assigns positive mass to , then is a sticky point.
If and has a density, then where is the diffusion coefficient of and the differential equation (2.5) becomes
(2.6) 
In this case, depending on the smoothness of , will also inherit smoothness properties. Conversely, ‘nice’ will be associated with processes solving (2.6) for a smooth . However, rather than pursuing issues of regularity, we prefer to work with generalised diffusions.
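For a classical diffusion the eigenfunction can be computed by shooting on (2.6). The sketch below assumes the equation reads (sigma(x)^2 / 2) phi''(x) = beta phi(x) with the reflecting-boundary normalisation phi(0) = 1, phi'(0) = 0; the solver and all parameter values are illustrative.

```python
import numpy as np

def eigenfunction(sigma, beta, x_max=2.0, n=20000):
    """Shoot the increasing solution of (sigma(x)^2 / 2) phi'' = beta phi
    with phi(0) = 1 and phi'(0) = 0 (reflection at zero)."""
    h = x_max / n
    xs = np.linspace(0.0, x_max, n + 1)
    phi = np.empty(n + 1)
    phi[0], dphi = 1.0, 0.0
    for i in range(n):
        d2 = 2.0 * beta * phi[i] / sigma(xs[i]) ** 2
        phi[i + 1] = phi[i] + h * dphi + 0.5 * h * h * d2
        dphi += h * d2
    return xs, phi

# With sigma == 1 the solution is phi(x) = cosh(x * sqrt(2 * beta)),
# so phi(2) should be close to cosh(2) ~ 3.7622 for beta = 1/2.
xs, phi = eigenfunction(lambda x: 1.0, beta=0.5)
print(phi[-1])
```

The smoothness comments above are visible here: a smooth sigma produces a smooth phi, while atoms or gaps in the speed measure would instead force the generalised formulation (2.5).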
3 u-convex Analysis
In the following we will consider u-convex functions for a function of two variables and . There will be complete symmetry in role between and so that although we will discuss u-convexity for functions of , the same ideas apply immediately to u-convexity in the variable . Then, in the sequel we will apply these results for the function , and we will apply them for u-convex functions of both and .
For a more detailed development of u-convexity, see [rachev], [villani], [carlier] and the references therein. Proofs of the results below are included in the Appendix.
Let and be subintervals of . We suppose that is well defined, though possibly infinite valued.
Definition 3.1.
is u-convex iff there exists a nonempty such that for all
Definition 3.2.
The dual of is the u-convex function on given by
A fundamental fact from the theory of u-convexity is the following:
Lemma 3.3.
A function is u-convex iff .
The function (the u-convexification of ) is the greatest u-convex minorant of (see the Appendix). The condition provides an alternative definition of a u-convex function, and is often preferred; checking whether is usually more natural than trying to identify the set .
Diagrammatically (see Figure 1.), we can think of as the vertical distance between and . Thus when for all .
The following description due to Villani [villani] is helpful in visualising what is going on: is u-convex if at every point we can find a parameter so that we can caress from below with .
The definition of the dual implies a generalised version of the Young inequality (familiar from convex analysis, e.g. [rockafellar]),
for all . Equality holds at pairs where the supremum
is achieved.
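The dual and the Young inequality can be illustrated on a grid. Taking u(x, theta) = x * theta, so that u-convexity reduces to classical convexity and the dual is the Legendre transform, the sketch below computes the dual and biconjugate of an illustrative f (both u and f here are our choices, not the paper's payoff function).

```python
import numpy as np

xs = np.linspace(-3.0, 3.0, 601)               # x-grid, spacing 0.01
thetas = np.linspace(-2.0, 2.0, 401)           # theta-grid, spacing 0.01
u = np.outer(xs, thetas)                       # u(x, theta) = x * theta
f = 0.5 * xs ** 2                              # illustrative f(x) = x^2 / 2

f_star = (u - f[:, None]).max(axis=0)          # f*(theta) = sup_x [u - f]
f_biconj = (u - f_star[None, :]).max(axis=1)   # f**(x) = sup_theta [u - f*]

# Young: u(x, theta) <= f(x) + f*(theta) at every grid pair, and the
# biconjugate recovers f wherever the maximiser theta = x lies on the grid.
gap = f[:, None] + f_star[None, :] - u
print(gap.min(), np.abs(f_biconj - f)[np.abs(xs) <= 2.0].max())
```

Equality in the Young inequality at a pair (x, theta) is exactly the subdifferential relation introduced in Definition 3.4 below.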
Definition 3.4.
The subdifferential of at is defined by
or equivalently
If is a subset of then we define to be the union of subdifferentials of over all points in .
Definition 3.5.
is subdifferentiable at if . is subdifferentiable on if it is subdifferentiable for all , and is subdifferentiable if it is subdifferentiable on .
In what follows it will be assumed that the function satisfies the following ‘regularity conditions’.
Assumption 3.6.

is twice continuously differentiable.

as a function of , and as a function of , are strictly increasing.
Remark 3.7.
Remark 3.8.
The following results from u-convex analysis will be fundamental in finding the solutions of the forward and inverse problems.
Lemma 3.9.
Suppose is subdifferentiable, and satisfies Assumption 3.6. Then is monotone in the following sense: Let , . Suppose and . Then .
Definition 3.10.
We say that a function is strictly u-convex when its subdifferential is strictly monotone.
Proposition 3.11.
Suppose that satisfies Assumption 3.6.
Suppose is a.e. differentiable and subdifferentiable. Then there exists a map such that if is differentiable at then and
(3.1) 
Moreover, is such that is nondecreasing.
Conversely, suppose that is a.e. differentiable and equal to the integral of its derivative. If (3.1) holds for a nondecreasing function , then is u-convex and subdifferentiable with .
Note that the subdifferential may be an interval in which case may be taken to be any element in that interval. Under Assumption 3.6, is nondecreasing.
We observe that since we have and so that may be defined directly as an element of . If is strictly increasing then is just the inverse of .
Remark 3.12.
Proposition 3.13.
Suppose that satisfies Assumption 3.6.
Suppose is subdifferentiable in a neighbourhood of . Then is continuously differentiable at if and only if is continuous at .
4 Application of u-convex analysis to the Forward Problems
Now we return to the context of the family of optimal control problems (2.1) and the representation (2.3).
Lemma 4.1.
Let be a diffusion in natural scale reflected at the origin with a finite or infinite right boundary point . Then the increasing eigenfunction of the generator
is locally Lipschitz continuous on .
Proof.
is increasing, convex and finite and therefore locally Lipschitz on . , and since is locally Lipschitz on , is locally Lipschitz on . ∎
Henceforth we assume that satisfies Assumption 3.6, so that is twice differentiable and satisfies the Spence-Mirrlees condition. We assume further that is nondecreasing in . Note that this is without loss of generality since it can never be optimal to stop at if , since to wait until the first hitting time of involves greater discounting and a lower payoff.
Consider the forward problem. Suppose the aim is to solve (2.3) for a given with associated eigenfunction for the family of objective functions . Here is assumed to be an interval with endpoints and , such that .
Now let
(4.1) 
Then is the u-convex dual of .
By definition is the (set of) level(s) at which it is optimal to stop for the problem parameterised by . If is empty then there is no optimal stopping strategy in the sense that for any finite stopping rule there is another which involves waiting longer and gives a higher problem value.
Let be the infimum of those values of such that . If is nowhere subdifferentiable then we set .
Lemma 4.2.
The set where is subdifferentiable forms an interval with endpoints and .
Proof.
Suppose is subdifferentiable at , and suppose . We claim that is subdifferentiable at .
Fix . Then and
(4.2) 
and for if . We write the remainder of the proof as if we are in the case ; the case involves replacing with .
Fix . We want to show
(4.3) 
for then
and since is continuous in the supremum is attained. ∎
Lemma 4.3.
is locally Lipschitz on .
Proof.
On , is u-convex, subdifferentiable and is monotone increasing.
Fix such that . Choose and and suppose has Lipschitz constant (with respect to ) in a neighbourhood of .
Then and so that
and a reverse inequality follows from considering . ∎
Note that it is not possible under our assumptions to date ( satisfying Assumption 3.6, and monotonic in ) to conclude that is continuous at , or even that exists. Monotonicity guarantees that even if we can still define . For example, suppose and for let . Then if is the convex dual of we have , where . If and are such that exists and is finite, then choosing any bounded for which does not exist gives an example for which does not exist. It is even easier to construct modified examples such that is infinite.
Denote . Then for , . We have shown:
Proposition 4.4.
If satisfies Assumption 3.6, is increasing in and if is a reflecting diffusion in natural scale then the solution to the forward problem is .
Remark 4.5.
We close this section with some examples.
Example 4.6.
Recall Example 2.5, but note that in that example was restricted to take values in . Suppose , and . Then and for , . Further, for
and for .
Note that is continuous on , but not on .
Example 4.7.
Suppose and . Suppose is a diffusion on , with a natural boundary and diffusion coefficient . Then and
It is straightforward to calculate that and then that is given by
(4.5) 
5 Application of u-convex analysis to the Inverse Problem
Given an interval with endpoints and and a value function defined on we now discuss how to determine whether or not there exists a diffusion in that solves the inverse problem for . Theorem 5.1 gives a necessary and sufficient condition for existence. This condition is rather indirect, so in Theorem 5.4 we give some sufficient conditions in terms of the convex dual and associated objects.
Then, given existence, a supplementary question is whether contains enough information to determine the diffusion uniquely. In Sections 5.3, 5.4 and 5.5 we consider three different phenomena which lead to nonuniqueness. Finally in Section 5.6 we give a simple sufficient condition for uniqueness.
Two key quantities in this section are the lower and upper bound for the range of the subdifferential of on . Recall that we are assuming that the Spence-Mirrlees condition holds so that is increasing on . Then, if is somewhere subdifferentiable we set , or if , . Similarly, we define , or if , , and . If is nowhere subdifferentiable then we set .
5.1 Existence
In the following we assume that is u-convex on , which means that for all ,
Trivially this is a necessary condition for the existence of a diffusion such that the solution of the optimal stopping problems are given by . Recall that we are also assuming that is increasing in and that it satisfies Assumption 3.6.
The following fundamental theorem provides necessary and sufficient conditions for existence of a consistent diffusion.
Theorem 5.1.
There exists such that if and only if there exists such that , is increasing and convex and is such that on .
Proof.
If then and is increasing and convex. Set . If then
Conversely, suppose satisfies the conditions of the theorem, and set . Let . Note that if then
and the maximiser satisfies .
For define a measure via
(5.1) 
Let for , and, if is finite for . We interpret (5.1) in a distributional sense whenever has a discontinuous derivative. In the language of strings is the length of the string with mass distribution . We assume that . The case is a degenerate case which can be covered by a direct argument.
Let be a Brownian motion started at 0 with local time process and define via
Let be the right-continuous inverse to . Now set . Then is a local martingale (whilst away from zero) such that . When , we have .
We want to conclude that . Now, is the unique increasing solution to
with the boundary conditions and . Equivalently, for all with , solves
By the definition of above it is easily verified that is a solution to this equation. Hence and our candidate process solves the inverse problem. ∎
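The recovery of the speed measure in the proof can be sketched numerically in the classical case. We assume, for illustration only, that (5.1) reduces for an absolutely continuous speed measure to m'(x) = phi''(x) / (2 beta phi(x)), equivalently sigma(x)^2 = 2 beta phi(x) / phi''(x); taking the reflecting-Brownian-motion eigenfunction phi(x) = cosh(x * sqrt(2 * beta)) should then recover sigma identically 1.

```python
import numpy as np

beta = 0.5
xs = np.linspace(0.0, 2.0, 2001)
phi = np.cosh(xs * np.sqrt(2.0 * beta))        # candidate eigenfunction

# Second derivative by central differences (interior points only).
h = xs[1] - xs[0]
phi_dd = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / h ** 2

# Speed density m'(x) = phi''(x) / (2 beta phi(x)); the recovered
# diffusion coefficient is sigma(x)^2 = 1 / m'(x).
m_density = phi_dd / (2.0 * beta * phi[1:-1])
sigma_sq = 1.0 / m_density
print(sigma_sq.min(), sigma_sq.max())
```

A candidate phi with a kink, as in Example 5.3 below, would instead put an atom in the recovered measure, producing a sticky point rather than a diffusion coefficient.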
Remark 5.2.
Since is u-convex a natural candidate for is , at least if and is convex. Then is the eigenfunction of a diffusion .
Our next example is one where is convex but not twice differentiable, and in consequence the consistent diffusion has a sticky point. This illustrates the need to work with generalised diffusions. For related examples in a different context see Ekström and Hobson [hobson].
Example 5.3.
Let and let the objective function be . Suppose
Writing we calculate
Note that is increasing and convex, and . Then jumps at and since
we conclude that . Then includes a multiple of the local time at 1 and the diffusion is sticky there.
Theorem 5.1 converts a question about existence of a consistent diffusion into a question about existence of a logeigenfunction with particular properties including . We would like to have conditions which apply more directly to the value function . The conditions we derive depend on the value of .
As stated in Remark 5.2, a natural candidate for is . As we prove below, if this candidate leads to a consistent diffusion provided and is convex and strictly increasing. If then the sufficient conditions are slightly different, and need not be globally convex.
Theorem 5.4.
Assume is u-convex. Each of the following is a sufficient condition for there to exist a consistent diffusion:

, and is convex and increasing on .
