Explicit Solutions for Optimal Stopping of the Maximum Process with an Absorbing Boundary that Varies with It
We provide, in a general setting, explicit solutions for optimal stopping problems that involve a diffusion process and its running maximum. A new feature is an absorbing boundary that varies with the value of the running maximum. An absorbing boundary of this type makes the problem harder but also more practical and flexible. Our approach relies on the excursion theory for Lévy processes. Since general diffusions do not have independent increments, we apply an appropriate measure change so that the process acquires that property. We then rewrite the original two-dimensional problem as an infinite number of one-dimensional ones and complete the solution. We present a general solution method with explicit value functions and the corresponding optimal strategies, illustrating it with examples.
Key words: optimal stopping; excursion theory; diffusions; scale functions.
Mathematics Subject Classification (2010): Primary 60G40; Secondary 60J75
We let be a one-dimensional diffusion and denote by the reflected process,
where with . Hence is the excursion of from its running maximum . We consider an optimal stopping problem that involves both and and is subject to an absorbing boundary that varies with . That is,
subject to absorption
where the rewards and are measurable functions from to , and is also a measurable function. The rigorous mathematical definition of this problem is presented in Section 2. In this setup, while grows and keeps attaining new maxima, the absorbing boundary moves along with . In this paper, we solve explicitly for the optimal strategy and the corresponding value function, along with the optimal stopping region in the -plane. The existence of the absorbing state makes the stopping region more complex.
The idea is the following: we look at excursions that occur from each level of , and reduce the problem to an infinite number of one-dimensional optimal stopping problems. To find the explicit form of the value function, we employ the excursion theory of Lévy processes, in particular the characteristic measure related to the height of excursions (see Bertoin  as a general reference). Since a diffusion does not in general have independent increments, we use the measure change (3.3) to make the diffusion behave like a Brownian motion under the new measure. Having done that, we solve, at each level of , a one-dimensional optimal stopping problem by using the excessive characterization of the value function, which corresponds to concavity of the value function after a certain transformation; see Dynkin , Alvarez  and Dayanik and Karatzas . For the excursion theory of spectrally negative Lévy processes (which have only downward jumps), see also Avram et al. , Pistorius  and Doney , where, among other things, an exit problem of the reflected process is studied. For optimal stopping problems that involve both and , we mention the pioneering work of Peskir . There are also Ott  and Guo and Zervos . In the former, the author solves problems including a capped version of the Shepp-Shiryaev problem , and the latter makes another contribution that extends . A recent development in this area is Alvarez and Matomäki , where a discretized approach is taken to find optimal solutions and a numerical algorithm is presented.
Our contributions in this paper may advance the literature in several respects: we do not assume any specific form or properties of the reward functions, we provide explicit forms of the value function both with and without the absorbing boundary, and we illustrate the procedure of the solution method. Hence we present a very general solution method in a general setting.
The existence of an absorbing state that varies with the maximum process leads to various applications. In this paper, we provide a new problem in which an investor puts her money in risky assets and maintains the following investment policy: if the drawdown of the asset value exceeds a certain level, say a fraction of the running maximum, she sells all her risky assets and puts the proceeds into risk-free assets. This investment policy is intended to avoid the liquidity problems that prevailed in the years of the financial crisis. That is, when asset markets deteriorate, some investors are forced to sell their assets further because they need cash to repay their debt, which leads to a vicious circle of further price depreciation and depletion of liquidity. Hence her problem is to set up a rule as to when she converts her risky assets to risk-free ones. Other applications to real-life problems include bank failures during the financial crisis. These banks had maintained high leverage and accordingly, despite their large size, they were not so safe, since the bankruptcy threshold (the absorbing state) keeps up with the size of the bank. Egami and Oryu  modeled this phenomenon by using spectrally negative Lévy processes for the bank's asset size . Further applications are possible: one can add barriers that nullify the value of lookback options (with the terminal payoff , for instance).
The rest of the paper is organized as follows. In Section 2, we formulate a mathematical model with a review of some important facts of linear diffusions, and then find an optimal solution in Section 3. Finally in Section 4, we shall demonstrate the solution methodology using an example of the investment problem in risky assets, providing an explicit calculation.
2. Mathematical Model
Let the diffusion process represent the state variable defined on the probability space , where is the set of all possible realizations of the stochastic economy, and is a probability measure defined on . The state space of is given by . We denote by the filtration with respect to which is adapted and with the usual conditions being satisfied. We assume that satisfies the following stochastic differential equation:
where for any and is a standard Brownian motion.
The running maximum process with is defined by . In addition, we write for the reflected process defined by , and let be the stopping time defined by
which is the time of absorption. Note that is a measurable function and the level at which the process is absorbed depends on .
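As a quick numerical illustration of these definitions, the following sketch simulates one Euler-Maruyama path of the diffusion, tracks the running maximum and the reflected process, and records the first time the excursion depth reaches the absorbing boundary. All names (`first_absorption`, the drift `mu`, volatility `sigma`, and boundary `b`) are mine, not the paper's, and the geometric Brownian motion with boundary a fixed fraction of the maximum is just one hypothetical choice.

```python
import numpy as np

def first_absorption(x0, mu, sigma, b, T=10.0, n=10_000, seed=0):
    """Simulate one Euler-Maruyama path of dX_t = mu(X_t) dt + sigma(X_t) dW_t,
    track the running maximum S_t and the reflected process Y_t = S_t - X_t,
    and return the first grid time with Y_t >= b(S_t) (np.inf if none)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = s = x0
    for k in range(1, n + 1):
        x += mu(x) * dt + sigma(x) * np.sqrt(dt) * rng.standard_normal()
        s = max(s, x)                    # running maximum S_t
        if s - x >= b(s):                # excursion depth hits the boundary
            return k * dt                # time of absorption
    return np.inf

# Example: geometric Brownian motion, absorbed once the drawdown
# exceeds 30% of the running maximum (boundary b(s) = 0.3 s).
t = first_absorption(1.0, lambda x: 0.05 * x, lambda x: 0.2 * x,
                     lambda s: 0.3 * s)
```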
We consider the following optimal stopping problem and the value function associated with the initial values and :
where , is the expectation operator corresponding to , is the constant discount rate, and is the set of all -adapted stopping times. The payoff is composed of two parts: the running income to be received continuously until stopped or absorbed, and the terminal reward. The running income function is a measurable function that satisfies
The reward function is assumed to be measurable. Our main purpose is to calculate and to find the stopping time which attains the supremum.
For each collection of Borel measurable sets, we define a stopping time by
and define the set of stopping times . In other words, is the first time the excursion from a level, say , enters the region . For any , we write
Next we let be the set of stopping times defined by
Note that if , then . The following lemma shows that it suffices to consider stopping times in :
Let us define by
Then for any , we can find a such that .
Set the collections , , by for some and on . Then it is clear from the definition that if and only if , and on . Hence the right-hand sides of (2.1) for and are equal to each other, and the lemma is proved. ∎
Due to the above lemma, we can reduce the original problem (2.1) to the following:
2.1. Optimal Strategy
From the strong Markov property of , we have
Hence the value function can be written as
Since has nothing to do with the choice of , we concentrate on .
Let us first define the first passage times of :
By the dynamic programming principle, we can write as
for any stopping time ; see, for example, Pham , page 97. Now we set in (2.7). For each level from which an excursion occurs, the value does not change during the excursion. Hence, during the first excursion interval from , we have and for any , and (2.7) can be written as the following one-dimensional problem for the state process :
Now we can look at only the process and find .
In relation to (2.1), we consider the following one-dimensional optimal stopping problem for and its value function :
where is a constant. Note that holds when , and can be obtained by our solution method offered in Section 3.
Recall that, in the pursuit of an optimal strategy in the linear diffusion case, we can utilize the full characterization of the value function and of the optimal stopping rule: an optimal stopping rule is given by a threshold strategy in a very general setup. In our present problem, the optimal strategy belongs to the set in (2.2). See Dayanik and Karatzas , Propositions 5.7 and 5.14; see also Pham , Section 5.2.3. Note, however, that writing the value of in an explicit form is not trivial and is an essential part of the solution, which we carry out in the next section (see Propositions 3.1 and 3.2).
2.2. Important Facts of Diffusions
Let us recall some fundamental facts about one-dimensional diffusions. Let the differential operator be the infinitesimal generator of the process , defined by
and consider the ODE . This equation has two fundamental solutions, and , where is the increasing and the decreasing solution. They are linearly independent positive solutions, uniquely determined up to multiplicative constants. It is well known that
For the complete characterization of and , refer to Itô and McKean . Let us now define
Then is continuous and strictly increasing. Next, following Dynkin  (p. 238), we define concavity of a function with respect to as follows: a real-valued function is called -concave on if, for every ,
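As a concrete instance of these objects, consider a geometric Brownian motion under a positive discount rate. The increasing and decreasing fundamental solutions are then power functions, and the strictly increasing ratio used in the transform above can be computed directly. The sketch below is illustrative only; the parameter values are hypothetical and the variable names (`psi`, `phi`, `F`) are mine.

```python
import numpy as np

# Hypothetical GBM parameters: dX_t = mu X_t dt + sigma X_t dW_t, discount r.
mu, sigma, r = 0.05, 0.3, 0.1

# (A - r)u = 0 with A u(x) = (sigma^2/2) x^2 u''(x) + mu x u'(x) admits power
# solutions u(x) = x**gamma, where gamma solves the quadratic
# (sigma^2/2) gamma^2 + (mu - sigma^2/2) gamma - r = 0.
a = sigma ** 2 / 2
gm, gp = sorted(np.roots([a, mu - a, -r]))   # gm < 0 < gp since r > 0

psi = lambda x: x ** gp          # increasing positive solution
phi = lambda x: x ** gm          # decreasing positive solution
F = lambda x: psi(x) / phi(x)    # = x**(gp - gm): continuous, strictly increasing
```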
Now consider the optimal stopping problem:
where : . Let be the smallest nonnegative concave majorant of on , where is the inverse of . Then we have , and the optimal stopping region is
as in Propositions 4.3 and 4.4 of .
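The smallest nonnegative concave majorant in this characterization can be computed on a grid as an upper convex hull. The sketch below (function names are mine, not the paper's) uses the fact that the smallest nonnegative concave majorant of a function equals the smallest concave majorant of its positive part.

```python
import numpy as np

def smallest_concave_majorant(x, y):
    """Smallest concave majorant of the points (x_i, y_i), x_i increasing,
    evaluated back on the grid x.  Computed as the upper convex hull."""
    hull = []                                   # indices of hull vertices
    for i in range(len(x)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # drop the middle vertex if it lies on or below the chord i0 -> i
            cross = (x[i1] - x[i0]) * (y[i] - y[i0]) \
                  - (y[i1] - y[i0]) * (x[i] - x[i0])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(x, x[hull], y[hull])       # piecewise-linear majorant

# Smallest *nonnegative* concave majorant: majorize max(h, 0) instead of h.
x = np.linspace(0.0, 1.0, 101)
h = (x - 0.5) ** 2 - 0.1                        # convex, with a negative dip
W = smallest_concave_majorant(x, np.maximum(h, 0.0))
```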
3. Explicit Solution
We now work toward an explicit solution of for . The first step is to find in (2.1).
To begin, we consider the case . Set the stopping times as and recall from (2.3). Define the function by
for which . In other words, if , given a threshold strategy where is in the form of , the value is equal to . Accordingly, on the set .
From the strong Markov property of , when and , we have
Now we calculate these expectations by changing probability measure. We introduce the probability measure defined by
Then is a process in natural scale under this measure . See Borodin and Salminen  (p. 33) and Dayanik and Karatzas  for detailed explanations. Hence we can write, under , , where is a constant and is a Brownian motion under . Since is a Lévy process, we can define the process of excursion heights as
and otherwise, where . Then is a Poisson point process, and we denote its characteristic measure under by . It is well known that
See, for example, Çinlar  (p. 416). Using this notation, we have (note that when the diffusion is a standard Brownian motion , then and the right-hand side reduces to )
On the other hand, from the definition of the measure , we have
Combining these two results,
Similarly, by changing the measure and noting that , we have
Since is a Lévy process under , we can apply Theorem 2 in Pistorius  to calculate the last probability. Then we have
Thanks to Lemma 2.1, we have, up to this point, proved the following:
When , the function for can be represented by
This proposition applies to the general case. The following corollary can be shown directly from the above integral:
If does not depend on , the function reduces to
and is the maximizer of the map .
To obtain a more explicit formula for , let us denote
to shorten the expressions, and rewrite (3.1) in the following way: for any ,
This expression naturally motivates us to set as
and we have . Dividing both sides by and choosing the optimal level , we have, for any given,
Let us consider the transformation of a Borel function defined on through
To make explicit calculations possible, we consider the case where the reward increases as does, a natural problem formulation.
Fix . If (1) the reward function is nondecreasing in the second argument and (2) for all and , we have
and is the maximizer of the map
Note that diffusions satisfying the second assumption include the geometric Brownian motion, the Ornstein-Uhlenbeck process, and others.
First, we claim the following statement:
Under the assumptions of Proposition 3.2, for sufficiently close to zero, we have
Note that for all and and that for all .
Proof (of the lemma). Recall (3.1) for the definition of . In view of (3.4), the probabilistic meaning of (3.10) is that is attained when one chooses the excursion level optimally in the following optimal stopping:
that is, if the excursion from does not reach the level of before reaches , one receives , and otherwise one receives the reward. Using the transformation (3.12), one needs to consider the function and the point in the -plane. The value function of (3.16) in this plane is then the smallest concave majorant of passing through the point . It follows that . As , it is clear that and . Suppose, for contradiction, that we have