# Optimal Investment Decision Under Switching Regimes of Subsidy Support

## Abstract

We address the problem of making a managerial decision when the investment project is subsidized, which leads to an infinite-horizon optimal stopping problem for a switching diffusion driven by either a homogeneous or an inhomogeneous continuous-time Markov chain.

We provide a characterization of the value function (and of the optimal strategy) of the optimal stopping problem. On the one hand, in the general case, we prove that the value function is the unique viscosity solution to a system of HJB equations. On the other hand, when the Markov chain is homogeneous and the switching diffusion is one-dimensional, we obtain a stronger result: the value function is the difference between two convex functions.

Keywords. Optimal stopping, switching diffusions, investment decisions.

## 1 Introduction

The optimal time to make managerial decisions has been broadly studied in the context of Real Options since the pioneering works of Dixit and Pindyck [1] and Trigeorgis [2]. Over time, in an effort to capture market needs, these models have become increasingly complex from both the economic and the mathematical point of view. On the economic side, the number of sequential decisions studied in these models has increased and, on the mathematical side, the associated stochastic control problems have become progressively more difficult to solve.

In the past few years, several authors have introduced temporary subsidy support schemes into real options models in order to study their influence on the optimal investment time. This is particularly important in subsidized fields such as renewable energies, where there is intense research activity (see, for instance, Boomsma, Meade and Fleten [3], Boomsma and Linnerud [4], Adkins and Paxson [5], Fleten, Linnerud, Molnár and Nygaard [6], Kitzing, Juul, Drud and Boomsma [7], and Guerra, Kort, Nunes and Oliveira [8]).

Following the previously cited authors, we formulate an investment model in a more general setting, where we assume that: (1) there are several different levels of subsidy, (2) the coefficients of the dynamics of the economic indicator change with the level of subsidy and (3) the follow-up of the firm’s situation is influenced by the time elapsed since the previous evaluation. Consequently, we formulate our model as an infinite-horizon optimal stopping problem in which the uncertainty is, in general, modeled by a switching diffusion driven by an inhomogeneous continuous-time Markov chain.

There are a few articles on optimal stopping problems for switching diffusions, covering different topics in financial mathematics. On the one hand, Eloe, Liu, Yatsuki, Yin and Zhang [9], Guo [10], and Guo and Zhang [11] give explicit solutions for a few particular problems; on the other hand, Pemy [12], Pemy and Zhang [13], and Liu [14] show that, under certain conditions, the value function for the corresponding optimal stopping problem is a viscosity solution to a system of Hamilton-Jacobi-Bellman (HJB) equations. Very recently, Egami and Kevkhishvili [15] showed that this type of problem can be reduced to a set of optimal stopping problems without a switching regime.

In this work, we show that, in general, the value function is time-dependent and is the unique viscosity solution to a system of HJB equations. Additionally, when the continuous-time Markov chain is homogeneous and the diffusion is one-dimensional, the value function is the difference of two convex functions and the time-dependence is lost.

We organize the text as follows: in Section 2, we describe the stochastic process that we consider; in Section 3, we define the optimal stopping problem and some of the required assumptions; in Section 4 we prove that the value function is the unique viscosity solution to a system of HJB equations and, finally, in Section 5, we discuss the optimal stopping problem in the homogeneous and one-dimensional case.

## 2 The stochastic process

We consider an investment project enrolled in an assistance program where there are different levels of subsidy. The process , which provides the information concerning the level of subsidy at the current moment, is such that

(1) |

To completely characterize the Markov chain , we introduce the process , where , with , is the time until the next transition of the Markov chain , defined by

We assume that for every

where is continuous and is a continuous function such that and , for all . Additionally, we consider that, for every , the random variables are independent.
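The transition mechanism just described, in which the jump intensity of the chain depends on the time elapsed since the last transition, can be illustrated numerically. The sketch below samples a single holding time by thinning (Lewis–Shedler); the intensity function and its bound are hypothetical stand-ins, not the paper's actual data.

```python
import math
import random

def sample_holding_time(rate, rate_bound, rng=None):
    """Sample the next transition time of a chain whose jump intensity
    depends on the time since the last transition, by thinning
    (Lewis-Shedler): propose jumps at the constant bounding rate and
    accept each proposal with probability rate(t) / rate_bound."""
    rng = rng or random.Random(0)
    t = 0.0
    while True:
        t += rng.expovariate(rate_bound)          # candidate jump time
        if rng.random() < rate(t) / rate_bound:   # accept with prob rate(t)/bound
            return t

# Hypothetical intensity that increases with the time since the last switch.
tau = sample_holding_time(lambda t: 0.5 * (1.0 - math.exp(-t)), rate_bound=0.5)
```

Because the intensity below the bound vanishes at zero, short holding times are rejected more often, reproducing the feature that a transition becomes more likely as the time since the previous evaluation grows.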

The investment project operates in a random environment characterized by an economic indicator, which is modeled by a dimensional stochastic process . This process solves the switching stochastic differential equation (SDE)

(2) |

taking values in the open set , where is an dimensional Brownian motion independent of and where and are Borel measurable functions. Therefore, we build this model on a complete filtered probability space satisfying the usual conditions and supporting the independent processes and .
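To make the dynamics concrete, the following sketch simulates a one-dimensional switching diffusion by the Euler–Maruyama scheme, using a homogeneous two-state chain for simplicity. The geometric-Brownian-type coefficients and the rate matrix `q` are illustrative assumptions, not the model's actual specification.

```python
import math
import random

def simulate_switching_diffusion(x0, z0, T, n, drift, vol, q, seed=1):
    """Euler-Maruyama path of a one-dimensional switching diffusion: between
    jumps of the chain Z, X evolves with regime-dependent drift and volatility.
    q[i][j] (i != j) are the transition rates of a homogeneous chain; all
    coefficients here are illustrative stand-ins."""
    rng = random.Random(seed)
    dt = T / n
    x, z = x0, z0
    path = [(0.0, x, z)]
    for k in range(1, n + 1):
        # switch regime with probability approximately q[z][j] * dt per step
        u, acc = rng.random(), 0.0
        for j in range(len(q)):
            if j != z:
                acc += q[z][j] * dt
                if u < acc:
                    z = j
                    break
        dw = rng.gauss(0.0, math.sqrt(dt))        # Brownian increment
        x += drift[z] * x * dt + vol[z] * x * dw  # Euler-Maruyama step
        path.append((k * dt, x, z))
    return path

path = simulate_switching_diffusion(
    x0=1.0, z0=0, T=1.0, n=500,
    drift=[0.05, -0.02], vol=[0.20, 0.35],
    q=[[-1.0, 1.0], [2.0, -2.0]])
```

Freezing the regime within each time step and updating it at step boundaries is a first-order approximation; for the multiplicative coefficients chosen here, the simulated path also stays in the positive half-line, mirroring the requirement that the process remains in the open set.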

The next assumption characterizes the solution of the switching SDE (2). Some results concerning the existence and uniqueness of solutions to switching diffusions may be found in Mao and Yuan [16], and Yin and Zhu [17]. Additionally, in Kallenberg [18], Karatzas and Shreve [19] and Krylov [20], one can find results concerning the existence and uniqueness of SDEs without switching regimes.

###### Assumption 2.1.

The Borel measurable functions and are such that the SDE (2), for each initial condition, has a unique strong solution on the filtered probability space that remains in for all times. Additionally, we assume that

For any set we define the stopping time

where is obtained by considering the topology on , which is the trace of the usual topology on . If , then , since is open in the usual topology on . In addition, we assume that is such that , for all open and .

The process is not, in general, a Markov process, because it does not carry the information of how much time has elapsed since the last transition of the Markov chain. Therefore, we introduce the process , which represents the time elapsed from the last change in the level of subsidy until the moment , defined by

where is a stopping time. Unless otherwise stated, we will work with the process , which is the Markovian representation of the process .

## 3 Optimal stopping problem

In this section, we formulate the stochastic optimization problem in which we are interested. We consider that the cash flow associated with the investment project differs across the levels of subsidy. Therefore, the running payoff is represented by the function , and the cost of abandonment is represented by . Additionally, we represent the instantaneous interest rate by .

###### Assumption 3.1.

The functions are such that

If the investment project is permanently abandoned at the moment , where is a stopping time, its revenue is given by

Therefore, the expected outcome associated with the project, when the initial observation is , is given by the functional

(3) |

Here, is the expected value conditional on , and

Notice that, in this formulation, the project is necessarily abandoned for . Therefore, if is the set of all stopping times and , we intend to find the value function , verifying

(4) |
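Any admissible stopping rule yields a lower bound for the supremum in (4), so candidate rules can be compared by simulation. The sketch below estimates the functional (3) for two simple rules on a hypothetical single-regime diffusion; the drift, volatility, running payoff and abandonment cost are all illustrative assumptions.

```python
import math
import random

def expected_outcome(x0, stop_rule, r, profit, cost, T, dt, n_paths, seed=2):
    """Monte Carlo estimate of the discounted outcome of a stopping rule:
    accumulate discounted running profit until the rule fires, then pay the
    discounted abandonment cost. Single regime, hypothetical coefficients."""
    rng = random.Random(seed)
    steps = round(T / dt)
    total = 0.0
    for _ in range(n_paths):
        x, payoff = x0, 0.0
        for k in range(steps):
            t = k * dt
            if stop_rule(t, x):                   # abandon: pay the cost
                payoff -= math.exp(-r * t) * cost
                break
            payoff += math.exp(-r * t) * profit(x) * dt
            x += 0.03 * x * dt + 0.20 * x * rng.gauss(0.0, math.sqrt(dt))
        total += payoff                           # unstopped paths run to T
    return total / n_paths

profit = lambda x: x - 1.0                        # illustrative running payoff
stop_now = expected_outcome(1.0, lambda t, x: True, 0.05, profit, 0.1, 5.0, 0.01, 200)
stop_low = expected_outcome(1.0, lambda t, x: x < 0.8, 0.05, profit, 0.1, 5.0, 0.01, 200)
```

Stopping immediately yields exactly minus the abandonment cost, while waiting and abandoning only when the indicator falls below a threshold typically does better, which is the intuition behind the optimal rule being a first hitting time of a stopping region.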

Since the strategy (to stop immediately, regardless of the current state ) satisfies , it follows that . Thus, an optimal stopping time is given by the rule

In what follows, for every real function , we set , . Thus and . The problem’s well-posedness is guaranteed by introducing the following integrability conditions:

###### Assumption 3.2.

The functions are such that

For future reference, we notice that according to Assumption 3.2, for any initial condition ,

is a uniformly integrable family of random variables, meaning that there is a uniform integrability test function (see Definition C.2 and Theorem C.3 in Øksendal [21]) such that

(5) |

To conclude this section, in the next proposition we establish that, under the assumptions considered in this section, is a uniformly integrable family of random variables.

###### Proposition 3.1.

Let be the value function defined as in (4). Then, is a uniformly integrable family of random variables, for every initial condition .

## 4 HJB equations

In this section, our main goal is to provide the system of HJB equations associated with the optimal stopping problem (4). Furthermore, we will prove that, under certain conditions, the value function is the unique viscosity solution to this system of HJB equations.

In Section 4.1, we present a weak version of the dynamic programming principle (DPP) that we will use in the following sections. A general formulation of this DPP can be found in Bouchard and Touzi [22].

### 4.1 Dynamic programming principle

Consider the Markov process , and its infinitesimal generator , defined by

(7) |

for all in the domain of . In the next proposition, we present an expression for .

###### Proposition 4.1.

Before we prove Proposition 4.1, we note that the process is a semimartingale (indeed, it is the sum of a martingale and a finite-variation process, and and are finite-variation processes) and, consequently, admits a generalized Itô decomposition (see Theorem II.33 of Protter [23]). Indeed, for any function such that and any ,

(8) |

###### Proof of Proposition 4.1.

Taking into account the Itô formula presented in Equation (8), we get that

Furthermore, we note that

(9) | ||||

(10) | ||||

where, for ,

(11) | ||||

Since, for ,

Equation (11) can be written as

where , for . Furthermore, as , for , Equation (11) can also be written as

(12) |

A similar representation can be found for the expected value in (10), since we have

(13) |

Then, by using the definition of the infinitesimal generator in (7), the result is straightforward.
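For orientation, the generator of a semi-Markov switching diffusion of this kind typically takes the following generic form, stated here under assumed notation (diffusion coefficients $b$ and $\sigma$ depending on the regime $i$, jump intensity $\lambda_i(u)$ depending on the time $u$ since the last switch, and jump-destination probabilities $p_{ij}$), which need not coincide term-by-term with the paper's exact expression:

```latex
\mathcal{A}f(x,i,u)
  = \frac{\partial f}{\partial u}(x,i,u)
  + \sum_{k} b_{k}(x,i)\,\frac{\partial f}{\partial x_{k}}(x,i,u)
  + \frac{1}{2}\sum_{k,\ell}\bigl(\sigma\sigma^{\top}\bigr)_{k\ell}(x,i)\,
      \frac{\partial^{2} f}{\partial x_{k}\,\partial x_{\ell}}(x,i,u)
  + \lambda_{i}(u)\sum_{j\neq i} p_{ij}\,\bigl[f(x,j,0)-f(x,i,u)\bigr].
```

The last term encodes a switch from regime $i$ to regime $j$, at which the clock $u$ is reset to zero; in the homogeneous case $\lambda_i$ is constant and the $u$-dependence disappears.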

It can be useful to consider the operator given by

For future reference, we note that, along the same lines of the proof of Proposition 4.1, we can prove that Dynkin’s formula, for the dimensional process , holds true and verifies
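In the same spirit, and again under assumed notation for the three-component process $(X,Z,U)$, its generator $\mathcal{A}$ and a discount rate $r$, the discounted Dynkin formula reads, for suitable functions $f$ and stopping times $\tau$,

```latex
\mathbb{E}^{x,i,u}\!\left[e^{-r\tau} f\bigl(X_{\tau}, Z_{\tau}, U_{\tau}\bigr)\right]
  = f(x,i,u)
  + \mathbb{E}^{x,i,u}\!\left[\int_{0}^{\tau} e^{-rs}\,
      (\mathcal{A}-r)f\bigl(X_{s}, Z_{s}, U_{s}\bigr)\,ds\right],
```

valid whenever the local-martingale part of the Itô decomposition is a true martingale up to $\tau$.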

In Proposition 4.2, a weak version of the DPP for the optimal stopping problem (4) is presented. The proof relies on the Markov structure of the process and we follow the exposition of Guerra [24] (pages 143-167) and Touzi [25]. Before we introduce the DPP, we state an auxiliary result concerning the continuity of the function . If necessary, to highlight the dependence of on the initial condition and on the element , we will write and , respectively.

###### Lemma 4.1.

The function is continuous, for every and .

###### Proof.

Firstly, by definition of a solution to a switching SDE, the function is continuous. Additionally, we prove that the function is almost surely continuous. To do this, we note that

By Doob’s maximal inequality, we get that, for all

Therefore, by using the Itô isometry,

By Grönwall’s inequality, we get

which proves the first statement after an application of Kolmogorov’s continuity criterion.

To show that is continuous, we note that

where the last equality follows in light of Equation (13) and Fubini’s Theorem.

Let be a compact set, such that , and fix . Due to the continuity of , the functions , and attain a maximum and a minimum on the set , namely

Let , then, for a fixed , it follows from the Dominated Convergence Theorem that

Let and be stopping times, assuming that . Then, if , and , the result holds true if