# Optimal Switching Problems Under

Partial Information

###### Abstract.

In this paper we formulate and study an optimal switching problem under partial information. In our model the agent/manager/investor attempts to maximize the expected
reward by switching between different states/investments. However, he is not fully aware of his environment and only an observation process, which contains partial information about the environment/underlying, is accessible. It is based on the partial information carried by this observation process that all decisions must be made. We propose
a probabilistic numerical algorithm based on dynamic programming, regression Monte Carlo methods, and stochastic filtering theory to compute the value function.
The approximation of the value function and the corresponding convergence result are obtained when the underlying and observation processes satisfy the linear Kalman-Bucy setting. A numerical example is included to illustrate some specific features of partial information.

2000 Mathematics Subject Classification: 60C05, 60F25, 60G35, 60G40, 60H35, 62J02.

Keywords and phrases: optimal switching problem, partial information, diffusion, regression, Monte-Carlo, Euler scheme, stochastic filtering, Kalman-Bucy filter.

## 1. Introduction

In recent years there has been increasing activity in the study of optimal switching problems, associated reflected backward stochastic differential equations, and systems of variational inequalities, due to the potential of these models to address the valuation of investment opportunities, in an uncertain world, when the investor/producer is allowed to switch between different investments/portfolios or production modes. To briefly outline the traditional setting of multi-modes optimal switching problems, we consider a production facility which can be run in $Q \geq 2$ different production modes and assume that the running pay-offs in the different modes, as well as the cost of switching between modes, depend on a $d$-dimensional diffusion process $X = \{X_t\}_{t \geq 0}$ which is a solution to the system of stochastic differential equations

(1.1) $$dX_t = b(t,X_t)\,dt + \sigma(t,X_t)\,dW_t, \qquad X_0 = x_0,$$

where $b : [0,T] \times \mathbb{R}^d \to \mathbb{R}^d$, $\sigma : [0,T] \times \mathbb{R}^d \to \mathbb{R}^{d \times d}$, and $W = \{W_t\}_{t \geq 0}$ is a $d$-dimensional Brownian motion defined on a filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \geq 0}, P)$. In the case of electricity and energy production the process $X$ can, for instance, be the electricity price, functionals of the electricity price, or other factors, like the national product or other indices measuring the state of the local and global business cycle, which in turn influence the price. Given $X$ as in (1.1), let the payoff rate in production mode $i$, at time $t$, be $f_i(t,X_t)$ and let $c_{ij}(t,X_t)$ be the continuous switching cost for switching from mode $i$ to mode $j$ at time $t$. A management strategy is a combination of a non-decreasing sequence of $\mathcal{F}_t$-adapted stopping times $\{\tau_j\}_{j \geq 1}$, where, at time $\tau_j$, the manager decides to switch the production from its current mode to another one, and a sequence of $\mathcal{F}_{\tau_j}$-measurable indicators $\{\beta_j\}_{j \geq 1}$, taking values in $\mathcal{Q} := \{1,\dots,Q\}$, indicating the mode to which the production is switched. At $\tau_j$ the production is switched from mode $\beta_{j-1}$ (current mode) to $\beta_j$. When the production is run under a strategy $S = (\{\tau_j\}_{j \geq 1}, \{\beta_j\}_{j \geq 1})$, over a finite horizon $[0,T]$, the total expected profit is defined as

$$J(S) = E\left[\int_0^T f_{\mu_t}(t,X_t)\,dt - \sum_{j \geq 1} c_{\beta_{j-1}\beta_j}(\tau_j, X_{\tau_j})\right],$$

where $\mu$ is the index process associated to $S$. The traditional multi-modes optimal switching problem now consists of finding an optimal management strategy $S^{\ast}$ such that $J(S^{\ast}) = \sup_{S} J(S)$.

Let from now on $\mathcal{F}_t^X$ denote the filtration generated by the process $X$ up to time $t$, i.e., $\mathcal{F}_t^X = \sigma(X_s,\ 0 \leq s \leq t)$. We let $\mathcal{A}$ denote the set of all (admissible) strategies $S$ such that $\tau_j \leq T$ for $j \geq 1$, and such that the stopping times $\tau_j$ and the indicators $\beta_j$ are measurable with respect to the filtration $\{\mathcal{F}_t^X\}_{t \geq 0}$. Furthermore, given $t \in [0,T]$, $i \in \mathcal{Q}$, we let $\mathcal{A}_{t,i} \subset \mathcal{A}$ be the subset of strategies such that $\tau_1 \geq t$ and $\beta_0 = i$ a.s. We let

(1.2) $$u_i(t,x) = \sup_{S \in \mathcal{A}_{t,i}} E\left[\int_t^T f_{\mu_s}(s,X_s)\,ds - \sum_{j \geq 1} c_{\beta_{j-1}\beta_j}(\tau_j, X_{\tau_j})\,\middle|\, X_t = x\right].$$

Then $u_i(t,x)$ represents the value function associated with the optimal switching problem on the time interval $[t,T]$, and is the optimal expected profit if, at time $t$, the production is in mode $i$ and the underlying process is at $X_t = x$. Under sufficient assumptions it can be proved that the vector $(u_1,\dots,u_Q)$ satisfies a system of variational inequalities, see, e.g., [LNO12]. From another perspective, the solution to the optimization problem can be phrased in the language of reflected backward stochastic differential equations. For these connections, and for applications of multi-mode optimal switching problems to economics and mathematics, see [AH09], [DH09], [DHP10], [HM12], [HT07], [PVZ09], [LNO12] and the references therein. More on reflected backward stochastic differential equations in the context of optimal switching problems can be found in [AF12], [DH09], [DHP10], [HT07] and [HZ10].

### 1.1. Optimal switching problems under partial information

In this paper we formulate and consider a multi-mode optimal switching problem under partial or incomplete information. While assuming that the running pay-offs in the different modes of production, as well as the cost of switching between modes, depend on $X$, with $X$ as in (1.1), we also assume that the manager of the production facility can only observe an auxiliary, $X$-dependent process $Y$, from which the manager can retrieve only partial information about the $d$-dimensional stochastic process $X$. More precisely, we assume that the manager can only observe a $k$-dimensional diffusion process $Y$ which solves the system of stochastic differential equations

(1.3) $$dY_t = h(t,X_t)\,dt + dV_t, \qquad Y_0 = 0.$$

Here $h : [0,T] \times \mathbb{R}^d \to \mathbb{R}^k$ and $V = \{V_t\}_{t \geq 0}$ is a $k$-dimensional Brownian motion, defined on $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \geq 0}, P)$ and independent of $W$. $h$ is assumed to be a continuous and bounded function. From here on we let $\mathcal{Y}_t$ denote the filtration generated by the observation process $Y$ up to time $t$. Note that in our set-up we have $\mathcal{Y}_t \subset \mathcal{F}_t$, and hence knowledge of the process $Y$ only gives partial information about the process $X$. We emphasize that although the value of the fully observable process $Y$ is known with certainty at time $t$, the value of the process $X_t$ is not. The observed process $Y$ acts as a source of information for the underlying process $X$. By construction, in the formulation of an optimal switching problem under partial information we must restrict our strategies, and decisions at time $t$, to depend only on the information accumulated from $Y$ up to time $t$. Hence, an optimal switching problem under partial information must differ from the standard optimal switching problem in the sense that, in the case of partial information, the values of the running payoff functions $f_i(t,X_t)$ and the switching costs $c_{ij}(t,X_t)$ are not known with certainty at time $t$, even though we know $\mathcal{Y}_t$. Hence, in this context the production must be run under incomplete information.

Our formulation of an optimal switching problem under partial information is based on ideas and techniques from stochastic filtering theory. Generally speaking, stochastic filtering theory deals with the estimation of an evolving system (“the signal” $X$) by using observations which only contain partial information about the system (“the observation” $Y$). The solution to the stochastic filtering problem is the conditional distribution of the signal process given the $\sigma$-algebra $\mathcal{Y}_t$, and in the context of stochastic filtering theory the goal is to compute the conditional expectations $E[\varphi(X_t) \mid \mathcal{Y}_t]$ for suitably chosen test functions $\varphi$. In the following, the conditional distribution of $X_t$ given $\mathcal{Y}_t$ is denoted by $\pi_t$, i.e.,

(1.4) $$\pi_t(\varphi) := E\left[\varphi(X_t)\,\middle|\,\mathcal{Y}_t\right] \quad \text{for all bounded measurable } \varphi : \mathbb{R}^d \to \mathbb{R}.$$

Note that the measure-valued (random) process $\pi = \{\pi_t\}_{t \geq 0}$ introduced in (1.4) can be viewed as a stochastic process taking values in an infinite-dimensional space of probability measures over the state space of the signal. Concerning stochastic filtering we refer to [CR11] and [BC09] for surveys of the topic.

Based on the above we define, when the production is run using a $\mathcal{Y}_t$-adapted strategy $S$, over a finite horizon $[0,T]$, the total expected profit up to time $T$ as

(1.5) $$J(S) = E\left[\int_0^T \pi_t\big(f_{\mu_t}(t,\cdot)\big)\,dt - \sum_{j \geq 1} \pi_{\tau_j}\big(c_{\beta_{j-1}\beta_j}(\tau_j,\cdot)\big)\right],$$

where the index process $\mu$ associated to $S$ is defined in the bulk of the paper. Again we are interested in finding an optimal management strategy $S^{\ast}$ which maximizes $J(S)$. Let $\mathcal{A}^{\mathcal{Y}}$ be defined in analogy with $\mathcal{A}$ but with $\mathcal{F}_t^X$ replaced by $\mathcal{Y}_t$, and let, for given $t \in [0,T]$, $i \in \mathcal{Q}$, $\mathcal{A}^{\mathcal{Y}}_{t,i}$ be the subset of strategies such that $\tau_1 \geq t$ and $\beta_0 = i$ a.s. Given $t \in [0,T]$, $i \in \mathcal{Q}$, and a measure $\nu$ of finite mass, we let

(1.6) $$v_i(t,\nu) = \sup_{S \in \mathcal{A}^{\mathcal{Y}}_{t,i}} E\left[\int_t^T \pi_s\big(f_{\mu_s}(s,\cdot)\big)\,ds - \sum_{j \geq 1} \pi_{\tau_j}\big(c_{\beta_{j-1}\beta_j}(\tau_j,\cdot)\big)\,\middle|\,\pi_t = \nu\right].$$

Then $v_i(t,\nu)$ represents the value function associated with the optimal switching problem under partial information formulated above, on the time interval $[t,T]$, and is the optimal expected profit if, at time $t$, the production is in mode $i$ and the distribution of $X_t$ is given by $\nu$. Note that, for a test function $\varphi$, $\pi_t(\varphi)$ is a $\mathcal{Y}_t$-adapted random variable and hence the problem in (1.6) can be seen as a full information problem with underlying process $\pi$. In fact, it is this connection to an optimal switching problem with perfect information that underlies our formulation of the optimal switching problem under partial information. Furthermore, note that if $X$ is itself a $\mathcal{Y}_t$-adapted process, then (1.6) reduces to (1.2), i.e., to the standard optimal switching problem under complete information.

The object of study in this paper is the value function introduced in (1.6), and we emphasize and reiterate the probabilistic interpretation of the underlying problem. In (1.6) the manager wishes to maximize the expected profit by selecting an optimal strategy $S$. However, the manager only has access to the observed process $Y$. The state $X_t$ is not revealed and can only be partially inferred through its impact on the drift of $Y$. Thus, $S$ must be based on the information contained solely in $Y$, i.e., $S$ must be $\mathcal{Y}_t$-adapted. Hence, the optimal switching problem under partial information considered here gives a model for the decision making of a manager who is not fully aware of the economic environment he is acting in. As pointed out in [L09], one interesting feature here is the interaction between learning and optimization. Namely, the observation process $Y$ plays a dual role: as a source of information about the system state $X$, and as a reward ingredient. Consequently, the manager has to consider the trade-off between further monitoring, in order to obtain a more accurate inference of $X$, vis-à-vis making the decision to switch to other modes of production in case the state of the world is unfavorable.

### 1.2. Contribution of the paper

The contribution of this paper is fourfold. Firstly, we are not aware of any papers dealing with optimal switching problems under partial information, and we therefore believe that our paper represents a substantial contribution to the literature devoted to optimal switching problems and to stochastic optimization problems under partial information. Secondly, we propose a theoretically sound and entirely simulation-based approach to calculate the value function in (1.6) when $X$ and $Y$ satisfy the Kalman-Bucy setting of linear stochastic filtering. In particular, we propose a probabilistic numerical algorithm to approximate the value function in (1.6) based on dynamic programming and regression Monte Carlo methods. Thirdly, we carry out a rigorous error analysis and prove the convergence of our scheme. Fourthly, we illustrate some of the features of partial information in a computational example. It is known that in the linear Kalman-Bucy setting it is possible to solve the stochastic filtering problem analytically and to describe the a posteriori probability distribution explicitly. Although much of the analysis in this paper also holds in the non-linear case, we focus on the, already quite involved, linear setting. In general, numerical schemes for optimal switching problems, already under perfect information, seem to be a less developed area of research, and we are only aware of the papers [ACLP12] and [GKP12] where numerical schemes are defined and analyzed rigorously. Our research is influenced by [ACLP12], but our setting is different since we consider an optimal switching problem assuming only partial information.

### 1.3. Organization of the paper

The paper is organized as follows. Section 2 is of a preliminary nature; here we state the assumptions on the systems in (1.1), (1.3), the payoff rates $f_i$, and the switching costs $c_{ij}$, assumptions used throughout the paper. Section 3 is devoted to the general description of the stochastic filtering problem and the linear Kalman-Bucy filter. In Section 4 we prove that the value function in (1.6) satisfies the dynamic programming principle. This is the result on which the numerical scheme, outlined in the subsequent sections, rests. Section 5 gives, step by step, the details of the proposed numerical approximation scheme. In Section 6 we perform a rigorous mathematical convergence analysis of the proposed numerical approximation scheme, and the main result, Theorem 6.1, is stated and proved. We emphasize that by proving Theorem 6.1 we are able to establish rigorous error control for the proposed numerical approximation scheme. Section 7 contains a numerical illustration of our algorithm, and the final section, Section 8, is devoted to a summary and conclusions.

## 2. Preliminaries and Assumptions

We first state the assumptions on the systems (1.1), (1.3), the payoff rates $f_i$, and the switching costs $c_{ij}$, which will be used in this paper. We let $\mathcal{Q} := \{1,\dots,Q\}$ denote the (finite) set of available states. As stated, the profit made (per unit time) in state $i \in \mathcal{Q}$ is given by the function $f_i$. The cost of switching from state $i$ to state $j$ is given by the function $c_{ij}$. Focusing on the problem in (1.5), and in particular on the value function in (1.6), we need to give a precise definition of the strategy process $S$ and the index process $\mu$. Indeed, in our context a strategy $S$, over a finite horizon $[0,T]$, corresponds to a sequence $(\{\tau_j\}_{j \geq 1}, \{\beta_j\}_{j \geq 1})$, where $\{\tau_j\}_{j \geq 1}$ is a non-decreasing sequence of $\mathcal{Y}_t$-adapted stopping times and $\{\beta_j\}_{j \geq 1}$ is a sequence of measurable random variables taking values in $\mathcal{Q}$ and such that $\beta_j$ is $\mathcal{Y}_{\tau_j}$-measurable. Given $S$ we let

$$\mu_t := \beta_0\,\mathbf{1}_{[0,\tau_1)}(t) + \sum_{j \geq 1} \beta_j\,\mathbf{1}_{[\tau_j,\tau_{j+1})}(t),$$

where $\mathbf{1}_B$ is the indicator function for a measurable set $B$, be the associated index process. In particular, to each strategy $S$ there is an associated index process $\mu$, and this is the process used in the definition of $J(S)$.
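For a concrete discrete illustration, the index process induced by a given strategy can be evaluated as follows. This is a sketch in our own notation (all names are ours, not the paper's), assuming finitely many switching times:

```python
# A minimal sketch of the index process mu_t induced by a strategy
# S = ({tau_j}, {beta_j}): mu_t equals the mode chosen at the last switching
# time tau_j <= t, and before the first switch the production sits in beta_0.

def index_process(t, taus, betas, initial_mode):
    """Return mu_t for non-decreasing switching times `taus` and modes `betas`.

    `taus[j]` is the (j+1)-th switching time and `betas[j]` the mode switched
    to at that time; `initial_mode` plays the role of beta_0.
    """
    mode = initial_mode
    for tau, beta in zip(taus, betas):
        if tau <= t:
            mode = beta  # the switch at tau has already happened
        else:
            break  # taus are non-decreasing, so later switches are after t
    return mode
```

For instance, with switching times `[0.5, 1.2]`, modes `[2, 1]` and initial mode `1`, the index process equals `1` on `[0, 0.5)`, `2` on `[0.5, 1.2)` and `1` thereafter.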

We denote by $C_b^k(\mathbb{R}^d)$ the space of all real-valued functions $\varphi$ such that $\varphi$ and all its partial derivatives up to order $k$ are continuous and bounded on $\mathbb{R}^d$. Given $\varphi \in C_b^k(\mathbb{R}^d)$ we let

$$\|\varphi\|_{C_b^k} := \sum_{|\alpha| \leq k}\ \sup_{x \in \mathbb{R}^d} |\partial^{\alpha}\varphi(x)|.$$

Similarly, we denote by $C_b^{1,2}([0,T] \times \mathbb{R}^d)$ the space of all real-valued functions $\varphi = \varphi(t,x)$ such that $\varphi$, $\partial_t\varphi$, $\partial_{x_i}\varphi$, and $\partial_{x_i x_j}\varphi$, $i,j \in \{1,\dots,d\}$, are continuous and bounded on $[0,T] \times \mathbb{R}^d$. With a slight abuse of notation we will often write $C_b^k$ and $C_b^{1,2}$ instead of $C_b^k(\mathbb{R}^d)$ and $C_b^{1,2}([0,T] \times \mathbb{R}^d)$. We denote by $\mathcal{M}(\mathbb{R}^d)$ the space of all positive measures on $\mathbb{R}^d$ with finite mass. Considering the systems in (1.1), (1.3), we assume that $b$, $\sigma$ and $h$ are continuous and bounded functions, with $\sigma$ taking values in $\mathbb{R}^{d \times d}$, the set of all $d \times d$ real-valued matrices. Furthermore, concerning the regularity of these functions we assume that

(2.1) $$b,\ \sigma,\ h \in C_b^{1,2}\big([0,T] \times \mathbb{R}^d\big), \quad \text{componentwise.}$$

Clearly (2.1) implies that

(2.2) $$|b(t,x) - b(t,y)| + |\sigma(t,x) - \sigma(t,y)| + |h(t,x) - h(t,y)| \leq c\,|x - y|$$

for some constant $c$, $1 \leq c < \infty$, for all $t \in [0,T]$, and whenever $x, y \in \mathbb{R}^d$. Here $|\cdot|$ is the standard Euclidean norm. Given (2.1) and (2.2), we see, using the standard existence theory for stochastic differential equations, that there exist unique strong solutions $X$ and $Y$ to the systems in (1.1) and (1.3). Concerning the regularity of the payoff functions $f_i$ and the switching costs $c_{ij}$, we assume that $f_i, c_{ij} \in C_b^{1,2}([0,T] \times \mathbb{R}^d)$ for all $i, j \in \mathcal{Q}$, $i \neq j$.

For future reference we note, in particular, that

(2.3) $$|f_i(t,x) - f_i(s,y)| + |c_{ij}(t,x) - c_{ij}(s,y)| \leq c\,\big(|t - s| + |x - y|\big)$$

for some constant $c$, whenever $s, t \in [0,T]$ and $x, y \in \mathbb{R}^d$. Note that (2.3) implies that $f_i$ and $c_{ij}$, $i, j \in \mathcal{Q}$, are, for $t$ fixed, Lipschitz continuous w.r.t. $x$, uniformly in $t$, and vice versa. Concerning the switching costs we also impose the following structural assumptions on the functions $c_{ij}$,

(2.4) $$c_{ij}(t,x) > 0 \quad \text{for all } t \in [0,T],\ x \in \mathbb{R}^d,\ i \neq j,$$

and

$$c_{i_1 i_3}(t,x) < c_{i_1 i_2}(t,x) + c_{i_2 i_3}(t,x)$$

for any sequence of indices $i_1, i_2, i_3 \in \mathcal{Q}$ with $i_1 \neq i_2$ and $i_2 \neq i_3$.

Note that (2.4) states that it is always less expensive to switch directly from state $i_1$ to state $i_3$ compared to passing through an intermediate state $i_2$. We emphasize that we are able to carry out most of the analysis in the paper assuming only (2.1)–(2.4). However, there is one instance where we currently need to impose a stronger structural restriction on the functions $c_{ij}$ to push the argument through. Indeed, our final argument relies heavily on the Lipschitz continuity of certain value functions, established in Lemma 6.3 and Lemma 6.4 below. Currently, to prove these lemmas we need the extra assumption that
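Structural conditions of this type are straightforward to verify mechanically for a given matrix of constant switching costs. A hedged illustration (the function name and representation are ours, not the paper's), checking strict positivity off the diagonal and the "no cheaper intermediate state" triangle condition:

```python
# Check the structural conditions on a matrix of constant switching costs
# c[i][j]: strict positivity for i != j, and the triangle condition
# c[i][k] < c[i][j] + c[j][k] so that a direct switch is always cheapest.

def satisfies_structural_conditions(c):
    """c is a Q-by-Q list of lists of switching costs; c[i][i] is ignored."""
    q = len(c)
    for i in range(q):
        for j in range(q):
            if i != j and c[i][j] <= 0:
                return False  # switching must be strictly costly
    for i in range(q):
        for j in range(q):
            for k in range(q):
                if i != j and j != k and i != k:
                    if not (c[i][k] < c[i][j] + c[j][k]):
                        return False  # passing through j must not be cheaper
    return True
```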

(2.5) $$c_{ij}(t,x) \equiv c_{ij}(t), \quad \text{i.e., the switching costs do not depend on } x.$$

In particular, we need the switching costs to depend only on $t$, and the sole reason is that we need to be able to estimate certain terms appearing in the proofs of Lemma 6.3 and Lemma 6.4. While we strongly believe that these lemmas remain true without (2.5), we also believe that the proofs in the more general setting require more refined techniques, beyond the dynamic programming principle, and that one would have to resort to the connection to systems of variational inequalities with interconnected obstacles and reflected backward stochastic differential equations.

## 3. The filtering problem

As outlined in the introduction, the general goal of the filtering problem is to find the conditional distribution of the signal $X_t$ given the observations, i.e., given the $\sigma$-algebra $\mathcal{Y}_t$. In particular, given $t \in [0,T]$, let

$$\pi_t(A) := P\big[X_t \in A \,\big|\, \mathcal{Y}_t\big], \qquad A \in \mathcal{B}(\mathbb{R}^d),$$

and the aim is to find the (random) measure $\pi_t$. Note that $\pi = \{\pi_t\}_{t \geq 0}$ can be viewed as a stochastic process taking values in the infinite-dimensional space of probability measures over the state space of the signal. Let

$$a(t,x) := \sigma(t,x)\,\sigma^{\ast}(t,x),$$

where $\sigma^{\ast}$ is the transpose of $\sigma$, and let $\mathcal{L}_t$ be the following partial differential operator

$$\mathcal{L}_t\varphi(x) := \sum_{i=1}^{d} b_i(t,x)\,\frac{\partial \varphi}{\partial x_i}(x) + \frac{1}{2}\sum_{i,j=1}^{d} a_{ij}(t,x)\,\frac{\partial^2 \varphi}{\partial x_i \partial x_j}(x).$$

Using this notation and the assumptions stated in Section 2, one can show, e.g., see [BC09], that the stochastic process $\pi = \{\pi_t\}$ satisfies

(3.1) $$\pi_t(\varphi) = \pi_0(\varphi) + \int_0^t \pi_s(\mathcal{L}_s\varphi)\,ds + \int_0^t \big(\pi_s(h(s,\cdot)\,\varphi) - \pi_s(h(s,\cdot))\,\pi_s(\varphi)\big)^{\ast}\,\big(dY_s - \pi_s(h(s,\cdot))\,ds\big)$$

for any $\varphi \in C_b^2(\mathbb{R}^d)$. Recall that $h$ is the function appearing in (1.3). The non-linear stochastic PDE in (3.1) is called the Kushner-Stratonovich equation. Furthermore, it can also be shown, under certain conditions, that the Kushner-Stratonovich equation has, up to indistinguishability, a pathwise unique solution, e.g., see [BC09]. From here on we will, to simplify the notation, write $\pi_t\varphi$ instead of $\pi_t(\varphi)$.
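In the non-linear case the Kushner-Stratonovich equation is typically approximated numerically, for instance by particle methods. The following is a minimal bootstrap particle filter sketch for a one-dimensional signal and observation, under our own discretization assumptions (all names are ours); it is not the algorithm of this paper, which instead exploits the Kalman-Bucy structure:

```python
import math
import random

# Bootstrap particle filter sketch: pi_t(phi) is approximated by an empirical
# particle cloud.  We assume a 1-d signal dX = b(X)dt + sigma(X)dW and
# observation increments dY = h(X)dt + dV, discretized with step dt.

def particle_filter(phi, b, sigma, h, y_increments, dt, x0_sampler,
                    n_particles=400):
    random.seed(1)  # fixed seed for reproducibility of this sketch
    particles = [x0_sampler() for _ in range(n_particles)]
    for dy in y_increments:
        # propagate each particle with an Euler step of the signal SDE
        particles = [x + b(x) * dt + sigma(x) * math.sqrt(dt) * random.gauss(0, 1)
                     for x in particles]
        # weight by the Gaussian likelihood of the observation increment dy
        weights = [math.exp(-(dy - h(x) * dt) ** 2 / (2 * dt)) for x in particles]
        total = sum(weights)
        probs = [w / total for w in weights]
        # multinomial resampling back to an unweighted cloud
        particles = random.choices(particles, weights=probs, k=n_particles)
    # empirical approximation of pi_T(phi)
    return sum(phi(x) for x in particles) / n_particles
```

With a standard normal prior, a nearly static signal, and observation increments consistently indicating a state near 1, the estimated conditional mean moves from the prior mean 0 toward the observed level.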

### 3.1. Kalman-Bucy filter

It is known that in some particular cases the filtering problem outlined above can be solved explicitly, so that the a posteriori distribution is known. In particular, if we assume that the signal and the observation solve linear SDEs, then the solution to the filtering problem can be given explicitly. To be more specific, assume that the signal $X$ and the observation $Y$ are given by the systems in (1.1), (1.3), with

(3.2) $$b(t,x) = F_t\,x, \qquad \sigma(t,x) = \Sigma_t, \qquad h(t,x) = H_t\,x,$$

respectively, where $F$, $\Sigma$ and $H$ are measurable and locally bounded matrix-valued functions of time. Furthermore, assume that $X_0 \sim N(m_0, R_0)$, where $N(m, R)$ denotes the $d$-dimensional multivariate normal distribution defined by the vector of means $m$ and by the covariance matrix $R$, and that $X_0$ is independent of the underlying Brownian motions $W$ and $V$. Let $\hat X_t := E[X_t \mid \mathcal{Y}_t]$ and $R_t$ denote the conditional mean and the conditional covariance matrix of $X_t$, respectively. The following results concerning the filter $\pi_t$ and the processes $\hat X_t$ and $R_t$ can be found in, e.g., [KB61] or Chapter 6 in [BC09].

###### Theorem 3.1.

Assume (3.2) and that $X_0 \sim N(m_0, R_0)$ for some $m_0 \in \mathbb{R}^d$ and some positive semi-definite matrix $R_0$. Then the conditional distribution of $X_t$, conditional on $\mathcal{Y}_t$, is a multivariate normal distribution, $\pi_t = N(\hat X_t, R_t)$.

###### Theorem 3.2.

Assume (3.2) and that $X_0 \sim N(m_0, R_0)$ for some $m_0 \in \mathbb{R}^d$ and some positive semi-definite matrix $R_0$. Then the conditional covariance matrix $R_t$ satisfies the deterministic matrix Riccati equation

(3.3) $$\frac{dR_t}{dt} = F_t R_t + R_t F_t^{\ast} + \Sigma_t \Sigma_t^{\ast} - R_t H_t^{\ast} H_t R_t,$$

with initial condition $R_0$, and the conditional mean $\hat X_t$ satisfies the stochastic differential equation

(3.4) $$d\hat X_t = F_t \hat X_t\,dt + R_t H_t^{\ast}\big(dY_t - H_t \hat X_t\,dt\big),$$

with initial condition $\hat X_0 = m_0$.
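In the scalar case the equations of Theorem 3.2 can be discretized directly. A hedged one-dimensional Euler sketch (notation and step size are ours), with constant coefficients `F`, `Sigma`, `H` and a stream of observation increments:

```python
# 1-d Euler discretization of the Kalman-Bucy equations: the deterministic
# Riccati equation for R_t and the SDE for the conditional mean m_t, driven
# by observation increments dy ~ H X dt + dV.

def kalman_bucy_1d(F, Sigma, H, m0, R0, y_increments, dt):
    m, R = m0, R0
    for dy in y_increments:
        # mean update with Kalman gain R*H acting on the innovation dy - H m dt
        m = m + F * m * dt + R * H * (dy - H * m * dt)
        # Riccati update: dR/dt = 2 F R + Sigma^2 - (H R)^2
        R = R + (2 * F * R + Sigma ** 2 - (H * R) ** 2) * dt
    return m, R
```

For `F = 0`, `Sigma = H = 1` the Riccati equation has the stationary solution `R = 1`, and the discretized covariance converges to it regardless of `R0 >= 0`.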

For a positive semi-definite matrix $A$, let $A^{1/2}$ denote the unique positive semi-definite matrix such that $A^{1/2}(A^{1/2})^{\ast} = A$, where, as above, $A^{\ast}$ denotes the transpose of $A$. Recalling that the density defining $N(m,R)$, at $x \in \mathbb{R}^d$, equals

$$\frac{1}{(2\pi)^{d/2}(\det R)^{1/2}}\exp\Big(-\frac{1}{2}(x-m)^{\ast}R^{-1}(x-m)\Big)$$

whenever $R$ is non-singular,

we see that the following result follows immediately from Theorem 3.1 and Theorem 3.2.

###### Corollary 3.1.

The distribution $\pi_t$ is fully characterized by $\hat X_t$ and $R_t$ and

$$\pi_t(\varphi) = \frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d} \varphi\big(\hat X_t + R_t^{1/2} z\big)\,e^{-|z|^2/2}\,dz$$

for any bounded measurable $\varphi : \mathbb{R}^d \to \mathbb{R}$.

Note that the covariance matrix $R_t$ is deterministic and depends only on the known quantities $F$, $\Sigma$, $H$, see (3.2), and the distribution of the starting point $X_0$. Hence, once the initial distribution is given, the covariance matrix $R_t$ can be determined for all $t \in [0,T]$. Furthermore, in the Kalman-Bucy setting, the measure $\pi_t$ is Gaussian and hence fully characterized by its mean $\hat X_t$ and its covariance matrix $R_t$. As a consequence, the value function of the partial information optimal switching problem can in this setting be seen as a function of $(t, \hat X_t, R_t)$. We will, when $\nu$ is a Gaussian measure with mean $m$ and covariance matrix $R$, write

$$v_i(t,\nu) = v_i(t,m,R).$$

###### Remark 3.1.

Consider a fixed $t_0 \in [0,T]$, and let $\tilde F_s := F_{t_0+s}$, $\tilde \Sigma_s := \Sigma_{t_0+s}$, $\tilde H_s := H_{t_0+s}$, whenever $s \in [0, T-t_0]$. Let $\tilde T := T - t_0$ and $\tilde t := t - t_0$. Let $\hat X_t$, with initial distribution determined by $m_0$ and $R_0$, and $R_t$ be given as above for $t \in [0, t_0]$. Furthermore, given $m := \hat X_{t_0}$ and $R := R_{t_0}$, let $\tilde X$ and $\tilde Y$ be the unique solutions to the systems in (1.1) and (1.3), with $b$, $\sigma$, $h$ defined as in (3.2) but with $(F, \Sigma, H)$ replaced by $(\tilde F, \tilde \Sigma, \tilde H)$ and with initial data $\tilde X_0 \sim N(m, R)$ and $\tilde Y_0 = 0$. In addition, let $\tilde{\hat X}_s$ and $\tilde R_s$ be defined as in (3.3) and (3.4), with $\tilde{\hat X}_0 = m$ and $\tilde R_0 = R$. Finally, consider the value function $v_i(t,m,R)$ and let $\tilde v_i$ be the value function of the optimal switching problem on $[0, \tilde T]$, with $(X, Y)$ replaced by $(\tilde X, \tilde Y)$. Then

$$v_i(t, \hat X_t, R_t) = \tilde v_i(\tilde t, \tilde{\hat X}_{\tilde t}, \tilde R_{\tilde t}), \qquad t \in [t_0, T].$$

In particular,

$$v_i(t_0, m, R) = \tilde v_i(0, m, R),$$

and we see that there is no loss of generality in assuming that initial observations are made at $t = 0$.

###### Remark 3.2.

As the covariance matrix $R_t$ solves the deterministic Riccati equation in (3.3), it is completely determined by the parameters of the model and the covariance matrix of $X$ at time $0$. Hence, once the initial condition $R_0$ is given, $R_t$ can be computed deterministically for all $t \in [0,T]$, and consequently viewed as a known parameter. Therefore, we omit the dependence on $R_t$ in the value function and instead, with a slight abuse of notation, simply write $v_i(t,m)$.

###### Remark 3.3.

Although (3.3) is a deterministic ordinary differential equation, it may not be possible to solve it analytically. Therefore, in a general numerical treatment of the problem outlined above one has to use numerical methods to find the covariance matrix $R_t$. The error stemming from the numerical method used to solve (3.3) will influence the total error, defined as the absolute value of the difference between the true value function and its numerical approximation derived in this paper. However, as $R_t$ is deterministic, (3.3) can be solved off-line and to arbitrary accuracy without affecting the computational efficiency of the main numerical scheme presented in this paper. Therefore, we will throughout this paper consider $R_t$ as exactly known and ignore any error caused by the numerical algorithm used for solving (3.3).
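As an illustration of such an off-line computation, a scalar Riccati equation of the form (3.3) can be integrated to high accuracy with a classical fixed-step Runge-Kutta scheme. The following sketch (our construction, standard library only) tabulates the covariance on a fixed time grid:

```python
# Scalar Riccati right-hand side: dR/dt = 2 F R + Sigma^2 - (H R)^2,
# the 1-d specialization of the matrix equation, with constant coefficients.

def riccati_rhs(R, F, Sigma, H):
    return 2 * F * R + Sigma ** 2 - (H * R) ** 2

def solve_riccati_rk4(F, Sigma, H, R0, T, n_steps):
    """Tabulate R_t on a uniform grid of [0, T] with classical RK4."""
    dt, R = T / n_steps, R0
    table = [R]
    for _ in range(n_steps):
        k1 = riccati_rhs(R, F, Sigma, H)
        k2 = riccati_rhs(R + 0.5 * dt * k1, F, Sigma, H)
        k3 = riccati_rhs(R + 0.5 * dt * k2, F, Sigma, H)
        k4 = riccati_rhs(R + dt * k3, F, Sigma, H)
        R = R + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        table.append(R)  # stored for later use in the main scheme
    return table
```

For `F = 0`, `Sigma = H = 1`, `R0 = 0` the exact solution is `R(t) = tanh(t)`, which approaches the stationary value 1.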

### 3.2. Connection to the full information optimal switching problem

As mentioned in the introduction, the problem in (1.6) can be interpreted as a full information optimal switching problem with underlying process $\pi$. We here expand on this interpretation in the context of Kalman-Bucy filters. Let, using the notation in Remark 3.2,

$$\hat f_i(t,m) := \int_{\mathbb{R}^d} f_i(t,x)\,N(m,R_t)(dx), \qquad \hat c_{ij}(t,m) := \int_{\mathbb{R}^d} c_{ij}(t,x)\,N(m,R_t)(dx),$$

whenever $(t,m) \in [0,T] \times \mathbb{R}^d$ and $i,j \in \mathcal{Q}$, and let $\hat v_i$ be defined through

$$\hat v_i(t,m) := \sup_{S \in \mathcal{A}^{\mathcal{Y}}_{t,i}} E\left[\int_t^T \hat f_{\mu_s}(s,\hat X_s)\,ds - \sum_{j \geq 1} \hat c_{\beta_{j-1}\beta_j}(\tau_j,\hat X_{\tau_j})\,\middle|\, \hat X_t = m\right].$$

Furthermore, let $\bar b$ and $\bar\sigma$ be defined as

$$\bar b(t,m) := F_t\,m, \qquad \bar\sigma(t) := R_t H_t^{\ast},$$

so that, by (3.4), $\hat X$ is a diffusion process with drift $\bar b$ and diffusion coefficient $\bar\sigma$, driven by the innovation process, which is a $\mathcal{Y}_t$-Brownian motion. Then, for $R_t$ fixed, $\hat v_i$ is a solution to an optimal switching problem with perfect information. Using the above notation, we see that

(3.5) $$v_i(t,m) = \hat v_i(t,m),$$

and that the upper bound

$$|v_i(t,m)| \leq c, \quad \text{for some constant } c \text{ depending only on } T \text{ and the data},$$

holds. Moreover, based on (3.5) we see that $v_i$ also is a solution to an optimal switching problem with perfect information, with payoff rate in production mode $i$, at time $t$, defined by $\hat f_i(t,\hat X_t)$, and with switching cost, for switching from mode $i$ to mode $j$ at time $t$, defined by $\hat c_{ij}(t,\hat X_t)$.
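In practice the Gaussian averages of the payoff functions can be computed by quadrature or by plain Monte Carlo. A hedged one-dimensional Monte Carlo sketch (names are ours): for $X \sim N(m, R)$, draw $X = m + \sqrt{R}\,Z$ with $Z$ standard normal and average:

```python
import math
import random

# Monte Carlo approximation of the Gaussian average of a payoff f:
# estimate E[f(X)] for X ~ N(m, R) by averaging f over N(m, R) samples.

def gaussian_payoff(f, m, R, n_samples=100000, seed=0):
    rng = random.Random(seed)  # local generator so the sketch is reproducible
    s = math.sqrt(R)
    return sum(f(m + s * rng.gauss(0, 1)) for _ in range(n_samples)) / n_samples
```

As a sanity check, for $f(x) = x^2$ the exact value is $m^2 + R$.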

## 4. The dynamic programming principle

In this section we prove that the value function associated with our problem satisfies the dynamic programming principle (DPP). This is the result on which the numerical scheme outlined in the next section rests. It should be noted that the dynamic programming principle holds for general systems as in (1.1) and (1.3), systems which are not necessarily linear.
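To illustrate the backward-induction pattern behind a DPP-based scheme, consider a toy recursion on a time grid for a degenerate (deterministic) switching problem: the value in mode $i$ at time $t_k$ is the best of staying in $i$ or paying the switching cost to move to another mode. This sketch is entirely our construction; the paper's actual scheme additionally involves regression Monte Carlo estimates of conditional expectations:

```python
# Backward dynamic programming for a deterministic multi-mode switching toy:
# V_i(t_k) = max over modes j of  payoff_j * dt - cost(i -> j) + V_j(t_{k+1}),
# with zero switching cost for j == i and terminal values V_i(T) = 0.

def switching_dpp(payoff_rates, c, dt):
    """payoff_rates[k][i]: reward rate in mode i on [t_k, t_{k+1});
    c[i][j]: cost of switching from mode i to mode j."""
    n_steps, q = len(payoff_rates), len(c)
    V = [0.0] * q  # terminal condition V_i(T) = 0
    for k in reversed(range(n_steps)):
        V = [max(payoff_rates[k][j] * dt - (c[i][j] if j != i else 0.0) + V[j]
                 for j in range(q))
             for i in range(q)]
    return V  # V[i] approximates the value when starting at t_0 in mode i
```

With two modes paying rates 0 and 1, cost 0.1 each way, `dt = 1` and three steps, starting in the low mode it is optimal to switch once, giving value `3 - 0.1 = 2.9`, while starting in the high mode gives `3.0`.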