On Investment-Consumption with Regime-Switching

Work supported by NSERC grants 371653-09, 88051 and MITACS grants 5-26761, 30354 and the Natural Science Foundation of China (10901086).

Traian A. Pirvu
Dept of Mathematics & Statistics
McMaster University
1280 Main Street West
Hamilton, ON, L8S 4K1
tpirvu@math.mcmaster.ca
   Huayue Zhang
Dept of Finance
Nankai University
94 Weijin Road
Tianjin, China, 300071
hyzhang69@nankai.edu.cn

Abstract. In a continuous time stochastic economy, this paper considers the problem of consumption and investment in a financial market in which the representative investor exhibits a change in the discount rate. The investment opportunities are a stock and a riskless account. The market coefficients and the discount rate switch according to a finite state Markov chain. The change in the discount rate leads to time inconsistency of the investor's decisions. The randomness in our model is driven by a Brownian motion and a Markov chain. Following [3] we introduce and characterize the equilibrium policies for power utility functions. Moreover, they are computed in closed form for the logarithmic utility function. We show that a higher discount rate leads to a higher equilibrium consumption rate. Numerical experiments show the effect of both time preference and risk aversion on the equilibrium policies.

Key words: Portfolio optimization, time inconsistency, equilibrium policies, regime-switching discounting


JEL classification: G11



Mathematics Subject Classification (2000): 91B30, 60H30, 60G44

1 Introduction

Dynamic asset allocation in a stochastic paradigm has received a lot of scrutiny lately. The first papers in this area are [11] and [12]. Many works followed, most of them assuming an exponential discount rate. [3] gives an overview of the literature in the context of the Merton portfolio management problem with exponential discounting.

The issue of discounting has been the subject of many studies in financial economics. Several papers stepped away from exponential discounting and, based on empirical and experimental evidence, proposed different discount models. These can be organised in two classes: exogenous discount rates and endogenous discount rates. In the first class the best known example is hyperbolic discounting, which discounts the near future more heavily than the distant future, in accordance with the experimental findings. [3] and [4] discuss this class of discounting.

The concept of endogenous time preference was developed by [8] in a discrete time formulation. [17] considered the continuous time version, which was later extended by [6]. This class of discount rates emerged in response to two observed phenomena: "decreasing marginal impatience" (DMI) and "increasing marginal impatience" (IMI). DMI means that the lower the level of consumption, the more heavily an agent discounts the future; IMI is just the opposite: the higher the level of consumption, the more heavily an agent discounts the future. Some papers support DMI, e.g. [1], while others advocate IMI, e.g. [15].

In this paper we consider a regime-switching model for the financial market. This modeling is consistent with the cyclicality observed in financial markets. Many papers have considered these types of markets for pricing derivative securities; here we recall only two such works, [7] and [5]. In [7], the author considers a stock price model which allows the drift and the volatility coefficients to switch between two states. This market is incomplete, but it is completed with new securities. In [5] the problem of option pricing is considered in a model where the risky underlying assets are driven by Markov-modulated geometric Brownian motions. A regime-switching Esscher transform is used in order to find a martingale pricing measure. When it comes to optimal investment in regime-switching markets we point to [14], which allows the risk preference to switch according to the regime.

The discount rate in our paper is stochastic, exogenous, and depends on the regime. To the best of our knowledge, this is the first work to consider stochastic discounting within the Merton problem framework. In a discrete time model, [16] considers a cyclical discount factor.

Non-constant discount rates lead to time inconsistency of the decision maker, as shown in [3] and [4]. The resolution is to consider subgame perfect equilibrium strategies: strategies which are optimal to implement now given that they will be implemented in the future. After we introduce this concept we characterize it. In order to achieve this goal the methodology developed in [4] is employed. It is a new result in stochastic control theory: it mixes the idea of a value function (from the dynamic programming principle) with the idea that in the future "optimal trading strategies" are implemented (from the maximum principle of Pontryagin). The new twist in our paper is the Markov chain, and the mathematical ingredient used is Itô's formula for Markov-modulated diffusions. Thus, we obtain a system of four equations: the first equation says that the value function is equal to the continuation utility of the subgame perfect strategies; the second equation is the wealth equation generated by the subgame perfect strategies; the last two equations relate the value function to the subgame perfect strategies. The end result is a complicated system of PDEs, an SDE and a nonlinear equation with a nonlocal term. The investor's risk preference in this model is of CRRA type, which suggests an ansatz for the value function (it disentangles the time, space and Markov chain state components). This results in subgame perfect strategies which are time/state dependent and linear in wealth. In the special case of logarithmic utility we can compute them explicitly. For constant discount rates, we notice that the subgame perfect strategies coincide with the optimal ones.

The goal of this paper is twofold: first, to consider a model with stochastic discount rates and, second, to study the relationship between consumption and discount rates (which resembles IMI and/or DMI). In the Merton problem with constant discount rates we show that the higher the discount rate, the higher the consumption rate (this is somewhat an inverse relationship to IMI). We explore this relationship in our model with stochastic discount rates. Numerical experiments reveal that the consumption rate is higher in the market states with the higher discount rate. We provide an analytic proof of this result for the special case of logarithmic utility. This result is consistent with the discount rate monotonicity of the consumption rate in the Merton problem. It can also explain the IMI effect: if we observe the consumption rate, then a possible upside jump can be linked to a jump in the discount rate and vice versa. The effect of risk aversion on the consumption rate is also analysed. Here the results are consistent with [11] and [12]: the consumption rate is increasing in time for most levels of risk aversion, except at very low levels of risk aversion, when it is decreasing (this can be explained by the investor's increased appetite for risk, which leads to more investment in the risky asset and a decreasing consumption rate).

The remainder of this paper is organized as follows. In Section 2 we describe the model and formulate the objective. Section 3 contains the main result under power utility and logarithmic utility. In Section 4 we present the numerical results. Section 5 examines consumption versus the discount rate. The paper ends with an appendix containing the proofs.

2 The Model

2.1 The Financial Market

Consider a probability space $(\Omega, \mathcal{F}, P)$ which accommodates a standard Brownian motion $W = \{W(t)\}_{t \ge 0}$ and a homogeneous finite state continuous time Markov chain (MC) $J = \{J(t)\}_{t \ge 0}$. For simplicity we assume that the MC takes values in a two-state space; our results hold true in the more general situation of finitely many states. The filtration $\{\mathcal{F}_t\}$ is the completed filtration generated by $W$ and $J$. We assume that the stochastic processes $W$ and $J$ are independent. The MC has a generator $(q_{ij})$ with $q_{ij} \ge 0$ for $i \neq j$ and $\sum_{j} q_{ij} = 0$ for every $i$.
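In terms of the generator, the transition probabilities of the MC over a short time interval satisfy the standard relation

\[ P\big(J(t+h) = j \,\big|\, J(t) = i\big) = q_{ij}\, h + o(h), \qquad j \neq i, \quad h \downarrow 0, \]

which is the way the generator enters the regime-switching terms appearing below.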

In our setup the financial market consists of a bank account $S_0$ and a risky asset (stock) $S_1$, traded continuously over a finite time horizon $[0,T]$ (here $T$ is an exogenously given deterministic time). The price processes of the bank account and of the risky asset are governed by the following Markov-modulated SDEs:

\[ dS_0(t) = r(t, J(t))\, S_0(t)\, dt, \qquad dS_1(t) = S_1(t)\,\big[\alpha(t, J(t))\, dt + \sigma(t, J(t))\, dW(t)\big], \]

where $S_0(0)$ and $S_1(0)$ are the initial prices. The functions $r$, $\alpha$ and $\sigma$ are assumed to be deterministic, positive and continuous in time. Given the state of the MC at time $t$, they represent the riskless rate, the stock return and the stock volatility, respectively. Moreover,

\[ \mu(t, i) := \alpha(t, i) - r(t, i) \]

stands for the stock excess return.
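For intuition, the dynamics above can be simulated directly. The following minimal Python sketch generates one path of the two-state chain and of the Markov-modulated stock price by an Euler scheme; the generator entries and the constant values chosen for $\alpha$ and $\sigma$ are illustrative placeholders, not parameters used elsewhere in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state generator (rows sum to zero) and regime-dependent
# coefficients; these numbers are placeholders, not parameters from the paper.
Q = np.array([[-0.5, 0.5],
              [0.8, -0.8]])
alpha = {0: 0.08, 1: 0.03}       # stock return in each regime
sigma = {0: 0.20, 1: 0.35}       # stock volatility in each regime

T, n = 1.0, 1000
dt = T / n
t_grid = np.linspace(0.0, T, n + 1)

# Simulate the Markov chain path J via exponential holding times.
J = np.empty(n + 1, dtype=int)
state = 0
next_jump = rng.exponential(1.0 / -Q[state, state])
for k, t in enumerate(t_grid):
    while t >= next_jump:
        state = 1 - state                                   # jump to the other state
        next_jump += rng.exponential(1.0 / -Q[state, state])
    J[k] = state

# Euler scheme for the Markov-modulated stock price dS_1 = S_1 (alpha dt + sigma dW).
S = np.empty(n + 1)
S[0] = 1.0
dW = rng.normal(0.0, np.sqrt(dt), n)
for k in range(n):
    i = J[k]
    S[k + 1] = S[k] * (1.0 + alpha[i] * dt + sigma[i] * dW[k])

print("terminal regime:", J[-1], " terminal stock price:", round(S[-1], 4))
```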

2.2 Investment-consumption strategies and wealth processes

In our model, a representative investor continuously invests in the stock and the bond, and consumes. An acceptable investment-consumption strategy is defined below:

Definition 2.1.

A pair of stochastic processes $\{(\pi(t), c(t))\}_{t \in [0,T]}$, with $\pi(t)$ real valued and $c(t) \geq 0$, is called an admissible strategy process if it is progressively measurable and satisfies the following integrability condition

(2.1)

Here $\pi(t)$ stands for the dollar value invested in the stock at time $t$ and $c(t)$ for the consumption rate. The process $X(t)$ represents the wealth of the investor at time $t$ associated with the trading strategy $(\pi, c)$; it satisfies the following stochastic differential equation (SDE)

\[ dX(t) = \big[\, r(t, J(t))\, X(t) + \pi(t)\,\mu(t, J(t)) - c(t) \,\big]\, dt + \pi(t)\,\sigma(t, J(t))\, dW(t), \qquad X(0) = x, \]
(2.2)

where $x$ is the initial wealth and $J(0)$ is the initial state. This SDE is called the self-financing condition. Under the regularity condition (2.1) imposed on $(\pi, c)$ above, the SDE (2.2) admits a unique strong solution. At the end of this section, we introduce further assumptions.
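A similar minimal Euler sketch of the wealth equation (2.2) under a hypothetical strategy (a fixed dollar amount held in the stock and a fixed consumption rate) reads as follows; again, all numerical values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder regime-dependent coefficients and generator (illustrative values only).
r     = {0: 0.02, 1: 0.04}       # riskless rate per regime
mu    = {0: 0.06, 1: 0.01}       # stock excess return per regime
sigma = {0: 0.20, 1: 0.35}       # stock volatility per regime
Q = np.array([[-0.5, 0.5],
              [0.8, -0.8]])

T, n = 1.0, 1000
dt = T / n

# Hypothetical strategy: fixed dollar amount in the stock, fixed consumption rate.
pi_t, c_t = 0.5, 0.05

X, state = 1.0, 0                # initial wealth and initial regime
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    # Euler step of dX = [r X + pi mu - c] dt + pi sigma dW with coefficients
    # frozen at the current regime.
    X += (r[state] * X + pi_t * mu[state] - c_t) * dt + pi_t * sigma[state] * dW
    # Crude regime switch over one step: P(jump) ~ -Q[i, i] * dt.
    if rng.random() < -Q[state, state] * dt:
        state = 1 - state

print("terminal wealth:", round(X, 4))
```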

Assumption 2.2.

The utility function of the investor is of CRRA type, i.e.,

\[ U(x) = \frac{x^{\gamma}}{\gamma}, \qquad x > 0, \]

where $\gamma < 1$, $\gamma \neq 0$ (the case of logarithmic utility, $U(x) = \log x$, is treated separately). Thus, the inverse marginal utility function is

\[ I(y) = (U')^{-1}(y) = y^{\frac{1}{\gamma - 1}}. \]
(2.3)
Assumption 2.3.

For any risk aversion level the following inequalities hold

(2.4)
(2.5)

2.3 The discount rate

As we mentioned in the introduction, this paper considers stochastic discount rates. An easy way to achieve this is to let the discount rate depend on the state of the MC. Thus, at an intermediate time $t$ the discount rate is $\rho_{J(t)}$, for some positive constants $\rho_1$ and $\rho_2$. The intuition behind this way of modeling the discount rate stems from the connection between the market state and discount rates (this can be explained by models with endogenous discount rates, which may be influenced by economic factors).

2.4 The Risk Criterion

In our model, the investor decides what investment-consumption strategy to choose according to the expected utility risk criterion. Thus, the investor's goal is to maximize the utility of intertemporal consumption and final wealth. The novelty here is that we allow the investor to update the risk criterion and to reconsider the optimal strategies she/he computed in the past. This will lead to time inconsistent behavior, as we show below. Let the agent start with a given positive wealth and a given market state at some instant $t$. The optimal trading strategy is chosen to maximize this criterion.

The optimal trading strategy is derived by the supermartingale/martingale principle. This leads to the Hamilton-Jacobi-Bellman (HJB) equation

(2.6)

with the boundary condition

(2.7)

Here $J(t)$ stands for the value of the MC at time $t$. Thus, the HJB equation depends on the current time $t$ (through the regime-dependent discounting), and this dependence is inherited by the optimal trading strategy. This in turn leads to time inconsistencies. The resolution is to introduce subgame perfect equilibrium strategies: they are optimal now given that they will be implemented in the future.
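For orientation, in the benchmark case of a constant discount rate $\rho$ the same martingale principle leads, writing $V(t,x,i)$ for the value function, to an HJB equation of the following familiar form (a sketch in the notation introduced above; see also Remark 3.4):

\[ \frac{\partial V}{\partial t}(t,x,i) - \rho\, V(t,x,i) + \sum_{j} q_{ij}\big[V(t,x,j) - V(t,x,i)\big] + \sup_{\pi,\, c \ge 0}\Big\{ \big(r(t,i)\, x + \pi\, \mu(t,i) - c\big)\frac{\partial V}{\partial x}(t,x,i) + \tfrac{1}{2}\, \pi^{2} \sigma^{2}(t,i)\, \frac{\partial^{2} V}{\partial x^{2}}(t,x,i) + U(c) \Big\} = 0, \]

with the terminal condition $V(T,x,i) = U(x)$. With the state-dependent discount rate of Section 2.3, the resulting HJB equation depends on the evaluation time, which is the source of the time inconsistency discussed above.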

3 The Main Result

3.1 The subgame perfect trading strategies

For a policy process satisfying (2.1) and its corresponding wealth process given by (2.2), we denote the expected utility functional by

(3.8)

Following [3] we shall give a rigorous mathematical formulation of the equilibrium strategies in the formal definition below.

Definition 3.1.

Let $F$ be a map such that, for any initial time, wealth level and market state,

(3.9)

where

(3.10)

and satisfies (2.1). Here, the process is the wealth corresponding to the policy $F$. The perturbed process is another investment-consumption strategy defined by

(3.11)
(3.12)

where the perturbation is any trading strategy for which the perturbed policy is admissible. If (3.9) holds true, then $F$ is a subgame perfect strategy.

3.2 The value function

Our goal is, in a first step, to characterize the subgame perfect strategies and then to find them. Inspired by [3], the value function satisfies

(3.13)

Recall that the wealth process corresponds to the subgame perfect strategy $F$, so it solves the SDE

(3.14)

Moreover, the subgame perfect strategy is given by

(3.15)

Thus, the value function is characterized by a system of four equations: one integral equation with a nonlocal term (3.13), one SDE (3.14) and two PDEs (3.15). Of course the existence of such a function satisfying the equations above is not a trivial issue. We will take advantage of the special form of the utility function to simplify the problem of finding the value function. We search for a value function of the form:

(3.16)

where the function of time and market state is to be found. We consider the case $\gamma \neq 0$ (the case of logarithmic utility will be treated separately). In the light of equations (3.15) one gets

(3.17)

By (3.14), the associated wealth process satisfies the following SDE:

(3.18)

This is a linear SDE which can be easily solved. By plugging the ansatz (3.16) into (3.13), with the strategies of (3.17) and the wealth process of (3.18), we obtain the following equation for the unknown function:

(3.19)

with the corresponding final condition at the terminal time $T$. Next we show that there exists a unique solution of this ODE system. Let us summarize these findings:

Lemma 3.2.

There exists a unique continuously differentiable solution of the system (3.19). Furthermore, the function given by (3.16) is a value function, meaning that it solves (3.13) with the strategies of (3.17) and the wealth process of (3.18).

Appendix A proves this Lemma.

The following Theorem states the central result of our paper.

Theorem 3.3.

Suppose that the value function is given by (3.16), with the solution of the system (3.19), and let the wealth process be the solution of the SDE (3.18). Then the strategy given by

(3.20)

is a subgame perfect strategy.

Appendix B proves this Theorem.

Remark 3.4.

In the case of a constant discount rate, i.e., $\rho_1 = \rho_2$, the subgame perfect strategies coincide with the optimal ones. This can be seen by looking at the integral equation (6.34), which is exactly the HJB equation (2.6) (after the first order conditions are implemented).

Next we turn to the special case of logarithmic utility.

3.3 Logarithmic Utility

In the case of logarithmic utility we search for a value function of the following form:

(3.21)

Arguing as in the previous subsection we find that the functions should satisfy the following system of equations:

(3.22)

with the corresponding final conditions at the terminal time $T$. Notice that the first of these functions solves a linear ODE system and can be found explicitly (see the generic computation following (3.24)); once it is known, the second function also solves a linear ODE system. The subgame perfect strategy is given by

(3.23)

with

(3.24)
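To see why such linear terminal-value systems can be solved explicitly, consider a generic scalar problem of this type, with hypothetical coefficients $a(\cdot)$ and $b(\cdot)$:

\[ h'(t) + a(t)\, h(t) + b(t) = 0, \qquad h(T) = h_{T}, \]

whose solution, by the variation of constants formula, is

\[ h(t) = h_{T}\, e^{\int_{t}^{T} a(u)\, du} + \int_{t}^{T} b(s)\, e^{\int_{t}^{s} a(u)\, du}\, ds. \]

For the two-state system (3.22) the regimes are coupled through the generator, and the same computation goes through with matrix-valued coefficients (and time-ordered matrix exponentials when the coefficients are time dependent).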

Next we want to explore the relationship between the subgame perfect consumption and the discount rate. The following lemma shows that the higher the discount rate, the higher the consumption.

Lemma 3.5.

Assume that $\rho_1 > \rho_2$ and consider the solution of the linear ODE system (3.22). Then, for any time, the subgame perfect consumption rate is higher in state $1$; in other words, the subgame perfect consumption rate is higher in the state in which the discount rate is higher.

Appendix C proves this Lemma.

4 Numerical Analysis

In this section, we use Matlab's ODE solvers (the functions ode23 and ode45) to perform numerical experiments. We numerically solve the ODE system (3.19), and this in turn yields the subgame perfect strategies. The market coefficients, the discount rates and the Markov chain generator are fixed at given numerical values.

Next, define the consumption rate (the proportion of wealth consumed) in terms of the functions, one for each market state, that solve (3.19).
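The computation can be reproduced with any standard solver. The following Python sketch mirrors the Matlab ode45 computation by integrating a generic two-regime terminal-value system backwards in time; the right-hand side below is only a placeholder for the actual system (3.19), and the terminal values, coefficients and the mapping from the solution to a consumption rate are assumptions made for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder data for a two-regime system of the type (3.19): one unknown per
# market state, coupled through the generator. All values are illustrative only.
Q = np.array([[-0.5, 0.5],
              [0.8, -0.8]])      # hypothetical generator
a = np.array([0.10, 0.05])       # hypothetical state-dependent coefficients
T = 1.0

def rhs(t, h):
    # Hypothetical stand-in for (3.19):
    #   dh_i/dt = -a_i h_i - 1 + sum_j q_ij (h_j - h_i),
    # where sum_j q_ij (h_j - h_i) = (Q h)_i because the rows of Q sum to zero.
    return -a * h - 1.0 + Q @ h

# Terminal-value problem: integrate backwards from h(T) = (1, 1) by reversing time.
def rhs_reversed(s, h):          # s = T - t
    return -rhs(T - s, h)

sol = solve_ivp(rhs_reversed, (0.0, T), y0=[1.0, 1.0], dense_output=True, rtol=1e-8)

t_grid = np.linspace(0.0, T, 11)
h = sol.sol(T - t_grid)          # values h(t, i) on the original time grid
consumption_rate = 1.0 / h       # e.g. proportion of wealth consumed, if c = X / h
print(np.round(consumption_rate, 4))
```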



Fig. Equilibrium proportion of wealth consumed for different levels of risk aversion and different discount rates. The horizontal axis represents time and the vertical axis the consumption rate.

Remark 4.1.

As the terminal time is approached, the consumption rate tends to its terminal value, as was to be expected. From the picture corresponding to the largest value of $\gamma$ we can see that the consumption rate decreases with time. This is explained by the fact that a higher $\gamma$ means lower risk aversion, which leads to a higher proportion of wealth invested in the stock and less consumption. This is consistent with a graph from [11]. We see from the graphs that the consumption rate increases with time for the remaining values of $\gamma$ (this is also consistent with graphs from [11]). For a given risk aversion level $\gamma$, a higher discount rate results in a higher consumption rate. The difference between the consumption rates in the two MC states decreases as the terminal time is approached.

5 Consumption versus Discount Rate

We were able to prove for the case of logarithmic utility that higher discount rates lead to higher consumption rates. Numerical evidence suggests that this is also true for general power utilities. On the other hand, in a model with a constant discount rate the same result holds true, as the following lemma shows.

Lemma 5.1.

Consider the optimal consumption rate in a model with a constant discount rate $\rho$. Then the optimal consumption rate is increasing in $\rho$.

Appendix D proves this Lemma.

6 Appendix

6.1 It’s formula for Markov Chain modulated diffusions

Suppose the stochastic process $X$ satisfies the SDE

\[ dX(t) = b(t, X(t), J(t))\, dt + a(t, X(t), J(t))\, dW(t), \]

for some functions $b(t,x,i)$ and $a(t,x,i)$ defined for each state $i$ of the MC. Then, for every function $f(t,x,i)$ which is of class $C^{1,2}$ in $(t,x)$ for each state $i$,

\[ df(t, X(t), J(t)) = \mathcal{L} f(t, X(t), J(t))\, dt + a(t, X(t), J(t))\, \frac{\partial f}{\partial x}(t, X(t), J(t))\, dW(t) + dM(t), \]

where

\[ \mathcal{L} f(t,x,i) = \frac{\partial f}{\partial t}(t,x,i) + b(t,x,i)\, \frac{\partial f}{\partial x}(t,x,i) + \frac{1}{2}\, a^{2}(t,x,i)\, \frac{\partial^{2} f}{\partial x^{2}}(t,x,i) + \sum_{j} q_{ij}\, \big[f(t,x,j) - f(t,x,i)\big]. \]

Here $M$ is the martingale process associated with the Markov chain.

6.2 Appendix A: Proof of Lemma 3.2

The existence of a unique solution of the ODE system (3.19) is granted locally in time by a fixed point theorem. If we can establish a priori bounds on the solution, then this local solution is also a global solution. Let us introduce an auxiliary process by

(6.25)

with the solution of (3.19). Dynkin's formula implies that the process defined by (6.25) is a martingale. Further let

(6.26)

By the product rule,

(6.27)

Integrating (6.27) over the remaining time interval, we get

(6.28)

Taking expectations on both sides of (6.28) and passing to the limit, we obtain

(6.29)

The boundedness assumption on the market coefficients makes the process of (6.27) bounded. This in turn yields that the solution is uniformly bounded from below. Next we want to prove that it is also bounded from above. For a vector we introduce the following norm:

When $\gamma$ is positive, (6.29) yields an upper bound (since the solution is bounded from below). When $\gamma$ is negative, we get from (6.29) that

(6.30)
(6.31)

Thus,

for some positive constant. Gronwall's inequality then yields an upper bound. Next we prove uniqueness of the solution of (3.19). Indeed, let there be two solutions of (3.19); then by (6.29) it follows that

(6.32)

for some positive constants. Consequently,

Thus, by Gronwal’s inequality it follows that

Next we want to prove that the function of (3.16) is a value function, i.e., it solves (3.13) with the strategies of (3.17) and the wealth process of (3.18). Since the function is sufficiently differentiable, we can compute its derivatives:

Next we show that it solves the PDE system (6.34) (which, by Lemma 6.1, is equivalent to (3.13)). Substituting the above derivatives into equation (6.34) yields

(6.33)

After cancellation we recover the ODE system (3.19); hence the candidate function solves (6.34) (and also (3.13)).

6.3 Appendix B: Proof of Theorem 3.3

We need the following Lemma which gives a PDE version of (3.13).

Lemma 6.1.

For a given policy, assume there exists a function of class $C^{1,2}$ which satisfies (3.13). Then it solves the following equation

(6.34)

with the boundary condition

Proof: We rewrite equation (3.13) as

(6.35)

where

For a fixed time, the process is a martingale. Thus, in the light of (3.14), the function satisfies the following PDEs:

(6.36)
(6.37)

By differentiating (6.35) with respect to the time variable we get

(6.38)

Moreover

(6.39)
(6.40)

In the light of (6.36), (6.37), (6.38), (6.39) and (6.40) it follows that