On Investment-Consumption with Regime-Switching

Acknowledgment: Work supported by NSERC grants 371653-09, 88051, MITACS grants 5-26761, 30354, and the Natural Science Foundation of China (10901086).
Abstract. In a continuous-time stochastic economy, this paper considers the problem of consumption and investment in a financial market in which the representative investor exhibits a change in the discount rate. The investment opportunities are a stock and a riskless account. The market coefficients and the discount rate switch according to a finite-state Markov chain. The change in the discount rate leads to time inconsistency of the investor’s decisions. The randomness in our model is driven by a Brownian motion and a Markov chain. Following  we introduce and characterize the equilibrium policies for power utility functions. Moreover, we compute them in closed form for the logarithmic utility function. We show that a higher discount rate leads to a higher equilibrium consumption rate. Numerical experiments illustrate the effect of both time preference and risk aversion on the equilibrium policies.
Key words: Portfolio optimization, time inconsistency, equilibrium policies, regime-switching discounting
JEL classification: G11
Mathematics Subject Classification (2000): 91B30, 60H30, 60G44
Dynamic asset allocation in a stochastic paradigm has received considerable attention lately. The first papers in this area are  and . Many works followed, most of them assuming an exponential discount rate.  gives an overview of the literature in the context of the Merton portfolio management problem with exponential discounting.
The issue of discounting has been the subject of many studies in financial economics. Several papers stepped away from exponential discounting and, based on empirical and experimental evidence, proposed different discount models. These can be organised into two classes: exogenous discount rates and endogenous discount rates. In the first class the best known example is hyperbolic discounting. This type discounts the near future more heavily than the distant future, in accordance with the experimental findings.  and  discuss this class of discounting.
The concept of endogenous time preference was developed by  in a discrete time formulation.  considered the continuous time version, which was later extended by . This class of discount rates emerged in response to two observed phenomena: “decreasing marginal impatience” (DMI) and “increasing marginal impatience” (IMI). DMI means that the lower the level of consumption, the more heavily an agent discounts the future. IMI is just the opposite: the higher the level of consumption, the more heavily an agent discounts the future. Some papers support DMI, e.g. , while others advocate IMI, e.g. .
In this paper we consider a regime-switching model for the financial market. This modeling is consistent with the cyclicality observed in financial markets. Many papers have considered such markets for pricing derivative securities; here we recall only two,  and . In , the author considers a stock price model which allows the drift and volatility coefficients to switch between two states. This market is incomplete, but it is completed with new securities. In , the problem of option pricing is considered in a model where the risky underlying assets are driven by Markov-modulated geometric Brownian motions. A regime-switching Esscher transform is used to find a martingale pricing measure. When it comes to optimal investment in regime-switching markets we point to , where the risk preference is allowed to switch according to the regime.
The discount rate in our paper is stochastic, exogenous, and depends on the regime. To the best of our knowledge, this is the first work to consider stochastic discounting within the Merton problem framework. In a discrete time model,  considers a cyclical discount factor.
Non-constant discount rates lead to time inconsistency of the decision maker, as shown in  and . The resolution is to consider subgame perfect equilibrium strategies. These are strategies which are optimal to implement now given that they will be implemented in the future. After introducing this concept, we characterize it by employing the methodology developed in . This is a new result in stochastic control theory: it mixes the idea of a value function (from the dynamic programming principle) with the idea that in the future “optimal trading strategies” are implemented (from the maximum principle of Pontryagin). The new twist in our paper is the Markov chain, and the mathematical ingredient used is Itô’s formula for Markov-modulated diffusions. Thus, we obtain a system of four equations: the first equation states that the value function equals the continuation utility of the subgame perfect strategies; the second is the wealth equation generated by the subgame perfect strategies; the last two equations relate the value function to the subgame perfect strategies. The end result is a complicated system of PDEs, an SDE and a nonlinear equation with a nonlocal term. The investor’s risk preference in this model is of CRRA type, which suggests an ansatz for the value function (it disentangles the time, space and Markov chain state components). This results in subgame perfect strategies which are time/state dependent and linear in wealth. In the special case of logarithmic utility we can compute them explicitly. For constant discount rates, we notice that the subgame perfect strategies coincide with the optimal ones.
The goal of this paper is twofold: first, to consider a model with stochastic discount rates, and second, to study the relationship between consumption and discount rates (which resembles IMI and/or DMI). In the Merton problem with constant discount rates we show that the higher the discount rate, the higher the consumption rate (this is somehow an inverse relationship to IMI). We explore this relationship in our model with stochastic discount rates. Numerical experiments reveal that the consumption rate is higher in the market states with the higher discount rate. We provide an analytic proof of this result for the special case of logarithmic utility. This result is consistent with the discount rate monotonicity of the consumption rate in the Merton problem. It can also explain the IMI effect: if we observe the consumption rate, then a possible upside jump can be linked to a jump in the discount rate, and vice versa. The effect of risk aversion on the consumption rate is also analysed. Here the results are consistent with  and : the consumption rate is increasing in time for most levels of risk aversion, except at very high levels when it is decreasing (this can be explained by the investor’s increased appetite for risk, which leads to more investment in the risky asset and a decreasing consumption rate).
The remainder of this paper is organized as follows. In Section 2 we describe the model and formulate the objective. Section 3 contains the main result under power and logarithmic utility. In Section 4 we present the numerical results. Section 5 examines consumption versus the discount rate. The paper ends with an appendix containing the proofs.
2 The Model
2.1 The Financial Market
Consider a probability space which accommodates a standard Brownian motion and a homogeneous finite-state continuous-time Markov chain (MC). For simplicity we assume that the MC takes two values; our results hold true in the more general situation of finitely many states. The filtration is the completed filtration generated by the Brownian motion and the MC. We assume that these two stochastic processes are independent. The MC has a generator whose off-diagonal entries are nonnegative and whose rows sum to zero.
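To make the regime process concrete, the following sketch simulates a finite-state continuous-time Markov chain from its generator via exponential holding times. The generator values are illustrative only, not the paper's example.

```python
import numpy as np

def simulate_mc_path(Q, i0, T, rng):
    """Simulate a continuous-time Markov chain with generator Q on [0, T].

    Off-diagonal Q[i, j] is the jump intensity from i to j; rows sum to zero.
    Returns the jump times and the visited states, starting from state i0.
    """
    times, states = [0.0], [i0]
    t, i = 0.0, i0
    while True:
        rate = -Q[i, i]                   # total jump intensity out of state i
        t += rng.exponential(1.0 / rate)  # exponential holding time in state i
        if t >= T:
            break
        probs = Q[i].copy()
        probs[i] = 0.0
        probs = probs / rate              # jump distribution over the other states
        i = int(rng.choice(len(probs), p=probs))
        times.append(t)
        states.append(i)
    return times, states

# Hypothetical two-state generator (chosen for illustration).
Q = np.array([[-0.5, 0.5],
              [ 1.0, -1.0]])
rng = np.random.default_rng(0)
times, states = simulate_mc_path(Q, 0, 10.0, rng)
```

The same mechanism drives the market coefficients below: between jump times the coefficients are frozen at the values of the current regime.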
In our setup the financial market consists of a bank account and a risky asset that are traded continuously over a finite time horizon (the horizon is an exogenously given deterministic time). The price processes of the bank account and the risky asset are governed by the following Markov-modulated SDEs:
where the initial prices are given. The coefficient functions are assumed to be deterministic, positive and continuous in time. Given the state of the MC, they represent the riskless rate, the stock volatility and the stock return. Moreover, the difference between the stock return and the riskless rate stands for the stock excess return.
2.2 Investment-consumption strategies and wealth processes
In our model, a representative investor continuously invests in the stock and the bond, and consumes. An acceptable investment-consumption strategy is defined below:
A vector-valued stochastic process is called an admissible strategy if it is progressively measurable and satisfies the following integrability condition
Here the first component stands for the dollar value invested in the stock at each time and the second for the consumption. The wealth of the investor associated with a given trading strategy satisfies the following stochastic differential equation (SDE)
where the initial wealth and the initial state are given. This SDE is called the self-financing condition. Under the regularity condition (2.1) imposed above, the SDE (2.2) admits a unique strong solution. At the end of this section, we introduce further assumptions.
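Since the display of (2.2) is missing here, we record a standard Merton-type self-financing wealth equation consistent with the surrounding text; the notation (dollar amount in the stock, consumption rate, and excess return equal to the stock return minus the riskless rate) is our assumption:

```latex
dX(t) = \bigl[\, r(\varepsilon_t)\,X(t) + \pi(t)\,\alpha(\varepsilon_t) - c(t) \,\bigr]\,dt
      + \pi(t)\,\sigma(\varepsilon_t)\,dW(t), \qquad X(s) = x,
```

where $\varepsilon_t$ is the Markov chain, $r$, $\sigma$ are the Markov-modulated riskless rate and volatility, and $\alpha$ is the stock excess return introduced above.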
The utility function of the investor is of CRRA type, i.e.,
Thus, the inverse marginal utility function is
For any risk aversion level the following inequalities hold
2.3 The discount rate
As mentioned in the introduction, this paper considers stochastic discount rates. An easy way to achieve this is to let the discount rate depend on the state of the MC. Thus, at some intermediate time the discount rate is given by one of the positive constants associated with the current state. The intuition of this way of modeling the discount rate stems from the connection between market states and discount rates (this can be explained by models with an endogenous discount rate which may be influenced by economic factors).
2.4 The Risk Criterion
In our model, the investor decides what investment/consumption strategy to choose according to the expected utility risk criterion. Thus, the investor’s goal is to maximize the utility of intertemporal consumption and final wealth. The novelty here is that we allow the investor to update the risk criterion and to reconsider the optimal strategies she/he computed in the past. This leads to time-inconsistent behavior, as we show below. Let the agent start with a given positive wealth and a given market state at some instant. The optimal trading strategy is chosen such that
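One plausible form of the reward functional being maximized, consistent with the surrounding text (the regime-dependent rate of the current state applied from the decision time onward), is the following; the notation and the exact placement of the discount factor are our assumptions, not the paper's display:

```latex
J(t, x, i; \pi, c)
  = \mathbb{E}\Bigl[ \int_t^T e^{-\rho_i (s-t)}\, U\bigl(c(s)\bigr)\,ds
  + e^{-\rho_i (T-t)}\, U\bigl(X(T)\bigr) \,\Big|\, X(t) = x,\ \varepsilon_t = i \Bigr],
```

where $U$ is the CRRA utility of Section 2.2 and $\rho_i$ is the discount rate of regime $i$. Because the rate used depends on the regime at the decision time, criteria evaluated at different times disagree, which is the source of the time inconsistency.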
Throughout the paper we denote The optimal trading strategy is derived by the supermartingale/martingale principle. This leads to the Hamilton-Jacobi-Bellman (HJB) equation
with the boundary condition
Here the current value of the MC enters the equation. Thus, the HJB equation depends on the current time (through the discounting) and this dependence is inherited by the optimal trading strategy. This in turn leads to time inconsistencies. The resolution is to introduce subgame perfect equilibrium strategies. They are optimal now given that they will be implemented in the future.
3 The Main Result
3.1 The subgame perfect trading strategies
Following  we shall give a rigorous mathematical formulation of the equilibrium strategies in the formal definition below.
Let be a map such that for any and
and satisfies (2.1). Here, the process is the wealth corresponding to The process is another investment-consumption strategy defined by
where the perturbation is any trading strategy for which the resulting policy is admissible. If (3.9) holds true, then the strategy is subgame perfect.
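Since the displays of the definition are missing here, we recall the usual spike-variation formalization from the time-inconsistency literature, which the text paraphrases; the notation below is ours. Given a candidate strategy $\hat{u}$, perturb it on a small interval $[t, t+\varepsilon)$ by an arbitrary admissible strategy $u$, and denote the perturbed strategy by $u_\varepsilon$. Then $\hat{u}$ is subgame perfect if, for every initial datum $(t, x, i)$,

```latex
\liminf_{\varepsilon \downarrow 0}
  \frac{J(t, x, i; \hat{u}) - J(t, x, i; u_\varepsilon)}{\varepsilon} \;\ge\; 0 .
```

In words: no first-order gain is possible by deviating on an infinitesimally short interval, given that $\hat{u}$ is followed afterwards.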
3.2 The value function
Our goal is first to characterize the subgame perfect strategies and then to find them. Inspired by , the value function satisfies
Recall that is the wealth process corresponding to so it solves the SDE
Moreover, is given by
Thus, the value function is characterized by a system of four equations: one integral equation with a nonlocal term (3.13), one SDE (3.14) and two PDEs (3.15). Of course the existence of such a function satisfying the equations above is not a trivial issue. We will take advantage of the special form of the utility function to simplify the problem of finding it. We search for a value function of the form:
where the function is to be found. We consider the power utility case (the logarithmic utility will be treated separately). In the light of equations (3.15) one gets
By (3.14), the associated wealth process satisfies the following SDE:
with the final condition given at the horizon. Next we show that there exists a unique solution of this ODE system. Let us summarize these findings:
Appendix A proves this Lemma.
The following Theorem states the central result of our paper.
Appendix B proves this Theorem.
Next we turn to the special case of logarithmic utility.
3.3 Logarithmic Utility
When the risk aversion corresponds to logarithmic utility, we search for a value function of the following form:
Arguing as in the previous subsection we find that the functions should satisfy the following system of equations:
with the final conditions and Notice that solves a linear ODE system and it can be found explicitly. With known, also solves a linear ODE system. The subgame perfect strategy is given by
Next we want to explore the relationship between the subgame perfect consumption and the discount rate. The following lemma shows that the higher the discount rate, the higher the consumption.
Assume that and solves the linear ODE (3.22). Then for any In other words the subgame perfect consumption rate is higher in the states in which the discount rate is higher.
Appendix C proves this Lemma.
4 Numerical Analysis
In this section, we use Matlab’s ODE solvers (the functions ode23 and ode45) to perform numerical experiments. We numerically solve the ODE system (3.19), and this in turn yields the subgame perfect strategies. The market coefficients and the discount rates are fixed as follows. We take the Markov chain generator to be
Next, define the consumption rate by
with the functions being the solution of (3.19).
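The system (3.19) is not reproduced here, so the following sketch only illustrates the generic numerical pattern the section describes: a backward linear ODE system for per-regime functions coupled through the chain's generator, solved by time reversal with SciPy's solve_ivp as a stand-in for Matlab's ode45. The rates, generator, terminal condition and right-hand side below are illustrative assumptions, not the paper's system.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the paper's values): per-regime discount
# rates and a two-state generator with rows summing to zero.
rho = np.array([0.2, 0.4])
Q = np.array([[-0.5, 0.5],
              [ 1.0, -1.0]])
T = 2.0

def backward_rhs(t, h):
    # Schematic backward system: h'_i = rho_i h_i - 1 - (Q h)_i, h_i(T) = 1.
    return rho * h - 1.0 - Q @ h

# Reverse time: solve g(s) = h(T - s) forward in s, so g'(s) = -h'(T - s).
sol = solve_ivp(lambda s, g: -backward_rhs(T - s, g), (0.0, T),
                y0=np.ones(2), rtol=1e-8, atol=1e-10)
h0 = sol.y[:, -1]             # h(0) in the original time variable
consumption_rate = 1.0 / h0   # per-regime proportional consumption rate
```

In this schematic example the regime with the higher discount rate ends up with the smaller function value and hence the larger proportional consumption rate, the qualitative behavior reported in the figure below.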
Fig. Equilibrium proportion of wealth consumed for different parameter values. The horizontal axis represents time, and the vertical axis the consumption rate.
The consumption rate approaches its limiting value near the horizon, as was to be expected. From the picture we can see that the consumption rate may decrease with time. This is explained by the fact that a higher value of this parameter means less risk aversion, which leads to a higher proportion of wealth invested in the stock and less consumption. This is consistent with a graph from . We see from the graphs that the consumption rate increases when the discount rate increases (this is also consistent with graphs from ). For a given risk aversion level, a higher discount rate results in a higher consumption rate. The difference in the consumption rates in the two MC states is decreasing with respect to
5 Consumption versus Discount Rate
We were able to prove, in the case of logarithmic utility, that higher discount rates lead to higher consumption rates. Numerical evidence suggests that this is also true for general power utilities. Moreover, in a model with a constant discount rate the same result holds true, as the following lemma shows.
Let be the optimal consumption rate in a model with a constant discount rate. Then
Appendix D proves this Lemma.
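The monotonicity can be checked numerically against the classical closed form. In the Merton problem with logarithmic utility, constant discount rate and (as an assumption of this sketch) unit bequest weight, the optimal proportional consumption rate over the remaining horizon tau is rho / (1 - exp(-rho*tau) + rho*exp(-rho*tau)). The quick check below is our own illustration, not the paper's proof.

```python
import numpy as np

def merton_log_consumption_rate(rho, tau):
    """Optimal proportional consumption rate c/X in the Merton problem with
    log utility, constant discount rate rho, bequest weight 1, horizon tau."""
    e = np.exp(-rho * tau)
    return rho / (1.0 - e + rho * e)

# Compare two discount rates across a grid of remaining horizons.
taus = np.linspace(0.1, 5.0, 50)
rates_low = merton_log_consumption_rate(0.1, taus)
rates_high = merton_log_consumption_rate(0.5, taus)
```

For every remaining horizon in the grid the higher discount rate yields the higher consumption rate, matching the statement of the lemma. Note also the sanity checks: the rate equals 1 when rho = 1 and tau = 1, and tends to 1 as the remaining horizon shrinks to zero.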
6.1 Itô’s formula for Markov chain modulated diffusions
Suppose the stochastic process satisfies the SDE
for some and for each Then
Here is the martingale process associated with the Markov Chain.
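The display of the formula is missing here; the standard statement for a smooth function $f(t, x, i)$ applied to a Markov-modulated diffusion, which is the form the proofs below rely on, reads (our reconstruction, with $\mu$ and $\sigma$ the drift and diffusion coefficients of the SDE above):

```latex
df(t, X_t, \varepsilon_t)
  = \Bigl[ f_t + \mu\, f_x + \tfrac{1}{2}\sigma^2 f_{xx} \Bigr](t, X_t, \varepsilon_t)\,dt
  + \sigma f_x(t, X_t, \varepsilon_t)\,dW_t
  + \sum_{j} q_{\varepsilon_t\, j}\,\bigl[ f(t, X_t, j) - f(t, X_t, \varepsilon_t) \bigr]\,dt
  + dM_t ,
```

where the sum is the regime-switching correction driven by the generator entries $q_{ij}$, and $M$ is the purely discontinuous martingale associated with the chain.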
6.2 Appendix A: Proof of Lemma 3.2
The existence of a unique solution of the ODE system (3.19) is granted locally in time by a fixed point theorem. If we can establish a priori estimates, then this local solution is also a global solution. Let us introduce the following auxiliary process:
By product rule
Integrating (6.27) from to , we get
Taking expectations on both sides of (6.28) and passing to the limit, we obtain
The boundedness assumption on the market coefficients makes the process in (6.27) bounded. This in turn yields that the solution is uniformly bounded from below. Next we want to prove that it is also bounded from above. For a vector we introduce the norm:
for some positive constants. Consequently,
Thus, by Gronwall’s inequality it follows that
6.3 Appendix B: Proof of Theorem 3.3
We need the following Lemma which gives a PDE version of (3.13).
For a given state, assume there exists a function of class which satisfies (3.13). Then it solves the following equation
with the boundary condition
Proof: We rewrite equation (3.13) as
For a fixed time, the process is a martingale. Thus, in the light of (3.14) the function satisfies the following PDEs:
By differentiating (6.35) with respect to we get