
# On Investment-Consumption with Regime-Switching ††thanks: Work supported by NSERC grants 371653-09, 88051 and MITACS grants 5-26761, 30354 and the Natural Science Foundation of China (10901086).

Traian A. Pirvu
Dept of Mathematics & Statistics
McMaster University
1280 Main Street West
Hamilton, ON, L8S 4K1
tpirvu@math.mcmaster.ca
Huayue Zhang
Dept of Finance
Nankai University
Tianjin, China, 300071
hyzhang69@nankai.edu.cn

Abstract. In a continuous-time stochastic economy, this paper considers the problem of consumption and investment in a financial market in which the representative investor exhibits a change in the discount rate. The investment opportunities are a stock and a riskless account. The market coefficients and the discount factor switch according to a finite-state Markov chain. The change in the discount rate leads to time inconsistency of the investor's decisions. The randomness in our model is driven by a Brownian motion and a Markov chain. Following [3], we introduce and characterize the equilibrium policies for power utility functions. Moreover, they are computed in closed form for the logarithmic utility function. We show that a higher discount rate leads to a higher equilibrium consumption rate. Numerical experiments show the effect of both time preference and risk aversion on the equilibrium policies.

Key words: Portfolio optimization, time inconsistency, equilibrium policies, regime-switching discounting

JEL classification: G11

Mathematics Subject Classification (2000): 91B30, 60H30, 60G44

## 1 Introduction

Dynamic asset allocation in a stochastic paradigm has received a lot of scrutiny lately. The first papers in this area are [11] and [12]. Many works followed, most of them assuming an exponential discount rate. [3] gives an overview of the literature in the context of the Merton portfolio management problem with exponential discounting.

The issue of discounting has been the subject of many studies in financial economics. Several papers stepped away from exponential discounting and, based on empirical and experimental evidence, proposed different discount models. These can be organised in two classes: exogenous discount rates and endogenous discount rates. In the first class the best-known example is hyperbolic discounting, which discounts the near future more heavily than the distant future, in accordance with the experimental findings. [3] and [4] discuss this class of discounting.

The concept of endogenous time preference was developed by [8] in a discrete-time formulation. [17] considered the continuous-time version, which was later extended by [6]. This class of discount rates emerged in response to two observed phenomena: “decreasing marginal impatience” (DMI) and “increasing marginal impatience” (IMI). DMI means that the lower the level of consumption, the more heavily an agent discounts the future. IMI is just the opposite: the higher the level of consumption, the more heavily an agent discounts the future. Some papers support DMI, e.g. [1]; others advocate IMI, e.g. [15].

In this paper we consider a regime-switching model for the financial market. This modeling is consistent with some cyclicality observed in financial markets. Many papers considered these types of markets for pricing derivative securities. Here we recall only two such works, [7] and [5]. In [7], the author considers a stock price model which allows the drift and volatility coefficients to switch between two states. This market is incomplete, but it is completed with new securities. In [5], the problem of option pricing is considered in a model where the risky underlying assets are driven by Markov-modulated geometric Brownian motions. A regime-switching Esscher transform is used to find a martingale pricing measure. When it comes to optimal investment in regime-switching markets we point to [14], where the risk preference is allowed to switch according to the regime.

The discount rate in our paper is stochastic, exogenous and depends on the regime. To the best of our knowledge, this is the first work to consider stochastic discounting within the Merton problem framework. In a discrete-time model, [16] considers a cyclical discount factor.

Non-constant discount rates lead to time inconsistency of the decision maker, as shown in [3] and [4]. The resolution is to consider subgame perfect equilibrium strategies: strategies which are optimal to implement now given that they will be implemented in the future. After we introduce this concept, we characterize it. To achieve this goal, the methodology developed in [4] is employed. This is a new result in stochastic control theory: it mixes the idea of the value function (from the dynamic programming principle) with the idea that in the future “optimal trading strategies” are implemented (from the maximum principle of Pontryagin). The new twist in our paper is the Markov chain, and the mathematical ingredient used is Itô's formula for Markov-modulated diffusions. Thus, we obtain a system of four equations: the first equation says that the value function is equal to the continuation utility of subgame perfect strategies; the second equation is the wealth equation generated by subgame perfect strategies; the last two equations relate the value function to the subgame perfect strategies. The end result is a complicated system of PDEs, an SDE and a nonlinear equation with a nonlocal term. The investor's risk preference in this model is of CRRA type, which suggests an ansatz for the value function (it disentangles the time, the space and the Markov chain state components). This results in subgame perfect strategies which are time/state dependent and linear in wealth. In the special case of logarithmic utility we can compute them explicitly. If the discount rates are constant, we notice that the subgame perfect strategies coincide with the optimal ones.

The goal of this paper is twofold: first, to consider a model with stochastic discount rates and, second, to study the relationship between consumption and discount rates (which resembles IMI and/or DMI). In the Merton problem with constant discount rates we show that the higher the discount rate, the higher the consumption rate (this is somehow an inverse relationship to IMI). We explore this relationship in our model with stochastic discount rates. Numerical experiments reveal that the consumption rate is higher in the market states with the higher discount rate. We provide an analytic proof of this result for the special case of logarithmic utility. This result is consistent with the discount rate monotonicity of the consumption rate in the Merton problem. It can also explain the IMI effect: if we observe the consumption rate, then a possible upside jump can be linked to a jump in the discount rate, and vice versa. The effect of risk aversion on the consumption rate is also analysed. Here the results are consistent with [11] and [12]: the consumption rate is increasing in time for most levels of risk aversion, except at very high levels, where it is decreasing (this can be explained by the investor's increased appetite for risk, which leads to more investment in the risky asset and a decreasing consumption rate).

The remainder of this paper is organized as follows. In Section 2 we describe the model and formulate the objective. Section 3 contains the main result under power utility and logarithmic utility. In Section 4 we present the numerical results. Section 5 examines consumption versus discount rate. The paper ends with an appendix containing the proofs.

## 2 The Model

### 2.1 The Financial Market

Consider a probability space which accommodates a standard Brownian motion W = {W(t)} and a homogeneous finite-state continuous-time Markov chain (MC) J = {J(t)}. For simplicity assume that the MC takes values in S = {0, 1}; our results hold true in the more general situation of finitely many states. The filtration is the completed filtration generated by W and J. We assume that the stochastic processes W and J are independent. The MC has a generator [λij]i,j∈S with λij ≥ 0 for i ≠ j and ∑j∈S λij = 0 for every i ∈ S.

In our setup the financial market consists of a bank account B and a risky asset S, that are traded continuously over a finite time horizon [0, T] (here T is an exogenously given deterministic time). The price processes of the bank account and the risky asset are governed by the following Markov-modulated SDEs:

 dB(t)=r(t,J(t))B(t)dt, dS(t)=S(t)[α(t,J(t))dt+σ(t,J(t))dW(t)],0≤t≤T,

where B(0) and S(0) are the initial prices. The functions r(t,i), σ(t,i) and α(t,i) are assumed to be deterministic, positive and continuous in t. Given the state i of the MC at time t, they represent the riskless rate, the stock volatility and the stock return. Moreover

 μ(t,i)≜α(t,i)−r(t,i)

stands for the stock excess return.
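For intuition, the regime-switching dynamics above are easy to simulate. The sketch below (in Python, rather than the Matlab used later in Section 4) couples a first-order simulation of the two-state MC with an Euler-Maruyama step for S; the coefficient values are illustrative assumptions, except the generator, which is the one used in Section 4.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state coefficients (illustration only);
# the generator is the one used in Section 4.
alpha = np.array([0.07, 0.12])   # stock return per MC state
sigma = np.array([0.2, 0.3])     # volatility per MC state
lam = np.array([[-6.04, 6.04], [10.9, -10.9]])

T, n = 1.0, 10_000
dt = T / n
J = np.empty(n + 1, dtype=int); J[0] = 0
S = np.empty(n + 1); S[0] = 100.0

for k in range(n):
    i = J[k]
    # first-order switching rule: leave state i with probability ~ -lam_ii * dt
    J[k + 1] = 1 - i if rng.random() < -lam[i, i] * dt else i
    # Euler-Maruyama step for dS = S(alpha dt + sigma dW)
    dW = rng.normal(0.0, np.sqrt(dt))
    S[k + 1] = S[k] * (1.0 + alpha[i] * dt + sigma[i] * dW)
```

The first-order switching rule is accurate only when λ·dt is small; on coarse grids one would instead sample the exponential holding times of the chain directly.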

### 2.2 Investment-consumption strategies and wealth processes

In our model, a representative investor continuously invests in the stock and the bond, and consumes. An acceptable investment-consumption strategy is defined below:

###### Definition 2.1.

An ℝ²-valued stochastic process (π, c) = {(π(t), c(t))} is called an admissible strategy process, and we write (π, c) ∈ A, if it is progressively measurable and it satisfies the following integrability condition

 ∫t0|π(s)μ(s,J(s))−c(s)|ds+∫t0|π(s)σ(s,J(s))|2ds<∞, a.s., for all t∈[0,T]. (2.1)

Here π(t) stands for the dollar value invested in the stock at time t and c(t) for the consumption rate. X(t) represents the wealth of the investor at time t associated with the trading strategy (π, c); it satisfies the following stochastic differential equation (SDE)

 dX(t) = (r(t,J(t))X(t)+μ(t,J(t))π(t)−c(t))dt+σ(t,J(t))π(t)dW(t), (2.2)

where X(0) = x > 0 is the initial wealth and J(0) = i is the initial state. This SDE is called the self-financing condition. Under the regularity condition (2.1) imposed above, the SDE (2.2) admits a unique strong solution. At the end of this section, we introduce further assumptions.
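As a quick illustration of the self-financing equation (2.2), one can simulate the wealth process under a simple proportional strategy π(t) = kX(t), c(t) = qX(t); the market coefficients and the constants k, q below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

r = np.array([0.03, 0.05])       # riskless rate per MC state (assumed)
mu = np.array([0.04, 0.07])      # excess return per MC state (assumed)
sigma = np.array([0.2, 0.3])     # volatility per MC state (assumed)
lam = np.array([[-6.04, 6.04], [10.9, -10.9]])
k, q = 0.5, 0.1                  # proportional investment / consumption (assumed)

T, n = 1.0, 5_000
dt = T / n
J, X = 0, [1.0]                  # initial state and wealth
for _ in range(n):
    pi, c = k * X[-1], q * X[-1]
    dW = rng.normal(0.0, np.sqrt(dt))
    # Euler step for dX = (r X + mu pi - c) dt + sigma pi dW
    X.append(X[-1] + (r[J] * X[-1] + mu[J] * pi - c) * dt + sigma[J] * pi * dW)
    if rng.random() < -lam[J, J] * dt:   # regime switch
        J = 1 - J
X = np.array(X)
```

Proportional strategies of exactly this linear-in-wealth form turn out to be the equilibrium ones in Section 3.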

###### Assumption 2.2.

The utility function of the investor is of CRRA type, i.e.,

 U(x)=x^γ/γ,

where γ ∈ (−∞, 1)∖{0} is the coefficient of risk aversion.

Thus, the inverse marginal utility function is

 I(x)≜(U′)−1(x)=x^{1/(γ−1)}. (2.3)
###### Assumption 2.3.

For any risk aversion level γ and any admissible strategy (π, c) ∈ A, the following inequalities hold

 E supt∈[0,T]|X(t)|^γ<∞. (2.4)
 E supt∈[0,T]|c(t)|^γ<∞. (2.5)

### 2.3 The discount rate

As we mentioned in the introduction, this paper considers stochastic discount rates. An easy way to achieve this is to let the discount rate depend on the state of the MC. Thus, at some intermediate time t the discount rate is ρJ(t), for some positive constants ρ0 and ρ1. The intuition behind this way of modeling the discount rate stems from the connection between the market state and the discount rate (this can be explained by models with endogenous discount rates, which may be influenced by economic factors).

### 2.4 The Risk Criterion

In our model, the investor decides what investment/consumption strategy to choose according to the expected utility risk criterion. Thus, the investor's goal is to maximize the utility of intertemporal consumption and final wealth. The novelty here is that we allow the investor to update the risk criterion and to reconsider the optimal strategies she/he computed in the past. This leads to time-inconsistent behavior, as we show below. Let the agent start with a given positive wealth x and a given market state i at some instant t. The optimal trading strategy is chosen such that

 supu∈AE[∫Tte−ρi(u−t)U(c(u))du+e−ρi(T−t)U(X(T))|X(t)=x,J(t)=i] =Ex,it[∫Tte−ρi(u−t)U(~ct(u))du+e−ρi(T−t)U(~X(T))].

Throughout the paper we denote Ex,it[⋅] ≜ E[⋅ | X(t)=x, J(t)=i]. The optimal trading strategy is derived by the supermartingale/martingale principle. This leads to the Hamilton-Jacobi-Bellman (HJB) equation

 ∂V∂s(t,s,x,i)+supπ,c[(rx+μπ−c)∂V∂x(t,s,x,i)+σ2π2/2⋅∂2V∂x2(t,s,x,i)+U(c)] (2.6)
 −ρiV(t,s,x,i)+∑j∈SλijV(t,s,x,j)=0,

with the boundary condition

 V(t,T,x,i)=U(x). (2.7)

Here i stands for the value of the MC at time t. Thus, the HJB equation depends on the current time t (through the discounting), and this dependence is inherited by the optimal trading strategy. This in turn leads to time inconsistency. The resolution is to introduce subgame perfect equilibrium strategies. They are optimal now given that they will be implemented in the future.

## 3 The Main Result

### 3.1 The subgame perfect trading strategies

For a policy process (π, c) satisfying (2.1) and its corresponding wealth process X given by (2.2), we denote the expected utility functional by

 J(t,x,i,π,c)≜Ex,it[∫Tte−ρi(s−t)U(c(s))ds+e−ρi(T−t)U(X(T))]. (3.8)

Following [3] we shall give a rigorous mathematical formulation of the equilibrium strategies in the formal definition below.

###### Definition 3.1.

Let F = (F1, F2) be a map such that for any t ∈ [0,T], x > 0 and i ∈ S,

 liminfϵ↓0 [J(t,x,i,F1,F2)−J(t,x,i,πϵ,cϵ)]/ϵ ≥ 0, (3.9)

where

 J(t,x,i,F1,F2)≜J(t,x,i,¯π,¯c),
 ¯π(s)≜F1(s,¯X(s),J(s)),¯c(s)≜F2(s,¯X(s),J(s)), (3.10)

and (¯π, ¯c) satisfies (2.1). Here, the process ¯X is the wealth corresponding to (¯π, ¯c). The process (πϵ, cϵ) is another investment-consumption strategy defined by

 πϵ(s)={¯π(s), s∈[t,T]∖Eϵ,t;  π(s), s∈Eϵ,t, (3.11)
 cϵ(s)={¯c(s), s∈[t,T]∖Eϵ,t;  c(s), s∈Eϵ,t, (3.12)

where Eϵ,t = [t, t+ϵ] and (π, c) is any trading strategy for which (πϵ, cϵ) is an admissible policy. If (3.9) holds true, then (F1, F2) is a subgame perfect strategy.

### 3.2 The value function

Our goal is first to characterize the subgame perfect strategies and then to find them. Inspired by [3], the value function v satisfies

 v(t,x,i)=Ex,it[∫Tte−ρi(s−t)U(F2(s,¯X(s),J(s)))ds+e−ρi(T−t)U(¯X(T))]. (3.13)

Recall that ¯X is the wealth process corresponding to (F1, F2), so it solves the SDE

 d¯X(s)=[r(s,J(s))¯X(s)+μ(s,J(s))F1(s,¯X(s),J(s))−F2(s,¯X(s),J(s))]ds+σ(s,J(s))F1(s,¯X(s),J(s))dW(s). (3.14)

Moreover, (F1, F2) is given by

 F1(t,x,i)=−μ(t,i)∂v∂x(t,x,i)/[σ2(t,i)∂2v∂x2(t,x,i)],F2(t,x,i)=I(∂v∂x(t,x,i)),t∈[0,T]. (3.15)

Thus, the value function is characterized by a system of four equations: one integral equation with a nonlocal term (3.13), one SDE (3.14) and two PDEs (3.15). Of course, the existence of such a function satisfying the equations above is not a trivial issue. We take advantage of the special form of the utility function to simplify the problem of finding v. We search for v of the form:

 v(t,x,i)=g(t,i)x^γ/γ,  x≥0, (3.16)

where the function g is to be found. We consider the case γ ≠ 0 (the case of logarithmic utility will be treated separately). In the light of equations (3.15) one gets

 F1(t,x,i)=μ(t,i)x/[σ2(t,i)(1−γ)],  F2(t,x,i)=g^{1/(γ−1)}(t,i)x. (3.17)

By (3.14), the associated wealth process satisfies the following SDE:

 d¯X(s) = [r(s,J(s))+μ2(s,J(s))/(σ2(s,J(s))(1−γ))−g^{1/(γ−1)}(s,J(s))]¯X(s)ds (3.18) +μ(s,J(s))/(σ(s,J(s))(1−γ))¯X(s)dW(s).

This is a linear SDE which can be solved easily. By plugging v of (3.16) into (3.13) (with F1, F2 of (3.17) and ¯X of (3.18)), we obtain the following equation for g:

 ∂g∂t(t,i)+[γr(t,i)+γμ2(t,i)/(2σ2(t,i)(1−γ))−ρi]g(t,i)+∑j∈Sλijg(t,j)+(1−γ)g^{γ/(γ−1)}(t,i)=0, (3.19)

with the final condition g(T,i) = 1 for i ∈ S. Next we show that there exists a unique solution of this ODE system. Let us summarize these findings:

###### Lemma 3.2.

There exists a unique continuously differentiable solution g of the system (3.19). Furthermore, v of (3.16) is a value function, meaning that it solves (3.13) with F1, F2 of (3.17) and ¯X of (3.18).

Appendix A proves this Lemma.
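Since (3.19) is a two-dimensional terminal-value ODE system, g can be obtained by integrating backward from g(T, i) = 1 with any standard solver. The sketch below uses Python/SciPy in place of the paper's Matlab routines; every market coefficient is an assumed value for illustration (only the generator matches Section 4).

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 1.0
gamma = 0.5                      # risk aversion, gamma < 1, gamma != 0
r = np.array([0.03, 0.05])       # riskless rate per MC state (assumed)
mu = np.array([0.04, 0.07])      # excess return per MC state (assumed)
sigma = np.array([0.2, 0.3])     # volatility per MC state (assumed)
rho = np.array([0.1, 0.3])       # discount rates rho_0 < rho_1 (assumed)
lam = np.array([[-6.04, 6.04], [10.9, -10.9]])  # generator from Section 4

a = gamma * r + gamma * mu**2 / (2 * sigma**2 * (1 - gamma)) - rho

def rhs(t, g):
    # (3.19): g_t + a_i g_i + sum_j lam_ij g_j + (1-gamma) g_i^{gamma/(gamma-1)} = 0
    return -(a * g + lam @ g + (1 - gamma) * g ** (gamma / (gamma - 1)))

# integrate backward from the terminal condition g(T, i) = 1
sol = solve_ivp(rhs, [T, 0.0], np.ones(2), dense_output=True, rtol=1e-8)
g0 = sol.sol(0.0)                # g(0, 0), g(0, 1)
C0 = g0 ** (1 / (gamma - 1))     # equilibrium consumption rates c/x at t = 0
```

With these numbers ρ1 > ρ0, and the computed consumption rates come out higher in state 1, in line with the monotonicity discussed in Section 5.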

The following Theorem states the central result of our paper.

###### Theorem 3.3.

Suppose that v is given by (3.16), with g the solution of the system (3.19). Let ¯X be the solution of the SDE (3.18). Then (¯π, ¯c) given by

 ¯π(t)=μ(t,J(t))¯X(t)/[σ2(t,J(t))(1−γ)],  ¯c(t)=g^{1/(γ−1)}(t,J(t))¯X(t), (3.20)

is a subgame perfect strategy.

Appendix B proves this Theorem.

###### Remark 3.4.

In the case of a constant discount rate, i.e., ρ0 = ρ1, the subgame perfect strategies coincide with the optimal ones. This can be seen by looking at the integral equation (6.34), which is exactly the HJB equation (2.6) (after the first-order conditions are implemented).

Next we turn to the special case of logarithmic utility.

### 3.3 Logarithmic Utility

When the risk aversion γ = 0, that is, U(x) = log x, we search for a value function of the following form:

 v(t,x,i)=h(t,i)log(x)+l(t,i). (3.21)

Arguing as in the previous subsection, we find that the functions h and l should satisfy the following system of equations:

 ∂h∂t(t,i)−ρih(t,i)+∑j∈Sλijh(t,j)+1=0, (3.22)
 ∂l∂t(t,i)+(r(t,i)+μ2(t,i)/(2σ2(t,i)))h(t,i)−logh(t,i)−ρil(t,i)+∑j∈Sλijl(t,j)−1=0,

with the final conditions h(T,i) = 1 and l(T,i) = 0. Notice that h solves a linear ODE system and it can be found explicitly. With h known, l also solves a linear ODE system. The subgame perfect strategy is given by

 ¯π(t)=F1(t,¯X(t),J(t)), ¯c(t)=F2(t,¯X(t),J(t)), (3.23)

with

 F1(t,x,i)=μ(t,i)x/σ2(t,i),  F2(t,x,i)=h^{−1}(t,i)x. (3.24)

Next we want to explore the relationship between the subgame perfect consumption and the discount rate. The following lemma shows that the higher the discount rate, the higher the consumption.

###### Lemma 3.5.

Assume that ρ0 > ρ1 and that h solves the linear ODE system (3.22). Then h(t,0) < h(t,1) for any t ∈ [0,T). In other words, the subgame perfect consumption rate is higher in the states in which the discount rate is higher.

Appendix C proves this Lemma.
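Since (3.22) is linear, h admits an explicit matrix-exponential representation and Lemma 3.5 can be checked directly. In the sketch below the discount rates are assumed values with ρ0 > ρ1; the generator is the one used in Section 4.

```python
import numpy as np
from scipy.linalg import expm

rho = np.array([0.4, 0.1])                       # assumed, with rho_0 > rho_1
lam = np.array([[-6.04, 6.04], [10.9, -10.9]])   # generator from Section 4
T = 1.0

# (3.22) reads h'(t) = R h(t) - 1 with R = diag(rho) - lam and h(T) = 1;
# its explicit solution is h(t) = e^{-R(T-t)} 1 + R^{-1}(I - e^{-R(T-t)}) 1.
R = np.diag(rho) - lam
ones = np.ones(2)

def h(t):
    E = expm(-R * (T - t))
    return E @ ones + np.linalg.solve(R, (np.eye(2) - E) @ ones)

ts = np.linspace(0.0, 0.9 * T, 50)
hs = np.array([h(t) for t in ts])   # consumption rate in state i is 1/h(t, i)
```

Lemma 3.5 predicts h(t,0) < h(t,1) for t < T, i.e. a higher consumption rate 1/h in the high-discount state 0.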

## 4 Numerical Analysis

In this section, we use Matlab's ODE solvers (the functions ode23 and ode45) to perform numerical experiments. We numerically solve the ODE system (3.19), and this in turn yields the subgame perfect strategies. The market coefficients and the discount rates are kept fixed, and we take the Markov chain generator to be

 (−6.04  6.04; 10.9  −10.9).

Next, define the consumption rate by

 C(t,J(t)):=F2(t,X(t),J(t))/X(t)=g^{1/(γ−1)}(t,J(t)),

with the functions g(·,i), i ∈ S, being the solution of (3.19).

 Fig. 1: Equilibrium proportion of wealth consumed, for different values of γ and ρ. The horizontal axis represents time t and the vertical axis the consumption rate C.

###### Remark 4.1.

As t → T the consumption rate approaches 1, as was to be expected. From the picture with γ close to 1, we can see that the consumption rate decreases with time. This is explained by the fact that the higher γ, the lower the risk aversion, which leads to a higher proportion of wealth invested in the stock and a declining consumption rate. This is consistent with a graph from [11]. We see from the graphs that, at any fixed time, the consumption rate increases when γ increases (this is also consistent with graphs from [11]). For a given risk aversion level γ, a higher discount rate results in a higher consumption rate. The difference between the consumption rates in the two MC states decreases as well.

## 5 Consumption versus Discount Rate

We were able to prove, for the case of logarithmic utility, that higher discount rates lead to higher consumption rates. Numerical evidence suggests that this is also true for general power utilities. On the other hand, in a model with a constant discount rate the same result holds true, as the following Lemma shows.

###### Lemma 5.1.

Let C(t) be the optimal consumption rate in a model with a constant discount rate ρ. Then

 ∂C(t)/∂ρ>0,  t∈[0,T].

Appendix D proves this Lemma.
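In the single-regime case, the claim of Lemma 5.1 can be checked numerically by solving the scalar analogue of (3.19) (drop the Markov chain sum) for two discount rates and comparing C = g^{1/(γ−1)}. All parameter values below are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma, r, mu, sigma, T = 0.5, 0.03, 0.05, 0.25, 1.0
a = gamma * r + gamma * mu**2 / (2 * sigma**2 * (1 - gamma))

def g_of(rho):
    # scalar analogue of (3.19):
    # g' + (a - rho) g + (1 - gamma) g^{gamma/(gamma-1)} = 0, g(T) = 1
    rhs = lambda t, g: -((a - rho) * g + (1 - gamma) * g ** (gamma / (gamma - 1)))
    return solve_ivp(rhs, [T, 0.0], [1.0], dense_output=True, rtol=1e-10).sol

ts = np.linspace(0.0, 0.9 * T, 100)
C_low = g_of(0.05)(ts)[0] ** (1 / (gamma - 1))    # consumption rate, small rho
C_high = g_of(0.30)(ts)[0] ** (1 / (gamma - 1))   # consumption rate, large rho
```

Per Lemma 5.1, the consumption rate with the larger discount rate dominates pointwise before T.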

## 6 Appendix

### 6.1 Itô's formula for Markov chain modulated diffusions

Suppose the stochastic process X satisfies the SDE

 dX(t)=μ(t,X(t),J(t))dt+σ(t,X(t),J(t))dW(t)
 X(0)=x,a.s.,

for some suitable coefficient functions μ(t,x,i) and σ(t,x,i). Then, for every function G(t,x,i) which is continuously differentiable in t and twice continuously differentiable in x,

 G(t,X(t),J(t)) = G(0,x,J(0))+∫t0ΓG(s,X(s),J(s))ds +∫t0∂G∂x(s,X(s),J(s))σ(s,X(s),J(s))dW(s) +∑j∈S∫t0(G(s,X(s),j)−G(s,X(s),J(s−)))dMj(s),

where

 ΓG(t,x,i) = ∂G∂t(t,x,i)+μ(t,x,i)∂G∂x(t,x,i)+σ2(t,x,i)/2⋅∂2G∂x2(t,x,i)+∑j∈Sλij(G(t,x,j)−G(t,x,i)).

Here Mj, j ∈ S, denote the martingale processes associated with the Markov chain.

### 6.2 A Proof of Lemma 3.2

The existence of a unique solution of the ODE system (3.19) is granted locally in time by a fixed point theorem. If we can establish a priori bounds for g, then this local solution is also a global solution. Let us introduce the process M by

 M(v)≜g(v,Jv)−g(t,Jt)−∫vt(∂g∂u(u,Ju)+∑j∈SλJujg(u,j))du, (6.25)

with g of (3.19). Dynkin's formula implies that the process M defined by (6.25) is a martingale. Further, let

 K(v)≜exp{∫vt[γr(u,J(u))+γμ2(u,J(u))/(2σ2(u,J(u))(1−γ))−ρJ(u)]du}. (6.26)

By the product rule,

 d(K(v)g(v,Jv)) = K(v)[dg(v,Jv)+g(v,Jv)(γr(v,Jv)+γμ2(v,Jv)/(2σ2(v,Jv)(1−γ))−ρJv)dv] (6.27) = K(v)[dM(v)+(γ−1)g^{γ/(γ−1)}(v,Jv)dv].

Integrating (6.27) from t to T, we get

 K(T)−g(t,Jt)=∫TtK(v)dM(v)+(γ−1)∫TtK(v)g^{γ/(γ−1)}(v,Jv)dv. (6.28)

Taking the conditional expectation Eit on both sides of (6.28) leads to

 g(t,i)=EitK(T)+(1−γ)Eit∫TtK(v)g^{γ/(γ−1)}(v,Jv)dv. (6.29)

The boundedness assumption on the market coefficients makes the process K of (6.26) bounded. This in turn yields that g is uniformly bounded from below by a positive constant. Next we want to prove that g is also bounded from above. For a vector y = (y1,…,yn) in Rn we introduce the norm ∥·∥1 by:

 ∥y∥1=n∑i=1|yi|.

When the exponent γ/(γ−1) is negative, i.e., γ ∈ (0,1), (6.29) directly yields an upper bound on g (since g is bounded from below by a positive constant). When γ < 0, the exponent γ/(γ−1) lies in (0,1); because then g^{γ/(γ−1)} ≤ 1 + g, we get from (6.29) upper bounds on g(t,0) and g(t,1) in terms of ∫Tt||g(v,⋅)||1dv.

Thus,

 ||g(t,⋅)||1≤2¯K+(1−γ)¯K∫Tt||g(v,⋅)||1dv,

for some positive constant ¯K. Gronwall's inequality then yields an upper bound on ||g(t,⋅)||1. Next we prove uniqueness of the solution of (3.19). Indeed, let g1 and g2 be two solutions of (3.19); then by (6.29) it follows that

 |g1(t,i)−g2(t,i)| = (1−γ)|Eit∫TtK(v)g1^{γ/(γ−1)}(v,Jv)dv−Eit∫TtK(v)g2^{γ/(γ−1)}(v,Jv)dv| (6.32) ≤(1−γ)Eit∫TtK(v)|g1^{γ/(γ−1)}(v,Jv)−g2^{γ/(γ−1)}(v,Jv)|dv ≤C1Eit∫TtK(v)|g1(v,Jv)−g2(v,Jv)|dv ≤C2Eit∫Tt|g1(v,Jv)−g2(v,Jv)|dv ≤C2∫Tt(|g1(v,0)−g2(v,0)|+|g1(v,1)−g2(v,1)|)dv =C2∫Tt||g1(v,⋅)−g2(v,⋅)||1dv,

for some positive constants C1 and C2. Consequently,

 ||g1(t,⋅)−g2(t,⋅)||1≤C2∫Tt||g1(v,⋅)−g2(v,⋅)||1dv.

Thus, by Gronwall's inequality it follows that

 g1(t,⋅)=g2(t,⋅).

Next we want to prove that v of (3.16) is a value function, i.e., that it solves (3.13) with F1, F2 of (3.17) and ¯X of (3.18). Since the function g is continuously differentiable, we can compute the derivatives of v:

 vt(t,x,i)=gt(t,i)x^γ/γ, vx(t,x,i)=g(t,i)x^{γ−1}, vxx(t,x,i)=(γ−1)g(t,i)x^{γ−2}.

Next we show that v solves the PDE system (6.34) (which, by Lemma 6.1, is equivalent to (3.13)). Substituting the above derivatives into equation (6.34) yields

 ∂g∂t(t,i)x^γ/γ+[(r(t,i)x+μ(t,i)F1(t,x,i)−F2(t,x,i))g(t,i)x^{γ−1}+σ2(t,i)F21(t,x,i)/2⋅(γ−1)g(t,i)x^{γ−2} +U(F2(t,x,i))]+∑j∈Sλijg(t,j)x^γ/γ−ρig(t,i)x^γ/γ=0. (6.33)

After cancellation of the powers of x we recover the ODE system (3.19); hence v solves (6.34) (and also (3.13)).

### 6.3 B Proof of Theorem 3.3

We need the following Lemma which gives a PDE version of (3.13).

###### Lemma 6.1.

For a given pair (F1, F2), assume there exists a function v of class C1,2 which satisfies (3.13). Then v solves the following equation

 ∂v∂t(t,x,i)+(r(t,i)x+μ(t,i)F1(t,x,i)−F2(t,x,i))∂v∂x(t,x,i)+σ2(t,i)F21(t,x,i)/2⋅∂2v∂x2(t,x,i)
 +U(F2(t,x,i))+∑j∈Sλijv(t,x,j)−ρiv(t,x,i)=0, (6.34)

with the boundary condition v(T,x,i) = U(x).

Proof: We rewrite equation (3.13) as

 v(t,x,i)=∫Tte−ρi(s−t)f(t,s,x,i)ds+e−ρi(T−t)h(t,x,i), (6.35)

where

 f(t,s,x,i)≜Ex,it[U(F2(s,¯X(s),J(s)))],h(t,x,i)≜Ex,it[U(¯X(T))].

For a fixed time s, the processes f(u,s,¯X(u),J(u)) and h(u,¯X(u),J(u)) are martingales in u. Thus, in the light of (3.14), the functions f and h satisfy the following PDEs:

 ∂f∂t(t,s,x,i)+(r(t,i)x+μ(t,i)F1(t,x,i)−F2(t,x,i))∂f∂x(t,s,x,i) (6.36) + σ2(t,i)F21(t,x,i)/2⋅∂2f∂x2(t,s,x,i)+∑j∈Sλijf(t,s,x,j)=0,f(t,t,x,i)=U(F2(t,x,i))
 ∂h∂t(t,x,i)+(r(t,i)x+μ(t,i)F1(t,x,i)−F2(t,x,i))∂h∂x(t,x,i) (6.37) + σ2(t,i)F21(t,x,i)/2⋅∂2h∂x2(t,x,i)+∑j∈Sλijh(t,x,j)=0,h(T,x,i)=U(x).

By differentiating (6.35) with respect to t, we get

 ∂v∂t(t,x,i)=∫Tte−ρi(s−t)∂f∂t(t,s,x,i)ds+e−ρi(T−t)∂h∂t(t,x,i)+ρiv(t,x,i)−f(t,t,x,i). (6.38)

Moreover

 ∂v∂x(t,x,i)=∫Tte−ρi(s−t)∂f∂x(t,s,x,i)ds+e−ρi(T−t)∂h∂x(t,x,i). (6.39)
 ∂2v∂x2(t,x,i)=∫Tte−ρi(s−t)∂2f∂x2(t,s,x,i)ds+e−ρi(T−t)∂2h∂x2(t,x,i). (6.40)

In the light of (6.36), (6.37), (6.38), (6.39) and (6.40) it follows that

 ∂v∂t(t,x,i)+(r(t,i)x+μ(t,i)F1(t,x,i)−F