# Dynamic Portfolio Optimization with Looping Contagion Risk

## Abstract

In this paper we consider a utility maximization problem with defaultable stocks and looping contagion risk. We assume that the default intensity of one company depends on the stock prices of itself and another company, and the default of the company induces an immediate drop in the stock price of the surviving company. We prove the value function is the unique continuous viscosity solution of the HJB equation. We also compare and analyse the statistical distributions of terminal wealth of log utility based on two optimal strategies, one using the full information of intensity process, the other a proxy constant intensity process. These two strategies may be considered respectively the active and passive optimal portfolio investment. Our simulation results show that, statistically, active portfolio investment is more volatile and performs either much better or much worse than the passive portfolio investment in extreme scenarios.

Keywords: dynamic portfolio optimization, looping contagion risk, HJB equation, continuous viscosity solution, statistical comparisons.

AMS MSC2010: 93E20, 90C39

## 1 Introduction

There has been extensive research in dynamic portfolio optimization and credit risk modelling, both in theory and applications (see Pham (2009), Brigo and Morini (2013), and references therein). Utility maximization with credit risk is one of the important research areas; the aim is to find the optimal value and optimal control in the presence of possible defaults of underlying securities or names. Early work includes Korn and Kraft (2003) using the firm value structural approach and Hou and Jin (2002) using the reduced form intensity approach. Defaults are caused by exogenous risk factors such as correlated Brownian motions, Ornstein-Uhlenbeck or CIR intensity processes. Bo et al. (2010) consider an infinite horizon portfolio optimization problem with a log utility, assume both the default risk premium and the default intensity are dependent on an external factor following a diffusion process, and show the pre-default value function can be reduced to a solution of a quasilinear parabolic PDE (partial differential equation). Capponi and Figueroa-Lopez (2011) assume a Markov regime switching model, derive the dynamics of the defaultable bond, and prove a verification theorem with applications to log and power utilities. Callegaro et al. (2012) consider a wealth allocation problem with several defaultable assets whose dynamics depend on a partially observed external factor process.

Contagion risk, or endogenous risk, has grown into a major topic of interest as it is clear that the conventional dependence modelling of assets using a covariance matrix cannot capture sudden market co-movements. The failure of one company has direct impacts on the performance of other related companies. For example, during the global financial crisis of 2007-2008, the default of Lehman Brothers led to sharp falls in the stock prices of other investment banks. Since defaults are rare events, one may have to rely on the market information of other companies to infer the default probability of one specific company. For example, one can often observe in financial market data that the stock price of one company has negative correlation with the CDS (credit default swap) spread (a proxy of default probability) of another company. A commonly used contagion risk model is the interacting intensity model (see Jarrow and Yu (2001)), in which the default intensity of one name jumps whenever there are defaults of other names in a portfolio. Contagion risk has great impact on pricing and hedging portfolio credit derivatives (see Gu et al. (2013)).

There is limited research in the literature on dynamic portfolio optimization with contagion risk. Jiao and Pham (2011) consider a financial market with one stock which jumps downward at the default time of a counterparty that is not traded and not affected by the stock; for power utility, they solve the post-default problem by the convex duality method and prove a verification theorem for the pre-default value function. Jiao and Pham (2013) discuss multiple jumps and default events with exponential utility and establish the existence and uniqueness of the value function via a recursive system of BSDEs (backward stochastic differential equations). Bo and Capponi (2013) consider a market consisting of a risk-free bank account and a set of CDSs. The default of one name may trigger a jump in the default intensities of other names in the portfolio, which in turn leads to jumps in the market valuation of CDSs referencing the surviving names and affects the optimal trading strategies. They solve the problem with the DPP (dynamic programming principle) and prove a verification theorem. Capponi and Frei (2016) introduce an equity-credit portfolio with a market consisting of a risk-free bank account, stocks, and the same number of CDSs referencing these stocks. The default intensities of companies are functions of stock prices and some external factors. They derive a closed-form optimal investment strategy for log utility and develop a calibration algorithm.

In this paper we analyse the interaction of market and credit risks and its impact on dynamic portfolio optimization. We assume that there is one risk-free bank account, one non-defaultable stock and one defaultable stock in the market. The defaultable stock means the underlying company may default, and the value of its stock becomes zero when that happens. In theory, the non-defaultable stock may also default; here we implicitly assume the probability of that is negligible, and we simply call the non-defaultable stock the stock. The default time is the first jump time of a pure jump process driven by an intensity process that depends on the stock price as well as the defaultable stock price, and the stock price jumps at the time of default. We investigate a terminal wealth utility maximization problem with general utility functions under this looping contagion framework.

We have made two main contributions in the paper. Firstly, in addition to the verification theorem, we prove the pre-default value function is the unique continuous viscosity solution of the HJB equation. The proof is technical but the result is important as it lays a solid foundation for numerical schemes to find the value function, in contrast to the verification theorem that requires the existence of a classical solution to the HJB equation, which is in general difficult to prove. Secondly, we compare and analyse the statistical distributions of terminal wealth of log utility based on two optimal strategies, one using the full information of intensity process, the other a proxy constant intensity process. These two strategies may be considered respectively the active and passive optimal portfolio investment. Our simulation results show that, statistically, active portfolio investment is more volatile and performs either much better or much worse than the passive portfolio investment in extreme scenarios. In other words, using the full information of intensity process in optimal portfolio investment is likely to be riskier than not using it, not to mention there is no reliable mechanism in the market to extract such information. To the best of our knowledge, this is the first time these properties of the value function and the wealth distributions are established in the literature of utility maximization with looping contagion risk.

The rest of the paper is organized as follows. In Section 2 we introduce the market model and state the main results, including the continuity of the value function (Theorem 2.8), the verification theorem (Theorem 2.10), and the unique viscosity solution of the value function (Theorem 2.13). In Section 3 we derive the closed-form optimal control for log utility, and perform numerical tests and statistical distribution analysis. We also give a BSDE representation for the value function of power utility. In Section 4 we prove Theorems 2.8, 2.10, and 2.13. Section 5 concludes the paper.

## 2 Model Setting and Main Results

Assume that there are one stock, one defaultable stock and one risk-free bank account in the market. We denote the stock by $S^1$, the defaultable stock by $S^2$ and the risk-free bank account by $B$ with the risk-free interest rate $r$. Assume $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},P)$ is a complete filtered probability space satisfying the usual conditions. $\{\mathcal{G}_t\}_{t\ge 0}$ is an enlarged filtration given by $\mathcal{G}_t=\mathcal{F}_t\vee\mathcal{H}_t$. Let $\tau$ be a nonnegative random variable, defined on this probability space, representing the default time of the defaultable stock $S^2$. Then $\mathcal{H}_t$ is defined by $\mathcal{H}_t=\sigma(H_u,\ u\le t)$, where $H_t=\mathbf{1}_{\{\tau\le t\}}$, which equals 0 if $t<\tau$ and 1 otherwise. The default indicator process $H$ is associated with an intensity process $\lambda$. Let $W^1$ and $W^2$ be two standard Brownian motions with correlation $\rho$ and be independent of $H$. The market model is given by the following SDEs (stochastic differential equations):

$$
\begin{aligned}
dS^1_t &= S^1_{t-}\big(\mu_1\,dt + \sigma_1\,dW^1_t - \alpha\,dH_t\big),\\
dS^2_t &= S^2_{t-}\big(\mu_2\,dt + \sigma_2\,dW^2_t - dH_t\big),\\
dB_t &= rB_t\,dt,
\end{aligned}
$$

where $\mu_1$ and $\mu_2$ are growth rates of stocks $S^1$ and $S^2$, respectively, $\sigma_1$ and $\sigma_2$ are volatility rates, and $\alpha$ is the percentage loss of stock $S^1$ upon the default of stock $S^2$; all coefficients are constants. At the default time $\tau$ the defaultable stock price $S^2$ falls to zero and the stock price $S^1$ is reduced by a percentage of $\alpha$. The requirement $\alpha<1$ ensures the stock price $S^1$ does not fall to zero. $\alpha=1$ is the special case where the default of the defaultable stock $S^2$ triggers the default of $S^1$. As there is no technical difference in handling this special case, we only consider the general case, in which $S^1$ will not default, in this paper. In the paper we denote by $C$ a generic constant which may have different values at different places. We make the following assumption for the intensity process.

###### Assumption 2.1.

The intensity process $\lambda_t$ of the default indicator process $H$ can be represented by $\lambda_t=\lambda(S^1_t,S^2_t)$, a function of the stock prices $S^1_t$ and $S^2_t$, and $\lambda$ is bounded, non-increasing in $s_2$, monotone in $s_1$, and Lipschitz continuous in $s_1$ and $s_2$.
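The model above can be simulated directly. The following minimal Euler–Maruyama sketch approximates the default as a Bernoulli event with per-step probability $\lambda\,\Delta t$; all coefficient values and the particular bounded intensity function are hypothetical, chosen only for illustration and not taken from the paper's benchmark data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters -- for illustration only, not the paper's benchmark.
mu1, mu2 = 0.05, 0.07      # growth rates of S1 and S2
sig1, sig2 = 0.2, 0.3      # volatilities
rho = 0.3                  # Brownian correlation
alpha = 0.5                # percentage loss of S1 at default of S2
T, n = 1.0, 1000
dt = T / n

def intensity(s1, s2, lam_min=0.02, lam_max=0.5):
    """A bounded intensity, decreasing in the defaultable stock price s2
    (a hypothetical stand-in for the paper's lambda(s1, s2))."""
    return lam_min + (lam_max - lam_min) / (1.0 + 0.05 * s2)

s1, s2 = 100.0, 100.0
defaulted, tau = False, np.inf
for k in range(n):
    z1 = rng.standard_normal()
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal()
    if not defaulted:
        # Default arrives on this step with probability ~ lambda * dt.
        if rng.random() < intensity(s1, s2) * dt:
            defaulted, tau = True, k * dt
            s1 *= (1 - alpha)   # surviving stock drops by the fraction alpha
            s2 = 0.0            # defaultable stock falls to zero
            continue            # skip the diffusion update on the default step
        s2 *= 1 + mu2 * dt + sig2 * np.sqrt(dt) * z2
    s1 *= 1 + mu1 * dt + sig1 * np.sqrt(dt) * z1   # S1 keeps diffusing after default

print(f"S1(T)={s1:.2f}, S2(T)={s2:.2f}, default time={tau}")
```

The same loop, run over many paths, produces the sample paths and wealth distributions analysed in Section 3.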

Investors dynamically allocate proportions $\pi_1$ and $\pi_2$ of their total wealth into the stock $S^1$, the defaultable stock $S^2$ and the bank account $B$. The admissible control set $\mathcal{A}$ is the set of control processes $\pi=(\pi_1,\pi_2)$ that are progressively measurable with respect to the filtration $\{\mathcal{G}_t\}$ and satisfy $\pi_t\in\mathcal{K}$ for all $t$. The set $\mathcal{K}$ is defined by

$$\mathcal{K}=\{(\pi_1,\pi_2)\in\Theta : \alpha\pi_1+\pi_2\le 1-\delta\}$$

for some bounded set $\Theta$ and constant $\delta\in(0,1)$. The dynamic of the wealth process $X$ is given by

(2.1)

where $^\top$ denotes the transpose of a vector and

The matrix-valued process is adapted to the filtration $\{\mathcal{G}_t\}$ and plays the role of removing the defaultable stock $S^2$ after the default time $\tau$. Specifically, after time $\tau$, wealth dynamic (2.1) becomes

Even though the admissible control set is still $\mathcal{A}$ after the default time $\tau$, $\pi_2$ becomes irrelevant, or is a dummy control. The requirement $\alpha\pi_1+\pi_2\le 1-\delta$ ensures that when default occurs, the maximum percentage loss of the wealth does not exceed $1-\delta$; in other words, if $X_{\tau-}$ is the pre-default wealth, then the post-default wealth is at least $\delta X_{\tau-}$.

###### Remark 2.2.

###### Remark 2.3.

Our objective is to maximize the expected utility of the terminal wealth, that is,

###### Assumption 2.4.

The utility function $U$ is defined on $[0,\infty)$, is continuous, non-decreasing, concave, and satisfies $U(0)=0$ and $U(x)\le C(1+x^p)$ for all $x\ge 0$, where $C>0$ and $p\in(0,1)$ are constants.

###### Remark 2.5.

Power utility $U(x)=x^p/p$, $p\in(0,1)$, satisfies Assumption 2.4, but log utility $U(x)=\ln x$ does not. However, due to the special structure of log utility, the optimality and regularity of the value function can be directly verified.

###### Remark 2.6.

The key benefit of introducing the matrix-valued process in wealth dynamic (2.1) is that it keeps the feasible control set unchanged before and after the default time; otherwise we would have to deal with a control set whose dimensionality changes at a random time, which makes the meaning of the maximization problem unclear.

Due to the existence of the default event, the problem can be naturally split into a pre-default case and a post-default case. The latter is a standard utility maximization problem, as the stock $S^2$ disappears and the post-default value function is a function of time and wealth only, see Pham (2009). We will only discuss the pre-default case in this paper. The pre-default value function is defined by

for $t<T$. Note that if $\lambda$ is independent of $s_1$ and $s_2$, then the pre-default value function is a function of $t$ and $x$ only.

We have the following continuity result for the value function.

###### Theorem 2.8.

The pre-default value function is continuous in $(t,x,s_1,s_2)$.

It is clear that the natural boundary condition at $x=0$ is $U(0)$. The natural boundary condition at $s_1=0$ is the optimal value function in a market with two securities, the defaultable stock $S^2$ and the riskless bank account $B$. However, due to the possible lack of continuity at $s_1=0$, the natural boundary condition may be irrelevant. The same applies to the boundary condition at $s_2=0$. To compensate for this, we make the following assumption for the pre-default value function.

###### Assumption 2.9.

Denote the boundary of by , then . Assume exists for any .

This limit is a proper boundary condition for the pre-default value function at $s_1=0$ or $s_2=0$, and it equals the natural boundary condition if the value function is continuous at $s_1=0$ and $s_2=0$. Define

The modified pre-default value function is continuous by Theorem 2.8 and Assumption 2.9 and, by the dynamic programming principle, satisfies the following HJB equation

(2.3)

for $t<T$, with the terminal condition at $t=T$ given by the utility function, where $\mathcal{L}^{\pi}$ is the infinitesimal generator of the processes $X$, $S^1$ and $S^2$ with control $\pi$, given by

(2.4)

is the post-default value function, , , evaluated at , and other derivatives are similarly defined.

We next give a verification theorem for the pre-default value function.

###### Theorem 2.10.

Assume that the post-default value function , that solves (2.3) with the terminal condition and the boundary conditions , for , that satisfies a growth condition for , that the maximum of the Hamiltonian in (2.3) is achieved at in , and that SDE (2.1) admits a unique strong solution with control . Then coincides with the modified pre-default value function and is the optimal pre-default control process.

###### Remark 2.11.

For log utility $U(x)=\ln x$, the assumption is not satisfied. However, it is well known that the value function can be written as the sum of $\ln x$ and another function independent of $x$. Specifically, the pre-default value function has the form $\ln x + G(t,s_1,s_2)$, where $G$ is a solution of a linear PDE, see (3.2). If we assume $G$ is such a solution and is bounded, then one can show $\ln x + G(t,s_1,s_2)$ is indeed the pre-default value function, with the same proof as that of Theorem 2.10 except for one change: instead of using the polynomial growth condition, which does not hold for log utility, one uses the boundedness of $G$. Since

for , we have , which provides the required uniform integrability property in the proof.

The verification theorem assumes the existence of a classical solution of the HJB equation (2.3), which may not be true for the pre-default value function. We now assume instead that the post-default value function is a classical solution of the post-default HJB equation, and we show that the pre-default value function is the unique viscosity solution to (2.3).

To facilitate the discussion of viscosity solutions, define a function by

where the gradient vector and the Hessian matrix of the test function are taken with respect to the state variables, with all derivatives evaluated at the same point. The HJB equation (2.3) is the same as

(2.5)

###### Definition 2.12.

Let $u$ be a continuous function defined on the domain.

(i) $u$ is a viscosity subsolution of (2.5) if the subsolution inequality holds at every point $(\bar t,\bar z)$ for every smooth test function $\varphi$ such that $u(\bar t,\bar z)=\varphi(\bar t,\bar z)$ and $u\le\varphi$ on the domain.

(ii) $u$ is a viscosity supersolution of (2.5) if the supersolution inequality holds at every point $(\bar t,\bar z)$ for every smooth test function $\varphi$ such that $u(\bar t,\bar z)=\varphi(\bar t,\bar z)$ and $u\ge\varphi$ on the domain.

(iii) $u$ is a viscosity solution of (2.5) if it is both a viscosity subsolution and a viscosity supersolution of (2.5).

We have the following viscosity solution properties for the modified pre-default value function.

###### Theorem 2.13.

The modified pre-default value function is the unique continuous viscosity solution of (2.3) on , satisfying the growth condition for some constant , the terminal condition , and the boundary conditions , for .

## 3 Numerical Tests

In this section, we give an example with log utility and perform some numerical tests and statistical and sensitivity analysis.

### 3.1 Constrained optimal strategies for log utility

For log utility $U(x)=\ln x$, the post-default case is well known, with the optimal control $\pi_1^*=(\mu_1-r)/\sigma_1^2$ (and $\pi_2^*=0$) and the post-default value function $\ln x + \big(r+\frac{(\mu_1-r)^2}{2\sigma_1^2}\big)(T-t)$.
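For reference, this is the classical Merton result: after default the market contains only the stock and the bank account, so with $\mu_1$, $\sigma_1$ and $r$ denoting the surviving stock's drift and volatility and the interest rate, the computation can be sketched as

```latex
\mathbb{E}[\ln X_T]
  = \ln x + \mathbb{E}\!\int_t^T \Big( r + \pi_s(\mu_1 - r)
      - \tfrac{1}{2}\pi_s^2 \sigma_1^2 \Big)\, ds ,
\qquad
\pi^* = \frac{\mu_1 - r}{\sigma_1^2},
\qquad
\hat{V}(t,x) = \ln x + \Big( r + \frac{(\mu_1 - r)^2}{2\sigma_1^2} \Big)(T - t),
```

where the middle expression follows by maximizing the quadratic integrand pointwise in $\pi$, and the value function follows by substituting $\pi^*$ back into the integral.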

We conjecture that the pre-default value function takes the form

$$V(t,x,s_1,s_2) = \ln x + G(t,s_1,s_2). \qquad (3.1)$$

Substituting (3.1) into (2.3), we get a linear PDE for $G$:

(3.2)

with the terminal condition $G(T,s_1,s_2)=0$, where the source term is defined by

###### Remark 3.1.

If we assume further that $\lambda$ is Lipschitz continuous in both $\ln s_1$ and $\ln s_2$, then there exists a unique bounded solution to the PDE (3.2). This can be shown as follows. Let $u=\ln s_1$ and $v=\ln s_2$; then PDE (3.2) can be written equivalently as

(3.3)

with terminal condition $\tilde G(T,u,v)=0$, where $\tilde G(t,u,v)=G(t,e^u,e^v)$. By Assumption 2.1 and the assumed Lipschitz continuity, the coefficients of (3.3) are bounded and Lipschitz continuous in $u$ and $v$, which implies that (3.3) has a unique bounded classical solution $\tilde G$, see Lunardi (1998).

Assume the control constraint set $\Theta$ is given by

where the bounds are chosen such that $\alpha\pi_1+\pi_2\le 1-\delta$ for all $(\pi_1,\pi_2)\in\Theta$. We need to solve a constrained optimization problem:

Since $\Theta$ is compact and the objective function is continuous, there exists an optimal solution which satisfies the Kuhn-Tucker optimality condition

(3.4)

and the complementary slackness condition

(3.5)

where the coefficients are Lagrange multipliers. Since $\pi_1$ can take a value either in the interior of its interval or at one of the two endpoints, and the same applies to $\pi_2$, we have nine possible combinations.

If both $\pi_1$ and $\pi_2$ are interior points, then all the Lagrange multipliers in (3.5) vanish. Solving (3.4), we get the optimal control as

(3.6)

where $\mathrm{sgn}$ is the sign function, which equals 1 for nonnegative arguments and $-1$ otherwise, the remaining coefficients are some constants (see Appendix), and the last term is given by

We can discuss the other cases one by one. For example, if $\pi_1$ is at an endpoint of its interval and $\pi_2$ is interior, then the corresponding multiplier conditions together with (3.4) determine the candidate solutions. If the solutions do not satisfy the required bounds, then this case is impossible. See the Appendix for detailed discussions of all cases.
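As a numerical cross-check of the case-by-case analysis, one can also maximize the pointwise objective directly under the box and loss constraints. The sketch below uses a plausible reconstruction of the log-utility pointwise objective (drift minus variance penalty plus the jump term $\lambda\ln(1-\alpha\pi_1-\pi_2)$); it is our reconstruction, not the paper's exact expression, and all parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical parameters -- for illustration only.
mu1, mu2, r = 0.05, 0.07, 0.02
sig1, sig2, rho = 0.2, 0.3, 0.3
alpha, delta = 0.5, 0.05
lam = 0.1   # current intensity value

def neg_hamiltonian(pi):
    """Negative of the pointwise log-utility objective (reconstructed form)."""
    p1, p2 = pi
    loss = 1.0 - alpha * p1 - p2          # wealth factor retained at default
    if loss <= 1e-9:
        return 1e6                        # outside the domain of the log term
    drift = p1 * (mu1 - r) + p2 * (mu2 - r)
    var = 0.5 * (p1**2 * sig1**2 + 2 * rho * sig1 * sig2 * p1 * p2
                 + p2**2 * sig2**2)
    return -(drift - var + lam * np.log(loss))

bounds = [(-1.0, 1.0), (-1.0, 1.0)]       # hypothetical box constraints Theta
cons = [{"type": "ineq",
         "fun": lambda pi: 1.0 - delta - alpha * pi[0] - pi[1]}]
res = minimize(neg_hamiltonian, x0=[0.0, 0.0],
               bounds=bounds, constraints=cons, method="SLSQP")
print(res.x)
```

A numerical optimizer of this kind is useful for verifying which of the nine Kuhn-Tucker cases is active at given parameter values.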

### 3.2 Performance comparison of state-dependent and constant intensities

We now do some numerical tests and statistical analysis. The data used for the benchmark case are the following:

In this numerical test, we simply assume the intensity function is a function of the stock prices only. The intensity function is given by

(3.7)

with a minimum intensity, a maximum intensity, and a level parameter. The level parameter controls the initial intensity, and the weights control the sensitivity of the intensity to the stock prices $S^1$ and $S^2$. We set the level parameter such that the initial intensity is 0.1. Additionally, we assume the control set parameters are the following:
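The exact form of (3.7) is not reproduced here; a bounded logistic family with the same ingredients (minimum and maximum intensity levels, a level parameter, and price weights — all names and default values hypothetical) can be written as follows.

```python
import math

def intensity(s1, s2, lam_min=0.01, lam_max=1.0, beta0=0.0,
              w1=1.0, w2=1.0, s1_0=100.0, s2_0=100.0):
    """A bounded intensity in [lam_min, lam_max], increasing as prices fall.

    Illustrative stand-in for (3.7), whose exact form is not shown here;
    all parameter names and defaults are hypothetical.
    """
    # Logistic link: beta0 shifts the initial level; w1, w2 weight the
    # log-price moves of S1 and S2 relative to their initial values.
    z = beta0 + w1 * math.log(s1 / s1_0) + w2 * math.log(s2 / s2_0)
    return lam_min + (lam_max - lam_min) / (1.0 + math.exp(z))
```

With this particular family, choosing `beta0 = log(10)` makes `intensity(100, 100)` equal the benchmark initial level 0.1; lowering either stock price raises the intensity, as described above.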

Figure 1 shows sample paths of stock prices, default intensities, and optimal wealth under two different trading strategies. The left panel shows stock price paths of $S^1$ and $S^2$; at the time of default, the stock price $S^2$ drops to zero while the stock price $S^1$ jumps down and then continues. The middle panel shows the default intensity process, which is a function of the stock prices and becomes zero after default, together with a constant default intensity process (value equal to 0.1). The right panel shows the sample wealth paths when the optimal control strategies used are based on the state-dependent intensity and on the constant intensity. The wealth path with state-dependent intensity jumps upwards compared with the wealth path with constant intensity, and afterwards the two wealth paths move in the same pattern. This is not surprising: at the time of default, the strategy with state-dependent intensity short sells more of stock $S^2$ and invests less in stock $S^1$ than the strategy with constant intensity, which means the gain is larger (due to the default of stock $S^2$) and the loss is smaller (due to the fall of stock price $S^1$). Of course, the opposite happens when the intensity at the time of default is smaller than the constant intensity.

Figure 2 shows the optimal controls as functions of the intensity value and the statistical distributions of the wealth at time $T$. The left panel shows the proportions of wealth invested in stocks $S^1$ and $S^2$. It is clear that as the intensity increases, the investments in stocks $S^1$ and $S^2$ both decrease and the investment in the savings account increases. This is intuitively expected: if the default probability of stock $S^2$ increases, then one would reduce the holdings of stocks $S^1$ and $S^2$ to reduce the risk of loss in case stock $S^2$ indeed defaults. The optimal investment strategy in stock $S^2$ is short selling except when the default intensity is small, which indicates the investor views the stock as likely to default. Due to the short selling constraint, the optimal control equals the minimum value when the intensity is sufficiently large (about 0.25). We simulate 10000 paths of both stock prices $S^1$ and $S^2$, using the state-dependent default intensity. Among all these paths, about 1/4 (precisely 2475 paths) contain defaults of $S^2$. The terminal wealth is generated by two strategies: one is the optimal strategy based on the full information of the intensity, the other is the optimal strategy based on the constant intensity 0.1. The middle and right panels show the histograms of the terminal wealth of these two strategies when defaults indeed happen and do not happen, respectively. It is clear that the distributions are similar but with some marked differences in the tails; that is, the probabilities of great over-performance and great under-performance are much higher with the state-dependent optimal strategies. These histograms seem to indicate, for log utility, that state-dependent optimal strategies are more volatile, or riskier, than constant intensity (a proxy of the true intensity) optimal strategies.
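A path-by-path experiment of this kind needs to sample the default time from the state-dependent intensity. One standard method is the exponential-clock (compensator) construction: default occurs when the cumulative hazard first exceeds an independent unit-exponential draw. A sketch, sanity-checked against a constant intensity (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 1.0, 1000
dt = T / n

def default_time(lam_path, dt, rng):
    """First time the cumulative hazard exceeds an independent Exp(1) clock.

    lam_path: intensity values along one simulated path.
    Returns np.inf if no default occurs before the horizon.
    """
    e = rng.exponential(1.0)
    hazard = np.cumsum(lam_path) * dt          # integral of lambda up to each grid point
    hit = np.searchsorted(hazard, e)           # first index where hazard >= e
    return hit * dt if hit < len(lam_path) else np.inf

# Sanity check with a constant intensity 0.1: the default probability over
# [0, T] should be close to 1 - exp(-0.1) ~ 0.095.
taus = np.array([default_time(np.full(n, 0.1), dt, rng) for _ in range(20_000)])
frac = np.mean(taus < np.inf)
```

In the actual experiment the intensity path would come from the simulated stock prices rather than being constant.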

| | mean | std dev | 2.3% quantile | 97.7% quantile |
| --- | --- | --- | --- | --- |
| All samples | 119.85 | 22.09 | 83.82 | 173.56 |
| All samples, constant | 118.72 | 17.72 | 87.18 | 159.86 |
| Default | 119.04 | 27.80 | 74.70 | 185.57 |
| Default, constant | 119.12 | 25.19 | 76.53 | 176.61 |
| No-default | 120.12 | 19.86 | 88.11 | 169.54 |
| No-default, constant | 118.59 | 14.44 | 92.68 | 150.61 |

Table 1 contains sample means, sample standard deviations, and quantile values at the low end (2.3%) and the high end (97.7%) for both the state-dependent intensity and the constant intensity. The overall sample mean with state-dependent optimal strategies is (slightly) higher than with constant intensity optimal strategies, which is expected as the former is the genuine optimal control. However, the sample standard deviation with state-dependent optimal strategies is also higher, with a lower 2.3% quantile (more loss) and a higher 97.7% quantile (more gain), which implies the state-dependent optimal strategies can be volatile and risky, while the constant intensity optimal strategies are more conservative.
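Summary statistics of this kind can be computed from simulated terminal wealth samples along the following lines (the 2.3% and 97.7% levels correspond to the two-sigma tails of a normal distribution); the synthetic samples below are purely illustrative, not the paper's simulation output.

```python
import numpy as np

def summarize(wealth):
    """Sample mean, standard deviation and two-sigma quantiles of terminal wealth."""
    w = np.asarray(wealth, dtype=float)
    return {
        "mean": w.mean(),
        "std dev": w.std(ddof=1),               # sample standard deviation
        "2.3% quantile": np.quantile(w, 0.023),
        "97.7% quantile": np.quantile(w, 0.977),
    }

# Illustrative synthetic terminal wealth samples (hypothetical distribution).
rng = np.random.default_rng(1)
stats = summarize(rng.normal(120.0, 22.0, size=10_000))
```

Splitting the sample paths into default and no-default subsets before calling `summarize` reproduces the row structure of Table 1.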

### 3.3 Robustness and sensitivity of model parameters

Assume the intensity function is given by (3.7) and the stock prices $S^1$ and $S^2$ are generated based on it. It may be difficult to calibrate parameters accurately even if one knows the exact form of the intensity function. We perform some robustness tests for the parameters; that is, we compare the optimal performances of two investors, one using the benchmark parameter values and the other incorrectly estimated values. We change one parameter only in each test while keeping all other parameters fixed at their benchmark values.

| | mean | std dev | 2.3% quantile | 97.7% quantile |
| --- | --- | --- | --- | --- |
| benchmark | 119.85 | 22.09 | 83.82 | 173.56 |
| | 119.87 (0.02%) | 23.03 (4.26%) | 82.92 (-1.07%) | 176.19 (1.52%) |
| | 119.78 (-0.06%) | 21.81 (-1.27%) | 83.38 (-0.52%) | 172.88 (-0.39%) |
| (constant) | 118.72 (-0.94%) | 17.72 (-19.78%) | 87.18 (4.01%) | 159.86 (-7.89%) |
| | 119.25 (-0.50%) | 19.03 (-13.85%) | 86.48 (3.17%) | 163.48 (-5.81%) |
| | 120.21 (0.30%) | 24.63 (11.50%) | 79.97 (-4.59%) | 179.86 (3.63%) |
| | 119.78 (-0.06%) | 22.31 (1.00%) | 83.33 (-0.58%) | 175.64 (1.20%) |
| | 119.65 (-0.17%) | 20.61 (-6.70%) | 83.62 (-0.24%) | 165.92 (-4.40%) |
| | 119.36 (-0.41%) | 27.83 (25.98%) | 71.40 (-14.82%) | 181.14 (4.37%) |
| | 119.99 (0.12%) | 28.71 (29.97%) | 80.80 (-3.60%) | 198.15 (14.17%) |

Table 2 shows that the sample means are essentially the same over a broad range of model parameters; percentage changes over the benchmark values are listed in parentheses. The main difference is in the sample standard deviations. The performance of state-dependent intensity strategies is robust to some parameters, including the weights and the minimum and maximum intensity levels: changes of these parameters do not greatly change the sample standard deviations or the quantile values at the low and high ends. On the other hand, it seems important to have correct estimates of the level parameters to avoid large changes of the standard deviation. For example, if one underestimates the initial default intensity, then the sample standard deviation is greatly increased, with a large loss at the low end quantile value.

Next we do some sensitivity tests to see the impact of changes of model parameters on the distribution of the optimal terminal wealth, including the drifts, volatilities, correlation, and percentage loss. We change the drift and volatility parameters by 20% of their benchmark values, and the correlation and percentage loss parameters by some large deviations.

| | mean | std dev | 2.3% quantile | 97.7% quantile |
| --- | --- | --- | --- | --- |
| benchmark | 119.85 | 22.09 | 83.82 | 173.56 |
| | 121.31 (1.22%) | 32.91 (48.98%) | 69.84 (-16.68%) | 202.60 (16.73%) |
| | 118.05 (-1.50%) | 15.27 (-30.87%) | 94.62 (12.88%) | 158.47 (-8.69%) |
| | 119.35 (-0.42%) | 23.76 (7.56%) | 79.21 (-5.50%) | 180.05 (3.74%) |
| | 120.36 (0.43%) | 27.55 (24.72%) | 76.54 (-8.69%) | 187.14 (7.82%) |
| | 119.12 (-0.61%) | 18.00 (-18.52%) | 89.34 (6.59%) | 163.68 (-5.69%) |
| | 120.74 (0.74%) | 28.57 (29.33%) | 76.59 (-8.63%) | 191.12 (10.12%) |
| | 119.65 (-0.17%) | 20.90 (-5.39%) | 85.08 (1.50%) | 169.49 (-2.35%) |
| | 120.12 (0.23%) | 24.10 (9.10%) | 81.39 (-2.90%) | 180.43 (3.96%) |
| | 120.35 (0.42%) | 22.11 (0.09%) | 81.52 (-2.74%) | 173.36 (-0.12%) |
| | 120.24 (0.33%) | 22.18 (0.41%) | 83.51 (-0.37%) | 174.79 (0.71%) |
| | 124.39 (3.79%) | 32.68 (47.94%) | 70.91 (-15.40%) | 202.07 (16.43%) |
| | 121.18 (1.11%) | 27.38 (23.81%) | 82.64 (-1.41%) | 189.91 (9.42%) |

Table 3 lists the statistical results of distributional sensitivity to changes of parameters. It is clear that the sample means are essentially the same for all parameters, but the sample standard deviations are very sensitive to changes of the drift, volatility and percentage loss parameters, which would significantly affect the overall distributions of the optimal terminal wealth. This requires one to have good estimates of these parameters in order to obtain correct distributions. It is well known that it is relatively easy to estimate volatility but very difficult to estimate drift (see Rogers (2013)), and information on the percentage loss is rarely available. Since the optimal trading strategies and the optimal wealth distributions are greatly influenced by these parameters, which are difficult to estimate correctly, one needs to be cautious in using state-dependent intensities to model and solve optimal investment problems. Using sub-optimal but conservative and robust trading strategies, instead of optimal ones based on unobservable parameters and intensities, might be more sensible and less risky.

###### Remark 3.2.

For power utility $U(x)=x^p/p$, $p\in(0,1)$, the post-default case is well known, with the optimal control $\pi_1^*=\frac{\mu_1-r}{(1-p)\sigma_1^2}$ (and $\pi_2^*=0$) and the post-default value function $\frac{x^p}{p}\exp\Big(p\Big(r+\frac{(\mu_1-r)^2}{2(1-p)\sigma_1^2}\Big)(T-t)\Big)$.

We conjecture that the pre-default value function takes the form

(3.8)

(3.9)

with the terminal condition , where

The optimal control can be derived from the following equation, provided the solution is in the constraint set:

However, unlike the log utility case, there is no closed-form solution for the optimal control. We can nevertheless give a BSDE representation of the solution. Assume the following BSDE has a solution:

with terminal condition and satisfies SDE