Quantifying the Model Risk Inherent in the Calibration and Recalibration of Option Pricing Models
We focus on two particular aspects of model risk: the inability of a chosen model to fit observed market prices at a given point in time (calibration error) and the model risk due to recalibration of model parameters (in contradiction to the model assumptions). In this context, we follow the approach of \citeasnounglasserman2014robust and use relative entropy as a pre-metric in order to quantify these two sources of model risk in a common framework, and consider the trade–offs between them when choosing a model and the frequency with which to recalibrate to the market. We illustrate this approach applied to the models of \citeasnounOZ:Bla&Sch:73 and \citeasnounOZ:Heston:93, using option data for Apple (AAPL) and Google (GOOG). We find that recalibrating a model more frequently simply shifts model risk from one type to another, without any substantial reduction of aggregate model risk. Furthermore, moving to a more complicated stochastic model is seen to be counterproductive if one requires a high degree of robustness, for example as quantified by a 99% quantile of aggregate model risk.
Yu Feng, Ralph Rudd, Christopher Baker, Qaphela Mashalaba, Melusi Mavuso and Erik Schlögl (corresponding author: Erik.Schlogl@uts.edu.au)
The renowned statistician George E. P. Box wrote that “essentially, all models are wrong, but some are useful.”111See \citeasnounOZ:Box:87. This is certainly true in finance, where many models and techniques that have been extensively empirically invalidated remain in widespread use, not just in academia, but also (perhaps especially) among practitioners. At times, the way models are used directly contradicts the model assumptions: As observed market prices change, parameters in option pricing models, which are assumed to be time–invariant, are recalibrated, often on a daily basis. Incorrect models, and model misuse, represent a source of risk that is being increasingly recognised — this is called “model risk.” As a paper by the Board of Governors of the Federal Reserve System put it in 2011,222See \citeasnounfedgov2011. “The use of models invariably presents model risk, which is the potential for adverse consequences from decisions based on incorrect or misused model outputs and reports.”
In broad terms, one could identify four general classes of model risk inherent to the way mathematical models are used in finance, for example in (but not limited to) option pricing applications:
Parameter uncertainty (and sensitivity to parameters) — let’s call this “Type 0” model risk for short. If model parameters need to be statistically estimated, they will only be known up to some level of statistical confidence, and this parameter uncertainty induces uncertainty about the correctness of the model outputs.333Examples of where this type of risk is considered explicitly in the literature include \citeasnounCDO:Loeffler:03, \citeasnounBan&Sch:13 and \citeasnounKer&Ber&Sch:10.
Inability to fit a model to a full set of simultaneous market observations — this is “calibration error,” let’s call this “Type 1” model risk for short. To the extent that a model cannot match observed prices on a given day, single-day (a.k.a. “cross-sectional”) market data already contradicts the model assumptions. The classical example of this is the Black/Scholes implied volatility smile.
Change in parameters due to recalibration — let’s call this “Type 2” model risk for short. Once one moves from one day to the next, this aspect of model risk becomes apparent: In order to again fit the market as closely as possible, it is common practice in the industry to recalibrate models. This recalibration results in model parameters (which the models assume to be fixed) changing from day to day, contradicting the model assumptions.
The “true” dynamics of state variables don’t match model dynamics444This type of model risk is considered for example in \citeasnounKer&Ber&Sch:10, who also relate this to identification risk, which they define as risk which “arises when observationally indistinguishable models have different consequences for capital reserves.” — let’s call this violation of model assumptions “Type 3” model risk.555\citeasnounBou&Dan&Kou&Mai:14 present a method for making value–at–risk more robust with respect to this source of model risk by “learning” from the results of model backtesting. The classical example of this is the econometric rejection of the hypothesis that asset prices follow geometric Brownian motion, thus invalidating the key assumption in the seminal model of \citeasnounOZ:Bla&Sch:73. This type of model risk would impact in particular the effectiveness of hedging strategies based on a model.666\citeasnounDet&Pac:16 take the approach of measuring model risk based on the residual profit/loss from hedging in a misspecified model.
Note that there is a gradual transition between the different types of model risk, and depending on one’s modelling choices, to a certain extent one can trade off one type of model risk against another. For example,
Less stringent requirements of an exact fit to market observations (Type 1) allows less frequent recalibration (Type 2).
Instead of different model dynamics (Type 3), one could consider a parameterised family of models (Type 2).
Regime–switching models “legalise” changes in parameters, so Type 2 becomes more like Type 3.
Adding parameters shifts model risk from Type 1 to Type 2 (or, to a certain extent, to Type 0).
Adding state variables shifts model risk from Type 2 to Type 3.
\citeasnounglasserman2014robust propose relative entropy as a consistent pre-metric by which to measure model risk from different sources.777Instead of using a relative entropy pre-metric, one could approach quantifying model risk in terms of optimal–transport distance, using for example Wasserstein distance, which has recently become popular for this purpose (see \citeasnounBar&Dra&Tan:18, \citeasnounBla&Che&Zho:18 and \citeasnounFen&Sch:18). In the present paper, we follow the more established approach using relative entropy, which has its roots in the seminal work of Hansen and Sargent (see e.g. \citeasnounHan&Sar:06). What matters in the application of mathematical models in finance is the probability distributions which the models imply,888\citeasnounBre&Csi:16 call this distribution model risk. either under a “risk–neutral” probability measure (for applications to relative pricing of financial instruments) or the “physical” (a.k.a. “real–world”) probability measure (for risk management applications such as the calculation of expected shortfall). Each type of model risk manifests itself as some form of ambiguity about the “true” probability measures which should be used for these purposes, and being able to quantify different types of model risk in a unified setting using a pre-metric for the divergence between distributions (like relative entropy) allows one to make an informed choice about the trade–offs between different sources of model risk. \citeasnounglasserman2014robust postulate a “relative entropy budget” defining a set of models sufficiently close (in the sense of relative entropy) to a nominal reference model to be considered in an evaluation of model risk expressed as a “worst case” expectation — i.e., a worst–case price or a worst–case risk measure. However, they say little as to how one typically would obtain a specific number for this “relative entropy budget”.
In a sense, we invert this problem by noting that higher relative entropy between model distributions indicates higher model risk, and propose a method to jointly evaluate model risk of two types, based on how this model risk manifests itself when option pricing models are calibrated and recalibrated to liquid market instruments.
We focus on the model risk inherent in the calibration and recalibration (i.e., in the above terminology, Types 1 and 2) of option pricing models, and to illustrate our approach we consider the models of \citeasnounOZ:Bla&Sch:73 and \citeasnounOZ:Heston:93, thus comparing the most classical option pricing model with its popular extension incorporating stochastic volatility. Clearly, if (as is often the case in practice) one focuses solely on calibration error, \citeasnounOZ:Heston:93 will always be preferred to \citeasnounOZ:Bla&Sch:73, and more frequent recalibration preferred to less. We quantify calibration and recalibration risk in both models applied to equity option data, and also explore the trade–off between these two types of model risk, finding that there is no longer a trivial answer to the question of which model and which recalibration frequency should be preferred once these two sources of model risk are considered in a unified framework.
The rest of the paper is organised as follows. Section 2 introduces a framework for the joint evaluation of model risk due to calibration error and due to model recalibration. The numerical implementation of the method is discussed in Section 3. Section 4 presents the results obtained by applying this method to option price data, and Section 5 concludes.
2. Calibration error, model risk due to recalibration, and treatment of latent state variables
As noted above, model risk is reflected in the ambiguity with regard to the “correct” probability distribution to use for relative pricing or risk assessment. Following \citeasnounglasserman2014robust, we quantify this ambiguity using the divergence between probability measures. In the present context, these can be classified as divergence measures defined as a function $D:\mathcal{M}\times\mathcal{M}\to[0,\infty]$ satisfying
$$D(P\,\|\,Q)\ge 0 \quad\text{and}\quad D(P\,\|\,Q)=0 \iff P=Q,\qquad(1)$$
where $\mathcal{M}$ is a space of all probability measures with a common support. More specifically, most divergence measures belong to the class of $f$-divergences, which give the divergence between two equivalent measures $P$ and $Q$ as:999See e.g. \citeasnounali1966general, \citeasnouncsisz1967information or \citeasnounahmadi2012entropic.
$$D_f(P\,\|\,Q)=E^Q\!\left[f\!\left(\frac{dP}{dQ}\right)\right],\qquad(2)$$
where $f$ is a convex function of the Radon–Nikodym derivative $dP/dQ$ satisfying $f(1)=0$. Kullback–Leibler divergence (a.k.a. relative entropy) is the most common $f$-divergence, which assigns $f(x)=x\ln x$, i.e.
$$D_{KL}(P\,\|\,Q)=E^Q\!\left[\frac{dP}{dQ}\ln\frac{dP}{dQ}\right].\qquad(3)$$
It is noted that the methodology of this paper applies to all types of statistical distances in principle, though in the empirical study the Kullback–Leibler divergence is adopted due to its simplicity and widespread use.
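As a concrete illustration of the discrete analogue of this divergence, the following sketch computes the Kullback–Leibler divergence between two discrete risk-neutral densities on a common support; the densities used are purely illustrative.

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P||Q) = E^Q[(dP/dQ) ln(dP/dQ)]
    for discrete densities p, q on a common support (q > 0)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    ratio = p / q  # discrete Radon-Nikodym derivative dP/dQ
    # 0 * ln(0) is taken as 0 by convention
    terms = np.where(p > 0.0, p * np.log(np.where(p > 0.0, ratio, 1.0)), 0.0)
    return float(terms.sum())

# Two illustrative discrete risk-neutral distributions on the same support
p = np.array([0.2, 0.5, 0.3])
q = np.array([0.25, 0.5, 0.25])
d = kl_divergence(p, q)  # non-negative, and zero iff p == q
```

Note that the divergence is asymmetric in its arguments, which is why the roles of model measure and reference measure must be fixed before it is used to quantify model risk.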
If we wish to quantify calibration error (Type 1 model risk) in this fashion, then in equations (1)–(3), the probability measure $Q$ corresponds to the calibrated model and thus is parametric in some form. The probability measure $P$, on the other hand, serves as a reference measure exactly matching observed market prices at a given point in time, unrestricted by the assumptions of the model under consideration. On calibrating an option pricing model, we may regard the measure $P$ as some non-parametric risk–neutral measure that explains the market in full assuming absence of arbitrage. In practice, however, the measure $P$ is not unique, as the market is usually incomplete. We therefore denote the space of all probability measures that explain the market in full by $\mathcal{P}$.
We may further denote the space of probability measures given by all possible choices of parameter values $\theta$ for the target model by $\mathcal{Q}$. The new calibration methodology proposed here aims to minimise the calibration error as quantified by the divergence between the two measures $P$ and $Q$, taken from their respective spaces, i.e.
$$(P^*,Q^*)=\arg\min_{P\in\mathcal{P},\,Q\in\mathcal{Q}} D(P\,\|\,Q).\qquad(4)$$
This is to say, the new approach attempts to calibrate a model measure $Q^*$ (i.e., a set of model parameters $\theta^*$) and a non-parametric perfect fit to the market (at a given point in time) $P^*$, in a fashion which minimises the calibration error expressed by
$$R_1=D(P^*\,\|\,Q^*).$$
This is not an end in itself — it is required in order to compare model risk due to calibration error and model risk due to recalibration (as specified below) in a unified framework.
The classical approaches of model calibration, such as minimising the mean–squared error between model and market prices for options, would be inappropriate in this context, as they would lead to unnecessarily high model risk quantities. It is the choice of divergence measure which informs the calibration procedure, resulting in a pair of probability measures, , one of which corresponds to the calibrated model while the other provides a consistent reference measure fitting the market exactly.
To quantify the model risk due to recalibration, let us consider the more specific case where the model is Markovian in a vector of observable state variables $X_t$, the model is characterised by a vector of model parameters $\theta$, and market prices are given for European option prices of a single maturity $T$.101010This last assumption of a single maturity avoids the need to constrain the choice of $P$ to ensure the absence of calendar spread arbitrage between non-parametric risk–neutral measures for different time horizons — parametric models typically ensure this by construction. If we appropriately constrain $\mathcal{P}$, this assumption can be lifted. Suppose we solved (4) yesterday (at time $t_0$) to obtain a $Q^*_{t_0}$ — to be as explicit as possible, denote this by
$$Q^*_{t_0}=Q_{\theta^*_{t_0}}\big(\,\cdot\mid X_{t_0}=x_{t_0}\big).\qquad(6)$$
I.e., this is a (conditional) probability measure defined on all $\mathcal{F}_T$–measurable events, where the conditioning is on the state variables at time $t_0$, $X_{t_0}$, and we write $X_{t_0}=x_{t_0}$ to express that the time $t_0$ realisations of the state variables are known at the time that these probabilities are evaluated. We write the subscript $\theta^*_{t_0}$ to express that these probabilities are evaluated in a model with parameters calibrated by solving (4) at time $t_0$. Furthermore, denote the non-parametric measure resulting from solving (4) at time $t_0$ by $P^*_{t_0}$.
Now, if we recalibrate today (at time $t_1$) by solving (4), we obtain $P^*_{t_1}$ and
$$Q^*_{t_1}=Q_{\theta^*_{t_1}}\big(\,\cdot\mid X_{t_1}=x_{t_1}\big).$$
We can then define the model risk quantity due to recalibration as
$$R_2=D\big(Q_{\theta^*_{t_1}}(\,\cdot\mid X_{t_1}=x_{t_1})\,\big\|\,Q_{\theta^*_{t_0}}(\,\cdot\mid X_{t_1}=x_{t_1})\big),\qquad(8)$$
which is the divergence between the (conditional) probability measures evaluated at time $t_1$, where one measure is based on the recalibrated parameters $\theta^*_{t_1}$ and the other is based on the previously calibrated parameters $\theta^*_{t_0}$ (thus expressing, in terms of divergence, the inconsistency with the model assumptions due to the fact that we are going “outside of the model” to change parameters in recalibration). The aggregate of calibration error and model risk due to recalibration is then
$$R=D\big(P^*_{t_1}\,\big\|\,Q_{\theta^*_{t_0}}(\,\cdot\mid X_{t_1}=x_{t_1})\big),$$
i.e., the divergence between the non-parametric probability measure $P^*_{t_1}$ obtained by solving (4) at time $t_1$, and the non-recalibrated parametric probability measure, consisting of probabilities conditional on the state at time $t_1$, but based on model parameters obtained by solving (4) at time $t_0$. However, this approach minimises the divergence between the reference distribution and the recalibrated distribution, thus arguably overstating the divergence to the non-recalibrated (i.e. model–consistent) distribution, and therefore overstating the aggregate model risk $R$.
Alternatively, we may choose as the non-parametric reference distribution at time $t_1$:
$$\bar P_{t_1}=\arg\min_{P\in\mathcal{P}_{t_1}} D\big(P\,\big\|\,Q_{\theta^*_{t_0}}(\,\cdot\mid X_{t_1}=x_{t_1})\big),\qquad(10)$$
resulting in a lower aggregate model risk of
$$\bar R=D\big(\bar P_{t_1}\,\big\|\,Q_{\theta^*_{t_0}}(\,\cdot\mid X_{t_1}=x_{t_1})\big).$$
Note that $Q^*_{t_1}$ is still obtained by solving (4), because both $P^*_{t_1}$ and $\bar P_{t_1}$ represent non-parametric probability measures fitting observed market prices exactly, so $Q^*_{t_1}$ remains the best available parametric fit to the market at time $t_1$ ($\bar P_{t_1}$ is only used to determine the minimum divergence of the non-recalibrated model from a measure giving a perfect fit).
In the heuristic schematic of Figure 1(a),111111Note that these graphs are for the purpose of heuristic illustration only — in particular, we are not requiring that the two sets of probability measures are convex. point A represents $Q^*_{t_0}$, being the parametric probability measure “closest” to the set of non-parametric probability measures fitting the market exactly, where point C represents $P^*_{t_0}$. If we do not recalibrate at time $t_1$, we end up with the parametric probability measure $Q_{\theta^*_{t_0}}(\,\cdot\mid X_{t_1}=x_{t_1})$ (point B), to which $\bar P_{t_1}$ (point D) is the “closest” non-parametric probability measure fitting the market exactly.
In the case of Kullback–Leibler divergence, note that if Type 1 (calibration error) and Type 2 (recalibration) model risk involve independent Radon–Nikodym derivatives, then, in the first case considered above, aggregate model risk equals the sum of the two components. In fact, the Radon–Nikodym derivatives, as random variables, take the key role in evaluating the two types of model risk. At the time $t_1$ the model is recalibrated, we again consider the optimisation (4), with $\mathcal{P}$ now changed to $\mathcal{P}_{t_1}$ to reflect the change in observed market prices, so we have the following Radon–Nikodym derivatives:
$$\phi_1=\frac{dP^*_{t_1}}{dQ^*_{t_1}},\qquad(12)$$
$$\phi_2=\frac{dQ^*_{t_1}}{dQ_{\theta^*_{t_0}}(\,\cdot\mid X_{t_1}=x_{t_1})},\qquad(13)$$
$$\phi=\frac{dP^*_{t_1}}{dQ_{\theta^*_{t_0}}(\,\cdot\mid X_{t_1}=x_{t_1})}=\phi_1\phi_2.\qquad(14)$$
Abbreviating $\phi_1$ as $X$ and $\phi_2$ as $Y$, the aggregate risk can be expressed in terms of $X$ and $Y$ as:
$$R=E\big[XY\ln(XY)\big]=E\big[XY\ln X\big]+E\big[XY\ln Y\big],\qquad(15)$$
where the expectations are taken under $Q_{\theta^*_{t_0}}(\,\cdot\mid X_{t_1}=x_{t_1})$. If $X$ and $Y$ are independent, the two terms factorise (noting $E[X]=E[Y]=1$), so that $R=E[X\ln X]+E[Y\ln Y]=R_1+R_2$: the total model risk is equal to the sum of the calibration risk and the recalibration risk. Surprisingly, in our empirical exploration below we found that this equality holds quite closely for the Black/Scholes model. However, it typically does not hold in the Heston model, suggesting substantial dependence (of the Radon–Nikodym derivatives) between the calibration error and the model risk due to recalibration.
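The decomposition discussed here can be illustrated numerically. The sketch below uses three hypothetical discrete measures (standing in for the non-parametric market fit, the recalibrated model measure and the non-recalibrated model measure) and computes the calibration, recalibration and residual components of the aggregate relative entropy; all probability values are invented for the example.

```python
import numpy as np

def kl(p, q):
    """Discrete Kullback-Leibler divergence D(P||Q)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(np.where(p > 0, p * np.log(p / q), 0.0)))

# Hypothetical discrete measures on a common support:
# p_star : non-parametric measure fitting the market at the later date
# q_new  : recalibrated model measure
# q_old  : model measure carried over (non-recalibrated) from the earlier date
p_star = np.array([0.15, 0.40, 0.30, 0.15])
q_new  = np.array([0.20, 0.35, 0.30, 0.15])
q_old  = np.array([0.25, 0.30, 0.25, 0.20])

r1 = kl(p_star, q_new)   # calibration error (Type 1)
r2 = kl(q_new, q_old)    # model risk due to recalibration (Type 2)
r  = kl(p_star, q_old)   # aggregate model risk
residual = r - r1 - r2   # departure from the independence decomposition
```

Algebraically the residual equals $\sum_i (p^*_i - q^{new}_i)\ln(q^{new}_i/q^{old}_i)$, so it vanishes precisely when the two likelihood ratios decouple, which is the discrete analogue of the independence condition above.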
We also consider models which involve one or more latent state variables. An example of that is the class of stochastic volatility models where the volatility is taken as a latent state variable rather than a model parameter (in the empirical examples below, we specifically consider the model of \citeasnounOZ:Heston:93, which falls into this category). Under the framework of a single stochastic volatility state variable, a model specified by a given set of parameters forms a one-dimensional manifold (Fig. 1(b)) for possible realisations of the state variable, rather than a point in the Black-Scholes world (Fig. 1(a)).
Thus, the model which we are now considering is Markovian in a vector of state variables $(X_t,V_t)$, where the state variables $X_t$ are observable and the state variables $V_t$ are latent (unobservable). Then, the initial calibration problem (4) becomes
$$(P^*,Q^*)=\arg\min_{P\in\mathcal{P},\,v\in\mathcal{V},\,\theta\in\Theta} D\big(P\,\big\|\,Q_\theta(\,\cdot\mid X_{t_0}=x_{t_0},V_{t_0}=v)\big),\qquad(18)$$
where $\mathcal{V}$ and $\Theta$ are the sets of legitimate values of the state variables and the parameters, respectively. $\theta^*$ is the set of model parameters calibrated to the market, and $v^*$ is the best estimate of the latent state variables under the calibrated model.121212This effectively treats the latent (unobserved) state variable as an additional parameter to be calibrated, but the recalibration of which does not contribute to (Type 2) model risk due to recalibration, because it is consistent with the model assumptions for this latent state variable to evolve stochastically. This does shift Type 2 model risk to Type 3, the risk that the state variable dynamics are not (econometrically) consistent with the dynamics assumed in the model. However, in the present paper we deliberately set aside Type 3 model risk for the purposes of our analysis, leaving the integration of all four types of model risk for future research. The notation in (6) is amended to
$$Q^*_{t_0}=Q_{\theta^*_{t_0}}\big(\,\cdot\mid X_{t_0}=x_{t_0},V_{t_0}=v^*_{t_0}\big).$$
At time $t_1$, we have for the calibration error
$$R_1=D\big(P^*_{t_1}\,\big\|\,Q_{\theta^*_{t_1}}(\,\cdot\mid X_{t_1}=x_{t_1},V_{t_1}=v^*_{t_1})\big).$$
The model risk due to recalibration is
$$R_2=D\big(Q_{\theta^*_{t_1}}(\,\cdot\mid X_{t_1}=x_{t_1},V_{t_1}=v^*_{t_1})\,\big\|\,Q_{\theta^*_{t_0}}(\,\cdot\mid X_{t_1}=x_{t_1},V_{t_1}=\bar v_{t_1})\big),$$
where $\bar v_{t_1}$ denotes the best estimate of the latent state variables at time $t_1$ under the non-recalibrated parameters $\theta^*_{t_0}$.
The aggregate model risk, using $\theta^*_{t_0}$ and $\bar v_{t_1}$ obtained from (18), is
$$R=D\big(P^*_{t_1}\,\big\|\,Q_{\theta^*_{t_0}}(\,\cdot\mid X_{t_1}=x_{t_1},V_{t_1}=\bar v_{t_1})\big),$$
or alternatively, using $\bar P_{t_1}$ and $\bar v_{t_1}$ determined analogously to (10), i.e.,
$$(\bar P_{t_1},\bar v_{t_1})=\arg\min_{P\in\mathcal{P}_{t_1},\,v\in\mathcal{V}} D\big(P\,\big\|\,Q_{\theta^*_{t_0}}(\,\cdot\mid X_{t_1}=x_{t_1},V_{t_1}=v)\big),$$
which results in
$$\bar R=D\big(\bar P_{t_1}\,\big\|\,Q_{\theta^*_{t_0}}(\,\cdot\mid X_{t_1}=x_{t_1},V_{t_1}=\bar v_{t_1})\big).$$
We then have the following Radon–Nikodym derivatives:
$$\psi_1=\frac{dP^*_{t_1}}{dQ^*_{t_1}},\qquad(25)$$
$$\psi_2=\frac{dQ^*_{t_1}}{dQ_{\theta^*_{t_0}}(\,\cdot\mid X_{t_1}=x_{t_1},V_{t_1}=\bar v_{t_1})},\qquad(26)$$
$$\psi=\psi_1\psi_2.\qquad(27)$$
Note that the key difference between (12)–(14) and (25)–(27) is that the change in $v$, being permitted by the model assumptions, does not contribute to the model risk quantities. In (4) and (18), we are deliberately prioritising the minimisation of calibration error, as this is congruent to the (often exclusive) focus of practitioners on calibration error (with little or no regard to model risk due to recalibration). If desired, one could reformulate this approach to prioritise the minimisation of aggregate model risk, or of model risk due to recalibration.
3. Numerical implementation
In this section, we outline the numerical scheme for solving the minimisation problems arising when taking into account calibration error and model risk due to recalibration in the manner described in the previous section, including problems of the type (4) involving the optimal choice of two probability measures. In this case, an iterative process is required, optimising the two probability measures $P$ and $Q$ in turn until convergence, in the following manner:
Step 1. Produce $Q$ from a parametric model based on an initial guess of the model parameters (and latent state variables, where required).
Step 2. Solve for the $P$ that minimises $D(P\,\|\,Q)$, via Lagrange multipliers for the constrained problem.
Step 3. Solve for the model parameters giving the $Q$ that minimises $D(P\,\|\,Q)$.
Step 4. Iterate Steps 2 and 3 until convergence.
In Step 1, the initial guess may be obtained in several different ways. A common way is to minimise the mean–squared error between model and market option prices at all available strikes. We opted for the Broyden/Fletcher/Goldfarb/Shanno (BFGS) algorithm for conducting this initial calibration of the model parameters and (where required) latent state variables.
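As a sketch of this initial step (with synthetic data in place of market quotes), the Black/Scholes volatility can be calibrated by minimising the mean-squared pricing error with SciPy's BFGS implementation; all market inputs below are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes price of a European call option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

# Synthetic "market" prices generated from a known volatility (illustrative)
S, r, tau = 100.0, 0.02, 0.5
strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
market = bs_call(S, strikes, r, 0.25, tau)

def mse(x):
    # optimise over log-volatility so that sigma = exp(x) stays positive
    sigma = np.exp(x[0])
    return np.mean((bs_call(S, strikes, r, sigma, tau) - market) ** 2)

res = minimize(mse, x0=[np.log(0.4)], method="BFGS")
sigma_hat = float(np.exp(res.x[0]))  # should recover roughly 0.25
```

In the actual calibration the same idea applies with more parameters (and, for Heston, the latent variance as an additional argument of the objective).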
In Step 2, we solve the following constrained minimisation problem using Lagrange multipliers:
$$\min_{P}\ D(P\,\|\,Q)$$
subject to
$$b\le E^P[F]\le a.\qquad(29)$$
Note that here we specify the constraints in the form of expectations under the measure $P$, where these expectations are the model prices for our calibration instruments based on the non-parametric reference distribution $P$. In general, $F$, $b$ and $a$ are vectors; thus (29) is a “stack” of inequality constraints representing observed market prices. Also notice that for generality we “relax” each equality constraint into two inequality constraints. This is in order to account for the bid–ask spread of each option traded on the market. The vector $b$ denotes a list of bid prices while the vector $a$ contains ask prices. In a simplified scenario, where exact option prices are given, we may set $a=b$. $F$ denotes the vector of discounted option payoffs. By introducing vectors of Lagrange multipliers $\lambda_a\ge 0$ and $\lambda_b\ge 0$, we convert the constrained problem to an unconstrained dual problem,
$$\max_{\lambda_a,\lambda_b\ge 0}\ \min_{P}\ \Big\{D(P\,\|\,Q)-\lambda_a^{\top}\big(a-E^P[F]\big)-\lambda_b^{\top}\big(E^P[F]-b\big)\Big\}.$$
In the case of Kullback–Leibler divergence, solving the inner problem gives the probability density function of $P$ in terms of the density of $Q$,
$$\frac{dP}{dQ}=\frac{\exp\big\{(\lambda_b-\lambda_a)^{\top}F\big\}}{E^Q\big[\exp\big\{(\lambda_b-\lambda_a)^{\top}F\big\}\big]},$$
and, writing $c=(a+b)/2$ for the vector of mid prices and $s=(a-b)/2$ for the vector of half-spreads, the dual objective becomes
$$(\lambda_b-\lambda_a)^{\top}c-(\lambda_a+\lambda_b)^{\top}s-\ln E^Q\big[\exp\big\{(\lambda_b-\lambda_a)^{\top}F\big\}\big].$$
If $a=b$, then the last penalty term vanishes, representing the problem with exact market prices. If $a>b$ (component-wise), then this term imposes a penalty on the objective function that is proportional to the bid–ask spread. We may therefore transform the Lagrange multipliers by
$$\lambda=\lambda_b-\lambda_a,$$
noting that at the optimum $\lambda_a$ and $\lambda_b$ cannot both be non-zero in any component, so that $\lambda_a+\lambda_b=|\lambda|$, and the objective function becomes
$$L(\lambda)=\lambda^{\top}c-s^{\top}|\lambda|-\ln E^Q\big[\exp\{\lambda^{\top}F\}\big].$$
We may numerically solve the maximisation problem by taking its gradient with respect to $\lambda$,
$$\nabla_{\lambda}L=c-s\odot\operatorname{sign}(\lambda)-\frac{E^Q\big[F\exp\{\lambda^{\top}F\}\big]}{E^Q\big[\exp\{\lambda^{\top}F\}\big]}=0,\qquad(37)$$
where the element-wise sign function assigns 1, $-1$ or 0 to each element of $\lambda$. However, due to the discontinuity of the sign function, (37) cannot be solved directly in a stable way. To bypass this problem, we approximate the sign function with a continuous step function:
$$\operatorname{sign}(x)\approx\tanh(kx).$$
We use Powell’s hybrid method to solve the multidimensional equations (37), where $k$ controls the steepness of the approximating function and carefully choosing this value is critical for a fast and stable convergence of the method.
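A minimal toy instance of this scheme (a sketch, not the production implementation) can be assembled with SciPy: a discrete reference density is exponentially tilted by the Lagrange multipliers, the gradient condition uses a tanh-smoothed sign function, and the resulting equations are solved with Powell's hybrid method. The payoffs, quotes and steepness value below are all invented for illustration.

```python
import numpy as np
from scipy.optimize import root

# Discrete toy market: terminal prices, a reference density and two call payoffs
s_T = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
q = np.full(5, 0.2)                             # reference density Q
F = np.array([np.maximum(s_T - 95.0, 0.0),      # discounted payoffs
              np.maximum(s_T - 105.0, 0.0)])    # (discounting omitted here)
c = np.array([9.5, 4.2])                        # mid prices (hypothetical)
half_spread = np.array([0.1, 0.1])              # (ask - bid) / 2
k = 50.0                                        # steepness of tanh smoothing

def tilted(lam):
    """Density of P: exponential tilt of Q by lam'F (inner-problem solution)."""
    w = q * np.exp(lam @ F)
    return w / w.sum()

def grad(lam):
    """Smoothed gradient of the dual objective; zero at the optimum."""
    p = tilted(lam)
    return c - half_spread * np.tanh(k * lam) - F @ p

sol = root(grad, x0=np.zeros(2), method="hybr")  # Powell's hybrid method
p_opt = tilted(sol.x)
model_prices = F @ p_opt     # lands within the bid-ask band around c
```

At a root of the smoothed gradient, the mispricing equals `half_spread * tanh(k * lam)` component-wise, so the calibrated model prices automatically respect the bid–ask band up to the smoothing error.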
In Step 3, we use the L-BFGS-B algorithm to minimise the divergence with respect to the model parameters (or latent state variables, or both). Steps 2 and 3 are repeated until convergence. The convergence criterion adopted here is that the percentage change of each parameter over one iteration does not exceed a certain threshold, say 0.1%.
4. Examining the trade–off between calibration error and model risk due to recalibration
As an application example of the method described in the previous two sections, we consider historical data consisting of daily market prices for call options on AAPL and GOOG stock over a period from 6 January 2004 to 19 December 2008 for AAPL and 4 January 2005 to 19 December 2008 for GOOG. This gives us a reasonably straightforward application example free of extraneous complications,131313Although these options are of the American type, i.e. permitting early exercise, AAPL and GOOG did not pay any dividends during this period. Thus the possibility of early exercise may be ignored (see \citeasnounOZ:Merton:73). while still covering reasonably liquid options and including a period of “interesting” market volatility (2007/8). From this data, we remove options very far away from the money, restricting the range of strikes from delta 2.5% to delta 97.5%. Furthermore, we remove prices of options which had zero trading volume on a given day, in order to avoid using prices which are likely to be stale.
On this data we consider two parametric models, \citeasnounOZ:Bla&Sch:73 and \citeasnounOZ:Heston:93 — arguably the two most popular option pricing models available, where the latter introduces a latent variable for stochastic volatility. The unified methodology, quantifying calibration error, model risk due to recalibration, and the aggregate of the two, allows us to explore the trade–off between calibration error (which is, unsurprisingly, reduced by moving from \citeasnounOZ:Bla&Sch:73 to \citeasnounOZ:Heston:93) and model risk due to recalibration (which has hitherto been largely ignored) when moving from one parametric model to another as well as when changing the frequency with which the model is recalibrated.
We start by evaluating the calibration, recalibration and aggregate model risks under a Black/Scholes model, i.e. where the underlying asset price is assumed to follow geometric Brownian motion, with dynamics under the risk–neutral measure given by
$$dS_t=rS_t\,dt+\sigma S_t\,dW_t,\qquad(39)$$
where $r$ is the continuously compounded risk–free rate of interest and $\sigma$ is a constant volatility parameter. We note that in the Black/Scholes model we obtain a simple closed-form expression for the recalibration risk defined in (8):
$$R_2=\ln\frac{\sigma_0}{\sigma_1}+\frac{\sigma_1^2}{2\sigma_0^2}-\frac{1}{2}+\frac{\big(\sigma_1^2-\sigma_0^2\big)^2(T-t)}{8\sigma_0^2},$$
where $\sigma_1$ is the correctly recalibrated Black/Scholes volatility parameter and $\sigma_0$ is the parameter value obtained in a previous calibration. This formula is a consequence of the log-normal distribution of returns assumed in the Black/Scholes model.
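This closed form can be checked numerically. The sketch below (with illustrative parameter values) restates the formula and compares it against the Kullback–Leibler divergence between the two normal distributions of the terminal log-price, computed by direct quadrature.

```python
import numpy as np
from scipy.integrate import quad

def bs_recalibration_risk(sigma1, sigma0, tau):
    """Closed-form KL divergence between Black-Scholes terminal distributions
    with recalibrated volatility sigma1 and previous volatility sigma0."""
    return (np.log(sigma0 / sigma1) + sigma1**2 / (2 * sigma0**2) - 0.5
            + (sigma1**2 - sigma0**2) ** 2 * tau / (8 * sigma0**2))

def kl_normal_numeric(mu1, s1, mu0, s0):
    """KL(N(mu1, s1^2) || N(mu0, s0^2)) by numerical integration."""
    def integrand(x):
        p = np.exp(-(x - mu1) ** 2 / (2 * s1**2)) / (s1 * np.sqrt(2 * np.pi))
        logratio = (np.log(s0 / s1) - (x - mu1) ** 2 / (2 * s1**2)
                    + (x - mu0) ** 2 / (2 * s0**2))
        return p * logratio
    return quad(integrand, mu1 - 10 * s1, mu1 + 10 * s1)[0]

# log S_T is normal: mean ln S_t + (r - sigma^2/2) tau, variance sigma^2 tau
S, r, tau, sig0, sig1 = 100.0, 0.02, 0.5, 0.20, 0.25
mu = lambda sig: np.log(S) + (r - 0.5 * sig**2) * tau
numeric = kl_normal_numeric(mu(sig1), sig1 * np.sqrt(tau),
                            mu(sig0), sig0 * np.sqrt(tau))
closed = bs_recalibration_risk(sig1, sig0, tau)
```

The last term of the closed form comes from the volatility-dependent drift of the log-price; it vanishes as the two volatilities coincide faster than the leading terms do.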
We can express the aggregate model risk as the sum of the calibration error, the recalibration risk and a residual. As noted in equation (15), the residual is zero if the likelihood ratios involved in the calibration and recalibration risks are two independent random variables. In practice, the residual usually takes a small non-zero value. In Figure 2 we demonstrate the decomposition of the total model risk into the three components.141414The vertical axis denotes the numerical value of the relative entropy. Unsurprisingly (as it is well documented that the Black/Scholes model cannot fit the implied volatility “smile” observed in most options markets), we see that calibration error typically predominates.
In the Heston model, the dynamics (39) are extended to allow for stochastic volatility, i.e.
$$dS_t=rS_t\,dt+\sqrt{v_t}\,S_t\,dW^{(1)}_t,$$
$$dv_t=\kappa(\theta-v_t)\,dt+\xi\sqrt{v_t}\,dW^{(2)}_t.$$
This model involves two state variables, the underlying asset price $S_t$ and the variance $v_t$, and five model parameters: $r$, $\kappa$, $\theta$, $\xi$ and $\rho$, where $\rho$ is the correlation coefficient between the two Wiener processes:
$$dW^{(1)}_t\,dW^{(2)}_t=\rho\,dt.$$
$r$ is the risk-free rate,151515In our empirical application examples, we take the risk–free rate as one of the financial variables observed in the market, but we do not explicitly take into account interest rate risk in our empirical analysis. For the short–dated options considered here, interest rate risk is known to be of relatively little importance — for a discussion of this issue, see e.g. \citeasnounCHENG2017 and the literature cited therein. and $\kappa$, $\theta$ and $\xi$ relate to the variance process, being the rate of mean reversion, the long–run mean and the volatility of this process.
Following \citeasnounOZ:Gatheral:06, the risk-neutral probability of exercise of a European call option with strike $K$ in the Heston model is given by
$$P(x,v,\tau)=\frac{1}{2}+\frac{1}{\pi}\int_0^{\infty}\operatorname{Re}\!\left[\frac{\exp\{C(u,\tau)\,\theta+D(u,\tau)\,v+iux\}}{iu}\right]du,$$
where $v$ is the current value of the variance state variable $v_t$, $\tau=T-t$ is the time to maturity, and $x$ is the logarithmic forward moneyness of the option, i.e.
$$x=\ln\frac{S_t}{K\,B(t,T)},$$
with $B(t,T)$ the time $t$ price of a zero bond maturing in $T$. Furthermore,
$$C(u,\tau)=\kappa\left[r_-\tau-\frac{2}{\xi^2}\ln\!\left(\frac{1-g\,e^{-d\tau}}{1-g}\right)\right],\qquad D(u,\tau)=r_-\,\frac{1-e^{-d\tau}}{1-g\,e^{-d\tau}}.$$
Parameters $C$ and $D$ are functions of $u$ (the Fourier transform variable of $x$) through
$$\alpha=-\frac{u^2}{2}-\frac{iu}{2},\quad\beta=\kappa-\rho\xi iu,\quad d=\sqrt{\beta^2-2\alpha\xi^2},\quad r_{\pm}=\frac{\beta\pm d}{\xi^2},\quad g=\frac{r_-}{r_+}.$$
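The exercise-probability integral can be sketched as follows, using numerical quadrature rather than FFT. As a sanity check, for vanishing vol-of-vol with the variance at its long-run mean, the Heston probability should approach the Black/Scholes value $N(d_2)$; all parameter values below are illustrative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def heston_exercise_prob(x, v, tau, kappa, theta, xi, rho):
    """Risk-neutral P(S_T > K) in the Heston model (Gatheral's
    parameterisation); x is the log forward moneyness ln(F/K)."""
    def integrand(u):
        alpha = -0.5 * u**2 - 0.5j * u
        beta = kappa - rho * xi * 1j * u
        d = np.sqrt(beta**2 - 2.0 * alpha * xi**2)
        r_minus = (beta - d) / xi**2
        g = r_minus / ((beta + d) / xi**2)       # r_minus / r_plus
        exp_dt = np.exp(-d * tau)
        D = r_minus * (1.0 - exp_dt) / (1.0 - g * exp_dt)
        C = kappa * (r_minus * tau
                     - 2.0 / xi**2 * np.log((1.0 - g * exp_dt) / (1.0 - g)))
        return (np.exp(C * theta + D * v + 1j * u * x) / (1j * u)).real
    return 0.5 + quad(integrand, 1e-8, 200.0, limit=500)[0] / np.pi

# Small vol-of-vol and v = theta: compare with the Black/Scholes N(d2)
sigma, tau, x = 0.2, 0.5, 0.1
p_heston = heston_exercise_prob(x, sigma**2, tau, 2.0, sigma**2, 0.01, 0.0)
d2 = x / (sigma * np.sqrt(tau)) - 0.5 * sigma * np.sqrt(tau)
p_bs = norm.cdf(d2)
```

Using the $r_-$ root here keeps the complex logarithm on a stable branch for the maturities considered, a well-known numerical consideration for this class of formulas.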
[Table: Aggregate model risk and model risk due to recalibration for recalibration frequencies of 1 day, 3 days, 1 week, 2 weeks and 1 quarter.]
It is noted that $0\le P\le 1$, since by definition $P$ is the probability of exercise. The probability density function of the risk-neutral measure is therefore obtained by differentiation. Letting $y$ denote the logarithm of the ratio of the forward price at $t$ to the spot price at maturity $T$, i.e. $y=\ln\big(S_t/(S_T\,B(t,T))\big)$, so that $P(x,v,\tau)$ is the risk-neutral probability that $y<x$, we derive the risk-neutral probability density with respect to $y$:
$$f(y)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{iuy}\exp\{C(u,\tau)\,\theta+D(u,\tau)\,v\}\,du,$$
which can be calculated by fast Fourier transform (FFT).
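The FFT inversion can be sketched as follows. As a self-contained check, the routine is applied to the standard normal characteristic function (standing in here for the Heston characteristic function) and compared with the known density; grid sizes are illustrative.

```python
import numpy as np
from scipy.stats import norm

def density_from_cf(cf, N=4096, du=0.05):
    """Recover a density f(y) = (1/2pi) * integral of e^{-iuy} phi(u) du
    from the characteristic function phi, on symmetric FFT grids."""
    k = np.arange(N)
    u = (k - N / 2) * du               # grid in the Fourier variable
    dy = 2 * np.pi / (N * du)          # y-grid spacing (Nyquist relation)
    y = (k - N / 2) * dy
    # The alternating-sign factors re-centre both grids around zero so
    # that the discretised integral maps onto numpy's FFT convention.
    a = cf(u) * (-1.0) ** k
    f = (du / (2 * np.pi)) * ((-1.0) ** k) * np.fft.fft(a)
    return y, f.real

# Sanity check with a known case: standard normal, phi(u) = exp(-u^2/2)
y, f = density_from_cf(lambda u: np.exp(-u**2 / 2.0))
```

The grid sizes trade off the truncation range in $u$ against the resolution in $y$; for characteristic functions that decay as fast as the Gaussian, the recovered density is accurate to near machine precision.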
The decomposition of the total model risk into the three components (the components due to calibration and recalibration, and the positive or negative residual measuring the departure from independence between the first two components) when using the Heston model as the baseline is given in Figure 3. Again, since it is well documented that the Heston model can fit observed option prices better than Black/Scholes, it is unsurprising that in this case the relative entropy measuring calibration error is much lower — however, already in this set of example days it is evident that this comes at a price of increased model risk due to recalibration.
[Table: Aggregate model risk and model risk due to recalibration for recalibration frequencies of 1 day, 3 days, 1 week, 2 weeks and 1 quarter.]
These observations are reinforced when we consider aggregate model risk, calibration error and model risk due to recalibration over the entire sample period, as presented in Tables 1 and 2. Note that the absolute numbers refer to relative entropy and thus lack direct financial interpretation — what matters are the relative values when comparing across models and different recalibration frequencies, in particular when considering the aggregate model risk. Here, we consider recalibrating the Black/Scholes and Heston models either daily, every three days, every week, every two weeks, or every quarter. We see that recalibrating more frequently has little effect on the aggregate model risk, neither when using the Black/Scholes model nor when using the Heston model. Essentially, recalibrating more frequently simply shifts calibration error into model risk due to recalibration,161616Note that on days on which we do not recalibrate, the model risk due to recalibration is zero, because (consistent with the model assumptions) we are keeping previously calibrated parameters unchanged — so on those days aggregate model risk is entirely due to calibration error (which increases because the fit to market prices deteriorates when we do not recalibrate). highlighting the dangers in the common practice of focusing solely on the calibration of derivative pricing models, at the expense of all other sources of model risk.
In addition, we observe that if we are interested in “robustness” at a high level of confidence (looking at, say, the 99% quantile of aggregate model risk), moving from Black/Scholes to Heston also does not appear to deliver any advantage (it does yield some improvement at lower quantiles, or average or median, aggregate model risk). This means that when high levels of confidence are required, any gain in calibration accuracy delivered by the Heston model is offset by higher model risk due to recalibration. One should note that this last point holds even before considering Type 3 model risk, which may well be worse when additional state variables are introduced (as in the Heston model). For these results, in Tables 1 and 2 we calculated means, medians and quantiles across all available option maturities. If we consider only particular maturity “buckets”, the same qualitative conclusions are evident — Tables 3 and 4 illustrate this in the case of daily recalibration.
[Table: Aggregate model risk and model risk due to recalibration under daily recalibration, by maturity bucket (all, 0–0.2 years, 0.2–0.7 years, over 0.7 years).]
[Table: Aggregate model risk and model risk due to recalibration under daily recalibration, by maturity bucket (all, 0–0.2 years, 0.2–0.7 years, over 0.7 years).]
5. Conclusion

Under our approach, less relative entropy implies less model risk, and we are able to evaluate two hitherto disparate sources of model risk (calibration error and model risk due to recalibration) in a unified fashion, and examine the potential trade–off between the two. We have considered a simple choice between two models, and between different recalibration frequencies. “Putting a number on model risk” by calculating quantiles for the maximum model risk (quantified by relative entropy) over a time series of market data allows one to assess the added value (if any) of more complicated stochastic models of financial markets.
In our application, we deliberately prioritise the minimisation of calibration error, as this is congruent with the (often exclusive) focus of practitioners on calibration error, with little or no regard to model risk due to recalibration.\footnote{If desired, one could reformulate this approach to prioritise the minimisation of aggregate model risk, or of model risk due to recalibration.} Even in this case, we see that, once recalibration is included as one of the sources of aggregate model risk, frequently recalibrating a model to a changing market simply exchanges one source of model risk for another, and more complicated stochastic models may well underperform when aggregate model risk is taken into account.
\harvarditemAhmadi-Javid2012ahmadi2012entropic Ahmadi-Javid, A.: 2012, Entropic value-at-risk: A new coherent risk measure, Journal of Optimization Theory and Applications 155(3), 1105–1123.
\harvarditemAli and Silvey1966ali1966general Ali, S. M. and Silvey, S. D.: 1966, A general class of coefficients of divergence of one distribution from another, Journal of the Royal Statistical Society. Series B (Methodological) pp. 131–142.
\harvarditemBannör and Scherer2013Ban&Sch:13 Bannör, K. F. and Scherer, M.: 2013, Capturing parameter risk with convex risk measures, European Actuarial Journal 3, 97–132.
\harvarditem[Bartl et al.]Bartl, Drapeau and Tangpi2018Bar&Dra&Tan:18 Bartl, D., Drapeau, S. and Tangpi, L.: 2018, Computational aspects of robust optimized certainty equivalents and option pricing, Technical Report 1706.10186, arXiv preprint.
\harvarditemBlack and Scholes1973OZ:Bla&Sch:73 Black, F. and Scholes, M.: 1973, The pricing of options and corporate liabilities, Journal of Political Economy 81(3), 637–654.
\harvarditem[Blanchet et al.]Blanchet, Chen and Zhou2018Bla&Che&Zho:18 Blanchet, J., Chen, L. and Zhou, X. Y.: 2018, Distributionally robust mean–variance portfolio selection with Wasserstein distances, Technical Report 1802.04885, arXiv preprint.
\harvarditemBoard of Governors of the Federal Reserve System Office of the Comptroller of the Currency2011fedgov2011 Board of Governors of the Federal Reserve System Office of the Comptroller of the Currency: 2011, Supervisory guidance on model risk management, Technical Report OCC 2011-12 Attachment, Federal Reserve.
\harvarditem[Boucher et al.]Boucher, Danielsson, Kouontchou and Maillet2014Bou&Dan&Kou&Mai:14 Boucher, C. M., Danielsson, J., Kouontchou, P. S. and Maillet, B. B.: 2014, Risk models–at–risk, Journal of Banking & Finance 44, 72–92.
\harvarditemBox and Draper1987OZ:Box:87 Box, G. E. P. and Draper, N. R.: 1987, Empirical Model–Building and Response Surfaces, Wiley.
\harvarditemBreuer and Csiszár2016Bre&Csi:16 Breuer, T. and Csiszár, I.: 2016, Measuring distribution model risk, Mathematical Finance 26(2), 395–411.
\harvarditem[Cheng et al.]Cheng, Nikitopoulos and Schlögl2017CHENG2017 Cheng, B., Nikitopoulos, C. S. and Schlögl, E.: 2017, Pricing of long-dated commodity derivatives: Do stochastic interest rates matter?, Journal of Banking & Finance .
\harvarditemCsiszár1967csisz1967information Csiszár, I.: 1967, Information-type measures of difference of probability distributions and indirect observations, Studia Scientiarum Mathematicarum Hungarica 2, 299–318.
\harvarditemDetering and Packham2016Det&Pac:16 Detering, N. and Packham, N.: 2016, Model risk of contingent claims, Quantitative Finance 16(9), 1357–1374.
\harvarditemFeng and Schlögl2018Fen&Sch:18 Feng, Y. and Schlögl, E.: 2018, Model risk measurement under Wasserstein distance, Technical report, SSRN Working Paper.
\harvarditemGatheral2006OZ:Gatheral:06 Gatheral, J.: 2006, The Volatility Surface: A Practitioner’s Guide, John Wiley & Sons.
\harvarditemGlasserman and Xu2014glasserman2014robust Glasserman, P. and Xu, X.: 2014, Robust risk measurement and model risk, Quantitative Finance 14(1), 29–58.
\harvarditemHansen and Sargent2006Han&Sar:06 Hansen, L. P. and Sargent, T. J.: 2006, Robustness, Princeton University Press.
\harvarditemHeston1993OZ:Heston:93 Heston, S. L.: 1993, A closed–form solution for options with stochastic volatility with applications to bond and currency options, The Review of Financial Studies 6, 327–343.
\harvarditem[Kerkhof et al.]Kerkhof, Melenberg and Schumacher2010Ker&Ber&Sch:10 Kerkhof, J., Melenberg, B. and Schumacher, H.: 2010, Model risk and capital reserves, Journal of Banking & Finance 34, 267–279.
\harvarditemLöffler2003CDO:Loeffler:03 Löffler, G.: 2003, The effects of estimation error on measures of portfolio credit risk, Journal of Banking & Finance 27, 1427–1453.
\harvarditemMerton1973OZ:Merton:73 Merton, R. C.: 1973, Theory of rational option pricing, The Bell Journal of Economics and Management Science 4(1), 141–183.