Simulation of Implied Volatility Surfaces via Tangent Lévy Models

In this paper, we implement and test two types of market-based models for European-type options, based on the tangent Lévy models proposed in [4] and [3]. As a result, we obtain a method for generating Monte Carlo samples of future paths of implied volatility surfaces. These paths, and the surfaces themselves, are free of arbitrage and are constructed in a way that is consistent with the past and present values of implied volatility. We use real market data to estimate the parameters of these models and conduct an empirical study to compare the performance of market-based models with that of classical stochastic volatility models. We choose the problem of minimal-variance portfolio choice as a measure of model performance and compare the two tangent Lévy models to the SABR model. Our study demonstrates that the tangent Lévy models do a much better job at finding a portfolio with the smallest variance: their predictions for the variance are more reliable, and their portfolio weights are more stable. To the best of our knowledge, this is the first example of an empirical analysis that provides convincing evidence of the outperformance of market-based models for European options using real market data.

1 Introduction

The existence of liquid markets for equity and volatility derivatives, as well as a well-developed over-the-counter market for exotic derivatives, generates a need for a modeling framework that is consistent across time and across financial instruments. Within this framework, once a model is chosen so that it matches both the present prices of liquid instruments and their past dynamics, it is expected to produce more realistic results for the problems of pricing and hedging of exotic instruments. In addition, such models can be used to quantify the risk embedded in portfolios of derivative contracts. Needless to say, evaluating and managing the risk of such portfolios is crucial for the proper functioning of the financial markets: recall, for example, that the VIX index is itself a portfolio of European options written on the S&P 500.

In this paper we investigate an arbitrage-free modeling framework for multiple European-type options written on the same underlying, which is consistent across time and products. In particular, this framework allows one to resolve one of the nagging challenges of quant groups supporting equity trading: namely, how to generate realistic Monte Carlo scenarios of implied volatility surfaces that are consistent with present and historical observations. As mentioned above, such models can be used to address the problems of pricing, hedging and risk management. Herein, we implement several such models using real market data and conduct a numerical experiment which clearly demonstrates the advantages of this modeling approach.

The attempts to model the dynamics of the implied volatility surface directly date back to the “sticky smile model” and the “sticky delta model” (also known as the “floating smile model”) (see Section 6.4 of [26] for the definitions). As an improvement on these two models, Cont et al. later proposed a multi-factor model of the implied volatility surface in [8] and [9], where they applied a Karhunen-Loève decomposition to the daily variations of implied volatilities. It turns out that the first three eigenvectors explain most of the daily variance, and a mean-reverting factor model based on these three eigenvectors is then constructed for the future implied volatility surface. The major issue with these early attempts is that the proposed models for the dynamics of implied volatility are either too restrictive, not allowing one to match the historical evolution of implied volatility, or too loose, so that they may contain arbitrage opportunities. While the importance of the first issue for any time-series analysis is very clear, the second one deserves a separate discussion. Indeed, what do we mean by arbitrage opportunities in a model for implied volatility, and why do we need to avoid them? There are two types of arbitrage opportunities we refer to: static and dynamic. A given implied volatility surface contains static arbitrage if it is impossible to obtain such a surface in any arbitrage-free model for the underlying. The fact that not every surface can be an arbitrage-free implied volatility surface simply follows from the well-known static no-arbitrage restrictions on the shape of a call price surface: e.g. monotonicity and convexity in strikes, etc. (cf. [12] and [13]). Notice that a violation of any of these conditions leads to an obvious arbitrage opportunity which is very easy to implement; hence, it is natural to assume that every implied volatility surface is free of static arbitrage.
This, in turn, implies that any realistic simulation algorithm for future implied volatility surfaces has to produce surfaces that are arbitrage-free: otherwise, the algorithm generates outcomes that are simply impossible. The static no-arbitrage conditions are rather difficult to state explicitly in terms of the implied volatility surface itself (without mapping it to a call or put price surface first). Nevertheless, it is not hard to deduce from the existing necessary conditions (cf. [23]) that the set of arbitrage-free implied volatility surfaces forms a “thin” set in the space of all (regular enough) functions of two variables. Hence, it is a non-trivial task to construct a modeling framework that excludes static arbitrage in the implied volatility surface. The dynamic arbitrage adds to this problem: it refers to a restriction on the evolution (i.e. the time increments) of the implied volatility surface, rather than its values at a fixed moment in time. This restriction follows from the same arbitrage considerations for option prices. However, the associated arbitrage strategies are not as straightforward as in the case of static arbitrage. In addition, simulated implied volatility surfaces that contain only dynamic arbitrage are typically very close to ones that are arbitrage-free, when the time horizon is small (this is related to the fact that dynamic arbitrage only changes the drift term of the implied volatility, which is much smaller than the diffusion term for small times). This is why eliminating the dynamic arbitrage in a model for the implied volatility surface is often viewed as a “second priority” for risk management. Nevertheless, we believe that a good model should exclude both types of arbitrage, in order to produce realistic dynamics of the implied volatility surface (for risk management) and eliminate possible arbitrage opportunities (for pricing).
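To make the static conditions concrete, the basic strike-direction checks (monotonicity and convexity of call prices) can be verified numerically on a discrete grid of quotes. The sketch below is illustrative and not part of the models developed later; the function name and tolerance are our own.

```python
import numpy as np

def has_static_arbitrage(strikes, prices, tol=1e-12):
    """Check a single-maturity slice of call prices for the standard
    strike-direction static-arbitrage violations: prices must be
    non-increasing and convex in strike (butterfly spreads >= 0).
    Returns True if a violation is found. Illustrative helper only."""
    k = np.asarray(strikes, float)
    c = np.asarray(prices, float)
    dc = np.diff(c) / np.diff(k)    # call-spread slopes
    if np.any(dc > tol):            # must be non-increasing in strike
        return True
    if np.any(np.diff(dc) < -tol):  # must be convex in strike
        return True
    return False
```

A surface passing such checks for every maturity is a necessary (though not sufficient) indication of freedom from static arbitrage.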

We have already mentioned that it is not a trivial task to construct a model of implied volatility that excludes arbitrage opportunities. In fact, when trying to model the surface directly, the first challenge one faces is: how to describe the space of possible implied volatility surfaces? Note that, as discussed above, the existing characterizations of arbitrage-free implied volatility surfaces are rather implicit. In addition, if the resulting space is not an open subset of any linear space (which it is not), what kind of mathematical tools can be used to describe evolution in this space? Recall, for example, that all statistical models of time series are defined on linear spaces (or on spaces that can be easily mapped into a linear space). Hence, it appears natural to map the space of possible implied volatility surfaces to an open set in a linear space, and then proceed with the construction of arbitrage-free models. Such a mapping became known as a code-book mapping, and it turns out that it can be constructed by means of the so-called tangent models (cf. [2], [4], [3]). The concept of a tangent model is very close to the method of calibrating a model for the underlying to the target derivative prices (in the present case, European call options). Consider a family of arbitrage-free models for the underlying, , parameterized by , taking values in a “convenient” set (an open set of a linear space). For any given surface of option prices (or, equivalently, any given implied volatility surface), we can try to calibrate a model from this family to it. In other words, we attempt to find such that: , for all given maturities and strikes , where is the given call price, and is the call price produced by the model . If the above calibration problem has a unique solution, we obtain a one-to-one correspondence between the call price surfaces and the models in the chosen family: .
For every call price surface , the associated (calibrated) model is called a tangent model.1 Notice that is always arbitrage-free, hence, we obtain the desired code-book mapping . Now, the problem of static arbitrage has been resolved, and one simply needs to prescribe the distribution of a stochastic process , taking values in a convenient set , in order to obtain a model for the dynamics of call prices , and, in turn, the dynamics of implied volatility surface. Finally, one needs to characterize all possible dynamics of that produce no dynamic arbitrage in the associated call prices . An interested reader is referred to [3], for a more detailed description of this general algorithm, and, for example, to [2], [4], [17], [33], [22], [27], for the analysis of specific choices of the families of models .

The idea of modeling prices of derivative contracts directly dates back to the work of Heath, Jarrow and Morton [16], who analyzed the dynamics of bond prices along with the short interest rate. Such models have become known as the market-based models (or simply market models), as opposed to the classical spot models, since the former are designed to capture the evolution of the entire market, including the liquid derivatives. This approach has been extended to more general mathematical settings, as well as to other derivatives’ markets. The list of relevant works includes [13], [28], [29], [31], [30], [14], in addition to those mentioned in the previous paragraph. Even though the notions of code-book and tangent models never appear in these papers, almost all of them follow the algorithm outlined in the previous paragraph (and described in more detail in [3]), in order to construct a market-based model.

Even though various code-books for the implied volatility surface (or, equivalently, for the call price surface) have been proposed and the corresponding arbitrage-free dynamics have been characterized, it was not until very recently that some of these models were implemented numerically. As is shown in the rest of the paper, the lack of such results is not a surprise, given the complexity of the models. So far, the numerical implementations are mostly based on the tangent Lévy models proposed in [4] and [3]: as the name suggests, this corresponds to a code-book constructed using non-homogeneous Lévy (or, additive) models as the tangent models. Karlsson [18] implements a class of tangent Lévy models with absolutely continuous Lévy densities and no continuous martingale component. Zhao [33] and Leclercq [22], on the contrary, implement tangent Lévy models whose Lévy measure is purely atomic in the space variable. As opposed to [33], the work of Leclercq [22] allows for tangent models with a continuous martingale component and includes options with multiple maturities, but it does require that the Lévy density possess a certain symmetry, which may limit the ability of the model to capture the skew of the implied smile. All of the works [18], [33], [22] estimate the parameters of the model from real market data. In addition, [22] conducts a numerical experiment comparing the performance of a market-based model to that of a classical spot model. The actual results of this experiment, however, do not provide convincing evidence in favor of the market-based approach. We believe that the latter is simply due to the choice of experiment and to the deficiency of the theory, and we intend to demonstrate it in the present work.

The purpose of this paper is to propose implementation methods for two classes of tangent Lévy models – with continuous and discrete Lévy measures. These methods provide practical algorithms for simulating future arbitrage-free implied volatility surfaces, which are consistent with both present and past observations. Our first method is similar to the one used in [18], but with a different “dynamic fitting” part, and the second method is in the spirit of [22], although we avoid the assumption of symmetry of a Lévy measure made in [22]. However, the most important original contribution of this paper is the numerical experiment which uses real market data to demonstrate clearly the advantages of market-based models for implied volatility (or, option prices), as compared to the classical spot models. To the best of our knowledge, this is the first convincing empirical analysis that justifies the use of market-based approach for modeling implied volatility surface.

The rest of the paper is organized as follows. Section 2 starts by reviewing the work on tangent Lévy models with continuous Lévy density and continuous martingale component, developed in [3]. Then, we introduce the implementation approach for these models, which is based on double exponential jump processes, hence the name “Double Exponential Tangent Lévy Models”. Section 3 introduces the implementation method for tangent Lévy models with discrete Lévy density, called “Discrete Tangent Lévy Models”. The two approaches are then tested against a popular classical model in a portfolio optimization problem in Section 4. Section 5 concludes the paper by highlighting the main contributions and future work. Appendices A–C contain technical proofs and derivations, and Appendix D contains all tables and graphs.

2 Double exponential tangent Lévy models

2.1 Model setup and consistency conditions

In this subsection, we review and update the results of [4], which serve as a foundation for the analysis in subsequent sections. Herein, we assume that the interest and dividend rates for the underlying asset are zero. In the implementation that follows, we discount the market data accordingly, to comply with this assumption. As in [3], we denote by a stochastic process representing the underlying price, and assume that the true dynamics of under the pricing measure are given by:


Here, is a general integer-valued random measure (not necessarily a Poisson measure!), whose compensator is , where is a predictable stochastic process taking values in the function space , defined in (6.1).

For any fixed time and a given value of , a stochastic process is said to be tangent to the true model if the time- prices of all European call options written on can be obtained by pretending that the future risk-neutral evolution of the index value is instead given by from on. Throughout this section, for any fixed , we assume that the tangent process is of the form


for , where is a Poisson random measure associated with the jumps of , whose compensator is given by a deterministic measure . Notice that the law of is uniquely determined by . Let denote the option prices generated by , i.e.


The concept of a tangent model, then, requires that, for each fixed ,


Thus, at each time , we obtain the code-book for call prices, given by . Of course, the value of the code-book may be different at a different time . Hence, we consider the dynamic tangent Lévy models characterized by a pair of stochastic processes that satisfies (2.4). Here, is a positive martingale with dynamics given by (2.1); is a progressively measurable positive stochastic process taking values in (cf. (6.2)). The dynamics of and are given by


where is a progressively measurable integrable stochastic process with values in , and, for each , is a progressively measurable square integrable stochastic process taking values in (cf. (6.4)).

Notice that (2.5) defines the dynamics of the code-book , but it does not ensure that it does, indeed, produce tangent models at each time : in other words, there is no guarantee that (2.4) holds. Thus, additional “consistency” conditions have to be enforced to obtain models which are, indeed, tangent to the true underlying process. As shown in [4], this consistency is, in fact, equivalent to the fact that call prices generated by these tangent models are free of dynamic arbitrage. In order to present the main consistency result, we state the following regularity assumptions on .

Assumption 1.

For each , almost surely, for almost every , we have:



For every , the function is absolutely continuous on .


For any , .

Finally, we introduce some extra notation and formulate the consistency result, which is a simple corollary of Theorem 12 in [4].

Theorem 1.

(Carmona-Nadtochiy 2012) Assume that is a true martingale, satisfies the above regularity assumptions RA1-RA4, and , almost surely for all and almost all . Then the processes satisfying (2.5) are consistent, in the sense that (2.4) holds, if and only if the following conditions hold almost surely for almost every and , and all :

  1. Drift restriction:

  2. Compensator specification: .

Theorem 1, along with equations (2.5), provides a general method for constructing a market-based model for call prices (i.e. an arbitrage-free dynamic model for the implied volatility surface). Indeed, choosing , we use the drift restriction in Theorem 1 and the second equation in (2.5) to generate the paths of . Finally, to generate the paths of , one can use the compensator specification in Theorem 1 and the first equation in (2.5), after representing the random measure through its compensator and a Poisson random measure (as shown in [4]). However, in the present paper we avoid simulating at all, by simply noticing that

where is the Black-Scholes price at time of a call option with maturity and strike given that the level of underlying is at and the volatility is . At any time , regardless of the value of , if we find the level of that makes the right hand sides of the two equations above coincide, then the option prices in the left hand sides have to coincide as well. This means that we can obtain the implied volatility of , in the maturity and log-moneyness variables, by computing the corresponding implied volatility of , for which we do not need to generate .
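In practice, the last step amounts to inverting the Black-Scholes formula with zero interest and dividend rates. A minimal sketch (bisection-based; function names are ours):

```python
import math

def bs_call(s, k, t, sigma):
    """Black-Scholes call price with zero interest and dividend rates."""
    if sigma <= 0 or t <= 0:
        return max(s - k, 0.0)
    d1 = (math.log(s / k) + 0.5 * sigma**2 * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return s * N(d1) - k * N(d2)

def implied_vol(price, s, k, t, lo=1e-6, hi=5.0, n=100):
    """Invert the Black-Scholes formula by bisection; the call price is
    strictly increasing in sigma, so bisection converges reliably."""
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if bs_call(s, k, t, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since both sides of the price-matching identity are inverted with the same formula, the resulting implied volatility does not depend on the level of the underlying, as noted above.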

2.2 Implied volatility simulation with tangent Lévy models

We first introduce the general framework of the simulation procedure. Our procedure has two stages, estimation and simulation. The estimation stage, where the additive density of the tangent process as well as its dynamics are fitted to market data, is performed in two steps:

  • Static fitting. In static fitting, the additive density for each day is obtained by a least squares optimization that minimizes the squared differences between model prices and actual market prices. Notice that, for any given day , is fixed and there are no dynamics involved, which explains the term ‘static’.

  • Dynamic fitting. In dynamic fitting, we recover the dynamics of the time series . In view of the drift restriction in Theorem 1, this boils down to determining the volatility terms . This is done by applying Principal Component Analysis to the time series of .

Once the estimation is completed, we generate the future paths of using an Euler-scheme Monte Carlo applied to the second equation in (2.5). From the simulated additive densities, we compute call prices and, then, implied volatilities by inverting the Black-Scholes formula.

Within this general framework, the simulation stage is generic, but the static part of the estimation stage can be quite different depending on the specific subclass of tangent Lévy densities that we fit to option prices at any given time. In this section, we implement the procedure with the Lévy densities arising from the double exponential Lévy models proposed by Kou in [19]. The small number of parameters in double exponential models and the availability of an analytical pricing formula for call options make the resulting family of tangent Lévy models fairly easy to calibrate.

2.3 Market data

We use SPX (S&P 500) call option prices provided by OptionMetrics, an option database containing historical prices of options and their underlying instruments. Throughout the paper, we use the option data from two time periods: Jan. 2007 - Aug. 2008 and Jan. 2011 - Dec. 2012. Table 1 gives a quick summary of the two periods. We cut off the first period at August 2008 to reduce the impact of the financial crisis.

On each day of a period, we only keep the options with time to maturity less than one year whose best closing bid price and best closing offer price are both available, and take the average of the two prices as the option price. To ensure the validity of all prices, the contracts with zero open interest are excluded. As a result, there are roughly 10 to 80 call contracts with valid prices available for each maturity. The log-moneyness (more precisely, the put log-moneyness, defined as ) of these call options ranges roughly from -0.3 to 0.1, varying for different and . Our calibration also requires dividend and interest rate data, available from OptionMetrics and the homepage of the U.S. Department of the Treasury, respectively. The dividend yield is recovered from option prices via put-call parity, with the method proposed in [1]. On day , we denote the dividend yield by , and the risk-free rate between and by . To simplify our implementation, we perform a simple transformation on the market data, so that we can assume that the interest and dividend rates are both zero from now on:


where is the market price of a call option with maturity and strike . The adjusted call prices , corresponding to maturity and strike , are then consistent with the assumption of zero interest and dividend rates (i.e. they do not contain arbitrage under these assumptions). In a similar way, we define the adjusted bid and ask prices, and .

In this Section and Section 3, we will perform the calibration of the tangent Lévy models on the time span from Jan. 3, 2007 to Dec. 31, 2007, denoted by . In Section 4, data from both periods will be used to test the performance of the tangent Lévy models.

2.4 Static fitting

Before we proceed with the static fitting, let us first give a quick review of the double exponential model. In such a model, the logarithm of the underlying follows a pure jump Lévy process whose jump sizes have a double exponential distribution. More specifically, assuming no diffusion term, the dynamics of the underlying are given by


where is the drift term, is a Poisson process with rate , is a sequence of i.i.d. random variables with asymmetric double exponential distribution, independent of . The density of an asymmetric double exponential distribution is given by


where represent the probabilities of positive and negative jumps, and are the parameters of the two exponential distributions. In other words, a double exponential model is a martingale model for the underlying whose logarithm is a pure jump Lévy process, with the Lévy density


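As a quick illustration, the double exponential Lévy density above can be coded directly (the parameter names lam, p, eta1, eta2 mirror the usual Kou notation; the function name is ours):

```python
import math

def kou_levy_density(x, lam, p, eta1, eta2):
    """Levy density of the Kou double exponential model:
    nu(x) = lam * ( p*eta1*exp(-eta1*x) for x > 0,
                    (1-p)*eta2*exp(eta2*x) for x < 0 ).
    lam is the jump intensity; p and 1-p are the probabilities of
    positive and negative jumps; eta1, eta2 are the exponential rates."""
    if x > 0:
        return lam * p * eta1 * math.exp(-eta1 * x)
    elif x < 0:
        return lam * (1 - p) * eta2 * math.exp(eta2 * x)
    return 0.0
```

Note that the density integrates to the total jump intensity lam, since each exponential piece integrates to its jump probability.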
One of the advantages of double exponential models is the availability of analytical pricing formulas for European call options, which greatly simplifies the calibration. [19] gives the pricing formula for double exponential models with a diffusion term. A minor modification of the derivation in [19] gives us the pricing formula in the absence of the diffusion term, as shown in the lemma below (its proof is given in Appendix B).

Lemma 1.

Under the assumptions of zero interest and dividend rates, assume, in addition, that the underlying process follows a double exponential process with Lévy density given by (2.11), under the risk-neutral probability measure. Then, the price of a European call option with strike and maturity is given by



and the function is given by:




For each , with , we would like to find the set of parameters that minimizes the difference between the market and the model prices. For practical reasons, we work with time values instead of option prices. The market time value and the model time value are calculated as follows

There are two reasons for working with time values. First, the time values go to zero for very large and very small log-moneyness, which allows us to truncate the -space with negligible numerical errors. Second, time values and option prices are often of different magnitudes, especially for in-the-money options, whose prices are much greater than their time values; hence, working with time values is likely to result in smaller numerical errors. For a fixed time and a fixed maturity , the optimization problem can be written as


where are the weights we put on different options to take into account the difference in liquidity (measured by bid-ask spread). For every fixed maturity , the solution of the above optimization problem, , yields the Lévy density via (2.11). Then, we search for a function , such that


for every maturity and all . The resulting tangent model on day is defined as a martingale model for the underlying whose logarithm is a pure jump additive (non-homogeneous Lévy) process, with the Lévy density . It is easy to see that the call prices produced by this model, for every maturity and strike , coincide with the prices produced by the double exponential model, . Thus, for a given , the problem of static fitting is essentially a series of optimization problems (2.14), over all maturities , along with the fitting problem (2.15).
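To illustrate the liquidity weighting in (2.14), the sketch below computes a weighted sum of squared time-value errors, with weights taken as inverse bid-ask spreads; the specific weighting rule and the names are our assumptions:

```python
def weighted_sse(model_tv, market_tv, bid_tv, ask_tv):
    """Weighted sum of squared errors between model and market time
    values. Weights are inverse bid-ask spreads, so more liquid quotes
    (tighter spreads) count more. Illustrative objective in the spirit
    of (2.14); an optimizer would minimize this over the parameters."""
    sse = 0.0
    for m, mk, b, a in zip(model_tv, market_tv, bid_tv, ask_tv):
        w = 1.0 / max(a - b, 1e-8)  # guard against zero spreads
        sse += w * (m - mk) ** 2
    return sse
```

In the actual calibration, this objective would be evaluated with the model time values produced by Lemma 1 and minimized over the two free parameters of each maturity slice.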

At first glance, the optimization in (2.14) seems to have four parameters. However, the following constraints reduce the number of parameters to two in our calibration:

  • To improve the stability of small-jumps intensity over time, we would like the Lévy density to be continuous in . The continuity at requires

  • In view of the results in Section 2.1, we have to impose the symmetry condition RA3 on ’s. A simple application of Itô’s lemma shows that, for the symmetry condition RA3 to hold, it suffices to choose every , so that

    is a deterministic function of , for all times . To achieve this, in view of (2.15), we need to choose every so that the symmetry index


    is a deterministic function of . This yields:


    where is a fixed (estimated a priori) function.

With the two constraints, our calibration takes only two variables: and . The condition transforms to the following condition on :


As a result, the optimization problem (2.14) can be re-written as


where is the interval defined in (2.19). The symmetry index function , for all , can be obtained on the first calibration day , solving a three-variable optimization problem,


and setting


for every maturity , and, finally, interpolating linearly between every and . We summarize the calibration procedure for in the following algorithm:

1. Preprocess the market data according to (2.8);
2. For , run the three-variable optimization (2.21), without the symmetry condition, for all maturities, and compute by (2.22) and linear interpolation;
3. For the subsequent days , run the two-variable optimization (2.20), with the already estimated , to obtain the time series of Lévy densities .
Algorithm 1: Algorithm for calibrating

Below are the calibration results. The Lévy densities on Jan. 3, 2007 – the first day of calibration – are obtained by the three-variable optimization (2.21). From the calibrated parameters, we compute the symmetry index via (2.22), which is shown in Figure 1. With the symmetry index , we run the two-variable optimization (2.20) on the following day, Jan. 4, 2007, and obtain the Lévy densities shown in Figure 2. The corresponding time values are shown in Figure 3. We can see that the calibration results are quite precise, in the sense that the time value falls between the bid and the ask values most of the time. As for the calibrated Lévy densities , their values tend to decrease as the time to maturity increases (cf. Figure 2). The magnitude of (which measures the “asymmetry” of the Lévy measure) is decreasing with maturity as well. Both results are in line with empirical findings on jump intensities and volatility skews.

Next, for every day , we need to find that satisfies (2.15). Notice that, if is differentiable in , we obtain:


for each . The relationship (2.23) can be used to back out the additive densities from the calibrated Lévy densities . However, the calibrated densities are only defined for , hence, we need to interpolate them across maturities. An analysis of the calibrated Lévy densities shows that generally exhibits one of the following two patterns as a function of .

  • For small jump sizes , decreases rapidly as increases. To ensure that the recovered is non-negative, we use a combination of an exponential function and a power function


    to fit , for any fixed . The corresponding Lévy density can then be computed as

  • For large jump sizes , increases as increases. The function we used to fit this scenario is a simple polynomial function


    Then, is computed as


An illustration of the two scenarios together with an example of the reconstructed is shown in Figure 4.
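The exponential-times-power fit used for the small-jump scenario is linear in log space, so it can be carried out with ordinary least squares. A sketch (the coefficient names A, b, c are illustrative):

```python
import numpy as np

def fit_exp_power(T, values):
    """Fit values(T) ~ A * T**b * exp(c*T) by linear least squares in
    log space: log f = log A + b*log T + c*T. This is the
    exponential-times-power form used for the small-jump scenario."""
    T = np.asarray(T, float)
    y = np.log(np.asarray(values, float))
    X = np.column_stack([np.ones_like(T), np.log(T), T])  # design matrix
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]  # A, b, c
```

Fitting in log space automatically keeps the reconstructed function positive, which is what the non-negativity of the recovered density requires.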

2.5 Dynamic fitting

Recall that, in view of (2.5), the Lévy density has the following dynamics:


In the dynamic fitting, we need to assume that the time increments of are stationary, which is only natural if we work with the time to maturity instead of the maturity . Namely, we define and its dynamics


A simple application of Itô’s formula shows that


To simulate future implied volatility surfaces, all we need are the diffusion terms ’s, because the drift term can be computed from ’s. We assume that ’s are deterministic and constant as functions of , for any (from a finite family of points). Then, every increment is a sum of a Gaussian random vector, corresponding to the diffusion part, and a vector that corresponds to the drift term (we view every surface as a vector whose entries correspond to different values of ). Notice that the distribution of the Gaussian component is completely determined by its covariance matrix; hence, we aim to choose ’s to match the estimated covariance matrix. Assuming that the drift term is bounded, it is easy to see that the standard estimate of the covariance of also provides a consistent estimate of the covariance of the aforementioned Gaussian vector, asymptotically, as the length of the time increments converges to zero. In the actual computations, we use daily increments – these are small compared to the time span of the entire sample, which is one year. To fit ’s to the estimated covariance matrix, it is natural to use Principal Component Analysis (PCA), which finds the directions that explain most of the variance in the increments . However, PCA cannot be applied directly, because the number of points on the surface is close to the sample size, which is 251: for each , we have call prices for 10 maturities and 21 jump sizes, which gives us 210 points on the surface after static fitting. To reduce the number of points, we pick every other maturity and the 7 jump sizes whose intensities are larger than the others across time . This gives us points on the reduced surface of .
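The covariance-based PCA step described above can be sketched as follows (the row/column conventions and the dimensions in the usage are illustrative):

```python
import numpy as np

def pca_factors(increments, n_factors=3):
    """PCA of daily increments (rows = days, columns = surface points).
    Returns the leading eigenmodes (as columns) and the fraction of
    total variance they explain."""
    X = np.asarray(increments, float)
    X = X - X.mean(axis=0)                      # center the increments
    cov = np.cov(X, rowvar=False)               # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)            # symmetric eigendecomposition
    order = np.argsort(vals)[::-1]              # sort by decreasing variance
    vals, vecs = vals[order], vecs[:, order]
    explained = vals[:n_factors].sum() / vals.sum()
    return vecs[:, :n_factors], explained
```

The eigenmodes returned here are normalized, so the diffusion terms are obtained by scaling them with the loading factors, as described below.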

Applying PCA to the reduced surface, we see that the first three eigenmodes explain over of the daily variance of , as shown in Figure 5. To extend the values of the eigenmodes to other points (i.e. other jump sizes and maturities), we simply perform a linear interpolation. The first three eigenmodes have very distinctive characteristics. The first eigenmode captures the most prominent feature of – the densities are concentrated around small jumps at very short times to maturity. This eigenmode can be understood as a combination of the “level” factor and the “slope” factor (appearing in a typical PCA result for yield curve dynamics) along both the maturity and the jump size directions. The second eigenmode shows the curvature along the jump size direction, and the third eigenmode shows the curvature along the time to maturity direction. As the eigenmodes are normalized, to obtain the diffusion terms ’s, we need to multiply the eigenmodes by the loading factors:


Once we have the diffusion terms, we change variables and calculate the drift term according to (1). Figure 6 shows the drift term computed according to (1). The drift can then be computed as


where evaluating the partial derivative presents no difficulty, since, in the static fitting stage, the density was interpolated across maturities.
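Because the fitted density is interpolated across maturities, the maturity derivative appearing above can be evaluated by finite differences on the maturity grid. A minimal sketch, with a placeholder surface slice whose exact derivative is known:

```python
import numpy as np

# Hypothetical maturity grid and a smooth density value at one fixed jump size;
# the paper's static fitting interpolates across maturities, so a
# finite-difference derivative in the maturity variable is well defined.
T = np.linspace(0.1, 2.0, 50)
kappa_slice = np.exp(-T)  # placeholder slice (true derivative: -exp(-T))

# Second-order central differences in the interior, second-order one-sided
# differences at the grid boundaries.
dkappa_dT = np.gradient(kappa_slice, T, edge_order=2)
print(np.allclose(dkappa_dT, -np.exp(-T), atol=5e-3))
```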

2.6 Monte Carlo simulation of implied volatility surfaces

Once all the terms on the right-hand side of (2.29) are estimated, we can, for example, apply an explicit Euler scheme to simulate the future Lévy densities. However, we need to ensure that the simulated densities stay nonnegative at all times. Inspired by [4], we incorporate a scaling factor in (2.29) as follows:




with the scaling factor and the threshold parameter defined accordingly. Of course, this modification changes the diffusion term of the Lévy density, which was estimated from historical data. However, the threshold is chosen to be so small that, in the historical sample, the scaling factor is always equal to one. Hence, if we use the diffusion terms chosen in the previous subsection, the resulting dynamics are still consistent with the past observations. It is also easy to see that, since the scaling factor is a scalar, the drift restriction (1) is satisfied by the new drift and volatility. Finally, this modification ensures that the Lévy density remains almost surely nonnegative at all times.
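A minimal sketch of one Euler step with such a nonnegativity-preserving scaling factor. The particular capped-ratio form of the factor, the threshold value, and all inputs are assumptions for illustration, not the paper's exact specification; note that the same scalar factor multiplies both drift and diffusion, which is what preserves the drift restriction:

```python
import numpy as np

def scaling_factor(kappa, eps=1e-6):
    """Hypothetical scaling factor: equals 1 unless the density approaches
    zero, in which case it shrinks the whole increment (capped-ratio choice)."""
    return float(np.clip(np.min(kappa) / eps, 0.0, 1.0))

def euler_step(kappa, alpha, betas, dt, rng):
    """One explicit Euler step for the Levy density; the scalar factor
    multiplies both the drift and the diffusion terms."""
    gamma = scaling_factor(kappa)
    dW = rng.standard_normal(len(betas)) * np.sqrt(dt)
    incr = gamma * (alpha * dt + sum(b * w for b, w in zip(betas, dW)))
    return np.maximum(kappa + incr, 0.0)  # numerical floor at zero

rng = np.random.default_rng(2)
kappa = np.full(35, 0.5)                       # placeholder current density
alpha = np.zeros(35)                           # placeholder drift
betas = [0.10 * np.ones(35), 0.05 * np.ones(35)]  # placeholder diffusion terms
kappa_next = euler_step(kappa, alpha, betas, 1.0 / 251.0, rng)
print(kappa_next.shape, bool((kappa_next >= 0).all()))
```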

To simulate future values of the Lévy density, we apply the explicit Euler scheme to (2.33) to obtain


with the time step being one day. Having simulated these densities, we compute the associated maturity-parameterized densities via (2.15). Then, for every fixed maturity, the option prices in the model given by the Lévy density can be computed, for example, using the methods proposed in [6] or [24]. These methods are based on the Fourier transform and can be implemented efficiently via numerical integration. In particular, in our simulation, we use the following formula to calculate future option prices:


where the weight in the integrand is the characteristic function of an exponential Lévy process with the corresponding Lévy density, starting from one:

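A sketch of the Fourier-based pricing step under the normalization of the text (spot equal to one, zero rates). We use the Lewis representation together with the characteristic function of a finite-activity (compound-Poisson) exponential Lévy model; this is one standard approach of the kind proposed in [6] and [24], not necessarily the paper's exact formula, and all numerical parameters are illustrative:

```python
import numpy as np

def char_fn(u, x, lam, T):
    """Characteristic function of log S_T for a finite-activity (compound
    Poisson) exponential Levy model with jump sizes x and intensities lam,
    with drift chosen so that S is a martingale and S_0 = 1."""
    b = -np.sum(lam * (np.exp(x) - 1.0))   # martingale drift correction
    psi = 1j * u * b + (lam[None, :] * (np.exp(1j * np.outer(u, x)) - 1.0)).sum(axis=1)
    return np.exp(T * psi)

def call_price(K, x, lam, T, u_max=400.0, n=40001):
    """European call via the Lewis Fourier representation (spot 1, zero rates)."""
    k = -np.log(K)                          # log-moneyness
    u = np.linspace(1e-8, u_max, n)
    f = np.real(np.exp(1j * u * k) * char_fn(u - 0.5j, x, lam, T)) / (u**2 + 0.25)
    du = u[1] - u[0]
    integral = du * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoid rule
    return 1.0 - np.sqrt(K) / np.pi * integral

# Degenerate sanity check: with zero jump intensities, the terminal spot is
# deterministic and the call price reduces to its intrinsic value.
x = np.array([-0.1, 0.1])
lam = np.zeros(2)
print(abs(call_price(0.8, x, lam, 1.0) - 0.2) < 1e-3)
```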
From the above option price, we can easily calculate the implied volatility by inverting the Black-Scholes formula, assuming that the spot level equals one and the interest and dividend rates are zero. As discussed at the very end of Subsection 2.1, this value is the same as the implied volatility of a call option with the corresponding spot level, strike, and maturity, regardless of the level of the underlying (hence, we do not need to simulate it). Using this method, we simulate the implied volatility surfaces five days into the future, starting from Dec. 13, 2007, as shown in Figures 7 and 8.
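The inversion of the Black-Scholes formula can be carried out by bisection, since the call price is strictly increasing in volatility. A self-contained sketch under the same normalization (spot one, zero rates); the specific tolerance and bracket are illustrative:

```python
import math

def bs_call(sigma, K, T):
    """Black-Scholes call price with S_0 = 1 and zero interest/dividend rates."""
    if sigma <= 0:
        return max(1.0 - K, 0.0)
    N = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    d1 = (-math.log(K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return N(d1) - K * N(d2)

def implied_vol(price, K, T, lo=1e-9, hi=5.0, tol=1e-10):
    """Invert the Black-Scholes formula by bisection (price is monotone in sigma)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(mid, K, T) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check: price at sigma = 0.2, then recover the volatility.
p = bs_call(0.2, 1.1, 0.5)
print(round(implied_vol(p, 1.1, 0.5), 6))
```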

3 Discrete tangent Lévy models

3.1 Model setup and consistency conditions

In this section, we work with a different class of tangent Lévy models in order to solve the same problem – develop a consistent Monte Carlo simulation algorithm for the future implied volatility surfaces. The new class of tangent Lévy models is called “discrete tangent Lévy models”, as the jump sizes of the logarithm of the tangent process are restricted to finitely many values. As a result, for each fixed maturity, the corresponding Lévy measure is purely atomic and can be represented by a finite number of parameters. There are several benefits of the new setting as opposed to the one considered in Section 2. On a theoretical level, the new drift restriction simplifies to a sum, as opposed to an integral which needs to be approximated numerically. In addition, the mapping between option prices and the Lévy measure is simplified in this case. The latter makes the calibration problem somewhat easier, which, in turn, allows us to use a non-parametric calibration procedure, which can potentially improve the quality of the static fitting. The downside of this method is that, despite the existence of an explicit formula connecting the Lévy density and option prices, solving the optimization problem associated with the non-parametric calibration is still very computationally expensive. We show how to deal with this problem later in this section. It is worth mentioning that discrete-space versions of tangent Lévy models have been considered in [33] and [22]. The results of [33] are limited to a single maturity, while the present setting includes multiple maturities. The theoretical results of [22] are very similar to the ones presented in this subsection. However, our choice of a convenient subclass of discrete tangent models and its subsequent numerical implementation are different.

Similar to Section 2, herein we assume that the true dynamics of the underlying under the pricing measure are given by:


where the driving term is an integer-valued random measure with a predictable compensator. The only difference between (3.1) and (2.1) is that, in the above expression, the compensator may fail to be absolutely continuous with respect to the Lebesgue measure. In fact, herein we restrict our analysis to compensators of the form:


where every intensity is a nonnegative predictable process, such that

and the set of admissible jump sizes is a finite subset of the real line which does not change with time. Clearly, the elements of this set correspond to the jump sizes, and the associated processes to their intensities. At any fixed time, a tangent model for the underlying is given by


for the relevant maturities, where the driving noise is a Poisson random measure, associated with the jumps of the tangent process, whose compensator is given by a deterministic measure. Of course, to be consistent with (3.2), we only consider Lévy measures of the form:


where every intensity is a continuous deterministic function. With a slight abuse of notation, we denote the collection of these functions by a single symbol. As before, we denote by the associated symbol the call prices produced by the above tangent model at the given time (cf. (2.3)). The concept of a tangent model then requires that (2.4) holds at every time. Next, we define the joint dynamics of the underlying and the Lévy measures by


where the driving noise is a finite-dimensional standard Brownian motion; the drift and diffusion coefficients of the underlying are progressively measurable stochastic processes taking values in the space defined in (6.7); and, for each jump size and each Brownian component, the corresponding volatility is a progressively measurable, square-integrable stochastic process with values in the space defined in (6.8).
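In the discrete setting, integrals against the Lévy measure collapse to finite sums over the atoms, which is the computational advantage mentioned above. A sketch with hypothetical jump sizes and intensities (the regular spacing with zero at the center anticipates the assumption stated below):

```python
import numpy as np

# A purely atomic Levy measure is just two finite arrays: the fixed jump
# sizes x_i and their intensities lam_i.
x = np.linspace(-0.5, 0.5, 11)    # regularly spaced jump sizes, 0 at the center
lam = np.exp(-10.0 * x**2)        # hypothetical intensity profile

def levy_exponent(u, x, lam):
    """Characteristic exponent of the jump part as a finite sum over atoms,
    compensated so that the associated exponential process is a martingale."""
    comp = np.sum(lam * (np.exp(x) - 1.0))  # martingale compensator
    return np.sum(lam * (np.exp(1j * u * x) - 1.0)) - 1j * u * comp

# Martingale check: the exponent vanishes at u = -i.
print(bool(abs(levy_exponent(-1j, x, lam)) < 1e-12))
```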

As before, given these dynamics, we need to ensure that they satisfy certain “consistency conditions” in order to produce models which are, indeed, tangent to the true model almost surely at all times. First, we make the following assumption.

Assumption 2.

The jump sizes are regularly spaced and have 0 at the center.