Nonparametric Bayesian volatility learning under microstructure noise

Shota Gugushvili Mathematical Institute
Leiden University
P.O. Box 9512
2300 RA Leiden
The Netherlands
shota.gugushvili@math.leidenuniv.nl
Frank van der Meulen Delft Institute of Applied Mathematics
Faculty of Electrical Engineering, Mathematics and Computer Science
Delft University of Technology
Van Mourik Broekmanweg 6
2628 XE Delft
The Netherlands
f.h.vandermeulen@tudelft.nl
Moritz Schauer Mathematical Institute
Leiden University
P.O. Box 9512
2300 RA Leiden
The Netherlands
m.r.schauer@math.leidenuniv.nl
 and  Peter Spreij Korteweg-de Vries Institute for Mathematics
University of Amsterdam
P.O. Box 94248
1090 GE Amsterdam
The Netherlands and Institute for Mathematics, Astrophysics and Particle Physics
Radboud University
Nijmegen
The Netherlands
spreij@uva.nl
July 13, 2019
Abstract.

Aiming at financial applications, we study the problem of learning the volatility under market microstructure noise. Specifically, we consider noisy discrete time observations from a stochastic differential equation and develop a novel computational method to learn the diffusion coefficient of the equation. We take a nonparametric Bayesian approach, where we model the volatility function a priori as piecewise constant. Its prior is specified via the inverse Gamma Markov chain. Sampling from the posterior is accomplished by incorporating the Forward Filtering Backward Simulation algorithm in the Gibbs sampler. Good performance of the method is demonstrated on two representative synthetic data examples. Finally, we apply the method to the EUR/USD exchange rate dataset.

Key words and phrases:
Forward Filtering Backward Simulation; Gibbs sampler; High frequency data; Inverse Gamma Markov chain; Microstructure noise; State-space model; Volatility
2000 Mathematics Subject Classification:
Primary: 62G20, Secondary: 62M05
The research leading to the results in this paper has received funding from the European Research Council under ERC Grant Agreement 320637.

1. Introduction

Let the one-dimensional stochastic differential equation (SDE)

(1)   $dX_t = b(t, X_t)\,dt + s(t)\,dW_t, \qquad X_0 = x, \qquad t \in [0, T],$

be given. Here $W$ is a standard Wiener process, and $b$ and $s$ are referred to as the drift function and the volatility function, respectively. We assume (not necessarily uniformly spaced) observation times $0 = t_0 < t_1 < \cdots < t_n = T$ and observations $Y_1, \ldots, Y_n$, where

(2)   $Y_i = X_{t_i} + V_i, \qquad i = 1, \ldots, n,$

and $\{V_i\}$ is a sequence of independent and identically distributed random variables, independent of the Wiener process $W$. Our aim is to nonparametrically learn the volatility $s$ using the noisy observations $Y_1, \ldots, Y_n$. Knowledge of the volatility is of paramount importance in financial applications, specifically in pricing financial derivatives, see, e.g., Musiela and Rutkowski (2005), and in risk management.

The quantity $\Delta_i = t_i - t_{i-1}$ is referred to as the observation frequency; small values of $\Delta_i$ correspond to high frequency, dense-in-time data. Intraday financial data are commonly thought of as high frequency data. In this high frequency financial data setting, which is the one we are interested in in this work, the measurement errors $\{V_i\}$ are referred to as microstructure noise. Their inclusion in the model aims to reflect such features of observed financial time series as their discreteness, or approximations due to market frictions. Whereas for low frequency financial data these can typically be neglected without much ensuing harm, empirical evidence shows that this is not the case for high frequency data; cf. Mykland and Zhang (2012).

There exists a large body of statistics and econometrics literature on nonparametric volatility estimation under microstructure noise. See, e.g., Hoffmann et al. (2012), Jacod et al. (2009), Mykland and Zhang (2009), Reiß (2011), Sabel et al. (2015); a recent overview is Mykland and Zhang (2012). The literature predominantly deals with estimation of the integrated volatility $\int_0^t s^2(u)\,du$, although inference on $s$ itself has also been studied in several of these references. The various methods proposed in the above-mentioned works share the property of being frequentist in nature.

In this paper, our main contribution is the development of a practical nonparametric Bayesian approach to volatility learning under microstructure noise. We demonstrate good performance of our method on two representative simulation examples, and we also apply it to the EUR/USD exchange rate dataset.

In general, a nonparametric approach reduces the risk of model misspecification; the latter may lead to distorted inferential conclusions. The presented nonparametric method is not only useful for an exploratory analysis of the problem at hand (cf. Silverman (1986)), but also allows honest representation of inferential uncertainties (cf. Müller and Mitra (2013)). Attractive features of the Bayesian approach include its internal coherence, automatic uncertainty quantification in parameter estimates via Bayesian credible sets, and the fact that it is a fundamentally likelihood-based method. For a modern monographic treatment of nonparametric Bayesian statistics see Ghosal and van der Vaart (2017); an applied perspective is found in Müller et al. (2015).

The paper is organised as follows: in Section 2 we introduce our approach in detail. In Section 3 we test its practical performance on synthetic data examples. Section 4 applies our method to a real data example. Section 5 summarises our findings. Finally, Appendices A, B, C and D collect frequently used notation and give further implementation details.

2. Methodology

In this section we introduce our methodology for inferring the volatility. We first recast the model into a simpler form amenable to computational analysis, next specify a nonparametric prior on the volatility, and finally describe an MCMC method for sampling from the posterior.

2.1. Linear state space model

By equation (1), we have

(3)   $X_{t_i} = X_{t_{i-1}} + \int_{t_{i-1}}^{t_i} b(u, X_u)\,du + \int_{t_{i-1}}^{t_i} s(u)\,dW_u.$

We derive our method under the assumption that the "true", data-generating volatility is a deterministic function of time, $s = s(t)$. If the "true" volatility is in fact a stochastic process, we apply our procedure without further changes, as if $s$ were deterministic. As shown in the example of Subsection 3.2, this works in practice.

Over short time intervals, the term $\int_{t_{i-1}}^{t_i} s(u)\,dW_u$ in (3), roughly speaking, dominates the term $\int_{t_{i-1}}^{t_i} b(u, X_u)\,du$: the former scales as $\Delta_i^{1/2}$, whereas the latter scales as $\Delta_i$ (due to the properties of Wiener process paths). As our emphasis is on learning $s$ rather than $b$, following Gugushvili et al. (2017) and Gugushvili et al. (2018a) we act as if the process $X$ had a zero drift, $b = 0$. A similar idea is often used in frequentist volatility estimation procedures in the high frequency financial data setting; see Mykland and Zhang (2012), Section 2.1.5, for an intuitive exposition. Formal results on why this works in specific settings rely on Girsanov's theorem; see, e.g., Hoffmann et al. (2012), Mykland and Zhang (2009), Gugushvili et al. (2017). Further reasons why one would like to set $b = 0$ are that $b$ is a nuisance parameter, that in specific applications its appropriate parametric form might be unknown, and, finally, that the observed time series might simply be not long enough to learn $b$ consistently (see Ignatieva and Platen (2012)).
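For completeness, the order computation behind this heuristic (a standard consequence of the Itô isometry; we assume here that $b$ and $s$ are bounded):
\[
\mathbb{E}\left|\int_{t_{i-1}}^{t_i} b(u, X_u)\,du\right| = O(\Delta_i),
\qquad
\left\{\mathbb{E}\left(\int_{t_{i-1}}^{t_i} s(u)\,dW_u\right)^{2}\right\}^{1/2}
= \left(\int_{t_{i-1}}^{t_i} s^2(u)\,du\right)^{1/2} = O\left(\Delta_i^{1/2}\right).
\]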

We thus assume $dX_t = s(t)\,dW_t$, so that $X_{t_i} = X_{t_{i-1}} + U_i$, where $U_i = \int_{t_{i-1}}^{t_i} s(u)\,dW_u$. Note that then

(4)   $U_i \sim N(0, w_i), \qquad w_i = \int_{t_{i-1}}^{t_i} s^2(u)\,du,$

and also that $\{U_i\}$ is a sequence of independent random variables. To simplify our notation, write $x_i = X_{t_i}$, $u_i = U_i$ and $v_i = V_i$. The preceding arguments and (2) allow us to reduce our model to the linear state space model

(5)   $x_i = x_{i-1} + u_i, \qquad Y_i = x_i + v_i, \qquad i = 1, \ldots, n,$

where $u_i \sim N(0, w_i)$. The first equation in (5) is the state equation, while the second equation is the observation equation. We assume that $\{v_i\}$ is a sequence of independent $N(0, \eta_v)$-distributed random variables, independent of the Wiener process $W$ in (1), so that $\{v_i\}$ is independent of $\{x_i\}$. For justification of such assumptions on the noise sequence from a practical point of view, see Sabel et al. (2015), page 229. We endow the initial state $x_0$ with the $N(\mu_0, C_0)$ prior distribution. Under our assumptions, (5) is a Gaussian linear state space model. This is very convenient computationally; had we not followed this route, we would have had to deal with an intractable likelihood, which constitutes the main computational bottleneck for Bayesian inference in SDE models; see, e.g., van der Meulen and Schauer (2017) and Papaspiliopoulos et al. (2013) for discussion.
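For illustration, a minimal simulation of the reduced model (5) in Julia (a sketch, not the authors' code; the function and variable names are ours):

```julia
# Simulate latent states and noisy observations for given increment
# variances w_i and noise variance eta_v, as in model (5).
function simulate_ssm(w::Vector{Float64}, eta_v::Float64; x0::Float64 = 0.0)
    n = length(w)
    x = Vector{Float64}(undef, n)        # latent states x_1, ..., x_n
    y = Vector{Float64}(undef, n)        # noisy observations Y_1, ..., Y_n
    xprev = x0
    for i in 1:n
        x[i] = xprev + sqrt(w[i]) * randn()   # state equation x_i = x_{i-1} + u_i
        y[i] = x[i] + sqrt(eta_v) * randn()   # observation equation Y_i = x_i + v_i
        xprev = x[i]
    end
    x, y
end

# Example: n = 1000 equispaced times on [0, 1] and constant volatility s = 1,
# so that w_i = Δ_i = 1/n; the noise variance 0.01 is illustrative.
x, y = simulate_ssm(fill(1.0 / 1000, 1000), 0.01)
```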

2.2. Prior

For the measurement error variance $\eta_v$, we assume a priori $\eta_v \sim \mathrm{IG}(\alpha_v, \beta_v)$. The construction of the prior for $s$ is more complex and follows Gugushvili et al. (2018a), which in turn relies on Cemgil and Dikmen (2007). Fix an integer $m < n$. Then we have a unique decomposition $n = mN + r$ with $0 \le r < m$, where $N = \lfloor n/m \rfloor$. Now define bins $B_k = [t_{m(k-1)}, t_{mk})$, $k = 1, \ldots, N-1$, and $B_N = [t_{m(N-1)}, T]$. We model $s$ as

(6)   $s = \sum_{k=1}^{N} \xi_k \mathbf{1}_{B_k},$

where $N$ (the number of bins) is a hyperparameter. Then $w_i = \theta_k \Delta_i$ whenever $[t_{i-1}, t_i] \subset B_k$, where $\theta_k = \xi_k^2$. We complete the prior specification for $s$ by assigning a prior distribution to the coefficients $\theta_1, \ldots, \theta_N$. For this purpose, we introduce auxiliary variables $\zeta_2, \ldots, \zeta_N$, and suppose the sequence $\theta_1, \zeta_2, \theta_2, \zeta_3, \ldots, \zeta_N, \theta_N$ forms a Markov chain (in this order of variables). The transition distributions of this chain are defined by

(7)   $\theta_1 \sim \mathrm{IG}(\alpha_1, \beta_1), \qquad \zeta_{k+1} \mid \theta_k \sim \mathrm{IG}(\alpha, \alpha \theta_k^{-1}), \qquad \theta_{k+1} \mid \zeta_{k+1} \sim \mathrm{IG}(\alpha, \alpha \zeta_{k+1}^{-1}),$

where $\alpha_1, \beta_1$ and $\alpha$ are hyperparameters. We refer to this chain as an inverse Gamma Markov chain, see Cemgil and Dikmen (2007). The corresponding prior on $s$ will be called the inverse Gamma Markov chain (IGMC) prior. The definition in (7) ensures that $\theta_1, \ldots, \theta_N$ are positively correlated, which imposes smoothing across the different bins $B_k$. Simultaneously, it ensures partial conjugacy in the Gibbs sampler that we derive below, leading to simple and tractable MCMC inference. In our experience, an uninformative choice of $\alpha_1$ and $\beta_1$ performs well in practice. We also endow $\alpha$ with a prior distribution, with hyperparameters chosen so as to render this hyperprior diffuse. As explained in Gugushvili et al. (2017) and Gugushvili et al. (2018a), the hyperparameter $N$ (or, equivalently, $m$) can be considered both as a smoothing parameter and as the resolution at which one wants to learn the volatility function. Obviously, given the limited amount of data, this resolution cannot be made arbitrarily fine. On the other hand, as shown in Gugushvili et al. (2018a) (see also Gugushvili et al. (2018b)), inference with the IGMC prior is quite robust with respect to a wide range of values of $N$, as the corresponding Bayesian procedure has an additional regularisation parameter $\alpha$ that is learned from the data. Statistical optimality results in Munk and Schmidt-Hieber (2010) suggest that in our setting $N$ should be chosen considerably smaller than in the case of an SDE observed without noise (which was studied via the IGMC prior in Gugushvili et al. (2018a)).
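A draw from this prior is easy to simulate. The following is a minimal Julia sketch, assuming the transition specification (7) as written above; the hyperparameter values in the example call are illustrative, not the authors' defaults:

```julia
# Draw a realisation of the IGMC prior on theta_1, ..., theta_N.
using Distributions  # InverseGamma(shape, scale)

function sample_igmc(N::Int, alpha::Real; alpha1::Real = 0.1, beta1::Real = 0.1)
    theta = Vector{Float64}(undef, N)
    theta[1] = rand(InverseGamma(alpha1, beta1))
    for k in 1:N-1
        zeta = rand(InverseGamma(alpha, alpha / theta[k]))    # zeta_{k+1} | theta_k
        theta[k+1] = rand(InverseGamma(alpha, alpha / zeta))  # theta_{k+1} | zeta_{k+1}
    end
    theta
end

theta = sample_igmc(40, 5.0)  # larger alpha yields stronger smoothing across bins
```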

2.3. Likelihood

Although an expression for the posterior of $(\theta_{1:N}, \eta_v, \alpha)$ can be written down in closed form, it is not amenable to computations. This problem is alleviated by following a data augmentation approach, in which the states $x_{0:n}$ are treated as missing data; cf. Tanner and Wong (1987). An expression for the joint density of all random quantities involved is easily derived from the prior specification and (5). We have

$p(Y_{1:n}, x_{0:n}, \theta_{1:N}, \zeta_{2:N}, \eta_v, \alpha) = p(\alpha)\, p(\eta_v)\, p(x_0)\, p(\theta_{1:N}, \zeta_{2:N} \mid \alpha)\, p(x_{1:n} \mid x_0, \theta_{1:N}) \prod_{i=1}^{n} p(Y_i \mid x_i, \eta_v).$

Except for $p(x_{1:n} \mid x_0, \theta_{1:N})$, all the densities have been specified directly in the previous subsections. To obtain an expression for the latter, define the index sets

$I_k = \{m(k-1)+1, \ldots, mk\}, \quad k = 1, \ldots, N-1, \qquad I_N = \{m(N-1)+1, \ldots, n\},$

and set

$Z_k = \frac{1}{2} \sum_{i \in I_k} \frac{(x_i - x_{i-1})^2}{\Delta_i}$

for $k = 1, \ldots, N$. Then

$p(x_{1:n} \mid x_0, \theta_{1:N}) = \prod_{k=1}^{N} \theta_k^{-|I_k|/2} \exp\left(-\frac{Z_k}{\theta_k}\right) \prod_{i \in I_k} (2\pi \Delta_i)^{-1/2}.$

2.4. Gibbs sampler

We use the Gibbs sampler to sample from the joint posterior distribution of $(x_{0:n}, \theta_{1:N}, \zeta_{2:N}, \eta_v, \alpha)$. The full conditionals of $\theta_{1:N}$, $\zeta_{2:N}$ and $\eta_v$ are easily derived from Section 2.3 and recognised to be of the inverse Gamma type. The parameter $\alpha$ can be updated via a Metropolis-Hastings step. For updating $x_{0:n}$, conditional on all other parameters, we use the standard Forward Filtering Backward Simulation (FFBS) algorithm for Gaussian state space models (cf. Section 4.4.3 in Petris et al. (2009)). The resulting Gibbs sampler is summarised in Algorithm 1. For details, see Appendix B.

Data: observations $Y_{1:n}$ and observation times $t_{0:n}$;
Hyperparameters: $\alpha_v, \beta_v$ (prior on $\eta_v$), $\alpha_1, \beta_1$ (prior on $\theta_1$), $\mu_0, C_0$ (prior on $x_0$), the bin size $m$, and the hyperprior on $\alpha$;
Result: posterior samples $\{(x_{0:n}^{(j)}, \theta_{1:N}^{(j)}, \zeta_{2:N}^{(j)}, \eta_v^{(j)}, \alpha^{(j)})\}_{j=1}^{J}$;
Initialization: starting values $\theta_{1:N}^{(0)}$, $\zeta_{2:N}^{(0)}$, $\eta_v^{(0)}$, $\alpha^{(0)}$; set $j = 1$;
while $j \le J$ do
       sample $x_{0:n}$ via FFBS;
       sample $\theta_{1:N}$ from the inverse Gamma full conditionals;
       sample $\zeta_{2:N}$ from the inverse Gamma full conditionals;
       sample $\eta_v$ from the inverse Gamma full conditional;
       sample $\alpha$ via a Metropolis-Hastings step;
       set $j = j + 1$.
end while
Algorithm 1 Gibbs sampler for volatility learning
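To make the structure of Algorithm 1 concrete, below is a self-contained toy instance in Julia: for brevity it uses a single volatility bin ($N = 1$), so the IGMC prior reduces to $\theta \sim \mathrm{IG}(\alpha_1, \beta_1)$ and the $\zeta$- and $\alpha$-updates drop out. It is an illustrative sketch under the full conditionals derived in Appendix B, not the authors' implementation (for which see Schauer and Gugushvili (2018)):

```julia
using Distributions

function ffbs(y, Δ, θ, η; μ0 = 0.0, C0 = 1.0)
    n = length(y)
    m = Vector{Float64}(undef, n); C = similar(m)
    mp, Cp = μ0, C0
    for i in 1:n                              # forward pass: Kalman filter
        R = Cp + θ * Δ[i]                     # prediction variance
        K = R / (R + η)                       # Kalman gain
        m[i] = mp + K * (y[i] - mp)
        C[i] = K * η
        mp, Cp = m[i], C[i]
    end
    x = Vector{Float64}(undef, n)
    x[n] = m[n] + sqrt(C[n]) * randn()
    for i in n-1:-1:1                         # backward pass: simulation
        w = θ * Δ[i+1]
        B = C[i] / (C[i] + w)
        x[i] = m[i] + B * (x[i+1] - m[i]) + sqrt(B * w) * randn()
    end
    x                                         # x_0 is not sampled in this toy version
end

function gibbs(y, Δ, niter; α1 = 0.1, β1 = 0.1, αv = 0.1, βv = 0.1)
    n = length(y)
    θ, η = 1.0, 1.0
    θs, ηs = Float64[], Float64[]
    for _ in 1:niter
        x = ffbs(y, Δ, θ, η)
        # theta | x ~ IG(alpha1 + (n-1)/2, beta1 + ½ Σ (x_i - x_{i-1})²/Δ_i)
        θ = rand(InverseGamma(α1 + (n - 1) / 2,
                              β1 + 0.5 * sum((x[i] - x[i-1])^2 / Δ[i] for i in 2:n)))
        # eta_v | x, y ~ IG(alpha_v + n/2, beta_v + ½ Σ (Y_i - x_i)²)
        η = rand(InverseGamma(αv + n / 2, βv + 0.5 * sum(abs2, y .- x)))
        push!(θs, θ); push!(ηs, η)
    end
    θs, ηs
end

# Demo with constant true volatility s² = 2 and noise variance 0.01.
n = 2000; Δ = fill(1.0 / n, n)
x = cumsum(sqrt.(2.0 .* Δ) .* randn(n))
y = x .+ sqrt(0.01) .* randn(n)
θs, ηs = gibbs(y, Δ, 1000)
```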

3. Synthetic data examples

In this section we study the practical performance of our method on challenging synthetic data examples. In each of them we take the time horizon $T = 1$ and generate observations as follows. First, a realisation of the process $X$ is obtained via Euler's scheme on a fine grid of time points sampled from the uniform distribution on $[0, 1]$, conditional on including $0$ and $1$. The observation times $t_1, \ldots, t_n$ are then taken as a random subsample of those times, conditional on including the endpoint $1$. The settings used for the Gibbs sampler are given in Appendix C. In each example below, we plot the posterior mean and the marginal 95% credible band (obtained from the 95% central posterior intervals for the coefficients in (6)). The former gives an estimate of the volatility, while the latter provides a means for uncertainty quantification. All the computations are performed in the programming language Julia, see Bezanson et al. (2017), and we provide the code used in our examples, see Schauer and Gugushvili (2018). The hardware and software specifications for the MacBook used to perform the simulations are: CPU: Intel(R) Core(TM) M-5Y31 CPU @ 0.90GHz; OS: macOS (x86_64-apple-darwin14.5.0).
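The following Julia sketch mirrors this data-generating pipeline (not the authors' code; the grid sizes nfine and n in the defaults are illustrative):

```julia
using Random

sample_wo(r, n) = shuffle!(collect(r))[1:n]   # sampling without replacement

function generate_data(b, s, x0, ηv; nfine = 80_000, n = 4_000)
    tfine = sort!(vcat(0.0, 1.0, rand(nfine - 2)))   # fine grid incl. 0 and 1
    xfine = Vector{Float64}(undef, nfine)
    xfine[1] = x0
    for i in 2:nfine                                  # Euler's scheme for (1)
        Δ = tfine[i] - tfine[i-1]
        xfine[i] = xfine[i-1] + b(tfine[i-1], xfine[i-1]) * Δ +
                   s(tfine[i-1]) * sqrt(Δ) * randn()
    end
    idx = sort!(vcat(sample_wo(2:nfine-1, n - 1), nfine))  # subsample incl. t = 1
    tfine[idx], xfine[idx] .+ sqrt(ηv) .* randn(n)          # noise as in (2)
end
```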

3.1. Fan & Gijbels function

Suppose the volatility function is given by

(8)   $s(t) = \frac{3}{2} + \sin\big(2(4t - 2)\big) + 2 \exp\big(-16 (4t - 2)^2\big), \qquad t \in [0, 1].$

This is a popular benchmark in nonparametric regression, which we call the Fan & Gijbels function (see Fan and Gijbels (1995)). The function was already used in the volatility estimation context in Gugushvili et al. (2017). We generated data using a non-zero drift coefficient $b$, so that the zero-drift assumption of Subsection 2.1 is deliberately violated. For the noise level $\eta_v$ we took a substantial value. Given the general difficulty of learning the volatility from noisy observations, one cannot expect to infer it at a very fine resolution (cf. the remarks in Subsection 2.2), and thus we opted for a relatively small number of bins $N$.
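In code, the reconstructed function (8) reads as follows; it can be passed as the argument s to the generate_data sketch given at the start of this section:

```julia
# The benchmark volatility (8).
s_fg(t) = 3 / 2 + sin(2 * (4t - 2)) + 2 * exp(-16 * (4t - 2)^2)
extrema(s_fg.(0:0.001:1))   # ≈ (0.5, 3.5): positive on the whole of [0, 1]
```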

Inference results are reported in Figure 1. It can be seen from the plot that we succeed in learning the volatility function. Note that the credible band does not cover the entire volatility function, but this had to be expected given that this is a marginal band. Quick mixing of the Markov chains can be visually assessed via the trace plots in Figure 5 in Appendix D. The algorithm took ca. 11 minutes to complete.

[Image: fan_posterior.pdf]

Figure 1. Posterior mean and pointwise credible band for the example of Subsection 3.1. The true volatility function is plotted in red, the black step function gives the posterior mean, and the credible band is shaded in blue.

3.2. Heston model

The Heston model (see Heston (1993)) is a widely used stochastic volatility model, in which it is assumed that the price process $S$ of a certain asset evolves over time according to the SDE

$dS_t = \mu S_t\,dt + \sqrt{Z_t}\, S_t\,dW_t,$

where the process $Z$ follows the CIR or square root process (see Cox et al. (1985)),

$dZ_t = \kappa(\vartheta - Z_t)\,dt + \sigma \sqrt{Z_t}\,dB_t.$

Here $W$ and $B$ are correlated Wiener processes with correlation $\rho$. Following a usual approach in quantitative finance, we work not with $S$ directly, but with its logarithm $\log S$ (according to Itô's formula, it obeys a diffusion equation with volatility $\sqrt{Z_t}$). We assume high-frequency observations on the log-price process, contaminated with additive noise as in (2). In this setup the volatility is a random function. We assume no further knowledge of the data generation mechanism. This setting is extremely challenging and puts our method to a serious test. To generate data, we used parameter values that mimic the ones obtained by fitting the Heston model to real data (see Table 5.2 in van der Ploeg (2006)). Furthermore, the noise variance $\eta_v$ was chosen so as to ensure a sufficiently large signal-to-noise ratio in the model (5), which can be quantified via the ratio of the intrinsic noise level $w_i^{1/2}$ to the external noise level $\eta_v^{1/2}$. Finally, the chosen value of $\mu$ corresponds to a reasonable log-return rate.

We report the inference results in Figure 2. These are surprisingly accurate, given the general difficulty of the problem and the number of assumptions that went into the learning procedure. The Markov chains mix rapidly, as evidenced by the trace plots in Figure 6 in Appendix D. The simulation run took ca. 12 minutes to complete. Finally, we note that the Heston model does not formally fall into the framework under which our Bayesian method was derived (a deterministic volatility function $s$). We expect that a theoretical explanation of why our approach also works in this case requires intricate technical arguments, which lie beyond the scope of the present, practically oriented paper.

[Image: heston_posterior.pdf]

Figure 2. Posterior mean and pointwise credible band for the example of Subsection 3.2. The true volatility function is plotted in red, the black step function gives the posterior mean, and the credible band is shaded in blue.

4. Exchange rate data

Unlike daily stock quotes or exchange rate series, which can easily be obtained via several online resources (e.g., Yahoo Finance), high frequency financial data are rarely freely accessible for academia. In this section, we apply our methodology to infer the volatility of high frequency foreign exchange rate data made available by Pepperstone Limited, a London-based forex broker (see https://pepperstone.com/uk/client-resources/historical-tick-data, accessed on 25 April 2018). The data are stored as csv files containing the dates and times of transactions and the bid and ask prices. Specifically, we use the EUR/USD tick data (bid prices) for 2 March 2015. As the independent additive measurement error model (2) becomes harder to justify for very densely spaced data, we work with subsampled data, retaining only every $k$-th observation for a fixed $k$; subsampling is a common strategy in this context. This leaves a much smaller number of observations over one day. As in Subsection 3.2, we apply the log-transformation to the observed time series, and assume the additive measurement error model (2). The data are displayed in Figure 3.
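A Julia sketch of this preprocessing step (illustrative: the actual layout of the Pepperstone csv files may differ, and the subsampling factor k and the file name below are hypothetical):

```julia
using DelimitedFiles

function load_ticks(path::AbstractString; k::Int = 10, bidcol::Int = 2)
    raw = readdlm(path, ',')               # assumes no header row
    bids = Float64.(raw[1:k:end, bidcol])  # retain every k-th bid quote
    log.(bids)                             # work with log-prices, as in the text
end

# y = load_ticks("EURUSD-2015-03-02.csv")  # hypothetical file name
```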

[Images: eurus4_path.pdf, eurus4_posterior.pdf]

Figure 3. Top: natural logarithm of the EUR/USD exchange rate data for 2 March 2015 analysed in Section 4. Bottom: posterior mean (black curve) and pointwise 95% credible band (blue band) for the volatility function. The time axis is rescaled to $[0, 1]$.

Financial time series often contain jumps, accurate detection of which is a delicate task. As this section serves illustrative purposes only, for simplicity we ignore jumps in our analysis. For high frequency data, volatility estimates are expected to recover quickly after infrequent jumps in the underlying process. This should in particular be the case for our learning procedure, given that we model the volatility as a piecewise constant function, which can be viewed as an estimate of the "histogramised" true volatility. Indeed, the effects of infrequent jumps in the underlying process are likely to be smoothed out by averaging (we make no claim of robustness of our procedure with respect to the possible presence of large jumps in the process $X$, or of densely clustered small jumps). An alternative would have been to use a heuristic jump removal technique, such as the ones discussed in Sabel et al. (2015), and then apply our Bayesian procedure to the resulting "cleaned" data.

We report the inference results in Figure 3, where time is rescaled to the interval $[0, 1]$, so that $0$ corresponds to midnight and $0.5$ to noon. As seen from the plot, there is considerable uncertainty in the volatility estimates. Understandably, the volatility is lower during the night hours. It also displays two peaks, corresponding to the early morning and late afternoon parts of the day. Finally, we give several trace plots of the generated Markov chains in Figure 7. The algorithm took ca. 33 minutes to complete. In Figure 4 we give inference results obtained via further subsampling of the data, retaining only a fraction of the observations. The posterior mean is quite similar to that in Figure 3, whereas the wider credible band reflects the greater inferential uncertainty due to a smaller sample size. The figure provides a partial validation of the model we use.

[Image: eurus5_posterior.pdf]

Figure 4. Posterior mean (black curve) and pointwise 95% credible band (blue band) for the volatility of the further subsampled EUR/USD exchange rate data analysed in Section 4. The time axis is rescaled to $[0, 1]$.

5. Discussion

In this paper we studied a practical nonparametric Bayesian approach to volatility learning under microstructure noise. From the point of view of statistical theory, the problem is much more difficult than volatility learning from noiseless observations. Hence, accurate inference on the volatility under microstructure noise requires large amounts of data, which, fortunately, are available in financial applications. On the other hand, it becomes important to devise a learning method that scales well with the size of the data. This is indeed the case for the technique we study in this work. Our prior specification and a deliberate, but asymptotically harmless, misspecification of the drift by taking $b = 0$ enable us to combine our earlier work in Gugushvili et al. (2018a) with the FFBS algorithm for Gaussian linear state space models. This directly gives a conceptually simple algorithm (a Gibbs sampler) to obtain samples from the posterior, from which measures of inferential uncertainty can be extracted in a straightforward way. Given the promising results we have obtained, we plan to pursue the current research direction further, exploring, for example, online Bayesian learning of the volatility. This may require the use of sequential Monte Carlo methods (see, e.g., Cappé et al. (2007)). Another very interesting topic is to explicitly account for the possible presence of jumps in financial time series within the statistical model.

Appendix A

We denote the inverse Gamma distribution with shape parameter $\alpha$ and scale parameter $\beta$ by $\mathrm{IG}(\alpha, \beta)$. Its density is

$p(x) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} x^{-\alpha - 1} e^{-\beta / x}, \qquad x > 0.$

By $N(\mu, \sigma^2)$ we denote a normal distribution with mean $\mu$ and variance $\sigma^2$. The uniform distribution on an interval $[a, b]$ is denoted by $U([a, b])$. For a random variate $X$, the notation $X \sim p$ stands for the fact that $X$ is distributed, or is drawn, according to a density $p$. Conditioning of a random variate $X$ on a random variate $Y$ is denoted by $X \mid Y$. By $\lfloor x \rfloor$ we denote the integer part of a real number $x$. The notation $q \propto p$ for a density $p$ denotes the fact that a positive function $q$ is an unnormalised density corresponding to $p$: $p$ can be recovered from $q$ as $p = q / \int q(x)\,dx$. Finally, we use the shorthand notation $Y_{1:n} = (Y_1, \ldots, Y_n)$.

Appendix B

B.1. Drawing $x_{0:n}$

We first describe how to draw the state vector $x_{0:n}$ conditional on all other parameters in the model and the data $Y_{1:n}$. Note that for $u_i$ in (5) we have by (4) that $u_i \sim N(0, w_i)$, where

(9)   $w_i = \theta_k \Delta_i \quad \text{for } i \in I_k.$

By equation (4.21) in Petris et al. (2009), write (we omit dependence on $\theta_{1:N}$ and $\eta_v$ in our notation, as they stay fixed in this step)

$p(x_{0:n} \mid Y_{1:n}) = \prod_{i=0}^{n} p(x_i \mid x_{i+1:n}, Y_{1:n}),$

where the factor with $i = n$ in the product on the right-hand side is the filtering density $p(x_n \mid Y_{1:n})$. This distribution is in fact $N(m_n, C_n)$, with the mean and variance obtained from the Kalman recursions

$m_i = m_{i-1} + K_i e_i, \qquad C_i = K_i \eta_v, \qquad i = 1, \ldots, n,$

initialised at the prior mean and variance $m_0 = \mu_0$ and $C_0$ of $x_0$. Here

$K_i = \frac{C_{i-1} + w_i}{C_{i-1} + w_i + \eta_v}$

is the Kalman gain. Furthermore, $e_i = Y_i - m_{i-1}$ is the one-step ahead prediction error. See Petris et al. (2009), Section 2.7.2. This constitutes the forward pass of the FFBS.

Next, in the backward pass, one draws backwards in time: first $x_n$ from $N(m_n, C_n)$, and then $x_i$ from the densities $p(x_i \mid x_{i+1}, Y_{1:i})$ for $i = n-1, \ldots, 0$. It holds that $p(x_i \mid x_{i+1:n}, Y_{1:n}) = p(x_i \mid x_{i+1}, Y_{1:i})$, and the latter distribution is $N(h_i, H_i)$, with

$h_i = m_i + \frac{C_i}{C_i + w_{i+1}} (x_{i+1} - m_i), \qquad H_i = \frac{C_i w_{i+1}}{C_i + w_{i+1}}.$

For every $i$, these expressions depend on a previously generated $x_{i+1}$ and other known quantities only. The sequence $x_{0:n}$ is a sample from $p(x_{0:n} \mid \theta_{1:N}, \eta_v, Y_{1:n})$. See Section 4.4.1 in Petris et al. (2009) for details on FFBS.
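In Julia, the two passes can be written as follows (an illustrative sketch following the recursions above, not the authors' implementation; the states $x_0, \ldots, x_n$ are drawn given the increment variances $w_i = \theta_k \Delta_i$, the noise variance $\eta_v$, and the prior $x_0 \sim N(\mu_0, C_0)$):

```julia
# FFBS for model (5); the returned vector stores x_i at position i + 1.
function ffbs_states(y, w, η; μ0 = 0.0, C0 = 1.0)
    n = length(y)
    m = Vector{Float64}(undef, n); C = similar(m)
    mp, Cp = μ0, C0
    for i in 1:n                               # forward pass: Kalman filter
        R = Cp + w[i]                          # one-step prediction variance
        e = y[i] - mp                          # one-step ahead prediction error
        K = R / (R + η)                        # Kalman gain
        m[i] = mp + K * e
        C[i] = K * η                           # equals (1 - K_i) R_i
        mp, Cp = m[i], C[i]
    end
    x = Vector{Float64}(undef, n + 1)
    x[n+1] = m[n] + sqrt(C[n]) * randn()       # x_n ~ N(m_n, C_n)
    for i in n-1:-1:1                          # backward pass
        B = C[i] / (C[i] + w[i+1])
        x[i+1] = m[i] + B * (x[i+2] - m[i]) + sqrt(B * w[i+1]) * randn()
    end
    B0 = C0 / (C0 + w[1])                      # finally, draw x_0 | x_1
    x[1] = μ0 + B0 * (x[2] - μ0) + sqrt(B0 * w[1]) * randn()
    x
end
```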

B.2. Drawing $\eta_v$

Using the likelihood expression from Subsection 2.3 and the fact that a priori $\eta_v \sim \mathrm{IG}(\alpha_v, \beta_v)$, one sees that the full conditional distribution of $\eta_v$ is given by

$\eta_v \mid x_{0:n}, Y_{1:n} \sim \mathrm{IG}\left(\alpha_v + \frac{n}{2},\; \beta_v + \frac{1}{2} \sum_{i=1}^{n} (Y_i - x_i)^2\right).$

B.3. Drawing $\theta_{1:N}$ and $\zeta_{2:N}$

Using the likelihood expression from Subsection 2.3 and the conditional distributions in (7), one sees that the full conditional distributions for $\theta_1, \ldots, \theta_N$ are

$\theta_1 \mid \cdot \sim \mathrm{IG}\left(\alpha_1 + \alpha + \frac{|I_1|}{2},\; \beta_1 + \frac{\alpha}{\zeta_2} + Z_1\right),$

$\theta_k \mid \cdot \sim \mathrm{IG}\left(2\alpha + \frac{|I_k|}{2},\; \frac{\alpha}{\zeta_k} + \frac{\alpha}{\zeta_{k+1}} + Z_k\right), \qquad k = 2, \ldots, N-1,$

$\theta_N \mid \cdot \sim \mathrm{IG}\left(\alpha + \frac{|I_N|}{2},\; \frac{\alpha}{\zeta_N} + Z_N\right).$

The full conditional distributions for $\zeta_2, \ldots, \zeta_N$ are

$\zeta_k \mid \cdot \sim \mathrm{IG}\left(2\alpha,\; \frac{\alpha}{\theta_{k-1}} + \frac{\alpha}{\theta_k}\right), \qquad k = 2, \ldots, N.$
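In code, one sweep of these updates can be sketched as follows (assuming the full conditionals as reconstructed above; requires $N \ge 2$):

```julia
# Here Z[k] = ½ Σ_{i ∈ I_k} (x_i - x_{i-1})²/Δ_i and sz[k] = |I_k| are computed
# from the current draw of x; ζ is a length-N vector with ζ[1] unused.
using Distributions

function update_theta_zeta!(θ, ζ, Z, sz, α, α1, β1)
    N = length(θ)
    for k in 2:N        # ζ_k | θ_{k-1}, θ_k ~ IG(2α, α/θ_{k-1} + α/θ_k)
        ζ[k] = rand(InverseGamma(2α, α / θ[k-1] + α / θ[k]))
    end
    θ[1] = rand(InverseGamma(α1 + α + sz[1] / 2, β1 + α / ζ[2] + Z[1]))
    for k in 2:N-1
        θ[k] = rand(InverseGamma(2α + sz[k] / 2, α / ζ[k] + α / ζ[k+1] + Z[k]))
    end
    θ[N] = rand(InverseGamma(α + sz[N] / 2, α / ζ[N] + Z[N]))
    θ, ζ
end
```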

B.4. Drawing $\alpha$

The unnormalised full conditional density of $\alpha$ follows from (7) and is

$p(\alpha \mid \zeta_{2:N}, \theta_{1:N}) \propto p(\alpha) \left(\frac{\alpha^{\alpha}}{\Gamma(\alpha)}\right)^{2(N-1)} \exp\left(-\alpha \sum_{k=2}^{N} \left[ \log(\theta_{k-1}\zeta_k) + \log(\zeta_k \theta_k) + \frac{1}{\theta_{k-1}\zeta_k} + \frac{1}{\zeta_k \theta_k} \right]\right),$

where $p(\alpha)$ denotes the hyperprior on $\alpha$. The corresponding normalised density is nonstandard, and the Metropolis-within-Gibbs step (see, e.g., Tierney (1994)) is used to update $\alpha$. The specific details are exactly the same as in Gugushvili et al. (2018a).
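A sketch of this step in Julia, assuming the unnormalised full conditional written above (the $\mathrm{Exp}(1)$ prior on $\alpha$ and the proposal scale are our illustrative assumptions, not the authors' choices):

```julia
# Metropolis-within-Gibbs update of alpha with a Gaussian random walk proposal;
# proposals outside (0, ∞) are rejected, cf. Wilkinson (2012).
using SpecialFunctions  # loggamma

function logpost_alpha(α, θ, ζ)
    N = length(θ)
    lp = -α + 2 * (N - 1) * (α * log(α) - loggamma(α))  # Exp(1) prior + normalisers
    for k in 2:N
        lp -= α * (log(θ[k-1] * ζ[k]) + log(ζ[k] * θ[k]) +
                   1 / (θ[k-1] * ζ[k]) + 1 / (ζ[k] * θ[k]))
    end
    lp
end

function update_alpha(α, θ, ζ; scale = 0.5)
    prop = α + scale * randn()
    prop <= 0 && return α                  # outside the support: reject
    log(rand()) < logpost_alpha(prop, θ, ζ) - logpost_alpha(α, θ, ζ) ? prop : α
end
```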

Appendix C

The settings for the Gibbs sampler used in Section 3 are as follows: we used a vague specification for the prior on the noise variance $\eta_v$, and similarly diffuse choices for the hyperparameters $\alpha_1, \beta_1$ of $\theta_1$ and for the prior on the initial state $x_0$ (for the Heston model a slightly different specification was used). Furthermore, we fixed the numbers of bins $N$ as reported in Section 3. The Metropolis-within-Gibbs step to update the hyperparameter $\alpha$ was performed via a Gaussian random walk proposal (with a correction for the restricted support of $\alpha$ as in Wilkinson (2012)), with the scaling chosen so as to ensure a reasonable acceptance rate. The Gibbs sampler was run for a large number of iterations, with the first third of the samples dropped as burn-in.

Appendix D

This appendix gives several trace plots for data examples we analysed in the main body of the paper.

[Images: fan_trace_part1.pdf, fan_trace_part2.pdf]

Figure 5. Trace plots for the example of Subsection 3.1. Left: initial part of the run; right: subsequent samples.
[Images: heston_trace_part1.pdf, heston_trace_part2.pdf]

Figure 6. Trace plots for the example of Subsection 3.2. Left: initial part of the run; right: subsequent samples.
[Image: eurus4_trace_log.pdf]

Figure 7. Log trace plots for the real data example of Section 4.

References

  • Bezanson et al. [2017] J. Bezanson, A. Edelman, S. Karpinski, and V. B. Shah. Julia: a fresh approach to numerical computing. SIAM Rev., 59(1):65–98, 2017. ISSN 0036-1445.
  • Cappé et al. [2007] O. Cappé, S. J. Godsill, and E. Moulines. An overview of existing methods and recent advances in sequential Monte Carlo. Proceedings of the IEEE, 95(5):899–924, May 2007. ISSN 0018-9219.
  • Cemgil and Dikmen [2007] A. T. Cemgil and O. Dikmen. Conjugate gamma Markov random fields for modelling nonstationary sources. In Mike E. Davies, Christopher J. James, Samer A. Abdallah, and Mark D. Plumbley, editors, Independent Component Analysis and Signal Separation: 7th International Conference, ICA 2007, London, UK, September 9–12, 2007. Proceedings, pages 697–705. Springer Berlin Heidelberg, Berlin, Heidelberg, 2007. ISBN 978-3-540-74494-8.
  • Cox et al. [1985] John C. Cox, Jonathan E. Ingersoll, and Stephen A. Ross. A theory of the term structure of interest rates. Econometrica, 53(2):385–407, 1985.
  • Fan and Gijbels [1995] Jianqing Fan and Irène Gijbels. Data-driven bandwidth selection in local polynomial fitting: variable bandwidth and spatial adaptation. J. Roy. Statist. Soc. Ser. B, 57(2):371–394, 1995. ISSN 0035-9246.
  • Ghosal and van der Vaart [2017] S. Ghosal and A. van der Vaart. Fundamentals of nonparametric Bayesian inference, volume 44 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge, 2017. ISBN 978-0-521-87826-5.
  • Gugushvili et al. [2017] S. Gugushvili, F. van der Meulen, M. Schauer, and P. Spreij. Nonparametric Bayesian estimation of a Hölder continuous diffusion coefficient. ArXiv e-prints, 2017.
  • Gugushvili et al. [2018a] S. Gugushvili, F. van der Meulen, M. Schauer, and P. Spreij. Nonparametric Bayesian volatility estimation. ArXiv e-prints, 2018a.
  • Gugushvili et al. [2018b] S. Gugushvili, F. van der Meulen, M. Schauer, and P. Spreij. Fast and scalable non-parametric Bayesian inference for Poisson point processes. ArXiv e-prints, 2018b.
  • Heston [1993] Steven L. Heston. A closed-form solution for options with stochastic volatility with applications to bond and currency options. Rev. Financial Stud., 6(2):327–343, 1993.
  • Hoffmann et al. [2012] M. Hoffmann, A. Munk, and J. Schmidt-Hieber. Adaptive wavelet estimation of the diffusion coefficient under additive error measurements. Ann. Inst. Henri Poincaré Probab. Stat., 48(4):1186–1216, 2012. ISSN 0246-0203.
  • Ignatieva and Platen [2012] K. Ignatieva and E. Platen. Estimating the diffusion coefficient function for a diversified world stock index. Comput. Statist. Data Anal., 56(6):1333–1349, 2012. ISSN 0167-9473.
  • Jacod et al. [2009] J. Jacod, Y. Li, P.A. Mykland, M. Podolskij, and M. Vetter. Microstructure noise in the continuous case: the pre-averaging approach. Stochastic Process. Appl., 119(7):2249–2276, 2009. ISSN 0304-4149.
  • Müller and Mitra [2013] P. Müller and R. Mitra. Bayesian nonparametric inference—why and how. Bayesian Anal., 8(2):269–302, 2013. ISSN 1936-0975.
  • Müller et al. [2015] P. Müller, F. A. Quintana, A. Jara, and T. Hanson. Bayesian nonparametric data analysis. Springer Series in Statistics. Springer, Cham, 2015. ISBN 978-3-319-18967-3; 978-3-319-18968-0.
  • Munk and Schmidt-Hieber [2010] A. Munk and J. Schmidt-Hieber. Lower bounds for volatility estimation in microstructure noise models. In Borrowing strength: theory powering applications—a Festschrift for Lawrence D. Brown, volume 6 of Inst. Math. Stat. (IMS) Collect., pages 43–55. Inst. Math. Statist., Beachwood, OH, 2010.
  • Musiela and Rutkowski [2005] M. Musiela and M. Rutkowski. Martingale methods in financial modelling, volume 36 of Stochastic Modelling and Applied Probability. Springer-Verlag, Berlin, 2 edition, 2005. ISBN 3-540-20966-2.
  • Mykland and Zhang [2009] P. A. Mykland and L. Zhang. Inference for continuous semimartingales observed at high frequency. Econometrica, 77(5):1403–1445, 2009. ISSN 0012-9682.
  • Mykland and Zhang [2012] P. A. Mykland and L. Zhang. The econometrics of high-frequency data. In Statistical methods for stochastic differential equations, volume 124 of Monogr. Statist. Appl. Probab., pages 109–190. CRC Press, Boca Raton, FL, 2012.
  • Papaspiliopoulos et al. [2013] Omiros Papaspiliopoulos, Gareth O. Roberts, and Osnat Stramer. Data augmentation for diffusions. J. Comput. Graph. Statist., 22(3):665–688, 2013. ISSN 1061-8600.
  • Petris et al. [2009] G. Petris, S. Petrone, and P. Campagnoli. Dynamic linear models with R. Use R! Springer, New York, 2009. ISBN 978-0-387-77237-0.
  • Reiß [2011] M. Reiß. Asymptotic equivalence for inference on the volatility from noisy observations. Ann. Statist., 39(2):772–802, 2011. ISSN 0090-5364.
  • Sabel et al. [2015] T. Sabel, J. Schmidt-Hieber, and A. Munk. Spot volatility estimation for high-frequency data: adaptive estimation in practice. In Modeling and stochastic learning for forecasting in high dimensions, volume 217 of Lect. Notes Stat., pages 213–241. Springer, Cham, 2015.
  • Schauer and Gugushvili [2018] Moritz Schauer and Shota Gugushvili. MicrostructureNoise, May 2018. URL https://doi.org/10.5281/zenodo.1241024.
  • Silverman [1986] B. W. Silverman. Density estimation for statistics and data analysis. Monographs on Statistics and Applied Probability. Chapman & Hall, London, 1986. ISBN 0-412-24620-1.
  • Tanner and Wong [1987] M. A. Tanner and W. H. Wong. The calculation of posterior distributions by data augmentation. J. Amer. Statist. Assoc., 82(398):528–550, 1987. ISSN 0162-1459. With discussion and with a reply by the authors.
  • Tierney [1994] L. Tierney. Markov chains for exploring posterior distributions. Ann. Statist., 22(4):1701–1762, 1994. ISSN 0090-5364. With discussion and a rejoinder by the author.
  • van der Meulen and Schauer [2017] F. van der Meulen and M. Schauer. Bayesian estimation of discretely observed multi-dimensional diffusion processes using guided proposals. Electron. J. Stat., 11(1):2358–2396, 2017. ISSN 1935-7524.
  • van der Ploeg [2006] A. P. C. van der Ploeg. Stochastic volatility and the pricing of financial derivatives. PhD thesis, University of Amsterdam, 2006.
  • Wilkinson [2012] D. J. Wilkinson. Metropolis Hastings MCMC when the proposal and target have differing support, 2012. URL https://darrenjw.wordpress.com/2012/06/04/metropolis-hastings-mcmc-when-the-proposal-and-target-have-differing-support/. Online; accessed on 23 December 2017.