
# Towards information optimal simulation of partial differential equations

Reimar H. Leike, Torsten A. Enßlin Max-Planck-Institut für Astrophysik, Karl-Schwarzschildstr. 1, 85748 Garching, Germany
Ludwig-Maximilians-Universität München, Geschwister-Scholl-Platz 1, 80539 Munich, Germany
###### Abstract

Most simulation schemes for partial differential equations (PDEs) focus on minimizing a simple error norm of a discretized version of a field. This paper takes a fundamentally different approach; the discretized field is interpreted as data providing information about a real physical field that is unknown. This information is sought to be conserved by the scheme as the field evolves in time. Such an information theoretic approach to simulation was pursued before by information field dynamics (IFD). In this paper we work out the theory of IFD for nonlinear PDEs in a noiseless Gaussian approximation. The result is an action that can be minimized to obtain an informationally optimal simulation scheme. It can be brought into a closed form using field operators to calculate the appearing Gaussian integrals. The resulting simulation schemes are tested numerically in two instances for the Burgers equation. Their accuracy surpasses finite-difference schemes on the same resolution. The IFD scheme, however, has to be correctly informed on the subgrid correlation structure. In certain limiting cases we recover well-known simulation schemes like spectral Fourier Galerkin methods. We discuss implications of the approximations made.

Keywords: information theory, Bayesian inference, simulation, partial differential equation

## I Introduction

Simulation of partial differential equations (PDEs) is a wide field with countless applications. This stems from the fact that no general analytic solution is known for most PDEs of practical interest. Thus one has to resort to simulation in order to make predictions about the behavior of the solutions. PDEs are differential equations for fields, which have infinitely many degrees of freedom. However, on a computer one cannot store the whole field at any point in time. Furthermore, the time evolution has to be discretized as well, because time is a continuous variable.

Information field dynamics (IFD) (Enßlin, 2013) takes an approach that differs slightly on a fundamental level from conventional field discretization. Instead of simulating a discretized field, finite information about the real continuous field is stored in a data vector, as if it were obtained from a measurement. The time evolution of the data is then derived from the evolution of the real field. This interpretation enables the application of information theory, specifically information field theory (Lemm, 1999; Enßlin et al., 2009) which is information theory for the reconstruction of fields.

The application of information theory to improve or better understand existing numerical methods is not new. One of the early prominent examples is Ref. (Diaconis, 1988), where Bayesian inference is used to compute integrals. There are also examples of groups working on applying information theory to simulations. Some historic examples are (Kailath, 1969; Joseph, 1968; Jazwinski, 1970; Doucet, 1998), who regard the problem as a hidden Markov model which is then treated in a Bayesian fashion through a filtering approach. See e.g. (Bruno, 2013) for an overview and (Ridgeway and Madigan, 2003) for a more generic review of sequential Monte Carlo methods. There is still ongoing research on the filtering approach, see e.g. (Branicki and Majda, 2014; Sullivan et al.; Kersting and Hennig, 2016). These methods can also be applied to neural networks, see e.g. (De Freitas et al., 2001). In some cases, for example for linear differential equations, one can infer the solution directly (Raissi et al., 2017a). Other approaches focus on parametrizing the posterior as a Gaussian and learning the dynamics in a way motivated by machine learning (Raissi et al., 2017b).

The approach that is probably the closest to the one in this paper is described in (Archambeau et al., 2007), where stochastic differential equations are approximately solved using a variational approach. The differences to our approach lie in the way the probability density is parametrized and how the KL divergence is used.

All in all, Bayesian simulation is an active and growing field of research.

In our approach, we do not rely on sampling, nor are we restricted to linear PDEs. Instead, we approximate the true evolved probability distribution of solutions by a parametrized one in each time step, and choose the parameters so that the loss of information is minimized. In doing so, we parameterize the probability distribution such that it mimics a physical measurement instrument.

In Sec. II the reader is introduced to the fundamental concepts of IFD and a general formula for discretizing PDEs. This formula is then tested in Sec. III for its numerical performance. Advantages and disadvantages of the proposed scheme are discussed in Sec. IV. We conclude in Sec. V.

## II General formalism

IFD is a formalism for simulating differential equations for fields of the form

$$\frac{ds}{dt} = f(s) \tag{1}$$

using only the finite resources that are available on computers. This implies that from the infinite degrees of freedom of a field $s$, only finitely many can be taken into account. IFD is a specific kind of Bayesian forward simulation scheme.

A forward simulation scheme takes a data vector $d_0$ and returns a new data vector $d_1$. Here a data vector refers to an array of numbers on a computer. The data $d_0$ are supposed to contain information about the real physical field $s$ at time $t_0$. What kind of information $d_0$ contains is also specified by the simulation scheme. The new data vector $d_1$ is supposed to contain information about the field at the time $t_1$. One then iterates the application of this scheme until one arrives at a target time. On this abstract level IFD yields a forward simulation scheme. The difference to the construction of most other simulation schemes is its information theoretic foundation and very restrictive formalism. The formalism is restrictive in the sense that once it is defined what information the data $d$ contain about the field $s$, the time evolution of the scheme is completely specified.

IFD attempts to mimic the optimal Bayesian simulation. In an optimal Bayesian simulation we take the knowledge about the initial conditions and then compute the time evolved probability density using the exact analytic time evolution

$$P(s_t|\text{``init''}) = P\big(s_0(s_t)\,\big|\,\text{``init''}\big)\,\left|\frac{\partial s_0(s_t)}{\partial s_t}\right|\,. \tag{2}$$

Here we have assumed that there exists an exact solution for the PDE to be simulated (at least up to a zero set of initial conditions). Thus there is a one-to-one mapping between fields $s_0$ at the initial time and fields $s_t$ at time $t$, such that we can write the initial field $s_0$ as a function of the later field $s_t$, or vice versa. We denote by $\left|\partial s_0(s_t)/\partial s_t\right|$ the absolute value of the Jacobi determinant that arises from transforming the probability density. Note that no information is lost, since time forward and backward evolution is a one-to-one mapping of the phase space of the field $s$. This optimal Bayesian simulation scheme is practically not accessible in most interesting cases because it requires the exact (backward) time evolution to be known, the Jacobi determinant to be computable, and the storage of whole probability densities over fields. In this paper, we propose a scheme that aims to overcome these limitations at the cost of losing some information in the process of simulation. It does so by parameterizing the probability density and then evolving these parameters such that the least amount of information is lost in each of the small time steps. We proceed by describing in detail how the probability density is parametrized; then we describe how we approximate the time evolution.
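To make Eq. (2) concrete, the following sketch (our illustration, not from the paper, with arbitrary toy parameters) transports a Gaussian density through the exact solution of the linear ODE $ds/dt = a\,s$ and checks by quadrature that the transported density stays normalized, i.e. that the exact evolution loses no information:

```python
import numpy as np

# Our toy illustration of Eq. (2): for ds/dt = a*s the exact evolution is
# U(s0) = exp(a*t)*s0, with inverse s0(st) = exp(-a*t)*st and Jacobi
# determinant |ds0/dst| = exp(-a*t).
a, t = 0.7, 1.3
m0, var0 = 0.5, 2.0  # moments of the initial Gaussian density

def p0(s0):
    return np.exp(-0.5 * (s0 - m0) ** 2 / var0) / np.sqrt(2 * np.pi * var0)

def pt(st):
    s0 = np.exp(-a * t) * st   # backward evolution s0(st)
    jacobian = np.exp(-a * t)  # |ds0/dst|
    return p0(s0) * jacobian

# quadrature check: the transported density is still normalized
s = np.linspace(-60.0, 60.0, 200001)
norm = np.sum(pt(s)) * (s[1] - s[0])
```

The same construction with a nonlinear $U$ would additionally require the full field-dependent Jacobian, which is exactly what makes the optimal scheme impractical.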

In IFD we store a finite amount of data $d$ on the actual continuous field $s$, as if measured by an instrument whose action is described by a measurement equation of the following form:

$$d = R\big(s(t)\big) + n\,. \tag{3}$$

Here $n$ is accumulated numerical noise and $R$ is some response function. As an example for $R$ one could choose a matrix of Dirac $\delta$-distributions to mimic point measurements of the field $s$ at certain locations. This measurement equation does not imply that there is an actual measurement; it just defines the probability theoretic connection between the data $d$ on our computer and the actual physical field $s$ that we want to simulate. The initial conditions of the PDE will determine the first data $d_0$; future data will then be determined by the scheme to mimic the time evolution of the field $s$. To recover the full field $s_0$ from the data $d_0$ at the initial time one can use Bayes' theorem:

$$P(s_0|d_0) = \frac{P(d_0|s_0)\,P(s_0)}{P(d_0)}\,. \tag{4}$$

For this, a prior $P(s_0)$ is necessary. It reflects the knowledge about $s$ when no data are available. A simulation scheme also has to discretize the time evolution such that in a time step from $t_0$ to $t_1$ the data get updated from $d_0$ to $d_1$. In the language just introduced, the purpose of a simulation scheme is to choose a proper time discretization that is as close to the real evolution of $s$ as possible (or even equal, if feasible) and a proper discretization of space such that the features of the field are represented well.

In IFD the time evolution of the data is defined indirectly: we assign $d_1$ such that the posterior $P(s_1|d_1)$ using our new data matches the time-evolved probability density $P(s_1|d_0)$ using our old data as well as possible. Note that $P(s_1|d_0)$ is what we defined to be the optimal Bayesian simulation, but carried out only for a small time step, where a linearization of the time evolution might still be justified. Because we cannot store the whole density $P(s_1|d_0)$, we store an approximation of it that uses the same parametrization as the probability density $P(s_0|d_0)$, but with new values assigned to the parameters such that it approximates the time-evolved probability density as well as possible. For probability densities corresponding to a Bayesian belief, there is only one consistent notion of "approximating as well as possible", given the two requirements that the optimal approximation is no approximation and that an approximation can be judged by what it predicts for actual outcomes. We refer to Ref. (Leike and Enßlin, 2016a) for a practice-oriented discussion of why this uniquely determines the "approximation" KL divergence as the appropriate loss function to be used here; see Ref. (Bernardo, 1979) for the original proof for probability densities. This proposed loss differs from that in the originally proposed IFD scheme (Enßlin, 2013) and leads to matching the two distributions via

$$\mathrm{KL}(d_0, d_1) = \int \mathcal{D}s_1\, P(s_1|d_0)\,\ln\frac{P(s_1|d_0)}{P(s_1|d_1)}\,. \tag{5}$$

In this matching, $d_0$ is given and $d_1$ is searched for, such that the KL divergence serves as an action that is minimized to obtain the discretized time evolution $d_0 \to d_1$.
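For intuition about this action, consider the one-dimensional case (our hedged illustration, with arbitrary toy moments): the KL divergence of Eq. (5) between the Gaussian implied by the old data and the one implied by a candidate new data vector has a well-known closed form, which we verify against direct numerical integration.

```python
import numpy as np

# Toy check of the Gaussian KL divergence used as the IFD action:
#   KL = 0.5 * (log(v1/v0) + v0/v1 + (m0 - m1)**2 / v1 - 1)
m0, v0 = 0.3, 1.2  # mean and variance implied by the old data d0 (toy values)
m1, v1 = 0.1, 0.9  # mean and variance implied by a candidate d1 (toy values)

def gauss(s, m, v):
    return np.exp(-0.5 * (s - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

s = np.linspace(-20.0, 20.0, 400001)
p, q = gauss(s, m0, v0), gauss(s, m1, v1)
kl_quadrature = np.sum(p * np.log(p / q)) * (s[1] - s[0])
kl_closed = 0.5 * (np.log(v1 / v0) + v0 / v1 + (m0 - m1) ** 2 / v1 - 1.0)
```

Minimizing this quantity over the parameters of the second Gaussian is exactly the matching step that defines the data update in IFD.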

It was also proposed in (Leike and Enßlin, 2016a) that for information preserving dynamics, that is for non-stochastic time evolution, one has $P(s_1|d)\,\mathcal{D}s_1 = P(s_0|d)\,\mathcal{D}s_0$, and therefore this Kullback-Leibler divergence is equal to the Kullback-Leibler divergence with both probability densities time evolved to the past (note the changed indices):

$$\mathrm{KL}(d_0, d_1) = \int \mathcal{D}s_0\, P(s_0|d_0)\,\ln\frac{P(s_0|d_0)}{P(s_0|d_1)}\,. \tag{6}$$

Note that the equality between Eqs. (5) and (6) is nothing else than the invariance of the KL divergence under invertible transformations. In this case the transformation is the time evolution of the field $s$. The latter KL can be calculated once one has made a suitable choice for $R$. For this, note that there is a degeneracy between the response and the simulated field: if $R$ is altered by an invertible operator $T$,

$$R' = RT\,, \tag{7}$$

then this is equivalent to instead simulating the differential equation for the transformed field $Ts$,

$$\frac{d}{dt}(Ts) = T\,\frac{ds}{dt} = T\, f\!\left(T^{-1}(Ts)\right), \tag{8}$$

and using the unaltered response $R$. This is because

$$R'\,s(t) = (RT)\,s(t) = R\,(Ts)(t)\,. \tag{9}$$

This provides some freedom to simplify $R$; thus $R$ can be chosen such that it is linear, at the cost of possibly making the time evolution more complicated. If the prior and the noise are zero-centered Gaussian distributions with covariance matrices $S_0$ and $N_0$, respectively, then the inverse problem can be solved by a (generalized) Wiener filter (Wiener, 1949) and a Gaussian posterior distribution is obtained:

$$P(s_0|d_0) = \mathcal{G}(s_0 - m_0, D_0) = |2\pi D_0|^{-\frac{1}{2}}\, e^{-\frac{1}{2}(s_0 - m_0)^\dagger D_0^{-1}(s_0 - m_0)}\,. \tag{10}$$

Here $D_0 = \left(S_0^{-1} + R^\dagger N_0^{-1} R\right)^{-1}$ and $m_0 = D_0 R^\dagger N_0^{-1} d_0$. One also gets a Gaussian posterior distribution for $s_1$:

$$P(s_1|d_1) = \mathcal{G}(s_1 - m_1, D_1)\,, \tag{11}$$

with

$$D_1^{-1} = S_1^{-1} + R^\dagger N_1^{-1} R \tag{12}$$

and

$$m_1 = D_1 R^\dagger N_1^{-1} d_1 \tag{13}$$
$$\phantom{m_1} = S_1 R^\dagger \left(R S_1 R^\dagger + N_1\right)^{-1} d_1\,. \tag{14}$$

Note that Eqs. (13) and (14) are two equivalent ways to obtain the reconstruction $m_1$. In this paper we will mostly use Eq. (14), as the matrix inversion only needs to be computed for $R S_1 R^\dagger + N_1$, which is a finite-dimensional operator.
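The equivalence of the two mean formulas can be checked numerically in a small toy setup (our construction, with randomly drawn operators):

```python
import numpy as np

# Toy check that the two Wiener filter mean formulas agree:
#   Eq. (13): m = D R^T N^{-1} d  with  D = (S^{-1} + R^T N^{-1} R)^{-1}
#   Eq. (14): m = S R^T (R S R^T + N)^{-1} d
# Eq. (14) only inverts a matrix of the size of the (small) data space.
rng = np.random.default_rng(0)
n_field, n_data = 50, 8

A = rng.normal(size=(n_field, n_field))
S = A @ A.T + n_field * np.eye(n_field)  # prior covariance, positive definite
N = 0.1 * np.eye(n_data)                 # noise covariance
R = rng.normal(size=(n_data, n_field))   # linear response
d = rng.normal(size=n_data)              # data vector

D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.inv(N) @ R)
m13 = D @ R.T @ np.linalg.inv(N) @ d
m14 = S @ R.T @ np.linalg.solve(R @ S @ R.T + N, d)
```

In realistic settings the field space is far larger than the data space, which is why the form of Eq. (14) is the practical one.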

To compute the necessary quantities for our action as given by Eq. (6), we have to compute the distribution for $s_0$ given $d_1$. It is obtained from the backward time evolution of $P(s_1|d_1)$:

$$P(s_0|d_1) = \mathcal{G}\big(U(s_0) - m_1,\, D_1\big)\left|\frac{dU(s_0)}{ds_0}\right|\,. \tag{15}$$

Here $U$ is the exact analytical time evolution. Using this, the Kullback-Leibler divergence that needs to be minimized to obtain $d_1$ is

$$\begin{aligned}
\mathrm{KL}(d_0, d_1) &= \int \mathcal{D}s_0\, P(s_0|d_0)\,\ln\frac{P(s_0|d_0)}{P(s_0|d_1)} \\
&= \int \mathcal{D}s_0\, \mathcal{G}(s_0 - m_0, D_0)\, \ln\frac{\mathcal{G}(s_0 - m_0, D_0)}{\mathcal{G}\big(U(s_0) - m_1, D_1\big)\left|\frac{dU(s_0)}{ds_0}\right|}\,.
\end{aligned} \tag{16}$$

We only minimize with respect to the parameters of $P(s_1|d_1)$, so we can ignore any additive terms that do not depend on $d_1$. Thus

$$\mathrm{KL}(d_0, d_1) \mathrel{\hat{=}} \int \mathcal{D}s_0\, \mathcal{G}(s_0 - m_0, D_0)\,\ln\frac{1}{\mathcal{G}\big(U(s_0) - m_1, D_1\big)}\,. \tag{17}$$

Here "$\hat{=}$" denotes equality up to irrelevant constants, which in this case are constants that are not a function of $d_1$. These will drop out when the expression is minimized with respect to $d_1$ later on. Note that the absolute value of the Jacobian can be ignored because it only depends on $d_0$. The integral above can be quite difficult to evaluate in general. For integrals involving Gaussian distributions there is, however, a general method (Leike and Enßlin, 2016b) to write down a closed expression for the result. Replacing every instance of $s_0$ with the field operator

$$O_{m_0} = m_0 + D_0 \frac{d}{dm_0} \tag{18}$$

allows us to evaluate the integral at the cost of having to evaluate operator expressions. The integral is rewritten as

$$\begin{aligned}
\mathrm{KL}(d_0, d_1) &\mathrel{\hat{=}} \ln\frac{1}{\mathcal{G}\big(U(O_{m_0}) - m_1,\, D_1\big)} \\
&\mathrel{\hat{=}} \frac{1}{2}\big(U(O_{m_0}) - m_1\big)^\dagger D_1^{-1}\big(U(O_{m_0}) - m_1\big) + \frac{1}{2}\,\mathrm{tr}\big(\ln(2\pi D_1)\big)\,.
\end{aligned} \tag{19}$$
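The operator substitution of Eq. (18) can be sanity-checked in one dimension (our toy, with a scalar mean $m$ and variance $D$): Gaussian expectation values of polynomials follow from repeatedly applying $O_m = m + D\, d/dm$ to the constant function $1$.

```python
import numpy as np

# One-dimensional check of the operator trick of Eq. (18):
# <s> = O_m 1 = m, and <s^2> = O_m (O_m 1) = O_m m = m**2 + D,
# since (m + D d/dm) applied to m yields m**2 + D.
m, D = 0.4, 1.7  # toy mean and variance

operator_result = m**2 + D  # (m + D d/dm) applied to m

# direct quadrature of <s^2> under the Gaussian G(s - m, D)
s = np.linspace(-30.0, 30.0, 400001)
weight = np.exp(-0.5 * (s - m) ** 2 / D) / np.sqrt(2 * np.pi * D)
quadrature_result = np.sum(weight * s**2) * (s[1] - s[0])
```

The same mechanism, applied to the full field-space Gaussian, is what turns the path integral of Eq. (17) into the closed operator expression of Eq. (19).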

We now minimize this Kullback-Leibler divergence with respect to $d_1$. For this we compute the derivative

$$\frac{d\,\mathrm{KL}(d_0, d_1)}{d d_1} = \left(\frac{d m_1}{d d_1}\right)^\dagger D_1^{-1}\big(m_1 - U(O_{m_0})\big) = N_1^{-1} R\,\big(m_1 - U(O_{m_0})\big) \tag{20}$$

with respect to $d_1$. We now assume that the noise covariance, the response, and the prior stay invariant between time steps, and thus omit the time indices on these operators. At the minimum this derivative is zero, so we can solve for $d_1$:

$$\begin{aligned}
0 &= N^{-1} R\,\big(m_1 - U(O_{m_0})\big) \\
0 &= R S R^\dagger \big(N + R S R^\dagger\big)^{-1} d_1 - R\, U(O_{m_0}) \\
d_1 &= \big(N + R S R^\dagger\big)\big(R S R^\dagger\big)^{-1} R\, U(O_{m_0})\,.
\end{aligned} \tag{21}$$

One way to use IFD is to reformulate a PDE like Eq. (1) into an ordinary differential equation (ODE), for which potent solvers already exist. For this we expand Eq. (21) to first order in $dt$, using $U(s) \approx s + dt\, f(s)$:

$$\begin{aligned}
d_1 &= \big(N + R S R^\dagger\big)\big(R S R^\dagger\big)^{-1} R\,\big(O_{m_0} + dt\, f(O_{m_0})\big) \\
&= \big(N + R S R^\dagger\big)\big(R S R^\dagger\big)^{-1} R\,\Big(S R^\dagger \big(N + R S R^\dagger\big)^{-1} d_0 + dt\, f(O_{m_0})\Big) \\
&= d_0 + \big(N + R S R^\dagger\big)\big(R S R^\dagger\big)^{-1} R\, dt\, f(O_{m_0})\,.
\end{aligned} \tag{22}$$

Identifying $(d_1 - d_0)/dt$ with the time derivative, we arrive at an ODE for $d$:

$$\frac{dd}{dt} = \big(N + R S R^\dagger\big)\big(R S R^\dagger\big)^{-1} R\, f(O_m)\,. \tag{23}$$

Taking the limit of no noise, $N \to 0$, we get the following compact expression for the updating rule:

$$\frac{dd}{dt} = R\, f(O_m) = R\, f\big(O_{S R^\dagger (R S R^\dagger)^{-1} d}\big)\,. \tag{24}$$

Eqs. (23) and (24) are the central equations of this paper, allowing us to discretize any differential equation. They were derived by minimizing the action given by Eq. (6) and thus mimic the Bayes-optimal simulation up to a minimized information loss. Given an appropriate choice of the response $R$ of the virtual measurement connecting field and data, Eqs. (23) and (24) tell us how the differential operators need to be discretized.
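A minimal executable sketch of Eq. (24) (our toy discretization, not the paper's exact setup): for the linear diffusion equation $f(s) = \Delta s$ the operator argument reduces to the Wiener mean $m = S R^\dagger (R S R^\dagger)^{-1} d$, so the whole update rule can be assembled from small matrices. We assume a point-sampling response on a coarse periodic grid and a circulant prior covariance from a Gaussian kernel of assumed width.

```python
import numpy as np

# Toy IFD update dd/dt = R Lap(m), m = S R^T (R S R^T)^{-1} d,
# for diffusion on a periodic fine grid sampled at every c-th point.
nf, c = 64, 4          # fine grid size, coarsening factor
nd = nf // c           # number of data values
x = np.arange(nf)

diff = np.abs(x[:, None] - x[None, :])
dist = np.minimum(diff, nf - diff)               # periodic distance
S = np.exp(-0.5 * (dist / 3.0) ** 2)             # prior covariance (kernel width 3)

R = np.zeros((nd, nf))
R[np.arange(nd), np.arange(nd) * c] = 1.0        # point-sampling response
I = np.eye(nf)
Lap = np.roll(I, 1, axis=0) - 2 * I + np.roll(I, -1, axis=0)  # periodic Laplacian

W = S @ R.T @ np.linalg.inv(R @ S @ R.T)         # data -> Wiener mean m
M = R @ Lap @ W                                  # IFD time evolution matrix

d_init = np.exp(-0.5 * ((np.arange(nd) - nd / 2) / 2.0) ** 2)  # toy initial data
d, dt = d_init.copy(), 0.05
for _ in range(100):
    d = d + dt * (M @ d)                         # explicit Euler step
```

Because $f$ is linear here, $f(O_m)$ applied to the constant function reduces exactly to $f(m)$; for nonlinear $f$ the operator produces additional covariance-dependent terms, as worked out in Sec. III.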

## III Numerical tests

As a benchmark we simulate the Burgers equation

$$\frac{\partial s}{\partial t} = f(s) = \eta \frac{\partial^2 s}{\partial x^2} - s \frac{\partial s}{\partial x}\,. \tag{25}$$

This equation is numerically challenging, as it develops shock waves for small diffusion constants $\eta$. First, we have to specify our choice of $R$. We demonstrate the formalism for two different choices of $R$.

### III.1 Box grid

We choose

$$R_{ix} = \mathbb{1}_{[x_i,\, x_{i+1})}(x) = \begin{cases} 1 & \text{for } x_i \le x < x_{i+1} \\ 0 & \text{otherwise.} \end{cases} \tag{26}$$

This type of grid is commonly used in simulations. Starting from equation (24), we compute:

$$\begin{aligned}
\frac{dd_i}{dt} &= \big(R\, f(O_m)\big)_i = \big(R\,(\Delta O_m - O_m \nabla O_m)\big)_i \\
&= \big(R\,(\Delta m - m \nabla m)\big)_i - \int dx\, dy\, dz\, R_{ix}\, D_{xy}\, \frac{d}{dm_y}\big(\nabla_{xz}\, m_z\big) \\
&= \big(R\, \Delta S R^\dagger (R S R^\dagger)^{-1} d\big)_i \\
&\quad - \Big(R\,\big(S R^\dagger (R S R^\dagger)^{-1} d\big)\big(\nabla S R^\dagger (R S R^\dagger)^{-1} d\big)\Big)_i \\
&\quad - \int dx\, dy\, R_{ix}\, D_{xy}\, \nabla_{xy}\,.
\end{aligned} \tag{27}$$

We introduce the short hand notation

$$d' = \big(R S R^\dagger\big)^{-1} d\,, \tag{28}$$

so that the IFD Burgers scheme simplifies to

$$\frac{dd_i}{dt} = \big(R \Delta S R^\dagger d'\big)_i - \Big(R\,\big(S R^\dagger d'\big)\big(\nabla S R^\dagger d'\big)\Big)_i - \int dx\, dy\, R_{ix}\, D_{xy}\, \nabla_{xy}\,. \tag{29}$$

Assuming that our a priori knowledge favors no particular points in space or directions, according to the Wiener-Khintchin theorem (Wiener, 1930) the covariance operator $S$ has to be diagonal in Fourier space. This is equivalent to a convolution with a kernel $C$ in configuration ($x$-) space, such that

$$\big(S R^\dagger\big)_{xi} = C_x * R^\dagger_{xi} = \int dy\, C_{x-y}\, R^\dagger_{yi}\,. \tag{30}$$

We now compute the three terms of Eq. (29) all separately, starting with the term involving the Laplace operator:

$$\begin{aligned}
\big(R \Delta S R^\dagger d'\big)_i &= \int_{x_i}^{x_{i+1}} dx \sum_j \Delta \int_{x_j}^{x_{j+1}} dy\, C_{x-y}\, d'_j \\
&= \int_{x_i}^{x_{i+1}} dx \sum_j \Delta\big(C_x * \mathbb{1}_{[x_j,\, x_{j+1})}\big)\, d'_j \\
&= \int_{x_i}^{x_{i+1}} dx \sum_j \nabla\Big(C_x * \big(\delta(x - x_j) - \delta(x - x_{j+1})\big)\Big)\, d'_j \\
&= \int dx\, \big(\delta(x - x_{i+1}) - \delta(x - x_i)\big) \sum_j \big(C_{x - x_j} - C_{x - x_{j+1}}\big)\, d'_j \\
&= \sum_j \big(C_{x_{i+1} - x_j} - C_{x_{i+1} - x_{j+1}} - C_{x_i - x_j} + C_{x_i - x_{j+1}}\big)\, d'_j \\
&= \sum_j \big(C_{l(i-j+1)} - 2\,C_{l(i-j)} + C_{l(i-j-1)}\big)\, d'_j\,.
\end{aligned} \tag{31}$$

Here we assumed the $x_i$ to be equally spaced with distance $l$. Note that this version of the discretized Laplace operator has similarities with the usual finite-difference (Courant et al., 1928) Laplace operator, but accounts for the field correlation structure. We continue by computing the second term:
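The correlation-aware stencil of Eq. (31) can be tabulated directly. The sketch below (our toy, assuming a Gaussian kernel $C$) checks that the coefficients sum to zero, so constants are annihilated, just as for the classical $[1, -2, 1]$ finite-difference stencil:

```python
import numpy as np

# Stencil of Eq. (31): the coefficient coupling data points i and j is
#   a(i - j) = C(l*(i-j+1)) - 2*C(l*(i-j)) + C(l*(i-j-1)).
l, sigma = 1.0, 2.0  # grid spacing and (toy) kernel width

def C(x):
    return np.exp(-0.5 * (x / sigma) ** 2)

offsets = np.arange(-30, 31)  # truncation exploits locality of the kernel
a = C(l * (offsets + 1)) - 2 * C(l * offsets) + C(l * (offsets - 1))
stencil_sum = a.sum()  # telescopes to ~0 for any decaying kernel
```

As the kernel width shrinks toward the grid spacing, the stencil concentrates on its central entries and approaches the finite-difference form.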

$$\begin{aligned}
\Big(R\,\big(S R^\dagger d'\big)\big(\nabla S R^\dagger d'\big)\Big)_i &= \int dx\, R_{ix}\, \big(S R^\dagger d'\big)_x \big(\nabla S R^\dagger d'\big)_x \\
&= -\int dx\, \Big[\nabla\big(R_{ix}\, (S R^\dagger d')_x\big)\Big] \big(S R^\dagger d'\big)_x \\
&= -\int dx\, \big(\nabla R_{ix}\big)\big(S R^\dagger d'\big)_x \big(S R^\dagger d'\big)_x \\
&\quad - \int dx\, R_{ix}\, \big(\nabla S R^\dagger d'\big)_x \big(S R^\dagger d'\big)_x\,.
\end{aligned} \tag{32}$$

The last summand in Eq. (32) is the same term we started with, only with a negative sign. Thus we can bring both to the same side of the equation and get

$$\begin{aligned}
\Big(R\,\big(S R^\dagger d'\big)\big(\nabla S R^\dagger d'\big)\Big)_i &= -\frac{1}{2} \int dx\, \big(\nabla R_{ix}\big)\big(S R^\dagger d'\big)_x \big(S R^\dagger d'\big)_x \\
&= \frac{1}{2}\left[\Big(C_x * \sum_j R^\dagger_{xj}\, d'_j\Big)\Big(C_x * \sum_k R^\dagger_{xk}\, d'_k\Big)\right]_{x = x_i}^{x_{i+1}}\,.
\end{aligned} \tag{33}$$

The third term is

$$\int dx\, dy\, R_{ix}\, D_{xy}\, \nabla_{xy}\,. \tag{34}$$

This term vanishes in the case of periodic boundary conditions. One can see this by rewriting $\nabla_{xy} \approx \epsilon^{-1}\big(\delta(x - y + \epsilon) - \delta(x - y - \epsilon)\big)$ for a sufficiently small $\epsilon$ to obtain

$$\int dx\, dy\, R_{ix}\, D_{xy}\, \epsilon^{-1}\big(\delta(x - y + \epsilon) - \delta(x - y - \epsilon)\big) = \int dx\, \epsilon^{-1} R_{ix}\big(D_{x(x+\epsilon)} - D_{x(x-\epsilon)}\big)\,. \tag{35}$$

Because $S$ and $N$ have no favored direction, $D_{x(x+\epsilon)} = D_{x(x-\epsilon)}$, and thus the third term vanishes. Finally, we have to compute

$$d'_i = \sum_j \Big[\big(R S R^\dagger\big)^{-1}\Big]_{ij} d_j\,, \qquad \big(R S R^\dagger\big)_{ij} = \int_{x_i}^{x_{i+1}}\!\! dx \int_{x_j}^{x_{j+1}}\!\! dy\, C_{x-y}\,. \tag{36}$$

Now that one has all the terms of Eq. (27), one can choose a prior and obtain a simulation scheme as a result. Normally, one would choose the prior according to physical properties of the system, such that it meaningfully encodes our knowledge in the absence of data. The matter of choosing priors will be further addressed in Sec. IV.2. Here we just want to demonstrate the formalism, so we simply choose the analytic form of $C$ such that we can easily compute the three integrals given by Eqs. (31), (33), and (36). One convenient choice of $C$ is a Gaussian (or a mixture of Gaussians), for which we know all the above terms analytically. One might equally well choose any correlation function and do these integrations numerically. Because these integrals only have to be done once, this does not significantly increase the computation time of the resulting simulation scheme.

Note that all computed operators are local, meaning that they fall off as $C$ falls off. Thus, they can be truncated at a certain distance, and the whole simulation scales only linearly with the number of data points.

Fig. 1 shows an example of a simulation performed using the scheme worked out in this section. As a comparison, the figure also shows a simulation using the finite-difference method at the same spatial resolution. This simulation uses data values and the function

$$s(x) = e^{\,4 - (x/64 - 0.5)^2} \tag{37}$$

as initial condition. The simulation space is a periodic interval. The prior covariance was chosen to be a convolution with a zero-centered Gaussian kernel of fixed width, and the diffusion constant $\eta$ was held fixed. The result of a simulation using the finite-difference method with a four times higher resolution is displayed as well; this high-resolution simulation should capture all features produced by the Burgers dynamics. To investigate the resolution dependence of the IFD scheme, we compare the IFD simulation of Fig. 1 with a simulation on a two times more resolved grid in Fig. 2. For comparison one can again see the finely resolved finite-difference method. One can clearly see an increased performance as the resolution increases.
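The finite-difference baseline is not spelled out above; a minimal periodic central-difference discretization of Eq. (25), of the kind commonly used as such a baseline (our sketch, with hypothetical grid and parameter values), reads:

```python
import numpy as np

# Simple explicit finite-difference step for the Burgers equation
# s_t = eta * s_xx - (s^2 / 2)_x on a periodic grid (our toy baseline).
def burgers_fd_step(s, dt, dx, eta):
    # viscous term: standard [1, -2, 1] Laplacian
    lap = (np.roll(s, -1) - 2 * s + np.roll(s, 1)) / dx**2
    # advection in conservative form with central differences
    flux = 0.5 * s**2
    adv = (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)
    return s + dt * (eta * lap - adv)

x = np.linspace(0.0, 1.0, 128, endpoint=False)
s = np.sin(2 * np.pi * x)                  # toy initial condition
dx, dt, eta = x[1] - x[0], 1e-4, 0.05      # assumed step sizes
for _ in range(200):
    s = burgers_fd_step(s, dt, dx, eta)
```

Because both terms are written as periodic differences, the spatial mean of $s$ is conserved exactly; the shock-steepening behavior that challenges such schemes appears at later times and smaller $\eta$.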

### III.2 Fourier grid

Now we switch to a different response. We choose a Fourier space grid

$$R_{ik} = \delta(k - k_i) \tag{38}$$

with Fourier space grid points $k_i$. There is a choice whether to view the Fourier transform

$$F_{kx} = e^{ikx} \tag{39}$$

as part of the measurement or as a transformation $T$ of the field $s$; see Eqs. (7)-(9). We choose the latter and obtain the transformed time evolution:

$$\frac{ds_k}{dt} = F_{kx}\frac{ds_x}{dt} = F_{kx}\big(\Delta s_x - s_x \nabla s_x\big) = -k^2 s_k + (s_k) * \big(i k\, s_k\big)\,. \tag{40}$$

We insert this into Eq. (24) and obtain

$$\frac{dd}{dt} = R f(O_m) = -R\, k^2 S R^\dagger \big(R S R^\dagger\big)^{-1} d + R\Big[\big(S R^\dagger (R S R^\dagger)^{-1} d\big) * \big(i k\, S R^\dagger (R S R^\dagger)^{-1} d\big)\Big]\,. \tag{41}$$

If the a priori knowledge does not favor any specific direction or point in space, then $S$ is diagonal in Fourier space. Thus

$$\big(S R^\dagger (R S R^\dagger)^{-1}\big)_{ki} = \sum_j \int dl\, S_{kl}\, \delta(l - k_j)\, \Big[\big(R S R^\dagger\big)^{-1}\Big]_{ji} = \sum_j S_{k k_j}\big(S_{k_i k_j}\big)^{-1} = \delta(k - k_i) = R^\dagger_{ki}\,, \tag{42}$$

and the prior covariance drops out, making the time evolution on this grid invariant under the choice of Fourier-diagonal prior. For the final time evolution we arrive at

$$\frac{dd_i}{dt} = -k_i^2\, d_i + \sum_j \int dk\, dk'\, \delta(k_i - k - k')\, d_k\, \big(i k'\big)\, \delta(k' - k_j)\, d_j = -k_i^2\, d_i + \sum_j d_{i-j}\, i k_j\, d_j\,, \tag{43}$$

where the continuous convolution has been translated to a finite convolution on the grid, with $d_{i-j}$ denoting the data at mode $k_i - k_j$. This time evolution equation can be implemented efficiently; the results can be seen in Fig. 3. The simulation setup, initial conditions, and degrees of freedom were chosen to be the same as in Sec. III.1.
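A direct, unoptimized implementation of the update of Eq. (43) looks as follows (our sketch, assuming integer modes $k_i = i$ on a $2\pi$-periodic box and truncating the convolution to the retained modes):

```python
import numpy as np

# Fourier Galerkin update of Eq. (43), modes i in -K..K with k_i = i.
K = 8
idx = np.arange(-K, K + 1)  # mode numbers

def rhs(d):
    out = np.zeros_like(d, dtype=complex)
    for a, i in enumerate(idx):
        # dd_i/dt = -k_i^2 d_i + sum_j d_{i-j} * i k_j * d_j
        acc = 0.0 + 0.0j
        for b, j in enumerate(idx):
            if -K <= i - j <= K:  # truncate the convolution
                acc += d[(i - j) + K] * 1j * j * d[b]
        out[a] = -i**2 * d[a] + acc
    return out

# initial condition s(x) = cos(x): only modes +-1 are excited
d = np.zeros(2 * K + 1, dtype=complex)
d[K + 1] = d[K - 1] = 0.5
dt = 1e-3
d = d + dt * rhs(d)  # one explicit Euler step
```

After a single step the quadratic term has already populated the second harmonic, while the reality condition $d_{-k} = \overline{d_k}$ is preserved; an FFT-based evaluation of the convolution would make this scheme efficient for many modes.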

The developed Fourier-space IFD method is equivalent to a Fourier Galerkin method (Galerkin, 1915). In a Galerkin method, the error to the correct solution is minimized within a linear subspace of the real solution space, spanned by selected basis functions that are often chosen to be polynomials or, as in the case of the Fourier Galerkin method, Fourier basis functions. In the case of IFD this subspace is the co-image of the response $R$. Thus Galerkin schemes can be regarded as IFD schemes with $R$ specified by the Galerkin basis. The prerequisites for this equivalence are that the prior commutes with the discretization and that we work in the no-noise regime $N \to 0$.

## IV Discussion

In the current development status of the IFD method, there are some caveats as well as some advantages over classical approaches. Some of them stem from the theoretical side, where approximations had to be made in order to arrive at a computable scheme. In this section we discuss the observed problems and benefits.

### IV.1 Subgrid structure

Information field dynamics employs the field evolution of the real physical field, and thus automatically takes into account subgrid structure. However, this leads to problems when the time evolution is truncated after the first order in $dt$. In most physical systems the time evolution is faster on smaller scales; thus, if we take into account all scales, no $dt$ is small enough to justify this approximation. In other schemes, the discretized field automatically yields a cutoff at high frequencies. For IFD, however, the derivation truncates the whole time evolution of the real system at first order, which is not justified for arbitrarily small scales. As an example, consider the diffusion equation

$$\frac{ds}{dt} = \Delta s\,, \tag{44}$$
$$\frac{ds_k}{dt} = -k^2 s_k\,. \tag{45}$$

For higher modes the rate of change of $s_k$ scales with $k^2$. Thus, if we truncate the Taylor expansion of the time evolution at first order in $dt$, then for a mode $k$ we get

$$s_k^{t+dt} = s_k^t - dt\, k^2\, s_k^t = \big(1 - dt\, k^2\big)\, s_k^t\,, \tag{46}$$

where $\big|1 - dt\, k^2\big| > 1$ for $dt\, k^2 > 2$; thus the scheme is boosting these high frequencies instead of damping them.

This is, however, a general problem of any simulation scheme.
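The truncation problem of Eq. (46) in numbers (our demo, with arbitrary toy values of $dt$ and $k$): an explicit first-order step multiplies mode $k$ by $(1 - dt\,k^2)$, which damps the mode only while $dt\,k^2 < 2$ and amplifies it otherwise.

```python
# Amplification factor of a first-order step for the diffusion equation:
# mode k is multiplied by (1 - dt * k**2) each step.
dt = 0.1
k_ok, k_bad = 3.0, 6.0  # dt*k^2 = 0.9 (stable) and 3.6 (unstable)

amp_ok = abs(1.0 - dt * k_ok**2)    # < 1: the mode is damped
amp_bad = abs(1.0 - dt * k_bad**2)  # > 1: the mode is amplified

s_bad = 1.0
for _ in range(10):
    s_bad *= (1.0 - dt * k_bad**2)  # the high mode blows up geometrically
```

This is why a scheme that formally retains all scales needs either an implicit step or a frequency cutoff to remain stable.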

### IV.2 How to choose a prior

The goal of a prior is to incorporate as much information as one has about the system and nothing more. If the system has no special directions or singled-out locations, one should choose a prior that is homogeneous and isotropic. These two requirements force the covariance matrix to be diagonal in Fourier space. If we restrict ourselves to Gaussian priors, the prior is fully characterized by its power spectrum. Thus, the only a priori information that enters the simulation in the Gaussian case is how smooth the physical field is. But this is also a significant restriction. For example, for infinitely sharp shocks, as they occur in the Burgers equation with $\eta \to 0$, no smoothness at all is justified at the location of the shock, whereas at other points the solution might be perfectly smooth. To capture this kind of behavior one would need to either use a prior that allows for higher-order statistics, use a dynamical prior which evolves in time, or introduce data that capture the discontinuities. In our simulations we observed that the scheme diverges quickly if an unjustified prior is chosen, for example a prior that enforces significantly more smoothness than is present in the solution.

To choose the right power spectrum of the prior one could use a fine grid simulation and take the occurring power spectrum as input for a coarser simulation.
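This calibration step can be sketched as follows (our toy: a synthetic smooth field stands in for the fine-grid simulation result, and the empirical spectrum is what would be handed to the coarse IFD run):

```python
import numpy as np

# Estimate a prior power spectrum from a fine-grid snapshot via FFT.
n = 256
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
field = np.sin(x) + 0.3 * np.sin(3 * x) + 0.1 * np.sin(7 * x)  # stand-in snapshot

modes = np.fft.rfft(field) / n
power = np.abs(modes) ** 2  # empirical power spectrum, one value per mode
```

In practice one would average such spectra over several snapshots (and possibly over time) before using them as the stationary prior covariance.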

### IV.3 Static prior

In the current formulation we assume that the prior does not evolve in time. Ideally, however, the prior should evolve with the system, because some a priori assumptions that were made for the initial time might not be true at a later time (for example, initial smoothness might be violated by the formation of a shock). On the other hand, from an agnostic point of view, if one has no knowledge about points in time, then the prior should be invariant under time translation. In principle, IFD provides guidance on how to evolve any kind of degree of information on the field, like its covariance structure, and not only some measurement data. By minimizing Eq. (19) with respect to any such piece of information, we obtain an evolution equation for it that loses the minimal amount of information. This way, when the prior is parametrized, an update rule for it is automatically obtained.

### IV.4 No-noise approximation

To derive our algorithms, we take the limit $N \to 0$. While this significantly simplifies the derivation of the schemes, it also deprives the resulting schemes of some of the advantages that an information theoretic treatment has in general. In the no-noise approximation, field configurations with $d \neq R s$ are assigned zero probability; thus the information loss in the presence of numerical rounding errors and finite time steps, where the evolution of the data cannot satisfy Eq. (24) exactly, is infinite. Further development of IFD will have to investigate approaches incorporating noise.

## V Conclusion

The requirement of minimal information loss per time step defines a unique simulation framework. The concrete simulation scheme requires the specification of the field measurement (response) and the incorporation of prior-known correlation structure. It exhibits similarities to the finite-difference scheme when the response is a grid of boxes and becomes a spectral scheme in the case of a Fourier-space response. This yields a new interpretation for linear and, in some cases, even nonlinear Galerkin schemes: these are information optimal, up to the approximations made in this paper, if no spatial a priori knowledge about the field is available.

IFD can thus be regarded as a general theory for simulations that explains what assumptions about the simulated field enter a given simulation scheme, if one is able to reproduce that scheme in IFD. For some schemes one can enhance the performance by using a prior that is correctly informed about the field correlation structure. When the prior is chosen incorrectly, for example such that the simulation produces features that are regarded as very unlikely by the prior, the scheme tends to diverge quickly. In principle, IFD can provide a guideline for how to evolve any degree of freedom; by minimizing Eq. (6) one gets a unique simulation scheme. An interesting route, at least for the Burgers and other hydrodynamic equations, would be to automatically infer the positions of the virtual measurements, allowing the scheme to sample the field where it is most informative. An investigation in that direction is, however, beyond the scope of this paper and might be the target of future research.

All in all, information theory provides a powerful language to talk about simulation tasks. Even though the series of approximations made in this paper permitted the resulting simulation schemes to outperform finite differences only by a small amount, further advancements in the field could yield substantial enhancements.

## VI Acknowledgements

We would like to thank Martin Dupont, Phillip Frank, Sebastian Hutschenreuter, Jakob Knollmüller and two anonymous referees for the discussions and their valuable comments on the manuscript.

## References

• Enßlin (2013) T. A. Enßlin, "Information field dynamics for simulation scheme construction," Phys. Rev. E 87, 013308 (2013), arXiv:1206.4229 [physics.comp-ph].
• Lemm (1999) J. C. Lemm, “Bayesian Field Theory: Nonparametric Approaches to Density Estimation, Regression, Classification, and Inverse Quantum Problems,” ArXiv Physics e-prints  (1999), physics/9912005 .
• Enßlin et al. (2009) T. A. Enßlin, M. Frommert, and F. S. Kitaura, "Information field theory for cosmological perturbation reconstruction and nonlinear signal analysis," Phys. Rev. D 80, 105005 (2009), arXiv:0806.3474.
• Diaconis (1988) Persi Diaconis, “Bayesian numerical analysis,” Statistical decision theory and related topics IV 1, 163–175 (1988).
• Kailath (1969) Thomas Kailath, “A general likelihood-ratio formula for random signals in gaussian noise,” IEEE Transactions on Information Theory 15, 350–361 (1969).
• Joseph (1968) Peter D Joseph, Filtering for stochastic processes with applications to guidance (1968).
• Jazwinski (1970) Andrew H Jazwinski, “Mathematics in science and engineering,” Stochastic Processes and Filtering Theory 64 (1970).
• Doucet (1998) Arnaud Doucet, “On sequential simulation-based methods for bayesian filtering,”  (1998).
• Bruno (2013) Marcelo GS Bruno, "Sequential monte carlo methods for nonlinear discrete-time filtering," Synthesis Lectures on Signal Processing 6, 1–99 (2013).
• Ridgeway and Madigan (2003) Greg Ridgeway and David Madigan, “A sequential monte carlo method for bayesian analysis of massive datasets,” Data Mining and Knowledge Discovery 7, 301–319 (2003).
• Branicki and Majda (2014) M Branicki and AJ Majda, “Quantifying bayesian filter performance for turbulent dynamical systems through information theory,” Commun. Math. Sci 12, 901–978 (2014).
• (12) Tim Sullivan, Jon Cockayne, Chris Oates,  and Mark Girolami, “Bayesian probabilistic numerical methods,”  .
• Kersting and Hennig (2016) Hans Kersting and Philipp Hennig, “Active uncertainty calibration in bayesian ode solvers,” arXiv preprint arXiv:1605.03364  (2016).
• De Freitas et al. (2001) Nando De Freitas, C Andrieu, Pedro Højen-Sørensen, M Niranjan,  and A Gee, “Sequential monte carlo methods for neural networks,” in Sequential Monte Carlo methods in practice (Springer, 2001) pp. 359–379.
• Raissi et al. (2017a) Maziar Raissi, Paris Perdikaris,  and George Em Karniadakis, “Inferring solutions of differential equations using noisy multi-fidelity data,” Journal of Computational Physics 335, 736–746 (2017a).
• Raissi et al. (2017b) Maziar Raissi, Paris Perdikaris,  and George Em Karniadakis, “Numerical gaussian processes for time-dependent and non-linear partial differential equations,” arXiv preprint arXiv:1703.10230  (2017b).
• Archambeau et al. (2007) Cedric Archambeau, Dan Cornford, Manfred Opper,  and John Shawe-Taylor, “Gaussian process approximations of stochastic differential equations,” in Gaussian Processes in Practice (2007) pp. 1–16.
• Leike and Enßlin (2016a) R. H. Leike and T. A. Enßlin, "Optimal Belief Approximation," ArXiv e-prints (2016a), arXiv:1610.09018 [math.ST].
• Bernardo (1979) José M Bernardo, “Expected information as expected utility,” The Annals of Statistics , 686–690 (1979).
• Wiener (1949) N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series (New York: Wiley, 1949).
• Leike and Enßlin (2016b) R. H. Leike and T. A. Enßlin, "Operator calculus for information field theory," Phys. Rev. E 94, 053306 (2016b), arXiv:1605.00660 [stat.ME].
• Wiener (1930) Norbert Wiener, “Generalized harmonic analysis,” Acta mathematica 55, 117–258 (1930).
• Courant et al. (1928) R. Courant, K.O. Friedrichs,  and H. Lewy, “über die partiellen differenzengleichungen der mathematischen physik,” Math. Annalen 100, 32–74 (1928), (English translation, with commentaries by Lax, P.B., Widlund, O.B., Parter, S.V., in IBM J. Res. Develop. 11 (1967)).
• Galerkin (1915) B.G. Galerkin, “On electrical circuits for the approximate solution of the laplace equation,” Vestnik Inzh. 19, 897–908 (1915).