Recurrent machines for likelihood-free inference
Abstract
Likelihood-free inference is concerned with the estimation of the parameters of a non-differentiable stochastic simulator that best reproduce real observations. In the absence of a likelihood function, most existing inference methods optimize the simulator parameters through a hand-crafted iterative procedure that tries to make the simulated data more similar to the observations. In this work, we explore whether meta-learning can be used in the likelihood-free context to learn, automatically from data, an iterative optimization procedure that solves likelihood-free inference problems. We design a recurrent inference machine that learns a sequence of parameter updates leading to good parameter estimates, without ever specifying an explicit notion of divergence between the simulated and real data distributions. We demonstrate our approach on toy simulators, showing promising results both in terms of performance and robustness.
Arthur Pesah (KTH Royal Institute of Technology, Stockholm, Sweden), Antoine Wehenkel (University of Liège, Liège, Belgium), and Gilles Louppe (University of Liège, Liège, Belgium). Both first authors contributed equally to this work.
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
1 Introduction
Modern science often relies on the modeling of complex data generation processes by means of computer simulators. While the forward generation of observables is often straightforward and well-motivated, inverting generation processes is usually very difficult. In particular, scientific simulators are often stochastic and give rise to intractable likelihood functions that prevent the use of classical inference algorithms. The importance and prevalence of this problem has recently motivated the development of so-called likelihood-free inference (LFI) methods, which do not make use of the likelihood function for estimating model parameters. LFI methods (e.g., Louppe and Cranmer, 2017; Beaumont et al., 2002; Gutmann and Corander, 2016) are often based on hand-crafted iterative optimization procedures, where a sequence of updates is performed to make the simulated data more similar to the observations.
Driven by the promises of learning to learn, meta-learning has shown that automatically learning neural optimizers from data is possible, achieving results close to the state of the art for the task of training neural networks (Andrychowicz et al., 2016) or solving inverse problems (Putzky and Welling, 2017) when the gradient of the objective function is available. Meanwhile, Chen et al. (2016) have shown that meta-learning is also capable of learning neural optimizers that rely at each step on the value of the objective function only, without requiring access to its gradient.
In this work, we push the limits of the meta-learning framework further by showing that it can be used even when no explicit objective value is available at each step. More specifically, we focus on likelihood-free inference problems and build a recurrent inference machine that learns an iterative procedure for updating simulator parameters such that they converge (in distance) towards nominal parameter values known at training time. In particular, the inference machine is never given access to an explicit objective function that would estimate some divergence between the synthetic and real data distributions. Rather, both the optimization procedure and the implicit objective to minimize are learned end-to-end from artificial problems.
2 Problem statement
The goal of likelihood-free inference is the estimation of the parameters of a stochastic generative process with an unknown or intractable likelihood function. Formally, given a set of observations drawn i.i.d. from the real data distribution, we are interested in finding the model parameters that minimize some discrepancy between the real data distribution and the implicit distribution of the simulated data.
In this work, instead of targeting a single point estimate, we target a probability distribution over the parameter space. Formally, our goal is to find the optimal parameters of a parametric proposal distribution over the simulator parameters, leading to a variational formulation of the optimization problem.
The proposal distribution provides two advantages: i) it leads to a variational formulation of the optimization problem that does not rely on the gradient of the function to be optimized (Staines and Barber, 2012); ii) by sampling points from this distribution, we get distinct parameters that we can give to the simulator. Then, by comparing the generated observations to the real ones for each sampled parameter, it is possible to figure out the optimal update of the proposal.
Because the likelihood function cannot be evaluated, strategies must be found to use only simulated data in order to estimate the parameters. Current methods for likelihood-free inference typically rely on a hand-crafted iterative update procedure where, at each time step, the next estimate is determined by comparing the simulated data produced at the current parameter estimate with the real observations. In this work, our goal is to investigate whether meta-learning can be used to learn such a parametrized update function directly from data.
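To make the setting concrete, the generic loop that such methods follow can be sketched as below. The hand-crafted moment-matching rule `update_fn` stands in for the learned update function, and the Gaussian simulator, names, and step size are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def simulate(theta, n, rng):
    # Illustrative stochastic simulator: Gaussian samples with mean theta.
    return rng.normal(loc=theta, scale=1.0, size=n)

def update_fn(theta, real_summary, sim_summary):
    # Stand-in for the learned update: a damped moment-matching step.
    # ALFI replaces this hand-crafted rule with a trained RNN.
    return 0.5 * (real_summary - sim_summary)

def iterative_inference(x_real, theta0, n_steps=20, seed=0):
    # Generic LFI loop: compare summaries of the simulated data at the
    # current estimate with summaries of the real data, then update.
    rng = np.random.default_rng(seed)
    theta = theta0
    real_summary = x_real.mean()
    for _ in range(n_steps):
        x_sim = simulate(theta, len(x_real), rng)
        theta = theta + update_fn(theta, real_summary, x_sim.mean())
    return theta

rng = np.random.default_rng(1)
x_obs = rng.normal(loc=3.0, scale=1.0, size=1000)
theta_hat = iterative_inference(x_obs, theta0=0.0)
```

Meta-learning, in this framing, amounts to replacing `update_fn` with a parametrized function trained end-to-end over many artificial problems.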
3 Recurrent machines for likelihood-free inference
In this work, we define the update function as a recurrent neural network (RNN) whose architecture is given in Figure 1. At each time step, the RNN takes as input the current proposal parameters, a memory state, and some information about the real observations and those generated with the current proposal, and produces as output the parameter update and the next memory state. Observations are ingested through a data encoder structured as a feed-forward neural network that takes each observation independently, transforms it into a fixed-size vector, and aggregates these vectors by averaging them over the set. The encoder should therefore be able to compute any moment (and more general feature averages) of the distribution. If the moments are learned, they can be used to infer the parameters of the distribution by the method of matching moments, whose principle is to check compatibility between the generated data and the observed data by looking at these moments (Ravuri et al., 2018). This also means that moments can capture the relevant information of a set of samples about a parameter of interest.
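A minimal sketch of such a permutation-invariant encoder, here with fixed random weights in plain NumPy (in ALFI the weights are learned and the layer sizes differ; the shapes below are illustrative):

```python
import numpy as np

def encode(x, W1, b1, W2, b2):
    # Shared feed-forward net applied to each observation independently,
    # followed by an average over the set: the output estimates feature
    # means, i.e. generalized moments of the empirical distribution.
    h = np.tanh(x @ W1 + b1)      # per-sample hidden features, (n, 8)
    feats = h @ W2 + b2           # per-sample output features, (n, 4)
    return feats.mean(axis=0)     # aggregate over the n samples

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 8)), np.zeros(8)   # illustrative sizes
W2, b2 = rng.normal(size=(8, 4)), np.zeros(4)

x = rng.normal(size=(100, 1))
e_original = encode(x, W1, b1, W2, b2)
e_shuffled = encode(x[rng.permutation(100)], W1, b1, W2, b2)
# Averaging makes the summary invariant to the ordering of the samples.
```

The averaging step is what makes the summary depend on the empirical distribution of the samples rather than on their order.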
In our architecture, we use three encoders. The first one is for the real observations. The second one is for the observations generated at the current time step: we sample a set of parameters from the proposal and, for each of them, generate the corresponding observations with the simulator. These two encoders share their weights, because true and generated data should be summarized in the same way in order to be comparable. The last encoder takes as input the results of the first two, as well as the log-likelihood of the sampled parameters under the proposal, as motivated by Wierstra et al. (2014); this quantity indicates the direction to follow to move the proposal distribution toward the nominal parameters.
The loss used at training should encourage the network to generate updates of the proposal which yield a final proposal that is close to a delta function at the nominal parameters. The total loss of the RNN is expressed in a way similar to (Andrychowicz et al., 2016), i.e., as a weighted sum of local losses evaluated at each time step,
where the local loss compares the proposal parameters at a given time step with the nominal parameters, and each time step receives a weight (for instance, a uniform weight over all steps, or a weight increasing with time). The choice of the weighting and loss functions is discussed in the experiments section.
We call this architecture ALFI (Automatic Likelihood-Free Inference).
4 Experiments
To illustrate our method and compare it with a simple baseline, we performed experiments on three toy simulators whose likelihood is known, and consequently for which the maximum likelihood estimator (MLE) can be computed. The results for the simplest simulator are shown below, and results for the two other toy problems are provided in Appendix C. We also tested our architecture on a simulator from particle physics whose exact likelihood function is intractable. For this last experiment, we assess the model performance by comparing the data generated at the end of the iterative process with the real observations. The full description of each simulator is given in Appendix B.
4.1 Illustrative example
As an illustrative example, we implemented a simulator that generates samples from a Poisson distribution. The goal is to estimate the rate parameter corresponding to a set of real samples from this distribution.
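For reference, the MLE baseline for this example is simply the sample mean; a minimal sketch (the rate value and sample size are illustrative):

```python
import numpy as np

# "Real" observations generated at a nominal (illustrative) rate.
rng = np.random.default_rng(0)
lam_true = 4.2
x_obs = rng.poisson(lam=lam_true, size=2000)

# The MLE of a Poisson rate is the sample mean; this is the baseline
# against which ALFI's final proposal mean is compared.
lam_mle = x_obs.mean()
```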
Results
The box plot of Figure 2 compares the performance of our model with the MLE. For our model (ALFI), the root mean-squared error (RMSE) is computed between the mean of the final proposal and the true value of the parameters. We observe that our model achieves performance similar to the MLE, whereas it is not given any explicit function to minimize during testing.
The left part of Figure 2 shows the evolution of the average and the standard deviation of the RMSE along the iterative procedure over all test problems. It can be observed from this plot that our model quickly converges to a proposal distribution with an expected value close to the nominal parameter.
4.2 (Simplified) particle physics simulation
We also tested our model on a simplified simulator from particle physics, called Weinberg and introduced as a likelihood-free inference benchmark in (Louppe and Cranmer, 2017). This benchmark comes with two parameters (the beam energy and the Fermi constant) and one observable (the cosine of the scattering angle).
Results
Figure 3 (left) presents the evolution of the error between the real and the predicted parameters along 15 iterations. We see that the model has learned an iterative procedure and converges after 10 iterations. Figure 3 (right) compares the distributions of the simulated data and the real data. We can see that ALFI has managed to infer parameters that yield a realistic distribution.
4.3 Robustness of the model
In order to evaluate the robustness of the learned optimization procedure, we tested the model on a number of iterations greater than the number used during training. On the Poisson and the multivariate simulators (Appendix B.3), we observed that increasing the number of iterations improves the performance of ALFI. Therefore, our model seems to learn an update rule that is not tied to the specific number of updates used at training, but rather generalizes to a larger horizon. Those results are shown in Appendix C.
5 Conclusions and future works
In this work, we provide a proof of concept of a meta-learning architecture designed to solve likelihood-free inference problems. We applied our model to toy simulators and achieved results competitive with maximum likelihood estimation. We finally applied it to a simple particle physics simulator, and showed that it can infer parameters corresponding to samples close to the real ones.
As for future work, we see two paths worthy of exploration. First, getting a better understanding of the optimization procedure learned by ALFI: can we interpret the representation learned by each data encoder? Could an update model learned for one simulator be transferred to another? Is the learned procedure comparable to other existing methods? Second, evaluating our model on more complex simulators and comparing it to state-of-the-art LFI methods: since our approach is intensive in the number of simulator calls, moderating this complexity would be a necessary step to scale our method to slower simulators.
References
 Andrychowicz et al. (2016) Andrychowicz, M., Denil, M., Gomez, S., Hoffman, M. W., Pfau, D., Schaul, T., and de Freitas, N. (2016). Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pages 3981–3989.
 Beaumont et al. (2002) Beaumont, M. A., Zhang, W., and Balding, D. J. (2002). Approximate Bayesian computation in population genetics. Genetics, 162(4):2025–2035.
 Chen et al. (2016) Chen, Y., Hoffman, M. W., Colmenarejo, S. G., Denil, M., Lillicrap, T. P., Botvinick, M., and de Freitas, N. (2016). Learning to learn without gradient descent by gradient descent. arXiv preprint arXiv:1611.03824.
 Cho et al. (2014) Cho, K., van Merriënboer, B., Gülçehre, Ç., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP).
 Gutmann and Corander (2016) Gutmann, M. U. and Corander, J. (2016). Bayesian optimization for likelihood-free inference of simulator-based statistical models. The Journal of Machine Learning Research, 17(1):4256–4302.
 Kingma and Ba (2014) Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
 Louppe and Cranmer (2017) Louppe, G. and Cranmer, K. (2017). Adversarial variational optimization of non-differentiable simulators. arXiv preprint arXiv:1707.07113.
 Putzky and Welling (2017) Putzky, P. and Welling, M. (2017). Recurrent inference machines for solving inverse problems. arXiv preprint arXiv:1706.04008.
 Ravuri et al. (2018) Ravuri, S., Mohamed, S., Rosca, M., and Vinyals, O. (2018). Learning implicit generative models with the method of learned moments. arXiv preprint arXiv:1806.11006.
 Staines and Barber (2012) Staines, J. and Barber, D. (2012). Variational optimization. arXiv preprint arXiv:1212.4507.
 Wierstra et al. (2014) Wierstra, D., Schaul, T., Glasmachers, T., Sun, Y., Peters, J., and Schmidhuber, J. (2014). Natural evolution strategies. Journal of Machine Learning Research, 15:949–980.
Appendix A Experimental setup
Optimizer
To train our architecture, we used the ADAM optimizer (Kingma and Ba, 2014).
Proposal distribution
For all the experiments, we defined the proposal as a multivariate Gaussian distribution parametrized by a mean vector and a variance vector, whose dimension matches the size of the parameter space. For each experiment, the initial proposal uses a random mean vector drawn from the same distribution as the nominal parameters of the training problems. The variance vector always starts at a fixed initial value.
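Assuming a diagonal covariance (the exact parametrization is not restated here), sampling candidate simulator parameters from such a proposal can be sketched as:

```python
import numpy as np

def sample_proposal(mu, sigma2, n, rng):
    # Draw n candidate simulator parameters from the Gaussian proposal
    # N(mu, diag(sigma2)); (mu, sigma2) are the quantities the RNN
    # updates at each step.
    mu = np.asarray(mu, dtype=float)
    std = np.sqrt(np.asarray(sigma2, dtype=float))
    return rng.normal(loc=mu, scale=std, size=(n, mu.size))

rng = np.random.default_rng(0)
thetas = sample_proposal(mu=[0.0, 1.0], sigma2=[1.0, 0.25], n=5000, rng=rng)
```

Each row of `thetas` is one parameter vector to hand to the simulator.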
Partial loss function
For the partial loss at time t, we tried two distinct functions: the mean-squared error between the mean of the proposal and the nominal parameters, and the negative log-likelihood of the proposal evaluated at the nominal parameters. The latter loss has the advantage of taking the variance of the proposal more into account and led to better performance experimentally. Therefore, we decided to use the likelihood loss function for all the subsequent experiments.
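Under the same diagonal-Gaussian assumption on the proposal, the negative log-likelihood loss can be written as below; note how it decreases when the variance shrinks around the nominal parameters, which the plain MSE on the mean ignores:

```python
import numpy as np

def nll_loss(theta_star, mu, sigma2):
    # Negative log-likelihood of the diagonal-Gaussian proposal
    # N(mu, diag(sigma2)) evaluated at the nominal parameter theta_star:
    # the per-step loss used in the experiments.
    theta_star, mu, sigma2 = (np.asarray(a, dtype=float)
                              for a in (theta_star, mu, sigma2))
    return float(0.5 * np.sum(np.log(2 * np.pi * sigma2)
                              + (theta_star - mu) ** 2 / sigma2))

# A proposal that has concentrated around theta_star gets a lower loss
# than one with the same mean but a larger variance.
loose = nll_loss([0.0], mu=[0.0], sigma2=[1.0])
tight = nll_loss([0.0], mu=[0.0], sigma2=[0.1])
```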
Weighting function
The choice of the weighting function determines the exploration-exploitation trade-off of our iterative algorithm. We tested three weighting schemes:

Final step only: only the final proposal distribution on the parameters is taken into account in the total loss function. It means that the algorithm can freely explore the parameter space during the intermediate iterations.

Uniform: all the parameter estimates found during the iterative process are taken into account with the same weight. It encourages the algorithm to converge as fast as possible to a good parameter.

Exponential: a compromise between the two previous weightings. The first steps are given a low weight, encouraging exploration, while the last steps have a high weight to ensure convergence by the end.

Among those three weighting schemes, the exponential one gave the best performance, and we decided to use it for all the subsequent experiments.
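The three schemes can be sketched as weight vectors over the T time steps; the growth rate of the exponential scheme is an assumption, as the exact constant is not specified here:

```python
import numpy as np

def step_weights(T, scheme):
    # Weight vector (w_1, ..., w_T) for the total loss sum_t w_t * l_t.
    if scheme == "final":
        # Only the final proposal counts: free exploration beforehand.
        w = np.zeros(T)
        w[-1] = 1.0
    elif scheme == "uniform":
        # Every step counts equally: fast convergence is encouraged.
        w = np.ones(T)
    elif scheme == "exponential":
        # Compromise: late steps weighted most (growth rate illustrative).
        w = np.exp(np.linspace(-3.0, 0.0, T))
    else:
        raise ValueError(scheme)
    return w
```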
Marginalization
To avoid overfitting, the initial value of the mean of the proposal is taken randomly. Thus, to compute the performance of our model at test time, this random initialization is marginalized out. To do so, we draw 500 initial values and average the resulting outputs.
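The marginalization step can be sketched as follows, where `run_alfi` is a hypothetical callable standing in for one full iterative run returning the final proposal mean:

```python
import numpy as np

def marginalized_estimate(run_alfi, x_obs, n_init=500, seed=0):
    # Run the full iterative procedure from n_init random initial
    # proposal means and average the final estimates, marginalizing
    # out the random initialization.
    rng = np.random.default_rng(seed)
    finals = [run_alfi(x_obs, mu0=rng.normal()) for _ in range(n_init)]
    return float(np.mean(finals))

# Dummy one-run procedure whose output leaks a bit of its random
# initialization; averaging over initializations cancels the leak.
dummy_run = lambda x, mu0: x.mean() + 0.1 * mu0
x = np.full(10, 3.0)
estimate = marginalized_estimate(dummy_run, x)
```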
Hyperparameters
We provide below a list of all the hyperparameters of ALFI, along with a description where necessary:

Number of epochs

Number of iterations

Number of problems for meta-training: size of the meta-dataset

Distribution of the nominal parameters: how the nominal parameters used in meta-training are generated

Meta batch size: number of problems used to compute the gradient that we backpropagate through our networks

Batch size for the parameters: number of parameters sampled from the proposal at each time step

Batch size for the observations: number of observations generated by the simulator for each sampled parameter at each time step

Learning rate

Clipping: we force our model to follow an iterative procedure by clipping each component of the output of the RNN between two values
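The clipping rule amounts to a component-wise projection of the RNN output onto an interval; the bounds below are illustrative, as the per-simulator values are given in Appendix B:

```python
def clip_update(delta, low=-0.1, high=0.1):
    # Component-wise projection of the RNN's proposed update onto
    # [low, high], forcing small iterative steps (bounds illustrative).
    return [min(max(d, low), high) for d in delta]
```

For example, `clip_update([0.5, -0.5, 0.05])` returns `[0.1, -0.1, 0.05]`.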
Appendix B Simulators
B.1 Linear Regression
Forward generation
The data generated by this procedure follow a linear law in two dimensions; the unknown parameters of the simulator represent the slope and the offset of this line. Each observation is a noisy evaluation of this line at an input value drawn uniformly over a fixed interval.
Hyperparameters

Number of epochs:

Number of iterations:

Number of problems for meta-training:

Distribution of the nominal parameters: uniform

Meta batch size:

Batch size for the parameters:

Batch size for the observations:

Learning rate:

Clipping:
B.2 Poisson distribution
Forward generation
The forward generation process is a simple Poisson distribution, which depends on the mean parameter of the distribution. To make the parametrization real-valued, the rate is defined through a transformation of the unconstrained parameter. An observation is then generated conditionally on the parameter value by sampling from the resulting Poisson distribution.
Hyperparameters

Number of epochs:

Number of iterations:

Number of problems for meta-training:

Distribution of the nominal parameters: uniform

Meta batch size:

Batch size for the parameters:

Batch size for the observations:

Learning rate:

Clipping:
B.3 Multivariate Distribution
Forward generation
The forward generation process can be described as follows:

Draw the latent variables independently from canonical distributions.

Compute the observation from these draws through a transformation involving a positive semi-definite matrix.
Hyperparameters

Number of epochs:

Number of iterations:

Number of problems for meta-training:

Distribution of the nominal parameters: uniform

Meta batch size:

Batch size for the parameters:

Batch size for the observations:

Learning rate:

Clipping:
B.4 Weinberg Simulator
Introduced in (Louppe and Cranmer, 2017), Weinberg is a simplified particle physics simulator of electron-positron collisions resulting in muon-antimuon pairs.
Forward generation
The simulator takes two parameters, the Fermi constant and the beam energy, and produces a one-dimensional observable: the cosine of the angle of the outgoing muon with respect to the originally incoming electron.
Hyperparameters
To generate our training dataset, we draw parameters uniformly in a square region of the parameter space. We force our model to follow an iterative procedure by clipping the step output between two fixed values.

Number of epochs:

Number of iterations:

Number of problems for meta-training:

Distribution of the nominal parameters: uniform over the square

Meta batch size:

Batch size for the parameters:

Batch size for the observations:

Learning rate:

Clipping:
Appendix C Supplementary results
C.1 Poisson simulator
To check that the procedure learned by our model is robust and really meaningful, we used a number of steps at test time greater than the number used during training. We see in Figure 5 that the performance is slightly better than at the training horizon, which shows that our model is robust to the number of iterations.
C.2 Multivariate Distribution
This simulator is a combination of canonical distributions, aimed at showing that our architecture has enough capacity to learn an optimization procedure valid for parameters with very different impacts on the samples. We also used a number of steps at test time greater than the number used during training. It can be observed from the left part of Figure 5 that the RMSE does not increase after iteration 15, which shows that the learned procedure has not overfitted on the number of steps. This figure also shows that the procedure converges in both mean and variance.
This shows that the architecture has enough capacity to learn an update procedure which eventually converges to a value close to the MLE, even in the case where the parameters have very different effects on the generated data.
C.3 Linear Regression
Figure 6 presents the results for the linear regression simulator. We again used a number of steps at test time greater than the number used during training. It can be observed from the box plot that, on average, ALFI performs comparably to the MLE. However, the parameter values remain difficult to estimate precisely in a few cases.
The left subfigure shows that the RMSE quickly converges within 15 iterations and then remains stable, with a slight variance reduction during the last 10 iterations.