
# Monte Carlo Gradient Estimation in Machine Learning

S. Mohamed, M. Rosca, M. Figurnov, A. Mnih

**Keywords:** gradient estimation, Monte Carlo, sensitivity analysis, score-function estimator, pathwise estimator, measure-valued estimator, variance reduction

## 1 Introduction

Over the past five decades the problem of computing the gradient of an expectation of a function—a stochastic gradient—has repeatedly emerged as a fundamental tool in the advancement of the state of the art in the computational sciences. An ostensibly anodyne gradient lies invisibly within many of our everyday activities: within the management of modern supply-chains (Siekman, 2000; Kapuscinski and Tayur, 1999), in the pricing and hedging of financial derivatives (Glasserman, 2013), in the control of traffic lights (Rubinstein and Shapiro, 1993), and in the major milestones in the ongoing research in artificial intelligence (Silver et al., 2016). Yet, computing the stochastic gradient is not without complexity, and its fundamental importance requires that we deepen our understanding of these gradients to sustain future progress. This is our aim in this paper: to provide a broad, accessible, and detailed understanding of the tools we have to compute gradients of stochastic functions. We also aim to describe their instantiations in other research fields, to consider the tradeoffs we face in choosing amongst the available solutions, and to consider questions for future research.

Our central question is the computation of a general probabilistic objective of the form:

$$\mathcal{F}(\theta) := \int p(x;\theta)\, f(x;\phi)\, dx = \mathbb{E}_{p(x;\theta)}\left[f(x;\phi)\right]. \tag{1}$$

Equation (1) is a mean-value analysis, in which a function f of an input variable x with structural parameters ϕ is evaluated on average with respect to an input distribution p(x; θ) with distributional parameters θ. We will refer to f as the cost and to p(x; θ) as the measure, following the naming used in the existing literature. We will make one restriction in this review, and that is to problems where the measure p is a probability distribution that is continuous in its domain and differentiable with respect to its distributional parameters. This is a general framework that allows us to cover problems from queueing theory and variational inference to portfolio design and reinforcement learning.

The need to learn the distributional parameters makes the gradient of the function (1) of greater interest to us:

$$\eta := \nabla_\theta \mathcal{F}(\theta) = \nabla_\theta \mathbb{E}_{p(x;\theta)}\left[f(x;\phi)\right]. \tag{2}$$

Equation (2) is the sensitivity analysis of F(θ), i.e. the gradient of the expectation with respect to the distributional parameters. It is this gradient that lies at the core of tools for model explanation and optimisation. But this gradient problem is difficult in general: we will often not be able to evaluate the expectation in closed form; the integrals over x are typically high-dimensional, making quadrature ineffective; it is common to request gradients with respect to high-dimensional parameters θ, easily in the order of tens-of-thousands of dimensions; and in the most general cases, the cost function f may itself not be differentiable, or be a black-box function whose output is all we are able to observe. In addition, for applications in machine learning, we will need efficient, accurate and parallelisable solutions that minimise the number of evaluations of the cost. These are all challenges we can overcome by developing Monte Carlo estimators of these integrals and gradients.

##### Overview.

We develop a detailed understanding of Monte Carlo gradient estimation by first introducing the general principles and considerations for Monte Carlo methods in Section 2, and then showing how stochastic gradient estimation problems of the form of (2) emerge in five distinct research areas. We then develop three classes of gradient estimators: the score-function estimator, the pathwise estimator, and the measure-valued gradient estimator in Sections 4–6. We discuss methods to control the variance of these estimators in Section 7. Using a set of case studies in Section 8, we explore the behaviour of gradient estimators in different settings to make their application and design more concrete. We discuss the generalisation of the estimators developed and other methods for gradient estimation in Section 9, and conclude in Section 10 with guidance on choosing estimators and some of the opportunities for future research. This paper follows in the footsteps of several related reviews that have had an important influence on our thinking, specifically, Fu (1994), Pflug (1996), Vázquez-Abad (2000), and Glasserman (2013).

##### Notation.

Throughout the paper, bold symbols indicate vectors; otherwise variables are scalars. f(x) is a function of variables x which may depend on structural parameters ϕ, although we will not explicitly write out this dependency since we will not consider these parameters in our exposition. We indicate a probability distribution using the symbol p, and use the notation p(x; θ) to represent the distribution over random vectors x with distributional parameters θ. ∇θ represents the gradient operator that collects all the partial derivatives of a function with respect to the parameters in θ, i.e. ∇θf = [∂f/∂θ₁, …, ∂f/∂θ_D], for D-dimensional parameters; ∇θ will also be used for scalar θ. By convention, we consider vectors to be columns and gradients as rows. We represent the sampling or simulation of variates x̂ from a distribution p(x; θ) using the notation x̂ ∼ p(x; θ). We use 𝔼p[f] and 𝕍p[f] to denote the expectation and variance of the function f under the distribution p, respectively. Shorthand notation for distributions, such as N(μ, σ²) for the Gaussian or M(μ, σ²) for the double-sided Maxwell distribution, is defined in Appendix A.

##### Reproducibility.

Code to reproduce Figures 2 and 3 in the paper will later be available in a repository at www.github.com/deepmind/mc_gradients.

## 2 Monte Carlo Methods and Stochastic Optimisation

This section briefly reviews the Monte Carlo method and the stochastic optimisation setting we rely on throughout the paper. But importantly, this section provides the motivation for why we consider the gradient estimation problem (2) to be so fundamental, by exploring what is an astounding breadth of research areas in which it appears.

### 2.1 Monte Carlo Estimators

The Monte Carlo method is one of the most general tools we have for the computation of probabilities, integrals and summations. Consider the mean-value analysis problem (1), which evaluates the expected value of a general function f under a distribution p(x; θ). In most problems, the integral (1) will not be known in closed-form, and not amenable to evaluation using numerical quadrature. However, by using the Monte Carlo method (Metropolis and Ulam, 1949) we can easily approximate the value of the integral. The Monte Carlo method says that we can numerically evaluate the integral by first drawing N independent samples x̂⁽¹⁾, …, x̂⁽ᴺ⁾ from the distribution p(x; θ), and then computing the average of the function evaluated at these samples:

$$\bar{\mathcal{F}}_N = \frac{1}{N} \sum_{n=1}^{N} f\left(\hat{x}^{(n)}\right), \quad \text{where } \hat{x}^{(n)} \sim p(x;\theta) \text{ for } n = 1, \ldots, N. \tag{3}$$

The quantity F̄_N is a random variable, since it depends on the specific set of random variates used, and we can repeat this process many times by constructing multiple sets of such random variates. Equation (3) is a Monte Carlo estimator of the expectation (1).

As long as we can write an integral in the form of equation (1)—as a product of a function and a distribution that we can easily sample from—we will be able to apply the Monte Carlo method and develop Monte Carlo estimators. This is the strategy we use throughout this paper.
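As a concrete illustration, the estimator (3) can be written in a few lines of numpy. This is a minimal sketch: the cost f(x) = x² and the standard Gaussian measure below are illustrative choices, not taken from the text.

```python
import numpy as np

def mc_estimate(f, sample, n=10_000, seed=0):
    """Monte Carlo estimator (3): average f over n i.i.d. draws from p(x; theta)."""
    rng = np.random.default_rng(seed)
    x = sample(rng, n)      # x^(1), ..., x^(N) ~ p(x; theta)
    return f(x).mean()      # (1/N) sum_n f(x^(n))

# Example: E[x^2] under N(0, 1) is exactly 1; the estimate converges as n grows.
est = mc_estimate(lambda x: x**2, lambda rng, n: rng.standard_normal(n))
```

Repeating the call with different seeds gives different draws of the random variable F̄_N; the spread of these draws is the estimator variance discussed below.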

There are four properties we will always ask of a Monte Carlo estimator:

Consistency.

As we increase the number of samples N in (3), the estimate F̄_N should converge to the true value of the integral F(θ). This usually follows from the strong law of large numbers.

Unbiasedness.

If we repeat the estimation process many times, we should find that the estimate F̄_N is centred on the actual value of the integral on average. The Monte Carlo estimators for (1) satisfy this property easily:

$$\mathbb{E}_{p(x;\theta)}\left[\bar{\mathcal{F}}_N\right] = \mathbb{E}_{p(x;\theta)}\left[\frac{1}{N}\sum_{n=1}^{N} f\left(\hat{x}^{(n)}\right)\right] = \frac{1}{N}\sum_{n=1}^{N} \mathbb{E}_{p(x;\theta)}\left[f\left(\hat{x}^{(n)}\right)\right] = \mathbb{E}_{p(x;\theta)}\left[f(x)\right]. \tag{4}$$

Unbiasedness is always preferred because it allows us to guarantee the convergence of a stochastic optimisation procedure. Biased estimators can sometimes be useful but require more care in their use (Mark and Baram, 2001).

Minimum variance.

Because the estimator (3) is a random variable, if we compare two unbiased estimators, we will prefer the estimator that has lower variance. We will repeatedly emphasise the importance of low variance estimators for two reasons: the resulting gradient estimates are themselves more accurate, which is essential for problems in sensitivity analysis where the actual value of the gradient is the object of interest; and where the gradient is used for stochastic optimisation, low-variance gradients make learning more efficient, allowing larger step-sizes (learning rates) to be used, potentially reducing the overall number of steps needed to reach convergence and hence resulting in faster training.

Computational efficiency.

We will always prefer an estimator that is computationally efficient, such as those that allow the expectation to be computed using the fewest number of samples, those that have a computational cost linear in the number of parameters, and those whose computations can be easily parallelised.

We can typically assume that our estimators are consistent because of the generality of the law of large numbers. Therefore most of our effort will be directed towards characterising their unbiasedness, variance and computational cost, since it is these properties that affect the choice we make between competing estimators. Monte Carlo methods are widely studied, and the books by Robert and Casella (2013) and Owen (2013) provide a deep coverage of their wider theoretical properties and practical considerations.

### 2.2 Stochastic Optimisation

The gradient (2) supports at least two key computational tasks: explanation and optimisation. Because the gradient provides a computable value that characterises the behaviour of the cost—the cost’s sensitivity to changing settings—a gradient is directly useful as a tool with which to explain the behaviour of a probabilistic system. More importantly, the gradient (2) is the key quantity needed for the optimisation of the distributional parameters θ.

Figure 1: Stochastic optimisation loop comprising a simulation phase and an optimisation phase. The simulation phase produces a simulation of the stochastic system or interaction with the environment, as well as unbiased estimators of the gradient.

If the optimisation phase is also used with stochastic approximation (Robbins and Monro, 1951; Kushner and Yin, 2003), such as stochastic gradient descent, as is widely-used in machine learning, then this loop can be described as a doubly-stochastic optimisation (Titsias and Lázaro-Gredilla, 2014). In this setting, there are now two sources of stochasticity. One source arises in the simulation phase from the use of Monte Carlo estimators of the gradient, which are random variables because of the repeated sampling from the measure, like those in (3). A second source of stochasticity arises in the optimisation phase from the use of the stochastic approximation method, which introduces a source of randomness through the subsampling of data points (mini-batching) when computing the gradient. In Section 8.3 we explore some of the performance effects from the interaction between these sources of stochasticity.

### 2.3 The Central Role of Gradient Estimation

Across the breadth of research areas, whether in approximate inference, reinforcement learning, experimental design, or active learning, the need for accurate and efficient computation of stochastic gradients and their corresponding Monte Carlo estimators appears, making the gradient question one of the fundamental problems of statistical and machine learning research. We make the problem (2) more concrete by briefly describing its instantiation in five areas, each with independent and thriving research communities of their own. In them, we can see the central role of gradient estimation by matching the pattern of the problem that arises, and in so doing, see their shared foundations.

Variational Inference.

Variational inference is a general method for approximating complex and unknown distributions by the closest distribution within a tractable family. Wherever the need to approximate distributions appears in machine learning—in supervised, unsupervised or reinforcement learning—a form of variational inference can be used. Consider a generic probabilistic model that defines a generative process in which observed data x is generated using a set of unobserved variables z, using a model p(x|z), which may have additional side information or additional parameters, and a prior distribution p(z). In supervised learning, the unobserved variables might correspond to the weights of a regression problem, and in unsupervised learning to latent variables. The posterior distribution p(z|x) of this generative process is unknown, and is approximated by a variational distribution q(z|x; θ), which is a parameterised family of distributions with variational parameters θ, e.g., the mean and variance of a Gaussian distribution. Finding the distribution q that is closest to the posterior leads to an objective, the variational free-energy, that optimises an expected log-likelihood subject to a regularisation constraint that encourages closeness between the variational distribution and the prior distribution (Jordan et al., 1999; Blei et al., 2017). Optimising the distribution requires the gradient of the free energy with respect to the variational parameters θ:

$$\eta = \nabla_\theta \mathbb{E}_{q(z|x;\theta)}\left[\log p(x|z) - \log \frac{q(z|x;\theta)}{p(z)}\right]. \tag{5}$$

This is an objective that is in the form of equation (1): the cost is the difference between a log-likelihood and a log-ratio (of the variational distribution and the prior distribution), and the measure is the variational distribution q(z|x; θ). This problem also appears in other research areas, especially in statistical physics, information theory and utility theory (Honkela and Valpola, 2004). Many of the solutions that have been developed for scene understanding, representation learning, photo-realistic image generation, or the simulation of molecular structures rely on variational inference (Eslami et al., 2018; Kusner et al., 2017; Higgins et al., 2017). In variational inference we find a thriving research area where the problem (2) of computing gradients of expectations lies at its core.
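As a minimal sketch of this objective, consider an assumed toy model (not from the text): prior p(z) = N(0, 1), likelihood p(x|z) = N(x; z, 1), and variational family q(z; θ) = N(m, s²). The expectation in (5) can then be estimated by Monte Carlo:

```python
import numpy as np

# Toy model (assumed for illustration): p(z) = N(0,1), p(x|z) = N(x; z, 1),
# variational family q(z; theta) = N(m, s^2) with theta = (m, s).
x_obs, m, s = 2.0, 1.0, 0.8
rng = np.random.default_rng(0)
z = rng.normal(m, s, size=100_000)          # z ~ q(z; theta)

log_lik = -0.5 * np.log(2 * np.pi) - 0.5 * (x_obs - z) ** 2
log_q = -0.5 * np.log(2 * np.pi * s**2) - 0.5 * ((z - m) / s) ** 2
log_prior = -0.5 * np.log(2 * np.pi) - 0.5 * z**2

# Monte Carlo estimate of the objective inside (5): E_q[log p(x|z) - log(q/p)].
elbo = (log_lik - (log_q - log_prior)).mean()
```

Here the objective itself is estimated; the harder problem, and the subject of this paper, is estimating its gradient with respect to (m, s), since the samples z depend on those parameters.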

Reinforcement Learning.

Model-free policy search is an area of reinforcement learning where we learn a policy—a distribution over actions—that on average maximises the accumulation of long-term rewards. Through interaction in an environment, we can generate trajectories τ = (s₁, a₁, …, s_T, a_T) that consist of pairs of states s_t and actions a_t for a time period t = 1, …, T. A policy is learnt by following the policy gradient (Sutton and Barto, 1998):

$$\eta = \nabla_\theta \mathbb{E}_{p(\tau;\theta)}\left[\sum_{t=1}^{T} \gamma^{t-1}\, r(s_t, a_t)\right], \tag{6}$$

which again has the form of equation (1). The cost is the return over the trajectory, which is a weighted sum of rewards r(s_t, a_t) obtained at each time step t, with discount factor γ ∈ (0, 1). The measure is the joint distribution over states and actions p(τ; θ), which is the product of the state transition probabilities p(s_{t+1}|s_t, a_t) and the policy distribution p(a_t|s_t; θ) with policy parameters θ. The Monte Carlo gradient estimator used to compute this gradient with respect to the policy parameters leads to the policy gradient theorem, one of the key theorems in reinforcement learning, which lies at the heart of many successful applications, such as the AlphaGo system for complex board games (Silver et al., 2016) or in robotic control (Deisenroth et al., 2013). And so we find yet another thriving area of research where gradient estimation plays a fundamental role.

Sensitivity Analysis.

The field of sensitivity analysis is dedicated to the study of problems of the form of (2), and asks what the sensitivity (another term for gradient) of an expectation is to its input parameters. Computational finance is one area where sensitivity analysis is widely used to compare investments under different pricing and return assumptions, in order to choose a strategy that provides the best potential future payout. Knowing the value of the gradient gives information needed to understand the sensitivity of future payouts to different pricing assumptions, and provides a direct measure of the financial risk that an investment strategy will need to manage. In finance, these sensitivities, or gradients, are referred to as the greeks. A classic example of this problem is the Black-Scholes option pricing model, which can be written in the form of equation (1): the cost function is the discounted payoff e^{−γT}(s_T − K)⁺ of an option with strike price K on an asset with price s_T at the time of maturity T, using discount factor γ; and the measure is a log-normal distribution over the price at maturity, which is a function of the initial price s₀. The gradient with respect to the initial price s₀ is known as the Black-Scholes Delta (Chriss, 1996):

$$\Delta = \nabla_{s_0} \mathbb{E}_{p(s_T; s_0)}\left[e^{-\gamma T}\left(s_T - K\right)^{+}\right], \tag{7}$$

where γ is a minimum expected return on the investment and (s_T − K)⁺ = max(0, s_T − K). Delta is the risk measure that is actively minimised in delta-neutral hedging strategies. The Black-Scholes delta above can be computed in closed form, and the important greeks have closed or semi-closed form formulae in the Black-Scholes formulation. Gradient estimation methods are used when the cost function (the payoff) is more complicated, or in more realistic models where the measure is not a log-normal. In these more general settings, the gradient estimators we review in this paper are the techniques that are still widely-used in financial computations today (Glasserman, 2013).
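The following sketch prices the option in (7) by Monte Carlo under assumed Black-Scholes dynamics, with illustrative parameters; it also computes Delta with a pathwise-style estimator (differentiating the payoff along each sampled path), anticipating the estimator of Section 5.

```python
import numpy as np

# Illustrative parameters (assumed, not from the text): initial price s0,
# strike K, rate r standing in for the discount factor, volatility sig.
s0, K, r, sig, T = 100.0, 100.0, 0.05, 0.2, 1.0

rng = np.random.default_rng(0)
n = 200_000
z = rng.standard_normal(n)
# Log-normal price at maturity under risk-neutral Black-Scholes dynamics.
sT = s0 * np.exp((r - 0.5 * sig**2) * T + sig * np.sqrt(T) * z)

payoff = np.exp(-r * T) * np.maximum(sT - K, 0.0)   # discounted payoff e^{-rT}(s_T - K)^+
price = payoff.mean()                                # Monte Carlo option value

# Pathwise estimate of Delta: differentiate the payoff along each sampled path,
# using d sT / d s0 = sT / s0 (the payoff is differentiable almost everywhere).
delta = (np.exp(-r * T) * (sT > K) * sT / s0).mean()
```

Both quantities agree closely with the Black-Scholes closed forms at these parameter values, which is what makes this a convenient test-bed for gradient estimators.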

Discrete Event Systems and Queuing Theory.

An enduring problem in operations research is the study of queues, the waiting systems and lines that we all find ourselves in as we await our turn for a service. Such systems are often described by a stochastic recursion L_{t+1} = [L_t + s_t − a_t]⁺, where L_t is the total system time (of people in the queue plus in service), s_t is the service time for the t-th customer, a_t is the inter-arrival time between the t-th and the (t+1)-th customer, and the operation [x]⁺ = max(0, x) (Vázquez-Abad, 2000; Rubinstein et al., 1996). The number of customers served in this system is denoted by τ. For convenience we can write the service time and the inter-arrival time using the variable y_t = (s_t, a_t), and characterise it by a distribution p(y; θ) with distributional parameters θ. The expected steady-state behaviour of this system gives a problem of the form of (2), where the cost is the ratio of the average total system time to the average number of customers served, and the measure is the joint distribution over service and inter-arrival times (Rubinstein, 1986):

$$\eta = \nabla_\theta \mathbb{E}_{p(y_{1:T};\theta)}\left[\frac{\sum_{t=1}^{T} L_t(y_{1:t})}{\tau(y_{1:T})}\right]. \tag{8}$$

In Kendall’s notation, this is a general description for G/G/1 queues (Newell, 2013): queues with general distributions for the arrival rate, general distributions for the service time, and a single-server first-in-first-out queue. This problem also appears in many other areas and under many other names, particularly in regenerative and semi-Markov processes, and discrete event systems (Cassandras and Lafortune, 2009). In all these cases, the gradient of the expectation of a cost, described as a ratio, is needed to optimise a sequential process. Queues permeate all parts of our lives, hidden from us in global supply chains and in internet routing, and visible to us at traffic lights and in our online and offline shopping, all made efficient through the use of Monte Carlo gradient estimators.

Experimental Design.

In experimental design we are interested in finding the best designs—the inputs or settings to a possibly unknown system—that result in outputs that are optimal with respect to some utility or score. Designs are problem configurations, such as a hypothesis in an experiment, the locations of sensors on a device, or the hyperparameters of a statistical model (Chaloner and Verdinelli, 1995). We evaluate the system using a given design θ and measure its output y, usually in settings where the measurements are expensive to obtain and hence cannot be taken frequently. One such problem is the probability-of-improvement, which allows us to find designs that on average improve over a currently best-known outcome y*. This is an objective that can be written in the form of equation (1), where the cost is the score of a new design being better than the current best design, and the measure is a predictive function p(y|θ), which allows us to simulate the output y for an input design θ. To find the best design, we will need to compute the gradient of the probability-of-improvement with respect to the design θ:

$$\eta = \nabla_\theta \mathbb{E}_{p(y|\theta)}\left[\mathds{1}_{\{y < y^{*}\}}\right], \tag{9}$$

where the indicator 𝟙_{y<y*} is one if the condition y < y* is met, and zero otherwise. There are many such objectives, in areas such as Bayesian optimisation, active learning and bandits (Shahriari et al., 2016; Wilson et al., 2018), all of which involve computing the gradient of an expectation of a loss function, with wide use in computer graphics, model architecture search, automatic machine learning, and treatment design; again highlighting the central role that general-purpose gradient estimators play in modern applications.
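A sketch of estimating the probability-of-improvement in (9) by Monte Carlo, under an assumed Gaussian predictive distribution p(y|θ) = N(θ, 1) with illustrative values (smaller y taken as better):

```python
import numpy as np

# Toy predictive model (assumed): p(y | theta) = N(theta, 1); y_star is the
# best-known outcome, and smaller y is better, matching the indicator in (9).
theta, y_star = 0.0, 0.5
rng = np.random.default_rng(0)
y = rng.normal(theta, 1.0, size=200_000)

# MC estimate of E[1{y < y*}]; for this model the exact value is Phi(y* - theta).
prob_improve = (y < y_star).mean()
```

Estimating the value is easy; the gradient with respect to θ is harder, because the indicator is not differentiable in y — exactly the situation where the derivative-of-measure estimators of Sections 4 and 6 apply.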

While these five areas are entire fields of their own, they are also important problems for which there is ongoing effort throughout machine learning. There are also many other problems where the need for computing stochastic gradients appears, including systems modelling using stochastic differential equations, parameter learning of generative models in algorithms such as variational autoencoders, generative adversarial networks and generative stochastic networks (Rezende et al. (2014); Kingma and Welling (2014b); Goodfellow et al. (2014); Bengio et al. (2014)), in bandits and online learning (Hu et al., 2016), in econometrics and simulation-based estimation (Gourieroux et al., 1996), and in instrumental-variables estimation and counter-factual reasoning (Hartford et al., 2016). The ability to compute complicated gradients gives us the confidence to tackle increasingly more complicated and interesting problems.

## 3 Intuitive Analysis of Gradient Estimators

The structure of the sensitivity analysis problem (2) directly suggests that gradients can be computed in two ways:

Derivatives of Measure.

The gradient can be computed by differentiation of the measure p(x; θ). Gradient estimators in this class include the score function estimator (Section 4) and the measure-valued gradient (Section 6).

Derivatives of Paths.

The gradient can be computed by differentiation of the cost f(x), which encodes the pathway from the parameters θ, through the random variable x, to the cost value. In this class of estimators, we will find the pathwise gradient (Section 5), harmonic gradient estimators and finite differences (Section 9.5), and Malliavin-weighted estimators (Section 9.7).

We focus our attention on three classes of gradient estimators: the score function, pathwise and measure-valued gradient estimators. All three estimators satisfy two desirable properties that we identified previously: they are consistent and unbiased. But they differ in their variance behaviour and in their computational cost. Without knowing the mathematical descriptions of these three gradient estimators, we can compare the performance of the three estimation approaches in a simplified problem, to develop an intuitive view of the differences between methods with regards to performance, computational cost, differentiability, and variability of the cost function.

Consider the stochastic gradient problem (2) that uses Gaussian measures for three simple families of cost functions, quadratics, exponentials and cosines:

$$\eta = \nabla_\theta \int \mathcal{N}(x\,|\,\mu, \sigma^2)\, f(x; k)\, dx; \quad \theta \in \{\mu, \sigma\}; \quad f \in \left\{(x-k)^2,\ \exp(-kx^2),\ \cos(kx)\right\}. \tag{10}$$

We are interested in estimates of the gradient (10) with respect to the mean μ and the standard deviation σ of the Gaussian distribution. The cost functions vary with a parameter k, which allows us to explore how changes in the cost affect the gradient. In the graphs that follow, we use numerical integration to compute the variance of these gradients. To reproduce these graphs, see the note on code in the introduction.

Figure 2 compares the variance of several gradient estimators, for the quadratic function f(x) = (x − k)². For this function we see that we could create a rule-of-thumb ranking: the highest variance estimator is the score function, lower variance is obtained by the pathwise derivative, and the lowest variance estimator is the measure-valued derivative. But for the gradient with respect to the mean μ, we also see that this is not uniformly true, since there are quadratic functions, those with small or large offsets k, for which the variance of the pathwise estimator is lower than the measure-valued derivative. This lack of universal ranking is a general property of gradient estimators.

The computational cost of the estimators in Figure 2 is the same for the score-function and pathwise estimators: they can both be computed using a single sample in the Monte Carlo estimator (N = 1 in (3)), even for multivariate distributions, making them computationally cheap. The measure-valued derivative estimator will require 2D evaluations of the cost function for D-dimensional parameters, and for this reason will typically not be preferred in high-dimensional settings. We will later find that if the cost function is not differentiable, then the pathwise gradient will not be applicable. These are considerations which will have a significant impact on the choice of gradient estimator for any particular problem.
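To make the rule-of-thumb ranking concrete, the following sketch (anticipating the formal derivations in Sections 4 and 5) compares the single-sample variances of the score-function and pathwise estimators of the mean-gradient for the quadratic cost. The settings of μ, σ and k are illustrative assumptions.

```python
import numpy as np

mu, sigma, k = 0.0, 1.0, 2.0     # illustrative settings for f(x) = (x - k)^2
rng = np.random.default_rng(0)
n = 100_000
eps = rng.standard_normal(n)
x = mu + sigma * eps             # x ~ N(mu, sigma^2)

# Score-function estimator samples of d/dmu E[f]: f(x) * d log p / d mu.
score_samples = (x - k) ** 2 * (x - mu) / sigma**2
# Pathwise estimator samples: differentiate f(mu + sigma*eps) w.r.t. mu directly.
path_samples = 2 * (x - k)

true_grad = 2 * (mu - k)         # d/dmu [(mu - k)^2 + sigma^2] = 2(mu - k)
```

At these settings both sample means agree with the true gradient, but the empirical variance of the score-function samples is far larger than that of the pathwise samples, matching the ranking seen in Figure 2 for moderate offsets k.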

##### Exponential and Cosine costs.

Figure 3 shows the variance behaviour for two other functions, exp(−kx²) and cos(kx). As the parameter k of these functions varies, the functions can become very sharp and peaked (see the black graphs at the bottom), and this change in the cost function can change the effectiveness of a gradient estimator: from being the lowest-variance estimator in one regime, to being the highest-variance one in another. The green curve in Figure 3 for the pathwise estimator shows its variance increasing as k becomes larger. The measure-valued derivative (red curve) has similar behaviour. The blue curve for the score function for the exponential cost shows that its variance can behave in the opposite way, and can be lower for higher values of k. We highlight this because, in machine learning applications, we usually learn the cost function (by optimising its structural parameters), and face a setting akin to varying k in these graphs. Therefore the process of learning influences the variance behaviour of an estimator differently at different times over the course of learning, and is an aspect we will need to take steps to control.

Figures 2 and 3 also demonstrate the importance of variance reduction. The score function estimator is commonly used with a control variate, a way to reduce the variance of the gradient that we explore further in Section 7. We see a large decrease in variance by employing this technique. The variance of the measure-valued derivative estimator in these plots is also shown with a form of variance reduction (known as coupling), and for these simple cost functions, there are regimes of the function that allow corrections that drive the variance to zero; we can see this in the kink in the plot of the variance of the mean-gradient for the cosine cost function.

From this initial exploration, we find that there are several criteria to be judged when choosing an unbiased gradient estimator: computational cost, implications on the use of differentiable and non-differentiable cost functions, the change in behaviour as the cost itself changes, e.g., during learning, and the availability of effective variance reduction techniques to achieve low variance. We will revisit these figures again in subsequent sections as we develop the precise description of these methods. We will assess each estimator based on these criteria, working towards building a deeper understanding of them and their implications for theory and practice.

## 4 Score Function Gradient Estimators

The score function estimator is in the class of derivatives of measure, and is one of the most general types of gradient estimators available to us. Because of its widespread and general-purpose nature, it appears under various names throughout the literature, including the score function estimator (Kleijnen and Rubinstein, 1996; Rubinstein et al., 1996), the likelihood ratio method (Glynn, 1990), and the REINFORCE estimator (Williams, 1992). This section will derive the estimator, expose some of its underlying assumptions and possible generalisations, explore its behaviour in various settings, and briefly review its historical development.

### 4.1 Score Functions

A score function is the derivative of the log of a probability distribution with respect to its distributional parameters, which using the rule for the derivative of the logarithm can be expanded as:

$$\nabla_\theta \log p(x;\theta) = \frac{\nabla_\theta p(x;\theta)}{p(x;\theta)}. \tag{11}$$

This identity is useful since it relates the derivative of a probability to the derivative of a log-probability; for Monte Carlo estimators it will allow us to rewrite integrals of gradients in terms of expectations under the measure p(x; θ). It is the appearance of the score function in the gradient estimator we develop in this section that will explain its name.

The score function is important since, amongst other uses, it is the key quantity in maximum likelihood estimation (Stigler et al., 2007). One property of the score that will later be useful is that its expectation is zero:

$$\mathbb{E}_{p(x;\theta)}\left[\nabla_\theta \log p(x;\theta)\right] = \int p(x;\theta) \frac{\nabla_\theta p(x;\theta)}{p(x;\theta)}\, dx = \int \nabla_\theta p(x;\theta)\, dx = \nabla_\theta \int p(x;\theta)\, dx = \nabla_\theta 1 = 0. \tag{12}$$

We show this in (12) by first replacing the score using the ratio given by the identity (11), then cancelling terms, interchanging the order of differentiation and integration, and finally recognising that probability distributions must integrate to one, resulting in a derivative of zero. A second important property of the score is that its variance, known as the Fisher information, is an important quantity in establishing the Cramer-Rao lower bound (Lehmann and Casella, 2006).
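Both properties are easy to check numerically. For the mean of a Gaussian, the score is ∇μ log N(x; μ, σ²) = (x − μ)/σ², its expectation is zero, and its variance is the Fisher information 1/σ². The sketch below uses illustrative parameter values:

```python
import numpy as np

mu, sigma = 1.5, 2.0                 # illustrative Gaussian parameters
rng = np.random.default_rng(0)
x = rng.normal(mu, sigma, size=500_000)

score = (x - mu) / sigma**2          # d/dmu log N(x; mu, sigma^2)
mean_score = score.mean()            # should be ~ 0, property (12)
fisher = score.var()                 # should be ~ 1/sigma^2 = 0.25
```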

### 4.2 Deriving the Estimator

Using knowledge of the score function, we can now derive a general-purpose estimator for the sensitivity analysis problem (2); its derivation is uncomplicated and insightful:

$$\begin{aligned} \eta &= \nabla_\theta \mathbb{E}_{p(x;\theta)}\left[f(x)\right] = \nabla_\theta \int p(x;\theta) f(x)\, dx = \int f(x) \nabla_\theta p(x;\theta)\, dx & \text{(13a)} \\ &= \int p(x;\theta)\, f(x) \nabla_\theta \log p(x;\theta)\, dx & \text{(13b)} \\ &= \mathbb{E}_{p(x;\theta)}\left[f(x) \nabla_\theta \log p(x;\theta)\right] & \text{(13c)} \\ \bar{\eta}_N &= \frac{1}{N} \sum_{n=1}^{N} f\left(\hat{x}^{(n)}\right) \nabla_\theta \log p\left(\hat{x}^{(n)};\theta\right); \quad \hat{x}^{(n)} \sim p(x;\theta). & \text{(13d)} \end{aligned}$$

In the first line (13a), we expanded the definition of the expectation as an integral and then exchanged the order of the integral and the derivative; we discuss the validity of this operation in section 4.3.1. In (13b), we use the score identity (11) to replace the gradient of the probability by the product of the probability and the gradient of the log-probability. Finally, we obtain an expectation (13c), which is in the form we need—a product of a distribution we can easily sample from and a function we can evaluate—to provide a Monte Carlo estimator of the gradient in (13d).
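A sketch of the estimator (13d) for a Gaussian measure, using the closed-form score functions for the mean and standard deviation; the cost f(x) = x² is an illustrative choice with a known answer, since E[x²] = μ² + σ² gives true gradients (2μ, 2σ):

```python
import numpy as np

def score_function_grad(f, mu, sigma, n=200_000, seed=0):
    """Estimator (13d) for a Gaussian measure N(mu, sigma^2): average
    f(x) * grad_theta log p(x; theta) over samples, with theta = (mu, sigma)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sigma, size=n)
    dlogp_dmu = (x - mu) / sigma**2                       # score w.r.t. mu
    dlogp_dsigma = ((x - mu) ** 2 - sigma**2) / sigma**3  # score w.r.t. sigma
    fx = f(x)
    return (fx * dlogp_dmu).mean(), (fx * dlogp_dsigma).mean()

# For f(x) = x^2, E[f] = mu^2 + sigma^2, so the true gradients are (2*mu, 2*sigma).
g_mu, g_sigma = score_function_grad(lambda x: x**2, mu=1.0, sigma=0.5)
```

Note that only evaluations of f are needed, never its derivatives, which is what makes the estimator applicable to non-differentiable and black-box costs.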

Equation (13c) is the most basic form in which we can write this gradient. One simple modification replaces the cost function with a shifted version of it:

$$\eta = \mathbb{E}_{p(x;\theta)}\left[\left(f(x) - \beta\right) \nabla_\theta \log p(x;\theta)\right], \tag{14}$$

where β is a constant that we will call a baseline. For any value of β, we still obtain an unbiased estimator because the additional term it introduces has zero expectation due to the property (12) of the score. This baseline-corrected form should be preferred to (13c), because, as we will see in Section 7, it allows for a simple but effective form of variance reduction.
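A quick numerical illustration of the baseline's effect, using a Gaussian measure with f(x) = x² and β set to an estimate of E[f(x)] (an illustrative choice; Section 7 discusses principled choices). For strict unbiasedness, β should be computed from samples independent of those used in the estimator; here we reuse them for brevity.

```python
import numpy as np

mu, sigma = 1.0, 0.5
rng = np.random.default_rng(0)
x = rng.normal(mu, sigma, size=200_000)
score = (x - mu) / sigma**2            # d/dmu log N(x; mu, sigma^2)
fx = x**2                              # cost f(x) = x^2; true gradient d/dmu E[f] = 2*mu

plain = fx * score                     # samples of the basic estimator (13c)
beta = fx.mean()                       # baseline ~ E[f(x)] (reused samples, illustration only)
shifted = (fx - beta) * score          # samples of the baseline-corrected estimator (14)
```

Both sample means agree with the true gradient 2μ = 2, but the baseline-corrected samples have markedly lower variance.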

### 4.3 Estimator Properties and Applicability

By inspecting the form (13c), we can intuitively see that the score-function estimator relates the overall gradient to the gradient of the measure reweighted by the value of the cost function. This intuitiveness is why the score function estimator was one of the first and most widely-used estimators for sensitivity analysis. But there are several properties of the score-function gradient to consider that have a deep impact on its use in practice.

#### 4.3.1 Unbiasedness

When the interchange between differentiation and integration in (13a) is valid, we will obtain an unbiased estimator of the gradient (L’Ecuyer, 1995). Intuitively, since differentiation is a process of limits, the validity of the interchange will relate to the conditions for which it is possible to exchange limits and integrals, in such cases most often relying on the use of the dominated convergence theorem or the Leibniz integral rule (Flanders, 1973; Grimmett and Stirzaker, 2001). The interchange will be valid if the following conditions are satisfied:

• The measure $p(x;\theta)$ is continuously differentiable in its parameters $\theta$.

• The product $f(x)p(x;\theta)$ is both integrable and differentiable for all parameters $\theta$.

• There exists an integrable function $g(x)$ such that $\sup_\theta \lVert f(x)\nabla_\theta p(x;\theta)\rVert_1 \le g(x)$.

These assumptions usually hold in machine learning applications, since probability distributions, especially the ones that appear most often in machine learning, are smooth functions of their parameters. L’Ecuyer (1995) provides an in-depth discussion of the validity of interchanging integration and differentiation, and also develops additional tools to check that these conditions are satisfied.

#### 4.3.2 Absolute Continuity

An important behaviour of the score-function estimator is exposed by rewriting it in one other way. Here, we consider a scalar distributional parameter $\theta$ and look at the derivative using first principles:

$$
\begin{aligned}
\nabla_\theta\mathbb{E}_{p(x;\theta)}\left[f(x)\right] &= \int\nabla_\theta p(x;\theta)f(x)dx && (15a)\\
&= \int\lim_{h\to 0}\frac{p(x;\theta+h)-p(x;\theta)}{h}f(x)dx && (15b)\\
&= \lim_{h\to 0}\int\frac{p(x;\theta+h)-p(x;\theta)}{h}f(x)dx && (15c)\\
&= \lim_{h\to 0}\frac{1}{h}\int\left(p(x;\theta)\frac{p(x;\theta+h)}{p(x;\theta)}-p(x;\theta)\right)f(x)dx && (15d)\\
&= \lim_{h\to 0}\frac{1}{h}\int p(x;\theta)\left(\frac{p(x;\theta+h)}{p(x;\theta)}-1\right)f(x)dx && (15e)\\
&= \lim_{h\to 0}\frac{1}{h}\left(\mathbb{E}_{p(x;\theta)}\left[\omega(\theta,h)f(x)\right]-\mathbb{E}_{p(x;\theta)}\left[f(x)\right]\right). && (15f)
\end{aligned}
$$

In the first line (15a), we again exchange the order of integration and differentiation, which we established is safe in most use-cases. We then expand the derivative in terms of its limit definition in (15b), and swap the order of the limit and the integral in (15c). We introduce an identity term $\frac{p(x;\theta)}{p(x;\theta)}$ in (15d) to allow us to later rewrite the expression as an expectation with respect to the distribution $p(x;\theta)$. Finally, we simplify the expression by separating it into two terms (15e), denoting the importance weight as $\omega(\theta,h) := \frac{p(x;\theta+h)}{p(x;\theta)}$, and rewrite the gradient using the expectation notation (15f). It is this final expression that exposes a hidden requirement of the score-function estimator.

The ratio $\omega(\theta,h)$ in (15f) is similar to the one that appears in importance sampling (Robert and Casella, 2013). Like importance sampling, the estimator makes an implicit assumption of absolute continuity, where we require $p(x;\theta) > 0$ wherever $p(x;\theta+h) > 0$. Not all distributions of interest satisfy this property, and failures of absolute continuity can result in a biased gradient. As one instance, absolute continuity is violated when the parameter defines the support of the distribution, such as the uniform distribution $\mathcal{U}[0,\theta]$.

Example (Bounded support). Consider the score-function estimator for a cost $f(x) = x$ and distribution $p(x;\theta) = \mathcal{U}[0,\theta]$, which is differentiable in $\theta$ when $x \in (0,\theta)$; the score is $\nabla_\theta\log p(x;\theta) = \nabla_\theta\log\frac{1}{\theta} = -\frac{1}{\theta}$. This is a popular example also used by Glasserman (2013) and Pflug (1996), amongst others. Comparing the two gradients:

$$
\nabla_\theta\mathbb{E}_{p(x;\theta)}\left[x\right] = \nabla_\theta\frac{\theta}{2} = \frac{1}{2}; \qquad \mathbb{E}_{p(x;\theta)}\left[x\left(-\frac{1}{\theta}\right)\right] = -\frac{1}{2}. \qquad (16)
$$

In this example, because $p(x;\theta+h)$ is not absolutely continuous with respect to $p(x;\theta)$ at the boundary of the support, the estimator fails to provide the correct gradient.
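This failure is easy to reproduce numerically; a small sketch of the bounded-support example, with the illustrative choice $\theta = 2$:

```python
import numpy as np

theta = 2.0
rng = np.random.default_rng(0)
x = rng.uniform(0.0, theta, size=200_000)

# Score of U[0, theta] on its support: log p = -log(theta), so the score is -1/theta.
score_estimate = np.mean(x * (-1.0 / theta))   # converges to -theta/2 * 1/theta = -1/2

true_gradient = 0.5                            # d/dtheta E[x] = d/dtheta (theta / 2)
```

The estimator converges confidently to -1/2 rather than the true value 1/2: the bias does not vanish with more samples.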

#### 4.3.3 Estimator Variance

In the intuitive analysis of Section 3, we compared the score-function estimator to other gradient estimation techniques (which we explore in the subsequent sections), and saw that even in those simple univariate settings, the variance of the score-function estimator can vary widely as the cost function varies (blue curves in Figures 2 and 3). Starting from the estimator (13d) and denoting the estimator mean as $\mu(\theta)$, we can write the variance of the single-sample score-function estimator as:

$$
\mathbb{V}_{p(x;\theta)}\left[\bar{\eta}_{N=1}\right] = \mathbb{E}_{p(x;\theta)}\left[\left(f(x)\nabla_\theta\log p(x;\theta)\right)^2\right] - \mu(\theta)^2, \qquad (17)
$$

which writes out the definition of the variance and allows us to see that the parameter dimensionality, through its appearance in the score, is an important part of the estimator’s variance. The alternative form of the gradient we explored in equation (15f) provides another way to characterise the variance:

$$
\mathbb{V}_{p(x;\theta)}\left[\bar{\eta}_{N=1}\right] = \lim_{h\to 0}\frac{1}{h^2}\,\mathbb{E}_{p(x;\theta)}\left[\left(\omega(\theta,h)-1\right)^2 f(x)^2\right] - \mu(\theta)^2, \qquad (18)
$$

which, for a fixed $h$, exposes the dependency of the variance on the importance weight $\omega(\theta,h)$. Although we will not explore it further, we find it instructive to connect these variance expressions to the variance bound for the estimator given by the Hammersley-Chapman-Robbins bound (Lehmann and Casella, 2006, ch. 2.5):

$$
\mathbb{V}_{p(x;\theta)}\left[\bar{\eta}_N\right] \ge \sup_h \frac{\left(\mu(\theta+h)-\mu(\theta)\right)^2}{\mathbb{E}_{p(x;\theta)}\left[\left(\frac{p(x;\theta+h)}{p(x;\theta)}-1\right)^2\right]} = \sup_h \frac{\left(\mu(\theta+h)-\mu(\theta)\right)^2}{\mathbb{E}_{p(x;\theta)}\left[\left(\omega(\theta,h)-1\right)^2\right]}, \qquad (19)
$$

which is a generalisation of the more widely-known Cramér-Rao bound and describes the minimal variance achievable by an estimator.

Our understanding of the gradient variance can then be built by exploring three sources of variance: contributions from the implicit importance ratio that entered due to the need for absolute continuity, contributions from the dimensionality of the parameters, and contributions from the variance of the cost function. These are hard to characterise exactly for general cases, but we can develop an intuitive understanding by unpacking specific terms of the gradient variance.

##### Variance from the importance ratio ω.

The variance characterisation from either equation (18) or (19) shows that the importance ratio $\omega(\theta,h)$ directly affects the variance. The simplest way to see this effect is to consider the contributions to the variance using the form of the gradient in equation (15e), for a fixed $h$. The first term is the quadratic term from equation (18):

$$
\mathbb{E}_{p(x;\theta)}\left[\left(\omega(\theta,h)-1\right)^2 f(x)^2\right], \qquad (20)
$$

where $\omega(\theta,h)$ is the importance ratio we identified in equation (15e). We will only obtain finite-variance gradients when the expectation in (20) is finite, i.e. when the conditions for absolute continuity are satisfied. We saw previously that failures of absolute continuity can lead to biased gradients. In practice, complete failure will be rare, and we will instead face near-failures in maintaining absolute continuity, which, as the above expression shows, will lead to an increase in the variance of the Monte Carlo estimator of the gradient.

##### Variance due to input dimensionality.

Assume for simplicity that the distribution factorises over the dimensions of $x$ so that $p(x;\theta) = \prod_{d=1}^{D}p(x_d;\theta)$, and again consider a scalar parameter $\theta$. In expectation, the importance weights have the property that, for any dimension $d$, $\mathbb{E}_{p(x_d;\theta)}\left[\frac{p(x_d;\theta+h)}{p(x_d;\theta)}\right] = 1$. The importance weight and its logarithm are given by $\omega(\theta,h) = \prod_d\frac{p(x_d;\theta+h)}{p(x_d;\theta)}$ and $\log\omega(\theta,h) = \sum_d\log\frac{p(x_d;\theta+h)}{p(x_d;\theta)}$, which we can use to study the behaviour of the importance weight as the dimensionality of $x$ changes.

If we follow an argument due to Glynn and Iglehart (1989) and Glasserman (2013, p. 259), and assume that the expectation of the log-ratio is bounded, then by the strong law of large numbers the average $\frac{1}{D}\sum_d\log\frac{p(x_d;\theta+h)}{p(x_d;\theta)}$ converges to the constant $c = \mathbb{E}_{p(x_d;\theta)}\left[\log\frac{p(x_d;\theta+h)}{p(x_d;\theta)}\right]$. Using Jensen's inequality, we know that $c \le \log\mathbb{E}_{p(x_d;\theta)}\left[\frac{p(x_d;\theta+h)}{p(x_d;\theta)}\right] = 0$, with equality only when $p(x_d;\theta+h) = p(x_d;\theta)$. As a result, the sum of many such terms has a limiting behaviour of:

$$
\lim_{D\to\infty}\sum_{d=1}^{D}\log\frac{p(x_d;\theta+h)}{p(x_d;\theta)} = -\infty \implies \lim_{D\to\infty}\prod_{d=1}^{D}\frac{p(x_d;\theta+h)}{p(x_d;\theta)} = 0, \qquad (21)
$$

where the first limit follows because the sum consists of terms with negative expectation, and the second expression is obtained by exponentiation. As the dimensionality $D$ increases, we find that the importance weight converges to zero, while at the same time its expectation is one for every $D$. Therefore, in high dimensions, the importance ratio becomes highly skewed, taking large values with small probability, and leading to high variance as a consequence.
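The collapse of the importance weight can be demonstrated with a hypothetical factorised standard Gaussian measure whose mean is shifted by $h$ in every dimension; each per-dimension ratio has expectation one, yet the product degenerates as $D$ grows:

```python
import numpy as np

def importance_weight_stats(d, h=0.1, n_samples=10_000, seed=0):
    """Sample w = prod_d N(x_d | h, 1) / N(x_d | 0, 1) under x_d ~ N(0, 1)
    and return (sample mean, sample median) of the weights."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, d))
    log_w = (h * x - 0.5 * h**2).sum(axis=1)   # sum of per-dimension log-ratios
    w = np.exp(log_w)
    return w.mean(), np.median(w)

mean_1, median_1 = importance_weight_stats(d=1)
mean_1000, median_1000 = importance_weight_stats(d=1000)
```

For d = 1 both statistics sit near one; for d = 1000 the median is close to zero while the sample mean is dominated by a few enormous weights, which is exactly the skewness described above.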

##### Variance from the cost.

Figure 4: Variance of the score-function estimator for a normal random variable, with a constant cost function $f(x)=100$ and a linear cost $f(x)=\sum_d x_d$, under a Gaussian measure $\mathcal{N}(x\,|\,\tfrac{1}{2}\mathbf{1}, \sigma^2\mathbf{I}_D)$. The y-axis is the estimate of the average variance $\mathbb{V}[\nabla_\sigma f(x)]$ across parameter dimensions.

The cost function appears in all the forms of the variance we wrote in (17)–(19), and is itself a significant contributor to the estimator variance, since it enters as a multiplicative term in the gradient. For example, if the cost function is a sum of $D$ terms whose individual variances we assume are bounded, then the variance of the score-function estimator will be of order $O(D^2)$. Because the cost function is a black box as far as the estimator is concerned, it will typically contain many terms that do not directly depend on the parameters, and hence do not affect the gradient. But every extra term contributes to the variance: ideally, before multiplying the cost with the score function, we would eliminate the parts of it that have no influence on the parameter whose derivative we are computing. Alternatively, we can try to do this automatically by using the gradient of the cost function itself (but only if it is available) to remove these terms; this approach is explored in the next section.

To support this intuition, we explore the effect of the cost function and measure on the variance of the score-function estimator with an example. Figure 4 shows the estimator variance for the gradient $\nabla_\sigma\mathbb{E}_{p(x;\theta)}[f(x)]$, for a constant cost $f(x)=100$ and a linear cost $f(x)=\sum_d x_d$ that sums the dimensions of $x\in\mathbb{R}^D$. For both cost functions, the expected value of the cost under the measure has no dependence on the scale parameter $\sigma$, so the true gradient is zero. As we expected, for the linear cost the variance scales quadratically as the number of terms in the cost increases.
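A rough replication of this experiment, with illustrative sample sizes, shows the growth of the per-parameter estimator variance with the number of terms in the linear cost:

```python
import numpy as np

def per_param_grad_variance(d, sigma=1.0, n_samples=20_000, seed=0):
    """Empirical per-sample variance of the score-function gradient estimate
    w.r.t. one scale parameter, for f(x) = sum_d x_d under N(1/2, sigma^2 I)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.5, sigma, size=(n_samples, d))
    f = x.sum(axis=1)
    # d/dsigma log N(x_0 | 1/2, sigma^2) for the first parameter dimension
    score_0 = ((x[:, 0] - 0.5) ** 2 - sigma**2) / sigma**3
    return np.var(f * score_0)

v_small, v_large = per_param_grad_variance(d=5), per_param_grad_variance(d=50)
```

Although the true gradient is zero for every D, the measured variance grows roughly quadratically with the number of cost terms.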

This variance analysis is meant to expose the importance of understanding the variance in our estimators. Whether because of the influence of the structure of the cost function, or the dimensionality of the implied importance weights in our gradients, we will need to counterbalance their effects and remove excess variance by some method of variance reduction. The question of variance reduction will apply to every type of Monte Carlo gradient estimator, but the specific solutions that are used will differ due to the different assumptions and factors that affect the variance. We will study variance reduction in more detail in Section 7.

#### 4.3.4 Higher-order Gradients

Higher-order derivatives are conceptually simple to compute using the score-function estimator. The score of order $k$ is defined as:

$$
s^{(1)}(x) = \frac{\nabla_\theta p(x;\theta)}{p(x;\theta)} = \nabla_\theta\log p(x;\theta); \qquad s^{(k)}(x) = \frac{\nabla_\theta^{(k)}p(x;\theta)}{p(x;\theta)}, \qquad (22)
$$

where $\nabla_\theta^{(k)}$ represents the $k$th-order gradient operator. Using this definition, the higher-order score-function gradient estimator is:

$$
\eta^{(k)} = \nabla_\theta^{(k)}\mathbb{E}_{p(x;\theta)}\left[f(x)\right] = \mathbb{E}_{p(x;\theta)}\left[f(x)s^{(k)}(x)\right]. \qquad (23)
$$

#### 4.3.5 Computational Considerations

We can express the score-function gradient estimator (for a single parameter) in one other way, as:

$$
\begin{aligned}
\eta = \nabla_\theta\mathbb{E}_{p(x;\theta)}\left[f(x)\right] &= \mathrm{Cov}\left[f(x), \nabla_\theta\log p(x;\theta)\right], && (24)\\
\mathrm{Cov}\left[f(x), \nabla_\theta\log p(x;\theta)\right]^2 &\le \mathbb{V}_{p(x;\theta)}\left[f(x)\right]\mathbb{V}_{p(x;\theta)}\left[\nabla_\theta\log p(x;\theta)\right]. && (25)
\end{aligned}
$$

The first identity shows that the score-function gradient can be interpreted as the covariance between the cost function and the score (it holds because the expectation of the score is zero, which removes the second term that the covariance would otherwise introduce) (Pflug, 1996, pp. 234). The second identity is the Cauchy-Schwarz inequality, which bounds the squared covariance. Computationally, this shows that the variance of the cost function bounds the magnitude and range of the gradient. A highly-variable cost function can result in highly-variable gradients, which is undesirable for optimisation. It is for this reason that we will often constrain the values of the cost function by normalising or bounding them in some way, e.g., by clipping.

The score-function gradient is considered general-purpose because it uses only the final value of the cost in its computation. It makes no assumptions about the internal structure of the cost function, and can therefore be used with any function whose outcome we can simulate; many functions are then open to us for use: known differentiable functions, discrete functions, dynamical systems, or black-box simulators of physical systems or graphics engines. Overall, the computational cost of the score-function estimator is low: it is of the order $O(D)$ for $D$-dimensional distributional parameters $\theta$, plus the cost of evaluating the cost function, all multiplied by the number of samples used in the estimator.

Taking into account the exposition of this section, points for consideration when using the score function estimator are:

• Any type of cost function can be used, allowing for the use of simulators and other black-box systems, as long as we are able to evaluate them easily.

• The measure must be differentiable with respect to its parameters, an assumption we make throughout this paper.

• We must be able to easily sample from the measure, since the gradient estimator is computed using samples from it.

• The estimator can be implemented using only a single sample if needed, i.e. using $N=1$. Single-sample estimation is applicable in both univariate and multivariate cases, making the estimator computationally efficient, since we can deliberately control the number of times that the cost function is evaluated.

• Because there are many factors that affect the variance of the gradient estimator, it will be important to use some form of variance reduction to obtain competitive performance.

### 4.4 Research in Score Function Gradient Estimation

We are the lucky inheritors of a rich body of work specifically devoted to the score function gradient estimator. Given its simplicity and generality, this estimator has found its way to many parts of computational science, especially within operations research, computational finance and machine learning. We describe a subset of this existing work, focusing on the papers that provide deeper theoretical insight and further context on the score-function estimator’s use in practice.

##### Development of the estimator.

The score function estimator is one of the first types of estimators to be derived, initially by several different groups in the 1960s, including Miller (1967) in the design of stable nuclear reactors, and by Rubinstein (1969) in the early development of Monte Carlo optimisation. Later, the estimator was derived and applied by Rubinstein and Kreimer (1983) for discrete-event systems in operations research, where they began to refer to the estimator as the score function method, the name we use in this paper. The estimator was again developed by Glynn (1987, 1990) and by Reiman and Weiss (1989), who referred to it as the likelihood ratio method, as part of the study of queueing systems and regenerative stochastic processes. Concise descriptions have entered several books, such as those by Rubinstein and Shapiro (1993) and Fu and Hu (2012), and in two books we especially recommend, by Pflug (1996, sect. 4.2.1) and Glasserman (2013, sect. 7.3).

##### Interchange of integration and differentiation.

The two basic questions for the application of the score-function estimator are differentiability of the stochastic system that is being studied, and subsequently, the validity of the interchange of integration and differentiation. For many simple systems, differentiability is safely assumed, but in discrete event systems that are often studied in operations research, some effort is needed to establish differentiability, e.g., like that of the stochastic recursions by Glynn and L’Ecuyer (1995). The validity of the interchange of differentiation and integration for score function estimators is discussed by Kleijnen and Rubinstein (1996) and specifically in the note by L’Ecuyer (1995).

##### In machine learning.

In reinforcement learning the score-function gradient estimator was developed as the REINFORCE algorithm by Williams (1992). In such settings, the gradient is the fundamental quantity needed for policy search methods and is the basis of the policy gradient theorem and the subsequent development of actor-critic reinforcement learning methods (Sutton et al., 2000). Approximate Bayesian inference methods based on variational inference deployed the score function gradient estimator to enable a more general-purpose, black-box variational inference; one that did not require tedious manual derivation, but that could instead be more easily combined with automatic differentiation tools. The appeal and success of this approach has been shown by several authors, including Paisley et al. (2012), Wingate and Weber (2013), Ranganath et al. (2014) and Mnih and Gregor (2014). Variance reduction is essential for effective use of this estimator, and has been explored in several areas, by Greensmith et al. (2004) in reinforcement learning, Titsias and Lázaro-Gredilla (2015) in variational inference, Capriotti (2008) in computational finance, or more recently by Walder et al. (2019). We describe variance reduction techniques in more detail in Section 7. Finally, as machine learning has sought to automate the computation of these gradients, the implementation of the score function estimator within wider stochastic computational graphs has been explored for the standard estimator (Schulman et al., 2015) and its higher-order gradients (Foerster et al., 2018); we will provide more discussion on computational graphs in Section 9.

## 5 Pathwise Gradient Estimators

We can develop a very different type of gradient estimator if, instead of using only knowledge of the score function, we know more of the structural characteristics of the problem (2). One such structural property is the specific sequence of transformations and operations that sources of randomness take as they pass through the measure and into the cost function, to affect the overall objective. Using this sampling path will lead us to a second estimator, the pathwise gradient estimator, which, as its name implies, is in the class of derivatives of paths. Because we need information about the path underlying a probabilistic objective, this class of gradient estimator will be less general-purpose than the score-function estimator, but in losing this generality, we will gain several advantages, especially in terms of lower variance and ease of implementation.

The pathwise derivative is as fundamental to sensitivity analysis and stochastic optimisation as the score function estimator is, and hence also appears under several names, including: the process derivative (Pflug, 1996), as the general area of perturbation analysis and specifically infinitesimal perturbation analysis (Ho and Cao, 2012; Glasserman and Ho, 1991), the pathwise derivative (Glasserman, 2013), and more recently as stochastic backpropagation and the reparameterisation trick (Rezende et al., 2014; Kingma and Welling, 2014b; Titsias and Lázaro-Gredilla, 2014). Another way to view the pathwise approach is as a process of pushing the parameters of interest, which are part of the measure, into the cost function, and then differentiating the newly modified cost function. For this reason, the pathwise estimator is also called the ‘push-in’ gradient method (Rubinstein, 1992).

### 5.1 Sampling Paths

Continuous distributions have a simulation property that allows both a direct and an indirect way of drawing samples from them, making the following sampling processes equivalent:

$$
\hat{x} \sim p(x;\theta) \quad \equiv \quad \hat{x} = g(\hat{\epsilon};\theta), \;\; \hat{\epsilon}\sim p(\epsilon), \qquad (26)
$$

which states that an alternative way to generate samples $\hat{x}$ from the distribution $p(x;\theta)$ is to sample first from a simpler base distribution $p(\epsilon)$, which is independent of the parameters $\theta$, and to then transform this variate through a deterministic path $g(\epsilon;\theta)$; we can refer to this procedure as either a sampling path or sampling process. For invertible paths, this transformation is described by the rule for the change of variables for probability:

$$
p(x;\theta) = p(\epsilon)\left|\nabla_\epsilon g(\epsilon;\theta)\right|^{-1}. \qquad (27)
$$

There are several classes of transformation methods available (Devroye, 2006):

• Inversion methods. For univariate distributions, we can always find an equivalent base distribution and sampling path by using the uniform distribution $\mathcal{U}[0,1]$ as the base and the inverse cumulative distribution function (CDF) as the path. This method can often be difficult to use directly, however, since computing the inverse of the CDF and its derivative can be computationally difficult, restricting this approach to univariate settings. We will return to this method later, and instead explore methods that allow us to use the CDF rather than its inverse.

• Polar transformations. It is sometimes possible, and more efficient, to generate a pair of random variates $(\hat{x}_1, \hat{x}_2)$ from the target distribution. We can map this pair to a representation in polar form $(r, \phi)$, which exposes other mechanisms for sampling; e.g., the famous Box-Muller transform for sampling Gaussian variates is derived in this way.

• One-liners. In many cases there are simple functions that transform a base distribution into a richer form. One widely-known example is sampling from the multivariate Gaussian $\mathcal{N}(\mu, \Sigma)$ by first sampling from the standard Gaussian $\hat{\epsilon}\sim\mathcal{N}(0,\mathbf{I})$, and then applying the location-scale transformation $\hat{x} = \mu + L\hat{\epsilon}$, with $LL^\top = \Sigma$. Many such transformations exist for the most common distributions, including the Dirichlet, Gamma, and Exponential. Devroye (1996) refers to these types of transformations as one-liners because they can often be implemented in one line of code.
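The multivariate Gaussian one-liner, sketched below with an arbitrary two-dimensional example, uses the Cholesky factor of the covariance as the scale:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

# One-liner: x = mu + L @ eps, with L L^T = Sigma and eps ~ N(0, I)
L = np.linalg.cholesky(Sigma)
eps = rng.standard_normal((100_000, 2))
x = mu + eps @ L.T

empirical_mean = x.mean(axis=0)
empirical_cov = np.cov(x.T)
```

The empirical mean and covariance of the transformed samples match the target parameters.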

With knowledge of these transformation methods, we can invoke the Law of the Unconscious Statistician (LOTUS) (Grimmett and Stirzaker, 2001):

$$
\mathbb{E}_{p(x;\theta)}\left[f(x)\right] = \mathbb{E}_{p(\epsilon)}\left[f(g(\epsilon;\theta))\right], \qquad (28)
$$

which states that we can compute the expectation of a function of a random variable $x$ without knowing its distribution, if we know its corresponding sampling path and base distribution. LOTUS tells us that in probabilistic objectives we can replace expectations over any random variables $x$, wherever they appear, by the transformation $g(\epsilon;\theta)$ and expectations over the base distribution $p(\epsilon)$. This is a way to reparameterise a probabilistic system; it is often used in Monte Carlo methods, where it is referred to as the non-centred parameterisation (Papaspiliopoulos et al., 2007) or as the reparameterisation trick (Kingma and Welling, 2014b).
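LOTUS is easy to verify by simulation; in the hypothetical check below, the cost $f(x)=\sin(x)+x^2$ is averaged both by sampling $\mathcal{N}(\mu,\sigma^2)$ directly and by pushing base samples through the location-scale path:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 2.0
f = lambda x: np.sin(x) + x**2

# Direct sampling from p(x; theta) = N(mu, sigma^2)
direct = f(rng.normal(mu, sigma, size=200_000)).mean()

# LOTUS: sample the base distribution N(0, 1), then apply the path g(eps; theta)
eps = rng.standard_normal(200_000)
reparam = f(mu + sigma * eps).mean()
```

Both estimates agree with the closed form $\mathbb{E}[f] = e^{-\sigma^2/2}\sin\mu + \mu^2 + \sigma^2 \approx 5.11$.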

### 5.2 Deriving the Estimator

Equipped with the pathwise simulation property of continuous distributions and LOTUS, we can derive an alternative estimator for the sensitivity analysis problem (2) that exploits this additional knowledge of continuous distributions. Assume that we have a distribution $p(x;\theta)$ with known differentiable and invertible sampling path $g(\epsilon;\theta)$ and base distribution $p(\epsilon)$. The sensitivity analysis problem (2) can then be reformulated as:

$$
\begin{aligned}
\eta = \nabla_\theta\mathbb{E}_{p(x;\theta)}\left[f(x)\right] &= \nabla_\theta\int p(x;\theta)f(x)dx && (29a)\\
&= \nabla_\theta\int p(\epsilon)f(g(\epsilon;\theta))d\epsilon && (29b)\\
&= \mathbb{E}_{p(\epsilon)}\left[\nabla_\theta f(g(\epsilon;\theta))\right] && (29c)\\
\bar{\eta}_N &= \frac{1}{N}\sum_{n=1}^{N}\nabla_\theta f(g(\hat{\epsilon}^{(n)};\theta)); \quad \hat{\epsilon}^{(n)}\sim p(\epsilon). && (29d)
\end{aligned}
$$

In equation (29a) we first expand the definition of the expectation. Then, using the law of the unconscious statistician, and knowledge of the sampling path $g$ and the base distribution $p(\epsilon)$ for $p(x;\theta)$, we reparameterise the integral in (29b) as one over the variable $\epsilon$. The parameters $\theta$ have now been pushed into the cost function, making the measure of the expectation free of the parameters. This allows us to interchange the derivative and the integral without concern (29c), resulting in the pathwise gradient estimator (29d).
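A minimal sketch of the estimator (29d), assuming a Gaussian measure and the illustrative cost $f(x)=x^2$, for which both gradients have simple closed forms to compare against:

```python
import numpy as np

def pathwise_grads(mu, sigma, n_samples=100_000, seed=0):
    """Pathwise estimate (29d) of the gradients of E_{N(mu, sigma^2)}[x^2]
    w.r.t. mu and sigma, using the path x = g(eps; theta) = mu + sigma * eps."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n_samples)
    x = mu + sigma * eps
    df_dx = 2.0 * x                    # derivative of the cost f(x) = x^2
    # Chain rule through the path: dx/dmu = 1 and dx/dsigma = eps
    return np.mean(df_dx), np.mean(df_dx * eps)

g_mu, g_sigma = pathwise_grads(mu=1.5, sigma=1.0)
# Analytic check: E[x^2] = mu^2 + sigma^2, so d/dmu = 3.0 and d/dsigma = 2.0.
```

In an automatic-differentiation framework the chain rule is applied for us; here it is written out to make the path explicit.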

### 5.3 Estimator Properties and Applicability

The gradient (29c) shows that we can compute gradients of expectations by first pushing the parameters into the cost function and then using standard differentiation, applying the chain rule. This is a natural way to think about these gradients, since it aligns with our usual understanding of how deterministic gradients are computed, and is why this approach is so popular. Its simplicity, however, belies several distinct properties that impact its use in practice.

#### 5.3.1 Decoupling Sampling and Gradient Computation

The pathwise estimator (29c), as we derived it, is limited to those distributions for which we simultaneously have a differentiable path, and use this same path to generate samples. We could not compute gradients with respect to parameters of a Gamma distribution in this way, because sampling from a Gamma distribution involves rejection sampling that does not provide a differentiable path. The process of sampling from a distribution and the process of computing gradients with respect to its parameters are coupled in the estimator (29c); we can expand the applicability of the pathwise gradient by decoupling these two processes.

The pathwise estimator can be rewritten in a more general form as:

$$
\begin{aligned}
\eta = \nabla_\theta\mathbb{E}_{p(x;\theta)}\left[f(x)\right] &= \mathbb{E}_{p(\epsilon)}\left[\nabla_g f(g(\epsilon;\theta))\nabla_\theta g(\epsilon;\theta)\right] && (30a)\\
&= \mathbb{E}_{p(x;\theta)}\left[\nabla_x f(x)\nabla_\theta x\right]. && (30b)
\end{aligned}
$$

In the first line (30a), we recall the gradient that resulted from applying the law of the unconscious statistician in equation (29c), and apply the chain rule to write the gradient with respect to $\theta$. Using LOTUS (28) again, this time for the expectation of $\nabla_x f(x)\nabla_\theta x$ and in the reverse direction, we obtain (30b). This derivation shows that sampling and gradient estimation can be decoupled: we can generate samples using any sampler for the original distribution $p(x;\theta)$ (e.g., using a sampling path, rejection sampling or Markov chain Monte Carlo), and compute the gradient using the chain rule on the cost function, by finding some way to compute the term $\nabla_\theta x$.

One way to compute $\nabla_\theta x$ is to use the derivative of the path, $\nabla_\theta g(\epsilon;\theta)$, as we did for (29c). In practice, this form is not always convenient. For example, if the sampling path is an inverse cumulative distribution function (CDF), which is often computed with root-finding methods, then evaluating its derivative may require numerically unstable finite-difference methods. Instead, we can find another way of writing $\nabla_\theta x$ that makes use of the inverse of the path, $g^{-1}(x;\theta)$. We can think of $g^{-1}$ as the 'standardisation path' of the random variable, that is, the transformation that removes the dependence of the sample on the distribution parameters, standardising it to a zero-mean, unit-variance-like form. In the univariate case, instead of using the inverse CDF, we can use the standardisation path given by the CDF. Consider the equation $\epsilon = g^{-1}(x;\theta)$ as an implicit function for $x$. Evaluating the total derivative (TD) on both sides, using implicit differentiation, and expanding, we find that:

$$
\begin{aligned}
\epsilon = g^{-1}(x;\theta) &\implies \nabla_\theta^{TD}\epsilon = \nabla_\theta^{TD}g^{-1}(x;\theta) && (31a)\\
\therefore\; 0 &= \nabla_x g^{-1}(x;\theta)\nabla_\theta x + \nabla_\theta g^{-1}(x;\theta) && (31b)\\
\nabla_\theta x &= -\left(\nabla_x g^{-1}(x;\theta)\right)^{-1}\nabla_\theta g^{-1}(x;\theta). && (31c)
\end{aligned}
$$

Equation (31c) gives us the expression needed to fully evaluate (30b). It is the form in which Ho and Cao (1983) initially introduced the estimator, and it corresponds to the strategy of differentiating both sides of the transformation $\epsilon = g^{-1}(x;\theta)$ noted by Glasserman (2013). Figurnov et al. (2018) refer to this approach as an implicit reparameterisation gradient, because of its use of implicit differentiation. In this form, we can apply pathwise gradient estimation to a far wider set of distributions and paths, such as the Beta, Gamma or Dirichlet distributions. In Section 8 we look at this decoupling in other settings, and in the discussion (Section 9.4) we look at optimal transport as another way of computing $\nabla_\theta x$.

Example (Univariate Gaussian). For univariate Gaussian distributions $\mathcal{N}(x|\mu,\sigma^2)$, with $\theta = \{\mu,\sigma\}$, the location-scale transform is the natural choice of path: $x = g(\epsilon;\theta) = \mu + \sigma\epsilon$ for $\epsilon\sim\mathcal{N}(0,1)$. The inverse path is then $\epsilon = g^{-1}(x;\theta) = \frac{x-\mu}{\sigma}$. The standard pathwise derivatives are $\nabla_\mu g(\epsilon;\theta) = 1$ and $\nabla_\sigma g(\epsilon;\theta) = \epsilon$. Equation (31c) is then:

$$
\frac{dx}{d\mu} = -\frac{\partial g^{-1}(x;\theta)/\partial\mu}{\partial g^{-1}(x;\theta)/\partial x} = -\frac{-\tfrac{1}{\sigma}}{\tfrac{1}{\sigma}} = 1; \qquad \frac{dx}{d\sigma} = -\frac{\partial g^{-1}(x;\theta)/\partial\sigma}{\partial g^{-1}(x;\theta)/\partial x} = -\frac{-\tfrac{x-\mu}{\sigma^2}}{\tfrac{1}{\sigma}} = \frac{x-\mu}{\sigma} = \epsilon. \qquad (32)
$$

We see that the two approaches provide the same gradient.

Example (Univariate distributions). As we discussed, for univariate distributions we can use the sampling path given by the inverse CDF: $x = g(\epsilon;\theta) = F^{-1}(\epsilon;\theta)$, where $\epsilon\sim\mathcal{U}[0,1]$. Computing the derivative $\nabla_\theta F^{-1}(\epsilon;\theta)$ is often complicated and expensive. We can obtain an alternative expression for $\nabla_\theta x$ by considering the inverse path, which is given by the CDF: $\epsilon = g^{-1}(x;\theta) = F(x;\theta)$. From equation (31c) we have:

$$
\nabla_\theta x = -\frac{\nabla_\theta F(x;\theta)}{\nabla_x F(x;\theta)} = -\frac{\nabla_\theta F(x;\theta)}{p(x;\theta)}. \qquad (33)
$$

In the final step, we used the fact that the derivative of the CDF with respect to $x$ is the density function. This expression allows efficient computation of pathwise gradients for a wide range of continuous univariate distributions, including the Gamma, Beta, von Mises, Student's $t$-distribution and univariate mixtures (Figurnov et al., 2018; Jankowiak and Obermeyer, 2018).
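As a numerical sanity check of equation (33), the sketch below recovers the Gaussian location-scale result $\nabla_\sigma x = (x-\mu)/\sigma$ using only the CDF and the density; the derivative of the CDF with respect to $\sigma$ is taken by finite differences here purely for illustration, not as a recommended implementation:

```python
import math

def implicit_dx_dsigma(x, mu, sigma, h=1e-6):
    """Implicit reparameterisation gradient (33): dx/dsigma = -dF/dsigma / p(x),
    for the Gaussian CDF F(x; mu, sigma), with dF/dsigma by central differences."""
    cdf = lambda s: 0.5 * (1.0 + math.erf((x - mu) / (s * math.sqrt(2.0))))
    dF_dsigma = (cdf(sigma + h) - cdf(sigma - h)) / (2.0 * h)
    pdf = math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
    return -dF_dsigma / pdf

g = implicit_dx_dsigma(x=2.3, mu=1.0, sigma=0.8)
# The location-scale path predicts dx/dsigma = (x - mu) / sigma = 1.625.
```

The two routes to the gradient agree, as equation (32) showed analytically.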

#### 5.3.2 Bias and Variance Properties

In deriving the pathwise estimator (29d) we again exploited an interchange of differentiation and integration. If this interchange is valid, then the resulting application of the chain rule to compute the gradient will be valid, and the resulting estimator will be unbiased. We can always ensure that this interchange applies by ensuring that the cost functions we use are differentiable. The implication of this is that if we have discontinuous cost functions, then we will be unable to use the pathwise gradient estimator. We can be more rigorous about the conditions for unbiasedness, and we defer to Glasserman (2013, sect. 7.2.2) for this more in-depth discussion.

We can build an understanding of the variance of the estimator by considering the case of gradients (2) with Gaussian measures $\mathcal{N}(x|\mu,\sigma^2)$. If we denote the single-sample pathwise gradient (29d) as $h(\epsilon) = \nabla_\theta f(g(\epsilon;\theta))$, then we can bound the variance of the Gaussian pathwise derivative as:

$$
\mathbb{V}_{p(\epsilon)}\left[h(\epsilon)\right] = \mathbb{E}_{p(\epsilon)}\left[\left(h(\epsilon)-\mathbb{E}_{p(\epsilon)}\left[h(\epsilon)\right]\right)^2\right] \le \frac{\kappa^2\pi^2}{4}, \qquad (34)
$$

where $\kappa$ is the Lipschitz constant of the function $h$; this type of expression can be reached in several ways. The approach used here characterises the zero-mean random variable $h(\epsilon)-\mathbb{E}_{p(\epsilon)}[h(\epsilon)]$ as sub-Gaussian, and invokes the variance bound that is characteristic of sub-Gaussian random variables (Fan et al., 2015, sect. 10). Alternatively, the variance properties of the pathwise gradient are asymptotically the same as those of finite-difference methods, and we can bound the variation of the finite difference by the squared Lipschitz constant (Glasserman, 2013, sect. 7.2.2). Although this specific result is derived under a Gaussian assumption, its conclusions apply to the more general situations in which the gradient will be used.

However we obtain this expression (34), several key features of the variance are revealed by it. The variance bound is independent of the dimensionality of the parameter space, meaning that we can expect to get low-variance gradient estimates, even in high-dimensional settings. Because we are able to differentiate through the cost function itself, only the specific path through which any individual parameter influences the cost function is included in the gradient; unlike the score-function estimator, the gradient does not sum over large numbers of terms for which the parameter has no effect, allowing the estimator variance to be much lower.

Importantly, the variance is directly bounded by the Lipschitz constant. As Figures 2 and 3 show, as the cost function becomes highly variable, i.e. its Lipschitz constant increases, we enter regimes where, even for the elementary functions we considered, the variance of the pathwise estimator can be higher than that of the score-function method. The pathwise gradient, when it is applicable, will not always have the lower variance compared to other methods. In such situations, variance reduction will again be a powerful accompaniment to the pathwise estimator. Since many of the functions we work with are not Lipschitz continuous, Xu et al. (2018) use a different set of simplifying assumptions to develop an understanding of the variance that reinforces the general intuition built here.
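The contrast with the score-function estimator is easy to see empirically; for the illustrative smooth cost $f(x)=x^2$ under a Gaussian measure, the per-sample gradient terms of the two estimators have the same mean but very different spread:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 1.5, 1.0, 50_000
eps = rng.standard_normal(n)
x = mu + sigma * eps
f = x**2

# Per-sample gradient terms w.r.t. mu, for the same cost and measure
score_terms = f * (x - mu) / sigma**2     # score-function estimator terms
pathwise_terms = 2.0 * x                  # pathwise terms: f'(x) * dx/dmu

var_score, var_pathwise = score_terms.var(), pathwise_terms.var()
```

Both sample means estimate the true gradient $2\mu = 3$; for this particular cost, the pathwise variance (4.0 analytically) is roughly an order of magnitude below the score-function variance (about 52). As discussed above, the ordering can reverse for highly variable costs.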

#### 5.3.3 Higher-order Gradients

Computation of higher-order gradients using the pathwise method is also possible and directly involves higher-order differentiation of the cost function, requiring cost functions that are differentiable at higher orders. This approach has been explored by Fu and Hu (1993) and Fan et al. (2015). Another common strategy for computing higher-order gradients is to compute the first-order gradient using the pathwise method, and the higher-order gradients using the score-function method.

#### 5.3.4 Computational Considerations

The pathwise gradient estimator is restricted to differentiable cost functions. This is still a large class of functions, yet is not as general as the scope of applicability of the score-function estimator. For some types of discontinuous functions it is possible to smooth the function over the discontinuity and maintain the correctness of the gradient; this approach is often referred to as smoothed perturbation analysis in the existing literature (Glasserman and Ho, 1991).

There are often several competing approaches for computing the gradient. The choice between them will need to be made considering the associated computational costs and ease of implementation. The case of the univariate Gaussian measure highlights this. This distribution has two equivalent sampling paths: one given by the location-scale transform, $x = \mu + \sigma\epsilon$ with $\epsilon \sim \mathcal{N}(0,1)$, and another given by the inverse CDF transform, $x = F^{-1}(u; \mu, \sigma)$ with $u \sim \mathcal{U}[0,1]$. While there is no theoretical difference between the two paths, the first path is preferred in practice due to the simplicity of implementation.
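
The equivalence of the two paths can be checked numerically; a minimal sketch using only the standard library, where the parameter values are arbitrary:

```python
import random
import statistics

mu, sigma = 1.5, 0.8
rng = random.Random(0)
N = 50000

# Path 1: location-scale transform, x = mu + sigma * eps with eps ~ N(0, 1).
xs_ls = [mu + sigma * rng.gauss(0.0, 1.0) for _ in range(N)]

# Path 2: inverse-CDF transform, x = F^{-1}(u; mu, sigma) with u ~ U[0, 1].
nd = statistics.NormalDist(mu, sigma)
xs_cdf = [nd.inv_cdf(rng.random()) for _ in range(N)]

for name, xs in (("location-scale", xs_ls), ("inverse CDF", xs_cdf)):
    print(f"{name:14s} mean={statistics.mean(xs):.3f} std={statistics.stdev(xs):.3f}")
```

Both sample sets match the target mean and standard deviation; the location-scale path needs only one line and no CDF inversion, which is why it is the common choice in implementations.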

The same analysis that allowed us to characterise the variance of the Gaussian gradient (34) provides us with a tail bound to analyse the convergence of the Monte Carlo estimator (Fan et al., 2015). We find that the estimator satisfies:

$$P\left(\left|\frac{1}{N}\sum_{n=1}^{N} h\big(\epsilon^{(n)}\big) - \mathbb{E}_{p(\epsilon)}\left[h(\epsilon)\right]\right| > t\right) \leq 2\exp\left(-\frac{2Nt^{2}}{\pi^{2}\kappa^{2}}\right), \tag{35}$$

where $N$ is the number of Monte Carlo samples used, and $\kappa$ is the Lipschitz constant of the function $h$. We can obtain rapid convergence even when using only a single sample to compute the gradient, as is often done in practice. But this must be balanced against the appearance of the Lipschitz constant: a larger $\kappa$ may require more samples to be used. This consideration is why we will find that regularisation promoting smoothness of the functions we learn is an important part of successful applications. Overall, the computational cost of the pathwise estimator is the same as that of the score-function estimator and is low, of the order $O(D)$ for $D$-dimensional distributional parameters $\boldsymbol{\theta}$, plus the additional cost of evaluating the cost function and its gradient, multiplied by the number of samples used in the estimator.
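
The $1/\sqrt{N}$ concentration implied by the bound (35) can be observed empirically. A sketch, assuming the illustrative cost $f(x) = \sin(x)$ so that the true Gaussian gradient $e^{-1/2}\cos(\mu)$ is available in closed form:

```python
import math
import random
import statistics

def pathwise_grad(mu, n, rng):
    # One n-sample pathwise estimate of d/dmu E_{N(mu, 1)}[sin(x)].
    return sum(math.cos(mu + rng.gauss(0.0, 1.0)) for _ in range(n)) / n

rng = random.Random(1)
true_grad = math.exp(-0.5) * math.cos(0.0)  # closed form: e^{-1/2} cos(mu) at mu = 0
spread = {}
for n in (1, 100):
    errors = [abs(pathwise_grad(0.0, n, rng) - true_grad) for _ in range(2000)]
    spread[n] = statistics.mean(errors)
    print(f"N={n:3d}  mean absolute error={spread[n]:.4f}")
```

Even the single-sample estimator stays within a modest error band here, consistent with the bound; averaging over 100 samples shrinks the typical deviation by roughly an order of magnitude.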

Taking into account the exposition of this section, points for consideration when using the pathwise derivative estimator are:

• Only cost functions that are differentiable can be used.

• When using the pathwise estimator (29c), we do not need to know the measure explicitly. Instead we must know its corresponding deterministic and differentiable sampling path and a base distribution that is easy to sample from.

• If using the implicit form of the estimator, we require a method of sampling from the original distribution as well as a way of computing the derivative of the inverse path (standardisation) transformation.

• The estimator can be implemented using only a single sample if needed, i.e. using $N=1$. Single-sample estimation is applicable in both univariate and multivariate cases, making the estimator computationally efficient, since we can deliberately control the number of times that the cost function is evaluated.

• We might need to control the smoothness of the function during learning to avoid large variance, and may need to employ variance reduction.
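
To make these points concrete, a minimal sketch of the pathwise estimator for a univariate Gaussian measure with the illustrative cost $f(x) = x^2$, for which the analytic gradients are $2\mu$ and $2\sigma$ (function names below are ours, not from the text):

```python
import random

def pathwise_grads(mu, sigma, f_grad, n, rng):
    """Pathwise estimate of (d/dmu, d/dsigma) E_{N(mu, sigma^2)}[f(x)]
    using the location-scale path x = mu + sigma * eps."""
    g_mu = g_sigma = 0.0
    for _ in range(n):
        eps = rng.gauss(0.0, 1.0)
        x = mu + sigma * eps
        df = f_grad(x)           # df/dx evaluated at the sampled point
        g_mu += df               # chain rule with dx/dmu = 1
        g_sigma += df * eps      # chain rule with dx/dsigma = eps
    return g_mu / n, g_sigma / n

rng = random.Random(0)
g_mu, g_sigma = pathwise_grads(0.5, 2.0, lambda x: 2.0 * x, 100000, rng)
print(g_mu, g_sigma)  # analytic values: 2*mu = 1.0 and 2*sigma = 4.0
```

Note that only the sampling path and the derivative of the cost are needed; the Gaussian density itself never appears, and a single draw of $\epsilon$ serves both parameter gradients.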

### 5.4 Research in Pathwise Derivative Estimation

##### Basic development.

The pathwise estimator was initially developed by Ho and Cao (1983) under the name of infinitesimal perturbation analysis (IPA), by which it is still commonly known. Its convergence properties were analysed by Heidelberger et al. (1988). Other variations of perturbation analysis expand its applicability; see Suri and Zazanis (1988) and the books by Glasserman and Ho (1991) and Ho and Cao (2012). The view as a push-in technique is expanded on by Rubinstein (1992). Again the books by Glasserman (2013, sect. 7.2) and Pflug (1996, sect. 4.2.3) provide a patient exploration of this method, under the names pathwise derivative and process derivative estimation, respectively.

##### Appearance in machine learning.

The pathwise approach to gradient estimation has seen wide application in machine learning, driven by work in variational inference. An early instance of gradients of Gaussian expectations in machine learning was developed by Opper and Archambeau (2009). Rezende et al. (2014) called their use of the pathwise estimator stochastic backpropagation, since the gradient reduced to the standard gradient computation by backpropagation averaged over an independent source of noise. As we described in Section 2.2, Titsias and Lázaro-Gredilla (2014) used the pathwise estimator under the umbrella of doubly stochastic optimisation, to recognise that there are two distinct sources of stochasticity that enter when using the pathwise estimator with minibatch optimisation. The substitution of the random variable by the path in the estimator (29d) led Kingma and Welling (2014a, b) to refer to their use of the substitution as a reparametrisation trick, and by which it is commonly referred to at present in machine learning.

##### Use in generative models and reinforcement learning.

Recognising that pathwise gradients do not require knowledge of the final density, but only a sampling path and base distribution, Rezende and Mohamed (2015) developed normalising flows for variational inference, which allow learning of complex probability distributions that exploit this property of the pathwise estimator. The optimisation of model parameters in implicit generative models, such as those in generative adversarial networks (Goodfellow et al., 2014; Mohamed and Lakshminarayanan, 2016), relies on pathwise gradient estimation. The pathwise estimator has also found numerous uses in reinforcement learning for continuous control, where it is used to learn behaviour policies in continuous action spaces: Williams (1992, sect. 7.2) invoked this approach as an alternative to the score function when using Gaussian distributions, and it has also been used for learning value gradients by Heess et al. (2015) and Lillicrap et al. (2015). We find that the pathwise derivative is now a key tool for computing information-theoretic quantities like the mutual information (Alemi et al., 2017), in Bayesian optimisation (Wilson et al., 2018), and in probabilistic programming (Ritchie et al., 2016).

##### Derivative generalisations.

Gong and Ho (1987) introduced smoothed perturbation analysis as one variant of the pathwise derivative to address cases where the interchange of differentiation and integration is not applicable, such as when there are discontinuities in the cost function; several other extensions of the estimator appear in the perturbation analysis literature (Glasserman and Ho, 1991). In the variational inference setting, Lee et al. (2018) also look at the non-differentiable case by splitting regions into differentiable and non-differentiable components. The implicit reparameterisation gradients for univariate distributions were developed by Salimans and Knowles (2013). Hoffman and Blei (2015) used the implicit gradients to perform backpropagation through the Gamma distribution using a finite-difference approximation of the CDF derivative. Graves (2016) derives the implicit reparameterisation gradients for multivariate distributions with analytically tractable CDFs, such as mixtures. The form of implicit reparameterisation that we described here was developed by Figurnov et al. (2018), which clarifies the connections to existing work and provides further practical insight. Other generalisations, like those of Ruiz et al. (2016), Naesseth et al. (2017) and Parmas et al. (2018), combine the score-function method with the pathwise method; we come back to these hybrid methods in Section 8.

## 6 Measure-Valued Gradient Estimators

A third class of gradient estimators, which falls within the class of derivatives of measure, is known simply as the measure-valued gradient estimators, and has received little attention in machine learning. By exploiting the underlying measure-theoretic properties of the probabilities in the sensitivity analysis problem (2), the properties of signed measures in particular, we will obtain a class of unbiased and general-purpose gradient estimators with favourable variance properties. This approach is referred to interchangeably as either the weak derivative method (Pflug, 1996) or as the measure-valued derivative (Heidergott and Vázquez-Abad, 2000).

### 6.1 Weak Derivatives

The derivative of a density is itself not a density, since it may have negative values and does not integrate to one. However, we can always decompose this derivative into a difference of two densities multiplied by a constant (Pflug, 1989):

$$\nabla_\theta p(x;\theta) = c^{+}_{\theta}\,p^{+}(x;\theta) - c^{-}_{\theta}\,p^{-}(x;\theta), \tag{36}$$

where $p^{+}(x;\theta)$ and $p^{-}(x;\theta)$ are densities, referred to as the positive and negative components of $p(x;\theta)$, respectively. By integrating both sides of (36), we can see that $c^{+}_{\theta} = c^{-}_{\theta}$:

$$\text{LHS:}\quad \int \nabla_\theta p(x;\theta)\,\mathrm{d}x = \nabla_\theta \int p(x;\theta)\,\mathrm{d}x = 0; \qquad \text{RHS:}\quad \int \big(c^{+}_{\theta}p^{+}(x;\theta) - c^{-}_{\theta}p^{-}(x;\theta)\big)\,\mathrm{d}x = c^{+}_{\theta} - c^{-}_{\theta}. \tag{37}$$

Therefore, we can simply use one constant that we denote $c_\theta$, and the decomposition becomes:

$$\nabla_\theta p(x;\theta) = c_\theta\big(p^{+}(x;\theta) - p^{-}(x;\theta)\big). \tag{38}$$

The triple $(c_\theta, p^{+}, p^{-})$ is referred to as the weak derivative of $p(x;\theta)$. We restrict our definition here to the univariate parameter case; the multivariate parameter case extends this definition to form a vector of triples, with one triple for each dimension. We list the weak derivatives for several widely-used distributions in Table 1, and defer to Heidergott et al. (2003) for their derivations.

The derivative is weak because we do not require the density to be differentiable on its domain, but rather require that integrals of the decomposition against sets of test functions converge. For example, the mass functions of discrete distributions result in positive and negative components that are delta functions, which are integrable against continuous functions. The weak derivative is not unique, but always exists and can be obtained using the Hahn-Jordan decomposition of a signed measure into two measures that have complementary support (Billingsley, 2008). Like the score function and the sampling path, the weak derivative is a tool we will use to study the sensitivity analysis problem (2), although it has uses in other settings. The connections between the weak derivative and functional analysis, as well as a general calculus for weak differentiability, are described by Heidergott and Leahu (2010).
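
As a concrete instance, for the mean $\theta = \mu$ of a Gaussian $\mathcal{N}(x;\mu,\sigma^2)$ one can verify analytically that a valid triple is $c_\theta = 1/(\sigma\sqrt{2\pi})$ with positive and negative components located at $\mu \pm R$, $R \sim \text{Rayleigh}(\sigma)$. A minimal numerical check of the decomposition (38), assuming this derivation (parameter values are arbitrary):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def grad_mu_pdf(x, mu, sigma):
    # d/dmu N(x; mu, sigma^2) = (x - mu)/sigma^2 * N(x; mu, sigma^2)
    return (x - mu) / sigma ** 2 * normal_pdf(x, mu, sigma)

def rayleigh_pdf(r, sigma):
    return (r / sigma ** 2) * math.exp(-r ** 2 / (2 * sigma ** 2)) if r > 0 else 0.0

mu, sigma = 0.3, 1.2
c = 1.0 / (sigma * math.sqrt(2 * math.pi))   # the constant c_theta of the triple
max_err = 0.0
for x in [-2.0, -0.5, 0.3001, 1.0, 2.5]:
    pos = rayleigh_pdf(x - mu, sigma)        # p+ : density of mu + Rayleigh(sigma)
    neg = rayleigh_pdf(mu - x, sigma)        # p- : density of mu - Rayleigh(sigma)
    max_err = max(max_err, abs(grad_mu_pdf(x, mu, sigma) - c * (pos - neg)))
print("max abs error:", max_err)
```

The pointwise agreement holds exactly: for $x > \mu$ only the positive component is active, for $x < \mu$ only the negative one, reproducing the sign change of $\nabla_\mu p$ about the mean.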

### 6.2 Deriving the Estimator

Using our knowledge of weak derivatives of probability measures, we can now derive a third estimator for the gradient problem (2). This will lead us to an unbiased gradient estimator, which intuitively computes the gradient using a weighted difference of two expectations. For $D$-dimensional parameters $\boldsymbol{\theta}$, we can rewrite the gradient for the $i$th parameter $\theta_i$ as:

$$\begin{aligned}
\eta_i &= \nabla_{\theta_i}\mathbb{E}_{p(x;\boldsymbol{\theta})}[f(x)] = \nabla_{\theta_i}\int p(x;\boldsymbol{\theta})f(x)\,\mathrm{d}x = \int \nabla_{\theta_i} p(x;\boldsymbol{\theta})\,f(x)\,\mathrm{d}x && \text{(39a)}\\
&= c_{\theta_i}\left(\int f(x)\,p_i^{+}(x;\boldsymbol{\theta})\,\mathrm{d}x - \int f(x)\,p_i^{-}(x;\boldsymbol{\theta})\,\mathrm{d}x\right) && \text{(39b)}\\
&= c_{\theta_i}\left(\mathbb{E}_{p_i^{+}(x;\boldsymbol{\theta})}[f(x)] - \mathbb{E}_{p_i^{-}(x;\boldsymbol{\theta})}[f(x)]\right) && \text{(39c)}\\
\bar{\eta}_i &= \frac{c_{\theta_i}}{N}\sum_{n=1}^{N}\left(f\big(\dot{x}^{(n)}\big) - f\big(\ddot{x}^{(n)}\big)\right), \qquad \dot{x}^{(n)} \sim p_i^{+}(x;\boldsymbol{\theta}),\ \ \ddot{x}^{(n)} \sim p_i^{-}(x;\boldsymbol{\theta}), && \text{(39d)}
\end{aligned}$$
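
A sketch of this weighted-difference estimator for the mean of a univariate Gaussian, using the Rayleigh-located positive and negative components (a decomposition one can verify analytically) and the illustrative cost $f(x) = x^2$, for which the true gradient is $2\mu$. Sharing one Rayleigh draw between the two terms is an optional coupling choice, made here to reduce variance:

```python
import math
import random

def mvd_grad_mu(f, mu, sigma, n, rng):
    """Measure-valued estimate of d/dmu E_{N(mu, sigma^2)}[f(x)] via
    c_theta * (E_{p+}[f] - E_{p-}[f]) with p+/- located at mu +/- Rayleigh(sigma)."""
    c = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    total = 0.0
    for _ in range(n):
        u = 1.0 - rng.random()                     # u in (0, 1]
        r = sigma * math.sqrt(-2.0 * math.log(u))  # Rayleigh(sigma) via inverse CDF
        total += f(mu + r) - f(mu - r)             # positive and negative terms share r
    return c * total / n

rng = random.Random(0)
est = mvd_grad_mu(lambda x: x * x, 0.5, 1.0, 50000, rng)
print(est)  # analytic gradient of E[x^2] with respect to mu is 2*mu = 1.0
```

Note that, unlike the score-function and pathwise estimators, neither the density of the measure nor the derivative of the cost appears: only evaluations of $f$ under the two component distributions are needed.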