Sensitivity Analysis for Rare Events based on Rényi Divergence
Abstract
Rare events play a key role in many applications and numerous algorithms have been proposed for estimating the probability of a rare event. However, relatively little is known about how to quantify the sensitivity of such a probability with respect to model parameters. In this paper, instead of the direct statistical estimation of rare event sensitivities, we develop novel and general uncertainty quantification and sensitivity bounds which are not tied to specific rare event simulation methods and which apply to families of rare events. Our method is based on a recently derived variational representation for the family of Rényi divergences in terms of risk-sensitive functionals associated with the rare events under consideration. Based on the derived bounds, we propose new sensitivity indices for rare events and relate them to the moment generating function of the score function. The bounds scale in such a way that we can additionally develop sensitivity indices for large deviation rate functions.
1 Introduction and Main Result
Rare events play an important role in a wide range of applications. For example, in insurance, finance and risk management, rare events play an outsized role due to potentially catastrophic consequences [1]. In queueing theory the probability of a buffer overflow often needs to be estimated [2], and in molecular dynamics, metastability effects play a crucial role in determining the behavior of the system [3]. Similarly, extreme value theory studies events and statistical samples which are far from those typically observed [4]. There is a large body of literature on rare event simulation and many different techniques have been developed to approximate the probability of a rare event. For example, importance sampling [5, 6, 7] transforms the distribution of random variables in order to make the rare event typical and corrects for bias using the likelihood ratio; interacting particle systems methods [8, 9] use many (dependent) copies of the system to speed up the exploration of state space; splitting techniques [10, 11, 12, 13] decompose the problem of a single rare event into a sequence of not-so-rare events; and so on. Closely related are multilevel methods inspired primarily by statistical mechanics considerations, e.g. [14], as well as by rare events involving barrier crossing in molecular simulation, e.g. [15, 16].
In this paper we are primarily interested in the problem of uncertainty quantification (UQ) and in particular sensitivity analysis for rare events, a problem of practical importance whenever there is uncertainty in the parameters of the model or even in the model itself. This problem has rarely been addressed in the literature despite the fact that the statistics of rare events are often heavily influenced by the particular values of model parameters. The case of Poisson processes in the context of importance sampling for risk models was considered in [17, 18], while more recent work, [19], proposed importance sampling combined with splitting techniques in the context of Gaussian models and Malliavin calculus. Some examples using the likelihood ratio method are also discussed in [20].
Our approach in this paper is not based on any specific algorithm for rare event simulation but rather on novel information theoretic bounds. These bounds allow us to define a new sensitivity index that is independent of the particular event. Instead, the bounds hold for all rare events with probability above any fixed threshold. Specifically, we utilize the Rényi family of relative entropies (a.k.a. Rényi divergence) as a measure of uncertainty between probability distributions, together with a new variational representation for risk-sensitive functionals in terms of Rényi relative entropy derived by Atar et al. [21], to obtain general bounds for the uncertainty quantification and sensitivity analysis of families of rare events.
We first introduce the main objects of interest, namely sensitivity indices for probabilities of rare events, and discuss the challenges involved in their estimation. We then present the main result of the paper: bounds on the sensitivity indices for the families of events whose probability is at least , where is any fixed rare event threshold. At this stage we assume the degree of rarity is characterized by but later on a large deviation parameter will be introduced in Section 7.
Gradient sensitivity indices for rare events. Let be a family of probability measures parameterized by a vector . We assume that where is a reference measure and we denote by the corresponding family of densities. We also assume that the mapping satisfies suitable differentiability and integrability conditions in order to interchange integration and differentiation. For a rare event with we define the sensitivity index in the direction by
(1) 
which describes the relative change of the quantity of interest with respect to the parameter in the direction.
One of the simplest ways to estimate the sensitivity index (1) is by considering finite difference approximations for each partial derivative, i.e., considering all coordinate unit directions :
(2) 
However, the cost of implementing such an approximation can be prohibitive, given the cost of estimating the small probabilities . A variant of the likelihood ratio method can at least partially address this issue, as we discuss next.
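As a rough illustration of why (2) is costly, the following sketch estimates a rare event probability and its finite-difference log-sensitivity by crude Monte Carlo. The Gaussian family N(theta, 1), the event {X > a}, and all parameter values are our own illustrative choices, not part of the paper's development:

```python
import numpy as np

rng = np.random.default_rng(0)

def rare_event_prob(theta, a=4.0, n=10**6):
    """Crude Monte Carlo estimate of P^theta(X > a) for X ~ N(theta, 1)."""
    return np.mean(rng.normal(theta, 1.0, size=n) > a)

def fd_sensitivity(theta, eps=0.1, a=4.0, n=10**6):
    """Centered finite-difference estimate of (d/dtheta) log P^theta(X > a),
    i.e., a one-dimensional analogue of (2)."""
    p_plus = rare_event_prob(theta + eps, a, n)
    p_minus = rare_event_prob(theta - eps, a, n)
    return (np.log(p_plus) - np.log(p_minus)) / (2.0 * eps)
```

Each finite difference requires two small-probability estimates, and the sample size n must grow roughly like the inverse of the event probability just to see any hits, which is exactly the cost issue noted above.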
Likelihood ratio method for rare events. The gradient sensitivity for expected values of observables can be computed using various methods, such as the likelihood ratio method and infinitesimal perturbation analysis, see e.g. [5]. While these methods are in principle applicable to the problem of computing the rare event sensitivity index (1) (as we show next in the context of likelihood ratio), they still require the use of some form of accelerated Monte Carlo simulation, for example importance sampling [5], to (possibly) obtain acceptable performance when rare events are involved.
We define the score function for the parametric family by
(3) 
with the convention that if . We also denote by the probability conditioned on the event , i.e., , where is the indicator function.
Under suitable conditions to ensure the interchangeability between integrals and derivatives, the sensitivity index for the rare event given in (1) can be rewritten as
(4) 
An algorithm that estimates (4) by combining the likelihood ratio method with importance sampling through interacting particles was recently developed in [20]. Both approaches, (2) and (4), are feasible only when an accelerated Monte Carlo scheme appropriate to the particular event has been designed. Therefore, sensitivity analysis methods for rare events that apply to a whole class of events (or, more generally, to expected values which are sensitive to rare events) would be a more practical computational tool. One approach to this end is to derive upper and lower bounds for that can serve as new sensitivity indices. These are of course less accurate, but may be much easier to compute, and can be used to identify those parameters for which greater accuracy is not needed. We show next that the well-known Cramér-Rao type bounds are not useful for the sensitivity analysis of rare events.
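To make the conditional-expectation form (4) concrete, here is a minimal sketch for the same hypothetical Gaussian family N(theta, 1), whose score is W(x) = x - theta; the sensitivity of log P^theta(X > a) is then the conditional mean of the score on the event (family and event chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def lr_sensitivity(theta, a=3.0, n=10**6):
    """Likelihood-ratio estimate of (4): the conditional expectation of the
    score W(x) = x - theta on the event {X > a}, for X ~ N(theta, 1)."""
    x = rng.normal(theta, 1.0, size=n)
    hits = x[x > a]                  # samples that fall in the rare event
    return np.mean(hits - theta)    # empirical E[W | X > a]
```

Only the samples hitting the event contribute, so without an accelerated sampler the effective sample size is n times the (small) event probability.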
Failure of Cramér-Rao type bounds. The sensitivity index for a regular, i.e., non-rare, observable has the form . This can be easily bounded using the Cramér-Rao inequality [22], i.e.,
where is the Fisher information matrix. Applying the Cramér-Rao bound to a rare event yields
(5) 
Unfortunately, this (naive) sensitivity bound is rather useless since it scales as . This can be very large for a rare event , while one expects the sensitivity index to be of order .
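A quick numerical check of this scaling (our own example, not from the paper): for the illustrative family N(theta, 1), with Fisher information 1 and events {X > c}, the exact sensitivity is the Gaussian hazard rate, which grows only linearly in c, while the Cramér-Rao-type bound blows up like the inverse square root of the event probability:

```python
import numpy as np
from scipy.stats import norm

# X ~ N(0, 1), A = {X > c}; the exact sensitivity of log P(A) in theta is
# the hazard rate phi(c) / (1 - Phi(c)) ~ c, while the bound (5) behaves
# like sqrt(P(A)(1 - P(A))) / P(A) ~ 1 / sqrt(P(A)).
for c in (2.0, 4.0, 6.0):
    p = norm.sf(c)
    exact = norm.pdf(c) / p
    cr_bound = np.sqrt(p * (1.0 - p)) / p
    print(f"c={c}: exact sensitivity {exact:.2f}, Cramer-Rao bound {cr_bound:.2e}")
```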
Information-based sensitivity indices for rare events. In view of the difficulty of directly approximating (1), the main contribution of this paper is to develop information-based bounds for the sensitivity indices defined in (1) that apply to families of rare events. The bounds involve only a single risk-sensitive functional for the score function . While this quantity must be approximated, the bound does not require a different rare event sampler for each distinct rare event. One of the main results proved in this paper is presented next, while complete technical assumptions on are given in Section 4.
Main result: Sensitivity bounds and a sensitivity index for rare events. For let
denote the sets, parametrized by the positive constant , of all events which are equally probable (or rather equally rare if ). Then, for any we have
(6) 
where
(7) 
(Note that is the cumulant generating function for the score function defined in (3).) Furthermore, denoting by the exponential family of tilted measures
we have
(8) 
Here are determined by
and denotes the relative entropy of with respect to . See Figure 1 for a graphical depiction of the characterization of .
The proposed rare event sensitivity indices (7) are bounds for the gradient-based indices (1). They do not require a rare event sampler for each rare event , as one readily sees in the definition of (7) or (8), and they apply to the entire class of rare events, that is for the probability level sets for the parametric model . Intuitively, balances between the rarity of the event as quantified by and the cost to be paid in order to make the event less rare as quantified by . Note also that, due to the monotonicity of (7) in , the rare event sensitivity indices actually characterize the sensitivity of the model for each family
i.e., all events which are less rare than the threshold . In this sense, the bounds (6) present similar computational advantages and tradeoffs as other sensitivity bounds for typical observables (not rare-event-dependent), such as the Cramér-Rao information bounds, see (5). Namely, they are less accurate than the exact gradient-based indices (1), but they can be used to determine insensitive parameters and directions , without requiring recalculation for different events . We present a simple demonstration of such an insensitivity analysis based on the bounds (6) in the last example of Section 6.
Additionally, while in ordinary Cramér-Rao bounds the sensitivity of the parametric model is encoded in the Fisher information matrix (the variance of the score function), for rare events it is the cumulant generating function of the score function that plays the central role. Since the cumulant generating function controls rare events (as in Cramér's Theorem), our bounds show that the rare events associated with the score function control the sensitivity of all rare events. The question of the tightness of the sensitivity bounds will be addressed in [23], which discusses the UQ and sensitivity analysis for more general risk-sensitive functionals.
For a practical implementation of the bound one can use concentration inequalities [24], in a similar manner to UQ bounds for regular observables [25], and obtain easily computable but less accurate bounds; see Section 5. As the bound involves the moment generating function of the score function, the rare event simulation techniques mentioned at the beginning of the introduction could also in principle be used to solve the optimization problem in the sensitivity indices. We plan to revisit this issue in a follow-up publication.
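Assuming the sensitivity index takes the variational form suggested by (7), namely the infimum over s > 0 of [Lambda(s) + log(1/eta)] / s, with Lambda the cumulant generating function of the projected score and eta the rarity threshold, one could estimate it from score samples alone. The following sketch (our own, using an empirical CGF and a bounded scalar search; this exact form is an assumption for illustration) is one way to do so:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rare_event_index(score_samples, eta, s_max=10.0):
    """Estimate inf_{s > 0} [Lambda(s) + log(1/eta)] / s, where Lambda is
    the cumulant generating function of the (projected) score, replaced
    here by its empirical counterpart from i.i.d. samples."""
    w = np.asarray(score_samples, dtype=float)
    log_inv_eta = np.log(1.0 / eta)

    def objective(s):
        lam = np.log(np.mean(np.exp(s * w)))   # empirical CGF at s
        return (lam + log_inv_eta) / s

    res = minimize_scalar(objective, bounds=(1e-6, s_max), method="bounded")
    return res.fun, res.x                      # index estimate, optimal s
```

For a standard normal score, Lambda(s) = s**2 / 2 and the infimum equals sqrt(2 log(1/eta)), which the empirical version reproduces up to Monte Carlo error; for very large s the empirical CGF is dominated by a few extreme samples, which is exactly where the rare event simulation techniques mentioned above would enter.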
The paper is organized as follows. In Section 2 we begin with the study of an optimization problem which appears several times throughout the paper and then proceed with the definition and the properties of the Rényi divergence. In Section 3 we derive our main UQ bounds based on the Rényi divergence optimized over its parameter. We then derive in Section 4 information inequalities for the sensitivity indices. In Section 5 we discuss the practical implementation of the sensitivity indices for rare events, via concentration inequalities or via numerical estimation. We illustrate our results on several distributions from the exponential family in Section 6, and in Section 7, we present sensitivity bounds for large deviation rate functions.
2 Mathematical Preliminaries
2.1 An optimization problem
In order to obtain optimal UQ bounds we have to consider a certain optimization problem involving cumulant generating functions. We note that a bound function with similar structure has been derived and studied recently in [26, 27, 28], and we slightly generalize and reformulate those results in this section.
Let be a Polish space, the associated Borel σ-algebra, and denote by the set of all probability measures on . Given a probability measure and a measurable function , consider the moment generating function with . We will assume that is such that the moment generating function is finite in a neighborhood of the origin and denote the space of such functions by .
Definition 1.
A measurable function belongs to the set if and only if there exists such that .
If , then, as is well known, has finite moments of all orders; see also the discussion in Appendix A.
Definition 2.
Given and the cumulant generating function of is given by
A family of probability measures naturally associated to is the exponential family given by
which is welldefined if .
In Appendix A, we summarize various useful properties of cumulant generating functions. These will be needed to study the following minimization problems, which arise in the definition of the sensitivity indices introduced in Section 1.
Proposition 3.
Let and with not a constant a.s. Suppose is the largest open set such that for all .

For any the optimization problems
have unique minimizers . Let be defined by
Then the minimizers are finite for and if .

If then
(9) where is strictly increasing in and is determined by the equation
(10) 
is finite in two distinct cases.

If (in which case is unbounded above/below) is finite if and , and for we have
(11) 
If and is finite then is a.s. bounded above/below and for we have
(12)

Proof.
Unless the random variable is symmetric, in general the two optimization problems are not related to each other since they involve the cumulant generating function for and respectively.
Remark 4.
An alternative characterization of the minimizers uses the Legendre-Fenchel transform of ,
which is a convex function with a minimum equal to that is attained at . The optimality condition (10) for can be written, equivalently, as
Thus the set of values of for which a finite minimizer exists corresponds to the possible level sets for (the rate function in Cramer’s theorem [29]), provided is strictly convex.
If the function is centered, i.e., , then when the minimizers are and we can expand as a Taylor series in the variable , as the following proposition (proved in [28, Lemma 2.10]) shows; it will be useful for non-rare events.
Proposition 5 (Linearization).
If with then we have
2.2 Rényi divergence, relative entropy, and a variational principle
In this section, we discuss the concepts of Rényi divergence and associated variational representations and duality formulas. These tools provide the mathematical foundations for the uncertainty quantification and sensitivity analysis methods introduced in the subsequent sections.
Given , we pick a reference measure such that and (i.e., and are absolutely continuous with respect to ). Denote by (resp. ) the Radon-Nikodym derivative of (resp. ) with respect to the reference measure . Then, for , the Rényi divergence of degree of with respect to is defined by (cf. [30, 21])
and is independent of the choice of the reference measure . Another common definition of Rényi divergence utilizes the factor (cf. [31, 32, 33]) instead of . When and are mutually absolutely continuous, Rényi divergence can be written without reference to a reference measure as
(13) 
where the rightmost expression reveals that Rényi divergence is proportional to the cumulant (or log-moment) generating function of the logarithm of the Radon-Nikodym derivative (i.e., ). The definition of Rényi divergence is extended to by letting , where is the relative entropy (or Kullback-Leibler divergence) defined by
and we have . In view of (13) we can also extend to negative values of if and are mutually absolutely continuous. Due to our convention for the definition of we have the symmetry and thus, in particular, . Further properties of the Rényi divergence can be found for example in [34, 32, 33].
A variational formula in terms of Rényi divergence, recently derived in [26, Theorem 2.1], will play a central role in this paper.
Theorem 6 (Variational representation involving Rényi divergence).
Let with and let . For any bounded and measurable we have
(14)  
(15) 
Remark 7.
The variational representation formula (15) is a generalization of the Donsker-Varadhan variational formula involving relative entropy (also known as the Gibbs variational principle) [35, 36]. Indeed, taking and in (15) we obtain the well-known formula
(16) 
Note that the variational formula (16) serves as the basis of the UQ theory for typical events developed in [26, 28, 37, 25].
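As a self-contained numerical sanity check of (16) (our own, on a finite state space), the supremum is attained at the tilted measure proportional to e^f dP, where the formula holds with equality:

```python
import numpy as np

rng = np.random.default_rng(3)

p = rng.dirichlet(np.ones(5))         # a probability vector P on 5 states
f = rng.normal(size=5)                # a bounded observable f

lhs = np.log(np.sum(p * np.exp(f)))   # log E_P[e^f]

q = p * np.exp(f)                     # optimizer of (16): dQ*/dP = e^f / E_P[e^f]
q /= q.sum()
rel_ent = np.sum(q * np.log(q / p))   # relative entropy R(Q* || P)
rhs = np.sum(q * f) - rel_ent         # E_{Q*}[f] - R(Q* || P)

assert np.isclose(lhs, rhs)           # the Gibbs variational principle holds
```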
3 UQ Bounds for Rare Events
Let and be a measurable function. It is convenient to think of as the “true” probabilistic model and of as a “nominal” or “reference” model. By Theorem 6, equation (14), we have the upper bound
(17) 
which constitutes an upper bound for risk-sensitive observables (i.e., observables for which tail events matter). The upper bound consists of two terms: the first is an estimate of the risk-sensitive observable under the "nominal" model, and the second is the cost to be paid for the substitution of the "true" model, quantified here by the Rényi divergence.
By taking the limit , it formally holds that
(18) 
which is an upper bound for typical (i.e., not risk-sensitive) observables. This upper bound (18) was the starting point for deriving UQ and sensitivity bounds for typical observables in [26, 27, 28, 37, 25]. Here we derive corresponding bounds for log-probabilities of rare events, which form a particular class of risk-sensitive observables. In other words, we are interested in bounding quantities of the form . The following theorem summarizes the UQ bounds for rare events.
Theorem 8.
(a) Fix and let be such that and . Then
(19) 
(b) If and are mutually absolutely continuous, then (19) can be rewritten as
(20) 
Proof.
(a) We first prove the upper bound. By setting and in (17), we get
By taking and considering on , on and then sending , the above inequality becomes
Multiplying with and subtracting , we have
Since this holds for all , the upper bound in (19) is proved. For the lower bound, we reverse and in (17) and proceed as in the upper bound.
We turn next to determining the optimal in (20) using the results from Section 2.1. We select the function , whose cumulant generating function is
(21) 
As is readily apparent from the bound (20), in order to obtain nontrivial upper and lower bounds we should assume is finite in an open neighborhood of the interval . If we assume this then the function has the following elementary properties:

.

, .

More generally for any we have
(22) where is the exponential family given by
(23) The family interpolates between and since and .
Using these properties, as well as Proposition 3 we obtain the following, explicit UQ bounds for all rare events such that .
Theorem 9 (UQ bounds for rare events).
Proof.
With given in (21) and setting , part (b) of Theorem 8 is rewritten as
(26) 
The lower bound is then an immediate consequence of Proposition 3 together with (22) and (25).
However, the upper bound involves a modified calculation since the infimum is taken only over . We first note that the corresponding minimization of the quantity
(27) 
arising also in the proof of Proposition 3, e.g. (62), separates into two distinct cases. Indeed, for any we have
(28) 
using Properties 1-3 above for and the fact that is a strictly increasing function of in .
First, (28) implies that if
(29) 
then is strictly increasing in , hence the infimum on the upper bound of (26) occurs at . In this case, using that , we obtain that
(30) 
On the other hand, if we have
(31) 
then has a unique minimum, and the minimizer (equivalently, the minimizer of the upper bound of (26)) occurs at the unique finite root , namely
(32) 
We now combine (30) and (32) with the upper bound of (26) to obtain (24).
Note that the upper bound in (24) is continuous in since for , and by Property 3 we have that ; thus .
∎
Remark 10.
Remark 11.
Finally, we note that a much cruder UQ bound than (19) can be obtained from the upper and lower bounds that result from taking in (19). Indeed, we can consider the alternative definition of Rényi divergence [33]
and accordingly rewrite (19) for . Noting that
also referred to as the worst-case regret [33], we can bound (19) from above and below by selecting . This substitution obviously yields a less sharp version of (19), namely the (trivial) bound
(33) 
Note that this bound is valid if and , as long as is bounded from above/below.
4 Sensitivity Indices for Rare Events
In this section we consider a parametric family of probability measures with and we assume , where is a reference measure in and with density . We further assume that the mapping is twice differentiable with respect to for all , together with a suitable integrability condition on , so as to allow the interchange of integrals and derivatives. The sensitivity index for a rare event (with ) in the direction is then given by
(34) 
where is the score of the probability measure , see also (4).
Here, we derive computationally tractable bounds on the sensitivity indices and corresponding new rare event sensitivity indices , starting from the UQ bounds presented in Section 3.
By considering the measures and we have that and thus it is natural to rescale the parameter according to
(35) 
Taking we obtain a nontrivial bound for the sensitivity indices, as the following theorem shows. In order to state our results we require the cumulant generating function for the score function defined in (3):
This cumulant generating function has the following elementary properties:

.

where denotes the Fisher information matrix for the parametric family .

More generally we have
where is the exponential family with
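These elementary properties are easy to check numerically. The sketch below (our own, for the illustrative location family N(theta, 1), whose score is x - theta and whose Fisher information is 1) verifies that the empirical cumulant generating function of the score vanishes at the origin and has second derivative there close to the Fisher information:

```python
import numpy as np

rng = np.random.default_rng(4)

theta = 0.7
x = rng.normal(theta, 1.0, size=10**6)
w = x - theta                        # score of N(theta, 1) at theta

def cgf(s):
    """Empirical cumulant generating function of the score."""
    return np.log(np.mean(np.exp(s * w)))

h = 1e-2
hess = (cgf(h) - 2.0 * cgf(0.0) + cgf(-h)) / h**2   # ~ Lambda''(0)

assert abs(cgf(0.0)) < 1e-12         # Lambda(0) = 0 exactly
assert abs(hess - 1.0) < 0.05        # Lambda''(0) ~ Fisher information = 1
```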
Theorem 12.
Assume that the mapping is for all and that for each there exists such that
Then
(36) 
Proof.
Rewriting (20) for and and substituting from (35), we get the upper bound
valid for all . Dividing by , sending , and then infimizing on , we get
In order to complete the proof of the upper bound we have to interchange between the limit and the integral. This is justified by the dominated convergence theorem since we have from Taylor’s theorem that
for some in the interval defined by the points and and for . Therefore,
which establishes the upper bound in (36). The lower bound in (36) is proved similarly. ∎
Using (9) and (10) from Proposition 3 to evaluate the infimum in (36), we obtain a representation of the bounds.
Theorem 13 (Sensitivity indices for rare events).
Under the same assumptions as in Theorem 12 consider the family
of all events which are less rare than a specified threshold . Then there exists such that for and any we have
(37) 
where
(38) 
and are determined by
Similarly to Theorem 13, the rare event sensitivity indices characterize the sensitivity of the model for each level set , i.e. corresponding to all events which are equally rare and characterized by .
The new sensitivity indices defined in (38) are in general less sharp than the gradient-based indices in (34), due to the inequalities (37). However, they do not require a rare event sampler for each rare event , as one readily sees by comparing (4) to (38). In fact, the indices are identical for the entire classes of rare events in or in . In this sense, they present similar computational advantages and tradeoffs as other sensitivity indices for typical observables (not rare-event-dependent), such as Fisher information bounds [28]; in particular, they are less sharp but can be used to efficiently screen out insensitive parameters and directions in parameter space, i.e., directions where ; we refer to [38] for such sensitivity screening results, at least for typical events and observables.
5 Bounds and Approximations for the Rare Event Sensitivity Indices
The upper and lower bounds in Theorem 12 and the representation of the sensitivity indices in Theorem 13 suggest at least two approaches to practically implement the indices . The first one is based on concentration inequalities, while the second one relies on the direct statistical and numerical estimation of the indices . We next discuss the first approach.
Concentration Inequalities. Concentration inequalities, i.e., explicit bounds of the probability of tail events, are often obtained via a Chernoff bound by using computable upper bounds on the cumulant generating function of the random variable. Such upper bounds typically involve only a few features of the underlying random variable such as mean, variance, bounds, higher moments, and so on, see for instance [24].
Here we can naturally use such inequalities to provide simplified and computable bounds for the variational formula for the sensitivity index, namely
by bounding the cumulant generating function of the score function . We provide two such examples, under the assumption that the score function is bounded. One can prove similar results in the same spirit by using different assumptions on the tail behavior of the score function, e.g., by assuming that the score is a sub-Gaussian or a sub-Poissonian random variable [24].
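For instance (a sketch of the bounded-score case, with details that are our own): if the score is mean zero and takes values in [a, b], Hoeffding's lemma gives Lambda(s) <= s**2 (b - a)**2 / 8, and substituting this into a variational index of the form inf_{s>0} [Lambda(s) + log(1/eta)] / s yields the closed-form bound (b - a) * sqrt(log(1/eta) / 2):

```python
import numpy as np

def hoeffding_index(a, b, eta):
    """Closed-form upper bound on inf_{s>0} [Lambda(s) + log(1/eta)] / s
    obtained by replacing the CGF of a mean-zero score in [a, b] with
    Hoeffding's bound Lambda(s) <= s**2 * (b - a)**2 / 8; the minimizer
    is s* = sqrt(8 log(1/eta)) / (b - a)."""
    return (b - a) * np.sqrt(np.log(1.0 / eta) / 2.0)
```

The bound requires only the range of the score and the rarity threshold, at the price of being cruder than the exact variational index.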
A similar use of various concentration inequalities in order to obtain computable uncertainty quantification bounds for ordinary observables was proposed recently in [25].