# Sensitivity Analysis for Rare Events based on Rényi Divergence

Paul Dupuis
Division of Applied Mathematics, Brown University, Providence, RI 02912, USA
dupuis@dam.brown.edu, Research supported in part by the Defense Advanced Research Projects Agency (DARPA) EQUiPS program (W911NF-15-2-0122), the Air Force Office of Scientific Research (AFOSR) (FA-9550-18-1-0214) and the National Science Foundation (DMS-1317199)
Markos A. Katsoulakis
Department of Mathematics and Statistics, University of Massachusetts Amherst, Amherst, MA 01002, USA
markos@math.umass.edu, Research supported in part by the Defense Advanced Research Projects Agency (DARPA) EQUiPS program (W911NF-15-2-0122), the Air Force Office of Scientific Research (AFOSR) (FA-9550-18-1-0214) and the National Science Foundation (DMS-1515712)
Yannis Pantazis
Institute of Applied and Computational Mathematics, Foundation for Research and Technology - Hellas, Heraklion, 70013, Greece
pantazis@iacm.forth.gr
Luc Rey-Bellet
Department of Mathematics and Statistics, University of Massachusetts Amherst, Amherst, MA 01002, USA
luc@math.umass.edu, Research supported in part by the National Science Foundation (DMS-1515712) and the Air Force Office of Scientific Research (AFOSR) (FA-9550-18-1-0214)
August 4, 2019
###### Abstract

Rare events play a key role in many applications and numerous algorithms have been proposed for estimating the probability of a rare event. However, relatively little is known about how to quantify the sensitivity of the probability with respect to model parameters. In this paper, instead of the direct statistical estimation of rare event sensitivities, we develop novel and general uncertainty quantification and sensitivity bounds which are not tied to specific rare event simulation methods and which apply to families of rare events. Our method is based on a recently derived variational representation for the family of Rényi divergences in terms of risk sensitive functionals associated with the rare events under consideration. Based on the derived bounds, we propose new sensitivity indices for rare events and relate them to the moment generating function of the score function. The bounds scale in such a way that we additionally develop sensitivity indices for large deviation rate functions.

## 1 Introduction and Main Result

Rare events play an important role in a wide range of applications. For example in insurance, finance and risk management, rare events play an outsized role due to potentially catastrophic consequences [1]. In queueing theory the probability of a buffer overflow often needs to be estimated [2], and in molecular dynamics, metastability effects play a crucial role in determining the behavior of the system [3]. Similarly, extreme value theory studies events and statistical samples which are far from the typical observed [4]. There is a large body of literature on rare event simulation and many different techniques have been developed to approximate the probability of a rare event. For example, importance sampling [5, 6, 7] transforms the distribution of random variables in order to make the rare event typical and corrects for bias using the likelihood ratio; interacting particle systems methods [8, 9] use many (dependent) copies of the system to speed the exploration of state space; splitting techniques [10, 11, 12, 13] decompose the problem of a single rare event into a sequence of not so rare events; and so on… Closely related are multi-level methods inspired primarily by statistical mechanics considerations, e.g. [14], as well as by rare events involving barrier crossing in molecular simulation, e.g. [15, 16].

In this paper we are primarily interested in the problem of uncertainty quantification (UQ) and in particular sensitivity analysis for rare events, a problem of practical importance whenever there is uncertainty in the parameters of the model or even in the model itself. This problem has rarely been addressed in the literature despite the fact that the statistics of rare events are often heavily influenced by the particular values of model parameters. The case of Poisson processes in the context of importance sampling for risk models was considered in [17, 18], while more recent work, [19], proposed importance sampling combined with splitting techniques in the context of Gaussian models and Malliavin calculus. Some examples using the likelihood ratio method are also discussed in [20].

Our approach in this paper is not based on any specific algorithm for rare event simulation but rather on novel information theoretic bounds. These bounds allow us to define a new sensitivity index that is independent of the particular event. Instead, the bounds hold for all rare events with probability above any fixed threshold. Specifically, we utilize the Rényi family of relative entropies (a.k.a. Rényi divergence) as a measure of uncertainty between probability distributions, and a new variational representation for risk-sensitive functionals in terms of Rényi relative entropy, derived by Atar et al. [21], to obtain general bounds on the uncertainty quantification and sensitivity analysis of families of rare events.

We first introduce the main objects of interest, namely sensitivity indices for probabilities of rare events, and discuss the challenges involved in their estimation. We then present the main result of the paper: bounds on the sensitivity indices for the families of events whose probability is at least $e^{-M}$, where $M > 0$ is any fixed rare event threshold. At this stage we assume the degree of rarity is characterized by $M$, but later on a large deviation parameter will be introduced in Section 7.

Gradient sensitivity indices for rare events. Let $\{P^\theta\}_{\theta\in\Theta}$, $\Theta \subset \mathbb{R}^K$, be a family of probability measures parameterized by a vector $\theta$. We assume that $P^\theta \ll R$, where $R$ is a reference measure, and we denote by $p^\theta = dP^\theta/dR$ the corresponding family of densities. We also assume that the mapping $\theta \mapsto p^\theta$ satisfies suitable differentiability and integrability conditions in order to interchange integration and differentiation. For a rare event $A$ with $P^\theta(A) > 0$ we define the sensitivity index in the direction $v \in \mathbb{R}^K$ by

  $S^\theta_v(A) = v^T \nabla_\theta \log P^\theta(A) = \dfrac{v^T \nabla_\theta P^\theta(A)}{P^\theta(A)},$  (1)

which describes the relative change of the quantity of interest $P^\theta(A)$ with respect to the parameter $\theta$ in the direction $v$.

One of the simplest ways to estimate the sensitivity index (1) is by considering finite difference approximations for each partial derivative, i.e., considering all coordinate unit directions $e_i$, $i = 1, \dots, K$:

  $S^\theta_{e_i}(A) \approx \dfrac{\log P^{\theta+\epsilon e_i}(A) - \log P^\theta(A)}{\epsilon}.$  (2)

However, the cost of implementing such an approximation can be prohibitive, given the cost of estimating the small probabilities $P^\theta(A)$ and $P^{\theta+\epsilon e_i}(A)$. A variant of the likelihood ratio method can at least partially address this issue, as we discuss next.
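To make the cost concrete, here is a minimal Monte Carlo sketch of the finite-difference index (2) on a toy model of our own choosing (not an algorithm from the paper): $P^\theta = N(\theta,1)$ with the rare event $A = \{X > c\}$. Even with millions of samples and common random numbers, the estimate of the log-probability difference remains noisy because only a tiny fraction of samples hits $A$.

```python
import math
import numpy as np

# Toy model (our assumption, not from the paper): P^theta = N(theta, 1) and the
# rare event A = {X > c}.  We estimate the finite-difference index (2) by plain
# Monte Carlo with common random numbers.
rng = np.random.default_rng(0)
theta, eps, c, n = 0.0, 1e-2, 3.0, 2_000_000

z = rng.normal(0.0, 1.0, n)                      # common random numbers
p0 = np.mean(theta + z > c)                      # estimate of P^theta(A), ~1.35e-3
p1 = np.mean(theta + eps + z > c)                # estimate of P^{theta+eps}(A)
fd_index = (math.log(p1) - math.log(p0)) / eps   # finite-difference index (2)

# Exact sensitivity for this toy model: d/dtheta log P(X > c) = phi(c)/Phi(-c),
# the Gaussian Mills ratio (~3.28 for c = 3).
phi = math.exp(-c * c / 2) / math.sqrt(2 * math.pi)
exact = phi / (0.5 * math.erfc(c / math.sqrt(2)))
```

Without common random numbers the two log-probability estimates decorrelate and the finite-difference estimate degrades further, which is the difficulty the information-based bounds of this paper are designed to circumvent.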

Likelihood ratio method for rare events. The gradient sensitivity for expected values of observables can be computed using various methods, such as the likelihood ratio method and infinitesimal perturbation analysis, see e.g. [5]. While these methods are in principle applicable to the problem of computing the rare event sensitivity index (1) (as we show next in the context of likelihood ratio), they still require the use of some form of accelerated Monte Carlo simulation, for example importance sampling [5], to (possibly) obtain acceptable performance when rare events are involved.

We define the score function for the parametric family $\{P^\theta\}$ by

  $W^\theta(x) := \nabla_\theta \log p^\theta(x),$  (3)

with the convention that $W^\theta(x) = 0$ if $p^\theta(x) = 0$. We also denote by $P^\theta|_A$ the probability $P^\theta$ conditioned on the event $A$, i.e., $dP^\theta|_A = \chi_A \, dP^\theta / P^\theta(A)$, where $\chi_A$ is the indicator function of $A$.

Under suitable conditions to ensure the interchangeability between integrals and derivatives, the sensitivity index for the rare event $A$ given in (1) can be rewritten as

  $S^\theta_v(A) = \dfrac{v^T \nabla_\theta \int_A p^\theta \, dR}{P^\theta(A)} = \dfrac{v^T \int_A W^\theta \, dP^\theta}{P^\theta(A)} = E_{P^\theta|_A}\big[v^T W^\theta\big].$  (4)

An algorithm that estimates (4) by combining the likelihood ratio method with importance sampling through interacting particles was recently developed in [20]. Both approaches, (2) and (4), are feasible only when an accelerated Monte Carlo scheme appropriate to the particular event has been designed. Therefore, sensitivity analysis methods for rare events which apply to a whole class of events (or, more generally, to expected values which are sensitive to rare events) would be a more practical computational tool. One approach to this end is to derive upper and lower bounds for $S^\theta_v(A)$ that can serve as new sensitivity indices. These are of course less accurate, but may be much easier to compute, and can be used to identify those parameters for which greater accuracy is not needed. We show next that the well-known Cramér-Rao type bounds are not useful for the sensitivity analysis of rare events.
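Formula (4) expresses the index as a conditional expectation of the score over the rare event. A minimal sketch for the same hypothetical Gaussian model as above (our own example): the score is $W^\theta(x) = x - \theta$, so (4) reduces to $E[X - \theta \mid X > c]$.

```python
import math
import numpy as np

# Likelihood-ratio form (4) for the toy model P^theta = N(theta, 1), A = {X > c}:
# the score is W^theta(x) = x - theta, so (4) reduces to E[X - theta | X > c].
# A plain Monte Carlo estimate still needs enough samples to land in A.
rng = np.random.default_rng(1)
theta, c, n = 0.0, 3.0, 2_000_000

x = rng.normal(theta, 1.0, n)
hits = x[x > c]                               # samples falling in the rare event A
score_index = float(np.mean(hits - theta))    # Monte Carlo estimate of (4)

# Closed form for comparison: the Gaussian Mills ratio phi(c)/Phi(-c)
phi = math.exp(-c * c / 2) / math.sqrt(2 * math.pi)
exact = phi / (0.5 * math.erfc(c / math.sqrt(2)))
```

The estimator is unbiased but its effective sample size is only the number of hits of $A$, which is why some accelerated sampling scheme is needed in practice.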

Failure of Cramér-Rao type bounds. The sensitivity index for a regular, i.e., non-rare, observable $f$ has the form $v^T \nabla_\theta E_{P^\theta}[f]$. This can be easily bounded using the Cramér-Rao inequality [22], i.e.,

  $\big|v^T \nabla_\theta E_{P^\theta}[f]\big| \le \sqrt{\operatorname{Var}_{P^\theta}[f]}\,\sqrt{v^T F(P^\theta) v},$

where $F(P^\theta)$ is the Fisher information matrix. Applying the Cramér-Rao bound to the rare event observable $f = \chi_A$ yields

  $\big|S^\theta_v(A)\big| = \dfrac{1}{P^\theta(A)}\big|E_{P^\theta}[\chi_A v^T W^\theta]\big| \le \sqrt{\dfrac{1-P^\theta(A)}{P^\theta(A)}}\,\sqrt{v^T F(P^\theta) v}.$  (5)

Unfortunately, this (naive) sensitivity bound is rather useless since it scales as $P^\theta(A)^{-1/2}$. This can be very large for a rare event $A$, while one expects the sensitivity index to be of the order of $|\log P^\theta(A)|$.

Information-based sensitivity indices for rare events. In view of the difficulty of directly approximating (1), the main contribution of this paper is to develop information-based bounds for the sensitivity indices defined in (1) that apply to families of rare events. The bounds involve only a single risk-sensitive functional of the score function $W^\theta$. While this quantity must be approximated, the bound does not require a different rare event sampler for each distinct rare event. One of the main results proved in this paper is presented next, while complete technical assumptions are given in Section 4.

Main result: Sensitivity bounds and a sensitivity index for rare events. For $M > 0$ let

  $\bar{\mathcal{A}}_M = \{A : P^\theta(A) = e^{-M}\}$

denote the sets, parametrized by the positive constant $M$, of all events which are equally probable (or rather equally rare if $M \gg 1$). Then, for any $A \in \bar{\mathcal{A}}_M$ we have

  $I^\theta_{v,-}(M) \le S^\theta_v(A) \le I^\theta_{v,+}(M),$  (6)

where

  $I^\theta_{v,\pm}(M) := \pm\inf_{\alpha>0}\left\{\dfrac{H^\theta_v(\pm\alpha)+M}{\alpha}\right\} \quad\text{with}\quad H^\theta_v(\alpha) = \log E_{P^\theta}\big[e^{\alpha v^T W^\theta}\big].$  (7)

(Note that $H^\theta_v$ is the cumulant generating function of $v^T W^\theta$, where $W^\theta$ is the score function defined in (3).) Furthermore, denoting by $P^\theta_\alpha$ the exponential family of tilted measures

  $\dfrac{dP^\theta_\alpha}{dP^\theta} = e^{\alpha v^T W^\theta - H^\theta_v(\alpha)},$

we have

  $I^\theta_{v,\pm}(M) = \dfrac{d}{d\alpha}H^\theta_v(\alpha)\Big|_{\alpha=\pm\alpha_\pm} = E_{P^\theta_{\pm\alpha_\pm}}\big[v^T W^\theta\big].$  (8)

Here $\alpha_\pm > 0$ are determined by

  $R(P^\theta_{\pm\alpha_\pm}\,\|\,P^\theta) = M,$

and $R(\cdot\,\|\,\cdot)$ denotes the relative entropy of the first measure with respect to the second. See Figure 1 for a graphical depiction of the characterization of $\alpha_\pm$.
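As a sanity check of (7)-(8) on a case with a closed form (our own toy example, not from the paper): for $P^\theta = N(\theta,1)$ and $v = 1$ the score $v^T W^\theta = x - \theta$ is standard normal, so $H^\theta_v(\alpha) = \alpha^2/2$, and the infimum in (7) evaluates to $\pm\sqrt{2M}$, attained at $\alpha_\pm = \sqrt{2M}$. A crude grid minimization recovers this.

```python
import math

# Toy check of (7) (our own example): for P^theta = N(theta, 1) and v = 1 the
# score is N(0,1), so H(alpha) = alpha^2/2 and
#   I_+(M) = inf_{alpha>0} (alpha^2/2 + M)/alpha = sqrt(2M),  I_-(M) = -sqrt(2M).
def H(alpha):
    return 0.5 * alpha * alpha

def I_plus(M, alpha_max=20.0, steps=200_000):
    # brute-force grid minimization of (H(alpha) + M)/alpha over (0, alpha_max]
    h = alpha_max / steps
    return min((H(h * k) + M) / (h * k) for k in range(1, steps + 1))

M = 5.0
numerical = I_plus(M)
closed_form = math.sqrt(2.0 * M)   # ~3.1623
```

For half-line events $A = \{X > c\}$ with $P^\theta(A) = e^{-M}$ one has $c \approx \sqrt{2M}$ for large $M$, and the exact index (the Mills ratio) is roughly $c$, so in this toy case the upper bound is essentially attained.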

The proposed rare event sensitivity indices (7) are bounds for the gradient-based indices (1). They do not require a rare event sampler for each rare event $A$, as one readily sees in the definition (7) or (8), and they apply to the entire class of rare events, that is, to the probability level sets $\bar{\mathcal{A}}_M$ of the parametric model $P^\theta$. Intuitively, the optimization over $\alpha$ balances the rarity of the event, as quantified by $M$, against the cost to be paid in order to make the event less rare, as quantified by $H^\theta_v$. Note also that, due to the monotonicity of (7) in $M$, the rare event sensitivity indices actually characterize the sensitivity of the model for each family

  $\mathcal{A}_M := \{A : P^\theta(A) \ge e^{-M}\},$

i.e., all events which are less rare than the threshold $e^{-M}$. In this sense, the bounds (6) present similar computational advantages and trade-offs as other sensitivity bounds for typical observables (not rare event dependent), such as the Cramér-Rao information bounds, see (5). Namely, they are less accurate than the exact gradient-based indices (1), but they can be used to determine insensitive parameters and directions $v$, without requiring recalculation for different events $A$. We present a simple demonstration of such an insensitivity analysis based on the bounds (6) in the last example of Section 6.

Additionally, while in ordinary Cramér-Rao bounds the sensitivity of the parametric model is encoded in the Fisher information matrix (the covariance of the score function), for rare events the cumulant generating function of the score function plays a central role. Since the cumulant generating function controls rare events (as in Cramér's Theorem), our bounds show that the rare events associated with the score function control the sensitivity of all rare events. The question of the tightness of the sensitivity bounds will be addressed in [23], which discusses UQ and sensitivity analysis for more general risk-sensitive functionals.

For a practical implementation of the bound one can use concentration inequalities [24] in a similar manner to UQ bounds for regular observables [25], and obtain easily computable but less accurate bounds, see Section 5. As the bound involves the moment generating function of the score function, the rare event simulation techniques mentioned at the beginning of the introduction could also in principle be used to solve the optimization problem in the sensitivity indices. We plan to revisit this issue in a follow-up publication.

The paper is organized as follows. In Section 2 we begin with the study of an optimization problem which appears several times throughout the paper and then proceed with the definition and the properties of the Rényi divergence. In Section 3 we derive our main UQ bounds based on the Rényi divergence optimized over its parameter. We then derive in Section 4 information inequalities for the sensitivity indices. In Section 5 we discuss the practical implementation of the sensitivity indices for rare events, via concentration inequalities or via numerical estimation. We illustrate our results on several distributions from the exponential family in Section 6, and in Section 7, we present sensitivity bounds for large deviation rate functions.

## 2 Mathematical Preliminaries

### 2.1 An optimization problem

In order to obtain optimal UQ bounds we have to consider a certain optimization problem involving cumulant generating functions. We note that a bound function with similar structure has been derived and studied recently in [26, 27, 28], and we slightly generalize and reformulate those results in this section.

Let $\mathcal{X}$ be a Polish space, $\mathcal{B}(\mathcal{X})$ the associated Borel $\sigma$-algebra, and denote by $\mathcal{P}(\mathcal{X})$ the set of all probability measures on $\mathcal{X}$. Given a probability measure $P \in \mathcal{P}(\mathcal{X})$ and a measurable function $g$, consider the moment generating function $E_P[e^{\alpha g}]$ with $\alpha \in \mathbb{R}$. We will assume that $g$ is such that the moment generating function is finite in a neighborhood of the origin, and denote the space of such functions by $\mathcal{E}(P)$.

###### Definition 1.

A measurable function $g$ belongs to the set $\mathcal{E}(P)$ if and only if there exists $\alpha_0 > 0$ such that $E_P\big[e^{\alpha_0 |g|}\big] < \infty$.

If $g \in \mathcal{E}(P)$ then, as is well known, $g$ has finite moments of all orders; see also the discussion in Appendix A.

###### Definition 2.

Given $P \in \mathcal{P}(\mathcal{X})$ and $g \in \mathcal{E}(P)$, the cumulant generating function of $g$ is given by

  $H(\alpha) := \log E_P\big[e^{\alpha g}\big].$

A family of probability measures naturally associated to $P$ and $g$ is the exponential family $P_\alpha$ given by

  $\dfrac{dP_\alpha}{dP} = e^{\alpha g - H(\alpha)},$

which is well-defined if $H(\alpha) < \infty$.

In Appendix A, we summarize various useful properties of cumulant generating functions. These will be needed to study the following minimization problems, which arise in the definition of the sensitivity indices introduced in Section 1.

###### Proposition 3.

Let $P \in \mathcal{P}(\mathcal{X})$ and $g \in \mathcal{E}(P)$ with $g$ not constant $P$-a.s. Suppose $(d_-, d_+)$ is the largest open interval such that $H(\alpha) < \infty$ for all $\alpha \in (d_-, d_+)$.

1. For any $M > 0$ the optimization problems

  $\inf_{\alpha>0}\dfrac{H(\pm\alpha)+M}{\alpha}$

have unique minimizers $\alpha_\pm$. Let $M_\pm$ be defined by

  $M_\pm = \lim_{\alpha\nearrow \pm d_\pm}\big(\pm\alpha H'(\pm\alpha) - H(\pm\alpha)\big).$

Then the minimizers $\alpha_\pm$ are finite for $M < M_\pm$ and $\alpha_\pm = \pm d_\pm$ if $M \ge M_\pm$.

2. If $M < M_\pm$ then

  $\dfrac{H(\pm\alpha_\pm)+M}{\alpha_\pm} = \inf_{\alpha>0}\dfrac{H(\pm\alpha)+M}{\alpha} = \pm H'(\pm\alpha_\pm) = \pm E_{P_{\pm\alpha_\pm}}[g],$  (9)

where $\alpha_\pm$ is strictly increasing in $M$ and is determined by the equation

  $R(P_{\pm\alpha_\pm}\,\|\,P) = M.$  (10)
3. $M_\pm$ is finite in two distinct cases.

1. If $\pm d_\pm < +\infty$ (in which case $\pm g$ is unbounded above), $M_\pm$ is finite if $H(d_\pm)$ and $H'(d_\pm)$ are finite, and for $M \ge M_\pm$ we have

  $\inf_{\alpha>0}\dfrac{H(\pm\alpha)+M}{\alpha} = \dfrac{H(d_\pm)+M}{\pm d_\pm} = \pm E_{P_{d_\pm}}[g] + \dfrac{M - M_\pm}{\pm d_\pm}.$  (11)

2. If $\pm d_\pm = +\infty$ and $M_\pm$ is finite, then $\pm g$ is $P$-a.s. bounded above and for $M \ge M_\pm$ we have

  $\inf_{\alpha>0}\dfrac{H(\pm\alpha)+M}{\alpha} = \operatorname*{ess\,sup}_{x\in\mathcal{X}}\{\pm g(x)\}.$  (12)
###### Proof.

The proof of Proposition 3 can be found in Appendix A. A geometric depiction of the Proposition is shown in Figure 1(a). ∎

Unless the random variable $g$ is symmetric, in general the two optimization problems are not related to each other, since they involve the cumulant generating functions of $g$ and $-g$ respectively.

###### Remark 4.

An alternative characterization of the minimizers $\alpha_\pm$ uses the Legendre-Fenchel transform of $H$,

  $L(\delta) = \sup_{\alpha\in\mathbb{R}}\{\alpha\delta - H(\alpha)\},$

which is a convex function with a minimum equal to $0$ attained at $\delta = E_P[g]$. The optimality condition (10) for $\alpha_\pm$ can be written, equivalently, as

  $M = L(\pm\delta_\pm) \quad\text{where}\quad \delta_\pm = \pm H'(\pm\alpha_\pm).$

Thus the set of values of $M$ for which a finite minimizer exists corresponds to the possible level sets of $L$ (the rate function in Cramér's theorem [29]), provided $L$ is strictly convex.

If the function $g$ is centered, i.e., $E_P[g] = 0$, then as $M \to 0$ the minimizers satisfy $\alpha_\pm \to 0$, and we can expand the optimal value as a Taylor series in the variable $\sqrt{M}$, as the following proposition shows; it was proved in [28, Lemma 2.10] and will be useful for non-rare events.

###### Proposition 5 (Linearization).

If $g \in \mathcal{E}(P)$ with $E_P[g] = 0$ then we have

  $\inf_{\alpha>0}\dfrac{H(\pm\alpha)+M}{\alpha} = \sqrt{2\operatorname{Var}_P(g)\,M} + O(M).$
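Proposition 5 can be sanity-checked numerically on a non-Gaussian example of our own choosing: $g = X - 1$ with $X \sim \mathrm{Exp}(1)$, so $H(\alpha) = -\log(1-\alpha) - \alpha$ for $\alpha < 1$ and $\operatorname{Var}_P(g) = 1$.

```python
import math

# Numerical check of Proposition 5 (our own example): g = X - 1, X ~ Exp(1), so
# H(alpha) = -log(1 - alpha) - alpha for alpha < 1 and Var_P(g) = 1.  For small M
# the infimum should equal sqrt(2M) up to O(M) corrections.
def H(alpha):
    return -math.log(1.0 - alpha) - alpha

def inf_value(M, steps=200_000):
    h = 1.0 / (steps + 1)          # grid over (0, 1), where H is finite
    return min((H(h * k) + M) / (h * k) for k in range(1, steps + 1))

M = 1e-4
value = inf_value(M)
leading = math.sqrt(2.0 * M)       # sqrt(2 * Var_P(g) * M) = 0.01414...
```

Here the numerical value exceeds the leading term by an amount of order $M$, consistent with the proposition.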

### 2.2 Rényi divergence, relative entropy, and a variational principle

In this section, we discuss the concepts of Rényi divergence and associated variational representations and duality formulas. These tools provide the mathematical foundations for the uncertainty quantification and sensitivity analysis methods introduced in the subsequent sections.

Given $P, Q \in \mathcal{P}(\mathcal{X})$, we pick a reference measure $R$ such that $P \ll R$ and $Q \ll R$ (i.e., $P$ and $Q$ are absolutely continuous with respect to $R$). Denote by $p$ (resp. $q$) the Radon-Nikodym derivative of $P$ (resp. $Q$) with respect to the reference measure $R$. Then, for $\alpha \neq 0, 1$, the Rényi divergence of degree $\alpha$ of $Q$ with respect to $P$ is defined by (cf. [30, 21])

  $R_\alpha(Q\,\|\,P) := \begin{cases} \dfrac{1}{\alpha(\alpha-1)}\log\displaystyle\int_{pq>0}\Big(\dfrac{q}{p}\Big)^{\alpha}dP & \text{if } Q \ll P \text{ or } \alpha < 1, \\ +\infty & \text{otherwise}, \end{cases}$

and is independent of the choice of the reference measure $R$. Another common definition of Rényi divergence utilizes the factor $\frac{1}{\alpha-1}$ (cf. [31, 32, 33]) instead of $\frac{1}{\alpha(\alpha-1)}$. When $P$ and $Q$ are mutually absolutely continuous, the Rényi divergence can be written without reference to a reference measure as

  $R_\alpha(Q\,\|\,P) = \dfrac{1}{\alpha(\alpha-1)}\log E_P\Big[\Big(\dfrac{dQ}{dP}\Big)^{\alpha}\Big] = \dfrac{1}{\alpha(\alpha-1)}\log E_P\Big[\exp\Big\{\alpha\log\dfrac{dQ}{dP}\Big\}\Big],$  (13)

where the rightmost expression reveals that the Rényi divergence is proportional to the cumulant (or log-moment) generating function of the logarithm of the Radon-Nikodym derivative (i.e., of $\log\frac{dQ}{dP}$). The definition of the Rényi divergence is extended to $\alpha = 1$ by letting $R_1(Q\,\|\,P) := R(Q\,\|\,P)$, where $R(Q\,\|\,P)$ is the relative entropy (or Kullback-Leibler divergence) defined by

  $R(Q\,\|\,P) := \begin{cases} \displaystyle\int_{pq>0}\dfrac{q}{p}\log\dfrac{q}{p}\,dP & \text{if } Q \ll P, \\ +\infty & \text{otherwise}, \end{cases}$

and we have $\lim_{\alpha\to 1}R_\alpha(Q\,\|\,P) = R(Q\,\|\,P)$. In view of (13) we can also extend $R_\alpha$ to negative values of $\alpha$ if $P$ and $Q$ are mutually absolutely continuous. Due to our convention for the definition of $R_\alpha$ we have the symmetry $R_\alpha(Q\,\|\,P) = R_{1-\alpha}(P\,\|\,Q)$ and thus, in particular, $R_0(Q\,\|\,P) = R(P\,\|\,Q)$. Further properties of the Rényi divergence can be found, for example, in [34, 32, 33].
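As a concrete check of the convention in (13) (our own example, not from the paper): for $P = N(0,1)$ and $Q = N(\mu,1)$ one has $\log E_P[(dQ/dP)^\alpha] = \alpha(\alpha-1)\mu^2/2$, so with the $\frac{1}{\alpha(\alpha-1)}$ prefactor $R_\alpha(Q\,\|\,P) = \mu^2/2$ for every $\alpha \neq 0, 1$, coinciding with the relative entropy $R(Q\,\|\,P)$. Numerical integration of the definition confirms this.

```python
import math

# Our own check of definition (13): P = N(0,1), Q = N(mu,1).  Then
# log dQ/dP = mu*x - mu^2/2 and R_alpha(Q||P) should equal mu^2/2 for all alpha.
def renyi(alpha, mu, L=10.0, n=200_000):
    # midpoint-rule integration of E_P[(dQ/dP)^alpha] over [-L, L]
    h = 2.0 * L / n
    total = 0.0
    for k in range(n):
        x = -L + (k + 0.5) * h
        p = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)   # density of P
        log_ratio = mu * x - 0.5 * mu * mu                      # log dQ/dP
        total += p * math.exp(alpha * log_ratio) * h
    return math.log(total) / (alpha * (alpha - 1.0))

mu = 1.0
vals = [renyi(a, mu) for a in (0.5, 2.0, 3.0)]   # each should be ~ mu^2/2 = 0.5
```

For unequal variances the values $R_\alpha$ would genuinely depend on $\alpha$; the equal-variance case is special in that all degrees coincide.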

A variational formula in terms of Rényi divergence, recently derived in [26, Theorem 2.1], will play a central role in this paper.

###### Theorem 6 (Variational representation involving Rényi divergence).

Let $\beta, \gamma \in \mathbb{R}$ with $0 < \beta < \gamma$ and let $Q \in \mathcal{P}(\mathcal{X})$. For any bounded and measurable $f$ we have

  $\dfrac{1}{\beta}\log E_Q\big[e^{\beta f}\big] = \inf_{P\in\mathcal{P}(\mathcal{X})}\Big\{\dfrac{1}{\gamma}\log E_P\big[e^{\gamma f}\big] + \dfrac{1}{\gamma-\beta}R_{\frac{\gamma}{\gamma-\beta}}(Q\,\|\,P)\Big\},$  (14)

  $\dfrac{1}{\gamma}\log E_Q\big[e^{\gamma f}\big] = \sup_{P\in\mathcal{P}(\mathcal{X})}\Big\{\dfrac{1}{\beta}\log E_P\big[e^{\beta f}\big] - \dfrac{1}{\gamma-\beta}R_{\frac{\gamma}{\gamma-\beta}}(P\,\|\,Q)\Big\}.$  (15)
###### Remark 7.

The variational representation formula (15) is a generalization of the Donsker-Varadhan variational formula involving relative entropy (also known as the Gibbs variational principle) [35, 36]. Indeed, taking $\gamma = 1$ and letting $\beta \to 0$ in (15) we obtain the well-known formula

  $\log E_Q\big[e^{f}\big] = \sup_{P\in\mathcal{P}(\mathcal{X})}\big\{E_P[f] - R(P\,\|\,Q)\big\}.$  (16)

Note that the variational formula (16) serves as the basis of the UQ theory for typical events developed in [26, 28, 37, 25].
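The formula (16) is easy to verify on a small discrete example of our own: the supremum is attained at the Gibbs tilt $dP^*/dQ = e^f / E_Q[e^f]$.

```python
import math

# Small discrete check (our own example) of the Gibbs variational principle (16).
q = [0.5, 0.3, 0.2]          # a probability vector playing the role of Q
f = [0.1, -0.4, 1.3]         # a bounded observable

lhs = math.log(sum(qi * math.exp(fi) for qi, fi in zip(q, f)))

def objective(p):
    # E_P[f] - R(P||Q) for a probability vector p
    mean_f = sum(pi * fi for pi, fi in zip(p, f))
    rel_ent = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return mean_f - rel_ent

z = sum(qi * math.exp(fi) for qi, fi in zip(q, f))
p_star = [qi * math.exp(fi) / z for qi, fi in zip(q, f)]   # Gibbs tilt: the optimizer

# objective(p_star) equals lhs, while any other P (e.g. Q itself) gives less
```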

## 3 UQ Bounds for Rare Events

Let $P, Q \in \mathcal{P}(\mathcal{X})$ and let $g$ be a measurable function. It is convenient to think of $Q$ as the "true" probabilistic model and of $P$ as a "nominal" or "reference" model. By Theorem 6, equation (14), we have the upper bound

  $\dfrac{1}{\beta}\log E_Q\big[e^{\beta g}\big] \le \dfrac{1}{\gamma}\log E_P\big[e^{\gamma g}\big] + \dfrac{1}{\gamma-\beta}R_{\frac{\gamma}{\gamma-\beta}}(Q\,\|\,P),$  (17)

which constitutes an upper bound for risk-sensitive observables (i.e., observables for which tail events matter). The upper bound consists of two terms: an estimate of the risk-sensitive observable under the "nominal" model $P$, and the cost to be paid for the substitution of the "true" model $Q$, here quantified by the Rényi divergence.

By taking the limit $\beta \to 0$, it formally holds that

  $E_Q[g] \le \dfrac{1}{\gamma}\log E_P\big[e^{\gamma g}\big] + \dfrac{1}{\gamma}R(Q\,\|\,P),$  (18)

which is an upper bound for typical (i.e., not risk-sensitive) observables. This upper bound (18) was the starting point for deriving UQ and sensitivity bounds for typical observables in [26, 27, 28, 37, 25]. Here we derive the corresponding bounds for log-probabilities of rare events, which form a particular class of risk-sensitive observables. In other words, we are interested in bounding quantities of the form $\log Q(A) - \log P(A)$. The following theorem summarizes the UQ bounds for rare events.

###### Theorem 8.

(a) Fix $A \in \mathcal{B}(\mathcal{X})$ and let $P, Q \in \mathcal{P}(\mathcal{X})$ be such that $P(A) > 0$ and $Q(A) > 0$. Then

  $\sup_{\alpha>0}\Big\{-(\alpha+1)R_{\alpha+1}(P\,\|\,Q) + \dfrac{1}{\alpha}\log P(A)\Big\} \le \log Q(A) - \log P(A) \le \inf_{\alpha>0}\Big\{\alpha R_{\alpha+1}(Q\,\|\,P) - \dfrac{1}{\alpha+1}\log P(A)\Big\}.$  (19)

(b) If $P$ and $Q$ are mutually absolutely continuous, then (19) can be rewritten as

  $\sup_{\alpha>0}\Big\{-\dfrac{1}{\alpha}\log E_P\Big[\Big(\dfrac{dP}{dQ}\Big)^{\alpha}\Big] + \dfrac{1}{\alpha}\log P(A)\Big\} \le \log Q(A) - \log P(A) \le \inf_{\alpha>1}\Big\{\dfrac{1}{\alpha}\log E_P\Big[\Big(\dfrac{dQ}{dP}\Big)^{\alpha}\Big] - \dfrac{1}{\alpha}\log P(A)\Big\}.$  (20)
###### Proof.

(a) We first prove the upper bound. By setting $\beta = \alpha$ and $\gamma = \alpha + 1$ in (17), we get

  $\dfrac{1}{\alpha}\log E_Q\big[e^{\alpha g}\big] \le \dfrac{1}{\alpha+1}\log E_P\big[e^{(\alpha+1)g}\big] + R_{\alpha+1}(Q\,\|\,P).$

By considering $g = g_n$ with $g_n = 0$ on $A$, $g_n = -n$ on $A^c$, and then sending $n \to \infty$, the above inequality becomes

  $\dfrac{1}{\alpha}\log Q(A) \le \dfrac{1}{\alpha+1}\log P(A) + R_{\alpha+1}(Q\,\|\,P).$

Multiplying by $\alpha$ and subtracting $\log P(A)$, we have

  $\log Q(A) - \log P(A) \le \alpha R_{\alpha+1}(Q\,\|\,P) - \dfrac{1}{\alpha+1}\log P(A).$

Since this holds for all $\alpha > 0$, the upper bound in (19) is proved. For the lower bound, we reverse the roles of $P$ and $Q$ in (17) and proceed as in the upper bound.

(b) Substituting the Rényi formula (13) in (19) and using the identity $E_Q\big[(dP/dQ)^{\alpha+1}\big] = E_P\big[(dP/dQ)^{\alpha}\big]$, we get

  $\sup_{\alpha>0}\Big\{-\dfrac{1}{\alpha}\log E_P\Big[\Big(\dfrac{dP}{dQ}\Big)^{\alpha}\Big] + \dfrac{1}{\alpha}\log P(A)\Big\} \le \log Q(A) - \log P(A) \le \inf_{\alpha>1}\Big\{\dfrac{1}{\alpha}\log E_P\Big[\Big(\dfrac{dQ}{dP}\Big)^{\alpha}\Big] - \dfrac{1}{\alpha}\log P(A)\Big\},$

which is exactly (20). ∎

We turn next to determining the optimal $\alpha$ in (20), using the results from Section 2.1. We select the function $g = \log\frac{dQ}{dP}$, whose cumulant generating function is

  $H(\alpha) = \log E_P\Big[e^{\alpha\log\frac{dQ}{dP}}\Big].$  (21)

As is readily apparent from the bound (20), in order to obtain non-trivial upper and lower bounds we should assume that $H(\alpha)$ is finite in an open neighborhood of the interval $[0,1]$. If we assume this, then the function $H$ has the following elementary properties:

1. $H(0) = H(1) = 0$.

2. $H'(0) = -R(P\,\|\,Q)$, $H'(1) = R(Q\,\|\,P)$.

3. More generally, for any $\alpha$ we have

  $H'(\alpha) = E_{P_\alpha}\Big[\log\dfrac{dQ}{dP}\Big] = R(P_\alpha\,\|\,P) - R(P_\alpha\,\|\,Q),$  (22)

where $P_\alpha$ is the exponential family given by

  $\dfrac{dP_\alpha}{dR} = \dfrac{q^{\alpha}p^{1-\alpha}}{\int q^{\alpha}p^{1-\alpha}\,dR}.$  (23)

The family $P_\alpha$ interpolates between $P$ and $Q$ since $P_0 = P$ and $P_1 = Q$.
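Identity (22) and the interpolation (23) can be verified in closed form on a Gaussian example of our own: for $P = N(0,1)$ and $Q = N(\mu,1)$ one finds $q^\alpha p^{1-\alpha} \propto e^{-(x-\alpha\mu)^2/2}$, i.e., $P_\alpha = N(\alpha\mu, 1)$, with $H(\alpha) = \alpha(\alpha-1)\mu^2/2$; the relative entropy between $N(a,1)$ and $N(b,1)$ is $(a-b)^2/2$.

```python
import math

# Our own closed-form check of (22)-(23): P = N(0,1), Q = N(mu,1), so
# P_alpha = N(alpha*mu, 1) and H(alpha) = alpha (alpha - 1) mu^2 / 2.
mu = 1.5

def H(alpha):
    return alpha * (alpha - 1.0) * mu * mu / 2.0

def kl(a, b):
    # relative entropy R(N(a,1) || N(b,1))
    return (a - b) ** 2 / 2.0

gaps = []
for alpha in (-0.5, 0.3, 1.7):
    h_prime = (H(alpha + 1e-6) - H(alpha - 1e-6)) / 2e-6     # numerical H'(alpha)
    rhs = kl(alpha * mu, 0.0) - kl(alpha * mu, mu)           # R(P_a||P) - R(P_a||Q)
    gaps.append(abs(h_prime - rhs))
# gaps are all (numerically) zero, confirming identity (22) for this example
```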

Using these properties, as well as Proposition 3, we obtain the following explicit UQ bounds for all rare events $A$ such that $P(A) = e^{-M}$.

###### Theorem 9 (UQ bounds for rare events).

Let $P, Q$ be mutually absolutely continuous and assume that $H$ given in (21) is finite for $\alpha$ in a neighborhood of $[0,1]$. Let $M_\pm$, $d_\pm$ be the constants given in Proposition 3 (with $g = \log\frac{dQ}{dP}$). For any $A$ with $P(A) = e^{-M}$ and $M < M_\pm$ we have

  $-R(P_{-\alpha_-}\,\|\,Q) \le \log Q(A) \le \begin{cases} 0 & \text{if } M < R(Q\,\|\,P), \\ -R(P_{\alpha_+}\,\|\,Q) & \text{if } M \ge R(Q\,\|\,P), \end{cases}$  (24)

where $\alpha_\pm$ are the (unique) solutions of

  $R(P_{\pm\alpha_\pm}\,\|\,P) = M = -\log P(A).$  (25)
###### Proof.

With $H$ given in (21) and setting $M = -\log P(A)$, part (b) of Theorem 8 is rewritten as

  $-\inf_{\alpha>0}\Big\{\dfrac{H(-\alpha)+M}{\alpha}\Big\} \le \log Q(A) + M \le \inf_{\alpha>1}\Big\{\dfrac{H(\alpha)+M}{\alpha}\Big\}.$  (26)

The lower bound is then an immediate consequence of Proposition 3 together with (22) and (25).

However, the upper bound involves a modified calculation since the infimum is taken only over $\alpha > 1$. We first note that the corresponding minimization of the quantity

  $B(\alpha; M) = \dfrac{H(\alpha)+M}{\alpha},$  (27)

arising also in the proof of Proposition 3, e.g. (62), separates into two distinct cases. Indeed, for any $\alpha > 1$ we have

  $B'(\alpha; M) = \dfrac{\alpha H'(\alpha) - H(\alpha) - M}{\alpha^2} > \dfrac{1\cdot H'(1) - H(1) - M}{\alpha^2} = \dfrac{R(Q\,\|\,P) - M}{\alpha^2},$  (28)

using Properties 1-3 above for $H$ and the fact that $\alpha H'(\alpha) - H(\alpha)$ is a strictly increasing function of $\alpha$.

First, (28) implies that if

  $R(Q\,\|\,P) > M = -\log P(A),$  (29)

then $B(\alpha; M)$ is strictly increasing on $(1,\infty)$, hence the infimum in the upper bound of (26) occurs at $\alpha = 1$. In this case, using that $H(1) = 0$, we obtain that

  $\inf_{\alpha>1}\Big\{\dfrac{H(\alpha)+M}{\alpha}\Big\} = M.$  (30)

On the other hand, if we have

  $R(Q\,\|\,P) \le M = -\log P(A),$  (31)

then $B(\alpha; M)$ has a unique minimum on $[1,\infty)$, and the minimizer (equivalently, the minimizer of the upper bound of (26)) occurs at the unique finite root $\alpha_+ \ge 1$ of (25), namely

  $\inf_{\alpha>1}\Big\{\dfrac{H(\alpha)+M}{\alpha}\Big\} = H'(\alpha_+) = M - R(P_{\alpha_+}\,\|\,Q).$  (32)

We now combine (30) and (32) with the upper bound of (26) to obtain (24).

Note that the upper bound in (24) is not discontinuous in $M$, since for $M = R(Q\,\|\,P)$ we have $\alpha_+ = 1$, and by Property 3 we have $P_1 = Q$; thus $-R(P_{\alpha_+}\,\|\,Q) = 0$. ∎

###### Remark 10.

We obtain the trivial upper bound $\log Q(A) \le 0$ in Theorem 9 when the true measure $Q$ is (relatively) too far from the reference model $P$. This relative "distance" (see (29) and (31)) is quantified by the ratio $R(Q\,\|\,P)/M$, which is required to be less than $1$ to obtain an informative upper bound in (24).

###### Remark 11.

We can interpret the condition (25) for $\alpha_\pm$ by noting that the measure $P|_A$, i.e., $P$ conditioned on the rare event $A$, satisfies $R(P|_A\,\|\,P) = -\log P(A) = M$. Theorem 9 states that one should find the proper mixtures of $P$ and $Q$ (as described by $P_{\pm\alpha_\pm}$) so that $R(P_{\pm\alpha_\pm}\,\|\,P) = R(P|_A\,\|\,P)$. The bounds for $\log Q(A)$ are then simply $-R(P_{\pm\alpha_\pm}\,\|\,Q)$.

Finally, we note that a much cruder UQ bound than (19) can be obtained by considering the upper and lower bounds that result from taking $\alpha \to \infty$ in (19). Indeed, we can consider the alternative definition of the Rényi divergence [33]

  $D_\alpha(Q\,\|\,P) = \alpha R_\alpha(Q\,\|\,P),$

and accordingly rewrite (19) in terms of $D_\alpha$. Noting that

  $D_\infty(Q\,\|\,P) = \sup_{x\in\mathcal{X}}\log\dfrac{dQ}{dP}(x),$

also referred to as the worst-case regret [33], we can bound (19) from above and below by letting $\alpha \to \infty$. This substitution obviously yields a less sharp version of (19), namely the (trivial) bound

  $\inf_{x\in\mathcal{X}}\log\dfrac{dQ}{dP}(x) = -D_\infty(P\,\|\,Q) \le \log Q(A) - \log P(A) \le D_\infty(Q\,\|\,P) = \sup_{x\in\mathcal{X}}\log\dfrac{dQ}{dP}(x).$  (33)

Note that this bound is valid as long as $\log\frac{dQ}{dP}$ is bounded from above/below.

## 4 Sensitivity Indices for Rare Events

In this section we consider a parametric family of probability measures $P^\theta$ with $\theta \in \Theta \subset \mathbb{R}^K$, and we assume $P^\theta \ll R$, where $R$ is a reference measure in $\mathcal{P}(\mathcal{X})$, with density $p^\theta = dP^\theta/dR$. We further assume that the mapping $\theta \mapsto p^\theta(x)$ is twice differentiable with respect to $\theta$ for all $x$, together with a suitable integrability condition on $p^\theta$ to allow the interchange of integrals and derivatives. The sensitivity index for a rare event $A$ (with $P^\theta(A) > 0$) in the direction $v \in \mathbb{R}^K$ is then given by

  $S^\theta_v(A) := \lim_{\epsilon\to 0}\dfrac{\log P^{\theta+\epsilon v}(A) - \log P^\theta(A)}{\epsilon} = E_{P^\theta|_A}\big[v^T W^\theta\big] = \dfrac{E_{P^\theta}\big[\chi_A v^T W^\theta\big]}{P^\theta(A)},$  (34)

where $W^\theta = \nabla_\theta \log p^\theta$ is the score function of the probability measure $P^\theta$; see also (4).

Here, we derive computationally tractable bounds on the sensitivity indices $S^\theta_v(A)$ and corresponding new rare event sensitivity indices $I^\theta_{v,\pm}(M)$, starting from the UQ bounds presented in Section 3.

By considering the measures $P = P^\theta$ and $Q = P^{\theta+\epsilon v}$ we have that $\log\frac{dP^{\theta+\epsilon v}}{dP^\theta} = O(\epsilon)$, and thus it is natural to rescale the parameter $\alpha$ according to

  $\alpha = \dfrac{\alpha_0}{\epsilon}.$  (35)

Taking $\epsilon \to 0$ we obtain a non-trivial bound for the sensitivity indices, as the following theorem shows. In order to state our results we require the cumulant generating function of the score function defined in (3):

  $H^\theta_v(\alpha) = \log E_{P^\theta}\big[e^{\alpha v^T W^\theta}\big].$

This cumulant generating function has the following elementary properties:

1. $H^\theta_v(0) = 0$ and $(H^\theta_v)'(0) = E_{P^\theta}\big[v^T W^\theta\big] = 0$.

2. $(H^\theta_v)''(0) = v^T F(P^\theta)v$, where $F(P^\theta)$ denotes the Fisher information matrix for the parametric family $P^\theta$.

3. More generally we have

  $(H^\theta_v)'(\alpha) = E_{P^\theta_\alpha}\big[v^T W^\theta\big],$

where $P^\theta_\alpha$ is the exponential family with

  $\dfrac{dP^\theta_\alpha}{dP^\theta} = e^{\alpha v^T W^\theta - H^\theta_v(\alpha)}.$
###### Theorem 12.

Assume that the mapping $\theta \mapsto \log p^\theta(x)$ is $C^2$ for all $x$, and that for each $\alpha_0 > 0$ there exists $\delta > 0$ such that

  $E_{P^\theta}\Big[e^{\alpha_0 v^T\nabla_\theta\log p^\theta + \frac{\alpha_0\delta}{2}\sup_{|\theta-\theta'|<\delta}|v^T\nabla^2_\theta\log p^{\theta'}v|}\Big] < \infty.$

Then

  $-\inf_{\alpha>0}\Big\{\dfrac{H^\theta_v(-\alpha) - \log P^\theta(A)}{\alpha}\Big\} \le S^\theta_v(A) \le \inf_{\alpha>0}\Big\{\dfrac{H^\theta_v(\alpha) - \log P^\theta(A)}{\alpha}\Big\}.$  (36)
###### Proof.

Rewriting (20) for $P = P^\theta$ and $Q = P^{\theta+\epsilon v}$ and substituting $\alpha = \alpha_0/\epsilon$ from (35), we get the upper bound

  $\log P^{\theta+\epsilon v}(A) - \log P^\theta(A) \le \dfrac{\log E_{P^\theta}\Big[e^{\frac{\alpha_0}{\epsilon}(\log p^{\theta+\epsilon v} - \log p^\theta)}\Big] - \log P^\theta(A)}{\alpha_0/\epsilon},$

valid for all $\epsilon$ small enough that $\alpha_0/\epsilon > 1$. Dividing by $\epsilon$, sending $\epsilon \to 0$, and then taking the infimum over $\alpha_0 > 0$, we get

  $\lim_{\epsilon\to 0}\dfrac{1}{\epsilon}\big(\log P^{\theta+\epsilon v}(A) - \log P^\theta(A)\big) \le \inf_{\alpha_0>0}\Bigg\{\dfrac{\lim_{\epsilon\to 0}\log E_{P^\theta}\Big[e^{\frac{\alpha_0}{\epsilon}(\log p^{\theta+\epsilon v} - \log p^\theta)}\Big] - \log P^\theta(A)}{\alpha_0}\Bigg\}.$

In order to complete the proof of the upper bound we have to interchange the limit and the integral. This is justified by the dominated convergence theorem since, by Taylor's theorem,

  $e^{\frac{\alpha_0}{\epsilon}(\log p^{\theta+\epsilon v} - \log p^\theta)} = e^{\alpha_0\big(v^T\nabla_\theta\log p^\theta + \frac{\epsilon}{2}v^T\nabla^2_\theta\log p^{\theta'}v\big)} \le e^{\alpha_0\big(v^T\nabla_\theta\log p^\theta + \frac{\delta}{2}\sup_{|\theta-\theta'|<\delta}|v^T\nabla^2_\theta\log p^{\theta'}v|\big)}$

for some $\theta'$ in the interval defined by the points $\theta$ and $\theta + \epsilon v$, and for $\epsilon$ small enough. Therefore,

  $\lim_{\epsilon\to 0}\log E_{P^\theta}\Big[e^{\frac{\alpha_0}{\epsilon}(\log p^{\theta+\epsilon v} - \log p^\theta)}\Big] = \log E_{P^\theta}\big[e^{\alpha_0 v^T W^\theta}\big] = H^\theta_v(\alpha_0),$

which establishes the upper bound in (36). The lower bound in (36) is proved similarly. ∎

Using (9) and (10) from Proposition 3 to evaluate the infimum in (36), we obtain a representation of the bounds.

###### Theorem 13 (Sensitivity indices for rare events).

Under the same assumptions as in Theorem 12, consider the family

  $\mathcal{A}_M = \{A : P^\theta(A) \ge e^{-M}\}$

of all events which are less rare than a specified threshold $e^{-M}$. Then there exist $M_\pm > 0$ such that for $M < M_\pm$ and any $A \in \mathcal{A}_M$ we have

  $I^\theta_{v,-}(M) \le S^\theta_v(A) \le I^\theta_{v,+}(M),$  (37)

where

  $I^\theta_{v,\pm}(M) := \pm\inf_{\alpha>0}\Big\{\dfrac{H^\theta_v(\pm\alpha)+M}{\alpha}\Big\} = E_{P^\theta_{\pm\alpha_\pm}}\big[v^T W^\theta\big]$  (38)

and $\alpha_\pm$ are determined by

  $R(P^\theta_{\pm\alpha_\pm}\,\|\,P^\theta) = M.$

Similarly to Theorem 13, the rare event sensitivity indices $I^\theta_{v,\pm}(M)$ also characterize the sensitivity of the model for each level set $\bar{\mathcal{A}}_M$, i.e., corresponding to all events which are equally rare, as characterized by $P^\theta(A) = e^{-M}$.

The new sensitivity indices defined in (38) are in general less sharp than the gradient-based indices in (34), due to the inequalities (37). However, they do not require a rare event sampler for each rare event $A$, as one readily sees by comparing (4) to (38). In fact, the indices are identical for the entire classes of rare events in $\mathcal{A}_M$ or in $\bar{\mathcal{A}}_M$. In this sense, they present similar computational advantages and trade-offs as other sensitivity indices for typical observables (not rare event-dependent), such as the Fisher information bounds [28]; in particular, they are less sharp but can be used to efficiently screen out insensitive parameters and directions $v$ in parameter space, i.e., directions where $I^\theta_{v,\pm}(M) \approx 0$. We refer to [38] for such sensitivity screening results, at least for typical events and observables.

## 5 Bounds and Approximations for the Rare Event Sensitivity Indices

The upper and lower bounds in Theorem 12 and the representation of the sensitivity indices in Theorem 13 suggest at least two approaches to practically implement the indices $I^\theta_{v,\pm}(M)$. The first one is based on concentration inequalities, while the second one relies on the direct statistical and numerical estimation of the indices. We next discuss the first approach.

Concentration Inequalities. Concentration inequalities, i.e., explicit bounds of the probability of tail events, are often obtained via a Chernoff bound by using computable upper bounds on the cumulant generating function of the random variable. Such upper bounds typically involve only a few features of the underlying random variable such as mean, variance, bounds, higher moments, and so on, see for instance [24].

Here we can naturally use such inequalities to provide simplified and computable bounds for the variational formula for the sensitivity index, namely

  $I^\theta_{v,\pm}(M) := \pm\inf_{\alpha>0}\Big\{\dfrac{H^\theta_v(\pm\alpha)+M}{\alpha}\Big\},$

by bounding the cumulant generating function $H^\theta_v$ of the score function. We provide two such examples, under the assumption that the score function is bounded. One can prove similar results in the same spirit by using different assumptions on the tail behavior of the score function, e.g., if we assume that the score is a sub-Gaussian or a sub-Poissonian random variable [24].

A similar use of various concentration inequalities in order to obtain computable uncertainty quantification bounds for ordinary observables was proposed recently in [25].

###### Theorem 14 (Bernstein sensitivity bounds for rare events).

We consider the same assumptions as in Theorem 12; we further assume that

  $\sup_{x\in\mathcal{X}} v^T W^\theta(x) \le b_v$

for some $b_v > 0$. Furthermore, let

 F(Pθ)=EP