
# Bounding the Probability of Causation in Mediation Analysis

A. P. Dawid University of Cambridge apd@statslab.cam.ac.uk    R. Murtas University of Cagliari ro.murtas@gmail.it    M. Musio University of Cagliari mmusio@unica.it
###### Abstract

Given empirical evidence for the dependence of an outcome variable on an exposure variable, we can typically only provide bounds for the “probability of causation” in the case of an individual who has developed the outcome after being exposed. We show how these bounds can be adapted or improved if further information becomes available. In addition to reviewing existing work on this topic, we provide a new analysis for the case where a mediating variable can be observed. In particular we show how the probability of causation can be bounded when there is no direct effect and no confounding.

Keywords: Causal inference, Mediation Analysis, Probability of Causation

## 1 Introduction

Many statistical analyses aim at a causal explanation of the data. In particular, in epidemiology many studies are conducted to try to understand if and when an exposure will cause a particular disease. Also in a Court of Law, when we want to assess legal responsibility we usually refer to causality. But when discussing this topic it is important to specify the exact query we want to address. For example it may be claimed in court that it was Ann’s taking the drug that was the cause of her death. This type of question relates to the cause of an observed effect (“CoE”) and is fundamental to the allocation of responsibility. On the other hand much of classical statistical design and analysis, for example randomized agricultural or medical experiments, has been created to address questions about the effects of applied causes (“EoC”). When we address an EoC query, we are typically asking a hypothetical question: “What would happen to Ann if she were to take the drug?”. At the same time we can address the alternative hypothetical question: “What would happen to Ann if she were not to take the drug?”.

Assessing the effects of causes can be achieved in straightforward fashion using a framework based on probabilistic prediction and statistical decision theory [2]. To formalize the problem, let X be a binary decision variable denoting whether or not Ann takes the drug, and Y the response, coded as Y = 1 if she dies and Y = 0 if not. We denote by P_1 [resp., P_0] the probability distribution of Y ensuing when X is set to the value 1 [resp., 0]. The two distributions P_1 and P_0 are all that is needed to address EoC-type queries: I can compare these two different hypothetical distributions for Y, decide which one I prefer, and take the associated decision.

The situation is different for a CoE query, where the drug has already been taken and the outcome observed. A natural way to address a CoE question is to try to imagine what would have happened to Ann had she not taken the drug. In other words, given the fact that Ann actually took the drug and died, how likely is it that she would not have died if she had not taken the drug? We can not address a CoE query using only the two distributions P_1 and P_0. Indeed, we can no longer base our approach purely on the probability distributions of X and Y conditioned on known facts: we know the values of both variables (X = 1, Y = 1), and after conditioning on that knowledge there is no probabilistic uncertainty left to work with. Nevertheless we want an answer. This query can be approached by introducing (for any individual) an associated pair of “potential responses” (Y(0), Y(1)), where Y(x) denotes the value of the response Y that will be realized when the exposure X is set to the value x (which we write as X ← x). Both potential responses are regarded as existing, simultaneously, prior to the choice of X, the actual response then being determined as Y = Y(X). However, for each individual just one of the potential responses will be observable. For example, only Y(1) will be observable if in fact X = 1; Y(0) will then be counterfactual, because it relates to a situation, X = 0, which is contrary to the known fact X = 1.

To address the court’s query we use the formulation of the Probability of Causation, PC, as given by Pearl in [5] (where it is named Probability of Necessity). In terms of the triple (X_A, Y_A(0), Y_A(1)), we define the Probability of Causation in Ann’s case as:

 PC_A = P_A(Y_A(0) = 0 ∣ X_A = 1, Y_A(1) = 1)   (1)

where P_A denotes the probability distribution over attributes of Ann. Knowing that Ann did take the drug (X_A = 1) and the actual response was death (Y_A(1) = 1), this is the probability that the potential response Y_A(0), that would have been observed had Ann not taken the drug, would have been different (Y_A(0) = 0). But how are we to get a purchase on this quantity?

Suppose that a good experimental study tested the same drug taken by Ann, and produced the data reported in Table 1.

Since our analysis here is not concerned with purely statistical variation due to small sample sizes, we take proportions computed from this table as accurate estimates of the corresponding population probabilities (see [3] for issues related to the use of small-sample data for making causal inferences). Thus we take

 Pr(Y = 1 ∣ X ← 1) = 0.30
 Pr(Y = 1 ∣ X ← 0) = 0.12

where we use Pr to denote population probabilities.

We see that, in the experimental population, individuals exposed to the drug (X = 1) were more likely to die than those unexposed (X = 0), by 18 percentage points. So can the court infer that it was Ann’s taking the drug that caused her death? More generally: Is it correct to use such experimental results, concerning a population, to say something about a single individual? This “Group-to-individual” (G2i) issue is discussed by Dawid et al. [6] in relation to the question “When can Science be relied upon to answer factual disputes in litigation?”. It is there pointed out that in general we can not obtain a point estimate for PC_A; but we can provide useful information, in the form of bounds between which this quantity must lie.

In this paper we show how these bounds can be adapted or improved when further information is available. In §2 we consider the basic situation where we have information only on exposure and outcome. In §3 we bound the probability of causation when we have additional information on a pre-treatment covariate. Section 4 considers the situation in which unobserved variables confound the exposure-outcome relationship. Finally in §5 we introduce new bounds for PC when a mediating variable can be observed. Section 6 presents some concluding comments.

## 2 Starting Point: Simple Analysis

In this section we discuss the simple situation in which we have information, as in Table 1, from a randomized experimental study. We need to assume that the fact of Ann’s exposure, X_A, is independent of her pair of potential responses Y_A := (Y_A(0), Y_A(1)):

 X_A ⊥⊥ Y_A.   (2)

Property (2) parallels the “no-confounding” property which holds for individuals in the experimental study on account of randomization. We further suppose that Ann is exchangeable with the individuals in the experiment, i.e. she could be considered as a subject in the experimental population.

On account of (2) and exchangeability, (1) reduces to P_A(Y_A(0) = 0 ∣ Y_A(1) = 1); but we can not fully identify this from the data. In fact we can never observe the joint event (Y(0) = 0, Y(1) = 1), since at least one of Y(0), Y(1) must be counterfactual. In particular, we can never learn anything about the dependence between Y(0) and Y(1). However, even without making any assumptions about this dependence, we can derive the following inequalities [3]:

 1 − 1/RR ≤ PC_A ≤ Pr(Y = 0 ∣ X ← 0) / Pr(Y = 1 ∣ X ← 1)   (3)

where

 RR = Pr(Y = 1 ∣ X ← 1) / Pr(Y = 1 ∣ X ← 0)

is the experimental risk ratio between exposed and unexposed. And these bounds can be estimated from the experimental data using the population death rates computed in §1.
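As a quick check, the bounds in (3) can be evaluated directly from the estimated rates of §1 (a minimal Python sketch; the variable names are ours, not from the paper):

```python
# Bounds (3) on the probability of causation, from the experimental rates.
p1 = 0.30  # Pr(Y = 1 | X <- 1)
p0 = 0.12  # Pr(Y = 1 | X <- 0)

rr = p1 / p0                        # experimental risk ratio, RR = 2.5
lower = max(0.0, 1.0 - 1.0 / rr)    # 1 - 1/RR = 0.6
upper = min(1.0, (1.0 - p0) / p1)   # Pr(Y=0 | X<-0)/Pr(Y=1 | X<-1), capped at 1

print(f"{lower:.2f} <= PC_A <= {upper:.2f}")
```

Here the raw upper bound 0.88/0.30 ≈ 2.93 exceeds 1 and so is vacuous, while the lower bound 0.6 is informative.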

In many cases of interest (such as Table 1), we will have

 Pr(Y = 1 ∣ X ← 0) < Pr(Y = 1 ∣ X ← 1) < Pr(Y = 0 ∣ X ← 0).

Then the lower bound in (3) will be non-trivial, while the upper bound will exceed 1, and hence be vacuous.

We see from (3) that whenever RR > 2 the Probability of Causation will exceed 50%. In a civil court this is often taken as the criterion to assess legal responsibility “on the balance of probabilities” (although the converse is false: it would not be correct to infer PC_A ≤ 50% from the finding RR < 2). Since, in Table 1, the exposed are 2.5 times as likely to die as the unexposed (RR = 0.30/0.12 = 2.5 > 2), we have enough confidence to infer causality in Ann’s case: We have PC_A ≥ 0.6.

## 3 Covariate Information

In this Section we show how we can refine the bounds of (3) if further information about a pre-treatment covariate S is available. We now take the assumptions of §2 to hold after conditioning on S (indeed, in cases where the original assumptions fail, it may well be possible to reinstate them by conditioning on a suitable covariate S). In particular, X_A ⊥⊥ Y_A ∣ S_A, and X ⊥⊥ (Y(0), Y(1)) ∣ S in the study data: adjusting for S is enough to control for confounding, both for Ann and in the study.

### 3.1 S fully observable

Consider first the situation where we can observe S both in the experimental data and for Ann. We can apply the analysis of §2, after conditioning on S, to obtain the estimable lower bound:

 1 − 1/RR(s_A) ≤ PC_A   (4)

where

 RR(s) = Pr(Y = 1 ∣ X ← 1, S = s) / Pr(Y = 1 ∣ X ← 0, S = s),

and s_A is Ann’s value for S.

### 3.2 S observable in data only

But even when we can only observe S in the population, and not for Ann, we can sometimes refine the bounds in (3). Thus suppose S is binary, and from the data we infer the following probabilities (which are consistent with the data of Table 1):

 Pr(S = 1) = 0.5   (5)
 Pr(Y = 1 ∣ X ← 1, S = 1) = 0.6   (6)
 Pr(Y = 1 ∣ X ← 0, S = 1) = 0   (7)
 Pr(Y = 1 ∣ X ← 1, S = 0) = 0   (8)
 Pr(Y = 1 ∣ X ← 0, S = 0) = 0.24   (9)

Since we know Y_A(1) = 1, from (8) we deduce S_A = 1, and so Y_A(0) = 0 by (7). That is, PC_A = 1: in this special case we can infer causation in Ann’s case—even though we have not directly observed her value for S.

More generally (see [1]) we can refine the bounds in (3) as follows:

 Δ/Pr(Y = 1 ∣ X ← 1) ≤ PC_A ≤ 1 − Γ/Pr(Y = 1 ∣ X ← 1)   (10)

where

 Δ = ∑_s Pr(S = s) × max{0, Pr(Y = 1 ∣ X ← 1, S = s) − Pr(Y = 1 ∣ X ← 0, S = s)}
 Γ = ∑_s Pr(S = s) × max{0, Pr(Y = 1 ∣ X ← 1, S = s) − Pr(Y = 0 ∣ X ← 0, S = s)}

These bounds are never wider than those obtained from (3), which ignores S.
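To illustrate, the following sketch (our own code, not from the paper; variable names are ours) evaluates (10) on the probabilities (5)–(9):

```python
# Covariate-adjusted bounds (10), using the probabilities (5)-(9).
pr_s = {1: 0.5, 0: 0.5}    # Pr(S = s)
p1_s = {1: 0.6, 0: 0.0}    # Pr(Y = 1 | X <- 1, S = s)
p0_s = {1: 0.0, 0: 0.24}   # Pr(Y = 1 | X <- 0, S = s)

# Marginal death rate under exposure: Pr(Y = 1 | X <- 1) = 0.3
p1 = sum(pr_s[s] * p1_s[s] for s in (0, 1))

delta = sum(pr_s[s] * max(0.0, p1_s[s] - p0_s[s]) for s in (0, 1))
gamma = sum(pr_s[s] * max(0.0, p1_s[s] - (1.0 - p0_s[s])) for s in (0, 1))

lower = delta / p1        # = 1.0
upper = 1.0 - gamma / p1  # = 1.0
```

Both bounds equal 1, recovering the conclusion above that causation is certain in this special case.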

## 4 Unobserved Confounding

So far we have assumed no confounding: X_A ⊥⊥ Y_A (perhaps conditionally on a suitable covariate S), both for Ann and for the study data. Now we drop this assumption for Ann. Then the experimental data can not be used, by themselves, to learn about PC_A.

We might however be able to gather additional observational data, having the same dependence between X and Y as for Ann. Let Q denote the joint observational distribution of (X, Y), estimable from such data. Tian and Pearl [4] obtain the following bounds for PC_A, given both experimental and nonexperimental data:

 max{0, [Q(Y = 1) − Pr(Y = 1 ∣ X ← 0)] / Q(X = 1, Y = 1)} ≤ PC_A ≤ min{1, [Pr(Y = 0 ∣ X ← 0) − Q(X = 0, Y = 0)] / Q(X = 1, Y = 1)}.   (11)

For example, suppose that, in addition to the data of Table 1, we have observational data as in Table 2.

Thus

 Q(Y = 1) = 0.21
 Q(X = 1, Y = 1) = 0.09
 Q(X = 0, Y = 0) = 0.38.

Also, from Table 1 we have Pr(Y = 1 ∣ X ← 0) = 0.12 (so Pr(Y = 0 ∣ X ← 0) = 0.88). From (11) we thus find PC_A = 1. We deduce that Ann would definitely have survived had she not taken the drug.
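The arithmetic behind this conclusion can be sketched as follows (our notation, not the authors’):

```python
# Tian-Pearl bounds (11), combining Table 1 with the observational data of Table 2.
p_y1_x0 = 0.12   # Pr(Y = 1 | X <- 0), experimental
q_y1    = 0.21   # Q(Y = 1)
q_x1y1  = 0.09   # Q(X = 1, Y = 1)
q_x0y0  = 0.38   # Q(X = 0, Y = 0)

lower = max(0.0, (q_y1 - p_y1_x0) / q_x1y1)            # (0.21 - 0.12)/0.09 = 1
upper = min(1.0, ((1.0 - p_y1_x0) - q_x0y0) / q_x1y1)  # min{1, 0.50/0.09} = 1
```

The two bounds coincide at 1, so here PC_A is point-identified.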

## 5 Mediation Analysis

In this Section we bound the Probability of Causation for a case where a third variable, M, is involved in the causal pathway between the exposure X and the outcome Y. Such a variable is called a mediator. In general, the total causal effect of X on Y can be split into two different effects: One mediated by M (the indirect effect) and one not so mediated (the direct effect). Here we shall only consider the case of no direct effect, as intuitively described by Figure 1. We shall be interested in the case that M is observable in the experimental data, but is not observed for Ann, and see how this additional experimental evidence can be used to refine the bounds on PC_A.

To formalize our assumption of “no direct effect”, we introduce M(x), the potential value of M for X ← x, and Y(m), the potential value of Y for M ← m and X ← x, where x is any value—the irrelevance of that value representing the property that X has no effect on Y over and above that transmitted through its influence on the mediator M. The potential value of Y for X ← x (in cases where there is no intervention on M, which we here assume) is then Y*(x) := Y(M(x)).

In the sequel we restrict to the case that all variables are binary, and define M := M(X), Y := Y(M) = Y*(X). In particular, we have observable variables (X, M, Y). We denote the bivariate distributions of the potential response pairs by

 m_{ab} := Pr(M(0) = a, M(1) = b)
 y_{rs} := Pr(Y(0) = r, Y(1) = s)
 y*_{rs} := Pr(Y*(0) = r, Y*(1) = s).

Then

 m_{a+} = Pr(M = a ∣ X ← 0)   m_{+b} = Pr(M = b ∣ X ← 1)
 y_{r+} = Pr(Y = r ∣ M ← 0)   y_{+s} = Pr(Y = s ∣ M ← 1)
 y*_{r+} = Pr(Y = r ∣ X ← 0)   y*_{+s} = Pr(Y = s ∣ X ← 1)

where + denotes summation over the corresponding index: thus m_{a+} = m_{a0} + m_{a1}, etc.

In addition to the assumptions of §2 we further suppose that none of the causal mechanisms depicted in Figure 1 is confounded—expressed mathematically by assuming mutual independence between X, (M(0), M(1)) and (Y(0), Y(1)) (both for experimental individuals, and for Ann). Then m_{a+}, m_{+b}, y_{r+}, y_{+s}, y*_{r+} and y*_{+s} are all estimable from experimental data in which X is randomized, and M and Y are observed.

It is also then easy to show the Markov property:

 Y⊥⊥X∣M.

This observable property can serve as a test of the validity of our conditions. It implies

 y*_{00} = m_{00} y_{0+} + (m_{01} + m_{10}) y_{00} + m_{11} y_{+0}   (12)
 y*_{01} = m_{01} y_{01} + m_{10} y_{10}   (13)
 y*_{10} = m_{01} y_{10} + m_{10} y_{01}   (14)
 y*_{11} = m_{00} y_{1+} + (m_{01} + m_{10}) y_{11} + m_{11} y_{+1},   (15)

and

 y*_{r+} = m_{0+} y_{r+} + m_{1+} y_{+r}   (16)
 y*_{+s} = m_{+0} y_{s+} + m_{+1} y_{+s}   (17)
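Relations (12)–(17) follow from the assumed mutual independence of X, (M(0), M(1)) and (Y(0), Y(1)), and can be verified by brute-force enumeration. A minimal sketch (the particular bivariate laws below are arbitrary choices of ours, for illustration only):

```python
from itertools import product

# Arbitrary illustrative laws for (M(0), M(1)) and (Y(0), Y(1)).
m = {(0, 0): 0.5, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.1}
y = {(0, 0): 0.2, (0, 1): 0.4, (1, 0): 0.1, (1, 1): 0.3}

# y*_{rs} computed directly from the definition Y*(x) = Y(M(x)),
# using the independence of the two potential-response pairs.
y_star = {}
for r, s in product((0, 1), repeat=2):
    y_star[(r, s)] = sum(
        m[(a, b)] * y[(v0, v1)]
        for a, b in product((0, 1), repeat=2)      # (M(0), M(1)) = (a, b)
        for v0, v1 in product((0, 1), repeat=2)    # (Y(0), Y(1)) = (v0, v1)
        if (v0, v1)[a] == r and (v0, v1)[b] == s   # Y*(0) = Y(a), Y*(1) = Y(b)
    )

# Check (13): y*_{01} = m_{01} y_{01} + m_{10} y_{10}.
assert abs(y_star[(0, 1)] - (m[(0, 1)] * y[(0, 1)] + m[(1, 0)] * y[(1, 0)])) < 1e-12

# Check (16) for r = 0: y*_{0+} = m_{0+} y_{0+} + m_{1+} y_{+0}.
m0p = m[(0, 0)] + m[(0, 1)]
m1p = m[(1, 0)] + m[(1, 1)]
y0p = y[(0, 0)] + y[(0, 1)]
yp0 = y[(0, 0)] + y[(1, 0)]
assert abs((y_star[(0, 0)] + y_star[(0, 1)]) - (m0p * y0p + m1p * yp0)) < 1e-12
```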

Suppose now that we observe X_A = 1, Y_A = 1, but do not observe M_A. We have

 PC_A = y*_{01}/y*_{+1} = (m_{01} y_{01} + m_{10} y_{10})/y*_{+1}.   (18)

The denominator of (18) is Pr(Y = 1 ∣ X ← 1), which is estimable from the data.

As for the numerator, this can be expressed as

 2μη + Aμ + Bη + AB = 2(μ + B/2)(η + A/2) + AB/2   (19)

with μ := m_{01}, η := y_{01}, A := y_{1+} − y_{+1}, B := m_{1+} − m_{+1}. Note that A, B are identified from the data, while for μ and η we can only obtain inequalities:

 max{0, −B} ≤ μ ≤ min{m_{0+}, m_{+1}}
 max{0, −A} ≤ η ≤ min{y_{0+}, y_{+1}},

so that

 |B/2| ≤ μ + B/2 ≤ min{(m_{0+} + m_{+0})/2, (m_{1+} + m_{+1})/2}
 |A/2| ≤ η + A/2 ≤ min{(y_{0+} + y_{+0})/2, (y_{1+} + y_{+1})/2}.   (20)

The lower [resp., upper] limit for (19) will be attained when μ + B/2 and η + A/2 are both at their lower [resp., upper] limits in (20). In particular, the lower limit for (19) is 2|B/2||A/2| + AB/2 = max{0, AB}. Using (16) and (17), we compute AB = y*_{+1} − y*_{1+}, which leads to the lower bound

 PC_A ≥ 1 − Pr(Y = 1 ∣ X ← 0)/Pr(Y = 1 ∣ X ← 1) = 1 − 1/RR,   (21)

exactly as for the case that M was not observed. Thus the possibility of observing a mediating variable in the experimental data has not improved our ability to lower-bound PC_A.

We do however obtain an improved upper bound. Taking into account the various possible choices for the upper bounds in (20), the upper bound for the numerator of (18), in terms of experimentally estimable quantities, is given in Table 3.

### 5.1 Example

Suppose we obtain the following values from the data:

 Pr(M = 1 ∣ X ← 1) = 0.25
 Pr(M = 1 ∣ X ← 0) = 0.025
 Pr(Y = 1 ∣ M ← 1) = 0.9
 Pr(Y = 1 ∣ M ← 0) = 0.1

(these are consistent with Table 1).

Then A = −0.8 and B = −0.225, and the upper limits in (20) are μ + B/2 ≤ 0.1375 and η + A/2 ≤ 0.5, so that the numerator of (18) is at most 2 × 0.1375 × 0.5 + 0.09 = 0.2275. We thus find PC_A ≤ 0.2275/0.3 ≈ 0.76; whereas without taking account of the mediator we would have no non-trivial upper bound.
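The numbers in this example can be reproduced as follows (a sketch in our own notation, following the quantities μ, η, A, B of §5):

```python
# Mediation bounds of Section 5 for the example values above.
m_p1 = 0.25    # m_{+1} = Pr(M = 1 | X <- 1)
m_1p = 0.025   # m_{1+} = Pr(M = 1 | X <- 0)
y_p1 = 0.9     # y_{+1} = Pr(Y = 1 | M <- 1)
y_1p = 0.1     # y_{1+} = Pr(Y = 1 | M <- 0)

A = y_1p - y_p1   # y_{1+} - y_{+1} = -0.8
B = m_1p - m_p1   # m_{1+} - m_{+1} = -0.225

# Induced exposure-outcome rates via (16)-(17): agree with Table 1.
p1 = (1 - m_p1) * y_1p + m_p1 * y_p1   # Pr(Y = 1 | X <- 1) = 0.30
p0 = (1 - m_1p) * y_1p + m_1p * y_p1   # Pr(Y = 1 | X <- 0) = 0.12

# Upper limits from (20) for mu + B/2 and eta + A/2.
mu_hi = min((1 - m_1p + 1 - m_p1) / 2, (m_1p + m_p1) / 2)    # 0.1375
eta_hi = min((1 - y_1p + 1 - y_p1) / 2, (y_1p + y_p1) / 2)   # 0.5

num_hi = 2 * mu_hi * eta_hi + A * B / 2   # upper bound on the numerator of (18)
lower = 1 - p0 / p1                       # (21): 1 - 1/RR = 0.6
upper = num_hi / p1                       # 0.2275 / 0.3, about 0.76
```

The lower bound 0.6 is unchanged from §2, while the upper bound drops from vacuous to roughly 0.76.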

## 6 Discussion

In this paper we have considered estimation of the Probability of Causation in a number of contexts, including a novel analysis for the case of a mediating variable, in the absence of a direct effect. As we saw in §5, taking account of such a third variable in the pathway between exposure and outcome can lead to an improved upper bound, although conclusions about the lower bound remain the same.

Even though the case of no direct effect is special and unusual, there certainly do exist cases, such as the relationship between anxiolytics and car crashes mediated by alcohol consumption, or the relationship between aspirin and yellow jaundice due to hemolytic anemia mediated by favism, where this assumption is plausible. The next step will be to generalize our analysis to more general cases of mediation, allowing for a direct effect and for unobserved confounding.

## References

• [1] Dawid, A. P. (2011). The role of scientific and statistical evidence in assessing causality. In Perspectives on Causation (R. Goldberg, Ed.). Oxford: Hart Publishing, 133–147.
• [2] Dawid, A. P. (2014). Statistical causality from a decision-theoretic perspective. Annual Review of Statistics and Its Application 2. In Press.
• [3] Dawid, A. P., Musio, M., and Fienberg, S. E. (2014). From statistical evidence to evidence of causality. arXiv:1311.7513.
• [4] Tian, J. and Pearl, J. (2000). Probabilities of causation: Bounds and identification. Annals of Mathematics and Artificial Intelligence 28, 287–313.
• [5] Pearl, J. (1999). Probabilities of causation: Three counterfactual interpretations and identification. Synthese 121, 93–149.
• [6] Dawid, A. P., Fienberg, S. and Faigman, D. (2014a). Fitting science into legal contexts: Assessing effects of causes or causes of effects? Sociological Methods and Research 43, 359–390.