Counterfactual Normalization: Proactively Addressing Dataset Shift and Improving Reliability Using Causal Mechanisms
Predictive models can fail to generalize from training to deployment environments because of dataset shift, posing a threat to model reliability and the safety of downstream decisions made in practice. Instead of using samples from the target distribution to reactively correct dataset shift, we use graphical knowledge of the causal mechanisms relating variables in a prediction problem to proactively remove relationships that do not generalize across environments, even when these relationships may depend on unobserved variables (violations of the "no unobserved confounders" assumption). To accomplish this, we identify variables with unstable paths of statistical influence and remove them from the model. We also augment the causal graph with latent counterfactual variables that isolate unstable paths of statistical influence, allowing us to retain stable paths that would otherwise be removed. Our experiments demonstrate that models which remove vulnerable variables and use estimates of the latent variables transfer better, often outperforming models that retain the vulnerable variables in the target domain despite some accuracy loss in the training domain.
Adarsh Subbaswamy and Suchi Saria, Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218
1 Introduction

Classical supervised machine learning methods for prediction problems assume that training and test data are drawn independently and identically from a fixed distribution over the input features and the target output label. When this assumption does not hold, training with classical frameworks can yield unreliable models and, in the case of safety-critical applications like medicine, dangerous predictions (Dyagilev and Saria, 2015; Caruana et al., 2015; Schulam and Saria, 2017). Unreliable models may have performance that is not stable: model performance varies greatly when the test distribution differs from the training distribution in scenarios where invariance to the underlying changes is desirable and expected. Unreliability arises because models are often deployed in dynamic environments that systematically differ from the one in which the historical training data was collected—a problem known as dataset shift, which results in poor generalization. Most existing methods for addressing dataset shift are reactive: they use unlabeled data from the target distribution during the learning process (see Quionero-Candela et al. (2009) for an overview). However, when the differences in environments are unknown prior to model deployment (e.g., no data from the target environment is available), it is important to understand what aspects of the prediction problem can change and how we can train models that will be robust to these changes. In this work we consider the problem of proactively addressing dataset shift for discriminative models.
To illustrate, we will consider diagnosis, a problem common to medical decision making. The goal is to detect the presence of a target condition. The features used can be split into three categories: risk factors for the target condition (causal antecedents), outcomes or symptoms of the condition (causal descendants), and co-parents that serve as alternative explanations for the observations (e.g., comorbidities and treatments). The causal mechanisms (directional knowledge of causes and effects, e.g., beta blockers lower blood pressure) relating variables in a prediction problem can be represented using directed acyclic graphs (DAGs), such as the one in Figure 1a. As an example (Figure 1b), a hospital may wish to screen for meningitis, which can cause blood pressure (BP) to drop dangerously low. Smoking is a risk factor for meningitis, and also causes heart disease for which patients are prescribed beta blockers (a type of medication that lowers blood pressure). However, domain-dependent confounding (Figure 1b) and selection bias (Figure 1c) can cause certain distributions in the graph to change across domains, resulting in dataset shift.
Consider domain-dependent confounding, in which relevant variables may be unobserved and distributions involving these variables may change across domains. In diagnosis, unobserved variables are likely to be risk factors (e.g., behavioral factors, genetics, and geography) that confound the relationship between the target condition and comorbidities/treatments. For example (Figure 1b), smoking may not be recorded in the data, and the policy used to prescribe beta blockers to smokers will vary between doctors and hospitals. When smoking is observed, the changes in the prescription policy can be adjusted for. More generally, others have described solutions for ensuring model stability across environments with differences in policies (Schulam and Saria, 2017). Specifically, they optimize the counterfactual risk to explicitly account for variations in policy between train and test environments (e.g., Swaminathan and Joachims (2015); Schulam and Saria (2017)). However, this requires ignorability assumptions (also known as the no unobserved confounders assumption in causal inference) that may not hold in practice, such as when smoking is not observed. Violations of this assumption have implications for model reliability. For example, in Figure 1b, by d-separation (Koller and Friedman, 2009) the treatment has two active paths to the target when conditioned upon: one through the unobserved risk factor (smoking), and one through its direct effect on blood pressure. The first path is unstable because it contains an edge encoding the prescription policy, a distribution that changes between environments. The second path, however, encodes medical effects that are stable—the effect of the treatment on blood pressure does not change. Naively including the treatment and blood pressure in the model will capture both paths, leaving the model vulnerable to learning the relationship along the unstable path.
Similarly, selection bias (Figure 1c) adds an auxiliary selection variable to the graph, which can create unstable paths that contribute to model unreliability. Certain subpopulations with respect to the target and comorbidities may be underrepresented in the training data. For example, patients without meningitis who take beta blockers may be underrepresented because they rarely visit the hospital: a local chronic care facility helps them manage their chronic condition. This introduces a new unstable active path from the treatment to the target through the selection variable. As before, the path through blood pressure remains stable. In the presence of selection bias or domain-dependent confounding, can we remove the influence of unstable paths while retaining the influence of stable paths?
We propose removing vulnerable variables—variables with unstable active paths to the target—from the conditioning set of a discriminative model in order to learn models that are stable to changes in environment. In Figure 1, this means we must remove the treatment from the model. In doing so, blood pressure becomes vulnerable as well, because of the path through the unobserved risk factor in 1b and the path through the selection variable in 1c, so we must remove it too. While this removes all unstable paths, it also removes stable paths (in fact, it removes all stable paths in this example). However, in certain situations we describe, we can retain some of the stable paths between the target and the vulnerable variables by considering counterfactual variables. In our example, if we somehow knew an adjusted counterfactual value of blood pressure—the value for which the effects of treatment were removed (e.g., the blood pressure had the patient not been treated)—then this adjusted value would only contain the information along the stable path from the target. This concept is inspired by potential outcomes in causal inference and allows us to retain stable paths that would otherwise be removed along with the unstable paths.
Contributions: First, we identify variables which make a statistical model vulnerable to learning unstable relationships that do not generalize across datasets (due to selection bias or unobserved domain-dependent confounding), and which must therefore be removed from a discriminative model for its performance to be stable. Second, we define a node-splitting operation which modifies the DAG to contain interpretable latent counterfactual variables that isolate unstable paths, allowing us to retain some stable paths involving vulnerable variables. By allowing unstable paths to depend on unobserved variables, we generalize previous works that learn stable models by assuming there are no unobserved confounders, intervening on the unstable policy, and predicting potential outcomes (see, e.g., Schulam and Saria (2017)). Third, we provide algorithms for determining stable conditioning sets and which counterfactuals to estimate. Fourth, we explain how including the latent features can make a classification problem measurably simpler due to their reduced variance. In simulated and real data experiments, we demonstrate that our method improves the stability of model performance.
2 Related Work
Proactive and Reactive Approaches: Reactive predictive modeling methods for countering dataset shift typically require representative unlabeled samples from the test distribution (Storkey, 2009). These methods work by re-weighting the training data or extracting transferable features (e.g., Shimodaira (2000); Gretton et al. (2009); Gong et al. (2016); Zhang et al. (2013)). To proactively address perturbations of test distributions, recent work considers formal verification methods for bounding the performance of trained models on perturbed inputs (e.g., Raghunathan et al. (2018); Dvijotham et al. (2018)). Complementary to this, others have developed methods based on distributional robustness for training models to be minimax optimal to perturbations of bounded magnitude in order to guard against adversarial attacks (Sinha et al., 2018) and improve generalization (Rothenhäusler et al., 2018). We consider the related problem of training models that are stable to arbitrary shifts in distribution.
Beyond predictive modeling, previous work has considered estimation of causal models in the presence of selection bias and confounding. For example, Spirtes et al. (1995) learn the structure of the causal DAG from data affected by selection bias. Others have studied methods and conditions for identification of causal effects under simultaneous selection and confounding bias (e.g., Bareinboim and Pearl (2012); Bareinboim and Tian (2015); Correa et al. (2018)). Correa and Bareinboim (2017) determine conditions under which interventional distributions are identified without using external data.
Transportability: The goal of an experiment is for the findings to generalize beyond a single study, a concept known as external validity (Campbell and Stanley, 1963). Similarly, in causal inference transportability, formalized in Pearl and Bareinboim (2011), transfers causal effect estimates from one environment to another. Bareinboim and Pearl (2013) generalize this to transfer causal knowledge from multiple source domains to a single target domain. Rather than transfer causal estimates from source to target, the proposed method learns a single statistical model whose predictions should perform well on the source domain while also generalizing well to new domains.
Graphical Representations of Counterfactuals: The node-splitting operation we introduce in Section 3 is similar to the node-splitting operation in Single World Intervention Graphs (SWIGs) (Richardson and Robins, 2013). However, intervening in a SWIG results in a generative graph for a potential outcome with the factual outcome removed from the graph. By contrast, our node-splitting operation yields a modified generative graph of the factual outcomes with new intermediate counterfactual variables. Other graphical representations such as twin networks (Pearl, 2009) and counterfactual graphs (Shpitser and Pearl, 2007) simultaneously represent factual and counterfactual outcomes, rather than the intermediate counterfactuals exploited in this work.
The proposed method involves the estimation of counterfactuals, which can be formalized using the Neyman-Rubin potential outcomes framework (Neyman, 1923; Rubin, 1974). For an outcome variable Y and intervention A, we denote the potential outcome by Y(a): the value Y would have if A were observed to be a.
In general, the distributions P(Y(a)) and P(Y | A = a) are not equal. For this reason, estimation of the distribution of the potential outcomes relies on two assumptions:
Consistency: The distribution of the potential outcome under the observed intervention is the same as the distribution of the observed outcome. This implies P(Y(a) | A = a) = P(Y | A = a).
Conditional Ignorability: Y(a) ⊥ A | X for all a; there are no unobserved confounders given the covariates X. This implies P(Y(a) | X) = P(Y | A = a, X).
3.1 Counterfactuals and SEMs
Shpitser and Pearl (2008) develop a causal hierarchy consisting of three layers of increasing complexity: association, intervention, and counterfactual. Many works in causal inference are concerned with estimating average treatment effects—a task at the intervention layer because it uses information about the interventional distribution P(Y | do(A = a)). In contrast, the proposed method requires counterfactual queries, which use the distribution P(Y(a') | Y = y, A = a) for a' ≠ a. (The distinction is that the interventional distribution reasons about the effects of causes, while the counterfactual reasons about the causes of effects; see, e.g., Pearl (2015).) That is, given that we observed an individual's outcome to be y under intervention a, what would the distribution of their outcome have been under a different intervention a'?
In addition to the assumptions for estimating potential outcomes, computing counterfactual queries requires functional or structural knowledge (Pearl, 2009). We can represent this knowledge using causal structural equation models (SEMs). These models assume each variable X_i is a function of its immediate parents in the generative causal DAG and an exogenous noise term: X_i = f_i(pa(X_i), ε_i). Reasoning counterfactually at the level of an individual unit requires assumptions on the form of the functions f_i and independence of the noise terms ε_i, because typically we are interested in reasoning about interventions in which the exogenous noise variables remain fixed. We build on this to estimate the latent counterfactual variables introduced within the proposed procedure.
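To make the role of fixed exogenous noise concrete, the following sketch simulates a small additive SEM and computes a unit-level counterfactual by subtracting a parent's contribution. The coefficients and variable names here are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical additive SEM (illustrative coefficients):
# U -> T, U -> A (treatment policy), and B = 2*T - 1.5*A + eps.
n = 1000
u = rng.normal(size=n)                   # unobserved confounder
t = 0.8 * u + rng.normal(size=n)         # target
a = 1.2 * u + rng.normal(size=n)         # treatment
eps = rng.normal(scale=0.1, size=n)      # exogenous noise for B
b = 2.0 * t - 1.5 * a + eps              # factual outcome (e.g., blood pressure)

# Counterfactual "had the treatment effect been removed": with an additive
# structural equation and the exogenous noise held fixed, subtract A's term.
b_cf = b - (-1.5) * a                    # equals 2.0 * t + eps

print(np.allclose(b_cf, 2.0 * t + eps))  # True
```

Note that the counterfactual is computed per unit: the same realization of the exogenous noise appears in both the factual and counterfactual values.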
3.2 Counterfactual Normalization
3.2.1 Assumptions About Structure of the Graph
Counterfactual Normalization uses a DAG, G, that represents the causal mechanisms relating variables in a prediction problem. Let O denote the observed variables, and let T be the target variable to predict (T is unobserved in test distributions). We make no further assumptions about the edges relating observed variables. Let ch(·) and pa(·) denote children and parents in G, respectively.
G can contain unobserved variables, which we will use to represent domain-dependent confounding. An unobserved variable must have at least two children so that it confounds the relationship between its children. Domain-dependent confounding occurs when a distribution involving an unobserved variable—its marginal, or the conditional distribution of its children—changes across domains. G can also contain an additional variable S which represents the selection mechanism that induces selection bias in the training data. The mechanism is given by P(S = 1 | pa(S)), where pa(S) is assumed to be nonempty, and S = 1 is always conditioned upon in the training domain.
3.2.2 Constructing a Stable Set
The goal of Counterfactual Normalization is to find a set of observed variables and adjusted versions of observed variables that contains no active unstable paths while maximizing the number of active stable paths it contains. First, we define an unstable path to be a path to the target that contains variables or edges which encode a distribution that can change across environments. These are edges involving unobserved variables (domain-dependent confounding) or the selection variable S. Thus, an unstable path is a path to T which contains S or an unobserved variable.
We can find a set, Z, of observed variables with no active unstable paths using Algorithm 1, which considers active paths of increasing length that begin at T, and removes vulnerable variables reachable by unstable active paths.
Algorithm 1 will result in a set Z that contains no unstable active paths to T.
We show that on iteration i, removing a variable from Z does not create an active unstable path of length i to a member of Z (see supplement). ∎
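The removal step can be sketched on a toy version of the meningitis graph. This brute-force version enumerates simple paths and applies the d-connection rules directly, iterating to a fixpoint; it is our own simplification, not the incremental-length procedure of Algorithm 1, and the variable names (U, T, A, B) are illustrative.

```python
from itertools import chain

# Toy DAG: U (unobserved, e.g. smoking) -> T (target), U -> A (treatment),
# T -> B (blood pressure), A -> B.
edges = [("U", "T"), ("U", "A"), ("T", "B"), ("A", "B")]
unstable = {"U"}       # unobserved / selection variables
observed = {"A", "B"}  # candidate conditioning variables
target = "T"

parents, children = {}, {}
for p, c in edges:
    children.setdefault(p, set()).add(c)
    parents.setdefault(c, set()).add(p)

def descendants(v):
    out, stack = set(), [v]
    while stack:
        for c in children.get(stack.pop(), ()):
            if c not in out:
                out.add(c)
                stack.append(c)
    return out

def simple_paths(src, dst):
    # All simple paths src..dst, ignoring edge direction, as node lists.
    def walk(path):
        if path[-1] == dst:
            yield list(path)
            return
        for w in chain(children.get(path[-1], ()), parents.get(path[-1], ())):
            if w not in path:
                path.append(w)
                yield from walk(path)
                path.pop()
    yield from walk([src])

def is_active(path, cond):
    # d-connection: a collider must be in cond (or have a descendant there);
    # a non-collider must not be in cond.
    for i in range(1, len(path) - 1):
        left, v, right = path[i - 1], path[i], path[i + 1]
        collider = left in parents.get(v, set()) and right in parents.get(v, set())
        if collider:
            if v not in cond and not (descendants(v) & cond):
                return False
        elif v in cond:
            return False
    return True

# Iteratively drop variables reachable from T by an active unstable path.
Z = set(observed)
changed = True
while changed:
    changed = False
    for v in sorted(Z):
        for path in simple_paths(target, v):
            if (set(path) & unstable) and is_active(path, Z - {v}):
                Z.discard(v)
                changed = True
                break
print(sorted(Z))  # [] — every observed variable is vulnerable in this graph
```

On this graph the treatment A is removed first (path through U), which makes the path through A to B active, so B is removed too, matching the paper's observation that all stable paths are lost in this example.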
3.2.3 Node-Splitting

We now consider expanding the stable conditioning set by including some of the removed variables, or adjusted versions of them. The adjusted versions are counterfactuals, which we place on a modified DAG through a procedure called node-splitting.
Assume each variable has a corresponding structural equation in which it is a function of its parents and an exogenous, unobserved, and independent noise term: X = f(pa(X), ε). We want to compute a counterfactual version of a variable in which we remove the effects of (i.e., intervene upon) some of its parents. Given the variable's factual value and the factual values of the parents we intervene upon, we calculate the counterfactual value by removing the effects of those parents (intervening and setting them to "null"). In the diagnosis example of Figure 1b, an example counterfactual variable would be the patient's blood pressure had we removed the effects of the treatments they were given. Note that we must observe the factual values of the parents we intervene on—they must be observed variables.
Removing the effects of only a subset of the parents requires being able to consider the effects of a parent while holding fixed the effects of the other parents of the variable. For this reason, we assume that the effects of parents on children are independent—they have no interactions. We specifically consider additive structural equations which satisfy this requirement. Estimation of the counterfactuals requires fitting the relevant structural equations using the factual outcome data by maximum likelihood estimation. We can now define the node-splitting operation, which is given in Algorithm 2. Given a variable and the subset of its parents to intervene upon, we set the intervened parents to “null” and place a latent counterfactual version of the variable onto the graph as a parent of its factual version. Unlike traditional SEM interventions, we retain the factual version of the parents we intervene on in the graph. The counterfactual version subsumes the parents (in the original graph ) of its factual version that were not intervened upon. The modified graph is an equivalent model of the factual data generating process.
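The estimation step above can be sketched for a linear Gaussian structural equation, where least squares coincides with maximum likelihood. The binary variables, coefficients, and the "null" value of zero are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical additive SEM for blood pressure: B = beta_T*T + beta_A*A + eps.
n = 2000
t = rng.binomial(1, 0.5, size=n).astype(float)  # target condition
a = rng.binomial(1, 0.5, size=n).astype(float)  # treatment
b = 2.0 * t - 1.5 * a + rng.normal(scale=0.3, size=n)

# Fit the structural equation for B by least squares on the factual
# training data (where T is observed).
X = np.column_stack([t, a])
coef, *_ = np.linalg.lstsq(X, b, rcond=None)

# Node-splitting: intervene on the treatment parent (set it to "null",
# here 0) and compute the latent counterfactual B(A = 0) for every unit.
b_cf = b - coef[1] * a

print(np.allclose(coef, [2.0, -1.5], atol=0.1))  # True
```

At deployment time only the fitted coefficient and the factual values of B and A are needed to compute the counterfactual, not the target T.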
The consequence of node-splitting is that while the factual version of a variable may be vulnerable, after intervening on some of its parents its counterfactual version may no longer be. Consider a vulnerable variable which, if added to a stable conditioning set Z, would yield at least one unstable active path to T. If the unstable path leaves the variable through one of its parents, then we can intervene on that parent. After node-splitting, the corresponding path to the counterfactual version passes through the factual version as a collider; since the factual version is not conditioned on, this collider path is blocked. Thus, the unstable path is not active for the counterfactual version though it was active for the factual one. However, if the unstable path leaves the variable through one of its children, we cannot intervene (a child is not a parent), and any counterfactual version will inherit the unstable active path. The first case shows that for unstable paths from a vulnerable variable that begin through a parent, intervening on the parent yields a counterfactual in which these unstable paths are not active. The second case shows that unstable active paths that begin through a child cannot be removed by node-splitting. We can also intervene on a variable's observed parents that are not along unstable paths. As we discuss in Section 4, the potential benefit is that counterfactual variables have lower variance than their factual versions.
A question remains: does conditioning upon a stable counterfactual version of a vulnerable variable cause any unstable paths to become active? Conditioning on a variable can only open collider paths, so the only cases we must consider are when the counterfactual is a collider or descendant of a collider. In these cases, the active paths that meet at the collider are reachable by the counterfactual through at least one of its parents. However, we know that these paths are stable since the counterfactual is stable: we would have intervened on any parents which were along unstable paths. Thus, conditioning on a stable counterfactual does not activate any new unstable paths.
3.2.4 Adding to the Conditioning Set
After finding a stable set Z of observed variables to condition upon, we must consider adding back each of the vulnerable variables that were removed. First, there may be variables with no remaining unstable active paths because collider paths became inactive after other variables were removed from Z. Second, we know that if a variable's active unstable paths go through observed parents, we can intervene on those parents, node-split, and add the counterfactual version to the conditioning set. Because conditioning on the counterfactual may open stable paths involving its non-vulnerable parents, we want to make sure that non-vulnerable parents that may rejoin Z (the first case) are considered after the counterfactual. For this reason, we consider adding the removed variables to Z in reverse topological order. Algorithm 3 shows the procedure for adding variables to Z. We condition on the resulting set and use it to predict T by modeling P(T | Z).
Algorithm 3 does not activate any unstable paths and results in a stable set Z.
We show that none of the branches in the algorithm activates an unstable path (see supplement). ∎
3.2.5 An Example
To illustrate node-splitting and Counterfactual Normalization, consider the expanded domain-dependent confounding diagnosis example in Figure 2a. One variable represents a chronic condition (e.g., heart disease), another represents treatments (e.g., beta blockers), and a third represents age (a demographic risk factor). One risk factor is unobserved, and we allow the distributions involving it to vary across domains.
In finding a stable set Z, Algorithm 1 first removes the variable reachable by an unstable path of length 2, and then the variables reachable by unstable paths of length 3. Now we consider the removed variables in reverse topological order. Blood pressure has unstable active paths that begin through its observed parents (the chronic condition and the treatment), so we intervene on them to generate its counterfactual version and add it to Z after node-splitting. The chronic condition, considered next, has no stable active paths to T, so we do not add it to Z; similarly, the treatment has no stable active paths. The resulting set, containing the counterfactual blood pressure, is the conditioning set we would use to predict T by modeling P(T | Z).
4 Complexity Metrics
Beyond removing unstable paths, what are the other benefits of the proposed method? For binary prediction problems, the geometric complexity (on the basis of Euclidean distance) of the class boundary of a dataset can decrease when using the latent counterfactual variables instead of the factual, vulnerable variables. This is similar to the work of Alaa and van der Schaar (2017), who use the smoothness of the treated and untreated response surfaces to quantify the difficulty of a causal inference problem. To measure classifier-independent geometric complexity we use two types of metrics developed by Ho and Basu (2000, 2002): measures of overlap of individual features and measures of separability of classes.
For measuring feature overlap, we use the maximum Fisher's discriminant ratio over the features. For a single feature, this measures the spread of the class means (μ1 and μ2) relative to the class variances (σ1² and σ2²): f = (μ1 − μ2)² / (σ1² + σ2²). Since the proposed method uses counterfactual variables in which we have removed the effects of some parents, it removes sources of variance in the variable. Thus, we expect the variance within each class to shrink, resulting in increased feature separability and a correspondingly larger Fisher's discriminant ratio.
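A small sketch of this effect, under a hypothetical additive model (variable names and coefficients are our own): removing the treatment's contribution shrinks within-class variance, so the ratio increases.

```python
import numpy as np

def fisher_ratio(x, y):
    """Fisher's discriminant ratio for one feature x with binary labels y."""
    x0, x1 = x[y == 0], x[y == 1]
    return (x0.mean() - x1.mean()) ** 2 / (x0.var() + x1.var())

rng = np.random.default_rng(0)
y = rng.binomial(1, 0.5, size=4000)      # class label (e.g., target condition)
treat = rng.binomial(1, 0.5, size=4000)  # hypothetical treatment indicator
bp = 2.0 * y - 1.5 * treat + rng.normal(scale=0.3, size=4000)
bp_cf = bp + 1.5 * treat                 # treatment effect removed

print(fisher_ratio(bp_cf, y) > fisher_ratio(bp, y))  # True
```

Here the class means are unchanged by the adjustment, but the within-class variances lose the treatment's contribution, which is exactly the mechanism the text describes.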
One measure of separability of classes is based on a two-sample test (Friedman and Rafsky, 1979) for determining whether two samples are drawn from the same distribution. First, compute a minimum spanning tree (MST) that connects all the data points regardless of class. Then, the proportion of nodes connected to a node of a different class approximates the proportion of examples on the class boundary. Higher values of this proportion generally indicate a more complex boundary, and thus a more difficult classification problem.
However, this metric is only sensitive to which class neighbors are closer, and not the relative magnitudes of intraclass and interclass distances. Another measure of class separability is the ratio between the average intraclass nearest neighbor distance and the average interclass nearest neighbor distance. This measures the relative magnitudes of the dispersion within classes and the gap between classes. We expect intraclass distances to decrease because the data units are transformed to have the same value of the intervened parents, reducing sources of variance (e.g., less variance in counterfactual untreated BP than in factual BP).
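Both separability measures can be sketched as follows; this is a simplified reading of Ho and Basu's metrics on a synthetic two-class sample, using SciPy's MST routine.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def boundary_fraction(X, y):
    """Fraction of points whose MST neighbor has a different label
    (Friedman-Rafsky-style class-boundary estimate)."""
    d = cdist(X, X)
    mst = minimum_spanning_tree(d).toarray()
    on_boundary = set()
    for i, j in zip(*np.nonzero(mst)):
        if y[i] != y[j]:
            on_boundary.update((int(i), int(j)))
    return len(on_boundary) / len(y)

def intra_inter_ratio(X, y):
    """Average same-class NN distance over average other-class NN distance."""
    d = cdist(X, X)
    np.fill_diagonal(d, np.inf)
    same = y[:, None] == y[None, :]
    intra = np.where(same, d, np.inf).min(axis=1)
    inter = np.where(~same, d, np.inf).min(axis=1)
    return intra.mean() / inter.mean()

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 200)
X = rng.normal(size=(400, 2)) + 3.0 * y[:, None]  # well-separated classes
print(boundary_fraction(X, y) < 0.1, intra_inter_ratio(X, y) < 1.0)
```

On a well-separated sample like this one, few MST edges cross classes and same-class neighbors are much closer than other-class neighbors, so both metrics are small; counterfactual adjustment is expected to move both in this direction.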
Parents of the target's children other than the target itself add variance to the prediction problem through their effects on those children. By removing their effects, the proposed method can directly increase the signal-to-noise ratio of the classification problem. With respect to the geometric complexity of the class boundary, this manifests as reduced variance within each class, as we demonstrate in a simulated experiment.
5 Experiments

We demonstrate that, without requiring samples from the target distribution during training, Counterfactual Normalization results in discriminative models with more stable performance across datasets. In all experiments we train models using only source data and evaluate on test data from both the source and target domains.
5.1 Simulated Experiments
5.1.1 Linear Gaussian Example
We consider a regression version of the simple domain-dependent confounding example in Figure 1b in which the confounder is unobserved. We simulate data from linear Gaussian SEMs in which every variable is a linear combination of its parents plus Gaussian noise; thus, every edge in the graph has a corresponding weight, the coefficient of the parent in the SEM (for the full specification consult the supplement). We manifest domain-dependent confounding by varying the coefficient of the confounder in the treatment's structural equation across test domains, thus changing the treatment policy.
First, note that in the ideal case, if we could observe the confounder, the unstable edge would not lie on any active path to the target once the confounder and the other variables are conditioned upon. This means a least squares regression conditioning on all of them will have stable predictive performance regardless of changes to the treatment policy. This is visible in Figure 3, in which the mean squared error (MSE) of the ideal model (red points) stays constant despite changes in the policy coefficient.
We could instead naively ignore these changes and model the target by conditioning on the vulnerable variables, but the naive model's performance will vary in test domains as a result of its use of the unstable path. In fact, the MSE of the naive model (blue points in Figure 3) appears to increase quadratically as the policy coefficient moves away from its training value.
Alternatively, we can use Counterfactual Normalization (CFN). The treatment and the outcome are vulnerable, but we can condition on the counterfactual outcome. First, we fit the structural equation for the outcome from training data (hatted quantities denote estimates). We then estimate the counterfactual by subtracting the estimated treatment effect from the factual value. Finally, we regress the target on the counterfactual, which conditions on a stable set. The MSE of CFN is stable (green points in Figure 3), though it can be outperformed by the naive and ideal models in the source domain because CFN discards the informative paths through the unobserved confounder.
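This experiment can be sketched end-to-end under an assumed linear Gaussian SEM; the coefficients and the shifted policy value below are our own choices, not the paper's specification.

```python
import numpy as np

def simulate(n, w_policy, rng):
    # Hypothetical linear Gaussian SEM; w_policy is the domain-varying
    # coefficient of the unobserved confounder in the treatment equation.
    u = rng.normal(size=n)                       # unobserved confounder
    t = u + rng.normal(size=n)                   # regression target
    a = w_policy * u + rng.normal(size=n)        # treatment
    b = 2.0 * t - 1.5 * a + rng.normal(size=n)   # outcome
    return t, a, b

def ols(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

def mse(coef, X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    return float(np.mean((y - X1 @ coef) ** 2))

rng = np.random.default_rng(0)
t, a, b = simulate(50000, 1.0, rng)    # source domain
t2, a2, b2 = simulate(50000, -1.0, rng)  # target domain: shifted policy

# Naive model conditions on the vulnerable variables A and B.
naive = ols(np.column_stack([a, b]), t)

# CFN: fit B's structural equation (T is observed at training time),
# subtract the estimated treatment effect, and condition on B(A = null).
sem = ols(np.column_stack([t, a]), b)
b_cf, b2_cf = b - sem[2] * a, b2 - sem[2] * a2   # no access to U needed
cfn = ols(b_cf[:, None], t)

gap_naive = mse(naive, np.column_stack([a2, b2]), t2) - mse(naive, np.column_stack([a, b]), t)
gap_cfn = mse(cfn, b2_cf[:, None], t2) - mse(cfn, b_cf[:, None], t)
print(gap_cfn < gap_naive)  # True: CFN's error is stable across the shift
```

The naive model exploits the confounder-mediated association between the treatment and the target, which reverses in the target domain; the CFN regressor uses only the stable counterfactual outcome, so its error barely moves.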
5.1.2 Cross Hospital Transfer
Ensuring Stable Performance
Table 1: AUROC of each method at the source and target hospitals.
We consider a simulated version of the diagnosis problem in Figure 2(a), but remove age from the graph. We let an additional variable represent the time since treatment and simulate exponentially decaying treatment effects, with a treatment policy that depends on the unobserved risk factor. The treatment and its descendants (blood pressure and the time since treatment) are vulnerable.
We simulate patients from two hospitals (the full specification is in the supplement). In the source hospital there is a positive correlation between the unobserved risk factor and the treatment, while in the target hospital the treatment policy changes, yielding a negative correlation. At the source hospital, shorter times since treatment are associated with the target condition, while at the target hospital the time since treatment is uncorrelated with it. The structural equation for blood pressure remains stable across hospitals. We train using data from the source hospital and evaluate performance at both the source and target hospitals.
Counterfactual Normalization requires us to estimate the latent counterfactual blood pressure. We first fit its structural equation using maximum likelihood estimation, optimized with BFGS (Chong and Zak, 2013). Then, we compute the counterfactual for every individual at both hospitals, which can be done without observing the unobserved risk factor. Using logistic regression, we compare a counterfactual model (CFN) with a baseline vulnerable model and with a counterfactual model that also uses the vulnerable variables, and we measure predictive accuracy with the area under the receiver operating characteristic curve (AUROC).
The results of evaluating on patients from the source and target hospitals are shown in Table 1. The accuracy of models that use vulnerable variables does not transfer across hospitals, with the baseline suffering large changes in performance. In contrast, CFN transfers well while performing competitively at the source hospital, despite not using the unstable paths that are informative in the training domain.
Normalizing BP for treatment and the chronic condition greatly increases the separability by class in the training data, as measured by the classification complexity metrics in Table 2. The feature with the maximum Fisher's discriminant ratio in the baseline model has a much smaller ratio than the latent counterfactual feature in CFN. The large decrease in the MST metric indicates that fewer examples lie on the class boundary in the normalized problem, and the decrease in the intraclass-interclass distance ratio is due to a combination of increased separability and reduced intraclass variance of the latent variables. This is visible in the class-conditional densities of the factual and counterfactual BP (see supplement).
Accuracy of Counterfactual Estimates
In this experiment, we examine how the accuracy of the counterfactual estimates affects model stability and performance. We expect models that do not use vulnerable variables to have more stable performance, though they may be less accurate in the source domain than models which use vulnerable variables. We perturb the true counterfactual values by adding normally distributed noise of increasing scale. Then, we train the counterfactual logistic regressions (with and without vulnerable variables) to predict the target and evaluate the AUROC at the source and target hospitals. We vary the standard deviation of the perturbations over a grid of increasing values, repeating the process 50 times for each value.
The results, shown in Figure 4, confirm what we expect: removing vulnerable variables leads to more stable performance, but performance in the source domain is always lower than when the vulnerable variables are included. Further, when the counterfactual estimates are accurate (low MSE), removing vulnerable variables yields better performance in the target domain. However, when the MSE is high, the noise removes both the information captured by the adjustment and the information contained in the variable itself, causing the model to perform worse in the target domain than a model using vulnerable variables.
5.2 Real Data: Sepsis Classification
5.2.1 Problem and Data Description
We apply the proposed method to the task of detecting sepsis, a deadly response to infection that leads to organ failure. Early detection and intervention has been shown to result in improved mortality outcomes (Kumar et al., 2006) which has resulted in recent applications of machine learning to build predictive models for sepsis (e.g., Henry et al. (2015); Soleimani et al. (2017); Futoma et al. (2017a, b)).
To illustrate, we consider a simplified, cross-sectional version of the sepsis detection task using electronic health record (EHR) data from our institution's hospital. (Sepsis involves many physiologic markers and corresponding treatments and chronic conditions; we select a small number of variables to demonstrate the key technical concepts.) Working with a domain expert, we determined the primary factors in the causal mechanism DAG (Figure 5) for the effects of sepsis on a single physiologic signal: the international normalized ratio (INR), a measure of the clotting tendency of blood. The target variable is whether or not the patient has sepsis due to hematologic dysfunction. We include seven conditions (such as chronic liver disease and sickle cell disease) affecting INR that are risk factors for sepsis (Goyette et al., 2004; Booth et al., 2010). We consider five types of relevant treatments: anticoagulants, aspirin, nonsteroidal anti-inflammatory drugs (NSAIDs), plasma transfusions, and platelet transfusions, where each treatment variable indicates whether the patient received that treatment in the last 24 hours. Finally, we include a demographic risk factor, age. For each patient, we take the last recorded measurements, considering data only up until the time sepsis is recorded in the EHR for patients who develop sepsis.
27,633 patients had at least one INR measurement, 388 of whom had sepsis due to hematologic dysfunction. We introduced selection bias as follows. First, we took one third of the data as a sample from the original target population for evaluation. Second, we subsampled the remaining data by rejecting, with a fixed probability, patients who received any treatment and did not have sepsis. Third, we split the subsampled data into random two-thirds/one-third train/test splits for training on biased data and evaluating on both the biased and unbiased data to measure stability of prediction performance. We repeated these three steps 100 times. We normalize INR in all experiments.
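The rejection step can be sketched as below. This is a minimal sketch on synthetic flags; the paper's rejection probability was elided from this copy, so `p_reject=0.8` is an arbitrary placeholder:

```python
import numpy as np

def biased_subsample(treated_any, sepsis, p_reject, rng):
    """Rejection step from the setup above: drop patients who received
    any treatment but do not have sepsis, with probability p_reject."""
    reject = treated_any & (~sepsis) & (rng.random(len(sepsis)) < p_reject)
    return ~reject  # boolean mask of kept patients

rng = np.random.default_rng(0)
n = 10000
treated_any = rng.random(n) < 0.5   # synthetic treatment flags
sepsis = rng.random(n) < 0.05       # synthetic labels, ~5% positive
keep = biased_subsample(treated_any, sepsis, p_reject=0.8, rng=rng)

# Selection raises the sepsis rate in the retained (biased) sample.
print(sepsis.mean(), sepsis[keep].mean())
```

Rejecting treated non-sepsis patients makes treatment spuriously predictive of sepsis in the retained sample, which is exactly the unstable path the experiment manipulates.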
5.2.2 Experimental Setup
We apply the proposed method by fitting an additive structural equation for INR using the Bayesian calibration form of Kennedy and O'Hagan (2001), in which the discrepancy function is given a Gaussian process (GP) prior (with an RBF kernel), since our linear regression model is likely misspecified.
Due to selection bias and the small number of sepsis examples, we place informative priors on the regression coefficients for better calibration: one prior for features that increase INR (e.g., anticoagulants) and another for features that decrease INR (e.g., sickle cell disease and plasma transfusions). For the full specification of the other priors, consult the supplement. We compute point estimates for the parameters using MAP estimation and the FITC sparse GP (Snelson and Ghahramani, 2006) implementation in PyMC3 (Salvatier et al., 2016).
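As a rough illustration of the Kennedy and O'Hagan calibration form, the sketch below fits a linear model plus a GP discrepancy on synthetic data. Note this is a simplified two-stage approximation using scikit-learn, not the paper's joint MAP estimation with a FITC sparse GP in PyMC3; the data and coefficients are made up:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic stand-in for the INR regression: a linear effect plus a
# smooth nonlinearity that a linear structural equation cannot capture.
X = rng.normal(size=(300, 2))
y = X @ np.array([0.7, -0.3]) + 0.5 * np.sin(2 * X[:, 0]) + rng.normal(0, 0.1, size=300)

# Stage 1: linear structural equation. (The paper estimates this jointly
# with the discrepancy by MAP; the two-stage fit here is a simplification.)
lin = LinearRegression().fit(X, y)

# Stage 2: a GP with an RBF kernel models the discrepancy function.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y - lin.predict(X))

print("MSE, linear only:", np.mean((y - lin.predict(X)) ** 2))
print("MSE, with discrepancy:", np.mean((y - (lin.predict(X) + gp.predict(X))) ** 2))
```

The GP discrepancy absorbs the misspecification that the linear term misses, which is the role it plays in the calibrated structural equation.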
While only two of the variables are vulnerable, we additionally remove the effects of two other variables.
We consider three logistic regression models trained on the biased data: a baseline using the vulnerable variables, a counterfactually normalized model, and a counterfactually normalized model that also includes the vulnerable variables. We evaluate prediction accuracy on biased and unbiased data using the AUROC and the area under the precision-recall curve (AUPRC).
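The vulnerable-versus-normalized comparison can be sketched on synthetic data. The following is a minimal, hypothetical two-model version in which a treatment policy reverses between domains (a different shift than the paper's selection bias, but the same vulnerable-versus-stable contrast); all names and coefficients are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(2)

def domain(n, p_treat):
    """Hypothetical generator: sepsis (y) raises INR; treatment (t) also
    raises INR, and the policy p(t | y) differs across domains."""
    y = (rng.random(n) < 0.2).astype(int)
    t = rng.binomial(1, np.where(y == 1, p_treat[1], p_treat[0]))
    inr = 1.0 + 0.8 * y + 1.5 * t + rng.normal(0, 0.3, size=n)
    return y, t, inr, inr - 1.5 * t   # last value: counterfactual INR

y_s, t_s, inr_s, cf_s = domain(5000, (0.1, 0.9))   # source policy
y_t, t_t, inr_t, cf_t = domain(5000, (0.9, 0.1))   # target: policy reversed

scores = {}
for name, Xs, Xt in [
    ("baseline", np.c_[inr_s, t_s], np.c_[inr_t, t_t]),   # vulnerable
    ("CFN", cf_s[:, None], cf_t[:, None]),                # stable only
]:
    clf = LogisticRegression().fit(Xs, y_s)
    s = clf.decision_function(Xt)
    scores[name] = (roc_auc_score(y_t, s), average_precision_score(y_t, s))
    print(name, scores[name])
```

The baseline leans on the unstable treatment relationship and degrades in the target domain, while the counterfactually normalized model transfers.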
The selection bias causes a small shift in the marginal distribution of the target between populations: a larger fraction of the selection-biased population has sepsis than of the unbiased population. Since most of the examples are negative, the AUPRC is the more interesting measurement because it is sensitive to false positives.
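The sensitivity of AUPRC to false positives under class imbalance is easy to demonstrate on synthetic scores: corrupting a handful of negatives with confidently high scores barely moves the AUROC but sharply reduces the AUPRC. The class ratio and score distributions below are illustrative only:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(3)
y = np.r_[np.ones(200), np.zeros(9800)]              # ~2% positive
s = np.r_[rng.normal(2, 1, 200), rng.normal(0, 1, 9800)]

# Corrupt 100 negatives with confidently high scores (false positives).
s_fp = s.copy()
s_fp[200:300] += 6.0

print("AUROC:", roc_auc_score(y, s), "->", roc_auc_score(y, s_fp))
print("AUPRC:", average_precision_score(y, s), "->", average_precision_score(y, s_fp))
```

The 100 corrupted negatives are a tiny fraction of all negative-positive pairs (hence a small AUROC change) but dominate the top of the ranking, collapsing precision at every recall level.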
The resulting AUCs when predicting on selection-biased data are shown in Figure 6. As expected, the counterfactually normalized model (CFN) performs worse than the models using vulnerable variables because it does not take advantage of the unstable path created by selection bias. On unbiased data (Figure 7), however, CFN not only outperforms both vulnerable models, but its performance is also more stable under the selection bias: the decrease in AUPRC from source to target is much smaller for CFN.
Interestingly, the performance of the two vulnerable models is nearly identical. This implies that the CFN model with vulnerable variables does not learn to use the counterfactual features, perhaps because the unstable path through selection bias encodes a much stronger relationship. The AUPRC of the non-vulnerable CFN model on both selection-biased and unbiased data lies between the AUPRC of the vulnerable CFN model on selection-biased data (upper bound) and on unbiased data (lower bound). This is encouraging: Figure 4 suggests that if CFN performance were worse on unbiased data than the vulnerable model's performance, the counterfactual estimates might be inaccurate. Ultimately, we were able to leverage Counterfactual Normalization to remove vulnerable variables, resulting in more stable performance while outperforming the vulnerable models on unbiased data.
When environment-specific artifacts cause training and test distributions to differ, naively training models under i.i.d. assumptions can result in unreliable models that predict using unstable relationships that do not generalize. While some previous solutions use prior knowledge of causal mechanisms to predict potential outcomes that are invariant to differences in policy across environments, they require the strong assumption of no unobserved confounders, which may not hold in practice (e.g., Schulam and Saria (2017)). Our proposed solution, Counterfactual Normalization, generalizes these approaches to cases in which the unstable relationships (such as those due to domain-specific policy) may depend on unobserved variables or selection bias. Specifically, we train discriminative models using conditioning sets that contain only variables with stable relationships to the target prediction variable. Then, for vulnerable variables with unstable relationships to the target, we consider adding to the conditioning set counterfactual versions of these variables that sever the unstable paths of statistical influence. Further, because of their causal interpretations, we believe these counterfactual variables are more intelligible to human experts than existing adjustment-based methods: for example, it seems easier to reason about "the blood pressure if the patient had not been treated" than about interaction features or kernel embeddings, a hypothesis we would like to test in a future user study. As demonstrated by our experiments, models trained using Counterfactual Normalization have performance that is more stable to changes across environments and is not coupled to artifacts in the training domain.
The authors would like to thank Katie Henry for her help in developing the sepsis classification DAG and Peter Schulam for suggesting experiment 5.1.1 and for help in clarifying the presentation of the method.
- Alaa and van der Schaar (2017) Alaa, A. M. and van der Schaar, M. (2017). Bayesian nonparametric causal inference: Information rates and learning algorithms. arXiv preprint arXiv:1712.08914.
- Bareinboim and Pearl (2012) Bareinboim, E. and Pearl, J. (2012). Controlling selection bias in causal inference. In AISTATS, pages 100–108.
- Bareinboim and Pearl (2013) Bareinboim, E. and Pearl, J. (2013). Meta-transportability of causal effects: A formal approach. In AISTATS, pages 134–143.
- Bareinboim and Tian (2015) Bareinboim, E. and Tian, J. (2015). Recovering causal effects from selection bias. In AAAI, pages 3475–3481.
- Booth et al. (2010) Booth, C., Inusa, B., and Obaro, S. K. (2010). Infection in sickle cell disease: a review. International Journal of Infectious Diseases, 14(1):e2–e12.
- Campbell and Stanley (1963) Campbell, D. T. and Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Handbook of research on teaching.
- Caruana et al. (2015) Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., and Elhadad, N. (2015). Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In KDD, pages 1721–1730. ACM.
- Chong and Zak (2013) Chong, E. K. and Zak, S. H. (2013). An introduction to optimization, volume 76. John Wiley & Sons.
- Correa and Bareinboim (2017) Correa, J. D. and Bareinboim, E. (2017). Causal effect identification by adjustment under confounding and selection biases. In AAAI, pages 3740–3746.
- Correa et al. (2018) Correa, J. D., Tian, J., and Bareinboim, E. (2018). Generalized adjustment under confounding and selection biases. In AAAI.
- Dvijotham et al. (2018) Dvijotham, K., Stanforth, R., Gowal, S., Mann, T., and Kohli, P. (2018). A dual approach to scalable verification of deep networks. In UAI.
- Dyagilev and Saria (2015) Dyagilev, K. and Saria, S. (2015). Learning (predictive) risk scores in the presence of censoring due to interventions. Machine Learning, 102(3):323–348. First Online 2015. Printed Version 2016.
- Friedman and Rafsky (1979) Friedman, J. H. and Rafsky, L. C. (1979). Multivariate generalizations of the Wald-Wolfowitz and Smirnov two-sample tests. The Annals of Statistics, pages 697–717.
- Futoma et al. (2017a) Futoma, J., Hariharan, S., and Heller, K. (2017a). Learning to detect sepsis with a multitask Gaussian process RNN classifier. In ICML.
- Futoma et al. (2017b) Futoma, J., Hariharan, S., Sendak, M., Brajer, N., Clement, M., Bedoya, A., O’Brien, C., and Heller, K. (2017b). An improved multi-output Gaussian process RNN with real-time validation for early sepsis detection. arXiv preprint arXiv:1708.05894.
- Gong et al. (2016) Gong, M., Zhang, K., Liu, T., Tao, D., Glymour, C., and Schölkopf, B. (2016). Domain adaptation with conditional transferable components. In ICML, pages 2839–2848.
- Goyette et al. (2004) Goyette, R. E., Key, N. S., and Ely, E. W. (2004). Hematologic changes in sepsis and their therapeutic implications. In Seminars in respiratory and critical care medicine, volume 25, pages 645–659. Thieme Medical Publishers, Inc., NY, USA.
- Gretton et al. (2009) Gretton, A., Smola, A. J., Huang, J., Schmittfull, M., Borgwardt, K. M., and Schölkopf, B. (2009). Covariate shift by kernel mean matching. In Dataset Shift in Machine Learning. MIT Press.
- Henry et al. (2015) Henry, K. E., Hager, D. N., Pronovost, P. J., and Saria, S. (2015). A targeted real-time early warning score (TREWScore) for septic shock. Science Translational Medicine, 7(299):299ra122.
- Ho and Basu (2000) Ho, T. K. and Basu, M. (2000). Measuring the complexity of classification problems. In Pattern Recognition, volume 2, pages 43–47. IEEE.
- Ho and Basu (2002) Ho, T. K. and Basu, M. (2002). Complexity measures of supervised classification problems. IEEE transactions on pattern analysis and machine intelligence, 24(3):289–300.
- Kennedy and O’Hagan (2001) Kennedy, M. C. and O’Hagan, A. (2001). Bayesian calibration of computer models. Journal of the Royal Statistical Society: Series B, 63(3):425–464.
- Koller and Friedman (2009) Koller, D. and Friedman, N. (2009). Probabilistic graphical models: principles and techniques. MIT press.
- Kumar et al. (2006) Kumar, A., Roberts, D., Wood, K. E., Light, B., Parrillo, J. E., Sharma, S., Suppes, R., Feinstein, D., Zanotti, S., Taiberg, L., et al. (2006). Duration of hypotension before initiation of effective antimicrobial therapy is the critical determinant of survival in human septic shock. Critical care medicine, 34(6):1589–1596.
- Neyman (1923) Neyman, J. (1923). On the application of probability theory to agricultural experiments. essay on principles. Annals of Agricultural Sciences, 10:1–51.
- Pearl (2009) Pearl, J. (2009). Causality. Cambridge University Press.
- Pearl (2015) Pearl, J. (2015). Causes of effects and effects of causes. Sociological Methods & Research, 44(1):149–164.
- Pearl and Bareinboim (2011) Pearl, J. and Bareinboim, E. (2011). Transportability of causal and statistical relations: a formal approach. In AAAI, pages 247–254. AAAI Press.
- Quionero-Candela et al. (2009) Quionero-Candela, J., Sugiyama, M., Schwaighofer, A., and Lawrence, N. D. (2009). Dataset shift in machine learning. MIT Press.
- Raghunathan et al. (2018) Raghunathan, A., Steinhardt, J., and Liang, P. (2018). Certified defenses against adversarial examples. In ICLR.
- Richardson and Robins (2013) Richardson, T. S. and Robins, J. M. (2013). Single world intervention graphs (swigs): A unification of the counterfactual and graphical approaches to causality. Center for the Statistics and the Social Sciences, University of Washington Series. Working Paper, 128(30):2013.
- Rothenhäusler et al. (2018) Rothenhäusler, D., Bühlmann, P., Meinshausen, N., and Peters, J. (2018). Anchor regression: heterogeneous data meets causality. arXiv preprint arXiv:1801.06229.
- Rubin (1974) Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5):688.
- Salvatier et al. (2016) Salvatier, J., Wiecki, T. V., and Fonnesbeck, C. (2016). Probabilistic programming in Python using PyMC3. PeerJ Computer Science, 2:e55.
- Schulam and Saria (2017) Schulam, P. and Saria, S. (2017). Reliable decision support using counterfactual models. In NIPS, pages 1696–1706.
- Shimodaira (2000) Shimodaira, H. (2000). Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of statistical planning and inference, 90(2):227–244.
- Shpitser and Pearl (2007) Shpitser, I. and Pearl, J. (2007). What counterfactuals can be tested. In UAI, pages 352–359. AUAI Press.
- Shpitser and Pearl (2008) Shpitser, I. and Pearl, J. (2008). Complete identification methods for the causal hierarchy. Journal of Machine Learning Research, 9(Sep):1941–1979.
- Sinha et al. (2018) Sinha, A., Namkoong, H., and Duchi, J. (2018). Certifying some distributional robustness with principled adversarial training. In ICLR.
- Snelson and Ghahramani (2006) Snelson, E. and Ghahramani, Z. (2006). Sparse Gaussian processes using pseudo-inputs. In NIPS, pages 1257–1264.
- Soleimani et al. (2017) Soleimani, H., Hensman, J., and Saria, S. (2017). Scalable joint models for reliable uncertainty-aware event prediction. IEEE transactions on pattern analysis and machine intelligence.
- Spirtes et al. (1995) Spirtes, P., Meek, C., and Richardson, T. (1995). Causal inference in the presence of latent variables and selection bias. In UAI, pages 499–506.
- Storkey (2009) Storkey, A. (2009). When training and test sets are different: characterizing learning transfer. Dataset shift in machine learning, pages 3–28.
- Swaminathan and Joachims (2015) Swaminathan, A. and Joachims, T. (2015). Counterfactual risk minimization: Learning from logged bandit feedback. In ICML, pages 814–823.
- Zhang et al. (2013) Zhang, K., Schölkopf, B., Muandet, K., and Wang, Z. (2013). Domain adaptation under target and conditional shift. In ICML, pages 819–827.
Appendix A Counterfactual Normalization Proofs
a.1 Proof of Theorem 1
We must show that on iteration , removing a variable from does not create an active unstable path to a member of of length .
Suppose, by contradiction, that on iteration removing a variable with an active unstable path of length to results in an active unstable path of length with respect to another variable . Note that removing a variable from a conditioning set cannot create new collider paths. Let the notation denote that the direction of an edge does not matter. We will consider all cases of how the removed variable can relate to an unstable path. In the first two cases, it comes before the other variable in the unstable active path.
Case 1: . does not have an unstable path to . If it did, the path would be of the form . Thus, would have been removed from in a previous iteration because the unstable path is of length and its active status does not depend on .
Case 2: . This is an unstable path to of length . cannot be in since it would have been removed in a previous iteration as the active status of this path does not depend on .
Case 3: . Creates new active path of length .
Case 4: . Creates new active path of length .
Case 5: . Creates new active path of length .
Case 6: . We remove from the conditioning set (so it is now considered unobserved). Thus, this collider path is not active. If a descendant of is conditioned on, then this is an unstable active path of length .
In all cases, either would have been removed from before iteration or the new unstable active path would be of length . This is a contradiction since we assumed and that the procedure would result in a new active unstable path of length . ∎
a.2 Proof of Theorem 2
We must show that Algorithm 3 will not activate any unstable paths with respect to the initial stable set . While considering each vulnerable variable , the resulting set must remain stable.
We assume the initial set is stable. The only way to activate a path (stable or unstable) by adding to a conditioning set is if the new variable being added is a collider or descendant of a collider.
Algorithm 3 only adds to in branches 2 and 3 of the if-else. We consider each branch in turn.
In branch 2, the vulnerable variable has no active unstable paths to . Thus, by adding to , no unstable path from to is active. Next we consider the possibility of being a collider or descendant of a collider. In these cases, if a path is activated, all branches of the activated path will be reachable. Since has no unstable active paths to , none of the branches of the collider can be unstable paths to . Thus, the collider path is not unstable.
In branch 3 of the if-else, the active unstable paths of the vulnerable variable all go through some subset of the observed parents of . Denote this subset of parents of as . We intervene on and node-split to add the counterfactual to the modified graph. The modified graph contains the path . Conditioning on does not allow for paths through the collider . We know the paths through the parents of are stable because we intervened on all parents on unstable paths from . Further, there are no active unstable paths through children of , otherwise it would not be considered in branch 3. As in the case of branch 2, if is a collider or descendant of a collider, then since all parts of any activated collider path are reachable from the counterfactual variable through its parents, they must be stable.
Thus, when Algorithm 3 adds a variable or counterfactual variable to the stable set , no unstable paths are activated and the set remains stable. ∎
Appendix B Linear Gaussian Experiment Details
b.1 Simulation Details
We generate the data from the following linear Gaussian SEMs:
In the training domain, we set and . We simulated 100 test domains by varying from to in equally spaced increments. In all domains we generated 30,000 samples. We fit all models (structural equation of , naive model, ideal model, and counterfactually normalized model) on the training data using least squares, then applied them to all test domains.
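Since the SEM equations were lost from this copy, the sketch below uses an illustrative linear Gaussian SEM with the same qualitative structure: an unobserved variable u whose influence on a vulnerable variable t changes across domains. All coefficients and the shift parameter are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(n, alpha):
    """Hypothetical linear Gaussian SEM: u is unobserved, and its
    influence on the vulnerable variable t varies across domains
    through alpha."""
    u = rng.normal(size=n)
    y = rng.normal(size=n)                  # prediction target
    t = alpha * u + rng.normal(size=n)      # vulnerable variable
    x = y + 0.8 * t + u + rng.normal(0, 0.5, size=n)
    return y, t, x

# Fit both models by least squares in the source domain (alpha = 1).
y, t, x = simulate(30000, alpha=1.0)
w_naive = np.linalg.lstsq(np.c_[x, t, np.ones(len(x))], y, rcond=None)[0]
w_cfn = np.linalg.lstsq(np.c_[x - 0.8 * t, np.ones(len(x))], y, rcond=None)[0]

# Evaluate in the source domain and a shifted test domain.
mse = {}
for alpha in [1.0, -1.0]:
    y2, t2, x2 = simulate(30000, alpha=alpha)
    mse[("naive", alpha)] = np.mean((np.c_[x2, t2, np.ones(len(x2))] @ w_naive - y2) ** 2)
    mse[("cfn", alpha)] = np.mean((np.c_[x2 - 0.8 * t2, np.ones(len(x2))] @ w_cfn - y2) ** 2)
print(mse)
```

The naive model wins in the source domain but degrades when the mechanism for t shifts, while the counterfactually normalized model's error is stable across domains.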
b.2 Counterfactual Normalization
is unobserved and varies between domains. Thus, is vulnerable because conditioning on results in the unstable active path . , as a descendant of , is also vulnerable because without conditioning on , conditioning on results in the unstable active path . The only shared child of vulnerable variables and is . This means we need to perform node-splitting on to generate an intermediate counterfactual version, for which we have removed the effects of the vulnerable parent . The graph after node-splitting is shown in Figure 8.
The counterfactual variable inherits the parents of from the original graph that we did not intervene upon (i.e., set to null). In this case, the parents it inherits are and the unpictured . The SEMs in the modified graph are:
Importantly, note that the counterfactual is now a random quantity, while is a deterministic function of and . The modified SEMs are observationally equivalent to the original SEMs (marginalizing over the latent counterfactual yields the same joint as in the original system).
We can recover the counterfactual by observing the outcome and the vulnerable parents. This makes the role of the intermediate counterfactual variable clear: it isolates the effect of the target on the outcome from the effects of the vulnerable variables (or any other parents we set to null) on the outcome.
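In assumed notation (the paper's symbols were elided from this copy, so the variable names below are illustrative), with an additive structural equation for the outcome, the recovery step is:

```latex
% Assumed additive structural equation for outcome X with target Y,
% vulnerable parent T, and unobserved U:
X = \alpha Y + \beta T + \gamma U + \epsilon
\qquad\Longrightarrow\qquad
X(T = \varnothing) = \alpha Y + \gamma U + \epsilon = X - \beta T .
```

That is, once the coefficient on the vulnerable parent is estimated, the counterfactual is computable from the observed outcome and parent alone.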
Appendix C Cross Hospital Transfer Experiment Details
c.1 Simulation Details
We generate data at the source hospital as follows:
At the target hospital, we change and :
We generate 2000 patients from the source hospital, using 1600 for training and holding out 400 to evaluate performance on the source hospital. We evaluate cross hospital transfer on 1000 patients generated from the second hospital.
c.2 Class Conditional Densities
Appendix D Real Data Experiment Details
Our posited structural equation for INR is a linear regression on the parents of INR in Figure 5. The seven conditions we include are liver disease, sickle cell disease, chronic kidney disease, any immunodeficiency, any cancer, diabetes, and stroke. In the statistical uncertainty quantification community, one technique for parameter calibration when the computer model is misspecified is to jointly estimate the model parameters with an explicit discrepancy function that captures model inadequacy (Kennedy and O’Hagan, 2001). The discrepancy function has a Gaussian process prior. The parameters to estimate are the linear regression parameters, the observation noise scale, the RBF kernel output scale, and the kernel lengthscales.
We placed the following priors on parameters:
We used the PyMC3 FITC sparse GP approximation implementation with 20 inducing points initialized by k-means.