I-SPEC: An End-to-End Framework for Learning Transportable, Shift-Stable Models

Shifts in environment between development and deployment cause classical supervised learning to produce models that fail to generalize to new target distributions. Recently, many solutions which find invariant predictive distributions have been developed. Among these, graph-based approaches do not require data from the target environment and can capture more stable information than alternative methods which find stable feature sets. However, these approaches assume that the data generating process is known in the form of a full causal graph, which is generally not the case. In this paper, we propose I-Spec, an end-to-end framework that addresses this shortcoming by using data to learn a partial ancestral graph (PAG). Using the PAG, we develop an algorithm that determines an interventional distribution that is stable to the declared shifts; this subsumes existing approaches that find stable feature sets, which retain less information and are therefore less accurate. We apply I-Spec to a mortality prediction problem to show it can learn a model that is robust to shifts without needing upfront knowledge of the full causal DAG.



1 Introduction

One of the primary barriers to the deployment of machine learning models in safety-critical applications is unintended behaviors arising at deployment that were not problematic during model development. For example, predictive policing systems have been shown to be vulnerable to predictive feedback loops that cause them to disproportionately overpatrol certain neighborhoods (Lum and Isaac, 2016; Ensign et al., 2018), and a patient triage model erroneously learned that asthma lowered the risk of mortality in pneumonia patients (Caruana et al., 2015). At the heart of many such unintended behaviors are shifts in environment—changes in the conditions that generated the training data and deployment data (Subbaswamy and Saria, 2019). An important step for ensuring that models will perform reliably under shifting conditions is for model developers to anticipate failures and train models in a way that addresses likely sources of error.

Consider the study of Zech et al. (2018), who trained a model to diagnose pneumonia from chest X-rays using data from one hospital. Notably, the X-rays contained stylistic features, including inlaid tokens that encoded geometric information such as the X-ray orientation. When they applied the model to data from other hospital locations, they found the model’s performance significantly deteriorated, indicating that it failed to generalize across the shifts between hospitals. In particular, shifts in the distribution of style features occurred due to differences in equipment at different hospital locations and differences in imaging protocols between hospital departments.

More formally, Zech et al. encountered an instance of dataset shift, in which shifts in environment resulted in differing train and test distributions. Typical solutions for addressing dataset shift use samples from the test distribution to reweight training samples during learning (see Quiñonero-Candela et al. (2009) for an overview). In many practical applications, however, there are unknown or multiple possible test environments (e.g., for a cloud-based machine learning service), making it infeasible to acquire test samples. In contrast with reweighting solutions, proactive solutions do not use test samples during learning. For example, one class of proactive solutions is entirely dataset-driven: using datasets from multiple training environments, they empirically determine a stable conditional distribution that is invariant across the datasets (e.g., Muandet et al. (2013); Rojas-Carulla et al. (2018); Arjovsky et al. (2019)). A model of this distribution is then used to make predictions in new, unseen environments.

While the dataset-driven methods find a distribution that is invariant across the training datasets, they do not, in general, provide guarantees about the specific shifts in environment to which the resulting models are stable. This information is crucially important in safety-critical domains where incorrect decision making can lead to failures. In the pneumonia example, suppose we had multiple training datasets which contained shifts in style features due to differing equipment, but, critically, did not contain shifts in protocols between departments. When a dataset-driven method finds a predictive distribution that is invariant across the training datasets, its developers will not know that this distribution is stable to shifts in equipment but is not stable to shifts in imaging protocols. When the resulting model is deployed at a hospital with different imaging protocols (e.g., distribution of front-to-back vs back-to-front X-rays), the model will make (potentially arbitrarily) incorrect predictions resulting in unanticipated misdiagnoses and disastrous failures.

Alternative methods use graphical representations of the data generating process (DGP) (e.g., causal directed acyclic graphs (DAGs)), letting developers proactively reason about the DGP to specify shifts and provide stability guarantees. One advantage of explicit graph-based methods is that they allow the computation of stable interventional (Subbaswamy et al., 2019b) and counterfactual distributions (Subbaswamy and Saria, 2018); these retain more stable information than conditional distributions, leading to higher accuracy. A primary challenge in applying these approaches, however, is that in large-scale complex domains it is very difficult to fully specify the graph (i.e., edge adjacencies and directions) from prior knowledge alone. We address this by extending graphical methods for finding stable distributions to partial graphs that can be learned directly from data.

Contributions: A key impediment to the deployment of machine learning is the lack of methods for training models that can generalize despite shifts across training and test environments. Stable interventional distributions estimated from data yield models that are guaranteed to be invariant to shifts (e.g., the modeler can upfront identify which shifts the model is protected against). However, to estimate such distributions, prior approaches require knowledge of the underlying causal DAG or extensive samples from multiple training environments. We propose I-Spec, a novel end-to-end framework which allows us to estimate stable interventional distributions when we do not have prior causal knowledge of the full graph. To do so, we learn a partial ancestral graph (PAG) from data; the PAG captures uncertainty in the graph structure. Then, we use the PAG to inform the choice of mutable variables, or shifts to protect against. We develop an algorithm that uses the PAG and set of mutable variables to determine a stable interventional distribution. We prove the soundness of the algorithm and prove that it subsumes existing dataset-driven approaches which find stable conditional distributions. Empirically, we apply I-Spec to a large, complicated healthcare problem and show that we are able to learn a PAG, use it to inform the choice of mutable variables, and learn models that generalize well to new environments. We also use simulated data to provide insight into when stable models are desirable by examining how shifts of varying magnitude affect the difference in performance between stable and unstable models.

2 Background

The proposed framework, I-Spec, uses PAGs and interventional distributions, which we briefly overview here.

Notation: Sets of variables are denoted by bold capital letters and their assignments by bold lowercase letters. The sets of parents, children, ancestors, and descendants of a variable in a graph are denoted by Pa(·), Ch(·), An(·), and De(·), respectively. We consider prediction problems with observed variables V and target variable Y ∈ V.

Causal Graphs: We assume the DGP underlying a prediction problem can be represented as a causal DAG with latent variables, or equivalently, a causal acyclic directed mixed graph (ADMG) over V which contains directed (→) and bidirected (↔, representing unobserved confounding) edges. A causal mechanism is the functional relationship that generates a child from its (possibly unobserved) parents.

Multiple ADMGs can contain the same conditional independence and ancestral information about the observed variables V: consider, for example, an ADMG with edges X → Y and X ↔ Y and an ADMG with the single edge X → Y. However, an ADMG is associated with a unique maximal ancestral graph (MAG) (Richardson and Spirtes, 2002) which represents a set of ADMGs that share this information, with at most one edge between any pair of variables (e.g., the MAG associated with both of these ADMGs is X → Y). The problem is that multiple MAGs may contain the same independences (e.g., X → Y and X ← Y are Markov equivalent). Fortunately, a partial ancestral graph (PAG) P represents an equivalence class of MAGs, denoted by [P]. P and every MAG in [P] have the same adjacencies, but they may differ in the edge marks. An arrowhead (or tail) is present in P if that arrowhead (or tail) is present in all MAGs in [P]. Otherwise, the edge mark is a circle (○) and the edge is partially (or non) directed. The PAG for X → Y, X ← Y, and X ↔ Y is X ○—○ Y. PAGs can be learned from data, and are the output of the FCI algorithm (Spirtes et al., 2000).

Because PAGs are partial graphs, we require a few additional definitions. First, a path from X to Y is possibly directed if no arrowhead along the path points towards X. In such a path, X is a possible ancestor of Y, and Y is a possible descendant of X. There are two kinds of directed edges in MAGs and PAGs. A directed edge X → Y is visible if there is a node Z not adjacent to Y, such that either there is an edge between Z and X that is into X, or there is a collider path between Z and X that is into X and every node on the path is a parent of Y (Maathuis and Colombo, 2015). Otherwise, the edge is invisible. The importance of visible edges is that if X → Y is visible, then there is no unobserved confounder of X and Y (i.e., no edge X ↔ Y in any ADMG in the equivalence class).

Interventional Distributions: We now review interventional distributions, which we use to make stable predictions. First, note that the distribution of observed variables V in an ADMG factorizes as

$P(\mathbf{V}) = \sum_{\mathbf{U}} \prod_{V_i \in \mathbf{V}} P(V_i \mid \mathrm{Pa}(V_i)) \prod_{U_j \in \mathbf{U}} P(U_j), \quad (1)$

where U are the unobserved variables corresponding to the bidirected edges and the parent sets Pa(·) may include variables in U. An interventional distribution of the form P(Y | do(X = x), Z) is defined in terms of the do(·) operator (Pearl, 2009). Graphically, in ADMGs the intervention do(X = x) deletes all edges into X. Distributionally, it deletes the P(X_i | Pa(X_i)) terms in (1) and fixes X = x to yield the interventional distribution. The difficulty of using interventional distributions is that they are not always identifiable as a function of the observational data distribution.
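To make the factorization and edge-deletion concrete, the following sketch (with made-up probabilities) computes both the observational conditional P(Y = 1 | X = 1) and the interventional P(Y = 1 | do(X = 1)) for a binary model with a latent confounder U of X and Y; the two differ precisely because U confounds X and Y:

```python
# Toy DGP with a latent confounder:  U -> X, U -> Y, X -> Y.
# All variables binary; the numbers are hypothetical, chosen for illustration.
p_u = {0: 0.6, 1: 0.4}
p_x_given_u = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}  # p_x_given_u[u][x]
p_y_given_xu = {(0, 0): {0: 0.9, 1: 0.1}, (0, 1): {0: 0.6, 1: 0.4},
                (1, 0): {0: 0.5, 1: 0.5}, (1, 1): {0: 0.2, 1: 0.8}}  # keyed (x, u)

def p_obs(x, y):
    """Observational joint over (X, Y): marginalize U out of the factorization."""
    return sum(p_u[u] * p_x_given_u[u][x] * p_y_given_xu[(x, u)][y] for u in (0, 1))

def p_do(x, y):
    """P(Y=y | do(X=x)): delete the P(X | U) term from (1) and fix X=x."""
    return sum(p_u[u] * p_y_given_xu[(x, u)][y] for u in (0, 1))

p_y1_given_x1 = p_obs(1, 1) / sum(p_obs(1, y) for y in (0, 1))  # ~0.71
p_y1_do_x1 = p_do(1, 1)                                         # ~0.62
```

Because U confounds X and Y, the conditional and interventional quantities disagree; a model of P(Y | X) would absorb the (possibly shifting) mechanism of X.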

Definition 1 (Causal Identifiability).

For disjoint X, Y, Z ⊆ V, the effect of an intervention do(X = x) on Y conditional on Z is said to be identifiable from P(V) in G if P_x(Y | Z) is (uniquely) computable from P(V) in any causal model which induces G.

The ID algorithm (Shpitser and Pearl, 2006a,b) takes disjoint sets X, Y, Z and an ADMG G and returns an expression for P_x(Y | Z) in terms of P(V) if it is identifiable. Recently, the ID algorithm was extended to PAGs (Jaber et al., 2019b), with the CIDP algorithm (Jaber et al., 2019a) returning an expression for P_x(Y | Z) if the conditional interventional distribution is uniquely computable (i.e., identifiable) from P(V) for all ADMGs in the equivalence class defined by a PAG P. We use CIDP in the proposed method to determine identified interventional expressions.
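As a simple instance of identifiability: when the confounder Z is observed (Z → X, Z → Y, X → Y), P(Y | do(X)) is identified by the back-door adjustment Σ_z P(z) P(Y | x, z). The sketch below (illustrative numbers) recovers the truncated-factorization ground truth using only observational quantities:

```python
# Observed confounder Z: Z -> X, Z -> Y, X -> Y (binary, hypothetical numbers).
p_z = {0: 0.7, 1: 0.3}
p_x_given_z = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}
p_y1_given_xz = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.6, (1, 1): 0.9}  # P(Y=1|x,z)

def p_do_truth(x):
    """Ground truth via truncated factorization: sum_z P(z) P(Y=1 | x, z)."""
    return sum(p_z[z] * p_y1_given_xz[(x, z)] for z in (0, 1))

def p_joint(x, y, z):
    """Observational joint, as an estimator with data would see it."""
    py1 = p_y1_given_xz[(x, z)]
    return p_z[z] * p_x_given_z[z][x] * (py1 if y == 1 else 1 - py1)

def p_do_backdoor(x):
    """Back-door adjustment: uses only observational P(Z) and P(Y | X, Z)."""
    total = 0.0
    for z in (0, 1):
        pz = sum(p_joint(xx, yy, z) for xx in (0, 1) for yy in (0, 1))
        p_y1_xz = p_joint(x, 1, z) / sum(p_joint(x, yy, z) for yy in (0, 1))
        total += pz * p_y1_xz
    return total
```

The identified expression is a functional of the observational distribution alone, which is exactly what the ID/CIDP algorithms return symbolically.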

3 Methods

We now present I-Spec, a framework for finding a stable distribution that is invariant to shifts in environments. I-Spec works as follows: Given datasets collected from multiple source environments, a user defines a graphical invariance specification by first learning a PAG from the combined datasets (without requiring prior causal knowledge). Then, the user can determine which shifts to protect against by reasoning about the PAG and consulting it regarding shifts that occurred across the datasets. Given the resulting invariance specification (i.e., PAG and shifts to protect against), graphical criteria are used to search for the best-performing stable interventional distribution which is guaranteed to be invariant to the specified shifts.

The rest of this section is organized as follows: In Section 3.1 we introduce invariance specifications. Next, in Section 3.2 we describe the steps of I-Spec (Algorithm 1) and prove its correctness. Then, in Section 3.3 we establish the superiority of stable interventional distributions over stable conditional distributions, proving that Algorithm 1 subsumes existing dataset-driven methods. Finally, in Section 3.4 we discuss how I-Spec can be adapted to settings in which data from only one environment is available.

3.1 Graphical Invariance Specifications

Our goal is to predict accurately in new environments without using test samples. To do so, we need a way to represent the possible environments and how they can differ. For this reason, we will now introduce invariance specs, which are built around a PAG and specify shifts to protect against. They do not require prior causal knowledge and can be learned from data. Given the invariance spec, we show that certain interventional distributions provide stability guarantees to the specified shifts.

First, we formalize the notion of a stable distribution. Stable distributions are the same in all environments, can be learned from the source environment data, and can be applied in the target environment without any adjustment.

Definition 2 (Stable Distribution).

For environment set E, a distribution P(Y | Z) is said to be stable if, for any two environments e_1, e_2 ∈ E corresponding to joint distributions P^(e_1)(V) and P^(e_2)(V), P^(e_1)(Y | Z) = P^(e_2)(Y | Z).
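As a concrete check of this definition, consider two environments that share the mechanism P(Y | X) but differ in P(X) (all numbers hypothetical). Computing the relevant quantities exactly shows P(Y | X) is stable while the marginal P(Y) is not:

```python
# Two environments share the causal mechanism P(Y | X) but differ in P(X);
# i.e., the mechanism generating X shifts. Numbers are illustrative.
p_y1_given_x = {0: 0.2, 1: 0.8}          # shared (stable) mechanism
p_x1 = {"env_a": 0.3, "env_b": 0.7}      # shifted mechanism

def marginal_p_y1(env):
    """P(Y=1) in a given environment: absorbs the shift in P(X)."""
    px1 = p_x1[env]
    return (1 - px1) * p_y1_given_x[0] + px1 * p_y1_given_x[1]

# P(Y | X) is identical in both environments by construction (stable),
# while P(Y) differs across them (unstable).
```

A model of P(Y | X) could therefore be transported between these environments without adjustment; a model of P(Y) could not.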

Stable distributions are defined with respect to a set of environments. We develop invariance specifications as a way to represent a set of environments when we do not have prior causal knowledge by generalizing selection diagrams (Pearl and Bareinboim, 2011; Subbaswamy et al., 2019b), a representation that assumes a known causal graph.

Definition 3 (Selection Diagram).

A selection diagram is an ADMG augmented with auxiliary selection variables S such that an edge S → X for S ∈ S denotes that the mechanism that generates X can vary across environments. Selection variables may have at most one child.

A selection diagram describes a set of environments whose DGPs share the same underlying graph structure (i.e., ADMG). Only the causal mechanisms associated with the children of selection variables may differ across environments, usually expressed as distributional shifts in the corresponding terms of the factorization of the joint distribution in Equation (1).

Selection diagrams assume that both the full graph (i.e., ADMG) and the shifts (i.e., placement of selection variables) are known, prohibiting their use in complex domains. A natural idea to relax this would be to define a selection PAG, thus allowing for uncertainty in the graphical structure. However, a PAG augmented with selection variables would not technically be a PAG—for example, selection variables could not be used to determine visible edges in the PAG. For this reason, we introduce the notion of a graphical invariance specification (or simply invariance spec) which generalizes selection diagrams.

Definition 4 (Invariance spec).

An invariance spec is a 2-tuple ⟨G, M⟩ consisting of a graphical representation, G, of the DGP and a set of mutable variables, M, whose causal mechanisms are vulnerable to shifts.

When G is a PAG, an invariance spec defines a set of environments which share the same underlying graph structure (i.e., ADMG) that is only known up to an equivalence class (the PAG). Now the mechanism shifts are associated with the mutable variables M (Subbaswamy et al., 2019b). Note that if G is an ADMG, then by augmenting G with selection variables as parents of M we recover a selection diagram. We will only consider the case in which G is a PAG, which means the invariance spec can be learned from data.

Like selection diagrams, invariance specs provide graphical criteria for determining if a distribution is stable. The next result states that a distribution is stable in an invariance spec when the distribution is stable in every selection diagram corresponding to the equivalence class [P].

Proposition 1.

Given an invariance spec ⟨P, M⟩, a distribution is stable if it is stable in every ADMG in the equivalence class [P] augmented with selection variables as parents of M.

Armed with this graphical stability criterion, we now extend a prior result which showed that intervening on M yields a stable distribution in ADMGs (Subbaswamy et al., 2019b, Proposition 1). The extension to PAGs is what permits the use of interventional distributions to make stable predictions across the environments represented by an invariance spec, and is thus key to the correctness of I-Spec.

Proposition 2.

For an invariance spec ⟨P, M⟩ and any Z ⊆ V \ ({Y} ∪ M), P(Y | do(M), Z) is stable to shifts in the mechanisms of M.

3.2 I-Spec Step by Step

input : Datasets D_1, ..., D_K; observed vars V; target Y; environment indicator E
output : Best stable interventional distribution found, or FAIL.
1  Learn invariance spec structure: PAG P over V ∪ {E};
2  Declare mutable variables M ⊆ PossCh(E);
3  Let L = ∅;
4  for Z ∈ 2^(V \ ({Y} ∪ M)) do
5        if P, M, Z satisfy Zhang (2008a, Thm 30) then
6              append P(Y | Z) to L;
7        else if CIDP(M, Y, Z; P) ≠ FAIL then
8              append P(Y | do(M), Z) to L;
9  if L = ∅ then
10       return FAIL;
11 return dist. in L with lowest validation loss;
Algorithm 1 I-Spec

We have just defined invariance specs and established that intervening on mutable variables yields a stable distribution. We are now ready to discuss each step of I-Spec (Alg 1).

1) Learning Invariance Spec Structure: The first step (Line 1) in creating an invariance spec is to learn the graphical representation of the DGP. We will assume faithfulness: the independences implied by the graph are the only independences in the data distribution. Consider the DGP represented by the ADMG in Fig 1a, in which the observed variables are V and the goal is to predict Y. While every environment (e.g., location) shares this graph (including unseen environments), certain mechanisms may vary across environments. If we knew this full graph, then we could use existing graphical methods (e.g., Subbaswamy et al. (2019b)) to find a stable distribution. However, in practice we usually do not know the full graph.

Instead, we will learn the structure of the DGP from datasets D_1, ..., D_K containing observations of V collected in training environments e_1, ..., e_K. While this problem has itself been extensively studied (as we discuss in Section 4; Related Work), we will use a simple extension of FCI (Spirtes et al., 2000), a constraint-based structure learning algorithm which learns a PAG over the observed variables. FCI uses conditional independence tests to determine adjacencies and create a graph skeleton, and then uses a set of orientation rules to determine where to place edge marks (Zhang, 2008b). We apply FCI by pooling the datasets D_1, ..., D_K, adding the environment indicator E as a variable, and adding the logical constraint that E causally precedes all variables in V (i.e., there can be no edge X → E for X ∈ V).
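The pooling step can be sketched as follows. As a crude stand-in for FCI's conditional-independence tests, we flag variables whose marginal distribution moves with E as candidate children of E; the mechanisms, variable names, and threshold are all hypothetical, and a real run would use an FCI implementation (e.g., in Tetrad) on the pooled data:

```python
import random

random.seed(0)

# Two toy source environments: the mechanism for W shifts across them, while
# the mechanism generating Y from X is shared (a stand-in for D_1, ..., D_K).
def sample_env(env, n):
    rows = []
    for _ in range(n):
        x = random.gauss(0, 1)
        w = random.gauss(1.0 if env == 1 else -1.0, 1)  # W's mechanism shifts
        y = 0.8 * x + random.gauss(0, 1)                 # stable mechanism
        rows.append({"E": env, "X": x, "W": w, "Y": y})
    return rows

# Pool the datasets and add the environment indicator E as a column.
pooled = sample_env(0, 2000) + sample_env(1, 2000)

def mean(vals):
    return sum(vals) / len(vals)

def shifted(var):
    """Difference-in-means check against E (hypothetical threshold)."""
    g0 = [r[var] for r in pooled if r["E"] == 0]
    g1 = [r[var] for r in pooled if r["E"] == 1]
    return abs(mean(g1) - mean(g0)) > 0.3

candidates = [v for v in ("X", "W", "Y") if shifted(v)]  # variables moved by E
```

Only W is flagged here; in the full framework these candidates correspond to PossCh(E), from which the mutable variables are chosen.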

Suppose we had datasets generated from multiple environments according to the DGP in Fig 1a. Using the pooled FCI variant we described, we would learn the PAG in Fig 1b. The PAG represents an equivalence class of ADMGs (which includes the one in Fig 1a). The circle (○) edge marks denote structural uncertainty: for each such mark there is an ADMG in the equivalence class in which the mark is an arrowhead, and an ADMG in the equivalence class in which it is a tail. Despite being a partial graph, the PAG still helps inform decisions about which shifts to protect against, as discussed next.


Figure 1: (a) An example ADMG. (b) PAG representing invariance spec structure for (a).

2) Declaring Mutable Variables: Given the graph, to complete the invariance spec we must declare the mutable variables M (Line 2). The graph suggests possible mechanism shifts that occurred across the datasets: the possible children of the environment indicator E (PossCh(E); nodes adjacent to E with the edge not into E). In Fig 1b, PossCh(E) can be read directly off the learned PAG. When there are many possible children of E, an advantage of having an explicit graph is that we can reason about and protect against only the shifts that are most likely to be problematic (vs defaulting to M = PossCh(E)). We demonstrate this process on a mortality prediction problem in our experiments.

3) Determining a Stable Distribution: Once we have the invariance spec ⟨P, M⟩, we need to find an identifiable interventional distribution that is stable to mechanism shifts in M. In particular, we want to select the one that produces a model which performs best on heldout validation data. We established in Proposition 2 that conditional interventional distributions which intervene on M are stable (i.e., distributions of the form P(Y | do(M), Z)). To check identifiability, we use two existing graphical criteria in PAGs (Lines 5 and 7), but delay further discussion of these until the next section (3.3).

Unfortunately, searching for the optimal conditioning set Z that yields a stable identifiable interventional distribution (Alg 1, Lines 4-8), like feature subset selection, is NP-Hard. Following related methods (e.g., Magliacane et al. (2018); Subbaswamy et al. (2019b)) we consider an exhaustive search over the feature powerset 2^(V \ ({Y} ∪ M)), but note that many strategies for improving scalability exist, including greedy searches (Rojas-Carulla et al., 2018) or search space pruning (using, e.g., regularization).
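The search in Lines 4-8 can be sketched as below. The graphical check and the validation losses are hypothetical placeholders standing in for Zhang's Theorem 30 / CIDP and a held-out model fit, respectively:

```python
from itertools import chain, combinations

# Enumerate candidate conditioning sets Z, keep those passing a
# stability/identifiability check, and return the one with the lowest
# validation loss. Feature names, the check, and losses are illustrative.
features = ("A", "B", "C")

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def is_stable_and_identifiable(z):
    # Placeholder graphical criterion: e.g., conditioning on C opens an
    # unstable path, so any Z containing C is rejected.
    return "C" not in z

# Mock held-out validation losses for each admissible conditioning set.
validation_loss = {(): 0.9, ("A",): 0.5, ("B",): 0.7, ("A", "B"): 0.4}

stable_sets = [z for z in powerset(features) if is_stable_and_identifiable(z)]
best = min(stable_sets, key=lambda z: validation_loss[z])
```

Here the exhaustive search selects Z = ("A", "B"); greedy or pruned variants would traverse the same space more cheaply at the cost of optimality guarantees.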

Correctness: We can now establish that Algorithm 1 does, in fact, return distributions which are guaranteed to be stable to the specified shifts.

Corollary 3 (Soundness).

If Algorithm 1 returns a distribution, then that distribution is stable to shifts in the mechanisms of M.

3.3 Connection to Dataset-driven Methods

We now show that I-Spec subsumes the ability of existing dataset-driven approaches to find stable (conditional) distributions. This is a consequence of the fact that stable conditional distributions are also stable interventional distributions, as discussed next.

A prior result provides a sound and complete criterion for cases in which interventional distributions in PAGs reduce to conditional distributions (Zhang, 2008a, Theorem 30). We adapt the criterion (Line 5) to find stable conditional distributions: cases in which P(Y | do(M), Z) = P(Y | Z). Distributions satisfying Line 5 are exactly the stable distributions that can be found by existing dataset-driven methods.

However, not all identifiable interventional distributions reduce to conditionals; some are instead more complex functionals of the observational distribution. These can be found using the CIDP algorithm (Jaber et al., 2019a). For example, in Fig 1b, there are choices of mutable variables for which CIDP identifies a stable interventional distribution, while the only stable conditional distribution that can be found via Line 5 conditions on strictly fewer features. We can now prove the main result of this section:

Lemma 4.

Suppose a dataset-driven method finds P(Y | Z) to be stable given the input to Algorithm 1. Then Algorithm 1 finds this distribution to be stable as well.

Lemma 5.

Algorithm 1 finds stable distributions that cannot be expressed as conditional observational distributions.

The following is now immediate:

Corollary 6.

Algorithm 1 subsumes methods that find stable conditional (observational) distributions.
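The gap described by Lemma 5 can be seen numerically in a toy front-door structure (not the paper's Fig 1b): E → X with X mutable, X → W → Y, and X ↔ Y via a latent U, with all probabilities hypothetical. The natural conditionals (e.g., P(Y | X)) shift across environments, yet the front-door functional for P(Y | do(X)), computed from each environment's observational distribution alone, is invariant:

```python
from itertools import product

# Graph: E -> X (X mutable), X -> W -> Y, X <-> Y via latent U. Binary vars.
p_u = {0: 0.5, 1: 0.5}
p_x1_given_u = {"a": {0: 0.2, 1: 0.9}, "b": {0: 0.6, 1: 0.3}}  # shifts with env
p_w1_given_x = {0: 0.3, 1: 0.8}                                 # stable
p_y1_given_wu = {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.9}  # stable

def joint(env):
    """Observational P(u, x, w, y) for one environment."""
    table = {}
    for u, x, w, y in product((0, 1), repeat=4):
        px = p_x1_given_u[env][u] if x == 1 else 1 - p_x1_given_u[env][u]
        pw = p_w1_given_x[x] if w == 1 else 1 - p_w1_given_x[x]
        py = p_y1_given_wu[(w, u)] if y == 1 else 1 - p_y1_given_wu[(w, u)]
        table[(u, x, w, y)] = p_u[u] * px * pw * py
    return table

def marg(table, **fixed):
    """Marginal probability with some of u, x, w, y fixed by keyword."""
    return sum(p for k, p in table.items()
               if all(k["uxwy".index(n)] == v for n, v in fixed.items()))

def cond_y1_given_x1(env):
    """Unstable conditional P(Y=1 | X=1): absorbs the shift in X's mechanism."""
    t = joint(env)
    return marg(t, x=1, y=1) / marg(t, x=1)

def front_door_y1_do_x1(env):
    """Front-door functional: sum_w P(w|x) sum_x' P(x') P(y|x',w)."""
    t = joint(env)
    total = 0.0
    for w in (0, 1):
        p_w_given_x1 = marg(t, x=1, w=w) / marg(t, x=1)
        inner = sum(marg(t, x=xp) * marg(t, x=xp, w=w, y=1) / marg(t, x=xp, w=w)
                    for xp in (0, 1))
        total += p_w_given_x1 * inner
    return total
```

The front-door expression is not a conditional observational distribution, yet it is identified in the PAG's equivalence class and agrees across environments, which is exactly the extra power Lemma 5 attributes to Algorithm 1.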

3.4 Special Case: Only One Source Dataset

I-Spec was constructed to take datasets from multiple environments as input to match the input of existing dataset-driven methods that, by default, require this. We briefly note that I-Spec is easily extensible to the case in which only data from a single environment is available. In this case, there is no environment indicator and one can simply learn a PAG P over V. Specification of the mutable variables M must then come from prior knowledge alone, but we note that this is how selection variables are typically placed (Pearl and Bareinboim, 2011). This yields an invariance spec ⟨P, M⟩, and stable interventional distributions can be found as before (i.e., Lines 4-8 of Alg 1). While it may be possible to modify other methods to only require one dataset, we believe the extension to this setting is most natural using I-Spec because it uses an explicit graph.

4 Related Work

Proactively Addressing Dataset Shift: The problem of differing train and test distributions is known as dataset shift (Quiñonero-Candela et al., 2009). Typical solutions assume access to unlabeled samples from the test distribution which are used to reweight training data during learning (e.g., Shimodaira (2000); Gretton et al. (2009)). However, in many practical applications it is infeasible to acquire test distribution samples. In this paper we consider shifts of arbitrary strengths when test samples are not available during learning, though there has been other work on bounded magnitude distributional robustness when shifts are of a known type and strength (Rothenhäusler et al., 2018; Heinze-Deml and Meinshausen, 2017).

Dataset-driven approaches use datasets from multiple training environments to determine a feature subset (Rojas-Carulla et al., 2018; Magliacane et al., 2018; Kuang et al., 2018) or feature representation (e.g., Muandet et al. (2013); Arjovsky et al. (2019)) that yields a conditional distribution that is invariant across the training datasets. Perhaps most related is Magliacane et al. (2018), whose method uses unlabeled target environment data, though it can be easily adapted to the setting of this paper. Notably, they allow for multiple environment (or “context”) variables, and additionally consider shifts in environment due to a variety of types of interventions. Dataset-driven methods do not require an explicit causal graph, and by default conservatively protect against all shifts they detect across datasets.

In contrast, some works assume explicit knowledge of the underlying graph (i.e., an ADMG) so that users can specify the shifts in mechanisms to protect against. Subbaswamy et al. (2019b) determine stable interventional distributions in selection diagrams (Pearl and Bareinboim, 2011) that can be used for prediction. Under the assumption of linear mechanisms, Subbaswamy and Saria (2018) find a stable feature set that includes counterfactual features. When there are no unobserved confounders, Schulam and Saria (2017) protect against shifts in action policies and consider continuous-time longitudinal settings. I-Spec allows for unobserved variables and inherits the benefits of using interventional distributions, but relaxes the need for a fully specified graph, instead using a partial graph learned from data.

Causal Discovery Across Multiple Environments: One line of research has focused exclusively on the problem of learning causal graphs using data from multiple environments. These methods could help extend I-Spec to other settings: For example, methods have been developed to learn a causal graph using data collected from multiple experimental contexts (Mooij et al., 2016; He and Geng, 2016) or non-stationary environments (Zhang et al., 2017). The FCI variant described in Section 3.2 might be viewed as a special case of FCI-JCI (Mooij et al., 2016), which allows for multiple environment/context variables. Triantafillou et al. (2010) consider the problem of learning a joint graph using datasets with different, but overlapping, variable sets. Others have considered local problems, e.g., using invariant prediction to infer a variable’s causal parents (Peters et al., 2016; Heinze-Deml et al., 2018) or Markov blanket (Yu et al., 2019) when there are no unobserved confounders.

5 Experiments

We perform two experiments to demonstrate the efficacy of I-Spec. In our first experiment, we show that we can apply the framework to large, complicated datasets from the healthcare domain. Specifically, we are able to learn a partial graph that provides meaningful insights into both how the variables are related and what shifts occurred across datasets. We show how these insights can inform the choice of invariance spec, further showcasing the flexibility of the procedure since we can consider different choices of the mutable variables. We empirically show that I-Spec finds distributions that generalize well to new environments and produce consistent predictions irrespective of the choice of training environment. In our second experiment, we measure the degree to which the magnitude of shifts in environments affects the difference in performance between stable and unstable models. We used simulated data to create a large number of datasets in order to compare performance under varying shifts. These results confirm that stable models have more consistent performance across shifted environments and that interventional distributions can capture more stable information than conditional ones.

5.1 Real Data: Mortality Prediction

Motivation and Dataset: Machine learning has been used to predict intensive care unit (ICU) mortality in order to triage patients and identify those most at risk (e.g., Pirracchio et al. (2015)). However, in addition to physiologic features, studies have shown that features related to clinical practice patterns (e.g., ordering frequency of lab tests) are highly predictive of patient outcomes (Agniel et al., 2018). Since these patterns vary greatly by hospital, accurate models trained at one hospital will have highly variable performance at others, which can lead to unreliable and potentially dangerous decisions when deployed (Schulam and Saria, 2017). Therefore, we apply the proposed method to learn an ICU mortality prediction model that is stable to shifts in the mechanisms of such practice-based features and will generalize well to new hospitals. We demonstrate this using data from ICU patients at a large hospital and test the model's ability to generalize to smaller hospitals.

We extract the first 24 hours of ICU patient data from three hospitals in our institution's network over a two-year period. The pooled dataset consists of 24,787 individuals: 16,608 from Hospital 1 (H1); 5,621 from Hospital 2 (H2); and 2,558 from Hospital 3 (H3). We also extract 17 features, including the worst value of 12 physiological variables (e.g., heart rate), age, type of admission (i.e., surgical or medical), and three underlying chronic diseases (e.g., metastatic cancer); these are the features used in the SAPS II score (Le Gall et al., 1993). To explicitly create a problematic shift, we simulate one practice-based variable: the time of day when lab measurements occur (i.e., morning or night), whose correlation with mortality varies by hospital: mortality is correlated with morning measurements at H1, uncorrelated with measurement timing at H2, and correlated with night measurements at H3.
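The simulated practice-based variable can be sketched as follows; the conditional probabilities are hypothetical, chosen only to induce the hospital-dependent correlations described above:

```python
import random

random.seed(0)

# 'Lab Time' (1 = morning, 0 = night) is drawn conditional on mortality, with
# a hospital-dependent mechanism: correlated with mortality at H1, independent
# at H2, and anti-correlated at H3. All probabilities are illustrative.
p_morning_given_mortality = {
    "H1": {0: 0.4, 1: 0.8},   # morning measurements correlate with mortality
    "H2": {0: 0.5, 1: 0.5},   # no correlation with mortality
    "H3": {0: 0.6, 1: 0.2},   # night measurements correlate with mortality
}

def simulate_lab_time(hospital, mortality):
    """Sample the simulated 'Lab Time' feature for one patient."""
    return int(random.random() < p_morning_given_mortality[hospital][mortality])
```

A model that leans on 'Lab Time' at H1 will thus be misled at H3, which is exactly the kind of practice-pattern shift the invariance spec is meant to guard against.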

Determining the Invariance Spec: To determine the invariance spec ⟨P, M⟩, we first learned a PAG from the full pooled dataset, using the hospital ID as the environment indicator E. Specifically, using Tetrad we applied the FCI variant described in Section 3.2 and used the Degenerate Gaussian likelihood ratio test (Andrews et al., 2019). The learned PAG is given in Appendix D; we describe some aspects of it here to demonstrate its value.

12 variables are possible children of E, including demographic and physiologic variables such as 'Age' and 'Bicarbonate', and features associated with clinical practice such as 'Admit Type' and 'Lab Time'. Of the 10 variables adjacent to 'Mortality', 'Age' is the only parent—the other 9 variables are connected via bidirected edges. The explicit graph makes it easy to reason about the DGP: it tells us that 'Age' is a causal factor for mortality (e.g., older patients are more likely to die), while 'Bicarbonate' is related to mortality through unobserved common causes (such as an acute underlying kidney condition). The bidirected edge connecting 'Lab Time' to 'Mortality' indicates a practice-based, non-causal relationship, and the bidirected edge between 'Admit Type' and 'Mortality' is due to the latent condition that caused the admission, which also contributes to the risk of mortality.

In this example, if a model is not stable to shifts in practice pattern-based features, then the predictions it makes will be arbitrarily sensitive to changes in policies between datasets, such as shifts in the times when lab measurements are taken. This sensitivity would render the model unreliable, so we reason that shifts in administrative policies should not affect our mortality risk predictions. In contrast, shifts in physiologic mechanisms may encode clinically relevant changes: if there are differences in the treatments patients receive at different hospitals, this would affect the bicarbonate mechanism, for example. Because these shifts would be clinically meaningful, they should affect the decisions we make, and model predictions should not be invariant to them. Thus, one reasonable invariance spec is to take the mutable variables to be the practice-based features ‘Lab Time’ and ‘Admit Type’. Note the flexibility of this procedure: we are able to consider alternative invariance specs (i.e., different choices of mutable variables) and compare the sensitivity of the resulting solutions.

Baselines/Models: We consider three models that correspond to the three ways a model developer can respond to shifts in environment: ignoring shifts, protecting against all shifts in the datasets, or protecting against some shifts. Our first baseline, an unstable model, ignores shifts and uses all features. Our second baseline conservatively protects against all shifts in the data by using I-Spec with an invariance spec that declares every variable that shifts across hospitals to be mutable, emulating the conservative default data-driven behavior. Finally, to protect against only some shifts, we use I-Spec with the invariance spec defined before. Using the procedure in Alg 1, I-Spec uses 13 of the 18 features, while the conservative method uses 7; neither uses ‘Lab Time’ or ‘Admit Type’. For demonstration we train logistic regression models, though we emphasize that more complex models could be used instead.
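To make the setup concrete, the sketch below trains a model on a restricted feature subset. This is only an illustration: the feature names and toy data are hypothetical placeholders (not our actual cohort), the logistic regression is a plain gradient-descent implementation, and in practice an off-the-shelf learner would be used (our experiments used R's mlr).

```python
import math

def train_logreg(X, y, lr=0.5, iters=2000):
    """Plain gradient-descent logistic regression (no regularization)."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(iters):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict_proba(model, X):
    w, b = model
    return [1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            for xi in X]

# Hypothetical feature lists, for illustration only.
ALL_FEATURES = ["age", "bicarbonate", "lab_time", "admit_type"]
ISPEC_FEATURES = ["age", "bicarbonate"]   # drops the mutable practice variables
CONSERVATIVE_FEATURES = ["age"]           # protects against every observed shift

def subset(rows, names, all_names=ALL_FEATURES):
    """Restrict rows (lists ordered as ALL_FEATURES) to the named columns."""
    idx = [all_names.index(n) for n in names]
    return [[r[i] for i in idx] for r in rows]
```

Each of the three models is then the same learner fit to a different column subset; only the invariance spec changes which columns survive.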

Figure 2: Performance (AUROC) of unstable (blue; medians 0.88, 0.86, 0.80), stable conservative (red; medians 0.69, 0.73, 0.72), and stable I-Spec (green; medians 0.83, 0.87, 0.82) mortality prediction models trained at Hospital 1 but evaluated at each hospital.

Experimental Setup: We evaluate as follows. We randomly performed 80/20 train/test splits on the data from each hospital (repeated 100 times). To measure predictive performance, we used the H1 dataset to train the unstable, conservative, and I-Spec models, and evaluated their area under the ROC curve (AUROC) on the test patients from each hospital. This lets us see how robust a model’s performance is as it is applied to new environments. Beyond performance, we also evaluated the effect of shifts on model decisions. For each approach, we considered pairs of models (one trained at H1, and one trained at H2 or H3) and made predictions on the same test set patients. We then computed the rank correlation of the predictions via Spearman’s ρ. A value of ρ = 1 indicates that two models produce the same ordering of patients by predicted risk despite being trained at different hospitals (i.e., patient orderings are stable).
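Both evaluation metrics are simple to compute from scratch. The sketch below gives minimal implementations of our own (not from the paper's codebase): AUROC via the rank-sum (Mann-Whitney) formulation, and Spearman's ρ as the Pearson correlation of ranks, assuming no tied scores in the rank computation.

```python
def auroc(labels, scores):
    """AUROC as the fraction of positive/negative pairs ranked correctly,
    counting ties as half a win."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def spearman_rho(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks.
    For brevity this assumes no ties (no rank averaging)."""
    def ranks(x):
        order = sorted(range(len(x)), key=lambda i: x[i])
        r = [0.0] * len(x)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5
```

In the experiment, `spearman_rho` is applied to the predicted risks of two models (trained at different hospitals) on the same test patients; ρ = 1 means identical patient orderings.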

Results: Fig 2 shows boxplots of the AUROC of the models at each test hospital. As expected, the unstable model fails to generalize to new hospitals, with a significant drop in performance from H1 to H3 because the unstable lab time-mortality association flips sign. On the other hand, the I-Spec model generalizes well, outperforming the unstable model at the new hospitals H2 and H3. Comparing I-Spec to the conservative model, we see that the conservative model performs worse at all hospitals precisely because it protects against all shifts (leaving less predictive signal to learn from), though its performance also does not deteriorate at new hospitals because it, too, is stable.

Figure 3: Rank correlation between predictions by models trained at different hospitals but applied to the same test patients. Median ρ’s: unstable (0.82), conservative (0.90), I-Spec (0.86).

Fig 3 shows boxplots of the rank correlations of each model’s predictions. The unstable model has significantly less stable patient orderings than the two stable models: its rank correlations vary widely and drop to much lower values. The I-Spec and conservative models have similar rank correlations, though the conservative model’s ρ’s tend to be slightly higher because it protects against all shifts. Overall, we see that stable models produce significantly more consistent predictions (and, thus, more stable patient orderings) than the unstable model. The difference between the stable models, however, is that the I-Spec model has significantly and strictly better discriminative performance at all hospitals. This demonstrates that a careful choice of the mutable variables (as opposed to defaulting to declaring all shifted variables mutable) can yield stable and accurate models.

5.2 Simulated Data

Figure 4: MSE of different models as they are evaluated in different test environments. Vertical dashed lines denote the coefficient values associated with the two training environments.

To analyze the effect of the magnitude of shifts on the performance of stable and unstable models, we simulated data from a zero-mean linear Gaussian system according to the ADMG in Fig 1a. We shift one mechanism by changing the coefficient of the unobserved confounder in the corresponding structural equation.¹¹ We generated two source datasets (environments denoted by the vertical dashed lines in Fig 4) and trained three linear regression models: an unstable (green) model that uses all observed variables, a stable conditional (blue) model that conditions only on a stable feature set, and a stable interventional (red) model based on a stable interventional distribution. We then evaluated the mean squared error (MSE) of these models (plotted in Fig 4) in test environments created by varying the unstable coefficient.

As expected, under small shifts the unstable model outperforms both stable models, but the unstable model’s error grows rapidly with the magnitude of the shift, quickly becoming much worse than that of the stable models. The stable models, on the other hand, have performance that is consistent across environments, as desired. The interventional model achieves lower MSE than the conditional model because it uses stable information that the conditional model discards.
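The qualitative behavior in Fig 4 can be reproduced with a small simulation. The system below is a stand-in SCM of our own, not the exact system of Fig 1a: a hidden confounder U drives both the target Y and a proxy W whose coefficient c shifts across environments; the "unstable" regression uses W while the "stable" conditional regression does not.

```python
import random

def simulate(c, n, rng):
    """Sample from a toy zero-mean linear Gaussian SCM.
    U is hidden; W = c*U + noise is the unstable proxy; Y = X + U + noise."""
    X, W, Y = [], [], []
    for _ in range(n):
        u = rng.gauss(0, 1)
        x = rng.gauss(0, 1)
        X.append(x)
        Y.append(x + u + rng.gauss(0, 0.5))
        W.append(c * u + rng.gauss(0, 0.5))
    return X, W, Y

def ols2(x1, x2, y):
    """Solve the 2x2 normal equations for y ~ b1*x1 + b2*x2 (zero-mean)."""
    a11 = sum(v * v for v in x1)
    a22 = sum(v * v for v in x2)
    a12 = sum(p * q for p, q in zip(x1, x2))
    b1 = sum(p * q for p, q in zip(x1, y))
    b2 = sum(p * q for p, q in zip(x2, y))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

def mse(pred, y):
    return sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)

rng = random.Random(0)
# Pool two source environments (c = 1 and c = 2), mirroring the setup above.
Xs, Ws, Ys = [], [], []
for c in (1.0, 2.0):
    X, W, Y = simulate(c, 5000, rng)
    Xs += X; Ws += W; Ys += Y

bx, bw = ols2(Xs, Ws, Ys)                                  # unstable: uses W
bs = sum(p * q for p, q in zip(Xs, Ys)) / sum(v * v for v in Xs)  # stable: X only

def eval_mse(c):
    """MSE of both models in a test environment with coefficient c."""
    X, W, Y = simulate(c, 5000, rng)
    unstable = mse([bx * x + bw * w for x, w in zip(X, W)], Y)
    stable = mse([bs * x for x in X], Y)
    return unstable, stable
```

Evaluating `eval_mse` on a grid of c values reproduces the picture: the unstable model wins near the training coefficients and loses badly far from them, while the stable model's MSE is flat.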

6 Conclusion

In this paper we addressed one of the primary challenges facing the deployment of machine learning in safety-critical applications: shifts in environment between training and deployment. To this end, we proposed I-Spec, an end-to-end framework that lets us go from data to models that are guaranteed to be stable to shifts. Like existing graphical methods, I-Spec does not require data from the target environment and is able to capture more stable information in the data than methods which use stable conditional distributions. An important difference, however, is that I-Spec does not require prior knowledge of the full causal graph. As demonstrated in our healthcare experiments, this means I-Spec can be applied to problems in which existing graphical methods would have been too difficult to use. The experiments further demonstrated how the framework can be used to discover shifts, determine which ones to protect against, and train accurate, stable models. To broaden I-Spec’s applicability, a valuable direction for future work would be to handle differing variable sets across datasets.


Acknowledgments

The authors thank Dan Malinsky for helpful discussions about structure learning, the Tetrad developers for promptly providing an implementation of the Degenerate Gaussian score, and Sieu Tran for help in implementing an earlier version of this work.

Appendix A Invariant Conditionals in PAGs and the CIDP Algorithm

A.1 Additional PAG Preliminaries

We first provide some additional definitions and facts about PAGs. These are relevant for understanding Theorem 7.

The d-separation criterion in DAGs is naturally generalized to encode conditional independences in mixed graphs through m-separation (Richardson and Spirtes, 2002). A path is m-connecting given a set Z if every collider on the path (i.e., every node at which both incident edges point in, as in A → B ← C) is in Z or has a descendant in Z, and no non-collider on the path is in Z.

In PAGs we must also account for uncertainty in whether or not a node is a collider along a path. Letting ∗ denote a wildcard edge mark (head, tail, or circle), a node V is a definite non-collider on a path if there is at least one edge out of V on the path, or if A ∗–∗ V ∗–∗ B is a subpath and A and B are not adjacent. A definite status path is one in which every non-endpoint node is either a collider or a definite non-collider (Maathuis and Colombo, 2015). These definitions let us extend m-connection (and separation) to PAGs: a definite status path is m-connecting given Z if every definite non-collider is not in Z and every collider on the path is in Z or has a descendant in Z.
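The collider/non-collider bookkeeping above can be sketched in code. This is a simplified illustration of our own: it classifies a single path from its edge marks and, for brevity, requires colliders to be in Z itself rather than merely having a descendant in Z.

```python
def node_status(left_mark, right_mark):
    """Status of an interior path node given the two edge marks at it.
    '>' = arrowhead, '-' = tail, 'o' = circle."""
    if left_mark == '>' and right_mark == '>':
        return 'collider'                      # both edges point into the node
    if left_mark == '-' or right_mark == '-':
        return 'definite non-collider'         # at least one edge out of it
    return 'not definite'                      # circle marks: status ambiguous

def is_m_connecting(nodes, marks, Z):
    """Check whether a definite status path is m-connecting given Z.
    `nodes` is the node sequence; `marks[i]` is the pair of marks
    (at nodes[i], at nodes[i+1]) for the i-th edge.
    Simplification: colliders must be in Z itself, not just An(Z)."""
    for i in range(1, len(nodes) - 1):
        status = node_status(marks[i - 1][1], marks[i][0])
        if status == 'not definite':
            raise ValueError(f'{nodes[i]} is not of definite status')
        if status == 'collider' and nodes[i] not in Z:
            return False                       # collider blocks unless conditioned on
        if status == 'definite non-collider' and nodes[i] in Z:
            return False                       # conditioning blocks a non-collider
    return True
```

For example, the chain A → B → C is m-connecting given the empty set but blocked given {B}, while the collider A → B ← C behaves the opposite way.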

A.2 Invariance Criterion for Conditionals

Theorem 7 (Zhang (2008a), Theorem 30).

Suppose 𝒫 is the PAG over the observed variables V. For disjoint Y, Z ⊆ V and a set W ⊆ V ∖ Y, P(Y | Z) is invariant under interventions on W in 𝒫 if and only if

1. for every X ∈ W ∩ Z, every definite status m-connecting path, if any, between X and any member of Y given Z ∖ {X} is out of X with a visible edge;

2. for every X ∈ W ∖ Z that is a possible ancestor of a member of Z, there is no definite status m-connecting path between X and any member of Y given Z;

3. for every X ∈ W ∖ Z that is not a possible ancestor of any member of Z, every definite status m-connecting path, if any, between X and any member of Y given Z is into X.

As originally written, verifying Theorem 7 involves checking individual definite status paths in the PAG. We will reduce the conditions to equivalent ones that can be verified in MAGs derived from the PAG, which will, in general, have fewer paths, and for which efficient m-separation routines have been implemented (e.g., in the R package dagitty (Textor et al., 2016)). First, we require the following definitions from Maathuis and Colombo (2015), with the addition of $M_{\overline{X}}$.

Definition 5 ($\mathcal{M}_X(\mathcal{P})$, $M_{\underline{X}}$, and $M_{\overline{X}}$).

Let X be a vertex in PAG 𝒫. Define $\mathcal{M}_X(\mathcal{P})$ to be the set of MAGs in the equivalence class described by 𝒫 that have the same number of edges into X as 𝒫. For any $M \in \mathcal{M}_X(\mathcal{P})$, let $M_{\underline{X}}$ be the graph obtained from M by removing all directed edges out of X that are visible in 𝒫. For any $M \in \mathcal{M}_X(\mathcal{P})$, let $M_{\overline{X}}$ be the graph obtained from M by removing all edges (directed or bidirected) into X.

Theorem 7 can now be verified via Lemma 8:

Lemma 8.

For W, Y, Z as in Theorem 7 and X ∈ W, the Theorem 7 conditions are equivalent to

1. for every $M \in \mathcal{M}_X(\mathcal{P})$, X is m-separated from Y given Z ∖ {X} in $M_{\underline{X}}$;

2. for every $M \in \mathcal{M}_X(\mathcal{P})$, X is m-separated from Y given Z in M;

3. for any $M \in \mathcal{M}_X(\mathcal{P})$, X is m-separated from Y given Z in $M_{\overline{X}}$.

Proof of Lemma 8.

Consider each condition in turn.

  1. This equivalence is a restatement of Lemma 7.4 in Maathuis and Colombo (2015) (which states the condition as: there is no m-connecting path between X and Y given Z ∖ {X} in $M_{\underline{X}}$, i.e., they are m-separated).

  2. This equivalence follows from the definition of a PAG. All MAGs in the equivalence class represented by 𝒫 share the same conditional independences. Thus, if X is m-separated from Y given Z in one MAG in $\mathcal{M}_X(\mathcal{P})$, then the same holds in 𝒫. Similarly, if it holds in 𝒫, then it holds in all MAGs in $\mathcal{M}_X(\mathcal{P})$.

  3. To prove this equivalence we must prove the following: let $M \in \mathcal{M}_X(\mathcal{P})$. Then there is a definite status m-connecting path from X to Y given Z in 𝒫 that is not into X if and only if there is an m-connecting path between X and Y given Z in $M_{\overline{X}}$. The style of the proof follows that of the proof of Lemma 7.4 in Maathuis and Colombo (2015).

    First, the only if direction. Suppose there is a definite status m-connecting path, p, between X and Y given Z in 𝒫 that is not into X. Let $p_M$ be this path in M and $\bar{p}$ be this path in $M_{\overline{X}}$. As noted in Zhang (2008a), if a path is definite status m-connecting in a PAG, then the corresponding path in every MAG in the equivalence class is m-connecting. Thus, we know that $p_M$ is m-connecting. Further, $p_M$ is out of X because p was not into X, and by construction M has no additional edges into X when compared to 𝒫. Since $M_{\overline{X}}$ only deletes edges into X when compared to M, the path $\bar{p}$ is no different from $p_M$ and is also out of X. $\bar{p}$ is also m-connecting: the only way for $\bar{p}$ to not be m-connecting while $p_M$ is would be for $\bar{p}$ to contain a collider that became inactive after the edges into X were deleted; but any ancestral path from such a collider to Z that passes through X would be directed and out of X, and so is unaffected by those deletions. Thus, $\bar{p}$ is m-connecting and out of X in $M_{\overline{X}}$.

    Now the if direction. Suppose there is an m-connecting path between X and Y given Z in $M_{\overline{X}}$. Because this path is out of X, the corresponding path in M is unaffected by the edge deletions and is also m-connecting. By Lemma 5.1.9 in Zhang (2006), since $M \in \mathcal{M}_X(\mathcal{P})$, this means there is a definite status m-connecting path between X and Y given Z in 𝒫 that is not into X. ∎

A.3 CIDP Algorithm

We briefly restate key aspects of the CIDP algorithm here. For full details see Jaber et al. (2019a).

Jaber et al. (2019a) introduce additional constructs that are used in the CIDP algorithm. In what follows, we will use PossPa(X) (resp. PossCh(X)) to denote the union of X and the set of possible parents (children) of X, and we define the analogous sets for sets of nodes. We will let Pa(X) denote PossPa(X) excluding the possible parents of X that are due only to circle edges, and we similarly define Ch(X). Let a circle path be a path on which all edge marks are circles. Define a bucket to be a maximal set of nodes connected by circle paths.
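Buckets are easy to operationalize: they are the connected components of the subgraph formed by edges whose marks are circles at both ends. A minimal sketch, in our own notation ('o' denotes a circle mark, '>' an arrowhead, '-' a tail):

```python
from collections import defaultdict

def buckets(nodes, edges):
    """Partition PAG nodes into buckets: maximal sets of nodes connected
    by circle paths. `edges` maps each pair (u, v) to its two end marks."""
    adj = defaultdict(set)
    for (u, v), (mu, mv) in edges.items():
        if mu == 'o' and mv == 'o':        # an o-o edge lies on a circle path
            adj[u].add(v)
            adj[v].add(u)
    seen, out = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()           # DFS over circle edges only
        while stack:
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(adj[cur] - comp)
        seen |= comp
        out.append(comp)
    return out
```

Every node with no incident o-o edge forms a singleton bucket, matching the definition as a closure under circle paths.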

Definition 6 (PC-Component).

In a PAG or any induced subgraph thereof, two nodes are in the same possible c-component (pc-component) if there is a path between them such that (1) all non-endpoint nodes along the path are colliders, and (2) none of the edges are visible.

Note that two nodes are in the same definite c-component if they are connected by a bi-directed path.

The following proposition gives an identification criterion for interventional distributions corresponding to interventions on a bucket.

Proposition 9 (Jaber et al. (2019a), Proposition 2).

Let 𝒫 denote a PAG over V, T be a union of a subset of buckets in 𝒫, and X ⊂ T be a bucket. Given P(T) (i.e., Q(T)), and a partial topological order of buckets $\mathbf{B}_1 < \dots < \mathbf{B}_m$ with respect to $\mathcal{P}_{\mathbf{T}}$ (the induced subgraph over T), Q(T ∖ X) is identifiable if and only if, in $\mathcal{P}_{\mathbf{T}}$, there does not exist Z ∈ X such that Z has a possible child that is in the pc-component of Z. If identifiable, then the expression is given by

$Q(\mathbf{T} \setminus \mathbf{X}) = \frac{Q(\mathbf{T})}{\prod_{i:\, \mathbf{B}_i \subseteq \mathbf{S}^{\mathbf{X}}} P(\mathbf{B}_i \mid \mathbf{B}^{(i-1)})} \sum_{\mathbf{x}} \prod_{i:\, \mathbf{B}_i \subseteq \mathbf{S}^{\mathbf{X}}} P(\mathbf{B}_i \mid \mathbf{B}^{(i-1)})$,

where $\mathbf{S}^{\mathbf{X}}$ is the union of the definite c-components of the members of X in $\mathcal{P}_{\mathbf{T}}$, and $\mathbf{B}^{(i-1)}$ denotes the set of nodes preceding bucket $\mathbf{B}_i$ in the partial order.

Definition 7 (Region $R_{\mathbf{C}}(\mathbf{A})$).

Given a PAG 𝒫 over V and A ⊆ C ⊆ V, let the region of A with respect to C, denoted $R_{\mathbf{C}}(\mathbf{A})$, be the union of the buckets that contain nodes in the pc-component of A in the induced subgraph $\mathcal{P}_{\mathbf{C}}$.

We are now ready to state the algorithm.

input: three disjoint sets x, y, z
output: expression for P_x(y | z) or FAIL
(expressions lost in extraction are marked with "…")

    Let … ;
    … ;
    … = Decompose(…);
    Let … ;
    for … do
        if … then
            Do-See(…);

Function Decompose(…):
    if … then
        return … ;
    /* In …, let … denote the pc-component of … in … */
    Initialize … to some node in … ;
    Let … ;
    while … do
        … ;
        … ;
    /* Let … and … */
    return Decompose(…);

Function Do-See(…):
    /* Let … denote a bucket in … and … denote the pc-component of … in … */
    if … then
        if … then
            return Do-See(…);
        else
            throw FAIL;
    return … ;

Function Identify(…):
    if … then
        return 1;
    if … then
        return Q;
    /* In …, let … denote a bucket, and let … denote the pc-component of … */
    if … such that … then
        Compute … from … using Proposition 9;
        return Identify(…);
    else if … such that … then
        return … ;
    else
        throw FAIL;

Algorithm 2: CIDP(x, y, z) given PAG 𝒫

Appendix B Proofs of Main Results

Proof of Proposition 1.

Follows from S-admissibility (Pearl and Bareinboim, 2011, Theorem 2) and the definition of a PAG (independences that hold in every member of the PAG’s equivalence class must also hold in the PAG). ∎

Proof of Proposition 2.

In each ADMG in the equivalence class, the relevant independence holds in the mutilated graph in which all edges into the intervened-upon variables have been deleted by the do-operator. Now, by Rule 2 of do-calculus, the interventional conditional equals the corresponding observational conditional (again in each ADMG in the equivalence class), so the distribution is stable by Proposition 1. ∎

Proof of Corollary 3.

First, note that given an invariance spec, Algorithm 1 searches over conditional interventional distributions in which the mutable variables are intervened on. All of these are stable by Proposition 2. Now, conditioning sets that satisfy Theorem 7 yield stable distributions because the theorem is a sufficient graphical condition for invariance (see the “if” direction of the proof in Zhang (2008a)). For conditional interventional distributions found to be identifiable by CIDP, correctness follows from its soundness (Jaber et al., 2019a, Theorem 1). ∎

Proof of Lemma 4.

The conditioning set found by the dataset-driven method will be checked in Line 5 of Algorithm 1 to see if it satisfies Theorem 7. Because Theorem 7 is sound and complete in PAGs, it is satisfied by all stable conditioning sets. Thus, Algorithm 1 will find the set to be stable and will add the corresponding conditional distribution to its set of candidate solutions. ∎

Proof of Lemma 5.

Consider Fig 1b, in which there is a stable interventional distribution that is not reducible to a stable conditional distribution. ∎

Appendix C Clarifying Relation to Dataset-Driven Approaches

We now discuss how existing dataset-driven methods, Rojas-Carulla et al. (2018) and Magliacane et al. (2018) in particular, can be adapted to address a problem defined by an invariance spec. Then, because these methods search for invariant conditionals, they are subsumed by I-Spec.

First, Rojas-Carulla et al. (2018) is related to work on invariant prediction that finds stable distributions by hypothesis testing the stability of a distribution across source environments (Peters et al., 2016). While these works do not assume faithfulness, under the faithfulness assumption (which is made by I-Spec), it has been shown that an invariant distribution corresponds to a feature set S such that the target variable is d-separated from the environment indicator given S (Peters et al., 2016, Appendix C). Thus, the method of Rojas-Carulla et al. (2018) can naturally be applied to the input of I-Spec, where it searches for stable conditional distributions as defined in the main paper.
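Under faithfulness, this criterion suggests a simple data-driven sanity check: a candidate feature set S is plausibly invariant if P(Y | S) agrees across environments. The sketch below is a crude finite-sample stand-in of our own for the proper hypothesis tests used by Peters et al. (2016); all function names are hypothetical.

```python
from collections import defaultdict

def conditional_rates(data, feats):
    """Empirical P(Y = 1 | S = s) per environment, for feature set `feats`.
    `data` is a list of (env, feature_dict, y) triples with binary y."""
    counts = defaultdict(lambda: [0, 0])   # (env, stratum) -> [n_pos, n]
    for env, x, y in data:
        key = (env, tuple(x[f] for f in feats))
        counts[key][0] += y
        counts[key][1] += 1
    return {k: c[0] / c[1] for k, c in counts.items()}

def looks_invariant(data, feats, tol=0.05):
    """Crude invariance check: for every stratum of the candidate feature
    set, the conditional positive rate must agree across environments to
    within `tol`. A real test would use a calibrated statistical test."""
    rates = conditional_rates(data, feats)
    by_stratum = defaultdict(list)
    for (env, s), r in rates.items():
        by_stratum[s].append(r)
    return all(max(rs) - min(rs) <= tol for rs in by_stratum.values())
```

A feature set whose conditional rates flip between environments (like the simulated lab-time variable) fails the check, while a set that d-separates the target from the environment indicator passes it in expectation.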

Magliacane et al. (2018), by contrast, builds on the Joint Causal Inference (JCI) framework proposed in Mooij et al. (2016). The JCI framework considers a setting related to the environment indicator setup (used by I-Spec and by invariant prediction works such as Peters et al. (2016) and Rojas-Carulla et al. (2018)) in which there are instead (possibly multiple) context variables that describe how environments differ, as opposed to the system variables: the observed variables that form the feature set and target variable. The environment indicator described in the main paper can reasonably be viewed as a single context variable, so invariance specs can be translated into the JCI framework. The specific method proposed in Magliacane et al. (2018) considers a problem setup in which unlabeled target domain data is available. Within I-Spec, however, the assumption is that the unknown target environment will be drawn from the set of environments defined by the invariance spec. This stronger assumption is what allows I-Spec to be applied in settings in which no target environment data is available. Under this assumption it is straightforward to adapt the method of Magliacane et al. (2018) to handle the input of I-Spec. However, that method, too, searches only over stable conditional distributions.

Thus, both of these dataset-driven methods are applicable to the same problems as I-Spec. However, they search over stable conditional distributions, which (under the assumptions of the I-Spec framework) consist of all and only the distributions that satisfy Zhang (2008a, Theorem 30) (which is sound and complete in PAGs). Then, by Lemma 5 we get Corollary 6, and we have that I-Spec subsumes existing dataset-driven methods in their ability to find stable distributions due to its additional search over stable interventional distributions.

Appendix D Learned PAG

Figure 5: The PAG learned using pooled FCI on the full, three-hospital dataset. Because the dataset mixes continuous and discrete variables, we use the Degenerate Gaussian likelihood ratio test. Recall that bidirected edges denote unobserved confounding, and that circle edge marks denote that there is at least one MAG in the equivalence class in which the mark is a head and at least one in which it is a tail.

Appendix E Experimental Details

E.1 Simulated Experiment Details

We generated data according to a zero-mean linear Gaussian system. Different environments correspond to different values of one coefficient in the structural equation of the shifted variable. We generated 50,000 samples each from two source environments associated with two distinct values of this coefficient. We pooled the data from these two environments to train all three models. We evaluated the three models in 100 test environments created by varying the coefficient on an evenly spaced grid, sampling 10,000 data points from each test environment.

We briefly note that the stable interventional model can be fit by regressing on an auxiliary (counterfactual) variable from which the effect of the unstable mechanism has been removed; see Subbaswamy et al. (2019a) for the equivalence of using this auxiliary variable to the original interventional distribution. To compute it, we first fit a linear regression for the relevant structural equation to learn the coefficient of its parent (which is -1). Then, using the estimated coefficient, we computed an estimate of the auxiliary variable before fitting the model. Test environment values were computed using the coefficient learned from the training data.

E.2 Real Data Experiment Details

Data Cohort

We construct the pooled dataset using de-identified measurements from patients who were admitted or transferred to the intensive care unit (ICU) of three hospitals in our institution’s network from early 2016 to early 2018. We only consider patients who stayed in the ICU for longer than 24 hours and use data collected during the first 24 hours of their visit. We focus on the non-pediatric case, requiring all patients to be over 15 years old. For patients with multiple ICU encounters, we only consider data from their first encounter. These criteria result in a cohort of 24,787 patients. Mortality rates varied as follows: 7% in H1, 10% in H2, and 12% in H3.

Data Features

The target variable of our prediction model is Mortality, which is defined as an in-hospital death. We capture 12 physiologic features: Heart Rate, Systolic Blood Pressure, Temperature, Glasgow Coma Scale (GCS), PaO2/FiO2, Blood Urea Nitrogen, Urine Output, Sodium, Potassium, Bicarbonate, Bilirubin, and White Blood Cell Count. We computed the worst value of each using the SAPS II criteria found in Le Gall et al. (1993). Furthermore, we consider age and three comorbidities: metastatic cancer, hematologic malignancy, and AIDS. SAPS II also makes use of the admission type (i.e., scheduled surgical, unscheduled surgical, or medical). To create a known shift, we simulate another healthcare process variable: the time of day when lab measurements occur (i.e., morning or night), such that mortality is correlated with morning measurements in Hospital 1, uncorrelated with measurement timing in Hospital 2, and correlated with night measurements in Hospital 3.

Specifically, we generated Lab Time for each patient as a Bernoulli draw whose probability of a morning measurement depends on the patient’s mortality label, with hospital-specific probabilities chosen to induce the correlations described above: positively correlated with mortality at Hospital 1, uncorrelated at Hospital 2, and negatively correlated at Hospital 3.
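For concreteness, such a variable can be generated as follows. The probabilities below are illustrative stand-ins of our own, not the exact values used in our experiments.

```python
import random

# Illustrative morning-measurement probabilities by (hospital, mortality);
# these are hypothetical values, not the ones used in the paper.
P_MORNING = {
    ('H1', 0): 0.3, ('H1', 1): 0.7,   # morning labs correlated with mortality
    ('H2', 0): 0.5, ('H2', 1): 0.5,   # uncorrelated with mortality
    ('H3', 0): 0.7, ('H3', 1): 0.3,   # night labs correlated with mortality
}

def sample_lab_time(hospital, mortality, rng):
    """Lab Time as a Bernoulli draw: 1 = morning, 0 = night."""
    return int(rng.random() < P_MORNING[(hospital, mortality)])

def morning_gap(hospital, n=20000, seed=0):
    """Empirical P(morning | died) - P(morning | survived) at one hospital,
    under a ~10% mortality rate; its sign is the induced correlation."""
    rng = random.Random(seed)
    pos, neg = [0, 0], [0, 0]          # [morning count, total] by outcome
    for _ in range(n):
        y = int(rng.random() < 0.1)
        t = sample_lab_time(hospital, y, rng)
        bucket = pos if y else neg
        bucket[0] += t
        bucket[1] += 1
    return pos[0] / pos[1] - neg[0] / neg[1]
```

Flipping the sign of the gap between H1 and H3 is precisely what makes a model that relies on Lab Time fail when transported.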

Imputation of missing values

To account for missing physiologic feature values, we impute our data via “Last Observation Carried Forward” (LOCF). If a feature value is missing from the patient’s first 24 hours, we impute it with the most recently recorded value prior to their ICU stay; if no prior value exists, we fill the missing value with the hospital-specific population mean.
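A minimal sketch of this imputation rule for a single patient (a hypothetical helper of our own, stdlib only):

```python
def impute_features(value_24h, prior_values, hospital_means):
    """LOCF-style imputation for one patient.
    value_24h: feature -> worst value in the first 24h (None if missing)
    prior_values: feature -> most recent pre-ICU value (None/absent if none)
    hospital_means: feature -> population mean at the patient's hospital"""
    out = {}
    for feat, v in value_24h.items():
        if v is not None:
            out[feat] = v                       # observed: keep as-is
        elif prior_values.get(feat) is not None:
            out[feat] = prior_values[feat]      # carry forward the pre-ICU value
        else:
            out[feat] = hospital_means[feat]    # fall back to the hospital mean
    return out
```

Because the fallback mean is hospital-specific, the imputation itself is computed separately for each environment.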


Model Training

We trained unregularized logistic regression models using “classif.logreg” from the R mlr package (Bischl et al., 2016).


  1. We will use and interchangeably.
  2. We restate this algorithm in Appendix A.
  3. This special case of shifts in mechanism is sometimes referred to as a “soft intervention”.
  4. Proofs of all results are in Appendix B.
  5. Such prior knowledge can be specified in the Tetrad implementation of FCI http://www.phil.cmu.edu/tetrad/.
  6. Recall that identifiability means that an interventional distribution is a function of the observational training data distribution.
  7. CIDP and Zhang (2008a, Thm 30) are given in Appendix A.
  8. Namely, Rojas-Carulla et al. (2018); Magliacane et al. (2018) are easily adaptable to the setting of this paper; see Appendix C.
  9. CIDP has not been proven complete.
  10. Full inclusion criteria and details in Appendix E.
  11. Exact simulation details in Appendix E.


References

  1. Biases in electronic health record data due to processes within the healthcare system: retrospective observational study. BMJ 361, pp. k1479.
  2. Learning high-dimensional directed acyclic graphs with mixed data-types. Proceedings of Machine Learning Research 104, pp. 4.
  3. Invariant risk minimization. arXiv preprint arXiv:1907.02893.
  4. mlr: Machine Learning in R. Journal of Machine Learning Research 17 (170), pp. 1–5.
  5. Intelligible models for healthcare: predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1721–1730.
  6. Runaway feedback loops in predictive policing. In Conference on Fairness, Accountability and Transparency, pp. 160–171.
  7. Covariate shift by kernel mean matching. Dataset Shift in Machine Learning 3 (4), pp. 5.
  8. Causal network learning from multiple interventions of unknown manipulated targets. arXiv preprint arXiv:1610.08611.
  9. Conditional variance penalties and domain shift robustness. arXiv preprint arXiv:1710.11469.
  10. Invariant causal prediction for nonlinear models. Journal of Causal Inference 6 (2).
  11. Identification of conditional causal effects under Markov equivalence. In Advances in Neural Information Processing Systems 32, pp. 11512–11520.
  12. Causal identification under Markov equivalence: completeness results. In International Conference on Machine Learning, pp. 2981–2989.
  13. Stable prediction across unknown environments. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1617–1626.
  14. A new simplified acute physiology score (SAPS II) based on a European/North American multicenter study. JAMA 270 (24), pp. 2957–2963.
  15. To predict and serve?. Significance 13 (5), pp. 14–19.
  16. A generalized back-door criterion. The Annals of Statistics 43 (3), pp. 1060–1088.
  17. Domain adaptation by using causal inference to predict invariant conditional distributions. In Advances in Neural Information Processing Systems, pp. 10869–10879.
  18. Joint causal inference from multiple contexts. arXiv preprint arXiv:1611.10351.
  19. Domain generalization via invariant feature representation. In International Conference on Machine Learning, pp. 10–18.
  20. Transportability of causal and statistical relations: a formal approach. In Twenty-Fifth AAAI Conference on Artificial Intelligence.
  21. Causality. Cambridge University Press.
  22. Causal inference by using invariant prediction: identification and confidence intervals. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 78 (5), pp. 947–1012.
  23. Mortality prediction in intensive care units with the Super ICU Learner Algorithm (SICULA): a population-based study. The Lancet Respiratory Medicine 3 (1), pp. 42–52.
  24. Dataset Shift in Machine Learning. The MIT Press.
  25. Ancestral graph Markov models. The Annals of Statistics 30 (4), pp. 962–1030.
  26. Invariant models for causal transfer learning. The Journal of Machine Learning Research 19 (1), pp. 1309–1342.
  27. Anchor regression: heterogeneous data meets causality. arXiv preprint arXiv:1801.06229.
  28. Reliable decision support using counterfactual models. In Advances in Neural Information Processing Systems, pp. 1697–1708.
  29. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference 90 (2), pp. 227–244.
  30. Identification of conditional interventional distributions. In 22nd Conference on Uncertainty in Artificial Intelligence (UAI 2006), pp. 437–444.
  31. Identification of joint interventional distributions in recursive semi-Markovian causal models. In Proceedings of the National Conference on Artificial Intelligence, Vol. 21, pp. 1219.
  32. Causation, Prediction, and Search. MIT Press.
  33. The hierarchy of stable distributions and operators to trade off stability and performance. arXiv preprint arXiv:1905.11374.
  34. Counterfactual normalization: proactively addressing dataset shift using causal mechanisms. In UAI, pp. 947–957.
  35. From development to deployment: dataset shift, causality, and shift-stable models in health AI. Biostatistics.
  36. Preventing failures due to dataset shift: learning predictive models that transport. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 3118–3127.
  37. Robust causal inference using directed acyclic graphs: the R package ‘dagitty’. International Journal of Epidemiology 45 (6), pp. 1887–1894.
  38. Learning causal structure from overlapping variable sets. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 860–867.
  39. Learning Markov blankets from multiple interventional data sets. IEEE Transactions on Neural Networks and Learning Systems.
  40. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS Medicine 15 (11), pp. e1002683.
  41. Causal inference and reasoning in causally insufficient systems. Ph.D. Thesis, Carnegie Mellon University.
  42. Causal reasoning with ancestral graphs. Journal of Machine Learning Research 9 (Jul), pp. 1437–1474.
  43. On the completeness of orientation rules for causal discovery in the presence of latent confounders and selection bias. Artificial Intelligence 172 (16-17), pp. 1873–1896.
  44. Causal discovery from nonstationary/heterogeneous data: skeleton estimation and orientation determination. In IJCAI: Proceedings of the Conference, Vol. 2017, pp. 1347.