Learning Representations for Counterfactual Inference

Abstract

Observational studies are rising in importance due to the widespread accumulation of data in fields such as healthcare, education, employment and ecology. We consider the task of answering counterfactual questions such as, “Would this patient have lower blood sugar had she received a different medication?”. We propose a new algorithmic framework for counterfactual inference which brings together ideas from domain adaptation and representation learning. In addition to a theoretical justification, we perform an empirical comparison with previous approaches to causal inference from observational data. Our deep learning algorithm significantly outperforms the previous state-of-the-art.


1 Introduction

Inferring causal relations is a fundamental problem in the sciences and commercial applications. The problem of causal inference is often framed in terms of counterfactual questions (Lewis, 1973; Rubin, 1974; Pearl, 2009) such as “Would this patient have lower blood sugar had she received a different medication?”, or “Would the user have clicked on this ad had it been in a different color?”. In this paper we propose a method to learn representations suited for counterfactual inference, and show its efficacy in both simulated and real world tasks.

We focus on counterfactual questions raised by what are known as observational studies. Observational studies are studies where interventions and outcomes have been recorded, along with appropriate context. For example, consider an electronic health record dataset collected over several years, where for each patient we have lab tests and past diagnoses, as well as data relating to their diabetic status, and the causal question of interest is which of two existing anti-diabetic medications A or B is better for a given patient. Observational studies are rising in importance due to the widespread accumulation of data in fields such as healthcare, education, employment and ecology. We believe machine learning will be called on more and more to help make better decisions in these fields, and that researchers should be careful to pay attention to the ways in which these studies differ from classic supervised learning, as explained in Section 2 below.

In this work we draw a connection between counterfactual inference and domain adaptation. We then introduce a form of regularization that enforces similarity between the distributions of representations learned for populations with different interventions: for example, the representations for patients who received medication A versus those who received medication B. This reduces the variance incurred by fitting a model on one distribution and applying it to another. In Section 3 we give several methods for learning such representations. In Section 4 we show that our method approximately minimizes an upper bound on a regret term in the counterfactual regime. The general method is outlined in Figure 1. Our work has commonalities with recent work on learning fair representations (Zemel et al., 2013; Louizos et al., 2015) and learning representations for transfer learning (Ben-David et al., 2007; Ganin et al., 2015). In all these cases the learned representation has some invariance to specific aspects of the data: the identity of a certain group, such as racial minorities, for fair representations; the identity of the data source for domain adaptation; or, in the case of counterfactual learning, the type of intervention enacted in each population.

In machine learning, counterfactual questions typically arise in problems where there is a learning agent which performs actions and receives feedback or reward for its choice, without knowing what the feedback would have been for other possible choices. This is sometimes referred to as bandit feedback (Beygelzimer et al., 2010). This setup comes up in diverse areas, for example off-policy evaluation in reinforcement learning (Sutton & Barto, 1998), learning from “logged implicit exploration data” (Strehl et al., 2010) or “logged bandit feedback” (Swaminathan & Joachims, 2015), and understanding and designing complex real-world ad-placement systems (Bottou et al., 2013). Note that while in contextual bandit or robotics applications the researcher typically knows the method underlying the action choice (e.g. the policy in reinforcement learning), in observational studies we usually do not have control over, or even a full understanding of, the mechanism which chooses which actions are performed and which feedback or reward is revealed. For instance, for anti-diabetic medication, more affluent patients might be insensitive to the price of a drug, while less affluent patients could take price into account in their choice.

Given that we do not know beforehand the particulars determining the choice of action, the question remains: how can we learn from data which course of action would have had the better outcome? By bringing together ideas from representation learning and domain adaptation, our method offers a novel way to leverage increasing computational power and the rise of large datasets to tackle consequential questions of causal inference.

The contributions of our paper are as follows. First, we show how to formulate the problem of counterfactual inference as a domain adaptation problem, and more specifically a covariate shift problem. Second, we derive new families of representation algorithms for counterfactual inference: one is based on linear models and variable selection, and the other is based on deep learning of representations (Bengio et al., 2013). Finally, we show that learning representations that encourage similarity (balance) between the treated and control populations leads to better counterfactual inference; this is in contrast to many methods which attempt to create balance by re-weighting samples (e.g., Bang & Robins, 2005; Dudík et al., 2011; Austin, 2011; Swaminathan & Joachims, 2015). We show the merit of learning balanced representations both theoretically in Theorem 1, and empirically in a set of experiments across two datasets.

2 Problem setup

Let $\mathcal{T}$ be the set of potential interventions or actions we wish to consider, $\mathcal{X}$ the set of contexts, and $\mathcal{Y}$ the set of possible outcomes. For example, for a patient $x \in \mathcal{X}$ the set of interventions of interest might be two different treatments, and the set of outcomes might be $\mathcal{Y} \subseteq \mathbb{R}$, indicating blood sugar levels in mg/dL. For an ad slot on a webpage $x$, the set of interventions $\mathcal{T}$ might be all possible ads in the inventory that fit that slot, while the potential outcomes could be $\mathcal{Y} = \{\text{click}, \text{no click}\}$. For a context $x$ (e.g. patient, webpage), and for each potential intervention $t \in \mathcal{T}$, let $Y_t(x) \in \mathcal{Y}$ be the potential outcome for $x$. The fundamental problem of causal inference is that only one potential outcome is observed for a given context $x$: even if we give the patient one medication and later the other, the patient is not in exactly the same state. In machine learning this type of partial feedback is often called “bandit feedback”. The model described above is known as the Rubin-Neyman causal model (Rubin, 1974; 2011).

We are interested in the case of a binary action set $\mathcal{T} = \{0, 1\}$, where action $1$ is often known as the “treated” and action $0$ is the “control”. In this case the quantity $ITE(x) = Y_1(x) - Y_0(x)$ is of high interest: it is known as the individualized treatment effect (ITE) for context $x$ (van der Laan & Petersen, 2007; Weiss et al., 2015). Knowing this quantity enables choosing the best of the two actions when confronted with the choice, for example choosing the best treatment for a specific patient. However, the fact that we only have access to the outcome of one of the two actions prevents the ITE from being known. Another commonly sought-after quantity is the average treatment effect, $ATE = \mathbb{E}_{x \sim p(x)}\left[ITE(x)\right]$, for a population with distribution $p(x)$. In the binary action setting, we refer to the observed and unobserved outcomes as the factual outcome $y_F(x)$ and counterfactual outcome $y_{CF}(x)$, respectively.

A common approach for estimating the ITE is by direct modelling: given samples $\{(x_i, t_i, y^F_i)\}_{i=1}^n$, where $y^F_i = Y_{t_i}(x_i)$, learn a function $h: \mathcal{X} \times \mathcal{T} \to \mathcal{Y}$ such that $h(x_i, t_i) \approx y^F_i$. The estimated transductive ITE is then:

$$\widehat{ITE}(x_i) = \begin{cases} y^F_i - h(x_i, 1 - t_i), & t_i = 1, \\ h(x_i, 1 - t_i) - y^F_i, & t_i = 0. \end{cases} \qquad (1)$$

While in principle any function-fitting model might be used for estimating the ITE (Prentice, 1976; Gelman & Hill, 2006; Chipman et al., 2010; Wager & Athey, 2015; Weiss et al., 2015), it is important to note how this task differs from standard supervised learning. The problem is as follows: the observed sample consists of the set $\hat{P}^F = \{(x_i, t_i)\}_{i=1}^n$. However, calculating the ITE requires inferring the outcome on the set $\hat{P}^{CF} = \{(x_i, 1 - t_i)\}_{i=1}^n$. We call the set $\hat{P}^F$ the empirical factual distribution, and the set $\hat{P}^{CF}$ the empirical counterfactual distribution, respectively. Because $\hat{P}^F$ and $\hat{P}^{CF}$ need not be equal, the problem of causal inference by counterfactual prediction might require inference over a different distribution than the one from which samples are given. In machine learning terms, this means that the feature distribution of the test set differs from that of the train set. This is a case of covariate shift, which is a special case of domain adaptation (Daume III & Marcu, 2006; Jiang, 2008; Mansour et al., 2009). A somewhat similar connection was noted in Schölkopf et al. (2012) with respect to covariate shift, in the context of a very simple causal model.
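To make the estimator in Eq. (1) concrete, here is a minimal Python sketch, assuming some fitted regression model `h(x, t)`; the names and array conventions are illustrative, not part of the original method.

```python
import numpy as np

def transductive_ite(h, x, t, y_f):
    """Estimated transductive ITE of Eq. (1): combine each unit's observed
    factual outcome with the model's prediction of its counterfactual.

    h   : callable, h(x, t) -> predicted outcomes (any fitted regressor)
    x   : (n, d) array of contexts
    t   : (n,) array of binary treatment assignments
    y_f : (n,) array of observed factual outcomes
    """
    y_cf_hat = h(x, 1 - t)  # predicted counterfactual outcomes
    # t_i = 1: y_F - h(x_i, 0);  t_i = 0: h(x_i, 1) - y_F
    return np.where(t == 1, y_f - y_cf_hat, y_cf_hat - y_f)
```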

Specifically, we have that $\hat{P}^F \sim P(x)\,P(t \mid x)$ and $\hat{P}^{CF} \sim P(x)\,P(1 - t \mid x)$. The difference between the observed (factual) sample and the sample we must perform inference on lies precisely in the treatment assignment mechanism, $P(t \mid x)$. For example, in a randomized controlled trial, we typically have that $t$ and $x$ are independent. In the contextual bandit setting, there is typically an algorithm which determines the choice of the action $t$ given the context $x$. In observational studies, which are the focus of this work, the treatment assignment mechanism is not under our control and in general will not be independent of the context $x$. Therefore, in general, the counterfactual distribution will be different from the factual distribution.

Figure 1: Contexts $x$ are represented by $\Phi(x)$, which is used, together with the group indicator $t$, to predict the response $y$, while minimizing the imbalance between the treatment-group distributions, measured by disc.

3 Balancing counterfactual regression

1:  Input: sample $(x_1, t_1, y^F_1), \ldots, (x_n, t_n, y^F_n)$; hypothesis class $\mathcal{H}$; imbalance penalties $\alpha, \gamma > 0$
2:  $(\Phi^*, h^*) = \arg\min_{\Phi,\, h \in \mathcal{H}} B_{\alpha,\gamma}(\Phi, h)$, with $B_{\alpha,\gamma}$ as in Eq. (2)
3:  Fit a final hypothesis $\hat{f}$ by regularized squared-loss regression of $y^F_i$ on $(\Phi^*(x_i), t_i)$
4:  Output: $\Phi^*$, $\hat{f}$
Algorithm 1 Balancing counterfactual regression

We propose to perform counterfactual inference by amending the direct modeling approach, taking into account the fact that the learned estimator must generalize from the factual distribution to the counterfactual distribution.

Our method, illustrated in Figure 1, learns a representation $\Phi: \mathcal{X} \to \mathbb{R}^d$ (either using a deep neural network, or by feature re-weighting and selection), and a function $h: \mathbb{R}^d \times \mathcal{T} \to \mathbb{R}$, such that the learned representation trades off three objectives: (1) enabling low-error prediction of the observed outcomes over the factual representation, (2) enabling low-error prediction of unobserved counterfactuals by taking into account relevant factual outcomes, and (3) making the distributions of the treated and control populations similar, or balanced.

We accomplish low-error prediction by the usual means of error minimization over a training set, with regularization to enable good generalization. We accomplish the second objective using a penalty that encourages counterfactual predictions to be close to the nearest observed outcome from the respective treated or control set. Finally, we accomplish the third objective by minimizing the so-called discrepancy distance, introduced by Mansour et al. (2009), a hypothesis-class-dependent distance measure tailored for domain adaptation. For a hypothesis space $\mathcal{H}$, we denote the discrepancy distance by $\text{disc}_{\mathcal{H}}(\cdot, \cdot)$. See Section 4 for the formal definition and motivation. Other discrepancy measures, such as the Maximum Mean Discrepancy (Gretton et al., 2012), could also be used for this purpose.

Intuitively, representations that reduce the discrepancy between the treated and control populations prevent the learner from using “unreliable” aspects of the data when trying to generalize from the factual to counterfactual domains. For example, if in our sample almost no men ever received medication A, inferring how men would react to medication A is highly prone to error and a more conservative use of the gender feature might be warranted.

Let $X = \{x_i\}_{i=1}^n$, $T = \{t_i\}_{i=1}^n$, and $Y^F = \{y^F_i\}_{i=1}^n$ denote the observed units, treatment assignments and factual outcomes respectively. We assume $\mathcal{X}$ is a metric space with a metric $d$. Let $j(i) \in \arg\min_{j:\, t_j = 1 - t_i} d(x_j, x_i)$ be the nearest neighbor of $x_i$ among the group that received the opposite treatment from unit $i$. Note that the nearest neighbor is computed once, in the input space, and does not change with the representation $\Phi$. The objective we minimize over representations $\Phi$ and hypotheses $h \in \mathcal{H}$ is

$$B_{\alpha,\gamma}(\Phi, h) = \frac{1}{n} \sum_{i=1}^n \left| h(\Phi(x_i), t_i) - y^F_i \right| \;+\; \alpha\, \text{disc}_{\mathcal{H}}\!\left(\hat{P}^F_\Phi, \hat{P}^{CF}_\Phi\right) \;+\; \frac{\gamma}{n} \sum_{i=1}^n \left| h(\Phi(x_i), 1 - t_i) - y^F_{j(i)} \right|, \qquad (2)$$

where $\alpha, \gamma > 0$ are hyperparameters that control the strength of the imbalance penalties, and $\text{disc}$ is the discrepancy measure defined in Section 4.1. When the hypothesis class $\mathcal{H}$ is the class of linear functions, the term $\text{disc}_{\mathcal{H}}\left(\hat{P}^F_\Phi, \hat{P}^{CF}_\Phi\right)$ has the closed form given in Section 4.1 below. For more complex hypothesis spaces there is in general no exact closed form for the discrepancy.
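The following numpy sketch spells out objective (2), using the linear-hypothesis discrepancy of Section 4.1 (a norm of the size-weighted difference of representation means) as the imbalance term; the function names, `j_nn`, and the calling conventions are illustrative assumptions, not the authors' code.

```python
import numpy as np

def linear_disc(phi, t):
    """Imbalance term for linear hypotheses: norm of the size-weighted
    difference between treated and control representation means."""
    p = t.mean()
    mu1 = phi[t == 1].mean(axis=0)
    mu0 = phi[t == 0].mean(axis=0)
    return np.linalg.norm(p * mu1 - (1 - p) * mu0)

def objective(phi, h, t, y_f, j_nn, alpha, gamma):
    """Eq. (2): factual loss + alpha * discrepancy
    + gamma * nearest-neighbor counterfactual penalty.

    phi  : (n, d) representations Phi(x_i)
    h    : callable, h(phi, t) -> predicted outcomes
    j_nn : (n,) index of each unit's nearest neighbor in the opposite
           treatment group, precomputed once in the input space
    """
    factual = np.abs(h(phi, t) - y_f).mean()
    nn_penalty = np.abs(h(phi, 1 - t) - y_f[j_nn]).mean()
    return factual + alpha * linear_disc(phi, t) + gamma * nn_penalty
```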

Once the representation is learned, we fit a final hypothesis minimizing a regularized squared-loss objective on the factual data. Our algorithm is summarized in Algorithm 1. Note that our algorithm involves two minimization procedures. In Section 4 we motivate this approach by showing that learning representations in this way minimizes an upper bound on the regret over the counterfactual distribution, using results of Cortes & Mohri (2014).
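For the second stage of Algorithm 1, a minimal sketch of the final fit, assuming scikit-learn's ridge regression as the regularized squared-loss solver (an illustrative choice):

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_final_hypothesis(phi, t, y_f, lam=1.0):
    """Stage 2 of Algorithm 1: regularized squared-loss regression of the
    factual outcomes on the learned representation and the treatment."""
    z = np.hstack([phi, t[:, None]])  # features [Phi(x), t]
    model = Ridge(alpha=lam).fit(z, y_f)
    # return a hypothesis h(phi, t) usable in Eq. (1)
    return lambda phi_, t_: model.predict(np.hstack([phi_, t_[:, None]]))
```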

3.1 Balancing variable selection

A naïve way of obtaining a balanced representation is to use only features that are already well balanced, i.e. features which have a similar distribution over both treated and control sets. However, imbalanced features can be highly predictive of the outcome, and should not always be discarded. A middle ground is to restrict the influence of imbalanced features on the predicted outcome. We build on this idea by learning a sparse re-weighting of the features that minimizes the bound in Theorem 1. The re-weighting determines the influence of a feature by trading off its predictive capabilities and its balance.

We implement the re-weighting as a diagonal matrix $W$, forming the representation $\Phi(x) = Wx$, with $\mathrm{diag}(W)$ subject to a simplex constraint to achieve sparsity. We can now apply Algorithm 1 with $\mathcal{H}$ the space of linear hypotheses. Because the hypotheses are linear, the discrepancy is a function of the distance between the weighted population means, see Section 4.1. With $p$, $\mu_1$, and $\mu_0$ defined analogously to Section 4.1, the discrepancy for this class is a norm of the weighted mean difference $W\left(p\,\mu_1 - (1 - p)\,\mu_0\right)$.

To minimize the discrepancy, features that differ a lot between treatment groups will receive a smaller weight $W_{kk}$. Minimizing the overall objective (2) thus involves a trade-off between balance and predictive accuracy. We minimize (2) using alternating sub-gradient descent.
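A sketch of this alternating minimization, under stated assumptions: a squared loss for differentiability, a numerical subgradient for the weights, and a standard Euclidean projection onto the simplex. All of these are illustrative choices, not the authors' implementation.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex
    (standard sort-and-threshold algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def blr(x, t, y_f, j_nn, alpha, gamma, steps=300, lr=0.01):
    """Alternating (sub)gradient descent for the linear variant:
    Phi(x) = W x with diagonal, simplex-constrained W, and a linear
    hypothesis beta over [Phi(x), t]."""
    n, d = x.shape
    w = np.ones(d) / d                 # simplex-constrained feature weights
    beta = np.zeros(d + 1)             # linear hypothesis over [W x, t]
    p = t.mean()
    mu1, mu0 = x[t == 1].mean(axis=0), x[t == 0].mean(axis=0)

    def pred(w_, t_vec):
        return np.hstack([x * w_, t_vec[:, None]]) @ beta

    def rep_loss(w_):
        fact = np.mean((pred(w_, t) - y_f) ** 2)
        nn = np.mean((pred(w_, 1 - t) - y_f[j_nn]) ** 2)
        disc = np.linalg.norm(w_ * (p * mu1 - (1 - p) * mu0))
        return fact + gamma * nn + alpha * disc

    for _ in range(steps):
        # hypothesis step: gradient of the factual squared loss w.r.t. beta
        z = np.hstack([x * w, t[:, None]])
        beta -= lr * 2.0 * z.T @ (z @ beta - y_f) / n
        # representation step: numerical subgradient w.r.t. w, then project
        g, eps, base = np.zeros(d), 1e-5, rep_loss(w)
        for k in range(d):
            w2 = w.copy()
            w2[k] += eps
            g[k] = (rep_loss(w2) - base) / eps
        w = project_simplex(w - lr * g)
    return w, beta
```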

3.2 Deep neural networks

Deep neural networks have been shown to successfully learn good representations of high-dimensional data in many tasks (Bengio et al., 2013). Here we show that they can be used for counterfactual inference and, crucially, for accommodating imbalance penalties. We propose a modification of the standard feed-forward architecture with fully connected layers, see Figure 2. The first $d_r$ hidden layers are used to learn a representation $\Phi(x)$ of the input $x$. The output of the $d_r$-th layer is used to calculate the discrepancy $\text{disc}_{\mathcal{H}}\left(\hat{P}^F_\Phi, \hat{P}^{CF}_\Phi\right)$. The $d_o$ layers following the first $d_r$ layers take as additional input the treatment assignment $t$ and generate a prediction of the outcome.

Figure 2: Neural network architecture.
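A compact PyTorch sketch of this architecture, as we read Figure 2 (a rendering, not the authors' implementation): $d_r$ ReLU layers produce $\Phi(x)$, the treatment $t$ is concatenated, and $d_o$ further ReLU layers plus a linear head predict the outcome; the imbalance penalty is computed on $\Phi(x)$.

```python
import torch
import torch.nn as nn

class BalancingNeuralNetwork(nn.Module):
    """BNN-d_r-d_o: d_r ReLU representation layers, then concatenate t,
    then d_o ReLU output layers and a single linear head."""
    def __init__(self, in_dim, hidden, d_r=2, d_o=2):
        super().__init__()
        rep = []
        for i in range(d_r):
            rep += [nn.Linear(in_dim if i == 0 else hidden, hidden), nn.ReLU()]
        self.rep = nn.Sequential(*rep)
        out = []
        for i in range(d_o):
            out += [nn.Linear(hidden + 1 if i == 0 else hidden, hidden), nn.ReLU()]
        head_in = hidden if d_o > 0 else hidden + 1
        self.out = nn.Sequential(*out, nn.Linear(head_in, 1))

    def forward(self, x, t):
        phi = self.rep(x)                          # representation Phi(x)
        y = self.out(torch.cat([phi, t], dim=1))   # prediction h(Phi(x), t)
        return y.squeeze(1), phi

def mean_disc(phi, t):
    """Imbalance penalty on the representation: distance between treated
    and control means of Phi(x) (a linear-discrepancy surrogate)."""
    return torch.norm(phi[t[:, 0] == 1].mean(0) - phi[t[:, 0] == 0].mean(0))
```

Training would then minimize the factual squared error plus $\alpha$ times this penalty (e.g. with RMSProp, as in Section 6), fitting the hypothesis and the representation jointly.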

3.3 Non-linear hypotheses and individual effect

We note that both in the case of variable re-weighting, and for neural nets with a single linear outcome layer, the hypothesis space comprises linear functions of $[\Phi(x), t]$, and the discrepancy can be expressed in closed form. A less desirable consequence is that such models cannot capture differences in the individual treatment effect, as they involve no interactions between $\Phi(x)$ and $t$. Such interactions could be introduced by, for example, (polynomial) feature expansion, or, in the case of neural networks, by adding non-linear layers after the concatenation $[\Phi(x), t]$. For both approaches, however, we no longer have a closed-form expression for the discrepancy.

4 Theory

In this section we derive an upper bound on the relative counterfactual generalization error of a representation function . The bound only uses quantities we can measure directly from the available data. In the previous section we gave several methods for learning representations which approximately minimize the upper bound.

Recall that for an observed context or instance $x_i \in \mathcal{X}$ with observed treatment $t_i \in \{0, 1\}$, the two potential outcomes are $Y_0(x_i)$ and $Y_1(x_i)$, of which we observe the factual outcome $y^F_i = Y_{t_i}(x_i)$. Let $\hat{P}^F = \{(x_i, t_i)\}_{i=1}^n$ be a sample from the factual distribution. Similarly, let $\hat{P}^{CF} = \{(x_i, 1 - t_i)\}_{i=1}^n$ be the counterfactual sample. Note that while we know the factual outcomes $y^F_i$, we do not know the counterfactual outcomes $y^{CF}_i = Y_{1 - t_i}(x_i)$. Let $\Phi: \mathcal{X} \to \mathbb{R}^d$ be a representation function. Denote by $\hat{P}^F_\Phi$ the empirical distribution over the representations and treatment assignments $(\Phi(x_i), t_i)$, and similarly $\hat{P}^{CF}_\Phi$ the empirical distribution over the representations and counterfactual treatment assignments $(\Phi(x_i), 1 - t_i)$. Let $\mathcal{H}_l$ be the hypothesis set of linear functions over $[\Phi(x), t]$.

Definition 1 (Mansour et al. 2009).

Given a hypothesis set $\mathcal{H}$ and a loss function $L$, the empirical discrepancy between the empirical distributions $\hat{P}$ and $\hat{Q}$ is:

$$\text{disc}_{\mathcal{H}}\left(\hat{P}, \hat{Q}\right) = \max_{h, h' \in \mathcal{H}} \left| \mathbb{E}_{z \sim \hat{P}}\left[ L\big(h(z), h'(z)\big) \right] - \mathbb{E}_{z \sim \hat{Q}}\left[ L\big(h(z), h'(z)\big) \right] \right|,$$

where $L$ is a loss function with weak Lipschitz constant $\mu$ relative to $\mathcal{H}$ (see footnote 1). Note that the discrepancy is defined with respect to a hypothesis class and a loss function, and is therefore very useful for obtaining generalization bounds involving different distributions. Throughout this section we always let $L$ denote the squared loss. We prove the following, based on Cortes & Mohri (2014):

Theorem 1.

For a sample $\{(x_i, t_i, y^F_i)\}_{i=1}^n$, $x_i \in \mathcal{X}$, $t_i \in \{0, 1\}$, and $y^F_i \in \mathbb{R}$, and a given representation function $\Phi: \mathcal{X} \to \mathbb{R}^d$, let $\hat{P}^F_\Phi$ and $\hat{P}^{CF}_\Phi$ be defined as above. We assume that $\mathcal{X}$ is a metric space with metric $d$, and that the potential outcome functions $Y_0(x)$ and $Y_1(x)$ are Lipschitz continuous with constants $K_0$ and $K_1$ respectively, such that $|Y_t(x_a) - Y_t(x_b)| \le K_t\, d(x_a, x_b)$ for $t \in \{0, 1\}$.

Let $\mathcal{H}_l$ be the space of linear functions over $[\Phi(x), t]$, and for $h \in \mathcal{H}_l$, let $L_P(h)$ be the expected loss of $h$ over distribution $P$. Let $r$ be the maximum expected radius of the distributions. For $\lambda > 0$, let $\hat{h}^F = \arg\min_{h \in \mathcal{H}_l}\, L_{\hat{P}^F_\Phi}(h) + \lambda \|h\|^2$, and similarly $\hat{h}^{CF} = \arg\min_{h \in \mathcal{H}_l}\, L_{\hat{P}^{CF}_\Phi}(h) + \lambda \|h\|^2$, i.e. $\hat{h}^F$ and $\hat{h}^{CF}$ are the ridge regression solutions for the factual and counterfactual empirical distributions, respectively.

Let $\hat{y}^F_i = \hat{h}^F(\Phi(x_i), t_i)$ and $\hat{y}^{CF}_i = \hat{h}^F(\Phi(x_i), 1 - t_i)$ be the outputs of the hypothesis $\hat{h}^F$ over the representation $\Phi(x_i)$ for the factual and counterfactual settings of $t_i$, respectively. Finally, for each $i$, let $j(i) \in \arg\min_{j:\, t_j = 1 - t_i} d(x_j, x_i)$ be the nearest neighbor in $\mathcal{X}$ of $x_i$ among the group that received the opposite treatment from unit $i$, and let $d_{i,j(i)} = d(x_i, x_{j(i)})$. Then for both $Q = \hat{P}^F_\Phi$ and $Q = \hat{P}^{CF}_\Phi$ we have:

$$L_Q(\hat{h}^F) - L_Q(\hat{h}^{CF}) \;\le\; \text{disc}_{\mathcal{H}_l}\!\left(\hat{P}^F_\Phi, \hat{P}^{CF}_\Phi\right) \qquad\qquad (3)$$
$$\;+\; \frac{r}{\sqrt{\lambda}}\, \min_{h \in \mathcal{H}_l} \sqrt{L_{\hat{P}^F_\Phi}(h) + L_{\hat{P}^{CF}_\Phi}(h)} \qquad\qquad (4)$$
$$\le\; \text{disc}_{\mathcal{H}_l}\!\left(\hat{P}^F_\Phi, \hat{P}^{CF}_\Phi\right) \;+\; \frac{r}{\sqrt{\lambda}}\, \min_{h \in \mathcal{H}_l} \sqrt{L_{\hat{P}^F_\Phi}(h) + \frac{2}{n} \sum_{i=1}^n \left( h(\Phi(x_i), 1 - t_i) - y^F_{j(i)} \right)^2} \qquad (5)$$
$$\;+\; \frac{r}{\sqrt{\lambda}} \sqrt{\frac{2}{n} \sum_{i=1}^n K_{1-t_i}^2\, d_{i,j(i)}^2} \qquad\qquad (6)$$

The proof is in the supplemental material.

Theorem 1 gives, for all fixed representations $\Phi$, a bound on the relative error for a ridge regression model fit on the factual outcomes and evaluated on the counterfactual, as compared with ridge regression had it been fit on the unobserved counterfactual outcomes. It does not take into account how $\Phi$ is obtained, and applies even if $\Phi(x)$ is not convex in $x$, e.g. if $\Phi$ is a neural net. Since the bound in the theorem is true for all representations $\Phi$, we can attempt to minimize it over $\Phi$, as done in Algorithm 1.

The term on line (4) of the bound includes the unknown counterfactual outcomes. It measures how well we could, in principle, fit the factual and counterfactual outcomes together using a single linear hypothesis over the representation $\Phi$. For example, if the dimension of the representation is greater than the number of samples, a linear hypothesis can typically fit both sets of outcomes exactly, making this term small. In general, however, we cannot directly control its magnitude.

The term on line (3) measures the discrepancy between the factual and counterfactual distributions over the representation $\Phi$. In Section 4.1 below, we show that this term is closely related to the norm of the difference in means between the representations of the control group and the treated group. A representation for which the means of the treated and control are close (small value of (3)), but which at the same time allows for a good prediction of the factuals and counterfactuals (small value of (4)), is guaranteed to yield structural risk minimizers with similar generalization errors between factual and counterfactual.

We further show that the term on line (4), which cannot be evaluated since we do not know the counterfactual outcomes, can be upper bounded by a sum of the terms on lines (5) and (6). The term (5) includes two empirical data-fitting terms: fitting the factual outcomes and fitting nearest-neighbor surrogates of the counterfactuals. The first is simply fitting the observed factual outcomes using a linear function over the representation $\Phi$. The second term is a form of nearest-neighbor regression, where the counterfactual outcomes for a treated (resp. control) instance are fit to the most similar factual outcome among the control (resp. treated) set, where similarity is measured in the original space $\mathcal{X}$. Finally, the term on line (6) is the only quantity which is independent of the representation $\Phi$. It measures the average distance between each treated instance and the nearest control, and vice-versa, scaled by the Lipschitz constants of the true treated and control outcome functions. This term will be small when: (a) the true outcome functions $Y_0$ and $Y_1$ are relatively smooth, and (b) there is overlap between the treated and control groups, leading to small average nearest-neighbor distance across the groups. It is well-known that when there is not much overlap between treated and control, causal inference in general is more difficult, since the extrapolation from treated to control and vice-versa is more extreme (Rosenbaum, 2009).

The upper bound in Theorem 1 suggests the following approach for counterfactual regression. First minimize the terms (3) and (5) as functions of the representation $\Phi$. Once $\Phi$ is obtained, perform a ridge regression on the factual outcomes using the representations $\Phi(x_i)$ and the treatment assignments $t_i$ as input. The terms in the bound ensure that the regression would fit the data well (term (5)), while removing aspects of the treated and control populations which create a large discrepancy (term (3)). For example, if there is a feature which is much more strongly associated with the treatment assignment than with the outcome, it might be advisable not to use it (Pearl, 2011).

4.1 Linear discrepancy

A straightforward calculation shows that for the class $\mathcal{H}_l$ of linear hypotheses,

$$\text{disc}_{\mathcal{H}_l}\left(\hat{P}, \hat{Q}\right) = \left\| M_{\hat{P}} - M_{\hat{Q}} \right\|_2.$$

Here, $\|\cdot\|_2$ is the spectral norm of a matrix and $M_{\hat{P}} = \mathbb{E}_{z \sim \hat{P}}\left[ zz^\top \right]$ is the second-order moment of $\hat{P}$. In the special case of counterfactual inference, $\hat{P}^F_\Phi$ and $\hat{Q} = \hat{P}^{CF}_\Phi$ differ only in the treatment assignment. Specifically,

$$M_{\hat{P}^F_\Phi} - M_{\hat{P}^{CF}_\Phi} = \begin{bmatrix} 0 & c \\ c^\top & 2p - 1 \end{bmatrix}, \qquad (7)$$
$$\text{disc}_{\mathcal{H}_l}\left(\hat{P}^F_\Phi, \hat{P}^{CF}_\Phi\right) = \left\| \begin{bmatrix} 0 & c \\ c^\top & 2p - 1 \end{bmatrix} \right\|_2, \qquad (8)$$

where $p = \frac{1}{n} \sum_{i=1}^n t_i$ and $c = \frac{1}{n} \sum_{i=1}^n (2 t_i - 1)\, \Phi(x_i)$.

Let $\mu_1 = \frac{1}{n_1} \sum_{i:\, t_i = 1} \Phi(x_i)$ and $\mu_0 = \frac{1}{n_0} \sum_{i:\, t_i = 0} \Phi(x_i)$ be the treated and control means in $\Phi$ space. Then $c = p\,\mu_1 - (1 - p)\,\mu_0$, exactly the difference in means between the treated and control groups, weighted by their respective sizes. As a consequence, minimizing the discrepancy with linear hypotheses constitutes matching means in feature space.
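A small numpy rendering of this closed form, under the block-matrix reconstruction above: the discrepancy is the spectral norm of the moment-difference matrix, which is driven by the weighted mean difference $c$.

```python
import numpy as np

def linear_discrepancy(phi, t):
    """disc for linear hypotheses: spectral norm of the difference
    between factual and counterfactual second-moment matrices."""
    n, d = phi.shape
    p = t.mean()
    c = ((2 * t - 1)[:, None] * phi).mean(axis=0)  # = p*mu1 - (1-p)*mu0
    diff = np.zeros((d + 1, d + 1))
    diff[:d, d] = c
    diff[d, :d] = c
    diff[d, d] = 2 * p - 1
    return np.linalg.norm(diff, ord=2)  # spectral norm
```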

5 Related work

Counterfactual inference for determining causal effects in observational studies has been studied extensively in statistics, economics, epidemiology and sociology (Morgan & Winship, 2014; Robins et al., 2000; Rubin, 2011; Chernozhukov et al., 2013) as well as in machine learning (Langford et al., 2011; Bottou et al., 2013; Swaminathan & Joachims, 2015).

Non-parametric methods do not attempt to model the relation between the context, intervention, and outcome. The methods include nearest-neighbor matching, propensity score matching, and propensity score re-weighting (Rosenbaum & Rubin, 1983; Rosenbaum, 2002; Austin, 2011).

Parametric methods, on the other hand, attempt to concretely model the relation between the context, intervention, and outcome. These methods include any type of regression including linear and logistic regression (Prentice, 1976; Gelman & Hill, 2006), random forests (Wager & Athey, 2015) and regression trees (Chipman et al., 2010).

Doubly robust methods combine aspects of parametric and non-parametric methods, typically by using a propensity score weighted regression (Bang & Robins, 2005; Dudík et al., 2011). They are especially of use when the treatment assignment probability is known, as is the case for off-policy evaluation or learning from logged bandit data. Once the treatment assignment probability has to be estimated, as is the case in most observational studies, their efficacy might wane considerably (Kang & Schafer, 2007).

Tian et al. (2014) presented one of the few methods that achieve balance by transforming or selecting covariates, modeling interactions between treatment and covariates.

6 Experiments

We evaluate the two variants of our algorithm proposed in Section 3 with focus on two questions: 1) What is the effect of imposing imbalance regularization on representations? 2) How do our methods fare against established methods for counterfactual inference? We refer to the variable selection method of Section 3.1 as Balancing Linear Regression (BLR) and the neural network approach as BNN for Balancing Neural Network.

We report the RMSE of the estimated individual treatment effect, denoted $\epsilon_{ITE}$, and the absolute error in estimated average treatment effect, denoted $\epsilon_{ATE}$, see Section 2. Further, following Hill (2011), we report the Precision in Estimation of Heterogeneous Effect (PEHE), $\epsilon_{PEHE} = \sqrt{\frac{1}{n} \sum_{i=1}^n \left( \big(h(x_i, 1) - h(x_i, 0)\big) - \big(Y_1(x_i) - Y_0(x_i)\big) \right)^2}$. Unlike for ITE, obtaining a good (small) PEHE requires accurate estimation of both the factual and counterfactual responses, not just the counterfactual. Standard methods for hyperparameter selection, including cross-validation, are unavailable when training counterfactual models on real-world data, as there are no samples from the counterfactual outcome. In our experiments, all outcomes are simulated, and we have access to counterfactual samples. To avoid fitting parameters to the test set, we generate multiple repeated experiments, each with a different outcome function, and pick hyperparameters once, for all models (and baselines), based on a held-out set of experiments. While not possible for real-world data, this approach gives an indication of the robustness of the parameters.
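A sketch of these evaluation metrics, assuming access to the simulated ground-truth potential outcomes (as in the experiments here); the function names and conventions are illustrative.

```python
import numpy as np

def evaluate(h, x, t, y0, y1):
    """ITE RMSE, absolute ATE error, and PEHE for a fitted model h.
    y0, y1 are the (simulated) ground-truth potential outcomes."""
    ite_true = y1 - y0
    y_f = np.where(t == 1, y1, y0)
    # Eq. (1): observed factual outcome + predicted counterfactual
    y_cf_hat = h(x, 1 - t)
    ite_est = np.where(t == 1, y_f - y_cf_hat, y_cf_hat - y_f)
    eps_ite = np.sqrt(np.mean((ite_est - ite_true) ** 2))
    eps_ate = np.abs(ite_est.mean() - ite_true.mean())
    # PEHE: both responses predicted by the model
    pehe = np.sqrt(np.mean(((h(x, np.ones_like(t)) - h(x, np.zeros_like(t)))
                            - ite_true) ** 2))
    return eps_ite, eps_ate, pehe
```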

The neural network architectures used for all experiments consist of fully-connected ReLU layers trained using RMSProp, with a small weight decay. We evaluate two architectures. BNN-4-0 consists of 4 ReLU representation-only layers and a single linear output layer ($d_r = 4$, $d_o = 0$). BNN-2-2 consists of 2 ReLU representation-only layers, 2 ReLU output layers after the treatment has been added, and a single linear output layer ($d_r = 2$, $d_o = 2$), see Figure 2. For the IHDP data we use layers of 25 hidden units each. For the News data, representation layers have 400 units and output layers 200 units. The nearest neighbor term, see Section 3, did not improve empirical performance, and was omitted for the BNN models. For the neural network models, the hypothesis and the representation were fit jointly.

We include several different linear models in our comparison, including ordinary linear regression (OLS) and doubly robust linear regression (DR) (Bang & Robins, 2005). We also include a method where variables are first selected using the LASSO and then used to fit a ridge regression (Lasso + Ridge). Regularization parameters are picked based on a held-out sample. For DR, we estimate propensity scores using logistic regression and clip the weights at 100. For the News dataset (see below), we perform the logistic regression on the first 100 principal components of the data.

Bayesian Additive Regression Trees (BART) (Chipman et al., 2010) is a non-linear regression model which has been used successfully for counterfactual inference in the past (Hill, 2011). We compare our results to BART using the implementation provided in the BayesTree R package (Chipman & McCulloch, 2016). Like Hill (2011), we do not attempt to tune the parameters, but use the defaults. Finally, we include a standard feed-forward neural network with 4 hidden layers, trained to predict the factual outcome based on $x$ and $t$, without a penalty for imbalance. We refer to this as NN-4.

6.1 Simulation based on real data – IHDP

Hill (2011) introduced a semi-simulated dataset based on the Infant Health and Development Program (IHDP). The IHDP data has covariates from a real randomized experiment, studying the effect of high-quality child care and home visits on future cognitive test scores. The experiment proposed by Hill (2011) uses a simulated outcome and artificially introduces imbalance between treated and control subjects by removing a subset of the treated population. In total, the dataset consists of 747 subjects (139 treated, 608 control), each represented by 25 covariates measuring properties of the child and their mother. For details, see Hill (2011). We run 100 repeated experiments for hyperparameter selection and 1000 for evaluation, all with the log-linear response surface implemented as setting “A” in the NPCI package (Dorie, 2016).

6.2 Simulation based on real data – News

We introduce a new dataset, simulating the opinions of a media consumer exposed to multiple news items. Each item is consumed either on a mobile device or on desktop. The units are different news items represented by word counts $x$, and the outcome is the reader's experience of $x$. The intervention $t \in \{0, 1\}$ represents the viewing device: desktop ($t = 0$) or mobile ($t = 1$). We assume that the consumer prefers to read about certain topics on mobile. To model this, we train a topic model on a large set of documents and let $z(x)$ represent the topic distribution of news item $x$. We define two centroids in topic space, $z_1$ (mobile) and $z_0$ (desktop), and let the reader's opinion of news item $x$ on device $t$ be determined by the similarity between $z(x)$ and $z_t$: $y_t(x) = C \cdot z(x)^\top z_t + \epsilon$, where $C$ is a scaling factor and $\epsilon \sim \mathcal{N}(0, 1)$. Here, we let the mobile centroid $z_1$ be the topic distribution of a randomly sampled document, and $z_0$ be the average topic representation of all documents. We further assume that the assignment of a news item to a device is biased towards the device preferred for that item. We model this using the softmax function, $p(t = 1 \mid x) = \frac{e^{\kappa\, z(x)^\top z_1}}{e^{\kappa\, z(x)^\top z_0} + e^{\kappa\, z(x)^\top z_1}}$, where $\kappa \ge 0$ determines the strength of the bias. Note that $\kappa = 0$ implies a completely random device assignment.

We sample news items and outcomes according to this model, based on 50 LDA topics trained on documents from the NY Times corpus (downloaded from UCI; Newman, 2008). The data available to the algorithms are the raw word counts, from a vocabulary selected as the union of the most probable words in each topic. We fix the scaling parameters $C$ and $\kappa$ and sample 50 realizations for evaluation. Figure 3 shows a visualization of the outcome and device assignments for a sample of 500 documents. Note that the device assignment becomes increasingly random, and the outcome lower, further away from the centroids.
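A sketch of this generative process, under the functional forms reconstructed above (the similarity outcome and the softmax assignment are our reading of the description, and the LDA step is replaced by a stub topic matrix; the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_news(z, C=50.0, kappa=10.0):
    """Simulate device assignments and outcomes for news items.

    z : (n, k) topic distributions of the items (e.g. from LDA).
    C, kappa : illustrative scaling / assignment-bias parameters.
    Returns treatment t, factual outcome y_f, and both potential outcomes.
    """
    n, k = z.shape
    z1 = z[rng.integers(n)]          # mobile centroid: a random document
    z0 = z.mean(axis=0)              # desktop centroid: average document
    # potential outcomes: similarity to the centroid of the device used
    y0 = C * z @ z0 + rng.normal(0, 1, n)
    y1 = C * z @ z1 + rng.normal(0, 1, n)
    # biased assignment: softmax over similarities (kappa = 0 -> random)
    logits = kappa * np.stack([z @ z0, z @ z1], axis=1)
    p1 = np.exp(logits[:, 1]) / np.exp(logits).sum(axis=1)
    t = rng.binomial(1, p1)
    y_f = np.where(t == 1, y1, y0)
    return t, y_f, y0, y1

# illustrative stub: random "topic distributions" on the simplex
z = rng.dirichlet(np.ones(50), size=5000)
t, y_f, y0, y1 = simulate_news(z)
```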

Figure 3: Visualization of one of the News sets (left). Each dot represents a single news item $x$. The radius represents the outcome $y$, and the color the treatment $t$. The two black dots represent the two centroids. Histogram of the ITE in News (right).

Table 1: IHDP. Results ($\epsilon_{ITE}$, $\epsilon_{ATE}$, $\epsilon_{PEHE}$) and standard errors for 1000 repeated experiments; lower is better. Linear outcome: OLS, Doubly Robust, Lasso + Ridge, BLR, BNN-4-0. Non-linear outcome: NN-4, BART (Chipman et al., 2010), BNN-2-2. Proposed methods: BLR, BNN-4-0 and BNN-2-2.

Table 2: News. Results ($\epsilon_{ITE}$, $\epsilon_{ATE}$, $\epsilon_{PEHE}$) and standard errors for 50 repeated experiments; lower is better. Linear outcome: OLS, Doubly Robust, Lasso + Ridge, BLR, BNN-4-0. Non-linear outcome: NN-4, BART (Chipman et al., 2010), BNN-2-2. Proposed methods: BLR, BNN-4-0 and BNN-2-2.
Figure 4: Error in estimated treatment effect (ITE, PEHE) and counterfactual response (RMSE) on the IHDP dataset, sweeping the imbalance penalty $\alpha$ for the BNN-2-2 neural network model.

6.3 Results

The results of the IHDP and News experiments are presented in Table 1 and Table 2 respectively. We see that, in general, the non-linear methods perform better in terms of individual prediction (ITE, PEHE). Further, we see that our proposed balancing neural network BNN-2-2 performs best on both datasets in terms of estimating the ITE and PEHE, and is competitive on the average treatment effect, ATE. Particularly noteworthy is the comparison with the network without a balance penalty, NN-4. These results indicate that our proposed regularization can help avoid overfitting the representation to the factual outcome. Figure 4 plots the performance of BNN-2-2 over a sweep of the imbalance penalty $\alpha$. The valley in the middle of the sweep, and the fact that we do not experience a loss in performance for smaller values of $\alpha$, show that penalizing imbalance in the representation has the desired effect.

For the linear methods, we see that the two variable selection approaches, our proposed BLR method and Lasso + Ridge, work the best in terms of estimating ITE. We would like to emphasize that Lasso + Ridge is a very strong baseline and it’s exciting that our theory-guided method is competitive with this approach.

On News, BLR and Lasso + Ridge perform equally well yet again, although this time with qualitatively different results, as they do not select the same variables. Interestingly, BNN-4-0, BLR and Lasso + Ridge all perform better on News than the standard neural network, NN-4. The performance of BART on News is likely hurt by the dimensionality of the dataset, and could improve with hyperparameter tuning.

7 Conclusion

As machine learning is becoming a major tool for researchers and policy makers across different fields such as healthcare and economics, causal inference becomes a crucial issue for the practice of machine learning. In this paper we focus on counterfactual inference, which is a widely applicable special case of causal inference. We cast counterfactual inference as a type of domain adaptation problem, and derive a novel way of learning representations suited for this problem.

Our models rely on a novel type of regularization criterion: learning balanced representations, representations which have similar distributions among the treated and untreated populations. We show that trading off a balancing criterion with standard data-fitting and regularization terms is both practically and theoretically prudent.

Open questions which remain include how to generalize this method to more than two treatment options, how to derive better optimization algorithms, and how to use richer discrepancy measures.

Acknowledgements

DS and US were supported by NSF CAREER award #1350965.

Appendix A Proof of Theorem 1

We use a result implicit in the proof of Theorem 2 of Cortes & Mohri (2014), for the case where $\mathcal{H}$ is the set of linear hypotheses over a fixed representation $\Phi$. Cortes & Mohri (2014) state their result for the case of domain adaptation: in our case, the factual distribution is the so-called “source domain”, and the counterfactual distribution is the “target domain”.

Theorem A1.

[Cortes & Mohri (2014)] Using the notation and assumptions of Theorem 1, for both $Q = \hat{P}^F_\Phi$ and $Q = \hat{P}^{CF}_\Phi$:

$$L_Q(\hat{h}^F) - L_Q(\hat{h}^{CF}) \;\le\; \text{disc}_{\mathcal{H}_l}\!\left(\hat{P}^F_\Phi, \hat{P}^{CF}_\Phi\right) + \frac{r}{\sqrt{\lambda}}\, \min_{h \in \mathcal{H}_l} \sqrt{L_{\hat{P}^F_\Phi}(h) + L_{\hat{P}^{CF}_\Phi}(h)}. \qquad (9)$$

In their work, Cortes & Mohri (2014) assume that $\mathcal{H}$ is a reproducing kernel Hilbert space (RKHS) for a universal kernel, and they do not consider the role of the representation $\Phi$. Since the RKHS hypothesis space they use is much stronger than the linear space $\mathcal{H}_l$, it is often reasonable to assume that the second term in the bound (9) is small. We however cannot make this assumption, and therefore we wish to explicitly bound the term $\min_{h \in \mathcal{H}_l} \sqrt{L_{\hat{P}^F_\Phi}(h) + L_{\hat{P}^{CF}_\Phi}(h)}$, while using the fact that we have control over the representation $\Phi$.

Lemma 1.

Let $X = \{x_i\}_{i=1}^n$, $T = \{t_i\}_{i=1}^n$, and $Y^F = \{y^F_i\}_{i=1}^n$. We assume that $\mathcal{X}$ is a metric space with metric $d$, and that there exist two functions $Y_0$ and $Y_1$ such that $y^F_i = Y_{t_i}(x_i)$, and in addition we define $y^{CF}_i = Y_{1 - t_i}(x_i)$. We further assume that the functions $Y_0$ and $Y_1$ are Lipschitz continuous with constants $K_0$ and $K_1$ respectively, such that $|Y_t(x_a) - Y_t(x_b)| \le K_t\, d(x_a, x_b)$. Define $j(i)$ to be the nearest neighbor of $x_i$ among the group that received the opposite treatment from unit $i$, for all $i = 1, \ldots, n$. Let $d_{i,j(i)} = d(x_i, x_{j(i)})$.

For any hypothesis $h$ and unit $i$:

$$\left| h(\Phi(x_i), 1 - t_i) - y^{CF}_i \right| \;\le\; \left| h(\Phi(x_i), 1 - t_i) - y^F_{j(i)} \right| + K_{1 - t_i}\, d_{i,j(i)}.$$

Proof.

By the triangle inequality, we have that:

$$\left| h(\Phi(x_i), 1 - t_i) - y^{CF}_i \right| \le \left| h(\Phi(x_i), 1 - t_i) - y^F_{j(i)} \right| + \left| y^F_{j(i)} - y^{CF}_i \right|.$$

By the Lipschitz assumption on $Y_0$ and $Y_1$, and since $t_{j(i)} = 1 - t_i$, we obtain that

$$\left| y^F_{j(i)} - y^{CF}_i \right| = \left| Y_{1 - t_i}(x_{j(i)}) - Y_{1 - t_i}(x_i) \right| \le K_{1 - t_i}\, d_{i,j(i)}.$$

By definition $y^F_{j(i)} = Y_{t_{j(i)}}(x_{j(i)})$. In addition, by definition of $j(i)$, we have $t_{j(i)} = 1 - t_i$, and therefore $y^F_{j(i)} = Y_{1 - t_i}(x_{j(i)})$, proving the equality. The inequality is an immediate consequence of the Lipschitz property. ∎

We restate Theorem 1 and prove it.

Theorem 1.

For a sample $\{(x_i, t_i, y^F_i)\}_{i=1}^n$, $x_i \in \mathcal{X}$, $t_i \in \{0, 1\}$, and $y^F_i \in \mathbb{R}$, recall that $y^F_i = Y_{t_i}(x_i)$, and in addition define $y^{CF}_i = Y_{1 - t_i}(x_i)$. For a given representation function $\Phi: \mathcal{X} \to \mathbb{R}^d$, let $\hat{P}^F_\Phi$ and $\hat{P}^{CF}_\Phi$ be defined as in Section 4. We assume that $\mathcal{X}$ is a metric space with metric $d$, and that the potential outcome functions $Y_0(x)$ and $Y_1(x)$ are Lipschitz continuous with constants $K_0$ and $K_1$ respectively, such that $|Y_t(x_a) - Y_t(x_b)| \le K_t\, d(x_a, x_b)$.

Let $\mathcal{H}_l$ be the space of linear functions over $[\Phi(x), t]$, and for $h \in \mathcal{H}_l$, let $L_P(h)$ be the expected loss of $h$ over distribution $P$. Let $r$ be the maximum expected radius of the distributions. For $\lambda > 0$, let $\hat{h}^F = \arg\min_{h \in \mathcal{H}_l}\, L_{\hat{P}^F_\Phi}(h) + \lambda \|h\|^2$, and similarly $\hat{h}^{CF} = \arg\min_{h \in \mathcal{H}_l}\, L_{\hat{P}^{CF}_\Phi}(h) + \lambda \|h\|^2$, i.e. $\hat{h}^F$ and $\hat{h}^{CF}$ are the ridge regression solutions for the factual and counterfactual empirical distributions, respectively.

Let $\hat{y}^F_i = \hat{h}^F(\Phi(x_i), t_i)$ and $\hat{y}^{CF}_i = \hat{h}^F(\Phi(x_i), 1 - t_i)$ be the outputs of the hypothesis $\hat{h}^F$ over the representation $\Phi(x_i)$ for the factual and counterfactual settings of $t_i$, respectively. Finally, for each $i$, let $j(i)$ be the nearest neighbor of $x_i$ among the group that received the opposite treatment from unit $i$. Let $d_{i,j(i)} = d(x_i, x_{j(i)})$.

Then for both $Q = \hat{P}^F_\Phi$ and $Q = \hat{P}^{CF}_\Phi$ we have:

$$L_Q(\hat{h}^F) - L_Q(\hat{h}^{CF}) \le \text{disc}_{\mathcal{H}_l}\!\left(\hat{P}^F_\Phi, \hat{P}^{CF}_\Phi\right) + \frac{r}{\sqrt{\lambda}}\, \min_{h \in \mathcal{H}_l} \sqrt{L_{\hat{P}^F_\Phi}(h) + L_{\hat{P}^{CF}_\Phi}(h)} \qquad (10)$$
$$\le \text{disc}_{\mathcal{H}_l}\!\left(\hat{P}^F_\Phi, \hat{P}^{CF}_\Phi\right) + \frac{r}{\sqrt{\lambda}} \left[ \min_{h \in \mathcal{H}_l} \sqrt{L_{\hat{P}^F_\Phi}(h) + \frac{2}{n} \sum_{i=1}^n \left( h(\Phi(x_i), 1 - t_i) - y^F_{j(i)} \right)^2} + \sqrt{\frac{2}{n} \sum_{i=1}^n K_{1 - t_i}^2\, d_{i,j(i)}^2}\, \right] \qquad (11)$$
Proof.

Inequality (10) is immediate by Theorem A1. In order to prove inequality (11), we apply Lemma 1 to each counterfactual loss term, square both sides using $(a + b)^2 \le 2a^2 + 2b^2$, and sum over the units $i = 1, \ldots, n$. ∎

Footnotes

  1. When $L$ is the squared loss, one can show that if the representations and outcomes are bounded, and the hypothesis set is that of linear functions with bounded norm, then the weak Lipschitz constant $\mu$ is bounded in terms of these quantities.

References

  1. Austin, Peter C. An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivariate behavioral research, 46(3):399–424, 2011.
  2. Bang, Heejung and Robins, James M. Doubly robust estimation in missing data and causal inference models. Biometrics, 61(4):962–973, 2005.
  3. Ben-David, Shai, Blitzer, John, Crammer, Koby, Pereira, Fernando, et al. Analysis of representations for domain adaptation. Advances in neural information processing systems, 19:137, 2007.
  4. Bengio, Yoshua, Courville, Aaron, and Vincent, Pierre. Representation learning: A review and new perspectives. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(8):1798–1828, 2013.
  5. Beygelzimer, Alina, Langford, John, Li, Lihong, Reyzin, Lev, and Schapire, Robert E. Contextual bandit algorithms with supervised learning guarantees. arXiv preprint arXiv:1002.4058, 2010.
  6. Bottou, Léon, Peters, Jonas, Quinonero-Candela, Joaquin, Charles, Denis X, Chickering, D Max, Portugaly, Elon, Ray, Dipankar, Simard, Patrice, and Snelson, Ed. Counterfactual reasoning and learning systems: The example of computational advertising. The Journal of Machine Learning Research, 14(1):3207–3260, 2013.
  7. Chernozhukov, Victor, Fernández-Val, Iván, and Melly, Blaise. Inference on counterfactual distributions. Econometrica, 81(6):2205–2268, 2013.
  8. Chipman, Hugh and McCulloch, Robert. BayesTree: Bayesian additive regression trees. https://cran.r-project.org/package=BayesTree/, 2016. Accessed: 2016-01-30.
  9. Chipman, Hugh A, George, Edward I, and McCulloch, Robert E. BART: Bayesian additive regression trees. The Annals of Applied Statistics, pp. 266–298, 2010.
  10. Cortes, Corinna and Mohri, Mehryar. Domain adaptation and sample bias correction theory and algorithm for regression. Theoretical Computer Science, 519:103–126, 2014.
  11. Daume III, Hal and Marcu, Daniel. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, pp. 101–126, 2006.
  12. Dorie, Vincent. NPCI: Non-parametrics for causal inference. https://github.com/vdorie/npci, 2016. Accessed: 2016-01-30.
  13. Dudík, Miroslav, Langford, John, and Li, Lihong. Doubly robust policy evaluation and learning. arXiv preprint arXiv:1103.4601, 2011.
  14. Ganin, Yaroslav, Ustinova, Evgeniya, Ajakan, Hana, Germain, Pascal, Larochelle, Hugo, Laviolette, François, Marchand, Mario, and Lempitsky, Victor. Domain-adversarial training of neural networks. arXiv preprint arXiv:1505.07818, 2015.
  15. Gelman, Andrew and Hill, Jennifer. Data analysis using regression and multilevel/hierarchical models. Cambridge University Press, 2006.
  16. Gretton, Arthur, Borgwardt, Karsten M., Rasch, Malte J., Schölkopf, Bernhard, and Smola, Alexander. A kernel two-sample test. J. Mach. Learn. Res., 13:723–773, March 2012. ISSN 1532-4435.
  17. Hill, Jennifer L. Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics, 20(1), 2011.
  18. Jiang, Jing. A literature survey on domain adaptation of statistical classifiers. Technical report, University of Illinois at Urbana-Champaign, 2008.
  19. Kang, Joseph DY and Schafer, Joseph L. Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data. Statistical science, pp. 523–539, 2007.
  20. Langford, John, Li, Lihong, and Dudík, Miroslav. Doubly robust policy evaluation and learning. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 1097–1104, 2011.
  21. Lewis, David. Causation. The journal of philosophy, pp. 556–567, 1973.
  22. Louizos, Christos, Swersky, Kevin, Li, Yujia, Welling, Max, and Zemel, Richard. The variational fair auto encoder. arXiv preprint arXiv:1511.00830, 2015.
  23. Mansour, Yishay, Mohri, Mehryar, and Rostamizadeh, Afshin. Domain adaptation: Learning bounds and algorithms. arXiv preprint arXiv:0902.3430, 2009.
  24. Morgan, Stephen L and Winship, Christopher. Counterfactuals and causal inference. Cambridge University Press, 2014.
  25. Newman, David. Bag of words data set. https://archive.ics.uci.edu/ml/datasets/Bag+of+Words, 2008.
  26. Pearl, Judea. Causality. Cambridge university press, 2009.
  27. Pearl, Judea. Invited commentary: understanding bias amplification. American journal of epidemiology, 174(11):1223–1227, 2011.
  28. Prentice, Ross. Use of the logistic model in retrospective studies. Biometrics, pp. 599–606, 1976.
  29. Robins, James M, Hernan, Miguel Angel, and Brumback, Babette. Marginal structural models and causal inference in epidemiology. Epidemiology, pp. 550–560, 2000.
  30. Rosenbaum, Paul R. Observational studies. Springer, 2002.
  31. Rosenbaum, Paul R. Design of Observational Studies. Springer Science & Business Media, 2009.
  32. Rosenbaum, Paul R and Rubin, Donald B. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55, 1983.
  33. Rubin, Donald B. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of educational Psychology, 66(5):688, 1974.
  34. Rubin, Donald B. Causal inference using potential outcomes. Journal of the American Statistical Association, 2011.
  35. Schölkopf, B., Janzing, D., Peters, J., Sgouritsa, E., Zhang, K., and Mooij, J. On causal and anticausal learning. In Proceedings of the 29th International Conference on Machine Learning, pp. 1255–1262, New York, NY, USA, 2012. Omnipress.
  36. Strehl, Alex, Langford, John, Li, Lihong, and Kakade, Sham M. Learning from logged implicit exploration data. In Advances in Neural Information Processing Systems, pp. 2217–2225, 2010.
  37. Sutton, Richard S and Barto, Andrew G. Reinforcement learning: An introduction, volume 1. MIT press Cambridge, 1998.
  38. Swaminathan, Adith and Joachims, Thorsten. Batch learning from logged bandit feedback through counterfactual risk minimization. Journal of Machine Learning Research, 16:1731–1755, 2015.
  39. Tian, Lu, Alizadeh, Ash A, Gentles, Andrew J, and Tibshirani, Robert. A simple method for estimating interactions between a treatment and a large number of covariates. Journal of the American Statistical Association, 109(508):1517–1532, 2014.
  40. van der Laan, Mark J and Petersen, Maya L. Causal effect models for realistic individualized treatment and intention to treat rules. The International Journal of Biostatistics, 3(1), 2007.
  41. Wager, Stefan and Athey, Susan. Estimation and inference of heterogeneous treatment effects using random forests. arXiv preprint arXiv:1510.04342, 2015.
  42. Weiss, Jeremy C, Kuusisto, Finn, Boyd, Kendrick, Liu, Jie, and Page, David C. Machine learning for treatment assignment: Improving individualized risk attribution. American Medical Informatics Association Annual Symposium, 2015.
  43. Zemel, Rich, Wu, Yu, Swersky, Kevin, Pitassi, Toni, and Dwork, Cynthia. Learning fair representations. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 325–333, 2013.