Efficient Policy Learning from Surrogate-Loss Classification Reductions


Recent work on policy learning from observational data has highlighted the importance of efficient policy evaluation and has proposed reductions to weighted (cost-sensitive) classification. But efficient policy evaluation need not yield efficient estimation of policy parameters. We consider the estimation problem given by a weighted surrogate-loss classification reduction of policy learning with any score function, whether direct, inverse-propensity-weighted, or doubly robust. We show that, under a correct specification assumption, the weighted classification formulation need not be efficient for policy parameters. We draw a contrast to actual (possibly weighted) binary classification, where correct specification implies a parametric model, while for policy learning it only implies a semiparametric model. In light of this, we instead propose an estimation approach based on the generalized method of moments, which is efficient for the policy parameters. We propose a particular method based on recent developments in solving moment problems using neural networks and demonstrate the efficiency and regret benefits of this method empirically.

1 Introduction

Policy learning from observational data is an important but challenging problem because it requires reasoning about the effects of interventions not observed in the data. For example, if we wish to learn an improved policy for medical treatment assignment based on observational data from electronic health records, we must take care to consider potential confounding: since healthier patients who were already predisposed to positive outcomes were likely to have historically been assigned less invasive treatments, naïve approaches may incorrectly infer that a policy of always assigning less invasive treatments will obtain better outcomes.

A variety of recent work has tackled this problem, known as policy learning from observational (or off-policy) data, by optimizing causally grounded estimates of policy value such as inverse-propensity weighting (IPW), doubly robust (DR) estimates, or similar (Qian & Murphy, 2011; Beygelzimer & Langford, 2009; Kitagawa & Tetenov, 2018; Swaminathan & Joachims, 2015; Zhao et al., 2012; Zhou et al., 2017; Jiang et al., 2019; Kallus & Zhou, 2018; Kallus, 2018, 2017). In particular, Athey & Wager (2017) and Zhou et al. (2017), among others, highlight the importance of using efficient estimates of policy value as optimization objectives, i.e., estimates having minimal asymptotic mean-squared error (MSE). Examples of efficient estimators are direct modeling or IPW when outcome functions or propensities are sufficiently smooth (Hirano et al., 2003; Hahn, 1998), or DR leveraging cross-fitting (Chernozhukov et al., 2018) in more general non-parametric settings.

Regardless of which of the three estimates one uses, the resulting optimization problem amounts to a difficult binary optimization problem. Therefore many of the above works reduce this problem to weighted classification (for two actions; cost-sensitive classification more generally) and leverage tractable convex formulations that use surrogate loss functions in place of the zero-one loss, such as the hinge loss (Zhao et al., 2012; Zhou et al., 2017, which yields a weighted SVM) and the logistic loss (Jiang et al., 2019, which yields a weighted logistic regression). The recently proposed entropy learning approach of Jiang et al. (2019) is particularly appealing, since its logistic-regression-based surrogate loss is smooth and therefore allows for statistical inference on the estimated optimal parameters.

However, as we here emphasize, even if we use policy value estimates that are efficient, this does not imply that we obtain efficient estimation/learning of the optimal policy itself, even if the surrogate-loss model is correctly specified. For example, in the case of logistic loss, we demonstrate that, although logistic regression is statistically efficient for actual binary classification when correctly specified (as is well known), in the case of policy learning via a weighted-classification reduction, correct specification only implies a semiparametric model, and therefore minimizing the empirical average of the loss is not efficient in this case.

On the other hand, the implications of correct specification can be summarized as a conditional moment problem. Such problems are amenable to efficient solution using approaches based on the generalized method of moments (GMM; Hansen, 1982). We characterize what such an efficient estimate looks like, in terms of the efficient instruments for our specific policy-learning problem. We propose a particular implementation based on recent work on efficiently solving conditional moment problems via a reformulation of the efficient GMM solution as a smooth game optimization problem, which can be solved using adversarial training of neural networks (Bennett et al., 2019). In addition, we prove some results relating the efficiency of optimal policy estimation to the asymptotic regret of the surrogate loss, and also prove that under correct specification the regret of the surrogate loss upper bounds the true regret of policy learning.

We demonstrate empirically over a wide range of scenarios that our methodology indeed leads to greater efficiency, with lower MSE in estimating the optimal policy parameters under correct specification. Furthermore, we demonstrate that in practice, both with and without correct specification, our methodology tends to learn policies with lower regret, particularly in the low-data regime.

1.1 Setting and Assumptions

Let $X \in \mathcal{X}$ denote the context of an individual, $T \in \{-1, +1\}$ the treatment assigned to that individual, and $Y \in \mathbb{R}$ the resultant outcome. In addition let $Y(t)$ denote the counterfactual outcome that would have been obtained for the corresponding individual if, possibly contrary to fact, treatment $t$ had been assigned instead. We assume throughout that we have access to logged data consisting of $n$ iid observations, $(X_1, T_1, Y_1), \dots, (X_n, T_n, Y_n)$, of the triplet $(X, T, Y)$ generated by some behavior policy.

We make the standard causal assumptions of consistency and non-interference, which can be summarized by assuming that $Y = Y(T)$. Furthermore, as is standard in the above policy learning literature, we assume that $X$ encapsulates all possible confounders, that is, $(Y(-1), Y(+1)) \perp T \mid X$, as would for example be guaranteed if the logging policy assigned treatment based only on the observed individual context $X$.

A policy denotes a mapping from individual context to treatment to be assigned. Concretely, given individual context $x$, let $\pi(x)$ denote the treatment assigned by policy $\pi$ (we may also consider stochastic policies, but since optimal policies are deterministic we focus on deterministic ones).


Let

$$V(\pi) = \mathbb{E}\big[Y(\pi(X))\big] - \tfrac{1}{2}\,\mathbb{E}\big[Y(-1) + Y(+1)\big]$$

denote the expected value of following policy $\pi$, relative to complete randomization. Given the logged data and some policy class $\Pi$, our task is to learn an optimal policy from the class, defined by $\pi^* \in \arg\max_{\pi \in \Pi} V(\pi)$ (notice that offsetting by the complete randomization policy does not affect this optimization problem). In particular we consider policy classes where each policy is indexed by some utility function $g$ and is defined by $\pi_g(x) = \operatorname{sign}(g(x))$, where in turn the utility functions are parametrized by $\theta \in \Theta$ as $g_\theta$, so that

Correspondingly, we define

$$\Pi = \big\{\pi_{g_\theta} : \theta \in \Theta\big\}$$

and $\theta^* \in \arg\max_{\theta \in \Theta} V(\pi_{g_\theta})$. A prominent example is linear decision rules, where $g_\theta(x) = \theta^\top x$. Other examples include decision trees of bounded depth and neural networks.
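As a concrete illustration of the linear decision rule, here is a minimal sketch (our code, not from the paper); ties with utility exactly zero are arbitrarily assigned treatment $+1$:

```python
import numpy as np

def linear_policy(theta, X):
    """pi_theta(x) = sign(theta^T x), mapping contexts to treatments in {-1, +1}.

    A sketch of the linear decision rule; contexts with utility exactly 0
    are arbitrarily assigned treatment +1.
    """
    return np.where(X @ theta >= 0, 1, -1)

theta = np.array([1.0, -2.0])
X = np.array([[1.0, 0.0],    # utility  1.0 -> +1
              [0.0, 1.0],    # utility -2.0 -> -1
              [1.0, 1.0]])   # utility -1.0 -> -1
actions = linear_policy(theta, X)
```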

1.2 Efficiency

We briefly review what it means to estimate the optimal policy parameters, $\theta^*$, efficiently. For simplicity, suppose that $\theta^*$ is unique. A model is some set of distributions for the data-generating process (DGP), i.e., a set of probability distributions for the triplet $(X, T, Y)$.

A model is generally non-parametric in the sense that this set of distributions can be arbitrary and infinite-dimensional.

Consider any learned policy parameters $\hat\theta_n$, that is, a function of the data with values in $\Theta$. Roughly speaking, we say that $\hat\theta_n$ is regular if, whenever the data is generated from a distribution in the model, $\sqrt{n}(\hat\theta_n - \theta^*)$ converges in distribution to some limit as $n \to \infty$, and this limit holds in a particular locally uniform sense over the model (see Van der Vaart, 2000, Chapter 25 for a precise definition). Semiparametric efficiency theory (see ibid.) then establishes that there exists a covariance matrix $V^*$ such that, for any cost function $c$ whose sublevel sets are convex, symmetric about the origin, and closed, we have that

$$\liminf_{n \to \infty}\, \mathbb{E}\Big[c\big(\sqrt{n}\,(\hat\theta_n - \theta^*)\big)\Big] \;\ge\; \mathbb{E}\big[c(Z)\big], \qquad Z \sim \mathcal{N}(0, V^*), \tag{1}$$

for any estimator $\hat\theta_n$ that is regular in the model. An important example is MSE, given by $c(u) = \|u\|_2^2$.

Efficient estimators are those for which Eq. 1 holds with equality for all such functions $c$, which, by the portmanteau lemma, would be implied if the estimator has the limiting law $\sqrt{n}(\hat\theta_n - \theta^*) \rightsquigarrow \mathcal{N}(0, V^*)$. Regular estimators form a very general class, so the bound in Eq. 1 is rather strong. So much so that, in fact, Eq. 1 holds in a local asymptotic minimax sense for all estimators (see ibid., Theorem 25.21).

Efficiency is important because, with observational data, we only have the data that we have and cannot experiment or simulate to generate more, so we should use the data optimally. Equation 1 relates to the efficiency of estimating $\theta^*$. In Section 4.4 we also relate this to regret.

1.3 Related Work

There has been a variety of past work on the problem of policy learning from observational data. Much of this work considers formulating the objective of policy learning as a weighted classification problem (Beygelzimer & Langford, 2009; Dudík et al., 2011), and either minimizing the 0-1 loss directly using combinatorial optimization (Athey & Wager, 2017; Kitagawa & Tetenov, 2018; Zhou et al., 2018), using smooth stochastic policies to obtain a nonconvex but smooth loss surface (Swaminathan & Joachims, 2015), or replacing the 0-1 objective with a convex surrogate to be minimized instead (Zhao et al., 2012; Zhou et al., 2017; Jiang et al., 2019; Beygelzimer & Langford, 2009; Dudík et al., 2011). In addition there is work that extends some of the above approaches to the continuous action setting (Kallus & Zhou, 2018; Krishnamurthy et al., 2019; Chernozhukov et al., 2019); our focus will be solely on binary actions. Of these methods the convex-surrogate approach has the advantage of computational tractability and, when the convex surrogate is smooth (e.g. Jiang et al., 2019), the ability to perform statistical inference on the optimal parameters. Our paper extends this work by investigating how to solve the smooth surrogate problem efficiently. Although much of this past work has used objective functions for learning based on statistically efficient estimates of policy value (Dudík et al., 2011; Athey & Wager, 2017; Zhou et al., 2018; Chernozhukov et al., 2019), to the best of our knowledge our paper is novel in investigating the efficient estimation of the optimal policy parameters themselves.

In addition there has been a variety of past work on solving conditional moment problems (see Khosravi et al. (2019); Bennett et al. (2019) and citations therein). Our paper builds on this work as it reformulates the problem of policy learning as a conditional moment problem, which we propose to solve using optimally weighted GMM (Hansen, 1982) and DeepGMM (Bennett et al., 2019).

2 The Surrogate-Loss Reduction and Its Fisher Consistency

In this section, we present the surrogate-loss reduction of policy learning and the implications of correct specification.

Many policy learning methods start by recognizing that the policy value can be re-written as

$$V(\pi) = \tfrac{1}{2}\,\mathbb{E}\big[\Gamma\, \pi(X)\big], \tag{2}$$

where $\Gamma$ is any of the following score variables, which all depend on observables:

$$\Gamma_{\mathrm{DM}} = \mu(X, +1) - \mu(X, -1), \qquad \Gamma_{\mathrm{IPW}} = \frac{T\,Y}{e_T(X)}, \qquad \Gamma_{\mathrm{DR}} = \Gamma_{\mathrm{DM}} + \frac{T\,\big(Y - \mu(X, T)\big)}{e_T(X)}, \tag{3}$$

where $\mu(x, t) = \mathbb{E}[Y \mid X = x, T = t]$ and $e_t(x) = \mathbb{P}(T = t \mid X = x)$. Equation 2 arises once we recognize that all of these satisfy $\mathbb{E}[\Gamma \mid X] = \mu(X, +1) - \mu(X, -1)$.
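To make the three score constructions concrete, here is a sketch in Python; the nuisance functions `mu` and `propensity` and the data-generating process are hypothetical stand-ins of our own choosing, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 2))

def mu(x, t):            # hypothetical outcome model E[Y | X=x, T=t], t in {-1, +1}
    return x[:, 0] + t * x[:, 1]

def propensity(x, t):    # hypothetical behavior policy P(T=t | X=x)
    p = 1.0 / (1.0 + np.exp(-x[:, 0]))   # P(T=+1 | X)
    return np.where(t == 1, p, 1.0 - p)

T = np.where(rng.uniform(size=n) < propensity(X, np.ones(n)), 1, -1)
Y = mu(X, T) + 0.1 * rng.normal(size=n)

tau = mu(X, 1) - mu(X, -1)                                 # E[Gamma | X]
gamma_dm  = tau                                            # direct score
gamma_ipw = T * Y / propensity(X, T)                       # inverse-propensity score
gamma_dr  = tau + T * (Y - mu(X, T)) / propensity(X, T)    # doubly robust score

# All three scores estimate the same policy value V(pi) = E[Gamma * pi(X)] / 2.
pi = np.sign(X[:, 1])                                      # an arbitrary test policy
values = [np.mean(g * pi) / 2 for g in (gamma_dm, gamma_ipw, gamma_dr)]
```

With the true nuisances plugged in, the three empirical value estimates agree up to sampling noise, with the DR score tracking the direct score most closely.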

Then we can approximate Eq. 2 using its empirical version:

$$\hat{V}(\pi_\theta) = \frac{1}{2n} \sum_{i=1}^n \Gamma_i\, \pi_\theta(X_i). \tag{4}$$

In particular, Athey & Wager (2017); Kitagawa & Tetenov (2018); Zhou et al. (2018) prove bounds of the form $\max_{\pi \in \Pi} V(\pi) - V(\hat\pi) = O_p(1/\sqrt{n})$ for $\hat\pi \in \arg\max_{\pi \in \Pi} \hat{V}(\pi)$, given that the policy class has bounded complexity. This shows that optimizing $\hat{V}(\pi)$ provides near-optimal solutions in the original policy learning problem, since $\hat{V}(\pi)$ consistently estimates $V(\pi)$. Given that in practice the nuisance functions $\mu$ and $e$ are estimated from data, we denote the corresponding score variable when such estimates are plugged in as $\hat\Gamma$ to differentiate it from the variable $\Gamma$ that uses the true nuisance functions. We correspondingly let $\hat{V}$ denote the empirical value estimate with $\hat\Gamma_i$ in place of $\Gamma_i$. When $\hat\Gamma$ is efficient for policy evaluation one can generally additionally prove that the same regret guarantees continue to hold.

Given the non-convexity and non-smoothness of the empirical objective function Eq. 4, however, it is not clear how to actually optimize it. Many works (Jiang et al., 2019; Zhao et al., 2012; Beygelzimer & Langford, 2009) recognize that this optimization problem is actually equivalent to weighted binary classification (in our two-action case), since $\Gamma\,\pi(X) = |\Gamma|\,\big(1 - 2\,\mathbb{I}[\pi(X) \ne \operatorname{sign}(\Gamma)]\big)$, so any classification algorithm that accepts instance weights can in principle be used to address Eq. 4. Specifically, many classification algorithms take the form of minimizing a convex surrogate loss:

$$\hat{S}_n(\theta) = \frac{1}{n} \sum_{i=1}^n |\Gamma_i|\, \ell\big(\operatorname{sign}(\Gamma_i)\, g_\theta(X_i)\big), \tag{5}$$

where $\ell(z)$ acts as a surrogate for the zero-one loss $\mathbb{I}[z \le 0]$. Analogous to above, we let $S(\theta)$ denote the population version of this loss. For classification, Bartlett et al. (2006) studies which losses are appropriate surrogates, i.e., are classification-calibrated. The population version of the surrogate loss, which $\hat{S}_n$ is approximating, is

$$S(\theta) = \mathbb{E}\big[\,|\Gamma|\, \ell\big(\operatorname{sign}(\Gamma)\, g_\theta(X)\big)\big]. \tag{6}$$

Following Jiang et al. (2019), we will focus on the logistic (or, logit-cross-entropy) loss function and define $\ell$ everywhere as:1

$$\ell(z) = \log\big(1 + e^{-z}\big).$$

This loss is clearly smooth, and is also convex in $\theta$ as long as $g_\theta(x)$ is linear in $\theta$ for each $x$. The loss is also classification-calibrated, which immediately yields the following, given an additional regularity assumption:

Assumption 1.

, and .

Theorem 1 (Fisher Consistency Under Correct Specification).

Suppose the policy class is correctly specified for the surrogate loss in the sense that

$$\min_{\theta \in \Theta} S(\theta) = \min_{g} S(g), \tag{7}$$

where the minimum on the right is taken over all measurable utility functions $g$. Then given Assumption 1, any minimizer of the surrogate-loss risk is an optimal policy: for any $\tilde\theta \in \arg\min_{\theta \in \Theta} S(\theta)$, we have $V(\pi_{g_{\tilde\theta}}) = \max_{\pi \in \Pi} V(\pi)$.

Theorem 1 establishes that, under correct specification, if we minimize the population surrogate loss, $S(\theta)$, then we obtain an optimal policy. Therefore, a natural strategy for policy learning would be to directly minimize the empirical loss $\hat{S}_n(\theta)$, as was done in the works above. Although the above arguments indicate that this approach is computationally tractable, and also consistent under mild regularity conditions ensuring that optimizers of $\hat{S}_n$ converge to optimizers of $S$, it is not clear that it is statistically efficient, even if we use an efficient score variable for policy value estimation.
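The empirical weighted surrogate loss Eq. 5 with the logistic loss is straightforward to compute; the following sketch (variable names and data are ours) assumes a linear utility $g_\theta(x) = \theta^\top x$:

```python
import numpy as np

def logistic_surrogate_loss(theta, X, gamma):
    """Empirical weighted logistic surrogate loss:
    (1/n) * sum_i |gamma_i| * log(1 + exp(-sign(gamma_i) * theta^T x_i))."""
    margins = np.sign(gamma) * (X @ theta)
    # np.logaddexp(0, -z) is a numerically stable log(1 + exp(-z))
    return np.mean(np.abs(gamma) * np.logaddexp(0.0, -margins))

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
gamma = np.array([2.0, -1.0, 0.5])

# At theta = 0 every margin is 0, so the loss is log(2) * mean(|gamma|).
loss_at_zero = logistic_surrogate_loss(np.zeros(2), X, gamma)
expected_at_zero = np.log(2.0) * np.abs(gamma).mean()
```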

3 The Conditional-Moment Reformulation of the Surrogate-Loss Reduction

In this section we establish a new interpretation of the surrogate-loss reduction as a conditional moment problem and we discuss the implications of this in terms of the model implied by correct specification. This will enable us to conduct efficiency analysis and to design algorithms with improved efficiency in the next section.

3.1 The Conditional Moment Problem

To make progress toward a characterization of efficiency under correct specification, we next establish an equivalent formulation of optimizing the population surrogate loss under correct specification as a conditional moment problem.

Define the derivative of $\ell$ with respect to its argument:

$$\ell'(z) = -\sigma(-z),$$

where $\sigma(u) = 1 / (1 + e^{-u})$ is the logistic function.

Theorem 2 (Conditional Moment Problem Under Correct Specification).

Suppose Assumption 1 holds and the policy class is correctly specified for the surrogate loss in the sense that Eq. 7 holds. Define

$$\rho(X, T, Y; \theta) = -\Gamma\,\sigma\big(-\operatorname{sign}(\Gamma)\, g_\theta(X)\big).$$

Then we have that $\theta^{**} \in \arg\min_{\theta \in \Theta} S(\theta)$ if and only if

$$\mathbb{E}\big[\rho(X, T, Y; \theta^{**}) \mid X\big] = 0 \quad \text{almost surely}. \tag{8}$$
Theorem 2 arises straightforwardly from the observation that, under correct specification, the utility function at the surrogate-loss minimizer minimizes the conditional expected surrogate loss over utility values for almost every $x$. Using smoothness and convexity, this latter observation can be restated using first-order optimality conditions. The dominated convergence theorem allows us to exchange differentiation and expectation, and we obtain the result. Theorem 2 provides an alternative characterization of the surrogate-loss minimizer as solving a conditional moment problem.

Notice that Eq. 8 is equivalent to the statement that, for any square-integrable function $f$ of $X$, we have the moment restriction

$$\mathbb{E}\big[f(X)\,\rho(X, T, Y; \theta^{**})\big] = 0, \tag{9}$$

where $\theta^{**}$ denotes the surrogate-loss minimizer from Theorem 2.
This alternative characterization makes the problem amenable to efficiency analysis.

Notice that, by first-order optimality, if the parametrization $\theta \mapsto g_\theta(x)$ is smooth, optimizing in Eq. 6 exactly corresponds to solving the set of moment equations in Eq. 9 given by the critic functions $f_j(x) = \partial g_\theta(x) / \partial \theta_j$. Similarly, optimizing the empirical loss in Eq. 5 corresponds to solving these equations with population averages ($\mathbb{E}$) replaced with empirical sample averages.

However, Eq. 9 gives a much broader set of equations. Leveraging this fact will be crucial to achieving efficiency. Indeed, it is well-known that even if a small number of moment equations are sufficient to identify a parameter (e.g., in the above, the equations given by $f_j(x) = \partial g_\theta(x)/\partial\theta_j$ identify the minimizer via first-order optimality), taking into consideration additional moment equations that are known to hold can increase efficiency in semiparametric settings (Carrasco & Florens, 2014).
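As a toy numerical check of the first-order-condition logic behind Theorem 2 (all numbers here are hypothetical, with a two-point conditional distribution for the score at a fixed context), the conditional mean of the loss derivative vanishes exactly at the pointwise minimizer of the weighted logistic loss:

```python
import math

# At a fixed x, suppose Gamma = +a with probability p and -b with probability 1-p.
a, b, p = 2.0, 1.0, 0.4

def sigma(z):                       # logistic function
    return 1.0 / (1.0 + math.exp(-z))

def rho_bar(g):
    """E[rho | X=x] as a function of the utility value g: the derivative of
    E[|Gamma| * log(1 + exp(-sign(Gamma) * g)) | X=x] with respect to g."""
    return -a * p * sigma(-g) + b * (1.0 - p) * sigma(g)

# For a two-point Gamma the pointwise minimizer has a closed form.
g_star = math.log(a * p / (b * (1.0 - p)))
```

The derivative is negative below the minimizer, zero at it, and positive above it, exactly as the conditional moment restriction requires.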

3.2 The Semiparametric Model Implied by Specification

In order to reason about efficiency, we need to reason about the model implied by Eq. 8. To do so, we first establish the following lemma:

Lemma 1.

Assume Assumption 1. Then, given a policy class $\Pi$, the model of DGPs (distributions on $(X, T, Y)$) where $\Pi$ is correctly specified for the surrogate loss (in the sense of Eq. 7) is given by all distributions on $(X, T, Y)$ for which there exists $\theta \in \Theta$ satisfying

$$\mathbb{E}\big[\rho(X, T, Y; \theta) \mid X\big] = 0 \quad \text{almost surely}. \tag{10}$$
This model is generally a semiparametric model. That is, while Eq. 10 imposes a restriction indexed by the finite-dimensional parameter $\theta$, the set of corresponding distributions on $(X, T, Y)$ that satisfy this restriction is still infinite-dimensional and non-parametric.

3.3 Comparison with Logistic Regression for Classification

One question the reader might have at this point is why an approach different from empirical loss minimization is necessary for efficiency, given that the surrogate-loss formulation seems mathematically identical to binary classification using logistic regression, which is known to be efficient.2 The difference between the problems is that for actual classification we have that $\Gamma$ is a binary class label, i.e., $\Gamma \in \{-1, +1\}$. If we assume the policy class is well-specified and $|\Gamma| = 1$, the characterization of our semiparametric model from Lemma 1 reduces to

$$\mathbb{P}(\Gamma = 1 \mid X) = \sigma\big(g_\theta(X)\big),$$

which implies that our model is parametric, since the choice of $\theta$ now fully characterizes the distribution of the label given $X$. E.g., usually for logistic regression we let $g_\theta(x) = \theta^\top x$, so that the above says that the logit of $\mathbb{P}(\Gamma = 1 \mid X)$ is linear in $x$. Therefore, performing logistic regression corresponds to MLE for this parametric model, which is efficient.

However, in our general setting this is not the case and there is a non-trivial nuisance space, since an infinite-dimensional space of conditional distributions for $\Gamma$ given $X$ could result in the same minimizing utility function. This suggests that we may need to be more careful in order to obtain efficiency and that there may exist estimators that are more efficient than empirical loss minimization.

4 Efficient Policy Learning Reductions

In this section we propose some efficient methods for policy learning based on the above conditional-moment formulation. In addition, we provide some analysis of these methods in terms of efficiency and regret.

4.1 FiniteGMM Policy Learner

We begin by proposing an approach based on using multi-step GMM to solve the conditional moment problem, which we will call FiniteGMM. This approach works by optimally enforcing the moment conditions given by Eq. 9 for a finite collection of critic functions $f_1, \dots, f_m$. Collect the critics into $f(x) = (f_1(x), \dots, f_m(x))^\top$ and write $\rho_i(\theta) = \rho(X_i, T_i, Y_i; \theta)$. Specifically, given some initial consistent estimate $\tilde\theta$ of $\theta^{**}$, define

$$m_n(\theta) = \frac{1}{n}\sum_{i=1}^n f(X_i)\,\rho_i(\theta), \qquad \hat{W} = \Big(\frac{1}{n}\sum_{i=1}^n f(X_i)\, f(X_i)^\top \rho_i(\tilde\theta)^2\Big)^{-1}.$$

We then estimate $\theta^{**}$ by $\hat\theta \in \arg\min_{\theta \in \Theta}\, m_n(\theta)^\top \hat{W}\, m_n(\theta)$. We can repeat this multiple times, plugging in $\hat\theta$ as $\tilde\theta$ and re-solving.

An important issue with this estimator, however, is how to choose the critic functions. Standard GMM theory requires that the moment conditions are sufficient to identify the surrogate-loss minimizer. And even then, the above is only the most efficient among estimators based on this particular finite set of moment conditions; there may still be more efficient choices of critic functions.
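A minimal numerical sketch of the two-step recipe, with a scalar policy parameter, a hypothetical data-generating process, and three hand-picked critic functions (all our own choices, for illustration only); grid search stands in for a proper optimizer:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
gamma = 2.0 * x + rng.normal(size=n)   # hypothetical score with E[Gamma | x] = 2x

def rho(theta):
    """Pointwise derivative of the weighted logistic surrogate loss with
    respect to the utility g_theta(x) = theta * x."""
    s = np.sign(gamma)
    return -gamma / (1.0 + np.exp(s * theta * x))   # = -Gamma * sigma(-sign(Gamma) * g)

critics = [lambda v: v, lambda v: v**3, np.tanh]    # hand-picked critic functions

def moments(theta):
    r = rho(theta)
    return np.array([np.mean(f(x) * r) for f in critics])

def objective(theta, W):
    m = moments(theta)
    return m @ W @ m

grid = np.linspace(-8.0, 8.0, 801)

# Step 1: initial estimate using the identity weight matrix.
theta0 = grid[np.argmin([objective(t, np.eye(3)) for t in grid])]

# Step 2: re-estimate with the estimated optimal weight matrix
# W = Cov(f(X) * rho(theta0))^{-1}.
F = np.stack([f(x) * rho(theta0) for f in critics])
W = np.linalg.pinv(F @ F.T / n)
theta_hat = grid[np.argmin([objective(t, W) for t in grid])]
```

Since the conditional mean of the score is increasing in the context, the recovered policy parameter comes out positive.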

4.2 The Efficient Instruments for Policy Learning

One nice result from the theory of conditional moment problems is the existence of a finite set of critic functions ensuring efficiency in the sense of Section 1.2. Define

$$f^*(x) = \mathbb{E}\big[\nabla_\theta\, \rho(X, T, Y; \theta^{**}) \,\big|\, X = x\big]\; \mathbb{E}\big[\rho(X, T, Y; \theta^{**})^2 \,\big|\, X = x\big]^{-1}.$$

We call the components of $f^*$ the efficient instruments, and as long as the span of $f_1, \dots, f_m$ contains these instruments, FiniteGMM is guaranteed to be efficient (Newey, 1993).

Given this, one approach would be to let $f_1, \dots, f_m$ be flexible with the hope of their span approximately containing $f^*$. Letting, for example, $f_1, \dots, f_m$ be the first $m$ functions in a basis for $L_2$ such as a polynomial basis and letting $m$ grow with $n$ can be shown to be efficient under certain conditions (Newey, 1993). This, however, can perform very badly in practice, especially with any reasonable number of features. Ideally, we would instead be able to make use of modern machine learning methods and approximate $f^*$ using some flexible function class such as neural networks rather than defining a finite set of basis functions.

4.3 ESPRM Policy Learner

Motivated by the above concerns, we now present our proposed approach: ESPRM (efficient surrogate policy risk minimization). This is based on the extension of Bennett et al. (2019) to our conditional moment problem. In the setting of instrumental variable regression, Bennett et al. (2019) proposes an adversarial reformulation of optimally-weighted GMM, which allows us to consider critic functions given by flexible classes such as neural networks. Then if this class provides a good approximation for the efficient instruments, this approach should be approximately efficient.

Specifically, we define

$$U_n(\theta, f) = \frac{1}{n}\sum_{i=1}^n f(X_i)\,\rho_i(\theta) \;-\; \frac{1}{4n}\sum_{i=1}^n f(X_i)^2\, \rho_i(\tilde\theta)^2,$$

where as above $\tilde\theta$ is some initial consistent estimate of $\theta^{**}$. Then, following Bennett et al. (2019), the ESPRM estimator is defined as

$$\hat\theta_{\mathrm{ESPRM}} \in \arg\min_{\theta \in \Theta}\, \sup_{f \in \mathcal{F}}\, U_n(\theta, f),$$

where $\mathcal{F}$ is our flexible function class (henceforth assumed to be a class of neural networks). It remains to describe how this adversarial game is to be solved, and how to define $\tilde\theta$. As in Bennett et al. (2019) we optimize the objective by performing alternating first-order optimization steps using the OAdam algorithm (Daskalakis et al., 2017), which was designed for solving smooth game problems such as generative adversarial networks (GANs). In addition, we continuously update $\tilde\theta$ during optimization, where at each step of alternating first-order optimization we set $\tilde\theta$ equal to the previous iterate of $\theta$.
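To make the alternating scheme concrete, here is a stripped-down sketch with a scalar policy parameter, a linear-in-features critic standing in for the neural networks, plain gradient steps standing in for OAdam, and a hypothetical data-generating process; everything in it is our own illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
gamma = 2.0 * x + rng.normal(size=n)    # hypothetical score with E[Gamma | x] = 2x
s = np.sign(gamma)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rho(theta):
    # derivative of the weighted logistic surrogate w.r.t. the utility theta * x
    return -gamma * sigmoid(-s * theta * x)

def drho_dtheta(theta):
    sig = sigmoid(-s * theta * x)
    return gamma * s * x * sig * (1.0 - sig)

phi = np.stack([x, np.tanh(x)])          # critic features; critic f_w(x) = w @ phi

theta, w, lr = 0.0, np.zeros(2), 0.05
for _ in range(2000):
    r = rho(theta)                       # theta_tilde set to the current theta iterate
    f = w @ phi
    # ascent on the critic of U = mean(f * rho) - (1/4) * mean(f^2 * rho_tilde^2)
    grad_w = phi @ r / n - 0.5 * phi @ (f * r**2) / n
    w = w + lr * grad_w
    # descent on the policy parameter
    theta = theta - lr * np.mean((w @ phi) * drho_dtheta(theta))
```

The critic ascends toward the most-violated moment while the policy parameter descends against it; on this toy problem the iterates drift toward a positive utility slope, matching the sign of the conditional mean of the score.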

4.4 Efficient Learning Implies Optimal Regret

Finally we prove that efficiency not only ensures minimal MSE in estimating the optimal policy parameters but also implies regret bounds. Let $R(\theta) = \max_{\pi \in \Pi} V(\pi) - V(\pi_{g_\theta})$ denote the (true) regret of $\pi_{g_\theta}$, and let $R_S(\theta) = S(\theta) - \min_{\theta' \in \Theta} S(\theta')$ denote the surrogate regret.
Theorem 3 (Regret Upper Bound).

Suppose Assumption 1 holds and that the policy class is correctly specified for the surrogate loss in the sense that Eq. 7 holds. Then, for any $\theta \in \Theta$ we have:

$$\max_{\pi \in \Pi} V(\pi) - V(\pi_{g_\theta}) \;\le\; S(\theta) - \min_{\theta' \in \Theta} S(\theta').$$
This theorem implies that the regret of a policy is upper-bounded by the excess risk of the surrogate loss. Next, we make the following regularity assumption about the loss :

Assumption 2 (Well Behaved Loss).

$S$ has a unique minimizer $\theta^{**}$ in the interior of $\Theta$, and the Hessian of $S$ at $\theta^{**}$ is positive definite.

Given this assumption, a Taylor expansion yields $S(\theta) - S(\theta^{**}) = O(\|\theta - \theta^{**}\|^2)$ as $\theta \to \theta^{**}$. For any regular estimator $\hat\theta_n$, we can also define the asymptotic regret as the limiting distribution

$$R_\infty = \lim_{n \to \infty}\, n\,\big(S(\hat\theta_n) - S(\theta^{**})\big) \quad \text{(in distribution)},$$

which exists since regularity implies that $\sqrt{n}(\hat\theta_n - \theta^{**})$ has a limiting distribution. Given this we can prove the following optimality result of our efficient estimators in terms of asymptotic regret:

Theorem 4 (Optimal Asymptotic Regret).

Given Assumption 2 and any non-negative, non-decreasing $u$, we define the risk of a regular estimator $\hat\theta$ as the expected value of $u$ applied to its asymptotic regret. Given this, there exists a lower bound on this risk that holds for every regular $\hat\theta$, with equality if $\hat\theta$ is semiparametrically efficient.

Together with Theorem 3, this means that both the actual regret and the surrogate regret of policies given by efficient estimators are $O_p(1/n)$, and that the surrogate regret has an optimal constant.

5 Experiments

Figure 1: Difference in performance between ESPRM and ERM. We plot RMRR against number of training examples for each combination of policy class and scenario kind. All shaded regions are 95% confidence intervals constructed from bootstrapping.

5.1 Synthetic Scenarios

First we investigate the performance of our algorithms on a variety of synthetic scenarios. In all these scenarios $X$ is 2-dimensional and standard Gaussian distributed, as is the outcome noise in each treatment arm; the scenarios only differ in the outcome-mean functions $\mu(\cdot, +1)$ and $\mu(\cdot, -1)$. In none of the scenarios is our policy class actually well-specified in the sense of Eq. 7.

We consider the following kinds of synthetic scenarios:

  • Linear: $\mu(x, +1)$ and $\mu(x, -1)$ are affine functions of $x$, parametrized by some coefficient vectors and intercepts.

  • Quadratic: $\mu(x, +1)$ and $\mu(x, -1)$ are quadratic functions of $x$, parametrized by some symmetric matrices, coefficient vectors, and intercepts.

In addition we experiment with the following policy classes: a linear policy class, where $g_\theta(x) = \theta^\top x$, and a flexible policy class where $g_\theta$ is given by a fully-connected neural network with a single hidden layer of size 50 and leaky ReLU activations.

In all cases we use the surrogate-loss method of Jiang et al. (2019) described in Section 2 as a benchmark, which we henceforth refer to as ERM. We note that although the prior work used a different score variable, we instead use the doubly robust score, both because it is theoretically better grounded (Athey & Wager, 2017; Zhou et al., 2017) and because we found that it gives stronger results for all methods. For our ESPRM method we let $\mathcal{F}$ be the same neural network function class as for flexible policies, and perform alternating first-order optimization as described in Section 4.3 for a fixed number of epochs.

For all methods, except where otherwise specified, we use the score variables described in Eq. 3, with nuisance functions fit using correctly specified linear regression or logistic regression on a separately sampled tuning dataset of the same size as the training dataset.3 We provide some additional results in the appendix where nuisances were instead fit via flexible neural networks, which show that this has little effect on our results. In all cases except for ESPRM we perform optimization using L-BFGS. Additional optimization details are given in the appendix.4

For all configurations of scenario kind and policy class we ran our experiments on random scenarios of the respective kind, with all scenario parameters drawn as independent standard Gaussian variables. Specifically, for each training-set size $n$ we sample 64 random scenarios of the respective kind, and for each random scenario we sample $n$ training data points and run all methods on this data. Results for FiniteGMM, which generally did badly as predicted, are given in the appendix.

Define the Relative Mean Regret Reduction (RMRR), given by

$$\mathrm{RMRR} = \frac{\mathbb{E}\big[R(\hat\theta_{\mathrm{ERM}})\big] - \mathbb{E}\big[R(\hat\theta_{\mathrm{ESPRM}})\big]}{\mathbb{E}\big[R(\hat\theta_{\mathrm{ERM}})\big]},$$

where each expectation in the fraction is taken over the joint distribution of the randomly sampled scenario and the corresponding random estimate $\hat\theta$. Then for each scenario kind and policy class, we plot estimated RMRR against the number of training examples in Fig. 1. We see that ESPRM consistently obtains policies whose average regret is lower than or on par with that of ERM (typically with around 10% to 20% RMRR), with 95% confidence intervals indicating clearly better performance in every case except for training flexible policies on random Quadratic scenarios (in which case performance seems roughly on par). It is notable that this occurs even in the Quadratic setting with the linear policy class, where our policy class is not even well-specified for the zero-one loss, let alone the surrogate loss. We can also observe that the most significant regret benefits tend to occur with smaller training set sizes (since the same RMRR implies a larger absolute decrease in regret), indicating that the statistical efficiency of our method leads to improved finite-sample behavior.

Figure 2: Above we plot the convergence in MSE of the estimated $\hat\theta$ for each method with a linear policy class, over the random scenarios of the Linear class. Below we plot the average difference in the squared error of ESPRM and ERM (positive numbers indicate improvement over ERM). All shaded regions are 95% confidence intervals constructed from bootstrapping.

In Fig. 2 we plot the convergence in terms of the MSE of the estimated parameters from ESPRM and ERM, for the Linear setting and linear policy class (where parameters are low-dimensional and correctly specified). We plot both the MSE convergence and the average difference in the squared error between the estimates, across the random scenarios.5 It is clear from these results that ESPRM consistently estimates optimal policy parameters with lower squared error on average compared to ERM across these random simulated scenarios. This provides strong evidence that the methodology indeed improves statistical efficiency in solving the smooth surrogate-loss problem.

5.2 Jobs Case Study

Policy ERM ESPRM Difference
Table 1: Average predicted policy value (multiplied by 1000) for the Jobs case study for ERM versus ESPRM over 64 repetitions. The interval provides the 95% confidence intervals.

We next consider an application to a dataset derived from a large-scale experiment comparing different programs offered to unemployed individuals in France (Behaghel et al., 2014). We focus our attention on two arms from the experiment: a treatment arm where individuals receive an intensive counseling program run by a public agency, and a treatment arm with a similar program run by a private agency. The hypothetical application is learning a personalized allocation to a counseling program, with the aim of maximizing the number of individuals who reenter employment within six months, minus costs. (The original study's focus was not personalization.) Our intervention is simply the offer of the counseling program; we therefore ignore the fact that some individuals offered one of the programs did not attend.

To make our policies focus on heterogeneous effects, we set the cost of each arm equal to its within-arm average outcome in the original data. That is, the outcome we consider is the indicator of whether one reentered employment within 6 months, minus the within-arm average of that indicator. The covariates we consider personalizing on are: statistical risk of long-term unemployment, whether the individual is seeking full-time employment, whether the individual lives in a sensitive suburban area, whether the individual has a college education, the number of years of experience in the desired job, and the nature of the desired job (e.g., technician, skilled clerical worker, etc.).

We then consider 64 replications of the following procedure. Each time, we randomly split the data 40%/60% into train/test. We then introduce some confounding into the training dataset. We consider the following three binary variables: whether individual has 1–5 years experience in the desired job, whether they seek a skilled blue collar job, and whether their statistical risk of long-term unemployment is medium. After studentizing each variable, we segment the data by the tertiles of their sum. In the first tertile, we drop each unit with probability . In the second tertile, we drop private-program units with probability and public-program units with probability . In the third tertile, we drop public-program units with probability and private-program units with probability . Given a policy learned on this training data, we evaluate it on the held-out test set using a Horvitz-Thompson estimator.
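The held-out evaluation step can be sketched as follows; the function and the toy numbers are ours, with `p_logged` the (known) probability that the logged treatment was assigned:

```python
import numpy as np

def horvitz_thompson_value(policy_actions, logged_actions, outcomes, p_logged):
    """Horvitz-Thompson estimate of a policy's value on randomized test data:
    the average of 1{pi(X_i) == T_i} * Y_i / P(T_i | X_i)."""
    match = (policy_actions == logged_actions).astype(float)
    return np.mean(match * outcomes / p_logged)

pi_x = np.array([1, -1, 1])            # treatments the learned policy would assign
t = np.array([1, 1, 1])                # logged treatments
y = np.array([1.0, 2.0, 3.0])          # observed outcomes
p = np.array([0.5, 0.5, 0.5])          # assignment probabilities in the experiment

value = horvitz_thompson_value(pi_x, t, y, p)
```

Only units whose logged treatment matches the policy's choice contribute, each reweighted by the inverse of its assignment probability.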

Of the training data, a portion was set aside for training nuisances, and an additional portion was used as validation data for early stopping. We then trained both linear and flexible policies using ERM and ESPRM as in our simulation studies, with the exception that nuisances were fitted using neural networks (of the same architecture as the flexible policy class).

We summarize the mean estimated outcome for the policies from each method in Table 1. We note from these values that on average ESPRM appears to learn higher-value job-assignment policies than ERM. Furthermore, performing paired two-sided t-tests on the two sets of repetitions for each policy class to test for a difference in mean policy value, we obtained p-values of for the linear policy class and for the flexible policy class, clearly highlighting the benefit of our ESPRM method.

6 Conclusion

We considered a common reduction of learning individualized treatment rules from observational data to weighted surrogate risk minimization. We showed that, quite differently from actual classification problems, assuming correct specification in the policy learning case actually suggests more efficient solutions to this reduction. In particular, even if we use efficient policy evaluation, this may not necessarily lead to efficient policy learning. Specifically, under correct specification, the problem becomes a conditional moment problem in a semiparametric model and efficiency here translates to both better MSE in estimating optimal policy parameters and improved regret bounds.

Based on this observation, we proposed an algorithm, ESPRM, for efficiently solving the surrogate loss problem. We showed that our method consistently outperformed the standard method of empirical risk minimization on the surrogate loss, both over a wide variety of synthetic scenarios and in a case study based on a real job training experiment.

Appendix A Omitted Proofs

To prove Theorems 1 and 2, we first establish the following two useful lemmas.


Correct specification, Eq. 7, is the assumption that .

Lemma 2.

Suppose . Then


For brevity, let .

Let any be given. Now let any be given. Since , we have . Since was arbitrary, we conclude that .

Now, let any be given. By assumption, . Since and , we obtain that . Now let any unconstrained be given. Since , we have , whence . Since unconstrained was arbitrary, we conclude that . Since by definition of , we conclude that . ∎

Lemma 3.

Suppose almost surely. Then


Notice that because is an unconstrained function of it must minimize the conditional expectation. That is,

Since is convex in , so is . Next, note that by mean value theorem, we have

for some . Since and , the dominated convergence theorem yields

We conclude via first-order conditions for unconstrained optimization over that

which is a restatement of the lemma’s result. ∎

We are now prepared to prove Theorems 1 and 2.

Proof of Theorem 1.

Suppose . That is, . By Lemma 2, . Then, by Lemma 3, for a.e. . Consider such an . Note that , and define and .


We therefore have, from ,

The condition that for almost every is exactly equivalent to the condition that since, by assumption on and iterated expectations, we have that . ∎

Proof of Theorem 2.

First note that creftype 1 implies that almost everywhere, so the conditions of Lemma 3 apply. Now suppose . By Lemma 2, . Then, by Lemma 3, for a.e. , which is a restatement of almost surely.

Now suppose almost surely. Then, by Lemma 3, . By definition, . Therefore, by Lemma 2, , which is a restatement of . ∎

Proof of Lemma 1.

Suppose that is correctly specified for the surrogate loss. Then given creftype 1, there exists such that, for each almost everywhere, we have:

Proof of Theorem 3.

Let . We first note that given from creftype 1, we have that and can be re-scaled by a factor of and expressed as the expected values of and , where , for some modified distribution of (where the measure of is re-scaled by ). In addition, by our correct specification assumption we have , where is the minimum loss over all possible unconstrained choices of the function . Given the above, it follows from Bartlett et al. (2006, Theorem 3) that for some non-decreasing, non-negative function , which depends only on the nonnegative loss function . Following their notation, define:

Then it follows from Bartlett et al. (2006, Section 2) that is the Fenchel-Legendre biconjugate of . Now it is easy to verify from these definitions that , which is convex, and thus . The desired result follows immediately from this since . ∎

Proof of Theorem 4.

Given creftype 2, from the Taylor expansion from Section 4.4 we have:

Thus, assuming that is regular, we let be the limiting distribution of , which by Slutsky's theorem and the continuous mapping theorem gives us that

Now, by Van der Vaart (2000, Theorem 25.20) we have that , where is given by some arbitrary distribution, is the covariance matrix of the semi-parametrically efficient estimator, and denotes convolution. In addition, is semi-parametrically efficient. Now let . Then it follows from Van der Vaart (2000, Lemma 8.5) that for any , since is a bowl-shaped loss in the sense of Van der Vaart (2000), given that is non-negative and non-decreasing. Thus we can conclude by noting that the efficiency bound is given by , which is clearly realized for any semi-parametrically efficient . ∎

Appendix B Additional Experiment Details

B.1 Additional Optimization Details

Solving the ESPRM Smooth Game

As mentioned in Section 4.3, we solve the smooth game by running alternating first-order optimization using the OAdam algorithm. We tuned this procedure manually by experimenting on a couple of hand-selected synthetic scenarios, one Linear and one Quadratic, prior to running our main experiments. We generally found good results using a learning rate of 0.001 for linear policy networks and 0.0002 for flexible policy networks, with the learning rate of the critic network set to 5 times that of the policy network. Furthermore, we found good results using a number of epochs given by the fixed rule of , where is the number of training data points used.
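The optimistic update underlying OAdam can be illustrated on a toy bilinear game, where plain simultaneous gradient steps cycle or diverge but the optimistic correction converges. Everything below is illustrative rather than our actual training code; the step sizes are scaled up for the toy problem but mirror the 5x critic/policy ratio noted above.

```python
# Optimistic gradient descent-ascent on the bilinear game f(x, y) = x * y:
# the policy player minimizes over x, the critic maximizes over y. The
# "2 * current - previous" gradient extrapolation is the optimistic step
# that OAdam-style methods add on top of the base optimizer.
def optimistic_updates(x, y, lr_x=0.05, lr_y=0.25, steps=5000):
    gx_prev, gy_prev = y, x  # previous gradients (initialized to current ones)
    for _ in range(steps):
        gx, gy = y, x                       # grad_x f = y, grad_y f = x
        x = x - lr_x * (2 * gx - gx_prev)   # optimistic descent step
        y = y + lr_y * (2 * gy - gy_prev)   # optimistic ascent step
        gx_prev, gy_prev = gx, gy
    return x, y
```

Starting from (1, 1), the iterates spiral into the equilibrium at the origin, whereas unmodified gradient descent-ascent on this game spirals outward.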

Optimizing Neural Networks for Nuisance Functions and FiniteGMM

In all cases where we optimized neural networks for these problems, we used the LBFGS algorithm. Furthermore, we performed some additional first-order optimization using Adam to deal with potential cases of poor convergence, using a learning rate of 0.001 and stopping once performance on a held-out validation set (of the same size as the training set) failed to improve for 5 consecutive epochs.
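The early-stopping rule described above can be sketched generically as follows; `step_fn` and `val_loss_fn` are hypothetical stand-ins for one epoch of Adam updates and a validation-loss evaluation, not names from our codebase.

```python
def train_with_early_stopping(step_fn, val_loss_fn, max_epochs=1000, patience=5):
    """Run step_fn once per epoch; stop once the validation loss has failed
    to improve for `patience` consecutive epochs. Returns the best loss seen."""
    best, epochs_since_best = float("inf"), 0
    for _ in range(max_epochs):
        step_fn()
        loss = val_loss_fn()
        if loss < best:
            best, epochs_since_best = loss, 0
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:
                break
    return best
```

In practice one would also checkpoint the model parameters at the best epoch and restore them after stopping.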

B.2 Results for FiniteGMM Methods

We include here results for our FiniteGMM method. As mentioned in Section 5, the results for these methods were poor, as expected. In particular, the results appear to be very unstable, with extremely poor policy learning in a small percentage of cases, leading to extremely negative RMRR values in all cases except the Quadratic scenario with a linear policy network. However, even in the majority of cases where these estimators do not behave unstably, they seem to perform on par with, or at best only marginally better than, ERM, with the one exception of the Quadratic scenario with a linear policy network.

For FiniteGMM we experimented with two kinds of choices for the set of critic functions : (1) a polynomial expansion of of degree ; and (2) a Random Kitchen Sink (RKS; Rahimi & Recht, 2009) expansion of of length using the Gaussian kernel with . Note that the Random Kitchen Sink expansion is designed such that for some given kernel, with approximation error vanishing as . In both cases, the function is given by the ’th coordinate of the corresponding feature map. We calculated using 3 stages, with the guess of in the first stage chosen at random.
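For concreteness, a standard random-Fourier-feature construction of the RKS expansion for the Gaussian kernel can be sketched as below. The function name and the kernel parameterization k(x, z) = exp(-gamma * ||x - z||^2) are our assumptions here, not necessarily the exact convention used in the experiments.

```python
import numpy as np

def rks_features(X, D=64, gamma=0.5, seed=0):
    """Random Fourier features approximating the Gaussian kernel
    k(x, z) = exp(-gamma * ||x - z||^2); each output coordinate can serve
    as one critic function in the FiniteGMM construction."""
    rng = np.random.default_rng(seed)
    # Frequencies drawn from the kernel's spectral density, plus random phases
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```

Inner products of these features approximate the kernel, with Monte Carlo error vanishing as D grows.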

In Fig. 3 we plot the performance of both FiniteGMM and ESPRM in terms of the RMRR metric, plotting both mean and median values across different values of . Although we experimented with multiple choices of polynomial degree / RKS expansion length, for clarity we only plot results for degree 3 polynomials () and length 64 expansions (), as we found these gave the least bad results.

Figure 3: Results for ESPRM, ERM, and FiniteGMM methods for all scenarios and policy network types, where nuisances are fit using linear/logistic regression as in main experiments.

B.3 Additional Results for Flexible Nuisance Fitting

We provide here some additional results for our simulation study on Quadratic where the nuisances were fit using flexible neural network training (using the same neural network architecture as for the flexible policy class) instead of a correctly specified model. We show results for the ESPRM and ERM methods in Fig. 4, both with nuisances fit using the correctly specified model and using the flexible neural network model. We note that results are about the same in both cases: for linear policies our method is clearly superior to ERM, while for flexible policies the methods are roughly on par with each other in both cases, with a slight performance advantage for ESPRM for some values of .

Figure 4: Results for both ESPRM and ERM methods for Quadratic, where the top row shows results obtained by fitting a correctly specified nuisance model, while the bottom row shows results obtained using a flexible neural network nuisance model.


  1. All of our results actually extend to any twice-differentiable classification-calibrated loss; the logistic loss is the most prominent example.
  2. This is because logistic regression performs maximum likelihood estimation (MLE), which is statistically efficient for well-specified parametric models.
  3. By correctly specified we mean that for Linear we fit using linear/logistic regression on , whereas for Quadratic we fit on a quadratic feature expansion of .
  4. Code for running all of our experiments is located at https://github.com/CausalML/ESPRM.
  5. All parameter vectors are normalized first given that the policy function is scale-invariant.


  1. Athey, S. and Wager, S. Efficient policy learning. arXiv preprint arXiv:1702.02896, 2017.
  2. Bartlett, P. L., Jordan, M. I., and McAuliffe, J. D. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
  3. Behaghel, L., Crépon, B., and Gurgand, M. Private and public provision of counseling to job seekers: Evidence from a large controlled experiment. American Economic Journal: Applied Economics, 6(4):142–74, 2014.
  4. Bennett, A., Kallus, N., and Schnabel, T. Deep generalized method of moments for instrumental variable analysis. In Advances in Neural Information Processing Systems, pp. 3559–3569, 2019.
  5. Beygelzimer, A. and Langford, J. The offset tree for learning with partial labels. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 129–138, 2009.
  6. Carrasco, M. and Florens, J.-P. On the asymptotic efficiency of GMM. Econometric Theory, 30(2):372–406, 2014.
  7. Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., and Robins, J. Double/debiased machine learning for treatment and structural parameters. The Econometrics Journal, 21(1):C1–C68, 2018.
  8. Chernozhukov, V., Demirer, M., Lewis, G., and Syrgkanis, V. Semi-parametric efficient policy learning with continuous actions. In Advances in Neural Information Processing Systems, pp. 15039–15049, 2019.
  9. Daskalakis, C., Ilyas, A., Syrgkanis, V., and Zeng, H. Training gans with optimism. arXiv preprint arXiv:1711.00141, 2017.
  10. Dudík, M., Langford, J., and Li, L. Doubly robust policy evaluation and learning. arXiv preprint arXiv:1103.4601, 2011.
  11. Hahn, J. On the role of the propensity score in efficient semiparametric estimation of average treatment effects. Econometrica, pp. 315–331, 1998.
  12. Hansen, L. P. Large sample properties of generalized method of moments estimators. Econometrica, pp. 1029–1054, 1982.
  13. Hirano, K., Imbens, G. W., and Ridder, G. Efficient estimation of average treatment effects using the estimated propensity score. Econometrica, 71(4):1161–1189, 2003.
  14. Jiang, B., Song, R., Li, J., and Zeng, D. Entropy learning for dynamic treatment regimes.