Some New Results for Poisson Binomial Models

Evan Rosenman

Abstract

We consider a problem of ecological inference, in which individual-level covariates are known, but labeled data is available only at the aggregate level. The intended application is modeling voter preferences in elections.

In Rosenman and Viswanathan (2018), we proposed modeling individual voter probabilities via a logistic regression, and posing the problem as a maximum likelihood estimation for the parameter vector $\beta$. The likelihood is a Poisson binomial, the distribution of the sum of independent but not identically distributed Bernoulli variables, though we approximate it with a heteroscedastic Gaussian for computational efficiency. Here, we extend the prior work by proving results about the existence of the MLE and the curvature of this likelihood, which is not log-concave in general. We further demonstrate the utility of our method on a real data example: using data on voters in Morris County, NJ, we show that our approach outperforms other ecological inference methods in predicting a related, but known, outcome: whether an individual votes.


Keywords: ecological inference, generalized linear models, Poisson binomial distribution, political science

1 Introduction

This paper considers a problem of ecological inference. We suppose independent binary variables $Y_{ij}$ follow a logistic regression model in which

$$P(Y_{ij} = 1) = p_{ij}, \qquad \text{where} \qquad p_{ij} = \frac{\exp(x_{ij}^T \beta)}{1 + \exp(x_{ij}^T \beta)}.$$

We would like to estimate $\beta$ from the pairs $(x_{ij}, Y_{ij})$, but while the covariate vectors $x_{ij}$ are available, the individual outcomes $Y_{ij}$ are not. All that are available are the totals $D_i = \sum_{j \in S_i} Y_{ij}$ for some disjoint sets $S_i$ of individuals.

This problem has a long history in political science, beginning with early efforts to identify voting preferences by race from precinct-level data (Sun et al., 2017; Flaxman et al., 2015). The modern setting has a rich set of covariates $x_{ij}$ for each voter $j$ within each precinct $i$, derived from the “voter file,” the roster of all registered voters within a given geography. Vote tallies $D_i$ are reported for each precinct $i$. The goal is to design voter mobilization and persuasion efforts for a subsequent election, using retrospective knowledge of whom each voter was likely to have supported in the prior election.

Following more recent developments in the machine learning literature (Patrini et al., 2014; Sun et al., 2017), we explicitly pose the problem as a maximum likelihood estimation. The exact probability of observing the total $D_i$ is

$$P\left( \sum_{j \in S_i} Y_{ij} = D_i \right) = \sum_{A \in \mathcal{F}_{D_i}} \prod_{j \in A} p_{ij} \prod_{j \in A^c} (1 - p_{ij}),$$

where $\mathcal{F}_{D_i}$ is the set of all subsets of $S_i$ with cardinality $D_i$ and $A^c = S_i \setminus A$. This is called the Poisson binomial distribution (Wang, 1993). The exact log-likelihood evaluated at $\beta$ is then

$$\ell(\beta) = \sum_{i=1}^{n} \log \left( \sum_{A \in \mathcal{F}_{D_i}} \prod_{j \in A} p_{ij} \prod_{j \in A^c} (1 - p_{ij}) \right), \qquad (1)$$

where the sets $\mathcal{F}_{D_i}$ correspond to the observed vote totals $D_i$. These sets are combinatorially large, and thus the likelihood is computationally intractable for even modestly sized problems. Various approximate inference techniques have been used with similar formulations (Sun et al., 2017; Jackson et al., 2006). Our computationally favorable approach, first discussed in Rosenman and Viswanathan (2018), is to approximate the Poisson binomial log-likelihood by a heteroscedastic Gaussian one, justified by a Central Limit Theorem. Here, we extend our prior work by proving new theoretical results about the likelihood under this parameterization, and demonstrating that our method outperforms competitor ecological inference techniques on a real data set.
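For concreteness, the following sketch (ours; the function name and data layout are illustrative) evaluates this exact log-likelihood by brute-force enumeration, which is feasible only for very small precincts.

```python
import itertools
import numpy as np

def exact_loglik(beta, X_list, D):
    """Exact Poisson binomial log-likelihood, enumerating all subsets A of
    each precinct with |A| = D_i. X_list[i] is the (n_i x d) covariate
    matrix for precinct i; D[i] is its observed vote total."""
    ll = 0.0
    for X_i, D_i in zip(X_list, D):
        p = 1.0 / (1.0 + np.exp(-X_i @ beta))   # p_ij under the logistic model
        n_i = len(p)
        prob = 0.0
        for A in itertools.combinations(range(n_i), D_i):
            mask = np.zeros(n_i, dtype=bool)
            mask[list(A)] = True
            prob += np.prod(np.where(mask, p, 1.0 - p))
        ll += np.log(prob)
    return ll
```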

An outline of this paper is as follows. Section 2 reviews prior work on ecological inference from the political science, statistics, and machine learning literatures. Section 3 proves theoretical results about the existence of a finite MLE and the curvature of the log-likelihood. The log-likelihood is not guaranteed to be concave, so we cannot be sure that we have found the global optimum; we can, however, evaluate the resulting model fits on held-out precincts and compare them to other methods. Section 4 briefly reviews the motivation for our technique. In Section 5, we compare the performance of our methods against competitor ecological inference models, including a single-layer neural network. We use a data set of voters from Morris County, NJ, and build a classifier to predict whether an individual will vote in each election, using only aggregate turnout numbers as training data. This outcome is closely related to our desired outcome of voter preference, but unlike preference it is observed, allowing us to directly compare predictive accuracy against other models. Section 6 contains our conclusions. Proofs can be found in the Appendix.

2 Related Work

Theoretical work on the Poisson binomial distribution has focused on computationally tractable ways to estimate its distribution function, often via approximations to other distributions (Ehm, 1991; Roos, 1999; Chen, 1974). Prior research (Hong, 2013) has identified a closed-form expression for the CDF, which relies on the discrete Fourier transform. This technique is leveraged in the poisbinom package (Olivella and Shiraito, 2017), which we use for this paper. The application of the Poisson binomial distribution to the generalized linear model setting has been discussed by Chen and Liu (1997), who propose it for hypothesis testing on the parameter vector of a logistic regression model.
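As an illustration of the DFT identity (a sketch of our own, not the poisbinom implementation):

```python
import numpy as np

def poisson_binomial_pmf(p):
    """PMF of a sum of independent Bernoulli(p_j) variables, via the
    discrete Fourier transform identity of Hong (2013)."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    k = np.arange(n + 1)
    # Characteristic function evaluated at the (n+1)-st roots of unity
    chi = np.prod(1.0 - p[None, :] + p[None, :] * np.exp(2j * np.pi * k[:, None] / (n + 1)),
                  axis=1)
    pmf = np.fft.fft(chi).real / (n + 1)
    return np.clip(pmf, 0.0, 1.0)   # guard against tiny negative round-off
```

For instance, poisson_binomial_pmf([0.5, 0.5]) returns approximately [0.25, 0.5, 0.25], matching the binomial special case.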

Literature on the ecological inference problem bifurcates into two main subfields: political science and machine learning. The problem originates in the political science literature, where early work focused on inferring voting patterns by race (Sun et al., 2017; Flaxman et al., 2015). Among the simplest techniques is Goodman’s Regression (Goodman, 1953), dating to the middle of the twentieth century, in which vote proportions are regressed on the proportion of voters of a specific race in order to generate an individual-level model. More advanced methods, making use of random effects, were proposed at the end of the century. These included a semiparametric approach proposed by Prentice and Sheppard (1995) and several hierarchical models proposed by King (see King, 1997; King et al., 1999).

Wakefield made a number of noteworthy contributions, including positing a statistical framework for ecological inference (Wakefield and Salway, 2001) and showing that if covariate totals are known and conditioned upon in the 2×2 table, this case yields a convolution of binomial likelihoods (Wakefield, 2004). From the latter insight, he developed a normal approximation for efficient inference. Jackson et al. generalized much of the prior work with their integrated ecological model (Jackson et al., 2008, 2006), in which the individual-level probabilities are modeled via a logistic regression and then averaged over the population in each area. The count of votes for a particular candidate in the area is then modeled as a binomial random variable with success probability equal to this average. Our approach is similar to that of Jackson et al., except that we model the data as Poisson binomial rather than binomial, which means probabilities need not be averaged. The Jackson method is implemented in the ecoreg (Jackson et al., 2008) package, against which we baseline in Section 5.

The machine learning literature is more varied, both in methodology and in application. For the problem of inferring voting behavior in demographic subgroups, distribution regression is a popular tool. This method maps the covariate distribution within each geography to a single high-dimensional covariate vector via kernel mean embeddings; penalized or Bayesian regression methods are then used to fit a function mapping from these distributions to observed vote proportions (Law et al., 2017). Distribution regression has been deployed by Flaxman for analysis of subgroup preferences in the 2012 (Flaxman et al., 2015) and 2016 (Flaxman et al., 2016) elections. Yet, because it aggregates over individual-level data, the method is more appropriate for understanding group-level behavior.

SVM-based methods have also gained traction. Rueping introduced an “inverse calibration” method (Rueping, 2010), in which a Support Vector Regression is fit to the data such that the average prediction within each geography is close to the sigmoid inverse of the bag probability. Yu et al. also work in the large-margin framework, proposing the “∝SVM” method, in which a loss is directly minimized over both the model parameters and the unseen individual labels (Felix et al., 2013). These methods may be more appropriate for generating individual-level classifiers.

A last, loosely connected area of the literature might be termed “learning with label proportions.” These papers focus specifically on learning individual-level classifiers and, relative to distribution regression and SVM-based methods, engage more directly with a probabilistic model for the data. Kück and de Freitas proposed a hierarchical probabilistic model and an MCMC algorithm for training, and showed their method was effective in the object recognition domain (Kück and de Freitas, 2005). Quadrianto et al. introduced the mean map algorithm (Quadrianto et al., 2009), in which models are fit by maximizing the log-likelihood in a conditional exponential family model. Their method requires the somewhat restrictive assumption that the distribution of the covariates is independent of geography conditional on the vote proportions. Patrini et al. generalized this work and weakened this assumption (Patrini et al., 2014), proposing two novel algorithms: Laplacian Mean Map (LMM) and Alternating Mean Map (AMM). Sun et al. used related techniques for analysis of the 2016 presidential election, defining a likelihood and maximizing it via a novel exact inference algorithm making use of counting potentials (Sun et al., 2017).

Among the prior work, our approach is most similar to that of Sun et al. (2017), though we propose a different set of algorithms for model fitting. Compared to the broader literature, we differ in a few key ways. Our algorithms are designed solely for the case in which covariates are known for all participating voters in a geographic area, and do not generalize to the case of partial samples via, for example, data from the American Community Survey. We also make relatively strong assumptions on the functions relating individual-level covariates to aggregate statistics. Moreover, our method is purely frequentist, while much of the literature uses Bayesian methods. The benefits of our proposal include simpler fitting procedures, straightforward estimation of individual-level probabilities, and greater model interpretability.

3 Theoretical Results

We seek to maximize the likelihood over $\beta$. We first consider several properties of the likelihood.

3.1 Existence of a Finite MLE

In the case of standard logistic regression, it is well known that the MLE may not exist (see e.g. Candès and Sur, 2018). Our scenario is somewhat more delicate. We begin by considering the circumstances under which an MLE fails to exist.

We can write the log-likelihood as

$$\ell(\beta) = \sum_{i=1}^{n} \left[ \log\left( \sum_{A \in \mathcal{F}_{D_i}} \exp\left( \sum_{j \in A} x_{ij}^T \beta \right) \right) - \sum_{j \in S_i} \log\left( 1 + \exp(x_{ij}^T \beta) \right) \right]. \qquad (2)$$

Suppose we can find a direction $\omega$ and sets $A_1^\star, \dots, A_n^\star$ with $|A_i^\star| = D_i$, where $n$ is the number of precincts, such that

$$x_{ij}^T \omega > 0 \ \text{ for all } j \in A_i^\star \qquad \text{and} \qquad x_{ij}^T \omega < 0 \ \text{ for all } j \in S_i \setminus A_i^\star.$$

For each precinct, define $t_i = \sum_{j \in A_i^\star} x_{ij}^T \omega$. We can now rewrite the log-likelihood evaluated at $\beta = \gamma \omega$, $\gamma > 0$, as

$$\ell(\gamma\omega) = \sum_{i=1}^{n} \left[ \log\left( \sum_{A \in \mathcal{F}_{D_i}} \exp\left( \gamma \sum_{j \in A} x_{ij}^T \omega \right) \right) - \sum_{j \in S_i} \log\left( 1 + \exp\left( \gamma\, x_{ij}^T \omega \right) \right) \right].$$

If we increase the magnitude of $\gamma$, then for each precinct $i$ we make $\exp(\gamma t_i)$ arbitrarily large and $\exp(\gamma\, x_{ij}^T \omega)$ arbitrarily close to 0 for $j \in S_i \setminus A_i^\star$; the term corresponding to $A_i^\star$ also dominates every other term in the inner sum. It follows that, for each precinct, the first term behaves as $\gamma t_i$ at large $\gamma$, while the second term behaves as $\sum_{j \in A_i^\star} \log\left( \exp(\gamma\, x_{ij}^T \omega) \right) = \gamma t_i$. Thus, making $\gamma$ arbitrarily large will yield a log-likelihood arbitrarily close to 0, and no finite MLE will exist. In words, $\omega$ is normal to a hyperplane that, within each precinct $i$, perfectly separates the $D_i$ units in $A_i^\star$ from the remaining units in $S_i$.

These insights are formalized as follows.

Theorem 1

For every set of individuals $A$, define $V(A)$ as the dual cone of $\{x_{ij} : j \in A\}$,

$$V(A) = \left\{ \omega : x_{ij}^T \omega \geq 0 \ \text{ for all } j \in A \right\},$$

and $W(A)$ as the polar cone of $\{x_{ij} : j \in A\}$:

$$W(A) = \left\{ \omega : x_{ij}^T \omega \leq 0 \ \text{ for all } j \in A \right\}.$$

If

$$\bigcap_{i=1}^{n} \ \bigcup_{A \in \mathcal{F}_{D_i}} \Big( V(A) \cap W(S_i \setminus A) \Big) \neq \{0\},$$

then there does not exist a finite MLE.

The proof follows directly from the preceding argument. Unions and intersections of cones are themselves cones, so the set of directions $\omega$ along which the likelihood can be driven to its supremum is itself a cone.

Theorem 1 is potentially troublesome for scenarios with small values of $n$. Directions $\omega$ satisfying the perfect separation conditions can be readily found in simulations with fewer than 10 precincts. But as $n$ grows, it becomes increasingly likely that the problematic set is empty, due to the outer intersection. In the settings in which we are interested – typically involving at least a few hundred precincts – we thus assume that a finite MLE exists. Deriving exact conditions for the existence of a finite MLE remains an open question for further research.
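To make the separation condition concrete: for a fixed candidate assignment of sets $A_i^\star$, the existence of a strictly separating direction is a linear programming feasibility problem. The sketch below (ours, using scipy; names are illustrative) checks a single assignment, with the covariates stacked into X and signs set to +1 for units in $A_i^\star$ and −1 otherwise; verifying the full condition of Theorem 1 would require searching over combinatorially many assignments.

```python
import numpy as np
from scipy.optimize import linprog

def separating_direction(X, signs):
    """Look for w with signs[m] * (x_m^T w) > 0 for every row x_m of X.
    Solves: max delta  s.t.  signs * (X @ w) >= delta,  -1 <= w <= 1.
    Returns w if strict separation is feasible (delta > 0), else None."""
    m, d = X.shape
    c = np.zeros(d + 1)
    c[-1] = -1.0                        # linprog minimizes, so maximize delta via -delta
    A_ub = np.hstack([-(signs[:, None] * X), np.ones((m, 1))])  # delta - s * x^T w <= 0
    b_ub = np.zeros(m)
    bounds = [(-1, 1)] * d + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if res.success and -res.fun > 1e-9:
        return res.x[:d]
    return None
```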

3.2 Curvature Results

We prove some elementary results regarding curvature of the likelihood function.

Theorem 2

The log likelihood is a difference of convex functions.

  • With the log-likelihood given by

$$\ell(\beta) = \sum_{i=1}^{n} \left[ \log\left( \sum_{A \in \mathcal{F}_{D_i}} \exp\left( \sum_{j \in A} x_{ij}^T \beta \right) \right) - \sum_{j \in S_i} \log\left( 1 + \exp(x_{ij}^T \beta) \right) \right],$$

    we see that the first term is a log-sum-exp function of $\beta$ in canonical form, while the second term is a sum of log-sum-exp functions of $\beta$. It follows immediately that both the first and second terms are convex functions of $\beta$, and hence that the log-likelihood is a difference of convex functions (Boyd and Vandenberghe, 2004).

DC functions are neither convex nor concave in general, and one can readily generate examples where the Hessian of the log-likelihood has positive and negative eigenvalues.
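This indefiniteness is easy to observe numerically. The sketch below (ours; problem sizes and tolerances are illustrative) computes a central finite-difference Hessian and can be applied to the exact_loglik sketch from the introduction; away from the optimum it typically exhibits eigenvalues of both signs.

```python
import numpy as np

def hessian_eigs(f, beta, eps=1e-4):
    """Eigenvalues of a central finite-difference Hessian of f at beta."""
    d = len(beta)
    H = np.zeros((d, d))
    E = np.eye(d) * eps
    for a in range(d):
        for b in range(d):
            H[a, b] = (f(beta + E[a] + E[b]) - f(beta + E[a] - E[b])
                       - f(beta - E[a] + E[b]) + f(beta - E[a] - E[b])) / (4 * eps ** 2)
    return np.linalg.eigvalsh((H + H.T) / 2)

# Example: a tiny random problem, evaluated away from the optimum.
rng = np.random.default_rng(0)
X_list = [rng.normal(size=(6, 2)) for _ in range(3)]
D = [2, 3, 4]
print(hessian_eigs(lambda b: exact_loglik(b, X_list, D), np.array([3.0, -3.0])))
```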

Theorem 3

Suppose the model for the data is correct with true parameter value $\beta^\star$. Then the scaled Hessian $\frac{1}{n} \nabla^2_\beta\, \ell(\beta^\star)$ converges almost surely to a negative semidefinite matrix as the number of precincts $n \to \infty$.

  • Per the results in Appendix B, we can write the scaled Hessian as

$$\frac{1}{n} \nabla^2_\beta\, \ell(\beta^\star) = \frac{1}{n} \sum_{i=1}^{n} \left[ \mathrm{Cov}\left( \sum_{j \in S_i} x_{ij} Y_{ij} \,\middle|\, \sum_{j \in S_i} Y_{ij} = D_i \right) - \mathrm{Cov}\left( \sum_{j \in S_i} x_{ij} Y_{ij} \right) \right],$$

    where $n$ is the number of precincts. By Kolmogorov’s Strong Law (Sedor, 2015), we see

$$\frac{1}{n} \nabla^2_\beta\, \ell(\beta^\star) \ \xrightarrow{\ a.s.\ }\ \lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^{n} \left[ \mathbb{E}\, \mathrm{Cov}\left( \sum_{j \in S_i} x_{ij} Y_{ij} \,\middle|\, \sum_{j \in S_i} Y_{ij} \right) - \mathrm{Cov}\left( \sum_{j \in S_i} x_{ij} Y_{ij} \right) \right] = -\lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^{n} \mathrm{Cov}\left( \mathbb{E}\left[ \sum_{j \in S_i} x_{ij} Y_{ij} \,\middle|\, \sum_{j \in S_i} Y_{ij} \right] \right),$$

    where the second equality is due to the Law of Total Covariance. Thus the limit is the negative of a limit of averaged covariance matrices, and the result follows from the fact that any covariance matrix must be positive semidefinite.

The likelihood will be asymptotically log-concave in a neighborhood around the true parameter value if there is sufficient differentiation in the covariates across precincts. To see why, consider the quantity

$$\frac{1}{n} \sum_{i=1}^{n} \mathrm{Cov}\left( \mathbb{E}\left[ \sum_{j \in S_i} x_{ij} Y_{ij} \,\middle|\, \sum_{j \in S_i} Y_{ij} \right] \right).$$

Under an identical SLLN argument, this quantity has the same asymptotic limit as $-\frac{1}{n} \nabla^2_\beta\, \ell(\beta^\star)$. Being an average of outer products, the resultant matrix will also have full rank as long as the vectors

$$v_i(D) = \mathbb{E}\left[ \sum_{j \in S_i} x_{ij} Y_{ij} \,\middle|\, \sum_{j \in S_i} Y_{ij} = D \right] - \mathbb{E}\left[ \sum_{j \in S_i} x_{ij} Y_{ij} \right]$$

span $\mathbb{R}^d$, where $d$ is the dimension of $\beta$. In practical examples, this condition holds as long as $n$ is sufficiently large compared to $d$, and the covariates differ sufficiently across precincts.

If the $v_i(D)$ indeed span $\mathbb{R}^d$, then the Hessian has full rank and is thus negative definite, not merely negative semidefinite. Since the eigenvalues are continuous functions of $\beta$, it follows that we are guaranteed log-concavity in a neighborhood around $\beta^\star$.

4 Optimization

Results from the prior section indicate that algorithms dependent on concavity, like Newton’s Method, may not converge in the general case. But for large problems with substantial precinct differentiation, we may be able to exploit curvature near the true parameter value.

A second consideration is runtime. The work of Hong (2013) has substantially sped up the computation of Poisson binomial probabilities, but these computations can still be a bottleneck for even modestly sized problems. The exact gradient can be calculated by computing a number of Poisson binomial probabilities linear in the dataset size, per the results in Appendix B. But in practice, even this method is slow to train in the regimes of interest.

These considerations point toward our approach, discussed in detail in Rosenman and Viswanathan (2018). The Lyapunov CLT (Billingsley, 1995) can be used to observe that the asymptotic distribution of $\sum_{j \in S_i} Y_{ij}$ is approximately $\mathcal{N}(\mu_i, \sigma_i^2)$. Thus, the contribution of precinct $i$ to the overall log-likelihood is approximately

$$\ell_i(\beta) \approx -\frac{1}{2} \log \sigma_i^2 - \frac{\left( D_i - \mu_i \right)^2}{2 \sigma_i^2},$$

where irrelevant constants have been dropped, $\mu_i = \sum_{j \in S_i} p_{ij}$, and $\sigma_i^2 = \sum_{j \in S_i} p_{ij}(1 - p_{ij})$.

This yields a gradient of the form

$$\nabla_\beta\, \ell_i \approx \frac{D_i - \mu_i}{\sigma_i^2} \sum_{j \in S_i} p_{ij}(1 - p_{ij})\, x_{ij} + \left( \frac{(D_i - \mu_i)^2}{2 \sigma_i^4} - \frac{1}{2 \sigma_i^2} \right) \sum_{j \in S_i} p_{ij}(1 - p_{ij})(1 - 2 p_{ij})\, x_{ij}.$$

These approximate gradients can be used in gradient ascent with a fixed step size, or to choose an ascent direction with the step size selected based on the true or approximate likelihood evaluated at each point along a grid. These are the primary algorithms we propose.
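For concreteness, a minimal sketch of the first variant (gradient ascent on the Gaussian-approximate gradient with a fixed step size) follows. The function names, data layout, and default step size are our own illustrative choices, not the exact implementation used in the experiments.

```python
import numpy as np

def gaussian_approx_gradient(beta, X_list, D):
    """Gradient of the Gaussian approximation to the Poisson binomial
    log-likelihood. X_list[i] is the (n_i x d) covariate matrix for
    precinct i; D[i] is its observed vote total."""
    grad = np.zeros_like(beta, dtype=float)
    for X_i, D_i in zip(X_list, D):
        p = 1.0 / (1.0 + np.exp(-X_i @ beta))   # individual probabilities p_ij
        w = p * (1.0 - p)                       # Bernoulli variances
        mu, var = p.sum(), w.sum()              # Gaussian mean and variance
        dmu = X_i.T @ w                         # gradient of mu w.r.t. beta
        dvar = X_i.T @ (w * (1.0 - 2.0 * p))    # gradient of sigma^2 w.r.t. beta
        r = D_i - mu
        grad += (r / var) * dmu + (r ** 2 / (2 * var ** 2) - 1.0 / (2 * var)) * dvar
    return grad

def fit_logit_gaussian(X_list, D, lr=1e-5, iters=120):
    """Gradient ascent with a fixed step size (the first proposed variant)."""
    beta = np.zeros(X_list[0].shape[1])
    for _ in range(iters):
        beta += lr * gaussian_approx_gradient(beta, X_list, D)
    return beta
```

The backtracking variants described in Section 5 replace the fixed step with a line search along this same direction, scoring candidate steps with the exact or approximate likelihood.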

5 Comparison Against Other Ecological Inference Methods

Our goal with these models is to predict the candidate selected by each voter—an outcome that is unknown. This poses a challenge for evaluating the performance of our models and comparing against other ecological inference methods.

To address this issue, we use a related task: modeling the probability that an individual casts a ballot (rather than modeling the candidate he or she supports). We train only on aggregated ballot counts from each precinct. This task is not a perfect proxy for modeling candidate selections, but it is an attractive option because it allows us to leverage the same set of covariates and the same aggregation structure, and we also have access to the individual-level outcomes for performance evaluation.

We use a data set comprising all voters from Morris County, New Jersey, an affluent and historically Republican-leaning county of about half a million residents. The voter file contains 316,724 registered voters and includes limited demographic information as well as information about whether each voter cast a ballot in general elections and primaries stretching back to the year 2000. There are 396 voting precincts in the county.

We fit models to eight data sets in total. To explore performance in different outcome regimes, we predict whether voters participated in each election from 2014 to 2017, in which 34%, 19%, 76%, and 45% of all voters in our data set cast a ballot, respectively. For each year, we fit two models: a parsimonious “demographics-only” model containing just four covariates (age, party, gender, and whether the voter lives in an apartment); and a “demographics and voter history” model that also contains nine variables corresponding to the voter’s participation and voting method in the given year’s primary and the primaries and general elections of the prior four years.

We compare the performance of a number of methods:

  • To obtain an upper bound on performance, we fit a logistic regression and a Gradient Boosted Machine (GBM) to the data set while giving them access to the individual-level outcomes (Friedman, 2001). Because these models “see” individual-level data, they should outperform methods that only have access to aggregated data.

  • We fit three variants of our logistic regression formulation.

    • In the first (“Logit with Gaussian Gradient”), the coefficients are fit via gradient ascent, exclusively using the Gaussian approximation to the gradient derived in Section 4. We run for 120 iterations with a fixed learning rate.

    • In the second (“Logit with Gaussian Gradient, PoiBin Backtracking”), we run ten iterations using the approximated gradient and fixed step size; for the remaining 110 iterations, we use the normal-approximation gradient to choose an ascent direction, but use backtracking line search based on the true likelihood to choose a step size.

    • The third algorithm (“Logit with Gaussian Gradient, PoiBin Backtracking, True Gradient”) is identical to the second, except we run only 100 iterations using backtracking line search. The final ten iterations are then instead run using the true gradient derived in Appendix B.

    These three variants are used to explore the practical effect of the Gaussian approximation on our model’s accuracy.

  • We also fit a neural network in which individual success probabilities are modeled as a feedforward network with a single hidden layer. Full details on the model and its gradients can be found in Appendix C. Results were found to be highly sensitive to the parameter initializations and training lengths; hence, ten random initializations were tried, and parameters were stored after 50, 100, 150, and 200 iterations of gradient ascent. A development set consisting of 40 precincts (10% of the total) was used to tune these hyperparameters. As true outcomes would not be known in practice, tuning was done by choosing the parameters that minimized the sum of squared errors between the predicted and actual aggregate voting rates in the development precincts. Results were less sensitive to hyperparameters like the learning rate and the number of hidden nodes, so these were fixed in advance; we used ten hidden nodes (see Appendix C).

  • Following the approach in Rueping (2010), we baseline against the simplest ecological inference method: assigning each unit in a given aggregation block the average of the outcomes in that aggregation block. In our setting, this means each voter in a precinct is assigned the voter turnout proportion in that precinct as a pseudo-outcome, and a logistic regression model is fit to these data (a sketch appears after this list).

  • We baseline against ecological regression as implemented in the ecoreg package in R (Jackson et al., 2008).

  • We baseline against Rueping’s Inverse Calibration method (Rueping, 2010). Aggregate accuracy on the 40 precincts in the development set is again used, this time for tuning the method’s hyperparameters.

  • Lastly, we baseline against the Mean Map (Quadrianto et al., 2009), Laplacian Mean Map, and Alternating Mean Map (Patrini et al., 2014). Hyperparameters are again tuned using squared error on the development set.
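As referenced above, a minimal sketch of the logistic-regression-on-aggregates baseline follows; the row-duplication device for fractional pseudo-outcomes is our implementation choice, and the names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_aggregate_logit(X_list, turnout_props):
    """Assign every voter their precinct's turnout proportion as a pseudo-outcome,
    then fit a logistic regression. Fractional targets are handled by duplicating
    each row with labels {1, 0} and weights {pi, 1 - pi}."""
    X = np.vstack(X_list)
    pi = np.concatenate([np.full(len(X_i), t) for X_i, t in zip(X_list, turnout_props)])
    X2 = np.vstack([X, X])
    y2 = np.concatenate([np.ones(len(X)), np.zeros(len(X))])
    w2 = np.concatenate([pi, 1.0 - pi])
    return LogisticRegression(max_iter=1000).fit(X2, y2, sample_weight=w2)
```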

Results for the demographics-only model are provided in Table 1, and results for the expanded data set are provided in Table 2. The relative strength of the logistic regression formulation is immediately obvious: these models achieve the highest ROC AUC values in all but one of the eight conditions, and frequently come very close to the performance of methods with access to the individual outcomes. Also evident is that little to no predictive power is gained by making use of the real likelihood rather than the approximation. Backtracking on the true Poisson binomial likelihood or using the true gradient actually slightly degrades performance in most cases, while also slowing training.

The neural network model is never competitive. It appears that the model is able to capture some useful interactions and non-linearities when more covariates are present, but it is generally too expressive and tends to overfit.

Ecological regression performs respectably in all conditions, and outperforms our proposed methods in the demographics-only model for 2014. The other tested methods are generally not competitive. The logistic-regression-on-aggregates technique performs surprisingly well given its extreme simplicity, but it still underperforms the proposed logistic methods. Inverse calibration sees a noticeable performance bump with the inclusion of additional covariates. The Mean Map, LMM, and AMM methods typically do poorly, with only AMM consistently beating random guessing in the demographics-only case. Each of these methods is somewhat sensitive to hyperparameter values, and tuning is extremely challenging in the absence of labeled data in the development set. We use squared error across development precincts as a proxy measure, and it is highly plausible that alternative proxies would yield better hyperparameter values. Nonetheless, a strength of our proposed methods is that they require very little tuning to achieve good performance.

Demographics Only                                        2017    2016    2015    2014
Standard Methods (non-ecological)
    Logistic Regression                                  72.0%   71.2%   75.2%   76.9%
    GBM                                                  73.0%   72.7%   75.5%   77.2%
Proposed Methods
    Logit with Gaussian Gradient                         69.3%   68.3%   73.6%   74.7%
    Logit with Gaussian Gradient, PoiBin Backtracking    69.3%   68.4%   73.6%   74.7%
    Logit with Gaussian Gradient, PoiBin Backtracking,
        True Gradient                                    69.3%   68.4%   72.2%   74.5%
    Neural Net with Gaussian Gradient                    48.3%   46.9%   50.3%   53.5%
Comparison Methods
    Logistic Regression on Aggregates                    65.9%   60.0%   71.7%   69.0%
    Ecological Regression                                67.6%   66.7%   72.8%   75.0%
    Inverse Calibration                                  61.1%   61.9%   72.9%   41.5%
    Mean Map                                             51.5%   60.1%   33.3%   31.9%
    Laplacian Mean Map                                   51.0%   46.1%   37.9%   51.1%
    Alternating Mean Map                                 58.9%   62.6%   58.0%   58.4%

Table 1: ROC AUC scores for models predicting voter turnout, fit to demographics-only data sets. Highest values among ecological inference models are underlined.

Demographics and Voting History                          2017    2016    2015    2014
Standard Methods (non-ecological)
    Logistic Regression                                  85.9%   84.5%   88.6%   89.5%
    GBM                                                  86.2%   85.5%   88.8%   89.6%
Proposed Methods
    Logit with Gaussian Gradient                         83.9%   82.0%   81.0%   86.3%
    Logit with Gaussian Gradient, PoiBin Backtracking    83.8%   82.0%   81.0%   86.4%
    Logit with Gaussian Gradient, PoiBin Backtracking,
        True Gradient                                    83.8%   81.9%   80.6%   86.3%
    Neural Net with Gaussian Gradient                    72.1%   76.8%   80.4%   74.1%
Comparison Methods
    Logistic Regression on Aggregates                    75.0%   72.4%   77.2%   76.8%
    Ecological Regression                                67.5%   68.7%   71.8%   76.1%
    Inverse Calibration                                  64.2%   77.6%   78.4%   66.9%
    Mean Map                                             45.4%   54.4%   48.4%   51.8%
    Laplacian Mean Map                                   49.5%   51.5%   57.6%   49.4%
    Alternating Mean Map                                 51.9%   52.9%   44.4%   46.2%

Table 2: ROC AUC scores for models predicting voter turnout, fit to demographics and voter history data sets. Highest values among ecological inference models are underlined.

6 Conclusions and Future Work

Our results extend the literature on ecological inference in several key ways. We model individual voter probabilities via a logistic regression, and pose the problem as a maximum likelihood estimation, proving new results about the existence of the MLE and the curvature of the log-likelihood. We use an approximate algorithm for fitting this model to real election data, making use of a normal approximation.

The comparative evaluation reveals that this model outperforms other ecological inference techniques. The method is intuitively appealing, as it is simple to explain and fast to compute. We have also demonstrated that a neural network model is not competitive in this setting, at least without substantial additional work to prevent the model from overfitting.

On the algorithmic side, further theoretical work is necessary to understand when a finite MLE exists, and when gradient ascent with a Gaussian approximation will not be sufficient. In applications with smaller data sets, or in which outcome proportions close to 0 or 1 are more frequent, the approximation will likely degrade. In such cases, use of backtracking on the true likelihood or ascent on the true gradient may be necessary (and will be more computationally feasible on small data sets). The possibility of using DC algorithms for model fitting also represents an intriguing option. These algorithms may outperform our gradient ascent approach if they can be prevented from stopping at poor local optima.

On the modeling side, this model-fitting machinery can be extended to a wide class of parametric models. Simple variants on the logistic regression, such as probit or cauchit models, may perform well. A bigger prize would be an expressive model, able to capture non-linearities and interactions without a high risk of overfitting. This remains a direction for future research.

This work was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG). We thank Pete Mohanty, Art Owen, and Michael Baiocchi for their guidance and feedback on this work.

A Gradient and Hessian

The likelihood given in Equation 1 can be expressed as

$$\ell(\beta) = \sum_{i=1}^{n} \left[ \log\left( \sum_{A \in \mathcal{F}_{D_i}} \exp\left( \sum_{j \in A} x_{ij}^T \beta \right) \right) - \sum_{j \in S_i} \log\left( 1 + \exp(x_{ij}^T \beta) \right) \right].$$

By direct computation, the gradient with respect to $\beta$ is

$$\nabla_\beta\, \ell(\beta) = \sum_{i=1}^{n} \left[ \frac{ \sum_{A \in \mathcal{F}_{D_i}} s_A \exp(u_A) }{ \sum_{A \in \mathcal{F}_{D_i}} \exp(u_A) } - \sum_{j \in S_i} p_{ij}\, x_{ij} \right], \qquad s_A = \sum_{j \in A} x_{ij}, \quad u_A = \sum_{j \in A} x_{ij}^T \beta.$$

The Hessian is

$$\nabla^2_\beta\, \ell(\beta) = \sum_{i=1}^{n} \left[ \frac{ \sum_{A} s_A s_A^T \exp(u_A) }{ \sum_{A} \exp(u_A) } - \left( \frac{ \sum_{A} s_A \exp(u_A) }{ \sum_{A} \exp(u_A) } \right) \left( \frac{ \sum_{A} s_A \exp(u_A) }{ \sum_{A} \exp(u_A) } \right)^{\!T} - \sum_{j \in S_i} p_{ij}(1 - p_{ij})\, x_{ij} x_{ij}^T \right],$$

with the inner sums over $A \in \mathcal{F}_{D_i}$.

B Alternative Forms of Gradient and Hessian

For some choice of $\beta$, we consider the quantity

$$\mathbb{E}\left[ \sum_{j \in S_i} x_{ij} Y_{ij} \,\middle|\, \sum_{j \in S_i} Y_{ij} = D_i \right].$$

This is easily interpreted in the sampling setting: it is the expected sum of the covariate vectors over sampled units, conditional on the total number of units sampled among those in $S_i$. Careful expansion yields

$$\mathbb{E}\left[ \sum_{j \in S_i} x_{ij} Y_{ij} \,\middle|\, \sum_{j \in S_i} Y_{ij} = D_i \right] = \frac{ \sum_{A \in \mathcal{F}_{D_i}} \left( \sum_{j \in A} x_{ij} \right) \exp\left( \sum_{j \in A} x_{ij}^T \beta \right) }{ \sum_{A \in \mathcal{F}_{D_i}} \exp\left( \sum_{j \in A} x_{ij}^T \beta \right) },$$

which we recognize as the first term in our gradient from the prior section. Analogous computations apply for the Hessian, giving us the following equivalent forms:

$$\nabla_\beta\, \ell(\beta) = \sum_{i=1}^{n} \left[ \mathbb{E}\left( \sum_{j \in S_i} x_{ij} Y_{ij} \,\middle|\, \sum_{j \in S_i} Y_{ij} = D_i \right) - \mathbb{E}\left( \sum_{j \in S_i} x_{ij} Y_{ij} \right) \right],$$

$$\nabla^2_\beta\, \ell(\beta) = \sum_{i=1}^{n} \left[ \mathrm{Cov}\left( \sum_{j \in S_i} x_{ij} Y_{ij} \,\middle|\, \sum_{j \in S_i} Y_{ij} = D_i \right) - \mathrm{Cov}\left( \sum_{j \in S_i} x_{ij} Y_{ij} \right) \right].$$

This form is particularly auspicious for computation of the exact gradient, as the first term can be expressed

$$\mathbb{E}\left[ \sum_{j \in S_i} x_{ij} Y_{ij} \,\middle|\, \sum_{j \in S_i} Y_{ij} = D_i \right] = \sum_{j \in S_i} x_{ij}\, \frac{ p_{ij}\, P\left( \sum_{k \in S_i,\, k \neq j} Y_{ik} = D_i - 1 \right) }{ P\left( \sum_{k \in S_i} Y_{ik} = D_i \right) },$$

which we recognize as requiring the evaluation of only $O\big( \sum_i |S_i| \big)$ Poisson binomial probabilities to compute. This is preferable to computations based on the original form of the gradient, which involved a combinatorial sum.
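Combining this identity with a Poisson binomial PMF routine (such as the DFT sketch in Section 2) yields an exact-gradient sketch; as elsewhere, the names and data layout are illustrative.

```python
import numpy as np

def exact_gradient(beta, X_list, D):
    """Exact gradient via the conditional-expectation form. For voter j,
    P(Y_ij = 1 | total = D_i) = p_ij * P(sum_{k != j} Y_ik = D_i - 1) / P(total = D_i),
    so each precinct needs O(n_i) Poisson binomial evaluations."""
    grad = np.zeros_like(beta, dtype=float)
    for X_i, D_i in zip(X_list, D):
        p = 1.0 / (1.0 + np.exp(-X_i @ beta))
        denom = poisson_binomial_pmf(p)[D_i]
        cond = np.empty(len(p))
        for j in range(len(p)):
            others = np.delete(p, j)
            num = p[j] * (poisson_binomial_pmf(others)[D_i - 1] if D_i >= 1 else 0.0)
            cond[j] = num / denom          # P(Y_ij = 1 | total = D_i)
        grad += X_i.T @ (cond - p)         # E[sum x Y | total] - E[sum x Y]
    return grad
```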

C Alternative Model: Neural Network

In the neural network model, the Democratic support probability is modeled as

$$p_{ij} = \sigma\left( w_2^T\, g\left( W_1 x_{ij} + b_1 \right) + b_2 \right),$$

where $W_1$ and $w_2$ are weight matrices, $b_1$ and $b_2$ are bias vectors, $g$ is an elementwise nonlinearity, and $\sigma$ is the sigmoid function. This network has a single hidden layer. The dimension of the hidden layer can be selected by the modeler, though we used ten hidden nodes for our experiments.

In the case of a general neural network model, it is typically believed that the loss is non-convex but that local optima are close to the global optimum in function value (Choromanska et al., 2015). We assume the same holds for the neural network model in our setting.

With the Gaussian approximation to the Poisson binomial likelihood, the neural network gradients follow from the chain rule: the gradient with respect to each parameter takes the same form as in the logistic case, with $x_{ij}$ replaced by the gradient of the pre-activation $z_{ij} = w_2^T g(W_1 x_{ij} + b_1) + b_2$ with respect to that parameter. For the input-layer weights, this introduces the factor $w_2 \circ g'(W_1 x_{ij} + b_1)$, where $g'(\cdot)$ denotes a column vector consisting of the activation derivatives over all hidden units, and $\circ$ denotes a Hadamard product.
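A forward-pass sketch of this model follows (the tanh activation is an assumption; the architecture otherwise matches the description above). In a modern implementation, the Gaussian-approximate likelihood of Section 4 could be composed with this forward pass in an automatic differentiation framework, sidestepping hand-derived gradients.

```python
import numpy as np

def nn_probs(X_i, W1, b1, w2, b2, g=np.tanh):
    """Single-hidden-layer network producing individual support probabilities.
    W1: (h x d), b1: (h,), w2: (h,), b2: scalar; h = 10 in our experiments."""
    H = g(X_i @ W1.T + b1)                       # hidden representation
    return 1.0 / (1.0 + np.exp(-(H @ w2 + b2)))  # sigmoid output probabilities
```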

References

  • Billingsley (1995) Billingsley, P. (1995). Probability and Measure. Wiley Series in Probability and Statistics. Wiley.
  • Boyd and Vandenberghe (2004) Boyd, S. and L. Vandenberghe (2004). Convex Optimization. Cambridge University Press.
  • Candès and Sur (2018) Candès, E. J. and P. Sur (2018). The phase transition for the existence of the maximum likelihood estimate in high-dimensional logistic regression. arXiv preprint arXiv:1804.09753.
  • Chen (1974) Chen, L. H. (1974). On the convergence of Poisson binomial to Poisson distributions. The Annals of Probability, 178–180.
  • Chen and Liu (1997) Chen, S. X. and J. S. Liu (1997). Statistical applications of the Poisson-binomial and conditional Bernoulli distributions. Statistica Sinica 7(4), 875–892.
  • Choromanska et al. (2015) Choromanska, A., M. Henaff, M. Mathieu, G. B. Arous, and Y. LeCun (2015). The loss surfaces of multilayer networks. In Artificial Intelligence and Statistics, pp. 192–204.
  • Ehm (1991) Ehm, W. (1991). Binomial approximation to the Poisson binomial distribution. Statistics & Probability Letters 11(1), 7–16.
  • Felix et al. (2013) Felix, X. Y., D. Liu, S. Kumar, T. Jebara, and S.-F. Chang (2013). ∝SVM for learning with label proportions. In Proceedings of the 30th International Conference on Machine Learning (ICML-13).
  • Flaxman et al. (2016) Flaxman, S., D. Sutherland, Y.-X. Wang, and Y. W. Teh (2016). Understanding the 2016 US presidential election using ecological inference and distribution regression with census microdata. arXiv preprint arXiv:1611.03787.
  • Flaxman et al. (2015) Flaxman, S. R., Y.-X. Wang, and A. J. Smola (2015). Who supported Obama in 2012? Ecological inference through distribution regression. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 289–298. ACM.
  • Friedman (2001) Friedman, J. H. (2001). Greedy function approximation: A gradient boosting machine. Annals of Statistics, 1189–1232.
  • Goodman (1953) Goodman, L. A. (1953). Ecological regressions and behavior of individuals. American Sociological Review.
  • Hong (2013) Hong, Y. (2013). On computing the distribution function for the Poisson binomial distribution. Computational Statistics & Data Analysis 59(Supplement C), 41 – 51.
  • Jackson et al. (2006) Jackson, C., N. Best, and S. Richardson (2006). Improving ecological inference using individual-level data. Statistics in medicine 25(12), 2136–2159.
  • Jackson et al. (2008) Jackson, C. H., N. G. Best, and S. Richardson (2008). Hierarchical related regression for combining aggregate and individual data in studies of socio-economic disease risk factors. Journal of the Royal Statistical Society, Series A: Statistics In Society 171(1), 159–178.
  • King (1997) King, G. (1997). A Solution to the Ecological Inference Problem: Reconstructing Individual Behavior from Aggregate Data. Princeton University Press.
  • King et al. (1999) King, G., O. Rosen, and M. A. Tanner (1999). Binomial-beta hierarchical models for ecological inference. Sociological Methods & Research 28(1), 61–90.
  • Kück and de Freitas (2005) Kück, H. and N. de Freitas (2005). Learning about individuals from group statistics. In Proceedings of the 21st Annual Conference on Uncertainty in Artificial Intelligence (UAI-05), pp. 332.
  • Law et al. (2017) Law, H. C. L., D. J. Sutherland, D. Sejdinovic, and S. Flaxman (2017). Bayesian approaches to distribution regression. arXiv preprint arXiv:1705.04293.
  • Olivella and Shiraito (2017) Olivella, S. and Y. Shiraito (2017). poisbinom: A Faster Implementation of the Poisson-Binomial Distribution. R package version 1.0.1.
  • Patrini et al. (2014) Patrini, G., R. Nock, P. Rivera, and T. Caetano (2014). (Almost) no label no cry. In Advances in Neural Information Processing Systems, pp. 190–198.
  • Prentice and Sheppard (1995) Prentice, R. L. and L. Sheppard (1995). Aggregate data studies of disease risk factors. Biometrika 82(1), 113–125.
  • Quadrianto et al. (2009) Quadrianto, N., A. J. Smola, T. S. Caetano, and Q. V. Le (2009). Estimating labels from label proportions. Journal of Machine Learning Research 10(Oct), 2349–2374.
  • Roos (1999) Roos, B. (1999). Asymptotics and sharp bounds in the Poisson approximation to the Poisson-binomial distribution. Bernoulli 5(6), 1021–1034.
  • Rosenman and Viswanathan (2018) Rosenman, E. and N. Viswanathan (2018). Using Poisson binomial GLMs to reveal voter preferences. arXiv preprint arXiv:1802.01053.
  • Rueping (2010) Rueping, S. (2010). SVM classifier estimation from group probabilities. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 911–918.
  • Sedor (2015) Sedor, K. (2015). The law of large numbers and its applications. Lakehead University, Thunder Bay, Ontario, Canada.
  • Sun et al. (2017) Sun, T., D. Sheldon, and B. O’Connor (2017). A probabilistic approach for learning with label proportions applied to the US presidential election. In Data Mining (ICDM), 2017 IEEE International Conference on, pp. 445–454. IEEE.
  • Wakefield (2004) Wakefield, J. (2004). Ecological inference for 2×2 tables. Journal of the Royal Statistical Society: Series A (Statistics in Society) 167(3), 385–425.
  • Wakefield and Salway (2001) Wakefield, J. and R. Salway (2001). A statistical framework for ecological and aggregate studies. Journal of the Royal Statistical Society: Series A (Statistics in Society) 164(1), 119–137.
  • Wang (1993) Wang, Y. H. (1993). On the number of successes in independent trials. Statistica Sinica, 295–312.