Optimal Data Acquisition for Statistical Estimation
Abstract
We consider a data analyst’s problem of purchasing data from strategic agents to compute an unbiased estimate of a statistic of interest. Agents incur private costs to reveal their data and the costs can be arbitrarily correlated with their data. Once revealed, data are verifiable. This paper focuses on linear unbiased estimators. We design an individually rational and incentive compatible mechanism that optimizes the worst-case mean-squared error of the estimation, where the worst case is over the unknown correlation between costs and data, subject to a budget constraint in expectation. We characterize the form of the optimal mechanism in closed form. We further extend our results to acquiring data for estimating a parameter in regression analysis, where private costs can correlate with the values of the dependent variable but not with the values of the independent variables.
1 Introduction
In the age of automation, data is king. The statistics and machine learning algorithms that help curate our online content, diagnose our diseases, and drive our cars, among other things, are all fueled by data. Typically, this data is mined by happenstance: as we click around on the internet, seek medical treatment, or drive “smart” vehicles, we leave a trail of data. This data is recorded and used to make estimates and train machine learning algorithms. So long as representative data is readily abundant, this approach may be sufficient. But some data is sensitive and therefore inaccurate, rare, or lacking detail in observable data traces. In such cases, it is more expedient to buy the necessary data directly from the population.
Consider, for example, the problem a public health administration faces in trying to learn the average weight of a population, perhaps as an input to estimating the risk of heart disease. Weight is a sensitive personal characteristic, and people may be loath to disclose it. It is also variable over time, and so must be collected close to the time of estimation in order to be accurate. Thus, while other characteristics, like height, age, and gender, are fairly accurately recorded in, for example, driver’s license databases, weight is not. The public health administration may try surveying the public to get estimates of the average weight, but these surveys are likely to have low response rates and be biased towards healthier low-weight samples.
In this paper, we propose a mechanism for buying verifiable data from a population in order to estimate a statistic of interest, such as the expected value of some function of the underlying data. We assume each individual has a private cost, or disutility, for revealing his or her sensitive data to the analyst. Importantly, this cost may be correlated with the private data. For example, overweight or underweight individuals may have a higher cost of revealing their data than people of a healthy weight. Individuals wish to maximize their expected utility, which is the expected payment they receive for their data minus their expected cost.
The analyst has a fixed budget for buying data. The analyst does not know the distribution of the data: properties of this distribution are precisely what she is trying to learn from the data samples, so it is important that she use the data she collects to learn them rather than relying on an inaccurate prior distribution (for example, the analyst may have a prior on the weight distribution within a population from DMV records or previous surveys, but such a prior may be erroneous if people do not accurately report their weights). However, we do assume the analyst has a prior for the marginal distribution of costs, and that she uses this prior to estimate how much a survey will cost her.
The analyst would like to buy data subject to her budget, then use that data to obtain an unbiased estimator for the statistic of interest.
To this end, the analyst posts a menu of probability-price pairs. Each individual with cost $c$ selects a pair $(A, p)$ from the menu, at which point the analyst buys the data with probability $A$ at price $p$. The expected utility of the individual is thus $A\,(p - c)$.
The Horvitz-Thompson estimator always generates an unbiased estimate of the statistic being measured, regardless of the price menu. However, the precision of the estimator, as measured by the variance or mean-squared error of the estimate, depends on the menu of probability-price pairs offered to each individual. For example, offering a high price would generate data samples with low bias (since many individuals would accept such an offer), but the budget would limit the number of samples. Offering low prices allows the mechanism to collect more samples, but these would be more heavily biased, requiring more aggressive correction, which introduces additional noise. The goal of the analyst is to strike a balance between these forces and post a menu that minimizes the variance of her estimate in the worst case over all possible joint distributions of the data and cost consistent with the cost prior. We note that this problem setting was first studied by [23], who characterized an approximately optimal mechanism for moment estimation.
1.1 Summary of results and techniques
Our main contribution is a solution for the optimal menu, as discussed in Section 3. As one would expect, if the budget is large, the optimal menu offers to buy, with probability $1$, all data at a cost equal to the maximum cost in the population. If the budget is small, consistent with prior work [13], the optimal menu buys data from an individual with probability inversely proportional to the square root of their cost.
Revisiting the example of estimating the weight of a population, our scheme suggests the following solution. Imagine the costs are with probability , and the per-agent budget of the analyst is . The analyst brings a scale to a public location and posts the following menu of pairs of allocation probability and price: . A simple calculation shows that individuals with cost or will pick the first menu option: stepping on the scale and having their weight recorded with probability , and receiving a payment of dollars. Individuals with cost will pick the second menu option; if they are selected to step on the scale, which happens with probability , the analyst records their weight scaled by . The estimate is the average of the scaled weights.
We show how to extend our approach in multiple directions. First, our characterization of the optimal mechanism holds even when the quantity to be estimated is the expected value of a multi-dimensional moment function of the data. Second, we extend our techniques beyond moment estimation to the common task of multidimensional linear regression. In this regression problem, an individual’s data includes both features (which are assumed to be insensitive or publicly available) and outcomes (which may be sensitive). The analyst’s goal is to estimate the linear regression coefficients that relate the outcomes to the features. We make the assumption that an individual’s cost is independent of her features, but may be arbitrarily correlated with her outcome. For example, the goal might be to regress a health outcome (such as severity of a disease) on demographic information. In this case, we might imagine that an agent incurs no cost for reporting his age, height or gender, but his cost might be highly correlated with his realized health outcome. In such a setting, we show that the asymptotically optimal allocation rule, given a fixed budget per agent as the number of agents grows large, can be calculated efficiently and exhibits a pooling region as before. However, unlike for moment estimation, agents with intermediate costs can also be pooled together. We further show that our results extend to nonlinear regression in Appendix D, under mild additional conditions on the regression function.
Our techniques rely on (i) reducing the mechanism design problem to an optimization problem through the classical notion of virtual costs, and then (ii) reducing the problem of optimizing the worst-case variance to that of finding an equilibrium of a zero-sum game between the analyst and an adversary. The adversary’s goal is to pick a distribution of data, conditional on agents’ costs, that maximizes the variance of the analyst’s estimator. We then characterize such an equilibrium through the optimality conditions for convex optimization described in [2].
1.2 Related work
A growing amount of attention has been placed on understanding interactions between the strategic nature of data holders and the statistical inference and learning tasks that use data collected from these holders. The work on this topic can be roughly divided into two categories according to whether money is used for incentive alignment.
In the first category, individuals as data holders do not directly derive utility from the accuracy of the inference or learning outcome, but in some cases may incur a privacy cost if the outcome leaks their private information. The analyst uses monetary payments to incentivize agents to reveal their data. Our work falls into this category. Prior papers by Roth and Schoenebeck [23] and Abernethy et al. [1] are closest to our setting. Similarly to our work, both Roth and Schoenebeck [23] and Abernethy et al. [1] consider an analyst’s problem of purchasing data from individuals with private costs subject to a budget constraint, allow the cost to be correlated with the value of data, and assume that individuals cannot fabricate their data. Roth and Schoenebeck [23] aim to obtain an optimal unbiased estimator with minimum worst-case variance for the population mean, but their mechanism achieves optimality only approximately: instead of the actual worst-case variance, a bound on the worst-case variance is minimized. Our work achieves optimality exactly (minimizing worst-case variance), and our results extend to broader classes of statistical inference: moment estimation and linear regression. Abernethy et al. [1] consider general supervised learning. They do not seek to achieve a notion of optimality; instead, they take a learning-theoretic approach and design mechanisms to obtain learning guarantees (risk bounds).
Several papers consider data acquisition models with different objectives under the assumptions that (a) individuals do not fabricate their data, and (b) private costs and value of data are uncorrelated. For example, in Cummings et al. [6], the analyst can decide the level of accuracy for data purchased from each individual, and wishes to guarantee a certain desired level of accuracy of the aggregated information while minimizing the total privacy cost incurred by the agents. Cai, Daskalakis, and Papadimitriou [3] focus on incentivizing individuals to exert effort to obtain high-quality data for the purpose of linear regression. Another line of research in the first category examines the data acquisition problem under the lens of differential privacy [13, 10, 12, 20, 5]. The mechanism designer then uses payments to balance the tradeoff between privacy and accuracy.
In the second category, individuals’ utilities directly depend on the inference or learning outcome (e.g. they want a regression line to be as close to their own data point as possible) and hence they have incentives to manipulate their reported data to influence the outcome. There is often no cost for reporting one’s data. The data analyst, without using monetary payments, attempts to design or identify inference or learning processes so that they are robust to potential data manipulations. Most papers in this category assume that independent variables (feature vectors) are unmanipulable public information and dependent variables are manipulable private information [7, 16, 17, 21], though some papers consider strategic manipulation of feature vectors [14, 8]. Such strategic data manipulations have been studied for estimation [4], classification [16, 17, 14], online classification [8], regression [22, 7], and clustering [21]. Work in this category is closer to mechanism design without money in the sense that it focuses on incentive alignment in acquiring data (e.g., strategyproof algorithms) but often does not evaluate the performance of the inference or learning, with a few notable exceptions [14, 8].
2 Model and Preliminaries
Survey Mechanisms There is a population of $n$ agents. Each agent $i$ has a private pair $(d_i, c_i)$, where $d_i$ is a data point and $c_i \ge 0$ is a cost. We think of $c_i$ as the disutility agent $i$ incurs by releasing her data $d_i$. The pair $(d_i, c_i)$ is drawn from a distribution $\mathcal{D}$, unknown to the mechanism designer. We denote with $F$ the CDF of the marginal distribution $\mathcal{C}$ of costs.
A survey mechanism is defined by an allocation rule $A$ and a payment rule $p$, and works as follows. Each agent arrives at the mechanism in sequence and reports a cost $c'_i$. The mechanism chooses to buy the agent’s data with probability $A(c'_i)$. If the mechanism buys the data, then it learns the value of $d_i$ (i.e., agents cannot misreport their data) and pays the agent $p(c'_i)$. Otherwise the data point is not learned and no payment is made.
We assume agents have quasilinear utilities, so that the utility enjoyed by agent $i$ with cost $c_i$ when reporting $c'_i$ is
(1) $u_i(c'_i) = A(c'_i)\,\big(p(c'_i) - c_i\big)$
We will restrict attention to survey mechanisms that are truthful and individually rational.
Definition 1 (Truthful and Individually Rational  TIR).
A survey mechanism is truthful if for any cost $c$ it is in the agent’s best interest to report their true cost, i.e. for any report $c'$:
(2) $A(c)\,\big(p(c) - c\big) \ge A(c')\,\big(p(c') - c\big)$
It is individually rational if, for any cost $c$, $A(c)\,\big(p(c) - c\big) \ge 0$.
We assume that the mechanism is constrained in the amount of payment it can make to the agents. We will formally define this as an expected budget constraint for the survey mechanism.
Definition 2 (Expected Budget Constraint).
A mechanism respects a budget constraint $B$ if:
(3) $\mathbb{E}_{c \sim \mathcal{C}}\big[A(c)\,p(c)\big] \le B$
Estimators The designer (or data analyst) wishes to use the survey mechanism to estimate some parameter of the marginal distribution of data points.
Definition 3 (Unbiased Estimator).
Given an allocation function $A$, an estimator $\hat{\theta}$ for $\theta$ is unbiased if for any instantiation of the true distribution $\mathcal{D}$ its expected value is equal to $\theta(\mathcal{D})$:
(4) $\mathbb{E}_{\mathcal{D}}\big[\hat{\theta}\big] = \theta(\mathcal{D})$
Given a fixed choice of estimator, the mechanism designer wants to construct the survey mechanism to minimize the variance (finite sample or asymptotic as the population grows) of that estimator. Since the designer does not know the distribution , we will work with the worstcase variance over all instantiations of that are consistent with the cost marginal .
Definition 4 (WorstCase Variance).
Given an allocation function $A$ and an instance of the true distribution $\mathcal{D}$, the variance of an estimator $\hat{\theta}$ is defined as:
(5) $\mathrm{Var}_{\mathcal{D}}(\hat{\theta}) = \mathbb{E}_{\mathcal{D}}\big[(\hat{\theta} - \theta(\mathcal{D}))^2\big]$
The worst-case variance of $\hat{\theta}$ is
(6) $\sup_{\mathcal{D}\,:\,\mathcal{D}\text{ has cost marginal }\mathcal{C}} \mathrm{Var}_{\mathcal{D}}(\hat{\theta})$
We are now ready to formally define the mechanism design problem faced by the data analyst.
Definition 5 (Analyst’s Mechanism Design Problem).
Given an estimator $\hat{\theta}$ and cost distribution $\mathcal{C}$, the goal of the designer is to design an allocation rule $A$ and payment rule $p$ so as to minimize worst-case variance subject to the truthfulness, individual rationality and budget constraints:
(7) $\min_{A,\,p}\ \sup_{\mathcal{D}\,:\,\mathcal{D}\text{ has cost marginal }\mathcal{C}} \mathrm{Var}_{\mathcal{D}}(\hat{\theta})$ subject to truthfulness (2), individual rationality, and the budget constraint (3)
Implementing Surveys as Posted Menus. The formulation above describes surveys as direct-revelation mechanisms, where agents report costs. We note that an equivalent indirect implementation might be more natural: a posted menu survey offers each agent a menu of (price, probability) pairs $\{(p_k, A_k)\}_k$. If the agent chooses item $k$ then their data is elicited with probability $A_k$, in which case they are paid $p_k$. Each agent can choose the item that maximizes their expected utility, i.e., $\arg\max_k A_k\,(p_k - c)$. By the well-known taxation principle, any survey mechanism can be implemented as a posted menu survey, and the number of menu items required is at most the size of the support of the cost distribution.
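For concreteness, the agent's choice from a posted menu can be sketched in a few lines of Python. The menu values below are hypothetical (not derived from any optimal rule); the point is only that each agent picks the expected-utility-maximizing item, or opts out when no item yields nonnegative utility.

```python
# Hypothetical posted-menu survey: an agent with cost c evaluates each
# (allocation probability, price) pair by its expected utility A * (p - c).
def best_menu_item(menu, c):
    """Return the index of the utility-maximizing menu item for cost c,
    or None if the agent prefers to opt out (all utilities negative)."""
    utilities = [a * (p - c) for (a, p) in menu]
    best = max(range(len(menu)), key=lambda i: utilities[i])
    return best if utilities[best] >= 0 else None

# Two items: buy surely at price 2, or with probability 1/2 at price 3.
menu = [(1.0, 2.0), (0.5, 3.0)]
print(best_menu_item(menu, 0.5))   # low-cost agent prefers the sure purchase
print(best_menu_item(menu, 2.0))   # mid-cost agent prefers the lottery
print(best_menu_item(menu, 5.0))   # cost exceeds both prices: opts out
```

Note that the lower-probability item carries a higher price: an agent prefers it only when their cost is high enough, which is exactly the monotone screening behind the taxation principle.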
2.1 Reducing Mechanism Design to Optimization
We begin by reducing the mechanism design problem to a simpler fullinformation optimization problem where the designer knows the private cost of each player and can acquire their data by paying them exactly that cost. However, the designer is constrained to using monotone allocation rules, in which players with higher costs have weakly lower probability of being chosen.
Definition 6 (Analyst’s Optimization Problem).
Given an estimator $\hat{\theta}$ and cost distribution $\mathcal{C}$, the optimization version of the designer’s problem is to find a nonincreasing allocation rule $A$ that minimizes worst-case variance subject to the budget constraint, assuming agents are paid their cost:
(8) $\min_{A \text{ nonincreasing}}\ \sup_{\mathcal{D}\,:\,\mathcal{D}\text{ has cost marginal }\mathcal{C}} \mathrm{Var}_{\mathcal{D}}(\hat{\theta})$ subject to $\mathbb{E}_{c \sim \mathcal{C}}\big[A(c)\,c\big] \le B$
The mechanism design problem in Definition 5 reduces to the optimization problem given by Definition 6, albeit with a transformation of costs to virtual costs.
Definition 7 (Virtual Costs and Regular Distributions).
If $\mathcal{C}$ is continuous and admits a density $f$ then define the virtual cost function as $\phi(c) = c + F(c)/f(c)$. If $\mathcal{C}$ is discrete with support $c_1 < c_2 < \cdots < c_m$ and probability mass function $f_j = \Pr[c = c_j]$, then define the virtual cost function as: $\phi(c_j) = c_j + \frac{F(c_{j-1})}{f_j}\,(c_j - c_{j-1})$, with $\phi(c_1) = c_1$. We also denote with $\Phi$ the distribution of virtual costs; i.e., the distribution created by first drawing $c$ from $\mathcal{C}$ and then mapping it to $\phi(c)$. A distribution is regular if the virtual cost function is increasing.
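The discrete virtual-cost transform can be computed in a few lines. The following sketch assumes the reading $\phi_j = c_j + (c_j - c_{j-1})\,F_{j-1}/f_j$ with $\phi_1 = c_1$; the example support and probabilities are made up.

```python
import numpy as np

def virtual_costs(costs, probs):
    """Discrete virtual costs phi_j = c_j + (c_j - c_{j-1}) * F_{j-1} / f_j,
    with phi_1 = c_1.  costs: increasing support; probs: their probabilities."""
    costs = np.asarray(costs, dtype=float)
    probs = np.asarray(probs, dtype=float)
    F = np.cumsum(probs)                    # CDF at each support point
    phi = costs.copy()
    # For j >= 2, add the information-rent term (c_j - c_{j-1}) * F(c_{j-1}) / f_j.
    phi[1:] += np.diff(costs) * F[:-1] / probs[1:]
    return phi

print(virtual_costs([1.0, 2.0, 4.0], [0.5, 0.25, 0.25]))
```

On this example the virtual costs come out increasing, so the distribution is regular in the sense of Definition 7.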
The following is an analogue of Myerson’s [18] reduction of mechanism design to virtual welfare maximization, adapted to the survey design setting.
Lemma 1.
If the distribution of costs $\mathcal{C}$ is regular, then solving the Analyst’s Mechanism Design Problem reduces to solving the Analyst’s Optimization Problem for the distribution of virtual costs $\Phi$.
Proof.
The proof is given in Appendix E.1. ∎
2.2 Unbiased Estimation and Inverse Propensity Scoring
We now describe a class of estimators that we will focus on for the remainder of the paper. Note that simply evaluating the quantity of interest on the sampled data points can lead to bias, due to the potential correlation between costs and data. For instance, suppose that the goal is to estimate the mean of the distribution of $d$. A natural estimator is the average of the collected data points. However, if players with lower $d_i$ tend to have lower cost, and are therefore selected with higher probability by the analyst, then this estimator will consistently underestimate the true mean.
This problem can be addressed using inverse propensity scoring (IPS), pioneered by Horvitz and Thompson [15]. The idea is to recover unbiasedness by weighting each data point by the inverse of the probability of observing it. This IPS approach can be applied to any parameter estimation problem where the parameter of interest is the expected value of an arbitrary moment function .
Definition 8 (HorvitzThompson Estimator).
The Horvitz-Thompson estimator for the case when the parameter of interest is the expected value $\theta = \mathbb{E}[m(d)]$ of a (moment) function $m$ is defined as:
(9) $\hat{\theta}_{HT} = \frac{1}{n} \sum_{i=1}^{n} \frac{X_i\, m(d_i)}{A(c_i)}$, where $X_i \in \{0,1\}$ indicates whether agent $i$’s data was purchased.
The HorvitzThompson estimator is the unique unbiased estimator that is a linear function of the observations [23]. It is therefore without loss of generality to focus on this estimator if one restricts to unbiased linear estimators.
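A small Monte Carlo illustration makes the contrast concrete. The two-type population below is entirely made up: low data values come with low costs, so a naive sample mean is biased downward, while the inverse-propensity-weighted Horvitz-Thompson estimate recovers the true mean.

```python
import random

random.seed(0)

# Hypothetical population: (d=0, cost 1) and (d=1, cost 4), each with prob 1/2.
# True mean of d is 0.5.
types = [(0.0, 1.0), (1.0, 4.0)]
A = {1.0: 1.0, 4.0: 0.5}   # allocation: buy cheap data surely, expensive data half the time

def estimates(n, trials):
    naive, ht = [], []
    for _ in range(trials):
        bought, ht_sum = [], 0.0
        for _ in range(n):
            d, c = random.choice(types)
            if random.random() < A[c]:
                bought.append(d)
                ht_sum += d / A[c]        # inverse-propensity weight
        naive.append(sum(bought) / len(bought))
        ht.append(ht_sum / n)             # normalize by population size n
    return sum(naive) / trials, sum(ht) / trials

naive_mean, ht_mean = estimates(n=200, trials=2000)
print(round(naive_mean, 2), round(ht_mean, 2))
```

The naive average concentrates around 1/3 (the purchased sample over-represents the cheap low-value type), while the Horvitz-Thompson estimate concentrates around the true mean 0.5.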
IPS beyond moment estimation. We defined the Horvitz-Thompson estimator with respect to moment estimation problems, where $\theta = \mathbb{E}[m(d)]$. As it turns out, this approach to unbiased estimation extends even beyond the moment estimation problem, to parameter estimation problems where the parameter is defined as the solution to a system of moment equations, or as the minimizer of a moment function. We defer this discussion to Section 4.
3 Estimating Moments of the Data Distribution
In this section we consider the case where the analyst’s goal is to estimate the mean of a given moment function of the distribution. That is, there is some function $m$, normalized so that both $0$ and $1$ are in the support of the random variable $m(d)$, and the goal of the analyst is to estimate $\theta = \mathbb{E}[m(d)]$.
For convenience we will assume that the cost distribution has finite support, say $c_1 < c_2 < \cdots < c_m$. (We relax the finite support assumption in Appendix A.) Write $f_j$ for the probability of cost $c_j$ in $\mathcal{C}$. Also, for a given allocation rule $A$, we will write $A_j = A(c_j)$ for convenience. That is, we can interpret an allocation rule as a vector of values $(A_1, \ldots, A_m)$. Finally, we will assume that the distribution of costs is regular.
Our goal is to address the analyst’s mechanism design problem for this restricted setting. By Lemma 1 it suffices to solve the analyst’s optimization problem. We start by characterizing the worstcase variance for this setting.
Lemma 2.
The worstcase variance of the HorvitzThompson estimator of a moment , given cost distribution and allocation rule , is:
(10) 
Proof.
For any distribution $\mathcal{D}$, observe that the Horvitz-Thompson estimator can be written as the average of $n$ i.i.d. random variables $Z_i = X_i\, m(d_i)/A(c_i)$, each with variance $\mathrm{Var}(Z)$.
Hence, the variance of the estimator is $\frac{1}{n}\mathrm{Var}(Z)$. Observe that conditional on any cost value $c_j$, the worst-case distribution will assign positive mass only to values $d$ such that $m(d) \in \{0,1\}$. This is because any other conditional distribution can be altered by a mean-preserving spread, pushing all the mass to these extreme values, while preserving the conditional mean $\mathbb{E}[m(d) \mid c_j]$. This would only increase the latter variance. Thus we can assume without loss of generality that $m(d) \in \{0,1\}$, in which case $m(d)^2 = m(d)$ and $\mathbb{E}[m(d)^2 \mid c_j] = \mathbb{E}[m(d) \mid c_j]$. Let $\mu_j = \mathbb{E}[m(d) \mid c_j]$. Then we can simplify the variance as:
$\mathrm{Var}(Z) = \mathbb{E}\left[\frac{X\,m(d)^2}{A(c)^2}\right] - \theta^2 = \sum_{j=1}^{m} \frac{f_j\,\mu_j}{A_j} - \Big(\sum_{j=1}^{m} f_j\,\mu_j\Big)^2.$
The lemma follows since the worst-case variance is a supremum over all possible consistent distributions, hence equivalently a supremum over the conditional means $\mu \in [0,1]^m$. ∎
Given the above characterization of the variance of the estimator, we can greatly simplify the analyst’s optimization problem for this setting. Indeed, it suffices to find the allocation rule $A$ that minimizes (10), subject to being monotone nonincreasing and satisfying the expected budget constraint.
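The inner supremum in the worst-case variance of Lemma 2 can be evaluated numerically. The sketch below exploits that the objective is concave in $\mu$: for a fixed mean $t = \sum_j f_j \mu_j$, the linear term is maximized by loading $\mu$ onto the types with the smallest allocation first, leaving a one-dimensional search over $t$. The inputs are illustrative, not from the paper.

```python
import numpy as np

def worst_case_variance(f, A, grid=10001):
    """Evaluate sup_{mu in [0,1]^m} sum_j f_j mu_j / A_j - (sum_j f_j mu_j)^2
    (the per-sample worst-case variance of Lemma 2, without the 1/n factor)."""
    f, A = np.asarray(f, dtype=float), np.asarray(A, dtype=float)
    order = np.argsort(A)                  # smallest A_j (largest 1/A_j) first
    f_s, A_s = f[order], A[order]
    best = 0.0
    for t in np.linspace(0.0, 1.0, grid):  # t = sum_j f_j mu_j
        rem, linear = t, 0.0
        for fj, aj in zip(f_s, A_s):       # greedily fill mu_j up to 1
            mass = min(fj, rem)
            linear += mass / aj
            rem -= mass
        best = max(best, linear - t * t)
    return best

f = [0.5, 0.5]
print(worst_case_variance(f, [1.0, 1.0]))  # full allocation: Bernoulli worst case 1/4
print(worst_case_variance(f, [1.0, 0.5]))  # down-weighting one type raises the variance
```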
3.1 Characterization of the Optimal Allocation Rule
We are now ready to solve the analyst’s optimization problem for moment estimation. We remark that if the budget per agent is larger than the expected cost of an agent, then it is feasible (and hence optimal) for the analyst to set the allocation rule to pick every type with probability $1$. We therefore assume without loss of generality that $B < \mathbb{E}_{c \sim \mathcal{C}}[c]$.
Our analysis is based on an equilibrium characterization, where we view the analyst choosing $A$ and the adversary choosing $\mu$ as playing a zero-sum game and solve for its equilibria. We first present the characterization and some qualitative implications, and then present an outline of our proof. We defer the full details of the proof to Appendix E.2.
Theorem 3 (Optimal Allocation for Moment Estimation).
The optimal allocation rule is determined by two constants $\alpha$ and $\bar{c}$ such that:
(11) $A_j = \begin{cases} \alpha & \text{if } c_j \le \bar{c} \\ \alpha\,\sqrt{\bar{c}/c_j} & \text{if } c_j > \bar{c} \end{cases}$
with $\alpha$ uniquely determined such that the budget constraint is binding.
The parameters $\alpha$ and $\bar{c}$ in Theorem 3 are explicitly derived in closed form in Appendix E.2. For instance, in the large-budget regime the pooling threshold $\bar{c}$ covers the entire support, so that every type receives the same allocation probability; in the small-budget regime the pooling region is empty and $A_j$ is proportional to $1/\sqrt{c_j}$ for all $j$. More generally, the computational part of Theorem 3 follows by performing binary search over the support of $\mathcal{C}$, which can be done in time logarithmic in the support size.
We note that the optimal rule essentially allocates to each agent with probability inversely proportional to the square root of their cost, but may also “pool” the allocation probability for agents at the lower end of the cost distribution. See Figure 1 for examples of optimal solutions. In comparison, the approximately optimal rule presented in [23] omits the pooling region.
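The pooled square-root structure can also be found by brute force. The sketch below is a simplified numerical illustration, not the paper's closed-form procedure: it searches the family of rules that are flat up to a threshold and proportional to $1/\sqrt{c}$ above it, scales each candidate so the budget binds, and scores it with the worst-case variance objective of Lemma 2 (using raw costs in place of virtual costs; all numbers are hypothetical).

```python
import numpy as np

def wcv(f, A, grid=1001):
    # Per-sample worst-case variance: sup_mu sum f mu / A - (sum f mu)^2.
    f, A = np.asarray(f, dtype=float), np.asarray(A, dtype=float)
    order = np.argsort(A)
    f_s, A_s = f[order], A[order]
    best = 0.0
    for t in np.linspace(0.0, 1.0, grid):
        rem, lin = t, 0.0
        for fj, aj in zip(f_s, A_s):
            mass = min(fj, rem)
            lin += mass / aj
            rem -= mass
        best = max(best, lin - t * t)
    return best

def pooled_sqrt_allocation(costs, f, B, n_thresholds=200):
    """Search rules A_j = alpha * min(1, sqrt(cbar / c_j)) over thresholds cbar,
    with alpha scaled so that the expected budget binds."""
    costs, f = np.asarray(costs, dtype=float), np.asarray(f, dtype=float)
    best = None
    for cbar in np.linspace(costs[0], costs[-1], n_thresholds):
        shape = np.minimum(1.0, np.sqrt(cbar / np.maximum(costs, cbar)))
        alpha = min(1.0 / shape.max(), B / float(np.dot(f, shape * costs)))
        A = alpha * shape
        v = wcv(f, A)
        if best is None or v < best[0]:
            best = (v, A)
    return best[1]

costs, f = np.array([1.0, 4.0, 9.0]), np.array([1/3, 1/3, 1/3])
A = pooled_sqrt_allocation(costs, f, B=2.0)
print(np.round(A, 3))
```

The returned rule is monotone nonincreasing in cost and spends exactly the budget in expectation.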
The proof of Theorem 3 appears in Appendix E.2. The main idea is to view the optimization problem as a zerosum game between the analyst who designs the allocation rule , and an adversary who designs so as to maximize the variance of the estimate. We show how to compute an equilibrium of this zerosum game via Lagrangian and KKT conditions, and then note that the obtained must in fact be an optimal allocation rule for worstcase variance.
The analysis above applied to a discrete cost distribution over a finite support of possible costs. We show how to extend this analysis to a continuous distribution over costs in Appendix A, noting that the continuous variant of the Optimization Problem for Moment Estimation can be derived by taking the limit over finer and finer discrete approximations of the cost distribution.
4 Further results
Multidimensional moment estimation: We show in Appendix B that we can in fact extend our analysis to the case where the moment is a $K$-dimensional vector and we are trying to estimate each coordinate of $\theta = \mathbb{E}[m(d)]$ so as to minimize the mean-squared error of the estimator. More precisely, we prove—under the condition that all of the corners of the hypercube are contained in the support of the moment vector—that the $K$-dimensional problem reduces exactly to solving the one-dimensional problem discussed in Section 3. One can therefore use the results of Section 3 to design an optimal allocation rule under the budget constraint in the multidimensional setting.
Linear regression: We extend our results beyond moment estimation, to a multidimensional linear regression task. In this setting, an agent’s information consists of a feature vector $x_i$, an outcome value $y_i$, and a residual value $\epsilon_i$, drawn in the following manner: first, $x_i$ is drawn from an unknown distribution $\mathcal{F}$. Then, independently from $x_i$, the pair $(c_i, \epsilon_i)$ is drawn from a joint distribution $\mathcal{G}$ over costs and residuals. The marginal distribution over costs, $\mathcal{C}$, is known to the designer, but not the full joint distribution $\mathcal{G}$. Then $y_i$ is defined to be
$y_i = \langle \beta, x_i \rangle + \epsilon_i,$
where $\beta \in \mathcal{B}$, with $\mathcal{B}$ a compact subset of $\mathbb{R}^d$. We assume the marginal distribution over $\epsilon_i$ is supported on some bounded range and has expected value $0$. (So, in particular, $\mathbb{E}[y_i \mid x_i] = \langle \beta, x_i \rangle$.)
When a survey mechanism buys data from agent $i$, the pair $(x_i, y_i)$ is revealed. However, the value of $\epsilon_i$ is not revealed to the survey mechanism. The goal of the designer is to estimate the parameter vector $\beta$.
One can interpret $x_i$ as a vector of publicly-verifiable information about agent $i$, which might influence a (possibly sensitive) outcome $y_i$. For example, $x_i$ might consist of demographic information, and $y_i$ might indicate the severity of a medical condition. The coefficient vector $\beta$ describes the average effect of each feature on the outcome, over the entire population. Under this interpretation, $\epsilon_i$ is the residual agent-specific component of the outcome, beyond what can be accounted for by the agent’s features. We can interpret the independence of $(c_i, \epsilon_i)$ from $x_i$ as meaning that each agent’s cost to reveal information is potentially correlated with their (private) residual data, but is independent of the agent’s features.
As in Section 3, the analyst wants to design a survey mechanism to buy from the agents, obtain data from the set of elicited agents, then compute an estimate $\hat{\beta}$ of $\beta$, while not paying each agent more than the budget $B$ per agent in expectation. To this end, the analyst designs an allocation rule and a pricing rule so as to minimize the normalized worst-case asymptotic mean-squared error of $\hat{\beta}$ as the population size goes to infinity. Our mechanism will essentially be optimizing the coefficient in front of the leading term in the mean-squared error, ignoring potential finite-sample deviations that decay at a faster rate than $1/n$. Note that we will design allocation and pricing rules to be independent of the population size $n$; hence, the analyst can use the designed mechanism even if the exact population size is unknown.
The analyst’s estimate $\hat{\beta}$ is given by the value that minimizes the Horvitz-Thompson (inverse-propensity-weighted) mean-squared error, i.e.,
(12) $\hat{\beta} \in \arg\min_{b \in \mathcal{B}} \sum_{i=1}^{n} \frac{X_i}{A(c_i)}\,\big(y_i - \langle b, x_i \rangle\big)^2$
Further, we make the following assumption on the distribution of data points:
Assumption 1 (Assumption on the distribution of features).
$\mathbb{E}_{x \sim \mathcal{F}}\big[x x^{\top}\big]$ is finite and positive-definite, and hence invertible.
A finite expectation is a property one may expect real data such as age, height, weight, etc. to exhibit. The second part of the assumption is satisfied by common classes of distributions, such as multivariate normals.
We now state our main results, and defer proofs and technical details to Appendix C. We first show that our estimator is consistent, i.e. it converges in probability towards the true parameter as the population size grows to infinity:
Lemma 4.
Under Assumption 1, for any allocation rule $A$ that does not depend on the data, $\hat{\beta}$ is a consistent estimator of $\beta$.
As in Section 3, we now assume costs are drawn from a discrete set, say $c_1 < \cdots < c_m$. We will then write $A_j$ for the allocation probability conditional on the cost being $c_j$, and $f_j$ for the probability of the cost of an agent being $c_j$. We will further assume that $B < \mathbb{E}_{c \sim \mathcal{C}}[c]$, meaning that it is not feasible to accept all data points, since otherwise it is trivially optimal to set $A_j = 1$ for all $j$.
Theorem 5.
Under the assumptions described above, and under a condition on the primitives made precise in Appendix C, an optimal allocation rule has the form
$A_j = \begin{cases} \alpha_1/\sqrt{c_j} & \text{for } j < j_1 \\ \bar{A} & \text{for } j_1 \le j \le j_2 \\ \alpha_2/\sqrt{c_j} & \text{for } j > j_2 \end{cases}$
for positive constants $\alpha_1$, $\alpha_2$, $\bar{A}$ that do not depend on $j$, and integers $j_1$ and $j_2$ with $1 \le j_1 \le j_2 \le m$.
When the condition fails, the form of the allocation rule can be obtained by reversing the corresponding roles (for more details, see Appendix C). We remark that the solution for the linear regression case exhibits a structure that is similar to the structure of the optimal allocation rule for moment estimation (see Theorem 3): it exhibits a pooling region in which all cost types are treated the same way, and scales as the inverse of the square root of the cost outside said pooling region. However, we note that we may now choose to pool agents together in an intermediate range of costs, instead of pooling together agents whose costs are below a given threshold.
Nonlinear regression: We further show that our results extend to nonlinear regression, i.e. when $y_i$ is generated by a process of the more general form
$y_i = h(x_i; \beta) + \epsilon_i,$
under a few additional assumptions on the distribution of $x_i$ and on the regression function $h$. Said assumptions are discussed in more detail in Appendix D.
Appendix A Extension: Continuous Costs for Moment Estimation
The analysis above applied to a discrete cost distribution over a finite support of possible costs. We now show how to extend this analysis to a continuous distribution over costs. We first note that by taking the limit over finer and finer discrete approximations of the cost distribution, one can derive the following continuous variant of the Optimization Problem for Moment Estimation.
Definition 9 (Continuous Optimization Problem for Moment Estimation).
When costs are supported on $[0, c_{\max}]$, the analyst’s optimization problem for the moment estimation problem based on the Horvitz-Thompson estimator can be written as:
(13) $\inf_{A \text{ nonincreasing}}\ \sup_{\mu : [0, c_{\max}] \to [0,1]} \int_0^{c_{\max}} \frac{\mu(c)}{A(c)}\,dF(c) - \Big(\int_0^{c_{\max}} \mu(c)\,dF(c)\Big)^2$ subject to $\int_0^{c_{\max}} A(c)\,c\,dF(c) \le B$
We can now establish the following continuous variant of Theorem 3, which describes the optimal survey mechanism for continuous cost distributions.
Theorem 6 (Continuous Limit of Optimal Allocation).
If the distribution of costs is atomless and supported in $[0, c_{\max}]$, then the optimal allocation rule is determined by two constants $\alpha$ and $\bar{c}$ such that:
(14) $A(c) = \begin{cases} \alpha & \text{if } c \le \bar{c} \\ \alpha\,\sqrt{\bar{c}/c} & \text{if } c > \bar{c} \end{cases}$
with $\alpha$ uniquely determined such that the budget constraint is binding.
Closed-form expressions for $\bar{c}$ and $\alpha$ are derived in the proof below (see Figure 2).
Let us give some intuition behind the form of the allocation rule described in Theorem 6. As in Theorem 3, the allocation rule will pool agents with low costs (i.e., less than some threshold $\bar{c}$), then allocate to higher-cost agents inversely proportional to the square root of their costs. The threshold $\bar{c}$, the boundary of the pooling region, is nondecreasing in the budget $B$, increasing up to a maximum value of $c_{\max}$ (at which point all agents are pooled).
Let us restrict attention to the case where the mean of the distribution is at least as large as half of the maximum value of the support, i.e., $\mathbb{E}[c] \ge c_{\max}/2$ (see Figure 2). In this case the optimal mechanism takes the following intuitive form: first, assign each agent an allocation probability that would, in an alternate world where costs are capped at $\bar{c}$, precisely exhaust the budget. Since costs can actually be greater than $\bar{c}$, this flat allocation goes over-budget. So, for agents whose costs are greater than $\bar{c}$, we remove allocation probability so that (a) the budget becomes balanced, and (b) the remaining probability of allocation is inversely proportional to the square root of the costs.
A.1 Proof of Theorem 6
Consider any atomless distribution supported in . We can approximate any such distribution by considering a discretized grid of the interval, i.e., , and the discrete-support distribution defined by the pmf for . Since the loss of the zero-sum game for moment estimation is continuous in the CDF of the cost distribution, the minimax value of the game is continuous in the CDF of the cost distribution (see, e.g., [9] on continuity of the minimax value with respect to the parameters of the game). Hence, the limit of the optimal solutions of the discretized problems is an optimal solution to the continuous problem.
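The discretization step can be made concrete with a small sketch (hypothetical helper, not from the paper): given the cost CDF as a function, assign each grid cell the probability mass the CDF places on it.

```python
import numpy as np

def discretize_cdf(cdf, grid):
    """Probability mass of each cell [grid[k], grid[k+1]) under the CDF."""
    vals = np.array([cdf(g) for g in grid])
    return np.diff(vals)

# Example: uniform costs on [0, 1], discretized into m equal cells.
m = 4
grid = np.linspace(0.0, 1.0, m + 1)
pmf = discretize_cdf(lambda c: min(max(c, 0.0), 1.0), grid)
```

As the grid is refined, the induced discrete CDF converges pointwise to the continuous one, which is what drives the continuity argument above.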
We now consider the limiting structure of the optimal solutions to the discretized problems. We will use the more structural characterization of our main Theorem 3, presented as Theorem 18 in Appendix E.2. In particular, taking the limit of the form in Theorem 18, the optimal solution looks as follows: for
(15) 
Now let us examine how the point is defined in the limit. Consider the functions , and defined in Theorem 18. In the limit as , observe that for every . Therefore, and , and hence also . It follows that defined in Theorem 18 satisfies and in the limit for some . Hence, we only need to consider these functions at and take their limit as . In this limit, these functions take the simpler forms (since summations converge to integrals): for
(16)  
(17) 
Hence, adapting the discrete characterization of and to these limits, the parameter is defined by the following process: let be the solution to the equation . If , then and ; otherwise, and .
Now we observe that, since is atomless with support , is a continuous, increasing function of with range . Hence, if (which we assumed holds, as otherwise the problem is trivial), then ; equivalently, if , then must be the unique solution to the equation .
Moreover, observe that is also a decreasing function of , ranging over as varies from to . If , then for all , so the second case of the characterization of never holds; is then the solution to the equation , or if is above . Moreover, in both cases, or equivalently . Hence, the theorem holds in this case.
Otherwise, let be the solution to the equation . Thus above and below . Now consider the function . This function is continuous and increasing; it equals for and equals for .
If the solution to the equation occurs at , then and . Otherwise, if the solution to that equation lies above , then (the latter always has a solution when ) and . Thus, in this case, and , which concludes the proof.
Appendix B Extension: Multidimensional Parameters for Moment Estimation
Section 3 focused on the case of estimating a single-dimensional parameter of the data distribution. In this section, we note that our characterization of the optimal mechanism extends to multidimensional moment estimation as well. In multidimensional moment estimation, there is a function , and our goal is to estimate . Here is the dimension of the estimation problem, which we assume to be a fixed constant.
As before, we estimate by applying an estimator to the data collected from a survey mechanism. To evaluate an estimator, we must extend our definition of variance to the multidimensional setting, as follows.
Definition 10 (Worst-Case Mean-Squared Error / Risk).
Given allocation function and distribution , the expected mean-squared error (or risk) of an estimator is
(18) 
and the worst-case variance of is
(19) 
When is unbiased, the risk has a natural interpretation: it is simply the sum of variances of each coordinate of , considered separately.
Claim 7 (Risk of Unbiased Estimators).
The risk of any unbiased estimator equals the sum of the variances of its coordinates:
(20) 
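The decomposition in Claim 7 is easy to check numerically. The following is a minimal sketch with a hypothetical setup (the sample mean of i.i.d. binary vectors serves as the unbiased estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, trials = 3, 50, 2000
mu = np.array([0.2, 0.5, 0.8])   # true moment vector (assumed for the demo)

# Each trial produces one unbiased estimate of mu (the sample mean).
estimates = np.array([rng.binomial(1, mu, size=(n, d)).mean(axis=0)
                      for _ in range(trials)])

# Risk E||hat - mu||^2 versus the sum of per-coordinate mean-squared errors.
risk = np.mean(np.sum((estimates - mu) ** 2, axis=1))
coord_sum = np.sum(np.mean((estimates - mu) ** 2, axis=0))
```

The two quantities agree by linearity of expectation, which is exactly the content of Claim 7.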
As in the single-dimensional case, the analyst obtains an estimate through the Horvitz-Thompson estimator, which is defined as follows for parameters in . Also as in the single-dimensional case, the Horvitz-Thompson estimator is an unbiased estimator of .
Definition 11 (HorvitzThompson Estimator).
The Horvitz-Thompson estimator for the case when the parameter of interest is the expected value of a vector of moments is defined as:
(21) 
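The unbiasedness of the Horvitz-Thompson estimator can be illustrated with a short simulation. This is a sketch with hypothetical names (not the paper's notation): `A` holds the allocation probabilities, and agent `i`'s vector of moments is bought with probability `A[i]`.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, trials = 200, 2, 5000
m = rng.uniform(0, 1, size=(n, d))   # moment vectors held by the n agents
A = rng.uniform(0.2, 1.0, size=n)    # allocation (inclusion) probabilities

target = m.mean(axis=0)              # the quantity being estimated
estimates = np.zeros((trials, d))
for t in range(trials):
    included = rng.random(n) < A     # agent i sells data w.p. A[i]
    estimates[t] = (m * (included / A)[:, None]).sum(axis=0) / n

# Averaged over trials, the inverse-probability weighting removes the
# selection bias introduced by the cost-dependent allocation.
```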
For our characterization of worst-case risk, we will assume that the moment function can take on the extreme points of the hypercube .
Assumption 2.
is such that the induced distribution of is supported on every extreme point of the hypercube.
Lemma 8.
Under Assumption 2, the worst-case risk of the Horvitz-Thompson estimator of moment is
(22) 
Proof.
See Appendix E.3. ∎
Lemma 8 implies that the optimal survey design problem in the multidimensional case is, in fact, identical to the problem considered in the single-dimensional case. We conclude that Theorems 3 and 6, which characterized the optimal survey mechanisms for discrete and continuous single-parameter settings, respectively, also apply to the multidimensional setting without change.
Appendix C Extension: Multidimensional Parameter Estimation via Linear Regression
In this section, we extend beyond moment estimation to a multidimensional linear regression task. For this setting, we impose additional structure on the data held by each agent. Each agent’s private information consists of a feature vector , an outcome value , and a residual value , which are i.i.d. among agents. Each agent also has a cost . The data is generated in the following way: first, is drawn from an unknown distribution . Then, independently of , the pair is drawn from a joint distribution over . The marginal distribution over costs, , is known to the designer, but the full joint distribution is not. Then is defined to be
(23) 
where with a compact subset of . Without loss of generality, we pick large enough so that is in the interior of . We write for the marginal distribution over , which is supported on some bounded range , and has expected value . (So, in particular, .)
When a survey mechanism buys data from agent , the pair is revealed. Crucially, the value of is not revealed to the survey mechanism. The goal of the designer is to estimate the parameter vector .
Note that the single-dimensional moment estimation problem from Section 3 is a special case of linear regression. Indeed, consider setting , for each , , and to be the constant . Then, when the survey mechanism purchases data from agent , it learns , and estimating is equivalent to estimating the expected value of .
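This reduction is easy to verify directly: regressing the outcome on a single constant feature recovers the sample mean. A minimal check with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.uniform(0, 1, size=500)      # each agent's private value
X = np.ones((500, 1))                # the single constant feature x_i = 1

# The least-squares coefficient for the all-ones feature is the sample mean.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```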
More generally, one can interpret as a vector of publicly verifiable information about agent , which might influence a (possibly sensitive) outcome . For example, might consist of demographic information, and might indicate the severity of a medical condition. The coefficient vector describes the average effect of each feature on the outcome over the entire population. Under this interpretation, is the residual agent-specific component of the outcome, beyond what can be accounted for by the agent’s features. We interpret the independence of from as meaning that each agent’s cost to reveal information is potentially correlated with their (private) residual data, but is independent of the agent’s features.
As in Section 3, the analyst wants to design a survey mechanism to buy data from the agents, obtain data from the set of elicited agents, and then compute an estimate of , while not paying more than the per-agent budget in expectation. As in Section 2.1, we note that the problem of designing a survey mechanism in fact reduces to that of designing an allocation rule that minimizes the variance of the estimator and satisfies a budget constraint in which the prices are replaced by known virtual costs. To this end, the analyst designs an allocation rule and a pricing rule so as to minimize the normalized worst-case asymptotic mean-squared error of as the population size goes to infinity. Our mechanism essentially optimizes the coefficient of the leading term in the mean-squared error, ignoring finite-sample deviations that decay at a faster rate than . Note that we design the allocation and pricing rules to be independent of the population size ; hence, the analyst can use the designed mechanism even if the exact population size is unknown.
C.1 Estimators for Regression
Let be the set of data points elicited by the survey mechanism. The analyst’s estimate is then the value that minimizes the Horvitz-Thompson mean-squared error , i.e.,
(24) 
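In closed form, the minimizer of an inverse-probability-weighted squared error is a weighted least-squares solution over the elicited set. A sketch with hypothetical names (not the paper's notation): `included` is the 0/1 indicator of the elicited set and `A` the vector of allocation probabilities.

```python
import numpy as np

def ht_least_squares(X, y, A, included):
    """Minimize the sum over the elicited set of (y_i - b . x_i)^2 / A_i."""
    w = included / A                 # weight 1/A_i inside the set, 0 outside
    XtW = X.T * w                    # (d, n): columns scaled by the weights
    return np.linalg.solve(XtW @ X, XtW @ y)
```

With everyone elicited and `A` identically 1, this reduces to ordinary least squares.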
Further, we make the following assumptions on the distribution of data points:
Assumption 3 (Assumption on the distribution of features).
is finite and positive-definite, and hence invertible.
Finite expectation is a property one may expect real data such as age, height, and weight to exhibit. The second part of the assumption is satisfied by common classes of distributions, such as multivariate normals. We first show that is a consistent estimator of .
Lemma 9.
Under Assumption 3, for any allocation rule that does not depend on , is a consistent estimator of .
Proof of Lemma 9.
Let , and let for simplicity. The following holds:

First, we note that is the unique parameter that minimizes ; indeed, for any , we have that
Since and are independent and has mean , this simplifies to
where the last step follows from being positivedefinite by Assumption 3.

By definition, is compact.

is continuous in , and so is its expectation.

is also bounded (lower-bounded by , and upper-bounded by either or ), implying that is continuous and bounded. Hence, by the uniform law of large numbers, recalling that the are i.i.d.,
Finally, noting that conditional on , and are independent, we have:
using .
Therefore, all of the conditions of Theorem 2.1 of [19] are satisfied, which is enough to prove the result. ∎
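Lemma 9 can be illustrated with a simulation. This sketch assumes hypothetical primitives (Gaussian features, bounded residuals, costs independent of the features, and an arbitrary allocation rule that depends only on costs); it is an illustration, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)
beta = np.array([1.5, -0.5])         # assumed true parameter

def weighted_estimate(n):
    X = rng.normal(size=(n, 2))                    # features
    eps = rng.uniform(-0.5, 0.5, size=n)           # bounded residuals
    y = X @ beta + eps
    c = rng.uniform(0.1, 1.0, size=n)              # costs, independent of X
    A = np.minimum(1.0, 0.5 / np.sqrt(c))          # allocation depends on c only
    S = rng.random(n) < A                          # elicited set
    w = S / A                                      # inverse-probability weights
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)       # HT-weighted least squares

# The weighted estimate concentrates around beta as the population grows.
err = np.linalg.norm(weighted_estimate(200000) - beta)
```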
Similarly to the moment estimation problem in Section 3, the goal of the analyst is to minimize the worst-case (over the unknown feature and data distributions) asymptotic mean-squared error of the estimator . Here “asymptotic” refers to the error as approaches the true parameter . The following theorem characterizes the asymptotic covariance matrix of . (In fact, it fully characterizes the asymptotic distribution of