Impact of Frequentist and Bayesian Methods on Survey Sampling Practice: A Selective Appraisal



According to Hansen, Madow and Tepping [J. Amer. Statist. Assoc. 78 (1983) 776–793], “Probability sampling designs and randomization inference are widely accepted as the standard approach in sample surveys.” In this article, reasons are advanced for the wide use of this design-based approach, particularly by federal agencies and other survey organizations conducting complex large-scale surveys on topics related to public policy. The impact of Bayesian methods on survey sampling is also discussed in two different directions: nonparametric calibrated Bayesian inferences from large samples and hierarchical Bayes methods for small area estimation based on parametric models.


Statistical Science 2011, Vol. 26, No. 2, 240–256. DOI: 10.1214/10-STS346.


Discussed in 10.1214/11-STS346A, 10.1214/11-STS346B and 10.1214/11-STS346C; rejoinder at 10.1214/11-STS346REJ.


Key words and phrases: Bayesian pseudo-empirical likelihood, design-based approach, hierarchical Bayes methods, model-dependent approach, model-assisted methods, Polya posterior, small area estimation.

1 Introduction

Sample surveys have long been conducted to obtain reliable estimates of finite population descriptive parameters, such as totals, means, ratios and quantiles, along with associated standard errors and normal theory intervals when sample sizes are large enough. Probability sampling designs and randomization (repeated sampling) inference, also called the design-based approach, have played a dominant role, especially in the production of official statistics, ever since the publication of the landmark paper by Neyman (1934), which laid the theoretical foundations of the design-based approach. Neyman's approach was almost universally accepted by practicing survey statisticians and it also inspired various important theoretical contributions, mostly motivated by practical and efficiency considerations. In this paper I will first provide some highlights of the design-based approach, for handling sampling errors, to demonstrate its significant impact on survey sampling practice, especially on the production of official statistics (Sections 2 and 3.1).

Model-dependent approaches (Section 3.2) that lead to conditional inferences more relevant and appealing than repeated sampling inferences have also been advanced (Brewer, 1963; Royall, 1970). Unfortunately, for large samples such approaches may perform very poorly under model misspecifications; even small model deviations can cause serious problems (Hansen, Madow and Tepping, 1983). On the other hand, model-dependent approaches can play a vital role in small area (domain) estimation, where the area-specific sample sizes are very small or even zero and make the design-based area-specific direct estimation either very unreliable or not feasible. Demand for reliable small area statistics has greatly increased in recent years and to meet this growing demand, federal statistical agencies and other survey organizations are currently paying considerable attention to producing small area statistics using models and methods that can “borrow strength” across areas. Hierarchical Bayes (HB) model-dependent methods are particularly attractive in small area estimation because of their ability to handle complex modeling and provide “exact” inferences on desired parameters (Section 5). I will highlight some HB developments in small area estimation that seem to have a significant impact on survey practice. I will also discuss the role of nonparametric Bayesian methods for inferences, based on large area-specific sample sizes, especially those providing Bayesian inferences that can be also justified under the design-based framework (Section 4.2).

Models are needed, regardless of the approach used, to handle nonsampling errors that include measurement errors, coverage errors and missing data due to nonresponse. In the design-based approach, a combined design and modeling approach is used to minimize the reliance on models, in contrast to fully model-dependent approaches (Section 3.4).

For simplicity, I will focus on descriptive parameters, but survey data are also increasingly used for analytical purposes, in particular, to study relationships and making inferences on model parameters under assumed super-population models. For example, social and health scientists are interested in fitting linear and logistic regression models to survey data and then making inferences on the model parameters taking account of the survey design features (Section 3.3).

2 Design-Based Approach: Early Landmark Contributions

In this section I will highlight some early landmark contributions to the design-based approach that had major impact on survey practice. Prior to Neyman (1934), sampling was implemented either by “balanced” sampling through purposive selection or by probability sampling with equal inclusion probabilities. Such a method was called the “representative method.” Bowley (1926) studied stratified random sampling with proportional sample size allocation, leading to a representative sample with equal inclusion probabilities. Neyman (1934) broke through this restrictive setup by relaxing the condition of equal inclusion probabilities and introducing the ideas of efficiency and optimal sample size allocation in his theory of stratified random sampling. He also demonstrated that balanced purposive sampling may perform poorly if the underlying model assumptions are violated. Neyman proposed normal theory confidence intervals for large samples such that the frequency of errors in the confidence statements based on all possible stratified random samples that could be drawn does not exceed the limit prescribed in advance “whatever the unknown properties of the finite population.” He broadened the definition of representative method by calling any method of sampling that satisfies the above frequency statement as representative. It is interesting to note that Neyman advocated distribution-free design-based inferences for survey sampling in contrast to his own fundamental work on parametric inference, including the Neyman–Pearson theory of hypothesis testing and confidence intervals.
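Neyman's optimal allocation can be made concrete. In its simplest form (equal per-unit costs), it assigns the stratum sample sizes in proportion to $N_h S_h$, the stratum size times the stratum standard deviation. A minimal sketch, with purely illustrative stratum counts and standard deviations:

```python
import numpy as np

def neyman_allocation(n, N_h, S_h):
    """Neyman allocation: stratum sample sizes proportional to
    N_h * S_h (stratum size times stratum standard deviation)."""
    N_h = np.asarray(N_h, float)
    S_h = np.asarray(S_h, float)
    share = N_h * S_h
    return n * share / share.sum()

# Three strata: the second is as large as the first but four times as
# variable, so it receives the bulk of the sample.
alloc = neyman_allocation(100, N_h=[1000, 1000, 500], S_h=[1.0, 4.0, 2.0])
print(alloc.round(2))   # → [16.67 66.67 16.67]
```

Proportional allocation would instead give the first two strata equal sample sizes; the gain from Neyman allocation comes precisely from oversampling the more variable strata.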

The possibility of developing efficient probability sampling designs, by minimizing total cost subject to a specified precision of an estimator or maximizing precision for a given cost while taking account of operational considerations, and of making distribution-free inferences (point estimation, variance estimation and large-sample confidence intervals) through the design-based approach, was soon recognized. This, in turn, led to a significant increase in the number and type of surveys taken by probability sampling and covering large populations. In the early stages, the primary focus was on sampling errors.

I now list a few important post-Neyman theoretical developments in the design-based approach. As early as 1937, Mahalanobis used multistage sampling designs for crop surveys in India. His classic 1944 paper (Mahalanobis, 1944) presents a rigorous theoretical setup and a generalized approach to the efficient design of sample surveys of different crops in Bengal, India, with emphasis on variance and cost functions. Mahalanobis considered a geographical region of finite area and defined a field consisting of “a finite number, say, $N$, of basic cells arranged in a definite space or geographic order together with a single value (or a set of values in the multivariate case) of $y$ for each basic cell,” where $y$ is the variable of interest (say, crop yield). Under this setup, he studied four different probability sampling designs for selecting a sample of cells (called quads): unitary unrestricted, unitary configurational, zonal unrestricted and zonal configurational. In modern terminology, the four designs correspond to simple random sampling, stratified random sampling, single-stage cluster sampling and single-stage stratified cluster sampling, respectively. He developed realistic cost functions depending on particular situations. He also extended the theoretical setup to subsampling of clusters (which he named two-stage sampling). We refer the reader to Murthy (1964) for a detailed account of the 1944 paper and other contributions of Mahalanobis to sample surveys. Hall (2003) provides a scholarly historical account of the pioneering contributions of Mahalanobis to the early development of survey sampling in India. Mahalanobis was instrumental in establishing the National Sample Survey of India and the famous Indian Statistical Institute.

Survey statisticians at the U.S. Census Bureau, under the leadership of Morris Hansen, made fundamental contributions to survey sampling theory and practice during the period 1940–1970, and many of those methods are still widely used in practice. This period is regarded as the golden era of the Census Bureau. Hansen and Hurwitz (1943) developed the basic theory of stratified two-stage cluster sampling with one cluster or primary sampling unit (PSU) within each stratum drawn with probability proportional to a size measure (PPS) and then subsampled at a rate that ensures self-weighting (equal overall probabilities of selection). This method provides approximately equal interviewer workloads, which is desirable in terms of field operations. It can also lead to significant variance reduction by controlling the variability arising from unequal PSU sizes without actually stratifying by size, thus allowing stratification on other variables to further reduce the variance. The Hansen–Hurwitz method, with some modifications, has been widely used for designing large-scale socio-economic, health and agricultural surveys throughout the world. Many large-scale surveys are repeated over time, such as the monthly Current Population Survey (CPS), and rotation sampling with partial replacement of ultimate units (e.g., households) is used to reduce response burden. Hansen et al. (1955) developed simple but efficient composite estimators under rotation sampling in the context of stratified multistage sampling. Rotation sampling and composite estimation are widely used in large-scale surveys.

Prior to the 1950s, the primary focus was on estimating totals, means and ratios for the whole population and large planned subpopulations such as US states or provinces in Canada. Woodruff (1952) developed a unified design-based approach for constructing confidence intervals on quantiles using only the estimated distribution function and the associated standard error. This ingenious method is applicable to general probability sampling designs and performs well in terms of coverage probabilities in many cases. Woodruff intervals can also be used to obtain standard errors of estimated quantiles (Rao and Wu, 1987; Francisco and Fuller, 1991). Because of those features, the Woodruff method had a significant impact on survey practice. However, the method should not be treated as a black box for constructing confidence intervals on quantiles, because it can perform poorly in some practical situations. For example, it performed very poorly under stratified random sampling when the population is stratified by a concomitant variable highly correlated with the variable of interest (Kovar, Rao and Wu, 1988). The failure of the Woodruff method in this case stems from the fact that the standard error of the estimated distribution function at the quantile will be too small due to zero contributions to the standard error from most strata. Kovar, Rao and Wu (1988) showed that the bootstrap method for stratified random sampling performs better than the Woodruff method in this case, but in other situations the Woodruff method is better.
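The Woodruff construction can be sketched in a few lines: form a normal theory interval for the distribution function at the estimated quantile, then invert the estimated (weighted) distribution function at the two endpoints. A minimal illustration, assuming simple random sampling with equal weights so that the standard error of the estimated CDF can be approximated by $\sqrt{p(1-p)/n}$; for general designs the `se_F` argument would be a design-based standard error supplied by the user:

```python
import numpy as np

def woodruff_interval(y, w, p, se_F, z=1.96):
    """Woodruff (1952) interval for the p-th population quantile:
    build a normal interval for the distribution function at the
    quantile, then invert the estimated weighted CDF at its ends."""
    y = np.asarray(y, float)
    w = np.asarray(w, float)
    order = np.argsort(y)
    y_s, cdf = y[order], np.cumsum(w[order]) / w.sum()

    def invert(prob):
        idx = np.searchsorted(cdf, np.clip(prob, 0.0, 1.0))
        return y_s[min(idx, len(y_s) - 1)]

    return invert(p - z * se_F), invert(p + z * se_F)

# Illustration: equal-weight sample of size 400 from a skewed population.
rng = np.random.default_rng(1)
y = rng.lognormal(size=400)
w = np.ones_like(y)
lo, hi = woodruff_interval(y, w, p=0.5, se_F=np.sqrt(0.5 * 0.5 / 400))
```

As the text cautions, this sketch is not a black box: under stratification by a concomitant variable highly correlated with $y$, the plug-in standard error of the estimated CDF can be far too small.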

Attention was also given to inferences for unplanned subpopulations (also called domains) such as age–sex groups within a state. Hartley (1959) and Durbin (1958) developed simple, unified theories for domain estimation applicable to general designs, requiring only existing formulae for population totals and means.

After the consolidation of basic design-based sampling theory, Hansen et al. (1951) and others paid attention to measurement errors in surveys. They developed basic theories under additive measurement error models with minimal model assumptions on the observed responses treated as random variables. Total variance of an estimator is decomposed into sampling variance, simple response variance and correlated response variance (CRV) due to interviewers. The CRV was shown to dominate the total variance when the number of interviewers is small, and the 1950 U.S. Census interviewer variance validation study showed that this component is indeed large for small areas. Partly for this reason, self-enumeration by mail was first introduced in the 1960 U.S. Census to reduce the CRV component. Earlier, Mahalanobis (1946) developed the method of interpenetrating subsamples (called replicated sampling by Deming, 1960) and used it extensively in large-scale surveys in India for assessing both sampling and interviewer errors. By assigning the subsamples at random to interviewers, the total variance can be estimated and interviewer differences assessed.
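The interpenetrating-subsample idea lends itself to a very short computation: with $k$ independent subsamples each yielding an unbiased estimate of the same total, the between-subsample spread estimates the total variance (sampling plus interviewer variability, when subsamples are randomly assigned to interviewers). A minimal sketch with illustrative numbers:

```python
import numpy as np

def interpenetrating_variance(estimates):
    """Combine k independent subsample estimates of the same total and
    estimate the variance of their mean from between-subsample spread."""
    est = np.asarray(estimates, float)
    k = est.size
    combined = est.mean()
    var = np.sum((est - combined) ** 2) / (k * (k - 1))
    return combined, var

# Four interpenetrating subsamples, each estimating the same total.
combined, var = interpenetrating_variance([102.0, 98.0, 101.0, 99.0])
# combined = 100.0, var = 10/12 ≈ 0.833
```

No knowledge of the within-subsample design is needed, which is what made the method so convenient in the early Indian surveys.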

It should be clear from the above brief description of early developments that much of the basic sampling theory was developed by official statisticians or those closely associated with official statistics. Theory was driven by the need to solve real problems and often theory was not challenging enough to attract academic researchers to survey sampling. As a result, university researchers paid little attention to survey sampling in those days with few exceptions (e.g., Iowa State University under the leadership of Cochran, Jessen and Hartley).

3 Some Recent Design-Based and Non-Bayesian Developments

3.1 Model-Assisted Approach

We first give a brief account of the model-assisted approach, which uses a working model to find efficient estimators; the associated inferences, however, are design-based. Consider a finite population $U$ consisting of $N$ elements labeled $1, \ldots, N$ with associated values $y_1, \ldots, y_N$ of a variable of interest $y$. Under a probability sampling design, the inclusion probabilities $\pi_i$ are all strictly positive and a basic estimator of the total $Y = \sum_{i \in U} y_i$ is of the form $\hat{Y} = \sum_{i \in s} d_i y_i$, where $s$ denotes a sample and $d_i = 1/\pi_i$ are the so-called design weights (Horvitz and Thompson, 1952; Narain, 1951). For example, in the Neyman stratified random sampling design, the design weights are equal to the inverse of the sampling fractions within strata and vary across strata, while in the Hansen et al. two-stage cluster sampling design, the design weights are all equal. Design unbiasedness of estimators is not insisted upon (contrary to statements in some papers on inferential issues of sampling theory) because it “often results in much larger MSE than necessary” (Hansen, Madow and Tepping, 1983). Instead, design consistency is deemed necessary for large samples. Strategies (design and estimation) that appear reasonable are entertained (accounting for costs) and their relative properties are carefully studied by analytical and/or empirical methods, mainly through the comparison of mean squared error (MSE) or anticipated MSE under plausible population models on the variables treated as random variables. This is essentially the basis of the repeated sampling (or design-based) approach.

In recent years, a model-assisted repeated sampling approach has received the attention of survey practitioners. In this approach, a working population model is used to find efficient design-consistent estimators. For example, suppose the working model is a linear regression model of the form

$y_i = \mathbf{x}_i' \boldsymbol{\beta} + \varepsilon_i, \quad i \in U, \qquad (1)$

with model errors $\varepsilon_i$ assumed to be uncorrelated with mean zero and variance proportional to a known constant $c_i$, where $\mathbf{x}_i$ is a vector of auxiliary variables with known population total $\mathbf{X} = \sum_{i \in U} \mathbf{x}_i$. Under model (1), the best linear unbiased estimator (BLUE) of the model parameter $\boldsymbol{\beta}$, based on the census values $\{(y_i, \mathbf{x}_i)\colon i \in U\}$, is given by the “census” regression coefficient

$\mathbf{B} = \Bigl(\sum_{i \in U} \mathbf{x}_i \mathbf{x}_i'/c_i\Bigr)^{-1} \sum_{i \in U} \mathbf{x}_i y_i/c_i.$

A predictor $\hat{y}_i = \mathbf{x}_i' \hat{\mathbf{B}}$ of $y_i$ under the working model is then given for each $i \in U$, where $\hat{\mathbf{B}}$ is the design-weighted estimator of $\mathbf{B}$:

$\hat{\mathbf{B}} = \Bigl(\sum_{i \in s} d_i \mathbf{x}_i \mathbf{x}_i'/c_i\Bigr)^{-1} \sum_{i \in s} d_i \mathbf{x}_i y_i/c_i.$

By writing the total as $Y = \sum_{i \in U} \hat{y}_i + \sum_{i \in U} e_i$, where $e_i = y_i - \hat{y}_i$ denotes the prediction error, a design-based estimator of $Y$ is given by $\sum_{i \in U} \hat{y}_i + \sum_{i \in s} d_i e_i$. We can express this estimator as a generalized regression (GREG) estimator

$\hat{Y}_{\mathrm{GREG}} = \hat{Y} + (\mathbf{X} - \hat{\mathbf{X}})' \hat{\mathbf{B}}, \qquad (2)$

where $\hat{\mathbf{X}} = \sum_{i \in s} d_i \mathbf{x}_i$ (Särndal, Swensson and Wretman, 1992). The GREG estimator (2) is design-consistent under certain regularity conditions, regardless of the validity of the working model (Robinson and Särndal, 1983), provided the total $\mathbf{X}$ is precisely correct. If the working model provides a good fit to the data, then the residuals $e_i$ should be less variable than the response values $y_i$ and the GREG estimator is likely to be significantly more efficient than the basic design-weighted estimator $\hat{Y}$.

The estimator (2) may also be expressed as a weighted sum $\sum_{i \in s} w_i y_i$, where $w_i = d_i g_i$ with

$g_i = 1 + (\mathbf{X} - \hat{\mathbf{X}})' \Bigl(\sum_{j \in s} d_j \mathbf{x}_j \mathbf{x}_j'/c_j\Bigr)^{-1} \mathbf{x}_i/c_i. \qquad (3)$

The adjustment factors $g_i$, popularly known as the $g$-weights, ensure the calibration property $\sum_{i \in s} w_i \mathbf{x}_i = \mathbf{X}$, so that the GREG estimator, when applied to the sample values $\mathbf{x}_i$, agrees with the known total $\mathbf{X}$. This property is attractive to the user when the vector $\mathbf{x}_i$ contains user-specified totals.
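The $g$-weight form of the GREG estimator and its calibration property can be checked directly in a few lines. A minimal sketch, assuming equal design weights, unit variance constants $c_i$, and a scalar auxiliary variable with an intercept; all numbers are illustrative:

```python
import numpy as np

def greg_weights(d, x, X, c=None):
    """GREG g-weights: w_i = d_i * g_i with
    g_i = 1 + (X - X_hat)' T^{-1} x_i / c_i and
    T = sum over the sample of d_i x_i x_i' / c_i."""
    d = np.asarray(d, float)          # design weights, length n
    x = np.asarray(x, float)          # n x p matrix of auxiliary variables
    X = np.asarray(X, float)          # p known population totals
    c = np.ones(len(d)) if c is None else np.asarray(c, float)
    X_hat = (d[:, None] * x).sum(axis=0)      # design-weighted estimate of X
    T = (x * (d / c)[:, None]).T @ x          # sum d_i x_i x_i' / c_i
    g = 1.0 + (x / c[:, None]) @ np.linalg.solve(T, X - X_hat)
    return d * g

# Toy example: n = 5 units drawn from N = 100 with equal design weights.
d = np.full(5, 20.0)
x = np.column_stack([np.ones(5), [1.0, 2.0, 3.0, 4.0, 5.0]])
X = np.array([100.0, 260.0])     # known N and known total of the x-variable
y = np.array([2.0, 4.1, 5.9, 8.2, 10.0])
w = greg_weights(d, x, X)
# Calibration property: the weights reproduce the known totals exactly.
print((w[:, None] * x).sum(axis=0))   # → [100. 260.]
Y_greg = (w * y).sum()
```

The same weights $w_i$ are then applied to any response variable, which is the operational convenience stressed in the calibration literature.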

The assumption of a working linear regression model (1) can be relaxed by adopting more flexible working models. For example, Breidt, Claeskens and Opsomer (2005) proposed a nonparametric model-assisted approach based on a penalized spline (P-spline) regression working model and showed that the resulting estimators are design-consistent and more efficient than the usual GREG estimators based on linear regression working models when the latter are incorrectly specified. Also, the P-spline model-assisted estimators were shown to be approximately as efficient as the GREG estimators when the linear regression working model is correctly specified. The P-spline approach can be easily implemented using existing estimation packages for GREG because the underlying model is closely related to a linear regression model. It offers a wider scope for the model-assisted approach because it makes minimal assumptions on the regression of $y$ on $\mathbf{x}$ without assuming a specific parametric form.

Under the model-assisted approach, design-consistent variance estimators are obtained either by a Taylor linearization method or by a resampling method (when applicable), provided the probability sampling design ensures strictly positive joint inclusion probabilities $\pi_{ij}$, $i \neq j$. Using the estimator and associated standard error, asymptotically valid normal theory intervals are obtained regardless of the validity of the working model.

Most large-scale surveys are multipurpose and observe multiple variables of interest, and the same working model may not hold for all the variables of interest. In that case, a model-assisted approach may lead to possibly different working models and hence different calibration weights associated with the variables; that is, the calibration weights are of the form $w_i(y)$ associated with the variable $y$ and sample unit $i$. However, survey users prefer to use a common weight $w_i$ for all variables of interest. This is often accomplished by minimizing a suitable distance measure between $w_i$ and $d_i$ for $i \in s$ subject to user-specified calibration constraints, say, $\sum_{i \in s} w_i \mathbf{z}_i = \mathbf{Z}$, without appealing to any working model, where $\mathbf{Z}$ is the vector of known totals associated with the user-specified variables $\mathbf{z}_i$. For example, a chi-squared distance measure $\sum_{i \in s} (w_i - d_i)^2/d_i$ leads to common calibration weights $w_i = d_i g_i$, where $g_i$ is given by (3) with $\mathbf{x}_i$ replaced by $\mathbf{z}_i$ (Deville and Särndal, 1992). Thus, calibration estimation in this case corresponds to using model-assisted estimation based on a linear regression model (1) with $\mathbf{z}_i$ as the vector of predictor variables. Calibration estimation has attracted the attention of users due to its ability to produce common calibration weights and accommodate an arbitrary number of user-specified calibration (or benchmark) constraints, for example, calibration to the marginal counts of several post-stratification variables. Several national statistical agencies have developed software designed to compute calibration weights: GES (Statistics Canada), LIN WEIGHT (Statistics Netherlands), CALMAR (INSEE, France) and CLAN97 (Statistics Sweden). Särndal (2007) says, “Calibration has established itself as an important methodological instrument in large-scale production of statistics.” Brakel and Bethlehem (2008) noted that the use of common calibration weights for estimation in multipurpose surveys makes the calibration method “very attractive to produce timely official releases in a regular production environment.”
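Calibration to the marginal counts of two post-stratification variables is a common special case. The sketch below uses iterative proportional fitting (raking), which corresponds to a multiplicative distance measure rather than the chi-squared distance discussed above, but satisfies the same benchmark constraints; the sample and margins are purely illustrative:

```python
import numpy as np

def rake(d, rows, cols, row_totals, col_totals, iters=50):
    """Iterative proportional fitting: adjust design weights d so the
    weighted counts match known margins of two categorical variables."""
    w = np.asarray(d, float).copy()
    rows, cols = np.asarray(rows), np.asarray(cols)
    for _ in range(iters):
        for r, total in enumerate(row_totals):
            mask = rows == r
            w[mask] *= total / w[mask].sum()
        for c, total in enumerate(col_totals):
            mask = cols == c
            w[mask] *= total / w[mask].sum()
    return w

# Toy sample of 6 units: sex (0/1) and age group (0/1), equal design weights.
d = np.full(6, 10.0)
sex = np.array([0, 0, 0, 1, 1, 1])
age = np.array([0, 1, 1, 0, 0, 1])
w = rake(d, sex, age, row_totals=[30.0, 40.0], col_totals=[35.0, 35.0])
# Weighted margins now reproduce the benchmark counts.
print(round(w[sex == 0].sum(), 6), round(w[age == 0].sum(), 6))  # → 30.0 35.0
```

A single set of raked weights then serves every response variable in the file, which is the operational point made in the text.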

Unfortunately, the model-free calibration approach can lead to erroneous inferences for some of the response variables, even in fairly large samples, if the underlying working linear regression model uses an incorrect or incomplete set of auxiliary variables, unlike the model-assisted approach that uses a working model obtained after some model checking. For example, suppose that the underlying model is a quadratic regression of $y$ on $x$ and the distribution of $x$ is highly skewed. Also, suppose that the user-specified calibration constraints are the known population size $N$ and the known population total $X$. In this case, the calibration estimator of the total under simple random sampling is the familiar simple linear regression estimator with $x$ as the predictor variable. On the other hand, a model-assisted estimator under the quadratic regression working model is given by a multiple linear regression estimator with $x$ and $x^2$ as the predictor variables, assuming the total of $x^2$ is also known. Rao, Jocelyn and Hidiroglou (2003) demonstrated that the coverage performance of the normal theory interval associated with the calibration estimator is poor even in fairly large samples, unlike the coverage performance of the normal theory intervals associated with the model-assisted estimator. The coverage performance depends on the skewness of the residuals from the fitted model; in the case of calibration estimation, the skewness of the residuals after fitting the simple linear regression remains large, whereas the skewness of the residuals after fitting the quadratic regression is small even if $x$ and $y$ are highly skewed. This simple example demonstrates that the population structure does matter in design-based inferences and that it should be taken into account through a model-assisted approach based on suitable working models.
But the model-assisted approach has the practical limitation that the weights $w_i$ may vary across variables in surveys with multiple variables of interest, unlike in the calibration approach. Also, for complex working models, such as the P-spline, all the population values of the predictor variables should be known in order to implement the model-assisted approach.

The model-assisted approach is essentially design-based, unlike the model-dependent approach (Section 3.2) that can provide conditional inferences referring to the particular sample, $s$, of units selected. Such conditional inferences may be more relevant and appealing than the unconditional repeated sampling inferences used in the design-based approach.

3.2 Model-Dependent Approach

The frequentist model-dependent approach to inference assumes that the population structure obeys a specified population model and that the same model holds for the sample, that is, that there is no sample selection bias with respect to the assumed population model. Sampling design features are often incorporated into the model to reduce or eliminate the sample selection bias (see Section 3.3 for some difficulties in implementing this in practice). Typically, distributional assumptions are avoided by focusing on point estimation, variance estimation and associated normal theory confidence intervals, as in the case of design-based inferences. As a result, the models used specify only the mean function and the variance function of the variable of interest $y$. We refer the reader to Valliant, Dorfman and Royall (2000) for an excellent account of the model-dependent approach.

As noted in Section 1, model-dependent strategies may perform poorly in large samples when the population model is not correctly specified; even small deviations from the assumed model that are not easily detectable through routine model checking can cause serious problems. In the Hansen, Madow and Tepping (1983) example of an incorrectly specified population model, the best linear unbiased prediction (BLUP) estimator of the mean is not design-consistent under their stratified simple random sampling design with near-optimal sample allocation (commonly used to handle highly skewed populations such as business populations). As a result, model-dependent confidence intervals exhibited poor performance: coverage was around 70%, compared to the nominal level of 95%, while the coverage for model-assisted intervals was 94.4%. To get around this difficulty, Little (1983) proposed restricting attention to models that hold for the sample and for which the BLUP estimator is design-consistent. For example, in the Hansen, Madow and Tepping (1983) example, the BLUP estimator of the mean under a model with means differing across strata is identical to the traditional stratified mean, which is design-consistent. But it seems not possible even to find a suitable model under which the widely used combined ratio estimator of the mean under stratified random sampling is the BLUP estimator. The combined ratio estimator is a model-assisted estimator under a ratio working model with a common slope. It allows a large number of strata with few sample units from each stratum and yet remains design-consistent, unlike the separate ratio estimator, which is the BLUP estimator under a ratio model with separate slopes across strata: $E(y_{hi}) = \beta_h x_{hi}$, $V(y_{hi}) \propto x_{hi}$, where $y_{hi}$ and $x_{hi}$ denote the values of the variable of interest and an auxiliary variable for the $i$th unit in stratum $h$.
Moreover, the BLUP estimator under this model requires knowledge of the strata population means $\bar{X}_h$, whereas the combined ratio estimator requires only the overall population mean $\bar{X}$.
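The contrast between the combined and separate ratio estimators can be made concrete. A minimal sketch with hypothetical stratum summaries; note that the separate form needs the known stratum means of $x$, while the combined form needs only the overall mean:

```python
import numpy as np

def combined_ratio_mean(N_h, ybar_h, xbar_h, Xbar):
    """Combined ratio estimator of the population mean: the ratio of the
    stratified means of y and x, scaled by the overall known mean of x."""
    N_h = np.asarray(N_h, float)
    W = N_h / N_h.sum()                      # stratum weights
    return (W @ ybar_h) / (W @ xbar_h) * Xbar

def separate_ratio_mean(N_h, ybar_h, xbar_h, Xbar_h):
    """Separate ratio estimator: per-stratum ratios scaled by the known
    stratum means of x (which must all be available)."""
    N_h = np.asarray(N_h, float)
    W = N_h / N_h.sum()
    return W @ (np.asarray(ybar_h) / np.asarray(xbar_h) * np.asarray(Xbar_h))

# Two strata with illustrative sample means and known x-means.
N_h = [600.0, 400.0]
ybar_h = [10.0, 20.0]
xbar_h = [5.0, 8.0]
Xbar_h = [5.5, 7.5]                          # known stratum means of x
Xbar = 0.6 * 5.5 + 0.4 * 7.5                 # overall known mean of x
est_c = combined_ratio_mean(N_h, ybar_h, xbar_h, Xbar)
est_s = separate_ratio_mean(N_h, ybar_h, xbar_h, Xbar_h)
```

With many strata and only a handful of units per stratum, the per-stratum ratios in the separate form become unstable, which is why the combined form is preferred in that setting.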

It is also not clear how one proceeds to formulate suitable models for general sampling designs that lead to design-consistent BLUP estimators. Further, the main focus has been on point estimation and it is not clear how one should proceed with variance estimation and setting up confidence intervals that have repeated sampling validity. In this context, Pfeffermann (2008) says, “I presume that these are supposed to be computed under the corrected model as well. Are we guaranteed that they are sufficiently accurate under the model? Do we need to robustify them separately?” Little (2008), in his rejoinder to Pfeffermann’s comment, says that he advocates using some replication method for variance estimation and then appealing to normal approximation for confidence intervals. Clearly, further work is needed to address the above issues. Note that if parametric assumptions are made, such as normality of model errors, then it is possible to make exact Bayesian inferences by introducing suitable priors on the model parameters (Section 4).

Some recent work on the model-dependent approach focused on avoiding misspecification of the mean function by using P-spline models. Zheng and Little (2003, 2005) studied single-stage PPS sampling using a P-spline model, based on the size measure used in PPS sampling, to represent the regression function, and a specified function of the size measure as the variance function. In a simulation study, they compared the performance of the usual linear GREG estimator and the P-spline model-based estimators, not necessarily design-consistent, and showed that the P-spline model-based estimators are generally more efficient than the GREG or the NHT estimator in terms of design MSE, even for large samples. However, the simulation study did not consider model-assisted estimators corresponding to their P-spline model. The simulations also showed that the design bias of their P-spline estimators is minor, even though the estimators are not design-consistent, and hence the authors conclude that “design consistency may not be of paramount importance.” On the other hand, in the Breidt, Claeskens and Opsomer (2005) simulation study, their model-assisted P-spline estimator is sometimes much better, and never worse, than the corresponding P-spline model-based estimator under stratified random sampling. The latter estimator is not design-consistent under the P-spline model considered by Breidt, Claeskens and Opsomer (2005).

As noted above, a main advantage of the frequentist model-dependent approach is that it leads to inferences conditional on the selected sample of units, $s$, unlike the unconditional design-based approach. However, it is possible to develop a conditional model-assisted approach that allows us to restrict the reference set of samples to a “relevant” subset of all possible samples specified by the design. Conditionally valid inferences for large samples can then be obtained. Rao (1992) and Casady and Valliant (1993) developed an “optimal” linear regression estimator that is asymptotically valid under the conditional setup.

We refer the reader to Kalton (2002) for compelling arguments for favoring design-based approaches (possibly model-assisted and/or conditional) to handle sampling errors. Smith (1994) named the traditional repeated sampling inference as “procedural inference” and argued that procedural inference is the correct approach for surveys in the public domain.

3.3 Analysis of Complex Survey Data

Data collected from large-scale socio-economic, health and other surveys are being extensively used for analysis purposes, such as inferences on the regression parameters of linear and logistic regression population models. Ignoring the survey design features and using standard methods can lead to erroneous inferences on model parameters because of sample selection bias caused by informative sampling. It is tempting to expand the models by including among the predictors all the design variables that define the selection process at the various levels and then ignore the design and apply standard methods to the expanded model. The main difficulties with this approach, advocated by some leading researchers, are the following, among others (Pfeffermann and Sverchkov, 2003): (1) Not all design variables may be known or accessible to the analyst. (2) Too many design variables can lead to difficulties in making inferences from the expanded models. (3) The expanded model may no longer be of scientific interest to the analyst.

The design-based approach can provide asymptotically valid repeated sampling inferences without changing the analyst's model. A unified approach based on survey-weighted estimating equations leads to design-consistent estimators of the “census” or finite population parameters, which in turn estimate the associated model parameters. Further, using resampling methods for variance estimation, such as the jackknife and the bootstrap for survey data, asymptotically valid design-based inferences on the census parameters can be implemented. The same methods may also be applicable for inference on the model parameters in many large-scale surveys. In other cases, it is necessary to estimate the model variance of the census parameters from the sample; the estimate of the total variance is then given by the sum of this estimate and the resampling variance estimate.

In practice, the data file would contain for each sampled unit the variables of interest and predictor variables, final weights after adjustment for unit nonresponse, and the corresponding replication weights, for example, bootstrap weights. The analyst can use software that handles survey weights (such as SAS) to obtain point estimates from the final weights and the corresponding point estimates for each bootstrap replicate using the bootstrap weights. The variability of the bootstrap point estimates provides asymptotically valid standard errors for designs commonly used in large-scale surveys. Details of the methods are not provided due to space limitations, but the reader is referred to Rao (2005, Section 6) for a succinct account of the analysis of survey data using resampling methods for variance estimation and normal theory confidence intervals. The design-based approach using resampling methods is extensively used in practice, and software is also available (e.g., WesVar, Stata).
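The replicate-weight workflow just described can be sketched as follows. The bootstrap weight columns here are generated as a simplified stand-in for with-replacement resampling of units under equal weighting; real survey files ship precomputed columns that reflect the full stratified multistage design:

```python
import numpy as np

def replicate_se(theta_full, theta_reps):
    """Bootstrap standard error: root mean squared deviation of the
    replicate estimates from the full-sample estimate."""
    theta_reps = np.asarray(theta_reps, float)
    return np.sqrt(np.mean((theta_reps - theta_full) ** 2))

def weighted_mean(y, w):
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(7)
n, B = 200, 500
y = rng.gamma(2.0, 1.0, size=n)          # variable of interest
w = np.full(n, 50.0)                     # final survey weights
# Simplified bootstrap weight columns: final weight times the number of
# times each unit is drawn (with replacement) in each replicate.
counts = rng.multinomial(n, np.full(n, 1.0 / n), size=B).T   # n x B
rep_w = w[:, None] * counts

theta = weighted_mean(y, w)
theta_b = [weighted_mean(y, rep_w[:, b]) for b in range(B)]
se = replicate_se(theta, theta_b)
ci = (theta - 1.96 * se, theta + 1.96 * se)
```

The same replicate columns serve any estimator computed from the weights, which is why public-use files distribute replication weights rather than design details.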

The design-based approach has also been applied to make inferences on the regression parameters and the variance parameters of multilevel models from data obtained from multistage sampling designs corresponding to the levels of the model. For example, in an education study of students, schools (first-stage units or clusters) may be selected with probabilities proportional to school size and students (second-stage units) within selected schools by stratified random sampling. Again, ignoring the sampling design and using traditional methods for multilevel models can lead to erroneous inferences in the presence of sample selection bias. In the design-based approach, estimation of the variance parameters of the model is more difficult than that of the regression parameters, and the necessary information for estimating variance parameters is often not provided in public-use data files, which typically report only the final weight for each sample unit. Design-based methods that require the weights within sampled clusters, in addition to the weights associated with the clusters, have been proposed in the literature (e.g., Pfeffermann et al., 1998; Rabe-Hesketh and Skrondal, 2006) to handle variance parameters. Some of those methods can be implemented using the Stata program gllamm. Unfortunately, the resulting estimators of variance parameters may not be design-model consistent when the sample sizes within clusters are small, even for two-level linear models. Korn and Graubard (2003) demonstrated the bias problem and proposed a different method for simple two-level or three-level models involving only a common mean as the fixed effect. This method first obtains the census parameters and then estimates those parameters. It worked well in empirical studies even for small within-cluster sample sizes.
Rao, Verret and Hidiroglou (2010) proposed a weighted estimating equations (WEE) approach for general two-level linear models that uses within-cluster joint inclusion probabilities, similar to Korn and Graubard (2003). The WEE method leads to design-model consistent estimators of variance parameters even for small within-cluster sample sizes, provided the number of sample clusters is large. It performed well in empirical studies compared to the other methods proposed in the literature. Rao, Verret and Hidiroglou (2010) also proposed a unified approach based on a weighted log-composite likelihood that can handle generalized linear multilevel models and small within-cluster sample sizes. This method is currently under investigation.

A drawback of the design-based approach to the analysis of survey data is that it may lead to loss in efficiency when the final weights vary considerably across the sampling units. Alternative approaches that can reduce the variability of the weights and thus lead to more efficient estimators have also been proposed (e.g., Pfeffermann and Sverchkov, 2003; Fuller, 2009, Chapter 6). We refer the reader to Pfeffermann (1993) and Rao et al. (2010) for overviews on the role of sampling weights in the analysis of survey data.

3.4 Nonsampling Errors

Survey practitioners have to rely on models, regardless of the approach used, to handle nonsampling errors that include measurement errors, coverage errors and missing data due to unit nonresponse and item nonresponse. In the design-based approach, a combined design and modeling approach is used to minimize the reliance on models, in contrast to fully model-dependent approaches that will have similar difficulties noted in the previous subsections. As mentioned in Section 1, Hansen et al. (1951) studied measurement errors under minimal model assumptions on the observed responses treated as random variables, and their discovery that the correlated response variance due to interviewers dominates the total variance when the number of interviewers is small led to the adoption of self enumeration by mail in the 1960 U.S. Census.

Inference in the presence of missing survey data, particularly item nonresponse, has attracted a lot of attention; see Little and Rubin (2002) for an excellent account of missing data methods. To handle item nonresponse, imputation of missing data is often used because of its practical advantages. In the design-based approach, traditional weighted estimators of a total or a mean are computed from the completed data set, leading to an imputed estimator. Often imputed values are generated from an imputation model assumed to hold for the respondents under a missing at random (MAR) response mechanism. Under this setup, the imputed estimator is unbiased or asymptotically unbiased under the combined design and model setup. Reiter, Raghunathan and Kinney (2006) demonstrated the importance of incorporating sampling design features into the imputation model in order to make the model hold for the sample and then for the respondents under the assumed MAR response mechanism. An alternative approach avoids imputation models but assumes a model for the response mechanism. For example, a popular method consists of forming imputation classes (according to the values of estimated response probabilities under a specified response model) and assuming that the missing values are missing completely at random (MCAR) within classes. The missing values are then imputed by selecting donors at random from the observed values within classes. It may also be possible to develop imputation methods that make an imputed estimator doubly robust in the sense that it is valid either under an assumed imputation model or under an assumed response mechanism (e.g., see Haziza and Rao, 2006). Doubly robust estimation has attracted considerable attention in the nonsurvey literature (see, e.g., Cao, Tsiatis and Davidian, 2009).

Variance estimation under imputation for missing survey data has attracted a lot of attention because treating the imputed values as if observed and then applying standard variance formulae can often lead to serious underestimation, because the additional variability due to estimating the missing values is not taken into account. Methods that can lead to asymptotically valid variance estimators under single imputation for missing data have been proposed under the above setups. We refer the reader to Kim and Rao (2009) for a unified approach to variance estimation under the imputation model approach, and to Haziza (2009) for an excellent overview of imputation for survey data and associated methods for inference. Rubin (1987) proposed multiple imputation to account for the underestimation when applying standard formulae treating the imputed values as if observed. Under this approach, $m$ ($\ge 2$) imputed values are generated for each missing item, leading to $m$ completed data sets. Rubin recommends the use of traditional design-based estimators and variance estimators, computed from each of the $m$ completed data sets, although multiple imputation ideas are based on a Bayesian perspective: "We restrict attention to standard scientific surveys and standard complete data statistics" (Rubin, 1987, page 113). The multiple imputation estimator $\hat{Y}_{MI}$ of a total is taken as the average of the $m$ estimators $\hat{Y}_{(k)}$, and its estimator of variance is given by $v_{MI} = \bar{v}_m + (1 + m^{-1}) b_m$, where $\bar{v}_m$ is the average of the $m$ naïve variance estimators $v_{(k)}$ and $b_m = (m-1)^{-1} \sum_{k=1}^{m} (\hat{Y}_{(k)} - \hat{Y}_{MI})^{2}$. Rubin gives design-based conditions for "proper imputation" that ensure the repeated sampling validity of the estimator and the associated variance estimator under a posited response mechanism. Unfortunately, there are some difficulties in developing imputation methods satisfying Rubin's conditions for "proper imputation" with complex survey data (see, e.g., Kim et al., 2006).
Nevertheless, multiple imputation shows how Bayesian ideas can be integrated to some extent with the traditional design-based approach that is widely used in practice.
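Rubin's combining rules take the average of the $m$ completed-data point estimates and add the between-imputation variance, inflated by $(1 + 1/m)$, to the average naïve (within-imputation) variance. A minimal sketch, with names of our choosing:

```python
# Sketch of Rubin's (1987) multiple-imputation combining rules for a
# scalar estimate computed from each of the m completed data sets.

def rubin_combine(estimates, variances):
    m = len(estimates)
    y_mi = sum(estimates) / m                                # MI point estimate
    v_bar = sum(variances) / m                               # average naive variance
    b_m = sum((e - y_mi) ** 2 for e in estimates) / (m - 1)  # between-imputation
    v_mi = v_bar + (1.0 + 1.0 / m) * b_m
    return y_mi, v_mi
```

Each input estimate and naïve variance would be produced by the usual design-based formulae applied to one completed data set.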

4 Bayesian Approaches

This section provides an account of both parametric and nonparametric Bayesian (and pseudo-Bayesian) approaches to inference from survey data, focusing on descriptive finite population parameters.

4.1 Parametric Bayesian Approach

As noted in Section 3.2, the frequentist model-dependent approach mostly avoided distributional assumptions by specifying only the mean function and the variance function of the variable of interest. Under a specified distribution on the assumed model, Bayesian inferences can be easily implemented, provided the model holds for the sample. Royall and Pfeffermann (1982) studied Bayesian inference on the population mean assuming normality and flat (diffuse) priors on the parameters of a linear regression model. Their focus was on the posterior mean and the posterior variance and, hence, their results were similar to those of Royall (1970) obtained without the normality assumption and priors on model parameters. However, exact credible intervals on the mean and other parameters of interest can be obtained conditional on the observed data, using a parametric Bayesian setup. It can be implemented even under complex modeling, using powerful Markov chain Monte Carlo (MCMC) methods to simulate samples from the posterior distributions of interest. Scott and Smith (1969) obtained the posterior mean and the posterior variance of the population mean under linear models with random effects, normality and diffuse priors on the model parameters. Their posterior mean is also the BLUP estimator without the normality assumption when the variance parameters of the model are known. In the frequentist approach, estimates of the variance parameters are substituted in the BLUP to get the empirical BLUP (EBLUP) estimator, which is different from (but close to) the posterior mean. However, the Bayesian approach also provides the posterior variance, which is typically different from the estimated mean squared prediction error (MSPE) of the EBLUP estimator; several different methods of estimating MSPE have been proposed in the context of small area estimation (Rao, 2003, Chapter 7).
A simulation study by Bellhouse and Rao (1986) showed that any gain in efficiency of the posterior mean (or the BLUP) over traditional design-based estimators is likely to be small in practice. However, by regarding the clusters as small areas of interest, the Scott–Smith approach provides models linking the small areas and the resulting estimators of small area means can lead to significant efficiency gains over direct area-specific estimators. Random cluster effect models are now extensively used to construct efficient small area estimators by “borrowing strength” across small areas using auxiliary information (Section 5). It may be noted that the empirical Bayes (EB) approach to inference from random cluster effect models is similar to EBLUP, but it can handle general parametric random cluster effect models and does not require the linearity assumption used in the BLUP method. The EB approach is essentially frequentist, unlike the Bayesian approach that requires the specification of priors on the model parameters. It may be more appropriate to name “empirical Bayes” as “empirical best” without changing the abbreviation EB (Jiang and Lahiri, 2006).

Sedransk (1977) studied regression models with random slopes, using a prior distribution on the slopes. He then followed the Scott and Smith (1969) approach and obtained the posterior mean and posterior variance of the finite population total. He applied the method to data on banks from the U.S. Federal Reserve Board to estimate a current monetary total, making use of extensive historical data to specify the prior parameters and thus arrive at an informative prior, which in turn leads to more efficient posterior inferences compared to those based on a noninformative prior, provided the informative prior is correctly specified. Malec and Sedransk (1985) extended the Scott–Smith results to three-stage sampling. Nandram, Sedransk and Smith (1997) applied the Bayesian approach to obtain order-restricted estimators of the age composition of a population of Atlantic cod, using MCMC methods. Sedransk (2008) lists possible uses of parametric Bayesian methods for sample surveys, including the above application to estimation from establishment surveys, "optimal" sample allocation and small area estimation from data pooled from independent surveys (see Section 5).

Pfeffermann, Moura and Silva (2006) report an interesting application of the Bayesian approach to make inferences from multilevel models under informative sampling. In this case, the multilevel sample model induced by informative sampling is more complicated than the corresponding population model and, as a result, frequentist methods are difficult to implement. On the other hand, the authors show that the Bayesian approach, using noninformative priors on the model parameters indexing the sample model and applying MCMC methods, is efficient and convenient for handling such complex sample models, although computer intensive. This application is an example where the Bayesian approach offers computational advantage over the corresponding frequentist approach.

4.2 Nonparametric Bayesian Approaches

For multipurpose large-scale surveys, parametric Bayesian methods based on distributional assumptions have limited value because of the difficulties in validating the parametric assumptions. It may be more appealing to use a nonparametric Bayesian approach, but this requires the specification of a nonparametric likelihood function based on the full sample data and a prior distribution on the vector of population values. The likelihood function based on the full sample data, however, is noninformative in the sense that all possible values of the unobserved population elements have the same likelihood (Godambe, 1966). One way out of this difficulty is to take a Bayesian route by assuming an informative (exchangeable) prior on the population vector and combining it with the noninformative likelihood (Ericson, 1969; Binder, 1982) to get an informative posterior, but the resulting inferences do not depend on the sample design; Ericson argued that an exchangeable prior assumption may be reasonable under simple random sampling. Ericson (1969) focused on the posterior mean and the posterior variance of the population mean, which approximately agree, under prior vagueness, with the usual formulae under the design-based approach. In the case of stratified sampling with known strata differences, priors within strata are assumed to be exchangeable.

Meeden and Vardeman (1991) used a Polya posterior (PP) over the unobserved units, assuming that the "unseen are like the seen" (equivalent to exchangeability). In this case, the posterior "does not arise from a single prior distribution" (Meeden, 1995) and, hence, it is called a pseudo-posterior. It is also similar to the Bayesian bootstrap (Rubin, 1981; Lo, 1988). The Polya posterior is a flexible tool, and methods based on PP have reasonable design-based properties under simple random sampling. The PP approach permits Bayesian interval estimation for the mean and any other parameters of interest through simulation of many finite populations from the PP. The general interval estimation feature of the PP approach is attractive. Meeden (1995) extended the PP approach to utilize auxiliary population information by making a strong prior assumption that the ratios of the variable of interest to the auxiliary variable are exchangeable, and obtained point and interval estimators for the population median. Empirical results under simple random sampling are given to show that the resulting Bayesian intervals perform well in terms of design-based coverage. Lazar, Meeden and Nelson (2008) developed a constrained Polya posterior to generate simulated populations that are consistent with the known population mean of an auxiliary variable, using MCMC methods. This approach permits the use of known population auxiliary information and leads to more efficient Bayesian inferences. Nelson and Meeden (1998) adapted the PP to incorporate prior knowledge that the population median belongs to some interval. Meeden (1999) studied two-stage cluster sampling (the balanced case), and his two-stage PP-based results for the posterior mean and the posterior variance are very close to standard design-based results, but it is not clear how readily Meeden's approach extends to the unbalanced case with unequal cluster sizes.
Although the PP approach is attractive and seems to provide calibrated Bayesian inferences at least for some simple sampling designs, it is unlikely to be used in the production of official statistics because of the underlying assumption that the "unseen are like the seen," and because each case needs to be studied carefully to develop a suitable PP. Also, it is not clear how this method can handle complex designs, such as stratified multistage sampling designs, or even single-stage unequal probability sampling without replacement with nonnegligible sampling fractions, and provide design-calibrated Bayesian inferences. Nevertheless, the PP approach may be useful for some specialized surveys and when inferences are desired on a variety of finite population parameters associated with the variable of interest or prior knowledge on the parameters is available, as in the case of Nelson and Meeden (1998).

An alternative approach is to start with an informative likelihood based on reduced data. For example, under simple random sampling, it may be reasonable to suppress the labels from the full data and use the likelihood based on the reduced data; see Hartley and Rao (1968) and Royall (1968). On the other hand, for stratified random sampling, labels within strata are suppressed but strata labels are retained because of known strata differences. Hartley and Rao (1968) proposed a "scale-load" approach for inference on the mean $\bar{Y}$. Under this approach, the $y$-values are assumed to belong to a finite set of possible values $y_1^*, \ldots, y_T^*$ for some finite $T$ (unspecified). Then $N_t$ is the scale load of $y_t^*$ and the population mean is expressed in terms of the scale loads as $\bar{Y} = N^{-1} \sum_{t=1}^{T} N_t y_t^*$. Reduced sample data under simple random sampling without replacement are represented by the sample scale loads $n_t$, and the resulting likelihood function is the hyper-geometric likelihood $L(N_1, \ldots, N_T) \propto \prod_{t=1}^{T} \binom{N_t}{n_t}$. If the sampling fraction $n/N$ is negligible, then the likelihood is simply the multinomial likelihood $\prod_{t=1}^{T} (N_t/N)^{n_t}$, which is the same as the empirical likelihood (EL) of Owen (1988). In the case of stratified random sampling, the likelihood function is the product of hyper-geometric likelihoods corresponding to the different strata.

Hartley and Rao focused primarily on design-based inferences, but also briefly studied Bayesian inference under simple random sampling using a compound-multinomial prior on the scale loads $N_t$. Hoadley (1969) obtained the compound-multinomial prior by first assuming that the finite population is a random sample from an infinite population with unknown probabilities $p_t$ of observing the values $y_t^*$, and then using a Dirichlet prior with parameters $\alpha_t$ on the probabilities $p_t$. The posterior distribution of the unobserved scale loads $N_t - n_t$, given the sample scale loads $n_t$, is again compound multinomial. Using this posterior distribution, Hartley and Rao (1968) obtained the posterior mean and the posterior variance of the population mean $\bar{Y}$. Under a diffuse prior with the $\alpha_t$ close to zero, the results are identical to those of Ericson (1969). However, there are fundamental differences in the two approaches in the sense that under exchangeability the conditional distribution of the sample scale loads $n_t$, given the population scale loads $N_t$, is equal to the hyper-geometric likelihood of Hartley–Rao for any sampling design, whereas in the Hartley–Rao approach this conditional distribution and the resulting posterior of the $N_t$ are derived under simple random sampling, and hence depend on the sampling design. Rao and Ghangurde (1972) studied Bayesian optimal sample allocation, by minimizing the expected posterior variance of the mean, for stratified simple random sampling and some other cases, including two-phase sampling to handle the nonresponse problem. Attention was given to data-based priors obtained by combining diffuse priors with likelihoods based on pilot samples.

Aitkin (2008) used the scale-load framework and obtained Bayesian intervals on the population mean under simple random sampling, by using a compound-multinomial prior on the observed scale loads and then simulating a large number of samples from the resulting posterior distribution. This approach is similar to the simulation approach used by Meeden and Vardeman (1991), but the posterior intervals depend on the design via the likelihood function. As in the case of Meeden and Vardeman, the simulation method can be applied to other parameters of interest. Also, the simulation method readily extends to stratified simple random sampling.
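A rough sketch of this simulation idea, in the diffuse limit where the $\alpha_t$ tend to zero and the sampling fraction is negligible: the posterior over the distinct observed values is then Dirichlet with parameters equal to the sample scale loads $n_t$, which can be realized via normalized Gamma variates. All names below are ours, for illustration only.

```python
import random
from collections import Counter

# Simulate draws of the population mean from a Dirichlet posterior with
# parameters equal to the sample scale loads n_t (diffuse compound-
# multinomial limit), and report an equal-tailed credible interval.

def posterior_mean_interval(y, n_draws=2000, level=0.95, seed=1):
    rng = random.Random(seed)
    loads = Counter(y)                      # sample scale loads n_t
    vals = sorted(loads)
    draws = []
    for _ in range(n_draws):
        g = [rng.gammavariate(loads[v], 1.0) for v in vals]
        s = sum(g)
        draws.append(sum((gi / s) * v for gi, v in zip(g, vals)))
    draws.sort()
    k = int((1 - level) / 2 * n_draws)      # e.g., 2.5% tail count
    return draws[k], draws[n_draws - 1 - k]
```

This ignores the finite-population (hyper-geometric) correction, so it corresponds to a negligible sampling fraction; the same simulation output can be used for intervals on other parameters of the simulated populations.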

The scale-load approach is promising but somewhat limited in applicability, in the sense that the scale-load likelihoods cannot be obtained easily for complex sampling designs. To handle complex sampling designs, Rao and Wu (2010) used a pseudo-EL approach, proposed by Wu and Rao (2006), to obtain "calibrated" pseudo-Bayesian intervals in the sense that the intervals have asymptotically correct design-based coverage probabilities. The pseudo-EL approach uses the survey weights and the design effect (via the effective sample size $n^* = n/\text{deff}$) in defining the profile pseudo-EL function for the mean $\bar{Y}$. Let $\tilde{w}_i$ be the normalized weights summing to one over the sample $s$, and let $n^* = n/\text{deff}$, where deff is the ratio of the estimated variance of the weighted mean to the estimate of the variance under simple random sampling. Then the profile pseudo-empirical log-likelihood function for a value $\theta$ of the mean is given by

$l_{ps}(\theta) = n^{*} \sum_{i \in s} \tilde{w}_i \log \hat{p}_i(\theta), \qquad (4)$

where the $\hat{p}_i(\theta)$ maximize $\sum_{i \in s} \tilde{w}_i \log p_i$ subject to $p_i > 0$, $\sum_{i \in s} p_i = 1$ and $\sum_{i \in s} p_i y_i = \theta$. We refer the reader to Rao and Wu (2009) for an overview of EL methods used for inference from survey data.
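The constrained maximization defining the pseudo-EL weights has a standard Lagrange-multiplier solution of the form $\hat{p}_i(\theta) = \tilde{w}_i / \{1 + \lambda (y_i - \theta)\}$, with $\lambda$ chosen so that the mean constraint holds. A minimal sketch (our names; $w$ must be normalized weights summing to one, and $\theta$ must lie strictly between the smallest and largest sample $y$-values):

```python
# Sketch of the pseudo-EL weights p_i(theta) via bisection for the
# Lagrange multiplier lam: g(lam) = sum_i w_i u_i / (1 + lam u_i), with
# u_i = y_i - theta, is decreasing in lam, so its root is bracketed on
# the interval keeping all 1 + lam*u_i positive.

def pseudo_el_weights(y, w, theta, tol=1e-12):
    u = [yi - theta for yi in y]
    lo = -1.0 / max(u) + 1e-9
    hi = -1.0 / min(u) - 1e-9

    def g(lam):
        return sum(wi * ui / (1.0 + lam * ui) for wi, ui in zip(w, u))

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return [wi / (1.0 + lam * ui) for wi, ui in zip(w, u)]
```

The profile pseudo-EL value at $\theta$ is then obtained by plugging these weights into the log-likelihood, scaled by the effective sample size.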

By combining the profile pseudo-EL function for the population mean from (4) with a flat prior on the mean, one can get pseudo-Bayesian intervals that have asymptotically correct design-based coverage probabilities. Also, it may be easier to specify informative priors on the mean if historical information on the mean is available. The proposed approach can incorporate known auxiliary population information in the construction of pseudo-Bayesian intervals using the basic design weights or using weights already calibrated by the known auxiliary information. The latter is more appealing because, in practice, data files report the calibrated weights. One limitation of the Rao–Wu method for complex designs is that the pseudo-EL depends on the design effects which may not be readily available. Lazar (2003) proposed the Bayesian profile EL approach for the case of independent and identically distributed (i.i.d.) observations.

It should be noted that even in the i.i.d. case a “matching” prior on the mean that provides higher order coverage accuracy for the intervals does not exist when using the nonparametric Bayesian profile-EL (Fang and Mukerjee, 2006). Therefore, the main advantage of the approach is to get “exact” pseudo-Bayesian intervals that are also calibrated in the sense of first order coverage accuracy in the design-based framework.

5 Small Area Estimation: HB Approach

Methods for small area (or domain) estimation have received much attention in recent years due to growing demand for reliable small area statistics. Traditional area-specific direct estimation methods (either design-based or model based or Bayesian) are not suitable in the small area context because of small (or even zero) area-specific sample sizes. As a result, it is necessary to use indirect estimation methods that borrow information across related areas through linking models based on survey data and auxiliary information, such as recent census data and current administrative records. Advocates of design-based methods indeed acknowledge the need for models in small area estimation. For example, Hansen, Madow and Tepping (1983) remark, “If the assumed model accurately represents the state of nature, useful inferences can be based on quite small samples at least for certain models.”

Linking models based on linear mixed models and generalized linear mixed models with random small area effects are currently used extensively, in conjunction with empirical best linear unbiased prediction (EBLUP), parametric empirical Bayes (EB) and hierarchical Bayes (HB) methods for estimation of small area means and other small area parameters of interest. A detailed treatment of small area estimation methods is given in Rao (2003). We focus here on HB methods to highlight the significant impact of Bayesian methods on small area estimation.

In the HB approach, model parameters are treated as random variables and assigned a prior distribution. Typically, noninformative priors are used, but one must make sure that the resulting posteriors are proper because some priors on the variance parameters can lead to improper posteriors (see Rao, 2003, Section 10.2.4, for a discussion on the choice of priors). The posterior distribution of a small area parameter of interest is then obtained from the prior and the likelihood function generated from the data and the assumed model. Typically, closed-form expressions for desired posterior distributions do not exist, but powerful MCMC methods are now available for simulating samples from the desired posterior distribution and then computing the desired posterior summaries. Rao (2003, Chapter 10) gives a detailed account of the HB methods in the small area context; see also the review paper by Datta (2009), Section 3.

A significant advantage of the HB approach is that it is straightforward and the inferences are "exact," unlike in the EB approach. Moreover, it can handle complex small area models using MCMC methods. Availability of powerful MCMC methods and software, such as WinBUGS, also makes HB attractive to the user. Extensive HB model diagnostic tools are also available, but some of the default HB model-checking measures that are widely used may not be necessarily good for detecting model deviations. For example, the commonly used posterior predictive $p$-value (PPP) for checking goodness of fit may not be powerful enough to detect nonnormality of random effects (Sinharay and Stern, 2003) because this measure makes "double use" of data in the sense of first generating values from the predictive posterior distribution and then calculating the $p$-value. Bayarri and Castellanos (2007) say, "Double use of the data can result in an extreme conservatism of the resulting $p$-values." Alternative measures, such as the partial PPP and the conditional PPP (Bayarri and Berger, 2000), attempt to avoid double use of data, but those measures are more difficult to implement than the PPP, especially for the small area models. Browne and Draper (2001) suggested the use of prior-free, frequentist methods in the model exploration phase and then the HB for inference based on the selected models using possibly diffuse priors on the model parameters. However, many Bayesians may not agree with this suggestion because of the orientation of frequentist tests of goodness of fit to rejecting null hypotheses, as noted by a referee.

To illustrate the HB approach for small area estimation, we focus on a basic area-level model with two components, a sampling model and a linking model, requiring only area-specific (direct) design-based estimators $\hat{\theta}_i$ of small area means $\theta_i$ and associated area-level covariates $\mathbf{z}_i$ ($i = 1, \ldots, m$). The linking model is of the form $h(\theta_i) = \mathbf{z}_i^{T} \boldsymbol{\beta} + v_i$, where the random effects $v_i$ are independent $N(0, \sigma_v^2)$ and $h(\cdot)$ is a specified link function. The sampling model assumes that $h(\hat{\theta}_i) = h(\theta_i) + e_i$, where the sampling errors $e_i$ are assumed to be independent with zero means and known sampling variances $\psi_i$. The assumptions of zero mean sampling errors and known sampling variances may both be quite restrictive in practice. The first difficulty may be circumvented by using the sampling model $\hat{\theta}_i = \theta_i + e_i$, where the sampling errors $e_i$ are assumed to be independent normal with zero means, which simply says that the direct estimators are design unbiased or nearly design unbiased, as in the case of a GREG estimator, for large overall sample size. The second assumption of known sampling variances is more problematic, and the usual practice to get around this problem is to model the estimated sampling variances (using generalized variance functions) and then treat the resulting smoothed estimates as the true variances $\psi_i$. Bell (2008) studied the sensitivity of small area inferences to errors in the specification of the true variances. The original model with the identity link, called the Fay–Herriot (FH) model, is a matched model in the sense that the sampling model matches the linking model, and the combined model is simply a special case of a linear mixed model. On the other hand, the alternative sampling model $\hat{\theta}_i = \theta_i + e_i$ is not necessarily matched to the linking model, and in this case the two models are "mismatched." For simplicity, we focus on the matched case, but the HB approach readily extends to the more complex case of mismatched models and also to models that allow the sampling variance to depend on the area mean (You and Rao, 2002).
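For the matched FH case, the frequentist EB estimator is closely related to the HB posterior mean. The following minimal Python sketch computes the EB estimator under an intercept-only linking model, estimating the model variance from the Fay–Herriot moment equation by bisection; all names are ours, for illustration only, and a real application would include covariates and MSPE estimation (or full MCMC for HB).

```python
# EB sketch under theta_hat_i = theta_i + e_i, theta_i = beta + v_i,
# with known sampling variances psi_i. The moment equation
# sum_i (theta_hat_i - beta_hat)^2 / (s2 + psi_i) = m - 1 is solved for
# s2 (the model variance), truncated at zero when no root exists.

def fh_eb(theta_hat, psi, upper=1e6, tol=1e-10):
    m = len(theta_hat)

    def beta_hat(s2):
        w = [1.0 / (s2 + p) for p in psi]
        return sum(wi * ti for wi, ti in zip(w, theta_hat)) / sum(w)

    def moment_gap(s2):
        b = beta_hat(s2)
        return sum((t - b) ** 2 / (s2 + p)
                   for t, p in zip(theta_hat, psi)) - (m - 1)

    lo, up = 0.0, upper
    if moment_gap(lo) <= 0:
        s2 = 0.0                          # moment estimate truncated at zero
    else:
        while up - lo > tol:
            mid = 0.5 * (lo + up)
            if moment_gap(mid) > 0:
                lo = mid
            else:
                up = mid
        s2 = 0.5 * (lo + up)
    b = beta_hat(s2)
    gamma = [s2 / (s2 + p) for p in psi]  # shrinkage weights
    eb = [g * t + (1 - g) * b for g, t in zip(gamma, theta_hat)]
    return eb, s2, b
```

The shrinkage weight $\gamma_i = \sigma_v^2/(\sigma_v^2 + \psi_i)$ makes the tension discussed below concrete: when the variance estimate hits zero, every area estimate collapses to the synthetic component regardless of its sample size.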

Attractive features of area level models are that the sampling design is taken into account through the direct estimators and that the direct estimators and the associated area level covariates are more readily available to the users than the corresponding unit level sample data. For example, the U.S. Small Area Income and Poverty Estimation (SAIPE) Program used the FH model to estimate county level poverty counts of school-age children by employing direct estimates for sampled counties from the Current Population Survey and associated county level auxiliary information from tax records, food stamps programs and other administrative sources (see Rao, 2003, Chapter 7, for details). Bayesians have used the area level models extensively through the HB approach, in spite of the limitations mentioned above, because of their practical advantages (Rao, 2003, Chapter 10).

In the HB approach, a flat prior is often specified on the model parameters $\boldsymbol{\beta}$ and $\sigma_v^2$, and the resulting posterior summaries (means, variances and credible intervals) for the means $\theta_i$ are obtained. Bell (1999) studied matched models in the context of estimating the proportion of school-age children in poverty at the state level in the US, using the survey proportions based on the Current Population Survey (CPS) data for 1989–1993 and related area level covariates. Bell found that the maximum likelihood (ML) and restricted ML (REML) estimates of $\sigma_v^2$ turned out to be zero for the first four years (1989–1992), and the resulting EB estimates of state poverty rates attached zero weight to the direct estimate regardless of the CPS state sample sizes (number of households). This problem with EB based on ML or REML can be circumvented by using the HB approach. Bell used the above flat prior and obtained the posterior mean, which always attached nonzero weight to the direct estimate. Further, the posterior variance is well behaved (smallest for California, the state with the largest sample size), unlike the estimated mean squared prediction error (MSPE) of the EB estimator. It is possible, however, to develop EB methods that always lead to nonzero estimates of $\sigma_v^2$. Morris (2006) proposed to multiply the residual likelihood function of $\sigma_v^2$ by an adjustment factor and maximize the resulting adjusted likelihood function. The resulting estimator of $\sigma_v^2$ is always positive and gets around the difficulty with REML. Li and Lahiri (2010) used an adjusted profile likelihood function which also leads to positive estimates of $\sigma_v^2$. They also established asymptotic consistency of the estimator and obtained a nearly unbiased estimator of the mean squared prediction error (MSPE) of the associated EB estimator of the small area mean.

Datta, Rao and Smith (2005) studied frequentist properties of HB by deriving a moment-matching prior on $\sigma_v^2$, in the sense that the resulting posterior variance is nearly unbiased for the MSPE of the HB estimator of the small area mean $\theta_i$. The moment-matching prior for area $i$ is given by

$\pi_i(\sigma_v^2) \propto (\sigma_v^2 + \psi_i)^{2} \sum_{j=1}^{m} (\sigma_v^2 + \psi_j)^{-2}. \qquad (5)$

This prior depends collectively on the sampling variances $\psi_j$ for all the areas as well as on the area-specific sampling variance $\psi_i$. Note that the matching prior is designed for inference on area $i$ and, hence, its dependence on $\psi_i$ should not be problematic. The matching prior (5) reduces to the flat prior in the special case of equal sampling variances $\psi_i = \psi$. However, in the application considered by Bell (1999), the ratio of the largest to the smallest sampling variance is as large as 20. Ganesh and Lahiri (2008) derived a single matching prior such that a weighted posterior variance over the areas tracks the corresponding weighted MSPE for specified weights. By letting the weights be one for area $i$ and zero for the remaining areas, the resulting prior is identical to (5). Datta (2008) has shown that the previous moment-matching priors also ensure a matching property for interval estimation, in the sense that the coverage probability of the credible interval tracks the corresponding coverage probability of the normal interval based on the EB estimator and its estimated MSPE. Further work on matching priors in the context of small area estimation would be useful.

Mismatched models are often more realistic for practical applications, as they allow flexibility in formulating the linking model. A recent application of HB under mismatched models is to the estimation of adult literacy levels for all states and counties in the US, using data from the National Assessment of Adult Literacy and literacy-related auxiliary data (Mohadjer et al., 2007). Bizier et al. (2008) used mismatched models and the HB approach to produce estimates of disability rates for health regions and selected municipalities in Canada.

A variety of applications of HB under complex modeling have been reported in the literature (see Rao, 2003, Chapter 10, for work prior to 2003). Nandram and Choi (2005) and Nandram, Cox and Choi (2005) studied extensions of HB to handle nonignorable nonresponse and applied the methods to data from the National Health and Nutrition Examination Survey (NHANES III) to produce small area estimates. Raghunathan et al. (2007) applied the HB approach to combine data from two independent surveys [Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS)] for the years 1997–2000 to produce yearly prevalence estimates at the county level for six outcomes. BRFSS is a large telephone survey covering almost all US counties, but the nonresponse rates are high and nontelephone households are not covered. On the other hand, NHIS is a smaller personal interview survey with lower nonresponse rates that covers nontelephone households. In this application, direct survey weighted county estimates of proportions $\hat p_i$ from the two surveys were transformed using the inverse sine transformation $\hat\theta_i = \sin^{-1}(\hat p_i^{1/2})$, and the sampling variances were taken as $(4\tilde n_i)^{-1}$, where $\tilde n_i$ denotes the effective sample size for a particular domain (calculated as the actual domain sample size $n_i$ divided by the estimated design effect, which is the ratio of the estimated variance under the given design to the binomial estimated variance). The resulting sampling model was then combined with a suitable linking model to obtain county estimates of the prevalence rates, using diffuse proper priors on the model parameters and MCMC. This application attempts to account for possible noncoverage bias and obtain efficient county estimates by combining data from two independent surveys. It may be noted that the model used here is an extension of the basic FH area level model, and the application demonstrates how design-based and Bayesian approaches can be fruitfully integrated in small area estimation.
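The transformation step can be made concrete. The following sketch (an illustration of the standard variance-stabilizing arcsine calculation, with a hypothetical function name) computes the transformed direct estimate and its approximate sampling variance $(4\tilde n)^{-1}$ from a domain proportion, sample size and estimated design effect.

```python
import math

def arcsine_inputs(p_hat, n, deff):
    """Inverse sine transform of a direct proportion and its approximate
    sampling variance 1/(4*n_eff), where n_eff = n / deff is the
    effective sample size (actual size deflated by the design effect)."""
    n_eff = n / deff
    theta_hat = math.asin(math.sqrt(p_hat))
    var_theta = 1.0 / (4.0 * n_eff)
    return theta_hat, var_theta
```

For example, a domain with $\hat p = 0.25$, $n = 400$ and an estimated design effect of 2 has $\tilde n = 200$ and sampling variance $1/800$ on the transformed scale, independent of $\hat p$, which is what makes the known-variance FH sampling model plausible here.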

HB methods studied in the literature have been largely parametric, based on specified distributions for the data. However, Meeden (2003) extended his noninformative Bayesian approach, based on the Polya posterior (PP), to small area estimation in some simple cases. Extension of this approach to handle complex models is not likely to be easy in practice.
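To fix ideas about the PP, here is a minimal sketch of the Polya-urn step for a single area, assuming simple random sampling and ignoring auxiliary information; it is my own illustration, not Meeden's implementation. Each unobserved unit is assigned a value drawn from the current urn, and the drawn value is returned with an extra copy, so units seen often early tend to be replicated.

```python
import numpy as np

def polya_completion(sample, N, rng):
    """Complete a population of size N from an observed sample by the
    Polya urn: draw a value from the urn for each unsampled unit and
    add that value back as an additional copy."""
    urn = list(sample)
    for _ in range(N - len(sample)):
        draw = urn[rng.integers(len(urn))]
        urn.append(draw)
    return np.array(urn)

def pp_mean_draws(sample, N, ndraws=1000, seed=1):
    """Draws from the Polya posterior of the finite population mean."""
    rng = np.random.default_rng(seed)
    return np.array([polya_completion(sample, N, rng).mean()
                     for _ in range(ndraws)])
```

Repeating the completion yields the posterior distribution of any finite population parameter; the mean is shown, but quantiles work the same way.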

6 Concluding Remarks

I have provided an appraisal of the role of Bayesian and frequentist methods in sample surveys. My opinion is that, for domains (subpopulations) with sufficiently large samples, a traditional design-based frequentist approach that makes effective use of auxiliary information, through calibration or assistance from working models, will remain the preferred approach in the large-scale production of official statistics from complex surveys. Nonsampling errors can be handled using a combined design and model approach with minimal use of models. But the design-based approach, using survey weights, is not a panacea even for large samples, and yet “many people ask too much of the weights” (Lohr, 2007), prompting statements like, “Survey weighting is a mess” (Gelman, 2007). As Lohr (2007) noted, survey weighting is not a mess as long as the weighting is not stretched to its limit, as in the case of a very large number of post-stratification cells leading to very small or even zero cell sample sizes, which makes weighting at the cell level unstable or even infeasible (Gelman, 2007). Alternative weighting methods can be used in those situations to get around this problem. For example, calibrating to the marginal counts of the post-stratification variables instead of the cell counts leads to a calibration estimator with stable weights, which should perform well for estimating population totals or means. Also, the resulting weights do not depend on the response values, thus ensuring internal consistency, unlike the hierarchical regression method proposed by Gelman (2007) based on models involving random effects.
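Calibration to the margins rather than the cells is classical raking (iterative proportional fitting). The sketch below (my own illustration for two post-stratification variables; the function name and arguments are assumptions) alternately rescales the weights so the weighted counts match each set of known marginal totals, without ever touching the response values.

```python
import numpy as np

def rake_weights(w, rows, cols, row_totals, col_totals, iters=50):
    """Iterative proportional fitting: adjust base weights w so that
    weighted counts match known margins of two categorical variables.
    Assumes every margin category contains at least one sampled unit."""
    w = w.astype(float).copy()
    for _ in range(iters):
        for r, total in enumerate(row_totals):
            mask = rows == r
            w[mask] *= total / w[mask].sum()   # match row margin r
        for c, total in enumerate(col_totals):
            mask = cols == c
            w[mask] *= total / w[mask].sum()   # match column margin c
    return w
```

Only marginal totals are needed, so empty or tiny cross-classification cells do not destabilize the weights the way full post-stratification can.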

Recent work on nonparametric Bayesian methods that can be used for both Bayesian and design-based inferences looks promising, at least for some specialized surveys. For small area estimation, the hierarchical Bayes (HB) approach offers a lot of promise because of its ability to handle complex small area models and provide “exact” inferences. However, the choice of noninformative priors that can provide frequentist validity is not likely to be easy in practice when complex modeling is involved. Also, caution needs to be exercised in the routine use of popular HB model-checking methods.


This research was supported by a grant from the Natural Sciences and Engineering Research Council of Canada. My thanks are due to an associate editor and two referees for constructive comments and suggestions.


  1. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmAitkin, \bfnmM.\binitsM. (\byear2008). \btitleApplications of the Bayesian bootstrap in finite population inference. \bjournalJ. Off. Statist. \bvolume24 \bpages21–51. \endbibitem
  2. {barticle}[mr] \bauthor\bsnmBayarri, \bfnmM. J.\binitsM. J. \AND\bauthor\bsnmBerger, \bfnmJames O.\binitsJ. O. (\byear2000). \btitle values for composite null models. \bjournalJ. Amer. Statist. Assoc. \bvolume95 \bpages1127–1142, 1157–1170. \bidmr=1804239 \bptnotecheck related \endbibitem
  3. {barticle}[mr] \bauthor\bsnmBayarri, \bfnmM. J.\binitsM. J. \AND\bauthor\bsnmCastellanos, \bfnmM. E.\binitsM. E. (\byear2007). \btitleBayesian checking of the second levels of hierarchical models. \bjournalStatist. Sci. \bvolume22 \bpages322–343. \biddoi=10.1214/07-STS235, mr=2416808 \endbibitem
  4. {bmisc}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmBell, \bfnmW. R.\binitsW. R. (\byear1999). \bhowpublishedAccounting for uncertainty about variances in small area estimation. In Bull. Int. Statist. Inst.: 52nd Session. Available at www.census.govt/hhes/ www/saipe under “Publications.” \endbibitem
  5. {bincollection}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmBell, \bfnmW. R.\binitsW. R. (\byear2008). \btitleExamining sensitivity of small area inferences to uncertainty about sampling error variances. In \bbooktitleProceedings of the Survey Research Methods Section \bpages327–333. \bpublisherAmer. Statist. Assoc., \baddressAlexandria, VA. \endbibitem
  6. {barticle}[mr] \bauthor\bsnmBellhouse, \bfnmD. R.\binitsD. R. \AND\bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K. (\byear1986). \btitleOn the efficiency of prediction estimators in two-stage sampling. \bjournalJ. Statist. Plann. Inference \bvolume13 \bpages269–281. \biddoi=10.1016/0378-3758(86)90139-4, mr=0835612 \endbibitem
  7. {barticle}[mr] \bauthor\bsnmBinder, \bfnmDavid A.\binitsD. A. (\byear1982). \btitleNonparametric Bayesian models for samples from finite populations. \bjournalJ. Roy. Statist. Soc. Ser. B \bvolume44 \bpages388–393. \bidmr=0693238 \endbibitem
  8. {bmisc}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmBizier, \bfnmV.\binitsV., \bauthor\bsnmYou, \bfnmY.\binitsY., \bauthor\bsnmVeilleux, \bfnmL.\binitsL. \AND\bauthor\bsnmGrodin, \bfnmC.\binitsC. (\byear2008). \bhowpublishedModel-based approach to small area estimation of disability count and rate using data from the 2006 participation and activity limitation survey. Technical report, Household Survey Methods Division, Statistics Canada. \endbibitem
  9. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmBowley, \bfnmA. L.\binitsA. L. (\byear1926). \btitleMeasurement of the precision attained in sampling. \bjournalBull. Int. Statist. Inst. 22, Supplement to Liv \bvolume1 \bpages6–62. \endbibitem
  10. {bincollection}[auto:STB—2010-11-18—09:18:59] \bauthor\bparticlevan den \bsnmBrakel, \bfnmJ. A.\binitsJ. A. \AND\bauthor\bsnmBethlehem, \bfnmJ.\binitsJ. (\byear2008). \btitleModel-based estimation for official statistics. Discussion Paper \bpages08002, \bnoteStatistics Netherlands. \endbibitem
  11. {barticle}[mr] \bauthor\bsnmBreidt, \bfnmF. J.\binitsF. J., \bauthor\bsnmClaeskens, \bfnmG.\binitsG. \AND\bauthor\bsnmOpsomer, \bfnmJ. D.\binitsJ. D. (\byear2005). \btitleModel-assisted estimation for complex surveys using penalised splines. \bjournalBiometrika \bvolume92 \bpages831–846. \biddoi=10.1093/biomet/92.4.831, mr=2234189 \endbibitem
  12. {barticle}[mr] \bauthor\bsnmBrewer, \bfnmK. R. W.\binitsK. R. W. (\byear1963). \btitleRatio estimation and finite populations: Some results deducible from the assumption of an underlying stochastic process. \bjournalAustral. J. Statist. \bvolume5 \bpages93–105. \bidmr=0182078 \endbibitem
  13. {bmisc}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmBrowne, \bfnmW. J.\binitsW. J. \AND\bauthor\bsnmDraper, \bfnmD.\binitsD. (\byear2001). \bhowpublishedA comparison of Bayesian and likelihood-based methods for fitting multilevel models. Technical report, Institute for Education, London, England. \endbibitem
  14. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmCao, \bfnmW.\binitsW., \bauthor\bsnmTsiatis, \bfnmA.\binitsA. \AND\bauthor\bsnmDavidian, \bfnmM.\binitsM. (\byear2009). \btitleImproving efficiency and robustness of the doubly robust estimators for a population mean with incomplete data. \bjournalBiometrika \bvolume96 \bpages723–734. \endbibitem
  15. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmCasady, \bfnmR. J.\binitsR. J. \AND\bauthor\bsnmValliant, \bfnmR.\binitsR. (\byear1993). \btitleConditional properties of post-stratified estimators under normal theory. \bjournalSurvey Methodol. \bvolume19 \bpages183–192. \endbibitem
  16. {bmisc}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmDatta, \bfnmG. S.\binitsG. S. (\byear2008). \bhowpublishedPrivate communication. \endbibitem
  17. {binproceedings}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmDatta, \bfnmG. S.\binitsG. S. (\byear2009). \btitleModel-based approach to small area estimation. In \bnoteHandbook of Statistics: Sample Surveys: Inference and Analysis 29B (D. Pfeffermann and C. R. Rao, eds.) 251–288. North-Holland, Amsterdam. \endbibitem
  18. {barticle}[mr] \bauthor\bsnmDatta, \bfnmGauri Sankar\binitsG. S., \bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K. \AND\bauthor\bsnmSmith, \bfnmDavid Daniel\binitsD. D. (\byear2005). \btitleOn measuring the variability of small area estimators under a basic area level model. \bjournalBiometrika \bvolume92 \bpages183–196. \biddoi=10.1093/biomet/92.1.183, mr=2158619 \endbibitem
  19. {bbook}[mr] \bauthor\bsnmDeming, \bfnmW. Edwards\binitsW. E. (\byear1960). \btitleSample Design in Business Research. \bpublisherWiley, \baddressNew York. \bidmr=0120753 \endbibitem
  20. {barticle}[mr] \bauthor\bsnmDeville, \bfnmJean-Claude\binitsJ.-C. \AND\bauthor\bsnmSärndal, \bfnmCarl-Erik\binitsC.-E. (\byear1992). \btitleCalibration estimators in survey sampling. \bjournalJ. Amer. Statist. Assoc. \bvolume87 \bpages376–382. \bidmr=1173804 \endbibitem
  21. {barticle}[mr] \bauthor\bsnmDurbin, \bfnmJ.\binitsJ. (\byear1958). \btitleSampling theory for estimates based on fewer individuals than the number selected. \bjournalBull. Inst. Internat. Statist. \bvolume36 \bpages113–119. \bidmr=0117821 \endbibitem
  22. {barticle}[mr] \bauthor\bsnmEricson, \bfnmW. A.\binitsW. A. (\byear1969). \btitleSubjective Bayesian models in sampling finite populations. \bjournalJ. Roy. Statist. Soc. Ser. B \bvolume31 \bpages195–233. \bidmr=0270494 \bptnotecheck related \endbibitem
  23. {barticle}[mr] \bauthor\bsnmFang, \bfnmKai-Tai\binitsK.-T. \AND\bauthor\bsnmMukerjee, \bfnmRahul\binitsR. (\byear2006). \btitleEmpirical-type likelihoods allowing posterior credible sets with frequentist validity: Higher-order asymptotics. \bjournalBiometrika \bvolume93 \bpages723–733. \biddoi=10.1093/biomet/93.3.723, mr=2261453 \endbibitem
  24. {barticle}[mr] \bauthor\bsnmFrancisco, \bfnmCarol A.\binitsC. A. \AND\bauthor\bsnmFuller, \bfnmWayne A.\binitsW. A. (\byear1991). \btitleQuantile estimation with a complex survey design. \bjournalAnn. Statist. \bvolume19 \bpages454–469. \biddoi=10.1214/aos/1176347993, mr=1091862 \endbibitem
  25. {bbook}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmFuller, \bfnmW. A.\binitsW. A. (\byear2009). \btitleSampling Statistics. \bpublisherWiley, \baddressHoboken, NJ. \endbibitem
  26. {barticle}[mr] \bauthor\bsnmGanesh, \bfnmN.\binitsN. \AND\bauthor\bsnmLahiri, \bfnmP.\binitsP. (\byear2008). \btitleA new class of average moment matching priors. \bjournalBiometrika \bvolume95 \bpages514–520. \biddoi=10.1093/biomet/asn008, mr=2521597 \endbibitem
  27. {barticle}[mr] \bauthor\bsnmGelman, \bfnmAndrew\binitsA. (\byear2007). \btitleStruggles with survey weighting and regression modeling. \bjournalStatist. Sci. \bvolume22 \bpages153–164. \biddoi=10.1214/088342306000000691, mr=2408951 \endbibitem
  28. {barticle}[mr] \bauthor\bsnmGodambe, \bfnmV. P.\binitsV. P. (\byear1966). \btitleA new approach to sampling from finite populations. I. Sufficiency and linear estimation. \bjournalJ. Roy. Statist. Soc. Ser. B \bvolume28 \bpages310–319. \bidmr=0216720 \endbibitem
  29. {barticle}[mr] \bauthor\bsnmHall, \bfnmPeter\binitsP. (\byear2003). \btitleA short prehistory of the bootstrap. \bjournalStatist. Sci. \bvolume18 \bpages158–167. \biddoi=10.1214/ss/1063994970, mr=2026077 \endbibitem
  30. {barticle}[mr] \bauthor\bsnmHansen, \bfnmMorris H.\binitsM. H. \AND\bauthor\bsnmHurwitz, \bfnmWilliam N.\binitsW. N. (\byear1943). \btitleOn the theory of sampling from finite populations. \bjournalAnn. Math. Statist. \bvolume14 \bpages333–362. \bidmr=0009832 \endbibitem
  31. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmHansen, \bfnmM. H.\binitsM. H., \bauthor\bsnmHurwitz, \bfnmW. N.\binitsW. N., \bauthor\bsnmMarks, \bfnmE. S.\binitsE. S. \AND\bauthor\bsnmMauldin, \bfnmW. P.\binitsW. P. (\byear1951). \btitleResponse errors in surveys. \bjournalJ. Amer. Statist. Assoc. \bvolume46 \bpages147–190. \endbibitem
  32. {bmisc}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmHansen, \bfnmM. H.\binitsM. H., \bauthor\bsnmHurwitz, \bfnmW. N.\binitsW. N., \bauthor\bsnmNisselson, \bfnmH.\binitsH. \AND\bauthor\bsnmSteinberg, \bfnmJ.\binitsJ. (\byear1955). \bhowpublishedThe redesign of the census current population survey. J. Amer. Statist. Assoc. 50 701–719. \endbibitem
  33. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmHansen, \bfnmM. H.\binitsM. H., \bauthor\bsnmMadow, \bfnmW. G.\binitsW. G. \AND\bauthor\bsnmTepping, \bfnmB. J.\binitsB. J. (\byear1983). \btitleAn evaluation of model-dependent and probability sampling inferences in sample surveys. \bjournalJ. Amer. Statist. Assoc. \bvolume78 \bpages776–793. \endbibitem
  34. {bincollection}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmHartley, \bfnmH. O.\binitsH. O. (\byear1959). \btitleAnalytical studies of survey data. In Volume in Honor of Corrado Gini 1–32. Instituto di Statistica, \bnoteRome. \endbibitem
  35. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmHartley, \bfnmH. O.\binitsH. O. \AND\bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K. (\byear1968). \btitleA new estimation theory for sample surveys. \bjournalBiometrika \bvolume55 \bpages547–557. \endbibitem
  36. {bincollection}[mr] \bauthor\bsnmHaziza, \bfnmDavid\binitsD. (\byear2009). \btitleImputation and inference in the presence of missing data. In \bbooktitleSample Surveys: Design, Methods and Applications. \bseriesHandbook of Statist. \bvolume29 \bpages215–246. \bpublisherElsevier/North-Holland, Amsterdam. \biddoi=10.1016/S0169-7161(08)00010-2, mr=2654640 \endbibitem
  37. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmHaziza, \bfnmD.\binitsD. \AND\bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K. (\byear2006). \btitleA nonresponse model approach to inference under imputation for missing survey data. \bjournalSurvey Methodol. \bvolume32 \bpages53–64. \endbibitem
  38. {barticle}[mr] \bauthor\bsnmHoadley, \bfnmBruce\binitsB. (\byear1969). \btitleThe compound multinomial distribution and Bayesian analysis of categorical data from finite populations. \bjournalJ. Amer. Statist. Assoc. \bvolume64 \bpages216–229. \bidmr=0240916 \endbibitem
  39. {barticle}[mr] \bauthor\bsnmHorvitz, \bfnmD. G.\binitsD. G. \AND\bauthor\bsnmThompson, \bfnmD. J.\binitsD. J. (\byear1952). \btitleA generalization of sampling without replacement from a finite universe. \bjournalJ. Amer. Statist. Assoc. \bvolume47 \bpages663–685. \bidmr=0053460 \endbibitem
  40. {barticle}[mr] \bauthor\bsnmJiang, \bfnmJiming\binitsJ. \AND\bauthor\bsnmLahiri, \bfnmP.\binitsP. (\byear2006). \btitleMixed model prediction and small area estimation. \bjournalTest \bvolume15 \bpages1–96. \biddoi=10.1007/BF02595419, mr=2252522 \endbibitem
  41. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmKalton, \bfnmG.\binitsG. (\byear2002). \btitleModels in practice of survey sampling. \bjournalJ. Off. Statist. \bvolume18 \bpages129–154. \endbibitem
  42. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmKim, \bfnmJ. K.\binitsJ. K. \AND\bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K. (\byear2009). \btitleA unified approach to linearization variance estimation from survey data after imputation for item nonresponse. \bjournalBiometrika \bvolume96 \bpages917–932. \endbibitem
  43. {barticle}[mr] \bauthor\bsnmKim, \bfnmJae Kwang\binitsJ. K., \bauthor\bsnmBrick, \bfnmJ. Michael\binitsJ. M., \bauthor\bsnmFuller, \bfnmWayne A.\binitsW. A. \AND\bauthor\bsnmKalton, \bfnmGraham\binitsG. (\byear2006). \btitleOn the bias of the multiple-imputation variance estimator in survey sampling. \bjournalJ. R. Stat. Soc. Ser. B Stat. Methodol. \bvolume68 \bpages509–521. \biddoi=10.1111/j.1467-9868.2006.00546.x, mr=2278338 \endbibitem
  44. {barticle}[mr] \bauthor\bsnmKorn, \bfnmEdward L.\binitsE. L. \AND\bauthor\bsnmGraubard, \bfnmBarry I.\binitsB. I. (\byear2003). \btitleEstimating variance components by using survey data. \bjournalJ. R. Stat. Soc. Ser. B Stat. Methodol. \bvolume65 \bpages175–190. \biddoi=10.1111/1467-9868.00379, mr=1959830 \endbibitem
  45. {barticle}[mr] \bauthor\bsnmKovar, \bfnmJ. G.\binitsJ. G., \bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K. \AND\bauthor\bsnmWu, \bfnmC. F. J.\binitsC. F. J. (\byear1988). \btitleBootstrap and other methods to measure errors in survey estimates. \bjournalCanad. J. Statist. \bvolume16 \bpages25–45. \biddoi=10.2307/3315214, mr=0997120 \endbibitem
  46. {barticle}[mr] \bauthor\bsnmLazar, \bfnmNicole A.\binitsN. A. (\byear2003). \btitleBayesian empirical likelihood. \bjournalBiometrika \bvolume90 \bpages319–326. \biddoi=10.1093/biomet/90.2.319, mr=1986649 \endbibitem
  47. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmLazar, \bfnmR.\binitsR., \bauthor\bsnmMeeden, \bfnmG.\binitsG. \AND\bauthor\bsnmNelson, \bfnmD.\binitsD. (\byear2008). \btitleA non-informative Bayesian approach to finite population sampling using auxiliary variables. \bjournalSurvey Methodol. \bvolume34 \bpages51–64. \endbibitem
  48. {barticle}[mr] \bauthor\bsnmLi, \bfnmHuilin\binitsH. \AND\bauthor\bsnmLahiri, \bfnmP.\binitsP. (\byear2010). \btitleAn adjusted maximum likelihood method for solving small area estimation problems. \bjournalJ. Multivariate Anal. \bvolume101 \bpages882–892. \biddoi=10.1016/j.jmva.2009.10.009, mr=2584906 \endbibitem
  49. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmLittle, \bfnmR. J. A.\binitsR. J. A. (\byear1983). \btitleEstimating a finite population mean from unequal probability samples. \bjournalJ. Amer. Statist. Assoc. \bvolume78 \bpages596–604. \endbibitem
  50. {barticle}[mr] \bauthor\bsnmLittle, \bfnmRoderick J.\binitsR. J. (\byear2008). \btitleWeighting and prediction in sample surveys. \bjournalCalcutta Statist. Assoc. Bull. \bvolume60 \bpages147–167. \bidmr=2553424 \bptnotecheck year \endbibitem
  51. {bbook}[mr] \bauthor\bsnmLittle, \bfnmRoderick J. A.\binitsR. J. A. \AND\bauthor\bsnmRubin, \bfnmDonald B.\binitsD. B. (\byear2002). \btitleStatistical Analysis with Missing Data, \bedition2nd ed. \bpublisherWiley, \baddressHoboken, NJ. \bidmr=1925014 \endbibitem
  52. {barticle}[mr] \bauthor\bsnmLo, \bfnmAlbert Y.\binitsA. Y. (\byear1988). \btitleA Bayesian bootstrap for a finite population. \bjournalAnn. Statist. \bvolume16 \bpages1684–1695. \biddoi=10.1214/aos/1176351061, mr=0964946 \endbibitem
  53. {barticle}[mr] \bauthor\bsnmLohr, \bfnmSharon L.\binitsS. L. (\byear2007). \btitleComment: Struggles with survey weighting and regression modeling. \bjournalStatist. Sci. \bvolume22 \bpages175–178. \biddoi=10.1214/088342307000000159, mr=2408955 \endbibitem
  54. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmMahalanobis, \bfnmP. C.\binitsP. C. (\byear1944). \btitleOn large scale sample surveys. \bjournalPhil. Trans. Roy. Soc. B \bvolume231 \bpages329–351. \endbibitem
  55. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmMahalanobis, \bfnmP. C.\binitsP. C. (\byear1946). \btitleRecent experiments in statistical sampling in the Indian Statistical Institute. \bjournalJ. Roy. Statist. Soc. \bvolume109 \bpages325–378. \endbibitem
  56. {barticle}[mr] \bauthor\bsnmMalec, \bfnmDonald\binitsD. \AND\bauthor\bsnmSedransk, \bfnmJ.\binitsJ. (\byear1985). \btitleBayesian inference for finite population parameters in multistage cluster sampling. \bjournalJ. Amer. Statist. Assoc. \bvolume80 \bpages897–902. \bidmr=0819590 \endbibitem
  57. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmMeeden, \bfnmG.\binitsG. (\byear1995). \btitleMedian estimation using auxiliary information. \bjournalSurvey Methodol. \bvolume21 \bpages71–77. \endbibitem
  58. {barticle}[mr] \bauthor\bsnmMeeden, \bfnmGlen\binitsG. (\byear1999). \btitleA noninformative Bayesian approach for two-stage cluster sampling. \bjournalSankhyā Ser. B \bvolume61 \bpages133–144. \bidmr=1720718 \endbibitem
  59. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmMeeden, \bfnmG.\binitsG. (\byear2003). \btitleA noninformative Bayesian approach to small area estimation. \bjournalSurvey Methodol. \bvolume29 \bpages19–24. \endbibitem
  60. {barticle}[mr] \bauthor\bsnmMeeden, \bfnmGlen\binitsG. \AND\bauthor\bsnmVardeman, \bfnmStephen\binitsS. (\byear1991). \btitleA noninformative Bayesian approach to interval estimation in finite population sampling. \bjournalJ. Amer. Statist. Assoc. \bvolume86 \bpages972–980. \bidmr=1146345 \endbibitem
  61. {bmisc}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmMohadjer, \bfnmL.\binitsL., \bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K., \bauthor\bsnmLiu, \bfnmB.\binitsB., \bauthor\bsnmKrenzyke, \bfnmT.\binitsT. \AND\bauthor\bsnmVan de Kerckhove, \bfnmW.\binitsW. (\byear2007). \bhowpublishedHierarchical Bayes small area estimates of adult literacy using unmatched sampling and linking models. In Proceedings of the Survey Research Methods Section 3203–3209. Amer. Statist. Assoc., Alexandria, VA. \endbibitem
  62. {barticle}[mr] \bauthor\bsnmMorris, \bfnmC. E.\binitsC. E. (\byear2006). \btitleMixed model prediction and small area estimation. \bjournalTest \bvolume15 \bpages72–76. \endbibitem
  63. {bincollection}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmMurthy, \bfnmM. N.\binitsM. N. (\byear1964). \btitleOn Mahalanobis’ contributions to the development of sample survey theory and methods. In \bbooktitleContributions to Statistics \bedition(C. R. Rao, ed.) \bpages283–316. \bnoteStatistical Publishing Society, Calcutta, India. \endbibitem
  64. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmNandram, \bfnmB.\binitsB. \AND\bauthor\bsnmChoi, \bfnmJ. W.\binitsJ. W. (\byear2005). \btitleHierarchical Bayesian nonignorable nonresponse regression models for small area: An application to the NHANES data. \bjournalSurvey Methodol. \bvolume31 \bpages73–84. \endbibitem
  65. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmNandram, \bfnmB.\binitsB., \bauthor\bsnmCox, \bfnmL. H.\binitsL. H. \AND\bauthor\bsnmChoi, \bfnmJ. W.\binitsJ. W. (\byear2005). \btitleBayesian analysis of nonignorable missing categorical data: An application to bone mineral density and family income. \bjournalSurvey Methodol. \bvolume31 \bpages213–225. \endbibitem
  66. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmNandram, \bfnmB.\binitsB., \bauthor\bsnmSedransk, \bfnmJ.\binitsJ. \AND\bauthor\bsnmSmith, \bfnmS. J.\binitsS. J. (\byear1997). \btitleOrder-restricted Bayesian estimation of the age composition of a population of Atlantic cod. \bjournalJ. Amer. Statist. Assoc. \bvolume92 \bpages33–40. \endbibitem
  67. {barticle}[mr] \bauthor\bsnmNarain, \bfnmR. D.\binitsR. D. (\byear1951). \btitleOn sampling without replacement with varying probabilities. \bjournalJ. Indian Soc. Agric. Statistics \bvolume3 \bpages169–174. \bidmr=0045354 \endbibitem
  68. {barticle}[mr] \bauthor\bsnmNelson, \bfnmDavid\binitsD. \AND\bauthor\bsnmMeeden, \bfnmGlen\binitsG. (\byear1998). \btitleUsing prior information about population quantiles in finite population sampling. \bjournalSankhyā Ser. A \bvolume60 \bpages426–445. \bidmr=1718840 \endbibitem
  69. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmNeyman, \bfnmJ.\binitsJ. (\byear1934). \btitleOn the two different approaches of the representative method: The method of stratified sampling and the method of purposive selection. \bjournalJ. Roy. Statist. Soc. \bvolume97 \bpages558–606. \endbibitem
  70. {barticle}[mr] \bauthor\bsnmOwen, \bfnmArt B.\binitsA. B. (\byear1988). \btitleEmpirical likelihood ratio confidence intervals for a single functional. \bjournalBiometrika \bvolume75 \bpages237–249. \biddoi=10.1093/biomet/75.2.237, mr=0946049 \endbibitem
  71. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmPfeffermann, \bfnmD.\binitsD. (\byear1993). \btitleThe role of sampling weights when modeling survey data. \bjournalInternat. Statist. Rev. \bvolume61 \bpages317–337. \endbibitem
  72. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmPfeffermann, \bfnmD.\binitsD. (\byear2008). \btitleDiscussion. \bjournalCalcutta Statist. Assoc. Bull. \bvolume60 \bpages170–175. \bidmr=2750142 \endbibitem
  73. {bincollection}[mr] \bauthor\bsnmPfeffermann, \bfnmDanny\binitsD. \AND\bauthor\bsnmSverchkov, \bfnmM. Yu.\binitsM. Y. (\byear2003). \btitleFitting generalized linear models under informative sampling. In \bbooktitleAnalysis of Survey Data (Southampton, 1999). \bseriesWiley Ser. Surv. Methodol. (\beditionR. Chambers and C. J. Shinner, eds.) \bpages175–195. \bpublisherWiley, \baddressChichester. \biddoi=10.1002/0470867205.ch12, mr=1978851 \endbibitem
  74. {barticle}[mr] \bauthor\bsnmPfeffermann, \bfnmDanny\binitsD., \bauthor\bsnmMoura, \bfnmFernando Antonio Da Silva\binitsF. A. S. \AND\bauthor\bsnmSilva, \bfnmPedro Luis do Nascimento\binitsP. L. N. (\byear2006). \btitleMulti-level modelling under informative sampling. \bjournalBiometrika \bvolume93 \bpages943–959. \biddoi=10.1093/biomet/93.4.943, mr=2285081 \endbibitem
  75. {barticle}[mr] \bauthor\bsnmPfeffermann, \bfnmD.\binitsD., \bauthor\bsnmSkinner, \bfnmC. J.\binitsC. J., \bauthor\bsnmHolmes, \bfnmD. J.\binitsD. J., \bauthor\bsnmGoldstein, \bfnmH.\binitsH. \AND\bauthor\bsnmRasbash, \bfnmJ.\binitsJ. (\byear1998). \btitleWeighting for unequal selection probabilities in multilevel models. \bjournalJ. R. Stat. Soc. Ser. B Stat. Methodol. \bvolume60 \bpages23–56. \biddoi=10.1111/1467-9868.00106, mr=1625668 \bptnotecheck related \endbibitem
  76. {barticle}[mr] \bauthor\bsnmRabe-Hesketh, \bfnmSophia\binitsS. \AND\bauthor\bsnmSkrondal, \bfnmAnders\binitsA. (\byear2006). \btitleMultilevel modelling of complex survey data. \bjournalJ. Roy. Statist. Soc. Ser. A \bvolume169 \bpages805–827. \biddoi=10.1111/j.1467-985X.2006.00426.x, mr=2291345 \endbibitem
  77. {barticle}[mr] \bauthor\bsnmRaghunathan, \bfnmTrivellore E.\binitsT. E., \bauthor\bsnmXie, \bfnmDawei\binitsD., \bauthor\bsnmSchenker, \bfnmNathaniel\binitsN., \bauthor\bsnmParsons, \bfnmVan L.\binitsV. L., \bauthor\bsnmDavis, \bfnmWilliam W.\binitsW. W., \bauthor\bsnmDodd, \bfnmKevin W.\binitsK. W. \AND\bauthor\bsnmFeuer, \bfnmEric J.\binitsE. J. (\byear2007). \btitleCombining information from two surveys to estimate county-level prevalence rates of cancer risk factors and screening. \bjournalJ. Amer. Statist. Assoc. \bvolume102 \bpages474–486. \biddoi=10.1198/016214506000001293, mr=2370848 \endbibitem
  78. {binproceedings}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K. (\byear1992). \btitleEstimating totals and distribution functions using auxiliary information at the estimation stage. \bnoteIn Proceedings of the Workshop on Uses of Auxiliary Information in Surveys. Statistics Sweden. \endbibitem
  79. {bbook}[mr] \bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K. (\byear2003). \btitleSmall Area Estimation. \bpublisherWiley, \baddressHoboken, NJ. \biddoi=10.1002/0471722189, mr=1953089 \endbibitem
  80. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K. (\byear2005). \btitleInterplay between sample survey theory and practice: An appraisal. \bjournalSurvey Methodol. \bvolume31 \bpages117–138. \endbibitem
  81. {barticle}[mr] \bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K. \AND\bauthor\bsnmGhangurde, \bfnmP. D.\binitsP. D. (\byear1972). \btitleBayesian optimization in sampling finite populations. \bjournalJ. Amer. Statist. Assoc. \bvolume67 \bpages439–443. \bidmr=0314161 \endbibitem
  82. {binproceedings}[mr] \bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K. \AND\bauthor\bsnmWu, \bfnmC. F. J.\binitsC. F. J. (\byear1987). \btitleMethods for standard errors and confidence intervals from sample survey data: Some recent work. In \bbooktitleProceedings of the 46th Session of the International Statistical Institute, Vol. 3 (Tokyo, 1987) \bvolume52 \bpages5–21. \bidmr=1027183 \endbibitem
  83. {bmisc}[mr] \bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K. \AND\bauthor\bsnmWu, \bfnmC.\binitsC. (\byear2009). \bhowpublishedEmpirical likelihood methods. In Handbook of Statistics—Sample Surveys: Inference and Analysis 29B (D. Pfeffermann and C. R. Rao, eds.) 189–208. North-Holland, Amsterdam. \bidmr=2668352 \endbibitem
  84. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K. \AND\bauthor\bsnmWu, \bfnmC.\binitsC. (\byear2010). \btitleBayesian pseudo empirical likelihood intervals for complex surveys. \bjournalJ. Roy. Statist. Soc. Ser. B \bvolume72 \bpages533–544. \endbibitem
  85. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K., \bauthor\bsnmJocelyn, \bfnmW.\binitsW. \AND\bauthor\bsnmHidiroglou, \bfnmM. A.\binitsM. A. (\byear2003). \btitleConfidence interval coverage probabilities for regression estimators in uni-phase and two-phase sampling. \bjournalJ. Off. Statist. \bvolume19 \bpages17–30. \endbibitem
  86. {bmisc}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K., \bauthor\bsnmVerret, \bfnmF.\binitsF. \AND\bauthor\bsnmHidiroglou, \bfnmM. A.\binitsM. A. (\byear2010). \bhowpublishedA weighted estimating equations approach to inference for two-level models from survey data. In Proc. Survey Sec. Statistical Society of Canada Annual Meeting. May 2010, Québec, Canada. \endbibitem
  87. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K., \bauthor\bsnmHidiroglou, \bfnmM.\binitsM., \bauthor\bsnmYung, \bfnmW.\binitsW. \AND\bauthor\bsnmKovacevic, \bfnmM.\binitsM. (\byear2010). \btitleRole of weights in descriptive and analytical inference from survey data: An overview. \bjournalJ. Ind. Soc. Agric. Statist. \bvolume64 \bpages129–135. \endbibitem
  88. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmReiter, \bfnmJ. P.\binitsJ. P., \bauthor\bsnmRaghunathan, \bfnmT. E.\binitsT. E. \AND\bauthor\bsnmKinney, \bfnmS. K.\binitsS. K. (\byear2006). \btitleThe importance of modeling the sampling design in multiple imputation for missing data. \bjournalSurvey Methodol. \bvolume32 \bpages143–149. \endbibitem
  89. {barticle}[mr] \bauthor\bsnmRobinson, \bfnmP. M.\binitsP. M. \AND\bauthor\bsnmSärndal, \bfnmCarl-Erik\binitsC.-E. (\byear1983). \btitleAsymptotic properties of the generalized regression estimator in probability sampling. \bjournalSankhyā Ser. B \bvolume45 \bpages240–248. \bidmr=0748468 \endbibitem
  90. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmRoyall, \bfnmR. M.\binitsR. M. (\byear1968). \btitleAn old approach to finite population sampling theory. \bjournalJ. Amer. Statist. Assoc. \bvolume63 \bpages1269–1279. \endbibitem
  91. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmRoyall, \bfnmR. M.\binitsR. M. (\byear1970). \btitleOn finite population sampling theory under certain linear regression models. \bjournalBiometrika \bvolume57 \bpages377–387. \endbibitem
  92. {barticle}[mr] \bauthor\bsnmRoyall, \bfnmRichard M.\binitsR. M. \AND\bauthor\bsnmPfeffermann, \bfnmDany\binitsD. (\byear1982). \btitleBalanced samples and robust Bayesian inference in finite population sampling. \bjournalBiometrika \bvolume69 \bpages401–409. \biddoi=10.1093/biomet/69.2.401, mr=0671978 \endbibitem
  93. {barticle}[mr] \bauthor\bsnmRubin, \bfnmDonald B.\binitsD. B. (\byear1981). \btitleThe Bayesian bootstrap. \bjournalAnn. Statist. \bvolume9 \bpages130–134. \bidmr=0600538 \endbibitem
  94. {bbook}[mr] \bauthor\bsnmRubin, \bfnmDonald B.\binitsD. B. (\byear1987). \btitleMultiple Imputation for Nonresponse in Surveys. \bpublisherWiley, \baddressNew York. \biddoi=10.1002/9780470316696, mr=0899519 \endbibitem
  95. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmSärndal, \bfnmC.-E.\binitsC.-E. (\byear2007). \btitleThe calibration approach in survey theory and practice. \bjournalSurvey Methodol. \bvolume33 \bpages99–119. \endbibitem
  96. {bbook}[mr] \bauthor\bsnmSärndal, \bfnmCarl-Erik\binitsC.-E., \bauthor\bsnmSwensson, \bfnmBengt\binitsB. \AND\bauthor\bsnmWretman, \bfnmJan\binitsJ. (\byear1992). \btitleModel Assisted Survey Sampling. \bpublisherSpringer, \baddressNew York. \bidmr=1140409 \endbibitem
  97. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmScott, \bfnmA. J.\binitsA. J. \AND\bauthor\bsnmSmith, \bfnmT. M. F.\binitsT. M. F. (\byear1969). \btitleEstimation in multi-stage surveys. \bjournalJ. Amer. Statist. Assoc. \bvolume64 \bpages830–840. \endbibitem
  98. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmSedransk, \bfnmJ.\binitsJ. (\byear1977). \btitleSampling problems in the estimation of the money supply. \bjournalJ. Amer. Statist. Assoc. \bvolume72 \bpages516–521. \endbibitem
  99. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmSedransk, \bfnmJ.\binitsJ. (\byear2008). \btitleAssessing the value of Bayesian methods for inference about finite population quantities. \bjournalJ. Off. Statist. \bvolume24 \bpages495–506. \endbibitem
  100. {barticle}[mr] \bauthor\bsnmSinharay, \bfnmSandip\binitsS. \AND\bauthor\bsnmStern, \bfnmHal S.\binitsH. S. (\byear2003). \btitlePosterior predictive model checking in hierarchical models. \bjournalJ. Statist. Plann. Inference \bvolume111 \bpages209–221. \biddoi=10.1016/S0378-3758(02)00303-8, mr=1955882 \endbibitem
  101. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmSmith, \bfnmT. M. F.\binitsT. M. F. (\byear1994). \btitleSample surveys 1975–90: An age of reconciliation. \bjournalInt. Statist. Rev. \bvolume62 \bpages5–34. \endbibitem
  102. {bbook}[mr] \bauthor\bsnmValliant, \bfnmRichard\binitsR., \bauthor\bsnmDorfman, \bfnmAlan H.\binitsA. H. \AND\bauthor\bsnmRoyall, \bfnmRichard M.\binitsR. M. (\byear2000). \btitleFinite Population Sampling and Inference: A Prediction Approach. \bpublisherWiley-Interscience, \baddressNew York. \bidmr=1784794 \endbibitem
  103. {barticle}[mr] \bauthor\bsnmWoodruff, \bfnmRalph S.\binitsR. S. (\byear1952). \btitleConfidence intervals for medians and other position measures. \bjournalJ. Amer. Statist. Assoc. \bvolume47 \bpages635–646. \bidmr=0050845 \endbibitem
  104. {barticle}[mr] \bauthor\bsnmWu, \bfnmChangbao\binitsC. \AND\bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K. (\byear2006). \btitlePseudo-empirical likelihood ratio confidence intervals for complex surveys. \bjournalCanad. J. Statist. \bvolume34 \bpages359–375. \biddoi=10.1002/cjs.5550340301, mr=2328549 \endbibitem
  105. {barticle}[mr] \bauthor\bsnmYou, \bfnmYong\binitsY. \AND\bauthor\bsnmRao, \bfnmJ. N. K.\binitsJ. N. K. (\byear2002). \btitleSmall area estimation using unmatched sampling and linking models. \bjournalCanad. J. Statist. \bvolume30 \bpages3–15. \biddoi=10.2307/3315862, mr=1907674 \endbibitem
  106. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmZheng, \bfnmH.\binitsH. \AND\bauthor\bsnmLittle, \bfnmR. J. A.\binitsR. J. A. (\byear2003). \btitlePenalized spline model-based estimation of the finite population total from probability-proportional-to-size samples. \bjournalJ. Off. Statist. \bvolume19 \bpages99–117. \endbibitem
  107. {barticle}[auto:STB—2010-11-18—09:18:59] \bauthor\bsnmZheng, \bfnmH.\binitsH. \AND\bauthor\bsnmLittle, \bfnmR. J. A.\binitsR. J. A. (\byear2005). \btitleInference for the population total from probability proportional-to-size samples based on predictions from a penalized spline nonparametric model. \bjournalJ. Off. Statist. \bvolume21 \bpages1–20. \endbibitem