Frequentist size of Bayesian inequality tests
Abstract
Bayesian and frequentist criteria are fundamentally different, but often posterior and sampling distributions are asymptotically equivalent (e.g., Gaussian). For the corresponding limit experiment, we characterize the frequentist size of a certain Bayesian hypothesis test of (possibly nonlinear) inequalities. If the null hypothesis is that the (possibly infinite-dimensional) parameter lies in a certain halfspace, then the Bayesian test's size is α; if the null hypothesis is a subset of a halfspace, then size is above α (sometimes strictly); and in other cases, size may be above, below, or equal to α. Two examples illustrate our results: testing stochastic dominance and testing curvature of a translog cost function.
JEL classification: C11, C12
Keywords: Bernstein–von Mises theorem, limit experiment, nonstandard inference, stochastic dominance, translog
1 Introduction
Although Bayesian and frequentist properties are fundamentally different, often we can (approximately) achieve both. In other cases, Bayesian and frequentist summaries of the data differ greatly. We provide results on the role of null hypothesis “shape” in determining such differences. We hope to increase understanding and awareness of situations prone to large differences.
Economic theory prominently features inequality restrictions, often nonlinear.¹ For example, inequality of cumulative distribution functions (CDFs) characterizes first-order stochastic dominance (SD1), an important concept for welfare analysis. SD1 conclusions from frequentist tests (that have correct asymptotic size) and Bayesian tests may differ greatly, and the direction of the difference partly depends on whether the null hypothesis is dominance or non-dominance. Inequalities also characterize second-order stochastic dominance, which is also used for portfolio comparison in finance. An example of nonlinear inequalities is curvature constraints on production, cost, indirect utility, and other functions. Such constraints usually result from optimization, like utility or profit maximization. SD1 and curvature are detailed in Section 4; additional economic examples are reviewed in Section 3.1.
¹ Nonlinear inequalities also come from other sources. For example, as in Kaplan (2015), one can test stability of the sign of a parameter over time (or geography), or whether a treatment attenuates the effect of another regressor.
Further motivation for studying Bayesian–frequentist differences is that deriving frequentist tests for general nonlinear inequalities is notoriously difficult; e.g., see Wolak (1991). In contrast, it is (relatively) simple to compute the posterior probability that the parameter satisfies certain nonlinear inequalities, by computing the proportion of draws from the parameter's posterior in which the constraints are satisfied. Perhaps especially in the absence of a feasible frequentist method, it is helpful to understand if the Bayesian test's size differs greatly from the nominal level α.
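As a concrete sketch of this posterior-proportion computation, the following hypothetical example uses a bivariate normal posterior sampler and an illustrative nonlinear constraint; the posterior N(y, I), the observed point y, and the constraint are all stand-ins for whatever the application provides, not quantities from this paper:

```python
import random

random.seed(0)

# Hypothetical example: posterior draws of (theta1, theta2) from N(y, I_2)
# (a stand-in for the application's posterior sampler), and the nonlinear
# inequality H0: theta1^2 + theta2^2 <= 1.
y = (0.3, -0.2)          # observed estimate (illustrative)
draws = [(random.gauss(y[0], 1.0), random.gauss(y[1], 1.0))
         for _ in range(50_000)]

# Posterior probability of H0 = proportion of draws satisfying the constraint
satisfied = sum(1 for t1, t2 in draws if t1 * t1 + t2 * t2 <= 1.0)
posterior_prob = satisfied / len(draws)
print(posterior_prob)
```

The same proportion-of-draws computation works for any set of (possibly many, possibly nonlinear) inequalities; only the boolean condition changes.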
Statistically, we consider cases where the sampling distribution of an estimator is asymptotically Gaussian, while the asymptotic posterior is also Gaussian, with the same covariance (i.e., a Bernstein–von Mises theorem holds). More generally, any symmetric distribution suffices, and only a certain functional must have equivalent sampling and posterior distributions, making it easier to treat infinite-dimensional parameters. We examine the corresponding limit experiment.
In this limit experiment, to quantify Bayesian–frequentist differences, we characterize the frequentist size of a particular Bayesian test. This test rejects the null hypothesis H0 when the posterior probability of H0 is below α. In addition to being intuitive and practically salient, there are decision-theoretic reasons to examine this test, as detailed in Section 2.1. Although size gives no insight into admissibility, it captures a practical difference between reporting Bayesian and frequentist inferences.
Specifically, we describe how the Bayesian test's size depends on the shape of H0. By "the shape of H0," we mean the shape of the parameter subspace where H0 is satisfied. If this set is a halfspace, then the Bayesian test has size α. If it is strictly smaller than a halfspace (in a certain sense), then the Bayesian test's size is strictly above α. If it is not contained within any halfspace, then the Bayesian test's size may be above, equal to, or below α. An immediate corollary is that the Bayesian test's size is α when testing a single linear inequality constraint, whereas it is size-distorted when testing multiple linear inequalities.
Our results beg the question: if inferences on H0 can disagree while credible and confidence sets coincide, why not simply report the credible or confidence set?² If interest is primarily in the parameters themselves, then reporting a credible or confidence set may indeed be better. However, sometimes interest is in testing implications of economic theory or in specification testing; other times, inequalities provide economically relevant summaries of a high-dimensional parameter.
² Berger (2003) also notes this possible disagreement/agreement, but he writes, "The disagreement occurs primarily when testing a 'precise' hypothesis" (p. 2), whereas we find disagreements even with inequality hypotheses. Also, Casella and Berger (1987b, p. 344) opine, "Interval estimation is, in our opinion, superior to point null hypothesis testing," although they do not mention composite null hypotheses like in this paper.
Literature
Many papers compare Bayesian and frequentist inference. Here, we highlight examples of different types of conclusions (not all directly comparable to inequality testing): sometimes frequentist inference is more conservative, sometimes Bayesian, sometimes neither.
Some of the literature documents cases where frequentist inference is "too conservative" from a Bayesian perspective. For testing linear inequality constraints, Kline (2011, §4) provides examples showing frequentist testing to be more conservative (e.g., his Figure 1), especially as the dimension grows; his examples are consistent with our general theoretical results. As another example, under set identification, asymptotically, frequentist confidence sets for the true parameter (e.g., Imbens and Manski, 2004; Stoye, 2009) are strictly larger than the estimated identified set, whereas Bayesian credible sets are strictly smaller, as shown by Moon and Schorfheide (2012, Cor. 1).³ For testing the null of a unit root in autoregression, Sims and Uhlig (1991) say frequentist tests "accept the null more easily" (p. 1592), and they determine (sample-dependent) priors that equate p-values and posterior probabilities.
³ There seems to be a typo in the statement of Corollary 1(ii), switching the frequentist and Bayesian sets from their correct places seen in the Supplemental Material proof.
Other papers document cases where frequentist inference is "too aggressive" from a Bayesian perspective. Perhaps most famously, in Lindley's (1957) paradox, the frequentist test rejects while the Bayesian test does not. Berger and Sellke (1987) make a similar argument. In both cases, as noted by Casella and Berger (1987b), the results follow primarily from having a large prior probability on a point (or "small interval") null hypothesis, specifically P(H0) = 1/2. Arguing that P(H0) = 1/2 is "objective," Berger and Sellke (1987, p. 113) consider even moderately smaller prior probabilities on H0 to be blatant bias against it. Casella and Berger (1987b) disagree, saying P(H0) = 1/2 is "much larger than is reasonable for most problems" (p. 344).
In yet other cases, Bayesian and frequentist inferences are similar or even identical. Casella and Berger (1987a) compare Bayesian and frequentist one-sided testing of a location parameter, given a single draw from an otherwise fully known density. They compare the p-value to the infimum of the posterior probability of H0 over various classes of priors. In many cases, the infimum is attained by the improper prior of Lebesgue measure on the real line and equals the p-value (p. 109). Berger et al. (1994) establish an equivalence of Bayesian and conditional frequentist testing of a simple null hypothesis against a simple alternative. Goutis et al. (1996) consider jointly testing multiple one-sided hypotheses in a single-draw Gaussian shift experiment. If the components are mutually independent and an improper uninformative prior is used, then the posterior probability of H0 is proportional to one of the frequentist p-values they consider, but it is (weakly) smaller. This complements our setting, where we do not impose independence, finite dimensionality, or a restricted null hypothesis shape.
Paper structure and notation
Section 2 presents the setup and assumptions. Section 3 contains our main results and discussion. Section 4 illustrates our results with stochastic dominance and cost function curvature examples. Appendix A contains proofs. Appendices B, C, and D contain details on testing equality of parameters' signs, translog cost functions, and infinite-dimensional Bernstein–von Mises theorems, respectively. Acronyms used include those for cumulative distribution function (CDF), data generating process (DGP), negative semidefinite (NSD), posterior expected loss (PEL), probability density function (PDF), rejection probability (RP), and first-order stochastic dominance (SD1). Notationally, ⊆ denotes subset and ⊂ proper subset; scalars, (column) vectors, and matrices are respectively formatted as x, x (bold lowercase), and X (bold uppercase); 0 denotes the zero function, i.e., 0(t) = 0 for all t.
2 Setup and assumptions
In Section 2.1, a specific Bayesian hypothesis test is described along with the decision-theoretic context. In Section 2.2, the assumptions used for the results in Section 3 are presented and discussed. Section 2.3 contains additional details and references about "the" Bernstein–von Mises theorem.
2.1 The Bayesian test and decision-theoretic context
The Bayesian test rejects the null hypothesis when its posterior probability is below α.
Method 1 (Bayesian test).
Reject H0 if its posterior probability is below α; otherwise, accept H0.
In addition to seeming intuitive, Method 1 is a generalized Bayes decision rule that minimizes posterior expected loss (PEL) for the loss function taking value 1 − α for a type I error, α for a type II error, and zero otherwise. To see this, let p denote the posterior probability of H0 given observed data y. The PEL of the decision to reject is (1 − α)p, i.e., the type I error loss times the posterior probability that rejecting is a type I error. Similarly, the PEL of accepting is α(1 − p), i.e., the type II error loss times the probability that accepting is a type II error. PEL is thus minimized by rejecting if p < α and accepting otherwise.
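The algebra behind the threshold is one line; with type I loss 1 − α and type II loss α (the values that make the threshold come out to α) and p the posterior probability of H0:

```latex
\mathrm{PEL}(\text{reject}) = (1-\alpha)\,p,
\qquad
\mathrm{PEL}(\text{accept}) = \alpha\,(1-p),
\qquad
(1-\alpha)\,p < \alpha\,(1-p) \iff p < \alpha ,
```

so rejecting exactly when p < α minimizes PEL.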
Our results compare the frequentist size of the Bayesian test in Method 1 to α (instead of some other value) for practical and decision-theoretic reasons. Practically, the Bayesian test can be seen as treating the posterior probability of H0 as a (frequentist) p-value; we want to know if this is valid, similar to Casella and Berger (1987a). Decision-theoretically, the same loss function used to compute PEL could be used to compute a minimax risk decision rule that is closely related to a frequentist hypothesis test. Specifically, if an unbiased frequentist test with size α exists, then it is the minimax risk decision rule. Even without unbiasedness, this is approximately true given conventional values of α (below).
The minimax risk decision rule can be characterized and compared to an unbiased frequentist test. Let RP(θ) denote a test's rejection probability when θ is the true parameter, and recall the loss values 1 − α and α for type I and type II errors. "Minimax risk" minimizes
(1)   sup over θ of { (1 − α) RP(θ) 1{θ ∈ H0} + α [1 − RP(θ)] 1{θ ∉ H0} },
where the probability RP(θ) is computed under θ. Consider a test with size ᾱ. If the test is unbiased, then RP(θ) ≤ ᾱ for all θ ∈ H0 and RP(θ) ≥ ᾱ for all θ ∉ H0. If the power function is continuous in θ, then (writing ∂H0 for the boundary of H0, which is also the boundary of its complement)
sup over θ ∈ H0 of RP(θ) = ᾱ = inf over θ ∉ H0 of RP(θ), attained on ∂H0.
Thus,
the supremum in (1) equals max{ (1 − α)ᾱ, α(1 − ᾱ) },
and (1) becomes min over ᾱ of max{(1 − α)ᾱ, α(1 − ᾱ)}. The minimum value α(1 − α) is attained with ᾱ = α; if ᾱ > α, then (1 − α)ᾱ > (1 − α)α, and if ᾱ < α, then α(1 − ᾱ) > α(1 − α). Without unbiasedness, the infimum of RP(θ) over θ ∉ H0 may fall below ᾱ, so the type II error component of risk may exceed α(1 − ᾱ), weakly increasing the minimax risk. Thus, assuming continuity of the power function, the minimax risk decision rule is an unbiased frequentist hypothesis test with size α. See also Lehmann and Romano (2005, Problem 1.10) on unbiased tests as minimax risk decision rules.
Without unbiasedness, the minimax-risk-optimal size of a test is above α, but the magnitude of the difference is very small for conventional α. As a function of the test's size ᾱ, let r(ᾱ) denote the maximum risk, so (1) is the minimum of r(ᾱ) over ᾱ. Continuity of the power function still forces the rejection probability to equal ᾱ on the boundary of H0, but we now drop the unbiasedness restriction that power be at least ᾱ on H1. Dropping that restriction can only increase the type II error component of risk, so the minimizing ᾱ weakly increases above α; solving for the optimal ᾱ at conventional levels like α = 0.05 or α = 0.10 moves ᾱ above α only slightly. Such small divergence of ᾱ from α is almost imperceptible in practice.
Ideally, a single decision rule minimizes both the maximum risk in (1) and the PEL. However, if the Bayesian test's size is significantly above or below α, this is not possible. In such cases, it may help to use both Bayesian and frequentist inference and to carefully consider the differences in optimality criteria.
2.2 Assumptions
Assumption A1 states conditions on the sampling and posterior distributions of a functional in the (limit) experiment we consider. As usual, the sampling distribution treats the (functional of the) data as random and conditions on the parameter θ, whereas the posterior treats the (functional of the) parameter as random and conditions on the data y.
Assumption A1.
Let F be a continuous CDF with support ℝ and symmetry F(u) = 1 − F(−u). Let the (lone) observation Y and the parameter θ both belong to a Banach space of possibly infinite dimension. Let ψ denote a continuous linear functional on that space, with sampling distribution ψ(Y) − ψ(θ) ~ F and posterior ψ(θ) − ψ(y) ~ F given the observed Y = y.
Assumption A1 can be interpreted as a limit experiment where θ is a local mean parameter. The limit distribution F is often Gaussian, satisfying the continuity, support, and symmetry conditions. For example, consider a simple asymptotic setup leading to scalar Y and θ: with true value γ0, an estimator γ̂n satisfying √n(γ̂n − γ0) →d N(0, σ²), and a drifting centering sequence γn with √n(γ0 − γn) → θ, the statistic Yn ≡ √n(γ̂n − γn) satisfies Yn →d N(θ, σ²); more generally, σ² may be replaced by σ̂² for any consistent estimator σ̂. This type of result holds for a wide variety of models, estimators, and sampling assumptions; it is most commonly used for local power analysis but has been used for purposes like ours in papers like Andrews and Soares (2010, eqn. (4.2)). Since θ is the local mean parameter, assuming it lies in the Banach space does not require that this space is the original parameter space (e.g., the space for γ0 in the example), but it does exclude boundary points. Results for posterior asymptotic normality date back to Laplace (1820), as cited in Lehmann and Casella (1998, §6.10, p. 515).
Seeing Assumption A1 as a limit experiment, implicitly the prior has no asymptotic effect on the posterior. This phenomenon is called the Bernstein–von Mises theorem; see Section 2.3. This is an especially reasonable assumption when the Bayesian test in practice uses an uninformative prior.
For our purpose of approximating the finite-sample frequentist size of a Bayesian test, considering a fixed DGP and drifting centering parameter can be just as helpful as considering a fixed centering parameter and drifting DGP. This allows the relevant Bernstein–von Mises theorem to hold only for fixed (not drifting) DGPs. For example, in ℝ^d,
(2)   Yn ≡ √n(θ̂n − θn) →d N(h, Σ),
where θ0 is the true parameter value, θ̂n is an asymptotically normal estimator (as seen), Σ is the known or consistently estimable asymptotic covariance matrix, and Yn is a statistic based on θ̂n and centered at the deterministic sequence θn that satisfies √n(θ0 − θn) → h, the local mean parameter. This does not have a literal meaning like "we must change θn if our sample size increases," just as a drifting DGP does not mean literally that "the population distribution changes as we collect more data"; rather, it is simply a way to capture the idea of θn being "close to" the true θ0 in the asymptotics. For the posterior, letting the local parameter be √n(θ − θn) and again conditioning on the observed Yn = yn, assuming a Bernstein–von Mises theorem,
(3)   √n(θ − θn) | Yn = yn ⇝ N(yn, Σ).
The common case of a Gaussian limit (plus a Bernstein–von Mises theorem) generally satisfies Assumption A1. When the Banach space in Assumption A1 is ℝ^d, continuous linear functionals are simply linear combinations ψ(x) = c′x for some constant vector c. With multivariate Gaussian Y and θ, such linear combinations are (scalar) Gaussian random variables, satisfying the assumption. More generally, including infinite-dimensional spaces, if Y is a Gaussian process in some Banach space and ψ belongs to the dual of that space, then ψ(Y) is a scalar Gaussian random variable; e.g., see Definition 2.2.1(ii) in Bogachev (1998, p. 42) and van der Vaart and Wellner (1996, pp. 376–377). In infinite dimensions, it is usually even easier to show the functional's asymptotic normality by direct arguments.
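In the finite-dimensional Gaussian case, the verification is one line: if Y ~ N(θ, Σ) in ℝ^d and ψ(x) = c′x, then

```latex
\psi(Y) - \psi(\theta) = c'(Y - \theta) \sim N\!\left(0,\; c'\Sigma c\right),
```

whose CDF (for c′Σc > 0) is continuous, has support ℝ, and is symmetric about zero, as Assumption A1 requires.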
2.3 Bernstein–von Mises theorems
Seeing Assumption A1 as a limit experiment, a Bernstein–von Mises theorem holds: the asymptotic sampling and posterior distributions are both F. This is equivalent to using an improper uninformative prior in the limit experiment. For example, with sampling distribution Y | θ ~ N(θ, Σ) and prior θ ~ N(μ0, τΩ), the posterior is
θ | Y = y ~ N( (Σ^{-1} + (τΩ)^{-1})^{-1} (Σ^{-1}y + (τΩ)^{-1}μ0), (Σ^{-1} + (τΩ)^{-1})^{-1} ),
and taking τ → ∞ yields the posterior θ | Y = y ~ N(y, Σ), satisfying Assumption A1. The improper prior is fine for inequality testing because only posterior probabilities are used (see Method 1), unlike with point null hypothesis testing based on Bayes factors that involve a ratio of prior probabilities (e.g., Bayarri et al., 2012).
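The flat-prior limit is easy to check numerically in the scalar case; the values of y, the sampling variance, and the prior variances below are illustrative choices, not quantities from the paper:

```python
# Conjugate-normal update (scalar case): with sampling Y | theta ~ N(theta, s2)
# and prior theta ~ N(0, tau2), the posterior is N(mean, var) with
#   var = 1 / (1/s2 + 1/tau2)   and   mean = var * y / s2.
# As tau2 grows (flatter prior), the posterior approaches N(y, s2).
y, s2 = 1.7, 2.0
for tau2 in (1.0, 10.0, 1e6):
    var = 1.0 / (1.0 / s2 + 1.0 / tau2)
    mean = var * (y / s2)
    print(tau2, round(mean, 4), round(var, 4))
```

By the last iteration (tau2 = 1e6), the posterior mean and variance match y and s2 to several decimal places, mirroring the τ → ∞ limit in the text.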
There are versions of the Bernstein–von Mises theorem for parametric, semiparametric, and nonparametric models. The first two are discussed below. The third is not necessary (though it can be sufficient) for Assumption A1 since the functional is finite-dimensional, so discussion is relegated to Appendix D.
Parametric versions of the Bernstein–von Mises theorem are the oldest and can be found in textbooks. They differ in regularity conditions and in how they quantify the distance between two distributions, but they share the requirement that the prior density be both continuous and positive at the true value. For example, see Theorem 10.1 in van der Vaart (1998, §10.2) and Theorems 20.1–20.3 in DasGupta (2008, §20.2), where Theorem 20.3 allows non-iid sampling.
General semiparametric versions of the Bernstein–von Mises theorem have been established, too. For example, see Shen (2002), Bickel and Kleijn (2012), and Castillo and Rousseau (2015), who allow non-iid sampling. There are also earlier papers for specific models like GMM and quantile regression; see Hahn (1997, Thm. G and footnote 13), Kwan (1999, Thm. 2), Kim (2002, Prop. 1), Lancaster (2003, Ex. 2), Schennach (2005, p. 36), Sims (2010, Sec. III.2), and Norets (2015, Thm. 1), among others.
3 Results and discussion
Theorem 1 contains this paper’s main results. Discussion and special cases follow.
Theorem 1.
Let Assumption A1 hold. Consider testing a null hypothesis H0 against its negation H1 with the Bayesian test in Method 1, where H0 restricts the parameter to a subset of the Banach space in Assumption A1.


(i) If there exists a functional ψ satisfying Assumption A1 and a value c such that H0 holds if and only if ψ(θ) ≤ c (i.e., the null region is a halfspace), then the Bayesian test's size is α, and its type I error rate is α when ψ(θ) = c.

(ii) If there exists a functional ψ satisfying Assumption A1 and a value c such that H0 implies ψ(θ) ≤ c, with equality attained at some point of the closure of the null region, then the Bayesian test's size is α or greater.

(iii) Continuing from (ii), assume there exists a second functional satisfying Assumption A1 that, jointly with ψ, has a well-defined bivariate distribution with the properties below. Assume the boundary point from (ii) can be expressed through these functionals, and assume there exists a set of parameter values satisfying ψ(θ) ≤ c but violating H0 such that (a) the set has positive Lebesgue measure, (b) the bivariate distribution has a strictly positive PDF over it, and (c) the posterior probability of H0 is continuous in the observed data. Then, the Bayesian test's rejection probability is strictly above α at that boundary point, and its size is strictly above α.

(iv) If, contrary to (ii), there do not exist any ψ and c such that H0 implies ψ(θ) ≤ c (i.e., the null region is not a subset of any halfspace), then the Bayesian test's size may be greater than, equal to, or less than α, and it may depend on the sampling/posterior distribution.
3.1 Discussion of results
Intuitively, part (i) holds by the symmetry of F. It holds for higher-dimensional parameters by finding a single inequality on a scalar-valued functional that is both necessary and sufficient for H0. Parts (ii) and (iii) hold because when parts of the halfspace are carved away to make the null region smaller, the posterior probability of H0 (at any observed y) becomes smaller, making the Bayesian test more likely to reject. For infinite-dimensional parameters, this logic is essentially applied to a test of a finite-dimensional necessary condition for H0.
Part (ii) has a geometric interpretation: if the null region has a supporting hyperplane, then the Bayesian test's size is at least α. Part (iii) gives sufficient conditions for the Bayesian test's RP to be strictly above α at any θ that is a support point of the null region.
Part (iii) is partly a result of the prior P(H0) being "small" when the null region is small. That is, the prior over the parameter is always the same (regardless of H0), so the implicit prior P(H0) shrinks when the null region shrinks. (Technically, the limit experiment's prior is an improper constant prior, so P(H0) is not well-defined, but the qualitative idea remains.) Unless H0 is a halfspace, this differs from Berger and Sellke (1987) and others who only consider "objective" priors with P(H0) = 1/2. Whether placing a prior on the null (like P(H0) = 1/2) or on the parameter is more appropriate depends on the empirical setting; e.g., do we have prior reason to suspect SD1? Often it is easier computationally not to set a specific P(H0); e.g., one may use the same posterior to compute probabilities of many different hypotheses. However, for hypotheses like SD1, this can lead to a very small implicit "P(H0)" and consequently very large rejection probabilities (and size distortion).
Although the implicit P(H0) partially explains part (iii), the shape of H0 still plays an important role. For example, in ℝ², let H0 hold when θ1 and θ2 share the same sign, so the null region comprises the first and third quadrants (and thus is not contained in any halfspace). This is the same "size" as a halfspace. However, with bivariate normal sampling and posterior distributions, part (i) implies the Bayesian test of a halfspace has size α, whereas the size of the Bayesian test of this quadrant-pair H0 may be either strictly above or equal to α, depending on the correlation. For example, let Y have a bivariate normal sampling distribution with perfectly negatively correlated components. Then the test is equivalent to a scalar test in which the null region is a finite, closed interval, in which case the Bayesian test's size strictly exceeds α by part (iii). The same holds for other strong negative correlations, but size decreases to α as the correlation increases; see Appendix B as well as Kaplan (2015, §3.2) for details.
Parts (ii) and (iii) can apply to null hypotheses defined through a function of θ that is directionally differentiable, as in Fang and Santos (2015) and others. For example, all the examples in Section 4 of Fang and Santos (2015) concern θ belonging to a convex set. They provide a frequentist resampling scheme that is consistent and corresponding hypothesis tests that control asymptotic size. Thus, in cases like this where part (iii) applies, not only is the Bayesian test's (asymptotic) size strictly above α, but it is also strictly above the size of an available frequentist test.
The condition of H0 being a subset of a halfspace holds for many economic examples. The examples of stochastic dominance and curvature constraints (on functions describing cost, production, etc.) are explored in Section 4. As noted above, the examples in Section 4 of Fang and Santos (2015) also satisfy this condition, "encompass[ing] tests of moment inequalities, shape restrictions, and the validity of random utility models" (§4, p. 25), the latter referring to Stoye and Kitamura (2013). The general moment inequality null hypothesis in equation (58) of Fang and Santos (2015) includes special cases like testing discrete (or ordinal) first-order stochastic dominance and testing the performance of financial trading rules against a benchmark as in equations (4) and (5) of Sullivan et al. (1999, 2001), among many other applications. Wolak (1989, §6) considers shape/monotonicity restrictions in his model of residential electricity demand. Example 4.2 of Fang and Santos (2015) considers shape restrictions on the infinite-dimensional regression quantile process, e.g., the restriction that the coefficient on the regressor of interest is monotonic in the quantile index. In finance, Patton and Timmermann (2010) test asset return monotonicity (of various types) using null hypotheses that are all strict subsets of halfspaces, such as (with altered notation) the hypotheses in their (5), (13), and (19).
Whether a hypothesis is treated as H0 or H1 affects the size of the Bayesian test: if H0 satisfies part (iii), then its complement does not. Combining parts (iii) and (iv), the Bayesian test of H0 may have size strictly above α while the Bayesian test of the complement has size strictly below or equal to α, as with SD1 (Section 4.1).
Many nonlinear inequalities could be recast as linear inequalities, but at the expense of additional approximation error. For example, for H0 defined by g(γ) ≤ 0 with underlying (nonlocal) parameter γ, the nonlinear constraint could be linearized around the true γ0. By the delta method (e.g., Hansen, 2018, Thm. 6.12.3), if g is continuously differentiable in a neighborhood of γ0 and √n(γ̂ − γ0) is asymptotically normal, then √n(g(γ̂) − g(γ0)) is asymptotically normal as well. To be concrete, imagine γ ∈ ℝ² and g(γ) = γ′γ − 1, so the null region is the unit disk in ℝ²; the delta method linearizes g around γ0 using its gradient 2γ0. Further imagine a normal sampling distribution for γ̂ in a finite sample of n observations, with the corresponding posterior. Then the Bayesian test's size is strictly above α, as in part (iii); as the sampling variance grows, the size grows toward one. In apparent contradiction, the linearized (single-inequality) version suggests the Bayesian test has correct asymptotic size, by part (i). The "contradiction" is simply that the asymptotic result is less accurate, due to the delta method's linear approximation of g. We avoid this layer of delta method approximation by treating nonlinear inequalities directly, providing better finite-sample insights. These insights remain practically helpful for any n as long as the sampling and posterior distributions are close to their limits.
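The delta-method linearization invoked here is the standard one: if √n(γ̂ − γ0) →d Z and g is continuously differentiable at γ0, then

```latex
\sqrt{n}\,\bigl( g(\hat\gamma) - g(\gamma_0) \bigr) \;\xrightarrow{d}\; \nabla g(\gamma_0)' Z ,
```

so the curved null boundary {γ : g(γ) = 0} is replaced by its tangent hyperplane at γ0, turning a nonlinear inequality into a single linear one (a halfspace) in the limit; the curvature that distinguishes the two cases in Theorem 1 is exactly what this approximation discards.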
3.2 Special cases of results
In the special case of a scalar parameter, similar results to part (i) are found in the literature, like in Casella and Berger (1987a). Less general versions of parts (ii) and (iii) have also been given for special cases.
Part (iii) covers a special case explored in examples by Kline (2011, §4). Let θ ∈ ℝ^d, with H0: θ ≤ 0 (elementwise) against the alternative that at least one element exceeds zero. Kline (2011, p. 3136) explains the possible divergence of Bayesian and frequentist conclusions as the dimension d grows, when the distribution is multivariate normal with identity covariance matrix. He gives an example of observing a draw slightly inside the null region in every component, where for large d the Bayesian posterior probability of H0 is near zero while the frequentist p-value is near one. Inverting his example illustrates part (iv). If H0 and H1 are switched, then the divergence is in the opposite direction: a large posterior probability of H0 can occur even when the p-value is near zero. For example, at the corner of the rejection region for the likelihood ratio test with size α, the corresponding posterior probability of the switched H0 can be large even though the test is on the verge of rejecting.
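The vanishing posterior probability in this style of example is easy to reproduce; below we take the illustrative draw y = (−1, …, −1)′, one standard deviation inside the null region in every component (the specific value −1 is our assumption, chosen to mimic the flavor of Kline's setup, not his exact numbers):

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Observing y = (-1, ..., -1): with a flat prior, the posterior is N(y, I_d),
# and the components are independent, so
#   P(H0 | y) = prod_j P(theta_j <= 0 | y_j) = Phi(0 - (-1))^d = Phi(1)^d,
# which decays geometrically in the dimension d.
for d in (1, 5, 20, 80):
    print(d, Phi(1.0) ** d)
```

Each component alone is comfortably inside the null (posterior probability Phi(1) ≈ 0.84), yet the joint posterior probability of H0 is tiny for large d, while a frequentist test sees no evidence at all against θ ≤ 0.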
Theorem 1 also includes the special case of linear inequalities in ℝ^d. Part (i) states that for a single linear inequality, the Bayesian test has size α. Part (iii) states that for multiple linear inequalities, the Bayesian test's size is strictly above α, and its RP is strictly above α at every boundary point of the null region.
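A small Monte Carlo in the limit experiment illustrates both statements; the bivariate standard-normal setup, the evaluation point θ = 0, and α = 0.05 are illustrative choices:

```python
import math, random

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(0)
alpha, reps = 0.05, 200_000

# Limit experiment: Y ~ N(theta, I); flat-prior posterior theta | y ~ N(y, I).
# Rejection probabilities are evaluated at the boundary point theta = 0.
rej1 = rej2 = 0
for _ in range(reps):
    y1, y2 = random.gauss(0, 1), random.gauss(0, 1)
    # One linear inequality H0: theta1 <= 0  ->  P(H0 | y) = Phi(-y1)
    if Phi(-y1) < alpha:
        rej1 += 1
    # Two linear inequalities H0: theta1 <= 0 and theta2 <= 0
    # (independent posterior components, so probabilities multiply)
    if Phi(-y1) * Phi(-y2) < alpha:
        rej2 += 1
print(rej1 / reps, rej2 / reps)
```

The first rejection rate is (up to simulation error) exactly α, matching part (i); the second is far above α, matching part (iii).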
4 Examples
We illustrate Theorem 1 through examples of first-order stochastic dominance and curvature constraints in Sections 4.1 and 4.2, respectively.
4.1 Example: first-order stochastic dominance
For testing first-order stochastic dominance (SD1), let X1, …, Xn be an iid sample with continuous CDF F. In the two-sample setting, a second iid sample with continuous CDF G is drawn independent of the first; in the one-sample setting, G is nonrandom (known). All distributions are continuous. One-sample and two-sample SD1 both take the null to be F(t) ≤ G(t) for all t; "non-SD1" means SD1 is not satisfied.
First, we show how Theorem 1 applies to SD1. Second, we provide analytic results from the limit experiment. Third, we show simulated finitesample results.
4.1.1 SD1: application of Theorem 1
As we show below, Theorem 1 implies that the Bayesian test's asymptotic size is strictly above α when the null hypothesis is SD1 but that this may not hold when the null is non-SD1. This subsection shows how SD1 and non-SD1 satisfy the conditions of parts (iii) and (iv) of Theorem 1, respectively.
Although θ is infinite-dimensional, only finite-dimensional marginal distributions are required to apply Theorem 1. Consider the simpler part (ii) first. Let Yn(·) ≡ √n(F̂n(·) − G(·)) and θn(·) ≡ √n(F(·) − G(·)), where (from the frequentist view) θn → θ, the local mean parameter. SD1 is equivalent to θ(t) ≤ 0 for all t. Although limits of the full infinite-dimensional sequences are tractable (see Section 4.1.2), only a scalar-valued functional is needed for part (ii). Let ψ(θ) = θ(t0) for some (any) point t0, so SD1 implies ψ(θ) ≤ 0, satisfying the condition of part (ii). Since F̂n(t0) is the sample mean of the indicators 1{Xi ≤ t0} and F(t0) is their population mean, we are concerned only with the mean of a random variable. The asymptotic sampling distribution of ψ(Yn) − ψ(θn) is Gaussian, satisfying the continuity, support, and symmetry conditions in Assumption A1. The remainder of Assumption A1 is satisfied if a semiparametric Bernstein–von Mises theorem for the mean holds. This and even stronger results hold with a Dirichlet process prior, as in Lo (1983).
For part (iii), we only need strengthen the Bernstein–von Mises theorem from a scalar to a bivariate vector, which again holds with a Dirichlet process prior (Lo, 1983), for example. Here, let ψk(θ) = θ(tk) for two points t1 < t2, so SD1 implies both ψ1(θ) ≤ 0 and ψ2(θ) ≤ 0. The set of θ with ψ1(θ) ≤ 0 but ψ2(θ) > 0 violates SD1 while respecting the halfspace bound, satisfying the condition in part (iii); in the bivariate coordinates, this set has positive (indeed infinite) Lebesgue measure, as required. The bivariate asymptotic distribution is bivariate normal, which again satisfies Assumption A1 as well as the continuity and strictly positive PDF requirements.
For testing non-SD1, part (iv) applies. Non-SD1 is satisfied in the entire halfspace where θ(t0) > 0, as well as in much of the complement halfspace (e.g., if θ(t0) ≤ 0 but θ(t1) > 0 for some other t1), so it cannot be contained in any halfspace.
For the two-sample setting, the infinite-dimensional limits in Section 4.1.2 are more than sufficient to establish the scalar and bivariate conditions of part (iii), and again non-SD1 cannot be contained in any halfspace.
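To make the Dirichlet-process/Bayesian-bootstrap machinery concrete, here is a minimal one-sample sketch: posterior draws of the CDF are weighted empirical CDFs with Dirichlet(1, …, 1) weights, and the posterior probability of SD1 is the fraction of draws satisfying the inequalities on a grid. The N(1, 1) data, the standard normal comparison CDF, the grid, and the sample sizes are all illustrative assumptions, not the paper's simulation design:

```python
import math, random

random.seed(2)

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical one-sample setup: data from N(1, 1), which first-order
# dominates the comparison CDF G = Phi, so H0: F(t) <= G(t) for all t holds.
n = 200
x = sorted(random.gauss(1.0, 1.0) for _ in range(n))
grid = [-2.0 + 0.1 * k for k in range(41)]   # checkpoints for the inequality

def sd1_holds(weights):
    """Check F_w(t) <= Phi(t) on the grid for one weighted (posterior) CDF."""
    cum, j = 0.0, 0
    for t in grid:
        while j < n and x[j] <= t:
            cum += weights[j]
            j += 1
        if cum > Phi(t):
            return False
    return True

draws, count = 500, 0
for _ in range(draws):
    e = [random.expovariate(1.0) for _ in range(n)]   # Dirichlet(1,...,1)
    s = sum(e)
    if sd1_holds([v / s for v in e]):
        count += 1
print(count / draws)   # posterior probability of SD1
```

Because the data strictly dominate the comparison CDF here, the posterior probability of SD1 is close to one; shifting the data toward the "corner" F = G drives it toward zero, as in the results below.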
4.1.2 SD1: results from limit experiment
We derive the infinite-dimensional limit experiment and then compute certain results. Continuing some notation from Section 4.1.1, consider the one-sample setup with known comparison CDF G. Again let θn(·) ≡ √n(F(·) − G(·)) → θ(·), the local parameter. SD1 of F over G can be written equivalently as
(4)   θ(t) ≤ 0 for all t ∈ ℝ.
Since √n(F̂n − F) ⇝ B ∘ F (by Donsker's theorem) for standard Brownian bridge B, similar to (2),
(5)   Yn(·) ≡ √n(F̂n(·) − G(·)) ⇝ B ∘ F + θ,
so the limit experiment has Y = B ∘ F + θ. Note B ∘ F is a mean-zero Gaussian process with covariance function F(s)[1 − F(t)] for s ≤ t. Although F is unknown, F̂n → F uniformly by the Glivenko–Cantelli theorem, so asymptotically the covariance function is known while the (local) mean function remains unknown. Analogously, for the posterior, similar to (3),
(6)   θ(·) | Yn = yn ⇝ B ∘ F + yn,
using the Bernstein–von Mises theorem in Lo (1983, 1987) for Dirichlet process prior Bayesian inference or Theorem 4 of Castillo and Nickl (2014).
The two-sample setting is similar since we assume the samples are independent. For notational simplicity, assume both samples have n observations. Let Δ(·) ≡ F(·) − G(·), the true CDF difference function. Let Δn(·) denote centering functions satisfying √n Δn(·) → θ(·), the local parameter. SD1 of F over G is
(7)   Δ(t) ≤ 0 for all t ∈ ℝ.
For the sampling distribution,
(8)   Yn(·) ≡ √n(F̂n(·) − Ĝn(·)) ⇝ B1 ∘ F − B2 ∘ G + θ,
where B1 and B2 are independent standard Brownian bridges. For the posterior, using the independence of samples and the Bernstein–von Mises theorem,
(9)   √n(F(·) − G(·)) | Yn = yn ⇝ B1 ∘ F − B2 ∘ G + yn.
First, consider the Bayesian posterior probability of SD1 when the observed yn = 0. The finite-sample analog is when F̂n = G or F̂n = Ĝn. The value θ = 0 is at the very "corner" of SD1, and it is a very pointy corner, so a ball centered at θ = 0 contains very little of the SD1 region. In fact, "very little" means "zero probability," as the next result states.
Proposition 2.
Consider the limit experiment posterior for one-sample SD1 testing in (6) and for two-sample SD1 testing in (9). Given yn = 0, the posterior probability of SD1 is zero in both one-sample and two-sample settings.
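The "pointy corner" intuition can be checked with a quick simulation: at yn = 0, one-sample posterior draws are (time-changed) Brownian bridge paths, and essentially none of them stays weakly negative everywhere. The grid size and number of draws below are arbitrary; the time change by F does not affect the event, so we simulate on a uniform grid:

```python
import math, random

random.seed(1)
m, draws = 200, 4000   # grid points per path, number of posterior draws

def bridge_path(m):
    """Standard Brownian bridge at t = 1/m, ..., 1 via B(t) = W(t) - t*W(1)."""
    w, walk = 0.0, []
    for _ in range(m):
        w += random.gauss(0.0, math.sqrt(1.0 / m))
        walk.append(w)
    w1 = walk[-1]
    return [walk[k] - ((k + 1) / m) * w1 for k in range(m)]

# Fraction of posterior draws that satisfy SD1 (path <= 0 at every grid point)
nonpositive = sum(1 for _ in range(draws)
                  if max(bridge_path(m)) <= 1e-12)
print(nonpositive / draws)
```

Even on a finite grid, the fraction of non-positive paths is tiny, and it shrinks further as the grid is refined, consistent with the exact zero in Proposition 2.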
Second, similar intuition and arguments lead to the SD1 Bayesian test’s size being one.
Proposition 3.
Consider the limit experiment for one-sample SD1 testing in (5) and (6) and for two-sample SD1 testing in (8) and (9). Consider the Bayesian test from Method 1 that rejects iff the posterior probability of SD1 is below α. Then, the Bayesian test's size equals one, with type I error rate equal to one when θ = 0.
Third, the following result for non-SD1 rejection probability is immediate.
Corollary 4.
Consider the same setup as in Proposition 3. When , the Bayesian test’s probability of rejecting non-SD1 is zero.
4.1.3 SD1: finite-sample simulations
The following simulation results reflect the theoretical results from the limit experiment (discussed in Section 4.1.1). Code for replication is provided.
Table 1 shows Bayesian posterior probabilities of SD1 and non-SD1 in datasets near the “corner” of SD1, similar to the setup of Proposition 2. In the one-sample case, this means nearly equals the CDF. In the two-sample case, this means nearly equals . Specifically, we set for , and in the two-sample case, for (there are observations in the second sample). When , the Bayesian bootstrap variant of Banks (1988) is used. When , the results are from Proposition 2.
Comparison distribution  

SD1  0.103  0.097  
SD1  0.028  0.025  
SD1  0.009  0.010  
SD1  0.000  0.000  
non-SD1  0.897  0.903  
non-SD1  0.972  0.975  
non-SD1  0.991  0.990  
non-SD1  1.000  1.000 
Table 1 illustrates the Bayesian interpretation of a draw near . This interpretation differs greatly from a frequentist interpretation and illuminates the rejection probabilities seen in Table 2. The same intuition from before Proposition 2 applies here. Consequently, when , or when , the Bayesian posterior places nearly zero probability on SD1 and (equivalently) almost all probability on non-SD1. Table 1 shows that the finite-sample posterior probabilities when are very close to the limit as . Opposite the Bayesian interpretation, a frequentist p-value for the null of SD1 would be near one when the estimated is near or . These results are qualitatively similar to those for the one-sample, finite-dimensional example in Kline (2011, §4).
Comparison distribution  

SD1  0.740  0.655  
SD1  0.935  0.917  
SD1  0.981  0.977  
SD1  1.000  1.000  
non-SD1  0.000  0.005  
non-SD1  0.000  0.000  
non-SD1  0.000  0.000  
non-SD1  0.000  0.000 
Table 2 shows rejection probabilities of the Bayesian test when . This is the “least favorable configuration” for the null of SD1 (but not for non-SD1). The DGP has for . For one-sample testing, is the true (standard uniform) CDF of . For two-sample testing, for , identical to . The hypotheses, methods, and notation are the same as for Table 1. The entries for use Proposition 3 and Corollary 4.
Table 2 shows the same patterns as Table 1. When is SD1, the Bayesian type I error rate is well above even with , with rejection probability increasing to as grows; consequently, size is also above . The opposite occurs when is non-SD1, which is not a subset of a halfspace, with type I error rates of zero.^{4}^{4}4Although the type I error rate for non-SD1 is near zero with this DGP, the test’s size is actually , which is attained when there is a single “contact point” with and the inequalities are strict for all other , thus reducing the test (asymptotically) to a single, scalar inequality.
4.2 Example: curvature constraints
One common nonlinear inequality hypothesis in economics is a “curvature” constraint like concavity. Such constraints come from economic theory, often the second-order condition of an optimization problem like utility maximization or cost minimization. As noted by O’Donnell and Coelli (2005), the Bayesian approach is appealing for imposing or testing curvature constraints due to its (relative) simplicity. However, according to Theorem 1, since curvature is usually satisfied in a parameter subspace much smaller than a halfspace, Bayesian inference similar to Method 1 may be much less favorable toward the curvature hypothesis than frequentist inference would be; i.e., the size of the Bayesian test in Method 1 may be well above . Table 3 below bears this out.
Our example concerns concavity of a cost function with the “translog” functional form (Christensen et al., 1973). This has been a popular way to parameterize cost, indirect utility, and production functions. The translog is more flexible than many traditional functional forms, allowing violation of certain implications of economic theory (such as curvature) without reducing such constraints to the value of a single parameter. Since Lau (1978), there has been continued interest in methods to impose curvature constraints during estimation, as well as methods to test such constraints. Although “flexible,” the translog is still parametric, so violation of curvature constraints may come from misspecification (of the functional form) rather than violation of economic theory.^{5}^{5}5With a nonparametric model, one may more plausibly test the theory itself, although there are always other assumptions that may be violated; see Dette et al. (2016) for nonparametrically testing negative semidefiniteness of the Slutsky substitution matrix.
Specifically, we test concavity of cost in input prices as follows.^{6}^{6}6The “translog” example on page 346 of Dufour (1989) is even simpler but appears to ignore the fact that second derivatives are not invariant to log transformations. With output , input prices , and total cost , the translog model is
(10) 
Standard economic assumptions imply that is concave in (as in Kreps, 1990, §7.3), which corresponds to the Hessian matrix (of with respect to ) being negative semidefinite (NSD), which in turn corresponds to all the Hessian’s principal minors of order (for all ) having the same sign as or zero.^{7}^{7}7In some cases, only leading principal minors are required; see Mandy (2017).
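The principal-minor characterization of negative semidefiniteness can be coded directly. The helper below is a generic sketch (not the paper’s replication code): it checks that every order-k principal minor of a symmetric matrix has the sign of (−1)^k or is zero, within a numerical tolerance.

```python
import numpy as np
from itertools import combinations

def is_nsd_minors(H, tol=1e-9):
    """Check negative semidefiniteness of symmetric H via principal minors:
    every principal minor of order k must have the sign of (-1)**k or be zero."""
    H = np.asarray(H, dtype=float)
    n = H.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            minor = np.linalg.det(H[np.ix_(idx, idx)])
            if ((-1) ** k) * minor < -tol:  # wrong sign beyond tolerance
                return False
    return True

print(is_nsd_minors([[-2.0, 1.0], [1.0, -2.0]]))  # True: negative definite
print(is_nsd_minors(np.zeros((3, 3))))            # True: boundary case, all minors zero
print(is_nsd_minors([[1.0, 0.0], [0.0, -1.0]]))   # False: indefinite
```

The all-zeros case corresponds to the boundary configuration in the DGP below, where each principal minor equals zero and none is strictly negative. (Footnote 7 notes that in some cases only leading principal minors need checking, which would be cheaper.)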
For simplicity, we consider local concavity at the point :
(11) 
This is necessary but not sufficient for global concavity; rejecting local concavity implies rejection of global concavity. In Appendix C, we show that even this weaker constraint corresponds to a set of parameter values much smaller than a halfspace, so case (iii) applies.
Our simulation DGP is as follows. To impose homogeneity of degree one in input prices, we use the normalized model (with error term added)
(12) 
for both data generation and inference.^{8}^{8}8Alternatively, cost share equations may be used. Shephard’s lemma implies that the demand for input is . The cost share for input is then . The parameter values are , (more on below), and to make some of the inequality constraints in close to binding, as well as , , , . The other parameter values follow from imposing symmetry () and homogeneity. When , is a matrix of zeros, on the boundary of being NSD in that each principal minor equals zero (and none are strictly negative). When , all principal minors are strictly negative (other than , which is always true under homogeneity). We set . In each simulation replication, an iid sample is drawn, where and all are , , and all variables are mutually independent. There are observations per sample, simulation replications, and posterior draws per replication. The local monotonicity constraints were satisfied in of replications overall.
Table 3 reports simulated type I error rates from two methods. For the method denoted “Bayesian bootstrap” in the table header, the posterior probability of is computed by a nonparametric Bayesian method with improper Dirichlet process prior, i.e., the Bayesian bootstrap of Rubin (1981) based on Ferguson (1973) and more recently advocated in economics by Chamberlain and Imbens (2003). For the method denoted “Normal,” the parameter vector is sampled from a normal distribution with mean equal to the ordinary least squares (OLS) estimate and covariance matrix equal to the corresponding (homoskedastic) asymptotic covariance matrix estimate; this is the posterior from a homoskedastic normal linear regression model with improper prior (or asymptotically). To accommodate numerical imprecision, we deem an inequality satisfied if it is within . The simulated type I error rate is the proportion of simulated samples for which the posterior probability of was below .
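The “Normal” method can be sketched generically as follows; the regression design, the example nonlinear constraint g(β) = β₀β₁ − β₂ ≤ 0, and all numeric settings are hypothetical illustrations, not the paper’s translog specification.

```python
import numpy as np

rng = np.random.default_rng(2)
n, ndraws = 500, 2000

# Hypothetical linear model y = X @ beta + e; the true beta satisfies the
# example constraint g(beta) = beta0*beta1 - beta2 <= 0 with slack.
beta_true = np.array([0.5, 0.4, 1.0])   # g(beta_true) = 0.2 - 1.0 = -0.8
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ beta_true + rng.normal(size=n)

# OLS estimate and homoskedastic asymptotic covariance estimate
bhat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ bhat
s2 = resid @ resid / (n - X.shape[1])
V = s2 * np.linalg.inv(X.T @ X)

# Normal "posterior" draws; posterior probability of the nonlinear inequality
draws = rng.multivariate_normal(bhat, V, size=ndraws)
g = draws[:, 0] * draws[:, 1] - draws[:, 2]
post_prob = (g <= 0).mean()
print(post_prob)   # near 1 here, since the constraint holds with slack
```

Replacing the scalar g with the vector of principal-minor sign conditions from the local NSD hypothesis gives the structure of the “Normal” column in Table 3; the “Bayesian bootstrap” column instead reweights observations with Dirichlet weights before each OLS fit.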
Bayesian bootstrap    Normal  
Table 3 shows the type I error rate of the Bayesian tests of (11) given our DGP. The values of and are varied as shown in the table. The two Bayesian tests are very similar, always within a few percentage points of each other. As a sanity check, when , the rejection probability (RP) is zero since the constraints are satisfied by construction. As increases, the RP increases well above , even over .^{9}^{9}9The results with are similar; with , the RP jumps to over even with . Although the Bayesian test’s size distortion with the null of local NSD is clearly bad from a frequentist perspective, it reflects the Bayesian method’s need for strong evidence to conclude in favor of local NSD, which may be reasonable since the translog form does not come from economic theory and since only a small part of the parameter space satisfies local NSD. Either way, it is helpful to understand the behavior of Bayesian inference in this situation.
5 Conclusion
We have explored the frequentist properties of Bayesian inference on general nonlinear inequality constraints, providing formal results on the role of the shape of the null hypothesis parameter subspace. Future work could include investigation of approaches like that of Müller and Norets (2016) applied to nonlinear inequality testing, or extensions to allow (proper) priors with or other values. Moreover, one could explore how to achieve correct frequentist size by adjusting the prior or by adjusting the relative weight of type I and type II errors in the loss function, or how to achieve a posterior probability of equal to the p-value from a common frequentist method.
References

Andrews and Soares (2010)
Andrews, D. W. K., Soares, G., 2010. Inference for parameters defined by moment
inequalities using generalized moment selection. Econometrica 78 (1),
119–157.
URL https://www.jstor.org/stable/25621398 
Banks (1988)
Banks, D. L., 1988. Histospline smoothing the Bayesian bootstrap. Biometrika
75 (4), 673–684.
URL https://www.jstor.org/stable/2336308 
Bayarri et al. (2012)
Bayarri, M. J., Berger, J. O., Forte, A., García-Donato, G., 2012.
Criteria for Bayesian model choice with application to variable selection.
Annals of Statistics 40 (3), 1550–1577.
URL https://projecteuclid.org/euclid.aos/1346850065 
Berger (2003)
Berger, J. O., 2003. Could Fisher, Jeffreys and Neyman have agreed on
testing? Statistical Science 18 (1), 1–32.
URL https://doi.org/10.1214/ss/1056397485 
Berger et al. (1994)
Berger, J. O., Brown, L. D., Wolpert, R. L., 1994. A unified conditional
frequentist and Bayesian test for fixed and sequential simple hypothesis
testing. Annals of Statistics 22 (4), 1787–1807.
URL https://www.jstor.org/stable/2242484 
Berger and Sellke (1987)
Berger, J. O., Sellke, T., 1987. Testing a point null hypothesis: The
irreconcilability of P values and evidence. Journal of the American
Statistical Association 82 (397), 112–122.
URL https://doi.org/10.1080/01621459.1987.10478397 
Bickel and Kleijn (2012)
Bickel, P. J., Kleijn, B. J. K., 2012. The semiparametric Bernstein–von
Mises theorem. Annals of Statistics 40 (1), 206–237.
URL https://projecteuclid.org/euclid.aos/1333029963 
Birnbaum and Tingey (1951)
Birnbaum, Z. W., Tingey, F. H., 1951. One-sided confidence contours for
probability distribution functions. Annals of Mathematical Statistics 22 (4),
592–596.
URL https://www.jstor.org/stable/2236929
Bogachev (1998)
Bogachev, V. I., 1998. Gaussian Measures. Vol. 62 of Mathematical Surveys and
Monographs. American Mathematical Society.

Casella and Berger (1987a)
Casella, G., Berger, R. L., 1987a. Reconciling Bayesian and
frequentist evidence in the onesided testing problem. Journal of the
American Statistical Association 82 (397), 106–111.
URL https://www.jstor.org/stable/2289130 
Casella and Berger (1987b)
Casella, G., Berger, R. L., 1987b. Testing precise hypotheses:
Comment. Statistical Science 2 (3), 344–347.
URL https://www.jstor.org/stable/2245777 
Castillo and Nickl (2013)
Castillo, I., Nickl, R., 2013. Nonparametric Bernstein–von Mises theorems in
Gaussian white noise. Annals of Statistics 41 (4), 1999–2028.
URL https://projecteuclid.org/euclid.aos/1382547511 
Castillo and Nickl (2014)
Castillo, I., Nickl, R., 2014. On the Bernstein–von Mises phenomenon for
nonparametric Bayes procedures. Annals of Statistics 42 (5), 1941–1969.
URL https://projecteuclid.org/euclid.aos/1410440630 
Castillo and Rousseau (2015)
Castillo, I., Rousseau, J., 2015. A Bernstein–von Mises theorem for smooth
functionals in semiparametric models. Annals of Statistics 43 (6),
2353–2383.
URL https://projecteuclid.org/euclid.aos/1444222078
Chamberlain and Imbens (2003)
Chamberlain, G., Imbens, G. W., 2003. Nonparametric applications of Bayesian
inference. Journal of Business & Economic Statistics 21 (1), 12–18.

Christensen et al. (1973)
Christensen, L. R., Jorgenson, D. W., Lau, L. J., 1973. Transcendental
logarithmic production frontiers. Review of Economics and Statistics 55 (1),
28–45.
URL https://www.jstor.org/stable/1927992
DasGupta (2008)
DasGupta, A., 2008. Asymptotic Theory of Statistics and Probability. Springer,
New York.

Dette et al. (2016)
Dette, H., Hoderlein, S., Neumeyer, N., 2016. Testing multivariate economic
restrictions using quantiles: The example of Slutsky negative
semidefiniteness. Journal of Econometrics 191 (1), 129–144.
URL https://doi.org/10.1016/j.jeconom.2015.07.004 
Dufour (1989)
Dufour, J.-M., 1989. Nonlinear hypotheses, inequality restrictions, and
non-nested hypotheses: Exact simultaneous tests in linear regressions.
Econometrica 57 (2), 335–355.
URL https://www.jstor.org/stable/1912558
Fang and Santos (2015)
Fang, Z., Santos, A., 2015. Inference on directionally differentiable
functions, working paper, available at https://arxiv.org/abs/1404.3763.

Ferguson (1973)
Ferguson, T. S., 1973. A Bayesian analysis of some nonparametric problems.
Annals of Statistics 1 (2), 209–230.
URL https://doi.org/10.1214/aos/1176342360 
Freedman (1999)
Freedman, D., 1999. On the Bernstein–von Mises theorem with
infinite-dimensional parameters. Annals of Statistics 27 (4), 1119–1140.
URL http://projecteuclid.org/euclid.aos/1017938917
Ghosal and van der Vaart (2017)
Ghosal, S., van der Vaart, A., 2017. Fundamentals of Nonparametric Bayesian
Inference. Cambridge Series in Statistical and Probabilistic Mathematics.
Cambridge University Press.

Goutis et al. (1996)
Goutis, C., Casella, G., Wells, M. T., 1996. Assessing evidence in multiple
hypotheses. Journal of the American Statistical Association 91 (435),
1268–1277.
URL https://www.jstor.org/stable/2291745 
Hahn (1997)
Hahn, J., 1997. Bayesian bootstrap of the quantile regression estimator: A
large sample study. International Economic Review 38 (4), 795–808.
URL https://www.jstor.org/stable/2527216
Hansen (2018)
Hansen, B. E., 2018. Econometrics, unpublished textbook, available at
https://www.ssc.wisc.edu/~bhansen/econometrics/.

Hirano and Porter (2009)
Hirano, K., Porter, J. R., 2009. Asymptotics for statistical treatment rules.
Econometrica 77 (5), 1683–1701.
URL https://www.jstor.org/stable/25621374 
Imbens and Manski (2004)
Imbens, G. W., Manski, C. F., 2004. Confidence intervals for partially
identified parameters. Econometrica 72 (6), 1845–1857.
URL https://doi.org/10.1111/j.1468-0262.2004.00555.x
Kaplan (2015)
Kaplan, D. M., 2015. Bayesian and frequentist tests of sign equality and other
nonlinear inequalities, working paper, available at
https://faculty.missouri.edu/~kaplandm.

Kim (2002)
Kim, J.-Y., 2002. Limited information likelihood and Bayesian analysis.
Journal of Econometrics 107 (1), 175–193.
URL https://doi.org/10.1016/S0304-4076(01)00119-1 
Kline (2011)
Kline, B., 2011. The Bayesian and frequentist approaches to testing a
one-sided hypothesis about a multivariate mean. Journal of Statistical
Planning and Inference 141 (9), 3131–3141.
URL https://doi.org/10.1016/j.jspi.2011.03.034
Kreps (1990)
Kreps, D. M., 1990. A Course in Microeconomic Theory. Princeton University
Press.

Kwan (1999)
Kwan, Y. K., 1999. Asymptotic Bayesian analysis based on a limited
information estimator. Journal of Econometrics 88 (1), 99–121.
URL https://doi.org/10.1016/S0304-4076(98)00024-4 
Lancaster (2003)
Lancaster, T., 2003. A note on bootstraps and robustness, available at SSRN:
https://ssrn.com/abstract=896764.
URL https://doi.org/10.2139/ssrn.896764
Laplace (1820)
Laplace, P.-S., 1820. Théorie Analytique des Probabilités, 3rd Edition.
V. Courcier, Paris.
Lau (1978)
Lau, L. J., 1978. Testing and imposing monotonicity, convexity, and
quasi-convexity constraints. In: Fuss, M., McFadden, D. (Eds.), Production
Economics: A Dual Approach to Theory and Applications. Vol. 1 of Contributions
to Economic Analysis. North-Holland, Amsterdam, Ch. A.4, pp. 409–453.
Lehmann and Casella (1998)
Lehmann, E. L., Casella, G., 1998. Theory of Point Estimation, 2nd Edition.
Springer, New York.

Lehmann and Romano (2005)
Lehmann, E. L., Romano, J. P., 2005. Testing Statistical Hypotheses, 3rd
Edition. Springer Texts in Statistics. Springer.
URL http://books.google.com/books?id=Y7vSVW3ebSwC 
Lindley (1957)
Lindley, D. V., 1957. A statistical paradox. Biometrika 44 (1–2), 187–192.
URL https://www.jstor.org/stable/2333251 
Lo (1983)
Lo, A. Y., 1983. Weak convergence for Dirichlet processes. Sankhyā: The
Indian Journal of Statistics, Series A 45 (1), 105–111.
URL https://www.jstor.org/stable/25050418 
Lo (1987)
Lo, A. Y., 1987. A large sample study of the Bayesian bootstrap. Annals of
Statistics 15 (1), 360–375.
URL http://projecteuclid.org/euclid.aos/1176350271 
Mandy (2017)
Mandy, D. M., 2017. Verifying curvature of profit and cost/expenditure
functions. Working Paper WP 1705, Department of Economics, University of
Missouri.
URL https://economics.missouri.edu/paper/wp1705 
Moon and Schorfheide (2012)
Moon, H. R., Schorfheide, F., 2012. Bayesian and frequentist inference in
partially identified models. Econometrica 80 (2), 755–782.
URL https://www.jstor.org/stable/41493833 
Müller and Norets (2016)
Müller, U. K., Norets, A., 2016. Credibility of confidence sets in
nonstandard econometric problems. Econometrica 84 (6), 2183–2213.
URL https://doi.org/10.3982/ECTA14023 
Norets (2015)
Norets, A., 2015. Bayesian regression with nonparametric heteroskedasticity.
Journal of Econometrics 185 (2), 409–419.
URL https://doi.org/10.1016/j.jeconom.2014.12.006 
O’Donnell and Coelli (2005)
O’Donnell, C. J., Coelli, T. J., 2005. A Bayesian approach to imposing
curvature on distance functions. Journal of Econometrics 126 (2), 493–523.
URL https://doi.org/10.1016/j.jeconom.2004.05.011 
Patton and Timmermann (2010)
Patton, A. J., Timmermann, A., 2010. Monotonicity in asset returns: New tests
with applications to the term structure, the CAPM, and portfolio sorts.
Journal of Financial Economics 98 (3), 605–625.
URL https://doi.org/10.1016/j.jfineco.2010.06.006 
Rubin (1981)
Rubin, D. B., 1981. The Bayesian bootstrap. Annals of Statistics 9 (1),
130–134.
URL http://projecteuclid.org/euclid.aos/1176345338 
Schennach (2005)
Schennach, S. M., 2005. Bayesian exponentially tilted empirical likelihood.
Biometrika 92 (1), 31–46.
URL https://www.jstor.org/stable/20441164 
Shen (2002)
Shen, X., 2002. Asymptotic normality of semiparametric and nonparametric
posterior distributions. Journal of the American Statistical Association
97 (457), 222–235.
URL https://doi.org/10.1198/016214502753479365
Sims (2010)
Sims, C. A., 2010. Understanding non-Bayesians, unpublished book chapter,
available at http://sims.princeton.edu/yftp/UndrstndgNnBsns/GewekeBookChpter.pdf.

Sims and Uhlig (1991)
Sims, C. A., Uhlig, H., 1991. Understanding unit rooters: A helicopter tour.
Econometrica 59 (6), 1591–1599.
URL https://www.jstor.org/stable/2938280 
Smirnov (1939)
Smirnov, N. V., 1939. Sur les écarts de la courbe de distribution
empirique. Recueil Mathématique [Mathematicheskii Sbornik] 6(48) (1),
3–26.
URL http://mi.mathnet.ru/eng/msb5810 
Stoye (2009)
Stoye, J., 2009. More on confidence intervals for partially identified
parameters. Econometrica 77 (4), 1299–1315.
URL https://www.jstor.org/stable/40263861 
Stoye and Kitamura (2013)
Stoye, J., Kitamura, Y., 2013. Nonparametric analysis of random utility models:
Testing. In: Beiträge zur Jahrestagung des Vereins für Socialpolitik
2013: Wettbewerbspolitik und Regulierung in einer globalen
Wirtschaftsordnung, Session: Microeconometrics, No. D20–V3. pp. 1–43.
URL http://hdl.handle.net/10419/79753 
Sullivan et al. (1999)
Sullivan, R., Timmermann, A., White, H., 1999. Data-snooping, technical trading
rule performance, and the bootstrap. Journal of Finance 54 (5), 1647–1691.
URL https://doi.org/10.1111/0022-1082.00163 
Sullivan et al. (2001)
Sullivan, R., Timmermann, A., White, H., 2001. Dangers of data mining: The case
of calendar effects in stock returns. Journal of Econometrics 105 (1),
249–286.
URL https://doi.org/10.1016/S0304-4076(01)00077-X 
van der Vaart (1998)
van der Vaart, A. W., 1998. Asymptotic Statistics. Cambridge University Press,
Cambridge.
URL https://books.google.com/books?id=UEuQEM5RjWgC
van der Vaart and Wellner (1996)
van der Vaart, A. W., Wellner, J. A., 1996. Weak Convergence and Empirical
Processes: With Applications to Statistics. Springer Series in Statistics.
Springer, New York.

Wolak (1989)
Wolak, F. A., 1989. Testing inequality constraints in linear econometric
models. Journal of Econometrics 41 (2), 205–235.
URL https://doi.org/10.1016/0304-4076(89)90094-8 
Wolak (1991)
Wolak, F. A., 1991. The local nature of hypothesis tests involving inequality
constraints in nonlinear models. Econometrica 59 (4), 981–995.
URL https://www.jstor.org/stable/2938170
Appendix A Proofs
Proof of Theorem 1.
For case (i): the Bayesian test rejects iff
Given any such that (so holds), the rejection probability is
since . If , then the becomes .
For case (ii): because , for any ,
Consequently, the rejection region for is at least as big as the rejection region for : for some ,