On Bayes risk lower bounds

Abstract

This paper provides a general technique for lower bounding the Bayes risk of statistical estimation, applicable to arbitrary loss functions and arbitrary prior distributions. A lower bound on the Bayes risk not only serves as a lower bound on the minimax risk, but also characterizes the fundamental limit of any estimator given the prior knowledge. Our bounds are based on the notion of f-informativity [18], which is a function of the underlying class of probability measures and the prior. Application of our bounds requires upper bounds on the f-informativity, so we derive new upper bounds on the f-informativity which often lead to tight Bayes risk lower bounds. Our technique leads to generalizations of a variety of classical minimax bounds (e.g., generalized Fano’s inequality). Our Bayes risk lower bounds can be directly applied to several concrete estimation problems, including Gaussian location models, generalized linear models, and principal component analysis for spiked covariance models. To further demonstrate the applications of our Bayes risk lower bounds to machine learning problems, we present two new theoretical results: (1) a precise characterization of the minimax risk of learning spherical Gaussian mixture models under the smoothed analysis framework, and (2) lower bounds on the Bayes risk under a natural prior, for both the prediction and estimation errors, for high-dimensional sparse linear regression in an improper learning setting.

1Introduction

Consider a standard setting where we observe data points taking values in a sample space. The distribution of the data depends on an unknown parameter. The goal is to compute an estimate of the parameter based on the observed samples. Formally, the estimator is a mapping from the sample space to the parameter space. The risk of the estimator is its expected loss, where the loss function is non-negative. This framework applies to a broad scope of machine learning problems. Taking sparse linear regression as a concrete example, the data consist of the design matrix and the response vector; the parameter space is the set of sparse vectors; and the loss function can be chosen as the squared loss.

Given an estimation problem, we are interested in the lowest possible risk achievable by any estimator, which is useful in judging the potential for improving existing algorithms. The classical notion of optimality is formalized by the so-called minimax risk. More specifically, we assume that the statistician first chooses an estimator and the adversary then chooses a worst-case parameter knowing the statistician's choice. The minimax risk is defined as:
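In standard notation (with $\hat{\theta}$ ranging over all measurable estimators and $\mathbb{E}_{\theta}$ denoting expectation under the distribution indexed by $\theta$; this is our shorthand), the minimax risk reads

\[
R_{\mathrm{minimax}} \;=\; \inf_{\hat{\theta}} \, \sup_{\theta \in \Theta} \, \mathbb{E}_{\theta}\, L\big(\theta, \hat{\theta}(X)\big).
\]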

The minimax risk has been determined up to multiplicative constants for many important problems. Examples include sparse linear regression [41], classification [55], additive models over kernel classes [42], and crowdsourcing [59].

The assumption that the adversary is capable of choosing a worst-case parameter is sometimes over-pessimistic. In practice, the parameter that incurs a worst-case risk may appear with very small probability. To capture the hardness of the problem with this prior knowledge, it is reasonable to assume that the true parameter is sampled from an underlying prior distribution . In this case, we are interested in the Bayes risk of the problem. That is, the lowest possible risk when the true parameter is sampled from the prior distribution:
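Correspondingly, with $w$ denoting the prior, the Bayes risk is (again in our shorthand)

\[
R_{\mathrm{Bayes}}(w) \;=\; \inf_{\hat{\theta}} \int_{\Theta} \mathbb{E}_{\theta}\, L\big(\theta, \hat{\theta}(X)\big)\, dw(\theta).
\]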

If the prior distribution is known to the learner, then the Bayes estimator attains the Bayes risk [6]. In general, however, the Bayes estimator is computationally hard to evaluate, and the Bayes risk has no closed-form expression. It is thus unclear what the fundamental limit of estimation is when such prior knowledge is available.

In this paper, we present a technique for establishing lower bounds on the Bayes risk for a general prior distribution. When the lower bound matches the risk of an existing algorithm, it determines the convergence rate of the Bayes risk. The Bayes risk lower bounds are useful for three main reasons:

  1. They provide an idea of the difficulty of the problem under a specific prior .

  2. They automatically provide lower bounds for the minimax risk and, because the minimax regret is always larger than or equal to the minimax risk (see, for example, [39]), they also yield lower bounds for the minimax regret.

  3. As we will show, they have an important application in establishing the minimax lower bound under the smoothed analysis framework.

Throughout this paper, when the loss function and the parameter space are clear from the context, we simply denote the Bayes risk by . When the prior is also clear, the notation is further simplified to .

1.1Our Main Results

In order to give the reader a flavor of the kind of results proved in this paper, let us consider Fano’s classical inequality [33], which is one of the most widely used Bayes risk lower bounds in statistics and information theory. The standard version of Fano’s inequality applies to the case where the parameter space is a finite set of cardinality $N$, the loss is the zero-one indicator loss, and the prior is the discrete uniform distribution on this set. In this setting, Fano’s inequality states that
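In its standard form (our notation, which may differ in minor details from the display), the inequality reads

\[
\inf_{\hat{\theta}} \, \mathbb{P}\{\hat{\theta} \neq \theta\} \;\geq\; 1 - \frac{I(w, \mathcal{P}) + \log 2}{\log N}.
\]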

where $I(w, \mathcal{P})$ is the mutual information between the parameter (distributed according to the prior $w$) and the data (note that this mutual information depends only on $w$ and the class of probability measures $\mathcal{P}$, which is why we denote it by $I(w, \mathcal{P})$). Fano’s inequality implies that accurate estimation, i.e., small risk, is possible only if the mutual information is large, that is, only if the data carry substantial information about the parameter.

A natural question regarding Fano’s inequality, which does not seem to have been asked until very recently, is the following: does there exist an analogue of it when the prior is not necessarily uniform, when the parameter and action spaces are arbitrary sets, and/or when the loss function is not necessarily the indicator loss? An interesting result in this direction is the following inequality, recently proved by [22], who termed it the continuum Fano inequality. This inequality applies to the case where the parameter space is a subset of Euclidean space with finite, strictly positive Lebesgue measure, the loss indicates whether the Euclidean distance between the estimate and the parameter exceeds a fixed threshold, and the prior is the uniform probability measure (i.e., normalized Lebesgue measure) on the parameter space. In this setting, [22] proved that

It turns out that there is a very clean connection between inequalities and . Indeed, both these inequalities are special instances of the following inequality:
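A hedged reconstruction of this unified inequality, in our notation (the constants may differ slightly from the displayed version):

\[
R_{\mathrm{Bayes}}(w) \;\geq\; 1 - \frac{I(w, \mathcal{P}) + \log 2}{\log\Big( 1 \big/ \sup_{a} w\{\theta : L(\theta, a) = 0\} \Big)},
\]

where the supremum is taken over all possible actions $a$.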

Indeed, the prior-mass term $\sup_{a} w\{\theta : L(\theta, a) = 0\}$ equals $1/N$ in the setting of the classical Fano inequality, and it equals the normalized volume of an $\epsilon$-ball in the setting of the continuum Fano inequality.

Since both and are special instances of , one might reasonably conjecture that inequality might hold more generally. In Section 3, we give an affirmative answer by proving that inequality holds for any zero-one valued loss function and any prior . No assumptions on , and are needed. We refer to this result as generalized Fano’s inequality. Our proof of is quite succinct and is based on the data processing inequality [16] for Kullback-Leibler (KL) divergence. The use of the data processing inequality for proving Fano-type inequalities was introduced by [32].

The data processing inequality is not only available for the KL divergence. It can be generalized to any divergence belonging to a general family known as f-divergences [17]. This family includes the KL divergence, chi-squared divergence, squared Hellinger distance, total variation distance and power divergences as special cases. The usefulness of f-divergences in machine learning has been illustrated in [44].

For every f-divergence, one can define a quantity called the f-informativity [18], which plays the same role as the mutual information does for the KL divergence. The precise definitions of f-divergences and f-informativities are given in Section 2. Utilizing the data processing inequality for f-divergences, we prove general Bayes risk lower bounds which hold for every zero-one valued loss and for arbitrary parameter spaces, priors, and underlying families of probability measures (Theorem ?). The generalized Fano’s inequality is the special case obtained by choosing the f-divergence to be the KL divergence. The proposed Bayes risk lower bounds can also be specialized to other f-divergences and have a variety of interesting connections to existing lower bounds in the literature, such as Le Cam’s inequality, Assouad’s lemma (see Theorem 2.12 in [49]), and the Birgé-Gushchin inequality [32]. These results are provided in Section 3.

In Section 4, we deal with nonnegative valued loss functions which are not necessarily zero-one valued. Basically, we use the standard method of lower bounding the general loss function by a zero-one valued function and then use our results from Section 3 for lower bounding the Bayes risk. This technique, in conjunction with the generalized Fano’s inequality, gives the following lower bound (proved in Corollary ?)

A special case of the above inequality has appeared previously in [58] (please refer to Remark ? for a detailed explanation of the connection between inequality and [58]).

We also prove analogues of the above inequality for different divergences. Specifically, using our f-divergence inequalities from Section 3, we prove, in Theorem ?, the following inequality, which holds for every f-divergence:

where $I_f(w, \mathcal{P})$ denotes the f-informativity and $u_f$ is a non-decreasing function that depends only on $f$. This function (defined in Section 4) can be explicitly computed for many f-divergences of interest, which gives useful lower bounds in terms of the f-informativity. For example, in the case of the KL divergence and the chi-squared divergence, this inequality yields the lower bound above and the following inequality, respectively,

where $I_{\chi^2}(w, \mathcal{P})$ is the chi-squared informativity.

Intuitively, the inequality shows that the Bayes risk is lower bounded by half of the largest radius $t$ such that the maximum prior mass of any $t$-radius “ball” is less than some function of the f-informativity. To apply it, one needs to obtain upper bounds on the following two quantities:

  1. The “small ball probability”, which does not depend on the family of probability measures.

  2. The f-informativity, which does not depend on the loss function.

We note that a nice feature of this bound is that the two quantities above play separate roles. One may first obtain an upper bound on the f-informativity, then choose $t$ so that the small ball probability is bounded from above by the corresponding function of this upper bound; the Bayes risk is then bounded from below by $t/2$, as illustrated in the sketch below. It is noteworthy that the terminology “small ball probability” was used by [54], which proved information-theoretic lower bounds on the minimum time in a distributed function computation problem.
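A minimal numerical sketch of this two-step recipe. The Gaussian location problem, the uniform prior of width $B$, and the capacity-style mutual information bound below are our own illustrative choices, and the threshold $\exp(-2(I + \log 2))$ is what one obtains by combining the generalized Fano inequality with the reduction $L \geq t\,\mathbb{1}\{L \geq t\}$, so the constants may differ from the displayed corollary.

```python
import numpy as np

# Toy problem: theta ~ Uniform[0, B]; we observe n i.i.d. N(theta, sigma^2) samples; loss L(theta, a) = |theta - a|.
n, sigma = 100, 1.0
B = sigma / np.sqrt(n)  # width of the (localized) uniform prior

# Step 1: upper bound the KL informativity (mutual information).  The sample mean is sufficient with
# conditional variance sigma^2 / n and the prior variance is B^2 / 12, so a max-entropy argument gives
I_bound = 0.5 * np.log(1.0 + n * B ** 2 / (12.0 * sigma ** 2))

# Step 2: the small ball probability of the uniform prior satisfies sup_a w{theta : |theta - a| < t} <= 2 * t / B.
# Pick the largest t whose small-ball mass stays below the informativity-dependent threshold.
threshold = np.exp(-2.0 * (I_bound + np.log(2.0)))
t_star = threshold * B / 2.0  # solves 2 * t / B = threshold

print("Bayes risk lower bound:", t_star / 2.0)           # roughly 0.06 * sigma / sqrt(n)
print("parametric rate sigma / sqrt(n):", sigma / np.sqrt(n))
```

The point of the sketch is only the division of labor: the informativity bound in Step 1 never touches the loss, and the small-ball computation in Step 2 never touches the family of distributions.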

We do not have a general guideline for bounding the small ball probability; it needs to be dealt with case by case, based on the prior and the loss function. For upper bounding the f-informativity, however, we offer a general recipe in Section 5 for a subclass of divergences of interest (a class of power divergences), which covers the chi-squared divergence, one of the most important divergences in our applications. These bounds generalize results of [34] and [56] for the mutual information to f-informativities involving power divergences. As an illustration of our techniques (the above inequality combined with the f-informativity upper bounds), we apply them to a concrete estimation problem in Section 5. We further apply our results to several popular machine learning and statistics problems (e.g., generalized linear models, spiked covariance models, and Gaussian models with general losses) in Appendix C.

In Section 6 and Section 7, we present non-trivial applications of our Bayes risk lower bounds to two learning problems: the first is an unsupervised learning problem, while the second is a supervised learning problem. Section 6 studies smoothed analysis for learning mixtures of spherical Gaussians with uniform weights. Although learning mixtures of Gaussians is a computationally hard problem, it has been shown recently by [35] that, under the assumption that the Gaussian means are linearly independent, it can be learned in polynomial time by a spectral method. We perform a smoothed analysis of a variant of the algorithm of [35], showing that the linear independence assumption can be removed if the true parameters are perturbed by a small random noise. The method described in Section 6 achieves a better convergence rate than the original algorithm of [35]. Furthermore, we apply our Bayes risk lower bound techniques to show that the algorithm’s convergence rate is unimprovable, even under smoothed analysis (i.e., when the true parameters are randomly perturbed). Section 6 highlights the usefulness of our techniques in proving lower bounds for smoothed analysis, which appears to be challenging using traditional techniques of minimax theory.

In Section 7, we consider the high-dimensional sparse linear regression problem and provide Bayes risk lower bounds for both the prediction error and the estimation error under a natural prior on the regression parameter, supported on the set of sparse vectors. Although lower bounds for sparse linear regression have been well studied (see, e.g., [41] and references therein), these bounds focus only on the minimax or worst-case scenario and are thus too pessimistic in practice. Indeed, the parameters that attain these minimax lower bounds typically have zero probability under any continuous prior, so their average effect might be negligible. The fundamental limits of sparse linear regression under a realistic prior are, to the best of our knowledge, unknown. The tool developed here for lower bounding Bayes risks can be directly applied to characterize these limits. Moreover, our Bayes risk lower bound is flexible in the sense that, by tuning the variance of the prior on the non-zero elements of the regression vector, it provides a wide spectrum of lower bounds. For one particular choice of the variance, our Bayes risk lower bounds match the minimax risk lower bounds. This gives a natural least favorable prior for sparse linear regression, whereas the known least favorable prior in [41] is a non-constructive discrete prior over a packing set of the parameter space that cannot be sampled from. We also work in the improper learning setting, where we allow non-sparse estimators of the true regression vector (even though the true regression vector is assumed to be sparse).

1.2Related Work

Before finishing this introduction, we briefly describe related work on Bayes risk lower bounds. There are a few results dealing with special cases of finite-dimensional estimation problems under (weighted or truncated) quadratic losses. The first results of this kind were established by [51] and [9], with extensions by [12]. A few additional papers dealt with even more specialized problems, e.g., the Gaussian white noise model [13], scale models [25], and estimating a Gaussian variance [53]. Most of these results are based on the van Trees inequality (see [28] and Theorem 2.13 in [49]). Although the van Trees inequality usually leads to sharp constants in Bayes risk lower bounds, it only applies to weighted quadratic loss functions (as its proof relies on the Cauchy-Schwarz inequality) and requires the underlying Fisher information to be easily computable, which limits its applicability. There is also a vast body of literature on minimax lower bounds (see, e.g., [49]), which can be viewed as Bayes risk lower bounds for certain priors. These priors are usually discrete and specially constructed, so the resulting lower bounds do not apply to more general (continuous) priors. Another related line of work involves finding lower bounds on posterior contraction rates (see, e.g., [15]).

1.3Outline of the Paper

The rest of the paper is organized in the following way. In Section 2, we describe notation and review preliminaries such as f-divergences, the f-informativity, the data processing inequality, etc. Section 3 deals with inequalities for zero-one valued loss functions; these inequalities have many connections to existing lower bound techniques. Section 4 deals with nonnegative loss functions, for which we provide the general inequality and its special cases. Section 5 presents upper bounds on the f-informativity for a class of power divergences, together with some examples. Section 6 studies smoothed analysis for learning mixtures of spherical Gaussians with uniform weights using our technique, and Section 7 treats sparse linear regression. We conclude the paper in the final section. Due to space constraints, we have relegated some proofs and additional examples and results to the appendix.

2Preliminaries and Notations

We first review the notions of f-divergence [17] and f-informativity [18]. Consider the class of all convex functions $f$ defined on $(0, \infty)$ that satisfy $f(1) = 0$. Because of convexity, the relevant limits at the endpoints of the domain exist (even though they may be infinite) for each such $f$. Each such function $f$ defines a divergence between probability measures, referred to as the f-divergence. For two probability measures having densities with respect to a common dominating measure, the f-divergence between them is defined as follows:
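With $p$ and $q$ denoting the densities of the two measures $P$ and $Q$ with respect to the dominating measure $\mu$ (our symbols), the standard definition, recorded here for concreteness, is

\[
D_f(P \,\|\, Q) \;=\; \int q\, f\!\left(\frac{p}{q}\right) d\mu ,
\]

where the handling of the boundary cases may differ slightly from the conventions stated next.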

We note that the standard conventions are adopted here so that the integrand is well defined when either density vanishes; in particular, the contribution is taken to be zero wherever both densities are zero.

Certain divergences are commonly used because they can be easily computed or bounded when the underlying measures are product measures. These divergences are the power divergences, corresponding to the family of functions defined by

Popular examples of power divergences include:

1) Kullback-Leibler (KL) divergence: defined when the first measure is absolutely continuous with respect to the second (and infinite otherwise). Following conventional notation, we denote the KL divergence by its usual symbol rather than by the generic power-divergence notation.

2) Chi-squared divergence: defined when the first measure is absolutely continuous with respect to the second (and infinite otherwise). We denote the chi-squared divergence following the conventional notation.

3) For the power index $1/2$, one obtains half of the squared Hellinger distance between the two measures.

The total variation distance is another f-divergence, but not a power divergence. A small numerical illustration of these divergences is given below.
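A minimal sketch on a finite sample space. The generator functions below are standard convex choices with $f(1) = 0$, and their normalization may differ from the power-divergence family used in this paper.

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(P || Q) = sum_x q(x) * f(p(x) / q(x)) for discrete P, Q with strictly positive entries."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(q * f(p / q)))

# Convex generators with f(1) = 0 for some common f-divergences.
f_kl   = lambda x: x * np.log(x)                  # Kullback-Leibler divergence
f_chi2 = lambda x: x ** 2 - 1.0                   # chi-squared divergence
f_hel  = lambda x: 0.5 * (np.sqrt(x) - 1.0) ** 2  # half of the squared Hellinger distance
f_tv   = lambda x: 0.5 * np.abs(x - 1.0)          # total variation distance (not a power divergence)

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
for name, f in [("KL", f_kl), ("chi-squared", f_chi2), ("Hellinger^2 / 2", f_hel), ("TV", f_tv)]:
    print(f"{name:>15}: {f_divergence(p, q, f):.4f}")
```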

One of the most important properties of f-divergences is the “data processing inequality” ([18] and [36]), which states the following: let two measurable spaces be given, together with a measurable map between them. For every $f$ and every pair of probability measures on the first space, we have

where the measures appearing on the left-hand side are the induced (pushforward) measures of the original pair under the map; that is, the induced measure assigns to any measurable set the original measure of its preimage (see the definition of induced measure in Definition 2.2.1 of [4]).
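A quick numerical sanity check of the data processing inequality for the KL divergence. The toy distributions and the merging map below are our own; the map simply groups sample points, a deterministic "processing" step.

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions with strictly positive entries."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# P and Q live on {0, 1, 2, 3}; the map phi merges {0, 1} -> 'a' and {2, 3} -> 'b'.
P = np.array([0.10, 0.40, 0.30, 0.20])
Q = np.array([0.25, 0.25, 0.25, 0.25])
P_phi = np.array([P[0] + P[1], P[2] + P[3]])  # induced measure of P under phi
Q_phi = np.array([Q[0] + Q[1], Q[2] + Q[3]])  # induced measure of Q under phi

assert kl(P_phi, Q_phi) <= kl(P, Q) + 1e-12   # data processing: the divergence cannot increase
print(kl(P, Q), kl(P_phi, Q_phi))
```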

Next, we introduce the notion of f-informativity [18]. Let a family of probability measures on a sample space be given, together with a prior probability measure on the parameter set indexing the family. For each $f$, the f-informativity, $I_f(w, \mathcal{P})$, is defined as

where the infimum is taken over all probability measures on the sample space. When $f(x) = x \log x$ (so that the corresponding f-divergence is the KL divergence), the f-informativity is equal to the mutual information and is denoted by $I(w, \mathcal{P})$. We denote the informativity corresponding to a power divergence by the analogous subscripted notation; for the special case of the chi-squared divergence, we use the more suggestive notation $I_{\chi^2}(w, \mathcal{P})$. The informativity corresponding to the total variation distance will be denoted accordingly.
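For a finite family on a finite sample space, the infimum over the second argument can be approached directly. For the KL case, plugging in the $w$-mixture of the family is in fact optimal and yields the mutual information exactly, while for other $f$ it still yields a valid upper bound on the informativity. The family and prior below are illustrative.

```python
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def chi2(p, q):
    return float(np.sum(p ** 2 / q) - 1.0)

# Family {P_theta} of three distributions on a three-point space, with prior w on theta.
P_family = np.array([[0.7, 0.2, 0.1],
                     [0.2, 0.7, 0.1],
                     [0.1, 0.2, 0.7]])
w = np.array([0.5, 0.25, 0.25])

Q_mix = w @ P_family  # the w-mixture of the family

I_kl = sum(wi * kl(Pi, Q_mix) for wi, Pi in zip(w, P_family))         # equals the mutual information I(w, P)
I_chi2_ub = sum(wi * chi2(Pi, Q_mix) for wi, Pi in zip(w, P_family))  # an upper bound on the chi-squared informativity
print(I_kl, I_chi2_ub)
```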

Additional notation and definitions are as follows. Recall the Bayes risk and the minimax risk defined above. When the loss function and the parameter space are clear from the context, we drop the dependence on them; when the prior is also clear from the context, we further simplify the notation for the Bayes risk and the minimax risk. We also need some notation for covering numbers. For a given f-divergence and a subset of the parameter space, let the covering number denote any upper bound on the smallest number of probability measures that form an $\epsilon$-cover of the corresponding class under the f-divergence, i.e.,

We write the covering number with the subscript KL in the KL case and with the subscript $\chi^2$ in the chi-squared case, and we use the generic subscript $f$ for other $f$. We note that this quantity provides an upper bound on the corresponding metric entropy, and it can be infinite if the subset is arbitrary. For a vector and a real number $p \geq 1$, we write $\|\cdot\|_p$ for the $\ell_p$-norm; in particular, $\|\cdot\|_2$ denotes the Euclidean norm. The indicator function takes value 1 when its argument is true and 0 otherwise. We use letters such as $c$ and $C$ to denote generic constants whose values might change from place to place.

3Bayes Risk Lower Bounds for Zero-one Valued Loss Functions and Their Applications

In this section, we consider zero-one valued loss functions and present a principled approach to deriving Bayes risk lower bounds involving the f-informativity for every $f$. Our results hold for any given prior and any zero-one valued loss. By specializing the f-divergence to the KL divergence, we obtain the generalized Fano inequality. When specializing to other f-divergences, our bounds lead to some classical minimax bounds of Le Cam and Assouad [3], to more recent minimax results of [32], and also to results in [49]. Bayes risk lower bounds for general nonnegative loss functions will be presented in the next section.

We need additional notation to state the main results of this section. For each $f$, let $\phi_f$ be the function defined in the following way: $\phi_f(a, b)$ is the f-divergence between two probability measures on a two-point space with the respective parameters $a$ and $b$. By this definition, it is easy to see that $\phi_f$ has the following expression:
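Concretely, writing the two-point measures as Bernoulli distributions with success probabilities $a$ and $b$ (our notation, and up to the ordering convention of the two arguments), the expression reads

\[
\phi_f(a, b) \;=\; b\, f\!\left(\frac{a}{b}\right) + (1-b)\, f\!\left(\frac{1-a}{1-b}\right),
\]

so that, for instance, $\phi_{\chi^2}(a, b) = (a-b)^2 / \big(b(1-b)\big)$.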

The convexity of $f$ implies monotonicity and convexity properties of $\phi_f$, which are stated in the following lemma.

We also define the quantity

where the decision does not depend on the data. Note that this quantity, which we denote by $R_0$, represents the Bayes risk in the “no data” problem, i.e., when one only has knowledge of the parameter space, the loss, and the prior, but not the data. For simplicity, our notation for $R_0$ suppresses its dependence on these objects. Because the loss function is zero-one valued, $R_0$ has the following alternative expression:

where

and the latter quantity is the prior mass of the “ball” of parameters that incur zero loss under the given decision. It will be important in the sequel to observe that the Bayes risk is bounded from above by $R_0$. This is obvious because the risk in the presence of data cannot be greater than the risk in the no data problem (which can be viewed as an application of the data processing inequality). Formally, the bound follows by restricting the infimum to the class of constant decision rules. Because the Bayes risk is at most $R_0$, it equals zero whenever $R_0 = 0$; we shall therefore assume throughout this section that $R_0 > 0$.

The main result of this section is presented next. It provides an implicit lower bound for the Bayes risk in terms of $R_0$ and the f-informativity $I_f(w, \mathcal{P})$ for every $f$. The only assumption is that the loss is zero-one valued; we do not assume the existence of a Bayes decision rule.

Figure 1: Illustration of why the theorem leads to a lower bound on R_{\rm Bayes}(w). Recall that R \leq R_0 and r \mapsto \phi_f(r, R_0) is non-increasing in r for r \in [0, R_0]. Given I_f(w, \mathcal{P}) as an upper bound of \phi_f(R_{\rm Bayes}(w), R_0), we have R_{\rm Bayes}(w) \geq R_L = g^{-1}(I_f(w, {\mathcal{P}})) and thus R_L serves as a Bayes risk lower bound.

Before we prove Theorem ?, we first show that the inequality indeed provides an implicit lower bound for the Bayes risk, since $R_{\rm Bayes}(w) \leq R_0$ and $r \mapsto \phi_f(r, R_0)$ is non-increasing in $r$ for $r \in [0, R_0]$ (Lemma ?). Therefore, let $g(r) := \phi_f(r, R_0)$. We have

where $g^{-1}$ is the generalized inverse of the non-increasing function $g$. As an illustration, we plot $\phi_f$ and the corresponding Bayes risk lower bound in Figure 1. This lower bound can be immediately applied to obtain Bayes risk lower bounds when the f-divergence is the chi-squared divergence, the total variation distance, or the Hellinger distance (see Corollary ?). However, for the KL divergence there is no simple form of $g^{-1}$. To obtain the corresponding Bayes risk lower bound, we can invert $\phi_f$ by utilizing its convexity, which gives the generalized Fano inequality (see Corollary ?). In particular, since $\phi_f$ is convex (see Lemma ?),

where denotes the left derivative of at . The monotonicity of in (Lemma ?) gives and we thus have,

Inequality can now be used to deduce that (note that )

The inequalities above provide general approaches for converting the implicit bound into an explicit lower bound on the Bayes risk.
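A minimal numerical sketch of this inversion step for the chi-squared case, assuming $\phi_{\chi^2}(a, b) = (a-b)^2/\big(b(1-b)\big)$ as recorded earlier; the values of $R_0$ and of the informativity bound are illustrative.

```python
import numpy as np

def phi_chi2(a, b):
    """chi-squared divergence between Bernoulli(a) and Bernoulli(b): one plausible form of phi_f(a, b)."""
    return (a - b) ** 2 / (b * (1.0 - b))

def bayes_risk_lower_bound(I_f, R0, phi=phi_chi2, grid_size=10 ** 5):
    """Numerically invert r -> phi(r, R0) on [0, R0]: the smallest r with phi(r, R0) <= I_f, i.e. R_L = g^{-1}(I_f)."""
    rs = np.linspace(0.0, R0, grid_size)
    feasible = phi(rs, R0) <= I_f
    return float(rs[np.argmax(feasible)]) if feasible.any() else R0

# Example: no-data Bayes risk R0 = 0.9 and a chi-squared informativity bounded by 2.0.
R_L = bayes_risk_lower_bound(I_f=2.0, R0=0.9)
print(R_L, 0.9 - np.sqrt(2.0 * 0.9 * 0.1))  # grid inversion vs. the closed form R0 - sqrt(I * R0 * (1 - R0))
```

The closed-form value in the last line is simply the algebraic inverse of the quadratic $\phi_{\chi^2}$ and matches the grid search up to discretization.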

Theorem ? is new, but its special case with a finite parameter space, the indicator loss, and the uniform prior is known (see [32] and [30]). In such a discrete setting, the ball mass equals $1/N$ for any decision, and thus $R_0 = 1 - 1/N$. The proof of Theorem ? relies heavily on the following lemma, which is a consequence of the data processing inequality for f-divergences (see Section 2).

We note that Lemma ? is of independent interest; it can be applied to establish minimax lower bounds, as shown in the following remark.

Let denote the joint distribution of and under the prior i.e., and . For any decision rule , in can be written as . Let denote the joint distribution of and under which they are independently distributed according to and respectively. The quantity in can then be written as .

Because the loss function is zero-one valued, the function maps into . Our strategy is to fix and apply the data processing inequality to the probability measures and the mapping . This gives

where and are induced measures on the space of . In other words, since is zero-one valued, both and are two-point distributions on with

By the definition of the function , it follows that . It is also easy to see . Combining this equation with inequality establishes inequality .

With Lemma ? in place, we are ready to prove Theorem ?.

We write as a shorthand notation of . By the definition of , it suffices to prove that

for every probability measure .

Notice that . If , then the right hand side of is zero and hence the inequality immediately holds. Assume that . Let be small enough so that . Let denote any decision rule for which and note that such a rule exists since . It is easy to see that

We thus have . By Lemma ?, we have

Because is non-increasing on , we have

Because is non-decreasing on , we have

Combining the above three inequalities, we have

The proof of the displayed inequality is completed by letting the slack parameter tend to zero and using the continuity of $\phi_f$ (noted in Lemma ?). This completes the proof of Theorem ?.

3.1Generalized Fano’s Inequality

In the next result, we derive the generalized Fano inequality using Theorem ?. The inequality proved here is in fact slightly stronger than the version stated in the introduction; see Remark ? for clarification.

We simply apply the preceding result to the relevant quantities; it can then be checked that

Inequality then gives

which proves .

As mentioned in the introduction, the classical Fano inequality and the recent continuum Fano inequality are both special cases (restricted to uniform priors) of Corollary ?. The proof given in [22] is rather involved, requiring a stronger assumption and a discretization-approximation argument; our proof based on Theorem ? is much simpler. Lemma ? is also of independent interest: using it, we are able to recover another recently proposed variant of Fano’s inequality from [10]. Details of this argument are provided in Appendix A.2.

3.2Specialization of Theorem to Different f-divergences and Their Applications

In addition to the generalized Fano inequality, Theorem ? allows us to derive a class of Bayes risk lower bounds for zero-one losses by plugging in other f-divergences. In the next corollary, we consider some widely used f-divergences and provide the corresponding Bayes risk lower bounds by inverting the bound in Theorem ?.

See Appendix A.3 for the proof of the corollary. The special case of Corollary ? with a finite parameter space, the indicator loss, and the uniform prior has been discovered previously in [30]. It is clear from Corollary ? that the choice of f-divergence affects the tightness of the lower bound. In Section 3.3, we provide a qualitative comparison of the lower bounds obtained from the KL divergence, the chi-squared divergence, and the Hellinger distance. In particular, we show that in the discrete setting, the lower bounds induced by the KL divergence and the chi-squared divergence are much stronger than the bounds given by the Hellinger distance. Therefore, in most applications in this paper, we shall only use the bounds involving the KL divergence and the chi-squared divergence.

Corollary ? can be used to recover the classical inequalities of Le Cam (for two-point hypotheses) and Assouad (Theorem 2.12 in [49], with both the total variation distance and the Hellinger distance), as well as Theorem 2.15 in [49], which involves fuzzy hypotheses. The details are presented in Appendix A.4.

3.3Comparison of the Bounds for Different Divergences

We provide some qualitative comparisons of the Bayes risk lower bounds given by Theorem ? for different power divergences. In particular, let us consider the discrete setting with $N$ hypotheses, the indicator loss, and the discrete uniform prior. Note that in such a “multiple testing problem” setup, $R_0$ is equal to $1 - 1/N$. We take $N$ sufficiently large so that $R_0$ is close to 1. To establish minimax lower bounds, a typical approach is to reduce the estimation problem to a multiple hypothesis testing problem in the aforementioned setup, and then to prove that the Bayes risk is bounded from below by a positive constant (see Section 2.2 in [49]). Without loss of generality, we take this constant to be $1/2$, and we shall see how the three inequalities based on the KL divergence, the chi-squared divergence, and the Hellinger distance can be used to establish this bound.

Let us start with the bound corresponding to the KL divergence, which is equivalent to the classical Fano inequality in this discrete setting. To establish the desired bound, the following condition should hold:

We remark that the mutual information $I(w, \mathcal{P})$ is at most $\log N$ even if every pairwise KL divergence between the hypotheses is arbitrarily large. This fact will be clear from the corresponding inequality in Section 5. The upper bound on $I(w, \mathcal{P})$ given there further provides a sufficient condition for verifying the displayed condition.

Now we turn to the bound corresponding to the chi-squared divergence. It implies the following sufficient condition for the desired bound:

When $N$ is large, the above condition essentially requires the chi-squared informativity to be at most of the order of $N$. Note that the maximum possible value of $I_{\chi^2}(w, \mathcal{P})$ in this discrete setting is $N - 1$ (even when the pairwise chi-squared divergences are arbitrarily large); this follows from our upper bounds on the f-informativity for a class of power divergences (see Section 5).

The conditions and don’t imply each other. The chi-squared divergence is always greater than the KL divergence (see Lemma 2.7 in [49]), but the upper bound required by is also weaker than that required by . For both divergences, constructing more hypotheses (i.e., choosing ) is often helpful for showing .

For the Hellinger distance, we claim that the corresponding bound gives no more useful results than those obtained by a simple two-point argument. To see this, note that the corresponding inequality implies

where . When is large, the above inequality reduces to effectively . Therefore a sufficient condition for is , which is equivalent to,

When $N$ is large, the above displayed condition implies the existence of a pair of hypotheses whose Hellinger distance is suitably small. Consider the two-point prior supported on this pair. It is easy to compute the Bayes risk for this prior; by Le Cam’s inequality (see Lemma 2.3 in [49]), we have

Since the Hellinger distance between this pair is small, it is easy to verify from the above that the Bayes risk under the two-point prior is bounded from below by a constant. Therefore, in this discrete setting, if the Hellinger-based inequality implies the desired bound, then there is a much simpler two-point prior for which a comparable bound holds. This shows that for the Hellinger distance, considering $N$ hypotheses is not more useful than using a pair of hypotheses. The reason is that the Hellinger informativity can be written as an expression involving pairwise Hellinger distances. In particular, it can be seen from the proof of the corresponding inequality that

In contrast, the mutual information $I(w, \mathcal{P})$ cannot be written in terms of the pairwise KL divergences (recall that it is always at most $\log N$ even when all pairwise KL divergences are infinite). The same holds for the chi-squared informativity (which is always at most $N - 1$ even if all pairwise chi-squared divergences are infinite).

If the eventual goal of obtaining Bayes risk lower bounds is to obtain lower bounds on the minimax risk up to multiplicative constants, then the Hellinger-based bound gives no more useful results than those obtained by the simple two-point argument. In this sense, the inequality induced by the Hellinger distance is not as useful as those induced by the KL and chi-squared divergences. In fact, the Hellinger distance is seldom used in lower bounding the minimax risk via many hypotheses (for example, none of the minimax rates in the examples of [49] involving multiple hypothesis testing are established via the Hellinger distance).

Therefore, in most of the applications in this paper, we shall only use the bounds involving the KL and chi-squared divergences. Likewise, in Section 5 on bounding f-informativities, we focus on the bounds involving the KL and chi-squared divergences (and, more generally, power divergences in the corresponding range of the index) as opposed to the Hellinger distance (and, more generally, power divergences in the complementary range).

3.4Birgé-Gushchin’s Inequality

In this section, we expand on Remark ? to obtain a minimax risk lower bound due to [32] and [7], which presents an improvement of the classical Fano inequality when specialized to the KL divergence.

To prove Proposition ?, it is enough to prove that for every . Without loss of generality, we assume that . We apply with the uniform distribution on as , and the minimax rule for the problem as . Because is the minimax rule, . Also

It is easy to verify that . We thus have . Because is minimax, and thus

On the other hand, the minimax risk admits a simple upper bound. To see this, note that the minimax risk is upper bounded by the maximum risk of the random decision rule which chooses among the hypotheses uniformly at random. This random decision rule has the same risk no matter which hypothesis is true, so this common value is an upper bound on the minimax risk. Combining the above inequalities, we can apply the earlier bound to obtain

which completes the proof of Proposition ?.

4Bayes Risk Lower Bounds for Nonnegative Loss Functions

Figure 2: \phi_f(1/2, b)
Figure 3: u_f(x)

In the previous section, we discussed Bayes risk lower bounds for zero-one valued loss functions. In this section, we deal with general nonnegative loss functions. The main result of this section, Theorem ?, provides lower bounds on the Bayes risk for any given loss and prior. To state this result, we need the following notion. Fix $f$ and recall the definition of $\phi_f$ above. We define $u_f$ by

and we take $u_f(x)$ to be 1 if the defining condition holds for every argument. By Lemma ?, it is easy to see that $u_f$ is a non-decreasing function. For example, for the KL divergence, $u_f$ can be computed explicitly (see Figures 2 and 3). We are now ready to state the main theorem of this paper.

Fix and . Let be a shorthand notation. Suppose is such that

We prove below the required bound, which would complete the proof. Let a zero-one valued loss function be defined by thresholding the original loss. It is obvious that the original loss dominates a multiple of this thresholded loss, and hence the proof will be complete once we establish the corresponding bound for the thresholded loss. We introduce a further shorthand notation.

Because is a zero-one valued loss function, Theorem ? gives

By , it then follows that . By definition of , it is clear that there exists such that (this in particular implies that ). Lemma ? implies that is non-decreasing for , which yields . The above two inequalities imply . Combining this inequality with , we have

Lemma ? shows that is non-increasing for . Thus, we have .

We further note that, because $u_f$ is non-decreasing, one can replace the informativity in the bound by any upper bound on it; i.e., for any such upper bound, we have