
# Information Criterion for Minimum Cross-Entropy Model Selection

## Abstract

The cross-entropy, which is proportional to the Kullback-Leibler divergence, is widely used to gauge the deviation of a distribution of interest from a reference distribution in statistical inference. For example, the Akaike information criterion (AIC) is an asymptotically unbiased estimator of the cross-entropy from a parametric distribution to the true distribution of data. Minimizing the AIC allows us to find a parametric distribution close to the true distribution. In this paper, we generalize the AIC by letting the reference distribution be a target distribution to approximate when its density can be evaluated up to a multiplicative constant only at observed data points. We prove, under some conditions, that the generalized criterion, which we call the cross-entropy information criterion (CIC), is an asymptotically unbiased estimator of the cross-entropy (up to a multiplicative constant) from a parametric distribution to the target distribution. We demonstrate the usefulness of CIC for approximating the optimal importance sampling distribution by a mixture of parametric distributions.

Keywords: cross-entropy information criterion, importance sampling, Kullback-Leibler divergence, mixture model, Monte Carlo

## 1 Introduction

This paper considers approximating an unknown target density $q^*$ when a nonnegative function $r$ proportional to $q^*$ can be evaluated only at observed points. When $r$ is computationally light to evaluate, so that discarding some observations is not burdensome, the Metropolis-Hastings algorithm (Metropolis et al., 1953; Hastings, 1970) has been used extensively in Bayesian inference (e.g., $q^*$ is the posterior density and $r$ is the prior density times the likelihood), because of the theoretical guarantee on asymptotically exact sampling from $q^*$.

But when evaluating $r$ is computationally heavy (e.g., $r$ involves computing the likelihood for a complex Bayesian model (Beaumont, 2010; Sunnåker et al., 2013) or running a computationally intensive algorithm or simulator (Kurtz and Song, 2013; Choe et al., 2015)), we can afford only a small number of evaluations of $r$ and want to make the best use of each evaluation in approximating $q^*$. This paper focuses on this scenario, where we use every evaluation of $r$ to approximate $q^*$.

For the approximation, we take a parametric approach by positing a parametric family of densities $\{q_\theta : \theta \in \Theta\}$ and finding its member closest to $q^*$, with closeness measured by the Kullback-Leibler (KL) divergence (Kullback and Leibler, 1951). Thus, the parametric approximation of $q^*$ takes two steps, namely, (1) choosing the parametric family and (2) minimizing the KL divergence for the given family. The latter is a fairly straightforward optimization problem, so the former will be the focus of this paper.

Specifically, the main problem we tackle is how to determine the complexity of the parametric family based on the given data. If the parametric family is too simple, the parametric approximation of $q^*$ can be poor, whereas if the family is too complex, it can lead to overfitting. We note that this new model selection problem, where $r$ evaluated at the observed points is the only source of information on $q^*$, is different from the traditional model selection problem where the key information is contained in data directly sampled from an unknown density to model. The traditional problem is extensively studied (Burnham and Anderson, 2003) and its most well-known model selection criteria include the Akaike information criterion (AIC) (Akaike, 1973, 1974) and Bayesian information criterion (BIC) (Schwarz, 1978).

To the best of our knowledge, the new model selection problem is not yet investigated despite its potential importance in Bayesian inference (Beaumont, 2010; Sunnåker et al., 2013) and Monte Carlo studies (Kurtz and Song, 2013; Choe et al., 2015). To address the problem, we propose a novel information criterion and prove that the criterion is an asymptotically unbiased estimator of the KL divergence (up to a multiplicative constant and an additive constant), justifying the minimization of the criterion for selecting the best model.

The remainder of this paper is organized as follows. Section 2 provides the background. Section 3 proposes the novel information criterion. Section 4 explains how the proposed criterion can be used in practice to adaptively posit an increasingly complex parametric family to approximate an unknown target density $q^*$ as more evaluations of $r$ proportional to $q^*$ are amassed. Section 5 numerically demonstrates the use of the proposed criterion for approximating the optimal density for importance sampling (Kahn and Marshall, 1953). Section 6 concludes and suggests future research directions.

## 2 Background

The KL divergence (Kullback and Leibler, 1951) is commonly used to gauge the difference between two distributions in statistical inference. For two probability measures, $Q^*$ and $Q$, on a common measurable space such that $Q^*$ is absolutely continuous with respect to $Q$ (written $Q^* \ll Q$), the KL divergence from $Q$ to $Q^*$ is

$$ D(Q^* \,\|\, Q) := \mathbb{E}_{Q^*}\!\left[ \log \frac{dQ^*}{dQ} \right], \tag{1} $$

where $\mathbb{E}_{Q^*}$ denotes the expectation with respect to $Q^*$ and $dQ^*/dQ$ is the Radon-Nikodym derivative of $Q^*$ with respect to $Q$. If $Q^*$ and $Q$ are absolutely continuous with respect to a dominating measure $\mu$ (e.g., counting or Lebesgue measure), their respective densities, $q^*$ and $q$, exist by the Radon-Nikodym theorem and the KL divergence in (1) can be expressed as

$$ D(Q^* \,\|\, Q) = \mathbb{E}_{Q^*}\!\left[ \log \frac{q^*}{q} \right] = \int q^* \log\!\left( \frac{q^*}{q} \right) d\mu. $$

The KL divergence is non-negative and equals zero if and only if $q^* = q$ almost everywhere (a.e.). If $Q$ is an approximation of $Q^*$, the choice of $Q$ minimizing $D(Q^* \| Q)$ leads to a good approximation.

The maximum likelihood estimator (MLE) is a prominent example of using the KL divergence. When a sample $X_1, \ldots, X_n$ of random vectors is drawn from an unknown distribution $Q^*$, we can approximate $Q^*$ by a distribution $Q_\theta$ in a parametric family $\{Q_\theta : \theta \in \Theta\}$ by minimizing the KL divergence from $Q_\theta$ to $Q^*$ over $\theta \in \Theta$. Suppose $Q^* \ll Q_\theta$ for all $\theta \in \Theta$ so that the KL divergence is well defined over $\Theta$. Also, suppose $Q^* \ll \mu$ and $Q_\theta \ll \mu$ for all $\theta \in \Theta$ so that the densities $q^*$ and $q_\theta$ exist. Then, the KL divergence is

$$ D(Q^* \,\|\, Q_\theta) = \int q^* \log q^* \, d\mu - \int q^* \log q_\theta \, d\mu. \tag{2} $$

Note that only the second term in (2), called the cross-entropy, depends on $\theta$. Therefore, minimizing the KL divergence over $\theta$ is equivalent to minimizing the cross-entropy over $\theta$. Because $q^*$ is unknown, the cross-entropy should be estimated based on $X_1, \ldots, X_n$. An unbiased, consistent estimator of the cross-entropy is

$$ -\frac{1}{n} \sum_{i=1}^{n} \log q_\theta(X_i), \tag{3} $$

which is the average of negative log-likelihoods for the observed data. Therefore, the MLE of $\theta$, denoted by $\hat\theta_n$, is the minimizer of the cross-entropy estimator in (3).
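As a minimal numerical sketch (the one-dimensional Gaussian family and all names below are illustrative, not from the paper), the estimator in (3) can be computed directly; for a Gaussian family its minimizer, the MLE, has the closed form of the sample mean and (biased) sample standard deviation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=5000)  # sample from an "unknown" q*

def cross_entropy_est(x, mu, sigma):
    """Estimator (3): minus the average log-likelihood under q_theta = N(mu, sigma^2)."""
    return -np.mean(-0.5 * np.log(2 * np.pi * sigma**2)
                    - (x - mu)**2 / (2 * sigma**2))

# For the Gaussian family, the minimizer of (3) -- the MLE -- is available
# in closed form: the sample mean and the biased sample standard deviation.
mu_hat, sigma_hat = x.mean(), x.std()
```

Any other parameter value yields a larger value of the estimator, e.g. `cross_entropy_est(x, 0.0, 1.0)` exceeds `cross_entropy_est(x, mu_hat, sigma_hat)`.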

Another example of using the KL divergence or cross-entropy is the AIC (Akaike, 1973, 1974). As the free parameter dimension $d$ of the parameter space $\Theta$ (or equivalently, the model degrees of freedom) increases, $q_{\hat\theta_n}$ may become a better approximation of $q^*$. To compare the different approximating distributions (or models), we could use a plug-in estimator

$$ -\frac{1}{n} \sum_{i=1}^{n} \log q_{\hat\theta_n}(X_i) \tag{4} $$

of the cross-entropy, but this is problematic because of the downward bias created from using the data twice. While complex models (or distributions with larger $d$'s) tend to have a smaller cross-entropy estimate, they are subject to the overfitting problem. The AIC remedies this issue by correcting the asymptotic bias of the estimator in (4). The AIC is defined (up to a multiplicative constant) as

$$ -\frac{1}{n} \sum_{i=1}^{n} \log q_{\hat\theta_n}(X_i) + \frac{d}{n}, \tag{5} $$

where the bias correction term $d/n$ penalizes the model complexity, balancing it with the goodness-of-fit represented by the first term. Minimizing the AIC can be interpreted as minimizing an asymptotically unbiased estimator of the cross-entropy. Therefore, both the MLE and AIC aim at minimizing the cross-entropy from an approximate distribution to the unknown distribution generating the data.
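Concretely, the per-sample AIC in (5) is the plug-in estimate (4) plus the penalty $d/n$; a trivial helper (the function name is ours, not the paper's) makes the trade-off across model dimensions explicit:

```python
def aic_per_sample(neg_avg_loglik, d, n):
    """AIC up to a multiplicative constant, as in (5): the plug-in
    cross-entropy estimate (4) plus the bias-correction penalty d/n."""
    return neg_avg_loglik + d / n

# A more complex model must improve the fit by more than its extra penalty
# to be preferred; here the 20-parameter model's small gain does not pay off.
aic_small = aic_per_sample(2.00, d=5, n=1000)   # 2.00 + 0.005
aic_large = aic_per_sample(1.99, d=20, n=1000)  # 1.99 + 0.020
```

Minimizing this quantity over candidate models implements the balance described above.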

Analogous to the MLE and AIC, our approximation task in this paper considers minimizing the KL divergence (or cross-entropy) from a parametric distribution $Q_\theta$ to the target distribution $Q^*$ over $\theta \in \Theta$ (to well-define the KL divergence, hereafter assume $Q^* \ll Q_\theta$ for all $\theta \in \Theta$) when the target density $q^*$ is proportional to a nonnegative function $r$ (i.e., $q^* = r/\rho$ for a positive constant $\rho$). Minimizing the KL divergence in (2) (equivalently, cross-entropy) over $\theta$ is equivalent to minimizing

$$ C(\theta) := -\int r \log q_\theta \, d\mu, \tag{6} $$

which is unknown in practice because $r$ can be evaluated only at observed data points. We can approximate $C(\theta)$ in (6) by an unbiased, consistent estimator

$$ \bar{C}_\eta(\theta) := -\frac{1}{n} \sum_{i=1}^{n} \frac{r(X_i)}{q_\eta(X_i)} \log q_\theta(X_i) \tag{7} $$

for $X_1, \ldots, X_n$ drawn from $Q_\eta$ with any parameter $\eta \in \Theta$, because $\mathbb{E}_{Q_\eta}[\bar{C}_\eta(\theta)] = C(\theta)$. Therefore, by minimizing $\bar{C}_\eta(\theta)$ in (7) over $\theta$, we can approximately minimize the KL divergence (or cross-entropy) from $Q_\theta$ to $Q^*$. Thus, we call the minimizer of (7) the minimum cross-entropy estimator (MCE). Note that if the random sample is drawn from the target distribution (i.e., $q_\eta = q^*$), then the MCE reduces to the MLE because minimizing (7) is equivalent to minimizing (3) due to $r/q^* = \rho$.
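As an illustrative sketch (the one-dimensional target, trial density, and constant $\rho$ below are invented for the example), the estimator (7) weights each log-likelihood term by $r(X_i)/q_\eta(X_i)$; for a Gaussian family the MCE is then a weighted mean and standard deviation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented example: unnormalized target r = rho * N(2, 1) with rho = 3,
# so q* = N(2, 1); the trial (sampling) density is q_eta = N(0, 3^2).
rho = 3.0
def r(x):
    return rho * np.exp(-0.5 * (x - 2.0)**2) / np.sqrt(2 * np.pi)

eta_mu, eta_sigma = 0.0, 3.0
x = rng.normal(eta_mu, eta_sigma, size=20000)
q_eta = np.exp(-0.5 * ((x - eta_mu) / eta_sigma)**2) / (eta_sigma * np.sqrt(2 * np.pi))
w = r(x) / q_eta  # importance weights r(X_i)/q_eta(X_i)

def C_bar(mu, sigma):
    """Estimator (7) for a Gaussian q_theta = N(mu, sigma^2)."""
    logq = -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)
    return -np.mean(w * logq)

# For the Gaussian family, the MCE minimizing (7) is the weighted mean/std:
mu_mce = np.sum(w * x) / np.sum(w)
sigma_mce = np.sqrt(np.sum(w * (x - mu_mce)**2) / np.sum(w))
```

The MCE recovers the target's mean and standard deviation even though the data were drawn from the quite different trial density.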

## 3 Cross-Entropy Information Criterion

This section develops the cross-entropy information criterion (CIC), which is to the AIC as the MCE is to the MLE. For a fixed parametric family, the MCE yields the member density closest to $q^*$ with respect to the data. The CIC will guide us to select the dimension of the parametric family, allowing us to compare different approximating distributions (or models).

As the AIC balances the goodness-of-fit and the model complexity, the CIC should balance the cross-entropy estimate and the model complexity. We derive the model complexity penalty term for the CIC by following a path similar to the derivation of the AIC, i.e., we derive the downward bias introduced from estimating the true cross-entropy (up to a multiplicative constant) $C(\hat\theta_n)$ by $\bar{C}_\eta(\hat\theta_n)$. To derive the bias in closed form, we need regularity conditions similar to those needed for deriving the AIC (see Assumptions 2-16 in Appendix A).

### 3.1 Properties of the minimum cross-entropy estimator

Because the CIC is based on the MCE, we first characterize the asymptotic behavior of the MCE. The MCE, which is a minimizer of (7), is an M-estimator (Huber, 1964), so the M-estimation theory (Van der Vaart, 1998) applies to the MCE. Specifically, assume hereafter that the parameter minimizing the true cross-entropy, $\theta^* := \arg\min_{\theta \in \Theta} C(\theta)$, is unique. Then, the MCE is a strongly consistent estimator of $\theta^*$ (Lemma 1) and is asymptotically normal (Lemma 2) under standard regularity conditions (see Assumptions 2-8 in Appendix A).

###### Lemma 1 (Strong consistency of the MCE).

Suppose that Assumptions 2-3 hold and $\Theta$ is compact. Then, for any $\eta \in \Theta$,

$$ \hat\theta_n := \arg\min_{\theta \in \Theta} \bar{C}_\eta(\theta) $$

converges almost surely to $\theta^*$ as $n \to \infty$.
Proof. This is a direct result of Theorem A1 in Rubinstein and Shapiro (1993).

###### Lemma 2 (Asymptotic normality of the MCE).

Suppose that Assumptions 4-8 hold and that for any $\eta \in \Theta$, $\hat\theta_n$ converges in probability to $\theta^*$ as $n \to \infty$. Then, for any $\eta \in \Theta$, $\sqrt{n}\,(\hat\theta_n - \theta^*)$ converges in distribution to $N\!\big(0, \Gamma^{-1} \Lambda_\eta \Gamma^{-1}\big)$ as $n \to \infty$, where

$$ \Gamma := -\mathbb{E}_\mu\!\left[ r \, \nabla^2_\theta \log q_{\theta^*} \right] $$

and

$$ \Lambda_\eta := \mathbb{E}_{Q_\eta}\!\left[ \nabla_\theta h(X, \eta, \theta^*) \, \nabla_\theta h(X, \eta, \theta^*)^T \right] $$

for

$$ h(X, \eta, \theta) := \frac{r(X)}{q_\eta(X)} \log q_\theta(X). \tag{8} $$

Proof. This is a direct result of Theorem A2 in Rubinstein and Shapiro (1993).

### 3.2 Iterative procedure for minimizing the cross-entropy

In contrast to the AIC, which uses data drawn from the target distribution to approximate, the CIC uses data drawn from a distribution (e.g., $Q_\eta$) different from $Q^*$. To compensate for this lack of information, the data distribution needs to asymptotically approach $Q^*$. Therefore, we consider an iterative procedure that refines the data distribution as we gather data. Specifically, the structure of the estimator in (7) inspires the iterative procedure in Figure 1 with the maximum number of iterations, $T$. We note that a special case of this procedure is known as the cross-entropy method (Rubinstein, 1999) when $r$ in (9) of Step 2 is proportional to the optimal importance sampling density.

We also note that the estimator in (9) is an unbiased, consistent estimator of the cross-entropy (up to a multiplicative constant) from $q_\theta$ to $q^*$, making its minimizer the MCE. We use the same sample size $n$ for each iteration for notational simplicity without loss of generality.
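A minimal one-dimensional sketch of this iterative procedure (with an invented unnormalized target and a Gaussian parametric family, for which each iteration's MCE is a weighted mean and standard deviation; all settings below are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

def r(x):
    # Invented unnormalized target: r proportional to N(2, 1)
    return np.exp(-0.5 * (x - 2.0)**2)

mu, sigma = 0.0, 3.0   # initial sampling density N(0, 3^2)
n, T = 5000, 4         # sample size per iteration, maximum iterations
for t in range(1, T + 1):
    x = rng.normal(mu, sigma, size=n)                     # Step 1: sample current density
    q = np.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))
    w = r(x) / q                                          # weights r / current density
    mu = np.sum(w * x) / np.sum(w)                        # Step 2: MCE for the Gaussian
    sigma = np.sqrt(np.sum(w * (x - mu)**2) / np.sum(w))  # family: weighted mean and std
```

After a few iterations the sampling density has moved from the broad initial guess onto the target.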

In this algorithm, the MCE in each iteration is strongly consistent as stated in Corollary 3. We use this consistency and the iterative procedure to establish our main theoretical results justifying the CIC in the next section.

###### Corollary 3.

Suppose the conditions in Lemma 1 hold. Then,

$$ \hat\theta^{(t)}_n := \arg\min_{\theta \in \Theta} \bar{C}_{\hat\theta^{(t-1)}_n}(\theta) \tag{10} $$

converges almost surely to $\theta^*$ as $n \to \infty$ for $t = 1, \ldots, T$.
Proof. The objective in (10) is $\bar{C}_\eta(\theta)$ in (7) with $\eta = \hat\theta^{(t-1)}_n \in \Theta$. The convergence follows from Lemma 1.

### 3.3 Asymptotic bias of a cross-entropy estimator

To simplify the model complexity penalty term in the AIC, Akaike (1974) makes the strong assumption that the true distribution of data belongs to the parametric distribution family being considered. We similarly make Assumption 1 to simplify the asymptotic bias of $\bar{C}_{\hat\theta^{(t-1)}_n}(\hat\theta^{(t)}_n)$ in estimating $C(\hat\theta^{(t)}_n)$, because the asymptotic bias corresponds to the penalty term in the CIC.

###### Assumption 1.

There exists $\theta^* \in \Theta$ such that $q_{\theta^*} = q^*$.

This assumption implies that the target distribution belongs to the parametric family in which the approximating distribution is searched. Under Assumption 1, we can establish the asymptotic normality of the MCE in Corollary 4. This leads to our main result in Theorem 5 quantifying the asymptotic bias (see Appendix A for the proofs of Corollary 4 and Theorem 5).

###### Corollary 4.

Suppose that Assumptions 1 and 4-10 hold and that $\hat\theta^{(t-1)}_n$ converges in probability to $\theta^*$ as $n \to \infty$ for $t = 1, \ldots, T$. Then, $\Lambda_{\hat\theta^{(t-1)}_n}$ converges in probability to $\Lambda_{\theta^*}$ and $\sqrt{n}\,(\hat\theta^{(t)}_n - \theta^*)$ converges in distribution to $N\!\big(0, \rho\,\Gamma^{-1}\big)$ as $n \to \infty$ for $t = 1, \ldots, T$, where $\rho = \int r \, d\mu$ is the normalizing constant of $r$.

###### Theorem 5 (Asymptotic bias of $\bar{C}_{\hat\theta^{(t-1)}_n}(\hat\theta^{(t)}_n)$ in estimating $C(\hat\theta^{(t)}_n)$).

Suppose that Assumptions 1, 5-6, and 11-16 hold, that $\Lambda_{\hat\theta^{(t-1)}_n}$ converges in probability to $\Lambda_{\theta^*}$ as $n \to \infty$ for $t = 1, \ldots, T$, and that $\sqrt{n}\,(\hat\theta^{(t)}_n - \theta^*)$ converges in distribution to $N\!\big(0, \rho\,\Gamma^{-1}\big)$ as $n \to \infty$ for $t = 1, \ldots, T$. Then,

$$ \mathbb{E}\!\left[ \bar{C}_{\hat\theta^{(t-1)}_n}\big(\hat\theta^{(t)}_n\big) - C\big(\hat\theta^{(t)}_n\big) \right] = -\frac{\rho d}{n} + o\!\left( \frac{1}{n} \right) $$

for $t = 1, \ldots, T$.

The asymptotic bias, $-\rho d / n$, is proportional to the free parameter dimension $d$ of the parameter space $\Theta$, similar to the penalty term of the AIC in (5). In practice, $\rho$ is unknown, but we can use a consistent estimator of $\rho$ to estimate the asymptotic bias. For example, at the $t$th iteration, $t = 1, \ldots, T$, we can use

$$ \hat\rho^{(t-1)} = \frac{1}{n} \sum_{i=1}^{n} \frac{r\big(X^{(t-1)}_i\big)}{q_{\hat\theta^{(t-1)}_n}\big(X^{(t-1)}_i\big)} \tag{11} $$

as an unbiased, consistent estimator of $\rho$. Furthermore, if $q_{\hat\theta^{(t-1)}_n} = q^*$, the estimator in (11) is the optimal importance sampling estimator having zero variance (Kahn and Marshall, 1953). Because the iterative procedure refines the sampling density to be closer to $q^*$, it is generally expected that $\hat\rho^{(t-1)}$ will have a smaller variance as $t$ gets larger.
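A small sketch of the estimator (11) (the target, the constant $\rho$, and the sampling density below are invented for illustration): the average of the weights $r(X_i)/q(X_i)$ estimates the normalizing constant:

```python
import numpy as np

rng = np.random.default_rng(3)

rho_true = 3.0  # "unknown" normalizing constant in this invented example
def r(x):
    # unnormalized target: r = rho * N(2, 1), so q* = N(2, 1)
    return rho_true * np.exp(-0.5 * (x - 2.0)**2) / np.sqrt(2 * np.pi)

# Sampling density from the previous iteration, taken here as N(2, 1.5^2)
mu, sigma = 2.0, 1.5
x = rng.normal(mu, sigma, size=50000)
q = np.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

rho_hat = np.mean(r(x) / q)  # estimator (11)
```

The closer the sampling density is to $q^*$, the smaller the spread of the weights and hence the variance of `rho_hat`.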

The fact that we can estimate $\rho$ with a small variance as a byproduct of approximating $q^*$ is particularly desirable, because $\rho$, the normalizing constant of $r$, is often a quantity of interest. For example, if $r$ is the prior density times the likelihood of a model, then $\rho$ is the evidence for the model, which can be compared between models for Bayesian model selection (Knuth et al., 2015).

Because $\rho$ can be consistently estimated, we can compute the CIC, which is a bias-corrected estimator of the cross-entropy up to a multiplicative constant:
$$ \mathrm{CIC}^{(t)} := \bar{C}_{\hat\theta^{(t-1)}_n}\big(\hat\theta^{(t)}_n\big) + \frac{\hat\rho^{(t-1)} d}{n} \tag{12} $$

for $t = 1, \ldots, T$, where $\hat\theta^{(t)}_n$ is the MCE minimizing the estimator in (9), and $\hat\rho^{(t-1)}$ is a consistent estimator of $\rho$, such as the estimator in (11).
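Assembling the CIC in (12) from its ingredients is then a one-liner; the helper below (names ours) makes the bias correction from Theorem 5 explicit:

```python
def cic(cross_entropy_est, rho_hat, d, n):
    """CIC as in (12): the cross-entropy estimate plus the estimated
    bias correction rho_hat * d / n from Theorem 5."""
    return cross_entropy_est + rho_hat * d / n

# Comparing two candidate model dimensions with the same data budget:
cic_small = cic(2.00, rho_hat=3.0, d=5, n=1000)   # 2.00 + 0.015
cic_large = cic(1.99, rho_hat=3.0, d=20, n=1000)  # 1.99 + 0.060
```

The model dimension minimizing this quantity is the one the CIC selects.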

We note that the parameter space $\Theta$, in which the MCE is found, is a function of the free parameter dimension $d$, although this is not explicitly expressed for notational simplicity. For example, if the parametric family is a mixture of Gaussian component distributions parametrized by their means and covariances, then the parameter space is fully specified once we select the number of components, $k$, which directly determines the free parameter dimension $d$ (see Section 4.2 for more details on this example). Therefore, for a fixed $d$, we can find the MCE and then compute the CIC in (12).

We note that the CIC reduces to the AIC (up to multiplicative and additive constants) if the samples are all drawn from the target distribution, that is, if $q_{\hat\theta^{(t-1)}_n} = q^*$ in Figure 1. If so, the first term of the CIC in (12) becomes

$$ \bar{C}_{\hat\theta^{(t-1)}_n}\big(\hat\theta^{(t)}_n\big) := -\frac{1}{n} \sum_{i=1}^{n} \frac{r\big(X^{(t-1)}_i\big)}{q_{\hat\theta^{(t-1)}_n}\big(X^{(t-1)}_i\big)} \log q_{\hat\theta^{(t)}_n}\big(X^{(t-1)}_i\big) \tag{13} $$

$$ = -\frac{1}{n} \sum_{i=1}^{n} \frac{r\big(X^{(t-1)}_i\big)}{q^*\big(X^{(t-1)}_i\big)} \log q_{\hat\theta^{(t)}_n}\big(X^{(t-1)}_i\big) \tag{14} $$

$$ = -\frac{\rho}{n} \sum_{i=1}^{n} \log q_{\hat\theta^{(t)}_n}\big(X^{(t-1)}_i\big), \tag{15} $$

because $q_{\hat\theta^{(t-1)}_n} = q^*$ in (13) and $r/q^* = \rho$ in (14). Plugging the expression in (15) into the CIC in (12) shows that the CIC is equal to $\rho$ times the AIC in (5) up to an additive constant.

The asymptotic bias expression in Theorem 5 holds only for the later iterations, because the initial sample is drawn from a fixed initial distribution, which is not a distribution converging to $Q^*$. If we want to select a reasonable parameter dimension to use at the first iteration, it is still necessary to penalize the model complexity. Therefore, we define the CIC for the first iteration as well.

If we use the same sample size $n$ for each iteration, the model dimension selected at later iterations may vary only a little from the earlier iterations. Alternatively, we can allocate a larger sample size to later iterations, or aggregate the samples gathered through the iterations to obtain a cumulative version of the CIC, as proposed in Section 4. The potential benefit of the latter approach is the tendency of the aggregated estimator of the cross-entropy to have a smaller variance than the non-aggregated estimator in (9). Section 4 gives the details.

## 4 Application of the Cross-Entropy Information Criterion

### 4.1 The CIC based on cumulative data

To reduce the variance of the MCE, instead of using only the last iteration's data, we can use all the observed data up to the last iteration to estimate $\theta$. For more flexibility, we can allocate a different sample size to each iteration, i.e., $n_s$ for the $s$th iteration (e.g., a large $n_0$ for the initial sample to broadly cover the support of $q^*$ and equal sample sizes for the later iterations). Then, we can find the MCE

$$ \hat\theta^{(t)} := \arg\min_{\theta \in \Theta} \bar{C}^{(t-1)}(\theta), \tag{16} $$

where

$$ \bar{C}^{(t-1)}(\theta) := \frac{1}{\sum_{s=0}^{t-1} n_s} \sum_{s=0}^{t-1} n_s \, \bar{C}_{\hat\theta^{(s)}}(\theta) = -\frac{1}{\sum_{s=0}^{t-1} n_s} \sum_{s=0}^{t-1} \sum_{i=1}^{n_s} h\big(X^{(s)}_i, \hat\theta^{(s)}, \theta\big) \tag{17} $$

for $t = 1, \ldots, T$, with $X^{(s)}_i$ drawn from $Q_{\hat\theta^{(s)}}$. We note that $\bar{C}^{(t-1)}(\theta)$ in (17) is an unbiased estimator of $C(\theta)$.

By using all the data gathered until the $t$th iteration, we can determine the model parameter dimension at the $t$th iteration with the following CIC:

$$ \mathrm{CIC}^{(t)} := \bar{C}^{(t-1)}\big(\hat\theta^{(t)}\big) + \frac{\hat\rho^{(t-1)} d}{\sum_{s=0}^{t-1} n_s} \tag{18} $$

for $t = 1, \ldots, T$, where $\hat\theta^{(t)}$ is the MCE in (16) and $\hat\rho^{(t-1)}$ is a consistent estimator of $\rho$, such as the estimators in (19) and (20). As $t$ increases, the accumulated sample size increases so that the free parameter dimension $d$ can increase. Thus, the cumulative version of the CIC allows the use of a highly complex model if it can better approximate $q^*$.

As a consistent and unbiased estimator of $\rho$, we can use

$$ \hat\rho^{(0)} = \frac{1}{n_0} \sum_{i=1}^{n_0} \frac{r\big(X^{(0)}_i\big)}{q_{\hat\theta^{(0)}}\big(X^{(0)}_i\big)} \tag{19} $$

at the first iteration ($t = 1$). At the $t$th iteration for $t \ge 2$, we can use

$$ \hat\rho^{(t-1)} = \frac{1}{\sum_{s=1}^{t-1} n_s} \sum_{s=1}^{t-1} \sum_{i=1}^{n_s} \frac{r\big(X^{(s)}_i\big)}{q_{\hat\theta^{(s)}}\big(X^{(s)}_i\big)}, \tag{20} $$

where we do not use the data from the initial iteration ($s = 0$) because they could potentially increase the variance of the resulting estimator if the initial sampling density is too different from $q^*$. The estimator in (20) takes the form of an importance sampling estimator of $\rho$. The potential for the increased variance has been well studied in the importance sampling literature (e.g., Hesterberg, 1995; Owen and Zhou, 2000).
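The pooled estimator (20) can be sketched as follows (the function name and toy weights are ours): concatenate the importance weights from iterations $1$ through $t-1$, skipping the initial iteration's weights, and average:

```python
import numpy as np

def rho_hat_cumulative(weights_per_iter):
    """Estimator (20): average the importance weights r(X_i)/q(X_i)
    pooled over iterations s = 1, ..., t-1, dropping iteration s = 0."""
    w = np.concatenate(weights_per_iter[1:])  # skip the initial iteration
    return w.mean()

# weights_per_iter[s] holds the weights computed at iteration s; the extreme
# iteration-0 weights (from a poor initial density) are deliberately ignored.
weights_per_iter = [np.array([10.0, 0.1]),   # iteration 0: dropped
                    np.array([1.0, 2.0]),
                    np.array([3.0])]
```

Dropping the initial iteration discards exactly the weights most likely to be extreme.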

### 4.2 Mixture model and an EM algorithm

For a parametric family to have a flexible parameter dimension, we can consider a mixture model. Parametric mixture models are often used to approximate the optimal importance sampling density (Botev et al., 2013; Kurtz and Song, 2013; Wang and Zhou, 2015). Because the performance of importance sampling strongly depends on the approximating density, we need to carefully choose the number of mixture components, $k$. We note that prior studies either assume that $k$ is given (Botev et al., 2013; Kurtz and Song, 2013) or use a rule of thumb based on "some understanding of the structure of the problem at hand" (Wang and Zhou, 2015).

In theory, we can use the CIC to select the number of components, $k$, for any parametric mixture model, considering various parametric component families (e.g., exponential families are especially convenient because the MCE can be found by using the expectation-maximization (EM) algorithm (Dempster et al., 1977)). In this paper, we use the Gaussian mixture model (GMM) as the parametric family in which we find a density approximating the target density $q^*$. Specifically, the GMM density can be expressed as

$$ q(x; \theta) = \sum_{j=1}^{k} \alpha_j \, q_j(x; \mu_j, \Sigma_j), \tag{21} $$

where the component weights, $\alpha_1, \ldots, \alpha_k$, are nonnegative and sum to one. The $j$th Gaussian component density $q_j$ is parametrized by the mean $\mu_j$ and the covariance $\Sigma_j$. Thus, the model parameter $\theta$ denotes the collection $\{(\alpha_j, \mu_j, \Sigma_j)\}_{j=1}^k$.

To find the MCE of $\theta$, we want to minimize $\bar{C}^{(t-1)}(\theta)$ in (17) and thus set its gradient to zero:

$$ -\frac{1}{\sum_{s=0}^{t-1} n_s} \sum_{s=0}^{t-1} \sum_{i=1}^{n_s} \nabla_\theta h\big(X^{(s)}_i, \hat\theta^{(s)}, \theta\big) = 0. $$

This leads to the following updating equations for our version of the EM algorithm:

$$ \alpha_j = \frac{\sum_{s=0}^{t-1} \sum_{i=1}^{n_s} \dfrac{r(X^{(s)}_i)}{q_{\hat\theta^{(s)}}(X^{(s)}_i)} \gamma^{(s)}_{ij}}{\sum_{s=0}^{t-1} \sum_{i=1}^{n_s} \dfrac{r(X^{(s)}_i)}{q_{\hat\theta^{(s)}}(X^{(s)}_i)}}, \tag{22} $$

$$ \mu_j = \frac{\sum_{s=0}^{t-1} \sum_{i=1}^{n_s} \dfrac{r(X^{(s)}_i)}{q_{\hat\theta^{(s)}}(X^{(s)}_i)} \gamma^{(s)}_{ij} X^{(s)}_i}{\sum_{s=0}^{t-1} \sum_{i=1}^{n_s} \dfrac{r(X^{(s)}_i)}{q_{\hat\theta^{(s)}}(X^{(s)}_i)} \gamma^{(s)}_{ij}}, \tag{23} $$

$$ \Sigma_j = \frac{\sum_{s=0}^{t-1} \sum_{i=1}^{n_s} \dfrac{r(X^{(s)}_i)}{q_{\hat\theta^{(s)}}(X^{(s)}_i)} \gamma^{(s)}_{ij} \big(X^{(s)}_i - \mu_j\big)\big(X^{(s)}_i - \mu_j\big)^T}{\sum_{s=0}^{t-1} \sum_{i=1}^{n_s} \dfrac{r(X^{(s)}_i)}{q_{\hat\theta^{(s)}}(X^{(s)}_i)} \gamma^{(s)}_{ij}}, \tag{24} $$

where

$$ \gamma^{(s)}_{ij} = \frac{\alpha_j \, q_j\big(X^{(s)}_i; \mu_j, \Sigma_j\big)}{\sum_{j'=1}^{k} \alpha_{j'} \, q_{j'}\big(X^{(s)}_i; \mu_{j'}, \Sigma_{j'}\big)}. \tag{25} $$

The right-hand sides of the updating equations in (22), (23), and (24) involve $\theta$ either explicitly or implicitly through $\gamma^{(s)}_{ij}$ in (25). The equations cannot be solved analytically for $\theta$. Instead, starting with an initial guess of $\theta$, our version of the EM algorithm alternates between the expectation step (computing $\gamma^{(s)}_{ij}$) and the maximization step (updating $\alpha_j$, $\mu_j$, and $\Sigma_j$ via the updating equations) until convergence is reached.
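The scheme can be sketched compactly in one dimension (our own simplification: scalar components, importance weights supplied directly as an array, deterministic quantile-based initialization, and a fixed number of passes instead of a convergence check):

```python
import numpy as np

def weighted_gmm_em(X, w, k, n_iter=200):
    """EM for the MCE of a one-dimensional GMM: each point X_i carries an
    importance weight w_i standing for r(X_i)/q_{theta_hat}(X_i).
    Iterates the weighted updates (22)-(24) with responsibilities (25)."""
    mu = np.quantile(X, np.linspace(0.1, 0.9, k))  # deterministic init
    var = np.full(k, X.var())
    alpha = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities gamma_ij, eq. (25)
        dens = alpha * np.exp(-0.5 * (X[:, None] - mu)**2 / var) \
               / np.sqrt(2 * np.pi * var)
        gamma = dens / dens.sum(axis=1, keepdims=True)
        # M-step: importance-weighted updates, eqs. (22)-(24)
        wg = w[:, None] * gamma
        alpha = wg.sum(axis=0) / w.sum()
        mu = (wg * X[:, None]).sum(axis=0) / wg.sum(axis=0)
        var = (wg * (X[:, None] - mu)**2).sum(axis=0) / wg.sum(axis=0)
        var = np.maximum(var, 1e-8)  # guard against component collapse
    return alpha, mu, var

# With unit weights this reduces to ordinary EM for a GMM:
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-4.0, 1.0, 1000), rng.normal(4.0, 1.0, 1000)])
alpha, mu, var = weighted_gmm_em(X, np.ones_like(X), k=2)
```

With unit weights the updates coincide with the classical EM updates for a GMM, which is the sanity check performed above.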

We note that Kurtz and Song (2013) derive similar, but simpler, updating equations for the purpose of minimizing the cross-entropy, although they do not use the equations for the EM algorithm. Prior studies (Botev et al., 2013; Wang and Zhou, 2015; Kurtz and Song, 2013) using mixture models for the cross-entropy method for importance sampling (recall that this method is a special case of the procedure in Figure 1 when $r$ is proportional to the optimal importance sampling density) do not iterate their updating equations; rather, they solve them only once when new data are gathered in each iteration. This paper uses the aforementioned EM algorithm (i.e., iterating the updating equations until convergence) within each iteration to minimize $\bar{C}^{(t-1)}(\theta)$ in (17) (see Appendix B for the implementation details).

### 4.3 Summary of the CIC-based distribution approximation procedure

Similar to the AIC, the CIC tends to decrease and then slowly increase as $k$ increases, subject to the randomness of the data (see Figure 2, where $k$ is the number of mixture components in the GMM with unconstrained means and covariances; the free parameter dimension $d$ grows in proportion to $k$, with the constant of proportionality determined by the dimension of the GMM density support). Thus, we minimize the CIC to find the best number of components, $k$ (or the best model dimension $d$), to use in the $t$th iteration (see Appendix B for the implementation details). Figure 3 summarizes the CIC-based distribution approximation procedure.

If we additionally want to estimate a quantity of interest such as $\rho$, we can sample from the final approximating density and use an estimator such as that in (20). Section 5 uses this additional step for importance sampling.

## 5 Numerical Study

Theoretically, importance sampling (Kahn and Marshall, 1953), which has been widely used to reduce the estimator variance in Monte Carlo studies (Givens and Raftery, 1996; Zhang, 1996; Owen and Zhou, 2000; Neddermeyer, 2009), can lead to zero variance of an estimator if we can sample from the optimal distribution. In practice, however, we do not know the optimal distribution, but usually we can evaluate a nonnegative function proportional to the optimal density. Therefore, we use the CIC to approximate the optimal distribution. We expect that if the approximation is better, the variance reduction will be greater.

As a benchmark, we consider the importance sampling method in Kurtz and Song (2013), which has a structure similar to our CIC-based approach. Their method, called cross-entropy-based adaptive importance sampling using Gaussian mixture (CE-AIS-GM), uses the GMM with a pre-specified value for the number of mixture components, $k$. The GMM parameters are updated once in each iteration using updating equations similar to ours in (22), (23), and (24) to reduce the cross-entropy from the GMM density to the optimal importance sampling density.

We use a classical example in the structural safety literature, which is also used in Kurtz and Song (2013). For $X$ with the density of a bivariate Gaussian distribution with zero mean and identity covariance matrix, a system of interest fails when $X$ falls in the region $\{x : g(x) \le 0\}$, where

$$ g(x) = b - x_2 - \kappa (x_1 - e)^2. \tag{26} $$

To compare the CE-AIS-GM with our method, which we call CIC-IS, we vary the parameter $b$ to test three different failure thresholds. We fix the other two parameters, $\kappa$ and $e$, to maintain the shape of the failure region. We use the same sample size as in Kurtz and Song (2013), namely, a total of 8700 replications split across the iterations.

As is well known in the importance sampling literature (Kahn and Marshall, 1953), the optimal importance sampling density for this example is proportional to $r(x) = \mathbb{1}\{g(x) \le 0\}\,\phi(x)$, where $\mathbb{1}\{\cdot\}$ is the indicator function and $\phi$ is the bivariate standard Gaussian density. Therefore, $\rho$ is the probability of the failure event.
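As a baseline for intuition (the parameter values below are illustrative placeholders, not the paper's settings, and we assume failure means $g(x) \le 0$), the failure probability can be estimated by crude Monte Carlo; importance sampling aims to match this answer with far fewer samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x1, x2, b=3.0, kappa=0.5, e=0.1):
    # Limit-state function of the form (26); these parameter values are
    # illustrative only, not the settings used in the paper.
    return b - x2 - kappa * (x1 - e)**2

# Crude Monte Carlo: sample the bivariate standard Gaussian and count
# how often the system fails (g <= 0).
n = 1_000_000
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
p_fail = np.mean(g(x1, x2) <= 0.0)
```

The standard error of this crude estimator scales like $\sqrt{p(1-p)/n}$, which is exactly what a good importance sampling density shrinks.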

We set the CE-AIS-GM to use its pre-specified $k$ and to estimate the failure probability based only on the last iteration's data, as in Kurtz and Song (2013). The CIC-IS adaptively chooses $k$ within the algorithm described in Figure 3 and uses the data from all iterations except the initial iteration to estimate the failure probability by the estimator in (20), as described in Section 4. Because the CIC helps find a distribution reasonably close to the optimal distribution throughout the iterations, in general we can use the data from all iterations, not just the last iteration, to estimate the failure probability, which also helps to reduce the standard error of the estimator.

Table 1 shows the estimation results based on 500 experiment repetitions. The sample mean of the failure probability estimates (Mean) decreases as the threshold, $b$, increases. Compared to the CE-AIS-GM, the CIC-IS obtains at least 50% smaller standard errors.

Figure 4 compares the theoretically optimal density and a CIC-IS density for one of the thresholds. We observe that the CIC-IS density with the automatically chosen $k$ is close to the theoretically optimal density, capturing the shape of the important region.

## 6 Summary and Future Research Directions

This paper proposed an information criterion, called the CIC, to find a parametric density that asymptotically minimizes the cross-entropy to a target density being approximated. The CIC is the sum of two terms: an estimator of the cross-entropy (up to a multiplicative constant) from the parametric density to the target density, and a model complexity penalty term proportional to the free parameter dimension of the parametric density. Under certain regularity conditions, we proved that the penalty term corrects the asymptotic bias of the first term in estimating the true cross-entropy. Empirically, we demonstrated that minimizing the CIC can lead to a density that closely approximates the optimal importance sampling density.

Our findings suggest several future research directions. Importance sampling has been used (Liu, 1996; Bassetti and Diaconis, 2006) as an alternative to the Metropolis-Hastings algorithm (Metropolis et al., 1953; Hastings, 1970) for Bayesian inference. A CIC-based importance sampling may lead to more accurate inference with less computational burden than existing importance sampling methods using a fixed trial (or proposal) density.

Variational inference (Jordan et al., 1999; Wainwright and Jordan, 2008), which is an emerging Bayesian inference technique for approximating a posterior distribution, is a fast and scalable alternative to Markov chain Monte Carlo (MCMC) (Hastings, 1970; Gelfand and Smith, 1990) (for a recent review on variational inference and its comparison to MCMC, see Blei et al. (2016)). The computational efficiency is achieved by positing the posterior approximation as an optimization problem that minimizes the KL divergence from a distribution in a parametric family to the posterior distribution. The computational complexity of the optimization problem depends on the complexity of the parametric family (Blei et al., 2016). We conjecture that the CIC could be a way to balance the inference accuracy and the computational complexity in variational inference.

In proposing the AIC, Akaike (1974) stated that “IC stands for information criterion and A is added so that similar statistics, BIC, DIC etc., may follow.” Indeed, the BIC (Schwarz, 1978) soon followed. Now the CIC, which can be viewed as a generalization of the AIC, follows. Future research on a Bayesian criterion that is to the CIC as the BIC is to the AIC appears to be next, with an obvious choice of the criterion name.

## Appendix A: Assumptions and Proofs

Having stated Assumption 1 in Section 3.3, we give the remaining assumptions that constitute the regularity conditions for our theoretical results. Specifically, Assumptions 2-8 are the standard regularity conditions to establish consistency (see Lemma 1) and asymptotic normality (see Lemma 2) of an M-estimator. Assumptions 9 and 10 are mild conditions that regularize $\Lambda_\eta$ for $\eta$ in a neighborhood of $\theta^*$ to establish Corollary 4.

Theorem 5 is based on the additional regularity conditions, Assumptions 11-16. Assumption 11 is a common regularity condition to interchange expectation and differentiation. Assumptions 12 and 15 are mild conditions for some matrices involving Hessians to converge in probability. Assumptions 13, 14, and 16 regulate the MCE in (10) for each iteration. The uniform integrability conditions in Assumptions 13 and 16 are commonly used (e.g., Conditions A7-A8 in Donohue et al. (2011)) for an information criterion similar to the AIC, allowing the model complexity penalty to be expressed in terms of the free parameter dimension $d$. Assumption 14 is relatively mild, because the MCE approaches $\theta^*$, which is an interior point under Assumption 4.

We recall that the functions $C$ and $h$ are defined in (6) and (8), respectively. Below, $\nabla_\theta$ and $\nabla^2_\theta$ denote the gradient and the Hessian with respect to $\theta$, respectively. For example, $\nabla_\theta h(X, \eta, \theta^*)$ denotes the gradient of $h$ with respect to $\theta$ at $\theta = \theta^*$.

###### Assumption 2.

For any $\eta \in \Theta$, $h(X, \eta, \theta)$ is continuous on $\Theta$ for a.e. $X$ under $Q_\eta$.

###### Assumption 3.

For any $\eta \in \Theta$, there exists a measurable function $M$ such that $\mathbb{E}_{Q_\eta}[M(X)] < \infty$ and $|h(X, \eta, \theta)| \le M(X)$ for a.e. $X$ under $Q_\eta$ and all $\theta \in \Theta$.

###### Assumption 4.

$\theta^*$ is an interior point of $\Theta$.

###### Assumption 5.

For any $\eta \in \Theta$, $h(X, \eta, \theta)$ is twice continuously differentiable in $\theta$ in a neighborhood of $\theta^*$ for a.e. $X$ under $Q_\eta$.

###### Assumption 6.

For any $\eta \in \Theta$, there exist measurable functions $M_1$ and $M_2$ such that $\mathbb{E}_{Q_\eta}[M_1(X)] < \infty$ and $\mathbb{E}_{Q_\eta}[M_2(X)] < \infty$, and such that $\|\nabla_\theta h(X, \eta, \theta)\| \le M_1(X)$ and $\|\nabla^2_\theta h(X, \eta, \theta)\| \le M_2(X)$ for a.e. $X$ under $Q_\eta$ and all $\theta$ in a neighborhood of $\theta^*$. The norm is Euclidean for $\nabla_\theta h$ and Frobenius for $\nabla^2_\theta h$.

###### Assumption 7.

$\Gamma$ is nonsingular.

###### Assumption 8.

For any $\eta \in \Theta$, $\Lambda_\eta$ exists.

###### Assumption 9.

$\nabla_\theta h(X, \eta, \theta^*)$ is continuous in $\eta$ in a neighborhood of $\theta^*$ for a.e. $X$ under $\mu$.

###### Assumption 10.

$\Lambda_\eta$ is bounded for $\eta$ in a neighborhood of $\theta^*$.

###### Assumption 11.

There exists a measurable function $M$ such that $\mathbb{E}_\mu[M(X)] < \infty$ and $\|\nabla^2_\theta h(X, \eta, \theta)\| \le M(X)$ for all $\theta$ in a neighborhood of $\theta^*$ for a.e. $X$ under $\mu$.

###### Assumption 12.

$\mathbb{E}_{Q_\eta}\big[\nabla^2_\theta h(X, \eta, \theta^*)\big]$ is continuous at $\eta = \theta^*$.

###### Assumption 13.

The sequences $n\,\big(\hat\theta^{(t)}_n - \theta^*\big)\big(\hat\theta^{(t)}_n - \theta^*\big)^T$, $t = 1, \ldots, T$, are uniformly integrable.