Recursive partitioning and multi-scale modeling on conditional densities


Li Ma
Department of Statistical Science, Duke University
Durham, NC 27708-0251, USA
li.ma@duke.edu
August 27th, 2011
Abstract

We introduce a nonparametric prior on the conditional distribution of a (univariate or multivariate) response given a set of predictors. The prior is constructed as a two-stage generative procedure: the first stage recursively partitions the predictor space, and the second stage generates the conditional distribution through a multi-scale nonparametric density model on each predictor partition block produced in the first stage. This design allows adaptive smoothing on both the predictor space and the response space, and it yields full posterior conjugacy, so exact Bayesian inference can be completed analytically through a forward-backward recursive algorithm without the need for MCMC, resulting in high computational efficiency (scaling linearly with the sample size). We show that this prior enjoys desirable theoretical properties such as full support and posterior consistency. We illustrate how to apply the model to a variety of inference problems, such as conditional density estimation, hypothesis testing, and model selection, in a manner similar to applying a parametric conjugate prior, while attaining full nonparametricity. Also provided is a comparison to two other state-of-the-art Bayesian nonparametric models for conditional densities, in terms of both model fit and computing time. A real data example from flow cytometry containing 455,472 observations illustrates the substantial computational efficiency of our method and its application to multivariate problems.

Running title: A multi-scale prior for conditional distributions

Supported in part by NSF grants DMS-1309057 and DMS-1612889, and a Google Faculty Research Award.

AMS subject classifications: Primary 62F15, 62G99; secondary 62G07.

Keywords: Pólya tree, multi-resolution inference, Bayesian nonparametrics, density regression, Bayesian CART.

1 Introduction

In recent years there has been growing interest in nonparametrically modeling probability densities based on multi-scale partitioning of the sample space. A prime example in the Bayesian nonparametric literature is the Pólya tree (PT) [12, 22, 31] and its extensions [17, 18, 45, 21, 27]. In particular, Wong and Ma [45] introduced randomization into the partitioning component (involving both random selection of partition directions as well as optional stopping) of the PT framework, which enhances the model’s ability to approximate the shape and smoothness of the underlying density. A PT model with these features is called an optional Pólya tree (OPT).

A further desirable feature of the PT and its relatives, such as the OPT and the more recently introduced adaptive Pólya tree (APT) [27], is the computational ease of carrying out inference. It turns out that the extra component of randomized partitioning, such as that employed in the OPT, does not impair the conjugacy enjoyed by the PT. For example, after observing i.i.d. data, the corresponding posterior of an OPT is still an OPT, that is, the same generative procedure for random probability distributions with its parameters updated to their posterior values. Moreover, the corresponding posterior parameter values can be computed exactly through a sequence of recursive computations, which is in essence a forward-backward algorithm [25]. This, together with the constructive nature of these models, allows one to draw samples from the exact posterior directly without resorting to Markov chain Monte Carlo (MCMC) procedures, and to compute various summary statistics of the posterior analytically. Furthermore, the marginal posterior of the random partitioning adapts to the underlying structure of the data—the sample space will with high posterior probability be more finely divided in places where the underlying distribution has richer structure, i.e., a less uniform topological shape.

Motivated by the computational efficiency and statistical properties of the OPT, which are tied to its use of recursive random partitioning, we aim to further exploit the random recursive partitioning idea in the context of multi-scale density modeling, and build such a model for the conditional density of a response (vector) $Y$ given a predictor (vector) $X$. The objective is to construct a flexible nonparametric model for conditional distributions that maintains all of the desirable statistical and computational properties of the PT and OPT.

A variety of inference tasks involve the estimation, prediction, and testing regarding conditional distributions, and nonparametric inference on conditional densities has been studied from both frequentist and Bayesian perspectives. Many frequentist works are based on kernel estimation methods [10, 16, 11], and they achieve proper smoothing through bandwidth selection, which often involves resampling procedures such as cross-validation [2, 19, 11] and the bootstrap [16]. An alternative frequentist strategy introduced more recently is to employ the so-called block-wise shrinkage [8, 9]. In Bayesian nonparametrics, inference on conditional distributions is often referred to as covariate-dependent distribution modeling, and existing methods fall into two categories. The first category is methods that construct priors for the joint distribution of the response and the predictors, and then use the induced conditional distribution for inference. Some examples are [32, 37, 33, 41], which propose using mixtures of multivariate normals as the model for joint distributions, along with different priors for the mixing distribution. The other category is methods that construct conditional distributions directly without specifying the marginal distribution of the predictors. Many of these methods are based on extending the stick breaking construction for the Dirichlet Process (DP) [39]. Some notable examples, among others, are proposed in [29, 20, 13, 15, 7, 4, 36, 1]. Some recent works in this category do not utilize stick breaking. In [43], the authors propose to use the logistic Gaussian process [23, 42] together with subspace projection to construct smooth conditional distributions. In [21], the authors incorporate covariate dependency into tail-free processes by generating the conditional tail probabilities from covariate-dependent logistic Gaussian processes, and propose a mixture of such processes as a way for modeling conditional distributions. The authors of [24] introduce dependent normalized complete random measures. In [44] the authors introduce the covariate-dependent multivariate Beta process, and use it to generate the conditional tail probabilities of Pólya trees. More recently, in [40] the authors use the tensor product of B-splines to construct a prior for conditional densities, and incorporate a variable selection feature. While many of these nonparametric models on conditional distributions enjoy desirable theoretical properties, inference using these priors generally relies on intense MCMC sampling, and can take substantial computing time even when both the response and the covariate are one-dimensional.

We introduce a new prior, called the conditional optional Pólya tree, for the conditional density of $Y$ given $X$. It takes the form of a two-stage generative procedure that first randomly partitions the predictor space and then, for each predictor partition block, generates the response distribution using an OPT, which implicitly employs a further random partitioning of the response space. We show that this new prior is a fully nonparametric model and yet achieves extremely high computational efficiency even for multivariate responses and covariates. It enjoys all of the desirable theoretical properties of the PT and the OPT priors—namely large support, posterior consistency, and posterior conjugacy—and its posterior parameters can be computed exactly through forward-backward recursion. Under this two-stage design, the posterior distribution on the partitions reflects the structure of the conditional distribution at two levels: first, the predictor space will be partitioned finely in parts where the conditional distribution changes most abruptly, shedding light on how the conditional distribution depends on the predictors; second, the response space will be divided adaptively at different locations of the predictor space, to capture the local structure of the conditional density through adaptive smoothing.

The rest of the paper is organized as follows. In Section 2 we introduce our two-stage prior and show that it is fully nonparametric—with full (integrated) support—for conditional densities. In addition, we make a connection to Bayesian CART and show that our method can be considered a nonparametric version of the latter. In Section 3 we show the full conjugacy of the model, derive the exact form of the posterior through forward-backward recursion, and thereby provide a recipe for carrying out Bayesian inference using the prior. We also prove the posterior consistency of such inference. In Section 4 we discuss practical computational issues in implementing the inference. In Section 5 we provide four simulation examples to illustrate how our method works. The first two are for estimating conditional densities, and the last two concern model selection and hypothesis testing. In Section 6 we apply the proposed method to estimating conditional densities in a flow cytometry data set involving a large number (455,472) of observations, and demonstrate the computational efficiency of the method and its application when both the response and the predictor are multivariate. Section 7 concludes with some discussion. All proofs are given in the Appendix.

2 Conditional optional Pólya trees

In this section we introduce our proposed prior constructively, in terms of a two-stage generative procedure that produces random conditional densities. First we introduce some notions and notation that will be used throughout. Let each observation be a predictor-response pair $(X, Y)$, where $X$ denotes the predictor (or covariate) vector and $Y$ the response (vector), with $\Omega^x$ being the predictor space and $\Omega^y$ the response space. In this work we consider sample spaces that are either finite sets, compact Euclidean rectangles, or a product of the two, and $\Omega^x$ and $\Omega^y$ do not have to be of the same type. (See for instance Example 3.) Let $\mu^x$ and $\mu^y$ be the "natural" measures on $\Omega^x$ and $\Omega^y$. (That is, the counting measure for finite spaces, the Lebesgue measure for Euclidean rectangles, and the corresponding product measure if the space is a product of the two.) Let $\mu = \mu^x \times \mu^y$ be the "natural" product measure on the joint sample space $\Omega^x \times \Omega^y$.

A partition rule on a sample space specifies a collection of possible ways to divide any subset of that space into a number of smaller sets. For example, on the unit rectangle in $\mathbb{R}^d$, the coordinate-wise dyadic mid-split rule allows each rectangular subset whose sides are parallel to the coordinate axes to be divided into two halves at the midpoint of its range in any one of the coordinates. For simplicity, in this work we only consider partition rules that allow a finite number of ways of dividing each set. Such partition rules are said to be finite. (Interested readers can refer to [28, Sec. 2] for a more detailed treatment of partition rules and to Examples 1 and 2 in [45] for examples of the coordinate-wise dyadic mid-split rule over Euclidean rectangles and contingency tables.)
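As a concrete illustration of this partition rule, the following R sketch enumerates the coordinate-wise dyadic mid-split moves available for a $d$-dimensional rectangle stored as a matrix of lower and upper bounds. This is our own illustrative code (the data representation and function name are assumptions, not part of the PTT package mentioned later):

# Sketch: the coordinate-wise dyadic mid-split rule on a d-dimensional rectangle.
# A rectangle is stored as a 2 x d matrix: row 1 = lower bounds, row 2 = upper bounds.
dyadic_mid_splits <- function(rect) {
  d <- ncol(rect)
  lapply(seq_len(d), function(j) {       # one possible way to divide per coordinate
    mid <- (rect[1, j] + rect[2, j]) / 2
    left <- rect;  left[2, j]  <- mid    # lower half along coordinate j
    right <- rect; right[1, j] <- mid    # upper half along coordinate j
    list(left = left, right = right)
  })
}

# Example: the unit square admits two possible mid-splits, one per coordinate.
unit_square <- rbind(lower = c(0, 0), upper = c(1, 1))
length(dyadic_mid_splits(unit_square))   # 2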

We are now ready to introduce our prior for conditional densities as a two-stage constructive procedure. It is important to note that the following describes the generation of conditional densities under our prior and not the operational steps for inference under the prior, which will be addressed in Sections 3 and 4.

Stage I. Predictor partition: We randomly partition the predictor space according to a given partition rule on $\Omega^x$ in the following recursive manner. Starting from $\Omega^x$, for each set $A \subseteq \Omega^x$ reached during the procedure we draw a Bernoulli variable

$$S(A) \sim \mathrm{Bernoulli}\bigl(\rho(A)\bigr).$$

That is, $P\bigl(S(A) = 1\bigr) = \rho(A)$. If $S(A) = 1$, then the partitioning procedure on $A$ terminates and we arrive at a trivial partition of a single block over $A$. (Thus $S(A)$ is called the stopping variable, and $\rho(A)$ the stopping probability.) If instead $S(A) = 0$, then we randomly select one out of the possible ways of dividing $A$ under the rule and partition $A$ accordingly. More specifically, if there are $M(A)$ ways to divide $A$ under the rule, we randomly draw a selection variable $R(A)$ with

$$P\bigl(R(A) = j\bigr) = \lambda_j(A), \quad j = 1, 2, \ldots, M(A),$$

and partition $A$ in the $j$th way if $R(A) = j$. (We call $\lambda_1(A), \ldots, \lambda_{M(A)}(A)$ the partition selection probabilities for $A$.) Let $K_j(A)$ be the number of child sets that arise from this partition, and let $A^j_1, A^j_2, \ldots, A^j_{K_j(A)}$ denote these children. We then repeat the same partition procedure, starting from the drawing of a stopping variable, on each of these children.
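The following R sketch simulates one realization of this Stage I procedure on the unit interval under the dyadic mid-split rule, with a common stopping probability and a depth cap. It is a minimal illustration with our own function names, assuming a one-dimensional predictor (so there is only one way to divide each set and the selection step is trivial):

# Sketch: one draw of the Stage I recursive random partition of [0, 1).
# rho = stopping probability; splits are dyadic mid-splits.
sim_stage1_partition <- function(lo = 0, hi = 1, rho = 0.5,
                                 depth = 0, max_depth = 12) {
  if (depth == max_depth || runif(1) < rho) {
    return(data.frame(lo = lo, hi = hi))           # S(A) = 1: A becomes one block
  }
  mid <- (lo + hi) / 2                             # S(A) = 0: divide A and recurse
  rbind(sim_stage1_partition(lo, mid, rho, depth + 1, max_depth),
        sim_stage1_partition(mid, hi, rho, depth + 1, max_depth))
}

set.seed(1)
sim_stage1_partition()   # a table of the stopped predictor blocks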

The following lemma, first proved in [45], states that as long as the stopping probabilities are uniformly bounded away from 0, this random recursive partitioning procedure will eventually terminate almost everywhere and produce a well-defined partition of the predictor space.

Lemma 1.

If there exists a $\delta > 0$ such that the stopping probability $\rho(A) \ge \delta$ for all sets $A$ that could arise after a finite number of levels of recursive partitioning, then with probability 1 the recursive partitioning procedure on $\Omega^x$ will stop a.e.

Stage II. Generating conditional densities: Next we move on to the second stage of the procedure, which generates the conditional density of the response on each of the predictor partition blocks produced in Stage I. Specifically, for each stopped predictor set $A$ produced in Stage I, we let the conditional distribution of $Y$ given $X = x$ be the same for all $x \in A$, and generate this common (conditional) distribution on $\Omega^y$, denoted $q_A$, from a "local" prior.

When the response space is finite, $q_A$ is simply a multinomial distribution, and so a simple choice of local prior is a Dirichlet prior with the corresponding pseudo-count hyperparameters. In this case, we note that the two-stage prior essentially reduces to a version of the Bayesian CART proposed by Chipman et al. in [3] for the classification problem. When $\Omega^y$ is infinite (or finite but with a large number of elements), one may restrict $q_A$ to be from a parametric family. For example, when the response is real-valued, one may require $q_A$ to be normal with some mean and variance, and place standard conjugate priors on these parameters. In this case the two-stage prior again reduces to a Bayesian CART, this time for the regression problem [3].

The focus of our current work, however, is on the case when no parametric assumptions are placed on the conditional density. To this end, one can draw $q_A$ from a nonparametric prior. A desirable choice for the local prior, which results in analytic simplicity and computational efficiency as we will later show, is a Pólya tree type model [27], and in particular an optional Pólya tree (OPT) distribution [45]:

$$q_A \mid \text{(Stage I partition)} \;\stackrel{\mathrm{ind}}{\sim}\; \mathrm{OPT},$$

independently across the stopped sets $A$ given the partition, where each local OPT is specified by a partition rule on the response space together with its own stopping, selection, and pseudo-count hyperparameters [45]. In general we allow the partition rule for these "local" OPTs to depend on $A$, but adopting a common partition rule on $\Omega^y$—that is, the same rule for all $A$—will suffice for most problems. In the rest of the paper, unless stated otherwise, we assume that a common rule is adopted.
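To give a concrete feel for the local prior, the following R sketch draws one random density on [0, 1) from a depth-limited OPT with dyadic mid-splits, a common stopping probability, and symmetric Beta mass splitting. The defaults of 0.5 mirror the specification used in Section 5; the code is our own illustration, not the PTT package implementation:

# Sketch: simulate one random density on [0, 1) from a depth-limited OPT.
# rho = stopping probability; Beta(alpha, alpha) splits the mass between halves.
sim_opt_density <- function(lo = 0, hi = 1, mass = 1, rho = 0.5, alpha = 0.5,
                            depth = 0, max_depth = 8) {
  if (depth == max_depth || runif(1) < rho) {
    # Stop: spread the remaining mass uniformly over [lo, hi)
    return(data.frame(lo = lo, hi = hi, density = mass / (hi - lo)))
  }
  mid <- (lo + hi) / 2
  w <- rbeta(1, alpha, alpha)       # share of the mass assigned to the left half
  rbind(sim_opt_density(lo, mid, mass * w, rho, alpha, depth + 1, max_depth),
        sim_opt_density(mid, hi, mass * (1 - w), rho, alpha, depth + 1, max_depth))
}

set.seed(1)
f <- sim_opt_density()   # a piecewise-constant density, one row per constant piece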

This completes the description of our two-stage procedure. We now formally define the resulting prior.

Definition 1.

A conditional distribution that arises from the above two-stage procedure is said to have a conditional optional Pólya tree (cond-OPT) distribution. The hyperparameters are the predictor partition rule, the response partition rule, the stopping probabilities $\rho(A)$, the partition selection probabilities $\lambda_j(A)$, and the local OPT parameters, for all sets $A$ that could arise during the predictor partitioning under the rule.

Remark I: To ensure that this definition is meaningful, one must check that the two-stage procedure will in fact generate a well-defined conditional distribution with probability 1. To see this, first note that because the collection of all potential sets on $\Omega^x$ that can arise during Stage I is countable, by Theorem 1 in [45], with probability 1 the two-stage procedure generates an absolutely continuous conditional distribution of $Y$ given $X = x$ for $x$ in the stopped part of $\Omega^x$, provided that the stopping probabilities of the local OPTs are uniformly bounded away from 0. The two-stage generation of the conditional density can then be completed by letting the distribution of $Y$ given $X = x$ be uniform on $\Omega^y$ for the $\mu^x$-null subset of $\Omega^x$ on which the recursive partitioning in Stage I never stops.

Remark II: While the cond-OPT prior involves many hyperparameters, one can appeal to very simple symmetry and self-similarity principles to choose their values. Specifically, such considerations lead to the simple choice: (i) a common stopping probability $\rho(A) \equiv \rho$ for all $A$, (ii) partition selection probabilities spread evenly over the available ways of dividing each set, i.e., $\lambda_j(A) = 1/M(A)$, and (iii) the default symmetric and self-similar specification of [45] for the local OPT hyperparameters. We note that when useful prior knowledge about the structure of the underlying distribution is not available, or when one is unwilling to assume particular structure for the distribution, it is desirable to specify the prior parameters in a symmetric and self-similar way. The common stopping probability should not be too close to 0 or 1, but should take a moderate value between 0.1 and 0.9. A sensitivity analysis demonstrating the robustness of such choices in the context of OPTs is provided in [28]. As for the partition rules, the coordinate-wise dyadic mid-split rule can serve as a simple default choice on both the predictor and response spaces. We adopt such a specification in all of our numerical examples.

Remark III: One constraint in the cond-OPT is that, given the random partition generated in Stage I, the generation of the conditional distributions across different predictor blocks is independent, in a manner similar to Bayesian CART. As we shall see, this constraint is key to the tremendous computational efficiency of the model. It is important to note, however, that due to the randomized partitioning in Stage I, the marginal prior for the conditional distributions at nearby values of the predictor is in fact dependent, thereby achieving smoothing over the predictor space to some extent. More flexible smoothing could be achieved by modeling the "local" priors jointly, but that would incur the need for MCMC sampling, and the most desirable feature of PT type models would be lost.

We have emphasized that the cond-OPT prior imposes no parametric assumptions on the conditional distribution. One may wonder whether this prior is truly "nonparametric" in the sense that it can generate all possible conditional densities. Our next theorem confirms this—under mild conditions on the parameters, which the default specification satisfies, the cond-OPT places positive probability in arbitrarily small $L_1$ neighborhoods of any conditional density. (A definition of an $L_1$ neighborhood for conditional densities is also implied in the statement of the theorem.)

Theorem 2 (Large support).

Suppose $q$ is a conditional density function that arises from a cond-OPT prior whose parameters $\rho(A)$ and $\lambda_j(A)$, for all $A$ that could arise during the recursive partitioning of $\Omega^x$, are uniformly bounded away from 0 and 1, and whose local OPTs all have full support on the densities on $\Omega^y$. Moreover, suppose that the underlying partition rules on $\Omega^x$ and $\Omega^y$ both satisfy the following "fine partition criterion": for any $\delta > 0$, there exists a partition of the corresponding sample space, reachable under the rule, such that the diameter of each partition block is less than $\delta$. Then for any conditional density function $q_0(y \mid x)$ and any $\epsilon > 0$,

$$P\Bigl( \int \bigl| q(y \mid x) - q_0(y \mid x) \bigr| \,\mathrm{d}\mu < \epsilon \Bigr) > 0.$$

Furthermore, let $f$ be any density function on $\Omega^x$ w.r.t. $\mu^x$. Then for any $\epsilon > 0$,

$$P\Bigl( \int \bigl| q(y \mid x) - q_0(y \mid x) \bigr|\, f(x) \,\mathrm{d}\mu < \epsilon \Bigr) > 0.$$

Remark: Sufficient conditions for OPTs to have full support on densities are given in Theorem 2 of [45].

3 Bayesian inference with cond-OPT

Next we investigate how Bayesian inference on conditional densities can be carried out using this prior. First, we note that Chipman et al [3] and Denison et al [6] each proposed MCMC algorithms that enable posterior inference for Bayesian CART. These sampling and stochastic search algorithms can be applied directly here as the local OPT priors can be marginalized out and so the marginal likelihood under each partition tree that arises in Stage I of the cond-OPT is available in closed form [45, 28]. However, as noted in [3] and other works, due to the multi-modal nature of tree structured models, the mixing behavior of the MCMC algorithms is often undesirable. This problem is exacerbated in higher dimensional settings. Chipman et al [3] suggested using MCMC as a tool for searching for good models rather than a reliable way of sampling from the actual posterior.

The main result of this section is that under simple partition rules such as the coordinate-wise dyadic mid-split rule, Bayesian inference under a cond-OPT prior can be carried out in an exact manner in the sense that the corresponding posterior distribution can be computed in closed form and directly sampled from, without resorting to MCMC algorithms. Not only is the computation feasible for multivariate sample spaces of moderate dimensions, but it is in fact highly efficient, scaling linearly with the number of observations.

First let us investigate what the posterior of a cond-OPT prior is. Suppose we have observed $(x_1, y_1), \ldots, (x_n, y_n)$ where, given the $x_i$'s, the $y_i$'s are independent with some conditional density $q(\cdot \mid x)$, and we assume that $q$ has a cond-OPT prior. Further, for any set $A$ that could arise during the recursive partitioning of $\Omega^x$, we let

$$D(A) = \{ i : x_i \in A \}$$

and let $n(A) = |D(A)|$ denote the number of observations with predictors lying in $A$.

For such an $A$, we use $L(A)$ to denote the (conditional) likelihood under $q$ contributed from the data with predictors in $A$. That is,

$$L(A) = \prod_{i \in D(A)} q(y_i \mid x_i).$$

Then, conditional on the event that $A$ arises during the recursive partitioning of $\Omega^x$, we can write $L(A)$ recursively in terms of the stopping variable $S(A)$, the selection variable $R(A)$, and the likelihoods on $A$'s children as follows:

$$L(A) = S(A)\, L_0(A) + \bigl(1 - S(A)\bigr) \sum_{j=1}^{M(A)} \mathbf{1}\{R(A) = j\} \prod_{k=1}^{K_j(A)} L(A^j_k),$$

where

$$L_0(A) = \prod_{i \in D(A)} q_A(y_i),$$

the likelihood from the data with $x_i \in A$ if the partitioning stops on $A$. Equivalently, we can write

$$L(A) = S(A)\, L_0(A) + \bigl(1 - S(A)\bigr) \prod_{k=1}^{K_{R(A)}(A)} L\bigl(A^{R(A)}_k\bigr). \qquad (3.1)$$

Integrating out the randomness on both sides of Eq. (3.1), we get

$$\Phi(A) = \rho(A)\, \Phi_0(A) + \bigl(1 - \rho(A)\bigr) \sum_{j=1}^{M(A)} \lambda_j(A) \prod_{k=1}^{K_j(A)} \Phi(A^j_k), \qquad (3.2)$$

where

$$\Phi(A) = \mathrm{E}\bigl[L(A) \mid A \text{ arises}\bigr]$$

is defined to be the marginal likelihood from the data with predictors in $A$ given that $A$ arises during the recursive partitioning of $\Omega^x$, whereas

$$\Phi_0(A) = \int \prod_{i \in D(A)} q_A(y_i)\, \pi_A(\mathrm{d}q_A) \qquad (3.3)$$

is the marginal likelihood from the data with predictors in $A$ if the recursive partitioning stops on $A$, with the integration taken over the local OPT prior $\pi_A$ for $q_A$. We note that Eqs. (3.1), (3.2), and (3.3) hold for Bayesian CART as well, with $\Phi_0(A)$ being the corresponding marginal likelihood of the local normal model or the multinomial model under the corresponding priors such as those given earlier.

Eq. (3.2) provides a recursive recipe for calculating $\Phi(A)$ for all $A$. It is recursive in the sense that $\Phi(A)$ is computed from the values of $\Phi$ on $A$'s children. (Of course, to complete the calculation the recursion must eventually terminate everywhere on $\Omega^x$. We shall describe the terminal conditions in the next section.) This recursive algorithm is a special case of the forward-backward algorithm [27].
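For concreteness, here is a minimal R sketch of this recursion for one-dimensional $X$ and $Y$ on [0, 1), with dyadic mid-splits and the default parameter values used later in Section 5. The function names and interfaces are our own illustration (not the PTT package API), and a practical implementation would work on the log scale to avoid numerical overflow:

# Marginal likelihood Phi_0(A) of the response values under a local OPT
# on [0, 1) (Eq. (3.3)); lo/hi delimit the current response interval.
opt_phi <- function(y, lo = 0, hi = 1, depth = 0, max_depth = 8,
                    rho = 0.5, alpha = 0.5) {
  n <- length(y)
  if (n == 0) return(1)
  stop_term <- (hi - lo)^(-n)              # all mass spread uniformly on [lo, hi)
  if (depth == max_depth) return(stop_term)
  mid <- (lo + hi) / 2
  n_left <- sum(y < mid)
  n_right <- n - n_left
  # Beta-binomial factor for randomly assigning mass between the two halves
  split_term <- beta(alpha + n_left, alpha + n_right) / beta(alpha, alpha) *
    opt_phi(y[y < mid],  lo,  mid, depth + 1, max_depth, rho, alpha) *
    opt_phi(y[y >= mid], mid, hi,  depth + 1, max_depth, rho, alpha)
  rho * stop_term + (1 - rho) * split_term
}

# Marginal likelihood Phi(A) for a predictor interval A = [lo, hi) (Eq. (3.2));
# in 1-D there is a single way to divide A, so the selection sum has one term.
cond_opt_phi <- function(x, y, lo = 0, hi = 1, depth = 0, max_depth = 8,
                         rho = 0.5) {
  phi0 <- opt_phi(y)                       # Eq. (3.3): local OPT on the response space
  if (length(x) <= 1 || depth == max_depth) return(phi0)   # terminal nodes (Section 4)
  mid <- (lo + hi) / 2
  left <- x < mid
  split_term <- cond_opt_phi(x[left],  y[left],  lo,  mid, depth + 1, max_depth, rho) *
                cond_opt_phi(x[!left], y[!left], mid, hi,  depth + 1, max_depth, rho)
  rho * phi0 + (1 - rho) * split_term
}

# Toy usage on simulated data with a change in the conditional density at x = 0.5.
set.seed(1)
x <- runif(200)
y <- ifelse(x < 0.5, rbeta(200, 5, 2), rbeta(200, 2, 5))
cond_opt_phi(x, y)   # marginal likelihood Phi of the whole predictor space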

The next theorem establishes the posterior conjugacy of cond-OPT.

Theorem 3 (Conjugacy).

After observing $(x_1, y_1), \ldots, (x_n, y_n)$ where, given the $x_i$'s, the $y_i$'s are independent with conditional density $q(\cdot \mid x)$, which has a cond-OPT prior, the posterior of $q$ is again a cond-OPT (with the same partition rules on the predictor and response spaces as the prior). Moreover, for each set $A$ that could arise during the recursive partitioning, the posterior parameters are given as follows.

  1. Stopping probability: $\tilde\rho(A) = \rho(A)\,\Phi_0(A) \big/ \Phi(A)$.

  2. Selection probabilities: $\tilde\lambda_j(A) = \lambda_j(A) \prod_{k=1}^{K_j(A)} \Phi(A^j_k) \Big/ \sum_{j'=1}^{M(A)} \lambda_{j'}(A) \prod_{k=1}^{K_{j'}(A)} \Phi(A^{j'}_k)$, for $j = 1, 2, \ldots, M(A)$.

  3. The local parameters: the stopping, selection, and pseudo-count parameters of the local OPT on $A$ are updated to their corresponding posterior values after observing the response values $\{y_i : x_i \in A\}$, as in [45].

This theorem shows that, a posteriori, our knowledge about the underlying conditional distribution of $Y$ given $X$ can again be represented by the same two-stage procedure that randomly partitions the predictor space and then generates the response distribution accordingly on each of the predictor blocks, except that now the parameters characterizing this two-stage procedure have been updated to reflect the information contained in the data. Moreover, the theorem also provides a recipe for computing these posterior parameters based on $\Phi$ and $\Phi_0$. Given this exact posterior, Bayesian inference can then proceed—samples can be drawn from the posterior cond-OPT directly through vanilla Monte Carlo (as opposed to MCMC) and summary statistics calculated.
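As an illustration of these updates at a single node, the following R sketch turns the prior parameters and precomputed marginal likelihoods into the posterior stopping and selection probabilities of Theorem 3. The argument names and the toy numbers are our own, for illustration only:

# Posterior updates of Theorem 3 at one node A.
# rho, lambda  : prior stopping and selection probabilities for A
# phi0         : Phi_0(A)
# phi_children : a list with one numeric vector per way of dividing A,
#                holding Phi of that division's children
posterior_node_params <- function(rho, lambda, phi0, phi_children) {
  split_scores <- sapply(phi_children, prod)                    # prod_k Phi(A_k^j)
  phi <- rho * phi0 + (1 - rho) * sum(lambda * split_scores)    # Eq. (3.2)
  list(rho_post    = rho * phi0 / phi,                                    # item 1
       lambda_post = lambda * split_scores / sum(lambda * split_scores),  # item 2
       phi         = phi)
}

# Toy usage: a node with a single possible division into two children.
posterior_node_params(rho = 0.5, lambda = 1, phi0 = 0.8,
                      phi_children = list(c(1.2, 0.9)))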

In the next section, we provide more details on how to implement such inference in practice. Before that, we present our last theoretical result about the cond-OPT prior—its posterior consistency, which assures the statistician that the posterior cond-OPT distribution will "converge" in some sense to the truth as the amount of data increases. To this end, we first need a notion of neighborhoods for conditional densities under which such convergence holds. We adopt the notion discussed in [35] and [34], by which a (weak) neighborhood of a conditional density function is defined in terms of a (weak) neighborhood of the corresponding joint density. More specifically, for a conditional density function $q_0$, weak neighborhoods with respect to a marginal density $f$ on $\Omega^x$ are collections of conditional densities of the form

$$\Bigl\{ q : \Bigl| \int g_i(x, y)\, q(y \mid x)\, f(x) \,\mathrm{d}\mu - \int g_i(x, y)\, q_0(y \mid x)\, f(x) \,\mathrm{d}\mu \Bigr| < \epsilon_i, \ i = 1, 2, \ldots, m \Bigr\},$$

where the $g_i$'s are bounded continuous functions on $\Omega^x \times \Omega^y$.

Theorem 4 (Weak consistency).

Let $(X_1, Y_1), (X_2, Y_2), \ldots$ be independent, identically distributed vectors from a probability distribution on $\Omega^x \times \Omega^y$ with density $f(x)\, q_0(y \mid x)$ w.r.t. $\mu$. Suppose the conditional density $q(\cdot \mid \cdot)$ is generated from a cond-OPT prior for which the conditions in Theorem 2 all hold. In addition, assume that the conditional density function $q_0$ and the joint density $f(x)\, q_0(y \mid x)$ are bounded. Then for any weak neighborhood $\mathcal{U}$ of $q_0$ w.r.t. $f$, we have

$$\Pi\bigl(\mathcal{U} \,\big|\, (X_1, Y_1), \ldots, (X_n, Y_n)\bigr) \longrightarrow 1 \quad \text{as } n \to \infty$$

with probability 1, where $\Pi\bigl(\cdot \,\big|\, (X_1, Y_1), \ldots, (X_n, Y_n)\bigr)$ denotes the cond-OPT posterior for $q$.

4 Practical implementation

Next we address some practical issues in computing the posterior and implementing the inference. For simplicity, from now on we shall refer to a set that can arise during the (Stage I) recursive partitioning procedure as a “node” (i.e., as a node in the partition tree).

A prerequisite for applying Theorem 3 is the availability of the $\Phi(A)$ terms, which can be determined recursively through Eq. (3.2). Of course, to carry out the computation of $\Phi$ one must specify terminal conditions for Eq. (3.2), or in other words, specify on what kind of nodes the recursion should terminate. We call such nodes terminal nodes.

There are two kinds of nodes for which the value of $\Phi(A)$ is available directly from theory, and thus the recursion can terminate on them. They are (i) nodes that cannot be further divided under the partition rule on the predictor space, and (ii) nodes that contain no more than one data point. For a node that cannot be further divided, we must have $\rho(A) = 1$ and so $\Phi(A) = \Phi_0(A)$. For a node with no data points, there is no contribution to the likelihood and so $\Phi(A) = \Phi_0(A) = 1$. For a node with exactly one data point, $\Phi(A) = \Phi_0(A)$ is the predictive density of the local OPT on $A$ evaluated at that data point, which is exactly the density of the prior mean of the local OPT and is known directly when the default symmetric and self-similar prior specification for the local OPTs is adopted as recommended in [45].

Note that with these two types of "theoretical" terminal nodes, in principle the recursion will eventually terminate if one divides the predictor space deeply enough. In practice, however, it is unnecessary to take the recursion all the way down to these theoretical terminal nodes. Instead, one can adopt early termination by imposing a technical limit—such as a minimum size (or maximum depth) of the nodes, either in terms of the natural measure or the number of observations therein—to end the recursion. Nodes that are smaller than the chosen size threshold are forced to be terminal, which is equivalent to setting $\rho(A) = 1$ and thus $\Phi(A) = \Phi_0(A)$ for these nodes. We call these nodes "technical" terminal nodes.
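In code, the terminal-node check just described amounts to something like the following R sketch (the argument names and the depth cap of 12 are illustrative, matching the examples in Section 5):

# A node is terminal if it cannot be divided under the partition rule,
# holds at most one observation, or has reached the technical depth limit.
is_terminal <- function(n_points, depth, can_divide, max_depth = 12) {
  !can_divide || n_points <= 1 || depth >= max_depth
}

is_terminal(n_points = 1, depth = 3, can_divide = TRUE)   # TRUE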

With these theoretical and technical terminal nodes, one can then compute $\Phi(A)$ through the recursion formula Eq. (3.2), and compute the posterior according to Theorem 3. Putting all the pieces together, we can summarize the procedure for carrying out Bayesian inference with the cond-OPT prior as a four-step recipe:

  1. For all nodes (terminal or non-terminal), compute $\Phi_0(A)$.

  2. For each non-terminal node (those that are ancestors of the terminal nodes), use Eq. (3.2) to recursively compute $\Phi(A)$.

  3. Given the values of $\Phi_0(A)$ and $\Phi(A)$, apply Theorem 3 to get the parameter values of the posterior cond-OPT distribution.

  4. Sample from the exact posterior by direct simulation of the random two-stage procedure, and/or compute summary statistics of the posterior.

For the last step, direct simulation from the posterior is straightforward (a sketch is given below), but we have not discussed what summary statistics to compute and how to do so. This is problem-specific and will be illustrated in our numerical examples in Section 5.
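To make Step 4 concrete, the following R sketch draws one predictor partition directly from the posterior cond-OPT. Here posterior_params is a hypothetical helper that returns the posterior stopping (and, in higher dimensions, selection) probabilities for a node, e.g., computed via the recursions of Section 3; the example call uses a dummy helper just to show the interface:

# Sketch of Step 4: direct (non-MCMC) simulation of one predictor partition
# from the posterior cond-OPT, for a one-dimensional predictor on [lo, hi).
sample_posterior_partition <- function(lo, hi, posterior_params,
                                       depth = 0, max_depth = 12) {
  p <- posterior_params(lo, hi)
  if (depth == max_depth || runif(1) < p$rho_post) {
    return(data.frame(lo = lo, hi = hi))       # this node stops: one block
  }
  mid <- (lo + hi) / 2                         # only one way to divide in 1-D
  rbind(sample_posterior_partition(lo, mid, posterior_params, depth + 1, max_depth),
        sample_posterior_partition(mid, hi, posterior_params, depth + 1, max_depth))
}

# Dummy example: pretend every node has posterior stopping probability 0.7.
set.seed(1)
sample_posterior_partition(0, 1, function(lo, hi) list(rho_post = 0.7))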

5 Examples

In this section we provide four examples to illustrate inference using the cond-OPT prior. The first two illustrate the estimation of conditional densities; the latter two concern model selection and hypothesis testing. In these examples, the partition rules used on both the predictor and response spaces are always the coordinate-wise dyadic mid-split rule. We adopt the same prior specification across all the examples: the prior stopping probability on each non-terminal node is always set to 0.5, the prior partition selection probability is always spread evenly over the possible ways to partition each set, and the probability-assignment pseudo-counts for the local OPTs are all set to 0.5. For continuous sample spaces, nodes 12 levels down the partition tree are set to be the technical terminal nodes.

Example 1 (Estimating conditional density with abrupt changes over predictor values).

In this example we simulate pairs according to the following distributions.

We generate data sets of three different sample sizes and place the cond-OPT prior on the distribution of $Y$ given $X$. Following the four-step recipe given in the previous section, we can compute the posterior cond-OPT and sample from it.

A representative summary of the posterior partitioning mechanism is the so-called hierarchical maximum a posteriori (hMAP) [45] partition tree, which can be computed from the posterior analytically [45] and is plotted in Figure 1 for the different sample sizes. (Chipman et al [3] and Wong and Ma [45] both discussed reasons why the commonly adopted MAP is not a good summary for tree-structured posteriors due to their multi-level nature. See [45, Sec. 4.2] for further details and reasons why the hMAP is often preferred to the MAP.)

In Figure 1, within each "leaf" node we plot the corresponding posterior mean of the local OPT. Also plotted for each node is the posterior stopping probability. Even with only 100 data points, the posterior suggests that the predictor space should be divided into three pieces—[0, 0.25], [0.25, 0.5], and [0.5, 1]—within which the conditional distribution of $Y$ is homogeneous across $x$. Note that the posterior stopping probabilities on those three intervals are large, in contrast to the near-0 values on the larger sets. Reliably estimating the actual conditional density function on these sets nonparametrically appears to require more than 100 data points. In this example, a sample size of 500 already does a decent job.


Figure 1: The hMAP partition tree structures on the predictor space and the posterior mean estimate of the conditional density given the random partition for Example 1, for the different sample sizes. For each node, the labels give the posterior stopping probability and the number of data points in the node. The plot under each stopped node gives the mean of the posterior local OPT for that node (solid line) along with the true conditional densities (dashed line).

We compare both the model fit and the computing speed of our cond-OPT prior to two existing Bayesian nonparametric models for conditional densities—namely the linear dependent Dirichlet process mixture of normals (LDDP) [5] and the linear dependent Dirichlet process mixture of Bernstein polynomials (LDBP) [1], both available in the DPpackage in R. In this example and the next, for LDDP and LDBP we draw 1,000 posterior samples from the MCMC with a 2,000-iteration burn-in period and a thinning interval of 3, and use the prior specifications given in the examples of the DPpackage. For details, please see the documentation for these two functions in the DPpackage manual on CRAN.

To evaluate model fit, we generate an additional testing data set from the true distribution and calculate the log predictive score (i.e., the log predictive likelihood of the testing set) for the three methods. Table 1 presents the log predictive score for the three methods from a typical simulated data set and the corresponding computing time on the same laptop computer with an Intel Core-i7 CPU using a single core without parallelization. A surprising phenomenon is that the performance of LDBP, in terms of the log predictive score for the testing sample, is not always monotone increasing in the sample size—that is, a larger training sample does not always lead to a better fit on the testing set. In the particular simulation reported in Table 1, the performance of LDBP is actually monotone decreasing in the sample size. The likely cause is that under those models the conditional density is assumed to vary smoothly over the predictors, and so when the true conditional density involves abrupt changes, the misspecified models can be consistently wrong even with large sample sizes.

cond-OPT LDDP LDBP cond-OPT LDDP LDBP cond-OPT LDDP LDBP
log predictive score 75.5 17.6 34.3 78.2 24.9 31.4 81.5 33.1 27.8
CPU time (s) 0.48 0.82 1.8
Table 1: Log predictive score and computing time for three Bayesian nonparametric models on a simulated data set in Example 1

The previous example favors our method because (1) there are a small number of clear boundaries of change for the underlying conditional distribution, and, to a lesser extent, (2) those boundaries—namely 0.25 and 0.5—lie on the potential partition points of the partition rule. In the next example, we examine the case in which the conditional distribution changes smoothly across a continuous predictor without any boundary of abrupt change.

Example 2 (Estimating conditional densities that vary smoothly with predictor values).

In this example we generate $(X, Y)$ pairs from a bivariate normal distribution.

We generate a data set and apply the cond-OPT prior on the distribution of $Y$ given $X$ as we did in the previous example. Again we compute the posterior cond-OPT following our four-step recipe. The hMAP tree and the posterior mean estimate of the conditional density given the random partition are presented in Figure 2. Because the underlying predictor space is unbounded, for simplicity we used the empirically observed range of $X$ as the predictor space for our simulated example. (Other ways to handle this situation include transforming $X$ to have compact support, such as through a CDF or rank transform.)

One interesting observation is that the "leaf" nodes in Figure 2 have very large (close to 1) posterior stopping probabilities. This may seem surprising, as the underlying conditional distribution is not the same for any two neighboring values of $x$. The large posterior stopping probabilities indicate that on those sets, where the sample size is not large, the gain from better estimating the common features of the conditional distribution for nearby $x$ values outweighs the loss from ignoring the differences among them.

Figure 2: The hMAP tree on the predictor space and the predictive conditional density of $Y$ within the stopped sets, conditional on the partition tree, for the simulated sample in Example 2. The plot under each stopped node gives the mean of the posterior local OPT for that node (solid line) along with the true conditional density at the center value of the stopped predictor interval (dashed line). The label above each node gives the posterior stopping probability and the number of data points in the node.

Again, to compare the model fit and computational efficiency with LDDP and LDBP, we repeat a set of simulations with three different sample sizes (the two larger ones being 500 and 2500), and again use the log predictive score on a testing sample of size 100 to evaluate the performance. The results are summarized in Table 2, and they mostly confirm our intuition—the smooth priors overall outperform our model, especially for small sample sizes. The performance difference vanishes as the sample size increases.

cond-OPT LDDP LDBP cond-OPT LDDP LDBP cond-OPT LDDP LDBP
log predictive score 75.4 103 102 86.4 104 104 103 105 105
CPU time (s) 0.8 1.4 2.5
Table 2: Log predictive score and computing time for three Bayesian nonparametric models on a simulated data set in Example 2
Example 3 (Model selection over binary predictors).

Next we show how one can use the cond-OPT to carry out model selection—that is, when multiple predictors are present, identifying the ones that affect the conditional distribution of the response. Consider a case with 30 binary predictors forming a Markov chain, and suppose the conditional distribution of a continuous response $Y$ is specified so that three of the predictors impact the response in an interactive manner. Our interest is in recovering this underlying interactive structure (i.e., the "model"). To illustrate, we simulate 500 data points from this scenario, place a cond-OPT prior on the conditional distribution of $Y$ given the predictors, and consider predictor partitions up to four levels deep. This is achieved by setting $\rho(A) = 1$ for any $A$ that arises after four steps of partitioning, and it allows us to search for models involving up to four-way interactions. We again carry out the four-step recipe to obtain the posterior and calculate the hMAP. The hMAP tree structure, along with the predictive conditional density for $Y$ within each stopped set given the random partition, is presented in Figure 3. The posterior concentrates on partitions involving exactly the three relevant predictors out of the 30 variables. While the predictive conditional density for $Y$ is very rough given the limited number of data points in the stopped sets, the posterior recovers the exact interactive structure of the predictors with little uncertainty.

Figure 3: The hMAP tree structure on the predictor space and the posterior mean estimate of the conditional density of $Y$ given the random partition in each of the stopped sets for Example 3. The bold arrows indicate the "true model"—predictor combinations that correspond to "non-null" distributions. For each node, the labels give the posterior stopping probability, the posterior selection probability for the most probable partition direction if the partitioning does not stop on the node, and the number of data points in the node.


Figure 4: Estimated posterior marginal inclusion probabilities for the 30 predictors in Example 3 for two different sample sizes. The estimates are computed over 1,000 draws from the corresponding posteriors.

In addition, we sample from the posterior and use the proportion of times each predictor appears in the sampled models to estimate the posterior marginal inclusion probabilities. Our estimates based on 1,000 draws from the posterior are presented in Figure 4(a). Note that the sample size 500 is so large that the posterior marginal inclusion probabilities for the three relevant predictors are all close to 1 while those for the other predictors are close to 0. We carry out the same simulation with a reduced sample size of 200, and plot the estimated posterior marginal inclusion probabilities in Figure 4(b). We see that with a sample size of 200, one can already use the posterior to reliably recover the relevant predictors.

Example 4 (Test of independence).

In this example, we illustrate an application of the cond-OPT prior to hypothesis testing. In particular, we use it to test the independence between $X$ and $Y$. To begin, note that the posterior stopping probability on the entire predictor space, $\tilde\rho(\Omega^x)$, given in Theorem 3 is the posterior probability that the conditional distribution of $Y$ is constant over all values of $x$ in $\Omega^x$, or in other words, that $Y$ is independent of $X$ on $\Omega^x$. Hence, one can use $\tilde\rho(\Omega^x)$ as a score for the statistical significance of dependence between the observed variables. A permutation null distribution of this statistic can be constructed by randomly re-pairing the observed $x$ and $y$ values, and based on this, permutation $p$-values can be computed for testing the null hypothesis of independence.

To illustrate, we simulate the predictors for a sample of size 400 under the same Markov chain model as in the previous example, and simulate a response variable as follows.

In particular, $Y$ is dependent on the predictors, but there is no mean or median shift in the conditional distribution of $Y$ over different predictor values. Figure 5 gives the histogram of $\tilde\rho(\Omega^x)$ for 1,000 permuted samples, where the vertical dashed line indicates the value of $\tilde\rho(\Omega^x)$ for the original simulated data, which equals 0.0384. For this particular simulation, 7 out of the 1,000 permuted samples produced a more extreme test statistic.

Figure 5: Histogram of $\tilde\rho(\Omega^x)$ for 1,000 permuted samples. The vertical line indicates the value of $\tilde\rho(\Omega^x)$ for the original data.
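The permutation procedure just described can be sketched in R as follows, where root_stop_prob(x, y) stands for a hypothetical routine returning the posterior stopping probability on the whole predictor space (e.g., computed via the recursions of Sections 3 and 4); smaller values indicate stronger evidence of dependence:

# Sketch of the permutation test in Example 4.
permutation_pvalue <- function(x, y, root_stop_prob, B = 1000) {
  obs <- root_stop_prob(x, y)
  perm <- replicate(B, root_stop_prob(x, sample(y)))   # break the x-y pairing
  mean(perm <= obs)               # smaller statistics are more extreme
}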

Remark I: Note that by symmetry one can place a cond-OPT prior on the conditional distribution of $X$ given $Y$ as well, and that will produce a corresponding posterior stopping probability $\tilde\rho(\Omega^y)$ on the entire response space. One can thus alternatively use $\tilde\rho(\Omega^y)$ as the test statistic for independence.

Remark II: Testing using the posterior stopping probability is equivalent to using a Bayes factor (BF). To see this, note that the BF for testing independence under the cond-OPT can be written as

$$\mathrm{BF} = \frac{\sum_{j=1}^{M(\Omega^x)} \lambda_j(\Omega^x) \prod_{k=1}^{K_j(\Omega^x)} \Phi\bigl((\Omega^x)^j_k\bigr)}{\Phi_0(\Omega^x)},$$

where the numerator is the marginal conditional likelihood of the responses given the predictors if the conditional distribution of $Y$ is not constant over $x$ (i.e., $\Omega^x$ is divided) and the denominator is that if the conditional distribution of $Y$ is the same for all $x$ (i.e., $\Omega^x$ is undivided). By Eq. (3.2) and Theorem 3,

$$\mathrm{BF} = \frac{\Phi(\Omega^x) - \rho(\Omega^x)\,\Phi_0(\Omega^x)}{\bigl(1 - \rho(\Omega^x)\bigr)\,\Phi_0(\Omega^x)} = \frac{\rho(\Omega^x)}{1 - \rho(\Omega^x)} \cdot \frac{1 - \tilde\rho(\Omega^x)}{\tilde\rho(\Omega^x)},$$

which is in a one-to-one correspondence with $\tilde\rho(\Omega^x)$ given the prior parameters.
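Based on the relation above, converting the posterior root stopping probability into a Bayes factor is a one-line computation; for instance, with the default prior stopping probability of 0.5 and the value 0.0384 observed in Example 4 (the function name is ours, for illustration):

# Bayes factor for dependence from the prior and posterior root stopping probabilities.
independence_bf <- function(rho, rho_tilde) {
  (rho / (1 - rho)) * ((1 - rho_tilde) / rho_tilde)
}

independence_bf(rho = 0.5, rho_tilde = 0.0384)   # about 25, favoring dependence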

6 Application to real data: multivariate conditional density estimation in flow cytometry

In flow cytometry experiments for immunological studies, a number (typically 4 to 10) of biomarkers are measured on large numbers of blood cells. Estimated densities and conditional densities of such data can be used for tasks such as automatic classification of the cells [30]. We apply the cond-OPT to estimate the conditional density of the markers "CD4" and "CD8" given two other markers, "FSC-H" and "FSC-W", in a flow cytometry data set. So in this case both the response and the predictor are two-dimensional. This particular data set contains 455,472 cells. Flow cytometry experiments often involve large numbers of cells, and thus practical methods must scale well in computing time and memory usage with respect to the number of observations. This poses a great challenge to existing nonparametric models that require intensive MCMC computation. The values of the four markers are measured in the range [0, 1]. We set the maximum level of partitioning to 10 on both the predictor space and the response space, but otherwise use the same prior specification as before.

Figure 6 presents the posterior mean of the conditional density of CD4 and CD8 given FSC-H and FSC-W under the cond-OPT model, given that the random partition on the predictor space is the one induced by the hMAP tree, which splits the space into 50 pieces. A vast majority—in fact 44 out of the 50 predictor blocks—are not technical terminal regions, and so the model indeed smooths the conditional density over the predictor space. Because the number of predictor blocks is relatively large, we present the estimates for only 16 blocks in Figure 6. The entire computation of the full posterior, the hMAP partition, and the conditional posterior expectation of the conditional density given the hMAP tree took about 360 seconds to complete on a single 3.6 GHz Intel Core-i7 3820 desktop core without parallelization, and required about 8.2 GB of RAM. (Reducing the maximum level of partitioning from 10 to 8 reduces the computing time to about 116 seconds and the RAM usage to about 0.6 GB.)

Figure 6: The posterior mean conditional densities of the two markers CD4 and CD8 given two other markers FSC-H and FSC-W conditional on the hMAP partition on FSC-H and FSC-W for the flow cytometry data set. The first and third columns indicate the corresponding predictor block (in red) in the hMAP partition with the number of observations labeled on top while the plots to their right illustrate the predictive conditional density on that block conditional on the random partition. Due to space constraint, we only show 16 out of the 50 predictor blocks.

7 Discussion

In this work we have introduced a Bayesian nonparametric prior on the space of conditional densities. This prior, which we call the conditional optional Pólya tree, is constructed based on a two-stage procedure that first divides the predictor space and then generates the conditional distribution of the response through local OPT processes. We have established several important theoretical properties of this prior, namely large support, conjugacy and posterior consistency, and have provided a practical recipe for Bayesian inference using this prior.

The construction of this prior does not depend on the marginal distribution of the predictors. One particular implication is that one can transform the predictors before applying the prior without invalidating the posterior inference. (Note that transforming the predictors is equivalent to choosing a different partition rule on the predictor space.) In certain situations it is desirable to perform such a transformation. For example, if the data points are very unevenly spread over the predictor space, then some parts of the space may contain a very small number of data points. There the posterior is mostly dominated by the prior specification and does not provide much information about the underlying conditional distribution. One way to mitigate this problem is to transform the predictors so that the data are more evenly distributed over the predictor space. When the predictor is one-dimensional, for example, this can be achieved by a rank transformation. Another situation in which a transformation may be useful is when the dimensionality of the predictor vector is very high. In this case a dimensionality-reduction transformation can be applied before carrying out the inference. Of course, in doing so one often loses the ability to interpret the posterior conditional distribution directly in terms of the original predictors. An alternative approach when the predictor is high-dimensional is variable selection under certain sparsity assumptions, i.e., assuming that only a small number of predictors affect the conditional density. Exact calculation of the full posterior and the marginal inclusion probabilities, as carried out in Example 3, is impractical when the number of predictors is large. One strategy for overcoming this difficulty is sequential importance sampling, such as the approach proposed in [26].

A general limitation of CART-type randomized partitioning methods is that they require a natural ordering of the space to be partitioned. General partitioning strategies can be designed for unordered spaces, but then the computational efficiency of the proposed model would be lost.

Finally, we note that while we have used recursive partitioning in conjunction with the OPT to build a model for conditional density, one can build such models by replacing the OPT with other multi-scale density models in the family of Pólya tree type models, such as the more recently introduced adaptive Pólya tree (APT) [27].

Software

The proposed model has been implemented in the R package PTT (for Pólya tree type models) as the function cond.opt. A variant of the model that replaces the OPT with an APT is also implemented in the package as function cond.apt. This package is currently available for download at https://github.com/MaStatLab/PTT and will be submitted to CRAN.

Acknowledgment

The flow cytometry data set was provided by EQAPOL (HHSN272201000045C), an NIH/NIAID/DAIDS-sponsored, international resource that supports the development, implementation, and oversight of quality assurance programs (Sanchez PMC4138253).

Appendix: Proofs

Proof of Lemma 1.

The proof of this lemma is very similar to that of Theorem 1 in [45]. Let $B_k$ be the part of $\Omega^x$ that has not been stopped after $k$ levels of recursive partitioning. The random partition of $\Omega^x$ after $k$ levels of recursive partitioning can be thought of as being generated in two steps. First, suppose there is no stopping on any set, and let $\mathcal{T}_k$ be the collection of partition selection variables generated in the first $k$ levels of recursive partitioning. Let $\mathcal{A}_k$ be the collection of sets that arise in the first $k$ levels of non-stopping recursive partitioning, which is determined by $\mathcal{T}_k$. Then we generate the stopping variables for each $A \in \mathcal{A}_l$ successively for $l = 0, 1, \ldots, k$, and once a set is stopped, we let all of its descendants be stopped as well. Now for each $A \in \mathcal{A}_k$, let $I_k(A)$ be the indicator of $A$'s stopping status after $k$ levels of recursive partitioning, with $I_k(A) = 1$ if $A$ is not stopped and $0$ otherwise.

A set $A \in \mathcal{A}_k$ remains unstopped only if none of the stopping variables along its ancestry equals 1, each of which occurs with probability at most $1 - \delta$, so $\mathrm{E}\bigl[\mu^x(B_k)\bigr] = \mathrm{E}\bigl[\sum_{A \in \mathcal{A}_k} I_k(A)\,\mu^x(A)\bigr] \le (1-\delta)^k\,\mu^x(\Omega^x)$. Hence, by Markov's inequality and the Borel–Cantelli lemma, we have $\mu^x(B_k) \to 0$ with probability 1. ∎

Proof of Theorem 2.

We prove only the second result as the first follows by choosing . Also, we consider only the case when and are both compact Euclidean rectangles, because the cases when at least one of the two spaces is finite follow as simpler special cases. For and , let denote the joint density. First we assume that the joint density is uniformly continuous. In this case it is bounded on . We let and

By uniform continuity, we have as . In addition, we define

Note that in particular the continuity of implies the continuity of . Let be any positive constant. Choose a positive constant such that . Because all the parameters in the cond-OPT are uniformly bounded away from 0 and 1, there is positive probability that will be partitioned into where the diameter of each is less than , and the partition stops on each of the ’s. (The existence of such a partition follows from the fine partition criterion.) Let , , and if , and 0 otherwise. Let be the set of indices such that . Then

Let us consider each of the four terms on the right hand side in turn. First,

Note that for each , is a density function in . Therefore by the large support property of the OPT prior (Theorem 2 in [45]), with positive probability,

and so

for all . Also, for any , by the choice of ,

Thus

Finally, again by the choice of , , and so

Therefore for any , by choosing a small enough , we can have

with positive probability. This completes the proof of the theorem for continuous . Now we can approximate any density function arbitrarily close in distance by a continuous one . The theorem still holds because

where and denote the corresponding marginal and conditional density functions for . ∎

Proof of Theorem 3.

Given that a set $A$ is reached during the random partitioning steps on $\Omega^x$, $\Phi(A)$ is the marginal conditional likelihood of

$$\{y_i : x_i \in A\} \quad \text{given} \quad \{A \text{ arises}\}.$$

The first term on the right-hand side of Eq. (3.2), $\rho(A)\,\Phi_0(A)$, is the marginal conditional likelihood of

{Stop partitioning on $A$, $\{y_i : x_i \in A\}$} given {$A$ arises}.

Each summand in the second term, $(1 - \rho(A))\,\lambda_j(A) \prod_{k=1}^{K_j(A)} \Phi(A^j_k)$, is the marginal conditional likelihood of

{Partition $A$ in the $j$th way, $\{y_i : x_i \in A\}$} given {$A$ arises}.

Thus the conjugacy of the prior, and the posterior updates for the stopping probability, the selection probabilities, and the local OPT, follow from Bayes' theorem and the posterior conjugacy of the standard optional Pólya tree prior (Theorem 3 in [45]). ∎

Proof of Theorem 4.

By Theorem 2.1 in [34], which follows directly from Schwartz's theorem (see [38] and [14, Theorem 4.4.2]), we just need to prove that the prior places positive probability mass in arbitrarily small Kullback–Leibler (K-L) neighborhoods of $q_0$ w.r.t. $f$. Here a K-L neighborhood w.r.t. $f$ is defined to be a collection of conditional densities of the form

$$\Bigl\{ q : \int f(x)\, q_0(y \mid x) \log \frac{q_0(y \mid x)}{q(y \mid x)} \,\mathrm{d}\mu < \epsilon \Bigr\}$$

for some $\epsilon > 0$.

To prove this, we just need to show that any conditional density satisfying the conditions given in the theorem can be approximated arbitrarily well in K-L divergence by a piecewise-constant conditional density of the sort that arises from the cond-OPT procedure. We first assume that the joint density is continuous. Following the proof of Theorem 2, consider its modulus of continuity, and let $A_1, \ldots, A_N$ be a reachable partition of $\Omega^x$ such that the diameter of each partition block is less than $\delta$. Next, for each $A_i$, let there be a partition of $\Omega^y$ allowed under the local OPT such that the diameter of each of its blocks is also less than $\delta$. Let

Let