Compressed Gaussian Process


Rajarshi Guhaniyogi and David B. Dunson
Abstract

Nonparametric regression for massive numbers of samples ($n$) and features ($p$) is an increasingly important problem. In such big data settings, a common strategy is to partition the feature space and then separately apply simple models to each partition set. We propose an alternative approach, which avoids such partitioning and the associated sensitivity to neighborhood choice and distance metrics, by using random compression combined with Gaussian process regression. The proposed approach is particularly motivated by the setting in which the response is conditionally independent of the features given their projection to a low dimensional manifold. Conditionally on the random compression matrix and a smoothness parameter, the posterior distribution for the regression surface and the posterior predictive distributions are available analytically. Running the analysis in parallel for many random compression matrices and smoothness parameters, model averaging is used to combine the results. The algorithm can be implemented rapidly even in very big $n$ and $p$ problems, has strong theoretical justification, and is found to yield state of the art predictive performance.

Key Words: Big data; Compressed regression; Gaussian process; Gaussian random projection; Large p, large n; Manifold regression.

1 Introduction

With recent technological progress, it is now routine in many disciplines to collect data containing massive numbers of features, ranging from thousands to millions or more. To account for complex nonlinear relationships between the features and the response, nonparametric regression models are employed; for example,

$$y_i = \mu(x_i) + \epsilon_i, \qquad x_i \in \mathbb{R}^p,$$

where $\mu$ is the unknown regression function and $\epsilon_i$ is a residual. When $p$ is massive, estimating $\mu$ can lead to a statistical and computational curse of dimensionality. One strategy for combatting this curse is dimensionality reduction via variable selection or (more broadly) subspace learning, with the high-dimensional features replaced by their projection onto a $d$-dimensional subspace or manifold with $d \ll p$. In many applications, the relevant information about the high-dimensional features can be encoded in such low dimensional coordinates.

There is a vast frequentist literature on subspace learning for regression, typically employing a two stage approach. In the first stage, a dimensionality reduction technique is used to obtain lower dimensional features that can “faithfully” represent the higher dimensional features. Examples include principal components analysis and more elaborate methods that accommodate non-linear subspaces, such as isomap ([tenenbaum2000global]) and Laplacian eigenmaps ([belkin2003laplacian, guerrero2011laplacian]). These techniques rely on the eigen-decomposition of an $n \times n$ matrix, making efficient computation challenging for large $n$. There is a rich literature focused on removing this bottleneck. For example, [talwalkar2008large] employs a column sampling algorithm that only requires the eigen-decomposition of a much smaller matrix without losing much accuracy, even for large $n$. Once lower dimensional features are obtained, the second stage uses these features in standard regression and classification procedures as if they had been observed initially. Such two stage approaches rely on learning the manifold structure embedded in the high dimensional features, which adds unnecessary computational burden when inferential interest lies mainly in prediction.

Another thread of research focuses on prediction using divide-and-conquer techniques. As the number of features increases, the problem of finding the best splitting attribute becomes intractable, so that CART ([breiman1984classification]), MARS and multiple tree models, such as random forests ([breiman2001random]), cannot be applied efficiently. A much simpler approach is to apply high dimensional clustering techniques, such as METIS, cover trees and spectral clustering. Once the observations are clustered into a few groups, simple models (GLM, Lasso, etc.) are fitted within each cluster. Such methods are sensitive to the clustering, do not characterize predictive uncertainty, and may lack efficiency. There is also a recent literature on scaling up sparse optimization methods, such as the Lasso, to huge $n$ and $p$ settings, relying on algorithms that can exploit multiple processors in a distributed manner ([boyd2011distributed]). However, such methods are yet to be developed for non-linear manifold regression, which is the central focus of this article.

This naturally motivates Bayesian models that simultaneously learn the mapping to the lower-dimensional subspace along with the regression function in the coordinates on this subspace, providing a characterization of predictive uncertainties. [tokdar2010bayesian] proposes a logistic Gaussian process approach, while [reich2011sufficient] use finite mixture models for sufficient dimension reduction. [page2013classification] propose a Bayesian nonparametric model for learning of an affine subspace in classification problems. These approaches have the disadvantages of being limited to linear subspaces, lacking scalability beyond a few dozen features and having potential sensitivity to features corrupted with noise. There is also a literature on Bayesian methods that accommodate non-linear subspaces, ranging from Gaussian process latent variable models (GP-LVMs) ([lawrence2005probabilistic]) for probabilistic nonlinear PCA to mixture factor models ([chen2010compressive]). However, such methods similarly face barriers in scaling up to large and/or . There is a heavy computational price for learning the number of latent variables, the distribution of the latent variables, and the mapping functions while maintaining identifiability restrictions.

Recently, [yang2013bayesian] showed that this computational burden can be largely bypassed by using usual Gaussian process (GP) regression without attempting to learn the mapping to the lower-dimensional subspace. They showed that when the features lie on a $d$-dimensional manifold embedded in the $p$-dimensional feature space with $d \ll p$ and the regression function is not highly smooth, the optimal rate can be obtained using GP regression with a squared exponential covariance in the original high-dimensional feature space. This exciting theoretical result motivates the approach in this article, which is focused on scalable Bayesian nonparametric regression in large $n$ and $p$ settings. For broader applicability than [yang2013bayesian], we accommodate features that are contaminated by noise and hence do not lie exactly on a low-dimensional manifold. In addition, we facilitate massive scaling in both $n$ and $p$ by bypassing MCMC and reducing matrix inversion bottlenecks via random projections. Sensitivity to the random projection and to tuning parameters is eliminated through the use of Bayesian model averaging. To our knowledge, no Bayesian manifold regression technique has yet been developed that can scale to large sample sizes and massive numbers of features while rapidly yielding accurate predictive inference.

Section 2 proposes the model and computational approach in large $p$ settings. Section 3 describes extensions to large $n$, and Section 4 develops theoretical justification. Section 5 contains simulation examples with comparisons to state-of-the-art competitors. Section 6 presents an image data application, and Section 7 concludes the paper with a discussion.

2 Compressed Gaussian process regression

2.1 Model

For subjects $i = 1, \ldots, n$, let $y_i \in \mathbb{R}$ denote a response with associated features $x_i \in \mathbb{R}^p$ that are concentrated near $\mathcal{M}$, a $d$-dimensional manifold embedded in the ambient space $\mathbb{R}^p$. We assume that the response is continuous. The measured features do not fall exactly on the manifold but are corrupted by noise. We assume a compressed nonparametric regression model

$$y_i = \mu(\Psi x_i) + \epsilon_i, \qquad i = 1, \ldots, n, \qquad (1)$$

with the residuals $\epsilon_i$ modeled as Gaussian with variance $\sigma^2$, though other distributions, including heavy-tailed ones, can be accommodated. $\Psi$ is an $m \times p$ matrix that compresses the $p$-dimensional features to dimension $m \ll p$. Following a Bayesian approach, we choose a prior distribution for the regression function $\mu$ and residual variance $\sigma^2$, while randomly generating $\Psi$ following precedent in the literature on feature compression ([maillard2009compressed, fard2012compressed, guhaniyogi2013bayesian]). These earlier approaches differ from ours in focusing on parametric regression. Specifically, we independently draw the elements of $\Psi$ from a standard normal distribution and then orthonormalize the rows using Gram–Schmidt.
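To fix ideas, the sketch below generates such a projection matrix (assuming standard normal entries, as is common in the compression literature); a QR factorization is used in place of explicit Gram–Schmidt, to which it is numerically equivalent for orthonormalizing the rows.

```python
import numpy as np

def draw_projection(m, p, rng=None):
    """Draw an m x p compression matrix with N(0,1) entries and
    orthonormalize its rows (QR on the transpose = Gram-Schmidt on rows)."""
    rng = np.random.default_rng(rng)
    psi = rng.standard_normal((m, p))
    q, _ = np.linalg.qr(psi.T)   # q is p x m with orthonormal columns
    return q.T                   # m x p with orthonormal rows

psi = draw_projection(m=60, p=10_000, rng=0)
x = np.random.default_rng(1).standard_normal(10_000)
x_compressed = psi @ x           # compress one p-dimensional feature vector
```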

We assume that $\mu$ is a continuous function belonging to a Hölder class with smoothness $s$. To allow $\mu$ to be unknown, we use a Gaussian process (GP) prior, with the covariance function chosen to be squared exponential,

$$\sigma(z, z') = \exp\big(-\lambda\,\|z - z'\|^2\big), \qquad (2)$$

with $\lambda$ a smoothness (inverse bandwidth) parameter and $\|\cdot\|$ the Euclidean norm. To additionally allow the residual variance and smoothness to be unknown, we let $\sigma^2$ follow an inverse-gamma prior and a power of $\lambda$ follow a gamma prior, with IG and Ga denoting the inverse-gamma and gamma densities, respectively. The powered gamma prior for $\lambda$ is motivated by the result of [van2009adaptive] showing a minimax adaptive rate of $n^{-s/(2s+p)}$ (up to logarithmic factors) for a GP prior with squared exponential covariance and powered gamma prior. This is the optimal rate for nonparametric regression in the original $p$-dimensional ambient space. The rate can be improved to $n^{-s/(2s+d)}$ when the features lie on $\mathcal{M}$, a $d$-dimensional manifold. [yang2013bayesian] shows that a GP prior with a powered gamma prior on the smoothness can achieve this rate.
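For reference, a minimal sketch of the squared exponential covariance evaluated on compressed features follows, assuming the parametrization $\exp(-\lambda\|\cdot\|^2)$ written in (2); the exact placement of $\lambda$ in the original may differ.

```python
import numpy as np
from scipy.spatial.distance import cdist

def sq_exp_kernel(Z1, Z2, lam):
    """Squared exponential covariance exp(-lam * ||z - z'||^2) between the
    rows of Z1 and Z2 (here z = Psi @ x are compressed feature vectors)."""
    return np.exp(-lam * cdist(Z1, Z2, metric="sqeuclidean"))

# usage: n x n covariance matrix of compressed features, e.g.
# K = sq_exp_kernel(X @ psi.T, X @ psi.T, lam=0.5)
```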

In many applications, the features may not lie exactly on $\mathcal{M}$ due to noise and corruption in the data. We apply the random compression in (1) to de-noise the features, obtaining compressed features $\Psi x_i$ that are much more concentrated near a lower-dimensional subspace than the original $x_i$. With this enhanced concentration, the theory in [yang2013bayesian] suggests excellent performance for an appropriate GP prior. In addition to de-noising, this approach has the major advantage of bypassing estimation of geodesic distances along the unknown manifold between pairs of data points.

2.2 Posterior form

Let $y = (y_1, \ldots, y_n)'$ denote the response vector and $X$ the $n \times p$ feature matrix. The prior distribution on $\{\mu, \sigma^2\}$ induces a normal-inverse gamma (NIG) prior on the realization of $\mu$ at the compressed features and $\sigma^2$,

leading to a NIG posterior distribution for $\{\mu, \sigma^2\}$ given $(\Psi, \lambda)$. In the special case in which the inverse-gamma hyperparameters are sent to zero, we obtain Jeffreys' prior on $\sigma^2$ and the posterior distribution is

(3)
(4)

where , , , , and denotes a multivariate- distribution with degrees of freedom, mean and covariance .

Hence, the exact posterior distribution of $\{\mu, \sigma^2\}$ conditionally on $(\Psi, \lambda)$ is available analytically. The predictive distribution of the responses for new subjects, given their features and $(\Psi, \lambda)$ and marginalizing over the posterior distribution of $\{\mu, \sigma^2\}$, is also available analytically as

(5)

where , , , , .
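The conditional computations reduce to standard Gaussian process conditioning on the compressed features. The sketch below works with $\sigma^2$ fixed for simplicity, so it returns Gaussian rather than multivariate-$t$ predictive distributions; the full normal-inverse-gamma marginalization in (3)-(5) follows the same linear algebra. It reuses sq_exp_kernel from the sketch above.

```python
import numpy as np

def cgp_predict(X_train, y_train, X_test, psi, lam, sigma2):
    """Gaussian predictive mean and covariance for compressed GP regression,
    conditional on the projection psi, bandwidth lam and noise variance sigma2."""
    Z_tr, Z_te = X_train @ psi.T, X_test @ psi.T            # compressed features
    K = sq_exp_kernel(Z_tr, Z_tr, lam) + sigma2 * np.eye(len(y_train))
    K_star = sq_exp_kernel(Z_te, Z_tr, lam)
    K_ss = sq_exp_kernel(Z_te, Z_te, lam)
    L = np.linalg.cholesky(K)                               # O(n^3), once per projection
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_star @ alpha
    v = np.linalg.solve(L, K_star.T)
    cov = K_ss + sigma2 * np.eye(len(X_test)) - v.T @ v     # predictive covariance of new responses
    return mean, cov
```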

2.3 Model averaging

The approach described in the previous section can be used to obtain a posterior distribution for $\mu$ and a predictive distribution for the responses of a new set of subjects, conditionally on the random projection matrix $\Psi$ and the scaling parameter $\lambda$. To achieve robustness with respect to the choice of $\lambda$ and the subspace dimension $m$, following [guhaniyogi2013bayesian], we propose to generate $s$ random matrices $\Psi_1, \ldots, \Psi_s$ having different numbers of rows $m_1, \ldots, m_s$ and different values $\lambda_1, \ldots, \lambda_s$, and then use model averaging to combine the results; the windows of values used in practice are described in Section 5. To make matters more clear, let $\mathcal{M}_l$, $l = 1, \ldots, s$, represent model (1) with $\Psi_l$ having $m_l$ rows, with the quantities of Section 2.2 indexed accordingly by $l$. Let $\{\mathcal{M}_1, \ldots, \mathcal{M}_s\}$ denote the set of models corresponding to the different random projections, $D$ denote the observed data, and $y^*$ denote the responses for future subjects with features $X^*$. Then, the predictive density of $y^*$ given $D$ is

$$p(y^* \mid D) = \sum_{l=1}^{s} p(y^* \mid D, \mathcal{M}_l)\, P(\mathcal{M}_l \mid D), \qquad (6)$$

where the predictive density of $y^*$ given $D$ under projection $\mathcal{M}_l$ is given by (5) and the posterior probability weight on projection $\mathcal{M}_l$ is

$$P(\mathcal{M}_l \mid D) = \frac{p(D \mid \mathcal{M}_l)\,P(\mathcal{M}_l)}{\sum_{k=1}^{s} p(D \mid \mathcal{M}_k)\,P(\mathcal{M}_k)}.$$

Assuming equal prior weights for each random projection, $P(\mathcal{M}_l) = 1/s$. In addition, the marginal likelihood under $\mathcal{M}_l$ is

(7)

After a little algebra, one observes that the marginal likelihood of the data under model $\mathcal{M}_l$ in (1) is available in closed form.

Plugging these expressions into (6), one obtains the posterior predictive distribution as a weighted average of densities. Given that the computations for different random projections do not depend on each other, the calculations are embarrassingly parallel, with a trivial expense for combining. The main computational expense comes from the inversion of an $n \times n$ matrix under each random projection. In the next section, we develop approaches for accelerating this inversion for large $n$.
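Conditional on $\sigma^2$, the marginal likelihood of $y$ under a projection is a zero-mean Gaussian with covariance given by the kernel matrix plus noise, and the weights in (6) are proportional to these marginals under equal prior model probabilities. A sketch (with $\sigma^2$ held fixed, whereas the paper integrates it out) follows, reusing sq_exp_kernel and cgp_predict from above.

```python
import numpy as np

def log_marginal(y, Z, lam, sigma2):
    """Gaussian log marginal likelihood of y under one projection (sigma2 fixed)."""
    K = sq_exp_kernel(Z, Z, lam) + sigma2 * np.eye(len(y))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(y) * np.log(2 * np.pi)

def averaged_prediction(X_tr, y_tr, X_te, projections, lams, sigma2):
    """Model-averaged predictive mean over s random projections."""
    log_w, means = [], []
    for psi, lam in zip(projections, lams):         # embarrassingly parallel in practice
        mean, _ = cgp_predict(X_tr, y_tr, X_te, psi, lam, sigma2)
        means.append(mean)
        log_w.append(log_marginal(y_tr, X_tr @ psi.T, lam, sigma2))
    log_w = np.array(log_w)
    w = np.exp(log_w - log_w.max())                 # equal prior weights: softmax of log marginals
    w /= w.sum()
    return np.sum(w[:, None] * np.array(means), axis=0)
```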

3 Scaling to large n

Fitting (1) using model averaging requires computing inverses and determinants of covariance matrices of order $n \times n$. In problems with large $n$, this adds a heavy computational burden of order $O(n^3)$. Additionally, as $n$ increases, matrix inversion becomes more unstable due to the propagation of errors under finite machine precision. This problem is further exacerbated if the covariance matrix is nearly rank deficient.

To address such issues, existing solutions rely on approximating the Gaussian process with another process that is more tractable computationally. One popular approach constructs the approximating process via a finite basis approximation using kernel convolution ([higdon2002space]) or Kalman filtering ([wikle1999dimension]). Alternatively, one can use a Gaussian process with a compactly supported correlation function, which makes the covariance matrix sparse ([kaufman2008covariance]), facilitating inversion through efficient sparse solvers.

[banerjee2008gaussian] proposes a low rank approach that imputes the process conditionally on a few knot-points, closely related to subset of regressors methods in machine learning ([smola2000sparse]). Subsequently, [finley2009hierarchical] in statistics and [snelson2006sparse] in machine learning report bias in parameter estimation for these approaches ([banerjee2008gaussian, smola2000sparse]) and suggest possible remedies for bias adjustment. To avoid sensitivity to knot selection, [banerjee2013efficient] approximates the Gaussian process by conditioning on its random projection through an $m_n \times n$ random matrix with $m_n \ll n$, together with independent noise terms introduced for bias correction, similar to [finley2009hierarchical].

We adapt [banerjee2013efficient] from usual GP regression to our compressed manifold regression setting. In particular, let

(8)

where the component covariance matrices are constructed from the squared exponential kernel on the compressed features and the additional $m_n \times n$ random projection used for the low rank approximation. The marginal posterior distributions of the regression function and $\sigma^2$ are then available in analytical form.

Owing to the special diagonal-plus-low-rank structure of the covariance matrices involved, the required matrix inversions can be carried out efficiently using the Sherman–Morrison–Woodbury identity.
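A generic sketch of this step: covariance matrices of the form $\sigma^2 I_n + U C U'$ with $U$ tall and thin ($n \times m_n$) can be inverted by solving only an $m_n \times m_n$ system.

```python
import numpy as np

def smw_solve(sigma2, U, C, b):
    """Solve (sigma2 * I + U C U^T) x = b via Sherman-Morrison-Woodbury,
    inverting only an m x m matrix (U is n x m, C is m x m)."""
    Ainv_b = b / sigma2                          # (sigma2 I)^{-1} b
    Ainv_U = U / sigma2
    small = np.linalg.inv(C) + U.T @ Ainv_U      # m x m system
    return Ainv_b - Ainv_U @ np.linalg.solve(small, U.T @ Ainv_b)

# quick check against direct inversion
rng = np.random.default_rng(0)
U = rng.standard_normal((500, 20)); C = np.eye(20); b = rng.standard_normal(500)
assert np.allclose(smw_solve(0.5, U, C, b),
                   np.linalg.solve(0.5 * np.eye(500) + U @ C @ U.T, b))
```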

Attention now turns to prediction from (8). The predictive distribution of the responses for new subjects, given their features and $(\Psi, \lambda)$ and marginalizing over the posterior distribution of the model parameters, is available analytically as

(9)

Evaluating the above expression requires inverting matrices of order $m_n \times m_n$ only. Model averaging is again employed to limit sensitivity to the choice of random projection; following calculations similar to those in Section 2.3, the model averaging weights are available in closed form.

Model averaging is performed over a wide interval of possible values of $m$, determined by the “compressed sample size” $m_n$, analogous to Section 2.3.

An important question that remains is how much information is lost in compressing the high-dimensional feature vector to a much lower dimension. In particular, one would expect to pay a price for the huge computational gains in terms of predictive performance or other metrics. We address this question in two ways. First, we argue satisfactory theoretical performance in prediction in a large $n$, large $p$ asymptotic paradigm in Section 4. Then, we consider practical performance in finite samples using simulated and real data sets.

4 Convergence analysis

This section provides theory supporting the excellent practical performance of the proposed method. In our context the feature vector is assumed to be $x = z + \epsilon$, with $z \in \mathcal{M}$ and $\epsilon$ a noise vector in $\mathbb{R}^p$. Compressing the feature vector compresses the signal and the noise separately and then adds them, $\Psi x = \Psi z + \Psi \epsilon$. The following two facts are used to argue that compression results in near optimal inference.

  (A) When the features lie on a manifold, a two stage estimation procedure (compression followed by Gaussian process regression) leads to optimal convergence properties. This is used to show that using the compressed features in the Gaussian process regression yields the optimal rate of convergence.

  (B) Noise compression through $\Psi$ mitigates the deleterious effect of noise in the features on the resulting performance.

Let $\mu_0$ and $\mu$ denote the true and the fitted regression functions, respectively. Define $\|\mu - \mu_0\|$ as the empirical $L_2$ distance between $\mu$ and $\mu_0$ under a fixed design; when the design is random, the $L_2(F)$ distance is used, where $F$ is the marginal distribution of the features. Denote by $\Pi_n(\cdot \mid y_1, \ldots, y_n)$ the posterior distribution given the data. Interest then lies in the rate at which the posterior contracts around $\mu_0$ under this metric, which calls for finding a sequence $\epsilon_n \to 0$ such that

$$\Pi_n\big(\|\mu - \mu_0\| > M\epsilon_n \mid y_1, \ldots, y_n\big) \to 0 \quad \text{for a sufficiently large constant } M. \qquad (10)$$

Definition: Given two manifolds $\mathcal{M}_1$ and $\mathcal{M}_2$, a differentiable map $F: \mathcal{M}_1 \to \mathcal{M}_2$ is called a diffeomorphism if it is a bijection and its inverse $F^{-1}$ is differentiable. If these functions are $r$ times continuously differentiable, $F$ is called a $C^r$-diffeomorphism.

Our analysis builds on the following result (Theorem 2.3 in [yang2013bayesian]).

Theorem 4.1

Assume $\mathcal{M}$ is a $d$-dimensional compact submanifold of $\mathbb{R}^p$, and let $\iota: \mathcal{M} \to \mathbb{R}^p$ be the embedding map. Further assume $G: \mathbb{R}^p \to \mathbb{R}^m$ is a dimensionality reducing map such that the restriction of $G$ to $\mathcal{M}$ is a diffeomorphism (of sufficient smoothness) onto its image. Then, for a true regression function in a Hölder class of smoothness $s$, a Gaussian process prior of the type above, applied with the reduced features $G(x_1), \ldots, G(x_n)$, leads to a posterior contraction rate of at least $n^{-s/(2s+d)}$ up to logarithmic factors. This is a huge improvement upon the minimax optimal adaptive rate of $n^{-s/(2s+p)}$ without the manifold structure in the features.

We use the above result in our context. Define the linear transformation $\Psi_{\mathcal{M}}: \mathcal{M} \to \mathbb{R}^m$, $\Psi_{\mathcal{M}}(z) = \Psi z$. Using standard properties of Gaussian random projection matrices, given $\epsilon, \delta > 0$, if the projected dimension $m$ is sufficiently large (growing with the intrinsic dimension $d$ and only logarithmically in $p$), then with probability greater than $1 - \delta$ the following relationship holds for every pair of points $z_1, z_2 \in \mathcal{M}$,

$$(1 - \epsilon)\sqrt{m/p}\,\|z_1 - z_2\| \;\le\; \|\Psi z_1 - \Psi z_2\| \;\le\; (1 + \epsilon)\sqrt{m/p}\,\|z_1 - z_2\|, \qquad (11)$$

implying that $\Psi_{\mathcal{M}}$ is a diffeomorphism onto its image with probability greater than $1 - \delta$.

On $\mathcal{M}$, the compression map is therefore a diffeomorphism onto its image, so Theorem 4.1 implies the near optimal contraction rate for GP regression with the compressed noiseless features $\Psi z_i$, $i = 1, \ldots, n$; combined with (B) below, the same conclusion extends to the compressed noisy features $\Psi x_i$. This proves (A).
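A quick numerical illustration of both ingredients (a sketch only, not part of the formal argument, and reusing draw_projection from Section 2.1): with orthonormalized Gaussian rows, pairwise distances are approximately preserved after rescaling by $\sqrt{p/m}$, while the norm of the compressed noise shrinks by roughly $\sqrt{m/p}$.

```python
import numpy as np
rng = np.random.default_rng(0)

p, m, n = 10_000, 60, 50
t = rng.uniform(1.5 * np.pi, 4.5 * np.pi, n)                 # 1-d manifold coordinate, for illustration
Z = np.column_stack([t * np.cos(t), t * np.sin(t)]) @ rng.standard_normal((2, p)) / np.sqrt(p)
psi = draw_projection(m, p, rng=1)

d_orig = np.linalg.norm(Z[0] - Z[1:], axis=1)
d_comp = np.linalg.norm((Z[0] - Z[1:]) @ psi.T, axis=1) * np.sqrt(p / m)
print(np.abs(d_comp / d_orig - 1).max())      # modest relative distortion; shrinks as m grows

eps = 0.05 * rng.standard_normal(p)           # feature noise
print(np.linalg.norm(psi @ eps) / np.linalg.norm(eps))   # roughly sqrt(m/p) ~ 0.08
```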

Let $\psi_j$ denote the $j$-th row of $\Psi$, $j = 1, \ldots, m$, so that $\Psi\epsilon = (\psi_1'\epsilon, \ldots, \psi_m'\epsilon)'$ is the compressed noise. Using Lemma 2.9.5 in [vanweak], the magnitude of the compressed noise can be bounded.

Therefore, compression reduces the magnitude of the noise relative to the original features. Hence (B) is proved. Thus, even if noise exists, the asymptotic performance of GP regression with features $\Psi x_i$ will be similar to that with features $\Psi z_i$ (which, by (A), has “optimal” asymptotic performance).

5 Simulation Examples

We assess the performance of compressed Gaussian process (CGP) regression in a number of simulation examples. We consider various numbers of features ($p$) and levels of noise in the features to study their impact on performance. In all the simulations, the out of sample predictive performance of the proposed CGP regression is compared to that of uncompressed Gaussian process regression (GP), Bayesian additive regression trees (BART) [chipman2010bart], random forests (RF) [breiman2001random] and treed Gaussian processes (TGP) [gramacy2008bayesian]. Unfortunately, with a massive number of features, traditional BART, RF and TGP are computationally prohibitive. Therefore, we consider compressed versions in which we generate a single projection matrix to obtain a single set of compressed features, running the analysis with the compressed features instead of the original features. This idea leads to compressed versions of random forests (CRF), Bayesian additive regression trees (CBART) and treed Gaussian processes (CTGP). These methods entail faster implementation when the number of features is massive.

As a default in these analyses, we use an upper bound on the dimension of the linear subspace to compress to that seems reasonable in practice. In addition, we implement a two stage GP (2GP) in which the $p$-dimensional features are projected to a lower dimension using Laplacian eigenmaps ([belkin2003laplacian, guerrero2011laplacian]) in the first stage and a GP with the projected features is fitted in the second stage. We also compared the Lasso and partial least squares regression (PLSR) to indicate the advantages of our proposed method over linear regularization methods. However, in the presence of a strong nonlinear relationship between the response and the features, Lasso and PLSR perform poorly, and hence their results are omitted.

When $n$ is massive, to bypass the heavy computational price associated with CGP for inverting an $n \times n$ matrix, we employ the low rank approximation of the compressed Gaussian process described in Section 3. As an uncompressed competitor of CGP in settings with large $n$, the efficient Gaussian random projection technique of [banerjee2013efficient] is implemented; this is also referred to as GP to avoid needless confusion. Along with GP, CBART and CRF are included as competitors. CTGP poses a heavy computational burden for large $n$ and is therefore omitted.

As a more scalable competitor, we employ the popular two stage technique of clustering the massive sample into a number of clusters, followed by fitting a simple model such as the Lasso within each cluster. To facilitate clustering of the high dimensional features in the first stage, we use the spectral clustering algorithm ([ng2001spectral]) described in Algorithm 1.

Input: features $x_1, \ldots, x_n$ and the number of clusters required, $k$.

  • Form the affinity matrix $A$ defined by $A_{ij} = \exp\{-\|x_i - x_j\|^2 / (2\tau^2)\}$ if $i \neq j$ and $A_{ii} = 0$, for some judicious choice of the scale $\tau$.

  • Define $D$ to be the diagonal matrix whose $i$-th diagonal entry is the sum of the elements in the $i$-th row of $A$. Construct $L = D^{-1/2} A D^{-1/2}$.

  • Find $v_1, \ldots, v_k$, the eigenvectors corresponding to the $k$ largest eigenvalues of $L$. Form the matrix $V$ by stacking the eigenvectors in columns.

  • Normalize $V$ so that each row of $V$ has unit norm.

  • Now, treating each row of $V$ as a point in $\mathbb{R}^k$, cluster them into $k$ clusters via $k$-means clustering.

  • Finally, assign $x_i$ to cluster $j$ if the $i$-th row of $V$ goes to cluster $j$.

Algorithm 1 Spectral Clustering Algorithm

Once the observations are clustered, a separate Lasso is fitted within each cluster. Henceforth, we refer to this procedure as distributed supervised learning (DSL); a sketch is given below.
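A compact sketch of this two stage DSL baseline using off-the-shelf implementations: scikit-learn's spectral clustering stands in for Algorithm 1, a cross-validated Lasso is fitted within each cluster, and test points are assigned to the nearest training-cluster centroid (one of several reasonable choices not specified above). The values of k and gamma are illustrative placeholders.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.linear_model import LassoCV

def dsl_fit_predict(X_tr, y_tr, X_te, k=5, gamma=1e-3):
    # stage 1: spectral clustering of the training features
    labels = SpectralClustering(n_clusters=k, affinity="rbf", gamma=gamma,
                                random_state=0).fit_predict(X_tr)
    centroids = np.vstack([X_tr[labels == j].mean(axis=0) for j in range(k)])
    # assign each test point to the nearest training-cluster centroid
    te_labels = np.argmin(((X_te[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    # stage 2: a separate cross-validated Lasso within each cluster
    preds = np.empty(len(X_te))
    for j in range(k):
        lasso = LassoCV(cv=5).fit(X_tr[labels == j], y_tr[labels == j])
        if np.any(te_labels == j):
            preds[te_labels == j] = lasso.predict(X_te[te_labels == j])
    return preds
```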

The model averaging step in CGP requires choosing a window of possible values of $m$. When $n$ is small, we adopt the window suggested in [guhaniyogi2013bayesian], which determines the number of possible models to be averaged across. When $n$ is large, a different window is chosen, and the number of rows $m_n$ of the additional random projection is held fixed for the simulation study with large $n$. However, changing $m_n$ moderately does not alter the performance of CGP.

5.1 Manifold Regression on the Swiss Roll

To provide some intuition for our model, we start with a concrete example where the distribution of the response is a nonlinear function of the coordinates along a Swiss roll, which is embedded in a high dimensional ambient space. To be more specific, we sample manifold coordinates and then sample a high dimensional feature vector by mapping these coordinates onto a Swiss roll embedded in $\mathbb{R}^p$ and corrupting every coordinate with noise.

Finally, responses are simulated to have a nonlinear and non-monotonic relationship with the features,

(12)

Clearly, $y$ and $x$ are conditionally independent given the manifold coordinates, which constitute the low-dimensional signal. In particular, $x$ lives on a (noise corrupted) Swiss roll embedded in a $p$-dimensional ambient space (see Figure 1(a)), but $y$ is only a function of coordinates along the Swiss roll (see Figure 1(b)).
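For concreteness, the following sketch generates data of this type; the particular Swiss roll parametrization, the random linear embedding into $\mathbb{R}^p$, and the nonlinear response below are illustrative assumptions, since the exact mapping and the form of (12) are not reproduced here.

```python
import numpy as np

def simulate_swiss_roll(n, p, sigma_x, rng=None):
    rng = np.random.default_rng(rng)
    t = rng.uniform(1.5 * np.pi, 4.5 * np.pi, n)      # roll coordinate
    h = rng.uniform(0.0, 20.0, n)                     # height coordinate
    roll3d = np.column_stack([t * np.cos(t), h, t * np.sin(t)])
    A = rng.standard_normal((3, p)) / np.sqrt(3)      # random linear embedding into R^p
    X = roll3d @ A + sigma_x * rng.standard_normal((n, p))        # noise-corrupted features
    y = np.sin(t) * np.exp(-0.05 * h) + 0.1 * rng.standard_normal(n)  # nonlinear, non-monotone in (t, h)
    return X, y, np.column_stack([t, h])

X, y, coords = simulate_swiss_roll(n=100, p=10_000, sigma_x=0.02, rng=0)
```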

The geodesic distance between two points on a Swiss roll can be substantially different from their Euclidean distance in the ambient space. For example, in Figure 1(c) two points joined by the line segment have much smaller Euclidean distance than geodesic distance. Theorem 4.1 in Section 4 guarantees optimal performance when the compact sub-manifold is sufficiently smooth, so that the locally Euclidean distance serves as a good approximation to the geodesic distance. The Swiss roll presents a challenging set up for CGP, since points on the manifold that are close in a Euclidean sense can be quite far in a geodesic sense.

To assess the impact of the number of features ($p$) and the noise level in the features on the performance of CGP, a number of simulation scenarios are considered in Table 1. For each of these simulation scenarios, we generate multiple datasets and present predictive inference such as mean squared prediction error (MSPE), and coverage and lengths of 95% predictive intervals (PI), averaged over all replicates.

Simulation  sample size (n)  no. of features (p)  noise in the features
1 100 10,000 0.02
2 100 20,000 0.02
3 100 10,000 0.05
4 100 20,000 0.05
5 100 10,000 0.10
6 100 20,000 0.10
Table 1: Different Simulation settings for CGP.

In our experiments, the responses and features are centered. To implement the Lasso, we use the glmnet package ([friedman2009glmnet]) in R, with the optimal tuning parameter selected through 10-fold cross validation. CRF, CBART and CTGP are implemented in R using the randomForest ([liaw2002classification]), BayesTree ([chipman2009package]) and tgp ([gramacy2007tgp]) packages, respectively.

Figure 1: Simulated features and response on a noisy Swiss roll. Panels: (a) noise corrupted Swiss roll; (b) response versus the manifold coordinates; (c) the Swiss roll shown in 2D.

5.1.1 MSPE Results

Predictive MSE for each of the simulation settings averaged over 10 simulated datasets is shown in Table 2. Subscripted values represent bootstrap standard errors for the averaged MSPEs, calculated by generating 50 bootstrap datasets resampled from the MSPE values, finding the average MSPE of each, and then computing their standard error.
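The bootstrap standard errors reported in the tables can be obtained, for example, as in the following sketch of the resampling just described.

```python
import numpy as np

def bootstrap_se_of_mean(mspe_values, n_boot=50, rng=None):
    """Resample the per-replicate MSPE values, average each resample,
    and report the standard error of those averages."""
    rng = np.random.default_rng(rng)
    mspe_values = np.asarray(mspe_values)
    boot_means = [rng.choice(mspe_values, size=len(mspe_values), replace=True).mean()
                  for _ in range(n_boot)]
    return np.std(boot_means, ddof=1)
```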

Table 2 shows that feeding randomly compressed features into any of the nonparametric methods leads to good predictive performance, while the Lasso fails to improve much upon the null model (not shown here). For both values of $p$, when the Swiss roll is corrupted with low noise, CGP, CBART and CRF provide significantly better performance than GP. Increasing noise in the features results in deteriorating performance for all the competitors. CGP is an effective tool for reducing the effect of noise in the features, but at a tipping point (depending on the noise level) the noise distorts the manifold too much, and CGP starts performing similarly to GP. CRF and CTGP perform much worse than CGP in high noise scenarios, while CBART produces competitive performance. Two stage GP (2GP) performs much worse than all the other competitors; perhaps the two stage procedure is considerably more sensitive to noise. Increasing the number of features does not significantly alter MSPE for CGP in the presence of low noise, consistent with asymptotic results showing that posterior convergence rates depend on the intrinsic dimension of the manifold instead of $p$ when the features are concentrated close to it. In the next section, we study these aspects with increasing sample size and noise in the features.

Table 2: Performance comparisons of the competitors in terms of mean squared prediction error (MSPE). Columns correspond to the noise levels in the features (.02, .05, .10); rows correspond to CGP, GP, CRF, CBART, CTGP and 2GP, reported separately for the two choices of p.

5.1.2 Coverage and Length of PIs

To assess whether CGP is well calibrated in terms of uncertainty quantification, we compute the coverage and length of 95% predictive intervals (PI) for CGP along with all the competitors. Although point-prediction methods such as CRF do not directly provide such coverage probabilities, we present a measure of predictive uncertainty for those methods following the popular two stage plug-in approach: (i) estimate the regression function in the first stage; (ii) construct a 95% PI based on the normal distribution centered on the predictive mean from the regression model, with variance equal to the estimated variance of the residuals. Boxplots of coverage probabilities for all the simulation cases are presented in Figure 2. Figure 3 presents median lengths of the 95% predictive intervals.
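A minimal sketch of this plug-in construction: normal intervals around the predictive mean with the estimated residual standard deviation plugged in.

```python
import numpy as np
from scipy.stats import norm

def plugin_interval(fit_train, y_train, pred_test, level=0.95):
    """Two-stage plug-in PI: predictive mean +/- z * residual sd."""
    resid_sd = np.std(y_train - fit_train, ddof=1)
    z = norm.ppf(0.5 + level / 2)
    return pred_test - z * resid_sd, pred_test + z * resid_sd
```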

Both figures demonstrate that in all the simulation scenarios CGP, uncompressed GP, 2GP and CBART result in predictive coverage of around 95%, while CRF suffers from severe under-coverage. The gross under-coverage of CRF is attributable to its overly narrow predictive intervals. Additionally, CTGP shows some under-coverage, with shorter predictive intervals than CGP, GP, 2GP or CBART. CGP turns out to be an excellent choice among all the competitors across fairly broad simulation scenarios. We consider larger sample sizes and high noise scenarios in the next subsection.

Figure 2: Coverage of 95% PIs for CGP, GP, CBART, CTGP, CRF and 2GP across the simulation settings (panels (a)-(f)).

Figure 3: Lengths of 95% PIs for CGP, GP, CBART, CTGP, CRF and 2GP across the simulation settings (panels (a)-(f)).

5.2 Manifold Regression on Swiss roll for Large Sample

To assess how the relative performance of CGP changes with larger sample size, we implement manifold regression on the Swiss roll using the methodology developed in Section 3. For this simulation example, a data generation scheme similar to that in Section 5.1 is used. Ideally, a larger sample size should lead to better predictive performance; therefore, one would expect more accurate prediction even with a higher degree of noise in the features, as long as there is sufficient signal in the data. To accommodate a higher signal than in Section 5.1, we modify the distribution of the manifold coordinates and sample responses as per (12). We also increase the noise variability in the features for all the simulation settings. The simulation scenarios are described in Table 3.

Simulation  sample size (n)  no. of features (p)  noise in the features
1 5,000 10,000 .03
2 5,000 20,000 .03
3 5,000 10,000 .06
4 5,000 20,000 .06
5 5,000 10,000 .10
6 5,000 20,000 .10
Table 3: Different Simulation settings for CGP for large .

The MSPE of all the competing methods, along with bootstrap standard errors, is presented in Table 4. The results in Table 4 provide further evidence supporting our conclusions from Section 5.1. With smaller noise variance, CGP along with the other compressed methods outperforms uncompressed GP and 2GP. As the noise variance increases, the manifold structure is increasingly disrupted, and the performance of all the competitors worsens. With increasing noise variance, the performance of CGP and GP become comparable, while the other compressed methods provide inferior performance. Comparing with the results from the last section, it is evident that with large samples CGP is able to perform well even with a very large number of features and moderate noise variance in the features. In large samples, the performance difference between GP and CGP becomes drastic with small noise variance. This shows the effectiveness of CGP for large $n$ when the features are close to lying on a low-dimensional manifold.

In all the simulation scenarios, DSL is the best performer in terms of MSPE, consistent with the routine use of DSL-type methods in large scale settings. However, its performance is extremely sensitive to the choice of clusters. In real data applications, inaccurate clustering often leads to “suboptimal” performance, as will be seen in the data analysis. Additionally, we are not just interested in a point prediction approach, but want methods that provide an accurate characterization of predictive uncertainty. With this in mind, we additionally examine coverage probabilities and lengths of 95% predictive intervals (PIs).

Table 4: MSPE along with bootstrap standard errors for all the competitors. Columns correspond to the noise levels in the features (.03, .06, .10); rows correspond to CGP, GP, CRF, CBART, DSL and 2GP, reported separately for the two choices of p.

Boxplots of the coverage probabilities of the 95% PIs are presented in Figure 4, and Figure 5 presents the lengths of the 95% prediction intervals for all the competitors. As expected, CGP, GP, 2GP and CBART demonstrate better performance in terms of coverage. However, in low noise cases CGP and CBART achieve similar coverage with a two-fold reduction in the length of the PIs compared to GP or 2GP. CRF, as in the previous section, shows under-coverage with narrow predictive intervals. The predictive intervals for CGP are found to be marginally wider than those of CBART, with comparable coverage. With high noise, the features tend to lose more of their manifold structure and the performance of all the competitors is affected; all of them tend to have wider predictive intervals. DSL produces overly narrow predictive intervals (not shown here), yielding severe under-coverage.

Figure 4: Coverage of 95% PIs for CGP, GP, CRF, CBART and 2GP across the simulation settings (panels (a)-(f)).

Figure 5: Lengths of 95% PIs for CGP, GP, CBART, CRF and 2GP across the simulation settings (panels (a)-(f)).

5.3 Computation Time

One of the major motivations in developing CGP was to massively improve computational scalability to huge $n$, $p$ settings. Clearly, the computational time for nonparametric estimation methods such as BART, TGP or RF applied to the original data becomes prohibitive for huge $p$, and hence we focus on comparisons with more scalable methods. The approach of applying BART, RF and TGP to the compressed features, employed in CBART, CRF and CTGP respectively, is faster to implement. Using non-optimized R code on a single 3.06-GHz Intel Xeon processor with 4.0 Gbytes of random-access memory running Debian Linux, the computing time for 2,000 iterations of CBART in the two settings is on the order of seconds, as is the run time of CGP. Increasing $n$ moderately, we find that CBART and CGP have similar run times. CRF is a bit faster than both of them, while CTGP requires substantially more time. For moderate $n$, 2GP is found to have a run time similar to CBART.

With huge $n$, CTGP is impractically slow to run and hence is omitted from the comparison. GP needs to calculate and store an $n \times n$ distance matrix of the original $p$-dimensional features. Apart from the storage bottleneck, such a procedure incurs a computational complexity of $O(n^2 p)$, which is massive for large $n$ and $p$. CGP instead calculates and stores a distance matrix of the compressed features, with a computational complexity of $O(n^2 m)$. The computation time for CGP additionally depends on a number of factors: (i) Gram–Schmidt orthogonalization of the rows of the projection matrices, (ii) inverting an $m_n \times m_n$ matrix for large $n$, and (iii) multiplying low-dimensional matrices. For large $p$, along with these three steps, one requires multiplying the feature matrix by the projection matrix. All of these steps are computationally undemanding even with large $n$ and $p$. The computation is further facilitated by easy parallelization over the different choices of random projection in the model averaging step. Even without exploiting any parallelization, one obtains results quickly using non-optimized R code on a single 3.06-GHz Intel Xeon processor with 4.0 Gbytes of random-access memory running Debian Linux.
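The savings come simply from computing pairwise distances on the $m$-dimensional compressed features rather than the $p$-dimensional originals (roughly $O(n^2 m)$ versus $O(n^2 p)$, after a one-off $O(nmp)$ compression), as in this sketch.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def compressed_distance_matrix(X, psi):
    """Pairwise distances on compressed features: O(n m p) to compress,
    then O(n^2 m) for the distance matrix instead of O(n^2 p)."""
    Z = X @ psi.T                  # n x m compressed features
    return squareform(pdist(Z))    # n x n distance matrix
```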

Figure 6 shows the computational speed comparison between CGP, GP, CBART and CRF for various $n$ and $p$. Computational speed is recorded assuming the existence of a number of processors on which parallelization can be executed whenever needed. Clearly, as $p$ increases, CGP enjoys a substantial computational advantage over all its competitors; the advantage is especially notable over CBART and GP. Run times of DSL are also recorded for the two settings. On the other hand, 2GP involves creating an adjacency matrix followed by the eigen-decomposition of an $n \times n$ matrix, and both of these steps are computationally demanding. Therefore, CGP can outperform even a simple two stage estimation procedure such as DSL in terms of computational speed. Such rapid computation offered by CGP becomes particularly notable as we scale from tens of thousands of samples and features to millions or more, which is becoming increasingly common.

Figure 6: Log computational time (in seconds) for CGP, GP, CBART and CRF against the log of the number of features (panels (a)-(c)).

6 Application to Face Images

Figure 7: Representative images from the Isomap face data.

In our simulation examples, the underlying manifold is three dimensional and can be directly visualized. In this section we present an application in which both the dimension and the structure of the underlying manifold are unknown. The dataset consists of 698 images of an artificial face and is referred to as the Isomap face data ([tenenbaum2000global]). A few representative images are presented in Figure 7. Each image is labeled with three different variables: illumination, horizontal orientation and vertical orientation, so the images should ideally lie on a three dimensional manifold. However, only two dimensional projections of the face are presented in the data, in the form of 64 x 64 pixel matrices; intuitively, only a limited number of features are needed to characterize the different views of the face. This is confirmed by the recent work of [levina2004maximum, aswani2011regression], where the intrinsic dimensionality estimated from these images is small. More details about the dataset can be found at http://isomap.stanford.edu/datasets.html.

We apply CGP and all the competitors to this dataset to assess relative performance. To set up the regression problem, we consider the horizontal pose angles of the images, after standardization, as the responses. The features are the 4,096-dimensional vectorized images for each sample. To mimic more realistic situations, noise is added to each pixel of the images, with varying noise levels, to make predictive inference from the noisy images more challenging. We carry out random splitting of the data into training and test cases and run all the competitors to obtain predictive inference in terms of MSPE and the length and coverage of 95% predictive intervals. To avoid spurious inference due to a small validation set, this experiment is repeated 20 times.
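A sketch of one replicate of this experiment, assuming the 698 images have been loaded into an array images of shape (698, 4096) and the horizontal pose angles into pose_h; the variable names, the loading step, the noise level, the test-set size and predict_fn are all placeholders rather than the original specification.

```python
import numpy as np

def face_experiment(images, pose_h, sigma_x, predict_fn, n_test=100, seed=0):
    """images: (698, 4096) vectorized 64x64 images; pose_h: (698,) horizontal pose angles.
    predict_fn(X_tr, y_tr, X_te) is any of the fitted methods; n_test is a placeholder split size."""
    rng = np.random.default_rng(seed)
    y = (pose_h - pose_h.mean()) / pose_h.std()                 # standardized response (null model MSPE = 1)
    X = images + sigma_x * rng.standard_normal(images.shape)    # pixel-level noise
    test = rng.choice(len(y), size=n_test, replace=False)       # random train/test split
    train = np.setdiff1d(np.arange(len(y)), test)
    return np.mean((predict_fn(X[train], y[train], X[test]) - y[test]) ** 2)
```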

Table 5 presents MSPE for all the competing methods averaged over 20 experiments along with their standard errors computed using 100 bootstrap samples.

Table 5: MSPE and standard errors (computed using 100 bootstrap samples) over 20 replications for CGP, GP, CBART, CRF, DSL and 2GP.

Note that, because of the standardization, the null model yields an MSPE of 1. It is therefore clear from Table 5 that CGP along with its compressed competitors explains a lot of the variation in the response. GP and 2GP are the worst performers in terms of MSPE. DSL also performs much worse than the compressed competitors. This is consistent with our experience that, in the presence of a complex and unknown manifold structure along with noise, DSL can be unreliable relative to CGP, which tends to be more robust to the type of manifold and noise level.

To see how well calibrated these methods are, Figure 8 provides coverage probabilities along with the lengths of predictive intervals for all the competitors. It is evident from the figure that CGP, CBART, GP and 2GP yield excellent coverage; for CGP and CBART this coverage is achieved with much narrower predictive intervals than those of GP and 2GP. On the other hand, both CRF and DSL produce extremely narrow predictive intervals, resulting in severe under-coverage. Therefore, both in terms of MSPE and predictive coverage, CGP does a good job. More importantly, these results testify to the robust performance of compressed Bayesian nonparametric methods (CGP being one of them) even in the presence of unknown and complex manifold structure in the features.

Figure 8: Left panels: boxplots of the coverage of 95% predictive intervals over 20 replications; right panels: boxplots of the lengths of 95% predictive intervals over 20 replications, for CGP, GP, 2GP, CBART and CRF. In the left and right panels the y-axis corresponds to coverage and length, respectively.

7 Discussion

The overarching goal of this article is to develop nonparametric regression methods that scale to massive $n$ and/or $p$ when the features lie on a noise corrupted manifold. The statistical and machine learning literature offers few robust and flexible methods that can accurately provide predictive inference for massive $n$ and $p$ while taking the geometric structure into account. We develop a method based on low-rank Gaussian process regression combined with random feature compression to rapidly and accurately characterize predictive uncertainties, bypassing the need to estimate the underlying manifold. The computational template exploits model averaging to limit the sensitivity of the inference to the specific choices of the random projection matrix. The proposed method is also guaranteed to yield minimax optimal convergence rates.

There are many future directions motivated by our work. For example, the present approach does not estimate the true dimensionality of the noise corrupted manifold. Arguably, a nonparametric method that can simultaneously estimate the intrinsic dimensionality of the manifold in the ambient space would improve performance both theoretically and practically. One possibility is to simultaneously learn the marginal distribution of the features, accounting for the low-dimensional structure. Other possible directions include adapting to massive streaming data where inference is to be made online. Although random compression in both $n$ and $p$ provides substantial benefits in terms of computation and inference, it might be worthwhile to learn the projection matrices while attempting to limit the associated computational burden.

References
