Algorithms for Envelope Estimation II


R. Dennis Cook, Liliana Forzani and Zhihua Su

R. Dennis Cook is Professor, School of Statistics, University of Minnesota, Minneapolis, MN 55455 (E-mail: dennis@stat.umn.edu). Liliana Forzani is Professor, Facultad de Ingeniería Química, Universidad Nacional del Litoral and Instituto Matemática Aplicada Litoral, CONICET-UNL, Santa Fe, Argentina (E-mail: liliana.forzani@gmail.com). Zhihua Su is Assistant Professor, Department of Statistics, University of Florida, Gainesville, FL 32611-8545 (E-mail: zhihuasu@stat.ufl.edu).
Abstract

We propose a new algorithm for envelope estimation, along with a new $\sqrt{n}$-consistent method for computing starting values. The new algorithm, which does not require optimization over a Grassmannian, is shown by simulation to be much faster and typically more accurate than the best existing algorithm proposed by Cook and Zhang (2015c).

Key Words: Envelopes; Grassmann manifold; reducing subspaces.


1 Introduction

The goal of envelope methods is to increase efficiency in multivariate parameter estimation and prediction by exploiting variation in the data that is effectively immaterial to the goals of the analysis. Envelopes achieve efficiency gains by basing estimation on the variation that is material to those goals, while simultaneously excluding that which is immaterial. It now seems evident that immaterial variation is often present in multivariate analyses and that the estimative improvement afforded by envelopes can be quite substantial when the immaterial variation is large, sometimes equivalent to taking thousands of additional observations.

Algorithms for envelope estimation require optimization of a non-convex objective function over a Grassmannian, which can be quite slow in all but small or modest sized problems, possibly taking hours or even days to complete an analysis of a sizable problem. Local optima are another complication that may increase the difficulty of the computations and the analysis generally. Until recently, envelope methods were available only in Matlab, as these computing issues hindered implementation in R.

In this article we propose new, easily computed $\sqrt{n}$-consistent starting values and a novel non-Grassmann algorithm for optimization of the most common envelope objective function. These computing tools are much faster than current algorithms in sizable problems and can be implemented straightforwardly in R. The new starting values have proven quite effective and can be used as fast standalone estimators in exploratory analyses.

In the remainder of this introduction we review envelopes and describe the computing issues in more detail. We let $\mathbf{P}_{\mathcal{A}}$ denote the projection onto a subspace $\mathcal{A}$, with $\mathbf{Q}_{\mathcal{A}} = \mathbf{I} - \mathbf{P}_{\mathcal{A}}$, let $\mathbb{R}^{a \times b}$ be the set of all real $a \times b$ matrices, and let $\mathbb{S}^{a \times a}$ be the set of all real and symmetric $a \times a$ matrices. If $\mathbf{A} \in \mathbb{R}^{a \times b}$, then $\mathrm{span}(\mathbf{A})$ is the subspace spanned by the columns of $\mathbf{A}$. The operator $\mathrm{vec}$ is the vectorization operator that stacks the columns of a matrix. A subspace $\mathcal{R} \subseteq \mathbb{R}^{a}$ is said to be a reducing subspace of $\mathbf{M} \in \mathbb{S}^{a \times a}$ if $\mathcal{R}$ decomposes $\mathbf{M}$ as $\mathbf{M} = \mathbf{P}_{\mathcal{R}}\mathbf{M}\mathbf{P}_{\mathcal{R}} + \mathbf{Q}_{\mathcal{R}}\mathbf{M}\mathbf{Q}_{\mathcal{R}}$. If $\mathcal{R}$ is a reducing subspace of $\mathbf{M}$, we say that $\mathcal{R}$ reduces $\mathbf{M}$.

1.1 Review of envelopes

Envelopes were originally proposed and developed by Cook, Li and Chiaromonte (2007, 2010) in the context of multivariate linear regression,

\[
\mathbf{Y} = \boldsymbol{\alpha} + \boldsymbol{\beta}\mathbf{X} + \boldsymbol{\varepsilon}, \qquad (1)
\]

where $\mathbf{Y} \in \mathbb{R}^{r}$ is the response vector, $\mathbf{X} \in \mathbb{R}^{p}$ is the predictor vector, $\boldsymbol{\varepsilon}$ is a normal error vector with mean 0 and variance $\boldsymbol{\Sigma}$ that is independent of $\mathbf{X}$, and $\boldsymbol{\beta} \in \mathbb{R}^{r \times p}$ is the regression coefficient matrix in which we are primarily interested. Immaterial variation can occur in $\mathbf{Y}$ or $\mathbf{X}$ or both. Cook et al. (2010) operationalized the idea of immaterial variation in the response vector by asking if there are linear combinations of $\mathbf{Y}$ whose distribution is invariant to changes in $\mathbf{X}$. Specifically, let $\mathbf{P}_{\mathcal{E}}$ denote the projection onto a subspace $\mathcal{E} \subseteq \mathbb{R}^{r}$ with the properties (1) the distribution of $\mathbf{Q}_{\mathcal{E}}\mathbf{Y}$ does not depend on the value of the non-stochastic predictor $\mathbf{X}$ and (2) $\mathbf{P}_{\mathcal{E}}\mathbf{Y}$ is independent of $\mathbf{Q}_{\mathcal{E}}\mathbf{Y}$ given $\mathbf{X}$. These conditions imply that the distribution of $\mathbf{Q}_{\mathcal{E}}\mathbf{Y}$ is not affected by $\mathbf{X}$ marginally or through an association with $\mathbf{P}_{\mathcal{E}}\mathbf{Y}$. Consequently, changes in the predictor affect the distribution of $\mathbf{Y}$ only via $\mathbf{P}_{\mathcal{E}}\mathbf{Y}$, and so we refer informally to $\mathbf{P}_{\mathcal{E}}\mathbf{Y}$ as the material part of $\mathbf{Y}$ and to $\mathbf{Q}_{\mathcal{E}}\mathbf{Y}$ as the immaterial part of $\mathbf{Y}$.

Conditions (1) and (2) hold if and only if (a) $\mathcal{B} := \mathrm{span}(\boldsymbol{\beta}) \subseteq \mathcal{E}$ (so $\mathcal{E}$ envelopes $\mathcal{B}$) and (b) $\mathcal{E}$ reduces $\boldsymbol{\Sigma}$. The $\boldsymbol{\Sigma}$-envelope of $\mathcal{B}$, denoted $\mathcal{E}_{\boldsymbol{\Sigma}}(\mathcal{B})$, is defined formally as the intersection of all reducing subspaces of $\boldsymbol{\Sigma}$ that contain $\mathcal{B}$. Let $u = \dim(\mathcal{E}_{\boldsymbol{\Sigma}}(\mathcal{B}))$ and let $(\boldsymbol{\Gamma}, \boldsymbol{\Gamma}_0)$ be an orthogonal matrix with $\boldsymbol{\Gamma} \in \mathbb{R}^{r \times u}$ and $\mathrm{span}(\boldsymbol{\Gamma}) = \mathcal{E}_{\boldsymbol{\Sigma}}(\mathcal{B})$. This leads directly to the envelope version of model (1),

\[
\mathbf{Y} = \boldsymbol{\alpha} + \boldsymbol{\Gamma}\boldsymbol{\eta}\mathbf{X} + \boldsymbol{\varepsilon}, \qquad \boldsymbol{\Sigma} = \boldsymbol{\Gamma}\boldsymbol{\Omega}\boldsymbol{\Gamma}^{T} + \boldsymbol{\Gamma}_{0}\boldsymbol{\Omega}_{0}\boldsymbol{\Gamma}_{0}^{T}, \qquad (2)
\]

where $\boldsymbol{\beta} = \boldsymbol{\Gamma}\boldsymbol{\eta}$, $\boldsymbol{\eta} \in \mathbb{R}^{u \times p}$ gives the coordinates of $\boldsymbol{\beta}$ relative to basis $\boldsymbol{\Gamma}$, and $\boldsymbol{\Omega} \in \mathbb{S}^{u \times u}$ and $\boldsymbol{\Omega}_0 \in \mathbb{S}^{(r-u) \times (r-u)}$ are positive definite matrices. While $\boldsymbol{\eta}$, $\boldsymbol{\Omega}$ and $\boldsymbol{\Omega}_0$ depend on the basis $\boldsymbol{\Gamma}$ selected to represent $\mathcal{E}_{\boldsymbol{\Sigma}}(\mathcal{B})$, the parameters of interest $\boldsymbol{\beta}$ and $\boldsymbol{\Sigma}$ depend only on $\mathcal{E}_{\boldsymbol{\Sigma}}(\mathcal{B})$ and not on the basis. All parameters in (2) can be estimated by maximizing its likelihood, with the envelope dimension $u$ determined by using standard methods like likelihood ratio testing, information criteria, cross-validation or a hold-out sample, as described by Cook et al. (2010). The envelope estimator of $\boldsymbol{\beta}$ is just the projection of the maximum likelihood estimator onto the estimated envelope, and it is asymptotically normal with mean 0 and covariance matrix given by Cook et al. (2010), where $u$ is assumed to be known. An introductory example of response envelopes is available in Cook and Zhang (2015a).
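To make the projection step concrete, here is a minimal R sketch. It assumes an estimated semi-orthogonal envelope basis and the ordinary least squares coefficient matrix are already available; the names Gammahat and betaOLS are purely illustrative.

# Envelope estimator of beta as the projection of the maximum likelihood
# (OLS) estimator onto the estimated envelope: beta_env = P %*% betaOLS,
# where P = Gammahat %*% t(Gammahat) for a semi-orthogonal basis Gammahat.
envelope_beta <- function(Gammahat, betaOLS) {
  P <- Gammahat %*% t(Gammahat)
  P %*% betaOLS
}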

Similar reasoning leads to partial envelopes for use when only selected columns of $\boldsymbol{\beta}$ are of interest (Su and Cook, 2011), to predictor envelopes allowing for immaterial variation in $\mathbf{X}$ (Cook, Helland and Su, 2013), to predictor-response envelopes allowing simultaneously for immaterial variation in $\mathbf{X}$ and $\mathbf{Y}$ (Cook and Zhang, 2015b), and to heteroscedastic envelopes for comparing the means of multivariate populations with unequal covariance matrices (Su and Cook, 2013).

Cook and Zhang (2015a) extended envelopes beyond multivariate linear models by proposing the following estimative construct for vector-valued parameters. Let $\hat{\boldsymbol{\theta}}$ denote an estimator of a parameter vector $\boldsymbol{\theta} \in \mathbb{R}^{m}$ based on a sample of size $n$ and assume, as is often the case, that $\sqrt{n}(\hat{\boldsymbol{\theta}} - \boldsymbol{\theta})$ converges in distribution to a normal random vector with mean 0 and covariance matrix $\mathbf{V}(\boldsymbol{\theta})$ as $n \to \infty$. To accommodate the presence of nuisance parameters, decompose $\boldsymbol{\theta}$ as $\boldsymbol{\theta} = (\boldsymbol{\psi}^{T}, \boldsymbol{\phi}^{T})^{T}$, where $\boldsymbol{\phi} \in \mathbb{R}^{q}$, $q \leq m$, is the parameter vector of interest and $\boldsymbol{\psi} \in \mathbb{R}^{m-q}$ is the nuisance parameter vector. The asymptotic covariance matrix of $\hat{\boldsymbol{\phi}}$ is represented as $\mathbf{V}_{\phi\phi}(\boldsymbol{\theta})$, which is the lower right block of $\mathbf{V}(\boldsymbol{\theta})$. Then Cook and Zhang (2015a) defined the envelope for improving $\hat{\boldsymbol{\phi}}$ as the smallest reducing subspace of $\mathbf{V}_{\phi\phi}(\boldsymbol{\theta})$ that contains $\mathrm{span}(\boldsymbol{\phi})$. This definition differs from the previous approaches in several ways. First, it links the envelope to a particular pre-specified method of estimation through the covariance matrix $\mathbf{V}_{\phi\phi}(\boldsymbol{\theta})$, while normal-theory maximum likelihood is the only method of estimation allowed by the previous approaches; the goal of an envelope is to improve that pre-specified estimator, perhaps a maximum likelihood, least squares or robust estimator. Second, the matrix to be reduced, here $\mathbf{V}_{\phi\phi}(\boldsymbol{\theta})$, is dictated by the method of estimation. Third, the matrix to be reduced can now depend on the parameter being estimated, in addition to perhaps other parameters. Cook and Zhang (2015a) sketched application details for generalized linear models, weighted least squares and Cox regression, and described an extension to matrix-valued parameters.

1.2 Computational issues

The approaches reviewed in the last section all require estimation of an envelope, now represented generically as $\mathcal{E}_{\mathbf{M}}(\mathcal{U})$, the smallest reducing subspace of $\mathbf{M} \in \mathbb{S}^{r \times r}$ that contains $\mathcal{U} = \mathrm{span}(\mathbf{U})$, where $\mathbf{U} \in \mathbb{S}^{r \times r}$ is positive semi-definite. Let $u = \dim(\mathcal{E}_{\mathbf{M}}(\mathcal{U}))$, let $\boldsymbol{\Gamma} \in \mathbb{R}^{r \times u}$ be a semi-orthogonal basis matrix for $\mathcal{E}_{\mathbf{M}}(\mathcal{U})$, let $\widehat{\mathbf{M}}$ be a $\sqrt{n}$-consistent estimator of $\mathbf{M}$, and let $\widehat{\mathbf{U}}$ be a positive semi-definite $\sqrt{n}$-consistent estimator of $\mathbf{U}$. With $u$ specified, the most common objective function used for envelope estimation is

\[
F_n(\mathbf{G}) = \log\bigl|\mathbf{G}^{T}\widehat{\mathbf{M}}\mathbf{G}\bigr| + \log\bigl|\mathbf{G}^{T}(\widehat{\mathbf{M}} + \widehat{\mathbf{U}})^{-1}\mathbf{G}\bigr|, \qquad (3)
\]

and the envelope is estimated as $\widehat{\mathcal{E}}_{\mathbf{M}}(\mathcal{U}) = \mathrm{span}\{\arg\min F_n(\mathbf{G})\}$, where the minimum is taken over all semi-orthogonal matrices $\mathbf{G} \in \mathbb{R}^{r \times u}$. Objective function (3) corresponds to maximum likelihood estimation under normality for many envelopes, including those associated with (1). Otherwise it provides a $\sqrt{n}$-consistent estimator of the projection onto $\mathcal{E}_{\mathbf{M}}(\mathcal{U})$ provided $\widehat{\mathbf{M}}$ and $\widehat{\mathbf{U}}$ are $\sqrt{n}$-consistent (Cook and Zhang, 2015c, who also provided additional background on (3)).
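For reference, here is a minimal R sketch of objective (3) as reconstructed above, assuming M is positive definite, U is positive semi-definite and G is an r x u semi-orthogonal basis matrix; log-determinants are used for numerical stability.

# Objective (3): F(G) = log|G' M G| + log|G' (M + U)^{-1} G|.
envelope_objective <- function(G, M, U) {
  MU_inv <- solve(M + U)
  val1 <- determinant(t(G) %*% M %*% G, logarithm = TRUE)$modulus
  val2 <- determinant(t(G) %*% MU_inv %*% G, logarithm = TRUE)$modulus
  as.numeric(val1 + val2)
}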

In the case of response envelopes reviewed in Section 1.1, $\widehat{\mathbf{M}}$ is the covariance matrix of the residuals from the ordinary least squares fit of (1), denoted $\widehat{\boldsymbol{\Sigma}}_{\mathrm{res}}$, $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$ is the marginal sample covariance matrix of $\mathbf{Y}$, denoted $\widehat{\boldsymbol{\Sigma}}_{\mathbf{Y}}$, and the envelope estimator is the maximum likelihood estimator if the errors are normal. If the errors are not normal but have finite fourth moments then the envelope estimator is $\sqrt{n}$-consistent and asymptotically normal. In the general context of Cook and Zhang (2015a), also reviewed in Section 1.1, $\widehat{\mathbf{M}}$ is set to a $\sqrt{n}$-consistent estimator of $\mathbf{V}_{\phi\phi}(\boldsymbol{\theta})$ and $\widehat{\mathbf{U}} = \hat{\boldsymbol{\phi}}\hat{\boldsymbol{\phi}}^{T}$.
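The response-envelope inputs can be formed directly from the data. The sketch below is one way to do so, assuming Y and X are an n x r response matrix and an n x p predictor matrix; the 1/n scaling is a convention and does not affect the minimizer of (3).

# Mhat is the residual covariance from the multivariate OLS fit of Y on X,
# and Mhat + Uhat is the marginal sample covariance of Y.
make_MU <- function(Y, X) {
  n <- nrow(Y)
  res <- residuals(lm(Y ~ X))
  Mhat <- crossprod(res) / n
  SigY <- crossprod(scale(Y, center = TRUE, scale = FALSE)) / n
  list(M = Mhat, U = SigY - Mhat)
}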

For any orthogonal matrix $\mathbf{O} \in \mathbb{R}^{u \times u}$, $F_n(\mathbf{G}\mathbf{O}) = F_n(\mathbf{G})$, so $F_n$ depends only on $\mathrm{span}(\mathbf{G})$ and not on a particular basis. Thus the optimization problem is over a Grassmannian (see Edelman et al. (1998) for background on optimization over Grassmann manifolds). Since it takes $u(r - u)$ real numbers to specify $\mathrm{span}(\mathbf{G})$ uniquely, Grassmann optimization is usually computationally straightforward when $u(r - u)$ is not too large, but it can be very slow when it is large. Also, since $F_n$ is non-convex, the solution returned may correspond to a local rather than global minimum, particularly when the signal is small relative to the noise.

It is important that we have a fast and reliable method of determining $\widehat{\mathcal{E}}_{\mathbf{M}}(\mathcal{U})$ because we may need to repeat that operation hundreds or even thousands of times in an analysis. An information criterion like AIC or BIC is often used to select a suitable value for $u$, and this requires that we minimize $F_n$ for each candidate value of $u$. Predictive cross validation might also be used to select $u$, again requiring many optimizations of $F_n$; repeating five-fold cross validation with 50 random partitions requires 250 optimizations for each candidate dimension. Asymptotic standard errors are available for many normal models, but we may wish to use a few hundred bootstrap samples to determine standard errors when normality is in doubt or when we wish to check the accuracy of the asymptotic approximations. And many more bootstrap samples may be required if we want accurate inference statements. In some analyses we may wish to fit a few model variations, again multiplying the computation time. In cases like those discussed at the end of Section 1.1, $\widehat{\mathbf{M}}$ and $\widehat{\mathbf{U}}$ may depend on unknown parameters, necessitating another level of iteration for the best results (see Cook and Zhang 2015a for further discussion of this point). In short, a seemingly small savings in computation time for one optimization of $F_n$ can translate into massive savings over the course of an analysis. Additionally, the choice of starting value for $\mathbf{G}$ can be crucial since the objective function is non-convex. Converging to a local minimum can negate the advantages of maximum likelihood estimation, for example. Trying several different starting values is not really an effective method, since it again multiplies the total computation time and in our experience is not likely to result in the global optimum.

Cook, Su and Yang (2014; https://github.com/emeryyi/envlp) developed a fairly comprehensive Matlab toolbox envlp for envelope estimation based on Lippert's sg_min program for optimization over Stiefel and Grassmann manifolds (http://web.mit.edu/~ripper/www/software/). This is a very effective toolbox for small to moderate sized analyses, but otherwise it is susceptible to all of the issues mentioned previously. Cook and Zhang (2015c) replaced direct optimization of (3) with a sequential 1D algorithm that can be computationally much faster than sg_min and is less dependent on good starting values. Nevertheless, it is still susceptible to the problems described previously, although less so than methods based on sg_min. Additionally, since it does not provide the maximum likelihood estimator, it loses the advantages that accrue with maximum likelihood estimation when normality is a reasonable assumption. For instance, information criteria like AIC and BIC are no longer available straightforwardly, likelihood ratio testing is problematic, and thus dimension selection must typically be guided by cross validation.

In this paper we propose an iterative non-Grassmann method to compute $\widehat{\mathcal{E}}_{\mathbf{M}}(\mathcal{U})$ that is faster and more reliable than existing methods in large analyses and otherwise performs about the same. It depends crucially on new, effective $\sqrt{n}$-consistent starting values that can also be used as standalone estimators. We restrict our comparisons to the 1D algorithm, since Cook and Zhang (2015c) have demonstrated its superiority over direct optimization methods based on sg_min.

The new starting values are developed in Section 2 and the new algorithm, which relies on the new starting values, is described in Section 3. Supporting simulation results are given in Section 4 and contrasts on real data are given in Section 5. Proofs are given in an appendix.

2 Starting values

In this section we describe how to choose the columns of the starting value for $\boldsymbol{\Gamma}$ from the eigenvectors of $\widehat{\mathbf{M}}$ or $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$. To gain intuition about the approach, consider the population representations $\mathbf{M} = \boldsymbol{\Gamma}\boldsymbol{\Omega}\boldsymbol{\Gamma}^{T} + \boldsymbol{\Gamma}_0\boldsymbol{\Omega}_0\boldsymbol{\Gamma}_0^{T}$, $\mathbf{U} = \boldsymbol{\Gamma}\boldsymbol{\Phi}\boldsymbol{\Gamma}^{T}$ and $\mathbf{M} + \mathbf{U} = \boldsymbol{\Gamma}(\boldsymbol{\Omega} + \boldsymbol{\Phi})\boldsymbol{\Gamma}^{T} + \boldsymbol{\Gamma}_0\boldsymbol{\Omega}_0\boldsymbol{\Gamma}_0^{T}$. For the starting values selected from the eigenvectors of $\widehat{\mathbf{M}}$ to work well, the eigenvalues of $\boldsymbol{\Omega}$ need to be well distinguished from those of $\boldsymbol{\Omega}_0$. If some of the eigenvalues of $\boldsymbol{\Omega}$ are close to a subset of the eigenvalues of $\boldsymbol{\Omega}_0$ then in samples the corresponding eigenspaces will likely be confused when attempting to minimize $F_n$. In other words, we may well pick vectors near $\mathrm{span}(\boldsymbol{\Gamma}_0)$ instead of eigenvectors near $\mathrm{span}(\boldsymbol{\Gamma})$. In such cases we may obtain a better starting value by choosing from the eigenvectors of $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$ rather than the eigenvectors of $\widehat{\mathbf{M}}$. The same argument applies to choosing the starting values from the eigenvectors of $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$: the eigenvalues of $\boldsymbol{\Omega} + \boldsymbol{\Phi}$ need to be well distinguished from those of $\boldsymbol{\Omega}_0$. If some of the eigenvalues of $\boldsymbol{\Omega} + \boldsymbol{\Phi}$ are close to a subset of the eigenvalues of $\boldsymbol{\Omega}_0$ then in samples the corresponding eigenspaces will again likely be confused. In such cases we may obtain better starting values by starting with the eigenvectors of $\widehat{\mathbf{M}}$ rather than the eigenvectors of $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$. The general conclusion from this discussion is that for effective starting values we will need to consider both $\widehat{\mathbf{M}}$ and $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$. Scaling will also be an issue, as discussed later in this section, leading to four potential starting values. The actual starting value used is the one that minimizes $F_n$.

We make use of the following result.

Proposition 2.1

Let $(\mathbf{G}, \mathbf{G}_0) \in \mathbb{R}^{r \times r}$ be an orthogonal matrix with $\mathbf{G} \in \mathbb{R}^{r \times u}$ and let $\mathbf{A} \in \mathbb{S}^{r \times r}$ be a positive definite matrix. Then $\log|\mathbf{G}^{T}\mathbf{A}\mathbf{G}| + \log|\mathbf{G}^{T}\mathbf{A}^{-1}\mathbf{G}|$ and $\log|\mathbf{G}^{T}\mathbf{A}\mathbf{G}| + \log|\mathbf{G}_0^{T}\mathbf{A}\mathbf{G}_0|$ are both minimized globally when the columns of $\mathbf{G}$ span any $u$-dimensional reducing subspace of $\mathbf{A}$.

In the next section we describe how to select starting values from the eigenvectors of $\widehat{\mathbf{M}}$.

2.1 Choosing the starting value from the eigenvectors of $\widehat{\mathbf{M}}$

Define the standardized quantities used in this subsection, including a standardized version of $\widehat{\mathbf{U}}$. Assume for convenience that the eigenvalues of $\widehat{\mathbf{M}}$ are unique, which will typically hold with probability one, and let the candidate collection be the set of all subsets of $u$ eigenvectors of $\widehat{\mathbf{M}}$. Then

Proposition 2.2

.

Consequently, instead of (3) we can work with a more amenable objective function when restricting starting values to the eigenvectors of $\widehat{\mathbf{M}}$. It follows from Proposition 2.1 that this objective function is minimized when the columns of $\mathbf{G}$ are any $u$ eigenvectors of $\widehat{\mathbf{M}}$. Restricting attention to such sets of eigenvectors, we next need to find the minimizing set. This does not have a closed-form solution, and evaluating the objective function at all $r$-choose-$u$ elements of the candidate collection will be effectively impossible when $r$ is large. For these reasons we replace the log-determinant in the objective function with the trace and minimize the resulting criterion, which is equivalent to maximizing

where $\mathbf{v}_i$ is the $i$-th selected eigenvector of $\widehat{\mathbf{M}}$ (the $i$-th column of $\mathbf{G}$). Computation is now easy, since we just select the $u$ eigenvectors of $\widehat{\mathbf{M}}$ that maximize this criterion.
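As a rough illustration of this selection step, the sketch below scores each eigenvector of a supplied matrix and keeps the u highest-scoring ones as the starting basis. The default quadratic-form score is only a placeholder for the criterion displayed above, which involves the standardized quantities and is not reproduced here.

# Generic starting-value selection: score each eigenvector of `mat`
# (e.g. Mhat or Mhat + Uhat) and keep the u highest-scoring ones.
# The default score v' U v is a placeholder; substitute the standardized
# criterion described in the text as appropriate.
start_from_eigenvectors <- function(mat, U, u,
                                    score = function(v) drop(t(v) %*% U %*% v)) {
  V <- eigen(mat, symmetric = TRUE)$vectors
  scores <- apply(V, 2, score)
  V[, order(scores, decreasing = TRUE)[1:u], drop = FALSE]
}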

Applying this in response envelopes, let $\widehat{\boldsymbol{\Sigma}}_{\mathbf{X}}$ denote the marginal sample covariance matrix of the predictors. Then $\widehat{\mathbf{M}} = \widehat{\boldsymbol{\Sigma}}_{\mathrm{res}}$, $\widehat{\mathbf{U}} = \widehat{\boldsymbol{\beta}}_{\mathrm{ols}}\widehat{\boldsymbol{\Sigma}}_{\mathbf{X}}\widehat{\boldsymbol{\beta}}_{\mathrm{ols}}^{T}$, and the standardized quantity is a standardized version of the ordinary least squares estimator of $\boldsymbol{\beta}$.

2.2 Choosing the starting value from the eigenvectors of $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$

Define the standardized quantities used in this subsection, including another standardized version of $\widehat{\mathbf{U}}$. Let the candidate collection be the set of all subsets of $u$ eigenvectors of $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$. Then

Proposition 2.3

.

Consequently, instead of (3) we can again work with a more amenable objective function. It follows from Proposition 2.1 that it is minimized when the columns of $\mathbf{G}$ are any $u$ eigenvectors of $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$. Restricting attention to such sets of eigenvectors, we next need to find the minimizing set. Again, this does not have a closed-form solution, and evaluating the objective function at all $r$-choose-$u$ elements of the candidate collection will be effectively impossible when $r$ is large. For these reasons we again replace the log-determinant with the trace and minimize the resulting criterion, which is equivalent to maximizing

where $\mathbf{v}_i$ is the $i$-th selected eigenvector of $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$ (the $i$-th column of $\mathbf{G}$). Computation is again easy, since we just select the $u$ eigenvectors of $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$ that maximize this criterion. This is exactly the same as the previous case, except that the standardization of $\widehat{\mathbf{U}}$ is with $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$ instead of $\widehat{\mathbf{M}}$.

Applying this in response envelopes, $\widehat{\mathbf{M}} = \widehat{\boldsymbol{\Sigma}}_{\mathrm{res}}$, $\widehat{\mathbf{M}} + \widehat{\mathbf{U}} = \widehat{\boldsymbol{\Sigma}}_{\mathbf{Y}}$, and the standardized quantity is just a standardized matrix of ordinary least squares regression coefficients, as before.

2.3 Scaling and consistency

The standardized forms are important when the scales involved in $\widehat{\mathbf{M}}$ and $\widehat{\mathbf{U}}$ are very different. This can perhaps be appreciated readily in the context of response envelopes, where $\widehat{\mathbf{M}} = \widehat{\boldsymbol{\Sigma}}_{\mathrm{res}}$ and $\widehat{\mathbf{M}} + \widehat{\mathbf{U}} = \widehat{\boldsymbol{\Sigma}}_{\mathbf{Y}}$. In this case the standardization will be important and effective if the scales of the elements of $\mathbf{Y}$ are very different. However, the standardization will be effectively unnecessary when the scales are similar. In the case of response envelopes this means that the scales of the elements of $\mathbf{Y}$ are the same or similar.

Depending on the scales involved, standardization can also be counterproductive when the sample size is not large enough to give sufficiently accurate estimates of the standardizing quantities. In such cases, we abandon the standardization and use one of the unstandardized criteria as the objective function. The only difference between these is that one confines the candidate directions to the eigenvectors of $\widehat{\mathbf{M}}$, while the other confines them to the eigenvectors of $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$. We now have four possible starting values from which to choose, corresponding to the standardized and unstandardized criteria based on $\widehat{\mathbf{M}}$ and on $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$. The value chosen to start the algorithm described in Section 3 is the one that minimizes $F_n$ in (3).
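Putting the pieces together, here is a sketch of the four-way comparison: build the candidate starting values from the eigenvectors of the standardized and unstandardized versions of the two matrices, evaluate objective (3) at each, and keep the minimizer. It reuses the illustrative helpers sketched above, and std_M and std_MU stand in for whatever standardized versions are adopted.

# Choose among the four candidate starting values by evaluating (3) at each.
choose_start <- function(Mhat, Uhat, std_M, std_MU, u) {
  candidates <- list(
    start_from_eigenvectors(Mhat,        Uhat, u),
    start_from_eigenvectors(Mhat + Uhat, Uhat, u),
    start_from_eigenvectors(std_M,       Uhat, u),
    start_from_eigenvectors(std_MU,      Uhat, u)
  )
  vals <- vapply(candidates, envelope_objective, numeric(1), M = Mhat, U = Uhat)
  candidates[[which.min(vals)]]
}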

We conclude this section with the following consistency result:

Proposition 2.4

Let $\mathbf{P}_{\mathrm{start}}$ denote the projection onto the subspace spanned by the selected starting value. Then with $u$ known, $\mathbf{P}_{\mathrm{start}}$ is a $\sqrt{n}$-consistent estimator of the projection onto $\mathcal{E}_{\mathbf{M}}(\mathcal{U})$.

3 New iterative algorithm

In this section we describe a re-parameterized version of (3) that does not require optimization over a Grassmannian. The new parameterization requires first selecting $u$ rows of $\mathbf{G}$ and then constraining the $u \times u$ matrix formed from these rows to be non-singular. Without loss of generality, assume that this matrix is constructed from the first $u$ rows of $\mathbf{G}$, which we can then partition as
\[
\mathbf{G} = \begin{pmatrix} \mathbf{G}_1 \\ \mathbf{G}_2 \end{pmatrix}, \qquad \mathbf{G}_1 \in \mathbb{R}^{u \times u}, \quad \mathbf{G}_2 \in \mathbb{R}^{(r-u) \times u},
\]
where $\mathbf{G}_1$ is non-singular. Let $\mathbf{A} = \mathbf{G}_2\mathbf{G}_1^{-1} \in \mathbb{R}^{(r-u) \times u}$, which is an unconstrained matrix, and let $\mathbf{G}_{\mathbf{A}} = (\mathbf{I}_u, \mathbf{A}^{T})^{T}$. Since $\mathbf{G}_1$ is non-singular, $\mathrm{span}(\mathbf{G}) = \mathrm{span}(\mathbf{G}\mathbf{G}_1^{-1}) = \mathrm{span}(\mathbf{G}_{\mathbf{A}})$. Using these relationships, (3) can be re-parameterized as a function of $\mathbf{A}$ only:
\[
F_n(\mathbf{A}) = \log\bigl|\mathbf{G}_{\mathbf{A}}^{T}\widehat{\mathbf{M}}\mathbf{G}_{\mathbf{A}}\bigr| + \log\bigl|\mathbf{G}_{\mathbf{A}}^{T}(\widehat{\mathbf{M}} + \widehat{\mathbf{U}})^{-1}\mathbf{G}_{\mathbf{A}}\bigr| - 2\log\bigl|\mathbf{G}_{\mathbf{A}}^{T}\mathbf{G}_{\mathbf{A}}\bigr|.
\]
With this objective function, minimization over $\mathbf{A}$ is unconstrained. The number of real parameters comprising $\mathbf{A}$ is $u(r - u)$, which is the same as the number of reals needed to specify a single element of the Grassmannian of $u$-dimensional subspaces of $\mathbb{R}^{r}$.
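A minimal sketch of this unconstrained formulation, under the reconstruction of the objective given above: the function evaluates the criterion at a vectorized A and can be handed to a generic optimizer such as optim.

# Re-parameterized objective evaluated at the unconstrained matrix A
# ((r - u) x u).  G_A = (I_u, A')' is a generally non-orthonormal basis,
# and the term -2 log|G_A' G_A| makes the value depend on span(G_A) only.
obj_A <- function(Avec, M, U, r, u) {
  A <- matrix(Avec, nrow = r - u, ncol = u)
  GA <- rbind(diag(u), A)
  ld <- function(S) as.numeric(determinant(S, logarithm = TRUE)$modulus)
  ld(t(GA) %*% M %*% GA) + ld(t(GA) %*% solve(M + U) %*% GA) -
    2 * ld(t(GA) %*% GA)
}

# Unconstrained minimization over vec(A), e.g. starting from A = 0:
# fit <- optim(rep(0, (r - u) * u), obj_A, M = Mhat, U = Uhat,
#              r = r, u = u, method = "BFGS")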

If $r$ is not too large, the re-parameterized objective function might be minimized directly by using standard optimization software and the starting values described in Section 2. In other cases minimization can be carried out by minimizing iteratively over the rows of $\mathbf{A}$. Suppose that we wish to minimize over the last row of $\mathbf{A}$. Partition

Then, after a little algebra, the objective function for minimizing over the last row of $\mathbf{A}$ with the remaining rows held fixed can be written, up to terms that do not depend on that row, as

where

The objective function can now be minimized using any suitable off-the-shelf algorithm. Iteration then cycles over the rows of $\mathbf{A}$ until a convergence criterion is met.
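A schematic version of this row-wise cycling is sketched below. For simplicity it re-evaluates the full objective obj_A from the earlier sketch at each row update rather than the reduced per-row expression derived above, so it is illustrative rather than efficient.

# Row-wise coordinate descent: update one row of A at a time with an
# off-the-shelf optimizer while the other rows are held fixed.
fit_rows <- function(A, M, U, r, u, tol = 1e-8, maxit = 100) {
  old <- obj_A(as.vector(A), M, U, r, u)
  for (it in seq_len(maxit)) {
    for (i in seq_len(r - u)) {
      row_obj <- function(a) {
        A[i, ] <- a                       # local copy with candidate row
        obj_A(as.vector(A), M, U, r, u)
      }
      A[i, ] <- optim(A[i, ], row_obj, method = "BFGS")$par
    }
    new <- obj_A(as.vector(A), M, U, r, u)
    if (abs(old - new) < tol) break
    old <- new
  }
  A
}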

This algorithm requires the starting value described in Section 2. Prior to application of the algorithm we must identify $u$ rows of $\mathbf{G}$ and then constrain the $u \times u$ matrix formed from those rows to be non-singular. This implies that the matrix formed from the corresponding rows of a basis matrix for $\mathcal{E}_{\mathbf{M}}(\mathcal{U})$ should also be non-singular. This can be achieved reliably by first applying Gaussian elimination with partial pivoting to the starting value. The rows identified during this process then form $\mathbf{G}_1$.
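The row-selection step can be sketched as follows; a column-pivoted QR of the transposed starting value is used here as a numerically convenient stand-in for Gaussian elimination with partial pivoting.

# Identify u rows of the r x u starting value G0 that form a
# well-conditioned (non-singular) u x u block.
select_rows <- function(G0, u) {
  piv <- qr(t(G0), LAPACK = TRUE)$pivot
  sort(piv[1:u])
}

# The selected rows can then be permuted to the top of G0 so that the
# re-parameterization of Section 3 applies with G1 non-singular:
# idx <- select_rows(G0, u)
# G0  <- rbind(G0[idx, , drop = FALSE], G0[-idx, , drop = FALSE])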

Proposition 3.1

Assume that the eigenvalues of $\mathbf{M}$ and of $\mathbf{M} + \mathbf{U}$ are distinct. Then the $u \times u$ submatrix of the starting value that consists of the rows selected by Gaussian elimination converges to a non-singular matrix at rate $\sqrt{n}$.

This proposition shows that asymptotically Gaussian elimination produces a non-singular submatrix. The condition that the eigenvalues of $\mathbf{M}$ and $\mathbf{M} + \mathbf{U}$ be distinct is mainly for clarity of exposition and is not necessary. The proof given in the appendix demonstrates a more complete result. Let $\mathbf{G}^{*}$ denote the population version of the starting value, and let $\mathbf{G}_1^{*}$ consist of the rows of $\mathbf{G}^{*}$ selected by applying Gaussian elimination to $\mathbf{G}^{*}$. Then $\mathbf{G}^{*}$ is a basis matrix for $\mathcal{E}_{\mathbf{M}}(\mathcal{U})$ and the sample starting value converges to $\mathbf{G}^{*}$ at rate $\sqrt{n}$.

The new algorithm estimates a basis row by row, while the 1D algorithm optimizes column by column. When $r$ is small, the 1D algorithm tends to be a bit more efficient, as it optimizes one column at a time and needs only one pass through those columns. When $r$ is larger, the new algorithm dominates, sometimes substantially. In each estimation, the 1D algorithm uses conjugate gradient with Polak-Ribière updates while our algorithm uses Newton updates.

4 Simulations

4.1 Starting values

The first series of simulations was designed to illustrate why it is important to consider the eigenvalues of both $\widehat{\mathbf{M}}$ and $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$. All simulations are for the response envelopes reviewed in Section 1.1, model (2). The results displayed in the tables of this section are averages over 50 replications in each simulation scenario. The angle between the subspaces spanned by the columns of the semi-orthogonal basis matrices $\boldsymbol{\Gamma}$ and $\widehat{\boldsymbol{\Gamma}}$ was computed in degrees as the arc cosine of the smallest absolute singular value of $\widehat{\boldsymbol{\Gamma}}^{T}\boldsymbol{\Gamma}$. The starting value is still denoted as in Proposition 2.4, but its definition depends on the simulation. The final estimate $\widehat{\boldsymbol{\Gamma}}$ was obtained from the new algorithm described in Section 3 using the simulation-specific starting value.
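For completeness, the angle computation used throughout this section can be coded directly; the small clamp guards against round-off pushing a singular value slightly above one.

# Angle (in degrees) between the subspaces spanned by the columns of two
# r x u semi-orthogonal basis matrices, computed as the arc cosine of the
# smallest absolute singular value of the matrix of inner products.
subspace_angle <- function(G1, G2) {
  s <- svd(t(G1) %*% G2)$d
  acos(min(1, min(abs(s)))) * 180 / pi
}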

Scenario I.

This simulation was designed to illustrate a regression in which the eigenvalues of $\mathbf{M}$ are close together and the signal is strong. We generated the data taking $\boldsymbol{\Omega}$ and $\boldsymbol{\Omega}_0$ to be diagonal matrices with diagonal elements generated as independent uniform variates. Elements in $\boldsymbol{\eta}$ were independent uniform variates, $\mathbf{X}$ followed a multivariate normal distribution with mean 0, and $\boldsymbol{\Gamma}$ was obtained by standardizing a matrix of independent uniform variates. In this scenario the eigenvalues of $\mathbf{M}$ are close to each other, but we have a strong signal arising from the distribution of $\mathbf{X}$. Starting values based on the eigenvectors of $\widehat{\mathbf{M}}$ were expected to perform poorly, while starting values based on $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$ were expected to perform well, as conjectured at the start of Section 2 and confirmed by the results in Table 1.

Table 1: Results for Scenario I. The starting value was constructed from the eigenvectors of the matrices indicated by the headings for columns 2-5, in standardized and unstandardized form.

The overarching conclusion from Table 1 is that the starting values from $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$ did very well, whether it was standardized or not, while the starting values from $\widehat{\mathbf{M}}$ were effectively equivalent to choosing a 20-dimensional subspace at random. Additionally, iteration from the starting value produced essentially no change in the angle, the value of the objective function or the envelope estimator of $\boldsymbol{\beta}$.

Scenario II.

We generated data taking $\boldsymbol{\Omega}$ and $\boldsymbol{\Omega}_0$ to have very different eigenvalues. Elements in $\boldsymbol{\eta}$ were independent uniform variates, $\mathbf{X}$ followed a multivariate normal distribution with mean 0, and $\boldsymbol{\Gamma}$ was obtained by standardizing a matrix of independent uniform variates. Since the eigenvalues of $\boldsymbol{\Omega}$ and $\boldsymbol{\Omega}_0$ are very different and the signal is modest, the results in Table 2 show, as expected from the argument given in Section 2, that the starting values based on $\widehat{\mathbf{M}}$ did much better than those based on $\widehat{\mathbf{M}} + \widehat{\mathbf{U}}$. As in Scenario I, the chosen starting value did very well. Iteration improved the starting value a small amount and scaling had no notable effect.

Table 2: Results for Scenario II. The starting value was constructed from the eigenvectors of the matrices indicated by the headings for columns 2-5, in standardized and unstandardized form.

Scenario III.

The intent of this simulation is to demonstrate the importance of scaling. We generated data taking $\boldsymbol{\Omega}$ and $\boldsymbol{\Omega}_0$ to be diagonal matrices whose diagonal elements differ greatly in scale. Elements in $\boldsymbol{\eta}$ were generated as independent uniform variates and $\mathbf{X}$ followed a multivariate normal distribution with mean 0. We see from the results of Table 3 that standardization performs well and that iteration now improves the starting value considerably. Here and in the other results of this section, the smallest value of the objective function produced the best results.

Table 3: Results for Scenario III. The starting value was constructed from the eigenvectors of the matrices indicated by the headings for columns 2-5, in standardized and unstandardized form.

Scenario IV.

For this simulation we kept the same settings as Scenario III, except that the diagonal elements of $\boldsymbol{\Omega}$ and $\boldsymbol{\Omega}_0$ were changed and $\boldsymbol{\Gamma}$ was generated by standardizing a matrix of uniform random variables. In this setup heteroscedasticity across the elements of $\mathbf{Y}$ is reduced substantially from that in Scenario III. As indicated in Table 4, the standardization no longer provides much improvement. Also, since the eigenvalues of $\boldsymbol{\Omega}$ and $\boldsymbol{\Omega}_0$ are similar, $\widehat{\mathbf{M}}$ again does not work well.

Table 4: Results for Scenario IV. The starting value was constructed from the eigenvectors of the matrices indicated by the headings for columns 2-5, in standardized and unstandardized form.

4.2 Comparisons with the 1D algorithm

In this section we give three different simulation scenarios based on response envelopes for comparing the new non-Grassmann algorithm with the 1D algorithm. In all scenarios, orthonormal bases were obtained by normalizing a matrix of independent uniform variates and the elements in $\boldsymbol{\eta}$ were generated as independent uniform variates. The predictors were generated as independent normal random vectors with mean 0. We varied $r$ and recorded the computing times and the angles between the true and estimated subspaces.

The 1D algorithm was implemented in R for all simulations reported in this and the next section. Using efficient programming tools in R, it is now much faster than its Matlab version, which produced the results in Cook and Zhang (2015c). To ensure a fair comparison, we used the default convergence criterion in R for the optimizations within both the 1D algorithm and the new algorithm. The angle between subspaces was computed as described previously. In all cases the tabled results are the averages over 50 replications. We use $\widehat{\mathbf{G}}_{\mathrm{1D}}$ to denote the basis generated by the 1D algorithm.

Scenario V.

In this scenario we set the remaining simulation parameters as follows. To reflect multivariate regressions with large immaterial variation, so that envelopes give large gains, we generated the error covariance matrix from two components: the elements in one were generated as independent standard normal variates and the elements in the other were generated as independent normal variates. The results are shown in Table 5.

The 1D algorithm tends to perform a bit better on accuracy (Table 5) for small values of $r$, while performing poorly for large values of $r$. The same phenomenon occurs in terms of time: the 1D algorithm tends to be a bit faster for small values of $r$, but much slower for large values of $r$.