Sparse canonical correlation analysis from a predictive point of view

Abstract

Canonical correlation analysis (CCA) describes the associations between two sets of variables by maximizing the correlation between linear combinations of the variables in each data set. However, in high-dimensional settings where the number of variables exceeds the sample size or when the variables are highly correlated, traditional CCA is no longer appropriate. This paper proposes a method for sparse CCA. Sparse estimation produces linear combinations of only a subset of variables from each data set, thereby increasing the interpretability of the canonical variates. We consider the CCA problem from a predictive point of view and recast it into a regression framework. By combining an alternating regression approach together with a lasso penalty, we induce sparsity in the canonical vectors. We compare the performance with other sparse CCA techniques in different simulation settings and illustrate its usefulness on a genomic data set.

Keywords: Canonical correlation analysis; Genomic data; Lasso; Penalized regression; Sparsity.

1 Introduction

The aim of canonical correlation analysis (CCA), introduced by [?], is to identify and quantify linear relations between two sets of variables. CCA is used in various research fields to study associations, for example, in physical data [?], biomedical data [?], or environmental data [?]. One searches for the linear combinations of each of the two sets of variables having maximal correlation. These linear combinations are called the canonical variates and the correlations between the canonical variates are called the canonical correlations. We refer to e.g. [?] ([?], Chapter 10) for more information on canonical correlation analysis.

In this paper, we aim to induce sparsity in the canonical vectors so that the linear combinations include only a subset of the variables. Sparsity is especially helpful when analyzing associations between high-dimensional data sets, which are commonplace today in, for example, genetics [?] and machine learning [?]. We therefore propose a sparse version of CCA in which some elements of the canonical vectors are estimated as exactly zero, which eases interpretation. To this end, we use the formulation of CCA as a prediction problem.

Consider two random vectors $x \in \mathbb{R}^p$ and $y \in \mathbb{R}^q$. We assume, without loss of generality, that all variables are mean centered and that $p \leq q$. Denote the joint covariance matrix of $(x, y)$ by

$$\Sigma = \begin{pmatrix} \Sigma_{xx} & \Sigma_{xy} \\ \Sigma_{yx} & \Sigma_{yy} \end{pmatrix},$$

with $\Sigma_{yx} = \Sigma_{xy}^{\top}$. Let $A$ and $B$ be the matrices with the canonical vectors in their columns. The new variables $u = A^{\top} x$ and $v = B^{\top} y$ are the canonical variates, and the correlations between each pair of canonical variates give the canonical correlations. The canonical vectors contained in the matrices $A$ and $B$ are given by the eigenvectors of the matrices

$$\Sigma_{xx}^{-1} \Sigma_{xy} \Sigma_{yy}^{-1} \Sigma_{yx} \quad \text{and} \quad \Sigma_{yy}^{-1} \Sigma_{yx} \Sigma_{xx}^{-1} \Sigma_{xy},$$

respectively. Both matrices have the same positive eigenvalues, and the canonical correlations are given by the positive square roots of these eigenvalues.

The canonical vectors and correlations are typically estimated by taking the sample versions of the covariance matrices in $\Sigma$ and computing the corresponding eigenvectors and eigenvalues. However, to implement this procedure, we need to invert the sample covariance matrices $\hat{\Sigma}_{xx}$ and $\hat{\Sigma}_{yy}$. When the original variables are highly correlated or when the number of variables becomes large compared to the sample size, the estimation imprecision will be large. Moreover, when the larger of the two numbers of variables exceeds the sample size (i.e. $\max(p, q) > n$), traditional CCA cannot be performed. [?] proposed the canonical ridge, an adaptation of the ridge regression concept of [?] to the framework of CCA, to solve this problem. The canonical ridge replaces the matrices $\hat{\Sigma}_{xx}$ and $\hat{\Sigma}_{yy}$ by $\hat{\Sigma}_{xx} + \lambda_1 I_p$ and $\hat{\Sigma}_{yy} + \lambda_2 I_q$, respectively. By adding the penalty terms $\lambda_1$ and $\lambda_2$ to the diagonal elements of the sample covariance matrices, one obtains more reliable and stable estimates when the data are nearly or exactly collinear.
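
As an illustration, a minimal R sketch of the canonical ridge could look as follows (our own code with hypothetical function and variable names; lambda1 and lambda2 play the role of the ridge penalties $\lambda_1$ and $\lambda_2$). It returns the canonical vectors for the first set of variables and the regularized canonical correlations; the vectors for the second set would be obtained analogously.

    # Canonical ridge: regularize the sample covariance matrices before
    # solving the CCA eigenproblem (illustrative sketch, not the paper's code).
    canonical_ridge <- function(X, Y, lambda1, lambda2) {
      X <- scale(X, center = TRUE, scale = FALSE)
      Y <- scale(Y, center = TRUE, scale = FALSE)
      n <- nrow(X)
      Sxx <- crossprod(X) / (n - 1) + lambda1 * diag(ncol(X))
      Syy <- crossprod(Y) / (n - 1) + lambda2 * diag(ncol(Y))
      Sxy <- crossprod(X, Y) / (n - 1)
      # Eigenvectors of Sxx^{-1} Sxy Syy^{-1} Syx give the canonical vectors
      # for X; the canonical correlations are the square roots of the eigenvalues.
      M   <- solve(Sxx, Sxy) %*% solve(Syy, t(Sxy))
      eig <- eigen(M)
      list(A = Re(eig$vectors), rho = sqrt(pmax(Re(eig$values), 0)))
    }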

Another approach is to use sparse CCA techniques. [?] consider a sparse singular value decomposition to derive sparse singular vectors. A limitation of their approach is that sparsity in the canonical vectors is only guaranteed if the within-set covariance matrices $\Sigma_{xx}$ and $\Sigma_{yy}$ are replaced by their corresponding diagonal matrices. A similar approach is taken by [?], who apply a penalized matrix decomposition to the cross-product matrix $\hat{\Sigma}_{xy}$, but assume that the matrices $\Sigma_{xx}$ and $\Sigma_{yy}$ can be replaced by identity matrices. [?] consider Wold's ([?]) alternating least squares approach to CCA and obtain sparse canonical vectors using penalized regression with the elastic net. The ridge parameter of the elastic net is set to be large, thereby, according to the authors, ignoring the dependency structure within each set of variables.

[?], [?], and [?] all impose covariance restrictions, i.e. identity matrices for [?] and [?]; diagonal matrices for [?]. In contrast, we propose in this paper to estimate the canonical variates without imposing any prior covariance restrictions. Our proposed method obtains the canonical vectors using an alternating penalized regression framework. By performing variable selection in a penalized regression framework using the lasso penalty [?], we obtain sparse canonical vectors.

We demonstrate in a simulation study that our Sparse Alternating Regression (SAR) algorithm produces good results in terms of estimation accuracy of the canonical vectors and detection of their sparseness structure. Especially in simulation settings where there is a dependency structure within each set of variables, the SAR algorithm clearly outperforms the sparse CCA methods described above. We also apply the SAR algorithm to a high-dimensional genomic data set. Sparse estimation is appealing since it highlights the most important variables for the association study.

The remainder of this article is organized as follows. In Section 2 we formulate the CCA problem from a predictive point of view. Section 3 describes the Sparse Alternating Regression (SAR) approach and provides details on the implementation of the algorithm. Section 4 compares our methodology to other sparse CCA techniques by means of a simulation study. Section 5 discusses the genomic data example, and Section 6 concludes.

2 CCA from a predictive point of view

A characterization of the canonical vectors based on the concept of prediction is proposed by [?] and [?]. Given observations $x_i \in \mathbb{R}^p$ and $y_i \in \mathbb{R}^q$ ($i = 1, \ldots, n$), consider the optimization problem

$$(\hat{A}, \hat{B}) = \underset{(A, B) \in \mathcal{F}}{\operatorname{argmin}} \; \sum_{i=1}^{n} \left\| A^{\top} x_i - B^{\top} y_i \right\|^2 .$$

We restrict the parameter space to the space $\mathcal{F}$ of pairs $(A, B)$ satisfying the normalization conditions that the canonical variates have unit variance and are mutually uncorrelated. [?] proves that the objective function above is minimized when $A$ and $B$ contain the canonical vectors in their columns.

We build on this equivalent formulation of the CCA problem to obtain the canonical vectors using an alternating regression procedure (see e.g. [?]; [?]). The subsequent canonical variates are sequentially derived.

First canonical vector pair. Denote the first canonical vectors (i.e. the first columns of the matrices $A$ and $B$) by $a_1$ and $b_1$. Suppose we have an initial value $\hat{b}_1$ for the first canonical vector in the matrix $B$. Then the minimization problem above reduces to

$$\hat{a}_1 = \underset{a_1}{\operatorname{argmin}} \; \sum_{i=1}^{n} \left( y_i^{\top} \hat{b}_1 - x_i^{\top} a_1 \right)^2 ,$$

where we require the canonical variate $X a_1$ to have unit variance. The solution can be obtained from a multiple regression with $Y \hat{b}_1$ as response and $X$ as predictor, where $X = (x_1, \ldots, x_n)^{\top}$ and $Y = (y_1, \ldots, y_n)^{\top}$.

Analogously, for a fixed value $\hat{a}_1$, the optimal value of $b_1$ is obtained by a multiple regression with $X \hat{a}_1$ as response and $Y$ as predictor,

$$\hat{b}_1 = \underset{b_1}{\operatorname{argmin}} \; \sum_{i=1}^{n} \left( x_i^{\top} \hat{a}_1 - y_i^{\top} b_1 \right)^2 ,$$

where we require the canonical variate $Y b_1$ to have unit variance. This leads to an alternating regression scheme in which we alternately update the estimates of the first canonical vectors until convergence. We iterate until the angles between the estimated canonical vectors in the current iteration and the respective estimated canonical vectors in the previous iteration are both smaller than some small tolerance value.
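
To make the scheme concrete, the following R sketch (our own illustrative code, not the authors' implementation) alternates the two least squares regressions for the first canonical pair. For brevity, convergence is monitored through the angle between successive estimates of $a_1$ only, whereas the scheme above checks both canonical vectors.

    # Alternating least squares for the first canonical vector pair
    # (illustrative sketch; X and Y are centered n x p and n x q data matrices,
    #  b0 is an initial value for the first canonical vector of Y).
    als_first_pair <- function(X, Y, b0, tol = 1e-3, maxit = 100) {
      b <- b0 / sqrt(drop(var(Y %*% b0)))          # unit-variance canonical variate
      a_old <- NULL
      for (it in seq_len(maxit)) {
        # Regress the current Y-side canonical variate on X ...
        a <- solve(crossprod(X), crossprod(X, Y %*% b))
        a <- a / sqrt(drop(var(X %*% a)))
        # ... and regress the updated X-side canonical variate on Y.
        b <- solve(crossprod(Y), crossprod(Y, X %*% a))
        b <- b / sqrt(drop(var(Y %*% b)))
        # Stop once the angle between successive estimates of a is small.
        if (!is.null(a_old)) {
          angle <- acos(min(1, abs(sum(a * a_old)) / sqrt(sum(a^2) * sum(a_old^2))))
          if (angle < tol) break
        }
        a_old <- a
      }
      list(a = drop(a), b = drop(b))
    }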

Higher order canonical vector pairs. The higher order canonical variates need to be orthogonal to the previously found canonical variates. Therefore, the alternating regression scheme is applied to deflated data matrices (see e.g. [?]). For the second pair of canonical vectors, consider the deflated matrices $\tilde{X}$ and $\tilde{Y}$. The deflated matrix $\tilde{X}$ is obtained as the residuals of the multivariate regression of $X$ on $X \hat{a}_1$, the first canonical variate. Analogously, the deflated matrix $\tilde{Y}$ is given by the residuals of the multivariate regression of $Y$ on $Y \hat{b}_1$.

Using the Least Squares property, each column of $\tilde{X}$ is uncorrelated with the first canonical variate $X \hat{a}_1$. The second canonical variate will be a linear combination of the columns of $\tilde{X}$ and, hence, will be uncorrelated with the previously found canonical variate. The same holds for $\tilde{Y}$. The second canonical variate pair is then obtained by alternating until convergence between a regression of the current $\tilde{Y}$-side canonical variate on $\tilde{X}$ and a regression of the current $\tilde{X}$-side canonical variate on $\tilde{Y}$, where we require both canonical variates to have unit variance.

Finally, we need to express the second canonical vector pair in terms of the original data sets $X$ and $Y$. To obtain the second canonical vector $\hat{a}_2$, we regress the second canonical variate obtained from the deflated data on $X$, yielding the fitted values $X \hat{a}_2$. To obtain $\hat{b}_2$, we regress the corresponding canonical variate on $Y$.

The same idea is applied to obtain the higher order canonical variate pairs.
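
A small R sketch of the deflation step (our own illustrative code) could look as follows, where a1 and b1 denote the first pair of canonical vectors estimated above; the alternating regressions are then applied to the deflated matrices.

    # Deflation: take residuals of the column-wise regression of a data matrix
    # on a canonical variate, so that every column of the deflated matrix is
    # uncorrelated with that variate (illustrative sketch).
    deflate <- function(Z, variate) {
      Z - variate %*% solve(crossprod(variate), crossprod(variate, Z))
    }
    X_defl <- deflate(X, X %*% a1)   # deflate X with respect to its first canonical variate
    Y_defl <- deflate(Y, Y %*% b1)   # deflate Y with respect to its first canonical variate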

3 Sparse alternating regressions

The canonical vectors obtained with the alternating regression scheme from Section 2 are in general not sparse. Sparse canonical vectors are obtained by replacing the Least Squares regressions in the alternating regression approach of Section 2 with Lasso regressions ($\ell_1$ penalty). As such, some coefficients in the canonical vectors are set to exactly zero, thereby producing linear combinations of only a subset of variables.

For the first pair of sparse canonical vectors, the sparse equivalents of the Least Squares regressions in Section 2 are given by

$$\hat{a}_1 = \underset{a_1}{\operatorname{argmin}} \; \sum_{i=1}^{n} \left( y_i^{\top} \hat{b}_1 - x_i^{\top} a_1 \right)^2 + \lambda_A \sum_{j=1}^{p} |a_{1j}| \qquad \text{and} \qquad \hat{b}_1 = \underset{b_1}{\operatorname{argmin}} \; \sum_{i=1}^{n} \left( x_i^{\top} \hat{a}_1 - y_i^{\top} b_1 \right)^2 + \lambda_B \sum_{j=1}^{q} |b_{1j}| ,$$

where $\lambda_A$ and $\lambda_B$ are sparsity parameters, $a_{1j}$ is the $j$-th element of the first canonical vector $a_1$, and $b_{1j}$ is the $j$-th element of the first canonical vector $b_1$. The first pair of canonical variates is given by $X \hat{a}_1$ and $Y \hat{b}_1$; we require both to have unit variance.

To obtain the second pair of sparse canonical vectors, the same deflated matrices $\tilde{X}$ and $\tilde{Y}$ as in Section 2 are used, and the Least Squares regressions on the deflated data are replaced by Lasso regressions.

Finally, to express the second pair of canonical vectors in terms of the original data matrices, we replace the Least Squares regressions of Section 2 by two Lasso regressions, yielding the fitted values $X \hat{a}_2$ and $Y \hat{b}_2$. We add a lasso penalty to these regressions, first, because the design matrix can be high-dimensional and, second, because we want $\hat{a}_2$ and $\hat{b}_2$ to be sparse.

A complete description of the algorithm is given below. We numerically verified that without imposing penalization (i.e. $\lambda_A = \lambda_B = 0$), the traditional CCA solution is obtained. Our numerical experiments all converged reasonably fast. Finally, note that, as in other sparse CCA proposals ([?]; [?]; [?]), the sparse canonical variates are in general not uncorrelated. We do not consider this lack of uncorrelatedness a major flaw. The sparse canonical vectors yield an easily interpretable basis of the space spanned by the canonical vectors. After suitable rotation of the corresponding canonical variates, this basis can be made orthogonal (but not sparse) if desired.
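
As an illustration, the following R sketch (our own code, not the authors' implementation) performs the sparse alternating regressions for the first canonical vector pair using the lasso from the glmnet package; lambda_a and lambda_b play the role of the sparsity parameters $\lambda_A$ and $\lambda_B$, and the angle-based convergence check is omitted for brevity, so the loop simply runs a fixed number of iterations.

    # Sparse alternating regressions (SAR) for the first canonical vector pair,
    # using lasso regressions instead of least squares (illustrative sketch).
    library(glmnet)

    sar_first_pair <- function(X, Y, b0, lambda_a, lambda_b, maxit = 50) {
      b <- b0 / sqrt(drop(var(Y %*% b0)))
      a <- rep(0, ncol(X))
      for (it in seq_len(maxit)) {
        # Lasso regression of the current Y-side canonical variate on X
        # (a single lambda value is used here for clarity).
        fit_a <- glmnet(X, drop(Y %*% b), alpha = 1, lambda = lambda_a, intercept = FALSE)
        a <- drop(as.matrix(coef(fit_a)))[-1]     # drop the (zero) intercept
        if (all(a == 0)) break                    # penalty too strong: empty model
        a <- a / sqrt(drop(var(X %*% a)))         # unit-variance canonical variate
        # Lasso regression of the updated X-side canonical variate on Y.
        fit_b <- glmnet(Y, drop(X %*% a), alpha = 1, lambda = lambda_b, intercept = FALSE)
        b <- drop(as.matrix(coef(fit_b)))[-1]
        if (all(b == 0)) break
        b <- b / sqrt(drop(var(Y %*% b)))
      }
      list(a = a, b = b)
    }

Higher order pairs would be obtained by applying the same function to the deflated data matrices and then re-expressing the canonical vectors in terms of the original data, as described above.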

Starting values. To start up the Sparse Alternating Regression (SAR) algorithm, an initial value of the canonical vectors is required. We use the canonical vectors delivered by the canonical ridge as starting values, which are available at no additional computational cost. The regularization parameters of the canonical ridge are chosen using cross-validation such that the average test sample canonical correlation is maximized [?].

We performed a simulation study (unreported) to assess the robustness of the SAR algorithm to different choices of starting values. The SAR algorithm shows similar performance when either the canonical ridge or other choices of starting values (i.e. CCA in low-dimensional settings and randomly drawn starting values) are used.

Number of canonical variates to extract. For practical implementation, one needs to decide on the number of canonical variates to extract. Most often, only a limited number of canonical variate pairs are truly relevant. We follow [?], who propose the maximum eigenvalue ratio criterion to decide on the number of canonical variates to extract. We apply the canonical ridge and calculate the canonical correlations $\hat{\rho}_1 \geq \hat{\rho}_2 \geq \ldots \geq \hat{\rho}_{k_{\max}}$. Let $r_k = \hat{\rho}_k / \hat{\rho}_{k+1}$ for $k = 1, \ldots, k_{\max} - 1$. Then we set $\hat{k} = \operatorname{argmax}_k \, r_k$, and extract $\hat{k}$ pairs of canonical variates using the SAR algorithm.
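
A minimal R sketch of this criterion (our own illustrative code, based on the description above) could look as follows.

    # Maximum eigenvalue ratio criterion: pick k maximizing rho_k / rho_(k+1)
    # (illustrative sketch; rho contains the estimated canonical correlations,
    #  sorted in decreasing order).
    select_k <- function(rho) {
      ratios <- rho[-length(rho)] / rho[-1]
      which.max(ratios)
    }
    select_k(c(0.90, 0.75, 0.20, 0.15, 0.10))   # returns 2: extract two pairs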

Selection of sparsity parameters. In the SAR algorithm, the sparsity parameters $\lambda_A$ and $\lambda_B$, which control the penalization of the respective regression coefficients, need to be selected. We select the sparsity parameters according to a minimal Bayes Information Criterion (BIC). We solve the corresponding penalized regression problems over a range of values of $\lambda_A$ and select the one with the lowest value of

$$\mathrm{BIC}(\lambda_A) = -2 \log \hat{L}(\lambda_A) + k(\lambda_A) \log(n),$$

where $\hat{L}(\lambda_A)$ is the estimated likelihood using sparsity parameter $\lambda_A$ and $k(\lambda_A)$ is the number of non-zero estimated regression coefficients. We proceed analogously for $\lambda_B$.
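
For instance, using the Gaussian likelihood (up to additive constants), the BIC-based selection of the sparsity parameter for one lasso regression could be sketched in R as follows (our own illustrative code; lambda_grid is a decreasing grid of candidate values).

    # Select the lasso sparsity parameter by minimizing the BIC
    # (illustrative sketch using the Gaussian log-likelihood up to constants).
    library(glmnet)

    select_lambda_bic <- function(X, y, lambda_grid) {
      n   <- nrow(X)
      fit <- glmnet(X, y, alpha = 1, lambda = lambda_grid, intercept = FALSE)
      rss <- colSums((y - predict(fit, newx = X))^2)   # one column per lambda value
      bic <- n * log(rss / n) + fit$df * log(n)        # df = number of non-zero coefficients
      fit$lambda[which.min(bic)]
    }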

4 Simulation Study

We compare the performance of the Sparse Alternating Regression approach with three other sparse CCA techniques. We consider

  • The Sparse Alternating Regression (SAR) algorithm detailed in Section 3.

  • The sparse CCA of [?]1, relying on a penalized matrix decomposition applied to the cross-product matrix $\hat{\Sigma}_{xy}$. Sparsity parameters are selected using the permutation approach described in [?].

  • The sparse CCA of [?]2. Sparsity parameters are selected using cross-validation where the average test sample canonical correlation is maximized.

  • The sparse CCA of [?]3. The lasso parameter of the elastic net is selected using cross-validation such that the mean absolute difference between the canonical correlations of the training and test sets is minimized.

We emphasize that the sparsity parameters of all methods are selected as proposed by the respective authors. The traditional CCA solution and the canonical ridge4 are computed as additional benchmarks.

We consider several simulation schemes. For each setting we generate data matrices $X$ and $Y$ according to multivariate normal distributions, with covariance matrices described in Table 1. We perform $M$ simulation runs for each setting. In all simulation settings, the canonical vectors have a sparse structure. In the first simulation setup (revised from [?]), the covariance restrictions of [?], [?] and [?] (i.e. identity matrices for the former two, diagonal matrices for the latter) are satisfied. These restrictions are violated in the second, third and fourth simulation setups. In the third design, the number of variables is large compared to the sample size; traditional CCA can still be performed in this setting. In the fourth design, the number of variables in one of the data matrices is larger than the sample size, and traditional CCA can no longer be performed.

Table 1: Simulation settings (n: sample size; p, q: number of variables in the two data sets)
Design             n    p    q
Uncorrelated      50    4    6
Correlated        50    6   10
High-dimensional  50   25   40
Overparametrized  80   60   85

Performance measures. We compare the SAR algorithm to its alternatives and evaluate (i) the estimation accuracy of the space spanned by the estimated canonical vectors, and (ii) the detection of the sparsity structure of the canonical vectors.

We compute, for each simulation run $m$, with $m = 1, \ldots, M$, the angle between the subspace spanned by the estimated canonical vectors contained in the columns of $\hat{A}$ and the subspace spanned by the true canonical vectors contained in the columns of $A$, and analogously for the matrix $B$. These angles are then averaged across the $M$ simulation runs.
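
An angle between two subspaces can be obtained from the principal angles between them; the following R sketch (our own illustrative code, the exact angle definition used in the paper may differ in detail) returns the largest principal angle between the column spaces of the estimated and true coefficient matrices.

    # Largest principal angle between the column spaces of two matrices
    # (illustrative sketch).
    subspace_angle <- function(A_hat, A_true) {
      Q1 <- qr.Q(qr(A_hat))                  # orthonormal basis of the estimated subspace
      Q2 <- qr.Q(qr(A_true))                 # orthonormal basis of the true subspace
      cosines <- svd(crossprod(Q1, Q2))$d    # cosines of the principal angles
      acos(min(pmax(pmin(cosines, 1), 0)))   # largest principal angle, in radians
    }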

Finally, we monitor the sparsity recognition performance (e.g. [?]) using the true positive rate and the true negative rate. The true positive rate is the proportion of truly relevant variables detected by the estimation procedure, and the true negative rate measures the hit rate of excluding unimportant variables from the canonical vectors. Analogous measures can be computed for the canonical vectors in the matrix $B$.
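
A small R sketch (our own illustrative code) of these two rates for one canonical vector:

    # Sparsity recognition: share of truly non-zero coefficients estimated as
    # non-zero (TPR) and share of truly zero coefficients estimated as zero (TNR).
    sparsity_rates <- function(a_hat, a_true) {
      est_nonzero  <- a_hat  != 0
      true_nonzero <- a_true != 0
      c(TPR = mean(est_nonzero[true_nonzero]),
        TNR = mean(!est_nonzero[!true_nonzero]))
    }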

Results. The simulation results on the estimation accuracy of the estimated canonical vectors are reported in Table 2. We compute the average angle (averaged across simulation runs) between the space spanned by the true and estimated canonical vectors. To compare the average angle of the SAR algorithm against the other approaches, we compute p-values of a two-sided paired t-test.

Table 2: Estimation accuracy of the canonical vectors, measured by the average angle between the subspace spanned by the true and estimated canonical vectors. p-values of the paired t-tests comparing SAR to the alternative methods are all significant.
Design            Method
Uncorrelated      SAR              0.008   0.019
                  [?]              0.010   0.054
                  [?]              0.104   0.241
                  [?]              0.078   0.212
                  Canonical ridge  0.129   0.272
                  CCA              0.127   0.267
Correlated        SAR              0.001   0.068
                  [?]              0.061   0.299
                  [?]              0.306   0.690
                  [?]              0.187   0.494
                  Canonical ridge  0.046   0.043
                  CCA              0.042   0.033
High-dimensional  SAR              0.212   0.305
                  [?]              0.263   0.390
                  [?]              0.826   0.908
                  [?]              0.833   0.942
                  Canonical ridge  0.916   1.016
                  CCA              1.062   1.193
Overparametrized  SAR              0.278   0.353
                  [?]              0.490   0.592
                  [?]              0.946   0.986
                  [?]              1.137   1.181
                  Canonical ridge  0.916   1.211

We first compare the performance of the penalized CCA techniques (i.e. canonical ridge and sparse CCA) to the unpenalized CCA solution. The estimation accuracy of the penalized CCA methods is significantly better than that of traditional CCA, especially in the high-dimensional design. In the lower-dimensional simulation settings (i.e. the uncorrelated and correlated designs), the sparse CCA techniques still perform well since the underlying structure of the canonical vectors is sparse.

Next, we compare the SAR algorithm to its sparse alternatives. In the uncorrelated design, the covariance restrictions imposed by [?], [?] and [?] are satisfied. Therefore, we expect these methods to perform especially well. Nevertheless, even in this setting, the SAR algorithm performs significantly better. In the correlated, high-dimensional and overparametrized designs, these covariance restrictions are violated. Here, we see even more clearly that the SAR algorithm has a significant advantage over its sparse alternatives. In the correlated design, for instance, the SAR algorithm outperforms the method of [?] by a factor of 10 for the first canonical vector (i.e. estimation accuracy of 0.001 against 0.061), and by a factor of 5 for the second canonical vector (i.e. estimation accuracy of 0.068 against 0.299). The gains in estimation accuracy of the SAR algorithm compared to the other sparse CCA methods are even more pronounced.

Finally, Table 3 compares the results on sparsity recognition performance among the sparse CCA techniques. The methods of [?] and [?] produce the least sparse solutions, indicated by the high true positive rates and low true negative rates. The SAR algorithm and the method of [?] tend to produce the sparsest solutions, indicated by the high true negative rates and lower true positive rates. Contrary to sparse CCA, traditional CCA and the canonical ridge do not perform variable selection simultaneously with model estimation; all elements of the canonical vectors are estimated as non-zero, resulting in a perfect true positive rate and a zero true negative rate. Therefore, traditional CCA and the canonical ridge are not included in Table 3.

To conclude, as can be seen from Table 2, the SAR algorithm performs significantly better than the other sparse CCA methods in every simulation design we consider.

Table 3: Sparsity recognition performance: true positive rate (TPR) and true negative rate (TNR) for the canonical vectors in the $A$ and $B$ matrices.
Design            Method            TPR (A)  TNR (A)  TPR (B)  TNR (B)
Uncorrelated      SAR               0.79     0.81     0.79     0.86
                  [?]               0.75     0.83     0.77     0.78
                  [?]               0.94     0.22     0.93     0.25
                  [?]               0.91     0.25     0.91     0.25
Correlated        SAR               0.83     0.92     0.55     0.93
                  [?]               0.52     0.80     0.43     0.75
                  [?]               0.86     0.23     0.83     0.26
                  [?]               0.86     0.31     0.83     0.31
High-dimensional  SAR               0.54     0.82     0.51     0.83
                  [?]               0.38     0.87     0.31     0.86
                  [?]               0.72     0.35     0.70     0.40
                  [?]               0.87     0.24     0.82     0.27
Overparametrized  SAR               0.44     0.89     0.44     0.89
                  [?]               0.37     0.82     0.32     0.82
                  [?]               0.61     0.49     0.56     0.55
                  [?]               0.90     0.16     0.87     0.18

5 Genomic data application

In recent years, high-dimensional genomic data sets have arisen, containing thousands of gene expression and other phenotype measurements (e.g., [?]). We use the publicly available breast cancer data set described in [?] and available in the R package PMA [?]. Comparative genomic hybridization (CGH) data (2149 variables) and gene expression data (19 672 variables) are available for 89 samples. The objective is to identify copy number change variables that are correlated with a subset of gene expression variables. Copy number changes on a particular chromosome are associated with expression changes in genes located on the same chromosome [?]. Therefore, we analyze the data for each chromosome separately, each time using the CGH and gene expression variables for that particular chromosome. The dimension of both sets of variables is large compared to the sample size, so traditional CCA cannot be performed. In such a high-dimensional setting, the use of sparse CCA techniques is appealing. We use the SAR algorithm to perform sparse CCA for each chromosome separately.

To decide on the number of canonical variate pairs to extract, we apply the canonical ridge to each chromosome. Figure 1 shows the first 20 estimated canonical correlations for each of the 23 chromosomes. For each chromosome, we use the maximum eigenvalue ratio criterion, discussed in Section 3, to determine the number of canonical variate pairs to extract. Depending on the specific chromosome, this criterion indicates that either 1, 2, 3 or 4 canonical variate pairs should be extracted.

Figure 1: Estimated canonical correlations using the canonical ridge, for each of the 23 chromosomes. The highest order pair of canonical variates to retain, as selected by the maximum eigenvalue ratio criterion, is indicated by a solid black circle.

To compare the performance of the SAR algorithm to the other sparse CCA procedures discussed in Section 4, we perform an out-of-sample cross-validation exercise. More precisely, we perform a leave-one-out cross-validation exercise and compute the cross-validation score

$$\mathrm{CV} = \sum_{i=1}^{n} \left\| \hat{A}_{-i}^{\top} x_i - \hat{B}_{-i}^{\top} y_i \right\|^2 ,$$

where $\hat{A}_{-i}$ and $\hat{B}_{-i}$ contain the estimated canonical vectors when the $i$-th observation is left out of the estimation sample. We compute this cross-validation score for each of the sparse CCA techniques. The technique that leads to the lowest value of this cross-validation score achieves the best out-of-sample performance.

Averaged across all chromosomes, the SAR algorithm attains a cross-validation score of 87.21, the method of [?] 367.38, [?] 2778.57 and [?] 713.57. Thus, the SAR algorithm outperforms its alternatives. Furthermore, we compute relative cross-validation scores, defined as the cross-validation score of a method relative to the cross-validation score of the SAR algorithm. Boxplots of these relative cross-validation scores (23 scores, one for each chromosome) are presented in the figure below. A value of the relative cross-validation score larger than 1 (horizontal red line) indicates better performance of the SAR algorithm. The SAR algorithm always attains the best cross-validation score, except for two cases (out of 23) where [?] achieves the lowest cross-validation score. The differences in performance compared to the methods of [?] and [?] are large. The cross-validation scores obtained with the SAR algorithm and the method of [?] are substantially lower than those obtained with the methods of [?] and [?]. The solutions obtained with the former two are much sparser than the ones obtained with the latter two. Sparsity thus helps in achieving a good cross-validation score.

Boxplots of the 23 relative cross-validation scores of [?], [?] and [?], relative to the SAR algorithm.

The dependency structure within each set of variables might explain the good performance of the SAR algorithm relative to its alternatives. For the first chromosome, for instance, 20% of the (absolute) correlations between the 136 CGH spots are larger than 0.6. The same holds for the other chromosomes. In the simulation study of Section 4, we showed that the SAR algorithm performs much better for highly correlated data sets than the other sparse CCA techniques, which impose prior covariance restrictions. This might explain why the SAR algorithm outperforms its alternatives in the out-of-sample cross-validation exercise.

Next, we discuss the solution provided by the SAR algorithm. For each chromosome, sparse canonical vectors are obtained. We do not fix the number of non-zero elements in the canonical vectors in advance, but select the sparsity parameters using the BIC discussed in Section 3. Figure 2 shows, for each chromosome, the copy number change measurements with non-zero weights5. Each CGH spot has a certain position on a chromosome, called the nucleotide position. The CGH measurements selected by the SAR algorithm are indicated by plotting a vertical line at their respective nucleotide positions. The different colors indicate the subset of variables selected in the construction of the corresponding canonical variate pair (first pair: black, second: red, third: blue, fourth: green).

We see from Figure 2 that the degree of sparsity selected by the BIC varies from one chromosome to the other. For chromosome 15, for example, only one canonical variate pair is selected and the BIC suggests a very sparse canonical vector. For chromosome 1, two canonical variate pairs are extracted, with a large number of non-zero elements in the second canonical variate pair. However, many of the non-zero weights are small in magnitude, as can be seen from the length of the vertical lines. By setting the sparsity parameter to a higher value, a sparser solution could be obtained. A trade-off needs to be made between inducing more sparsity, and thus performing better noise filtering, on the one hand, and reducing the risk of excluding important variables, on the other hand. Depending on the researcher's objective, the desired level of sparsity can easily be controlled by adjusting the sparsity parameter.

Figure 2: SAR algorithm: copy number change measurements with non-zero weights in the first (black), the second (red), the third (blue) and the fourth (green) canonical vectors are indicated for each of the 23 chromosomes.

6 Conclusion

In high-dimensional settings, the estimation imprecision of traditional CCA will be large. To overcome this problem, penalized versions of CCA have been introduced such as the canonical ridge or sparse CCA. The canonical ridge still includes all variables in the canonical vectors, whereas sparse CCA only includes a subset of the variables. This is highly valuable in high-dimensional settings since it eases interpretation, as illustrated in the genomic data application.

In this paper, we introduce a Sparse Alternating Regression (SAR) algorithm that considers the CCA problem from a predictive point of view. We recast the CCA problem into a penalized alternating regression framework to obtain sparse canonical vectors. Contrary to other popular sparse CCA procedures (i.e. [?]; [?]; [?]), we do not impose any covariance restrictions. We show that the SAR algorithm produces much better results than the other sparse CCA approaches. Especially in simulation settings where there is a dependency structure within each set of variables, the gains in estimation accuracy achieved by the SAR algorithm are pronounced. The data sets in the genomic application also contain highly correlated variables, and we illustrate that the SAR algorithm considerably outperforms the other sparse CCA techniques in an out-of-sample cross-validation exercise.

Both the SAR algorithm and the method of [?] use an alternating regression framework. There are, however, two important differences between the two approaches, leading to significant differences in performance. First, [?] perform univariate soft thresholding, which ignores the dependency structure within each set of variables. In contrast, we apply the lasso penalty to multiple linear regressions; the lasso only equals the soft thresholding estimator for a linear model with orthonormal design (see e.g. [?]). Second, we express the higher order canonical vectors in terms of the original data sets, whereas [?] express them in terms of the deflated data matrices.

In this paper, a lasso penalty is used to induce sparsity. Future work might consider other choices of penalty functions (see [?]), for instance the adaptive lasso [?], the smoothly clipped absolute deviation (SCAD) penalty [?], or a lasso with positivity constraints (see [?]). Note that [?] also treat CCA as a least squares problem. They focus on orthogonality properties of CCA and only construct the first two pairs of sparse canonical vectors. Their approach could be extended to higher order canonical correlations, but this would substantially increase the number of orthogonality constraints and the computing time.

The level of sparsity produced by all sparse CCA techniques hinges on the selection method used for the sparsity parameters. This might lead to substantial differences in sparsity recognition performance, as illustrated in the simulation study. Future work still needs to be done on the comparison of methods (BIC, cross-validation, measure of explained variability, among others) to select the optimal tuning parameters.

Footnotes

  1. Available in the R package PMA [?].
  2. Available at http://www.uhnres.utoronto.ca/labs/tritchler/.
  3. We re-implemented the algorithm of [?] in R.
  4. Available in the R package CCA [?].
  5. The construction of this figure is similar to the one presented in [?]. We use the R-code available in the R package PMA [?].