SMSSVD – SubMatrix Selection Singular Value Decomposition
High throughput biomedical measurements normally capture multiple overlaid biologically relevant signals, and often also signals representing different types of technical artefacts, such as batch effects. Signal identification and decomposition are accordingly main objectives in statistical biomedical modeling and data analysis. Existing methods, aimed at signal reconstruction and deconvolution, are in general either supervised, contain parameters that need to be estimated, or present other types of ad hoc features. We here introduce SubMatrix Selection Singular Value Decomposition (SMSSVD), a parameter-free unsupervised signal decomposition and dimension reduction method, designed to reduce noise adaptively for each low-rank signal in a given data matrix and to represent the signals in the data in a way that enables unbiased exploratory analysis and reconstruction of multiple overlaid signals, including identifying the groups of variables that drive the different signals.
The Submatrix Selection Singular Value Decomposition (SMSSVD) method produces a denoised signal decomposition from a given data matrix. The method guarantees orthogonality between signal components in a straightforward manner and is designed to make automation possible. We illustrate SMSSVD by applying it to several real and synthetic datasets and compare its performance to gold-standard methods such as PCA (Principal Component Analysis) and SPC (Sparse Principal Components, using Lasso constraints). SMSSVD is computationally efficient and, despite being parameter-free, in general outperforms existing statistical learning methods.
A Julia implementation of SMSSVD is openly available on GitHub (https://github.com/rasmushenningsson/SMSSVD.jl).
High throughput biomedical measurements, by design, normally capture multiple overlaid biologically relevant signals, but often also signals representing different types of biological and technical artefacts, such as batch effects. Different methods exist that aim at signal reconstruction and deconvolution of the resulting high-dimensional and complex datasets, but these methods almost always contain parameters that need to be estimated or present other types of ad hoc features. Methods developed specifically for omics data, and gene expression data in particular, include the gene shaving method [hastie2000gene], tree harvesting [hastie2001supervised], supervised principal components [bair2004semi] and amplified marginal eigenvector regression [ding2017predicting]. They employ widely different strategies to deal with the ubiquitous ‘many more variables than samples’ problem in omics data. Gene shaving uses the first principal component to iteratively guide variable selection towards progressively smaller nested subsets of correlated genes with large variances. An optimal subset size is then chosen using the ‘gap statistic’, a measure of how much better the subset is than what is expected by random chance. To find additional subsets (signals), each gene is first projected onto the orthogonal complement of the average gene in the current subset, and the whole process is repeated.
We here introduce SubMatrix Selection Singular Value Decomposition (SMSSVD), a parameter-free unsupervised dimension reduction technique primarily designed to reduce noise, adaptively for each low-rank signal in a data matrix, and to represent the data in a way that enables unbiased exploratory analysis and reconstruction of the multiple overlaid signals, including finding the variables that drive the different signals.
Our first observation for the theoretical foundation of SMSSVD is that the SVD of a linear map restricted to a hyperplane (linear subspace) shares many properties with the SVD of the corresponding unrestricted linear map. Using this, we show that, by iteratively choosing orthogonal hyperplanes based on criteria for optimal variable selection and concatenating the decompositions, we can construct a denoised decomposition of the data matrix. The SMSSVD method guarantees orthogonality between components in a straightforward manner and coincides with the SVD if no variable selection is applied. We illustrate SMSSVD by applying it to several real and synthetic datasets and compare its performance to gold-standard methods for unsupervised exploratory analysis: classical PCA (Principal Component Analysis) [hotelling1933analysis] and lasso- or elastic-net-based methods such as SPC (Sparse Principal Components) [witten2009penalized]. Just like PCA and SPC, SMSSVD is intended for use in a wide range of situations, and no assumptions specific to gene expression analysis are made in the derivation of the method. SMSSVD is computationally efficient and, despite being parameter-free, in general outperforms or equals the performance of the gold-standard methods. A Julia implementation of SMSSVD is openly available on GitHub.
Let be the restriction of a linear map to a -dimensional subspace such that . Furthermore, let be the singular value decomposition of . Then
Remark. In the statement of the theorem and in the proof below, we consider all vectors to belong to the full-dimensional spaces. In particular, we extend all vectors in subspaces of the full spaces with zero in the orthogonal complements.
1. The columns of are an orthonormal basis of and thus orthogonal to . 2. The columns of are an orthonormal basis of and . 3. . 4. Using 3 we get
5. The statement follows from , where we have used that . 6. Let and be the parts of the decomposition , which is possible since . The linear map is orthogonal projection onto and thus maps and . Since , it follows immediately that and that . ∎
Note that is the orthogonal projection on and is the orthogonal projection on . If is spanned by the right singular vectors corresponding to the largest singular values of , then is the truncated SVD, which by the Eckart–Young theorem is the closest rank matrix to in Frobenius and spectral norms. Furthermore, if , then and is the SVD of (without expanding and to orthonormal matrices). Also note that for these two cases, property 4 takes a simpler form, (symmetric to property 3), but the residual is nonzero in general.
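The Eckart–Young property can be checked numerically. The following sketch (plain NumPy, with an arbitrary toy matrix) verifies that the truncated SVD error in Frobenius norm equals the norm of the discarded singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))  # toy data matrix (samples x variables)

# Full SVD, then keep the top d components.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
d = 3
X_d = U[:, :d] @ np.diag(s[:d]) @ Vt[:d, :]  # truncated SVD

# By the Eckart-Young theorem, X_d is the closest rank-d matrix to X in
# Frobenius norm; the error equals the norm of the discarded singular values.
err = np.linalg.norm(X - X_d, 'fro')
assert np.isclose(err, np.sqrt(np.sum(s[d:]**2)))
```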
Theorem 2.1 concerns the relationship between and and shows that many important properties that hold for the (truncated) SVD are retained regardless of how the subspace is chosen. The results from Theorem 2.1 are put into practice in the following iterative algorithm. Let and repeat the following steps for
Compute from .
The iterations can continue as long as is nonzero or until some other stopping criteria is met. Finally, the results are concatenated:
Orthogonality between columns within each and respectively follows immediately from the definition. Step 3 above, together with Theorem 2.1, property 2, guarantees orthogonality between the columns of different ’s, since the columns of are in , for all . Similarly, properties 5 and 1 of Theorem 2.1 imply orthogonality between the columns of different ’s. That is, . The diagonal entries of each are decreasing, but the algorithm above does not ensure any structure between the blocks. In practice however, with each chosen to capture a strong signal in , we can expect the SMS singular values to be decreasing, or at least close to decreasing.
The rank decreases by in each iteration, that is , which follows from property 6 in Theorem 2.1. This implies that if the iterations are run all the way until . In general, , with equality iff the residual for all . Indeed, if equality holds, then . Step 3 of the algorithm above now implies that and Theorem 2.1, property 4, yields
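The iteration above can be sketched in code. This is an illustrative NumPy sketch, not the reference Julia implementation: the variable selection step (here a simple variance filter keeping a fixed fraction of the variables, both assumptions) stands in for the Projection Score optimization described below, and the names `smssvd_sketch` and `keep_frac` are hypothetical.

```python
import numpy as np

def smssvd_sketch(X, n_iter=2, d=1, keep_frac=0.5):
    """Sketch of the SMSSVD iteration: in each round, select a variable
    subset (a simple variance filter stands in for the Projection Score
    optimization), compute a truncated SVD of the restricted matrix to get
    the sample-side subspace, decompose the full matrix restricted to that
    subspace, and deflate before the next round.  X is variables x samples."""
    X = X.copy()
    Us, Ss, Vs = [], [], []
    for _ in range(n_iter):
        # Variable selection: keep the highest-variance rows (variables).
        var = X.var(axis=1)
        keep = var >= np.quantile(var, 1 - keep_frac)
        # Right singular vectors of the restricted matrix span the subspace.
        _, _, Vt = np.linalg.svd(X[keep, :], full_matrices=False)
        Pi = Vt[:d, :].T                  # orthonormal basis of the subspace
        U, S, Wt = np.linalg.svd(X @ Pi, full_matrices=False)
        V = Pi @ Wt.T                     # right singular vectors, full sample space
        Us.append(U); Ss.append(S); Vs.append(V)
        X = X - (X @ V) @ V.T             # project onto orthogonal complement
    return np.hstack(Us), np.concatenate(Ss), np.hstack(Vs)
```

Because each deflated residual has rows orthogonal to the previously extracted right singular vectors, the concatenated V is orthonormal by construction, mirroring the orthogonality argument above.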
To adaptively reduce noise, must depend on . Our motivating example is to use for selecting a subset of the variables that are likely to be less influenced by noise. This is a special case of choosing after performing a linear transform of the variables, which is described in the following theorem:
Take a linear map and an integer such that and let be the rank truncated SVD of . Furthermore let be the subspace spanned by the columns of and let be the SVD of . Then
and are orthonormal bases of .
and are bases of .
1. The columns of are orthogonal to . 2. . 3. Follows immediately from the definitions. 4. is a basis of . By property 2, , showing that span . Finally, since and have the same rank, is also a basis of . 5. For general matrices and , consider acting on each column of . We get
The result now follows from property 2, with and , since and . 6. From Theorem 2.1, property 4, we get . It remains to show that . By property 4, there exists a matrix such that and
where because of property 3. ∎
If , then .
Another way to interpret is that defines a (possibly degenerate) inner product on the sample space, which is used to find . To see this, let so that and , showing the well-known result that is an eigendecomposition of , where is the inner product of sample and . This naturally extends to kernel PCA, where is defined by taking scalar products after an (implicit) mapping to a higher-dimensional space. Any method that results in a low-dimensional sample space representation can indeed be used, since is spanned by the columns of by definition. We will not pursue these extensions here.
The Projection Score [fontes2011projection] provides a natural optimality criterion for and (and thus ) needed in each iteration of the SMSSVD algorithm. It is a measure of how informative a specific variable subset is, when constructing a rank approximation of a data matrix. A common application is to maximize the Projection Score over a sequence of variable subsets, where each subset consists of those variables that have a variance above a specific threshold. Using the notation from Theorem 2.2, the optimal variable subset describes a matrix and the optimal low-rank approximation is . Here has exactly one element in each column equal to , at most one element in each row equal to and all other elements equal to zero. Hence corresponds to selecting a subset of the variables of a data matrix and . In iteration of the SMSSVD algorithm, we optimize the Projection Score jointly over the variance filtering threshold and the dimension, which gives both an optimal variable subset and a simple dimension estimate of the signal that was captured.
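A minimal sketch of this subset search, assuming the variance-threshold family of nested subsets; the Projection Score itself is not reproduced here, so `score` is a caller-supplied stand-in, and the function name is hypothetical.

```python
import numpy as np

def best_variance_subset(X, score, thresholds, d):
    """Enumerate nested variable subsets defined by variance thresholds and
    return the subset (as a boolean row mask) maximizing score(X_subset, d).
    `score` is a stand-in for the Projection Score of [fontes2011projection];
    X is variables x samples."""
    var = X.var(axis=1)
    best = None
    for t in thresholds:
        keep = var >= t
        if keep.sum() <= d:        # need more variables than the target rank
            continue
        s = score(X[keep, :], d)
        if best is None or s > best[0]:
            best = (s, keep)
    return best[1]
```

In the full method the search would also run jointly over the dimension d, keeping the (threshold, dimension) pair with the highest score.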
The performance of SMSSVD is evaluated in comparison to SVD and SPC (Sparse Principal Components), a method similar to SVD, but with an additional lasso () constraint to achieve sparsity [witten2009penalized]. The methods are evaluated both for real data using three Gene Expression data sets and for synthetic data where the ground truth is known.
3.1 Gene Expression Data
Three Gene Expression data sets, two openly available with microarray data and one based on RNA-Seq available upon request from the original authors, were analyzed. Gene expression microarray profiles from a study of breast cancer [chin2006genomic] were previously used to evaluate SPC [witten2009penalized], but in contrast to their analysis, we use all samples and all genes. Each sample was labeled as one of five breast cancer subtypes: ‘basal-like’, ‘luminal A’, ‘luminal B’, ‘ERBB2’, and ‘normal breast-like’. In a study of pediatric Acute Lymphoblastic Leukemia (ALL), gene expression profiles were measured for diagnostic samples [ross2003classification]. The samples were labeled by prognostic leukemia subtype (‘TEL-AML1’, ‘BCR-ABL’, ‘MLL’, ‘Hyperdiploid (>50)’, ‘E2A-PBX1’, ‘T-ALL’ and ‘Other’). Our final data set is from another pediatric ALL study, where gene expression profiling was done from RNA-Seq data for samples [lilljebjorn2016identification]. The samples were aligned with Tophat2 [kim2013tophat2] and gene expression levels were normalized by TMM [robinson2010scaling]. Only genes with a support of at least reads in at least samples were kept. The annotated subtypes in this data set were ‘BCR-ABL1’, ‘ETV6-RUNX1’, ‘High hyperdiploid’, ‘MLL’, ‘TCF3-PBX1’ and ‘Other’. Here, ‘Other’ is a very diverse group containing everything that did not fit in the first five categories. We thus present results both with and without this group included.
The ability to extract relevant information from the gene expression data sets was evaluated for each model by how well it could explain the subtypes, using the Akaike Information Criterion (AIC) for model scoring. Given the low-dimensional sample representations from SMSSVD, SVD or SPC (for different values of the sparsity parameter), a Gaussian Mixture Model was constructed by fitting one multivariate Gaussian per subtype. The class priors were chosen proportional to the size of each subtype. The log-likelihood, where are the subtype labels, is the model and a vector of fitted model parameters, is used to compute the AIC. Figure 1 displays the AIC scores for the different models as a function of the model dimension. SMSSVD generally performs better than SVD, by a clear margin. Comparison with SPC is trickier, since the performance of SPC is determined by the sparsity parameter and there is no simple, objective way to choose it. However, SMSSVD compares well with SPC regardless of the value of this parameter.
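The evaluation procedure can be sketched as follows (NumPy, with hypothetical names; the small covariance regularization term is an assumption for numerical stability, and the parameter count is one plausible choice for means, covariances and priors):

```python
import numpy as np

def gmm_aic(Z, labels):
    """Fit one multivariate Gaussian per subtype to the low-dimensional
    sample representation Z (samples x dims) and score the labels with AIC.
    Class priors are proportional to subtype sizes, as described above."""
    n, d = Z.shape
    classes = sorted(set(labels))
    loglik = 0.0
    for c in classes:
        Zc = Z[[l == c for l in labels]]
        mu = Zc.mean(axis=0)
        cov = np.cov(Zc, rowvar=False) + 1e-6 * np.eye(d)   # regularize (assumption)
        diff = Zc - mu
        _, logdet = np.linalg.slogdet(cov)
        quad = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
        prior = len(Zc) / n
        loglik += np.sum(np.log(prior) - 0.5 * (d*np.log(2*np.pi) + logdet + quad))
    # One plausible parameter count: means + covariances + free priors.
    k = len(classes) * (d + d*(d+1)//2) + (len(classes) - 1)
    return 2*k - 2*loglik
```

Lower AIC then means the low-dimensional representation explains the subtype labels better.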
3.2 Synthetic Data
SMSSVD decomposes a matrix observed in noisy conditions as a series of orthogonal low-rank signals. The aim is to obtain a stable representation of the samples and then recover as much of the variable information as possible, even for signals that are heavily corrupted by noise. To evaluate SMSSVD, we synthetically create a series of low-rank signals that are orthogonal (i.e. and for ) and that have a chosen level of sparsity on the variable side, and try to recover the individual ’s from the observed matrix where is a matrix and . To measure how well SMSSVD recovers the signals from the data, we look at each signal separately, considering only variables where the signal has support. Let be the reconstruction error of signal ,
where is the reconstructed signal and is defined such that multiplying with from the left selects the variables (rows) where is nonzero.
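In code, restricting the error to the support of a signal might look like this (the row-selection matrix is realized as a boolean mask; the function name is hypothetical):

```python
import numpy as np

def support_error(Y, Y_hat):
    """Relative Frobenius error of a reconstructed signal Y_hat, restricted
    to the variables (rows) where the true signal Y has support."""
    support = np.any(Y != 0, axis=1)       # rows where the signal is nonzero
    return (np.linalg.norm(Y[support] - Y_hat[support]) /
            np.linalg.norm(Y[support]))
```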
While SMSSVD is designed to find -dimensional signals (), the same is not true for SVD and SPC. To test the ability to find the signals, rather than the ability to find them in the right order, the components are reordered using an algorithm that tries to minimize the total error by greedily matching the rank matrices from the decomposition to the signals , always picking the match that lowers the total error the most. The number of rank matrices matched to each signal is equal to . Note that with no noise present, SVD is guaranteed to always find the optimal decomposition.
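A sketch of this greedy reordering step, under the assumption that components and signals are given as full matrices (names are hypothetical):

```python
import numpy as np

def greedy_match(components, signals, per_signal):
    """Greedily assign low-rank component matrices to signals: repeatedly
    pick the (component, signal) pair whose assignment lowers the total
    reconstruction error the most, until each signal has `per_signal`
    components.  Returns the per-signal reconstructions."""
    recon = [np.zeros_like(s) for s in signals]
    counts = [0] * len(signals)
    unused = set(range(len(components)))
    err = lambda i: np.linalg.norm(signals[i] - recon[i])
    while unused and any(c < per_signal for c in counts):
        best = None
        for c in unused:
            for i, sig in enumerate(signals):
                if counts[i] >= per_signal:
                    continue
                gain = err(i) - np.linalg.norm(sig - (recon[i] + components[c]))
                if best is None or gain > best[0]:
                    best = (gain, c, i)
        _, c, i = best
        recon[i] += components[c]
        counts[i] += 1
        unused.remove(c)
    return recon
```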
The biplots in Figure 2 illustrate how SMSSVD works and how its signal reconstructions compare to those of other methods. If there is no noise, perfect decompositions are achieved by all methods apart from SPC with a high degree of sparsity. An artificial example where the noise is only added to the non-signal variables highlights that SMSSVD can still perfectly reconstruct both samples and signal variables, whereas the other methods display significant defects. Finally, when all variables are affected by noise, SMSSVD still gets the best results.
Next, we created several data sets for a variety of conditions based on the parameters: the number of samples, the number of variables, the number of variables in the support of each signal, the number of signals and the rank of each signal. For each signal, we randomize matrices and , choose a diagonal matrix and let . For both and , each new column is created by sampling a vector of i.i.d. Gaussian random variables and projecting it onto the orthogonal complement of the subspace spanned by previous columns (in the current and previous signals). For , we only consider the subspace spanned by randomly selected variables. The result is then expanded by inserting zeros for the other variables. To complete the signal, the ’th diagonal element of is chosen such that there is a decline in power between signals and within the components of each signal. Finally, i.i.d. Gaussian noise is added to the data matrix. Figures 3, 4 and 5 show test results for data sets randomized in this way for different sets of parameters. SMSSVD is the only method that performs well over the whole set of parameters. The only situation where SMSSVD is consistently outperformed is by SVD for large , and then only by a narrow margin. SMSSVD performs particularly well, in comparison to the other methods, in the difficult cases where the signal-to-noise ratio is low. SPC performance clearly depends on the regularization parameter, which must be chosen differently in different situations. However, despite being a parameter-free method, SMSSVD outperforms SPC in most cases.
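The column construction (i.i.d. Gaussian sampling followed by projection onto the orthogonal complement of all previous columns) can be sketched as:

```python
import numpy as np

def orthogonal_columns(n, k, prev=None, rng=None):
    """Sample k i.i.d. Gaussian columns of length n, each projected onto the
    orthogonal complement of all previously generated columns (assumed
    orthonormal) and normalized, as in the synthetic-signal construction."""
    if rng is None:
        rng = np.random.default_rng()
    cols = [] if prev is None else [prev[:, j] for j in range(prev.shape[1])]
    out = []
    for _ in range(k):
        v = rng.standard_normal(n)
        for u in cols:
            v = v - (v @ u) * u        # project out earlier columns (Gram-Schmidt)
        v = v / np.linalg.norm(v)
        cols.append(v)
        out.append(v)
    return np.column_stack(out)
```

For the variable-side factors, the same construction would be applied within the randomly selected support rows before zero-padding to the full variable set.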
We have presented SMSSVD, a dimension reduction technique designed for complex data sets with multiple overlaid signals observed in noisy conditions. When compared to other methods, over a wide range of conditions, SMSSVD performs equally well or better. SMSSVD excels in situations where there are many more variables than samples but most of the variables contribute only noise, a very common situation for high throughput biological data. As a parameter-free method, SMSSVD requires no assumptions to be made about the level of sparsity. Indeed, SMSSVD can handle different signals within the same data set that exhibit very different levels of sparsity. Being parameter-free also makes SMSSVD suitable for automated pipelines, where few assumptions can be made about the data.
A common strategy when analyzing high-dimensional data is to first apply PCA (SVD) to reduce the dimension to an intermediate number, high enough to give an accurate representation of the data set, but low enough to get rid of some noise and to speed up downstream computations (see e.g. [maaten2008visualizing]). We argue that SMSSVD is very useful in this situation, since it can recover multiple overlaid signals and adaptively reduce the noise affecting each signal, so that even signals with a lower signal-to-noise ratio can be found.
Our unique contribution is that we first solve a more suitable dimension reduction problem for robustly finding signals in a data set corrupted by noise and then map the result back to the original variables. We also show how this combination of steps gives SMSSVD many desirable properties, related to the SVD of both the full data matrix and of the smaller matrix from the variable selection step. Orthogonality between components is one of the cornerstones of the SVD, but it is often difficult to satisfy the orthogonality conditions when other factors are taken into account. SPC, for instance, gives orthogonality on the sample side but not on the variable side, and the average genes of each subset in gene shaving are only ‘reasonably’ uncorrelated. For SMSSVD, orthogonality follows immediately from the construction, simplifying interpretation and subsequent analysis steps. Theorem 2.2, property 2, highlights that the variables retained in the variable selection step are unaffected when the solution is expanded to the full set of variables. Hence, we can naturally view each signal from the point of view of the selected variables, or using all variables.
The variable selection step in the SMSSVD algorithm can be chosen freely. For exploratory analysis, optimizing the Projection Score based on variance filtering is a natural and unbiased choice. Another option is to use the Projection Score for response-related filtering, e.g. ranking the variables by the absolute value of the -statistic when performing a -test between two groups of samples. The algorithm also directly supports variable weighting, by choosing the matrix as a diagonal matrix with a weight for each variable. Clearly, this is a generalization of variable selection.
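A small illustration of how a diagonal weighting matrix generalizes variable selection (the weight vector `w` and the 0/1 interpretation are illustrative assumptions):

```python
import numpy as np

# Variable weighting as a generalization of variable selection: a diagonal
# matrix scales each variable (row) before the subspace is computed.
# A 0/1 diagonal recovers plain variable selection.
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 4))                  # variables x samples
w = np.array([1.0, 0.5, 0.0, 2.0, 0.0, 1.0])     # per-variable weights
WX = np.diag(w) @ X                              # weighted data
# Rows with weight 0 are removed entirely, exactly as in variable selection.
assert np.allclose(WX[[2, 4], :], 0)
```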
Kernel PCA, SPC, and other methods that give low-dimensional sample representations, but where the variable information is (partially) lost, can also be extended by SMSSVD (relying on Theorem 2.1 only), as long as a linear representation in the original variables can be considered meaningful. Apart from retrieving a variable-side representation, the SMSSVD algorithm also makes it possible to find multiple overlapping signals, by applying the dimension reduction method of interest as the first step of each SMSSVD iteration.
The authors would like to thank Thoas Fioretos and Henrik Lilljebjörn at the Department of Laboratory Medicine, Division of Clinical Genetics, Lund University, for giving us access to the RNA-seq data presented in [lilljebjorn2016identification].