Sparse Variable Selection on High Dimensional Heterogeneous Data with Tree Structured Responses
Abstract
We consider the problem of sparse variable selection on high-dimensional heterogeneous data sets, a problem that has attracted renewed interest recently due to the growth of biological and medical data sets with complex, non-i.i.d. structures and prolific response variables. The heterogeneity is likely to confound the association between explanatory variables and responses, resulting in a wealth of false discoveries when Lasso or its variants are naïvely applied. Consequently, there is growing interest in developing effective confounder-correction methods. However, applying recent confounder-correction methods out of the box yields undesirable performance because they ignore the convoluted interdependency among the prolific response variables. To improve current variable selection methods, we introduce a model that exploits the dependency information from multiple responses to select the active variables in heterogeneous data. Through extensive experiments on synthetic and real data sets, we show that our proposed model outperforms the existing methods.
1 Introduction
Variable selection is one of the central tasks in statistics and has been studied for decades [\citeauthoryearCai, Zhang, and He2010, \citeauthoryearNie et al.2010, \citeauthoryearDu and Shen2015]. Modern machine learning problems, especially biological and medical applications, often look to existing statistical approaches for solutions. Lasso [\citeauthoryearTibshirani1996] is one such method, extensively adopted in a variety of areas for sparse variable selection tasks. However, the increasing volume of data sets often requires the data to be collected from multiple batches and then integrated together. This procedure is particularly harmful to biological [\citeauthoryearHe and Lin2011] and medical [\citeauthoryearChen and Wang2013, \citeauthoryearZhou et al.2013] data sets, which are sensitive to the data sources, such as populations, hospitals or even experimental devices. This sensitivity results in heterogeneity and therefore breaks one of the most fundamental assumptions (the i.i.d. assumption) that most statistical machine learning methods make. More importantly, due to the expense of biological and medical data, different batches of data are collected for different purposes from distinctly different sources. Consequently, the heterogeneity often induces confounding factors between explanatory variables and response variables, resulting in numerous false positive selected variables when classical variable selection methods are applied [\citeauthoryearAstle and Balding2009].
To further understand the challenge that heterogeneity introduces to biological or medical data sets, and to define the problem, consider that we have data samples of the form $(x, y, z)$, where $x$ stands for the explanatory variables, $y$ stands for the responses, and $z$ stands for the indicator of the data source. The dependency between $x$ and $y$ is the premise of any variable selection task [\citeauthoryearRakitsch et al.2013], and the dependency between $y$ and $z$ is introduced through heterogeneity [\citeauthoryearWang and Yang2016, \citeauthoryearWang, Aragam, and Xing2017, \citeauthoryearWang et al.2017]. More importantly, biological and medical data are typically expensive to collect; therefore, individual samples of certain diseases are mostly collected from the patients of many hospitals, while samples of the control group are mostly collected from volunteers from several different undeveloped regions. This data collection procedure introduces a dependency between $x$ and $z$.
In the real world, this problem may be even more challenging, because the origin of the different samples is in most cases lost, either through data compression or experimental necessity; nowadays genetic association studies rarely see the origin of samples listed. The source indicator $z$ thus becomes a confounding factor between $x$ and $y$ [\citeauthoryearHenderson1975]. One challenge of the heterogeneous-data variable selection problem is to correct the confounding effects introduced by $z$.
Further, many real-world biological and medical data sets are collected with multiple response variables. Some of these responses are more closely related and more likely to share common relevant covariates than others [\citeauthoryearChen et al.2010, \citeauthoryearKim and Xing2012]. For instance, in genetic association analysis, which aims to select the single-nucleotide polymorphisms (explanatory variables) that could affect the phenotypes (response variables), genes in the same pathway are more likely to share a common set of relevant explanatory variables than other genes. Thus, to improve the performance of variable selection, we also incorporate the complex correlation structure of the responses. Therefore, in this paper, we extend recent sparse linear mixed models [\citeauthoryearRakitsch et al.2013, \citeauthoryearWang and Yang2016], which correct confounding factors and perform variable selection simultaneously, to further account for the relatedness between different responses. We propose the tree-guided sparse linear mixed model, namely TgSLMM, to correct the confounders and incorporate the relatedness among response variables simultaneously. Further, we examine our model through extensive synthetic experiments and show that it is superior to other existing approaches.
2 Related Work
Recent years have witnessed great advances in the variable selection area. The most classical approach is $\ell_1$-norm regularization (i.e., Lasso regression [\citeauthoryearTibshirani1996]). Further, recent studies have extended the model capability by introducing different regularizers [\citeauthoryearChen et al.2010]. Examples include the Smoothly Clipped Absolute Deviation (SCAD) [\citeauthoryearFan and Li2001], the Local Linear Approximation (LLA) [\citeauthoryearZou and Li2008] and the Minimax Concave Penalty (MCP) [\citeauthoryearZhang and others2010]. These methods overcome the limitations of Lasso [\citeauthoryearFan and Li2001] at the cost of introducing non-convexity into the optimization problem. Some other variable selection methods, such as [\citeauthoryearKolda and Bader2009], ignore the underlying multidimensional structure, leading to severe problems on small data sets. [\citeauthoryearTan et al.2012] impose a rank constraint in the regularization to factor matrices and promote sparsity in variable selection, which hurts interpretability. [\citeauthoryearLiu, Shao, and Fu2016] proposed an unsupervised variable selection method, but it cannot be applied to high-dimensional data with heterogeneity.
In the non-i.i.d. setting, such as when the data set originates from different sources, confounders pose a challenge to variable selection. Corresponding solutions have been studied for decades. Principal components analysis (PCA) [\citeauthoryearPatterson, Price, and Reich2006, \citeauthoryearPrice et al.2006] and the linear mixed model [\citeauthoryearGoddard2009, \citeauthoryearKang et al.2010] are two popular and efficient approaches to correct the confounding effect. The latter provides a more fine-grained way to model population structure and rose to prominence in the animal breeding literature, where it was used to correct for kinship and family structure [\citeauthoryearHenderson1975]. Many extensions have been developed, but extensions such as LMM-Select [\citeauthoryearListgarten et al.2012], BOLT-LMM [\citeauthoryearLoh et al.2015] and the liability-threshold mixed linear model (LTMLM) [\citeauthoryearLoh et al.2015], along with other methods [\citeauthoryearLippert et al.2011, \citeauthoryearSegura et al.2012, \citeauthoryearPirinen et al.2013], rely only on univariate testing to select variables once the confounding factor is corrected. Recent attempts have been made to propose multivariate testing models [\citeauthoryearBondell, Krishna, and Ghosh2010, \citeauthoryearFan and Li2012, \citeauthoryearRakitsch et al.2013, \citeauthoryearWang and Yang2016].
3 TreeGuided Sparse Linear Mixed Model
Throughout this paper, $X$ denotes the matrix of explanatory variables of the individuals, $Y$ the matrix of response variables, and $B$ the matrix of effect sizes. We use subscripts to denote rows and superscripts to denote columns; for example, $X^j$ and $X_i$ are the $j$-th column and the $i$-th row of $X$, respectively.
3.1 Sparse Linear Mixed Model
The linear mixed model (LMM) is an extension of the standard linear regression model that explicitly describes the relationship between a response variable and explanatory variables by incorporating an extra random term to account for confounding factors. To introduce the sparse linear mixed model (LMM-Lasso), we briefly revisit the classical linear mixed model in Equation 1:
$$y = X\beta + Zu + \epsilon, \qquad u \sim \mathcal{N}(0, \sigma_g^2 I), \quad \epsilon \sim \mathcal{N}(0, \sigma_e^2 I) \qquad (1)$$
where $Z$ is a matrix of random effects, $u$ captures the confounding influences and $\epsilon$ denotes the observation noise; all of them follow zero-mean Gaussian distributions. Intuitively, the product $ZZ^\top$ models the covariance between the observations. Assuming that $K = ZZ^\top$, we can further rewrite the model as Equation 2 to simplify the mathematical derivation:
$$y \sim \mathcal{N}\!\left(X\beta,\; \sigma_g^2 K + \sigma_e^2 I\right) \qquad (2)$$
where $K = ZZ^\top$ represents the covariance between the observations, and $\sigma_g^2$ represents the magnitude of the confounding factors. Assuming the prior distribution of $\beta$ can be expressed as $p(\beta)$, we can define the log-likelihood function as Equation 3:
$$\ell(\beta, \sigma_g^2, \sigma_e^2) = \log \mathcal{N}\!\left(y \mid X\beta,\; \sigma_g^2 K + \sigma_e^2 I\right) + \log p(\beta) \qquad (3)$$
Due to the sparsity of $\beta$, it is reasonable to assume that $\beta$ follows a Laplace shrinkage prior. This assumption leads to the sparse linear mixed model (LMM-Lasso). However, LMM-Lasso fails to consider the relatedness among response variables, a defect that motivates the tree-guided sparse linear mixed model.
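As an illustration, the generative model of Equations 1–2 can be simulated directly. The sketch below (all names and sizes are illustrative, not from the paper's code) shows how the random effect $Zu$ makes samples from the same source correlated:

```python
import numpy as np

# Minimal sketch of the LMM generative model, Eq. (1)-(2):
#   y = X beta + Z u + eps,  with cov(Z u) = sg2 * K,  K = Z Z^T.
rng = np.random.default_rng(0)
n, p = 100, 20

# Two "populations": Z is the (usually unobserved) source-indicator matrix.
Z = np.zeros((n, 2))
Z[:50, 0] = 1
Z[50:, 1] = 1
K = Z @ Z.T                          # confounder covariance, K = Z Z^T

X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = 1.0                       # sparse ground-truth effect sizes

sg2, se2 = 0.5, 0.1
u = rng.multivariate_normal(np.zeros(n), sg2 * K)   # shared confounding term
eps = rng.standard_normal(n) * np.sqrt(se2)         # i.i.d. observation noise
y = X @ beta + u + eps
```

Samples drawn from the same source share the random effect, so their responses are correlated even after conditioning on $X$, which is exactly what a naïve Lasso mistakes for signal.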
3.2 TreeGuided Sparse Linear Mixed Model
Based on the framework of Equation 3, we replace the Laplace prior on the effect sizes with the TreeLasso penalty defined in Equation 4; substituting it into Equation 3 yields the optimization problem of the proposed TgSLMM.
$$\Omega(B) = \lambda \sum_{j=1}^{p} \sum_{v \in V} w_v \left\| B_j^{G_v} \right\|_2 \qquad (4)$$
where $\lambda$ is a tuning parameter that controls the amount of sparsity in the solution. Through Equation 4, TreeLasso incorporates the relatedness among the responses. The weights and overlaps of the groups of TreeLasso are determined by a hierarchical clustering tree [\citeauthoryearGolub et al.1999, \citeauthoryearSørlie et al.2001]. Each node $v$ of the tree is associated with the height $h_v$ of the group $G_v$ whose members are the response variables at the leaf nodes of the subtree rooted at node $v$. In general, $s_v$ represents the weight for selecting relevant covariates separately for the responses associated with each child of node $v$, whereas $g_v$ represents the weight for selecting relevant covariates jointly for the responses of all of the children of node $v$. $B_j^{G_v}$ is the vector of regression coefficients $\{B_j^m : m \in G_v\}$. Each group of subtree regression coefficients is weighted with $w_v$, which is defined as Equation 5:
$$w_v = \begin{cases} g_v \prod_{m \in \mathrm{Ancestors}(v)} s_m & \text{if } v \text{ is an internal node} \\ \prod_{m \in \mathrm{Ancestors}(v)} s_m & \text{if } v \text{ is a leaf node} \end{cases} \qquad (5)$$
Here, $k$ denotes the number of response variables, which is equivalent to the number of leaf nodes in the tree.
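The penalty of Equations 4–5 also has an equivalent recursive form (as in the tree-guided group lasso of [\citeauthoryearKim and Xing2012]): an internal node contributes $s_v$ times the sum over its children plus $g_v$ times the group norm. A hedged sketch, with a small hand-built tree standing in for the hierarchical clustering tree and assumed per-node weights $(s_v, g_v)$:

```python
import numpy as np

def leaves(node):
    """Collect the response indices under a (possibly nested) tree node."""
    if isinstance(node, int):            # leaf: a single response index
        return [node]
    _, _, children = node
    return [l for c in children for l in leaves(c)]

def tree_penalty(b, node):
    """Recursive tree penalty:
    W(v) = s_v * sum_c W(c) + g_v * ||b_{G_v}||_2 for internal nodes,
    W(v) = |b_v| for a leaf.  Expanding the recursion reproduces the
    per-group weights w_v of Eq. (5)."""
    if isinstance(node, int):
        return abs(b[node])
    s, g, children = node
    group = leaves(node)
    return s * sum(tree_penalty(b, c) for c in children) \
        + g * np.linalg.norm(b[group])

# Tree over 4 responses: (0,1) tightly grouped (large g), (2,3) loosely grouped.
tree = (0.5, 0.5, [(0.2, 0.8, [0, 1]), (0.9, 0.1, [2, 3])])
b = np.array([1.0, 1.0, 0.0, 2.0])       # one row B_j of the effect-size matrix
pen = tree_penalty(b, tree)
```

A tightly clustered group (large $g_v$) is pushed toward joint selection of its responses, while a loose group (large $s_v$) is penalized element-wise, close to an ordinary Lasso.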
3.3 Parameter Learning
Overall, optimizing the TgSLMM objective with hyperparameters $\{\sigma_g^2, \sigma_e^2, \lambda\}$ is a non-convex optimization problem, further complicated by the weights $w_v$. Hence, we first use the null-model fitting method to correct the confounding factors without considering the fixed effect, and then solve the resulting TreeLasso regression problem using the smoothing proximal gradient method [\citeauthoryearChen et al.2012].
Null-model fitting
The null-model fitting method, which first optimizes $\sigma_g^2$ and $\sigma_e^2$ under the null model while ignoring the fixed effect of the explanatory variables, can yield near-identical results to an exact method [\citeauthoryearKang et al.2010]. Introducing the ratio between the noise variance and the variance of the random effect, $\delta = \sigma_e^2 / \sigma_g^2$, we can transform Equation 2 into Equation 6:
$$y \sim \mathcal{N}\!\left(X\beta,\; \sigma_g^2 (K + \delta I)\right) \qquad (6)$$
In general, we first compute the spectral decomposition $K = U S U^\top$, where $U$ is the eigenvector matrix and $S$ is the diagonal eigenvalue matrix. After that, we rotate the data by $U^\top$ in order to make the covariance of the Gaussian distribution isotropic. Then, we carry out a one-dimensional optimization with regard to $\delta$ to optimize the log-likelihood, while $\sigma_g^2$ can be optimized in closed form at every evaluation.
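A minimal sketch of this null-model step, assuming the model of Equation 6 with the fixed effect dropped; the grid search below is an illustrative stand-in for a proper one-dimensional optimizer, and the closed-form $\sigma_g^2$ follows from setting the derivative of the rotated log-likelihood to zero:

```python
import numpy as np

# Null model: y ~ N(0, sg2 * (K + delta * I)). After rotating by U^T,
# the components of yt are independent with variances sg2 * (S_i + delta).
rng = np.random.default_rng(1)
n = 80
Z = np.zeros((n, 2))
Z[:40, 0] = 1
Z[40:, 1] = 1
K = Z @ Z.T
y = rng.multivariate_normal(np.zeros(n), 2.0 * K + 0.5 * np.eye(n))

S, U = np.linalg.eigh(K)             # spectral decomposition K = U diag(S) U^T
yt = U.T @ y                         # rotated responses

def neg_loglik(delta):
    d = S + delta                    # eigenvalues of K + delta * I
    sg2 = np.mean(yt**2 / d)         # closed-form MLE of sg2 given delta
    return 0.5 * (n * np.log(2 * np.pi * sg2) + np.sum(np.log(d)) + n)

deltas = np.logspace(-2, 2, 200)     # 1-D search over delta only
best = deltas[np.argmin([neg_loglik(d) for d in deltas])]
```

Because $\sigma_g^2$ is profiled out in closed form, only $\delta$ requires numerical optimization, which is what makes the null-model step cheap.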
Reduction to TreeLasso regression problem
Having obtained $\delta$, we use the spectral decomposition computed before to rotate the data such that the covariance matrix becomes isotropic. Let $\tilde{X}$ and $\tilde{Y}$ be the resulting rescaled data, calculated by the following equations: $\tilde{X} = (S + \delta I)^{-1/2} U^\top X$ and $\tilde{Y} = (S + \delta I)^{-1/2} U^\top Y$.
Using this transformation, the problem reduces to a standard TreeLasso regression, since it is free of population structure. Thus, in the following step, we can obtain $\hat{B}$ as Equation 7:
$$\hat{B} = \operatorname*{arg\,min}_B \; \frac{1}{2} \left\| \tilde{Y} - \tilde{X} B \right\|_F^2 + \Omega(B) \qquad (7)$$
where $\|\cdot\|_F$ denotes the matrix Frobenius norm and $\Omega(B)$ is the TreeLasso penalty determined by Equation 4; we can then directly employ the smoothing proximal gradient descent method.
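The whitening-then-solve pipeline can be sketched end to end. For brevity, a plain Lasso soft-thresholding proximal step stands in for the TreeLasso proximal operator (which the paper handles with the smoothing proximal gradient method), and a single response is used; sizes and the penalty strength are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 30
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = 2.0                        # sparse ground truth
K = np.eye(n)                              # illustrative kinship; normally K = Z Z^T
delta, sg2 = 1.0, 1.0
y = X @ beta_true + rng.multivariate_normal(
    np.zeros(n), sg2 * (K + delta * np.eye(n)))

# Whitening transform (S + delta I)^{-1/2} U^T makes the noise isotropic.
S, U = np.linalg.eigh(K)
W = np.diag(1.0 / np.sqrt(S + delta)) @ U.T
Xt, yt = W @ X, W @ y

# Proximal gradient descent on 0.5*||yt - Xt b||^2 + lam*||b||_1.
lam = 5.0
step = 1.0 / np.linalg.norm(Xt, 2) ** 2    # 1 / Lipschitz constant of the gradient
b = np.zeros(p)
for _ in range(500):
    z = b - step * (Xt.T @ (Xt @ b - yt))  # gradient step on the smooth part
    b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold prox
```

After whitening, any structured-sparsity solver can be plugged in by swapping the proximal step; that separation is the point of the reduction.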
4 Synthetic Experiments
In this section, we evaluate the results of TgSLMM versus TreeLasso, LMM-Lasso and the other methods mentioned above, shown as receiver operating characteristic (ROC) curves.¹

¹The problem can be regarded as a classification problem: identifying the active explanatory variables. For each threshold, we select the explanatory variables whose absolute estimated effect sizes are higher than the threshold; if a selected explanatory variable has a nonzero value in the ground-truth effect sizes, it counts as a true positive.
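The evaluation described in the footnote can be sketched as follows; this is an illustrative re-implementation of the threshold sweep, not the authors' code:

```python
import numpy as np

def roc_points(beta_hat, beta_true):
    """Trace the ROC by lowering the threshold on |beta_hat| one variable
    at a time; a selected variable is a true positive iff its ground-truth
    effect size is nonzero."""
    active = beta_true != 0
    order = np.argsort(-np.abs(beta_hat))      # largest |effect| selected first
    tpr, fpr, tp, fp = [], [], 0, 0
    for j in order:
        if active[j]:
            tp += 1
        else:
            fp += 1
        tpr.append(tp / active.sum())
        fpr.append(fp / (~active).sum())
    return np.array(fpr), np.array(tpr)

beta_true = np.array([1.0, 0.0, 2.0, 0.0])
beta_hat = np.array([0.9, 0.1, 1.8, -0.05])    # good estimate: actives ranked first
fpr, tpr = roc_points(beta_hat, beta_true)
# Area under the step curve (rectangle rule is exact for a ranking ROC).
auc = np.sum(np.diff(np.concatenate(([0.0], fpr))) * tpr)
```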
4.1 Data Generation
First, we simulate a sparse tree-structured effect-size matrix $B$; an illustrated example is shown in Figure 1. Each row of the tree structure can be viewed either as a leaf node, with elements scattered in different columns, or as a non-leaf node, which contains all the elements its child nodes hold. The generation rules are listed below:

- Columns further to the right have fewer nonzero elements.

- Elements in columns further to the right have larger values.

- The leftmost column has nonzero elements in all positions, denoting the most general feature.

- The non-leaf nodes can be iteratively divided into two kinds of nodes: one has nonzero elements in all columns, while the other has nonzero elements only in the rightmost column.
Then we generate the centroids of the different distributions. Letting $\mu_j$ be the centroid of the $j$-th distribution, we generate the explanatory variables from a multivariate Gaussian distribution as follows:
$$x_i \sim \mathcal{N}(\mu_j, \gamma_x I) \qquad (8)$$
where $x_i$ denotes the $i$-th sample, which originates from the $j$-th distribution. Then we generate an intermediate response matrix $\hat{Y}$ from $X$ and $B$ with noise:
$$\hat{Y} = XB + E, \qquad E_i^m \sim \mathcal{N}(0, \gamma_n) \qquad (9)$$
To get the final response matrix $Y$, we introduce a covariance matrix to simulate the correlation between different responses:
$$Y^m \sim \mathcal{N}\!\left(\hat{Y}^m,\; \gamma_y M M^\top\right) \qquad (10)$$
where $\gamma_y$ controls the magnitude of the variance. Letting $M$ be the matrix formed by stacking the centroids $\mu_j$ of each sample's distribution, we choose $M M^\top$ to simulate the correlation between observations.
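The generation protocol above can be sketched compactly. Parameter names mirror Table 1 loosely, the tree-structured $B$ is replaced by a simple sparse matrix for brevity, and all sizes are scaled down; everything here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, k, n_groups = 200, 50, 5, 4
gamma_x, gamma_n, gamma_y = 0.1, 0.05, 0.02   # covariance/noise magnitudes

# Heterogeneous sources: each sample's X is drawn around its group centroid.
centroids = rng.standard_normal((n_groups, p))
groups = rng.integers(0, n_groups, size=n)
X = centroids[groups] + np.sqrt(gamma_x) * rng.standard_normal((n, p))

# Sparse effect sizes (a stand-in for the tree-structured B of Figure 1).
B = np.zeros((p, k))
B[:5, :] = 1.0

# Correlated observation noise via M M^T, M = stacked per-sample centroids.
M = centroids[groups]
Sigma = gamma_n * np.eye(n) + gamma_y * (M @ M.T) / p
E = rng.multivariate_normal(np.zeros(n), Sigma, size=k).T   # (n, k) noise
Y = X @ B + E
```

Because the noise covariance contains $MM^\top$, samples from the same source stay correlated in $Y$, reproducing the confounding the model is meant to correct.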
4.2 Experiment Results
We assessed the ability of TgSLMM on our synthetic data sets. The experimental settings are listed in Table 1.
Parameter  Default  Description

$n$  1000  the number of data samples
$p$  5000  the number of explanatory variables
$k$  50  the number of response variables
$c$  10  the number of distributions that the data originate from
$s$  0.05  the percentage of active variables
$\gamma_x$  0.001  the magnitude of the covariance of the explanatory variables
$\gamma_y$  1  the magnitude of the covariance of the response variables
$\gamma_n$  0.05  the magnitude of the noise
The results are shown in Figure 10. To evaluate the performance of the proposed model, TreeLasso, LMM-Lasso, MCP, SCAD, Lasso, BOLT-LMM, LTMLM and LMM-Select are also tested. In general, our method outperforms all the other methods.
We examine each method's ability to identify active variables in different data sets by modifying each default value. In Figure 10(a), as the number of samples increases, and in Figure 10(b), as the number of explanatory variables decreases, the ratio of variables to samples gets smaller and the performance improves as expected. Compared to TreeLasso and the other methods, our method is more robust on big data sets, which accords with the real-world situation. As we increase the number of response variables in Figure 10(c), increase the number of distributions in Figure 10(d), or decrease the proportion of active variables in Figure 10(e), the problem becomes more challenging. Figures 10(f) and 10(g) show that our method is more flexible to different covariances of the explanatory and response variables. In Figure 10(e), we notice that when the proportion of active variables is large, the performance of TgSLMM and LMM-Lasso is similar; however, this setting contradicts the premise of our research that the active variables should be sparse. Throughout our experiments, TreeLasso struggles to identify the active variables on high-dimensional heterogeneous data.
TgSLMM also performs best in most cases in terms of precision-recall curves; these figures are omitted to save space.
4.3 Analysis of the Yielded $B$ and $\hat{Y}$
We use the same experimental setting² as in Figure 1. The results are shown in Figures 11 and 12.

²The other parameters are as follows: the number of distributions is 10; the percentage of active variables is 0.05; the magnitude of the covariance of the explanatory variables is 0.001; the magnitude of the covariance of the response variables is 1; the magnitude of the noise is 0.05.
Figure 11 shows that TgSLMM recovers both the values and the structure of the ground-truth effect sizes, revealing the strong ability of TgSLMM in variable selection. LMM-Lasso has trouble finding enough useful information. Misled by the confounding factors, TreeLasso discovers too many false positives; it also falls short when the data set becomes as complicated as the one generated for Figure 10. Based on Figure 12, the prediction performance of both TgSLMM and TreeLasso is convincing, while LMM-Lasso fails as reported before. Unsurprisingly, the proposed TgSLMM also performs best in estimating the effect sizes with respect to mean-squared error across almost all experimental settings. The figures of the mean-squared errors are omitted, as is the analysis for the other methods; the remaining approaches cannot discover any meaningful information.
By using the proposed method, we are able to detect weak signals and reveal clear groupings in the patterns of association between explanatory variables and responses. The results also show that our method performs well in variable selection, effect-size estimation and response prediction.
5 Real Genome Data Experiments
Having shown the capacity of TgSLMM in recovering explanatory variables on synthetic data sets, we now demonstrate how TgSLMM can be used on real-world genome data to discover meaningful information. To evaluate the method, we focus on three practical data sets: Arabidopsis thaliana, Heterogeneous Stock Mice and Human Alzheimer's Disease. Since Arabidopsis thaliana and Heterogeneous Stock Mice have been studied for over a decade, the scientific community has reached a general consensus regarding these species [\citeauthoryearAtwell et al.2010]. With such an authoritative gold standard, we can plot the ROC curve and assess each model's performance using the area under it. However, since Alzheimer's disease is a very active area of research with no ground truth available, we list the genetic variables identified by our proposed model and verify the top genetic variables by searching the relevant literature.
5.1 Data Sets
Arabidopsis Thaliana
The Arabidopsis thaliana dataset we obtained is a collection of around 200 plants, each with around 215,000 genetic variables [\citeauthoryearAnastasio et al.2011]. We study the association between these genetic variables and a set of observed responses. These plants were collected from 27 different countries in Europe and Asia, so that geographic origin serves as a potential confounding factor. For example, different sunlight conditions in different regions may affect the observed responses of these plants. We tested the genetic associations between genetic variables with 44 different responses such as days to germination, days to flowering, etc.
Heterogeneous Stock Mice
The heterogeneous stock mice data set contains measurements from around 1700 mice, with 10,000 genetic variables [\citeauthoryearValdar et al.2006]. These mice were raised in cages across four generations over a two-year period. In total, the mice come from 85 distinct families. The obvious confounding variable is genetic inheritance due to family relationships. We studied the association between the genetic variables and a set of 27 response variables that could possibly be affected by inheritance. These 27 response variables fall into six different categories, relating to glucose level, insulin level, immunity, EPM, FN and OFT, respectively.
Human Alzheimer Disease
We use the lateonset Alzheimer’s Disease data provided by Harvard Brain Tissue Resource Center and Merck Research Laboratories [\citeauthoryearZhang et al.]. It consists of measurements from 540 patients with 500,000 genetic variables. We tested the association between these genetic variables and 28 responses corresponding to a patient’s disease status of Alzheimer’s disease.
5.2 Arabidopsis Thaliana
Since we have access to a validated gold standard for the Arabidopsis thaliana data set, we compared the alternative methods in terms of their ability to recover explanatory variables with a true association. Figure 13 illustrates the area under the ROC curve for each response variable of Arabidopsis thaliana. From these results, we conclude that TgSLMM equals or outperforms the other methods on all responses. We can also see that TgSLMM allows for dissecting individual explanatory-variable effects from global genetic effects driven by population structure.
Further, we applied cross-validation to evaluate the proposed model's ability in response prediction. Using the variables our method selects, prediction for Arabidopsis thaliana is better than using the original data set in 61.4% of cases, better than employing TreeLasso in 56.8%, better than applying LMM-Lasso in 79.5%, better than MCP and SCAD in 84.1%, better than Lasso in 66.0%, better than BOLT-LMM in 91.0%, and better than LTMLM in 56.7%. Our method only performs worse than LMM-Select in terms of prediction.
5.3 Heterogeneous Stock Mice
For the Heterogeneous Stock Mice data set, ground truth is also available, so we can evaluate the methods based on the area under their ROC curves, as in Figure 14. TgSLMM performs best on 22.2% of the traits and achieves the highest area over the whole data set, at 0.627. The second-best model is MCP, with an area of 0.604. The areas under the ROC curve of TreeLasso, Lasso and SCAD are 0.582, 0.591 and 0.590, respectively. The areas of the remaining models are all around 0.5, showing little ability to process such complex data sets. On the traits Glucose_75, Glucose_30, Glucose.DeadFromAnesthetic, Insulin.AUC, Insulin.Delta and FN.postWeight, our method TgSLMM performs best. The results are interesting: the left side of the figure mostly consists of traits regarding glucose and insulin in the mice, while the right side consists of traits related to immunity. This raises the interesting question of whether immune levels in stock mice are largely independent of family origin.
5.4 Human Alzheimer Disease
Finally, we proceed to the Human Alzheimer's Disease data set. We report the top 30 genetic variables our model discovered in Table 2 to foster further research.
Rank  SNP  Rank  SNP  Rank  SNP 

1  rs30882  11  rs10897029  21  rs2298955 
2  rs10027921  12  rs464906  22  rs1998933 
3  rs12641981  13  rs874404  23  rs17467420 
4  rs12506805  14  rs4421632  24  rs7629705 
5  rs684240  15  rs6086773  25  rs27162 
6  rs1684438  16  rs4882754  26  rs4740820 
7  rs6431428  17  rs2272445  27  rs1495805 
8  rs12509328  18  rs12410705  28  rs7916633 
9  rs7783626  19  rs4578488  29  rs13221797 
10  rs11848278  20  rs3887171  30  rs429536 
Due to space limitations, we only verify the top 10 reported genetic variables against prior research. One discovered genetic variable corresponds to the apoB gene, which can influence serum concentration in Alzheimer's disease [\citeauthoryearCaramelli et al.1999]. Another is associated with the ARHGAP10 gene (also called GRAF2), which affects the developmentally regulated expression of the GRAF proteins that promote lipid droplet clustering and growth and are enriched at lipid droplet junctions [\citeauthoryearHäsler et al.2014]. Another SNP is expressed by SYNPO2, which influences hypercholesterolemia or hypertension, for which a link to cognitive deficits has been identified [\citeauthoryearLoke, Wong, and Ong2017]. Another genetic variable is associated with AGAP1, which regulates membrane trafficking and actin remodeling [\citeauthoryearLiu et al.2006] and is reported to be associated with Alzheimer's disease. Another is coded by the gene Fam114a1; biologists have found that Fam114a1 is highly expressed in the developing neocortex [\citeauthoryearZhang et al.2014]. Another corresponds to the gene CNTNAP2, and the direct downregulation of CNTNAP2 by STOX1A is associated with Alzheimer's disease [\citeauthoryearvan Abel et al.2012].
6 Conclusion
In this paper, we aim to solve the challenging task of sparse variable selection when the data is not i.i.d. This situation often occurs in genomics, since different batches of medical data are collected from different sources for different purposes. Due to such confounding factors, naïvely applying traditional variable selection methods results in a wealth of false discoveries. In addition, existing methods ignore the convoluted interdependency among prolific responses; hence a joint analysis that can utilize such relatedness information in a heterogeneous data set is crucial. To address these problems, we proposed the tree-guided sparse linear mixed model for sparse variable selection in heterogeneous data sets. We extend recent sparse linear mixed models, which correct confounding factors and perform variable selection simultaneously, to further account for the relatedness between different responses. Through extensive experiments, we compare our method with state-of-the-art methods and analyze in depth how confounding factors in high-dimensional heterogeneous data sets influence a model's ability to identify active variables. We show that traditional methods easily fall into the trap of utilizing false information, while our proposed model outperforms the existing methods on both synthetic and real genome data sets.
References
 [\citeauthoryearAnastasio et al.2011] Anastasio, A. E.; Platt, A.; Horton, M.; Grotewold, E.; Scholl, R.; Borevitz, J. O.; Nordborg, M.; and Bergelson, J. 2011. Source verification of misidentified arabidopsis thaliana accessions. The Plant Journal 67(3):554–566.
 [\citeauthoryearAstle and Balding2009] Astle, W., and Balding, D. J. 2009. Population structure and cryptic relatedness in genetic association studies. Statistical Science 451–471.
 [\citeauthoryearAtwell et al.2010] Atwell, S.; Huang, Y. S.; Vilhjálmsson, B. J.; Willems, G.; Horton, M.; Li, Y.; Meng, D.; Platt, A.; Tarone, A. M.; Hu, T. T.; et al. 2010. Genomewide association study of 107 phenotypes in arabidopsis thaliana inbred lines. Nature 465(7298):627–631.
 [\citeauthoryearBondell, Krishna, and Ghosh2010] Bondell, H. D.; Krishna, A.; and Ghosh, S. K. 2010. Joint variable selection for fixed and random effects in linear mixedeffects models. Biometrics 66(4):1069–1077.
 [\citeauthoryearCai, Zhang, and He2010] Cai, D.; Zhang, C.; and He, X. 2010. Unsupervised feature selection for multicluster data. In Proceedings of the 16th ACM SIGKDD. ACM.
 [\citeauthoryearCaramelli et al.1999] Caramelli, P.; Nitrini, R.; Maranhao, R.; Lourenco, A.; Damasceno, M.; Vinagre, C.; and Caramelli, B. 1999. Increased apolipoprotein b serum concentration in alzheimer’s disease. Acta neurologica scandinavica 100(1):61–63.
 [\citeauthoryearChen and Wang2013] Chen, Q., and Wang, S. 2013. Variable selection for multiplyimputed data with application to dioxin exposure study. Statistics in medicine 32(21):3646–3659.
 [\citeauthoryearChen et al.2010] Chen, X.; Kim, S.; Lin, Q.; Carbonell, J. G.; and Xing, E. P. 2010. Graphstructured multitask regression and an efficient optimization method for general fused lasso. arXiv preprint arXiv:1005.3579.
 [\citeauthoryearChen et al.2012] Chen, X.; Lin, Q.; Kim, S.; Carbonell, J. G.; and Xing, E. P. 2012. Smoothing proximal gradient method for general structured sparse regression. The Annals of Applied Statistics 719–752.
 [\citeauthoryearDu and Shen2015] Du, L., and Shen, Y.D. 2015. Unsupervised feature selection with adaptive structure learning. In Proceedings of the 21th ACM SIGKDD. ACM.
 [\citeauthoryearFan and Li2001] Fan, J., and Li, R. 2001. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American statistical Association 96(456):1348–1360.
 [\citeauthoryearFan and Li2012] Fan, Y., and Li, R. 2012. Variable selection in linear mixed effects models. Annals of statistics 40(4):2043.
 [\citeauthoryearGoddard2009] Goddard, M. 2009. Genomic selection: prediction of accuracy and maximisation of long term response. Genetica 136(2):245–257.
 [\citeauthoryearGolub et al.1999] Golub, T. R.; Slonim, D. K.; Tamayo, P.; Huard, C.; Gaasenbeek, M.; Mesirov, J. P.; Coller, H.; Loh, M. L.; Downing, J. R.; Caligiuri, M. A.; et al. 1999. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. science.
 [\citeauthoryearHäsler et al.2014] Häsler, S. L.A.; Vallis, Y.; Jolin, H. E.; McKenzie, A. N.; and McMahon, H. T. 2014. Graf1a is a brainspecific protein that promotes lipid droplet clustering and growth, and is enriched at lipid droplet junctions. J Cell Sci 127(21):4602–4619.
 [\citeauthoryearHe and Lin2011] He, Q., and Lin, D.Y. 2011. A variable selection method for genomewide association studies. Bioinformatics 27(1):1–8.
 [\citeauthoryearHenderson1975] Henderson, C. R. 1975. Best linear unbiased estimation and prediction under a selection model. Biometrics 423–447.
 [\citeauthoryearKang et al.2010] Kang, H. M.; Sul, J. H.; Service, S. K.; Zaitlen, N. A.; Kong, S.y.; Freimer, N. B.; Sabatti, C.; Eskin, E.; et al. 2010. Variance component model to account for sample structure in genomewide association studies. Nature genetics 42(4):348–354.
 [\citeauthoryearKim and Xing2012] Kim, S., and Xing, E. P. 2012. Treeguided group lasso for multiresponse regression with structured sparsity, with an application to eqtl mapping. The Annals of Applied Statistics 1095–1117.
 [\citeauthoryearKolda and Bader2009] Kolda, T. G., and Bader, B. W. 2009. Tensor decompositions and applications. SIAM review 51(3):455–500.
 [\citeauthoryearLippert et al.2011] Lippert, C.; Listgarten, J.; Liu, Y.; Kadie, C. M.; Davidson, R. I.; and Heckerman, D. 2011. Fast linear mixed models for genomewide association studies. Nature methods 8(10):833–835.
 [\citeauthoryearListgarten et al.2012] Listgarten, J.; Lippert, C.; Kadie, C. M.; Davidson, R. I.; Eskin, E.; and Heckerman, D. 2012. Improved linear mixed models for genomewide association studies. Nature methods 9(6):525–526.
 [\citeauthoryearLiu et al.2006] Liu, Q. Y.; Sooknanan, R. R.; Malek, L. T.; RibeccoLutkiewicz, M.; Lei, J. X.; Shen, H.; Lach, B.; Walker, P. R.; Martin, J.; and Sikorska, M. 2006. Novel subtractive transcriptionbased amplification of mrna (star) method and its application in search of rare and differentially expressed genes in ad brains. BMC genomics.
 [\citeauthoryearLiu, Shao, and Fu2016] Liu, H.; Shao, M.; and Fu, Y. 2016. Consensus guided unsupervised feature selection.
 [\citeauthoryearLoh et al.2015] Loh, P.R.; Tucker, G.; BulikSullivan, B. K.; Vilhjalmsson, B. J.; Finucane, H. K.; Salem, R. M.; Chasman, D. I.; et al. 2015. Efficient bayesian mixedmodel analysis increases association power in large cohorts. Nature genetics.
 [\citeauthoryearLoke, Wong, and Ong2017] Loke, S.Y.; Wong, P. T.H.; and Ong, W.Y. 2017. Global gene expression changes in the prefrontal cortex of rabbits with hypercholesterolemia and/or hypertension. Neurochemistry international.
 [\citeauthoryearNie et al.2010] Nie, F.; Huang, H.; Cai, X.; and Ding, C. H. 2010. Efficient and robust feature selection via joint ℓ2,1-norms minimization. In Advances in neural information processing systems, 1813–1821.
 [\citeauthoryearPatterson, Price, and Reich2006] Patterson, N.; Price, A. L.; and Reich, D. 2006. Population structure and eigenanalysis. PLoS genet 2(12):e190.
 [\citeauthoryearPirinen et al.2013] Pirinen, M.; Donnelly, P.; Spencer, C. C.; et al. 2013. Efficient computation with a linear mixed model on large-scale data sets with applications to genetic studies. The Annals of Applied Statistics.
 [\citeauthoryearPrice et al.2006] Price, A. L.; Patterson, N. J.; Plenge, R. M.; Weinblatt, M. E.; Shadick, N. A.; and Reich, D. 2006. Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics 38(8):904–909.
 [\citeauthoryearRakitsch et al.2013] Rakitsch, B.; Lippert, C.; Stegle, O.; and Borgwardt, K. 2013. A lasso multi-marker mixed model for association mapping with population structure correction. Bioinformatics.
 [\citeauthoryearSegura et al.2012] Segura, V.; Vilhjálmsson, B. J.; Platt, A.; Korte, A.; Seren, Ü.; Long, Q.; and Nordborg, M. 2012. An efficient multi-locus mixed-model approach for genome-wide association studies in structured populations. Nature Genetics 44(7):825–830.
 [\citeauthoryearSørlie et al.2001] Sørlie, T.; Perou, C. M.; Tibshirani, R.; Aas, T.; Geisler, S.; Johnsen, H.; Hastie, T.; Eisen, M. B.; Van De Rijn, M.; Jeffrey, S. S.; et al. 2001. Gene expression patterns of breast carcinomas distinguish tumor subclasses with clinical implications. Proceedings of the National Academy of Sciences 98(19):10869–10874.
 [\citeauthoryearTan et al.2012] Tan, X.; Zhang, Y.; Tang, S.; Shao, J.; Wu, F.; and Zhuang, Y. 2012. Logistic tensor regression for classification. In International Conference on Intelligent Science and Intelligent Data Engineering, 573–581. Springer.
 [\citeauthoryearTibshirani1996] Tibshirani, R. 1996. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society.
 [\citeauthoryearValdar et al.2006] Valdar, W.; Solberg, L. C.; Gauguier, D.; Burnett, S.; Klenerman, P.; Cookson, W. O.; Taylor, M. S.; Rawlins, J. N. P.; Mott, R.; and Flint, J. 2006. Genome-wide genetic association of complex traits in heterogeneous stock mice. Nature Genetics 38(8):879–887.
 [\citeauthoryearvan Abel et al.2012] van Abel, D.; Michel, O.; Veerhuis, R.; Jacobs, M.; van Dijk, M.; and Oudejans, C. 2012. Direct downregulation of cntnap2 by stox1a is associated with alzheimer’s disease. Journal of Alzheimer’s Disease 31(4):793–800.
 [\citeauthoryearWang and Yang2016] Wang, H., and Yang, J. 2016. Multiple confounders correction with regularized linear mixed effect models, with application in biological processes. In Bioinformatics and Biomedicine (BIBM), 2016 IEEE International Conference on, 1561–1568. IEEE.
 [\citeauthoryearWang, Aragam, and Xing2017] Wang, H.; Aragam, B.; and Xing, E. P. 2017. Variable selection in heterogeneous datasets: A truncated-rank sparse linear mixed model with applications to genome-wide association studies. In Bioinformatics and Biomedicine (BIBM), 2017 IEEE International Conference on. IEEE.
 [\citeauthoryearWang et al.2017] Wang, H.; Liu, X.; Xiao, Y.; Xu, M.; and Xing, E. P. 2017. Multiplex confounding factor correction for genomic association mapping with squared sparse linear mixed model. In Bioinformatics and Biomedicine (BIBM), 2017 IEEE International Conference on. IEEE.
 [\citeauthoryearZhang and others2010] Zhang, C.-H., et al. 2010. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics 38(2):894–942.
 [\citeauthoryearZhang et al.2013] Zhang, B.; Gaiteri, C.; Bodea, L.-G.; Wang, Z.; McElwee, J.; Podtelezhnikov, A. A.; Zhang, C.; Xie, T.; Tran, L.; Dobrin, R.; et al. 2013. Integrated systems approach identifies genetic nodes and networks in late-onset Alzheimer's disease. Cell 153(3):707–720.
 [\citeauthoryearZhang et al.2014] Zhang, W.; Thevapriya, S.; Kim, P. J.; Yu, W.-P.; Je, H. S.; Tan, E. K.; and Zeng, L. 2014. Amyloid precursor protein regulates neurogenesis by antagonizing miR-574-5p in the developing cerebral cortex. Nature Communications 5:3330.
 [\citeauthoryearZhou et al.2013] Zhou, J.; Lu, Z.; Sun, J.; Yuan, L.; Wang, F.; and Ye, J. 2013. FeaFiner: biomarker identification from medical data through feature generalization and selection. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM.
 [\citeauthoryearZou and Li2008] Zou, H., and Li, R. 2008. One-step sparse estimates in nonconcave penalized likelihood models. The Annals of Statistics 36(4):1509.