A sparse conditional Gaussian graphical model for analysis of genetical genomics data\thanksrefT1

Jianxin Yin and Hongzhe Li
Department of Biostatistics and Epidemiology
University of Pennsylvania School of Medicine
Philadelphia, Pennsylvania 19104, USA
E-mail: yinj@mail.med.upenn.edu; hongzhe@upenn.edu
Received March 2010; revised June 2011.
Abstract

Genetical genomics experiments have now been routinely conducted to measure both the genetic markers and gene expression data on the same subjects. The gene expression levels are often treated as quantitative traits and are subject to standard genetic analysis in order to identify the gene expression quantitative loci (eQTL). However, the genetic architecture for many gene expressions may be complex, and poorly estimated genetic architecture may compromise the inferences of the dependency structures of the genes at the transcriptional level. In this paper we introduce a sparse conditional Gaussian graphical model for studying the conditional independent relationships among a set of gene expressions adjusting for possible genetic effects where the gene expressions are modeled with seemingly unrelated regressions. We present an efficient coordinate descent algorithm to obtain the penalized estimation of both the regression coefficients and the sparse concentration matrix. The corresponding graph can be used to determine the conditional independence among a group of genes while adjusting for shared genetic effects. Simulation experiments and asymptotic convergence rates and sparsistency are used to justify our proposed methods. By sparsistency, we mean the property that all parameters that are zero are actually estimated as zero with probability tending to one. We apply our methods to the analysis of a yeast eQTL data set and demonstrate that the conditional Gaussian graphical model leads to a more interpretable gene network than a standard Gaussian graphical model based on gene expression data alone.

\doi{10.1214/11-AOAS494}
Annals of Applied Statistics, Volume 5, Issue 4 (2011), pages 2630–2650

\runtitle{A sparse conditional Gaussian graphical model}

\thankstext{T1}{Supported in part by NIH R01ES009911 and R01CA127334.}

Jianxin Yin (yinj@mail.med.upenn.edu) and Hongzhe Li (hongzhe@upenn.edu)

Keywords: eQTL, Gaussian graphical model, regularization, genetic networks, seemingly unrelated regression.

1 Introduction

Genetical genomics experiments have now been routinely conducted to measure both the genetic variants and the gene expression data on the same subjects. Such data have provided important insights into gene expression regulations in both model organisms and humans [Brem and Kruglyak (2005), Schadt et al. (2003), Cheung and Spielman (2002)]. Gene expression levels are treated as quantitative traits and are subject to standard genetic analysis in order to identify the gene expression quantitative loci (eQTL). However, the genetic architecture for many gene expressions may be complex due to possible multiple genetic effects and gene–gene interactions, and poorly estimated genetic architecture may compromise the inferences of the dependency structures of genes at the transcriptional level [Neto et al. (2010)]. For a given gene, typical analysis of such eQTL data is to identify the genetic loci or single nucleotide polymorphisms (SNPs) that are linked or associated with the expression level of this gene. Depending on the locations of the eQTLs or the SNPs, they are often classified as distal trans-linked loci or proximal cis-linked loci [Kendziorski and Wang (2003), Kendziorski et al. (2006)]. Although such a single gene analysis can be effective in identifying the associated genetic variants, gene expressions of many genes are in fact highly correlated due to either shared genetic variants or other unmeasured common regulators. One important biological problem is to study the conditional independence among these genes at the expression level.

eQTL data provide important information about gene regulation and have been employed to infer regulatory relationships among genes [Zhu et al. (2004), Bing and Hoeschele (2005), Chen, Emmert-Streib and Storey (2007)]. Gene expression data have been used for inferring the genetic regulatory networks, for example, in the framework of Gaussian graphical models (GGM) [Schäfer and Strimmer (2005), Segal et al. (2005), Li and Gui (2006), Peng, Zhou and Zhu (2009)]. Graphical models use graphs to represent dependencies among stochastic variables. In particular, the GGM assumes that the multivariate vector follows a multivariate normal distribution with a particular structure of the inverse of the covariance matrix, called the concentration matrix. For such Gaussian graphical models, it is usually assumed that the patterns of variation in expression for a given gene can be predicted by those of a small subset of other genes. This assumption leads to sparsity (i.e., many zeros) in the concentration matrix and reduces the problem to well-known neighborhood selection or covariance selection problems [Dempster (1972), Meinshausen and Bühlmann (2006)]. In such a concentration graph modeling framework, the key idea is to use partial correlation as a measure of the independence of any two genes, rendering it straightforward to distinguish direct from indirect interactions. Due to the high-dimensionality of the problem, regularization methods have been developed to estimate the sparse concentration matrix, where a sparsity penalty function such as the $\ell_1$ penalty or the SCAD penalty is often used on the concentration matrix [Li and Gui (2006), Friedman, Hastie and Tibshirani (2008), Fan, Feng and Wu (2009)]. Among these methods, the coordinate descent algorithm of Friedman, Hastie and Tibshirani (2008), named glasso, provides a computationally efficient method for performing the Lasso-regularized estimation of the sparse concentration matrix.

Although the standard GGMs can be used to infer the conditional dependency structures using gene expression data alone from eQTL experiments, such models ignore the effects of genetic variants on the means of the expressions, which can compromise the estimate of the concentration matrix, leading to both false positive and false negative identifications of the edges of the Gaussian graphs. For example, if two genes are both regulated by the same genetic variants, at the gene expression level, there should not be any dependency of these two genes. However, without adjusting for the genetic effects on gene expressions, a link between these two genes is likely to be inferred. For eQTL data, we are interested in identifying the conditional dependency among a set of genes after removing the effects from shared regulations by the markers. Such a graph can truly reflect gene regulation at the expression level.

In this paper we introduce a sparse conditional Gaussian graphical model (cGGM) that simultaneously identifies the genetic variants associated with gene expressions and constructs a sparse Gaussian graphical model based on eQTL data. Different from the standard GGMs that assume constant means, the cGGM allows the means to depend on covariates or genetic markers. We consider a set of regressions of gene expression in which both the regression coefficients and the error concentration matrix have many zeros. Zeros in regression coefficients arise when each gene expression depends on only a very small set of genetic markers; zeros in the concentration matrix arise since the gene regulatory network, and therefore the corresponding concentration matrix, is sparse. This approach is similar in spirit to the seemingly unrelated regression (SUR) model of Zellner (1962), which improves the estimation efficiency of the effects of genetic variants on gene expression by considering the residual correlations of the gene expressions of many genes. In the analysis of eQTL data, we expect sparseness in both the regression coefficients and the concentration matrix. We propose a regularized estimation procedure to simultaneously select the SNPs associated with gene expression levels and estimate the sparse concentration matrix. Different from the original SUR model of Zellner (1962) that focuses on improving the estimation efficiency of the regression coefficients, we focus more on estimating the sparse concentration matrix adjusting for the effects of the SNPs on mean expression levels. We develop an efficient coordinate descent algorithm to obtain the penalized estimates and present the asymptotic results to justify our estimates.

In the next sections we first present the formulation of the cGGM for both the mean gene expression levels and the concentration matrix. We then present an efficient coordinate descent algorithm to perform the regularized estimation of the regression coefficients and concentration matrix. Simulation experiments and asymptotic theory are used to justify our proposed methods. We apply the methods to an analysis of a yeast eQTL data set. We conclude the paper with a brief discussion. All the proofs are given in the supplementary material [Yin and Li (2011)].

2 The sparse cGGM and penalized likelihood estimation

2.1 The sparse conditional Gaussian graphical model

Suppose we have $n$ independent observations from a population of a vector $(y, x)$, where $y$ is a $p \times 1$ random vector of gene expression levels of $p$ genes and $x$ is a $q \times 1$ vector of the numerically-coded SNP genotype data for $q$ SNPs. Furthermore, suppose that conditioning on $x$, $y$ follows a multivariate normal distribution,
\[
y \mid x \sim N_p(\Gamma x, \Sigma), \tag{1}
\]
where $\Gamma$ is a $p \times q$ coefficient matrix for the means and the covariance matrix $\Sigma$ does not depend on $x$. We are interested in both the effects of the SNPs on gene expressions and the conditional independence structure of $y$ adjusting for the effects of $x$, that is, the Gaussian graphical model for $y$ conditional on $x$. In applications of gene expression data analysis, we are more interested in the concentration matrix $\Omega = \Sigma^{-1}$ after the shared genetic regulators are accounted for. It has a nice interpretation in the Gaussian graphical model, as the $(j,k)$-element of $\Omega$ is directly related to the partial correlation between the $j$th and $k$th components of $y$ after their potential joint genetic regulators are adjusted for. In the Gaussian graphical model with undirected graph $G = (V, E)$, vertices correspond to components of the vector $y$ and edges indicate the conditional dependence among different components of $y$. The edge between $y_j$ and $y_k$ exists if and only if $\omega_{jk} \neq 0$, where $\omega_{jk}$ is the $(j,k)$-element of $\Omega$. We emphasize that in the graph representation of the random vector $y$, the nodes include only the genes; the markers are not part of the graph. We call this the sparse conditional Gaussian graphical model (cGGM) of the genes. Hence, of particular interest is to identify zero entries in the concentration matrix $\Omega$. Note that instead of assuming a constant mean as in the standard GGM, model (1) allows heterogeneous means $\Gamma x$.

In eQTL experiments, each row of $\Gamma$ and the concentration matrix $\Omega$ are expected to be sparse, and our goal is to simultaneously learn the Gaussian graphical model defined by the matrix $\Omega$ and identify the genetic variants associated with gene expressions based on $n$ independent observations of $(y, x)$. From now on, we use $y^{(i)}$ to denote the vector of gene expression levels of the $p$ genes and $x^{(i)}$ to denote the vector of the genotype codes of the $q$ SNPs for the $i$th observation unless otherwise specified. Finally, let $X = (x^{(1)}, \ldots, x^{(n)})^{T}$ be the $n \times q$ genotype matrix and $Y = (y^{(1)}, \ldots, y^{(n)})^{T}$ the $n \times p$ expression matrix.

2.2 Penalized likelihood estimation

Suppose that we have $n$ independent observations from the cGGM (1). Let $Y = (y^{(1)}, \ldots, y^{(n)})^{T}$, $X = (x^{(1)}, \ldots, x^{(n)})^{T}$ and $S_{\Gamma} = \frac{1}{n}(Y - X\Gamma^{T})^{T}(Y - X\Gamma^{T})$. Then the negative of the logarithm of the likelihood function corresponding to the cGGM model can be written, up to a constant, as
\[
\ell(\Theta) = \frac{n}{2}\bigl[\operatorname{tr}(S_{\Gamma}\Omega) - \log|\Omega|\bigr],
\]
where $\Theta = (\Gamma, \Omega)$ represents the associated parameters in the cGGM.

The Hessian matrix of the negative log-likelihood function is given explicitly in Proposition 1 in the supplementary material [Yin and Li (2011)], Section 3. In addition, $\ell(\Gamma, \Omega)$ is a biconvex function of $\Gamma$ and $\Omega$. In words, this means that for any fixed positive definite $\Omega$, $\ell$ is a convex function of $\Gamma$, and for any fixed $\Gamma$, $\ell$ is a convex function of $\Omega$. Without any penalization, the global minimizer of $\ell$ is given by the maximum likelihood estimates
\[
\hat\Gamma = Y^{T}X(X^{T}X)^{-1}, \qquad \hat\Omega = S_{\hat\Gamma}^{-1}.
\]

Under the penalized likelihood framework, the estimates of $\Gamma$ and $\Omega$ in model (1) are the solution to the following optimization problem:
\[
(\hat\Gamma, \hat\Omega) = \operatorname*{arg\,min}_{\Gamma,\,\Omega \succ 0}\ \frac{2}{n}\ell(\Gamma, \Omega) + \sum_{j,k} p_{\lambda_1}(|\gamma_{jk}|) + \sum_{j \neq k} p_{\lambda_2}(|\omega_{jk}|), \tag{2}
\]
where $p_{\lambda_1}(\cdot)$ and $p_{\lambda_2}(\cdot)$ denote generic penalty functions, $\gamma_{jk}$ is the $(j,k)$th element of the matrix $\Gamma$, $\omega_{jk}$ is the $(j,k)$th element of the matrix $\Omega$, and
\[
\frac{2}{n}\ell(\Gamma, \Omega) = \operatorname{tr}(S_{\Gamma}\Omega) - \log|\Omega|.
\]
Here $\lambda_1$ and $\lambda_2$ are the two tuning parameters that control the sparsity of the sparse cGGM. We consider in this paper both the Lasso or $\ell_1$ penalty function $p_{\lambda}(|\theta|) = \lambda|\theta|$ [Tibshirani (1996)] and the adaptive Lasso penalty function $p_{\lambda}(|\theta|) = \lambda|\theta|/|\tilde\theta|^{\gamma}$ for some $\gamma > 0$ and any consistent estimate $\tilde\theta$ of $\theta$ [Zou (2006)]. In this paper we use the maximum likelihood estimates as the initial estimates for the adaptive weights.
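As a concrete illustration of the criterion being minimized, the following sketch evaluates a Lasso-penalized cGGM objective. The scaling of the data-fit term (tr(S Omega) - log det Omega) and all function and variable names are our own choices for this sketch, not taken verbatim from the paper.

```python
import numpy as np

def cggm_objective(Y, X, Gamma, Omega, lam1, lam2):
    """Lasso-penalized cGGM objective (our scaling convention):
    tr(S_Gamma @ Omega) - log det(Omega)
      + lam1 * sum |gamma_jk| + lam2 * sum_{j != k} |omega_jk|,
    where S_Gamma = (Y - X Gamma^T)^T (Y - X Gamma^T) / n."""
    n = Y.shape[0]
    R = Y - X @ Gamma.T              # n x p residual matrix
    S = R.T @ R / n                  # p x p residual covariance S_Gamma
    _, logdet = np.linalg.slogdet(Omega)
    fit = np.trace(S @ Omega) - logdet
    pen_gamma = lam1 * np.abs(Gamma).sum()
    # penalize only the off-diagonal entries of Omega
    pen_omega = lam2 * (np.abs(Omega).sum() - np.abs(np.diag(Omega)).sum())
    return fit + pen_gamma + pen_omega
```

With `Gamma = 0` and `Omega = I`, this reduces to `tr(S)`, which makes a quick sanity check easy.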

2.3 An efficient coordinate descent algorithm for the sparse cGGM

We present an algorithm for the optimization problem (2) with the Lasso penalty function for both $\Gamma$ and $\Omega$. A similar algorithm can be developed for the adaptive Lasso penalty with simple modifications. Under this penalty function, the objective function is then
\[
f(\Gamma, \Omega) = \operatorname{tr}(S_{\Gamma}\Omega) - \log|\Omega| + \lambda_1\sum_{j,k}|\gamma_{jk}| + \lambda_2\sum_{j \neq k}|\omega_{jk}|. \tag{3}
\]

The subgradient equation for minimization of the objective function (3) with respect to $\Omega$ is
\[
S_{\Gamma} - \Omega^{-1} + \lambda_2 T = 0, \tag{4}
\]
where $T = (t_{jk})$ with $t_{jk} = \operatorname{sign}(\omega_{jk})$ if $\omega_{jk} \neq 0$, $t_{jk} \in [-1, 1]$ if $\omega_{jk} = 0$, and $t_{jj} = 0$. If $\Gamma$ is known, Banerjee, El Ghaoui and d'Aspremont (2008) and Friedman, Hastie and Tibshirani (2008) have cast the optimization problem (3) as a block-wise coordinate descent algorithm, which can be formulated as iterative Lasso problems. Before we proceed, we first introduce some notation to better represent the algorithm. Let $W$ be the current estimate of $\Sigma = \Omega^{-1}$. We partition $W$ and $S_{\Gamma}$ as
\[
W = \begin{pmatrix} W_{11} & w_{12}\\ w_{12}^{T} & w_{22} \end{pmatrix},
\qquad
S_{\Gamma} = \begin{pmatrix} S_{11} & s_{12}\\ s_{12}^{T} & s_{22} \end{pmatrix}.
\]
Banerjee, El Ghaoui and d'Aspremont (2008) show that the solution for $w_{12}$ satisfies
\[
w_{12} = \operatorname*{arg\,min}_{u}\bigl\{u^{T}W_{11}^{-1}u : \|u - s_{12}\|_{\infty} \leq \lambda_2\bigr\},
\]
which by convex duality is equivalent to solving the dual problem
\[
\min_{\beta}\ \tfrac{1}{2}\bigl\|W_{11}^{1/2}\beta - b\bigr\|^{2} + \lambda_2\|\beta\|_{1}, \tag{5}
\]
where $b = W_{11}^{-1/2}s_{12}$. Then the solution for $w_{12}$ can be obtained via the solution $\hat\beta$ of the Lasso problem (5) through the relation $w_{12} = W_{11}\hat\beta$. The estimate of $\Omega$ can also be updated in this block-wise manner very efficiently through the relations $\hat\omega_{22} = 1/(w_{22} - w_{12}^{T}W_{11}^{-1}w_{12})$ and $\hat\omega_{12} = -\hat\omega_{22}W_{11}^{-1}w_{12}$ [Friedman, Hastie and Tibshirani (2008)].

After we finish an updating cycle for $\Omega$, we can proceed to update the estimate of $\Gamma$. Since the objective function of our penalized log-likelihood is quadratic in $\Gamma$ given $\Omega$, we can use a direct coordinate descent algorithm to get the penalized estimate of $\Gamma$. For the $(k, l)$th entry of $\Gamma$, $\gamma_{kl}$, note that $\partial\Gamma/\partial\gamma_{kl} = e_{k}e_{l}^{T}$, where $e_{k}$ and $e_{l}$ are the corresponding basis vectors of dimensions $p$ and $q$. So the derivative of the penalized log-likelihood function (3) with respect to $\gamma_{kl}$ is
\[
\frac{\partial f}{\partial \gamma_{kl}} = -\frac{2}{n}\sum_{i=1}^{n} x_{il}\bigl[\Omega\bigl(y^{(i)} - \Gamma x^{(i)}\bigr)\bigr]_{k} + \lambda_1\operatorname{sign}(\gamma_{kl}), \tag{6}
\]
where the function $\operatorname{sign}(\cdot)$ is defined as $\operatorname{sign}(t) = 1$ for $t > 0$, $\operatorname{sign}(t) = -1$ for $t < 0$, and $\operatorname{sign}(0) \in [-1, 1]$.
Setting equation (6) to zero, we get the updating formula for $\gamma_{kl}$:
\[
\hat\gamma_{kl} = \frac{S\bigl(\sum_{i=1}^{n} x_{il}\bigl[\hat\Omega\bigl(y^{(i)} - \hat\Gamma^{(-kl)}x^{(i)}\bigr)\bigr]_{k},\ n\lambda_1/2\bigr)}{\hat\omega_{kk}\sum_{i=1}^{n}x_{il}^{2}}, \tag{7}
\]
where $S(z, t) = \operatorname{sign}(z)(|z| - t)_{+}$ is the soft-thresholding operator, $\hat\Gamma^{(-kl)}$ equals $\hat\Gamma$ with its $(k, l)$th entry set to zero, and $\hat\Gamma$ and $\hat\Omega$ are the estimates from the last step of the iteration.
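The scalar operation at the heart of such coordinate updates can be sketched as follows; `coordinate_update` solves the one-dimensional Lasso problem that arises when all other coordinates are held fixed. The names and the exact quadratic form are our own illustration.

```python
def soft_threshold(z, t):
    """S(z, t) = sign(z) * max(|z| - t, 0): the soft-thresholding operator,
    which solves min_g 0.5*(g - z)^2 + t*|g|."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def coordinate_update(a, z, t):
    """Minimizer of the one-dimensional problem 0.5*a*g^2 - z*g + t*|g|
    for a > 0, namely g = S(z, t) / a. In a Gamma-step of the cGGM, a plays
    the role of omega_kk * sum_i x_il^2 and z collects the partial residuals."""
    return soft_threshold(z, t) / a
```

For example, a quadratic with curvature 2 and linear coefficient 3 under penalty 1 is minimized at S(3, 1)/2 = 1.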

Taking these two updating steps together, we have the following coordinate descent-based regularization algorithm to fit the sparse cGGM:

The Coordinate Descent Algorithm for the sparse cGGM. {longlist}[(1)]

(1) Start with initial estimates $\hat\Gamma$ and $\hat\Omega$, for example, the unpenalized estimates $\hat\Gamma = Y^{T}X(X^{T}X)^{-1}$ and $\hat\Omega = S_{\hat\Gamma}^{-1}$ whenever the required inverses exist.

(2) For each $j = 1, \ldots, p$, solve the Lasso problem (5) under the current estimate of $\Gamma$. Fill in the corresponding row and column of $W$ using $w_{12} = W_{11}\hat\beta$. Update $\hat\Omega$.

(3) For each $k = 1, \ldots, p$ and $l = 1, \ldots, q$, update each entry of $\hat\Gamma$ using the formula (7), under the current estimate of $\Omega$.

(4) Repeat step (2) and step (3) until convergence.

(5) Output the estimates $\hat\Gamma$, $\hat\Omega$ and $\hat\Sigma = \hat\Omega^{-1}$.

The adaptive version of the algorithm can be derived in the same steps with adaptive penalty parameters and is omitted here. Note that when no genetic markers are included in the model, this algorithm simply reduces to the glasso or the adaptive glasso (aglasso) algorithm of Friedman, Hastie and Tibshirani (2008). A similar algorithm was used in Rothman, Levina and Zhu (2010) for sparse multivariate regressions. Proposition 2 in the supplementary material [Yin and Li (2011)] proves that the above iterative algorithm for minimizing $f$ with respect to $\Gamma$ and $\Omega$ converges to a stationary point of $f$.

While the iterative algorithm reaches a stationary point of the objective function, it is not guaranteed to reach the global minimum: the objective function of the optimization problem (2) is not jointly convex in $(\Gamma, \Omega)$, although it is convex in either $\Gamma$ or $\Omega$ with the other fixed. There are potentially many stationary points due to the high-dimensional nature of the parameter space. We also note a few straightforward properties of the iterative procedure, namely, that each iteration monotonically decreases the penalized negative log-likelihood and that the order of minimization is unimportant. Finally, the computational complexity of this algorithm is that of the coordinate descent cycle over the entries of $\Gamma$ plus the complexity of the glasso.
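The alternating structure of the fitting procedure can be sketched compactly as follows. This is a deliberately simplified illustration: the glasso inner solver for $\Omega$ is replaced by a plain (ridged) inverse of the residual covariance, so only the $\Gamma$ step carries a sparsity penalty, and all function and variable names are our own.

```python
import numpy as np

def fit_cggm_sketch(Y, X, lam1, n_iter=50):
    """Alternating-minimization skeleton for a cGGM-style fit.
    Gamma step: coordinate-wise soft-thresholded updates given Omega, for an
    objective of the form tr(S_Gamma Omega) - log det(Omega) + lam1*||Gamma||_1.
    Omega step: simplified to a plain inverse of the ridged residual
    covariance, standing in for the glasso inner solver."""
    n, p = Y.shape
    q = X.shape[1]
    Gamma = np.zeros((p, q))
    Omega = np.eye(p)
    xsq = (X ** 2).sum(axis=0)                 # sum_i x_il^2 for each column l
    for _ in range(n_iter):
        # --- Gamma step: cycle over all coordinates (k, l) ---
        for k in range(p):
            for l in range(q):
                R = Y - X @ Gamma.T            # current n x p residuals
                # z = sum_i x_il * [Omega r_i]_k with the (k, l) term added back
                z = X[:, l] @ (R @ Omega[:, k]) + Omega[k, k] * xsq[l] * Gamma[k, l]
                thresh = 0.5 * n * lam1
                Gamma[k, l] = np.sign(z) * max(abs(z) - thresh, 0.0) / (Omega[k, k] * xsq[l])
        # --- Omega step (simplified): invert the ridged residual covariance ---
        R = Y - X @ Gamma.T
        S = R.T @ R / n + 1e-6 * np.eye(p)     # small ridge for invertibility
        Omega = np.linalg.inv(S)
    return Gamma, Omega
```

On noise-free data the $\Gamma$ step alone recovers the regression coefficients up to the small soft-thresholding bias, which makes the skeleton easy to sanity-check.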

2.4 Tuning parameter selection

The tuning parameters $\lambda_1$ and $\lambda_2$ in the penalized likelihood formulation (2) determine the sparsity of the cGGM and have to be tuned. Since we focus on estimating the sparse precision matrix and the sparse regression coefficients, we use the Bayesian information criterion (BIC) to choose these two parameters. The BIC is defined as
\[
\mathrm{BIC}(\lambda_1, \lambda_2) = n\bigl[\operatorname{tr}(S_{\hat\Gamma}\hat\Omega) - \log|\hat\Omega|\bigr] + \log n \cdot \Bigl(p + \frac{e_{\hat\Omega}}{2} + e_{\hat\Gamma}\Bigr),
\]
where $p$ is the dimension of $y$, $e_{\hat\Omega}$ is the number of nonzero off-diagonal elements of $\hat\Omega$ and $e_{\hat\Gamma}$ is the number of nonzero elements of $\hat\Gamma$. The BIC has been shown to perform well for selecting the tuning parameter of the penalized likelihood estimator [Wang, Li and Tsai (2007)] and has been applied for tuning parameter selection for GGMs [Peng, Zhou and Zhu (2009)].
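A minimal sketch of how such a BIC could be computed for a fitted model follows. The degrees-of-freedom convention (the $p$ diagonal entries of Omega, half of its nonzero off-diagonal entries since Omega is symmetric, and the nonzero entries of Gamma) is our assumption, and the names are ours.

```python
import numpy as np

def bic_cggm(Y, X, Gamma, Omega):
    """BIC for a fitted cGGM: -2 * log-likelihood (up to constants)
    plus log(n) times an assumed degrees-of-freedom count."""
    n, p = Y.shape
    R = Y - X @ Gamma.T
    S = R.T @ R / n                       # residual covariance S_Gamma
    _, logdet = np.linalg.slogdet(Omega)
    neg2loglik = n * (np.trace(S @ Omega) - logdet)
    # df: p diagonal entries + half the nonzero off-diagonals + nonzero Gammas
    df = p + (np.count_nonzero(Omega) - p) / 2 + np.count_nonzero(Gamma)
    return neg2loglik + np.log(n) * df
```

In practice one would evaluate this over a grid of $(\lambda_1, \lambda_2)$ values and keep the minimizer.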

3 Theoretical properties

Sections 4 and 5 in the supplementary material [Yin and Li (2011)] state and prove theoretical properties of the proposed penalized estimates of the sparse cGGM: their asymptotic distribution and oracle properties when $p$ and $q$ are fixed as $n \to \infty$, and the convergence rates and sparsistency of the estimators when $p$ and $q$ diverge with $n$. By sparsistency, we mean the property that all parameters that are zero are actually estimated as zero with probability tending to one [Lam and Fan (2009)].

We observe that the asymptotic bias of $\hat\Omega$ is of the same order as in Lam and Fan (2009) for sparse GGMs, up to an additional logarithmic factor, and it goes to zero provided the tuning parameters shrink at an appropriate rate. The total squared error of $\hat\Omega$ is at least of the order of the number of nonzero elements divided by $n$, since each of the nonzero elements can be estimated at the parametric rate; the price we pay for high-dimensionality is a logarithmic factor. The estimate $\hat\Gamma$ is consistent under an analogous rate condition on $\lambda_1$.

4 Monte Carlo simulations

In this section we present results from Monte Carlo simulations to examine the performance of the proposed estimates and to compare them with the glasso procedure for estimating the Gaussian graphical model using only the gene expression data. We also compare the cGGM with a modified version of the neighborhood selection procedure of Meinshausen and Bühlmann (2006), where each gene is regressed on the other genes and also the genetic markers using the Lasso regression, and a link is defined between genes $j$ and $k$ if gene $k$ is selected for gene $j$ and gene $j$ is also selected for gene $k$. We call this procedure the multiple Lasso (mLasso). Note that the mLasso does not provide an estimate of the concentration matrix. For the adaptive procedures, the MLEs of both the regression coefficients and the concentration matrix were used to construct the weights. For each simulated data set, we chose the tuning parameters $\lambda_1$ and $\lambda_2$ based on the BIC.

To compare the performance of different estimators for the concentration matrix, we used the quadratic loss function
\[
L(\Omega, \hat\Omega) = \operatorname{tr}\bigl[(\Omega^{-1}\hat\Omega - I)^{2}\bigr],
\]
where $\hat\Omega$ is an estimate of the true concentration matrix $\Omega$. We also compared several norms of $\Delta = \hat\Omega - \Omega$, the difference between the true concentration matrix and its estimate: the operator or spectral norm, the element-wise $\ell_1$ norm, the matrix $\ell_1$ norm, and the Frobenius norm, which is the square root of the sum of the squares of the entries of the matrix. In order to compare how different methods recover the true graphical structures, we considered the Hamming distance between the estimated and the true concentration matrices, defined as
\[
\mathrm{DIST} = \sum_{j \neq k}\bigl|I(\hat\omega_{jk} \neq 0) - I(\omega_{jk} \neq 0)\bigr|,
\]
where $\hat\omega_{jk}$ is the $(j,k)$th entry of $\hat\Omega$ and $I(\cdot)$ is the indicator function. Finally, we considered the specificity (SPE), sensitivity (SEN) and Matthews correlation coefficient (MCC) scores, which are defined as follows:
\[
\mathrm{SPE} = \frac{\mathrm{TN}}{\mathrm{TN}+\mathrm{FP}}, \qquad
\mathrm{SEN} = \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}},
\]
\[
\mathrm{MCC} = \frac{\mathrm{TP}\cdot\mathrm{TN}-\mathrm{FP}\cdot\mathrm{FN}}{\sqrt{(\mathrm{TP}+\mathrm{FP})(\mathrm{TP}+\mathrm{FN})(\mathrm{TN}+\mathrm{FP})(\mathrm{TN}+\mathrm{FN})}},
\]
where TP, TN, FP and FN are the numbers of true positives, true negatives, false positives and false negatives in identifying the nonzero elements of the concentration matrix. Here we consider a nonzero entry in a sparse concentration matrix as a "positive."
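The selection scores can be computed directly from the off-diagonal sparsity patterns of the true and estimated concentration matrices, as in this self-contained sketch (names are ours):

```python
def graph_metrics(true_omega, est_omega):
    """Specificity, sensitivity, and Matthews correlation coefficient for
    recovering the nonzero pattern of the off-diagonal entries; a nonzero
    entry counts as a 'positive'. Inputs are square nested lists/arrays."""
    tp = tn = fp = fn = 0
    p = len(true_omega)
    for j in range(p):
        for k in range(p):
            if j == k:
                continue                      # diagonal entries are ignored
            truth = true_omega[j][k] != 0
            est = est_omega[j][k] != 0
            if truth and est:
                tp += 1
            elif truth and not est:
                fn += 1
            elif not truth and est:
                fp += 1
            else:
                tn += 1
    spe = tn / (tn + fp) if tn + fp else 0.0
    sen = tp / (tp + fn) if tp + fn else 0.0
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return spe, sen, mcc
```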

4.1 Models for concentration matrix and generation of data

In the following simulations, we considered a general sparse concentration matrix, where we randomly generated a link (i.e., a nonzero off-diagonal element of the concentration matrix) between variables $j$ and $k$ with a small success probability. Similar to the simulation setups of Li and Gui (2006), Fan, Feng and Wu (2009) and Peng, Zhou and Zhu (2009), for each link, the corresponding entry of the concentration matrix is generated uniformly. Then, for each row, every entry except the diagonal one is divided by the sum of the absolute values of the off-diagonal entries multiplied by 1.5. Finally, the matrix is symmetrized and the diagonal entries are fixed at 1. To generate the coefficient matrix $\Gamma$, we first generated a sparse indicator matrix whose entries are nonzero with a small probability; for each nonzero indicator, the corresponding coefficient of $\Gamma$ is generated so that its absolute value is bounded away from zero.

After $\Omega$ and $\Gamma$ were generated, we generated the marker genotypes $x^{(i)}$ and, given $x^{(i)}$, generated $y^{(i)}$ from the multivariate normal distribution $N(\Gamma x^{(i)}, \Omega^{-1})$. For a given model and a given simulation, we generated a data set of $n$ i.i.d. random vectors $(y^{(i)}, x^{(i)})$. The simulations were repeated 50 times.
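The concentration-matrix generation described above can be sketched as follows. The uniform range for the link weights is our assumption (the exact range is not given here), and the rescaling by 1.5 times the absolute off-diagonal row sum guarantees diagonal dominance and hence positive definiteness.

```python
import numpy as np

def make_sparse_concentration(p, link_prob, rng):
    """Sparse, diagonally dominant concentration matrix in the spirit of the
    simulation setup: random links, uniform weights (range 0.5-1.0 with random
    sign is our assumption), each off-diagonal row rescaled by 1.5x its
    absolute row sum, then symmetrized with unit diagonal."""
    omega = np.zeros((p, p))
    for j in range(p):
        for k in range(j + 1, p):
            if rng.random() < link_prob:
                omega[j, k] = rng.uniform(0.5, 1.0) * rng.choice([-1, 1])
    omega = omega + omega.T
    for j in range(p):
        s = np.abs(omega[j]).sum()
        if s > 0:
            omega[j] /= 1.5 * s          # off-diagonal row sum becomes 2/3
    omega = (omega + omega.T) / 2.0      # re-symmetrize after row scaling
    np.fill_diagonal(omega, 1.0)
    return omega
```

Since every off-diagonal row sums to at most 2/3 in absolute value against a unit diagonal, the resulting matrix is strictly diagonally dominant and therefore positive definite.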

Estimation of $\Omega$ | Graph selection
Method LOSS DIST SPE SEN MCC
Model 1: , ,
cGGM 0.33 1.17 0.67 0.99 0.48 0.56
acGGM 0.31 1.17 0.66 0.99 0.42 0.50
glasso 0.69 1.89 1.12 0.97 0.24 0.21
aglasso 0.69 1.89 1.11 0.97 0.32 0.28
mLasso 0.99 0.38 0.48
Model 2: , ,
cGGM 0.37 1.30 0.72 0.98 0.69 0.66
acGGM 0.29 1.14 0.63 0.99 0.66 0.71
glasso 0.75 2.12 1.20 0.87 0.37 0.18
aglasso 0.74 2.11 1.19 0.87 0.49 0.25
mLasso 0.95 0.60 0.48
Model 3: , ,
cGGM 0.24 0.90 0.52 0.91 0.76 0.62
acGGM 0.22 0.87 0.49 0.94 0.72 0.65
glasso 0.65 1.99 1.12 0.43 0.73 0.12
aglasso 0.65 1.98 1.12 0.54 0.65 0.14
mLasso 0.84 0.68 0.44
Table 1: Comparison of the performances of the cGGM, adaptive cGGM (acGGM), graphical Lasso (glasso), adaptive graphical Lasso (aglasso) and a modified neighborhood selection procedure using multiple Lasso (mLasso) for models 1–3, in which the sample size exceeds the numbers of genes and markers, based on 50 replications, where $n$ is the sample size, $p$ is the number of genes and $q$ is the number of markers. For each measurement, the mean over the 50 replications is given. Simulation standard errors are given in the supplementary material [Yin and Li (2011)]

4.2 Simulation results when $n$ exceeds $p$ and $q$

We first consider the setting when the sample size $n$ is larger than the number of genes $p$ and the number of genetic markers $q$. In particular, the following three models were considered:

{longlist}

[Model 1:] , where , ;

[Model 2:] , where , ;

[Model 3:] , where , .

We present the simulation results in Table 1. Clearly, the cGGM provided better estimates (in terms of the defined LOSS function and the four metrics of "closeness" of the estimated and true matrices) of the concentration matrix than glasso for all three models considered in all measurements. This is expected since glasso assumes a constant mean of the multivariate vector, which is a misspecified model here. We also observed that the adaptive cGGM and adaptive glasso both resulted in better estimates of the concentration matrix, although the improvements were minimal. This may be due to the fact that when the number of genes is relatively large, the MLEs of the concentration matrix do not provide very informative weights in the penalty functions.

In terms of graph structure selection, we first observed that different values of the tuning parameter for the penalty on the mean parameters resulted in different identifications of the nonzero elements in the concentration matrix, indicating that the regression parameters in the means indeed had effects on estimating the concentration matrix. Table 1 shows that for all three models, the cGGM or the adaptive cGGM resulted in higher sensitivities, specificities and MCCs than the glasso or the adaptive glasso. We observed that glasso often resulted in much denser graphs than the real graphs. This is partially due to the fact that some of the links identified by glasso can be explained by shared common genetic variants. By assuming constant means, in order to compensate for the model misspecification, glasso tends to identify many nonzero elements in the concentration matrix and result in larger Hamming distance between the estimate and the true concentration matrix. The results indicate that by simultaneously considering the effects of the covariates on the means, we can reduce both false positives and false negatives in identifying the nonzero elements of the concentration matrix.

The modified neighborhood selection procedure using multiple Lasso accounts for the genetic effects in modeling the relationship among the genes. It performed better than glasso or adaptive glasso in graph structure selection, but worse than the cGGM or the adaptive cGGM. This procedure, however, did not provide an estimate of the concentration matrix.

4.3 Simulation results when $p$ exceeds $n$

In this section we consider the setting when $p$ is larger than $n$ and simulate data from the following three models, with the values of $n$, $p$ and $q$ specified as follows:

{longlist}

[Model 4:] , , ;

[Model 5:] , , ;

[Model 6:] , , .

Note that for all three models, the graph structure is very sparse due to the large number of genes considered.

Since in this setting we did not have consistent estimates of $\Gamma$ or $\Omega$, we did not consider the adaptive cGGM or adaptive glasso in our comparisons. Instead, we compared the performance of the cGGM, glasso and the modified neighborhood selection procedure using multiple Lasso in terms of estimation of the concentration matrix and graph structure selection. The performances over 50 replications are reported in Table 2 for the optimal tuning parameters chosen by the

Estimation of $\Omega$ | Graph selection
Method LOSS DIST SPE SEN MCC
Model 4: , ,
cGGM 0.59 1.81 0.97 2,414.28 1.00 0.31
glasso 0.71 2.86 1.31 23,746.98 0.98 0.08
mLasso 3,886.96 1.00 0.12
Model 5: , ,
cGGM 0.75 2.30 1.20 2,341.28 1.00 0.21
glasso 0.76 2.97 1.40 20,871.44 0.97 0.07
mLasso 23,750.04 0.96 0.61
Model 6: , ,
cGGM 0.44 1.55 0.77 2,044.52 1.00 0.05
glasso 0.69 2.72 1.22 9,258.92 0.95 0.03
mLasso 2,967.30 0.99 0.08
Table 2: Comparison of the performances of the cGGM, the graphical Lasso (glasso) and a modified neighborhood selection using multiple Lasso (mLasso) when $p$ is larger than $n$, based on 50 replications, where $n$ is the sample size, $p$ is the number of genes and $q$ is the number of markers. For each measurement, the mean over the 50 replications is given. Simulation standard errors are given in the supplementary material [Yin and Li (2011)]

BICs. For all three models, we observed much improved estimates of the concentration matrix from the proposed cGGM as reflected by both smaller loss functions and different norms of the difference between the true and estimated concentration matrices. The mLasso procedure did not provide estimates of the concentration matrix.

In terms of graph structure selection, since glasso does not adjust for potential effects of genetic markers on gene expressions, it resulted in many wrong identifications and much lower sensitivities and smaller MCCs than the cGGM. Compared to the modified neighborhood selection using multiple Lasso, estimates from the cGGM have a smaller Hamming distance and a larger MCC than mLasso. In general, we observed that when $p$ is larger than the sample size, the sensitivities of all three procedures are much lower than in the settings where the sample size is larger. For models 5 and 6, mLasso gave higher sensitivities but lower specificities than the cGGM or glasso. This indicates that recovering the graph structure in a high-dimensional setting is statistically difficult. However, the specificities are in general very high, agreeing with our theoretical sparsistency result for the estimates.

5 Analysis of yeast eQTL data

To demonstrate the proposed methods, we present results from the analysis of a data set generated by Brem and Kruglyak (2005). In this experiment, 112 yeast segregants, one from each tetrad, were grown from a cross involving parental strains BY4716 and wild isolate RM11-1a. RNA was isolated and cDNA was hybridized to microarrays in the presence of the same BY reference material. Each array assayed 6,216 yeast genes. Genotyping was performed using GeneChip Yeast Genome S98 microarrays on all 112 segregants. These 112 segregants were individually genotyped at 2,956 marker positions. Since many of these markers are in high linkage disequilibrium, we combined the markers into 585 blocks where the markers within a block differed by at most 1 sample. For each block, we chose the marker that had the least number of missing values as the representative marker.
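The marker-blocking step described above can be approximated with a greedy one-pass grouping; this is our own sketch of the stated rule (markers within a block differ in at most one sample; the representative has the fewest missing values), not the authors' exact procedure.

```python
def block_markers(genotypes, missing):
    """Greedy grouping of markers whose genotype vectors differ from the
    first marker of a block in at most one sample; within each block the
    marker with the fewest missing values is kept as the representative.
    'genotypes' is a list of equal-length genotype vectors; 'missing'
    gives the per-marker count of missing values."""
    blocks = []
    for idx, g in enumerate(genotypes):
        placed = False
        for block in blocks:
            ref = genotypes[block[0]]
            # count positions where this marker disagrees with the block's anchor
            if sum(a != b for a, b in zip(g, ref)) <= 1:
                block.append(idx)
                placed = True
                break
        if not placed:
            blocks.append([idx])
    reps = [min(block, key=lambda i: missing[i]) for block in blocks]
    return blocks, reps
```

A single greedy pass does not guarantee an optimal partition, but it captures the spirit of collapsing near-identical markers in high linkage disequilibrium.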

Due to the small sample size and limited perturbation to the biological system, it is not possible to construct a gene network for all 6,216 genes. We instead focused our analysis on two sets of genes that are biologically relevant: the first is a set of 54 genes that belong to the yeast MAPK signaling pathway provided by the KEGG database [Kanehisa et al. (2010)]; the second is a set of 1,207 genes of the protein–protein interaction (PPI) network obtained from a previously compiled set by Steffen et al. (2002) combined with protein physical interactions deposited in the Munich Information center for Protein Sequences (MIPS). Since the available eQTL data are observational, given the limited sample size and the limited perturbation to the cells from the genotypes, it is statistically not feasible to learn directed graph structures among these genes. Instead, for each of these two data sets, our goal is to construct a conditional independence network among these genes at the expression level based on the sparse conditional Gaussian graphical model, in order to remove false links by conditioning on the genetic marker information. Such graphs can be interpreted as a projection of a true signaling or protein interaction network into the gene space [Brazhnik, de la Fuente and Mendes (2002), Kontos (2009)].

5.1 Results from the cGGM analysis of 54 MAPK pathway genes

The yeast genome encodes multiple MAP kinase orthologs, where Fus3 mediates cellular response to peptide pheromones, Kss1 permits adjustment to nutrient-limiting conditions and Hog1 is necessary for survival under hyperosmotic conditions. Last, Slt2/Mpk1 is required for repair of injuries to the cell wall. A schematic plot of this pathway is presented in Figure 1. Note that this graph only presents our current knowledge about the MAPK signaling pathway. Since several genes such as Ste20, Ste12 and Ste7 appear at multiple nodes, this graph cannot be treated as the “true graph” for evaluating or comparing different methods. In addition, although some of the links are directed, this graph does not meet the statistical definition of either a directed or undirected graph. Rather than trying to recover the MAPK pathway structure, we chose this set of 54 genes on the MAPK pathway to make sure that these genes are potentially dependent at the expression level.

Figure 1: The yeast MAPK pathway from the KEGG database http://www.genome.jp/kegg/pathway/sce/sce04011.html.

For each of the 54 genes, we first performed a linear regression analysis for the gene expression level using each of the 585 markers and selected those markers with a $p$-value of 0.01 or smaller. We observed a total of 839 such associations between the 585 markers and 54 genes, indicating strong effects of genetic variants on expression levels. We further selected 188 markers associated with the gene expression levels of at least two out of the 54 genes, resulting in a total of 702 such associations. In addition, many genes are associated with multiple markers [see Figure 2(a)]. This indicates that many pairs of genes are regulated by some common genetic variants, which, when not taken into account, can lead to false links of genes at the expression level.

Figure 2: Analysis of the yeast MAPK pathway. (a) Associations between the 188 markers and 54 genes in the MAPK pathway based on simple regression analysis. Black indicates significant association at a p-value cutoff of 0.01. (b) The undirected graph of 43 genes constructed based on the cGGM.

We applied our proposed cGGM to this set of 54 genes and 188 markers and used the BIC to choose the tuning parameters. With the BIC-selected tuning parameters, the cGGM procedure selected 188 nonzero off-diagonal elements of the concentration matrix and therefore 94 links among these 54 genes. In addition, under the cGGM, 677 of the regression coefficients are nonzero, indicating that the SNPs have important effects on the gene expression levels of these genes. The numbers of SNPs associated with the gene expressions range from 0 to 17, with a mean of 4. Figure 2(b) shows the undirected graph for the 43 linked genes on the MAPK pathway based on the estimated sparse concentration matrix from the cGGM. This undirected graph can indeed recover many of the links among the 54 genes on this pathway. For example, the kinase Fus3 is linked to its downstream genes Dig1, Ste12 and Fus1. The cGGM also recovered most of the links to Ste20, including Bni1, Ste11, Ste12, Ste5 and Ste7. Ste20 is also linked to Cdc42 through Bni1. Clearly, most of the links in the upper part of the MAPK signaling pathway, which mediates the cellular response to peptide pheromones, were recovered by the cGGM. Similarly, the kinase Slt2/Mpk1 is linked to its downstream genes Swi4 and Rlm1. Three other genes on this second layer of the pathway, Fks1, Rho1 and Bck1, are also closely linked. These linked genes are related to the cell response to hypotonic shock.
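The BIC-based choice of tuning parameters amounts to evaluating each candidate fit's Gaussian log-likelihood penalized by the number of free parameters. A generic sketch for the graphical part is below; the degrees-of-freedom convention and function names are our assumptions, not necessarily the exact criterion used for the cGGM.

```python
# Generic BIC for a sparse Gaussian graphical fit, used to compare fits over
# a grid of tuning parameters (a sketch; df counting conventions vary).
import numpy as np

def ggm_bic(S, Theta, n):
    """S: sample covariance; Theta: estimated precision matrix; n: sample size."""
    sign, logdet = np.linalg.slogdet(Theta)
    loglik = 0.5 * n * (logdet - np.trace(S @ Theta))
    # degrees of freedom: nonzero entries on and above the diagonal
    k = np.count_nonzero(np.triu(Theta))
    return -2.0 * loglik + np.log(n) * k

def pick_by_bic(S, n, fits):
    """fits: dict mapping a tuning-parameter label -> estimated Theta."""
    return min(fits, key=lambda lam: ggm_bic(S, fits[lam], n))
```

A sparser precision matrix pays a smaller log(n)-per-parameter penalty, so the BIC favors it unless the denser fit improves the likelihood enough to compensate.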

As a comparison, we applied the glasso to the gene expression data of these 54 genes without adjusting for the effects of genetic markers on gene expressions and summarized the results in Table 3. The optimal tuning parameter was selected based on the BIC, which resulted in the selection of 341 edges among the 54 genes (i.e., 682 nonzero off-diagonal elements of the concentration matrix), including all 94 links selected by the cGGM. The difference between the estimated graph structures of the cGGM and the glasso can be at least partially explained by genetic variants associated with the expression levels of multiple genes. Among the 247 edges that were identified only by the glasso, 41 pairs of genes were associated with at least one common genetic variant. The cGGM adjusted for these genetic effects on gene expression and therefore did not identify these edges at the expression level. Another reason is that the glasso assumes a constant mean vector for gene expression, which clearly misspecifies the model and leads to the selection of more links.
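The unadjusted comparison can be reproduced in spirit with an off-the-shelf graphical lasso; here scikit-learn's GraphicalLasso serves as a stand-in for the glasso of Friedman, Hastie and Tibshirani (2008), with illustrative variable names and an arbitrary tuning parameter rather than the BIC-selected one.

```python
# Fitting the graphical lasso on expression data alone (no marker adjustment);
# an edge (i, j) is reported when the (i, j) precision entry is nonzero.
import numpy as np
from sklearn.covariance import GraphicalLasso

def glasso_edges(expr, alpha=0.1):
    model = GraphicalLasso(alpha=alpha, max_iter=200).fit(expr)
    Theta = model.precision_
    G = Theta.shape[0]
    return [(i, j) for i in range(G) for j in range(i + 1, G)
            if abs(Theta[i, j]) > 1e-8]
```

Because the mean is treated as constant, any gene pair driven by a shared eQTL tends to show a nonzero precision entry here, which is exactly the kind of link the cGGM removes.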

              cGGM          mLasso
cGGM          —             0 (0)
mLasso        10 (218)      —
glasso        41 (1,569)    2 (66)
Table 3: Comparison of the links identified by the cGGM, the modified neighborhood selection using multiple Lasso (mLasso) and the graphical Lasso (glasso) for the genes of the MAPK pathway and the genes of the protein–protein interaction (PPI) network. Each entry gives the number of links that were identified by the procedure indexed by row but not by the procedure indexed by column due to sharing of at least one common genetic marker, in the format MAPK pathway (PPI network).

We also compared with the graph identified by the modified neighborhood selection procedure using multiple Lasso (mLasso). Specifically, each gene was regressed on all other genes and the 188 markers using the Lasso. Again, the BIC was used to select the tuning parameter. This procedure identified a total of 45 links among the 54 genes. In addition, a total of 33 associations between the SNPs and gene expressions were identified. Among these 45 links, 36 were identified by the cGGM and all 45 were identified by the glasso.
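The gene-wise regressions of the modified neighborhood selection can be sketched as below. The "or" rule for combining the two directions of each pair follows Meinshausen and Bühlmann (2006); the paper's exact combination rule and BIC tuning are not shown, so treat both as assumptions.

```python
# Sketch of the modified neighborhood selection (mLasso): each gene is
# regressed on all other genes plus the markers with the Lasso, and a
# gene-gene edge is kept when either regression selects the partner.
import numpy as np
from sklearn.linear_model import Lasso

def mlasso_edges(expr, geno, alpha=0.05):
    n, G = expr.shape
    selected = np.zeros((G, G), dtype=bool)
    for g in range(G):
        others = [j for j in range(G) if j != g]
        X = np.hstack([expr[:, others], geno])  # other genes, then markers
        coef = Lasso(alpha=alpha).fit(X, expr[:, g]).coef_
        for idx, j in enumerate(others):
            selected[g, j] = coef[idx] != 0
    return [(i, j) for i in range(G) for j in range(i + 1, G)
            if selected[i, j] or selected[j, i]]
```

Unlike the glasso, each regression here conditions on the markers, so edges driven purely by a shared eQTL tend to drop out, mirroring the sparser graphs reported for the mLasso.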

MAPK pathway PPI network
Method Min Max Mean Median Min Max Mean Median
cGGM 0 0 57
glasso 5 5 60
mLasso 0 0 12
Table 4: Summary of degrees of the graphs constructed by three different methods: cGGM, the graphical Lasso (glasso) and a modified neighborhood selection using multiple Lasso (mLasso), for the genes of the MAPK pathway and genes of the protein–protein interaction (PPI) network

Table 4 shows a summary of the degrees of the graphs estimated by these three procedures. It is clear that the glasso resulted in a much denser graph than the neighborhood selection and the cGGM, while the mLasso tends to select the fewest links.

5.2 Results from the cGGM analysis of 1,207 genes on yeast PPI network

We next applied the cGGM to the yeast protein–protein interaction network data obtained from a previously compiled set by Steffen et al. (2002) combined with protein physical interactions deposited in MIPS. We further selected the 1,207 genes with expression variance greater than 0.05. Based on the most recent yeast protein–protein interaction database BioGRID [Stark et al. (2011)], there are a total of 7,619 links among these 1,207 genes. The tuning parameters chosen by the BIC resulted in the selection of 12,036 links out of a total of 727,821 possible links, a sparsity of 1.65%. Results from comparisons with the two other procedures are shown in Table 3. The glasso without adjusting for the effects of genetic markers resulted in a total of 18,987 edges with its optimal tuning parameter chosen by the BIC. There were 9,854 links selected by both procedures. Again, the glasso selected far more links than the cGGM; among the links identified by the glasso only, 1,569 pairs are associated with at least one common genetic marker (see Table 3), further suggesting that some of the links identified from gene expression data alone can be due to shared common genetic variants.

The modified neighborhood selection procedure mLasso identified only 1,917 edges, out of which 1,750 were identified by the cGGM and 1,916 by the glasso. A common set of 1,749 links was identified by all three procedures. A summary of the degrees of the graphs estimated by these three procedures is given in Table 4. We observe that the glasso gave a much denser graph than the other two procedures, agreeing with what we observed in the simulation studies.

If we treat the PPI network of the BioGRID database as the true network among these genes, the true positive rates of the cGGM, the glasso and the modified neighborhood selection procedure were 0.067, 0.071 and 0.019, respectively, and the false positive rates were 0.016, 0.026 and 0.0025, respectively. The MCC scores of the cGGM, the glasso and the modified neighborhood selection procedure were 0.041, 0.030 and 0.033, respectively. One reason for the low true positive rates is that many of the protein–protein interactions are not reflected at the gene expression level. Figure 3(a) shows the histogram of the correlations of

Figure 3: Histograms of marginal correlations for pairs of linked genes based on BioGRID (a) and linked genes identified by cGGM (b), glasso (c) and a modified neighborhood selection procedure (mLasso) (d).

genes that are linked in the BioGRID PPI network, indicating that many linked gene pairs have very small marginal correlations. Gaussian graphical models are not able to recover these links. Panels (b)–(d) of Figure 3 show the marginal correlations of the gene pairs identified by the cGGM, the glasso and the mLasso, clearly indicating that the linked genes identified by the cGGM have higher marginal correlations. In contrast, some linked genes identified by the glasso have quite small marginal correlations. Another reason is that the PPI network represents marginal pairwise interactions among the proteins rather than conditional interactions.
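The edge-recovery summaries reported above (true positive rate, false positive rate and MCC against the BioGRID reference) follow from a confusion matrix over all possible gene pairs; a sketch with illustrative edge sets:

```python
# Edge-recovery metrics for an estimated undirected graph against a
# reference network, counting over all n_genes * (n_genes - 1) / 2 pairs.
import math

def edge_metrics(pred, truth, n_genes):
    # frozensets make edges direction-free: (i, j) matches (j, i)
    pred, truth = set(map(frozenset, pred)), set(map(frozenset, truth))
    total = n_genes * (n_genes - 1) // 2
    tp = len(pred & truth)
    fp = len(pred - truth)
    fn = len(truth - pred)
    tn = total - tp - fp - fn
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return tpr, fpr, mcc
```

Because the pair universe is huge (727,821 pairs for the PPI set) while true links are few, the MCC is a more balanced single-number summary here than accuracy would be.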

6 Conclusions and discussion

We have presented a sparse conditional Gaussian graphical model for estimating the sparse gene expression network based on eQTL data in order to account for genetic effects on gene expressions. Since genetic variants are associated with the expression levels of many genes, it is important to account for such heterogeneity when estimating gene expression networks using Gaussian graphical models. We have demonstrated by simulation studies that the proposed sparse cGGM can estimate the underlying gene expression networks more accurately than the standard GGM. For the yeast eQTL data set we analyzed, the standard Gaussian graphical model without adjusting for possible genetic effects on gene expressions identified many potentially false links, resulting in very dense graphs that make the interpretation of the resulting networks difficult. Our proposed cGGM, on the other hand, resulted in a much sparser and biologically more interpretable network. We expect similarly good performance on data from other published sources, such as Schadt et al. (2003) and Cheung and Spielman (2002).

Due to the limits of gene expression data, one should not expect to completely recover the true signaling networks, since many dependencies among these genes can be observed only at the protein or metabolite level. In any global biochemical network, such as a signaling network or a protein interaction network, genes do not interact directly with other genes; instead, gene induction or repression occurs through the activation of certain proteins, which are themselves products of certain genes [Brazhnik, de la Fuente and Mendes (2002), Kontos (2009)]. Similarly, gene transcription can also be affected by protein–metabolite complexes. Despite these limitations of gene expression data, it is still useful to abstract away the actions of proteins and metabolites and represent genes acting on other genes in a gene network [Kontos (2009)]. This gene network is what we aim to learn with the proposed cGGM. As we observed from our analysis of the yeast eQTL data, such graphs or gene networks constructed from the cGGM can indeed explain the data and provide certain biological insights into gene interactions. Such graphs can be interpreted as a projection of the true signaling or protein interaction network into the gene space [Brazhnik, de la Fuente and Mendes (2002), Kontos (2009)].

We have focused in this paper on estimating the sparse conditional Gaussian graphical model for gene expression data by adjusting for the genetic effects on gene expressions. However, we expect that explicitly modeling the covariance structure among the gene expressions should also improve the identification of the genetic variants associated with the gene expressions [Rothman, Levina and Zhu (2010)]. This is in fact the original motivation of the SUR models proposed by Zellner (1962). It would be interesting to investigate theoretically how modeling the concentration matrix can lead to improvement in the estimation and identification of the genetic variants associated with the gene expression traits.

We used Gaussian graphical models to study the conditional independence among genes at the transcriptional level. Such undirected graphs do not provide information on causal dependency. Data from genetical genomics experiments have been proposed for constructing gene networks represented by directed causal graphs. For example, Liu, De La Feunte and Hoeschele (2008) and Bing and Hoeschele (2005) used structural equation modeling and a genetic algorithm to construct causal genetic networks among genetic loci and gene expressions. Chaibub Neto et al. (2010) developed an efficient Markov chain Monte Carlo algorithm for joint inference of the causal network and genetic architecture for correlated phenotypes. Although genetical genomics data can indeed provide an opportunity for inferring causal networks at the transcriptional level, these causal graphical model-based approaches can often handle only a small number of transcripts because the number of possible directed graphs is super-exponential in the number of genes considered [Chickering, Heckerman and Meek (2004)]. Regularization methods may provide alternative approaches to the joint modeling of genetic effects on gene expressions and causal graphs among genes at the expression level.

Acknowledgments

We thank the three reviewers and the Editor for many insightful comments that have greatly improved the presentation of this paper.

{supplement}

[id=suppA] \stitleSupplemental materials for “A sparse conditional Gaussian graphical model for analysis of genetical genomics data”
\slink[doi,text=10.1214/11-AOAS494SUPP]10.1214/11-AOAS494SUPP \slink[url]http://lib.stat.cmu.edu/aoas/494/supplement.pdf \sdatatype.pdf \sdescriptionThe online supplemental materials include the simulation standard errors of Tables 1 and 2, two propositions on the Hessian matrix of the likelihood function and the convergence of the algorithm, and the theoretical properties of the proposed penalized estimates of the sparse cGGM: its asymptotic distribution, the oracle properties when $p$ and $q$ are fixed as $n\to\infty$, and the convergence rates and sparsistency of the estimators when $p$ and $q$ diverge with $n$. All the proofs are also given in the supplemental materials.

References

  • Banerjee, O., El Ghaoui, L. and d'Aspremont, A. (2008). Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. J. Mach. Learn. Res. 9 485–516.
  • Bing, N. and Hoeschele, I. (2005). Genetical genomics analysis of a yeast segregant population for transcription network inference. Genetics 170 533–542.
  • Brazhnik, P., de la Fuente, A. and Mendes, P. (2002). Gene networks: How to put the function in genomics. Trends Biotechnol. 20 467–472.
  • Brem, R. B. and Kruglyak, L. (2005). The landscape of genetic complexity across 5,700 gene expression traits in yeast. Proc. Natl. Acad. Sci. USA 102 1572–1577.
  • Chaibub Neto, E., Keller, M. P., Attie, A. D. and Yandell, B. S. (2010). Causal graphical models in systems genetics: A unified framework for joint inference of causal network and genetic architecture for correlated phenotypes. Ann. Appl. Stat. 4 320–339.
  • Chen, L. S., Emmert-Streib, F. and Storey, J. D. (2007). Harnessing naturally randomized transcription to infer regulatory relationships among genes. Genome Biol. 8 R219.
  • Cheung, V. G. and Spielman, R. S. (2002). The genetics of variation in gene expression. Nat. Genet. 32 522–525.
  • Chickering, D. M., Heckerman, D. and Meek, C. (2004). Large-sample learning of Bayesian networks is NP-hard. J. Mach. Learn. Res. 5 1287–1330.
  • Dempster, A. P. (1972). Covariance selection. Biometrics 28 157–175.
  • Fan, J., Feng, Y. and Wu, Y. (2009). Network exploration via the adaptive lasso and SCAD penalties. Ann. Appl. Stat. 3 521–541.
  • Friedman, J., Hastie, T. and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9 432–441.
  • Kanehisa, M., Goto, S., Furumichi, M., Tanabe, M. and Hirakawa, M. (2010). KEGG for representation and analysis of molecular networks involving diseases and drugs. Nucleic Acids Res. 38 D355–D360.
  • Kendziorski, C. and Wang, P. (2003). A review of statistical methods for expression quantitative trait loci mapping. Mammalian Genome 17 509–517.
  • Kendziorski, C. M., Chen, M., Yuan, M., Lan, H. and Attie, A. D. (2006). Statistical methods for expression quantitative trait loci (eQTL) mapping. Biometrics 62 19–27.
  • Kontos, K. (2009). Gaussian graphical model selection for gene regulatory network reverse engineering and function prediction. Ph.D. dissertation, Univ. Libre de Bruxelles.
  • Lam, C. and Fan, J. (2009). Sparsistency and rates of convergence in large covariance matrix estimation. Ann. Statist. 37 4254–4278.
  • Li, H. and Gui, J. (2006). Gradient directed regularization for sparse Gaussian concentration graphs, with applications to inference of genetic networks. Biostatistics 7 302–317.
  • Liu, B., De La Feunte, A. and Hoeschele, I. (2008). Gene network inference via structural equation modeling in genetical genomics experiments. Genetics 178 1763–1776.
  • Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. Ann. Statist. 34 1436–1462.
  • Peng, J., Zhou, N. and Zhu, J. (2009). Partial correlation estimation by joint sparse regression models. J. Amer. Statist. Assoc. 104 735–746.
  • Rothman, A. J., Levina, E. and Zhu, J. (2010). Sparse multivariate regression with covariance estimation. J. Comput. Graph. Statist. 19 947–962.
  • Schadt, E. E., Monks, S. A., Drake, T. A., Lusis, A. J., Che, N., Colinayo, V., Ruff, T. G., Milligan, S. B., Lamb, J. R., Cavet, G., Linsley, P. S., Mao, M., Stoughton, R. B. and Friend, S. H. (2003). Genetics of gene expression surveyed in maize, mouse and man. Nature 422 297–302.
  • Schäfer, J. and Strimmer, K. (2005). A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Stat. Appl. Genet. Mol. Biol. 4 Art. 32.
  • Segal, E., Friedman, N., Kaminski, N., Regev, A. and Koller, D. (2005). From signatures to models: Understanding cancer using microarrays. Nat. Genet. 37 S38–S45.
  • Stark, C., Breitkreutz, B.-J., Chatr-Aryamontri, A., Boucher, L., Oughtred, R., Livstone, M. S., Nixon, J., Van Auken, K., Wang, X., Shi, X., Reguly, T., Rust, J. M., Winter, A., Dolinski, K. and Tyers, M. (2011). The BioGRID interaction database: 2011 update. Nucleic Acids Res. 39 D698–D704.
  • Steffen, M., Petti, A., Aach, J., D'Haeseleer, P. and Church, G. (2002). Automated modelling of signal transduction networks. BMC Bioinformatics 3 34.
  • Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58 267–288.
  • Wang, H., Li, R. and Tsai, C.-L. (2007). Tuning parameter selectors for the smoothly clipped absolute deviation method. Biometrika 94 553–568.
  • Yin, J. and Li, H. (2011). Supplement to "A sparse conditional Gaussian graphical model for analysis of genetical genomics data." DOI:10.1214/11-AOAS494SUPP.
  • Zellner, A. (1962). An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias. J. Amer. Statist. Assoc. 57 348–368.
  • Zhu, J., Lum, P. Y., Lamb, J., GuhaThakurta, D., Edwards, S. W., Thieringer, R., Berger, J. P., Wu, M. S., Thompson, J., Sachs, A. B. and Schadt, E. E. (2004). An integrative genomics approach to the reconstruction of gene networks in segregating populations. Cytogenetic Genome Research 105 363–374.
  • Zou, H. (2006). The adaptive lasso and its oracle properties. J. Amer. Statist. Assoc. 101 1418–1429.
\printaddresses