
# Bayesian Model Comparison in Genetic Association Analysis: Linear Mixed Modeling and SNP Set Testing

Xiaoquan Wen
Department of Biostatistics
University of Michigan, Ann Arbor, USA
###### Abstract

We consider the problems of hypothesis testing and model comparison under a flexible Bayesian linear regression model whose formulation is closely connected with the linear mixed effect model and with the parametric models for SNP set analysis in genetic association studies. We derive a class of analytic approximate Bayes factors and illustrate their connections with a variety of frequentist test statistics, including the Wald statistic and the variance component score statistic. Taking advantage of Bayesian model averaging and hierarchical modeling, we demonstrate some distinct advantages and flexibilities of approaches utilizing the derived Bayes factors in the context of genetic association studies. We illustrate the proposed methods with real and simulated data in applications of single SNP association testing, multi-locus fine-mapping and SNP set association testing.
Keywords: Bayes factor; Linear mixed model; SNP set analysis; Genetic association; Model comparison

## 1 Introduction

In the past decades, genetic association studies have taken a prominent position in uncovering the role of genetic variants in disease etiology. Most recently, two related statistical approaches have become especially important in the analysis of genetic association data: the use of linear mixed models (LMM) to control for confounding factors and account for polygenic effects, and the application of SNP set analysis to regions of (rare) genetic variants. As demonstrated by many authors (Kang et al., 2010, Segura et al., 2012, Zhou and Stephens, 2012, Zhou et al., 2013), linear mixed models effectively guard against false positive associations caused by relatedness or population structure (e.g., cryptic relatedness) in the samples while at the same time increasing the power to detect genuine genetic association signals. SNP set testing (Madsen and Browning, 2009, Wu et al., 2011, Lee et al., 2012) is emerging as a method of choice for detecting associations of rare genetic variants, which may be critical in explaining the phenomenon of “missing heritability”. Recent studies have also shown the necessity of jointly applying both approaches when analyzing rare variant associations, in order to control for population stratification or to make use of pedigree data.

Currently, the majority of the methodological work employing LMM and/or SNP set analysis in genetic association studies has focused on reporting p-values for hypothesis testing. In this paper, we discuss a Bayesian alternative that addresses both topics within the model comparison framework, in which hypothesis testing is regarded as a special case. We first show that both problems can be naturally formulated by a unified Bayesian parametric model, and we then derive a class of analytic approximate Bayes factors as our primary statistical device for model comparison. We establish the connections between the approximate Bayes factors and various commonly applied frequentist test statistics, in a fashion similar to Wakefield (2009), Wen (2014), and Wen and Stephens (2014).

Despite performing similarly to the frequentist approaches in traditional hypothesis testing settings, the Bayesian approach exhibits great convenience and flexibility in dealing with complicated practical settings within and beyond hypothesis testing. One of its most significant advantages is the ability to explicitly model various alternative scenarios (which are not necessarily nested) and the fluidity with which evidence from the data is combined via Bayesian model averaging. Beyond single unit (i.e., either a SNP or a SNP set) association testing, we show that the Bayesian model comparison approach can be straightforwardly extended to a joint analysis of multiple association signals, especially in the presence of the linkage disequilibrium (LD) among SNPs that is commonly found in genetic data. We illustrate a highly efficient multi-locus fine-mapping approach that is facilitated by our results on approximate Bayes factors.

## 2 Model and Notations

We consider a general form of the linear mixed model,

 y = Xα + Gβ + u + e,   e ∼ N(0, τ⁻¹I),  (2.1)

where y is an n-vector of quantitative response measurements, X is an n × q matrix of covariate variables to be controlled as fixed effects, whose coefficients are encoded in the q-vector α. G is an n × p matrix of covariates whose effect, represented by the p-vector β, is of primary interest for inference. Finally, the n-vectors u and e represent the random effects and the i.i.d. residual errors, respectively. In the general LMM inference framework, the random effects vector u is assumed to be drawn from a multivariate normal (MVN) distribution, i.e.,

 u ∼ N(0, λτ⁻¹K),  (2.2)

where the matrix K is assumed known (while the variance component parameter λ is typically unknown). In typical genetic applications, G represents the genotypes of candidate SNPs, X includes the intercept term and factors, such as age and sex, that need to be controlled for, and u usually represents the random effects due to cryptic genetic relatedness or population structure. The ultimate goal is to make inference on the genetic effect β.

We now present a Bayesian counterpart of the LMM, whose likelihood is identical to that of (2.1). From the Bayesian perspective, it is natural to regard the “random effect” assumption (2.2) as a standard MVN prior on u. For the controlled “fixed” effect coefficients α, we assume the MVN prior:

 α ∼ N(0, Ψ),  (2.3)

where Ψ is a diagonal matrix. When performing inference, we take the limit Ψ⁻¹ → 0, which essentially assigns independent flat priors to each fixed effect coefficient. A flat prior can be interpreted as assuming that the a priori effects of X are extremely large. This assumption intuitively leads to conservative inference on β. However, for variables that must be controlled for, such conservative assumptions are most likely welcome.

We also assign an MVN prior for the parameter of interest, β, such that

 β ∼ N(0, W).  (2.4)

The variance-covariance matrix W fully characterizes a distinct candidate model in our model comparison framework. The choice of W is context-dependent and has critical implications for the inference results. In practice, we recommend modeling the effect size on the unit-free scale of signal-to-noise ratios (Wen, 2014, Wen and Stephens, 2014) by assigning an MVN prior on the standardized effect, i.e., √τ β ∼ N(0, W), which induces a prior variance matrix on the original scale of β as τ⁻¹W. (Note that the prior on the random effect u is formulated on the same scale.)

Finally, we assume a general joint prior distribution, p(λ, τ), for the variance component parameters. As we will show later, the actual functional form of p(λ, τ) has little impact on our asymptotic approximations of Bayes factors. To emphasize the connection with the frequentist linear mixed effect model, we will henceforth call the above Bayesian linear regression model the Bayesian linear mixed effect model (BLMM).

## 3 Model Comparison in the BLMM

We derive Bayes factors for the BLMM in order to perform Bayesian model comparisons. More specifically, we consider a space of candidate BLMMs that only differ in their specifications of W. We denote by H₀ the trivial null model, in which β = 0 (or equivalently, W = 0), and we define a null-based Bayes factor for an alternative model characterized by its prior variance W on β as

 BF(W) = lim_{Ψ⁻¹→0} P(y | X, G, K, W) / P(y | X, G, K, H₀) = lim_{Ψ⁻¹→0} P(y | X, G, K, W) / P(y | X, G, K, W = 0).  (3.1)

To present our results regarding the Bayes factors, we begin by introducing several necessary additional notations. We denote by β̂ and (λ̂, τ̂) the MLEs of the full LMM (2.1), obtained by treating β as a fixed effect parameter. In addition, we denote by V̂ the variance of β̂. Correspondingly, we use λ̃ and τ̃ to represent the MLEs of the null model, where β is restricted to 0. Furthermore, provided that the parameter λ is known, we note that β̂ can be analytically computed as a function of λ (Appendix A.1), which we denote by β̂(λ). Accordingly, we use V̂(λ) to represent the corresponding variance of β̂(λ) (specifically, β̂(λ̂) = β̂ and V̂(λ̂) = V̂). Finally, we consider a class of general estimators of λ, denoted by λ̌_κ, in which a tuning parameter κ ∈ [0, 1] is built in. The statistical details of this class of estimators are explained in Appendix A.2. Most importantly, it follows that λ̌₀ = λ̃ and λ̌₁ = λ̂ for the two extreme κ values. Deriving from λ̌_κ, we establish a corresponding estimator of β, denoted by β̌_κ, which can be analytically expressed in terms of λ̌_κ (see Appendix A.2) and shares a similar property, such that β̌₀ = β̂(λ̃) and β̌₁ = β̂. Finally, we use the shorthand notations β̌ = β̌_κ and V̌ = V̂(λ̌_κ). In the case that W is specified as a function of λ and/or τ, we denote the resulting plug-in value by W̌. With these additional notations, we show that the desired Bayes factor can be approximated analytically. We summarize the main result in Proposition 1, whose formal proof is given in Appendix A.2.

###### Proposition 1.

Under the BLMM, the Bayes factor can be approximated by

 ABF(W, κ) = |I + V̌⁻¹W̌|^(−1/2) · exp( (1/2) · β̌′V̌⁻¹ [ W̌ (I + V̌⁻¹W̌)⁻¹ ] V̌⁻¹ β̌ ).  (3.2)

It follows that

 BF(W) = ABF(W, κ) · (1 + O(1/n)),   for any κ ∈ [0, 1].

Remark 1. The approximate Bayes factors in the BLMM share the same functional form as the ABFs discussed in Wen (2014) and enjoy the computational properties discussed therein. In particular, the computation of the ABF is robust to potential collinearity in the data matrix G. Furthermore, W is allowed to be rank-deficient.
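To make the computation concrete, the following Python sketch (illustrative code, not part of any released software) evaluates (3.2) from the summary quantities β̌, V̌ and W̌:

```python
import numpy as np

def abf(beta_check, V_check, W_check):
    """Approximate Bayes factor of Proposition 1 (eq. 3.2).

    beta_check : estimated effect vector, shape (p,)
    V_check    : its variance matrix, shape (p, p)
    W_check    : prior variance matrix on beta, shape (p, p); may be rank-deficient
    """
    p = len(beta_check)
    A = np.eye(p) + np.linalg.solve(V_check, W_check)   # I + V^{-1} W
    core = W_check @ np.linalg.inv(A)                   # W (I + V^{-1} W)^{-1}
    b = np.linalg.solve(V_check, beta_check)            # V^{-1} beta
    _, logdet = np.linalg.slogdet(A)                    # log |I + V^{-1} W|
    return float(np.exp(-0.5 * logdet + 0.5 * b @ core @ b))
```

Setting W̌ = 0 recovers the null model, for which the Bayes factor is exactly 1; in the scalar case the function reduces to the single-SNP expression (3.3) below.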

Remark 2. For single SNP analysis, i.e., p = 1, both W̌ and V̌ degenerate to scalars (which we denote by ω̌ and v̌, respectively). The expression (3.2) reduces to

 ABF(ω, κ) = √( v̌/(v̌ + ω̌) ) · exp( (1/2) · (ω̌/(v̌ + ω̌)) · (β̌²/v̌) ),  (3.3)

which has the same functional form as the ABF discussed in Wakefield (2009).

Although all suitable κ values yield the same asymptotic error bound, the choice of κ has practical implications for the approximation accuracy in finite samples. Our numerical experiments (Appendix B) indicate that, with sample sizes in the hundreds, the ABFs computed with the examined κ values already become quite accurate.

### 3.1 Connection with frequentist test statistics

#### 3.1.1 Connection with fixed effect test statistics

Consider a specific class of priors, W = cV̌, for which the ABF can be simplified to

 ABF(W = cV̌, κ) = (1/(c+1))^(p/2) · exp( (1/2) · (c/(c+1)) · β̌′V̌⁻¹β̌ ).

Consequently, the ABF becomes a monotonic transformation of the quadratic form β̌′V̌⁻¹β̌. We note that, in the following two special cases, this quadratic form corresponds to popular frequentist statistics for testing β as a fixed effect. In particular, when κ = 1, the quadratic form becomes the (multivariate) Wald statistic; when κ is set to 0, it coincides with Rao’s score statistic (Appendix C.1).

The monotonic correspondence between the ABF and these two popular frequentist test statistics indicates that, under the prior specified, the ABF ranks candidate models (or SNP associations in single-SNP analysis) exactly as the Wald statistic (for κ = 1) or the score statistic (for κ = 0) does. Furthermore, applying the strategy of the Bayes/non-Bayes compromise (Good, 1992, Servin and Stephens, 2007) by treating the ABF as a regular test statistic, it becomes obvious that the ABF possesses a p-value identical to that of the corresponding Wald or score statistic, depending on the κ value. Wakefield (2009) first named prior specifications of this kind the implicit p-value prior. In the special case of single SNP association testing and assuming Hardy-Weinberg equilibrium, the sampling variance v̌ is inversely proportional to nf(1 − f), where f represents the allele frequency of a target SNP. As a consequence, the implicit p-value prior essentially assumes a larger a priori effect for SNPs that are less informative (due to either a smaller sample size or a lower minor allele frequency). Although, from the Bayesian point of view, there seems to be a lack of proper justification for such prior assumptions (Wakefield, 2009, Wen and Stephens, 2014), we note that the overall effect of the implicit p-value prior on the final inference is often negligible in practice, especially when the sample size is large (see section 5.1.1 for an illustration).
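The monotone relationship is easy to verify numerically. The sketch below (illustrative code; the single-SNP summary statistics and the value c = 10 are made up for this example) computes the simplified ABF under the implicit p-value prior and checks that it orders a set of hypothetical SNPs exactly as the Wald statistic does:

```python
import numpy as np

def abf_implicit(beta_check, V_check, c):
    """Simplified ABF under the implicit p-value prior W = c * V (section 3.1.1)."""
    p = len(beta_check)
    # Wald-type quadratic form beta' V^{-1} beta
    Q = float(beta_check @ np.linalg.solve(V_check, beta_check))
    return (1.0 / (c + 1.0)) ** (p / 2.0) * np.exp(0.5 * c / (c + 1.0) * Q)

# hypothetical single-SNP summary statistics: (beta_hat, standard error)
snps = [(0.05, 0.02), (0.01, 0.02), (0.08, 0.05)]
wald = [(b / s) ** 2 for b, s in snps]
abfs = [abf_implicit(np.array([b]), np.array([[s ** 2]]), c=10.0) for b, s in snps]
# the two criteria produce identical rankings
assert np.argsort(wald).tolist() == np.argsort(abfs).tolist()
```

Because the ABF is a strictly increasing function of the quadratic form for fixed c and p, any set of candidate models of equal dimension is ranked identically by the two criteria.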

#### 3.1.2 Connection with the variance component score statistic

In SNP set analysis, it has become common practice to construct a variance component score test for the genetic effect β (Wu et al., 2011, Lee et al., 2012, Schifano et al., 2012). That is, for a set of p SNPs, the genetic effects β are assumed to be random and follow the distribution β ∼ N(0, γM), where the matrix M is pre-defined. To test H₀ : γ = 0 vs. H₁ : γ > 0, a score statistic T_score, a quadratic form in the residuals of the null model, can be constructed (Appendix C.2). In the special case that the random effect u is ignored (i.e., λ = 0), T_score reduces to the form of the original SKAT statistic (Wu et al., 2011). By re-parameterizing W = γM, we show that the ABF can be represented as a function of T_score (Appendix C.2). In particular, as γ → 0, it follows that

 ABF(W = γM, κ = 0) ≈ exp( (γ/2) · T_score ).

That is, the ABF becomes a monotonic function of the variance component score statistic. Interestingly, the condition γ → 0 represents a local alternative scenario (i.e., the alternative only slightly deviates from the null), for which score tests are known to be most powerful.

## 4 Genetic Association Analysis with Bayes Factors

### 4.1 Bayesian Hypothesis Testing

Bayes factors present two major advantages in the hypothesis testing of genetic association signals: namely, the convenience of Bayesian model averaging and the flexibility of utilizing useful prior information. Before we delve into the details of these advantages, it is worth noting that the practical usage of Bayesian model comparison in hypothesis testing has been limited, mostly due to the difficulty involved in determining significance thresholds based on Bayes factors. Traditionally, this issue has been addressed by treating a Bayes factor as a regular test statistic and deriving its p-value accordingly (Good, 1992, Servin and Stephens, 2007). Because the null distribution of a Bayes factor is generally non-trivial, most practical implementations rely on permutation procedures. Recently, Wen (2013) proposed a robust Bayesian false discovery rate (FDR) control procedure that directly uses the Bayes factors as inputs. This procedure ensures FDR control even under mis-specification of the alternative models, a property resembling the behavior of p-value based procedures under similar circumstances. Most importantly, this procedure is highly computationally efficient and generally does not require extensive permutations.
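To fix ideas, the sketch below implements a generic posterior-probability based Bayesian FDR control rule. This is a simplification that assumes a known null proportion π₀, not the exact EBF procedure of Wen (2013): Bayes factors are converted to posterior alternative probabilities under a two-group model, and the largest prefix of the sorted list whose average local false discovery rate stays below the target level is selected.

```python
import numpy as np

def bayesian_fdr_select(bfs, pi0, alpha=0.05):
    """Select significant units by controlling the Bayesian FDR.

    bfs  : Bayes factors of the tested units
    pi0  : assumed prior proportion of true nulls
    alpha: target FDR level
    Returns the indices of the selected units.
    """
    bfs = np.asarray(bfs, dtype=float)
    # posterior probability of the alternative under a two-group model
    post = (1 - pi0) * bfs / ((1 - pi0) * bfs + pi0)
    order = np.argsort(-post)                      # most significant first
    lfdr = 1.0 - post[order]                       # local false discovery rates
    avg_lfdr = np.cumsum(lfdr) / np.arange(1, len(bfs) + 1)
    # largest prefix whose average local fdr is <= alpha
    k = int(np.searchsorted(avg_lfdr, alpha, side="right"))
    return order[:k]

selected = bayesian_fdr_select([100.0, 50.0, 0.2, 0.1], pi0=0.5, alpha=0.05)
```

Because the rejection set is defined by the average posterior null probability, the realized Bayesian FDR of the selected set is bounded by alpha by construction.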

#### 4.1.1 Model Averaging

In hypothesis testing, there often exist multiple alternative scenarios, and a single parametric model (or its corresponding test statistic) can hardly accommodate all cases. For example, in SNP set testing of rare-variant genetic associations, there exist two primary types of competing approaches that target different alternative scenarios. The first type, represented by the burden tests (Madsen and Browning, 2009), collapses the genetic variants in a region to form a single characteristic genetic unit, with respect to which the association test is then performed. This approach is ideal for a particular alternative scenario in which most of the variants considered are either consistently deleterious or consistently protective. The second type of approach, represented by the C-alpha (Neale et al., 2011) and SKAT tests, targets a complementary scenario in which the variants included in the SNP set can have bi-directional effects on the phenotype of interest. In practice, because the true alternative model is never known a priori, it remains a challenge to reconcile/combine the results from the two distinct approaches within the frequentist testing paradigm. Bayesian model averaging provides a principled way to naturally address this issue. Suppose that there are m possible alternative models under consideration, and for each model i, a Bayes factor BF_i can be computed and a prior probability/weight π_i is assigned. An overall Bayes factor can then be computed as BF = ∑ᵢ π_i BF_i, which summarizes the overall evidence in the data against the null model while accounting for the uncertainty about the true alternative scenario.

In the context of SNP set analysis, Lee et al. (2012) showed that the alternative scenarios considered in the burden and SKAT tests can both be represented in the LMM framework with different specifications of the random effect covariance matrix. In brief, let the column vector w denote the marginal prior effect sizes for the SNPs in a set. The burden model assumes W_b = √w (√w)′, whereas the SKAT model assumes W_s = diag(w). Given these results and within the framework of the BLMM, we can straightforwardly average the evidence over the two competing alternative models by computing an overall Bayes factor, BF = p_b · ABF(W_b, κ) + (1 − p_b) · ABF(W_s, κ), where the probability p_b denotes the relative prior frequency of the burden model. Without a prior preference between the two alternatives, a natural “objective” choice is to set p_b = 1/2.

Lee et al. (2012) provided an alternative interpretation connecting the two models. They considered a class of W matrices indexed by a non-negative correlation coefficient ρ: namely,

 W_ρ = diag(√w) [ (1 − ρ)I + ρ·11′ ] diag(√w) = (1 − ρ)W_s + ρW_b,  (4.1)

which we will refer to as the SKAT-O prior. It should be noted that the prior distribution for β assumed by Bayesian model averaging is essentially a mixture of normals, which itself is not necessarily normal and hence differs from the SKAT-O prior. Nevertheless, the SKAT-O prior can be viewed as a normal approximation of this mixture distribution (matching its first two moments).
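The interpolation identity in (4.1) can be checked directly. The snippet below (illustrative, with an arbitrary weight vector w chosen for the example) verifies that the SKAT-O prior interpolates between W_s = diag(w) at ρ = 0 and W_b = √w (√w)′ at ρ = 1:

```python
import numpy as np

def skat_o_prior(w, rho):
    """W_rho = diag(sqrt(w)) [ (1-rho) I + rho 11' ] diag(sqrt(w))  (eq. 4.1)."""
    s = np.sqrt(np.asarray(w, dtype=float))
    p = len(s)
    return np.diag(s) @ ((1 - rho) * np.eye(p) + rho * np.ones((p, p))) @ np.diag(s)

w = [1.0, 4.0, 0.25]                    # arbitrary per-SNP weights
Ws = np.diag(w)                         # SKAT prior (rho = 0)
Wb = np.outer(np.sqrt(w), np.sqrt(w))   # burden prior (rho = 1)
for rho in (0.0, 0.3, 1.0):
    assert np.allclose(skat_o_prior(w, rho), (1 - rho) * Ws + rho * Wb)
```

The identity holds because diag(√w) I diag(√w) = diag(w) and diag(√w) 11′ diag(√w) = √w (√w)′.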

#### 4.1.2 Informative Prior

The explicit specification of the prior distribution on β for alternative models is seemingly a distinct feature of Bayesian hypothesis testing. However, as we have shown, even the most commonly applied frequentist test statistics can be viewed as resulting from some implicit Bayesian priors. Therefore, it is only natural to regard the prior specification of W as an integral component of alternative modeling. This fact should encourage practitioners to explicitly formulate appropriate informative priors in Bayesian hypothesis testing: if the prior does capture some essence of reality, it improves the overall statistical power; even if the prior is mis-specified, testing with Bayes factors using procedures such as the Bayes/non-Bayes compromise or the robust Bayesian FDR control only results in a reduction in power, not an inflation of the type I error.

For SNP set analysis, it has become common practice to pre-define some “weight” for each individual participating SNP in both the burden and SKAT types of approaches (i.e., the aforementioned w vector). Most commonly, these weights are set up to prioritize genetic variants with low allele frequencies. When performing genetic association analysis, it is also becoming increasingly popular to incorporate genomic annotation and/or pathway information. In all of these examples, the BLMM provides a convenient way to formally integrate the prior information into the hypothesis testing.

Finally, we note that there exist practical settings, especially in studies of genome-wide scale, in which the desired priors can be sufficiently “learned” from the data with the help of the Bayes factors. Take, for example, the problem of SNP set analysis with two competing alternatives, and consider inferring the weights of the burden and the SKAT models (p_b and 1 − p_b) from the data. Hypothetically, if (i) many SNP sets are investigated (in a single study or multiple studies) and (ii) a sufficient number of modest to strong signals are present in the data, it should be intuitive that p_b can be accurately estimated by pooling information across all SNP sets. More specifically, for each SNP set, we can augment the model with a latent indicator representing the true generative model of the observed data. Subsequently, a straightforward EM algorithm (in which the complete-data likelihood can be evaluated via the Bayes factors) can be used to estimate p_b.
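A minimal sketch of such an EM algorithm is given below (illustrative only; it assumes a three-component mixture of null, burden and SKAT models and takes the per-set Bayes factors as given, with the Bayes factor of the null against itself equal to 1 by definition):

```python
import numpy as np

def em_mixture_weights(bf_burden, bf_skat, n_iter=200):
    """Estimate the null proportion pi0 and the burden weight p_b by EM.

    bf_burden, bf_skat : per-SNP-set Bayes factors under the two
                         alternative models.
    """
    bf_b = np.asarray(bf_burden, dtype=float)
    bf_s = np.asarray(bf_skat, dtype=float)
    pi0, p_b = 0.9, 0.5                            # initial values
    for _ in range(n_iter):
        # E-step: posterior membership probabilities for each SNP set
        l0 = pi0 * np.ones_like(bf_b)
        lb = (1 - pi0) * p_b * bf_b
        ls = (1 - pi0) * (1 - p_b) * bf_s
        tot = l0 + lb + ls
        r0, rb, rs = l0 / tot, lb / tot, ls / tot
        # M-step: update the mixture weights
        pi0 = r0.mean()
        p_b = rb.sum() / (rb.sum() + rs.sum())
    return pi0, p_b
```

For example, with four hypothetical SNP sets whose burden Bayes factors are (50, 60, 1, 1) and SKAT Bayes factors are (1, 1, 40, 1), the estimated p_b settles near 2/3, reflecting two clearly burden-like sets against one SKAT-like set.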

### 4.2 Bayesian Variable Selection in the BLMM

Beyond hypothesis testing, many practical problems in genetic association studies can be tackled using model comparison/selection techniques via Bayes factors. Here, we consider the problem of multi-locus fine-mapping analysis. In practice, the fine-mapping analysis usually focuses on relatively small genomic regions flagged by SNP association signals, with the aim of identifying multiple potential signals and narrowing down the candidate causal variants within a region while accounting for LD.

Consider a region of p candidate variants whose genetic effects are jointly modeled by the p-vector β. Ultimately, we are interested in making inference on the binary indicator vector γ = (γ₁, …, γ_p)′, where γᵢ = 1 indicates that SNP i has a non-zero effect. Under the BLMM, we assume the following spike-and-slab prior for variable selection, namely,

 β | γ ∼ N(0, W),  with W = φ²·diag(γ),  and  Pr(γ) = ∏_{i=1}^{p} p₁^{γᵢ} (1 − p₁)^{1−γᵢ},  (4.2)

where the parameter p₁ denotes the prior inclusion probability of a SNP and the parameter φ² represents the prior variance of the genetic effect of each included SNP. The posterior distribution of γ can be computed by

 Pr(γ | φ², y, X, G, K) ∝ Pr(γ) · P(y | γ, φ², X, G, K) ∝ Pr(γ) · BF(W),  (4.3)

where the Bayes factor can be further approximated by the ABF of Proposition 1. It is then conceptually straightforward to design an MCMC algorithm to perform Bayesian variable selection. We note that setting κ = 0 yields substantial computational savings in the proposed MCMC computation, because the variance component estimates from the (common) null model can be re-used across all candidate models. We give a detailed description and explanation of the MCMC algorithm in Appendix D.
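A bare-bones sketch of such a sampler is given below (illustrative; the actual algorithm with its computational refinements is described in Appendix D). It performs single-site Metropolis-Hastings updates on the inclusion vector γ, taking the log Bayes factor as a user-supplied function, and returns estimated posterior inclusion probabilities:

```python
import math
import random

def mh_variable_selection(log_bf, p, p1, n_iter=5000, seed=1):
    """Single-site Metropolis-Hastings over inclusion indicators gamma.

    log_bf : function mapping a 0/1 inclusion list to log BF(W_gamma)
    p      : number of candidate SNPs
    p1     : prior inclusion probability of each SNP (eq. 4.2)
    """
    rng = random.Random(seed)
    gamma = [0] * p

    def log_post(g):
        # log prior Pr(gamma) plus log Bayes factor (eq. 4.3, up to a constant)
        prior = sum(math.log(p1) if gi else math.log(1 - p1) for gi in g)
        return prior + log_bf(g)

    cur = log_post(gamma)
    counts = [0] * p
    for _ in range(n_iter):
        j = rng.randrange(p)                   # propose flipping one indicator
        prop = gamma[:]
        prop[j] = 1 - prop[j]
        new = log_post(prop)
        if math.log(rng.random()) < new - cur:  # Metropolis accept/reject
            gamma, cur = prop, new
        for i in range(p):
            counts[i] += gamma[i]
    return [c / n_iter for c in counts]         # posterior inclusion probabilities
```

As a toy check, supplying a log Bayes factor that rewards only the first SNP (e.g., `lambda g: 3.0 * g[0]`) with p₁ = 0.1 yields a high posterior inclusion probability for that SNP and probabilities near p₁ for the rest.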

## 5 Numerical Illustration

### 5.1 Application of BLMM to an A. thaliana Data Set

In this example, we apply the BLMM to study the genetic associations between the genotypes of inbred A. thaliana lines and the quantitative phenotype of sodium concentration in the leaves, using the data described in Baxter et al. (2010). The data set consists of 336 inbred individuals, each genotyped at 214K SNP positions genome-wide. The data set was previously analyzed by Segura et al. (2012) under the LMM setting. We apply an additional quantile normalization step to the original phenotype measurements to guard against the influence of potential outliers.

#### 5.1.1 Single SNP Association Analysis

We first perform single SNP association tests using the approximate Bayes factors from the BLMM and compare the results with analyses based on p-values. To specify the alternative models in the BLMM, we consider a natural exchangeable prior on the standardized effect scale, i.e., √τ β ∼ N(0, ω). Unlike the implicit p-value prior, this prior does not assume a relationship between the genetic effect size and the features of a target SNP. Furthermore, instead of fixing a single ω value, we assume that ω is uniformly drawn from a pre-specified set of values covering a range of small, modest and large potential effect sizes. The use of multiple ω values forms a mixture-normal prior, which is helpful for describing a longer-tailed distribution of effect sizes (Servin and Stephens, 2007, Wen, 2014). The range of the ω values is selected following the suggestion of Stephens and Balding (2009). We use the software package GEMMA (Zhou and Stephens, 2012) to estimate the kinship matrix K for the random effect and to obtain the MLEs, along with their standard errors, for all the SNPs. Applying equation (3.3), we then compute the approximate Bayes factor for each ω value. Finally, we compute an overall Bayes factor by averaging over all the prior effect size models, i.e., over the ω grid with equal weights.
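As an illustration of this computation, the sketch below (the summary statistics and the ω grid shown are made up for the example, not the values used in our analysis) forms the overall Bayes factor for one SNP by averaging equation (3.3) over a grid of prior variances:

```python
import math

def abf_single(beta_hat, se, omega):
    """Single-SNP approximate Bayes factor (eq. 3.3)."""
    v = se ** 2
    return math.sqrt(v / (v + omega)) * \
        math.exp(0.5 * (omega / (v + omega)) * beta_hat ** 2 / v)

def averaged_abf(beta_hat, se, omegas):
    """Overall Bayes factor: equally weighted average over the omega grid."""
    return sum(abf_single(beta_hat, se, w) for w in omegas) / len(omegas)

# hypothetical GEMMA-style output for one SNP: effect estimate and its SE
bf = averaged_abf(beta_hat=0.25, se=0.06, omegas=[0.01, 0.04, 0.16, 0.64])
```

A strong signal (here, an estimate more than four standard errors from zero) yields a Bayes factor far above 1, while an estimate of exactly zero yields a Bayes factor below 1 for any ω grid.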

We first investigate the ranking of the association signals by the ABFs under the natural Bayesian prior versus the p-values based on the score and Wald test statistics. To this end, we compute Spearman’s rank correlation coefficient between the ABFs and the p-values. The overall rank correlation (across all 214K association tests) between the score statistic p-values and the ABFs is 0.817. However, we note that the majority of the discordance in ranking comes from unlikely association signals (see Figure 1), which are generally not of interest. Focusing on the subset of 10,913 SNPs with the smallest score statistic p-values, the rank correlation becomes nearly perfect. Similarly, the Wald statistic p-values have an overall rank correlation of 0.821 with the ABFs, and the correlation is also nearly perfect for the corresponding subset of 11,379 SNPs with the smallest Wald statistic p-values. The direct comparison between the approximate Bayes factors and the corresponding p-values is shown in Figure 1.

As an illustration, we further apply Bayesian and frequentist FDR control procedures to the Bayes factors and p-values, respectively, to determine the significance cut-offs, ignoring correlations among the tests. Both the Benjamini-Hochberg and the Storey procedures applied to the score statistic p-values select 17 significant SNPs. In comparison, the standard Bonferroni procedure selects 12 SNPs. The Bayesian FDR control procedure (i.e., the EBF procedure described in Wen (2013)) based on the ABFs selects 14 significant SNPs. Importantly, we note that the three selected sets are nested. The results based on the Wald statistic p-values are nearly identical.

Based on these results, we conclude that, under this particular GWAS setting with a very modest sample size, there is no obvious practical difference between applying the Bayes factors and the p-values in single SNP hypothesis testing. We view this result as a numerical validation of the theoretical results discussed in section 3.1.

#### 5.1.2 Fine-Mapping Analysis

Following Segura et al. (2012), we further perform a multi-locus fine-mapping analysis of a 200kb genomic region centered at the top single-SNP association signal at chr4:6392280; 508 SNPs are included. Using the MCMC algorithm described in section 4.2, we assign each candidate SNP the prior inclusion probability p₁ = 1/508, which conservatively sets the prior expected number of signals in the region to 1. Conditional on a SNP having a non-zero effect (i.e., γᵢ = 1), we use the same mixture-normal prior for the effect size described in the single SNP association analysis. We obtain posterior samples from 300,000 MCMC iterations after 150,000 burn-in steps, and the convergence of the MCMC algorithm is assessed using the procedure described in Brooks et al. (2003).

The analysis based on the posterior samples clearly indicates that there are multiple independent association signals residing in this relatively small genomic region. There is zero probability mass on posterior models containing fewer than 3 SNPs; the probabilities of the posterior models having 3, 4, 5 and 6 independent signals are 0.175, 0.452, 0.350 and 0.023, respectively. Inspecting individual SNPs, we summarize the top five associated SNPs, ranked by their posterior inclusion probabilities, in Table 1. The correlations among the top 5 SNPs are very modest. Thus far, our results are largely consistent with those reported in Segura et al. (2012), in which a stepwise variable selection scheme with a BIC-like model selection criterion is employed. Nevertheless, we notice a great deal of uncertainty across the individual models in our analysis. The details of the top 10 models ranked by their posterior probabilities are shown in Table 2. The maximum a posteriori (MAP) model has a probability of only 0.05, and all of the top models have similar complexities and very comparable likelihoods. In addition, we find that 61% of the posterior models contain both of the top two SNPs, and 32% of the posterior models contain a combination of the top three SNPs. One may naturally suspect that the uncertainty in relatively large models (i.e., those with more SNPs included) is partially due to the stringent prior. To examine this, we relax the prior inclusion probability to allow a larger prior expected number of signals in the region, but the results do not change qualitatively. Biologically, it might be the case that the true causal variants are not directly genotyped and the observed signals are only partially correlated with them; it would then be worth following up with dense genotyping experiments or genotype imputation.
Statistically, it seems evident that, in this particular case, reporting a single “best” model from a variable selection procedure yields an over-simplified picture and can be misleading for follow-up analysis.

### 5.2 Simulation Study of SNP Set Analysis

In this section, we perform simulation studies to illustrate the effectiveness of the proposed Bayesian model comparison approach in SNP set analysis. In each simulated data set, we generate 5,000 phenotype-SNP set pairs that mimic the data structure of a genome-wide investigation of expression quantitative trait loci (eQTLs). We randomly select 3,500 SNP sets and simulate their phenotypes from the null model. For the remaining SNP sets, we use the two types of alternative models described in Lee et al. (2012) to generate their phenotypes: one model assumes consistent directional effects of the rare variants, whereas the other allows inconsistent directional effects. We use p_b to denote the relative frequency of the sign-consistent (burden-type) models among all the alternative models, and we vary this parameter across simulation sets. We give a detailed account of the simulation schemes in Appendix E.1.

We analyze the simulated data sets using the proposed Bayesian model comparison approach and the SKAT-O method implemented in the R package SKAT (version 0.95), and examine their FDR control and power. For both approaches, we again follow previous work (Wu et al., 2011, Lee et al., 2012) and assign the marginal weight of each SNP as a function of its allele frequency. In the Bayesian analysis, we apply two strategies for choosing the prior weights in the Bayes factor computation. The first strategy assumes an “objective” uniform prior, setting p_b = 1/2, and the second strategy estimates p_b and the distribution of genetic effect sizes from the data by pooling information across all phenotype-SNP set pairs using a hierarchical model. The details of the analysis procedure are provided in Appendix E.2.

We summarize the simulation results in Table 3. The false discovery rates of all the methods are well controlled. The performance of the Bayesian procedure with the default uniform prior weights is very similar to that of SKAT-O, and the Bayesian procedure based on informative priors achieves the best power in all settings defined by the different true p_b values. These results are expected, because the Bayesian method with estimated weights has the unique advantage of effectively borrowing information across genes through the use of Bayes factors and hierarchical modeling. In addition, we want to emphasize that all of the Bayesian models assumed in the analysis are in fact quite “wrong” compared with the true data-generating model; nevertheless, the robust Bayesian FDR control procedure using Bayes factors still maintains the targeted FDR level.

Going beyond SNP set testing targeting rare variant associations, in Appendix F we further demonstrate that our Bayesian model averaging framework can be conveniently extended to integrate models for detecting common variant associations into SNP set testing. We envision that this approach will have a profound impact on genome-wide studies of expression quantitative trait loci.

## 6 Discussion

In this paper, we have presented a unified Bayesian framework to perform model comparisons in the contexts of linear mixed models and SNP set analysis. Although our statistical results are presented exclusively for quantitative response variables, it is possible to extend them to the generalized linear mixed model (GLMM) context to incorporate binary outcomes and count data using a quadratic approximation of the corresponding log-likelihood functions.

Primarily based on the results of the approximate Bayes factors, we have demonstrated an efficient Bayesian sparse variable selection algorithm to perform multi-locus association analysis under the BLMM. Recently, Zhou et al. (2013) also proposed an elegant Bayesian solution for multiple-SNP association analysis under the LMM at the genome-wide scale. It should be noted that their method primarily focuses on estimating heritability, whereas our method is designed for fine-mapping analysis. In addition, by treating SNP sets as selection units, our approach can be straightforwardly extended to the identification of multiple associated genes/SNP sets, which may be attractive for biological pathway analysis. Previous studies (Guan et al., 2011, Wen, 2014) have shown that Bayesian methods generally hold advantages over penalized regression approaches in variable selection problems with correlated covariates (e.g., SNPs in LD) and/or non-i.i.d. residual error structures. More importantly, as we have demonstrated, there can be great uncertainty regarding any single "best fitting" model. As a practical consequence, reporting a single "best" model while ignoring appropriate uncertainty assessments could hinder follow-up scientific investigation.

Finally, we note that Bayesian model comparison approaches have been successfully applied in other areas of genetic association studies, e.g., meta-analysis (Wen and Stephens, 2014), association mapping of multiple traits and detection of gene-environment interactions (Flutre et al., 2013, Wen and Stephens, 2014). Our results can be conveniently integrated into those existing tools, and their usage can be naturally extended to incorporate LMM and SNP set analysis.

## 7 Supplementary Material

The software and the scripts used to generate the simulated data can be found at http://github.com/xqwen/BLMM. Detailed derivations, proofs, descriptions of the relevant algorithms, and simulation details are included in the supplementary file.

## Acknowledgments

We thank Seunggeun Lee and Xiang Zhou for helpful discussions. This work is supported by NIH grants R01-MH101825 and R01-HG007022.

## Appendix A Bayes Factor Derivation

In this section, we show the detailed derivation of the approximate Bayes factors under the BLMM, which also serves as a proof for Proposition 1 in the main text.

### A.1 Exact Bayes factor with known λ and τ

We first consider the case in which the variance parameters λ and τ are known, rather than being assigned priors. In this case, we show that the exact Bayes factor under the BLMM can be computed analytically. We summarize this result in the following lemma:

###### Lemma 1.

Under the BLMM, if the variance parameters λ and τ are known, the Bayes factor can be computed analytically as

$$ \mathrm{BF}(\mathbf{W}) = \left|\mathbf{I} + \hat{\mathbf{V}}^{-1}\mathbf{W}\right|^{-\frac{1}{2}} \cdot \exp\left(\frac{1}{2}\,\hat{\boldsymbol\beta}'\,\hat{\mathbf{V}}^{-1}\left[\mathbf{W}\left(\mathbf{I} + \hat{\mathbf{V}}^{-1}\mathbf{W}\right)^{-1}\right]\hat{\mathbf{V}}^{-1}\hat{\boldsymbol\beta}\right). \quad \text{(A.1)} $$
###### Proof.

The linear mixed model can be equivalently represented by

$$ \mathbf{y} = \mathbf{X}\boldsymbol\alpha + \mathbf{G}\boldsymbol\beta + \boldsymbol\epsilon, \qquad \boldsymbol\epsilon \sim \mathrm{N}\!\left(0,\,\tau^{-1}\boldsymbol\Sigma\right), \quad \text{(A.2)} $$

where Σ := Σ(λ) denotes the marginal covariance matrix obtained by integrating out the random effects. With the variance parameters λ and τ known, we perform the following transformations of the observed data:

$$ \tilde{\mathbf{y}} = \boldsymbol\Sigma^{-\frac{1}{2}}\mathbf{y}, \qquad \tilde{\mathbf{X}} = \boldsymbol\Sigma^{-\frac{1}{2}}\mathbf{X}, \qquad \tilde{\mathbf{G}} = \boldsymbol\Sigma^{-\frac{1}{2}}\mathbf{G}. \quad \text{(A.3)} $$

This results in a linear model

$$ \tilde{\mathbf{y}} = \tilde{\mathbf{X}}\boldsymbol\alpha + \tilde{\mathbf{G}}\boldsymbol\beta + \tilde{\boldsymbol\epsilon}, \qquad \tilde{\boldsymbol\epsilon} \sim \mathrm{N}\!\left(0,\,\tau^{-1}\mathbf{I}\right), \quad \text{(A.4)} $$

where ε̃ = Σ^(−1/2) ε. Linear model (A.4) is a trivial special case of the complex linear model systems considered by Wen (2014). Consequently, it follows from Lemma 1 of Wen (2014) that, given the prior specifications described in the main text, the Bayes factor can be computed analytically as

$$ \mathrm{BF}(\mathbf{W};\lambda,\tau) = \left|\mathbf{I} + \hat{\mathbf{V}}^{-1}\mathbf{W}\right|^{-\frac{1}{2}} \cdot \exp\left(\frac{1}{2}\,\hat{\boldsymbol\beta}'\,\hat{\mathbf{V}}^{-1}\left[\mathbf{W}\left(\mathbf{I} + \hat{\mathbf{V}}^{-1}\mathbf{W}\right)^{-1}\right]\hat{\mathbf{V}}^{-1}\hat{\boldsymbol\beta}\right). \quad \text{(A.5)} $$

Next, we show the detailed analytic forms of β̂ and V̂ under the BLMM. First, we define

$$ \mathbf{G}_x = \left(\mathbf{I} - \boldsymbol\Sigma^{-1/2}\mathbf{X}\left(\mathbf{X}'\boldsymbol\Sigma^{-1}\mathbf{X}\right)^{-1}\mathbf{X}'\boldsymbol\Sigma^{-1/2}\right)\boldsymbol\Sigma^{-1/2}\mathbf{G}, \quad \text{(A.6)} $$

which depends on λ only through Σ(λ). It follows that

$$ \hat{\boldsymbol\beta}(\lambda) = \left(\mathbf{G}_x'\mathbf{G}_x\right)^{-1}\mathbf{G}_x'\,\boldsymbol\Sigma^{-1/2}\mathbf{y}, \quad \text{(A.7)} $$

and

$$ \hat{\mathbf{V}}(\lambda,\tau) = \tau^{-1}\left(\mathbf{G}_x'\mathbf{G}_x\right)^{-1}. \quad \text{(A.8)} $$
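The computation in Lemma 1, together with (A.6)-(A.8), amounts to a few lines of linear algebra. The following is a minimal numpy sketch under assumed inputs (a given marginal covariance `Sigma`, precision `tau`, and prior effect covariance `W`); it uses a Cholesky factor as one valid choice of Σ^(−1/2), and the function name is illustrative rather than the paper's software.

```python
import numpy as np

def exact_bf(y, X, G, Sigma, tau, W):
    """Exact Bayes factor with known variance parameters (Lemma 1).

    Implements (A.5)-(A.8): whiten by a square root of Sigma, project out
    the fixed covariates X, form the GLS estimate beta_hat and its
    covariance V_hat, then evaluate the closed-form Bayes factor.
    """
    L = np.linalg.cholesky(Sigma)
    S_inv_half = np.linalg.inv(L)                      # one valid Sigma^{-1/2}
    yt, Xt, Gt = S_inv_half @ y, S_inv_half @ X, S_inv_half @ G
    # (A.6): residualize the transformed G against the transformed X.
    P = np.eye(len(y)) - Xt @ np.linalg.solve(Xt.T @ Xt, Xt.T)
    Gx = P @ Gt
    GtG = Gx.T @ Gx
    beta_hat = np.linalg.solve(GtG, Gx.T @ yt)         # (A.7)
    V_inv = tau * GtG                                  # inverse of (A.8)
    M = np.eye(W.shape[0]) + V_inv @ W
    quad = beta_hat @ V_inv @ W @ np.linalg.solve(M, V_inv @ beta_hat)
    return np.linalg.det(M) ** -0.5 * np.exp(0.5 * quad)   # (A.5)
```

A quick sanity check of the formula: setting W = 0 (the null prior) collapses the determinant and the quadratic form, and the Bayes factor equals 1 exactly.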

### A.2 Approximate Bayes factors for unknown λ and τ

When λ and τ are unknown, computing the Bayes factor requires evaluating the following marginal likelihood

$$ p(\mathbf{y}\mid \mathbf{W},\mathbf{X},\mathbf{G},\mathbf{Z}) = \int p(\mathbf{y}\mid \mathbf{W},\mathbf{X},\mathbf{G},\mathbf{Z},\lambda,\tau)\, p(\lambda,\tau)\, d\lambda\, d\tau, \quad \text{(A.9)} $$

and the desired Bayes factor is therefore computed as

$$ \mathrm{BF}(\mathbf{W}) = \lim_{\boldsymbol\Psi^{-1}\to 0}\; \frac{\int p(\mathbf{y}\mid \mathbf{W},\mathbf{X},\mathbf{G},\mathbf{Z},\lambda,\tau)\, p(\lambda,\tau)\, d\lambda\, d\tau}{\int p(\mathbf{y}\mid \mathbf{W}=\mathbf{0},\mathbf{X},\mathbf{G},\mathbf{Z},\lambda,\tau)\, p(\lambda,\tau)\, d\lambda\, d\tau}. \quad \text{(A.10)} $$

By applying the bounded convergence theorem (to switch the limit and the integration), we can carry the analytic computation up to the point

$$ \mathrm{BF}(\mathbf{W}) = \frac{\int K_{H_a}\, d\lambda\, d\tau}{\int K_{H_0}\, d\lambda\, d\tau}, \quad \text{(A.11)} $$

where

$$ \begin{aligned} K_{H_a} = {} & \left|\mathbf{I} + \hat{\mathbf{V}}(\tau,\lambda)^{-1}\mathbf{W}(\tau,\lambda)\right|^{-\frac{1}{2}} \\ & \cdot \exp\left(\frac{1}{2}\,\hat{\boldsymbol\beta}(\lambda)'\,\hat{\mathbf{V}}(\tau,\lambda)^{-1}\mathbf{W}(\tau,\lambda)\left[\mathbf{I} + \hat{\mathbf{V}}(\tau,\lambda)^{-1}\mathbf{W}(\tau,\lambda)\right]^{-1}\hat{\mathbf{V}}(\tau,\lambda)^{-1}\hat{\boldsymbol\beta}(\lambda)\right) \\ & \cdot \tau^{\frac{n}{2}}\left|\boldsymbol\Sigma(\lambda)\right|^{-\frac{1}{2}} p(\lambda,\tau) \cdot \exp\left(-\frac{\tau}{2}\left[\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}(\lambda)\right]'\boldsymbol\Sigma(\lambda)^{-1}\left[\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}(\lambda)\right]\right), \end{aligned} \quad \text{(A.12)} $$

and

$$ K_{H_0} = \tau^{\frac{n}{2}}\left|\boldsymbol\Sigma(\lambda)\right|^{-\frac{1}{2}} p(\lambda,\tau) \cdot \exp\left(-\frac{\tau}{2}\left[\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}(\lambda)\right]'\boldsymbol\Sigma(\lambda)^{-1}\left[\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}(\lambda)\right]\right), \quad \text{(A.13)} $$

where

$$ \tilde{\boldsymbol\alpha}(\lambda) = \left(\mathbf{X}'\boldsymbol\Sigma^{-1}\mathbf{X}\right)^{-1}\mathbf{X}'\boldsymbol\Sigma^{-1}\mathbf{y}. \quad \text{(A.14)} $$

We propose to approximate the double integrals of both K_{H_a} and K_{H_0} by Laplace's method. In general, Laplace's method approximates a multiple integral with respect to a p-vector z in the following fashion,

$$ \int_D h(\mathbf{z}) \exp[g(\mathbf{z})]\, d\mathbf{z} \approx (2\pi)^{p/2}\left|\mathbf{H}_{\hat{\mathbf{z}}}\right|^{-\frac{1}{2}} h(\hat{\mathbf{z}}) \exp[g(\hat{\mathbf{z}})], \quad \text{(A.15)} $$

where

$$ \hat{\mathbf{z}} = \arg\max_{\mathbf{z}}\, g(\mathbf{z}), $$

and |H_ẑ| is the absolute value of the determinant of the Hessian matrix of g evaluated at ẑ. There may be multiple ways to factor an integrand into the functions h and g; the technical requirements for a valid asymptotic approximation are:

1. h is smooth and positive-valued;

2. g is smooth and attains its unique maximum (w.r.t. z) in the interior of D;

3. g increases linearly with respect to the sample size n.

Different factorization schemes satisfying the above requirements usually yield different approximation accuracies for finite sample sizes; nonetheless, their asymptotic error bounds are the same. For a detailed discussion, see Butler (2007), Chapter 2.
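As a concrete illustration of (A.15), the following sketch implements a one-dimensional (p = 1) Laplace approximation with a numerically estimated Hessian and checks it against adaptive quadrature on a sharply peaked integrand. The particular h and g below are chosen purely for demonstration and are not taken from the paper.

```python
import numpy as np
from scipy import optimize, integrate

def laplace_1d(h, g, z0):
    """One-dimensional Laplace approximation to  int h(z) exp(g(z)) dz
    (equation A.15 with p = 1): locate the maximizer of g, then use the
    curvature of g at that point."""
    res = optimize.minimize_scalar(lambda z: -g(z),
                                   bounds=(z0 - 10.0, z0 + 10.0),
                                   method="bounded")
    z_hat = res.x
    eps = 1e-5
    # Central-difference estimate of g''(z_hat); negative at a maximum.
    hess = (g(z_hat + eps) - 2 * g(z_hat) + g(z_hat - eps)) / eps**2
    return np.sqrt(2 * np.pi / abs(hess)) * h(z_hat) * np.exp(g(z_hat))

# Sanity check against adaptive quadrature on a sharply peaked integrand,
# where Laplace's method is expected to be accurate:
h = lambda z: 1.0 + 0.1 * z**2          # smooth, positive-valued
g = lambda z: -50.0 * (z - 2.0) ** 2    # smooth, unique interior maximum
exact, _ = integrate.quad(lambda z: h(z) * np.exp(g(z)), -np.inf, np.inf)
approx = laplace_1d(h, g, 0.0)
```

For this integrand the two answers agree to well under one percent, consistent with the asymptotic error bound discussed above.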

We apply a specific factorization of K_{H_a} and K_{H_0} for Laplace's method. First, we note the decomposition of the quadratic form

$$ \begin{aligned} & \tau\left[\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}(\lambda)\right]'\boldsymbol\Sigma(\lambda)^{-1}\left[\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}(\lambda)\right] \\ & \quad = \hat{\boldsymbol\beta}(\lambda)'\,\hat{\mathbf{V}}(\tau,\lambda)^{-1}\hat{\boldsymbol\beta}(\lambda) + \tau\left[\mathbf{y} - \mathbf{X}\hat{\boldsymbol\alpha}(\lambda) - \mathbf{G}\hat{\boldsymbol\beta}(\lambda)\right]'\boldsymbol\Sigma(\lambda)^{-1}\left[\mathbf{y} - \mathbf{X}\hat{\boldsymbol\alpha}(\lambda) - \mathbf{G}\hat{\boldsymbol\beta}(\lambda)\right], \end{aligned} \quad \text{(A.16)} $$

where

$$ \begin{pmatrix} \hat{\boldsymbol\alpha} \\ \hat{\boldsymbol\beta} \end{pmatrix} = \left[(\mathbf{X}\ \mathbf{G})'\boldsymbol\Sigma^{-1}(\mathbf{X}\ \mathbf{G})\right]^{-1}(\mathbf{X}\ \mathbf{G})'\boldsymbol\Sigma^{-1}\mathbf{y}. \quad \text{(A.17)} $$

Thus, for an arbitrary weight parameter κ ∈ [0, 1], we can write

$$ \begin{aligned} & \tau\left[\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}(\lambda)\right]'\boldsymbol\Sigma(\lambda)^{-1}\left[\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}(\lambda)\right] \\ & \quad = \kappa \cdot \left(\hat{\boldsymbol\beta}(\lambda)'\,\hat{\mathbf{V}}(\tau,\lambda)^{-1}\hat{\boldsymbol\beta}(\lambda) + \tau\left[\mathbf{y} - \mathbf{X}\hat{\boldsymbol\alpha}(\lambda) - \mathbf{G}\hat{\boldsymbol\beta}(\lambda)\right]'\boldsymbol\Sigma(\lambda)^{-1}\left[\mathbf{y} - \mathbf{X}\hat{\boldsymbol\alpha}(\lambda) - \mathbf{G}\hat{\boldsymbol\beta}(\lambda)\right]\right) \\ & \qquad + (1-\kappa) \cdot \tau\left[\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}(\lambda)\right]'\boldsymbol\Sigma(\lambda)^{-1}\left[\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}(\lambda)\right]. \end{aligned} \quad \text{(A.18)} $$

Using this decomposition, we factor K_{H_a} into h_a(λ, τ) exp[g_a(λ, τ)], where

$$ \begin{aligned} h_a(\lambda,\tau) = {} & \left|\mathbf{I} + \hat{\mathbf{V}}(\tau,\lambda)^{-1}\mathbf{W}(\tau,\lambda)\right|^{-\frac{1}{2}} \\ & \cdot \exp\left(\frac{1}{2}\,\hat{\boldsymbol\beta}(\lambda)'\,\hat{\mathbf{V}}(\tau,\lambda)^{-1}\mathbf{W}(\tau,\lambda)\left[\mathbf{I} + \hat{\mathbf{V}}(\tau,\lambda)^{-1}\mathbf{W}(\tau,\lambda)\right]^{-1}\hat{\mathbf{V}}(\tau,\lambda)^{-1}\hat{\boldsymbol\beta}(\lambda)\right) \\ & \cdot \exp\left(-\frac{\kappa}{2}\,\hat{\boldsymbol\beta}(\lambda)'\,\hat{\mathbf{V}}(\tau,\lambda)^{-1}\hat{\boldsymbol\beta}(\lambda)\right) \cdot p(\lambda,\tau), \end{aligned} \quad \text{(A.19)} $$

and

$$ \begin{aligned} g_a(\lambda,\tau) = {} & \frac{n}{2}\log(\tau) - \frac{1}{2}\log\left|\boldsymbol\Sigma(\lambda)\right| - \frac{\tau}{2}(1-\kappa)\left(\left[\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}(\lambda)\right]'\boldsymbol\Sigma(\lambda)^{-1}\left[\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}(\lambda)\right]\right) \\ & - \frac{\tau}{2}\kappa\left(\left[\mathbf{y} - \mathbf{X}\hat{\boldsymbol\alpha}(\lambda) - \mathbf{G}\hat{\boldsymbol\beta}(\lambda)\right]'\boldsymbol\Sigma(\lambda)^{-1}\left[\mathbf{y} - \mathbf{X}\hat{\boldsymbol\alpha}(\lambda) - \mathbf{G}\hat{\boldsymbol\beta}(\lambda)\right]\right). \end{aligned} \quad \text{(A.20)} $$

We factorize K_{H_0} into h_0(λ, τ) exp[g_0(λ, τ)], where

$$ h_0(\lambda,\tau) = \exp\left(-\frac{\kappa}{2}\,\hat{\boldsymbol\beta}(\lambda)'\,\hat{\mathbf{V}}(\tau,\lambda)^{-1}\hat{\boldsymbol\beta}(\lambda)\right) \cdot p(\lambda,\tau), \quad \text{(A.21)} $$

and g_0(λ, τ) = g_a(λ, τ).

Note that for any given λ value, there is a corresponding τ value, namely,

$$ \begin{aligned} \hat{\tau}(\lambda;\kappa) = n \Big/ \Big\{ & (1-\kappa)\left[\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}(\lambda)\right]'\boldsymbol\Sigma(\lambda)^{-1}\left[\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}(\lambda)\right] \\ & + \kappa\left[\mathbf{y} - \mathbf{X}\hat{\boldsymbol\alpha}(\lambda) - \mathbf{G}\hat{\boldsymbol\beta}(\lambda)\right]'\boldsymbol\Sigma(\lambda)^{-1}\left[\mathbf{y} - \mathbf{X}\hat{\boldsymbol\alpha}(\lambda) - \mathbf{G}\hat{\boldsymbol\beta}(\lambda)\right] \Big\}, \end{aligned} \quad \text{(A.22)} $$

that maximizes g_a among all possible τ values. Consequently, maximizing the function g_a(λ, τ) is equivalent to maximizing it with respect to the single parameter λ. Therefore, we can simplify the target objective function to

$$ l(\lambda;\kappa) = \frac{n}{2}\log\hat{\tau}(\lambda) - \frac{1}{2}\log\left|\boldsymbol\Sigma(\lambda)\right|. \quad \text{(A.23)} $$

It should be noted that in the special cases κ = 1 and κ = 0, the objective function (A.23) becomes (up to a constant) the profile log-likelihood functions of the full and null LMMs, respectively. In general, there is no strong guarantee that the function (A.23) is strictly concave with respect to λ. Nevertheless, the second derivative of l (not shown; see Zhou and Stephens (2012) for reference) suggests that the objective function is asymptotically concave (i.e., concave for a sufficiently large sample size n). There is no analytic solution for optimizing (A.23), and gradient-based numerical optimization algorithms, e.g., the Newton-Raphson method, are typically applied in this setting (because the derivatives of the objective function can be efficiently evaluated, as demonstrated in Zhou and Stephens (2012)). We denote

$$ \check{\lambda}(\kappa) = \arg\max_{\lambda}\, l(\lambda;\kappa), \quad \text{(A.24)} $$

and

$$ \check{\tau} = \hat{\tau}(\check{\lambda}). \quad \text{(A.25)} $$
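The profile optimization in (A.22)-(A.24) can be sketched as follows. For illustration we assume the common parameterization Σ(λ) = I + λK with a kinship-like matrix K (the paper's exact parameterization may differ), and we use a generic bounded scalar optimizer in place of the Newton-Raphson updates discussed above; all function names are hypothetical.

```python
import numpy as np
from scipy import optimize

def profile_objective(lam, kappa, y, X, G, K):
    """l(lambda; kappa) of (A.23), assuming (for illustration)
    Sigma(lambda) = I + lambda * K."""
    n = len(y)
    Sigma_inv = np.linalg.inv(np.eye(n) + lam * K)
    # Null-model GLS fit (A.14) and full-model GLS fit (A.17):
    alpha_t = np.linalg.solve(X.T @ Sigma_inv @ X, X.T @ Sigma_inv @ y)
    XG = np.hstack([X, G])
    coef = np.linalg.solve(XG.T @ Sigma_inv @ XG, XG.T @ Sigma_inv @ y)
    r0 = y - X @ alpha_t                  # null-model residuals
    r1 = y - XG @ coef                    # full-model residuals
    q0 = r0 @ Sigma_inv @ r0
    q1 = r1 @ Sigma_inv @ r1
    tau_hat = n / ((1 - kappa) * q0 + kappa * q1)          # (A.22)
    _, logdet = np.linalg.slogdet(np.eye(n) + lam * K)
    return 0.5 * n * np.log(tau_hat) - 0.5 * logdet        # (A.23)

def check_lambda(kappa, y, X, G, K, upper=1e3):
    """lambda-check of (A.24) via a bounded scalar search."""
    res = optimize.minimize_scalar(
        lambda lam: -profile_objective(lam, kappa, y, X, G, K),
        bounds=(1e-6, upper), method="bounded")
    return res.x
```

In practice the Newton-Raphson iterations cited above converge faster because the derivatives of l are cheap to evaluate; the bounded search here simply avoids spelling out those derivatives in a sketch.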

Based on (A.15), Laplace’s method yields the following approximation to the Bayes factor

$$ \begin{aligned} \mathrm{BF}(\mathbf{W}) = {} & \left|\mathbf{I} + \hat{\mathbf{V}}(\check{\tau},\check{\lambda})^{-1}\mathbf{W}(\check{\tau},\check{\lambda})\right|^{-\frac{1}{2}} \\ & \cdot \exp\left(\frac{1}{2}\,\hat{\boldsymbol\beta}(\check{\lambda})'\,\hat{\mathbf{V}}(\check{\tau},\check{\lambda})^{-1}\mathbf{W}(\check{\tau},\check{\lambda})\left[\mathbf{I} + \hat{\mathbf{V}}(\check{\tau},\check{\lambda})^{-1}\mathbf{W}(\check{\tau},\check{\lambda})\right]^{-1}\hat{\mathbf{V}}(\check{\tau},\check{\lambda})^{-1}\hat{\boldsymbol\beta}(\check{\lambda})\right) \\ & \cdot \left(1 + O\!\left(\frac{1}{n}\right)\right). \end{aligned} \quad \text{(A.26)} $$

This essentially proves Proposition 1.

## Appendix B Numerical Accuracy of Approximate Bayes Factors

Proposition 1 shows that the approximate Bayes factors under the BLMM have an error bound of order O(1/n). In this section, we perform numerical experiments to investigate the impact of different values of κ and the sample size n on the accuracy of the approximations. To this end, we sub-sample the real genotype and phenotype data from the A. thaliana example to obtain 3,000 SNPs at various sample sizes. For each sub-sampled data set, we compute the approximate Bayes factors based on the output from GEMMA using equation (3.7) in the main text.

In comparison, we compute the Bayes factors by numerical integration. For a general prior p(λ, τ), the two-dimensional numerical integration is practically infeasible. (Although Monte Carlo integration is possible, we find that the results typically exhibit extremely large variances.) To overcome this difficulty, we apply a specific form of prior,

$$ p(\lambda,\tau) = p(\lambda)\,p(\tau) \propto \frac{1}{\lambda}\cdot\frac{1}{\tau}, $$

i.e., the priors of λ and τ are independent, with λ assumed to follow a limiting gamma distribution and τ a limiting inverse-gamma distribution. With this specification, it becomes possible to first integrate out τ analytically conditional on λ, and then perform a one-dimensional numerical integration with respect to λ using the adaptive Gaussian quadrature algorithm implemented in R. It is worth emphasizing that in this exercise the statistical interpretation of the priors or the resulting Bayes factors is unimportant; we simply attempt to evaluate the numerical differences between the Bayes factor computation methods.
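The strategy above, integrating out τ in closed form under the p(τ) ∝ 1/τ prior and leaving a one-dimensional quadrature over λ, can be sketched as follows for the null-model marginal likelihood. The τ integral is a standard gamma integral; the parameterization Σ(λ) = I + λK and all function names are illustrative assumptions (the text's computation was done in R, not with this code).

```python
import numpy as np
from scipy import integrate, special

def log_marginal_given_lambda(lam, y, X, K):
    """log of  int tau^{n/2} |Sigma|^{-1/2} exp(-q(lam) tau/2) (1/tau) dtau,
    where q(lam) is the null-model GLS quadratic form.  The gamma integral
    gives  Gamma(n/2) * (q/2)^{-n/2} * |Sigma|^{-1/2}  in closed form.
    Sigma(lambda) = I + lambda*K is assumed for illustration."""
    n = len(y)
    Sigma_inv = np.linalg.inv(np.eye(n) + lam * K)
    alpha_t = np.linalg.solve(X.T @ Sigma_inv @ X, X.T @ Sigma_inv @ y)
    r = y - X @ alpha_t
    q = r @ Sigma_inv @ r
    _, logdet = np.linalg.slogdet(np.eye(n) + lam * K)
    return (special.gammaln(n / 2) + (n / 2) * np.log(2)
            - (n / 2) * np.log(q) - 0.5 * logdet)

def null_marginal(y, X, K, lam_max=50.0):
    """Remaining 1-D integral over lambda with prior p(lambda) ~ 1/lambda,
    via adaptive quadrature (mirroring the R quadrature used in the text)."""
    # Rescale by a reference log-value for numerical stability.
    ref = max(log_marginal_given_lambda(l, y, X, K)
              for l in (0.01, 0.1, 1.0, 10.0))
    val, _ = integrate.quad(
        lambda lam: np.exp(log_marginal_given_lambda(lam, y, X, K) - ref) / lam,
        1e-6, lam_max, limit=200)
    return val, ref
```

The returned pair (value, log-scale reference) keeps the quadrature in a safe floating-point range; the actual marginal likelihood is val · exp(ref) up to the improper-prior normalizing constant, which cancels in the Bayes factor ratio.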

The comparison results for different sample sizes and κ values are summarized in Figure 2. Regarding the results from the numerical integrations as the "truth", we find that for relatively small sample sizes, one choice of κ tends to be slightly conservative while the other tends to be slightly anti-conservative. However, as the sample size grows, both approximations become quite accurate.

## Appendix C Connection between Bayes factor and score statistic

### C.1 Connection with fixed effect score statistic

In this section, we give the mathematical details on the connections between the approximate Bayes factor evaluated at the null-model variance estimates and the fixed effect score test statistic. In particular, it is sufficient to show that the quadratic form β̂′ V̂⁻¹ β̂ corresponds to the score statistic for testing the fixed effect β = 0.

To see this, we relate β̂(λ) to α̃(λ), the MLE of α estimated under the null model restriction β = 0. The expression of α̃(λ) is given in (A.14), and it can be shown that

$$ \hat{\boldsymbol\beta}(\lambda) = \mathbf{Q}(\lambda)\,\mathbf{G}'\boldsymbol\Sigma(\lambda)^{-1}\left[\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}(\lambda)\right], \quad \text{(C.1)} $$

where

$$ \mathbf{Q}(\lambda) = \left[\mathbf{G}'\boldsymbol\Sigma(\lambda)^{-1}\mathbf{G} - \mathbf{G}'\boldsymbol\Sigma(\lambda)^{-1}\mathbf{X}\left(\mathbf{X}'\boldsymbol\Sigma(\lambda)^{-1}\mathbf{X}\right)^{-1}\mathbf{X}'\boldsymbol\Sigma(\lambda)^{-1}\mathbf{G}\right]^{-1}. \quad \text{(C.2)} $$

Furthermore,

$$ \hat{\mathbf{V}}(\lambda,\tau) = \tau^{-1}\mathbf{Q}(\lambda). \quad \text{(C.3)} $$

Therefore, it follows that

$$ \hat{\mathbf{V}}(\lambda,\tau)^{-1}\hat{\boldsymbol\beta}(\lambda) = \tau\,\mathbf{G}'\boldsymbol\Sigma(\lambda)^{-1}\left(\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}(\lambda)\right). \quad \text{(C.4)} $$
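Identities (C.1)-(C.4) are easy to verify numerically. The following self-contained check uses synthetic data and an arbitrary SPD matrix in place of Σ(λ); it confirms that the β coefficient from the joint GLS fit (A.17) matches (C.1), and that V̂⁻¹β̂ reduces to (C.4). Here we take τ = 1 and V̂ = Q, consistent with (A.8) since Q = (G_x′ G_x)⁻¹; the data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, m = 30, 2, 3
X = np.column_stack([np.ones(n), rng.normal(size=n)])
G = rng.normal(size=(n, m))
A = rng.normal(size=(n, n))
Sigma = np.eye(n) + A @ A.T / n          # any SPD stand-in for Sigma(lambda)
y = rng.normal(size=n)
Si = np.linalg.inv(Sigma)

# Null-model GLS fit of alpha, equation (A.14):
alpha_t = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
# Q(lambda) of (C.2):
Q = np.linalg.inv(G.T @ Si @ G
                  - G.T @ Si @ X @ np.linalg.solve(X.T @ Si @ X, X.T @ Si @ G))
# beta_hat via (C.1):
beta_c1 = Q @ G.T @ Si @ (y - X @ alpha_t)
# beta_hat from the joint GLS fit (A.17) -- should agree with (C.1):
XG = np.hstack([X, G])
beta_joint = np.linalg.solve(XG.T @ Si @ XG, XG.T @ Si @ y)[p:]
assert np.allclose(beta_c1, beta_joint)
# (C.4) with tau = 1:  V_hat^{-1} beta_hat = G' Si (y - X alpha_t)
Vinv_beta = np.linalg.solve(Q, beta_c1)
assert np.allclose(Vinv_beta, G.T @ Si @ (y - X @ alpha_t))
```

The agreement of the two β estimates is the GLS version of the Frisch-Waugh-Lovell partitioning argument underlying (C.1).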

For λ = λ̃ and τ = τ̃, and adopting the notations

$$ \tilde{\boldsymbol\alpha} = \tilde{\boldsymbol\alpha}(\tilde\lambda), \qquad \tilde{\boldsymbol\Sigma} = \boldsymbol\Sigma(\tilde\lambda), \qquad \tilde{\mathbf{Q}} = \mathbf{Q}(\tilde\lambda), \quad \text{(C.5)} $$

the desired quadratic form can be equivalently represented by

$$ \hat{\boldsymbol\beta}'\,\hat{\mathbf{V}}^{-1}\hat{\boldsymbol\beta} = \tilde\tau\left(\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}\right)'\tilde{\boldsymbol\Sigma}^{-1}\mathbf{G}\,\tilde{\mathbf{Q}}\,\mathbf{G}'\tilde{\boldsymbol\Sigma}^{-1}\left(\mathbf{y} - \mathbf{X}\tilde{\boldsymbol\alpha}\right). $$