Learning quantitative sequence-function relationships from massively parallel experiments

Gurinder S. Atwal and Justin B. Kinney
Simons Center for Quantitative Biology, Cold Spring Harbor Laboratory
Cold Spring Harbor, NY 11724
Tel.: +1-516-3675230
Email: jkinney@cshl.edu
Abstract

A fundamental aspect of biological information processing is the ubiquity of sequence-function relationships – functions that map the sequence of DNA, RNA, or protein to a biochemically relevant activity. Most sequence-function relationships in biology are quantitative, but only recently have experimental techniques for effectively measuring these relationships been developed. The advent of such “massively parallel” experiments presents an exciting opportunity for the concepts and methods of statistical physics to inform the study of biological systems. After reviewing these recent experimental advances, we focus on the problem of how to infer parametric models of sequence-function relationships from the data produced by these experiments. Specifically, we retrace and extend recent theoretical work showing that inference based on mutual information, not the standard likelihood-based approach, is often necessary for accurately learning the parameters of these models. Closely connected with this result is the emergence of “diffeomorphic modes” – directions in parameter space that are far less constrained by data than likelihood-based inference would suggest. Analogous to Goldstone modes in physics, diffeomorphic modes arise from an arbitrarily broken symmetry of the inference problem. An analytically tractable model of a massively parallel experiment is then described, providing an explicit demonstration of these fundamental aspects of statistical inference. This paper concludes with an outlook on the theoretical and computational challenges currently facing studies of quantitative sequence-function relationships.

1 Introduction

Figure 1: Sequence-function relationships in biology. (a) A sequence-function relationship maps a biological sequence (blue bar) to a biologically relevant activity (yellow star). (b) One of the simplest sequence-function relationships is how the affinity (star) of a transcription factor protein (magenta) for its DNA binding site depends on the sequence of that site (blue). (c) A more complicated sequence-function relationship describes how the rate of mRNA transcription depends on the DNA sequence of a gene’s promoter region. At the lac promoter of E. coli (illustrated), this transcription rate (star) depends on how strongly both the transcription factor CRP (purple) and the RNA polymerase holoenzyme (RNAP; orange) bind their respective sites within the promoter region (blue).

A major long-term goal in biology is to understand how biological function is encoded within the sequences of DNA, RNA, and protein. The canonical success story in this effort is the genetic code: given an arbitrary sequence of messenger RNA, the genetic code allows us to predict with near certainty what peptide sequence will result. There are many other biological codes we would like to learn as well. How does the DNA sequence of a promoter or enhancer encode transcriptional regulatory programs? How does the sequence of pre-mRNA govern which exons are kept and which are removed from the final spliced mRNA? How does the peptide sequence of an antibody govern how strongly it binds to target antigens?

A major difference between the genetic code and these other codes is that while the former is qualitative in nature, the latter are governed by sequence-function relationships that are inherently quantitative. Quantitative sequence-function relationships (also called quantitative sequence-activity maps, or QSAMs Melnikov:2012dw ()) describe any function that maps the sequence of a biological heteropolymer to a biologically relevant activity (Fig. 1a). Perhaps the simplest example of such a relationship is how the affinity of a transcription factor protein for its DNA binding site depends on the DNA sequence of that site (Fig. 1b). Such relationships are a key component of the more complicated relationship between the DNA sequence of a promoter or enhancer (which typically binds multiple proteins) and the resulting rate of mRNA transcription (Fig. 1c). In both of these cases, the activities of interest (affinity or transcription rate) can vary over orders of magnitude and yet still be finely tuned by adjusting the corresponding sequence (binding site or promoter/enhancer). Similarly, other sequence-function relationships, like the inclusion of exons during mRNA splicing or the affinity of a protein for its ligand, are fundamentally quantitative.

The study of quantitative sequence-function relationships presents an exciting opportunity for the concepts and methods of statistical physics to shed light on biological systems. There is a natural analogy between biological sequences and the microstates of physical systems, as well as between biological activities and physical Hamiltonians. Yet we currently lack answers to basic questions a statistical physicist might ask, such as “what is the density of states?” or “is a relationship convex or glassy?” The answers to such questions may well have important consequences for diverse fields including biochemistry, systems biology, immunology, and evolution.

Experimental methods for measuring sequence-function relationships have improved dramatically in recent years. In the mid 2000s, multiple “high-throughput” methods for measuring the DNA sequence specificity of transcription factors were developed; these methods include protein binding microarrays (PBMs) Mukherjee:2004um (); Berger:2006wp (), E. coli one-hybrid technology (E1H) Meng:2005dp (), and microfluidic platforms Maerkl:2007tv (). The subsequent development and dissemination of ultra-high-throughput DNA sequencing technologies then led, starting in 2009, to the creation of a number of “massively parallel” experimental techniques for probing a wide range of sequence-function relationships (Table 1). These massively parallel assays can readily measure the functional activities of very large numbers of sequences in a single experiment by coupling standard bench-top techniques to ultra-high-throughput DNA sequencing.

sequence | activity | system | name | publication
DNA binding sites | protein-DNA binding affinity | purified protein | Bind-n-Seq | Zykovich et al., 2009 Zykovich:2009dx ()
DNA binding sites | protein-DNA binding affinity | purified protein | HT-SELEX | Zhao et al., 2009 Zhao:2009gs ()
DNA binding sites | protein-DNA binding affinity | purified protein | HT-SELEX | Jolma et al., 2010 Jolma:2010hz ()
DNA binding sites | protein-DNA binding affinity | purified protein | EMSA-Seq | Wong et al., 2011 Wong:2011gi ()
DNA binding sites | protein-DNA binding affinity | purified protein | SELEX-Seq | Slattery et al., 2011 Slattery:2011hxa ()
promoter/enhancer DNA | transcription rate | purified protein | -- | Patwardhan et al., 2009 Patwardhan:2009cw ()
promoter/enhancer DNA | transcription rate | bacteria | Sort-Seq | Kinney et al., 2010 Kinney:2010tn ()
promoter/enhancer DNA | transcription rate | cell culture | MPRA | Melnikov et al., 2012 Melnikov:2012dw ()
promoter/enhancer DNA | transcription rate | mouse liver | -- | Patwardhan et al., 2012 Patwardhan:2012hy ()
promoter/enhancer DNA | transcription rate | yeast | -- | Sharon et al., 2012 Sharon:2012io ()
promoter/enhancer DNA | transcription rate | mouse retina | CRE-Seq | Kwasnieski et al., 2012 Kwasnieski:2012hu ()
protein | ligand binding | phage display | DMS | Fowler et al., 2010 Fowler:2010gt ()
protein | cellular growth rate | yeast | EMPIRIC | Hietpas et al., 2011 Hietpas:2011bp ()
protein | toxin activity | bacteria | -- | Adkar et al., 2012 Adkar:2012ea ()
protein | H1N1 binding | yeast display | -- | Whitehead et al., 2012 Whitehead:2012jt ()
protein | GPCR expression | bacteria | -- | Schlinkmann et al., 2012 Schlinkmann:2012il ()
RNA | mRNA translation | bacteria | -- | Holmqvist et al., 2013 Holmqvist:2013hx ()
RNA | sRNA targeting | bacteria | qSortSeq | Peterman et al., 2014 Peterman:2014cd ()
RNA | mRNA translation | cell culture | -- | Oikonomou et al., 2014 Oikonomou:2014goa ()
RNA | mRNA translation | cell culture | FACS-Seq | Noderer et al., 2014 Noderer:2014fj ()
replication origins | DNA replication | yeast | ARS-Seq | Liachko et al., 2013 Liachko:2013jc ()
endonuclease sites | DNA cutting | purified protein | -- | Thyme et al., 2014 Thyme:2014hna ()
Table 1: Massively parallel experiments used for studying various sequence-function relationships. Columns show the type of sequences interrogated, the sequence activity assayed, the biological system on which the experiments were performed, the name (if any) of the experimental technique, and the publication first describing the method. This table is not exhaustive; it only describes some of the earliest experiments in each type of system.

Massively parallel experiments are very unlike conventional experiments in physics: they are typically very noisy and rarely provide direct readouts of the quantities that one cares about. Moreover, the noise characteristics of these measurements are difficult to accurately model. Indeed, such noise generally exhibits substantial day-to-day variability. Although standard inference methods require an explicit model of experimental noise, it is still possible to precisely learn quantitative sequence-function relationships from massively parallel data even when noise characteristics are unknown Kinney:2007dh (); Kinney:2014ge ().

The ability to fit parametric models to these data reflects subtle but important distinctions between two objective functions used for statistical inference: (i) likelihood, which requires a priori knowledge of the experimental noise function and (ii) mutual information Cover:1991ti (), a quantity based on the concept of entropy, which does not require a noise function. In contrast to the conventional wisdom that more experimental measurements will improve the model inference task, the standard maximum likelihood approach will typically never learn the right model, even in the infinite data limit, if one uses an imperfect model of experimental noise. Model inference based on mutual information does not suffer from this ailment.

Mutual-information-based inference is unable to pin down the values of model parameters along certain directions in parameter space known as “diffeomorphic modes” Kinney:2014ge (). This inability is not a shortcoming of mutual information, but rather reflects a fundamental distinction between how diffeomorphic and nondiffeomorphic directions in parameter space are constrained by data. Analogous to the emergence of Goldstone modes in particle physics due to a specific yet arbitrary choice of phase, diffeomorphic modes arise from a somewhat arbitrary choice one must make when defining the sequence-dependent activity that one wishes to model. Likelihood, in contrast to mutual information, is oblivious to the distinction between diffeomorphic and nondiffeomorphic modes.

We begin this paper by briefly reviewing a variety of massively parallel assays for probing quantitative sequence-function relationships. We then turn to the problem of learning parametric models of these relationships from the data that these experiments generate. After reviewing recent work on this problem Kinney:2014ge (), we extend this work in three ways. First, we show that “diffeomorphic modes” of the parametric activity model that one wishes to learn are “dual” to certain transformations of the corresponding model of experimental noise (the “noise function”). This duality reveals a symmetry of the inference problem, thereby establishing a close analogy with Goldstone modes. Next we compute and compare the Hessians of likelihood and mutual information. This comparison suggests an additional analogy between this inference problem and concepts in fluid mechanics. Finally, we work through an analytically tractable model of a massively parallel experiment of protein-DNA binding. This example explicitly illustrates the differences between likelihood- and mutual-information-based approaches to inference, as well as the emergence of diffeomorphic modes.

It should be noted that the inference of receptive fields in sensory neuroscience is another area of biology in which mutual information has proved useful as an objective function, and that work in this area has also provided important insights into basic aspects of machine learning Paninski:2003eq (); Sharpee:2004ty (); Sharpee:2006vb (); Kouh:2009ik (); Rajan:2011vi (). Indeed, the problem of learning quantitative sequence-function relationships in molecular biology is very similar to the problem of learning receptive fields in neuroscience Kinney:2014ge (). The discussion of this problem in the neuroscience context, however, has largely avoided in-depth analyses of how mutual information relates to likelihood, as well as of how diffeomorphic modes emerge.

2 Massively parallel experiments probing sequence-function relationships


Figure 2: Overview of massively parallel experiments for studying quantitative sequence-function relationships. (a) The input to each experiment is a library of different sequences that one wishes to test. The output is one or more bins of sequences; each sequence in each bin is randomly selected from the library with a weight that depends on a measurement of that sequence’s activity (star). (b) The resulting data set consists of a list of (non-unique) sequences, each sequence assigned to either the input library or one of the output bins. (c) Illustration of experimental methods for measuring the sequence-dependent binding energy of purified transcription factor proteins. The input library typically consists of random DNA flanked by constant sequence. This library DNA is mixed with the protein of interest and binding is allowed to come to equilibrium. DNA bound by protein is then separated from unbound DNA, e.g. by running complexes on a gel (shown), then sequenced along with a sample from the input library. (d) Sort-Seq Kinney:2010tn () is a massively parallel experiment that uses a library of partially mutagenized sequences to probe the mechanisms of transcriptional regulation employed by a specific wild type promoter of interest. Mutant promoters are cloned upstream of the GFP gene, and E. coli cells harboring these expression constructs are sorted into bins using FACS. The mutant promoters in each bin, as well as promoters from the input library, are then sequenced.
Figure 3: The lac promoter region studied in Kinney:2010tn (). (a) Sort-Seq was used to dissect a 75 bp region of the E. coli lac promoter using a library consisting of wild type sequences mutagenized at 12% per nucleotide, i.e., each library sequence had 9 mutations on average. (b) The resulting data were used to learn a quantitative sequence-function relationship, the mathematical form of which reflected an explicit biophysical model of transcriptional regulation. This model included two “energy matrices” describing the sequence-dependent binding energies of CRP and RNAP for their respective sites. It also included a value for the interaction energy between these two proteins.

All of the massively parallel experiments in Table 1 share a common structure (Fig. 2a). The first step in each experiment is to generate a large set of different sequences to measure. This set of sequences is called the “library.” Multiple different types of libraries can be used depending on the application. One then performs an experiment that takes this library as input, and as output provides a set of one or more “bins” of sequences. Each output bin contains sequences selected from the library with a weight that depends on the measured activity of that sequence. Finally, samples of sequences from each of the output bins, as well as from the input library, are read out using ultra-high-throughput DNA sequencing. The resulting data thus consist of a long list of (typically non-unique) DNA sequences, each assigned to a corresponding bin (Fig. 2b). It is from these data that we wish to learn quantitative models of sequence-function relationships.

Some of the earliest massively parallel experiments were designed to measure the specificity of purified transcription factors for their DNA binding sites Zykovich:2009dx (); Zhao:2009gs (); Jolma:2010hz (); Wong:2011gi (); Slattery:2011hxa () (Fig. 2c). The library used in such studies consists of a fixed-length region of random DNA flanked by constant sequences used for PCR amplification. This library is mixed with the transcription factor of interest, after which protein-bound DNA is separated from unbound DNA, e.g., by running the protein-DNA mixture on a gel. Protein-bound DNA is then sequenced, along with the input library.

Using a library of random DNA to assay protein-DNA binding has the advantage that the same library can be used to study each protein. This is particularly useful when performing assays on many different proteins at once (e.g., Jolma:2010hz (); Jolma:2013fh ()). On the other hand, only a very small fraction of library sequences will be specifically bound by the protein of interest. Moreover, because proteins typically bind DNA in a non-specific manner, such experiments are often performed serially in order to achieve substantial enrichment. (This serial enrichment approach is known as SELEX and is much older than ultra-high-throughput DNA sequencing; see Oliphant:1989vb (); Tuerk:1990ux (); Ellington:1990ct (); Blackwell:1990tq (); Wright:1991uw ().)

The first massively parallel experiment to probe how multi-protein-DNA complexes regulate transcription in living cells was Sort-Seq Kinney:2010tn () (Fig. 2d). The sequence library used in this experiment was generated by introducing randomly scattered mutations into a “wild-type” sequence of interest, specifically, the 75 bp region of the promoter of the lac gene in E. coli depicted in Fig. 3a. A few million of these mutant promoters were cloned upstream of the green fluorescent protein (GFP) gene. Cells carrying these expression constructs were grown under conditions favorable to promoter activity and were then sorted into a small number of bins according to each cell’s measured fluorescence. This partitioning of cells was accomplished using fluorescence-activated cell sorting (FACS) Herzenberg:1976th (), a method that can readily sort thousands of cells per second. The mutant promoters within each sorted bin as well as within the input library were then sequenced, yielding activity measurements for a large number of variant promoter sequences. We note that advances in DNA sequencing have since made it possible to accumulate much more data, and it is no longer difficult to measure the activities of far larger numbers of sequences in this manner.

Massively parallel experiments using partially mutagenized sequences provide data about sequence-function relationships within a localized region of sequence space centered on the wild type sequence of interest. Measuring these local relationships can provide a wealth of information about the functional mechanisms of the wild type sequence. For instance, the Sort-Seq data of Kinney:2010tn () allowed the inference of an explicit biophysical model for how CRP and RNAP work together to regulate transcription at the lac promoter (Fig. 3b). In particular, the authors used their data to learn quantitative models for the in vivo sequence specificity of both CRP and RNAP. Model fitting also enabled measurement of the protein-protein interaction by which CRP is able to recruit RNAP and up-regulate transcription.

Partially mutagenized sequences have also been used extensively for “deep mutational scanning” experiments on proteins, starting with Fowler:2010gt (); Hietpas:2011bp (). In this context, selection experiments on partially mutagenized proteins allow one to identify protein domains critical for folding and function. A variety of deep mutational scanning experiments are described in Fowler:2014gq ().

3 Inference using likelihood

Figure 4: Schematic illustration of how likelihood depends on the model $\theta$ and the noise function $\pi$ in the $N \to \infty$ limit. (a,b) The likelihood $L(\theta, \pi)$ will typically have a correlated dependence on $\theta$ and $\pi$. If $\pi$ is set equal to the correct noise function $\pi^*$, then $L$ will be maximized by the correct model $\theta^*$. However, if $\pi$ is set to an incorrect noise function $\pi \neq \pi^*$, $L$ will typically attain a maximum at an incorrect model $\theta \neq \theta^*$.

The inference of quantitative sequence-function relationships from massively parallel experiments can be phrased as follows. The data consist of a large number of sequences $S_n$, each having a corresponding measurement $M_n$ ($n = 1, 2, \ldots, N$). Due to experimental noise, repeated measurements of the same sequence can yield different values for $M$. Our experiment therefore has the following probabilistic form: sequence $S$ $\xrightarrow{\;p(M|S)\;}$ measurement $M$, where $p(M|S)$ describes the experiment. If we assume that the measurements for each sequence are independent, and if we have an explicit parametric form for $p(M|S)$, then we can learn the values of the parameters by maximizing the per-datum log likelihood,

$L = \frac{1}{N} \sum_{n=1}^{N} \log p(M_n \,|\, S_n).$   (1)

In what follows we will refer to the quantity $L$ simply as the “likelihood.”

In regression problems such as this, one introduces an additional layer of structure. Specifically, we assume the measurement $M$ of each sequence is a noisy readout of some underlying activity $R$ that is a deterministic function of that sequence. We call the function relating $S$ to $R$ the “activity model” and denote it by $\theta$, so that $R = \theta(S)$. This activity model is ultimately what we want to understand. The specific way the activity $R$ is read out by measurements is then specified by a conditional probability distribution, $\pi(M|R)$, which we call the “noise function.” (We use the term “noise function” in order to be consistent with the terminology of Kinney:2014ge () and to avoid deviating too much from the more standard terms “noise model” and “error model” used in the statistics and machine learning literature. We emphasize, however, that $\pi(M|R)$ defines much more than just the characteristics of experimental noise; it entirely specifies the relationship between measurements $M$ and the underlying activity $R$. Were it not for prior terminology, the term “measurement function” might be preferable to “noise function.”) Our experiment is thus represented by the Markov chain sequence $S$ $\xrightarrow{\;\theta\;}$ activity $R$ $\xrightarrow{\;\pi(M|R)\;}$ measurement $M$. The corresponding likelihood is

$L(\theta, \pi) = \frac{1}{N} \sum_{n=1}^{N} \log \pi\big(M_n \,|\, \theta(S_n)\big).$   (2)

The model we adopt for our experiment therefore has two components: the activity model $\theta$, which describes the sequence-function relationship of interest, and the noise function $\pi$, which we do not really care about.

Standard statistical regression requires that the noise function $\pi$ be specified up-front. $\pi$ can be learned either by performing separate calibration experiments, or by assuming a functional form based on an educated guess. This can be problematic, however. Consider inference in the large data limit, $N \to \infty$, which is illustrated in Fig. 4. Likelihood is determined by both the model $\theta$ and the noise function $\pi$ (Fig. 4a). If we know the correct noise function $\pi^*$ exactly, then maximizing $L$ over $\theta$ is guaranteed to recover the correct model $\theta^*$. However, if we assume an incorrect noise function $\pi \neq \pi^*$, maximizing likelihood will typically recover an incorrect model $\theta \neq \theta^*$ (Fig. 4b).

4 Inference using mutual information

Information theory provides an alternative inference approach. Suppose we hypothesize a specific model $\theta$, which gives predictions $R = \theta(S)$. Denote the true model $\theta^*$ and the corresponding true activity $R^* = \theta^*(S)$. The dependence between these quantities will then form a Markov chain, $R \xleftarrow{\;\theta\;} S \xrightarrow{\;\theta^*\;} R^* \xrightarrow{\;\pi\;} M$. From the simple fact that $M$ depends on $S$ only through the value of $R^*$, any dependence measure $I$ that satisfies the Data Processing Inequality (DPI) Cover:1991ti () must satisfy

$I[R; M] \;\leq\; I[R^*; M].$   (3)

Therefore, in the set of possible models $\theta$, the true model $\theta^*$ is guaranteed to globally maximize the objective function $I[R; M]$.

One particularly relevant dependence measure that satisfies DPI is mutual information, a quantity that plays a fundamental role in information theory Cover:1991ti (). (See Kinney:2014vn () for an extended discussion of mutual information as a measure of statistical association.) For massively parallel experiments such as those in Fig. 2, $R$ is continuous and $M$ is discrete. In these cases, mutual information is given by

$I[R; M] = \sum_{M} \int dR \; p(R, M) \, \log \frac{p(R, M)}{p(R)\, p(M)},$   (4)

where $p(R, M)$ is the joint distribution of activity predictions and measurements resulting from the model $\theta$. If one is able to estimate $p(R, M)$ from a finite sample of data, mutual information can be used as an objective function for determining $\theta$ without assuming any noise function $\pi$.

It should be noted that there are multiple dependence measures that satisfy DPI. One might wonder whether maximizing multiple different dependence measures would improve on the optimization of mutual information alone. The answer is not so simple. In Kinney:2014ge () it was shown that if the correct model $\theta^*$ is within the space of models under consideration, then, in the large data limit, maximizing mutual information is equivalent to simultaneously maximizing every dependence measure that satisfies DPI. On the other hand, one rarely has any assurance that the correct model is within the space of parameterized models one is considering. In this case, considering different DPI-satisfying measures might provide a test for whether $\theta^*$ lies noticeably outside the space of parameterized models. To our knowledge, this potential approach to the model selection problem has yet to be demonstrated.

5 Relationship between likelihood and mutual information

A third inference approach is to admit that we do not know the noise function $\pi$ a priori, and to fit both $\theta$ and $\pi$ simultaneously by maximizing likelihood over this pair. It is easy to see why this makes sense: the division of the inference problem into first measuring $\pi$, then learning $\theta$ using that inferred $\pi$, is somewhat artificial. The process that maps $S$ to $M$ is determined by both $\theta$ and $\pi$ and thus, from a probabilistic point of view, it makes sense to maximize likelihood over both of these quantities simultaneously.

We now show that, in the large $N$ limit, maximizing likelihood over both $\theta$ and $\pi$ is equivalent to maximizing the mutual information $I[R;M]$ between model predictions and measurements. Here we follow the argument given in Kinney:2014ge (). In the large $N$ limit, likelihood can be written

$L(\theta, \pi) = \big\langle \log \pi(M \,|\, R) \big\rangle_{R,M}$   (5)
$= I[R; M] - D_{KL}(\theta, \pi) - H[M],$   (6)

where

$D_{KL}(\theta, \pi) = \Big\langle \log \frac{p(M \,|\, R)}{\pi(M \,|\, R)} \Big\rangle_{R,M}$   (7)

is the Kullback-Leibler divergence between the assumed noise function $\pi(M|R)$ and the observed noise function $p(M|R)$, and $H[M] = -\langle \log p(M) \rangle_M$ is the entropy of the measurements, which does not depend on $\theta$ or $\pi$. To maximize $L$ it therefore suffices to maximize $I[R;M]$ over $\theta$ alone, then to set the noise function $\pi(M|R)$ equal to the empirical noise function $p(M|R)$, which causes $D_{KL}$ to vanish.

Thus, when we are uncertain about the noise function $\pi$, we need not despair. We can, if we like, simply learn $\pi$ at the same time that we learn $\theta$. We need not explicitly model $\pi$ in order to do this; it suffices instead to maximize the mutual information $I[R;M]$ over $\theta$ alone.

The connection between mutual information and likelihood can further be seen in a quantity called the “noise-averaged” likelihood. This quantity was first described for the analysis of microarray data Kinney:2007dh (); see also Kinney:2014ge (). The central idea is to put an explicit prior $p(\pi)$ on the space of possible noise functions, then compute likelihood after marginalizing over these noise functions. Explicitly, the per-datum log noise-averaged likelihood $L_{\text{na}}$ is related to the likelihood $L(\theta, \pi)$ via

$L_{\text{na}}(\theta) = \frac{1}{N} \log \int d\pi \; p(\pi)\, e^{N L(\theta, \pi)}.$   (8)

We will refer to $L_{\text{na}}$ simply as “noise-averaged likelihood” in what follows.

Under fairly general conditions, one finds that noise-averaged likelihood is related to mutual information via

$L_{\text{na}}(\theta) = I[R; M] - H[M] + \frac{\Delta(\theta)}{N}.$   (9)

Here, the effect of the noise function prior $p(\pi)$ is absorbed entirely by the term $\Delta(\theta)/N$. Under very weak assumptions, $\Delta(\theta)/N$ vanishes in the $N \to \infty$ limit and thus becomes irrelevant for the inference problem Kinney:2007dh (); Kinney:2014ge ().

6 Diffeomorphic modes

Mutual information has a mathematical property that is important to account for when using it as an objective function: the mutual information between any two variables is unchanged by an invertible transformation of either variable. So if a change in model parameters, $\theta \to \theta'$, results in a change in model predictions, $R \to R'$, that preserves the rank order of these predictions, then

$I[R; M] = I[R'; M],$   (10)

and $\theta$ and $\theta'$ are judged to be equally valid.

By using mutual information as an objective function, we are therefore unable to constrain any parameters of $\theta$ that, if changed, produce invertible transformations of model predictions. Such parameters are called “diffeomorphic parameters” or “diffeomorphic modes” Kinney:2014ge (). The distinction between diffeomorphic modes and nondiffeomorphic modes is illustrated in Fig. 5.

Figure 5: Illustration of diffeomorphic and nondiffeomorphic modes. (a) A diffeomorphic mode at a point in parameter space is a vector that will (regardless of the underlying data) be tangent to a level curve of $I[R;M]$. All other vectors correspond to nondiffeomorphic modes. (b) Moving along a nondiffeomorphic mode results in a sort of “diffusion” in which the activity values $R$ assigned to different sequences change rank order. Here, the probability distribution $p(R)$ is illustrated (for fixed $\theta$) in gray. The motion of individual $R$ values upon such a change in $\theta$ is indicated by arrows. (c) Changing $\theta$ along a diffeomorphic mode, however, results in a “flow” of $R$ values that maintains their rank order.

6.1 Criterion for diffeomorphic modes

Following Kinney:2014ge (), we now derive a criterion that can be used to identify all of the diffeomorphic modes of a model $\theta$. (Here, as throughout this paper, we restrict our attention to situations in which the model prediction $R$ is a scalar. The case of vector-valued model predictions is worked out in Kinney:2014ge ().) Consider an infinitesimal change in model parameters, $\theta \to \theta + d\theta$, where the components of $d\theta$ are specified by

$d\theta_i = \epsilon\, v_i$   (11)

for small $\epsilon$ and for some vector $\vec{v}$ in $\theta$-space. This change in $\theta$ will produce a corresponding change $dR$ in model predictions, where

$dR = \epsilon \sum_i v_i\, \frac{\partial R}{\partial \theta_i}.$   (12)

In general, the derivative $\partial R / \partial \theta_i$ can have arbitrary dependence on the underlying sequence $S$. This transformation will preserve the rank order of $R$-values only if $dR$ is the same for all sequences having the same value of $R$. The change $dR$ must therefore be a function of $R$ and have no other dependence on $S$. A diffeomorphic mode is a vector field $\vec{v}(\theta)$ that has this property at all points in parameter space. Specifically, a vector field $\vec{v}(\theta)$ is a diffeomorphic mode if and only if there is a function $h(R)$ such that

$\sum_i v_i\, \frac{\partial R}{\partial \theta_i} = h(R) \quad \text{for every sequence } S.$   (13)

6.2 Diffeomorphic modes of linear models

As a simple example, consider a situation in which each sequence $S$ is a $D$-dimensional vector and $R$ is an affine function of $S$, i.e.

$R = \theta_0 + \sum_{i=1}^{D} \theta_i\, S_i$   (14)

for model parameters $\theta = (\theta_0, \theta_1, \ldots, \theta_D)$. The criterion in Eq. (13) then gives

$v_0 + \sum_{i=1}^{D} v_i\, S_i = h(R).$   (15)

Because the left hand side is linear in $S$ and $R$ is linear in $S$, the function $h$ must be linear in $R$. Thus, $h$ must have the form

$h(R) = a + b\, R$   (16)

for some functions $a(\theta)$ and $b(\theta)$. The corresponding diffeomorphic mode is

$v_0 = a + b\,\theta_0, \qquad v_i = b\,\theta_i \quad (i = 1, \ldots, D),$   (17)

which has two degrees of freedom. Specifically, the $a$ component of this mode corresponds to adding a constant to $R$, while the $b$ component corresponds to multiplying $R$ by a constant.

Note that if we had instead chosen $R = \sum_{i=1}^{D} \theta_i S_i$, i.e. left out the constant component $\theta_0$, then there would be only one diffeomorphic mode, corresponding to multiplication of $R$ by a constant. This fact will be used when we analyze the Gaussian selection model in Section 8.

6.3 Diffeomorphic modes of a biophysical model of transcriptional regulation

Diffeomorphic modes can become less trivial in more complicated situations. Consider the biophysical model of transcriptional regulation by the E. coli lac promoter (Fig. 3). This model was fit to Sort-Seq data in Kinney:2010tn (). The form of this model is as follows. Let $S$ denote a matrix representing a DNA sequence of length $L$ and having elements

$S_{ib} = \begin{cases} 1 & \text{if base } b \text{ occurs at position } i \\ 0 & \text{otherwise,} \end{cases}$   (18)

where $i = 1, 2, \ldots, L$ and $b \in \{\mathrm{A}, \mathrm{C}, \mathrm{G}, \mathrm{T}\}$. The binding energy of CRP to DNA was modeled in Kinney:2010tn () as an “energy matrix”: each position in the DNA sequence was assumed to contribute additively to the overall energy. Specifically,

$E_C = \sum_{i,b} \theta^{C}_{ib}\, S_{ib},$   (19)

where the numbers $\theta^{C}_{ib}$ are the parameters of this energy matrix. Similarly, the binding energy of RNAP to DNA was modeled as

$E_R = \sum_{i,b} \theta^{R}_{ib}\, S_{ib}.$   (20)

Both energies were taken to be in thermal units ($k_B T$). The rate of transcription resulting from these binding energies was assumed to be proportional to the occupancy of RNAP at its binding site. This is given by

$R = t_{\max}\, \frac{e^{-(E_R - \mu_R)} + e^{-(E_R - \mu_R) - (E_C - \mu_C) - E_I}}{1 + e^{-(E_C - \mu_C)} + e^{-(E_R - \mu_R)} + e^{-(E_R - \mu_R) - (E_C - \mu_C) - E_I}},$   (21)

where $E_I$ is the interaction energy between CRP and RNAP (again in units of $k_B T$), $\mu_C$ and $\mu_R$ are the chemical potentials of CRP and RNAP, and $t_{\max}$ is the maximal transcription rate.

Because the binding sites for CRP and RNAP do not overlap, one can learn the energy matrix parameters $\theta^C$ and $\theta^R$ from data separately by independently maximizing $I[E_C; M]$ and $I[E_R; M]$. Doing this, however, leaves undetermined the overall scale of each energy matrix as well as the chemical potentials $\mu_C$ and $\mu_R$. The reason is that the energy scale and chemical potential are diffeomorphic modes of energy matrix models and therefore cannot be inferred by maximizing mutual information.

However, if $\theta^C$ and $\theta^R$ are inferred together by maximizing $I[R; M]$ instead, one is now able to learn both energy matrices with a physically meaningful energy scale. The chemical potential of CRP, $\mu_C$, is also determined. The only parameters left unspecified are the chemical potential of RNA polymerase, $\mu_R$, and the maximal transcription rate $t_{\max}$. The reason for this is that in the formula for $R$ in Eq. (21) the energies $E_C$ and $E_R$ combine in a nonlinear way. This nonlinearity eliminates three of the four diffeomorphic modes of $E_C$ and $E_R$. (The one additional diffeomorphic mode is created by the introduction of the parameter $t_{\max}$. See Kinney:2014ge () for the derivation of this result.)

6.4 Dual modes of the noise function

Figure 6: Venn diagram illustrating the degrees of freedom of the likelihood $L(\theta, \pi)$ considered over all possible data sets. Altering the model parameters $\theta$ will typically change $L$ in a way that cannot be recapitulated by changes in the noise function $\pi$. Similarly, changes in $\pi$ cannot typically be imitated by changes in $\theta$. However, diffeomorphic transformations of $\theta$ will affect $L$ in the exact same way that dual transformations of $\pi$ will. The diffeomorphic modes of $\theta$ and the dual modes of $\pi$ can therefore be thought of as lying within the intersection of the two sets of degrees of freedom.

Diffeomorphic transformations of model parameters $\theta$ can be thought of as being equivalent to certain transformations of the noise function $\pi$. Consider the transformation of model parameters

$\theta_i \;\to\; \theta_i + \epsilon\, v_i,$   (22)

where $\epsilon$ is an infinitesimal number and $\vec{v}$ is a vector in $\theta$-space. (For the sake of clarity we suppress the $\theta$-dependence of $\vec{v}$, $R$, and related quantities in what follows.) For any sequence $S$, this transformation induces a transformation of the model prediction

$R \;\to\; R + \epsilon \sum_i v_i\, \frac{\partial R}{\partial \theta_i}.$   (23)

To see the effect this transformation has on likelihood, we rewrite Eq. (2) as,

$L = \big\langle \log \pi(M \,|\, R) \big\rangle_{\text{data}},$   (24)

where $\langle \cdot \rangle_{\text{data}}$ indicates an average taken over the measurements and predictions for all of the sequences in the data set. The change in likelihood resulting from Eq. (23) is therefore given by

$\delta L = \epsilon\, \Big\langle \frac{\partial \log \pi(M \,|\, R)}{\partial R}\, \sum_i v_i\, \frac{\partial R}{\partial \theta_i} \Big\rangle_{\text{data}}.$   (25)

Now suppose that there is a transformed noise function $\pi'$ that has an equivalent effect on likelihood, i.e.,

$L(\theta, \pi') - L(\theta, \pi) = \delta L$   (26)

for all possible data sets. We say that this transformation of the noise function is “dual” to the transformation of model parameters. The transformed noise function will necessarily have the form

$\pi'(M \,|\, R) = \pi(M \,|\, R)\,\big[\,1 + \epsilon\, g(M, R)\,\big]$   (27)

for some function $g(M, R)$. To determine $g$ we consider the transformation of likelihood induced by $\pi \to \pi'$:

$\delta L = \epsilon\, \big\langle g(M, R) \big\rangle_{\text{data}}.$   (28)

Comparing Eq. (25) and Eq. (28), we see that $\pi \to \pi'$ will be dual to $\theta \to \theta + \epsilon\,\vec{v}$ for all possible data sets if and only if

$\frac{\partial \log \pi(M \,|\, R)}{\partial R}\, \sum_i v_i\, \frac{\partial R}{\partial \theta_i} \;=\; g(M, R)$   (29)

for all sequences $S$.

For a general choice of vector $\vec{v}$, no function $g$ will exist that satisfies Eq. (29). The reason is that $\sum_i v_i\, \partial R / \partial \theta_i$ will typically depend on the sequence $S$ independently of the value of $R$. In other words, for a fixed value of $R$ and $M$, the left hand side of Eq. (29) will retain a dependence on $S$. The right hand side, however, cannot have such a dependence. The converse is also true: for a general choice of the function $g$, no vector $\vec{v}$ will exist such that Eq. (29) is satisfied for all sequences. This is evident from the simple fact that $\vec{v}$ is a finite dimensional vector while $g$ is a function of the continuous quantity $R$ and therefore has an infinite number of degrees of freedom.

In fact, Eq. (29) will have a solution if and only if

$\sum_i v^{\text{dif}}_i\, \frac{\partial R}{\partial \theta_i} = h(R)$   (30)

for some function $h(R)$. Here we have added the superscript “dif” because this is precisely the definition of a diffeomorphic mode given in Eq. (13). In this case, the function $g$ dual to this diffeomorphic mode is seen to be

$g(M, R) = h(R)\, \frac{\partial \log \pi(M \,|\, R)}{\partial R}.$   (31)

These findings are summarized by the Venn diagram in Fig. 6. Arbitrary transformations of the model parameters $\theta$ will alter likelihood in a way that cannot be imitated by any change to the noise function $\pi$. The reverse is also true: most changes to $\pi$ cannot be imitated by a corresponding change in $\theta$. However, a subset of transformations of $\theta$ are equivalent to corresponding dual transformations of $\pi$. These transformations are precisely the diffeomorphic transformations of $\theta$. This partial duality between $\theta$ and $\pi$ has a simple interpretation: the choice of how we parse an experiment into an activity model and a noise function is not unique. The ambiguity in this choice is parameterized by the diffeomorphic modes of $\theta$ and the dual modes of $\pi$.

7 Error bars from likelihood, mutual information, and noise-averaged likelihood

Figure 7: Posterior distributions on model parameters resulting from various objective functions. Each panel schematically illustrates the posterior distribution (gray shaded area) as it relates to the correct model $\theta^*$ (dot) along both diffeomorphic (abscissa) and nondiffeomorphic (ordinate) directions in parameter space. (a) Likelihood computed with the correct noise function $\pi^*$ leads to a posterior distribution consistent with $\theta^*$ in all parameters. (b) Likelihood computed with a noise function that differs arbitrarily from $\pi^*$ will, in general, lead to a posterior distribution that is inconsistent with $\theta^*$ along both diffeomorphic and nondiffeomorphic modes. (c) Likelihood computed with a noise function that differs from $\pi^*$ only along a dual mode leads to a posterior that is inconsistent with $\theta^*$ along the diffeomorphic mode (parallel to dashed line), but consistent with $\theta^*$ in all other directions (perpendicular to dashed line). (d) Using mutual information gives a posterior that is consistent with $\theta^*$; this posterior places constraints similar to likelihood along nondiffeomorphic modes but places no constraints whatsoever along diffeomorphic modes. (e) Using noise-averaged likelihood results in a posterior distribution similar to mutual information but with weak constraints on diffeomorphic modes resulting from the noise function prior $p(\pi)$.

We now consider the consequences of performing inference using various objective functions at large but finite $N$. Specifically, we discuss the optimal parameters and corresponding error bars that are found by sampling from posterior distributions of the form

$p(\theta \,|\, \text{data}) \propto e^{N F(\theta)}$   (32)

for the following choices of the objective function $F(\theta)$:

  1. $F = L(\theta, \pi^*)$ is likelihood computed using the correct noise function $\pi^*$.

  2. $F = L(\theta, \pi)$, where $\pi$ differs from $\pi^*$ by a small but arbitrary error.

  3. $F = L(\theta, \pi)$, where $\pi$ differs from $\pi^*$ by a small amount along a dual mode.

  4. $F = I[R; M]$ is the mutual information between measurements and model predictions.

  5. $F = L_{\text{na}}(\theta)$ is the noise-averaged likelihood.

To streamline notation, we will use $\langle \cdot \rangle$ to denote averages computed in multiple different contexts. In each case, the appropriate context will be specified by a subscript. As above, $\langle \cdot \rangle_{\text{data}}$ will denote averaging over a specific data set. $\langle \cdot \rangle_{\overline{\text{data}}}$ will indicate averaging over an infinite number of data set realizations. Subscripts $S$, $R$, $M$, and $R,M$ will respectively denote averages over the distributions $p(S)$, $p(R)$, $p(M)$, and $p(R,M)$, the empirical distributions obtained in the infinite data limit. A subscript "post" will indicate an average computed over parameter values $\theta$ sampled from the posterior distribution $p(\theta \,|\, \text{data})$. Subscripts on variances or covariances should be interpreted analogously.

7.1 Likelihood

Consider Eq. (32) with $F = L(\theta, \pi^*)$ at large but finite $N$. The posterior distribution will, in general, be maximized at some choice of parameters $\hat{\theta}$ that deviates randomly from the correct parameters $\theta^*$. At large $N$, $p(\theta \,|\, \text{data})$ will become sharply peaked about $\hat{\theta}$ with a peak width governed by the Hessian of likelihood; specifically

$\big\langle (\theta_i - \hat{\theta}_i)(\theta_j - \hat{\theta}_j) \big\rangle_{\text{post}} = -\frac{1}{N}\,\big(H^{L\,-1}\big)_{ij},$   (33)

where

$H^{L}_{ij} = \frac{\partial^2 L}{\partial \theta_i\, \partial \theta_j}\bigg|_{\hat{\theta}}$   (34)

is the Hessian of the likelihood. It is also readily shown (see Appendix A) that this peak width is consistent with the correct parameters $\theta^*$, in the sense that

$\big\langle (\hat{\theta}_i - \theta^*_i)(\hat{\theta}_j - \theta^*_j) \big\rangle_{\overline{\text{data}}} = -\frac{1}{N}\,\big(H^{L\,-1}\big)_{ij}.$   (35)

In Appendix A we show that the Hessian of likelihood, Eq. (34), is given by

$H^{L}_{ij} = -\Big\langle J(R)\, \frac{\partial R}{\partial \theta_i}\, \frac{\partial R}{\partial \theta_j} \Big\rangle_{S},$   (36)

where

$J(R) = \sum_M \pi(M \,|\, R) \left( \frac{\partial \log \pi(M \,|\, R)}{\partial R} \right)^{\!2}$   (37)

is the Fisher information of the noise function $\pi$, evaluated at the activity value $R$. This Fisher information is a nonnegative measure of how sensitive our experiment is in the vicinity of $R$. (In what follows we assume that $J(R) > 0$ almost everywhere. This just reflects the assumption that our experiment actually does convey information about $R$ through the measurements it provides.) We thus see that, as long as the set of vectors $\nabla_\theta R$, one for each sequence $S$, spans all directions in parameter space, the Hessian matrix $H^L$ will be nonsingular. Using $F = L(\theta, \pi^*)$ will therefore put constraints on all directions in parameter space, and these constraints will shrink with increasing data as $1/\sqrt{N}$. This situation is illustrated in Fig. 7a.

Now consider what happens if instead we use a noise function $\pi$ that deviates from $\pi^*$ in a small but arbitrary way. Specifically, let

$\log \pi(M \,|\, R) = \log \pi^*(M \,|\, R) + \eta\, g(M, R)$   (38)

for some function $g(M, R)$ and small parameter $\eta$. It is readily shown (see Appendix A) that the maximum likelihood parameters $\hat{\theta}$ will deviate from $\theta^*$ by an amount

(39)

This expected deviation does not depend on $N$ and will therefore not shrink to zero in the large $N$ limit. Indeed, for any choice of $g$, there will always be an $N$ large enough such that this bias in $\hat{\theta}$ dominates over the uncertainty due to finite sampling.

Is there any restriction on the types of biases in $\hat{\theta}$ that can be produced by the choice of incorrect noise function $\pi$? In general, no. Because the Hessian matrix $H^L$ is nonsingular, one can always find a vector $\vec{u}$ such that the deviation of $\hat{\theta}$ from $\theta^*$ in Eq. (39) points in any chosen direction of $\theta$-space. As long as the functions

(40)

are linearly independent for different indices $i$, a function $g$ can always be found that generates the vector $\vec{u}$.

We therefore see that arbitrary errors in the noise function will bias the inference of model parameters in arbitrary directions. This fact presents a major concern for standard likelihood-based inference: if you assume an incorrect noise function , the parameters that you then infer will, in general, be biased in an unpredictable way. Moreover, the magnitude of this bias will be directly proportional to the magnitude of the error in the log of your assumed noise function. This problem is illustrated in Fig. 7b.

There is a case that deserves some additional consideration. Suppose we use a noise function $\pi$ that differs from $\pi^*$ only along a dual mode, i.e.,

$\log \pi(M \,|\, R) = \log \pi^*(M \,|\, R) + \eta\, h(R)\, \frac{\partial \log \pi^*(M \,|\, R)}{\partial R}.$   (41)

The maximum likelihood parameters $\hat{\theta}$ will still deviate from $\theta^*$ by an amount that does not shrink to zero in the $N \to \infty$ limit. However, this bias in parameter values will be restricted to the diffeomorphic mode $\vec{v}^{\text{dif}}$ to which this change in $\pi$ is dual, i.e.,

$\big\langle \hat{\theta}_i \big\rangle_{\overline{\text{data}}} - \theta^*_i \;\propto\; v^{\text{dif}}_i.$   (42)

This state of affairs is not so bad, since the incorrect noise function will lead to model parameters that are inaccurate only along modes that we already know we cannot learn from the data. This situation is illustrated in Fig. 7c; see Appendix A for the derivation of Eq. (42).

7.2 Mutual information

The constraints on parameters imposed by using mutual information as the objective function in Eq. (32) are determined by the Hessian

$H^{I}_{ij} = \frac{\partial^2 I[R; M]}{\partial \theta_i\, \partial \theta_j}.$   (43)

Appendix B provides a detailed derivation of this Hessian, which after some computation is found to be given by

$H^{I}_{ij} = -\Big\langle J(R)\, \Big( \frac{\partial R}{\partial \theta_i} - \overline{\frac{\partial R}{\partial \theta_i}} \Big) \Big( \frac{\partial R}{\partial \theta_j} - \overline{\frac{\partial R}{\partial \theta_j}} \Big) \Big\rangle_{S}.$   (44)

Here the overbar denotes an average over all sequences that share the same model prediction $R$. Comparing Eq. (44) and Eq. (36), we see that for any vector $\vec{v}$ in parameter space,

$-\sum_{ij} v_i v_j H^{L}_{ij} \;\geq\; -\sum_{ij} v_i v_j H^{I}_{ij}.$   (45)

Likelihood is thus seen to constrain parameters in all directions at least as much as mutual information does. As expected, mutual information provides no constraint whatsoever in the direction of any diffeomorphic mode $\vec{v}^{\text{dif}}$ of the model, since

$\sum_{ij} v^{\text{dif}}_i v^{\text{dif}}_j H^{I}_{ij} = 0.$   (46)

The converse is also true: if there is no constraint on parameters along a direction $\vec{v}$, then $\vec{v}$ must be a diffeomorphic mode. This is because

$-\sum_{ij} v_i v_j H^{I}_{ij} = \Big\langle J(R)\, \Big( \sum_i v_i\, \frac{\partial R}{\partial \theta_i} - \overline{\sum_i v_i\, \frac{\partial R}{\partial \theta_i}} \Big)^{\!2} \Big\rangle_{S}.$   (47)

Because $J(R)$ is positive almost everywhere, the right hand side of Eq. (47) can vanish only if $\sum_i v_i\, \partial R / \partial \theta_i$ does not differ between any two sequences that have the same value of $R$. There must therefore exist a function $h(R)$ such that $\sum_i v_i\, \partial R / \partial \theta_i = h(R)$ for all sequences. This is precisely the requirement in Eq. (13) that $\vec{v}$ be a diffeomorphic mode.

However, except along diffeomorphic modes, we can generally expect that the constraints provided by likelihood and by mutual information will be of the same magnitude. This situation is illustrated in Fig. 7d. Indeed, in the next section we will see an explicit example where all nondiffeomorphic constraints imposed by mutual information are the same as those imposed by likelihood.

Before proceeding, we note that the relationship between the Hessians of likelihood and mutual information suggests an analogy to fluid mechanics. Consider a trajectory in parameter space given by $\theta(t) = \theta + \vec{v}\,t$, where $t$ is time and $\vec{v}$ is a velocity vector pointing in the direction of motion. This motion in parameter space will induce a motion in the prediction $R = \theta(S)$ that the model provides for every sequence $S$. The set of sequences thus presents us with a dynamic cloud of “particles” moving about in $R$-space, each with velocity $dR/dt = \sum_i v_i\, \partial R / \partial \theta_i$. At $t = 0$, the quantity $\overline{(dR/dt)^2}$ will be proportional to the average kinetic energy of particles at location $R$. The quantity $\big(\overline{dR/dt}\big)^2$ will be proportional to the (per particle) kinetic energy of the bulk fluid element at $R$, a quantity that does not count energy due to thermal motion. In this way we see that $-\sum_{ij} v_i v_j H^{L}_{ij}$ is a weighted tally of total kinetic energy, whereas $-\sum_{ij} v_i v_j H^{I}_{ij}$ corresponds to a tally of internal thermal energy only, the kinetic energy of bulk motion having been subtracted out.

7.3 Noise-averaged likelihood

Noise-averaged likelihood provides constraints in between those of likelihood, computed using the correct noise function, and those of mutual information. This is illustrated in Fig. 7e. Whereas mutual information provides no constraints whatsoever on the diffeomorphic modes of $\theta$, noise-averaged likelihood provides weak constraints in these directions. These soft constraints reflect the Hessian of the term $\Delta(\theta)/N$ in Eq. (9). The constraints along diffeomorphic modes, however, have an upper bound on how tight they can become in the $N \to \infty$ limit. This is because such constraints only reflect our prior on the noise function, not the information we glean from data.

8 Worked example: Gaussian selection

Figure 8: Illustration of the Gaussian selection model of a massively parallel experiment. Each assayed sequence $S$ in this model is a $D$-dimensional vector. The library (corresponding to bin 0) consists of sequences drawn from a Gaussian distribution $p(S)$ that is centered on a specific sequence $\bar{S}$. Bin 1 consists of sequences drawn from the distribution $p(S)$ then enriched by a factor of $e^{R^*}$ where $R^* = \vec{a}^* \cdot S$. This enrichment procedure is analogous to selecting protein-bound DNA sequences, where $R^*$ is the negative of the binding energy. Calculations in the text are performed in the limit described in the text.

The above principles can be illustrated in the following analytically tractable model of a massively parallel experiment, which we call the “Gaussian selection model.” In this model, our experiment starts with a large library of “DNA” sequences $S$, each of which is actually a $D$-dimensional vector drawn from a Gaussian probability distribution. (For the sake of simplicity we set the covariance matrix of this distribution equal to the identity matrix; the more general case of a non-identity covariance matrix yields the same basic results. We also note that, while approximating discrete DNA sequences by continuous vectors might seem crude, it is only the marginal distributions that matter for the inference problem. Most of the quantities that one encounters in practice are computed by summing up contributions from a large number of different nucleotide positions. In such cases, the marginal distributions will often be nearly continuous and virtually indistinguishable from the marginal distributions one might obtain from a Gaussian sequence library.) Specifically, the library distribution is

$p(S) = \frac{1}{(2\pi)^{D/2}}\, \exp\!\Big( -\frac{|S - \bar{S}|^2}{2} \Big).$   (48)

Here, $\bar{S}$ is a $D$-dimensional vector defining the average sequence in the library. From this library we extract sequences into two bins, labeled 0 and 1. We fill bin 0 with sequences sampled indiscriminately from the library. Bin 1 is filled with sequences sampled from this library with relative probability

$p(\text{select} \,|\, S) \propto e^{R^*(S)},$   (49)

where the activity $R^*$ is defined as the dot product of $S$ with a $D$-dimensional vector $\vec{a}^*$, i.e.,

$R^*(S) = \vec{a}^* \cdot S.$   (50)

We use $N_b$ to denote the number of sequences in each bin $b \in \{0, 1\}$, along with $N = N_0 + N_1$ for the total number of sequences.

All of our calculations are performed in the limit where $N_1$ is large but $N_0$ is far larger. More specifically, we assume that $N_1\, p_1(S) \ll N_0\, p_0(S)$ everywhere that both of these bin-specific distributions are significant. We use $\lambda$ to denote the ratio

$\lambda = \frac{N_1}{N_0},$   (51)

and all of our calculations are carried out only to first order in $\lambda$. This model experiment is illustrated in Fig. 8.

Our goal is this: given the sampled sequences in the two bins, recover the parameters $\vec{a}^*$ defining the sequence-function relationship for $R^*$. To do this, we adopt the following model for the sequence-dependent activity $R$:

$R = \vec{a} \cdot S,$   (52)

where $\vec{a}$ is the $D$-dimensional vector we wish to infer. From the arguments above and in Kinney:2014ge (), it is readily seen that the magnitude of $\vec{a}$, i.e. $|\vec{a}|$, is the only diffeomorphic mode of the model: changing this parameter rescales $R$, which preserves rank order.

8.1 Bin-specific distributions

We can readily calculate the conditional sequence distribution $p_b(S)$ for each bin $b$, as well as the conditional distribution of model predictions. Because the sequences sampled for bin 0 are indiscriminately drawn from $p(S)$, we have

$p_0(S) = p(S).$   (53)

The selected distribution of sequences is found to be

$p_1(S) = \frac{1}{(2\pi)^{D/2}}\, \exp\!\Big( -\frac{|S - \bar{S} - \vec{a}^*|^2}{2} \Big).$   (54)

The value of is found to be related to , , and via

(55)

Appendix C provides an explicit derivation of Eq. (54) and Eq. (55).

We compute the distribution of model predictions for each bin as follows. For each bin $b$, this distribution is defined as

$p_b(R) = \int d^D S \; p_b(S)\, \delta\big(R - \vec{a} \cdot S\big).$   (56)

This can be analytically calculated for both of the bins owing to the Gaussian form of each sequence distribution. We find that

$p_0(R) = \frac{1}{\sqrt{2\pi\, |\vec{a}|^2}}\, \exp\!\Big( -\frac{(R - \vec{a} \cdot \bar{S})^2}{2\, |\vec{a}|^2} \Big),$   (57)
$p_1(R) = \frac{1}{\sqrt{2\pi\, |\vec{a}|^2}}\, \exp\!\Big( -\frac{(R - \vec{a} \cdot \bar{S} - \vec{a} \cdot \vec{a}^*)^2}{2\, |\vec{a}|^2} \Big).$   (58)

See Appendix C for details.

8.2 Noise function

To compute likelihood, we must posit a noise function $\pi(M \,|\, R)$. Based on our prior knowledge of the selection procedure, we choose $\pi$ so that

$\frac{\pi(1 \,|\, R)}{\pi(0 \,|\, R)} = e^{\alpha + \beta R},$   (59)

where $\alpha$ and $\beta$ are scalar parameters that we might or might not know a priori. This, combined with the normalization requirement $\pi(0 \,|\, R) + \pi(1 \,|\, R) = 1$, gives

$\pi(M \,|\, R) = \frac{e^{M(\alpha + \beta R)}}{1 + e^{\alpha + \beta R}}, \qquad M \in \{0, 1\}.$   (60)

This noise function is correct when $\beta = 1$ and $\alpha$ takes one specific value determined by the selection procedure. The parameter $\beta$ is dual to the diffeomorphic mode $|\vec{a}|$, whereas the parameter $\alpha$ is not dual to any diffeomorphic mode.

In the experimental setup used to motivate the Gaussian selection model, the parameter $\alpha$ is affected by many aspects of the experiment, including the concentration of the protein used in the binding assay, the efficiency of DNA extraction from the gel, and the relative amount of PCR amplification used for the bin 0 and bin 1 sequences. In practice, these aspects of the experiment are very hard to control, much less predict. From the results in the previous section, we can expect that if we assume a specific value for $\alpha$ and perform likelihood-based inference, inaccuracies in this value will distort our inferred model in an unpredictable (i.e., nondiffeomorphic) way. We will, in fact, see that this is the case. The solution to this problem, of course, is to infer $\vec{a}$ alone by maximizing the mutual information $I[R;M]$; in this case the values of $\alpha$ and $\beta$ become irrelevant. Alternatively, one can place a prior on $\alpha$ and $\beta$, then maximize the noise-averaged likelihood $L_{\text{na}}$. We now analytically explore the consequences of these three approaches.

8.3 Likelihood

Using the noise function in Eq. (60), the likelihood becomes a function of $\vec{a}$, $\alpha$, and $\beta$. Computing $L$ in the large-data, small-$\lambda$ limit described above, we find that

(61)

We now consider the consequences of various approaches for using $L$ to estimate $\vec{a}^*$. In each case, the inferred optimum will be denoted by a superscript ‘o.’ Standard likelihood-based inference requires that we assume a specific value for $\alpha$ and for $\beta$, then optimize $L$ over $\vec{a}$ alone by setting

$\frac{\partial L}{\partial a_i} = 0$   (62)

for each component $a_i$. By this criterion we find that the optimal model $\vec{a}^{\,o}$ is given by a linear combination of $\vec{a}^*$ and $\bar{S}$:

(63)

where the coefficient of $\bar{S}$ in this combination is a scalar that solves the transcendental equation

(64)

See Appendix C for the derivation of this result. Note that this coefficient is determined only by the value of $\alpha$ and not by the value of $\beta$. Moreover, it vanishes if and only if $\alpha$ is assigned its correct value.

If our assumed noise function is correct, i.e., if $\alpha$ and $\beta$ both take their correct values, then

$\vec{a}^{\,o} = \vec{a}^*.$   (65)

Thus, maximizing likelihood will identify the correct model parameters. This exemplifies the general behavior illustrated in Fig. 7a.

If $\alpha$ is correct but $\beta$ is not, our assumed noise function will differ from the correct noise function only in a manner dual to the diffeomorphic mode $|\vec{a}|$. In this case we find that the coefficient of $\bar{S}$ vanishes and

(66)

$\vec{a}^{\,o}$ is thus proportional but not equal to $\vec{a}^*$. This comports with our claim above that the diffeomorphic mode of the inferred model, i.e. $|\vec{a}^{\,o}|$, will be biased so as to compensate for the error in the dual parameter $\beta$. This finding follows the behavior described in Fig. 7c.

If $\alpha$ is incorrect, however, the coefficient of $\bar{S}$ does not vanish. As a result, $\vec{a}^{\,o}$ is a nontrivial linear combination of $\vec{a}^*$ and $\bar{S}$, and will thus point in a different direction than $\vec{a}^*$. This is true regardless of the value of $\beta$. This behavior is illustrated in Fig. 7b: errors in non-dual parameters of the noise function will typically lead to errors in nondiffeomorphic parameters of the activity model.

We now consider the error bars that likelihood places on model parameters. Setting $\alpha$ and $\beta$ to their correct values and expanding $L$ about $\vec{a} = \vec{a}^*$, we find that

(67)

Note that all eigenvalues of the matrix appearing in this expansion are greater than or equal to 1. Adopting the posterior distribution

$p(\vec{a} \,|\, \text{data}) \propto e^{N L}$   (68)

therefore gives a covariance matrix for $\vec{a}$ of

(69)

Thus, error bars shrink as $1/\sqrt{N\lambda}$ in all directions of $\vec{a}$-space. Although the precise covariance will change somewhat if $\alpha$ and $\beta$ deviate from their correct values, this same scaling behavior will hold. Therefore, when the noise function is incorrect and $N$ is sufficiently large, the finite bias introduced into $\vec{a}^{\,o}$ will cause $\vec{a}^*$ to fall outside the inferred error bars.

8.4 Mutual information

In the small-$\lambda$ limit, Eq. (4) simplifies to

$I[R; M] = \lambda \int dR \; p_1(R)\, \log \frac{p_1(R)}{p_0(R)} \;+\; O(\lambda^2).$   (70)

The lowest order term on the right hand side can be evaluated exactly using Eq. (57) and Eq. (58):

$I[R; M] = \lambda\, \frac{(\vec{a} \cdot \vec{a}^*)^2}{2\, |\vec{a}|^2} \;+\; O(\lambda^2).$   (71)

See Appendix C for details. Note that the expression on the right is invariant under a rescaling of $\vec{a}$. This reflects the fact that $|\vec{a}|$ is a diffeomorphic mode of the model defined in Eq. (52).

To find the model that maximizes mutual information, we set

$\frac{\partial I[R; M]}{\partial a_i} = 0 \quad \text{for all } i.$   (72)

The optimal model $\vec{a}^{\,o}$ must therefore be parallel to $\vec{a}^*$, i.e.

$\vec{a}^{\,o} \propto \vec{a}^*.$   (73)

Expanding $I[R;M]$ about $\vec{a} = \vec{a}^*$ as above, we find that

$I[R; M] = \frac{\lambda}{2}\Big( |\vec{a}^*|^2 - |\Delta\vec{a}_\perp|^2 \Big) + \ldots,$   (74)

where $\Delta\vec{a}_\perp$ is the component of $\vec{a} - \vec{a}^*$ perpendicular to $\vec{a}^*$; see Appendix C. Therefore, if we use the posterior distribution $p(\vec{a} \,|\, \text{data}) \propto e^{N I[R;M]}$ to infer $\vec{a}$, we find uncertainties in directions perpendicular to $\vec{a}^*$ of magnitude $\sim 1/\sqrt{N\lambda}$. These error bars are only slightly larger than those obtained using likelihood, and have the same dependence on $N$. However, we find no constraint whatsoever on the component of $\vec{a}$ parallel to $\vec{a}^*$. These results are illustrated by Fig. 7d.

8.5 Noise-averaged likelihood

We can also compute the noise-averaged likelihood, $L_{\text{na}}$, in the case of a uniform prior on $\alpha$ and $\beta$, i.e. $p(\alpha, \beta) = \text{const}$, where the constant is infinitesimal. We find that

(75)