Bayesian design of synthetic biological systems
Here we introduce a new design framework for synthetic biology that exploits the advantages of Bayesian model selection. We will argue that the difference between inference and design is that in the former we try to reconstruct the system that has given rise to the data that we observe, while in the latter, we seek to construct the system that produces the data that we would like to observe, i.e. the desired behavior. Our approach allows us to exploit methods from Bayesian statistics, including efficient exploration of model spaces and high-dimensional parameter spaces, and the ability to rank models with respect to their ability to generate certain types of data. Bayesian model selection furthermore automatically strikes a balance between complexity and (predictive or explanatory) performance of mathematical models. In order to deal with the complexities of molecular systems we employ an approximate Bayesian computation scheme which only requires us to simulate from different competing models in order to arrive at rational criteria for choosing between them. We illustrate the advantages resulting from combining the design and modeling (or in-silico prototyping) stages currently seen as separate in synthetic biology by reference to deterministic and stochastic model systems exhibiting adaptive and switch-like behavior, as well as bacterial two-component signaling systems.
As we are beginning to understand the mechanisms governing biological systems we are starting to identify potential ways of guiding or controlling the behavior of cellular and molecular systems. Rationally reengineering organisms for biomedical or biotechnological purposes has become the central aim of the fledgling discipline of synthetic biology. By redirecting regulatory and physical interactions or by altering molecular binding affinities we may, for example, control metabolic processes [1, 2] or alter intra- and inter-cellular communication and decision-making processes [3, 4]. The range of potential applications of such engineered systems is vast: designing microbes for biofuel production [5, 6] and bioremediation; developing control strategies which drive stem cells through the various decisions to become terminally differentiated (or back) [8, 9], with the aim of developing novel therapeutics [10, 11]; construction of new drug-delivery systems with homing microbes delivering molecular medicines directly to the site where they are needed; use of bacteria or bacterial populations (employing swarming and quorum sensing) as biosensors; and gaining better understanding of all manner of biological systems by systematically probing their underlying molecular machinery.
A range of tools and building blocks for such engineered biological systems are now available which allow us, at least in principle, to build such systems from simple and reusable biological components. In electronic systems, such modularity has been crucial and has allowed the cost-effective production of reliable components that can be combined to produce desired outputs. Biology, however, poses different and novel challenges that are intimately linked to the biophysical and biochemical properties of biomolecules and the media in which they are suspended. Especially in crowded environments such as those found inside living cells, the lack of insulation between different components, i.e. the very real possibility of undesired cross talk, can create problems; with increasing miniaturization, similar (albeit quantum) effects are now also surfacing in electronic circuits.
As synthetic biology gears up to bring engineering methods and tools to bear on biological problems, the way in which we manipulate biological systems and processes is likely to change. Historically, each new branch of engineering has gone through a phase of what can be described as tinkering before rationally planned and executed designs became commonplace. Arguably, this is the current state of synthetic biology, and it has indeed been suggested that the complexity of synthetic biological systems over the past decade has reached a plateau. From the earliest days, explicit quantitative modeling of systems has been integral to the vision and practice of synthetic biology, and it will become increasingly important in the future. The ability to model how a natural or synthetic system will perform under controlled conditions must in fact be seen as one of the hallmarks of success of the integrative approaches taken by systems and synthetic biology.
Here we present a statistical approach to the design of synthetic biological systems that utilizes methods from Bayesian statistics to train models according to specified input-output characteristics. It incorporates modeling and automated design and is general in the sense that it can be applied to any system that can be described by a mathematical model which can be simulated. Because of the statistical nature of this approach, previously challenging problems such as handling stochastic models, accounting for kinetic parameter uncertainty and incorporating environmental stochasticity can all be handled in a straightforward and consistent manner.
2 Bayesian approach to system design
The question of how to design a system to perform a specified task can be viewed as an analogue to reverse engineering. In design we want to elucidate the most appropriate system to achieve our design objectives; in reverse engineering we aim to infer the most probable system structure and dynamics that can give rise to some observational data. In this respect, the design question can be viewed as statistical inference on data we wish to observe.
In the Bayesian approach to statistical inference the posterior distribution is the quantity of interest, and it is given by the normalized product of the likelihood and the prior. In most practical applications the posterior distribution cannot be derived analytically, but if the likelihood (and prior) can be expressed mathematically we can use Monte Carlo methods to sample from the posterior. In many cases where the model structure is complex the likelihood cannot be written in closed form and traditional Monte Carlo techniques cannot be applied; this includes inference for the types of stochastic processes encountered in systems and synthetic biology. In these cases a family of techniques known collectively as approximate Bayesian computation (ABC) can be applied: these use model simulations to approximate the posterior distribution directly. Here we use a sequential Monte Carlo ABC algorithm known as ABC SMC to move from the prior to the approximate posterior via a series of intermediate distributions. This framework can also be used to perform Bayesian model selection and has been implemented in the software package ABC-SysBio.
Figure 1 depicts the approach presented here. The design objectives are first specified through input-output characteristics. Here these have been depicted as a single time series, though the method can be applied in a much broader sense with multiple inputs and outputs. A set of competing designs is then specified through deterministic or stochastic models, each containing a set of kinetic parameters and associated prior distributions. The distance function measures the discrepancy between the model output and the objective. In principle it is possible to specify a distribution over the objective, and each model could also contain experimental error. The ABC SMC algorithm then automatically evolves the set of models towards the desired design objectives. The results are a set of posterior probabilities representing the probability for each design to achieve the specified design objectives, in addition to the posterior probability distributions of the associated kinetic parameters. This approach is similar in spirit to some existing methods for the automated design of genetic networks, such as those adopting evolutionary algorithms [20, 21], Monte Carlo methods [22, 23] or optimization [24, 25, 26]. The advantage of our method over these is that we can bring powerful concepts from Bayesian statistics to bear on the design of complex biological systems, including:
the rational comparison of models under parameter uncertainty using Bayesian model selection, which automatically accounts for model complexity (number of parameters) and robustness to parameter uncertainty;
a posterior distribution over possible design parameter values that can be analyzed for parameter sensitivity and robustness and that provides credible limits on design parameters;
the treatment of stochastic systems at the design stage, including the design of systems with required probability distributions on system components;
methods for the efficient exploration of high-dimensional parameter spaces.
In the following we demonstrate the power of this approach by examining, from this new perspective, systems that have been of interest in the recent literature. First we consider systems that are capable of biochemical adaptation, followed by robust oscillator design. We then look at the ability of two bacterial two-component system (TCS) topologies to achieve particular input-output behaviors; finally, we analyze designs for a stochastic toggle switch with no cooperative binding at the promoter.
3 Biochemical adaptation
Biochemical adaptation refers to the ability of a system to respond to an input signal and return to the pre-stimulus steady state (Figure 1A). Ma et al. identified two three-node network topologies that are necessary for biochemical adaptation: a negative feedback loop with a buffering node (NFBLB) and an incoherent feedforward loop with a proportioner node (IFFLP). Within these categories they identified eleven simple networks that were capable of adaptation (Figure 2A). We applied the Bayesian design approach to these eleven networks using Michaelis-Menten kinetic models with and without cooperativity (see appendix B for the ordinary differential equations (ODEs) describing these models). The desired output characteristics were defined through the adaptation efficiency, $E$, and the sensitivity, $S$.
Here $I_1$ and $I_2$ are the input values (fixed at 0.5 and 0.6 respectively), $O_1$ and $O_2$ are the output steady-state levels before and after the input change, and $O_{peak}$ is the maximal transient output level. We defined a two-component distance from these quantities such that, as it decreases, the behavior approaches the desired behavior. The final population was defined by tolerances corresponding to close-to-perfect adaptation and a fractional response equal to the fractional change in input.
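These objectives can be turned into a concrete computation. The sketch below is an assumed formalization following the definitions of Ma et al.; the exact expressions used in the original are not reproduced here, so treat the formulas as illustrative:

```python
import numpy as np

def adaptation_scores(t, output, t_step, I1=0.5, I2=0.6):
    """Compute sensitivity S and adaptation efficiency E from the response
    to a step input I1 -> I2 applied at time t_step.  The expressions
    follow the definitions of Ma et al.; they are an assumed
    formalization of the objectives described in the text."""
    pre = output[t < t_step]
    post = output[t >= t_step]
    O1 = pre[-1]                # steady state before the step
    O2 = post[-1]               # steady state after the step
    O_peak = post.max()         # maximal transient output
    rel_input = abs((I2 - I1) / I1)
    S = abs((O_peak - O1) / O1) / rel_input   # sensitivity of the response
    E = abs((O2 - O1) / O1) / rel_input       # deviation from perfect adaptation
    return S, E
```

Perfect adaptation corresponds to $E \to 0$ while $S$ stays large, so a two-component distance can be built directly from these scores.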
The results of the model selection are shown in Figure 2 (B and C). When cooperativity is not included, the most robust designs for producing the desired input-output characteristics are the incoherent feedforward loops, but when cooperativity is added the posterior shifts significantly towards the negative feedback topology. If a system with these requirements were to be implemented, designs 11 and 4 would be clear candidates for further study; moreover, many of the designs could be effectively ruled out, and the ranking of the models provides a clear strategy for an experimental programme. These results also illustrate how small changes in context, or incomplete understanding of a system, can produce a large change in the most robust design. The Bayesian framework allows us to incorporate such uncertainty (or safeguard against our ignorance) naturally into the design process.
The posterior distribution provides information on which parameters are correlated and to which parameters the desired behavior is most sensitive. The posterior for model 11 under no cooperativity is shown in Figure 2D; the corresponding ODE model is given in full in appendix B.
In this model, $A$, $B$ and $C$ correspond to the concentrations of the active forms of the three proteins, with the inactive forms following from conservation of each total concentration. $I$ represents the input signal and the remaining symbols represent the reaction rate parameters (of which there are 12 in total). Background deactivating enzymes, with concentration fixed to 0.5, act on the nodes.
The posterior shows in particular that the parameters for the background deactivating enzyme acting on node B are strongly correlated and constrained; this is exactly the requirement for the linear regime necessary for the IFFLP system to achieve adaptation. A principal component analysis of the posterior (Figure S1) shows other correlated parameters on the first few principal components. The last principal component describes the direction of least variance and therefore the most sensitive parameters. From this we can deduce, for example, that the reaction between nodes A and C is relatively unimportant. A similar analysis for model 4 in the case of cooperativity (Figures S2 and S3) shows, for example, that the behavior is insensitive to the values of the Hill coefficients and to the details of the reaction between nodes A and C.
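The PCA-based sensitivity analysis described above can be sketched as follows; `posterior_pca` is a hypothetical helper operating on an array of accepted particles (rows) by parameters (columns):

```python
import numpy as np

def posterior_pca(samples):
    """Principal component analysis of posterior samples (rows = accepted
    particles, columns = parameters).  Components are returned in order of
    decreasing variance, so the last column of `eigvecs` spans the
    direction of least variance, i.e. the stiffest (most sensitive)
    parameter combination."""
    cov = np.cov(samples, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigh returns ascending order
    return eigvals[::-1], eigvecs[:, ::-1]     # reorder to descending variance
```

Directions with large eigenvalues are "sloppy" (the behavior tolerates variation along them); the last component identifies the parameter combination the design is most sensitive to.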
4 Robust oscillator design
Biochemical oscillations are increasingly being implemented in various synthetic systems [28, 29, 30, 31]. A recent study by Tsai et al. compared the ability of five small networks to achieve oscillations. The five designs are shown in Figure 3A, where each node represents the active form of a protein, edges represent enzymatic reactions and thicker edges represent increased feedback strength. We applied our Bayesian design methodology to the original problem and further investigated the ability of these designs, specified in detail in appendix C, to achieve particular amplitude-frequency values.
Figure 3B shows the posterior probability for each model to achieve limit cycle behavior induced by a Hopf bifurcation. The addition of the negative feedback loop in models 4 and 5 does not improve the ability to achieve oscillations. We find that the addition of a positive feedback loop on species A in models 2 and 3 increases the ability of the system to achieve limit cycle behavior, but no significant increase in the posterior probability is provided by increasing the feedback strength. This conflicts with the original study, which found that model 3 outperformed model 2. Our approach samples parameter space predominantly in regions where the desired behavior is more likely, rather than entirely at random as in the previous study; on balance this suggests that the posterior probability for delivering robust oscillations is approximately the same for models 2 and 3.
More insight can be gained into this discrepancy by specifying a particular frequency and amplitude of the oscillator as the desired output behavior. Figures 3C and D show the model posterior probability after requiring an amplitude of 0.1 and a frequency of 1.0 Hz on species A and C, respectively. The first thing to note is that the model posterior is significantly different in the two cases. When the constraints are applied to species A, model 3 is favored, with the increase in feedback strength decreasing the ability to reach the specified behavior. When the conditions are applied to species C (and species B by symmetry) we get a posterior that more closely resembles the original findings: the increase in feedback strength does indeed increase the ability to reach the specified oscillations. Thus the posterior for the Hopf bifurcation behavior represents a sum over all possible oscillator characteristics, in a manner that is reminiscent of Bayesian model averaging.
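A specified amplitude and frequency can be turned into a distance function for ABC along the following lines; the FFT-based frequency estimate and the peak-to-peak amplitude are illustrative choices, not necessarily those used in the original analysis:

```python
import numpy as np

def oscillation_characteristics(t, x):
    """Estimate the amplitude and dominant frequency of a (roughly
    periodic) trajectory sampled on a uniform time grid."""
    amplitude = (x.max() - x.min()) / 2.0
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), t[1] - t[0])
    frequency = freqs[spectrum.argmax()]       # dominant spectral peak
    return amplitude, frequency

def oscillation_distance(t, x, target_amp=0.1, target_freq=1.0):
    """Two-component distance to the specified oscillator behavior."""
    amp, freq = oscillation_characteristics(t, x)
    return abs(amp - target_amp), abs(freq - target_freq)
```

Each simulated trajectory is reduced to an (amplitude, frequency) summary, and the two-component distance drives the population towards the specified oscillation.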
If we examine the posterior distribution and the principal component analysis for model 2 to achieve Hopf bifurcations (Figures S4 and S5), we can see that the strengths of the deactivating reactions on nodes A and B are constrained to be similar in magnitude. We also see that within this model the feedback strength does not affect the dynamics significantly. Here, and elsewhere, we can use the posterior distributions to gain insights into the sensitivity and robustness of the system to variations in parameters, irrespective of whether the system's dynamics are deterministic or stochastic: our ABC SMC framework allows us to extract such information on the fly as part of the sequential design process.
5 Bacterial two component systems
Two-component systems (TCS) allow bacteria to sense external environmental stimuli and relay information into the cell, e.g. to the gene expression apparatus. They consist of a histidine kinase (HK) that autophosphorylates upon interaction with a specific stimulus. The phosphate group is then passed on to a cognate response regulator protein (RR) which, once phosphorylated, can regulate transcription.
Naturally occurring TCS differ in the number of phosphate binding domains. In the most common form there are two phosphate binding domains (Figure 4A), but an alternative form exists that consists of a phosphorelay mechanism with four phosphate binding domains (Figure 4B). These shall be referred to as the orthodox and unorthodox TCSs, respectively. The reason for the existence of two forms of TCS remains largely unknown, but it has been demonstrated that the phosphorelay is robust to noise and can provide an ultrasensitive response to stimuli [34, 35], whereas the orthodox system can provide behavior that is independent of the concentrations of its components. Here we have applied the Bayesian approach to directly compare the ability of orthodox and unorthodox designs to achieve various input-output behaviors, using ODE models similar to ones described previously (see appendix D).
Figures 4C-F show four types of behaviors that may be desired in synthetic TCS systems (e.g. for bioremediation or biopharmaceutical applications), and the corresponding posterior probabilities of the orthodox and unorthodox models to achieve them. In Figure 4C the specified behavior is a fast response to a square pulse input signal; that is, the output should show a maximum within 0.1 seconds after the pulse starts and a minimum within 0.1 seconds after the pulse ends. As can be seen from the posterior probabilities, both models achieve this behavior easily, as one would expect from a signaling system, with the orthodox system slightly outperforming the unorthodox system. In Figure 4D the ability of the two systems to achieve a steady output state under a constant input signal is examined, and again both systems perform comparably, with the orthodox system appearing slightly more favorable.
In Figure 4E the input signal is a high-frequency sinusoid with a mean of 0.6, and the desired output is a constant signal with the same mean and root mean square; this mimics a system that is robust to high-frequency noise. The output trajectories at some intermediate and final populations are shown in Figure S6. In this example the unorthodox system clearly outperforms the orthodox system, which indicates the increased robustness to noisy signals that comes with the relay architecture. But the direct comparison of the two models' ability to cope with noise, which becomes possible in this approach, also reveals some unexpected characteristics: inspection of the posterior distribution (Figure S7) shows that all the dephosphorylation reaction rates are minimized while the rate of the signal-induced autophosphorylation is large. Thus the noise reduction mechanism in the unorthodox system works by saturating the system.
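The saturation mechanism can be illustrated with a deliberately simplified, hypothetical one-step phosphorylation model (not the TCS models of appendix D): when signal-driven phosphorylation is fast and dephosphorylation is slow, the output pins near its maximum and the oscillating input barely passes through.

```python
import numpy as np

def simulate_saturated_relay(k_s=100.0, k_d=0.1, dt=1e-3, t_end=10.0):
    """Forward-Euler simulation of a hypothetical one-step phosphorylation
    model (much simpler than the TCS models in the text):
        dRp/dt = k_s * S(t) * (1 - Rp) - k_d * Rp.
    With fast signal-driven phosphorylation (large k_s) and slow
    dephosphorylation (small k_d) the output saturates near 1 and the
    oscillating input is barely transmitted."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    S = 0.6 + 0.4 * np.sin(2 * np.pi * 5.0 * t)   # high-frequency input
    Rp = np.zeros(n)
    for i in range(1, n):
        Rp[i] = Rp[i - 1] + dt * (k_s * S[i - 1] * (1.0 - Rp[i - 1])
                                  - k_d * Rp[i - 1])
    return t, S, Rp
```

After the fast initial transient the output sits near full phosphorylation with far less variation than the input, which is the qualitative signature of noise rejection by saturation.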
In Figure 4F the input is again a step function but the output requirement is more specific: the output must rise and then drop within 0.5 seconds of the pulse start and end, respectively, thus approximately reproducing the input. Figure S8 shows the evolution of the system in this case. Here the orthodox model clearly outperforms the unorthodox model. Inspection of the posterior distribution (Figure S9) shows that both the rate of the signal-induced autophosphorylation and the rate of phosphorylation of the response regulator by the histidine kinase are large, while the rate of dephosphorylation of the response regulator is small. This ensures that the shape of the signal is transferred faithfully through the system.
6 Stochastic genetic toggle switch without cooperativity
The genetic toggle switch is a synthetic realization of a bistable switch that forms the basis of cellular memory. It is formed by two genes whose protein products mutually repress one another: each protein represses the production of the other (Figure 5). The presence of an interaction with inducer molecules allows the system to switch between steady states, with the probability of spontaneous switching low enough that, in the absence of such an interaction, the system will effectively remain in its current steady state indefinitely.
Here we consider four versions of the stochastic genetic toggle switch that are all bistable without the requirement for cooperative binding of the proteins to the gene promoter (note that the corresponding deterministic models are not necessarily bistable). These are shown in Figure 5A and consist of the basic toggle switch, an exclusive version containing only one promoter, a version that includes bound repressor degradation (BRD), and a version containing a protein-protein interaction between the two proteins, with the resulting complex nonfunctional. The additional reactions always act to reduce the probability of the 'deadlock' state in which both proteins are simultaneously bound to the opposing promoters. We modeled the switches using a continuous-time Markov jump process obeying the chemical master equation. Only protein-level reactions were modeled, which makes the models simpler (and faster to simulate) while retaining the important behavior. The stochastic models for all four switches are given in appendix E. For such complicated stochastic dynamical systems the advantages of a Bayesian perspective over conventional model design strategies (based e.g. on optimization) come to the fore: without an appreciation of the whole distribution, the choice of the best model would be subject to considerable uncertainty.
Figure 5B shows the desired toggle switch behavior. The first inducer is added during an initial time window, after which the level of the reporter protein should reach a steady state with a mean number of 40 molecules. The second inducer is then added during a later window, after which the protein level should drop to zero. Both inducer numbers are fixed at 40 molecules over the specified ranges. The desired output behavior was specified via a two-component distance metric,
in which the first component, $d_1$, represents the distance from the target protein number (here fixed at 40 molecules) in the "on" region, and the second component, $d_2$, represents the distance from zero in the "off" region. The final population was defined by a tolerance on both components.
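As a sketch, such a two-component distance might be computed from a simulated trajectory as follows; the window times and the use of the mean absolute deviation are illustrative assumptions:

```python
import numpy as np

def toggle_distance(t, u, target=40.0, t_on=(2.0, 5.0), t_off=(7.0, 10.0)):
    """Two-component distance for the switch objective: d1 penalizes
    deviation from `target` molecules while the first inducer holds the
    switch 'on'; d2 penalizes deviation from zero after the second inducer
    switches it 'off'.  Window times and the mean absolute deviation are
    illustrative choices."""
    on = (t >= t_on[0]) & (t <= t_on[1])
    off = (t >= t_off[0]) & (t <= t_off[1])
    d1 = np.abs(u[on] - target).mean()   # distance in the "on" region
    d2 = np.abs(u[off]).mean()           # distance in the "off" region
    return d1, d2
```

A trajectory matching the objective exactly returns (0, 0), and the tolerance on the final population bounds both components.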
Figure S10 shows the evolution of the stochastic simulations towards the desired behavior. Figure 5C shows the posterior probabilities for each system to achieve the specified behavior, and in particular demonstrates that the exclusive toggle switch outperforms all the others. This chimes with intuition, since the exclusive switch removes the possibility of the deadlock state without the addition of any extra reactions. The fact that the BRD switch performs worse than the original toggle shows that the addition of the two extra degradation reactions does not offer a great enough performance increase for the extra parameters, a manifestation of the parsimony principle or Occam’s razor which is inherent to the Bayesian model selection framework used here.
The reactions comprising the exclusive switch consist of production and degradation of each protein, binding of either protein, as a repressing transcription factor, to the promoter of the opposing gene, and sequestration of the proteins by the inducer molecules; the promoter region is fixed at one copy, and in the exclusive switch this single shared region ensures that only one of the two repressors can be bound at any one time (the full reaction list is given in appendix E). Examination of the posterior distribution for this model, Figure 5D, clearly shows a large correlation between the production and decay rates of one of the proteins. This is also seen in the principal components (Figure S11), where a combination of these two rates dominates both the first and the last PC. Thus the system is sensitive only to the difference in these rates, which is typical of birth-death processes.
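A minimal Gillespie (stochastic simulation algorithm) sketch for a single birth-death protein species, a toy stand-in for one arm of the switch rather than the full model above, illustrates the kind of simulation underlying these designs; the rate values are chosen only so that the stationary mean is 40 molecules, matching the design target:

```python
import numpy as np

def gillespie_birth_death(k_prod=20.0, k_dec=0.5, t_end=200.0, seed=1):
    """Gillespie simulation of a single protein: production 0 -> P at rate
    k_prod and decay P -> 0 at rate k_dec * n.  A toy stand-in for one arm
    of the toggle switch; the stationary mean is k_prod / k_dec = 40."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, 0
    times, counts = [0.0], [0]
    while t < t_end:
        a_prod, a_dec = k_prod, k_dec * n    # reaction propensities
        a_tot = a_prod + a_dec
        t += rng.exponential(1.0 / a_tot)    # waiting time to next reaction
        if rng.random() < a_prod / a_tot:    # choose which reaction fires
            n += 1
        else:
            n -= 1
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)
```

In ABC SMC each accepted particle corresponds to many such stochastic simulations, so the posterior naturally reflects the distribution of trajectories rather than a single deterministic path.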
In this paper we have presented a new method for the design of synthetic biological systems employing ideas from Bayesian statistics. We have demonstrated its utility and generality on systems spanning biochemical, signaling and genetic networks, as well as oscillatory systems. This method has advantages over traditional design approaches in that the modeling is incorporated directly into the design stage. The statistical nature of the method has many attractive features, including the handling of stochastic systems, the ability to perform model selection, and the handling of parameter uncertainty in a well-defined manner. We used the ABC-SysBio software, which takes as input a set of SBML files and as such can be used by bioengineers and experimentalists to rationally compare their competing designs for a system. We hope that this method can reduce the implementation time of synthetic systems by defining a programme of experimental work based on the posterior probabilities of each design.
Monte Carlo sampling of parameter spaces has been used in the past to assess the robustness of engineered and biological systems. But, as in the statistical setting, simple Monte Carlo sampling tends to waste effort on regions that are of no real interest for reverse-engineering or design purposes. Our statistically based sequential approach homes in on those regions where the probability of observing the desired behavior is appreciable. This allows a more nuanced comparative assessment of different design proposals, especially when dynamics are expected (or indeed desired) to exhibit elements of stochasticity. And the Bayesian model selection approach automatically strikes a balance between the systems' abilities to generate the desired behavior effectively but also robustly.
Further developments will include the incorporation of methods for model abstraction to reduce computation time, and the handling of a database of standard parts as in other existing design software systems [40, 41]. Moreover, it is also possible to include the generation of novel structures (e.g. by using stochastic context-free grammars to propose alterations to a reaction/interaction network) as part of the design process. Just as ideas from control engineering and statistics can gainfully be combined to reverse-engineer the structure of naturally evolved biological systems, we feel that such a union will also be fruitful in the design of synthetic systems.
Appendix A Approximate Bayesian Computation: ABC SMC
Here we outline the background behind approximate Bayesian computation (ABC) and describe the ABC SMC algorithm, which is implemented in the software package ABC-SysBio. ABC methods have been developed to infer posterior distributions in cases where likelihood functions are computationally intractable or too costly to evaluate. They replace the calculation of the likelihood with a comparison between observed and simulated data.
Let $\theta$ be a parameter vector with prior $\pi(\theta)$ and $f(x|\theta)$ be the likelihood of the data $x$. In Bayesian inference we are interested in the posterior density
$$\pi(\theta|x) = \frac{f(x|\theta)\pi(\theta)}{\int f(x|\theta)\pi(\theta)\,\mathrm{d}\theta}.$$
Now imagine the case where we cannot write down the likelihood in closed form but we can simulate from the data-generating model. We can proceed by first sampling a parameter vector from the prior, $\theta^* \sim \pi(\theta)$, and then sampling a data vector, $x^*$, from the model conditional on $\theta^*$, i.e. $x^* \sim f(x|\theta^*)$. This alone gives the joint density $f(x^*|\theta^*)\pi(\theta^*)$. To obtain samples from the posterior distribution we must condition on the data, and this is done via an indicator function, i.e.
$$\pi(\theta|x) \propto \mathbb{1}_A(x^*)\, f(x^*|\theta)\pi(\theta),$$
where $\mathbb{1}_A(x^*)$ denotes the indicator function, equal to 1 for $x^* \in A$ and 0 otherwise. Here $A = \{x\}$, so the indicator is equal to one when the simulated data and the observed data are identical. This forms a rejection algorithm, and in this instance the accepted $\theta^*$ are from the true posterior density $\pi(\theta|x)$.
For most models it is impossible to achieve simulations with outputs in the subset $A = \{x\}$, and so an approximation must be made. This is the basis for ABC. In the first instance we can replace $A$ by $A_\epsilon = \{x^* : d(x^*, x) \le \epsilon\}$, where $d(\cdot,\cdot)$ is a distance function comparing the simulated data to the observed data. We then have
$$\pi_\epsilon(\theta|x) \propto \mathbb{1}_{A_\epsilon}(x^*)\, f(x^*|\theta)\pi(\theta),$$
where $\pi_\epsilon(\theta|x)$ is an approximation to the true posterior distribution. The rationale behind ABC is that if $\epsilon$ is small then the resulting approximate posterior, $\pi_\epsilon(\theta|x)$, is close to the true posterior. Often, for complex models or stochastic systems, the subset $A_\epsilon$ is still too restrictive. In these cases we can resort to comparisons of summary statistics: the subset is now specified through $d(S(x^*), S(x)) \le \epsilon$, where $S(\cdot)$ is a summary statistic, and we often write the resulting marginal posterior distribution as $\pi_\epsilon(\theta|S(x))$.
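A summary-statistic distance might look as follows; the mean and variance are illustrative (and generally insufficient) summaries:

```python
import numpy as np

def summary(x):
    """A simple (and generally insufficient) summary statistic S(x)."""
    return np.array([x.mean(), x.var()])

def summary_distance(x_sim, x_obs):
    """Euclidean distance between summaries, d(S(x*), S(x))."""
    return float(np.linalg.norm(summary(x_sim) - summary(x_obs)))
```

Reducing the data to summaries trades exactness for tractability: two datasets with matching summaries are treated as equally close, which is why sufficiency of the statistic matters.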
A.2 ABC SMC
The simplest ABC algorithm is known as the ABC rejection algorithm and proceeds as follows:
R1 Sample $\theta^*$ from $\pi(\theta)$.
R2 Simulate a dataset $x^*$ from $f(x|\theta^*)$.
R3 If $d(x^*, x) \le \epsilon$, accept $\theta^*$; otherwise reject.
R4 Return to R1.
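A minimal runnable version of R1-R4 for a toy problem (one observation $x^* \sim N(\theta, 1)$ with prior $U(-10, 10)$; both are illustrative choices, not a model from the main text) is:

```python
import numpy as np

def abc_rejection(x_obs, n_accept=500, eps=0.5, seed=0):
    """ABC rejection (R1-R4) for a toy problem: the 'model' draws a single
    observation from N(theta, 1), the prior is U(-10, 10) and the distance
    is |x* - x_obs| (all illustrative choices)."""
    rng = np.random.default_rng(seed)
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.uniform(-10, 10)       # R1: sample theta* from the prior
        x_sim = rng.normal(theta, 1.0)     # R2: simulate a dataset x*
        if abs(x_sim - x_obs) <= eps:      # R3: accept if d(x*, x) <= eps
            accepted.append(theta)         # R4 is the loop itself
    return np.array(accepted)              # draws from pi_eps(theta | x)
```

With a wide prior most proposals are rejected, which is exactly the inefficiency that motivates the sequential scheme described next.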
This gives draws from $\pi_\epsilon(\theta|x)$ but can be very inefficient in high-dimensional models or when the overlap between the prior and posterior distributions is small. One way to improve the efficiency of the rejection algorithm is to perform sequential importance sampling (SIS). In SIS, instead of sampling directly from the posterior distribution, sampling proceeds via a series of intermediate distributions. The importance distribution at each stage is constructed from a perturbed version of the previous population. This approach can be used in ABC and the resultant algorithm is known as ABC SMC. Described here is a slightly modified version that automatically calculates the tolerance schedule $\epsilon_1 > \epsilon_2 > \dots > \epsilon_T$ and, as such, only the final value, $\epsilon_T$, need be specified. To obtain $N$ samples (known as particles) from the posterior, defined as $\pi_{\epsilon_T}(\theta|x)$, proceed as follows:
S1 Initialize $\epsilon_1$. Set the population indicator $t = 1$.
S2.0 Set the particle indicator $i = 1$.
S2.1 If $t = 1$, sample $\theta^{**}$ independently from $\pi(\theta)$. If $t > 1$, sample $\theta^*$ from the previous population $\{\theta_{t-1}^{(j)}\}$ with weights $\{w_{t-1}^{(j)}\}$ and perturb the particle to obtain $\theta^{**} \sim K_t(\theta|\theta^*)$, where $K_t$ is the perturbation kernel. If $\pi(\theta^{**}) = 0$, return to S2.1. Simulate a candidate dataset $x^* \sim f(x|\theta^{**})$. If $d(x^*, x) > \epsilon_t$, return to S2.1.
S2.2 Set $\theta_t^{(i)} = \theta^{**}$ and calculate its weight as
$$w_t^{(i)} = \begin{cases} 1 & \text{if } t = 1,\\ \dfrac{\pi(\theta_t^{(i)})}{\sum_j w_{t-1}^{(j)} K_t(\theta_t^{(i)}|\theta_{t-1}^{(j)})} & \text{if } t > 1. \end{cases}$$
If $i < N$, set $i = i + 1$ and go to S2.1.
S3 Normalize the weights. Determine $\epsilon_{t+1}$ from the distances attained by the current population such that $\epsilon_{t+1} < \epsilon_t$. If $\epsilon_{t+1} > \epsilon_T$, set $t = t + 1$ and go to S2.0.
Here $K_t$ is a component-wise uniform random walk perturbation kernel whose width is set from the spread of the previous population. The denominator in the weight calculation can be seen as the probability of observing the current particle given the previous population.
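For concreteness, the scheme above (S1-S3) can be sketched on the same one-parameter toy problem (prior $U(-10, 10)$, one observation $x^* \sim N(\theta, 1)$); a fixed tolerance schedule replaces the adaptive schedule described above, and the uniform kernel width is an illustrative choice:

```python
import numpy as np

def abc_smc(x_obs, eps_schedule=(4.0, 2.0, 1.0, 0.5), n_part=300, seed=0):
    """Toy ABC SMC: prior U(-10, 10), model simulates one draw from
    N(theta, 1).  Fixed tolerance schedule for clarity; the version in
    the text chooses the next epsilon adaptively from the population."""
    rng = np.random.default_rng(seed)
    prior_pdf = 1.0 / 20.0                # density of U(-10, 10)
    thetas, weights = None, None
    for t, eps in enumerate(eps_schedule):
        if t > 0:
            # uniform perturbation kernel; width from the previous population
            sigma = 0.5 * (thetas.max() - thetas.min())
        new_thetas, new_weights = [], []
        while len(new_thetas) < n_part:
            if t == 0:
                theta = rng.uniform(-10, 10)          # sample from the prior
            else:
                # resample from the previous population and perturb
                theta = rng.choice(thetas, p=weights) + rng.uniform(-sigma, sigma)
                if not -10.0 <= theta <= 10.0:        # zero prior density
                    continue
            if abs(rng.normal(theta, 1.0) - x_obs) > eps:
                continue                              # simulation too far away
            if t == 0:
                w = 1.0
            else:
                # denominator: probability of this particle given the
                # previous population and the perturbation kernel
                kern = (np.abs(thetas - theta) <= sigma) / (2.0 * sigma)
                w = prior_pdf / np.sum(weights * kern)
            new_thetas.append(theta)
            new_weights.append(w)
        thetas = np.array(new_thetas)
        weights = np.array(new_weights)
        weights /= weights.sum()
    return thetas, weights
```

Each population is accepted under a tighter tolerance than the last, so the particles drift from the prior towards the approximate posterior while the importance weights correct for the resample-and-perturb proposal.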
A.3 Model selection
In Bayesian inference, comparison of a discrete set of models can be performed using the marginal posterior. Consider the joint space defined by $(m, \theta_m)$, where $m$ is a model indicator and $\theta_m$ the associated parameter vector; Bayes' theorem can then be written
$$P(m|x) = \frac{P(x|m)P(m)}{P(x)},$$
where $P(x|m)$, the marginal likelihood, can be written
$$P(x|m) = \int f(x|\theta_m, m)\,\pi(\theta_m|m)\,\mathrm{d}\theta_m.$$
Therefore the posterior probability of a model is given by the normalized marginal likelihood, weighted by the model prior when that prior is informative rather than uniform. It has recently been noted that model selection using summary statistics can be problematic because the summary statistic must be sufficient for the joint space, $(m, \theta_m)$, rather than just $\theta_m$. This is not a concern here since in all our examples we use the full data set with no summary, or we define our posterior distributions through the summary statistics.
Model selection can be incorporated into the ABC framework by introducing the model indicator $m$ and proceeding with inference on the joint space. For example, the ABC rejection algorithm with model selection proceeds as follows:
MR1 Sample $m^*$ from $P(m)$.
MR2 Sample $\theta^*$ from $\pi(\theta_{m^*})$.
MR3 Simulate a dataset $x^*$ from $f(x|\theta^*, m^*)$.
MR4 If $d(x^*, x) \le \epsilon$, accept $(m^*, \theta^*)$; otherwise reject.
MR5 Return to MR1.
Once $N$ samples have been accepted, an approximation to the marginal posterior, $P(m|x)$, is given by the fraction of accepted particles belonging to each model,
$$\hat{P}(m|x) = \frac{1}{N}\sum_{i=1}^{N} \mathbb{1}(m^{(i)} = m).$$
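Steps MR1-MR5 can be sketched for two toy models (illustrative, not from the main text); here each model has a fixed mean and no free parameters, so MR2 is trivial, and the acceptance fractions approximate $P(m|x)$:

```python
import numpy as np

def abc_model_selection(x_obs, n_accept=1000, eps=0.5, seed=0):
    """ABC rejection with model selection (MR1-MR5) for two toy models:
    model 0 simulates x* ~ N(0, 1), model 1 simulates x* ~ N(2, 1), with a
    uniform model prior.  Both models have fixed parameters, so MR2
    (sampling theta* from the model's prior) is trivial here."""
    rng = np.random.default_rng(seed)
    means = (0.0, 2.0)
    counts = np.zeros(2)
    while counts.sum() < n_accept:
        m = int(rng.integers(2))                 # MR1: sample a model
        x_sim = rng.normal(means[m], 1.0)        # MR3: simulate a dataset
        if abs(x_sim - x_obs) <= eps:            # MR4: accept or reject
            counts[m] += 1
    return counts / counts.sum()                 # approximate P(m | x)
```

For an observation near 2 the second model is accepted far more often, and the acceptance fractions converge to the marginal model posterior.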
Model selection can also be incorporated into the ABC SMC algorithm. To obtain $N$ samples from the posterior, defined as $P_{\epsilon_T}(m, \theta_m|x)$, proceed as follows:
MS1 Initialize the first tolerance $\epsilon_1$ and the final tolerance $\epsilon_T$. Set the population indicator $t = 1$.
MS2.0 Set the particle indicator $i = 1$.
MS2.1 If $t = 1$, sample $(m^{**}, \theta^{**})$ from the prior $\pi(m, \theta)$. If $t > 1$, sample $m^{*}$ with probability $P_{t-1}(m)$ and perturb $m^{**} \sim KM_t(m \mid m^{*})$. Sample $\theta^{*}$ from the previous population of model $m^{**}$ with weights $w_{t-1}$. Perturb the particle, $\theta^{**} \sim K_t(\theta \mid \theta^{*})$, where $K_t$ is the perturbation kernel. If $\pi(m^{**}, \theta^{**}) = 0$, return to MS2.1. Simulate a candidate dataset $x^{*} \sim f(x \mid m^{**}, \theta^{**})$. If $d(x^{*}, x_0) \geq \epsilon_t$, return to MS2.1.
MS2.2 Set $m_t^{(i)} = m^{**}$ and $\theta_t^{(i)} = \theta^{**}$, and calculate the weight as $w_t^{(i)} = 1$ if $t = 1$, and $w_t^{(i)} = \pi(m_t^{(i)}, \theta_t^{(i)}) / (S_1 S_2)$ if $t > 1$, where $S_1 = \sum_m P_{t-1}(m)\, KM_t(m_t^{(i)} \mid m)$ and $S_2 = \sum_j w_{t-1}^{(j)} K_t(\theta_t^{(i)} \mid \theta_{t-1}^{(j)})$, the sum in $S_2$ running over the particles of the previous population carrying model $m_t^{(i)}$. If $i < N$, set $i = i + 1$ and go to MS2.1.
MS3 Normalize the weights. Obtain the marginal model probabilities, given by $P_t(m) = \sum_{i : m_t^{(i)} = m} w_t^{(i)}$. Determine $\epsilon_{t+1}$ such that $\epsilon_{t+1} < \epsilon_t$. If $\epsilon_{t+1} > \epsilon_T$, set $t = t + 1$ and go to MS2.0.
There are two obvious additions to the algorithm when compared to parameter inference: the model kernel, $KM_t$, perturbs the resampled models using a multinomial distribution, and the additional term $S_1$ in the weight denominator accounts for the probability of generating the current model from the previous population.
A.4 Prior distribution
The prior distribution encodes our knowledge of the system and should be set according to known biochemical properties. However, the kinetic parameters are often not well known and can be very difficult, or even impossible, to measure in vivo. In these cases we make the prior distribution non-informative by specifying a large range over possible, biophysically and biochemically plausible values. As more information becomes available, through experimental studies or otherwise, the prior can be updated to reflect our increased knowledge of the system. Interestingly, for some systems, our design method could help to constrain kinetic parameters where experimental data are unavailable.
A.5 The distance function and output tolerance
In system design we would rarely insist on achieving the true posterior distribution corresponding to $\epsilon = 0$, but would instead like to reach the objective within some tolerance. A theorem due to Wilkinson (2008) states that if we assume that the data can be written as
$x_0 = x_b + e,$
where $x_b$ is a draw from the model at the 'best' input and $e$ is an additive, independent error term, then the approximate posterior distribution, $P(\theta \mid d(x, x_0) \leq \epsilon)$, can be interpreted as the 'true' posterior under that error model. While the independence assumption does not always hold, this theorem provides some insight into the relationship between the final tolerance $\epsilon_T$ and the tolerance on our specified behavior. For example, when using uniform kernels, as in this study, if our desired output behavior is a constant of 0.5 and we finish the inference at $\epsilon_T$, our final trajectories will be distributed uniformly on $(0.5 - \epsilon_T, 0.5 + \epsilon_T)$, giving a tolerance of $\pm \epsilon_T$ on the output behavior. This can be taken into account when specifying the desired output objectives. To achieve other error distributions, such as Gaussian errors, we can always explicitly specify the error model in the design objectives.
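This relationship can be checked numerically: with a uniform acceptance kernel, accepted outputs are distributed uniformly within the final tolerance of the target. The target of 0.5 is from the text; the tolerance value 0.05 and the vague output distribution below are illustrative assumptions.

```python
import random

# Numerical check: with a uniform acceptance kernel, accepted model outputs
# are distributed uniformly within +/- eps_T of the target behavior.
random.seed(4)
target, eps_T = 0.5, 0.05      # eps_T is an illustrative final tolerance
accepted = []
while len(accepted) < 1000:
    x = random.uniform(0, 1)           # model output under a vague prior
    if abs(x - target) <= eps_T:       # uniform ABC acceptance kernel
        accepted.append(x)
lo, hi = min(accepted), max(accepted)  # should span (target - eps_T, target + eps_T)
mean_out = sum(accepted) / len(accepted)
```

The accepted outputs fill the interval $(0.45, 0.55)$ roughly uniformly, i.e. a $\pm \epsilon_T$ tolerance on the behavior.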
A.6 Deterministic models
Inference for deterministic models, such as systems of ordinary differential equations, can be problematic since there is a one-to-one relationship between the parameter vector $\theta$ and the simulated data set $x$. Therefore, in the absence of observational error, the posterior distribution resembles a delta function centered on the parameter vector whose output is 'closest' to $x_0$. An additional problem for ABC methods is that the minimum attainable distance, $\min_\theta d(x(\theta), x_0)$, is generally greater than zero. However, in practice, observational data have associated experimental errors, and when these are included explicitly in the model the problem is resolved. In the case of system design, we omit the explicit error model for clarity, but note that it could be included under assumptions on the form of the error distribution.
Appendix B Biochemical adaptation
We used the same models as those used in the original study: enzymatic reaction networks assuming Michaelis-Menten kinetics. Below we give the full models including cooperativity; the more specific case of no cooperativity is obtained when the exponents (Hill coefficients) are set to one. The concentrations of the active and inactive forms of each species are tracked separately. The background activating and deactivating enzymes are assumed to have a constant concentration of 0.5. The models were simulated over the same time range as in the original study.
The two-component distance metric was defined on the adaptation efficiency $E$ and the sensitivity $S$, given by
$E = \frac{|(O_2 - O_1)/O_1|}{|(I_2 - I_1)/I_1|}, \qquad S = \frac{|(O_{peak} - O_1)/O_1|}{|(I_2 - I_1)/I_1|},$
where $I_1, I_2$ are the input values (here fixed at 0.5 and 0.6 respectively), $O_1, O_2$ are the output steady-state levels before and after the input change, and $O_{peak}$ is the maximal transient output level. The final population was defined through tolerances on $E$ and $S$.
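A sketch of how the adaptation efficiency and sensitivity could be computed from a single simulated output trajectory. The relative-change formulas, the function name and the symbol names ($I_1$, $I_2$, $O_1$, $O_2$ and the transient peak) are reconstructions for this sketch, so treat it as illustrative.

```python
def adaptation_scores(output, I1=0.5, I2=0.6):
    """Compute adaptation efficiency E and sensitivity S from an output
    trajectory recorded after the input steps from I1 to I2. `output` holds
    O1 (pre-step steady state), the transient, and O2 (post-step steady
    state); the peak is the maximal transient excursion from O1."""
    O1, O2 = output[0], output[-1]
    O_peak = max(output, key=lambda o: abs(o - O1))
    rel_input = abs((I2 - I1) / I1)
    E = abs((O2 - O1) / O1) / rel_input      # small E = precise adaptation
    S = abs((O_peak - O1) / O1) / rel_input  # large S = strong transient response
    return E, S

# Near-perfect adaptation: output rises from 1.0 to a peak of 1.5, then
# returns close to its pre-stimulus level.
E, S = adaptation_scores([1.0, 1.3, 1.5, 1.2, 1.02])
```

With the input step from 0.5 to 0.6 (a 20% relative change), this trajectory gives $E = 0.02/0.2 = 0.1$ and $S = 0.5/0.2 = 2.5$.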
The priors on the Michaelis-Menten rates were chosen to correspond to the parameter ranges used in the original study.
Appendix C Robust oscillator design
We used the same models as those used in the original study, simulated over the same time range. Again, the concentrations of the active and inactive forms of each species are tracked separately. The feedback is modeled using Michaelis-Menten kinetics, but the conversion of the inactive form into the active form is assumed to proceed at a constant rate.