Ancestral Inference from Functional Data: Statistical Methods and Numerical Examples


Many biological characteristics of evolutionary interest are not scalar variables but continuous functions. Here we use phylogenetic Gaussian process regression to model the evolution of simulated function-valued traits. Given function-valued data only from the tips of an evolutionary tree and utilising independent principal component analysis (IPCA) as a method for dimension reduction, we construct distributional estimates of ancestral function-valued traits, and estimate parameters describing their evolutionary dynamics.

Keywords: comparative analysis; Ornstein-Uhlenbeck process; non-parametric Bayesian inference; functional phylogenetics; ancestral reconstruction

1 Introduction

The number, reliability and coverage of evolutionary trees is growing rapidly [1]. However, knowing organisms' evolutionary relationships through phylogenetics is only one step in understanding the evolution of their characteristics [2]. Two issues are particularly challenging: first, information is typically only available for extant organisms, represented by the tips of a phylogenetic tree, whereas understanding their evolution requires inference about ancestors deeper in the tree. Second, the available information for different organisms in a phylogeny is not independent: a phylogeny describes a complex pattern of non-independence. A variety of statistical models have been developed to address these issues (e.g. see [3]). However, most deal with only one characteristic of an organism at a time, encapsulated in a single value. This simplicity contrasts with the complexity of any organism, which has not only many individual characteristics but also characteristics that are impossible to represent effectively as single numbers. Some such characteristics, for example growth curves, may be modelled as function-valued traits [4], i.e. as points in an infinite-dimensional space. A novel approach to analysing the evolution of function-valued traits has recently been proposed: phylogenetic Gaussian process regression [5, 6]. Here we develop a practical implementation of this approach, which first linearly decomposes function-valued traits for a set of taxa into statistically independent components. This phylogenetically agnostic method of dimension reduction is robust to mixed inherited and taxon-specific variation in the data (e.g. see [7]). Our method then implements the Bayesian regression analysis described in [5], returning posterior distributions for ancestral function-valued traits. We show that this analysis produces reliable distributional estimates for our simulated data, which may be further separated into inherited and specific components of variation.
We also analyse the statistical performance of the method for making point estimates of the (hyper-)parameters [8] describing the evolutionary dynamics of the components. Overall, our method appropriately combines developments in functional data analysis with the evolutionary dynamics of quantitative phenotypic traits, allowing nonparametric Bayesian inference from phylogenetically non-independent function-valued traits. Details of the simulation and inference methods are given in section 2, and section 3 gives results and discussion both for reconstructing ancestral function-valued traits and for estimating their evolutionary parameters.

2 Methods

2.1 Simulating function-valued traits

We simulate datasets using the Ornstein-Uhlenbeck (OU) Gaussian process as a model of evolutionary change. The technical justification for the broad applicability of this model is presented in 2.2 and 2.3. However, OU processes are already well documented in the evolutionary biology literature [9, 10], having the advantage over simpler Brownian motion models [11] of modelling both selection and genetic drift. The OU model also exhibits a stationary distribution with covariances between character values decreasing exponentially with phylogenetic distance [12].

We first generated a random, non-ultrametric, 128-tip phylogenetic tree T, with branch lengths drawn from an inverse Gaussian (IG) distribution, IG(0.5, 0.5) (figure 1A). Function-valued traits were simulated at each tip and internal node by randomly mixing a common set of basis functions: for each i = 1, 2, 3 a smooth function φ_i (figure 1B, upper-left) was discretised, producing a basis vector of length 1024. An OU Gaussian process (independent for each i) was used to generate weights w_i for the ith component at each of the 255 nodes of T (128 tips and 127 internal nodes), with mean zero and covariance function

K(x, x′) = σ_f² exp(−d_T(x, x′)/ℓ) + σ_n² δ_{x,x′}        (1)

for nodes x and x′ in T, where d_T(x, x′) denotes the patristic distance (that is, the distance in T) between x and x′, and δ_{x,x′} is the Kronecker delta. This covariance function is developed in section IIC2 of [5]: in the present context, σ_f quantifies the intensity of inherited variation, ℓ is the characteristic length-scale of the evolutionary dynamics [8] (equivalent to the inverse of the strength of selection), and σ_n quantifies the intensity of specific variation (i.e. variation unattributable to the phylogeny). We selected the hyperparameters in table 1 to give different qualities to each of the three components of variation. In particular, component 2 (figure 1B, upper-left, dashed line) has no inherited variation, and it follows that the characteristic length-scale/strength-of-selection parameter has no meaning for this component.

Component    σ_f     ℓ       σ_n
1            4.5     17.9    0.45
2            0       NA      1
3            3.0     8.95    0.45

Table 1: Hyperparameter values used to generate the function-valued data shown in figure 1A. The three hyperparameters (inherited variation σ_f, length-scale ℓ and specific variation σ_n, see text) define the covariance of the basis function weights (figure 1B, upper-left) at different points in the tree according to equation (1).
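Equation (1) is simple to evaluate once the patristic distances between nodes are known. The following Python sketch (the paper's accompanying code is in R; the tiny distance matrix and all names here are purely illustrative) builds the covariance matrix for component 1 of table 1 and draws one set of weights from it:

```python
import numpy as np

def ou_covariance(D, sigma_f, ell, sigma_n):
    """Phylogenetic OU covariance of equation (1): inherited variation
    decays exponentially with patristic distance, and specific variation
    sits on the diagonal (the Kronecker delta term)."""
    return sigma_f**2 * np.exp(-D / ell) + sigma_n**2 * np.eye(D.shape[0])

# Hypothetical patristic distance matrix for a 3-node toy tree.
D = np.array([[0.0, 2.0, 3.0],
              [2.0, 0.0, 3.0],
              [3.0, 3.0, 0.0]])

# Hyperparameters for component 1 of table 1.
K = ou_covariance(D, sigma_f=4.5, ell=17.9, sigma_n=0.45)

# One set of mean-zero weights for this component across the three nodes.
rng = np.random.default_rng(0)
w = rng.multivariate_normal(np.zeros(3), K)
```

For component 2 of table 1, σ_f = 0 and the covariance reduces to σ_n² times the identity, which is why the length-scale is meaningless for that component.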

Each node in the tree thus had an associated weight vector w giving the weights for each of the three basis functions. These weighted basis functions were used to produce a single function-valued trait f at each node (of which four are shown in figure 1B, lower-left panel):

f = wΦ        (2)

where Φ is the matrix having each φ_i as its rows. The resulting set of 255 curves was divided into two parts: the 128 curves at the tips of T, to be used as training data (that is, inputs to our regression analysis), and the 127 curves at the internal nodes of T, to be used for validation of the method.
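The mixing step of equation (2) can be sketched as follows; the basis curves and the node weights below are hypothetical stand-ins (random weights rather than phylogenetic OU samples), chosen only to show the shapes involved:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three hypothetical smooth basis functions, discretised on 1024 points,
# standing in for the curves of figure 1B (upper-left).
t = np.linspace(0.0, 1.0, 1024)
Phi = np.vstack([np.sin(2 * np.pi * t),
                 np.cos(2 * np.pi * t),
                 t * (1.0 - t)])          # shape (3, 1024): one basis curve per row

# Hypothetical weight vectors at the 255 nodes; in the paper these come
# from the three independent phylogenetic OU processes of equation (1).
W = rng.standard_normal((255, 3))

# Equation (2): each node's function-valued trait is its weight vector
# multiplied by the basis matrix.
F = W @ Phi                               # shape (255, 1024)

# Split into training data (tips) and validation data (internal nodes).
F_tips, F_internal = F[:128], F[128:]
```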

Figure 1: (A) The 128-tip random tree T used, with function-valued traits shown at four tips (right) and at the root and one internal node (left). The maximal distance between any two tips is 17.9, the mean distance 7.43. (B) Panels, clockwise from top-left: the common basis functions φ_i used to generate the data; results of PCA on the training data; results of IPCA on the training data; and the simulated function-valued traits at the four tips indicated in (A) (black lines), with the sample mean curve (red).

2.2 Dimension reduction and source separation for functional data

Since the space of all (univariate) continuous functions is infinite-dimensional, if the sample curves lie "close" to a finite-dimensional linear subspace, an acceptable approximation can be obtained by using weighted sums of basis curves that span that subspace. Effectively, this involves reversing equation (2): given the curves f, the task is to estimate the common basis Φ and the weights w. This is the challenge of 'source separation' [13].

A more rigorous justification for this heuristic is provided by [5]. It is shown there that a wide class of Gaussian processes of function-valued traits on a phylogeny have exactly the linear decomposition in equation (2), where for each of the basis functions φ_i the weights w_i themselves form a (univariate) phylogenetic Gaussian process on the phylogeny T. The task, then, is to estimate the basis and the weights at the tips of T accurately. The standard approach to choosing basis curves is PCA [14], which does so by explaining the greatest possible variation in successive orthogonal components, an approach which has been extended to take account of phylogenetic relationships [15]. However, if a sample of functions is generated by mixing non-orthogonal basis functions, the principal components of the sample (whether or not they account for phylogeny) will not equal the basis curves, because of the assumption of orthogonality: see figure 1B. If we remove the assumption of orthogonality, however, we must replace it with an alternative assumption in order to have the right number of mathematical constraints to perform source separation. In independent component analysis (ICA), the alternative assumption made is that the weights w_i are statistically independent for different values of i. This assumption fits naturally into our model.

PCA is an appropriate tool for estimating the true dimensionality n. Therefore, to achieve both dimension reduction and source separation, we first applied PCA to the training data (the 128 curves at the tips of T) to determine the appropriate value for n [14], which was correctly returned as n = 3. The n principal components were then passed to the 'CubICA' implementation of ICA [16]. ICA has proved fruitful in other biological applications [17], as has passing the results of PCA to ICA, which has been termed IPCA [18]. CubICA returned a new set of components (figure 1B, lower-right panel) and, for each i, a corresponding weight at each tip of T. An independent, univariate phylogenetic Gaussian process regression was then performed for each value of i, as described in 2.3, to obtain posterior distributions for the weights throughout the tree. The posterior distributions at every node in the tree (one posterior for each i) were then mapped to a single functional posterior distribution.

Using IPCA we can therefore approximately reconstruct both the basis and the mixing vectors from the tip data, in such a way as to be fully compatible with the phylogenetic Gaussian process framework of [5].
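The first, PCA, stage of IPCA can be sketched as follows. This illustration mixes synthetic curves from three deliberately non-orthogonal sources, performs PCA via an SVD of the centred data, and estimates the dimensionality from the explained variance; the retained components would then be rotated by an ICA routine (CubICA in the paper), a step omitted here. All names and thresholds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical tip data: 128 curves mixed from 3 non-orthogonal sources,
# plus a little observation noise.
t = np.linspace(0.0, 1.0, 256)
sources = np.vstack([np.sin(2 * np.pi * t),
                     np.sin(2 * np.pi * t + 1.0),   # correlated with the first
                     t])
X = (rng.standard_normal((128, 3)) @ sources
     + 0.01 * rng.standard_normal((128, 256)))

# PCA via SVD of the centred data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = s**2 / np.sum(s**2)

# Crude dimensionality estimate; [14] gives a principled automatic choice.
n = int(np.sum(var_explained > 0.01))

# The n retained components Vt[:n] would now be passed to CubICA to
# recover statistically independent (rather than orthogonal) components.
components = Vt[:n]
```

Note that plain PCA here returns orthogonal directions, not the original sources; the ICA rotation is what removes the orthogonality constraint, as discussed above.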

2.3 Phylogenetic Gaussian process regression

The mechanics of phylogenetic Gaussian process regression using a basis are discussed in detail in [5], section IIC2. As discussed there, the phylogenetic OU process is the only stationary and Markovian phylogenetic Gaussian process. For each i, under these assumptions we would therefore choose the covariance function given in equation (1), and only the hyperparameters (σ_f, ℓ, σ_n) would remain to be specified (for a fixed phylogeny T).

In the ancestral inference described in 2.2, we assume the parameter values in table 1 to be known. For the task of parameter estimation we performed the following procedure: for each of the three hyperparameters in turn, given the values of the other two, we maximised the likelihood expression (2) in [5] with respect to that hyperparameter to obtain its maximum likelihood estimate, and compared this with the true value.

3 Results and Discussion

Figure 2: Posterior distributions at three points in the phylogeny. The prediction made by the regression analysis is shown via the posterior mean (red line), with the components of posterior standard deviation due to inherited variation (dark grey band) and specific variation (light grey band). The black line shows the simulated data. In the left and centre panels this black line enables validation of the ancestral predictions. In the right panel, the black line is the training data at that tip, and the posterior distribution should be understood as a prediction for an independent organism at the same point in the phylogeny. The root and internal node here are the same as those indicated in figure 1A, and the tip is the second from the bottom on the right of figure 1A.

Given a tree T and functional data associated with each of its tips, we wish to make inferences about ancestral traits and evolutionary parameters. We simulate data on the tree in figure 1A as described in 2.1, and we estimate the independently varying basis functions using IPCA (figure 1B) on the tip data alone. Using this basis and the process hyperparameters, we derive posterior distributions over functional traits throughout the tree. Examples of the posterior distributions obtained are shown in figure 2. Since specific variation (represented by the light grey band) is modelled as statistically independent at each point in T, the specific variance may simply be subtracted from the total posterior variance to obtain the posterior variance due to inherited variation (whose standard deviation is given by the width of the dark grey band). The simulated validation data are also shown in black, and can be seen typically to lie within the posterior standard deviation (given by the width of the dark grey plus light grey band). Note that the contribution of specific variation to the posterior variance is constant across the tree, since tip data are silent concerning the specific component of the function-valued trait at internal nodes. On the other hand, the darker grey band decreases in width going away from the root, reflecting the decreasing (although not entirely vanishing) uncertainty concerning the inherited component of the function-valued trait as we approach the measured traits at the tips. We note that the posterior distribution, even at the root, puts a clear constraint on the possible ancestral functional data: in this (admittedly simulated and highly controlled) setting we can reason effectively about ancestral function-valued traits.
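The posterior computation and the variance decomposition described above can be sketched for a single unobserved "ancestral" node. Again a 1-D stand-in for the phylogeny is used, with the standard Gaussian process conditioning formulae and equation (1); all positions and hyperparameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

sigma_f, ell, sigma_n = 3.0, 2.0, 0.45

# Observed "tips" and one unobserved "ancestral" location.
tips = np.array([0.0, 1.0, 3.0, 4.0])
anc = 2.0

# Covariance among tips (specific variation on the diagonal) and
# cross-covariance with the ancestral node (no sigma_n term: specific
# variation at distinct nodes is independent).
D_tt = np.abs(tips[:, None] - tips[None, :])
K = sigma_f**2 * np.exp(-D_tt / ell) + sigma_n**2 * np.eye(len(tips))
k_star = sigma_f**2 * np.exp(-np.abs(tips - anc) / ell)

y = rng.multivariate_normal(np.zeros(len(tips)), K)   # simulated tip weights

# Standard GP posterior for the ancestral weight.
post_mean = k_star @ np.linalg.solve(K, y)
post_var = sigma_f**2 + sigma_n**2 - k_star @ np.linalg.solve(K, k_star)

# The specific variance is independent at every node, so subtracting it
# isolates the inherited part of the posterior variance (the dark grey
# band of figure 2).
inherited_var = post_var - sigma_n**2
```

The same conditioning is applied independently for each component i, and the per-component posteriors are then mapped through equation (2) to a functional posterior.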

We next considered whether the parameters describing evolutionary dynamics could be estimated, given only functional data from the tips of the tree. Specifically, we fixed two of the three hyperparameters (σ_f, ℓ, σ_n) to random values and estimated the third: results are shown in figures S1-2. The accuracy of the predictions is very encouraging: essentially they are problematic only on relatively rare occasions when an over-simplified model is fitted, excluding either inherited or specific variation entirely. Similarly promising statistical performance was obtained when ℓ was known and knowledge of either σ_f or σ_n was replaced by knowledge of the ratio between them.

When there is greater uncertainty over the hyperparameters, we anticipate that the use of hyperpriors will enable statistical performance to be maintained for inference about ancestral function-valued traits, but we reserve this for future work.

R Code for the IPCA, ancestral reconstruction and hyperparameter estimation is available from


  1. Centre for Complexity Science, University of Warwick, Coventry CV4 7AL, UK
  2. Department of Mathematics, Imperial College London, SW7 2AZ
  3. School of Mathematics, University of Manchester, Oxford Road, Manchester M13 9PL, UK
  4. Faculty of Life Sciences, University of Manchester, Oxford Road, Manchester M13 9PT, UK


  1. Maddison DR, Schulz KS. The Tree of Life Web Project; 2007.
  2. Yang Z, Rannala B. Molecular phylogenetics: principles and practice. Nature Reviews Genetics. 2012;13(5):303–14.
  3. Salamin N, Wuest RO, Lavergne S, Thuiller W, Pearman PB. Assessing rapid evolution in a changing environment. Trends in Ecology & Evolution. 2010;25(12):692–8.
  4. Kirkpatrick M, Heckman N. A quantitative genetic model for growth, shape, reaction norms, and other infinite-dimensional characters. Journal of Mathematical Biology. 1989;27:429–450.
  5. Jones NS, Moriarty J. Evolutionary Inference for Function-valued Traits: Gaussian Process Regression on Phylogenies. arXiv:1004.4668. 2012.
  6. The Functional Phylogenies Group. Phylogenetic inference for function-valued traits: speech sound evolution. Trends in Ecology & Evolution. 2012;27(3):160–6.
  7. Cheverud JM, Dow MM, Leutenegger W. The Quantitative Assessment of Phylogenetic Constraints in Comparative Analyses: Sexual Dimorphism in Body Weight Among Primates. Evolution. 1985;39(6):1335–1351.
  8. Rasmussen CE, Williams CKI. Gaussian Processes for Machine Learning. MIT Press; 2006.
  9. Butler MA, King AA. Phylogenetic Comparative Analysis: A Modelling Approach for Adaptive Evolution. American Naturalist. 2004;164(6):683–695.
  10. Hansen T, Martins E. Translating between microevolutionary process and macroevolutionary patterns: the correlation structure of interspecific data. Evolution. 1996;50(4):1404–1417.
  11. Felsenstein J. Phylogenies and the Comparative Method. American Naturalist. 1985;125(1):1–15.
  12. Hansen T. Stabilizing Selection and the Comparative Analysis of Adaptation. Evolution. 1997;51(5):1341–1351.
  13. Hyvärinen A, Oja E. Independent component analysis: algorithms and applications. Neural Networks. 2000;13(4-5):411–430.
  14. Minka TP. Automatic choice of dimensionality for PCA. NIPS. 2000;13:514.
  15. Revell LJ. Size-correction and principal components for interspecific comparative studies. Evolution. 2009;63(12):3258–3268.
  16. Blaschke T, Wiskott L. CuBICA: Independent Component Analysis by Simultaneous Third- and Fourth-Order Cumulant Diagonalization. IEEE Transactions on Signal Processing. 2004;52(5):1250–1256.
  17. Scholz M, Gatzek S, Sterling A, Fiehn O, Selbig J. Metabolite fingerprinting: detecting biological features by independent component analysis. Bioinformatics. 2004;20(15):2447–2454.
  18. Yao F, Coquery J, Le Cao KA. Independent Principal Component Analysis for biologically meaningful dimension reduction of large biological data sets. BMC Bioinformatics. 2012;13(1):13–24.