Global-scale phylogenetic linguistic inference from lexical resources


Gerhard Jäger1
Abstract

Automatic phylogenetic inference plays an increasingly important role in computational historical linguistics. Most pertinent work is currently based on expert cognate judgments. This limits the scope of this approach to a small number of well-studied language families.

We used machine learning techniques to compile data suitable for phylogenetic inference from the ASJP database, a collection of almost 7,000 phonetically transcribed word lists over 40 concepts, covering two thirds of the extant world-wide linguistic diversity.

First, we estimated Pointwise Mutual Information scores between sound classes using weighted sequence alignment and general-purpose optimization. From this we computed a dissimilarity matrix over all ASJP word lists. This matrix is suitable for distance-based phylogenetic inference.

Second, we applied cognate clustering to the ASJP data, using supervised training of an SVM classifier on expert cognacy judgments.

Third, we defined two types of binary characters, based on automatically inferred cognate classes and on sound-class occurrences.

Several tests are reported demonstrating the suitability of these characters for character-based phylogenetic inference.

1. Tübingen University, Institute of Linguistics, Wilhelmstr. 19, 72074 Tübingen, Germany (gerhard.jaeger@uni-tuebingen.de)

Background & Summary

The cultural transmission of natural languages, with its patterns of near-faithful replication from generation to generation and the diversification resulting from population splits, is known to display striking similarities to biological evolution [1, 2]. The mathematical tools for recovering evolutionary history developed in computational biology — phylogenetic inference — play an increasingly important role in the study of the diversity and history of human languages [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14].

The main bottleneck for this research program is the so far still limited availability of suitable data. Most extant studies rely on manually curated collections of expert judgments pertaining to the cognacy of core vocabulary items or the grammatical classification of languages. Collecting such data is highly labor-intensive. Therefore, sizeable collections currently exist only for a relatively small number of well-studied language families [8, 11, 15, 16, 17, 18].

Basing phylogenetic inference on expert judgments, especially judgments regarding the cognacy between words, also raises methodological concerns. The experts making those judgments are necessarily historical linguists with some prior information about the genetic relationships between the languages involved. In fact, it is virtually impossible to pass a judgment about cognacy without forming a hypothesis about such relations. In this way, data are enriched with prior assumptions of human experts in a way that is hard to control or to precisely replicate.

Modern machine learning techniques provide a way to greatly expand the empirical base of phylogenetic linguistics while avoiding the above-mentioned methodological problem.

The Automated Similarity Judgment Program (ASJP) [19] database contains 40-item core vocabulary lists from more than 7,000 languages and dialects across the globe, covering about 75% of the extant linguistic diversity. All data are in phonetic transcription with little additional annotation. (The only expert judgments contained in the ASJP data are rather unsystematic manual identifications of loan words; this information is ignored in the present study.) It is, at the current time, the most comprehensive collection of word lists available.

Phylogenetic inference techniques come in two flavors: distance-based and character-based methods. Distance-based methods require as input a matrix of pairwise distances between taxa. Character-based methods operate on a character matrix, i.e., a classification of the taxa under consideration according to a list of discrete, finite-valued characters. While some distance-based methods are computationally highly efficient, character-based methods usually provide more precise results and afford more fine-grained analyses.

The literature contains proposals to extract both pairwise distance matrices and character data from phonetically transcribed word lists [20, 21, 22]. In this paper we apply those methods to the ASJP data and make both a distance matrix and a character matrix for 6,892 languages and dialects (all languages in ASJP v. 17 except reconstructed, artificial, pidgin, and creole languages) derived this way available to the community. We also demonstrate the suitability of the results for phylogenetic inference.

While both the raw data and the algorithmic methods used in this study are freely and publicly available, the computational effort required was considerable (about ten days of computing time on a 160-core parallel server). Therefore the resulting resource is worth publishing in its own right.

Methods

Creating a distance matrix from word lists

In [20], a method is developed to estimate the dissimilarity between two ASJP word lists. The main steps are briefly recapitulated here.

Pointwise Mutual Information

ASJP entries are transcribed in a simple phonetic alphabet consisting of 41 sound classes plus diacritics. In all steps described in this paper, diacritics are removed. (For instance, a sequence th~, indicating an aspirated “t”, is replaced by a simple t.) This way, each word is represented as a sequence over the 41 ASJP sound classes.

The pointwise mutual information (PMI) between two sound classes a and b is central to most methods used in this paper. It is defined as

$$\mathrm{PMI}(a,b) = \log\frac{p(a,b)}{q(a)\,q(b)} \qquad (1)$$

where p(a, b) is the probability that an occurrence of a participates in a regular sound correspondence with b in a pair of cognate words, and q(a), q(b) are the probabilities of occurrence of a and b in an arbitrarily chosen word.
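For illustration, with made-up numbers: if two sound classes a and b each occur with probability q(a) = q(b) = 0.05, and the probability of their corresponding in a cognate word pair is p(a, b) = 0.01, then PMI(a, b) = log(0.01 / (0.05 · 0.05)) = log 4 ≈ 1.39 > 0, indicating that a and b are aligned in cognates more often than chance co-occurrence would predict; conversely, p(a, b) < q(a) q(b) yields a negative score.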

Let “-” be the gap symbol. A pairwise alignment between two strings x and y is a pair of equal-length strings x', y' over sound-class symbols and gaps such that x is the result of removing all gap occurrences from x', and likewise for y and y'. A licit alignment is one in which a gap in one string is never followed by a gap in the other string. There are two parameters, g_o and g_e, the gap penalties for opening and for extending a gap. The aggregate PMI of an alignment is

$$S(x', y') = \sum_{i} \sigma(x'_i, y'_i) \qquad (2)$$

where \sigma(x'_i, y'_i) = \mathrm{PMI}(x'_i, y'_i) if both symbols are sound classes, and \sigma(x'_i, y'_i) is the corresponding gap penalty if x'_i or y'_i is a gap.

For a given pair of ungapped strings x, y, the similarity s(x, y) is the maximal aggregate PMI over all possible licit alignments between x and y. It can be computed efficiently with a version of the Needleman-Wunsch algorithm [23]. In this study, we used the function pairwise2.align.globalds of the Biopython library [24] to perform alignments and compute PMI scores between strings.
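A minimal sketch of this step, assuming a precomputed dictionary pmi of PMI scores between sound-class pairs; the gap penalties shown are placeholders, not the fitted values:

```python
from Bio import pairwise2

GAP_OPEN, GAP_EXTEND = -2.5, -1.75  # illustrative values only

def pmi_similarity(x, y, pmi):
    """s(x, y): maximal aggregate PMI over all licit alignments of x and y."""
    # Needleman-Wunsch with a score dictionary and affine gap penalties;
    # score_only=True returns just the optimal aggregate score.
    return pairwise2.align.globalds(x, y, pmi, GAP_OPEN, GAP_EXTEND,
                                    score_only=True)

# toy symmetric score dictionary over a two-symbol alphabet
toy_pmi = {('t', 't'): 2.0, ('t', 'd'): 0.5,
           ('d', 't'): 0.5, ('d', 'd'): 2.0}
print(pmi_similarity('td', 'tt', toy_pmi))  # t~t plus d~t: 2.5
```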

Parameter estimation

The probabilities of occurrence q(a) for the sound classes are estimated as their relative frequencies of occurrence within the ASJP entries. The PMI scores for pairs of sound classes and the gap penalties are estimated via an iterative procedure.

In a first step, pairwise distances between languages are computed via the method described in the next subsection, using 1 − LDN(x, y) instead of s(x, y) as the measure of string similarity, where LDN(x, y) is the normalized Levenshtein distance [25] between x and y, i.e., the edit distance between x and y divided by the length of the longer string. All pairs of languages (for the sake of readability, I will use the term “language” to refer to languages proper and to dialects alike; “doculect” would be a more correct if cumbersome term) with a distance below a conservatively chosen threshold are considered as probably related. This is a highly conservative estimate; almost all pairs of probably related languages belong to the same language family, and a large share even to the same sub-family.

Next, for each pair of probably related languages and each concept c, each word for c from the first language is aligned to each word for c from the second. The pair of words with the lowest distance is considered as potentially cognate.

All pairs of potentially cognate words are aligned using the Levenshtein algorithm, and for each pair of sound classes (a, b), p(a, b) is estimated as the relative frequency of a being aligned to b across all such alignments. Alignments of symbols to gaps are excluded from this computation. PMI(a, b) is then calculated according to (1).

Suppose gap penalties g_o, g_e and a threshold parameter t are given. The final PMI scores are estimated using an iterative procedure inspired by the Expectation Maximization algorithm [26]:

  • Repeat for a fixed number of iterations:

    1. All potential cognate pairs are aligned using the current PMI scores and gap penalties.

    2. p(a, b) is estimated as the relative frequency of a being aligned with b among all alignments between potential cognates whose PMI similarity is ≥ t.

    3. PMI(a, b) is recalculated using formula (1).

The target function f(g_o, g_e, t) is the average distance between all probably related languages under the resulting PMI scores. The values for g_o, g_e, and t are determined as those minimizing f, using Nelder-Mead optimization [27]. The optimal values found this way were used in all subsequent computations.
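A schematic sketch of this re-estimation loop; all names are illustrative, align is assumed to return the optimal score and the aligned symbol pairs under the current scores, and q holds the relative sound-class frequencies:

```python
from collections import Counter
from math import log

def estimate_pmi(aligned_pairs, q):
    """Equation (1): PMI(a, b) = log(p(a, b) / (q(a) * q(b)))."""
    counts = Counter(aligned_pairs)
    total = sum(counts.values())
    return {(a, b): log((n / total) / (q[a] * q[b]))
            for (a, b), n in counts.items()}

def reestimate(potential_cognates, pmi, q, t, n_iter, align):
    """EM-like loop: align, filter by threshold t, re-estimate the scores."""
    for _ in range(n_iter):
        aligned = []
        for x, y in potential_cognates:
            score, pairs = align(x, y, pmi)            # step 1
            if score >= t:                             # only similar pairs
                aligned.extend(p for p in pairs if '-' not in p)  # no gaps
        pmi = estimate_pmi(aligned, q)                 # steps 2 and 3
    return pmi
```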

The threshold t ensures that only highly similar word pairs are used for estimating PMI scores. For instance, between French and Italian only five word pairs have a PMI similarity above the threshold according to the final scores: soleil [sole] ~ sole [sole] (‘sun’; PMI = 11.6), corne [korn] ~ corno [korno] (‘horn’; PMI = 7.7), arbre [arbr3] ~ albero [albero] (‘tree’; PMI = 7.1), nouveau [nuvo] ~ nuovo [nwovo] (‘new’; PMI = 7.0), and montagne [motaj] ~ montagna [monta5a] (‘mountain’; PMI = 4.9).

The final PMI scores between sound classes are visualized in Figure 1.

Figure 1: PMI scores. Heatmap and hierarchical clustering dendrogram

It is easy to discern that PMI(a, a) is positive for all sound classes a, and that PMI(a, b) for a ≠ b is negative in most cases. There are a few pairs of distinct sound classes with a positive score, such as b/f. Generally, sound-class pairs with a similar place of articulation tend to have relatively high scores. This pattern is also visible in the cluster dendrogram. We observe a primary split between vowels and consonants. The consonants are further divided into labials, dentals, and velar/uvular sounds.

Pairwise distances between languages

When aggregating PMI similarities between individual words into a distance measure between word lists, various complicating factors have to be taken into consideration:

  • Entries for a certain language and a certain concept often contain several synonyms. This is a potential source of bias when averaging PMI similarities of individual word pairs.

  • Cognate words tend to be more similar than non-cognate ones. However, the average similarity level between non-cognate words depends on the overall similarity between the sound inventories and phonotactic structure of the languages compared. To assess the informativeness of a certain PMI similarity score, it has to be calibrated against the overall distribution of PMI similarities between non-cognate words from the languages in question.

  • Many ASJP word lists are incomplete, so the word lists are of unequal length.

To address the first problem, [20] defines the similarity score between languages L1 and L2 for concept i as the maximal PMI similarity between any pair of entries for i from L1 and L2.

The second problem is addressed by estimating, for each concept i for which both languages have an entry, the p-value for the null hypothesis that none of the words for i being compared are cognate. This is done in a parameter-free way. For each pair of concepts i ≠ j, the PMI similarities between the words for i from L1 and the words for j from L2 are computed. The maximum of these values is the similarity score for (i, j). Under the simplifying assumption that cognate words always share their meaning (this is evidently false when considering the entire lexicon; there is a plethora of examples, such as English deer vs. German Tier ‘animal’, which are cognate (cf. [28], p. 94) without being synonyms. However, within the 40-concept core vocabulary space covered by ASJP, such cross-concept cognate pairs are arguably very rare), the distribution of these similarity scores for i ≠ j constitutes a sample of the overall distribution of similarity scores between non-cognates.

Now consider the null hypothesis that the words for concept i are non-cognate. We assume a priori that cognate word pairs are more similar than non-cognate ones. The maximum likelihood estimate for the p-value of this null hypothesis is the relative frequency of non-synonymous pairs with a similarity score at least as high. If s_i is the similarity score for concept i and s_{j,k} the similarity score between concept j from L1 and concept k from L2, we have

$$p_i = \frac{|\{(j,k) : j \neq k,\; s_{j,k} \geq s_i\}|}{|\{(j,k) : j \neq k\}|} \qquad (3)$$

Analogously to Fisher’s method [29], the p-values for all concepts are combined according to the formula

$$T(L_1, L_2) = \sum_{i} -\log p_i \qquad (4)$$

If the null hypothesis is true for concept i, p_i is distributed approximately according to a continuous uniform distribution over the interval [0, 1]. Accordingly, −log p_i is distributed according to an exponential distribution with mean 1 and variance 1. (This follows from a standard argument about changes of variables in probability density functions: if P is uniformly distributed over [0, 1], then Pr(−log P ≥ y) = Pr(P ≤ e^{−y}) = e^{−y}, which is the survival function of the exponential distribution with rate 1.)

Suppose there are n concepts for which both L1 and L2 have an entry. The sum of n independently distributed random variables, each with mean 1 and variance 1, approximately follows a normal distribution with mean n and variance n. It can be transformed into a z-statistic by normalizing according to the formula

$$z(L_1, L_2) = \frac{\sum_{i=1}^{n} -\log p_i \; - \; n}{\sqrt{n}} \qquad (5)$$

This normalization step addresses the third issue mentioned above, i.e., the varying length of word lists.

z(L1, L2) increases with the degree of similarity between L1 and L2. It is transformed into a dissimilarity measure (we will speak of a distance measure in the sequel for simplicity, even though it is not a metric distance) by rescaling with the extreme values of z:

$$d(L_1, L_2) = \frac{z_{\max} - z(L_1, L_2)}{z_{\max} - z_{\min}} \qquad (6)$$

The maximal possible value z_max would be achieved if both word lists had the maximal length of 40 entries and each synonymous word pair had a higher PMI score than any non-synonymous word pair. The minimal value z_min = −√40 ≈ −6.32 would be achieved if all p_i equal 1 and both word lists have length 40, since then the sum of the −log p_i is 0 and z = −40/√40.

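The whole calibration, from per-concept p-values to the distance d, can be sketched as follows; a minimal illustration assuming the similarity function sim from above and word lists given as dicts from concepts to lists of words, with the +1 smoothing and the rescaling constants as assumptions of this sketch:

```python
import math

def language_distance(l1, l2, sim, z_max, z_min):
    """d(L1, L2) following equations (3)-(6); l1, l2: concept -> words."""
    concepts = sorted(set(l1) & set(l2))          # shared concepts
    # per-concept similarity: best-scoring synonymous word pair
    s = {i: max(sim(x, y) for x in l1[i] for y in l2[i]) for i in concepts}
    # null distribution: best scores of non-synonymous concept pairs
    null = [max(sim(x, y) for x in l1[i] for y in l2[j])
            for i in concepts for j in concepts if i != j]
    # equation (3), with +1 smoothing to avoid log(0)
    p = {i: (sum(v >= s[i] for v in null) + 1) / (len(null) + 1)
         for i in concepts}
    n = len(concepts)
    z = (sum(-math.log(p[i]) for i in concepts) - n) / math.sqrt(n)  # eq. (5)
    return (z_max - z) / (z_max - z_min)                             # eq. (6)
```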
We computed d(L1, L2) for each pair of the above-mentioned 6,892 languages from the ASJP database. The resulting distance matrix is available at https://osf.io/24be8/.

Automatic cognate classification

Background

In [21], a method is developed to cluster words into equivalence classes in a way that approximates manual expert cognacy classifications. This approach is briefly sketched in this section.

The authors chose a supervised learning approach. They used word lists with manual expert cognate annotations from a diverse collection of language families, taken from [15, 16, 17, 18, 30]. A part of these goldstandard data was used to train a Support Vector Machine (SVM). For each pair of words from two languages, both denoting the same concept, seven feature values were computed:

  1. PMI similarity. The string similarity measure according to [20], as described in the previous section.

  2. Calibrated PMI distance. The p-value p_i as defined in equation (3) above.

  3. The negative logarithm thereof.

  4. Language similarity. z(L1, L2), as defined in equation (5) above.

  5. The logarithm thereof.

  6. Average word length. The average length of the words for the concept in question across all languages in the database, measured in number of symbols in ASJP transcription.

  7. Concept-language correlation. The Pearson correlation coefficient between feature 3 and feature 4 across all word pairs expressing the concept in question.

For each such word pair, the goldstandard contains an evaluation as cognate (1) or not cognate (0). An SVM was trained to predict these binary cognacy labels. Applying Platt scaling [31], the algorithm predicts a probability of cognacy for each pair of words from different languages denoting the same concept. These probabilities were used as input for hierarchical clustering, yielding, for each concept, a partitioning of the words into equivalence classes.

The authors divided the goldstandard data into a training set and a test set. Using an SVM trained on the training set, they compared automatically inferred clusters with the manual cognate classifications and report B-cubed F-scores [32] for each data set in their test data, as well as their weighted average.
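The B-cubed F-score can be computed directly from two clusterings; a compact reference implementation, where gold and pred map each word to its class label:

```python
def b_cubed_f(gold, pred):
    """B-cubed F-score [32] between a gold and a predicted clustering."""
    items = list(gold)
    prec = rec = 0.0
    for i in items:
        gold_cl = {j for j in items if gold[j] == gold[i]}   # i's gold class
        pred_cl = {j for j in items if pred[j] == pred[i]}   # i's cluster
        overlap = len(gold_cl & pred_cl)
        prec += overlap / len(pred_cl)    # item-wise precision
        rec += overlap / len(gold_cl)     # item-wise recall
    prec, rec = prec / len(items), rec / len(items)
    return 2 * prec * rec / (prec + rec)

print(b_cubed_f({'a': 1, 'b': 1, 'c': 2}, {'a': 1, 'b': 2, 'c': 2}))  # 0.667
```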

Creating a goldstandard

We adapted this approach to the task of performing automatic cognate classification on the ASJP data. Since ASJP contains data from many different families and is confined to 40 core concepts (while the data used in [21] partially cover 200-item concept lists), the method had to be modified accordingly.

We created a goldstandard dataset from the data used in [22] (which are drawn from the same sources as the data used in [21] but have been manually edited to correct annotation mistakes). Only the 40 ASJP concepts were used. Also, we selected the source data in such a way that each dataset is drawn from a different language family. Words from different families were generally classified as non-cognate in the goldstandard. All transcriptions were converted into ASJP format. Table 1 summarizes the composition of the goldstandard data.

Dataset     Source  Words   Concepts  Languages  Family             Cognate classes
ABVD        [15]    2,306   34        100        Austronesian       409
Afrasian    [33]    770     39        21         Afro-Asiatic       351
Chinese     [34]    422     20        18         Sino-Tibetan       126
Huon        [35]    441     32        14         Trans-New Guinea   183
IELex       [30]    2,089   40        52         Indo-European      318
Japanese    [36]    387     39        10         Japonic            74
Kadai       [37]    399     40        12         Tai-Kadai          102
Kamasau     [38]    270     36        8          Torricelli         59
Mayan       [6]     1,113   40        30         Mayan              241
Miao-Yao    [37]    206     36        6          Hmong-Mien         69
Mixe-Zoque  [39]    355     39        10         Mixe-Zoque         79
Mon-Khmer   [37]    579     40        16         Austroasiatic      232
ObUgrian    [40]    769     39        21         Uralic             68
total               10,106  40        318        13 families        2,311
Table 1: Goldstandard data used for this study.

Clustering

We used the Label Propagation algorithm [41] for clustering. For each concept, a network is constructed from the words for that concept. Two nodes are connected if and only if their predicted probability of cognacy exceeds a fixed threshold. Label Propagation detects community structures within the network, i.e., it partitions the nodes into clusters.
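A sketch of this step using the networkx implementation of [41]; the names words and prob as well as the 0.5 cutoff are assumptions of this illustration:

```python
import networkx as nx
from networkx.algorithms.community import asyn_lpa_communities

def cluster_concept(words, prob, threshold=0.5):
    """Partition the words for one concept into inferred cognate classes."""
    g = nx.Graph()
    g.add_nodes_from(words)
    g.add_edges_from((u, v) for i, u in enumerate(words)
                     for v in words[i + 1:] if prob(u, v) > threshold)
    # Label Propagation [41]: nodes iteratively adopt the majority label
    # of their neighbours until the labeling stabilizes
    return [set(c) for c in asyn_lpa_communities(g)]
```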

Model selection

To identify the set of features best suited for clustering the ASJP data, we performed cross-validation on the goldstandard data. The data were split into a training set, consisting of the data from six randomly chosen language families, and a test set, consisting of the remaining data. We deviated slightly from [21] by replacing features 4 and 5 with the language distance as defined in equation (6) and its logarithm. Both are linear transformations of the original features and therefore do not affect the automatic classification.

For each of the 127 non-empty subsets of the seven features, an SVM with an RBF kernel was trained on 7,000 randomly chosen synonymous word pairs from the training set. (Explorative tests revealed that the prediction accuracy does not increase if more training data are used.) The trained SVM plus Platt scaling was used to predict the probability of cognacy for each synonymous word pair from the test set, and the resulting probabilities were used for Label Propagation clustering. This procedure was repeated ten times for random splits of the goldstandard data into a training set and a test set.

For each feature combination, the B-cubed F-score, averaged over the ten training/test splits, was determined. The best average B-cubed F-score was achieved using just two features:

  • Word similarity. The negative logarithm of the calibrated PMI distance, and

  • Language log-distance. log d(L1, L2), with d as defined in equation (6).

Figure 2 displays, for a sample of the goldstandard data, how expert cognacy judgments depend on these features and how the trained SVM with Platt scaling predicts cognacy from them. Most cognate pairs are concentrated in the lower right corner of the feature space, i.e., they display both high word similarity and low language log-distance. The SVM learns this non-linear dependency between the two features.

Figure 2: Expert cognacy judgments (left) and prediction of cognacy (right) depending on the selected features

Clustering the ASJP data

A randomly selected sample of 7,000 synonymous word pairs from the goldstandard data was used to train an SVM with an RBF kernel, using the two features obtained via model selection. Cognate classes for the ASJP data were then obtained by (a) computing word similarity and language log-distance for all pairs of synonymous ASJP entries, (b) predicting their probability of cognacy using the trained SVM and Platt scaling, and (c) applying Label Propagation clustering to these probabilities.
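Steps (a) and (b) can be reproduced along the following lines with scikit-learn, whose SVC fits a Platt-scaling sigmoid when probability=True; the file names and default hyperparameters are illustrative, and the study's exact toolkit is not specified here:

```python
import numpy as np
from sklearn.svm import SVC

# hypothetical files: two feature columns (word similarity, language
# log-distance) and binary expert labels (1 = cognate, 0 = non-cognate)
X_train = np.load('train_features.npy')   # shape (7000, 2)
y_train = np.load('train_labels.npy')

# probability=True adds Platt scaling on top of the SVM decision values
svm = SVC(kernel='rbf', probability=True).fit(X_train, y_train)

X_pairs = np.load('asjp_pair_features.npy')   # hypothetical file
p_cognate = svm.predict_proba(X_pairs)[:, 1]  # probability of cognacy
```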

Phylogenetic inference

Distance-based

The language distances as defined in equation (6) can be used as input for distance-based phylogenetic inference. In the experiments reported below, we used the BIONJ algorithm [42] for this purpose.

Character-based

We propose two methods to extract discrete character matrices from the ASJP data.

  1. Automatically inferred cognate classes. We define one character per automatically inferred (in the sense described above) cognate class c. If the word list for a language has a missing entry for the concept the elements of c refer to, the character is undefined for this language. Otherwise the character assumes value 1 if the word list contains an element of c, and 0 otherwise.

  2. Soundclass-concept characters. We define a character for each pair (i, s), where i is a concept and s an ASJP sound class. The character for (i, s) is undefined for language L if L’s word list has a missing entry for concept i. Otherwise its value is 1 if one of the words for i in L’s word list contains the symbol s in its transcription, and 0 otherwise. (A sketch of both constructions is given directly below.)
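An illustrative construction of both character types for one language; all names, and the representation of cognate classes as sets of member words per concept, are assumptions of this sketch, with None marking an undefined character:

```python
def character_values(lang, word_lists, cognate_classes, sound_classes):
    """word_lists: language -> {concept -> list of ASJP-transcribed words};
    cognate_classes: concept -> {class id -> set of member words}."""
    entries = word_lists[lang]
    chars = {}
    for concept, classes in cognate_classes.items():
        defined = concept in entries
        for cid, members in classes.items():    # cognate-class characters
            chars[('cc', cid)] = (
                int(any(w in members for w in entries[concept]))
                if defined else None)
        for s in sound_classes:                 # soundclass-concept characters
            chars[('sc', concept, s)] = (
                int(any(s in w for w in entries[concept]))
                if defined else None)
    return chars
```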

The motivation for these two types of characters is that they track two different aspects of language change. Cognacy characters contain information about lexical changes, while soundclass-concept characters also track sound changes within cognate words. Both dimensions provide information about language diversification.

Let us illustrate this with two examples.

  • The Old English word for ‘dog’ was hund (hund in ASJP transcription). It belongs to the automatically inferred cognate class dog_149. The Modern English word for that concept is dog (dag in ASJP transcription), which belongs to class dog_150. This amounts to two mutations of cognate-class characters between Old English and Modern English: 0 → 1 for dog_150 and 1 → 0 for dog_149.

    The same historical process is also tracked by the sound-concept characters; it corresponds to seven mutations: 0 → 1 for dog:d, dog:a and dog:g, and 1 → 0 for dog:h, dog:u, dog:n and dog:d.

  • The word for ‘tree’ changed from Old English treow (treow) to Modern English tree (tri). Both entries belong to cognate class tree_17. As no lexical replacement took place for this concept, no mutation of cognate-class characters separates Old and Modern English here. The historical sound change processes reflected in these words are captured by mutations of sound-concept characters: 0 → 1 for tree:i and 1 → 0 for tree:e, tree:o and tree:w.

For a given sample of languages, we use all variable characters (i.e., characters that have value 1 for at least one language and value 0 for at least one language in the sample) from both character sets. Phylogenetic inference was performed via Maximum-Likelihood estimation assuming Γ-distributed rates with 25 rate categories, using ascertainment bias correction according to [43]. Base frequencies and the variance of the rate variation were estimated from the data.

In our phylogenetic experiments, the distance-based tree was used as initial tree for tree search. This method was applied to three character matrices:

  • cognate class characters,

  • soundclass-concept characters, and

  • a partitioned analysis using both types of characters simultaneously.

Inference was performed using the software RAxML [44].

Applying more advanced methods of character-based inference, such as Bayesian inference [45, 46, 47], proved to be impractical due to hardware limitations.

Code availability

The code used to conduct this study is freely available at https://osf.io/cufv7/ (DOI: 10.17605/OSF.IO/CUFV7). The workflow processes the sub-directories in the following order: 1. pmiPipeline, 2. cognateClustering, and 3. validation. All further details, including the software and software versions used, are described in the README files in the individual sub-directories and sub-sub-directories (https://osf.io/fskue/, https://osf.io/2ncxm/, https://osf.io/32xg4/, https://osf.io/b8xgt/, and https://osf.io/3rzfn/).

Data Records

All data that were produced are available at https://osf.io/cufv7/ as well.

PMI data

Automatic cognate classification

Phylogenetic inference

  • family-wise data and trees (described in Subsection Phylogenetic Inference within the Section Technical Validation) are in the sub-directory validation/families of https://osf.io/cufv7/. For each Glottolog family F, there are the following files (replace [F] by the name of the family):

    • [F].cc.phy: character matrix, cognate class characters, Phylip format

    • [F].sc.phy: character matrix, soundclass-concept characters, Phylip format

    • [F].cc_sc.phy: combined character matrix, cognate class and soundclass-concept characters, Phylip format

    • [F].part.txt: partition file

    • [F].pmi.nex: pairwise PMI distances, Nexus format

    • [F].pmi.tre: BIONJ tree, inferred from PMI distances, Newick format

    • glot.[F].tre: Glottolog tree, Newick format

    • RAxML_bestTree.[F]_cc: Maximum Likelihood tree, inferred from cognate class characters, Newick format

    • RAxML_bestTree.[F]_sc: Maximum Likelihood tree, inferred from soundclass-concept characters, Newick format

    • RAxML_bestTree.[F]_cc_sc: Maximum Likelihood tree, inferred from combined character matrix, Newick format

  • global data over all 6,892 languages in the database are in the sub-directory validation, and global trees in the sub-directory validation/worldTree:

    • validation/world_cc.phy: character matrix, cognate class characters, Phylip format

    • validation/world_sc.phy: character matrix, soundclass-concept characters, Phylip format

    • validation/world_sc_cc.phy: combined character matrix, cognate class and soundclass-concept characters, Phylip format

    • validation/world.partition.txt: partition file

    • validation/glottologTree.tre: Glottolog tree, Newick format

    • validation/worldTree/distanceTree.tre: BIONJ tree, inferred from PMI distances, Newick format

    • validation/worldTree/RAxML_bestTree.world_cc: Maximum Likelihood tree, inferred from cognate class characters, Newick format

    • validation/worldTree/RAxML_bestTree.world_sc: Maximum Likelihood tree, inferred from soundclass-concept characters, Newick format

    • validation/worldTree/RAxML_bestTree.world_sc_cc: Maximum Likelihood tree, inferred from combined character matrix, Newick format

    • validation/worldTree/RAxML_bestTree.world_sc_ccGlot: Maximum Likelihood tree, inferred from combined character matrix using the Glottolog classification as constraint tree, Newick format

Technical Validation

Phylogenetic inference

To evaluate the usefulness of the distance measure and the character matrices defined above for phylogenetic inference, we performed two experiments:

  • Experiment 1. We applied both distance-based and character-based inference to all language families (according to the Glottolog classification) containing at least 10 languages in ASJP.

  • Experiment 2. We sampled 100 sets of languages, each with a size between 20 and 400, at random and applied all four methods of phylogenetic inference (distance-based inference and the three character-based variants described above) to each of them.

In both experiments, each automatically inferred phylogeny was evaluated by computing the Generalized Quartet Distance (GQD) [48] to the Glottolog expert tree (restricted to the same set of languages).

The results of the first experiment are summarized in Table 2 and visualized in Figure 3. The results for the individual families are given in Table 5.

Figure 3: Experiment 1: Generalized Quartet Distances for Glottolog families depending on phylogenetic inference method. Aggregated over all families (left) and split according to family size (right).
method                       character-based                                   distance-based
character type               cognate classes  soundclass-concept  combined
total                        0.215            0.186               0.173        0.148
small families (< 20 taxa)   0.193            0.219               0.219        0.100
medium families (20–199)     0.249            0.179               0.171        0.168
large families (≥ 200)       0.147            0.147               0.102        0.154
Table 2: Median Generalized Quartet Distances for Glottolog families

Aggregating over all families suggests that distance-based inference produces the best fit with the expert goldstandard. However, a closer inspection of the results reveals that the performance of the different phylogenetic inference methods depends on the size of the language families (measured in the number of taxa available in ASJP). Combining both types of characters in a partitioned model generally leads to better results than either character type individually. While distance-based inference is superior for small language families (fewer than 20 taxa), character-based inference is about equally good for medium-sized (20–199 taxa) families and superior for large (at least 200 taxa) ones.

This assessment is based on a small sample, since there are only 33 medium-sized and 6 large language families. The results of Experiment 2, however, confirm these conclusions. They are summarized in Table 3 and illustrated in Figure 4.

Figure 4: Experiment 2: Generalized Quartet Distances for random samples of languages depending on phylogenetic inference method. Aggregated over all samples (left) and split according to sample size (right).
method                        character-based                                  distance-based
character type                cognate classes  soundclass-concept  combined
total                         0.187            0.147               0.066       0.130
sample-size bin 1 (smallest)  0.286            0.210               0.099       0.174
sample-size bin 2             0.226            0.135               0.065       0.115
sample-size bin 3             0.157            0.136               0.063       0.132
sample-size bin 4 (largest)   0.131            0.132               0.061       0.114
Table 3: Median Generalized Quartet Distances to Glottolog for random samples of languages, split by sample size

All four methods improve with growing sample size, but this effect is more pronounced for character-based inference. While combined character-based inference and distance-based inference are comparable in performance for smaller samples of languages, character-based inference outperforms distance-based inference for larger samples, and the difference grows with sample size.

The same pattern is found when the different versions of phylogenetic inference are applied to the full dataset of 6,892 languages.

Relation to geography

Both the distances between languages and the two methods for representing languages as character vectors are designed to identify similarities between word lists. There are essentially three conceivable causal reasons why the word lists of two languages may be similar: (1) common descent, (2) language contact, and (3) universal tendencies in sound-meaning association due to sound symbolism, nursery forms, etc. [49]. The third effect is arguably rather weak. The signal deriving from common descent and from language contact should be correlated with geographic distance. If the methods proposed here extract a genuine signal from the word lists, we thus expect to find such a correlation.

To test this hypothesis, we computed the geographic distance (great-circle distance) between all pairs from a sample of 500 randomly selected languages, using the geographic coordinates supplied with the ASJP data.

We furthermore extracted pairwise distances from the character vectors by computing the cosine distance between those vectors, using only the characters for which both languages have a defined value. In this way we obtained three matrices of pairwise linguistic distances for the mentioned sample of 500 languages: (1) the distance as defined in equation (6), called PMI distance, (2) the cosine distance between the cognate-class vectors, and (3) the cosine distance between the sound-concept vectors.
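A minimal sketch of this computation, assuming character vectors encoded as float arrays with np.nan marking undefined values:

```python
import numpy as np

def cosine_distance(u, v):
    """Cosine distance restricted to characters defined in both languages."""
    defined = ~(np.isnan(u) | np.isnan(v))   # mask of shared defined characters
    a, b = u[defined], v[defined]
    return 1.0 - a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

u = np.array([1.0, 0.0, np.nan, 1.0])
v = np.array([1.0, 1.0, 0.0, np.nan])
print(cosine_distance(u, v))   # uses only the first two characters
```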

All three linguistic distance measures show a significant positive Pearson correlation with geographic distance, with significance assessed via the Mantel test. Figure 5 shows the corresponding scatter plots.

Figure 5: Geographic vs. linguistic distances between languages

The visualization suggests that all three linguistic distance measures carry a signal up to a distance of at least 5,000 km. This is confirmed by the Mantel correlograms [50] shown in Figure 6.

Figure 6: Mantel correlograms. Blue: significant, red: non-significant.

We find a significant positive correlation with geographic distance up to 5,000 km for PMI distance, and up to 4,000 km for cognate-class distance and sound-concept distance.

Usage Notes

Character-based inference from expert cognacy judgment data has been used in various downstream applications beyond phylogenetic inference, such as estimating the time course of prehistoric population events [3, 7, 9] or identifying overarching patterns of cultural language evolution [5, 51]. This section illustrates how the automatically inferred characters described above can be deployed to expand the scope of such investigations to larger collections of language families.

A case study: punctuated language evolution

A few decades ago, [52] proposed that biological evolution is not, in general, a gradual process. Rather, they proposed, long periods of stasis are separated by short periods of rapid change co-occurring with branching speciation. This model goes by the name of punctuated equilibrium, and it has initiated a lively and still ongoing discussion in biology. Pagel, Venditti and Meade [53] developed a method to test a version of this hypothesis statistically. If most evolutionary change occurs during speciation events, one expects a positive correlation between the number of speciation events a lineage underwent throughout its evolutionary history and the amount of evolutionary change accumulated during that time.

Estimates of both quantities can be read off a phylogenetic tree — the number of speciation events corresponds to the number of branching nodes, and the amount of change to the total path length — provided that (a) the tree is rooted and (b) branch lengths reflect evolutionary change (e.g., the expected number of mutations of a character) rather than historical time. In [53], a significant correlation is found for biomolecular data, providing evidence for punctuated evolution.

In [51], the same method is applied to the study of language evolution, using expert cognacy data from three language families (Austronesian, Bantu, Indo-European). The study finds strong evidence for punctuated evolution in all three families.

We conducted a similar study for all Glottolog language families with at least 10 ASJP languages. The workflow was as follows. For each family F:

  • Find the language outside F which has the minimal average PMI distance to the languages in F. This language is used as outgroup.

  • Infer a Maximum-Likelihood tree over the taxa in F together with the outgroup, with the Glottolog classification as constraint tree, using a partitioned analysis with cognate-class characters and soundclass-concept characters.

  • Use the outgroup to root the tree; then remove it from the tree.

  • Apply the δ-test [54] to control for the node-density artifact.

  • Perform a Phylogenetic Generalized Least Squares [55] regression with the root-to-tip path lengths of all taxa as the dependent variable and the root-to-tip numbers of nodes as the independent variable (a minimal sketch of this step is given after this list).

  • If the δ-test is negative and the regression yields a significantly positive slope, this is evidence for punctuated evolution in F.
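The regression step can be sketched with plain numpy as generalized least squares under a Brownian-motion covariance structure; computing the covariance matrix C, whose entries are the shared root-to-MRCA path lengths of each pair of taxa, is omitted, and all names are illustrative:

```python
import numpy as np

def pgls_slope(nodes, path_lengths, C):
    """GLS estimate of the slope of path length on the number of nodes.

    nodes: root-to-tip node counts per taxon (independent variable)
    path_lengths: root-to-tip path lengths per taxon (dependent variable)
    C: phylogenetic (Brownian) covariance matrix of the taxa
    """
    X = np.column_stack([np.ones_like(nodes, dtype=float), nodes])
    Ci = np.linalg.inv(C)
    # generalized least squares: beta = (X' C^-1 X)^-1 X' C^-1 y
    beta = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ path_lengths)
    return beta[1]
```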

Among the 66 language families considered, the δ-test was negative for 43 families. We applied the Holm-Bonferroni correction for multiple testing to determine significance in the regression analysis. The numerical results are given in Table 4.

family                      slope    p-value  taxa  significant
Atlantic-Congo              0.003    1E-14    1332  yes
Austronesian                0.005    1E-14    1259  yes
Afro-Asiatic                0.008    2E-13    356   yes
Sino-Tibetan                0.005    9E-8     279   yes
Indo-European               0.004    2E-7     367   yes
Nuclear_Trans_New_Guinea    0.003    7E-4     259   yes
Pama-Nyungan                0.005    6E-4     167   yes
Tai-Kadai                   0.007    8E-3     142   no
Kiwaian                     0.024    9E-3     10    no
Nakh-Daghestanian           0.011    0.01     55    no
Turkic                      0.009    0.02     60    no
Quechuan                    0.006    0.03     62    no
Siouan                      0.004    0.04     17    no
Cariban                     0.016    0.05     30    no
Eskimo-Aleut                -0.052   0.05     10    no
Central_Sudanic             0.010    0.07     58    no
Salishan                    0.015    0.08     30    no
Chibchan                    0.011    0.10     23    no
Ainu                        0.013    0.10     22    no
Dravidian                   0.008    0.10     38    no
Sko                         0.038    0.10     14    no
Uralic                      0.017    0.12     30    no
Ndu                         0.025    0.14     10    no
Lower_Sepik-Ramu            0.070    0.15     19    no
Japonic                     0.010    0.17     32    no
Gunwinyguan                 0.027    0.21     14    no
Heibanic                    -0.041   0.24     11    no
Khoe-Kwadi                  0.015    0.39     12    no
Tungusic                    0.011    0.40     25    no
Tucanoan                    -0.011   0.45     32    no
Angan                       0.018    0.46     17    no
Cochimi-Yuman               0.005    0.47     13    no
Chocoan                     -0.027   0.51     10    no
Kadugli-Krongo              -0.006   0.71     11    no
Pano-Tacanan                0.001    0.78     33    no
Tupian                      0.001    0.80     59    no
Totonacan                   -0.002   0.80     14    no
Ta-Ne-Omotic                -0.003   0.83     24    no
Algic                       -0.009   0.86     32    no
Lakes_Plain                 -0.002   0.89     22    no
Timor-Alor-Pantar           0.005    0.92     59    no
Bosavi                      -0.003   0.99     13    no
Table 4: Test for punctuated language evolution for the families without node-density artifact. Significance is determined via Holm-Bonferroni correction.

A significant positive dependency was found for the seven largest language families (Atlantic-Congo, Austronesian, Indo-European, Afro-Asiatic, Sino-Tibetan, Nuclear Trans-New Guinea, Pama-Nyungan). The relationships for these families are visualized in Figure 7. No family showed a significant negative dependency. This strengthens the conclusion of Atkinson et al. [51] that languages evolve in punctuational bursts.

Figure 7: Dependency between total path length and the number of branching nodes for the families with a significantly positive association. Blue lines are regression lines according to phylogenetic generalized least squares.

Acknowledgements

This research was supported by the ERC Advanced Grant 324246 EVOLAEMP and the DFG-KFG 2237 Words, Bones, Genes, Tools; this support is gratefully acknowledged.

References

  • [1] Atkinson, Q. D. & Gray, R. Curious parallels and curious connections — phylogenetic thinking in biology and historical linguistics. Systematic Biology 54, 513–526 (2005).
  • [2] Levinson, S. C. & Gray, R. D. Tools from evolutionary biology shed new light on the diversification of languages. Trends in Cognitive Sciences 16, 167–173 (2012).
  • [3] Gray, R. D. & Jordan, F. M. Language trees support the express-train sequence of Austronesian expansion. Nature 405, 1052–1055 (2000).
  • [4] Dunn, M., Terrill, A., Reesink, G., Foley, R. A. & Levinson, S. C. Structural phylogenetics and the reconstruction of ancient language history. Science 309, 2072–2075 (2005).
  • [5] Pagel, M., Atkinson, Q. D. & Meade, A. Frequency of word-use predicts rates of lexical evolution throughout Indo-European history. Nature 449, 717–720 (2007).
  • [6] Brown, C. H., Holman, E. W., Wichmann, S. & Velupillai, V. Automated classification of the world’s languages: A description of the method and preliminary results. STUF — Language Typology and Universals 61, 285–308 (2008).
  • [7] Gray, R. D., Drummond, A. J. & Greenhill, S. J. Language phylogenies reveal expansion pulses and pauses in Pacific settlement. Science 323, 479–483 (2009).
  • [8] Dunn, M., Greenhill, S. J., Levinson, S. & Gray, R. D. Evolved structure of language shows lineage-specific trends in word-order universals. Nature 473, 79–82 (2011).
  • [9] Bouckaert, R. et al. Mapping the origins and expansion of the Indo-European language family. Science 337, 957–960 (2012).
  • [10] Bowern, C. & Atkinson, Q. Computational phylogenetics and the internal structure of Pama-Nyungan. Language 88, 817–845 (2012).
  • [11] Bouchard-Côté, A., Hall, D., Griffiths, T. L. & Klein, D. Automated reconstruction of ancient languages using probabilistic models of sound change. Proceedings of the National Academy of Sciences 110, 4224–4229 (2013).
  • [12] Pagel, M., Atkinson, Q. D., Calude, A. S. & Meade, A. Ultraconserved words point to deep language ancestry across Eurasia. Proceedings of the National Academy of Sciences 110, 8471–8476 (2013).
  • [13] Hruschka, D. J. et al. Detecting regular sound changes in linguistics as events of concerted evolution. Current Biology 25, 1–9 (2015).
  • [14] Jäger, G. Support for linguistic macrofamilies from weighted sequence alignment. Proceedings of the National Academy of Sciences 112, 12752–12757 (2015). Doi: 10.1073/pnas.1500331112.
  • [15] Greenhill, S. J., Blust, R. & Gray, R. D. The Austronesian Basic Vocabulary Database: From bioinformatics to lexomics. Evolutionary Bioinformatics 4, 271–283 (2008).
  • [16] Wichmann, S. & Holman, E. W. Languages with longer words have more lexical change. In Borin, L. & Saxena, A. (eds.) Approaches to Measuring Linguistic Differences, 249–284 (Mouton de Gruyter, Berlin, 2013).
  • [17] List, J.-M. Data from: Sequence comparison in historical linguistics. GitHub Repository (2014). URL http://github.com/SequenceComparison/SupplementaryMaterial. Release 1.0.
  • [18] Mennecier, P., Nerbonne, J., Heyer, E. & Manni, F. A Central Asian language survey: Collecting data, measuring relatedness and detecting loans. Language Dynamics and Change 6 (2016). In press.
  • [19] Wichmann, S., Holman, E. W. & Brown, C. H. The ASJP database (version 17). http://asjp.clld.org/ (2016).
  • [20] Jäger, G. Phylogenetic inference from word lists using weighted alignment with empirically determined weights. Language Dynamics and Change 3, 245–291 (2013).
  • [21] Jäger, G. & Sofroniev, P. Automatic cognate classification with a Support Vector Machine. In Dipper, S., Neubarth, F. & Zinsmeister, H. (eds.) Proceedings of the 13th Conference on Natural Language Processing, vol. 16 of Bochumer Linguistische Arbeitsberichte, 128–134 (Ruhr Universität Bochum, 2016).
  • [22] Jäger, G., List, J.-M. & Sofroniev, P. Using support vector machines and state-of-the-art algorithms for phonetic alignment to identify cognates in multi-lingual wordlists. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (ACL, 2017).
  • [23] Needleman, S. B. & Wunsch, C. D. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of Molecular Biology 48, 443–453 (1970).
  • [24] Cock, P. J. A. et al. Biopython: freely available Python tools for computational molecular biology and bioinformatics. Bioinformatics 25, 1422–1423 (2009). doi:10.1093/bioinformatics/btp163.
  • [25] Holman, E. W. et al. Advances in automated language classification. In Arppe, A., Sinnemäki, K. & Nikanne, U. (eds.) Quantitative Investigations in Theoretical Linguistics, 40–43 (University of Helsinki, 2008).
  • [26] Dempster, A. P., Laird, N. M. & Rubin, D. B. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological) 39, 1–38 (1977).
  • [27] Nelder, J. A. & Mead, R. A simplex method for function minimization. The Computer Journal 7, 308–313 (1965).
  • [28] Kroonen, G. Etymological Dictionary of Proto-Germanic (Brill, Leiden, Boston, 2013).
  • [29] Fisher, R. A. Statistical methods for research workers (Genesis Publishing Pvt Ltd, 1925).
  • [30] Dunn, M. Indo-European lexical cognacy database (IELex). URL: http://ielex.mpi.nl/ (2012).
  • [31] Platt, J. C. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In Advances in large margin classifiers, 61–74 (MIT Press, 1999).
  • [32] Bagga, A. & Baldwin, B. Entity-based cross-document coreferencing using the vector space model. In Proceedings of the 17th International Conference on Computational Linguistics-Volume 1, 79–85 (Association for Computational Linguistics, 1998).
  • [33] Militarev, A. I. Towards the chronology of Afrasian (Afroasiatic) and its daughter families (McDonald Institute for Archaeological Research, Cambridge, 2000).
  • [34] Běijīng Dàxué. Hànyǔ fāngyán cíhuì [Chinese dialect vocabularies] (Wénzì Gǎigé, 1964).
  • [35] McElhanon, K. A. Preliminary observations on Huon Peninsula languages. Oceanic Linguistics 6, 1–45 (1967). URL http://www.jstor.org/stable/3622923.
  • [36] Hattori, S. Japanese dialects. In Hoenigswald, H. M. & Langacre, R. H. (eds.) Diachronic, areal and typological linguistics, 368–400 (Mouton, The Hague and Paris, 1973).
  • [37] Peiros, I. Comparative linguistics in Southeast Asia. Pacific Linguistics 142 (1998).
  • [38] Sanders, J. & Sanders, A. G. Dialect survey of the Kamasau language. Pacific Linguistics. Series A. Occasional Papers 56, 137 (1980).
  • [39] Cysouw, M., Wichmann, S. & Kamholz, D. A critique of the separation base method for genealogical subgrouping. Journal of Quantitative Linguistics 13, 225–264 (2006).
  • [40] Zhivlov, M. Annotated Swadesh wordlists for the Ob-Ugrian group. In Starostin, G. S. (ed.) The Global Lexicostatistical Database (RGGU, Moscow, 2011). URL: http://starling.rinet.ru.
  • [41] Raghavan, U. N., Albert, R. & Kumara, S. Near linear time algorithm to detect community structures in large-scale networks. Physical Review E 76, 036106 (2007).
  • [42] Gascuel, O. BIONJ: An improved version of the NJ algorithm based on a simple model of sequence data. Molecular Biology and Evolution 14, 685–695 (1997).
  • [43] Lewis, P. O. A likelihood approach to estimating phylogeny from discrete morphological character data. Systematic Biology 50, 913–925 (2001).
  • [44] Stamatakis, A. RAxML version 8: a tool for phylogenetic analysis and post-analysis of large phylogenies. Bioinformatics 30, 1312–1313 (2014).
  • [45] Ronquist, F. & Huelsenbeck, J. P. MrBayes 3: Bayesian phylogenetic inference under mixed models. Bioinformatics 19, 1572–1574 (2003).
  • [46] Drummond, A. J., Suchard, M. A., Xie, D. & Rambaut, A. Bayesian phylogenetics with BEAUti and the BEAST 1.7. Molecular biology and evolution 29, 1969–1973 (2012).
  • [47] Pagel, M. & Meade, A. BayesPhylogenies 2.0. software distributed by the authors (2015).
  • [48] Pompei, S., Loreto, V. & Tria, F. On the accuracy of language trees. PLoS One 6, e20109 (2011).
  • [49] Blasi, D. E., Wichmann, S., Hammarström, H., Stadler, P. F. & Christiansen, M. H. Sound–meaning association biases evidenced across thousands of languages. Proceedings of the National Academy of Sciences 113, 10818–10823 (2016).
  • [50] Legendre, P. & Legendre, L. F. J. Numerical Ecology (Elsevier, Amsterdam/Oxford, 2012).
  • [51] Atkinson, Q. D., Meade, A., Venditti, C., Greenhill, S. J. & Pagel, M. Languages evolve in punctuational bursts. Science 319, 588–588 (2008).
  • [52] Gould, S. J. & Eldredge, N. Punctuated equilibria: the tempo and mode of evolution reconsidered. Paleobiology 3, 115–151 (1977).
  • [53] Pagel, M., Venditti, C. & Meade, A. Large punctuational contribution of speciation to evolutionary divergence at the molecular level. Science 314, 119–121 (2006).
  • [54] Venditti, C., Meade, A. & Pagel, M. Detecting the node-density artifact in phylogeny reconstruction. Systematic Biology 55, 637–643 (2006).
  • [55] Grafen, A. The phylogenetic regression. Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences 326, 119–157 (1989).
Table 5: Generalized Quartet Distances to the Glottolog expert tree for Glottolog families with at least 10 taxa
Table 6: Generalized Quartet Distances to the Glottolog expert tree for randomly sampled groups of taxa