Telling Cause from Effect using MDL-based Local and Global Regression


Alexander Marx and Jilles Vreeken Max Planck Institute for Informatics and Saarland University
Saarland Informatics Campus, Saarbrücken, Germany
{amarx, jilles}

We consider the fundamental problem of inferring the causal direction between two univariate numeric random variables X and Y from observational data. The two-variable case is especially difficult to solve, since it is not possible to use standard conditional independence tests between the variables.

To tackle this problem, we follow an information theoretic approach based on Kolmogorov complexity and use the Minimum Description Length (MDL) principle to provide a practical solution. In particular, we propose a compression scheme to encode local and global functional relations using MDL-based regression. We infer X causes Y in case it is shorter to describe Y as a function of X than the inverse direction. In addition, we introduce Slope, an efficient linear-time algorithm, which, through a thorough empirical evaluation on both synthetic and real world data, we show to outperform the state of the art by a wide margin.

Index Terms: Kolmogorov Complexity, MDL, Causal Inference, Regression, Hypercompression

I Introduction

Telling cause from effect from observational data is one of the fundamental problems in science [26, 18]. We consider the problem of inferring the most likely causal direction between two univariate numeric random variables X and Y. That is, we are interested in identifying whether X causes Y, whether Y causes X, or whether they are merely correlated.

Traditional methods that rely on conditional independence tests cannot decide between the Markov equivalent classes of X → Y and Y → X [18]. Recently, however, it has been postulated that if X → Y, there exists an independence between the marginal distribution of the cause, P(X), and the conditional distribution of the effect given the cause, P(Y | X) [25, 9]. The state of the art exploits this asymmetry in various ways, and overall obtains good, but far from perfect, accuracy on a well-known benchmark of cause-effect pairs [24, 8, 20, 10, 17]. In this paper we improve over this, and give an elegant score that is computable in linear time and obtains substantially higher accuracy on the same benchmark.

To illustrate its strength, we show the results of our method, called Slope, and four state-of-the-art methods over this benchmark in Fig. 1. In particular, we show the accuracy over the top-k pairs ranked according to the score at hand, the so-called decision rate. The plot shows that Slope leads by a large margin over its competitors; it is most accurate over the 32 pairs it is most certain about, and remains highly accurate over its top-74 out of 98 pairs. Unlike its competitors, its accuracies are strongly significant with regard to the confidence interval of a fair coin flip. Moreover, our score comes with a natural significance test that allows us to weed out insignificant results; when we consider the significant pairs only, we find that Slope is even more accurate.

Fig. 1: [Higher is better] Weighted accuracy of our method, Slope, versus the state of the art in causal inference for univariate numeric pairs as identified in a recent survey [17], i.e., Cure [24], Resit [20], IGCI [10] and ANM-pHSIC [8] on the Tuebingen benchmark data set (98 pairs). The gray area indicates the confidence interval of a fair coin toss.

We base our method on the algorithmic Markov condition, a recent postulate by Janzing and Schölkopf [9], which states that if X causes Y, the factorization of the joint distribution in the causal direction has a simpler description, in terms of Kolmogorov complexity, than that in the anti-causal direction. That is, if X → Y, then K(P(X)) + K(P(Y | X)) ≤ K(P(Y)) + K(P(X | Y)). As any physical process can be modelled by a Turing machine, this ideal score can detect any causal dependence that can be explained by a physical process. However, Kolmogorov complexity is not computable, so we need a practical instantiation of this ideal. In this paper we do so using the Minimum Description Length (MDL) principle, which provides a statistically well-founded approximation of Kolmogorov complexity.

Simply put, we propose to fit a regression model from X to Y, and vice versa from Y to X, measuring both the complexity of the function, as well as the error it makes, in bits, and infer that causal direction by which we can describe the data most succinctly. There is a little bit more to it, of course. We carefully construct an MDL score such that we can meaningfully compare between different types of functional dependencies, including linear, quadratic, cubic, reciprocal, and exponential functions. This way, for example, we will find that we can more succinctly describe the data in Fig. 2 by a cubic function than with a linear one: while it takes fewer bits to describe the latter function, it will take many more bits to describe the large error it makes.

We do not only consider models that try to explain all the data with a single, global, deterministic regression function, but also allow for non-deterministic models. That is, we consider compound regression functions that in addition to a global deterministic function additionally include regression functions for local parts of the data, corresponding to specific, duplicated values of X. For example, consider the data in Fig. 2, where the Y values belonging to a single X value clearly show more structure than the general linear trend. In contrast, if we rotate the plot by 90 degrees, we do not observe the same regularities for the X values mapped to a single Y value; in many cases there even is only one such value. We can exploit this asymmetry by considering local regression functions per value of X, each individually fitted, but, as we assume all noise to be of the same type, all of the same function class. In this particular example, we therewith correctly infer that X causes Y. The MDL principle prevents us from overfitting, as such local functions are only included if they aid global compression. Last, but not least, we give a linear-time algorithm, Slope, to compute this score.

As we model Y as a function of X and noise, our approach is somewhat reminiscent of causal inference based on Additive Noise Models (ANMs) [25], where one assumes that Y is generated as a function of X plus additive noise that is independent of X. In the ANM approach, we infer X → Y if we can find a function from X to Y that admits an ANM, but cannot do so in the opposite direction. In practice, ANM methods often measure the independence between the presumed cause and the noise in terms of p-values, and infer the direction of the lowest p-value. As we will see, this leads to unreliable confidence scores; not least because p-values are often strongly influenced by sample size [1], but also because a lower p-value does not necessarily mean that the alternative hypothesis is more likely true, just that the null hypothesis is very probably not true [1]. We will show that our score, on the other hand, is robust against sample size, and correlates strongly with accuracy. Moreover, it admits an elegant and effective analytical statistical test on the difference in score between the two causal directions, based on the no-hypercompressibility inequality [3, 7].

Our key contributions can be summarised as follows, we

  • propose an MDL score for causal inference on pairs of univariate numeric random variables,

  • formulate an analytic significance test based on compression,

  • show how to model unobserved mechanisms via compound deterministic and non-deterministic functions,

  • introduce the linear-time Slope algorithm,

  • give an extensive empirical evaluation on synthetic and real-world data, including a case study, and

  • make all code, data generators, and data available.

The remainder of this paper is organised as usual. We first give a brief primer to Kolmogorov complexity and the Minimum Description Length principle in Sec. II. In Sec. III we introduce our score based on the algorithmic independence of conditional, as well as a practical instantiation based on the MDL principle. We introduce the linear-time Slope algorithm to efficiently compute the conditional score in Sec. IV. Sec. V discusses related work. We empirically evaluate Slope in Sec. VI and discuss the results in Sec. VII. We round up with conclusions in Sec. VIII.

(a) deterministic
(b) non-deterministic
Fig. 2: Example deterministic and non-deterministic data. In both cases the ground truth is that X causes Y. The left-hand data is generated using a quadratic function with Gaussian noise, whereas the right-hand data is generated using a non-deterministic function.

II Preliminaries

In causal inference, the goal is to determine for two statistically dependent random variables X and Y whether it is more likely that X causes Y, denoted by X → Y, or that Y causes X, Y → X. In this paper we consider the case where X and Y are univariate and numeric. We work under the common assumption of causal sufficiency [4, 17, 20, 28]. That is, we assume there is no hidden confounder variable Z that causes both X and Y.

We base our causal inference score on the notion of Kolmogorov complexity, which we will approximate via MDL. Below we give brief primers to these two main concepts.

II-A Kolmogorov Complexity

The Kolmogorov complexity of a finite binary string x is the length of the shortest binary program p* for a universal Turing machine U that outputs x and then halts [11, 13]. Formally,

K(x) = min{ |p| : p ∈ {0, 1}*, U(p) = x }.

Simply put, p* is the most succinct algorithmic description of x, and therewith the Kolmogorov complexity of x is the length of its ultimate lossless compression. Conditional Kolmogorov complexity, K(x | y), is then the length of the shortest binary program that generates x, and halts, given y as input.

The Kolmogorov complexity of a probability distribution P, K(P), is the length of the shortest program that outputs P(x) to precision q on input ⟨x, q⟩ [13]. More formally, we have

K(P) = min{ |p| : p ∈ {0, 1}*, |U(⟨x, ⟨q, p⟩⟩) − P(x)| ≤ 1/q }.

The conditional, K(P(Y) | P(X)), is defined similarly, except that the universal Turing machine U now gets the additional information P(X). The algorithmic mutual information between two distributions P(X) and P(Y) is I(P(X) : P(Y)) = K(P(X)) − K(P(X) | P(Y)*), where P(Y)* is the shortest binary program for P(Y). For more details on Kolmogorov complexity see [13].

II-B Minimum Description Length Principle

Kolmogorov complexity is, however, not computable [13], but we can approximate it in a well-founded and computable way through the Minimum Description Length (MDL) principle [22, 7]. Conceptually, instead of all programs, Ideal MDL considers only those for which we know that they output D and halt, i.e., lossless compressors. Formally, given a model class ℳ, MDL identifies the best model M ∈ ℳ for data D as the one minimizing

L(D, M) = L(M) + L(D | M),

where L(M) is the length in bits of the description of M, and L(D | M) is the length in bits of the description of data D given M. This is known as two-part, or crude MDL. There also exists one-part, or refined MDL. Although refined MDL has theoretically appealing properties, it is only efficiently computable for a small number of model classes.

To use MDL in practice we need to define a model class, and how to encode a model, resp. the data given a model, into bits. Note that in MDL we are only concerned with optimal code lengths, not actual codes—our goal is to measure the complexity of a dataset under a model class, after all [7].
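To make the two-part score concrete, the following sketch compares a one-parameter Bernoulli model against the parameter-free uniform model on binary data. The flat 10-bit charge for describing the parameter is an assumption of this illustration, not a prescription of the theory above; it merely shows how model cost and data cost trade off.

```python
import math

def l_bernoulli(data, p):
    """Data cost in bits: negative log2-likelihood under Bernoulli(p)."""
    return sum(-math.log2(p if x else 1 - p) for x in data)

def two_part(data, p, param_bits=10):
    """Two-part MDL score L(M) + L(D|M); param_bits is a fixed,
    illustrative cost for describing p to some precision."""
    return param_bits + l_bernoulli(data, p)

# A biased model compresses biased data better than the uniform one,
# once the data is long enough to pay for the extra parameter.
data = [1] * 90 + [0] * 10
uniform = l_bernoulli(data, 0.5)   # 100 bits, no parameter to encode
biased = two_part(data, 0.9)
assert biased < uniform
```

Under MDL, the biased model wins exactly because its better data compression more than pays for its own description.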

III Information Theoretic Causal Inference

In this section, we first introduce how to infer causal directions using Kolmogorov complexity. Thereupon, we show how to obtain a computable score based on the MDL principle.

III-A Causal Inference by Kolmogorov Complexity

A central postulate in causal inference concerns the algorithmic independence of conditionals. For multiple random variables, this postulate is defined as follows [9].

Algorithmic Independence of Conditionals: A causal hypothesis is only acceptable if the shortest description of the joint density P(X_1, …, X_n) is given by the concatenation of the shortest descriptions of the Markov kernels. Formally, we write

K(P(X_1, …, X_n)) = Σ_j K(P(X_j | PA_j)),

which holds up to an additive constant independent of the input, and where PA_j corresponds to the parents of X_j in a causal directed acyclic graph (DAG).

As we consider two variables, X and Y, either X is the parent of Y, or the other way round. That is, either

K(P(X, Y)) = K(P(X)) + K(P(Y | X)), or
K(P(X, Y)) = K(P(Y)) + K(P(X | Y)).

In other words, the two valid ways to describe the joint distribution of X and Y are to first describe the marginal distribution P(X) and then the conditional distribution P(Y | X), or to first describe P(Y) and then P(X | Y).

Thereupon, Janzing and Schölkopf formulated the postulate for algorithmic independence of Markov kernels [9].

Algorithmic Independence of Markov Kernels: If X → Y, the marginal distribution of the cause P(X) is algorithmically independent of the conditional distribution of the effect given the cause P(Y | X), i.e., the algorithmic mutual information between the two will be zero,

I(P(X) : P(Y | X)) = 0,

while this is not the case in the other direction.

Simply put, for the true causal direction, the marginal distribution of the cause is algorithmically independent of the conditional distribution of the effect given the cause. Building upon Eqs. (1) and (4), Mooij et al. [16] derived an inference rule stating that if X causes Y,

K(P(X)) + K(P(Y | X)) ≤ K(P(Y)) + K(P(X | Y))

holds up to an additive constant. This means that if X → Y, the description of the joint distribution of X and Y that first describes the marginal distribution of the cause, P(X), and then the conditional distribution of the effect given the cause, P(Y | X), will be shorter than the other way around.

Although Eq. (5) already allows for inferring the causal direction for a given pair, we obtain a more robust score, allowing for fair comparison of results independent of data sizes, when we normalise the result. In particular, Budhathoki and Vreeken [4] recently proposed to normalise the scores with the sum of the description lengths for the marginal distributions. We therefore define our causal indicator as

Δ(X→Y) = (K(P(X)) + K(P(Y | X))) / (K(P(X)) + K(P(Y))),

and Δ(Y→X) in the same manner. Consequently, we infer X → Y if Δ(X→Y) < Δ(Y→X), Y → X if Δ(X→Y) > Δ(Y→X), and do not decide if Δ(X→Y) = Δ(Y→X).

The confidence of our score is C = |Δ(X→Y) − Δ(Y→X)|. The higher it is, the more certain we are that the inferred causal direction is correct. To avoid confusion, we want to emphasise that C has nothing to do with a confidence interval, but can be used to rank the results of several tests. Below, after introducing our practical score, we will show how we can in addition define a practical statistical test.

III-B Causal Inference by MDL

As Kolmogorov complexity is not computable, we will instantiate K(P(X)) and K(P(Y | X)) using the Minimum Description Length principle [13, 7]. In practice this means we estimate Δ(X→Y) as

Δ(X→Y) = (L(X) + L(Y | X)) / (L(X) + L(Y)),

where L(X) is the length in bits of the description of the marginal distribution of X, L(Y) that of the marginal distribution of Y, and L(Y | X) that of the conditional distribution of Y given X. We define Δ(Y→X) analogue to Δ(X→Y), and we infer X → Y if Δ(X→Y) < Δ(Y→X), Y → X if Δ(X→Y) > Δ(Y→X), and do not decide if Δ(X→Y) = Δ(Y→X) or if the difference between the two is below a user-defined threshold. Like above, the confidence C is simply the absolute difference between Δ(X→Y) and Δ(Y→X).
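As a minimal sketch, given already-computed encoded lengths in bits, the two indicators and the confidence follow directly from the definition above. The numeric lengths in the example are hypothetical.

```python
def delta(l_cause, l_other, l_cond):
    """Normalised indicator, e.g. Delta(X->Y) = (L(X) + L(Y|X)) / (L(X) + L(Y))."""
    return (l_cause + l_cond) / (l_cause + l_other)

def infer(l_x, l_y, l_y_given_x, l_x_given_y):
    """Return the inferred direction and the confidence C."""
    d_xy = delta(l_x, l_y, l_y_given_x)
    d_yx = delta(l_y, l_x, l_x_given_y)
    confidence = abs(d_xy - d_yx)
    if d_xy < d_yx:
        return 'X->Y', confidence
    if d_xy > d_yx:
        return 'Y->X', confidence
    return 'undecided', 0.0

# Hypothetical encoded lengths (bits): Y is cheaper to describe given X.
direction, conf = infer(l_x=1000, l_y=1200, l_y_given_x=400, l_x_given_y=700)
assert direction == 'X->Y'
```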

Considering the difference between the encoded lengths is related to, but not the same as considering the ratio of the posteriors; we also include the complexity of the model, which helps against overfitting. Intuitively, if the functions we find for the two directions both explain the data equally well, we prefer that direction that explains it using the simplest function.

This leaves us to explain how we encode the data, and, most importantly, how we compute L(Y | X).

Intuition of the Conditional Encoding

The general idea is simple: we use regression to model the data of Y given X. That is, we model Y as a function f of X and independent noise N, i.e., Y = f(X) + N. We do so by fitting a regression function f over X and treating the error it makes as Gaussian distributed noise. Naturally, the better f fits Y, the fewer bits we will have to spend on encoding errors. The more parameters f has, however, the more bits we will have to spend on encoding these. This way, MDL naturally balances the complexity of the model to that of the data [7]. For example, while a linear function is simpler to describe than a cubic one, the latter will fit the data plotted in Fig. 2 so much better that MDL decides it is the better choice.

A key idea in our approach is to consider not only single global deterministic regression functions f_g, which work well for deterministic data, but to also consider non-deterministic, or compound functions as models. That is, we consider models that besides the global regression function f_g may additionally consist of local regression functions f_l that model Y for those values of X that non-deterministically map to multiple values of Y. That is, per such value of X, we take the associated values of Y, sort these ascendingly, and uniformly re-distribute them over a fixed interval. We then see how well, just for these re-distributed points, we can fit a local regression model f_l. This way, we will for example be able to describe the data in Fig. 2 much more succinctly than with a single global deterministic regression function, as we can now exploit the structure that the values of Y have given a value of X, namely, being approximately equally spaced. To avoid overfitting we use MDL, and only allow a local function for a value of X into our model if it provides a gain in overall compression. Since we assume that for the true causal model the data in the local components follows the same pattern, we only allow models in which all local functions are of the same type, e.g., all are linear, all are quadratic, etc.
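The re-distribution step for a single duplicated value of X can be sketched as follows; the interval [0, Z] and the helper name are illustrative, with Z the scale parameter introduced in Sec. IV.

```python
def local_points(ys, Z=1.0):
    """For one duplicated value of X: sort the associated Y values
    ascendingly and pair them with uniformly spaced positions on [0, Z].
    A local regression f_l is then fitted to these pairs."""
    ys = sorted(ys)
    n = len(ys)
    xs = [Z * i / (n - 1) for i in range(n)] if n > 1 else [0.0]
    return list(zip(xs, ys))

# Three Y values for the same X value, uniformly re-distributed:
pts = local_points([3.0, 1.0, 2.0], Z=1.0)
assert pts == [(0.0, 1.0), (0.5, 2.0), (1.0, 3.0)]
```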

In the following paragraphs, we formalise these ideas and define our cost functions. All logarithms are to base 2, and we use the common convention that 0 log 0 = 0.

Complexity of the Marginals

We start by defining the costs for the marginal distributions, L(X) and L(Y), which mostly serve to normalise our causal indicators Δ(X→Y) and Δ(Y→X). As we beforehand do not know how X or Y are distributed, and do not want to incur any undue bias, we encode both using a uniform prior with regard to the data resolution of X and Y. That is, we have L(X) = −n log τ_X, where τ_X is the resolution of the data of X. Note that the resolution can be different between X and Y; we specify how we choose it in the next section. We define L(Y) analogue.
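A small sketch of this encoding, using the resolution as determined in Sec. IV-B (the smallest difference between two values); the helper names are our own.

```python
import math

def resolution(values):
    """Data resolution: smallest difference between two distinct values."""
    vs = sorted(set(values))
    return min(b - a for a, b in zip(vs, vs[1:]))

def l_marginal(values):
    """L(X) = -n log2(tau_X): n values, each encoded under a uniform
    prior at resolution tau_X (a sketch of the encoding above)."""
    return -len(values) * math.log2(resolution(values))

xs = [0.0, 0.25, 0.5, 1.0]
assert resolution(xs) == 0.25
assert l_marginal(xs) == 8.0   # 4 values at 2 bits each
```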

Complexity of the Conditional Model

Formally, we write M for the set of regression functions, or model, that we use to encode the data of Y given X. A model M consists of at least one global regression function f_g, and up to |dom(X)| local regression functions f_l, associated with individual values of X. We write F for the set of local regression functions in M, and require that all f_l ∈ F are of the same type. The description length, or encoded size, of M is

L(M) = L_N(|F|) + log C(|dom(X)|, |F|) + log |T| + [|F| > 0] log |T| + L(f_g) + Σ_{f ∈ F} L(f),

where we first describe the number of local functions using L_N, the MDL optimal encoding for integers [23], then map each f_l to its associated value of X via the binomial coefficient C(|dom(X)|, |F|), after which we use log |T| bits to identify the type of the global regression function f_g out of the set T of considered function types, and, whenever F is non-empty, another log |T| bits to identify the shared type of the local regression functions. Finally, we encode the functions themselves. Knowing the type of a function f, we only need to encode its parameters Φ(f), and hence

L(f) = Σ_{φ ∈ Φ(f)} ( 1 + L_N(s) + L_N(⌈ |φ| · 10^s ⌉) ),

where we encode each parameter φ up to a user-chosen precision p. We shift φ by the smallest integer s such that its first p significant digits come to stand in front of the decimal point, i.e., p = 3 means that we consider three digits. What remains is to encode the shift s, the shifted digits, and the sign.
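As an illustration only, a hedged sketch of such a per-parameter cost with sign, shift and digits; the exact integer code for the shift is an assumption of this sketch, standing in for L_N.

```python
import math

def l_parameter(phi, p=3):
    """Bits to encode one parameter phi to p significant digits:
    1 bit for the sign, a simple (assumed) code for the shift s,
    and p decimal digits."""
    if phi == 0:
        return 1.0
    # smallest s such that |phi| * 10**s has p digits before the point
    s = p - 1 - math.floor(math.log10(abs(phi)))
    digits_bits = p * math.log2(10)           # cost of p decimal digits
    shift_bits = 2 * math.log2(abs(s) + 2)    # hedged stand-in for L_N(s)
    return 1 + shift_bits + digits_bits

# Parameters far from order 1 need a larger shift, and thus more bits.
assert l_parameter(0.0001234) > l_parameter(1.234)
```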

Complexity of the Conditional Data

Reconstructing the data of given corresponds to encoding the errors, or deviations, the model makes. Since we fit our regression functions by minimizing the sum of squared errors, which corresponds to maximizing the likelihood under a Gaussian, it is a natural choice to encode the errors using a Gaussian distribution with zero-mean.

Since we have no assumption on the standard deviation, we use the empirical estimate σ̂ to define the standard deviation of the Gaussian. By doing so, the encoded size of the error of the model M with respect to the data of Y corresponds to

L(Y | M, X) = Σ_{f_i ∈ M} (n_i / 2) log(2 π e σ̂_i²),

where n_i is the number of data points for which we use a specific function f_i, and σ̂_i is the empirical standard deviation of the errors of f_i. Intuitively, this score is higher the less structure of the data is described by the model, and it increases with the sum of squared errors.
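Under the empirical-variance Gaussian, the residual cost for one function reduces to (n/2) log2(2πe σ̂²) bits, which is the negative log2-likelihood at the maximum likelihood estimate; a small sketch:

```python
import math

def l_gaussian_errors(errors):
    """Code length in bits of residuals under a zero-mean Gaussian
    with the empirical standard deviation."""
    n = len(errors)
    sse = sum(e * e for e in errors)
    if sse == 0:
        return 0.0  # perfect fit; in practice bounded by the resolution
    var = sse / n
    return 0.5 * n * math.log2(2 * math.pi * math.e * var)

# More structure left unexplained (larger errors) costs more bits.
assert l_gaussian_errors([1.0, -1.0, 2.0, -2.0]) > l_gaussian_errors([0.1, -0.1, 0.2, -0.2])
```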

Complexity of the Conditional

Having defined the data and model costs above, we can now proceed and define the total encoded size of the conditional distribution of Y given X as

L(Y | X) = L(M) + L(Y | M, X).

By MDL, we are after that model that minimises this total. After discussing a significance test for our score, we will present the Slope algorithm to efficiently compute the conditional score in the next section.

Significance by Hypercompression

Ranking based on confidence works well in practice. Ideally, we would additionally like to know the significance of an inference. It turns out we can define an appropriate hypothesis test using the no-hypercompressibility inequality [3, 7]. In a nutshell, under the hypothesis that the data was sampled from the null-model, the probability that any other model can compress it k bits better is at most 2^(-k).

This means that if we assume the null complexity, L_0, to be the least-well compressed causal direction, we can evaluate the probability of gaining k bits by instead using the most-well compressed direction. Formally, if we write L(X→Y) for L(X) + L(Y | X), and vice-versa for L(Y→X), L_0 would be max(L(X→Y), L(Y→X)). The probability that the data can be compressed k = L_0 − min(L(X→Y), L(Y→X)) bits better in the opposite direction then simply is 2^(-k).

In fact, we can construct a stronger test by assuming that the data is not causally related, but merely correlated. That is, we assume both directions are wrong; the one compresses too well, the other too poorly. If we assume these two to be equally exceptional, the null complexity is the mean between the complexities of the two causal directions, i.e., L_0 = (L(X→Y) + L(Y→X)) / 2. The probability of the best-compressing direction then is 2^(-k) with k = L_0 − min(L(X→Y), L(Y→X)). We can now set a significance threshold as usual, and use this to prune out those cases where the difference in compression between the two causal directions is insignificant. We will evaluate this procedure, in addition to our confidence score, in the experiments.
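The resulting test is essentially a one-liner; given the total encoded lengths for both directions, we compare the gain over their mean against a chosen threshold (the 0.05 default here is illustrative):

```python
def hypercompression_p(l_xy, l_yx):
    """p-value of the best-compressing direction under the
    no-hypercompressibility inequality, with the null complexity
    taken as the mean of the two directions: p = 2^(-k),
    k = (l_xy + l_yx)/2 - min(l_xy, l_yx)."""
    k = (l_xy + l_yx) / 2 - min(l_xy, l_yx)
    return 2.0 ** (-k)

def significant(l_xy, l_yx, alpha=0.05):
    """True if the compression gap between the directions is significant."""
    return hypercompression_p(l_xy, l_yx) < alpha

# A 40-bit gap between the directions is a 20-bit gain over the mean.
assert significant(1000.0, 1040.0)
assert not significant(1000.0, 1001.0)  # half-bit gain: not significant
```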

IV The Slope Algorithm

With the framework defined in the previous section, we can determine the most likely causal direction and the corresponding confidence value. In this section we present the Slope algorithm to efficiently compute the causal indicators. To keep the computational complexity of the algorithm linear, we restrict ourselves to linear, quadratic, cubic, exponential and reciprocal functions, although at the cost of extra computation this class may be expanded arbitrarily. We start by introducing the subroutine of Slope that computes the conditional complexity of Y given X.

IV-A Calculating the Conditional Scores

Algorithm 1 describes the subroutine to calculate the conditional costs L(Y | X) or L(X | Y). We start with fitting a global function f_g for each function class and choose the one with the minimum sum of data and model costs (line 1). Next, we add f_g to the model and store the total costs (lines 2-3). For purely deterministic functions, we are done.

If X includes duplicate values, however, we need to check whether fitting a non-deterministic model leads to a gain in compression. To this end, we check for each value x of X that occurs at least twice whether we can express the ascendingly ordered corresponding values of Y, denoted Y_x, as a function of uniformly distributed data over [0, Z], where Z is a user-determined scale parameter (lines 4-9). If the model costs of the new local function f_l are higher than the gain on the data side, we do not add f_l to our model (line 9). As it is fair to assume that for truly non-deterministic data the generating model for each local component is the same, we restrict all local functions to be of the same model class C. As the final result, we return the costs according to the model with the smallest total encoded size. In case of deterministic data, this will be the model containing only f_g.

input : random variables X and Y
output : conditional score L(Y | X)
1  f_g ← global function with minimal L(M) + L(Y | M, X) over all function classes;
2  M ← {f_g};
3  best ← L(M) + L(Y | M, X);
4  foreach function class C do
5        M' ← {f_g};
6        foreach value x of X that occurs at least twice do
7              Y_x ← ascendingly sorted values of Y associated with x;
8              f_l ← function of class C fitted on Y_x over uniformly spaced points in [0, Z];
9              if adding f_l reduces the total costs then M' ← M' ∪ {f_l};
10       if L(M') + L(Y | M', X) < best then best ← L(M') + L(Y | M', X);
11 return best;
Algorithm 1 ConditionalCosts
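A much-simplified Python sketch of the routine: it fits only a global linear function and charges a flat 32-bit model cost (both assumptions of this illustration), whereas Slope additionally tries other function classes and local components.

```python
import math

def fit_linear(xs, ys):
    """Least-squares line; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx if sxx else 0.0
    return a, my - a * mx

def l_errors(errs):
    """Gaussian data cost in bits with the empirical sigma (cf. Sec. III)."""
    n, sse = len(errs), sum(e * e for e in errs)
    return 0.0 if sse == 0 else 0.5 * n * math.log2(2 * math.pi * math.e * sse / n)

def conditional_cost(xs, ys, l_model=32.0):
    """Simplified ConditionalCosts: model cost plus residual cost of
    one global linear fit. The flat l_model is an assumed stand-in
    for L(M)."""
    a, b = fit_linear(xs, ys)
    errs = [y - (a * x + b) for x, y in zip(xs, ys)]
    return l_model + l_errors(errs)

# Exactly linear data: the errors vanish, only the model cost remains.
assert conditional_cost([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]) == 32.0
```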

IV-B Causal Direction and Confidence

In the previous paragraph, we described Algorithm 1, which is the main algorithmic part of Slope. Before applying it, we first normalise X and Y to be from the same domain and then determine the data resolutions τ_X and τ_Y for X and Y. To obtain the data resolution, we calculate the smallest difference between two instances of the corresponding random variable. Next, we apply Algorithm 1 in both directions to obtain L(Y | X) and L(X | Y). Subsequently, we estimate the marginals L(X) and L(Y) based on their data resolutions. This we do by modelling both as a uniform prior with L(X) = −n log τ_X and L(Y) = −n log τ_Y. In the last step, we compute Δ(X→Y) and Δ(Y→X) and report the causal direction as well as the corresponding confidence value C.

IV-C Computational Complexity

To assess the computational complexity, we have to consider the score calculation and the fitting of the functional relations. The model costs are computed in linear time with regard to the number of parameters, whereas the data costs need linear time with regard to the number of data points n. Since we restrict ourselves to relatively simple functions, we can fit these in time linear in the number of data points. To determine the non-deterministic costs, in the worst case we perform n/2 fits over two data points each, which is still linear. In total, the runtime complexity of Slope hence is O(n). In practice, this means that Slope is fast, and only takes seconds up to a few minutes depending on the size of the data.

V Related Work

Causal inference from observational data is an important open problem that has received a lot of attention in recent years [4, 17, 24, 18]. Traditional constraint based approaches, such as conditional independence tests, require at least three random variables and cannot decide between Markov equivalent causal DAGs [18, 28]. In this work, we focus specifically on those cases where we have to decide between the Markov equivalent DAGs X → Y and Y → X.

A well studied framework to infer the causal direction for the two-variable case relies on the additive noise assumption [25]. Simply put, it makes the strong assumption that Y is generated as a function of X plus additive noise N, with N independent of X. It can then be shown that while such a function is admissible in the causal direction, this is not possible in the anti-causal direction. There exist many approaches based on this framework that exploit linear [25] or non-linear functions [8] and can be applied to real valued [25, 8, 31, 20] as well as discrete data [19]. Recently, Mooij et al. [17] reviewed several ANM-based approaches, of which ANM-pHSIC, a method employing the Hilbert-Schmidt Independence Criterion (HSIC) to test for independence, performed best. For ANMs, the confidence value is often expressed as the negative logarithm of the p-value of the used independence test [17]. P-values are, however, quite sensitive to the data size [1], which leads to less reliable confidence values. As we will show in the experiments, our score is robust and nearly unaffected by the data size.

Another class of methods relies on the postulate that if X → Y, the marginal distribution of the cause P(X) and the conditional distribution of the effect given the cause P(Y | X) are independent of each other; the same does not hold for the opposite direction [9]. The authors of IGCI define this independence via orthogonality in the information space, and practically define their score using the entropies of X and Y [10]. Liu and Chan implemented this framework by calculating the distance correlation for discrete data between P(X) and P(Y | X) [14]. A third approach based on this postulate is Cure [24]. Here, the main idea is to estimate the conditional using unsupervised inverse Gaussian process regression on the corresponding marginal, and to compare the result to the supervised estimation. If the supervised and unsupervised estimations for P(Y | X) deviate less than those for P(X | Y), an independence of P(X) and P(Y | X) is assumed, and the causal direction X → Y is inferred. Although well formulated in theory, the proposed framework is only solvable for small data sets, and otherwise relies strongly on finding a good sub-sample of the data.

Recently, Janzing and Schölkopf postulated that if X → Y, the complexity of the description of the joint distribution in terms of Kolmogorov complexity will be shorter when first describing the distribution of the cause and then the distribution of the effect given the cause, than vice versa [9, 12]. To the best of our knowledge, Mooij et al. [16] were the first to propose a practical instantiation of this framework, based on the Minimum Message Length (MML) principle [30] using Bayesian priors. Vreeken [29] proposed to approximate the Kolmogorov complexity for numeric data using the cumulative residual entropy, and gave an instantiation for multivariate continuous-valued data. Perhaps most related to Slope is Origo [4], which uses MDL to infer causal direction on binary data, whereas we focus on univariate numeric data.

VI Experiments

In this section we empirically evaluate Slope. In particular, we consider synthetic data, a benchmark data set, and a real-world case study. We implemented Slope and make the code, the data generators, and the real world data publicly available for research purposes. We compare Slope to the state of the art for univariate causal inference, including Cure [24], IGCI [10] and Resit [20]. From the class of ANM-based methods we compare to ANM-pHSIC [8, 17], which a recent survey identified as the most reliable ANM inference method [17]. We use the implementations by the authors, sticking to the recommended parameter settings.

To run Slope, we have to define the parameter Z, which determines the interval over which the data within a local component is re-distributed before fitting. Generally, the exact value of Z is not important for the algorithm, since it only defines the domain of the re-distributed data points, which can be compensated by the parameters of the fitted function. In our experiments, we fix Z and set the precision p for the parameters to three digits.

VI-A Synthetic Data

We first consider data with known ground truth. To generate such data, we follow the standard scheme of Hoyer et al. [8]. That is, we first generate X randomly according to a given distribution, and then generate Y by applying a function f to X and adding a noise term N, where f can be linear, cubic or reciprocal, and the noise can either be additive or non-additive.


First, we evaluate the performance of Slope under different distributions of X. Following the scheme above, we generate X randomly from either

  1. a uniform distribution,

  2. a sub-Gaussian distribution, obtained by sampling Gaussian distributed data and taking each value to a fixed power while maintaining its sign (we consider sub-Gaussian distributions since linear functions with both X and the noise Gaussian distributed are not identifiable by ANMs [8]),

  3. a binomial distribution, or

  4. a Poisson distribution.

Note that the binomial and Poisson distributions generate discrete data points, which with high probability results in non-deterministic pairs. To generate Y, we first apply either a linear, cubic or reciprocal function on X, with fixed parameters, and then add either additive noise, using a uniform or Gaussian distribution, or non-additive noise according to [24]. For every combination we generate multiple data sets.
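The generation scheme can be sketched as follows; the cause distribution, interval, and noise level are illustrative choices, not the exact parameters used in the experiments.

```python
import random

def generate_pair(n=1000, f=lambda x: x ** 3, noise_sd=1.0, seed=0):
    """Hedged sketch of the Hoyer et al. scheme: draw X from a chosen
    cause distribution (uniform here), then Y = f(X) + N with additive
    Gaussian noise."""
    rng = random.Random(seed)
    xs = [rng.uniform(-5, 5) for _ in range(n)]
    ys = [f(x) + rng.gauss(0.0, noise_sd) for x in xs]
    return xs, ys

# A cubic cause-effect pair with additive Gaussian noise:
xs, ys = generate_pair(n=100)
assert len(xs) == len(ys) == 100
```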

Fig. 3: [Higher is better] Accuracies of Slope, Resit and IGCI on synthetic data. The first letter of the labels corresponds to the distribution of X (uniform, sub-Gaussian, binomial or Poisson), the second letter to that of the noise (uniform, Gaussian or non-additive).

Next, we apply Slope, Resit, and IGCI and record how many pairs they correctly infer. As they take up to hours to process a single pair, we do not consider Cure and ANM-pHSIC here. We give the averaged results over all three function types in Fig. 3. In general, we find that Slope and IGCI perform on par and reach high accuracy for most setups, whereas Slope performs better on the sub-Gaussian data. If we consider the individual results for linear, cubic and reciprocal functions, we find that on the linear data with sub-Gaussian distributed X, Slope performs on average notably better than IGCI.


Second, we investigate the dependency of the Resit, IGCI, and Slope scores on the size of the data. In an ideal world, a confidence score is not affected by the size of the data, as this allows easy comparison and ranking of scores.

To analyse this, we generate datasets over a range of sample sizes, where X is Gaussian distributed and Y is a cubic function of X with uniform noise. Subsequently, we apply Resit, IGCI and Slope and record their confidence values. We show the results per sample size in Fig. 4. As each method uses a different score, the scale of the Y-axis is not important; what matters is the trend of the scores over different sample sizes. We see that Slope has very consistent confidence values that are independent of the number of samples, in strong contrast to Resit and IGCI. For Resit, the standard deviation of the confidence values grows with the sample size, while for IGCI, the average confidence increases with the number of samples. In other words, while it is easy to compare and rank Slope scores, this is not the case for the other two, which, as we will see below, results in comparatively low decision rates.

(a) Resit
(b) IGCI
(c) Slope
Fig. 4: [The more stable the better] Confidence values on a cubic function for different sample sizes. Unlike Resit and IGCI, the Slope scores can be meaningfully compared between different sample sizes.


Local regression on non-deterministic data adds to the modelling power of Slope, yet, it may also lead to overfitting. Here we evaluate whether MDL protects us from picking up spurious structure.

To control non-determinacy, we sample X uniformly from a set of equidistant values over a fixed interval. To obtain Y, we apply a linear function and additive Gaussian noise as above. Per data set we sample a fixed number of data points.

In Fig. 5 we plot the non-determinism of the model, i.e., the average number of used bins divided by the average number of bins Slope could have used, against the number of distinct X values. As a reference, we also include the average number of values of Y per value of X. We see that when X has sufficiently many unique values, Slope does not infer non-deterministic models; only when the number of distinct values becomes small, i.e., when there are many duplicates per X value on average, does Slope consistently start to fit non-deterministic models. This shows that, if anything, rather than being prone to overfit, Slope is conservative in using local models.
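The setup and the duplicate statistic used above can be sketched as follows; the interval, sample size, and distinct-value count are illustrative placeholders of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_equidistant(n, n_distinct, low=-1.0, high=1.0):
    """Sample X uniformly from n_distinct equidistant values over [low, high]."""
    grid = np.linspace(low, high, n_distinct)
    return rng.choice(grid, size=n)

def avg_duplicates(x):
    """Average number of data points per distinct X value."""
    _, counts = np.unique(x, return_counts=True)
    return counts.mean()
```

With few distinct values the average duplicate count grows, which is exactly the regime in which local (per-value) regression functions become worth their encoding cost.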

Fig. 5: [Slope does not overfit] Percentage of non-deterministic models Slope chooses, resp. the expected number of Y values per X value, for the number of unique values of X.

VI-B Real World Data

Next we evaluate Slope on real world benchmark data. In particular, we consider the Tuebingen cause-effect data set, which at the time of writing included 98 univariate numeric cause-effect pairs. We compare Slope to IGCI, Resit, ANM, and Cure, using the suggested parameter settings for this benchmark. In addition, we include SlopeD in our comparison, an ablated version of Slope that only fits a single global deterministic function.

Decision Rate and Accuracy

We first consider the accuracy and decision rates over the benchmark data. To determine the decision rate for an approach, we order the results according to its confidence values. We then determine, for each k from 1 to the number of pairs, the accuracy over the top-k ranked pairs. To determine the accuracy at a given k, we weigh each pair according to its specification in the database. In case an algorithm does not decide, we count this result as one half of the corresponding weight.
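The decision-rate computation described above can be sketched as follows; the function name and the plain list-based representation are ours:

```python
def decision_rate_accuracy(confidences, correct, weights):
    """Weighted accuracy over the top-k pairs ranked by confidence, for k = 1..n.

    `correct[i]` is 1.0 if the inferred direction of pair i matches the ground
    truth, 0.5 if the method did not decide (counted as half its weight), else 0.0.
    `weights[i]` is the pair's weight as specified in the benchmark database.
    """
    # Rank pairs by descending confidence.
    order = sorted(range(len(confidences)), key=lambda i: -confidences[i])
    accs = []
    for k in range(1, len(order) + 1):
        top = order[:k]
        total_weight = sum(weights[i] for i in top)
        hit_weight = sum(weights[i] * correct[i] for i in top)
        accs.append(hit_weight / total_weight)
    return accs
```

A stable confidence score, as Slope provides, is precisely what makes this ranking meaningful across pairs of different sample sizes.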

We plot the results in Fig. 6, where in addition we show, in gray, the confidence interval of the binomial distribution. We observe that Slope strongly outperforms its competitors in both decision rate and overall accuracy; it identifies the correct result for the top-ranked data sets, maintains a high accuracy over the majority of the ranked pairs, and obtains the best overall accuracy.

In Fig. 7 we show the corresponding confidence values of Slope for the benchmark pairs. The plot emphasises not only the predictive power of Slope, but also the strong correlation between confidence value and accuracy. In comparison to the other approaches, the decision rate of Slope (Fig. 6) is stable and only decreases slightly at the very end. Moreover, our competitors obtain much worse overall accuracies, between 56% (Cure) and 71% (Resit), which for the most part are insignificant.

We identify three reasons why Slope performs better than other approaches based on functional learning. First, we also consider piecewise regression, which leads to a clear improvement of Slope over SlopeD. Second, we consider the complexity of the functions, and incorporate the model cost to prevent overfitting; as they are so powerful, methods based on Gaussian process learning, for example, are likely to overfit. Last, the calculation of our confidence value is less dependent on the data size.

If we not only consider the confidence values, but also our proposed statistical test, we can improve our results even further. After adjusting the p-values using the Benjamini-Hochberg correction [2] to control the false discovery rate (FDR), most of the decisions are significant. As shown in Fig. 7, the pairs rated as insignificant correspond to small confidence values. In addition, among the insignificant pairs, some were inferred incorrectly by Slope and some correctly. Over the significant pairs, the weighted accuracy increases further.
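The Benjamini-Hochberg step-up procedure [2] used here can be implemented directly; this is a standard formulation, written without library dependencies:

```python
def benjamini_hochberg(pvals):
    """Adjust p-values with the Benjamini-Hochberg step-up procedure.

    The adjusted p-value at rank r is min over r' >= r of p_{(r')} * m / r',
    which controls the false discovery rate under independence.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity of the adjustment.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted
```

Decisions whose adjusted p-value falls below the chosen significance level are then kept as significant inferences.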

To provide further evidence that the confidence values and the p-values are indeed related, we plot the adjusted p-values against the confidence values in Fig. 8. We observe that high confidence values correspond to highly significant p-values. We also computed the decision rate for Slope when ranking by p-values, and find it is only slightly worse than the ranking by confidence. We posit that confidence works better as it is more independent of the data size. To test this, we calculate the correlation between data size and the corresponding measures using the maximal information coefficient (MIC) [21]. We find a medium correlation between confidence and p-values, as well as between p-values and data size, and only a weak correlation between confidence and data size.

To show that the strength of our method comes from its ability to fit non-deterministic models, we also consider SlopeD, an ablated version of Slope that only considers deterministic models, i.e., models that consist of a single global regression function. As it uses the same score, we see in Fig. 6 that its decision rate is still good. Although the overall accuracy drops, it is still as good as that of the best competing method. This also shows that the full approach is much better able to deal with a broad spectrum of cause-effect pairs.

Fig. 6: [Higher is better] Decision rates of Slope, Cure, Resit, IGCI and ANM on the Tuebingen benchmark data set (98 pairs). SlopeD is an ablated version of Slope, which fits the data with a single deterministic function (more detailed version of Fig. 1).

Apart from the accuracies, we also tracked which functional dependencies Slope found on the benchmark data. We found that most of the time it fits linear functions; exponential and cubic models are each fitted for a smaller share of the data, while quadratic and reciprocal models are rarely fitted. This shows that working with a richer class of functions allows us to pick up more structure. In addition, we observe that although we allow fitting complex models, in many cases a simple model is preferred, since it has sufficient explanatory power at lower model cost.

Fig. 7: Confidence values of Slope for the Tuebingen benchmark pairs, in descending order, corresponding to Fig. 6. Correct inferences are marked in green, errors in red, and insignificant inferences with a gray arrow.
(a) confidence vs. p-value
(b) runtime
Fig. 8: (left) Confidence and significance of Slope on the Tuebingen benchmark pairs. Only samples with low confidence are also insignificant. (right) Runtime in seconds over all 98 pairs, in log-scale. Slope is more accurate than all competitors, and faster than all except IGCI.

VI-C Case Study: Octet Binary Semiconductors

To evaluate real-world performance we conduct a case study on octet binary semiconductors [5, 27]. In essence, these are 82 different materials for which we have data on the radii of electron orbitals and other important energy quantities that together determine their chemical properties. The target of interest is the energy difference between the rocksalt and zincblende crystal structures. It is still an open challenge to find the combination of physical properties that can fully explain this energy difference [5, 6]. For causal inference, it suffices to know that the energy difference is influenced by the descriptor variables and not the other way around.

As there is no known combination of physical properties that fully describes the energy difference, we mine the top features that each explain as much of it as possible [15], and therewith obtain cause-effect pairs, where we set the energy difference as the effect and one of the mined features as the cause. After consulting with domain experts, we assume this direction as ground truth for all pairs.

We find that Slope infers the correct direction for all but one of the pairs; the only error also corresponds to the only insignificant score. In comparison, we find that Cure infers all pairs correctly, whereas IGCI makes the same decisions as Slope. Resit and ANM, on the other hand, infer fewer pairs correctly.

VI-D Runtime

Last, we evaluate the computational efficiency of Slope. To this end we report, per method, the wall-clock time needed to decide on all 98 pairs of the benchmark data set. We ran these experiments on Linux servers with two six-core Intel Xeon E5-2643v2 processors and 64GB RAM. The implementations of Slope, IGCI and Resit are single-threaded, whereas ANM and Cure are implemented in Matlab and use the default number of threads. We give the results in Fig. 8. We see that IGCI is fastest, followed by Slope, which takes only seconds to process all pairs. The other competitors are all at least one order of magnitude slower; taking days, Cure has the longest runtime. The large runtime gain of Slope compared to Resit, ANM and Cure arises from the fact that those methods employ Gaussian process regression to fit the functions.

VII Discussion

The experiments clearly show that Slope works very well. It performs well in a wide range of settings, both on synthetic and real world data. In particular, on the latter it outperforms the state of the art, obtaining highly stable decision rates and a substantially higher overall accuracy. Our case study showed that it makes sensible decisions. Most importantly, Slope is simple and elegant: its models are easy to interpret, it comes with a stable confidence score and a natural statistical test, and it is computationally efficient.

The core idea of Slope is to decide for the causal direction by the simplest, best fitting regression function. To deal with non-deterministic data, we allow our model to additionally use local regression functions for non-deterministic values of X, which the experiments show leads to a large increase in performance. Importantly, we employ local regression within an MDL framework; without this, fitting local regressors would not make sense, as it would lead to strong overfitting.
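The core idea can be illustrated with a much-simplified two-part MDL score: fit a small family of functions in each direction, charge a model cost per parameter plus a cost for encoding the residuals, and prefer the cheaper direction. This sketch is not the actual Slope encoding; the polynomial function class, the crude 32-bits-per-parameter model cost, and the Gaussian residual code are our own simplifying assumptions, and local regressors are omitted entirely:

```python
import numpy as np

def residual_bits(res):
    """Bits to encode residuals under a Gaussian with fitted variance."""
    var = max(np.var(res), 1e-12)  # clamp to avoid log of zero
    return 0.5 * len(res) * np.log2(2 * np.pi * np.e * var)

def best_fit_bits(x, y, max_degree=3):
    """Two-part cost: model cost plus residual cost, minimized over polynomial degree."""
    best = np.inf
    for d in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, d)
        res = y - np.polyval(coeffs, x)
        model_cost = 32.0 * (d + 1)  # crude assumption: 32 bits per parameter
        best = min(best, model_cost + residual_bits(res))
    return best

def infer_direction(x, y):
    """Return 'X->Y' if describing Y as a function of X is cheaper, else 'Y->X'."""
    return "X->Y" if best_fit_bits(x, y) <= best_fit_bits(y, x) else "Y->X"
```

For a cubic relationship, the forward direction compresses well (a cubic polynomial leaves tiny residuals), while the backward direction must approximate a cube root and pays for it in residual cost, so the score decides for X causing Y.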

A key advantage of our MDL-based instantiation of the algorithmic Markov condition, compared to HSIC-based independence tests and IGCI, is that our score is not dependent on the size of the data. This makes it possible to meaningfully compare results among different tests; this is clearly reflected in the stable decision rates. Another advantage is that it allows us to define a natural statistical test based on compression rates, which allows us to avoid insignificant inferences.

Although the performance of Slope is impressive, there is always room for improvement. With regard to confidence and significance, we are highly interested in investigating whether we can define a test that directly infers the significance of a confidence score (without resorting to permutation testing).

In addition, it is possible to improve the search for local components by considering alternative re-distributions of X, apart from uniformly ascending values. This is not trivial, as the number of possible orders grows factorially, and it is not immediately clear how to efficiently optimise the regression fit over this space. More obviously, there is room to expand the set of function classes we currently use; kernel-based or Gaussian-process-based regression are powerful methods that, at the expense of computation, will likely improve performance further.

For future work, we additionally aim to consider the detection of confounding variables—an open problem that we believe our information theoretic framework naturally lends itself to—as well as to extend Slope to multivariate and possibly mixed-type data. We are perhaps most enthusiastic about leveraging the high accuracy of Slope towards inferring causal networks from biological processes without the need of conditional independence tests.

VIII Conclusion

We studied the problem of inferring the causal direction between two univariate numeric random variables X and Y. To model the causal dependencies, we proposed an MDL-based framework employing local and global regression. Further, we proposed Slope, an efficient linear-time algorithm that instantiates this framework. In addition, we introduced new cause-effect pairs from a material science data set.

Empirical evaluations on synthetic and real world data show that Slope reliably infers the correct causal direction with high accuracy. On benchmark data, Slope outperforms the state of the art in accuracy, provides a more robust decision rate, and is additionally computationally more efficient. In future research, we plan to refine our statistical test, consider the detection of confounding, extend causal inference to the multivariate setting, and use Slope to infer causal networks directly from data.


The authors wish to thank Panagiotis Mandros and Mario Boley for help with the octet binary causal pairs. Alexander Marx is supported by the International Max Planck Research School for Computer Science (IMPRS-CS). Both authors are supported by the Cluster of Excellence Multimodal Computing and Interaction within the Excellence Initiative of the German Federal Government.


  • [1] D. Anderson, K. Burnham, and W. Thompson, “Null Hypothesis Testing: Problems , Prevalence , and an Alternative,” J. Wildl. Manage., vol. 64, no. 4, pp. 912–923, 2000.
  • [2] Y. Benjamini and Y. Hochberg, “Controlling the false discovery rate: a practical and powerful approach to multiple testing,” J. R. Statist. Soc. B, vol. 57(1), pp. 289–300, 1995.
  • [3] P. Bloem and S. de Rooij, “Large-scale network motif learning with compression,” CoRR, vol. abs/1701.02026, 2017.
  • [4] K. Budhathoki and J. Vreeken, “Causal inference by compression,” in ICDM.   IEEE, 2016, pp. 41–50.
  • [5] L. M. Ghiringhelli, J. Vybiral, S. V. Levchenko, C. Draxl, and M. Scheffler, “Big data of materials science: Critical role of the descriptor,” PRL, vol. 114, no. 10, pp. 1–5, 2015.
  • [6] B. R. Goldsmith, M. Boley, J. Vreeken, M. Scheffler, and L. M. Ghiringhelli, “Uncovering structure-property relationships of materials by subgroup discovery,” New J. Phys., vol. 19, no. 1, p. 013031, 2017.
  • [7] P. Grünwald, The Minimum Description Length Principle.   MIT Press, 2007.
  • [8] P. Hoyer, D. Janzing, J. Mooij, J. Peters, and B. Schölkopf, “Nonlinear causal discovery with additive noise models,” in NIPS, 2009, pp. 689–696.
  • [9] D. Janzing and B. Schölkopf, “Causal inference using the algorithmic markov condition,” IEEE TIT, vol. 56, no. 10, pp. 5168–5194, 2010.
  • [10] D. Janzing, J. Mooij, K. Zhang, J. Lemeire, J. Zscheischler, P. Daniušis, B. Steudel, and B. Schölkopf, “Information-geometric approach to inferring causal directions,” AIJ, vol. 182-183, pp. 1–31, 2012.
  • [11] A. Kolmogorov, “Three approaches to the quantitative definition of information,” Problemy Peredachi Informatsii, vol. 1, no. 1, pp. 3–11, 1965.
  • [12] J. Lemeire and E. Dirkx, “Causal models as minimal descriptions of multivariate systems,” 2006.
  • [13] M. Li and P. Vitányi, An Introduction to Kolmogorov Complexity and its Applications.   Springer, 1993.
  • [14] F. Liu and L. Chan, “Causal inference on discrete data via estimating distance correlations,” Neur. Comp., vol. 28, no. 5, pp. 801–814, 2016.
  • [15] P. Mandros, M. Boley, and J. Vreeken, “Discovering Reliable Approximate Functional Dependencies,” in KDD.   ACM, 2017, pp. 355–364.
  • [16] J. Mooij, O. Stegle, D. Janzing, K. Zhang, and B. Schölkopf, “Probabilistic latent variable models for distinguishing between cause and effect,” NIPS, no. 26, pp. 1–9, 2010.
  • [17] J. M. Mooij, J. Peters, D. Janzing, J. Zscheischler, and B. Schölkopf, “Distinguishing cause from effect using observational data: Methods and benchmarks,” JMLR, vol. 17, no. 32, pp. 1–102, 2016.
  • [18] J. Pearl, Causality: Models, Reasoning and Inference, 2nd ed.   New York, NY, USA: Cambridge University Press, 2009.
  • [19] J. Peters, D. Janzing, and B. Schölkopf, “Identifying cause and effect on discrete data using additive noise models,” in AISTATS.   JMLR, 2010, pp. 597–604.
  • [20] J. Peters, J. Mooij, D. Janzing, and B. Schölkopf, “Causal discovery with continuous additive noise models,” JMLR, vol. 15, pp. 2009–2053, 2014.
  • [21] D. N. Reshef, Y. A. Reshef, H. K. Finucane, S. R. Grossman, G. McVean, P. J. Turnbaugh, E. S. Lander, M. Mitzenmacher, and P. C. Sabeti, “Detecting novel associations in large data sets,” Science, vol. 334, no. 6062, pp. 1518–1524, 2011.
  • [22] J. Rissanen, “Modeling by shortest data description,” Automatica, vol. 14, no. 1, pp. 465–471, 1978.
  • [23] ——, “A universal prior for integers and estimation by minimum description length,” Annals Stat., vol. 11, no. 2, pp. 416–431, 1983.
  • [24] E. Sgouritsa, D. Janzing, P. Hennig, and B. Schoelkopf, “Inference of Cause and Effect with Unsupervised Inverse Regression,” AISTATS, vol. 38, pp. 847–855, 2015.
  • [25] S. Shimizu, P. O. Hoyer, A. Hyvärinen, and A. Kerminen, “A linear non-gaussian acyclic model for causal discovery,” JMLR, vol. 7, pp. 2003–2030, 2006.
  • [26] P. Spirtes, C. Glymour, and R. Scheines, Causation, Prediction, and Search.   MIT press, 2000.
  • [27] J. A. Van Vechten, “Quantum dielectric theory of electronegativity in covalent systems. i. electronic dielectric constant,” Physical Review, vol. 182, no. 3, p. 891, 1969.
  • [28] T. Verma and J. Pearl, “Equivalence and synthesis of causal models,” in UAI, 1991, pp. 255–270.
  • [29] J. Vreeken, “Causal inference by direction of information,” in SDM.   SIAM, 2015, pp. 909–917.
  • [30] C. S. Wallace and D. M. Boulton, “An information measure for classification,” Comput. J., vol. 11, no. 1, pp. 185–194, 1968.
  • [31] K. Zhang and A. Hyvärinen, “On the identifiability of the post-nonlinear causal model,” in UAI.   AUAU Press, 2009, pp. 647–655.