Causal inference using
the algorithmic Markov condition
Inferring the causal structure that links observables is usually based upon detecting statistical dependences and choosing simple graphs that make the joint measure Markovian. Here we argue why causal inference is also possible when only single observations are present.
We develop a theory of how to generate causal graphs explaining similarities between single objects. To this end, we replace the notion of conditional stochastic independence in the causal Markov condition with the vanishing of conditional algorithmic mutual information and describe the corresponding causal inference rules.
We explain why a consistent reformulation of causal inference in terms of algorithmic complexity implies a new inference principle that also takes into account the complexity of conditional probability densities, making it possible to select among Markov equivalent causal graphs. This insight provides a theoretical foundation for a heuristic principle proposed in earlier work.
We also discuss how to replace Kolmogorov complexity with decidable complexity criteria. This can be seen as an algorithmic analog of replacing the empirically undecidable question of statistical independence with practical independence tests that are based on implicit or explicit assumptions on the underlying distribution.
- 1 Introduction to causal inference from statistical data
- 2 Inferring causal relations among individual objects
- 3 Novel statistical inference rules from the algorithmic Markov condition
- 4 Decidable modifications of the inference rule
- 5 Conclusions
1 Introduction to causal inference from statistical data
Causal inference from statistical data has attracted increasing interest in the past decade. In contrast to traditional statistics where statistical dependences are only taken to prove that some kind of relation between random variables exists, causal inference methods in machine learning are explicitly designed to generate hypotheses on causal directions automatically based upon statistical independence tests [1, 2]. The crucial assumption connecting statistics with causality is the causal Markov condition explained below after we have introduced some notations and terminology.
We denote random variables by capitals and their values by the corresponding lowercase letters. Let $X_1,\dots,X_n$ be random variables and $G$ be a directed acyclic graph (DAG) representing the causal structure, where an arrow from node $X_i$ to node $X_j$ indicates a direct causal effect. Here the term direct is understood with respect to the chosen set of variables in the sense that the information flow between the two variables considered is not performed via using one or more of the other variables as intermediate nodes. We will next briefly rephrase the postulates that are required in the statistical theory of inferred causation [2, 1].
1.1 Causal Markov condition
When we consider the causal structure that links random variables $X_1,\dots,X_n$ we will implicitly assume that the set $\{X_1,\dots,X_n\}$ is causally sufficient in the sense that all common causes of two variables in the set are also contained in the set. Then a causal hypothesis $G$ is only acceptable as potential causal structure if the joint distribution $P(X_1,\dots,X_n)$ satisfies the Markov condition with respect to $G$. There are several formulations of the Markov condition that are known to coincide under some technical condition (see Lemma 1). We will first introduce the following version, which is sometimes referred to as the parental or the local Markov condition.
To this end, we introduce the following notations. $PA_j$ is the set of parents of $X_j$ and $ND_j$ the set of non-descendants of $X_j$ except $X_j$ itself. If $X, Y, Z$ are sets of random variables, $X \perp Y \mid Z$ means $X$ is statistically independent of $Y$, given $Z$.
Postulate 1 (statistical causal Markov condition, local)
If a directed acyclic graph $G$ formalizes the causal structure among the random variables $X_1,\dots,X_n$, then
$$X_j \perp ND_j \mid PA_j$$
for all $j = 1,\dots,n$.
We call this postulate the statistical causal Markov condition because we will later introduce an algorithmic version. The fact that conditional irrelevance not only occurs in the context of statistical dependences has been emphasized in the literature (e.g. [4, 1]) in the context of describing abstract properties (like the semi-graphoid axioms) of the conditional-independence relation $\cdot \perp \cdot \mid \cdot$. We will therefore state the causal Markov condition also in an abstract form that does not refer to any specific notion of conditional informational irrelevance:
Postulate 2 (abstract causal Markov condition, local)
Given all the direct causes of an observable $O$, its non-effects provide no additional information on $O$.
Here, observables denote something in the real world that can be observed and the observation of which can be formalized in terms of a mathematical language. In this paper, observables will either be random variables (formalizing statistical quantities) or they will be strings (formalizing the description of objects). Accordingly, information will be statistical or algorithmic mutual information, respectively.
The importance of the causal Markov condition lies in the fact that it links causal terms like “direct causes” and “non-effects” to informational relevance of observables. The local Markov condition is rather intuitive because it echoes the fact that the information flows from direct causes to their effect and every dependence between a node and its non-descendants involves the direct causes. However, the independences postulated by the local Markov condition imply additional independences. It is therefore hard to decide whether an independence must hold for a Markovian distribution or not, solely on the basis of the local formulation. In contrast, the global Markov condition makes the complete set of independences obvious. To state it we first have to introduce the following graph-theoretical concept.
Definition 1 (d-separation)
A path $p$ in a DAG is said to be d-separated (or blocked) by a set of nodes $Z$ if and only if
1. $p$ contains a chain $i \to m \to j$ or a fork $i \leftarrow m \to j$ such that the middle node $m$ is in $Z$, or
2. $p$ contains an inverted fork (or collider) $i \to m \leftarrow j$ such that the middle node $m$ is not in $Z$ and such that no descendant of $m$ is in $Z$.
A set $Z$ is said to d-separate $X$ from $Y$ if and only if $Z$ blocks every (possibly undirected) path from a node in $X$ to a node in $Y$.
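For small graphs, the definition can be checked by brute force: enumerate all undirected paths and apply the two blocking rules literally. The following Python sketch is our own illustration (the child-set dictionary encoding of the DAG is an assumption of the example, not notation from the text):

```python
def undirected_paths(dag, a, b, path=None):
    """Yield all simple undirected paths from a to b.

    dag: dict mapping each node to the set of its children.
    """
    path = path or [a]
    if a == b:
        yield list(path)
        return
    parents = {u for u, children in dag.items() if a in children}
    for nxt in set(dag.get(a, set())) | parents:
        if nxt not in path:
            path.append(nxt)
            yield from undirected_paths(dag, nxt, b, path)
            path.pop()

def descendants(dag, v):
    out, stack = set(), [v]
    while stack:
        for child in dag.get(stack.pop(), set()):
            if child not in out:
                out.add(child)
                stack.append(child)
    return out

def blocked(dag, path, Z):
    """Apply the two blocking rules of Definition 1 to a single path."""
    for i in range(1, len(path) - 1):
        left, mid, right = path[i - 1], path[i], path[i + 1]
        collider = mid in dag.get(left, set()) and mid in dag.get(right, set())
        if collider:
            # rule 2: a collider blocks unless mid or one of its descendants is in Z
            if mid not in Z and not (descendants(dag, mid) & Z):
                return True
        elif mid in Z:
            # rule 1: chain or fork with the middle node in Z
            return True
    return False

def d_separated(dag, X, Y, Z):
    """True iff Z blocks every path between the node sets X and Y."""
    return all(blocked(dag, p, set(Z))
               for x in X for y in Y
               for p in undirected_paths(dag, x, y))
```

For the collider A -> C <- B, the empty set d-separates A from B, while conditioning on C unblocks the path: the well-known explaining-away effect.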
The following Lemma shows that d-separation is the correct condition for deciding whether an independence is implied by the local Markov condition (see , Theorem 3.27).
Lemma 1 (equivalent Markov conditions)
Let $P(X_1,\dots,X_n)$ have a density $p$ with respect to a product measure. Then the following three statements are equivalent:
Recursive form: $p$ admits the factorization
$$p(x_1,\dots,x_n) = \prod_{j=1}^{n} p(x_j \mid pa_j), \qquad (1)$$
where $p(x_j \mid pa_j)$ is shorthand for the conditional probability density of $X_j$, given the values of all parents of $X_j$.
Local (or parental) Markov condition: for every node $X_j$ we have
$$X_j \perp ND_j \mid PA_j,$$
i.e., it is conditionally independent of its non-descendants (except itself), given its parents.
Global Markov condition:
$$X \perp Y \mid Z$$
for all three sets of nodes $X, Y, Z$ for which $X$ and $Y$ are d-separated by $Z$.
Moreover, the local and the global Markov condition are equivalent even if $P$ does not have a density with respect to a product measure.
The conditional densities $p(x_j \mid pa_j)$ are also called the Markov kernels relative to the hypothetical causal graph $G$. It is important to note that every choice of Markov kernels defines a Markovian density $p$, i.e., the Markov kernels define exactly the set of free parameters remaining after the causal structure has been specified.
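As a toy illustration of this point (our own example with made-up numbers), fixing the hypothetical graph X -> Y over binary variables and choosing the two kernels freely always produces a valid Markovian joint distribution:

```python
# Hypothetical causal graph X -> Y over binary variables (toy numbers).
# The Markov kernels p(x) and p(y|x) are exactly the free parameters
# that remain after the causal structure has been fixed.
p_x = [0.7, 0.3]                      # p(X=0), p(X=1)
p_y_x = [[0.9, 0.1],                  # p(y | X=0)
         [0.2, 0.8]]                  # p(y | X=1)

# Recursive factorization: p(x, y) = p(x) * p(y | x).
joint = {(x, y): p_x[x] * p_y_x[x][y] for x in (0, 1) for y in (0, 1)}

# Any choice of kernels yields a normalized Markovian joint distribution.
total = sum(joint.values())
```

The same construction works for any DAG: multiplying one kernel per node, each conditioned on the values of its parents, never requires a compatibility check between the kernels.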
To select graphs among all those that render $P$ Markovian, we need an additional postulate:
Postulate 3 (causal faithfulness)
Among all graphs $G$ for which $P$ is Markovian, prefer the ones for which all the observed conditional independences in the joint measure $P$ are imposed by the Markov condition.
The idea is that the set of observed independences is typical for the causal structure under consideration rather than being the result of specific choices of the Markov kernels. This becomes even more intuitive when we restrict our attention to random variables with finite value sets and observe that the Markov kernels then define a natural parameterization of the set of Markovian distributions in a finite-dimensional space. The non-faithful distributions form a submanifold of lower dimension, i.e., a set of Lebesgue measure zero. They therefore almost surely do not occur if we assume that “nature chooses” the Markov kernels for the different nodes independently according to some density on the parameter space.
The above “zero Lebesgue measure argument” is close to the spirit of Bayesian approaches, where priors on the set of Markov kernels are specified for every possible hypothetical causal DAG and causal inference is performed by maximizing posterior probabilities for hypothetical DAGs, given the observed data. This procedure leads to an implicit preference for faithful structures in the infinite sampling limit, given some natural conditions on the priors over the parameter space. The assumption that “nature chooses Markov kernels independently”, which is also part of the Bayesian approach, will turn out to be closely related to the algorithmic Markov condition postulated in this paper.
We now discuss the justification of the statistical causal Markov condition because we will later justify the algorithmic Markov condition in a similar way. To this end, we introduce functional models:
Postulate 4 (functional model of causality)
If a directed acyclic graph $G$ formalizes the causal relation between the random variables $X_1,\dots,X_n$, then every $X_j$ can be written as a deterministic function of its parents and a noise variable $N_j$,
$$X_j = f_j(PA_j, N_j),$$
where all noise variables $N_1,\dots,N_n$ are jointly independent.
Then we have (, Theorem 1.4.1):
Lemma 2 (Markov condition in functional models)
Every joint distribution $P(X_1,\dots,X_n)$ generated according to the functional model in Postulate 4 satisfies the local and the global Markov condition relative to $G$.
We rephrase the proof in  because our proof for the algorithmic version will rely on the same idea.
Proof of Lemma 2: extend $G$ to a graph $\tilde{G}$ with nodes $X_1,\dots,X_n,N_1,\dots,N_n$ that additionally contains an arrow from each $N_j$ to $X_j$. The given joint distribution of noise variables induces a joint distribution
$$P(X_1,\dots,X_n,N_1,\dots,N_n)$$
that satisfies the local Markov condition with respect to $\tilde{G}$: first, every $X_j$ is completely determined by its parents, making the condition trivial. Second, every $N_j$ is parentless and thus we have to check that it is (unconditionally) independent of its non-descendants. The latter are deterministic functions of the noise variables $N_i$ with $i \neq j$. Hence the independence follows from the joint independence of all $N_i$.
By Lemma 1, $P$ is also globally Markovian w.r.t. $\tilde{G}$. Then we observe that $X_j$ and $ND_j$ are d-separated in $\tilde{G}$, given $PA_j$ (where the parents and non-descendants are defined with respect to $G$). Hence $P(X_1,\dots,X_n)$ satisfies the local Markov condition w.r.t. $G$ and hence also the global Markov condition.
Functional models formalize the idea that the outcome of an experiment is completely determined by the values of all relevant parameters, where the only uncertainty stems from the fact that some of these parameters are hidden. Even though this kind of determinism is in contrast with the commonly accepted interpretation of quantum mechanics, we still consider functional models as a helpful framework for discussing causality in real life since quantum mechanical laws refer mainly to phenomena in micro-physics.
Causal inference using the Markov condition and the faithfulness assumption has been implemented in causal learning algorithms. The following fundamental limitations of these methods deserve our further attention:
Markov equivalence: There are only a few cases where the inference rules provide unique causal graphs. Often one ends up with a large class of Markov equivalent graphs, i.e., graphs that entail the same set of independences. For this reason, additional inference rules are desirable.
Dependence on i.i.d. sampling: the whole setting of causal inference relies on the ability to sample repeatedly and independently from the same joint distribution $P(X_1,\dots,X_n)$. As opposed to this assumption, causal inference in real life also deals with probability distributions that change in time, and often one infers causal relations among single observations without referring to statistics at all.
The idea of this paper is to develop a theory of probability-free causal inference that helps to construct causal hypotheses based on similarities of single objects. Here, similarities will be defined by comparing the length of the shortest description of single objects to the length of their shortest joint description. Despite the analogy to causal inference from statistical data (which is due to known analogies between statistical and algorithmic information theory) our theory also implies new statistical inference rules. In other words, our approach to address weakness 2 also yields new methods to address 1.
The paper is structured as follows. In the remaining part of this section, i.e., Subsection 1.2, we describe recent approaches from the literature to causal inference from statistical data that address problem 1 above. In Section 2 we develop the general theory on inferring causal relations among individual objects based on algorithmic information. This framework appears, at first sight, as a straightforward adaptation of the statistical framework (using well-known correspondences between statistical and algorithmic information theory). However, Section 3 shows that it implies novel causal inference rules for statistical inference because non-statistical algorithmic dependences can even occur in data that were obtained from statistical sampling. In Section 4 we describe how to replace causal inference rules based on the uncomputable algorithmic information with decidable criteria that are still motivated by the uncomputable idealization.
The table in fig. 1 summarizes the analogies between the theory of statistical and the theory of algorithmic causal inference described in this paper. The differences, however, which are the main subject of Sections 3 to 4, can hardly be represented in the table.
| | statistical | algorithmic |
|---|---|---|
| observables (vertices of a DAG) | random variables | sequences of strings |
| observations | i.i.d. sampled data | strings |
| I. recursion formula | eq. (1) | eq. (6) |
| II. local Markov condition | statistical independence | algorithmic independence |
| III. global Markov condition | d-separation | d-separation |
| equivalence of I–III | Theorem 3.27 in | Theorem 3 |
| functional models | Section 1.4 in | Postulate 6 |
| functional models imply Markov condition | Theorem 1.4.1 in | Theorem 4 |
| decidable dependence criteria | assumptions on the underlying distribution | Section 4 |
1.2 Seeking new statistical inference rules
In  and  we have proposed causal inference rules that are based on the idea that the factorization of $p(\mathrm{cause}, \mathrm{effect})$ into $p(\mathrm{cause})$ and $p(\mathrm{effect} \mid \mathrm{cause})$ typically leads to simpler terms than the “artificial” factorization into $p(\mathrm{effect})$ and $p(\mathrm{cause} \mid \mathrm{effect})$. The generalization of this principle reads: Among all graphs $G$ that render $p$ Markovian, prefer the one for which the decomposition in eq. (1) yields the simplest Markov kernels. We have called this vague idea the “principle of plausible Markov kernels”.
Before we describe several options to define simplicity we describe a simple example to illustrate the idea. Assume we have observed that a binary variable $X$ (with values $x \in \{0,1\}$) and a continuous variable $Y$ (with values in $\mathbb{R}$) are distributed according to a mixture of two Gaussians (see fig. 2). Since this will simplify the further discussion let us assume that the two components are equally weighted, i.e.,
$$p(y \mid x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{(y - x\mu)^2}{2\sigma^2}\right) \quad \text{with} \quad P(X=0) = P(X=1) = \frac{1}{2},$$
where $\mu$ determines the shift of the mean caused by switching between $x=0$ and $x=1$.
The marginal $p(y)$ is given by
$$p(y) = \frac{1}{2\sqrt{2\pi}\,\sigma}\left(\exp\!\left(-\frac{y^2}{2\sigma^2}\right) + \exp\!\left(-\frac{(y-\mu)^2}{2\sigma^2}\right)\right).$$
One will prefer the causal structure $X \to Y$ over $Y \to X$ because the former explains in a natural way why $p(y)$ is bimodal: the effect of $X$ on $Y$ is simply to shift the Gaussian distribution by $\mu$. In the latter model the bimodality of $p(y)$ remains unexplained. To prefer one causal model over another because the corresponding conditionals are simpler seems to be a natural application of Occam's Razor. However, Section 3 will show that such an inference rule also follows from the theory developed in the present paper when simplicity is meant in the sense of low Kolmogorov complexity. In the remaining part of this section we will sketch some approaches to implement the “principle of plausible Markov kernels” in practical applications.
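A short simulation (ours; the parameter values mu = 4 and sigma = 1 are arbitrary) makes the bimodality of the marginal visible for the model X -> Y:

```python
import random

random.seed(0)
n, mu, sigma = 50_000, 4.0, 1.0

# Causal model X -> Y: the binary cause merely shifts the mean of a Gaussian.
xs = [random.randrange(2) for _ in range(n)]        # equally weighted components
ys = [random.gauss(mu * x, sigma) for x in xs]      # p(y|x) is N(x*mu, sigma^2)

# The marginal p(y) is bimodal: the mass concentrates near 0 and near mu.
near_0 = sum(abs(y) < 2 * sigma for y in ys) / n
near_mu = sum(abs(y - mu) < 2 * sigma for y in ys) / n
```

About half of the probability mass ends up in each mode, which is the bimodal shape the backward model Y -> X would have to postulate without explanation.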
In  we have defined a family of “plausible Markov kernels” by conditionals $p(x_j \mid pa_j)$ that are second order exponential models, i.e., $\log p(x_j \mid pa_j)$ is a polynomial of order two in the variables $(x_j, pa_j)$ up to some additive partition function (for normalization) that depends only on the variables $pa_j$. For every hypothetical causal graph, one thus obtains a family of “plausible joint distributions” that are products of the plausible Markov kernels. Then we prefer the causal direction for which the plausible joint distributions provide the best fit for the given observations.
In  we have proposed the following principle for causal inference: Given a joint distribution $P$ of the random variables $X_1,\dots,X_n$, prefer a causal structure $G$ for which
$$\sum_{j=1}^{n} C\big(p(x_j \mid pa_j)\big)$$
is minimal, where $C$ is some complexity measure on conditional probability densities.
There is also another recent proposal for new inference rules that refers to a related simplicity assumption, though formally quite different from the ones above. The authors of  observe that there are joint distributions of $X$ and $Y$ that can be explained by a linear model with additive non-Gaussian noise for one causal direction but require non-linear causal influence for the other causal direction. For real data they prefer the causal graph for which the observations are closer to the linear model.
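The idea behind that proposal can be illustrated with a minimal simulation. This is our own sketch, not the authors' algorithm: the dependence score below (correlation between squared values) is a crude stand-in for a proper independence test.

```python
import random

random.seed(0)
n = 50_000
x = [random.uniform(-1, 1) for _ in range(n)]          # non-Gaussian cause
y = [2.0 * xi + random.uniform(-1, 1) for xi in x]     # linear mechanism, non-Gaussian noise

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)

def residual(cause, effect):
    """Residual of the least-squares linear fit of effect on cause."""
    slope = cov(cause, effect) / cov(cause, cause)
    return [e - slope * c for c, e in zip(cause, effect)]

def dep(a, b):
    # crude dependence score: correlation between squared values,
    # which vanishes when the variables are independent
    a2, b2 = [u * u for u in a], [v * v for v in b]
    return abs(cov(a2, b2)) / (cov(a2, a2) * cov(b2, b2)) ** 0.5

# forward (true) direction: the residual is independent of the input;
# backward direction: the residual stays dependent on the input
score_fwd = dep(x, residual(x, y))
score_bwd = dep(y, residual(y, x))
```

Only the causal direction admits a linear model whose residual is independent of the input, which is the asymmetry the method exploits.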
To justify the belief that conditionals that correspond to the true causal direction tend to be simpler than non-causal conditionals (which is common to all the approaches above) is one of the main goals of this paper.
2 Inferring causal relations among individual objects
It has been emphasized  that the application of causal inference principles often benefits from the non-determinism of causal relations between the observed random variables. In contrast, human learning in real life often concerns quite deterministic relations. Apart from that, the most important difference between human causal learning and the inference rules in [2, 1] is that the former is also about causal relations among single objects and does not necessarily require sampling. Assume, for instance, that the comparison of two texts shows similarities (see e.g. ) such that the author of the text that appeared later is accused of having copied it from the other one, or both are accused of having copied from a third one. The statement that the texts are similar could be based on a statistical analysis of the occurrences of certain words or letter sequences. However, such simple statistical tests can fail in both directions: In Subsection 2.2 (before Theorem 3) we will discuss an example showing that they can erroneously infer causal relations even though none exist. This is because parts that are common to both objects, e.g., the two texts, are only suitable to prove a causal link if they are not “too straightforward” to come up with.
On the other hand, causal relations can generate similarities between texts for which every efficient statistical analysis is believed to fail. We will describe an idea from cryptography to show this. A cryptosystem is called ROR-CCA-secure (Real or Random under Chosen Ciphertext Attacks) if there is no efficient method to decide whether a text is random or the encrypted version of some known text without knowing the key. Given that there are ROR-CCA-secure schemes (which is unknown but believed by cryptographers), we have a causal relation leading to similarities that are not detected by any kind of simple counting statistics. However, once an attacker has found the key (maybe by exhaustive search), he recognizes similarities between the encrypted text and the plain text and infers a causal relation. This already suggests two things: (1) detecting similarities involves searching over potential rules for how properties of one object can be algorithmically derived from the structure of the other; (2) it is likely that inferring causal relations therefore relies on computationally infeasible decisions (if computable at all) on whether two objects have information in common or not.
2.1 Algorithmic mutual information
We will now describe how the information one object provides about the other can be measured in terms of Kolmogorov complexity. We start with some notation and terminology. Below, strings will always be binary strings since every description given in terms of a different alphabet can be converted into a binary word. The set of binary strings of arbitrary length will be denoted by $\{0,1\}^*$. Recall that the Kolmogorov complexity $K(x)$ of a string $x$ is defined as the length of the shortest program that generates $x$ using a previously defined universal Turing machine [13, 14, 15, 16, 17, 18, 19]. The conditional Kolmogorov complexity $K(x \mid y)$ of a string $x$ given another string $y$ is the length of the shortest program that can generate $x$ from $y$. In order to keep our notation simple we use $K(x, y)$ to refer to the complexity of the concatenation of $x$ and $y$.
We will mostly have equations that are valid only up to additive constant terms in the sense that the difference between both sides does not depend on the strings involved in the equation (but it may depend on the Turing machines they refer to). To indicate such constants we denote the corresponding equality by $\stackrel{+}{=}$ and likewise $\stackrel{+}{\leq}$ for inequalities. In this context it is important to note that the number $n$ of nodes of the causal graph is considered to be a constant. Moreover, for every string $x$ we define $x^*$ as its shortest description. If the latter is not unique, we consider the first one in a lexicographic order. It is necessary to distinguish between conditioning on $x^*$ and conditioning on $x$. This is because there is a trivial algorithmic method to generate $x$ from $x^*$ (just apply the Turing machine to $x^*$), but there is no algorithm of constant length that computes the shortest description $x^*$ from a general input $x$. One can show  that $x^* \equiv (x, K(x))$. Here, the equivalence symbol $\equiv$ means that both sides can be obtained from each other by constant-length programs. The following equation for the joint algorithmic information of two strings will be useful :
$$K(x, y) \stackrel{+}{=} K(x) + K(y \mid x^*).$$
The conditional version reads :
$$K(x, y \mid z) \stackrel{+}{=} K(x \mid z) + K(y \mid x, K(x \mid z), z).$$
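Since K is uncomputable, practical work (in the spirit of Section 4) replaces it by the length of a compressed string. A minimal sketch of this substitution with zlib, where C(yx) - C(y) serves as a crude stand-in for the conditional complexity; the choice of compressor and the particular strings are our own:

```python
import zlib

def C(s: bytes) -> int:
    """Compressed length: a computable upper-bound proxy for K(s)."""
    return len(zlib.compress(s, 9))

def C_cond(x: bytes, y: bytes) -> int:
    """Crude proxy for the conditional complexity K(x | y)."""
    return max(C(y + x) - C(y), 0)

phrase = b"the algorithmic markov condition links compression and causality. "
x = phrase * 30
y = phrase * 30

# Knowing y makes the identical string x almost free to describe,
# while describing x from scratch costs much more.
saving = C(x) - C_cond(x, y)
```

The proxy inherits only the upper-bound direction of the identities above, but it is enough to make the qualitative statements of this section testable on real data.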
The most important notion in this paper will be the algorithmic mutual information measuring the amount of algorithmic information that two objects have in common. Following  we define:
Definition 2 (algorithmic mutual information)
Let $x, y$ be two strings. Then the algorithmic mutual information of $x$ and $y$ is
$$I(x : y) := K(y) - K(y \mid x^*).$$
The mutual information is the number of bits that can be saved in the description of $y$ when the shortest description of $x$ is already known. The fact that one uses $x^*$ instead of $x$ ensures that it coincides with the symmetric expression $K(x) + K(y) - K(x, y)$:
Lemma 3 (symmetric version of algorithmic mutual information)
For two strings $x, y$ we have
$$I(x : y) \stackrel{+}{=} K(x) + K(y) - K(x, y).$$
In the following sections, non-vanishing mutual information will be taken as an indicator for causal relations, but more detailed information on the causal structure will be inferred from conditional mutual information. This is in contrast to approaches from the literature to measure similarity versus differences of single objects, which we briefly review now. To measure differences between single objects, e.g. pictures [22, 23], one defines the information distance $E(x, y)$ between the two corresponding strings as the length of the shortest program that computes $x$ from $y$ and $y$ from $x$. It can be shown  that
$$E(x, y) \stackrel{\log}{=} \max\{K(x \mid y), K(y \mid x)\},$$
where $\stackrel{\log}{=}$ means equality up to a logarithmic term. However, whether the information distance is small or large is not an appropriate condition for the existence and the strength of a causal link. Complex objects can have much information in common even though their distance is large. In order to obtain a measure that relates the amount of information that is disjoint for the two strings to the amount they share, Li et al.  and Bennett et al.  use the “normalized distance measure”
$$d(x, y) = \frac{\max\{K(x \mid y^*), K(y \mid x^*)\}}{\max\{K(x), K(y)\}}.$$
The intuitive meaning of $1 - d(x, y)$ is obvious from its direct relation to mutual information: it measures the fraction of the information of the more complex string that is shared with the other one. Bennett et al.  propose to construct evolutionary histories of chain letters using such kinds of information distance measures. However, as in statistical causal inference, inferring adjacencies on the basis of strongest dependences is only possible for simple causal structures like trees. In the general case, non-adjacent nodes can share more information than adjacent ones when information is propagated via more than one path. Instead of constructing causal neighborhood relations by comparing information distances we will therefore use conditional mutual information.
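The normalized distance measure has a popular computable counterpart, the normalized compression distance, in which K is replaced by the length of a compressed string. The sketch below is our own illustration with zlib; the string contents are arbitrary:

```python
import zlib

def C(s: bytes) -> int:
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable stand-in for the
    normalized information distance built from K."""
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

a = b"similarities of two texts indicate a common past " * 12
b = b"similarities of two texts indicate a common past " * 12 + b"with a tail"
c = bytes(range(256)) * 3

related, unrelated = ncd(a, b), ncd(a, c)
```

Strings sharing most of their content come out close to 0, unrelated ones close to 1, matching the interpretation of d as the unshared fraction of the more complex string.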
In order to define its algorithmic version, we first observe that Definition 2 can be rewritten into the less concise form
$$I(x : y) \stackrel{+}{=} K(y) - K(y \mid x, K(x)).$$
This formula generalizes more naturally to the conditional analog :
Definition 3 (conditional algorithmic mutual information)
Let $x, y, z$ be three strings. Then the conditional algorithmic mutual information of $x$ and $y$, given $z$, is
$$I(x : y \mid z) := K(y \mid z) - K(y \mid x, K(x \mid z), z).$$
As shown in  (Remark II.3), the conditional mutual information is also symmetric up to a constant term:
Lemma 4 (symmetric algorithmic conditional mutual information)
For three strings $x, y, z$ one has:
$$I(x : y \mid z) \stackrel{+}{=} K(x \mid z) + K(y \mid z) - K(x, y \mid z).$$
Definition 4 (algorithmic conditional independence)
Given three strings $x, y, z$, we call $x$ conditionally independent of $y$, given $z$ (denoted by $x \perp y \mid z$) if
$$I(x : y \mid z) \approx 0.$$
In words: Given $z$, the additional knowledge of $y$ does not allow us a stronger compression of $x$. This remains true if we are given the Kolmogorov complexity of $y$, given $z$.
The theory developed below will describe laws where symbols like $x, y, z$ represent arbitrary strings. Then one can always think of sequences of strings of increasing complexity, and statements like “the equation holds up to constant terms” are well-defined. We will then understand conditional independence in the sense of $I(x : y \mid z) \stackrel{+}{=} 0$. However, if we are talking about three fixed strings that represent objects in real life, this does not make sense, and the threshold for considering two strings dependent will heavily depend on the context. For this reason, we will not specify the symbol $\approx$ any further. This is the same arbitrariness as the cutoff rate for statistical dependence tests.
The definitions and lemmas presented so far were strongly motivated by the statistical analog. Now we want to focus on a theorem in  that provides a mathematical relationship between algorithmic and statistical mutual information. First we rephrase Theorem 7.3.1 of , showing that the Kolmogorov complexity of a random string is approximately given by the entropy of the underlying probability distribution:
Theorem 1 (entropy and Kolmogorov complexity)
Let $x = x_1 \cdots x_n$ be a string whose symbols $x_i$ are drawn i.i.d. from a probability distribution $P(X)$ over the finite alphabet $\mathcal{A}$. Slightly overloading notation, set $P(x) := P(x_1) \cdots P(x_n)$. Let $H(\cdot)$ denote the Shannon entropy of a probability distribution. Then there is a constant $c$ such that
$$H(P) \leq \frac{1}{n} E\big[K(x \mid n)\big] \leq H(P) + \frac{|\mathcal{A}| \log n}{n} + \frac{c}{n},$$
where $E[\cdot]$ is shorthand for the expected value with respect to $P$. Hence
$$\lim_{n \to \infty} \frac{1}{n} E\big[K(x \mid n)\big] = H(P).$$
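The content of the theorem, that compressed length per symbol approaches the Shannon entropy, can be checked empirically by letting an off-the-shelf compressor stand in for the shortest program. The experiment below is our own; the source parameters are arbitrary:

```python
import math
import random
import zlib

random.seed(0)
n, p = 200_000, 0.1

# i.i.d. Bernoulli(p) source, packed 8 symbols per byte so the
# compressor sees the raw redundancy of the source
bits = "".join("1" if random.random() < p else "0" for _ in range(n))
packed = int(bits, 2).to_bytes((n + 7) // 8, "big")

rate = 8 * len(zlib.compress(packed, 9)) / n            # compressed bits per symbol
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))    # Shannon entropy of the source
```

The compressor is not an optimal code, so the rate slightly exceeds H, mirroring the logarithmic correction term in the theorem.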
However, for our purpose, we need to see the relation between algorithmic and statistical mutual information. If $x = x_1 \cdots x_n$ and $y = y_1 \cdots y_n$ are such that each pair $(x_i, y_i)$ is drawn i.i.d. from a joint distribution $P(X, Y)$, the theorem already shows that
$$\lim_{n \to \infty} \frac{1}{n} E\big[I(x : y)\big] = I(X ; Y).$$
This can be seen by writing statistical mutual information as
$$I(X ; Y) = H(X) + H(Y) - H(X, Y).$$
The above translations between entropy and algorithmic information refer to a particular setting and to special limits. The focus of this paper is mainly the situation where the above limits are not justified. Before we rephrase Theorem 5.3 in , which provides insights into the general case, we recall that a function $f$ is called recursive if there is a program on a Turing machine that computes $f(x)$ from the input $x$ and halts on all possible inputs.
Theorem 2 (statistical and algorithmic mutual information)
Given string-valued random variables $X, Y$ with a recursive probability mass function $P(x, y)$ over pairs $(x, y)$ of strings, we then have
$$I(X ; Y) - K(P) \stackrel{+}{\leq} E\big[I(x : y)\big] \stackrel{+}{\leq} I(X ; Y) + 2 K(P),$$
where $K(P)$ is the length of the shortest prefix-free program that computes $P(x, y)$ from $(x, y)$.
We want to provide an intuition about various aspects of this theorem.
(1) If $I(X ; Y)$ is large compared to $K(P)$, the expected algorithmic mutual information is dominated by the statistical mutual information.
(2) If $K(P)$ is no longer assumed to be small, statistical dependences do not necessarily ensure that the knowledge of $x$ allows us to compress $y$ further than without knowing $x$. It could be that the description of the statistical dependences requires more memory space than its knowledge would save.
(3) On the other hand, knowledge of $x$ could allow us to compress $y$ even in the case of a product measure on $X$ and $Y$. Consider, for instance, the case that we have the point mass distribution on the pair $(x, x)$ with $K(x)$ large. To describe a more sophisticated example generalizing this case we first have to introduce a family of product probability distributions on $\{0,1\}^n$ that we will need several times throughout the paper.
Definition 5 (Defining product distributions by strings)
Let $P_0, P_1$ be two probability distributions on $\{0,1\}$ and $c$ be a binary string of length $n$. Then
$$P_c := P_{c_1} \otimes P_{c_2} \otimes \cdots \otimes P_{c_n}$$
defines a distribution on $\{0,1\}^n$. We will later also need the following generalization: If $P_{00}, P_{01}, P_{10}, P_{11}$ are four distributions on $\{0,1\}$, then
$$P_{c,d} := P_{c_1 d_1} \otimes P_{c_2 d_2} \otimes \cdots \otimes P_{c_n d_n}$$
defines also a family of product measures on $\{0,1\}^n$ that is labeled by two strings $c$ and $d$.
Denote by $P_c^{\otimes m}$ the $m$-fold copy of $P_c$ from Definition 5. It describes a distribution on $\{0,1\}^{nm}$, assigning the probability $\prod_{i=1}^m P_c(x^{(i)})$ to the concatenation $x = x^{(1)} \cdots x^{(m)}$. If $x$ and $y$ are sampled from $P_c^{\otimes m} \otimes P_c^{\otimes m}$, knowledge of $x$ in the typical case provides knowledge of $c$, provided $m$ is large enough. Then we can compress $y$ better than without knowing $x$ because we do not have to describe $c$ any more. Hence the algorithmic mutual information is large and the statistical mutual information is zero because $P_c^{\otimes m} \otimes P_c^{\otimes m}$ is by construction a product distribution. In other words, algorithmic dependences in a setting with i.i.d. sampling can arise both from statistical dependences and from algorithmic dependences between probability distributions.
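This effect can be simulated with a compressor standing in for K (our own sketch; string length and flip probability are arbitrary): two statistically independent samples generated from the same distribution both reveal the complex label c, and a compression proxy for the algorithmic mutual information picks this up.

```python
import random
import zlib

random.seed(1)
n = 5000
# a "complex" string c labeling the product distribution P_c
c = "".join(random.choice("01") for _ in range(n))

def sample(flip=0.02):
    # one draw from P_c: bit j equals c_j with probability 1 - flip
    return "".join(b if random.random() > flip else str(1 - int(b)) for b in c)

x, y = sample().encode(), sample().encode()

def C(s):
    return len(zlib.compress(s, 9))

# compression proxy for the algorithmic mutual information I(x : y):
# positive because both strings are noisy copies of the same complex c
I_proxy = C(x) + C(y) - C(x + y)
```

The individual bits of x and y are statistically independent by construction, yet the joint description is markedly shorter than the two separate ones, exactly the situation described in the text.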
2.2 Markov condition for algorithmic dependences among individual objects
Now we state the causal Markov condition for individual objects as a postulate that links algorithmic mutual dependences with causal structure:
Postulate 5 (algorithmic causal Markov condition)
Let $x_1, \dots, x_n$ be strings representing descriptions of observations whose causal connections are formalized by a directed acyclic graph $G$ with $x_1, \dots, x_n$ as nodes. Let $pa_j$ be the concatenation of all parents of $x_j$ and $nd_j$ the concatenation of all its non-descendants except $x_j$ itself. Then
$$x_j \perp nd_j \mid pa_j^*.$$
As in Definition 4, the appropriate cut-off rate for rejecting $G$ when $I(x_j : nd_j \mid pa_j^*) > 0$ will not be specified here.
This formulation is a natural interpretation of Postulate 2 in terms of algorithmic independences. The only point that remains to be justified is why we condition on $pa_j^*$ instead of $pa_j$, i.e., why we are given the optimal joint compression of the parent strings. The main reason is that this turns out to yield nice statements on the equivalence of different Markov conditions (in analogy to Lemma 1). Since the differences between conditioning on $pa_j$ and on $pa_j^*$ can only be logarithmic in the string lengths (this is because $x^* \equiv (x, K(x))$ and $K(x)$ can be encoded using about $\log K(x)$ bits, see ), we will not focus on this issue any further.
If we apply Postulate 5 to a trivial graph consisting of two unconnected nodes, we obtain the following statement.
Lemma 5 (causal principle for algorithmic information)
If the mutual information between two objects is significantly greater than zero, they have some kind of common past.
Here, common past between two objects means that one has causally influenced the other or there is a third one influencing both. The statistical version of this principle is part of Reichenbach's principle of the common cause, stating that statistical dependences between random variables $X$ and $Y$ (the original formulation actually considers dependences between events, i.e., binary variables) are always due to at least one of the following three types of causal links: (1) $X$ is a cause of $Y$, or (2) vice versa, or (3) there is a common cause $Z$. For objects, the term “common past” includes all three types of causal relations. For a text, for instance, it reads: similarities of two texts indicate that one author has been influenced by the other or that both have been influenced by a third one.
Before we construct a model of causality that makes it possible to prove the causal Markov condition we want to discuss some examples. If one discovers significant similarities in the genome of two sorts of animals one will try to explain the similarities by relatedness in the sense of evolution. Usually, one would, for instance, assume such a common history if one has identified long substrings that both animals have in common. However, the following scenario shows two observations that superficially look similar, but nevertheless we cannot infer a common past since their algorithmic complexity is low (implying that the algorithmic mutual information is low, too).
Assume two persons are instructed to write down a binary string $x$ of length $n$ and both decide to write the same string. It seems straightforward to assume that the persons have communicated and agreed upon this choice. However, after observing that $x$ has a very simple description (e.g., it is the binary representation of a well-known mathematical constant), one can easily imagine that it was just a coincidence that both wrote the same sequence. In other words, the similarities are no longer significant after observing that they stem from a simple rule. This shows that the length of the pattern that is common to both observations is not a reasonable criterion for whether the similarities are significant.
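Kolmogorov complexity is uncomputable, but the reasoning above can be made concrete by letting a real compressor stand in for $K$ (a common heuristic, not part of the formal theory developed here). The following sketch, with zlib as the stand-in compressor and concatenation as pairing, estimates the algorithmic mutual information of two byte strings as $C(x)+C(y)-C(xy)$; all names are ours:

```python
import os
import zlib

def C(s: bytes) -> int:
    """Stand-in for K(s): length of s after zlib compression (level 9)."""
    return len(zlib.compress(s, 9))

def mutual_information(x: bytes, y: bytes) -> int:
    """Heuristic estimate of I(x:y) = K(x) + K(y) - K(x,y),
    with concatenation standing in for the pairing of strings."""
    return C(x) + C(y) - C(x + y)

a = os.urandom(2000)              # two random, unrelated strings
b = os.urandom(2000)
print(mutual_information(a, b))   # near zero: no common past required
print(mutual_information(a, a))   # large: the similarity demands an explanation
```

Unrelated random strings share no compressible pattern, so the estimate stays near zero; two copies of one complex string yield an estimate close to the full compressed length. As the example in the text shows, a large value is only meaningful if the strings themselves are complex.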
To understand the algorithmic causal Markov condition we will study its implications as well as its justification. In analogy to Lemma 1 we have
Theorem 3 (equivalence of algorithmic Markov conditions)
Given the strings $x_1,\dots,x_n$ and a directed acyclic graph $G$. Then the following conditions are equivalent:
Recursive form: the joint complexity is given by the sum of complexities of each node, given the optimal compression of its parents:
$$K(x_1,\dots,x_n) \stackrel{+}{=} \sum_{j=1}^n K\big(x_j \,\big|\, pa_j^*\big)\,. \qquad (6)$$
Local Markov condition: Every node is independent of its non-descendants, given the optimal compression of its parents:
$$I\big(x_j : nd_j \,\big|\, pa_j^*\big) \stackrel{+}{=} 0\,.$$
Global Markov condition:
$$I\big(S : T \,\big|\, R^*\big) \stackrel{+}{=} 0$$
if $R$ d-separates $S$ and $T$.
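The d-separation used in the global condition is a purely graph-theoretic criterion. A minimal sketch of a d-separation test, using the standard moralized-ancestral-graph criterion (the parent-dictionary encoding of the DAG and all function names are our own convention):

```python
from itertools import combinations

def d_separated(dag, S, T, R):
    """Check whether the node set R d-separates S from T in a DAG.

    dag: dict mapping each node to the set of its parents.
    Criterion: restrict to the ancestral graph of S | T | R, moralize,
    delete R, and test whether S and T are disconnected.
    """
    # 1. Restrict to ancestors of S, T and R.
    relevant, stack = set(), list(S | T | R)
    while stack:
        v = stack.pop()
        if v not in relevant:
            relevant.add(v)
            stack.extend(dag.get(v, set()))
    # 2. Moralize: undirected parent-child edges plus "married" parents.
    adj = {v: set() for v in relevant}
    for v in relevant:
        parents = dag.get(v, set()) & relevant
        for p in parents:
            adj[v].add(p); adj[p].add(v)
        for p, q in combinations(parents, 2):
            adj[p].add(q); adj[q].add(p)
    # 3. Remove R and test reachability from S to T.
    seen, stack = set(), [v for v in S if v not in R]
    while stack:
        v = stack.pop()
        if v in seen or v in R:
            continue
        seen.add(v)
        if v in T:
            return False          # connecting path found: not d-separated
        stack.extend(adj[v] - R)
    return True

# Collider x -> z <- y: x and y are d-separated by {} but not by {z}.
dag = {"x": set(), "y": set(), "z": {"x", "y"}}
print(d_separated(dag, {"x"}, {"y"}, set()))   # True
print(d_separated(dag, {"x"}, {"y"}, {"z"}))   # False
```

The collider example shows the characteristic asymmetry of d-separation: conditioning on a common effect creates a dependence between its causes.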
Below we will therefore no longer distinguish between the different versions and just refer to “the algorithmic Markov condition”. The intuitive meaning of eq. (6) is that the shortest description of all strings is given by describing how to generate every string from its direct causes. A similar kind of “modularity” of descriptions will also occur later in a different context when we consider description complexity of joint probability distributions.
For the proof of Theorem 3 we will need a Lemma that is an analogue of the observation that for any two random variables $X$ and $Y$ the statistical mutual information satisfies $I(f(X);Y) \le I(X;Y)$ for every measurable function $f$. The algorithmic analog is to consider two strings $x, y$ and one string $z$ that is derived from $x$ by a simple rule.
Lemma 6 (monotonicity of algorithmic information)
Let $x, y, z$ be three strings such that $K(z\,|\,x) \stackrel{+}{=} 0$. Then
$$I(z : y) \stackrel{+}{\le} I(x : y)\,.$$
This lemma is a special case of Theorem II.7 in the cited literature. We will also need the following result:
Lemma 7 (monotonicity of conditional information)
Let $x, y, z$ be three strings. Then
$$K(z\,|\,x^*) \stackrel{+}{\ge} K\big(z\,|\,(x,y)^*\big)\,.$$
Note that $K(z\,|\,x^*) \stackrel{+}{\ge} K(z\,|\,x^*,y)$ and $K(z\,|\,x^*) \stackrel{+}{\ge} K(z\,|\,x^*,y^*)$ are obvious, but Lemma 7 is non-trivial because the star operation is jointly applied to $x$ and $y$.
where $xy^*$ is shorthand for $(x,y)^*$. Hence
Then we obtain the statement by subtracting and inverting the sign.
The following lemma will only be used in Subsection 3.3. We state it here because it is closely related to the ones above.
Lemma 8 (generalized data processing inequality)
For any three strings $x, y, z$:
$$I(x : y\,|\,z) \stackrel{+}{=} 0 \;\Longrightarrow\; I(x : y) \stackrel{+}{\le} I(z : y)\,.$$
The name “data processing inequality” is justified because the assumption $I(x : y\,|\,z) \stackrel{+}{=} 0$ may arise from the typical data processing scenario where $y$ is obtained from $x$ via $z$.
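The statistical counterpart of this lemma is the classical data processing inequality: in a Markov chain $X \to Z \to Y$, the Shannon mutual information satisfies $I(X;Y) \le I(Z;Y)$. A small numerical check on a randomly chosen binary chain (illustration only; the variable names and distributions are ours, not from the text):

```python
import random
from itertools import product
from math import log2

def shannon_mi(joint):
    """Shannon mutual information I(A;B) of a joint pmf {(a, b): p}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0) + p
        pb[b] = pb.get(b, 0) + p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

random.seed(0)
norm = lambda row: [q / sum(row) for q in row]
px = [0.3, 0.7]                                       # P(X)
pz_x = [norm([random.random(), random.random()]) for _ in range(2)]  # P(Z|X)
py_z = [norm([random.random(), random.random()]) for _ in range(2)]  # P(Y|Z)

pxy, pzy = {}, {}
for x, z, y in product(range(2), repeat=3):
    p = px[x] * pz_x[x][z] * py_z[z][y]
    pxy[(x, y)] = pxy.get((x, y), 0) + p
    pzy[(z, y)] = pzy.get((z, y), 0) + p

# Data processing: I(X;Y) can never exceed I(Z;Y) in the chain X -> Z -> Y.
print(shannon_mi(pxy) <= shannon_mi(pzy) + 1e-12)  # True
```

The inequality holds for every choice of the kernels, since $Y$ can only learn about $X$ through $Z$.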
where the second inequality holds because the first string can obviously be computed from the pair by an $O(1)$ program. The last equality uses, again, the equivalence of $x^*$ and $(x, K(x))$. Hence we obtain:
Proof of Theorem 3: I $\Rightarrow$ III: Define a probability mass function $p$ on $(\{0,1\}^*)^n$, i.e., the set of $n$-tuples of strings, as follows. Set
$$p(x_j \,|\, pa_j) := \frac{1}{c_j}\, 2^{-K(x_j \,|\, pa_j^*)}\,,$$
where $c_j$ is a normalization factor. In this context, it is important that the symbol $pa_j$ refers to conditioning on the tuple of strings that are parents of $x_j$ (in contrast to conditional complexities, where we can interpret $pa_j$ equally well as conditioning on one string given by the concatenation of all those parent strings). Note that Kraft’s inequality (see Example 3.3.1 in the cited reference) implies
$$\sum_{x_j} 2^{-K(x_j \,|\, pa_j^*)} \le 1$$
for every $pa_j^*$, entailing that the expression is indeed normalizable by some $c_j \le 1$. We have
Then we set
$$p(x_1,\dots,x_n) := \prod_{j=1}^n p(x_j \,|\, pa_j)\,,$$
i.e., $p$ is by construction recursive with respect to $G$. It is easy to see that $K(x_1,\dots,x_n)$ can be computed from $p(x_1,\dots,x_n)$:
Remarkably, we can also compute Kolmogorov complexities of subsets of $\{x_1,\dots,x_n\}$ from the corresponding marginal probabilities. We start by proving
$$K(x_1,\dots,x_{n-1}) \stackrel{+}{=} -\log \sum_{x_n} p(x_1,\dots,x_n)\,. \qquad (11)$$
To this end, we observe
where $\stackrel{\times}{=}$ denotes equality up to a multiplicative constant. The equality follows from eq. (4) and the inequality is obtained by applying Kraft’s inequality to the conditional complexity $K\big(x_n \,|\, (x_1,\dots,x_{n-1})^*\big)$. On the other hand we have
since adding the $\ell$-bit string $x_n$ can certainly be performed by a program of length $\ell + O(\log \ell)$.
Combining this with ineq. (2.2) yields
Using eq. (2.2) we obtain
which proves equation (11). This implies
Since the same argument holds for marginalizing over any other variable we conclude that
for every subset of strings of size $k$ with $k \le n$. This follows by induction over $n - k$.
Now we can use the relation between marginal probabilities and Kolmogorov complexities to show that conditional complexities are also given by the corresponding conditional probabilities, i.e., for any two subsets $S, T \subseteq \{x_1,\dots,x_n\}$ we have
$$K(S \,|\, T^*) \stackrel{+}{=} -\log p(S \,|\, T)\,.$$
Let $S, T, R$ be three subsets of $\{x_1,\dots,x_n\}$ such that $R$ d-separates $S$ and $T$. Then $S$ and $T$ are conditionally independent, given $R$, with respect to $p$, because $p$ satisfies the recursion (9) (see Lemma 1; since $p$ is, by construction, a discrete probability function, the density with respect to a product measure is directly given by the probability function itself). Hence
This proves algorithmic independence of $S$ and $T$, given $R^*$, and thus I $\Rightarrow$ III.
To show that III $\Rightarrow$ II it suffices to recall that $x_j$ and its non-descendants $nd_j$ are d-separated by $pa_j$. Now we show II $\Rightarrow$ I in strong analogy to the proof of the statistical version of this statement: Consider first a terminal node of $G$. Assume, without loss of generality, that it is $x_n$. Hence all strings $x_1,\dots,x_{n-1}$ are non-descendants of $x_n$. We thus have $nd_n \sim (x_1,\dots,x_{n-1})$, where $\sim$ means that both string tuples coincide up to a permutation (on one side) and removing those strings that occur twice (on the other side). Due to eq. (4) we have
Using, again, the equivalence of $x^*$ and $(x, K(x))$ for any string $x$, we have
The second step follows from the equivalence just mentioned. The inequality holds because $pa_n$ can be computed from $(x_1,\dots,x_{n-1})$ via an $O(1)$ program. The last step follows directly from the assumption $I(x_n : nd_n \,|\, pa_n^*) \stackrel{+}{=} 0$. Combining ineq. (15) with Lemma 7 yields
Then statement I follows by induction over $n$.
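The normalization step in the construction of $p$ above rests on Kraft’s inequality: the codeword lengths $\ell_i$ of any prefix-free code satisfy $\sum_i 2^{-\ell_i} \le 1$, and the shortest programs of a prefix Turing machine form such a code. A quick sanity check on an ordinary prefix code (the example code and helper names are ours):

```python
def is_prefix_free(codewords):
    """A code is prefix-free if no codeword is a prefix of another one."""
    return not any(a != b and b.startswith(a)
                   for a in codewords for b in codewords)

def kraft_sum(codewords):
    """Kraft sum  sum_i 2^(-len(w_i))  over the codewords."""
    return sum(2.0 ** -len(w) for w in codewords)

# A prefix-free code for four symbols (Huffman-style):
code = ["0", "10", "110", "111"]
assert is_prefix_free(code)
print(kraft_sum(code))   # 1.0, consistent with Kraft's bound <= 1
```

Because the sum is bounded by one, $2^{-K(x_j\,|\,pa_j^*)}$ can be renormalized into a probability mass function, which is exactly how $p$ was defined.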
To show that the algorithmic Markov condition can be derived from an algorithmic version of the functional model in Postulate 4 we introduce the following model of causal mechanisms.
Postulate 6 (algorithmic model of causality)
Let $G$ be a DAG formalizing the causal structure among the strings $x_1,\dots,x_n$. Then every $x_j$ is computed by a program $q_j$ with length $O(1)$ from its parents $pa_j$ and an additional input $n_j$. We write formally
$$x_j = q_j(pa_j, n_j)\,,$$
meaning that the Turing machine computes $x_j$ from the input $pa_j, n_j$ using the additional program $q_j$ and halts. The inputs $n_1,\dots,n_n$ are jointly independent in the sense
$$K(n_1,\dots,n_n) \stackrel{+}{=} \sum_{j=1}^n K(n_j)\,.$$
By defining new programs that contain $n_j$ we can, equivalently, drop the assumption that the programs are simple and assume that they are jointly independent instead.
We could also have assumed that $x_j$ is a function of all its parents, but our model is more general since the map defined by the input-output behavior of $q_j$ need not be a total function, i.e., the Turing machine simulating the process would not necessarily halt on all inputs $(pa_j, n_j)$.
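A toy analogue of Postulate 6 can be written down directly (ordinary Python functions stand in for the $O(1)$ programs $q_j$, independent random bits for the jointly independent inputs $n_j$; the concrete functions and the chain $x_1 \to x_2 \to x_3$ are our own illustration):

```python
import random

random.seed(1)

def fresh_input() -> str:
    """One of the jointly independent inputs n_j: independent random bits."""
    return "".join(random.choice("01") for _ in range(8))

# Short "programs" q_j, one per node of the DAG x1 -> x2 -> x3.
def q1(n1):
    return n1                     # x1 has no parents
def q2(x1, n2):                   # bitwise XOR of parent string and input
    return "".join("1" if a != b else "0" for a, b in zip(x1, n2))
def q3(x2, n3):
    return x2 + n3                # concatenation of parent and input

n1, n2, n3 = fresh_input(), fresh_input(), fresh_input()
x1 = q1(n1)
x2 = q2(x1, n2)
x3 = q3(x2, n3)
print(x1, x2, x3)
```

Each node is a short program applied to its parents plus a private, independent input, which is precisely the structure the postulate requires.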
The idea to represent causal mechanisms by programs written for some universal Turing machine is basically in the spirit of various interpretations of the Church-Turing thesis. One formulation, given by Deutsch, states that every process taking place in the real world can be simulated by a Turing machine. Here we assume that the way different systems influence each other by physical signals can be simulated by computation processes that exchange messages of bit strings. (Note, however, that sending quantum systems between the nodes could transmit a kind of information (“quantum information”) that cannot be phrased in terms of bits. It is known that this enables completely new communication scenarios, e.g. quantum cryptography. The relevance of quantum information transfer for causal inference is not yet fully understood. It has, for instance, been shown that the violation of Bell’s inequality in quantum theory is also relevant for causal inference. This is because some causal inference rules between classical variables break down when the latent factors are represented by quantum states rather than being classical variables.)
Note that mathematics also allows us to construct strings that are linked to each other in an uncomputable way. For instance, let $x$ be an arbitrary binary string and let $y$ be defined from $x$ by some uncomputable function. However, it is hard to believe that a real causal mechanism could create such kind of relations between objects, given that one believes that real processes can always be simulated by algorithms. These remarks are intended to give sufficient motivation for our model.
Postulate 6 implies the algorithmic causal Markov condition:
Theorem 4 (algorithmic model implies Markov)
Let $x_1,\dots,x_n$ be generated by the model in Postulate 6. Then they satisfy the algorithmic Markov condition with respect to $G$.
Proof (straightforward adaptation of the proof of Lemma 2): Extend $G$ to a causal structure $\tilde G$ with nodes $x_1,\dots,x_n, n_1,\dots,n_n$. To see that the extended set of nodes satisfies the local Markov condition w.r.t. $\tilde G$, observe first that every node $x_j$ is given by its parents via an $O(1)$ program. Second, every $n_j$ is parentless and (unconditionally) independent of all its non-descendants because the latter can be computed from $n_1,\dots,n_{j-1},n_{j+1},\dots,n_n$ via an $O(1)$ program.
By Theorem 3 the extended set of nodes is also globally Markovian w.r.t. $\tilde G$. The parents $pa_j$ d-separate $x_j$ and $nd_j$ in $\tilde G$ (here the parents are still defined with respect to $G$). This implies the local Markov condition for $G$.
It is trivial to construct examples where the causal Markov condition is violated if the programs are mutually dependent (for instance, for the trivial graph with two nodes and no edge, the required independence $I(x_1 : x_2) \stackrel{+}{=} 0$ fails if the programs computing $x_1$ and $x_2$ from an empty input are dependent).
The last sentence of Postulate 6 makes apparent that the mechanisms that generate causal relations are assumed to be independent. This is essential for the general philosophy of this paper. To see that such a mutual independence of mechanisms is a reasonable assumption, we recall that the causal graph is meant to formalize all relevant causal links between the objects. If we observe, for instance, that two nodes are generated from their parents by the same complex rule, we postulate another causal link between the nodes that explains the similarity of mechanisms. (One could argue that this is just the causal principle, implying that similarities of the “machines” generating $x_j$ from $pa_j$ have to be explained by a causal relation, i.e., a common past of the machines. However, in the context of this paper, such an argument would be circular: we have argued that the causal principle is a special case of the Markov condition and derived the latter from the algorithmic model above.) We will therefore consider the independence of mechanisms as a first principle.
2.3 Relative causality
This subsection explains why it is sensible to define algorithmic dependence and the existence or non-existence of causal links relative to some background information. To this end, we consider genetic sequences $x$ and $y$ of two persons that are not relatives. We certainly find high similarity that leads to a significant violation of $I(x : y) \stackrel{+}{=} 0$, due to the fact that both genomes are taken from humans. However, given the background information that $x$ is a human genetic sequence, $x$ can be further compressed. The same applies to $y$. Let $z$ be a code that is particularly adapted to the human genome in the sense that the expected conditional Kolmogorov complexity, given $z$, of a randomly chosen human genome is minimal. Then it would make sense to consider $I(x : y \,|\, z) \gg 0$ as a hint for a relation that goes beyond the fact that both persons are human. In contrast, for the unconditional mutual information we expect $I(x : y) \gg 0$, and we will therefore infer some causal relation (here: common ancestors in the evolution) using the causal principle in Lemma 5.
The common properties between different and unrelated individuals of the same species can be screened off by providing the relevant background information. Given this causal background, we can detect further similarities in the genes by the conditional algorithmic mutual information and take them as an indicator for an additional causal relation that goes beyond the common evolutionary background. For this reason, every discussion on whether there exists a causal link between two objects (or individuals) requires a specification of the background information. In this sense, causality is a relative concept.
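As with the unconditional case, the conditional algorithmic mutual information can be approximated heuristically with a real compressor. The sketch below (zlib as a stand-in for $K$, concatenation as pairing, synthetic byte strings as toy “genomes”; none of this is the paper’s formal machinery) shows how a shared background $z$ produces large unconditional similarity that vanishes once $z$ is given:

```python
import os
import zlib

def C(s: bytes) -> int:
    """Stand-in for K(s): zlib-compressed length."""
    return len(zlib.compress(s, 9))

def mi(x: bytes, y: bytes) -> int:
    """Heuristic estimate of I(x:y)."""
    return C(x) + C(y) - C(x + y)

def cond_mi(x: bytes, y: bytes, z: bytes) -> int:
    """Heuristic estimate of I(x:y|z) = K(x,z) + K(y,z) - K(x,y,z) - K(z)."""
    return C(x + z) + C(y + z) - C(x + y + z) - C(z)

z = os.urandom(1500)        # shared background ("the species")
x = z + os.urandom(100)     # individual 1: background plus private part
y = z + os.urandom(100)     # individual 2: same background, other private part

print(mi(x, y))             # large: dominated by the common background
print(cond_mi(x, y, z))     # near zero: nothing is left once z is given
```

A clearly positive conditional value would instead indicate a relation beyond the common background, in line with the discussion above.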
One may ask whether such a relativity of causality is also true for the statistical version of the causality principle, i.e., Reichenbach’s principle of the common cause. In the statistical version of the link between causality and dependence, the relevance of the background information is less obvious because it is evident that statistical methods are always applied to a given statistical ensemble. If we, for instance, ask whether there is a causal relation between the height and the income of a person without specifying whether we refer to people of a certain age, we observe the same relativity with respect to additionally specifying the “background information”, which is here given by referring to a specific ensemble.
In the following sections we will assume that the relevant background information has been specified and it has been clarified how to translate the relevant aspects of a real object into a binary string such that we can identify every object with its binary description.
3 Novel statistical inference rules from the algorithmic Markov condition
3.1 Algorithmic independence of Markov kernels
To describe the implications of the algorithmic Markov condition for statistical causal inference, we consider random variables $X$ and $Y$ where $X$ causally influences $Y$. We can think of $X$ as describing a source $S$ that generates $x$-values and sends them to a “machine” $M$ that generates $y$-values according to the conditional distribution $P(Y\,|\,X)$. Assume we observe that
$$I\big(P(X) : P(Y\,|\,X)\big) \gg 0\,.$$
Then we conclude that there must be a causal link between $S$ and $M$ that goes beyond transferring $x$-values from $S$ to $M$. This is because $P(X)$ and $P(Y\,|\,X)$ are inherent properties of $S$ and $M$, respectively, which do not depend on the current value $x$ that has been sent. Hence there must be a causal link that explains the similarities in the design of $S$ and $M$. Here we have assumed that we know that $X \to Y$ is the correct causal structure on the statistical level. Then we have to accept that a causal link on the level of the machine design is present.
If the causal structure on the statistical level is unknown, we would prefer causal hypotheses that explain the data without needing a causal connection on this higher level, provided that they satisfy the statistical Markov condition. Given this principle, we will thus prefer causal graphs for which the Markov kernels become algorithmically independent. This is equivalent to saying that the shortest description of the joint distribution is given by concatenating the descriptions of the Markov kernels, a postulate that has already been formulated by Lemeire and Dirkx:
Postulate 7 (algorithmic independence of statistical properties)
A causal hypothesis (i.e., a DAG) is only acceptable if the shortest description of the joint density $P$ is given by a concatenation of the shortest descriptions of the Markov kernels, i.e.,
$$K\big(P(X_1,\dots,X_n)\big) \stackrel{+}{=} \sum_j K\big(P(X_j \,|\, PA_j)\big)\,.$$
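As a toy illustration of this preference (a heuristic of our own construction, with compressed table size standing in for the description length of a kernel): for a joint distribution generated by a non-uniform $P(X)$ and a shift-invariant mechanism $P(Y\,|\,X)$, the forward factorization tends to have a shorter description than the backward one, because $P(X\,|\,Y)$ mixes properties of source and mechanism:

```python
import zlib
from math import exp

def table_complexity(rows) -> int:
    """Description length of a kernel, approximated by compressing
    its probability table rounded to three decimals."""
    text = "\n".join(" ".join(f"{p:.3f}" for p in row) for row in rows)
    return len(zlib.compress(text.encode(), 9))

m = 50
prior = [i + 1.0 for i in range(m)]
prior = [p / sum(prior) for p in prior]            # non-uniform P(X)

def shape(d: int) -> float:                        # one simple noise profile
    d %= m
    return exp(-min(d, m - d))

# Forward kernel P(Y|X): the same profile, shifted by f(x) = 3x + 7 (mod m).
fwd = []
for i in range(m):
    row = [shape(j - (3 * i + 7)) for j in range(m)]
    s = sum(row)
    fwd.append([q / s for q in row])

# Backward kernel P(X|Y) via Bayes' rule inherits the prior's irregularity.
py = [sum(prior[i] * fwd[i][j] for i in range(m)) for j in range(m)]
bwd = [[prior[i] * fwd[i][j] / py[j] for i in range(m)] for j in range(m)]

print(table_complexity(fwd) < table_complexity(bwd))
```

The forward table consists of fifty rotations of one row and compresses well; the backward table contains many distinct entries. Under Postulate 7, this asymmetry favors the hypothesis $X \to Y$ for such a distribution.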