Mining Combined Causes in Large Data Sets


Saisai Ma, Jiuyong Li, Lin Liu, Thuc Duy Le
School of Information Technology and Mathematical Sciences, University of South Australia, Mawson Lakes, SA 5095, Australia

In recent years, many methods have been developed for detecting causal relationships in observational data. Some of them have the potential to tackle large data sets. However, these methods fail to discover a combined cause, i.e. a multi-factor cause consisting of two or more component variables which individually are not causes. A straightforward approach to uncovering a combined cause is to include both individual and combined variables in the causal discovery using existing methods, but this scheme is computationally infeasible due to the huge number of combined variables. In this paper, we propose a novel approach to address this practical causal discovery problem, i.e. mining combined causes in large data sets. The experiments with both synthetic and real world data sets show that the proposed method can obtain high-quality causal discoveries with a high computational efficiency.

Keywords: Causal discovery, Combined causes, Local causal discovery, HITON-PC, Multi-level HITON-PC
journal: Knowledge-Based Systems

1 Introduction

Causal relationships can reveal the causes of a phenomenon and predict the potential consequences of an action or an event Spirtes2010 (). Therefore, they are more useful and reliable than statistical associations freedman1997association (); freedman1999association (); shaughnessy1985research ().

In recent decades, causal inference has attracted great attention in computer science. Causal Bayesian networks (CBNs) pearl2000causality (); neapolitan2004learning (); heckerman1995bayesian () have emerged as a main framework for representing causal relationships and uncovering them in observational data. Because CBNs cannot cope with high-dimensional data, efficient methods have been proposed for local causal discovery around a target variable aliferis2010local (); tsamardinos2003time (); pellet2008using ().

One limitation of current causal discovery methods is that they only find causes consisting of a single variable. However, single causal factors are often insufficient for reasoning about the causes of particular effects novick2004assessing (). For example, a burning cigarette stub and inflammable material nearby can start a fire, but neither of them alone may cause a fire. In gene regulation, it has been found that the expression level of a gene may be co-regulated by a group of other genes, which could lead to a disease wagner2007road (); d2000genetic (). Furthermore, a main objective of data mining is to find previously unobserved patterns and relationships in data. Causal relationships between single variables are relatively easy for domain experts to identify, but combined causes are much harder to detect mackie1965causes (). Hence data mining methods for discovering combined causes are in demand. In this paper, we address the problem of finding combined causes in large data sets.

Figure 1: Multiple individual causes vs. the combined cause, where solid arrows denote causal relationships and the dashed lines represent the interaction between the two variables.

The combined causes considered in this paper are different from the generally discussed multiple causes. For example, in Figure 1, sprinkler causes wet ground, and so does rain. Sprinkler and rain together cause wetter ground. In this work, however, we are concerned with the situation where multiple variables, each alone insufficient to cause an effect, produce the effect in combination. As shown in Figure 1, there is no causal link from burning cigarette stub or inflammable material to a fire, but the combination of these two factors leads to a fire.

The combined causes studied in this paper cannot be discovered with CBN learning, as in a CBN an edge is drawn from X to Z only when X is a cause of Z. If X and Y each alone is not a cause of Z, no edge is drawn from X or Y to Z, and it is thus impossible to examine the combined causal effect of X and Y on Z. This limitation of CBNs was discussed in spirtes2000causation () (page 48) as follows:

“Suppose drugs A and B both reduce symptoms C, but the effect of A without B is quite trivial, while the effect of B alone is not. The directed graph representations we have considered in this chapter offer no means to represent this interaction and to distinguish it from other circumstances in which A and B alone each have an effect on C.”

To identify combined causes in data, one critical challenge is the computational complexity with large data sets, as the number of combined variables is exponential in the number of individual variables.

In this paper, we propose a multi-level approach to discovering the combined causes of a target variable. Our method is designed based on an efficient local causal discovery method, HITON-PC aliferis2010local (), which was developed on the same theoretical ground as the well-known PC algorithm spirtes2000causation () for CBN learning.

In the rest of the paper, the related work and the contributions of this paper are described in Section 2. Section 3 introduces the background, including the notation and the HITON-PC algorithm. Section 4 presents the proposed method. The experiments and results are described in Section 5. Finally, Section 6 concludes the paper.

2 Related Work and Contributions

As discussed in the previous section, causal Bayesian networks (CBNs), as a mainstream causal discovery approach, have been studied extensively. Many algorithms for CBN learning and inference pearl2000causality (); spirtes2000causation (); hill2011bayesian (); mani2010bayesian () have been developed. Researchers have also tried to incorporate other models and prior knowledge into the CBN framework. For example, prior knowledge from domain experts has been combined with observational data to learn Bayesian networks heckerman1995learning (). Messaoud et al. messaoud2015semcado () proposed a framework to learn CBNs by incorporating semantic background knowledge provided by domain ontologies. To address the uncertainties resulting from incomplete and partial information, Kabir et al. kabir2015integrating () combined a Bayesian belief network with a data fusion model to predict the failure rate of water mains. However, these methods are designed to analyse individual causes instead of combined causes. Moreover, it may be difficult for domain experts to elicit a CBN structure with combined causes from domain knowledge alone.

Another approach segal2002learning (); Azizi2014learning () was proposed to find the relationship structures between groups of variables. Segal et al. segal2002learning () defined the module network, in which each node (module) is formed by a set of variables having the same statistical behaviour. They also proposed an algorithm to learn the module assignment and the module network structure. Many algorithms and applications segal2003module (); Azizi2014learning () have been developed to extend the module network model. Yet et al. yet2014compatible () proposed a method for abstracting the BN structure, where nodes with similar behaviour are also merged to simplify the BN structure. The modules or nodes of a module network are not the same as the combined causes defined in this paper, since the components of a combined cause do not necessarily have similar behaviour.

The sufficient-component cause model rothman1976causes (); rothman2005causation (), often referred to by epidemiologists, addresses the combined causes discussed in this paper. According to the model, a disease is an inevitable consequence of a minimal set of factors. However, no computational methods have been developed for finding sufficient-component causes in observational data. Although the model and interactive causes have attracted statisticians' attention vanderweele2008empirical (); vanderweele2007identification (); vanderweele2012general (), the work remains at the level of theoretical discussion.

Li et al. li2013mining () used the idea of retrospective cohort studies euser2009cohort (), and Jin et al. jin2012discovery () applied partial association tests birch1964detection (), to discover causal rules from association rules. While this work initiated the concept of combined causes, its focus was on integrating association rule mining with observational studies or traditional statistical analysis for causal discovery.

In this paper, a novel method is proposed to discover the combined causes of the given target variable, based on the causal inference framework established for CBN learning. The contributions of this paper are summarised as follows:

  1. We study the problem of mining combined causes, which are different from multiple individual causes, a problem that most existing methods have not tackled.

  2. We develop a new method for discovering combined (and single) causes, and demonstrate its performance and efficiency by experiments with synthetic and real world data.

3 Background

In this section, we firstly describe the notation to be used in the paper (Section 3.1). In Section 3.2, we introduce the HITON-PC algorithm, which is the basis of our algorithms, and then discuss its time complexity.

3.1 Notation

We use upper case letters, e.g. X and Y, to represent random variables, and multiple upper case letters, e.g. XY or XYZ, to denote the combined variable consisting of the corresponding individual variables. Bold-faced upper case letters, e.g. S and V, represent sets of variables. In particular, we denote the set of predictor variables and the target variable with V and Z respectively. The conditional independence between X and Z given S is represented as X ⊥ Z | S.

This paper deals with binary variables only, i.e. each variable has two possible values, 1 or 0. The value of a combined (binary) variable is 1 if and only if each of its component (binary) variables is equal to 1 (e.g. XY = 1 if and only if X = 1 and Y = 1). A multi-valued variable can be converted to a number of binary variables, e.g. the nominal variable Education can be converted to 3 binary variables, High School, Undergraduate and Postgraduate. With binary variables, we can easily create and examine a combined cause involving different values of multiple variables. For example, given the two nominal variables Gender and Education, after converting them to binary variables, we can combine them to obtain variables such as (Male, High School) and (Female, Postgraduate).
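As a concrete illustration, the following Python sketch (with hypothetical records and helper names of our own) one-hot encodes nominal variables and builds a combined binary variable whose value is 1 exactly when all of its components are 1:

```python
def one_hot(values, categories):
    """Convert a nominal column into one binary column per category."""
    return {c: [1 if v == c else 0 for v in values] for c in categories}

def combine(*columns):
    """Value of a combined variable: 1 iff every component variable is 1."""
    return [int(all(row)) for row in zip(*columns)]

# Hypothetical records for the two nominal variables in the text.
education = ["High School", "Postgraduate", "High School", "High School"]
gender = ["Male", "Female", "Female", "Male"]

edu = one_hot(education, ["High School", "Undergraduate", "Postgraduate"])
gen = one_hot(gender, ["Male", "Female"])

# The combined variable (Male, High School).
male_hs = combine(gen["Male"], edu["High School"])
print(male_hs)  # -> [1, 0, 0, 1]
```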

3.2 HITON-PC

Given its high efficiency and origin in the sound CBN learning theory, HITON-PC aliferis2010local () is a commonly used method for discovering local causal structures around a fixed target variable. The semi-interleaved HITON-PC is used as the basis for our proposed method. Under the causal assumptions spirtes2000causation (), HITON-PC uses conditional independence (CI) tests to find the causal relationships around a target variable Z, i.e. the set of parents (P) and children (C) of Z.

Referring to Algorithm 1, HITON-PC takes a data set of the predictors V and the target Z to produce TPC, the set of parents and children of Z. The algorithm uses two data structures, a priority queue OPEN and a list TPC. Initially OPEN contains all predictors associated with Z and TPC is empty (see lines 1 and 2 of Algorithm 1). It then iterates between the two phases, inclusion and elimination, until OPEN becomes empty.

Input: A data set D for predictor variable set V and target Z
Output: TPC, the set of parents and children of Z
 1: Let OPEN contain all variables associated with Z, sorted in descending order of strength of associations
 2: TPC ← ∅
 3: repeat
 4:    Move the first variable W from OPEN to the end of TPC    // Phase I: Inclusion
 5:    if OPEN ≠ ∅ then    // Phase II: Elimination, forward stage
 6:       W ← the variable last added to TPC
 7:       if ∃ S ⊆ TPC \ {W}, s.t. W ⊥ Z | S then
 8:          Remove W from TPC
 9:       end if
10:    else    // backward stage
11:       for each W ∈ TPC do
12:          if ∃ S ⊆ TPC \ {W}, s.t. W ⊥ Z | S then
13:             Remove W from TPC
14:          end if
15:       end for
16:    end if
17: until OPEN = ∅
ALGORITHM 1 The Semi-interleaved HITON-PC Algorithm aliferis2010local ()

In the inclusion phase, the variable having the strongest association with Z is removed from OPEN and added to TPC (line 4). In the elimination phase, if OPEN is not empty, the forward stage (lines 5-9) is executed. The variable newly added to TPC, W, is eliminated from TPC if it is independent of Z given a subset of the current TPC; otherwise it is kept (still tentatively) in TPC. If OPEN is empty, the backward stage (lines 10-16) is activated: each variable W in the current TPC is tested, and if a subset of TPC \ {W} is found such that W is independent of Z given the subset, W is removed from TPC.

HITON-PC uses several heuristics to improve efficiency. At the forward stage, CI tests are conducted only on the newly added variable, instead of performing a full variable elimination. To compensate for possible false discoveries caused by this heuristic, HITON-PC uses the backward stage to "tighten up" TPC by testing the conditional independence of each variable with Z given the other variables in TPC. Moreover, the use of the priority queue OPEN allows variables having stronger associations with Z to be included and evaluated first. As these variables are more likely to be the true parents or children of the target, once they are in TPC, it is expected that, given these variables, other variables that should not be in TPC are quickly identified and removed, so that TPC will not be over-expanded in the forward stage, thus reducing the number of CI tests. Additionally, in practice, HITON-PC restricts the maximum order of CI tests to a given threshold max-k, i.e. the maximum size of the conditioning set (S, see Algorithm 1) is max-k.
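To make the control flow concrete, here is a runnable sketch of semi-interleaved HITON-PC. It is a simplification, not the authors' implementation: conditional independence is approximated by thresholding empirical conditional mutual information rather than using a statistical test such as G², and the data set is a small hypothetical example of our own.

```python
import random
from itertools import combinations
from math import log2

def cmi(data, x, z, cond):
    """Empirical conditional mutual information I(x; z | cond) in bits."""
    n = len(data)
    groups = {}
    for row in data:
        groups.setdefault(tuple(row[c] for c in cond), []).append((row[x], row[z]))
    total = 0.0
    for rows in groups.values():
        m = len(rows)
        pxz, px, pz = {}, {}, {}
        for a, b in rows:
            pxz[(a, b)] = pxz.get((a, b), 0) + 1
            px[a] = px.get(a, 0) + 1
            pz[b] = pz.get(b, 0) + 1
        for (a, b), c in pxz.items():
            total += (c / n) * log2(c * m / (px[a] * pz[b]))
    return total

def hiton_pc(data, predictors, target, alpha=0.01, max_k=3):
    """Semi-interleaved HITON-PC sketch; CI is approximated by a CMI threshold."""
    # OPEN: variables associated with the target, strongest association first.
    open_q = sorted([v for v in predictors if cmi(data, v, target, ()) > alpha],
                    key=lambda v: -cmi(data, v, target, ()))
    tpc = []

    def independent(v):
        # Is v independent of the target given some subset of TPC \ {v}?
        others = [u for u in tpc if u != v]
        for k in range(min(max_k, len(others)) + 1):
            for s in combinations(others, k):
                if cmi(data, v, target, s) <= alpha:
                    return True
        return False

    while open_q:
        tpc.append(open_q.pop(0))      # Phase I: inclusion
        if open_q:                     # Phase II, forward stage: test the newcomer
            if independent(tpc[-1]):
                tpc.pop()
        else:                          # backward stage: re-test every member
            for v in list(tpc):
                if independent(v):
                    tpc.remove(v)
    return tpc

# Hypothetical example: A -> Z -> B, with C independent of everything.
random.seed(0)
data = []
for _ in range(2000):
    a, c = random.random() < 0.5, random.random() < 0.5
    z = a if random.random() < 0.9 else not a
    b = z if random.random() < 0.9 else not z
    data.append({"A": int(a), "B": int(b), "C": int(c), "Z": int(z)})

print(sorted(hiton_pc(data, ["A", "B", "C"], "Z")))  # -> ['A', 'B']
```

The parent A and the child B survive both stages, while C never enters OPEN because its association with Z falls below the threshold.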

The time complexity of HITON-PC mainly depends on the number of CI tests. Each variable needs to be tested on all subsets of TPC, so the complexity regarding each variable is O(2^|TPC|), and the total time complexity is O(|V| 2^|TPC|). When max-k is specified, the complexity becomes polynomial, i.e. O(|V| |TPC|^max-k). Extensive experiments have shown that HITON-PC is able to cope with thousands of variables with a low rate of false discoveries aliferis2010local ().

4 Uncovering Combined Causes

Having introduced the background knowledge, in this section, we present the proposed method for discovering combined causes. We firstly introduce the naïve approach (Section 4.1), which is a straightforward way to detect the combined causes. Then we give the formal definition of combined causes and present the basic idea of the proposed method (Section 4.2). Finally, we describe the proposed method (including two algorithms, MH-PC-F and MH-PC-B) for discovering combined causes (Section 4.3) and discuss their possible false discoveries (Section 4.4).

4.1 The Naïve Approach

A naïve scheme for finding the combined causes can be as follows. Firstly, we generate a new variable set with combined variables from the original variable set. For example, for V = {A, B, C}, the new variable set (with 2^3 − 1 = 7 variables) is {A, B, C, AB, AC, BC, ABC}. Then we run a local causal discovery algorithm, such as HITON-PC, to find both single and combined causes using the data set created for the new variable set.

The naïve approach, however, is not feasible because the number of combined variables is exponential in the number of individual variables. In the following, we discuss how our proposed method tackles this problem.
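The blow-up is easy to see: for n individual variables, the naïve variable set contains 2^n − n − 1 combined variables on top of the n single ones. A short illustrative sketch:

```python
from itertools import combinations

def naive_combined(variables):
    """All combined variables with two or more components (the naive candidate set)."""
    out = []
    for k in range(2, len(variables) + 1):
        out.extend(combinations(variables, k))
    return out

for n in (3, 10, 20):
    vs = [f"X{i}" for i in range(n)]
    print(n, len(naive_combined(vs)))  # len == 2**n - n - 1
```

Already at n = 20 there are over a million combined variables to test.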

4.2 Basic Idea of the Proposed Method

In fact, it is not necessary to consider all combined variables. In particular, we are not interested in a combined variable, e.g. XY, of which a component, X or Y, is a cause already, as it is reasonable to assume that the causal relationship between XY and the target Z is due to the relationship between X (or Y) and Z. To improve efficiency, we can exclude such combined variables when finding combined causes of Z, and only consider the combined variables whose components are not causes of Z.

Furthermore, as discussed in Section 1, a combined cause consisting of non-cause components is harder for domain experts to observe, and it cannot be represented or discovered using other approaches such as CBNs. Hence mining such combined causes is useful in practice.

The definition of the combined causes studied in this paper is given below.

Definition 1 (Combined Cause)

Let W = X1X2…Xk be a combination of multiple variables. W is a combined cause of Z if W is a cause of Z and none of its component variables X1, …, Xk is a cause of Z.

Based on Definition 1, we can design an algorithm to find such combined causes (and single causes) in a level by level manner. We firstly, at the first level (L = 1), obtain TPC1, the set of parents and children of Z each consisting of an individual variable. Then at the L-th level (L ≥ 2), combined variables are generated based on nTPC, the set of non-cause variables at all the lower levels, i.e. nTPC = nPC1 ∪ … ∪ nPC_{L−1}, where nPC_i (1 ≤ i ≤ L−1) is the set of non-cause variables each containing i individual variables. For example, a level-L combined variable can be generated by combining an i-th level (i < L) non-cause variable and a level-(L−i) non-cause variable.
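The level-wise generation can be sketched as follows (function and variable names are ours, not from the paper's implementation): level-L candidates are formed only from lower-level non-causes, so no component is already a cause of the target.

```python
def level_candidates(non_causes_by_level, level):
    """Combine lower-level non-causes into candidates with `level` components."""
    seen = set()
    for lo in range(1, level // 2 + 1):
        hi = level - lo
        for a in non_causes_by_level.get(lo, []):
            for b in non_causes_by_level.get(hi, []):
                comp = tuple(sorted(set(a) | set(b)))
                if len(comp) == level:  # skip overlapping components
                    seen.add(comp)
    return sorted(seen)

# Hypothetical level-1 non-causes A, B, C; generate the level-2 candidates.
npc = {1: [("A",), ("B",), ("C",)]}
print(level_candidates(npc, 2))  # -> [('A', 'B'), ('A', 'C'), ('B', 'C')]
```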

For non-cause variables X and Y, if the combination XY is a combined cause, then it is reasonable to assume that Y positively contributes to the relationship between X and the target Z, and vice versa. For example, combustible dust suspended in the air (even at a high concentration) has no causal effect on a dust explosion, but an ignition source will improve its relationship with the explosion significantly, and thus the combination of the two factors can result in a dust explosion. This observation leads to the following definition.

Definition 2 (Redundant Combined Variable)

For X, Y ∈ nTPC, the combination XY is a redundant combined variable if Y does not change the relationship between X and the target Z, or X does not change the relationship between Y and Z, where X and Y are not individual causes of the target Z.

By excluding redundant combined variables, we can further improve the efficiency of the causal discovery.
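As an illustration of this pruning idea (not the exact statistical test used in the paper; we use a fixed tolerance on conditional proportions as a stand-in for a significance test), XY is kept only if each component changes the other component's relationship with the target:

```python
def redundant(data, x, y, target, tol=0.05):
    """XY is redundant if one component barely changes P(target=1 | other=1)."""
    def p1(cond):
        rows = [r for r in data if all(r[v] == 1 for v in cond)]
        return sum(r[target] for r in rows) / len(rows) if rows else 0.0
    p_x, p_y, p_xy = p1([x]), p1([y]), p1([x, y])
    return abs(p_xy - p_x) < tol or abs(p_xy - p_y) < tol

# Hypothetical example: Z = X AND Y (a combined cause), while W duplicates X.
data = [{"X": x, "Y": y, "W": x, "Z": x & y} for x in (0, 1) for y in (0, 1)]
print(redundant(data, "X", "Y", "Z"))  # False: Y changes X's relationship with Z
print(redundant(data, "X", "W", "Z"))  # True: W adds nothing beyond X
```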

Based on the above discussion and HITON-PC, we propose the Multi-level HITON-PC (MH-PC) method for finding both single and combined causes of a given target. In the following section, we present the details of the method.

4.3 Multi-level HITON-PC

Referring to Algorithm 2, at the first level, MH-PC invokes HITON-PC to find the single causes of Z, TPC1 (line 1), and initialises TPC as TPC1 (line 2). The single non-cause variables are put in nPC1, and nTPC (the non-cause variables identified at all the lower levels) is initially empty (line 2).

Input: A data set D for predictor variable set V and target Z; kmax, the maximum level of causal discovery
Output: TPC, the set of (single and combined) parents and children of Z
 1: TPC1 ← HITON-PC(D, V, Z)    // Call Algorithm 1
 2: TPC ← TPC1; nPC1 ← V \ TPC1; nTPC ← ∅
 3: for L ← 2 to kmax do
 4:    nTPC ← nTPC ∪ nPC_{L−1}
 5:    Generate the level-L combined variable set C_L based on nTPC
 6:    C_L ← redundancyTest(C_L, Z)
 7:    Generate the corresponding data set D_L for C_L
 8:    Let OPEN contain all variables (in C_L) associated with Z, sorted in descending order of strength of associations
 9:    repeat
10:       Move the first variable W from OPEN, add it to the end of TPC_L and TPC    // Phase I: Inclusion
11:       if OPEN ≠ ∅ then    // Phase II: Elimination, forward stage
12:          W ← the variable last added to TPC
13:          if ∃ S ⊆ TPC \ {W}, s.t. W ⊥ Z | S then
14:             Remove W from TPC_L and TPC
15:          end if
16:       else    // backward stage
17:          for each W ∈ TPC_L do
18:             if ∃ S ⊆ TPC \ {W}, s.t. W ⊥ Z | S then
19:                Remove W from TPC_L and TPC
20:             end if
21:          end for
22:       end if
23:    until OPEN = ∅
24:    nPC_L ← C_L \ TPC_L
25: end for
26: Output TPC
ALGORITHM 2 The Multi-level HITON-PC (MH-PC-F) Algorithm

At level L (L ≥ 2), MH-PC firstly updates nTPC so that it contains the non-causes from levels 1 to L−1 (line 4). Next the algorithm generates combined variables containing L individual variables by combining the variables in nTPC (line 5). Redundant combined variables are then removed (line 6) and the new data set for the level-L combined variables (C_L) is created too (line 7). From lines 8 to 23, we identify level-L combined causes from C_L. Initially OPEN contains all the combined variables in C_L which are associated with Z, and the variables are sorted in descending order of the strength of associations (line 8). Similar to HITON-PC, the inclusion and elimination phases are carried out iteratively till OPEN is empty. At the end of the iteration (line 23), TPC_L includes the discovered combined causes consisting of L variables, and TPC includes all the causes from level 1 to level L. Note that in line 17, to improve the efficiency further, the backward stage only checks the level-L candidates in TPC_L, instead of all candidates in TPC, as all the lower level parents and children have been confirmed at previous levels.

In line 24, the set of level-L non-causes, nPC_L, is updated before completing the work at level L. Finally, MH-PC outputs TPC when L reaches kmax (the maximum level of causal discovery).

In the forward stage (i.e. the case when OPEN ≠ ∅), as with HITON-PC, MH-PC searches the current TPC for a subset S to test whether the combined variable W is independent of Z given S (line 13). Since combined variables in TPC are combinations of individual variables, MH-PC may conduct some redundant conditional independence tests. For example, the CI test between W and Z given a combined variable XY (i.e. W ⊥ Z | {XY}) may be unnecessary if the test given the two individual variables X and Y (i.e. W ⊥ Z | {X, Y}) has been done. To address this issue, we propose a variant of MH-PC, called MH-PC-B (B version of MH-PC). To avoid confusion, in the rest of the paper, we call the MH-PC algorithm shown in Algorithm 2 MH-PC-F (Full version of MH-PC). In the forward stage, when conducting the level-L test with MH-PC-B, we do not include the level-L variables in TPC into conditioning sets, i.e. we replace line 13 in Algorithm 2 with the following statement:

if ∃ S ⊆ TPC \ TPC_L, s.t. W ⊥ Z | S then

As MH-PC-B conducts the tests conditioning only on the lower level variables, it can have higher efficiency than MH-PC-F, but at the same time it may produce some false positives. However, since in the backward stage (lines 17-21 of Algorithm 2) we do another check of the candidate causes remaining in TPC_L, it is expected that the false discoveries are removed. As we will see in the next section, the experiments show that MH-PC-B is more efficient than MH-PC-F, while producing the same results as MH-PC-F with the data sets used.

4.4 False Discoveries of Multi-level HITON-PC

Since MH-PC-F (and MH-PC-B) follows the idea of HITON-PC, we firstly analyse the quality of HITON-PC in terms of false discoveries. In HITON-PC, possible false decisions mainly come from two sources spirtes2000causation (); li2009controlling (): the use of max-k, the maximum size of conditioning sets (S, see Algorithms 1 and 2) used for conditional independence tests, and incorrect results of statistical tests. Using a smaller max-k reduces the number of conditional independence tests and thus improves efficiency, but results in false positive discoveries. Fortunately, when max-k = 3 or 4, the false positive rate is not high, as shown in aliferis2010local (). When there are not enough samples, the statistical tests may produce incorrect results.

In the following, we discuss the false discoveries coming from the interactions between variables. As mentioned above, the proposed method only focuses on non-redundant combined variables (Definition 2). While this strategy is used for reducing complexity, it may lead to false discoveries. However, we argue that our algorithms can still obtain high-quality causal findings. For a non-cause variable X, if another non-cause variable Y cannot improve the relationship between X and the target Z, then in most cases the combination XY will not have a stronger relationship with Z than X alone, so such combined variables are unlikely to be combined causes. The experiment results in Section 5 have confirmed this intuition.

5 Experiments

We implemented MH-PC-F and MH-PC-B based on the semi-interleaved HITON-PC implementation in the R package bnlearn scutari2009learning (). In the experiments, the maximum level of combination (i.e. kmax in Algorithm 2) is restricted to 2, i.e. a combined cause consists of at most two component variables. We set the p-value threshold to 0.01 for pruning redundant combined variables and 0.05 for testing causal relationships, for both synthetic and real world data sets.

5.1 Data Sets

10 synthetic and 7 real world data sets were used in the experiments; a summary of the data sets is shown in Table 1. The variables in all data sets are binary, i.e. each variable has two possible values, 1 or 0. The class variable in each data set is specified as the target variable. The numbers of variables shown in the table refer to the numbers of single predictor variables. The distribution of each data set indicates the percentages of the two values of the class variable. For synthetic data sets, the ground truth (i.e. the number of true causes) is shown in the table, where the first value is the number of single causes, each consisting of one predictor variable, and the second value is the number of combined causes, each consisting of two predictor variables.

Name #Records #Variables Distributions Ground Truth
Syn-7 1000 6 39.1% & 60.9% 2, 1
Syn-10 1000 9 42.2% & 57.8% 3, 2
Syn-12 2000 11 72.1% & 27.9% 4, 3
Syn-16 2000 15 45.2% & 54.8% 4, 3
Syn-20 2000 19 55.6% & 44.4% 4, 4
Syn-50 5000 49 30.8% & 69.2% 5, 5
Syn-60 5000 59 30.8% & 69.2% 5, 5
Syn-80 5000 79 30.5% & 69.5% 5, 5
Syn-100 5000 99 30.0% & 70.0% 5, 5
Syn-120 5000 119 30.4% & 69.6% 5, 5
CMC 1473 22 57.3% & 42.7% -
German 1000 60 30.0% & 70.0% -
House-votes-84 435 16 61.4% & 38.6% -
Hypothyroid 3163 51 4.8% & 95.2% -
Kr-vs-kp 3196 74 47.8% & 52.2% -
Sick 2800 58 6.1% & 93.9% -
Census 299285 495 6.2% & 93.8% -
Table 1: A Brief Description of Data Sets

The first five synthetic data sets (with small numbers of variables) in Table 1 were generated with two main steps: (1) generating a data set based on a BN (Bayesian network) created randomly by the TETRAD software tool, and (2) generating the final synthetic data set by "splitting" some causes of the target into two new variables. Specifically, we firstly created a random BN using the TETRAD software, whose structure and conditional probability tables were both generated randomly. In the obtained BN, one of the variables was designated as the target and the others as predictor variables. Records of all variables were generated based on the conditional probability tables, using the built-in Bayes Instantiated Model. Then we selected and split a parent node, e.g. X, of the target into two variables, e.g. X1 and X2, such that (1) X1 and X2 are both equal to 1 if and only if X is 1, and (2) neither X1 nor X2 is an individual cause of the target. Note that, for combined causes in the synthetic data, we do not have a complete ground truth, since the data may include some combined causes that we do not observe.
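The splitting step might look like the following minimal sketch (the actual generation used TETRAD; names are ours). Note that condition (2) of the construction, that neither component is an individual cause, still has to be checked against the target and is not enforced by this sketch:

```python
import random

def split_cause(column, rng):
    """Split a binary cause X into X1, X2 such that X == X1 AND X2 row-wise."""
    x1, x2 = [], []
    for v in column:
        # When X = 1 both components must be 1; when X = 0, pick any other pair.
        a, b = (1, 1) if v == 1 else rng.choice([(0, 0), (0, 1), (1, 0)])
        x1.append(a)
        x2.append(b)
    return x1, x2

rng = random.Random(1)
x = [rng.randint(0, 1) for _ in range(1000)]
x1, x2 = split_cause(x, rng)
assert all(v == (a & b) for v, a, b in zip(x, x1, x2))
```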

For the next five larger synthetic data sets (Syn-50, …, Syn-120), it is impractical to generate them from randomly drawn BNs, since generating one takes too long. We firstly drew a simple BN in which some variables were parents of the target and some were not. Then we adopted logistic regression to generate the data based on the BN. Finally, we employed the aforementioned splitting process to obtain the final data sets.
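The logistic-regression step might look like the following sketch (the parent choice and weights here are hypothetical; the paper does not give the exact parameters):

```python
import math
import random

def logistic_target(rows, parents, weights, bias, rng):
    """Sample a binary target from a logistic model of its parent variables."""
    out = []
    for row in rows:
        s = bias + sum(w * row[p] for p, w in zip(parents, weights))
        out.append(int(rng.random() < 1.0 / (1.0 + math.exp(-s))))
    return out

rng = random.Random(0)
rows = [{"X1": rng.randint(0, 1), "X2": rng.randint(0, 1)} for _ in range(5000)]
z = logistic_target(rows, ["X1", "X2"], weights=[2.0, -1.5], bias=-0.5, rng=rng)
```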

All the real world data sets shown in Table 1 were obtained from the UCI Machine Learning Repository asuncion2007uci (). The first six real world data sets were employed to assess the effectiveness of the proposed algorithms, while the Census data set was used for evaluating the efficiency. The CMC (Contraceptive Method Choice) data set is an extraction of the 1987 National Indonesia Contraceptive Prevalence Survey. The German data set is for classifying people's credit risks based on a set of attributes. House-votes-84 contains the United States Congressional Voting Records of 1984. Hypothyroid and Sick are two medical data sets from the Thyroid Disease data set of the repository (discretised using the MLC++ discretisation utility kohavi.ea:using-mlc:96 ()). The Kr-vs-kp data set describes a chess endgame, King-Rook versus King-Pawn on A7 (usually abbreviated as KRKPA7). The Census data set is the Census Income (KDD) data set from the UCI Machine Learning Repository. In our experiments, all continuous attributes were removed from the original data sets.

5.2 Performance Evaluation

Three sets of experiments with the synthetic data were done to assess the accuracy of MH-PC-F and MH-PC-B by examining the results against the ground truth.

Data set   Predictor variables   Ground truth   Naïve-H   Naïve-S   MH-PC-F   MH-PC-B
Syn-7 Yes
Syn-10 Yes
Table 2: Comparison of the proposed algorithms with the naïve methods

Firstly we compared MH-PC-F and MH-PC-B with two naïve approaches using HITON-PC and PC-select pcalg () respectively (denoted as Naïve-H and Naïve-S in the following). PC-select is an effective method for discovering the parents and children of a target variable, so we employed it as a benchmark for accuracy comparison.

Because the two naïve methods, especially Naïve-S, cannot handle large data sets, two small synthetic data sets (Syn-7 and Syn-10) were used in this set of experiments. Small data sets also allow a clear presentation of the detailed results.

The ground truth of the Syn-7 data set includes two single causes of the target and one combined cause (see the Ground truth column of Table 2, where Yes means the predictor variable is a cause of the target, and No means otherwise). In Table 2, MH-PC-F and MH-PC-B find exactly the ground truth in Syn-7. While Naïve-H identifies the ground truth, it includes a number of redundant results, for example, combined variables whose components are causes already. Naïve-S misses the combined cause and also finds some redundant combined causes. Similar results can be observed with the Syn-10 data set, where MH-PC-F and MH-PC-B miss one true single cause, which the naïve methods do not find either.

Method    Metric  Syn-12  Syn-16  Syn-20
CR-CS     P       0.25    0.23    0.36
          R       1.00    1.00    1.00
          F       0.40    0.38    0.53
CR-PA     P       0.16    0.17    0.27
          R       1.00    1.00    1.00
          F       0.27    0.29    0.42
MH-PC-F   P       0.67    0.50    1.00
          R       0.67    1.00    1.00
          F       0.67    0.67    1.00
MH-PC-B   P       0.67    0.50    1.00
          R       0.67    1.00    1.00
          F       0.67    0.67    1.00
Table 3: Comparison of combined causes discovered by CR-CS, CR-PA, MH-PC-F and MH-PC-B with small synthetic data sets
Method    Metric  Syn-50  Syn-60  Syn-80  Syn-100  Syn-120
CR-CS     P       0.71    1.00    0.71    0.83     1.00
          R       1.00    1.00    1.00    1.00     1.00
          F       0.83    1.00    0.83    0.91     1.00
CR-PA     P       0.71    1.00    0.83    0.83     1.00
          R       1.00    1.00    1.00    1.00     1.00
          F       0.83    1.00    0.91    0.91     1.00
MH-PC-F   P       0.83    0.71    1.00    0.83     0.83
          R       1.00    1.00    1.00    1.00     1.00
          F       0.91    0.83    1.00    0.91     0.91
MH-PC-B   P       0.83    0.71    1.00    0.83     0.83
          R       1.00    1.00    1.00    1.00     1.00
          F       0.91    0.83    1.00    0.91     0.91
Table 4: Comparison of combined causes discovered by CR-CS, CR-PA, MH-PC-F and MH-PC-B with larger synthetic data sets

Then we compared MH-PC-F and MH-PC-B with CR-CS li2015from () and CR-PA jin2012discovery () using three synthetic data sets, Syn-12, Syn-16 and Syn-20. CR-CS and CR-PA are both designed to explore causal relationships from association rules, and they are also capable of finding both single and combined causes. The results are shown in Table 3, where P, R and F represent Precision, Recall and the F-measure respectively. In the experiments, we used an odds ratio greater than 1.5 as the threshold to indicate a significant result in both CR-CS and CR-PA. We can see that MH-PC-F and MH-PC-B both achieve higher accuracy than CR-CS and CR-PA, based on the known ground truth. CR-CS and CR-PA both perform very well in terms of Recall, but they also include many false positives, since a main aim of these two methods is exploration: they tolerate false positives in order to achieve high recall.
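The figures in Tables 3 and 4 follow the standard definitions of these metrics; for reference, a minimal helper (the example cause sets are hypothetical):

```python
def prf(found, truth):
    """Precision, recall and F-measure of a discovered cause set vs. the ground truth."""
    found, truth = set(found), set(truth)
    tp = len(found & truth)
    p = tp / len(found) if found else 0.0
    r = tp / len(truth) if truth else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return round(p, 2), round(r, 2), round(f, 2)

# e.g. three reported combined causes, two of which are true, all true ones found:
print(prf({"AB", "CD", "EF"}, {"AB", "CD"}))  # -> (0.67, 1.0, 0.8)
```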

In the next set of experiments, the five larger synthetic data sets in Table 1 were used. As shown in Table 4, all four algorithms (CR-CS, CR-PA, MH-PC-F and MH-PC-B) recover the ground truth well from these relatively large data sets.

Data set   No. of single causes   No. of combined causes
CMC 2 0
German 1 13
House-votes-84 0 18
Hypothyroid 3 4
Kr-vs-kp 7 12
Sick 3 8
Table 5: Number of (single and combined) causes discovered by MH-PC-F and MH-PC-B in real world data sets

Based on the results of the three sets of experiments, it is reasonable to conclude that MH-PC-F and MH-PC-B are capable of finding single and combined causes. Another finding is that the causes (single and combined) identified by MH-PC-F and MH-PC-B are always the same, indicating that the two algorithms achieve consistent results. This is also demonstrated by the results of the two algorithms on all real world data sets, as described in the following.

To investigate combined causes in real world cases, we ran the proposed algorithms on the first six real world data sets in Table 1 for performance evaluation; MH-PC-F and MH-PC-B again return consistent results, as shown in Table 5. The proposed algorithms find many combined causes, and some of the combined causes discovered are reasonable as judged by common sense, as shown in Table 6. For example, from the Sick data set it is found that low levels of TT4 (Total T4) and T3 may result in thyroid disease (Table 6; T4 and T3 are hormones produced by the thyroid), and being sick together with a low level of T3 can also lead to thyroid disease. Some interesting combined causes are also discovered in the German data set: if a person owns real estate and does not apply for any other installment plan, then this person is very likely to have a low default risk.
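To make the notion of a combined cause concrete, the following sketch (with hypothetical toy data, not taken from the paper) forms the conjunction of two binary component variables and measures its association with a target via the odds ratio, the same statistic thresholded at 1.5 for CR-CS and CR-PA:

```python
# Hypothetical sketch: test a combined variable (the conjunction of two
# component variables) for association with a target using the odds ratio.
def odds_ratio(exposure, target):
    a = sum(1 for e, t in zip(exposure, target) if e and t)          # exposed, positive
    b = sum(1 for e, t in zip(exposure, target) if e and not t)      # exposed, negative
    c = sum(1 for e, t in zip(exposure, target) if not e and t)      # unexposed, positive
    d = sum(1 for e, t in zip(exposure, target) if not e and not t)  # unexposed, negative
    return (a * d) / (b * c) if b * c else float("inf")

# toy binary records for two components x1, x2 and a target y
x1 = [1, 1, 0, 0, 1, 0, 1, 1]
x2 = [1, 0, 1, 0, 1, 1, 0, 1]
y  = [1, 0, 0, 1, 1, 0, 1, 0]
combined = [u and v for u, v in zip(x1, x2)]  # the combined variable x1 & x2
# odds_ratio(combined, y) == 3.0, above the 1.5 threshold
```

In this toy data the conjunction carries an association with the target that would pass the 1.5 threshold, illustrating how a pair of variables can qualify as a combined cause candidate even when each component alone is weak.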

Sick     T3 & TT4
         sick = true & T3
German   Checking.account = no-account & Savings.account < 100DM
         Property = real estate & Other.installment.plans = none
Table 6: Examples of combined causes identified from the Sick and German data sets

5.3 Efficiency and Scalability

Figure 2: Scalability with number of variables - Census data
Figure 3: Scalability with number of variables - Synthetic data

We ran Naïve-H, Naïve-S, CR-CS, CR-PA, MH-PC-F and MH-PC-B with various data sets on the same computer with a 3.4 GHz quad-core CPU and 16 GB of memory.

Figure 4: Scalability with number of records - Census data

The running time of the algorithms on subsets of the Census data containing 30, 50, 70, 100 and 150 variables with the same sample size (50K) is shown in Figure 2. The two naïve methods are much slower than MH-PC-F and MH-PC-B, with Naïve-S being the most inefficient. While the two naïve methods do not scale well with the number of variables, the two proposed algorithms both show good scalability.

When the algorithms were applied to the synthetic data sets containing different numbers of variables, neither naïve method returned a result within 5 hours, so no results for the naïve methods are shown in Figure 3. As the figure shows, both proposed algorithms again scale well.

We then ran the algorithms on samples of 50K, 100K, 150K, 200K and 250K records respectively from the Census data set, with 100 variables selected randomly; the execution time of MH-PC-F and MH-PC-B is shown in Figure 4. No results were obtained for Naïve-S, and Naïve-H also could not handle data sets with more than 50K samples. Again, MH-PC-B is more efficient and scalable than MH-PC-F.

To summarise, MH-PC-F and MH-PC-B are much faster than the naïve methods, and both proposed algorithms scale well in terms of the number of variables and number of samples. The experiments have also confirmed the discussions in Section 4.3 that MH-PC-B can achieve higher efficiency than MH-PC-F.

6 Conclusion

In practice, it is useful to identify a cause consisting of multiple variables which individually are not causes of the target variable. However, finding such combined causes is challenging, as the number of combined variables increases exponentially with the number of individual variables. As far as we know, there has been very little work on discovering combined causes, and the problem has not been studied in causal Bayesian network research either.
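As a back-of-the-envelope illustration of this explosion (our own arithmetic, not a result from the paper): with n individual variables there are 2^n - n - 1 candidate combined variables, i.e. all subsets of size two or more, which is why naively adding them all to the search is infeasible:

```python
# Count of candidate combined variables: all subsets of size >= 2
# of n individual variables, i.e. 2**n minus the n singletons and the empty set.
def num_combined(n):
    return 2 ** n - n - 1

for n in (20, 30, 50):
    print(n, num_combined(n))
# already over a billion candidates at n = 30
```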

In this paper, we have proposed two efficient algorithms to mine combined causes from large data sets. The proposed algorithms are based on a well-designed local causal discovery method, the semi-interleaved HITON-PC algorithm, with novel extensions for dealing with combined causes. Experiments have shown that the proposed algorithms can find single and combined causes with a low number of false discoveries from synthetic data sets, and discover many reasonable combined causes from real world data. Additionally, the algorithms have been shown to scale up well with respect to the number of variables and the number of samples on both synthetic and real world data.

In the near future, we will apply the proposed algorithms to solving real world problems, such as investigating the mechanisms of gene regulation, for which there is evidence showing that many gene regulators work together to regulate their target genes.

7 Acknowledgement

This work has been supported by the Australian Research Council (ARC) Discovery Project Grant DP140103617.


  • (1) P. Spirtes, Introduction to causal inference, The Journal of Machine Learning Research 11, 2010, pp. 1643–1662.
  • (2) D. Freedman, From association to causation via regression, Advances in Applied Mathematics 18 (1), 1997, pp. 59–110.
  • (3) D. Freedman, From association to causation: some remarks on the history of statistics, Statistical Science 1999, pp. 243–258.
  • (4) J. J. Shaughnessy, E. B. Zechmeister, Research methods in psychology., Alfred A. Knopf, 1985.
  • (5) J. Pearl, Causality: models, reasoning and inference, Cambridge University Press, 2000.
  • (6) R. E. Neapolitan, Learning Bayesian networks, 38, Prentice Hall Upper Saddle River, 2004.
  • (7) D. Heckerman, A Bayesian approach to learning causal networks, in Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann Publishers Inc., 1995, pp. 285–295.
  • (8) C. F. Aliferis, A. Statnikov, I. Tsamardinos, S. Mani, X. D. Koutsoukos, Local causal and Markov blanket induction for causal discovery and feature selection for classification part I: algorithms and empirical evaluation, The Journal of Machine Learning Research 11, 2010, pp. 171–234.
  • (9) I. Tsamardinos, C. F. Aliferis, A. Statnikov, Time and sample efficient discovery of Markov blankets and direct causal relations, in: ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2003, pp. 673–678.
  • (10) J.-P. Pellet, A. Elisseeff, Using Markov blankets for causal structure learning, The Journal of Machine Learning Research 9, 2008, pp. 1295–1342.
  • (11) L. R. Novick, P. W. Cheng, Assessing interactive causal influence, Psychological Review 111 (2), 2004, p. 455.
  • (12) G. P. Wagner, M. Pavlicev, J. M. Cheverud, The road to modularity, Nature Reviews Genetics 8 (12), 2007, pp. 921–931.
  • (13) P. D’haeseleer, S. Liang, R. Somogyi, Genetic network inference: from co-expression clustering to reverse engineering, Bioinformatics 16 (8), 2000, pp. 707–726.
  • (14) J. L. Mackie, Causes and conditions, American Philosophical Quarterly, 1965, pp. 245–264.
  • (15) P. Spirtes, C. N. Glymour, R. Scheines, Causation, prediction, and search, 2nd Edition, The MIT Press, 2000.
  • (16) K. J. Rothman, Causes, American Journal of Epidemiology 104 (6), 1976, pp. 587–592.
  • (17) K. J. Rothman, S. Greenland, Causation and causal inference in epidemiology, American Journal of Public Health 95 (S1), 2005, pp. 144–150.
  • (18) J. L. Hill, Bayesian nonparametric modeling for causal inference, Journal of Computational and Graphical Statistics 20 (1), 2011.
  • (19) S. Mani, C. F. Aliferis, A. R. Statnikov, Bayesian algorithms for causal data mining, in Neural Information Processing Systems Workshop on Causality: Objectives and Assessment, 2010, pp. 121–136.
  • (20) D. Heckerman, D. Geiger, D. M. Chickering, Learning Bayesian networks: the combination of knowledge and statistical data, Machine Learning 20 (3), 1995, pp. 197–243.
  • (21) M. B. Messaoud, L. Philippe, B. A. Nahla, SemCaDo: a serendipitous strategy for causal discovery and ontology evolution, Knowledge-Based Systems, 76, 2015, pp. 79–95.
  • (22) G. Kabir, G. Demissie, R. Sadiq, S. Tesfamariam, Integrating failure prediction models for water mains: Bayesian belief network based data fusion, Knowledge-Based Systems, 85, 2015, pp. 159–169.
  • (23) E. Segal, D. Pe’er, A. Regev, D. Koller, Learning module networks, in Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, 2002, pp. 525–534.
  • (24) E. Azizi, E. M. Airoldi, J. E. Galagan, Learning modular structures from network data and node variables, in Proceedings of the 31st International Conference on Machine Learning (ICML), 2014, JMLR: W&CP 32 (1), pp. 1440–1448.
  • (25) E. Segal, M. Shapira, A. Regev, D. Pe’er, D. Botstein, D. Koller, N. Friedman, Module networks: identifying regulatory modules and their condition-specific regulators from gene expression data, Nature genetics 34 (2), 2003, pp. 166–176.
  • (26) B. Yet, D. W. R. Marsh, Compatible and incompatible abstractions in Bayesian networks, Knowledge-Based Systems 62, 2014, pp. 84–97.
  • (27) T. J. Vanderweele, J. M. Robins, Empirical and counterfactual conditions for sufficient cause interactions, Biometrika 95 (1), 2008, pp. 49–61.
  • (28) T. J. VanderWeele, J. M. Robins, The identification of synergism in the sufficient-component-cause framework, Epidemiology 18 (3), 2007, pp. 329–339.
  • (29) T. J. VanderWeele, T. S. Richardson, General theory for interactions in sufficient cause models with dichotomous exposures, Annals of Statistics 40 (4), 2012, pp. 2128–2161.
  • (30) J. Li, T. D. Le, L. Liu, J. Liu, Z. Jin, B. Sun, Mining causal association rules, in Proceedings of IEEE International Conference on Data Mining Workshop on Causal Discovery (CD), 2013.
  • (31) A. M. Euser, C. Zoccali, K. J. Jager, F. W. Dekker, et al., Cohort studies: prospective versus retrospective, Nephron Clinical Practice 113 (3), 2009, pp. 214–217.
  • (32) Z. Jin, J. Li, L. Liu, T. D. Le, B.-Y. Sun, R. Wang, Discovery of causal rules using partial association, in: IEEE International Conference on Data Mining, 2012, pp. 309–318.
  • (33) M. Birch, The detection of partial association, I: the 2 × 2 case, Journal of the Royal Statistical Society. Series B (Methodological), 1964, pp. 313–324.
  • (34) J. Li, Z. J. Wang, Controlling the false discovery rate of the association/causality structure learned with the PC algorithm, The Journal of Machine Learning Research 10, 2009, pp. 475–514.
  • (35) M. Scutari, Learning Bayesian networks with the bnlearn R package, arXiv preprint arXiv:0908.3817, 2009.
  • (36) J. Li, T. D. Le, L. Liu, J. Liu, Z. Jin, B. Sun, S. Ma, From observational studies to causal rule mining, ACM Transactions on Intelligent Systems and Technology (in press), 2015.
  • (37) D. Colombo, A. Hauser, M. Kalisch, M. Maechler, Package ‘pcalg’, 2014.
  • (38) A. Asuncion, D. Newman, UCI machine learning repository, 2007.
  • (39) R. Kohavi, D. Sommerfield, J. Dougherty, Data mining using MLC++: A machine learning library in C++, in Tools with Artificial Intelligence, IEEE Computer Society Press, 1996, pp. 234–245.