Mining Combined Causes in Large Data Sets
Abstract
In recent years, many methods have been developed for detecting causal relationships in observational data. Some of them have the potential to tackle large data sets. However, these methods fail to discover a combined cause, i.e. a multi-factor cause consisting of two or more component variables which individually are not causes. A straightforward approach to uncovering a combined cause is to include both individual and combined variables in the causal discovery using existing methods, but this scheme is computationally infeasible due to the huge number of combined variables. In this paper, we propose a novel approach to address this practical causal discovery problem, i.e. mining combined causes in large data sets. The experiments with both synthetic and real world data sets show that the proposed method can obtain high-quality causal discoveries with high computational efficiency.
keywords:
Causal discovery, Combined causes, Local causal discovery, HITONPC, Multilevel HITONPC

1 Introduction
Causal relationships can reveal the causes of a phenomenon and predict the potential consequences of an action or an event Spirtes2010 (). Therefore, they are more useful and reliable than statistical associations freedman1997association (); freedman1999association (); shaughnessy1985research ().
In recent decades, causal inference has attracted great attention in computer science. Causal Bayesian networks (CBNs) pearl2000causality (); neapolitan2004learning (); heckerman1995bayesian () have emerged as a main framework for representing causal relationships and uncovering them in observational data. Because CBNs cannot cope with high-dimensional data, efficient methods have been proposed for local causal discovery around a target variable aliferis2010local (); tsamardinos2003time (); pellet2008using ().
One limitation of current causal discovery methods is that they only find a cause consisting of a single variable. However, single causal factors are often insufficient for reasoning about the causes of particular effects novick2004assessing (). For example, a burning cigarette stub and inflammable material nearby can start a fire, but neither of them alone may cause a fire. In gene regulation, it was found that the expression level of a gene might be co-regulated by a group of other genes, which could lead to a disease wagner2007road (); d2000genetic (). Furthermore, a main objective of data mining is to find previously unobserved patterns and relationships in data. Causal relationships between single variables are easier for domain experts to identify, but combined causes are much harder to detect mackie1965causes (). Hence data mining methods for discovering combined causes are in demand. In this paper, we address the problem of finding combined causes in large data sets.
The combined causes considered in this paper are different from the generally discussed multiple causes. For example, in Figure 1, sprinkler causes wet ground, and so does rain. Sprinkler and rain together cause wetter ground. In this work, however, we are concerned with the situation where multiple variables, each alone insufficient to cause an effect, jointly produce it. As shown in Figure 1, there is no causal link from burning cigarette stub or inflammable material to a fire, but the combination of these two factors leads to a fire.
The combined causes studied in this paper cannot be discovered with CBN learning, as in a CBN an edge is drawn from a variable X to the target Y only when X is a cause of Y. If X1 and X2 each alone is not a cause of Y, no edge is drawn from X1 or X2 to Y, and thus it is impossible to examine the combined causal effect of X1 and X2 on Y. This limitation of CBNs was discussed in spirtes2000causation () (page 48) as follows:
“Suppose drugs A and B both reduce symptoms C, but the effect of A without B is quite trivial, while the effect of B alone is not. The directed graph representations we have considered in this chapter offer no means to represent this interaction and to distinguish it from other circumstances in which A and B alone each have an effect on C.”
To identify combined causes in data, one critical challenge is the computational complexity with large data sets, as the number of combined variables is exponential in the number of individual variables.
In this paper, we propose a multi-level approach to discovering the combined causes of a target variable. Our method is designed based on an efficient local causal discovery method, HITONPC aliferis2010local (), which was developed on the same theoretical ground as the well-known PC algorithm spirtes2000causation () for CBN learning.
In the rest of the paper, the related work and the contributions of this paper are described in Section 2. Section 3 introduces the background, including the notation and the HITONPC algorithm. Section 4 presents the proposed method. The experiments and results are described in Section 5. Finally, Section 6 concludes the paper.
2 Related Work and Contributions
As discussed in the previous section, causal Bayesian networks (CBNs), as a mainstream causal discovery approach, have been studied extensively. Many algorithms for CBN learning and inference pearl2000causality (); spirtes2000causation (); hill2011bayesian (); mani2010bayesian () have been developed. Researchers have also tried to incorporate other models and prior knowledge into the CBN framework. For example, prior knowledge from domain experts can be combined with observational data to learn Bayesian networks heckerman1995learning (). Messaoud et al. messaoud2015semcado () proposed a framework to learn CBNs by incorporating semantic background knowledge provided by a domain ontology. To address the uncertainties resulting from incomplete and partial information, Kabir et al. kabir2015integrating () combined a Bayesian belief network with a data fusion model to predict the failure rate of water mains. However, these methods are designed to analyse individual causes, instead of combined causes. Moreover, it may be difficult for domain experts to elicit a CBN structure with combined causes from domain knowledge only.
Another approach segal2002learning (); Azizi2014learning () was proposed to find the relationship structures between groups of variables. Segal et al. segal2002learning () defined the module network, in which each node (module) is formed by a set of variables having the same statistical behaviour. They also proposed an algorithm to learn the module assignment and the module network structure. Many algorithms and applications segal2003module (); Azizi2014learning () have been developed to extend the module network model. Yet et al. yet2014compatible () proposed a method for abstracting the BN structure, where they also merged nodes with similar behaviour to simplify the BN structure. The modules or nodes of a module network are not the same as the combined causes defined in this paper, since the components of a combined cause do not necessarily behave similarly.
The sufficient-component cause model rothman1976causes (); rothman2005causation () (often referred to by epidemiologists) addresses the combined causes discussed in this paper. According to the model, a disease is an inevitable consequence of a minimal set of factors. However, no computational methods have been developed for finding a sufficient-component cause in observational data. Although the model and interactive causes have attracted statisticians' attention vanderweele2008empirical (); vanderweele2007identification (); vanderweele2012general (), the work remains at the level of theoretical discussion.
Li et al. li2013mining () used the idea of retrospective cohort studies euser2009cohort () and Jin et al. jin2012discovery () applied partial association tests birch1964detection () to discover causal rules from association rules. While the work has initiated the concept of the combined causes, their focus was on integrating association rule mining with observational studies or traditional statistical analysis for causal discovery.
In this paper, a novel method is proposed to discover the combined causes of the given target variable, based on the causal inference framework established for CBN learning. The contributions of this paper are summarised as follows:

We study the problem of mining combined causes, which differs from finding multiple individual causes and has not been tackled by most existing methods.

We develop a new method for discovering combined (and single) causes, and demonstrate its performance and efficiency by experiments with synthetic and real world data.
3 Background
In this section, we firstly describe the notation used in the paper (Section 3.1). In Section 3.2, we introduce the HITONPC algorithm, which is the basis of our algorithms, and then discuss its time complexity.
3.1 Notation
We use upper case letters, e.g. X and Y, to represent random variables, and multiple upper case letters, e.g. XY or XYZ, to denote the combined variable consisting of the individual variables, e.g. X and Y. Boldfaced upper case letters, e.g. X and S, represent a set of variables. In particular, we denote the set of predictor variables and the target variable with X and T respectively. The conditional independence between X and Y given S is represented as Ind(X, Y | S).
This paper deals with binary variables only, i.e. each variable has two possible values, 1 or 0. The value of a combined (binary) variable is 1 if and only if each of its component (binary) variables is equal to 1. A multi-valued variable can be converted to a number of binary variables, e.g. the nominal variable Education can be converted to 3 binary variables, High School, Undergraduate and Postgraduate. With binary variables, we can easily create and examine a combined cause involving different values of multiple variables. For example, given the two nominal variables, Gender and Education, after converting them to binary variables, we can combine them to obtain combined variables such as (Male, High School) and (Female, Postgraduate).
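As a concrete illustration of the conversion and combination just described, the following sketch one-hot encodes nominal attributes into binary variables and forms a combined variable whose value is the logical AND of its components. The code is illustrative only (the paper's implementation is in R), and the variable names are hypothetical:

```python
def binarise(records, value_domain):
    """One binary variable per nominal value: 1 where the record takes that value."""
    return {v: [1 if r == v else 0 for r in records] for v in value_domain}

def combine(*columns):
    """A combined binary variable is 1 iff every component variable is 1."""
    return [int(all(vals)) for vals in zip(*columns)]

# Toy data for the Gender/Education example in the text (three records).
education = binarise(["HighSchool", "Postgraduate", "HighSchool"],
                     ["HighSchool", "Undergraduate", "Postgraduate"])
gender = binarise(["Male", "Female", "Female"], ["Male", "Female"])

# The combined variable (Female, Postgraduate) is 1 only for the second record.
female_postgrad = combine(gender["Female"], education["Postgraduate"])
```
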
3.2 HITONPC
Given its high efficiency and its grounding in sound CBN learning theory, HITONPC aliferis2010local () is a commonly used method for discovering the local causal structure around a fixed target variable. The semi-interleaved version of HITONPC is used as the basis for our proposed method. Under the causal assumptions spirtes2000causation (), HITONPC uses conditional independence (CI) tests to find the causal relationships around a target variable T, i.e. the set of parents and children (PC) of T.
Referring to Algorithm 1, HITONPC takes a data set of the predictors and the target T, and produces PC, the set of parents and children of T. The algorithm uses two data structures, a priority queue OPEN and a list PC. Initially OPEN contains all predictors associated with T, and PC is empty (see lines 1 and 2 of Algorithm 1). The algorithm then iterates between two phases, inclusion and elimination, until OPEN becomes empty.
In the inclusion phase, the variable having the strongest association with T is removed from OPEN and added to PC (line 4). In the elimination phase, if OPEN is not empty, the forward stage (lines 5-9) is executed: the variable X newly added to PC is eliminated from PC if it is independent of T given a subset of the current PC; otherwise it is kept (still tentatively) in PC. If OPEN is empty, the backward stage (lines 10-16) is activated: each variable X in the current PC is tested, and if a subset of the other variables in PC is found such that X is independent of T given that subset, X is removed from PC.
HITONPC uses several heuristics to improve efficiency. At the forward stage, CI tests are conducted only on the newly added variable, instead of performing a full variable elimination. To compensate for possible false discoveries caused by this heuristic, HITONPC uses the backward stage to "tighten up" PC by testing the conditional independence of each variable with T given subsets of the other variables in PC. Moreover, the use of the priority queue OPEN allows variables having stronger associations with T to be included and evaluated first. As these variables are more likely to be the true parents or children of the target, once they are in PC, it is expected that, given these variables, other variables that should not be in PC are quickly identified and removed, so that PC will not be over-expanded in the forward stage, thus reducing the number of CI tests. Additionally, in practice, HITONPC restricts the maximum order of CI tests to a given threshold maxk, i.e. the maximum size of a conditioning set (see Algorithm 1) is maxk.
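Since Algorithm 1 itself is not reproduced in this extract, the description above can be summarised in the following Python sketch. The association measure `assoc` and the CI test `indep` are assumed to be supplied by the caller (e.g. as chi-square or G2 tests on the data); the function names and structure are illustrative, not the paper's code:

```python
from itertools import combinations

def hiton_pc(predictors, assoc, indep, max_k=3):
    """Semi-interleaved HITON-PC skeleton (sketch).

    assoc(X)    -> association strength of X with the target T (0 = none)
    indep(X, Z) -> True if X is independent of T given the tuple Z
    """
    # OPEN: variables associated with T, strongest association first.
    open_q = sorted((x for x in predictors if assoc(x) > 0),
                    key=assoc, reverse=True)
    pc = []
    while open_q:
        x = open_q.pop(0)            # inclusion: strongest remaining variable
        pc.append(x)
        if open_q:                   # forward stage: test only the new variable
            others = [v for v in pc if v != x]
            if any(indep(x, z)
                   for k in range(min(len(others), max_k) + 1)
                   for z in combinations(others, k)):
                pc.remove(x)
    # backward stage (OPEN empty): re-test every remaining candidate
    for x in list(pc):
        others = [v for v in pc if v != x]
        if any(indep(x, z)
               for k in range(min(len(others), max_k) + 1)
               for z in combinations(others, k)):
            pc.remove(x)
    return pc
```

With an oracle where A and B are true parents and C is associated with T only through A, the backward stage removes C even though it entered PC last.
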
The time complexity of HITONPC mainly depends on the number of CI tests. Each variable needs to be tested on all subsets of PC. Thus the complexity regarding each variable is O(2^|PC|), and the total time complexity is O(|X| * 2^|PC|), where |X| is the number of predictors. When maxk is specified, the complexity becomes polynomial, i.e. O(|X| * |PC|^maxk). Extensive experiments have shown that HITONPC is able to cope with thousands of variables with a low rate of false discoveries aliferis2010local ().
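The effect of maxk on the per-variable test count follows from the binomial sums in the analysis above, and can be checked directly with a short sketch (the figures are illustrative, not taken from the paper):

```python
from math import comb

def ci_tests_per_variable(pc_size, max_k=None):
    """Number of conditioning sets drawn from a PC set of size pc_size.

    Unrestricted: all 2^pc_size subsets.
    With max_k:   only subsets of size <= max_k (polynomial in pc_size).
    """
    if max_k is None:
        return 2 ** pc_size
    return sum(comb(pc_size, k) for k in range(min(max_k, pc_size) + 1))

# With |PC| = 20, the unrestricted search needs 2^20 = 1,048,576 conditioning
# sets per variable, while max_k = 3 caps it at
# C(20,0) + C(20,1) + C(20,2) + C(20,3) = 1351.
```
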
4 Uncovering Combined Causes
Having introduced the background knowledge, in this section, we present the proposed method for discovering combined causes. We firstly introduce the naïve approach (Section 4.1), which is a straightforward way to detect the combined causes. Then we give the formal definition of combined causes and present the basic idea of the proposed method (Section 4.2). Finally, we describe the proposed method (including two algorithms, MHPCF and MHPCB) for discovering combined causes (Section 4.3) and discuss their possible false discoveries (Section 4.4).
4.1 The Naïve Approach
A naïve scheme for finding the combined causes can be as follows. Firstly, we generate a new variable set with combined variables using the original variable set. For example, for X = {X1, X2, X3}, the new variable set (with 2^3 - 1 = 7 variables) is {X1, X2, X3, X1X2, X1X3, X2X3, X1X2X3}. Then we run a local causal discovery algorithm, such as HITONPC, to find both single and combined causes using the data set created for the new variable set.
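The combinatorial blow-up of this scheme is easy to make explicit: every non-empty subset of the original variables becomes a (possibly combined) variable. A minimal sketch, with illustrative names:

```python
from itertools import combinations

def naive_variable_set(variables):
    """All single and combined variables: every non-empty subset of the
    original variable set becomes one variable."""
    return [subset
            for k in range(1, len(variables) + 1)
            for subset in combinations(variables, k)]

# For 3 variables there are 2^3 - 1 = 7 variables in the new set;
# for 30 variables there would be over a billion.
vs = naive_variable_set(["X1", "X2", "X3"])
```
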
The naïve approach, however, is not feasible because the number of combined variables is exponential in the number of individual variables. In the following, we discuss how our proposed method tackles the problem.
4.2 Basic Idea of the Proposed Method
In fact, it is not necessary to consider all combined variables. In particular, we are not interested in a combined variable, e.g. XY, of which a component, X or Y, is already a cause, as it is reasonable to assume that the causal relationship between XY and the target T is due to the relationship between X (or Y) and T. To improve efficiency, we can exclude such combined variables when finding combined causes of T, and only consider the combined variables whose components are not causes of T.
Furthermore, as discussed in Section 1, a combined cause consisting of non-cause components is more difficult for domain experts to observe, and such causes cannot be represented or discovered using other approaches such as CBNs; hence mining them is useful in practice.
The definition of the combined causes studied in this paper is given below.
Definition 1 (Combined Cause)
Let C = X1X2...Xk be a combination of multiple variables. C is a combined cause of the target T if C is a cause of T and none of its component variables, X1, ..., Xk, is a cause of T.
Based on Definition 1, we can design an algorithm to find such combined causes (and single causes) in a level-by-level manner. We firstly, at the 1st level (k = 1), obtain PC1, the set of parents and children of T, each consisting of an individual variable. Then at the k-th level (k > 1), combined variables each containing k individual variables are generated based on N<k, the set of non-cause variables at all the lower levels, i.e. N<k = N1 ∪ ... ∪ Nk-1, where Ni (1 ≤ i ≤ k-1) is the set of non-cause variables each containing i individual variables. For example, a k-th level combined variable can be generated by combining an i-th level (1 ≤ i < k) non-cause variable and a (k-i)-th level non-cause variable.
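The level-wise generation can be sketched as follows. Here `non_causes_by_level` maps a level to its non-cause combinations (each a tuple of individual variable names), and only pairs of lower-level non-causes with disjoint components whose sizes sum to k are kept; the names are illustrative, not taken from Algorithm 2:

```python
from itertools import combinations

def level_k_candidates(non_causes_by_level, k):
    """Candidate k-th level combined variables, formed by pairing two
    lower-level non-cause combinations with disjoint component variables
    whose sizes sum to k (a sketch of the level-wise generation)."""
    pool = [c for level in range(1, k)
            for c in non_causes_by_level.get(level, [])]
    candidates = set()
    for a, b in combinations(pool, 2):
        if set(a) & set(b):                 # components must not overlap
            continue
        merged = tuple(sorted(set(a) | set(b)))
        if len(merged) == k:                # sizes sum to k
            candidates.add(merged)
    return sorted(candidates)

# Level-2 candidates from the single non-causes X, Y, Z:
n1 = {1: [("X",), ("Y",), ("Z",)]}
pairs = level_k_candidates(n1, 2)
```
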
For non-cause variables X and Y, if the combination XY is a combined cause, then it is reasonable to assume that Y positively contributes to the relationship between X and the target T, and vice versa. For example, combustible dust suspended in the air (even at a high concentration) has no causal effect on a dust explosion by itself, but an ignition source strengthens that relationship significantly, and thus the combination of the two factors can result in a dust explosion. This observation leads to the following definition.
Definition 2 (Redundant Combined Variable)
For non-cause variables X and Y of the target T, the combination XY is a redundant combined variable if either Y does not strengthen the association between X and T, or X does not strengthen the association between Y and T.
By excluding redundant combined variables, we can further improve the efficiency of the causal discovery.
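As an illustration only, the sketch below replaces the paper's statistical test (which uses a p-value threshold, see Section 5) with a simple comparison of estimated conditional probabilities: a combination XY is kept only when each component strengthens the other's association with the target. The function names and the stand-in criterion are assumptions, not the paper's exact test:

```python
def p_target_given(data, target, *conds):
    """P(target = 1 | all cond variables = 1), estimated from binary records.
    data maps a variable name to its list of 0/1 values."""
    rows = [i for i in range(len(data[target]))
            if all(data[c][i] == 1 for c in conds)]
    if not rows:
        return 0.0
    return sum(data[target][i] for i in rows) / len(rows)

def is_redundant(data, target, x, y):
    """XY is treated as redundant unless each component strengthens the
    other's association with the target (simplified stand-in for the
    paper's significance test)."""
    p_x = p_target_given(data, target, x)
    p_y = p_target_given(data, target, y)
    p_xy = p_target_given(data, target, x, y)
    return not (p_xy > p_x and p_xy > p_y)
```

On the fire example from Section 1, neither the stub nor the inflammable material raises the probability of fire alone, but their combination does, so the pair survives the pruning.
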
Based on the above discussion and HITONPC, we propose the Multilevel HITONPC (MHPC) method for finding both single and combined causes of a given target. In the following section, we present the details of the method.
4.3 Multilevel HITONPC
Referring to Algorithm 2, at the first level, MHPC invokes HITONPC to find the single causes of T, PC1 (line 1), and initialises PC as PC1 (line 2). The single non-cause variables are put in N1, and N<k (the non-cause variables identified at all the lower levels) is initially empty (line 2).
At level k (k > 1), MHPC firstly updates N<k so that it contains the non-causes from levels 1 to k-1 (line 4). Next the algorithm generates combined variables containing k individual variables by combining the variables in N<k (line 5). Redundant combined variables are then removed (line 6), and the new data set for the level-k combined variables is created too (line 7). From lines 8 to 23, we identify level-k combined causes from this data set. Initially OPEN contains all the combined variables that are associated with T, sorted in descending order of the strength of their associations (line 8). Similar to HITONPC, the inclusion and elimination phases are carried out iteratively till OPEN is empty. At the end of the iteration (line 23), PC includes the discovered combined causes consisting of k variables, together with all the causes from level 1 to level k-1. Note that in line 17, to improve the efficiency further, the backward stage only checks the level-k candidates in PC, instead of all candidates in PC, as all the lower-level parents and children have been confirmed at previous levels.
In line 24, the set of level-k non-causes, Nk, is updated before completing the work at level k. Finally MHPC outputs PC when k reaches kmax (the maximum level of causal discovery).
In the forward stage (i.e. the case when OPEN is not empty), as with HITONPC, MHPC searches the current PC for a subset Z to test whether the combined variable is independent of T given Z (line 13). Since combined variables in PC are combinations of individual variables, MHPC may conduct some redundant conditional independence tests. For example, the CI test between X and T given a combined variable YZ (i.e. Ind(X, T | {YZ})) may be unnecessary if the test given the two individual variables Y and Z (i.e. Ind(X, T | {Y, Z})) has been done. To address this issue, we propose a variant of MHPC, called MHPCB (B version of MHPC). To avoid confusion, in the rest of the paper, we call the MHPC algorithm shown in Algorithm 2 MHPCF (Full version of MHPC). In the forward stage, when conducting the level-k tests with MHPCB, we do not include the level-k variables in PC in the conditioning sets, i.e. line 13 of Algorithm 2 is modified so that the conditioning subsets are drawn only from the lower-level variables in PC.
As MHPCB conducts the tests conditioning only on the lower-level variables, it can be more efficient than MHPCF, but at the same time it may produce some false positives. However, since the backward stage (lines 17-21 of Algorithm 2) re-checks the candidate causes remaining in PC, it is expected that such false discoveries are removed. As shown in the next section, the experiments confirm that MHPCB is more efficient than MHPCF, while producing the same results as MHPCF on the data sets used.
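The difference between the two variants lies only in where the forward-stage conditioning sets are drawn from, which can be sketched as a single helper (names and signature are illustrative):

```python
from itertools import combinations

def conditioning_sets(pc, level_of, k, max_k=3, full=True):
    """Candidate conditioning sets for the level-k forward-stage CI tests.

    MHPCF (full=True)  draws them from all of PC;
    MHPCB (full=False) draws them only from variables below level k,
    skipping tests whose conditioning set contains a level-k combination.
    """
    pool = pc if full else [v for v in pc if level_of[v] < k]
    return [z for size in range(min(len(pool), max_k) + 1)
            for z in combinations(pool, size)]
```

For a PC containing the singles X, Y and the level-2 combination XY, the full variant enumerates all 8 subsets of {X, Y, XY}, while the B variant enumerates only the 4 subsets of {X, Y}.
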
4.4 False Discoveries of Multilevel HITONPC
Since MHPCF (and MHPCB) follows the idea of HITONPC, we firstly analyse the quality of HITONPC in terms of false discoveries. In HITONPC, possible false decisions mainly come from two sources spirtes2000causation (); li2009controlling (): the use of maxk, the maximum size of the conditioning sets used for conditional independence tests (see Algorithms 1 and 2), and incorrect results of statistical tests. Using a smaller maxk reduces the number of conditional independence tests and thus improves efficiency, but results in false positive discoveries. Fortunately, when maxk = 3 or 4, the false positive rate is not high, as shown in aliferis2010local (). When the number of samples is insufficient, the statistical tests may produce incorrect results.
In the following, we discuss the false discoveries coming from the interactions between variables. As mentioned above, the proposed method only considers non-redundant combined variables (Definition 2). While this strategy reduces complexity, it may lead to false discoveries. However, we argue that our algorithms can still obtain high-quality causal findings. For a non-cause variable X, if another non-cause variable Y cannot improve the relationship between X and the target T, then the combination XY, in most cases, will not improve the relationship between X and T either. Such combined variables are unlikely to be combined causes. The experiment results in Section 5 confirm this intuition.
5 Experiments
We implemented MHPCF and MHPCB based on the semi-interleaved HITONPC implementation in the R package bnlearn scutari2009learning (). In the experiments, the maximum level of combination (i.e. kmax in Algorithm 2) is restricted to 2, i.e. a combined cause consists of at most two component variables. We set the significance threshold (p value) to 0.01 for pruning redundant combined variables and 0.05 for testing causal relationships, for both synthetic and real world data sets.
5.1 Data Sets
10 synthetic and 7 real world data sets were used in the experiments, and a summary of the data sets is shown in Table 1. The variables in all data sets are binary, i.e. each variable has two possible values, 1 or 0. The class variable in each data set is specified as the target variable. The numbers of variables shown in the table refer to the numbers of single predictor variables. The distribution of each data set indicates the percentages of the two different values of class variables. For synthetic data sets, the ground truth (i.e. the number of true causes) is shown in the table, where the first value is the number of single causes each consisting of one predictor variable and the second value is the number of combined causes each consisting of two predictor variables.
Name  #Records  #Variables  Distributions  Ground Truth 
Syn7  1000  6  39.1% & 60.9%  2, 1 
Syn10  1000  9  42.2% & 57.8%  3, 2 
Syn12  2000  11  72.1% & 27.9%  4, 3 
Syn16  2000  15  45.2% & 54.8%  4, 3 
Syn20  2000  19  55.6% & 44.4%  4, 4 
Syn50  5000  49  30.8% & 69.2%  5, 5 
Syn60  5000  59  30.8% & 69.2%  5, 5 
Syn80  5000  79  30.5% & 69.5%  5, 5 
Syn100  5000  99  30.0% & 70.0%  5, 5 
Syn120  5000  119  30.4% & 69.6%  5, 5 
CMC  1473  22  57.3% & 42.7%   
German  1000  60  30.0% & 70.0%   
Housevotes84  435  16  61.4% & 38.6%   
Hypothyroid  3163  51  4.8% & 95.2%   
Krvskp  3196  74  47.8% & 52.2%   
Sick  2800  58  6.1% & 93.9%   
Census  299285  495  6.2% & 93.8%   
The first five synthetic data sets (with small numbers of variables) in Table 1 were generated in two main steps: (1) generating a data set based on a BN (Bayesian network) created randomly by the TETRAD software tool (http://www.phil.cmu.edu/tetrad/), and (2) generating the final synthetic data set by "splitting" some causes of the target into two new variables. Specifically, we firstly created a random BN using the TETRAD software, whose structure and conditional probability tables were both generated randomly. In the obtained BN, one of the variables was designated as the target and the others as predictor variables. Records of all variables were generated based on the conditional probability tables, using the built-in Bayes Instantiated Model. Then we selected and split a parent node, e.g. X, of the target into two new variables, e.g. X1 and X2, such that (1) X = X1 AND X2 (i.e. X1 and X2 are both equal to 1 if and only if X is 1), and (2) neither X1 nor X2 is an individual cause of the target. Note that for the combined causes in the synthetic data, we do not have a complete ground truth, since the data may contain some combined causes that we do not observe.
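The splitting step can be sketched as follows: whenever the original cause is 1, both components are set to 1; whenever it is 0, at least one component is set to 0 at random, so that the cause equals the AND of the components while neither component alone mirrors it. This is an illustrative reconstruction of the described procedure, not the authors' generator:

```python
import random

def split_cause(values, seed=0):
    """Split a binary cause X into X1, X2 with X = X1 AND X2.

    X = 1  -> both components are 1.
    X = 0  -> at least one component is 0, chosen at random, so neither
              component alone determines the target.
    """
    rng = random.Random(seed)
    x1, x2 = [], []
    for v in values:
        if v == 1:
            a, b = 1, 1
        else:
            a, b = rng.choice([(0, 0), (0, 1), (1, 0)])
        x1.append(a)
        x2.append(b)
    return x1, x2
```
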
For the next five larger synthetic data sets (Syn50, ..., Syn120), it is impractical to generate them from randomly drawn BNs, since doing so takes too long. We firstly drew a simple BN in which some variables were parents of the target and others were not. Then we adopted logistic regression to generate the data based on the BN. Finally, we employed the aforementioned splitting process to obtain the final data sets.
All real world data sets shown in Table 1 were obtained from the UCI Machine Learning Repository asuncion2007uci (). The first six real world data sets were employed to assess the effectiveness of the proposed algorithms, while the Census data set was used for evaluating their efficiency. The CMC (Contraceptive Method Choice) data set is an extraction of the National Indonesia Contraceptive Prevalence Survey in 1987. The German data set is for classifying people's credit risks based on a set of attributes. Housevotes84 contains the United States Congressional Voting Records in 1984. Hypothyroid and Sick are two medical data sets derived from the Thyroid Disease data set of the repository (discretised using the MLC++ discretisation utility kohavi.ea:usingmlc:96 ()). The Krvskp data set is generated and described based on a chess endgame, King-Rook versus King-Pawn on A7 (usually abbreviated as KRKPA7). The Census data set is the Census Income (KDD) data set from the UCI Machine Learning Repository. In our experiments, all continuous attributes were removed from the original data sets.
5.2 Performance Evaluation
Three sets of experiments with the synthetic data were done to assess the accuracy of MHPCF and MHPCB by examining the results against the ground truth.
Data set  Predictor variables  Ground truth  NaïveH  NaïveS  MHPCF  MHPCB
Syn7  Yes  
Yes  
Yes  
No  
No  
Syn10  Yes  
Yes  
Yes  
Yes  
Yes  
No  
No  
No  
No 
Firstly we compared MHPCF and MHPCB with two naïve approaches using HITONPC and PCselect pcalg () respectively (denoted as NaïveH and NaïveS in the following). PCselect is an effective method for discovering the parents and children of a target variable, so we employed it as a benchmark for accuracy comparison.
Because the two naïve methods, especially NaïveS, cannot handle large data sets, two small synthetic data sets (Syn7 and Syn10) were used in this set of experiments. Moreover, small data sets allow a clear presentation of the detailed results.
The ground truth of the Syn7 data set contains two single causes of the target and one combined cause (see the Ground truth column of Table 2, where Yes means the predictor variable is a cause of the target, and No means otherwise). In Table 2, MHPCF and MHPCB find exactly the ground truth in Syn7. While NaïveH identifies the ground truth, it also includes a number of redundant results, namely combinations whose components are already causes. NaïveS misses the combined cause, and it finds some redundant combined causes too. Similar results can be observed with the Syn10 data set, where MHPCF and MHPCB miss one true single cause, which the naïve methods do not find either.
Method  Metric  Syn12  Syn16  Syn20
CRCS  Precision  0.25  0.23  0.36
CRCS  Recall  1.00  1.00  1.00
CRCS  F1  0.40  0.38  0.53
CRPA  Precision  0.16  0.17  0.27
CRPA  Recall  1.00  1.00  1.00
CRPA  F1  0.27  0.29  0.42
MHPCF  Precision  0.67  0.50  1.00
MHPCF  Recall  0.67  1.00  1.00
MHPCF  F1  0.67  0.67  1.00
MHPCB  Precision  0.67  0.50  1.00
MHPCB  Recall  0.67  1.00  1.00
MHPCB  F1  0.67  0.67  1.00
Method  Metric  Syn50  Syn60  Syn80  Syn100  Syn120
CRCS  Precision  0.71  1.00  0.71  0.83  1.00
CRCS  Recall  1.00  1.00  1.00  1.00  1.00
CRCS  F1  0.83  1.00  0.83  0.91  1.00
CRPA  Precision  0.71  1.00  0.83  0.83  1.00
CRPA  Recall  1.00  1.00  1.00  1.00  1.00
CRPA  F1  0.83  1.00  0.91  0.91  1.00
MHPCF  Precision  0.83  0.71  1.00  0.83  0.83
MHPCF  Recall  1.00  1.00  1.00  1.00  1.00
MHPCF  F1  0.91  0.83  1.00  0.91  0.91
MHPCB  Precision  0.83  0.71  1.00  0.83  0.83
MHPCB  Recall  1.00  1.00  1.00  1.00  1.00
MHPCB  F1  0.91  0.83  1.00  0.91  0.91
Then we compared MHPCF and MHPCB with CRCS li2015from () and CRPA jin2012discovery () using three synthetic data sets, Syn12, Syn16 and Syn20. CRCS and CRPA are both designed to explore causal relationships from association rules, and they are also capable of finding both single and combined causes. The results are shown in Table 3, which reports the Precision, Recall and F1 measure. We used an odds ratio greater than 1.5 as the threshold for a significant result in both CRCS and CRPA. We can see that MHPCF and MHPCB both achieve higher accuracy than CRCS and CRPA, based on the known ground truth. CRCS and CRPA both perform very well in terms of recall, but they also include many false positives, since a main aim of these two methods is exploration: they tolerate false positives in pursuit of high recall.
In the next set of experiments, the last five larger synthetic data sets in Table 1 were used. As Table 4 shows, all four algorithms (CRCS, CRPA, MHPCF and MHPCB) recover the ground truth very well from these relatively large data sets.
Data set  No. of single causes  No. of combined causes
CMC  2  0 
German  1  13 
Housevotes84  0  18 
Hypothyroid  3  4 
Krvskp  7  12 
Sick  3  8 
Based on the results of the three sets of experiments, it is reasonable to conclude that MHPCF and MHPCB are capable of finding single and combined causes. Another finding is that the causes (single and combined) identified by MHPCF and MHPCB are always the same, which indicates that the two algorithms achieve consistent results. This is also demonstrated by the results of the two algorithms on all real world data sets, as described in the following.
To investigate combined causes in real world cases, we ran the proposed algorithms on the first six real world data sets in Table 1 for performance evaluation, where MHPCF and MHPCB again return consistent results, as shown in Table 5. The proposed algorithms find many combined causes, and some of those discovered are reasonable as judged by common sense, as shown in Table 6. For example, from the Sick data set it is found that low levels of TT4 (Total T4) and T3 together may result in thyroid disease (T4 and T3 are hormones produced by the thyroid), and being sick combined with a low level of T3 can lead to thyroid disease too. Some interesting combined causes are also discovered in the German data set: if a person owns real estate and has no other installment plans, that person is very likely to have a low default risk.
Sick  T3 & TT4 

sick = true & T3  
German  Checking.account = noaccount & Savings.account 100DM 
Property = real.estate & Other.installment.plans = none 
5.3 Efficiency and Scalability
We ran NaïveH, NaïveS, CRCS, CRPA, MHPCF and MHPCB with various data sets on the same computer with a 3.4 GHz quad-core CPU and 16 GB of memory.
The running time of the algorithms on subsets of the Census data containing 30, 50, 70, 100 and 150 variables with the same sample size (50K) is shown in Figure 2. The two naïve methods are much slower than MHPCF and MHPCB, with NaïveS the least efficient. While the two naïve methods do not scale well with the number of variables, the two proposed algorithms exhibit good scalability.
When applying the algorithms to the synthetic data sets containing different numbers of variables, neither naïve method returned results within 5 hours, so no results for the naïve methods are shown in Figure 3. As the figure shows, both proposed algorithms again scale well.
We then ran the algorithms with 50K, 100K, 150K, 200K and 250K samples respectively, drawn from the Census data set with 100 randomly selected variables; the execution times of MHPCF and MHPCB are shown in Figure 4. No results were obtained for NaïveS, and NaïveH could not handle data sets with more than 50K samples. Again, MHPCB is more efficient and scalable than MHPCF.
To summarise, MHPCF and MHPCB are much faster than the naïve methods, and both proposed algorithms scale well in terms of the number of variables and the number of samples. The experiments also confirm the discussion in Section 4.3 that MHPCB achieves higher efficiency than MHPCF.
6 Conclusion
In practice, it is useful to identify a cause consisting of multiple variables which individually are not causes of the target variable. However, finding such combined causes is challenging because the number of combined variables grows exponentially with the number of individual variables. To the best of our knowledge, there has been very little work on discovering combined causes, and the problem has not been studied in causal Bayesian network research either.
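The combinatorial blow-up can be seen with a quick count: even restricting attention to combined variables formed from only two or three individual variables, the number of candidates is already large, and it multiplies further when each variable takes several values. A minimal illustration:

```python
# Counting candidate variable combinations built from n individual variables.
from math import comb

n = 100
print(comb(n, 2))   # 4950 size-2 variable combinations
print(comb(n, 3))   # 161700 size-3 variable combinations
# Each combination further expands into one candidate per joint value
# assignment, so the search space grows far faster than n itself.
```

This is why naïvely feeding all combined variables into an existing causal discovery method is computationally infeasible.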
In this paper, we have proposed two efficient algorithms to mine combined causes from large data sets. The proposed algorithms are based on a well-designed local causal discovery method, the semi-interleaved HITON-PC algorithm, with novel extensions for dealing with combined causes. Experiments have shown that the proposed algorithms can find single and combined causes with a low number of false discoveries from synthetic data sets, and discover many reasonable combined causes from real world data. Additionally, the algorithms have been shown to scale up well with respect to the number of variables and the number of samples on both synthetic and real world data.
In the near future, we will apply the proposed algorithms to real world problems, such as investigating the mechanisms of gene regulation, where there is evidence that many gene regulators work together to regulate their target genes.
7 Acknowledgement
This work has been supported by the Australian Research Council (ARC) Discovery Project Grant DP140103617.
References
 (1) P. Spirtes, Introduction to causal inference, The Journal of Machine Learning Research 11, 2010, pp. 1643–1662.
 (2) D. Freedman, From association to causation via regression, Advances in Applied Mathematics 18 (1), 1997, pp. 59–110.
 (3) D. Freedman, From association to causation: some remarks on the history of statistics, Statistical Science 1999, pp. 243–258.
 (4) J. J. Shaughnessy, E. B. Zechmeister, Research methods in psychology., Alfred A. Knopf, 1985.
 (5) J. Pearl, Causality: models, reasoning and inference, Cambridge University Press, 2000.
 (6) R. E. Neapolitan, Learning Bayesian networks, 38, Prentice Hall Upper Saddle River, 2004.
 (7) D. Heckerman, A Bayesian approach to learning causal networks, in Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann Publishers Inc., 1995, pp. 285–295.
 (8) C. F. Aliferis, A. Statnikov, I. Tsamardinos, S. Mani, X. D. Koutsoukos, Local causal and Markov blanket induction for causal discovery and feature selection for classification part I: algorithms and empirical evaluation, The Journal of Machine Learning Research 11, 2010, pp. 171–234.
 (9) I. Tsamardinos, C. F. Aliferis, A. Statnikov, Time and sample efficient discovery of Markov blankets and direct causal relations, in: ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2003, pp. 673–678.
 (10) J.P. Pellet, A. Elisseeff, Using Markov blankets for causal structure learning, The Journal of Machine Learning Research 9, 2008, pp. 1295–1342.
 (11) L. R. Novick, P. W. Cheng, Assessing interactive causal influence, Psychological Review 111 (2), 2004, pp. 455.
 (12) G. P. Wagner, M. Pavlicev, J. M. Cheverud, The road to modularity, Nature Reviews Genetics 8 (12), 2007, pp. 921–931.
 (13) P. D'haeseleer, S. Liang, R. Somogyi, Genetic network inference: from co-expression clustering to reverse engineering, Bioinformatics 16 (8), 2000, pp. 707–726.
 (14) J. L. Mackie, Causes and conditions, American Philosophical Quarterly, 1965, pp. 245–264.
 (15) P. Spirtes, C. N. Glymour, R. Scheines, Causation, prediction, and search, 2nd Edition, The MIT Press, 2000.
 (16) K. J. Rothman, Causes, American Journal of Epidemiology 104 (6), 1976, pp. 587–592.
 (17) K. J. Rothman, S. Greenland, Causation and causal inference in epidemiology, American Journal of Public Health 95 (S1), 2005, pp. 144–150.
 (18) J. L. Hill, Bayesian nonparametric modeling for causal inference, Journal of Computational and Graphical Statistics 20 (1), 2011.
 (19) S. Mani, C. F. Aliferis, A. R. Statnikov, Bayesian algorithms for causal data mining, in Neural Information Processing Systems Workshop on Causality: Objectives and Assessment, 2010, pp. 121–136.
 (20) D. Heckerman, D. Geiger, D. M. Chickering, Learning Bayesian networks: the combination of knowledge and statistical data, Machine Learning 20 (3), 1995, pp. 197–243.
 (21) M. B. Messaoud, P. Leray, N. Ben Amor, SemCaDo: a serendipitous strategy for causal discovery and ontology evolution, Knowledge-Based Systems 76, 2015, pp. 79–95.
 (22) G. Kabir, G. Demissie, R. Sadiq, S. Tesfamariam, Integrating failure prediction models for water mains: Bayesian belief network based data fusion, Knowledge-Based Systems 85, 2015, pp. 159–169.
 (23) E. Segal, D. Pe’er, A. Regev, D. Koller, Learning module networks, in Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, 2002, pp. 525–534.
 (24) E. Azizi, E. M. Airoldi, J. E. Galagan, Learning modular structures from network data and node variables, in Proceedings of the 31st International Conference on Machine Learning (ICML), 2014, JMLR: W&CP 32 (1), pp. 1440–1448.
 (25) E. Segal, M. Shapira, A. Regev, D. Pe’er, D. Botstein, D. Koller, N. Friedman, Module networks: identifying regulatory modules and their conditionspecific regulators from gene expression data, Nature genetics 34 (2), 2003, pp. 166–176.
 (26) B. Yet, D. W. R. Marsh, Compatible and incompatible abstractions in Bayesian networks, Knowledge-Based Systems 62, 2014, pp. 84–97.
 (27) T. J. VanderWeele, J. M. Robins, Empirical and counterfactual conditions for sufficient cause interactions, Biometrika 95 (1), 2008, pp. 49–61.
 (28) T. J. VanderWeele, J. M. Robins, The identification of synergism in the sufficient-component-cause framework, Epidemiology 18 (3), 2007, pp. 329–339.
 (29) T. J. VanderWeele, T. S. Richardson, General theory for interactions in sufficient cause models with dichotomous exposures, Annals of Statistics 40 (4), 2012, pp. 2128–2161.
 (30) J. Li, T. D. Le, L. Liu, J. Liu, Z. Jin, B. Sun, Mining causal association rules, in Proceedings of IEEE International Conference on Data Mining Workshop on Causal Discovery (CD), 2013.
 (31) A. M. Euser, C. Zoccali, K. J. Jager, F. W. Dekker, et al., Cohort studies: prospective versus retrospective, Nephron Clinical Practice 113 (3), 2009, pp. 214–217.
 (32) Z. Jin, J. Li, L. Liu, T. D. Le, B.Y. Sun, R. Wang, Discovery of causal rules using partial association, in: IEEE International Conference on Data Mining, 2012, pp. 309–318.
 (33) M. Birch, The detection of partial association, I: the 2 × 2 case, Journal of the Royal Statistical Society, Series B (Methodological), 1964, pp. 313–324.
 (34) J. Li, Z. J. Wang, Controlling the false discovery rate of the association/causality structure learned with the PC algorithm, The Journal of Machine Learning Research 10, 2009, pp. 475–514.
 (35) M. Scutari, Learning Bayesian networks with the bnlearn R package, arXiv preprint arXiv:0908.3817, 2009.
 (36) J. Li, T. D. Le, L. Liu, J. Liu, Z. Jin, B. Sun, S. Ma, From observational studies to causal rule mining, ACM Transactions on Intelligent Systems and Technology (in press), 2015.
 (37) D. Colombo, A. Hauser, M. Kalisch, M. Maechler, Package ‘pcalg’, http://cran.r-project.org/web/packages/pcalg/index.html, 2014.
 (38) A. Asuncion, D. Newman, UCI machine learning repository, http://archive.ics.uci.edu/ml/, 2007.
 (39) R. Kohavi, D. Sommerfield, J. Dougherty, Data mining using MLC++: A machine learning library in C++, in Tools with Artificial Intelligence, IEEE Computer Society Press, 1996, pp. 234–245.