The Grow-Shrink strategy for learning Markov network structures constrained by context-specific independences
Abstract
Markov networks are models for compactly representing complex probability distributions. They are composed of a structure and a set of numerical weights. The structure qualitatively describes independences in the distribution, which can be exploited to factorize the distribution into a set of compact functions. A key application of learning structures from data is to automatically discover knowledge. In practice, structure learning algorithms focused on “knowledge discovery” present a limitation: they use a coarse-grained representation of the structure. As a result, this representation cannot describe context-specific independences. Very recently, an algorithm called CSPC was designed to overcome this limitation, but it has a high computational complexity. This work mitigates this downside by presenting CSGS, an algorithm that uses the Grow-Shrink strategy to reduce unnecessary computations. In an empirical evaluation, the structures learned by CSGS achieve competitive accuracies and lower computational complexity with respect to those obtained by CSPC.
Keywords:
Markov networks, structure learning, context-specific independences, knowledge discovery, canonical models.

1 Introduction
Markov networks are parametric models for compactly representing complex probability distributions over a wide variety of domains. These models are composed of two elements: a structure and a set of numerical weights. The structure plays an important role because it describes a set of independences that hold in the domain, thus making assumptions about the functional form, or factorization, of the distribution [5]. For this reason, the structure is an important source of knowledge discovery, since it depicts intricate patterns of probabilistic (in)dependences between the domain variables. Usually, the structure of a Markov network can be constructed by algorithms using observations taken from an unknown distribution. Interestingly, the constructed structure can be used by human experts for discovering unknown knowledge [16]. For this reason, the problem of structure learning from data has received increasing attention in machine learning [14, 9, 8]. However, Markov network structure learning from data is still challenging. One of the most important problems is that it requires weight learning, which cannot be solved in closed form, requiring a convex optimization with inference as a subroutine. Unfortunately, inference in Markov networks is #P-complete [8].
As a result, structure learning algorithms seek the “best” approximation to the solution structure, making assumptions about the form of the solution space or the objective function used. The choice of these approximations depends on the goal of learning used for designing learning algorithms [8, Chapter 16]. In generative learning, we can find two goals of learning: density estimation, where a structure is “best” when the resulting Markov network is accurate for answering inference queries; and knowledge discovery, where a structure is “best” when it is accurate for qualitatively describing the independences that hold in the distribution. Depending on the goal of learning, we can categorize structure learning algorithms into density estimation algorithms [4, 11] and knowledge discovery algorithms [1, 15]. In this work, we focus on the knowledge discovery goal.
In practice, knowledge discovery algorithms exploit the fact that the structure can be viewed as a set of independences. Thus, for constructing a structure, such algorithms successively make (in)dependence queries to data in order to restrict the number of possible structures, converging toward the solution structure. To achieve good performance in this procedure, knowledge discovery algorithms use a sound and complete representation of the structure: a single undirected graph. A graph can be viewed as an inference engine which efficiently represents and manipulates (in)dependences in polynomial time [14]. Unfortunately, this graph representation cannot capture a type of independences known as context-specific independences [6, 7, 8]. For these cases, knowledge discovery algorithms cannot achieve good results in their goal of learning, because a single graph cannot capture such independences, obscuring the acquisition of knowledge. To overcome this limitation, a novel knowledge discovery algorithm has recently been developed [3]. This algorithm, called CSPC, uses an alternative representation of the structure called canonical models, a particular class of Context-Specific Interaction models (CSI models) [7]. Canonical models allow us to encode context-specific independences by using a set of mutually independent graphs. Using this representation, CSPC can learn more accurate structures than several state-of-the-art algorithms. However, despite the benefits in accuracy, CSPC presents an important downside: it has a high computational complexity, because it must perform a large number of independence queries in comparison to traditional algorithms.
Therefore, this paper focuses on reducing the number of independence queries required for learning canonical models. The aim of this reduction is to achieve accuracies competitive with those of CSPC while avoiding unnecessary queries. To achieve this, we present CSGS, a knowledge discovery algorithm that learns canonical models by using the Grow-Shrink (GS) strategy [12] in a similar way to GSMN, a Markov network structure learning algorithm [1]. Basically, under the assumption of bounded maximum degree, this strategy constructs a structure in polynomial time by identifying the local neighborhood of each variable [12]. In an empirical evaluation, the canonical models learned by CSGS achieve competitive accuracies and lower time complexity with respect to those obtained by CSPC.
2 Background
We introduce our general notation. Hereon, we use the symbol V to denote a finite set of indexes. Lowercase subscripts denote particular indexes, for instance a, b ∈ V; in contrast, uppercase subscripts denote subsets of indexes, for instance W, U, Z ⊆ V. Let X_V be the set of random variables of a domain, where single variables are denoted by single indexes in V, for instance X_a where a ∈ V. We simply use X instead of X_V when V is clear from the context. We focus on the case where the variables take discrete values, that is, the values of any X_a are discrete: val(X_a). For instance, for boolean-valued variables, that is val(X_a) = {0, 1}, the symbols x_a and x̄_a denote the assignments X_a = 1 and X_a = 0, respectively. Moreover, we overload the symbol V to also denote the set of nodes of a graph. Finally, we use 𝒳 for denoting an arbitrary set of complete or canonical assignments, that is, assignments in which all the variables take a fixed value.
2.1 Conditional and contextspecific independences
A set of independence assumptions is commonly called the structure of a distribution, because independences determine the factorization, or functional form, of the distribution. Two of the best-known types of independences are conditional and context-specific independences. The latter have received increasing interest [6, 7, 8, 2, 3], because one conditional independence can be expressed as a set of context-specific independences. Formally, context-specific independences are defined as follows:
Definition 1
Let W, U, Z, C ⊆ V be disjoint subsets of indexes, and let x_C be some assignment in val(X_C). Let p(X) be a probability distribution. We say that variables X_W and X_U are contextually independent given X_Z and the context x_C, denoted by I(X_W ; X_U | X_Z, x_C), iff p satisfies:

p(x_W | x_U, x_Z, x_C) = p(x_W | x_Z, x_C)

for all assignments x_W, x_U, and x_Z; whenever p(x_U, x_Z, x_C) > 0.
As a consequence, if I(X_W ; X_U | X_Z, x_C) holds in p, then it logically follows that p(x_W | x_U, x_Z, x_C) = p(x_W | x_Z, x_C) also holds in p for any assignment x_W, x_U, x_Z. Interestingly, if I(X_W ; X_U | X_Z, x_C) holds for all x_C ∈ val(X_C), then we say that the variables are conditionally independent. Formally,
Definition 2
Let W, U, Z, C ⊆ V be disjoint subsets of indexes, and let p(X) be a probability distribution. We say that variables X_W and X_U are conditionally independent given X_Z and X_C, denoted by I(X_W ; X_U | X_Z, X_C), iff p satisfies:

p(x_W | x_U, x_Z, x_C) = p(x_W | x_Z, x_C)

for all assignments x_W, x_U, x_Z, and x_C; whenever p(x_U, x_Z, x_C) > 0.
Thus, a conditional independence I(X_W ; X_U | X_Z, X_C) that holds in p can be seen as a conjunction of context-specific independences of the form I(X_W ; X_U | X_Z, x_C) for all x_C ∈ val(X_C). Moreover, each context-specific independence I(X_W ; X_U | X_Z, x_C) that holds in p can be seen as a conditional independence that holds in the conditional distribution p(X | x_C) [2].
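The definitions above can be checked numerically. The following Python sketch builds a hypothetical toy distribution over three boolean-valued variables in which X_a and X_b are contextually independent given the context X_c = 0 but dependent given X_c = 1; the distribution, function names, and tolerance are illustrative assumptions, not taken from the cited works:

```python
import itertools

# Hypothetical toy distribution over three boolean variables (X_a, X_b, X_c):
# X_a and X_b are independent in the context X_c = 0, dependent when X_c = 1.
def joint(xa, xb, xc):
    if xc == 0:
        # p factorizes over X_a and X_b in this context: independence holds.
        return 0.5 * (0.7 if xa == 0 else 0.3) * (0.6 if xb == 0 else 0.4)
    # In the context X_c = 1, X_a and X_b are coupled: dependence.
    return 0.5 * (0.4 if xa == xb else 0.1)

def contextually_independent(p, xc, tol=1e-9):
    """Check (X_a _||_ X_b | X_c = xc): p(xa, xb | xc) = p(xa | xc) p(xb | xc)."""
    pc = sum(p(a, b, xc) for a in (0, 1) for b in (0, 1))  # p(xc)
    for a, b in itertools.product((0, 1), repeat=2):
        pab = p(a, b, xc) / pc
        pa = sum(p(a, v, xc) for v in (0, 1)) / pc
        pb = sum(p(v, b, xc) for v in (0, 1)) / pc
        if abs(pab - pa * pb) > tol:
            return False
    return True
```

Here `contextually_independent(joint, 0)` holds while `contextually_independent(joint, 1)` fails, mirroring Definition 1 restricted to single variables.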
2.2 Representation of structures
The independence relation commonly assumes the
Markov properties [9, Section 3.1]; we also
assume that probability distributions are positive
In practice, to overcome the previous limitation, an alternative representation of the structure consists of a set of features F, where each feature is commonly represented as an indicator function (Kronecker's delta), that is, a boolean-valued function associated to an assignment x_W of a subset of variables X_W. Given an arbitrary assignment x, a feature f_{x_W} returns 1 if x is consistent with x_W, and 0 otherwise. A set of features is a more flexible representation than a graph, because the former can encode context-specific independences. For example, an independence of the form I(X_a ; X_b | x_C) is encoded in F iff, for any feature f_{x_W} consistent with the context x_C, the variables X_a and X_b do not appear simultaneously in the set X_W, that is, either a ∉ W or b ∉ W. From a set of features, we can induce a graph by adding an edge between every pair of nodes whose variables appear together in some feature [3]. In a similar way, following our previous example, we can induce a graph from the features consistent with x_C. This graph is known as an instantiated graph G(x_C), namely, a graph whose nodes are associated to the assignment x_C [6]. Unfortunately, a set of features is not as easily interpretable as a single graph, because we cannot efficiently verify independence assertions, since we are required to check all the features in F.
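A minimal sketch of this feature-based representation, assuming a hypothetical feature set over three boolean variables (the variable indexes and partial assignments are illustrative only):

```python
from itertools import combinations

# Hypothetical feature set over variables 0..2; each feature is a partial
# assignment {variable: value}, and it "fires" on x iff x agrees with it.
features = [
    {0: 1, 1: 1},   # X_0 and X_1 appear together when both take value 1
    {1: 0, 2: 1},   # X_1 and X_2 appear together when X_1 = 0 and X_2 = 1
    {2: 0},         # a single-variable feature
]

def fires(feature, x):
    """Indicator function (Kronecker's delta) for one feature."""
    return int(all(x[v] == value for v, value in feature.items()))

def induced_graph(features):
    """Edge between every pair of variables appearing together in a feature."""
    edges = set()
    for f in features:
        edges.update(combinations(sorted(f), 2))
    return edges

def instantiated_graph(features, context):
    """Induce a graph from the features consistent with a partial context."""
    consistent = [f for f in features
                  if all(f.get(v, value) == value for v, value in context.items())]
    return induced_graph(consistent)
```

Here `induced_graph(features)` contains the edges (0, 1) and (1, 2), while `instantiated_graph(features, {1: 1})` drops (1, 2): in the context X_1 = 1, variables X_1 and X_2 never appear together, encoding a context-specific independence.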
A graph representation that overcomes the previous limitations is canonical models [3]. These models are a proper subset of the CSI models [6, 7], and they can capture context-specific independences in a more interpretable way than a set of features. A canonical model is a pair (𝒢, 𝒳), where 𝒢 is a collection of instantiated graphs of the form G(x̄), and 𝒳 is a set of canonical assignments. These instantiated graphs are called canonical graphs, because every graph G(x̄) is associated to a canonical assignment x̄ ∈ 𝒳. In contrast to a single graph, a canonical model requires several canonical graphs for capturing both conditional and context-specific independences. For instance, let us suppose that we want to encode a context-specific independence I(X_a ; X_b | X_Z, x_C) in a canonical model. By Definition 1, this independence implies a set of independences, one for each assignment of the remaining variables. Then, each independence is captured by a particular canonical graph G(x̄), namely one whose canonical assignment x̄ is consistent with the context x_C.
2.3 Markov networks
A Markov network is a parametric model for representing probability distributions in a compact way. This model is defined by a structure and a set of potential functions {φ_k}, where each φ_k is defined over a subset of variables X_{D_k}, known as the scope of φ_k. For discrete domains, a usual representation of the potential functions is a table-based function. Markov networks can represent a very important class of probability distributions called Gibbs distributions, whose functional form is as follows: p(x) = (1/Z) ∏_k φ_k(x_{D_k}), where Z is a global constant, called the partition function, that guarantees the normalization of the product.
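The Gibbs factorization can be sketched as follows, assuming two hypothetical table-based potentials over three boolean variables; the brute-force partition function is only feasible at this toy scale:

```python
from itertools import product

# Hypothetical table-based potentials over three boolean variables; each
# potential maps an assignment of its scope to a non-negative value.
potentials = [
    {"scope": (0, 1), "table": {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}},
    {"scope": (1, 2), "table": {(0, 0): 1.5, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 1.5}},
]

def unnormalized(x, potentials):
    """Product of the potentials evaluated on the projection of x onto each scope."""
    value = 1.0
    for phi in potentials:
        value *= phi["table"][tuple(x[v] for v in phi["scope"])]
    return value

def gibbs(potentials, n_vars=3):
    """p(x) = (1/Z) prod_k phi_k(x_{D_k}); Z sums over all 2^n assignments."""
    assignments = list(product((0, 1), repeat=n_vars))
    z = sum(unnormalized(x, potentials) for x in assignments)
    return {x: unnormalized(x, potentials) / z for x in assignments}

p = gibbs(potentials)
```

The brute-force partition function makes explicit why inference is intractable in general: Z ranges over every complete assignment.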
A Gibbs distribution factorizes over a graph G if every scope X_{D_k} corresponds to a complete subgraph (a.k.a. clique) of the graph G. Without loss of generality, the Gibbs distribution is often factorized by using the maximum cliques of the graph G. For positive distributions, one important theoretical result states the converse [5], that is, p can be represented as a Gibbs distribution (Markov network) that factorizes over G, if G is an I-map.
Proposition 1
Let p be a positive Gibbs distribution. Then, for any canonical assignment x̄ ∈ 𝒳:

1. an edge (a, b) is present in the canonical graph G(x̄) iff the corresponding set of assertions of contextual independence between X_a and X_b is false in p;

2. an edge (a, b) is absent from the canonical graph G(x̄) iff the corresponding set of assertions of contextual independence between X_a and X_b is true in p.
Although a Gibbs distribution makes the structure explicit, it encodes the potential functions as table-based functions, obscuring finer-grained structures such as context-specific independences [8]. For this reason, a commonly used representation of a Markov network is the log-linear model, defined as p(x) = (1/Z) exp(Σ_k w_k f_k(x)). A log-linear model can be constructed from a Gibbs distribution as follows: for the i-th row of the table-based potential function φ_k, an indicator function f_{k,i} is defined whose weight is the logarithm of the value in that row.
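The row-by-row construction of a log-linear model from a table-based potential can be sketched as follows (the table values and function names are illustrative assumptions):

```python
import math

# Convert a hypothetical table-based potential into log-linear features:
# one indicator function per table row, with weight w = log(phi(row)).
def to_loglinear(scope, table):
    return [{"assignment": dict(zip(scope, row)),  # the row as a partial assignment
             "weight": math.log(value)}            # w_{k,i} = log phi_k(row i)
            for row, value in table.items()]

def loglinear_score(feats, x):
    """sum_k w_k f_k(x); an indicator fires iff x matches its row."""
    return sum(f["weight"] for f in feats
               if all(x[v] == val for v, val in f["assignment"].items()))

table = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 0.5, (1, 1): 4.0}
feats = to_loglinear((0, 1), table)
```

Exponentiating the score recovers the original potential value for each row, so the log-linear model represents the same Gibbs distribution with the structure made explicit in the features.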
3 Context-Specific Grow-Shrink algorithm
In this section we present CSGS (Context-Specific Grow-Shrink), a knowledge discovery algorithm for learning the structure of Markov networks by using canonical models as the structure representation. The design of CSGS was inspired by the search strategy used by CSPC for learning canonical models [3], and the GS search strategy for learning graphs [12, 1]. Therefore, CSGS obtains a canonical model by learning a collection of mutually independent canonical graphs, where each canonical graph is learned by using the GS strategy. More precisely, GS obtains a graph in two steps: first, it generalizes an initial, very specific graph (one that makes many independence assumptions) by adding edges; then, the resulting graph is specialized by removing spurious edges. Algorithm 1 shows an overview of CSGS. First, CSGS defines an initial specific canonical model from a set of canonical assignments 𝒳. Subsequently, it constructs each canonical graph by using the GS strategy. For determining the presence or absence of an edge, CSGS uses Proposition 1 as a criterion. The validation of this criterion is realized by eliciting context-specific independences from data in a similar way to CSPC [3, Section 4.3]. Finally, in a similar fashion to CSPC [3, Section 4.4], CSGS uses the resulting canonical model to generate a set of features, enabling the use of standard software packages for weight learning and inference. The remainder of this section is structured around the key elements of CSGS: i) Section 3.1 describes how the initial canonical model is defined; ii) Section 3.2 presents the GS strategy for obtaining the canonical graphs; and iii) Section 3.3 concludes by analyzing the time complexity of CSGS.
3.1 Initial canonical model
The definition of the initial canonical model consists, firstly, of the set of canonical assignments 𝒳. In a similar fashion to CSPC [3], this set is composed of the unique training examples in the dataset. This definition is a consequence of using the data-driven approach, that is, we use only contexts that appear in the data; for the remaining contexts, which do not appear in the data, we assume that they are improbable due to the lack of other information. Lastly, once 𝒳 is defined, we associate the most specific graph to each context x̄ ∈ 𝒳, namely, the empty graph. As a result, in each initial canonical graph, every Markov blanket is empty. The idea behind the GS strategy is to add edges, thus adding nodes to each blanket.
3.2 Grow-Shrink strategy for learning canonical graphs
CSGS uses the GS strategy under the local-to-global approach [15, 11]. In this approach, the structure is obtained by constructing each Markov blanket in turn. In this manner, for each node, the GS strategy determines its Markov blanket in two phases: the grow phase and the shrink phase. The grow phase adds a new edge to the blanket as long as Proposition 1.1 is satisfied in the data. However, due to the node ordering used [12, 1], the grow phase can add nodes that are outside of the blanket, resulting in spurious edges. For this reason, the shrink phase removes an edge as long as Proposition 1.2 is satisfied in the data. Algorithm 2 shows a more detailed description of the construction of a canonical graph. Initially, the canonical graph is empty; then it is generalized by using the local-to-global approach in the main loop, which performs the two steps of GS: the grow phase followed by the shrink phase. In each iteration of the main loop, the Markov blanket is changed by adding/removing edges to/from the current set of edges. Once the main loop has finished, the Markov blankets of all nodes are obtained and, in consequence, the resulting canonical graph encodes context-specific independences.
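The two phases can be sketched as follows, assuming an oracle `indep(a, b, cond)` that answers the independence queries; in CSGS these queries are statistical tests on data for a fixed canonical context, while here a hand-written oracle for the chain 0 - 1 - 2 stands in:

```python
# A minimal sketch of the Grow-Shrink (GS) strategy for one variable's
# Markov blanket, assuming an oracle `indep(a, b, cond)` that answers
# independence queries.
def grow_shrink(target, variables, indep):
    blanket = []
    # Grow phase: add any variable dependent on `target` given the
    # current blanket (Proposition 1.1 as the criterion).
    for v in variables:
        if v != target and not indep(target, v, set(blanket)):
            blanket.append(v)
    # Shrink phase: remove false positives admitted by the visiting order
    # (Proposition 1.2 as the criterion).
    for v in list(blanket):
        if indep(target, v, set(blanket) - {v}):
            blanket.remove(v)
    return set(blanket)

# Hand-written oracle for the chain 0 - 1 - 2: the only non-adjacent pair
# (0, 2) becomes independent once node 1 is in the conditioning set.
def indep(a, b, cond):
    if {a, b} == {0, 2}:
        return 1 in cond
    return False
```

With the unfavorable visiting order [2, 1], the grow phase of `grow_shrink(0, [2, 1], indep)` first admits the spurious node 2, and the shrink phase removes it, returning the true blanket {1}.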
3.3 Asymptotic complexity
As is usual in knowledge discovery algorithms, we analyze the complexity of CSGS by determining the number of independence tests performed for constructing a structure from data. The complexity of performing a test is linear in the number of unique examples in the dataset. However, this cost can be particularly high if the dataset is large. In our implementation of CSGS, we reduce this cost by using an ADTree [13]. We assume that the nodes in the main loop of Algorithm 2 are taken in an unspecified but fixed order, and we bound the maximum degree of a node by a constant τ. Let n be the number of variables, and let the initial canonical graph be empty; we can decompose the analysis into the number of tests performed by the grow and shrink phases. In the grow phase, a test is performed for each candidate edge, resulting in O(n²) tests. At the end of the grow phase, the size of a blanket is bounded at worst, thus the shrink phase performs O(nτ) tests. Additionally, Algorithm 1 performs the GS strategy |𝒳| times, once per initial canonical graph. Therefore, the total complexity is O(|𝒳| n²) independence tests.
4 Empirical evaluation
This section shows experimental results obtained from the structures learned by CSGS and several structure learning algorithms on synthetic datasets. Basically, our experiments highlight the practical utility of our algorithm in a twofold manner. First, we compare the accuracy of the structures learned by CSGS and CSPC, as well as by other state-of-the-art structure learners. Second, we compare the computational complexity of CSGS and CSPC. For evaluating the accuracy of the learned structures, we use the underlying distributions that were sampled to generate the synthetic datasets; since there is a direct correlation between the correctness of the structure and the accuracy of the distribution [5], the accuracy of a structure can be measured by comparing the similarity between the learned and underlying distributions. On the other hand, for evaluating the computational complexity, we report the number of tests performed for constructing the structures.
4.1 Datasets
The datasets of our experiments are those used in [2, 3], sampled from Markov networks with context-specific independences for different numbers of variables n ranging from 6 to 9, with sizes varying from 20 to 100k datapoints. For each n, 10 datasets were sampled from 10 different Markov networks with a fixed structure but randomly chosen weights. For more details, we refer the reader to [3, Appendix B]. Roughly speaking, the underlying structure of these models encodes independence assertions between all pairs of variables in one context of a distinguished variable X_0, the pairs becoming dependent in the other context. In this way, the underlying structure can be seen as two instantiated graphs: a fully connected graph, and a star graph whose central node is X_0. Despite the simplicity of this structure, it cannot be correctly captured by using a single graph, yet it can be captured by sets of features or canonical models. On the other hand, as the maximum degree of the underlying structure is equal to n − 1, learning the structure is a challenging problem [2, 15]. The generated datasets are partitioned into a training set (70%) and a validation set (30%). The validation set is used by the density estimation algorithms to set their tuning parameters. Specifically, they use different tuning parameters for learning several structures from the training set, selecting the one whose pseudo-likelihood on the validation set is maximum. In contrast, the CSGS, CSPC, GSMN and IBMAP-HC algorithms do not use tuning parameters, thus they learn structures by using the whole dataset, i.e. the union of the training and validation sets.
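Under our reading of this description (assuming, hypothetically, that X_0 is the context variable), the two instantiated graphs can be sketched as:

```python
from itertools import combinations

# Sketch of the two instantiated graphs described above for n variables:
# one context yields a fully connected graph, the other a star centered
# on node 0 (the assumed context variable).
def fully_connected(n):
    return set(combinations(range(n), 2))

def star(n, center=0):
    return {(center, v) for v in range(1, n)}

g_dependent = fully_connected(6)    # context where all pairs interact
g_independent = star(6)             # context where X_0 separates the rest
```

The star graph has maximum degree n − 1 at its center, which is what makes the structure hard to learn for algorithms that assume a small bounded degree.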
4.2 Methodology
In this subsection we explain the methodology used for evaluating our approach against several structure learning algorithms. First, we explain which structure learning algorithms are used as competitors and their configuration settings, and then we describe the method used for measuring the accuracy of the learned structures: the Kullback-Leibler divergence (KL) [8, Appendix A].
CSGS is compared against the CSPC (Context-Specific Parent and Children) algorithm and two representative algorithms for each of the knowledge discovery and density estimation goals. The knowledge discovery algorithms are GSMN (Grow-Shrink Markov Network learning algorithm) [1] and IBMAP-HC (IBMAP Hill-Climbing) [15]. For a fair comparison, we use Pearson's χ² as the statistical independence test with the same significance level for CSGS, CSPC and GSMN, but not for IBMAP-HC, which only works with the Bayesian statistical test with a fixed threshold. On the other hand, the density estimation algorithms are GSSL (Generate Select Structure Learning) [4] and DTSL (Decision Tree Structure Learner) [11]. For a fair comparison, we replicate the recommended tuning parameters for both algorithms detailed in [4] and [10], respectively. The KL divergence is a “distance measure” widely used to evaluate how similar two distributions are. Thus, using the learned structures, we obtain Markov networks by learning their weights with pseudo-likelihood.
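The KL divergence itself can be sketched as follows for discrete distributions with a shared support (the two example distributions are illustrative):

```python
import math

# KL(p || q) for discrete distributions over a shared support: the
# "distance measure" used to compare learned and underlying networks.
def kl_divergence(p, q):
    return sum(pv * math.log(pv / q[x]) for x, pv in p.items() if pv > 0.0)

# Two illustrative distributions over pairs of boolean values.
p = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}
q = {"00": 0.40, "01": 0.10, "10": 0.10, "11": 0.40}
```

`kl_divergence(p, p)` is zero and `kl_divergence(p, q)` is strictly positive; the closer a learned network's distribution is to the underlying one, the smaller the divergence.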
4.3 Results of experimentation
Figure 1 presents the KL divergences computed from the structures learned by the different algorithms. For reference, Figure 1 also shows the KL divergence computed by using a Markov network whose structure is the underlying one, showing the best KL divergence that can be obtained. In these results, we can see three important trends. First, the structures learned by CSGS reach divergences similar to those of CSPC in most cases. Second, in most cases, the divergences obtained by CSGS and CSPC are better than those obtained by the other structure learners. Finally, the divergences of CSGS and CSPC are the closest to those obtained by the underlying structure. These trends allow us to conclude that the structures learned by CSGS and CSPC can encode the context-specific independences present in the data, resulting in Markov networks more accurate than those obtained by the remaining algorithms. Figure 2 presents the number of tests performed by CSGS and CSPC for learning the structures previously used for computing the KL divergences. As shown, the number of tests performed by CSGS is smaller than the number performed by CSPC, and the difference between the two increases dramatically as the amount of data increases. These results show the great impact of using the GS strategy for learning canonical models. In conclusion, the results in both figures show that CSGS is an efficient alternative to CSPC for learning canonical models.
5 Conclusions and future work
In this work we presented CSGS, a new knowledge discovery algorithm for learning Markov network structures by using canonical models. CSGS is similar to the CSPC algorithm [3], except that CSGS uses an alternative search strategy called Grow-Shrink [12, 1], which avoids performing unnecessary independence tests. We evaluated our algorithm against CSPC and several state-of-the-art learning algorithms on synthetic datasets. In our results, CSGS learned structures with accuracy similar to CSPC's while performing a reduced number of tests. Future work will focus on further reducing the computational complexity and improving the quality of the learned structures by using alternative search strategies: for instance, IBMAP-HC on the side of knowledge discovery algorithms [15], and GSSL on the side of density estimation algorithms [4].
Footnotes
 A distribution p is positive if p(x) > 0 for all x.
 We simply use when the structure from which the Markov blanket is defined is clear from the context.
 A structure is an I-map for p if every independence described by the structure holds in p.
 We further refer the readers to Section 3.2.1 in [9] and Section 4.3.2 in [8] for more details about Markov properties on undirected graphs.
 Additional empirical results are available in the online appendix http://dharma.frm.utn.edu.ar/papers/iberamia14/supplementaryinformationoncsgs.pdf
 http://dharma.frm.utn.edu.ar/papers/iberamia14
 Weight learning was performed using version 0.5.0 of the Libra toolkit (http://libra.cs.uoregon.edu/).
References
 Bromberg, F., Margaritis, D., Honavar, V.: Efficient Markov network structure discovery using independence tests. Journal of Artificial Intelligence Research 35(2), 449 (2009)
 Edera, A., Schlüter, F., Bromberg, F.: Learning Markov networks with context-specific independences. In: The 25th International Conference on Tools with Artificial Intelligence, Herndon, VA, USA, November 4-6, 2013. pp. 553–560. IEEE (2013)
 Edera, A., Schlüter, F., Bromberg, F.: Learning Markov networks structures constrained by context-specific independences. arXiv preprint 1405.0222v1 (2014), http://arxiv.org/abs/1405.0222
 Haaren, J.V., Davis, J.: Markov network structure learning: A randomized feature generation approach. In: Proceedings of the Twenty-Sixth National Conference on Artificial Intelligence. AAAI Press (2012)
 Hammersley, J.M., Clifford, P.: Markov fields on finite graphs and lattices. Unpublished manuscript (1971)
 Højsgaard, S.: Yggdrasil: a statistical package for learning split models. In: Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence. pp. 274–281. Morgan Kaufmann Publishers Inc. (2000)
 Højsgaard, S.: Statistical inference in context specific interaction models for contingency tables. Scandinavian journal of statistics 31(1), 143–158 (2004)
 Koller, D., Friedman, N.: Probabilistic Graphical Models: Principles and Techniques. MIT Press, Cambridge (2009)
 Lauritzen, S.L.: Graphical models. Oxford University Press (1996)
 Lowd, D., Davis, J.: Learning Markov network structure with decision trees. In: Data Mining (ICDM), 2010 IEEE 10th International Conference on. pp. 334–343. IEEE (2010)
 Lowd, D., Davis, J.: Improving Markov network structure learning using decision trees. Journal of Machine Learning Research 15, 501–532 (2014), http://jmlr.org/papers/v15/lowd14a.html
 Margaritis, D., Thrun, S.: Bayesian network induction via local neighborhoods. Tech. rep., DTIC Document (2000)
 Moore, A., Lee, M.S.: Cached Sufficient Statistics for Efficient Machine Learning with Large Datasets. Journal of Artificial Intelligence Research 8, 67–91 (1998)
 Pearl, J.: Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, Inc., 1st edn. (1988)
 Schlüter, F., Bromberg, F., Edera, A.: The IBMAP approach for Markov network structure learning. Annals of Mathematics and Artificial Intelligence pp. 1–27 (2014), http://dx.doi.org/10.1007/s10472-014-9419-5
 Smith, V.A., Yu, J., Smulders, T.V., Hartemink, A.J., Jarvis, E.D.: Computational inference of neural information flow networks. PLoS computational biology 2(11), e161 (2006)