Learning Data Dependency with Communication Cost
Abstract
In this paper, we consider the problem of recovering a graph that represents the statistical data dependency among nodes for a set of data samples generated by nodes, which provides the basic structure to perform an inference task, such as MAP (maximum a posteriori). This problem is referred to as structure learning. When nodes are spatially separated in different locations, running an inference algorithm requires a non-negligible amount of message passing, incurring some communication cost. Since the learnt edge structure of the data dependency graph and the physical connectivity graph often differ substantially, we inevitably face a tradeoff between the accuracy of structure learning and the cost we need to pay to perform a given message-passing based inference task. In this paper, we formalize this tradeoff as an optimization problem that outputs the data dependency graph jointly considering learning accuracy and message-passing cost. We focus on a distributed MAP as the target inference task due to its popularity, and consider two different implementations, ASYNCMAP and SYNCMAP, which have different message-passing mechanisms and thus different cost structures. For ASYNCMAP, we propose an optimal polynomial-time learning algorithm, motivated by the problem of finding a maximum weight spanning tree. For SYNCMAP, we first prove that the problem is NP-hard and propose a greedy heuristic. For both implementations, we then quantify how the probability that the resulting data graphs from those learning algorithms differ from the ideal data graph decays as the number of data samples grows, using the large deviation principle. The decaying rate is characterized by topological structures of both the original data dependency and physical connectivity graphs as well as the degree of the tradeoff, which provides a guideline on how many samples are necessary to obtain a certain learning accuracy.
We validate our theoretical findings through extensive simulations, which confirm a good match with the analysis.
1 Introduction
In many online/offline systems with spatially separated agents (or nodes), a variety of applications involve distributed in-network statistical inference tasks, which have been widely studied, exploiting given knowledge of statistical dependencies among agents. As one example, in sensor networks with multiple targets, each sensor node measures target-specific information in its coverage area (e.g., position, direction, distance), which is further correlated across sensors. One well-recognized inference problem is data association, which determines the correct match between measurements of sensors and target tracks by maximum a posteriori (MAP) estimation executed in a distributed fashion by exchanging information messages. Other examples include target tracking and detection/estimation in sensor networks [1, 2, 3, 4], and de-anonymization and rumor/infection propagation in social networks [5, 6, 7, 8].
To solve these distributed in-network inference problems, it is of crucial importance to understand how data from nodes are interdependent. To that end, the notion of a graphical model has been one of the powerful frameworks in machine learning for succinctly modeling statistical uncertainty, where each node in the graphical model corresponds to a random variable and each edge specifies the statistical dependency between random variables. A wide variety of scalable message-passing inference algorithms on graphical models have been developed, examples of which include belief propagation (BP) and max-product, with certain convergence and accuracy guarantees [9, 10, 11, 12]. This graphical model, which we also call the data dependency graph or simply data graph throughout this paper, is not given a priori, and it should be learnt only from a given set of data samples from nodes. This problem, referred to as graph learning or structure learning [13, 14, 15, 16], has been an active research topic in statistical machine learning.
In this paper, for a collection of data sample vectors generated by nodes, we study a problem of graph learning that also considers the communication cost incurred by the distributed in-network inference algorithm being applied to the learnt data graph. Physical communication cost often becomes a critical issue, for example, exerting a significant impact on the lifetime of networked sensors. Clearly, there exists a tradeoff between the amount of incurred cost and the learning accuracy of the data graph. Figure 1(a) illustrates the physical connection among sensors, which differs from the exact data dependency graph in Figure 1(b). Two of the sensor nodes have non-negligible data dependency, requiring message-passing when performing inference, but they are three hops away from each other, incurring a large amount of communication cost. In this case, one may want to sacrifice the estimation accuracy a little and reduce communication cost by utilizing the data graph shown in Figure 1(c). As done in many prior works on graph learning [16, 17, 18, 19], we restrict our attention to tree-structured data graphs due to their simplicity, yet large expressive power, and other benefits, e.g., some inference algorithms such as BP become optimal over tree-structured data graphs.
We now summarize our contributions in what follows:

We first formulate an optimization problem of learning the data graph, having as an objective function the weighted sum of learning accuracy and the amount of cost that will be incurred by a distributed inference algorithm. Out of many possible inference algorithms, we consider the maximum a posteriori (MAP) estimator, which is popular for many inference tasks, and two versions of the MAP implementation: (i) asynchronous and (ii) synchronous, which we call ASYNCMAP and SYNCMAP. These implementations have different patterns of passing messages, thus leading to different forms of communication costs, which is useful for understanding how a distributed algorithm's cost affects the resulting data dependency graph.

Next, for ASYNCMAP we develop a polynomial-time algorithm to find an optimal (cost-efficient) data graph, which corresponds to simply finding a maximum weight spanning tree. This simplicity stems from the cost structure of ASYNCMAP, which is characterized only by the sum of all 'localized' edge costs. In sharp contrast to ASYNCMAP, for SYNCMAP we first prove that the problem is computationally intractable (i.e., NP-hard) in terms of the number of nodes, via a reduction from the Exact Cover by 3-Sets problem. The hardness is due to the fact that the cost structure of SYNCMAP depends on the diameter of the resulting tree, which is 'global' information involving the entire topology. As a practical solution, we propose a polynomial-time greedy heuristic to recover a suboptimal, but cost-efficient, data graph.

Finally, for both ASYNCMAP and SYNCMAP, we quantify how the probability that the resulting (cost-efficient) data graph for a finite number of samples differs from the ideal data graph decays as the sample size increases, using the large deviation principle (LDP). The error exponent is characterized for each of ASYNCMAP and SYNCMAP by topological information of the physical/data graphs, the cost structure of each inference mechanism, and the degree of the tradeoff.
To validate our theoretical results, we perform numerical simulations on a pair of physical and data graphs with 20 nodes, where we quantitatively analyze (i) how estimating a data graph considering communication cost affects the resulting estimate for various values of the tradeoff parameter between inference accuracy and cost, and (ii) how the estimation error decays as the sample size increases.
1.1 Related Work
A variety of applications involving distributed in-network statistical inference tasks among spatially interconnected agents or sensors have been widely studied in many online/offline systems. In sensor networks, where the knowledge of statistical dependencies among sensed data is given, the tasks of target tracking [20, 21, 22], detection [23], and parameter estimation [24, 2] are examples; see [4] for a survey. In social networks, where an underlying social phenomenon of interest, such as voting models or rumor/opinion propagation [7], evolves over a given social interaction graph, the inference tasks of distributed consensus-based estimation [6], de-anonymization of community-structured social networks [8], and distributed observability [5] have been studied. Message-passing has emerged as an efficient procedure for inference over graphical models, which provide a framework for succinctly modeling the statistical uncertainty of multi-agent systems. Examples include belief propagation (BP) [9] and max-product [12, 10] and references therein. They are known to be exact and efficient when the underlying graphical model is a tree [9, 11]. Recent research progress has been made on scalable message-passing for general graphs, e.g., junction trees [25] and graphs with loops [26].
In the area of structure learning, several algorithms have been proposed in the literature to recover statistical dependencies from a set of data samples [13, 14, 15, 16]. It is known that exact structure learning for general graphical models is NP-hard. Research on structure learning for special graphical models includes: maximum likelihood estimation (MLE) for tree graphs [17, 16], regularized MLE for binary undirected graphs [13], and convexified MLE for Gaussian graphical models, known as Lasso [14]. Theoretical guarantees for the learning accuracy as the number of data samples grows have been established, e.g., for tree graphs [27], binary undirected graphs [13], a class of Ising models [28], and Bayesian networks [29]. Our work differs from all of the above in that we consider the physical communication cost incurred by target inference algorithms when learning the data dependency graph.
There exists an array of work addressing the tradeoff between inference quality and cost when running distributed in-network inference on a known data graph, summarized in two directions: (i) developing novel inference algorithms with less communication of messages, or (ii) constructing a new graphical model upon which existing distributed in-network inference algorithms are performed with fewer communication resources. In (i), the need to conserve resources has motivated new message-passing schemes where the messages are compressed by allowing some approximation error in message values [21, 30, 26, 31], and/or some messages are censored (i.e., not transmitted) [20]. In (ii), most of the related works focus on constructing a junction tree that minimizes the inference cost [3], building a data dependency structure upon which message-passing runs energy-efficiently, where communications among all agents are assumed to be one hop [1], or optimizing the data dependency structure via a multi-objective problem of inference quality and energy, assuming that the exact statistical dependencies are given as a complete graph [32]. While the main interest of this area has been in characterizing the desirable dependency structure given complete knowledge of accurate data dependencies, our work is motivated by the practical situation where one can only observe a finite number of data sample vectors of nodes, which do not provide such complete knowledge. Therefore, our interest lies in learning the desirable data dependencies from a finite number of data samples.
2 Model and Preliminary
2.1 Model
Physical graph. We consider a (connected) physical network with a set of nodes and links, where each node corresponds to an agent such as a sensor or an individual, and each link corresponds to a physical connection between two nodes. For example, in sensor networks where nodes have wireless radios, a link is established when the two corresponding nodes are within each other's communication range.
Data samples. Each node generates binary data, denoted by
Data graph via graphical model. The underlying statistical dependency is often understood through the framework of graphical models, which have been a popular tool for modeling uncertainty by a graph structure, where each node corresponds to a random variable and each edge captures the probabilistic interaction between nodes. In particular, we model the data distribution as an undirected graph, which we call the data graph, consisting of the same set of nodes as the physical graph and the nodes' statistical dependencies captured by an edge structure: any two non-adjacent random variables are conditionally independent given all other variables, i.e., for any ,
(1) 
In this paper, we limit our focus to the tree-structured data graph (thus simply data tree), for which let and be the set of all spanning trees and the set of all tree data distributions over , respectively, i.e., we assume and . The tree data graph is a class of graphical models that has received considerable attention in the literature [17, 19], since it possesses the following factorization property:
(2) 
where and are the marginals on node and edge , respectively. The tree-structured data graph is known to strike a good balance between expressive power and computational tractability. In particular, the distribution in (2) is completely specified by the set of edges and their pairwise marginals. Thus, if a distribution has the factorization property in (2), there exists a unique tree corresponding to it. With slight abuse of notation, we henceforth denote by the unique data tree of a tree distribution. Figure 1 shows an example of the physical graph and two data graphs with nodes.
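As a numerical aside, the factorization property (2) is easy to verify directly. The sketch below (all marginals are hypothetical, chosen only for illustration) evaluates a tree-factorized joint over a 3-node chain of binary variables and checks that it sums to one:

```python
from itertools import product

def tree_joint(x, node_marg, edge_marg, edges):
    """Evaluate the tree factorization (2): the product of node marginals
    times, for each tree edge, the ratio P_ij / (P_i * P_j)."""
    p = 1.0
    for i, pi in node_marg.items():
        p *= pi[x[i]]
    for (i, j) in edges:
        pij = edge_marg[(i, j)][(x[i], x[j])]
        p *= pij / (node_marg[i][x[i]] * node_marg[j][x[j]])
    return p

# Hypothetical 3-node chain 0-1-2 with binary variables and uniform
# node marginals; each pairwise marginal is consistent with them.
node_marg = {i: {0: 0.5, 1: 0.5} for i in range(3)}
edge_marg = {
    (0, 1): {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4},
    (1, 2): {(0, 0): 0.35, (0, 1): 0.15, (1, 0): 0.15, (1, 1): 0.35},
}
edges = [(0, 1), (1, 2)]

total = sum(tree_joint(x, node_marg, edge_marg, edges)
            for x in product((0, 1), repeat=3))
print(round(total, 10))  # a valid joint distribution must sum to 1.0
```

Because each pairwise marginal is consistent with the node marginals, summing out the leaves telescopes and the total mass is exactly one.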
2.2 Goal: Costefficient Learning of Data Graph
Learning data graph: What and why? To understand the underlying data dependency (2), it is enough to learn the structure of the data graph from the observed samples, which is known as the problem of (data graph) structure learning. Formally, when we are given a set of i.i.d. samples generated from an unknown (tree) data distribution on a data tree , a structure learning algorithm is a (possibly randomized) map defined by:
The quality of this algorithm is evaluated by how "close" the output is to the original data graph.
Distributed inference on data graph. One of the practical goals of estimating the data tree from a set of data samples is to perform an inference task based on it. Thus, in many applications, the primary interest is not in the data itself but rather in how to exploit the data dependency for reliable decision making, such as target tracking, detection, and estimation in sensor and/or social networks, which involves statistical inference about the networks described by a data graph. One example of an inference task is MAP (maximum a posteriori) estimation. Distributed in-network inference has been widely studied with the help of various distributed message-passing algorithms on graphical models. In particular, for a specific inference problem, a message between two nodes contains information on the influence that one node exerts on another, which is obtained from the values contained in neighboring messages over an estimated data graph . One critical issue of message-passing based inference algorithms is that messages are often passed along multi-hop paths on the physical graph , which incurs some amount of communication cost. Then, assuming that some inference algorithm will be run on the estimated data graph , such data graph learning faces a tradeoff between the accuracy of the learnt graph (i.e., how close the learnt graph is to the original data graph) and the communication cost generated by performing the distributed inference.
Goal: Cost-efficient data graph learning. Given observed samples from the unknown data distribution , our objective is to estimate a cost-efficient data tree that captures the tradeoff between (i) inference accuracy and (ii) communication cost for inference. For tree distributions, finding a distribution naturally gives rise to the corresponding data tree, as mentioned earlier. Thus, it is natural to find the tree distribution that solves the following optimization problem: for a constant parameter and a fixed inference algorithm
(3) 
where is the empirical distribution of , is some distance metric between two distributions, and is the communication cost paid by running an inference algorithm with respect to the data tree over the physical graph . Recall that is the data tree for the tree distribution . The parameter controls how much we prioritize the communication cost relative to the inference accuracy. Note that as the sample size grows, the empirical distribution converges to the original data distribution, which motivates solving CDG().
Then, this paper aims at answering the following two questions:

What are good data-tree learning algorithms that compute by solving CDG()? In Section 3, we consider the MAP estimator as the applied inference algorithm and its two implementations having different cost functions, for which we propose two cost-efficient learning algorithms.

How fast does converge to as the number of samples grows? We use the large deviation principle (LDP) to characterize the decaying rate of the probability that , for the two different MAP implementations, in Section 4.
In this paper, we use the popular KL divergence as the distance metric for inference accuracy, denoted by , where for two distributions and , . For notational simplicity, we simply denote by the solution of CDG() throughout this paper.
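For concreteness, the KL divergence used as the accuracy metric can be computed directly from its definition; the toy distributions below are hypothetical:

```python
import math

def kl_divergence(p, q):
    """D(p || q) = sum_x p(x) * log(p(x) / q(x)), in nats.
    Assumes q(x) > 0 wherever p(x) > 0."""
    return sum(px * math.log(px / q[x]) for x, px in p.items() if px > 0)

p = {0: 0.7, 1: 0.3}
q = {0: 0.5, 1: 0.5}
print(kl_divergence(p, p))       # 0.0: a distribution is at divergence 0 from itself
print(kl_divergence(p, q) > 0)   # True: non-negativity (Gibbs' inequality)
```

Note that the KL divergence is asymmetric, which is why the order of arguments in the objective of CDG() matters.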
3 Costefficient Data Graph Learning Algorithms
In this paper, out of many possible inference tasks, we consider maximum a posteriori (MAP) estimation, which is popularly applied in many applications such as data association for multi-target tracking in sensor networks and de-anonymization of community-structured social networks [8].
3.1 Distributed MAP and Cost
Distributed MAP on treestructured data graph. The MAP estimator of some tree distribution on its associated data tree is given by:
(4) 
where we use and for simplicity. A standard message-passing algorithm for the distributed MAP is the max-product algorithm, which defines a message from node to at the th iteration with . Each node exchanges messages with its neighbors on the data tree , and these messages are updated over time in an iterative fashion by the following rule: at the th iteration,
(5) 
with the normalizing constant to make the sum of all message values equal , where denotes the neighboring nodes of .
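To make the update rule (5) concrete, the sketch below runs synchronous max-product on a small hypothetical tree (a 4-node star with made-up node and pairwise potentials) and checks the resulting MAP assignment against brute force; on a tree the messages are exact after at least diameter-many iterations:

```python
import math
from itertools import product

# Hypothetical 4-node star (node 0 is the hub), binary states, with
# node potentials phi and agreement-favoring pairwise potentials psi.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (0, 3)]
phi = {0: [0.6, 0.4], 1: [0.3, 0.7], 2: [0.5, 0.5], 3: [0.2, 0.8]}
psi = {e: {(a, b): (1.5 if a == b else 0.5) for a in (0, 1) for b in (0, 1)}
       for e in edges}

nbrs = {i: [] for i in nodes}
for i, j in edges:
    nbrs[i].append(j)
    nbrs[j].append(i)

def pot(i, j, a, b):
    # pairwise potential, handling either edge orientation
    return psi[(i, j)][(a, b)] if (i, j) in psi else psi[(j, i)][(b, a)]

# Synchronous parallel update, as in Eq. (5): at every iteration each node
# sends a (normalized) message to all of its neighbors.
msg = {(i, j): [1.0, 1.0] for i in nodes for j in nbrs[i]}
for _ in range(len(nodes)):  # >= diameter iterations suffices on a tree
    new = {}
    for (i, j) in msg:
        vals = [max(phi[i][a] * pot(i, j, a, b) *
                    math.prod(msg[(k, i)][a] for k in nbrs[i] if k != j)
                    for a in (0, 1))
                for b in (0, 1)]
        z = sum(vals)  # normalize so the message values sum to 1
        new[(i, j)] = [v / z for v in vals]
    msg = new

# MAP assignment from the max-marginals (beliefs).
map_assign = [max((0, 1), key=lambda a: phi[i][a] *
                  math.prod(msg[(k, i)][a] for k in nbrs[i]))
              for i in nodes]

# Brute-force check over all 2^4 assignments.
brute = max(product((0, 1), repeat=4),
            key=lambda x: math.prod(phi[i][x[i]] for i in nodes) *
                          math.prod(pot(i, j, x[i], x[j]) for i, j in edges))
print(map_assign, map_assign == list(brute))
```

With these illustrative potentials the agreement-favoring couplings pull every node toward state 1, and max-product recovers the same assignment as exhaustive search.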
Communication cost of distributed MAP. The communication cost of MAP is paid depending on the actual protocol that specifies how to schedule message-passing procedures. Two natural message-passing protocols studied in the literature are: (a) asynchronous depth-first (unicast) update [33] and (b) synchronous (broadcast) parallel update [9]. Both protocols for a tree distribution with its data tree have been shown to be consistent in that the message update (5) converges to a unique fixed point , which defines the exact MAP assignment in (4) as for each . We denote the cost of a single message-passing over an edge , under a given physical graph , as or . Recall that message passing over may need to be done over a multi-hop path on the physical graph . One simple example of is the shortest path distance from node to in . Then, the two protocols incur the communication costs elaborated in what follows:

Asynchronous: In the asynchronous protocol (simply ASYNCMAP), one node is arbitrarily picked as a root, and messages are passed from the leaves upwards to the root, then back downwards to the leaves. This involves a total of messages upon termination. Thus, the communication cost is:
(6) 
Synchronous: In the synchronous protocol (simply SYNCMAP), at each iteration, every node sends messages to all of its neighbors. Since the diameter is the minimum amount of time required for a message to pass between the two most distant nodes in , this protocol involves at most iterations with a total of messages. Thus, we have the following cost:
(7)
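To illustrate the two cost structures, the sketch below uses shortest-path hop count on a hypothetical 5-node ring as the per-message cost, and instantiates one plausible reading of (6) and (7): the asynchronous cost charges each data-tree edge twice (one upward and one downward message), while the synchronous cost additionally multiplies by the data tree's diameter (the iteration count). The graphs and constants are illustrative assumptions, not the paper's exact expressions:

```python
from collections import deque

def hop_distance(adj, s, t):
    """BFS hop count on the physical graph: a simple per-message cost c(i, j)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    raise ValueError("physical graph is disconnected")

def tree_diameter(adj):
    """Diameter (in hops) of a tree via the classic double-BFS trick."""
    def farthest(s):
        dist = {s: 0}
        q = deque([s])
        far = s
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    if dist[v] > dist[far]:
                        far = v
                    q.append(v)
        return far, dist[far]
    a, _ = farthest(next(iter(adj)))
    _, d = farthest(a)
    return d

# Hypothetical physical graph: a 5-node ring.
phys = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
# A candidate data tree; edge (1, 4) spans 2 physical hops (1-0-4).
data_tree = [(0, 1), (1, 2), (2, 3), (1, 4)]
tree_adj = {i: [] for i in range(5)}
for i, j in data_tree:
    tree_adj[i].append(j)
    tree_adj[j].append(i)

per_edge = {e: hop_distance(phys, *e) for e in data_tree}
cost_async = 2 * sum(per_edge.values())                        # one up + one down pass
cost_sync = tree_diameter(tree_adj) * 2 * sum(per_edge.values())
print(per_edge[(1, 4)], cost_async, cost_sync)
```

Note how the synchronous cost scales with the tree's diameter, which is exactly the 'global' quantity that later makes the SYNCMAP learning problem hard.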
In the next subsection, we will use the above two cost functions for two different learning algorithms for CDG() in (3) to estimate two costefficient data trees.
3.2 Algorithm for Asynchronous MAP
Using the cost function for the asynchronous MAP in (6), the original optimization problem CDG() is recast into:
(8)  
(9) 
We now describe ASYNCALGO, which computes in (8) and thus estimates the cost-efficient data tree, in Algorithm 1. As we see, the algorithm is remarkably simple. Using the given data samples, we construct a weighted complete graph, where the weight of each edge is a combination of the mutual information between nodes and with respect to the empirical distribution obtained from the data samples, and the per-message cost, as in (10). Then, we run an algorithm that computes the maximum weight spanning tree, e.g., Prim's or Kruskal's algorithm, and the resulting spanning tree is the output.
and for each possible edge we initialize its weight by:
(10) 
where is the mutual information between two endpoints of edge with respect to a given joint distribution
Run a maximum weight spanning tree algorithm for and save its resulting spanning tree at .
Return
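A compact end-to-end sketch of ASYNCALGO: build edge weights as empirical mutual information minus a tradeoff parameter times the per-message cost (our reading of (10)), then run Kruskal's maximum weight spanning tree with a union-find. The data-generating chain, cost values, and tradeoff parameters below are all hypothetical:

```python
import math
import random
from collections import Counter
from itertools import combinations

def empirical_mi(samples, i, j):
    """Empirical mutual information (in nats) between coordinates i and j."""
    n = len(samples)
    pij = Counter((s[i], s[j]) for s in samples)
    pi = Counter(s[i] for s in samples)
    pj = Counter(s[j] for s in samples)
    return sum((c / n) * math.log(c * n / (pi[a] * pj[b]))
               for (a, b), c in pij.items())

def async_algo(samples, num_nodes, cost, beta):
    """Kruskal's maximum weight spanning tree under the weight
    I_hat(i, j) - beta * c(i, j), cf. Eq. (10)."""
    parent = list(range(num_nodes))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    w = {e: empirical_mi(samples, *e) - beta * cost[e]
         for e in combinations(range(num_nodes), 2)}
    tree = []
    for i, j in sorted(w, key=w.get, reverse=True):
        ri, rj = find(i), find(j)
        if ri != rj:         # adding (i, j) creates no cycle
            parent[ri] = rj
            tree.append((i, j))
    return sorted(tree)

# Hypothetical data: a Markov chain X0 -> X1 -> X2 with 0.9 'copy' probability.
random.seed(0)
samples = []
for _ in range(2000):
    x0 = int(random.random() < 0.5)
    x1 = x0 if random.random() < 0.9 else 1 - x0
    x2 = x1 if random.random() < 0.9 else 1 - x1
    samples.append((x0, x1, x2))

uniform_cost = {(0, 1): 1, (0, 2): 1, (1, 2): 1}
print(async_algo(samples, 3, uniform_cost, beta=0.0))  # pure Chow-Liu behavior

# If edge (1, 2) is physically far (cost 5), the cost term can reroute the tree.
skewed_cost = {(0, 1): 1, (0, 2): 1, (1, 2): 5}
print(async_algo(samples, 3, skewed_cost, beta=0.05))
```

With zero tradeoff weight the algorithm reduces to Chow-Liu and recovers the chain; with a large cost on the far edge, the weaker but cheaper edge (0, 2) is chosen instead, which is exactly the accuracy-for-cost exchange the formulation targets.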
Correctness of ASYNCALGO. We now establish the correctness of the above algorithm in the sense that we obtain the data tree corresponding to the optimal distribution in (8), as explained in what follows. For some tree distribution (thus satisfying the factorization property in (2)), we have:
(11)  
(12) 
where is the entropy, and equality holds when the pairwise marginals over the edges of a fixed are set to those of , i.e., for all . Since the entropy terms are constant w.r.t. , it is straightforward that the structure of the estimator of CDGA in (8) is given by:
(13)  
(14) 
Then, it is easy to see that (13) requires finding the maximum weight spanning tree using as the edge 's weight, which a standard maximum weight spanning tree (MWST) algorithm computes in polynomial time.
3.3 Algorithm for Synchronous MAP
Similarly to ASYNCMAP, using the cost in (7), the original optimization problem CDG() is recast into:
(15)  
(16) 
Following similar arguments to those in Section 3.2, the structure of the above estimator of CDGS in (15) is given by
(17)  
(18) 
We comment that this optimization is nontrivial in that the objective function contains the diameter of the tree, which can be computed only when the solution is fully characterized.
Hardness. The key difference in the cost function of SYNCMAP from that of ASYNCMAP is simply the presence of the diameter factor. However, this simple difference completely changes the hardness of learning the optimal data tree in SYNCMAP, as formally stated in the next theorem.
Theorem 1 (Hardness of CdgS).
For any parameter , obtaining the optimal distribution in CDGS, and thus its associated data tree, is NP-hard with respect to the number of nodes.
Proof sketch. Due to space limitations, we present the full proof of Theorem 1 in Appendix A.1 and only provide its sketch here. The key step in the proof is a reduction from the well-known NP-complete Exact Cover by 3-Sets problem, which we simply call X3C, to CDGS in (15). In [34], the bounded diameter minimum weight spanning tree (BDMST) problem, which finds the MWST with diameter below a given bound for fixed edge weights, is shown to be NP-hard via a reduction from X3C. The main technical challenge in CDGS lies in that the edge weights are diameter-dependent, via the form of in (17), where the weights become smaller as the diameter grows. Therefore, the optimum of (17) is attained at a tree with small diameter. If we consider a fixed diameter so that the edge weights are set to constant values, the problem becomes similar to the BDMST problem. To prove NP-hardness of our problem, we first construct a specific tree distribution and cost functions at which the optimal solution of CDGS must have a certain diameter (a diameter of in our proof); we then show that CDGS for the weights of that diameter has an optimal solution of that diameter if and only if the X3C problem has a solution. Following the reduction for the BDMST problem, we construct and , under which (i) a tree with smaller diameter does not attain the optimal solution due to its structural limitations (forcing the small diameter), and (ii) the edge weights for larger diameters become too small to achieve the optimal solution of CDGS. The remaining steps to verify the reduction from the X3C problem follow the arguments in [34]. This completes the reduction.
and for each possible edge we initialize its weight by:
(19) 
and initialize the edge set by the set of all possible edges. repeat
Select an edge with the maximum weight, and update and .
Update as the set of all edges such that and and set the weight of each edge as:
(20)  
(21) 
where
(22) 
and
(23) 
Greedy algorithm. Due to the above-mentioned hardness, we propose a greedy heuristic algorithm that outputs a tree structure denoted by , called SYNCALGO(), as described in Algorithm 2, where is an algorithm parameter. The overall algorithm operates as follows:

Initialize the weight of each possible edge with some initial value.

Sequentially select the edge that has the maximum weight and add it to the temporary resulting tree.

Update the weight of each edge with one endpoint in the current resulting tree and the other outside it, and go to S1 until all nodes are handled.
Two steps are central here: first, we dynamically update the weights of the candidate edges (i.e., the set ) to be added; second, the value chosen as the weight differs from the "one-shot" weight assignment done in ASYNCALGO. To explain the intuition, we first note from (17) that the contribution, in terms of weight, of adding an edge to the existing resulting tree can be re-expressed as:
(24) 
where is defined in (22). Here, corresponds to the change in communication cost over the existing edges in under the grown tree : for example, if the diameter of the grown tree does not change by adding the edge, or if the diameter of the grown tree increases by .
When dynamically assigning the weights of the candidate edges in , we do not use the value of (24). Instead, as seen in (20), (i) we use the expected diameter growth of the tree, denoted by in (23), and (ii) we use a tunable parameter to compensate for the impact of the change in communication cost over the existing edges in (22). In more detail, in (23) captures the expected diameter growth of the tree via the term , since the diameter of a uniformly random spanning tree is known to be of order [35]. We note that this term decreases to as the tree grows toward a spanning tree. Second, we account for the impact of the old weights over the existing edges in , captured by in (24), by controlling the scale of .
To summarize, these two modified weight choices handle a probable sacrifice of performance when using a vanilla greedy method as in (22), since the edge weight should be adjusted to the changing diameter during tree construction. We expect these two modifications to play an important role when the cost-efficient data graph is attained with a large diameter, where the edges chosen in the beginning phase of the procedure (i.e., with a small diameter value) could exert a large communication-cost impact at the end of the procedure. Our greedy algorithm runs in polynomial time.
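Since the exact weight updates (19)-(23) involve quantities elided above, the sketch below is only a simplified stand-in for SYNCALGO's greedy: it grows a tree Prim-style, scoring each frontier edge by empirical mutual information minus a tradeoff parameter times the per-message cost times the diameter the tree would have after adding the edge (a crude proxy for the diameter-dependent weights). The data-generating chain, costs, and parameter values are hypothetical:

```python
import math
import random
from collections import Counter, deque
from itertools import combinations

def empirical_mi(samples, i, j):
    """Empirical mutual information (in nats) between coordinates i and j."""
    n = len(samples)
    pij = Counter((s[i], s[j]) for s in samples)
    pi = Counter(s[i] for s in samples)
    pj = Counter(s[j] for s in samples)
    return sum((c / n) * math.log(c * n / (pi[a] * pj[b]))
               for (a, b), c in pij.items())

def diameter(adj):
    """Diameter (in hops) of a tree via double BFS."""
    def farthest(s):
        dist = {s: 0}
        q = deque([s])
        far = s
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    if dist[v] > dist[far]:
                        far = v
                    q.append(v)
        return far, dist[far]
    a, _ = farthest(next(iter(adj)))
    _, d = farthest(a)
    return d

def sync_greedy(samples, num_nodes, cost, beta):
    mi = {e: empirical_mi(samples, *e) for e in combinations(range(num_nodes), 2)}
    key = lambda i, j: (min(i, j), max(i, j))

    def score(i, j, adj):
        # diameter-penalized weight: a diameter-increasing edge is charged
        # more, mimicking the 'global' penalty behind (17)
        trial = {k: list(v) for k, v in adj.items()}
        trial.setdefault(i, []).append(j)
        trial.setdefault(j, []).append(i)
        return mi[key(i, j)] - beta * cost[key(i, j)] * diameter(trial)

    first = max(mi, key=lambda e: mi[e] - beta * cost[e])  # seed edge
    adj = {first[0]: [first[1]], first[1]: [first[0]]}
    while len(adj) < num_nodes:
        frontier = [(i, j) for i in adj for j in range(num_nodes) if j not in adj]
        i, j = max(frontier, key=lambda e: score(e[0], e[1], adj))
        adj[i].append(j)
        adj[j] = [i]
    return sorted(key(i, j) for i in adj for j in adj[i] if i < j)

# Hypothetical 4-node Markov chain X0 -> X1 -> X2 -> X3, copy probability 0.9.
random.seed(1)
samples = []
for _ in range(3000):
    x = [int(random.random() < 0.5), 0, 0, 0]
    for k in (1, 2, 3):
        x[k] = x[k - 1] if random.random() < 0.9 else 1 - x[k - 1]
    samples.append(tuple(x))

cost = {e: 1 for e in combinations(range(4), 2)}
chain = sync_greedy(samples, 4, cost, beta=0.0)    # no cost: recover the chain
shallow = sync_greedy(samples, 4, cost, beta=0.2)  # heavy cost: smaller diameter
print(chain, shallow)
```

With no cost penalty the heuristic recovers the diameter-3 chain; with a heavy penalty it trades some mutual information for a diameter-2 (star-like) tree, illustrating why a diameter-aware weight is needed at all.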
4 Estimation Error for Increasing Sample Size
In this section, we provide the analysis of how the estimation error probability decays with the growing number of samples using the large deviation principle (LDP).
4.1 Estimation Error of ASYNCALGO
Clearly, when we use more and more data samples, approaches , the optimal edge structure solving CDGA. We are interested in characterizing the following error probability of the event :
(25) 
To characterize the probability in (25), which concerns a rare event, we use the LDP, which states that rare events occur in their most probable way. To this end, we aim to study the following rate function :
(26) 
whenever the limit exists.
We now consider a simple event, called the crossover event, defined in what follows:
Recall that ASYNCALGO uses, for each edge , the weight
(27) 
As the number of samples grows, the empirical distribution approaches the true distribution; thus the probability of the crossover event decays to zero, whose decaying rate, which we call the crossover rate, is defined as . Using this definition of the crossover event, we present Theorem 2, which states the decaying rate of the estimation error probability as the number of data samples grows.
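For intuition on the crossover event, the hedged experiment below (taking zero tradeoff weight and equal per-message costs, so the edge weights reduce to empirical mutual information) draws samples from a hypothetical 3-node chain in which the true weight of the tree edge (1, 2) exceeds that of the non-edge (0, 2), and measures how often a finite sample reverses that order; the flip frequency should shrink, roughly exponentially, as the sample size grows:

```python
import math
import random
from collections import Counter

def empirical_mi(samples, i, j):
    """Empirical mutual information (in nats) between coordinates i and j."""
    n = len(samples)
    pij = Counter((s[i], s[j]) for s in samples)
    pi = Counter(s[i] for s in samples)
    pj = Counter(s[j] for s in samples)
    return sum((c / n) * math.log(c * n / (pi[a] * pj[b]))
               for (a, b), c in pij.items())

def draw(n):
    """Chain X0 -> X1 -> X2 with copy probabilities 0.8 and 0.7, so the
    true MI of tree edge (1, 2) exceeds that of the non-edge (0, 2)."""
    out = []
    for _ in range(n):
        x0 = int(random.random() < 0.5)
        x1 = x0 if random.random() < 0.8 else 1 - x0
        x2 = x1 if random.random() < 0.7 else 1 - x1
        out.append((x0, x1, x2))
    return out

random.seed(0)
trials = 400
freq = {}
for n in (25, 100, 400):
    flips = sum(empirical_mi(s, 0, 2) > empirical_mi(s, 1, 2)
                for s in (draw(n) for _ in range(trials)))
    freq[n] = flips / trials
print(freq)  # crossover frequency decays as the sample size n grows
```

The observed decay is the empirical counterpart of the crossover rate that the large deviation analysis below characterizes exactly.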
Theorem 2 (Decaying rate of AsyncAlgo).
For any fixed parameter ,
(28) 
where
(29) 
where is the unique path between nodes and , such that for , and
(30) 
Moreover, we have the following (finite-sample) upper bound on the error probability: for all
(31) 
In Theorem 2, we observe that the decaying rate of the error probability is specified by topological information of the physical/data graphs and the tradeoff parameter . In particular, the crossover event and its rate depend on how difficult it is to differentiate two edge weights under the true data distribution, with a consideration of the tradeoff parameter as well as the per-message cost on edges. As interpreted from (30), when and are close, confusion between and from samples occurs frequently, leading to high error probability, and we can show the existence of the infimum satisfying as by slightly adjusting the true distribution . Moreover, we remark that the decaying rate (and thus ) is characterized by the tradeoff parameter . The error rate becomes smaller (i.e., the error probability higher) when nearly meets the condition , and the weights become deterministic with respect to the samples as increases, since the portion of the cost in the weights grows, resulting in in (30). These interpretations are well-matched to our numerical results in Section 5.
Proof sketch. The proof of Theorem 2 is presented in Appendix A.2; we describe its sketch here for readers' convenience. Our proof largely follows that of the related work [27], which analyzes the error exponent of standard tree structure learning (i.e., the Chow-Liu algorithm [17]), whose goal is solely to estimate the true data distribution with no consideration of communication cost. Simply put, the proof follows the LDP in the following way. The error event is expressed as a union of small events in which ASYNCALGO estimates only one wrong edge (see the definition of the crossover event in (27)), two wrong edges, three, and so on. Following the LDP, the decaying rate of the error probability equals the decaying rate of the most probable crossover event, which corresponds to the case of only one wrong edge. In more detail, the two minimizations in (29) specify the most probable error, whose edge set differs from the optimal data tree structure in exactly one edge, i.e., , where it contains the non-neighbor node pair (selected in the first minimization) instead of the most probable replacement edge on the unique path (as in the second minimization). To obtain the minimum crossover rate , we apply Sanov's theorem [36], which expresses the probabilistic relationship between and via their KL divergence. Finally, in addition to the asymptotic decaying rate of the estimation error probability, we also establish an upper bound on the error probability in terms of the number of data samples, where the first term of the bound in (31) reflects the number of possible crossover events, and the second term represents the number of possible empirical distributions.
4.2 Estimation Error of SYNCALGO
We conduct an analysis for SYNCALGO similar to that for ASYNCALGO, which involves additional complications for the following reasons. We first denote by the quantity in (20) the weight assigned to an edge, to stress its dependence on the corresponding tree structure and the associated empirical distribution. We then need to investigate the most probable pattern in the rare event through a certain tree at some iteration. Simply put, the crossover event for two edges occurs if the order of their edge weights computed from the given finite number of samples is reversed relative to the order of the weights under the true data distribution. Among all possible crossover events, we are interested in those occurring under the tree structures obtained in the course of constructing the ideal data structure. Let the selected edge and constructed tree at each iteration be those obtained by running SYNCALGO w.r.t. the true data distribution, which would finally recover the ideal tree. Then, the selected edge clearly has the unique highest edge weight at that iteration, and the crossover event of our interest is defined as:
(32) 
We now state Theorem 3 that establishes the decaying rate of the estimation error probability as the number of data samples grows.
Theorem 3 (Decaying rate of SyncAlgo).
For any fixed parameter ,
(33) 
where
(34) 
where the selected edge and constructed tree at each iteration are those obtained by running SYNCALGO w.r.t. the true data distribution, i.e., the selected edge has the maximum edge weight under the corresponding tree, and the rate is given by: under some tree, for any ,
(35) 
Moreover, we have the following (finite-sample) upper bound on the error probability: for all
(36) 
In Theorem 3, as seen in (33), the error rate function in (34) in fact provides a lower bound on the actual decaying rate of the error event, since the crossover event of estimating a different edge at some iteration does not guarantee that the estimated edge is a wrong edge. Intuitively, the edge weights of SYNCALGO change dynamically with the diameter of the current tree as the iterations proceed, which makes the characterization of the exact error rate of SYNCALGO nontrivial.
Proof sketch. Due to space limitations, we present the complete proof in Appendix A.3 and provide a brief sketch here. The basic idea is similar to the proof of Theorem 2. As mentioned above, the crossover event is not a subset of the error event, and as a result, we obtain a lower bound on the decaying error rate in the proof, as established by the two minimizations in (35). In particular, the first minimization is taken over all iterations, so that it selects the iteration at which the error occurs in the most probable way, and the second minimization specifies the non-neighbor node pair that can be estimated instead of the correct edge with the minimum rate. In other words, the most probable pattern in the error event of SYNCALGO is to estimate the edge attained by the two minimizations in (34). Regarding the crossover rate in (35), when two edges can be clearly differentiated via their edge weights, the difference of the costs between the two edges dominantly determines the order of the edge weights (i.e., the condition in (35) does not hold), and the crossover event does not happen. This mostly corresponds to the situation of a large value of the tradeoff parameter, where the communication cost plays the dominant role in the error event and does not depend on the number of samples. Otherwise, the crossover rate is obtained in a similar way to (30). Finally, we establish the upper bound on the error probability in terms of the sample size, where the first term of the bound in (36) corresponds to the number of possible crossover events throughout the entire iterations, and the second term accounts for the number of possible empirical distributions.
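The iteration-dependent weights discussed above can be illustrated with a minimal sketch of the greedy heuristic: at each step we attach the boundary edge whose diameter-penalized weight is largest. The penalty form `beta * cost * diam(T + e)` is an assumed stand-in for the weight in (20), and all names are illustrative, not the paper's exact SYNCALGO.

```python
from collections import deque

def diameter(adj):
    """Diameter (in hops) of a tree given as an adjacency dict, via double BFS."""
    def bfs(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        far = max(dist, key=dist.get)
        return far, dist[far]
    a, _ = bfs(next(iter(adj)))
    return bfs(a)[1]

def sync_greedy(nodes, mi, cost, beta, root=0):
    """Grow a tree from `root`; each step adds the boundary edge maximizing the
    assumed diameter-penalized weight mi(e) - beta * cost(e) * diam(T + e)."""
    in_tree = {root}
    adj = {root: set()}
    tree = []
    while len(in_tree) < len(nodes):
        best_edge, best_w = None, float("-inf")
        for (u, v), info in mi.items():
            if (u in in_tree) == (v in in_tree):
                continue  # keep only edges with exactly one endpoint in the tree
            # Evaluate the diameter of the tree as it would be after adding (u, v).
            trial = {x: set(s) for x, s in adj.items()}
            trial.setdefault(u, set()).add(v)
            trial.setdefault(v, set()).add(u)
            w = info - beta * cost[(u, v)] * diameter(trial)
            if w > best_w:
                best_edge, best_w = (u, v), w
        u, v = best_edge
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
        in_tree.update({u, v})
        tree.append(best_edge)
    return tree
```

Because the penalty grows with the diameter of the partially built tree, large `beta` pushes this sketch toward star-like trees, mirroring the behavior reported in Section 5.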
5 Numerical Results
In this section, we provide a set of numerical experiments to validate our analytical results of ASYNCALGO and SYNCALGO under various numbers of data samples, communication costs, and tradeoff parameters.
5.1 Setup
Physical graph. We use a physical network whose nodes form a line topology, where each node can directly communicate only with its two adjacent nodes, see Figure 2(a). We assign a constant cost of a single message-passing for each physical edge, choosing the values appropriately so that the total communication costs of the two learning algorithms lie in the same range, for a clear comparison under the same parameter values. For message-passing between non-neighboring (w.r.t. the physical graph) node pairs, we simply assume that it incurs the sum of the per-hop costs along the unique shortest multi-hop path on the physical graph. We use this line topology as an exemplar physical graph to clearly observe the difference between ASYNCALGO and SYNCALGO, since it leads to a significantly large amount of communication cost for SYNCMAP due to its large diameter.
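The pairwise costs on such a line can be sketched as follows. The uniform `hop_cost` is an assumption for illustration; the paper instead tunes per-edge costs so that the two algorithms' total costs fall in a comparable range.

```python
def line_costs(n, hop_cost=1.0):
    """Pairwise message-passing costs on a line of n nodes: the cost between
    two nodes is the sum of per-hop costs along the unique path between them."""
    cost = {}
    for i in range(n):
        for j in range(i + 1, n):
            cost[(i, j)] = hop_cost * (j - i)  # j - i hops separate nodes i and j
    return cost
```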
Data graph. As the underlying statistical dependency among nodes in the data graph, we consider a regular tree (except for boundary nodes), where one node is the root and every node has a bounded degree, as depicted in Figure 2(b). Each random variable associated with a node is set to follow a Bernoulli distribution: the root follows a fixed Bernoulli distribution, and for every other neighboring node pair, we set the conditional distribution of a child given its parent by
(37) 
whenever the node pair is adjacent in the data graph. With this setting of the per-node distributions, neighboring node pairs have high correlations, and thus have distinct values of the mutual information.
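The sampling and estimation steps used in the experiments can be sketched as below. The "copy the parent's bit with probability q" conditional and the fair root are hedged stand-ins for the exact distribution in (37); names are illustrative.

```python
import math
import random

def sample_tree(parents, q=0.7, rng=random):
    """Draw one sample from an assumed Bernoulli tree model: the root is a fair
    coin, and each child copies its parent's bit with probability q.
    `parents` maps node -> parent (root maps to None), in topological order."""
    x = {}
    for v, p in parents.items():
        if p is None:
            x[v] = rng.random() < 0.5
        else:
            x[v] = x[p] if rng.random() < q else (not x[p])
    return x

def empirical_mi(samples, u, v):
    """Empirical mutual information (in nats) between the bits of nodes u, v."""
    n = len(samples)
    joint = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for s in samples:
        joint[(int(s[u]), int(s[v]))] += 1
    mi = 0.0
    for (a, b), c in joint.items():
        if c == 0:
            continue
        pab = c / n
        pa = sum(joint[(a, t)] for t in (0, 1)) / n
        pb = sum(joint[(t, b)] for t in (0, 1)) / n
        mi += pab * math.log(pab / (pa * pb))
    return mi
```

Running the learning algorithms on the empirical mutual information of all node pairs, as described next, reproduces the experimental pipeline.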
Under this choice of physical and data graphs, we present numerical examples showing the performance of ASYNCALGO and SYNCALGO for various values of the tradeoff parameter, with the remaining settings fixed. For a fixed sample size, we first generate i.i.d. samples from the distribution in (37), and compute the empirical distribution and the empirical mutual information of all possible node pairs. We then learn the cost-efficient data tree by running ASYNCALGO or SYNCALGO, and estimate how well the proposed algorithms recover the ideal data graph by investigating the estimation error probability as the number of samples grows.
5.2 Results
(i) Estimated trees with a varying tradeoff parameter. Figures 3 and 4 show the data trees estimated by ASYNCALGO and SYNCALGO for various values of the tradeoff parameter. We recall that this parameter controls the priority given to the communication cost relative to the inference quality, see (3), where a smaller value gives higher priority to the inference quality. For the smallest value, both algorithms estimate the exact data graph in Figure 2(b), since the goal is then to achieve the highest inference accuracy. However, as the parameter grows, the two algorithms estimate different data tree structures, since ASYNCMAP and SYNCMAP have different forms of communication costs. In particular, as the parameter grows, ASYNCALGO produces an estimated data tree that increasingly resembles the physical graph, and for the largest value it estimates a data tree identical to the physical graph, see Figures 3(b) and 3(c). We note that for a large value of the parameter, the goal of ASYNCALGO reduces to finding a spanning tree of minimum total communication cost, which accords with the physical graph of line topology. In contrast, the communication cost of SYNCALGO increases in proportion to the diameter of the estimated tree; thus, as the parameter grows, it estimates a tree of a star-like topology, i.e., a tree with a small diameter, to significantly reduce the cost, as seen in Figures 4(b) and 4(c).
(ii) Quantifying the tradeoff between inference accuracy and cost. We now quantify the tradeoff between the inference accuracy of the MAP estimator in (4) and the total communication cost for different values of the tradeoff parameter in the optimization problem in (3). We vary the parameter over its range and plot the accuracy of the MAP estimator and the total cost on the learnt data dependency graph as the red and blue lines, respectively, in Figures 3(d) and 4(d). In particular, we run ASYNCALGO and SYNCALGO on a fixed number of samples, and run the max-product algorithm on the learnt data tree to obtain the MAP estimator. We repeat the runs and measure the error probability that the MAP estimator on the learnt data tree differs from the MAP estimator on the true data graph, as a metric of inference accuracy. The average (over the runs) of the communication cost on the learnt data tree is measured in the form of (6) and (7) for each algorithm. In Figure 4(d), we report the MAP estimation error and cost at two representative values of the tradeoff parameter, and observe that the impact of the parameter on the tradeoff is similar for the two algorithms, as seen in Figures 3(d) and 4(d).
(iii) Impact of data sample size on graph estimation accuracy. Finally, we demonstrate the theoretical findings in Theorems 2 and 3 on the decaying rate of the error probability w.r.t. the number of samples for various values of the tradeoff parameter. For each fixed parameter value, we run both ASYNCALGO and SYNCALGO repeatedly and measure their error probabilities. In Figures 5(a) and 6(a), we observe that the error probability for every parameter value decays exponentially as the sample size increases, as established in (31) and (36). It is interesting to see that different choices of the parameter lead to different decaying rates, which can be understood from our analytical findings on the crossover rate in (30) and (35),
where the edge weights for ASYNCALGO and SYNCALGO are assigned in different forms, yet both depend on the tradeoff parameter, as seen in (10) and (20). Some choices of the parameter make the difference between the corresponding edge weights very small, so that wrong edges are more easily estimated from an insufficient number of samples. In our simulation, certain parameter choices for ASYNCALGO show a noticeably higher error probability at the same sample size, while others achieve almost zero error probability with far fewer samples, see Figure 5(a). This impact of the parameter on the error probability is presented in Figures 5(b) and 6(b) for both algorithms, where for large parameter values we observe that the error probability decays at a higher rate in general, since the priority given to the inference accuracy is insignificant, leading to less chance of experiencing the crossover event.
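The crossover mechanism behind these decay rates can be illustrated with a stylized Monte Carlo experiment (not the paper's exact model): each empirical edge weight is an average of n noisy observations around its true value, and we estimate how often the weaker edge overtakes the stronger one. The probability decays roughly exponentially in n, in the spirit of the bounds in (31) and (36); all parameter values below are assumptions.

```python
import random

def crossover_prob(w1, w2, n, trials=2000, noise=0.5, rng=random.Random(0)):
    """Estimate the probability that the empirical weight of the weaker edge
    (true weight w2 < w1) exceeds that of the stronger one, with each empirical
    weight an average of n observations corrupted by uniform noise."""
    swaps = 0
    for _ in range(trials):
        e1 = sum(w1 + rng.uniform(-noise, noise) for _ in range(n)) / n
        e2 = sum(w2 + rng.uniform(-noise, noise) for _ in range(n)) / n
        swaps += e2 > e1
    return swaps / trials
```

The closer `w1` and `w2` are, the flatter the decay, matching the observation that parameter choices which shrink the weight gap require more samples.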
6 Conclusion
In many multi-agent networked systems, a variety of applications involve distributed in-network statistical inference tasks, such as MAP (maximum a posteriori), exploiting a given knowledge of the statistical dependencies among agents. When agents are spatially separated, running an inference algorithm incurs a non-negligible amount of communication cost due to inevitable message-passing, stemming from the difference between the data dependency and physical connectivity graphs. In this paper, we considered a structure learning problem that recovers the statistical dependency from a set of data samples while also accounting for the communication cost incurred by the distributed inference algorithm applied to the learnt data graph. To this end, we first formulated an optimization problem formalizing the tradeoff between inference accuracy and cost, whose solution chooses a tunable point in between them. As the inference task, we studied distributed MAP and its two implementations, ASYNCMAP and SYNCMAP, which have different cost generation structures. For ASYNCMAP, we developed a polynomial-time optimal algorithm, inspired by the problem of finding a maximum weight spanning tree, while for SYNCMAP we proved that optimal learning is NP-hard and thus proposed a greedy heuristic. For both algorithms, we then established how the probability that the learnt data graph differs from the ideal one decays as the number of samples grows, using the large deviation principle.
Appendix A Appendix
A.1 Proof of Theorem 1
To prove the NP-hardness of the problem SYNC in (15), we need some assumptions. The per-message communication cost satisfies the following: (a) for all , we have , and (b) for all distinct , we have . We assume that the mutual information and communication cost are defined on the complete graph. As expressed in (LABEL:eq:objdiam), for a given instance, the objective function of a tree for a fixed diameter is
For convenience, we denote by in the remainder of the paper. Then, the problem SYNC is equivalent to finding a tree
Proof.
We reduce the known NP-complete problem Exact Cover by 3-Sets, simply called the X3C problem, to the problem SYNC in (15). We first describe the X3C problem.
Exact Cover by 3-Sets (X3C). Given a ground set of nodes and a collection of 3-element subsets of it, the X3C problem is the decision problem of determining whether there exists a subcollection such that (i) the union of its elements is the whole ground set and (ii) the intersection of any two of its elements is empty. It is known to be NP-complete.
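For concreteness, the X3C decision problem used in the reduction can be checked by brute force on tiny instances. This exponential-time checker is for illustration only (the problem is NP-complete); the function name is ours.

```python
from itertools import combinations

def has_exact_cover(ground, triples):
    """Decide X3C by brute force: does some subfamily of the 3-element subsets
    in `triples` partition the ground set exactly?"""
    target = frozenset(ground)
    if len(target) % 3:
        return False
    k = len(target) // 3  # an exact cover must use exactly |ground| / 3 triples
    for choice in combinations(triples, k):
        sets = [frozenset(t) for t in choice]
        union = frozenset().union(*sets)
        # Union equals the ground set and total size matches, so the chosen
        # triples are pairwise disjoint.
        if union == target and sum(len(s) for s in sets) == len(target):
            return True
    return False
```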
To prove the NPhardness of SYNC in (15), we build a specific (tree) data distribution with for the