Learning Data Dependency with Communication Cost
In this paper, we consider the problem of recovering a graph that represents the statistical data dependency among nodes from a set of data samples generated by the nodes; this graph provides the basic structure for performing an inference task, such as MAP (maximum a posteriori) estimation. This problem is referred to as structure learning. When nodes are spatially separated in different locations, running an inference algorithm requires a non-negligible amount of message passing, incurring communication cost. Because the learnt data dependency graph and the physical connectivity graph often differ substantially, there is an inevitable trade-off between the accuracy of structure learning and the cost paid to perform a given message-passing based inference task. In this paper, we formalize this trade-off as an optimization problem whose output is a data dependency graph that jointly accounts for learning accuracy and message-passing cost. We focus on distributed MAP as the target inference task due to its popularity, and consider two different implementations, ASYNC-MAP and SYNC-MAP, which have different message-passing mechanisms and thus different cost structures. For ASYNC-MAP, we propose a polynomial-time learning algorithm that is optimal, motivated by the problem of finding a maximum weight spanning tree. For SYNC-MAP, we first prove that the problem is NP-hard and propose a greedy heuristic. For both implementations, we then quantify, using the large deviation principle, how the probability that the data graphs produced by these learning algorithms differ from the ideal data graph decays as the number of data samples grows. The decay rate is characterized by topological structures of both the original data dependency graph and the physical connectivity graph, as well as the degree of the trade-off, which provides a guideline on how many samples are necessary to obtain a certain learning accuracy.
We validate our theoretical findings through extensive simulations, which confirm a good match with the analysis.
In many online/offline systems with spatially-separated agents (or nodes), a variety of applications involve distributed in-network statistical inference tasks, which have been widely studied, exploiting given knowledge of statistical dependencies among agents. As one example, in sensor networks with multiple targets, each sensor node measures target-specific information in its coverage area (e.g., position, direction, distance), which is further correlated across sensors. One well-recognized inference problem is data association, which determines the correct match between sensor measurements and target tracks by maximum a posteriori (MAP) estimation, executed in a distributed fashion by exchanging information messages. Other examples include target tracking and detection/estimation in sensor networks [1, 2, 3, 4], and de-anonymization and rumor/infection propagation in social networks [5, 6, 7, 8].
To solve these distributed in-network inference problems, it is of crucial importance to understand how data from nodes are inter-dependent. To that end, the notion of a graphical model has been one of the most powerful frameworks in machine learning for succinct modeling of statistical uncertainty, where each node in the graphical model corresponds to a random variable and each edge specifies the statistical dependency between random variables. A wide variety of scalable message-passing inference algorithms on graphical models have been developed, examples of which include belief propagation (BP) and max-product, with certain convergence and accuracy guarantees [9, 10, 11, 12]. This graphical model, which we also call the data dependency graph or simply data graph throughout this paper, is not given a priori, and it must be learnt only from a given set of data samples from nodes. This problem, referred to as graph learning or structure learning [13, 14, 15, 16], has been an active research topic in statistical machine learning.
In this paper, for a collection of data sample vectors generated by nodes, we study a problem of graph learning that also considers the communication cost incurred by the distributed in-network inference algorithm applied to the learnt data graph. Physical communication cost often becomes a critical issue, for example, exerting a significant impact on the lifetime of networked sensors. Clearly, there exists a trade-off between the amount of incurred cost and the learning accuracy of the data graph. Figure 1(a) illustrates the physical connections among sensors, which differ from the exact data dependency graph in Figure 1(b). Two sensor nodes with non-negligible data dependency require message-passing when performing inference, but if they are three hops apart, a large amount of communication cost is incurred. In this case, one may want to sacrifice the estimation accuracy a little and reduce communication cost by utilizing the data graph shown in Figure 1(c). As done in many prior works on graph learning [16, 17, 18, 19], we restrict our attention to tree-structured data graphs due to their simplicity, large expressive power, and other benefits, e.g., some inference algorithms such as BP become optimal over tree-structured data graphs.
We now summarize our contributions in what follows:
We first formulate an optimization problem of learning a data graph, having as an objective function the weighted sum of learning accuracy and the amount of cost that will be incurred by a distributed inference algorithm. Out of many possible inference algorithms, we consider the maximum a posteriori (MAP) estimator, which is popular for many inference tasks, and two versions of its implementation: (i) asynchronous and (ii) synchronous, which we call ASYNC-MAP and SYNC-MAP, respectively. These implementations have different message-passing patterns, thus leading to different forms of communication cost, which is useful for understanding how a distributed algorithm's cost affects the resulting data dependency graph.
Next, for ASYNC-MAP we develop a polynomial-time algorithm to find an optimal (cost-efficient) data graph, which corresponds to simply finding a maximum weight spanning tree. This simplicity stems from the cost structure of ASYNC-MAP, which is characterized only by the sum of all 'localized' edge costs. In sharp contrast to ASYNC-MAP, for SYNC-MAP we first prove that the problem is computationally intractable (i.e., NP-hard) in the number of nodes, by a reduction from the Exact Cover by 3-Sets problem. The hardness is due to the fact that the cost structure of SYNC-MAP depends on the diameter of the resulting tree, which is 'global' information involving the entire topology. As a practical solution, we propose a polynomial-time greedy heuristic to recover a sub-optimal, but cost-efficient data graph.
Finally, for both ASYNC-MAP and SYNC-MAP, we quantify how the probability that the resulting (cost-efficient) data graph for a finite number of samples differs from the ideal data graph decays as the sample size increases, using the large deviation principle (LDP), in the form of an error exponent. The error exponent is characterized for each of ASYNC-MAP and SYNC-MAP by topological information of the physical/data graphs, the cost structure of each inference mechanism, and the degree of the trade-off. We validate our theoretical findings through simulations over a 20-node graph for a variety of scenarios and show their good match with the simulation results.
To validate our theoretical results, we perform numerical simulations on a pair of physical and data graphs with 20 nodes, where we quantitatively analyze (i) how estimating a data graph with consideration of communication cost affects the resulting estimate for various values of the trade-off parameter between inference accuracy and cost, and (ii) how the estimation error decays as the sample size increases.
1.1 Related Work
A variety of applications involving distributed in-network statistical inference tasks among spatially inter-connected agents or sensors have been widely studied in many online/offline systems. In sensor networks, where the knowledge of statistical dependencies among sensed data is given, the tasks of target tracking [20, 21, 22], detection, and parameter estimation [24, 2] are representative examples. In social networks, where an underlying social phenomenon of interest, such as voting models or rumor/opinion propagation, evolves over a given social interaction graph, the inference tasks of distributed consensus-based estimation, de-anonymization of community-structured social networks, and distributed observability are studied. Message-passing has emerged as an efficient procedure for inference over graphical models, which provide the framework for succinct modeling of the statistical uncertainty of multiple agents. Examples include belief propagation (BP) and max-product [12, 10] and references therein. They are known to be exact and efficient when the underlying graphical model is a tree [9, 11]. Recent research progress has been made on scalable message-passing for general graphs, e.g., junction trees and graphs with loops.
In the area of structure learning, several algorithms have been proposed in the literature to recover statistical dependencies from a set of data samples [13, 14, 15, 16]. It is known that exact structure learning for general graphical models is NP-hard. Research on structure learning for special graphical models includes: maximum likelihood estimation (MLE) [17, 16] for tree graphs, regularized MLE for binary undirected graphs, and convexified MLE for Gaussian graphical models, known as Lasso. Theoretical guarantees for learning accuracy have been established as the number of data samples grows, e.g., on tree graphs, on binary undirected graphs, on a class of Ising models, and on Bayesian networks. Our work differs from all of the above works in that we consider the physical communication cost incurred by target inference algorithms when learning the data dependency graph.
There exists an array of work that addresses the trade-off between inference quality and cost when running distributed in-network inference on a known data graph, summarized in two directions: (i) developing novel inference algorithms with less communication of messages, or (ii) constructing a new graphical model upon which existing distributed in-network inference algorithms are performed with fewer communication resources. In (i), the need to conserve resources has motivated new message-passing schemes in which messages are compressed by allowing some approximation error in message values [21, 30, 26, 31], and/or some messages are censored (i.e., not transmitted). In (ii), most related work has focused on constructing a junction tree that minimizes the inference cost, building a data dependency structure upon which message-passing runs energy-efficiently under the assumption that communication among all agents is one-hop, or optimizing the data dependency structure via a multi-objective formulation of inference quality and energy, assuming that the exact statistical dependencies are given as a complete graph. While the main interest of this area has been to characterize the desirable dependency structure given complete knowledge of accurate data dependencies, our work is motivated by the practical situation where one can only observe a finite number of data sample vectors from nodes, which does not provide such complete knowledge. Therefore, our interest lies in learning desirable data dependencies from a finite number of data samples.
2 Model and Preliminary
Physical graph. We consider a (connected) physical network with a set of nodes and links, where each node corresponds to an agent such as a sensor or an individual, and each link corresponds to a physical connection between two nodes. For example, in sensor networks with wireless radios, a link is established between two nodes when they are within each other's communication range.
Data samples. Each node generates a
binary data, denoted by
Data graph via graphical model. The underlying statistical dependency is often understood through the framework of graphical models, which has been a popular tool for modeling uncertainty by a graph structure, where each node corresponds to a random variable and each edge captures the probabilistic interaction between nodes. In particular, we model the data distribution as an undirected graph, which we call the data graph, consisting of the same set of nodes as the physical graph and an edge structure that captures the nodes' statistical dependencies as follows: any two non-adjacent random variables are conditionally independent given all other variables.
In this paper, we limit our focus to tree-structured data graphs (thus simply data trees); accordingly, we consider the set of all spanning trees and the set of all tree-structured data distributions. The tree data graph is a class of graphical models that has received considerable attention in the literature [17, 19], since it possesses the following factorization property:
where the factors are the marginals on each node and edge, respectively. Tree-structured data graphs are known to strike a good balance between expressive power and computational tractability. In particular, the distribution in (2) is completely specified only by the set of edges and their pairwise marginals. Thus, if a distribution has the factorization property in (2), there exists a unique tree corresponding to it. Abusing notation, we henceforth refer to the unique data tree of a tree distribution. Figure 1 shows an example of the physical graph and two data graphs.
2.2 Goal: Cost-efficient Learning of Data Graph
Learning data graph: What and why? To understand the underlying data dependency (2), it is enough to learn the structure of data graph from the observed samples, which is known as the problem of (data graph) structure learning. Formally, when we are given a set of i.i.d. samples generated from an unknown (tree) data distribution on a data tree , a structure learning algorithm is a (possibly randomized) map defined by:
The quality of this algorithm is evaluated by how "close" its output is to the original data graph.
Distributed inference on data graph. One of the practical goals of estimating the data tree from a set of data samples is to perform an inference task based on it. In many applications, the primary interest is not the data itself but rather how to exploit the data dependency for reliable decision making, such as target tracking, detection, and estimation in sensor networks and/or social networks, which involves statistical inference about the networks described by a data graph. One example of such inference tasks is MAP (maximum a posteriori) estimation. Distributed in-network inference has been widely studied with the help of various distributed message-passing algorithms on graphical models. In particular, for a specific inference problem, a message between two nodes contains information on the influence that one node exerts on another, obtained from the values contained in neighboring messages over an estimated data graph. One critical issue of message-passing based inference algorithms is that messages are often passed along multi-hop paths on the physical graph, which incurs communication cost. Then, assuming that some inference algorithm will be run on the estimated data graph, such data graph learning faces a trade-off between the accuracy of the learnt graph (i.e., how close the learnt graph is to the original data graph) and the communication cost generated by performing the distributed inference.
Goal: Cost-efficient data graph learning. Given observed samples from the unknown data distribution, our objective is to estimate a cost-efficient data tree, which captures the trade-off between (i) inference accuracy and (ii) communication cost for inference. For tree distributions, finding a distribution naturally gives rise to the corresponding data tree, as mentioned earlier. Thus, it is natural to find the tree distribution that solves the following optimization problem, for a constant trade-off parameter and a fixed inference algorithm:
where the objective combines a distance metric between the empirical distribution of the samples and a candidate distribution with the communication cost paid by running the inference algorithm with respect to the corresponding data tree over the physical graph. Recall that the data tree is determined by the tree distribution. The trade-off parameter determines how much we prioritize the communication cost relative to the inference accuracy. Note that as the number of samples grows, the empirical distribution converges to the original data distribution.
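In symbols, the optimization above can be sketched as follows; the symbol names ($Q$ for a candidate tree distribution, $\hat{P}_n$ for the empirical distribution, $T_Q$ for its data tree, $\beta$ for the trade-off parameter, $G_P$ for the physical graph, and $C_{\mathcal{A}}$ for the cost of inference algorithm $\mathcal{A}$) are our own labels rather than the paper's notation:

```latex
% Sketch of CDG(beta); symbol names are our own labels.
\hat{Q} \;\in\; \operatorname*{arg\,min}_{Q \in \mathcal{T}}\;
    D\big(\hat{P}_n \,\big\|\, Q\big) \;+\; \beta\, C_{\mathcal{A}}\big(T_Q;\, G_P\big)
```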
Then, this paper aims at answering the following two questions:
What are good data-tree learning algorithms that solve CDG()? In Section 3, we consider the MAP estimator as the applied inference algorithm and its two implementations with different cost functions, for which we propose two cost-efficient learning algorithms.
How fast does the estimate converge to the optimum as the number of samples grows? We use the large deviation principle (LDP) to characterize the decay rate of the probability that the two structures differ, for the two MAP implementations, in Section 4.
In this paper, we use the popular KL divergence, defined for two distributions in the standard way, as the distance metric for inference accuracy. For notational simplicity, we simply denote the solution of CDG() throughout this paper.
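As a concrete reference, the KL divergence between two discrete distributions can be computed as in the following minimal Python sketch (function and variable names are ours):

```python
import math

def kl_divergence(p, q):
    """D(p || q) = sum_x p(x) * log(p(x) / q(x)) for discrete distributions.

    Conventions: terms with p(x) == 0 contribute zero; the divergence is
    +inf if p(x) > 0 while q(x) == 0.
    """
    d = 0.0
    for px, qx in zip(p, q):
        if px == 0.0:
            continue
        if qx == 0.0:
            return math.inf
        d += px * math.log(px / qx)
    return d
```

Note that the KL divergence is asymmetric: D(p || q) generally differs from D(q || p), which is why the direction in the objective matters.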
3 Cost-efficient Data Graph Learning Algorithms
In this paper, out of many possible inference tasks, we consider maximum a posteriori (MAP) estimation, which is popularly applied in many applications such as the data association problem for multi-target tracking in sensor networks and the community-structured social network de-anonymization problem in social networks.
3.1 Distributed MAP and Cost
Distributed MAP on tree-structured data graph. The MAP estimator of some tree distribution on its associated data tree is given by:
A standard message-passing algorithm for the distributed MAP is the max-product algorithm, which defines a message from each node to each of its neighbors at every iteration. Each node exchanges messages with its neighbors on the data tree, and these messages are updated iteratively by the following rule:
with a normalizing constant that makes the message values sum to 1, where the neighborhood of a node is taken on the data tree.
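To make the update rule concrete, here is a minimal self-contained sketch of synchronous max-product on a small binary tree model; all function names, data structures, and potentials below are illustrative choices of ours, not the paper's notation:

```python
def prod_msgs(m, nbrs, u, exclude, xu):
    """Product of incoming messages to node u at value xu, excluding `exclude`."""
    p = 1.0
    for w in nbrs[u]:
        if w != exclude:
            p *= m[(w, u)][xu]
    return p

def max_product_map(nodes, edges, node_pot, edge_pot, iters=20):
    """Synchronous max-product on a tree-structured binary graphical model.

    nodes: list of node ids; edges: list of (u, v) pairs forming a tree.
    node_pot[v][x] and edge_pot[(u, v)][x_u][x_v] are unnormalized potentials.
    Returns the per-node argmax of the max-marginal beliefs (the MAP
    assignment when it is unique).
    """
    nbrs = {v: set() for v in nodes}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)

    def psi(u, v, xu, xv):  # edge potential, looked up in either orientation
        if (u, v) in edge_pot:
            return edge_pot[(u, v)][xu][xv]
        return edge_pot[(v, u)][xv][xu]

    # messages m[(u, v)][x_v], initialized uniformly
    m = {(u, v): [1.0, 1.0] for u in nodes for v in nbrs[u]}
    for _ in range(iters):
        new_m = {}
        for (u, v) in m:
            vals = []
            for xv in (0, 1):
                vals.append(max(
                    node_pot[u][xu] * psi(u, v, xu, xv)
                    * prod_msgs(m, nbrs, u, v, xu)
                    for xu in (0, 1)
                ))
            z = sum(vals)  # normalize so message values sum to 1
            new_m[(u, v)] = [x / z for x in vals]
        m = new_m

    beliefs = {
        v: [node_pot[v][x] * prod_msgs(m, nbrs, v, None, x) for x in (0, 1)]
        for v in nodes
    }
    return {v: 0 if b[0] >= b[1] else 1 for v, b in beliefs.items()}
```

On a tree, the messages reach their unique fixed point after a number of synchronous iterations bounded by the tree diameter, consistent with the convergence statement below.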
Communication cost of distributed MAP. The communication cost of MAP depends on the actual protocol that schedules the message-passing procedure. Two natural message-passing protocols studied in the literature are: (a) asynchronous depth-first (unicast) update and (b) synchronous (broadcast) parallel update. Both protocols, for a tree distribution with its data tree, have been shown to be consistent in that the message update (5) converges to a unique fixed point, which defines the exact MAP assignment in (4) for each node. We denote by a per-edge value the cost of a single message-passing over an edge under a given physical graph. Recall that message passing over a data-graph edge may need to be done over a multi-hop path on the physical graph; one simple example of the per-edge cost is the shortest path distance between the two nodes in the physical graph. Both protocols then incur the communication cost elaborated in what follows:
Asynchronous: In the asynchronous protocol (simply ASYNC-MAP), one node is arbitrarily picked as the root, and messages are passed from the leaves upward to the root, then back downward to the leaves, so that each tree edge carries two messages upon termination. Thus, the communication cost is given in (6).
Synchronous: In the synchronous protocol (simply SYNC-MAP), at each iteration every node sends messages to all of its neighbors. Since the diameter is the minimum number of iterations required for a message to pass between the two most distant nodes in the tree, this protocol involves at most diameter-many iterations, with the corresponding total number of messages. Thus, we have the cost in (7).
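The two cost structures can be contrasted in a short sketch. The exact constants in (6) and (7) are not reproduced above, so the factors below (two messages per tree edge for ASYNC-MAP; two messages per edge per iteration, for diameter-many iterations, for SYNC-MAP) are our assumption, consistent with the message counts described in the text:

```python
from collections import deque

def tree_diameter(nodes, edges):
    """Diameter (in hops) of a tree, via two breadth-first searches."""
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def farthest(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        far = max(dist, key=dist.get)
        return far, dist[far]

    a, _ = farthest(nodes[0])   # farthest node from an arbitrary start
    _, d = farthest(a)          # farthest from that node gives the diameter
    return d

def async_cost(edges, c):
    """ASYNC-MAP: each tree edge carries one upward and one downward message."""
    return 2 * sum(c[e] for e in edges)

def sync_cost(nodes, edges, c):
    """SYNC-MAP: two messages per edge per iteration, for at most
    diameter-many iterations."""
    return tree_diameter(nodes, edges) * 2 * sum(c[e] for e in edges)
```

Under this instantiation, a chain and a star with the same per-edge costs pay the same asynchronous cost, but the chain pays more synchronously because of its larger diameter, which is exactly the 'global' dependence discussed later.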
In the next subsection, we will use the above two cost functions for two different learning algorithms for CDG() in (3) to estimate two cost-efficient data trees.
3.2 Algorithm for Asynchronous MAP
Using the cost function for the asynchronous MAP in (6), the original optimization problem CDG() is re-cast into:
We now describe ASYNC-ALGO (Algorithm 1), which computes the solution of (8) and thus estimates the cost-efficient data tree. As we see, the algorithm is remarkably simple. Using the given data samples, we construct a weighted complete graph, where the weight of each edge combines the mutual information between its end-points with respect to the empirical distribution obtained from the data samples and the per-message cost, as in (10). Then, we run an algorithm that computes the maximum weight spanning tree, e.g., Prim's algorithm or Kruskal's algorithm, and the resulting spanning tree is the output of this algorithm.
and for each possible edge we initialize its weight by:
where the weight uses the mutual information between the two end-points of the edge with respect to the given joint distribution.
Run a maximum weight spanning tree algorithm and output its resulting spanning tree.
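A minimal end-to-end sketch of ASYNC-ALGO follows. We assume, as our reading of the text rather than a verbatim reproduction of (10), that each edge weight is the empirical mutual information minus the trade-off parameter times the per-message cost:

```python
import math
from collections import Counter

def empirical_mi(samples, u, v):
    """Plug-in mutual information I(X_u; X_v) from binary samples (rows = vectors)."""
    n = len(samples)
    pu, pv, puv = Counter(), Counter(), Counter()
    for row in samples:
        pu[row[u]] += 1
        pv[row[v]] += 1
        puv[(row[u], row[v])] += 1
    mi = 0.0
    for (a, b), cnt in puv.items():
        pab = cnt / n
        mi += pab * math.log(pab / ((pu[a] / n) * (pv[b] / n)))
    return mi

def async_algo(num_nodes, samples, cost, beta):
    """Kruskal-style maximum weight spanning tree over cost-adjusted weights.

    Weight of edge (u, v): empirical mutual information minus beta times its
    per-message cost (illustrative form; the paper's exact weight is in (10)).
    """
    edges = []
    for u in range(num_nodes):
        for v in range(u + 1, num_nodes):
            w = empirical_mi(samples, u, v) - beta * cost[(u, v)]
            edges.append((w, u, v))
    edges.sort(reverse=True)  # heaviest edges first

    parent = list(range(num_nodes))  # union-find for cycle detection

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree
```

For beta equal to zero this reduces to the classical Chow-Liu procedure; a positive beta biases the spanning tree toward cheap edges at some loss of mutual information.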
Correctness of ASYNC-ALGO. We now present the correctness of the above algorithm in the sense that we can obtain the data tree corresponding to the optimal distribution formulated in (8), as explained in what follows: For some tree distribution (thus, satisfying the factorization property in (2)), we have:
where the entropy terms appear, and equality holds when the pairwise marginals over the edges of a fixed tree are set to those of the empirical distribution. Since the entropy terms are constant with respect to the optimization, it is straightforward that the structure of the estimator of CDG-A in (8) is given by:
Then, it is easy to see that (13) reduces to finding the maximum weight spanning tree using the cost-adjusted mutual information as each edge's weight, where a standard maximum weight spanning tree (MWST) algorithm runs in time polynomial in the number of nodes.
3.3 Algorithm for Synchronous MAP
Similarly to ASYNC-MAP, using the cost in (7), the original optimization problem CDG() is re-cast into:
We comment that this optimization is non-trivial in that the objective function contains the diameter of the tree, which can be computed only when the solution is fully characterized.
Hardness. The key difference in the cost function of SYNC-MAP from ASYNC-MAP is simply the presence of the diameter. However, this simple difference completely changes the hardness of learning the optimal data tree in SYNC-MAP, as formally stated in the next theorem.
Theorem 1 (Hardness of Cdg-S).
For any value of the trade-off parameter, obtaining the optimal distribution in CDG-S, and thus its associated data tree, is NP-hard with respect to the number of nodes.
Proof sketch. Due to space limitation, we present the full proof of Theorem 1 in Appendix A.1 and only provide a sketch here. The key step is a reduction between CDG-S in (15) and the well-known NP-complete Exact Cover by 3-Sets problem, which we simply call X3C. The bounded diameter minimum weight spanning tree (BDMST) problem, which finds the MWST whose diameter is below a given bound for fixed edge weights, is known to be NP-hard via a reduction from X3C. The main technical challenge in CDG-S lies in that the edge weights are diameter-dependent, via the form in (17), where the weights become smaller as the diameter grows. Therefore, the optimum in (17) is attained at a tree with small diameter. If we fix the diameter so that the edge weights take constant values, the problem becomes similar to the BDMST problem. To prove NP-hardness of our problem, we first construct a specific tree distribution and cost functions at which the optimal solution of CDG-S must have a certain diameter, and then show that CDG-S with the weights of that diameter has an optimal solution of that diameter if and only if the X3C problem has a solution. Following the reduction for the BDMST problem, we construct the instance so that (i) trees of smaller diameter do not attain the optimum due to their structural limitations, and (ii) the edge weights for larger diameters become too small to achieve the optimum of CDG-S. The remaining steps to verify the reduction from X3C follow standard arguments, which completes the reduction.
and for each possible edge we initialize its weight by:
and initialize the edge set by the set of all possible edges. repeat
Select an edge with the maximum weight, and update the current tree and candidate sets accordingly.
Update the candidate edge set as the set of all edges with exactly one end-point in the current tree, and set the weight of each such edge as:
Greedy algorithm. Due to the above-mentioned hardness, we propose a greedy heuristic, called SYNC-ALGO, that outputs a tree structure, as described in Algorithm 2, with a tunable algorithm parameter. The overall algorithm operates as follows:
Initialize the weight of each possible edge with some initial value.
S1. Sequentially select the edge that has the maximum weight and add it to the temporary resulting tree.
S2. Update the weight of each edge whose one end-point is in the current resulting tree and the other is not, and go to S1 until all nodes are handled.
Two steps are central here: first, we dynamically update the weights of the candidate edges (i.e., those that may be added next), and second, the value chosen as the weight differs from the "one-shot" weight assignment in ASYNC-ALGO. To explain the intuition, first note from (17) that the weight contribution of adding an edge to the existing resulting tree can be re-expressed as:
where the correction term is defined in (22) and corresponds to the change of the communication cost over the existing edges of the tree after it grows. For example, it is zero if the diameter of the grown tree does not change by adding the edge, and positive if the diameter increases.
In dynamically assigning the weights of the candidate edges, we do not use the value in (24). Instead, as seen in (20), (i) we use the expected diameter growth of the tree, as in (23), and (ii) we use a tunable parameter to compensate for the impact of the change in communication cost over the existing edges in (22). In more detail, the term in (23) captures the expected diameter growth of the tree, since the diameter of a uniformly random spanning tree is known to grow on the order of the square root of the number of nodes. We note that this term decreases to zero as the tree grows into a spanning tree. Second, we account for the impact of the old weights over the existing edges, captured by (24), by controlling the scale of the tunable parameter.
To summarize, these two modified weight choices address the performance loss that a vanilla greedy method as in (22) may suffer, since the edge weights should be modified to account for the changing diameter during tree construction. We expect these two design choices to play an important role when the cost-efficient data graph has a large diameter, where the edges chosen in the beginning phase of the procedure (i.e., when the diameter is still small) could exert a large impact on the communication cost by the end of the procedure. Our greedy algorithm runs in polynomial time.
4 Estimation Error for Increasing Sample Size
In this section, we provide the analysis of how the estimation error probability decays with the growing number of samples using the large deviation principle (LDP).
4.1 Estimation Error of ASYNC-ALGO
Clearly, as we use more and more data samples, the estimate approaches the optimal edge structure solving CDG-A. We are interested in characterizing the error probability of the event that the two structures differ:
To characterize the probability in (25), which concerns a rare event, we use the LDP, under which rare events occur in their most probable way. To this end, we study the following rate function:
whenever the limit exists.
We now consider a simple event, called the crossover event, defined in what follows:
Recall that ASYNC-ALGO uses, for each edge, the cost-adjusted empirical mutual information as its weight.
As the number of samples grows, the empirical distribution approaches the true distribution, so the probability of the crossover event decays to zero; its decay rate, which we call the crossover rate, is defined accordingly. Using this definition of the crossover event, we present Theorem 2, which states the decay rate of the estimation error probability as the number of data samples grows.
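In symbols (with our own labels), the crossover event for a tree edge $e$ and a non-edge $e'$ compares their empirical cost-adjusted weights $w_n(\cdot)$, and the crossover rate is the information projection onto the distributions that reverse the weight order, in the spirit of Sanov's theorem invoked in the proof:

```latex
% Sketch; w_n is the empirical cost-adjusted weight, P the true distribution.
\mathcal{C}_{e,e'} \;=\; \big\{\, w_n(e') \ge w_n(e) \,\big\\}, \qquad
J_{e,e'} \;=\; \lim_{n\to\infty} -\tfrac{1}{n}\log \mathbb{P}\big(\mathcal{C}_{e,e'}\big)
        \;=\; \inf_{Q \,:\, w_Q(e') \ge w_Q(e)} D\big(Q \,\|\, P\big)
```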
Theorem 2 (Decaying rate of Async-Algo).
For any fixed trade-off parameter,
where the minimization involves the unique path between the two nodes in the data tree, and
Moreover, we have the following (finite-sample) upper-bound on the error probability: for all
In Theorem 2, we observe that the decay rate of the error probability is specified by topological information of the physical/data graphs and the trade-off parameter. In particular, the crossover event and its rate depend on how difficult it is to differentiate two edge weights under the true data distribution, with a consideration of the trade-off parameter as well as the per-message costs on edges. As interpreted from (30), when two edge weights are close, confusion between the corresponding edges frequently occurs from samples, leading to high error probability, and we can show that the infimum approaches zero under a slight adjustment of the true distribution. Moreover, we remark that the decay rate is characterized by the trade-off parameter. The error rate becomes smaller (i.e., higher error probability) when the weights are nearly tied, and the weights become deterministic with respect to the samples as the trade-off parameter increases, since the portion of the cost in the weights grows, leading to the expression in (30). These interpretations are well matched with our numerical results in Section 5.
Proof sketch. The proof of Theorem 2 is presented in Appendix A.2, and we describe the proof sketch for readers' convenience. Our proof largely follows that of related work analyzing the error exponent of standard tree structure learning (i.e., the Chow-Liu algorithm), whose goal is solely to estimate the true data distribution with no consideration of communication cost. Simply put, the proof idea follows the LDP in the following way. The error event is expressed as a union of small events in which ASYNC-ALGO estimates only one wrong edge (see the definition of the crossover event in (27)), two wrong edges, three, and so on. Following the LDP, the decay rate of the error probability equals the decay rate of the most probable crossover event, which corresponds to the case of only one wrong edge. In more detail, the two minimizations in (29) specify the most probable error, whose edge set differs from the optimal data tree structure in exactly one edge: it contains a non-neighbor node pair (selected in the first minimization) instead of the most probable replacement edge on the unique path between them (as in the second minimization). To obtain the minimum crossover rate, we apply Sanov's theorem, which expresses the probabilistic relationship between the empirical and true distributions via their KL divergence. Finally, in addition to the asymptotic decay rate of the estimation error probability, we also establish an upper bound on the error probability in terms of the number of data samples, where the first term of the bound in (31) reflects the number of possible crossover events, and the second term represents the number of possible empirical distributions.
4.2 Estimation Error of SYNC-ALGO
We conduct an analysis for SYNC-ALGO similar to that for ASYNC-ALGO, though it is more involved for the following reasons. We first write the assigned weight of an edge in (20) so as to stress its dependence on the current tree structure and the associated empirical distribution. We then need to investigate the most probable pattern of the rare event through a certain tree at some iteration. Simply put, the crossover event for two edges occurs if the order of their edge weights under the given finite number of samples is reversed relative to the order under the true data distribution. Among all possible crossover events, we are interested in those occurring under the tree structures obtained on the way to constructing the ideal data structure. Let the selected edge and constructed tree at each iteration be those obtained by running SYNC-ALGO with respect to the true data distribution, which would finally recover the ideal structure. Then, the selected edge has the unique highest edge weight at each iteration, and the crossover event of our interest is defined as:
We now state Theorem 3 that establishes the decaying rate of the estimation error probability as the number of data samples grows.
Theorem 3 (Decaying rate of Sync-Algo).
For any fixed parameter ,
where the selected edge and constructed tree at each iteration are obtained by running SYNC-ALGO w.r.t. the true data distribution, i.e., the selected edge has the maximum edge weight under the current tree, and the crossover rate is given by: under some tree, for any non-neighbor node pair,
Moreover, we have the following finite-sample upper bound on the error probability: for all
In Theorem 3, as seen in (33), the error rate function in (34) in fact provides a lower bound on the actual decaying rate of the error event, since a crossover event that estimates some edge instead of the correct one at a given iteration does not guarantee that the estimated edge is wrong in the final tree. Intuitively, the edge weights of SYNC-ALGO change dynamically with the diameter of the current tree as the iterations proceed, which makes characterizing the exact error rate of SYNC-ALGO non-trivial.
Proof sketch. Due to space limitations, we present the complete proof in Appendix A.3 and provide a brief sketch here. The basic idea is similar to the proof of Theorem 2. As mentioned above, the crossover event is not a subset of the error event, and as a result we obtain a lower bound on the decaying error rate, established by the two minimizations in (35). In particular, the first minimization is taken over all iterations, selecting the iteration at which the error occurs in the most probable way, and the second minimization specifies the non-neighbor node pair that can be estimated instead of the correct edge with the minimum crossover rate. In other words, the most probable pattern in the error event of SYNC-ALGO is to estimate the pair attained by the two minimizations in (34). For the crossover rate in (35), when two edges can be clearly differentiated via their edge weights, i.e., when the difference of the costs between the two edges dominantly determines the order of the edge weights so that the condition in (35) does not hold, the crossover event cannot happen. This mostly corresponds to a large value of the trade-off parameter, where the communication cost, which does not depend on the number of samples, plays the dominant role in the error event. Otherwise, the crossover rate is obtained in a similar way to (30). Finally, we establish the upper bound on the error probability in terms of the sample size, where the first term of the bound in (36) corresponds to the number of possible crossover events over all iterations, and the second term to the number of possible empirical distributions.
5 Numerical Results
In this section, we provide a set of numerical experiments to validate our analytical results for ASYNC-ALGO and SYNC-ALGO under various numbers of data samples, communication costs, and trade-off parameters.
Physical graph. We use a physical network of nodes forming a line topology, where each node can directly communicate only with its two adjacent nodes, see Figure 2(a). We assign a constant single-message-passing cost to each physical edge, except for one edge whose cost we choose so as to bring the total communication costs of the two learning algorithms into the same range, for a clear comparison at the same values of the trade-off parameter. For message-passing between node pairs that are not neighbors in the physical graph, we simply assume that the message incurs the sum of the per-edge costs along the unique shortest multi-hop path. We use this line topology as an exemplar physical graph to clearly expose the difference between ASYNC-ALGO and SYNC-ALGO, since it leads to a significantly large communication cost for SYNC-MAP due to its large diameter.
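Under this convention, the multi-hop cost between any two nodes on the line is the sum of the per-edge costs along the unique path between them. A minimal sketch (the unit costs below are illustrative, not the paper's exact values):

```python
def line_path_cost(u, v, cost):
    """Message-passing cost between nodes u and v on a line graph,
    summed over the unique (shortest) path of consecutive hops.
    cost[(i, i+1)] is the per-message cost of physical edge (i, i+1)."""
    lo, hi = min(u, v), max(u, v)
    return sum(cost[(i, i + 1)] for i in range(lo, hi))

# Hypothetical 10-node line with unit per-hop costs:
cost = {(i, i + 1): 1.0 for i in range(9)}
```

For example, `line_path_cost(2, 6, cost)` sums the four edge costs along 2-3-4-5-6.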
Data graph. As the underlying statistical dependency among nodes, we consider a regular tree (except at boundary nodes), where one node is the root and every node has a bounded degree, as depicted in Figure 2(b). The random variable associated with each node follows a Bernoulli distribution: the marginal of the root is specified, and for every other neighboring node pair we set the conditional distribution of the child given its parent as in (37). With this setting of per-node distributions, neighboring node pairs turn out to be highly correlated, and thus have distinct values of mutual information.
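For concreteness, drawing a joint sample from such a tree-structured Bernoulli model amounts to ancestral sampling from the root down; in the sketch below, the agreement probability `p_copy` is a hypothetical stand-in for the paper's conditional distribution in (37):

```python
import random

def sample_tree_bernoulli(parent, p_root=0.5, p_copy=0.9, rng=None):
    """Draw one joint sample from a tree-structured Bernoulli model by
    ancestral sampling.  parent[v] is v's parent (None for the root);
    nodes are assumed numbered so that parent[v] < v for non-roots.
    Each child copies its parent's value w.p. p_copy (a hypothetical
    parameterization), which makes neighboring nodes highly correlated."""
    rng = rng or random.Random()
    x = {}
    for v in sorted(parent):            # root first, then children
        p = parent[v]
        if p is None:
            x[v] = 1 if rng.random() < p_root else 0
        else:
            x[v] = x[p] if rng.random() < p_copy else 1 - x[p]
    return x
```

Repeating the call n times yields the i.i.d. samples used in the experiments.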
Under this choice of physical and data graphs, we present numerical examples showing the performance of ASYNC-ALGO and SYNC-ALGO for various values of the trade-off parameter, with the remaining settings fixed in our results. For each fixed parameter value, we first generate i.i.d. samples from the distribution in (37), and compute the empirical distribution and the empirical mutual information of all possible node pairs. We then learn the cost-efficient data tree by running ASYNC-ALGO or SYNC-ALGO, and estimate how well the proposed algorithms recover the ideal data graph by investigating the estimation error probability as the number of samples grows.
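The plug-in (empirical) mutual information used above can be computed directly from the paired samples; a minimal sketch for discrete samples:

```python
from collections import Counter
from math import log

def empirical_mi(xs, ys):
    """Plug-in estimate of the mutual information I(X; Y), in nats,
    computed from paired discrete samples xs, ys of equal length n."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))          # joint counts
    px, py = Counter(xs), Counter(ys)   # marginal counts
    mi = 0.0
    for (a, b), c in pxy.items():
        # p(a,b) * log( p(a,b) / (p(a) p(b)) ); the n's cancel to c*n/(...)
        mi += (c / n) * log(c * n / (px[a] * py[b]))
    return mi
```

Perfectly correlated binary samples give log 2 nats; independent samples give 0.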
(i) Estimated trees with varying trade-off parameter. Figures 3 and 4 show the data trees estimated by ASYNC-ALGO and SYNC-ALGO for various values of the trade-off parameter. Recall that this parameter controls the priority given to the communication cost relative to the inference quality, see (3), where a smaller value gives higher priority to the inference quality. For the smallest value, both algorithms estimate the exact data graph in Figure 2(b), since the goal is then to achieve the highest inference accuracy. However, as the parameter grows, the two algorithms estimate different data tree structures, since ASYNC-MAP and SYNC-MAP have different forms of communication costs. In particular, ASYNC-ALGO produces estimated data trees with increasing resemblance to the physical graph, finally estimating a data tree identical to the physical graph, see Figures 3(b) and 3(c). Indeed, for a large parameter value, the goal of ASYNC-ALGO reduces to finding a spanning tree of minimum total cost, which accords with the line-topology physical graph. The communication cost of SYNC-ALGO, however, grows in proportion to the diameter of the estimated tree, so as the parameter grows it estimates a star-like tree, i.e., a tree with a small diameter as seen in Figures 4(b) and 4(c), to significantly reduce the cost.
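The ASYNC-ALGO behavior described above can be emulated with a standard maximum-weight spanning tree routine. The combined weight `mi[e] - beta * cost[e]` below is an illustrative form of the weight in (10) (our assumption, not a quote of the paper): it recovers the data graph for beta = 0 and the cheapest tree for large beta.

```python
def max_weight_spanning_tree(n, weight):
    """Kruskal's algorithm for a maximum-weight spanning tree over
    nodes 0..n-1; `weight` maps undirected edges (u, v) to weights."""
    parent = list(range(n))
    def find(a):                        # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    tree = []
    for (u, v), w in sorted(weight.items(), key=lambda kv: -kv[1]):
        ru, rv = find(u), find(v)
        if ru != rv:                    # keep the edge if it joins components
            parent[ru] = rv
            tree.append((u, v))
    return tree

def async_algo_sketch(n, mi, cost, beta):
    """Illustrative sketch: trade empirical mutual information
    against message-passing cost via a single combined edge weight."""
    weight = {e: mi[e] - beta * cost[e] for e in mi}
    return max_weight_spanning_tree(n, weight)
```

With beta = 0 the tree follows the mutual information alone; a large beta steers the tree away from expensive edges.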
(ii) Quantifying the trade-off between inference accuracy and cost. We now quantify how the trade-off between the inference accuracy of the MAP estimator in (4) and the total communication cost behaves for different values of the trade-off parameter. To illustrate the trade-off parameterized in the optimization problem in (3), we vary the parameter over its range and plot the accuracy of the MAP estimator and the total cost on the learnt data dependency graph as the red and blue lines, respectively, in Figures 3(d) and 4(d). In particular, we run ASYNC-ALGO and SYNC-ALGO on the generated samples, and then run the max-product algorithm on the learnt data tree to obtain the MAP estimator. We repeat the runs and measure the probability that the MAP estimator on the learnt data tree differs from the MAP estimator on the true data graph, as a metric of inference accuracy. The communication cost on the learnt data tree, averaged over the runs, is measured in the form of (6) and (7) for each algorithm. In Figure 4(d), as the parameter grows, the MAP estimation error increases while the cost decreases. The impact of the parameter on the trade-off appears similar for the two algorithms, as seen in Figures 3(d) and 4(d).
(iii) Impact of data sample size on graph estimation accuracy. Finally, we demonstrate the theoretical findings in Theorems 2 and 3 on the decaying rate of the error probability w.r.t. the number of samples, for various values of the trade-off parameter. For each fixed parameter value, we run both ASYNC-ALGO and SYNC-ALGO repeatedly and measure their error probabilities. In Figures 5(a) and 6(a), we observe that the error probability for every parameter value decays exponentially as the sample size increases, as established in (31) and (36). Interestingly, different choices of the parameter lead to different decaying rates, which can be understood from our analytical characterization of the crossover rates in (30) and (35), simply given by:
where the edge weights of ASYNC-ALGO and SYNC-ALGO are assigned in different forms, yet both depend on the trade-off parameter, as seen in (10) and (20). Some choices of the parameter make the gap between the corresponding edge weights very small, so that wrong edges are more easily estimated from an insufficient number of samples. In our simulation, ASYNC-ALGO with one choice of the parameter shows a markedly higher error probability, while another choice achieves almost zero error probability with far fewer samples, see Figure 5(a). This impact on the error probability is presented in Figures 5(b) and 6(b) for both algorithms: for large parameter values, the error probability generally decays at a higher rate, since the priority given to the inference accuracy is low, leaving less chance of experiencing the crossover event.
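As a toy illustration of why a small weight gap hurts, consider two Bernoulli "edge scores" with means p and q (p > q): the probability that their empirical order flips decays exponentially in the sample size, and faster when the gap is large. All parameters here are illustrative, not taken from the paper's experiments:

```python
import random

def crossover_prob(p, q, n, trials=2000, seed=0):
    """Monte Carlo estimate of the probability that the empirical mean
    of a Bernoulli(q) sample matches or exceeds that of a Bernoulli(p)
    sample (p > q) -- a toy analogue of the edge-weight crossover event."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mp = sum(rng.random() < p for _ in range(n)) / n
        mq = sum(rng.random() < q for _ in range(n)) / n
        hits += mq >= mp
    return hits / trials
```

Comparing `crossover_prob(0.7, 0.3, 5)` against `crossover_prob(0.7, 0.3, 80)` shows the estimate shrinking rapidly with n, mirroring the exponential decay in (31) and (36).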
6 Conclusion
In many multi-agent networked systems, a variety of applications involve distributed in-network statistical inference tasks, such as MAP (maximum a posteriori), exploiting a given knowledge of the statistical dependencies among agents. When agents are spatially separated, running an inference algorithm incurs a non-negligible communication cost due to inevitable message-passing, arising from the difference between the data dependency and physical connectivity graphs. In this paper, we considered a structure learning problem that recovers the statistical dependency from a set of data samples while also accounting for the communication cost incurred by distributed inference algorithms applied to the learnt data graph. To this end, we first formulated an optimization problem formalizing the trade-off between inference accuracy and cost, whose solution chooses a tunable point between them. As the inference task, we studied distributed MAP and its two implementations, ASYNC-MAP and SYNC-MAP, which have different cost generation structures. For ASYNC-MAP, we developed a polynomial-time optimal algorithm, inspired by the problem of finding a maximum weight spanning tree, while for SYNC-MAP we proved that optimal learning is NP-hard and proposed a greedy heuristic. For both algorithms, we then established how the probability that the learnt data graph differs from the ideal one decays as the number of samples grows, using the large deviation principle.
Appendix A Appendix
A.1 Proof of Theorem 1
To prove the NP-hardness of the problem SYNC in (15), we need some assumptions on the per-message communication cost: (a) for all node pairs, and (b) for all distinct node pairs. We assume that the mutual information and the communication cost are defined on the complete graph. As we expressed earlier, for a given graph, the objective function of a tree for a fixed trade-off parameter is
For convenience, we use this notation in the remainder of the paper. Then, the problem SYNC is equivalent to finding a tree
We reduce the known NP-complete problem Exact Cover by 3-Sets, called the X3C problem for short, to the problem SYNC in (15). We first describe the X3C problem.
Exact Cover by 3-Sets (X3C). Given a set of nodes and a collection of 3-element subsets of it, the X3C problem is the decision problem of determining whether there exists a subcollection such that (i) the union of its elements is the whole set and (ii) the intersection of any two of its elements is empty. It is known to be NP-complete.
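As a quick sanity check on the definition, a brute-force decider for X3C can be sketched as follows (our illustration only; since X3C is NP-complete, this enumeration takes exponential time, and the reduction below of course does not rely on it):

```python
from itertools import combinations

def has_exact_cover_3(universe, triples):
    """Brute-force decision for Exact Cover by 3-Sets (X3C): does some
    subcollection of `triples` partition `universe`?"""
    universe = set(universe)
    if len(universe) % 3 != 0:
        return False
    k = len(universe) // 3              # an exact cover uses exactly k triples
    for sub in combinations(triples, k):
        chosen = [set(t) for t in sub]
        if all(len(t) == 3 for t in chosen):
            union = set().union(*chosen) if chosen else set()
            # union equals the universe with 3k elements in total
            # iff the chosen triples are pairwise disjoint and cover it
            if union == universe and sum(len(t) for t in chosen) == len(universe):
                return True
    return False
```

For instance, {1,...,6} with triples {1,2,3}, {4,5,6}, {2,3,4} admits an exact cover, while {1,2,3}, {2,4,5} alone does not.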
To prove the NP-hardness of SYNC in (15), we build a specific (tree) data distribution with for the